TTM Network Administration Documentation
- What’s New In This Release
- TTM Overview
- Unicast And Multicast Network Communication
- Network Setup And Configuration
- Installing And Upgrading TTM And Guardian
- TTM Daemon
TTM Remote Host Daemon And Remote Clients
- Introduction to the TTM Remote Host Daemon
- Remote Mode: Pros and Cons
- Remote Mode Operations
- Network Considerations when Deploying the Remote Host Daemon
- Configuring a Remote Host Daemon
- Configuring the Remote Client
- Compressing Data
- Manually Configuring TCP Window Size
- Testing the Remote Connection
- Remote Host Daemon Failover (Disaster Recovery)
- Advanced Topics
- Maintenance And Troubleshooting
- ttmd.cfg File Reference
Multicast and Hardware Setup
To avoid multicast problems with your Network Interface Cards (NICs), do not use:
- D-link cards
- Linksys cards
- Non-brand-name NICs
- Older NICs
Use only newer, brand-name NICs. Older and cheaper network cards tend to have broken or limited chipsets.
Although many Ethernet chipsets support internal multicast filtering, many implementations are broken or provide only limited filtering logic. If a chipset cannot provide the multicast filtering needed by the node, it forces the operating system device driver to operate the NIC in promiscuous mode. In promiscuous mode, the node's CPU performs the multicast packet filtering in the device driver, which diverts processing cycles from TT applications (e.g., X_TRADER).
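On Linux hosts, you can spot a NIC that the driver has dropped into promiscuous mode by decoding the interface flags the kernel exposes under /sys/class/net. The sketch below is an illustrative diagnostic only, not part of TTM; the IFF_PROMISC and IFF_MULTICAST bit values are taken from the Linux if.h header.

```python
# Illustrative diagnostic -- not part of TTM.
# Decode Linux interface flags to spot NICs running in promiscuous mode.
# Flag bit values come from the Linux <linux/if.h> header.
import os

IFF_PROMISC = 0x100     # NIC accepts all packets (filtering done in the driver)
IFF_MULTICAST = 0x1000  # NIC supports multicast

def decode_flags(flags: int) -> dict:
    """Return the multicast-related state encoded in an interface's flags word."""
    return {
        "promiscuous": bool(flags & IFF_PROMISC),
        "multicast": bool(flags & IFF_MULTICAST),
    }

def scan_interfaces(root: str = "/sys/class/net") -> dict:
    """Map each interface name to its decoded flag state."""
    result = {}
    for name in os.listdir(root):
        with open(os.path.join(root, name, "flags")) as f:
            result[name] = decode_flags(int(f.read().strip(), 16))
    return result
```

An interface reporting `promiscuous: True` outside of packet captures is a sign that its chipset cannot filter the multicast groups the node has joined.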
Cisco Switch Setup
By default, Cisco configures its switches to flood multicast packets to all ports in the same virtual LAN (VLAN), which negatively impacts the TT trading system in two ways:
- It wastes bandwidth on those ports that have not subscribed to the multicast packet (via the numeric group ID).
- It can consume CPU processing speed on all nodes that receive the multicast packet (i.e., in that network segment). Refer to the previous section, Multicast and Hardware Setup, for further information.
However, you can configure Cisco switches to limit multicasts such that the switch sends packets only to those ports connecting to nodes that subscribe to the group ID of that particular multicast packet. Thus, the switch does not flood all ports with multicast packets.
When configuring Cisco switches in this manner, you have the following options:
- Bind multicast groups to specific ports (recommended on smaller networks)
- Enable Internet Group Management Protocol (IGMP) snooping (recommended)
- Enable Cisco Group Multicast Protocol (CGMP)
- Enable Generic Attribute Registration Protocol (GARP) Multicast Registration Protocol (GMRP)
On smaller networks, TT recommends that you statically bind the multicast groups to specific switch ports. You can set this up on any size switch, and it does not require a router with IP multicast enabled. However, whenever a node on the network changes (an add, update, or delete), network personnel must manually update the bindings, and whenever a new multicast group is added, all ports on the switch must be updated. Because this method is manually intensive, it quickly becomes an administrative burden on larger networks, which is why TT recommends static binding only on smaller networks.
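As an illustration, a static multicast binding on a Catalyst switch might look like the following IOS fragment. The VLAN, interfaces, and group are placeholders; note that the multicast MAC address is derived from the group IP (the low 23 bits of 239.1.1.1 mapped into the 0100.5e prefix), and that exact command syntax varies by IOS version.

```
! Bind the multicast MAC for group 239.1.1.1 (0100.5e01.0101) to the
! ports that host subscribing nodes in VLAN 10. Repeat per group.
mac address-table static 0100.5e01.0101 vlan 10 interface GigabitEthernet0/1 GigabitEthernet0/2
```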
For details on configuring the other multicast protocols listed above, refer to Multicast Protocols.
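For reference, IGMP snooping is enabled by default on most Catalyst switches; the illustrative IOS commands below show how to confirm or enable it per VLAN (the VLAN number is a placeholder):

```
! Enable IGMP snooping globally and for VLAN 10
ip igmp snooping
ip igmp snooping vlan 10
! Verify which ports the switch has learned for each multicast group
show ip igmp snooping groups
```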
Cisco Router Setup
Cisco supports two interior multicast protocols relevant to trading networks:
- Distance Vector Multicast Routing Protocol (DVMRP): This protocol is useful when connecting to legacy multicast routing daemons. However, TT recommends that you do not use this protocol. DVMRP supports only the dense mode configuration for IP multicast forwarding.
- Protocol Independent Multicast (PIM): When setting up PIM, you must configure all routers to be in the same mode. PIM supports dense, sparse, and hybrid (dense/sparse) multicast forwarding modes:
- Dense Mode: In dense mode, PIM floods multicast packets for each Source (S), Group (G) tuple to all peer multicast routers. If a peer does not have any networks with clients that subscribe to the specified group, the peer sends a prune message for the (S,G) tuple towards the source. Prune messages time out periodically, after which the next packet received for the (S,G) tuple is again flooded to all peers. TT recommends that you use dense mode when the majority of the network segments in the multicast domain have at least one subscriber to that multicast group.
- Sparse Mode: Sparse mode neither floods multicasts nor requires pruning of unwanted traffic back to the source. Instead, one or more PIM routers (use two for redundancy) are configured as a PIM Rendezvous Point (RP). When a node joins a multicast group, it sends a join message towards the RP. All multicast packets for an (S,G) tuple are routed through the RP until a configured bandwidth threshold is exceeded, after which the RP informs the intermediate routers to build a direct (S,G) forwarding path between the source and subscribed networks, eliminating the need to route all packets through the RP. TT recommends that you use sparse mode for multicast groups that have few subscribing nodes (or network segments) within the entire multicast domain.
- Hybrid - Dense/Sparse Mode: In hybrid mode, PIM consults its RP table and operates in sparse mode for multicast groups that have a registered RP and in dense mode for groups with no registered RP.
TT recommends that you use the PIM protocol and that you configure all PIM interfaces in a multicast domain for hybrid mode. Furthermore, you must enable Cisco AutoRP for automatic distribution of RP mappings to all PIM routers in the multicast domain. This setup ensures that no mismatches occur between dense and sparse mode operation, while allowing some groups to flood in dense mode and requiring other, sparsely utilized groups to use an RP.
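A minimal sketch of this recommended setup on an IOS router might look like the following. The interface names, the Loopback0 RP source, and the scope TTL of 16 are assumptions; adapt them to your topology.

```
! Enable multicast routing globally
ip multicast-routing
!
! Run PIM in hybrid (sparse-dense) mode on each multicast-facing interface
interface GigabitEthernet0/0
 ip pim sparse-dense-mode
!
! AutoRP: announce this router as a candidate RP, and run the mapping
! agent that distributes RP-to-group mappings to all PIM routers
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16
```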