Manifold is a networking infrastructure project with the goal of supporting high-performance end-to-end and multicast publish/subscribe networking. The heart of Manifold is ManifoldNexus, a hyper-connected fabric node. Each ManifoldNexus is fully connected to every other ManifoldNexus in the network, providing worst-case three-hop routing to any node within the same local region. A node in this context is any piece of software that forms part of the Manifold. Remote regions can also be joined together, giving worst-case four-hop routing between any pair of nodes. Local regions connect automatically and require no configuration in situations where the default options are sufficient.
Manifold includes both a C++ API and a Python wrapper.
Using the test Python scripts, data rates of around 2 GB per second are possible within a single machine (Intel i7-5820K: 5,000 messages per second, 400,000-byte messages), with machine-to-machine links typically limited by link speed for large messages.
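The quoted rate follows directly from the message rate and message size; a quick sanity check using the figures above:

```python
# Sanity-check the quoted single-machine throughput figure.
messages_per_second = 5_000
message_size_bytes = 400_000

throughput_bytes = messages_per_second * message_size_bytes
print(throughput_bytes)                # 2000000000 bytes/s
print(throughput_bytes / 1e9, "GB/s")  # 2.0 GB/s
```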
Some Nice Features of Manifold
- Multicast is natively supported in Manifold. Multicast sources are not burdened with the task of replicating data, or even with caring about multicast at all – all of that work is performed by the ManifoldNexus infrastructure.
- Multicast sources know if anyone is subscribed, so they can stop using CPU when a service isn’t required. For example, a multicast source might generate data in multiple formats on different published services. As it is aware of the subscription status of each service, it only needs to prepare data for those services actually in use.
- Manifold performs automatic rate-matching for multicast services. This means that slow subscribers see a subset of the total flow and do not slow down normal-speed subscribers that can keep up with the source. Not every application can make use of this, but where regular loss can be tolerated it provides a simple way for low-power devices to subscribe to high-speed streams.
- A global directory is available to all Manifold nodes. This makes it very easy to locate published services.
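The subscription-awareness described in the second bullet can be sketched as a toy source that only formats data for services with at least one subscriber. All names here are illustrative, not the Manifold API:

```python
class ToySource:
    """Illustrative multicast source that tracks per-service subscriber counts."""

    def __init__(self, services):
        self.subscribers = {s: 0 for s in services}

    def subscribe(self, service):
        self.subscribers[service] += 1

    def unsubscribe(self, service):
        self.subscribers[service] = max(0, self.subscribers[service] - 1)

    def publish_cycle(self, raw_sample):
        # Only spend CPU formatting data for services somebody is listening to.
        out = {}
        for service, count in self.subscribers.items():
            if count > 0:
                out[service] = f"{service}:{raw_sample}"
        return out

src = ToySource(["raw", "jpeg"])
src.subscribe("jpeg")
print(src.publish_cycle(42))  # only the 'jpeg' encoding is produced
```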
There are two types of data service – multicast and end-to-end (E2E).
The multicast service has the concept of a multicast source that generates a stream of data and multicast sinks that subscribe to the stream. It is intrinsically a one-way flow. There can be zero, one or many sinks subscribed to the stream. The multicast source only generates one copy of the data – all replication is performed by the ManifoldNexus nodes in the Manifold. This is ideal for one-way data where multiple sink nodes need access to the data from a source node.
All data in a multicast service has to be acknowledged by the receiver. This allows for rate matching between the source and the sinks. The maximum rate at which a source node can generate data is controlled by the need for the nearest ManifoldNexus to respond with acks to the source node to reopen its window. If a sink is too slow to keep up with the source rate, the nearest ManifoldNexus will drop a message to that node only so that the slow node sees a subset of the total flow. Higher speed sinks will see the full stream, unaffected by the slow sink.
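The rate-matching behaviour can be modelled as a per-sink bounded queue: when a slow sink’s queue is full, the nexus drops the message for that sink only, while faster sinks still receive everything. A minimal simulation (not Manifold code; the window and drain parameters are illustrative):

```python
from collections import deque

def deliver(stream, window, drain_per_msg):
    """Simulate one sink behind a nexus with a bounded per-sink queue.

    window: max messages buffered for this sink (its ack window)
    drain_per_msg: messages the sink consumes per message published
    """
    queue, received, credit = deque(), [], 0.0
    for msg in stream:
        if len(queue) < window:
            queue.append(msg)     # sink has window space: enqueue
        # else: nexus drops msg for this sink only
        credit += drain_per_msg
        while credit >= 1.0 and queue:
            received.append(queue.popleft())
            credit -= 1.0
    received.extend(queue)        # sink drains its queue after the burst
    return received

stream = list(range(100))
fast = deliver(stream, window=4, drain_per_msg=1.0)   # keeps up: full stream
slow = deliver(stream, window=4, drain_per_msg=0.25)  # sees only a subset
print(len(fast), len(slow))
```

Note that the slow sink still receives an in-order subset of the stream; nothing is buffered indefinitely on its behalf, so the fast sink is unaffected.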
The E2E data service is a point-to-point communications link between two nodes. One node is an E2E service node, the other is an E2E client node. A client node can send messages to a service node and receive responses from that node. Many clients can use the same service, as the service node is typically stateless.
ManifoldNexus uses hot-potato routing for E2E messages: delay is minimized, but ManifoldNexus imposes no flow control.
A common case is that a node generates a multicast stream but also publishes an E2E service that allows for feedback and control of the source. A multicast sink node receiving the multicast stream can have an E2E client to talk back to the multicast source node.
The Manifold Directory
An integral component of the Manifold is the directory of published services. ManifoldNexus nodes in the Manifold exchange directory updates in order to ensure a globally consistent directory view. In normal operation, other nodes do not need the directory. In order to find a published service, they just ask their ManifoldNexus to look up the topic name and return the globally unique identifier (UID – typically an extended MAC address) that identifies the publishing node.
However, it may also be desirable for a node to browse the available services, so there is a mechanism that allows a node to request a copy of the directory and search for any type of service that is currently active.
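The lookup-versus-browse distinction can be sketched with a toy directory mapping topic names to publisher UIDs. The structure and the example UIDs are illustrative; the real directory format is not documented here:

```python
# Toy directory: topic name -> UID of the publishing node
# (UIDs shown in an extended-MAC-address style, as illustration only).
directory = {
    "videofeed": "00:11:22:33:44:55:00:01",
    "videocontrol": "00:11:22:33:44:55:00:01",
    "sensors": "00:11:22:33:44:55:00:02",
}

def lookup(topic):
    """Normal operation: resolve one topic name to its publisher's UID."""
    return directory.get(topic)

def browse(substring):
    """Browsing: take a copy of the directory and search it."""
    return sorted(t for t in directory if substring in t)

print(lookup("sensors"))
print(browse("video"))  # ['videocontrol', 'videofeed']
```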
Within a local region (typically a single LAN), Manifolds form automatically without user intervention. However, to connect multiple local regions into a larger region, static tunnels must be configured. As with any Manifold link, these can be encrypted and authenticated. One end is the source and the other the destination. The source tries to call the destination on a specified address and port. Typically, the destination LAN’s router will port-forward the connection request to the designated ManifoldNexus. Alternatively, a VPN can be used to provide the required visibility between regions.
ManifoldNexus provides the message switching and directory management functions used to construct the Manifold. At least one ManifoldNexus must be present in a local region. If only one ManifoldNexus is running in a region, all nodes within that region are at most two hops apart. At the other extreme, where every machine in the local region is running a ManifoldNexus, there will in general be three hops between nodes unless the nodes are connected to the same ManifoldNexus. The specific ManifoldNexus to which a node connects can be controlled via the node’s basic setup dialog, available via the node’s window (if run in window mode) or via ManifoldManager.
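The hop counts above follow from the full mesh between ManifoldNexus nodes. A small model, assuming one hop per link (node to nexus, nexus to nexus, and one tunnel link between regions):

```python
def hops(src_nexus, dst_nexus, cross_region=False):
    """Worst-case hops between two nodes attached to the given nexus IDs."""
    h = 2                  # src -> its nexus, plus dst's nexus -> dst
    if src_nexus != dst_nexus:
        h += 1             # nexus -> nexus: always one hop in a full mesh
    if cross_region:
        h += 1             # tunnel link between the regional meshes
    return h

print(hops("A", "A"))                     # same nexus: 2 hops
print(hops("A", "B"))                     # different nexus: 3 hops
print(hops("A", "B", cross_region=True))  # remote regions: 4 hops
```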
ManifoldNexus nodes automatically discover each other, and other nodes in the Manifold automatically discover a ManifoldNexus to connect to. It is also possible to force nodes to connect to specific ManifoldNexus nodes. For example, a list of up to three ManifoldNexus nodes can be configured in order to implement resilience with defined traffic flows in each case. Automatic resilience will find another ManifoldNexus, but its location may not be ideal for the actual traffic flows. ManifoldManager can be used to configure this for all non-ManifoldNexus nodes.
ManifoldManager is a management node that can be used to configure and monitor other nodes within the Manifold. Note that a node must be running in order to be managed, and there must be at least one ManifoldNexus running in the local region.