The QNX Neutrino networking stack is called io-pkt. It replaces the previous generation of the stack, io-net, and provides better performance and a closer alignment with the NetBSD networking code base.
The io-pkt manager is intended to be a drop-in replacement for io-net for those who deal with the stack from an application point of view. It includes the stack variants, associated utilities, protocols, libraries, and drivers.
The stack variants are:
- io-pkt-v4: an IPv4-only version of the stack, with no encryption or Wi-Fi capability, for a reduced footprint
- io-pkt-v4-hc: an IPv4 version of the stack with encryption and Wi-Fi capability, including hardware-accelerated cryptography
- io-pkt-v6-hc: an IPv6 version of the stack (which also supports IPv4) with encryption and Wi-Fi capability, including hardware-accelerated cryptography
In this guide, we use “io-pkt” to refer to all the stack variants. When you start the stack, use the appropriate variant (io-pkt isn't a symbolic link to any of them).
We've designed io-pkt to follow as closely as possible the NetBSD networking stack code base and architecture. This provides an optimal path between the IP protocol and drivers, tightly integrating the IP layer with the rest of the stack.
The io-pkt stack isn't backward-compatible with io-net. However, both can exist on the same system. For more information, see the Migrating from io-net appendix in this guide.
The io-pkt implementation makes significant changes to the QNX Neutrino stack architecture. For example, checksumming on loopback interfaces is now turned off by default, as these sysctl settings show:

# sysctl -a | grep do_loopback_cksum
net.inet.ip.do_loopback_cksum = 0
net.inet.tcp.do_loopback_cksum = 0
net.inet.udp.do_loopback_cksum = 0
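If you need checksumming on loopback (e.g., when debugging checksum handling), you can turn it back on at run time with sysctl -w. A minimal example, for the TCP setting:

# sysctl -w net.inet.tcp.do_loopback_cksum=1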
The io-pkt stack is very similar in architecture to other component subsystems inside the Neutrino operating system. At the bottom layer are drivers that provide the mechanism for passing data to, and receiving data from, the hardware. The drivers hook into a multi-threaded layer-2 component (which also provides fast forwarding and bridging) that ties them together and provides a unified interface into the layer-3 component, which in turn handles the individual IP and upper-layer protocol-processing components (TCP and UDP).
In Neutrino, a resource manager forms a layer on top of the stack and acts as the message-passing intermediary between the stack and user applications. It provides a standardized interface based on open(), read(), write(), and ioctl() that uses a message stream to communicate with networking applications. Networking applications link with the socket library, which converts the message-passing interface exposed by the stack into the standard BSD-style socket API, the standard for most networking code today.
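For example, here's a minimal sketch of a client that uses the BSD socket API; the address and port are placeholders. Behind each call, the socket library exchanges messages with the stack's resource manager:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa;
    int s = socket(AF_INET, SOCK_STREAM, 0);   /* an open() on the stack's resource manager */
    if (s == -1) {
        perror("socket");
        return 1;
    }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(7);                    /* placeholder: echo service */
    sa.sin_addr.s_addr = inet_addr("127.0.0.1");

    if (connect(s, (struct sockaddr *)&sa, sizeof(sa)) == -1) {
        perror("connect");
        close(s);
        return 1;
    }
    write(s, "hello", 5);                      /* becomes a message to io-pkt */
    close(s);
    return 0;
}

On QNX Neutrino, you link such programs against the socket library (-lsocket).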
One big difference between this stack and io-net is that you can't currently decouple the layer-2 component from the IP stack. This was a trade-off we made to increase performance at the expense of some versatility. We might enable this decoupling in the future if there's enough demand.
In addition to the socket-level API, the stack provides other programmatic interfaces for protocols or filtering. These interfaces, used directly by Transparent Distributed Processing (TDP, also known as Qnet), are very different from those provided by io-net, so anyone who used the io-net interfaces will have to rewrite that code for io-pkt.
At the driver layer, there are interfaces for Ethernet traffic (used by all Ethernet drivers), and an interface into the stack for 802.11 management frames from wireless drivers. The hc variants of the stack also include a separate hardware crypto API that allows the stack to use a crypto offload engine when it's encrypting or decrypting data for secure links. You can load drivers (built as DLLs for dynamic linking and prefixed with devnp-) into the stack using the -d option to io-pkt.
APIs providing connection into the data flow at either the Ethernet or IP layer allow protocols to coexist within the stack process. Protocols (such as Qnet) are also built as DLLs; they're prefixed with lsm (loadable shared module), link directly into either the IP or Ethernet layer, and run within the stack context. You load them into the stack using the -p option. The tcpip protocol (-ptcpip) is a special option that the stack recognizes but doesn't link a protocol module for (since the IP stack is already present); you still use -ptcpip to pass additional parameters that apply to the IP protocol layer (e.g., -ptcpip prefix=/alt makes the IP stack register /alt/dev/socket as the name of its resource manager).
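For example, to start the IPv4 stack with the mpc85xx driver used later in this chapter, load Qnet, and move the stack's resource manager under /alt (the driver options shown are illustrative):

io-pkt-v4 -dmpc85xx syspage=1 -pqnet -ptcpip prefix=/alt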
A protocol requiring interaction from an application sitting outside of the stack process may include its own resource manager infrastructure (this is what Qnet does) to allow communication and configuration to occur.
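To illustrate the mechanism, here's a minimal sketch of a standalone resource manager built with the standard QNX dispatch framework; the pathname /dev/myproto is hypothetical, and a real protocol module would hook into the stack's own infrastructure rather than run as a separate process:

#include <stdlib.h>
#include <string.h>
#include <sys/dispatch.h>
#include <sys/iofunc.h>
#include <sys/stat.h>

static resmgr_connect_funcs_t connect_funcs;
static resmgr_io_funcs_t io_funcs;
static iofunc_attr_t attr;

int main(void)
{
    dispatch_t *dpp;
    dispatch_context_t *ctp;
    resmgr_attr_t rattr;

    if ((dpp = dispatch_create()) == NULL)
        return EXIT_FAILURE;

    memset(&rattr, 0, sizeof(rattr));

    /* Start with the default POSIX handlers for open(), read(), write(), ... */
    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init(&attr, S_IFNAM | 0666, NULL, NULL);

    /* Register a pathname; clients reach us through open(), ioctl(), etc. */
    if (resmgr_attach(dpp, &rattr, "/dev/myproto", _FTYPE_ANY, 0,
                      &connect_funcs, &io_funcs, &attr) == -1)
        return EXIT_FAILURE;

    ctp = dispatch_context_alloc(dpp);
    while (1) {
        if ((ctp = dispatch_block(ctp)) == NULL)
            return EXIT_FAILURE;
        dispatch_handler(ctp);
    }
}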
In addition to drivers and protocols, the stack also includes hooks for packet filtering. The main interfaces supported for filtering are:
- the Berkeley Packet Filter (BPF) interface
- the pfil interface
For more information, see the Packet Filtering chapter.
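As a taste of the BPF interface, here's a minimal capture sketch; the device node /dev/bpf0 and interface name en0 are assumptions for illustration, and error handling is abbreviated:

#include <fcntl.h>
#include <net/bpf.h>
#include <net/if.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct ifreq ifr;
    u_int blen = 0;
    char *buf;
    ssize_t n;

    int fd = open("/dev/bpf0", O_RDWR);                  /* device path is an assumption */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    memset(&ifr, 0, sizeof(ifr));
    strlcpy(ifr.ifr_name, "en0", sizeof(ifr.ifr_name));  /* interface name is an assumption */
    if (ioctl(fd, BIOCSETIF, &ifr) == -1) {              /* attach to the interface */
        perror("BIOCSETIF");
        return 1;
    }

    if (ioctl(fd, BIOCGBLEN, &blen) == -1 ||             /* reads must use the BPF buffer size */
        (buf = malloc(blen)) == NULL)
        return 1;

    /* Each read() returns one or more packets, each prefixed with a bpf_hdr. */
    n = read(fd, buf, blen);
    if (n > 0)
        printf("captured %zd bytes of packet data\n", n);

    free(buf);
    close(fd);
    return 0;
}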
The default mode of operation is for io-pkt to create one thread per CPU. The io-pkt stack is fully multi-threaded at layer 2. However, only one thread may acquire the “stack context” for upper-layer packet processing. If multiple interrupt sources require servicing at the same time, these may be serviced by multiple threads. Only one thread will service a particular interrupt source at any point in time. Typically an interrupt on a network device indicates that there are packets to be received. The same thread that handles the receive processing may later transmit the received packets out another interface. Examples of this are layer-2 bridging and the “ipflow” fastforwarding of IP packets.
The stack uses a thread pool to service events that are generated from other parts of the system. These events include hardware events (such as receive interrupts), stack events, and other events generated elsewhere in the system.
You can use a command-line option to the driver to control the priority of threads that receive packets. Client connection requests are handled in a floating priority mode (i.e. the thread priority matches that of the client application thread accessing the stack resource manager).
Once a thread receives an event, it examines the event type to see whether it's a hardware event, a stack event, or some other event. If it's a hardware event, the thread services the hardware and, if no other thread currently holds the stack context, takes over that context itself to process the received packets.
This capability of having a thread change directly from being a hardware-servicing thread to being the stack thread eliminates context switching and greatly improves the receive performance for locally terminated IP flows.
If io-pkt runs out of threads, it sends a message to slogger, and anything that requires a thread blocks until one becomes available. You can use command-line options to specify the maximum and minimum number of threads for io-pkt.
There are a couple of ways that you can change the priority of the threads responsible for receiving packets from the hardware. You can pass the rx_pulse_prio option to the stack to set the default thread priority. For example:
io-pkt-v4 -ptcpip rx_pulse_prio=50
This makes all the receive threads run at priority 50. The current default for these threads is priority 21.
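To check the result, you can list the stack's threads and their priorities with the pidin utility; for example (the output format varies with the pidin version):

pidin -P io-pkt-v4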
The second mechanism lets you change the priority on a per-interface basis. This is an option passed to the driver and, as such, is supported only if the driver supports it. When the driver registers for its receive interrupt, it can specify a priority for the pulse that is returned from the ISR. This pulse priority is what the thread will use when running. Here's some sample code from the devn-mpc85xx.so Ethernet driver:
if ((rc = interrupt_entry_init(&mpc85xx->inter_rx, 0, NULL,
                               cfg->priority)) != EOK) {
    log(LOG_ERR, "%s(): interrupt_entry_init(rx) failed: %d",
        __FUNCTION__, rc);
    mpc85xx_destroy(mpc85xx, 9);
    return rc;
}
Driver-specific thread priorities are assigned on a per-interface basis. The stack normally creates one thread per CPU so that it scales appropriately on an SMP system. If you use an interface-specific priority with multiple interfaces, you must make the stack create one thread per interface so that the option is picked up and used properly; you do this with the -t option to the stack.
For example, to have the stack start up and receive packets on one interface at priority 20 and on a second interface at priority 50 on a single-processor system, you would use the following command-line options:
io-pkt-v4 -t2 -dmpc85xx syspage=1,priority=20,pci=0 \
          -dmpc85xx syspage=1,priority=50,pci=1
If you've specified a per-interface priority, and there are more interfaces than threads, the stack sends a warning to slogger. If there are insufficient threads present, the per-interface priority is ignored (but the rx_pulse_prio option is still honored).
The actual options for setting the priority and selecting an individual card depend on the device driver; see the driver documentation for specific option information.
Legacy io-net drivers create their own receive thread, and therefore don't require the -t option to be used if they support the priority option. These drivers use the devnp-shim.so shim driver to allow interoperability with the io-pkt stack.
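For example, assuming a legacy devn-speedo.so driver is installed in /lib/dll, an invocation along these lines loads it through the shim (see the migration appendix for the specifics of loading io-net drivers):

io-pkt-v4 -d /lib/dll/devn-speedo.so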
The io-pkt manager is the main component; the other core components are the associated utilities, protocols, libraries, and drivers described above.
QNX Neutrino Core Networking also includes applications, services, and libraries that interface with the stack through the socket library and therefore aren't directly dependent on the Core components. That is, they use the standard BSD socket interfaces (the BSD socket API, routing sockets, PF_KEY, and raw sockets).
If you want to get the source code for io-pkt and other components, go to Foundry27, the community portal for QNX developers (http://community.qnx.com/sf/sfmain/do/home).