As with other service-providing processes in QNX Neutrino, the networking services execute outside the kernel. Developers are presented with a single unified interface, regardless of the configuration and number of networks involved.
This architecture allows network drivers to be started and stopped dynamically, and protocols such as Qnet and TCP/IP to run together in any combination.
Our native network subsystem consists of the network manager executable (io-pkt-v4, io-pkt-v4-hc, or io-pkt-v6-hc), plus one or more shared library modules. These modules can include protocols (e.g. lsm-qnet.so) and drivers (e.g. devnp-speedo.so).
The io-pkt* component is the active executable within the network subsystem. Acting as a kind of packet redirector/multiplexer, io-pkt* is responsible for loading protocol and driver modules based on the configuration given to it on its command line (or via the mount command after it's started).
Employing a zero-copy architecture, the io-pkt* executable efficiently loads multiple networking protocols or drivers (e.g. lsm-qnet.so) on the fly; these modules are shared objects that install into io-pkt*.
The io-pkt stack is very similar in architecture to other component subsystems inside the Neutrino operating system. At the bottom layer are drivers that provide the mechanism for passing data to and receiving data from the hardware. The drivers hook into a multi-threaded layer-2 component (which also provides fast forwarding and bridging capability) that ties them together and provides a unified interface for directing packets into the protocol-processing components of the stack. This includes, for example, handling individual IP and upper-layer protocols such as TCP and UDP.
In Neutrino, a resource manager forms a layer on top of the stack. The resource manager acts as the message-passing intermediary between the stack and user applications. It provides a standardized open()/read()/write()/ioctl() interface that uses a message stream to communicate with networking applications. Networking applications written by the user link with the socket library. The socket library converts the message-passing interface exposed by the stack into a standard BSD-style socket API, which is the standard for most networking code today.
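As an illustration, the following is a minimal sketch of how an application uses that BSD-style socket API; the address and port are placeholders, and on QNX Neutrino the program would be linked against the socket library (e.g. with -l socket).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* socket() results in an open() message to the stack's resource manager */
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == -1) {
        perror("socket");
        return EXIT_FAILURE;
    }

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(7);                 /* TCP echo port, as an example */
    inet_pton(AF_INET, "10.0.0.1", &sin.sin_addr);

    /* connect(), write(), and read() are all carried to io-pkt* as messages */
    if (connect(s, (struct sockaddr *)&sin, sizeof sin) == -1) {
        perror("connect");
        close(s);
        return EXIT_FAILURE;
    }

    const char msg[] = "hello\n";
    write(s, msg, sizeof msg - 1);

    char reply[128];
    ssize_t n = read(s, reply, sizeof reply);
    printf("received %zd bytes\n", n);

    close(s);
    return EXIT_SUCCESS;
}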
At the driver layer, there are interfaces for Ethernet traffic (used by all Ethernet drivers), and an interface into the stack for 802.11 management frames from wireless drivers. The hc variants of the stack also include a separate hardware crypto API that allows the stack to use a crypto offload engine when it's encrypting or decrypting data for secure links. You can load drivers (built as DLLs for dynamic linking and prefixed with devnp- for new-style drivers, and devn- for legacy drivers) into the stack using the -d option to io-pkt.
APIs providing connection into the data flow at either the Ethernet or IP layer allow protocols to coexist within the stack process. Protocols (such as Qnet) are also built as DLLs. A protocol links directly into either the IP or Ethernet layer and runs within the stack context. Protocol modules are prefixed with lsm (loadable shared module), and you load them into the stack using the -p option. The tcpip protocol (-ptcpip) is a special option that the stack recognizes but doesn't load a protocol module for (since the IP stack is already built in). You still use the -ptcpip option to pass additional parameters to the stack that apply to the IP protocol layer (e.g. -ptcpip prefix=/alt to have the IP stack register /alt/dev/socket as the name of its resource manager).
A protocol that requires interaction with an application outside the stack process may include its own resource manager infrastructure (as Qnet does) to allow communication and configuration to occur.
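As a rough illustration of what such resource manager infrastructure looks like, here is a minimal QNX resource manager skeleton built on the standard dispatch/iofunc layers; the pathname /dev/myproto is hypothetical, and a real protocol such as Qnet registers its own paths and custom handlers.

#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

static resmgr_connect_funcs_t connect_funcs;
static resmgr_io_funcs_t      io_funcs;
static iofunc_attr_t          attr;

int main(void)
{
    dispatch_t         *dpp;
    dispatch_context_t *ctp;
    resmgr_attr_t       resmgr_attr;

    if ((dpp = dispatch_create()) == NULL)
        return EXIT_FAILURE;

    memset(&resmgr_attr, 0, sizeof resmgr_attr);

    /* Start with the default POSIX-layer handlers for open(), read(),
       write(), devctl(), and so on; a real protocol would override some. */
    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init(&attr, S_IFNAM | 0666, NULL, NULL);

    /* Register a (hypothetical) name in the pathname space so that outside
       applications can reach the protocol for communication and configuration. */
    if (resmgr_attach(dpp, &resmgr_attr, "/dev/myproto", _FTYPE_ANY, 0,
                      &connect_funcs, &io_funcs, &attr) == -1)
        return EXIT_FAILURE;

    ctp = dispatch_context_alloc(dpp);
    while (1) {
        if ((ctp = dispatch_block(ctp)) == NULL)
            return EXIT_FAILURE;
        dispatch_handler(ctp);
    }
}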
In addition to drivers and protocols, the stack also includes hooks for packet filtering. The main interfaces supported for filtering are the Berkeley Packet Filter (BPF) interface and the Packet Filter (PF) interface.
For more information, see the Packet Filtering and Firewalling chapter of the Neutrino Core Networking User's Guide.
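As a hedged illustration of the BPF side, the following sketch opens the bpf device and binds it to an interface for raw packet capture; the cloning device name /dev/bpf and the interface name en0 are assumptions that may need adjusting for the target system.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/bpf.h>

int main(void)
{
    struct ifreq ifr;
    unsigned int buflen;

    int fd = open("/dev/bpf", O_RDONLY);
    if (fd == -1) {
        perror("open /dev/bpf");
        return EXIT_FAILURE;
    }

    /* Ask the stack how large the capture read buffer must be. */
    if (ioctl(fd, BIOCGBLEN, &buflen) == -1) {
        perror("BIOCGBLEN");
        return EXIT_FAILURE;
    }

    /* Bind the BPF descriptor to a network interface ("en0" is a placeholder). */
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "en0", sizeof ifr.ifr_name - 1);
    if (ioctl(fd, BIOCSETIF, &ifr) == -1) {
        perror("BIOCSETIF");
        return EXIT_FAILURE;
    }

    /* Each read() returns one or more packets framed with struct bpf_hdr. */
    char *buf = malloc(buflen);
    ssize_t n = read(fd, buf, buflen);
    printf("captured %zd bytes of raw traffic\n", n);

    free(buf);
    close(fd);
    return EXIT_SUCCESS;
}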
The default mode of operation is for io-pkt to create one thread per CPU. The io-pkt stack is fully multi-threaded at layer 2. However, only one thread may acquire the “stack context” for upper-layer packet processing. If multiple interrupt sources require servicing at the same time, these may be serviced by multiple threads. Only one thread will be servicing a particular interrupt source at any point in time. Typically an interrupt on a network device indicates that there are packets to be received. The same thread that handles the receive processing may later transmit the received packets out another interface. Examples of this are layer-2 bridging and the “ipflow” fastforwarding of IP packets.
The stack uses a thread pool to service events that are generated from other parts of the system. These events may be timeouts, ISR events, or other events.
You can use a command-line option to the driver to control the priority of the thread that receives packets. Client connection requests are handled in a floating priority mode (i.e. the thread priority matches that of the client application thread accessing the stack resource manager).
Once a thread receives an event, it examines the event type to see whether it's a hardware event, a stack event, or an “other” event, and services it accordingly.
This capability of having a thread change directly from being a hardware-servicing thread to being the stack thread eliminates context switching and greatly improves the receive performance for locally terminated IP flows.
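The following is a conceptual sketch only (not io-pkt source, and every name in it is hypothetical): several worker threads service hardware events, but only the one that acquires the stack-context lock goes on to do the upper-layer protocol processing.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t stack_context = PTHREAD_MUTEX_INITIALIZER;

static void process_upper_layers(int worker)
{
    printf("worker %d: acting as the stack thread (IP/TCP/UDP processing)\n", worker);
}

static void *worker(void *arg)
{
    int id = (int)(intptr_t)arg;

    /* A receive interrupt fired: do the layer-2 work and queue the packets. */
    printf("worker %d: servicing hardware, queuing received packets\n", id);

    /* Try to become the stack thread; if another worker already holds the
       context, the queued packets are left for that thread to pick up. */
    if (pthread_mutex_trylock(&stack_context) == 0) {
        process_upper_layers(id);
        pthread_mutex_unlock(&stack_context);
    } else {
        printf("worker %d: stack context busy, packets left on the queue\n", id);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];   /* e.g. one thread per CPU */

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}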
The networking protocol module is responsible for implementing the details of a particular protocol (e.g. Qnet). Each protocol component is packaged as a shared object (e.g. lsm-qnet.so). One or more protocol components may run concurrently.
For example, the following line from a buildfile shows io-pkt-v4 loading the Qnet protocol via its -p protocol command-line option:
io-pkt-v4 -dne2000 -pqnet
The io-pkt* managers include the TCP/IP stack.
Qnet is the QNX Neutrino native networking protocol. Its main purpose is to extend the OS's powerful message-passing IPC transparently over a network of microkernels.
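For example, because Qnet makes remote nodes visible under /net, ordinary POSIX calls reach resource managers running on other machines; in the sketch below, the node name othernode and the device path are placeholders.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    /* open() on another node's pathname space; Qnet carries the connect
       and I/O messages across the network transparently. */
    int fd = open("/net/othernode/dev/ser1", O_RDWR);
    if (fd == -1) {
        perror("open remote device");
        return EXIT_FAILURE;
    }

    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);   /* serviced by the remote driver */
    printf("read %zd bytes from the remote serial port\n", n);

    close(fd);
    return EXIT_SUCCESS;
}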
Qnet also provides Quality of Service policies to help ensure reliable network transactions.
For more information on the Qnet and TCP/IP protocols, see the Native Networking (Qnet) and TCP/IP Networking chapters in this book.
The network driver module is responsible for managing the details of a particular network adaptor (e.g. an NE-2000 compatible Ethernet controller). Each driver is packaged as a shared object and installs into the io-pkt* component.
Once io-pkt* is running, you can dynamically load drivers at the command line using the mount command. For example, the following commands start io-pkt-v6-hc and then mount the driver for the Broadcom 57xx chip set adapter:
io-pkt-v6-hc &
mount -T io-pkt devnp-bge.so
All network device drivers are shared objects whose names are of the form devnp-driver.so.
The io-pkt* manager can also load legacy io-net drivers. The names of these drivers start with devn-.
Once the shared object is loaded, io-pkt* initializes it. The driver and io-pkt* are then effectively bound together: the driver calls into io-pkt* (for example, when packets arrive from the interface), and io-pkt* calls into the driver (for example, when packets need to be sent from an application to the interface).
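Conceptually, that binding amounts to an exchange of callbacks; the sketch below is purely illustrative, with hypothetical names rather than the real io-pkt driver interface.

#include <stdio.h>

/* Directions of the calls once the driver is loaded and initialized. */
struct driver_ops {
    void (*transmit)(const void *pkt, unsigned len);   /* io-pkt* -> driver */
};

struct stack_ops {
    void (*receive)(const void *pkt, unsigned len);    /* driver -> io-pkt* */
};

static void fake_driver_transmit(const void *pkt, unsigned len)
{
    printf("driver: putting %u bytes on the wire\n", len);
}

static void fake_stack_receive(const void *pkt, unsigned len)
{
    printf("stack: %u bytes handed up for protocol processing\n", len);
}

int main(void)
{
    /* "Loading" the driver binds the two components to each other. */
    struct driver_ops drv = { fake_driver_transmit };
    struct stack_ops  stk = { fake_stack_receive };

    stk.receive("\x01\x02", 2);        /* a packet arrives from the interface */
    drv.transmit("\x01\x02\x03", 3);   /* an application sends data out */

    return 0;
}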
To unload a legacy io-net driver, you can use the umount command. For example:
umount /dev/io-net/en0
To unload a new-style driver or a legacy io-net driver, use the ifconfig destroy command:
ifconfig bge0 destroy
For more information on network device drivers, see their individual utility pages (devn-*, devnp-*) in the Utilities Reference.