
Overview

What's new in the networking stack?

The QNX Neutrino networking stack is called io-pkt. It replaces the previous generation of the stack, io-net, and provides several benefits over it, described below.

The io-pkt manager is intended to be a drop-in replacement for io-net for those who deal with the stack from an outside application's point of view. It includes stack variants, associated utilities, protocols, libraries, and drivers.

The stack variants are:

io-pkt-v4
IPv4 version of the stack with no encryption or Wi-Fi capability built in. This is a “reduced footprint” version of the stack that omits the encryption, Wi-Fi, and hardware-cryptography support found in the hc variants.
io-pkt-v4-hc
IPv4 version of the stack that has full encryption and Wi-Fi capability built in and includes hardware-accelerated cryptography capability (Fast IPsec).
io-pkt-v6-hc
IPv6 version of the stack (includes IPv4 as part of v6) that has full encryption and Wi-Fi capability, also with hardware-accelerated cryptography.

Note: In this guide, we use io-pkt to refer to all the stack variants. When you start the stack, use the appropriate variant (io-pkt isn't a symbolic link to any of them).
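
For example, to start the IPv6 variant of the stack with a single Ethernet driver (the driver shown here is illustrative; use the one that matches your hardware):

io-pkt-v6-hc -dmpc85xx &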

We've designed io-pkt to follow the NetBSD networking stack code base and architecture as closely as possible. This provides an optimal path between the IP protocol and drivers, tightly integrating the IP layer with the rest of the stack.


Note: The io-pkt stack isn't backward-compatible with io-net. However, both can exist on the same system. For more information, see the Migrating from io-net appendix in this guide.

The io-pkt implementation makes significant changes to the QNX Neutrino stack architecture.

Architecture of io-pkt

The io-pkt stack is very similar in architecture to other component subsystems inside the Neutrino operating system. At the bottom layer are drivers that provide the mechanism for passing data to, and receiving data from, the hardware. The drivers hook into a multi-threaded layer-2 component (which also provides fast forwarding and bridging) that ties them together and provides a unified interface into the layer-3 component. The layer-3 component in turn handles the individual IP and upper-layer protocol-processing components (TCP and UDP).

In Neutrino, a resource manager forms a layer on top of the stack. The resource manager acts as the message-passing intermediary between the stack and user applications. It provides a standardized interface involving open(), read(), write(), and ioctl() that uses a message stream to communicate with networking applications. User-written networking applications link against the socket library, which converts the message-passing interface exposed by the stack into the standard BSD-style socket API, the standard for most networking code today.
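
For example, a networking application needs nothing io-pkt-specific; the following minimal TCP client (the address and port are arbitrary placeholders) uses only standard BSD socket calls and, on Neutrino, links against the socket library with -lsocket:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd;

    /* Each of these calls becomes a message to the stack's
       resource manager, via the socket library. */
    if ((fd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                     /* placeholder port */
    addr.sin_addr.s_addr = inet_addr("10.0.0.1"); /* placeholder host */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("connect");
        close(fd);
        return 1;
    }

    write(fd, "hello", 5);
    close(fd);
    return 0;
}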

One big difference between this stack and io-net is that it isn't currently possible to decouple the layer-2 component from the IP stack. This is a trade-off that we made to increase performance at the expense of some versatility. We might look at enabling this decoupling in the future if there's enough demand.

In addition to the socket-level API, the stack provides programmatic interfaces for other protocols and for packet filtering. These interfaces (used directly by Transparent Distributed Processing, or TDP, also known as Qnet) are very different from those provided by io-net, so anyone who used the io-net interfaces will have to rewrite that code for io-pkt.


Details of the io-pkt architecture


A detailed view of the io-pkt architecture.

At the driver layer, there are interfaces for Ethernet traffic (used by all Ethernet drivers) and an interface into the stack for 802.11 management frames from wireless drivers. The hc variants of the stack also include a separate hardware crypto API that allows the stack to use a crypto offload engine when it's encrypting or decrypting data for secure links. You can load drivers (built as DLLs for dynamic linking and prefixed with devnp-) into the stack using the -d option to io-pkt.

APIs providing connection into the data flow at either the Ethernet or IP layer allow protocols to coexist within the stack process. Protocols (such as Qnet) are also built as DLLs. A protocol links directly into either the IP or Ethernet layer and runs within the stack context. Protocol modules are prefixed with lsm (loadable shared module), and you load them into the stack using the -p option. The tcpip protocol (-ptcpip) is a special option that the stack recognizes but doesn't load a protocol module for (since the IP stack is already present). You still use the -ptcpip option to pass additional parameters that apply to the IP protocol layer (e.g. -ptcpip prefix=/alt to get the IP stack to register /alt/dev/socket as the name of its resource manager).
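
For example, you might start a stack instance that loads one Ethernet driver and the Qnet protocol module like this (the driver is illustrative, and we're assuming lsm-qnet.so is in the stack's DLL search path):

io-pkt-v4-hc -dmpc85xx -pqnet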

A protocol that requires interaction with an application outside the stack process may include its own resource-manager infrastructure (as Qnet does) to allow communication and configuration to occur.

In addition to drivers and protocols, the stack also includes hooks for packet filtering. The main interfaces supported for filtering are:

Berkeley Packet Filter (BPF) interface
A socket-level interface that lets you read and write, but not modify or block, packets. You access it by using a socket interface at the application layer (see http://en.wikipedia.org/wiki/Berkeley_Packet_Filter). This is the interface of choice for basic, raw packet interception and transmission; it gives applications outside of the stack process domain access to raw data streams (see the sketch after this list).
Packet Filter (PF) interface
A read/write/modify/block interface that gives complete control over which packets are received by or transmitted from the upper layers and is more closely related to the io-net filter API.

For more information, see the Packet Filtering chapter.
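
Here's a minimal sketch of the BPF approach: it opens a BPF device, binds it to an interface, and reads raw frames. The device path (/dev/bpf) and interface name (en0) are assumptions; adjust them for your system.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <net/bpf.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    u_int blen;
    char *buf;
    int fd, n;

    if ((fd = open("/dev/bpf", O_RDWR)) == -1) {
        perror("open /dev/bpf");
        return 1;
    }

    /* Bind the BPF device to a network interface. */
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "en0", sizeof(ifr.ifr_name) - 1);
    if (ioctl(fd, BIOCSETIF, &ifr) == -1) {
        perror("BIOCSETIF");
        return 1;
    }

    /* Reads must use the device's buffer size. */
    if (ioctl(fd, BIOCGBLEN, &blen) == -1 || (buf = malloc(blen)) == NULL) {
        perror("BIOCGBLEN/malloc");
        return 1;
    }

    /* Each read() returns one or more packets, each preceded
       by a struct bpf_hdr. */
    if ((n = read(fd, buf, blen)) > 0)
        printf("captured %d bytes of raw packet data\n", n);

    close(fd);
    return 0;
}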

Threading model

The default mode of operation is for io-pkt to create one thread per CPU. The io-pkt stack is fully multi-threaded at layer 2; however, only one thread may acquire the “stack context” for upper-layer packet processing. If multiple interrupt sources require servicing at the same time, multiple threads may service them, but only one thread will service a particular interrupt source at any point in time. Typically, an interrupt on a network device indicates that there are packets to be received. The same thread that handles the receive processing may later transmit the received packets out another interface. Examples of this are layer-2 bridging and the “ipflow” fast forwarding of IP packets.

The stack uses a thread pool to service events that are generated from other parts of the system.

You can use a command-line option to the driver to control the priority of threads that receive packets. Client connection requests are handled in a floating priority mode (i.e. the thread priority matches that of the client application thread accessing the stack resource manager).

Once a thread receives an event, it examines the event type to see if it's a hardware event, a stack event, or an “other” event.

Having a thread change directly from servicing hardware to acting as the stack thread eliminates context switching and greatly improves receive performance for locally terminated IP flows.


Note: If io-pkt runs out of threads, it sends a message to slogger, and anything that requires a thread blocks until one becomes available. You can use command-line options to specify the maximum and minimum number of threads for io-pkt.

Threading priorities

There are two ways to change the priority of the threads responsible for receiving packets from the hardware. First, you can pass the rx_pulse_prio option to the stack to set the default thread priority. For example:

io-pkt-v4 -ptcpip rx_pulse_prio=50

This makes all the receive threads run at priority 50. The current default for these threads is priority 21.

The second mechanism lets you change the priority on a per-interface basis. This is an option passed to the driver and, as such, is supported only if the driver supports it. When the driver registers for its receive interrupt, it can specify a priority for the pulse that's returned from the ISR; this pulse priority is what the thread uses when running. Here's some sample code from the devnp-mpc85xx.so Ethernet driver:

/* Register the Rx interrupt entry; the pulse returned from the ISR
 * (and therefore the thread that services it) runs at cfg->priority. */
if ((rc = interrupt_entry_init(&mpc85xx->inter_rx, 0, NULL,
                               cfg->priority)) != EOK) {
    log(LOG_ERR, "%s(): interrupt_entry_init(rx) failed: %d",
        __FUNCTION__, rc);
    mpc85xx_destroy(mpc85xx, 9);
    return rc;
}
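
In this driver, cfg->priority carries the per-interface value parsed from the driver's command line (the priority=N option shown below), so the pulse returned from the ISR, and therefore the thread that services it, runs at that priority.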

Note: Driver-specific thread priorities are assigned on a per-interface basis. The stack normally creates one thread per CPU so that performance scales appropriately on an SMP system. If you use an interface-specific priority with multiple interfaces, you must make the stack create one thread per interface (with the -t option) so that each interface's option is picked up and used properly.

For example, to have the stack start up and receive packets on one interface at priority 20 and on a second interface at priority 50 on a single-processor system, you would use the following command-line options:

io-pkt-v4 -t2 -dmpc85xx syspage=1,priority=20,pci=0 \
-dmpc85xx syspage=1,priority=50,pci=1

If you've specified a per-interface priority, and there are more interfaces than threads, the stack sends a warning to slogger. If there are insufficient threads present, the per-interface priority is ignored (but the rx_pulse_prio option is still honored).

The actual options for setting the priority and selecting an individual card depend on the device driver; see the driver documentation for specific option information.

Legacy io-net drivers create their own receive thread, so they don't require the -t option if they support the priority option. These drivers use the devnp-shim.so shim driver for interoperability with the io-pkt stack.

Components of core networking

The io-pkt manager is the main component; other core components include:

pfctl, lsm-pf-v6.so, lsm-pf-v4.so
IP Filtering and NAT configuration and support.
ifconfig, netstat, sockstat (see the NetBSD documentation), sysctl
Stack configuration and parameter / information display.
pfctl
Priority packet queuing on Tx (QoS).
lsm-autoip.so
Auto-IP interface configuration protocol.
brconfig
Bridging and STP configuration along with other layer-2 capabilities.
pppd, pppoed, pppoectl
PPP support for io-pkt, including PPP, PPPoE (client), and Multilink PPP.
devnp-shim.so
io-net binary-compatibility shim layer.
nicinfo
Driver information display tool (for native and io-net drivers).
libsocket.so
BSD socket application API into the network stack.
libpcap.so, tcpdump
Low-level packet-capture capability that provides an abstraction layer into the Berkeley Packet Filter interface.
lsm-qnet.so
Transparent Distributed Processing protocol for io-pkt.
hostapd, hostapd_cli (see the NetBSD documentation), wpa_supplicant, wpa_cli
Authentication daemons and configuration utilities for wireless access points and clients.

QNX Neutrino Core Networking also includes applications, services, and libraries that interface to the stack through the socket library and are therefore not directly dependent on the Core components. This means that they use the standard BSD socket interfaces (BSD socket API, Routing Socket, PF_KEY, raw socket):

libssl.so, libssl.a
SSL suite ported from the source at http://www.openssl.org.
libnbdrvr.so
BSD porting library. An abstraction layer provided to allow the porting of NetBSD drivers.
libipsec(S).a, setkey
NetBSD IPsec tools.
inetd
Updated Internet daemon.
route
Updated route-configuration utility.
ping, ping6
Updated ping utilities.
ftp, ftpd
Enhanced FTP.

Getting the source code

If you want to get the source code for io-pkt and other components, go to Foundry27, the community portal for QNX developers (http://community.qnx.com/sf/sfmain/do/home).

