In this appendix, we'll take a look at QSS's previous operating system, QNX 4, and see how it compares to Neutrino. This appendix will mainly be of interest if you are a current QNX 4 customer and want to see what's stayed the same, what's changed, and how to port your code. Or you may be developing for, or porting to, both operating systems.
Let's first start with how the two generations of operating systems are similar: both are microkernel, message-passing operating systems, both offer realtime scheduling, and both provide POSIX support.
Note that while some of the basic features listed above are indeed similar, in general Neutrino has extended the support. For example, Neutrino has more POSIX support than QNX 4, simply because a large number of the POSIX specifications were still in draft status when QNX 4 was released. While fewer of them are in draft status as of Neutrino's release, new drafts are still being published as this book is written. It's a never-ending game of catch-up.
Now that you've seen what's the same about the two generations of OS, let's look at where Neutrino has improved functionality over QNX 4: in embeddability, threads, message passing, pulses, device drivers, processor portability, and SMP support.
While some of these improvements are “free,” meaning that there are no compatibility issues (for example, POSIX pthreads weren't supported under QNX 4), some things did require fundamental changes. I'll briefly mention the classes of changes that were required, and then we'll look in detail at the compatibility issues caused as well as suggestions on how to port to Neutrino (or keep your code portable between the two).
The way that the operating system is embedded was totally redesigned for Neutrino. In its original release, QNX 4 was only marginally embeddable. Then Neutrino came along, designed from the outset to be embeddable. (As a bonus, QNX 4 underwent some changes as a result of the experience gained in Neutrino, and is now vastly more embeddable than it had been.) In any event, embedding QNX 4 versus embedding Neutrino is almost like night and day. QNX 4 has no real support for things like custom startup programs or kernel callouts for nonstandard hardware, whereas Neutrino does. The definitive book on that subject is QSS's Building Embedded Systems.
QNX 4 had a function called tfork() that let you use “threads” by creating a process with its code and data segments mapped to the same memory locations as those of the creating process. This gave the illusion of a thread: a new process was created, and then its characteristics were changed to make it look like a thread. While there is a thread library available for QNX 4 on QSS's update system, the kernel itself doesn't support threads directly.
Under Neutrino, the POSIX “pthread” model is used for all threading. This means that you'll see (and have seen in this book) familiar function calls like pthread_create(), pthread_mutex_lock(), and others.
While the impact of threads on message passing may seem minimal, it resulted in a fundamental change to the way message passing was done (not to the fundamental concepts of message passing, like SEND/RECEIVE/REPLY, but to the implementation).
Under QNX 4, messages were targeted at process IDs. To send a message, you simply found the process ID of the target and did your Send(). For a server to receive a message under QNX 4, it just did a Receive(), which would block until a message arrived. The server would then reply with the Reply() function.
Under Neutrino, the message-passing model is identical, though the function names have changed. What's new is the mechanism: the client now has to create a connection to the server before it can use the standard message-passing functions, and the server has to create a channel before it can receive messages.
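Here's a minimal sketch of that mechanism, with a made-up buffer size and no error checking; the server and client would normally live in separate processes:

#include <sys/neutrino.h>
#include <sys/types.h>
#include <errno.h>

// Server side:  create a channel, then block in MsgReceive(), just as
// a QNX 4 server blocked in Receive().
void
server_loop (void)
{
    char    msg [512];
    int     chid, rcvid;

    chid = ChannelCreate (0);                       // new under Neutrino
    for (;;) {
        rcvid = MsgReceive (chid, msg, sizeof (msg), NULL);
        // ... process the message ...
        MsgReply (rcvid, EOK, msg, sizeof (msg));   // like Reply()
    }
}

// Client side:  attach a connection first, then MsgSend(), just as a
// QNX 4 client called Send() with a process ID.
int
client_call (pid_t server_pid, int server_chid, void *msg, int nbytes)
{
    int     coid;

    coid = ConnectAttach (0, server_pid, server_chid,
                          _NTO_SIDE_CHANNEL, 0);    // new under Neutrino
    if (coid == -1) {
        return (-1);
    }
    return (MsgSend (coid, msg, nbytes, msg, nbytes));  // like Send()
}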
Note that the QNX 4 Creceive() function, which would do a non-blocking Receive(), is missing from Neutrino. We generally discourage such “polling” functions, especially when you can start a thread, but if you really insist on performing a non-blocking MsgReceive(), you should take a look at the Clocks, Timers, and Getting a Kick Every So Often chapter (under “Kernel timeouts”) for more information.
For the short story version, here's the relevant code sample:
TimerTimeout (CLOCK_REALTIME, _NTO_TIMEOUT_RECEIVE,
              NULL, NULL, NULL);
rcvid = MsgReceive (…);
QNX 4 provided something called a “proxy.” A proxy is best described as a “canned” (or “fixed”) message, which could be sent by processes or kernel services (like a timer or interrupt service routine) to the owner of the proxy. The proxy is non-blocking for the sender and would arrive just like any other message. The way to identify a proxy (as opposed to another process actually sending a message) was to either look at the proxy message contents (not 100% reliable, as a process could send something that looked like the contents of the proxy) or to examine the process ID associated with the message. If the process ID of the message was the same as the proxy ID, then you could be assured it was a proxy, because proxy IDs and process IDs were taken from the same pool of numbers (there'd be no overlap).
Neutrino extends the concept of proxies with “pulses.” Pulses are still non-blocking messages, and they can still be sent from one thread to another, or from a kernel service (like the timer and ISR mentioned above for proxies) to a thread. The difference is that while proxies had fixed content, Neutrino pulses are merely fixed-length: the content can be set by the sender of the pulse at any time. For example, an ISR could save away a key piece of data into the pulse and then send that to a thread.
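For instance, here's a sketch of one thread pulsing a data value to another over an existing connection (the pulse code and priority here are arbitrary choices):

#include <sys/neutrino.h>

#define MY_PULSE_CODE   (_PULSE_CODE_MINAVAIL + 0)  // our private code

// Send the latest reading in the pulse's 32-bit value field; unlike a
// proxy, the content is chosen at send time, and the sender never blocks.
void
notify_reading (int coid, int reading)
{
    MsgSendPulse (coid, 10, MY_PULSE_CODE, reading);    // 10 = priority
}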
Under QNX 4, some services were able to deliver a signal or a proxy, while others were able to deliver only one or the other. To complicate matters, requesting the delivery was usually done in several different ways. For example, to deliver a signal, you'd have to use the kill() function. To deliver a proxy or signal as a result of a timer, you'd have to use a negative signal number (to indicate it was a proxy) or a positive signal number (to indicate it was a signal). Finally, an ISR could deliver only a proxy.
Under Neutrino, all of this is abstracted into an extension of the POSIX struct sigevent data structure. Anything that uses or returns a struct sigevent can deliver either a signal or a pulse.
In fact, this has been extended further, in that the struct sigevent can even cause a thread to be created! We talked about this in the Clocks, Timers, and Getting a Kick Every So Often chapter (under “Getting notified with a thread”).
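As a sketch, here's the same timer notification set up to deliver either a pulse or a signal — only the struct sigevent initialization changes (the connection ID, pulse code, and signal number here are placeholders):

#include <signal.h>
#include <time.h>
#include <sys/siginfo.h>
#include <sys/neutrino.h>

void
setup_timer_notification (int coid, int pulse_code)
{
    struct sigevent     event;
    timer_t             timer_id;

    // deliver a pulse on the connection coid when the timer fires...
    SIGEV_PULSE_INIT (&event, coid, SIGEV_PULSE_PRIO_INHERIT,
                      pulse_code, 0);

    // ...or deliver a signal instead -- same struct sigevent:
    // SIGEV_SIGNAL_INIT (&event, SIGUSR1);

    timer_create (CLOCK_REALTIME, &event, &timer_id);
}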
Under the previous-previous version of the operating system (the QNX 2 family), writing device drivers was an arcane black art. Under QNX 4, it was initially a mystery, but then eventually some samples appeared. Under Neutrino, there are books and courses on the topic. As it turns out, the Neutrino model and the QNX 4 model are, at the highest architectural level, reasonably similar. Whereas QNX 4 had somewhat muddled concepts of what needed to be done as a “connect” function, and what needed to be done as an “I/O” function, Neutrino has a very clear separation. Also, under QNX 4, you (the device driver writer) were responsible for most of the work — you'd supply the main message handling loop, you'd have to associate context on each I/O message, and so on. Neutrino has simplified this greatly with the resource manager library.
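To give a feel for how much of the work the library absorbs, here's a bare-bones resource manager skeleton under some assumptions of mine (the pathname /dev/sample is made up, and error checking is omitted):

#include <stdlib.h>
#include <sys/stat.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

static resmgr_connect_funcs_t   connect_funcs;
static resmgr_io_funcs_t        io_funcs;
static iofunc_attr_t            attr;

int
main (int argc, char **argv)
{
    dispatch_t          *dpp;
    dispatch_context_t  *ctp;

    dpp = dispatch_create ();

    // start with the POSIX-default connect and I/O handlers; you
    // override only the ones your driver actually cares about
    iofunc_func_init (_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                      _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init (&attr, S_IFNAM | 0666, NULL, NULL);

    // take over a name in the pathname space
    resmgr_attach (dpp, NULL, "/dev/sample", _FTYPE_ANY, 0,
                   &connect_funcs, &io_funcs, &attr);

    // the library supplies the receive loop and per-message context
    ctp = dispatch_context_alloc (dpp);
    for (;;) {
        ctp = dispatch_block (ctp);
        dispatch_handler (ctp);
    }
    return (EXIT_SUCCESS);
}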
One of the driving changes behind the embeddability differences between QNX 4 and Neutrino is the fact that Neutrino supports the MIPS, PowerPC, SH4, and ARM processors. Whereas QNX 4 was initially “at home” on an IBM PC with a BIOS and very standard hardware, Neutrino is equally at home on multiple processor platforms with or without a BIOS (or ROM monitor), and with customized hardware chosen by the manufacturer (often, it would appear, without regard for the requirements of the OS). This means that the Neutrino kernel had to provide for callouts, so that you could, for example, tell it what kind of interrupt controller hardware you had, and run on that hardware without having to buy a source license for the operating system.
Another change you'll notice when you port QNX 4 applications to Neutrino, especially on these different processor platforms, is that the processors are fussy about alignment. You can't access an N-byte object on anything other than an N-byte multiple of an address. Under x86 (with the alignment flag turned off), you could access memory willy-nilly. By modifying your code to use properly aligned structures (for the non-x86 processors), you'll also find that your code runs faster on x86, because the x86 processor accesses aligned data faster.
Another thing that often comes back to haunt people is the issue of big-endian versus little-endian. The x86 is a mono-endian processor (meaning it has only one “endian-ness”), and that's little-endian. MIPS and PPC, for example, are bi-endian processors (meaning the processor can operate in either big-endian or little-endian mode). Furthermore, these non-x86 processors are “RISC” (Reduced Instruction Set CPU) machines, meaning that certain operations, such as a simple C-language |= (bitwise set operation), may or may not be performed atomically. This can have startling consequences! Look at the file <atomic.h> for a list of helper functions that ensure atomic operation, as shown in the sketch below.
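For example, a shared flag word would be updated with one of those helpers instead of |= (a minimal sketch; the bit value is arbitrary):

#include <atomic.h>

static volatile unsigned    flags;

void
set_ready_bit (void)
{
    // NOT guaranteed atomic on a RISC processor:
    //     flags |= 0x10;
    // guaranteed atomic on all supported processors:
    atomic_set (&flags, 0x10);
}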
Released versions of QNX 4 are strictly single-processor, whereas Neutrino, at the time of this second printing, has support for SMP on the x86 and PPC architectures at least. SMP is a great feature, especially in an operating system that supports threads, but it's also a bigger gun that you can shoot yourself in the foot with. For example, on a single-processor box, an ISR will preempt a thread, but never the other way around. On a single-processor box, it's a worthwhile abstraction to “pretend” that threads run simultaneously, when they don't really.
On an SMP box, a thread and ISR can be running simultaneously, and multiple threads can also be running simultaneously. Not only is an SMP system a great workstation, it's also an excellent SQA (Software Quality Assurance) testing tool — if you've made any “bad” assumptions about protection in a multithreaded environment, an SMP system will find them eventually.
To illustrate just how true that statement is, one of the bugs in an early internal version of SMP had a “window” of one machine cycle! On one processor, what was supposedly coded to be an atomic read/modify/write operation could be interfered with by the second processor's compare-and-exchange instruction.
Let's now turn our attention to the “big picture.” We'll look at how a client finds a server, how the message passing between them gets converted, what to do about proxies, and how interrupt service routines change.
Under QNX 4, the way a client would find a server was either to use the global namespace, or to have the server behave like an I/O manager and register a name in the pathname space.
If the client/server relationship that you're porting depended on the global namespace, then the client used qnx_name_locate(), and the server would “register” its name via qnx_name_attach().
In this case, you have two choices. You can try to retain the global namespace idiom, or you can modify your client and server to act like a standard resource manager. If you wish to retain the global namespace, then you should look at the name_attach() and name_detach() functions for your server, and name_open() and name_close() for your clients.
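Here's a sketch of what that conversion might look like (I've used the name “CLID” from the example below; under the covers, these calls manage the channel and connection for you):

#include <sys/dispatch.h>

// server side:  register the global name, then MsgReceive() on the
// channel that name_attach() created
static name_attach_t    *att;

int
server_register (void)
{
    att = name_attach (NULL, "CLID", 0);
    return (att == NULL ? -1 : att->chid);
}

// client side:  locate the server by name; the result is a connection
// ID that can be used directly with MsgSend()
int
client_locate (void)
{
    return (name_open ("CLID", 0));
}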
However, I'd recommend that you do the latter; it's “the Neutrino way” to do everything with resource managers, rather than try to bolt a resource manager “kludge” onto the side of a global namespace server.
The modification is actually reasonably simple. Chances are that the client side calls a function that returns the process ID of the server, or that uses the “VC” (Virtual Circuit) approach to create a VC from the client's node to a remote server's node. In either case, the process ID or the VC to the remote process was found by calling qnx_name_locate(). Here, the “magic cookie” that binds the client to the server is some form of process ID (we're considering the VC to be a process ID, because VCs are taken from the same number space, and for all intents and purposes they look just like process IDs).
If you were to return a connection ID instead of a process ID, you'd have conquered the major difference. Since the QNX 4 client probably doesn't examine the process ID in any way (what meaning would it have, anyway? — it's just a number), you can probably trick the QNX 4 client into performing an open() on the “global name.” In this case, however, the global name would be the pathname that the resource manager attached as its “id.” For example, the following is typical QNX 4 client code, stolen from my caller ID (CLID) server library:
/*
 *  CLID_Attach (serverName)
 *
 *  This routine is responsible for establishing a connection to
 *  the CLID server.
 *
 *  Returns the process ID or VC to the CLID server.
 */

// a place to store the name, for other library calls
static char CLID_serverName [MAX_CLID_SERVER_NAME + 1];

// a place to store the clid server id
static int  clid_pid = -1;

int
CLID_Attach (char *serverName)
{
    if (serverName == NULL) {
        sprintf (CLID_serverName, "/PARSE/CLID");
    } else {
        strcpy (CLID_serverName, serverName);
    }
    clid_pid = qnx_name_locate (0, CLID_serverName,
                                sizeof (CLID_ServerIPC), NULL);
    if (clid_pid != -1) {
        CLID_IPC (CLID_MsgAttach);      // send it an ATTACH message
        return (clid_pid);
    }
    return (-1);
}
You could change this to be:
/*
 *  CLID_Attach (serverName)    Neutrino version
 */

int
CLID_Attach (char *serverName)
{
    if (serverName == NULL) {
        sprintf (CLID_serverName, "/PARSE/CLID");
    } else {
        strcpy (CLID_serverName, serverName);
    }
    return (clid_pid = open (CLID_serverName, O_RDWR));
}
and the client wouldn't even notice the difference.
Two implementation notes: I've simply left the default name “/PARSE/CLID” as the registered name of the resource manager. Most likely a better name would be “/dev/clid” — it's up to you how “POSIX-like” you want to make things. In any event, it's a one-line change and is only marginally related to the discussion here.
The second note is that I've still called the file descriptor clid_pid, even though now it should really be called clid_fd. Again, this is a style issue and relates to just how much change you want to perform between your QNX 4 version and the Neutrino one. In any event, to be totally portable to both, you'll want to abstract the client binding portion of the code into a function call — as I did above with CLID_Attach().
At some point, the client would actually perform the message pass operation. This is where things get a little trickier. Since the client/server relationship is not based on an I/O manager relationship, the client generally creates “customized” messages. Again from the CLID library (CLID_AddSingleNPANXX() is the client's exposed API call; I've also included checkAttach() and CLID_IPC() to show the actual message passing and checking logic):
/*
 *  CLID_AddSingleNPANXX (npa, nxx)
 */

int
CLID_AddSingleNPANXX (int npa, int nxx)
{
    checkAttach ();
    CLID_IPCData.npa = npa;
    CLID_IPCData.nxx = nxx;
    CLID_IPC (CLID_MsgAddSingleNPANXX);
    return (CLID_IPCData.returnValue);
}

/*
 *  CLID_IPC (IPC message number)
 *
 *  This routine will call the server with the global buffer
 *  CLID_IPCData, and will stuff in the message number passed
 *  as the argument.
 *
 *  Should the server not exist, this routine will stuff the
 *  .returnValue field with CLID_NoServer.  Otherwise, no
 *  fields are affected.
 */

void
CLID_IPC (IPCMessage)
int     IPCMessage;
{
    if (clid_pid == -1) {
        CLID_IPCData.returnValue = CLID_NoServer;
        return;
    }
    CLID_IPCData.serverFunction = IPCMessage;
    CLID_IPCData.type = 0x8001;
    CLID_IPCData.subtype = 0;
    if (Send (clid_pid, &CLID_IPCData, &CLID_IPCData,
              sizeof (CLID_IPCData), sizeof (CLID_IPCData))) {
        CLID_IPCData.returnValue = CLID_IPCError;
        return;
    }
}

void
checkAttach ()
{
    if (clid_pid == -1) {
        CLID_Attach (NULL);
    }
}
As you can see, the checkAttach() function is used to ensure that a connection exists to the CLID server. If you didn't have a connection, it would be like calling read() with an invalid file descriptor. In my case here, the checkAttach() automagically creates the connection. It would be like having the read() function determine that there is no valid file descriptor and just create one out of the blue. Another style issue.
The customized messaging occurs in the CLID_IPC() function. It takes the global variable CLID_IPCData and tries to send it to the server using the QNX 4 Send() function.
The customized messages can be handled in one of two ways: you can translate them into standard, file-descriptor-based I/O calls (read() and write()), or you can keep them as “control” operations and send them via devctl() or a custom _IO_MSG message (both described below).
In both cases, you've effectively converted the client to communicating with the server using standard resource manager mechanisms. What? You don't have a file descriptor? You have only a connection ID? Or vice versa? This isn't a problem! Under Neutrino, a file descriptor is a connection ID!
In the case of the CLID server, the first approach really isn't an option. There is no standard POSIX file-descriptor-based call to “add an NPA/NXX pair to a CLID resource manager.” However, there is the general devctl() mechanism, so if your client/server relationship requires this form, see below.
Now, before you write off this approach (translating to standard fd-based messages), let's stop and think about some of the places where this would be useful. In an audio driver, you may have used customized QNX 4 messages to transfer the audio data to and from the resource manager. When you really look at it, read() and write() are probably much more suited to the task at hand — bulk data transfer. Setting the sampling rate, on the other hand, would be much better accomplished via the devctl() function.
Granted, not every client/server relationship will have a bulk data transfer requirement (the CLID server, for example, has none).
So the question becomes, how do you perform control operations? The easiest way is to use the devctl() POSIX call. Our CLID library example (above) now becomes:
/*
 *  CLID_AddSingleNPANXX (npa, nxx)
 */

int
CLID_AddSingleNPANXX (int npa, int nxx)
{
    struct clid_addnpanxx_t     msg;

    checkAttach ();     // keep or delete, style issue
    msg.npa = npa;
    msg.nxx = nxx;
    return (devctl (clid_pid, DCMD_CLID_ADD_NPANXX,
                    &msg, sizeof (msg), NULL));
}
As you can see, this was a relatively painless operation. (For those people who don't like devctl() because it forces data transfers to be the same size in both directions, see the discussion below on the _IO_MSG message.) Again, if you're maintaining source that needs to run on both operating systems, you'd abstract the message-passing function into one common point, and then supply different versions of a library, depending on the operating system.
We actually killed two birds with one stone: we got rid of the global CLID_IPCData buffer (the message is now assembled in a stack variable, making the library thread-safe), and we passed only the correct-sized data structure, instead of the maximum-sized one.
Note that we had to define DCMD_CLID_ADD_NPANXX — we could have also kludged around this and used the CLID_MsgAddSingleNPANXX manifest constant (with appropriate modification in the header file) for the same purpose. I just wanted to highlight the fact that the two constants weren't identical.
The second point that we made in the list above (about killing birds) was that we passed only the “correct-sized data structure.” That's actually a tiny lie. You'll notice that the devctl() has only one size parameter (the fourth parameter, which we set to sizeof (msg)). How does the data transfer actually occur? The second parameter to devctl() contains the device command (hence “DCMD”). Encoded within the top two bits of the device command is the direction, which can be one of four possibilities: no data transferred, data transferred to the driver only, data transferred from the driver only, or data transferred in both directions.
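The direction is baked in when you define the command constant, using the __DION/__DIOT/__DIOF/__DIOTF macros from <devctl.h>. For example, our DCMD_CLID_ADD_NPANXX might plausibly be defined like this (the class and command number here are made up for illustration):

#include <devctl.h>

// "T" = data goes To the driver only; __DIOF (From), __DIOTF (both),
// and __DION (none) encode the other three directions
#define DCMD_CLID_ADD_NPANXX  __DIOT (_DCMD_MISC, 0x54,  \
                                      struct clid_addnpanxx_t)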
If you're not transferring data (meaning that the command itself suffices), or if you're transferring data unidirectionally, then devctl() is fine. The interesting case is when you're transferring data bidirectionally, because (since there's only one data size parameter to devctl()) both data transfers (to the driver and back) will transfer the entire data buffer! This is okay in the sub-case where the “input” and “output” data buffer sizes are identical, but consider the case where the data buffer going to the driver is a few bytes, and the data coming back from the driver is large. Since we have only one size parameter, we're effectively forced to transfer the entire data buffer to the driver, even though only a few bytes were required!
This can be solved by “rolling your own” messages, using the general “escape” mechanism provided by the _IO_MSG message.
The _IO_MSG message is provided to allow you to add your own message types, while not conflicting with any of the “standard” resource manager message types — it's already a resource manager message type.
The first thing that you must do when using _IO_MSG is define your particular “custom” messages. In this example, we'll define two types, and model it after the standard resource manager messages — one data type will be the input message, and one will be the output message:
typedef struct { int data_rate; int more_stuff; } my_input_xyz_t; typedef struct { int old_data_rate; int new_data_rate; int more_stuff; } my_output_xyz_t; typedef union { my_input_xyz_t i; my_output_xyz_t o; } my_message_xyz_t;
Here, we've defined a union of an input and output message, and called it my_message_xyz_t. The naming convention is that this is the message that relates to the “xyz” service, whatever that may be. The input message is of type my_input_xyz_t, and the output message is of type my_output_xyz_t. Note that “input” and “output” are from the point of view of the resource manager — “input” is data going into the resource manager, and “output” is data coming from the resource manager (back to the client).
We need to make some form of API call for the client to use — we could just force the client to manually fill in the data structures my_input_xyz_t and my_output_xyz_t, but I don't recommend doing that. The reason is that the API is supposed to “decouple” the implementation of the message being transferred from the functionality. Let's assume this is the API for the client:
int adjust_xyz (int *data_rate, int *odata_rate, int *more_stuff);
Now we have a well-documented function, adjust_xyz(), that performs something useful from the client's point of view. Note that we've used pointers to integers for the data transfer — this is simply one way of implementing it. Here's the source code for the adjust_xyz() function:
int
adjust_xyz (int *dr, int *odr, int *ms)
{
    my_message_xyz_t    msg;
    int                 sts;

    msg.i.data_rate = *dr;
    msg.i.more_stuff = *ms;
    sts = io_msg (global_fd, COMMAND_XYZ, &msg,
                  sizeof (msg.i), sizeof (msg.o));
    if (sts == EOK) {
        *odr = msg.o.old_data_rate;
        *ms = msg.o.more_stuff;
    }
    return (sts);
}
This is an example of using io_msg() (which we'll define shortly — it's not a standard QSS-supplied library call!). The io_msg() function does the magic of assembling the _IO_MSG message. To get around the problem that we discussed about devctl() having only one “size” parameter, we've given io_msg() two size parameters, one for the input (to the resource manager, sizeof (msg.i)) and one for the output (from the resource manager, sizeof (msg.o)). Notice how we update the values of *odr and *ms only if io_msg() returns EOK. This is a common trick, and is useful here because the passed arguments don't get modified unless the actual command succeeded. (This prevents the client program from having to maintain copies of its passed data, just in case the function fails.)
One last thing I've done in the adjust_xyz() function is depend on the global_fd variable containing the file descriptor of the resource manager. Again, there are a number of ways that you could handle it: keep the file descriptor in a global variable (as we've done here), pass it as an explicit parameter to adjust_xyz(), or hide the binding behind an attach-style function, as we did earlier with CLID_Attach().
Here's the source for io_msg():
int
io_msg (int fd, int cmd, void *msg, int isize, int osize)
{
    io_msg_t    io_message;
    iov_t       rx_iov [2];
    iov_t       tx_iov [2];

    // set up the transmit IOV -- the header plus the data going TO
    // the resource manager (isize bytes)
    SETIOV (tx_iov + 0, &io_message, sizeof (io_message));
    SETIOV (tx_iov + 1, msg, isize);

    // set up the receive IOV -- the header plus the data coming back
    // FROM the resource manager (osize bytes)
    SETIOV (rx_iov + 0, &io_message, sizeof (io_message));
    SETIOV (rx_iov + 1, msg, osize);

    // set up the _IO_MSG itself
    memset (&io_message, 0, sizeof (io_message));
    io_message.type = _IO_MSG;
    io_message.mgrid = cmd;

    return (MsgSendv (fd, tx_iov, 2, rx_iov, 2));
}
Notice a few things.
The io_msg() function used a two-part IOV to “encapsulate” the custom message (as passed by msg) into the io_message structure.
The io_message was zeroed out and initialized with the _IO_MSG message identification type, as well as the cmd (which will be used by the resource manager to decide what kind of message was being sent).
The MsgSendv() function's return status was used directly as the return status of io_msg().
The only “funny” thing that we did was in the mgrid field. QSS reserves a range of values for this field, with a special sub-range reserved for “unregistered” or “prototype” drivers — the values from _IOMGR_PRIVATE_BASE through _IOMGR_PRIVATE_MAX. If you're building a deeply embedded system where you know that no inappropriate messages will be sent to your resource manager, then you can go ahead and use the special range. On the other hand, if you are building more of a “desktop” or “generic” system, you may not have enough control over the final configuration of the system to determine whether inappropriate messages will be sent to your resource manager. In that case, you should contact QSS to obtain a mgrid value that will be reserved for you — no one else should use that number. Consult the file <sys/iomgr.h> for the ranges currently in use. In our example above, we could assume that COMMAND_XYZ is something based on _IOMGR_PRIVATE_BASE:
#define COMMAND_XYZ (_IOMGR_PRIVATE_BASE + 0x0007)
Or that we've been assigned a specific number by QSS:
#define COMMAND_XYZ (_IOMGR_ACME_CORP + 0x0007)
Now, what if the client that you're porting used an I/O manager? How would we convert that to Neutrino? Answer: we already did. Once we establish a file-descriptor-based interface, we're using a resource manager. Under Neutrino, you'd almost never use a “raw” message interface, because the resource manager library does the tedious work for you, and a file-descriptor-based interface means that the standard POSIX functions and utilities can talk to your server for free.
Under QNX 4, the only way to send a non-blocking message was to create a proxy via qnx_proxy_attach(). This function returns a proxy ID (which is taken from the same number space as process IDs), which you can then Trigger() or return from an interrupt service routine (see below).
Under Neutrino, you'd set up a struct sigevent to contain a “pulse,” and either use MsgDeliverEvent() to deliver the event or bind the event to a timer or ISR.
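Here's a sketch of the server side of that; it assumes the client previously passed in a struct sigevent (typically initialized with SIGEV_PULSE_INIT()) and that we saved it, along with the client's rcvid:

#include <sys/neutrino.h>
#include <sys/siginfo.h>

// When the data the client asked about is finally ready, "trigger"
// the client -- much like Trigger() on a QNX 4 proxy, except that the
// client chose the notification type when it filled in the event.
void
notify_client (int saved_rcvid, const struct sigevent *saved_event)
{
    MsgDeliverEvent (saved_rcvid, saved_event);
}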
The usual trick under QNX 4 to detect proxy messages (via Receive() or Creceive()) was to compare the process ID returned by the receiving function against the proxy IDs that you're expecting. If you got a match, you knew it was a proxy. Alternatively, you could ignore the process ID returned by the receiving function and handle the message as if it were a “regular” message. Unfortunately, this has some porting complications.
If you're comparing the received process ID against the list of proxies that you're expecting, then you'll usually ignore the actual contents of the proxy. After all, since the proxy message couldn't be changed once you've created it, what additional information would you have gained by looking at the message once you knew it was one of your proxies? You could argue that as a convenience you'd place a message into the proxy that you could then look at with your standard message decoding. If that's the case, see below, “Proxies for their contents.”
Therefore, under QNX 4, you'd see code like:
pid = Receive (0, &msg, sizeof (msg));
if (pid == proxyPidTimer) {
    // we got hit with the timer, do something
} else if (pid == proxyPidISR) {
    // our ISR went off, do something
} else {
    // not one of our proxies, must have been a regular
    // message for a client.  Do something.
}
Under Neutrino, you'd replace this code with the following:
rcvid = MsgReceive (chid, &msg, sizeof (msg), NULL);
if (rcvid == 0) {       // 0 indicates it was a pulse
    switch (msg.pulse.code) {
    case MyCodeTimer:
        // we got hit with the timer, do something
        break;
    case MyCodeISR:
        // our ISR went off, do something
        break;
    default:
        // unknown pulse code, log it, whatever.
        break;
    }
} else {
    // rcvid is not zero, therefore not a pulse but a
    // regular message from a client.  Do something.
}
Note that this example would be used if you're handling all messages yourself. Since we recommend using the resource manager library, your code would really look more like this:
int
main (int argc, char **argv)
{
    …   // do the usual initializations

    pulse_attach (dpp, 0, MyCodeTimer, my_timer_pulse_handler, NULL);
    pulse_attach (dpp, 0, MyCodeISR, my_isr_pulse_handler, NULL);

    …
}
This time, we're telling the resource manager library to put the two checks that we showed in the previous example into its receive loop and call our two handling functions (my_timer_pulse_handler() and my_isr_pulse_handler()) whenever those codes show up. Much simpler.
If you're looking at proxies for their contents (you're ignoring the fact that it's a proxy and just treating it like a message), then you already have to deal with the fact that you can't reply to a proxy under QNX 4. Under Neutrino, you can't reply to a pulse. What this means is, you've already got code in place that either looks at the proxy ID returned by the receive function and determines that it shouldn't reply, or the proxy has encoded within it special indications that this is a message that shouldn't be replied to.
Unfortunately under Neutrino, you can't stuff arbitrary data into a pulse. A pulse has a well-defined structure, and there's just no getting around that fact. A clever solution would be to “simulate” the message that you'd ordinarily receive from the proxy by using a pulse with a table. The table would contain the equivalent messages that would have been sent by the proxies. When a pulse arrives, you'd use the value field in the pulse as an index into this table and “pretend” that the given proxy message had arrived.
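Here's a sketch of that table approach; the message layout, codes, and the process_message() decoder are hypothetical stand-ins for whatever your QNX 4 code already uses:

#include <sys/neutrino.h>

// hypothetical layout, matching what the proxies used to carry
typedef struct {
    int     type;
    int     data;
} my_msg_t;

#define MSG_TIMER_EXPIRED   1
#define MSG_DATA_READY      2

// one "canned" message per former proxy, indexed by pulse value
static const my_msg_t   proxy_table [] = {
    { MSG_TIMER_EXPIRED, 0 },   // pulse value 0
    { MSG_DATA_READY,    0 },   // pulse value 1
};

extern void process_message (const my_msg_t *msg);  // existing decoder

// on receiving a pulse, look up the equivalent message and "pretend"
// that it arrived from a proxy
void
handle_pulse (const struct _pulse *pulse)
{
    process_message (&proxy_table [pulse->value.sival_int]);
}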
QNX 4's interrupt service routines had the ability to either return a proxy ID (indicating that the proxy should be sent to its owner) or a zero, indicating nothing further needed to be done. Under Neutrino, this mechanism is almost identical, except that instead of returning a proxy, you're returning a pointer to a struct sigevent. The event that you return can be a pulse, which will give you the “closest” analog to a proxy, or it can be a signal or the creation of a thread. Your choice.
Also, under QNX 4 you had to have an interrupt service routine, even if all it did was return a proxy. Under Neutrino, using InterruptAttachEvent(), you can bind a struct sigevent to an interrupt vector, and that event will be delivered every time the vector is activated.
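A sketch of the no-ISR approach, reusing the MyCodeISR pulse code from the earlier example (the interrupt number here is made up, and the thread must obtain I/O privileges first):

#include <sys/neutrino.h>
#include <sys/siginfo.h>

static struct sigevent  int_event;      // must persist while attached

int
attach_my_interrupt (int coid)
{
    ThreadCtl (_NTO_TCTL_IO, 0);        // obtain I/O privileges

    // deliver this pulse every time interrupt vector 5 is activated;
    // no interrupt service routine required
    SIGEV_PULSE_INIT (&int_event, coid, SIGEV_PULSE_PRIO_INHERIT,
                      MyCodeISR, 0);
    return (InterruptAttachEvent (5, &int_event,
                                  _NTO_INTR_FLAGS_TRK_MSK));
}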
Porting from QNX 4 to Neutrino, or maintaining a program that must function on both, is possible if you follow these rules: abstract the way that the client finds and binds to the server, abstract the way that messages are transported between the two, and keep those platform-specific pieces behind a small set of functions that you can conditionally compile.
The key is to not tie yourself to a particular “handle” that represents the “connection” between the client and the server, and to not rely on a particular mechanism for finding the server. If you abstract the connection and the detection services into a set of function calls, you can then conditionally compile the code for whatever platform you wish to port to.
The exact same discussion applies to the message transport: always abstract the client's API away from “knowing” how messages are transported from client to server. Funnel everything through a single-point transport API, which can then be conditionally compiled for either platform.
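As a sketch, the single-point transport function might look like this; xport_call() is an invented name, and __QNXNTO__ is the macro that the Neutrino compilers define:

// the ONLY function that knows how messages move from client to server
#ifdef __QNXNTO__                       /* Neutrino */
    #include <sys/neutrino.h>

    int
    xport_call (int handle, void *msg, int nbytes)
    {
        // handle is a connection ID
        return (MsgSend (handle, msg, nbytes, msg, nbytes));
    }
#else                                   /* QNX 4 */
    #include <sys/kernel.h>

    int
    xport_call (int handle, void *msg, int nbytes)
    {
        // handle is a process ID or VC
        return (Send (handle, msg, msg, nbytes, nbytes));
    }
#endif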
Porting a server from QNX 4 to Neutrino is more difficult, owing to the fact that QNX 4 servers were generally “hand-made” and didn't follow a rigorous structure like the one imposed by the resource manager library under Neutrino. Generally, though, if you're porting something hardware-specific (for example, a sound card driver or a block-level disk driver), the main “code” that you'll be porting has nothing to do with the operating system, and everything to do with the hardware itself. The approach I've adopted in these cases is to code a shell “driver” structure, and provide well-defined hardware-specific functions. The entire shell driver will be different between operating systems, but the hardware-specific functions can be amazingly portable.
Note also that QSS provides a QNX 4 to Neutrino migration kit — see the online docs.