In this appendix, we'll look at some typical buildfiles you can use with mkifs or import into the IDE's System Builder to get your system up and running. This appendix is divided into two main parts:

We start with generic samples: buildfile fragments that illustrate various techniques, plus a few complete buildfiles, applicable (with slight modifications) to all supported platforms.

We finish with a section for each of the supported processor platforms, showing you differences from the x86 samples and noting things to look out for.
Note that you should read both the section on generic samples and the section for your particular processor, because things like shared objects (which are required by just about everything) are documented in the generic section.
In this section, we'll look at some common buildfile examples that are applicable (perhaps with slight modifications, which we'll note) to all platforms. We'll start with some fragments that illustrate various techniques, and then we'll wrap up with a few complete buildfiles. In the “Processor-specific notes” section, we'll look at what needs to be different for the various processor families.
The first thing you'll need to do is to ensure that the shared objects required by the various drivers you'll be running are present. All drivers require at least the standard C library shared object (libc.so). Since the shared object search order looks in /proc/boot, you don't have to do anything special, other than including the shared library in the image. This is done by simply specifying the name of the shared library on a line by itself, meaning “include this file.”
The runtime linker is expected to be found in a file called ldqnx.so.2, but it's currently contained within the libc.so file, so we make a process manager symbolic link to it.
The following buildfile snippet applies:
# include the C shared library
libc.so

# create a symlink called ldqnx.so.2 to it
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so
How do you determine which shared objects you need in the image? You can use the objdump utility to display information about the executables you're including in the image; look for the objects marked as NEEDED. For example, suppose you're including ping in your image:
$ objdump -x `which ping` | grep NEEDED
objdump: /usr/bin/ping: no symbols
  NEEDED      libsocket.so.2
  NEEDED      libc.so.3
The ping executable needs libsocket.so.2 and libc.so.3. You need to use objdump recursively to see what these shared objects need:
$ objdump -x /lib/libsocket.so.2 | grep NEEDED
  NEEDED      libc.so.3
$ objdump -x /lib/libc.so.3 | grep NEEDED
The libsocket.so.2 shared object needs only libc.so.3, which, in turn, needs nothing. So, if you're including ping in your image, you also need to include these two shared objects.
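Putting this together, here's a small sketch of a buildfile fragment (following the same conventions as the complete buildfiles later in this appendix) that includes ping along with the shared objects it needs:

# shared objects needed by ping
libc.so
libsocket.so

# the runtime linker must be visible as ldqnx.so.2
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so

# the executable itself
ping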
If you want to be able to run executables more than once, you'll need to specify the [data=copy] attribute for those executables. If you want it to apply to all executables, just put it on a line by itself before the executables. This causes the data segment to be copied before it's used, preventing it from being overwritten by the first invocation of the program.
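For example, here's a minimal sketch of both forms; the first applies the attribute to every executable that follows it, the second to a single executable only:

# copy the data segment for all executables listed after this line
[data=copy]
esh
ping

# or, apply it to just one executable
[data=copy] esh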
For systems that have multiple consoles or multiple serial ports, you may wish to have the shell running on each of them. Here's an example showing you how that's done:
[+script] .script = {
    # start any other drivers you need here
    devc-con -e -n4 &
    reopen /dev/con1
    [+session] esh &
    reopen /dev/con2
    [+session] esh &
    ...
As you can see, the trick is to start the console driver with enough consoles (the -n4 option gives us four), reopen standard input, output, and error for each console in turn, and then start a shell (or another program) on each one in the background.
It's important to run the shell in the background (via the ampersand character “&”) — if you don't, then the interpretation of the script will suspend until the shell exits!
Generally speaking, this method can be used to start various other programs on the consoles (that is to say, you don't have to start the shell; it could be any program).
To do this for serial ports, start the appropriate serial driver (e.g. devc-ser8250), and redirect standard input, output, and error for each port (e.g. /dev/ser1, /dev/ser2). Then run the appropriate executable (in the background!) after the redirection.
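Here's a rough sketch of what that might look like, assuming your board uses devc-ser8250 and the driver registers /dev/ser1 and /dev/ser2:

[+script] .script = {
    # start the serial driver for the two ports
    devc-ser8250 -e -b115200 &
    reopen /dev/ser1
    [+session] esh &
    reopen /dev/ser2
    [+session] esh &
    ...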
The [+session] directive makes the program the session leader (as per POSIX) — this may not be necessary for arbitrary executables.
You can do the reopen on any device as many times as you want. You would do this, for example, to start a program on /dev/con1, then start the shell on /dev/con2, and then start another program on /dev/con1 again:
[+script] .script = {
    ...
    reopen /dev/con1
    prog1 &
    reopen /dev/con2
    [+session] esh &
    reopen /dev/con1
    prog2 &
    ...
To create the /tmp directory on a RAM-disk, you can use the following in your buildfile:
[type=link] /tmp = /dev/shmem
This will establish /tmp as a symbolic link in the process manager's pathname table to the /dev/shmem directory. Since the /dev/shmem directory is really the place where shared memory objects are stored, this effectively lets you create files on a RAM-disk — files created are, in reality, shared memory objects living in RAM.
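For example (a hypothetical session on the target), creating a file under /tmp makes a corresponding object appear under /dev/shmem:

$ echo hello > /tmp/junk
$ ls /dev/shmem
junk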
Note that the line containing the link attribute (the [type=link] line) should be placed outside of the script file or boot file — after all, you're telling mkifs that it should create a file that just happens to be a link rather than a “real” file.
This configuration file does the bare minimum necessary to give you a shell prompt on the first serial port:
[virtual=ppcbe,srec] .bootstrap = {
    startup-rpx-lite -Dsmc1.115200.64000000.16
    PATH=/proc/boot procnto-800
}
[+script] .script = {
    devc-serppc800 -e -F -c64000000 -b115200 smc1 &
    reopen
    [+session] PATH=/proc/boot esh &
}
[type=link] /dev/console=/dev/ser1
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so
libc.so
[data=copy]
devc-serppc800
esh
# specify executables that you want to be able
# to run from the shell: echo, ls, pidin, etc...
echo
ls
pidin
cat
cp
Let's now examine a complete buildfile that starts up the flash filesystem:
[virtual=x86,bios +compress] .bootstrap = {
    startup-bios
    PATH=/proc/boot:/bin procnto
}
[+script] .script = {
    devc-con -e -n5 &
    reopen /dev/con1
    devf-i365sl -r -b3 -m2 -u2 -t4 &
    waitfor /fs0p0
    [+session] TERM=qansi PATH=/proc/boot:/bin esh &
}
[type=link] /tmp=/dev/shmem
[type=link] /bin=/fs0p0/bin
[type=link] /etc=/fs0p0/etc
libc.so
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so
libsocket.so
[data=copy]
devf-i365sl
devc-con
esh
The buildfile's .bootstrap specifies the usual startup-bios and procnto (the startup program and the kernel). Notice how we set the PATH environment variable to point not only to /proc/boot, but also to /bin — the /bin directory is a link (created with the [type=link]) to the flash filesystem's /fs0p0/bin path.
In the .script file, we started up the console driver with five consoles, reopened standard input, output, and error for /dev/con1, and started the flash filesystem driver devf-i365sl. Let's look at the command-line options we gave it:
The devf-i365sl will automatically mount the flash partition as /fs0p0. Notice the process manager symbolic links we created at the bottom of the buildfile:
[type=link] /bin=/fs0p0/bin
[type=link] /etc=/fs0p0/etc
These give us /bin and /etc from the flash filesystem.
In this example, we'll look at a filesystem for rotating media. Notice the shared libraries that need to be present:
[virtual=x86,bios +compress] .bootstrap = {
    startup-bios
    PATH=/proc/boot:/bin LD_LIBRARY_PATH=/proc/boot:/lib:/dll procnto
}
[+script] .script = {
    pci-bios &
    devc-con &
    reopen /dev/con1

    # Disk drivers
    devb-eide blk cache=2m,automount=hd0t79:/,automount=cd0:/cd &

    # Wait for a bin for the rest of the commands
    waitfor /x86 10

    # Some common servers
    pipe &
    mqueue &
    devc-pty &

    # Start the main shell
    [+session] esh &
}

# make /tmp point to the shared memory area
[type=link] /tmp=/dev/shmem

# Redirect console messages
# [type=link] /dev/console=/dev/ser1

# Programs require the runtime linker (ldqnx.so) to be at
# a fixed location
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so

# Add for HD support
[type=link] /usr/lib/libcam.so.2=/proc/boot/libcam.so

# add symbolic links for bin, dll, and lib
# (files in /x86 with devb-eide)
[type=link] /bin=/x86/bin
[type=link] /dll=/x86/lib/dll
[type=link] /lib=/x86/lib

# We use the C shared lib (which also contains the runtime linker)
libc.so

# Just in case someone needs floating point and our CPU doesn't
# have a floating point unit
fpemu.so.2

# Include the hard disk shared objects so we can access the disk
libcam.so
io-blk.so

# For the QNX 4 filesystem
cam-disk.so
fs-qnx4.so

# For the UDF filesystem and the PCI
cam-cdrom.so
fs-udf.so
pci-bios

# Copy code and data for all executables after this line
[data=copy]

# Include a console driver, shell, etc.
esh
devb-eide
devc-con
For this release of Neutrino, you can't use the floating-point emulator (fpemu.so.2) in statically linked executables.
In this buildfile, we see the startup command line for the devb-eide command:
devb-eide blk cache=2m,automount=hd0t79:/,automount=cd0:/cd &
This line indicates that the devb-eide driver should start and then pass the string beginning with cache= through to the end of the line (except for the ampersand) to the block I/O module (io-blk.so). io-blk.so examines this command line and then starts up with a 2-megabyte cache (the cache=2m part), automatically mounts the partition identified by hd0t79 (the first QNX filesystem partition) at the pathname /, and automatically mounts the CD-ROM as /cd.
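If you'd rather mount the partition somewhere other than the root (say, at /hd), the automount= option takes the mountpoint directly. A hedged variant (if you do this, the waitfor path and the /bin, /dll, and /lib links further down would have to change to match):

devb-eide blk cache=2m,automount=hd0t79:/hd,automount=cd0:/cd &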
Once this driver is started, we then need to wait for it to get access to the disk and perform the mount operations. This line does that:
waitfor /x86 10
This waits (for up to 10 seconds) for the pathname /x86 to show up in the pathname space. (We're assuming a formatted hard disk that contains a valid QNX filesystem with ${QNX_TARGET} copied to the root.)
Now that we have a complete filesystem with all the shipped executables installed, we run a few common executables, like the Pipe server.
Finally, the list of shared objects contains the .so files required for the drivers and the filesystem.
Here's an example of a buildfile that starts up an Ethernet driver, the TCP/IP stack, and the network filesystem:
[virtual=armle,elf +compress] .bootstrap = {
    startup-abc123 -vvv
    PATH=/proc/boot procnto
}
[+script] .script = {
    devc-ser8250 -e -b9600 0x1d0003f8,0x23 &
    reopen

    # Start the PCI server
    pci-abc123 &
    waitfor /dev/pci

    # Network drivers and filesystems
    io-pkt-v4 -dtulip-abc123 &
    waitfor /dev/socket
    ifconfig en0 10.0.0.1

    fs-nfs3 10.0.0.2:/armle/ / 10.0.0.2:/etc /etc &

    # Wait for a "bin" for the rest of the commands
    waitfor /usr/bin

    # Some common servers
    pipe &
    mqueue &
    devc-pty &

    [+session] sh &
}

# make /tmp point to the shared memory area
[type=link] /tmp=/dev/shmem

# Redirect console messages
[type=link] /dev/console=/dev/ser1

# Programs require the runtime linker (ldqnx.so) to be at
# a fixed location
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so

# We use the C shared lib (which also contains the runtime linker)
libc.so

# If someone needs floating point...
fpemu.so.2

# Include the network files so we can access files across the net
devn-tulip-abc123.so

# Include the socket library
libsocket.so

[data=copy]

# Include the network executables.
devc-ser8250
io-pkt-v4
fs-nfs3
For this release of Neutrino, you can't use the floating-point emulator (fpemu.so.2) in statically linked executables.
This buildfile is very similar to the previous one shown for the disk. The major difference is that instead of starting devb-eide to get a disk filesystem driver running, we started io-pkt-v4 to get the network drivers running. The -d option specifies the driver to load; here, -dtulip-abc123 loads devn-tulip-abc123.so, the driver for a DEC 21x4x (Tulip)-compatible Ethernet controller.
Once the network manager is running, we need to synchronize the script file interpretation to the availability of the drivers. That's what the waitfor /dev/socket is for — it waits for the network manager to initialize itself. The ifconfig en0 10.0.0.1 command then specifies the IP address of the interface.
The next thing started is the NFS filesystem module, fs-nfs3, with options telling it to mount filesystems from 10.0.0.2 in two different places: the remote's /armle directory is mounted as our /, and the remote's /etc is mounted as our /etc.
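The arguments come in pairs, with each pair naming a remote directory and the local mountpoint; a stripped-down sketch that mounts only the one directory would look like this:

fs-nfs3 10.0.0.2:/armle/ / &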
Since it may take some time to go over the network and establish the mounting, we see another waitfor, this time ensuring that the filesystem on the remote has been correctly mounted. (Here we're assuming that the remote's exported /armle directory is the armle tree from its ${QNX_TARGET}, and so contains usr/bin; since we've mounted that directory as our /, the waitfor is really waiting for the remote's armle/usr/bin to show up locally as /usr/bin.)
In this section, we'll look at what's different from the generic files listed above for each processor family. Since almost everything that's processor- and platform-specific in Neutrino is contained in the kernel and startup programs, there's very little change required to go from an x86 with standard BIOS to, for example, a PowerPC 800 evaluation board.
The first obvious difference is that you must specify the processor that the buildfile is for. This is actually a simple change — in the [virtual=…] line, substitute the x86 specification with armle, mipsbe, ppcbe, or shle.
For this CPU:        | Use this attribute:
---------------------|-------------------------
ARM (little-endian)  | [virtual=armle,binary]
MIPS (big-endian)    | [virtual=mipsbe,elf]
PPC (big-endian)     | [virtual=ppcbe,openbios]
SH-4 (little-endian) | [virtual=shle,srec]
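For example, here's a rough sketch of the earlier minimal buildfile's .bootstrap retargeted for an ARM board (startup-abc123 is a stand-in for your board's actual startup program, as in the network example above):

[virtual=armle,binary] .bootstrap = {
    startup-abc123
    PATH=/proc/boot procnto
}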
Another difference is that the startup program is tailored not only for the processor family, but also for the actual board the processor runs on. If you're not running an x86 with a standard BIOS, you should replace the startup-bios command with one of the many startup-* programs we supply.
To find out what startup programs we currently provide, please refer to the following sources:
The examples listed previously provide support for the 8250 family of serial chips. Some non-x86 platforms support the 8250 family as well, but others have their own serial port chips.
For details on our current serial drivers, see: