Working with Filesystems


Introduction

Neutrino provides a variety of filesystems, so you can easily access DOS, Linux, and native (QNX 4) disks. The Filesystems chapter of the System Architecture guide describes their classes and features.

Under Neutrino, a desktop system starts the appropriate block filesystems on booting; you start other filesystems as standalone managers. The default block filesystem is the QNX 4 filesystem.

Setting up, starting, and stopping a block filesystem

When you boot your machine, the system detects partitions on the block I/O devices and automatically starts the appropriate filesystem for each partition (see Controlling How Neutrino Starts).

You aren't likely ever to need to stop or restart a block filesystem; if you change any of the filesystem's options, you can use the -e or -u option to the mount command to update the filesystem.

If you need to change any of the options associated with the block I/O device, you can slay the appropriate devb-* driver (being careful not to pull the carpet from under your feet) and restart it, but you'll need to explicitly mount any of the filesystems on it.

To determine how much free space you have on a filesystem, use the df command. For more information, see the Utilities Reference.
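For example, to check the space on a specific filesystem (the device name here is just an illustration):

df /dev/hd0t79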

Some filesystems have the concept of being marked as “dirty.” This can be used to skip an intensive filesystem-check the next time it starts up. The QNX 4 and Ext2 filesystems have a flag bit; the DOS filesystem has some magic bits in the FAT. By default, when you mount a filesystem as read-write, that flag is set; when you cleanly unmount the filesystem, the flag is cleared. In between, the filesystem is dirty and may need to be checked (if it never gets cleanly unmounted). The Power-Safe filesystem has no such flag; it just rolls back to the last clean snapshot. You can use the blk marking=none option to turn off this marking; see the entry for io-blk.so in the Utilities Reference.
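For example, here's a sketch of starting a disk driver with dirty-bit marking turned off (devb-eide is just one of the devb-* drivers; substitute the one for your hardware):

devb-eide blk marking=none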

Mounting and unmounting filesystems

The following utilities work with filesystems:

mount
Mount a block-special device or remote filesystem.
umount
Unmount a device or filesystem.

For example, if fs-cifs is already running, you can mount filesystems on it like this:

mount -t cifs -o guest,none //SMB_SERVER:10.0.0.1:/QNX_BIN /bin

By default, filesystems are mounted as read-write if the physical media permit it. You can use the -r option for mount to mount the filesystem as read-only. The io-blk.so library also supports an ro option for mounting block I/O filesystems as read-only.

You can also use the -u option for the mount utility to temporarily change the way the filesystem is mounted. For example, if a filesystem is usually mounted as read-only, and you need to remount it as read-write, you can update the mounting by specifying -uw. To return to read-only mode, use -ur.
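For example, assuming a filesystem mounted at /fs/qnx4 (the path is an illustration):

mount -uw /fs/qnx4
mount -ur /fs/qnx4

The first command remounts the filesystem read-write; the second returns it to read-only.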

You should use umount to unmount a read-write filesystem before removing or ejecting removable media.

See the Utilities Reference for details on usage and syntax.

Image filesystem

Here, an image means an OS image: a file that contains the OS, your executables, and any data files that might be related to your programs, for use in an embedded system. You can think of the image as a small “filesystem” — it has a directory structure and some files in it.

The image contains a small directory structure that tells procnto the names and positions of the files contained within it; the image also contains the files themselves. When the embedded system is running, the image can be accessed just like any other read-only filesystem:

# cd /proc/boot
# ls
.script     cat         data1       data2       devc-ser8250
esh         ls          procnto
# cat data1
This is a data file, called data1, contained in the image.
Note that this is a convenient way of associating data
files with your programs.

The above example actually demonstrates two aspects of having the OS image function as a filesystem. When we issue the ls command, the OS loads ls from the image filesystem (pathname /proc/boot/ls). Then, when we issue the cat command, the OS loads cat from the image filesystem as well, and opens the file data1.

Configuring an OS image

You can create an OS image by using mkifs (MaKe Image FileSystem). For more information, see Building Embedded Systems, and mkifs in the Utilities Reference.
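For example, a minimal sketch (the buildfile and image names are illustrations):

mkifs myimage.build myimage.ifs

This creates the image file myimage.ifs from the instructions in the buildfile myimage.build.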

/dev/shmem RAM “filesystem”

Neutrino provides a simple RAM-based filesystem that allows read/write files to be placed under /dev/shmem. This filesystem isn't a true filesystem because it lacks features such as subdirectories. It also doesn't include . and .. entries for the current and parent directories.

The files in the /dev/shmem directory are advertised as “name-special” files (S_IFNAM), which fools many utilities — such as more — that expect regular files (S_IFREG). For this reason, many utilities might not work for the RAM filesystem.


Note: If you want to use gzip to compress or expand files in /dev/shmem, you need to specify the -f option.

This filesystem is mainly used by the shared memory system of procnto. In special situations (e.g. when no filesystem is available), you can use the RAM filesystem to store file data. There's nothing to stop a file from consuming all free RAM; if this happens, other processes might have problems.

You'll use the RAM filesystem mostly in tiny embedded systems where you need a small, fast, temporary-storage filesystem, but you don't need persistent storage across reboots.

The filesystem comes for free with procnto and doesn't require any setup or device driver. You can simply create files under /dev/shmem and grow them to any size (depending on RAM resources).
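For example (the filename is an illustration):

echo "scratch data" > /dev/shmem/scratch
ls -l /dev/shmem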

Although the RAM filesystem itself doesn't support hard or soft links or directories, you can create a link to it by using process-manager links. For example, you could create a link to a RAM-based /tmp directory:

ln -sP /dev/shmem /tmp

This tells procnto to create a process-manager link to /dev/shmem known as /tmp. Most application programs can then open files under /tmp as if it were a normal filesystem.


Note: In order to minimize the size of the RAM filesystem code inside the process manager, this filesystem specifically doesn't include “big filesystem” features such as file locking and directory creation.

QNX 4 filesystem

The QNX 4 filesystem — the default for Neutrino — uses the same on-disk structure as in the QNX 4 operating system. This filesystem is implemented by the fs-qnx4.so shared object and is automatically loaded by the devb-* drivers when mounting a QNX 4 filesystem.

You can create a QNX disk partition by using the fdisk and dinit utilities.
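For example, here's a sketch of the typical sequence (the device name is an illustration; 77-79 are the QNX partition types):

fdisk /dev/hd0
mount -e /dev/hd0
dinit -h /dev/hd0t79

The fdisk step creates a type-79 partition, mount -e reenumerates the partitions on the device, and dinit initializes a QNX 4 filesystem on the new partition.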

This filesystem implements a robust design, using an extent-based, bitmap allocation scheme with fingerprint control structures to safeguard against data loss and to provide easy recovery.

For information about the implementation of the QNX 4 filesystem, see QNX 4 disk structure in the Backing Up and Recovering Data chapter in this guide.

Extents

In the QNX 4 filesystem, regular files and directory files are stored as a sequence of extents, contiguous sequences of blocks on a disk. The directory entry for a file keeps track of the file's extents. If the filesystem needs more than one extent to hold a file, it uses a linked list of extent blocks to store information about the extents.

When a file needs more space, the filesystem tries to extend the file contiguously on the disk. If this isn't possible, the filesystem allocates a new extent, which may require allocating a new extent block as well. When it allocates or expands an extent, the filesystem may overallocate space, under the assumption that the process will continue to write and fill the extra space. When the file is closed, any extra space is returned.

This design ensures that when files — even several files at one time — are written, they're as contiguous as possible. Since most hard disk drives implement track caching, this not only ensures that files are read as quickly as possible from the disk hardware, but also serves to minimize the fragmentation of data on disk.

For more information about performance, see Fine-Tuning Your System.

Filenames

The original QNX 4 filesystem supported filenames no more than 48 characters long. This limit has now increased to 505 characters via a backwards-compatible extension that's enabled by default. The same on-disk format is retained; new systems see the longer name, but old ones see a truncated 48-character name.

Long filenames are supported by default when you create a QNX 4 filesystem; to disable them, specify the -N option to dinit. To add long filename support to an existing QNX 4 filesystem, log in as root and create an empty, read-only file named .longfilenames, owned by root, in the root directory of the filesystem:

cd root_dir
touch .longfilenames
chmod a=r .longfilenames
chown root:root .longfilenames

Note: After creating the .longfilenames file, you must restart the filesystem to enable long filenames.

You can determine the maximum filename length that a filesystem supports by using the getconf utility:

getconf _PC_NAME_MAX root_dir

where root_dir is the root directory of the filesystem.

You can't use the characters 0x00-0x1F, 0x7F, and 0xFF in filenames. In addition, / (0x2F) is the pathname separator, and can't be in a filename component. You can use spaces, but you have to “quote” them on the command line; you also have to quote any wildcard characters that the shell supports. For more information, see Quoting special characters in Using the Command Line.
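For example, to list a file whose name contains a space (the name is an illustration):

ls "My Notes"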

Links and inodes

File data is stored distinctly from its name and can be referenced by more than one name. Each filename, called a link, points to the actual data of the file itself. (There are actually two kinds of links: hard links, which we refer to simply as “links,” and symbolic links, which are described in the next section.)

In order to support links for each file, the filename is separated from the other information that describes a file. The non-filename information is kept in a storage table called an inode (for “information node”).

If a file has only one link (i.e. one filename), the inode information (i.e. the non-filename information) is stored in the directory entry for the file. If the file has more than one link, the inode is stored as a record in a special file named /.inodes — the file's directory entry points to the inode record.


One file referenced by two links.

Note that you can create a link to a file only if the file and the link are in the same filesystem.
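For example, a minimal sketch (the paths are illustrations, and both must be on the same filesystem):

ln /usr/fred/report /usr/barney/report

Afterward, ls -l shows a link count of 2 for the file.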

There are two other situations in which a file can have an entry in the /.inodes file:

Removing links

When a file is created, it's given a link count of one. As you add and remove links to the file, this link count is incremented and decremented. The disk space occupied by the file data isn't freed and marked as unused in the bitmap until its link count goes to zero and all programs using the file have closed it. This allows an open file to remain in use, even though it has been completely unlinked. This behavior is stipulated by POSIX and is common UNIX practice.

Directory links

Although you can't create hard links to directories, each directory has two hard-coded links already built in:

The filename “dot” refers to the current directory; “dot dot” refers to the previous (or parent) directory in the hierarchy.

Note that if there's no predecessor, “dot dot” also refers to the current directory. For example, the “dot dot” entry of / is simply / — you can't go further up the path.


Note: There's no POSIX requirement for a filesystem to include . or .. entries; some filesystems, including flash filesystems and /dev/shmem, don't.

Symbolic links

A symbolic link is a special file that usually has a pathname as its data. When the symbolic link is named in an I/O request—by open(), for example—the link portion of the pathname is replaced by the link's “data” and the path is reevaluated.

Symbolic links are a flexible means of pathname indirection and are often used to provide multiple paths to a single file. Unlike hard links, symbolic links can cross filesystems and can also link to directories.

In the following example, the directories /net/node1/usr/fred and /net/node2/usr/barney are linked even though they reside on different filesystems—they're even on different nodes (see the following diagram). You can't do this using hard links, but you can with a symbolic link, as follows:

ln -s /net/node2/usr/barney /net/node1/usr/fred

Note how the symbolic link and the target directory need not share the same name. In most cases, you use a symbolic link for linking one directory to another directory. However, you can also use symbolic links for files, as in this example:

ln -s /net/node1/usr/src/game.c /net/node1/usr/eric/src/sample.c

Two nodes using symbolic links.


Note: Removing a symbolic link deletes only the link, not the target.

Several functions operate directly on the symbolic link. For these functions, the replacement of the symbolic element of the pathname with its target is not performed. These functions include unlink() (which removes the symbolic link), lstat(), and readlink().

Since symbolic links can point to directories, incorrect configurations can result in problems, such as circular directory links. To recover from circular references, the system imposes a limit on the number of hops; this limit is defined as SYMLOOP_MAX in the <limits.h> include file.
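You can query this limit at runtime; for example (a sketch, assuming your getconf supports the standard SYMLOOP_MAX variable):

getconf SYMLOOP_MAX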

Filesystem robustness

The QNX 4 filesystem achieves high throughput without sacrificing reliability. This has been accomplished in several ways.

While most data is held in the buffer cache and written after only a short delay, critical filesystem data is written immediately. Updates to directories, inodes, extent blocks, and the bitmap are forced to disk to ensure that the filesystem structure on disk is never corrupt (i.e. the data on disk should never be internally inconsistent).

Sometimes all of the above structures must be updated. For example, if you move a file to a directory and the last extent of that directory is full, the directory must grow. In such cases, the order of operations has been carefully chosen such that if a catastrophic failure (e.g. a power failure) occurs when the operation is only partially completed, the filesystem, upon rebooting, would still be “intact.” At worst, some blocks may have been allocated, but not used. You can recover these for later use by running the chkfsys utility. For more information, see the Backing Up and Recovering Data chapter.

Power-Safe filesystem

The Power-Safe filesystem, supported by the fs-qnx6.so shared object, is a reliable disk filesystem that can withstand power failures without losing or corrupting data. It has many of the same features as the QNX 4 filesystem, as well as the following:

For information about the structure of this filesystem, see Power-Safe filesystem in the Filesystems chapter of the System Architecture guide.


Caution: If the drive doesn't support synchronizing, fs-qnx6.so can't guarantee that the filesystem is power-safe. Before using this filesystem on devices — such as USB/Flash devices — other than traditional rotating hard disk drive media, check to make sure that your device meets the filesystem's requirements. For more information, see Required properties of the device in the entry for fs-qnx6.so in the Utilities Reference.

To create a Power-Safe filesystem, use the mkqnx6fs utility. For example:

mkqnx6fs /dev/hd0t76

You can use the mkqnx6fs options to specify the logical blocksize, endian layout, number of logical blocks, and so on.
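For example, here's a sketch that formats with a 1 KB logical block size (check the mkqnx6fs entry in the Utilities Reference for the exact options on your release):

mkqnx6fs -b 1024 /dev/hd0t76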

Once you've formatted the filesystem, simply mount it. For example:

mount -t qnx6 /dev/hd0t76 /mnt/psfs

For more information about the options for the Power-Safe filesystem, see fs-qnx6.so in the Utilities Reference.

To check the filesystem for consistency (which you aren't likely to need to do), use chkqnx6fs.
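For example (using the same partition as above):

chkqnx6fs /dev/hd0t76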


Note: The chkfsys utility doesn't recognize the Power-Safe format, so it will claim that a Power-Safe filesystem is corrupt; use chkqnx6fs instead.

Booting

Boot support is currently limited to x86 PC partition-table-based systems (the same base system as existing booting) with a BIOS that supports INT13X (LBA).

The mkqnx6fs utility creates a .boot directory in the root of the new filesystem. This is always present, and always has an inode of 2 (the root directory itself is inode 1). The mkqnx6fs utility also installs a new secondary boot loader in the first 8 KB of the partition (and patches it with the location and offset of the filesystem).

The fs-qnx6.so filesystem protects this directory at runtime; in particular it can't be removed or renamed, nor can it exceed 4096 bytes (128 entries). Files placed into the .boot directory are assumed to be boot images created with mkifs. The name of the file should describe the boot image (e.g. 6.3.2SP3, 6.4.0_with_diskboot, or SafeMode_1CPU).

The directory can contain up to 126 entries. You can create other types of object in this directory (e.g. directories or symbolic links), but the boot loader ignores them. The boot loader also ignores regular files of certain sizes (e.g. empty, or larger than 2 GB), as well as those with names longer than 27 characters.

The filesystem implicitly suspends snapshots when a boot image is open for writing; this guarantees that the boot loader will never see a partially written image. You typically build the images elsewhere and then copy them into the directory, so they're open for only a brief time; however, this scheme also works if you send the output from mkifs directly to the final boot file.
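For example, a minimal sketch (the buildfile, image, and mountpoint names are illustrations):

mkifs mysystem.build /tmp/mysystem.ifs
cp /tmp/mysystem.ifs /fs/qnx6/.boot/6.4.1_test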


Note: To prevent this from being used in a denial-of-service (DoS) attack, the default permissions for the boot directory are root:root rwx------. You can change the permissions with chmod and chown, but beware that if you allow everyone to write in this directory, then anyone can install custom boot images or delete existing ones.

For information about booting from a Power-Safe filesystem, see the Controlling How Neutrino Starts chapter in this guide.

Snapshots

A snapshot is a committed stable view of a Power-Safe filesystem. Each mounted filesystem has one stable snapshot and one working view (in which copy-on-write modifications to the stable snapshot are being made).

Whenever a new snapshot is made, filesystem activity is suspended (to give a stable system), the bitmaps are updated, all dirty blocks are forced to disk, and the alternate filesystem superblock is written (with a higher sequence number). Then filesystem activity is resumed, and another working view is constructed on the old superblock. When a filesystem is remounted after an unclean power failure, it restores the last stable snapshot.

Snapshots are made:

- when you call fsync() on a file in the filesystem, or sync() to synchronize all filesystems
- periodically, on a timer
- when the filesystem is cleanly unmounted

You can disable snapshots on a filesystem at a global or local level. When disabled, a new superblock isn't written, and an attempt to make a snapshot fails with an errno of EAGAIN (or silently, for the sync() or timer cases). If snapshots are still disabled when the filesystem is unmounted (implicitly or at a power failure), any pending modifications are discarded (lost).

Snapshots are also permanently disabled automatically after an unrecoverable error that would result in an inconsistent filesystem. An example is running out of memory for caching bitmap modifications, or a disk I/O error while writing a snapshot. In this case, the filesystem is forced to be read-only, and the current and all future snapshot processing is omitted; the aim being to ensure that the last stable snapshot remains undisturbed and available for reloading at the next mount/startup (i.e. the filesystem always has a guaranteed stable state, even if slightly stale). This is only for certain serious error situations, and generally shouldn't happen.

Manually disabling snapshots can be used to encapsulate a higher-level sequence of operations that must either all succeed or none occur (e.g. should power be lost during this sequence). Possible applications include software updates or filesystem defragmentation.

To disable snapshots at the global level, clear the FS_FLAGS_COMMITTING flag on the filesystem, using the DCMD_FSYS_FILE_FLAGS command to devctl():

struct fs_fileflags  flags;
 
/* Set 'disable' to nonzero to disable snapshots, or to zero to reenable them. */
memset( &flags, 0, sizeof(struct fs_fileflags) );
flags.mask[FS_FLAGS_GENERIC] = FS_FLAGS_COMMITTING;
flags.bits[FS_FLAGS_GENERIC] = disable ? 0 : FS_FLAGS_COMMITTING;
devctl( fd, DCMD_FSYS_FILE_FLAGS, &flags,
        sizeof(struct fs_fileflags), NULL );

Note: This is a single flag for the entire filesystem, and can be set or cleared by any superuser client; thus applications must coordinate the use of this flag among themselves.

Alternatively, you can use the chattr utility (as a convenient front-end to the above devctl() command):

# chattr -snapshot /fs/qnx6
/fs/qnx6: -snapshot
...
# chattr +snapshot /fs/qnx6
/fs/qnx6: +snapshot

To disable snapshots at a local level, adjust the QNX6FS_SNAPSHOT_HOLD count on a per-file-descriptor basis, again using the DCMD_FSYS_FILE_FLAGS command to devctl(). Each open file has its own hold count, and the sum of all local hold counts is a global hold count that disables snapshots if nonzero. Thus if any client sets a hold count, snapshots are disabled until all clients clear their hold counts.

The hold count is a 32-bit value, and can be incremented more than once (and must be balanced by the appropriate number of decrements). If a file descriptor is closed, or the process terminates, then any local holds it contributed are automatically undone. The advantage of this scheme is that it requires no special coordination between clients; each can encapsulate its own sequence of atomic operations using its independent hold count:

struct fs_fileflags  flags;
 
/* Raise this file descriptor's hold count to suspend snapshots. */
memset( &flags, 0, sizeof(struct fs_fileflags) );
flags.mask[FS_FLAGS_FSYS] = QNX6FS_SNAPSHOT_HOLD;
flags.bits[FS_FLAGS_FSYS] = QNX6FS_SNAPSHOT_HOLD;
devctl( fd, DCMD_FSYS_FILE_FLAGS, &flags,
        sizeof(struct fs_fileflags), NULL );
...
/* Drop the hold count to allow snapshots again. */
memset( &flags, 0, sizeof(struct fs_fileflags) );
flags.mask[FS_FLAGS_FSYS] = QNX6FS_SNAPSHOT_HOLD;
flags.bits[FS_FLAGS_FSYS] = 0;
devctl( fd, DCMD_FSYS_FILE_FLAGS, &flags,
        sizeof(struct fs_fileflags), NULL );

In this case, chattr isn't particularly useful for manipulating the state, because the hold count is reset as soon as the utility terminates (its file descriptor is closed). However, it's convenient for reporting the current status of the filesystem, as it displays the global and local flags as separate states:

# chattr /fs/qnx6
/fs/qnx6: +snapshot +contiguous +used +hold

If +snapshot isn't displayed, then snapshots have been disabled via the global flag. If +hold is displayed, then snapshots have been disabled due to a global nonzero hold count (by an unspecified number of clients). If +dirty is permanently displayed (even after a sync()), then either snapshots have been disabled due to a potentially fatal error, or the disk hardware doesn't support full data synchronization (track cache flush).


Note: Enabling snapshots doesn't in itself cause a snapshot to be made; you should do this with an explicit fsync() if required. It's often a good idea to fsync() both before disabling and after enabling snapshots (the chattr utility does this).

DOS filesystem

The DOS filesystem provides transparent access to DOS disks, so you can treat DOS filesystems as though they were Neutrino (POSIX) filesystems. This transparency lets processes operate on DOS files without any special knowledge or work on their part.

The fs-dos.so shared object (see the Utilities Reference) lets you mount DOS filesystems (FAT12, FAT16, and FAT32) under Neutrino. This shared object is automatically loaded by the devb-* drivers when mounting a DOS FAT filesystem. If you want to read and write to a DOS floppy disk, mount it by typing something like this:

mount -t dos /dev/fd0 /fd

For information about valid characters for filenames in a DOS filesystem, see the Microsoft Developer Network at http://msdn.microsoft.com. FAT 8.3 names are the most limited: they can contain only uppercase letters, digits, and $%'-_@{}~#(). VFAT names relax this a bit, adding lowercase letters and [];,=+. Neutrino's DOS filesystem silently converts FAT 8.3 filenames to uppercase, to give the illusion that lowercase is allowed (but it doesn't preserve the case).

For more information on the DOS filesystem manager, see fs-dos.so in the Utilities Reference and Filesystems in the System Architecture guide.

CD-ROM filesystem

Neutrino's CD-ROM filesystem provides transparent access to CD-ROM media, so you can treat CD-ROM filesystems as though they were POSIX filesystems. This transparency lets processes operate on CD-ROM files without any special knowledge or work on their part.

The fs-cd.so shared object provides filesystem support for the ISO 9660 standard as well as a number of extensions, including Rock Ridge (RRIP), Joliet (Microsoft), and multisession (Kodak Photo CD, enhanced audio). This shared object is automatically loaded by the devb-* drivers when mounting an ISO-9660 filesystem.

The CD-ROM filesystem accepts any characters that it sees in a filename; since it's read-only, it's up to whatever prepares the CD image to impose appropriate restrictions. Strict adherence to ISO 9660 allows only 0-9, A-Z, and _, but Joliet and Rock Ridge are far more lenient.

For information about burning CDs, see Backing Up and Recovering Data.


Note: We've deprecated fs-cd.so in favor of fs-udf.so, which now supports ISO-9660 filesystems in addition to UDF. For information about UDF, see “Universal Disk Format (UDF) filesystem,” later in this chapter.

Linux Ext2 filesystem

The Ext2 filesystem in Neutrino provides transparent access to Linux disk partitions. Not all Ext2 features are supported, including the following:

The fs-ext2.so shared object provides filesystem support for Ext2. This shared object is automatically loaded by the devb-* drivers when mounting an Ext2 filesystem.


Caution: Although Ext2 is the main filesystem for Linux systems, we don't recommend that you use fs-ext2.so as a replacement for the QNX 4 filesystem. Currently, we don't support booting from Ext2 partitions. Also, the Ext2 filesystem relies heavily on its filesystem checker to maintain integrity; this and other support utilities (e.g. mke2fs) aren't currently available for Neutrino.

If an Ext2 filesystem isn't unmounted properly, a filesystem checker is usually responsible for cleaning up the next time the filesystem is mounted. Although the fs-ext2.so module is equipped to perform a quick test, it automatically mounts the filesystem as read-only if it detects any significant problems (which should be fixed using a filesystem checker).

This filesystem allows the same characters in a filename as the QNX 4 filesystem; see “Filenames,” earlier in this chapter.

Flash filesystems

The Neutrino flash filesystem drivers implement a POSIX-compatible filesystem on NOR flash memory devices. The flash filesystem drivers are standalone executables that contain both the flash filesystem code and the flash device code. There are versions of the flash filesystem driver for different embedded systems hardware as well as PCMCIA memory cards.


Note: Flash filesystems don't include . and .. entries for the current and parent directories.

The naming convention for the drivers is devf-system, where system describes the embedded system. For example, the devf-800fads driver is for the 800FADS PowerPC evaluation board. For information about these drivers, see the devf-* entries in the Utilities Reference.

For more information on the way Neutrino handles flash filesystems, see:

CIFS filesystem

CIFS, the Common Internet File System protocol, lets a client workstation perform transparent file access over a network to a Windows system or a UNIX system running an SMB server. It was formerly known as SMB or Server Message Block protocol, which was used to access resources in a controlled fashion over a LAN. File access calls from a client are converted to CIFS protocol requests and are sent to the server over the network. The server receives the request, performs the actual filesystem operation, and then sends a response back to the client. CIFS runs on top of TCP/IP and uses DNS.

The fs-cifs filesystem manager is a CIFS client operating over TCP/IP. To use it, you must have an SMB server and a valid login on that server. The fs-cifs utility is primarily intended for use as a client with Windows machines, although it also works with any SMB server, e.g. OS/2 Peer, LAN Manager, and SAMBA.

The fs-cifs filesystem manager requires a TCP/IP transport layer, such as the one provided by io-pkt*.

For information about passwords — and some examples — see fs-cifs in the Utilities Reference.

If you want to start a CIFS filesystem when you boot your system, put the appropriate command in /etc/host_cfg/$HOSTNAME/rc.d/rc.local or /etc/rc.d/rc.local. For more information, see the description of /etc/rc.d/rc.sysinit in Controlling How Neutrino Starts.
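For example, here's a sketch of a line you might add (the server, share, mountpoint, and credentials mirror the mount example earlier in this chapter, and are illustrations):

fs-cifs //SMB_SERVER:10.0.0.1:/QNX_BIN /bin guest none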

NFS filesystem

The Network File System (NFS) protocol is a TCP/IP application that supports networked filesystems. It provides transparent access to shared filesystems across networks.

NFS lets a client workstation operate on files that reside on a server across a variety of NFS-compliant operating systems. File access calls from a client are converted to NFS protocol (see RFC 1094 and RFC 1813) requests, and are sent to the server over the network. The server receives the request, performs the actual filesystem operation, and sends a response back to the client.

In essence, NFS lets you graft remote filesystems — or portions of them — onto your local namespace. Directories on the remote systems appear as part of your local filesystem, and all the utilities you use for listing and managing files (e.g. ls, cp, mv) operate on the remote files exactly as they do on your local files.

This filesystem allows the same characters in a filename as the QNX 4 filesystem; see “Filenames,” earlier in this chapter.

Setting up NFS

NFS consists of:


Note: The procedures used in Neutrino for setting up clients and servers may differ from those used in other implementations. To set up clients and servers on a non-Neutrino system, see the vendor's documentation and examine the initialization scripts to see how the various programs are started on that system.

It's actually the clients that do the work required to convert the generalized file access that servers provide into a file access method that's useful to applications and users.

If you want to start an NFS filesystem when you boot your system, put the appropriate command in /etc/host_cfg/$HOSTNAME/rc.d/rc.local or /etc/rc.d/rc.local. For more information, see the description of /etc/rc.d/rc.sysinit in Controlling How Neutrino Starts.

NFS server

An NFS server handles requests from NFS clients that want to access filesystems as NFS mountpoints. For the server to work, you need to start the following programs:

Name      Purpose
rpcbind   Remote procedure call (RPC) server
nfsd      NFS server and mountd daemon

The rpcbind server maps RPC program/version numbers into TCP and UDP port numbers. Clients can make RPC calls only if rpcbind is running on the server.

The nfsd daemon reads the /etc/exports file, which lists the filesystems that can be exported and optionally specifies which clients those filesystems can be exported to. If no client is specified, any requesting client is given access.

The nfsd daemon services both NFS mount requests and NFS requests, as specified by the exports file. Upon startup, nfsd reads the /etc/exports.hostname file (or, if this file doesn't exist, /etc/exports) to determine which mountpoints to service. Changes made to this file don't take effect until you restart nfsd.
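For example, here's a minimal /etc/exports sketch (the paths and client names are illustrations):

/home    node1 node2
/public

The first line exports /home only to node1 and node2; the second exports /public to any requesting client.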

NFS client

An NFS client requests that a filesystem exported from an NFS server be grafted onto its local namespace. For the client to work, you need to start version 2 or 3 of the NFS filesystem manager (fs-nfs2 or fs-nfs3) first. The file handle in version 2 is a fixed-size array of 32 bytes; in version 3, it's a variable-length array of up to 64 bytes.


Note: If possible, you should use fs-nfs3 instead of fs-nfs2.

The fs-nfs2 or fs-nfs3 filesystem manager is also the NFS 2 or NFS 3 client daemon operating over TCP/IP. To use it, you must have an NFS server and you must be running a TCP/IP transport layer such as that provided by io-pkt*. It also needs socket.so and libc.so.

You can create NFS mountpoints with the mount command by specifying nfs for the type; to use version 3, also specify the -o ver3 option. You must start fs-nfs2 or fs-nfs3 before creating mountpoints in this manner. If you start fs-nfs2 or fs-nfs3 without any arguments, it runs in the background so you can use mount.

To make the request, the client uses the mount utility, as in the following examples:
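(The server address and paths below are illustrations.)

mount -t nfs 10.1.0.22:/home /mnt/home
mount -t nfs -o ver3 10.1.0.22:/home /mnt/home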

In the first example, the client requests that the /home directory on an IP host be mounted onto the local namespace as /mnt/home. In the second example, NFS protocol version 3 is used for the network filesystem.

Here's another example of a command line that starts and mounts the client:

fs-nfs3  10.7.0.197:/home/bob  /homedir

Note: Although NFS 2 is older than POSIX, it was designed to emulate UNIX filesystem semantics and happens to be relatively close to POSIX.

Universal Disk Format (UDF) filesystem

The Universal Disk Format (UDF) filesystem provides access to recordable media, such as CD, CD-R, CD-RW, and DVD. It's used for DVD video, but can also be used for backups to CD, and so on.

The UDF filesystem is supported by the fs-udf.so shared object. The devb-* drivers automatically load fs-udf.so when mounting a UDF filesystem.
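If you need to mount one explicitly, here's a sketch (assuming the type name udf, and a drive that appears as /dev/cd0; both are illustrations):

mount -t udf /dev/cd0 /fs/dvd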

Apple Macintosh HFS and HFS Plus

The Apple Macintosh HFS (Hierarchical File System) and HFS Plus are the filesystems on Apple Macintosh systems.

The fs-mac.so shared object provides read-only access to HFS and HFS Plus disks on a QNX Neutrino system. The following variants are recognized: HFS, HFS Plus, HFS Plus in an HFS wrapper, HFSX, and HFS/ISO-9660 hybrid. It also recognizes HFSJ (HFS Plus with journal), but only when the journal is clean, not when it's dirty from an unclean shutdown. In a traditional PC partition table, type 175 is used for HFS.

The devb-* drivers automatically load fs-mac.so when mounting an HFS or HFS Plus filesystem.

Windows NT filesystem

The NT filesystem is used on Microsoft Windows NT and later. The fs-nt.so shared object provides read-only access to NTFS disks on a QNX Neutrino system.

The devb-* drivers automatically load fs-nt.so when mounting an NT filesystem.

If you want fs-nt.so to fabricate . and .. directory entries, specify the dots=on option. It doesn't fabricate these entries by default.
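For example, a sketch (assuming the type name nt, and an NTFS partition that appears as /dev/hd0t7; both are illustrations):

mount -t nt -o dots=on /dev/hd0t7 /fs/nt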

Inflator filesystem

Neutrino provides an inflator virtual filesystem. It's a resource manager that sits in front of other filesystems and decompresses files that were previously compressed by the deflate utility.

You typically use inflator when the underlying filesystem is a flash filesystem. Using it can almost double the effective size of the flash memory.

For more information, see the Utilities Reference.

Troubleshooting

Here are some problems that you might have with filesystems:

How can I make a specific flash partition read-only?
Unmount and remount the partition, like this:
flashctl -p raw_mountpoint -u
mount -t flash -r raw_mountpoint /mountpoint

where raw_mountpoint indicates the partition (e.g. /dev/fs0px).

How can I determine which drivers are currently running?
  1. Create a list of pathname mountpoints:
    find /proc/mount \
    -name '[-0-9]*,[-0-9]*,[-0-9]*,[-0-9]*,[-0-9]*' \
    -prune -print >mountpoints
      
  2. Show the drivers:
    cut -d, -f2 <mountpoints | sort | uniq | \
    xargs -i pidin -P{} -FanQ | \
    grep -v "pid name"
      
  3. Show the mountpoints for the specified process ID:
    grep pid mountpoints
      
  4. Show the date of the specified driver:
    use -i /drivername
      

This procedure (which approximates the functionality of the Windows XP driverquery command) shows the drivers (programs that have mountpoints in the pathname space) that are currently running; it doesn't show those that are merely installed.

