- absolute timer
- A timer with an expiration point defined as a fixed time,
for example, January 20, 2005 at 09:43:12 AM, EDT. Contrast with
relative timer.
- alignment
- The requirement that an N-byte data element be accessed only at an
address that is a multiple of N.
For example, to access a 4-byte integer, the address of the integer must
be a multiple of 4 bytes (e.g., 0x2304B008, and not
0x2304B009).
On some CPU architectures, an alignment fault will occur if an attempt is
made to perform a non-aligned access.
On other CPU architectures (e.g., x86) a non-aligned access is simply
slower than an aligned access.
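For illustration, here's a minimal (non-QNX-specific) C sketch of the usual portable workaround: instead of dereferencing a possibly misaligned pointer, copy the bytes with memcpy():

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[8] = { 1, 2, 3, 4, 5, 0, 0, 0 };

    /* buf + 1 is generally not a multiple of 4, so this would be a
       non-aligned access -- a fault on some CPUs, merely slow on others:
       uint32_t bad = *(uint32_t *)(buf + 1);                            */

    /* memcpy() lets the compiler generate whatever access pattern is
       safe for the target, regardless of alignment.                     */
    uint32_t value;
    memcpy(&value, buf + 1, sizeof(value));
    printf("value = %u\n", value);
    return 0;
}
```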
- asynchronous
- Used to indicate that a given operation is not synchronized to another
operation. For example, the timer tick interrupt that is generated by
the system's timer chip is said to be “asynchronous” to a
thread that's requesting a delay of a certain amount of time, because
the thread's request is not synchronized in any way to the arrival of
the incoming timer tick interrupt. Contrast with
synchronous.
- atomic (operation)
- An operation that is “indivisible,” that is to say, one that
will not get interrupted by any other operation.
Atomic operations are especially critical in interrupt service routines and
multi-threaded programs, because a “test and set” sequence of events
must often occur in one thread without the chance of another thread
interrupting it.
A sequence can be made atomic (from the perspective of multiple threads not
interfering with each other) through the use of mutexes,
or via
InterruptLock()
and
InterruptUnlock()
when used with interrupt service routines.
See the header file <atomic.h> as well.
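As a hedged sketch (assuming the atomic_add() helper that <atomic.h> declares), incrementing a counter shared between a thread and an ISR might look like this:

```c
#include <atomic.h>

volatile unsigned event_count;      /* shared between an ISR and a thread */

void count_event(void)
{
    /* The read-modify-write happens as one indivisible operation, so an
       interrupt or another thread can't sneak in between the "read" and
       the "write" -- unlike a plain event_count++.                       */
    atomic_add(&event_count, 1);
}
```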
- attribute (structure)
- A structure used within a resource manager that
contains information relating to the device that the resource manager is
manifesting in the pathname space.
If the resource manager is manifesting multiple devices in the pathname space
(for example, the serial port resource manager might manifest /dev/ser1
and /dev/ser2), there will be one attribute structure per device
within the resource manager.
Contrast with OCB.
- barrier (synchronization object)
- A thread-level synchronization object with an associated count. Threads that
call the blocking barrier call
(pthread_barrier_wait())
will block until
the number of threads specified by the count have all called the blocking
barrier call, and then they will all be released.
Contrast this with the operation of
semaphores.
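A minimal sketch of the rendezvous, using the standard pthread barrier calls:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_THREADS 3

static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    printf("thread %d: setup done, waiting at the barrier\n", (int)(intptr_t)arg);
    pthread_barrier_wait(&barrier);      /* blocks until all 3 have called it */
    printf("thread %d: released\n", (int)(intptr_t)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];

    pthread_barrier_init(&barrier, NULL, NUM_THREADS);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)(intptr_t)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}
```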
- blocking
- A means for threads to synchronize to other threads or events. In the
blocked state (of which there are about a dozen), a thread doesn't
consume any CPU — it's waiting on a list maintained within the
kernel. When the event occurs that the thread was waiting for, the thread
is unblocked and is able to consume CPU again.
- channel
- An abstract object on which a server
receives a message.
This is the same object to which a
client
creates a connection in order
to send a message to the server.
When the channel is created via
ChannelCreate(),
a “channel ID”
is returned. This channel ID (or “chid” for short) is what a
resource manager will advertise as part of its registered
mountpoint.
- client
- Neutrino's message-passing architecture is
structured around a client/server relationship.
The client is the one requesting services of
a particular server. The client generally accesses these services using
standard file-descriptor-based function calls (e.g.,
lseek()),
which are synchronous, in that the client's
call doesn't return until the request is completed by the server.
A thread can be both a client and a server
at the same time.
- condition variable
- A synchronization object used between multiple
threads,
characterized by acting as a rendezvous point where multiple threads can
block, waiting for a signal (not to be confused with
a UNIX-style signal). When the signal is
delivered, one or more of the threads will unblock.
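For example, a minimal producer/consumer rendezvous using the POSIX condition-variable calls (the data_ready flag here is just an illustrative condition):

```c
#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static int             data_ready;

void wait_for_data(void)                 /* consumer: block until signalled */
{
    pthread_mutex_lock(&mutex);
    while (!data_ready)                  /* always re-test the condition    */
        pthread_cond_wait(&cond, &mutex);
    data_ready = 0;
    pthread_mutex_unlock(&mutex);
}

void announce_data(void)                 /* producer: wake one waiter       */
{
    pthread_mutex_lock(&mutex);
    data_ready = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
}
```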
- connection
- The concept of a client being
attached to a channel.
A connection is established by the client either directly by calling
ConnectAttach()
or on behalf of the client by the client's C library function
open().
In either case, the connection ID returned is usable
as a handle for all communications between the client and the
server.
- connection ID
- A “handle” returned by ConnectAttach() (on the
client
side) and used for all communications between the client and the
server.
The connection ID is identical to the traditional C library's
“file descriptor.”
That is to say, when open() returns a file descriptor, it's really returning
a connection ID.
- deadlock
- A failure condition reached when two threads are mutually
blocked
on each other, with each thread waiting for the other to respond.
This condition can be generated quite easily; simply have two threads
send
each other a message — at this point, both threads are waiting for the
other thread to reply to the request.
Since each thread is blocked, it will not have a chance to reply, hence
deadlock.
To avoid deadlock, clients and
servers
should be structured around a send hierarchy (see below).
(Of course, deadlock can occur with more than two threads; A sends to B, B sends to C, and C
sends back to A, for example.)
- FIFO (scheduling)
- In FIFO scheduling, a thread will consume CPU until
a higher priority thread is ready to run, or until the thread voluntarily
gives up CPU.
If there are no higher priority threads, and the thread does not voluntarily
give up CPU, it will run forever.
Contrast with round robin scheduling.
- interrupt service routine
- Code that gets executed (in privileged mode) by the kernel as a result of a
hardware interrupt. This code cannot perform any kernel calls and should
return as soon as possible, since it runs at a priority level effectively higher
than any other thread priority in the system. Neutrino's interrupt
service routines can return a struct sigevent that indicates
what event, if any, should be triggered.
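A hedged sketch of the usual attach-and-wait pattern (the interrupt number is hypothetical, and the hardware-specific part is elided):

```c
#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define MY_IRQ  5                        /* hypothetical interrupt number */

static struct sigevent event;

/* Runs in privileged mode on every interrupt; no kernel calls allowed.
   Returning &event asks the kernel to deliver that event; returning NULL
   means "nothing further to do".                                        */
static const struct sigevent *isr(void *area, int id)
{
    /* ...touch the hardware just enough to quiet the interrupt source... */
    return &event;
}

void interrupt_thread(void)
{
    ThreadCtl(_NTO_TCTL_IO, 0);          /* obtain I/O privileges           */
    SIGEV_INTR_INIT(&event);             /* event = "unblock InterruptWait" */
    InterruptAttach(MY_IRQ, isr, NULL, 0, 0);

    for (;;) {
        InterruptWait(0, NULL);          /* blocks until the ISR returns &event */
        /* ...do the real, thread-level work here...                       */
    }
}
```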
- IOV (I/O Vector)
- A structure where each member contains a pointer and a length.
Generally used as an array of IOVs, rather than as a single IOV.
When used in the array form, this array of structures of pointers and lengths
defines a scatter/gather list, which allows the
message-passing operations to proceed much more
efficiently (than would otherwise be accomplished by copying data individually
so as to form one contiguous buffer).
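For instance, filling in a two-element IOV array with the SETIOV() convenience macro (the header layout here is made up for illustration):

```c
#include <sys/neutrino.h>

struct msg_hdr {                 /* hypothetical message header */
    int type;
    int nbytes;
};

void build_iov(iov_t iov[2], struct msg_hdr *hdr, void *data, int nbytes)
{
    /* Each element is just a { pointer, length } pair; together the two
       elements describe one logical message without copying anything
       into a single contiguous buffer.                                  */
    SETIOV(&iov[0], hdr, sizeof(*hdr));
    SETIOV(&iov[1], data, nbytes);
}
```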
- kernel callouts
- The Neutrino operating system can be customized to run on various hardware,
without requiring a source license, by supplying kernel callouts to the startup
program.
Kernel callouts let the developer supply code that knows how to deal with the
specifics of the hardware: for example, how to ask an interrupt controller
chip which interrupt fired, or how to program the timer chip
to arrange for periodic interrupts.
This is documented in great depth in the Building Embedded Systems
book.
- message-passing
- The Neutrino operating system is based on a message passing model, where all
services are provided in a synchronous manner by
passing messages around from client to
server.
The client will send a message to the server and
block. The server will receive a message from
the client, perform some amount of processing, and then reply
to the client's message, which will unblock
the client.
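A minimal send/receive/reply sketch (client and server shown as two threads of one process purely to keep the example self-contained; in a real design they'd usually be separate processes that find each other via a registered pathname):

```c
#include <sys/neutrino.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static int chid;                                 /* channel ID */

static void *server(void *arg)
{
    char msg[64], reply[64];

    for (;;) {
        /* Block until a client sends; rcvid identifies that client. */
        int rcvid = MsgReceive(chid, msg, sizeof(msg), NULL);
        snprintf(reply, sizeof(reply), "got \"%s\"", msg);
        MsgReply(rcvid, 0, reply, strlen(reply) + 1);   /* unblocks the client */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    char      reply[64];

    chid = ChannelCreate(0);                     /* server side */
    pthread_create(&tid, NULL, server, NULL);

    /* Client side: connect to the channel (node 0, pid 0 means "this process"). */
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    MsgSend(coid, "hello", sizeof("hello"), reply, sizeof(reply));  /* blocks until replied */
    printf("server replied: %s\n", reply);
    return 0;
}
```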
- MMU (Memory Management Unit)
- A piece of hardware (usually embedded within the CPU)
that provides for virtual address
to physical address translation, and can be
used to implement a virtual memory system.
Under Neutrino, the primary benefit of an MMU is the ability to detect when
a thread has accessed a virtual address that is
not mapped into the process's address space.
- mutex
- A Mutual Exclusion object used to serialize a number of
threads so that only one thread at a time has access to the resources defined
by the mutex. By using a mutex every time (for example) that you access
a given variable, you're ensuring that only one thread at a time has access
to that variable, preventing race conditions. See also
atomic (operation).
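For example, protecting a shared counter so that the increment can't be interleaved between threads:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter;

void bump_counter(void)
{
    pthread_mutex_lock(&lock);      /* only one thread at a time gets past here */
    shared_counter++;               /* the resource "defined by" the mutex      */
    pthread_mutex_unlock(&lock);
}
```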
- Neutrino
- Quoting from the Sudbury Neutrino Observatory web pages (found at
http://www.sno.phy.queensu.ca/):
Neutrinos are tiny, possibly massless, neutral elementary particles which interact with matter via the weak
nuclear force. The weakness of the weak force gives neutrinos the property that matter is almost transparent
to them. The sun, and all other stars, produce neutrinos copiously due to nuclear fusion and decay processes
within the core. Since they rarely interact, these neutrinos pass through the sun and the earth (and you)
unhindered. Other sources of neutrinos include exploding stars (supernovae), relic neutrinos (from the birth
of the universe) and nuclear power plants (in fact a lot of the fuel's energy is taken away by neutrinos). For
example, the sun produces over two hundred trillion trillion trillion neutrinos every second, and a supernova
blast can unleash 1000 times more neutrinos than our sun will produce in its 10-billion year lifetime. Billions
of neutrinos stream through your body every second, yet only one or two of the higher energy neutrinos will
scatter from you in your lifetime.
- OCB (open context block)
- A data structure used by a resource manager that contains
information for each client's open() call.
If a client has opened several files, there will exist a corresponding OCB for each
file descriptor that the client has in the respective resource managers.
Contrast with the attribute (structure).
- PDP-8
- An antique computer, “Programmable Data Processor,” manufactured between
1965 and the mid-1970s
by Digital Equipment Corporation (now Compaq) with the coolest front
panel. Also, the first computer I ever programmed.
Unfortunately, this wonderful 12-bit machine does not run Neutrino :-(!
- periodic timer
- See repeating timer.
- physical address
- An address that is emitted by the CPU onto the bus connected to the memory subsystem.
Since Neutrino runs in virtual address mode, this means
that an MMU must translate the virtual addresses used by
the threads into physical addresses usable by the
memory subsystem.
Contrast with virtual address
and virtual
memory.
- process
- A non-schedulable entity that occupies memory, effectively acting as a container
for one or more threads.
- pthreads
- Common name given to the set of function calls of the general form pthread_*().
The vast majority of these function calls are defined by the POSIX committee, and
are used with threads.
- pulse
- A non-blocking message which is received in a manner
similar to a regular message.
It is non-blocking for the sender, and can be waited upon by the receiver using
the standard message-passing functions
MsgReceive()
and
MsgReceivev()
or the special pulse-only receive function
MsgReceivePulse().
While most messages are typically sent from client to
server, pulses are generally sent in the opposite
direction, so as not to break the send
hierarchy
(breaking it could cause deadlock).
Contrast with signal.
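A hedged sketch of both ends (the pulse code is hypothetical; in practice the sender would already hold a connection ID back to the receiver's channel):

```c
#include <sys/neutrino.h>

#define CODE_DATA_READY  (_PULSE_CODE_MINAVAIL + 0)   /* hypothetical pulse code */

/* Sender: fire-and-forget -- MsgSendPulse() doesn't block waiting for a reply. */
void notify(int coid)
{
    MsgSendPulse(coid, 10 /* pulse priority */, CODE_DATA_READY, 0 /* value */);
}

/* Receiver: wait for pulses only. */
void pulse_loop(int chid)
{
    struct _pulse pulse;

    for (;;) {
        MsgReceivePulse(chid, &pulse, sizeof(pulse), NULL);
        /* pulse.code and pulse.value.sival_int say what happened. */
    }
}
```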
- QNX Software Systems
- The company responsible for the QNX 2, QNX 4, and Neutrino operating systems.
- QSS
- An abbreviation for QNX Software Systems.
- receive a message
- A thread can receive a message by calling MsgReceive() or MsgReceivev().
If there is no message available, the thread will block, waiting for one.
See message-passing.
A thread that receives a message is said to be a server.
- receive ID
- When a server receives a message
from a client, the server's MsgReceive() or
MsgReceivev() function returns a “receive ID” (often abbreviated
in code as rcvid). This rcvid then acts as a handle to the
blocked client, allowing the server
to reply
with the data back to the client, effectively unblocking the client.
Once the rcvid has been used in a reply operation, the rcvid ceases to have any
meaning for all function calls, except
MsgDeliverEvent().
- relative timer
- A timer that has an expiration point defined as an offset from
the current time, for example, 5 minutes from now. Contrast with
absolute timer.
- repeating timer
- An absolute or relative
timer that, once expired, will automatically reload with another relative
interval and will keep doing that until it is canceled. Useful for
receiving periodic notifications.
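As a plain-POSIX sketch of the three flavours: the it_value member gives the first expiration (relative here, or absolute if you pass TIMER_ABSTIME), and a non-zero it_interval makes the timer repeating. Using a signal for notification just keeps the sketch portable; under Neutrino a pulse is the more common choice.

```c
#include <signal.h>
#include <string.h>
#include <time.h>

void arm_timer(void)
{
    timer_t           id;
    struct sigevent   event;
    struct itimerspec spec;

    memset(&event, 0, sizeof(event));
    event.sigev_notify = SIGEV_SIGNAL;
    event.sigev_signo  = SIGUSR1;            /* deliver SIGUSR1 on expiry */
    timer_create(CLOCK_REALTIME, &event, &id);

    spec.it_value.tv_sec     = 5 * 60;       /* relative: 5 minutes from now      */
    spec.it_value.tv_nsec    = 0;
    spec.it_interval.tv_sec  = 1;            /* then repeat every second          */
    spec.it_interval.tv_nsec = 0;
    timer_settime(id, 0, &spec, NULL);       /* pass TIMER_ABSTIME for an         */
                                             /* absolute first expiration instead */
}
```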
- reply to a message
- A server will reply to a client's
message in order to deliver the results of the client's request back to the client.
- resource manager
- Also abbreviated “resmgr.”
This is a server process which provides certain
well-defined file-descriptor-based services to arbitrary clients.
A resource manager supports a limited set of messages, which correspond to standard
client C library functions such as open(), read(), write(),
lseek(), devctl(), etc.
- round robin (scheduling)
- In Round Robin (or “RR”) scheduling, a thread
will consume CPU until a higher priority thread is ready to run, until the thread voluntarily
gives up CPU, or until the thread's timeslice expires.
If there are no higher priority threads, the thread doesn't voluntarily
give up CPU, and there are no other threads at the same priority, it will run forever.
If all the above conditions are met except that a thread at the same priority is
ready to run, then this thread will give up CPU after its timeslice expires, and
the other thread will be given a chance to run.
Contrast with FIFO scheduling.
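For example, a hedged sketch of explicitly selecting the policy when creating a thread (SCHED_FIFO would select FIFO scheduling instead, and Neutrino also defines SCHED_SPORADIC; the priority value is arbitrary):

```c
#include <pthread.h>
#include <sched.h>

int start_rr_thread(void *(*func)(void *), void *arg, pthread_t *tid)
{
    pthread_attr_t     attr;
    struct sched_param param = { .sched_priority = 10 };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);       /* round robin */
    pthread_attr_setschedparam(&attr, &param);
    return pthread_create(tid, &attr, func, arg);
}
```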
- scatter/gather
- Used to define the operation of message
passing
where a number of different pieces of data are “gathered” by the
kernel (on either the client or
server
side) and then “scattered” into a (possibly) different number of
pieces of data on the other side.
This is extremely useful when, for example, a header needs to be prepended to
the client's data before it's sent to the server.
The client would set up an IOV which would contain a pointer and
length of the header as the first element, and a pointer and length of the data
as the second element. The kernel would then “gather” this
data as if
it were one contiguous piece and send it to the server.
The server would operate analogously.
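A hedged client-side sketch of exactly that header-prepend case, using MsgSendv() (the header layout and message type are made up for illustration):

```c
#include <sys/neutrino.h>

struct msg_hdr {                     /* hypothetical header */
    int type;
    int nbytes;
};

int send_with_header(int coid, void *data, int nbytes)
{
    struct msg_hdr hdr = { 1 /* hypothetical type */, nbytes };
    iov_t          siov[2], riov[1];
    char           reply[64];

    /* "Gather": the kernel sends the header followed by the data as if
       they were one contiguous message -- no copying on our part.       */
    SETIOV(&siov[0], &hdr, sizeof(hdr));
    SETIOV(&siov[1], data, nbytes);
    SETIOV(&riov[0], reply, sizeof(reply));

    return MsgSendv(coid, siov, 2, riov, 1);
}
```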
- semaphore
- A thread synchronization primitive characterized by having
a count associated with it. A thread that calls
sem_wait()
decrements the count, and doesn't block if the count was non-zero at the
time of the call.
If a thread calls sem_wait() when the count is zero, the thread will
block until some other thread calls
sem_post()
to increment the count.
Contrast with barrier.
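A minimal counting-semaphore sketch using the POSIX calls (the actual "item" bookkeeping is left out):

```c
#include <semaphore.h>

static sem_t items;                 /* counts how many items are available */

void setup(void)
{
    sem_init(&items, 0 /* not shared across processes */, 0 /* initial count */);
}

void producer_put(void)
{
    /* ...add an item to the queue... */
    sem_post(&items);               /* increment; wakes a blocked waiter if any */
}

void consumer_get(void)
{
    sem_wait(&items);               /* decrement; blocks only while the count is zero */
    /* ...remove an item from the queue... */
}
```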
- send a message
- A thread can send a message to another thread. The MsgSend*() series
of functions are used to send the message; the sending thread blocks until
the receiving thread replies to the message.
See message-passing.
A thread that sends a message is said to be a client.
- send hierarchy
- A design paradigm whereby messages sent flow in one
direction, and replies flow in the opposite direction.
The primary purpose of having a send hierarchy is to avoid deadlock.
A send hierarchy is accomplished by assigning clients
and servers a “level,” and ensuring that
messages that are being sent go only to a higher level.
This avoids the deadlock in which two threads send to each other, because
such an exchange would violate the send hierarchy: one of the threads should
not have sent to the other, as that other thread must have been at a lower level.
- server
- A server is a regular, user-level process that provides
certain types of functionality (usually file-descriptor-based) to clients.
Servers are typically Resource Managers, and there's an
extensive library provided by QSS which performs much
of the functionality of a resource manager for you.
The server's job is to receive messages from clients,
process them, and then reply to the messages, which
unblocks the clients.
A thread can be both a client and a server
at the same time.
- signal
- A mechanism dating back to early UNIX systems that is used to send asynchronous
notification of events from one thread to another.
Signals are non-blocking for the sender. The receiver of the signal may decide
to treat the signal in a synchronous manner by explicitly
waiting for it. Contrast with pulse.
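For instance, the "treat it synchronously" case might look like this: block the signal so no asynchronous handler runs, then explicitly wait for it (SIGUSR1 is just a convenient example):

```c
#include <signal.h>
#include <stdio.h>

void wait_for_usr1(void)
{
    sigset_t  set;
    siginfo_t info;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigprocmask(SIG_BLOCK, &set, NULL);   /* keep it from being delivered asynchronously */

    sigwaitinfo(&set, &info);             /* block here until SIGUSR1 arrives            */
    printf("got signal %d\n", info.si_signo);
}
```

(In a multi-threaded program you'd use pthread_sigmask() rather than sigprocmask().)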
- sporadic
- Scheduling algorithm whereby a thread's priority can
oscillate dynamically between a “foreground” or
normal priority and a “background” or low priority.
A thread is given an execution budget of time to be consumed
within a certain replenishment period.
See also
FIFO
and
round robin.
- synchronous
- Used to indicate that a given operation has some synchronization to another
operation. For example, during a message-passing operation,
when the server does a MsgReply() (to reply to
the client), the unblocking of the client is said to be
synchronous to the reply operation.
Contrast with asynchronous.
- thread
- A single, schedulable, flow of execution. Threads are implemented directly within
the Neutrino kernel and correspond to the POSIX pthread_*() function
calls. A thread will need to synchronize with other threads (if any) by using
various synchronization primitives such as mutexes,
condition variables, semaphores,
etc.
Threads are scheduled in
FIFO,
Round Robin,
or
sporadic
scheduling mode.
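And the simplest possible "flow of execution" example, using the pthread calls mentioned above:

```c
#include <pthread.h>
#include <stdio.h>

static void *flow_of_execution(void *arg)
{
    printf("hello from the second thread\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, flow_of_execution, NULL);   /* second thread starts here */
    pthread_join(tid, NULL);                               /* wait for it to finish     */
    return 0;
}
```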
- unblock
- A thread that had been blocked will be unblocked
when the condition it has been blocked on is met. For example, a thread
might be blocked waiting to receive a message.
When the message is sent, the thread will be
unblocked.
- virtual address
- An address that's not necessarily equivalent to a physical
address.
Under Neutrino, all threads operate in virtual addressing
mode, where, through the magic of an MMU, the virtual
addresses are translated into physical addresses.
Contrast with physical address
and virtual
memory.
- virtual memory
- A “virtual memory” system is one in which the virtual
address space may not necessarily map on a one-to-one basis with the
physical address space.
The typical example (which Neutrino doesn't support as of this writing) is a “paged”
system where, in the case of a lack of RAM, certain parts of a process's
address space may be swapped out to disk.
What Neutrino does support is the dynamic mapping of stack pages.