Before you can use your system, you need to configure it for the drivers, filesystems and applications it needs to run.
Exactly how you should configure your system depends on the type of system you're building. Below are some very general guidelines to help you understand how to use a mkifs buildfile to configure your system, including which executables should be used in what circumstances, and which shared libraries are required for these executables.
The general procedure to set up a system is as follows:
For changes you make to your mkifs buildfile to affect your system configuration, you must re-build your system. For instructions on how to build a new system image after you have modified the buildfile, see the chapter Working with QNX BSPs in this guide.
See the Sample Buildfiles appendix in this guide for some sample buildfiles. If you use a QNX product such as QNX CAR or QNX Platform for ADAS, see Working with Target Images for more information about how to use filesets, profiles, and product-specific configuration files.
One of the first things you should do in a buildfile is start a driver to which you can redirect standard input, output, and error. Starting a driver early in the startup sequence provides all subsequent drivers and applications with a known location to which they can output their startup messages, diagnostics, and error messages. In most cases, the output device you start will be a serial port driver (e.g., devc-ser8250).
If your system doesn't have a serial driver, you may choose to omit this step, or use some other type of device, for which you will need to write a specialized driver. If you don't specify a driver, by default, output goes to the debug output driver provided by the startup code.
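As an illustration, a buildfile script fragment along the following lines starts a serial driver and then redirects the script's standard input, output, and error to it. The device name /dev/ser1 and the baud rate are assumptions; adjust them for your hardware:

```
[+script] .script = {
    # Start the serial driver; -e selects edit mode, -b sets the baud rate
    devc-ser8250 -e -b115200 &
    # Redirect standard input, output, and error for everything that follows
    reopen /dev/ser1
    display_msg "Serial console is up"
    ...
}
```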
Your embedded system must run drivers and filesystems that will make the hardware accessible to external devices.
You can include these drivers and filesystems in the image you build before you transfer it to your target. QNX Neutrino supports numerous drivers and filesystems. These are documented in the Utilities Reference:
For most systems it is best to keep the OS image small. This means not putting all the executables and shared libraries into the OS image. Instead, do the following:
In other words:
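For example, a minimal buildfile script might start only the disk driver and then hand off to programs stored on disk. In this sketch, the automount option, the partition name hd0t179, and the path /hdd/etc/startup.sh are assumptions for illustration:

```
[+script] .script = {
    # Start the disk driver and mount the QNX 6 (Power-Safe) partition at /hdd
    devb-ahci blk automount=hd0t179:/hdd &
    # Wait until the mount appears, then run the rest of the startup from disk
    waitfor /hdd/bin 10
    /hdd/etc/startup.sh &
}
```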
When configuring your image to support a rotating or solid-state disk, you must first determine what hardware controls the disk interface. The QNX Neutrino RTOS supports a number of interfaces, including the EIDE controller. For details on the supported interface controllers, see the various devb-* entries in the Utilities Reference.
The only instruction you need to put in your buildfile is to start the driver for the appropriate hardware. For example, to start the AHCI SATA interface driver on an x86 board:
Detect all SATA controllers and list all the connected devices:
devb-ahci &
Detect all SATA controllers and use DMA-typed memory:
devb-ahci mem name=/ram/dma &
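In a buildfile script, you would typically follow the driver with a waitfor so that later commands don't run before the disk is ready. A sketch, assuming a Power-Safe partition (type 179) on the first disk; the device names are illustrative:

```
devb-ahci &
# Wait up to 10 seconds for the raw block device to appear
waitfor /dev/hd0 10
# Mount the first QNX 6 (Power-Safe) partition as the root of the filesystem
mount -t qnx6 /dev/hd0t179 /
```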
The driver will then dynamically load the required modules, in this order:
The CAM .so files are documented under cam-* in the Utilities Reference. Currently, QNX Neutrino supports CD-ROMs (cam-cdrom.so), hard disks (cam-disk.so), and optical disks (cam-optical.so).
The io-blk.so module is responsible for dealing with a disk on a block-by-block basis. It includes caching support.
The fs-* modules are responsible for providing the high-level knowledge about how a particular filesystem is structured. QNX Neutrino currently supports the following:
| Filesystem | Module |
| --- | --- |
| MS-DOS | fs-dos.so |
| Macintosh HFS and HFS Plus | fs-mac.so |
| Windows NT | fs-nt.so |
| Power-Safe | fs-qnx6.so |
| ISO-9660 CD-ROM, Universal Disk Format (UDF) | fs-udf.so |
Network services are started from the io-pkt* command, which is responsible for loading in the required .so files (see io-pkt-v4-hc, io-pkt-v6-hc in the Utilities Reference).
Two levels of .so files are started, based on the command-line options given to io-pkt*:
For more information, see the Core Networking Stack User's Guide, as well as the devnp-*, and io-pkt* entries in the Utilities Reference.
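For example, the lines below sketch a typical startup sequence, assuming the Intel e1000 driver (whose interfaces io-pkt names wm0, wm1, and so on) and a static address chosen purely for illustration:

```
# Start the stack and have it load devnp-e1000.so
io-pkt-v6-hc -d e1000 &
# Wait for the stack to register its pathname
waitfor /dev/socket
# Configure the interface with a static address
ifconfig wm0 10.0.0.2/24
```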
To dynamically load a network driver, use the mount command at the command line. For example:
mount -T io-pkt devnp-e1000.so
To run a flash filesystem, you need to select the appropriate flash driver for your target system. For details on the supported flash drivers, see the various devf-* entries in the Utilities Reference. Note that:
For instructions on how to build a flash filesystem image, see Building a flash filesystem image in the OS Images chapter. For specific instructions on starting flash filesystem drivers, see the BSP User's Guide for your board and QNX Neutrino release.
Before it attempts to load a network filesystem, your system must load the network drivers it will need (see Network drivers above). Be sure to set the load sequence appropriately in your startup configuration.
QNX Neutrino supports two types of network filesystems:
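For example, assuming the two are NFS (fs-nfs3) and CIFS (fs-cifs), you might mount remote filesystems with commands along these lines; the server names, IP address, share, mountpoints, and credentials are placeholders:

```
# Mount an NFS export from a remote server
fs-nfs3 server:/export /mnt/nfs &
# Mount a CIFS (SMB) share, supplying the server's IP address and credentials
fs-cifs //server:10.0.0.1:/share /mnt/cifs username password &
```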
Nothing special is required to run applications. Usually, you'll place them in the script file after all the drivers have started, or later in the flash filesystem image you generate with mkefs (see Building a flash filesystem image in the OS Images chapter). If you require a particular driver to be present and ready before you start an application, you would typically use the waitfor command in the script part of your buildfile (see Scripts in the OS Image Buildfiles chapter).
Here's an example. Before it starts, an application called peelmaster needs to wait for a driver (driver-spud) to be ready. The following sequence is typical:
driver-spud &
waitfor /dev/spud
peelmaster
This sequence causes the driver (driver-spud) to be run in the background (specified by the ampersand character). The expectation is that when the driver is ready, it will register the pathname /dev/spud. The waitfor command tries to access (stat()) the pathname /dev/spud periodically, blocking execution of the script until either the pathname appears, or a pre-determined timeout threshold has been reached.
Once the pathname appears in the pathname space, we assume that the driver is ready to accept requests. At this point, the waitfor unblocks, and the next program in the list (in our case, peelmaster) executes.
Without the waitfor command, the peelmaster program would run immediately after the driver was started, which could cause peelmaster to miss the /dev/spud pathname and fail.
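If the default timeout is too short for a slow device, you can pass an explicit number of seconds to waitfor. A variation on the sequence above, waiting up to 10 seconds:

```
driver-spud &
# Give the driver up to 10 seconds to register its pathname
waitfor /dev/spud 10
peelmaster
```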
For information about how to optimize boot times, or how to get specific components up quickly so that they can provide services (e.g., rear-view camera in a vehicle), see the Boot Optimization Guide.