When you're developing code, you almost always make use of a library, a collection of code modules that you or someone else has already developed (and hopefully debugged).
Under QNX Neutrino, we have three different ways of using libraries:

- static linking
- dynamic linking
- runtime loading
To support the two major kinds of linking described above, QNX Neutrino has two kinds of libraries: static and dynamic.
With a static library, the linker binds your modules with those from the library when it resolves your program. This binding operation literally copies the object modules from the library and incorporates them into your finished executable. The major advantage of this approach is that when the executable is created, it's entirely self-sufficient; it doesn't require any other object modules to be present on the target system. This advantage is usually outweighed by two principal disadvantages, however:

- Every executable created this way contains its own private copy of the library's object modules, so the executables are larger and take up more storage.
- To upgrade the library modules that a program uses, you must relink the program.
A dynamic library (a shared object), like a static library, contains the modules that you want to include in your program, but these modules are not bound to your program at link time. Instead, your program is linked in such a way that the Process Manager causes your program to be bound to the shared objects at load time.
The Process Manager performs this binding by looking at the program to see if it references any shared objects (.so files). If it does, then the Process Manager looks to see if those particular shared objects are already present in memory. If they're not, it loads them into memory. Then the Process Manager patches your program to be able to use the shared objects. Finally, the Process Manager starts your program.
Note that from your program's perspective, it isn't even aware that it's running with a shared object versus being statically linked—that happened before the first line of your program ran!
The main advantage of dynamic linking is that the programs in the system reference only a particular set of objects—they don't contain them. As a result, programs are smaller. This also means that you can upgrade the shared objects without relinking the programs. This is especially handy when you don't have access to the source code for some of the programs.
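To make the difference concrete, here's a minimal sketch. The library name (libmine), the function it provides (my_func()), and the qcc options shown in the comments are assumptions for illustration only; the exact linker options depend on your toolchain.

    /* hello.c - calls my_func() from a hypothetical "libmine" library.
     *
     * Static linking (assumed qcc options): the object code for my_func()
     * is copied into the executable, which then stands alone:
     *     qcc -o hello hello.c -Bstatic -lmine
     *
     * Dynamic linking (assumed qcc options): the executable merely
     * references libmine.so; the Process Manager binds them at load time:
     *     qcc -o hello hello.c -Bdynamic -lmine
     */
    #include <stdio.h>

    extern int my_func(int x);    /* provided by libmine */

    int main(void)
    {
        printf("my_func(42) returned %d\n", my_func(42));
        return 0;
    }

Note that the source is identical in both cases; only the link step changes.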
If you want to augment your program with additional code at runtime, you can call dlopen().
This call tells the system to find the shared object named in the dlopen() call and create a binding between your program and that shared object. Again, if the shared object isn't already in memory, the system loads it. The main advantage of this approach is that the program can determine, at runtime, which objects it needs access to.
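Here's a minimal sketch of runtime loading using the standard <dlfcn.h> interface; the shared object name (libmine.so) and the symbol it exports (my_func) are the same hypothetical ones used above.

    /* loader.c - bind to a shared object at runtime and call into it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *handle;
        int (*my_func)(int);

        /* Ask the system to find libmine.so and bind it to this process;
           if it isn't already in memory, it gets loaded now. */
        handle = dlopen("libmine.so", RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return EXIT_FAILURE;
        }

        /* Look up a symbol in the newly bound object. */
        my_func = (int (*)(int)) dlsym(handle, "my_func");
        if (my_func == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return EXIT_FAILURE;
        }

        printf("my_func(42) returned %d\n", my_func(42));

        /* Release our reference; the object can be unloaded once no one
           else is using it. */
        dlclose(handle);
        return EXIT_SUCCESS;
    }

Depending on the platform, you may need to link against an additional library (e.g., -ldl on some systems) to get the dlopen() family.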
Note that there's no real difference between a shared object that you link against and a shared object that you load at runtime: both are in exactly the same format. The only difference is in how they get used.
By convention, therefore, we place libraries that you link against (whether statically or dynamically) into the lib directory, and shared objects that you load at runtime into the lib/dll (for dynamically loaded libraries) directory.
Note that this is just a convention—there's nothing stopping you from linking against a shared object in the lib/dll directory or from using the dlopen() function call on a shared object in the lib directory.
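To make that point concrete, here's the source for the hypothetical libmine.so used in the sketches above; built once as a shared object, it can either be named at link time or handed to dlopen() at runtime. The build command in the comment is an assumption; consult your toolchain's documentation for the exact options.

    /* mine.c - source for the hypothetical libmine.so.
     *
     * A possible build step (assumed options; position-independent code
     * may need to be requested explicitly):
     *     qcc -shared -o libmine.so mine.c
     *
     * The resulting libmine.so can be named at link time (-lmine) or
     * loaded at runtime with dlopen("libmine.so", ...); the object
     * itself is the same either way.
     */
    int my_func(int x)
    {
        return x * 2;
    }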
The development tools have been designed to work out of their processor directories (x86, armle-v7, etc.). This means you can use the same toolset for any target platform.
If you have development libraries for a certain platform, then put them into the platform-specific library directory (e.g., /x86/lib), which is where the compiler and tools look.