One thing that you'll notice right away is that we extended the attributes (iofunc_attr_t) and the OCB (iofunc_ocb_t) structures:
typedef struct my_attr_s
{
    iofunc_attr_t   base;
    int             count;
} my_attr_t;

typedef struct my_ocb_s
{
    iofunc_ocb_t    base;
    unsigned char   *output;
    int             size;
} my_ocb_t;
It's a resource manager convention to place the standard structure as the first member of the extended structure. Thus, both members named base are the nonextended versions.
Recall that an instance of the attributes structure is created for each device. This means that for the device /dev/webcounter1.gif, there will be exactly one attributes structure. Our extension simply stores the current count value. We certainly could have placed that into a global variable, but that would be bad design. If it were a global variable, we would be prevented from manifesting multiple devices. Granted, the current example shows only one device, but it's good practice to make the architecture as flexible as possible, especially if it doesn't add undue complexity. Adding one field to an extended structure is certainly not a big issue.
Things get more interesting in the OCB extensions. The OCB structure is present on a per-open basis; if four clients have called open() and haven't closed their file descriptors yet, there will be four instances of the OCB structure.
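To make the per-open lifetime concrete, here's a minimal sketch of an io_open() handler that allocates the extended OCB and binds it to the client's connection. The call sequence (iofunc_open() followed by iofunc_ocb_attach()) is the standard one; the details of the real web counter's io_open(), including when the GIF stream actually gets generated, are elided here.

#include <errno.h>
#include <stdlib.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

/* Sketch only: allocate a my_ocb_t per open() and bind it to the client.
   Assumes the my_attr_t / my_ocb_t types shown above.                   */
static int
io_open (resmgr_context_t *ctp, io_open_t *msg, RESMGR_HANDLE_T *handle,
         void *extra)
{
    my_attr_t   *attr = (my_attr_t *) handle;
    my_ocb_t    *ocb;
    int         sts;

    /* standard permission and sharing checks against the attributes */
    if ((sts = iofunc_open (ctp, msg, &attr->base, NULL, NULL)) != EOK) {
        return (sts);
    }

    /* one extended OCB per successful open() */
    if ((ocb = calloc (1, sizeof (*ocb))) == NULL) {
        return (ENOMEM);
    }
    ocb->output = NULL;     /* this client's GIF stream, filled in later */
    ocb->size   = 0;

    /* bind our OCB (via its embedded base) to the client's connection */
    if ((sts = iofunc_ocb_attach (ctp, msg, &ocb->base, &attr->base,
                                  NULL)) != EOK) {
        free (ocb);
        return (sts);
    }

    return (EOK);
}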
This brings us to the first design issue. The initial design had an interesting bug. Originally, I reasoned that since the QNX Neutrino resource manager library allows only single-threaded access to the resource (our /dev/webcounter.gif), it would be safe to place the GIF context (the output member) and the size of the resource (the size member) into global variables. This single-threaded behavior is standard for resource managers, because the QNX Neutrino library locks the attributes structure before performing any I/O function callouts. Thus, I felt confident that there would be no problems.
In fact, during initial testing, there were no problems—only because I tested the resource manager with a single client at a time. I felt that this was an insufficient test, and ran it with multiple simultaneous clients:
# aview -df1 /dev/webcounter.gif &
# aview -df1 /dev/webcounter.gif &
# aview -df1 /dev/webcounter.gif &
# aview -df1 /dev/webcounter.gif &
and that's when the first bug showed up. Obviously, the GIF context members needed to be stored on a per-client basis, because each client would be requesting a different version of the number (one client would be at 712, the next would be at 713, and so on). This was readily fixed by extending the OCB and adding the GIF context member output. (I really should have done this initially, but for some reason it didn't occur to me at the time.)
At this point, I thought all the problems were fixed; multiple clients would show the numbers happily incrementing, and each client would show a different number, just as you would expect.
Then, the second interesting bug hit. Occasionally, I'd see that some of the aview clients would show only part of the image; the bottom part of the image would be cut off. This glitch would clear itself up on the next refresh of the display, so I was initially at a loss to explain where the problem was. I even suspected aview, although it had worked flawlessly in the past.
The problem turned out to be subtle. Inside of the base attributes structure is a member called nbytes, which indicates the size of the resource. The size of the resource is the number of bytes that would be reported by ls -l—that is, the size of the file /dev/webcounter1.gif.
When you do an ls -l of the web counter, the size reported by ls is fetched from the attributes structure via ls's stat() call. You'd expect that the size of a resource wouldn't change, but it does! Since the GIF compression algorithm squeezes out redundancy in the source bitmap image, it will generate different sizes of output for each image that's presented to it. Because of the way that the io_read() callout works, it requires the nbytes member to be accurate. That's how io_read() determines that the client has reached the end of the resource.
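To see why an accurate nbytes matters, consider the calculation a typical io_read() handler performs before replying. The helper below is purely illustrative (bytes_to_return() is not a library call, just a name for this sketch); it shows how the resource size and the client's offset combine to produce an end-of-file indication:

/* Illustrative helper (not part of the library or the web counter code):
   how much data a read should return, given the resource size recorded
   in attr->nbytes and how far this client has already read.            */
static int
bytes_to_return (iofunc_ocb_t *ocb, io_read_t *msg)
{
    int nleft  = ocb->attr->nbytes - ocb->offset;   /* bytes past the offset  */
    int nbytes = msg->i.nbytes;                     /* bytes the client wants */

    if (nleft < 0) {
        nleft = 0;          /* offset is already past the end of the resource */
    }
    if (nbytes > nleft) {
        nbytes = nleft;     /* clamp to the size of the resource */
    }
    return (nbytes);        /* zero means the client has hit end-of-file */
}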
The first client would open() the resource, and begin reading. This caused the GIF compression algorithm to generate a compressed output stream. Since I needed the size of the resource to match the number of bytes that are returned, I wrote the number of bytes output by the GIF compression algorithm into the attributes structure's nbytes member, thinking that was the correct place to put it.
But consider a second client, preempting the first client, and generating a new compressed GIF stream. The size of the second stream was placed into the attributes structure's nbytes member, resulting in a (potentially) different size for the resource! This meant that when the first client resumed reading, its offset was now being compared against an nbytes value that no longer matched the size of the stream it was actually processing! So, if the second client generated a shorter compressed data stream, the first client would be tricked into thinking that its data stream was shorter than it really was. The net result was the second bug: only part of the image would appear, with the bottom cut off, because the first client hit end-of-file prematurely.
The solution, therefore, was to store the size of the resource on a per-client basis in the OCB, and force the size (the nbytes member of the attributes structure) to be the one appropriate to the current client. This is a bit of a kludge, in that we have a resource that has a varying size depending on who's looking at it. You certainly wouldn't run into this problem with a traditional file—all clients using the file get whatever happens to be in the file when they do their read(). If the file gets shorter during the time that they are doing their read(), it's only natural to return the shorter size to the client.
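In code, the fix amounts to one assignment at the top of the io_read() handler: before doing the usual size arithmetic, force the attribute's nbytes to this client's size, so end-of-file is computed against the stream this client is actually reading. A sketch, reusing the types from above and the bytes_to_return() helper sketched earlier (the real handler, including where ocb->output gets generated, will differ in its details):

static int
io_read (resmgr_context_t *ctp, io_read_t *msg, RESMGR_OCB_T *base_ocb)
{
    my_ocb_t    *ocb  = (my_ocb_t *) base_ocb;
    my_attr_t   *attr = (my_attr_t *) ocb->base.attr;
    int         sts, nbytes;

    if ((sts = iofunc_read_verify (ctp, msg, &ocb->base, NULL)) != EOK) {
        return (sts);
    }

    /* THE FIX: the resource's size is whatever *this* client's compressed
       GIF stream happens to be, not whatever the last client left behind.
       (Assumes ocb->output / ocb->size have already been filled in.)      */
    attr->base.nbytes = ocb->size;

    nbytes = bytes_to_return (&ocb->base, msg);
    if (nbytes > 0) {
        /* reply out of this client's private stream and advance its offset */
        SETIOV (ctp->iov, ocb->output + ocb->base.offset, nbytes);
        _IO_SET_READ_NBYTES (ctp, nbytes);
        ocb->base.offset += nbytes;
        return (_RESMGR_NPARTS (1));
    }

    /* nothing left in this client's stream: report end-of-file */
    _IO_SET_READ_NBYTES (ctp, 0);
    return (_RESMGR_NPARTS (0));
}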
Effectively, what the web counter resource manager does is maintain virtual client-sensitive devices (mapped onto the same name), with each client getting a slightly different view of the contents.
Think about it this way. If we had a standalone program that generated GIF images, and we piped the output of that program to a file every time a client came along and opened the image, then multiple concurrent clients would get inconsistent views of the file contents. They'd have to resort to some kind of locking or serialization method. Instead, by storing the actual generated image, and its size, in the per-client data area (the OCB), we've eliminated this problem by taking a snapshot of the data that's relevant to the client, without the possibility of having another client upset this snapshot.
That's why we need the io_close_ocb() handler—to release the per-client context blocks that we generated in the io_open().
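A sketch of what that handler looks like, under the same assumptions as the earlier fragments: free the per-client stream, then let the default handler detach and release the OCB itself.

static int
io_close_ocb (resmgr_context_t *ctp, void *reserved, RESMGR_OCB_T *base_ocb)
{
    my_ocb_t *ocb = (my_ocb_t *) base_ocb;

    /* release this client's compressed GIF stream, if one was generated */
    free (ocb->output);
    ocb->output = NULL;

    /* the default handler detaches the OCB and frees the OCB itself */
    return (iofunc_close_ocb_default (ctp, reserved, &ocb->base));
}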
So what size do we give the resource when no clients are actively changing it? Since there's no simple way of pre-computing the size of the generated GIF image (short of running the compressor, which is a mildly expensive operation), I simply give it the size of the uncompressed bitmap buffer (in the io_open()).
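Concretely, that's a single assignment inside the io_open() sketch shown earlier; BITMAP_SIZE is a made-up stand-in for however the real code knows the size of its uncompressed bitmap buffer:

/* in io_open(): until this client actually triggers a compression run,
   advertise the uncompressed bitmap's size as the size of the resource */
attr->base.nbytes = BITMAP_SIZE;    /* 'BITMAP_SIZE' is hypothetical */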
Now that I've given you some background into why the code was designed the way it was, let's look at the resource manager portion of the code.