This appendix includes details about io-pkt that you might need if you plan to write your own network driver:
As described in the Overview chapter, io-pkt is a multithreaded process. It uses at least the following thread types:
Because the stack context is single-threaded, you must never block it. If you do, everything it performs (such as the resource manager) is blocked until it's released. If, during testing of your driver, the ifconfig utility becomes blocked on io-pkt and doesn't terminate with the expected output, there's a good chance that your driver has blocked the stack context.
While single-threaded, the stack context can manage blocking operations via pseudo-threading: a stack is maintained per pseudo-thread, and if a pseudo-thread is going to block, it's put to sleep and woken when the required condition is met. Only pseudo-threads within the stack context can yield execution to each other. You can't use the sleep and wake routines outside of the stack context; this includes functions that call these routines. If you use these functions outside the stack context, io-pkt can become unstable or fault. For more information on blocking and interacting with the stack context, see the Stack context section below.
This is the thread created at io-pkt process startup to initialize io-pkt. It's generally idle after io-pkt is initialized and its worker POSIX threads are started. It will never be the stack context. While generally idle, it can be leveraged for blocking operations in network drivers if needed (see blockop in the later sections).
These are threads created by io-pkt to service interrupts. As discussed in the Threading Model section of the Network Architecture guide, one thread is created per CPU. You can use an io-pkt option to create more or fewer threads to handle unusual conditions, but the optimal configuration is one POSIX thread per CPU.
These threads use the default io-pkt naming for any io-pkt-managed thread, so the name alone doesn't positively identify one of them (see User-created io-pkt managed thread below). The io-pkt worker threads also execute the stack context code, and are the only threads that can; only one of these threads can execute this code at a time. The stack context may also migrate between the io-pkt worker threads, depending on the circumstances.
User-created threads include the following:
These are POSIX threads that were created by a dynamically loaded library (driver or other) or a thread created by an internal io-pkt service. These threads are created and managed by an io-pkt-specific POSIX thread API and can't execute the stack context code.
An io-pkt internal service example is the PPP read thread (identified as such in the pidin threads output). These threads are typically created to handle blocking operations (such as a blocking read()) in the PPP case. This keeps the stack context from becoming blocked.
User-created io-pkt-managed threads should always be assigned a thread name to make them easy to identify when debugging. If they aren't named, they can be hard to distinguish from one another, as well as from the io-pkt worker threads, because by default they all use the same naming convention. While these threads can't perform operations that manipulate the pseudo-threads of the stack context, they can allocate and free mbufs, clusters, and other memory objects via io-pkt's memory management. They can't, however, perform memory allocation as an M_WAITOK operation (they must always use M_NOWAIT), because M_WAITOK would engage the pseudo-thread code in the stack context.
These are threads created by the user with the default libc POSIX thread API. We don't recommend creating threads in this manner, because they have no io-pkt context associated with them. This means that they can't allocate memory using io-pkt's memory management and can't be managed via io-pkt's thread-synchronization mechanisms.
Typically, such threads exist because you're integrating third-party code or library functions that create threads for specific tasks (for example, the USB insertion and removal event thread created via libusbdi).
If a thread is created using this API, your code should abstract it from the io-pkt API functions so that this thread never calls them. For example, if an mbuf or cluster memory buffer needs to be created and managed, this thread could modify the data in the buffer, but it couldn't allocate or free the buffer. Thread management (starting, terminating, and synchronizing) must also be done by user code, because io-pkt isn't aware of this thread. As with the io-pkt-managed threads, these threads should be named.
When coding a new io-pkt driver or porting existing driver code, you will want to consider how best to integrate it with io-pkt. The io-pkt program is optimized in such a manner that the preferred driver architecture doesn't require the creation of any driver-specific POSIX threads. This is to minimize thread switching in high-bandwidth situations (including forwarding between interfaces). Most io-pkt driver callback functions can potentially be called from the stack context. If this is the case, any time you spend in your driver is time that is potentially blocking other network operations from occurring.
Whether you need to create a thread or take other special steps will probably depend on a few considerations:
If so, you may have to consider some of the advanced topics described later in this appendix. If not, then it's likely you should be able to integrate your driver as close to the optimized architecture as possible without using additional threads.
Examples of blocking include:
A typical scenario is a read operation to the USB stack (io-usb-otg) by a USB network driver, or a read operation to a serial port or other character-based interface (for example, the io-pkt PPP code). In these cases, we don't know when data will arrive to be received, so the operation could block indefinitely. Sending a message to another process is safe only if you expect an immediate response.
One example of hardware that manages additional functions is hardware that services a multipurpose bus. Ethernet frames may be just one type of data passed on this bus, encapsulated within bus-specific framing, although network data is the primary traffic carried.
You may want to create a resource manager within io-pkt to allow other types of data to be passed on the bus along with the TCP/IP traffic. In this case, we're prioritizing the TCP/IP traffic over the other frame types. An architectural alternative is a dedicated process that manages the bus, with the io-pkt driver using message-passing to communicate with that bus manager. This approach is better suited to system-wide sharing of the bus.
An example of complicated hardware integration could be an interface with limited or no support of optimizations such as DMA and descriptor ring support. It may require multiple operations to obtain packet data where each suboperation requires its own interrupt, or multiple status requests are required. This can be time-consuming and complicated to integrate. A thread dedicated to managing HW RX and potentially TX may be needed.
See the Writing Network Drivers for io-pkt appendix and the accompanying sample driver, sam.c.
One of the items often overlooked in io-pkt drivers is restarting transmission if some kind of resource conflict/exhaustion occurred or the link state is down. When io-pkt calls the driver if_start() callback function, it expects the TX queue to be drained. If it isn't, it will not call this callback function again unless there's a new packet added to the output queue. Also if the link state is down, the TX queue can fill up with packets to be sent. When the link state is restored, io-pkt will wait until the next packet transmission to call the if_start() callback, so that the packets in the send queue are transmitted.
Managing this behavior is often overlooked. At runtime, the symptom can be misinterpreted as a lost or dropped frame that was later retransmitted, when in fact another packet was simply sent shortly afterward, causing the if_start() callback to be executed again.
Differences between the interface Up/Down state and the Link up/down state
The first place to start is the difference between the interface up and down state vs the link up and down state. The interface up and down state is reflected in the interface flags (IFF_UP and IFF_DOWN), which can be viewed by ifconfig. The link up and down state is reflected in the media flags and can be viewed by ifconfig under the media: heading, and can also be viewed with nicinfo under the heading Link is down|up. These states are managed independently of each other, and one can be up while the other is down and vice versa.
The interface state is set via ifconfig; it defaults to down until the interface is explicitly configured up or an IP address is assigned to it. It's considered an advisory state, as it reflects whether the user has set the interface up or down, regardless of the link's state. If the interface state is marked down, TX packets are dropped (their memory is freed) without being queued, and the application can receive the error ENETDOWN. Likewise, RX packets are dropped by ether_input() (which is called by the driver on RX).
The link state is set by the driver itself based on the status of the physical link. If the link state is down, no RX packets will arrive, but on TX, the behavior is driver-specific. The MII code may update the status to io-pkt as displayed by the routing socket, ifconfig, and nicinfo, but otherwise io-pkt takes no specific action. On TX, (provided that the interface state is up) the packet will be added to the interface send queue (if it isn't already full), your driver's if_start() function will be called, and what occurs with respect to the send queue will be driver-specific.
Managing the TX queue
As we saw in the driver sample above, on TX the if_start() driver callback obtains packets to transmit from the ifp->if_snd queue. Packets are added to the send queue regardless of link state or other HW resource issues. One of the first things done in if_start() is to set the interface flag IFF_OACTIVE. This flag defines whether the driver is actively attempting to transmit data. This is a driver-level flag and isn't limited to the context of the if_start() callback function itself. If this flag is set, io-pkt will not attempt to call the if_start() callback again.
The significance of this is what occurs if there aren't enough resources to TX the packet, or if the link state is down. What should be done?
If nothing is done (the driver clears IFF_OACTIVE and if_start() returns), the packets remain on the send queue, and if_start() won't be called again until there's another packet to be sent, at which point everything is evaluated as before. If the link remains down, the send queue can fill, and applications could start getting ENOBUFS errors; alternatively, the driver may exhaust its TX descriptors first. It all depends on how the driver was coded. You can also get into this state while the link is up, simply because the hardware couldn't transmit the packets quickly enough and exhausted the TX descriptors. We probably want the driver to resume transmission when the hardware or descriptor ring is ready, rather than wait until io-pkt has another packet to add to the send queue.
What needs to be decided is what to do if packets can't be transmitted: whether to leave the packets in the buffer, for how long, and how often should the driver attempt to send them. These parameters are specific to the driver implementation, but here is how they can be applied.
A timer can be enabled with a callback function to execute the if_start() callback. So for example, if the hardware isn't ready:
static void
sam_kick_tx (void *arg)
{
    sam_dev_t *sam = arg;

    NW_SIGLOCK(&sam->ecom.ec_if.if_snd_ex, sam->iopkt);
    sam_start(&sam->ecom.ec_if);
}

...

void
sam_start(struct ifnet *ifp)
{
    .....
    if (callout_pending(&sam->tx_callout))
        callout_stop(&sam->tx_callout);

    ifp->if_flags_tx |= IFF_OACTIVE; /* Actively sending data on interface */
    .....
    if (detected_issue) {
        /* Resources aren't ready or something else is wrong. */
        /* Set a callback to try again later. The actual timeout value
           can be configurable or vary based on the implementation. */
        callout_msec(&sam->tx_callout, 2, sam_kick_tx, sam);

        /* Leave IFF_OACTIVE set so the stack doesn't call us again. */
        NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
        return;
    }
    ...
    /* Successful execution of sam_start() */
    ifp->if_flags_tx &= ~IFF_OACTIVE;
    NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
    return;
}
You can also make a similar call when the link is detected up in your MII code. In this case, you may perform some queries to determine if there is data to be sent; you may want to check both the transmit descriptor list and the interface send queue:
...
sam->cfg.flags &= ~NIC_FLAG_LINK_DOWN;
if_link_state_change(ifp, LINK_STATE_UP);

if (data_in_tx_desc || !IFQ_IS_EMPTY(&ifp->if_snd)) {
    /* There is some data to send. The timer isn't needed;
       call the if_start() callback directly. */
    if (callout_pending(&sam->tx_callout))
        callout_stop(&sam->tx_callout);

    NW_SIGLOCK(&ifp->if_snd_ex, sam->iopkt);
    sam_start(ifp);
}
...
If you set this timer, stop it when an ifconfig interface_name down occurs, or whenever else the if_stop() driver callback function is executed. To do so, the following can be called early in if_stop():
static void
sam_stop(struct ifnet *ifp, int disable)
{
    ...
    /* Lock out the transmit side. */
    NW_SIGLOCK(&ifp->if_snd_ex, sam->iopkt);

    if (callout_pending(&sam->tx_callout)) {
        callout_stop(&sam->tx_callout);
        /* We aren't in if_start(), because it stops the callout. */
        ifp->if_flags_tx &= ~IFF_OACTIVE;
    }

    for (i = 0; i < 10; i++) {
        if ((ifp->if_flags_tx & IFF_OACTIVE) == 0)
            break;
        NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
        delay(50);
        NW_SIGLOCK(&ifp->if_snd_ex, sam->iopkt);
    }

    if (i < 10) {
        ifp->if_flags_tx &= ~IFF_RUNNING;
        NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
    } else {
        /* Heavy load or bad luck. Try the big gun. */
        quiesce_all();
        ifp->if_flags_tx &= ~IFF_RUNNING;
        unquiesce_all();
    }
    ...
    /* Mark the interface as down and cancel the watchdog timer. */
    ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
    ifp->if_timer = 0;

    return;
}
The last point is stale data. These are packets that have accumulated in the send queue but can't be sent. How long should attempts to retransmit this data be made and when should the queue be flushed? You probably want to consider flushing the queue, as you probably don't want to send packets that have sat in the send queue for extended periods of time, as the data is probably out of date.
Above we've seen how to use a timer to resume transmission, or to use link state to resume transmission. This is based on the idea that the issues related to TX are sporadic and for short periods of time. A decision may have to be made when to declare the data stale as well as to stop data from being queued. We can flush the send queue, but we also want io-pkt to stop queuing packets, or the send queue will just fill up again.
Based on some kind of timing, if TX hasn't resumed, you can decide to purge the send queue. This can be managed by a higher level or at the driver level. If managed at the higher level, marking the interface down by clearing the IFF_UP interface flag will cause the send queue to be purged. At the driver level you can perform the same operation via:
IFQ_PURGE(&ifp->if_snd);
If the interface remains down, no new packets will be added to the send queue. If the interface is marked up, io-pkt will continue to add packets to the send queue. If the interface remains up, periodic purging may be needed if TX hasn't resumed at the hardware level.
Blocking Operations
The io-pkt manager is optimized to minimize thread switching, and as mentioned in the architecture discussion previously, driver API callback functions can be called from the stack context. As the stack context is single-threaded, we can't have blocking operations being performed within the stack context. If a blocking operation occurs, you will block the stack context (io-pkt resource manager, protocol processing) for the duration of the time spent blocked.
What defines blocking? Basically any time spent in the driver API callback functions may potentially be time that io-pkt can't service the resource manager (applications), timers, and processing associated with the supported protocols in io-pkt. Time spent in the driver API callback functions should be as little as possible.
Some examples to consider are:
Many QNX Neutrino function calls result in a message being sent to another resource manager. If the message doesn't get an immediate response (a blocking read() or write(), for example), you can block io-pkt. The typical example is a read operation: if it's a blocking read, the function call may not return until there is data to read.
If the resource is already locked, can it potentially be locked for a long period of time, blocking the callback function while the driver waits to acquire the lock?
Does your hardware require a service that can take a long period of time, such as loading the firmware?
Block Op
If the duration of the blocking scenario is known and within a few seconds or less, you can use the blockop services. Essentially this offloads an operation that may take some time to the io-pkt main thread (which is typically idle). Note that blockop is a shared service, and may have multiple operations scheduled. This is meant for occasional time-consuming operations (such as a firmware upload that occurs once), but not indefinite or long-term operations and not repetitive operations. It's a convenience service that handles the complicated management of the stack context pseudo-thread handling. As it performs these kinds of operations, it must be called from within the stack context. The callback function however isn't called from the stack context and shouldn't perform any operations that require the stack context or buffer management.
The example below is taken from the PPP data-link shutdown processing. In this case, the serial port resource manager takes an unusually long, but predictable, amount of time to reply to the message sent by close(), blocking the caller. Since close() is called in the stack context, it blocks other io-pkt operations until it returns. This code moves the execution of close() into the main io-pkt thread and pseudo-thread switches to other operations until the callback function returns. The qnxppp_ttydetach() function pseudo-thread switches at blockop_dispatch() and resumes from the same point once the qnxppp_tty_close_blockop() function returns.
#include <blockop.h>

struct ppp_close_blockop {
    int qnxsc_pppfdrd;
    int qnxsc_pppfdrd2;
    int qnxsc_pppfdwr;
};

void qnxppp_tty_close_blockop(void *arg);
....
void
qnxppp_tty_close_blockop(void *arg)
{
    struct ppp_close_blockop *pcb = arg;

    if (pcb->qnxsc_pppfdrd != -1)
        close(pcb->qnxsc_pppfdrd);
    if (pcb->qnxsc_pppfdrd2 != -1)
        close(pcb->qnxsc_pppfdrd2);
    if (pcb->qnxsc_pppfdwr != -1)
        close(pcb->qnxsc_pppfdwr);
}

int
qnxppp_ttydetach(...)
{
    struct ppp_close_blockop pcb;
    struct bop_dispatch bop;
    .....
    pcb.qnxsc_pppfdrd = sc->qnxsc_pppfdrd;
    pcb.qnxsc_pppfdrd2 = sc->qnxsc_pppfdrd2;
    pcb.qnxsc_pppfdwr = sc->qnxsc_pppfdwr;

    bop.bop_func = qnxppp_tty_close_blockop;
    bop.bop_arg = &pcb;
    bop.bop_prio = curproc->p_ctxt.info.priority;
    blockop_dispatch(&bop);
    ....
    return;
}
Thread Creation
As stated above, several types of threads can exist in an instance of io-pkt. The two types created by driver or module developers are user-created threads that are either tracked (nw_pthread_create()) or not tracked (pthread_create()) by io-pkt. Regardless of how they're created, all POSIX threads created in io-pkt should be named for easier debugging.
If your code creates a thread directly, you should create a tracked thread as described below. If you're calling library functions that create threads on your behalf, you must manage these threads in your module code, because io-pkt isn't aware of their existence. As stated under the io-pkt Architecture section, threads that aren't tracked can't allocate or free an mbuf or cluster, and can't call functions that perform any manipulation of the stack context pseudo-threading.
All tracked POSIX threads must register a quiesce callback function (defined below). If your thread doesn't register a quiesce callback function, io-pkt can end up in a deadlock situation.
In the sample below, nw_pthread_create() is used just like pthread_create(), except for some considerations in the initialization function. The first is naming the thread for easier debugging. The other is setting up the mechanism for your thread's quiesce handling, for when io-pkt requires all threads to block for an exclusive operation. This is required of all threads created with nw_pthread_create().
Threads can be terminated via quiesce_block handling, or using the function nw_pthread_reap(tid), where tid is the thread ID of your tracked thread. The nw_pthread_reap() can't be called by the thread specified by the tid argument (i.e., a thread can't reap itself).
Both nw_pthread_create() and nw_pthread_reap() must be called from the stack context.
Below is an example where the user-created tracked thread creates a resource manager. The structure of your driver can be different, but the main point is that your quiesce callback function must cause your tracked thread to call quiesce_block().
#include <nw_thread.h>

static int
sam_thread_init(void *arg)
{
    struct nw_work_thread *wtp;
    sam_dev_t *sam = (sam_dev_t *)arg;

    pthread_setname_np(gettid(), "sam workthread");

    wtp = WTP;
    ...
    if ((sam->code = pulse_attach(sam->dpp, MSG_FLAG_ALLOC_PULSE, 0,
        sam_pulse_func, NULL)) == -1) {
        log(LOG_ERR, "sam: pulse_attach(): %s", strerror(errno));
        return errno;
    }

    if ((sam->coid = message_connect(sam->dpp, MSG_FLAG_SIDE_CHANNEL)) == -1) {
        pulse_detach(sam->dpp, sam->code, 0);
        log(LOG_ERR, "sam: message_connect(): %s", strerror(errno));
        return errno;
    }

    wtp->quiesce_callout = sam_thread_quiesce;
    wtp->quiesce_arg = sam;
    ...
    return EOK;
}

static int
sam_pulse_func(message_context_t *ctp, int code, unsigned flags, void *handle)
{
    /* If the die argument is 1, the user thread will terminate in
       quiesce_block(). */
    quiesce_block(ctp->msg->pulse.value.sigval_int);
    return 0;
}

static void
sam_thread_quiesce(void *arg, int die)
{
    sam_dev_t *sam = (sam_dev_t *)arg;

    MsgSendPulse(sam->coid, SIGEV_PULSE_PRIO_INHERIT, sam->code, die);
}

static void *
sam_thread(void *arg)
{
    sam_dev_t *sam = (sam_dev_t *)arg;
    dispatch_context_t *ctp;

    if ((ctp = dispatch_context_alloc(sam->dpp)) == NULL) {
        ...
        return NULL;
    }

    while (1) {
        if ((ctp = dispatch_block(ctp)) == NULL) {
            ...
            break;
        }
        dispatch_handler(ctp);
    }
    return NULL;
}

...

/* Likely in the sam_attach() interface attach driver callback function. */
/* Need a thread to handle blocking or another special circumstance. */
if (nw_pthread_create(&sam->worker_tid, NULL, sam_thread, sam, 0,
    sam_thread_init, sam) != EOK) {
    log(LOG_ERR, "sam: nw_pthread_create() failed\n");
    /* Clean up and likely return -1 */
}

/* Likely in the sam_detach() interface detach driver callback function. */
...
if (nw_pthread_reap(sam->worker_tid))
    log(LOG_ERR, "%s(): nw_pthread_reap() failed\n", __FUNCTION__);
...
Quiesce handling
Quiesce handling is required by all threads that are created by nw_pthread_create(). The purpose of this functionality is to allow io-pkt to quiesce (quiet) all threads for an exclusive operation. It also provides a mechanism for terminating the thread.
The basic structure of the mechanism is the quiesce callback function provided by the driver (example above), plus the quiesce_block() io-pkt function that the tracked thread is required to call. The quiesce callback function is executed by io-pkt (it's called from the stack context via the quiesce_all() function). This callback function provides some kind of mechanism to trigger the tracked thread to call quiesce_block() with the die argument provided to the callback function. This argument determines whether the thread blocks (die = 0) or terminates (die = 1).
If the quiesce_block() function isn't called by the tracked thread, io-pkt (and thus the stack context) will be blocked in quiesce_all() until it does, as quiesce_all() is intended to block all worker threads until unquiesce_all() is called to resume the tracked threads. The unquiesce_all() function must also be called from the stack context.
As well, if die is 0, your thread will block for a short period of time. You may have HW integration issues to consider that could be affected by this blocking. You may want to have some code around the quiesce_block() to handle this, such as disable and enable interrupts or other hardware considerations. These considerations would be implementation-specific.
If we continue from the example above, the callback function provided will send a pulse to a channel managed by the tracked thread (its resource manager). That pulse will trigger another callback function that's executed by the tracked thread. This function calls quiesce_block() with the die argument provided.
Periodic timers
Network drivers frequently need periodic timers to perform such housekeeping functions as maintaining links and harvesting transmit descriptors. The preferred way to set up a periodic timer is via the callback API provided by io-pkt. This API is used to call a user-defined function after a specified period of time. You can call callout_* functions in io-pkt driver API callbacks, or nw_pthread_create()-created io-pkt threads. The callout function will be called from the stack context.
The callout data type is struct callout, and includes the following functions:
Here's an example:
struct sam_dev {
    ...
    /* Declare a struct callout in your driver device structure. */
    /* Unique to this interface. */
    struct callout my_callout;
    ...
};

static void
my_function (void *arg)
{
    struct sam_dev *sam = arg;

    /* Do something when the timer expires. */

    /* We may want to arm the callout again if we want my_function()
       to be called at a regular interval. */
    callout_msec(&sam->my_callout, 5, my_function, sam);
}

/* Before the callout is used, it must be initialized. This can be done
   in the if_init() or if_attach() callback, for example. */
callout_init(&sam->my_callout);

/* Once initialized, it can be used. Call my_function() in 5 ms: */
callout_msec(&sam->my_callout, 5, my_function, sam);

callout_stop(&sam->my_callout); /* Cancel the callout */

if (callout_pending(&sam->my_callout)) { /* Is the callout armed? */
    /* Action if pending */
} else {
    /* Action if not pending */
}
Driver doesn't use an RX interrupt
When the driver isn't notified via an interrupt that a packet has arrived, you will need to mimic this functionality in your driver. There are different approaches to this, with different limitations. In your nw_pthread_create() thread, you can either call if_input() directly, or simulate the ISR.
Calling if_input() directly has limitations: your interface won't be able to support the fastforward feature or bridging between interfaces, if you're considering these features for the future. You would prepare the mbuf in the same manner as the sample driver's process-interrupt function, and end by calling if_input(). The if_input() function executed in your nw_pthread causes the packet to be queued, and an event triggers the main io-pkt threads to process the packet.
The other method allows fastforward and bridging to work as in other io-pkt drivers. In this case, you will enqueue your packets in your nw_pthread, and trigger the event directly in your code to cause your process interrupt io-pkt callback to execute in the same way it would if an ISR had occurred. In the process interrupt callback, you would dequeue the packet from your internal driver queue, prepare the mbuf in the same manner as the sample, and execute if_input(). In this case, if_input() is executed in the io-pkt callback rather than in your nw_pthread.
For this, you define a process-interrupt callback along with an enable-interrupt callback, just as you would with an ISR. The difference is how interrupt_queue() is applied. In this design, a queue is accessed by two different threads: one receiving packets from the hardware, and the other (the process-interrupt callback) passing packets to the upper layers of io-pkt. You'll want a mutex protecting this queue so it's modified by only one thread at a time; the same mutex should also protect the event-notification mechanism that interrupt_queue() uses.
In your driver thread, lock the mutex, check whether the queue is full, and if it isn't, enqueue the packet in your internal queue. Then call interrupt_queue() in your thread. If evp (the event structure it returns) isn't NULL, send the event yourself:
MsgSendPulse(evp->sigev_coid, evp->sigev_priority, evp->sigev_code,
    (int)evp->sigev_value.sival_ptr);
Once you have done this, unlock your mutex. The remainder of your function handles hardware or descriptor management.
The io-pkt manager will then schedule your process-interrupt callback to execute. In this callback, loop dequeuing packets until the queue is empty: lock the mutex for your internal queue and attempt to dequeue a packet. If you dequeued a packet, unlock your mutex, call if_input() (provided the mbuf is prepared as required), and go back to the top of the loop. If there is no packet to dequeue (IF_DEQUEUE() returns NULL), break out of the loop and return without unlocking your mutex. You don't want to unlock the mutex here, because your receive thread shouldn't call interrupt_queue() at this point; if it did, interrupt_queue() would return NULL, because you're still processing packets. Instead, unlock the internal mutex in the enable-interrupt callback. This way, once you've finished processing packets and the receive thread can continue, a new evp structure will be returned to it.
Your mbuf can be prepared either in your receive thread or the process interrupt callback. It just depends on whether you want to store the fully formed mbuf in your internal queue or partial buffers to be formatted later.
In the receive thread:
struct sigevent *evp;

pthread_mutex_lock(&driv->rx_mutex);
if (IF_QFULL(&driv->rx_queue)) {
    m_freem(m);
    ifp->if_ierrors++;
    ...->stats.rx_failed_allocs++;
} else {
    IF_ENQUEUE(&driv->rx_queue, m);
}

if (!driv->rx_running) {
    /* rx_running mimics interrupt masking. This is for future
       compatibility when using interrupt_queue(). */
    driv->rx_running = 1;
    evp = interrupt_queue(driv->iopkt, &driv->inter);
    if (evp != NULL) {
        MsgSendPulse(evp->sigev_coid, evp->sigev_priority,
            evp->sigev_code, (int)evp->sigev_value.sival_ptr);
    }
}
pthread_mutex_unlock(&driv->rx_mutex);
In the main code:
int
your_process_interrupt(void *arg, struct nw_work_thread *wtp)
{
    driver_dev_t *driv = arg;
    struct ifnet *ifp;
    struct mbuf *m;

    ifp = &driv->ecom.ec_if;

    while (1) {
        pthread_mutex_lock(&driv->rx_mutex);
        IF_DEQUEUE(&driv->rx_queue, m);
        if (m != NULL) {
            pthread_mutex_unlock(&driv->rx_mutex);
            /* Prepare the mbuf if needed. */
            ...
            (*ifp->if_input)(ifp, m);
        } else {
            /* Leave the mutex locked to prevent any enqueues;
               unlock it in the enable-interrupt callback. */
            break;
        }
    }
    return 1;
}

int
your_enable_interrupt (void *arg)
{
    driver_dev_t *driv = arg;
    ...
    driv->rx_running = 0;
    pthread_mutex_unlock(&driv->rx_mutex);
    return 1;
}