In the QuarkTS++ OS, tasks can be triggered by multiple event sources, including time-elapsed events, notifications, queues and event-flags. This means that several situations must be handled by the application writer from the task context, for example, determining which event source triggered the task and retrieving any data associated with it.
The OS provides a simple approach for this: a class that gathers all the relevant information about the task's execution. This class, already available in the callback function as the qOS::event_t argument, is filled by the kernel dispatcher, so the application writer only needs to read its fields.
This class has the following attributes and methods:
Please review the qOS::event_t class reference for more details.
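As an illustration, the sketch below shows a task callback reading the event-data object filled by the dispatcher. The method and enumerator names used here (firstCall(), getTrigger(), getEventData(), and the qOS::trigger values) follow the general API style but are assumptions; confirm them against the qOS::event_t class reference.

#include "QuarkTS.h"   // main OS header (name assumed)

// Hypothetical helper that consumes the payload attached to the event
static void processPayload( void *payload ) { (void)payload; }

void myTaskCallback( qOS::event_t e )
{
    if ( e.firstCall() ) {
        // First activation of the task since it was added to the scheme:
        // a good place for one-time initialization.
    }
    switch ( e.getTrigger() ) {
        case qOS::trigger::byTimeElapsed:
            // Periodic activation: the task interval expired.
            break;
        case qOS::trigger::byNotificationQueued:
            // A queued notification arrived; its data travels with the event.
            processPayload( e.getEventData() );
            break;
        default:
            break;
    }
}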
Running tasks at pre-determined rates is desirable in many situations, like sensory data acquisition, low-level servoing, control loops, action planning and system monitoring. As previously explained in Adding tasks to the scheme, you can schedule tasks at any interval your design demands, provided the time specification is not lower than the scheduler tick. When an application consists of several periodic tasks with individual timing constraints, a few points must be taken into account.
When the specified time for a task elapses, the kernel reports the byTimeElapsed event, which puts the task in a READY state (see figure below).

Applications operating in demanding environments require tasks and ISRs to interact with each other, forcing the application to implement some event model. Here, we understand events as any identifiable occurrence that has significance for the embedded system. As such, events include changes in hardware, user-generated actions, or messages coming from components of the application itself.
As shown in the figure above, two main scenarios are presented: ISR-to-task and task-to-task interaction.
When interrupts are used to catch external events, the handler is expected to be fast and lightweight in order to reduce the variable overhead introduced by the ISR code itself. If too much work is done inside an ISR, the system will tend to lose future events. In some situations, in the interest of stack-usage predictability and to facilitate system behavioral analysis, the best approach is to synchronize the ISR with a task, leaving the heavy job at the base level instead of the interrupt level. The interrupt handler then only collects the event data and clears the interrupt source, exiting promptly and deferring the processing of the event data to a task. This is also called Deferred Interrupt Handling.
The other scenario arises when one task is performing a specific job and a second task must be awakened to perform some activities once the first task finishes.
Both scenarios require a way for tasks to communicate with each other. For this, the OS does not impose any specific event processing strategy on the application designer, but it does provide features that allow the chosen strategy to be implemented in a simple and maintainable way. From the OS perspective, these features are just sources of asynchronous events with specific triggers and related data.
The OS provides the following features for inter-task communication:
Notifications allow tasks to interact with other tasks and to synchronize with ISRs without the need for intermediate variables or separate communication objects. By using notifications, a task or ISR can launch another task by sending an event and related data to the receiving task. This is depicted in the figure below.
Each task node has a 32-bit notification value, which is initialized to zero when the task is added to the scheme. The method qOS::core::notify() with mode = qOS::notifyMode::SIMPLE is used to send an event by directly updating the receiving task's notification value, increasing it by one. As long as the scheduler sees a non-zero value, the task will be changed to a READY state and, eventually, the dispatcher will launch the task according to the execution chain. After being served, the notification value is decreased.
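As a minimal sketch, a simple notification could be sent from an ISR as shown below. The way the kernel instance is obtained (getInstance()) and the argument order of notify() are assumptions; check the qOS::core reference for the exact signature.

extern qOS::task externalTask;   // a task assumed to be already added to the scheme

void externalPinISR( void )
{
    // Increments externalTask's notification value; on the next scheduler cycle
    // the task becomes READY and will be dispatched in turn.
    qOS::core::getInstance().notify( qOS::notifyMode::SIMPLE, externalTask, nullptr );
}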
If the application notifies multiple events to the same task, queued notifications are the right solution instead of using simple notifications.
Here, qOS::core::notify() with mode = qOS::notifyMode::QUEUED takes advantage of the scheduler's FIFO priority-queue. This kind of queue is somewhat similar to a standard queue, with an important distinction: when a notification is sent, the task is added to the queue with its corresponding priority level and will later be removed from the queue, highest-priority task first. That is, the tasks are (conceptually) stored in the queue in priority order rather than insertion order. If two tasks with the same priority are notified, they are served in FIFO order according to their position inside the queue. The figure below illustrates this behavior.
The scheduler always checks the queue state first, since this event takes precedence over all others. If the queue has elements, the scheduler algorithm will extract the data and the corresponding task will be launched with the trigger flag set to byNotificationQueued.
The next figure shows a cooperative environment with five tasks. Initially, the scheduler activates Task-E; this task then enqueues data to Task-A and Task-B using qOS::core::notify() with the qOS::notifyMode::QUEUED mode. In the next scheduler cycle, the scheduler realizes that the priority-queue is not empty and generates an activation for the task located at the front of the queue. In this case, Task-A will be launched and its respective data will be extracted from the queue. However, Task-A also enqueues data to Task-C and Task-D. Following the priority-queue behavior, the scheduler performs a new reordering, so the next queue extractions will be for Task-D, Task-C, and Task-B sequentially.
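The enqueueing step of this scenario could be sketched roughly as follows; the task handles, payload variables and the kernel accessor are placeholders, and the notify() argument order is assumed.

extern qOS::task taskA, taskB;   // receivers, assumed already part of the scheme
static int dataForA, dataForB;   // illustrative payloads

void taskE_Callback( qOS::event_t e )
{
    qOS::core &kernel = qOS::core::getInstance();   // kernel accessor (assumed)
    // Both entries go into the scheduler's FIFO priority-queue; on the next cycle
    // the highest-priority waiting task is extracted and launched first.
    kernel.notify( qOS::notifyMode::QUEUED, taskA, &dataForA );
    kernel.notify( qOS::notifyMode::QUEUED, taskB, &dataForB );
}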
The kernel handles all the notifications by itself (simple or queued), so intermediate objects are not needed. Just calling qOS::core::notify() is enough to send notifications. After the task callback is invoked, the notification is cleared by the dispatcher. Here the application writer must read the respective fields of the event-data class to check the received notification.
The next example shows ISR-to-task communication. Two interrupts send notifications to a single task with specific event data. The receiver task taskA, after further processing, sends an event to taskB, which handles the event generated by taskA.
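A hedged sketch of such an arrangement is given below; the peripheral names, payload identifiers and the exact notify() signature are illustrative assumptions, not the literal example shipped with the OS.

#include "QuarkTS.h"          // main OS header (name assumed)

qOS::task taskA, taskB;       // assumed to be added to the scheme elsewhere
static char timerEventId  = 'T';
static char buttonEventId = 'B';

// Deferred interrupt handling: each ISR only tags the event and notifies taskA
void timerISR( void )
{
    qOS::core::getInstance().notify( qOS::notifyMode::QUEUED, taskA, &timerEventId );
}
void buttonISR( void )
{
    qOS::core::getInstance().notify( qOS::notifyMode::QUEUED, taskA, &buttonEventId );
}

void taskA_Callback( qOS::event_t e )
{
    char id = *static_cast<char*>( e.getEventData() );   // getEventData() assumed
    (void)id;   // ... process the ISR-supplied data according to 'id' ...
    // Wake taskB to continue the job at base level
    qOS::core::getInstance().notify( qOS::notifyMode::SIMPLE, taskB, nullptr );
}

void taskB_Callback( qOS::event_t e )
{
    (void)e;    // handle the event generated by taskA
}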
In some systems, we need the ability to broadcast an event to all tasks. This is often referred to as a barrier: a group of tasks stops its activities at some point and cannot proceed until another task or ISR raises a specific event. For this kind of implementation, we can also use the qOS::core::notify() method, but in this case without specifying a target task.
A queue is a linear data structure with simple operations based on the FIFO (First-In First-Out) principle. It is capable of holding a finite number of fixed-size data items. The maximum number of items that a queue can hold is called its length. Both the length and the size of each data item are set when the queue is created.
As shown above, the last position is connected back to the first position to form a circle; this structure is also called a ring-buffer or circular queue.
In general, this kind of data structure is used to serialize data between tasks, allowing some elasticity in time. In many cases, the queue is used as a data buffer in interrupt service routines. This buffer collects the data so that, at some later time, another task can fetch it for further processing. This is the simple "task to task" buffering case. There are also other applications for queues, such as serializing many data streams into one receiving stream (multiple tasks to a single task) or vice versa (a single task to multiple tasks).
Queuing by copy does not prevent the queue from also being used to queue by reference. For example, when the size of the data being queued makes it impractical to copy the data into the queue, then a pointer to the data can be copied into the queue instead.
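For instance, queuing by reference might look like the sketch below, where only a pointer travels through the queue; frame_t, currentFrame and the send() call name are illustrative assumptions.

struct frame_t { unsigned char payload[ 512 ]; };   // illustrative large item
static frame_t currentFrame;
extern qOS::queue frameQueue;   // assumed configured to hold items of type frame_t*

void enqueueFrameByReference( void )
{
    frame_t *ref = &currentFrame;
    // Only the pointer is copied into the queue; currentFrame must remain valid
    // (and unmodified) until the consumer has processed it.
    frameQueue.send( &ref );    // send() name assumed; see the qOS::queue reference
}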
A queue must be explicitly initialized before it can be used. These objects are referenced by the qOS::queue class. Either the constructor or the qOS::queue::setup() method can be used to configure the queue and initialize the instance.
The RAM required for the queue data must be provided by the application writer; it can be statically allocated at compile time or allocated at run-time using the Memory Management extension.
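A minimal setup sketch with statically allocated storage is shown below; the setup() argument order (storage area, item size, queue length) is an assumption to be verified against the qOS::queue reference.

#include <cstddef>

constexpr std::size_t QUEUE_LENGTH = 10u;
static int queueStorage[ QUEUE_LENGTH ];   // RAM for the queue data, supplied by the application
static qOS::queue myQueue;

void initQueue( void )
{
    // Assumed argument order: storage area, size of each item, number of items
    myQueue.setup( queueStorage, sizeof( int ), QUEUE_LENGTH );
}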
Additional features are provided by the kernel when a queue is attached to a task; this allows the scheduler to pass specific queue events to the task, usually states of the queue object itself that need to be handled by the task. For this, use the qOS::task::attachQueue() method.
The following attaching modes are provided:
This example shows the usage of QuarkTS++ queues. The application is the classic producer/consumer example. The producer task puts data into the queue. When the queue reaches a specific item count, the consumer task is triggered to start fetching data from the queue. Here, both tasks are attached to the queue.
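A compressed sketch of that producer/consumer arrangement could look like the following; the queue send()/receive() calls and the attach-mode enumerator are assumptions (only setup() and attachQueue() are named here), so treat those identifiers as placeholders to be checked against the reference.

qOS::task producerTask, consumerTask;   // assumed to be added to the scheme elsewhere
qOS::queue dataQueue;
static int queueArea[ 8 ];

void producerCallback( qOS::event_t e )
{
    static int sample = 0;
    ++sample;
    dataQueue.send( &sample );                 // send() name assumed
}

void consumerCallback( qOS::event_t e )
{
    int item;
    while ( dataQueue.receive( &item ) ) {     // receive() name assumed
        // ... consume the item ...
    }
}

void setupQueueLink( void )
{
    dataQueue.setup( queueArea, sizeof( int ), 8u );
    // Trigger the consumer when the queue holds at least 4 items;
    // the attach-mode enumerator below is a hypothetical placeholder.
    consumerTask.attachQueue( dataQueue, qOS::queueLinkMode::QUEUE_COUNT, 4u );
}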
Every task node has a set of built-in event bits called Event-Flags, which can be used to indicate whether an event has occurred. They are somewhat similar to signals but provide a low-cost and more flexible means of passing simple messages between tasks. One task can set or clear any combination of event flags, while another task may read the event-flag group at any time or wait for a specific pattern of flags.
Up to twenty (20) bit-flags are available per task, and whenever the scheduler sees that at least one event-flag is set, the kernel will trigger the task execution.
The task is placed in a READY state when any of the available event-flags is set. The flags should be cleared explicitly by the application writer.

This example demonstrates the usage of Event-flags. The idle task will transmit data generated by another task only when the required conditions are met, including two events from an ISR (a timer expiration and the change of a digital input) and the generation of a new set of data. The task that generates the data should wait until the idle task's transmission is done before generating a new data set.