OS  v1.7.5
Documentation
Fundamentals

RTC (Run-To-Completion) model

In this pattern each task runs until it finishes or explicitly yields control back to the scheduler. Events are serviced by the kernel in the scheduler loop. If an event is available, it is posted to the task based on the scheduling rules, and then the dispatcher triggers the task. The task executes the actions associated with that event and then returns to the scheduler.

This pattern means that, in the absence of exceptions or asynchronous destruction of the task execution, a pending event occurrence is dispatched only after the processing of the previous occurrence is completed and a stable task-scheme configuration has been reached. That is, an event occurrence will never be dispatched while the task execution is busy processing the previous one. This behavioral paradigm was chosen to avoid the complications arising from concurrency conflicts that may occur when a task tries to respond to multiple concurrent or overlapping events. It also provides better power efficiency: in the absence of events, the developer can use CPU low-power modes and keep the core active only during the execution of an RTC step.

These features enable you to implement task procedures that do not have to execute all the way down to the last line of code. Instead, execution can be broken into separate entities by using Finite State Machines (FSM) or Co-Routines, where the procedure execution can be suspended and resumed at defined locations.

Timing Approach

The kernel implements a Time-Triggered Architecture (TTA), in which tasks are triggered by comparing the corresponding task-time with a reference clock. The reference clock must be real-time and monotonic. Most embedded systems can provide this kind of reference with a constant tick generated by a periodic background hardware timer, typically at 1 kHz (a 1 ms tick).

For this, the kernel allows you to select the reference clock source between these two scenarios:

  • When the tick is already provided: The reference is supplied by the Hardware Abstraction Layer (HAL) of the device. This is the simplest scenario, and it occurs when the framework or SDK of the embedded system includes a HAL API that provides the time elapsed since system start, usually in milliseconds, by returning a 32-bit counter variable.
  • When the tick is not provided: The application writer should use bare-metal code to configure the device and feed the reference clock manually. Here, a hardware timer should raise an interrupt periodically. After the Interrupt Service Routine (ISR) has been implemented using platform-dependent code, the qOS::clock::sysTick() method must be called inside it. It is recommended that the reserved ISR be used only by QuarkTS++.

The initialization and configuration of QuarkTS++ through qOS::core::init() allows you to set the reference clock source and, additionally, to specify the idle-task activities.

Note
The call to qOS::core::init() is mandatory and must be called once in the application main thread before any kind of interaction with the other OS functions.

Usage example:

  • Scenario 1 : When tick is already provided
    #include "QuarkTS.h"
    #include "HAL.h"
    using namespace qOS;

    void main( void ) {
        HAL_Init();
        os.init( HAL_GetTick, IdleTask_Callback );
        // TODO: add Tasks to the scheduler scheme and run the OS
    }
  • Scenario 2 : When the tick is not provided
    #include "QuarkTS.h"
    #include "DeviceHeader.h"
    using namespace qOS;

    void Interrupt_Timer0( void ) {
        clock::sysTick();
    }

    void main( void ) {
        MCU_Init();
        HAL_Init();
        os.init( nullptr, IdleTask_Callback );
        // TODO: add Tasks to the scheduler scheme and run the OS
    }

Tasks

Like many operating systems, the basic unit of work is the task. Tasks can perform certain functions that may require periodic or one-time execution, update specific variables, or wait for specific events. Tasks could also control specific hardware or be triggered by hardware interrupts. In the QuarkTS++ OS, a task is a node concept that links together:

  • Program code performing specific task activities (callback function)
  • Execution interval (time)
  • Number of executions (iterations)
  • Event-based data

The OS uses a Task Control Block (TCB) to represent each task, storing essential information about task management and execution. Part of this information also includes link-pointers that allow it to be part of one of the lists available in the Kernel Control Block (KCB).

[Figure: Task node illustration]

Each task performs its activities via a callback function and each of them is responsible for supporting cooperative multitasking by being “good neighbors”, i.e., running their callback methods quickly in a non-blocking way and releasing control back to the scheduler as soon as possible (returning).

Every task node must be defined using the qOS::task class, and the callback is defined as a function that returns void and takes a qOS::event_t data structure as its only parameter (this input argument can be used later to get the event information; see Retrieving the event data).

task UserTask;
void UserTask_Callback( event_t e ) {
    // TODO : Task code
}

Tasks can also be defined using the object-oriented programming approach. In this case, a class must be defined that inherits from qOS::task, and the activities() method, where the behavior of the task resides, must be overridden.

class myCustomTask : public task {
    void activities( event_t e ) override {
        // TODO : Task code
    }
};
Note
All tasks in QuarkTS++ must run to completion in order to return CPU control back to the scheduler, following the RTC (Run-To-Completion) model; otherwise, the scheduler will hold the execution state for that task, preventing the activation of other tasks.

The idle task

It's a special task loaded by the OS scheduler when there is nothing else to do (no task in the whole scheme has reached the ready state). The idle task is hard-coded into the kernel, ensuring that at least one task is always able to run. Additionally, the OS sets up this task with the lowest possible priority to ensure that it does not use any CPU time when higher-priority application tasks are able to run. The idle task doesn't perform any active functions, but the user can decide whether it should perform some activities by defining a callback function for it. This can be done at the beginning of the kernel setup. Of course, the callback must follow the same function prototype as for tasks.

Note
To disable the idle-task activities, a nullptr should be passed as the argument to qOS::core::init() or qOS::core::setIdleTask().
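If idle activities are desired, a common pattern is to put the CPU into a low-power mode from the idle callback, in line with the power-saving remark in the RTC section above. A minimal sketch follows; the enterLowPowerMode() hook is a hypothetical placeholder for the platform's own sleep instruction (e.g. a WFI on ARM Cortex-M):

```cpp
#include "QuarkTS.h"
using namespace qOS;

// Hypothetical low-power hook; replace with the platform's own
// sleep primitive (e.g. __WFI() on ARM Cortex-M).
extern void enterLowPowerMode( void );

// Idle-task callback: it follows the same prototype as any task callback.
void IdleTask_Callback( event_t e ) {
    (void)e;               // event data is unused here
    enterLowPowerMode();   // sleep until the next tick or interrupt
}
```

The callback is then passed as the second argument of qOS::core::init(), as in the usage examples above, or later via qOS::core::setIdleTask().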

Adding tasks to the scheme

After setting up the kernel with qOS::core::init(), the user can proceed to deploy the multitasking application by adding tasks. If the task node and its respective callback are already defined, the task can be added to the scheme using qOS::core::add(). This method can schedule a task to run every t seconds for n executions, invoking the callbackFcn method on every pass.

Caveats:

  1. A task with the time argument t defined as qOS::clock::IMMEDIATE will always get the READY state in every scheduling cycle; as a consequence, the idle task will never get dispatched.
  2. Tasks do not remember the number of iterations initially set by the n executions argument. As the iterations are performed, the internal iteration counter decreases until it reaches zero. If another set of iterations is needed, the user should set the number of iterations again and resume the task explicitly.
  3. Tasks that have performed all their iterations set their own state to qOS::taskState::DISABLED_STATE. Asynchronous triggers do not affect the iteration counter.
  4. The arg parameter can be used as a storage pointer; to pass multiple data items, create a structure with the required members and pass a pointer to that structure.
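The caveats above refer to the parameters of qOS::core::add(). The sketch below illustrates them; the exact signature should be checked against the qOS::core::add() API reference, since the parameter order and the enumerator names task::PERIODIC and taskState::ENABLED_STATE used here are assumptions (only the t, n, callbackFcn and arg parameters and taskState::DISABLED_STATE are named on this page):

```cpp
#include "QuarkTS.h"
using namespace qOS;

task UserTask;

void UserTask_Callback( event_t e ) {
    // TODO : Task code
}

void setupScheme( void ) {
    // Assumed parameter order: node, callback, priority, period t,
    // iterations n, initial state, and the arg storage pointer.
    // Here: priority 1, run every 0.1 s, forever, enabled, no arg.
    os.add( UserTask, UserTask_Callback, 1u, 0.1f,
            task::PERIODIC, taskState::ENABLED_STATE, nullptr );
}
```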

Invoking qOS::core::add() is the most generic way to add tasks to the scheme, supporting a mixture of time-triggered and event-triggered tasks, state-machine tasks, command-line-interface tasks, and input-watcher objects. Additional simplified method functions are also provided to add specific-purpose tasks:

Event-triggered tasks

An event-triggered task reacts asynchronously to the occurrence of events in the system, such as external interrupts or changes in the available resources.

The method qOS::core::add() can also be used to add this kind of task, keeping it in a SUSPENDED state. Only asynchronous events, served according to their precedence, dictate when such a task can change to the RUNNING state.

Removing a task

The qOS::core::remove() function removes a task from the scheduling scheme. This means the task node will be disconnected from the kernel chain, avoiding the additional overhead incurred by the scheduler when checking it and, of course, preventing it from running.

Caveats:

Task nodes are variables like any other. They allow your application code to reference a task, but there is no link back the other way: the kernel doesn't know anything about the variables, where a variable is allocated (stack, global, static, etc.), how many copies of the variable you have made, or even whether the variable still exists. Therefore, the qOS::core::remove() method cannot automatically free the resources allocated for the variable. If the task node has been dynamically allocated, the application writer is responsible for freeing the memory block after the removal call.
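For a dynamically allocated node, a minimal sketch of that responsibility (assuming the node was created with new, e.g. against the default memory pool) could be:

```cpp
#include "QuarkTS.h"
using namespace qOS;

void dropTask( task *node ) {
    os.remove( *node ); // detach the node from the kernel chain
    delete node;        // the kernel will not free it; the application must
}
```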

Running the OS

After preparing the multitasking environment for your application, a call to qOS::core::run() is required to execute the scheduling scheme. This function is responsible for running the following OS main components:

  • The Scheduler : Selects the tasks to be submitted into the system and decides which of them are able to run.
  • The Dispatcher : When the scheduler completes its job of selecting ready tasks, it is the dispatcher that takes each of those tasks to the running state. This procedure gives a task control over the CPU after it has been selected by the scheduler, and involves the following:
    1. Preparing the resources before the task execution
    2. Executing the task activities (via the callback function)
    3. Releasing the resources after the task execution
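Putting the pieces together, a minimal application skeleton (reusing the HAL_Init and HAL_GetTick symbols from the scenario-1 example earlier on this page) looks like:

```cpp
#include "QuarkTS.h"
#include "HAL.h"  // assumed platform header, as in the scenario-1 example
using namespace qOS;

void IdleTask_Callback( event_t e ) {
    // TODO : Idle activities
}

void main( void ) {
    HAL_Init();
    os.init( HAL_GetTick, IdleTask_Callback );
    // TODO: add Tasks to the scheduler scheme with qOS::core::add()
    os.run(); // starts the scheduler; normally never returns
}
```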

The states involved in the interaction between the scheduler and dispatcher are described here.

Note
After calling qOS::core::run(), the OS scheduler will be running and the following line should never be reached. However, the user can optionally release the scheduler explicitly with the qOS::core::schedulerRelease() method.

Releasing the scheduler

This functionality must be enabled via the Q_ALLOW_SCHEDULER_RELEASE macro. This method stops the kernel scheduling; as a consequence, the main thread continues after the qOS::core::run() call.

Although producing this action is not a typical desired behavior in any application, it can be used to handle a critical exception.

When used, the release will take place after the current scheduling cycle finishes. The kernel can optionally invoke a release callback function when the scheduler is released. Defining the release callback helps to take actions over the exception that caused the release. To define it, the qOS::core::setSchedulerReleaseCallback() method should be used.
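A sketch of installing such a callback follows; it assumes the release callback uses the same prototype as a task callback, which should be confirmed against the qOS::core::setSchedulerReleaseCallback() reference:

```cpp
#include "QuarkTS.h"
using namespace qOS;

// Invoked once the current scheduling cycle finishes after a release.
void OnSchedulerRelease( event_t e ) {
    (void)e;
    // TODO : log the exception, reset peripherals, etc.
}

void setupRelease( void ) {
    os.setSchedulerReleaseCallback( OnSchedulerRelease );
}
```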

Note
When a scheduler release is performed, resources are not freed. After the release, the application can invoke qOS::core::run() again to resume the scheduling activities.

Global states and scheduling rules

A task can be in one of the four global states: RUNNING , READY , SUSPENDED or WAITING. Each of these states is tracked implicitly by putting the task in one of the associated kernel lists.

These global states are described below:

[Figure: Task global states]
  • WAITING : The task cannot run because the conditions for running are not in place.
  • READY : The task has completed preparations for running, but cannot run because a task with higher precedence is running.
  • RUNNING : The task is currently being executed.
  • SUSPENDED : The task doesn't take part in what is going on. Normally this state is entered after the RUNNING state or when the task does not reach the READY state.

The presence of a task in a particular list indicates the task's state. There are as many ready lists as defined by the Q_PRIORITY_LEVELS macro. To select the target ready list, the OS uses the user-assigned priority, which ranges from 0 (the lowest priority) to Q_PRIORITY_LEVELS-1 (the highest priority). For instance, if Q_PRIORITY_LEVELS is set to 5, QuarkTS++ will use 5 priority levels or ready lists: 0 (lowest priority), 1, 2, 3, and 4 (highest priority).

[Figure: OS lists]

Except for the idle task, a task exists in one of these states. As the real-time embedded system runs, each task moves from one state to another (and from one list to another) according to the logic of a simple finite state machine (FSM). The figure above illustrates the typical flowchart used by QuarkTS++ to handle the task states, with brief descriptions of the state transitions; you may also notice the interaction between the scheduler and the dispatcher.

The OS assumes that none of the tasks blocks anywhere during the RUNNING state. In a round-robin fashion, each ready task runs in turn from every ready list. The developer should monitor the system execution times to make sure that, in the worst case, when all tasks have to execute, all of the deadlines are still met.

Rules

Task precedence is used as the task scheduling rule and precedence among tasks is determined based on the priority of each task. If there are multiple tasks able to run, the one with the highest precedence goes to RUNNING state first.

In determining precedence among tasks with different priority levels, the one with the highest priority has the highest precedence. Among tasks with the same priority, the one that entered the scheduling scheme first has the highest precedence if the Q_PRESERVE_TASK_ENTRY_ORDER configuration is enabled; otherwise, the OS orders them according to the dynamics of the kernel lists.

Event precedence

The scheduler also has an order of precedence for incoming events; in this way, if events of different natures converge on a single task, they will be served according to the following flowchart:

[Figure: Event precedence]

Additional operational states

Each task has independent operating states from those globally controlled by the scheduler. These states can be handled by the application writer to modify the event flow to the task and consequently, affect the transition to the READY global state. These states are described as follows:

  • AWAKE : In this state, the task is conceptually in an alert mode, handling most of the available events. This operational state is available when the SHUTDOWN bit is set, allowing the next operational states to be available:
    • ENABLED : The task can catch all the events. This operational state is available when the ENABLE bit is set.
    • DISABLED : In this state, the time events will be discarded. This operational state is available when the ENABLE bit is cleared.
  • ASLEEP : Task operability is put into a deep doze mode, so the task cannot be triggered by lower-precedence events. This operational state is available when the SHUTDOWN bit is cleared. The task can exit this state when it receives a high-precedence event (a queued notification) or via the qOS::task::setState() method.

The figure below shows a better representation of how the event flow can be affected by these operational states.

[Figure: Event flow according to operational states]
Remarks
Queued notifications are the only events that can wake up sleeping tasks.
Note
The ASLEEP operational state overrides the ENABLED and DISABLED states.
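As an illustration of driving these operational states from application code via qOS::task::setState(): the taskState::DISABLED_STATE enumerator is the one named earlier on this page; any other state enumerator should be taken from the taskState reference.

```cpp
#include "QuarkTS.h"
using namespace qOS;

extern task UserTask; // a task node already added to the scheme

void muteTimeEvents( void ) {
    // DISABLED: time events for this task are discarded, but the
    // task remains AWAKE and can still react to queued notifications.
    UserTask.setState( taskState::DISABLED_STATE );
}
```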

Critical sections

Since the kernel is non-preemptive, the only critical sections that must be handled are the shared resources accessed from the ISR context. The most obvious way of achieving mutual exclusion is to have the kernel disable interrupts before it enters a critical section and re-enable them after it leaves.

By disabling interrupts, the CPU is prevented from switching context. This guarantees that the currently running job can use a shared resource without another context accessing it. Disabling interrupts, however, is a major undertaking: at best, the system will not be able to service interrupts for the time the current job spends in its critical section. In QuarkTS++, these critical sections are kept as short as possible.

Considering that the kernel is hardware-independent, the application writer should provide the necessary piece of code to enable and disable interrupts.

For this, the qOS::critical::setInterruptsED() method should be used. In this way, communication between ISRs and tasks using queued notifications or data queues is performed safely.

In some systems, toggling the global IRQ flag is not enough, because it does not save and restore the interrupt state. Here, the uint32_t argument and return value of the two functions (Disabler and Restorer) become relevant: the application writer can use them to save and restore the current interrupt configuration. When a critical section is entered, the Disabler, in addition to disabling interrupts, returns the current configuration, which is retained by the kernel; when the critical section finishes, this retained value is passed to the Restorer to bring back the saved configuration.
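A bare-metal sketch of the Disabler/Restorer pair for an ARM Cortex-M target follows; the CMSIS intrinsics __get_PRIMASK(), __disable_irq() and __set_PRIMASK() are platform assumptions, and other architectures will need their own equivalents:

```cpp
#include "QuarkTS.h"
#include "cmsis_compiler.h"  // assumed CMSIS header providing the intrinsics
using namespace qOS;

// Disabler: save the current interrupt mask, then disable interrupts.
uint32_t BSP_IntDisable( void ) {
    uint32_t saved = __get_PRIMASK();
    __disable_irq();
    return saved;            // retained by the kernel
}

// Restorer: bring back the saved interrupt configuration.
void BSP_IntRestore( uint32_t saved ) {
    __set_PRIMASK( saved );
}

void setupCriticalSections( void ) {
    critical::setInterruptsED( BSP_IntDisable, BSP_IntRestore );
}
```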

Configuration macros

Some OS features can be customized using a set of macros located in the header file config.h. Here is the default configuration, followed by an explanation of each macro:

  • Q_PRIORITY_LEVELS : Default: 3. The number of priorities available for application tasks.
  • Q_PRIO_QUEUE_SIZE : Default: 10. Size of the priority queue for notifications. This argument should be an integer greater than zero. A zero value disables this functionality.
  • Q_ALLOW_SCHEDULER_RELEASE : Default: 0 (disabled). Used to enable or disable the scheduler release functionality.
  • Q_PRESERVE_TASK_ENTRY_ORDER : Default: 0 (disabled). If enabled, the kernel will preserve the task entry order on every OS scheduling cycle.
  • Q_BYTE_ALIGNMENT : Default: 8. Used by the Memory Management extension to perform byte alignment.
  • Q_DEFAULT_HEAP_SIZE : Default: 2048. The total heap size for the default memory pool. This will enable the new and delete operators if not already available.
  • Q_FSM : Default: 1 (enabled). Used to enable or disable the Finite State Machine (FSM) extension.
  • Q_FSM_MAX_NEST_DEPTH : Default: 5. The maximum depth of nesting in Finite State Machines (FSM).
  • Q_FSM_MAX_TIMEOUTS : Default: 3. The maximum number of timeouts inside a timeout specification for the FSM extension.
  • Q_FSM_PS_SIGNALS_MAX : Default: 8. The maximum number of signals a Finite State Machine (FSM) can subscribe to.
  • Q_FSM_PS_SUB_PER_SIGNAL_MAX : Default: 4. The maximum number of FSM subscribers per signal.
  • Q_DEBUGTRACE_BUFSIZE : Default: 36. The buffer size for debug and trace macros.
  • Q_CLI : Default: 1 (enabled). Used to enable or disable the AT Command Line Interface (CLI) extension.
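For example, a config.h fragment that widens the priority range, enables the scheduler release functionality and drops the CLI extension (macro names taken from the list above) might read:

```cpp
// config.h (fragment)
#define Q_PRIORITY_LEVELS           5   // ready lists 0..4
#define Q_ALLOW_SCHEDULER_RELEASE   1   // enable the scheduler release feature
#define Q_CLI                       0   // disable the AT-CLI extension
```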