For a single thread everything looks fine, but when I create a second thread, I cannot see any context switching.
I created two threads that toggle LED states. The threads are:
static void *led3Thread(void *arg)
{
    while (1)
    {
        LED3_Write(!LED3_Read());
        delay();
    }
}
static void *led4Thread(void *arg)
{
    while (1)
    {
        LED4_Write(!LED4_Read());
        delay();
    }
}
I have also added a delay function so the LED changes are visible:
void delay(void)
{
    /* volatile, so the compiler does not optimize the empty loop away */
    for (volatile int i = 0; i < 1000000; i++);
}
The threads behave as if they block each other. led3Thread is started first and is the only one that runs; execution never switches to led4Thread (I see no changes on LED4).
If I don't create led3Thread, I can see changes on LED4.
So why am I not able to observe context switching? Is this normal behaviour or not?
I have also tried the sleep() and usleep() functions instead of my delay() function, but the result (blocking) is the same.
sleep() and usleep() work fine individually.
I don't have a different board to compare this behaviour with in another environment.
Please first try the ipc_pingpong example. It creates two threads and
transfers a message back and forth between them.
When RIOT initially starts up, the CPU is normally running in
interrupt mode (using the interrupt-mode stack). After creating the
stacks for the main and idle threads, the CPU must be put into
thread mode. This means the main thread's initial context needs to be
loaded into the CPU's registers, and the stack pointer must point to
the main thread's stack. Once this is done, the CPU can just do
'normal' task switching between threads.
So, in short: in cpu_switch_context_exit() you simply must load
the main thread's context into the CPU's registers and point the stack
pointer to the main thread's stack.
I think I didn't realize that you're doing a new port. I was assuming that general kernel initialization, task switching, etc. were working (e.g., that the applications in test/ do what they're supposed to do).
According to your explanation (preemption only happens for ISRs or
higher-priority threads), RIOT behaves like a single-threaded application.
This is a design philosophy of RIOT.
The main idea is that in an embedded OS, dividing the CPU into time slices imposes additional task-switching (and thus power-consumption) overhead that is not needed if there is no user who needs the impression of things happening at the same time.
I would expect a context-switch mechanism based on a time quantum even if
threads have the same priority.
... as is done by standard time-slicing schedulers in most preemptive operating systems.
But most applications spend most of their time waiting for IO, a timer, user input, and so on. Those apps are preempted during their waiting periods, yielding to other threads.
Your simple example (delaying and toggling an LED) will work as expected once the scheduler runs correctly, provided the delay function is changed into something that yields.
Only if you busy-wait, which is inefficient on any system, can equally or lower-prioritized threads starve.
That said, it is fairly easy to implement time-slice-based task switching within a given priority if it is really needed for a specific application.