CS452 - Real-Time Programming - Winter 2018
Lecture 7 - Create
Public Service Announcements
-
Due date for kernel 1: Friday, 26 January, 2018
-
Please remember that some aspects of the system configuration
you get depend on the state in which it was left by the previous
group. I have seen very hard-to-find bugs occur at the last
minute because a group unknowingly relied on state provided by
the group before it.
-
What do
swi
and movs pc, lr
do?
Context Switch
From the point-of-view of the task
What gcc does
We started by modelling what happens in a context switch as a function
call to a kernel function. After all, that's what it looks like to the
user code.
...
tid = Create( priorityQ, code );
...
Software Interrupt SWI
The software interrupt has an inverse instruction which, executed
immediately following it, undoes its effect. Its most common form
is
movs pc, lr
which has the following effect.
-
The
pc
gets the contents of the
lr_<mode>
.
-
The
cpsr
gets the contents of the
spsr_<mode>
.
All forms of instructions like this are privileged. (Privilege
is needed for changing the low bits of the CPSR. Why?) The inverse
depends on the link register and the stack pointer in the calling
program being undisturbed.
Software Interrupt, SWI.
- Does what bl does, plus
-
saves the cpsr in the kernel's spsr (This must be done
atomically. Why?) and
-
sets some bits in the CPSR.
To make the software interrupt work you must set low memory
correctly. You know what to put in 0x08:
-
You need to put the entry point of the kernel into 0x28.
-
Doing this is part of initializing the kernel.
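As a sketch of that initialization step: the usual arrangement on the TS-7200 is that the instruction at 0x08 loads the pc from the word at 0x28, so installing the handler amounts to one store into low memory. Everything below is illustrative, not required: the vector base is a parameter (it is address 0 on the real machine) so the sketch can run off-target, and example_entry is a hypothetical stand-in for your kernel entry point.

```c
#include <stdint.h>

/* Hypothetical stand-in for the first kernel instruction after swi. */
static void example_entry( void ) { }

/* Store the kernel entry point into the slot that the instruction at
   0x08 loads the pc from. On a 32-bit ARM the slots are 32-bit words;
   uintptr_t is used here so the sketch also runs on a 64-bit host. */
void install_swi_handler( uintptr_t *vector_base,
                          void (*kernel_entry)( void ) ) {
    vector_base[ 0x28 / sizeof( uintptr_t ) ] = (uintptr_t)kernel_entry;
}
```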
The sequence
// in the user task
swi n
//inside the kernel
kernel entry:
movs pc, lr
is a NOP
. The 's' in movs puts the SPSR into CPSR.
Where in the kernel is the kernel entry?
-
In the middle of activate, right after the kernel exit.
-
It must occur in an environment that will return from
activate( )
correctly.
Leaving the kernel.
Between the kernel entry and leaving the kernel
When the previous request has been handled completely, it's time
to leave the kernel.
-
You are in svc mode, executing kernel instructions
-
Schedule to discover which task runs next.
- i.e. get the value of
active
-
Enter
activate( active )
-
active
is a pointer to a task descriptor
(TD*
). From it, or from the user stack get
sp_usr
.
-
from the user stack get the cpsr of
active
, and put it into the kernel's
spsr_svc
.
-
You should understand how this takes us back to user mode.
-
Store kernel state on kernel stack
-
During kernel entry you load the kernel registers from
the kernel stack. Taken together loading and storing
should be a NOP.
-
Storing the kernel state before overwriting its registers is
essential because there are three different link registers. What
are they?
-
the instruction after swi
-
the one that returns from the user function
-
the one that returns from
activate( )
-
get the address of the next instruction of
active
to be run, its pc
-
set lr_svc = pc
-
At this point
movs pc, lr
would start executing
the code of active at the correct instruction, but the register
values in the CPU are the kernel's.
Switch to system mode
-
Load registers from user stack
-
Return to supervisor mode
-
Set return value by overwriting r0
- What about registers r1-3?
-
Let it go
movs pc, lr
Somewhere after movs
is the kernel entry.
-
There might be something like an assertion before the entry.
-
Or maybe an illegal instruction.
Immediately after the kernel entry is the inverse of this sequence.
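The whole exit-then-entry shape can be sketched in pseudo-assembly. Everything here is illustrative and must match your own kernel's conventions: the register choices, the mode-bit constants, the order things sit on the user stack, and the offset name SP_OFF are all assumptions, and the return-value step is only marked by a comment.

```asm
@ Pseudo-assembly sketch of the exit half of activate( active ).
@ r0 = active (TD *) on entry.
kernel_exit:
    stmfd   sp!, {r4-r11, lr}    @ save kernel state on kernel stack
    ldr     r1, [r0, #SP_OFF]    @ fetch sp_usr from the TD
    ldr     r2, [r1], #4         @ pop saved cpsr from user stack
    msr     spsr, r2             @ becomes the user cpsr on movs
    ldr     lr, [r1], #4         @ pop user pc into lr_svc
    msr     cpsr_c, #0xdf        @ switch to system mode, interrupts off
    mov     sp, r1               @ restore sp_usr
    ldmfd   sp!, {r0-r12, lr}    @ restore user registers
    msr     cpsr_c, #0xd3        @ back to supervisor mode
                                 @ (overwrite r0 with the return value here)
    movs    pc, lr               @ leap into the user task
kernel_entry:                    @ swi lands here via 0x28
    ...                          @ the inverse of the sequence above
```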
Scheduling
There are two important issues for scheduling
- When do we reschedule?
- Who do we activate when we schedule?
When to schedule
As often as possible, which means every time we are in the kernel,
so the question amounts to `When do we enter the kernel?'
The answer right now (during the first part of kernel development)
is: whenever user code executes SWI.
Who to Schedule
Whatever task will cause all deadlines to be met.
Because this is not an easy problem, we don't want to solve it
within the kernel. What the kernel does should be fast (=constant
time) and not resource constrained.
Scheduling algorithm
Two pieces of code are run every time the kernel runs
-
the context switching code, and
-
the scheduling code.
We want to make both as efficient as possible, within reason!
-
Find the highest priority non-empty ready queue. A ready queue
can be as simple as a linked list of pointers to task
descriptors.
-
The task found is removed from its queue and becomes the
active task. (Until this point active has pointed to the TD
of the previously active task.)
-
When a task is made ready it is put at the end of its ready
queue. Thus, all tasks at the same priority get equal chances
of running.
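The three steps above can be sketched in C. The layouts here are assumptions, not the required design: an intrusive rdy_next pointer in the TD, 16 priority levels with 0 the highest, and a head/tail pair per queue.

```c
#include <stddef.h>

#define NPRIORITIES 16           /* assumed; 0 is the highest priority */

typedef struct task_desc {
    struct task_desc *rdy_next;  /* intrusive link for the ready queue */
    int priority;
} TD;

typedef struct {
    TD *head;                    /* extraction end */
    TD *tail;                    /* insertion end */
} ReadyQueue;

static ReadyQueue ready[NPRIORITIES];

/* A task made ready goes at the end of its priority's queue. */
void enqueue( TD *td ) {
    td->rdy_next = NULL;
    ReadyQueue *q = &ready[td->priority];
    if ( q->tail ) q->tail->rdy_next = td;
    else q->head = td;
    q->tail = td;
}

/* Remove and return the head of the highest-priority non-empty
   queue; NULL when every queue is empty. */
TD *schedule( void ) {
    for ( int p = 0; p < NPRIORITIES; p++ ) {
        TD *td = ready[p].head;
        if ( td ) {
            ready[p].head = td->rdy_next;
            if ( !ready[p].head ) ready[p].tail = NULL;
            return td;
        }
    }
    return NULL;
}
```

The linear scan over priorities is constant time because NPRIORITIES is fixed; a bitmask of non-empty queues can shorten it further.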
Implementation Comments
The main data structure is usually an array of ready queues, one
for each priority.
Each ready queue is a list with a head pointer (for extraction) and
a tail pointer (for insertion).
Hint. The Art of Computer Programming (Donald Knuth) says that
circular queues are better. Why?
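One answer Knuth gives: a circular singly-linked list needs only a tail pointer, because the head is always tail->next, so each queue costs one pointer instead of two while insertion and extraction stay constant time. A sketch, with an assumed element type:

```c
typedef struct elem {
    struct elem *next;
    int value;
} Elem;

/* A circular queue is just a pointer to its tail; the head is
   tail->next. An empty queue is a null pointer. */
typedef Elem *CQueue;

/* Insert at the tail in constant time. */
void cq_insert( CQueue *q, Elem *e ) {
    if ( *q ) {
        e->next = (*q)->next;    /* new element points at the head */
        (*q)->next = e;
    } else {
        e->next = e;             /* singleton circle */
    }
    *q = e;                      /* new element becomes the tail */
}

/* Extract from the head in constant time; 0 when empty. */
Elem *cq_extract( CQueue *q ) {
    if ( !*q ) return 0;
    Elem *head = (*q)->next;
    if ( head == *q ) *q = 0;    /* queue becomes empty */
    else (*q)->next = head->next;
    return head;
}
```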
The queues of typical running system
- Highest priority:
- tasks waiting on interrupts, event-blocked tasks
- almost always blocked
- do minimal processing, then release tasks blocked on them
- Medium priority
- receive-blocked tasks
- almost always blocked
- provide service to application tasks
- Low priority
- send-blocked tasks
- blocked more often than not
- make decisions about what should be done next
- Lowest priority
- one task that runs without blocking
- the idle task
- uses power without doing anything
Implementation
Decisions
- How many priorities
- Which task should have which priority
- What to do when there is no ready task
General structure
Array of ready queues, one for each priority.
Each ready queue is a list with a head pointer (for extraction) and a tail
pointer (for insertion).
Implementing lists without memory allocation
You are probably used to implementing lists (and similar data
structures) like this
void insert( ... ) {
    struct element *e;
    e = malloc( sizeof( struct element ) );
    ...
}
We don't like this because it requires freeing memory. Remember
that Voyager code has been executing for almost forty years.
Here's the most common way to do this without allocating memory
typedef struct task_desc { ...
    struct task_desc *rdy_next;
    ...
} TD;
All the allocation is done when the task descriptors are declared.
Of course, because allocating and freeing constant sized pieces
of data can be done in constant time, you could allocate a pool
of list elements when you initialize, and manage it using a free
list.
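Such a pool with a free list might look like the following. The element type and pool size are assumptions; the point is that both operations are a single pointer swap, so allocation and freeing are constant time.

```c
#include <stddef.h>

#define POOL_SIZE 8              /* assumed pool size */

typedef struct element {
    struct element *next;        /* doubles as the free-list link */
    int payload;
} Element;

static Element pool[POOL_SIZE];  /* all allocation happens here, once */
static Element *free_list;

/* Thread every pool entry onto the free list at initialization. */
void pool_init( void ) {
    for ( int i = 0; i < POOL_SIZE - 1; i++ )
        pool[i].next = &pool[i + 1];
    pool[POOL_SIZE - 1].next = NULL;
    free_list = &pool[0];
}

/* Constant-time allocate: pop the head of the free list. */
Element *pool_alloc( void ) {
    Element *e = free_list;
    if ( e ) free_list = e->next;
    return e;                    /* NULL when the pool is exhausted */
}

/* Constant-time free: push back onto the free list. */
void pool_free( Element *e ) {
    e->next = free_list;
    free_list = e;
}
```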
Creating a Task
In creating a task you have to do two things
-
Get and initialize resources needed by the task
-
Make the task look as if it had just entered the kernel
-
it's ready to execute when it's scheduled
-
which means it must be on its readyQ
Things you need to do
You have the priority and a function pointer to the first instruction
in the arguments to the Create
request. You must get
an unused TD and memory for its stack.
-
memory could be associated with TD during initialization
-
allocate using a free list, which is actually a form of
constant time memory allocation
Mostly filling in fields in the TD.
-
task id
-
task id must be unique; Destroy introduces complications.
-
must be easy to find TD from task id.
-
stack pointer
-
See below. Before you can initialize the stack pointer
you must have decided how your kernel will allocate
stack space among tasks.
-
Before you initialize the stack you must first decide
what will be saved on the task's stack, what will be
saved in the task descriptor.
-
My rule of thumb is to put in the TD properties of the
task that are needed to handle requests, and nothing
more.
-
SPSR
-
link register
-
parent tid
-
the task that made the request, which for now is the
active task
-
this assumes that the active task pointer remains valid
until scheduling is performed.
-
return value
-
different return values for the active task and for the
next to be scheduled task.
-
the one for the active task -- the one receiving service
-- goes in its TD or on its stack.
-
the one for the next active task should already be in
its TD or on its stack.
-
state, which is READY
-
priority, needed to install the task into its ready queue
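One common way to satisfy both task-id requirements above is to pack the TD's index into the low bits of the tid and a reuse (generation) count into the high bits: finding the TD from a tid is a mask, and a slot reused after Destroy yields a fresh, unique tid. This is a sketch under assumed sizes, not the only scheme:

```c
#define MAX_TASKS  64            /* assumed TD pool size, a power of two */
#define INDEX_BITS 6             /* log2( MAX_TASKS ) */

/* tid = generation count above the pool index. */
int make_tid( int index, int generation ) {
    return ( generation << INDEX_BITS ) | index;
}

/* Recover the TD pool index from a tid with one mask. */
int tid_index( int tid ) {
    return tid & ( MAX_TASKS - 1 );
}
```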
Must also initialize the stack
-
stuff you left out of the TD must be part of stack initialization.
-
exactly as if the task had just done a kernel entry
-
look carefully at what your kernel exit code will do
-
At the end stack pointer must correspond to stack contents
-
I initialize the stack pointer to the top of allocated memory
then change it as I push stuff onto the stack
- imitating the context switch code
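Imitating the context switch code might look like the following sketch. The frame layout (the cpsr at the stack pointer, then the pc, then zeroed registers) and the USER_CPSR value are assumptions; they must mirror exactly what your own kernel exit pops off the stack.

```c
#include <stdint.h>

#define USER_CPSR 0x50u          /* assumed: user mode, IRQ on, FIQ off */

/* Build a fresh task's stack frame as if the task had just done a
   kernel entry: 14 zeroed words standing in for r0-r12 and lr, the
   entry point where the exit code expects the pc, and the initial
   cpsr on top. Returns the value to store as the task's sp_usr. */
uint32_t *init_stack( uint32_t *stack_top, uint32_t entry_point ) {
    uint32_t *sp = stack_top;
    for ( int i = 0; i < 14; i++ )
        *--sp = 0;               /* r0-r12 and lr start as zero */
    *--sp = entry_point;         /* popped into the pc on exit */
    *--sp = USER_CPSR;           /* popped into the spsr on exit */
    return sp;
}
```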