CS452 - Real-Time Programming - Spring 2012
Lecture 7 - Create, Scheduling
Public Service Announcements
- Due date for assignment 1
- Partners
After the Software Interrupt
In the kernel
The order matters, except for the last two
- Save the user state
- Get the request
- Retrieve the kernel state
There is more than one way to do almost everything in this list, and I
have chosen this way of describing what is to be done because it's the simplest
to describe, not because it's necessarily the best!
At this point the kernel is ready to handle the request.
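As a sketch of how these pieces fit together, here is one possible shape for
the kernel's main loop in C. The names (initialize, schedule, GetNextRequest,
handle) are hypothetical; your kernel's loop is whatever you need it to be.
    struct td;
    void initialize( void );
    struct td *schedule( void );
    int GetNextRequest( struct td *active );
    void handle( struct td *active, int request );

    int main( void ) {
        initialize();                               /* TDs, ready queues, the first user task */
        for ( ;; ) {
            struct td *active = schedule();         /* pick the next task to run */
            if ( !active ) break;                   /* nothing left that is ready */
            int request = GetNextRequest( active ); /* run it until its next swi */
            handle( active, request );              /* check errors, manipulate TDs, copy bytes */
        }
        return 0;                                   /* back to RedBoot */
    }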
Handling the Request
What needs to be done
- Check for errors
- Manipulate TDs
- Sometimes, copy bytes from one address space to another.
Saving the return value
The task that made the request may not be the next one to run.
- The kernel needs to save the request's return value until the next time
the requester is scheduled.
- One solution is to put it in the TD.
- It's also possible to put it immediately where it will be needed (such as
the saved r0 slot on the user stack).
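As a sketch in C of the two options, assuming a hypothetical TD that records
the saved user stack pointer, with the saved r0 in the lowest slot of the
stored context:
    struct td {
        int *sp;               /* saved user stack pointer */
        int return_value;      /* option 1: held here until the task next runs */
        /* ... other fields ... */
    };

    /* Option 1: remember it in the TD; the exit code copies it into r0 later. */
    void set_return_value_td( struct td *t, int value ) {
        t->return_value = value;
    }

    /* Option 2: write it into the saved r0 slot on the user stack right away,
       so that restoring the registers delivers it for free. */
    void set_return_value_stack( struct td *t, int value ) {
        t->sp[0] = value;      /* assumes r0 was saved in the lowest slot */
    }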
Scheduling
There are two important issues for scheduling
- When do we reschedule?
- Who do we activate when we schedule?
When to schedule
Every time we are in the kernel, so the issue is `When do we enter the
kernel?'
Three possibilities
- Tasks run to completion, which means until they make a request for
kernel services
- Event-driven pre-emption, which means when hardware makes a request for
service
- Time-slicing
We do the first two, but not time-slicing, because our tasks co-operate.
Time-slicing is needed when tasks are adversarial.
Who to Schedule
Whoever is needed to meet all the deadlines
- or to optimize something.
Because this is not an easy problem, we don't want to solve it within the
kernel. What the kernel does should be fast (=constant time) and not resource
constrained.
Inexpensive (=constant time) ways to schedule
Least expensive first
- active task decides = co-routines
- round robin
- everybody gets the same chance
- but usually long running time = unimportant
- priorities
- fixed at compile time
- fixed when task is created
- re-fixed every time the task is scheduled
- Do you have a good algorithm?
The number of priorities should be small, but not too small.
Tasks at the same priority should have the same precedence.
Scheduling algorithm
- Find the highest priority non-empty ready queue.
- Schedule the first task in the queue.
The state of the most recently scheduled (running) task is ACTIVE, not
READY.
The kernel maintains a pointer to the TD of the active task so it
knows which task is making the current request.
- When a task is made ready it is put at the end of its ready queue.
Implementation
Array of ready queues, one for each priority.
Each ready queue is a list with a head pointer (for extraction) and a tail
pointer (for insertion).
Hint. The Art of Computer Programming (Donald Knuth) says that circular
queues are better. Why?
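A sketch of one possible implementation in C. The number of priorities, the
field names, and the convention that 0 is the highest priority are all
assumptions of this sketch; the circular-queue variant from the hint is not
shown.
    #define NUM_PRIORITIES 16            /* an assumption; keep it small */

    struct td {                          /* only the fields the scheduler needs */
        struct td *next;
        int priority;
        /* ... the rest of the task descriptor ... */
    };

    struct ready_queue {
        struct td *head;                 /* extraction end */
        struct td *tail;                 /* insertion end */
    };

    static struct ready_queue ready[NUM_PRIORITIES];

    /* Constant-time insertion at the tail of the task's ready queue. */
    void make_ready( struct td *t ) {
        struct ready_queue *q = &ready[t->priority];
        t->next = 0;
        if ( q->tail ) q->tail->next = t;
        else q->head = t;
        q->tail = t;
    }

    /* Return the first task in the highest-priority non-empty queue, or 0 if
       every queue is empty. With a small, fixed NUM_PRIORITIES the loop is
       bounded; a bitmask of non-empty queues makes it strictly constant time. */
    struct td *schedule( void ) {
        for ( int p = 0; p < NUM_PRIORITIES; p++ ) {   /* 0 = highest priority */
            struct td *t = ready[p].head;
            if ( t ) {
                ready[p].head = t->next;
                if ( !ready[p].head ) ready[p].tail = 0;
                return t;                /* its state becomes ACTIVE */
            }
        }
        return 0;                        /* no ready task */
    }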
Implementation decisions
- How many priorities
- Which task should have which priority
- What to do when there is no ready task
The queues of a typical running system
- Highest priority:
- tasks waiting on interrupts
- almost always blocked
- do minimal processing, then release tasks blocked on them
- Medium priority
- receive-blocked tasks
- almost always blocked
- provide service to application tasks
- Low priority
- send-blocked tasks
- blocked more often than not
- make decisions about what should be done next
- Lowest priority
- one task that runs without blocking
- the idle task
- uses power without doing anything
Before the Software Interrupt
After a while it's time to leave the kernel
- Schedule the next task to run
- i.e. get the value of active
- Call GetNextRequest( active )
Inside GetNextRequest
- From TD, or the user stack
- get sp_usr
- set spsr_svc = cpsr_usr
- You should understand how this takes us back to user mode.
- set lr_svc = pc for return to user mode
- Save kernel state on kernel stack
- Combined with 6 above, this should be a NOP
- Set return value by overwriting r0 on user stack
- Switch to system mode
- Load registers from user stack
- Combined with 3 above, this should be a NOP
- Return to supervisor mode
- Let it go
movs pc, lr
The instruction after this one is normally the kernel entry.
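One way to write this sequence is as gcc inline assembly inside a naked
function, as sketched below. The TD layout (sp, spsr, and resume pc at offsets
0, 4, and 8), the choice to push the user's r0-r12 and lr on the user stack at
kernel entry with r0 lowest, and the assumption that the return value has
already been written over that saved r0 are conventions of this sketch, not
requirements.
    struct td {
        unsigned int sp;    /* offset 0: saved user stack pointer (an assumption) */
        unsigned int spsr;  /* offset 4: saved user cpsr                          */
        unsigned int pc;    /* offset 8: where the task resumes                   */
        /* ... */
    };

    /* Leave the kernel and resume the active task. Control comes back only
       through the kernel entry, which pops the kernel registers saved here. */
    void __attribute__(( naked )) kernel_exit( struct td *active ) {
        asm volatile(
            "stmfd sp!, {r4-r12, lr}  \n"  /* save kernel state on the kernel stack */
            "ldr   r1, [r0, #0]       \n"  /* r1 = user sp (active TD arrives in r0) */
            "ldr   r2, [r0, #4]       \n"  /* r2 = user cpsr                         */
            "ldr   lr, [r0, #8]       \n"  /* lr_svc = resume address                */
            "msr   spsr_cxsf, r2      \n"  /* spsr_svc = user cpsr                   */
            "msr   cpsr_c, #0xdf      \n"  /* switch to system mode, IRQ/FIQ masked  */
            "mov   sp, r1             \n"  /* install the user stack pointer         */
            "ldmfd sp!, {r0-r12, lr}  \n"  /* load user registers; r0 = return value */
            "msr   cpsr_c, #0xd3      \n"  /* return to supervisor mode              */
            "movs  pc, lr             \n"  /* back to the task, restoring the cpsr   */
        );
    }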
Making the Stub that Wraps swi
For each kernel primitive there must be a function available in user code:
the kernel's API.
- e.g.
int Create( int priority, void ( *code ) ( ) );
What gcc does for you
Before calling Create
- gcc saves the scratch registers to memory.
- gcc puts the arguments into the scratch registers, and possibly on the
stack.
While calling Create
bl to the entry point of Create
While executing Create
- gcc saves the registers that it thinks will be altered
during execution of the function.
- gcc thinks wrong, because only the assembler knows that swi is in
the instruction stream
- your code gets executed
- gcc restores the registers it saved, and only those registers.
Exiting from Create
- mov pc, lr, or equivalent, is executed, returning execution to the
instruction following the bl
After calling Create
- gcc stores register r0, the return value, in the variable to which the
result of Create is assigned.
What the code you write does
- Moves the arguments from gcc's locations to whatever convention you
choose for your kernel
- Does swi n, where n is the code for Create.
- Moves the return value from your kernel's conventional location to
r0.
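As a sketch, here is one way to write such a stub for Create using gcc inline
assembly. The request code (1) and the convention that the arguments stay in
r0 and r1 with the result coming back in r0 are assumptions of this example.
    int Create( int priority, void (*code)( ) ) {
        /* bind the arguments to the registers the kernel will look in */
        register int arg0 asm( "r0" ) = priority;
        register void (*arg1)( ) asm( "r1" ) = code;

        asm volatile(
            "swi 1"                 /* 1 = hypothetical request code for Create */
            : "+r"( arg0 )          /* r0 comes back holding the return value   */
            : "r"( arg1 )
            : "memory" );

        return arg0;                /* gcc expects the result in r0 anyway      */
    }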
Creating a Task
In creating a task you have to do two things
- Get and initialize resources needed by the task
- Make the task look as if it had just entered the kernel
- it's ready to execute when it's scheduled
Things you need to do
Get an unused TD and memory for its stack
- memory could be associated with TD during initialization
- actually a form of constant time memory allocation
- unless you implement Destroy
Mostly filling in fields in the TD.
- task id
- stack pointer
- SPSR
- link register
- parent tid
- return value
- a dummy for the newly created task
- the active (creating) task's return value is different, and goes in its
own TD
- state
- install in the ready queues
Must also initialize the stack
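As a sketch, assuming the same hypothetical TD layout as in the kernel-exit
sketch above and a saved context of fourteen words (r0-r12 and lr) at the top
of the new stack; the caller is assumed to have already found a free TD, a
stack, and a fresh tid.
    #define USER_MODE 0x10              /* cpsr for user mode, interrupts enabled */
    enum { READY, ACTIVE, ZOMBIE };     /* hypothetical state values */

    struct td {
        unsigned int sp, spsr, pc;      /* saved context, as in the exit sketch */
        int tid, parent_tid, priority;
        int return_value;
        int state;
        struct td *next;
    };

    /* Make a fresh task look as if it had just entered the kernel, so the
       ordinary exit code can start it the first time it is scheduled. */
    void init_task( struct td *t, unsigned int *stack_top, int tid,
                    struct td *parent, int priority, void (*code)( ) ) {
        t->sp = (unsigned int)( stack_top - 14 ); /* room for r0-r12 and lr      */
        t->spsr = USER_MODE;
        t->pc = (unsigned int)code;               /* the task starts at its entry */
        t->tid = tid;
        t->parent_tid = parent ? parent->tid : -1;
        t->priority = priority;
        t->return_value = 0;                      /* dummy; the new tid goes in
                                                     the creator's own TD        */
        t->state = READY;
        t->next = 0;                              /* then install it in a ready queue */
    }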
The Create Function
You also need an int Create( int priority, void (*code) ( ) )
function to call from user tasks.
Although it's no more than a wrapper there are a few problems to solve.
- Passing arguments
- On entry the arguments are somewhere, usually r0 & r1
- You have to put them where the kernel can find them.
- gcc's function entry code immediately puts them on the stack.
- In assembly you can find them using the frame pointer.
- Jumping into the kernel
- Getting the return value from the kernel and returning it.
- You find it where the kernel put it
- gcc's function exit code expects it to be indexed off the frame
pointer
- from where it goes into r0
Other Primitives
These primitives exist mostly so that we, which includes you, can ensure
that task creation and scheduling are working when there is not much else
implemented.
Tid MyTid( )
Self-explanatory
- Doesn't block, but does reschedule.
A question, to which there is a correct answer, or more specifically, a
correct (answer, reason) pair.
- Should the Tid be stored in user space?
Tid MyParentTid( )
Self-explanatory
- Doesn't block, but does reschedule.
Where is the parent Tid, and how does the kernel find it?
void Pass( )
Doesn't block: the task calling Pass( ) remains ready to execute.
Does reschedule.
When is Pass( ) a NOP?
void Exit( )
Calling task is removed from all queues, but its resources are not
reclaimed or reused.
That is, the task goes into a zombie state, in which it cannot be active
or ready, but continues to own all its resources.
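A toy example of user code exercising these primitives. Tid shown as an int,
the priority ordering (smaller number = higher priority), and the task names
are all assumptions of this sketch.
    typedef int Tid;                    /* assumption: tids fit in an int */
    int Create( int priority, void (*code)( ) );
    Tid MyTid( void );
    Tid MyParentTid( void );
    void Pass( void );
    void Exit( void );

    void ChildTask( void ) {
        Tid me = MyTid();               /* reschedules, but doesn't block */
        Tid parent = MyParentTid();
        (void)me; (void)parent;         /* a real task would print these  */
        Pass();                         /* stay ready, let someone else run */
        Exit();                         /* become a zombie; resources stay owned */
    }

    void FirstUserTask( void ) {
        Create( 1, ChildTask );         /* higher priority: runs before we resume */
        Create( 3, ChildTask );         /* lower priority: runs after we exit     */
        Exit();
    }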
How Should Execution Terminate?
Nicely.
When there are no tasks left on the ready queues, the kernel goes back to
RedBoot.
- This behaviour changes when hardware interrupts are implemented.