CS452 - Real-Time Programming - Spring 2017

Lecture 7 - Create

Public Service Announcements

  1. Due date for kernel 1: Friday, 26 May, 2017
  2. Please remember that some aspects of the system configuration you get depend on the state in which the previous group left it. I have seen very hard-to-find bugs appear at the last minute because a group unknowingly relied on state provided by the group before it.
  3. What swi and movs pc, lr do.

Leaving the kernel.

When the previous request has been handled completely, it's time to leave the kernel.

  1. You are in svc mode, executing kernel instructions
  2. Schedule to discover which task runs next.
  3. Enter activate( active )
  4. active is a pointer to a task descriptor (TD*). From it, or from the user stack, get sp_usr
  5. set spsr_svc = the saved cpsr of active
  6. Store kernel state on kernel stack
  7. get the address of the next instruction of active to be run, its pc
  8. set lr_svc = pc
  9. At this point movs pc, lr would start executing the code of active at the correct instruction, but the register values in the CPU are the kernel's.
  10. Switch to system mode
  11. Load registers from user stack
  12. Return to supervisor mode
  13. Set return value by overwriting r0
  14. Let it go
    movs   pc, lr

Somewhere after movs is the kernel entry.

After the kernel entry is the inverse of this sequence.
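
As a concrete illustration, here is one possible shape of the exit sequence in ARM assembly. It is a sketch, not the only correct design: the TD field offsets, the TD_RETVAL slot holding the return value of the task's last request, and the name activate are assumptions about one particular task-descriptor layout. The step numbers in the comments refer to the list above (the steps appear in a slightly different but equivalent order); 0xdf and 0xd3 are the cpsr values for system and supervisor mode with interrupts masked.

      .text
      .globl  activate
      .equ    TD_SP,     0                 @ assumed TD layout; adjust to match your struct
      .equ    TD_SPSR,   4
      .equ    TD_PC,     8
      .equ    TD_RETVAL, 12

      activate:                            @ activate( TD *active ), active arrives in r0
        stmfd   sp!, {r4-r12, lr}          @ 6. store kernel state on the kernel (svc) stack
        ldr     r1, [r0, #TD_RETVAL]       @ return value of the task's last request
        ldr     r2, [r0, #TD_SPSR]         @ the task's saved cpsr
        ldr     r3, [r0, #TD_SP]           @ 4. sp_usr from the task descriptor
        ldr     lr, [r0, #TD_PC]           @ 7-8. lr_svc := the task's next instruction
        msr     spsr_cxsf, r2              @ 5. spsr_svc := the task's cpsr
        str     r1, [sp, #-4]!             @ park the return value on the kernel stack
        msr     cpsr_c, #0xdf              @ 10. switch to system mode
        mov     sp, r3                     @ install sp_usr
        ldmfd   sp!, {r0-r12, lr}          @ 11. load registers from the user stack
        msr     cpsr_c, #0xd3              @ 12. return to supervisor mode
        ldr     r0, [sp], #4               @ 13. set the return value by overwriting r0
        movs    pc, lr                     @ 14. let it go: pc := lr_svc, cpsr := spsr_svc

Other designs instead write the return value into the saved r0 slot on the task's stack before this routine runs; either way the task sees it in r0, which is where the compiler expects a function result.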

Static/Global Variables.

There are four ways we can use memory for storing data

  1. in the text section with the instructions, constant data only, visible only to tasks that can see the instructions,
  2. in the bss section, uninitialized data only, visible to any task running the same code,
  3. in the data section, data used to initialize variables of any type; constant, initialized variables are somewhere else,
  4. on the stack, separate tasks with the same code each have their own stack.

C storage classes are not cleanly mapped onto these categories.

  1. auto, on the stack
  2. register, put it in a register if possible, not guaranteed
  3. static, allocated in data or bss rather than on the stack, so it keeps its value from call to call
  4. extern, refers to a name defined in another separately compiled file.
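
The sketch below shows where declarations of each kind typically land under the usual GCC/ELF conventions; the variable and function names are purely illustrative.

      /* Where each kind of declaration typically ends up. */
      const char banner[] = "hello";   /* constant data: with the text (rodata), read-only  */
      int hits;                        /* uninitialized global: bss                         */
      int limit = 100;                 /* initialized global: its initial value is in data  */
      extern int ticks;                /* defined in some other, separately compiled file   */

      int count( void ) {
        static int calls = 0;          /* static: in data/bss, one copy shared by every
                                          task running this code                            */
        register int i;                /* register: a hint the compiler may ignore          */
        int local = 0;                 /* auto: on this task's stack, private to the task   */

        for( i = 0; i < limit; i += 1 ) local += 1;
        calls += 1;
        return local + calls + hits;
      }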


Scheduling

There are two important issues for scheduling

  1. When do we reschedule?
  2. Who do we activate when we schedule?

When to schedule

Every time we are in the kernel, so the question is `When do we enter the kernel?'
The answer is: whenever user code executes SWI.

Who to Schedule

Whoever is needed to meet all the deadlines

Because this is not an easy problem, we don't want to solve it within the kernel. What the kernel does should be fast (=constant time) and not resource constrained.

Scheduling algorithm

  1. Find the highest priority non-empty ready queue. A ready queue can be as simple as a linked list of pointers to task descriptors.
  2. The task found is removed from its queue and becomes the active task. (Until this point active has pointed to the TD of the previously active task.)
  3. When a task is made ready it is put at the end of its ready queue. Thus, all tasks at the same priority get equal chances of running.
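
A minimal sketch of this algorithm, assuming an array of ready queues indexed by priority. The names (NUM_PRIORITIES, ready_queues, schedule) and the choice of 0 as the highest priority are illustrative, and the TD shown here is only the part the scheduler needs; the full task descriptor is sketched later in these notes.

      #define NUM_PRIORITIES 16                  /* illustrative: one of the decisions below */

      typedef struct task_descr TD;
      struct task_descr {
        /* ... registers, stack pointer, state, ... */
        int priority;
        TD *rdy_next;                            /* intrusive link for the ready queue */
      };

      struct ready_queue {
        TD *head;                                /* extraction at the head */
        TD *tail;                                /* insertion at the tail  */
      };

      static struct ready_queue ready_queues[NUM_PRIORITIES];   /* 0 = highest priority */

      /* Remove and return the highest-priority ready task, or 0 if none is ready. */
      TD *schedule( void ) {
        int p;
        for( p = 0; p < NUM_PRIORITIES; p += 1 ) {
          TD *active = ready_queues[p].head;
          if( active != 0 ) {
            ready_queues[p].head = active->rdy_next;
            if( ready_queues[p].head == 0 ) ready_queues[p].tail = 0;
            active->rdy_next = 0;
            return active;
          }
        }
        return 0;
      }

The linear scan costs one test per priority; a bit mask of non-empty queues plus a count-leading-zeros trick (or a small lookup table) finds the highest non-empty queue in constant time.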

Implementation Comments

The main data structure is usually an array of ready queues, one for each priority.

Each ready queue is a list with a head pointer (for extraction) and a tail pointer (for insertion).

Hint. The Art of Computer Programming (Donald Knuth) says that circular queues are better. Why?

The queues of a typical running system

  1. Highest priority
  2. Medium priority
  3. Low priority
  4. Lowest priority

Implementation

Decisions

  1. How many priorities
  2. Which task should have which priority
  3. What to do when there is no ready task

General structure

Array of ready queues, one for each priority.

Each ready queue is a list with a head pointer (for extraction) and a tail pointer (for insertion).

Implementing lists without memory allocation

You are probably used to implementing lists (and similar data structures) like this

      #include <stdlib.h>

      struct element {
        struct element *next;
        struct whatever *content;
      };
      struct list {
        struct element *head;
        struct element *tail;
      };
      void insert( struct list *l, struct whatever *c ) {
        struct element *e;

        e = malloc( sizeof( struct element ) );
        e->content = c;
        e->next = NULL;
        if( l->tail == NULL ) l->head = e;   /* the list was empty */
        else l->tail->next = e;
        l->tail = e;
      }
    
We don't like this because it requires allocating and freeing memory.

Here's the most common way to do this without allocating memory

      typedef struct task_descr { ...
                                  struct task_descr *rdy_next;
                                  ...
      } TD;
    
All the allocation is done when the task descriptors are declared.
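
With the link embedded in the task descriptor, making a task ready needs no allocation at all. A minimal sketch, reusing the TD above and the head/tail ready_queue structure from the scheduling sketch earlier (the name enqueue is illustrative):

      /* Make `t' ready by appending it to the queue for its priority.
         Nothing is allocated: the link field lives inside the TD itself. */
      void enqueue( struct ready_queue *q, TD *t ) {
        t->rdy_next = 0;
        if( q->tail == 0 ) q->head = t;      /* the queue was empty */
        else q->tail->rdy_next = t;
        q->tail = t;
      }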

Of course, because allocating and freeing constant sized pieces of data can be done in constant time, you could allocate a pool of list elements when you initialize, and manage it using a free list.
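
A sketch of that pool, reusing struct element from above: a fixed array threaded onto a free list at initialization, so that allocate and free are single pointer operations, i.e. constant time. The names pool_init, pool_alloc, pool_free and MAX_ELEMENTS are illustrative.

      #define MAX_ELEMENTS 128

      static struct element pool[MAX_ELEMENTS];
      static struct element *free_list;

      void pool_init( void ) {
        int i;
        for( i = 0; i < MAX_ELEMENTS - 1; i += 1 ) pool[i].next = &pool[i + 1];
        pool[MAX_ELEMENTS - 1].next = NULL;
        free_list = &pool[0];
      }

      struct element *pool_alloc( void ) {
        struct element *e = free_list;
        if( e != NULL ) free_list = e->next;
        return e;                            /* NULL when the pool is exhausted */
      }

      void pool_free( struct element *e ) {
        e->next = free_list;
        free_list = e;
      }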

