CS452 - Real-Time Programming - Spring 2008

Lecture 32 - Scheduling Limits


Questions & Comments

  1. Anything else that would be useful

Real-time Scheduling

In cyclic execution the deadline for a task is often its next scheduling: each execution must complete before the task is readied again.
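For example (illustrative numbers, not from the notes), a task readied every Ti = 10 ms must complete each execution before the next readying, 10 ms later; the execution readied at t = 30 ms has its deadline at t = 40 ms.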

Why are we discussing this particular case?

The theorem applies to the important question: how should fixed priorities be assigned to a set of periodic tasks so that every task meets every deadline?

And the answer is: give higher priority to tasks with shorter periods,

which is rate-monotone scheduling.
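Stated a little more explicitly, in the notation used in the proof below (Ti is the period of taski and Ci its computation time):

    Each taski is readied every Ti time units and needs Ci units of computation
    before its next release, which is also its deadline.
    Rate-monotone priority order: Ti < Tj implies taski has higher priority than taskj.
    Claim: if any fixed assignment of priorities is feasible, then the
    rate-monotone assignment is also feasible.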

How it's proved.

Consider all possible collections of tasks,

Show that, if the set of tasks can be scheduled successfully by any fixed assignment of priorities, then it can also be scheduled successfully by the rate-monotone assignment.

Proof

Definitions

Overflow at t. A task cannot be scheduled at t because its previous execution is not complete.

Feasible schedule. A set of priorities for which all tasks always meet their deadlines.

Response time. The time between the readying of a task and its completion. If a set of tasks is feasible, then for every execution of taski, RTi <= Ti.

Critical instant for a task. The scheduling time at which it has the longest response time.
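A small worked example of these definitions (the numbers are illustrative, not from the notes): let task0 have T0 = 5, C0 = 2 and task1 have T1 = 7, C1 = 3, with task0 at higher priority. If both are readied at t = 0, task0 finishes at t = 2 (RT0 = 2) and task1 finishes at t = 5 (RT1 = C0 + C1 = 5 < T1). If task1 is readied on its own, with no task0 work pending or arriving before it finishes, RT1 is only C1 = 3; the simultaneous release gives task1 its longest response time, its critical instant.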

Results

  1. The critical instant for a task occurs when it is readied at the same time as all higher-priority tasks.
  2. When there are exactly two tasks, the rate-monotone order is feasible whenever any order is. Figure 2.
    1. There are only two priority orders
      • task0 above task1
      • task1 above task0
    2. Assume that task0 has period T0 and task1 has period T1, with T0 < T1
      • The first order is rate-monotone
      • The second is not rate-monotone
    3. Choose the second order, and assume that it meets deadlines
    4. The critical instant for task0 is when it is readied at exactly the same time as task1
    5. Task1 is scheduled first; both tasks meet their deadlines.
    6. Since task1 runs first and task0 must still finish by its deadline T0, we have C0 + C1 <= T0
    7. If I interchange the priorities of the two tasks, deadlines are still met, because C0 + C1 <= T0 leaves room for both computations in each period of task0 (see the simulation sketch after this list).
  3. When there are n tasks, if any priority assignment is feasible, then the rate-monotone assignment is feasible.
    1. Consider the feasible schedule. Either
      • it is the rate-monotone schedule, and we have already proved what we want, or
      • it is not, in which case there is at least one task (period Tn) that sits immediately below a task with a longer period (T(n-1) > Tn) in the priority order
    2. Go down the schedule until you find such a pair.
    3. At the critical instant for taskn, it and all higher-priority tasks are readied at the same time, and, because the schedule is feasible, the deadline for taskn is met.
    4. By the two-task argument, swapping the priorities of the pair leaves both deadlines met. Figure 3.
    5. Keep swapping until the schedule is rate-monotone, and we have proved what we want.
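A minimal simulation sketch of the two-task case (not from the lecture: the struct, the function name feasible, and the task parameters are all chosen for illustration). It runs a preemptive fixed-priority scheduler in unit time steps and reports an overflow if a task is readied before its previous execution is complete. With C0 + C1 <= T0, both priority orders are reported feasible, which is what the swap argument predicts.

    #include <stdio.h>

    /* Preemptive fixed-priority scheduling of periodic tasks, simulated in
       unit time steps.  Tasks are listed in priority order, index 0 highest.
       An overflow is reported when a task is readied before its previous
       execution has completed. */

    struct task {
        int period;     /* Ti: interval between releases; also the deadline */
        int compute;    /* Ci: computation required per release             */
        int remaining;  /* work left over from the current release          */
    };

    static int feasible(struct task *tasks, int n, int horizon)
    {
        for (int t = 0; t < horizon; t++) {
            /* Release every task whose period boundary falls at time t. */
            for (int i = 0; i < n; i++) {
                if (t % tasks[i].period == 0) {
                    if (tasks[i].remaining > 0) {
                        printf("overflow: task %d at time %d\n", i, t);
                        return 0;          /* previous execution unfinished */
                    }
                    tasks[i].remaining = tasks[i].compute;
                }
            }
            /* Spend this time unit on the highest-priority pending task. */
            for (int i = 0; i < n; i++) {
                if (tasks[i].remaining > 0) {
                    tasks[i].remaining--;
                    break;
                }
            }
        }
        return 1;                          /* no deadline missed */
    }

    int main(void)
    {
        /* Two tasks with T0 = 5 < T1 = 7, C0 = 2, C1 = 3, so C0 + C1 <= T0. */
        struct task rm[]       = { { 5, 2, 0 }, { 7, 3, 0 } }; /* rate-monotone order */
        struct task reversed[] = { { 7, 3, 0 }, { 5, 2, 0 } }; /* task1 above task0   */

        printf("rate-monotone order feasible: %d\n", feasible(rm, 2, 35));
        printf("reversed order feasible:      %d\n", feasible(reversed, 2, 35));
        return 0;
    }

Simulating one hyperperiod (lcm(5, 7) = 35) is enough for these parameters: both tasks are released together at t = 0 and all work is finished before t = 35, so the schedule repeats from there.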

This style of reasoning is very common when performance guarantees must be provided for real-time systems.

Consequences

  1. It is possible to show, by similar reasoning, that rate-monotone scheduling is guaranteed to meet all deadlines as long as processor utilization stays below a bound of about 70% (worked out below this list).
  2. Many systems have just these characteristics.
  3. All systems that fail, fail at bottlenecks, where worst-case performance occurs. Bottlenecks are just critical instants, and similar reasoning is often possible.
  4. Even when there is ample CPU, similar reasoning can be applied to other limited resources, like bus bandwidth.
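The bound in item 1 is the Liu and Layland utilization bound, stated here in the notation of the proof (the derivation uses critical-instant reasoning but is not reproduced in these notes):

    U = C1/T1 + C2/T2 + ... + Cn/Tn

    Rate-monotone scheduling is guaranteed to be feasible whenever
    U <= n(2^(1/n) - 1),
    which is about 0.83 for n = 2 and decreases toward ln 2 (about 0.69) as n grows.

Task sets with utilization above this bound may still be feasible; the bound is sufficient, not necessary.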
