# Lecture 32 - Scheduling Limits


# Real-time Scheduling

In cyclic execution, a task's deadline is often the moment at which it is next scheduled.

• That is, a task completes successfully if and only if it completes before it is next scheduled.

Why are we discussing this particular case?

• Because one of the few hard results standing the test of time applies to this case, and
• because it is believed that this result applies to many neighbouring, practical, cases.
• It is believed often to apply to bottleneck cases, where everything happens together by accident.

The theorem answers the important question,

• `Which task should be scheduled next in order to maximize use of the processor with no deadlines ever being missed?'

and the answer is

• `Rate-monotone scheduling': always give priority to the task with the shortest period.
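
Under the assumptions above (all tasks readied together at t = 0, each task's deadline being its next scheduling), rate-monotone scheduling can be sketched as a unit-time simulation over one hyperperiod. The function name and task representation here are illustrative, not from any real scheduler:

```python
from math import lcm

def rate_monotone_ok(tasks):
    """Simulate rate-monotone scheduling of (period T, cost C) pairs over
    one hyperperiod; return True if no deadline is ever missed."""
    # Rate-monotone priority: the shorter the period, the higher the priority.
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][0])
    horizon = lcm(*(T for T, _ in tasks))
    remaining = [0] * len(tasks)          # unfinished work per task
    for t in range(horizon):
        for i, (T, C) in enumerate(tasks):
            if t % T == 0:                # task i is (re)scheduled at t
                if remaining[i] > 0:
                    return False          # overflow at t: deadline missed
                remaining[i] = C
        for i in order:                   # run the highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)
```

For example, tasks (T, C) = (2, 1) and (4, 1) are schedulable, while (2, 1) and (3, 2) overflow (their combined utilization exceeds 1).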

### How it's proved.

Consider all possible collections of tasks,

• each task having a periodicity Ti, and an execution time Ci. (C is for cost.)

Show that, if the set of tasks can be scheduled successfully by any scheduling algorithm, then

• it will be scheduled successfully by rate-monotone scheduling.

### Proof

#### Definitions

Overflow at t. A task cannot be scheduled at t because its previous execution is not complete.

Feasible schedule. A set of priorities for which all tasks always meet their deadlines.

Response time. The time between the readying of a task and its completion. If a set of tasks is feasible then for every execution of every task RT < Ti.

Critical instant for a task. The scheduling time at which it has the longest response time.
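
The response time at the critical instant can actually be computed, using the standard fixed-point recurrence for fixed-priority scheduling: task i's response time is its own cost plus all the preemptions by higher-priority tasks that fit inside it. A sketch (the recurrence is classical; the function below is illustrative):

```python
from math import ceil

def worst_case_response_time(tasks, i):
    """tasks: (period T, cost C) pairs sorted highest-priority first.
    Return the response time of task i at its critical instant, or None
    if it would exceed the deadline Ti."""
    T_i, C_i = tasks[i]
    R = C_i
    while True:
        # Task i's own cost plus all preemptions by higher-priority tasks.
        R_next = C_i + sum(ceil(R / T) * C for T, C in tasks[:i])
        if R_next == R:
            return R                      # fixed point reached
        if R_next > T_i:
            return None                   # deadline missed: infeasible
        R = R_next
```

With tasks (2, 1) and (4, 1) in rate-monotone order, the second task's worst-case response time is 2, comfortably inside its period of 4.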

#### Results

1. The critical instant for a task occurs when it is readied at the same time as all other tasks.
2. When there are exactly two tasks, then rate monotone works. Figure 2.
1. There are only two priority orders
2. Assume that task0 has periodicity T0 < T1
• The first order is rate monotone
• the second is not rate monotone
3. Choose the second order, and assume that it meets deadlines
4. The critical instant for task0 is when it is readied at exactly the same time as task1.
5. Then task1 runs first, and for task0's deadline to be met both must complete within T0: C0 + C1 < T0.
6. If we interchange the priorities of the two tasks, deadlines are still met: task0 completes by C0 < T0, and task1 by C0 + C1 < T0 < T1.
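
The two-task result can be checked by brute force: simulate each priority order directly and confirm that whenever the non-rate-monotone order is feasible, the rate-monotone order is too. A sketch under the assumptions of these notes (simultaneous readying at t = 0, deadline equal to the next scheduling; names illustrative):

```python
from math import lcm
from itertools import product

def fixed_priority_ok(tasks):
    """Simulate a given priority order (earlier (T, C) entries preempt
    later ones) over one hyperperiod; True if no deadline is missed."""
    horizon = lcm(*(T for T, _ in tasks))
    remaining = [0] * len(tasks)
    for t in range(horizon):
        for i, (T, C) in enumerate(tasks):
            if t % T == 0:
                if remaining[i] > 0:
                    return False          # overflow at t
                remaining[i] = C
        for i in range(len(tasks)):       # run the highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)

# Exhaust small two-task systems with T0 < T1: whenever the non-rate-monotone
# order (task1 first) meets all deadlines, the rate-monotone order does too.
for T0, C0, T1, C1 in product(range(1, 6), repeat=4):
    if T0 < T1 and fixed_priority_ok([(T1, C1), (T0, C0)]):
        assert fixed_priority_ok([(T0, C0), (T1, C1)])
```
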
3. When there are n tasks, then if there is a feasible schedule, rate monotone provides a feasible schedule.
1. Consider the feasible schedule. Either
• it is the rate-monotone schedule, and we have already proved what we want, or
• it is not, in which case there is at least one task (period Tn) ranked below a slower task (period T(n-1) > Tn)
2. Go down the schedule until you find such an adjacent pair.
3. At the critical instant for taskn, all higher-priority tasks and taskn itself are readied at the same time, and the deadline for taskn is met.
4. Swapping priorities clearly leaves both deadlines met. Figure 3.
5. Keep swapping until the schedule is rate-monotone, and we have proved what we want.
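
The induction above is essentially a bubble sort on priorities: each adjacent swap preserves feasibility, and the swaps terminate in the rate-monotone order. A sketch of the swapping process alone (the periods are illustrative):

```python
def swap_to_rate_monotone(periods):
    """periods: task periods listed from highest to lowest priority.
    Repeatedly swap adjacent pairs where a slower task outranks a
    faster one, recording each swap, until the order is rate-monotone."""
    order, swaps = list(periods), []
    changed = True
    while changed:
        changed = False
        for i in range(len(order) - 1):
            if order[i] > order[i + 1]:   # slower task above a faster one
                swaps.append((order[i], order[i + 1]))
                order[i], order[i + 1] = order[i + 1], order[i]
                changed = True
    return order, swaps
```

For instance, priorities ordered by periods [5, 2, 7, 3] reach the rate-monotone order [2, 3, 5, 7] after three swaps.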

This style of reasoning is very common when performance guarantees must be provided for real-time systems.

### Consequences

1. It is possible to show, by similar reasoning, that the guaranteed processor utilization is bounded above by n(2^(1/n) − 1) for n tasks, which tends to ln 2 ≈ 0.693, about 70%.
2. Many systems have just these characteristics.
3. Systems that fail do so at bottlenecks, where worst-case performance occurs. Bottlenecks are just critical instants, so similar reasoning is often possible.
4. Even when there is ample CPU, similar reasoning can be applied to other limited resources, like bus bandwidth.
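
The roughly 70% figure in consequence 1 is the Liu and Layland bound U(n) = n(2^(1/n) − 1), which falls toward ln 2 as the number of tasks grows:

```python
from math import log

def utilization_bound(n):
    """Least upper bound on total utilization that rate-monotone
    scheduling can always guarantee for n tasks."""
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 3, 10):
    print(n, round(utilization_bound(n), 3))   # 1.0, 0.828, 0.78, 0.718
print(round(log(2), 3))                        # 0.693, the limit
```

A task set whose total utilization is below the bound is always schedulable by rate-monotone scheduling; above the bound, schedulability depends on the particular periods.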