
# Real-time Scheduling

Much real-time computing operates in an environment like the following:

1. Groups of sensors that are polled with fixed periodicity
2. Sensor input triggers tasks that also run with fixed periodicity

A typical example: a group of sensors that returns

• the rotational speed of the wheels, and
• the exhaust mixture, and
• the torque, and ...

and a set of tasks that

• updates the engine parameters, and
• updates the transmission parameters, and
• passes the speed on to the instrument controller, and ...

Each time a sensor returns a new datum, a scheduler runs and starts the corresponding task running.

Your kernel can handle problems like this one pretty efficiently,

• but you can do better.
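The event-driven dispatch described above can be sketched as a table mapping each sensor to the task its readings trigger. The sensor names and task bodies below are illustrative stand-ins, not from any real controller:

```python
# Minimal sketch of event-driven dispatch: each sensor id maps to the task
# (a plain function here) that its readings trigger. Names are hypothetical.
def update_engine(datum):
    return ("engine", datum)

def update_transmission(datum):
    return ("transmission", datum)

DISPATCH = {
    "wheel_speed": update_engine,
    "torque": update_transmission,
}

def on_sensor_datum(sensor_id, datum):
    """Run by the scheduler each time a sensor returns a new datum."""
    task = DISPATCH[sensor_id]
    return task(datum)
```

A real kernel would queue the task and dispatch it by priority rather than call it directly; the table lookup is the essential step.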

## Cyclic Execution

Let's make a schedule:

```
A                                       A
AC  BA    A C  A    A  C A    A B CA    A    C    A    AC B A    A C  A    A  C A    B
|    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |    |
|                           |                         |                          |
|          |          |          |          |          |          |          |
_________________________________________________________________________________ time
```

Because the times are integers, the pattern repeats after a while.

• The total amount of thinking you have to do is finite.
• The thinking you do is inserting into the schedule the (worst-case) amount of processing that needs to be done.
• Work it all out, using heuristics no doubt, so that nothing collides.
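A schedule worked out this way runs as a cyclic executive: a fixed table of frames, each listing the tasks to run in order. A minimal sketch, with task names A, B, C as in the diagram and made-up frame contents:

```python
# A hand-built cyclic executive: the whole schedule is a fixed table of
# frames; each frame lists the tasks to run, in order. Bodies are stand-ins.
def A(): return "A"
def B(): return "B"
def C(): return "C"

# One master cycle, worked out offline so that nothing collides.
SCHEDULE = [
    [A, C], [B, A], [A, C], [A], [A, C],  # frames 0..4
    [A], [A, B, C], [A], [C], [A],        # frames 5..9
]

def run_master_cycle():
    """Run every frame of the table once; a real executive repeats forever."""
    trace = []
    for frame in SCHEDULE:
        for task in frame:
            trace.append(task())
        # here a real executive would idle until the next clock tick
    return trace
```

The design choice is that all scheduling decisions are made offline; at run time the executive only walks the table.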

### Make it a little easier

1. Make the complete pattern short by making sensor periods multiples of one another, if you can control the sensor periods.
• There is an underlying clock.
• Sensor i is read once every n_i ticks.
• The master cycle is LCM(n_1, n_2, n_3, ...).
• Schedule the master cycle by hand (= by brain).
2. Standardize the processing at each point
3. Minimize the interaction between tasks
4. Simple procedure
1. Processing needed by task i is p_i.
2. Share of CPU needed by task i is s_i = p_i/n_i.
3. Require s_1 + s_2 + s_3 + ... < 1. Otherwise you must
• get a bigger processor, or
• reduce the amount of computing needed.
4. The deadline for task i is d_i after the sensor report. Require p_i ≤ d_i; otherwise
• you always miss that deadline.
5. Prove some theorems, such as Liu & Layland
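The simple procedure above can be sketched in a few lines. The periods n_i and processing times p_i below are made-up numbers for illustration:

```python
from math import lcm

# Sketch of the "simple procedure": periods n_i (in ticks) and worst-case
# processing times p_i are illustrative values, not from a real system.
periods = [4, 5, 10]   # n_i: sensor i is read every n_i ticks
work    = [1, 1, 2]    # p_i: processing needed by task i

master_cycle = lcm(*periods)                      # LCM(n_1, n_2, n_3, ...)
shares = [p / n for p, n in zip(work, periods)]   # s_i = p_i / n_i
total = sum(shares)

feasible = total < 1   # otherwise: bigger processor, or less computing
```

Here the master cycle is 20 ticks and the total CPU share is 0.65, so the set is feasible under this test.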

### Theorems

Important assumptions

• No interaction except contention for the CPU
• A task must finish processing event n before event n+1 occurs

Main idea

• Identify the bottleneck
• We call this the critical instant for a task.

Main result

• A method for deriving priorities from task characteristics
• rate-monotonic scheduling
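The priority rule itself is one line: the shorter a task's period, the higher its priority. A minimal sketch, with hypothetical task names and periods:

```python
# Rate-monotonic priority assignment: sort tasks by period; the task with
# the shortest period gets the highest priority. Values are illustrative.
task_periods = {"engine": 10, "transmission": 20, "instruments": 50}

# First element has the highest priority.
priority_order = sorted(task_periods, key=task_periods.get)
```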

Subsidiary result

• Maximum possible CPU utilization
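The Liu & Layland bound makes this concrete: a set of n tasks is schedulable under rate-monotonic scheduling whenever total utilization is at most n(2^(1/n) − 1), which tends to ln 2 ≈ 0.693 as n grows. A sketch of the test:

```python
# Liu & Layland utilization bound for rate-monotonic scheduling.
def rms_bound(n):
    """Maximum guaranteed-schedulable utilization for n tasks."""
    return n * (2 ** (1 / n) - 1)

def rms_schedulable(utilizations):
    """Sufficient (not necessary) schedulability test: a task set above
    the bound may still be schedulable, but is not guaranteed to be."""
    return sum(utilizations) <= rms_bound(len(utilizations))
```

For one task the bound is 1.0; for two tasks it is about 0.828.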