CS452 - Real-Time Programming - Spring 2010
Lecture 29 - Reservations
Public Service Announcements
- Due date for Tracking 2
- Friday November 26, seven days from today.
- Bring your documentation to class.
- Project plan
Reservations
Something Essential that You Must Do
Design your reservation system before coding it.
Before coding your reservation system, work it out on paper and make sure
that it handles all the generic cases you can think of:
- One train following another
- Two trains on a collision course
- There are one or more switches in the path
Common Multi-train Tracking Problems
- Two trains waiting on the same sensor report
- One is bound to get inconsistent state
- Should be solved by the reservation system
- Spurious sensor reports that match one a train is expecting
  - Recover from such an error by back-tracking
- Permanently malfunctioning turnouts
- Can't be switched; always derail
- Alter track graph
- Permanently malfunctioning sensors
- Usually fail on because of sticking
- Unstick by hand
- Alter track graph and mask reports
- Finding the trains at the beginning
- One at a time
- Move slowly
Common Reservation System Problems
- Reservations branch out ahead and cover a lot of the track
- Shrink reservations as trains slow down
- Reservations are not released.
  - Looks as though there are phantom trains in the system
- Usually most of a reservation is released, but not all.
- Reservation leap-frogging
- Two trains are approaching one another; each gets a reservation
behind the other. (Badly needs a diagram.)
- Ask for and give out reservations in the right order.
Useful debugging aids
- Insert/remove reservations by hand from the prompt
- Query reservations (and who holds them) from the prompt
Common Route-Finding/Following Bugs
- Train derails on turnout after changing direction
- Improve acceleration/deceleration calibration
- Switch switches too late
- Treat command latencies systematically
Useful debugging aids
- Add/subtract switches, sections of track from graph by hand
Real-time Scheduling
Much real-time computing operates in an environment like the following
- Groups of sensors that are polled with fixed periodicity
- Sensor input triggers tasks which also run with fixed periodicity
A typical example: a group of sensors that returns
- the rotational speed of the wheels, and
- the exhaust mixture, and
- the torque, and ...
and a set of tasks that
- updates the engine parameters, and
- updates the transmission parameters, and
- passes the speed on to the instrument controller, and ...
Each time a sensor returns a new datum, a scheduler runs, which
- readies any task that is waiting on that datum,
- decides which of the ready tasks to schedule, and
- starts it running.
Your kernel can handle problems like this one pretty efficiently.
Cyclic Execution
Let's make a schedule
[Hand-drawn schedule: executions of tasks A, B, and C slotted along a time
axis, with tick marks showing each sensor's period.]
Because the times are integers, the pattern eventually repeats.
- The total amount of thinking you have to do is finite.
- The thinking you do is inserting into the schedule the amount of
processing that needs to be done
- Work it all out so that nothing collides.
- using heuristics, no doubt
Make it a little easier
- Make the complete pattern short by making sensor periods multiples of
  one another (if you can control the sensor periods).
- Underlying clock.
- sensor i is read once every ni ticks.
- Master cycle is LCM( n1, n2, n3, ... )
- Schedule the master cycle by hand (= by brain)
- Standardize the processing at each point
- Minimize the interaction between tasks
- Simple procedure
- Processing needed by task i is pi
  - Share of CPU needed by task i is si = pi/ni
  - Require s1 + s2 + s3 + ... < 1; otherwise you must
- get a bigger processor, or
- reduce the amount of computing needed
  - Deadline for task i is di after the sensor report. Require pi < di:
    otherwise
    - you always miss that deadline.
- Prove some theorems, such as Liu & Layland's
Theorems
Important assumptions
- No interaction except contention for CPU
- All tasks scheduled periodically
- Task must finish processing event n before event n+1 occurs
- When tasks are ready simultaneously the higher priority one runs
Main idea
- Identify the bottleneck
- We call this the critical instant for a task.
Main result
- A method for deriving priorities from task characteristics
Subsidiary result
- Maximum possible CPU utilization
Is this of any practical value?