CS452 - Real-Time Programming - Fall 2008
Lecture 21 - Pathologies
Questions & Comments
- next Monday
Pathologies
1. Deadlock
2. Livelock (Deadly Embrace)
3. Critical Races
Theory of relativity and the event horizon
One task tests a condition
- takes an action based on that condition
- which will remedy it
Another task tests the same condition
- takes an action to remedy the condition
And the two actions are incompatible.
That is, information about the action of the first task did not spread
instantaneously:
- that race between the information and the second task's test was won by
the test.
Concrete example.
Engineer asks the name server for switchDetective
- gets back, 'No such task.'
- Creates a switchDetective
- starts interacting with switchDetective using the Pid returned by
Create
Before switchDetective does RegisterAs, a second Engineer asks the name
server for switchDetective
- gets back, 'No such task.'
- Creates a switchDetective
- starts interacting with switchDetective using the Pid returned by
Create
Each switchDetective, or its courier, does Put and Get to find out about
switches.
This is only a little bad: you probably won't even notice it, except that
your performance will be bad.
But it can be much worse
- consider having two copies of a task that does track reservations.
Symptoms
- Small changes in priorities change execution unpredictably, and
drastically.
- Debugging output changes execution drastically.
Solutions
- A protocol for using the name server (sketched below)
- e.g. RegisterAs returns the Pid of a task already registered under
that name
- if the existing task is known to be bad, then do ForceRegisterAs
- otherwise use the existing one.
- At initialization this is a programming bug,
- which can be discovered by gating (forcing initialization into a
known order)
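
A minimal sketch of that protocol, in C, assuming the course's usual
name-server calls plus the RegisterAs/ForceRegisterAs extension proposed
above; the error code, the task name and the is_known_bad check are
illustrative assumptions, not a required API.

    /* Assumed kernel and name-server interface; signatures are illustrative. */
    extern int MyPid( );
    extern void Exit( );
    extern int RegisterAs( char *name );      /* extended: returns the Pid already
                                                 registered under the name, if any */
    extern int ForceRegisterAs( char *name ); /* overwrite an existing registration */
    extern int is_known_bad( int pid );       /* hypothetical liveness check */

    #define NO_SUCH_TASK (-1)                 /* assumed error code */

    void switchDetective( ) {
        int existing = RegisterAs( "switchDetective" );
        if ( existing != NO_SUCH_TASK && existing != MyPid( ) ) {
            /* lost the race: another switchDetective registered first */
            if ( is_known_bad( existing ) )
                ForceRegisterAs( "switchDetective" );
            else
                Exit( );                      /* defer to the existing one */
        }
        /* normal switchDetective behaviour follows */
    }

On the Engineer side the natural companion behaviour is presumably to retry
the name-server lookup if the switchDetective it created has exited, so that
it ends up talking to whichever copy won the race.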
4. Performance
Remote Delay
Priority
- Priority inversion
- One resource, many clients
Congestion
- Too many tasks
- blocked tasks don't count,
- lowest priority tasks almost don't count
Layered abstractions are costly
e.g. Notifier -> SerialServer -> InputAccumulator -> Parser ->
TrackServer (see the sketch below)
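
To make the cost concrete, here is a sketch of one middle layer in such a
chain, assuming the usual Send/Receive/Reply primitives; the signatures and
the byte-at-a-time forwarding are illustrative assumptions.

    /* Assumed message-passing primitives; signatures are illustrative. */
    extern int Send( int tid, char *msg, int msglen, char *reply, int replylen );
    extern int Receive( int *tid, char *msg, int msglen );
    extern int Reply( int tid, char *reply, int replylen );

    /* One middle layer in the chain (e.g. between SerialServer and Parser):
     * every byte it relays costs a Receive/Reply pair below and a Send above,
     * i.e. two message transactions and several context switches per layer. */
    void middle_layer( int upstream ) {
        for ( ;; ) {
            int below;
            char c;
            Receive( &below, &c, 1 );               /* byte from the layer below */
            Reply( below, (char *)0, 0 );           /* unblock the sender at once */
            Send( upstream, &c, 1, (char *)0, 0 );  /* forward it up the chain */
        }
    }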
Practical Control
Data
What levers do you have?
- nominal speed of each train
- switch settings
What do you need to control?
- where each train is over time.
What input do you get?
- time and direction of sensor triggering
What other data do you have?
- map of track segments
- length of each track segment (see the example structures below)
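
As a sketch only, the data above might be held in structures like these;
every name and size is an illustrative assumption, not a required layout.

    /* Illustrative containers for the levers, inputs and map listed above. */
    #define MAX_TRAINS     8
    #define MAX_SWITCHES  32
    #define MAX_SEGMENTS 128

    struct segment {
        int sensor_a, sensor_b;   /* sensors at the two ends of the segment */
        int length_cm;            /* measured length of the segment */
    };

    struct train {
        int speed_setting;        /* nominal speed most recently sent */
        int last_sensor;          /* last sensor this train triggered */
        int last_trigger_ticks;   /* time of that trigger */
    };

    struct track_model {
        struct segment segments[MAX_SEGMENTS];
        struct train   trains[MAX_TRAINS];
        char           switches[MAX_SWITCHES];   /* 'S'traight or 'C'urved */
    };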
How do you find out where the train is?
- On a sensor trigger you know that a train triggered the sensor at time t
Must find out which train
- Requires
- knowing which train is near the sensor
- this is a prediction
- Predictions
- at the core of successful control
- humans make millions of predictions
- almost all predictions are correct
- Correct predictions require no more processing
- Make the prediction when each train hits a sensor
- Expect the next sensor if everything goes perfectly
- Expect other sensors if something goes wrong
- Depends on expected travel time, which depends on velocity
since we know the segment lengths
- Velocity is (possibly) a function of
- Engine
- speed setting
- recent changes in speed setting
- track segment
- switch settings
- possibly (probably) time
- After a sensor trigger we can estimate position
- sensor position plus velocity * (time since the trigger); see the
sketch below
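
A sketch of both calculations, the next-sensor prediction and the
between-sensor position estimate; the units (cm, cm/s, clock ticks) and
the tick rate are assumptions, and integer arithmetic is used throughout.

    /* Assumed units: lengths in cm, velocity in cm/s (nonzero),
     * time in clock ticks at an assumed TICKS_PER_SECOND. */
    #define TICKS_PER_SECOND 100

    /* Prediction made at a sensor hit: when should the next sensor fire,
     * if everything goes perfectly? */
    int predict_next_trigger( int trigger_ticks, int segment_length_cm,
                              int velocity_cm_per_s ) {
        return trigger_ticks
             + ( segment_length_cm * TICKS_PER_SECOND ) / velocity_cm_per_s;
    }

    /* Position estimate between sensors: sensor position plus
     * velocity * (time since the trigger). */
    int estimate_position_cm( int sensor_position_cm, int velocity_cm_per_s,
                              int trigger_ticks, int now_ticks ) {
        int elapsed = now_ticks - trigger_ticks;
        return sensor_position_cm
             + ( velocity_cm_per_s * elapsed ) / TICKS_PER_SECOND;
    }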
Velocity Estimation
This is an empirical problem.
- By measuring times and positions,
- we estimate velocity.
That is,
- if engine J at speed control C takes X seconds to go from sensor N to
sensor N+1, which are at the ends of segment M of length L
centimetres
- engine J on speed control C travelled L/X centimetres
per second in segment M
The past is a good predictor of the future. Therefore conclude that
- engine J on speed control C travels L/X centimetres per second in
segment M (see the sketch below)
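
A sketch of recording that measurement, one velocity per (engine, speed
control, segment); the table sizes and the tick rate are assumptions.

    /* Velocity table indexed by engine, speed control and segment, as in the
     * measurement above: engine J at speed control C covers segment M of
     * length L cm in X seconds, so store L/X cm/s. */
    #define MAX_TRAINS       8
    #define MAX_SPEEDS      15
    #define MAX_SEGMENTS   128
    #define TICKS_PER_SECOND 100

    static int velocity_cm_per_s[MAX_TRAINS][MAX_SPEEDS][MAX_SEGMENTS];

    void record_run( int engine, int speed, int segment,
                     int length_cm, int travel_ticks ) {
        /* L/X, converted from ticks to seconds with integer arithmetic */
        velocity_cm_per_s[engine][speed][segment] =
            ( length_cm * TICKS_PER_SECOND ) / travel_ticks;
    }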
But there is error in this measurement, from three possible sources
- screw-up errors
- throw them out
- sometimes you can eliminate them
- random errors
- average them out (see the sketch after this list)
- often you can turn random errors into systematic ones
- systematic errors
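
One common way to average out the random errors, offered here as an
illustrative choice rather than anything prescribed above, is an
exponentially weighted moving average: it needs no stored history and
smooths random error while still tracking slow drift.

    /* Exponentially weighted moving average of velocity measurements.
     * The 7/8 : 1/8 weighting and treating 0 as "no estimate yet" are
     * arbitrary illustrative choices, written in integer arithmetic. */
    int update_velocity( int old_estimate, int new_measurement ) {
        if ( old_estimate == 0 )
            return new_measurement;       /* first measurement: just take it */
        return ( 7 * old_estimate + new_measurement ) / 8;
    }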
How useful is yesterday's data?
Eliminating screw-up errors
Redefine the track
For example, if a sensor malfunctions frequently
- combine the two track segments into one
Transforming random errors
You can sometimes identify patterns in what you think are random errors
- e.g., you have one speed calibration for curved segments
- another for straight segments
- and discover that segments with switches are different.
Projecting out systematic errors
Subdivide the data.
- Do this as little as possible, but not too little
Return to: