CS452 - Real-Time Programming - Winter 2017

Lecture 27 - Pathologies II.

Public Service Announcements

  1. Train Control 3 demo on Tuesday, 28 March.
  2. Lecture menu:
       - What I saw yesterday
  3. The exam will start at 12:30 on 6 April 2017 and finish at 15:00 on 7 April 2017.


As we go down this list, both the difficulty of detecting the pathology and the length of the edit-compile-test cycle grow without bound.

1. Deadlock

One or more tasks will never run again.

One train is trying to go into a siding; a second train is trying to get out of the same siding. Each request is queued within the reservation server, waiting for the other to release. Notice that you need explicit code to get you out of this: neither task will ever run again on its own.

2. Livelock (Deadly Embrace)


One train trying to get into a siding, the other trying to get out, but now with polling: each sees the other, backs off, retries, and they block each other again.

Two trains meeting head-on over and over again.

3. Critical Races


  1. Two tasks, A & B, run at the same priority.
  2. A is doing a lot of debugging IO.
  3. B always reserves a section of track before A, and all is fine.
  4. The debugging IO is removed.
  5. A reserves the section before B can get it, and execution collapses.
  6. You lower the priority of A, putting it at the same level as a third task C.
  7. Now C executes faster and gets a resource before a fourth task D, and a new race appears.
  8. You shuffle priorities forever, eventually giving up and putting the debugging IO back in.


The order in which computation is done is an important factor in determining whether or not it is successful. Without knowing it, you have created a program whose correctness depends on execution order.

Critical races, like livelock, can be recognized by symptoms such as the following:


  1. Small changes in priorities change execution unpredictably, and drastically.
  2. Debugging output changes execution drastically.
  3. Changes in train speeds change execution drastically. How do you tell this apart from a bad calibration? (Your application knows where the train is and it doesn't help.)


Cures:

  1. Explicit synchronization between the racing tasks
  2. Gating, a technique of global synchronization

4. Performance

Changes in the performance of one task with respect to another often give rise to critical races.

Performance is the hardest problem to solve, and the hardest thing to get right.

In practice, how do you know you have performance problems? Problems I have seen:

Problems with priority

  1. Priority inversion
  2. One resource, many clients
  3. Tasks try to do too much


  1. Too many tasks

Layered abstractions are costly.

e.g. Notifier -> SerialServer -> InputAccumulator -> Parser -> TrackServer


  1. Too much terminal output interferes with train controller communication
  2. Requests to poll the sensors get backed up in the serial server, or whoever provides output buffering.


  1. Turn on optimization, but be careful
  2. Turn on caches

Size and align calibration tables to match the size and alignment of cache lines.

I think that this is stretching it.
