CS452 - Real-Time Programming - Spring 2011

Lecture 26 - Pathologies

Public Service Announcement

  1. Project proposals.
  2. Train availability.
  3. Second train control demo. `I must have been insane.'

Pathologies

As we go down this list, both the difficulty of detecting the pathology and the length of an edit-compile-test cycle grow without bound.

1. Deadlock

One or more tasks will never run again. For example (each case is sketched after the list):

  1. Task sends to itself (local: rest of system keeps running, task itself will never run)
  2. Every task does Receive( ) (global: nothing is running)
  3. Cycle of tasks sending around the cycle (local: other tasks keep running)
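
Minimal sketches of the three cases, in the same abbreviated pseudocode used later in these notes (task and message names are made up):

/* Case 1 (local): a task Sends to itself.  It blocks waiting for a
   reply that only it could produce, so it never runs again; the rest
   of the system is unaffected. */
Send( MyTid( ), request, result );    /* never returns */

/* Case 2 (global): every task calls Receive( ).  Each blocks waiting
   for a Send that no task is left to make, so nothing is READY. */
Receive( &tid, request );

/* Case 3 (local): A Sends to B, B Sends to C, C Sends to A.  Every
   task in the cycle waits on another blocked task; tasks outside the
   cycle keep running. */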

Kernel can detect such things
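
The first case can be caught at the moment of the call, and the second by the scheduler. A sketch, with hypothetical kernel names:

/* In the kernel's Send handler: a Send to yourself can never be
   satisfied, so refuse it immediately. */
if ( target_tid == active_task->tid ) {
   return ERR_SELF_SEND;              /* hypothetical error code */
}

/* In the scheduler: if no task is READY and none is waiting for an
   interrupt, every task is blocked on Send/Receive/Reply -- the
   global case -- and the kernel can report it instead of idling. */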

Potential deadlock can be detected at compile time

Solutions

2. Livelock (Deadly Embrace)

Definition

Two or more tasks are READY. For each task, the state of the other tasks prevents progress being made regardless of which task is ACTIVE.

A higher level of coordination is possible.

Two types of livelock exist

  1. Ones that are the result of bad coding
  2. Ones that are inherent in the application definition

Looking for solutions we prefer ones that avoid the central planner

Usually occurs in the context of resource contention

Livelock that's Really Deadlock

Solutions

  1. Make a single compound resource, BUT
  2. Impose a global order on resource requests that all clients must follow (sketched below).
  3. Create a mega-server that handles all resource requests
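
A sketch of solution 2, in the same pseudocode: if every client asks for resources in one agreed order (say, increasing resource number), then a client that is waiting holds only lower-numbered resources, so a cycle of waiting clients cannot form. The names below are illustrative.

/* Every client, no matter which pair of resources it needs: */
Send( propLow,  getresLow,  result );   /* always the lower-numbered resource first */
Send( propHigh, getresHigh, result );   /* nobody holds high while waiting for low */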

Real Livelock

Proprietor1 & proprietor2 fail the requests, replying "sorry" instead of queueing the client.
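
The clients stay READY but make no progress. A sketch of client 1 (client 2 is the mirror image, starting with resource 2):

Send( prop1, getres1, result );      /* granted */
Send( prop2, getres2, result );      /* "sorry": the other client holds it */
while ( result == "sorry" ) {
   Send( prop2, getres2, result );   /* both clients spin here, forever */
}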

Livelock that's Really a Critical Race

We could try to make the clients a little more considerate

while ( no resources ) {
   Send( prop1, getres1, result );
   while ( result == "sorry" ) {        /* poll until resource 1 is granted */
      Delay( ... );
      Send( prop1, getres1, result );
   }
   Send( prop2, getres2, result );
   if ( result == "sorry" ) {
      Send( prop1, relres1, ... );      /* give resource 1 back before backing off */
      Delay( ... );
   } else {
      break;                            /* holding both resources */
   }
}

Inherent Livelock

Remember the example where two trains come face to face, each waiting for the other to move. They will wait facing one another until the demo is over, probably polling.

What's hard about solving this problem?

In real life,

The easiest thing for you to do is to programme each engineer with (one possible combination is sketched after this list)

  1. detection, e.g.,
  2. work around, e.g.,
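
One possibility, sketched in the notes' pseudocode; ReserveAhead, ReverseAndReroute and the retry limit are all invented for illustration:

/* Inside the engineer's main loop */
Send( trackserver, ReserveAhead, result );
if ( result == "sorry" ) {
   refused++;
   if ( refused > RETRY_LIMIT ) {    /* detection: refused for too long, probably nose to nose */
      ReverseAndReroute( );          /* work-around: back up and take another route */
      refused = 0;
   }
   Delay( ... );
}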

3. Critical Races

Example

  1. Two tasks, A & B, at the same priority
  2. A is doing a lot of debugging IO
  3. B always reserves a section of track before A, and all is fine.
  4. Debugging IO is removed
  5. A reserves the section before B can get it, and execution collapses.
  6. You lower the priority of A to the same level as a third task, C.
  7. Now C gets a larger share of the processor, executes sooner, and grabs a resource before D, breaking something else.
  8. You shuffle priorities forever, eventually reverting them and putting the debugging IO back in.

Definition

The order in which computation is done is an important factor in determining whether or not it is successful.

Critical races, like livelock, can be the result of bad coding or inherent in the application definition.

Symptoms

  1. Small changes in priorities change execution unpredictably, and drastically.
  2. Debugging output changes execution drastically.
  3. Changes in train speeds change execution drastically.

`Drastically' means chaos in both senses of the term

  1. Sense one: a small change in the initial conditions produces an exponentially growing change in the system
  2. Sense two: exercise for the reader.

Solutions

  1. Explicit synchronization
  2. Gating is a technique of global synchronization (sketched below)
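
A sketch of gating with the kernel primitives, assuming N worker tasks and illustrative names. Each worker Sends to the gate when it reaches the synchronization point; the gate Replies to no one until all N have arrived, so every worker passes the gate together, in a known state.

/* The gate task */
for ( n = 0; n < N; n++ )
   Receive( &tid[n], msg );          /* worker n is now blocked at the gate */
for ( n = 0; n < N; n++ )
   Reply( tid[n], go );              /* release everyone at once */

/* Each worker */
Send( gate, ready, result );         /* returns only when all N workers are ready */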

4. Performance

The hardest problem to solve

Priority

The hardest thing to get right

Problems with priority

  1. Priority inversion
  2. One resource, many clients
  3. Tasks try to do too much

Congestion

  1. Too many tasks

Layered abstractions are costly

e.g. Notifier -> SerialServer -> InputAccumulator -> Parser -> TrackServer
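
To see why the layers add up: each arrow in that chain is at least one Send/Receive/Reply transaction, and each transaction costs message copies plus context switches, so a single input byte pays that price four times before the TrackServer sees it. (The exact counts depend on your kernel; the point is that the costs multiply with the depth of the layering.)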

Hardware

  1. Turn on optimization, but be careful
  2. Turn on caches

Size & align calibration tables by size & alignment of cache lines
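
A sketch of the idea in C, using GCC's alignment attribute; the 32-byte line size is an assumption, so substitute your processor's actual cache-line size.

#define CACHE_LINE 32                            /* assumed line size */

struct calib_entry {
   int  speed;                                   /* table key */
   int  velocity;                                /* measured value */
   char pad[CACHE_LINE - 2 * sizeof( int )];     /* fill out one full line */
} __attribute__(( aligned( CACHE_LINE ) ));

/* An array of calib_entry starts on a line boundary and each entry
   occupies exactly one line, so a lookup touches a single cache line. */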

