CS452 - Real-Time Programming - Fall 2009
Lecture 25 - Pathologies
Reminders
- Train tracking 1: new deadline
- Train tracking 1: demos
Calibration
Practical Issues
You might want to consider
- Using floating point for calculation. The easiest way to do this is to
have a single calibration task that
- receives measurements in fixed point,
- calculates internally in floating point, and
- provides current calibration parameters in fixed point.
If more than one task uses floating point, you must save and restore the
floating-point registers in your context switch, because access to the
floating-point processor is not atomic.
- Turning on optimization, but be careful.
- There are places where you have done register allocation by
hand.
- Previously undiscovered critical races could appear, and even
critical races associated with bus clocks.
- Size & align calibration tables by size & alignment of cache
lines
but only if access speed is a problem.
- Each train has a built-in velocity profile used when the train slows or
stops.
- Calibrating this correctly is essential.
- Calibrating this correctly is hard, or at least arduous.
You can create your own profile by a succession of speed commands.
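A hand-rolled profile amounts to stepping the speed down one notch at a time and letting the clock server pace the steps. A sketch in this course's style; `SetSpeed`, `clockServerTid`, and the step interval are assumptions, not a defined API:

```
void rampDown( int train, int from, int to, int ticksPerStep ) {
    /* step the speed down one notch at a time */
    for( int speed = from; speed > to; speed-- ) {
        SetSpeed( train, speed );               /* assumed train-command wrapper */
        Delay( clockServerTid, ticksPerStep );  /* course clock server */
    }
    SetSpeed( train, to );
}
```

Measuring where the train actually stops under each profile, against the sensors, is what makes the calibration arduous.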
Pathologies
1. Deadlock
One or more tasks will never run again. For example
- Task sends to itself (local: rest of system keeps running, task itself
will never run)
- Every task does Receive( ) (global: nothing is running)
- Cycle of tasks sending around the cycle (local: other tasks keep
running)
Kernel can detect such things
Possible send-blocking can be detected at compile time
- cycle in the send graph of all sends that could happen
- doesn't necessarily occur at run-time
- that is, it's a necessary but not sufficient condition
Solutions
- Gating
- Most common example is initialization, where the send/receive
pattern may differ from the steady-state FOREVER pattern
- Gate the end of initialization
- Couriers
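A courier breaks a potential send cycle by being the only task in the loop that Sends. A sketch in this course's Send/Receive/Reply style; the name-server lookups and message formats are illustrative:

```
void courier( ) {
    int producer = WhoIs( "producer" );
    int consumer = WhoIs( "consumer" );
    char item[ITEM_SIZE];
    FOREVER {
        Send( producer, "gimme", ..., item, ... );  /* blocks until work exists */
        Send( consumer, item, ..., ... );           /* blocks until delivered */
    }
}
```

Neither server ever Sends, so neither can appear in a send-graph cycle.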
2. Livelock (Deadly Embrace)
Usually occurs in the context of resource contention. For example
- client1 needs resource1 & resource2; obtains resource1 from
proprietor1; asks proprietor2 for resource2
- client2 needs resource1 & resource2; obtains resource2 from
proprietor2; asks proprietor1 for resource1
- Client
  Send( prop1, getr1, ... );
  Send( prop2, getr2, ... );
  // Use the resources
  The other client sends to the proprietors in the opposite order.
- Proprietor
  FOREVER {
    Receive( &clientTid, req, ... );
    switch ( req->type ) {
    case REQUEST:
      if ( available ) { Reply( clientTid, use-it, ... ); available = false; }
      else enqueue( clientTid );
      break;
    case RELEASE:
      Reply( clientTid, "thanks", ... );
      if ( !empty( Q ) ) Reply( dequeue( ), use-it, ... );
      else available = true;
      break;
    }
  }
- state:
- client1, client2: REPLY-BLOCKED - can't release resources
- proprietor1, proprietor2: SEND-BLOCKED - waiting for release
- Alternative: proprietor1 & proprietor2 fail requests they cannot
satisfy immediately
- Proprietor
  FOREVER {
    Receive( &clientTid, req, ... );
    switch ( req->type ) {
    case REQUEST:
      if ( available ) { Reply( clientTid, use-it, ... ); available = false; }
      else Reply( clientTid, "sorry", ... );
      break;
    case RELEASE:
      available = true;
      Reply( clientTid, "thanks", ... );
      break;
    }
  }
- Polling is the most likely result.
- Client
while ( Send( prop1, getr1, ... ) ) ;
while ( Send( prop2, getr2, ... ) ) ;
// Use the resources
- Concrete example:
- Join the tracks together with one set of tasks managing one track,
another set managing another
- i.e. Two track reservation servers
- What happens when a train moves from one track to the other?
- This is a real-life, many-dollar problem in the mobile-phone
industry.
Kernel(s) cannot easily detect livelock
Possible solutions
- both resources in one proprietor
- global order on resource requests
- ethernet algorithm
- Release; wait a random time; try again
- Requires proprietor who says "sorry".
Could consider this a form of critical race.
3. Critical Races
Example
- Two tasks, A & B, at the same priority
- A is doing a lot of debugging IO
- B always reserves a section of track before A, and all is fine.
- Debugging IO is removed
- A reserves the section before B can get it, and execution
collapses.
- Lower the priority of A to the same level as C.
- Now C executes more slowly, and D gets a different resource before
C does.
- You shuffle priorities forever, eventually giving up and leaving the
debugging IO in.
Theory of relativity and the event horizon.
Symptoms
- Small changes in priorities change execution unpredictably, and
drastically.
- Debugging output changes execution drastically.
- Changes in train speeds change execution drastically.
- Example from two terms ago
'Drastically' means chaos in both senses of the term
- Sense one: a small change in the initial conditions produces an
exponentially growing change in the system
- Sense two: exercise for the reader.
Solutions
- Explicit synchronization
- but you then have to know the orders in which things are permitted
to occur
- Gating is a technique of global synchronization
- which can be provided by a detective/coordinator
4. Performance
The hardest problem to solve
- You just don't know what is possible
- Ask a question like:
- Is my kernel code at the limit of what is possible in terms of
performance?
Priority
The hardest thing to get right
- Sizing stacks used to be harder, but now we have lots of memory
- NP-hard for the human brain
- Practical method starts with all priorities the same, then adjusts
- symptoms of good priority assignment
- The higher priority, the more likely the ready queue is to be
empty
- The shorter the run time in practice the higher the priority
Problems with priority
- Priority inversion
- One resource, many clients
- Tasks try to do too much
Congestion
- Too many tasks
- blocked tasks don't count,
- lowest priority tasks almost don't count
Layered abstractions are costly
e.g. Notifier -> SerialServer -> InputAccumulator -> Parser ->
TrackServer
Hardware
- Turn on optimization, but be careful
- There are places where you have done register allocation by
hand
- Turn on caches
- Size & align calibration tables by size & alignment of cache
lines
- linker command script
- I think that this is stretching it.