Back to real trains. When you drive a train there are four quantities you care about, all of which are functions of time. In particular, you care about the effects of discontinuities in them.
When we model, there are two distinct, but easy to confuse, quantities we care about: the real train's actual state (its location, for example), and the state our model of the train predicts.
Ideally these two quantities would be exactly equal, so our goal is to minimize something like the absolute value of the difference, or the squared difference, between them.
Lemma. Whoever programmed the microcontroller in the train knows a lot about trains, not much about computer science, and is lazy.
How does the location change over time if the velocity is constant?
Then, at the right time (whenever that is), change the velocity in your model of the train.
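As a concrete sketch of the constant-velocity case (plain C; the names, units and integer arithmetic are illustrative assumptions, not the project's actual types):

// Piecewise-constant-velocity model: between velocity changes the
// modelled position advances linearly with time.
typedef struct {
    int lastUpdateTime;    // ticks of whatever clock drives the model
    int velocity;          // e.g. micrometres per tick
    int position;          // e.g. micrometres along the track
} TrainModel;

// Advance the model to the current time under the constant-velocity assumption.
void updateModel( TrainModel *m, int now ) {
    m->position += m->velocity * ( now - m->lastUpdateTime );
    m->lastUpdateTime = now;
}

// When a sensor report gives the train's actual position, the modelling error
// is the difference; we try to keep its absolute value (or its square) small.
int modelError( TrainModel *m, int measuredPosition ) {
    return measuredPosition - m->position;
}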
Why do servers need attendant tasks?
A proprietor "owns" a service, which usually means a resource.
The kernel handles the hardware in this example.
// Notifier: initialization — learn from the server which event to wait for.
Receive( &serverTid, eventId );
Reply( serverTid, ... );
FOREVER {
    data = AwaitEvent( eventId );                  // data includes event type and volatile data
    switch ( data.eventType ) {
    case RCV_INT:
        Send( serverTid, {NOT_RCV, data.byte}, ... );
        break;
    case XMT_INT:
        // test transmitter, turn interrupt off and on?
        Send( serverTid, {NOT_XMIT}, byte );       // byte to be transmitted comes back in the reply
        store( UART..., byte );
        break;
    default:
        ASSERT( "This never happens because our kernel is bug-free." );
    }
}
// Serial server (proprietor): queues of waiting Tids & FIFOs of buffered bytes
notifierTid = Create( notifier );                  // Should the notifier code name be hard coded?
Send( notifierTid, eventId, ... );                 // Tell the notifier which event to await; on return it is known to be okay
RegisterAs( ... );                                 // On return client requests can begin
FOREVER {
    Receive( &requesterTid, {requestType, data} );
    switch ( requestType ) {
    case NOT_RCV:                                  // notifier delivered a received byte
        Reply( requesterTid, ... );
        enqueue( rcvFifo, data );
        if ( ! empty( rcvQ ) ) Reply( dequeue( rcvQ ), dequeue( rcvFifo ) );
        break;
    case NOT_XMIT:                                 // notifier is ready for the next byte to transmit
        enqueue( xmitQ, requesterTid );
        if ( ! empty( xmitFifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
        break;
    case CLIENT_RCV:                               // client wants a byte
        enqueue( rcvQ, requesterTid );
        if ( ! empty( rcvFifo ) ) Reply( dequeue( rcvQ ), dequeue( rcvFifo ) );
        break;
    case CLIENT_XMIT:                              // client provides a byte to transmit
        Reply( requesterTid, ... );
        enqueue( xmitFifo, data );
        if ( ! empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
        break;
    default:
        ASSERT( "Never executed because notifiers and clients are bug-free." );
    }
}
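The server above uses enqueue, dequeue and empty without defining them. A minimal sketch of one way to provide them, assuming bounded queues and passing each queue by pointer (the sizes and names are illustrative, and a real server would guard against overflow before enqueueing):

#define QSIZE 64

typedef struct {
    int buf[ QSIZE ];
    int head, tail, count;
} Queue;                           // holds either Tids or bytes; both fit in an int

int empty( Queue *q ) { return q->count == 0; }

void enqueue( Queue *q, int x ) {
    q->buf[ q->tail ] = x;
    q->tail = ( q->tail + 1 ) % QSIZE;
    q->count++;
}

int dequeue( Queue *q ) {
    int x = q->buf[ q->head ];
    q->head = ( q->head + 1 ) % QSIZE;
    q->count--;
    return x;
}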
Simplest is best
// Transmit notifier, paired with a courier: handshake with the courier first.
Receive( &courierTid, ... );
Reply( courierTid, ... );
FOREVER {
    Receive( &courierTid, byte );       // courier delivers the next byte to transmit
    load( UART..., byte );
    data = AwaitEvent( eventId );       // wait for the transmit interrupt
    Reply( courierTid, NOT_XMIT );      // tell the courier the byte has gone out
}
// Courier: learn the notifier's Tid from the server, then handshake with both.
Receive( &serverTid, notifierTid );
Send( notifierTid, ... );
Reply( serverTid, ... );
FOREVER {
    Send( serverTid, {req}, {data} );   // req = NOT_XMIT; the reply carries the next byte
    Send( notifierTid, {data}, ... );   // deliver the byte; returns once it has been transmitted
}
// Transmit server with a courier: queue of waiting Tids & FIFO of buffered bytes
notifierTid = Create( notifier );
courierTid = Create( courier );
Send( courierTid, notifierTid, ... );   // On return the courier & notifier are known to be okay
RegisterAs( ... );                      // On return client requests will begin
FOREVER {
    Receive( &requesterTid, {requestType, data} );
    switch ( requestType ) {
    case NOT_XMIT:                      // courier is ready for the next byte
        enqueue( xmitQ, requesterTid );
        if ( ! empty( xmitFifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
        break;
    case CLIENT_XMIT:                   // client provides a byte to transmit
        Reply( requesterTid, ... );
        enqueue( xmitFifo, data );
        if ( ! empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
        break;
    default:
        ASSERT( "..." );
    }
}
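For completeness, a client of this server might look like the sketch below, written in the same loose pseudocode as the code above. Putc and the registered name are invented for illustration, and WhoIs is assumed to be the lookup that pairs with RegisterAs; the point is that a client only ever Sends to the server and blocks until the Reply.

// Hypothetical client stub: no client touches the UART, the courier or the notifier directly.
int Putc( byte ) {
    serverTid = WhoIs( "xmit-server" );                  // name is illustrative
    return Send( serverTid, {CLIENT_XMIT, byte}, ... );  // blocks until the server Replies
}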
This courier arrangement gets you through a bottleneck in which no more than two events arrive too quickly: the notifier is busy transmitting one byte while the courier already holds the next.
Remember that all the kernel calls provide error returns. You can, and should, use them for error recovery.
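For example, a sketch only, assuming failures are signalled by a negative return and leaving the recovery policy up to you:

result = Send( serverTid, {CLIENT_XMIT, byte}, ... );
if ( result < 0 ) {
    // e.g. the server has exited or the Tid is stale: re-resolve the server's Tid,
    // retry a bounded number of times, or escalate to a supervising task
}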
Another possible arrangement for task creation
Another possible arrangement for initialization
Distributed gating
I am showing you collections of tasks implemented together because a set of related tasks is a level of organization above the individual task.
E.g., the decision to add a courier requires revision of code within the group, but not outside it.