Simplest is best
Receive( &courierTid, ... );
Reply( courierTid, ... );
FOREVER {
Receive( &courierTid, byte ); // The courier should almost always
// be RCV_BL on the notifier's SendQ.
load( UART..., byte );
data = AwaitEvent( eventid );
Reply( courierTid, NOT_XMIT );
}
Receive( &serverTid, notifierTid );
Send( notifierTid, ... );
Reply( serverTid );
FOREVER {
Send( notifierTid, {data} );
Send( serverTid, {req}, {data} );
}
// queues & fifos
notifierTid = Create( notifier );
courierTid = Create( courier );
Send( courierTid, notifierTid, ... ); // On return courier & notifier are known to be okay
RegisterAs( ); // On return client requests will begin.
FOREVER {
Receive( &requesterTid, {request-type, data} );
switch ( request-type ) {
case NOT_XMIT:
enqueue( xmitQ, requesterTid );
if ( ! empty( xmitFifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
case CLIENT_XMIT:
Reply( requesterTid, ... );
enqueue( xmitFifo, data );
if ( ! empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
default:
ASSERT( "..." );
}
}
This gets you through a bottleneck, provided no more than two events arrive too quickly.
This code, like the previous code, operates correctly if there is more than one notifier.
Remember that all the calls provide error returns. You can, and should, use them for error recovery.
Another possible arrangement for task creation
Another possible arrangement for initialization
Distributed gating
I am showing you collections of tasks implemented together because sets of related tasks are a level of organization above the individual task.
Add a warehouse between the courier and the notifier.
Receive( &warehouseTid, ... );
Reply( warhouseTid, ... );
FOREVER {
data = AwaitEvent( eventid ); // data includes event-type and volatile data
switch( event-type ) {
case XMT_INT:
// test transmitter?
Send( warehouseTid, NOT_XMIT, byte ); // byte is to be transmitted
Write( UART, byte );
break;
default:
ASSERT( "This didn't happen because my kernel is bug-free." );
}
}
// data structures
Receive( &courierTid, notifierTid, ... );
Send( notifierTid, ... );
Reply( courierTid, ... );
FOREVER {
Receive( &requester, {req-type, data} );
switch( req-type ) {
case NOT_XMIT:
enqueue( xmitQ, requester );
if ( !empty( xmitFifo ) ) Reply( extract( xmitQ ), extract( xmitFifo ) );
break;
case COUR_XMIT:
enqueue( courQ, requester );
insert( xmitFifo, unpack( data ) );
if( !empty( xmitQ ) ) Reply( extract( xmitQ ), extract( xmitFifo ) );
if( empty( xmitFifo ) ) Reply( extract( courQ ), "Send more." );
break;
default:
ASSERT( "This didn't happen because my kernel is bug-free." );
}
}
Receive( &serverTid, {notifierTid, warehouseTid}, ... );
Send( warehouseTid, notifierTid, ... );
Reply( serverTid );
FOREVER {
Send( warehouseTid, {req, data} );
Send( serverTid, {req, data} );
}
// queues & fifos
notifierTid = Create( notifier );
warehouseTid = Create( warehouse );
courierTid = Create( courier );
Send( courierTid, {notifierTid, warehouseTid}, ... ); // On return courier, warehouse & notifier are known to be okay
RegisterAs( ); // On return client requests can begin.
FOREVER {
Receive( &requesterTid, {request-type, data} );
switch ( request-type ) {
case COUR_XMIT:
enqueue( xmitQ, requesterTid );
if ( !empty( xmitFifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
case CLIENT_XMIT:
Reply( requesterTid, ... );
enqueue( xmitFifo, data );
if ( !empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
default:
ASSERT( "This didn't happen because my kernel is bug-free." );
}
}
This structure clears up most problems when a burst of requests to the server would otherwise leave the notifier waiting in a long SendQ.
Two issues:
Give a precise and quantitative definition of `bottleneck'.
Called a guard.
What this amounts to is: the server should be lean and hungry.
In single-threaded programs this is often the most useful tool.
What is the equivalent of a stack trace in a real-time multi-tasking environment?
Two basic questions to answer.
What does it do?
How do you get it started?
The breakpoint is a special case of a very common sort of tool.
Getting information closer to real-time.
Return to: