We all, even most programmers (!), have effective intuitions about human relations
Tasks are independent entities
Why do servers need attendant tasks?
A proprietor `owns' a service, which usually means a resource.
// notifier
Receive( &serverTid, eventId );
Reply( serverTid, ... );
FOREVER {
data = AwaitEvent( eventId ); // data includes event type and volatile data
switch( data.event-type ) {
case RCV_INT:
Send( serverTid, {NOT_RCV, data.byte}, ... );
break;
case XMT_INT:
// transmit interrupt was masked in the UART by the kernel
Send( serverTid, {NOT_XMIT}, byte ); // byte is to be transmitted
store( UART..., byte );
// unmask transmit interrupt
break;
}
}
// server
// queues & fifos
notifierTid = Create( notifier ); // Should the notifier code name be hard coded?
Send( notifierTid, MyTid( ), ... ); //On return notifier is known to be okay
RegisterAs( ); //On return requests can begin.
FOREVER {
Receive( &requesterTid, {request-type, data} );
switch ( request-type ) {
case NOT_RCV:
Reply( requesterTid, ... );
enqueue( rcvfifo, data );
if ( ! empty( rcvQ ) ) Reply( dequeue( rcvQ ), dequeue( rcvfifo ) );
break;
case NOT_XMIT:
enqueue( xmitQ, requesterTid );
if ( ! empty( xmitfifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitfifo ) );
break;
case CLIENT_RCV:
enqueue( rcvQ, requesterTid );
if ( ! empty( rcvfifo ) ) Reply( dequeue( rcvQ ), dequeue( rcvfifo ) );
break;
case CLIENT_XMIT:
Reply( requesterTid, ... );
enqueue ( xmitfifo, data );
if ( ! empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitfifo ) );
break;
}
}
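The server above leans on `queues & fifos' helpers that the pseudo-code never shows. A minimal sketch in C of what they might look like (the names enqueue/dequeue/empty match the pseudo-code; the fixed capacity is my assumption):

```c
#include <assert.h>
#include <stdbool.h>

/* Bounded circular queue. QSIZE is an assumption; it must exceed the
 * worst-case backlog, because a proprietor must never block. */
#define QSIZE 64

typedef struct {
    int buf[QSIZE];
    int head, tail, len;   /* dequeue at head, enqueue at tail */
} Queue;

static void enqueue( Queue *q, int x ) {
    assert( q->len < QSIZE );   /* overflow here means the server fell behind */
    q->buf[q->tail] = x;
    q->tail = (q->tail + 1) % QSIZE;
    q->len++;
}

static int dequeue( Queue *q ) {
    assert( q->len > 0 );
    int x = q->buf[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->len--;
    return x;
}

static bool empty( const Queue *q ) {
    return q->len == 0;
}
```

Note the design constraint: overflow is an assertion, not a wait, because the server cannot afford to block on its own data structures.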
Why? Occasionally two events come close together, and you want to be able to handle the first one as quickly as possible.
Simplest is best
// notifier
Receive( &courierTid, ... );
Reply( courierTid, ... );
FOREVER {
data = AwaitEvent( eventId );
Receive( &courierTid, byte );
Reply( courierTid, NOT_XMIT );
load( UART, byte );
}
// courier
Receive( &serverTid, notifierTid ); Send( notifierTid, ... ); Reply( serverTid, ... );
FOREVER {
Send( notifierTid, {data} );
Send( serverTid, {req}, {data} );
}
// server
// queues & fifos
notifierTid = Create( notifier );
courierTid = Create( courier );
Send( courierTid, notifierTid, ... ); // On return courier & notifier are known to be okay
RegisterAs( ); //On return client requests will begin.
FOREVER {
Receive( &requesterTid, {request-type, data} );
switch ( request-type ) {
case NOT_XMIT:
enqueue( xmitQ, requesterTid );
if ( ! empty( xmitFifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
case CLIENT_XMIT:
Reply( requesterTid, ... );
enqueue ( xmitFifo, data );
if ( ! empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
default:
ASSERT( "..." );
}
}
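One way to convince yourself the rendezvous logic above is right: after every request is handled, at most one of xmitQ (blocked requesters) and xmitFifo (buffered bytes) is non-empty, because each case drains the pair whenever both are occupied. A toy model in C (queue capacity and the numeric event encoding are my assumptions, not the notes'):

```c
#include <assert.h>
#include <stdbool.h>

#define N 64
typedef struct { int buf[N]; int head, tail, len; } Q;

static void enq( Q *q, int x ) { assert( q->len < N ); q->buf[q->tail] = x; q->tail = (q->tail + 1) % N; q->len++; }
static int deq( Q *q ) { assert( q->len > 0 ); int x = q->buf[q->head]; q->head = (q->head + 1) % N; q->len--; return x; }
static bool empty( Q *q ) { return q->len == 0; }

static Q xmitQ, xmitFifo;
static int replies = 0;   /* counts matched Reply() pairs */

/* type 0 = NOT_XMIT (notifier asks for a byte), 1 = CLIENT_XMIT (client offers one) */
static void handle( int type, int arg ) {
    if ( type == 0 ) {
        enq( &xmitQ, arg );                  /* arg: requesterTid */
        if ( !empty( &xmitFifo ) ) { deq( &xmitQ ); deq( &xmitFifo ); replies++; }
    } else {
        enq( &xmitFifo, arg );               /* arg: byte to transmit */
        if ( !empty( &xmitQ ) ) { deq( &xmitQ ); deq( &xmitFifo ); replies++; }
    }
    /* the invariant: the two queues are never simultaneously non-empty */
    assert( empty( &xmitQ ) || empty( &xmitFifo ) );
}
```

Any interleaving of the two request types preserves the invariant, which is why the server never needs to remember more state than its queues.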
This gets you through a bottleneck, provided no more than two events arrive too close together.
Remember that all the calls provide error returns; you can and should use them for error recovery.
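A sketch of what using those error returns might look like. The error code, the Send signature, and the stubbed respawn are all hypothetical stand-ins here; the real kernel's codes and signatures are whatever your kernel defines:

```c
#include <assert.h>

/* Hypothetical error code and a stubbed Send, for illustration only. */
enum { ERR_NO_TASK = -2 };

static int sends_attempted = 0;

static int Send( int tid, const char *msg, int msglen, char *reply, int rplen ) {
    (void)msg; (void)msglen; (void)reply; (void)rplen;
    sends_attempted++;
    return tid < 0 ? ERR_NO_TASK : 0;   /* stub: negative tid = dead task */
}

static int respawn_server( void ) {
    return 7;                           /* stub for Create + re-register */
}

/* The pattern: check the error return and recover, e.g. by re-creating
 * or re-looking-up the server, instead of silently ignoring it. */
static int send_with_recovery( int *serverTid ) {
    int r = Send( *serverTid, "req", 4, 0, 0 );
    if ( r == ERR_NO_TASK ) {
        *serverTid = respawn_server();
        r = Send( *serverTid, "req", 4, 0, 0 );
    }
    return r;
}
```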
Another possible arrangement for task creation
Another possible arrangement for initialization
Distributed gating
I am showing you collections of tasks implemented together because a set of related tasks is a level of organization above the individual task.
E.g., the decision to add a courier requires revision of code within the group, but not outside it.
Add a warehouse between the courier and the notifier.
The pseudo-code given is for receiving.
// notifier
Receive( &warehouseTid, ... );
Reply( warehouseTid, ... );
msg.type = NOT_RCV;
FOREVER {
msg.data = AwaitEvent( eventId );
Send( warehouseTid, msg, msg );
}
// warehouse
Receive( &courierTid, notifierTid, ... ); Send( notifierTid, ... ); Reply( courierTid, ... );
FOREVER {
Receive( &requester, msg );
switch( msg.type ) {
case NOT_RCV:
Reply( requester, msg );
// insert data into package
enqueue( pkgQ, package );
if ( ! empty( courQ ) ) Reply( dequeue( courQ ), dequeue( pkgQ ) );
break;
case COUR_RCV:
enqueue( courQ, requester );
if( !empty( pkgQ ) ) Reply( dequeue( courQ ), dequeue( pkgQ ) );
}
}
// courier
Receive( &serverTid, {notifierTid, warehouseTid}, ... );
Send( warehouseTid, notifierTid, ... );
Reply( serverTid );
FOREVER {
Send( warehouseTid, {COUR_RCV}, pkg ); // reply delivers the next package
Send( serverTid, pkg, ... ); // pkg.type is COUR_RCV
}
// server
// queues & fifos
notifierTid = Create( notifier );
warehouseTid = Create( warehouse );
courierTid = Create( courier );
Send( courierTid, {notifierTid, warehouseTid}, ... ); // On return courier, warehouse & notifier are known to be okay
RegisterAs( ); // On return client requests can begin.
FOREVER {
Receive( &requesterTid, pkg );
switch ( pkg.type ) {
case COUR_RCV:
Reply( requesterTid, pkg );
enqueue( pkgQ, pkg );
if ( ! empty( clientQ ) ) Reply( dequeue( clientQ ), dequeue( pkgQ ) );
break;
case CLIENT_RCV:
enqueue( clientQ, requesterTid );
if ( !empty( pkgQ ) ) Reply( dequeue( clientQ ), dequeue( pkgQ ) );
}
}
This structure clears up most problems in which a burst of client requests to the server would otherwise leave the notifier waiting in a long sendQ.
Two issues:
Give a precise and quantitative definition of `bottleneck'.
Called a guard.
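For the first issue, here is one rough quantitative handle (my illustration, not the notes' definition): with one event arriving every ta ticks and ts ticks of server work per event, a FIFO simulation gives the peak backlog the chain must absorb. The notifier alone buffers one event, the courier adds a second slot, and the warehouse covers the rest:

```c
#include <assert.h>

#define MAXN 128

/* Peak number of queued events when events arrive every `ta` ticks and a
 * single FIFO server needs `ts` ticks per event. */
static int peak_backlog( int n, int ta, int ts ) {
    assert( n <= MAXN );
    long finish[MAXN];
    long prev = 0;
    for ( int i = 0; i < n; i++ ) {
        long arrive = (long)i * ta;
        long start = arrive > prev ? arrive : prev;   /* wait for the server */
        finish[i] = start + ts;
        prev = finish[i];
    }
    int peak = 0;
    for ( int i = 0; i < n; i++ ) {                   /* queue length just after arrival i */
        long arrive = (long)i * ta;
        int done = 0;
        for ( int j = 0; j < i; j++ ) if ( finish[j] <= arrive ) done++;
        int backlog = (i + 1) - done;
        if ( backlog > peak ) peak = backlog;
    }
    return peak;
}
```

On this model, `bottleneck' could be made precise as: the structure is a bottleneck whenever peak_backlog exceeds the buffering the task collection provides.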
What this amounts to is that a server should be lean and hungry
Return to: