We all, even most programmers (!), have effective intuitions about human relations
Tasks are independent entities
Why do servers need attendant tasks?
Proprietor `owns' a service, which usually means a resource. Proprietor actually does the work.
Kernel is handling hardware in this example
AwaitEvent
Send
Why? It's something you might end up doing during your project
Simplest is best
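The simplest attendant is a notifier built from just these two calls. A sketch, where serverTid and eventId are assumed to be set up during initialization and NOT_XMIT is the request name borrowed from the server code below:

FOREVER {
    data = AwaitEvent( eventId );             // block until the interrupt occurs
    Send( serverTid, {NOT_XMIT, data}, ... ); // the notifier waits here only while the server is busy
}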
Initializing
Work
Receive (from courier); AwaitEvent; Reply (to courier)
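As code, the notifier's work loop in this arrangement might look like the following sketch (names assumed):

FOREVER {
    Receive( &courierTid, ... );    // courier asks for the next event
    data = AwaitEvent( eventId );   // block until the interrupt occurs
    Reply( courierTid, data );      // hand the volatile data to the courier
}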
Initializing
Receive( &serverTid, notifierTid );   // courier: learn the notifier's tid from the server
Send( notifierTid, ... );             // check in with the notifier
Reply( serverTid );                   // tell the server initialization is complete
FOREVER {
    Send( notifierTid, {data} );        // empty request; the event data comes back in the notifier's Reply
    Send( serverTid, {req}, {data} );   // deliver the data to the server
}
// Server (proprietor): queues & fifos
notifierTid = Create( notifier );
courierTid = Create( courier );
Send( courierTid, notifierTid, ... ); // On return courier & notifier are known to be okay
RegisterAs( ); //On return client requests will begin.
FOREVER {
Receive( &requesterTid, {request-type, data} );
switch ( request-type ) {
case NOT_XMIT:
enqueue( xmitQ, requesterTid );
if ( ! empty( xmitFifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
case CLIENT_XMIT:
Reply( requesterTid, ... );
enqueue( xmitFifo, data );
if ( ! empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
default:
ASSERT( "..." );
}
}
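Where do the CLIENT_XMIT requests come from? A hypothetical client-side wrapper, sketched here; the registered name, the WhoIs lookup and the argument layout are assumptions, not part of the notes:

Putc( byte ) {
    serverTid = WhoIs( "xmit-server" );                  // the name the server registered with RegisterAs
    return Send( serverTid, {CLIENT_XMIT, byte}, ... );  // blocks only until the server Replies
}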
This gets you through a bottleneck in which no more than two events arrive too close together: one can be buffered in the courier while another is held by the notifier.
Remember that all the calls provide error returns. You can, and should, use them for error recovery.
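For example (a sketch; the convention that errors are negative return values is an assumption about your kernel):

notifierTid = Create( notifier );
ASSERT( notifierTid >= 0 );                   // Create fails if, e.g., no task descriptors remain
if ( Send( courierTid, notifierTid, ... ) < 0 ) {
    // the courier does not exist or has exited; recover or shut the service down cleanly
}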
Another possible arrangement for task creation
Another possible arrangement for initialization
Distributed gating
I am showing you collections of tasks implemented together because a set of related tasks is a level of organization above the individual task.
E.g., the decision to add a courier requires revision of code within the group, but not outside it.
Add a warehouse between the courier and the notifier.
// Notifier: initializing
Receive( &warehouseTid, ... );
Reply( warehouseTid, ... );
FOREVER {
data = AwaitEvent( eventid ); // data includes event-type and volatile data
switch( event-type ) {
case XMT_INT:
// test transmitter?
Send( warehouseTid, NOT_XMIT, byte ); // byte is to be transmitted
Write( UART, byte );
break;
default: //
ASSERT( "This didn't happen because my kernel is bug-free." );
}
}

// Warehouse
// data structures
Receive( &courierTid, notifierTid, ... );
Send( notifierTid, ... );
Reply( courierTid, ... );
FOREVER {
Receive( &requester, {req-type, data} );
switch( req-type ) {
case NOT_XMIT:
enqueue( xmitQ, requester );
if ( !empty( xmitFifo ) ) Reply( extract( xmitQ ), extract( xmitFifo ) );
break;
case COUR_XMIT:
enqueue( courQ, requester );
insert( xmitFifo, unpack( data ) );
if( !empty( xmitQ ) ) Reply( extract( xmitQ ), extract( xmitFifo ) );
if( empty( xmitFifo ) ) Reply( extract( courQ ), "Send more." );
break;
default:
ASSERT( "This didn't happen because my kernel is bug-free." );
}
}
// Courier: initializing
Receive( &serverTid, {notifierTid, warehouseTid}, ... );
Send( warehouseTid, notifierTid, ... );
Reply( serverTid );
FOREVER {
Send( warehouseTid, {req, data} );
Send( serverTid, {req, data} );
}
// Server (proprietor): queues & fifos
notifierTid = Create( notifier );
warehouseTid = Create( warehouse );
courierTid = Create( courier );
Send( courierTid, {notifierTid, warehouseTid}, ... ); // On return courier, warehouse & notifier are known to be okay
RegisterAs( ); // On return client requests can begin.
FOREVER {
Receive( &requesterTid, {request-type, data} );
switch ( request-type ) {
case COUR_XMIT:
enqueue( xmitQ, requesterTid );
if ( !empty( xmitFifo ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
case CLIENT_XMIT:
Reply( requesterTid, ... );
enqueue( xmitFifo, data );
if ( !empty( xmitQ ) ) Reply( dequeue( xmitQ ), dequeue( xmitFifo ) );
break;
default:
ASSERT( "This didn't happen because my kernel is bug-free." );
}
}
This structure clears up most of the problems that arise when a burst of requests to the server would otherwise leave the notifier waiting in a long sendQ.
Two issues:
Give a precise and quantitative definition of `bottleneck'.
Called a guard.
What this amounts to is that the server should be lean and hungry: it should always be Receive-blocked, ready for the next request.
The most common set of debugging tools used by experienced programmers is the oldest: printf, grep & stack trace.
Debugging real-time programs, at its base, is just the same as any other debugging, and just the same as empirical science.
But real-time programs are harder to debug. Very few programs are entirely free of critical races, which are the worst type of bug, lurking for weeks, months, or years in seemingly correct code, then appearing when innocuous, unconnected changes occur.
The memory contents are not wiped by reset. Some of the most difficult errors can be detected only by using the contents of memory after a reset.
On some types of exceptions RedBoot will attempt to connect with gdb. In such cases it writes a bunch of gibberish on the bottom of the monitor screen. Among that gibberish is the address of the instruction that caused the exception. Using the load map generated by the linker, you can find the function that contains that address.
It is usually pretty easy to figure out which line of C source was responsible for the instruction.
In single-threaded programs this is often the most useful tool.
What is the equivalent of a stack trace in a real-time multi-tasking environment?
What does it do?
How do you get it started?
A breakpoint is a special case of a very common sort of tool.
We need methods of getting information closer to real time.
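One such method is a trace buffer in a fixed region of RAM: tasks record events with cheap stores at run time, and the buffer is read back after a crash or reset, since memory is not wiped. The sketch below is illustrative C; the base address, size and record format are made up for illustration.

#define TRACE_BASE  ( (volatile unsigned *) 0x01800000 )  // assumed: RAM left alone by the loader and kernel
#define TRACE_SLOTS 1024

// Slot 0 holds the running write index; each record packs an id and a value into one word.
void trace( unsigned id, unsigned value ) {
    unsigned i = TRACE_BASE[0]++ % TRACE_SLOTS;
    TRACE_BASE[1 + i] = ( id << 24 ) | ( value & 0x00ffffff );
}

After a reboot, dump the region from RedBoot or print it from a small routine; the newest records show what the tasks were doing just before the failure.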
Return to: