A proprietor `owns' a service, which usually means it owns a resource.
Simplest is best
Notifier:

// Initialize hardware
Receive( requester, eventId );
Reply( requester );              // Should be the last thing before FOREVER
FOREVER {
    AwaitEvent( eventId );       // Should event ids be hard coded?
    // Acquire volatile data
    // Enable interrupts
    Receive( .... );
    Reply( .... );
}
Courier:

Receive( requester, { notifierPid, serverPid } );   // Now knows the notifier/server Pids
Reply( requester );              // Should be the last thing before FOREVER
FOREVER {
    Send( notifierPid, ... );
    Send( serverPid, ... );
}
Server:

notifierPid = Create( notifier );                   // Should the notifier code name be hard coded?
courierPid = Create( courier );
Send( courierPid, { notifierPid, MyPid( ) }, ... ); // On return the courier is okay
Send( notifierPid, eventId, ... );                  // On return the notifier is okay
RegisterAs( );                                      // On return requests can begin
FOREVER {
    requesterPid = Receive( request );
    switch ( request.type ) {
    case COURIER:
        provideCourierService( );                   // may release queued clients
        doReplies( );
        break;
    case CLIENT:
        if ( provideClientService( ) ) Reply( requesterPid );
        else enQueue( requesterPid );
        break;
    }
}
How does this work?
This gets you through a bottleneck, provided no more than two events come too fast: while the courier is blocked sending one result to the server, the notifier can gather one more event, so a burst of two is absorbed.
Another possible arrangement for initialization
Distributed gating
Add a buffer task ahead of the courier and server, so the notifier can always hand off its data immediately.
Notifier:

// Initialize hardware
Receive( bufferPid );            // Find the buffer Pid
Reply( );                        // Should be the last thing before FOREVER
FOREVER {
    AwaitEvent( eventId );       // Should event ids be hard coded?
    // Acquire volatile data
    // Enable interrupts
    Send( bufferPid, .... );
}
Buffer:

FOREVER {
    Receive( requesterPid, request );
    switch ( request.type ) {
    case NOTIFIER:
        enQueue( queue, request.data );
        Reply( requesterPid );
        break;
    case COURIER:
        Reply( requesterPid, queue );
        break;
    }
}
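The buffer task needs some queue implementation behind `enQueue'. A minimal sketch in plain C of a fixed-size FIFO that could back it (`RingBuf', `rbuf_put', and `rbuf_get' are names invented here, not kernel calls):

```c
#include <stdbool.h>

#define RBUF_SIZE 64           /* power of two, so masking replaces modulo */

typedef struct {
    int data[RBUF_SIZE];
    unsigned head;             /* next slot to read  */
    unsigned tail;             /* next slot to write */
} RingBuf;

/* Returns false when the buffer is full and the datum is dropped. */
bool rbuf_put( RingBuf *b, int datum ) {
    if ( b->tail - b->head == RBUF_SIZE ) return false;
    b->data[b->tail++ & (RBUF_SIZE - 1)] = datum;
    return true;
}

/* Returns false when the buffer is empty. */
bool rbuf_get( RingBuf *b, int *datum ) {
    if ( b->tail == b->head ) return false;
    *datum = b->data[b->head++ & (RBUF_SIZE - 1)];
    return true;
}
```

Because the buffer never blocks on AwaitEvent or on the server, the notifier's Send to it returns almost immediately.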
Courier:

Receive( requester, { serverPid, bufferPid } );     // Now knows the server/buffer Pids
Reply( requester, ... );         // Should be the last thing before FOREVER
FOREVER {
    Send( bufferPid, ... );
    Send( serverPid, ... );
}
Server:

notifierPid = Create( notifier );                   // Should the notifier code name be hard coded?
courierPid = Create( courier );
bufferPid = Create( buffer );
Send( courierPid, { MyPid( ), bufferPid }, ... );   // On return the courier is okay
Send( notifierPid, { bufferPid }, ... );            // On return the notifier is okay
RegisterAs( );                                      // On return requests can begin
FOREVER {
    requesterPid = Receive( request );
    switch ( request.type ) {
    case COURIER:
        provideCourierService( );                   // may release queued clients
        Reply( requesterPid );
        doReplies( );
        break;
    case CLIENT:
        if ( provideClientService( ) ) Reply( requesterPid );
        else enQueue( requesterPid );
        break;
    }
}
This structure clears up problems when the notifier runs too fast for the server.
The problem is that the server is doing too much work, and the whole system is stuck running at the server's lower priority while that work is done.
Two issues:
Define `bottleneck'.
This combination is called a secretary.
When there is a constant in a problem, take advantage of it. For example, one of your serial ports is always connected to the train. How to take advantage of this?
typedef struct {
    int component;
    int instance;
    int report;
    int time;
}
Then pass the struct around.
If the task holding it is the courier, you now have a TrainInputCourier.
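One reading of this: since one serial port is always the train, the courier on that port can hard code the `component' field instead of being told it. A C sketch under that assumption (the struct name `Report' and the value of `TRAIN_INPUT' are invented for illustration):

```c
/* The struct from above, given a name for the sketch. */
typedef struct {
    int component;
    int instance;
    int report;
    int time;
} Report;

enum { TRAIN_INPUT = 1 };    /* hypothetical component code */

/* A TrainInputCourier never needs to be told its component:
 * the constant in the problem is baked into the code. */
Report makeTrainInputReport( int instance, int report, int time ) {
    Report r;
    r.component = TRAIN_INPUT;   /* hard coded: this port is always the train */
    r.instance = instance;
    r.report = report;
    r.time = time;
    return r;
}
```

Specializing by a constant like this trades generality for shorter code paths and simpler messages.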
Think about the server's send queue. It might have a courier and several clients waiting on it.
Secretary
initialize
synchronize
FOREVER {
    requester = Receive( request );
    switch ( request.type ) {
    case COURIER:
        status = FREE;
        waitingResult = request;
        Reply( waiter, waitingResult );
        if ( !empty( requestQ ) ) {
            { waiter, waitingRequest } = dequeue( requestQ );
            Reply( courier, waitingRequest );
            status = BUSY;
        }
        break;
    case CLIENT:
        if ( status == BUSY ) {
            enqueue( requestQ, { requester, request } );
        } else {
            Reply( courier, request );
            waiter = requester;
            status = BUSY;
        }
        break;
    }
}
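Stripped of the kernel calls, the FREE/BUSY bookkeeping above is a small state machine. A standalone C sketch of just that logic, with Reply and the request queue replaced by counters (all names here are illustrative, not part of the kernel API):

```c
enum { FREE, BUSY };

typedef struct {
    int status;      /* FREE: courier is reply-blocked, waiting for work */
    int queued;      /* clients parked in requestQ                       */
    int replies;     /* stand-in for Reply( courier, ... ) calls         */
} Secretary;

/* A client request arrives. */
void on_client( Secretary *s ) {
    if ( s->status == BUSY ) {
        s->queued++;             /* enqueue( requestQ, { requester, request } ) */
    } else {
        s->replies++;            /* Reply( courier, request ): courier goes to work */
        s->status = BUSY;
    }
}

/* The courier comes back with a result. */
void on_courier( Secretary *s ) {
    s->status = FREE;            /* courier is idle again...                */
    if ( s->queued > 0 ) {       /* ...unless a client is already waiting   */
        s->queued--;
        s->replies++;            /* hand the courier the next request       */
        s->status = BUSY;
    }
}
```

The point of the structure: the secretary itself never blocks for long, so clients always find its receive queue open.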
If there are too many requests, the request queue grows without bound.
If the problem is not transient, but occurs occasionally because a couple of things happen to occur at once, queueing smooths it over.
An administrator does not do work itself; it parcels the work out to worker tasks.