CS457 - System Performance Evaluation - Winter 2010
Public Service Announcements
- Assignment 2.
- Final Examination: April 21st, 2010 at 09.00 in the PAC.
- Wrong dates on the lecture notes page
Lecture 18 - Examples of Discrete Event Simulation II
Structure of Software for Discrete Event Simulation
Event Scheduling Components
Event Scheduling Program
Highest Level Description
Initialize( );                                      // set up clock, state, event set, log
while ( ( event = event-set.extract( ) ) != nil ) {
    clock.time = event.time;                        // advance the clock to the event's time
    process-event( event );
}
log.terminate( );                                   // close out the log
Initialization
Initialize( ) {
    clock.init( );          // time = 0
    state.init( );          // model-specific state variables
    event-set.init( );      // insert the first event(s)
    log.init( );            // open the log
}
Event Processing
process-event( event ) {
    log.update( /* Whatever */ );
    clock.time = event.time;        // redundant here if the main loop already set it
    state.update( event );
    // Possibly test for termination
}
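For concreteness, here is a minimal runnable version of this skeleton in Python. The class and method names are illustrative, not part of the course code; a concrete model would override process_event.

import heapq

class Simulation:
    # Bare-bones event-scheduling skeleton: a clock, an event set ordered by
    # time, and a log.  Concrete models override process_event().
    def __init__(self):
        self.clock = 0.0
        self.event_set = []            # min-heap of (time, event) pairs
        self.log = []

    def schedule(self, time, event):
        heapq.heappush(self.event_set, (time, event))

    def process_event(self, time, event):
        self.log.append((time, event))     # log.update() and state.update() go here

    def run(self, stop_time):
        # Highest-level loop: extract the earliest event, advance the clock,
        # process the event; stop when the event set is empty or time runs out.
        while self.event_set:
            time, event = heapq.heappop(self.event_set)
            if time > stop_time:           # one possible termination test
                break
            self.clock = time
            self.process_event(time, event)
        return self.log                    # log.terminate() would go here

if __name__ == "__main__":
    sim = Simulation()
    sim.schedule(1.0, "demo-event")
    print(sim.run(stop_time=10.0))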
Server with One Queue
Response variables
- Throughput
- Waiting time
- Utilization
Parameters (factors), which need to be defined:
                            Interarrival Times
                       Deterministic    Stochastic
Service  Deterministic    Common        Very unlikely
Times    Stochastic       Possible      Very Common
Assumptions
- The stochastic quantities are independent of one another:
- interarrival times
- service times
- FCFS scheduling
- System starts empty
- Infinite population (open model)
- user population: N
- event rate per user: p
- As N -> infinity, p -> 0, keeping Np = r constant
- We need something like this to underwrite the assumption of
independence
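To see why this limit matters (a standard probability fact, not spelled out in the notes): if each of the N users independently generates an event in a short interval of length \Delta t with probability roughly p\,\Delta t, the number of arrivals in that interval is binomial, and the stated limit turns the aggregate into a Poisson stream of rate r:

\Pr[k \text{ arrivals in } \Delta t]
  = \binom{N}{k} (p\,\Delta t)^k (1 - p\,\Delta t)^{N-k}
  \;\longrightarrow\; e^{-r\,\Delta t}\,\frac{(r\,\Delta t)^k}{k!}
  \qquad (N \to \infty,\ Np = r),

and arrival counts in disjoint intervals become independent, which is what the independence assumption needs.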
State variables
- n - number of jobs in the queue
- status - server busy or idle
Initialization
clock.init( ) { time = 0 }
state.init( ) { n = 0; status = IDLE }
log.init( ) { /* open standard output */ }
event-set.init( ) { event-set.insert( new-event( ARRIVAL, clock.time( ) ) ) }
State & Event-set Manipulation
state.update( event ) {
    switch( event.type ) {
    case ARRIVAL:
        event-set.insert( new-event( ARRIVAL, event.time ) );   // next arrival
        n++; queue.insert( event );
        if ( status == IDLE ) start-service( event );
        return;
    case DEPARTURE:
        n--; status = IDLE;
        if ( n > 0 ) start-service( event );                    // next request from the queue
        return;
    }
}
Explanation.
- Two conditions must be true for the server to be started
- It must be IDLE.
- There must be a request for it to process (n>0).
- n is the total number of requests in the queue and in the
server.
- n is incremented when a request arrives and is put in the
queue
- n is decremented when a request departs.
- n does not change when a request is moved from the queue
to the server
- An arrival event guarantees n>0 so we test if the server is
providing service already
- A departure event guarantees that the server is not providing service
so we test to see if there is a request in the system.
Utility routines
event new-event( type, time ) {
    this.type = type;
    switch( type ) {
    case ARRIVAL:
        this.time = event.arrival-time( time );     // time of the next arrival
        return this;
    case DEPARTURE:
        this.time = time + event.service-time( );   // now plus a sampled service time
        return this;
    }
}
start-service( event ) {
    log.update( /* Whatever */ );
    state.status = BUSY;
    event-set.insert( new-event( DEPARTURE, event.time ) );
}
Left as exercises for the reader
- Trace through the samples and make certain that it all goes as you
expect.
- Work out how the termination condition here differs slightly from the example
in the notes.
- Think about what should go in all the updates of the log.
Important.
- Remember that events must be conserved.
- For every new arrival event there must be a new departure event.
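Putting the pieces above together, here is a hedged Python sketch of the whole single-server simulation. The exponential interarrival and service times, the stopping rule, and the particular response variables printed are assumptions made for the example, not part of the notes; waiting time is left out, in the spirit of the exercises.

import heapq, random

ARRIVAL, DEPARTURE = "arrival", "departure"

def simulate(arrival_rate=1.0, service_rate=1.5, max_departures=1000, seed=1):
    rng = random.Random(seed)
    clock = 0.0
    n = 0                      # requests in queue + server
    busy = False
    event_set = []             # min-heap of (time, type)
    departures = 0
    busy_time = 0.0
    last_time = 0.0

    def schedule(kind, time):
        heapq.heappush(event_set, (time, kind))

    # event-set.init: one initial arrival keeps the simulation going
    schedule(ARRIVAL, rng.expovariate(arrival_rate))

    while event_set and departures < max_departures:
        time, kind = heapq.heappop(event_set)
        if busy:
            busy_time += time - last_time
        last_time = time
        clock = time
        if kind == ARRIVAL:
            # conserve events: every arrival schedules the next arrival
            schedule(ARRIVAL, clock + rng.expovariate(arrival_rate))
            n += 1
            if not busy:                      # start service immediately
                busy = True
                schedule(DEPARTURE, clock + rng.expovariate(service_rate))
        else:                                 # DEPARTURE
            n -= 1
            departures += 1
            busy = False
            if n > 0:                         # next request in the queue
                busy = True
                schedule(DEPARTURE, clock + rng.expovariate(service_rate))

    print("throughput  =", departures / clock)
    print("utilization =", busy_time / clock)

if __name__ == "__main__":
    simulate()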
Parallel servers: one queue, multiple servers
State variables are
- m: number of servers that are currently busy
- n: number of requests in the system, including ones in the server
How should it be updated and used?
- Initialized: m = 0; n = n0
- Scheduling fact
- If there is an IDLE server && a request in the queue then
an IDLE server will be started with the request.
- Constraint: n >= m
- On an arrival event
- n is incremented
- another arrival event is scheduled
- On a departure event
- n is decremented
- m is decremented
- Start a server if
- there is a server idle (m is less than the number of
servers) AND
- there is a request in the queue (n > m)
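A hedged Python sketch of this multi-server update loop; the number of servers C and the exponential interarrival and service times are assumptions made for the example.

import heapq, random

def simulate_multi(C=3, lam=2.0, mu=1.0, max_departures=1000, seed=1):
    rng = random.Random(seed)
    clock, n, m = 0.0, 0, 0            # n = requests in system, m = busy servers
    events = []                        # min-heap of (time, kind)
    heapq.heappush(events, (rng.expovariate(lam), "arrival"))
    departures = 0

    def try_start():
        # Scheduling fact: an idle server and a waiting request => start one.
        nonlocal m
        if m < C and n > m:
            m += 1
            heapq.heappush(events, (clock + rng.expovariate(mu), "departure"))

    while events and departures < max_departures:
        clock, kind = heapq.heappop(events)
        if kind == "arrival":
            heapq.heappush(events, (clock + rng.expovariate(lam), "arrival"))
            n += 1
            try_start()
        else:                          # departure
            n -= 1
            m -= 1
            departures += 1
            try_start()
        assert n >= m                  # the constraint from the notes

    print("throughput =", departures / clock)

if __name__ == "__main__":
    simulate_multi()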
Finite Population Model
We saw this as an application of Little's Law.
Each user does the following
1. Think
2. Submit request
3. Wait
4. Receive result
5. Go to 1.
Each request
- Wait in queue
- Receive service
Note. User's wait time is not the same as request's wait
time.
State
- n - number of users waiting
- N-n is the number of users thinking
In the event set there are always N events
- n - finished service events
- N-n - finished thinking events
- Consuming a finished thinking event creates a finished service
event
- Consuming a finished service event creates a finished thinking
event
Exercise for the reader. What are the conditions for starting service?
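A hedged Python sketch that keeps the N-event invariant literally: consuming a finished-thinking event immediately creates a finished-service event, and vice versa. That amounts to serving every request the moment it is submitted (no queueing), which is exactly why the exercise above matters; adding the conditions for starting service at a single server is left to you. The exponential think and service times are assumptions made for the example.

import heapq, random

def finite_population(N=10, think_rate=0.5, service_rate=2.0,
                      max_events=10000, seed=1):
    rng = random.Random(seed)
    clock, n = 0.0, 0          # n = users waiting, N - n = users thinking
    events = []                # invariant: always exactly N events in the set
    for _ in range(N):         # every user starts out thinking
        heapq.heappush(events, (rng.expovariate(think_rate), "finished-thinking"))

    for _ in range(max_events):
        clock, kind = heapq.heappop(events)
        if kind == "finished-thinking":
            # the user submits a request; create its finished-service event
            n += 1
            heapq.heappush(events,
                           (clock + rng.expovariate(service_rate), "finished-service"))
        else:
            # the request completes; the user goes back to thinking
            n -= 1
            heapq.heappush(events,
                           (clock + rng.expovariate(think_rate), "finished-thinking"))

    print("final clock:", clock, "users waiting:", n)

if __name__ == "__main__":
    finite_population()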
Tandem Queue
The principle is quite easy
- You just have M (number of servers) simulations
- The depart event of one is the arrive event of the next
but we would rather do this as one simulation. (Programming with signals
is not fun.)
System state
- Nn, n = 1, ..., M - the number of requests at each station (server plus
queue).
The events in the event set are
- Arrive at 1 (A1)
- Depart from 1, 2, ..., M-1 (Dn)
- Depart from M (DM)
What happens for each?
- A1
  - enqueue into Q1
  - insert next arrival (A1 event) into event-set
  - if S1 idle: dequeue from Q1 and start-S1
- Dn, n < M
  - Sn becomes idle
  - enqueue the departing request into Qn+1
  - if Sn+1 idle: dequeue from Qn+1 and start-Sn+1
  - if Qn not empty: dequeue from Qn and start-Sn
- DM
  - SM becomes idle
  - if QM not empty: dequeue from QM and start-SM
- start-Sn (used by all of the above)
  - mark Sn busy
  - insert departure from Sn (Dn event) into event-set
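A hedged Python sketch of the tandem queue as one simulation; the number of stations M, the exponential distributions, and the stopping rule are assumptions made for the example.

import heapq, random

def tandem(M=3, lam=1.0, mu=1.5, max_departures=1000, seed=1):
    rng = random.Random(seed)
    clock = 0.0
    queue_len = [0] * M            # N_i: requests at station i (queue + server)
    busy = [False] * M
    events = []                    # (time, station); station == -1 means A1
    departures_from_M = 0

    def start(i):                  # start-S_i: mark busy, schedule D_i
        busy[i] = True
        heapq.heappush(events, (clock + rng.expovariate(mu), i))

    heapq.heappush(events, (rng.expovariate(lam), -1))     # first arrival A1

    while events and departures_from_M < max_departures:
        clock, station = heapq.heappop(events)
        if station == -1:                                   # A1: external arrival
            heapq.heappush(events, (clock + rng.expovariate(lam), -1))
            queue_len[0] += 1
            if not busy[0]:
                start(0)
        else:                                               # D_n: departure from station n
            i = station
            queue_len[i] -= 1
            busy[i] = False
            if i + 1 < M:                                   # hand the request to the next stage
                queue_len[i + 1] += 1
                if not busy[i + 1]:
                    start(i + 1)
            else:
                departures_from_M += 1                      # D_M leaves the system
            if queue_len[i] > 0:                            # more work at this stage
                start(i)

    print("end-to-end throughput =", departures_from_M / clock)

if __name__ == "__main__":
    tandem()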
Processor Sharing
Time-slicing model: pre-emptive multi-tasking
For example, three classes of jobs
- Jobs with active I/O: long think times, very little processing
- Interactive jobs without active I/O: substantial processing that will
stop and start at widely spaced times
- Batch jobs: which go on for very long times.
Single server, three queues, needs a scheduling algorithm (discipline)
- Typical: try to provide some service to each
- Each queue has an importance, w(r), which means that it gets ... of the
processor (Exercise for the reader: Fill in ... in a reasonable way.)
- In the example below each queue is of equal importance.
Important state
- number of non-empty queues, m
- number of jobs currently in each queue, N(r), r = 1...M
Initialization
- The usual
- All queues empty
- One request (job) of each type in event-set
Arrival
- increment N(r)
- add request to queue
- if the request is at the head of its queue (the queue was previously empty),
- increment m
- reschedule existing departure events
- one more queue shares the processor
- scheduled departure events will occur later
- multiply time remaining (dep-time - clock) by m/(m-1)
- What about dividing by zero?
- start-service
Departure
- remove job from queue, decrement N(r)
- if N(r) == 0
- decrement m
- reschedule existing departure events
- one less queue shares the processor
- scheduled departure events will occur earlier
- multiply time remaining (dep-time - clock) by m/(m+1)
- What about multiplying by zero?
- else: start-service for the next request at the head of the queue
Start-service
- work out departure time
- schedule departure event
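A hedged sketch of just the rescheduling arithmetic in Python; the function name and the plain-list representation of pending departure times are assumptions made for the example, and a real event set would also have to be re-ordered after the rescaling. It also shows why the divide-by-zero and multiply-by-zero worries above are harmless: in both cases there is nothing to reschedule.

def reschedule(departure_times, clock, m_old, m_new):
    # Scale the remaining work of every pending departure when the number of
    # non-empty queues changes from m_old to m_new (equal processor shares).
    # Each queue's share goes from 1/m_old to 1/m_new, so the remaining time
    # is multiplied by m_new / m_old.
    if m_old == 0 or m_new == 0:
        # nothing was running before, or nothing is left to reschedule
        return list(departure_times)
    scale = m_new / m_old
    return [clock + (t - clock) * scale for t in departure_times]

if __name__ == "__main__":
    # An arrival makes a second queue non-empty at clock = 10:
    # a job due at t = 14 now finishes at 10 + (14 - 10) * 2/1 = 18.
    print(reschedule([14.0], clock=10.0, m_old=1, m_new=2))   # [18.0]
    # A departure empties one of three queues at clock = 20:
    # a job due at t = 26 is pulled in to 20 + 6 * 2/3 = 24.
    print(reschedule([26.0], clock=20.0, m_old=3, m_new=2))   # [24.0]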
Something is unrealistic about this model. What is it?
- no rescheduling within a queue
- A3 takes that into account
Something is unintuitive about this model. What is it?
- I have been justifying this approach in terms of implementation
simplicity and efficiency.
- But these qualities are starting to recede as things get more complex.
For example, queue-dependent importance.
This is always the case.