CS457 - System Performance Evaluation - Winter 2010
Public Service Announcements
- Assignment 2.
- Final Examination: April 21st, 2010 at 09.00 in the PAC.
Lecture 17 - Examples of Discrete Event Simulation
Structure of Software for Discrete Event Simulation
Two possible approaches
- Follow requests (jobs) through the system
- Global time with OS-like scheduling
- Local time with sleep( )
- How it's done
- Each request is an independent task/process/thread with its
state maintained in an object
- Its states are
- waiting (where)
- getting service (where and how much)
- Every (nominal) N milliseconds the global clock ticks
- Those getting service update their state
- Those for whom service is complete move to a new state
- Consequences of objects changing state occur
- Simulations like this are a natural for object-oriented
programming.
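A minimal Python sketch of this tick-driven structure (all class and variable names here are illustrative assumptions, not from the notes): each request is an object, the global clock ticks, and objects in service update their state.

TICK = 1                                  # nominal clock tick, in ms

class Request:
    def __init__(self, service_demand):
        self.state = "waiting"            # waiting / in_service / done
        self.remaining = service_demand   # service still needed, in ms

    def tick(self):
        # those getting service update their state
        if self.state == "in_service":
            self.remaining -= TICK
            if self.remaining <= 0:
                self.state = "done"       # service is complete

def run(requests, server_capacity=1, ticks=1000):
    clock = 0
    for _ in range(ticks):
        clock += TICK
        for r in requests:
            r.tick()
        # consequences of objects changing state: idle capacity picks up waiting jobs
        busy = sum(1 for r in requests if r.state == "in_service")
        for r in requests:
            if busy >= server_capacity:
                break
            if r.state == "waiting":
                r.state = "in_service"
                busy += 1
    return clock

# e.g. run([Request(5), Request(3), Request(8)])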
Simula67 was created for this purpose
- Kristen Nygaard & Ole-Johan Dahl in Norway in the 1960s
- objects
- classes
- subclasses
- virtual methods, polymorphism
- coroutines, for parallel or pseudo-parallel execution
- garbage collection
- Event-scheduling approach
Event Scheduling Components
- Events
- occurrences that will change the state of a system
- happen at a specific time
- Event set - set of events, with a total order
- total order is time
- simulation time kept in a clock variable
updated to the event time each time the earliest event is extracted
from the event-set
Two important operations
- Remove the earliest event, i.e. the one whose time is closest to now.
- Insert an event.
- must have a time greater than or equal to the current time
- forces time to move forward, i.e. ensures causality
These will be done equally often (Why?)
- Two key assumptions
- Events are defined so that the system never changes state without
an event occurring.
- The response to an event can include scheduling other event(s)
- Defining system state is the most important aspect of abstraction.
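A minimal Python sketch of an event set built on a binary heap (names here are illustrative, not from the notes); the check in insert is what forces time to move forward.

import heapq

class EventSet:
    def __init__(self):
        self.heap = []
        self.seq = 0                      # tie-breaker: equal times stay FIFO

    def insert(self, time, event, now=0.0):
        # must have a time greater than or equal to the current time: ensures causality
        assert time >= now, "event scheduled in the past"
        heapq.heappush(self.heap, (time, self.seq, event))
        self.seq += 1

    def extract(self):
        # remove the earliest event; None when the set is empty
        if not self.heap:
            return None
        time, _, event = heapq.heappop(self.heap)
        return time, event

Every event that is inserted is extracted exactly once, which is why the two operations are done equally often over a complete run.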
Event Scheduling Program
//* marks parts that do not vary from program to program
Highest Level Description
Initialize( ); //*
while ( ( event = event-set.extract( ) ) != nil ) { //*
clock.time = event.time; //*
process-event( event ); //*
}
log.terminate( ); //*
Initialization
Initialize( ){
clock.init( ); //*
state.init( ); //*
event-set.init( ); //*
log.init( ); //*
}
Event Processing
process-event( event ){
log.update( /* Whatever */ ); //*
clock.time = event.time; //*
state.update( event ); //*
// Possibly test for termination
}
Server with One Queue
Response variables
- Throughput
- Waiting time
- Utilization
Parameters = Factors, which need to be defined
                            Interarrival Times
                        Deterministic     Stochastic
Service  Deterministic     Common           Very unlikely
Times    Stochastic        Possible         Very Common
Assumptions
- Stochastic things are independent of each other
- interarrival times
- service times
- FCFS scheduling
- System starts empty
- Infinite population (open model)
- user population: N
- event rate per user: p
- As N -> infinity, p -> 0, keeping Np = r constant
- We need something like this to underwrite the assumption of
independence
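Why this limit underwrites independence (a standard argument, not spelled out in the notes): in an interval of length t each user submits with probability roughly p t, so the number of arrivals is Binomial(N, p t); letting N grow with Np = r fixed gives the Poisson limit, whose interarrival times are independent exponentials with rate r:

\Pr[\, k \text{ arrivals in } (0,t] \,] = \binom{N}{k} (pt)^k (1 - pt)^{N-k}
    \;\longrightarrow\; \frac{(rt)^k e^{-rt}}{k!}
    \qquad (N \to \infty,\ Np = r \text{ fixed})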
State variables
- n - number of jobs in the queue
- status - server busy or idle
Initialization
clock.init( ) { time = 0 } //*
state.init( ) { n = 0; status = IDLE }
log.init( ) { /* open standard output */ }
event-set.init( ) { event-set.insert( new-event( ARRIVAL, clock.time( ) ) ) }
State & Event-set Manipulation
state.update( event ) {
switch( event.type ) {
case ARRIVAL:
event-set.insert( new-event( ARRIVAL, event.time ) );
n++; queue.insert( event );
if( status == IDLE ) start-service( event );
return;
case DEPARTURE:
status = IDLE;
if ( --n ) start-service( event );
return;
}
}
Utility routines
event new-event( type, time ) {
this.type = type;
switch( type ) {
case ARRIVAL:
this.time = event.arrival-time( time );
return this;
case DEPARTURE:
this.time = time + event.service-time( );
return this;
}
}
start-service( event ) {
log.update( /* Whatever */ );
job = queue.next( );
state.status = BUSY;
event-set.insert( new-event( DEPARTURE, event.time ) );
}
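A small runnable Python version of the pseudocode above. The exponential interarrival and service distributions and the specific rates are assumptions for illustration; the notes leave arrival-time( ) and service-time( ) abstract.

import heapq, random

ARRIVAL, DEPARTURE = 0, 1

def simulate(arrival_rate=1.0, service_rate=1.25, max_departures=10000):
    random.seed(1)
    clock, n, busy = 0.0, 0, False        # state: jobs in system, server status
    event_set, seq = [], 0                # heap of (time, seq, type)
    queue = []                            # FCFS queue of arrival times
    departures, total_wait = 0, 0.0

    def schedule(kind, time):
        nonlocal seq
        heapq.heappush(event_set, (time, seq, kind))
        seq += 1

    def start_service(now):               # start-service( event )
        nonlocal busy, total_wait
        arrived = queue.pop(0)            # queue.next( )
        total_wait += now - arrived       # log.update: waiting time of this job
        busy = True                       # state.status = BUSY
        schedule(DEPARTURE, now + random.expovariate(service_rate))

    schedule(ARRIVAL, random.expovariate(arrival_rate))       # initialization

    while event_set and departures < max_departures:
        clock, _, kind = heapq.heappop(event_set)              # event-set.extract( )
        if kind == ARRIVAL:
            schedule(ARRIVAL, clock + random.expovariate(arrival_rate))
            n += 1
            queue.append(clock)
            if not busy:
                start_service(clock)
        else:                                                  # DEPARTURE
            busy = False
            n -= 1
            departures += 1
            if n:
                start_service(clock)

    print("throughput ~", departures / clock)
    print("mean wait  ~", total_wait / departures)

simulate()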
Left as exercises for the reader
- Trace through the samples and make certain that it all goes as you
expect.
- Work out how the termination condition differs slightly from the example in the notes.
- Think about what should go in all the updates of the log.
Important.
- Remember that events must be conserved.
- For every new arrival event there must be a new departure event.
Parallel servers: one queue, multiple servers
State variable is number-busy: m
How should it be updated and used?
- Initialized: m = 0
- Updated on dequeue: m++
- Updated on departure: --m
- Tested: if ( m < n-servers ) start-server;
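A small Python sketch of the multi-server bookkeeping above (the function name and state dictionary are illustrative; queue entries are left abstract):

def handle(kind, state, n_servers, start_service, job=None):
    # state = {"m": number of busy servers, "queue": waiting jobs}
    if kind == "arrival":
        if state["m"] < n_servers:        # tested: a server is idle
            state["m"] += 1               # updated on dequeue / service start
            start_service(job)
        else:
            state["queue"].append(job)
    else:                                 # departure
        state["m"] -= 1                   # updated on departure
        if state["queue"]:                # a job is waiting for the freed server
            state["m"] += 1
            start_service(state["queue"].pop(0))

# e.g. handle("arrival", {"m": 0, "queue": []}, n_servers=3,
#             start_service=lambda job: None, job="j1")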
Finite waiting room
Finite Population Model
We saw this as an application of Little's Law.
Each user does the following
- Think
- Submit request
- Wait
- Receive result
- Go back to 1 (Think).
Each request
- Wait in queue
- Receive service
Note. User's wait time is not the same as request's wait
time.
You might think that there are three event types
- Arrive
- Start service
- Depart
But, as above, "start service" always coincides with either an arrive or a depart event
One thing that is different:
- When does the next arrive event get put into the event set?
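One standard answer (stated here as an assumption, since the notes leave it as a question): in a closed model the next arrival for a user is scheduled only when that user's previous request departs, one think time later; at initialization one arrival per user goes into the event set. A sketch with illustrative names:

import random

THINK_TIME_MEAN = 5.0                     # assumed parameter, not from the notes

def on_departure(clock, schedule):
    # the user whose request just departed goes back to thinking;
    # their next request arrives one think time from now
    schedule("arrival", clock + random.expovariate(1.0 / THINK_TIME_MEAN))

def initialize(n_users, schedule):
    # every user starts out thinking, so one arrival per user is scheduled
    for _ in range(n_users):
        schedule("arrival", random.expovariate(1.0 / THINK_TIME_MEAN))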
Tandem Queue
The principle is quite easy
- You just have M (number of servers) simulations
- The depart event of one is the arrive event of the next
but we would rather do this as one simulation. (Programming with signals
is not fun.)
The events in the event set are
- Arrive at 1 (A1)
- Depart from 1, 2, ..., M-1 (Dn)
- Depart from M (DM)
What happens for each?
- A1
  - enqueue into Q1
  - insert next arrival (A1 event) into event-set
  - if S1 idle
    - dequeue from Q1
    - start-S1
- Dn, 1 <= n < M
  - Sn idle
  - enqueue into Qn+1
  - if (Sn+1 idle)
    - dequeue from Qn+1
    - start-Sn+1
  - if (Qn not empty)
    - dequeue from Qn
    - start-Sn
- DM
  - SM idle
  - if (QM not empty)
    - dequeue from QM
    - start-SM
- start-Sn (common to all of the above)
  - Sn busy
  - insert departure from Sn (Dn event) into event-set
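A Python sketch of the tandem-queue event handling above. The station indexing, the exponential distributions, and the parameter values are assumptions for illustration.

import heapq, random

def simulate_tandem(M=3, arrival_rate=1.0, service_rate=1.5, max_events=10000):
    random.seed(1)
    clock, seq = 0.0, 0
    event_set = []                         # (time, seq, station); station None means A1
    queues = [[] for _ in range(M)]        # Q1 .. QM
    busy = [False] * M                     # S1 .. SM

    def schedule(station, time):
        nonlocal seq
        heapq.heappush(event_set, (time, seq, station))
        seq += 1

    def start_service(n, now):             # start-Sn: Sn busy, insert Dn
        busy[n] = True
        schedule(n, now + random.expovariate(service_rate))

    schedule(None, random.expovariate(arrival_rate))      # first A1
    for _ in range(max_events):
        if not event_set:
            break
        clock, _, station = heapq.heappop(event_set)
        if station is None:                # A1: arrive at the first station
            queues[0].append(clock)
            schedule(None, clock + random.expovariate(arrival_rate))
            if not busy[0]:
                queues[0].pop(0)
                start_service(0, clock)
        else:                              # Dn: depart from station n
            busy[station] = False
            if station + 1 < M:            # hand the job to the next station
                queues[station + 1].append(clock)
                if not busy[station + 1]:
                    queues[station + 1].pop(0)
                    start_service(station + 1, clock)
            if queues[station]:            # station n picks up its next job
                queues[station].pop(0)
                start_service(station, clock)
    return clock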
Processor Sharing
Time-slicing model: pre-emptive multi-tasking
For example, three classes of jobs
- Jobs with active I/O: long think times, very little processing
- Interactive jobs without active I/O: substantial processing that will
stop and start at widely spaced times
- Batch jobs: which go on for very long times.
Single server, three queues, needs a scheduling algorithm (discipline)
- Typical: try to provide some service to each
- Each queue has an importance, w(r), which means that it gets ... of the
processor (Exercise for the reader: Fill in ... in a reasonable way.)
- In the example below each queue is of equal importance.
Important state
- number of non-empty queues, m
- number of jobs currently in each queue, N(r), r=1...M
Initialization
- The usual
- All queues empty
- One request (job) of each type in event-set
Arrival
- increment N(r)
- add request to queue
- if it is at the head of its queue (i.e. the queue was previously empty),
- increment m
- reschedule existing departure events
- one more queue shares the processor
- scheduled departure events will occur later
- multiply time remaining (dep-time - clock) by m/(m-1)
- What about dividing by zero?
- start-service
Departure
- remove job from queue, decrement N(r)
- if N(r) == 0
- decrement m
- reschedule existing departure events
- one less queue shares the processor
- scheduled departure events will occur earlier
- multiply time remaining (dep-time - clock) by m/(m+1)
- What about multiplying by zero?
- else (queue r is still non-empty)
  - start-service for the next job in queue r
Start-service
- work out departure time
- schedule departure event
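A Python sketch of the rescheduling step above for equal weights. The event representation is illustrative, and a real event-set would need to be re-ordered after the times change, which is omitted here.

def reschedule_departures(departure_events, clock, m_old, m_new):
    # m_old non-empty queues shared the processor before the change and
    # m_new share it afterwards; the remaining work is unchanged, so the
    # remaining wall-clock time scales by m_new / m_old
    if m_old == 0 or m_new == 0:
        return                             # nothing in service: nothing to rescale
    for ev in departure_events:
        remaining = ev["time"] - clock
        ev["time"] = clock + remaining * (m_new / m_old)

# arrival that makes a queue non-empty (m is the new count): m_old = m - 1, m_new = m
# departure that empties a queue      (m is the new count): m_old = m + 1, m_new = m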
Something is unrealistic about this model. What is it?
- no rescheduling within a queue
- A3 takes that into account
Something is unintuitive about this model. What is it?
- I have been justifying this approach in terms of implementation
simplicity and efficiency.
- But these qualities are starting to recede as things get more complex.
For example, queue-dependent importance.
This is always the case.