# Analytic Queueing Theory - Examples

Text: Chapters 30--36.

## Birth-death process

#### The Simplest Example: M/M/1

First M: Markovian (Exponential) birth (interarrival) times

Second M: Markovian (Exponential) life/death (service) times

1: one server

Assumptions:

1. \lambda_j = \lambda
2. \mu_j = \mu
3. Define r = \lambda / \mu

Then

1. pn = r^n p0
2. p0 = 1 / (1 + r + r^2 + ...) = 1 - r (assuming r < 1, so the geometric series converges).
3. pn = (1 - r) r^n.

The mean number of jobs in the system is E[n] = \sum_n n (1 - r) r^n = r / (1 - r)

• Goes to infinity as r -> 1. Why?
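The stationary distribution and E[n] are easy to check numerically; a minimal sketch, assuming example rates \lambda = 3 and \mu = 5 (so r = 0.6) and truncating the infinite sums:

```python
# Numeric check of the M/M/1 results above. lam and mu are assumed
# example values, not anything fixed by the text.
lam, mu = 3.0, 5.0
r = lam / mu
N = 10_000  # truncation point for the infinite sums

p = [(1 - r) * r**n for n in range(N)]       # pn = (1 - r) r^n

total = sum(p)                               # should be ~1
mean_n = sum(n * pn for n, pn in enumerate(p))

print(total, mean_n)                         # mean_n ~ r / (1 - r) = 1.5
```

The truncation error is negligible here because r^N underflows to zero long before n reaches N.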

Little's law

• Mean response time: E[r] = (1/\lambda) E[n] = 1 / (\mu - \lambda) = 1 / (\mu (1-r))
• Goes to infinity as \lambda -> \mu from below. Why?
• What happens when \lambda > \mu?

Mean waiting time E[w]

1. Mean service time is 1/\mu
2. Mean waiting time is E[w] = E[r] - 1/\mu = r / ( \mu (1-r) )

Mean number of jobs in the queue E[nq]

1. Little's law applied to the queue alone: E[nq] = \lambda E[w]
2. E[nq] = r^2 / (1-r)

Utilization

1. U = 1 - p0 = 1 - (1-r) = r = \lambda / \mu
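The M/M/1 metrics above hang together through Little's law; a sketch collecting them, again with assumed example rates:

```python
# The M/M/1 formulas above in one place. lam and mu are assumed
# example values, not anything fixed by the text.
lam, mu = 3.0, 5.0
r = lam / mu                      # traffic intensity

E_n  = r / (1 - r)                # mean number in system
E_r  = 1 / (mu * (1 - r))         # mean response time (Little: E_n / lam)
E_w  = E_r - 1 / mu               # mean waiting time = r / (mu (1 - r))
E_nq = lam * E_w                  # Little on the queue = r^2 / (1 - r)
U    = 1 - (1 - r)                # utilization = r

print(E_n, E_r, E_w, E_nq, U)
```

Each quantity is computed two different ways across these identities, so the printout doubles as a consistency check.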

#### Another Example: M/M/2 & M/M/m

As above, but with 2 & m servers

Stability

1. 2 servers
1. U = U1 + U2
2. Uj = fraction of time server j is busy = (1/L) \sum_(jobs on j) xj, over an observation interval of length L
• xj = processing time of job j
3. Therefore, U = (1/L) \sum_(all jobs) xj = (n/L) (1/n) \sum_j xj -> \lambda (1/\mu) = \lambda / \mu, where n is the number of jobs completed during the interval.
4. U1 < 1 & U2 < 1 => U < 2.
5. Stable if \pi = \lambda / \mu < 2, i.e., \lambda / (2 \mu) = \pi / 2 < 1.
6. r = \lambda / (2 \mu) = \pi / 2 is called the traffic intensity (\pi = \lambda / \mu.)
2. m servers

The same argument, but

• U < m
• r = \lambda / (m \mu) = \pi / m
• r < 1 or \pi < m

Performance

Two servers

j  =      0  1  2  3  ...
l_j =     l  l  l  l  ...
m_(j+1) = m 2m 2m 2m  ...

Write down the equations and solve them

1. p1 = 2r p0 = \pi p0
2. p2 = r p1 = 2r^2 p0 = (\pi^2/2) p0
3. p3 = r p2 = 2r^3 p0 = (\pi^3/4) p0
4. pn = 2r^n p0 = ( \pi^n / 2^(n-1) ) p0

p0 (1 + 2r + 2r^2 + 2r^3 + ...) = 1 => p0 = (1 + 2 \sum_(j>=1) r^j)^(-1) = (1 + 2r / (1-r))^(-1) = (1-r) / (1+r) = (2 - \pi) / (2 + \pi)

Equivalently, in terms of \pi: p0 ( 1 + \pi + \pi^2/2 + \pi^3/4 + ... ) = p0 ( 1 + \sum_(n>=1) \pi^n / 2^(n-1) ) = p0 ( 1 + 2 \sum_(n>=1) (\pi/2)^n ) = p0 ( 1 + \pi / (1 - \pi/2) ) = p0 ( 1 + \pi/2 ) / ( 1 - \pi/2 ) = 1

Therefore,

pn = 2 r^n (1 - r) / (1 + r) = 2 ((\pi/2)^n) (2 - \pi) / (2 + \pi) for n >= 1

Average number in system

• E[n] = \sum n pn = p0 2\sum n r^n = p0 * 2r / (1-r)^2 = 2r / (1-r)^2 * (1-r) / (1+r) = 2r / (1-r^2) = 4 \pi / ( 4 - \pi^2)
• S(r) = \sum r^n = 1 / (1 - r)
• dS(r)/dr = 1 / (1 - r)^2 = \sum n r^(n-1) = (1/r) \sum n r^n.
• Therefore \sum n r^n = r / (1 - r)^2

Average response time

• E[r] = (1/\lambda) E[n] = 1 / (\mu (1 - r^2)) = 4 / (\mu (4 - \pi^2))

Compare this result to two independent M/M/1 servers, each of which has response time 1 / (\mu (1-r))

• Each gets one half the jobs, so that interarrival rate halves: \lambda -> \lambda / 2
• Thus, response time is 2 / (\mu (2-\pi))
• Compare to 4 / (\mu (4 - \pi^2))
• 2 / (\mu (2-\pi)) - 4 / (\mu (4 - \pi^2)) = 2 / (\mu (2 - \pi)) ( 1 - 2 / (2 + \pi) ) = 2\pi / (\mu (2 - \pi)(2 + \pi)) > 0.
• Pooling both servers behind a single shared queue (M/M/2) gives the smaller response time. Why?
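The comparison can be sanity-checked numerically; a sketch with assumed example rates (\lambda = 3, \mu = 2, so \pi = 1.5 < 2):

```python
# Numeric check of the comparison above. lam and mu are assumed examples.
lam, mu = 3.0, 2.0        # pi = lam / mu = 1.5 < 2, so M/M/2 is stable
pi = lam / mu

shared      = 4 / (mu * (4 - pi**2))   # M/M/2 mean response time
independent = 2 / (mu * (2 - pi))      # two M/M/1's, each fed lam / 2

print(shared, independent)             # the shared queue wins
```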

k servers

j  =        0   1   2   3  ... k-1   k   k+1  ...
l_j =       l   l   l   l  ...  l    l    l   ...
m_(j+1) =   m  2m  3m  4m  ... km   km   km   ...

Write down the equations and solve them

1. p1 = \pi p0
2. p2 = (\pi / 2) p1 = (\pi^2 / 2) p0
3. p3 = (\pi / 3) p2 = (\pi^3 / 6) p0
4. pn = (\pi^n / n!) p0 for n <= k, and pn = (\pi^k / k!) r^(n-k) p0 for n >= k, where r = \pi / k

This will put your skill at summing power series to the test!

p0 ( \sum_(n=0)^(k-1) \pi^n / n! + (\pi^k / k!)(1 + r + r^2 + ...) ) = 1 => p0 = ( \sum_(n=0)^(k-1) \pi^n / n! + \pi^k / (k! (1 - r)) )^(-1)

Average number in system

• E[nq] = \sum_(n>=k) (n - k) pn = C r / (1 - r), where C = (\pi^k / (k! (1 - r))) p0 is the probability that an arriving job must wait (the Erlang C formula)
• E[n] = E[nq] + \pi (the expected number of busy servers is \lambda / \mu = \pi)

Average response time

• E[r] = (1/\lambda) E[n] = 1/\mu + C / (k \mu (1 - r))

Compare this result to k independent M/M/1 servers, each receiving \lambda / k of the arrivals, where the response time is 1 / (\mu (1 - r)).
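The general-k formulas can be packaged as a small routine; a sketch, with k, \lambda, and \mu as assumed example values (for k = 2 it should reproduce the M/M/2 result above):

```python
# A sketch of the general M/M/k formulas (Erlang-C form).
from math import factorial

def mmk_metrics(lam, mu, k):
    pi = lam / mu              # offered load
    r = pi / k                 # traffic intensity, must be < 1
    p0 = 1 / (sum(pi**n / factorial(n) for n in range(k))
              + pi**k / (factorial(k) * (1 - r)))
    C = (pi**k / (factorial(k) * (1 - r))) * p0   # P(arriving job waits)
    E_nq = C * r / (1 - r)                        # mean queue length
    E_n = E_nq + pi                               # mean number in system
    E_r = E_n / lam                               # response time (Little)
    return p0, C, E_r

p0, C, E_r = mmk_metrics(3.0, 2.0, 2)
print(p0, C, E_r)          # E_r should match 4 / (mu (4 - pi^2)) for k = 2
```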

#### Example 3. M/M/1/B

B stands for finite buffer, and B is its size, which includes the job being served.

1. Maximum number of jobs in the system is B. Therefore stable.
2. \lambda_j = \lambda when j < B & \lambda_j = 0 when j >= B
3. \mu_j = \mu
4. pn = p0 r^n when n <= B; pn = 0 otherwise
5. \sum_(n=0)^B pn = 1 = p0 \sum_(n=0)^B r^n = p0 (1 - r^(B+1)) / (1 - r) when r != 1, so p0 = (1 - r) / (1 - r^(B+1)). When r = 1, \sum_(n=0)^B r^n = B + 1, so p0 = 1 / (B + 1).

One can also talk about an effective arrival rate \lambda_eff = \lambda (1 - pB), since arrivals that find the buffer full are lost.
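A sketch of the M/M/1/B quantities, with B, \lambda, and \mu as assumed example values (note r > 1 is allowed here, since the finite buffer keeps the system stable):

```python
# M/M/1/B: stationary distribution, loss probability, effective rate.
# lam, mu, B are assumed example values.
lam, mu, B = 3.0, 2.0, 5      # r = 1.5 > 1 is fine with a finite buffer
r = lam / mu

if r != 1:
    p0 = (1 - r) / (1 - r**(B + 1))
else:
    p0 = 1 / (B + 1)
p = [p0 * r**n for n in range(B + 1)]

p_loss  = p[B]                 # an arrival finds the buffer full
lam_eff = lam * (1 - p_loss)   # effective (accepted) arrival rate

print(sum(p), p_loss, lam_eff)
```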

#### Example 4. M/M/1/Infinity/N

• N stands for number of users.
• This is a closed model with a think time
• The first M now describes an exponential think time rather than an exponential interarrival time.
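This closed model is still a birth-death chain, so a generic birth-death solver handles it; a sketch under the assumption that each of the N thinking users independently submits a request at rate \lambda while thinking (the machine-repair interpretation), with illustrative rates:

```python
# A generic birth-death solver applied to the closed model above.
# Assumption (not from the text): each of the N thinking users submits
# at rate lam; the single server works at rate mu. All values are examples.
N, lam, mu = 4, 1.0, 5.0

birth = [(N - j) * lam for j in range(N)]   # fewer users think as jobs queue
death = [mu] * N

# Detailed balance: p_{j+1} = (birth_j / death_j) p_j, then normalize.
p = [1.0]
for j in range(N):
    p.append(p[-1] * birth[j] / death[j])
total = sum(p)
p = [x / total for x in p]

print(p)   # stationary distribution of the number of jobs at the server
```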