# Spirit of Expert on Part 3…

# Introduction

An esteemed friend not only provided the graphs but also posted a very detailed comment on my last post. Rather than leaving it buried as a comment, I am publishing it as a separate write-up. I believe it will get more hits than my original post…

# Although I am not good at queuing theory…

I would like to make a couple of comments. First of all, this entry summarizes most of the “essentials” of single-server systems. However, before delving into the details, it might be useful to perform some back-of-the-envelope calculations. For single-server models, Kingman’s approximation is a very simple yet extremely useful starting point. Using this approximation, you can pretty much estimate how your system will behave under different arrival and service rates. Let me briefly summarize what this approximation tells us. First, a couple of definitions (sorry, I cannot enter formulae here, so it will be messy):

Let

*c_a* = coefficient of variation of the inter-arrival time, equal to *sigma_a / m_a*, where *sigma_a* and *m_a* are the standard deviation and mean of the inter-arrival time, respectively. (Of course, *m_a = 1/lambda*.) (*c_a* = 1 for the exponential distribution.)

*c_s* = coefficient of variation of the service time, equal to *sigma_s / m_s*, where *sigma_s* and *m_s* are the standard deviation and mean of the service time, respectively. (Of course, *m_s = 1/mu*.) (*c_s* = 1 for the exponential distribution.)

*u* = utilization rate (*u = lambda/mu*)

*Wq* = expected waiting time in queue

Here comes the beauty:

*Wq = (1/2) (c_a^2 + c_s^2) (u / (1 - u)) m_s*
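As a quick sketch (in Python, not part of the original comment), the approximation is a one-liner; the function name and example numbers below are my own choices for illustration:

```python
def kingman_wq(c_a, c_s, u, m_s):
    """Kingman's approximation for the expected waiting time in queue
    of a single-server (G/G/1) system.

    c_a -- coefficient of variation of inter-arrival times
    c_s -- coefficient of variation of service times
    u   -- utilization (must be strictly below 1)
    m_s -- mean service time
    """
    if not 0 <= u < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 0.5 * (c_a**2 + c_s**2) * (u / (1.0 - u)) * m_s

# Example: exponential arrivals and services (c_a = c_s = 1),
# 80% utilization, 1-minute mean service time.
print(kingman_wq(1.0, 1.0, 0.8, 1.0))  # ~ 4.0 minutes
```

For the M/M/1 case (*c_a = c_s = 1*) the approximation is in fact exact, which makes it a handy sanity check.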

and here is the interpretation:

A very nice property of this approximation is that you do not have to know the distributions of the inter-arrival or service times. All you need are the means and the standard deviations, from which you can compute *c_a*, *c_s*, and *u*. Therefore, you do not have to go through the hassle of fitting distributions to your inter-arrival and service times. I am not saying that data modeling is useless. What I am saying is that you can get most of what you want without performing any detailed analysis.
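To make this concrete, here is a small sketch (the measurement values are made up for illustration) showing that raw timing samples are all you need to feed the approximation:

```python
import statistics

def cv(samples):
    """Coefficient of variation: standard deviation over mean."""
    return statistics.pstdev(samples) / statistics.mean(samples)

# Hypothetical measurements, in seconds.
inter_arrival = [2.1, 0.4, 3.7, 1.2, 0.9, 2.6, 1.5]
service       = [1.0, 1.1, 0.9, 1.2, 1.0, 0.8, 1.0]

c_a = cv(inter_arrival)
c_s = cv(service)
m_s = statistics.mean(service)
u = m_s / statistics.mean(inter_arrival)  # u = lambda/mu = m_s/m_a

print(c_a, c_s, u)
```

No distribution fitting anywhere: two means and two standard deviations are the entire data-analysis burden.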

On the other hand, “1 - u” is the probability that, observing the system at a random time, you will find the queue empty and the workstation idle. Therefore, *u/(1-u)* is a unitless measure of your server’s ability to work off its queue. Now fire up your favorite spreadsheet program to see the effect of increasing utilization. Start with low utilization and keep incrementing it toward 1: by 0.05 until you reach 0.9, then by 0.01 until you reach 0.99. Now graph *u/(1-u)*. See what happens? The *u/(1-u)* ratio explodes after about 0.95.
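If you prefer a script to a spreadsheet, the same experiment can be sketched in a few lines (my own translation of the steps above):

```python
# Step utilization up and watch u/(1-u) explode as u approaches 1:
# by 0.05 up to 0.90, then by 0.01 up to 0.99, as described above.
us = [i * 0.05 for i in range(1, 19)]          # 0.05 .. 0.90
us += [0.90 + i * 0.01 for i in range(1, 10)]  # 0.91 .. 0.99

for u in us:
    print(f"u = {u:.2f}  u/(1-u) = {u / (1 - u):8.2f}")
```

At u = 0.5 the ratio is 1, at 0.9 it is 9, at 0.95 it is 19, and at 0.99 it is 99: the curve bends almost vertical near the end.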

What does this tell you? Basically one thing: KEEP YOUR UTILIZATION UNDER CONTROL. Let’s go back to our approximation. Suppose *c_a^2*, *c_s^2*, and *m_s* are fixed. Then *Wq* is just *u/(1-u)* scaled by a constant, so it shows the same behavior: *Wq* explodes after 0.95. In other words, as the ratio between your arrival rate (*lambda*) and your service rate (*mu*) gets larger and larger, at some point your buffer explodes. You can also observe this in the Arena graphs shown above: compare the mean response times for the different loads.

Of course, there is no problem at low utilization. When there are no jobs waiting in the queue, the system refreshes itself: as mentioned in the text, once you are at time t with nothing in the queue, it does not matter what happened before, so you can treat t as zero. However, under “extreme load” it might take a really long time until your buffer empties. Yay… Explosion… Almost everyone working with queues knows this “explosion” fact. The problem is that most of them do not know how to interpret it and, if possible, avoid it.

First of all, if you believe high utilization is always good, change your job and do financial accounting instead of engineering. That is what most financial accounting people think, by the way, since they believe high utilization means effective use of the company’s resources. LOSERS… If, however, you are happy as an engineer, watch your system more carefully. And do not just watch; act. How can you act against increasing utilization? First, your operations team must set a maximum utilization goal. This level may differ from operation to operation, but a rule of thumb I can recommend is not to let your system go above 95% utilization, in order to keep waiting times under control.

How can you do that? You can set a maximum buffer size, say S, and reject customers once you hit that level (i.e. G/G/1/S). In other words, you reject arrivals when there is no space left in your buffer. Another approach is related to renewal theory: you do not set a buffer size, but you regularly refresh your system by choosing time points at which to empty it. For instance, going back to the grocery store example, you stop accepting customers at the end of each hour until there is no one left in your buffer.

If you want something hardcore, you can combine these two policies. That is, you set two levels for your buffer, say S (large) and s (small). When you have S jobs waiting, you stop accepting new jobs; once the backlog drains down to s, you start accepting again until you hit S, and the cycle continues. Since your goal is to minimize waiting time, deciding on S and s becomes an optimization problem. I could write another long comment about how to solve it, but I think it is out of scope this time.
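The two-threshold (s, S) policy described above can be sketched as a small state machine; this is my own minimal illustration, not code from the post, and the class and method names are invented:

```python
class HysteresisAdmission:
    """Admission control with two thresholds: stop accepting new jobs
    when the backlog reaches S, resume once it drains down to s."""

    def __init__(self, s, S):
        assert 0 <= s < S
        self.s, self.S = s, S
        self.backlog = 0
        self.accepting = True

    def arrive(self):
        """Admit a job if the policy allows it; return True if admitted."""
        if self.accepting and self.backlog < self.S:
            self.backlog += 1
            if self.backlog == self.S:
                self.accepting = False  # buffer full: close the door
            return True
        return False  # rejected

    def depart(self):
        """A job finished service; reopen once backlog drains to s."""
        if self.backlog > 0:
            self.backlog -= 1
            if self.backlog == self.s:
                self.accepting = True

# Example: with s=2 and S=5, the sixth arrival in a row is rejected,
# and admissions resume only after the backlog drains back to 2.
ctrl = HysteresisAdmission(s=2, S=5)
print([ctrl.arrive() for _ in range(6)])
```

Note the hysteresis: rejecting until the backlog reaches s (rather than reopening immediately at S-1) avoids rapid open/close flapping around the threshold.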

BOTTOM LINE:

- You do not need detailed statistical analysis to understand how your system will behave.
- Most systems can be managed using back-of-the-envelope calculations and careful control of utilization.
- Once again, that does not mean you should ignore queueing theory. It is absolutely essential if you want to find a globally optimal solution.

If you want some more fun, observe what happens to *Wq* as *c_a* and *c_s* increase. You will see that VARIANCE IS THE ENEMY. This will also help you understand how important it is to achieve deterministic service times. There is not much you can do about arrivals, but once you have deterministic service times, you can at least get rid of the *c_s^2* term in the equation.
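A quick numeric check of that claim (again my own sketch): holding utilization fixed and zeroing out service-time variability cuts the approximated wait in half when arrivals are exponential.

```python
def kingman_wq(c_a, c_s, u, m_s):
    """Kingman's G/G/1 waiting-time approximation (as above)."""
    return 0.5 * (c_a**2 + c_s**2) * (u / (1.0 - u)) * m_s

u, m_s = 0.9, 1.0
# Exponential service (c_s = 1) vs deterministic service (c_s = 0),
# both with exponential arrivals (c_a = 1):
print(kingman_wq(1, 1, u, m_s))  # M/M/1 case, ~9 time units
print(kingman_wq(1, 0, u, m_s))  # M/D/1 case, ~4.5: half the wait
```

With *c_a = 1*, the factor *(c_a^2 + c_s^2)/2* drops from 1 to 1/2 when *c_s* goes from 1 to 0, so the expected queueing delay is halved without touching utilization at all.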

Have fun,

Y. Alan

# Last word…

My dear friend, I am posting your comment without changing a word. Thank you for the great enlightenment on this topic. Thanks…

Posted on March 26, 2007, in Oracle.
