
Queueing Theory



Queueing Theory is a field of study under Operations Research dealing with queues or waiting lines. It helps manage queues and plan resource requirements by building predictive models for waiting times and queue lengths.

 

An application-oriented question on the topic along with responses can be seen below. The best answer was provided by Pradeepan Sekar and Sourabh Nandi. Applause for the joint winners. 

 

Also review the answer provided by Mr Venugopal R, Benchmark Six Sigma's in-house expert.

 

 

Question

Q 284. Seven service disciplines are listed in the Queueing theory article on Wikipedia at https://en.wikipedia.org/wiki/Queueing_theory. Starting with FIFO and ending with "shortest remaining processing time", each of these disciplines serves a purpose. Provide specific examples where these seven service disciplines could be of use.

 

Note for website visitors - Two questions are asked every week on this platform. One on Tuesday and the other on Friday.


Queuing Theory:

1.   Queuing Theory is the mathematical study of waiting times and queues in a model.

2.   Queuing theory is used to predict queue lengths and waiting times in a simulation so that better business decisions can be made, such as resource planning, space allocation and premium pricing.

3.   Queuing applies not only to customers, but also to products, raw materials, services or projects that are lined up.

How does queuing work?

An entity arrives at the queuing node, where it is to be serviced. It may have to wait before being treated, then spends some time being serviced, and then leaves the node.
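As a rough illustration of this flow, here is a minimal single-server, FIFO-queue sketch in Python; the arrival and service times are invented example values, not figures from this post.

```python
# Minimal single-server queue: each entity arrives, waits if the server is
# busy, is serviced, and then leaves the node (illustrative values only).

arrival_times = [0, 2, 3, 7, 9]   # hypothetical arrival instants
service_times = [4, 3, 2, 1, 2]   # hypothetical service durations

server_free_at = 0
for i, (arrive, service) in enumerate(zip(arrival_times, service_times)):
    start = max(arrive, server_free_at)   # wait until the server is free
    wait = start - arrive                 # time spent waiting in the queue
    depart = start + service              # instant the entity leaves the node
    server_free_at = depart
    print(f"Entity {i}: waited {wait}, departed at {depart}")
```

Entity 3, for example, arrives at time 7 but the server only becomes free at time 9, so it waits 2 time units before being serviced.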

 

Some applications of Queuing theory:

1.   Raw materials are stored and then consumed as per requirement.

(Queue length – inventory | Waiting time – capital tied up in WIP / shelf life)

2.   Ships waiting for clearance. (Waiting time and service time – demurrage)

3.   Car parking at a mall. (Queue / service length – number of parking slots)

 

Scheduling policies:

There are different ways to assign or schedule the next entity to the server; these are called scheduling policies. A short code sketch after this list illustrates how several of them pick the next entity.

1.   First in, First out:

a.   This policy states that whoever comes first is served first, i.e. the entity with the longest waiting time is treated first.

b.   It is considered a fair and ethical way to schedule service for customers, e.g. the token system in a bank or hospital.

c.    In manufacturing, First In First Out is used to keep track of the shelf life of products, e.g. in raw material consumption.

 

2.   Last in, First out:

a.   This policy states that whoever comes last is served first, i.e. the entity with the shortest waiting time is treated first.

b.   It is also described as a stack. It is applicable in cases where a constraint forces you to treat the last item first.

c.    In drive-in racking, the last item placed has to be picked first.

 

 

3.   Processor sharing:

a.   In processor sharing, several entities are served at the same time; the server's capacity is shared among as many entities as are present.

b.   A mall or theatre, for example, accommodates many customers at the same time.

 

4.   Shortest Job First:

a.   The shortest job is scheduled first, which helps reduce the queue length.

b.   In supermarkets, there is a separate queue for customers with fewer than five billing items; other customers are treated only after these shortest jobs are handled.

 

5.   Longest Job first:

a.   The longest job is scheduled first.

b.   In the case of project funding (with funding as the server), the longest project can be allocated funding first so that it starts earliest.

   

6.   Priority:

a.   Customers with high priority are treated first.

b.   It can be preemptive (where the customer in service is interrupted) or non-preemptive.

c.    Priority can be given in the form of a premium queue or for emergencies.

d.   Patients whose lives are at risk are treated immediately in hospitals.

e.   Paid premium queues in amusement parks.

 

7.   Shortest remaining processing time:

a.   Among customers who have already been partly treated, the one with the shortest remaining processing time is served next.

b.   In banks, after customers have filled in their forms, priority is given to the submission of the forms.
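As mentioned above, here is a minimal sketch of how several of these policies pick the next entity from the same waiting line. The entity names, arrival order, service times, priority values and the "lower number = more urgent" convention are all invented for illustration.

```python
# One waiting line, several selection policies (all values are invented).
waiting = [
    # (name, arrival_order, service_time, priority)  # lower priority value = more urgent
    ("A", 1, 5, 3),
    ("B", 2, 2, 1),
    ("C", 3, 8, 2),
]

picks = {
    "FIFO"              : min(waiting, key=lambda e: e[1]),  # longest wait  -> A
    "LIFO"              : max(waiting, key=lambda e: e[1]),  # shortest wait -> C
    "Shortest job first": min(waiting, key=lambda e: e[2]),  # smallest job  -> B
    "Longest job first" : max(waiting, key=lambda e: e[2]),  # largest job   -> C
    "Priority"          : min(waiting, key=lambda e: e[3]),  # most urgent   -> B
}
for rule, entity in picks.items():
    print(f"{rule} serves {entity[0]} next")
```

Processor sharing and shortest remaining processing time are not shown here because they do not reduce to picking one waiting entity: the former serves everyone at once, and the latter also depends on the remaining time of the entity currently in service.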

 

Key Points:

1.   Types of service facility: single server, multiple servers with a single queue, multiple servers with multiple queues.

2.   Queue length, waiting time and service time are the key parameters.

3.   Customer Behavior:

a.   Balking: a customer does not join the queue because of its length (see the short sketch after this list).

b.   Jockeying: switching between queues in the expectation that another queue will move faster.

c.    Reneging: leaving the queue after joining (also called dropping out).
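As a tiny illustration of balking, here is a sketch in which an arriving customer joins only if the line is shorter than an assumed tolerance of five; both the queue contents and the tolerance are hypothetical values.

```python
# Balking: the arriving customer refuses to join if the queue is too long.
queue = ["c1", "c2", "c3"]     # customers already waiting (hypothetical)
tolerance = 5                  # assumed longest queue the customer will accept

if len(queue) < tolerance:
    queue.append("new customer")          # joins the queue
else:
    print("Customer balks and leaves")    # does not join at all
```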


Queueing Theory

 

Many of us have experienced the frustration of having to wait in line. Unfortunately, this experience remains common in our crowded, urbanized, “high-tech” environment. We wait in line in our automobiles in traffic jams; we wait on hold for an executive to pick up our phone calls; we wait in line at fast-food joints; and we wait in line at outlets to check out.

 

We, as consumers, hardly like these delays, and the organizers of the establishments at which we wait do not like to make us wait either, since it may cost them business. Why, then, is there waiting? The answer is straightforward: there is more demand for service than there is service capacity available. Why is this so? There may be many reasons; for example, there may be a shortage of servers, it may be economically infeasible for a business to furnish the level of service necessary to limit waiting, or there may be a space limit on the amount of service that can be provided. These limitations can generally be removed with enough money, and to know how much service should then be made available, one needs answers to such questions as “How long must a consumer wait?” and “How many of us will form within the line?” Queueing theory seeks to answer these questions through comprehensive analytical study.
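As a rough quantitative handle on those two questions, the standard single-server (M/M/1) results can be sketched as follows, where $\lambda$ is the arrival rate, $\mu$ the service rate and $\rho = \lambda/\mu$ the utilization; the notation and the numerical example are illustrative additions, not part of the original answer.

\[
L = \lambda W \ \text{(Little's Law)}, \qquad
W = \frac{1}{\mu - \lambda}, \qquad
W_q = \frac{\lambda}{\mu(\mu - \lambda)}, \qquad
L_q = \frac{\rho^2}{1 - \rho}, \qquad \rho = \frac{\lambda}{\mu} < 1
\]

For example, with $\lambda = 8$ customers per hour and $\mu = 10$ per hour, $\rho = 0.8$, so a customer spends on average $W = 1/(10-8) = 0.5$ hours in the system and the waiting line holds on average $L_q = 0.64/0.2 = 3.2$ customers.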

 

Characteristics of Queueing Systems 
A quantitative interpretation of a queueing system requires a mathematical model of its elemental processes. In most cases, six primary characteristics give an acceptable description of the process:

  1. Arrival pattern of customers 
  2. Service pattern of servers 
  3. Number of servers and service channels 
  4. System capacity 
  5. Queue discipline
  6. Number of service stages

Problems in a Queueing System
The ultimate aim of the analysis of queueing systems is to understand the behavior of their underlying processes so we can make informed and intelligent decisions in their management. We can identify three types of problems in this process.

  1. Behavioral Problems
  2. Statistical Problems 
  3. Decision Problems

Queue discipline / Service disciplines
Queue discipline refers to the manner in which customers are selected for service when a queue has formed.

 

(1) First-come, first-served (FCFS) or First in first out (FIFO)
The most popular is the first-come, first-served (FCFS) rule, which is static because no information other than position in line is used to identify the next customer for service: customers are served in the order in which they arrive.


 

(2) Last come, first served (LCFS) or Last in first out (LIFO)

Last come, first served (LCFS) also applies in many inventory systems, because it is easier to reach the nearest items, which are the ones placed last; the last customers to arrive are served first. Goods inside a van are usually arranged so that the first item loaded into the truck is delivered last, and a stack of pancakes is eaten from the top, last item first.

 

(3) Processor Sharing (PS)

Under Processor Sharing (PS), the server processes all customers (or jobs) simultaneously but works at a slower rate on each job, depending on the number of jobs in the system (this is typical in computer systems). The processor-sharing discipline is very popular in computers, communication systems and networks. Queueing systems with processor sharing are suitable models for sharing resources, e.g. the peripherals of a computer or the bandwidth of delivery systems.
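A minimal sketch of egalitarian processor sharing under assumed values: three jobs with invented amounts of remaining work share one server equally, so each progresses at rate 1/n while n jobs are present.

```python
# Egalitarian processor sharing: the server's capacity is split equally among
# all jobs in the system (job names and work amounts are invented).

remaining = {"job-A": 3.0, "job-B": 1.0, "job-C": 2.0}   # work left per job
t = 0.0
while remaining:
    n = len(remaining)
    nxt = min(remaining, key=remaining.get)   # least work left finishes first
    dt = remaining[nxt] * n                   # wall-clock time until it completes
    t += dt
    for job in remaining:
        remaining[job] -= dt / n              # every job progresses at rate 1/n
    del remaining[nxt]
    print(f"{nxt} completes at t = {t:.1f}")  # B at 3.0, C at 5.0, A at 6.0
```

Because every job advances at the same shared rate, the job with the least remaining work always completes first, which is why the sketch can jump straight from one completion to the next.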


 

(4) Priority

Particular types of selected customers are served early. Business-class passengers board the aircraft before economy class, and patients with critical conditions are served first in the emergency room, ahead of those with ordinary illnesses.

 

(5) Shortest job first (SJF)

Under Shortest Job First (SJF), the job in the queue with the smallest execution time is chosen for the next execution. This scheduling method can be preemptive or non-preemptive, and it significantly reduces the average waiting time of processes awaiting execution, as the small example below shows.
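The claim about average waiting time can be checked with a small sketch; the burst times are invented values and all jobs are assumed to arrive at the same instant.

```python
# Average waiting time: FCFS (arrival order) vs. SJF (shortest burst first).
def avg_wait(bursts):
    wait, clock = 0, 0
    for b in bursts:
        wait += clock          # each job waits for all jobs scheduled before it
        clock += b
    return wait / len(bursts)

bursts = [6, 2, 4]                            # hypothetical burst times
print("FCFS:", avg_wait(bursts))              # ~4.67 time units
print("SJF :", avg_wait(sorted(bursts)))      # ~2.67 time units
```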

 

(6) Preemptive shortest job first (PSJF)

In the preemptive SJF setting, processes are put into the ready queue as they arrive, and the process with the shortest burst time begins execution. If a process with an even shorter burst time arrives, the current process is preempted and the CPU cycle is allotted to the shorter one.

 

(7) Shortest remaining processing time (SRPT)

The shortest remaining processing time (SRPT) discipline is optimal with respect to minimizing the steady-state mean flow time. Under this rule, when a job is to be selected from among those waiting, the one with the shortest remaining processing time is chosen. An arriving job will preempt the job in service if, and only if, its processing time is less than the remaining processing time of the job in service; a small sketch of this rule follows below.
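Here is a minimal one-server SRPT simulation sketch; the (arrival time, processing time) pairs are invented example values. Whenever a job arrives or finishes, the job with the least remaining processing time is (re)selected for service, so a short new arrival can preempt a longer job already in service.

```python
import heapq

jobs = [(0, 7), (2, 4), (3, 1)]              # (arrival_time, processing_time), sorted by arrival

t, i, ready, done = 0.0, 0, [], []           # ready holds (remaining_time, job_id)
while i < len(jobs) or ready:
    if not ready:                            # server idle: jump to the next arrival
        t = max(t, jobs[i][0])
    while i < len(jobs) and jobs[i][0] <= t: # admit everything that has arrived
        heapq.heappush(ready, (float(jobs[i][1]), i))
        i += 1
    remaining, jid = heapq.heappop(ready)    # shortest remaining time first
    next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
    run = min(remaining, next_arrival - t)   # run until completion or next arrival
    t += run
    if run < remaining:
        heapq.heappush(ready, (remaining - run, jid))   # preempted and re-queued
    else:
        done.append((jid, t))                # job finished
print(done)                                  # [(2, 4.0), (1, 7.0), (0, 12.0)]
```

With these numbers, the one-unit job arriving at time 3 preempts the job in service and completes first, while the long job that arrived first finishes last.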

 

Queue discipline flow chart:

 


 


Benchmark Six Sigma Expert View by Venugopal R

The service disciplines that are part of Queuing Theory are applicable in many situations, but are used very extensively in CPU scheduling algorithms.

 

1. FIFO (First In First Out) is a very popular method, also referred to as the FCFS (First Come First Serve) algorithm in CPU scheduling. The FIFO concept is commonly applied to most queues in daily life, say at a ticket counter or a grocery store billing counter. FIFO is important as part of inventory management, as we would generally like to use or sell materials and products before they become aged, especially when there is a risk of limited shelf life or obsolescence. For CPU scheduling algorithms, FCFS is preferable when the processes have short burst times.

 

2. LIFO (Last In First Out) is literally the opposite of FIFO. In day-to-day life, LIFO is likely to happen when we stack up any material that is expected to be consumed fast, with no risk of expiry or obsolescence. For instance, even if a FIFO model is followed by a supermarket or an assembly shop at the batch level to stack their shelves and bins, the consumption of goods within a batch will happen on a LIFO basis, since the item stacked last has the best reach. As an accounting method, LIFO is applied by a business if it wants to expense its most recent inventory first. If the costs of recent goods are higher, LIFO will reflect higher inventory costs, meaning lower profits and lower tax for that period. LIFO is permitted as an accepted accounting principle in some countries.

 

3. Processor Sharing - In this approach, all the recipients are served at the same time by sharing the available resource. It is akin to many households tapping water from a common water tank through a well-laid network of pipes. There is no priority, and the available resource is shared by all. Such scheduling is also referred to as ‘egalitarian processor sharing’, where each client obtains an equal fraction of the available capacity. The processor-sharing algorithm can be viewed as a development of the ‘Round Robin’ scheduling algorithm. The application of this scheduling discipline, through the internet and myriad other service portals, has revolutionized the way the world does many activities over the last couple of decades.

 

4. Priority scheduling - To understand this discipline, let us imagine a queue of patients waiting to see a doctor on a FIFO basis. If an emergency case suddenly comes in and that patient is given priority, there are two possibilities: i) the doctor interrupts the session with the current patient and goes to attend to the emergency case – this is pre-emptive; ii) the doctor completes the session with the current patient and then attends to the emergency case – this is non-pre-emptive.

In the case of CPU scheduling, where multiple processes are processed by a CPU, each process has a priority number assigned. The CPU starts processing the process that arrived first. When another process arrives, the priority numbers are checked. If it is a non-pre-emptive schedule, the CPU will complete its current process and then check the priority numbers of all the processes waiting in the ‘ready queue’; the process with the highest priority is taken up next. If it is a pre-emptive schedule, the CPU checks the priority number of each new process as it arrives, and if a process with higher priority than the current one is available, the CPU is allocated to that new process and the current process is moved to the ‘ready queue’ to be resumed later. A short code sketch of both modes appears after point 7 below.

 

5. Shortest Job First - This will be easy to understand if we have understood the ‘Priority’ discipline explained above. Shortest Job First (SJF) is a non-pre-emptive algorithm where priority is given based on the execution time, also known as 'burst time'. In this case, the shorter the duration, the higher the priority. This finds use in CPU scheduling, where shorter processes are not made to wait too long, thus reducing the overall waiting time. The SJF algorithm is preferred when many processes arrive at the processor simultaneously.

 

6. Pre-emptive shortest job first - This is a pre-emptive variant of the above discipline, where the current process is interrupted to accommodate a newly arrived process with a shorter duration. The idea is to reduce the overall waiting time and allow faster completion of shorter processes. However, this method is possible only if the processor has knowledge of the burst time of each process. It is not a recommended method if too many short-duration processes keep arriving between longer-duration processes, since that will lead to long waiting times, or ‘starvation’, for the longer processes.

 

7. Shortest remaining processing time - This is a pre-emptive CPU scheduling discipline where the processing time of a new process is compared with the remaining time of the current process. If the remaining time of the current process is less than the processing time of the new process, the current process continues to be executed until completion. On the other hand, if the processing time of the new process happens to be less than the remaining time of the current process, the existing process is pre-empted and the new process is taken up by the CPU. This discipline can be exercised only if the estimated burst times of the processes are known. It is a bit more advantageous than the earlier case of pre-emptive shortest job first, since a current process that has already partially executed and is closer to completion than a new one will be allowed to complete.
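Returning to the pre-emptive versus non-pre-emptive distinction described under point 4, here is a minimal sketch of what happens when a new process arrives while another is running; the process names, priority values and the "lower number = higher priority" convention are assumptions made for illustration.

```python
# Pre-emptive vs. non-pre-emptive priority scheduling on a single CPU.
running = {"name": "P1", "priority": 5, "remaining": 4}   # process currently on the CPU
arrival = {"name": "P2", "priority": 1}                   # lower number = higher priority

def on_arrival(running, arrival, preemptive):
    if preemptive and arrival["priority"] < running["priority"]:
        # Interrupt the current process; it goes back to the ready queue.
        return f'{arrival["name"]} takes the CPU, {running["name"]} returns to the ready queue'
    # Non-pre-emptive mode, or a lower-priority arrival: the current process finishes first.
    return f'{running["name"]} keeps the CPU; {arrival["name"]} waits in the ready queue'

print("pre-emptive    :", on_arrival(running, arrival, preemptive=True))
print("non-pre-emptive:", on_arrival(running, arrival, preemptive=False))
```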
