Thread Scheduling (ctd): quanta, switching and scheduling algorithms

This page continues from part 1 our discussion of thread scheduling. On the next page, we'll discuss the Java implications of thread scheduling.

Quanta and clock ticks

Each thread has a quantum, which is effectively how long it is allowed to keep hold of the CPU if it doesn't otherwise cease to be runnable (for example, by blocking on I/O or a lock).

Thread quanta are generally defined in terms of some number of clock ticks. Unless the thread otherwise ceases to be runnable, the scheduler decides whether to preempt the currently running thread at every clock tick. As a rough guide:

- a clock tick is typically in the order of 1-10 ms under Linux, and 10-15 ms under Windows;
- a quantum is typically in the order of a few tens of milliseconds (for example, a couple of ticks on client versions of Windows, and several times that on server versions).

Because clock ticks aren't such granular units, and because we don't want to waste time making fine-tuned measurements, certain simplifications have to be made in assessing how much time a thread has used. In Windows, for example, a thread that enters a wait state mid-way through a clock tick is assumed to have used a third of a tick during that timeslice. This last point shows that quanta may not actually be calculated in whole numbers of clock ticks: Windows uses a base unit of a third of a clock tick.
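To make the accounting concrete, the following sketch models quantum charging in thirds-of-a-tick units. The class and method names are purely illustrative (they are not Windows APIs), but the arithmetic follows the scheme just described: a full tick charges three units, entering a wait mid-tick charges one.

```java
/**
 * Sketch of Windows-style quantum accounting, where the base unit is
 * one third of a clock tick. Names are illustrative, not real APIs.
 */
public class QuantumAccounting {
    static final int UNITS_PER_TICK = 3;   // quanta are charged in thirds of a tick

    int remainingUnits;

    QuantumAccounting(int quantumInTicks) {
        this.remainingUnits = quantumInTicks * UNITS_PER_TICK;
    }

    /** Called on each clock tick while the thread keeps running. */
    boolean chargeFullTick() {
        remainingUnits -= UNITS_PER_TICK;
        return remainingUnits <= 0;        // true => quantum expired, eligible for preemption
    }

    /** Called when the thread enters a wait mid-tick: assumed to have used 1/3 tick. */
    void chargeWait() {
        remainingUnits -= 1;
    }

    public static void main(String[] args) {
        QuantumAccounting q = new QuantumAccounting(2);  // 2-tick quantum = 6 units
        q.chargeFullTick();                              // runs a full tick: 3 units left
        q.chargeWait();                                  // waits mid-tick: 2 units left
        System.out.println(q.remainingUnits);            // prints 2
    }
}
```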

In Windows, a thread's quantum allocation is fairly stable. In Linux, on the other hand, a thread's quantum is dynamically adjusted when it is scheduled, depending partly on heuristics about its recent resource usage and partly on a nice value. (See our discussion of thread priority for illustrations of the relationship between Java thread priority and CPU allocation under Linux.)
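From Java, you cannot set a thread's quantum or nice value directly; the closest lever is Thread.setPriority(), and how the VM maps that hint onto OS priorities (and hence onto scheduling behaviour) is platform-dependent. A minimal example:

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread background = new Thread(() -> {
            // Some low-urgency busy work.
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
            System.out.println("done: " + sum);
        });
        // A hint only: how (or whether) this maps to an OS priority or
        // nice value depends on the JVM and operating system.
        background.setPriority(Thread.MIN_PRIORITY);
        background.start();
        background.join();
    }
}
```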

Switching and scheduling algorithms

At key moments, the thread scheduler considers whether to switch the thread that is currently running on a CPU. These key moments are usually:

- when a clock tick occurs and the running thread has exhausted its quantum;
- when the running thread ceases to be runnable (for example, because it blocks waiting for I/O or a lock);
- when a thread that was previously not runnable becomes runnable again (for example, because the I/O it was waiting for completes).

At these decision points, the scheduler's job is essentially to decide, of all the runnable threads, which are the most appropriate to actually be running on the available CPUs. Potentially, this is quite a complex task. But we don't want the scheduler to waste too much time deciding "what to do next". So in practice, a few simple heuristics are used each time the scheduler needs to decide which thread to let run next:

- prefer the runnable thread(s) with the highest priority;
- among threads of equal priority, take turns (round-robin), typically by cycling through a queue;
- prefer to keep a thread on the processor it has recently run on, for cache performance.
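A toy model of this kind of decision, in Java, might look like the following. This is a deliberately simplified sketch (not any real kernel's algorithm): runnable threads sit in per-priority queues, and the scheduler picks from the front of the highest-priority non-empty queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.TreeMap;

/** Toy priority-plus-round-robin pick; names and structure are illustrative. */
public class ToyScheduler {
    // Runnable queues keyed by priority; higher number = higher priority.
    private final TreeMap<Integer, Deque<String>> runQueues = new TreeMap<>();

    void makeRunnable(String threadName, int priority) {
        runQueues.computeIfAbsent(priority, p -> new ArrayDeque<>()).addLast(threadName);
    }

    /**
     * Pick the next thread to run: front of the highest-priority queue.
     * A preempted thread would be re-queued via makeRunnable, giving
     * round-robin behaviour among equal-priority threads.
     */
    String pickNext() {
        Map.Entry<Integer, Deque<String>> top = runQueues.lastEntry();
        if (top == null) return null;               // nothing runnable: idle
        String next = top.getValue().pollFirst();
        if (top.getValue().isEmpty()) runQueues.remove(top.getKey());
        return next;
    }
}
```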

Quantum and priority boosting

Both Windows and Linux (from kernel 2.6.8 onwards) implement temporary boosting. Strategic points at which a thread may be given a "boost" include:

- when an I/O operation it was waiting for completes;
- when it wakes up after waiting on a lock, event or semaphore;
- when it has been runnable for some time but starved of CPU.

In Linux, threads can also be given negative boosts, or penalties. In either direction, there are generally limits on these temporary adjustments.
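The clamping of boosts can be sketched as follows. The numbers and names here are hypothetical (real limits differ between Windows and Linux); the point is just that the effective priority is the base priority plus a boost that is bounded in both directions.

```java
/** Sketch of a clamped temporary priority boost; limits are illustrative. */
public class BoostedPriority {
    static final int MAX_BOOST = 5;    // upper cap on temporary boosts
    static final int MIN_BOOST = -5;   // lower cap: Linux-style penalties

    final int basePriority;
    int boost;

    BoostedPriority(int basePriority) {
        this.basePriority = basePriority;
    }

    /** Apply a boost (positive) or penalty (negative), clamped to the limits. */
    void applyBoost(int delta) {
        boost = Math.max(MIN_BOOST, Math.min(MAX_BOOST, boost + delta));
    }

    int effectivePriority() {
        return basePriority + boost;
    }
}
```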

A general implementational difference is that Windows tends to use yes-no decisions ("has the thread just received I/O?"), whereas Linux uses slightly more complex heuristics (e.g. quantifying the amount of CPU the thread has recently used).

Next: implications for Java

Having seen how thread scheduling generally works under the hood, in the next section we'll look at how thread scheduling affects Java.

Further reading

In this discussion, we've tried to concentrate on the generalities of thread scheduling. If you want to look at even lower-level details for a particular operating system, then the following are recommended as a starting point:


Windows Internals (5th ed, due Feb 2009): see the chapter on Threads and Processes, and the subsection on Thread Scheduling. The book gives details of the specific priority and quantum boosts made in certain circumstances, and discusses the modifications made on multiprocessor systems.

Web resources include the Process and Thread Reference section of the MSDN library.


In Linux, the ultimate resource (though not the most understandable) is the kernel source code, of course.

For a slightly friendlier but quite detailed introduction to the nitty-gritty of Linux scheduling, an excellent starting point is Bovet & Cesati (2008) Understanding the Linux Kernel (3rd Ed).

3. There are corner cases where this won't happen. For example, a scheduler generally won't break the processor affinity of a thread, which means that a high-priority thread could potentially sit waiting for its preferred processor to become free, even though another processor is available.
4. The way in which thread priorities are boosted is subtly different on Windows depending on the cause of the boost. After I/O completion, the thread effectively receives a "staggered boost" over a couple of quanta, the priority decaying by 1 after each quantum.
5. Although not directly settable from Java, processor affinity is an indication of which processor(s) a given thread is allowed to run on. Usually, any thread can run on any processor (though for cache performance reasons, a scheduler will tend to favour scheduling a thread on the same processor it has recently run on).


Editorial page content written by Neil Coffey. Copyright © Javamex UK 2021. All rights reserved.