CPU Scheduling Lab
Learn how operating systems schedule processes
NexOS is an interactive simulation environment for exploring CPU scheduling algorithms. Visualize Gantt charts, compare algorithms side-by-side, and develop intuition for how real operating systems decide which process runs next.
6 Scheduling Algorithms · 6 Guided Challenges · 4 Hands-on Labs · Custom Workloads
What is NexOS? Platform overview
🖥 Interactive Simulator
Build custom process workloads and watch them execute tick-by-tick on an animated Gantt chart. Toggle step-by-step mode for a detailed breakdown of every scheduling decision made.
✦ AI-Powered Insights
NexOS analyses your workload in real time and predicts which algorithm will perform best, estimates metrics before you run, and flags anomalies like starvation risk or convoy effects.
◈ Deep Analytics
After each run, explore waiting time distributions, CPU utilization curves, Jain's Fairness Index, and throughput over time — the same metrics used by real OS engineers.
⇄ Algorithm Comparison
Run all 6 algorithms on the same workload instantly and compare them in a side-by-side table with overlaid Gantt charts, profile bars, and a What-If quantum tuner.
Quickstart Guide Get running in 60 seconds
1
Open the Simulator
Click the Simulator tab. Choose a preset workload (Basic, Convoy, Starvation…) or build your own from scratch.
2
Manage Processes
Click the "Manage Processes" box to add, edit, or delete processes. Set arrival time, burst time, priority, and I/O probability.
3
Run & Observe
Select an algorithm and click RUN. Watch the Gantt chart animate, process states update, and metrics populate in real time.
4
Check AI Insights
The sidebar shows instant AI predictions. Open the full AI Insights tab for algorithm rankings, estimated metrics, and anomaly alerts.
5
Compare Algorithms
Head to Compare and click Auto-Rank All to benchmark all 6 algorithms on your workload simultaneously.
6
💾
Save & Revisit
Save interesting runs with tags and notes. Reload them from History to continue your experiments anytime.
Key Concepts Essential terminology
Burst Time
How long a process needs the CPU to complete. Short bursts characterise I/O-bound processes; long bursts mean CPU-bound work.
🚪
Arrival Time
When a process enters the ready queue. Real systems constantly receive new processes at unpredictable times.
🔄
Turnaround Time
Finish time minus arrival time. The total wall-clock time a process spends in the system — the primary user-facing metric.
Waiting Time
Time spent in the ready queue, not executing. The cost of being preempted or waiting behind other processes.
Response Time
Time from arrival to the first CPU execution. Critical for interactive applications like GUIs and web servers.
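These three metrics are simple arithmetic over the simulation trace. A quick worked example with made-up values (a single process; arrival, first-run, and finish ticks are illustrative):

```python
# Hypothetical trace: a process arrives at t=2, first gets the CPU at t=5,
# and finishes at t=12 after a burst of 7 ticks.
arrival, first_run, finish, burst = 2, 5, 12, 7

turnaround = finish - arrival        # total wall-clock time in the system
waiting = turnaround - burst         # time in the ready queue, not executing
response = first_run - arrival       # delay until the first CPU execution

print(turnaround, waiting, response)  # 10 3 3
```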
🌀
Context Switch
The overhead of saving one process's state and loading another's. Frequent switches improve fairness but add CPU overhead.
🏔
Starvation
When a low-priority process waits indefinitely because higher-priority processes keep arriving and taking the CPU first.
🚛
Convoy Effect
In FCFS, a single long process blocks all shorter ones behind it, causing dramatically high average wait times.
Jain's Fairness Index
A number from 0 to 1 measuring how equally CPU time is distributed. 1.0 = perfectly fair; lower = some processes are favoured.
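The index is computed directly from per-process CPU allocations. A minimal sketch (the allocation values below are illustrative):

```python
def jain_fairness(allocations):
    """Jain's Fairness Index: (sum x)^2 / (n * sum x^2).
    Returns 1.0 when every process received identical CPU time;
    smaller values mean some processes were favoured."""
    n = len(allocations)
    total = sum(allocations)
    return total * total / (n * sum(x * x for x in allocations))

print(jain_fairness([10, 10, 10]))  # 1.0 — perfectly fair
print(jain_fairness([30, 5, 5]))    # well below 1.0 — one process dominates
```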
📦
Preemption
Whether the scheduler can forcibly remove a running process from the CPU. Preemptive algorithms allow this; non-preemptive do not.
📶
CPU Utilisation
The percentage of time the CPU is doing useful work (not idle). Good schedulers maximise utilisation, especially under heavy load.
📊
Throughput
Number of processes completed per unit time. Maximising throughput is the primary goal in batch processing environments.
Scheduling Algorithms All 6 algorithms in NexOS
FCFS
Non-preemptive
First-Come First-Served — processes run in arrival order with no interruption. Simple to implement but susceptible to the convoy effect.
Best for: Uniform batch jobs
Weakness: Convoy effect, high average wait
Overhead: Minimal
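FCFS is simple enough to sketch in a few lines. This toy version (process names and timings are illustrative; I/O and context-switch cost are ignored) also demonstrates the convoy effect:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst).
    Returns (name, start, finish) tuples in strict arrival order."""
    time, schedule = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)   # CPU may sit idle until the process arrives
        time = start + burst         # runs to completion, no preemption
        schedule.append((name, start, time))
    return schedule

# Convoy effect: the long job P1 delays every short job queued behind it.
print(fcfs([("P1", 0, 20), ("P2", 1, 2), ("P3", 2, 2)]))
```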
SJF
Non-preemptive
Shortest Job First — picks the process with the smallest burst time. Optimal average wait time for non-preemptive scheduling, but requires knowing burst times in advance.
Best for: Known burst-time workloads
Weakness: Starvation of long jobs
Overhead: Low
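A non-preemptive SJF sketch, assuming burst times are known in advance (the workload below is illustrative): at each decision point it runs the arrived process with the smallest burst.

```python
def sjf(processes):
    """Non-preemptive Shortest Job First.
    processes: list of (name, arrival, burst).
    Returns (name, start, finish) tuples in execution order."""
    remaining = sorted(processes, key=lambda p: p[1])
    time, schedule = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                  # CPU idle: jump to the next arrival
            time = remaining[0][1]
            continue
        job = min(ready, key=lambda p: p[2])   # smallest burst wins
        name, arrival, burst = job
        schedule.append((name, time, time + burst))
        time += burst
        remaining.remove(job)
    return schedule

# P3 (burst 1) jumps ahead of P2 (burst 4) once P1 finishes.
print(sjf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 1)]))
```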
SRTF
Preemptive
Shortest Remaining Time First — the preemptive variant of SJF. It always runs the process with the least time left, interrupting the running process when a shorter job arrives. In theory this minimises average waiting time, ignoring context-switch cost.
Best for: Mixed burst workloads
Weakness: High context switches
Overhead: Medium-High
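SRTF is naturally expressed tick-by-tick, much like the simulator's step mode. A toy version (illustrative workload; context-switch cost ignored):

```python
def srtf(processes):
    """Preemptive Shortest Remaining Time First, one tick at a time.
    processes: {name: (arrival, burst)}.
    Returns the Gantt sequence: one process name (or None if idle) per tick."""
    arrival = {n: a for n, (a, b) in processes.items()}
    remaining = {n: b for n, (a, b) in processes.items()}
    gantt, time = [], 0
    while any(remaining.values()):
        ready = [n for n in remaining if arrival[n] <= time and remaining[n] > 0]
        if ready:
            # Always run the process with the least work left — may preempt.
            n = min(ready, key=lambda x: remaining[x])
            remaining[n] -= 1
            gantt.append(n)
        else:
            gantt.append(None)   # idle tick
        time += 1
    return gantt

# P2 arrives at t=1 with less remaining work than P1 and preempts it.
print(srtf({"P1": (0, 5), "P2": (1, 2)}))
```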
Round Robin
Preemptive
Each process gets a fixed time quantum in rotation. Excellent fairness and response time. The quantum size is the key tuning parameter — too small causes overhead, too large approaches FCFS.
Best for: Interactive / time-sharing
Weakness: Quantum tuning required
Overhead: Medium
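The rotation is easy to sketch with a queue. This simplified version assumes all processes arrive at t=0 (names and bursts are illustrative):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst), all arriving at t=0 for simplicity.
    Each process runs for at most `quantum` ticks, then rotates to the back.
    Returns (name, ticks_run) slices in execution order."""
    queue = deque(processes)
    gantt = []
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)
        gantt.append((name, run))
        if burst > run:
            queue.append((name, burst - run))  # unfinished: back of the queue
    return gantt

# quantum=2 rotates fairly; a huge quantum would degenerate into FCFS.
print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
```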
Priority
Non-preemptive
Processes are assigned a priority number. Lower numbers = higher priority. Critical tasks always run first. Risk of indefinite starvation for low-priority processes without aging.
Best for: Deadline-sensitive tasks
Weakness: Starvation risk
Overhead: Low
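Aging is the standard fix for starvation: every waiting process slowly gains priority. A toy non-preemptive sketch (all processes present at the start; priorities and the aging step are illustrative):

```python
def priority_with_aging(processes, age_step=1):
    """Non-preemptive priority scheduling with aging.
    processes: list of (name, priority, burst); lower number = higher priority.
    Each round, every still-waiting process's priority value shrinks,
    so a low-priority job cannot starve indefinitely."""
    procs = [list(p) for p in processes]
    order = []
    while procs:
        procs.sort(key=lambda p: p[1])
        name, _, _ = procs.pop(0)           # run the highest-priority job
        order.append(name)
        for p in procs:                     # everyone still waiting ages
            p[1] = max(0, p[1] - age_step)
    return order

print(priority_with_aging([("A", 1, 5), ("B", 5, 3), ("C", 2, 4)]))
```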
MLFQ
Preemptive
Multi-Level Feedback Queue — dynamically adapts to process behaviour. CPU-bound processes are demoted to lower-priority queues; I/O-bound processes stay in the high-priority queues. Variants of this approach underpin the Windows and classic BSD/macOS schedulers.
Best for: Mixed / unknown workloads
Weakness: Complex configuration
Overhead: Medium-High
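The demotion rule is the heart of MLFQ. A toy three-level sketch — all processes arrive at t=0, and I/O boosts and periodic priority resets are omitted for brevity:

```python
def mlfq(processes, quanta=(2, 4, 8)):
    """Toy Multi-Level Feedback Queue with three levels and growing quanta.
    processes: list of (name, burst), all arriving at t=0.
    A process that uses its full quantum (a CPU-bound signal) is demoted
    one level. Returns (name, level, ticks_run) slices in execution order."""
    queues = [list(processes), [], []]
    gantt = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, burst = queues[level].pop(0)
        run = min(quanta[level], burst)
        gantt.append((name, level, run))
        if burst > run:   # used the whole quantum: demote
            queues[min(level + 1, 2)].append((name, burst - run))
    return gantt

# The CPU-bound P1 sinks through the levels; the short P2 finishes early.
print(mlfq([("P1", 10), ("P2", 3)]))
```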
Real-World Context Where this matters
Linux CFS (Completely Fair Scheduler)
Linux's CFS keeps runnable processes in a red-black tree ordered by virtual runtime; the process with the smallest virtual runtime always runs next, so every process gets a fair share of the CPU over time. NexOS's MLFQ gives you intuition for this kind of adaptive scheduling.
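The pick-smallest-vruntime idea fits in a few lines. In this sketch a heap stands in for the red-black tree, and the priority (nice-value) weighting of real CFS is omitted — each tick charges one unit of virtual runtime:

```python
import heapq

def cfs_pick(run_queue):
    """Simplified CFS step: pop the task with the smallest virtual runtime,
    charge it one tick, and push it back. Real CFS uses a red-black tree
    and weights vruntime by each task's nice value."""
    vruntime, name = heapq.heappop(run_queue)
    heapq.heappush(run_queue, (vruntime + 1, name))
    return name

rq = [(0, "A"), (0, "B"), (0, "C")]
heapq.heapify(rq)
print([cfs_pick(rq) for _ in range(6)])  # tasks interleave fairly
```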
Windows Thread Scheduler
Windows uses a 32-level priority system with dynamic boosts for foreground threads and I/O-completing threads — a real-world priority + feedback system. NexOS's Priority and MLFQ modes simulate this class of scheduling.
Real-Time Systems
Aircraft control systems, medical devices, and industrial controllers require hard deadlines — a missed deadline can be fatal. NexOS's deadline field and Priority scheduling approximate the concerns of EDF (Earliest Deadline First) schedulers.
Cloud & Hyperscale
AWS, Google Cloud, and Azure schedule billions of virtual machine tasks per day. They use variants of fair-share scheduling and weighted round-robin to ensure multi-tenant fairness — directly analogous to NexOS's Jain's Fairness Index metric.