Modelling Complex Systems

Interactive simulation for modelling cognitive load, psychological safety, and hidden defects in complex systems.

Properties

  • Valid Throughput: Rate of completed, non-defective work items per second
  • Flow Efficiency: Percentage of time work is actively progressing vs. waiting
  • WIP (Little's Law): Work in Progress; the total number of active items in the system
  • System Entropy: Measure of work distribution chaos; high entropy = uneven load
  • Coordination Penalty: Communication overhead; N(N-1)/2 team coordination pairs
  • Rework Cycles: Number of times defective work items looped back for rework
  • Latent Defects: Hidden errors that passed through due to low psychological safety
  • System State: Overall system health based on load, throughput, and rework

Controls: Kalman Filter (prediction layer toggle), Psychological Safety (low safety = hidden errors), Environmental Noise (interrupts & occasion noise), Work Volume (Load), Value Units (Teams), Intermediation (Gates).

Legend: True Load, Valid Work, Hidden Defect, Stalled, Process Gate.

Overview

The Complex System Visualiser is an agent-based simulation designed to model the propagation of entropy, error, and delay within complex organisational systems. It moves beyond simple "randomness" to visualise specific mechanisms from queuing theory, information theory, and organisational psychology.

It demonstrates how structural constraints (intermediation) and human factors (psychological safety, cognitive load) interact to create non-linear effects on system throughput and stability.

Scientific Basis & Mathematical Laws

The simulation engine is built upon several core scientific principles.

Kingman’s Formula (Queuing Theory)

  • Origin: John Kingman, 1961.
  • Concept: Wait time (Wq) does not increase linearly with utilisation (ρ); it increases exponentially as utilisation approaches 100%.
E[W_q] \approx \left( \frac{\rho}{1-\rho} \right) \tau \left( \frac{c_a^2 + c_s^2}{2} \right)

where τ is the mean service time and c_a, c_s are the coefficients of variation of arrival and service times.

In Simulation: Nodes have a Cognitive Load capacity. As a node's load exceeds 80%, its processing speed drops according to a cubic decay curve, (1 - ρ^3). This visually demonstrates why "busy" teams suddenly gridlock.
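
A minimal sketch of how such a slowdown could be implemented (the 80% threshold and cubic decay follow the description above; the function and constant names are illustrative, not the visualiser's actual source):

```typescript
// Effective processing speed of a node given its utilisation (0..1).
// Below the threshold the node runs at full speed; above it, speed
// falls off with the cubic decay (1 - rho^3), so a node at 95%
// utilisation is dramatically slower than one at 80%.
function effectiveSpeed(baseSpeed: number, utilisation: number): number {
  const THRESHOLD = 0.8;
  if (utilisation <= THRESHOLD) return baseSpeed;
  const rho = Math.min(utilisation, 0.999); // never quite a full standstill
  return baseSpeed * Math.max(1 - Math.pow(rho, 3), 0.01);
}

console.log(effectiveSpeed(2, 0.5));  // 2.00 items/sec (comfortable)
console.log(effectiveSpeed(2, 0.85)); // ~0.77 items/sec (noticeably slower)
console.log(effectiveSpeed(2, 0.99)); // ~0.06 items/sec (near gridlock)
```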

Little's Law

  • Origin: John Little, 1961.
  • Concept: The long-term average number of items in a stable system (L) is equal to the long-term average effective arrival rate (λ) multiplied by the average time an item spends in the system (W).
L = \lambda W

In Simulation: The WIP (Work In Progress) metric tracks L. Users can observe that increasing Frequency (λ) without increasing node speed results in an explosion of Lead Time (W), similar to a traffic jam forming when cars enter a highway faster than they exit.
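
A hypothetical helper showing the same relationship rearranged (not the simulator's own code):

```typescript
// Little's Law: L = lambda * W, so the average lead time is W = L / lambda.
function averageLeadTime(wip: number, arrivalRate: number): number {
  return wip / arrivalRate;
}

// 40 items in flight arriving at 2 items/sec implies a 20-second lead time.
console.log(averageLeadTime(40, 2)); // 20
// If lambda exceeds the system's service rate, WIP grows without bound,
// and by the same law lead time grows with it: the traffic-jam effect.
```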

Brooks’s Law

  • Origin: Fred Brooks, The Mythical Man-Month, 1975.
  • Concept: "Adding manpower to a late software project makes it later." This is due to the combinatorial explosion of communication channels, calculated as N(N-1)/2.

In Simulation: The Coordination Penalty increases automatically as you add Value Units (nodes). This adds a global "noise" factor to the system, simulating the friction of alignment in larger groups.
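
A short sketch of how the penalty might be computed (the overhead weighting per pair is an assumption, not the visualiser's actual coefficient):

```typescript
// Pairwise communication channels among N value units (Brooks's Law).
function coordinationPairs(n: number): number {
  return (n * (n - 1)) / 2;
}

// Illustrative global friction factor applied to the whole system.
function coordinationPenalty(n: number, overheadPerPair = 0.005): number {
  return coordinationPairs(n) * overheadPerPair;
}

console.log(coordinationPairs(8));  // 28 channels
console.log(coordinationPairs(16)); // 120 channels: doubling the teams more than quadruples the pairs
```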

Shannon Entropy (Information Theory)

  • Origin: Claude Shannon, 1948.
  • Concept: Entropy measures the level of uncertainty or disorder in a system.

In Simulation: The System Entropy metric analyses the spatial distribution of work items. Low entropy indicates structured, predictable pulses of work. High entropy indicates scattered, unpredictable jitter, a hallmark of unstable systems.
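
One plausible way to compute such a metric is to bin the work items along the flow axis and take the Shannon entropy of the resulting distribution (the bin count and track normalisation here are assumptions):

```typescript
// Shannon entropy (in bits) of how work items are spread across spatial bins.
// One tight pulse gives 0; items scattered evenly approach log2(bins).
function workEntropy(positions: number[], bins = 10, trackLength = 1): number {
  if (positions.length === 0) return 0;
  const counts = new Array(bins).fill(0);
  for (const x of positions) {
    const idx = Math.min(bins - 1, Math.floor((x / trackLength) * bins));
    counts[idx]++;
  }
  let h = 0;
  for (const c of counts) {
    if (c === 0) continue;
    const p = c / positions.length;
    h -= p * Math.log2(p);
  }
  return h;
}

console.log(workEntropy([0.10, 0.11, 0.12, 0.13]));       // 0: a single structured pulse
console.log(workEntropy([0.05, 0.15, 0.35, 0.65, 0.95])); // ~2.32: scattered, unpredictable jitter
```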

The Kalman Filter

  • Origin: Rudolf Kalman, 1960.
  • Concept: An algorithm that uses a series of measurements observed over time, containing statistical noise, to produce estimates of unknown variables.

In Simulation: When enabled, the Prediction Layer (represented by a cyan dashed ring) attempts to 'chase' and estimate the true Cognitive Load (the solid ring), filtering out stochastic jitter. It visually demonstrates the difficulty management faces in distinguishing "signal" (true capacity issues) from "noise" (random fluctuations).
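
A minimal one-dimensional Kalman filter in the spirit of that prediction layer (the process-noise and measurement-noise values are illustrative assumptions):

```typescript
// Scalar Kalman filter: estimate a node's true cognitive load from noisy
// observations. `q` is how much the true load is expected to drift per tick;
// `r` is how noisy each measurement is.
class LoadEstimator {
  private estimate = 0;  // current best guess of the true load
  private variance = 1;  // uncertainty of that guess

  constructor(private q = 0.01, private r = 0.25) {}

  update(measuredLoad: number): number {
    this.variance += this.q;                                // predict: uncertainty grows
    const gain = this.variance / (this.variance + this.r);  // how much to trust the measurement
    this.estimate += gain * (measuredLoad - this.estimate); // blend prediction and measurement
    this.variance *= 1 - gain;
    return this.estimate;
  }
}

// Noisy readings around a true load of ~0.6 settle toward 0.6.
const estimator = new LoadEstimator();
[0.70, 0.50, 0.65, 0.55, 0.62].forEach(m => console.log(estimator.update(m)));
```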

The Hidden Factory (Rework)

  • Origin: Armand Feigenbaum / Six Sigma.
  • Concept: A significant portion of capacity is often consumed by correcting defects that were not caught at the source.

In Simulation: When Psychological Safety is low, errors are hidden (purple dots). These have a 50% chance of being rejected at the end of the line and sent back to the start, consuming capacity without generating value.
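
A sketch of the rework mechanic (the 50% rejection chance comes from the description above; the names and types are illustrative):

```typescript
interface WorkItem {
  hiddenDefect: boolean; // error passed downstream under low safety
  reworkCycles: number;
}

// At the end of the line, a hidden defect has a 50% chance of being rejected
// and sent back to the start, consuming capacity without producing value.
function atEndOfLine(item: WorkItem, rng: () => number = Math.random): "shipped" | "rework" {
  if (item.hiddenDefect && rng() < 0.5) {
    item.reworkCycles += 1;
    return "rework";
  }
  return "shipped"; // either genuinely valid, or a latent defect that escaped
}
```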

Human Factors

These controls model the soft-skills/psychological dimension of the system.

Psychological Safety

Definition: The belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes (Amy Edmondson).

  • High Safety: Teams stop the line when an error occurs (red blockage). This hurts short-term flow but prevents technical debt.
  • Low Safety: Teams pass the error downstream to avoid blame. The dot turns purple (Hidden Defect) and continues moving, creating false flow metrics but eventual rework (see the sketch below).
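
A minimal sketch of that branching (hypothetical names; the visualiser's actual logic is not shown in this document):

```typescript
type ItemState = "flowing" | "blocked" | "hiddenDefect";

// How a node handles an error it has just produced, given the system-wide
// psychological safety setting (0..1).
function handleError(item: { state: ItemState }, psychologicalSafety: number): void {
  if (Math.random() < psychologicalSafety) {
    // High safety: stop the line (red blockage). Short-term flow suffers,
    // but the defect is fixed at the source and never travels downstream.
    item.state = "blocked";
  } else {
    // Low safety: pass it on to avoid blame. The dot keeps moving and keeps
    // flattering the flow metrics, but it is now a hidden defect.
    item.state = "hiddenDefect";
  }
}
```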

Cognitive Load

Definition: The total amount of mental effort being used in the working memory (John Sweller).

Mechanism: Represented by the coloured ring around a node. Work items add load; time decays it. High load triggers the Kingman effect (slowdown) and increases the probability of error generation.
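
A sketch of those load dynamics (the gain and decay constants are assumptions chosen for illustration):

```typescript
// Per-tick update of a node's cognitive load (clamped to 0..1): each arriving
// work item adds load, and load decays over time.
function updateLoad(load: number, arrivals: number, dtSeconds: number): number {
  const LOAD_PER_ITEM = 0.1;     // assumed load added by each arriving item
  const DECAY_PER_SECOND = 0.05; // assumed fraction of load shed per second
  const next = load + arrivals * LOAD_PER_ITEM - load * DECAY_PER_SECOND * dtSeconds;
  return Math.min(1, Math.max(0, next));
}

// Higher load raises the chance of generating an error on a given tick.
function errorProbability(load: number, baseRate = 0.01): number {
  return Math.min(1, baseRate + 0.2 * load * load);
}
```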

Environmental Noise (Occasion Noise)

Definition: Transient variability in judgement or performance caused by external factors (mood, weather, interruptions) (Daniel Kahneman).

Mechanism: Adds random "jitter" to particle movement speed and increases the base probability of gate failure, independent of structural design.
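
For example (an illustrative sketch, not the simulator's source):

```typescript
// Occasion noise: a noise level in 0..1 adds random jitter to particle speed
// and raises the base probability of a gate failing an item.
function jitteredSpeed(baseSpeed: number, noise: number): number {
  const jitter = (Math.random() * 2 - 1) * noise; // uniform in [-noise, +noise]
  return Math.max(0, baseSpeed * (1 + jitter));
}

function gateFailureChance(baseChance: number, noise: number): number {
  return Math.min(1, baseChance + 0.1 * noise); // noise shifts the failure floor upward
}
```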

System Design

These controls model the structural architecture of the organisation.

Value Units (Nodes)

Represents teams, departments, or servers. Increasing nodes increases capacity but incurs the Coordination Penalty (Brooks's Law).

Intermediation (Gates)

Represents approval steps, handovers, bureaucracy, or middleware.

  • Effect: Each gate is a potential failure point. Even with 99% reliability per gate, a chain of 5 gates has only ~95% end-to-end reliability (0.99^5 ≈ 0.951), as the short calculation after this list shows.
  • Gridlock: In high-load states, gates become bottlenecks that drastically reduce Flow Efficiency.
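
The compounding effect behind that figure is a one-line calculation:

```typescript
// End-to-end reliability of a chain of independent gates: per-gate
// reliabilities multiply, so even very reliable gates add up to real loss.
function chainReliability(perGateReliability: number, gates: number): number {
  return Math.pow(perGateReliability, gates);
}

console.log(chainReliability(0.99, 5));  // ~0.951: five 99% gates give ~95% end to end
console.log(chainReliability(0.99, 10)); // ~0.904: ten gates, and roughly 1 in 10 items hits a failure
```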

System Presets

Ideal Flow (The Tesla Model)

Config: 0 Gates, 100% Safety, Low Noise
  • Theory: Disintermediation. By removing approval gates and trusting the system, flow efficiency is maximised. Errors are rare and caught immediately.
  • Observation: Note the high speed and consistent rhythm of the blue dots, representing optimal flow state.

Bureaucracy (The Ford Model)

Config: 4 Gates, 90% Safety, High Frequency
  • Theory: High intermediation. The system is designed for control, not flow. While stable under low load, it gridlocks easily under high load due to the sheer number of stoppage points.
  • Observation: Note how dots queue up in front of each gate; Flow Efficiency falls as waiting, rather than active work, dominates lead time.

Toxic Crunch

Config: 1 Gate, 20% Safety, High Noise
  • Theory: The "Death March." Low safety forces teams to hide errors. Throughput looks high (dots are moving), but the system is actually churning out defects (purple dots) that will return as rework. The system state reads 'Churning' or 'Toxic'.
  • Observation: Observe the high volume of 'movement' (throughput) masking the accumulation of purple defects. This simulates 'vanity metrics' where teams look busy but are actually creating technical debt.

Stochastic Chaos

Config: 0 Gates, 40% Noise
  • Theory: High Entropy. Even without structural blockers (gates), the sheer amount of environmental noise prevents stable flow. Particles jitter and stall randomly, making prediction impossible.