Workforce Disruption Equilibrium Engine (2030)

The Workforce Disruption Equilibrium Engine is an interpretable system for reasoning about the future of jobs under AI-driven transformation.

It does not attempt to forecast job loss, rank jobs by survivability, or classify occupations as “safe” or “unsafe.” Instead, it models each role as a dynamic equilibrium, shaped by competing structural forces that continuously push, pull, and rebalance labor outcomes.

This project exists to answer a deeper question:

How does AI redistribute stability, instability, and leverage across the workforce, and where does pressure actually accumulate?


How to Read This README

This README is intentionally long.

It is structured as a guided walkthrough of the application itself, with each section anchored to a real screen from the Streamlit app. For every screen, we explain:

  1. What you are seeing
  2. Why this view exists
  3. How to interpret it correctly
  4. What kinds of real-world conclusions you should (and should not) draw

The goal is not speed; it is understanding.


Single Job View: Treating a Job as a System

(Screenshot: Single Job View in the Streamlit app)

The Single Job View is the conceptual foundation of the entire project.

Here, a job role is treated not as a static label, but as a system under pressure. Every role exists at the intersection of forces that either destabilize it or help it adapt.

This view answers a single, focused question:

Given the current structure of AI, skills, demand, and institutions, where does this job naturally rebalance?


Why This View Matters

Most workforce analyses stop at risk. Risk alone is misleading.

Two jobs can have identical automation risk but radically different futures:

  • One collapses quickly
  • The other transforms and stabilizes

The difference lies in force alignment, not probability.


Core Metrics

Equilibrium Center

The equilibrium center represents the role’s future stability anchor.

It is a relative coordinate:

  • Values below 1.0 indicate net downward pressure
  • Values above 1.0 indicate net stabilizing or leverage pressure

Importantly:

  • Not a salary
  • Not a probability
  • Not a score of “importance”

It answers only one thing:

Where does the role settle if all current pressures are allowed to balance?
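
To make the definition concrete, here is a minimal sketch of how an equilibrium center could be computed. The additive model, the force values, and the `sensitivity` weight are all illustrative assumptions, not the engine's actual implementation.

```python
# Illustrative sketch only: a simple additive model of the equilibrium center.
# All force values and the sensitivity weight below are hypothetical.
BASELINE = 1.0  # the role's current, unperturbed position

forces = {
    "automation_pressure":  -0.45,  # destabilizing forces are negative
    "adaptability":          0.30,  # stabilizing forces are positive
    "skill_transferability": 0.20,
    "economic_demand":       0.15,
    "ai_augmentation":       0.25,
}

def equilibrium_center(forces: dict[str, float], sensitivity: float = 0.5) -> float:
    """Relative coordinate: below 1.0 = net downward pressure, above 1.0 = net leverage."""
    net_force = sum(forces.values())
    return BASELINE + sensitivity * net_force

print(f"Equilibrium center: {equilibrium_center(forces):.3f}")  # 1.225: net stabilizing
```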


Equilibrium Shift (%)

The shift quantifies how far the role is being pulled away from its baseline.

Small shifts imply:

  • Incremental task change
  • Gradual adaptation

Large shifts imply:

  • Structural realignment
  • New skill boundaries
  • Possible role fragmentation

The direction matters as much as the magnitude:

  • Negative → erosion pressure
  • Positive → transformation or leverage
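
Expressed as a percentage, the shift is plain arithmetic over the equilibrium center. The sketch below carries over only the 1.0 baseline from the toy model above:

```python
def equilibrium_shift_pct(center: float, baseline: float = 1.0) -> float:
    """Signed distance from baseline: negative = erosion, positive = leverage."""
    return (center - baseline) / baseline * 100.0

print(f"{equilibrium_shift_pct(1.225):+.1f}%")  # +22.5% -> structural realignment
print(f"{equilibrium_shift_pct(0.97):+.1f}%")   # -3.0%  -> incremental erosion
```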

Resilience Band

The resilience band reflects uncertainty in the rebalancing process.

A narrow band means:

  • Forces agree
  • Transition paths are clearer
  • Outcomes are more predictable

A wide band means:

  • Forces conflict
  • Multiple futures are plausible
  • Small policy or investment changes can have outsized effects

Wide bands often signal where intervention matters most.
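
One simple way to express “forces agree vs. forces conflict” as a band is to widen the interval around the center by the dispersion of the force values. The standard-deviation formulation below is a hypothetical stand-in for the engine's actual uncertainty model:

```python
import statistics

def resilience_band(forces: dict[str, float], center: float) -> tuple[float, float]:
    """Band widens as the force values disagree (sample std dev as the spread)."""
    spread = statistics.stdev(forces.values())
    return (center - spread, center + spread)

# Agreeing forces -> narrow band; conflicting forces -> wide band.
print(resilience_band({"a": 0.20, "b": 0.25, "c": 0.30}, center=1.1))   # ~(1.05, 1.15)
print(resilience_band({"a": -0.60, "b": 0.50, "c": 0.40}, center=1.1))  # ~(0.49, 1.71)
```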


Transition Tension

Transition tension measures instability, not danger.

High tension means:

  • Strong forces
  • Conflicting directions
  • Stressful transitions

A role can have a “safe” equilibrium center and still carry extremely high tension.

This explains why some “safe” jobs feel unstable in practice.
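
One way to formalize “strong forces in conflicting directions” is total force magnitude minus the magnitude of the net force: tension is zero when everything pulls the same way and large when strong forces cancel each other out. This is an illustrative formulation, not the engine's published equation, but it reproduces the “safe center, high tension” case directly:

```python
def transition_tension(forces: dict[str, float]) -> float:
    """Total magnitude minus net magnitude: zero when all forces agree in sign,
    large when strong forces pull against each other."""
    total = sum(abs(f) for f in forces.values())
    net = abs(sum(forces.values()))
    return total - net

# Near-neutral equilibrium (net force ~0) yet maximal tension:
print(transition_tension({"automation_pressure": -0.8, "ai_augmentation": 0.8}))  # 1.6
```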


Force Decomposition: Making Pressure Visible

The force decomposition chart is the explanatory core of the system.

Each bar corresponds to one structural force:

  • Automation Pressure
  • Adaptability
  • Skill Transferability
  • Economic Demand
  • AI Augmentation

Positive bars stabilize the role. Negative bars destabilize it.

This chart answers the most important question:

Why does the equilibrium look the way it does?

There is no black box. Interpretation is first-class.
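
The chart is straightforward to reproduce. Below is a minimal matplotlib sketch of a signed force bar chart; the app itself renders an equivalent view inside Streamlit, and the values shown here are placeholders:

```python
import matplotlib.pyplot as plt

# Placeholder force values, one per structural force.
forces = {
    "Automation Pressure":  -0.45,
    "Adaptability":          0.30,
    "Skill Transferability": 0.20,
    "Economic Demand":       0.15,
    "AI Augmentation":       0.25,
}

colors = ["tab:green" if v >= 0 else "tab:red" for v in forces.values()]
fig, ax = plt.subplots()
ax.barh(list(forces), list(forces.values()), color=colors)
ax.axvline(0, color="black", linewidth=0.8)  # boundary: stabilizing vs. destabilizing
ax.set_xlabel("Force contribution (signed)")
ax.set_title("Force Decomposition")
plt.tight_layout()
plt.show()
```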


Scenario Simulator: Exploring Counterfactual Futures

(Screenshot: Scenario Simulator in the Streamlit app)

The Scenario Simulator exists because real decisions are not about prediction; they are about alternatives.

This view allows you to ask:

If we change the environment, how does the system rebalance?


What the Scenario Simulator Is (and Is Not)

It is:

  • A counterfactual reasoning tool
  • A system stress-testing interface
  • A way to explore second-order effects

It is not:

  • A forecast
  • A policy recommender
  • A deterministic outcome generator

Deep Dive into Scenario Controls

AI Adoption Speed

This control scales how rapidly AI penetrates workflows.

  • Faster adoption increases automation pressure
  • But also increases augmentation potential
  • Can reduce long-term instability if adaptability keeps pace

This slider reveals an important insight:

Speed alone is not the problem; mismatch is.


Regulation Strictness

Regulation introduces friction.

  • High regulation slows substitution
  • But can delay adaptation
  • Often reduces short-term disruption while increasing long-term tension

This captures why regulation is often a trade-off, not a solution.


Education Investment

Education investment is one of the most powerful stabilizers in the system.

It:

  • Raises adaptability
  • Increases skill transferability
  • Often lowers tension even when automation remains high

This explains why education policies tend to affect stability more than risk.


Corporate Automation Incentives

This slider models organizational behavior independent of policy.

It shows why:

  • Corporate incentives can overpower regulation
  • System outcomes depend on alignment, not intention

Scenario Outputs: How to Read Them

When a scenario is applied:

  • All forces are recalculated
  • Equilibrium shifts
  • Bands widen or narrow
  • Tension rises or falls

The most important signal is often tension change, not equilibrium movement.
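
In this framing, a scenario is just a set of multipliers applied to the forces before the metrics are recomputed. The couplings below (regulation damping substitution, education raising adaptability and transferability) follow the slider descriptions above, but the coefficients and functional form are assumptions, not the engine's actual model:

```python
def apply_scenario(forces: dict[str, float],
                   ai_adoption: float = 1.0,           # 1.0 = today's pace
                   regulation: float = 1.0,
                   education: float = 1.0,
                   corporate_incentives: float = 1.0) -> dict[str, float]:
    """Rescale the forces under scenario multipliers; metrics are recomputed downstream."""
    f = dict(forces)
    # Faster adoption and stronger incentives amplify substitution; regulation damps it.
    f["automation_pressure"] *= ai_adoption * corporate_incentives / regulation
    f["ai_augmentation"] *= ai_adoption        # adoption also raises augmentation
    f["adaptability"] *= education             # education lifts adaptability...
    f["skill_transferability"] *= education    # ...and transferability
    return f
```

Recomputing the equilibrium center, resilience band, and transition tension on the rescaled forces then shows how the system rebalances, with the change in tension as the headline signal.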


Workforce Tension Map: Seeing the Whole System

(Screenshot: Workforce Tension Map in the Streamlit app)

The Workforce Tension Map provides a macro view of the entire dataset.

Instead of individual roles, you see patterns.


Axes

X-Axis: Equilibrium Shift

This axis shows directional pressure:

  • Left → erosion and displacement
  • Right → resilience and leverage

Think of this as the direction in which pressure points.


Y-Axis: Transition Tension

This axis shows instability:

  • Low → smooth transitions
  • High → disruptive, stressful transitions

Think of this as how painful the change is.


Reading the Map Holistically

Key regions emerge naturally:

  • Low shift / low tension → stable roles, gradual evolution

  • High tension / moderate shift → roles facing difficult transitions with uncertain outcomes

  • Extreme shifts → structural winners and losers

Clusters matter more than individual dots.
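
The map reduces to a scatter of (shift, tension) pairs, one per role. The sketch below uses made-up coordinates purely to show the layout; none of the placements are outputs of the engine:

```python
import matplotlib.pyplot as plt

# Hypothetical (equilibrium shift %, transition tension) pairs -- illustration only.
roles = {
    "Data Entry Clerk":  (-18.0, 0.9),
    "Registered Nurse":  (3.0, 1.4),
    "Software Engineer": (12.0, 1.1),
    "History Teacher":   (2.0, 0.5),
}

fig, ax = plt.subplots()
for name, (shift, tension) in roles.items():
    ax.scatter(shift, tension)
    ax.annotate(name, (shift, tension), textcoords="offset points", xytext=(6, 4))
ax.axvline(0, linestyle="--", linewidth=0.8)  # erosion (left) vs. leverage (right)
ax.set_xlabel("Equilibrium Shift (%)")
ax.set_ylabel("Transition Tension")
ax.set_title("Workforce Tension Map")
plt.show()
```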


Role-by-Role Narrative Interpretation

The engine is designed to support narrative reasoning.

For example:

  • High automation + low transferability → fragmentation
  • High adaptability + high augmentation → role amplification
  • Neutral equilibrium + high tension → institutional bottleneck
  • Positive shift + rising tension → leverage with stress

Numbers are signals. Narratives are conclusions.
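
These archetypes can be read as simple rules over the metrics. The thresholds below are placeholders chosen for illustration; what matters is the mapping from metric patterns to narratives, not the specific cutoffs:

```python
def narrative(forces: dict[str, float], shift_pct: float, tension: float) -> str:
    """Map a metric pattern to a narrative archetype (thresholds are illustrative)."""
    if forces["automation_pressure"] < -0.4 and forces["skill_transferability"] < 0.1:
        return "fragmentation"
    if forces["adaptability"] > 0.4 and forces["ai_augmentation"] > 0.4:
        return "role amplification"
    if abs(shift_pct) < 2.0 and tension > 1.0:
        return "institutional bottleneck"
    if shift_pct > 0 and tension > 1.0:
        return "leverage with stress"
    return "gradual evolution"
```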


What This System Ultimately Reveals

Across all screens, one principle holds:

AI does not eliminate work; it redistributes stability, tension, and leverage.

Some roles become more powerful. Some become more fragile. Most are reshaped rather than removed.

This engine makes those invisible dynamics visible.


How This Should Be Used

This system is intended for:

  • Strategic workforce planning
  • Policy exploration
  • Education system design
  • Conceptual understanding of AI-driven labor dynamics

It should not be used for:

  • Individual career decisions
  • Automated hiring or firing
  • Deterministic forecasting
