Designing a Decision System to Protect Attention

This experiment treats job search as a decision system rather than an effort problem. Instead of reacting to volume, it introduces a structured filter that reduces noise, protects attention, and improves decision quality. The system narrows a large set of signals into a small number of high-confidence opportunities. The result is lower cognitive load, clearer positioning, and more deliberate choices.

Problem

Job search is not limited by opportunity.
It breaks at the level of signal, attention, and decision quality.

Scanning large volumes of roles creates noise, false positives, and fatigue. Activity increases, but clarity does not. The real challenge is deciding where not to invest time.

This experiment tests whether a simple system can reduce noise and improve decision quality before effort and emotion take over.

Framing the Experiment

I approached job search as a design problem.

Not:
Which job should I apply to?

But:
What system would consistently surface strong signals and filter out weak ones?

Hypothesis
Better filtering early leads to fewer decisions later and stronger applications.

[Wireframe, Experiment 01: signal-to-application funnel]

The System

The system is structured as a sequence of decision gates.

Two choices shaped the design:

  • Hard constraints (salary, location, contract type) were delayed

  • Early stages focused on signal and trajectory, not feasibility

This prevents premature rejection of potentially strong opportunities.
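
A minimal sketch of the gate idea in Python. Only the shape comes from the text; the function names and signatures are illustrative, not the actual tool.

    # A gate takes one signal and answers: does it pass to the next stage?
    # Hard constraints (salary, location, contract type) deliberately sit
    # at the end of the sequence, per the design choices above.

    def run_gates(signals, gates):
        """Apply each gate in order, narrowing the pool at every step."""
        pool = signals
        for gate in gates:
            pool = [s for s in pool if gate(s)]
        return pool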

Phase 0 — Signal Collection

Input: ~250 job signals
Sources: alerts, inbound messages, network

Action: capture without judgment.
No scoring. No filtering.

Goal: build awareness without commitment
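
In code, capture without judgment is an append-only record. A sketch; the Signal fields and source labels are assumptions, not the actual capture format.

    from dataclasses import dataclass

    @dataclass
    class Signal:
        """One raw job signal, recorded as-is. No score, no verdict."""
        title: str
        source: str          # assumed labels: "alert", "inbound", "network"
        url: str = ""
        notes: str = ""

    # Capture is append-only: record everything, judge nothing.
    inbox: list[Signal] = []
    inbox.append(Signal(title="Head of Delivery", source="alert"))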

Phase 1 — Directional Scoring

Each role is assessed across five dimensions:

  • Role trajectory fit

  • Company signal strength

  • Personal energy signal

  • Learning and leverage potential

  • Credibility of match

The scoring is directional. It supports comparison, not precision.

A simple tool was used to keep scoring consistent once the criteria were clear.
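
A sketch of what such a tool can look like, assuming a 1-5 scale and equal weights. Both are assumptions; the text fixes only the five dimensions.

    # The five directional dimensions listed above.
    DIMENSIONS = [
        "role_trajectory_fit",
        "company_signal_strength",
        "personal_energy",
        "learning_and_leverage",
        "credibility_of_match",
    ]

    def directional_score(scores: dict[str, int]) -> float:
        """Unweighted mean of 1-5 ratings: enough to compare, not to be precise."""
        return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)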

Phase 2 — Gatekeeper

At this stage, one question drives the decision:

Does this role involve meaningful execution, or does it drift into work I no longer want to do?

This removes roles that:

  • appear strategic but lack ownership

  • focus on innovation without execution

  • carry program titles but are driven by sales

This step relies on judgment, not numbers.

Phase 3 — Probability × Effort

The remaining roles are evaluated as a portfolio.

Each is considered through four factors:

  • likelihood of securing an interview

  • effort required to apply well

  • emotional cost of rejection

  • upside if successful

This shifts the decision from reactive to deliberate.
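
One plausible way to fold the four factors into a single portfolio ranking, sketched in Python. The ratio is an assumption; the text names the factors but gives no formula.

    def priority(p_interview: float, effort: float,
                 emotional_cost: float, upside: float) -> float:
        """Expected upside per unit of total cost (assumed formulation).

        p_interview:    likelihood of securing an interview, 0-1
        effort:         effort required to apply well
        emotional_cost: cost of rejection
        upside:         value if successful
        """
        return (p_interview * upside) / (effort + emotional_cost)

Ranking by this ratio favours high-upside, low-cost bets, which is the portfolio view described above.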

Phase 4 — Job Description as Test

Only a small number of roles reach full review.

Instead of scanning widely, each is read carefully.

This exposes constraints that are not visible earlier:

  • unclear ownership

  • organisational complexity

  • weak or undefined mandate

After this step, only a few roles remain.

[Wireframe, Experiment 01: probability × effort matrix]

Output

  • ~250 signals collected

  • ~14 shortlisted

  • ~6 reviewed in depth

  • 4 applications submitted

The difference is not volume.
It is clarity.

Cognitive load drops.
Decision confidence increases.

The system does not produce a perfect answer.
It improves the quality of the decisions.

Insight

The strongest signal is not the title.

It is where leverage sits:

  • not in abstract strategy

  • not in surface-level innovation

  • but in execution with real ownership

This clarity did not come from more thinking.
It came from building and running the system.

This work is shared early.

Not as a finished framework, but as something tested in practice.

The goal is not to remove uncertainty.

It is to reduce noise.

Good systems do not eliminate doubt. They make it manageable.

Next

How Ideas Find Structure