Data Ecosystem for Decision-Making

Shared system to reduce ambiguity in data-driven decisions and products.

CONTEXT

Product mindset as…

Bringing a product design mindset to data products.

Since I started working in data teams, one question has kept coming up again and again:

At what point does data visualization stop being a visual problem and become a product problem?

My background as a Product Designer made it clear early on that many of the challenges I was seeing in data products couldn’t be solved with better charts alone. They required a more structural approach. I began applying principles I had already used in digital products (clarity of purpose, systems thinking, scalability) and translated them into a data-driven context.

PROBLEM

What problems does this create?

The real problem is that design decisions become disconnected from the context and the type of decision each data product is meant to support.

When those differences aren’t made explicit, design becomes shallow:

  • Visual inconsistency across dashboards, reports, analytics tools, AI products, and emails

  • Recurrent debates about which chart works best, which colours to use, or how to structure layouts

  • Visual decisions driven by habit rather than intent

This was the starting point for rethinking how the visual data ecosystem at IAG should be designed.

APPROACH

The core design decision

In complex data products, mixing what should be shown with how it can be implemented usually leads to confusion, slower decisions, and diluted outcomes. 

That’s why I deliberately split the ecosystem into two complementary but distinct branches: Desirability and Feasibility.

  • Desirability focuses on intent and value. It answers questions about purpose, decision-making, and meaning: what problem are we solving, what decision needs to be supported, and what information actually matters.

  • Feasibility focuses on execution and scale. It translates those decisions into patterns, components, and rules that can be implemented consistently across tools and teams.

Separating both layers makes design decisions explicit, traceable, and easier to align across disciplines. It prevents technical constraints from defining the experience too early, while ensuring that strategic intent doesn’t remain abstract or unbuildable.

This structure reflects a data product mindset: designing not just visuals, but a system that connects business intent with reliable execution—at scale.

DESIRABILITY

Designing decision logic

Different domains require different decision models.

What changes in data-heavy products at IAG?

Unlike traditional digital products, data-heavy products at IAG are fragmented by domain.

Each domain is specialised and designed to support very different types of decisions.

  • Finance → control, comparison, deviation from targets

  • Operations → status, alerts, prioritisation

  • Commercial → performance, trends, opportunity detection

These aren’t just different datasets; they represent different decision models.

  1. Canonical Decision Rule Patterns

Reusable decision logic across the data ecosystem.

  2. Decision Logic in Context: OTP example

A single decision rarely relies on one rule.

Canonical Decision Rule Patterns define the building blocks of decision logic.

Real decisions emerge when these rules are combined and translated into design constraints, depending on context.

Below is an example of how multiple canonical rules work together to support a single operational decision.

LAYER / 01

Decision Intent

Focused on the user’s decision context

Who decides → Operations Manager

When they decide → Weekly review

What they need to decide → Where to intervene to improve OTP?

Why it matters → Delays, costs and reputation

LAYER / 02

Canonical Rule Mapping

This decision requires combining the following rule patterns:

  • Control indicators → Threshold: OTP vs SLA

  • Comparison → Comparison: vs. last quarter

  • See low/high performers → Ranking: worst OpCo / route

  • Detect anomalies → Exception: abnormal cancellations

LAYER / 03

Design Translation

Matching rules with design constraints (feasibility)

  • Threshold rule → *DC: Single KPI + SLA reference

  • Comparison rule → *DC: Period delta, not raw values

  • Ranking rule → *DC: Limited top/bottom, no full list

  • Exception rule → *DC: Highlight only when action is required

*DC: Design constraints. These constraints prevent overexposure of data and keep the focus on decision-making.
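The rule composition above can be sketched in code. This is a hypothetical illustration, not IAG's actual logic: the function names, OpCo labels, SLA value, and the 2% cancellation band are all invented for the example; only the four canonical rule types come from the text.

```python
# Sketch: canonical decision rules as small, reusable predicates.
# All names, values, and thresholds below are illustrative, not real IAG data.

def threshold_rule(otp: float, sla: float) -> bool:
    """Control indicator: is OTP below its SLA? (DC: single KPI + SLA reference)"""
    return otp < sla

def comparison_rule(current: float, previous: float) -> float:
    """DC: report the period delta, not raw values."""
    return current - previous

def ranking_rule(otp_by_opco: dict[str, float], bottom: int = 3) -> list[str]:
    """DC: limited bottom-N only, never the full list."""
    return sorted(otp_by_opco, key=otp_by_opco.get)[:bottom]

def exception_rule(cancellation_rate: float, normal_band: float = 0.02) -> bool:
    """DC: highlight only when action is required."""
    return cancellation_rate > normal_band

# Combined into one operational decision: where to intervene to improve OTP?
otp_by_opco = {"OpCo A": 0.91, "OpCo B": 0.78, "OpCo C": 0.84}  # hypothetical
decision_view = {
    "below_sla": threshold_rule(otp=0.84, sla=0.85),
    "delta_vs_last_quarter": comparison_rule(current=0.84, previous=0.88),
    "worst_performers": ranking_rule(otp_by_opco, bottom=2),
    "abnormal_cancellations": exception_rule(cancellation_rate=0.05),
}
```

Each rule maps one-to-one to a design constraint, so the same predicates can drive a KPI card, a ranking widget, or an exception highlight without re-deriving the logic per dashboard.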

FEASIBILITY

Making decisions implementable and scalable

Feasibility focuses on translating decision logic into scalable, implementable patterns across tools and teams.

After defining what good looks like through decision logic, the next challenge was ensuring those decisions could be built, reused, and scaled across IAG's data products.

  1. Same chart pattern, two operational views

Instead of designing dashboards, I designed chart patterns that could adapt to different operational questions.

The structure stays the same; what changes is the decision context.

  2. Primitives

To make patterns scalable, I standardised primitives such as titles, axes, tooltips and annotations.

This ensured that meaning stayed consistent even when layouts changed.
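Standardised primitives can be captured as a tool-agnostic spec that each implementation renders in its own way. The field names below are hypothetical; the point is that meaning (title, axis units, annotation intent) is fixed once at spec level while layout may vary per tool.

```python
from dataclasses import dataclass, field

# Sketch: chart primitives as a shared, tool-agnostic spec.
# Field names are illustrative, not a real IAG schema.

@dataclass(frozen=True)
class Axis:
    label: str
    unit: str  # e.g. "%", "min"

@dataclass(frozen=True)
class ChartSpec:
    title: str
    x: Axis
    y: Axis
    annotations: list[str] = field(default_factory=list)

# One spec, many renderers (Figma component, React chart, Power BI visual).
otp_spec = ChartSpec(
    title="OTP vs SLA",
    x=Axis(label="Week", unit="ISO week"),
    y=Axis(label="On-time performance", unit="%"),
    annotations=["SLA reference line"],
)
```

Because the spec is frozen, a renderer can change layout but cannot silently redefine what the chart means.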

  3. Tokens & visual primitives

Visual meaning was encoded at system level through a small set of semantic colour tokens.

This prevented teams from reinterpreting colour usage at implementation time.
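A minimal sketch of how such a semantic token layer works, assuming a two-level indirection (palette → semantic token). The token names and hex values are hypothetical, not IAG's actual palette.

```python
# Sketch: a minimal semantic colour token layer.
# Token names and hex values are illustrative only.

PALETTE = {  # raw values; chart code never references these directly
    "red.600": "#C62828",
    "amber.500": "#F9A825",
    "green.600": "#2E7D32",
    "grey.500": "#9E9E9E",
}

SEMANTIC_TOKENS = {  # the only layer chart code is allowed to use
    "status.critical": "red.600",
    "status.warning": "amber.500",
    "status.ok": "green.600",
    "status.neutral": "grey.500",
}

def resolve(token: str) -> str:
    """Charts ask for meaning ('status.critical'), not for a colour."""
    return PALETTE[SEMANTIC_TOKENS[token]]
```

Implementation teams can restyle the palette, but they cannot reinterpret what "critical" means, which is exactly the failure mode the tokens were introduced to prevent.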

  4. Implementation across tools for decision support

The same KPI pattern was implemented across Figma, React, Power BI, presentations, and emails; the main objective was adapting it to each tool’s constraints.

When full standardisation wasn’t possible, I prioritised consistency of meaning over visual uniformity.

Where is mishandled baggage risk concentrated, and which OpCos require operational prioritisation?

Stakeholders tended to interpret MBR (mishandled baggage rate) as a monthly score rather than an operational risk pattern.

Trend rule

MBR varies structurally across months and carriers. Temporal evolution prevents overreacting to isolated spikes.

Comparison rule

Cross-OpCo comparison reframes MBR from isolated performance to systemic exposure.

Threshold / Exception rule

Colour thresholds encode operational severity. Above 10% is not a variation — it is a signal requiring review.

Ranking rule

The airport table establishes priority thresholds (>10%). Ranking clarifies where intervention should occur first.
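The threshold and ranking logic above can be sketched together: flag anything over the 10% severity line, then order flagged entries worst first. The 10% threshold comes from the text; the airport codes and MBR values are hypothetical.

```python
# Sketch: MBR severity + priority ranking for the airport table.
# The 10% threshold is from the case study; the data below is invented.

MBR_THRESHOLD = 0.10  # above this, not a variation but a signal requiring review

def severity(mbr: float) -> str:
    """Threshold / exception rule: encode operational severity."""
    return "review" if mbr > MBR_THRESHOLD else "ok"

def prioritise(mbr_by_airport: dict[str, float]) -> list[tuple[str, float]]:
    """Ranking rule: airports above threshold, worst first (where to intervene)."""
    flagged = [(a, r) for a, r in mbr_by_airport.items() if r > MBR_THRESHOLD]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

rates = {"LHR": 0.07, "MAD": 0.12, "DUB": 0.15, "BCN": 0.09}  # hypothetical
priority_list = prioritise(rates)  # worst first; below-threshold airports omitted
```

Keeping severity and ranking as two separate rules mirrors the slide structure: colour encodes the threshold, the table order encodes the priority.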

The slide was structured around three cognitive layers:

  1. Narrative framing (Highlights) → strategic interpretation

  2. Temporal evolution (line chart) → structural behaviour

  3. Operational prioritisation (airport table + thresholds) → action focus

© 2026 Javier Mora
