Author: Prof. Dr. habil. Stefan Hunziker, Professor of Risk Management and Head of the Competence Centre Risk & Compliance Management at Lucerne School of Business.

His work focuses on decision-relevant risk management and the governance implications of how uncertainty is analyzed and communicated.

Risk matrices are widely used and, in many industries, formally required. Regulators mandate them, consultants promote them, and organizations implement them diligently. Yet one uncomfortable question remains: do they actually improve decisions?

In my experience, boards do not make strategic decisions based on a risk map. They decide on the basis of business cases, financial projections, competitive positioning, and strategic trade-offs, not by pointing to a colored box on a grid. This is not an argument against risk matrices as such. It is an argument for clarity about what they can and cannot do.

A risk map is not a risk analysis tool; it is a communication artifact. The problem begins when visualization substitutes for analysis. Strategic uncertainty does not behave like a binary event with a fixed probability and impact: markets fluctuate across ranges, competitors react unpredictably, cost structures evolve, and several uncertainties interact simultaneously. Yet these dynamics are routinely reduced to likelihood scores and impact categories. By assigning numbers and colors, uncertainty appears quantified even when core assumptions remain insufficiently challenged.

This shift has significant consequences. Instead of asking how robust an investment is under alternative scenarios, the discussion shifts to mapping risks. Instead of examining how two moderate risks might interact, attention is given to whether a box is orange or red. The visual structure shapes the conversation and narrows it.

Interestingly, the strongest advocates of risk matrices are often consultants and regulators. Decision makers rarely demand them for their own deliberations. They serve primarily as governance artifacts, not as tools executives actively rely on when deciding on strategic alternatives. Indeed, mandated tools are not automatically decision-relevant tools.

Even when proper risk analysis is done, the risk map itself influences judgment. A matrix cannot show interdependencies among risks or capture nonlinear effects. It cannot reflect how probability estimates are shaped by optimism, anchoring, or group dynamics. It translates uncertainty into an overly simplified signal that may support risk reporting, but it does not strengthen decision-making. If boards treat the presence of a risk map as evidence that risk has been properly analyzed, they mistake representation for substance.

Sound risk analysis requires more than ordinal scoring. It requires challenging assumptions, quantifying ranges where possible, exploring alternative scenarios, and making uncertainty explicit. Most importantly, it requires asking a simple question: does this risk assessment help us decide differently, or does it merely help us report risks?
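To make the contrast concrete, here is a minimal sketch of what "quantifying ranges" can look like compared with a single likelihood-times-impact score. All figures (the 30% occurrence probability, the cost range in MCHF) are illustrative assumptions, not from any real assessment: a small Monte Carlo simulation of one cost risk.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_cost_overrun(n=100_000):
    """Monte Carlo sketch: a cost risk as a range, not a single score.

    Illustrative assumptions (hypothetical, for demonstration only):
    - 30% chance the risk materializes at all
    - if it does, the overrun is triangularly distributed between
      0.5 and 4.0 MCHF, with a most likely value of 1.0 MCHF
    """
    outcomes = []
    for _ in range(n):
        if random.random() < 0.30:
            outcomes.append(random.triangular(0.5, 4.0, 1.0))
        else:
            outcomes.append(0.0)
    return outcomes

outcomes = simulate_cost_overrun()
mean = sum(outcomes) / len(outcomes)
p90 = sorted(outcomes)[int(0.9 * len(outcomes))]
print(f"expected overrun: {mean:.2f} MCHF")
print(f"90th percentile:  {p90:.2f} MCHF")
```

A risk matrix would collapse this into one cell ("medium likelihood, medium impact"). The simulation, by contrast, keeps the range, the tail, and the explicit assumptions visible, and each assumption can be challenged and revised in discussion.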

If risk management is to be genuinely decision-relevant, uncertainty must be embedded directly in strategic decision processes and documents, and risk appetite must be discussed in the context of concrete business decisions. Risk matrices become problematic the moment visualization is mistaken for analysis.