Risk Governance in a Disordered World Growing in Uncertainty

Peadar Duffy
Jan 29, 2021

Quantum Risks[1] and The Uncertainty Continuum

A paper inviting and encouraging a dialectic between risk governance thought leaders.

This paper describes the effects of 21st-century pace of change on our understanding of the nature of risk and presents the following insights:

1. The pace of disruption has rendered the estimation of likelihood no longer fit for purpose in anything other than the simplest cases[2],

2. A new category of risk, ‘Quantum Risk’, exists. While its definition illustrates its complexity, it also suggests how AI can be used to both identify such risks and parametrise their possible effects,

3. A new frame of reference exists (Value X Time) against which both classical (Probability/Likelihood X Impact/Consequence) and new methods (Plausibility X Possible Consequences) for measuring risk can be framed and aggregated,

4. The combined effect of these insights helps advance the development of new methods with which to manage risks within and across organisations and their complex adaptive systems, particularly ESG (Environmental, Social and Governance) risks which are themselves compound in nature.

Multiple taxonomies and ways of measuring ESG present significant risk governance challenges. Scraping ESG policies from public sources using AI-powered technologies to create supposed ESG metrics is flawed, as policies without practices have no teeth. The real measure of ESG is not what organisations say they do, or will do, but what can be verified with data-driven, evidence-based information.

‘Model-dependent’ verification is problematic, particularly where complex models are deployed. For example, the inappropriate use of Value at Risk (VaR) was foretold in 2007 as ‘the great intellectual fraud’ by Nassim Nicholas Taleb in his book The Black Swan: The Impact of the Highly Improbable.

More recently, in Radical Uncertainty, John Kay and Mervyn King identified what they call the Viniar fallacy, after the former CFO of Goldman Sachs who described the 2007 inception of the financial crisis as a ’25 standard deviation event’. The Viniar fallacy erroneously treats a probability derived from a model as a description of the world, without regard to the (often low) probability that the model accurately describes the world.
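
To see the scale of the fallacy, consider a back-of-envelope calculation (mine, not from the book): under a normal distribution, a 25 standard deviation daily move is so improbable that observing one indicts the model rather than describing the world.

```python
# Minimal sketch of the Viniar fallacy: taking the model's own numbers at
# face value. Under a normal distribution, the probability of a move of
# 25 standard deviations or more is vanishingly small.
import math

def upper_tail_prob(sigmas: float) -> float:
    """P(X >= mu + sigmas * sd) for a normal distribution."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

p = upper_tail_prob(25)
print(f"P(25-sigma daily move) ~ {p:.2e}")        # ~ 3.06e-138

# Expected waiting time if the model were true: one such day every 1/p days.
days_per_year = 252  # trading days, an illustrative assumption
years = 1 / (p * days_per_year)
print(f"Expected once every ~{years:.1e} years")  # ~ 1.3e135 years

# The universe is ~1.4e10 years old. Observing such an event therefore
# tells us the model is wrong, not that something astronomically rare occurred.
```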

It seems, from this and a thoughtful review of the ESG metrics currently available, that history might once again be about to repeat itself. Given the state of the planet and the importance which society attaches to the success of ESG, we might perhaps be mindful of Leonardo da Vinci’s profound insight that ‘Simplicity is the Ultimate Sophistication’.

What is required is a Dynamic Input-Output Decision Engine (DIODE) comprising two integrated parts (a minimal sketch follows the list below):

1. Crowdsourced decision-making checkpoints, back-checked against static data sets, and

2. A decision-support sandbox for front-line collaboration, challenge, learning and consensus-building.
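
As a sketch only, the two DIODE parts might be represented as follows; every name here (Checkpoint, Sandbox, back_check) is an illustrative assumption rather than a specification from this paper.

```python
# Hypothetical sketch of the two DIODE parts described above.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Checkpoint:
    """Part 1: a crowdsourced decision-making checkpoint."""
    question: str
    votes: list[float] = field(default_factory=list)  # e.g. 1-5 confidence scores

    def crowd_view(self) -> float:
        return mean(self.votes) if self.votes else 0.0

def back_check(checkpoint: Checkpoint, static_benchmark: float,
               tolerance: float = 1.0) -> bool:
    """Part 1 continued: compare the crowd's view against a static data set."""
    return abs(checkpoint.crowd_view() - static_benchmark) <= tolerance

@dataclass
class Sandbox:
    """Part 2: a decision-support sandbox for front-line challenge and consensus."""
    checkpoints: list[Checkpoint] = field(default_factory=list)

    def consensus(self) -> dict[str, float]:
        return {c.question: c.crowd_view() for c in self.checkpoints}

# Usage: the crowd scores a question, back-checked against historic data.
cp = Checkpoint("Is supplier X's continuity risk rising?", votes=[4.0, 3.5, 4.5])
print(back_check(cp, static_benchmark=3.8))  # True: crowd view is consistent
```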

Principal outcomes include:

1. Agile decision making delivering competitive advantage as organisations navigate uncertainties over the longer term,

2. Transparent ESG decision-making practices reported and improved in real-time.

Dead Reckoning

There is a navigational term called “dead reckoning.” It is taken from the period before radar and GPS. Back then, navigators used the sun and stars to safely navigate from point A to point B, until point B came into sight.

It worked as follows: assuming you knew your ship and crew’s capabilities, knew where you were starting from and where you were going, knew your speed, and understood wind, currents and how to use the sun and the stars to set your bearings and chart a course, you could reckon where you were, in navigational terms. Back in those days, there was much uncertainty and a large margin for error when steering a course across both known and unknown seas. Even when you made expected landfall, it could take some time to find out exactly where you were and finally reach your chosen destination.
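
The underlying arithmetic is simple, and a minimal sketch (flat-sea assumption, illustrative numbers) shows both the method and why errors compound between fixes.

```python
# Minimal dead-reckoning position update, on the simplifying assumption of a
# flat sea: new position = old position + speed x time along a heading.
import math

def dead_reckon(x: float, y: float, heading_deg: float,
                speed_knots: float, hours: float) -> tuple[float, float]:
    """Advance a position (in nautical miles) along a compass heading."""
    theta = math.radians(heading_deg)     # compass: 0 = north, 90 = east
    distance = speed_knots * hours
    return (x + distance * math.sin(theta),  # east component
            y + distance * math.cos(theta))  # north component

# 8 knots on heading 045 for 6 hours: roughly 34 nm east and 34 nm north.
print(dead_reckon(0.0, 0.0, 45.0, 8.0, 6.0))
# Every unmeasured current or leeway compounds the error until the next
# celestial fix, which is the navigator's back-check against reality.
```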

This is what Board Governance looks like today. Instrumentation is poor.

1. Management data is mostly current[3], but it mainly relates to where we have been, not to where we are going. It is historic, a bit like buying last month’s newspaper today: valuable and interesting, but out of date, and so sub-optimal (sometimes useless) for seeing what’s around the corner and the bumps on the road ahead,

2. Strategic data, because it relates to the future and not the past, is fragmented and of poor quality. It is typically unstructured and laden with assumptions which are prone to bias and error. It is at best qualified and can be suspect, if not occasionally dangerous[4].

What Nassim Nicholas Taleb[5] tells us in his seminal works is that decision-makers are not only buying yesterday’s news but that the news they are getting is hugely erroneous. He writes of the ludic fallacy, much of which is embedded in contemporary risk management practice today, including the practices which were complicit in the lead-up to the Global Financial Crisis. One of these, Value at Risk, he described in 2007 as “the great intellectual fraud”.

Management data, which documents past achievement, can provide solace in the Boardroom. Conversely, strategic data, which is focused on the future, unstructured and laden with assumptions, can generate concern and doubt.

Given shortening corporate lifespans[6], many non-executive directors are concerned about the efficacy of management information and the provenance and context of strategic information. They know that they are getting old news on the one hand and qualified views of the future on the other. They know that, although they are the ones charged with fiduciary obligations, it is management, and not the Board, that has the most up-to-date news and the best insights.

The boardroom’s equivalent of the crow’s nest[7] comprises the management and strategic information, often many pages long (sometimes multiple hundreds in the case of Global 1,000s), delivered in the advance papers prior to each board meeting. In seeking clarity about what’s going on in their extended, highly networked, complex organisations, directors today tend to receive opaqueness presented as conciseness. This is not a criticism of management practices. It simply reflects the reality in organisations which are no longer vertically integrated but hugely networked, given the extent of outsourcing of core functions and processes. Organisations no longer have jurisdiction or direct control over all of the non-financial activities (i.e. the operations) that drive financial performance. This can be a root cause of reporting opaqueness and is compounded by the effects of the fast-paced, hyper-connected, multi-polar, uncertain world in which we live today.

Disorder

Globalisation and technology have flattened the world. Capital, people and jobs move to where opportunities can be most readily exploited. In a relentless drive towards cost reduction, optimisation and profit enhancement, corporations make decisions that can have far-reaching effects on communities and whole countries. As a result, structural barriers are being erected in response to fears which are broadcast in highly personalised ways through social media.

At the time of writing, one headline reads: “A hungry man is an angry man”. Hungry can be a metaphor for afraid, and there is a lot of fear out there, not just on the part of decision-takers, but also on the part of decision-makers.

“A proliferation of ‘unthinkable’ events over the previous two years has revealed a new fragility at the highest levels of corporate and public service leaderships. Their ability to spot, identify and handle unexpected, non-normative events is shown not just to be wanting but also perilously inadequate at critical moments. The overall picture is deeply disturbing”[8].

But these are just some of the clouds forming on the horizon!

· “We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate)[9].”

· “It is hard to identify what harm[10] a technology can cause until the technology actually causes that harm. In the past, we have had the luxury of putting a lot of funding and research into predicting that harm. We don’t have that luxury anymore — new technologies are emerging too quickly.” [11]

· The disruptive force of technology is killing off older companies earlier and at a much faster rate than decades ago, squeezing employees, investors and other stakeholders[12].

· NASA’s Global Climate Change as well as other sources report runaway weather conditions across the world, and

· COVID-19, and future pandemics.

What can we extrapolate from what we can see going on around us today? Perhaps:

· 20,000 years of progress in 100 years will increase the pace of disruption and upset a lot of apple-carts,

· Emerging and hitherto unknown risks will increase,

· Systems impacts will become more frequent and are likely to have multiplier effects on each other as they grow in number,

· The pace of disruption and advancement will make risk identification increasingly difficult. Risk events will occur faster than managers can identify them, let alone understand them. Given that we need to identify the existence of a risk before we can estimate the likelihood of its occurrence, traditional tools are likely to quickly become suspect, and in some cases no longer fit for purpose.

· Institutional investors will increase the use of more searching ESG criteria in their evaluation of corporate behaviours, sustainability and long term viability.

Fig 1: Organizational Response to Change. Source: The Behavioural Designers™

What does this mean to those charged with steering large complex behemoth organisations through supposedly known, and more frequently unknown waters?

What does it mean to those risk professionals charged with advising and supporting strategic decision making when traditional tools for managing risks are becoming suspect?

Is the intensification of 21st-century levels of disorder set to drive us back to “Leaving it to the Gods”?

Is there something in Taleb’s notion of achieving “antifragility” wherein he advocates a bimodal strategy of gaining from disorder by:

1. Focusing on decreasing downsides rather than increasing upsides, thus lowering exposure to negative Black Swans. He says that mitigating fragility is not an option; it is an absolute requirement, as very fast growth built on fragilities will one day break, irretrievably.

2. Balancing gross conservatism with a high-risk appetite. For example, exposing 10% of your risk capacity to extreme risk and the remaining 90% to “boring cash” caps maximum loss at 10%, with certainty, while leaving a massive upside open (see the arithmetic sketch below).
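
The arithmetic behind this barbell is straightforward; the sketch below uses illustrative figures of my own.

```python
# Back-of-envelope arithmetic for the bimodal ("barbell") allocation above:
# 90% in near-riskless assets caps the worst case, while 10% in convex,
# high-risk positions keeps the upside open.
def barbell_outcome(capital: float, risky_return: float,
                    safe_return: float = 0.0) -> float:
    safe, risky = 0.9 * capital, 0.1 * capital
    return safe * (1 + safe_return) + risky * (1 + risky_return)

capital = 1_000_000
print(barbell_outcome(capital, risky_return=-1.0))  # total wipe-out: 900,000
print(barbell_outcome(capital, risky_return=10.0))  # 10x payoff: 2,000,000
# Maximum loss is bounded at 10% of capital by construction; the upside is not.
```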

Uncertainty

The effects of bias, perspective(s) and quality of information on decision making. — (Peadar Duffy)

Setting the Scene: Context influences perception. At the strategic level in an organization, decision making is directed at either commercial gain (for purposeful profit) or societal good (not for profit), whereas at the operational level decision making is directed at the details of successful execution.

1. Scenario 1: A General on a hilltop considers reconnaissance reports from his/her scouts and decides to attack and secure a bridge in order to liberate a town on the far side of a river. At the level of the General, the gains associated with the attack outweigh the loss of life s/he can reasonably assume will be incurred. At the level of the foot soldier ordered to ‘fix bayonets’, who takes a peek over the trenches, the primary and understandable concern is loss of life or serious injury, not liberating the townspeople. The General, having weighed up the odds, is focused on liberating the town; the foot soldier on staying alive.

2. Scenario 2: A CEO decides on new product development in accordance with stakeholder interests. At the level of the CEO, the gains associated with investment outweigh the R&D (design, product trial and error etc.), marketing (product and market validation etc.), production (quality, environment etc.) and delivery (supply chain, continuity etc.) costs and uncertainties which need to be accepted. Those in finance, legal, operations, quality etc. are primarily concerned with performance against targets rather than gain-sharing amongst stakeholders. The CEO, having considered his/her options, is focused on delivering optimised results; operations on performing against an array of KPIs.

The Uncertainty Continuum: Decision making at the top of the continuum morphs into decision management in the middle and decision-taking at the bottom. Objectives are set, and uncertainties navigated towards gains of one kind or another. At the strategic level, whilst aware of uncertainties, the primary focus is gain. At the operational level, whilst aware of gain/value creation, the primary focus is performance/value preservation.

Between the General and the foot soldier, the CEO and the front line decision maker, there’s a lot going on. Different people with different roles, different perspectives, different languages (technical) and different ways of measuring things are engaged in different decision-making cascades:

1. From the top where strategic decisions are made, to

2. The middle where strategic decisions take on new shapes as they are refined and honed for tactical deployment, to

3. The operational front lines, where the effects of strategic decisions manifest as the operational sub-decisions required to achieve the mission.

Clearly, a lot needs to happen at the right time, and in the right way, if objectives are to be achieved in a way which consistently fulfils organizational purpose. Perspectives, and the availability of consistently reliable, quality information, are mission-critical as big decisions at the top are decomposed across an array of organizational sub-decisions.

Objective-centric information, then, is the oxygen of decision making. Once clear as to direction, decision-makers need to be supplied with relevant, consistently reliable, data-driven, evidence-based information with which they can make rational decisions.

Two fundamental questions arise. Under conditions of uncertainty:

· Is relevant, consistently reliable, data-driven, evidence-based actionable information readily available, in real-time, to decision-makers? and

· Is it accessible?

The answer to these conjoined questions is:

Yes, and no, sometimes.

An expanded answer to this question is as follows:

· Yes, mostly: in the case of measurable uncertainties,

· No, frequently: in the case of unmeasurable uncertainties,

· Sometimes: when/where, and if, you can fleetingly join risk data points across complex systems.

This answer is informed by experience in the use of Principal Methods for measuring risks, together with a deep understanding of the state of risk management globally as reported in various authoritative reports and surveys which tell us that:

· Probabilistic methods compute measurable uncertainties, for example, insurable risks (fire, property, product hazards) where probabilities are calculated on a range from 0 to 1. These are examples of known knowns.

However, insurable risks constitute less than 20% of a typical organisation's total risk universe and:

o Tend to be well understood and controlled, and

o Rarely, if ever, precipitate value collapse /destruction events.

· Likelihood methods estimate/guesstimate unmeasurable uncertainties, for example, uninsurable business risks where likelihoods are estimated semi-quantitatively[13] as high-medium-low, red-amber-green, or on five-point scales (a minimal scoring sketch follows this list). These are examples of unknown knowns. However, these risks tend to fall into two categories:

o Risks reported on risk registers and which are subject to senior management review when, and if, they make it to the top of the list of risks (e.g. top 10 risks etc.)

Note 1: Risks which are understood and controlled rarely precipitate disaster,

Note 2: An overlay of risk registers on insured risks, including their limitations and exclusions, typically reveals significant gaps which can sometimes weigh on balance sheets.

o Risks which elude the risk register as they are opaque to the risk function (2nd line of defence) when conducting periodic reviews with risk owners (1st line i.e. P&L owners and enablers). This is not surprising as both think differently, use different languages and measure things differently.

Note 1: risks which materialise as value-destructive risk events are frequently sources of shock and surprise to boards and senior management. But they are often less of a surprise to those closer to the risks on the ground, who knew of, or were concerned about, their existence. Hence unknown to people at the top, but known to people on the ground.

Note 2: red and amber risks on the ground can sometimes become greener the closer they get to the top. This can have as much to do with bias as anything wilful such as agency conflict etc.

Note 3: risks which reflect multiple interconnections and interdependencies across systems can be observed in one system but not in another, as they can be named/described and measured differently.

· Scenario development methods seek to sense what’s over the horizon, for example, Shell Scenarios seeking to understand business-model technology disruptions as well as those driven by weather, energy and other forces. These are things which can be seen to be coming, but with such high uncertainty that measurability is illusory and lacking in demonstrable credibility.

Scenario development is hard to do as it necessarily embraces a systems approach in the context of the VUCA world in which we live. Consequently, very few organisations are known to sustain structured scenario development programmes such as Shell’s. When conducted, they tend to be undertaken as one-off events, albeit done well with the support of external advisors. This is probably not surprising when one considers typical CRO commentaries (ref Fig 2 below) when asked about the state of Enterprise Risk Management (ERM) in organisations generally.
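
Picking up the five-point scales from the likelihood bullet above, here is a minimal, illustrative scoring sketch; the scale labels and red-amber-green thresholds are my assumptions, not a standard.

```python
# Minimal sketch of a semi-quantitative likelihood method: five-point
# likelihood and consequence scales combined into a red-amber-green rating.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def rate(likelihood: str, consequence: str) -> tuple[int, str]:
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]  # 1..25
    band = "green" if score <= 6 else "amber" if score <= 14 else "red"
    return score, band

print(rate("possible", "major"))  # (12, 'amber')
print(rate("likely", "severe"))   # (20, 'red')
# Note the fragility the text warns about: the output inherits whatever bias
# went into the two subjective inputs, and nothing flags low knowledge.
```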

Fig 2: Problems frequently encountered with Enterprise Risk Management (ERM) methods

Risk Measurement

For decades classical risk measurement has taken the form of a Probability/Likelihood Y-axis and an Impact/Consequence X-axis.

Fig 3: Classical Risk Measurement

This remains valid for measurable uncertainties which, obeying the laws of statistics, can be fitted to distributions yielding valuable insights.

This is becoming problematic, however, in the case of unmeasurable uncertainties, as the pace of disruption and the scale of complexity are increasing so rapidly that not enough is known about postulated risks. Without sufficient knowledge[14], the likelihood of unmeasurable uncertainties cannot be credibly estimated, and attempts to do so can be foolhardy.

This problem can be solved by introducing a new Knowledge axis Z (reference Figure 4 below) and the juxtaposition of Likelihood on axis Y with the Plausibility of given risk scenarios. As knowledge increases, plausibility can give way to estimation of likelihood, and the classical model is restored, as illustrated in Figure 3 above. The benefit of this approach is that management can immediately see the quality, or incompleteness, of information pertaining to a given emerging risk. On this basis, they are made fully aware of where they are making informed decisions and where they are making decisions in ignorance of data-driven facts (a hypothetical encoding follows Figure 4 below).

Fig 4: Classical Risk Measurement adjusted for 21st-century pace of disruption (technology, weather, trade tensions, extended fragile supply chains, pandemics …) where identifying risks before they occur is problematic, and levels of knowledge dangerously low.
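
As one hypothetical way to encode Figure 4, the sketch below attaches a knowledge level Z to each risk and reports a likelihood only when knowledge passes an assumed threshold; all names and the threshold are illustrative.

```python
# Hypothetical encoding of the third (knowledge) axis: each risk carries a
# knowledge level Z in [0, 1]; above a threshold the Y-axis is a likelihood
# estimate, below it only a plausibility judgement is reported.
from dataclasses import dataclass

KNOWLEDGE_THRESHOLD = 0.7  # assumed cut-off for credible likelihood estimation

@dataclass
class RiskAssessment:
    name: str
    knowledge: float          # Z-axis: 0 = no knowledge, 1 = established knowledge
    likelihood: float | None  # Y-axis when knowledge is sufficient
    plausibility: str | None  # Y-axis substitute otherwise, e.g. "conceivable"
    consequence: int          # X-axis: 1..5

    def y_axis(self) -> str:
        if self.knowledge >= KNOWLEDGE_THRESHOLD and self.likelihood is not None:
            return f"likelihood={self.likelihood:.2f}"
        return f"plausibility={self.plausibility} (knowledge={self.knowledge:.1f})"

# Management sees at a glance whether a decision rests on data or on conjecture.
print(RiskAssessment("warehouse fire", 0.9, 0.02, None, 3).y_axis())
print(RiskAssessment("novel AI misuse", 0.2, None, "plausible", 5).y_axis())
```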

However, adding the ‘level of knowledge/plausibility’ dimension only solves the problem in simple cases, for example, a single plausible risk event/scenario in just one entity/component/part of a system.

The limitations of this plausibility extension become clear when we consider how the risk universe and measurable and unmeasurable uncertainties relate to each other in the diagrams below.

Fig 5: Risk landscape of a simple single system.

What happens then in the case of multiple linked systems within one organizational ecosystem?

Fig 6: Managing the 21st-century organisation. Source: ResearchGate.

Clearly, life becomes considerably more complex across an organizational ecosystem as:

1. multiple plausible scenarios need to be considered in and across multiple systems[15], and

2. multiple systems interact with other interconnected and interdependent systems, and

3. systems interactions precipitate events and behaviours which were hitherto unimagined.

A problem therefore arises where hitherto unimagined risks are:

1. known/perceived by some people, but

2. not known, and not seen, by overarching responsible management teams.

Consider also how interconnected and interdependent systems precipitate innumerable cause-and-effect scenarios over the passage of time. This must cause risk universes to grow, perhaps exponentially. In such complex adaptive systems, many risks expire and/or cancel each other out. But many do not. Many will grow. The result is a virtual ‘Black Hole’ where we really have no idea what’s going on within the organizational ecosystem.
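
Some simple, illustrative arithmetic supports the ‘perhaps exponentially’ intuition: pairwise links between components grow quadratically, and cause-and-effect chains grow far faster.

```python
# Counting only pairwise interactions between n system components gives
# n(n-1)/2 channels; ordered cause-and-effect chains grow much faster still.
import math

for n in (5, 10, 50, 100):
    pairs = n * (n - 1) // 2
    chains = math.factorial(n) // math.factorial(n - 3)  # ordered 3-step chains
    print(f"{n:>3} components: {pairs:>5} pairwise links, {chains:>10,} 3-step chains")
# 100 components already yield 4,950 links and 970,200 three-step chains;
# no periodic risk-register review can enumerate that space by hand.
```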

We can hope that a kind of homeostasis will allow the organization to prevail. But, as with organ failure in the human body, any system failure in a behemoth, complex, distributed organisation can result in chronic failure. A big and basic question needs to be asked: Are behemoth complex distributed organisations inherently unstable, but nevertheless sustainable/viable over the longer term?

The simple and sound answer would seem to be: probably not, but we just don’t know. We don’t know exactly how body temperature is maintained constant at 37 degrees Celsius (98.6 degrees Fahrenheit); we just know that it is. So why not a complex adaptive system as well?

Is hope then a risk we must run? Do stakeholders need, and want, some concrete assurance that we can do better than simply leaving it to the Gods?

From a governance perspective, we must ask whether classical risk measurement methods are capable of comprehending and managing organisations as complex adaptive systems, and whether they are fit for 21st-century purposes.

Applying the ‘don’t tell me, don’t show me, just prove it to me’ rule, we are unable to provide any evidence that classical methods work in anything other than simple systems. Global risk surveys report sub-optimal performance in even the most ‘risk-mature’ organisations. Combined with repeated global CEO surveys, and PwC’s 23rd Annual Global CEO Survey in particular, it seems certain that we are facing uncertain times ahead with inadequate risk instrumentation. Just as navigating the high seas without GPS would be inadvisable, so too is navigating uncertainty without competent risk systems.

Hypothesis

Hypothesis: A new category of risk, Quantum Risks, and a new frame of reference, the Uncertainty Continuum, provide a better means of comprehending 21st-century uncertainties.

Quantum[16] Risks Definition: Risks that exist in different states simultaneously across organizational systems even though they cannot always be observed together.

1. Note 1 to entry: Risk[17] is defined as the effect of uncertainty on objectives,

2. Note 2 to entry: Quantum risks emerge as a consequence of interaction(s) within, and between, complex systems,

3. Note 3 to entry: Quantum Risk sources are variables whose future values are not known with certainty because of a lack of understanding/knowledge/information and/or because they are the result of, or are influenced by, human, economic, social, political (HESP) factors,

4. Note 4 to entry: Quantum risks are difficult to consistently communicate because of different taxonomies and different ways of measuring risks within, and between, systems,

5. Note 5 to entry: Quantum risks are difficult to consistently identify because of different levels of risk maturity across systems,

6. Note 6 to entry: Quantum Risks are composed of strings of data (structured and unstructured), from root cause(s) to effect(s), changing form and direction in response to internal and external contexts/factors in, and across, systems,

7. Note 7 to entry: Quantum risks are compounded by dynamic volatility, change and Fast Clockspeed Risks[18] in particular,

8. Note 8 to entry: Quantum risks are compounded by the effect(s) of emergent human behaviours arising from system(s) interactions,

9. Note 9 to entry: Quantum risks are unmeasurable other than as to their:

i. Plausibility,

ii. Possible effect on value,

iii. Changing characteristics over time.
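
One hypothetical way to represent a Quantum Risk in data, following Notes 1, 6 and 9 above; the field names are illustrative assumptions, not a proposed standard.

```python
# Hypothetical data representation of a Quantum Risk per Note 9: no
# likelihood field at all, only plausibility, possible effect on value,
# and a record of its changing characteristics over time.
from dataclasses import dataclass, field

@dataclass
class Observation:
    timestamp: str       # when the risk was observed in a given system
    system: str          # which system observed it (taxonomies may differ, Note 4)
    plausibility: str    # e.g. "conceivable", "plausible", "highly plausible"
    value_effect: float  # possible effect on value, in currency units

@dataclass
class QuantumRisk:
    name: str
    root_causes: list[str]  # Note 6: strings of data from cause(s) to effect(s)
    observations: list[Observation] = field(default_factory=list)

    def states(self) -> set[tuple[str, str]]:
        """The different states the risk exhibits across systems (Note 1 analogy)."""
        return {(o.system, o.plausibility) for o in self.observations}
```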

The Uncertainty Continuum Definition: A coherent and continuously evolving strategic frame of reference, Value over Time, connecting measurable and unmeasurable uncertainties across complex adaptive systems.

1. Note 1 to entry: Classical risk measurement methods seek to comprehend possible single system events over time,

2. Note 2 to entry: The Uncertainty Continuum seeks to comprehend possible multiple systems events over time,

3. Note 3 to entry: The Uncertainty Continuum accommodates probabilities, likelihoods and plausibilities,

4. Note 4 to entry: Time is a factor which directly influences all biological and non-biological systems,

5. Note 5 to entry: As disorder and uncertainty are navigated over time, decisions are made which precipitate changes to internal and external contexts vis-a-vis resource allocations, interconnections and interdependencies releasing imagined and unimagined events and behaviours,

6. Note 6 to entry: Value generation is a factor which influences all human endeavours, for example:

a. Not-for-profits for societal gain,

b. For-profits for gain distributions across stakeholders.

Rationale for Hypothesis:

1. Our 21st-century world is more connected and interdependent than our hitherto less connected 20th-century world,

2. 21st-century organisations are no longer principally vertically integrated, they are now mostly extended across ecosystems reflecting multiple partnership, supplier and other types of relationships,

3. The pace of disruption from pandemics/endemics, technology advancements, AI, runaway weather, global trade tensions etc. has the potential to injure the whole of the system which holds the world as we know it together,

4. Too many new and emerging risks are too hard to identify before they can do harm,

5. If you can’t first identify something, you can’t measure it,

6. Limitations of the classical methods of risk measurement, specifically the:

a. probability-likelihood Y-axis does not adequately accommodate new and emerging risks due to dangerously low levels of knowledge;

b. introduction of a new axis Z for level of knowledge, whilst effective for simple systems, is inadequate for complex systems,

7. The poor performance of enterprise risk management (ERM) methods as reported in multiple global surveys,

8. If ERM methods are failing simple systems, why should we think they can work for complex adaptive systems?

9. Notes to the definition of Quantum Risk, as seen above, illustrate their complexities. Whether or not you agree with the new proposed categorisation and its definition, systems are complicated, complex and beyond the capabilities of most organisations, even those that consider themselves ‘risk mature’,

10. Urgent need for a simple approach to a wickedly complex 21st-century problem.

Fig 7: The Uncertainty Continuum

Note: The diagram above is a simple representation of a new universal frame of reference to which classical frames of reference can be applied.

Practical Application

Superimposed on the X-axis above is a non-exhaustive representation of just some of the big and basic decision areas which are constantly under review as decision-makers adjust course in the face of events[19]. This axis captures the story of strategic decision-makers being forced to re-evaluate options as they navigate uncertainty over time.

Whereas Time is a constant variable for all, Value is different. Value measures vary according to the enterprise in question.

Value Preservation: Uncertainty is mostly measurable where value is being preserved. This is where many of the known knowns reside, hence for example they are:

· Insured as risks can be probabilistically determined using traditional methods etc.

· Quality assured using Six Sigma etc.

· Project managed using an agile project management methodology

· etc.

Performance Management: Uncertainty is less measurable where performance is being optimized across multiple variables. This is where the unknown knowns reside, hence for example:

· Variables are actively monitored, controlled by management and funded from operational budgets,

· Business impact assessments are conducted and continuity plans prepared,

· Crisis management as well as various contingency plans are put in place.

Value Creation: Uncertainty is high and immeasurable where Value is being created. This is where the unknown unknowns reside. Here certainty, like oxygen at high altitudes, becomes thin. Hence for example:

· M&As and divestitures are approved following consultation with shareholders,

· Strategies and major capital allocations are approved by boards,

· Strategic performance is monitored and options for changing course direction are prepared by management for approval by board.

Value creation strategies at this level are best understood through the etymology of the word strategy, from the Greek strategos: the Art of the General. Here options are weighed and decisions made which involve the orchestration of scarce resources. Nothing is certain. Data clearly can’t be pulled from the future, and historic data is not as useful as it would have been in a slower-paced 20th century.

What is required to inform decision making is targeted information collection and analysis (military intelligence) which, when painstakingly pieced together, is used to inform strategic choices as to operational deployments on the ground.

The corporate equivalent of military intelligence here is crowdsourced insight and wisdom, fact-checked using AI-powered technologies.

Technical Requirements:

1. Low-no cognitive load methods of crowdsourcing targeted information from front line decision-makers and business enablers,

2. Proxies for Value and Time (a minimal encoding sketch follows this list), for example:

a. Value: Five-point Y-Axis scale assessing the Ability to Demonstrate Value Generation,[20]

Unable to demonstrate, Partially able to demonstrate, Moderately able to demonstrate, Able to demonstrate, Fully able to demonstrate,

b. Time: Five-point X-axis scale assessing Effect of Uncertainty[21] on Sustainability[22] over Time[23] of X-axis imperatives[24]

Not Sustainable over Time, Moderately Sustainable over Time, Sustainable over Time, Very Sustainable over Time, Fully Sustainable over Time,

3. Safe Haven for the identification of disclosable risks,

Note: The recent 10-K disclosures by Google and Microsoft of the materiality of AI risks might provide a clue as to how a similar approach might be taken in the case of new and emerging risks. What is required beforehand is a new way of identifying (Quantum Risks) and framing (Uncertainty Continuum) such risks in a way which provides assurance to investors.

4. Adoption of an Agile Risk Management approach to navigating disorder and uncertainties in pursuit of long term sustainable objectives and fulfilment of societal purpose.
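
A minimal, assumed encoding of the two five-point proxy scales in requirement 2 above; the numeric values are my own illustration.

```python
# Encoding the Value and Time proxies as five-point ordinal scales.
from enum import IntEnum

class ValueGeneration(IntEnum):          # Y-axis proxy
    UNABLE_TO_DEMONSTRATE = 1
    PARTIALLY_ABLE = 2
    MODERATELY_ABLE = 3
    ABLE = 4
    FULLY_ABLE = 5

class SustainabilityOverTime(IntEnum):   # X-axis proxy
    NOT_SUSTAINABLE = 1
    MODERATELY_SUSTAINABLE = 2
    SUSTAINABLE = 3
    VERY_SUSTAINABLE = 4
    FULLY_SUSTAINABLE = 5

def plot_point(value: ValueGeneration, time: SustainabilityOverTime) -> tuple[int, int]:
    """Place a strategic imperative on the Value x Time frame of reference."""
    return int(value), int(time)

print(plot_point(ValueGeneration.PARTIALLY_ABLE,
                 SustainabilityOverTime.VERY_SUSTAINABLE))  # (2, 4)
```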

Principal Methods for Measurable and Unmeasurable Uncertainties:

1. The laws of probability require events which are:

a. Identified,

b. Occurring in large numbers,

c. Spread,

d. Independent in their occurrences, and

e. Directly comparable.

2. Events which meet these requirements provide frequency data with which probability[25] can be determined as a value between 0 and 1, with 1 representing certainty.

3. Without frequency data, probability cannot be determined. The reason that hazard risks are insurable (fire, property, car etc.) is that frequency data is available. On this basis, people who wish to transfer risk can do so, as there are others (insurers) prepared to accept those same risks for a price (the premium paid) at which they believe they can make a profit.

4. Insurers, however, will not offer conventional insurance products for those risks for which there is insufficient reliable data. It is generally accepted that the majority of risks in an entity (i.e. the totality of that which can occur, sometimes referred to as the risk universe) greatly exceeds the minority which are traditionally insurable.

5. Financial risks on the other hand are managed in the main through controls which have over time become generally adequately proven and are subject to ongoing audit, internal audit, and organizational checks and balances.

In addition, certain high-level financial risks are treated using financial instruments (hedges, options, futures, derivatives etc.) which over time are becoming more sophisticated and reliable.

6. Strategic and operational risks, however, fit neither the instruments created for financial risk management nor the laws of probability underpinning hazard risk management. These risks, together with residual financial and hazard risks, require good management and a professional approach.
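
As a back-of-envelope illustration of points 1 to 4 above (figures and function names are my assumptions): with credible frequency data, probability and a break-even premium fall out of simple arithmetic; for a novel strategic risk, there is no exposure base to divide by.

```python
# Frequency data turns insurable hazard risk into simple arithmetic.
def annual_probability(events_observed: int, exposure_years: int) -> float:
    """Long-run relative frequency as an estimate of annual probability."""
    return events_observed / exposure_years

def pure_premium(prob: float, average_loss: float, loading: float = 0.3) -> float:
    """Expected loss plus an assumed loading for expenses and profit."""
    return prob * average_loss * (1 + loading)

# 120 warehouse fires observed across 60,000 building-years of exposure:
p = annual_probability(120, 60_000)           # 0.002
print(pure_premium(p, average_loss=500_000))  # 1,300.0 per year
# For a novel, one-off strategic risk there is no exposure base to divide by,
# which is why such risks fall outside the laws of probability (point 6).
```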

Endnote- [1] The term Quantum Risks is not used here in the physics/quantum-computing sense. The definition provided later, and some of its notes to entry, have similarities to the physical concept, hence the metaphorical use of the term.

Endnote- [2] Simplest cases in this paper are those classed as obvious and/or complicated in the Cynefin Framework

Endnote- [3] While mostly up to date, it is often siloed. In addition, many organisations have not yet fully digitized their data so, whilst available, it is not always easily accessible. In this regard, silos and legacy systems are real obstacles and will remain so over the medium term.

Endnote- [4]Two studies have pointed out the significant loss of shareholder value that resulted from the mismanagement of strategic risks. A study by Mercer Management Consulting analyzed the value collapse in the Fortune 1000 during the period 1993–1998. The analysis found that 10% of the Fortune 1000 lost 25% of shareholder value within a one-month period. Mercer traced the collapses back to their root causes and found that 58% of the losses were triggered by strategic risk, 31% by operational risk, 6% by financial risk, and hazard risk did not cause any of the decrease in shareholder value. (Enterprise Risk Management — Implementing New Solutions, 2001; 8.) A more recent study by Booz Allen Hamilton analyzed 1200 firms during the period of 1999 through 2003 with market capitalizations greater than $1 Billion. The poorest performers were identified as companies that trailed the lowest-performing index for that period, which was the S&P 500. The primary events triggering the loss of shareholder value were strategic and operational failures. Of the 360 worst performers in the study, 87% of value destruction suffered by these companies related to strategic and operational mismanagement. (Kocourek, 2004: 1.)

Endnote- [5] Author of Fooled by Randomness, Black Swan: The Impact of the Highly Improbable, and Antifragile: Things that gain from Disorder.

Endnote- [6] https://www.innosight.com/insight/creative-destruction/ …the 33-year average tenure of companies on the S&P 500 in 1964 narrowed to 24 years by 2016 and is forecast to shrink to just 12 years by 2027.

Endnote- [7] “Look out” atop the main mast on a ship

Endnote- [8] Thinking The Unthinkable (https://www.amazon.co.uk/Thinking-Unthinkable-imperative-leadership-digital/dp/1911382748)

Endnote-[9] World Economic Forum: Mitigating Risks in the Innovation Economy How Emerging Technologies Are Changing the Risk Landscape; Ray Kurzweil, Founder and Chief Executive Officer, Kurzweil Technologies, USA, in “The Law of Accelerating Returns”

Endnote-[10] We acknowledge the significant effort and resources invested by organisations, experts and standards setters in seeking to establish Responsible AI Principles and Practices, particularly those with demonstrable track records in

Endnote-[11] Andrew Maynard, Professor in the School for the Future of Innovation Society, Arizona State University

Endnote-[12] “The average age of a company listed on the S&P 500 has fallen from almost 60 years old in the 1950s to less than 20 years currently,” a team of Credit Suisse analysts led by Eugene Klerk wrote in a note to investors Thursday. https://tinyurl.com/ybyb6xms

Endnote- [13] Note the illusion of semi-quantitative methods of estimating risks in terms of occurrence of once in 100 years etc. Challenges which can quickly debunk such estimates/guesstimates are: 1. Who says so? Where is the science? Where is the underpinning evidence? and 2. When did the clock start?

Endnote- [14] Knowledge in this context means ‘established universal knowledge’.

Endnote- [15] Organizational, technological, information etc.

Endnote- [16] This proposed definition is influenced by:

1. Einstein contemplated how an object could be both at rest and moving depending on the position of the observer. It was his work on the photoelectric effect, and not his theory of special relativity, that earned him the Nobel Prize, although he did not attend the award ceremony. He later argued against the quantum theory his discovery helped found, a theory which has since been repeatedly confirmed by experimental data.

2. Danish physicist Niels Bohr who, in order to reconcile the ways that energy acts like both waves and particles, posited states that exist simultaneously even though they cannot be observed together. This train of thought ultimately shaped our understanding of quantum mechanics today.

3. German physicist Heisenberg’s Uncertainty Principle: the position and the velocity of an object cannot both be measured exactly at the same time, even in theory. Or, more colloquially: you can only know certain things precisely, but never everything.

Endnote- [17] ISO 31000:2018, Risk management

Endnote- [18] More recent and detailed content is available in The Risk Management Handbook: A Practical Guide to Managing the Multiple Dimensions of Risk, published by Kogan Page (2016), Chapter 15, pages 236–245.

Endnote- [19] “The impediment to action advances action. What stands in the way becomes the way.” Marcus Aurelius, popularized by Ryan Holiday in his book The Obstacle Is the Way: The Timeless Art of Turning Trials into Triumph.

Endnote- [20] Value Generation as the balance between value creation and value preservation

Endnote- [21] Note author definition: The effects of bias, perspective(s) and quality of information on decision making.

Endnote- [22] Sustainability impacts map to the classical risk impacts/consequences.

Endnote- [23] Man plans, and God laughs!

Endnote- [24] X-axis imperatives inserted here (Purpose, Data & Decisions, Strategies, Business Model, Capitals Deployed, Outcomes) is a non-exhaustive list of strategic imperatives. Associated with each imperative are other operational and tactical imperatives.

Endnote- [25] Probability: the extent to which an event (ISO Guide 73, 3.1.4) is likely to occur. NOTE 1: ISO 3534–1:1993, definition 1.1, gives the mathematical definition of probability as “a real number in the scale 0 to 1 attached to a random event. It can be related to a long-run relative frequency of occurrence or to a degree of belief that an event will occur. For a high degree of belief, the probability is near 1.”


Peadar Duffy

Peadar is a risk and governance expert with significant international experience. LinkedIn https://tinyurl.com/ycv7zayh