We recognise the continuous and deep connection to Country of Aboriginal and Torres Strait Islander peoples as the first peoples of this nation. In this way we respectfully acknowledge the Traditional Custodians of this land, sea, the waters and sky. We pay tribute to Elders past and present as we also respect the collective ancestry that has brought us all here today.
AUTHOR: STEPHEN HALPIN
What role will public servants play in an AI connected world? Nearly 700 million people use AI each week, and the Australian Government has just launched its own sovereign-hosted instance of GPT-4o.[1],[2] Within the APS, 70% of staff believe AI will transform service delivery and policy.[3] Yet that optimism must be tempered by the lessons of previous attempts at automation such as Robodebt, underscoring the need for strong ethical guardrails that maintain public trust.
Industry’s embrace of AI shows what’s possible. Medical researchers have shortened drug discovery timeframes, farmers use real-time monitoring to boost crop yields and conservation efforts, and health professionals see more patients by reducing administrative burdens with AI.[4] But disruption and unintended consequences have followed. In professional and financial services, clients may now receive a report in minutes that once took junior consultants weeks to prepare.[5] AI is also making inroads into the care economy, with therapy bots offering scripted counselling and social media platforms building AI “friends” to counter loneliness.[6]
Individual use of AI is the precursor to successful government deployment. Cultural readiness allows the promise of AI efficiency to be realised. We can see that people are using AI for a range of topics: health, education, and general learning. Its application promises efficiency, not for its own sake, but to free up public servants to spend more time on the complex thinking needed to address important social problems.
A note on method
For this article, we conducted an analysis of open-source AI model training data. We algorithmically clustered prompts into statements with a common theme. For example, “medical, doctor, symptoms, patient” and “earth, planet, sun, solar”. Clusters were randomly chosen and analysed against Bloom’s Taxonomy, a framework for classifying classroom learning objectives by cognitive complexity. Basic knowledge, or lower order thinking, is categorised as remembering, understanding and applying. Higher order thinking is categorised as analysing, evaluating and creating.[7]
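The Bloom’s-mapping step can be sketched in code. The snippet below is a simplified illustration, not our actual pipeline: the cue phrases and function names are invented for this example, and a real analysis would combine clustering with human review rather than rely on keyword matching alone.

```python
# Simplified sketch (not the actual analysis pipeline): classify prompts
# against Bloom's Taxonomy using illustrative verb cues, then compute the
# share of prompts that reflect lower-order thinking.

BLOOM_CUES = {
    "remembering":   ["what is", "define", "list", "who", "when"],
    "understanding": ["explain", "summarise", "describe", "interpret"],
    "applying":      ["how do i", "calculate", "solve"],
    "analysing":     ["compare", "contrast", "why does", "examine"],
    "evaluating":    ["evaluate", "assess", "which is better", "justify"],
    "creating":      ["design", "write a", "create", "propose"],
}

LOWER_ORDER = {"remembering", "understanding", "applying"}

def classify(prompt: str) -> str:
    """Return the first Bloom level whose cue phrase appears in the prompt."""
    text = prompt.lower()
    for level, cues in BLOOM_CUES.items():
        if any(cue in text for cue in cues):
            return level
    return "unclassified"

def share_lower_order(prompts) -> float:
    """Fraction of classifiable prompts that are lower-order thinking."""
    levels = [classify(p) for p in prompts]
    known = [lvl for lvl in levels if lvl != "unclassified"]
    if not known:
        return 0.0
    return sum(lvl in LOWER_ORDER for lvl in known) / len(known)
```

A prompt such as “What is a neural network?” would land in “remembering”, while “Design a new logo for my shop” would land in “creating”; the dominance of the first kind over the second is the pattern discussed in the findings below.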
Findings
Twenty-six percent of prompts relate to computer science and technical how-to in Python, React, JavaScript, and networking. Eight percent cover English and language, including essay writing and text interpretation. Math and science make up 12%. But the largest share—43%—involves general conversation with the AI: polite interactions, jokes, gift ideas, film recommendations, and personal questions.
Figure 1: How people are using AI

The taxonomy mapping shows most people use AI to “remember” or “understand” information, i.e. lower-order thinking. The analysis showed very little use of AI for higher order thinking.[8] For example:
- 82% of prompts in the “AI, intelligence, artificial” category asked for explanations or recalling of facts;
- 70% of “book, summary, plot” prompts sought timelines or thematic insights; and
- 96% of “medical, doctor, symptoms” prompts requested treatment details.
A word search spanning all 15,200 prompts for “government,” “department,” “agency,” and “public service” yielded just 55 results (0.36% of the sample). Eight of the 55 asked for agency or policy information and the rest related to political science or historical facts. This suggests people either don’t realise AI can deliver government services or didn’t need them in this sample.
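This word-search step amounts to a simple keyword count over the sample. The sketch below illustrates it with invented example prompts; the search terms are the ones listed above, and the helper names are our own for this illustration.

```python
# Minimal sketch of the word-search step: find prompts mentioning any
# government-related term and report the share of the sample.
# Search terms are from the article; the sample prompts are invented.

TERMS = ("government", "department", "agency", "public service")

def matching_prompts(prompts, terms=TERMS):
    """Return the prompts containing at least one search term (case-insensitive)."""
    return [p for p in prompts if any(t in p.lower() for t in terms)]

def share_of_sample(hits: int, total: int) -> float:
    """Percentage of the sample that matched, to two decimal places."""
    return round(100 * hits / total, 2)

sample = [
    "Which government agency handles passports?",
    "Explain photosynthesis to a ten year old",
    "What does the health department regulate?",
]
hits = len(matching_prompts(sample))
```

Applied to the full dataset, 55 matches out of 15,200 prompts gives the 0.36% share reported above.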
Implications for the public service
AI could shift government’s front door services from digital to artificial. The breadth of topics in the dataset shows people trust AI with medical, personal, and social issues. With the right model, AI could handle many administrative, in-person, and phone queries, limited only by computational power. Lengthy delays could disappear in frontline areas like the NDIS, Veterans’ Affairs, Centrelink and Health and Ageing.
Within agencies, AI could take over the lower order thinking tasks in policy and regulation. Recalling past decisions, preparing minutes and records, compiling statistics, or drafting background material for briefings could all be undertaken by a well-trained model. An AI integrated with enterprise data could also retrieve decades-old institutional knowledge in seconds. Retirements and staff movements would no longer erase critical history.
Government IT projects could also benefit. Many prompts within the dataset analysed involved Python, JavaScript, and machine learning questions. AI could streamline development by reducing code review, debugging, and training time. Smaller teams could build and test faster, delivering products on schedule.
What’s left for humans?
If AI assumes responsibility for gathering fundamental information and undertaking routine tasks, public servants can focus on creativity, higher-order thinking, evaluation, and analysis. In practice, this could mean uninterrupted time to review literature from academics, industries and other governments. Economic models and business cases can be informed with richer real-world data, with teams directing efforts towards closing data gaps rather than making assumptions.
Positioning AI as the first point of contact for government could also free up frontline areas to work on more complex cases that require judgement and empathy. Claims that traverse portfolio areas could be linked to a single customer service agent for resolution. Social policy experts could meet and collaborate in the same way medical teams do to deliver high-quality continuity of care.[9]
One of the most surprising findings was how often people socialised with AI. Gift suggestions for anniversaries, Mother’s Day, Valentine’s Day and first dates made up an entire category. Turning to AI for advice on presents suggests a shift in how people seek advice and connection. This is no different in the public service, which relies on the communication of ideas across sections and divisions. The public service often struggles with this and is criticised for siloed working. If AI takes over transactional tasks, engagement and collaboration must become the public service’s cornerstone, at least until Artificial General Intelligence (AGI) arrives and we can ask it to fix our deepest problems.
And whilst AGI may one day transform how we work and live, the immediate priority is integrating today’s AI responsibly into government services. Automating lower-order tasks that are repetitive and transactional will free public servants to work on higher-order challenges; the complex, cross-portfolio issues that require deep analysis, collaboration and creativity. For leaders in the public service, the short-term goal is clear: ethically shepherd adoption and invest in capability and AI deployment that elevates, rather than erodes, the human role in public service.
[1] https://www.cnbc.com/2025/08/04/openai-chatgpt-700-million-users.html
[2] https://www.itnews.com.au/news/gov-quietly-launches-onshore-instance-of-gpt-4o-for-aps-618944
[3] https://www.themandarin.com.au/296879-public-servants-positive-about-ai-use-in-service-delivery-mandarin-survey/
[4] https://www.industry.gov.au/publications/australias-artificial-intelligence-ecosystem-growth-and-opportunities
[5] https://www.thefp.com/p/the-consulting-crash-is-coming
[6] https://sfist.com/2025/05/01/mark-zuckerberg-gets-roasted-for-saying-the-average-american-has-fewer-than-three-friends-while-pushing-ai-chatbots/
[7] https://www.monash.edu/learning-teaching/TeachHQ/Teaching-practices/learning-outcomes/how-to/align-with-taxonomies
[8] Ibid.
[9] https://www.health.gov.au/our-work/strengthening-medicare-measures/encouraging-multidisciplinary-team-based-care
AUTHOR: AARON GREENMAN
Governments around the world including Australia have embraced responsible AI policies grounded in ethics, transparency, and public trust. But while caution is warranted, inaction also carries risk. As expectations grow and AI capabilities accelerate, slow adoption can erode government relevance, capability, and credibility. The goal for government shouldn’t be to eliminate AI risks entirely, but to manage them responsibly while keeping pace with the community’s needs. This includes reducing over-reliance on external technology vendors and building internal capability to steer and govern AI effectively.
The principles might be appropriate, but are they enough?
Over the past two years in particular, the Australian Government has made real progress in setting strong foundations for responsible AI. Agencies have published transparency statements, appointed accountable officers, and committed to ethical principles such as human oversight, explainability, and fairness. The Digital Transformation Agency’s AI policy and classification framework has provided helpful structure. Most federal agencies currently use AI in low-risk domains: automating internal workflows, summarising documents, analysing policy data, or powering staff-facing virtual assistants.
This measured and principled approach is commendable and necessary. However, principles without increased momentum may soon become a liability. The public sector faces an emerging challenge: how to operationalise AI principles at scale and speed, without compromising trust. The truth is, while governments are moving carefully, the world around them isn’t slowing down.
The emerging risk: what happens if we move too slowly?
Playing devil’s advocate, it’s worth confronting an uncomfortable truth: being too risk-averse with AI can create new forms of risk. Here’s how:
First, there’s the issue of service experience. Citizens increasingly expect the same level of responsiveness and personalisation from government that they get from the private sector. When accessing health, welfare, or tax services, people want accurate, timely, digitally enabled experiences. If government systems feel slow, disconnected, or hard to navigate, trust erodes, not just in the service, but in the institution itself.
Second, there’s the missed opportunity for productivity. AI has the potential to automate low-value tasks, freeing up public servants to focus on strategic, high-impact work. Without it, inefficiencies compound. Staff remain bogged down in administrative burden, innovation slows, and the public sector struggles to keep pace with demand.
Third, regulatory and policy expertise is at risk. As AI becomes embedded in every sector, from finance to defence, governments need operational literacy to regulate, audit, and respond effectively. Agencies that haven’t strategically implemented AI internally may find themselves ill-equipped to govern its use externally.
Fourth, there’s the talent challenge. Public servants want to do meaningful, future-focused work. If the public sector is perceived as behind the curve, it risks losing, or failing to attract, skilled technologists, analysts, and innovators to industry or overseas markets.
Finally, and perhaps most urgently, there is growing concern, particularly following recent developments overseas, about third-party entities wielding disproportionate influence over public sector AI systems. As governments outsource capability to commercial providers, they may inadvertently cede control (or visibility) over core decision-making processes. This includes dependency on proprietary models, limited transparency into algorithmic behaviour, and constrained ability to explain or audit outcomes.
This underscores the need for robust in-house capability. Agencies must be able to assess, adapt, and govern AI tools, not just procure them. Otherwise, governments risk becoming passive users of someone else’s technology, rather than active stewards of their own digital future.
How other governments are moving responsibly but assertively
While current Australian government AI use remains largely assistive, supporting rather than replacing human decision-making, there has been little adoption of agentic AI systems capable of autonomous planning and action. Yet opportunities abound in well-governed deployments: globally, leading governments demonstrate that responsible yet assertive AI adoption, including agentic AI, is achievable and beneficial.
For example, New Zealand has implemented agentic AI with its SmartStart platform, where AI proactively registers births, schedules healthcare appointments, and coordinates associated social services automatically.
In Singapore, agentic AI systems coordinate real-time traffic management, dynamically responding to changing conditions and improving congestion outcomes.
Estonia uses an agentic AI that proactively helps citizens navigate and complete complex administrative processes across multiple government services.
These international experiences provide valuable insights into safely and effectively deploying AI that doesn’t just assist human tasks but autonomously performs complex public-service functions, offering practical lessons for Australia’s own strategic AI adoption.
These examples show that strategic, experimental, and iterative AI adoption is possible. The key is to be strategic and pair innovation with accountability, starting with modest, measurable pilots, applying proportionate controls, and building internal literacy alongside technical tools.
Bridging the trust gap: practical moves for government
Governments don’t have to go all-in on AI overnight. But they must start moving faster and smarter. That means:
- Begin with internal, assistive use cases such as content summarisation, translation, policy drafting, or document classification, which provide immediate productivity gains and build internal confidence.
- Pilot AI tools in controlled environments including sandboxes, trials, and internal settings, with clearly defined metrics, transparent oversight, and thorough evaluation processes to identify opportunities and challenges early.
- Strategically plan for AI adoption by clearly identifying and prioritising use cases that offer tangible value. This includes defining specific roles, boundaries, and oversight mechanisms for agentic AI systems to avoid unchecked autonomy, particularly in sensitive areas like welfare or healthcare decision-making.
- Clearly delineate the scope and autonomy boundaries for agentic AI deployments, ensuring these systems augment rather than replace human judgment in critical processes. For example, agentic AI could proactively streamline administrative services or environmental monitoring while explicitly leaving final decisions and sensitive judgments in human hands.
- Develop multi-disciplinary teams comprising technologists, policy experts, legal advisors, ethicists, and domain specialists to provide comprehensive governance across AI deployments, ensuring ethical considerations and transparency remain central at every stage.
- Adopt tiered risk frameworks that tailor oversight and governance to the level of potential risk and impact, enabling responsible but agile implementation rather than uniform, overly cautious approaches.
- Enhance AI procurement literacy, empowering agencies to critically evaluate third-party solutions, insist on transparency, and embed public-interest protections directly into contracts.
- Invest proactively in workforce training, transitioning staff from foundational AI awareness to advanced model risk assessment capabilities, ensuring public servants are equipped with both policy literacy and technical fluency.
- Establish dedicated AI assurance functions to rigorously review, continuously test, and audit AI systems, particularly agentic tools, maintaining accountability and public trust throughout the AI lifecycle.
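A tiered risk framework of the kind recommended above can be made concrete as a simple lookup from risk tier to required controls. The tiers, examples, and controls below are hypothetical illustrations, not the DTA’s actual classification; real frameworks differ in detail.

```python
# Illustrative sketch of a tiered risk framework. The tiers, example use
# cases, and controls are invented for illustration; actual government
# frameworks (e.g. the DTA's classification) differ in detail.

RISK_TIERS = {
    "low":    {"examples": ["document summarisation", "translation"],
               "controls": ["staff review of outputs"]},
    "medium": {"examples": ["staff-facing assistants", "policy drafting"],
               "controls": ["human sign-off", "usage logging"]},
    "high":   {"examples": ["welfare decisions", "healthcare triage"],
               "controls": ["human makes final decision", "audit trail",
                            "external assurance review"]},
}

def required_controls(tier: str) -> list:
    """Look up the oversight controls mandated for a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier}")
    return RISK_TIERS[tier]["controls"]
```

The point of the structure is that oversight scales with impact: a summarisation tool needs only output review, while anything touching welfare or healthcare keeps a human in the final decision and adds audit and assurance layers.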
To support these shifts, certain prerequisites are essential:
- Clear governance arrangements: Agencies must define enterprise-wide structures and assign AI-specific accountabilities and responsibilities, including at the model and system levels.
- Data ethics integration: A consistent data ethics framework needs to be embedded across AI lifecycles, with reproducibility, auditability, and transparency integrated into model design.
- Policy and procedural alignment: AI development and use must align with existing organisational policies, supported by targeted procedures for AI-specific risks.
- Performance measurement: Agencies should establish clear mechanisms to evaluate AI’s effectiveness, impact, and compliance over time.
- Risk and assurance frameworks: AI-related risks, including misuse of data and unintended outcomes, must be assessed through enterprise risk management processes, with appropriate controls in place.
These recommendations, drawn from the Australian National Audit Office (ANAO) review of the ATO’s AI practices[1], reinforce that effective adoption requires more than tools: it demands culture, capability, and control.
Most importantly, agencies must ensure that any AI system they use, whether developed in-house or by a vendor, remains under their control, explainable, and accountable.
[1] Governance of Artificial Intelligence at the Australian Taxation Office | Australian National Audit Office (ANAO)
AUTHOR: SIMONE KNIGHT
World Quality Week – 11-15 November 2024
World Quality Week takes place every November. This global campaign raises awareness of the benefits of organisations taking a quality management approach to their operations.
This year’s theme is Quality: from compliance to performance. The timing couldn’t be better for those of us working with the Federal Government. We’re now in Quarter 2, 2024-25, so we’ve had sufficient time to settle into our Corporate Plans, we’ve finalised our Annual Reports and we are heading towards MYEFO. So how can our corporate documents lift and shift us towards high performance? Could a quality management approach be the solution?
Our Roles in Responding to Climate Change
One of our current challenges is how organisations are adapting to climate change. The Government’s push to net zero is also front of mind for many agencies. For those organisations certified or seeking certification under ISO 9001, there is a new requirement in the standard to consider the impacts of climate change as an internal or external factor. This amendment has been effective from 22 February 2024 and aims to integrate climate change considerations into management systems – not only quality management systems under ISO 9001, but also environmental management systems under ISO 14001 and work safety management systems under ISO 45001 (which are the three most widely recognised standards in the world).
Under ISO 9001 organisations are now required under clause 4.1 to assess the impact of climate change on operations to “understand the organisation and its context.” If climate change is identified as relevant to the quality system, it should be treated like other internal or external factors and included in the assessment of risks.
This imperative to address environmental concerns adds another layer of complexity to our operations and, of course, coincides with the Federal Government’s reporting requirements regarding climate change, including the Commonwealth Climate Disclosure requirement to publicly report exposure to climate risks and opportunities and actions to manage them. Under this policy, Commonwealth entities and Commonwealth companies must disclose climate-related information in their annual reports.
An additional amendment to ISO 9001 reinforces this through a new note to clause 4.2 that states “relevant interested parties can have requirements related to climate change.” While this note does not impose extra requirements, it clearly links the ISO 9001 standard to the requirements of interested parties, such as the Government, regarding climate change disclosures.
With that in mind, we should consider how quality management principles and methods allow organisations to respond, creating a culture of process improvement. Quality management can be associated with tools, processes, controls and governance. These elements are all-important, but an organisation’s culture is what will nurture innovation and continuous improvement to achieve high performance. Quality principles and methods not only ensure compliance but also foster learning and agility, to allow organisations to adapt to change more effectively and drive performance excellence.
Act Now
While some organisations will need to be more agile than others when it comes to adjusting for climate change impacts and fulfilling reporting requirements, there are some simple things you can do now to assess your organisation’s position:
Consider how climate change may affect your organisation
It’s important to consider both direct and indirect implications. Some direct implications include changes in weather patterns, rising water levels and government-imposed restrictions that may have an impact on the goods and services that are outputs of your organisation. You may need to adjust how you support human resources, change your physical infrastructure, and start capturing data about climate change related performance.
Indirect consequences may include the introduction of new technologies, shifts in the behaviours of the public and the potential for business disruption as a result of changing weather patterns. You may also experience changes in your supply chain and consumer behaviours.
Assess your risks and opportunities
Remember that not all impacts of climate change need to be negative. As we said above, a quality approach encourages innovation, and climate change may well present the catalyst for designing new practices that create efficiencies and take advantage of new technologies. It may also be an opportunity for a reputational boost for your organisation. Over three quarters of Australians[1] (78%) are worried about climate change and extreme weather events in Australia. Climate action by your organisation may have a positive impact on the way the public responds to your organisation.
Develop an action plan
You may need to update your policies and procedures (including to cover the requirements of clause 4.1), develop and implement training for personnel, incorporate climate change into the governance of decision-making, put in place practices to enable the capture of climate change data, and implement reporting procedures.
If your organisation is ISO certified, auditors may seek evidence that you have considered the issue of climate change. To provide this evidence, we recommend you demonstrate consideration of climate change in your organisation’s documentation, including by documenting the actions you are taking (your action plan); be transparent with internal and, where appropriate, external audiences about those action plans; and ensure the plans are maintained and kept up to date to reflect the current state of your activities.
Recognise that climate change will involve continuous improvement
Risk assessment and actions relating to climate change impacts will not be a one-off activity. You may need to develop strategies and build performance targets that reflect increasing maturity as your organisation learns to adapt to climate change.
Drive performance
Here are some top tips from the Chartered Quality Institute[2] to harness a quality management approach to drive performance:
- Embrace innovation: Don’t just meet compliance standards; leverage quality management to drive innovation and stay ahead of the curve.
- Manage risks proactively: Identify and mitigate risks associated with digital transformation and other strategic initiatives to ensure long-term success.
- Focus on sustainability: Integrate sustainability into your quality management practices creating products and services that are environmentally and socially responsible.
- Promote a learning culture: Encourage continuous learning and improvement within your organisation to adapt to changing market conditions and customer needs.
What’s Coming Next in the World of Quality Management?
Some emerging quality management issues which may well form part of the next amendment to the ISO standard (expected for 2026 implementation) include:
- A shift in focus from customer satisfaction to the broader concept of ‘customer experience’;
- Integrity and ethics in decision-making;
- Increasing customer expectations that organisations deliver sustainable innovation, with good operational governance, assurance and improvement supporting safe and confident delivery; and
- AI, digitalisation and automated decision-making, which present both opportunities for performance enhancement and risks for many organisations.
Why Choose Sententia?
At Sententia Consulting, we believe that quality management provides organisations with a framework that is valuable in supporting improvements in operations to achieve high performance.
We understand that not all organisations are pursuing ISO certification, and we take a pragmatic approach to support organisations to implement quality practices that are fit for purpose. We can help you stay compliant and future-ready with our expertise.
If you are interested in learning more about how Sententia can help you, please see www.sententiaconsulting.com.au or email us at sententia@sentcon.com.au.
[1] www.climatecouncil.org.au from a recent survey conducted in March 2024.
[2] https://www.quality.org/WQW24
AUTHOR: CONOR WYNN, PHD
The following article is republished from Sein with permission.
We like to think that big policy decisions are made thoughtfully, informed by data, with careful consideration of the facts and after a deliberate weighing of the options. Not so, it seems.
It turns out that the political elite use ‘rules of thumb’, also known as heuristics, to make the big decisions.[1] So who are these people, why do they decide that way, and is it OK that they do so?
The elite is a small group of individuals who occupy prominent positions in society, and who have preferential access to resources and an outsized impact on events. They tend to recognise one another, act and think alike. The political elite is a subset, comprised of politicians, senior bureaucrats, and advisers. Politicians in power face a 24/7 news cycle, deal with complex problems and divergent opinions. And though this is what they signed up for, it can lead to the political elite relying on heuristics to cope.
Which heuristics do the political elite use and why?
The literature shows that the political elite are prone to heuristics such as loss aversion (being more sensitive to losses than gains, as described by prospect theory), status quo bias, overconfidence and stereotyping, to name a few.[2] Overconfidence can lead to poor decision-making which, when combined with escalation of commitment, could help explain why, having made a poor decision, the elite are prone to doubling down on previous commitments.
There are seven factors that influence the political elite in using heuristics:
- Experience: The greater the experience, the more effective the use of heuristics.[3]
- Context: Greater experience in similar contexts appears to allow the more experienced to reach an acceptable outcome more quickly than those with less experience.[4]
- Complexity: More complex issues are associated with greater use of heuristics.[5]
- Urgency: The greater the urgency, the more likely the use of heuristics.[6]
- Self-interest: For example, gaining or remaining in office.
- Ideology: Political ideology such as conservatism or socialism can be used as a heuristic.
- Emotion: For example, the British prime minister Herbert Asquith’s decision to enter World War One was thought to have been influenced by anger and fear.[7]
The issue is though—is that OK?
Is it good enough, for example, that a decision to go to war is influenced by heuristics, or should we expect a more thoughtful approach before placing troops in harm’s way?
Is it OK to use heuristics for political decision-making?
There are two leading schools of thought around the use of heuristics: the heuristics and biases (H&B) school, led by the Nobel laureate Daniel Kahneman, and the fast and frugal (F&F) school, led by Gerd Gigerenzer.
The H&B school argues that there are two styles of decision-making—either quick or deliberate[8]—though there are a few caveats. These styles could be thought of as being at opposite ends of a spectrum, and by implication a blend of the two is likely, as is a sequencing of different styles, e.g. using heuristics to narrow down the range of choices, then a deliberate approach to choosing between them. But when it comes to political decision-making, Kahneman argues that heuristics should not be used,[9] whereas Gigerenzer allows for their use.[10] So, which school is right? Unfortunately, it’s not that straightforward.
The problem with political decision-making is that often the answer to a question is unknown in advance, because the problems are complex. The lack of a “correct” answer makes testing alternatives impossible. To make matters worse, decision-making by the political elite can be motivated by their desire to stay in office.[11] And so, it’s not possible to test motivated decision-making in an objective sense, because it is subjective—by definition.
Notwithstanding the impossibility of a binary solution to the debate about the use of heuristics in political decision-making, we wanted to learn more. So, we managed to secure rare access to 21 current and former senior state politicians, their advisers, and senior bureaucrats discussing an innovative but politically difficult transport pricing policy proposal.
Transport pricing reform
Road congestion in large cities is a significant issue in Australia. Charges for road use are levied upfront (e.g. vehicle registration tax) and do not reflect actual road usage. An alternative approach, transport network pricing (TNP), would be a user-pays system in which those who travel during peak times, for greater distances, or into highly congested zones pay more than those who don’t. And so, there was political risk.
The discussion forum
There were 21 participants, six of whom were current or former senior politicians (from both major political parties), seven current or former senior bureaucrats (i.e. heads of departments), and eight current or former political advisers. Participants were drawn from the two major parties, for balance. And as the participants were no longer in positions of executive power, we hoped to minimise the risk of presentation management overlaying the decision-making process. An extensive briefing document was provided to all participants in advance, including a detailed business case, economic modelling, pricing recommendations, traffic projections and demographic analysis.
What we found — whether to engage and how to engage
We found that the political elite used heuristics in two ways.
First, to decide whether to engage with an issue, using the “wait-and-see” heuristic. And second—having decided whether to engage—how to engage, using political empathy to guide their actions on TNP.

Whether to engage — the “wait and see” heuristic
The primary concern of the elite we observed was not judging the TNP on its merits in isolation, but whether they ought to engage with the TNP at all. In other words, they didn’t act as judges, considering the facts of each case brought before them. Instead, they behaved like investors, deciding which stock to invest in from a broad market.
So, the first question they asked was: are voters demanding action? They sensed that though there was dissatisfaction about traffic congestion, community sentiment hadn’t crystallised to the point where action was being demanded. The issue wasn’t making headlines, and so the answer to the first question was “no”: the issue wasn’t urgent and didn’t demand action. This led to the second question: if this issue wasn’t urgent, yet they pressed ahead with implementing the TNP regardless, would there be much resistance?
In a telling comment, a senior politician summed up the situation, based on his experience of trying to change car usage behaviour through pricing signals alone:
“Pricing hasn’t worked. [It] takes a lot of fiscal pain for someone to get out of a car.”
Politicians representing outer suburbs feared community pushback over increased costs, which cast a shadow of self-interest over their decision-making.
So, the calculation was "yes": there was a risk of strong resistance to the proposal from segments of the community. At this point they thought: the matter isn't urgent, and there's likely to be pushback. And so they asked the third key question: if we impose this policy against the wishes of the electorate…
“are we prepared to spend the political capital needed to overpower that resistance”
This strategy, characterised as the "political strong man" approach, had been used in New York, London, and Singapore, for example. A former adviser summarised the group's dilemma at this point in the decision tree:
“It’s [political] suicide analysis, how much damage are you going to do to yourself?”
A former senior bureaucrat summed up the three-step decision tree with a pointed question:
“Does any politician think that the big problem is congestion and that the answer is pricing?”
The matter was decided quickly, with a former senior politician answering the question:
“No. Not yet.”
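The three questions form a sequential screen. As a minimal sketch (the field names and the example inputs below are hypothetical, constructed only to mirror the forum's reasoning, not drawn from the transcript), the decision tree might look like:

```python
from dataclasses import dataclass

# Hypothetical encoding of the forum's three-step "wait-and-see" screen.
@dataclass
class Proposal:
    voters_demanding_action: bool   # Q1: has community sentiment crystallised?
    strong_resistance_likely: bool  # Q2: would segments of the community push back?
    will_spend_capital: bool        # Q3: are leaders prepared to spend political capital?

def wait_and_see(p: Proposal) -> str:
    """Apply the three questions in order; engage only if one of them clears."""
    if p.voters_demanding_action:
        return "engage"        # urgent: voters are demanding action
    if not p.strong_resistance_likely:
        return "engage"        # not urgent, but proceeding is low-cost
    if p.will_spend_capital:
        return "engage"        # willing to overpower the resistance
    return "wait-and-see"      # the decision to make no decision

# The TNP as the forum judged it: not urgent, resistance likely, no appetite.
tnp = Proposal(False, True, False)
print(wait_and_see(tnp))  # wait-and-see
```

The order of the questions matters: each "no" hands the proposal to the next, cheaper exit, which is why the TNP never reached a judgment on its merits.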
How to engage — political empathy
To identify the heuristics voters might use to judge the proposal, the political elite applied political empathy – putting themselves in voters' shoes.
To help them with political empathy they used:
- Stereotyping – the forum established "Cranbourne man" as the typical suburban voter, who is independent, easily upset and whose key concern is access to roads.
- Trust – the participants' view was that the greater the trust, the more likely voters would be to support an innovative policy. So, if the government could demonstrate a track record of successful delivery, voters would be more likely to trust them with large issues such as the TNP.
- Incrementalism – the forum thought that if the public became used to new pricing arrangements on electric vehicles, they would be more amenable to the TNP.
- The decoy effect – when an obviously less attractive option is included in a set of alternatives with the aim of influencing choice towards a recommended option.
And so, the complex issue of whether to engage with the TNP was decided. It was a decision not to engage, or a decision to make no decision. A previously identified[12] but not observed heuristic of "wait-and-see" was used to decide the fate of an innovative policy proposal.
So what?
While we identified seven factors that influence the use of heuristics by the political elite, we saw five in play at our forum – experience, expertise, context, issue complexity, and urgency. Though our forum members were highly experienced, they were not experts in transport pricing policy, and so context was key. The matter was highly complex, and as there was no pressing need to decide, urgency was low. The combination of those factors influenced decision-making style, causing them to use both styles of thinking rather than one or the other.
The politicians we observed made a decision about a decision, which could be considered deliberate decision-making based on elaborate thought, so supporting the F&F school. However, in arriving at that decision they did not consider the details of the extensive briefing materials; rather, they used a three-step process to reach an acceptable answer quickly – the hallmark of heuristic decision-making – preferring to "wait-and-see".
"Is it OK for heuristics to be used in political decision-making?" looks like an important question, but it turns out not to be a good question after all. This forum showed that both styles of decision-making can be appropriate, rather than one or the other. And while most studies of political decision-making focus on decisions that are made, few address political non-decision-making. For the first time to our knowledge, we found evidence of the use of heuristics to avoid a decision, and of the shaping of public policy by inaction.
How to avoid indecision or “irrational” decisions from the political elite
When dealing with the political elite, there's a real risk that either there won't be a decision, or there will be one that "doesn't make sense". So what can be done to avoid those poor outcomes?
- Try to re-engage the decision makers on the detail, forcing deliberate thinking. But there are problems with this approach. First, the likelihood of success is low – we know the political elite like to use heuristics[13]. And second, there are some situations where heuristics are preferable to elaborate thinking[14], so it's possible that, for instance, now is simply not the right time for the proposal.
- Encourage decision makers to become aware of their biases, possibly through leadership development programs. Once again there are problems with this approach. Telling someone important that their decision-making is biased and they should re-train could be career-limiting. And including de-bias training in general leadership development programs, so that when leaders do emerge into senior roles they rely less on heuristics, is a very long-term play. Worse still, there's evidence that such training programs are either useless[15] or can backfire[16].
- Use the decoy effect by adding an obviously inferior alternative to the one you prefer. While this might have the desired effect, it's ethically questionable. At best you could be accused of libertarian paternalism, at worst of manipulation.
- Take a portfolio view of all your policy proposals and put them to the "wait-and-see" heuristic test to spot which ones might struggle to get leadership engagement. This has legs. It recognises that the political elite use heuristics rather than pushing back against that fact, and so is a pragmatic choice. It provides feedback on which of your proposals are likely to be successful and which could end up in deep freeze. Armed with that information, you could re-allocate your resources to those proposals with greater chances of success, becoming more effective in the process. And in looking at those proposals that didn't pass the "wait-and-see" test, you might discover which conditions need to change, or be changed, for them to pass.
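The portfolio view can itself be sketched as a simple screen. The proposal names and yes/no judgments below are invented purely for illustration; in practice each answer would come from the kind of political-empathy judgment described earlier:

```python
# Hypothetical policy portfolio screened against the "wait-and-see" heuristic.
# Each tuple answers: (voters demanding action?, strong resistance likely?,
#                      leaders willing to spend political capital?)
portfolio = {
    "transport network pricing": (False, True, False),
    "road safety upgrades":      (True,  False, False),
    "EV road-user charge":       (False, False, False),
}

def likely_engagement(demand: bool, resistance: bool, capital: bool) -> bool:
    # Engagement is likely if the issue is urgent, resistance is low,
    # or leaders are prepared to spend the capital to overcome it.
    return demand or not resistance or capital

viable = [name for name, q in portfolio.items() if likely_engagement(*q)]
frozen = [name for name, q in portfolio.items() if not likely_engagement(*q)]
print("re-allocate resources to:", viable)
print("deep freeze, revisit conditions:", frozen)
```

The value of the exercise is less in the mechanics than in being forced to answer the three questions honestly for every proposal, before leadership answers them for you.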
In summary, there’s evidence that heuristics have their place in decision-making, and that a blend of deliberate thinking and heuristics is effective.
But the issue is not the theoretical one of whether important people ought to use heuristics; it's the pragmatic one of how to cope with the fact that they do. The "wait-and-see" heuristic is alive and well among the political elite, and understanding how to deal with it is key.
1. Stolwijk, S., & Vis, B. (2020). Politicians, the Representativeness Heuristic and Decision-Making Biases. Political Behavior. https://doi.org/10.1007/s11109-020-09594-6
2. Bordalo, P., Coffman, K., Gennaioli, N., & Shleifer, A. (2016). Stereotypes. Quarterly Journal of Economics, 131(4), 1753-1794. https://doi.org/10.1093/qje/qjw029
3. Hafner-Burton, E. M., Hughes, D. A., & Victor, D. G. (2013). The Cognitive Revolution and the Political Psychology of Elite Decision Making. Perspectives on Politics, 11(2), 368-386. https://doi.org/10.1017/s1537592713001084
4. Klein, G. (2008). Naturalistic decision making. Human Factors, 50(3), 456-460.
5. MacGillivray, B. H. (2014). Fast and frugal crisis management: An analysis of rule-based judgment and choice during water contamination events. Journal of Business Research, 67(8), 1717-1724. https://doi.org/10.1016/j.jbusres.2014.02.018
6. Chaiken, S., & Trope, Y. (1999). Dual-process theories in social psychology. Guilford Press.
7. Young, J. W. (2018). Emotions and the British Government's Decision for War in 1914. Diplomacy & Statecraft, 29(4), 543-564. https://doi.org/10.1080/09592296.2018.1528778
8. Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
9. Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: a failure to disagree. American Psychologist, 64(6), 515.
10. Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1), 20-29.
11. Bowler, S., Donovan, T., & Karp, J. A. (2006). Why politicians like electoral institutions: Self-interest, values, or ideology? The Journal of Politics, 68(2), 434-446.
12. Walgrave, S., & Dejaeghere, Y. (2017). Surviving information overload: How elite politicians select information. Governance, 30(2), 229-244.
13. Stolwijk, S., & Vis, B. (2020). Politicians, the Representativeness Heuristic and Decision-Making Biases. Political Behavior. https://doi.org/10.1007/s11109-020-09594-6
14. Gigerenzer, G., & Goldstein, D. G. (2011). The recognition heuristic: A decade of research. Judgment and Decision Making, 6(1), 100-121.
15. Noon, M. (2018). Pointless diversity training: Unconscious bias, new racism and agency. Work, Employment and Society, 32(1), 198-209.
16. Atewologun, D., Cornish, T., & Tresh, F. (2018). Unconscious bias training: An assessment of the evidence for effectiveness. Equality and Human Rights Commission Research Report Series.
This article is based on an article published in the Australian Journal of Public Administration, the peer reviewed journal of the Institute of Public Administration Australia.
AUTHOR: MARK HARRISON
In a recent Insights publication, the Australian Government Auditor-General reported that since 1 July 2021, only 31% of audit findings relating to risk management have been positive.
This forced us at Sententia Consulting to think about whether risk management in the Australian Government really is that bad.
We have concluded that the answer is yes… and no.
The fact is that the Australian Government (and government generally) is responsible for some of the most complex and risky ventures and activities in the country. Defending the nation, operating healthcare systems that must cater for every citizen, and delivering environmental outcomes in the face of massive environmental headwinds are all ventures that can just as easily be unsuccessful as successful – as well as being just plain difficult. Yet there are plenty of (often unheralded) successes by government across all of its responsibilities.
It's easy to look at some of the more challenging episodes in the Australian Public Service and attribute those to poor risk management – Robodebt, the "pink batts" scheme, any number of Defence materiel design and construction projects, and the lost ballot papers at the 2013 Federal Election, amongst others. Further, most agencies and public servants have experienced their own challenged procurements, failed programs, poor grant decisions, and policy implementations which in hindsight could have gone better.
While there is almost inevitably some truth to the comment that all of these are a result of poor risk management, that is simplistic and only part of the circumstances. (Note here, we are not seeking to misinterpret the Auditor-General’s comments, which were not that simplistic.)
Risk management represents just one part of good governance, or good project management, or good procurement management, or good program management, or good contract management, or frankly any model or framework for effective execution of aspects of public administration. Each of these has a framework with multiple components that all need to work together to create good outcomes. Typically, those frameworks involve having good people doing the right jobs, good planning, effective process design, strong stakeholder engagement, tight legislative compliance, and clear accountability mechanisms.
While risk management definitely is important in contributing to all of these components of effective management, it is not the only discipline that needs to be in place and operating to support good outcomes. Put another way, good risk management does not guarantee a good outcome, but poor risk management does expose agencies to poor outcomes, and reduces defensibility when those poor outcomes occur.
In my 20-something years of working with the Australian Government, I have seen plenty of examples of really good risk management, and I have seen just as many examples of poor (or non-existent) risk management.
That experience has taught me that the key ingredients of good risk management are:
- Deep experience and relevant expertise in what you are doing. Too often the Australian Government embarks on projects or processes without the right skills and experience to truly understand how to execute them effectively. Further, without that experience and expertise, it is impossible to really know what the risks are that need to be managed and how best to manage them.
- Strong situational awareness and good information. Risks emerge through projects and processes from a range of sources and vectors. If managers do not have effective monitoring of their operating environment and good data on the metrics that matter, they will likely not see risks emerging or unfavourable operating circumstances approaching. These are sure conditions for unmanaged risks to have a negative impact on your project or function.
- Discipline in following through on risk mitigations and controls. In our view, this is the key to effective risk management, and the most common gap. Risks typically require active management – the taking of steps or the creating of conditions that reduce risk. While managers may think about this while planning, it is not uncommon for the execution of those controls or mitigations to waver over time or as pressure increases. Risks that are not effectively controlled almost inevitably result in poor outcomes.
- Honesty in assessing risk and interpreting what it means. We have seen countless examples of agencies assessing risk at a level that is “perceived” as acceptable, or that reduces the effort required to develop risk management plans. While this may reduce effort at the early stages of a project or process, it increases the likelihood that risks become issues – and that’s where the effort really begins.
- A team that is on the same page about how risk should be considered and managed, including which risks should be taken and which should not. In the public service we operate in teams, and the secret to effective teamwork is a team aligned behind a purpose that is well-informed, well-coordinated and well-directed. This should apply to the approach and attitude to risk as much as to any other aspect of teamwork.
Note here that I have not mentioned risk registers once. I have not referenced the Commonwealth Risk Management Framework once.
Each of these is an important tool – a tool that supports good process and each of the ingredients I have referred to above. For all projects I lead or contribute to, I ensure I follow the Framework, and I maintain a focussed risk register.
But agencies miss the point with risk management when they focus all of their energy on the register – on having a register that is "complete" and a process that follows all of the steps in the manual, the policy or the Framework – and insufficient energy on the ingredients outlined above, and therefore on actually preventing or responding to risk.
To close this article, I am reminded of two quotes that are influential in my approach to risk management:
- The first is a quote from an enormously successful leader of a “top 10” Australian company, who said to me “we have been successful in our field, not because of risk registers and risk management reports, but because we have good people who know what we are trying to achieve and make good decisions to support that achievement”. What resonates for me from this quote is the importance of having the people with the right skills, experience, authority and information to support the management of risks and opportunities in any project, organisation, function or business.
- The second is a slightly modified famous quote: "culture eats strategy [and process] for breakfast". This classic quote from Peter Drucker (I apologise for my ad-libbed addition) reflects what I believe makes the difference in good risk management – everyone on the team understands the desired outcomes and what can impede them, is empowered to work together to achieve them, and naturally responds to risk accordingly. This does not suggest that risk strategy or risk management processes are unimportant. Rather, a powerful, informed and empowering culture around risk is the more influential factor in effective risk management.
AUTHOR: BRIONI BALE & KELLI PIERCE
In this 5-part short video series our subject matter expert Brioni Bale draws on her vast experience to share useful insights and tips for better responding to disruption events.
Click the link below to watch.
Disruption Response Series YouTube
AUTHOR: JO CARROLL
Taking Control of Risk Management in 2023
Focus on risk management has increased significantly over recent years as organisations have been forced to face back-to-back or even parallel crises. However, even with this increasing focus, many organisations are still finding themselves in predicaments that could have been avoided through effective risk management.
In this blog we will work through some recent high-profile risk events, looking at them through three key risk themes and drawing out the practical lessons we can learn.
Accountability and Ownership
The collapse of Silicon Valley Bank (SVB) in March 2023 presents an excellent case study in the importance of not just assigning accountability and ownership but operationalising these concepts to hold leaders to account. This was the third-largest banking failure in US history and the largest since the global financial crisis. After months of regulators raising concerns, SVB failed when a bank run was triggered by its announcement on 8 March that it would hold an emergency sale of stock to raise $2.25b.
SVB was the 16th-largest bank in the US, focussed on serving companies in the technology and start-up industry. Prior to its collapse, the Federal Reserve had identified that SVB was using modelling of interest rate risk that was 'not at all aligned with reality'. Its risk modelling didn't anticipate the combination of interest rate rises and liquidity shocks. This was flagged with bank management but not addressed.
In the year leading up to its collapse, the bank had gone eight months without a head of risk (Chief Risk Officer, or CRO), and there was a lack of risk expertise at board level, with only one of the seven board members on the risk committee having a risk management background. Regulators raised concerns for months, but the bank did not act.
While Australia's regulatory environment is different from that of the US, the broader ramifications for the banking sector are still to be seen. Could we be headed for a similar fate?
What does this mean for Risk Management?
- A Chief Risk Officer with influence can hold other executives to account. However, too often the role is undervalued and classified at too low a level to exert the necessary level of influence.
- Boards need members with deep and proven Risk Management experience.
- Risk Management should be built into Job Descriptions and performance measurement and reward systems.
- Create and use risk tolerances, models and settings that inform data-driven decision making.
- Assign responsibility for addressing regulators' concerns (this should go without saying).
Legal but not ethical
Rio Tinto’s May 2020 desecration of Juukan Gorge to make way for an expansion of its iron ore mine in the Western Pilbara highlights the importance of looking beyond legality to ensure decision making is ethical.
This site contained ancient rock shelters showing human occupancy dating back 46,000 years, making it the only inland site in Australia showing human occupation through the last Ice Age. Rio Tinto knew the archaeological value of the site before its destruction, but stood to make $135m from the site and so the decision was made to go ahead. At the time this was legal but not ethical (Aboriginal heritage laws have since been introduced in Western Australia), and it caused great distress to the traditional owners, the Puutu Kunti Kurrama and Pinikura people.
Following considerable public backlash, three top executives and two board members chose to stand aside, including the CEO and Chairman. Rio Tinto has since imposed a moratorium on all work within 10 sq km of Juukan Gorge and is making reparations to the traditional owners, including full reconstruction of the caves. Damages are expected to be much more than the $135m it expected to make from the mine.
What does this mean for risk management?
- Ethical and cultural decisions need independent advice. Risk management practices need to keep pace, as failure to meet community and social expectations presents an increasingly high reputational and financial risk.
- Diversity in decision making needs to be actively sought to ensure broad and varied perspectives are considered at the decision-making table.
- Strong Environmental, Social and Governance practices need to be implemented to align organisations with social expectations to create and sustain long-term value.
Improper Influence
This case study is particularly relevant for public servants. On 17 June 2022, Former Deputy Premier of NSW Mr John Barilaro was announced as the Senior Trade and Investment Commissioner to the Americas. A Parliamentary Inquiry Interim Report found that this decision had “all the trademarks of ‘jobs for the boys’”, finding a preferred candidate had been selected and offered the position only to have that process set aside for a change of government policy. Quoting the Inquiry:
‘The process of appointment was flawed and not at arm’s length, there was a lack of transparency and integrity in the public sector recruitment process’… ‘there was a pattern of Ministerial interference and lack of transparency conducted by the Government’
This was not only embarrassing for the Government; the Minister and CEO both lost their jobs as a result.
What does this mean for risk management?
- Good probity processes need to be defined and tailored to the decision being made and linked to the risk of the decision.
- We need to say ‘No’ when the risk is too great. There must be the ability to give frank and fearless advice.
- Set the tone from the top and lead by example.
- Decision-making processes should be transparent. Individual decision makers should always ask themselves whether they would be comfortable defending their decision publicly (for example, in a Parliamentary Inquiry!).
Each of these cases provides important lessons for all organisations. To avoid becoming another cautionary tale, take these lessons on board and prioritise risk management!
AUTHORS: LILI MILLAWITHANACHCHI, HIRUNDA KANAHARAARACHCHI, & KIRSTY MARTIN
Effective performance measurement and reporting has always been important, particularly in the public sector to ensure transparency and accountability around the spending of public monies. In recent years the Australian National Audit Office (ANAO) has brought even greater emphasis to the topic, through their pilot program of Performance Statements Audits intended to provide a similar level of assurance over non-financial performance information as is currently provided for financial performance information.
With the pilot program expanding, we have compiled our key tips to support agencies to improve their performance reporting. The following suggestions are consistent with Department of Finance guidance on performance reporting and pick up on key lessons from the ANAO pilot so far, as well as our learnings from working with a range of agencies in strengthening performance measurement.
How does your overall suite of measures look?
Purpose and value
Annual performance statements are intended to reflect the performance of the government as a whole in achieving stated objectives and outcomes. At a more granular level, performance measurement is another tool to help the management of performance against outcomes. Performance reporting should not be seen as a compliance exercise or a means of defending the performance outcomes of an individual team – it can achieve so much more! Investment in the process should reflect these benefits, but it does not have to be complex and over-engineered.
Authoritative source
Meaningful measures have a clear line of sight to the key purpose and activities of the agency. Understanding the 'logic flow' from Administrative Arrangements Orders through to legislation or policy, and on to agency purposes and key activities, gives context to the performance measures. Skipping this step makes it hard to see the relevance of a measure, and to prioritise and select measures.
Balanced suite
Agency performance is multi-dimensional and needs different types of measures to demonstrate different aspects of outcomes delivered. Agencies should step back and consider the relative balance of different types of measures (such as output, efficiency, effectiveness, qualitative, and quantitative) across the suite of performance measures. Are the performance measures mostly focused on output at the expense of reporting on the agency’s efficiency or effectiveness?
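As a quick sanity check on that balance, an agency can simply tally its suite by measure type. The measure names and tags below are hypothetical, intended only to show the shape of such a check:

```python
from collections import Counter

# Hypothetical suite of performance measures, each tagged with its type.
measures = [
    ("claims processed per year",           "output"),
    ("applications decided within 30 days", "output"),
    ("grants paid on schedule",             "output"),
    ("website visits per month",            "output"),
    ("cost per claim processed",            "efficiency"),
    ("reduction in repeat client contacts", "effectiveness"),
    ("client satisfaction survey result",   "qualitative"),
]

balance = Counter(kind for _, kind in measures)
total = len(measures)
for kind, count in balance.most_common():
    print(f"{kind}: {count} of {total} measures")

# Flag a suite dominated by outputs, echoing the question above.
if balance["output"] / total > 0.5:
    print("Review the balance: outputs dominate the suite")
```

A tally like this won't tell you which measures to keep, but it makes an imbalance visible early, before the suite is locked into a corporate plan.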
Have you designed your measures well?
Know your audience
Stepping back to understand your audience can help to design a simple measure that is tailored to answer the questions they are likely to be asking. You cannot – and should not – try to measure everything. Too many measures get in the way of clearly communicating results. Remember that this is not the only way of communicating with your stakeholders.
Meaningful measures
Performance measures need to be meaningful and representative of the activity performed by the agency. Narrow or superfluous measures limit the ability of the agency to showcase efforts in achieving objectives and outcomes. Measures can be strategic and challenging to achieve. While it can be harder to measure and attribute outcomes, measures that are not fully within your agency’s control can show the ultimate impact of activities it conducts, such as policy development. Performance statements give the opportunity to explain why a target has not been met and the contribution your agency makes to lead to an outcome.
Keep on top of it
It can be all too easy to forget about performance measures after the year-end performance statements are done. But changes happen throughout the year that can impact performance measures, data sources or methodology used. Changes in government, restructures within the agency, and changes to data availability can all create the need to adjust performance measures. If you do not monitor for changes, you might be left with a problem at year’s end.
What do you need to consider in reporting results?
Retrace your steps
Each measure should have a clear, documented rationale behind its selection with details to show the data and methodologies are reliable and verifiable. The methodology for formulating the result, data sources utilised, and quality assurance processes over the data and results all need to be captured. There needs to be enough detail for someone to understand and follow in your footsteps to reach the same outcome. This will help in being ‘audit-ready’ as well as improving the transparency of performance measurement.
The full performance story
The narrative supporting measures in the performance statements allows you to tell the performance story for each measure. Make the most of this narrative to explain why the measure was selected, any limiting factors, other parties contributing to outcomes, and the reasoning behind unexpected or disappointing results. This gives the audience context in considering the measure, target and results.
Cast a wide net on data sources
There is often more data that can be used for performance measurement than is immediately apparent. This can include publicly available data as well as data from other agencies, divisions within your agency or third parties. Agencies may also be able to create a new data set to support a performance measure. The value of this data and performance measure to the agency needs to be considered alongside the cost of creation or collation of the data in determining the right data sources to rely on.
AUTHOR: TAMMY CHO
Portfolio governance is generally understood in the field of project, program and portfolio management. However, it is becoming increasingly important in public administration as a mechanism for supporting cooperative and robust governance across the range of entities within a Minister’s portfolio.
In our recent work, we have seen portfolio governance functions established to provide strategic advice and guidance to the portfolio Secretary and portfolio entity counterparts in relation to complex shared risks and issues that impact the achievement of whole-of-portfolio objectives. In some ways, portfolio governance (and the relationship between the department of state and other portfolio entities) shares similarity with the relationship between holding and subsidiary companies that can be observed in the private sector. However, the legislative and policy frameworks applicable to the public sector create complexities that weigh into the equation when considering the need for and suitability of establishing a portfolio governance function.
Developing and Implementing Portfolio Governance
The nature, role and focus of a portfolio governance function is not well defined and, where such functions do exist, their role tends to be highly bespoke. To make it easier to understand the utility and implementation of the concept, we have outlined some key points you should consider.
- Understand why portfolio governance is relevant and important in your context.
- Understand your portfolio’s governance risk profile, and where things are likely to go wrong.
- Understand the strategies and operating model that would work for your portfolio.
Q1: Why is robust cross-entity governance important?
To successfully design and implement a cross-entity governance function for your portfolio, you and your key stakeholders need to understand and agree why the concept is important and relevant. One underlying source of authority for Commonwealth entities is the Public Service Act 1999. Specifically, s 57 of the Act imposes a range of duties on Secretaries of Departments, including in relation to collaborating and maintaining clear lines of communication within the portfolio, and ensuring strong strategic capability for considering complex, whole-of-government issues across the portfolio. Additionally, the Commonwealth Risk Management Policy requires entities to understand and manage shared risk. This is relevant to governance risks and issues, which are likely to require shared oversight and management across portfolio entities.
The underpinning premise of why portfolio governance is important to your context will shape the concept's design and the drive for implementation. Therefore, it is important that this is clearly defined and tested with your stakeholders.
Q2: What does your portfolio’s governance risk profile look like?
It is important to consider and assess the risk profile of your portfolio to inform the design and areas of focus for your portfolio governance function. In particular, understanding your portfolio's potential sources of governance risk, known issues, organisational capability and capacity, and degree of shared responsibility or management for governance components are key considerations. It is also important to recognise that this risk profile can change across different types of portfolio entities (with consideration of their mandate, how large or small they are, and so on). The governance risk profile can also change as the public administration landscape and priorities shift. Becoming familiar with which governance risks and issues are more significant, and how and when they may lie outside your risk appetite, is a key step to effectively designing a portfolio governance function.
Q3: What strategies and models should you employ?
Organisations leading the implementation of a portfolio governance function can employ different strategies and operating models to manage portfolio governance risks and issues, seek assurance over them, and maintain communication with portfolio entities.
While there is no one-size-fits-all model, there is a spectrum of approaches with varying degrees of centralised influence and control, ranging from a light-touch, advisory and facilitative approach to a highly centralised and controlled approach between portfolio entities. The approach must carefully balance the degree of oversight exercised by the department of state (or other leading entity) against the autonomy of portfolio entities and their accountable authorities. Other relevant factors that influence the preferred approach include:
- The objectives of the portfolio or leading entity in relation to the portfolio governance function.
- The portfolio’s governance risk profile and where issues may be likely to arise.
- The existing relationships between the leading entity and other portfolio entities.
- The existing capability and capacity within the leading entity and across the portfolio.
Regardless of the model selected, clearly articulating roles and responsibilities (for both internal and external stakeholders) is vital to setting expectations and operationalising portfolio governance effectively.
If you are looking to examine what portfolio governance arrangements would best suit your organisation, Sententia Consulting’s experienced governance and risk consultants can assist. Our personnel have unparalleled experience with public sector portfolio governance functions, and also understand how analogous relationships in the private sector are designed and operate.
Contact our portfolio governance experts today.
Mark.Harrison@sentcon.com.au
0408 661 325
Tammy.Cho@sentcon.com.au
0405 201 747
Author: Kirsty Martin
The world is constantly changing, and risk management needs to keep up. Here are some key lessons to take control of risk management.
The impossible is possible – so take your chance!
The unlikely and unexpected can and does happen. In recent years we have seen organisations across all sectors rapidly transform in response to unexpected events (pandemic, anyone?), with changes that would usually have taken months or years to rollout being accelerated into weeks or even days.
Although mostly implemented reactively, many of these transformations have had a positive impact on employee and customer experience and accessibility. Think…
- Remote working
- Increased flexibility
- Telehealth
- Improved digital platforms
- Self-service options etc.
Which raises the question… Why had they not already been widely embraced?
The key lesson here is that transformation can occur quickly, and innovative organisations shouldn’t wait for a catalyst, such as a pandemic, to force their hand before fully committing to transform where opportunities are identified. If the last few years have shown us anything, it’s that rapid change is possible, and people can adapt faster and more effectively than we perhaps give them credit for.
What does this mean for risk management?
It’s time to walk the walk on a positive risk culture that uses risk management to identify opportunities and drive innovation. Decisions can be made quickly while still taking a risk-managed approach, and changes can be rapidly implemented and scaled where they are prioritised and staff are empowered to deliver them.
We’re more interconnected than we think – so consult broadly
We often think of organisations or industries individually. We conduct various analyses of our internal and external environments, but still tend to focus on those elements that we can see may have a direct impact on our particular industry. Given our highly complex supply chains, changes in seemingly unrelated industries or communities can completely shock our operating environment through indirect impacts.
For example, a single ship getting stuck in the Suez Canal in March 2021 had vast and lasting global impacts on almost every industry from electronics to construction to food retail. Most organisations (outside of those directly involved in logistics) would never have considered that as a risk to their business.
Or the pandemic. We saw how the virus and related policy decisions had profoundly far-reaching impacts across society. Many of these impacts would not have been immediately obvious when looking at the risks through the purely epidemiological or economic lenses that tended to dominate the discussion. To understand the full picture, input is also required from public health policy experts, health care workers, sociologists, businesses, schools, unions and more.
The same is true for most decisions across any organisation. Without input from a broad group of stakeholders from the various teams, organisations, communities and more that combine to create our operating environment, we may not understand the full impact of our decisions and the flow on effects that may influence our intended outcomes.
What does this mean for risk management?
Leaders need to deeply understand the supply chains that their organisations rely upon and consider both direct and indirect risks. This should include consideration of broader essential services such as childcare, schooling, healthcare, retail and logistics and the flow on effects that disruptions or changes in these sectors could have on your organisation. We’ve seen many times in recent years the profound flow on effects for broader labour market participation, spending behaviour, consumer confidence etc. that can come from issues in core services.
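One way to make indirect dependencies visible is to model the supply chain as a simple directed graph and walk it transitively, so that exposures two or three steps removed still surface. A minimal sketch follows; every supplier name is invented purely for illustration.

```python
from collections import deque

# Hypothetical sketch: suppliers as a directed graph, walked breadth-first to
# surface indirect as well as direct dependencies. All names are invented.
supply_chain = {
    "our_org":          ["component_maker", "freight_provider"],
    "component_maker":  ["chip_fab"],
    "freight_provider": ["shipping_line"],
    "chip_fab":         ["rare_earth_miner"],
    "shipping_line":    [],
    "rare_earth_miner": [],
}

def all_dependencies(graph, start):
    """Return every direct and indirect dependency reachable from start."""
    seen, queue = set(), deque(graph.get(start, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(graph.get(node, []))
    return seen

# Direct suppliers are only part of the picture; the walk reveals the rest.
print(sorted(all_dependencies(supply_chain, "our_org")))
```

Even a toy model like this makes the article’s point tangible: a disruption at a fourth-tier supplier (here, the hypothetical rare earth miner) is a risk to “our_org” even though it never appears on a direct supplier list.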
There will always be another crisis – so be ready to adapt
The word unprecedented has become a cliché. Organisations in 2022 are dealing with multiple and sometimes interconnected crises. Pandemic, war, climate change and more. These and other crises will continue to cause disruption and we need to be proactive to mitigate and adapt. Taking climate change and the associated increases in regularity and severity of weather events and natural disasters as just one example, organisations should be (at a minimum):
- Updating WHS policies (for extreme heat, air quality, flood safety etc.)
- Upgrading or relocating property holdings (to mitigate more regular flood risk, expanding fire risk areas, rising sea levels etc.)
- Contingency planning for severe weather-related supply chain interruptions
- Reassessing business models, services, products, supply chains etc. to minimise carbon footprints and ensure sustainability
Organisations will continue to be faced with highly complex and sometimes abstract risks that will require both long term proactive strategic planning and the ability to react and adapt in the short term when faced with specific incidents.
What does this mean for risk management?
Organisations need an active and ongoing risk culture that can engage with long-term risks and opportunities at regular intervals while also managing risk in the everyday operating environment. Risk management cannot just be an annual ‘tick box’ exercise, and it cannot be ‘set and forget’. Organisations can’t become complacent and assume that ‘after the crisis’ everything will go back to how it was. The world is forever changing, and as such risk management needs to be invested in and nurtured as an ongoing process and mindset.
Risk Management in 2022
So, what are risk forward organisations doing?
- Identifying opportunities and taking them! An effective risk culture will provide regular information that supports faster decision making and enables organisations to take risks and lead the way in doing things differently.
- Learning more about themselves and the complex supply chains and communities in which they operate.
- Investing in and nurturing an ongoing risk management mindset across their organisations.
As always, there is no one size fits all approach to risk management. Rather, each organisation must assess their current level of risk maturity and understand the way their organisation functions to identify the best approach. For some (most) organisations, a significant amount of education and support across all staff will be required to move towards a more risk forward approach that reaps the rewards of these lessons.
If you’re looking to mature risk management at your organisation, Sententia Consulting’s highly experienced risk consultants can help. Contact us today.
Drive confidence with Sententia
Our team of big thinkers will work closely with you to deeply understand your challenges and create lasting impact. For us that means building capability, doing work that's part of something bigger, and reflecting the best of what consulting can be to uplift organisations and the communities they serve.