More evenly distributed - AI safety for business

With the recent news of the restructuring of OpenAI's AI safety team and the continuing release of ever more advanced models, it's a good time to evaluate how businesses can make use of AI safely.

In some ways the hype around AI safety has died down: many risks have been mitigated, and AI vendors and regulators seem to be working towards a balance between managing risks and unleashing the productivity gains from these technologies. Interestingly, existing laws and legal frameworks are in many ways sufficient to deal with problems such as "deepfakes."

But first, let's take a look at emerging signals of change:

Signals from the future:

Emerging trends that are likely to drive changes to the way we live, work and do business.

Technology · Society · Energy & Climate

Focus Issue: AI Safety for businesses

The rapid development and deployment of generative AI (GenAI) presents both immense opportunities and significant risks. With the potential to add up to $4.4 trillion annually to the global economy, GenAI is poised to drive innovation and economic growth across industries. However, concerns over job displacement, privacy breaches, algorithmic bias, socioeconomic inequality, uncontrollable AI, social manipulation, surveillance, and autonomous weapons have led some tech leaders to call for a pause on large-scale AI experiments until the risks can be properly managed.

To address these challenges, governments and organisations are developing frameworks and guidelines for responsible AI deployment. In the United States, the National Institute of Standards and Technology (NIST) has released a draft Generative AI Profile for its AI Risk Management Framework (AI RMF), identifying 12 risks and proposing more than 400 actions to manage them. The framework aims to improve the trustworthiness of AI products, services, and systems through voluntary adoption. Similarly, the World Economic Forum's AI Governance Alliance has published briefing papers on AI governance best practices.

For businesses looking to implement GenAI, McKinsey recommends a structured approach to responsibly managing risks: understanding exposures, developing a comprehensive view of risk, establishing governance, and embedding proper training. Organisations should assess how GenAI affects their environment, focusing on inbound risks and the maturity of their control environments. Regular reviews are crucial given the rapidly evolving nature of GenAI technology.

Adapting existing governance frameworks to accommodate the demands of GenAI is preferable, as it minimises disruption. A responsible governance framework involves a cross-functional AI steering group and the development of responsible AI guidelines. Four critical roles - Designers, Engineers, Governors, and Users - are essential for the safe and effective deployment of GenAI, underscoring the need for integrated risk management and adaptability to evolving risks.

In the public sector, a survey by Salesforce and Boston Consulting Group indicates cautious optimism among citizens in Australia and New Zealand towards the use of GenAI in government services, contingent on human oversight. While 75% of respondents are open to GenAI-enhanced services, trust concerns related to AI's responsible use persist. Governments can boost trust by implementing responsible AI practices, developing internal capabilities, setting trust prerequisites, and leading in AI innovation.

As GenAI continues to advance, legal regulations, organisational standards, and a human-centered approach to AI development will be crucial to mitigate risks and ensure AI benefits society. Businesses that proactively address AI safety and adopt responsible practices will be better positioned to navigate the challenges and opportunities presented by this transformative technology. By prioritising transparency, accountability, and continuous risk assessment, organisations can harness the power of GenAI while building trust with stakeholders and contributing to a more equitable and sustainable future.

Consider these strategic insights:

  • Proactive risk hunting teams: Assemble dedicated cross-functional teams to continuously identify, assess and mitigate emerging AI risks. Go beyond typical risk management to proactively "hunt" for potential issues before they manifest.
  • AI trust scorecards: Develop public-facing AI Trust Scorecards that transparently rate your AI systems on key dimensions like privacy, security, fairness, and robustness. Build trust through radical transparency (see the illustrative sketch after this list).
  • Responsible AI champions network: Empower a network of Responsible AI Champions embedded throughout the organisation to evangelise best practices, conduct training, and ensure accountability. Make responsible AI everyone's job.
  • Generative AI governance sandbox: Create a secure environment to rapidly test and iterate on GenAI governance frameworks. Experiment with novel approaches like decentralised oversight and AI-assisted auditing. Learn fast and scale what works.
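
To make the scorecard idea concrete, here is a minimal sketch of how an AI Trust Scorecard might be represented in code. The dimension names, the 1-5 scale, and the class structure are illustrative assumptions only, not part of any published standard; a real scorecard should be aligned with whichever risk framework your organisation adopts.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical scorecard dimensions; align these with your chosen risk framework.
DIMENSIONS = ("privacy", "security", "fairness", "robustness")


@dataclass
class TrustScorecard:
    """Illustrative public-facing trust scorecard for a single AI system."""
    system_name: str
    scores: Dict[str, int] = field(default_factory=dict)  # 1 (weak) to 5 (strong)

    def add_score(self, dimension: str, score: int) -> None:
        # Reject dimensions or scores outside the agreed rating scheme.
        if dimension not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        if not 1 <= score <= 5:
            raise ValueError("Scores are expected on a 1-5 scale")
        self.scores[dimension] = score

    def summary(self) -> str:
        # Render a one-line, publishable summary; unrated dimensions show as 'n/a'.
        rated = ", ".join(f"{d}: {self.scores.get(d, 'n/a')}" for d in DIMENSIONS)
        return f"{self.system_name} -> {rated}"


# Example usage with a hypothetical system
card = TrustScorecard("Customer support chatbot")
card.add_score("privacy", 4)
card.add_score("fairness", 3)
print(card.summary())
```

The value of even a simple structure like this is that it forces agreement on which dimensions are rated and on a consistent scale, which is what makes scorecards comparable across systems and publishable to stakeholders.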

Deep strategy:

Longer form articles rich with insights:

  • From sludge to success - Strategy+Business - Eliminating organisational sludge through tech adoption, cultural shifts, and employee empowerment boosts productivity, innovation, and job satisfaction.
  • Is it “just” an operational issue? - Center for Simplified Strategic Planning - Operational decisions can drive strategic success. Differentiating through operations can lead to competitive advantage and better serve target customers.
  • Reimagining the Future: The Power of Backcasting - Impact Lab - Backcasting empowers organisations to envision a desired future and strategically plan backward to achieve it, fostering stakeholder buy-in and adaptability.
  • 3 Ways to Clearly Communicate Your Company’s Strategy - Harvard Business Review - Effective communication of company strategy requires providing context, linking choices to purpose, and involving employees in the process for understanding and support.
  • Defining your ‘true north’: A road map to successful transformation - McKinsey & Company - Transforming businesses requires a holistic approach integrating cost optimisation, growth strategies, organisational effectiveness, and digital enablement for sustained success and competitive advantage.
  • Tying short-term decisions to long-term strategy - McKinsey & Company - Strategic alignment of short-term decisions with long-term goals drives performance and growth through effective resource allocation and decision-making processes.

Business at the point of impact:

Emerging issues and technology trends can change the way we work and do business.

Ready to apply futures thinking and strategic foresight to your biggest challenges? Introducing Portage, a strategy design platform that brings together over 150 trends, scenario generation, and visual strategy boards, combined with finely tuned AI assistants to guide you through the process.

More insights: