More evenly distributed - The future of autonomous warfare

As the war in Ukraine and other conflicts drag on, belligerents are increasingly turning to autonomous, AI-enabled, remote-controlled, and low-cost expendable weapons systems. This shift is as profound as the advent of gunpowder and will reshape how conflicts are fought and won.

But first, let's go through this week's signals:

Signals from the future:

Emerging trends that are likely to drive changes to the way we live, work and do business.

Trend Scan Report:

  • Disruptions on the Horizon - Policy Horizons Canada - A report from Canada's foresight agency. The Disruptions on the Horizon 2024 report identifies and assesses 35 disruptions for which Canada may need to prepare, and explores some of the interconnections between them. These disruptions are potential events and circumstances that could affect our society and the way it functions, as well as the way people live, work, and connect.

Society:

Technology:

Climate:

Focus Issue - The future of autonomous warfare

The rapid advancement of artificial intelligence (AI) and autonomous weapons systems (AWS) is reshaping the landscape of modern warfare. The development of fully autonomous weapons, such as the Saker Scout by the Ukrainian drone company Saker, exemplifies the shift towards machine-driven warfare. This shift raises critical challenges and dangers, including the potential for uncontrollable wars and increased civilian casualties. Despite calls from AI scientists and organisations for a pre-emptive ban on autonomous weapons, international regulation efforts have been hindered by opposition from major powers like Russia and the U.S.

In a recent episode of The Lawfare Podcast, Lauren Kahn, a Senior Research Analyst at Georgetown's Center for Security and Emerging Technology (CSET), discussed the use of AI in warfare, highlighting the Israeli military's use of an AI targeting system known as "The Gospel" during retaliatory attacks on Gaza, which resulted in a high number of civilian casualties. Kahn argued for confidence-building measures and incremental steps rather than an all-out ban on AI in warfare, and remains hopeful for the responsible and ethical use of AI in defence.

The proliferation of AWS, also known as "killer robots," raises ethical concerns about their development and deployment. The United Nations (UN) has adopted Resolution 78/241, affirming that international law applies to AWS. However, countries disagree about whether to ban lethal AWS or allow their use with certain limitations. Australia, Canada, Japan, South Korea, the UK, and the US support the use of AWS provided they are not designed to target civilians and retain human intervention. The use of AWS in conflicts is already happening, such as the alleged use of autonomous drones in the Libyan civil war and in strikes on Russian forces in Ukraine. Australia, as part of the AUKUS agreement, needs to develop a framework for the ethical use of AI in defence. The upcoming international dialogue on responsible military AI provides an opportunity for states to build consensus on rules and norms.

The Australian Human Rights Commission submitted a report to the Human Rights Council Advisory Committee, focusing on the human rights challenges posed by Lethal Autonomous Weapons Systems (LAWS). The report underscores the lack of a clear definition for LAWS, their potential to violate international human rights and humanitarian laws, especially the principle of proportionality, and the absence of enforceable international agreements to regulate their use. It cites instances of LAWS being used in conflicts like the Libyan civil war and the Russia-Ukraine War, emphasising the urgent need for specific regulation. The Commission advocates for international cooperation and dialogue, including UN action and the establishment of a Special Rapporteur on New and Emerging Military Technologies, to ensure LAWS are regulated effectively. It also calls for legal frameworks that either ban or restrict lethal autonomous systems, ensuring human oversight and compliance with international human rights law.

The impact of AI weapons and autonomous warfare is profound and multifaceted. The potential for uncontrollable wars and increased civilian casualties necessitates stringent regulation and international cooperation.

For businesses, the implications are significant, particularly in the context of "grey zone" warfare, where conflicts occur below the threshold of conventional war. Companies may find themselves as collateral damage in such conflicts, facing disruptions in supply chains, cyber-attacks, and other forms of indirect aggression. Therefore, businesses must develop strategies to mitigate these risks, such as enhancing cybersecurity measures, diversifying supply chains, and staying informed about geopolitical developments.

Looking ahead, the future of AI in warfare will likely involve a combination of regulation and technological advancements. Ensuring human involvement in lethal decisions and establishing clear rules for the use of autonomous drones are critical steps to mitigate risks. International dialogue and cooperation will be essential to build consensus on ethical and responsible use of AI in defence. As AI technology continues to evolve, it is imperative to balance innovation with ethical considerations to prevent misuse and protect human rights.

Consider these strategic insights:

  • Form Strategic Alliances with Ethical AI Developers - Partner with AI research institutions and ethical AI developers to influence the creation of AI technologies that prioritise human rights and compliance with international laws, ensuring that your business is aligned with the future regulatory landscape.
  • Invest in AI-Driven Conflict Prediction Tools - Develop or invest in AI algorithms that can predict potential geopolitical conflicts and disruptions, allowing your business to proactively adjust supply chains and mitigate risks before they materialise.
  • Create a Crisis Management AI Task Force - Establish a dedicated team to explore the use of AI in crisis management, focusing on rapid response to cyber-attacks, supply chain disruptions, and other AI-induced threats, ensuring business continuity in volatile environments.
  • Advocate for Ethical AI Policies in Industry Consortia - Take a leadership role in industry consortia to advocate for the development and adoption of ethical AI policies, influencing broader industry standards and creating a competitive advantage through responsible AI use.
  • Leverage AI for Advanced Cybersecurity Measures - Implement cutting-edge AI technologies to enhance your cybersecurity infrastructure, using machine learning to detect and neutralise threats in real-time, safeguarding your business from AI-driven cyber-attacks.

Deep strategy:

Longer form articles rich with insights:

  • A Better Framework for Solving Tough Problems - Harvard Business Review - Building trust, embracing diversity, and fostering open dialogue are key to solving complex problems efficiently and driving organisational change.
  • The Consequences of a Shrinking Population - Nothing Human is Alien - Impending population decline poses economic and social challenges, demanding societal changes to sustain growth in the future.

Business at the point of impact:

Emerging issues and technology trends can change the way we work and do business.

Ready to apply futures thinking and strategic foresight to your biggest challenges? Introducing a strategy design platform that combines over 150 trends, scenario generation, and visual strategy boards with finely tuned AI assistants to guide you through the process.

More insights: