More evenly distributed - Cybersecurity and AI

Signals from the future:

Emerging trends that are likely to drive changes to the way we live, work and do business.

Energy:


Food and Health:

Robotics & Technology:

  • The metaverse isn’t dead (in fact its future in enterprise is promising) - VentureBeat - The metaverse is evolving beyond traditional virtual worlds and presents significant opportunities in augmented reality and extended reality. It is projected to have a $5 trillion impact by 2030 and can be applied across sectors for learning, development, and industrial use cases. However, challenges such as the maturity of VR technology and social acceptance still need to be addressed.

Focus Issue: Cybersecurity and AI

Surprising nobody, artificial intelligence (AI) has emerged as a double-edged sword in the realm of cybersecurity. On one hand, AI's capabilities are enhancing our defence systems, while on the other, they're offering cybercriminals new tools for attack. The White House's recent Executive Order underscores the importance of managing AI's risks, particularly in the cybersecurity domain, and will shape how the AI industry evolves for the foreseeable future.

AI's impact on cybersecurity is multifaceted. For instance, generative AI is revolutionising how we identify threats, with analysts now able to detect attacks more swiftly and accurately, and AI aids in every stage of incident response, from identification to recovery. However, the same technology is being exploited by adversaries to craft sophisticated phishing emails and deepfake videos, and to rewrite malware to evade detection. The 2024 US Presidential election looks set to become an information battleground.

Moreover, the European Union is taking a proactive stance, aiming to become a global hub for trustworthy AI. Its comprehensive approach is designed to foster innovation while ensuring AI's human-centric and trustworthy deployment, particularly in high-impact sectors like cybersecurity. Australia lags slightly, having offered little more than eight lightweight "principles" that don't address cybersecurity risks directly, though industry consultation will likely point to the need for more guidance, if not outright regulation.

One of the more surprising developments is the potential misuse of AI in creating or aiding chemical, biological, radiological, and nuclear threats, a risk highlighted by the White House's order and one that frontier models have been "red teamed" against. This alarming possibility has led to a push for regulations to oversee the training and deployment of AI models in sensitive areas.

Another interesting point is the role of AI in both causing and relieving problems for businesses and organisations. While the introduction of AI tools and technologies can increase risk for businesses, these same tools and technologies can help bridge crucial cybersecurity skills gaps, not only technical ones but also those in policy, governance and risk management.

I think the most interesting aspect of this issue is that AI levels the playing field in cybersecurity. Previously, the scales were tipped towards threat actors, who could "fail" multiple times against defenders who could only fail once. The threat actor's ability to operate at scale against less sophisticated targets is being eroded every day by more proactive, AI-powered cybersecurity solutions. As more AI-capable compute is pushed out from datacentres to the "edge", this will become even more true and will significantly raise the barrier for threat actors.

But this technology comes with tremendous responsibility. Just as unlocking nuclear fission and fusion brought with it a new set of obligations (treaties, non-proliferation, etc.), so too does AI. And because AI is inherently an information technology, that responsibility will likely land in the realm of cybersecurity.

Consider these strategic insights:

  • Adopt a Balanced AI Strategy: Australian businesses should leverage AI to enhance cybersecurity defences while staying vigilant to AI's potential use in cyber-attacks. Strategic investment in AI tools and regular review of AI's role in cybersecurity should be on the boardroom agenda.
  • Upskill the Workforce: The cybersecurity skill gap must be addressed. Businesses should focus on training existing employees and hiring new talent with AI and cybersecurity skills, potentially sourcing from diverse and underserved communities.
  • Regulatory Compliance: Companies must stay informed about evolving AI regulations, such as those modelled by the EU, to ensure compliance and trustworthiness in AI deployment, especially as it relates to data privacy and ethical use.
  • Enhance Cyber Hygiene Practices: With the increased adoption of AI tools, businesses need to maintain robust cybersecurity hygiene. This includes educating employees, deploying multi-factor authentication, and conducting regular cybersecurity assessments.
  • International Collaboration: Engage with global partners to align with international standards and best practices, drawing from frameworks like the White House Executive Order, to manage AI risks and ensure the technology's beneficial use.

Deep strategy:

Longer form articles rich with insights:

Business at the point of impact:

Emerging issues and technology trends can change the way we work and do business.

Ready to apply futures thinking and strategic foresight to your biggest challenges? Introducing a strategy design platform that combines over 150 trends, scenario generation and visual strategy boards with finely tuned AI assistants to help guide you through the process.
Read more like this
Build your futures thinking capabilities

More insights: