Gartner Warns: 80% of AI Misuse by 2026 Will Be Internal

Gartner's warning underscores the urgent need for strong internal controls. A new report outlines steps to secure generative AI, from cross-functional governance to integrated security tools.


The secure implementation of generative AI (GenAI) requires a multi-faceted approach involving various teams and mechanisms. A new report highlights the importance of strategy, operations, and model customization teams, as well as integrated enforcement mechanisms to combat internal policy violations.

Gartner predicts that by 2026, at least 80% of unauthorized AI transactions will stem from internal policy violations, not external attacks. This underscores the need for a policy-driven approach, bolstered by integrated enforcement mechanisms. Most organizations recognize GenAI's transformative power and risks but struggle with practical application and enforceable security.
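A policy-driven approach with integrated enforcement can be made concrete with a simple allowlist check. The sketch below is a minimal, hypothetical illustration (the tool names, data classifications, and policy table are assumptions, not from the report): each AI transaction is checked against an approved-tool policy, so that routing confidential data to an unapproved or under-scoped tool is flagged as an internal policy violation rather than silently permitted.

```python
from dataclasses import dataclass


@dataclass
class AITransaction:
    """A single request from an employee to an AI service (hypothetical model)."""
    user: str
    tool: str        # the AI service the request targets
    data_class: str  # e.g. "public", "internal", "confidential"


# Hypothetical policy table: which data classes each approved tool may receive.
# Tools absent from this table are treated as unapproved ("shadow") AI.
APPROVED_TOOLS = {
    "internal-llm": {"public", "internal", "confidential"},
    "public-chatbot": {"public"},
}


def complies_with_policy(tx: AITransaction) -> bool:
    """Return True if the transaction complies with the internal AI policy."""
    allowed_classes = APPROVED_TOOLS.get(tx.tool)
    if allowed_classes is None:
        return False  # shadow AI tool: block and log for review
    return tx.data_class in allowed_classes
```

In this sketch, pasting confidential data into a public chatbot (`complies_with_policy(AITransaction("alice", "public-chatbot", "confidential"))`) evaluates to `False`, exactly the kind of internal policy violation, rather than external attack, that the Gartner prediction concerns.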

To create a strong and safe GenAI system, the report's roadmap suggests several steps. First, establish a cross-functional governance team involving legal, compliance, HR, data privacy, and business leaders. Cultural buy-in is crucial; employees who understand and participate in security become the first line of defense.

Architecting for security involves investing in tools to discover shadow AI applications, tracking usage patterns, and applying policy-based controls. Integrating guardrails into the development lifecycle helps prevent vulnerabilities such as API key leakage from reaching production and ensures inputs are sanitized. This includes protecting against unique GenAI risks like prompt injection, data poisoning, sensitive data leakage, and advanced attacks like model inversion.
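A guardrail of the kind described above can be sketched as a small pre-processing step that runs before a prompt reaches the model. This is a minimal illustration only, not the report's implementation: the secret-detection regexes and the injection phrases are assumed examples, and a production guardrail would use a maintained detection library rather than a short hand-written pattern list.

```python
import re

# Hypothetical patterns for API-key-like secrets (formats are assumptions).
SECRET_PATTERN = re.compile(r"\b(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})\b")

# A few common prompt-injection phrasings; real deployments need far broader coverage.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]


def sanitize_prompt(prompt: str) -> tuple[str, bool]:
    """Redact secrets from a prompt and flag likely prompt-injection attempts.

    Returns (sanitized_prompt, flagged): the prompt with API-key-like strings
    replaced by a placeholder, and a boolean marking it for review/blocking.
    """
    cleaned = SECRET_PATTERN.sub("[REDACTED]", prompt)
    flagged = any(p.search(cleaned) for p in INJECTION_PATTERNS)
    return cleaned, flagged
```

Running this check at the API gateway, rather than inside each application, is one way to apply the policy-based controls uniformly across every GenAI integration.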

The secure implementation of generative AI requires a holistic approach involving strategic planning, operational execution, and continuous evaluation. By understanding and mitigating risks, organizations can harness the power of GenAI while ensuring safety and effectiveness. This involves creating a structured governance program, investing in security tools, and integrating guardrails into the development lifecycle.
