Open Text Corp
Symbol OTEX
Shares Issued 245,430,377
Close 2026-03-23 C$ 31.63
Market Cap C$ 7,762,962,825

Open Text releases GenAI security report

2026-03-23 18:56 ET - News Release

Mr. Nicholas Kadysh reports

ENTERPRISES RUSH INTO GENAI WITHOUT SECURITY FOUNDATIONS, NEW PONEMON STUDY FINDS

Open Text Corp. today released a new global report, "Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI," developed in partnership with the Ponemon Institute. The research revealed that, while more than half of enterprises (52 per cent) have fully or partially deployed GenAI, security and governance are falling behind.

This gap highlights a growing challenge for the industry as organizations are adopting generative AI quickly, but many are doing so without the governance and security foundations needed to manage its risks.

"AI maturity isn't just about adopting AI tools -- it's about doing it responsibly," said Muhi Majzoub, executive vice-president, product and engineering. "Security and governance are foundational to getting real value from AI. When they're built into AI systems from the start, organizations can operate with greater transparency, monitor systems continuously and trust the outcomes AI delivers."

Only one in five enterprises report reaching AI maturity -- where AI in cybersecurity activities is fully deployed and security risks are assessed -- and fewer than half (43 per cent) have adopted a risk-based strategy to govern AI systems. As AI systems become more autonomous and embedded in critical operations, closing this maturity gap will be essential for ensuring trust, compliance and long-term business value.

AI security and governance are lagging

According to the survey, significant gaps exist between the pace of AI deployment and the practices needed to govern and secure it effectively.

  • Nearly eight in 10 organizations (79 per cent) have not yet reached full AI maturity in cybersecurity, where systems are fully deployed and security risks are assessed.
  • Only 41 per cent of organizations have AI-specific data privacy policies in place.
  • A majority (62 per cent) of respondents say it is difficult to minimize model and bias risks (such as breaches of ethical and responsible AI principles) in language model development.
  • Fewer than half (43 per cent) of respondents have adopted a risk-based AI governance approach that addresses AI-related risks like bias, security threats or ethical issues.
  • Fifty-eight per cent say prompt or input risks (that is, misleading, inaccurate or harmful responses) are very or extremely difficult to minimize.
  • Over half of respondents (56 per cent) also report challenges in managing user risks, including the unintended spread of misinformation.
  • Nearly six in 10 respondents (59 per cent) say AI makes it more difficult to comply with privacy and security regulations, yet only 41 per cent report having AI-specific data privacy policies in place.

Without trust and explainability, AI fails to deliver results and still requires human oversight

Many organizations are deploying AI to improve efficiency, including within security operations. Yet reported challenges around trust, reliability and explainability suggest the very tools designed to enhance security may be limiting effectiveness and AI autonomy due to governance and maturity gaps.

AI falls short in threat detection as bias and reliability risks persist:

  • Just 51 per cent of respondents say AI is effective in reducing the time to detect anomalies or emerging threats. Fewer than half (48 per cent) rate AI as effective in threat detection and hunting for deeper insights and reducing manual workload.
  • AI model and bias risks are limiting effectiveness. Nearly two-thirds (62 per cent) of respondents say it is very difficult or extremely difficult to minimize model and bias risks, including unfair or discriminatory outputs.
  • Operational reliability also presents a challenge, with 45 per cent of respondents citing errors in AI decision rules as a top barrier to effectiveness, while 40 per cent report errors in data inputs ingested by AI.

Fully autonomous AI still far from reach:

  • Fewer than half of organizations (47 per cent) say their AI models can learn robust norms and make safe decisions autonomously, reflecting tempered confidence as AI models take on more independence.
  • As a result, more than half of respondents (51 per cent) say human oversight is needed in AI governance due to the speed at which attackers can adapt.

"The leaders in this next phase of AI adoption will be those who build transparency and control into AI from the start," said Mr. Majzoub. "As AI becomes embedded in day-to-day operations, organizations need secure information management as the foundation; clear governance frameworks, policy-based controls and continuous monitoring that ensure AI systems remain trustworthy and compliant. Just as important is aligning AI with the right data, security practices and oversight from the outset so innovation can scale responsibly and deliver measurable business value."

Survey methodology

The Ponemon Institute independently surveyed 1,878 IT (information technology) and IT security practitioners across North America, Asia-Pacific, Europe, the Middle East, Africa and Latin America. The study captured input from organizations of varying sizes and industries, including financial services, health care, technology, energy and manufacturing. The research was conducted in November, 2025. Respondents included executives, decision-makers and practitioners across IT security, engineering, infrastructure, and risk and compliance, as well as other roles involved in AI and security strategy.

About Open Text Corp.

Open Text is a global leader in secure information management for AI, helping organizations protect, govern and activate their data with confidence. The company's technologies turn data into information with context to form the knowledge base for AI.

We seek Safe Harbor.

© 2026 Canjex Publishing Ltd. All rights reserved.