Businesses around the world are rapidly adopting AI technology to stay competitive and meet internal demands, but a new study by TrendAI reveals that this push for AI deployment is outpacing control, visibility, and accountability.
The research, which surveyed 3,700 business and IT decision-makers, found that 67% of respondents have felt pressure to approve AI projects despite security concerns; one in seven described these concerns as "extreme." As a result, security risks are being overlooked, and the push to keep pace with competitors and internal demands has seen AI integrated into critical systems without the controls needed to manage it safely.
Rachel Jin, Chief Platform & Business Officer and Head of TrendAI, emphasized the importance of managing risk in AI deployment, stating that organizations are aware of the risks but lack the necessary conditions to address them. She highlighted the importance of governance maturity in ensuring that AI projects are implemented securely while achieving business objectives.
Among the study's key findings are governance inconsistencies and unclear responsibility for AI risk, both prevalent across organizations. Security teams often find themselves reacting to top-down AI decisions, which drives the use of unsanctioned or "shadow" AI tools. This reactive posture invites workarounds and widens exposure to cyber threats.
The study also highlighted the growing trend of attackers using AI to automate reconnaissance, accelerate phishing campaigns, and lower the barrier to entry for cybercrime. This has increased the speed and scale of cyber attacks, emphasizing the need for robust AI security measures.
Despite the rapid adoption of AI, organizations are struggling to maintain control over its deployment. The study found that 57% of organizations believe AI is advancing faster than their ability to secure it. Additionally, more than half of the respondents expressed only moderate confidence in their understanding of the legal frameworks governing AI.
Trust in autonomous AI systems remains uncertain, with less than half of the respondents believing that agentic AI will significantly improve cyber defense in the short term. Concerns around data access, misuse, and lack of oversight are key factors contributing to this uncertainty.
The study also identified the risks associated with AI agents accessing sensitive data, malicious prompts compromising security, and the growing attack surface for cyber criminals. Organizations expressed concerns about the abuse of trusted AI status and risks linked to autonomous code deployment, highlighting the need for greater observability and auditability over AI systems.
In response to these challenges, around 40% of organizations support the introduction of AI “kill switch” mechanisms to shut down systems in case of failure or misuse. However, there remains a lack of consensus on how to retain control over autonomous AI systems when needed most.
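The study does not describe how such a kill switch would be built; as a minimal sketch, one common pattern is a shared flag that any monitor, human or automated, can trip to halt an autonomous loop immediately. All class, function, and parameter names below are illustrative, not drawn from the study.

```python
import threading

class KillSwitch:
    """Minimal kill-switch guard for an autonomous loop (illustrative only)."""

    def __init__(self):
        self._tripped = threading.Event()
        self.reason = None

    def trip(self, reason: str) -> None:
        # Record why the switch fired and signal all consumers to stop.
        self.reason = reason
        self._tripped.set()

    @property
    def tripped(self) -> bool:
        return self._tripped.is_set()


def run_agent(tasks, switch: KillSwitch, max_failures: int = 3):
    """Process tasks, checking the switch before every action."""
    completed, failures = [], 0
    for task in tasks:
        if switch.tripped:          # halt immediately once tripped
            break
        try:
            completed.append(task())
        except Exception:
            failures += 1
            if failures >= max_failures:
                # Automated trip: repeated failures suggest misbehavior.
                switch.trip(f"{failures} repeated failures")
    return completed


# Example: the third task keeps failing, so the switch trips and the
# remaining tasks are never executed.
ok = lambda: "done"
bad = lambda: (_ for _ in ()).throw(RuntimeError("boom"))
switch = KillSwitch()
result = run_agent([ok, ok, bad, bad, bad, ok, ok], switch, max_failures=3)
```

The harder problem the study points to is governance, not code: deciding who is authorized to trip the switch, and ensuring the agent cannot route around it.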
TrendAI, as a global leader in AI security, aims to empower enterprises to innovate fearlessly by securing AI, cloud, networks, endpoints, and data across the modern attack surface. The TrendAI Vision One platform centralizes cyber risk exposure management and security operations to protect the entire AI lifecycle. With a team of experts across 75 countries, TrendAI helps organizations stay ahead of threats and drive proactive security outcomes.
Overall, the study highlights the importance of balancing the benefits of AI adoption with the need for robust security measures. As organizations continue to deploy AI at a rapid pace, it is essential to prioritize control, visibility, and accountability to mitigate security risks and ensure the responsible use of AI technology.
For more information, refer to this article.