Una Verhoeven is Vice President of Global Technology at Valtech.
While the buzz around generative AI, which reached a fever pitch in 2023, has not yet worn off, more and more companies are shifting to the next generation of AI-driven tools: agents. Although agent adoption is happening across industries, the shift is particularly intense in some areas. This month, Gartner released a prediction that by 2029, about 80% of common customer service issues will be resolved by AI agents. Deloitte estimates that a quarter of companies will pilot agentic AI this year, a figure it expects to grow to 50% by 2027.
Amid the enthusiasm, however, there must also be a recognition of risk, at least among careful executives with perspective. Even as AI agents are poised to be the "next big thing," experts acknowledge their potential to veer out of control or to make incorrect, irreversible decisions. Because they operate with a much greater degree of autonomy than their generative AI predecessors, agents also introduce security vulnerabilities that are ripe for exploitation by bad actors.
For these reasons, any organization deploying these tools must simultaneously adopt a responsible AI framework that includes transparency, ethical guidelines and strong oversight.
Staying On Top Of Security
Whenever humans step back from oversight, concerns about security vulnerabilities deepen. From technical issues such as malfunctions and errors to heightened cyberattack threats (which can carry increasingly serious consequences given agents' capacity for autonomous decision-making), the potential risks posed by agents must be carefully weighed. The problems are not confined to the technology itself: adversarial attacks, data poisoning and vulnerable models can all be exploited to manipulate outcomes.
In addition, ensuring privacy and data compliance is an even greater challenge with an autonomous tool. Regulations such as GDPR and CCPA are becoming increasingly rigorous, and adherence is vital when developing AI systems that process sensitive information such as health or banking records.
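To make that concrete, one simple first line of defense is to redact obvious identifiers before a request is ever logged or handed to an agent. The sketch below is a minimal, hypothetical illustration in Python; the patterns and function names are assumptions for this example, and real compliance pipelines rely on vetted PII-detection tooling rather than a handful of regexes.

```python
import re

# Hypothetical, minimal redaction pass; a production pipeline would
# use dedicated PII-detection tooling, not a few hand-rolled regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before the text
    is stored or forwarded to an AI agent."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com, SSN 123-45-6789."))
# Prints: Reach me at [EMAIL REDACTED], SSN [SSN REDACTED].
```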
Organizations should not rush to conclude that the risks outweigh the rewards. But they do have to think carefully about how to mitigate these security complications.
Applying Strong Protections
There are several security best practices that organizations leveraging AI agents should apply in full, regardless of industry:
1. Apply preventive security protocols. Robust, preventive processes such as adversarial testing, strong identity measures, access controls and encryption help ensure that AI systems are resilient to attack. Heading off problems before they appear is far better than trying to clean up a mess after the fact. (A brief sketch of such access controls follows this list.)
2. Maintain awareness and accountability. Companies should conduct regular model audits and offer explainability tools to help maintain accountability. Clear AI governance policies and response protocols must already be in place in the event of AI-related errors or breaches. In addition, training and awareness campaigns should be an organizational priority. (A sketch of action-level audit logging also follows the list.)
3. Keep humans responsible. Strong transparency and human-in-the-loop mechanisms help mitigate risk and ensure that AI-driven decisions stay aligned with business goals and ethical standards. AI agents can operate autonomously, but human oversight is essential for reviewing and evaluating the decisions these tools make. (See the final sketch after this list for one way to gate high-risk actions on human approval.)
4. Adhere to ethical guidelines. Strong data governance and guardrails that protect safety, human rights, privacy and security are must-haves before deploying agents. Users must be confident that agents enhance human and social values rather than diminish them. Avoiding overreliance on these tools (and the deskilling of human users) is part of this framework.
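To make the first practice concrete, here is a minimal sketch of allowlist-style access control around the tools an agent may invoke, written in Python against a hypothetical in-house agent setup; the class, tool and validator names are illustrative assumptions, not any specific product's API.

```python
class ToolAccessError(Exception):
    """Raised when an agent requests a tool outside its grant."""

class ToolGateway:
    """Preventive control: the agent can only call tools it was
    explicitly granted, with arguments validated before dispatch."""

    def __init__(self, granted_tools: dict):
        # Maps tool name -> (callable, argument validator).
        self._granted = granted_tools

    def call(self, tool_name: str, **kwargs):
        if tool_name not in self._granted:
            raise ToolAccessError(f"Tool not permitted: {tool_name}")
        func, validate = self._granted[tool_name]
        validate(kwargs)  # Reject malformed or out-of-policy input.
        return func(**kwargs)

# Illustrative grant: this agent may look up orders but not issue refunds.
def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: shipped"

def check_lookup_args(kwargs: dict) -> None:
    if not str(kwargs.get("order_id", "")).isdigit():
        raise ValueError("order_id must be numeric")

gateway = ToolGateway({"lookup_order": (lookup_order, check_lookup_args)})
print(gateway.call("lookup_order", order_id="1042"))  # Allowed.
# gateway.call("issue_refund", order_id="1042")       # Raises ToolAccessError.
```

The design choice worth noting is default deny: anything not explicitly granted is refused, which limits the blast radius if an agent is manipulated by a poisoned prompt.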
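For the second practice, accountability depends on being able to reconstruct what an agent did and why. Below is a minimal sketch of an append-only audit trail for agent actions; the record fields and file format are assumptions chosen for illustration.

```python
import json
import time

def log_agent_action(log_path: str, agent_id: str, action: str,
                     inputs: dict, outcome: str) -> None:
    """Append one structured, timestamped record per agent action so
    audits and incident reviews can trace every decision."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outcome": outcome,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_agent_action("agent_audit.jsonl", "support-bot-7",
                 "lookup_order", {"order_id": "1042"}, "success")
```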
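And for the third practice, a common pattern is to let the agent act freely on low-risk operations while routing anything irreversible to a person. The sketch below is deliberately simplified; the risk tiers are invented for the example, and the console prompt stands in for a real review queue. Its default-deny fallback also echoes the guardrail thinking behind the fourth practice.

```python
# Hypothetical risk tiers: anything irreversible needs a human decision.
LOW_RISK = {"lookup_order", "send_status_update"}
HIGH_RISK = {"issue_refund", "delete_account"}

def require_approval(action: str, details: str) -> bool:
    """Stand-in for a real review queue: a human approves or rejects."""
    answer = input(f"Approve {action} ({details})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, details: str) -> str:
    if action in LOW_RISK:
        return f"{action}: executed autonomously"
    if action in HIGH_RISK:
        if require_approval(action, details):
            return f"{action}: executed with human sign-off"
        return f"{action}: blocked pending review"
    # Default deny keeps unknown actions out of the agent's reach.
    return f"{action}: refused (not in policy)"

print(execute("lookup_order", "order 1042"))
print(execute("issue_refund", "order 1042, $48.00"))
```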
A Continuous Process Of Assurance
Looking ahead, we are seeing an increased focus on AI alignment techniques, which ensure that AI systems act in accordance with human values. Regulatory frameworks are also evolving to meet the moment, requiring organizations to demonstrate the safety and fairness of their agentic tools. In many organizations, the integration of AI and cybersecurity functions is also maturing. Forward-thinking organizations are working preventively to detect and mitigate threats before they escalate so they can safely reap the benefits and capabilities of these advanced tools.
Above all, organizations must treat agentic AI security as a continuous process rather than a one-and-done initiative. By combining ethical AI principles, strong governance and preventive security measures, businesses can effectively mitigate the risks while harnessing the remarkable power of agentic AI tools.