In almost every organization, artificial intelligence is already in use, not necessarily as the result of a strategic decision but driven by the initiative of teams themselves: marketing departments using generative tools, developers integrating APIs into local environments, analysts training models to speed up reporting. Some solutions run on internal servers; others run directly in the cloud or even from personal environments, without centralized control, without traceability, and without a clear governance policy.
The risk is evident: what audits are being performed on these systems? What data do they process? How were they trained, and on what information? What decisions do they automate? Without visibility, companies face a new attack surface that is not only technical but also ethical, legal, and reputational, one where a misconfiguration, an undetected bias, or a data leak can have significant consequences.
The situation echoes the shadow IT era, when employees installed tools and services without approval from IT departments. The difference now is the magnitude of the impact: AI does not just manage information; it can interpret it, learn from it, and in some cases make decisions. This reality demands new criteria for governance and security.
There are already concrete examples. Several global companies have had to restrict the use of generative AI tools after discovering that employees had uploaded sensitive information to external systems; Samsung, for instance, curbed employee use of generative AI in 2023 after confidential source code was pasted into an external chatbot. In other cases, recommendation or evaluation algorithms produced biased outcomes that damaged organizational reputation or eroded internal trust.
The solution is not to slow down adoption, but to understand it. The democratization of AI enables decentralized innovation, rapid prototyping, and team autonomy. If properly channeled, this creative energy can become a competitive advantage.
To achieve this, organizations must promote a bottom-up strategy: understand what already exists, audit it, classify it, and apply controls proportional to each solution's risk level, as sketched below. The management framework should not be an obstacle but a guide that combines three layers: technology, processes, and culture. The goal is not to impose limits, but to build a framework that keeps pace with change while ensuring responsibility and a long-term vision.
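To make that idea concrete, here is a minimal sketch in Python of what a risk-proportional AI register could look like. It is purely illustrative: the `AIAsset` fields, the risk tiers, and the control lists are assumptions invented for the example, not an established framework or real tooling.

```python
# Hypothetical sketch of a bottom-up AI inventory: each discovered tool
# is registered, classified into a risk tier, and mapped to a set of
# controls that scales with that tier. All names are illustrative.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = 1     # e.g., internal prototype on synthetic data
    MEDIUM = 2  # e.g., cloud tool handling internal documents
    HIGH = 3    # e.g., system automating decisions on customer data


# Controls proportional to risk: the register guides, it does not block.
CONTROLS = {
    RiskTier.LOW: ["register the tool", "name an owner"],
    RiskTier.MEDIUM: ["register the tool", "name an owner",
                      "review data flows", "periodic audit"],
    RiskTier.HIGH: ["register the tool", "name an owner",
                    "review data flows", "continuous audit",
                    "bias testing", "human sign-off on decisions"],
}


@dataclass
class AIAsset:
    name: str
    owner: str
    data_categories: list[str]  # what data the solution processes
    automates_decisions: bool


def classify(asset: AIAsset) -> RiskTier:
    """Naive illustrative rule: automated decisions or customer data
    raise the tier; internal data is medium; everything else is low."""
    if asset.automates_decisions or "customer" in asset.data_categories:
        return RiskTier.HIGH
    if "internal" in asset.data_categories:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: an analyst's reporting model surfaced by the inventory.
asset = AIAsset("sales-forecast-model", "analytics-team",
                ["internal"], automates_decisions=False)
tier = classify(asset)
print(f"{asset.name}: {tier.name} -> required controls: {CONTROLS[tier]}")
```

In practice the classification rules would come from the organization's own data taxonomy and regulatory context; the point of the sketch is that every inventoried solution gets an owner, a tier, and a control set that grows with its risk, rather than a blanket prohibition.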
Today, organizations face multiple challenges:
- Lack of visibility into which AI models or agents are being used and what data they operate on, along with the absence of continuous audits and traceability mechanisms.
- Low awareness among employees and developers regarding the risks of sharing sensitive information.
- Difficulty establishing governance frameworks that evolve at the same pace as technology.
Taken together, these factors expand the exposure surface and reduce the ability to respond effectively to incidents.
True digital maturity is not achieved by restricting the use of artificial intelligence, but by understanding it from the inside. Only organizations that successfully integrate technology, processes, and culture into a coherent strategy will be able to unlock AI’s full potential without compromising security, reputation, or customer trust.
By Matías Szmulewiez, Cybersecurity Practice Head at Baufest.