Blog

Privacy-First: AI in Your Environment

Written by Amina Pašalić | May 12, 2026 12:11:36 PM

No matter how much time I spend online each week, one thing never really goes away: that quiet concern about data privacy.

 

Every time I click “Accept Cookies,” I don’t get the cookies I actually want. I just end up wondering what I might’ve just agreed to. And still, I don’t read the terms. None of us really do. At this point, it’s just part of how we live with technology.

 

But in business, data is not just a byproduct of digital activity. It represents years of expertise, operational knowledge, customer relationships, and a core competitive advantage. Mishandling it can severely damage how an organization operates.


The potential of AI is huge for large industrial companies. The bigger the organization, the harder it becomes to navigate all the information it has collected over time. That same complexity is exactly the fuel that AI needs. Every minute spent searching for information instead of using it is lost revenue.


However, no cool AI feature will ever be more important than preventing a data leak. The real challenge isn't just what to build, but how to build it safely.


Building AI Inside Security Boundaries

 

With that in mind, we are building Alva. Alva is not a consumer chatbot layered on top of company data. It is an internal solution, under active development, that runs within Alfa Laval’s Azure environment and operates inside the organization’s existing security and access boundaries. The goal is not just to make information easier to reach, but to do so in a way that respects the sensitivity of the environment it works in.

 

Security is treated as a continuous process rather than a finished state: the solution keeps evolving as new capabilities are planned and new ideas emerge.


For model access and orchestration, the solution uses Azure AI Foundry, Google Vertex AI, and Anthropic models. This matters because enterprise AI security requires a hardened surrounding platform, not just the model itself. To support this, the architecture is backed by a set of core security practices, including:

  • Access Control:
    Authentication is handled through Microsoft Entra ID (Active Directory), with MSAL obtaining tokens for secure API access. Authorization relies on Azure RBAC to limit access based on user roles and least-privilege principles.

  • Network Protection:
    Public access is completely disabled. Firewalls and internal routing keep the solution safely isolated from the open internet.

  • Security Checks and Monitoring:
    The codebase has Advanced Security enabled, including:
    • Secret scanning alerts
    • Dependency alerts
    • CodeQL analysis

    These mechanisms, combined with a private network setup, provide a level of protection against accidental leaks and known vulnerabilities.

  • Architecture Reviews:
    The system undergoes regular architecture validation and review as part of the development process.

  • Resource Management:
    Resources that aren't actively used are decommissioned, reducing both cost and the attack surface.
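To make the least-privilege idea above concrete, here is a minimal sketch of role-based authorization in Python. This is purely illustrative: the role names, actions, and `is_allowed` helper are invented for the example, and in the real system these checks are expressed as Azure RBAC role assignments rather than application code.

```python
# Minimal sketch of least-privilege, role-based access control.
# Roles and actions are illustrative, not Alva's actual configuration.

ROLE_PERMISSIONS: dict[str, set[str]] = {
    "reader": {"documents.read"},
    "contributor": {"documents.read", "documents.write"},
    "admin": {"documents.read", "documents.write", "retention.configure"},
}

def is_allowed(user_roles: list[str], action: str) -> bool:
    """Grant an action only if at least one of the user's roles permits it."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

# A user holding only the "reader" role can read but not write.
print(is_allowed(["reader"], "documents.read"))   # True
print(is_allowed(["reader"], "documents.write"))  # False
```

The key property is the default-deny stance: an action is refused unless a role explicitly grants it, which is the same principle Azure RBAC enforces at the platform level.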

 

Privacy is also shaped by what happens after a prompt is submitted. Contextual data is not kept indefinitely. Automated Azure Functions help enforce internal retention rules by periodically deleting temporary or no-longer-needed data from storage. That kind of cleanup is important, because privacy is not only about preventing unauthorized access, but also about limiting how much information is retained in the first place.
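The retention cleanup described above can be sketched as the core selection logic such a scheduled Azure Function might run. This is a simplified illustration, not the production code: the 30-day window, item names, and `expired_items` helper are all invented for the example, and the real function would operate against actual storage rather than an in-memory dictionary.

```python
# Sketch of retention-rule enforcement: select items older than a
# retention window for deletion. Window and item names are illustrative.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # hypothetical retention window

def expired_items(items: dict[str, datetime], now: datetime) -> list[str]:
    """Return names of stored items created before the retention cutoff."""
    cutoff = now - RETENTION
    return [name for name, created in items.items() if created < cutoff]

now = datetime(2026, 5, 12, tzinfo=timezone.utc)
store = {
    "prompt-context-001": now - timedelta(days=45),  # past retention: delete
    "prompt-context-002": now - timedelta(days=3),   # recent: keep
}
print(expired_items(store, now))  # ['prompt-context-001']
```

Running this logic on a timer (for example, an Azure Functions timer trigger) means stale contextual data is removed automatically instead of relying on anyone remembering to clean up.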


Security Is More Than Hosting

 

Hosting an application in Azure does not automatically make it compliant, and using Azure AI Foundry, Vertex AI, or Anthropic models does not by itself guarantee that every possible deployment setup meets every regulatory requirement. Compliance depends on configuration, contracts, governance, and how the solution is actually operated.

 

We take our role as Data Controller seriously, processing personal data only when it is adequate, relevant, and not excessive.

 

The system is designed to align with EU law, including GDPR principles, particularly around data minimization and user control. If personal data is ever transferred outside the EU/EEA, it is done in strict accordance with applicable privacy laws. Users retain the rights to access, correct, delete, and restrict the processing of their personal data.

 

Azure and other enterprise AI platforms provide a strong foundation for building secure enterprise AI systems. Microsoft and Google state that, in enterprise contexts, prompts and outputs are not used to train foundation models without customer permission. Depending on the deployment region, workloads also align with Microsoft’s EU Data Boundary commitments.

 

The key point is this: security and compliance do not come from marketing labels. They come from concrete controls.

 

Region selection matters.
Access design matters.
Data retention matters.
Network exposure matters.
Internal policy matters.



AI That Strengthens, Not Exposes

 

Alva represents our vision for the future: a tool that shows that even when you’re working with sensitive information, you don’t have to choose between being useful and being secure. You can be both.

Privacy-first AI also means putting the user in control. We are still learning, every day, how to better bridge the gap between powerful AI and rigorous data protection. Our goal is for Alva to eventually act like any other trusted colleague.

Alva demonstrates that powerful AI and strong data protection shouldn't compete with each other. They should work together to help a company grow.

That is what real artificial intelligence looks like: not just learning from data, but learning what people need and respecting what is private to them.