Data & Security Policy
How does Lautonomy secure user data?
1/ We don’t work with LLM providers that collect private data
Our public workflow currently sends limited data to OpenAI via their API. OpenAI's API data usage policy states that data submitted through the API is not used to train or improve their models unless the customer explicitly opts in.
That said, enterprise partners can specify a different LLM or a preferred hosting provider (e.g., Azure), some of which have more exhaustive and transparent data security measures in place. If a partner has a preferred LLM, we can engineer their conversational AI around it.
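As an illustration, here is a minimal sketch of how a LangChain-based pipeline (see section 2) can swap LLM backends per partner. The class names come from the langchain_openai package; the provider keys, model, and deployment names are placeholder assumptions, not our production configuration, and credentials are assumed to come from environment variables.

```python
# Minimal sketch: selecting an LLM backend per enterprise partner.
# Class names are from the langchain_openai package; the model and
# deployment names below are placeholders, not production values.
# Credentials (OPENAI_API_KEY / AZURE_OPENAI_* variables) are assumed
# to be set in the environment.
from langchain_openai import AzureChatOpenAI, ChatOpenAI


def build_chat_model(partner_config: dict):
    """Return a chat model matching the partner's preferred provider."""
    provider = partner_config.get("provider", "openai")
    if provider == "azure":
        # Azure-hosted deployments keep traffic inside the partner's tenant.
        return AzureChatOpenAI(
            azure_deployment=partner_config["deployment"],  # placeholder name
            api_version=partner_config.get("api_version", "2024-02-01"),
        )
    # Default: OpenAI's public API (no training on API data per their policy).
    return ChatOpenAI(model=partner_config.get("model", "gpt-4o-mini"))


# Example: an enterprise partner that requires Azure hosting.
llm = build_chat_model({"provider": "azure", "deployment": "partner-gpt4"})
```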
2/ We never share proprietary user data
In the Lautonomy pipeline, data is sent to the LLM as vector embeddings, constraining the provider's ability to access the raw information. The vectors function as anonymized pointers: they let us interpret the user's intent in the context of the graph schema and the query without exposing the actual data values (the information that needs to be protected). Under the hood, Lautonomy uses LangChain, which has emerged as a secure and well-documented alternative to sending private data through an API.
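To make the idea concrete, here is a minimal, self-contained sketch of the anonymized-pointer pattern, not our production code. Raw values are replaced by opaque IDs before anything leaves the pipeline, and only the graph schema and those IDs appear in the prompt; the embed function is a deterministic stand-in for a locally hosted embedding model.

```python
# Minimal sketch of the anonymized-pointer pattern (illustrative only).
# Raw values never appear in the prompt; the LLM sees the graph schema
# plus opaque IDs, and embeddings are used for matching on our side.
import hashlib

import numpy as np


def embed(text: str, dim: int = 8) -> np.ndarray:
    """Deterministic pseudo-embedding for demonstration; not semantically
    meaningful. In production, a real encoder would run locally."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).standard_normal(dim)


# Sensitive values stay local, keyed by opaque pointer IDs.
private_values = {"node_17ac": "Jane Doe", "node_93bf": "Acme Ltd."}
vector_index = {nid: embed(val) for nid, val in private_values.items()}


def resolve_pointer(query: str) -> str:
    """Match a query to a node by cosine similarity, returning only its ID."""
    q = embed(query)

    def score(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    return max(vector_index, key=lambda nid: score(vector_index[nid]))


# Only the schema and the pointer ID are ever placed in the LLM prompt.
pointer = resolve_pointer("who is the customer contact?")
prompt = f"Schema: (Person)-[:WORKS_FOR]->(Company). Target node: {pointer}"
print(prompt)  # contains no raw data values
```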
3/ We maintain a strict separation between Lautonomy data and user data
We never train models on partner data, including prompts. This strict separation policy means that users don’t need to worry about their private data finding its way into the public domain.
4/ We never send data to third-party retrieval services
Security vulnerabilities can be introduced when conversational AIs connect to third-party tools such as search engines. To prevent this, we don't connect to any third-party services: all data, including supplementary data, is curated and hosted by Lautonomy.
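As a sketch of what closed retrieval looks like in practice (illustrative only, with an in-memory stand-in for our hosted store and naive keyword matching in place of the real similarity search), every lookup runs against data we curate, with no web-search or other external tool calls:

```python
# Illustrative sketch: retrieval confined to a Lautonomy-hosted corpus.
# No search-engine or other third-party tool calls; the only data source
# is the curated store below (an in-memory stand-in for the hosted store).
CURATED_STORE = {
    "doc_001": "Supplementary note on graph schema conventions.",
    "doc_002": "Partner onboarding guide for conversational AI.",
}


def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank curated documents by naive keyword overlap (a stand-in for
    the real similarity search); never reaches outside the store."""
    terms = set(query.lower().split())
    ranked = sorted(
        CURATED_STORE.values(),
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]


print(retrieve("graph schema"))  # -> ['Supplementary note on graph schema conventions.']
```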
5/ We continuously monitor AI reasoning
We’ve designed our system to ensure that AI behaves responsibly and within the bounds of emerging AI and human rights law.
To facilitate this, registered users can always access and audit conversational AI logs for bias and other forms of faulty reasoning. Users can track, monitor, and, if necessary, correct the "thought process", that is, the automated reasoning the AI uses to generate answers to prompts.
For enterprise users, the ability to access and export conversational logs also serves as a useful tool for validating compliance with emerging regulatory requirements for algorithmic auditing.
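As a purely illustrative sketch of what an exportable reasoning-trace record could look like (the class and field names below are hypothetical, not Lautonomy's actual log schema):

```python
# Hypothetical shape of an exportable reasoning-trace record; the field
# names are illustrative, not Lautonomy's actual log schema.
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ReasoningStep:
    step: int
    description: str               # the AI's stated reasoning at this step
    flagged: bool = False          # set by a user auditing for bias/faulty logic
    correction: str | None = None  # user-supplied fix, if any


@dataclass
class ConversationLog:
    conversation_id: str
    prompt: str
    answer: str
    trace: list[ReasoningStep] = field(default_factory=list)

    def export_json(self) -> str:
        """Serialize the full trace, e.g. for a compliance audit."""
        return json.dumps(asdict(self), indent=2)


log = ConversationLog(
    conversation_id="conv_0001",
    prompt="Summarize the partner's onboarding status.",
    answer="Onboarding is complete.",
    trace=[ReasoningStep(1, "Matched query to onboarding nodes in the graph.")],
)
print(log.export_json())
```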