What we’re building
We’re surrounded by rules; rules make the world. But how many of us have a good grasp of what those rules are? Beyond this, how many of us really understand the implications - economic, financial, strategic, social, diplomatic, technological, human - of the rule-governed systems we live in?
The starting point for Lautonomy - and, simultaneously, its end goal - is to make sense of these governing rules, particularly those contained in the world’s laws, policies, regulations and contracts. We want to know what specific rules a law creates, in what circumstances those rules are triggered and what actions they mandate. We want to understand how these rules are applied in practice: how they succeed in governing behaviour and where they fail. We want to understand the dynamics of rule creation and arbitration - how to mediate between clashing interpretations of which rules take precedence. We want to be able to compare the rules different actors - states, organisations, regulators, companies, judges - use to govern different issues and jurisdictions. We want to uncover the nested, tangled pathways between rules, including parsing the bureaucratic know-how needed to work within, and access the protection of, governing rules.
How do we do this? We think the problem here has two parts.
1/ Knowledge
The first part of the problem is the knowledge gap surrounding how rules govern. This knowledge is treated as an esoteric, protected resource to be husbanded by legal experts. It shouldn’t be this way. To close this gap we need to be able to extract, label and link knowledge and know-how about rules and the governing systems they define. This involves mapping knowledge about what the rules say, and knowledge about how the rules govern in practice.
Knowledge about what the rules say we can get by looking at governing documents. These are the treaties, laws, regulations and policies that countries, companies, organisations and individuals are both bound by and responsible for upholding.
Knowledge about what the rules mean we can get by looking at how rules are applied, interpreted and argued over. This we get (or infer) from the case law and jurisprudence of judicial bodies (courts) and quasi-judicial bodies (data regulators, treaty monitoring bodies, corporate oversight boards and the like); from expert commentary; and from event-level data illustrating how rules are (or are not) being implemented.
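To make this concrete, here is a minimal sketch in Python of how a single extracted rule and its interpretive links might be represented. The structure and field names (`actor`, `trigger`, `obligation`, `source`, `interpretations`) are illustrative assumptions for this post rather than a description of our actual schema, and the breach-notification example is deliberately simplified.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """Where a rule or interpretation comes from (treaty, statute, judgment, guidance...)."""
    document: str   # e.g. "GDPR"
    provision: str  # e.g. "Article 33(1)"

@dataclass
class Interpretation:
    """How a court, regulator or commentator has read the rule in practice."""
    source: Source
    summary: str

@dataclass
class Rule:
    """A single extracted rule: who must do what, when, according to which text."""
    actor: str          # who the rule binds
    trigger: str        # circumstances that activate the rule
    obligation: str     # action the rule mandates
    source: Source      # governing document it comes from
    interpretations: list[Interpretation] = field(default_factory=list)

# Example: a simplified breach-notification rule, with one illustrative interpretation.
breach_notification = Rule(
    actor="data controller",
    trigger="personal data breach likely to result in a risk to individuals' rights",
    obligation="notify the supervisory authority within 72 hours",
    source=Source(document="GDPR", provision="Article 33(1)"),
    interpretations=[
        Interpretation(
            source=Source(document="Supervisory authority guidance (illustrative)", provision="-"),
            summary="The notification clock runs from when the controller becomes aware of the breach.",
        )
    ],
)
```

Even this toy structure captures the two kinds of knowledge described above: what the text of the rule says, and how it has been read in practice, each tied back to its source.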
2/ Application
The second part of the problem is what we do with this knowledge. How can we best use an enhanced knowledge of the world’s governing systems?
This is the application layer. We’re working on the premise that deep, networked knowledge of the systems in which governing rules are embedded allows us to fine-tune highly explainable (and non-hallucinatory) AI models.
With the data structured as a knowledge graph we’ll be able to dynamically adapt insights, strategies and simulations in line with updates to the rules, and to automate compliance with confidence by embedding traceable and explainable information into conversational AI (a rough sketch of what that traceability could look like follows the list below). Concretely, we can:

- develop regime-specific governance models, tailored to reflect and react to fast-changing regulations, emergent technologies and varying jurisdictional dynamics;
- use mapped-out technical regulations to tell designers and engineers, in real time, whether a product fits within required product standards, ethical guidelines or industry frameworks and is likely to be approved by regulators - and, if not, suggest sympathetic edits that reduce the risk of refusal;
- work with diplomats to streamline treaty negotiations, or with lawyers to automate contract arbitration;
- surface and monitor systemic forms of injustice, matching event-level and geo-spatial data to individual and community rights and to corporate and government responsibilities;
- make it easier and cheaper for people, wherever they’re located, to find and collaborate with lawyers and NGOs;
- develop legal risk metrics, both predicting how litigation is likely to pan out and pioneering new risk-based ways to ensure poverty isn’t a barrier to justice;
- work with investors to automate due diligence and to factor regulatory and ethical risk into trading decisions;
- augment and, potentially, automate forms of judicial decision-making;
- work with torture survivors, asylum seekers and indentured workers to ensure they get the legal protections they’re owed; with journalists to identify patterns of global-scale illicit behaviour; and with governments to monitor, prevent and redress human rights abuses.
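As promised above, here is a rough illustration of the "traceable and explainable" idea. It is a toy knowledge-graph fragment, built with networkx purely for illustration; the node and relation names are hypothetical. Every rule carries edges back to the provision it derives from, the actors it binds and the material that interprets it, so a query about the rule returns its own evidence trail.

```python
import networkx as nx

# Toy graph: a rule node linked to its source provision, the actor it binds,
# and an (illustrative) piece of interpretive guidance.
g = nx.DiGraph()
g.add_edge("breach_notification_rule", "GDPR Art. 33(1)", relation="derived_from")
g.add_edge("breach_notification_rule", "data controller", relation="binds")
g.add_edge("breach_notification_rule", "regulator guidance (illustrative)", relation="interpreted_by")

def explain(graph: nx.DiGraph, rule: str) -> dict[str, list[str]]:
    """Return every node linked to a rule, grouped by relation type,
    so an answer about the rule carries its sources with it."""
    trail: dict[str, list[str]] = {}
    for _, target, data in graph.out_edges(rule, data=True):
        trail.setdefault(data["relation"], []).append(target)
    return trail

print(explain(g, "breach_notification_rule"))
# -> {'derived_from': ['GDPR Art. 33(1)'],
#     'binds': ['data controller'],
#     'interpreted_by': ['regulator guidance (illustrative)']}
```

The point is not the library but the shape of the answer: because provisions and interpretations are first-class nodes, anything built on top of the graph can cite the exact text it relied on rather than produce an unsupported verdict.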
In short, with the right knowledge in place there’s quite a lot we can do.