Why we’re building what we’re building …

Legal Systems Intelligence

Writing in the 1970s, John Burton argued that in order to really understand how the world works we need to get to grips with the sheer complexity of the systems that define behavior. Instead of trying to shoehorn complex causal relations into a billiard-ball model, our models need to reflect the way society is composed of densely networked bundles of systems. In other words, we don’t live in a singular system, but in an overlapping, enmeshed “cobweb” of systems. Within this cobweb model it's the interaction between actors, rules and events that we need to be able to explore if we hope to explain - and predict - human and organisational behavior with any degree of certainty.[1]

Systems intelligence relies on mapping and extracting meaning from this cobweb of interlinked data. There are some systems where the data is (enviably) clean, like finance. It’s no accident that Googling (or ChatGPTing) for the share price of Facebook on 3 September 2018 gets a good answer. But try getting a list of all litigation Facebook was involved with on that date and you’ll have a different - and far more frustrating - research experience.

This helps illustrate one of the core problems Lautonomy works to solve. When it comes to legal and regulatory systems we don’t have an easily accessible or comprehensive pool of clean data to draw from. While lawyers might be able to tell you what the rules say or do, like the proverbial blind men grasping the elephant, they’re unlikely to have a full grasp of how the rules connect and interact. In short, the data we have doesn’t give us a comprehensive picture of law and policy as a system.

At Lautonomy we're leveraging what are called “knowledge graphs” to solve this problem and power legal systems intelligence. Knowledge graphs are, frankly, awesome, because they force you to explicitly model the entities (“things” or “nodes”) that do stuff (i.e. have causal agency) within a system and to define the relationships between these entities (i.e. how they exercise that agency).
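To make that concrete, here's a minimal sketch - in Python, and emphatically not Lautonomy's actual schema - of what "ontologically explicit, strongly typed" means in practice: every node carries a declared type, and an edge is only legal if the schema says those two types can be connected that way. All the type names below are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Illustrative node and edge types - not a real legal ontology.
NODE_TYPES = {"Court", "Judge", "Judgment", "Law", "Rule"}
EDGE_TYPES = {
    # (edge label, allowed source type, allowed target type)
    ("decided_by", "Judgment", "Judge"),
    ("issued_by",  "Judgment", "Court"),
    ("applies",    "Judgment", "Law"),
    ("contains",   "Law",      "Rule"),
}

@dataclass
class Node:
    id: str
    type: str

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        assert node.type in NODE_TYPES, f"unknown node type: {node.type}"
        self.nodes[node.id] = node

    def add_edge(self, src: str, label: str, dst: str) -> None:
        key = (label, self.nodes[src].type, self.nodes[dst].type)
        assert key in EDGE_TYPES, f"schema violation: {key}"
        self.edges.append((src, label, dst))

g = Graph()
g.add_node(Node("judgment:42", "Judgment"))
g.add_node(Node("court:nd-cal", "Court"))
g.add_edge("judgment:42", "issued_by", "court:nd-cal")  # legal under the schema
# g.add_edge("court:nd-cal", "contains", "judgment:42") # would fail the assert
```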

The upshot of having an ontologically explicit, strongly typed systems map in place is that when you connect a multi-source firehose of data it becomes straightforward to understand what's going on with that data - and what that data means for the wider system - without additional cleaning. So, for example, if we're looking at a document with the type “judgment”, we can automatically situate it as the product of an organisation called a “court”, an actor with the role “judge”, applying “laws”, which contain “rules”, assessing “facts of the case”, citing “cases”, mentioning “actors” and “events”, involving “lawyers”, with an “outcome” requiring a “remedy”. If you do this across a corpus of 100,000 judgments, you end up with a highly detailed picture of how this legal system functions in practice.
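As a toy illustration (all identifiers invented), here's how a single document typed "judgment" fans out into the surrounding system as a set of typed triples, and how a simple one-hop query then situates it automatically:

```python
# Hypothetical entities throughout - for illustration only.
judgment = "judgment:2018-fb-0042"
triples = [
    (judgment, "issued_by",   "court:nd-cal"),
    (judgment, "decided_by",  "judge:doe"),
    (judgment, "applies",     "law:privacy-act"),
    ("law:privacy-act", "contains", "rule:s13-consent"),
    (judgment, "cites",       "judgment:2015-xy-0007"),
    (judgment, "mentions",    "actor:facebook"),
    (judgment, "involves",    "lawyer:roe"),
    (judgment, "has_outcome", "outcome:claim-upheld"),
    ("outcome:claim-upheld", "requires", "remedy:damages"),
]

def neighbours(graph, node):
    """Everything one hop away from a node, with the relation label."""
    outgoing = [(p, o) for s, p, o in graph if s == node]
    incoming = [(p, s) for s, p, o in graph if o == node]
    return outgoing + incoming

# Repeat this over 100,000 judgments and the one-hop neighbourhoods compose
# into a detailed functional map of the legal system.
print(neighbours(triples, judgment))
```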

What’s more, as this picture expands, so does the confidence and accuracy with which you can infer and reconcile fuzzy, missing or incomplete data. To put this another way, a social systems map functions as an explanatory backstop to AI/ML-based analysis: we can explain and query ML-driven insights anchored against established, expert knowledge of how the system is expected to work. This isn’t just about training or fine-tuning new AI models but about parsing the operating system for law and policy rules in a way that is relevant to both human and machine decision-making.
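A toy version of that backstop, using the same invented triple format as above: the expert expectation is that a judgment is issued by the court its judge sits in, so a missing court can be back-filled from the judge - and the inference stays explainable, because it traces to an explicit rule. The relation names are assumptions for the sketch.

```python
triples = {
    ("judgment:77", "decided_by", "judge:doe"),
    ("judge:doe",   "sits_in",    "court:nd-cal"),
    # Note: no ("judgment:77", "issued_by", ...) triple - the court is missing.
}

def infer_issuing_court(graph, judgment):
    """Expectation: judgment --decided_by--> judge --sits_in--> court."""
    judges = {o for s, p, o in graph if s == judgment and p == "decided_by"}
    courts = {o for s, p, o in graph if s in judges and p == "sits_in"}
    return courts  # candidate courts, each traceable to the rule above

print(infer_issuing_court(triples, "judgment:77"))  # -> {'court:nd-cal'}
```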

As we add new data to the systems map and link data from other systems, the range of queries we can run - or, in other words, the type of intelligence we can generate - becomes exponentially more sophisticated. You want to investigate judicial decisions set against political finance records? There's a prompt for that. You want to see the cost of taking an unfair dismissal case in India vs Indiana? There's a prompt for that. You want to compare satellite-based estimates of a company's scope 1 emissions vs those reported in the company's annual report? There's a prompt for that. You want to see which of your competitors are likely to be found in breach of emerging AI regulations? There's a prompt for that.
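Under the hood, each of those "prompts" bottoms out in a query that joins data from different systems through shared entities. A deliberately tiny sketch of the first example, on invented data:

```python
# Hypothetical records: a judicial-decisions feed and a political-finance
# feed that share a "judge" entity.
decisions = [
    {"judgment": "j1", "judge": "judge:doe", "outcome": "dismissed"},
    {"judgment": "j2", "judge": "judge:roe", "outcome": "upheld"},
]
donations = [
    {"recipient": "judge:doe", "donor": "corp:acme", "amount": 5_000},
]

# "Which decisions were handed down by judges who appear in the finance records?"
donor_judges = {d["recipient"] for d in donations}
flagged = [d for d in decisions if d["judge"] in donor_judges]
print(flagged)  # -> the j1 record
```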

Mindful of the over-hype trap, it’s worth exploring in more concrete terms how legal systems intelligence adds value at the strategic and operational level.

STRATEGY

There's clearly value to being able to ask and answer a wider variety of complex questions at speed. The rapid uptake and integration of generative AI tools like ChatGPT is convincing evidence of this. But we're focused on the particular payoffs that come from putting semantically searchable, visualizable, fine-grained and - above all - trustworthy knowledge of rules-based systems into the hands of decision makers.

The first set of payoffs comes in the form of enhanced risk awareness. What are the variables that can influence whether a strategy leads to a win or a loss? How does the risk profile change over time and space? What penalties does a system impose for losing - and are the corresponding opportunities worth the risk? For example, lawyers will often have an intuition about the risk of losing a case. This unquantified sense of risk steers a fair bit of decision-making: whether to take a case, which jurisdiction to pursue it in, which arguments to foreground, whether to call particular witnesses, whether to accept a settlement. But this risk awareness tends to be the product of a lawyer’s experience and expertise rather than data - not because lawyers don’t appreciate the value of data science, but because the relevant data is in a format that doesn’t (yet) either outperform expert judgement or fit into a lawyer’s decision-making flow.
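One hedged way of turning that intuition into a number is a simple base rate computed over comparable historical judgments - nothing fancier than counting, once the corpus is cleanly typed. The corpus, fields and figures below are invented:

```python
corpus = [
    {"claim": "unfair_dismissal", "jurisdiction": "IN", "won": True},
    {"claim": "unfair_dismissal", "jurisdiction": "IN", "won": False},
    {"claim": "unfair_dismissal", "jurisdiction": "IN", "won": False},
    {"claim": "defamation",       "jurisdiction": "US", "won": True},
]

def win_rate(cases, claim, jurisdiction):
    """Base rate of success across comparable historical cases."""
    relevant = [c for c in cases
                if c["claim"] == claim and c["jurisdiction"] == jurisdiction]
    if not relevant:
        return None  # no comparable cases: fall back on expert judgement
    return sum(c["won"] for c in relevant) / len(relevant)

print(win_rate(corpus, "unfair_dismissal", "IN"))  # -> 0.33...
```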

A second set of payoffs comes from de-siloing risk. One of the effects of current siloed intelligence is that risk is partitioned, put into boxes labelled "financial risk", "political risk", "reputational risk", "operational risk", "ethical risk", "legal risk" and so on. What tends to happen is that decision makers are left to weigh the relative importance of these different boxes in isolation. The reality, however, is that these types of risk are conjoined and overlapping. Legal risks, for example, have serious financial and reputational implications; one wonders whether Fox's lawyers raised the specter of a $787.5 million payout to Dominion during editorial meetings, and how that might have changed board and shareholder willingness to support the "stolen election" narrative.

Expanding out the lens, we can also begin to model system-wide forms and drivers of risk. It’s valuable to be able to look at how well or how badly the overarching system is working: what is the “failure rate” of a system? What behaviors are being incentivized, or disincentivized? Is the fact that a rape prosecution was unsuccessful an isolated function of the legal arguments made in a particular court (before a particular judge?), or is it evidence of wider system failure, including the way police investigate and gather evidence in sexual violence cases, the way jurors perceive victims, or the legal funding and pre-trial support available? When it comes to evaluating governing systems we tend to slice risk into narrow fields rather than viewing risk as embedded in, and the product of, actual or potential system failures.
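A sketch of what a system-level "failure rate" view might look like: rather than asking why one prosecution failed, measure attrition at each stage of the pipeline. The stages and counts below are illustrative, not real statistics.

```python
# Hypothetical attrition pipeline for one category of case.
pipeline = [
    ("reported",   1000),
    ("charged",     300),
    ("prosecuted",  150),
    ("convicted",    60),
]

for (stage, n), (next_stage, m) in zip(pipeline, pipeline[1:]):
    print(f"{stage} -> {next_stage}: {m / n:.0%} carried forward")

# A sharp drop at one stage points at that part of the system - policing,
# charging decisions, or trial - rather than at any individual case.
```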

Third, the Lautonomy systems intelligence lens can help with identifying opportunities for downstream, large-scale impact. Thinking in systems is important for being able to identify and target what Donella Meadows calls “leverage points” and Malcolm Gladwell calls “tipping points”. These are the confluences or eddies in a system where actors, practices, rules, policies or events converge in ways that open low-cost/high-impact opportunities for potentially radical change.

The key benefit of systems intelligence, at least as Lautonomy sees it, is not just in identifying where these high impact interventions might be but in the ability to understand converging causal dominoes and “high reverb” feedback loops. What is the opportunity window for shaping legislation that is likely to dictate future trading conditions? Can we identify potential best courses of action available within this window? What feedback loops are likely to be created by, say, offering a 4-day working week policy, or a 6 month paid paternity leave policy? What effect have similar policies had at similar companies?

Put another way, the world’s rules establish, shape and amplify the possibility that particular social futures will emerge. It follows that an enhanced ability to design, influence and govern these rules is a massive leverage point.

OPERATIONS

At the operational level, legal systems intelligence pays off in three key ways.

First, decision-makers need to understand the internal policies (or preferences) of the organisation they work for and represent. That can be because they’re in a role that requires communication or advocacy of these policies, or because they’re affected by these policies themselves (e.g. leave entitlements, D&I standards, hiring and firing policies, complaints procedures). Being able to see and query an organisation’s policies at a glance - and benchmark those policies against established law or industry-standard practice - is a quick way to build valuable policy feedback loops between company and customer, and company and employee.
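A minimal sketch of that kind of benchmarking - the policy names, units and baseline figures are all assumptions:

```python
# Hypothetical statutory floor and internal policy for one jurisdiction.
statutory_minimum = {"paid_leave_days": 20, "paternity_leave_weeks": 2}
company_policy    = {"paid_leave_days": 25, "paternity_leave_weeks": 1}

for key, floor in statutory_minimum.items():
    offered = company_policy.get(key)
    if offered is None:
        print(f"{key}: no internal policy found")
    elif offered < floor:
        print(f"{key}: {offered} is BELOW the statutory minimum of {floor}")
    else:
        print(f"{key}: {offered} meets or exceeds the minimum of {floor}")
```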

Second, regulatory compliance is a major driver of operational cost (including inefficiency). Creating a systems intelligence map that encompasses all of an organisation’s operations reduces the burden of red tape, both because it lets decision makers see exactly what those burdens are and because it opens the door to efficiently tuning a generative AI pipeline for compliance and contract-related documents.
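One way this could work in practice - sketched without assuming any particular LLM or API - is to use the systems map as the retrieval step that grounds a generative model in the rules that actually apply. The rule identifiers below are invented:

```python
# Hypothetical mapping from business operation to applicable rules,
# pulled from the systems map.
applicable_rules = {
    "data_retention": ["rule:gdpr-art5-1e", "rule:internal-retention-24m"],
}

def build_prompt(operation: str, question: str) -> str:
    """Ground a generative model in the rules the graph says apply."""
    rules = "\n".join(applicable_rules.get(operation, []))
    return (
        "Answer using ONLY the rules below.\n"
        f"Rules:\n{rules}\n"
        f"Question: {question}"
    )

print(build_prompt("data_retention", "How long may we keep customer logs?"))
```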

Third, in contexts where operations involve significant compliance monitoring - for example, monitoring a social media platform for instances of hate speech and initiating takedown procedures - knowledge of the applicable rules and procedures allows for more efficient and cost-effective triage of compliance processes. This can span from full automation of content takedown to enhanced human-in-the-loop governance of automated decision-making systems.
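An illustrative triage policy for the hate speech example - the thresholds and categories are assumptions, not a recommended configuration:

```python
def triage(item: dict) -> str:
    """Route a flagged item by rule match and classifier confidence."""
    if item["rule_match"] == "exact" and item["confidence"] >= 0.98:
        return "auto_takedown"          # full automation
    if item["confidence"] >= 0.70:
        return "human_review_priority"  # human-in-the-loop, fast queue
    return "human_review_standard"      # human-in-the-loop, standard queue

items = [
    {"id": 1, "rule_match": "exact",   "confidence": 0.99},
    {"id": 2, "rule_match": "partial", "confidence": 0.81},
    {"id": 3, "rule_match": "partial", "confidence": 0.40},
]
print([(i["id"], triage(i)) for i in items])
```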

In conclusion

Systems intelligence, built on systems mapping, is a useful way of supercharging operational workflows and strategic decision-making. If you want to get the most out of conversational AI, start modelling law as a system. (Reading Kelsen is optional.)



References

[1] Donella (“Dana”) Meadows, doyenne of modern systems thinking, makes a similar point, arguing that if we want to truly understand a system - and identify effective interventions to change behavior within it - we need to be able to identify the connections, relationships and feedback loops that dictate behavior in that system. See Donella H. Meadows, Thinking in Systems: A Primer (2015).

KEYWORDS

Systems Intelligence | Legal Theory | Knowledge Graphs | Systems Maps | Data Models for AI