Keeping the lawyer in the loop: How legal firms are practising Responsible AI

During a recent roundtable on the increased rate of AI adoption in legal practice, the importance of setting secure guardrails around accuracy, confidentiality and explainability was high up the agenda. The consensus was clear: innovation must advance – but only with guardrails strong enough to maintain trust.

Richard Hartshorne, Business Unit Director at Expleo, discusses how to balance the need to progress with the imperative to act responsibly.

AI has become a foundational technology across industries. However, the legal profession has looked like a possible outlier, given the high risks associated with sensitive data, client confidentiality and the consequences of inaccuracy.

Not any more. Over the last year especially, many law firms have shifted their view on AI innovation. The opportunities presented by AI, including rapid data search, document analysis, drafting automation, quality assurance for decision-making and analytics-driven compliance, are simply too valuable to ignore.

Pressure from clients on firms to invest in cost-saving technologies is growing, with the understanding that General Counsel might turn to a rival firm or take business in-house. AI efficiencies may accelerate the evolution of financial models from billable hours to value-based pricing. Younger, tech-savvy lawyers may also gravitate towards those firms that encourage the use of cutting-edge applications. The question is no longer whether to use AI, but how to do so responsibly.

Despite growing enthusiasm, the risks associated with AI remain substantial. Rigorous protection of client data and the firm’s intellectual property must remain the highest priority. These can’t be compromised in the name of ‘progress’.

Any AI-related security breach or data privacy violation will prove extremely costly, while a firm found to be ‘cutting corners’ with AI risks severe and lasting reputational damage. Over-reliance on AI can also lead to sloppy mistakes.


The new imperative: setting responsibility at the core of innovation

The challenge for legal organisations is to innovate without compromising on security, ethics or professional standards. This requires not only new tools, but updates to governance, operating procedures and the company culture.

However, this noble intention is easier said than done. Governance in legal practice is notoriously complex, especially when the tools are continually changing (and insurer requirements are shifting in step). Firms often have many levels of use authorisation. Deciding the right limits on guardrails is a delicate task.

Yet, despite the trials around AI, the cost of delay is rising. Firms that hesitate risk falling behind clients, competitors and regulators. 

Getting out ahead of regulation: a strategic advantage 

Government regulation and industry standards traditionally provide the guardrails to ensure technological innovation is pursued in a responsible way. In the UK, solicitors and registered lawyers are obliged by the SRA (Solicitors Regulation Authority) to “maintain their competence to carry out their role”, which includes keeping relevant knowledge and skills up to date. Ignoring AI could therefore prove negligent.

Enterprises across all industries worldwide are following guidance on Responsible AI from respected bodies, such as the AI Risk Management Framework from NIST (the US National Institute of Standards and Technology), the EU AI Act in Europe and the ISO/IEC 42001 standard.

While compliance with one of these industry standards is not mandatory, it is becoming a de facto requirement driven by client demands and insurer requirements. Compliance is also seen as a benchmark for due diligence in legal cases, while demonstrating a commitment to mitigating risk in a market environment that typically moves faster than formal regulation.

Big corporations such as McDonald’s have decided to get out ahead of compliance by developing their own principles and frameworks that provide a buffer against change.

For law firms – where risk exposure is higher – the case for proactive governance is even stronger. 

Drawing out the non-negotiable red lines 

As a priority, firms should draft an internal AI policy that clearly sets out acceptable and unacceptable uses.  

Any Responsible AI policy should include guidance on so-called ‘shadow AI’ – i.e., when staff develop their own applications or use non-sanctioned platforms to carry out official work.  

AI tools are increasingly joining processes and meetings as a matter of course, such as transcription services or office assistants like Microsoft Copilot that automatically fire off actions to people mentioned in the dialogue. Firms must determine where these tools add value and where they introduce vulnerabilities. 

The risks of data mismanagement or confidentiality breach are potentially higher on unvetted systems. At the same time, it’s worth investing time in understanding exactly why people will try to circumvent the rules. What are their personal preferences and drivers? What could the firm be missing out on? By keeping an open mind, these innovations might be accommodated within the perimeter for others to use.

The key is to install the necessary guardrails to protect data and IP, while also demonstrating to insurers that the AI stack is sufficiently monitored and controlled.  

The law needs humans to stay in the loop 

Rather than leave IT to manage the risks of AI alone, it’s vital that all functions take ownership. (Some firms are now hiring AI specialists to lead adoption across functions.)

Lawyers themselves are especially important, given their knowledge of technical areas of law and the need to stress-test systems for accuracy.

Data governance is at the heart of AI governance. Put the right standards in place, and responsibility will endure no matter how technology advances. This groundwork is especially timely given the growing sophistication of agentic AI.

Firms need to know that AI won’t replace lawyers. Despite rapid advancements, AI still generates inaccurate and fabricated information. Biases or discriminatory patterns in training data can become ingrained, leading to unjust decisions. Then there are the grey areas of legal interpretation, which are so important to fairness and justice on a case-by-case basis.

AI lacks the transparent reasoning and nuanced understanding needed to reflect client needs or convey jurisdictional subtleties. The ‘black box’ nature of AI removes essential explainability. In short, AI is still no substitute for human judgement.

By leading with Responsible AI, law firms can innovate confidently, create competitive advantage and maintain the trust on which legal practice depends. The firms that build strong foundations today will be best positioned to harness future waves of AI – while keeping lawyers at the helm. 

Download whitepaper