
Across U.S. organizations, adoption of artificial intelligence (AI) surged in 2024–2025, but realizing its full value depends less on algorithms and more on human and organizational factors. According to recent Boston Consulting Group research on AI adoption, many organizations are finding that while technical AI capabilities have advanced, people and process challenges account for roughly 70% of the obstacles to successful AI integration. Effective change management, human–AI collaboration frameworks, workforce training, ethical guidelines, and governance structures have emerged as critical enablers for AI programs to scale. This month we highlight adoption trends and emerging best practices in organizational models for AI in the legal industry.
The legal industry underwent a dramatic jump in AI adoption in 2024, driven largely by the advent of generative AI tools for text analysis and drafting. According to the American Bar Association’s 2024 Legal Technology Survey, the percentage of attorneys using AI-based tools in practice nearly tripled, from 11% in 2023 to 30% in 2024. This growth spanned firms of all sizes: among large law firms (100+ lawyers), AI use shot up to 46% (from just 16% the year prior), and mid-sized firms (10–49 lawyers) rose to 30% (from 11%). Even solo practitioners, traditionally slower to adopt new technology, went from 10% to 18% using AI tools. Another industry study focusing on cloud-based law firms (Clio’s 2024 Legal Trends Report) found even higher uptake, reporting that AI usage by lawyers jumped from 19% to 79% in one year, indicating that once accessible tools like ChatGPT appeared, a vast majority of lawyers at least experimented with them. (The discrepancy between surveys likely reflects different sample populations and definitions of “using AI,” but the trend is clear: AI awareness and trial in legal practice went mainstream in 2024.)

Lawyers are primarily using AI to boost efficiency. Common applications include legal research with AI-assisted search, automated document review for due diligence or e-discovery, contract analysis, and drafting assistance (e.g. using GPT-based tools to generate first drafts of briefs or client emails). In the ABA survey, time savings was the dominant perceived benefit of AI adoption. Law firms also reported adopting AI to gain a competitive edge in responsiveness and to offer new services.

Importantly, in-house corporate legal departments are embracing AI at least as much as law firms. A late-2024 survey by Axiom found 99% of in-house legal teams now use AI tools for work, and nearly half use AI frequently. Generative AI specifically has blurred the distinction between formal “AI software” and consumer tools: a majority of lawyers have tried general AI platforms (the ABA survey noted that ChatGPT was the most popular AI tool, used or considered by 52% of respondents). This democratization of AI in law led to bottom-up adoption: many attorneys began using AI independently in their workflows, sometimes before their firms established any policy.
Governance, Ethics, and Human-AI Collaboration: The rapid uptake of AI in legal services has outpaced the development of governance structures, raising important ethical and management questions. Governance Gap: Surveys reveal a disconnect between AI adoption and oversight. In Axiom’s study, 47% of organizations lacked a formal AI policy for legal teams, and 83% of in-house lawyers admitted to using AI tools not provided by their company. In fact, 81% said they use AI tools that are technically “unapproved” (e.g. free online tools) in the absence of official solutions. This indicates that lawyers are forging ahead to gain efficiency with whatever AI is at hand, even if firm leadership hasn’t caught up – a situation ripe for risk. Every lawyer surveyed acknowledged there are risks in using AI for legal work, the top concerns being cybersecurity (42%), data privacy (38%), and intellectual property leakage (37%). For example, entering confidential client information into a third-party AI service could violate privacy obligations or privilege if not carefully managed. There are also concerns about the accuracy and reliability of AI outputs – a cautionary tale came in mid-2023, when attorneys faced sanctions for submitting a brief with fake case citations generated by ChatGPT, underscoring that verification is key. The ABA survey results reflect some caution as well: aside from the 30% using AI, another 33% of lawyers either don’t know if their firm is using AI or feel they don’t know enough about it, suggesting a learning curve and perhaps hesitancy among others.
Human-AI collaboration in legal practice is still finding its footing. Lawyers are trained to be risk-averse and detail-oriented, so many incorporate AI in a supportive role rather than a decision-making role. For instance, an attorney might use an AI tool to get a first draft of a contract clause, then edit it heavily – using AI as a productivity booster but relying on human judgment for quality and nuance. Ethical guidelines emphasize this “augmented” approach: the ABA and state bar associations have issued guidance that using AI doesn’t remove a lawyer’s duty of competence or supervision. Lawyers must understand the technology’s limitations (e.g. potential for “hallucinated” outputs) and double-check AI-provided information. Indeed, ethical deployment of AI in law has become a hot topic. By 2024, multiple bar associations (New York, California, etc.) had released ethics opinions on generative AI, generally advising: do not expose client confidences without consent, verify all AI-generated content, and stay educated on the technology. The 2024 ABA Tech Survey found that while a growing number of lawyers use AI, many are cautious: a notable portion said they were not sure if their professional liability insurance would cover AI-related mistakes, and some firms limit AI use pending clearer policies.
Emerging Best Practices: Leading law firms and legal departments are now implementing frameworks to harness AI’s benefits while mitigating risks. Key best practices include:
- Drafting Formal AI Policies: Organizations are creating internal policies outlining how lawyers may use AI. These typically address approved AI tools/vendors, the types of tasks AI can be used for, confidentiality protocols (e.g. never input sensitive text into public tools), and requirements for human review. Given that barely half of legal teams had policies in 2024, this is priority #1 for many. In the absence of policy, lawyers will continue using AI informally (as the 83% stat shows). A good policy enables safe use rather than banning AI; for example, some firms whitelist certain AI platforms that have appropriate data safeguards (or adopt on-premise AI solutions) so attorneys have sanctioned tools available (a minimal sketch of such a whitelist check appears after this list).
- Providing Training and Upskilling: Training is critical as lawyers incorporate AI into their work. Yet Axiom’s survey found only 16% of in-house lawyers received adequate training on using AI, though virtually 100% of those without training still went ahead and used AI tools. Recognizing this, forward-looking firms in 2024 started offering workshops or CLE (continuing legal education) sessions on AI – covering how generative AI works, its pitfalls, and best practices for prompting and reviewing outputs. Some large firms appointed “AI specialists” or formed AI committees to support attorneys on use cases (e.g. helping trial lawyers use AI for research efficiently). A trained workforce is less likely to make errors like relying on an AI-supplied case citation without checking it.
- Use-Case Focus and Pilot Programs: Legal teams are identifying specific use cases where AI can immediately assist and running pilots there. For example, contract review in due diligence is a ripe area – AI can quickly flag key clauses across hundreds of contracts (see the clause-flagging sketch after this list). If a pilot shows AI saves significant attorney hours with acceptable accuracy, firms then expand its use. The Axiom survey notes contract drafting (56%) and contract analysis (39%) are the top areas where in-house legal teams already apply AI, alongside general legal document drafting and research. These are often low-risk applications (the lawyer still ultimately approves the work) with high time savings. By contrast, fewer lawyers currently use AI for tasks like courtroom advocacy or bespoke legal advice, which carry higher risk and require more nuanced human expertise.
- Strengthening AI Governance: To oversee these efforts, some organizations have created cross-functional governance groups that include legal ops, IT, risk management, and practicing attorneys. These groups evaluate new AI tools (for security and bias), monitor usage, and update policies as needed. In corporate legal departments, coordination with the enterprise IT security team is essential since lawyers accessing AI tools create potential data leakage points. One trend is deploying private AI sandboxes – e.g. law firms using platforms where data stays within a secure environment – to let attorneys experiment without risking confidentiality. Governance also means keeping an eye on quality: firms are instituting review protocols (like second-lawyer review of AI-generated work product) until they gain confidence in outcomes.
- Ethical & Risk Mitigation Strategies: Law firms are integrating ethical considerations into their AI usage. For example, if using AI in legal research, firms instruct lawyers to shepardize (verify case law) themselves and not cite anything they haven’t read in full. Some firms have decided not to use AI for certain sensitive tasks at all (like predicting case outcomes for client advice) until tools are proven. In-house legal teams, per Axiom’s findings, view cybersecurity and data protection as top priorities – half of those using AI frequently said cybersecurity is their #1 concern. Best practices here include robust access controls, encrypting any data sent to AI systems, and vendor agreements that address data use and IP ownership of outputs. On the bias front, a few legal teams are exploring “AI audit” exercises – for instance, checking if an AI model for document review has any systematic blind spots (perhaps not recognizing certain legal phrases). While formal “AI audits” were rare in 2024 (only 51% of life sciences companies had done one, likely even fewer law firms had), the concept is gaining traction to ensure AI tools perform as intended and fairly.
- Maintaining Human Oversight and Quality Control: Ultimately, the consensus in 2024 was that AI is a powerful aid for lawyers, not a replacement. Law firm leaders often describe their approach as “human-in-the-loop.” This means a lawyer supervises the AI at every step – whether it’s verifying research results or tailoring an AI-drafted contract to a client’s needs. Many firms have explicitly told attorneys that responsibility for the final work product lies with the attorney, no matter what tools were used (a minimal sign-off sketch follows this list). Emphasizing this has helped maintain standards and also alleviated client concerns. In fact, some corporate clients now ask law firms about their AI usage policies, wanting reassurance that if AI is used on their matters, it’s done with care and confidentiality. Being able to articulate a thoughtful approach to AI is becoming part of legal service quality.
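To make the policy bullet concrete, here is a minimal sketch of the whitelist idea in Python. Everything in it is hypothetical: the tool names, the data-classification tiers, and the is_use_permitted helper are invented for illustration, and a real firm would load its approved-tool list from a managed policy store rather than hard-coding it.

```python
# Minimal sketch of an AI-tool whitelist check, as described in the
# policy bullet above. Tool names and classification tiers are
# hypothetical; a real firm would manage these centrally.

APPROVED_TOOLS = {
    # tool name -> highest data classification it may receive
    "internal-research-assistant": "confidential",  # on-premise deployment
    "vendor-contract-analyzer": "confidential",     # vendor with data agreement
    "public-chatbot": "public",                     # no client data allowed
}

SENSITIVITY_ORDER = ["public", "internal", "confidential"]

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Return True if the tool is approved for data at this classification."""
    ceiling = APPROVED_TOOLS.get(tool)
    if ceiling is None:
        return False  # unapproved tool: blocked regardless of the data
    return SENSITIVITY_ORDER.index(data_classification) <= SENSITIVITY_ORDER.index(ceiling)

if __name__ == "__main__":
    print(is_use_permitted("public-chatbot", "confidential"))              # False
    print(is_use_permitted("internal-research-assistant", "confidential")) # True
    print(is_use_permitted("unknown-tool", "public"))                      # False
```

The design choice worth noting is the default-deny: an unrecognized tool is blocked, which is exactly the posture that closes the 83% "unapproved tools" gap the surveys describe.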
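The pilot pattern from the use-case bullet can be illustrated with a toy triage loop. The keyword matcher below is a deliberately simple stand-in for whatever AI model a firm actually pilots; the workflow it demonstrates (flag clauses across many contracts, then route the hits to attorneys) is the point, and the clause categories are invented.

```python
# Minimal sketch of the contract-review pilot pattern described above.
# A regex matcher stands in for the AI model under evaluation; the
# workflow is flag-then-route-to-attorney.

import re

# Hypothetical clause categories a due-diligence team might care about.
CLAUSE_PATTERNS = {
    "change_of_control": re.compile(r"change of control", re.IGNORECASE),
    "assignment": re.compile(r"\bassign", re.IGNORECASE),
    "auto_renewal": re.compile(r"automatically renew", re.IGNORECASE),
}

def flag_clauses(contract_text: str) -> list[str]:
    """Return the clause categories detected in one contract."""
    return [name for name, pat in CLAUSE_PATTERNS.items() if pat.search(contract_text)]

def triage(contracts: dict[str, str]) -> dict[str, list[str]]:
    """Flag clauses across many contracts; attorneys review every hit."""
    return {doc_id: flags for doc_id, text in contracts.items()
            if (flags := flag_clauses(text))}

if __name__ == "__main__":
    sample = {
        "msa_001.txt": "This Agreement shall automatically renew for successive one-year terms...",
        "nda_002.txt": "Neither party may assign this Agreement without consent...",
    }
    for doc, flags in triage(sample).items():
        print(f"{doc}: attorney review needed for {flags}")
```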
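The "AI audit" exercise mentioned in the risk-mitigation bullet can be as simple as running a labeled test set through the review tool and reporting per-category recall: categories the model keeps missing show up as low numbers. The model below is a stub and the test clauses are invented; in practice the real tool and a curated test set would slot in.

```python
# Minimal sketch of a blind-spot check ("AI audit") for a document-review
# model: compute recall per clause category on a labeled test set.

from collections import defaultdict

def stub_review_model(clause: str) -> bool:
    """Stand-in for the AI reviewer: flags clauses mentioning 'indemnif'."""
    return "indemnif" in clause.lower()

# Hypothetical labeled test clauses: (text, category, should_be_flagged).
TEST_SET = [
    ("Seller shall indemnify Buyer against all claims...", "indemnity", True),
    ("Buyer is entitled to indemnification for losses...", "indemnity", True),
    ("Each party shall hold the other harmless from...", "indemnity", True),  # phrasing variant
    ("Deliveries are due within thirty (30) days.", "delivery", False),
]

def recall_by_category(model, test_set):
    """Fraction of should-be-flagged clauses the model actually flags."""
    hits, totals = defaultdict(int), defaultdict(int)
    for text, category, should_flag in test_set:
        if not should_flag:
            continue
        totals[category] += 1
        hits[category] += int(model(text))
    return {c: hits[c] / totals[c] for c in totals}

if __name__ == "__main__":
    # The 'hold harmless' variant is missed, so recall < 1.0 reveals the blind spot.
    print(recall_by_category(stub_review_model, TEST_SET))  # {'indemnity': 0.666...}
```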
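Finally, the human-in-the-loop rule from the oversight bullet can be enforced in software as well as policy. Here is a sketch in which AI-generated work product cannot be marked final without a recorded attorney sign-off; the field names and the one-reviewer threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a human-in-the-loop control: an AI-generated draft
# is not "final" until a named attorney records a review.

from dataclasses import dataclass, field

@dataclass
class WorkProduct:
    doc_id: str
    text: str
    ai_generated: bool = False
    reviewed_by: list[str] = field(default_factory=list)

    def sign_off(self, attorney: str) -> None:
        """Record an attorney's review of this document."""
        self.reviewed_by.append(attorney)

    def is_final(self) -> bool:
        """AI-generated work needs at least one attorney sign-off."""
        return (not self.ai_generated) or len(self.reviewed_by) >= 1

if __name__ == "__main__":
    draft = WorkProduct("clause_007", "Indemnification. Seller shall...", ai_generated=True)
    print(draft.is_final())        # False: no attorney has reviewed it yet
    draft.sign_off("A. Attorney")
    print(draft.is_final())        # True: responsibility rests with a named human
```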
Looking forward, the legal sector is expected to continue its rapid AI adoption, but with more structure. As one legal tech report put it, AI in law has “reached the level of adoption the cloud took a decade to obtain.” The challenge now is to harness its benefits while implementing proper governance, training, and risk management frameworks. Early movers are optimistic: 89% of in-house lawyers in one survey believe AI’s benefits will outweigh its risks. The next two years will likely see firms formalizing AI governance (closing that policy gap), integrating AI into standard workflows (e.g. research platforms with built-in AI assistants), and addressing ethical duties through updated professional guidelines. The most impactful reports of 2024 uniformly echo a key point: legal organizations that treat AI as a strategic asset – investing in their people and processes around AI – will gain efficiency and competitive advantage, while those that do not risk falling behind. In other words, AI adoption in law is no longer optional, but doing it responsibly is the new differentiator.
One industry we work with in AI Strategy & Training is the legal sector. If you don’t currently have AI policies in place, haven’t put a governance structure together, or haven’t developed an upskilling plan for your employees, we would love to jump on a discovery call to talk about how to address those gaps so your organization can be more robust and thoughtful in its AI transformation.
Cited References
Boston Consulting Group (BCG): “AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value”
Axiom: “AI in Legal Departments: Promise Meets Reality in 2024”
American Bar Association (ABA): “ABA Releases New Survey on Legal Tech Trends”