The Solo Counsel Reality: What MinLaw's AI Guidelines Miss About In-House Practice
Singapore's Ministry of Law has opened public consultation on its "Guide for Using Generative AI in the Legal Sector" until 30 September 2025. The 26-page guide represents thoughtful work, establishing three core principles (professional ethics, confidentiality, and transparency) alongside a comprehensive five-step implementation framework.
Reading through the detailed recommendations and examples from major law firms, I appreciate the effort to create responsible AI governance. The guide demonstrates clear understanding of enterprise-level AI deployment, with insights from firms like Clifford Chance, Rajah & Tann, and WongPartnership on their systematic approaches to GenAI adoption.
But as a solo in-house counsel, I'm struck by how disconnected it feels from my daily reality. The guide assumes enterprise-scale resources and formal IT approval processes. It expects dedicated AI committees and systematic tool evaluation with vendor negotiations. It doesn't speak to practitioners like me—those with limited control over technology choices, constrained budgets, and immediate pressure to deliver legal services efficiently.
1. The Professional Judgement Burden: When Ultimate Responsibility Meets Everyday Constraints
The guide's first principle emphasizes that legal professionals must "take ultimate responsibility for all work product" and maintain accountability "regardless of GenAI use" (Section 3.1). The framework repeatedly stresses the need for "lawyer-in-the-loop" approaches and human verification of all AI outputs. It calls for greater scrutiny "when using GenAI tools outside areas of expertise."
The Professional Judgement Fallacy
This principle sounds uncontroversial: of course lawyers should be responsible for their work. The implicit belief seems to be that if only lawyers were more responsible, unprofessional mistakes like citing hallucinated judgements would go away, and that a lawyer who discloses GenAI use is thereby vouching that the output has been carefully checked.
If "professional judgement" is all you need, then this is what we expect: only junior lawyers and litigants in person will cite hallucinated cases. Well, this is not true at all: senior lawyers are susceptible too. In the "landmark case" regarding hallucinated cases, the law firm partner was a 30 year veteran. There must be something more to "professional judgement" than having practiced law for a long time.
Diving deeper into the recommendations reveals more problems with asking lawyers to take ultimate responsibility:
- The guide expects you to "review, analyse, and verify all GenAI-generated output" (Section 3.1(a)). How does one do that? Is it really just reading through the output as if you were writing it yourself?
- The guide mentions that lawyers should "exercise greater scrutiny when using GenAI tools in areas where they lack subject matter knowledge" (Section 3.1(b)). How do you know when you lack subject matter knowledge? Are all lawyers capable of such constant self-reflection?
What's Really Happening
I don't think it's fair to claim that lawyers who make mistakes when using AI "should have known better". Various pressures can push lawyers onto the wrong side of the line:
- Time: Most obviously, lawyers have to deliver under time pressure. One could read more thoroughly to avoid mistakes. One could research one's methods more deeply to ensure perfection. If only I had more time. In the meantime, delivering "good enough" is what we settle for.
- We need to do AI: Some lawyers may feel they need to show that they are smart or "cool" enough to use AI. The technology is evolving at breakneck speed, so one has to take risks with systems nobody really understands, and those risks can materialise into real losses.
- Limited resources: Not everyone can afford the backing of mighty LawNet for Singapore law. Even if you can afford it, I haven't heard of a system that has your back by combing through your submissions to ensure your AI-generated output is fully supported by authority. This isn't a criticism of LawNet. Hallucinations, like misinformation, are a nuanced subject. Where you can't get help, you have to somehow wing it. For in-house counsel, this is the reality.
I suspect that no one who uses AI can truly vouch that the output is sound and secure. Is "professional judgement" then to leave it as is (read: disclaimers) and cover only the issues that will get you in trouble with the court or the client?
Validation fatigue is real. I suspect the safest exercise of "professional judgement" is to risk nothing at all by not using AI.
What's Actually Needed
With all the issues surrounding the use of AI, one might say that it's a devious cop-out to claim that we have to rely on professional judgement.
Unfortunately, the real solutions to this issue are out of reach for most in the legal community:
- To use AI, you don't need "professional judgement"; you need actual "AI literacy". I am not talking about reading the front matter of the guide or passing a course. You need to actually use it. You need to learn the difference between ChatGPT and Claude, and what goes into these systems. Prompt engineering is kind of sweet, but you also need to continually update your understanding and assumptions. Once you get a better handle on AI's jagged line of competencies, you will know when you are asking for hallucinations.
- Tools that help stave off hallucinations need to be built and deployed, by users and by the courts. Coders have linters, test suites, and continuous integration and deployment pipelines, all meant to ship quality software with few bugs. Comparable quality assurance tools don't exist in legal practice, so we fall back on "professional judgement". (A rough sketch of what such a tool could look like follows this list.)
- Pay attention to process. There are probably use cases where the risk of hallucination is low or where errors are easily caught. In the guide, law firms appear to use AI for copyediting, which is great because it improves writing and won't get you in trouble with the court. This means you have to be deliberate about when to use AI and understand where it fits among your tools and workflows. When you are pressed for time, this will not happen naturally.
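To make the software analogy concrete, here is a minimal sketch of what a home-grown "citation linter" could look like. It is an illustration under assumptions, not a real product: the citation pattern covers only a Singapore-style neutral citation format, and verified_citations.txt is a hypothetical list you would have to maintain yourself. A serious tool would check against an authoritative source such as LawNet.

```python
import re
import sys

# Hypothetical sketch: a crude "citation linter" for a draft submission.
# The regex and the verified_citations.txt file are assumptions; a real
# tool would query an authoritative database rather than a local list.
CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+SG\w+\s+\d+")  # e.g. [2023] SGCA 12

def lint_citations(draft_path: str, verified_path: str) -> list[str]:
    """Return citations found in the draft that are not on the verified list."""
    with open(verified_path, encoding="utf-8") as f:
        verified = {line.strip() for line in f if line.strip()}
    with open(draft_path, encoding="utf-8") as f:
        cited = set(CITATION_PATTERN.findall(f.read()))
    return sorted(cited - verified)

if __name__ == "__main__":
    # Usage: python lint_citations.py draft.txt verified_citations.txt
    for citation in lint_citations(sys.argv[1], sys.argv[2]):
        print(f"UNVERIFIED: {citation}")
```

Even something this crude turns verification from "read carefully and hope" into a repeatable step, which is the whole point of the comparison with linters and test suites.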
2. Shadow AI: The Guide's Biggest Blind Spot
The guide's five-step implementation framework (Section 4) reads like a masterclass in enterprise software deployment. Develop comprehensive AI adoption policies. Conduct thorough needs analysis across practice areas. Evaluate vendors using detailed checklists. Implement structured pilot programs with user acceptance testing. Establish continuous review processes with performance metrics.
This systematic approach makes perfect sense for large firms with dedicated technology teams, procurement departments, and formal governance structures. The examples from major Singapore firms demonstrate how this can work at scale—Dentons Rodyk's AI Committee, R&T Singapore's AI Core Team with cybersecurity specialists, WongPartnership's comprehensive safety frameworks.
But this enterprise-centric approach completely ignores the elephant in the room: lawyers are already using AI, and they're not waiting for formal enterprise deployments.
The Current Reality of AI Adoption
While MinLaw envisions orderly, committee-managed AI rollouts, many individual lawyers are making AI adoption decisions right now. ChatGPT, Claude, Gemini, and other consumer platforms are accessible, affordable, and don't require organizational buy-in, vendor negotiations, or IT approval processes.
Shadow AI exists because many people already use AI in their everyday lives and, in certain areas, find their own tools more effective than the ones provided by the firm (if any). Legal tech vendors are not going to focus on image generation or data analysis, yet in-house counsel are expected to produce them. Expectations have risen because consumer solutions already do these things well.
Surveys also indicate that lawyers use more than one AI tool and that legal AI tools don't always produce the best output. The same research reveals a lot about how the most effective lawyers are using AI in contract drafting.
This shadow AI usage creates a unique risk landscape that the guide barely acknowledges:
No Audit Trails: Consumer AI tools don't provide the usage monitoring and documentation that enterprise solutions offer. There's no systematic way to track what information was shared or how AI assistance influenced specific legal decisions. (A minimal do-it-yourself sketch of such a record appears after these points.)
Unclear Data Governance: While the guide extensively covers enterprise data protection requirements, consumer AI platforms have varying and often opaque data handling practices. Users make individual judgment calls about what information is safe to share without institutional oversight.
Inconsistent Quality Controls: Different platforms have different capabilities, limitations, and failure modes. A lawyer might develop expertise with one tool but then use another for specific tasks without understanding the different risk profiles.
Zero Integration: Consumer AI tools sit outside existing legal workflows, document management systems, and conflict checking processes. This creates information silos and potential gaps in professional record-keeping.
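On the audit-trail point, nothing stops a solo counsel from keeping a rudimentary record of their own AI use. The sketch below is illustrative only, assuming you are willing to record each interaction yourself: the file location and field names are my assumptions, and it is no substitute for the monitoring an enterprise platform provides.

```python
import datetime
import json
import pathlib

# Illustrative only: a bare-bones local log of AI usage, kept outside any
# consumer platform. The file path and field names are assumptions.
LOG_PATH = pathlib.Path.home() / "ai_usage_log.jsonl"

def log_ai_use(tool: str, matter: str, purpose: str, reviewed_by_human: bool) -> None:
    """Append one JSON line per AI interaction so there is at least some record."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,                      # e.g. "ChatGPT", "Claude"
        "matter": matter,                  # internal matter reference, not client data
        "purpose": purpose,                # e.g. "first draft of an internal memo"
        "reviewed_by_human": reviewed_by_human,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log_ai_use("Claude", "M-2025-014", "summarise vendor contract", True)
```

A log like this does not solve governance, but it gives you something to point to if anyone later asks what was shared and how the output was checked.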
The Regulatory Gap
The guide does mention free-to-use tools, but only briefly in Section 3.2, requiring users to "refrain from including client or commercially confidential information" and "ensure anonymisation of client data." These passing references treat shadow AI as an edge case rather than the primary reality for many practitioners.
The structured implementation framework assumes lawyers will wait for formal, committee-approved AI solutions. This misses the reality entirely—AI adoption has already happened, and it's running on consumer platforms that don't fit the proposed governance model.
Lawyers, especially resource-constrained in-house counsel, face a thorny dilemma. To get the most out of an AI tool, they have to give it as much context as possible. Including as much context as possible means making hard calls about what counts as confidential information and what counts as sufficient anonymisation. Getting it wrong leads not only to hallucination but to bad advice. Again, making the right decisions requires both professional judgement and AI literacy.
The guide's silence on shadow AI represents a fundamental disconnect from how AI adoption is actually happening in resource-constrained legal practice.
3. Transparency Requirements vs. In-House Dynamics
Section 3.3 emphasizes transparency as a core principle, stating that legal professionals should "consider disclosing to clients how these tools are being used," particularly when GenAI use "may materially impact the representation" or when data handling practices "may not align with client-specific preferences."
The guide provides extensive examples of how major firms handle transparency—publishing AI strategies on websites, including GenAI clauses in engagement letters, establishing dedicated client communication channels for AI-related concerns. KEL LLC's simple engagement letter clause, R&T Singapore's comprehensive client notifications, Clifford Chance's proactive communication about AI integration.
These examples demonstrate thoughtful approaches to transparency in traditional client-counsel relationships. But they reveal a fundamental misunderstanding of how in-house practice actually works.
The In-House Context Is Different
Unlike external counsel with formal client relationships and engagement letters, in-house lawyers serve internal stakeholders with vastly different expectations, legal sophistication, and communication preferences. Your "client" might be:
- A product manager under deadline pressure who wants quick answers, not AI governance lectures
- A cost-conscious CFO focused on efficiency gains rather than process transparency
- A compliance-focused board member concerned about any new risk vectors
- A technical team lead who understands AI better than you do
- A risk-averse business unit head who sees any AI mention as a red flag
Practical Disclosure Challenges
The guide's transparency requirements create several practical problems in the in-house context:
Education Overhead: Meaningful disclosure requires educating internal clients about what AI can and cannot do, the limitations of current tools, and the safeguards in place. This education process takes time that most in-house counsel don't have when stakeholders want immediate answers to urgent questions.
Risk Perception Mismatch: Disclosure might create unnecessary anxiety about work quality among stakeholders who lack legal training to assess the implications of AI assistance. A finance director might interpret "I used AI to help draft this" as "this legal advice is unreliable" rather than "I used efficiency tools to provide better service."
Context-Dependent Materiality: The guide's "materially impact the representation" standard is difficult to apply in ongoing internal advisory relationships. When is AI assistance material enough to warrant disclosure versus when does it create unnecessary friction in daily working relationships?
The hardest question of all is whether in-house counsel are well equipped to provide all the information that meaningful disclosure entails. Again, this requires strong grounding in both professional judgement and AI literacy. If you don't have that grounding, you may not want to risk your credibility by opening this can of worms.
The good news in practice is that most of your colleagues are on the same journey and no one has all the answers. My view is that it is best to share what you learn as learning points and to get feedback from others about their own learning journeys.
What Solo Counsel Actually Need
The consultation document asks for feedback on "clarity and practicality of the guidance" and "feasibility of the framework across different types of legal practice." As someone with experience in the areas that the guide doesn't adequately address, here's what would actually help:
Practical Risk Assessment
Instead of enterprise adoption frameworks, provide more information on what has and hasn't worked for law firms and professionals. Legal research without the ability to search the web or a legal information repository? Non-starter. Editing emails, identifying clauses, summarising contracts? Legal AI tools have mostly solved these.
Shadow AI Management Strategies
Instead of pretending shadow AI doesn't exist, offer practical guidance on safely using consumer AI tools while maintaining professional standards. What does it mean to anonymise? How do you turn off the sharing or training function? What risks still remain when you do all these steps with consumer tools?
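As one illustration of what "anonymise before you paste" could mean in practice, here is a minimal redaction pass a counsel might run locally before sending text to a consumer tool. The client names, placeholder scheme, and patterns are assumptions for the sketch; no regex will catch indirect identification, so this supplements rather than replaces a human read-through.

```python
import re

# Illustrative sketch only: replace known client names and obvious identifiers
# with placeholders before pasting text into a consumer AI tool. The name list
# and patterns are assumptions; this does not catch indirect identification.
CLIENT_NAMES = ["Acme Holdings Pte Ltd", "Jane Tan"]     # hypothetical examples
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
NRIC_PATTERN = re.compile(r"\b[STFG]\d{7}[A-Z]\b")       # Singapore NRIC format

def redact(text: str) -> str:
    """Swap known names and common identifiers for neutral placeholders."""
    for i, name in enumerate(CLIENT_NAMES, start=1):
        text = text.replace(name, f"[PARTY_{i}]")
    text = EMAIL_PATTERN.sub("[EMAIL]", text)
    text = NRIC_PATTERN.sub("[ID_NUMBER]", text)
    return text

print(redact("Jane Tan (S1234567D, jane@acme.com) signed for Acme Holdings Pte Ltd."))
```

Even after a pass like this, the guide's own questions remain live: the surrounding facts may still identify the client, and the platform's sharing, training, and retention settings still matter.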
Resource-Conscious Implementation
Instead of five-step enterprise deployment processes, focus on solutions that work with existing tools and realistic budgets. How do you implement AI safeguards without dedicated IT support? What are the minimum viable governance practices for solo practitioners?
Context-Sensitive Disclosure Guidelines
Provide nuanced guidance that accounts for the complexities of different practice contexts. When is disclosure helpful versus when does it create unnecessary friction? How do you maintain professional relationships while being appropriately transparent?
Integration with Existing Workflows
Instead of assuming green-field AI deployment, provide guidance on integrating AI tools with existing document management, conflict checking, and client service processes. How do you maintain professional standards when using tools that don't integrate with traditional legal infrastructure?
The Broader Context: Two-Tier AI Governance
This disconnect isn't just about missing details—it represents a fundamental choice about Singapore's AI governance approach. The guide's enterprise focus creates a framework that works primarily for large firms with dedicated resources while leaving everyone else to navigate AI adoption alone.
Consider the implications:
Compliance Burden: Complex governance requirements that work for large firms become impossible burdens for solo practitioners, potentially creating competitive disadvantages for smaller legal service providers.
Innovation Barriers: Formal approval processes that make sense for enterprise deployments can stifle innovation and efficiency improvements in resource-constrained practices.
Professional Development: Comprehensive training and education programs that large firms can provide internally become individual responsibilities for solo practitioners without institutional support.
Risk Management: Systematic safeguards that work at enterprise scale don't translate to individual practice contexts, potentially creating gaps in consumer protection.
Conclusion
Reading this guide, I find myself in a familiar position. I understand where it's coming from, and from that perspective the advice is sound. But it doesn't speak to me.
It's a feeling I get every time I leave a conference or tech demo, including Tech.Law.Fest. I marvel at the progress of legal tech tools and follow them closely, but I wonder if I will ever get to use them myself. This isn't only about spending money—will I be able to use this day in and day out? How does it actually improve my work compared to what I'm already using?
Maybe I'm not the target audience for this guide. But that itself is the problem.
The MinLaw consultation represents an opportunity to create AI governance that works for Singapore's diverse legal sector. The current guide demonstrates thoughtful consideration but misses practitioners who are already making AI decisions with limited resources and immediate constraints.
Effective AI governance shouldn't create a system where sophisticated frameworks work for large firms while everyone else improvises alone. Solo counsel, small teams, and in-house lawyers aren't afterthoughts in AI adoption—we're making real decisions with real constraints right now.
The consultation closes 30 September 2025.
