
From Draft to Final: What Changed in MinLaw's AI Guide (And What It Means for You)

I submitted feedback on MinLaw's draft AI guide expecting nothing. Six months later, the final guide adopted both recommendations near-verbatim. Here's what changed and what it means for solo counsel.

Last September, I published a blog post critiquing MinLaw's draft Guide for Using Generative AI in the Legal Sector. I shared it on LinkedIn. People I respect encouraged me to submit it formally. So on September 29, the night before the consultation closed, I turned my critique into a four-page submission and emailed it to MinLaw.

The Solo Counsel Reality: What MinLaw’s AI Guidelines Miss About In-House Practice
Singapore’s AI guidelines assume enterprise resources and committees. But what about solo counsel making real AI decisions with limited budgets and shadow tools? This consultation misses practitioners who need guidance most.

I felt a bit foolish doing it. The consultation would attract responses from the big boys and very important people in the legal fraternity. Some guy nobody has ever heard of sending in his blog post repackaged as formal feedback? I figured it would be politely received and quietly filed.

Six months later, the final guide landed. I skimmed it. Nothing jumped out: it was longer and more detailed, but I couldn't tell what had actually changed. So I asked Claude to run a side-by-side comparison of my submission against the final text, half-expecting it to hallucinate matches just to be sycophantic.

Launch of Guide for Using Generative Artificial Intelligence in the Legal Sector

26 Pages to 48 Pages

The draft was a 26-page outline. The final guide is 48 pages, plus annexes with sample governance policies, vendor checklists, and engagement letter templates. Senior Minister of State Murali Pillai told Parliament that MinLaw received "over 20 constructive responses." The press release said the guide was "enhanced to better support adoption across diverse operating contexts, including smaller law practices and in-house legal teams."

That last phrase caught my attention. My original blog post was called "The Solo Counsel Reality: What MinLaw's AI Guidelines Miss About In-House Practice." The whole thrust of my critique was that the draft didn't work for people like me.

The contributors list names roughly 40 organisations and exactly three individuals. I'm one of them. But it's the substance, not the credits, that matters. Here's what actually changed.

AI Literacy: From Missing to Mandatory

The draft's paragraph 17 listed three things lawyers should do when using AI: keep a lawyer in the loop, apply greater scrutiny outside their expertise, and remember they bear ultimate responsibility. That was it. No mention of understanding how AI tools actually work. No mention of learning when they fail, or why, or how to prompt them effectively.

The assumption was that professional judgment alone was sufficient. But you can't exercise meaningful oversight over a tool you don't understand. A lawyer who doesn't know that AI confidently fabricates citations isn't exercising professional judgment — they're exercising professional trust.

The final guide fixes this. Paragraph 20(a) now requires lawyers to "develop AI literacy" across five specific competencies: understanding how AI tools function and their limitations, knowing when they produce reliable versus unreliable output, learning basic prompting techniques, recognising that AI competency varies across legal tasks, and knowing when additional scrutiny is needed.

At the launch event, Law Minister Edwin Tong framed the shift in starker terms:

Lawyers who want to thrive in the future will have to be legally strong and digitally fluent.

Not just legally competent — digitally fluent. He called for rethinking the education and training lawyers receive, citing estimates that up to 44% of legal tasks can be automated by AI. This echoes the broader push from Budget 2026, which named legal as one of the first professions needing AI transition support.

Budget 2026 Tells Lawyers to Use AI. But Are We in the Driver’s Seat?
PM Wong named legal as one of the first professions needing AI transition support in Budget 2026. The government is putting real money behind this - 400% tax deductions, free premium tools, Champions of AI programme. But is this adoption infrastructure, or does it keep lawyers in the passenger seat?

Consumer Tools: From Edge Case to Protocol

The draft gave consumer AI tools — the ChatGPTs and Claudes that most practitioners actually use — three bare sentences in paragraph 19(c): review the terms of use, don't enter confidential information, anonymise data. That was the entire guidance for the tools most lawyers in Singapore were already using every day.

The final guide's paragraph 23(d) now provides specific protocols:

(d) When using free-to-use GenAI tools:

i) Disable data retention and use for model training; verify settings regularly as updates may reset configurations [Note: Data may be temporarily retained even with privacy settings enabled]; and

ii) Avoid using confidential or commercially sensitive information; if necessary, (i) anonymise data by replacing identifiers and sensitive information with generic placeholders (e.g. [Party A], [Company B], [Amount X]); (ii) frame queries as hypothetical scenarios; and (iii) use isolated clauses instead of full documents where possible. [Note: Anonymised content may still be identifiable with sufficient context. Consider documenting placeholders used for transparency, consistency, and traceability].

This is the difference between guidance that says "be careful" and guidance that tells you how. A solo counsel reading the draft would have known consumer tools carry risk. A solo counsel reading the final guide knows what to actually do about it.
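The placeholder-based anonymisation in step (ii) is mechanical enough to script. Here's a minimal sketch of the idea (the function name and example mapping are my own illustration, not from the guide), including the documented placeholder mapping the guide's note recommends keeping for traceability:

```python
# Sketch of paragraph 23(d)(ii): replace identifiers and sensitive
# details with generic placeholders before pasting text into a
# free-to-use GenAI tool, and keep the mapping for traceability.

def anonymise(text: str, mapping: dict[str, str]) -> str:
    """Replace each sensitive identifier with its placeholder."""
    for identifier, placeholder in mapping.items():
        text = text.replace(identifier, placeholder)
    return text

# Document the placeholders used, for consistency and traceability.
mapping = {
    "Tan Ah Kow": "[Party A]",
    "Acme Holdings Pte Ltd": "[Company B]",
    "S$250,000": "[Amount X]",
}

clause = "Tan Ah Kow shall pay Acme Holdings Pte Ltd S$250,000 on completion."
print(anonymise(clause, mapping))
# → [Party A] shall pay [Company B] [Amount X] on completion.
```

Naive string substitution like this won't catch paraphrased identifiers, which is exactly the guide's caveat: anonymised content may still be identifiable with sufficient context.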

Again, some of this language tracks what I submitted — in places near-verbatim.

Beyond the Submission: What the Guide Got Right on Its Own

The final guide improved in several ways. Bias got a proper treatment — three sources identified, mitigation strategies added. A risk-based oversight diagram now helps practitioners distinguish when they need to review every AI output versus when they can monitor at a higher level.

But for resource-constrained practitioners, the most practical additions are the templates. Annexes C through E contain a sample AI governance policy, employee handbook clauses, engagement letter templates, and a vendor checklist. These aren't aspirational frameworks — they're documents you can download, adapt with your organisation's name, and implement. For a solo counsel who would otherwise need to draft governance documents from scratch (or, more realistically, not draft them at all), this is the difference between having a policy and not having one.

The scope also expanded meaningfully. The draft addressed lawyers and paralegals. The final guide now covers legal technologists, alternative legal service providers, and law students. Real firm examples include in-house teams — GenZero's AI implementation sits alongside the big law case studies, making the guide feel less like a document written exclusively for firms with dedicated innovation labs.

Does It Work for Solo Counsel Now?

My original blog post ended with this line: "Maybe I'm not the target audience for this guide. But that itself is the problem."

So does the final guide fix that?

Significantly, yes. The AI literacy requirement acknowledges that professional judgment alone isn't enough. The consumer tool protocols give practical guidance for practitioners who don't have enterprise platforms. The templates mean small teams don't need to build governance frameworks from scratch. The expanded scope explicitly names in-house counsel and smaller practices.

But the stakes for getting this wrong are real — and they fall hardest on small firms. On the same day the final guide launched, the High Court published its judgment in Tan Hai Peng Micheal v Tan Cheong Joo [2026] SGHC 49. Two solicitors from small firms had cited fictitious case authorities in their closing submissions — cases that didn't exist, likely generated by AI. The court ordered $5,000 in personal costs against each solicitor, rejecting their proposed $1,500 as "plainly inadequate," and warned that future cases could result in disciplinary action. This is a sharp escalation from the $800 sanction in 2025 that first put AI hallucinations on Singapore's legal radar.

For a Singapore sole proprietorship, $5,000 out of pocket is a significant hit. No firm budget to absorb it, no compliance department that should have caught it. These are the practitioners most exposed to AI risk and least equipped to manage it.

The guide helps. If you have two hours this month, start with the consumer tool protocols in paragraph 23(d) — they're usable immediately, no governance process required — and then look at the sample AI governance policy in Annex C, which you can adapt for your team.

But an implementation gap remains. The guide tells smaller practices what good AI governance looks like. It provides templates. But execution still requires time, expertise, and support that solo counsels may not have. MinLaw's LIFT (Legal Industry Framework for Transformation) pilot — which provides hands-on implementation support to law firms adopting AI — has only seven firms so far. The guide is a framework, and frameworks help. But the hardest part for resource-constrained practitioners isn't knowing what to do. It's finding the time to actually do it.

What I Keep Coming Back To

Here's the thing. I almost didn't submit. I assumed my perspective wouldn't matter alongside the institutional responses.

I'm not claiming credit for these changes. Over 20 respondents submitted feedback. MinLaw's team did substantial independent work expanding the guide. The final document reflects a genuine consultative process, not any single voice.

But the experience taught me something. Public consultations aren't performative exercises. They're real opportunities to shape the frameworks that govern our profession. You don't need to be a big firm or an institution to participate. Write publicly. Submit formally. The worst that happens is nothing changes.

The frameworks that govern us shouldn't be written only by the people with the biggest teams.

This post is open source. View the source files, discussion notes, and revision history on GitHub.
