Writing Safer Prompts: What to Share with AI Tools and What to Leave Out
AI can help with advocacy writing – from structuring complaints to clarifying letters – but not every detail belongs in a chat box. Here's how to use AI tools safely, protect privacy, and care for your emotional wellbeing while getting the help you need.
AI tools can help advocates summarise stories, draft emails and plan actions – but they also raise big questions: "Is it safe to paste this in?" "How much detail is too much?" "What if someone else reads this later?"
This guide offers simple, plain-language ways to write safer prompts, manage privacy, and avoid re-triggering yourself while using AI.
Why Prompt Safety Matters in Advocacy
Advocacy often involves sensitive information – from health and disability details to financial stress, housing struggles, or unsafe practices. Sharing too much in an AI tool can lead to unwanted exposure or emotional exhaustion. Three things are at stake:
- Privacy: protecting yourself and others from unnecessary exposure.
- Power: staying in control of what's shared and what happens next.
- Emotional load: reducing how much you have to relive or rewrite painful experiences.
A Simple Rule of Thumb
Before you paste or type anything into an AI tool, pause and ask: "Do I really need to include this detail for the tool to help me?"
If the answer is "no" or "I'm not sure," summarise, anonymise, or leave it out. The goal is usefulness, not exposure.
Safer Prompts: Four Key Ideas
- De-identify where you can
- Summarise instead of pasting everything
- Describe patterns, not full case histories
- Control the emotional "zoom level"
1. De-Identify Where You Can
De-identifying means removing or changing details that could easily reveal who's involved. Be cautious with:
- Full names or addresses
- Emails, phone numbers, case IDs, client numbers
- Exact school, service or clinic names (especially in small towns)
- Dates combined with rare circumstances
Instead of: "On 14 March 2025, Dr Sarah Jones at Sunshine Paediatrics wrote a report about my son, Jack Smith."
Try: "In early 2025, a paediatrician wrote a report about my child at a local clinic."
The AI can still help you structure your story or letter – it doesn't need the identifying parts.
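If you de-identify text often, a small script on your own computer can do the mechanical part before anything is pasted into an AI tool. The sketch below is illustrative only: the example text and patterns are invented, simple regexes will miss plenty of identifiers (especially names, which it does not touch), so you still need to re-read the result yourself.

```python
import re

# Minimal local de-identification sketch. Patterns are illustrative,
# not exhaustive: names and rare details still need manual review.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[email]"),      # email addresses
    (re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"), "[phone]"),       # phone-like numbers
    (re.compile(r"\b\d{1,2} (January|February|March|April|May|June|July|"
                r"August|September|October|November|December) \d{4}\b"),
     "[date]"),                                               # exact dates
]

def de_identify(text: str) -> str:
    """Replace common identifiers with neutral placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

example = ("On 14 March 2025, Dr Sarah Jones (sarah@sunshinepaeds.example) "
           "wrote a report. Call 0412 345 678 for details.")
print(de_identify(example))
# → On [date], Dr Sarah Jones ([email]) wrote a report. Call [phone] for details.
```

Running it locally means the raw details never leave your machine; only the redacted version goes into the chat box.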
2. Summarise Instead of Pasting Everything
Instead of pasting full reports, write short summaries like:
"This report recommends more therapy hours but doesn't mention safety concerns."
Then ask AI:
"Here is a summary of what this report says. Please help me list the main points and draft a few respectful questions I could ask about it."
You still get support – without sharing sensitive attachments.
3. Describe Patterns, Not Full Case Histories
AI doesnβt need every incident to help you describe the big picture. For example:
"Over the last two years, I've had repeated cancellations and unsafe practices from my support service. Please help me summarise the pattern and explain why it's a problem."
Keep your detailed records offline for formal use – AI's role is to help with the wording, not the evidence.
4. Control the Emotional "Zoom Level"
Choose how close you get to painful details. Think of it like camera distance:
- Close-up: every detail and quote – often overwhelming.
- Mid-shot: general description with examples.
- Wide shot: broad overview and impact.
If it's too much, stay at mid- or wide-shot level and tell AI:
"Please help me describe this situation in general terms so the seriousness comes through without every detail."
Concrete Prompt Patterns for Safer Use
These are examples you can copy and adapt:
Help me with structure, not content:
"I'm writing about a sensitive situation. Please help me structure my letter and headings, but don't ask for extra detail."
Rewrite without extra detail:
"Here's my draft paragraph. Please simplify it, keep meaning, and don't add new facts."
Anonymise this text:
"Replace names with neutral labels like 'the service' or 'my child', without changing meaning."
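You can also do this swap yourself, locally, and keep a private mapping so the real names can be restored after the AI has helped with the wording. The sketch below assumes exact-string matches and invented names; for anything more complex (nicknames, possessives, overlapping labels) you would need to check the result by hand.

```python
# Reversible pseudonymisation sketch: swap real names for neutral labels
# before sharing, keep the mapping on your own machine, restore afterwards.
# All names and services here are invented examples.

def pseudonymise(text: str, mapping: dict[str, str]) -> str:
    """Replace each real name with its neutral label."""
    for real, label in mapping.items():
        text = text.replace(real, label)
    return text

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the real names back into AI-edited text."""
    for real, label in mapping.items():
        text = text.replace(label, real)
    return text

mapping = {"Jack": "my child", "Sunshine Paediatrics": "the clinic"}

shared = pseudonymise("Jack was seen at Sunshine Paediatrics.", mapping)
print(shared)    # → my child was seen at the clinic.
# ...share `shared` with the AI tool, get an improved draft back...
restored = restore("my child was seen at the clinic last week.", mapping)
print(restored)  # → Jack was seen at Sunshine Paediatrics last week.
```

Because the mapping never leaves your computer, the AI only ever sees the neutral labels.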
Prompts That Protect Your Boundaries
You can set limits in the way you talk to AI:
- "Please don't minimise what happened, but keep it short."
- "Don't tell me to 'see both sides'. Just help me organise my points."
- "If my question needs legal or clinical advice, remind me to speak to a human."
You're always in charge of the tone and depth of your own prompts.
When It's Better Not to Use AI at All
Crisis and Immediate Safety
If you're in danger or in a mental health crisis, reach out to emergency or crisis services, not an AI chat. You can use AI later for summaries, not during the crisis itself.
Legal or Court Matters
AI tools are not legal advisors. Use them only to organise notes or prepare questions for your lawyer – never for legal strategy or submissions.
Other Peopleβs Stories Without Consent
Never share others' stories in full without permission. Summarise or anonymise patterns instead.
Emotional Safety While Prompting
Set limits to protect yourself while using AI:
- Work in short sessions – 15 minutes at a time.
- Alternate between heavy and light tasks.
- Notice your body and take breaks if needed.
- Reach out to humans when things feel too much.
AI can help with writing, but it cannot hold you like people can.
A Small Safer-Prompting Checklist
- Have I removed names or IDs I don't need?
- Could I summarise this instead of pasting everything?
- Am I describing patterns, not every detail?
- Is AI the right tool for this situation?
- Am I emotionally okay to work on this now?
If not, adjust your prompt or come back later.
A Gentle Way to Practise Safer Prompts
- Pick a low-stakes example or resolved issue.
- Write a detailed version (with fake names).
- Practise turning it into a de-identified summary.
- Ask AI whether it still understands the situation.
Most of the time, you'll find that AI can help without you revealing everything.
Safer prompting is not about paranoia – it's about choice. You decide what to share, when to share it, and how close you get to painful details. AI is one tool in your advocacy kit – but you hold the pen.