Quick reference
Rules of the Road
The short version: dos and don'ts you can scan in two minutes. Pin this somewhere you'll see it.
Push Digital Group quick reference
Companion to: PDG AI Usage Policy v4
Do
- Use Claude Enterprise as your default. It's free for you, secure for clients, and where the integrations live.
- Treat AI output as a first draft. Edit, fact-check, make it yours before anything ships.
- Disclose AI use to clients when AI-generated voice, image, or video appears in their deliverable.
- Anonymize sensitive data when you can — names, addresses, voter IDs, donor IDs.
- Ask before you use a new tool. Requests go to your department head; new vendors get vetted by Brian. Expect an answer within two business days.
- Tell us when something goes wrong. Slack DM Brian + Scott. Mistakes caught early are usually fixable.
- Loop in counsel before shipping AI-generated voice, face, or likeness of a real person.
Don't
- Don't put client data into a personal AI account. Not free ChatGPT, free Gemini, your own Claude, personal Perplexity, or personal Grok. Personal tools are Tier 3: personal use only. Work data goes only through PDG-approved tools (Tier 1 Claude Enterprise and its bundled tools, or Tier 2 department-approved tools).
- Don't publish AI output without review. No exceptions.
- Don't connect outside data sources to anything but Claude. MCP, APIs, integrations — Claude only.
- Don't use AI-generated likenesses of real people without explicit Innovation team approval.
- Don't paste passwords, API keys, or credentials into any AI tool. Ever.
- Don't start using a requested tool while it's pending approval.
- Don't assume a state's disclosure rule from last cycle still applies. Check the Compliance Addendum.
When in doubt, Slack Brian. The default is to try it and bring questions, not to avoid AI altogether.
Reporting: Slack DM Brian Athey + Scott Farmer (COO). Backup: email both.
