Acceptable Use Policy
How PDG staff use AI: what's allowed, what isn't, and the standards we hold for every piece of work that goes out the door.
To: PDG All Staff
From: Brian Athey
Date: May 20, 2026
Version: 1.0 — Effective May 20, 2026
1. Why this policy exists
The future of Push Digital Group will be built with AI. Not as an experiment or a perk, but as part of how we work. The point of this policy is to make that real: securely, safely, and at scale.
Secure means our auth, our data, and our clients' data stay protected. Safe means we use these tools in ways that comply with the law and stand up to public scrutiny. At scale means everyone has the access and the training to use AI day-to-day, not just talk about it.
In return for following this policy, PDG provides a best-in-class default tool (Claude Enterprise) at no cost to you, a fast process to request anything else, real training, and a real point of contact when something goes sideways.
2. Who and what this covers
This policy applies to every employee, contractor, and intern at PDG, on any device, on any network, anytime you're doing work for PDG or our clients — every channel, every deliverable.
Client contracts and the confidentiality terms in our MSAs apply on top of this. Where they conflict with this policy, the contract wins.
3. What good looks like
Most AI use at PDG is encouraged. Use it freely for:
- Drafting memos, briefs, or pitch decks
- Summarizing meetings, threads, or research
- Brainstorming subject lines, ad concepts, hooks, campaign angles
- Translating, editing, or rewriting your own writing
- Analyzing public polling, FEC filings, news clips, competitor messaging
- Writing or debugging code
- First drafts of creative copy, images, voice, or video that you'll refine
If you're not sure whether a use case fits, the default is to try it, then bring questions to the Innovation team.
4. Definitions
- AI / AI tool. Software that uses machine learning, natural language processing (NLP), or related techniques to do work that used to require a person.
- Generative AI. AI that creates new content — text, images, audio, video.
- Approved tool. A tool PDG has vetted and signed off on. The current list lives in the PDG Approved AI Tools companion document.
- Public AI tool. A consumer-facing tool, usually with a free tier, where inputs may train future models or aren't covered by enterprise privacy terms. Free ChatGPT and free Gemini are examples.
- Confidential information. Client data (voter files, donor lists, campaign strategy, internal polling, non-public opposition research), employee PII beyond the directory, non-public financials, proprietary strategy, security details, NDA-covered material, and unpublished IP.
- Deepfake. AI-generated or manipulated media that depicts a real person saying or doing something they didn't.
5. Approved tools and how to request one
PDG organizes AI tools into three tiers. The companion document PDG Approved AI Tools is the live list of what's in each tier.
Tier 1 — Claude Enterprise. Universal, company-paid.
Every PDG staffer gets a Claude Enterprise seat, provisioned through PDG SSO at no cost to you. This is the default — reach for it first, especially for anything involving client work. Claude Enterprise is also the only platform sanctioned for MCP, plugged-in data sources, API integrations, and automated workflows, because that's where we've made the security, audit, and indemnification investment. If you are uncertain about what you can connect, reach out to your department head.
Tier 2 — Approved specialty tools. Department-approved, company-paid out of department budget.
A defined list of tools (ChatGPT, voice, video, image, web, code, marketing) that solve specific needs Tier 1 doesn't cover. To request one, send your department head the tool name, the use case, and what data it would touch. Your department head approves based on departmental need and budget; the company pays through the department. Turnaround target: two business days. Don't use the tool for work while the request is pending.
If the tool you want isn't already on the approved Tier 2 list, your department head forwards the request to Brian for vendor vetting (privacy terms, security posture, IP/indemnification) before any spend.
Tier 3 — Personal AI tools. Not company-paid.
Anything you pay for personally — a personal ChatGPT Plus, personal Perplexity, personal Grok, etc. These are subject to Section 6 of this policy: they cannot be used for any client or PDG work. The AUP's data and work-product rules apply the moment work is involved.
6. Personal vs. company accounts
Personal AI accounts cannot be used to create, edit, analyze, store, transmit, or process client work, PDG work, or any work-related data. Work data only goes through company-managed approved tools. The audit trail and indemnification only follow PDG-provisioned access.
Your personal subscriptions are yours — use them for your own life, not for work. If a personal account would solve a real work need, request a company account and we'll provision it. PDG does not reimburse personal AI subscriptions.
7. What data goes where
| Data type | Examples | Where it can go |
|---|---|---|
| Highly sensitive | Voter files, donor lists, non-public campaign strategy, polling data, source code from client projects | Claude Enterprise only, with a heads-up to your department head for first-time use cases |
| Confidential | Draft internal communications, non-public meeting notes, internal financial details | Claude Enterprise only |
| Internal | Non-client, non-public PDG operations (process docs, internal memos) | Any approved tool |
| Public | Press releases, public filings, published opposition research, public web content | Any approved tool — verify accuracy of outputs |
Hard line on secrets. Never paste passwords, API keys, access tokens, OAuth secrets, SSH keys, or database credentials into an AI chat — including Claude. AI tools are not credential vaults. Redact before sharing an example, or rotate after. If a secret slips in, treat it as exposed and rotate immediately.
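Redaction can be partly automated. The sketch below is a minimal, hypothetical example of scrubbing credential-shaped strings from a snippet before pasting it into an AI chat; the patterns and names are illustrative, not a PDG-sanctioned tool, and no regex catches everything, so always eyeball the result too.

```python
import re

# Illustrative patterns for common credential shapes; tune for your own stack.
SECRET_PATTERNS = [
    # key=value style assignments: password: hunter2, api_key = abc...
    (re.compile(r"(?i)\b(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),
     r"\1=REDACTED"),
    # long opaque key-looking strings, e.g. sk-... style API keys
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "REDACTED_KEY"),
    # PEM private key blocks
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
     "REDACTED_PRIVATE_KEY"),
]

def redact(text: str) -> str:
    """Replace credential-shaped substrings before sharing an example."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

If a secret still slips through, the rule above applies: treat it as exposed and rotate.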
Also off-limits, in any tool: employee PII beyond the directory, and data a client contract prohibits AI processing on.
Anonymize when you can. Even inside Claude, stripping names, addresses, and IDs from a dataset before processing is a good habit.
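One lightweight way to build that habit is to strip direct-identifier columns from a CSV before it ever reaches a prompt. This is a minimal sketch using only the Python standard library; the column names are hypothetical placeholders, so match them to the actual dataset and remember that removing direct identifiers is not full anonymization.

```python
import csv

# Hypothetical direct-identifier columns; adjust to your dataset.
PII_COLUMNS = {"first_name", "last_name", "street_address", "voter_id", "email", "phone"}

def strip_pii(in_path: str, out_path: str) -> None:
    """Write a copy of a CSV with direct-identifier columns removed."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        keep = [c for c in reader.fieldnames if c not in PII_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=keep)
        writer.writeheader()
        for row in reader:
            writer.writerow({c: row[c] for c in keep})
```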
8. Quality and human oversight
AI output is a draft, not a deliverable. Anything that leaves this building goes through a human — you.
- Treat AI output as a starting point. Edit, fact-check, make it yours.
- Watch for fabricated sources, made-up quotes, and confidently wrong numbers.
- Watch for bias in voter targeting, segmentation, and message generation.
- The work is yours. AI doesn't get a co-byline.
9. Client disclosure
When AI generates voice, image, or video that appears in a client deliverable, it must be disclosed to and approved by the client before it ships. Internal AI use — drafting, analysis, automation, research, summarization — is not separately disclosed unless a client contract requires it.
If a client tells us they don't want AI used on their work, we honor that. If the request would meaningfully change scope or pricing, we have that conversation with them directly.
Political ads. Google, Meta, and a growing list of states have specific AI-disclosure rules. They're summarized in PDG AI Compliance Addendum — check it before shipping political creative.
Synthetic media. AI-generated voices, faces, and likenesses carry the highest legal and reputational risk in our work. Don't put them in a client deliverable without explicit approval from your department head and confirmation that the use complies with FCC/TCPA, FEC, state deepfake laws, and platform policies.
10. Intellectual property
- Don't use AI to generate content that infringes someone else's copyright, trademark, or IP.
- AI-assisted work created on PDG time or for PDG/client purposes belongs to PDG or the client, per contract.
- Inputting proprietary or client information into a tool may compromise IP protections — exactly why sensitive work routes through Claude Enterprise, where Anthropic indemnifies PDG against third-party IP claims and doesn't train on inputs.
11. Compliance with law and platform policy
PDG complies with applicable federal and state regulations on AI in political and commercial content — FEC, FCC, FTC, CAN-SPAM, TCPA, CCPA/CPRA, state disclosure and deepfake laws — plus Google and Meta political-ad policies. Details by jurisdiction and channel live in PDG AI Compliance Addendum, reviewed quarterly with the COO and outside counsel. Non-compliance means ad rejection, account suspension, and lost client trust.
12. Training and what you can expect from us
- At hire and rollout: every staffer reads this policy and signs off. The Innovation team runs a live walkthrough for existing staff and at onboarding for new hires.
- Ongoing: training and tool-specific guides ship through the PDG AI Knowledge Base. Updates on tools, policies, and new risks go to #ai-announcements in Slack.
- Office hours: open AI office hours run regularly for the first 90 days post-rollout, then on demand.
If the company isn't holding up its side — training, access, response time — tell Brian. We'll fix it.
13. Governance
| Role | Person | Owns |
|---|---|---|
| VP of Innovation | Brian Athey | AI strategy, approved-tool list, vendor vetting, ethics review, day-to-day questions |
| Backup approver | Scott Farmer (COO) | Tool approvals, incident triage, and policy interpretation when Brian is unavailable |
| First line of defense | Your Department Head | Tool approvals, incident triage, and policy interpretation |
PDG follows NIST AI Risk Management Framework principles — Govern, Map, Measure, Manage — scaled to our size.
Pre-launch check-in. AI work involving voter data, donor lists, or public-facing political creative gets a brief check-in with your department head before it ships — five minutes on Slack, 15 if the situation is unusual. Escalate to the VP of Innovation if needed.
14. If something goes wrong
Default response: coaching, not discipline. Mistakes will happen — that's part of getting comfortable with new tools. We'd rather you try, learn, and get it right next time than avoid AI altogether. Tell us what happened; we'll figure out the fix.
If a violation involves protected data (client PII, voter files, donor records, financial info), the response is more structured: a meeting with the Innovation team and — depending on the situation — a temporary pause on AI access while we reset. The goal is still to learn, not to punish.
Discipline is reserved for willful or repeated violations after coaching. No one gets in trouble for a good-faith mistake.
How to report. If client data or confidential information ended up somewhere it shouldn't, or anything else worries you:
- Primary: Slack DM Brian Athey + Scott Farmer.
- Backup: email both.
Mistakes caught early are usually fixable. Mistakes covered up rarely are.
15. Policy review
This policy gets a fresh look at least once a year, sooner if regulations or our tooling shift materially. Employee feedback drives most of those updates — if something here isn't working, tell Brian.
16. Questions
Slack or email Brian Athey, VP of Innovation: brian@pushdigital.com.
