
For leadership & account leads

Compliance Addendum

The longer detail on regulatory exposure, sensitive data handling, and review workflows for public-facing political work.

To: PDG All Staff
From: Brian Athey
Date: May 20, 2026
Version 1.0 — Effective May 20, 2026


Why this exists

There is no single federal law that governs AI. What there is — and what PDG has to comply with — is a patchwork: existing laws applied to AI in new ways, state-level AI legislation (mostly around political content), platform policies, and industry standards. This addendum captures the rules that matter for our work, organized by who's enforcing what.

If you're shipping anything public-facing, especially in political work, read the relevant section before launch. When in doubt, loop in Brian or outside counsel.

1. Federal regulators

Federal Trade Commission (FTC) — consumer protection.

The FTC is enforcing against companies that overstate AI capabilities ("AI washing"), make AI marketing claims they can't substantiate, or deploy AI systems that produce biased outcomes. PDG implication: don't oversell AI in our marketing or in client deliverables. If we say a tool does something, we should be able to back it up. Penalties can include monetary fines and required changes to business practices.

Federal Election Commission (FEC) — political content.

The FEC has affirmed that its existing rules against fraudulent misrepresentation apply to AI-generated political content, regardless of the technology used. Translation: a misleading deepfake or AI-generated false endorsement can trigger FEC enforcement just like any other misrepresentation. This is a hard line — see the synthetic media section of the AUP.

Federal Communications Commission (FCC) — telephony.

The FCC has ruled that AI-generated voices in robocalls fall under the Telephone Consumer Protection Act (TCPA). That means prior express consent is required for AI-generated robocalls and texts to mobile phones. The FCC has also proposed (and in some proceedings adopted) rules requiring AI-disclosure language in political ads on broadcast media. Status here moves — confirm the current state with counsel before any AI-voice or AI-message campaign.

2. State law

Roughly 16 to 18 states have passed AI-specific legislation. The bulk of it covers two areas:

  • Disclosure requirements for political content created or substantially modified by AI
  • Deepfake prohibitions and disclosures, especially for election-related synthetic media

Requirements vary considerably state to state. Disclosure language, timing, format, and scope are not consistent. We do not improvise here. For any state-targeted political creative involving AI-generated content, defer to outside counsel for the current statute and the exact disclosure language required.

Brian maintains a working list of the relevant states and their current laws as part of the quarterly review of this addendum. If you need it for a specific campaign, ask.

3. Platform policies

Google Ads / YouTube. Google requires advertisers to prominently disclose when election ads contain synthetic content depicting real people or events. Disclosures must be placed in a location that's "noticeable" — vague language, real enforcement.

Meta (Facebook, Instagram). Meta requires disclosure when political ads contain photorealistic images, videos, or realistic audio that's been digitally created or altered. Specific disclosure formats apply for ads in their Ad Library.

Other platforms (TikTok, X, Snap, etc.). Policies vary; check the current state of each platform's rules before launching political creative there. They change.

Failure to comply with platform policy results in ad rejection, account suspension, or both — not just for PDG but for the client whose campaign runs through us. That's a cost we don't want to take on.

4. Channel-specific rules

Email — CAN-SPAM.

The CAN-SPAM Act's requirements on accurate sender headers, non-deceptive subject lines, identifying the message as an advertisement, honoring opt-outs, and including a physical postal address apply to AI-assisted commercial email exactly as they do to human-written email. AI doesn't change the rule.

SMS and voice — TCPA.

The TCPA's prior-express-consent requirements apply to any AI-generated or AI-assisted SMS or voice outreach to mobile phones. AI-generated voices specifically have been ruled in scope (see FCC above). Don't assume an existing consent record covers AI-generated calls — confirm with counsel.

Web content and chatbots — CCPA / CPRA and state disclosure.

The California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) impose data-handling and transparency obligations on any web content or chatbot that collects personal information from California residents, AI-generated or not. California and Utah also have disclosure rules specific to AI-generated content reaching their residents. If we're shipping a client-facing chatbot or AI-generated web content to California or Utah audiences, run it past counsel.

5. IP, confidentiality, and indemnification

Vendor terms on input training. Tools that train on customer inputs are off-limits for any proprietary or client material. We confirm this in writing before approval — that's why Claude Enterprise is the default. Anthropic's commercial terms guarantee no training on inputs.

Vendor IP indemnification. When a vendor offers indemnification against third-party IP infringement claims on AI outputs, that's a real risk reducer. Anthropic provides this for Claude. ChatGPT Team provides limited indemnification. Most other tools provide none. This factors into approval decisions.

Vendor security standards. When a vendor claims SOC 2 or ISO/IEC 42001 compliance, we don't take the badge at face value. Approval requires reviewing the actual report, confirming its scope, and verifying that the controls match how PDG would use the tool. Outside counsel or a fractional security resource can do this on a per-tool basis.

6. Confidentiality risk in public AI tools

Putting client data into a public AI tool — free ChatGPT, free Gemini, a personal Claude account — risks two things at once:

  1. The data may be retained and used to train future models. Once it's in, you may not be able to get it out.
  2. Some vendors claim broad rights over inputs and outputs, which can put proprietary campaign material and client assets at risk of unintended sharing or reuse.

This is the hard rule from the AUP: client work goes through Claude Enterprise. The risk math is settled.

7. Review cadence

This addendum is reviewed every quarter by Brian (VP of Innovation), in coordination with Scott (COO) and outside legal counsel where needed. Updates trigger:

  • A version bump on this document
  • A note in #ai-announcements
  • An update to the AUP if the policy itself needs to change

If a regulatory or platform change between reviews materially affects how PDG should operate, the change is made off-cycle and announced immediately — not held for the next quarterly review.

8. When to escalate

Loop in Brian and outside counsel before launch when any of these are true:

  • The deliverable contains AI-generated voice, face, or likeness of a real person
  • The deliverable runs in a state with a specific AI-disclosure law and we haven't run that exact campaign template before
  • The deliverable runs on broadcast media (TV or radio) and contains AI-generated content
  • The campaign uses AI-generated SMS or voice outreach to mobile phones
  • The use case is novel and there's no clear precedent in PDG's prior work

Five extra minutes of review before launch beats a takedown, an FCC complaint, or a client losing trust.
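
For account leads who gate launches through campaign tooling, these five triggers reduce to a single boolean check. A minimal sketch in Python, with hypothetical field names that aren't part of any PDG system; the rules are exactly the five above:

  from dataclasses import dataclass

  @dataclass
  class Deliverable:
      # Hypothetical metadata fields; adapt to however PDG actually tracks campaigns.
      ai_likeness_of_real_person: bool   # AI-generated voice, face, or likeness
      state_ai_disclosure_law: bool      # runs in a state with an AI-disclosure law
      template_previously_cleared: bool  # we've run this exact campaign template before
      broadcast_media: bool              # runs on TV or radio
      contains_ai_content: bool
      ai_sms_or_voice_outreach: bool     # AI-generated SMS or voice to mobile phones
      novel_use_case: bool               # no clear precedent in PDG's prior work

  def needs_escalation(d: Deliverable) -> bool:
      """True if the deliverable goes to Brian and outside counsel before launch."""
      return (
          d.ai_likeness_of_real_person
          or (d.state_ai_disclosure_law and not d.template_previously_cleared)
          or (d.broadcast_media and d.contains_ai_content)
          or d.ai_sms_or_voice_outreach
          or d.novel_use_case
      )

Note the shape: every condition is an OR. A single yes anywhere on the list means escalate.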