AI and Political Email — Ethics, Practice and Labor
I really like running political email programs. The work combines writing, math, and organizing, and it gives you near-immediate insight into how a message is landing with your audience. That said, most email jobs in political communication wouldn't be a fit for me – they're very churn-and-burn and disconnected from organizing.
As AI tools have gotten better at producing natural-language communication, our community has debated how they should fit into email production and related applications for text, ad copy, and other forms of digital communication. AI already has a role, but I wanted to share a few big-picture thoughts, because we're talking about more than a tool. We're talking about the ethics of using non-human communication, the practicalities of putting these tools to work, and the potentially significant labor implications for our field.
The landscape is already pretty bleak. I don't think our email programs are collectively good for democracy right now. They are good at raising money, but they're much more rarely used to inform, inspire, connect, or help constituents deliberate about policy. Of course, that isn't a problem with email itself; it's a problem with the limited agency individual voters and party supporters have in our current system. Email alone can't fix that: we need leaders who are willing to give constituents and supporters more meaningful things to do than smash the donate button and maybe knock a few doors. AI tools probably won't make what lands in people's inboxes much better or worse, either. And thankfully, they don't have the power to make Democratic or public-interest-group email communication as grifty and violently stupid as the median Republican email.
Audiences deserve to know what we're sending them. There's a big distinction in my mind between using AI to code emails and build universes and using AI to create outgoing content for public consumption. In the context of coding and other technical work, AI assistance is a tool, and one that's been prevalent in the field for a while. For outgoing communication, however, AI undermines a core, largely unexamined premise of political communication: that we are always dealing with humans talking to other humans. At a minimum, campaigns and organizations should disclose when AI is used to generate content. At a maximum, they can proudly say real humans wrote whatever their audience is reading.
AI is a labor issue, not just a tech issue. A lot of the promises about AI are promises of efficiency. Efficiency can be a good thing, but email programs are already pretty efficient at generating lots of copy and lots of tests and data to optimize messages. Practically speaking, the difference between the best and second-best subject lines, messages, and calls to action is usually pretty small – operatives' know-how about what works and what doesn't usually keeps things in a decent range, though audience tastes also require constant variation. Additionally, there are hard ceilings on how many subject lines and messages we can test day to day and week to week: political audience sizes often aren't large enough to give us statistically significant differences on deadline among more than a few messages, especially when we're optimizing for the tiny fraction of audience members who take online actions, particularly donations.
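To make that ceiling concrete, here's a back-of-the-envelope sketch in Python. It's my own illustration, not something from any campaign's toolkit: the donation rates, the 0.05 significance level, the 80% power target, and the `recipients_per_variant` helper are all assumptions, plugged into the standard normal-approximation formula for comparing two proportions.

```python
# Back-of-the-envelope sketch (hypothetical rates, not real campaign data)
# of why small lists cap how many variants you can test at once.
from math import ceil, sqrt

def recipients_per_variant(p_control: float, p_variant: float,
                           z_alpha: float = 1.96,       # two-sided test at alpha = 0.05
                           z_power: float = 0.84) -> int:  # 80% power
    """Recipients needed in EACH arm to reliably detect p_control -> p_variant,
    using the standard two-proportion sample-size formula."""
    p_pool = (p_control + p_variant) / 2
    numerator = (z_alpha * sqrt(2 * p_pool * (1 - p_pool))
                 + z_power * sqrt(p_control * (1 - p_control)
                                  + p_variant * (1 - p_variant))) ** 2
    return ceil(numerator / (p_variant - p_control) ** 2)

# Detecting a lift in donation rate from 0.20% to 0.25% -- a meaningful
# margin for a fundraising email -- takes ~141,000 recipients per arm:
print(recipients_per_variant(0.0020, 0.0025))  # 140804
```

Under those assumptions, a single head-to-head donation test consumes over 280,000 addresses, and a five-way subject-line bake-off would need roughly 700,000, more than most political lists can supply on a one-day send. Opens and clicks are far more common than donations, so they're cheaper to test, which is part of why day-to-day optimization gravitates toward them.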
To give one example of what AI might do: it could let one junior-level staffer produce as many coherent subject-line and body drafts as two junior-level staffers did previously. Congrats, you just "efficiently" eliminated a staff line. But does that savings go back into labor? Into digital? Into organizing? Or does it become a larger ad buy? And is it really efficient if the AI-generated drafts have to be more scrupulously fact-checked and vetted because of so-called "hallucination" problems, an artful term for "whoops, looks like the computer program wrote a bunch of wildly untrue stuff again"?
There's an opportunity for unions in our field, including at party committees and the Campaign Workers Guild, to spell this out in labor contracts and use it as a negotiating point with managers. Can campaigns hire more writers or ban AI for content creation? Should they support training in working with AI tools for staff? Should they pay junior staffers who are capable of co-writing with AI prompts more than they used to because they're just so darn efficient with those new tools?
Finally, we should recognize that AI applications have a big class bias. It's the junior staffers at digital firms who might be made redundant by AI, not the senior staffers who write speeches for elected officials. More generally, political discourse often tacitly assumes that there's a disposable class of communication it's okay for a robot to write (emails, ad copy, texts, etc.) and a different class the robots shouldn't touch, e.g., a State of the Union speech. But AI tools can do a pretty decent job mimicking speeches, too.
Don't forget the people. As we discuss these issues, I hope we can keep the people doing political work and the people receiving campaign communication front-of-mind. Technology is cool. It's fun. Sometimes it's scary. Sometimes it's a big old hype train. But we're talking about jobs and how we talk to voters — let's always lead with progressive values.