June 18, 2023

Hollywood Writers' Stance on AI Is Short-Sighted, Reveals Disingenuousness and a Fear of Technology

When considering the Hollywood writers' strike and what the WGA is asking for in its negotiations with the Alliance of Motion Picture and Television Producers (AMPTP), one issue resonates above all others: the writers' motivations and questionable stance regarding artificial intelligence and the role it could come to play in TV and film writing. Artificial intelligence (AI) and language models have arguably been the single hottest topic in the world of technology since ChatGPT was introduced by research laboratory OpenAI last November. ChatGPT is far more advanced, sophisticated, and human-like than any AI chatbot deployed before it. Wired magazine, for its part, can't seem to cover the topic enough; for the past two or three months it has seemed like every other article the tech pub spits out is about ChatGPT. The Hollywood writers, for their part, are concerned that AI is becoming so sophisticated and powerful that it could theoretically do part (or perhaps, one day soon, all) of a guild writer's job. The WGA is thus calling for studios to "regulate" or otherwise severely limit the use of AI in the writing process, and to refrain from asking writers to edit or rewrite material written by AI.

Part of the rationale behind the writers' hard stance is their belief that AI-generated work could never approach the prowess and richness a human writer brings. Currently, that seems largely true. As of this writing, artificial intelligence can compose everything from haikus to term papers to comedy sketches and more, but the general belief is that the results lack "spark," "humanity," and "flair" and, though serviceable, are not remarkable enough to move forward without considerable rewriting... At least, that's the case as of now.

One of the most valuable traits of AI, and ChatGPT in particular, is that it is constantly improving, as its models are trained on innumerable sources to hone its responses and better fulfill whatever conversational, research, authorship, or other task it's been assigned. ChatGPT is also great at mimicking. You can, for instance, ask it to write song lyrics in a distinctive style -- say, that of Prince... or The Ramones... or Morrissey -- and it will do so. Similarly, when it comes to TV and film writing, ChatGPT could be asked to imitate an Aaron Sorkin or a young Woody Allen. The quality of the results, as of now, may be questionable, but imagine how much better they'll be six months, a year, two years from now. Indeed, ChatGPT has already been estimated to have a verbal IQ of 155 and has passed both medical licensing exams and the bar exam, so just think what it could be accomplishing in the near future.

This is where the WGA has shown the most concern. Just as it negotiated during the 2007-08 writers' strike to install guardrails protecting writers' value against industry changes brought about by the internet and streaming, the writers are now looking to negotiate to keep their value strong as AI makes its way into the writing business. On this specific issue, however, despite its stated intentions, there's a good deal of false nobility and short-sightedness operating within the WGA.

The guild justifies the strike in general (and the demand for negotiation regarding AI specifically) by claiming that it's seeking to secure fair consideration and compensation for current industry writers while also ensuring opportunity for the next generation... It's a half-truth. In reality, what the writers are really worried about is their own jobs. The pushback against AI is not for the benefit of the next generation of writers but for the financial security of the current one. They're worried about a day when a production exec fires up a laptop, starts a conversation with ChatGPT, throws out a premise, a few character descriptions, and some plot points, and then lets AI go to work cranking out a complete pilot. In the WGA's eyes, that's a worst-case scenario. There's also one where AI does the heavy lifting (developing the plot, the characters, and perhaps an initial treatment) and a human writer is then brought in to develop, punch up, and polish the result into something more usable. Revisions like that earn fees far lower than commissioned first drafts or even spec scripts. The writers are worried that if AI proves up to it, we could be looking at a world where, for instance, an entire season of an hour-long drama could be outlined in an afternoon, at a cost of zero dollars. That possibility scares the pants off every single writer in the WGA, as evidenced by the words of C. Robert Cargill (writer of Doctor Strange, Sinister, and The Black Phone), who tweeted weeks ago, "The immediate fear of AI isn't that us writers will have our work replaced by artificially generated content. It's that we will be underpaid to rewrite that trash into something we could have done better from the start."

There are two things wrong with this way of thinking. First, it presupposes that anything AI produces is in fact vastly inferior (i.e., "trash"). How does Cargill (or anyone else) know that AI won't soon be entirely capable of cranking out a perfectly acceptable first draft?... It's emblematic of the type of arrogance that exists among paid, high-level Hollywood writers. Is it crazy to think that an intelligence capable of scoring 710/800 on the SAT verbal section has enough mastery of the English language to handle a creative writing assignment?... The rub is how much imagination, creativity, and personal style would be applied, but that is true of work created by humans as well. After all, it's already been posited that when it comes to storytelling there are only four types of conflict and seven basic plots; everything else is just variation on a formula. Cargill makes it sound like he and his guild brethren alone are capable of gold-standard work. On the contrary, the WGA is full of mediocre and pedestrian writers, and the creative output of these and every other writer is, and always will be, judged with a good deal of subjectivity. One decision-maker may deem a script a funny, imaginative take on a standard theme, while the next dismisses it as too derivative and only mildly amusing. It's the same with AI as it is with human writers: whether it's a line of dialogue, a premise, a story arc, or a full screenplay, the final result is always subject to criticism and to being dismissed as "trash."

Also consider that not every Hollywood writer is working on Succession, Breaking Bad, Everything Everywhere All at Once, or other top-notch entertainment. Less ambitious productions need writers too: children's shows like Paw Patrol and Blue's Clues have writers, and so does Syfy channel schlock like the Sharktopus movies and dopey theatrical features like Cocaine Bear. Game and quiz shows like Jeopardy! and The Wall employ writers to create the contestant questions, tons of entertainment and other segmented shows have writers penning "wraparounds," and RPG video game makers need storylines for their games' campaigns. Is the WGA so haughty it thinks AI can't handle writing of this far less demanding ilk?...

Also at the root of the WGA's objection to AI is a fear of competition. When the guild loudly protests against AI and declares it's protecting the futures of up-and-coming writers, it's just posturing and PR. The WGA is in fact a select (fewer than 13,000 members), elitist group that has always sought to keep its numbers small, if for no other reason than to ensure more opportunity (i.e., writing gigs) for its own. Think about it: if the WGA is so concerned about paving the way and ensuring opportunity for new writers, why does it direct its members never to accept or read unsolicited material from aspiring writers who might be seeking advice or mentorship? (Hint: it's not solely to guard against potential claims of stolen IP...) Why does it restrict membership to writers who have already been paid or sold something?... Why doesn't it offer meaningful mentorship or internship programs to unknowns?... It's because the WGA is designed to keep wannabes out, not help them break in. Its members don't want their club expanded; they want it exclusive, with power consolidated and jobs and paychecks reserved for them, the few. Though individual members may lean left politically, when it comes to the nuts and bolts of how the guild operates, it runs on the worst type of conservative principle: protect what you've worked for, and offer no assistance to outsiders. Knowing the guild's reluctance to include, aid, or even compete with other human writers makes it even easier to see why it is so threatened by machine intelligence. This type of defensive, resistant stance arises every time a disruptive technology comes along. The WGA are candlemakers cursing Edison's light bulb. Carriage makers railing against Henry Ford's Model T.

Other technologies feared as job- or business-killers when first introduced include radio, computers, robotics, and television (which many claimed would ruin the movie industry). The VCR was attacked as a copyright-infringing invention, and Sony (developer of the Betamax video tape recorder) was sued by the movie industry, resulting in the landmark Sony Corp. of America v. Universal City Studios, Inc. decision that ultimately allowed home video recording to proceed and proliferate. Ironically, the movie industry benefited: within a few years of the decision, home video sales were roughly on par with box office revenue, and by 1995 more than half of Hollywood's American revenue came from home video, compared with less than a quarter from movie theaters.

In nearly every instance, "offending" technologies simply cannot be held back, and doom-predictors are forced to watch as the world adapts. During the 1980s, robotics did (as many feared) replace a large number of factory workers, but this was offset to some degree by the new jobs created for people to build, program, and maintain robotic systems and equipment. The larger point is that technological advancement always moves forward. Today, nearly every major factory and manufacturing facility in the world uses robotics, simply because it's a faster, more efficient, and more affordable way to do things.

Here's perhaps the best example of fear of technology and its futility. When the MP3 was introduced back in the 1990s (and, soon after, peer-to-peer file sharing via the internet), what was the response of the music industry?... Panic, resistance, and litigation. Music companies took the purveyors of peer-to-peer technology to court. At the same time, they bullied ISPs (via injunctions, subpoenas, and other court orders) into turning over the names of people who had downloaded copyrighted music, cherry-picked handfuls of them, and sued. In some instances, it was the unknowing parents of offending teenagers who were slapped with lawsuits. Meanwhile, heavy metal group Metallica sued and then loudly raged against its own fans, labeling them criminals for "stealing music." Facing months, if not years, of costly litigation, Napster, which at the time stood at the forefront of peer-to-peer file sharing, was forced to shutter its operations.


But once again, attempts to suppress technology through lawsuits and other means failed. When Napster shut down, dozens of similar peer-to-peer companies and apps sprang up to take its place, including Grokster, BearShare, LimeWire, and Kazaa. At the same time, Demonoid, The Pirate Bay, and other torrent sites saw heavy increases in users. Most of these operations were similarly targeted, sued, and shut down -- but not before ducking and dodging the music companies' cease-and-desist orders long enough to contribute to the massive proliferation of music sharing worldwide. Looking back, a whirlwind of wrath, acrimony, and confusion was born of the simple fact that music companies believed a new technology lessened their power and threatened their livelihood. And it was true: thanks to peer-to-peer, music lovers suddenly had the power to listen before they bought, or to circumvent buying altogether and download a song (or an entire album) in a few seconds. Instead of sharing an album by burning CDs for their friends, they could instantly send a copy of that album to literally anyone in the world. That was the power the new technology provided. It now existed -- and there was no going back.


Now, what the Hollywood writers need to understand is that the impasse between the music industry and peer-to-peer tech didn't begin to resolve until music companies focused less on protesting, resisting, and hiring lawyers, and instead got down to the hard work of establishing new business models -- ones that found ways to exist alongside peer-to-peer and offered consumers some of the same benefits the technology provided, while still protecting artists, their copyrights, and their intellectual property. In other words, things were resolved only when the establishment, instead of working to limit the use of the technology, acknowledged it and built new strategies to remain relevant, necessary, and preferable in the face of it. The result was iTunes, which gave music companies a distribution service consumers could use to legally purchase music, so artists and labels could be properly compensated. Did this solve the issue entirely?... Of course not. "Pirating" music remains a problem to this day -- but iTunes (which evolved into Apple Music) and the rapid development of music streaming services like Spotify and Pandora only came about with the acceptance, support, and investment of the music industry.



The same thing needs to happen with AI and the WGA. Instead of asking for promises that have zero chance of being kept -- like AI-produced work never being used as source material (how could anyone ever be sure whether something was written by AI or not?) -- the guild should accept AI and look to leverage it in ways that enhance, but in no way replace, the human writer. Is anyone at this point entirely sure what those ways are?... No. AI applied to writing and the creative world is still much too new. But once fear is put aside, things tend to get figured out.


The Hollywood writers have to understand that AI is a lot like CliffsNotes. When it comes to literature, CliffsNotes are in no way the equal of reading the book itself, but, as millions of high school and college students can attest, they are still extremely helpful as a guide or aid. Similarly, AI is not the equal of a human writer -- but it can research, brainstorm, spark ideas, and augment. In this way, AI can already perform many of the tasks of a writer's assistant. A good example is the way I myself used AI recently. I have a screenplay that includes a character named Marcus Washington, who longs to be a hip-hop artist. He's a supporting character, and his musical aspirations are really only relevant in one scene. Still, I needed a stage name for him. Instead of racking my brain and slowing down the work of creating the plot points, action, and dialogue that actually drive the story forward, I had ChatGPT spitball some rap names. The results ("MC Dubz," "Wash Money," etc.) were perfectly usable placeholders. So even though I chose at the time to press on to more important things, whenever I come back to settle on Marcus's stage name, the AI suggestions will already have gotten my brain going.
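For anyone curious what that kind of low-stakes brainstorming looks like when scripted rather than typed into the chat window, here's a minimal sketch using OpenAI's Python library as it existed in 2023. The prompt wording, model name, and API key below are illustrative placeholders of my own (not a record of what I actually typed), and newer versions of the library use a different call shape.

    # Minimal sketch: brainstorming placeholder stage names via the API.
    # Assumes the 2023-era "openai" Python library; newer SDK versions
    # use client.chat.completions.create() instead.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

    prompt = (
        "Brainstorm ten hip-hop stage names for Marcus Washington, "
        "a supporting character in a screenplay who dreams of being a rapper. "
        "Keep them short and playful; these are just placeholders."
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,  # a higher temperature yields more varied names
    )

    print(response.choices[0].message["content"])

A few seconds of that and you have a list of throwaway names to react to, without ever leaving the flow of the scene you're actually writing.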

AI, though, has far more writing capability than simply tossing out prospective character names. It can create storylines, inciting incidents, dialogue, parentheticals, and more -- all within the parameters set forth by a detailed, well-thought-out, well-articulated prompt. And this may be where the Hollywood writers' biggest fear lies: not that AI will be used to perform research or (as in my example above) to brainstorm things of minimal consequence, but that in the very near future it will be executing at such a level that its work is indistinguishable from that of a human writer. Yet just as teachers and professors can't stop students from taking shortcuts with CliffsNotes, there's essentially nothing that can be done to keep AI out of the writing process. As with CliffsNotes and peer-to-peer, the light bulb, the Model T, and practically every other technological innovation, there's simply too much convenience, time savings, and cost savings to be had. Putting limits on the use of AI would be like asking someone to research a paper without using the internet... to catch a bus instead of driving... or to stitch by hand instead of using a sewing machine.

Yet these kinds of limitations are exactly what the WGA is insisting upon. Its worry about artificial intelligence in the profession is so great, in fact, that it is demanding that members' work never be used to train AI. In other words, if the WGA gets its way, OpenAI and every other AI developer would be barred from using the writers' screenplays, teleplays, transcripts, and the like as training material for improving their models' performance.

The idea of the WGA asking for this is laughable on so many levels. First, because there's no practical way to prevent it from happening. Let's face it: the writers' work is out there and (though it remains copyrighted) available for public consumption to such a degree that it cannot realistically be withheld from AI -- or from any other person, machine, web crawler, etc. The final shooting script (or at the very least a transcription) of practically every popular film from the last several decades is available and relatively easy to find online. Many other screenplays have been officially published in book form. (Bugsy and Pulp Fiction are just two sitting on my bookshelf right now.) Plus, as of today, no one outside of OpenAI even knows exactly how ChatGPT is trained or all of the sources it draws from, which essentially eliminates any possibility of removing WGA-created content from the training mix.

But the other reason withholding WGA work from AI is ludicrous is that it simply defies the nature of human creativity, learning, and improvement. It would be like Maya Angelou insisting that other poets not be allowed to read or study her work. Or Paul McCartney attempting to prevent other musicians from studying the chord progressions of his Beatles songs. Or Rembrandt prohibiting other artists from viewing his paintings up close for fear they might replicate his brushstrokes. It sounds dumb right out of the gate. With little effort, anyone at all can find (and study) every example cited above -- Maya Angelou's words, Lennon/McCartney's chord progressions, Rembrandt's paintings -- via hundreds of websites, YouTube videos, the public library, and countless other sources. More importantly, creative artists would never attempt to bar a human being from accessing, analyzing, and studying what they've published or produced -- so clearly the WGA's objection to its output being used to train AI stems from the possibility that ChatGPT's ever-growing intelligence could emulate and rival the writers' work to the point where it becomes preferable to studios and production execs.

Fears abound about where AI is headed and how quickly it will advance. We've all seen The Matrix and Terminator movies that imagine dystopian futures where machines or computer systems grow so intelligent they become sentient and ultimately replace, enslave, or eradicate their human creators. It may be that this isn't just science fiction. Experts in the field, including OpenAI CEO Sam Altman, are already warning that AI could pose an existential threat to humanity. Leading AI expert and ethicist Professor Simon Goldstein even went so far as to say, "AI researchers don't understand the machines they've created very well, so there is a chance we will not be able to completely control their goals. If their goals conflict with our own and they are more intelligent than us, then it is possible that over time they will ultimately replace us as the dominant form of life on this planet."

The risk of this type of future (one straight out of science fiction) may be unavoidable. But human history repeatedly shows that technology always moves forward and progress can't be stopped. The much more likely scenario is that the sky is NOT falling, and that AI will have a far less disastrous impact on both human existence and the entertainment writing industry than many fear. If we embrace it and learn the right ways to leverage it, AI will ultimately do what technology does -- fulfill its fundamental purpose of making us and our society more efficient and more productive... The sooner the WGA realizes this, the better.