Lights, Camera, Algorithm: How AI is Changing the Face of Film

A friend of mine recently said that The Marvels felt like it had been written by an AI. Although I personally liked the movie, it was obvious that my friend wasn’t being complimentary. The implication was that entry number five thousand and two to the MCU was formulaic, uninspiring, and just plain dull.

Yet the integration of artificial intelligence into Hollywood’s film-making process persists. It is worth asking why, in order to highlight just how ideologically flawed this endeavour may be.

The recent SAG-AFTRA strikes highlighted the obvious – moviemaking is about the money. The Guild’s action, taken in response to an absence of clarity over whether AI would be used to replace writers and performers entirely, brought global attention to the industry’s willingness to consistently prioritise profit over people and quality. Throughout the strikes, Disney CEO Bob Iger accused those on the picket lines of “naivety” regarding the financial necessity of adapting to AI. The message was clear: incorporating AI in film means more money for studios – so get used to it.

Interestingly, Tinseltown’s flirtation with artificial intelligence was going on long before Bryan Cranston et al painted their pickets. Pei-Sze Chow observes that as far back as January 2020, Warner Brothers partnered with Cinelytic to adopt its AI-driven project management workflow system. At its core, the purpose of Cinelytic is threefold – to predict how well a film will do at the box office, to inform distribution strategies, and to provide real-time predictive forecasting of revenue. In other words, it hopes to maximise studio profit by eliminating the pesky unreliability of human “gut feeling” regarding a film’s viability or quality. Instead, it lets the computer do the talking.

Warner Brothers is a business and, as such, cannot rationally be criticised for attempting to make a profit. However, moviegoers could be forgiven for feeling a bit iffy about a theatrical landscape where movies are only made when an algorithm predicts that they will do well. Although Chow notes that Cinelytic is keen to stress that the platform is only meant to be used at the greenlighting stage – i.e. the stage where producers already have a firm idea of the type of projects they want to initiate on the basis of prior human input and analysis – it would be naive to think that the algorithm will not have the final say.

For example, albeit in a different sphere, when analysing the use of AI to recommend the length of criminal sentences in America, Andrew Lee Park observes that judges will often pick the sentence the AI recommends, because humans suffer from automation bias. In his words, “humans subconsciously prefer to delegate difficult tasks to machines, which we view as powerful agents with superior analysis and capability. The more difficult the task, and the less time there is to do it, the more powerful it becomes.” Given that decisions about which projects to greenlight in Hollywood usually involve committing hundreds of millions of dollars, I would argue that a similar phenomenon would unfold in the movie-making world as in the judicial one. Humans, when faced with a tough decision, will attempt to shift the burden of responsibility to a seemingly objective force, letting it make the decision for them.

The Warner Brothers executive may feel that, when they go with the machine, they’re on to a winner. I think this is naive. To see why, it is important to think about how AI works. Humans assemble large datasets; the AI then analyses those datasets and makes decisions on the basis of them. In the Hollywood context, the idea is that the technology will identify what made past movies successful, and ensure that current projects meet those criteria.

However, this is not how cinema works. Take, for example, DC’s summer offering, The Flash. That film, greenlit by Warner Brothers, was lambasted by critics and flopped at the box office. A key point of contention was that it needlessly resurrected largely forgotten characters, pigeonholing them into the plotline in a way that felt cheap and unearned. Yet, following the success of Marvel’s Spider-Man: No Way Home – which brought back former Spider-Men Tobey Maguire and Andrew Garfield alongside some highlights from Spider-Man’s rogues’ gallery of years gone by on its way to becoming one of the most successful films of all time – the studio would be forgiven for thinking that nostalgia sells. On that logic, if “nostalgia” were fed into Cinelytic’s dataset, the system would have predicted that The Flash would succeed on the basis of Spider-Man’s success, because the same characteristic was present in both films. However, one worked and one didn’t. This is because film is about more than binary characteristics, making algorithmic prediction of its success inherently unreliable. Thus, execs may begin to question – is letting the algorithm do the talking really going to make us more money?
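To make that point concrete, here is a minimal, purely illustrative sketch – not Cinelytic’s actual system, whose inner workings are not public – showing how a predictor that only sees a single binary “nostalgia” characteristic is forced to give two films sharing that characteristic the same forecast:

```python
# Illustrative toy example only: a "predictor" that knows nothing beyond
# one binary feature. It copies the outcome of any past film whose
# features match exactly.

past_films = {
    # film: (features, was_a_hit)
    "Spider-Man: No Way Home": ({"nostalgia": 1}, True),
}

def predict_hit(features, history):
    """Return the outcome of a past film with identical features,
    which is all a model limited to these features can do."""
    for past_features, was_hit in history.values():
        if past_features == features:
            return was_hit
    return False

# The Flash shares the only feature the model knows about...
print(predict_hit({"nostalgia": 1}, past_films))  # -> True
# ...so it is predicted to be a hit, even though it flopped in reality.
```

Real box-office models use far more features than this, but the underlying limitation is the same: whatever the model cannot measure, it cannot weigh.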

The answer to that question is, intuitively, no – partly due to AI’s inability to deviate from the pattern. As explained, programs such as Cinelytic rely on predetermined datasets in order to reach their conclusions. In this context, those datasets are likely to be made up of characteristics of films that have been successful in the past. However, films that will be successful today have very little in common with those that would have been successful five or ten years ago. The blockbusters which dominated the industry in times gone by failed to give an adequate voice to ethnic minorities and women, and were almost exclusively focused on alpha-male leads.

This is particularly true for a studio like Warner Brothers, whose most successful films include male-led projects such as Batman. An algorithm trained on those films would therefore likely fail to answer contemporary audiences’ demands for more diversity in film and greater representation of minorities. If projects were greenlit on the basis of what was successful in 2010, all films would look like they were made in 2010. In adopting technology meant to facilitate a move into the future, studios could actually end up stuck in the past.

Cinelytic aside, the use of AI in other aspects of film-making has drawn derision that casts further doubt on the technology’s ability to maximise a studio’s profit. Returning to The Flash, a major point of contention in that film was the use of AI to resurrect Adam West’s Batman and Christopher Reeve’s Superman. Audiences reacted negatively to the move. There was a similar wave of public backlash when an AI simulation of the late Anthony Bourdain’s voice was used in a documentary about his life, Roadrunner: A Film About Anthony Bourdain.

The response is rooted in morality. There is something intuitively vile about seeing an actor on screen, or hearing their voice, when you know that it would have been impossible for them to consent to their appearance. Although these examples were relatively restrained in nature, using AI to resurrect the dead could become even more problematic where the second coming of a character is portrayed in a degrading fashion. There is nothing to prevent, say, Bourdain’s AI counterpart from saying something defamatory, or Christopher Reeve’s AI recreation from being reimagined as a problematic villain for the sake of shock value.

These moral considerations undoubtedly weigh on audiences’ minds, leading them to associate the use of AI with nefarious practice and potentially motivating them to boycott a studio’s output as a result. Ethical considerations also come into play when AI is used to support, or eventually replace, scriptwriters. While the benefits of ChatGPT in helping writers suffering from writer’s block to “get the ball rolling” have been well documented, there seems to be a pervasive fear in the industry that the technology will eventually replace writers altogether.

The Writers Guild’s answer to this has been to propose that artificial intelligence programs such as ChatGPT be used to write scripts in partnership with, rather than in replacement of, human writers, so long as this does not affect writers’ credits or residuals. As reported by Variety’s Gene Maddaus, the proposal would allow a writer to use ChatGPT to help them write a script without having to share writing credit or divide residuals. The effectiveness of this proposal is limited. As the technology develops, it will be able to write a script on its own, and in such a scenario studios may abandon writers entirely. What may stop them from doing so, however, is the need to keep the public onside. Firing thousands of writers and replacing them with robots may lead to public disillusionment and, again, boycotts at the box office.

One way studios could get around this is by using ChatGPT to speed up the writing process while attributing the output of the algorithm to particular writers for the sake of public appeasement. This type of experiment played out in BoJack Horseman. In one episode, the titular character is nominated for an Academy Award despite the fact that all of his scenes in the film have been replaced by an AI rendering of himself. Interestingly, using a human (or in this case, a horse) proxy to present the output of a robot didn’t work in the episode – BoJack finds it difficult to live with his lie – and would be unlikely to work in reality. It is inevitable that what the studio was actually doing would come to light, leading to substantial public backlash that would, again, hit the studio’s bottom line. In this sense, the introduction of AI to scriptwriting may create more problems commercially than it solves.

Those commercial problems are exacerbated when we consider that the use of AI is expensive in itself. Unfortunately for Iger and Co, those costs are going up. Hao has observed that the carbon cost of training an AI model to academic-publication quality is equivalent to the carbon consumption of approximately two average American lifetimes, or seven average global lifetimes. Given that films are consumed by far more people than academic content is, the costs associated with training AI models to that level of quality may be even higher, as studios are bound to be anxious at the outset, demanding the highest-quality product.

It is also important to note that, globally, broad-stroke AI regulation is being rolled out at pace. In the past year alone, we have had President Biden’s Executive Order, the upcoming EU AI Act, and the Bletchley Declaration. Concurrently, climate change is worsening rapidly. It is therefore quite possible that new legal regimes will begin issuing heavy fines for activities with a negative ecological impact. Wide-scale utilisation of AI is certainly one of those activities. Thus, its implementation in film may become very expensive indeed, which will naturally reduce its impact when it comes to increasing a studio’s profit.

Much of what I’ve written here is speculative, and I am far from a financial expert. However, as an avid moviegoer, I cannot help but push back against the direction Hollywood appears to be taking. I believe “pushing back” will be most effective when the language of Tinseltown, i.e. money, is spoken. Although technological development is broadly a positive thing, the reduction of cinema to an algorithm is not. The idea that ChatGPT can replace a screenwriter, or that a dead actor can be brought back to life by nefarious studio executives in the name of profit, is disheartening. Nevertheless, I believe we can draw some comfort from the fact that, upon closer inspection, AI may not be the financial miracle it originally appeared to be.
