
Art Criticism


AI影像、合理性與情緒支配 AI Imagery, Plausibility & Emotional Manipulation
約翰百德 (John BATTEN)
at 1:02pm on 25th June 2024


(John Batten writes about AI after seeing a problematic post on Instagram using manipulated AI-imagery and emotive text.  約翰百德在Instagram上閱讀了一篇使用AI圖像和具有強烈情緒的文字帖子,隨後撰寫了有關人工智慧的文章。)
 
 
 
Image above: Screenshot taken from Instagram post by @jdcruel of 'Alternate Humans of Manila', AI-generated image, 14 June 2024


 
 

AI Imagery, Plausibility & Emotional Manipulation

by John Batten

 

 

The promise, or threat, that generative Artificial Intelligence (AI) will become an ever greater and seemingly inevitable part of our lives is constantly in the news. But the actual extent of AI adoption is difficult to foresee. The enthusiasm of a few years ago for self-driving cars, which then seemed about to appear on our roads, has waned. The realisation that extensive internet connectivity and infallible technology were necessary before cars could become autonomous has cooled the hot air promoting these vehicles. Self-driving cars have, at least for the moment, hit a brick wall!

 

Artistic integrity is certainly threatened by generative AI. Low-cost TV programmes, documentaries and films have almost entirely dispensed with original music written by (human) composers. Producers now invariably ‘employ’ AI-generated ‘music’, often a series of changing chord sequences produced with electronic sequencing equipment and instrumentation. Controversially, this music draws on internet sources to generate new sounds, including the previous work of composers, some of whose work is still under copyright protection. The generated sound of these TV programmes, however, is often highly generic and undistinguished; its ‘creativity’ is reminiscent of the soporific requirements of shopping-mall ambient sound. Recent industrial action by Hollywood film and television scriptwriters and other creative workers, demanding that producers and studios stop using generative AI for scripts and other creative tasks, has had some success in protecting their work, but the future of human dominance in the creative world appears fragile.

 

The spread of generative AI has seen greater acceptance of such technology: for example, chatbots answering customer queries on websites, such as those of banks, or an avatar that knows an employee’s work processes and can answer task-related questions in place of the actual employee at some work meetings. Equally, the descriptive language associated with human-helpful computer software is becoming looser. For years, designers have used CAD (computer-aided design) software, with architects abandoning hand-drawn plans for computer rendering. Likewise, language translation software, such as Google Translate, has been around for twenty years. Such technology has always been seen as a beneficial and non-threatening advance in creative design and communication. But generative AI harnesses and pools information on the internet so that the software can ‘learn’ and advance its own knowledge; ChatGPT is the most visible example. Google Translate was never previously referred to as AI, but it is now common to hear simple translation software, including Google Translate, called “AI”, e.g. “I got AI to translate this letter…” We should be wary of this increasingly broad-brush conflation of computer-aided work, which is human-directed, with the work done by generative AI, which increasingly excludes humans. There is a difference!

 

Let me expand this discussion into the visual arts. Firstly, an example to set the context:

 

Photography, using light, chemicals and manipulation in the darkroom, has always enabled an image to be changed or enhanced, and elements to be added or removed. Hong Kong’s M+ museum has a small collection of Hong Kong photography within a narrow range of genres – almost no photojournalism has been collected, but aesthetic and ‘art’ photography has. Used by M+ in some of its promotional campaigns, Approaching Shadow (1954) is a beautiful photograph in the M+ collection by the Hong Kong photographer and filmmaker Ho Fan. Many of his photographs taken in the 1950s and 1960s fall within the genre of salon photography, in which composition, aesthetics and darkroom technique are the foremost considerations for a finished photograph.

 

 

Ho Fan, Approaching Shadow (1954), Archival pigment print on Baryta photo silk paper, 60 x 40 cm, edition of 30. 

Courtesy of Blue Lotus Gallery, Hong Kong

 

In Approaching Shadow, Ho Fan has positioned a beautiful woman in a cheongsam (the usual female clothing of the era), leaning upright, side-on to the camera, against a protruding wall. A shadow, occupying nearly half the photograph and dramatically angled across another (camera-facing) wall, lands directly at the woman’s feet. Using modernist ideas of simple composition, it is a highly intentional photograph. The shadow creates focus and a quiet tension; it is the equivalent of a storm-threatening black cloud that gives drama to a seemingly quiet green-hilled landscape. The shadow, however, is not real but entirely created in the darkroom, a darkened area masked-in: a manipulated photograph. Photoshop can produce similar effects. Ho Fan’s intention was to create a beautiful, atmospheric photograph combining a languid-looking woman – leaning, alone, well-dressed, thoughtful – juxtaposed with a perfectly angled shadow. The composed image is balanced – perfect in salon photographic terms. Importantly, the finished photograph is also plausible: the photographer might even have known of a real wall where, at a certain time of day, a shadow bisects it. The viewer does not need to know, or be told, about the darkroom printing; what is seen is a beautiful photograph that is not far-fetched. Viewers can feel touched by and emotionally engaged with this scene because it is believable.

 

Fake photographs and videos are ubiquitous on the internet: passed off as real on social media, as click-bait for advertising and, insidiously, as real news. However, overtly fake and misleading imagery always needs a context and a story. Such imagery also relies on having some semblance of plausibility. But once plausibility and believability have been challenged, and photographs and videos are seen to be fakes with stories that are also fake, a viewer’s emotional engagement is immediately reduced. Knowledge of such deceptions evokes a dismissive “fake news” response.

 

I recently read a re-posting on an Instagram account (@photographychismisph) dedicated to photography and based in the Philippines. This IG account posts all sorts of information about photography and related issues – this re-posting (from the IG account @jdcruel) is very problematic. The re-post had the following text (in italics, added by me) and images:

 

Screenshot taken from Instagram re-post by @jdcruel of 'Alternate Humans of Manila', AI-generated images, 14 June 2024


𝗔𝗹𝘁𝗲𝗿𝗻𝗮𝘁𝗲 𝗛𝘂𝗺𝗮𝗻𝘀 𝗼𝗳 𝗠𝗮𝗻𝗶𝗹𝗮 Entry No. 1: "These are the faces of war's brutality, etched not on soldiers, but on women – survivors of a nightmarish system disguised as "comfort stations." These portraits, taken after liberation in war-torn Manila, capture their resilience in the aftermath of unimaginable hardship.

Forced into sexual slavery for the Japanese military during World War II, these women were anything but "comfortable." The euphemism "
𝙟𝙪𝙜𝙪𝙣 𝙞𝙖𝙣𝙛𝙪" masked the horrific reality they endured. Hailing from different parts of Manila, and other occupied provinces, they were imprisoned in Manila's "Military Club," one of twelve houses of horrors disguised as "relaxation stations" for Japanese soldiers.

These photographs are more than historical records; they are powerful testaments to the human spirit's unyielding strength. They serve as a stark reminder of war's darkest chapters, but also of the enduring power of the human will to survive and rebuild." --
𝗙𝗲𝗯𝗿𝘂𝗮𝗿𝘆 𝟴, 𝟭𝟵𝟰𝟱

𝗔𝗹𝘁𝗲𝗿𝗻𝗮𝘁𝗲 𝗛𝘂𝗺𝗮𝗻𝘀 𝗼𝗳 𝗠𝗮𝗻𝗶𝗹𝗮 (𝗔𝗛𝗢𝗠) is a personal experiment in storytelling where AI not only creates visuals but also helps fill the gaps in our collective memory to bring alternate narratives of Manila's past to life. This project explores the potential of AI-generated imagery as a tool for historical storytelling, especially for under or undocumented experiences.

At the time, I objected to these images because they were AI-generated: not real photographs of women, and not of victims of sexual violence. These (sad-looking) images could conceivably be of anyone, and their story could be anything sad or violent. These images depicted women who had no ‘souls’, so the emotions associated with sexual violence were a fiction.+ Another commentator also pointed out that wartime ‘sex slaves’ were not “under or undocumented experiences”, and in support of that view posted a World Press Photo story by Hannah Reyes Morales.*

 

 

Screenshot taken from Instagram post by @jdcruel of 'Alternate Humans of Manila', AI-generated image, 14 June 2024


Many viewers of this IG post had not read the full text and had not realised that the images were AI-generated. This is usual with social media – videos and photographs are viewed briefly and often only headlines are read. Much misinformation, and dangerous fake news, is believed and perpetuated in this way.

 

But it is the manipulation of emotions in the story accompanying these images that struck me as their most problematic aspect. Ho Fan also tugged at viewers’ emotions, but his intention was mild (he wanted viewers to appreciate a beautiful photograph) and morally acceptable. The photographs from @jdcruel, however, ‘pool’ sadness and grief harvested from the internet to make composite AI-generated photographs of women who, we are told, are victims of sexual abuse. As Hannah Reyes Morales showed, real women exist who can tell real stories and can really (if necessary) be photographed. Humans can relate to real stories, but should we react with the same emotional response to sad-looking AI-generated images and an accompanying fake story about sexual exploitation? I strongly think not.

 

Increasingly, I hope that all images seen on the internet, especially those on social media depicting unlikely or traumatic incidents, are viewed with the same skepticism with which we now read any unusual or unsolicited email or text message (is it a scam? a phishing message? is it safe to open?). If an image is suspected to be AI-generated and its context or story is shown to be wrong or faked, then surely the prevalence of such AI-generated images will fall. Because, without plausibility, what is the point of such faked images? But the internet has a capacity to suck viewers into its ‘world-view’, and our powers of critical viewing are too often lost amid the entertainment of it all. Unfortunately, and perversely, we could be induced to feel emotional about fake AI-generated imagery.

 

 

*See: https://witness.worldpressphoto.org/roots-to-ashes-by-hannah-reyes-morales-e361020a793f

 

+However, if this story had been intentionally written as fiction, then any accompanying fictional imagery would also have been acceptable. Even AI-generated imagery would be acceptable, as it would be the equivalent of any other fictional image (e.g. drawn, painted or a posed photograph).

 


