One of the most explosive trends of the past couple of years has been the release and adoption, in both the public and private spheres, of new technology dubbed “artificial intelligence.” Proponents advocate its use for writing, visual art, film, and even music. But do the social externalities of this technology outweigh its proposed benefits?
Generative AI is software designed to predict the “answer,” or most likely response, to a given prompt. Its output is an amalgamation of statistically common features associated with the information provided in the prompt. Which features are statistically common depends on the content of the dataset the program is given for reference – called “training data.” A few well-known companies specializing in image generation are MidJourney, OpenAI, Stability AI, Lensa, and Artbase.
The companies themselves tend to be small, often consisting of a few executives and a handful of engineers. Several have recently switched to tiered subscription models after launching as free services. In exchange for regular membership fees, they promise customers nigh-limitless generation of digital art pieces.
The controversy surrounding generative AI centers mainly on the practice of data scraping. AI companies use programs called web scrapers to select and duplicate digital data from internet posts. Once scraped, the contents of these posts are used to train AI models to produce similar output.
Artists with an online presence often find it difficult to prevent their work from being sampled without their consent. If enough of an artist’s work is used as training data – especially if their name is attached to it – a generator can be prompted to directly mimic that artist’s style. This both circumvents the commission process and dilutes the online presence of affected artists.
Interestingly enough, the companies themselves are beginning to shift their stances on data ownership. Around March 7, 2024, MidJourney publicly accused Stability AI of scraping its servers for prompts and training data. This raises further questions: if the works of artists are fair game for harvesting as training data, what exempts prompts and already-harvested training data from that same standard? Artists may be the ‘canary in the coal mine,’ so to speak, but they are now far from the only people questioning who has the right to use data on the Internet.
FCC’s Visual Arts Club is open to all artistically inclined students, regardless of major. Some members are drawn to digital art, some to traditional media (e.g., ink, pencil, paint, or photography), and some to a combination; each has their own preferred media and styles. Homogeneity is not a defining feature of this club. Not surprisingly, many Visual Arts Club members have reservations about the adoption of generative AI.
“It – is – the devil,” said Wesley Paz, 19, an engineering major.
He said generative AI is “more concerning than people think.”
However, Paz said, “in my lifetime, it won’t replace physical art.”
Nevertheless, he said the artistic field has a “scary outlook” given the ease and cost-effectiveness with which employers might use AI models in lieu of hiring professional artists.
Another point, offered by 20-year-old art major Veronica Vasquez, is that generative AI has already shown its capacity to be weaponized in a way that “disproportionately targets women.” The ability to generate images from amalgamated data about a specific person, said Vasquez, “shifts the dynamic” in favor of those who seek to sexualize others’ likenesses without their consent.
She also expressed “constant” worry about the ubiquity of web scraping of artistic media. Yet she indicated an unwillingness to lose hope in the future of art as a career path: “In my heart of hearts, I know it won’t take over my field.”
According to 19-year-old art major Mac Clingerman, “… everything can be used to improve things or abused … AI isn’t an inherent problem.”
Clingerman continued by explaining that generative AI is merely a set of algorithms: “the real problem is art theft.”
Like the other interviewees, Clingerman is chiefly concerned about the looming prospect of “outsourcing jobs from artists” – the true source of discontent isn’t the technology itself so much as the impression that consumer audiences take the fruits of artistic labor for granted.
Despite this, he claimed to not feel threatened by the encroachment of generative AI and encourages other aspiring artists to take heart: “If you give up (now), you didn’t believe in art in the first place.”
Biochemistry major Elvin Martinez, 20, opined that generative AI is a “useful, good source if sourced ethically and credited to the artist.”
Martinez believed that AI users can source art ethically if they “reach out to ask” the work’s original creator for permission, with full disclosure of intent, and receive an affirmative answer. He clarified, however, that he objects to the “unethical” nature of current common practices in sourcing art as training data.
“Lots of programs steal,” he said, “(they) don’t have the capacity to create.”
Ariana Rose Martinez, a 21-year-old nursing major, shared an analogy to summarize her feelings on generative AI and web scraping. “Think of it like taking candy from a baby – the art is the baby, it’s more important than the candy.”
She, too, believes AI can be useful for many things – but holds firm that “any artist is afraid, in reality.”
Martinez expressed her fear for the livelihood of artists, using the fields of animation and writing as examples: “Who’s to say that a corporation won’t say ‘yeah, we don’t need people anymore’?”