FAYETTEVILLE, Ark. — In a recent University of Arkansas study, artificial intelligence software gave more creative and elaborate answers to open-ended questions than the human participants it was tested against.
GPT-4 reportedly gave better answers than 151 human participants across three "divergent thinking" tests, which measure the ability to generate unique solutions to open-ended questions such as “What is the best way to avoid talking about politics with my parents?”
The study, titled "The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks," was published in Nature’s Scientific Reports and authored by U of A Ph.D. students Kent Hubert and Kim Awa, along with Dr. Darya Zabelina, assistant professor of psychological science and director of the Mechanisms of Creative Cognition and Attention (MoCCA) lab.
Three tests were administered:
- The Alternative Use Task, in which participants come up with creative uses for everyday objects such as a rope or a fork.
- The Consequences Task, in which participants imagine possible outcomes of hypothetical situations, such as “What if humans no longer needed sleep?”
- The Divergent Associations Task, in which participants must think of 10 nouns that are as logically distant from one another as possible; a rough sketch of one way such distance can be quantified follows this list.
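The article does not say how the “distance” between nouns was scored, but divergent-association tasks like this are commonly scored by averaging the semantic distance between every pair of words, measured with word embeddings. The Python sketch below illustrates that idea with invented toy vectors; the embedding values, word list, and function names are illustrative assumptions, not the study’s actual scoring pipeline.

```python
# Hypothetical sketch: scoring a Divergent Associations Task response as the
# average pairwise semantic distance between the nouns a participant lists.
# The toy 3-dimensional "embeddings" below are invented for illustration;
# real scoring pipelines typically load pretrained vectors (e.g., GloVe).
from itertools import combinations
import numpy as np

EMBEDDINGS = {  # invented example vectors, not real word embeddings
    "rope":   np.array([0.9, 0.1, 0.0]),
    "fork":   np.array([0.8, 0.3, 0.1]),
    "galaxy": np.array([0.1, 0.9, 0.7]),
    "mercy":  np.array([0.0, 0.2, 0.9]),
}

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """1 minus cosine similarity: higher means more semantically distant."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def dat_style_score(words: list[str]) -> float:
    """Mean pairwise distance across all word pairs in the response."""
    vectors = [EMBEDDINGS[w] for w in words]
    pairs = list(combinations(vectors, 2))
    return sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

print(f"{dat_style_score(['rope', 'fork', 'galaxy', 'mercy']):.3f}")
```

With real embeddings, unrelated words like “fork” and “mercy” would sit far apart while related ones like “rope” and “fork” sit close together, so responses spanning more distant concepts earn higher scores.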
Ultimately, the authors found that “Overall, [ChatGPT] was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses ... GPT-4 demonstrated higher creative potential.”
However, these findings come with some caveats, the authors said.
"The measures used in this study are all measures of creative potential, but the involvement in creative activities ... is another aspect of measuring a person’s creativity," said the authors.
Hubert and Awa further note that “AI, unlike humans, does not have agency” and is “dependent on the assistance of a human user, therefore ... AI is in a constant state of stagnation unless prompted.”
The researchers also noted that they did not evaluate the appropriateness of GPT-4's answers: while the AI may have provided more original responses, human testers may have felt constrained by the need to keep their answers grounded in reality.
Awa also acknowledged that low human motivation may have played a part in the outcome, asking, “How do you operationalize creativity? Can we say that using these tests for humans is generalizable to different people? Is it assessing a broad array of creative thinking?"
The U of A argues that whether or not the tests are perfect measures of human creative potential "is not the point."
The point, the authors say, is that AI is developing rapidly and beginning to outperform humans on creative tasks in ways it hasn't before, but "Whether they are a threat to replace human creativity remains to be seen."
The authors end on an optimistic note, pointing to AI's potential as a tool of inspiration that helps people overcome what they call "fixedness."