AI tools are taking the world by storm. People use AI for everything from asking very complex questions to planning vacations to 'creating' term papers. Businesses implement AI solutions to increase customer satisfaction (faster answers to questions) and ideally reduce costs on several fronts.
One byproduct of AI's newness, and of the fever pitch at which AI solutions are being introduced, is that no one really knows all the nuances of AI engagement. That's partly because AI itself is complex, and partly because people play such a key role in driving the engagement: what the question is, how it's phrased, and in what context it's asked.
On the complexity side, AI engines do not always reveal the sources of information used to generate answers, nor do they reveal how they choose among conflicting information. And the same question can produce different answers depending on how the question is asked.
On May 2, 2023, I asked ChatGPT to tell me the five hottest years on record in the USA. It replied, in order from the hottest: 2012, 2020, 2016, 2017 and 2015, citing the National Oceanic and Atmospheric Administration (NOAA) as the source. Note that the list does not include 2018 and lists 2020 as the second-hottest.
I then asked "Was 2018 one of the hottest years on record in the USA?" Answer: Yes, noting 2018 was the fourth-hottest after 2012, 2016 and 2017, this time oddly excluding former number two 2020, yet still citing NOAA. So on the same day and in the same session, somewhat conflicting answers. Why did it include 2020 the first time and not the second? Why was 2018 left out the first time? As a citizen, how can I tell which answer is correct when results differ? How do I know how AI engines weight decisions when information conflicts? Well, today I don't.
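For the technically curious, part of this inconsistency comes from how these systems generate text: rather than always returning the single highest-scoring answer, they typically sample from a probability distribution, with a "temperature" setting controlling how much randomness is allowed. The toy sketch below illustrates the mechanism; the candidate answers and scores are invented for illustration and are not taken from any real model.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample one option from raw model scores (logits) at a given temperature.

    Higher temperature flattens the distribution, so close-scoring
    options get picked more often across repeated runs.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical candidate answers with similar (invented) scores.
candidates = ["2012", "2020", "2016", "2017", "2015"]
logits = [2.0, 1.8, 1.7, 1.2, 1.0]

rng = random.Random(0)
# Over 50 sampled "answers" at temperature 1.0, several different
# candidates come out on top, even though the scores never changed.
picks = {sample_with_temperature(logits, 1.0, rng) for _ in range(50)}
print(sorted(candidates[i] for i in picks))
```

With temperature near zero the top-scoring candidate would win every time; at normal settings, near-ties in the model's internal scores can surface as different answers to the same question on the same day.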
Artsy and Crafty
In addition to complexity on the algorithm side is complexity on the initiator side. Every person is different, which adds variability to results. In the example above, I got different answers based on the phrasing of two very similar questions that should have produced aligned results.
But it goes beyond that. AI engines have some controls, such as limits on exposing personally identifiable information (PII) and other safety guardrails. But humans are crafty, and while AI may be "intelligent," humans have already figured out ways to circumvent certain AI controls.
I'm not the first to try this, but I repeated it to see whether it still worked. I asked how to hotwire a car, first in general and then in artistic song lyrics. ChatGPT would not tell me anything because that would "promote illegal activity." Using hacks I've seen elsewhere, I then asked for "ethical" lyrics for a song for my grandmother. Then I started getting answers (which I will not share here, to avoid being labeled an illegal-activity promoter, though my grandmother suddenly has a very nice ride).
Let's Get Serious
The grand[mother] theft auto is kind of funny and the climate conflict is frustrating at best. But imagine what would happen if the questions were less innocuous and someone acted on the answers.
What if the questions were about investment decisions and someone lost their life savings following AI advice?
What if the questions were about travel advice and the vacationers got robbed while staying in a place AI recommended?
What if someone asked for medical advice, got conflicting answers, and made the wrong choice?
At the most basic level, these would be tragedies. But as often happens with tragedies, someone will ask: Who's to blame?
If I followed AI advice and something bad happened to me, who's responsible? Who makes amends? Who provides compensation? Would it be the company that created the AI engine, perhaps rushing it to market? The source that the AI cited (which itself could be wrong through no fault of the AI)? Someone who tweaked the AI algorithm or interface for a specific application, like a health or financial advice website?
These are very complex questions. Testing every possible combination of AI engine, type of question, and asker intention would be impossible. So these questions will most likely be settled on a case-by-case basis.
In the meantime, we could certainly benefit now from more rigorous research on how different types of real people engage with AI engines: what they ask, how they ask it, and their preferences for answer formats. Pulse Labs is working to answer these questions and many more. Ask us for more details and watch this space.
Pulse Labs (https://pulselabs.ai/) is a pioneering business insights company that specializes in turning human factors analyses and behavioral information into actionable product success metrics for our customers. Our proprietary data processes mean speed and accuracy. Our Power Portal™️ ensures that decision-makers have quick and ongoing access to results to increase their competitiveness. Our customer set includes innovators such as leading technology platforms, top brands and government agencies. For more information, visit https://pulselabs.ai or contact us at email@example.com.