Pulse Labs is excited to have as a guest author Kim Vigilia of Humanising Autonomy. Kim is VP of Strategy and is based at Humanising Autonomy’s headquarters in London.

Artificial Intelligence (AI) is once again making headlines following the release of OpenAI’s ChatGPT and its ilk. Featuring more relevant results and better mimicry of human-like responses than its predecessors, ChatGPT reached over 100 million users in its first two months. Keen explorers quickly set out to test the accuracy of the system, experimenting with a wide range of uses, such as writing articles, business strategies, travel itineraries, movie scripts, and conversations.

Although feedback on the quality of the resulting text is mixed - journalists have been quick to point out ChatGPT’s lack of depth and substance, whilst programmers have noted its impressive ability to produce fairly accurate code - an overwhelming concern has arisen that the technology is going too far, too fast, too soon, and without clarity on the damage it could do. OpenAI responded swiftly with GPT-4, an improved and self-proclaimed “safer” version; but tech leaders including Elon Musk remained skeptical and called for a “pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks it may pose can be properly studied.”

Historical Context

Many technological advancements over the last century have straddled a fine line between empowering humankind for good and serving as a tool in the breakdown of the population’s wellbeing and health. Cars, smartphones, and the internet have each gone through a phase of skepticism, scandal, and scrutiny before reaching general societal acceptance, with ongoing debate over their positive and negative impacts.

AI technology is in its early stages, and like the American Wild West, there is still a considerable lack of regulations, guidelines, and frameworks to help keep its risks in check. Since AI can be so complex, especially when combined with machine learning and built from deep neural networks hidden within black boxes, it is especially difficult to know where to begin - and to determine who should have accountability and ownership of the consequences when things inevitably go wrong.

OpenAI’s original mission statement is “to ensure that artificial general intelligence benefits all of humanity,” and ChatGPT was initially described as a conversational AI for online customer care. On paper, this sounds straightforward and, for the most part, fairly wholesome - so what happened between that original intent and the whirlwind of panic it has seemingly set off?

One of the challenges could lie in its attempt to be human-like versus being human-centric.

Success Means Avoiding Simple Cloning Interactions
Photo by Daniel K Cheung / Unsplash

Mimicry vs Experiential

In human-like technology, the focus is on generating a superficial interaction that looks, sounds, and seems human, but is ultimately still only a series of automations and algorithms that cannot truly understand human context and intent. It is like a humanoid robot that looks and moves as a person would, but is not sentient and has limited capacity to learn deeper emotions such as empathy. Human-like AI technology is particularly dangerous because its mimicry of humanity hides the fact that it lacks empathy and the ability to self-reflect - key things that make humans human. Its pseudo-humanness creates a false sense of security, and can shake a person’s trust even further when they realize that the AI cannot comprehend the wide range of direct, indirect, and gray-area consequences. When developers seek to create human-like products, the priority is the technology itself.

Human-centric technology prioritizes the person using, or impacted by, the technology, and focuses on that person’s experience. The technology itself doesn’t have to look human or emulate human responses; instead, its decision-making processes, awareness of its own limitations, and built-in layers of protection against undesired outcomes - including unfair bias and incorrect triggers - make it more human-friendly. In this framework, the AI is invisible: a tool for empowerment within a larger product.

Typical product requirements ensure creators know who the product is for, what it will do or empower, how it will do that, and how much it will cost to develop. Human-centric AI technology is developed with additional considerations - ethics, privacy protection, and direct and indirect consequences and risks - all from the beginning. When companies set out to produce human-centric AI technology, the priority is on people, and in effect, on society and the world.

Examples of AI with human-centric outcomes include Apple’s Siri, which has supported elderly citizens, and Amazon’s Alexa, which reduced loneliness among ageing adults during Covid-19; other human-centric AI can be found in smartphones, in advanced driver assistance systems (ADAS) and safety technology for cars, and in medical imaging.

If more businesses, AI developers and regulators demand more human-centric technologies from the beginning, we may face fewer surprises and instead be able to enjoy the true potential and benefits of AI.

Human-Centric will Set the Stage for a Relationship
Photo by vackground.com / Unsplash

Human-centricity will Change our Relationship with AI

If we are to continue using AI technologies, then people must find a way to form positive relationships with them. Consider our relationship with modern smartphones - and how the first iPhone went from a disruptor of the telecoms industry to a staple in today’s society in less than twenty years.

Today, many in the digitally active population have reached the “can’t live without it” stage. The relationship is built on dependency, customization, and empowerment - with access to email, messages, work, social connections, maps, and diaries, people may literally be lost without their phones, which are configured to their exact needs, preferences, and aesthetics. And since their rise in popularity forced telecom providers to reshape connectivity through 5G, smartphones have made it possible to connect people around the world regardless of physical distance or barriers. Their continued evolution and increasingly intuitive capabilities strengthen our relationship with them; much like in a real relationship, the more information a phone gathers on you and your behaviors, preferences, and patterns, the less you have to explain explicitly what you want or need. The AI in smartphones enables them to anticipate and preempt your requests and respond in ways that are most beneficial to you.

How can we translate more human-centric AI into other parts of our lives for a positive outcome and a better, more trusting relationship with AI?

People will trust AI more as they begin to understand how it can enable a better quality of life for them - and how the risks are minimized.

For instance, an experience-focused AI in the smart home that understands you’re exhausted just from your body language and speed of movement - and responds by playing relaxing music and dimming the lights - might sound convenient. But if it means losing your privacy, there is no incentive to adopt the technology. Human-centric design would mean the AI can do all that it promises without tracking your identity or recording your exact actions - it can be designed to respond to concrete, observable behaviors instead, as the sketch below illustrates.
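
As a minimal sketch of what that could look like - every function name and threshold here is hypothetical, not a real device API - the logic below acts only on aggregate movement speed computed from anonymous pose keypoints, and stores nothing that could identify a person:

```python
from dataclasses import dataclass

@dataclass
class PoseFrame:
    """Anonymous skeletal keypoints for one frame; no face crops, no identity."""
    keypoints: list   # [(x, y), ...] joint positions
    timestamp: float  # seconds

def movement_speed(frames):
    """Average joint displacement per second over a short window of frames."""
    if len(frames) < 2:
        return 0.0
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        dist = sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(prev.keypoints, curr.keypoints)
        )
        total += dist / max(curr.timestamp - prev.timestamp, 1e-6)
    return total / (len(frames) - 1)

# Hypothetical smart-home actions; a real system would call device APIs here.
def dim_lights(level):
    print(f"lights -> {level:.0%}")

def play_playlist(name):
    print(f"playing '{name}' playlist")

def respond_to_fatigue(frames, slow_threshold=5.0):
    # The decision uses only aggregate motion in the current window; nothing
    # is stored, so no identity or long-term behavioral profile exists.
    if movement_speed(frames) < slow_threshold:
        dim_lights(0.3)
        play_playlist("relaxing")

# Example: two frames, 0.5s apart, with barely any joint motion -> responds.
frames = [
    PoseFrame(keypoints=[(100.0, 200.0), (110.0, 250.0)], timestamp=0.0),
    PoseFrame(keypoints=[(100.5, 200.2), (110.4, 250.1)], timestamp=0.5),
]
respond_to_fatigue(frames)
```

Because each window of frames is discarded after the decision, there is no recording and no long-term profile - only a response to what is observably happening right now.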

People may better support AI in their cities and neighborhoods if they can see its direct impact on traffic flow and travel safety without the repercussions of surveillance and unfair profiling. For example, if AI reduced traffic congestion and prevented more crashes, only the physical safety of citizens would improve. But if the AI could perform all of those optimization tasks and was also designed to prevent unfair bias, the quality of life for everyone in the city would improve.
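
A minimal sketch of the idea, under the assumption that the optimizer only ever receives anonymous per-approach counts (the function and numbers are illustrative, not a real traffic system):

```python
def green_time(counts, cycle_s=90, min_green_s=10):
    """Split a fixed signal cycle across approaches in proportion to demand.

    counts: anonymous vehicle/pedestrian counts per approach; no images,
    plates, or identities ever reach the optimizer.
    """
    total = sum(counts.values()) or 1
    flexible = cycle_s - min_green_s * len(counts)
    return {
        approach: min_green_s + round(flexible * n / total)
        for approach, n in counts.items()
    }

# A busier north-south corridor automatically earns longer greens.
print(green_time({"north": 42, "south": 38, "east": 7, "west": 5}))
# {'north': 33, 'south': 31, 'east': 14, 'west': 13}  (rounding may drift ~1s)
```

Because identity never enters the pipeline, surveillance and profiling are ruled out by design rather than by policy.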

There are endless ways to make AI technologies more human-centric. Whatever purpose developers intend for an AI, the most important thing is that they ensure the technology is human-centric from the beginning of design - before production begins - and keep enhancing that over time.

How human-centric an AI technology is can be the driving force behind how we build a long-lasting relationship with it.

The More Complex the Solution, the Greater the Need for Better Testing

AI-Centric Insights

There is no doubt that AI has made great strides and will continue to do so. It will get smarter, and hopefully we’ll get better at asking questions and demanding human-centricity. All of this means a greater, richer experience and relationship - and measuring the effectiveness of AI-based solutions will become ever more complex as it must cover the variables of evolving human expectations.

That’s where Pulse Labs and Humanising Autonomy come in.

  • Pulse Labs captures in-situation interactions between humans and technology. Our structured approach allows benchmarking and comparisons over time, across surfaces and across AI products.
  • Humanising Autonomy teaches machines to understand, infer, and predict human behavior, based on joint placement and facial markers without tracking or identifying individuals. As a result, semi- or fully-automated technologies respond with more relevant actions, whilst protecting individual privacy and preventing unfair bias.
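
As a loose illustration of that keypoint-based approach - a toy sketch, not Humanising Autonomy’s actual model - behavior can be scored from joint motion alone, so no face crop or identity ever enters the pipeline:

```python
import numpy as np

def pose_features(keypoints):
    """keypoints: (T, J, 2) array of J anonymous joints tracked over T frames."""
    velocity = np.diff(keypoints, axis=0)             # per-frame joint motion
    speed = np.linalg.norm(velocity, axis=-1).mean()  # overall movement speed
    heading = velocity.mean(axis=(0, 1))              # average travel direction
    return speed, heading

def crossing_intent(keypoints, road_direction):
    """Crude score in [0, 1]: faster motion toward the road reads as intent."""
    speed, heading = pose_features(keypoints)
    toward_road = max(float(np.dot(heading, road_direction)), 0.0)
    return min(speed * toward_road / 10.0, 1.0)

# Example: 5 frames, 3 joints, drifting steadily toward a road to the east.
kps = np.cumsum(np.ones((5, 3, 2)) * [2.0, 0.0], axis=0)
print(round(crossing_intent(kps, np.array([1.0, 0.0])), 2))  # 0.4
```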

Both of these solutions offer the depth of granularity and actionability needed to truly unpack the success of AI-based products. For creators, that informs how those products are enhanced. For buyers, it informs the right application for their needs, as well as success in implementing them. The speed of AI evolution means there is no time to waste.
__________________________________

About Humanising Autonomy: Humanising Autonomy helps make automation human-centric. Its computer-vision software goes beyond basic detection: it can analyse live or historical video footage to quickly classify, interpret and predict human behaviour. We’re teaching machines to better understand people, using this layer of human context to help companies develop next-generation products and services, create safer environments for people, and elevate the customer experience. Visit www.humanisingautonomy.com or email Kim at info@humanisingautonomy.com.

Pulse Labs (https://pulselabs.ai/) is a pioneering business insights company that specializes in turning human factors analyses and behavioral information into actionable product success metrics for our customers. Our proprietary data processes mean speed and accuracy. Our Power Portal™️ ensures that decision-makers have quick and ongoing access to results to increase their competitiveness. Our customer set includes innovators such as leading technology platforms, top brands and government agencies. For more information, visit https://pulselabs.ai or contact us at sales@pulselabs.ai.