Many of you reading this know what SATs are (originally the Scholastic Aptitude Test). And most are likely aware of discussions about racial, gender, and economic biases in those tests[1]. Any biases in those tests stem in part from the people, experiences, and information sources used to create them.

The same risk exists for AI-based solutions. AI does not create new information; it assembles existing information. So what it produces will reflect any skews in the source information and how it weights conflicting information.

For example, suppose an AI engine assumes all doctors are male and answers questions to that effect, when in fact the gender split for doctors in the US is about 50/50? Suppose when asked about "couples," the answers assume that means one person of each traditional gender? How will AI know when "they" refers to a single person vs. many people? What if an AI answer includes terms offensive to a given racial or socioeconomic category?

And what if excluding information is a bias? I asked an AI engine about the danger of stroke. It reported correctly that the risk increases with age...but made no mention of race. WebMD reports: "Strokes kill 4 times more 35- to 54-year-old black Americans than white Americans."[2] Is excluding the higher likelihood for African Americans itself a bias? Does that answer assume the person asking is white?


What Are Some DEI Risks for AI?

Biases in AI present risks both to those who create AI solutions and to those who implement them. The risks include negative social commentary, aimed at AI engine creators and at the businesses that deploy their products. Such commentary is among the hardest to counter once it is in the wild and can cascade out of control.

Then there's a financial risk: lost revenue if end-user businesses drop an AI solution because of reported DEI issues and switch to a rival. And a legal risk[3]: given that gender- and race-based lawsuits are not uncommon, it would not be a stretch to envision the same for similar failures in AI.

Addressing all this falls on what we're calling the AI Lifeguards.


How Do AI Lifeguards Operate?

AI Lifeguards are tasked with watching over AI solutions and installations. Their roles parallel what traditional lifeguards do:

🟧 Monitor the water

🟧 Enforce rules and regulations

🟧 Provide first aid

Specifically, AI Lifeguards monitor the answers AI produces, enforce DEI standards, and quickly repair issues related to biases in information presented, including the language and tone in which that information is presented.
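The monitoring role above can be partially automated. As an illustration only, here is a minimal Python sketch of one such check: flagging answers in which occupation terms co-occur with gendered pronouns, a possible symptom of the "all doctors are male" default described earlier. The word lists and the function name are hypothetical, not part of any Pulse Labs product; a real monitor would need far richer linguistic analysis and human review.

```python
import re

# Hypothetical watchlist: occupation terms that should not default to one gender.
OCCUPATIONS = {"doctor", "nurse", "engineer", "ceo"}
GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def flag_gendered_defaults(answer: str) -> list[str]:
    """Return occupation terms that co-occur with gendered pronouns in an answer."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    if words & GENDERED_PRONOUNS:
        return sorted(words & OCCUPATIONS)
    return []

# Example: an answer that silently assumes all doctors are male.
print(flag_gendered_defaults("A doctor should update his patients promptly."))
# A gender-neutral phrasing produces no flags.
print(flag_gendered_defaults("Doctors should update their patients promptly."))
```

A check like this only surfaces candidates for review; deciding whether a flagged answer actually reflects bias remains a human judgment call, which is exactly where the AI Lifeguard comes in.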

Pulse Labs in the Mix

How does Pulse Labs fit in? We're known for outstanding and innovative user experience research. That includes insights on how consumers engage with artificial intelligence through AI engines or through AI deployments in devices, online, and elsewhere. This class of research helps AI Lifeguards monitor the turbulent waters of AI, ensure compliance with DEI standards, and validate that updates in response are on the mark. And watch this space for news about a new Pulse Labs benchmark solution called Pulse AIQ™️, with recurring and trendable guidance on the consumer AI experience. Or ask for details at


Pulse Labs is a pioneering business insights company that specializes in turning human factors analyses and behavioral information into actionable product success metrics for our customers. Our proprietary data processes mean speed and accuracy. Our Power Portal™️ ensures that decision-makers have quick and ongoing access to results to increase their competitiveness. Our customer set includes innovators such as leading technology platforms, top brands and government agencies. For more information, visit or contact us at