‘Bias Machines’: How AI Summaries Can Hide Bias

The integration of artificial intelligence (AI) into search engines has produced features like Google's AI Overviews, previously known as the Search Generative Experience (SGE), which aim to give users comprehensive, informative summaries across a diverse range of topics.

These AI-powered features analyze vast amounts of information from the top search results to generate concise overviews, often appearing at the top of search results pages, effectively positioning themselves as the primary source of information.

This development caters to the growing demand for efficiency in information consumption, potentially leading to a phenomenon termed "no-click search," where users can glean answers directly from the search results page without needing to navigate to individual websites.  

While the convenience offered by AI summaries is undeniable, this efficiency has serious potential drawbacks when people use AI summaries for financial decisions.

I've previously analyzed and shown that Google Search results exhibit confirmation bias. Most people don't think about the algorithm Google uses to help them find what they are looking for. Google treats clicks and engagement as confirmation that people have found "what they are looking for". However, people often prefer and engage more with information that confirms their existing biases, thereby training Google Search to confirm those biases.
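This feedback loop can be illustrated with a toy simulation. The corpus, stances, and click probabilities below are invented assumptions, not Google's actual algorithm: documents that confirm a user's belief are simply assumed to be clicked more often, and ranking follows accumulated clicks.

```python
import random

random.seed(42)

# Toy corpus: each document either confirms or challenges a common belief.
docs = [{"id": i, "stance": "confirm" if i < 5 else "challenge", "clicks": 0}
        for i in range(10)]

# Assumed user behaviour: confirming content is clicked more often.
CLICK_PROB = {"confirm": 0.7, "challenge": 0.3}

def run_searches(docs, n_users=1000, results_shown=5):
    """Each simulated user sees the top results (ranked by past clicks)
    and clicks a result with a stance-dependent probability."""
    for _ in range(n_users):
        ranked = sorted(docs, key=lambda d: d["clicks"], reverse=True)
        for doc in ranked[:results_shown]:
            if random.random() < CLICK_PROB[doc["stance"]]:
                doc["clicks"] += 1
                break  # the user stops after the first satisfying click

run_searches(docs)
top5 = sorted(docs, key=lambda d: d["clicks"], reverse=True)[:5]
confirming = sum(1 for d in top5 if d["stance"] == "confirm")
print(f"Confirming documents in the top 5: {confirming}")
```

In this sketch, early clicks on confirming content push it up the ranking, which earns it more clicks, and challenging content sinks out of view entirely: a rich-get-richer loop driven purely by engagement, not trustworthiness.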

The ease with which users can obtain AI-summarized information may inadvertently reduce their inclination to scrutinize the underlying sources and delve into the nuances of the topic.

The very nature of AI Overviews, designed to deliver swift answers that confirm the user’s biases, inherently diminishes the motivation for users to engage in more thorough investigation. The "no-click search" trend prioritizes immediate information, potentially sidelining closer scrutiny.

Moreover, the trust that users place in Google's AI summaries, regardless of their actual reliability, can foster a passive acceptance of the presented information, making users less likely to independently verify the original sources.

Key concerns

  1. While AI summaries offer the advantage of rapid information retrieval, their concise nature can also lead to an oversimplification of complex topics. They serve up summaries of the most popular but not necessarily the most trustworthy results, potentially omitting crucial nuances and contrarian perspectives.

  2. Google Search users are not experienced 'prompt writers', and they don't know that specific instructions for balanced or data-driven answers are required. This leaves their query open to being hijacked and returning results based on what a complex algorithm thinks they want.

  3. The authoritative tone often adopted by AI, combined with the prominent placement of Overviews at the top of search results, can create an impression of unquestionable accuracy, which may discourage users from seeking additional information or exploring diverse viewpoints.

  4. Moreover, AI has the potential to generate summaries that align with the framing of the user's query, whether positively or negatively oriented. For example, an inquiry like “will property prices rise?” will match with content about property prices rising and produce an AI summary of that subset of increasing property prices content rather than a balanced market assessment. This highlights how bias can be subtly introduced or reinforced based on the initial search input.

  5. The personalization of search results by Google can further obscure bias. By tailoring search results based on factors like location, search history, and web history, Google may present users with AI summaries that reinforce their existing beliefs or preferences, limiting their exposure to alternative or unbiased information.

  6. The lack of transparency surrounding the generation of AI Overviews and the specific criteria used for selecting sources also makes it challenging for users to assess the potential for bias within the summaries.
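The query-framing effect in point 4 can be sketched with a naive keyword retriever. The mini-corpus and matching logic below are invented for illustration; real search ranking is far more complex, but the mechanism is the same: the framing word in the query steers which subset of content gets summarized.

```python
import string

# Hypothetical headlines; stances are invented for illustration.
corpus = [
    "UK property prices set to rise as demand outstrips supply",
    "Why house prices will rise again in 2025",
    "Analysts warn property prices could fall sharply",
    "Regional data shows house prices falling in the north",
    "Mixed outlook: some regions rising, others flat",
]

def top_result(query, docs):
    """Return the document sharing the most content words with the
    query. The framing verb ('rise' vs 'fall') decides the winner."""
    stop = {"will", "why", "could", "the", "to", "as", "in"}
    q_terms = {w.lower() for w in query.split()} - stop
    def score(doc):
        words = {w.strip(string.punctuation).lower() for w in doc.split()}
        return len(q_terms & words)
    return max(docs, key=score)

print(top_result("will property prices rise", corpus))
print(top_result("will property prices fall", corpus))
```

Two queries about the same market return opposite-leaning sources, so a summary built from the matches inherits the question's framing rather than reflecting a balanced assessment.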

The process of summarization involves selection and interpretation, which are inherently susceptible to bias, whether the process is done by humans or artificial intelligence.

All of the concerns above can introduce bias into AI summaries without users being aware of it.

When users are looking for clothes that match their personal style, a bias engine is very helpful for surfacing new clothing that matches their taste.

However, when users seek balanced information to make sense of complex subjects, they likely don't realize that the bias engine is a hidden risk, omitting important but less popular information and views that should be considered.
