Evaluating Websites and Other Sources

Strategies to help identify trustworthy and reliable information.

How Google Ranks Information

With resources like Google at our fingertips, information isn't hard to find. What is tougher is finding reliable information.

Google can actually make this harder, for two main reasons.

  1. Google ranks on popularity, NOT fact. The first result of a Google search may not even be factually correct. That's because Google's algorithms rank results that lots of people click on higher: the more a link is clicked, the higher it appears in the results, whether or not it is factual. And Google has no fact-checkers to make sure the information you click on is correct.
  2. Google personalizes its results to YOU. Google changes what it shows you based on what you've searched before. It also filters results based on where you live, what you've bought online, what you share on social media, and what you've sent in your Gmail. Google is in the business of selling data - your data - not information.

This kind of personalization can be helpful when you are looking for local weather, sports scores, or new music suggestions. But it can also narrow the kind of information that shows up when you Google something.

  • Google now places AI-generated results at the top of its searches, and these may or may not contain accurate information (see the AI Results box below).
  • Google also notices what sites you visit and changes what you see based on your activity. For example, if you click on lots of sites with a liberal or conservative viewpoint, Google will automatically start filtering out sites whose viewpoints differ from your perceived preference. Soon, your whole results list shows you only what you already agree with (this is called a "filter bubble" - check out the TED Talk about filter bubbles below).

AI Results

Google has integrated its AI program Gemini into almost all searches. Most other search engines also have AI "assistants," and AI results are now often the first thing you see in a search.

Gemini, ChatGPT, and other AI assistants are built on multimodal large language models (LLMs). LLMs are trained on enormous collections of content (in this case, websites) and generate responses by reassembling that content in ways that mimic human speech.

AI assistants do NOT function like a search engine; rather, they stitch together the information they find into human-sounding content. This often results in incorrect information - one study showed that LLMs get facts wrong over 60% of the time.

This means we need to be even more skeptical and vigilant about content generated by AI assistants in search engines, and we need to verify everything an LLM supplies against actual sources. Sometimes the AI assistant will supply links to websites along with its content summary.

It is ALWAYS best practice to trace information generated by any AI back to the original sources and verify it there. Check out our Student Guide to AI for more information:

Eli Pariser: Beware Online Filter Bubbles