Is AI accurate? Is it like a search engine?
We're all accustomed to using Google and other search engines, and to vetting the links and information they return. AIs, however, are a new technology that needs to be understood, used, and vetted in a different way.
Is AI accurate?
Despite AIs' ability to produce grammatically correct text and coherent images, these tools have some serious limitations:
- AIs frequently make up facts and citations, and present them with confidence. These errors have been nicknamed "hallucinations," although "confabulations" might be a more accurate term.
- AI output that looks polished on the surface may, on review by an expert, turn out to contain errors, some of them serious.
- AIs reflect biases and other harmful content embedded in the datasets on which they were trained, which means they may sometimes return inappropriate or harmful content.
- Most AIs do not retain memory between sessions, so they have no awareness of your previous chats, and you often have to fill them in on details from earlier conversations.
Is AI like a search engine?
Although AIs such as ChatGPT and Copilot will confidently return results that sound plausible, they are quite different from search engines like Google.
A traditional search engine analyzes your query and provides you with a list of specific links to visit to find pertinent information. Your ability to vet that information rests, in part, on which link you visit and whether you know it to be a reliable source.
In contrast, a generative AI like ChatGPT references its vast dataset for patterns of words related to your query and comes up with a distillation, or "average," of those millions of word patterns. This "boiling down" of content into generic statements or images is why AI-generated text and images often have a generic tone.
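For the technically curious, here is a minimal sketch of that "word patterns" idea, using a toy bigram model in Python. This is a deliberate simplification, not how ChatGPT actually works; real models are neural networks trained on vastly more data. But it shows the key point: the output is assembled from statistical patterns in the training text, not looked up from a cited source.

```python
import random
from collections import defaultdict

# A toy "training corpus" standing in for the vast datasets real models use.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word tends to follow which: the "patterns of words."
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word.

    Note there is no lookup of sources or facts here, only a
    statistical "average" of the patterns seen in the corpus.
    """
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

The generated sentence sounds grammatical because it follows common patterns, yet nothing guarantees it is true, which is exactly the vetting problem described next.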
It can be challenging to vet or fact-check content generated by AI. The AI does not provide citations for the thousands or millions of pieces of data it references when generating a response, so you can't tell whether it's drawing from reliable sources. You're left to vet and fact-check the generated content yourself, and if you lack the expertise to do that, you'll need to ask someone who does for help.
A word of advice about using AI for teaching —
Because of these serious limitations in generative AIs, our advice is that you use them primarily for things like:
- Brainstorming — it can be helpful for playing with text and images and generating ideas
- Low-stakes activities — where the accuracy of content is less important
- Verifiable content — always fact-check any AI-generated content you use
The acronym we use to remember this is B-LoVe.
A word of caution about AI security —
Never input private data into an AI.
AIs are a new and experimental technology. Even AIs that claim to be secure may have security vulnerabilities that have not yet been discovered or addressed.
For this reason, we strongly recommend sharing with an AI only content that you would be willing to post publicly online. Never share confidential data about individuals or organizations, including names, email addresses, or other identifying or sensitive information.
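As a concrete illustration of that rule, here is a minimal Python sketch that flags obvious identifiers (email addresses and phone-number-like strings) in a block of text before you paste it into an AI tool. The patterns are illustrative assumptions, not an exhaustive list; no simple screen can catch every kind of confidential information, so treat this as a last-resort check, not a guarantee.

```python
import re

# Illustrative patterns only; confidential data takes many more forms
# (names, student IDs, medical details) that simple regexes cannot catch.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone-like number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def flag_identifiers(text: str) -> list[str]:
    """Return warnings for identifier-like strings found in the text."""
    warnings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            warnings.append(f"Possible {label}: {match}")
    return warnings

draft = "Contact Jane Doe at jdoe@example.edu or 617-555-0123 about her grade."
for warning in flag_identifiers(draft):
    print(warning)
# Possible email address: jdoe@example.edu
# Possible phone-like number: 617-555-0123
```

Note that the example still contains a student's name, which no pattern above would catch; the only reliable safeguard is the habit of reviewing text yourself before sharing it.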
Before you use AI for teaching, see: Tufts Guidelines for Use of Generative AI Tools