It is important to remember that AI has a number of limitations. ChatGPT can describe its own limitations; here are some examples:
"ChatGPT may produce inaccurate information about people, places, or facts"
"My knowledge is based on information available up until September 2021, and I don't have access to the internet to provide real-time updates."
"My responses are generated based on pre-existing knowledge up until September 2021. I don't have the ability to browse the internet or access specific book details beyond what I've been trained on. If you have a specific book in mind, I recommend referring to the table of contents or index of the book itself, or conducting an online search to find the chapter listings or summaries."
"It's important to note that while I strive to provide helpful and accurate information, there may be instances where the responses may be incomplete, biased, or incorrect. Therefore, it's always a good idea to verify and cross-reference the information from reliable sources."
"I apologize for the confusion, but the book "Preserving Cultural Heritage: Caring for the Past in the Present" by Joshua A. Bell and Simon J. Knell does not exist. I generated the title and authors in error. I apologize for any inconvenience caused. If you have any other requests or need assistance with a different topic, feel free to let me know."
Whilst Generative AI often produces very plausible text, it is the student's responsibility to make sure that all of these outputs are current and accurate. How can you do this?
Your Library Team have put together guidance on how to begin to evaluate information that you might come across when you are online. This guidance will help you to verify the authenticity of any Generative AI outputs.
This video, by Phil Edwards of Vox Media, is useful viewing for understanding how outputs generated by AI may be biased against certain minority languages. Whilst this video concentrates on Catalan, we do think the information presented here is relevant to students studying through the medium of Welsh.
One challenge that AI faces is accurately generating citations and references. AI models rely on statistical patterns rather than a genuine understanding of how a citation or a reference should be presented, which can lead to inaccuracies in the references they produce.
As previously stated in this guide, if you use AI for any part of your assessed work, it is your responsibility to check all outputs generated by the AI to make sure that the information produced is current and correct.
Take a look at our Referencing and Plagiarism Awareness Guide for further information about the theory and practice of this critical academic skill.
Generative AI models can exhibit various biases due to the data they are trained on and the inherent limitations of their algorithms. Here are five common biases found in generative AI:
Gender bias: Generative AI models can replicate and perpetuate existing societal biases related to gender, such as gender stereotypes or gendered language usage.
Racial and ethnic bias: AI models trained on biased or limited datasets may inadvertently generate content that reinforces racial or ethnic stereotypes or displays unequal treatment towards certain groups.
Cultural bias: Generative AI models trained on specific cultural contexts may produce content that is biased towards or excludes other cultures, leading to a lack of representation or misrepresentation.
Confirmation bias: AI models can unintentionally reinforce existing beliefs or opinions present in the training data, potentially leading to biased outputs that align with specific perspectives or ideologies.
Content bias: Generative AI models can exhibit biases in the types of content they generate, favoring certain topics, themes, or perspectives over others, based on the biases present in the training data.
It is important for students to understand that these biases may be present in any outputs created by Generative AI. A key skill in the learning process is being able to critically evaluate these outputs and recognise any potential biases.
As with fact-checking and spotting misinformation, the Library's guidance on evaluating information may be useful in spotting biases.