Boxer Library

Artificial Intelligence Tools for Writing & Research

What are Language Generating Tools?

The tools on this page can help you generate language from prompts. These tools are built on large language models (LLMs).

According to MIT, LLMs are "massive neural networks that can generate human-like text, from poetry to programming code. Trained using troves of internet data, these machine-learning models take a small bit of input text and then predict the text that is likely to come next."
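The "predict the text that is likely to come next" idea can be made concrete with a deliberately tiny sketch. The example below is a hypothetical bigram model built for intuition only; real LLMs use massive neural networks trained on internet-scale data, not simple word counts.

```python
# Toy illustration of next-word prediction: count which word most
# often follows each word in a tiny "training corpus," then predict
# the most frequent follower. (Illustrative only -- not how real
# LLMs work internally.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Tally the word that follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

Scaled up from word counts on ten words to neural networks trained on much of the internet, this same predict-what-comes-next objective is what lets LLMs produce fluent paragraphs, poetry, and code.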

This video from Google does a good job of describing LLMs and how we can use them. It is important to note that Google has its own LLM, currently in beta, called Bard.

How are people using LLM tools in the scholarly setting?

Here are just a few examples of the ways people have used LLM tools in the scholarly setting*:

  • Generate a list of journals that might be right to publish in
  • Review and/or re-write text for a certain audience or skill level
  • Create options for question wording or answers (e.g. for exams)
  • Pull specific information/answers out of a body of text
  • Create text for students to review for content or clarity


*Note: these are just examples and they may or may not be appropriate depending on the context.
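For the "pull specific information/answers out of a body of text" use case above, much of the work is simply phrasing the prompt clearly. The sketch below builds such a prompt as a plain string; the function name and wording are hypothetical examples, not a prescribed format, and the prompt would still need to be submitted to an LLM tool of your choice.

```python
# Hypothetical prompt builder for extracting an answer from a passage.
# Constraining the model to the supplied passage (and allowing it to
# say the answer is absent) helps reduce made-up answers.
def extraction_prompt(question, passage):
    """Format a grounded question-answering prompt (illustrative)."""
    return (
        "Answer the question using ONLY the passage below. "
        "If the passage does not contain the answer, say so.\n\n"
        f"Passage:\n{passage}\n\n"
        f"Question: {question}"
    )

prompt = extraction_prompt(
    "What year was the program founded?",
    "The writing program opened its doors in 1998 and moved in 2004.",
)
print(prompt)
```

As the cautions below note, any answer returned for such a prompt should still be verified against the original passage.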

Current Cautions with Language Generating Tools

Timeliness of Content

  • LLM training data is currently not updated in real time. There may be a lag before recent content appears in the training data, so ChatGPT may not have information about very recent events. Many sources suggest that as these tools evolve, they will catch up to the present and may even be able to process internet information in real time.

Hallucinations

  • Information received from these tools may not be factually correct and should always be verified. ChatGPT and other tools have been known to make up information, a phenomenon known as "hallucination." Remember that these tools are language models designed to produce realistic language; they are NOT search engines. For example, they may add real-sounding citations to their outputs because that is what scholarly writing looks like, even though the citations aren't real. This is not to say that all information produced by ChatGPT is incorrect; it just can't be relied on to be 100% factual.

Plagiarism/Academic Integrity

  • There is still much to be decided about the use of LLMs and what it means for academic integrity and plagiarism. Do not pass off work done by LLM tools as solely your own. As of now, best practice is to acknowledge any use of LLM tools.

Tools