Concerns about Generative Artificial Intelligence


The emergence of Generative Artificial Intelligence (Generative AI) has raised concerns about its use by students. The most frequently raised concerns are that students (a) will rely on it as a shortcut and not do the research, critical thinking, and writing that are the objectives of an assignment, (b) will present AI output as their own, without giving credit to the AI source, thus committing plagiarism, and (c) will use flawed information because they are unable to detect shortcomings and errors in the outputs. Each of these concerns leads to several possible, overlapping solutions. Additional ethical concerns have been raised about AI, which will also be addressed.


One of the most frequently voiced concerns about AI tools is that students will use them as shortcuts for completing assignments. That is, rather than doing the work themselves, students will rely on AI to do research, integrate ideas, and write papers. Students will not search the literature, read original sources, and extract information that they can analyze and synthesize into a coherent presentation. Instead, they will ask a generative AI tool a question and repeat whatever answer they receive. This concern is especially strong regarding written assignments, in which the process of writing is crucial for generating, developing, and clarifying ideas. Using AI to produce papers short-circuits this process and deprives students of potential learning and intellectual growth.

One recommendation to counter this trend is to motivate students to do their own work by convincing them of the value of the process and focusing on what they will learn by completing an assignment. This entails persuading them that seeking a shortcut by using AI tools prevents them from learning, and that one long-term consequence is missing the opportunity to become better thinkers and more effective writers. This might mean:

  • Assigning questions students can see as clearly connected to their professional plans and employment goals.
  • Communicating the importance of learning in a professional practice program and talking with students about the relevance of the course to their careers.
  • Having discussions with students about course goals and learning objectives, why they are important, and what students will learn in the course if they engage in the learning.
  • Fostering self-reflection on the process of learning to help students discover their own best methods for learning and realize that struggle is part of learning.
  • Emphasizing the importance of original thought and critical thinking.
  • Talking with students about the value of the writing process, and how communication skills are developed through grappling with words and organizing ideas.
  • Reminding students that ChatGPT, for example, should be a tool to aid in their process, not a replacement for their own original ideas and critical thinking. It can be used for generating ideas, organizing thoughts, and improving writing skills, but it should not be relied on to do the entire job.
  • Discussing with students what they gain in learning, self-development, personal relationships, and integrity when they are challenged and work through those challenges to produce original work.

Students’ use of Generative AI to aid in writing assignments can also have positive aspects, especially if it is used as an aid to learning. According to the Dartmouth Center for the Advancement of Learning, “Students can accelerate their learning by asking ChatGPT to summarize content, to generate ideas, to critique their writing, or to find flaws or counterpoints to their arguments. They can ask for more examples, for more explanations, for paraphrases or synthesis. They can ask for suggestions on where to go next or how to learn or find out more.” Additionally, Generative AI can be a great help to students who have trouble with writing (e.g., those for whom English is not their first language). It can help “level the playing field” by helping these students improve their grammar, syntax, word choice, and so on.


It is tempting for students to ask Generative AI a question and then use the response as if they had written it themselves. This constitutes plagiarism. Several solutions have been proposed to the potential problem of students using Generative AI and representing the results as their own work. Some involve the use of detection software. Others focus on prevention. Strategies for preventing plagiarism often include educating students about what constitutes plagiarism, informing them of the consequences, and motivating them to do their own work.

There have been multiple efforts to develop tools that can detect materials generated by AI (e.g., GPTZero, Sapling AI Detector, Scribbr Free AI Detector, Turnitin). The accuracy of these detection tools varies, and currently none is highly effective. Furthermore, both AI generators and detectors continue to be updated and change rapidly; therefore, it is likely that AI software will continue to outpace detection tools. The best current advice is not to rely on detection technology to control the use of AI writing.

When reading a student’s paper, instructors can use their own judgment to detect material that has been generated by AI. Possible clues include

  • Lack of fluency, word repetition
  • Poor or inappropriate punctuation
  • Short, less complex sentences
  • Grammatical, syntax, and spelling errors
  • Writing is too perfect, with no typos or errors
  • Hallucinations, logical inconsistencies, irrelevant or factually incorrect information
  • Lack of creativity or originality
  • Repetitive use of “the,” “it” or “is”
  • Strange metaphors
  • Bibliographic entries that don’t make sense, sometimes with problematic links

According to Missouri Online, instructors can also do the following to determine whether a student is the author of a paper:

  • If you have a writing sample from the student that you know is authentic, compare the style, usage, etc. to see if they match up or vary considerably.
  • Determine whether the paper refers directly to or quotes the textbook or instructor. The AI tool is unlikely to have textbook access (yet) or know what is said in class (unless fed that in a prompt).
  • Consider giving ChatGPT your writing prompt and seeing how its output compares to student submissions.

In addition to trying to detect inappropriate use of Generative AI tools, instructors can try to prevent their use. Supiano (2023), in the Chronicle of Higher Education, described ways to combat this form of cheating. Instructors can (a) create conditions in which cheating is difficult, by giving closed-book, closed-note, closed-internet exams in a controlled environment; (b) create assignments in which cheating is difficult, by asking students to draw on what was said in class and to reflect on their own learning; and (c) make cheating less relevant, by letting students collaborate and use any resource at their disposal.

Most often, however, the focus has been on increasing students’ motivation to do their own work by emphasizing the relevance of learning tasks to future employment, creating authentic assessments (e.g., asking students to complete complex tasks like those they would face in a non-academic setting), and giving students choices about how to express their learning. Furthermore, because this form of academic dishonesty is most likely to happen when students feel stressed and overwhelmed, with too much work and too little time, another approach is to reduce pressure by having more frequent, lower-stakes assessments. In this context, it is helpful for instructors to periodically evaluate how realistic the workload expected of students is and to consider reducing course-related work.

In an article in the Chronicle of Higher Education, Mills (2023) advocated the following responses to potential problems with writing assignments: (a) assigning writing that is interesting and meaningful to students; (b) communicating what makes the process of writing invaluable; (c) supporting the writing process; (d) focusing on building relationships with students to help them stay engaged; and (e) exploring the nature and risks of AI with students.

Similarly, the use of authentic assessments has been suggested as a way to prevent the use of Generative AI output on papers and tests. Authentic assessments ask students to perform real-world tasks that demonstrate their skills and understanding of a concept. Students are called upon to simulate what they will be doing in the workplace by utilizing the skills they are developing and the information they will need. This approach makes assignments more relevant to students and enhances their engagement and motivation.

Most importantly, ethical issues should be addressed explicitly and directly with students. Thus, it is crucial to clearly communicate course policies regarding the use of ChatGPT and other Generative AI tools through a statement in the course syllabus. The statement should also refer to any relevant university policies and to the more general academic integrity policy presented in the syllabus. These policies should be reviewed in class discussions, along with specific examples of what constitutes plagiarism, what constitutes original work, and what type of assistance is and is not permitted. Students should be reminded that using ChatGPT may raise ethical concerns and that it is their responsibility to ensure that their work meets academic standards. This also means providing guidance on how to correctly format and cite information from Generative AI.

For example, the University of Pittsburgh Center for Teaching and Learning, in its ChatGPT Resources for Faculty, advises that instructors “Talk to students about why academic integrity matters and the ethical and practical implications of academic integrity violations. Emphasize your trust in your students and your belief that they can successfully complete coursework themselves. Invite students to ask questions and attend office hours if they are confused or feel unable to successfully complete their work.”

Limitations of ChatGPT Output

Although these tools continue to change, Generative Artificial Intelligence currently has some significant limitations that reduce its accuracy and utility. (Because the most has been written about ChatGPT, it will be the focus of this discussion.) These limitations are problematic because students often do not know how to evaluate the material ChatGPT produces. First, there are problems with accuracy and bias. The information provided by ChatGPT can be inaccurate: it sometimes fabricates information, including references. To cover knowledge gaps, ChatGPT may provide a fictional response rather than report an error. These “hallucinations” appear in both narrative text and references. Additionally, ChatGPT was trained on datasets of text written by humans; thus, its responses can reflect the biases of the humans who wrote that text.

Second, ChatGPT currently has some limitations inherent in its construction that make it inappropriate for certain types of assignments or projects. These limitations include inability to

  • Write about anything that has happened since 2021
  • Generate personal reflections
  • Generate non-text-based responses
  • Make predictions
  • Make connections among specific course readings/materials

Problematic Citations

Although these tools are being changed and upgraded, and some flaws are being remedied, students should be educated on the reality that ChatGPT often produces factually incorrect information. They need to be aware that when they use output created by ChatGPT, they might be including false statements and made-up references and cautioned about the ramifications of including this information in their papers and presentations.

One approach to dealing with these flaws is to encourage students to examine ChatGPT output critically, so they are aware of the limitations of using AI. This can be part of teaching critical thinking skills and information literacy (i.e., when to trust, verify, and use information found online) and might entail assignments such as asking students to

  • Evaluate AI output for logical flaws in statements.
  • Edit/revise AI output.
  • Compare/contrast AI-generated text with other scholarly work or with their own work.
  • Evaluate AI-generated text for missing or biased information.
  • Create a rebuttal or response to an AI-generated text.
  • Revise AI output to integrate information and draw conclusions.
  • Verify AI-generated citations and references.
  • Fact-check AI-generated statements.

Additional Concerns

Additional concerns have been raised pertaining to the use of ChatGPT in higher education. These include the following issues that instructors should consider:

  • OpenAI’s privacy policy states that user data can be shared with third-party vendors, law enforcement, affiliates, and other users. Users are cautioned not to input information covered by FERPA, personally identifiable information, or academic and administrative data they do not want made public.
  • Asking students to use ChatGPT provides free labor to OpenAI to continue to train ChatGPT.
  • ChatGPT uses copyrighted material without the authors’ permission or approval, leading to potential copyright infringement.
  • OpenAI, the developer of ChatGPT, recommends against using ChatGPT for assessment purposes.
  • Use of ChatGPT 3.5, for example, is currently free, but OpenAI has launched a subscription plan for access to more advanced versions, so not all students may be able to afford to continue using it, which will further exacerbate educational disparities.
  • Randomness is built into the model; it can provide a different answer each time, even when given the same prompt.
  • Access is not equitable. Because of the costs involved, not everyone has the same degree of access – or any access at all.
  • There are environmental concerns about the large carbon footprint resulting from the development and use of AI tools.
  • There are concerns about the low-cost human labor that is involved in helping to train an AI tool.


Supiano, B. (2023). Will ChatGPT Change How Professors Assess Learning? Chronicle of Higher Education.
Darby, F. (2023). 4 Steps to Help You Plan for ChatGPT in Your Classroom. Chronicle of Higher Education.
Trust, T. ChatGPT & Education.
Mills, A. (2023). ChatGPT Just Got Better: What Does That Mean for Our Writing Assignments? Chronicle of Higher Education.
Cornell University Center for Teaching Innovation. Ethical AI for Teaching and Learning.
New York University Teaching and Learning Resources. Teaching with Generative AI.
University of California, Berkeley Center for Teaching and Learning. Understanding AI Writing Tools and Their Uses for Teaching and Learning.
University of Illinois Urbana-Champaign Center for Innovation in Teaching and Learning. Generative Artificial Intelligence.
University of Missouri–Kansas City, Missouri Online. Detecting Artificial Intelligence (AI) Plagiarism.
Indiana University Bloomington Center for Innovative Teaching and Learning. How to Productively Address AI-Generated Text in Your Classroom.
University of Washington Center for Teaching and Learning. ChatGPT and Other AI-Based Tools.
University of Virginia Center for Teaching Excellence. Generative AI in Teaching and Learning.

Gen AI Detection Tools

With the widespread use of Generative AI, instructors have become concerned that students will use these tools to respond to assignments and then submit the output as their own. This concern has been accompanied by the development of tools to detect whether a written assignment was generated by AI. There is some controversy, however, about the effectiveness and utility of these tools and, in fact, about whether they should be used at all. Below are some resources that provide information to consider in deciding what to do.

  • This is a recent summary article on the status of AI detection tools in higher education: Professors Cautious of Tools to Detect AI Generated Writing
  • The University of Kansas Center for Teaching Excellence site discusses why instructors should use caution with AI detection tools, including TurnItIn. It offers suggestions for how to use these tools and what to consider when assessing the results. Why you should use caution with AI detectors
  • This post from Stanford University Human-Centered Artificial Intelligence, published in May 2023, presents findings indicating that “The detectors are not particularly reliable. Worse yet, they are especially unreliable when the real author (a human) is not a native English speaker.” AI Detectors Biased Against Non-Native English Writers
  • This post, written in August 2023, introduces generative AI and discusses the reliability of AI detectors: How Reliable Are AI Detectors?
  • This post from the Cornell University Center for Teaching Innovation explains why they do not recommend the use of AI detection tools: Detecting AI Generated Content

“We currently do not recommend using current automatic detection algorithms for academic integrity violations using generative AI, given their unreliability and current inability to provide definitive evidence of violations. We believe that establishing trusting relationships with students and designing authentic assessments will likely be far more effective than policing students.”