
A Guide to Artificial Intelligence (AI) for Students

Institutional Data and AI Guidance

East Carolina University recognizes that generative AI holds the promise of advancing research, development, and education. Generative AI is a type of AI that creates new content based on patterns learned from large training datasets, in response to prompts provided by the user. Several well-known generative AI tools are OpenAI’s ChatGPT and DALL-E, Microsoft’s Bing Chat and Copilot, and Google’s Bard.

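The short sketch below shows, in the most minimal way, what "new content in response to a prompt" looks like in practice. It is only an illustration, not an ECU-endorsed workflow: it assumes the OpenAI Python SDK is installed, that an OPENAI_API_KEY environment variable is set, and that the model name shown is one you have access to.

    # Minimal sketch: send a prompt to a generative model and print the reply.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; substitute any model you can use
        messages=[{"role": "user", "content": "Write two sentences about pirates."}],
    )

    # The reply is text generated on the fly from learned patterns,
    # not retrieved from a fixed database of answers.
    print(response.choices[0].message.content)
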

East Carolina University encourages exploration of these products and services, but individuals must remain cognizant of the data they provide to these tools and abide by copyright law and compliance regulations, as well as ECU’s Faculty Manual Academic Integrity Regulation, Employee Code of Conduct, and Student Code of Conduct.

Potential for Misinformation

  • Citation "hallucinations"
    • AI tools tend to invent citations to articles that do not exist, or to fabricate information when they do not know the answer or do not understand the prompt (a verification sketch follows this bullet)

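One practical way to catch a hallucinated citation is to check whether its DOI actually resolves. The minimal Python sketch below queries the public Crossref REST API (https://api.crossref.org); the DOI shown is a made-up placeholder, and even a positive match is only a first step before reading the source itself.

    # Minimal sketch: ask Crossref whether a DOI from an AI-generated citation exists.
    # A 200 response means Crossref has a record; a 404 means it found nothing,
    # which is a strong hint the citation may be fabricated.
    import requests

    def doi_exists(doi: str) -> bool:
        response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return response.status_code == 200

    suspect_doi = "10.1234/example-doi-from-a-chatbot"  # hypothetical placeholder
    if doi_exists(suspect_doi):
        print("Crossref has a record for this DOI; now read the actual article.")
    else:
        print("No Crossref record found; the citation may be a hallucination.")
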

  • Possible inaccuracies
    • It can be challenging to tell whether AI output is real and true
    • You still need to critique and evaluate sources! Verify information elsewhere from a credible source
      • Make sure you cite any AI tools used
    • Problems range from simple inaccuracies to intentionally manipulative deepfakes, which undermine trust and accountability and can be malicious
    • Most large language models (LLMs) cannot access the most recently published work
    • Inconsistencies: the same question may not produce the same results over time


  • Bias and Discrimination
    • AI systems are trained on data sets created by humans, which can contain inherent biases. These biases can be reflected in the AI's outputs, leading to discriminatory outcomes.
      • For example, an AI algorithm used in hiring decisions might favor resumes containing certain keywords, unintentionally filtering out qualified candidates from underrepresented groups (a toy illustration follows this list)
      • AI can amplify existing societal biases

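The toy sketch below illustrates the hiring example: a screen keyed to words that happened to appear in past hires passes one candidate and filters out another, even though both list the same relevant experience. Every keyword and resume here is invented purely for illustration; real systems are far more complex, but the failure mode is the same.

    # Hypothetical toy example: a resume screen built around keywords from a skewed
    # history of past hires reproduces that bias on new applicants.
    KEYWORDS_FROM_PAST_HIRES = {"varsity", "lacrosse", "fraternity"}  # invented, skewed pattern

    def passes_screen(resume_text: str) -> bool:
        words = set(resume_text.lower().replace(",", " ").split())
        return bool(words & KEYWORDS_FROM_PAST_HIRES)

    resumes = {
        "Candidate A": "varsity lacrosse captain, marketing internship",
        "Candidate B": "first-generation student, marketing internship, club founder",
    }

    # Both candidates list a marketing internship, but only A matches the learned keywords.
    for name, text in resumes.items():
        print(name, "passes the screen" if passes_screen(text) else "is filtered out")
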

  • Privacy and Security
    • AI can reveal sensitive information
    • Risk of data leaks/improper access
    • Potential for misuse of personal information
    • Failure to inform users about data tracking and a lack of transparency
      • Surveillance: tracking and monitoring of individuals, potentially infringing on civil liberties


  • Copyright
    • Possible Copyright Infringement
      • Copyright law hasn't yet fully adapted to the complexities of AI-generated content. Courts are starting to grapple with these issues, but there are no definitive answers yet.
    • Who owns the copyright?
      • It's unclear who owns the copyright of creative content generated by AI. Is it the programmer, the company that created the AI, or the AI itself (which currently isn't recognized as an author by copyright law)?
    • Fair Use? Transformative Use? 
      • Copyright law includes the concept of fair use, allowing limited use of copyrighted material for purposes like criticism, commentary, or education. Whether using AI tools on copyrighted content falls under fair use is an open question, with lawsuits ongoing.


  • Job Displacement
    • Jobs involving predictable, repetitive tasks are most susceptible to automation
    • AI is driving innovation in various sectors, leading to the creation of entirely new industries and job opportunities
    • In many cases, AI will augment human capabilities rather than replace them. Employees will need to develop new skills to work alongside AI systems


  • Existential Risk
    • Unintended consequences
      • AI could pose an existential threat to humanity if it develops goals that are incompatible with human values


Google. (2024). Gemini (April 22 version) [Large language model]. https://gemini.google.com