
Generative AI in the Classroom & Research: Research Best Practices

Copyright and AI-generated content

The US Copyright Office “has launched an initiative to examine the copyright law and policy issues raised by artificial intelligence (AI) technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training. After convening public listening sessions in the first half of 2023 to gather information about current technologies and their impact, the Office will publish a notice of inquiry in the Federal Register.” For additional announcements and links to documents, see https://www.copyright.gov/ai/.  

The Office's basic position as of mid-March 2023 is that texts and images generated solely or primarily by AI are not eligible for copyright protection; texts and images that originated with AI but include significant authorial contributions may be eligible on a case-by-case basis. Guidance published in the Federal Register on March 16, 2023, outlines this position and gives instructions for completing copyright registration forms to acknowledge the role of AI in any work being submitted. See https://www.govinfo.gov/content/pkg/FR-2023-03-16/pdf/2023-05321.pdf.

Using ChatGPT and AI in research

Artificial intelligence has many beneficial uses across all fields of research and scholarly activity. Below are selected readings, organized by discipline.

Education

Fine Arts

Health Sciences

  • Ding, H., Wu, J., Zhao, W., Matinlinna, J. P., Burrow, M. F., & Tsoi, J. K. H. (2023). Artificial intelligence in dentistry: A review. Frontiers in Dental Medicine, 4. https://doi.org/10.3389/fdmed.2023.1085251
  • Douthit, B. J., Shaw, R. J., Lytle, K. S., Richesson, R. L., & Cary, M. P. (2022, January 11). Artificial intelligence in nursing. American Nurse. https://www.myamericannurse.com/ai-artificial-intelligence-in-nursing/
  • Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023). Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637-639. https://doi.org/10.1001/jama.2023.1344
  • Iosa, M., Paolucci, S., Antonucci, G., Ciancarelli, I., & Morone, G. (2023). Application of an artificial neural network to identify the factors influencing neurorehabilitation outcomes of patients with ischemic stroke treated with thrombolysis. Biomolecules, 13(2), 334. https://doi.org/10.3390/biom13020334

Humanities

Sciences

  • Martin, H. G., Radivojevic, T., Zucker, J., Bouchard, K., Sustarich, J., Peisert, S., Arnold, D., Hillson, N., Babnigg, G., Marti, J. M., Mungall, C. J., Beckham, G. T., Waldburger, L., Carothers, J., Sundaram, S., Agarwal, D., Simmons, B. A., Backman, T., Banerjee, D., Tanjore, D., Ramakrishnan, L., & Singh, A. (2023). Perspectives for self-driving labs in synthetic biology. Current Opinion in Biotechnology, 79, 102881. https://doi.org/10.1016/j.copbio.2022.102881
  • Xie, S. (2023). Perspectives on development of biomedical polymer materials in artificial intelligence age. Journal of Biomaterials Applications, 37(8), 1355-1375. https://doi.org/10.1177/08853282231151822
  • Zhao, C., Mahmoudi, E., Luo, M., Jiang, M., & Lin, P. (2023). Unfavorable geology recognition in front of shallow tunnel face using machine learning. Computers and Geotechnics, 157, 105313. https://doi.org/10.1016/j.compgeo.2023.105313

Social Sciences

What does ChatGPT say?

We asked ChatGPT to answer the following questions.

Q: How can artificial intelligence be used in scholarly research?

A: Artificial intelligence (AI) can be used in scholarly research in a variety of ways. Here are a few examples:

  1. Data Analysis: AI algorithms can analyze large volumes of data much faster than humans can, making them well-suited for tasks such as text analysis, image recognition, and data mining. This can help researchers identify patterns, trends, and correlations in their data that might be difficult to detect otherwise.

  2. Natural Language Processing: AI-powered tools can be used to analyze written or spoken language, allowing researchers to identify themes, sentiment, and other characteristics of text data. This can be particularly useful in fields such as linguistics, literature, and communication studies.

  3. Predictive Modeling: AI can be used to develop predictive models that can help researchers make predictions about future events or behaviors. This can be useful in fields such as psychology, economics, and political science, where understanding human behavior and decision-making is important.

  4. Image Analysis: AI algorithms can be used to analyze images and identify patterns or features that might be difficult for humans to detect. This can be useful in fields such as medicine, biology, and environmental science, where analyzing images is an important part of the research process.

  5. Collaborative Writing: AI-powered tools such as language models can assist researchers in writing academic papers or books, by suggesting sentence structure, vocabulary, and even entire paragraphs. This can help save time and improve the quality of the final product.

Overall, the use of AI in scholarly research has the potential to greatly enhance the research process, by enabling researchers to analyze data more quickly and accurately, make more accurate predictions, and collaborate more effectively. However, it is important to carefully consider the potential ethical implications of using AI in research and ensure that AI-powered tools are being used in a responsible and transparent way.

Q: What are the ethical implications of using artificial intelligence in scholarly activities?

A: The use of artificial intelligence (AI) in scholarly activities has several ethical implications, including:

  1. Bias: AI models are only as good as the data they are trained on, which means that they can perpetuate biases that exist in the data. This could lead to unfair treatment of certain individuals or groups, particularly those who come from underrepresented backgrounds.

  2. Privacy: AI-powered tools may require access to personal data, which raises privacy concerns. In addition, if personal data is used to train AI models, there is a risk that it could be used to create profiles or make predictions about individuals that could be used in ways that are harmful to them.

  3. Intellectual property: The use of AI in scholarly activities can raise issues related to intellectual property. For example, if an AI system is used to generate text or images, it may be unclear who owns the resulting intellectual property.

  4. Transparency: Some AI models, particularly those that use deep learning techniques, can be difficult to interpret. This lack of transparency could make it difficult to understand how decisions are being made, which could be problematic in situations where decisions have significant consequences.

  5. Responsibility: The use of AI in scholarly activities raises questions about who is responsible for the decisions that are made by these systems. This is particularly relevant in situations where decisions made by AI systems have significant consequences, such as in hiring or admissions decisions.

Overall, while there are potential benefits to using AI in scholarly activities, it is important to carefully consider the ethical implications and take steps to mitigate any potential harm. This may involve being transparent about the use of AI, ensuring that AI-powered tools are fair and unbiased, protecting privacy and intellectual property, and establishing clear lines of responsibility for decisions made by AI systems.
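To make the "text analysis" use case mentioned in the first answer a bit more concrete, the short Python sketch below counts the most frequent terms in a passage of text. It is a minimal, hypothetical illustration using only the Python standard library (the helper name top_terms and the sample text are invented for this example, not taken from any specific research tool); real projects would typically use dedicated natural language processing libraries with proper tokenization and stop-word handling.

import re
from collections import Counter

def top_terms(text, n=5):
    """Return the n most common lowercase word tokens in text (hypothetical helper)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(n)

sample = (
    "Artificial intelligence can help researchers analyze large volumes of text, "
    "and text analysis can surface recurring themes in that text."
)
for term, count in top_terms(sample):
    print(term, count)

Running the sketch prints the handful of words that recur most often in the sample passage, which is the simplest form of the pattern-finding described above.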

Should I cite ChatGPT?

Citing ChatGPT and other generative AI tools keeps your work transparent and demonstrates that you have considered the ethical issues involved. If generative AI is used in research, best practice is to acknowledge the use of the tool in the narrative itself.

Suggestions for citations may be found below:

Publisher policies

Several publishers have recently shared policies regarding the use of AI in manuscripts. In general, they clarify that AI cannot be listed as an author/co-author, and that use of AI must be declared. Policies by publisher name: 

In addition to individual publishers, prominent organizations have established positions against allowing generative AI bots to be listed as authors. See links below by organization:

A word about hallucination

Hallucination refers to the outright fabrication of information by generative artificial intelligence. Because generative AI produces text by drawing statistical connections from its training data to emulate human writing, rather than by reasoning about facts, it cannot reliably check its own output for accuracy.

Researchers should exercise caution when relying on AI for content, particularly because AI can present false information in a confident tone that lends it unwarranted credibility. For example, ChatGPT has been known to fabricate references and to supply fake evidence in response to health sciences questions (Alkaissi & McFarlane, 2023).

 

Reference

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179