The US Copyright Office “has launched an initiative to examine the copyright law and policy issues raised by artificial intelligence (AI) technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training. After convening public listening sessions in the first half of 2023 to gather information about current technologies and their impact, the Office will publish a notice of inquiry in the Federal Register.” For additional announcements and links to documents, see https://www.copyright.gov/ai/.
Their basic position as of mid-March 2023 is that texts and images solely or primarily generated by AI are not eligible for copyright protection; texts and images that originated in AI but have had significant authorial contributions may be eligible on a case-by-case basis. Guidance in the Federal Register for March 16, 2023, outlines their perspective and gives instructions on filling out the copyright registration forms to acknowledge the role of AI in any work being submitted. See https://www.govinfo.gov/content/pkg/FR-2023-03-16/pdf/2023-05321.pdf.
Artificial intelligence has many beneficial uses across all fields of research and scholarly activity. Below are selected insights across the disciplines.
We asked ChatGPT to answer the following questions.
Q: How can artificial intelligence be used in scholarly research?
A: Artificial intelligence (AI) can be used in scholarly research in a variety of ways. Here are a few examples:
Data Analysis: AI algorithms can analyze large volumes of data much faster than humans can, making them well-suited for tasks such as text analysis, image recognition, and data mining. This can help researchers identify patterns, trends, and correlations in their data that might be difficult to detect otherwise.
Natural Language Processing: AI-powered tools can be used to analyze written or spoken language, allowing researchers to identify themes, sentiment, and other characteristics of text data. This can be particularly useful in fields such as linguistics, literature, and communication studies.
Predictive Modeling: AI can be used to develop predictive models that can help researchers make predictions about future events or behaviors. This can be useful in fields such as psychology, economics, and political science, where understanding human behavior and decision-making is important.
Image Analysis: AI algorithms can be used to analyze images and identify patterns or features that might be difficult for humans to detect. This can be useful in fields such as medicine, biology, and environmental science, where analyzing images is an important part of the research process.
Collaborative Writing: AI-powered tools such as language models can assist researchers in writing academic papers or books, by suggesting sentence structure, vocabulary, and even entire paragraphs. This can help save time and improve the quality of the final product.
Overall, the use of AI in scholarly research has the potential to greatly enhance the research process, by enabling researchers to analyze data more quickly and accurately, make more accurate predictions, and collaborate more effectively. However, it is important to carefully consider the potential ethical implications of using AI in research and ensure that AI-powered tools are being used in a responsible and transparent way.
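The text-analysis use case mentioned above can be illustrated with a toy sketch. This is not a real research pipeline, only a minimal, standard-library-only example of counting word frequencies in a small text sample; the function name `word_frequencies` and the sample corpus are invented for illustration.

```python
from collections import Counter
import re

def word_frequencies(text, top_n=3):
    """Return the top_n most common words in text (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Hypothetical sample text for demonstration only.
corpus = (
    "Artificial intelligence can help researchers analyze data. "
    "Researchers use data to identify patterns, and patterns inform theory."
)

print(word_frequencies(corpus))
# → [('researchers', 2), ('data', 2), ('patterns', 2)]
```

Real projects would typically use dedicated NLP libraries rather than hand-rolled regular expressions, but the principle is the same: reduce text to countable units, then look for patterns in the counts.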
Q: What are the ethical implications of using artificial intelligence in scholarly activities?
A: The use of artificial intelligence (AI) in scholarly activities has several ethical implications, including:
Bias: AI models are only as good as the data they are trained on, which means that they can perpetuate biases that exist in the data. This could lead to unfair treatment of certain individuals or groups, particularly those who come from underrepresented backgrounds.
Privacy: AI-powered tools may require access to personal data, which raises privacy concerns. In addition, if personal data is used to train AI models, there is a risk that it could be used to create profiles or make predictions about individuals that could be used in ways that are harmful to them.
Intellectual property: The use of AI in scholarly activities can raise issues related to intellectual property. For example, if an AI system is used to generate text or images, it may be unclear who owns the resulting intellectual property.
Transparency: Some AI models, particularly those that use deep learning techniques, can be difficult to interpret. This lack of transparency could make it difficult to understand how decisions are being made, which could be problematic in situations where decisions have significant consequences.
Responsibility: The use of AI in scholarly activities raises questions about who is responsible for the decisions that are made by these systems. This is particularly relevant in situations where decisions made by AI systems have significant consequences, such as in hiring or admissions decisions.
Overall, while there are potential benefits to using AI in scholarly activities, it is important to carefully consider the ethical implications and take steps to mitigate any potential harm. This may involve being transparent about the use of AI, ensuring that AI-powered tools are fair and unbiased, protecting privacy and intellectual property, and establishing clear lines of responsibility for decisions made by AI systems.
Citing ChatGPT and other generative AI keeps your work transparent and demonstrates ethical practice. If generative AI is used in research, best practice dictates mentioning the use of the tool in the narrative itself, not only in the citations.
Suggestions for citations may be found below:
Several publishers have recently shared policies regarding the use of AI in manuscripts. In general, they clarify that AI cannot be listed as an author/co-author, and that use of AI must be declared. Policies by publisher name:
In addition to individual publishers, prominent organizations have established positions against allowing generative AI bots to be listed as authors. See links below by organization:
Hallucination refers to the outright fabrication of information by generative artificial intelligence. Because generative AI produces text by predicting plausible word sequences from patterns in its training data rather than by reasoning about facts, its output is not consistently accurate, and it cannot reliably fact-check itself.
Researchers should therefore exercise caution when relying on AI-generated content, particularly because AI may present false information in a confident tone that lends it unwarranted credibility. For example, ChatGPT has been known to fabricate references for sources and to supply fake evidence in response to health sciences inquiries (Alkaissi & McFarlane, 2023).
Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179