Description
The use of artificial intelligence (AI) in scholarly publishing has expanded rapidly, generating new ethical, technical, and regulatory challenges. Within the framework of open science, these technologies offer opportunities to streamline scientific production, improve editorial processes, and broaden access to knowledge. However, they also raise concerns related to responsible authorship, algorithmic transparency, and the integrity of research outputs.
This poster presents a review of recent editorial policies and guidelines on the use of AI in scholarly communication, with an emphasis on open access environments. Among the most relevant findings is a growing consensus that AI tools (especially generative models) should not be listed as authors. Full responsibility for the content lies with the human authors, who must explicitly disclose any use of AI and ensure the validity and originality of the final text. There is also a notable trend among scientific journals toward requiring greater transparency about the provenance of training data and the behavior of the models used.
The goal is to foster critical reflection on how to design ethical and inclusive editorial policies that align with open science principles, enabling the benefits of AI without compromising quality, equity, or accountability in scholarly communication.
Tagline
Analysis of emerging editorial policies on the responsible use of generative AI in open access publishing, highlighting ethical risks, authorship criteria, and the need for transparency to ensure trust and integrity in open science.
Keywords
Artificial intelligence; Open access publishing; Editorial policies; Ethical guidelines