Shared Editorial Policies on AI

{Agreed to by the Editors-in-Chief of the following: Journal of Operations Management (see this content at: https://www.jom-hub.com/ethical-guidelines), Production and Operations Management, Decision Sciences Journal - 10/11/24; Adopted by the International Journal of Operations & Production Management 12/19/24}


Preamble

Artificial Intelligence (AI) has become embedded in everyday conversations, including those about academic research. Often there is ambiguity around the term itself. For clarity in the present discussion, we will use the shorthand ‘AI’ to refer only to AI of a generative nature (GenAI), trained on datasets that are outside the direct visibility and control of a research team. We will also focus on the use of such AI as a means of ‘research support’ (e.g., in conducting and documenting research), as opposed to research into how AI influences real-world contexts of study (i.e., research on AI). This allows us to focus meaningfully on the scenarios in which the application of such AI can (or cannot) be viewed as justifiable, and on the boundaries around any such justified use.

While the benefits of AI are frequently touted by the press, we are just beginning to explore the potential impact these tools can have on the creative process. For example, what is the potential for AI usage to dampen human reflexivity, creativity, and critical thinking? Could it discourage new ideas for research? Is responsible research compromised by conducting research with AI? While AI-related tools may be able to augment aspects of the research process, there are also questions regarding their potential to impair academic integrity. What might be the consequences of feeding unpublished work into a generative database? In essence, unvetted research would become part of the sphere of knowledge, further affecting the integrity and capabilities of AI tools [1], with non-trivial implications.

AI tools can exhibit bias and produce inaccuracies and falsehoods (commonly called hallucinations). These are not easy to detect or verify, let alone to correct. The ability to attribute prior work is also compromised by the use of AI tools. In addition, AI providers and users may rely on user interactions, expropriating their input or output and thus infringing on the intellectual property rights of different constituents.

Given these risks, a group of Operations Management journals has chosen to collaborate and agree on a standard set of policies regarding the use of AI in academic research. In what follows, we define general guidelines on the use of AI and its documentation requirements. The policies are aimed at standardizing agreed-upon usage and reporting, but they are also intended to reflect and inform agreed practice within the community. As such, we encourage members of the community to provide feedback and suggestions for improving these policies so that they more accurately reflect our understanding of the technology and its performance; please use the link at the bottom of this document, keeping in mind our specific AI focus as articulated in the first paragraph of this discussion. Finally, we acknowledge that the technology is rapidly evolving and that these policies will require frequent updates. We encourage the community to revisit these policies regularly to stay up to date with the agreed guidelines.

General Policies for Authors

When authors submit their work to our journals, they are accountable for the originality, validity, and integrity of the content that is submitted. If AI tools are used in their research projects, authors must carry out such usage responsibly, attend to each journal’s ethical standards for research, and abide by the journal’s authorship guidelines.

The journals represented in this collaboration support the responsible use of AI tools in ways that respect high standards of data security, confidentiality, and copyright protection, albeit with specific expectations regarding the disclosure, justification, and verification of such use, with associated implications for author responsibility regarding the verification of any output generated. These expectations are detailed in the policies below.


The journals do not permit the use of AI in the creation or manipulation of images, figures, or other forms of accessible empirical data for use in our publications. The term “images, figures, or other forms” includes pictures, charts, data tables, medical imagery, snippets of images, computer code, and formulas, as well as film, audio, field reports, or other media. The term “manipulation” includes augmenting, concealing, moving, removing, or introducing a specific feature within an image or figure.


Policies on the Use of AI

1. Disclose and document the use of AI. Apart from grammatical and copy-editing applications, disclosure should be made on the journal submission page, in the methods section, and in the acknowledgments section if one is used. Authors are to disclose the following information: the full name of the tool along with its version number, and how and when it was deployed. Authors must acknowledge the limitations of language models in the manuscript, including the potential for bias, errors, and gaps in knowledge. Authors should cite their AI use as outlined in the Chicago Manual of Style.


2. Justify the use of AI. Apart from grammatical and copy-editing applications, in any instance in which AI is applied, the manuscript should explain the reasons for its use. This justification needs to explain not only why alternative methods were insufficient but also what precautions the author team took to avoid potential biases, errors, hallucinations, and the like. As a basic ground rule, AI is not permitted to generate substantive content for articles. This prohibition extends to using AI as a source of generalized overviews, ideas and concepts, motivational statements, theories and arguments, references to related literature, and discussions.

3. Verify and take responsibility for the AI output. The points below apply to all uses of AI, including grammatical and copy-editing applications.


Generative AI tools should not be listed as authors, as these tools cannot take responsibility for the content produced, nor can they consent to or sign copyright and licensing agreements. In accordance with COPE’s position statement on Authorship and AI tools, these tools cannot fulfil the role of, nor be listed as, an author of an article.


Authors are to verify the accuracy, validity, and appropriateness of the content and any citations generated by language models, and to correct any errors or inconsistencies. Manual coding of a subsample, for example, should be conducted to check against the risk of hallucinations in a larger-scale AI application of this type. Similarly, AI may not be used as a fully black-boxed, stand-alone analytical process. Nor may it be used to directly generate summaries, interpretations, or claims, or, in general, to craft findings. Findings from established benchmark methods must be provided for validation.
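
To make this concrete, the sketch below illustrates one way such a subsample check could be carried out. It is a minimal illustration under stated assumptions, not a procedure prescribed by these policies: the use of Python, the coding scheme, the labels, and the helper functions (percent_agreement, cohens_kappa) are all hypothetical. It compares a manually coded subsample with the corresponding AI-generated codes via percent agreement and Cohen’s kappa; low agreement would signal that the AI output cannot be trusted without further manual review.

    # A minimal, hypothetical sketch of auditing AI-generated qualitative
    # codes against a manually coded subsample (not prescribed by this policy).
    from collections import Counter

    def percent_agreement(human, ai):
        """Share of items where the human coder and the AI assign the same code."""
        return sum(h == a for h, a in zip(human, ai)) / len(human)

    def cohens_kappa(human, ai):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        n = len(human)
        observed = percent_agreement(human, ai)
        h_counts, a_counts = Counter(human), Counter(ai)
        # Expected chance agreement: sum over labels of the product of
        # each coder's marginal proportion for that label.
        expected = sum((h_counts[l] / n) * (a_counts[l] / n)
                       for l in set(human) | set(ai))
        return (observed - expected) / (1 - expected)

    # Hypothetical subsample: codes assigned independently by a human coder
    # and by an AI tool to the same ten text fragments.
    human_codes = ["delay", "quality", "delay", "cost", "quality",
                   "delay", "cost", "quality", "delay", "cost"]
    ai_codes = ["delay", "quality", "cost", "cost", "quality",
                "delay", "cost", "delay", "delay", "cost"]

    print(f"Percent agreement: {percent_agreement(human_codes, ai_codes):.2f}")  # 0.80
    print(f"Cohen's kappa: {cohens_kappa(human_codes, ai_codes):.2f}")           # 0.70

Whatever metric is used, the point of the exercise is the same: the human-coded subsample serves as the benchmark against which the larger-scale AI output is validated, and the authors remain responsible for acting on any disagreement it reveals.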


Authors are to be conscious of the potential for plagiarism that AI-assisted language improvement can introduce. Since AI draws substantially on text from other sources, multi-word edits recommended by grammar and copy-editing applications can constitute de facto plagiarism, even if unintended. Check the original sources to be sure you are not plagiarizing someone else’s work.


Applying the technology should be done with human oversight and control, and all work should be reviewed and edited carefully, because AI can generate authoritative-sounding output that is incorrect, incomplete, or biased. The authors are ultimately responsible and accountable for the contents of the work.


Policies for Editors and Reviewers 

The journals strive for the highest standards of editorial integrity and transparency. Due to a range of concerns, including confidentiality, editors and reviewers must not upload files, images, or information from unpublished manuscripts into Generative AI tools. Doing so may infringe upon the rightsholder’s intellectual property rights.


Reviewers and editors must not use AI tools to generate review reports.


[1] https://www.nytimes.com/interactive/2024/08/26/upshot/ai-synthetic-data.html?unlocked_article_code=1.F04.3Taq.v42VAMZqh023&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb
