ChatGPT | AI should not author scientific studies, says Nature

The popularization of ChatGPT, the artificial intelligence model developed by the American company OpenAI, has set off alarm bells in the scientific community: the tool has even been listed as an author of academic studies. Against this backdrop, Nature, one of the most respected scientific journals, announced this week that an AI cannot and will not be considered an author in its publications.

“As researchers delve into the brave new world of advanced AI chatbots, publishers need to recognize their legitimate uses and establish clear guidelines to prevent abuse,” states Nature. The journal argues that those limits need to be recognized now.

The expectation now is that other major scientific publications will adopt the same position. Notably, Nature does not rule out the use of AI in research altogether; it simply insists that only real, human authors be credited, since authors must be held accountable for any errors or failures in the work. Blaming an AI such as ChatGPT makes no sense, because the tool has no explicit commitment to seeking the truth.

Understanding the ChatGPT problem in the scientific community

According to Nature's reporting, ChatGPT has already been listed as an author on a preprint (a scientific study that has not yet been peer reviewed) published on the medRxiv platform in December of last year. It has also been credited as a co-author on articles in the journals Nurse Education in Practice and Oncoscience.

“A major concern in the research community is that students and scientists may deceptively pass off text written by LLMs [large language models, such as ChatGPT] as their own, or use LLMs in a simplistic way (such as conducting an incomplete literature review) and produce work that is unreliable,” warns the journal.

To regulate such practices, Nature has defined two rules:

  • No tool will be accepted as an author in a study. “This is because any attribution of authorship carries responsibility for the work, and AI tools cannot assume that responsibility,” clarifies Nature.
  • The use of such tools should be documented in the methods or acknowledgments sections of a scientific article.

Is it possible to tell whether a scientific study was written by an AI?

Despite the new rules, it is worth asking: is it possible to identify a text written by an AI? According to Nature, the answer is “maybe”, as it depends on “close inspection” and tends to be more accurate “when more than a few paragraphs are involved”. The FreeGameGuide team has even put that very question to ChatGPT itself.

Another issue is that the tool does not cite the sources used in its responses, something fundamental in the academic world. “But in the future, AI researchers can work around these problems; there are already some experiments linking chatbots to source-citation tools, for example, and others training chatbots on specialized scientific texts”, notes Nature.

On another front, Nature’s publisher, Springer Nature, is developing its own technology to identify AI-generated text in academic work. In the coming years, this clash may mark the beginning of a new era in the production of scientific knowledge, and serve as a litmus test for human creativity and intellect.

Source: Nature (1) and (2)