This article was originally published at Jornal da Unicamp. Click here to read the original text in Portuguese.
Among the numerous issues being discussed regarding the potential impacts of artificial intelligence (AI) on humanity are those related to the production and use of knowledge. This year’s Nobel Prizes focused on advances directly related to AI and computer science. Apparently, nothing will be done without the use of such tools, including ones we don’t yet know and that may emerge in the near future.
I do not remember any innovation that has spread and had an impact as quickly as ChatGPT, which, launched at the end of 2022, immediately became a global reference and spurred the development of other tools based on Large Language Models. And, experts say, we are only at the beginning of a series of transformations whose novelty makes predictions even more difficult.
Creating things, as we know, depends on new knowledge, which, as we also know, is largely based on science and technological development. Sometimes it starts in laboratories, sometimes in inventive practice. To this day, the creations that have had the most impact on humanity have come from scientific research and the entrepreneurial and visionary capacity of a few.
With AI, some immediate issues to be faced in the world of science and technology are authorship and the reproducibility of creations, two fundamental aspects of the governance of science and technology.
Who should be attributed the authorship and ownership of something created by generative AI? How can we ensure the reproducibility of a model that, with each use, learns and changes, and therefore is no longer the same?
Today, the answer, as we will see below, has been pragmatism, since autonomous creations and inventions, though not yet circulating among us, are already emerging in laboratories, such as AI-designed antibiotics to combat resistant bacteria. As for the future (which does not seem far away), it is impossible to predict the answer, especially since we do not know what might be created or invented, or how different it will be from the original event.
Regarding authorship, as discussed by the authors of the article “A Shift in the World of Science,” published in The New York Times (NYT) on October 13, scientists might produce the tools that later enable advances in knowledge—advances that they themselves will no longer be the authors or inventors of. In the world described in that article, inventors of algorithms and tools capable of learning and creating will no longer do the revolutionary work and might even lose the attribution of causality over what will be done with the tool they developed. Perhaps they will be the precursors of this work, much like the character Eldon Tyrell, from the Tyrell Corporation, in the classic sci-fi film Blade Runner.
In the NYT article, the authors (who are, I believe, not machines) ask: “Who holds the discovery? Where does the machine end and the human begin?” They highlight that today, science can increasingly be considered the result of collective efforts, and that the AI tool used by the researchers who won this year’s Nobel Prize in Chemistry was trained based on a database that gathered the work of over 30,000 biologists. Also, in the case of AI-developed antibiotics mentioned earlier, a large amount of data (and researchers) had to be gathered. This, in fact, is one of the biggest challenges for AI adoption in scientific and technological research: the need for large databases of research results and advances that can be combined to train the tools.
In the field of intellectual property (IP), the issue of how to treat inventions based on AI may be the most important subject in patent offices around the world. IP is a highly institutionalized, codified matter governed by national standards and some international ones validated by national authorities. National patent offices (such as the United States Patent and Trademark Office – USPTO, the European Patent Office, and the National Institute of Industrial Property – INPI in Brazil), which also manage other forms of IP protection, have clear rules regarding what can and cannot be characterized as an invention eligible for intellectual property protection.
All consider that a patentable creation must be the result of human inventiveness (in addition to novelty and industrial applicability), and none (as of today) considers granting property rights to a non-human entity. The text presented by Guerra et al. (2023) shows that none of the major national patent offices (those of the United States, the European Union, and China), nor Brazil’s, had, as of the end of 2023, specific regulations for creations resulting from AI.
However, since this is a subject of rapid and constant evolution, in February 2024, the USPTO published guidelines for patenting AI-assisted inventions. The main discussion in these guidelines concerns the need for a natural person to be involved for ownership to be attributed to an invention. The text states:
“Although AI systems and other non-human entities cannot be listed as inventors on patent applications or patents, the use of an AI system by a natural person does not prevent that person from qualifying as the inventor (or co-inventors) if they have contributed significantly to the claimed invention (…) Patent applications and patents for AI-assisted inventions must name the natural person(s) who contributed significantly to the invention as the inventor(s) or co-inventors.”
“Similarly, the Federal Circuit has made clear that conception is the touchstone of inventorship (…) Since conception is an act performed in the mind, it has so far been understood as something done only by natural persons.”
As of now, as the text points out, for a generative AI that creates something new through its own learning, at a distance from its creator, there is no legal provision different from the current framework, either at the USPTO or at other national offices.
The USPTO guidelines document, as well as those of other countries, adopts a pragmatic stance in the face of such uncertainty:
“The USPTO will continue to assume that the inventor(s) named in an application are the actual inventors. And applicants will continue to be responsible for meeting their existing obligations with the USPTO. Only in rare cases, where an examiner determines (…) that one or more of the named inventors may not have invented the claimed subject matter, will issues of inventorship be raised during examination. From the examiner’s perspective, it will not matter whether the AI or other advanced computer system performed actions that could rise to the level of inventorship. What matters, according to the guidelines, is whether the actions of at least one human can be shown to be sufficient to meet the level of inventorship (…)”
In other words, it’s what we have for today.
As a former U.S. Secretary of Defense once said, famous for an action that was not exactly a success (the invasion of Iraq in the early 2000s): “There are known unknowns, and there are unknown unknowns.”
The future of AI lies in both of those statements, but it fits better in the second one. Creating things that we don’t even know could exist is perhaps the greatest unknown we have to face with the emergence of AI in science and technology.
This is what we have for tomorrow.
Note: I would like to thank Professor Anderson de Rezende Rocha from the Institute of Computing (IC) at Unicamp, who suggested important adjustments and topics for this text. Any remaining errors and inaccuracies are solely my responsibility.
[1] See pages 2123 to 2132 of: https://www.altecasociacion.org/_files/ugd/9d974b_5fa00e9cdfd64130ae3042efe321406f.pdf
[2] Translation made with ChatGPT and reviewed by the author.
[3] Another guideline document is from the Singapore IP Office, which discusses concrete situations that point to potential changes in the regulatory framework and case law, although the document itself is not binding and does not replace current legislation in that country.