By: Francesca Mazzi, Post Doctoral Research Fellow in AI and Sustainable Development at Saïd Business School, University of Oxford, and a Research Associate at the Digital Ethics Lab of the Oxford Internet Institute, University of Oxford.
The topic of AI-generated inventions requires some preliminary clarifications: what do we mean by AI? And what is the meaning of "AI-generated inventions"?
When we speak of AI, we refer to technologies that can complete tasks usually considered to require "intelligence". The term "AI-generated inventions" is used from a patent perspective to refer to inventions partly or entirely generated using AI.
But to what extent is AI used to perform inventive activities? The answer is not straightforward. On the one hand, a group of stakeholders performed the DABUS experiment, filing a patent application for an invention supposedly generated entirely by AI and naming the AI as the inventor; on the other hand, the extent to which AI is involved in, for example, the pharmaceutical inventive process is unclear and not fully disclosed. In fact, because of the difficulties in obtaining appropriate IP protection for the AI itself, and the competitive advantage derived from using AI in inventive activities, AI is often kept as a trade secret. This proved particularly true for the pharmaceutical sector: qualitative interviews conducted by the researcher in the pharmaceutical field confirmed that employees of pharmaceutical companies in different roles (inventors and heads of IP and R&D departments) are not willing to disclose details concerning the use of AI and data in inventive activities.
Hence, to investigate whether AI-generated inventions could and should be patentable, we consider two scenarios: the first, where a human-in-the-loop is still needed to use the AI in the inventive process, and the second, where AI can invent autonomously.
In both scenarios, we evaluate the following aspects of patent law, identified as potentially challenging in relation to AI-generated inventions: (1) inventorship, namely the requirement to name an inventor in a patent application; (2) inventive step, i.e. the patentability requirement that, to be patentable, an invention must not be obvious to the person skilled in the art; (3) sufficiency of disclosure, i.e. the level of disclosure that should enable the person skilled in the art to carry out the invention; and finally, at a theoretical level, (4) the compatibility of AI-generated inventions, in both scenarios, with the justification theories at the heart of the patent system.
Patentable subject matter, novelty and industrial applicability were not considered challenges worth exploring in relation to AI-generated inventions.
Inventorship appears problematic. Although the EPC leaves the definition of "inventor" to its member states, a wide majority of states requires, explicitly or implicitly, that the inventor be a human being. The inventor is the person who substantially contributes to the invention and can be defined as the person having intellectual domination over the invention. Hence, when a human uses AI to perform certain inventive activities, as in scenario 1, the notion of inventor could potentially refer to the people directing the AI, providing the parameters, validating the output, and so on. In scenario 2, however, the requirement appears inefficient, as there would be no human inventor.
The inventive step is usually evaluated against a human parameter, i.e. the person skilled in the art. This fictional figure is constructed differently depending on the field of technology and on the tools normally used in that field when inventing: it could even be a team with access to advanced technologies. In scenario 1, evaluating the inventive step of AI-generated inventions therefore depends mainly on how widely AI technologies and data have been diffused. If AI is widely used and the datasets employed are public, the bar for obtaining patent protection could be high. Moreover, secondary indicia used in certain jurisdictions, such as "reasonable expectation of success" and "obvious to try", might pose a challenge as well, depending on how obvious it would be to use a certain technology and whether it is based on proprietary or public datasets. In scenario 2, if the kind of AI that invents autonomously is widespread, the parameter of the person skilled in the art becomes outdated.
Concerning sufficiency of disclosure, the challenge of AI-generated inventions seems to relate mainly to inventions generated through black-box AI that cannot be fully explained, in both scenarios 1 and 2. Indeed, the term "black-box AI" describes a situation where neither the programmer nor the company employing the AI can understand what is happening inside it, for example how the different layers of a neural network evolve and interact with each other. If this type of AI produces an output that could in theory be innovative but is not entirely explainable, because the logical process the AI followed to reach it is not accessible, then writing an enabling disclosure might prove problematic.
Overall, in scenario 1 it seems that current laws and parameters could be stretched to accommodate AI-generated inventions, with some remaining grey areas concerning inventive step and inventorship. In scenario 2, where AI invents autonomously, adjustments would be needed to accommodate AI-generated inventions with regard to both inventive step and inventorship.
However, the crucial question concerns the compatibility of AI-generated inventions with the patent system's rationale: why do we have the patent system in the first place? What societal goal does it serve? Can that goal still be pursued in an AI- and data-driven innovation scenario? The question lies in the societal desirability of patent protection for AI-generated inventions, i.e. whether the patent system as it stands can still incentivize innovation.
A few concluding remarks: the open questions could be further refined when it comes to the pharmaceutical industry. Indeed, given the field's specificities and the sector's relevance for public health, as is evident now in the course of a global pandemic, a purely legal analysis of patent law cannot answer the question of the social desirability of patent protection for AI-generated drugs. Other aspects, such as competition law, the economics of the pharmaceutical market, open-access data policies and, more generally, public policy, should be considered to ensure incentives for a sustainable pharmaceutical market in the era of Industry 4.0.
Francesca Mazzi is a post-doctoral research fellow in AI and Sustainable Development at Saïd Business School, University of Oxford, and a research associate at the Digital Ethics Lab of the Oxford Internet Institute, University of Oxford. From 2017 to 2020 she was a Marie Skłodowska-Curie Early Stage Researcher within the EIPIN-Innovation Society, in a double doctorate programme between Queen Mary University of London and Maastricht University.