Tips for using AI tools after the USPTO's recent guidance for practitioners

May 21, 2024

Writing for IPWatchdog, Darren Smith and Mike Mudrow discuss tips for using AI tools after the USPTO’s recent guidance to practitioners.

The U.S. Patent and Trademark Office (USPTO) recently released new guidance for practitioners using artificial intelligence (AI)-based tools. The guidance primarily serves as a reminder of longstanding requirements and best practices for patent and trademark practitioners. For example, patent practitioners have a duty of candor and good faith to the USPTO and a duty of confidentiality to their clients. The guidance does not announce any new law or rule regarding practice before the Office; rather, it provides some insight into how the Office expects practitioners to operate when incorporating AI-based tools into their practice.

Client confidentiality

One issue that the guidance highlights is the ethical duty for practitioners to maintain client confidentiality. Patent practitioners are required to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” 37 CFR 11.106(d). The guidance warns that “[u]se of AI systems to perform prior art searches, application drafting, etc. may result in the inadvertent disclosure of client-sensitive or confidential information to third parties through the owners of these systems, causing harms to the client.” For example, AI systems may retain information entered by users, use it to train their models, and allow portions of that information to filter into outputs the systems provide to others. Thus, patent practitioners should be especially diligent when using AI-based tools to ensure that client confidentiality is maintained.

This issue potentially overlaps with issues raised in some of the current litigation against generative AI companies. For instance, in the ongoing New York Times lawsuit against Microsoft and OpenAI, the Times alleged that the defendants’ AI models were trained on millions of its copyrighted news articles, supporting that allegation by demonstrating that the models could replicate large portions of the Times’s material. Similarly, when practitioners use AI-based tools in their practice, there is a potential for the release of confidential client data in the form of unpublished patent applications, drafts of patent applications, and/or invention disclosure materials that may be retained by the AI-based tool for training. There is thus a risk of LLMs regenerating confidential client data or generating material derived from it. Practitioners should be mindful of this and consider, for example, limiting their use of AI-based tools to private LLMs and/or carefully reviewing the terms of service contracts for AI-based tools (and whether those contractual terms are sufficient to satisfy their local ethics rules).

A related issue is that using AI-based tools in drafting patents may inadvertently cause a public disclosure of an invention before a patent application is filed, including when invention-related information is input into AI-based tools. For inventors and clients, the consequences could be even more severe than those of a breach of confidentiality alone: public disclosures before the filing of a patent application could jeopardize filing dates and even invalidate a resulting patent. Further, the use of AI-based tools may result in the addition of subject matter to the invention, which could change the proper inventorship. The guidance does not address the potential legal consequences that disclosures made through AI-based tools may have on patentability. The exact consequences for patentability would depend on what was disclosed to an AI-based tool, how the tool processes, stores, or trains on its input data, and how much access a third party would have to those inputs. Patent practitioners would be wise to exercise caution while preparing patent applications to avoid causing potentially anticipating disclosures.

Impact on the Office’s resources

The guidance also indicates that the Office is concerned with how practitioners’ and others’ use of AI-based tools will impact its limited resources. One example from the guidance is that AI may be used to automatically populate Information Disclosure Statement (IDS) forms. The guidance warns that “the unchecked use of AI poses the danger of increasing the number and size of IDS submissions to the USPTO, which could burden the Office with large numbers of cumulative and irrelevant submissions.” AI-based tools with access to large patent databases can potentially identify large numbers of references as relevant, which may inadvertently bury important prior art among a mass of less-relevant references. Such “burying” may expose the filer to a future claim of inequitable conduct, although the Federal Circuit has not held that burying constitutes inequitable conduct. See Illinois Tool Works, Inc. v. Termax LLC (N.D. Ill. Jul. 24, 2023). AI-based tools identifying references and auto-filling IDS forms with inadequate human oversight could exacerbate this problem.

To address this challenge, the Office’s guidance reminds practitioners of their duty to perform a “reasonable inquiry” to ensure papers submitted to the Office are factually accurate and not submitted for any improper purpose. In the case of large numbers of IDS references, a reasonable inquiry includes “not just reviewing the IDS form but reviewing each piece of prior art listed on the form.” As AI-based tools become more commonplace for these tasks, the practitioner’s review burden may be reduced by fine-tuning a threshold in the AI-based tool for the selection of prior art. This fine-tuning may incorporate feedback from the practitioner’s review of identified prior art references back into the AI-based tool, for example as labels in a training set containing those references. The practitioner will still need to evaluate each identified prior art reference, but the AI-based tool may be trained to identify fewer “false positive” references that do not need to be submitted.
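For illustration only, the sketch below shows one simple way such feedback could be used to calibrate a relevance cutoff: the cutoff is set just low enough to keep every reference the practitioner marked as relevant, so that later searches surface fewer false positives. The data structure, the scores, and the calibration rule are hypothetical assumptions, not features of any actual AI-based tool.

```python
# A minimal, illustrative sketch of threshold calibration from practitioner
# feedback. The Candidate structure, the relevance scores, and the
# keep-every-relevant-reference rule are assumptions for illustration only.

from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    ref_id: str      # e.g., a patent or publication number (toy values below)
    score: float     # relevance score returned by the (hypothetical) AI tool
    relevant: bool   # practitioner's judgment after manual review


def calibrate_threshold(reviewed: List[Candidate]) -> float:
    """Choose the highest cutoff that still keeps every reference the
    practitioner marked relevant, so later searches surface fewer
    false positives without dropping anything that belongs on an IDS."""
    relevant_scores = [c.score for c in reviewed if c.relevant]
    if not relevant_scores:
        return 0.0  # no feedback yet; do not filter anything out
    return min(relevant_scores)


def filter_candidates(candidates: List[Candidate], threshold: float) -> List[Candidate]:
    """Return only candidates at or above the calibrated cutoff; each one
    still gets a human review before it goes on the IDS form."""
    return [c for c in candidates if c.score >= threshold]


# Feedback from an earlier review cycle sets the cutoff for the next search.
reviewed = [
    Candidate("US1234567", 0.92, True),
    Candidate("US2345678", 0.71, True),
    Candidate("US3456789", 0.40, False),
    Candidate("US4567890", 0.15, False),
]
threshold = calibrate_threshold(reviewed)  # 0.71 for this toy data
print(f"calibrated threshold: {threshold:.2f}")
```

In a real workflow, the calibration data would grow with each review cycle, and the practitioner’s individual review of each surfaced reference would remain the final check.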

The guidance notes that the Office’s websites and tools, including Patent Center and its related databases, must also be safeguarded against improper access. For example, the Office notes that AI-based tools are not considered users for filing or accessing documents on the USPTO’s electronic filing systems and cannot obtain USPTO.gov accounts. Similarly, the guidance warns that “[u]sing computer tools, including AI systems, in a manner that generates unusually high numbers of database accesses violates the Terms of Use for USPTO websites.” Instead, the Office suggests that users “consider using the USPTO’s bulk data products for permitted and appropriate data mining efforts.” For example, the bulk data products may be used for training private LLMs. Before filing documents generated using private LLMs, a practitioner should review those documents for compliance with all USPTO rules and regulations, and then sign and file them.
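As a rough illustration of the bulk-data route, the sketch below extracts titles and abstracts from a locally downloaded full-text grant file to build a plain-text corpus that could feed a private model. The file name, the assumption that the file concatenates one XML document per patent, and the tag names (invention-title, abstract) are assumptions about the public full-text format and may need adjusting for a real download.

```python
# A rough sketch only: builds a plain-text corpus from a locally downloaded
# USPTO bulk full-text grant file. The file name, the one-XML-document-per-
# patent layout, and the tag names below are assumptions and may need
# adjusting for the actual bulk data format.

import xml.etree.ElementTree as ET


def iter_patent_texts(path: str):
    """Yield (title, abstract) pairs from a bulk file that concatenates
    one XML document per patent grant."""
    with open(path, encoding="utf-8") as f:
        raw = f.read()

    # Split the concatenated file back into individual XML documents.
    for chunk in raw.split('<?xml version="1.0" encoding="UTF-8"?>'):
        chunk = chunk.strip()
        if not chunk:
            continue
        try:
            root = ET.fromstring(chunk)
        except ET.ParseError:
            continue  # skip malformed records rather than failing the run
        title = root.findtext(".//invention-title", default="")
        abstract_el = root.find(".//abstract")
        abstract = " ".join(abstract_el.itertext()).strip() if abstract_el is not None else ""
        yield title.strip(), abstract


if __name__ == "__main__":
    # Write a simple corpus file for downstream (private) model training.
    with open("corpus.txt", "w", encoding="utf-8") as out:
        for title, abstract in iter_patent_texts("bulk_grants_sample.xml"):
            out.write(f"{title}\n{abstract}\n\n")
```

An approach along these lines keeps data mining on the bulk data products rather than on Patent Center itself, consistent with the Terms of Use point above.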

You’re still in charge

AI-based tools are already becoming available for patent practitioners, with additional and more substantial tools in development. Patent practitioners should remain open to the use of such tools, particularly when doing so can improve outcomes for clients and increase practice efficiency. However, these tools are best used to assist, rather than replace, the practitioner, and the practitioner remains responsible for the proper use of such tools and for compliance with all rules and regulations.