Appropriate Use of Generative AI Tools

With the emergence of various generative AI tools (e.g., ChatGPT, Google Gemini, DALL-E, Microsoft CoPilot, and others), members of the campus community are eager to explore their use in the university context. This advisory provides guidance on using these tools safely and supporting innovation without putting institutional, personal, or proprietary information at risk. In all cases, use should be consistent with UC Berkeley’s Principles of Community. This guidance will be updated as circumstances evolve.

Allowable Use:

  • Publicly available information (Protection Level P1) can be used freely in all generative AI tools.

  • UC has agreements relating to the following AI tools, which may allow for use with more sensitive information, as listed. It is important to be sure you are using these tools under UC’s contracts (rather than as a private individual) in order to benefit from UC’s contractual protections. Limitations of use under UC’s contracts are noted below:
    • Microsoft CoPilot: Approved for use with P2 information

    • Adobe Firefly: Approved for use with P1 information (no additional protections under the UC Berkeley agreement)

    • Zoom AI Companion (pilot only): Approved for use with P2 information, provided that everyone actively consents to its use each time.

    • Approved for use with P3 information, provided that everyone actively consents to its use each time.

Prohibited Use:

  • At present, any use of generative AI tools should proceed on the assumption that no personal, confidential, proprietary, or otherwise sensitive information may be entered into models or prompts. In general, student records subject to FERPA and any other information classified as Protection Level P2, P3, or P4 should not be used.

  • Similarly, generative AI tools should not be used to generate output that would be considered non-public. Examples include, but are not limited to, proprietary or unpublished research; legal analysis or advice; recruitment, personnel, or disciplinary decision-making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading.

  • Please also note that some generative AI providers, such as OpenAI, explicitly forbid the use of their tools for certain categories of activity, including harassment, discrimination, and other illegal activities. An example can be found in OpenAI's usage policy document.

Additional Guidance:

For further guidance on the use of ChatGPT for teaching and learning, please see Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley from Research, Teaching & Learning.

UC Berkeley People & Culture has drafted guidelines addressing the use of AI in campus operations. For more information, please see these Guidelines for the Use of Generative AI and Permissible Use Cases DRAFT.

Users are encouraged to take the UC Berkeley AI Essentials course before using AI tools.

For more information on UC Berkeley's policy and governance relevant to AI, see this page.

Rationale for the Above Guidance:

Protecting privacy and security: As of 5/16/2024, the University of California's agreements do not cover the use of most generative AI tools. This means that UC has no agreements with these companies around the privacy or security of the information that is submitted to, or generated from, these tools. The UC Office of the President is working on this issue. We hope to see this addressed in the near future and will update this guidance when additional information is available.

Personal Liability: ChatGPT uses a click-through agreement. Click-through agreements, including the OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with the terms and conditions. [1]

Guidance on Appropriate Use

For questions regarding the appropriate use of generative AI tools, please contact

Additional Resources