With the emergence of ChatGPT and other AI tools, many members of our community are eager to explore their use in the university context. This advisory provides guidance on how to use these tools safely, without putting institutional, personal, or proprietary information at risk. Additional guidance may be forthcoming as circumstances evolve.
Allowable Use:
- Publicly available information (Protection Level P1) can be used freely in ChatGPT.
- In all cases, use should be consistent with UC Berkeley's Principles of Community.
Prohibited Use:
- At present, any use of ChatGPT should proceed on the assumption that no personal, confidential, proprietary, or otherwise sensitive information may be entered into it. In general, student records subject to FERPA and any other information classified as Protection Level P2, P3, or P4 should not be used.
- Similarly, ChatGPT should not be used to generate output that would be considered non-public. Examples include, but are not limited to: proprietary or unpublished research; legal analysis or advice; recruitment, personnel, or disciplinary decision making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading.
- Please also note that OpenAI explicitly forbids the use of ChatGPT and its other products for certain categories of activity, including fraud and other illegal activities. The full list can be found in their usage policy document.
Additional Guidance:
For further guidance on the use of ChatGPT for teaching and learning, please see Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley from Research, Teaching & Learning.
Rationale for the Above Guidance:
- No UC Agreement, No Privacy and Security Terms: All content entered into, or generated by, ChatGPT is available to ChatGPT, its parent company, OpenAI, and their employees. There is currently no agreement between UC and OpenAI or Microsoft that would provide the data security and privacy protections required by UC policy with regard to ChatGPT or OpenAI's programming interface. Consequently, the use of ChatGPT at this time could expose individual users and UC to the potential loss and/or abuse of sensitive data and information.
- As of April 2023, the UC Office of the President is working on this issue. We hope to see it addressed in the near future and will update this guidance when additional information is available.
- Personal Liability: ChatGPT uses a click-through agreement. Click-through agreements, including the OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with the terms and conditions. [1]
Guidance on Appropriate Use
For questions regarding the appropriate use of ChatGPT and other AI tools, please contact privacyoffice@berkeley.edu.
References
- [1] Delegations of Authority: To find out who has signature authority at Berkeley to "sign" a click-through agreement, go to the Delegations of Authority webpage on the Office of the Chancellor Compliance Services website.
Additional Readings
- Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley, from UC Berkeley Research, Teaching & Learning
- Quanta Magazine: The Unpredictable Abilities Emerging From Large AI Models
- University of California Presidential Working Group on Artificial Intelligence
- Inclusive Intelligence: Artificial Intelligence in the Service of Science, Work, and the Public Good, from UC Berkeley Research