Purpose

The purpose of this Use of Generative AI in Employment Policy (“Policy”) is to provide guidelines on the use of generative artificial intelligence (“AI”) tools, such as ChatGPT, for employees of Lindenwood University and those performing work and/or services for Lindenwood. Generative AI is a type of artificial intelligence that can produce art, music, text, and other forms of creative content typically associated with human creativity. Many generative AI systems, known as Large Language Models (LLMs), are trained on large volumes of written information and use deep learning techniques to generate new creative content. This Policy addresses the use of Generative AI in employment; because this is a rapidly changing and developing area, the Policy is subject to updates that correspond to developments in Generative AI.

Scope

This policy applies to all employees of Lindenwood University and those performing work and/or services for Lindenwood.

This Policy covers the use of Generative AI for University purposes. It does not apply to the use of Generative AI for purely personal reasons.

Policy

1. Institutional Policy on Generative AI Use

The use of generative AI models within Lindenwood must strictly align with existing employment and student policies on confidentiality, data protection, and academic integrity. Generative AI tools should be deployed with caution, as the responsibility to ensure that their use is appropriate and that outputs are accurate rests entirely with each employee. Users must be proactive in determining whether an AI tool operates as an "open model"—a publicly accessible system that may use input data to improve its performance and refine public-facing models—or within a "closed ecosystem," a restricted environment where data is securely contained, preventing external access or model training from user inputs. In instances of uncertainty, employees should contact the Chief Information Officer (CIO) for guidance.

2. Use Case Dependent

The applicability of generative AI in employment contexts varies depending on task specificity and risk level. Employees must distinguish between basic tasks—such as obtaining weather forecasts—and more complex, skilled tasks, such as drafting University-facing publications or sensitive documents. This policy encourages the application of AI for routine support, such as:

Employees should evaluate the risk involved in each specific use case, considering whether the AI tool selected aligns with the appropriate security framework (open or closed). Approved tools within a closed ecosystem, such as Microsoft Copilot in protected mode, are available for inputting institutional data where training and public model refinement are securely disabled. Open or non-secure AI models must never be used to process sensitive or protected information.

2.2 Low-Risk Use Cases and Instructional Purposes

Generative AI can also serve low-risk purposes that do not necessitate additional review, provided that inputs align with this policy. These low-risk applications include:

2.3 Assessing Risk and Compliance Requirements

For all other uses not covered under the low-risk or approved use categories, users must assess the AI tool’s operational framework and ensure compliance with university policies:

3. Risk Considerations for Processing Activities

3.1 Overview of Risk Analysis Requirements

For any generative AI application not classified under previously defined low-risk or instructional use cases, employees must conduct a thorough risk analysis. This analysis should include evaluating whether the intended use aligns with Lindenwood's commitment to maintaining data integrity, privacy, and security. Employees should ensure that the benefits of generative AI use outweigh potential harm to the University, based on the factors outlined below.

  1. Data Privacy and Security
    Every use of generative AI must adhere to all applicable privacy, cybersecurity, and educational regulations, including the Family Educational Rights and Privacy Act (FERPA) and Institutional Review Board (IRB) requirements. Employees must confirm that generative AI tools comply with these standards, particularly when handling student information or other sensitive data.
  2. Bias and Discrimination
    Generative AI applications must not result in outcomes that display bias or discrimination toward any individual, whether student, employee, or external party. Prior to deployment, employees should evaluate the AI tool for potential biases, ensuring the model supports a fair, equitable output in line with institutional standards.
  3. Plagiarism and Intellectual Integrity
    Employees must avoid using generative AI in ways that could lead to plagiarism or the misattribution of ideas. AI-generated content must be original or properly attributed to avoid violating the institutional plagiarism policy for employees and the academic integrity policy for students. All outputs should be verified for originality to uphold academic integrity across all AI-assisted tasks.
  4. Copyright and Intellectual Property Rights
    Generative AI usage must respect all copyright laws and institutional copyright policy. Employees should take precautions to prevent copyright infringement by ensuring that any generated content is original, licensed, or otherwise compliant with copyright guidelines. This is particularly crucial for content intended for public or instructional dissemination.
  5. Accuracy and Prevention of Misinformation
    AI models can sometimes generate inaccurate or misleading information, presenting potential risks to Lindenwood’s reputation and credibility. Employees must carefully verify generative AI outputs for factual accuracy before publication or release to ensure the institution does not disseminate incorrect or misleading information.
  6. Confidentiality Obligations
    Employees must avoid using generative AI in ways that compromise Lindenwood's duty of confidentiality, especially in relation to IRB studies or other proprietary research activities. Confidential information must remain secure, and any generative AI application must be vetted to ensure it does not inadvertently expose or disclose restricted data.
  7. Transparency in AI Interactions
    Transparency is a fundamental component of responsible AI use. In high-risk activities, employees should disclose that generative AI is in use, ensuring transparency with all users. This disclosure should include specific acknowledgment of AI involvement, especially in public or high-stakes engagements involving chatbots.

    The following stock language, taken from OpenAI’s Sharing and Publication Policy, can be used for disclosure purposes:
    “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model [or insert other Generative AI used]. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.”

4. Exceptions

4.1. Exceptions to this Policy may be considered in very limited circumstances in which potential risk and harm to the University are mitigated; requests for exceptions should be directed to the University’s Legal Office in advance.

5. Training

5.1. Regular and ongoing mandatory training on this rapidly evolving topic will be provided to employees through the Lindenwood Learning Academy.