Using GenAI Responsibly
Practical and Ethical Considerations
General Usage Considerations
How to identify appropriate tools, improve results, and guard against misleading or inaccurate AI outputs.
GenAI models have been applied to a variety of tools. While text-based chatbots are perhaps the best known and most widespread applications, many other tools for a variety of use cases have been developed. These include image generators, video generators, audio generators, transcription and translation services, and software code generators. Newer LLMs that support the mixing of several types of data simultaneously (e.g. audio and video) are referred to as mixed-mode or multimodal models. Domain-specific models (often built on top of larger foundation models) exist that are optimized for more particular uses, such as health care, manufacturing, and the application of digital twins (a digital model of a real-world system used for simulation, testing, and management).
Free services offer easy access to many GenAI capabilities, however, they usually come with significant limitations and privacy trade-offs. Such services generally limit model access to older or more constrained models, impose usage limits, and capture user interactions to further train vendor models or to share with partners for marketing purposes. Paid services address these limitations and often provide data privacy protections and legal compliance assurances (e.g. FERPA, HIPAA, etc.). It is important to carefully read the terms of service, privacy policies, and any supplemental data usage agreements associated with a given service and plan as terms vary widely across vendors, products, and subscription options.
AI hallucinations are responses in which information is misleading or factually incorrect. This can occur for a number of reasons but is rooted in the manner in which AI models are trained and generate responses to inquiries. Models make statistical inferences based upon relationships in their training data. Biased or inaccurate training data, overly complex models, poor model fitting to available data, and errors encoding or decoding text can lead to inaccurate statistical predictions and resulting poor output. The deliberate addition of randomness to model predictions by developers to make outputs seem more human-like, is also a contributing factor. Therefore, it’s important to always validate any AI generated output for accuracy. One study found that AI chatbots hallucinate anywhere from 3% to 27% of the time!
A deepfake is an image, video, or audio track that has been edited or generated using generative artificial intelligence tools. They can include real or imagined people or events, with individuals depicted doing or saying things that have never happened. The technology can insert the likeness of anyone into a photo or video they never participated in, including showing them speaking or engaging in activities. It can also create realistic audio deployed in phone calls or recordings. They have been used for marketing (legitimate and otherwise), political manipulation, and fraud. For example, in February 2024, a finance worker at a Hong Kong based company was convinced to pay out more than $25M in response to a faked video request from someone impersonating the Chief Financial Officer.
Deepfakes are extremely difficult to detect. While there are many proposals to fight deepfakes, ranging from legal to technical (e.g. requiring the incorporation of watermarks), most are so far theoretical. When evaluating recordings, key areas to focus on include facial features such as skin tone or texture, eye blinking rate, vocal distortions or obvious mismatches between voice and appearance, and lighting discrepancies, such as reflections in eyeglasses or windows. AI-generated video will also frequently display object impermanence (i.e. images that disappear or suddenly change) or depict actions that upon close inspection seem to violate the laws of physics. Most importantly, validate all media for accuracy rather than accepting recordings at face value.
Prompt engineering is the process of structuring inputs to generative AI systems to improve the quality of the model’s output. Well-structured prompts can enhance the model’s comprehension of query context, minimize biases, reduce misinterpretation, and clarify intent. This, in turn, improves the output quality of AI-generated content. Techniques address both the structure of individual prompts as well as the use of sequences of iterative prompts. Common techniques include chain-of-thought, chain-of-symbol, and few-shot/in-context learning prompting, among others.
Ethical and Legal Guidelines
Understanding bias, compliance, copyright, and privacy considerations.
AI systems can produce outputs that are distorted, skewed, or systematically prejudiced. This can occur for a wide variety of reasons. For example: incomplete, poorly sampled, or non-representative data sets; algorithms that reflect unconscious developer biases, invalid assumptions, or are subject to underlying statistical distribution shifts; human-cognitive biases impacting model training; or systemic biases arising from organizational practices with respect to how AI systems are used. Such bias can reduce AI accuracy, exacerbate existing inequalities, and cause harm to individuals and/or groups of individuals.
It is important to implement safeguards to protect against AI-generated output that could be potentially harmful if misused or misinterpreted. Use a human-in-the-loop approach to validate output quality and avoid relying upon output that creates or reinforces damaging biases. Algorithm transparency is essential – ensure that someone within the organization can explain how decisions are made by AI-enabled tools or systems and that key decision points are identified and audited. Implement feedback mechanisms and fairness safeguards, including regular audits of AI system use and outcomes. It is recommended to conduct periodic ethical assessments that include diverse perspectives as part of an ongoing monitoring process to avoid unexpected or implicit biases.
A variety of legal and regulatory compliance requirements may apply to university employees when using AI tools, depending on the usage context. These include but are not limited to the Federal Educational Rights and Privacy Act (FERPA), Health Insurance Portability and Accountability Act (HIPAA), Criminal Justice Information Services (CJIS), Children’s Online Privacy Protection Act (COPPA), Federal Risk and Authorization Management Program (FedRAMP), Gramm-Leach-Bliley Act (GLBA), General Data Protection Regulation (GDPR), Export Control, Defense Acquisition Regulation System for Controlled Unclassified Information (DoD/DFARS – CUI), and Department of Defense Information Security Program (DoD 5200.01). Failure to comply may place the university in legal jeopardy. Please consult with your local university administration and IT department to determine compliance requirements applicable to your use case.
While specific legislation is currently sparse and the legal landscape surrounding AI is evolving, existing laws still impose liability on organizations for outcomes related to certain issues, regardless of whether AI was used or not. A recent case involving Air Canada in which the airline argued its AI system was “responsible for its own actions” is instructive. The argument was firmly rejected, with Air Canada ordered to honor the fare offered by its’ GenAI system. The use of automated systems does not absolve the user or organization of obligations under the law.
Despite multiple lawsuits and much discussion among legal scholars, the issue of whether the output of AI systems can be copyrighted remains unsettled. Much of the debate centers around the question of authorship – who gets credit for creating works: the user who enters prompts into the GenAI tool, the developers who built the system, the company that owns/provides the service, or the system itself? The U.S. Copyright Office to date has ruled that only works created by a human being can be copyrighted and courts have tended to support this position, however, there have been multiple legal challenges to such a view. Vendor terms of service vary widely as well, with some retaining ownership of outputs for themselves and others assigning those rights to end-users.
Additional controversy surrounds the use of training data “scraped” from the web or other online sources without explicit permission of copyright holders and the generation of output that resembles existing works. Questions of who is liable for copyright infringements resulting from the use of GenAI also remain unclear. A useful review of the debates surrounding AI and copyright law .
On January 29, 2025, the U.S. Copyright Office issued formal guidance that existing law effectively addresses issues of AI copyrightability. They concluded that “Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements,” however, “Where AI merely assists an author in the creative process, its use does not change the copyrightability of the output.” They further concluded that prompting alone does not constitute authorship, stating that “prompts alone do not provide sufficient human control to make users of an AI system the authors of the outputs.” View the full Copyright and Artificial Intelligence report.
Users of AI-based systems should exercise caution when relying on GenAI output and be certain to ensure proper attribution of the system used.
Output generated by GenAI tools should be cited or acknowledged as appropriate for the medium. Note that methodology varies by style guide and publication.
For example, MLA does not treat AI as an author (reserving that for human authors), Chicago does treat AI as an author, and APA recommends listing the AI creator (e.g. OpenAI, Google, etc.) as the author. Some publications instead require authors to disclose the use of AI in manuscripts as part of the Methods or Acknowledgment sections, reserving citations for human authors. Be sure to review submission policies for the relevant publication.
Risks and Mitigation
Avoiding potential negative outcomes when using AI systems.
AI systems present a variety of risks that should be evaluated and mitigated within the context of the intended use. Major considerations include:
- Ethical Considerations. Consider factors such as impacts on individuals related to fairness and model bias, copyright and attribution challenges, and safety concerns when relying on AI outputs for decision-making.
- Data Handling. Be cognizant of how a vendor handles information submitted to or captured by AI systems. Privacy of personal information (yours and others) and the protection of intellectual property are significant concerns.
- Security. AI systems are subject to a variety of machine learning attacks, such as poisoning, biasing, evasion, model extraction, and membership inference. These can compromise data privacy and the integrity of model outputs. AI is also a powerful tool of attack, allowing malicious actors to attack at scale. For example, automatic code generation within UA’s trusted network environment is a significant potential malware vector. Be cautious when using AI tools that have not been thoroughly vetted to protect yourself and your colleagues from compromises that can lead to major consequences such as loss of data, identity theft, or significant legal/financial impacts to the institution.
- Output Quality. AI models are subject to a variety of factors that can lead to low quality or inaccurate results. Major issues include algorithmic bias, model drift and distribution shifts, uncertain data sourcing, vendor prompt rewriting, model hallucination, and potentially harmful content. It is essential to verify all output prior to using or relying on information or recommendations supplied by AI tools.
- Third Party Risk. AI vendors frequently rely on external third parties as part of their system architectures. For example, they may use multiple LLMs from a variety of sources to enable various features. These third parties may have terms of service and privacy policies that differ from the primary vendor. It is often difficult, if not impossible, to identify which vendors will have access to data and how they will handle that information.
- Legal Issues and Compliance. Higher education institutions are subject to a variety of legal and regulatory requirements.
Failure to meet compliance mandates may put the individual or institution in legal
jeopardy and could impact funding. Consider relevant mandates before adopting any
proposed tool or service and ensure appropriate auditing is in place to maintain future
compliance.
For additional information, please review the UA Generative AI Security Standard or contact your local IT Help Desk for assistance.
AI jailbreaking is the use of techniques that result in the failure of system guardrails designed to ensure the safety and ethical use of generative AI systems. This leads the system to provide responses that developers intended to prevent, such as avoiding offensive output or demonstrating dangerous/illegal activity. Prompt injection attacks are a common approach that allows criminals to disguise malicious inputs as legitimate prompts, resulting in the system leaking sensitive user data, divulging underlying programming details, or spreading misinformation. These inputs can be entered directly or hidden in data consumed by the LLM (e.g. web pages containing malicious prompts). Because GenAI systems are designed to accept natural language instructions, they are particularly vulnerable to such security threats.