AI Marketing/Communication Guidelines

Guidelines for the ethical use of generative artificial intelligence in marketing and communications

The world of generative artificial intelligence (generative AI) technology is evolving rapidly. Advances in generative AI are providing communicators and marketers with new and powerful tools to enhance our work. The following guiding principles will help marcomm professionals at WSU navigate these new opportunities responsibly, ensuring we use generative AI ethically and effectively.

What is generative AI?

Generative artificial intelligence is a type of AI technology designed to create new content, including text, images, audio, music, and videos, by learning from existing data. Unlike traditional AI, which might classify data or make predictions, generative AI models use patterns learned from large datasets to produce original outputs that resemble the training data.

For instance, a generative AI tool trained on a vast collection of text can write essays, stories, or articles that mimic human writing styles. Similarly, generative AI can be used to produce illustrations, photorealistic images, music, and video.

This technology leverages complex algorithms and neural networks to generate content that can be surprisingly creative and human-like. However, it still lacks genuine creativity and understanding. While generative AI can enhance productivity and assist with various tasks, it cannot replicate the full depth of human reasoning, creativity, or ethical judgment.

Guiding principles

These guidelines are intended for communications and marketing professionals at Washington State University. They do not apply to other areas of the university, such as educational and classroom settings, or WSU’s Information Technology Services unit. The principles specifically address AI tools used to generate content, such as images, text, music, audio, video, and similar items.

1. Human involvement is required:

For human communications to be authentic, they must be produced by actual humans. Generative AI can empower and augment the work of professionals by streamlining repetitive tasks, generating insights, and assisting with data analysis, but it cannot replace human knowledge, judgment, experience, emotion, and imagination.

2. Accuracy and accountability:

Generative AI tools are not perfect. They have been known to “hallucinate,” generating false information that appears plausible but has no basis in actual data or reality. Generative AI can also inadvertently change the meaning of communications and reword direct quotes pulled from interviews.

As WSU employees, we are accountable for all content we create, even when it is developed with the assistance of generative AI tools. All AI-generated material must be thoroughly reviewed, edited, and approved before it is published or used, ensuring that it meets ethical standards and accuracy requirements. For significant work products, the review should ideally be conducted by a colleague who was not involved in creating the content.

Increasingly, generative AI functionality is being built into commonplace applications and services used by marketers and communicators, such as Adobe Creative Cloud and Canva. Care should be taken to review and independently verify content generated by these applications as well, even if they are not considered stand-alone AI tools.

3. Beware of bias:

Users of generative AI must be vigilant for bias in responses. These systems learn from vast datasets that can contain existing prejudices and stereotypes, leading the AI to perpetuate or even amplify those biases in its outputs. Ensuring fairness and accuracy requires careful oversight and continuous evaluation of the generated content.

4. Copyright and plagiarism:

Many generative AI models have been trained on copyrighted material that may have been used without permission from the original creator. Marcomm professionals must be careful to review and, if necessary, modify AI-generated output to avoid plagiarism.

5. AI-generated or manipulated imagery:

There is a distinction between AI-generated and AI-manipulated visual assets. AI-generated visual assets are images in which an AI tool creates an editorially significant visual element. AI-manipulated visual assets are images that begin with a human-created image, which AI tools are then used to augment, alter, or enhance.

AI-generated visual assets

Allowable AI-generated visual assets include photorealistic images, videos, and illustrations not already captured or created by WSU photographers, videographers, and designers. Examples include generic business, office, classroom, or lab scenes; people engaged in common activities not connected to a WSU campus; nature scenes or landscapes; imagery connoting holidays and celebrations; non-photorealistic illustrations; and graphic design elements.

Include a credit or disclaimer acknowledging any imagery that was generated by AI.

WSU marketing and communications professionals should never use AI tools to generate visual assets depicting WSU students, faculty, staff, intellectual property, or their likenesses. Similarly, AI tools must not be used to create images of events that appear to take place on WSU campuses but did not actually occur.

Keep in mind the possible ethical concerns regarding authenticity and originality, as well as the potential legal issues surrounding copyright infringement, when creating images using generative AI. We encourage using generative AI models trained on licensed content to avoid these issues.

AI-manipulated visual assets

AI tools can be used to manipulate or enhance existing visual assets, provided they are not used to meaningfully alter editorially significant scenes or subjects, or to synthetically generate misleading ones. A credit or disclaimer is not necessary for AI-manipulated content that follows this guideline.

Remember that many tools harvest content from your sessions to continue training the underlying AI model. By using them, you may be contributing university-owned visual assets to a publicly accessible AI learning model.


Examples of acceptable uses of generative AI

(Credit: University of Utah’s Marketing and Communications AI Use Working Group)

  • Brainstorming new story ideas: AI can offer fresh perspectives and provide constructive feedback on content concepts.
  • Creating an outline: AI can help organize content ideas into a cohesive structure.
  • Editing assistance: AI tools can offer suggestions for rewording or tightening text you have already written. AI tools can also answer style guide questions and serve as a thesaurus.
  • Personalizing messaging: AI can help tailor content for different audiences, such as students, staff, faculty, donors, or the media.
  • Drafting social media posts: AI can provide a quick first draft of social media posts and customize them for different audiences.
  • Editorial calendar/content planning: AI can assist in organizing and planning content and social media calendars.
  • Helping with headers, headlines, and other content structure: AI tools can suggest ideas for headlines, subheads, website headers, H3 tags, etc.
  • Search engine optimization (SEO): AI tools can assist with keyword research, readability analysis, keyword usage, and relevancy to improve webpage quality and performance.
  • Research assistance: AI can quickly explain a concept or topic, but humans must verify all facts and information, including the veracity of any sources that are cited.
  • Anticipating potential questions or objections: AI can suggest potential questions or objections from stakeholders, allowing for better preparation.
  • Enhancing productivity: AI can help with routine tasks such as summarizing interviews, analyzing data, and drafting outlines and text for presentations. However, humans should review all AI-generated content.
  • Improving images and creating illustrations: Use of Content-Aware Fill features in photo editing software is permitted for images you own, provided the integrity and context of the image are maintained. Generative AI tools can also be used to create illustrations.

Prohibited uses of generative AI

(Credit: University of Utah’s Marketing and Communications AI Use Working Group)

  • Uploading confidential data: Do not enter proprietary data or confidential information about students, employees, patients, research study participants, or other constituents into AI tools. Uploading this information could breach privacy laws, including HIPAA and FERPA, or other university policies. Data submitted to many AI tools may become public.
  • Relying on AI to fact check: Do not use artificial intelligence technology to validate information. AI tools may suggest inaccurate facts and sources. Human oversight and independent verification using reputable sources are essential in all research, content creation, and review.
  • Violating WSU standards or policies: AI tools should not be used in any way that violates existing university standards or policies, including creating false communications or manipulating data deceitfully.

Special thanks to the University of Utah’s Marketing and Communications AI Use Working Group for its pioneering efforts in developing AI guidelines for university marcomm professionals. The work of the University of Utah team has significantly informed our approach to developing these guidelines.

For more information on the use of artificial intelligence at Washington State University, visit the WSU Artificial Intelligence Resource Website.