Learning Experience

The Learning Lab experience combines asynchronous and synchronous components. Each part includes a set of resources, an asynchronous discussion, and an interactive live session, all of which culminate in a project that applies your learning to your own local and specific context in support of the learning objectives.

Schedule

Part 1: Exploring Principles of Generative AI

September 9 | 12:00–1:30 p.m. ET

In this session we’ll consider generative AI’s potential in higher education. Thinking like a designer, we’ll explore the principles of AI models and compare various technologies, focusing on their practical applications. We’ll examine how AI can boost workplace efficiency and survey tools for creating chatbots, customizable agents, and virtual assistants. Through interactive exercises, professionals from diverse higher education roles will gain practical insights into leveraging generative AI to innovate across teaching, research, and administration. (A toy code sketch of the core idea behind generative models follows the learning objectives below.)

Learning Objectives:

  • Describe the underlying principles and techniques used in generative AI models.
  • Distinguish between different types of generative AI technologies and their respective strengths and limitations.
  • Identify properties of AI that augment or amplify workplace outcomes.
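
The session treats these principles conceptually, but a deliberately toy Python sketch can make the core idea concrete: a generative model repeatedly predicts a plausible next token from its context and appends it. The hand-written bigram table below is a stand-in for the learned neural network of a real model and is purely illustrative.

    # Toy illustration of the core generative idea: predict the next token from
    # the current context, append it, and repeat. Real models learn these
    # probabilities with large neural networks; here the "model" is a tiny,
    # hand-written bigram table.

    import random

    BIGRAMS = {
        "generative": ["ai"],
        "ai": ["can", "models"],
        "can": ["draft", "summarize"],
        "draft": ["emails"],
        "summarize": ["reports"],
        "models": ["can"],
    }

    def generate(seed: str, length: int = 5) -> str:
        words = [seed]
        for _ in range(length):
            options = BIGRAMS.get(words[-1])
            if not options:  # no known continuation: stop early
                break
            words.append(random.choice(options))
        return " ".join(words)

    if __name__ == "__main__":
        random.seed(0)
        print(generate("generative"))

Running this with the seed word "generative" yields a short, plausible-looking phrase; real models perform the same loop with probabilities learned from vast text corpora and far richer context.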

Part 2: Piecing Together Your Own AI Solution

September 12 | 12:00–1:30 p.m. ET

In this session, we’ll focus on designing custom use cases for your unit and/or institution. Participants will identify unit and/or institutional challenges and develop ideas for AI solutions to address them. Then we’ll dive into the practical aspects of creating custom chatbots or AI assistants and integrating them with relevant data sources (a brief illustrative sketch follows the learning objectives below). Throughout the session, we’ll emphasize hands-on activities, allowing participants to gain firsthand experience with these tools. By the end, you’ll have a solid foundation for creating and evaluating AI-powered chatbots or assistants ready to enhance your institution’s teaching, research, and administrative processes.

Learning Objectives:

  • Identify specific challenges or needs within your units or institutions that could be addressed by generative AI solutions.
  • Generate and evaluate ideas for how generative AI could be applied to address those challenges or needs.
  • Evaluate tools and frameworks for making your own custom chatbot or AI assistant.
  • Integrate the custom AI solution with relevant data sources or systems to meet specific unit or institutional needs.
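
The Learning Lab itself is tool-agnostic, but as a rough, hypothetical sketch of what "integrating a chatbot with relevant data sources" can look like, here is a minimal Python example. The FAQ entries, the keyword-based retrieve helper, and the stubbed generate_reply function are all placeholders; a real assistant would send the question and the retrieved context to whichever generative AI service your institution has approved.

    # Minimal illustrative sketch: a campus FAQ assistant grounded in a small,
    # hypothetical institutional data source. All names and data are placeholders.

    FAQ = {
        "library hours": "The main library is open 8 a.m.-10 p.m. on weekdays.",
        "advising appointments": "Students book advising appointments through the portal.",
        "parking permits": "Staff parking permits are renewed each August.",
    }

    def retrieve(question: str) -> list[str]:
        """Return FAQ answers whose topic words appear in the question (naive keyword match)."""
        q = question.lower()
        return [answer for topic, answer in FAQ.items()
                if any(word in q for word in topic.split())]

    def generate_reply(question: str, context: list[str]) -> str:
        """Stub for the generative step: a real assistant would pass the question
        and the retrieved context to a language model API here."""
        if not context:
            return "I don't have information on that yet; please contact the relevant office."
        return "Based on our records: " + " ".join(context)

    if __name__ == "__main__":
        question = "When is the library open?"
        print(generate_reply(question, retrieve(question)))

Production versions typically replace the keyword matching with embedding-based retrieval over institutional documents, but the overall shape (retrieve relevant local data, then generate a grounded reply) stays the same.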

Part 3: Refining Your AI Solution

September 18 | 12:00–1:30 p.m. ET

In this Learning Lab session, we’ll refine the custom chatbots or AI assistants we’ve been developing. We’ll reassess how our AI solutions address unit and/or institutional challenges and enhance workplace outcomes. We’ll improve interaction patterns and data integration (a small example of one such pattern follows the learning objectives below), then evaluate our AI solutions against established benchmarks and institution-specific criteria. We’ll interpret the results to identify enhancements and implement performance-boosting strategies. This hands-on session will prepare participants for the final session by equipping them with the skills to critically assess and improve their custom chatbots or AI assistants, setting the stage for optimal deployment in their units and/or institutions.

Learning Objectives:

  • Build interaction patterns for a custom chatbot or AI assistant.
  • Analyze how custom chatbots or AI assistants can be evaluated based on benchmarks.
  • Ideate potential plans for redesign and redeployment.
  • Implement strategies and techniques to refine the custom chatbot or AI assistant and improve its performance.
  • Interpret evaluation results and user feedback to identify opportunities for enhancement.
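
As one hypothetical example of an interaction pattern, the sketch below wraps an assistant in a turn-taking loop that keeps a short rolling history and escalates to a human when the user asks for one. The answer function is a placeholder for whatever generative backend a team chooses, and the handoff phrases and history window are arbitrary choices made only to illustrate the pattern.

    # Hypothetical interaction pattern: a bounded chat loop with an escalation rule.
    # The answer() function is a placeholder for a real model or retrieval call.

    HANDOFF_PHRASES = ("speak to a person", "talk to a human", "human please")

    def answer(question: str, history: list[tuple[str, str]]) -> str:
        """Placeholder reply; a real assistant would call a generative model here,
        passing the recent history as conversational context."""
        return f"(stub) You asked: {question!r}. I have {len(history)} earlier turns of context."

    def chat_turn(question: str, history: list[tuple[str, str]]) -> str:
        # Escalation guardrail: route requests for a person to staff instead of the model.
        if any(phrase in question.lower() for phrase in HANDOFF_PHRASES):
            return "Connecting you with a staff member for follow-up."
        reply = answer(question, history)
        history.append((question, reply))
        del history[:-5]  # keep only the five most recent turns as context
        return reply

    if __name__ == "__main__":
        history: list[tuple[str, str]] = []
        for q in ["What are the library hours?", "I'd like to talk to a human"]:
            print(chat_turn(q, history))

Refinement in this session is largely about choices like these: which requests the assistant should decline or escalate, how much context it should carry, and how its replies stay grounded in institutional data.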

Part 4: Applying Your AI Solution at Work

September 23 | 12:00–1:30 p.m. ET

In this final session, we’ll reflect on and further refine the custom chatbots or AI assistants we’ve created for our units and/or institutions. The core of our session will focus on comprehensive evaluation: we’ll analyze our chatbots or AI assistants using established benchmarks and develop unique evaluation criteria tailored to our contexts. We’ll assess their effectiveness across multiple dimensions: utility, performance, interaction design, consistency, and potential risks.

Using these insights, we’ll then iterate on our custom chatbots or AI assistants, interpreting evaluation results and user feedback to identify areas for improvement (a small code sketch of this kind of evaluation follows the learning objectives below). We’ll implement strategies to enhance performance and align outputs more closely with our organization’s needs, constraints, and vision. The session will culminate in ideating plans for future redesigns and redeployments, ensuring that our AI solutions continue to evolve with our institutions.

Learning Objectives:

  • Develop evaluation criteria and metrics unique to your own unit or institution.
  • Apply appropriate benchmarks to your own and/or others’ chatbots.
  • Assess effectiveness on multiple levels, such as utility, performance, interaction design, consistency, and risk.
  • Align custom chatbot or AI assistant output with the organization’s needs, constraints, and vision.
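
As a rough illustration of locally defined evaluation criteria and metrics, the sketch below scores an assistant's replies against a tiny, hypothetical test set. The questions, the required phrases, and the ask_assistant stub are placeholders for whatever your unit or institution decides to measure.

    # Hypothetical evaluation harness: score an assistant against a small,
    # locally defined test set. Questions and required phrases are placeholders.

    TEST_CASES = [
        {"question": "When is the library open?", "must_mention": ["library", "open"]},
        {"question": "How do I renew a parking permit?", "must_mention": ["permit", "August"]},
    ]

    def ask_assistant(question: str) -> str:
        """Stub: a real evaluation would call the deployed chatbot or assistant here."""
        return "The main library is open on weekdays; parking permits are renewed each August."

    def score(cases: list[dict]) -> float:
        """Return the fraction of cases whose reply mentions every required phrase."""
        passed = 0
        for case in cases:
            reply = ask_assistant(case["question"]).lower()
            if all(phrase.lower() in reply for phrase in case["must_mention"]):
                passed += 1
        return passed / len(cases)

    if __name__ == "__main__":
        print(f"Utility pass rate: {score(TEST_CASES):.0%}")

A fuller harness would add separate scores for dimensions such as interaction design, consistency, and risk, but even a simple pass-rate metric gives the iteration loop something concrete to improve against.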

Lab Project

Participants will apply their learning by developing a custom chatbot or AI assistant to address a specific task or challenge within their professional domain in higher education. They will document the development process, implementation strategies, and evaluation metrics, demonstrating their ability to leverage generative AI effectively in their professional contexts.