Artificial Intelligence (AI) for People Leaders
297-16
Supervisory and management courses
Lead with clear AI expectations at the team level.
AI is increasingly shaping how work gets done on teams, often through informal or uneven use and without shared expectations to rely on. When that happens, teams begin developing “shadow standards”: unspoken assumptions about what’s acceptable, what’s smart, and what’s risky. Over time, this can create team-level strain through misaligned quality, uneven behaviour, reduced visibility, and a quiet erosion of professional trust and confidence.
This workshop helps people leaders replace that uncertainty with team-level leadership judgment. You will learn how to notice when expectations have become blurred, diagnose whether leadership clarity is actually required, and make consistent, defensible calls using PMC’s LENS decision lens.
Rather than waiting for perfect organizational guidance, you will clarify how existing expectations apply to your team’s AI use. You will draft two to three practical team guardrails you can reasonably stand behind as a leader, and you will choose a defensible leadership position on how visible that clarity needs to be right now: whether it should be held, reinforced informally, or supported through a focused conversation.
By the end of the session, you will leave with a repeatable method to reduce team-level uncertainty and strengthen trust, fairness, and consistency in how AI is used, without relying on policy work, technical expertise, or escalation.
Interested in other AI workshops? Explore our full series:
- Introduction to AI – Start with the basics
- AI in Action: From Knowing to Doing – Develop practical applications
- AI for People Leaders – Lead your team with clear AI expectations
- Responsible AI Governance – Address AI responsibility and governance
- Assess how AI is influencing team roles, workflows, and communication
- Spot and address team-level risks related to unstructured or unsupported AI use
- Lead conversations that address fear and trust signals related to AI use
- Set practical team guardrails that support ethical use without slowing progress
- Translate leadership judgment about AI into clear expectations teams can rely on
1. Strategic Impact and Leadership Blind Spots
- How AI is changing team dynamics and trust
- Common risks: misinformation, overuse, and unclear expectations
- Leadership gaps in communication and decision-making
2. Leading with Guardrails and Curiosity
- Setting shared norms for ethical, productive AI use
- Encouraging experimentation without overstepping
- Fostering curiosity and learning through example
3. Driving AI Change Across Teams
- Using PMC’s LENS decision lens (Leader Response, Exposure, Norms, Signals) to assess when leadership action is required
- Deciding when leadership clarity or presence is required
- Clarifying expectations that reduce uncertainty and strain
There are no prerequisites for this workshop.
This workshop is for people leaders who are accountable for how their teams use AI day-to-day.
It is designed for those who need to manage the team-level risks of unstructured or unsupported AI use and want a structured way to set clear, practical expectations.
Open to all members of the public.
$595 plus tax
Choose my session