Keynotes 45–60 min, executive briefings, board sessions, leadership workshops.
CEOs and boards, C-suite leaders, transformation teams, regulators, policymakers, data and product leaders.
Leading a business through the AI revolution demands vision and bold action. Cassie Kozyrkov, the architect of Google’s AI-first transformation, challenges leaders to rethink what it means to lead in the AI era. She shares innovative strategies for embedding AI into the fabric of your company, transforming not just how you work but what you can achieve.
Attendees will walk away with a clear roadmap to becoming leaders in AI-driven innovation, equipped to outpace the competition and thrive in a rapidly changing technological landscape.
Contrary to popular fears, AI won’t be the doom of humanity—but once AI starts handling repetitive tasks, you’ll have to face something even scarier: the chance to do truly impactful work. This session challenges you to drop the excuses, step up your game, and seize the creative freedom AI offers by taking drudgery off your plate.
The best defense against job automation isn’t hiding from AI—it’s taking charge of it. Find out how embracing and leading AI initiatives can secure your role, build your influence, and allow you to thrive in the rapidly evolving future of work.
Artificial intelligence is no longer science fiction, yet many businesses fail to harness its potential at scale. In this insightful talk, Cassie Kozyrkov strips away the jargon to explore what’s easy, what’s hard, and how to spot genuine opportunities to improve your business with AI. She delves into why organizations struggle, uncovers the two biggest threats in the field, and discusses what this means for the future of work.
Discover the secrets to successful AI innovation and learn how to avoid common pitfalls, unlocking AI’s true potential for your organization. Get ready to turn today’s hype into tomorrow’s growth.
As AI advances rapidly, a trust gap emerges between its capabilities and our understanding. In this compelling keynote, Cassie Kozyrkov explores how you can navigate this gap. By sharing the story of one of history’s most ironic hoaxes, she illustrates why even the savviest among us can be misled and how to prevent it.
Drawing from historical and modern examples, she reveals how understanding AI’s strengths and limitations empowers you to navigate tomorrow’s world. You will gain actionable strategies to build tech fluency, foster skepticism, and ensure AI empowers rather than misleads, building new digital trust habits to guide you confidently into the future.
Your life outcomes boil down to two things: luck and the quality of your decisions. That makes decision-making one of the most important skills you can develop, yet society rarely treats it as one. In this empowering talk, Cassie Kozyrkov reveals how outcome bias and confirmation bias prevent us from learning the right lessons and making better choices.
She demonstrates how tolerating these biases institutionalizes complacency and hinders progress. The good news? You can overcome them instantly by asking the most powerful question: “What would it take to change your mind?” Walk away with practical strategies to eliminate excuses, confront your biases, and elevate your decision-making immediately.
As AI becomes more powerful, the real competitive advantage isn’t in the technology—it’s in the human mind. Cassie Kozyrkov, who spearheaded Google’s AI-first transformation, inspires you to embrace the one thing AI can’t replace: human judgment. In this engaging keynote, she challenges leaders to rethink their role in a world where AI delivers perfect answers in seconds. Cassie equips you with mental agility and decision-making frameworks needed to stay ahead, focusing on how to harness AI’s potential without falling into the trap of thoughtless automation. Get ready to think faster and lead smarter.
Cassie Kozyrkov is a South African-born data scientist and the founder of decision intelligence. As Google’s first Chief Decision Scientist, she helped lead the company’s AI-first transformation and personally trained more than 20,000 Googlers in data-driven decision making, influencing 500+ initiatives.
Today she is CEO of Kozyr and advises brands such as Gucci, NASA, Spotify, Meta, and GSK on practical AI strategy and digital trust. Cassie’s talks blend technical depth with plain language to demystify AI adoption, human judgment, and leadership in the AI era. Her thought leadership has appeared in publications like Harvard Business Review and Forbes.
She travels from New York.
Cassie Kozyrkov is a globally recognised leader in artificial intelligence and the CEO of Kozyr. A South African-born data scientist and statistician, she is best known for founding the field of Decision Intelligence and for her pivotal role as Google’s first Chief Decision Scientist, where she spearheaded the company’s transformation into an AI-driven leader. Today, she is a highly influential AI advisor and keynote speaker, shaping how prominent organisations like Gucci, NASA, Spotify, Meta, and GSK develop and implement their AI strategies. Driven by her passion for enhancing human capability through responsible AI adoption, she also serves on the Innovation Advisory Council of the Federal Reserve Bank of New York and invests in emerging tech ventures.
Cassie’s impact at Google is legendary; her workshops were so popular that attendance had to be managed by lottery due to overwhelming demand. She personally trained more than 20,000 Googlers in AI and data-driven decision-making, influencing over 500 initiatives and reshaping Google’s technological culture. With a unique combination of deep technical expertise and theatre-trained charisma, Cassie delivers keynotes that make complex ideas accessible, engaging, and actionable for diverse audiences, from executives to general attendees. Her humour, sharp wit, and vivid storytelling ensure audiences leave not only inspired but also equipped with practical insights to lead innovation within their own organisations.
Cassie’s academic foundation is as impressive as her professional achievements. Beginning her undergraduate studies at the age of 15 at Nelson Mandela University, she later earned degrees in economics, mathematical statistics, psychology, and neuroscience from esteemed institutions such as the University of Chicago, North Carolina State University, and Duke University. This multidisciplinary background has equipped her with a holistic understanding of both the technical and human facets of decision-making.
During her nearly decade-long tenure at Google, Cassie pioneered the field of Decision Intelligence, a discipline that integrates data science with social and managerial sciences to improve decision-making processes. As Google’s first Chief Decision Scientist, she played a pivotal role in the company’s transformation into an AI-first organization.
Beyond Google, Cassie has collaborated with a diverse array of organizations, including Gucci, NASA, Spotify, Meta, and GSK, assisting them in formulating and executing effective AI strategies. Her advisory roles extend to serving on the Innovation Advisory Council of the Federal Reserve Bank of New York and investing in emerging product companies, underscoring her commitment to fostering innovation across sectors.
Cassie’s thought leadership is further evidenced by her extensive writing and speaking engagements. Her articles have been featured in prestigious publications like Harvard Business Review and Forbes, where she elucidates the nuances of data science, AI, and decision-making. Her communication style, characterized by humor and vivid analogies, makes complex topics accessible to a broad audience.
Cassie Kozyrkov’s blend of technical prowess, innovative leadership, and engaging communication makes her a standout speaker in the realms of AI and decision intelligence. Her contributions have not only shaped the strategies of leading organizations but have also influenced how industries approach data-driven decision-making. For events aiming to provide attendees with cutting-edge insights and inspiration, Cassie Kozyrkov is an unparalleled choice.
Hashtags: #CassieKozyrkov #DecisionIntelligence #ArtificialIntelligence #DataScience #AILeadership #TechInnovation #WomenInTech #KeynoteSpeaker
Cassie opens with the history of the term artificial intelligence and why the branding misleads expectations. She proposes a clearer view of AI as automation and contrasts two programming modes: traditional software is automation by instructions, while machine learning is automation by examples. That shift puts leaders on the hook for two decisions that seem simple yet are subjective and risky: defining the goal to optimize for, and choosing or authoring the dataset.
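To make the contrast concrete, here is a minimal sketch in Python, assuming scikit-learn and an invented toy spam task (the task and all names are illustrative, not from the talk). The first function is automation by instructions; the model that follows is automation by examples, and the two leadership decisions live in its labels and its example list.

```python
# A minimal sketch, assuming scikit-learn and an invented toy spam task.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Automation by instructions: a human writes the rule explicitly.
def spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# Automation by examples: a human supplies labeled data and the model
# infers the rule. The subjective decisions are the labels (the goal)
# and the example list (the authored dataset).
examples = ["free money now", "lunch at noon?", "claim free money", "meeting notes"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam: a human judgment call

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(examples), labels)

print(spam_by_rule("FREE MONEY inside"))                       # True
print(model.predict(vectorizer.transform(["free money"]))[0])  # likely 1
```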
Using a playful “cat or not cat” exercise, she shows that labels depend on purpose: the same image may be cat or not cat based on intended use. Data is a human-authored textbook for the machine student, not an objective truth. Because data quality spans statistics, research design, UX, psychology, and engineering, it often becomes everyone’s job, which in practice means nobody’s job. Cassie argues for explicit ownership and career paths for data quality.
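The purpose dependence can be sketched in a few lines; the record and the two purposes below are hypothetical, and the only point is that the correct label is a function of intended use.

```python
# A minimal sketch with a fabricated record and two hypothetical purposes.
image = {"species": "tiger"}

def label(item: dict, purpose: str) -> str:
    if purpose == "pet adoption app":
        # For adoption, a tiger is emphatically not a cat.
        return "not cat" if item["species"] == "tiger" else "cat"
    if purpose == "zoology classifier":
        # For zoology, tigers belong to the cat family.
        return "cat" if item["species"] in {"tiger", "housecat"} else "not cat"
    raise ValueError("a label without a declared purpose is meaningless")

print(label(image, "pet adoption app"))    # not cat
print(label(image, "zoology classifier"))  # cat
```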
She differentiates discriminative systems that label things from generative systems that create plausible exemplars. Generative AI acts like a raw material. It accelerates creative iteration but requires thoughtful use and careful regulation. For enterprises, the bottleneck is testing and safety nets, especially when removing humans from the loop. Good engineers become more productive with these tools. Poor teams can get worse by pasting unvetted code or content into critical systems.
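As a rough illustration of the two system types, here is a minimal sketch on one-dimensional toy data (all numbers invented): the discriminative half assigns labels, while the generative half produces new plausible exemplars.

```python
# A minimal sketch on invented one-dimensional data.
import numpy as np

rng = np.random.default_rng(0)
cats = rng.normal(loc=2.0, scale=0.5, size=100)      # feature values for "cat"
not_cats = rng.normal(loc=5.0, scale=0.5, size=100)  # feature values for "not cat"

# Discriminative: learn a boundary and label incoming things.
threshold = (cats.mean() + not_cats.mean()) / 2
def classify(x: float) -> str:
    return "cat" if x < threshold else "not cat"

# Generative: model the "cat" distribution and sample fresh exemplars.
new_cats = rng.normal(loc=cats.mean(), scale=cats.std(), size=3)

print(classify(2.3))  # cat
print(new_cats)       # three new plausible "cat" feature values
```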
00:00 Welcome and the origin of the term AI. Why the name created confusion
03:30 Automation by instructions vs automation by examples. Dora the human “computer” story
07:20 Cat or not cat game. Purpose defines the right label
11:00 Data as human-authored textbooks. Why data quality is subjective and under-owned
15:10 Who owns data quality. The “everybody therefore nobody” problem and career paths
18:45 Internet data is a mirror with distortions. Be cautious with wild-type datasets
21:10 Two-line ML programming. Define the goal and the dataset. Thoughtlessness risk at scale
25:20 Discriminative vs generative systems. Batman door lock example and safety nets
29:05 Generative AI as raw material. Creativity stays human. Iteration and curation matter
33:00 Enterprise use. Good engineers become better, weak teams degrade performance
36:15 When to trust AI. Human in the loop and heavy testing. Both is best
39:40 Avoid death by a thousand pilots. Build testing, guardrails, and rollback
42:30 Kitchen analogy. Ingredients, appliances, recipes, and why testing is hard downstream
45:15 Which jobs are affected. Pressure on repetitive, digitized second quartile tasks
48:10 Talent pipeline risk. Fewer entry-level tasks to build judgment and taste
50:05 Three leadership imperatives. Own data quality, test rigorously, develop people
Cassie opens by positioning Making Friends with Machine Learning as a practical, conceptual course for everyone. She reframes AI as automation and introduces the two ways to instruct computers: by explicit instructions and by examples. Machine learning is the latter, where data serves as a set of examples that a model studies to learn patterns for turning inputs into outputs. She distinguishes core problem types such as prediction, classification, and ranking, then defines essential terms including labels, features, loss, generalization, and evaluation.
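As a rough illustration of how that vocabulary fits together, here is a minimal sketch on a toy one-feature regression task (every number is invented for illustration).

```python
# A minimal sketch tying labels, features, loss, and generalization together.
import numpy as np

features = np.array([1.0, 2.0, 3.0, 4.0])  # inputs the model is allowed to see
labels = np.array([2.1, 3.9, 6.2, 8.1])    # outputs we want it to learn

# A candidate model: prediction = weight * feature.
weight = 2.0
predictions = weight * features

# Loss: one number scoring how wrong the predictions are (mean squared error).
loss = np.mean((predictions - labels) ** 2)
print(f"training loss: {loss:.3f}")

# Generalization is the same question asked about inputs the model never saw.
print("prediction on unseen input 5.0:", weight * 5.0)
```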
A central theme is decision quality. Cassie emphasizes that the most important work in ML is not picking an algorithm but deciding goals, choosing appropriate data, and setting up testing so systems behave as intended. She explains training, validation, and test splits; overfitting versus underfitting; and why leakage, sampling bias, and poor labeling can quietly ruin results. Throughout, she offers memorable analogies that make abstractions tangible and keeps math light while preserving rigor.
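A minimal sketch of that split discipline, assuming scikit-learn and synthetic data: comparing the training score against the held-out score is how the overfitting she describes becomes visible.

```python
# A minimal sketch, assuming scikit-learn; the data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hold out data the model never trains on; tuning on the test set is leakage.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# An unconstrained tree can memorize its training data.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A large gap between these two numbers is the classic overfitting signature.
print("train accuracy:", model.score(X_train, y_train))  # near 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower
```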
She gives a tour of common model families at a high level, underscoring that business context should drive method choice. Feature engineering and problem framing get special attention because they determine whether an initiative is useful. Cassie also introduces human-in-the-loop design, measurement plans, and deployment considerations so teams avoid “demo-only” wins. Ethics and safety are treated as practical design constraints: know the failure modes, set thresholds, build guardrails, and decide who is responsible for intervention.
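In practice, human-in-the-loop design often reduces to routing rules like the hypothetical sketch below. The thresholds and categories are assumptions for illustration; on a real system they should come out of the testing and measurement plans she describes.

```python
# A hypothetical routing sketch; thresholds are illustrative, not recommended.
AUTO_APPROVE = 0.95  # model confidence above which the system acts alone
AUTO_REJECT = 0.05   # model confidence below which the system acts alone

def route(confidence: float, high_stakes: bool) -> str:
    if high_stakes:
        return "human review"  # some failure modes should never be automated
    if confidence >= AUTO_APPROVE:
        return "auto-approve"
    if confidence <= AUTO_REJECT:
        return "auto-reject"
    return "human review"      # the system abstains in the gray zone

for case in [(0.99, False), (0.50, False), (0.99, True)]:
    print(case, "->", route(*case))
```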
The session closes with a recap of a simple ML workflow that leaders and contributors can follow: define success and constraints, choose and prepare data, establish evaluation and guardrails, iterate with validation, and plan operational ownership. The result is a clear, confidence-building starting point for cross-functional teams.
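One way a team might operationalize that recap is as a lightweight planning checklist; the sketch below is my own illustration (all field names and values invented), not an artifact from the course.

```python
# A hypothetical planning checklist mirroring the workflow recap.
from dataclasses import dataclass

@dataclass
class MLProjectPlan:
    success_metric: str     # define success and constraints first
    constraints: list[str]
    data_source: str        # choose and prepare data
    evaluation_plan: str    # establish evaluation and guardrails
    guardrails: list[str]
    owner: str              # plan operational ownership

plan = MLProjectPlan(
    success_metric="cut ticket triage time by a measurable margin",
    constraints=["no PII leaves the region", "p95 latency under 200 ms"],
    data_source="12 months of labeled support tickets",
    evaluation_plan="holdout set plus weekly drift review",
    guardrails=["human review below the confidence threshold", "one-click rollback"],
    owner="a named team, not everybody",
)
print(plan.success_metric)
```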
03:30 AI as automation. Instructions vs examples concept
08:10 Core ML tasks: prediction, classification, ranking, recommendation
13:00 Key terms: labels, features, loss, generalization
18:20 Data is a textbook. What good examples look like
23:15 Train, validation, test. Why splits and holdouts matter
28:40 Overfitting vs underfitting. Bias–variance intuition
34:05 Evaluation basics. Metrics, thresholds, and business impact
39:30 Data pitfalls. Sampling bias, label noise, and leakage
45:00 Feature engineering. Turning domain knowledge into signals
50:20 Model families at a glance. When simplicity wins
56:00 Human-in-the-loop. Ownership, intervention, and decision rights
61:30 Operationalization. Monitoring, drift, and feedback loops
66:45 Ethics and safety. Failure modes and guardrails
72:10 Measurement plans. What to track and why it matters
77:30 Workflow recap. From success criteria to deployment
82:15 Common stakeholder questions and quick answers
85:00 Closing takeaways. How to participate effectively
Cassie unpacks LinkedIn’s recent move to use members’ posts, by default, to train content-generation systems outside the EU and UK. She calls it a dark design pattern because the chosen default diverges from what most users would pick if fully informed. She distinguishes this policy from routine AI uses such as personalization and security. The controversy is content specific because generative AI has input and output symmetry: when what goes in looks like what comes out, leakage risk rises and legal deletion promises collide with technical reality.
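Her symmetry point can be demonstrated with an intentionally silly toy: a next-word model whose entire training set is one made-up post. Because generative outputs resemble their inputs, generation can return the training text verbatim. The sketch is mine, not from the video.

```python
# A toy next-word model trained on a single invented post.
from collections import defaultdict
import random

post = "my secret startup idea is a drone that walks dogs".split()

bigrams = defaultdict(list)
for current, following in zip(post, post[1:]):
    bigrams[current].append(following)  # the training data IS the model

random.seed(0)
word, output = "my", ["my"]
while word in bigrams:
    word = random.choice(bigrams[word])
    output.append(word)

# Leaks the training post verbatim, because outputs resemble inputs.
print(" ".join(output))
```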
She explains why machine unlearning is not a reliable remedy today. Once content is baked into a generative model, removing it is like trying to take sugar out of a cake. Reverting to a pre-training checkpoint and retraining would be impractical at scale. This creates a hornet’s nest for privacy laws that require deletion on request, especially when platforms also claim to delete removed posts.
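The cake analogy can be made concrete with a toy linear model (a sketch under my own assumptions, substituting least squares for a real generative model): deleting the source record is trivial for the database, but the trained weights still encode its influence, and the only exact remedy is retraining without it.

```python
# A minimal sketch: deletion is easy for data stores, hard for trained models.
import numpy as np

rng = np.random.default_rng(0)
posts = rng.normal(size=(100, 3))  # invented feature vectors for 100 "posts"
targets = posts @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

weights, *_ = np.linalg.lstsq(posts, targets, rcond=None)  # "train" the model

# A user requests deletion: trivial for the database...
posts_kept = np.delete(posts, 7, axis=0)
targets_kept = np.delete(targets, 7)

# ...but the weights still reflect the deleted record. The only exact remedy
# is retraining from scratch without it, which is impractical at real scale.
weights_retrained, *_ = np.linalg.lstsq(posts_kept, targets_kept, rcond=None)
print(weights, weights_retrained)
```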
Cassie then explores platform strategy and ecosystem risks. If LinkedIn prioritizes AI-generated content or shares creator posts widely with Affiliates, it may alienate human creators and degrade community value. She warns about model collapse, in which models trained increasingly on AI outputs rather than human work suffer abrupt performance declines. High-quality creators have the strongest incentive to opt out, further reducing the quality of the training pool.
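Model collapse is easy to simulate with the simplest possible "model": a Gaussian refit, generation after generation, on its own outputs. The truncation step below stands in for generators favoring typical outputs; all numbers are illustrative assumptions, not from the video.

```python
# A minimal model-collapse simulation with invented numbers.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=1000)  # generation 0: human-made data

for generation in range(1, 7):
    mu, sigma = data.mean(), data.std()
    samples = rng.normal(mu, sigma, size=1000)  # the model's synthetic output
    # Keep only the most "typical" 80%, as sampling heuristics tend to do.
    deviation = np.abs(samples - mu)
    data = samples[deviation <= np.quantile(deviation, 0.8)]
    print(f"generation {generation}: std = {data.std():.3f}")  # shrinks each round
```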
Her leadership guidance is direct. First, make data quality and data rights someone’s job with clear incentives. Second, test systems rigorously, keep humans in the loop for sensitive uses, and avoid death by a thousand pilots that remove oversight. Third, protect creator communities and plan for talent development in an AI world so that junior contributors still gain the experience needed to become trusted leaders. The core message is to seek AI progress without undermining privacy, legality, or the human community that makes platforms valuable.
00:38 Dark patterns and why default choices matter
01:10 Regular AI vs content training. Why this opt-in is different
01:42 Generative AI symmetry. Inputs resemble outputs and raise leakage risk
02:20 Legal conflict. Deletion rights vs technical persistence
03:00 Machine unlearning status. Why it cannot guarantee forgetfulness
03:42 The cake analogy. Retraining cost and impracticality
04:20 Why high quality LinkedIn content is especially valuable
04:50 Creator incentives. Why many opt out rather than donate work
05:28 Platform direction. AI-generated content vs human community
06:10 Model collapse explained. Training on your own synthetic exhaust
07:05 Data quality erosion when top creators opt out
07:42 User trust and community health implications
08:20 Leadership takeaway 1. Assign ownership for data quality and rights
09:00 Leadership takeaway 2. Testing, safety nets, and human in the loop
09:46 Leadership takeaway 3. Maintain creator relations and talent pipelines
10:30 Enterprise adoption cautions. Avoid careless scale-ups
11:10 Practical next steps for teams and policy leads
12:10 Summary. Progress without eroding privacy or community value
What topics does Cassie cover? AI-first leadership, decision intelligence, why AI programs fail, digital trust habits, and upgrading decision quality.
Does she customize her talks? Yes. Cassie customizes content using stakeholder discovery and sector-specific use cases.
Can she go deeper for technical audiences? Yes. Topics include testing methods, red team practices, and decision rights.