Using Lucid to Visualize Neural Networks


Neural networks work in "mysterious ways", but we can now peer inside some of them to see how they work. This talk focuses on Lucid, a tool from the TensorFlow team, and shows several interesting ways of visualizing neural networks with the goal of improving explainability. You will come away with a bit more understanding of how neural networks "learn", and what aspects of the data they are "looking" at.


Neural networks are getting more complex than ever, and harder and harder to understand. But the tools for understanding them are improving too! We will dive into some of these tools to build intuition about what drives the predictions that neural networks make. We will look at everything from the early days of visualizing neural network layers, all the way to the latest approaches, including the building blocks of interpretability, Activation Atlases, and Differentiable Image Parameterizations. Understanding how neural networks "see" is the first step to really harnessing the creative power that they possess. It's a fun and crazy journey, but worth it!
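To give a flavor of the core idea behind these visualization tools: feature visualization works by activation maximization, starting from a random input and iteratively nudging it in the direction that most increases a chosen neuron's activation. This is a minimal, dependency-free sketch of that idea using a single hypothetical linear neuron (the weights and learning rate are made up for illustration); tools like Lucid apply the same principle to neurons deep inside convolutional networks.

```python
# Toy sketch of activation maximization, the idea behind feature
# visualization: repeatedly adjust the input to increase a neuron's
# activation. Real tools optimize images against deep networks; here a
# single linear neuron keeps the example self-contained.
import random

weights = [0.5, -1.0, 2.0]  # hypothetical neuron weights

def activation(x):
    # A linear neuron: dot product of weights and input.
    return sum(w * xi for w, xi in zip(weights, x))

random.seed(0)
x = [random.uniform(-1, 1) for _ in weights]  # random starting input
lr = 0.1
for _ in range(100):
    # For a linear neuron, the gradient of the activation with
    # respect to the input is just the weight vector, so gradient
    # ascent pushes the input toward the weights.
    x = [xi + lr * w for xi, w in zip(x, weights)]

# The optimized input now aligns with the weight vector: it has
# become the pattern this neuron "looks for".
print(activation(x))
```

In a real network the gradient is computed by backpropagation rather than read off the weights, but the loop is the same: the optimized input is an image of what the neuron responds to.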


Yufeng Guo

Yufeng is a Developer Advocate focusing on Cloud AI, where he is working to make machine learning more understandable and usable for all.
He is the creator of the YouTube series AI Adventures, exploring the art, science, and tools of machine learning.
He enjoys hearing about new and interesting applications of machine learning; share your use case with him on Twitter @YufengG.