Probabilistic Programming Using TensorFlow Probability

Abstract

Probabilistic programming allows us to encode domain knowledge to understand data and make predictions. TensorFlow Probability (TFP) is a Python library built on TensorFlow that makes it easy to combine probabilistic models and Deep Learning. With TensorFlow 2.0, TFP can be integrated into your code with very few changes and, best of all, it even works with tf.keras!
This talk will teach you when, why, and how to use TensorFlow Probability.

Description

As a data scientist, if you predict that there is an anomaly, you should be very confident about it. With a probabilistic framework you can make predictions with uncertainty. When you fit your model, how do you know it is good? Is accuracy a good way to score your model? Are metrics like precision and recall more important to you? TensorFlow Probability is a mathematical framework built on top of TensorFlow that allows you to take into account uncertainty in the weights, variance in the data, and even uncertainty in the loss.

TensorFlow Probability can be helpful if:

* You need to quantify the uncertainty in your predictions, as opposed to predicting a single value.
* Your training set has a large number of features relative to the number of data points.
* You want to build a generative model of data, reasoning about its hidden processes.
* Your data is structured, for example with groups, space, graphs, or language semantics, and you'd like to capture this structure with prior information.
* You want to create interpretable machine learning models and compare them.

With TFP you can easily bake these ideas into your model, and it works plug and play with Keras (see the sketch below). The key takeaway from this talk is how TFP can be beneficial in your data science work and why you should experiment with it.

P.S. Demos and case studies will be presented, and a GitHub repository containing the code from the talk will be shared.
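As a taste of that plug-and-play integration, here is a minimal sketch (not code from the talk) of a Keras regression model whose output is a full Normal distribution, so each prediction carries a mean and a learned spread. The layer sizes, learning rate, and the `negloglik` helper are illustrative assumptions:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Keras model that outputs a Normal distribution instead of a point estimate.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),  # outputs: [location, unconstrained scale]
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

# Train by minimising the negative log-likelihood of the targets under the
# predicted distribution, rather than a plain mean-squared error.
negloglik = lambda y, p_y: -p_y.log_prob(y)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss=negloglik)
```

After `model.fit(...)`, calling the model on new inputs returns a distribution object, so `model(x_test).mean()` and `model(x_test).stddev()` give a prediction together with its uncertainty.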

Speaker

Niladri Shekhar Dutt

Undergraduate researcher working on Deep Learning and its applications in Computer Vision and NLP. He was a visiting student at the University of California, Berkeley for Spring 2019, where he worked at the CITRIS Lab. He has won several hackathons, including the San Francisco DeveloperWeek Hackathon 2019 (America's largest challenge-driven hackathon). His current research focuses on self-driving cars and training machine learning models with limited data.