Probabilistic programming allows us to encode domain knowledge to understand data and make predictions. TensorFlow Probability (TFP) is a Python library built on TensorFlow that makes it easy to combine probabilistic models with deep learning. With TensorFlow 2.0, TFP integrates into your code with very few changes, and the best part: it even works with tf.keras!

This talk will teach you when, why, and how to use TensorFlow Probability.

As a data scientist, if you flag an anomaly, you should be able to say how confident you are in that call. With a probabilistic framework you can make predictions with quantified uncertainty.
When you fit your model, how do you know it's good? Is accuracy a good way to score your model? Are some metrics like precision and recall more important to you?
TensorFlow Probability is a mathematical framework built on top of TensorFlow that lets you take into account uncertainty in the weights, variance in the data, and even uncertainty in the loss.
TensorFlow Probability can be helpful if:
* You need to quantify the uncertainty in your predictions, as opposed to predicting a single value.
* Your training set has a large number of features relative to the number of data points.
* You want to build a generative model of data, reasoning about its hidden processes.
* Your data is structured — for example, with groups, space, graphs, or language semantics — and you’d like to capture this structure with prior information.
* You want to create interpretable machine learning models and compare them.
With TFP you can easily bake these ideas into your model, and it works plug-and-play with Keras.
The key takeaway from this talk is how TFP can benefit your data science work and why you should experiment with it.
P.S. Demos and case studies will be presented, and a GitHub repository containing the code from the talk will be shared.