
TensorFlow Probability


June 13, 2019 —
*Posted by Venkatesh Rajagopalan, Director Data Science & Analytics; Mahadevan Balasubramaniam, Principal Data Scientist; and Arun Subramaniyan, VP Data Science & Analytics at BHGE Digital*

We believe in a slightly modified version of George Box’s famous comment: “All models are wrong, some are useful” **for a short period of time**. Irrespective of how sophisticated a model is, it needs to be …

Modeling “Unknown Unknowns” with TensorFlow Probability — Industrial AI, Part 3

We believe in a slightly modified version of George Box’s famous comment: “All models are wrong, some are useful” **for a short period of time**.

In this final part, we will describe the uncertainties that are characterized as “unknown unknowns” and the techniques used for effectively modeling them. To bring all the aspects of our modeling philosophy together in one application, we will predict the performance of a lithium ion battery when we don’t know its real deterioration characteristics.

Cell potential (voltage) and capacity (usually stated in Ampere-hours) are two primary metrics of interest in a lithium-ion battery. The chart below depicts the time-profile of cell potential in a discharge cycle as a function of current draw. As the discharge current increases, the cell potential decreases more rapidly over time.

Cell potential versus time for different current draws.

Effect of cycling (usage) on cell potential.

`V = a - 0.0005*Q + log(b - Q), where a = 4 - 0.25*I and b = 6000 - 250*I`

where *V* is the cell potential, *I* is the discharge current, and *Q* is the charge drawn from the battery.
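As a concrete sketch, the empirical voltage model above can be evaluated directly (a natural logarithm is assumed here; units follow the formula as given):

```python
import numpy as np

def cell_potential(I, Q):
    """Empirical cell-potential model: V = a - 0.0005*Q + log(b - Q),
    with a = 4 - 0.25*I and b = 6000 - 250*I (natural log assumed)."""
    a = 4.0 - 0.25 * I
    b = 6000.0 - 250.0 * I
    return a - 0.0005 * Q + np.log(b - Q)

# Voltage drops as charge is drawn, and drops faster at higher current draw.
v_fresh = cell_potential(1.0, 0.0)      # fresh cell, low current
v_used = cell_potential(1.0, 1000.0)    # same current, after drawing charge
v_high_i = cell_potential(2.0, 1000.0)  # higher current draw
```

This reproduces the qualitative behavior in the charts: the potential decreases with charge drawn, and more rapidly as the discharge current increases.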

Every application is unique and utilizes the battery differently. Consider two dimensions of variation: the amount of discharge and the rate of discharge. Both of these dimensions also apply to the recharging cycle. In other words, a battery can be discharged and recharged fully or partially; it can also be discharged and recharged slowly or quickly. The usage of each battery and variation in the manufacturing process affect how each battery degrades. We simulate this variation by choosing a random value for the deterioration parameter δ and computing a response for a random usage at a random cycle. After multiple discharge-recharge cycles, a battery will degrade in performance and provide a lower voltage and lower overall capacity. To illustrate the concept of degradation tracking with a simple example, we will model the deterioration by functionally altering the linear and asymptotic segments of the battery response over the entire cycle by a deterioration factor.

If δ is the deterioration factor, then modified battery responses would be the following:

The data from degraded batteries are used for model updating, and not for building the initial model. Please note that this functional form is only an approximation but still representative of field behavior of a battery.
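The post's exact deterioration equations are shown in its figures and are not reproduced here; as a purely hypothetical stand-in, one simple choice is to scale the linear and asymptotic terms of the pristine model by (1 + δ):

```python
import numpy as np

rng = np.random.default_rng(42)

def degraded_potential(I, Q, delta):
    """Hypothetical degraded-response sketch: the linear and asymptotic
    (logarithmic) segments of the pristine model are scaled by (1 + delta).
    This is an illustrative assumption, not the post's actual equations."""
    a = 4.0 - 0.25 * I
    b = 6000.0 - 250.0 * I
    return a - 0.0005 * (1.0 + delta) * Q + np.log(b - (1.0 + delta) * Q)

# Simulate one battery: a random deterioration factor, random usage points.
delta = rng.uniform(0.05, 0.25)        # random deterioration for this battery
Q = rng.uniform(0.0, 3000.0, size=5)   # random charge drawn per cycle
V = degraded_potential(1.0, Q, delta)
```

Any positive δ lowers the predicted potential at every usage point, mimicking a degraded battery.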

However, if the underlying physics is unknown, then the modeler has no option but to use a purely data-driven model.

We will show the latter case of modeling the unknown unknown here by building a DNN model with the data from pristine batteries. We have provided representative values to ensure the reader can create the data to build a “generic” deep learning model.

The process that we will follow for the rest of the blog is shown below:

Modeling process to illustrate unknown unknowns.

Simple Deep Neural Network architecture to model battery performance.

Training and test predictions from DNN model.

Pristine DNN model predictions compared to degraded battery data.

As with the Unscented Kalman Filter (UKF) methodology described in our previous blog post, we will model the “states” of the particle filter (PF) as a random walk process. The output model is the DNN. The equations describing the process and measurement models are summarized below:

```
x[k] = x[k-1] + w[k]
y[k] = h(x[k], u[k]) + v[k]
```

where:

- *x* represents the states of the PF (i.e., the hyperparameters of the DNN that need to be updated)
- *h* is the functional form of the DNN
- *u* represents the inputs of the DNN: current and capacity
- *y* is the output of the DNN: cell potential
- *w* represents the process noise
- *v* represents the measurement noise

The PF algorithm can be summarized in the following steps:

1. Generate initial particles from a proposed density, called the importance density.
2. Assign weights to the particles and normalize the weights.
3. Add process noise to each particle.
4. Generate output predictions for all the particles.
5. Calculate the likelihood of the error between the actual and predicted output values, for each particle.
6. Update each particle's weight using its error likelihood.
7. Normalize the weights.
8. Calculate the effective number of samples.
9. If the effective number of samples < threshold, resample the particles.
10. Estimate the state vector and its covariance.
11. Repeat steps 3 to 10 for each measurement.
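The steps above can be sketched as one compact loop. This is an illustrative NumPy version under assumed names and shapes, with `h` standing in for the DNN forward pass:

```python
import numpy as np

def particle_filter(y_meas, u, h, x0, P0, Qn, Rn, Ns=500, seed=0):
    """Generic particle-filter sketch for the random-walk state model
    x[k] = x[k-1] + w[k], y[k] = h(x[k], u[k]) + v[k]."""
    rng = np.random.default_rng(seed)
    nx = len(x0)
    # Steps 1-2: draw initial particles from the importance density, equal weights.
    particles = rng.multivariate_normal(x0, P0, size=Ns)
    weights = np.full(Ns, 1.0 / Ns)
    estimates = []
    for y, uk in zip(y_meas, u):
        # Step 3: add process noise to each particle (random-walk propagation).
        particles = particles + rng.multivariate_normal(np.zeros(nx), Qn, size=Ns)
        # Steps 4-6: predict outputs and weight each particle by its error likelihood.
        yhat = np.array([h(p, uk) for p in particles])
        lik = np.exp(-0.5 * (y - yhat) ** 2 / Rn) / np.sqrt(2 * np.pi * Rn)
        weights = weights * lik
        # Step 7: normalize the weights.
        weights = weights / weights.sum()
        # Steps 8-9: resample when the effective sample size falls below a threshold.
        n_eff = 1.0 / np.sum(weights ** 2)
        if n_eff < Ns / 2:
            idx = rng.choice(Ns, size=Ns, p=weights)
            particles, weights = particles[idx], np.full(Ns, 1.0 / Ns)
        # Step 10: the state estimate is the weighted particle mean.
        estimates.append(weights @ particles)
    return np.array(estimates)
```

For example, tracking a constant scalar state through a linear output model `h(x, u) = x[0] * u` converges toward the true value as measurements arrive.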

TensorFlow Probability is used for three of these steps:

- generating the particles,
- generating the noise values, and
- computing the likelihood of the observation, given the state.

```
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Generate particles from a multivariate normal centered at the initial
# state vector pf['state'] with state covariance matrix pf['state_cov'].
sess = tf.Session()
state = np.array(pf['state'])
state.shape = (num_st,)  # num_st is the number of state variables
mvn = tfd.MultivariateNormalFullCovariance(loc=state,
                                           covariance_matrix=pf['state_cov'])
particles = sess.run(mvn.sample(pf['Ns']))  # pf['Ns'] is the number of particles
```

```
def pf_predout(coef, loc_pm, p_ampDraw, time_val, tVarDict):
    # Charge drawn = current draw x elapsed time.
    p_ampSec = p_ampDraw * time_val
    baseW_Out = tVarDict['W_OUT:0']
    n_dim = len(coef)
    coef = np.array(coef)
    coef.shape = (n_dim,)
    # Overwrite the output-layer bias and weights with this particle's state.
    tVarDict['b_OUT:0'] = [coef[0]]
    for idxW in np.arange(len(baseW_Out)):
        baseW_Out[idxW, 0] = coef[idxW + 1]
    tVarDict['W_OUT:0'] = baseW_Out
    # Predict the cell potential with the updated DNN parameters.
    yhat = np.asmatrix(loc_pm.predictSingleRowAugmented(p_ampDraw, p_ampSec, tVarDict))
    return yhat
```

```
def update_weights(y, yhat, R, prev_weight):
    from scipy.stats import norm
    # Likelihood of the prediction error under zero-mean Gaussian
    # measurement noise with standard deviation R.
    likelihood = norm.pdf(y - yhat, 0, R)
    updt_weight = prev_weight * likelihood
    return updt_weight
```
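Steps 7 and 8 of the algorithm, normalizing the weights and computing the effective number of samples, are not shown in the snippets above; a minimal sketch (function names are ours):

```python
import numpy as np

def normalize_weights(weights):
    """Step 7: scale the weights so they sum to one."""
    weights = np.asarray(weights, dtype=float)
    return weights / weights.sum()

def effective_sample_size(weights):
    """Step 8: N_eff = 1 / sum(w_i^2) for normalized weights. A small
    N_eff signals weight degeneracy and triggers resampling (step 9)."""
    weights = np.asarray(weights, dtype=float)
    return 1.0 / np.sum(weights ** 2)
```

With uniform weights N_eff equals the particle count; when a single particle carries nearly all the weight, N_eff collapses toward one.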

```
def systematic_resample(sess, weights):
    N = len(weights)
    # Make N subdivisions and choose positions with a single random offset.
    positions = (sess.run(tf.random.uniform((1,))) + np.arange(N)) / N
    indexes = np.zeros(N, 'i')
    cum_sum = np.cumsum(weights)
    i, j = 0, 0
    while i < N:
        if positions[i] < cum_sum[j]:
            indexes[i] = j
            i = i + 1
        else:
            j = j + 1
    return indexes

def resample_from_index(particles, weights, indexes):
    # Keep the selected particles and reset to uniform weights.
    particles = particles[:, indexes]
    weights = weights[indexes]
    weights.fill(1.0 / len(weights))
    return particles, weights, indexes
```
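Step 10, estimating the state vector and its covariance from the weighted particles, can be sketched as follows (using the same column-per-particle layout as the resampling code above; the function name is ours):

```python
import numpy as np

def estimate_state(particles, weights):
    """Weighted mean and covariance of the particle cloud.
    particles: (n_states, Ns) array, one column per particle;
    weights: (Ns,) normalized particle weights."""
    mean = particles @ weights          # weighted mean, shape (n_states,)
    diff = particles - mean[:, None]    # center each particle on the mean
    cov = (diff * weights) @ diff.T     # weighted covariance, (n_states, n_states)
    return mean, cov
```

The covariance of the estimate is what drives the sequential updates below: a widening prediction uncertainty signals that another model update is needed.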

Updated model prediction (from 1st update point).

Sequential model update results initiated based on prediction uncertainty.

Final model (only latest updates) predictions compared to initial model.

Comparison of model prediction errors.

Comparison of initial and final state of the parameters of the last DNN hidden layer.
