# Why?

For the POMDP code, we are required to implement our transition kernel in the form of `T_cartesian`, which computes it for all state pairs $(s, s')$. This approach is advantageous because:

- We can vectorize the computation of all probabilities
- We can avoid a lot of overhead in computing the CDF of the Gaussian and instead just compute a hasty representation of the pdf and normalize (see the sketch below)
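Here is a minimal sketch of what this could look like, assuming a 1-D discretized state space and Gaussian transition noise; the names `grid`, `mu`, and `sigma` are illustrative, not the actual project's code:

```python
import numpy as np

def T_cartesian(grid, mu, sigma):
    """Vectorized transition kernel over all state pairs (s, s').

    grid:  (n,) discretized next states s'
    mu:    (n,) predicted next-state mean for each current state s
    sigma: scalar std dev of the Gaussian transition noise
    """
    # Evaluate an unnormalized Gaussian pdf for every (s, s') pair at once.
    diff = grid[None, :] - mu[:, None]            # (n, n) matrix of s' - mu(s)
    unnorm = np.exp(-0.5 * (diff / sigma) ** 2)   # "hasty" pdf: constants dropped
    # Normalize each row so T[s, :] is a proper distribution over s'.
    return unnorm / unnorm.sum(axis=1, keepdims=True)
```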
The issue with this basic approach is that with small values for the variance, underflow can arise and cause our rows to be so small that summing them gives us a value of $0$, which then leads to normalized rows of `nan`.
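To make the failure concrete (the numbers here are illustrative): in float64, `np.exp` of anything below roughly $-745$ is exactly $0.0$, so a row far from the mean sums to zero and normalizing divides by zero:

```python
import numpy as np

row = np.exp(-0.5 * (np.array([10.0, 11.0, 12.0]) / 0.05) ** 2)
print(row)              # [0. 0. 0.] -- every entry underflows to exactly zero
print(row / row.sum())  # [nan nan nan] -- 0/0 from the zero row sum
```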
This is where the Log-Sum-Exp trick comes in. It was first suggested to me through Cursor, which kinda just did a "well duh this works" approach to sneaking it in without naming the concept it was wielding. I sought to record my understanding of the concept, as the person whose blog I learned this concept from aptly put this quote on their website homepage:

> I learned very early the difference between knowing the name of something and knowing something. - Richard Feynman
So let’s jump into it.
In general, in ML or stats we will often work in log scale. This allows us to handle very small values, e.g. a product of probabilities $\prod_i p_i$ with each $p_i \ll 1$, such that we can avoid issues like underflow:

$$\log \prod_i p_i = \sum_i \log p_i$$

Where the LHS could be subject to underflow (due to precomputing $\prod_i p_i$), the $\log$ operation sufficiently (in most cases) blows up the respective individual values s.t. the RHS is not victim to the same issue.
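A quick illustration of the difference (illustrative numbers):

```python
import numpy as np

p = np.full(200, 1e-3)     # 200 small probabilities
print(np.prod(p))          # 0.0 -- 1e-600 underflows float64 (min ~1e-308)
print(np.sum(np.log(p)))   # -1381.55... -- the same quantity, comfortably in range
```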
Overall, working in log scale is so much more computationally stable that even scipy's `multivariate_normal` uses it to compute pdfs.
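For instance, evaluating a narrow Gaussian far from its mean underflows in the direct density but is perfectly fine via `logpdf`:

```python
import numpy as np
from scipy.stats import multivariate_normal

mvn = multivariate_normal(mean=np.zeros(2), cov=1e-4 * np.eye(2))
x = np.array([1.0, 1.0])
print(mvn.pdf(x))     # 0.0 -- the density itself underflows
print(mvn.logpdf(x))  # ~ -9992.6 -- the log density is representable
```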
Sometimes though, we need to convert our log probabilities (denoted as $\ell_i = \log p_i$) to real probabilities (denoted as $p_i$). One may naively compute the following: $p_i = e^{\ell_i}$. These $\ell_i$'s individually may be very large in magnitude, where specifically in our application:

- They may be "large" (very negative) due to a really small $\sigma^2$ in $\ell_i \propto -\frac{(s' - \mu)^2}{2\sigma^2}$
- But tininess doesn't seem to be too much of an issue, since if we're going over all $s'$ then there will be an $s'$ s.t. $\ell_{s'}$ is not too small. Since the term $-\frac{(s' - \mu)^2}{2\sigma^2}$ is negative, we get really tiny exponents for pretty much everything unless we have a sufficiently large $\sigma^2$ in the denominator to combat the distance in the numerator.

Enter the Log-Sum-Exp operation.

>[!def] LogSumExp
>Given a list of values $\ell_1, \dots, \ell_n$, the LogSumExp of these is defined as
>$$\mathrm{LSE}(\ell_1, \dots, \ell_n) = \log \sum_{i=1}^{n} e^{\ell_i}$$
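Note that evaluating this definition literally inherits the underflow problem: every $e^{\ell_i}$ can round to zero, giving $\log 0 = -\infty$:

```python
import numpy as np

logp = np.array([-1000.0, -1001.0, -1002.0])
print(np.log(np.sum(np.exp(logp))))  # -inf -- every exp(...) underflows to 0.0
```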
Now let us rewrite the naive $p_i$:

$$p_i = \frac{e^{\ell_i}}{\sum_j e^{\ell_j}} = e^{\ell_i - \mathrm{LSE}(\ell_1, \dots, \ell_n)}$$

Still, you may observe that we're still dealing with individual $e^{\ell_j}$ values inside the LSE, which brings us back to our original problem. Well, consider the following: let $M = \max_i \ell_i$, then

$$\mathrm{LSE}(\ell_1, \dots, \ell_n) = M + \log \sum_{i=1}^{n} e^{\ell_i - M}$$

Now, using the following inequality from Wikipedia:

$$\max(\ell_1, \dots, \ell_n) \le \mathrm{LSE}(\ell_1, \dots, \ell_n) \le \max(\ell_1, \dots, \ell_n) + \log n$$

we get

$$0 \le \log \sum_{i=1}^{n} e^{\ell_i - M} \le \log n,$$

hence

$$\ell_i - M - \log n \le \ell_i - \mathrm{LSE}(\ell_1, \dots, \ell_n) \le \ell_i - M,$$

which is nice! This means each $e^{\ell_i - M}$ lives in $[0, 1]$ with the largest being exactly $e^0 = 1$, so the sum is at least $1$ and does not have to deal with underflow.
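Putting the pieces together, here is a sketch of the stabilized computation in plain NumPy (in practice `scipy.special.logsumexp` implements the same max-shift):

```python
import numpy as np

def logsumexp(logp):
    """Stable log(sum(exp(logp))) via the max-shift derived above."""
    M = np.max(logp)
    # The shifted exponents are <= 0 and the largest is exactly 0,
    # so the sum is >= 1 and can never round to zero.
    return M + np.log(np.sum(np.exp(logp - M)))

def normalize_log_probs(logp):
    """Convert log probabilities to a distribution: p_i = exp(l_i - LSE)."""
    return np.exp(logp - logsumexp(logp))

logp = np.array([-1000.0, -1001.0, -1002.0])
print(logsumexp(logp))            # ~ -999.59, where the naive version gave -inf
print(normalize_log_probs(logp))  # [0.665 0.245 0.090] -- a valid row, no nans
```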