Over on John Cook's blog there is a little triviality, referred to as the "soft maximum," that I find to be fun, at the very least.
The idea is this: given a list of values, say $x_1,x_2,\ldots,x_n$, the function
$g(x_1,x_2,\ldots,x_n) = \log(\exp(x_1) + \exp(x_2) + \cdots + \exp(x_n))$
returns a value very near the maximum in the list.
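Here is a minimal sketch in Python, just to see it in action (the helper name soft_max is mine, not anything from the post):

```python
import math

def soft_max(xs):
    """Soft maximum: the log of the sum of the exponentials of the inputs."""
    return math.log(sum(math.exp(x) for x in xs))

values = [1.0, 2.5, 7.0]
print(max(values))       # 7.0
print(soft_max(values))  # about 7.013, just above the hard maximum
```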
This happens because the exponentiation exaggerates the differences between the $x_i$ values. For the largest $x_i$, $\exp(x_i)$ will be *really* large, and this largest exponential will significantly outweigh all of the others combined. Taking the logarithm, i.e. undoing the exponentiation, we essentially recover the largest of the $x_i$'s. (Of course, the smaller terms still contribute something, so we never recover the maximum exactly, and if two of the values are very near one another the overshoot is a bit bigger, but it won't be far off!)
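To put a number on "won't be far off": since every $\exp(x_i)$ is positive, $g$ always lands slightly above the true maximum, and it can never exceed it by more than $\log(n)$:

$\max_i x_i \le g(x_1,\ldots,x_n) \le \max_i x_i + \log(n)$

The worst case is when all $n$ values are equal, say $x_i = m$ for every $i$, where $g = \log(n\exp(m)) = m + \log(n)$.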
About this, John Cook says: "The soft maximum approximates the hard maximum but it also rounds off the corners." This couldn't really be said any better.
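One way to make the corner-rounding precise is to differentiate $g$; a quick computation gives

$\frac{\partial g}{\partial x_i} = \frac{\exp(x_i)}{\exp(x_1)+\exp(x_2)+\cdots+\exp(x_n)}$

which is smooth everywhere and strictly between $0$ and $1$, whereas the derivative of the hard maximum jumps wherever two of the $x_i$ tie.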
I recall trying to cleverly construct sequences for proofs in advanced calculus where not-everywhere-differentiable operations would have been great to use if they didn't have that pesky non-differentiable trait. I can't recall a specific instance where I was tempted to use $\max(x_i)$, but it seems at least plausible that one would have come up.
Has anyone used this before, or does anyone have a scenario offhand where it would be useful?