Talk Proposal Submission
If you are interested in attending this talk at PyCon JP 2017, please use the social media share buttons below. We will consider the popularity of the proposals when making our selection.
talk
A Guide to Exponentiation, and How It Affects Machine Learning (en)
Speakers
en zyme
Audience level:
Intermediate
Category:
Science
Description
Exponentiation is the gotcha of math operators. Be it square or square root, exp, log, tanh, or the complex roots of unity, ** (aka ^) has its work cut out for it. Ints, floats, fractions, matrices, complex numbers, and zero don't play nicely together. Precision, accuracy, and performance aspects of Python and Julia can be contrasted by following ** from code down to the bits. Machine learning aspects are also considered.
Objectives
The mathematics of exponentiation and why data science cares (nearest neighbors and neural-network activation functions, e.g. logit/logistic).
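As a taste of why this matters for machine learning, here is a minimal sketch (plain math module, nothing framework-specific assumed) of why the logistic function cannot be implemented straight from its textbook formula:

```python
import math

def logistic_naive(x):
    # 1 / (1 + e**-x): math.exp overflows past ~709.8, so a large
    # negative x raises OverflowError before we ever divide
    return 1.0 / (1.0 + math.exp(-x))

def logistic_stable(x):
    # branch on the sign so exp() only ever sees a non-positive argument
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)                # underflows gracefully to 0.0
    return z / (1.0 + z)

print(logistic_stable(-1000.0))    # 0.0, no exception
# logistic_naive(-1000.0)          # OverflowError: math range error
```

Libraries get this right for you (e.g. scipy.special.expit), but only if you call them instead of re-deriving the formula.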
Calculation, precision, storage, representation, and graphing all need to be considered, especially at scale. ** is prone to underflow and overflow, with possibly dire consequences. It can even take longer to display the result of an exponentiation than to compute it! Speed does matter when billions of calculations are being done.
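The display-versus-compute gap is easy to demonstrate with timeit (exact timings vary by machine and Python version; 3.11+ additionally caps int-to-str conversion, so the demo lifts that cap):

```python
import sys
import timeit

# Python 3.11+ limits int -> str conversion size by default; disable it
if hasattr(sys, 'set_int_max_str_digits'):
    sys.set_int_max_str_digits(0)

# 7**100000 has ~84,500 decimal digits: the power itself is cheap,
# but the binary -> decimal conversion inside str() dominates
t_calc = timeit.timeit('7 ** 100000', number=10)
t_show = timeit.timeit('str(7 ** 100000)', number=10)
print(f'compute only: {t_calc:.3f}s   compute + str(): {t_show:.3f}s')
```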
A very brief recap of language and version differences.
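One such difference, as a small taste: in Python 3 the ** operator promotes a negative base raised to a fractional power to a complex result, where Python 2 raised ValueError, and math.pow still follows C's pow semantics:

```python
import math

print((-8) ** (1 / 3))     # ~ (1+1.732j): a principal complex cube
                           # root, not the real root -2.0
try:
    math.pow(-8, 1 / 3)    # C pow() semantics: domain error instead
except ValueError as e:
    print('math.pow:', e)
```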
The CPython implementation of exponentiation is fairly convoluted yet informative. Ironically, it is sometimes possible to successfully perform exponentiation with integers when floats will fail. As an operator, ** becomes the function pow (and __ipow__ for the in-place **=). Numerical help is needed at the edge cases, and so: exp -> expm1, sqrt -> hypot, log -> log1p, factorial -> gamma and Stirling's approximation; also why recursion is evil (especially for Fibonacci, where an exponentiation comes to the rescue) and how caching might be useful.
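A REPL-sized sketch of a few of these edge cases:

```python
import math

# arbitrary-precision ints succeed where 64-bit floats overflow
big_int = (10 ** 200) ** 2  # fine: a 401-digit Python int
try:
    (1e200) ** 2            # 1e400 exceeds the float maximum ~1.8e308
except OverflowError:
    print('float ** overflowed')

# near zero, exp(x) - 1 cancels catastrophically; expm1 does not
x = 1e-12
print(math.exp(x) - 1)      # only ~4 correct significant digits
print(math.expm1(x))        # accurate to full precision

# sqrt(x*x + y*y) overflows for huge x and y; hypot rescales internally
big = 3e200
print(math.hypot(big, big)) # ~4.24e200, no overflow
```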
In addition, numpy provides exp and exp2, ldexp and frexp, log2, logaddexp and logaddexp2, and square and power, each with interesting use cases.
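For instance, logaddexp computes log(e**a + e**b) without letting the intermediate exp underflow, which is exactly what log-probability code needs; a small sketch:

```python
import numpy as np

a, b = -1000.0, -1000.5
with np.errstate(divide='ignore'):
    print(np.log(np.exp(a) + np.exp(b)))  # -inf: both exp() underflowed to 0
print(np.logaddexp(a, b))                 # ~ -999.526, the correct value

# frexp/ldexp split a float into mantissa and exponent and rebuild it
# exactly: 6.0 == 0.75 * 2**3
m, e = np.frexp(6.0)
print(m, e, np.ldexp(m, e))               # 0.75 3 6.0
```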
Did someone say matrix exponentiation? numpy.linalg has matrix_power, and scipy.linalg has expm, expm2, expm3, expm_cond, and expm_frechet.
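A minimal contrast between the two (scipy assumed available; note that expm is the true matrix exponential, not elementwise np.exp):

```python
import numpy as np
from scipy.linalg import expm   # assumes scipy is installed

# matrix_power: exact integer powers by repeated squaring
F = np.array([[1, 1], [1, 0]])            # the Fibonacci Q-matrix
print(np.linalg.matrix_power(F, 10))      # [[89 55] [55 34]] -> F(10) == 55

# expm: e**A in the matrix sense (Pade approximation under the hood)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # generator of 2-D rotations
print(expm(A))                            # [[cos 1, sin 1], [-sin 1, cos 1]]
```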
Cryptography uses lots of modular exponentiation, and its tricks apply here too.
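The built-in three-argument pow is the workhorse: it squares and multiplies under the modulus instead of ever materializing the full power:

```python
p = 2**255 - 19          # the Curve25519 prime, as a familiar modulus
print(pow(3, p - 2, p))  # 3**-1 (mod p), by Fermat's little theorem
print(pow(3, -1, p))     # the same inverse, directly (Python 3.8+)
# naive (3 ** (p - 2)) % p would first build an integer with ~1e76 digits
```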
Solving the simple equation x**y == y**x is challenging for both rational and real solutions. Graphing can help, with its own challenges of logarithmic scaling. We'll be examining the decimal, fractions, mpmath, bignum, and sympy modules for insights, workarounds, and new gotchas.
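As a taste, the nontrivial rational solutions come from a single one-parameter family, which the fractions module can generate exactly (the check below drops to floats, since a Fraction raised to a fractional power does so anyway):

```python
import math
from fractions import Fraction

# x = (1 + 1/n)**n, y = (1 + 1/n)**(n+1) satisfies x**y == y**x
for n in range(1, 4):
    x = (1 + Fraction(1, n)) ** n
    y = (1 + Fraction(1, n)) ** (n + 1)
    # x**y == y**x  <=>  y*log(x) == x*log(y)
    ok = math.isclose(float(y) * math.log(x), float(x) * math.log(y))
    print(n, x, y, ok)
# n = 1 yields the classic integer pair (2, 4)
```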