2014-09-13

Further Adventures in Python Optimisation

Previously we found that PyPy achieves the best performance gain, executing fieldfunc_py in ~6 us. At the end of that article, I mentioned that the C implementation is up to 50x faster, managing the same calculation in ~0.12 us.

The naive conclusion is that we should simply call the C function from Python to do the heavy lifting, achieving performance somewhere between PyPy and C. But nothing in life is easy...

Naive C Interfacing

The C code in the previous article was compiled using

gcc -O -shared -o c_fieldfunc.so -fPIC c_fieldfunc.c

The MagnetElement class was then amended to use the C function whenever possible:

class MagnetElement(object):
  def __init__(self, position, size, magnetisation, fieldcalcfunc=fieldfunc_fast_py):
    """ 
    position and magnetisation are expected to be numpy arrays of 3
    elements each.

    size is a single number, implying all elements are square
    """

    self.position = position
    self.size = size
    self.magnetisation = magnetisation
    self.moment = magnetisation * size * size * size
    self._fieldcalcfunc = fieldcalcfunc

    try:
      import ctypes
      cmodule = ctypes.cdll.LoadLibrary('c_fieldfunc.so')
      self._fieldcalcfunc = cmodule.fieldfunc
      self.fieldAt = self._cfieldAt
    except OSError, ex: 
      pass

  def _cfieldAt(self, p): 
    import ctypes
    def voidp(x):
      return ctypes.c_void_p(x.ctypes.data)
    field = np.array([0,0,0], np.double)
    self._fieldcalcfunc(voidp(p),
                        voidp(self.position),
                        voidp(self.moment),
                        voidp(field))
    return field

  def fieldAt(self, p): 
    """ 
    p is expected to be a numpy array of 3
    """
    return self._fieldcalcfunc(p, self.position, self.moment)

This however yielded a per-run time of ~28 us!! So clearly there is significant cost in the interfacing. The most obvious costs are creating a new np.array on every call, and defining and calling voidp each time.
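
To get a feel for where that time goes, the suspect operations can be timed in isolation. A quick sketch (the numbers will of course vary by machine):

import ctypes
from timeit import Timer

import numpy as np

arr = np.array([0, 0, 0], np.double)

def us_per_call(f, runs=100000):
  return Timer(f).timeit(number=runs) * 1e6 / runs

# cost of allocating a fresh 3-element result array on every call
print 'np.array', us_per_call(lambda: np.array([0, 0, 0], np.double)), 'us'

# cost of wrapping an existing array's buffer in a new c_void_p
print 'c_void_p', us_per_call(lambda: ctypes.c_void_p(arr.ctypes.data)), 'us'

Every microsecond of per-call setup matters when the C function itself only needs ~0.12 us.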

A Better C Interface

Below is an improved version:

class MagnetElement(object):
  def __init__(self, position, size, magnetisation, fieldcalcfunc=fieldfunc_fast_py):
    """ 
    position and magnetisation are expected to be numpy arrays of 3
    elements each.

    size is a single number, implying all elements are square
    """

    self.position = position
    self.size = size
    self.magnetisation = magnetisation
    self.moment = magnetisation * size * size * size
    self._fieldcalcfunc = fieldcalcfunc

    try:
      import ctypes
      cmodule = ctypes.cdll.LoadLibrary('c_fieldfunc.so')
      self._fieldcalcfunc = cmodule.fieldfunc
      self.fieldAt = self._cfieldAt
      self._field = np.array([0,0,0], np.double)

      def voidp(x):
        return ctypes.c_void_p(x.ctypes.data)
      self._position_p = voidp(self.position)
      self._moment_p = voidp(self.moment)
      self._field_p = voidp(self._field)
    except OSError, ex: 
      pass

  def _cfieldAt(self, p): 
    import ctypes
    self._fieldcalcfunc(ctypes.c_void_p(p.ctypes.data),
                        self._position_p,
                        self._moment_p,
                        self._field_p)
    return self._field

  def fieldAt(self, p):
    """
    p is expected to be a numpy array of 3
    """
    return self._fieldcalcfunc(p, self.position, self.moment)

Now we are down to ~7 us, which is about what PyPy gave us. We are still slightly slower, and more effort is involved compared to installing PyPy + numpy. That said, this method has the advantage that it is compatible with existing Python and numpy installations, and it can be used to optimise Python code that uses parts of numpy not yet implemented in PyPy.
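
As an aside, numpy's ctypeslib can be used to declare the argument types up front, which lets numpy arrays be passed to the C function directly and does away with the manual c_void_p wrapping. A minimal sketch of that approach (not benchmarked here):

import ctypes

import numpy as np
from numpy.ctypeslib import ndpointer

cmodule = ctypes.cdll.LoadLibrary('c_fieldfunc.so')

# each argument is a contiguous 3-element array of doubles
vec3 = ndpointer(dtype=np.double, shape=(3,), flags='C_CONTIGUOUS')
cmodule.fieldfunc.argtypes = [vec3, vec3, vec3, vec3]
cmodule.fieldfunc.restype = None

p = np.array([10, 10, 10], np.double)
q = np.array([0, 0, 0], np.double)
moment = np.array([1, 0, 0], np.double)
field = np.zeros(3, np.double)

# numpy arrays can now be passed straight through
cmodule.fieldfunc(p, q, moment, field)

Note that ndpointer still checks the dtype, shape and flags of each argument on every call, so this is more about keeping the call site clean than about raw speed.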

Summary

  • C interfacing needs to be done carefully to gain maximum benefit
  • Not quite as easy as using PyPy, but more compatible

Addendum

While we achieved performance similar to PyPy using C interfacing in this one function, PyPy is going to give faster performance across the entire program, while Python+C only speeds up this one function. In my particular case, Python+C is overall ~2x as slow as PyPy.

Cheers,
Steve

Adventures in Python Optimisation

Recently I found myself looking at the following piece of code and trying to optimise it:

from __future__ import division

import numpy as np

MagneticPermeability = 4*np.pi*1e-7
MagneticPermeabilityOver4Pi = MagneticPermeability / (4*np.pi)

def fieldfunc_py(p, q, moment):
  r = p - q 
  rmag = np.sqrt(r.dot(r))

  rdash3 = rmag**3
  rdash5 = rmag**5

  rminus5term = r * moment.dot(r) * 3 / rdash5
  rminus3term = moment / rdash3

  field = (rminus5term - rminus3term) * MagneticPermeabilityOver4Pi
  return field

This, I have been told, computes the field at p due to a magnetic dipole at q.
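
For reference, the standard expression for the field of a point dipole with moment m at position q is

  B(p) = (mu_0 / (4*pi)) * ( 3*r*(m . r) / |r|^5  -  m / |r|^3 ),   where r = p - q

which maps directly onto rminus5term and rminus3term in the code above.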

To profile this code I wrote the following function:

def profile(fieldfunc, dtype=np.double):
  def V(*args):
    return np.array(args, dtype)

  q = V(0, 0, 0)
  moment = V(1, 0, 0)
  p = V(10,10,10)

  def f():
    fieldfunc(p, q, moment)

  # force the JIT or whatever to run
  f()

  from timeit import Timer
  t = Timer(f)
  runs = 100000
  print fieldfunc.__name__,'per run', t.timeit(number=runs)*1e6/runs, 'us'

This yields a per-run time of ~22 us.

Optimise Python/Numpy

The first optimisation is simply to write a more carefully tuned Python/numpy version:

def fieldfunc_fast_py(p, q, moment):
  r = p - q
  rmag2 = np.dot(r, r)

  rdash3 = rmag2**1.5
  rdash5 = rmag2**2.5

  rminus5term = r * np.dot(moment, r) * 3 / rdash5
  rminus3term = moment / rdash3

  field = (rminus5term - rminus3term) * MagneticPermeabilityOver4Pi
  return field

The main differences are using np.dot() and avoiding np.sqrt(): raising the squared magnitude to fractional powers (rmag2**1.5 and rmag2**2.5) turns out to be faster than taking a square root first. These changes get us down to ~18 us or so, a slight improvement.
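
To check that part of the claim in isolation, the two ways of getting |r|**3 from the squared magnitude can be timed directly (a rough sketch; results will vary by machine and numpy version):

from timeit import Timer

import numpy as np

r = np.array([10, 10, 10], np.double)
rmag2 = np.dot(r, r)
runs = 100000

# |r|**3 via an explicit square root, as in fieldfunc_py
print 'np.sqrt(rmag2)**3:', Timer(lambda: np.sqrt(rmag2)**3).timeit(runs) * 1e6 / runs, 'us'

# |r|**3 via a fractional power, as in fieldfunc_fast_py
print 'rmag2**1.5:       ', Timer(lambda: rmag2**1.5).timeit(runs) * 1e6 / runs, 'us'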

numba

Numba offers JIT compilation of Python into native code, is pretty easy to use, and is compatible with numpy. To use it, the profile function was called as:

profile(jit(fieldfunc_py))
profile(jit(fieldfunc_fast_py))

Using jit without type arguments causes numba to infer the types. This yields per-run times of ~22 us for fieldfunc_py and ~18 us for fieldfunc_fast_py. There is no noticeable speed-up; in fact the per-run times increased slightly. This is possibly because the code is already reasonably well vectorised with numpy, and because of the extra overhead of interfacing between the compiled code and the Python runtime.
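
One thing worth checking in a situation like this is whether numba actually compiled the function to native code, or silently fell back to object mode and kept going through the normal Python/numpy machinery. Passing nopython=True makes that explicit; a quick sketch (if the function cannot be fully compiled, this raises an error instead of falling back):

from numba import jit

# nopython=True insists on full native compilation; numba raises an
# error rather than silently dropping into (slow) object mode
fieldfunc_nopython = jit(nopython=True)(fieldfunc_fast_py)
profile(fieldfunc_nopython)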

PyPy

PyPy is an alternative implementation of Python that also uses a JIT, but applies it to the entire program. It is a little harder to use because it needs its own version of numpy, which is only partially implemented; for our needs, though, it suffices.

With PyPy, we see a significant improvement: the per-run times of both fieldfunc_py and fieldfunc_fast_py went down to ~6 us.

Data Type Matters

The diligent reader may have noticed that profile accepts dtype as a keyword argument. This was used to test the effect of using np.float32 vs np.float64. The following code was written:

for dtype in np.float32, np.float64:
  print 'With', dtype
  profile(fieldfunc_py, dtype)
  profile(fieldfunc_fast_py, dtype)

  try:
    from numba import jit
    print '==> Using numba jit'
    profile(jit(fieldfunc_py))
    profile(jit(fieldfunc_fast_py))
  except ImportError:
    pass

This yielded the following output using CPython:

With <type 'numpy.float32'>
fieldfunc_py per run 30.8365797997 us
fieldfunc_fast_py per run 27.7728509903 us
==> Using numba jit
fieldfunc_py per run 22.2719597816 us
fieldfunc_fast_py per run 18.1778311729 us

With <type 'numpy.float64'>
fieldfunc_py per run 21.8909192085 us
fieldfunc_fast_py per run 17.5257897377 us
==> Using numba jit
fieldfunc_py per run 22.4718093872 us
fieldfunc_fast_py per run 18.2463383675 us

Choice of data type has a significant effect, and it is counter-intuitive: the larger data type is faster. This is most likely because Python floats are C doubles and numpy's scalar operations are geared towards float64, so the float32 version spends extra time converting between types. Interestingly, the same isn't true for PyPy, which produced:

With <type 'numpy.float32'>
fieldfunc_py per run 6.29536867142 us
fieldfunc_fast_py per run 6.067070961 us

With <type 'numpy.float64'>
fieldfunc_py per run 6.28163099289 us
fieldfunc_fast_py per run 6.17563962936 us

The two data types perform essentially the same under PyPy.

How Fast Can It Be Done?

The final question is, how fast can this be done? Here is a naive C implementation:

#include <math.h>
#include <stdio.h>

typedef double dtype;

dtype vdot(dtype *u, dtype *v) {
  return u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
}

/**
 * Computes w=u+v
 */
void vadd(dtype *u, dtype *v, dtype *w) {
  w[0] = u[0] + v[0];
  w[1] = u[1] + v[1];
  w[2] = u[2] + v[2];
}

/**
 * Computes w=u-v
 */
void vsub(dtype *u, dtype *v, dtype *w) {
  w[0] = u[0] - v[0];
  w[1] = u[1] - v[1];
  w[2] = u[2] - v[2];
}
/**
 * Computes w = x*u, returns w
 */ 
void vmult(dtype x, dtype *u, dtype *w) {
  w[0] = x*u[0];
  w[1] = x*u[1];
  w[2] = x*u[2];
}

#define MagneticPermeability          (4*M_PI*1e-7f)
#define MagneticPermeabilityOver4Pi   (MagneticPermeability/(4*M_PI))

void vprint(dtype *v) {
  printf("(%e, %e, %e)\n", v[0], v[1], v[2]);
}

void fieldfunc(dtype *p, dtype *q, dtype *moment, dtype *out) {
  dtype r[3];
  vsub(p, q, r);
  dtype rmag2 = vdot(r,r);

  dtype rdash3 = pow(rmag2, 1.5);
  dtype rdash5 = pow(rmag2, 2.5);


  dtype t5[3];
  vmult(vdot(moment, r), r, t5);
  vmult(3/rdash5, t5, t5);

  dtype t3[3];
  vmult(1/rdash3, moment, t3);

  vsub(t5, t3, out);
  vmult(MagneticPermeabilityOver4Pi, out, out);
}

int main(int argc, char **argv) {
  dtype q[] = {0, 0, 0};
  dtype p[] = {10, 10, 10};
  dtype moment[] = {1, 0, 0};
  dtype field[3];

  fieldfunc(p, q, moment, field);
  vprint(field);

  int x;
  for (x = 0; x < 1000000; ++x) {
    fieldfunc(p, q, moment, field);
  }

  return 0;
}

Timing was done as follows:

$ gcc -o c_bench c_bench.c -lm && time ./c_bench 
(0.000000e+00, 1.924501e-11, 1.924501e-11)

real    0m0.116s
user    0m0.114s
sys     0m0.001s

Since the loop runs 1 million iterations in ~0.116 s, the per-run time is ~0.12 us, around 50 times faster than the fastest Python result (PyPy's ~6 us).


Summary

  • The best gain for the effort comes from using PyPy
  • In this particular case, numba did not yield a significant speed-up
  • C is still much faster

Cheers,
Steve