autoray.compiler

Module Contents

Classes

CompilePython

A simple compiler that unravels all autoray calls, optionally sharing intermediates and folding constants.

CompileJax

CompileTensorFlow

CompileTorch

AutoCompiled

Just-in-time compile an autoray.do-using function. See the main wrapper autojit.

Functions

autojit([fn, backend, compiler_opts])

Just-in-time compile an autoray function, automatically choosing the backend based on the input arrays.

Attributes

class autoray.compiler.CompilePython(fn, fold_constants=True, share_intermediates=True)[source]

A simple compiler that unravels all autoray calls, optionally sharing intermediates and folding constants, converts the resulting operations to a code object using compile, then executes it using exec.

Parameters:
  • fn (callable) – Function to compile - should have signature fn(*args, **kwargs) -> array, with args and kwargs any nested combination of tuple, list and dict objects containing arrays (or other constant arguments), and perform array operations on these using autoray.do.

  • fold_constants (bool, optional) – Whether to fold all constant array operations into the graph, which might increase memory usage.

  • share_intermediates (bool, optional) – Whether to cache all computational nodes during the trace, so that any shared intermediate results can be identified.

setup(args, kwargs)[source]

Convert the example arrays to lazy variables and trace them through the function.

__call__(*args, array_backend=None, **kwargs)[source]

If necessary, build, then call the compiled function.

class autoray.compiler.CompileJax(fn, enable_x64=None, platform_name=None, **kwargs)[source]
setup()[source]
__call__(*args, array_backend=None, **kwargs)[source]
class autoray.compiler.CompileTensorFlow(fn, **kwargs)[source]
setup()[source]
__call__(*args, array_backend=None, **kwargs)[source]
class autoray.compiler.CompileTorch(fn, **kwargs)[source]
setup(*args, **kwargs)[source]
__call__(*args, array_backend=None, **kwargs)[source]
autoray.compiler._backend_lookup
autoray.compiler._compiler_lookup
class autoray.compiler.AutoCompiled(fn, backend=None, compiler_opts=None)[source]

Just-in-time compile an autoray.do-using function. See the main wrapper autojit.

__call__(*args, backend=None, **kwargs)[source]
autoray.compiler.autojit(fn=None, *, backend=None, compiler_opts=None)[source]

Just-in-time compile an autoray function, automatically choosing the backend based on the input arrays, or via keyword argument.

The backend used to do the compilation can be set in three ways:

  1. Automatically based on the arrays the function is called with, i.e. cfn(*torch_arrays) will use torch.jit.trace.

  2. In this wrapper, @autojit(backend='jax'), to provide a specific default instead.

  3. When you call the function cfn(*arrays, backend='torch') to override on a per-call basis.

If the arrays supplied are of a different backend type to the compiler, then the returned array will also be converted back, i.e. cfn(*numpy_arrays, backend='tensorflow') will return a numpy array.

The 'python' backend simply extracts and unravels all the do calls into a code object using compile which is then run with exec. This makes use of shared intermediates and constant folding, strips away any python scaffolding, and is compatible with any library, but the resulting function is not 'low-level' in the same way as the other backends.

Parameters:
  • fn (callable) – The autoray function to compile.

  • backend ({None, 'python', 'jax', 'torch', 'tensorflow'}, optional) – If set, use this as the default backend.

  • compiler_opts (dict[dict], optional) – Dict of dicts in which you can supply options for each compiler backend separately, e.g.: @autojit(compiler_opts={'tensorflow': {'jit_compile': True}}).

Returns:

cfn – The function with auto compilation.

Return type:

callable