autoray.compiler

Module Contents

Classes

- `CompilePython` – A simple compiler that unravels all autoray calls, optionally sharing intermediates and folding constants.
- `AutoCompiled` – Just-in-time compile an `autoray.do`-using function.

Functions

- `autojit` – Just-in-time compile an `autoray` function.

Attributes

- `_backend_lookup`
- `_compiler_lookup`
- class autoray.compiler.CompilePython(fn, fold_constants=True, share_intermediates=True)[source]

  A simple compiler that unravels all autoray calls, optionally sharing intermediates and folding constants, converts the result to a code object using `compile`, then executes it using `exec`.

  - Parameters:
    - fn (callable) – Function to compile. Should have signature `fn(*args, **kwargs) -> array`, with `args` and `kwargs` any nested combination of `tuple`, `list` and `dict` objects containing arrays (or other constant arguments), and perform array operations on these using `autoray.do`.
    - fold_constants (bool, optional) – Whether to fold all constant array operations into the graph, which might increase memory usage.
    - share_intermediates (bool, optional) – Whether to cache all computational nodes during the trace, so that any shared intermediate results can be identified.
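To make the `compile`/`exec` mechanism concrete, here is a minimal, purely illustrative sketch (not autoray's actual implementation): a hand-written "trace" of operations is unraveled into straight-line source text, compiled once into a code object, and then executed per call. The `trace`, `env` and `cfn` names are hypothetical.

```python
import numpy as np

# A hand-written stand-in for a traced computation: each step is
# (output name, function name, input names). CompilePython builds
# something like this automatically by tracing autoray.do calls.
trace = [
    ("t0", "tanh", ("x",)),
    ("t1", "tanh", ("y",)),
    ("out", "add", ("t0", "t1")),
]

# functions the generated code will refer to by name
env = {"tanh": np.tanh, "add": np.add}

# unravel the trace into straight-line source, then compile it once
src = "\n".join(
    f"{out} = {fname}({', '.join(ins)})" for out, fname, ins in trace
)
code = compile(src, "<compiled-fn>", "exec")

def cfn(x, y):
    # each call just populates a namespace and runs the code object
    ns = dict(env, x=x, y=y)
    exec(code, ns)
    return ns["out"]
```

Because the source is generated from a flat trace, shared intermediates appear only once and constant subexpressions can be precomputed before code generation, which is the point of the `share_intermediates` and `fold_constants` options.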
- autoray.compiler._backend_lookup
- autoray.compiler._compiler_lookup
- class autoray.compiler.AutoCompiled(fn, backend=None, compiler_opts=None)[source]

  Just-in-time compile an `autoray.do`-using function. See the main wrapper `autojit`.
- autoray.compiler.autojit(fn=None, *, backend=None, compiler_opts=None)[source]

  Just-in-time compile an `autoray` function, automatically choosing the backend based on the input arrays, or via keyword argument.

  The backend used to do the compilation can be set in three ways:

  1. Automatically, based on the arrays the function is called with, i.e. `cfn(*torch_arrays)` will use `torch.jit.trace`.
  2. In this wrapper, e.g. `@autojit(backend='jax')`, to provide a specific default instead.
  3. When you call the function, e.g. `cfn(*arrays, backend='torch')`, to override on a per-call basis.

  If the arrays supplied are of a different backend type to the compiler, then the returned array will also be converted back, i.e. `cfn(*numpy_arrays, backend='tensorflow')` will return a `numpy` array.

  The `'python'` backend simply extracts and unravels all the `do` calls into a code object using `compile`, which is then run with `exec`. This makes use of shared intermediates and constant folding, strips away any python scaffolding, and is compatible with any library, but the resulting function is not 'low-level' in the same way as the other backends.

  - Parameters:
    - fn (callable) – The autoray function to compile.
    - backend ({None, 'python', 'jax', 'torch', 'tensorflow'}, optional) – If set, use this as the default backend.
    - compiler_opts (dict[dict], optional) – Dict of dicts where you can supply options for each compiler backend separately, e.g. `@autojit(compiler_opts={'tensorflow': {'jit_compile': True}})`.
  - Returns:
    - cfn – The function with auto compilation.
  - Return type:
    - callable