taichi.lang

Subpackages

Submodules

Package Contents

Classes

ScalarNdarray

Taichi ndarray with scalar elements.

GroupedNDRange

ndrange

AnyArray

Class for arbitrary arrays in Python AST.

AnyArrayAccess

Class for first-level access to AnyArray with Vector/Matrix elements in Python AST.

Layout

Layout of a Taichi field or ndarray.

Expr

A Python-side Expr wrapper, whose member variable ptr is an instance of C++ Expr class. A C++ Expr object contains member variable expr which holds an instance of C++ Expression class.

Field

Taichi field with SNode implementation.

ScalarField

Taichi scalar field with SNode implementation.

SparseMatrixProxy

Matrix

The matrix class.

MatrixField

Taichi matrix field with SNode implementation.

Mesh

MeshElementFieldProxy

SNode

A Python-side SNode wrapper.

SourceBuilder

Struct

The Struct type class.

StructField

Taichi struct field with SNode implementation.

TapeImpl

KernelProfiler

Kernel profiler of Taichi.

CuptiMetric

A class to add CUPTI metric for KernelProfiler.

FieldsBuilder

A builder that constructs a SNodeTree instance.

Functions

locale_encode(path)

make_expr_group(*exprs)

axes(*x: Iterable[int])

Defines a list of axes to be used by a field.

begin_frontend_if(cond)

begin_frontend_struct_for(group, loop_range)

call_internal(name, *args)

current_cfg()

deactivate_all_snodes()

Recursively deactivate all SNodes.

expr_init(rhs)

expr_init_func(rhs)

expr_init_list(xs, expected)

field

get_runtime()

grouped(x)

Groups a list of independent loop indices into a Vector().

insert_expr_stmt_if_ti_func(func, *args, **kwargs)

This method is used only for real functions. It inserts a FrontendExprStmt into the C++ AST to hold the function call if func is a Taichi function.

ndarray(dtype, shape)

Defines a Taichi ndarray with scalar elements.

one(x)

Fill the input field with one.

static(x, *xs)

Evaluates a Taichi-scope expression at compile time.

static_assert(cond, msg=None)

static_print(*args, __p=print, **kwargs)

stop_grad(x)

subscript(value, *_indices, skip_reordered=False)

ti_assert(cond, msg, extra_args)

ti_float(_var)

ti_format(*args, **kwargs)

ti_int(_var)

ti_print(*_vars, sep=' ', end='\n')

zero(x)

Fill the input field with zero.

data_oriented(cls)

Marks a class as Taichi compatible.

func(fn)

Marks a function as callable in Taichi-scope.

kernel(fn)

Marks a function as a Taichi kernel.

pyfunc(fn)

Marks a function as callable in both Taichi and Python scopes.

Vector(n, dt=None, **kwargs)

Construct a Vector instance i.e. 1-D Matrix.

TetMesh()

TriMesh()

cook_dtype(dtype)

is_taichi_class(rhs)

taichi_scope(func)

stack_info()

is_taichi_expr(a)

wrap_if_not_expr(a)

unary(foo)

binary(foo)

ternary(foo)

writeback_binary(foo)

cast(obj, dtype)

bit_cast(obj, dtype)

neg(a)

The negate function.

sin(a)

The sine function.

cos(a)

The cosine function.

asin(a)

The inverse function of sine.

acos(a)

The inverse function of cosine.

sqrt(a)

The square root function.

rsqrt(a)

The reciprocal of the square root function.

round(a)

The round function.

floor(a)

The floor function.

ceil(a)

The ceil function.

tan(a)

The tangent function.

tanh(a)

The hyperbolic tangent function.

exp(a)

The exp function.

log(a)

The natural logarithm function.

abs(a)

The absolute value function.

bit_not(a)

The bit not function.

logical_not(a)

The logical not function.

random(dtype=float)

The random function.

add(a, b)

The add function.

sub(a, b)

The sub function.

mul(a, b)

The multiply function.

mod(a, b)

The remainder function.

pow(a, b)

The power function.

floordiv(a, b)

The floor division function.

truediv(a, b)

True division function.

max(a, b)

The maximum function.

min(a, b)

The minimum function.

atan2(a, b)

The inverse of the tangent function.

raw_div(a, b)

Raw_div function.

raw_mod(a, b)

Raw_mod function. Both a and b can be float.

cmp_lt(a, b)

Compare two values (less than)

cmp_le(a, b)

Compare two values (less than or equal to)

cmp_gt(a, b)

Compare two values (greater than)

cmp_ge(a, b)

Compare two values (greater than or equal to)

cmp_eq(a, b)

Compare two values (equal to)

cmp_ne(a, b)

Compare two values (not equal to)

bit_or(a, b)

Computes bitwise-or

bit_and(a, b)

Compute bitwise-and

bit_xor(a, b)

Compute bitwise-xor

bit_shl(a, b)

Compute bitwise shift left

bit_sar(a, b)

Compute bitwise shift right

bit_shr(a, b)

Compute bitwise shift right (in taichi scope)

select(cond, a, b)

atomic_add(a, b)

atomic_sub(a, b)

atomic_min(a, b)

atomic_max(a, b)

atomic_and(a, b)

atomic_or(a, b)

atomic_xor(a, b)

assign(a, b)

ti_max(*args)

ti_min(*args)

ti_any(a)

ti_all(a)

async_flush()

sync()

activate(l, indices)

append(l, indices, val)

deactivate(l, indices)

get_addr(f, indices)

Query the memory address (on CUDA/x64) of field f at index indices.

is_active(l, indices)

length(l, indices)

rescale_index(a, b, I)

Rescales the index 'I' of field (or SNode) 'a' to match the shape of SNode 'b'

parallel_sort(keys, values=None)

cook_dtype(dtype)

has_clangpp()

has_pytorch()

Whether has pytorch in the current Python environment.

is_taichi_class(rhs)

python_scope(func)

taichi_scope(func)

to_numpy_type(dt)

Convert taichi data type to its counterpart in numpy.

to_pytorch_type(dt)

Convert taichi data type to its counterpart in torch.

to_taichi_type(dt)

Convert numpy or torch data type to its counterpart in taichi.

get_default_kernel_profiler()

We have only one KernelProfiler instance (i.e. _ti_kernel_profiler) for now.

get_predefined_cupti_metrics(name='')

set_gdb_trigger(on=True)

warning(msg, warning_type=UserWarning, stacklevel=1)

Print a warning message.

ext_arr()

Type annotation for external arrays.

print_kernel_profile_info(mode='count')

Print the profiling results of Taichi kernels.

query_kernel_profile_info(name)

Query kernel elapsed time (min, avg, max) on devices using the kernel name.

clear_kernel_profile_info()

Clear all KernelProfiler records.

kernel_profiler_total_time()

Get elapsed time of all kernels recorded in KernelProfiler.

set_kernel_profiler_toolkit(toolkit_name='default')

Set the toolkit used by KernelProfiler.

set_kernel_profile_metrics(metric_list=default_cupti_metrics)

Set metrics that will be collected by the CUPTI toolkit.

collect_kernel_profile_metrics(metric_list=default_cupti_metrics)

Set temporary metrics that will be collected by the CUPTI toolkit within this context.

print_memory_profile_info()

Memory profiling tool for LLVM backends with full sparse support.

is_extension_supported(arch, ext)

Checks whether an extension is supported on an arch.

reset()

Resets Taichi to its initial state.

prepare_sandbox()

Returns a temporary directory, which will be automatically deleted on exit.

check_version()

try_check_version()

init(arch=None, default_fp=None, default_ip=None, _test_mode=False, enable_fallback=True, **kwargs)

Initializes the Taichi runtime.

no_activate(*args)

block_local(*args)

Hints Taichi to cache the fields and to enable the BLS optimization.

mesh_local(*args)

cache_read_only(*args)

assume_in_range(val, base, low, high)

loop_unique(val, covers=None)

Tape(loss, clear_gradients=True)

Return a context manager of TapeImpl.

clear_all_gradients()

Set all fields' gradients to 0.

benchmark(_func, repeat=300, args=())

benchmark_plot(fn=None, cases=None, columns=None, column_titles=None, archs=None, title=None, bars='sync_vs_async', bar_width=0.4, bar_distance=0, left_margin=0, size=(12, 8))

stat_write(key, value)

is_arch_supported(arch, use_gles=False)

Checks whether an arch is supported on the machine.

adaptive_arch_select(arch, enable_fallback, use_gles)

get_host_arch_list()

Attributes

root

Root of the declared Taichi fields (see taichi.lang.impl.field()).

unary_ops

binary_ops

ternary_ops

writeback_binary_ops

logical_or

logical_and

quant

type_factory

default_cupti_metrics

any_arr

Alias for ArgAnyArray.

template

Alias for Template.

f16

f32

Alias for float32

f64

Alias for float64

i32

Alias for int32

i64

Alias for int64

integer_types

u32

Alias for uint32

u64

Alias for uint64

runtime

i

j

k

l

ij

ik

il

jk

jl

kl

ijk

ijl

ikl

jkl

ijkl

cfg

x86_64

The x64 CPU backend.

x64

The X64 CPU backend.

arm64

The ARM CPU backend.

cuda

The CUDA backend.

metal

The Apple Metal backend.

opengl

The OpenGL backend. OpenGL 4.3 required.

cc

wasm

The WebAssembly backend.

vulkan

The Vulkan backend.

dx11

The DX11 backend.

gpu

A list of GPU backends supported on the current system.

cpu

A list of CPU backends supported on the current system.

timeline_clear

timeline_save

type_factory_

extension

parallelize

serialize

vectorize

bit_vectorize

block_dim

global_thread_idx

mesh_patch_idx

taichi.lang.locale_encode(path)
class taichi.lang.ScalarNdarray(dtype, arr_shape)

Bases: Ndarray

Taichi ndarray with scalar elements.

Parameters
  • dtype (DataType) – Data type of each value.

  • shape (Tuple[int]) – Shape of the ndarray.

property element_shape(self)

Gets ndarray element shape.

Returns

Ndarray element shape.

Return type

Tuple[Int]

to_numpy(self)
from_numpy(self, arr)
fill_by_kernel(self, val)

Fills ndarray with a specific scalar value using a ti.kernel.

Parameters

val (Union[int, float]) – Value to fill.

class taichi.lang.GroupedNDRange(r)
class taichi.lang.ndrange(*args)
grouped(self)
class taichi.lang.AnyArray(ptr, element_shape, layout)

Class for arbitrary arrays in Python AST.

Parameters
  • ptr (taichi_core.Expr) – A taichi_core.Expr wrapping a taichi_core.ExternalTensorExpression.

  • element_shape (Tuple[Int]) – () if scalar elements (default), (n) if vector elements, and (n, m) if matrix elements.

  • layout (Layout) – Memory layout.

property shape(self)

A list containing sizes for each dimension. Note that element shape will be excluded.

Returns

The result list.

Return type

List[Int]

loop_range(self)

Gets the corresponding taichi_core.Expr to serve as loop range.

This is not in use now because struct fors on AnyArrays are not supported yet.

Returns

See above.

Return type

taichi_core.Expr

class taichi.lang.AnyArrayAccess(arr, indices_first)

Class for first-level access to AnyArray with Vector/Matrix elements in Python AST.

Parameters
  • arr (AnyArray) – See above.

  • indices_first (Tuple[Int]) – Indices of first-level access.

subscript(self, i, j)
class taichi.lang.Layout

Bases: enum.Enum

Layout of a Taichi field or ndarray.

Currently, AOS (array of structures) and SOA (structure of arrays) are supported.

AOS = 1
SOA = 2
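
Example (a minimal sketch; it assumes ti.Vector.field accepts a layout keyword, as Matrix.field() documented below does):

>>> # AOS: the components of one vector element are stored contiguously.
>>> pos_aos = ti.Vector.field(3, ti.f32, shape=8, layout=ti.Layout.AOS)
>>> # SOA: each component is stored in its own contiguous array.
>>> pos_soa = ti.Vector.field(3, ti.f32, shape=8, layout=ti.Layout.SOA)
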
exception taichi.lang.InvalidOperationError

Bases: Exception

Common base class for all non-exit exceptions.

exception taichi.lang.TaichiCompilationError

Bases: Exception

Common base class for all non-exit exceptions.

exception taichi.lang.TaichiNameError

Bases: TaichiCompilationError, NameError

Common base class for all non-exit exceptions.

exception taichi.lang.TaichiSyntaxError

Bases: TaichiCompilationError, SyntaxError

Common base class for all non-exit exceptions.

exception taichi.lang.TaichiTypeError

Bases: TaichiCompilationError, TypeError

Common base class for all non-exit exceptions.

class taichi.lang.Expr(*args, tb=None)

Bases: taichi.lang.common_ops.TaichiOperations

A Python-side Expr wrapper, whose member variable ptr is an instance of C++ Expr class. A C++ Expr object contains member variable expr which holds an instance of C++ Expression class.

taichi.lang.make_expr_group(*exprs)
class taichi.lang.Field(_vars)

Taichi field with SNode implementation.

A field is constructed by a list of field members. For example, a scalar field has 1 field member, while a 3x3 matrix field has 9 field members. A field member is a Python Expr wrapping a C++ GlobalVariableExpression. A C++ GlobalVariableExpression wraps the corresponding SNode.

Parameters

vars (List[Expr]) – Field members.

property snode(self)

Gets representative SNode for info purposes.

Returns

Representative SNode (SNode of first field member).

Return type

SNode

property shape(self)

Gets field shape.

Returns

Field shape.

Return type

Tuple[Int]

property dtype(self)

Gets data type of each individual value.

Returns

Data type of each individual value.

Return type

DataType

property name(self)

Gets field name.

Returns

Field name.

Return type

str

parent(self, n=1)

Gets an ancestor of the representative SNode in the SNode tree.

Parameters

n (int) – the number of levels going up from the representative SNode.

Returns

The n-th parent of the representative SNode.

Return type

SNode

get_field_members(self)

Gets field members.

Returns

Field members.

Return type

List[Expr]

loop_range(self)

Gets representative field member for loop range info.

Returns

Representative (first) field member.

Return type

taichi_core.Expr

set_grad(self, grad)

Sets corresponding gradient field.

Parameters

grad (Field) – Corresponding gradient field.

abstract fill(self, val)

Fills self with a specific value.

Parameters

val (Union[int, float]) – Value to fill.

abstract to_numpy(self, dtype=None)

Converts self to a numpy array.

Parameters

dtype (DataType, optional) – The desired data type of returned numpy array.

Returns

The result numpy array.

Return type

numpy.ndarray

abstract to_torch(self, device=None)

Converts self to a torch tensor.

Parameters

device (torch.device, optional) – The desired device of returned tensor.

Returns

The result torch tensor.

Return type

torch.tensor

abstract from_numpy(self, arr)

Loads all elements from a numpy array.

The shape of the numpy array needs to be the same as self.

Parameters

arr (numpy.ndarray) – The source numpy array.

from_torch(self, arr)

Loads all elements from a torch tensor.

The shape of the torch tensor needs to be the same as self.

Parameters

arr (torch.tensor) – The source torch tensor.

copy_from(self, other)

Copies all elements from another field.

The shape of the other field needs to be the same as self.

Parameters

other (Field) – The source field.

pad_key(self, key)
initialize_host_accessors(self)
host_access(self, key)
class taichi.lang.ScalarField(var)

Bases: Field

Taichi scalar field with SNode implementation.

Parameters

var (Expr) – Field member.

fill(self, val)

Fills self with a specific value.

Parameters

val (Union[int, float]) – Value to fill.

to_numpy(self, dtype=None)

Converts self to a numpy array.

Parameters

dtype (DataType, optional) – The desired data type of returned numpy array.

Returns

The result numpy array.

Return type

numpy.ndarray

to_torch(self, device=None)

Converts self to a torch tensor.

Parameters

device (torch.device, optional) – The desired device of returned tensor.

Returns

The result torch tensor.

Return type

torch.tensor

from_numpy(self, arr)

Loads all elements from a numpy array.

The shape of the numpy array needs to be the same as self.

Parameters

arr (numpy.ndarray) – The source numpy array.

taichi.lang.axes(*x: Iterable[int])

Defines a list of axes to be used by a field.

Parameters

*x – A list of axes to be activated

Note that Taichi already provides a set of commonly used axes. For example, ti.ij is just axes(0, 1) under the hood.
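
Example (a minimal sketch using the ti.root API documented below; the two placements are equivalent):

>>> x = ti.field(ti.f32)
>>> ti.root.dense(ti.axes(0, 1), (16, 8)).place(x)
>>> # is the same as
>>> y = ti.field(ti.f32)
>>> ti.root.dense(ti.ij, (16, 8)).place(y)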

taichi.lang.begin_frontend_if(cond)
taichi.lang.begin_frontend_struct_for(group, loop_range)
taichi.lang.call_internal(name, *args)
taichi.lang.current_cfg()
taichi.lang.deactivate_all_snodes()

Recursively deactivate all SNodes.

taichi.lang.expr_init(rhs)
taichi.lang.expr_init_func(rhs)
taichi.lang.expr_init_list(xs, expected)
taichi.lang.field(dtype, shape=None, name='', offset=None, needs_grad=False)

Defines a Taichi field.

A Taichi field can be viewed as an abstract N-dimensional array, hiding away the complexity of how its underlying SNodes are actually defined. The data in a Taichi field can be directly accessed by a Taichi kernel().

See also https://docs.taichi.graphics/lang/articles/basic/field

Parameters
  • dtype (DataType) – data type of the field.

  • shape (Union[int, tuple[int]], optional) – shape of the field

  • name (str, optional) – name of the field

  • offset (Union[int, tuple[int]], optional) – offset of the field domain

  • needs_grad (bool, optional) – whether this field participates in autodiff and thus needs an adjoint field to store the gradients.

Example

The code below shows how a Taichi field can be declared and defined:

>>> x1 = ti.field(ti.f32, shape=(16, 8))
>>>
>>> # Equivalently
>>> x2 = ti.field(ti.f32)
>>> ti.root.dense(ti.ij, shape=(16, 8)).place(x2)
taichi.lang.get_runtime()
taichi.lang.grouped(x)

Groups a list of independent loop indices into a Vector().

Parameters

x (Any) – performs the grouping only if x is an ndrange.

Example:

>>> for I in ti.grouped(ndrange(8, 16)):
>>>     print(I[0] + I[1])
taichi.lang.insert_expr_stmt_if_ti_func(func, *args, **kwargs)

This method is used only for real functions. It inserts a FrontendExprStmt into the C++ AST to hold the function call if func is a Taichi function.

Parameters
  • func – The function to be called.

  • args – The arguments of the function call.

  • kwargs – The keyword arguments of the function call.

Returns

The return value of the function call if it’s a non-Taichi function. Returns None if it’s a Taichi function.

taichi.lang.ndarray(dtype, shape)

Defines a Taichi ndarray with scalar elements.

Parameters
  • dtype (DataType) – Data type of each value.

  • shape (Union[int, tuple[int]]) – Shape of the ndarray.

Example

The code below shows how a Taichi ndarray with scalar elements can be declared and defined:

>>> x = ti.ndarray(ti.f32, shape=(16, 8))
taichi.lang.one(x)

Fill the input field with one.

Parameters

x (DataType) – The input field to fill.

Returns

The output field, which keeps the shape but is filled with ones.

Return type

DataType

taichi.lang.root

Root of the declared Taichi fields (see taichi.lang.impl.field()).

See also https://docs.taichi.graphics/lang/articles/advanced/layout

Example:

>>> x = ti.field(ti.f32)
>>> ti.root.pointer(ti.ij, 4).dense(ti.ij, 8).place(x)
taichi.lang.static(x, *xs)

Evaluates a Taichi-scope expression at compile time.

static() is what enables the so-called metaprogramming in Taichi. It is in many ways similar to constexpr in C++11.

See also https://docs.taichi.graphics/lang/articles/advanced/meta.

Parameters
  • x (Any) – an expression to be evaluated

  • *xs (Any) – for Python-ish swapping assignment

Example

The most common usage of static() is for compile-time evaluation:

>>> @ti.kernel
>>> def run():
>>>     if ti.static(FOO):
>>>         do_a()
>>>     else:
>>>         do_b()

Depending on the value of FOO, run() will be directly compiled into either do_a() or do_b(). Thus there won’t be a runtime condition check.

Another common usage is for compile-time loop unrolling:

>>> @ti.kernel
>>> def run():
>>>     for i in ti.static(range(3)):
>>>         print(i)
>>>
>>> # The above is equivalent to:
>>> @ti.kernel
>>> def run():
>>>     print(0)
>>>     print(1)
>>>     print(2)
taichi.lang.static_assert(cond, msg=None)
taichi.lang.static_print(*args, __p=print, **kwargs)
taichi.lang.stop_grad(x)
taichi.lang.subscript(value, *_indices, skip_reordered=False)
taichi.lang.ti_assert(cond, msg, extra_args)
taichi.lang.ti_float(_var)
taichi.lang.ti_format(*args, **kwargs)
taichi.lang.ti_int(_var)
taichi.lang.ti_print(*_vars, sep=' ', end='\n')
taichi.lang.zero(x)

Fill the input field with zero.

Parameters

x (DataType) – The input field to fill.

Returns

The output field, which keeps the shape but is filled with zeros.

Return type

DataType

class taichi.lang.SparseMatrixProxy(ptr)
subscript(self, i, j)
exception taichi.lang.KernelArgError(pos, needed, provided)

Bases: Exception

Common base class for all non-exit exceptions.

exception taichi.lang.KernelDefError

Bases: Exception

Common base class for all non-exit exceptions.

taichi.lang.data_oriented(cls)

Marks a class as Taichi compatible.

To allow for modularized code, Taichi provides this decorator so that Taichi kernels can be defined inside a class.

See also https://docs.taichi.graphics/lang/articles/advanced/odop

Example:

>>> @ti.data_oriented
>>> class TiArray:
>>>     def __init__(self, n):
>>>         self.x = ti.field(ti.f32, shape=n)
>>>
>>>     @ti.kernel
>>>     def inc(self):
>>>         for i in self.x:
>>>             self.x[i] += 1.0
>>>
>>> a = TiArray(32)
>>> a.inc()
Parameters

cls (Class) – the class to be decorated

Returns

The decorated class.

taichi.lang.func(fn)

Marks a function as callable in Taichi-scope.

This decorator transforms a Python function into a Taichi one. Taichi will JIT compile it into native instructions.

Parameters

fn (Callable) – The Python function to be decorated

Returns

The decorated function

Return type

Callable

Example:

>>> @ti.func
>>> def foo(x):
>>>     return x + 2
>>>
>>> @ti.kernel
>>> def run():
>>>     print(foo(40))  # 42
taichi.lang.kernel(fn)

Marks a function as a Taichi kernel.

A Taichi kernel is a function written in Python, and gets JIT compiled by Taichi into native CPU/GPU instructions (e.g. a series of CUDA kernels). The top-level for loops are automatically parallelized, and distributed to either a CPU thread pool or massively parallel GPUs.

Kernel’s gradient kernel would be generated automatically by the AutoDiff system.

See also https://docs.taichi.graphics/lang/articles/basic/syntax#kernels.

Parameters

fn (Callable) – the Python function to be decorated

Returns

The decorated function

Return type

Callable

Example:

>>> x = ti.field(ti.i32, shape=(4, 8))
>>>
>>> @ti.kernel
>>> def run():
>>>     # Assigns all the elements of `x` in parallel.
>>>     for i in x:
>>>         x[i] = i
taichi.lang.pyfunc(fn)

Marks a function as callable in both Taichi and Python scopes.

When called inside the Taichi scope, Taichi will JIT compile it into native instructions. Otherwise it will be invoked directly as a Python function.

See also func().

Parameters

fn (Callable) – The Python function to be decorated

Returns

The decorated function

Return type

Callable
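
Example (a minimal sketch; the function body is purely illustrative):

>>> @ti.pyfunc
>>> def add_two(x):
>>>     return x + 2
>>>
>>> print(add_two(40))      # called directly in Python scope, prints 42
>>>
>>> @ti.kernel
>>> def run():
>>>     print(add_two(40))  # JIT compiled when called in Taichi scope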

class taichi.lang.Matrix(n=1, m=1, dt=None, suppress_warning=False)

Bases: taichi.lang.common_ops.TaichiOperations

The matrix class.

Parameters
  • n (Union[int, list, tuple, np.ndarray]) – the first dimension of a matrix.

  • m (int) – the second dimension of a matrix.

  • dt (DataType) – the element data type.

is_taichi_class = True
element_wise_binary(self, foo, other)
broadcast_copy(self, other)
element_wise_ternary(self, foo, other, extra)
element_wise_writeback_binary(self, foo, other)
element_wise_unary(self, foo)
linearize_entry_id(self, *args)
set_entry(self, i, j, e)
subscript(self, *indices)
property x(self)

Get the first element of a matrix.

property y(self)

Get the second element of a matrix.

property z(self)

Get the third element of a matrix.

property w(self)

Get the fourth element of a matrix.

property value(self)
to_list(self)
set_entries(self, value)
cast(self, dtype)

Cast the matrix element data type.

Parameters

dtype (DataType) – the data type of the casted matrix element.

Returns

A new matrix whose elements are cast to dtype.

trace(self)

The sum of the matrix's diagonal elements.

Returns

The sum of the matrix's diagonal elements.

inverse(self)

The inverse of a matrix.

Note

The matrix dimension should be less than or equal to 4.

Returns

The inverse of a matrix.

Raises

Exception – Inversions of matrices with sizes >= 5 are not supported.

normalized(self, eps=0)

Normalize a vector.

Parameters

eps (Number) – a safe-guard value for sqrt, usually 0.

Examples:

a = ti.Vector([3, 4])
a.normalized() # [3 / 5, 4 / 5]
# `a.normalized()` is equivalent to `a / a.norm()`.

Note

Only vector normalization is supported.

transpose(self)

Get the transpose of a matrix.

Returns

The transpose of the matrix.

determinant(a)

Get the determinant of a matrix.

Note

The matrix dimension should be less than or equal to 4.

Returns

The determinant of a matrix.

Raises

Exception – Determinants of matrices with sizes >= 5 are not supported.

static diag(dim, val)

Construct a diagonal square matrix.

Parameters
  • dim (int) – the dimension of a square matrix.

  • val (TypeVar) – the diagonal element value.

Returns

The constructed diagonal square matrix.
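
Example (a minimal sketch of diag(); the printed values are illustrative):

>>> m = ti.Matrix.diag(3, 1.0)
>>> # [[1.0, 0.0, 0.0],
>>> #  [0.0, 1.0, 0.0],
>>> #  [0.0, 0.0, 1.0]]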

sum(self)

Return the sum of all elements.

norm(self, eps=0)

Return the square root of the sum of the absolute squares of its elements.

Parameters

eps (Number) – a safe-guard value for sqrt, usually 0.

Examples:

a = ti.Vector([3, 4])
a.norm() # sqrt(3*3 + 4*4 + 0) = 5
# `a.norm(eps)` is equivalent to `ti.sqrt(a.dot(a) + eps).`
Returns

The square root of the sum of the absolute squares of its elements.

norm_inv(self, eps=0)

Return the inverse of the matrix/vector norm. For norm: please see norm().

Parameters

eps (Number) – a safe-guard value for sqrt, usually 0.

Returns

The inverse of the matrix/vector norm.

norm_sqr(self)

Return the sum of the absolute squares of its elements.

max(self)

Return the maximum element value.

min(self)

Return the minimum element value.

any(self)

Test whether any element is not equal to zero.

Returns

True if any element is not equal to zero, False otherwise.

Return type

bool

all(self)

Test whether all elements are not equal to zero.

Returns

True if all elements are not equal to zero, False otherwise.

Return type

bool

fill(self, val)

Fills the matrix with a specific value in Taichi scope.

Parameters

val (Union[int, float]) – Value to fill.

to_numpy(self, keep_dims=False)

Converts the Matrix to a numpy array.

Parameters

keep_dims (bool, optional) – Whether to keep the dimension after conversion. When keep_dims=False, the resulting numpy array should skip the matrix dims with size 1.

Returns

The result numpy array.

Return type

numpy.ndarray

static zero(dt, n, m=None)

Construct a Matrix filled with zeros.

Parameters
  • dt (DataType) – The desired data type.

  • n (int) – The first dimension (row) of the matrix.

  • m (int, optional) – The second dimension (column) of the matrix.

Returns

A Matrix instance filled with zeros.

Return type

Matrix

static one(dt, n, m=None)

Construct a Matrix filled with ones.

Parameters
  • dt (DataType) – The desired data type.

  • n (int) – The first dimension (row) of the matrix.

  • m (int, optional) – The second dimension (column) of the matrix.

Returns

A Matrix instance filled with ones.

Return type

Matrix

static unit(n, i, dt=None)

Construct a unit Vector (1-D matrix), i.e., a vector with a single entry set to one and all other entries set to zero.

Parameters
  • n (int) – The length of the vector.

  • i (int) – The index of the entry that will be filled with one.

  • dt (DataType, optional) – The desired data type.

Returns

A 1-D unit Matrix instance.

Return type

Matrix

static identity(dt, n)

Construct an identity Matrix with shape (n, n).

Parameters
  • dt (DataType) – The desired data type.

  • n (int) – The number of rows/columns.

Returns

An n x n identity Matrix instance.

Return type

Matrix
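
Example (a minimal sketch of the static constructors above; the comments describe the assumed results):

>>> ti.Matrix.zero(ti.f32, 2, 3)   # a 2x3 matrix filled with zeros
>>> ti.Matrix.one(ti.i32, 3)       # a 3-entry vector of ones (m omitted)
>>> ti.Matrix.unit(3, 1)           # [0, 1, 0]
>>> ti.Matrix.identity(ti.f32, 3)  # the 3x3 identity matrix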

static rotation2d(alpha)
classmethod field(cls, n, m, dtype, shape=None, name='', offset=None, needs_grad=False, layout=Layout.AOS)

Construct a data container to hold all elements of the Matrix.

Parameters
  • n (int) – The desired number of rows of the Matrix.

  • m (int) – The desired number of columns of the Matrix.

  • dtype (DataType, optional) – The desired data type of the Matrix.

  • shape (Union[int, tuple of int], optional) – The desired shape of the Matrix.

  • name (string, optional) – The custom name of the field.

  • offset (Union[int, tuple of int], optional) – The coordinate offset of all elements in a field.

  • needs_grad (bool, optional) – Whether the Matrix need gradients.

  • layout (Layout, optional) – The field layout, i.e., Array Of Structure (AOS) or Structure Of Array (SOA).

Returns

A Matrix instance that serves as the data container.

Return type

Matrix

classmethod ndarray(cls, n, m, dtype, shape, layout=Layout.AOS)

Defines a Taichi ndarray with matrix elements.

Parameters
  • n (int) – Number of rows of the matrix.

  • m (int) – Number of columns of the matrix.

  • dtype (DataType) – Data type of each value.

  • shape (Union[int, tuple[int]]) – Shape of the ndarray.

  • layout (Layout, optional) – Memory layout, AOS by default.

Example

The code below shows how a Taichi ndarray with matrix elements can be declared and defined:

>>> x = ti.Matrix.ndarray(4, 5, ti.f32, shape=(16, 8))
static rows(rows)

Construct a Matrix instance by concatenating Vectors/lists row by row.

Parameters

rows (List) – A list of Vector (1-D Matrix) or a list of list.

Returns

A Matrix instance filled with the Vectors/lists row by row.

Return type

Matrix

static cols(cols)

Construct a Matrix instance by concatenating Vectors/lists column by column.

Parameters

cols (List) – A list of Vector (1-D Matrix) or a list of list.

Returns

A Matrix instance filled with the Vectors/lists column by column.

Return type

Matrix
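
Example (a minimal sketch of rows() and cols()):

>>> a = ti.Vector([1, 2])
>>> b = ti.Vector([3, 4])
>>> ti.Matrix.rows([a, b])  # [[1, 2], [3, 4]]
>>> ti.Matrix.cols([a, b])  # [[1, 3], [2, 4]]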

dot(self, other)

Perform the dot product with the input Vector (1-D Matrix).

Parameters

other (Matrix) – The input Vector (1-D Matrix) to perform the dot product.

Returns

The dot product result (scalar) of the two Vectors.

Return type

DataType

cross(self, other)

Perform the cross product with the input Vector (1-D Matrix).

Parameters

other (Matrix) – The input Vector (1-D Matrix) to perform the cross product.

Returns

The cross product result (1-D Matrix) of the two Vectors.

Return type

Matrix

outer_product(self, other)

Perform the outer product with the input Vector (1-D Matrix).

Parameters

other (Matrix) – The input Vector (1-D Matrix) to perform the outer product.

Returns

The outer product result (Matrix) of the two Vectors.

Return type

Matrix
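
Example (a minimal sketch of dot(), cross() and outer_product() on 3-D vectors):

>>> u = ti.Vector([1.0, 0.0, 0.0])
>>> v = ti.Vector([0.0, 1.0, 0.0])
>>> u.dot(v)            # 0.0
>>> u.cross(v)          # [0.0, 0.0, 1.0]
>>> u.outer_product(v)  # a 3x3 matrix whose (i, j) entry is u[i] * v[j]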

class taichi.lang.MatrixField(_vars, n, m)

Bases: taichi.lang.field.Field

Taichi matrix field with SNode implementation.

Parameters
  • vars (List[Expr]) – Field members.

  • n (Int) – Number of rows.

  • m (Int) – Number of columns.

get_scalar_field(self, *indices)

Creates a ScalarField using a specific field member. Only used for quant.

Parameters

indices (Tuple[Int]) – Specified indices of the field member.

Returns

The result ScalarField.

Return type

ScalarField

calc_dynamic_index_stride(self)
fill(self, val)

Fills self with specific values.

Parameters

val (Union[Number, List, Tuple, Matrix]) – Values to fill, which should have dimension consistent with self.

to_numpy(self, keep_dims=False, dtype=None)

Converts the field instance to a NumPy array.

Parameters
  • keep_dims (bool, optional) – Whether to keep the dimension after conversion. When keep_dims=True, on an n-D matrix field, the numpy array always has n+2 dims, even for 1x1, 1xn, nx1 matrix fields. When keep_dims=False, the resulting numpy array should skip the matrix dims with size 1. For example, a 4x1 or 1x4 matrix field with 5x6x7 elements results in an array of shape 5x6x7x4.

  • dtype (DataType, optional) – The desired data type of returned numpy array.

Returns

The result NumPy array.

Return type

numpy.ndarray
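
Example (a minimal sketch of the keep_dims behavior described above; the shapes are illustrative):

>>> m = ti.Matrix.field(1, 4, ti.f32, shape=(5, 6, 7))
>>> m.to_numpy().shape                # (5, 6, 7, 4): size-1 matrix dims are skipped
>>> m.to_numpy(keep_dims=True).shape  # (5, 6, 7, 1, 4): matrix dims are always kept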

to_torch(self, device=None, keep_dims=False)

Converts the field instance to a PyTorch tensor.

Parameters
  • device (torch.device, optional) – The desired device of returned tensor.

  • keep_dims (bool, optional) – Whether to keep the dimension after conversion. See to_numpy() for more detailed explanation.

Returns

The result torch tensor.

Return type

torch.tensor

from_numpy(self, arr)
taichi.lang.Vector(n, dt=None, **kwargs)

Construct a Vector instance i.e. 1-D Matrix.

Parameters
  • n (Union[int, list, tuple, np.ndarray]) – The desired number of entries of the Vector.

  • dt (DataType, optional) – The desired data type of the Vector.

Returns

A Vector instance (1-D Matrix).

Return type

Matrix

class taichi.lang.Mesh
static Tet()
static Tri()
static load_meta(filename)
class taichi.lang.MeshElementFieldProxy(mesh: MeshInstance, element_type: MeshElementType, entry_expr: taichi.lang.impl.Expr)
property ptr(self)
property id(self)
taichi.lang.TetMesh()
taichi.lang.TriMesh()
exception taichi.lang.TaichiSyntaxError

Bases: TaichiCompilationError, SyntaxError

Common base class for all non-exit exceptions.

taichi.lang.cook_dtype(dtype)
taichi.lang.is_taichi_class(rhs)
taichi.lang.taichi_scope(func)
taichi.lang.unary_ops = []
taichi.lang.stack_info()
taichi.lang.is_taichi_expr(a)
taichi.lang.wrap_if_not_expr(a)
taichi.lang.unary(foo)
taichi.lang.binary_ops = []
taichi.lang.binary(foo)
taichi.lang.ternary_ops = []
taichi.lang.ternary(foo)
taichi.lang.writeback_binary_ops = []
taichi.lang.writeback_binary(foo)
taichi.lang.cast(obj, dtype)
taichi.lang.bit_cast(obj, dtype)
taichi.lang.neg(a)

The negate function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

The negative value of a.

taichi.lang.sin(a)

The sine function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

Sine of a.

taichi.lang.cos(a)

The cosine function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

Cosine of a.

taichi.lang.asin(a)

The inverse function of sine.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix with elements in [-1,1].

Returns

The inverse sine of a.

taichi.lang.acos(a)

The inverse function of cosine.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix with elements in [-1,1].

Returns

The inverse cosine of a.

taichi.lang.sqrt(a)

The square root function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix with elements not less than zero.

Returns

x such that x>=0 and x^2=a.

taichi.lang.rsqrt(a)

The reciprocal of the square root function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

The reciprocal of sqrt(a).

taichi.lang.round(a)

The round function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

The nearest integer of a.

taichi.lang.floor(a)

The floor function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

The greatest integer less than or equal to a.

taichi.lang.ceil(a)

The ceil function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

The least integer greater than or equal to a.

taichi.lang.tan(a)

The tangent function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

Tangent of a.

taichi.lang.tanh(a)

The hyperbolic tangent function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

(e**a - e**(-a)) / (e**a + e**(-a)).

taichi.lang.exp(a)

The exp function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

e raised to the power of a.

taichi.lang.log(a)

The natural logarithm function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix with elements greater than zero.

Returns

The natural logarithm of a.

taichi.lang.abs(a)

The absolute value function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

The absolute value of a.

taichi.lang.bit_not(a)

The bit not function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

Bitwise not of a.

taichi.lang.logical_not(a)

The logical not function.

Parameters

a (Union[Expr, Matrix]) – A number or a matrix.

Returns

1 iff a=0, otherwise 0.

taichi.lang.random(dtype=float)

The random function.

Parameters

dtype (DataType) – Type of the random variable.

Returns

A random variable whose type is dtype.
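
Example (a minimal sketch; for floating-point types the value is assumed to be uniformly distributed in [0, 1)):

>>> x = ti.field(ti.f32, shape=8)
>>>
>>> @ti.kernel
>>> def fill_random():
>>>     for i in x:
>>>         x[i] = ti.random(ti.f32)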

taichi.lang.add(a, b)

The add function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix.

Returns

sum of a and b.

taichi.lang.sub(a, b)

The sub function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix.

Returns

a subtract b.

taichi.lang.mul(a, b)

The multiply function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix.

Returns

a multiplied by b.

taichi.lang.mod(a, b)

The remainder function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix with elements not equal to zero.

Returns

The remainder of a divided by b.

taichi.lang.pow(a, b)

The power function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix.

Returns

a raised to the power of b.

taichi.lang.floordiv(a, b)

The floor division function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix with elements not equal to zero.

Returns

The floor of a divided by b.

taichi.lang.truediv(a, b)

True division function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix with elements not equal to zero.

Returns

The true (floating-point) quotient of a divided by b.

taichi.lang.max(a, b)

The maximum function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix.

Returns

The maximum of a and b.

taichi.lang.min(a, b)

The minimum function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix.

Returns

The minimum of a and b.

taichi.lang.atan2(a, b)

The inverse of the tangent function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix with elements not equal to zero.

Returns

The arc tangent of a/b (the two-argument inverse tangent).

taichi.lang.raw_div(a, b)

Raw_div function.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix with elements not equal to zero.

Returns

If both a and b are integers, returns a // b; otherwise returns a / b.
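
Example (a minimal sketch of raw_div() and raw_mod(); the results follow the descriptions above):

>>> ti.raw_div(5, 2)      # 2, since both operands are integers
>>> ti.raw_div(5.0, 2)    # 2.5, since at least one operand is a float
>>> ti.raw_mod(5.5, 2.0)  # 1.5, raw_mod also accepts float operands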

taichi.lang.raw_mod(a, b)

Raw_mod function. Both a and b can be float.

Parameters
  • a (Union[Expr, Matrix]) – A number or a matrix.

  • b (Union[Expr, Matrix]) – A number or a matrix with elements not equal to zero.

Returns

The remainder of a divided by b.

taichi.lang.cmp_lt(a, b)

Compare two values (less than)

Parameters
Returns

True if LHS is strictly smaller than RHS, False otherwise

Return type

Union[Expr, bool]

taichi.lang.cmp_le(a, b)

Compare two values (less than or equal to)

Parameters
Returns

True if LHS is smaller than or equal to RHS, False otherwise

Return type

Union[Expr, bool]

taichi.lang.cmp_gt(a, b)

Compare two values (greater than)

Parameters
Returns

True if LHS is strictly larger than RHS, False otherwise

Return type

Union[Expr, bool]

taichi.lang.cmp_ge(a, b)

Compare two values (greater than or equal to)

Parameters
Returns

True if LHS is greater than or equal to RHS, False otherwise

Return type

bool

taichi.lang.cmp_eq(a, b)

Compare two values (equal to)

Parameters
Returns

True if LHS is equal to RHS, False otherwise.

Return type

Union[Expr, bool]

taichi.lang.cmp_ne(a, b)

Compare two values (not equal to)

Parameters
Returns

True if LHS is not equal to RHS, False otherwise

Return type

Union[Expr, bool]

taichi.lang.bit_or(a, b)

Computes bitwise-or

Parameters
Returns

LHS bitwise-or with RHS

Return type

Union[Expr, bool]

taichi.lang.bit_and(a, b)

Compute bitwise-and

Parameters
Returns

LHS bitwise-and with RHS

Return type

Union[Expr, bool]

taichi.lang.bit_xor(a, b)

Compute bitwise-xor

Parameters
Returns

LHS bitwise-xor with RHS

Return type

Union[Expr, bool]

taichi.lang.bit_shl(a, b)

Compute bitwise shift left

Parameters
Returns

LHS << RHS

Return type

Union[Expr, int]

taichi.lang.bit_sar(a, b)

Compute bitwise shift right

Parameters
Returns

LHS >> RHS

Return type

Union[Expr, int]

taichi.lang.bit_shr(a, b)

Compute bitwise shift right (in taichi scope)

Parameters
Returns

LHS >> RHS

Return type

Union[Expr, int]

taichi.lang.logical_or
taichi.lang.logical_and
taichi.lang.select(cond, a, b)
taichi.lang.atomic_add(a, b)
taichi.lang.atomic_sub(a, b)
taichi.lang.atomic_min(a, b)
taichi.lang.atomic_max(a, b)
taichi.lang.atomic_and(a, b)
taichi.lang.atomic_or(a, b)
taichi.lang.atomic_xor(a, b)
taichi.lang.assign(a, b)
taichi.lang.ti_max(*args)
taichi.lang.ti_min(*args)
taichi.lang.ti_any(a)
taichi.lang.ti_all(a)
taichi.lang.quant
taichi.lang.async_flush()
taichi.lang.sync()
class taichi.lang.SNode(ptr)

A Python-side SNode wrapper.

For more information on Taichi's SNode system, see https://docs.taichi.graphics/lang/articles/advanced/layout.

Parameters

ptr (pointer) – The C++ side SNode pointer.

dense(self, axes, dimensions)

Adds a dense SNode as a child component of self.

Parameters
  • axes (List[Axis]) – Axes to activate.

  • dimensions (Union[List[int], int]) – Shape of each axis.

Returns

The added SNode instance.

pointer(self, axes, dimensions)

Adds a pointer SNode as a child component of self.

Parameters
  • axes (List[Axis]) – Axes to activate.

  • dimensions (Union[List[int], int]) – Shape of each axis.

Returns

The added SNode instance.

static hash(axes, dimensions)

Not supported.

dynamic(self, axis, dimension, chunk_size=None)

Adds a dynamic SNode as a child component of self.

Parameters
  • axis (List[Axis]) – Axis to activate, must be 1.

  • dimension (int) – Shape of the axis.

  • chunk_size (int) – Chunk size.

Returns

The added SNode instance.
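
Example (a minimal sketch of a dynamic SNode used together with ti.append() and ti.length(); the shapes and values are illustrative):

>>> x = ti.field(ti.i32)
>>> block = ti.root.dense(ti.i, 4)
>>> pixel = block.dynamic(ti.j, 64, chunk_size=16)
>>> pixel.place(x)
>>>
>>> @ti.kernel
>>> def make_lists():
>>>     for i in range(4):
>>>         for j in range(i):
>>>             ti.append(x.parent(), i, j * j)  # grow the list under index i
>>>         print(ti.length(x.parent(), i))      # current length of that list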

bitmasked(self, axes, dimensions)

Adds a bitmasked SNode as a child component of self.

Parameters
  • axes (List[Axis]) – Axes to activate.

  • dimensions (Union[List[int], int]) – Shape of each axis.

Returns

The added SNode instance.

bit_struct(self, num_bits: int)

Adds a bit_struct SNode as a child component of self.

Parameters

num_bits – Number of bits to use.

Returns

The added SNode instance.

bit_array(self, axes, dimensions, num_bits)

Adds a bit_array SNode as a child component of self.

Parameters
  • axes (List[Axis]) – Axes to activate.

  • dimensions (Union[List[int], int]) – Shape of each axis.

  • num_bits (int) – Number of bits to use.

Returns

The added SNode instance.

place(self, *args, offset=None, shared_exponent=False)

Places a list of Taichi fields under the self container.

Parameters
  • *args (List[ti.field]) – A list of Taichi fields to place.

  • offset (Union[Number, tuple[Number]]) – Offset of the field domain.

  • shared_exponent (bool) – Only useful for quant types.

Returns

The self container.

lazy_grad(self)

Automatically place the adjoint fields following the layout of their primal fields.

Users don’t need to specify needs_grad when defining scalar/vector/matrix fields (primal fields) for autodiff. Once all the primal fields are defined, calling taichi.root.lazy_grad() automatically generates their corresponding adjoint fields (gradient fields).

To know more details about primal, adjoint fields and lazy_grad(), please see Page 4 and Page 13-14 of DiffTaichi Paper: https://arxiv.org/pdf/1910.00935.pdf
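
Example (a minimal sketch combining lazy_grad() with ti.Tape(); the field shapes and the loss are illustrative):

>>> x = ti.field(ti.f32)
>>> loss = ti.field(ti.f32)
>>> ti.root.dense(ti.i, 16).place(x)
>>> ti.root.place(loss)
>>> ti.root.lazy_grad()  # places x.grad and loss.grad following the primal layouts
>>>
>>> @ti.kernel
>>> def compute_loss():
>>>     for i in x:
>>>         loss[None] += x[i] ** 2
>>>
>>> with ti.Tape(loss):
>>>     compute_loss()
>>> # x.grad now holds d(loss)/dx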

parent(self, n=1)

Gets an ancestor of self in the SNode tree.

Parameters

n (int) – the number of levels going up from self.

Returns

The n-th parent of self.

Return type

Union[None, _Root, SNode]

path_from_root(self)

Gets the path from root to self in the SNode tree.

Returns

The list of SNodes on the path from root to self.

Return type

List[Union[_Root, SNode]]

property dtype(self)

Gets the data type of self.

Returns

The data type of self.

Return type

DataType

property id(self)

Gets the id of self.

Returns

The id of self.

Return type

int

property shape(self)

Gets the number of elements from root in each axis of self.

Returns

The number of elements from root in each axis of self.

Return type

Tuple[int]

loop_range(self)

Gets the taichi_core.Expr wrapping the taichi_core.GlobalVariableExpression corresponding to self to serve as loop range.

Returns

See above.

Return type

taichi_core.Expr

property name(self)

Gets the name of self.

Returns

The name of self.

Return type

str

property snode(self)

Gets self.

Returns

self.

Return type

SNode

property needs_grad(self)

Checks whether self has a corresponding gradient SNode.

Returns

Whether self has a corresponding gradient SNode.

Return type

bool

get_children(self)

Gets all children components of self.

Returns

All children components of self.

Return type

List[SNode]

property num_dynamically_allocated(self)
property cell_size_bytes(self)
property offset_bytes_in_parent_cell(self)
deactivate_all(self)

Recursively deactivate all children components of self.

physical_index_position(self)

Gets mappings from virtual axes to physical axes.

Returns

Mappings from virtual axes to physical axes.

Return type

Dict[int, int]

taichi.lang.activate(l, indices)
taichi.lang.append(l, indices, val)
taichi.lang.deactivate(l, indices)
taichi.lang.get_addr(f, indices)

Query the memory address (on CUDA/x64) of field f at index indices.

Currently, this function can only be called inside a taichi kernel.

Parameters
  • f (Union[ti.field, ti.Vector.field, ti.Matrix.field]) – Input taichi field for memory address query.

  • indices (Union[int, ti.Vector()]) – The specified field indices of the query.

Returns

The memory address of f[indices].

Return type

ti.u64
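
Example (a minimal sketch; remember that get_addr() can only be called inside a Taichi kernel):

>>> x = ti.field(ti.f32, shape=8)
>>>
>>> @ti.kernel
>>> def print_addresses():
>>>     for i in x:
>>>         print(ti.get_addr(x, i))  # memory address of x[i]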

taichi.lang.is_active(l, indices)
taichi.lang.length(l, indices)
taichi.lang.rescale_index(a, b, I)

Rescales the index ‘I’ of field (or SNode) ‘a’ to match the shape of SNode ‘b’

Parameters
  • a (ti.field(), ti.Vector.field, ti.Matrix.field()) – input taichi field or snode

  • b (ti.field(), ti.Vector.field, ti.Matrix.field()) – output taichi field or snode

  • I (ti.Vector()) – grouped loop index

Returns

Ib – rescaled grouped loop index

Return type

ti.Vector()
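
Example (a minimal sketch; the field shapes are illustrative):

>>> a = ti.field(ti.f32, shape=(8, 8))
>>> b = ti.field(ti.f32, shape=(4, 4))
>>>
>>> @ti.kernel
>>> def downsample():
>>>     for I in ti.grouped(a):
>>>         # Rescale an index of `a` (8x8) to the matching index of `b` (4x4),
>>>         # e.g. (5, 3) -> (2, 1).
>>>         b[ti.rescale_index(a, b, I)] += a[I]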

taichi.lang.parallel_sort(keys, values=None)
class taichi.lang.SourceBuilder
classmethod from_file(cls, filename, compile_fn=None, _temp_dir=None)
classmethod from_source(cls, source_code, compile_fn=None)
class taichi.lang.Struct(*args, **kwargs)

Bases: taichi.lang.common_ops.TaichiOperations

The Struct type class.

Parameters

entries (Dict[str, Union[Dict, Expr, Matrix, Struct]]) – keys and values for struct members.
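
Example (a minimal sketch; keyword construction is assumed to populate the struct entries):

>>> sphere = ti.Struct(center=ti.Vector([0.0, 0.0, 0.0]), radius=1.0)
>>> sphere.radius     # 1.0
>>> sphere.center[1]  # 0.0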

is_taichi_class = True
property keys(self)
property members(self)
property items(self)
register_members(self)
set_entries(self, value)
static make_getter(key)
static make_setter(key)
element_wise_unary(self, foo)
element_wise_binary(self, foo, other)
broadcast_copy(self, other)
element_wise_writeback_binary(self, foo, other)
element_wise_ternary(self, foo, other, extra)
fill(self, val)

Fills the Struct with a specific value in Taichi scope.

Parameters

val (Union[int, float]) – Value to fill.

to_dict(self)

Converts the Struct to a dictionary.

Returns

The result dictionary.

Return type

Dict

classmethod field(cls, members, shape=None, name='<Struct>', offset=None, needs_grad=False, layout=Layout.AOS)
class taichi.lang.StructField(field_dict, name=None)

Bases: taichi.lang.field.Field

Taichi struct field with SNode implementation.

Instead of directly containing Expr entries, the StructField object hosts its members as Field instances to support nested structs.

Parameters
  • field_dict (Dict[str, Field]) – Struct field members.

  • name (string, optional) – The custom name of the field.

property name(self)
property keys(self)
property members(self)
property items(self)
static make_getter(key)
static make_setter(key)
register_fields(self)
get_field_members(self)

Get a flattened list of all struct elements.

Returns

A list of struct elements.

property snode(self)

Gets representative SNode for info purposes.

Returns

Representative SNode (SNode of first field member).

Return type

SNode

loop_range(self)

Gets representative field member for loop range info.

Returns

Representative (first) field member.

Return type

taichi_core.Expr

copy_from(self, other)

Copies all elements from another field.

The shape of the other field needs to be the same as self.

Parameters

other (Field) – The source field.

fill(self, val)

Fills self with a specific value.

Parameters

val (Union[int, float]) – Value to fill.

initialize_host_accessors(self)
get_member_field(self, key)

Creates a ScalarField using a specific field member. Only used for quant.

Parameters

key (str) – Specified key of the field member.

Returns

The result ScalarField.

Return type

ScalarField

from_numpy(self, array_dict)
from_torch(self, array_dict)
to_numpy(self)

Converts the Struct field instance to a dictionary of NumPy arrays. The dictionary may be nested when converting nested structs.

Returns

The result dictionary of NumPy arrays.

Return type

Dict[str, Union[numpy.ndarray, Dict]]

to_torch(self, device=None)

Converts the Struct field instance to a dictionary of PyTorch tensors. The dictionary may be nested when converting nested structs.

Parameters

device (torch.device, optional) – The desired device of returned tensor.

Returns

The result dictionary of PyTorch tensors.

Return type

Dict[str, Union[torch.Tensor, Dict]]

class taichi.lang.TapeImpl(runtime, loss=None)
insert(self, func, args)
grad(self)
taichi.lang.type_factory
taichi.lang.cook_dtype(dtype)
taichi.lang.has_clangpp()
taichi.lang.has_pytorch()

Whether has pytorch in the current Python environment.

Returns

True if has pytorch else False.

Return type

bool

taichi.lang.is_taichi_class(rhs)
taichi.lang.python_scope(func)
taichi.lang.taichi_scope(func)
taichi.lang.to_numpy_type(dt)

Convert taichi data type to its counterpart in numpy.

Parameters

dt (DataType) – The desired data type to convert.

Returns

The counterpart data type in numpy.

Return type

DataType

taichi.lang.to_pytorch_type(dt)

Convert taichi data type to its counterpart in torch.

Parameters

dt (DataType) – The desired data type to convert.

Returns

The counterpart data type in torch.

Return type

DataType

taichi.lang.to_taichi_type(dt)

Convert numpy or torch data type to its counterpart in taichi.

Parameters

dt (DataType) – The desired data type to convert.

Returns

The counterpart data type in taichi.

Return type

DataType

class taichi.lang.KernelProfiler

Kernel profiler of Taichi.

The kernel profiler acquires kernel profiling records from the backend, counts the records in Python scope, and prints the results to the console via print_info().

KernelProfiler now supports detailed low-level performance metrics (such as memory bandwidth consumption) in its advanced mode. This mode is only available for the CUDA backend with the CUPTI toolkit, i.e. you need ti.init(kernel_profiler=True, arch=ti.cuda).

Note

For details about using CUPTI in Taichi, please visit https://docs.taichi.graphics/docs/lang/articles/misc/profiler#advanced-mode.

COUNT = count
TRACE = trace
set_kernel_profiler_mode(self, mode=False)

Turn on or off KernelProfiler.

get_kernel_profiler_mode(self)

Get status of KernelProfiler.

set_toolkit(self, toolkit_name='default')
get_total_time(self)

Get elapsed time of all kernels recorded in KernelProfiler.

Returns

total time in seconds.

Return type

time (float)

clear_info(self)

Clear all records both in front-end KernelProfiler and back-end instance KernelProfilerBase.

Note

The values of self._profiling_mode and self._metric_list will not be cleared.

query_info(self, name)

For the docstring of this function, see query_kernel_profile_info().

set_metrics(self, metric_list=default_cupti_metrics)

For the docstring of this function, see set_kernel_profile_metrics().

collect_metrics_in_context(self, metric_list=default_cupti_metrics)

This function is not exposed to users for now.

For usage of this function, see collect_kernel_profile_metrics().

print_info(self, mode=COUNT)

Print the profiling results of Taichi kernels.

For usage of this function, see print_kernel_profile_info().

Parameters

mode (str) – the way to print profiling results.

taichi.lang.get_default_kernel_profiler()

We have only one KernelProfiler instance (i.e. _ti_kernel_profiler) for now.

For a KernelProfiler using the CUPTI toolkit, GPU devices can only work with a single configuration at a time. The profiling mode and metrics are configured by the host (CPU) via CUPTI APIs, and the device (GPU) uses its counter registers to collect the specified metrics. So if there were multiple instances of KernelProfiler, the device would work with the latest configuration, and the profiling configuration of the other instances would be changed as a result. For data retention purposes, multiple instances may be supported in the future.

class taichi.lang.CuptiMetric(name='', header='unnamed_header', val_format='     {:8.0f} ', scale=1.0)

A class to add CUPTI metric for KernelProfiler.

This class is designed to add user selected CUPTI metrics. Only available for the CUDA backend now, i.e. you need ti.init(kernel_profiler=True, arch=ti.cuda). For usage of this class, see examples in func set_kernel_profile_metrics() and collect_kernel_profile_metrics().

Parameters

Example:

>>> import taichi as ti

>>> ti.init(kernel_profiler=True, arch=ti.cuda)
>>> num_elements = 128*1024*1024

>>> x = ti.field(ti.f32, shape=num_elements)
>>> y = ti.field(ti.f32, shape=())
>>> y[None] = 0

>>> @ti.kernel
>>> def reduction():
>>>     for i in x:
>>>         y[None] += x[i]

>>> global_op_atom = ti.CuptiMetric(
>>>     name='l1tex__t_set_accesses_pipe_lsu_mem_global_op_atom.sum',
>>>     header=' global.atom ',
>>>     val_format='    {:8.0f} ')

>>> # add and set user defined metrics
>>> profiling_metrics = ti.get_predefined_cupti_metrics('global_access') + [global_op_atom]
>>> ti.set_kernel_profile_metrics(profiling_metrics)

>>> for i in range(16):
>>>     reduction()
>>> ti.print_kernel_profile_info('trace')

Note

For details about using CUPTI in Taichi, please visit https://docs.taichi.graphics/docs/lang/articles/misc/profiler#advanced-mode.

taichi.lang.default_cupti_metrics
taichi.lang.get_predefined_cupti_metrics(name='')
class taichi.lang.FieldsBuilder

A builder that constructs a SNodeTree instance.

Example:

x = ti.field(ti.i32)
y = ti.field(ti.f32)
fb = ti.FieldsBuilder()
fb.dense(ti.ij, 8).place(x)
fb.pointer(ti.ij, 8).dense(ti.ij, 4).place(y)

# After this line, `x` and `y` are placed. No more fields can be placed
# into `fb`.
#
# The tree looks like the following:
# (implicit root)
#  |
#  +-- dense +-- place(x)
#  |
#  +-- pointer +-- dense +-- place(y)
fb.finalize()
classmethod finalized_roots(cls)

Gets all the roots of the finalized SNodeTree.

Returns

A list of the roots of the finalized SNodeTree.

property ptr(self)
property root(self)
property empty(self)
property finalized(self)
deactivate_all(self)

Same as taichi.lang.snode.SNode.deactivate_all()

dense(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int])

Same as taichi.lang.snode.SNode.dense()

pointer(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int])

Same as taichi.lang.snode.SNode.pointer()

abstract hash(self, indices, dimensions)

Same as taichi.lang.snode.SNode.hash()

dynamic(self, index: Union[Sequence[_Axis], _Axis], dimension: Union[Sequence[int], int], chunk_size: Optional[int] = None)

Same as taichi.lang.snode.SNode.dynamic()

bitmasked(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int])

Same as taichi.lang.snode.SNode.bitmasked()

bit_struct(self, num_bits: int)

Same as taichi.lang.snode.SNode.bit_struct()

bit_array(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int], num_bits: int)

Same as taichi.lang.snode.SNode.bit_array()

place(self, *args: Any, offset: Optional[Union[Sequence[int], int]] = None, shared_exponent: bool = False)

Same as taichi.lang.snode.SNode.place()

lazy_grad(self)

Same as taichi.lang.snode.SNode.lazy_grad()

finalize(self, raise_warning=True)

Constructs the SNodeTree and finalizes this builder.

Parameters

raise_warning (bool) – Raise warning or not.

taichi.lang.set_gdb_trigger(on=True)
taichi.lang.warning(msg, warning_type=UserWarning, stacklevel=1)

Print a warning message.

Parameters
  • msg (str) – message to print.

  • warning_type (builtin warning type) – type of warning.

  • stacklevel (int) – warning stack level from the caller.

taichi.lang.any_arr

Alias for ArgAnyArray.

Example:

>>> @ti.kernel
>>> def to_numpy(x: ti.any_arr(), y: ti.any_arr()):
>>>     for i in range(n):
>>>         x[i] = y[i]
>>>
>>> y = ti.ndarray(ti.f64, shape=n)
>>> ... # calculate y
>>> x = numpy.zeros(n)
>>> to_numpy(x, y)  # `x` will be filled with `y`'s data.
taichi.lang.ext_arr()

Type annotation for external arrays.

External arrays are formally defined as the data from other Python frameworks. For now, Taichi supports numpy and pytorch.

Example:

>>> @ti.kernel
>>> def to_numpy(arr: ti.ext_arr()):
>>>     for i in x:
>>>         arr[i] = x[i]
>>>
>>> arr = numpy.zeros(...)
>>> to_numpy(arr)  # `arr` will be filled with `x`'s data.
taichi.lang.template

Alias for Template.

taichi.lang.f16
taichi.lang.f32

Alias for float32

taichi.lang.f64

Alias for float64

taichi.lang.i32

Alias for int32

taichi.lang.i64

Alias for int64

taichi.lang.integer_types
taichi.lang.u32

Alias for uint32

taichi.lang.u64

Alias for uint64
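
These aliases are the dtypes typically passed to field constructors and to ti.init(); a minimal sketch (the shapes are arbitrary):

>>> import taichi as ti
>>> ti.init(default_fp=ti.f32, default_ip=ti.i32)
>>> a = ti.field(ti.f32, shape=8)   # single-precision float field
>>> b = ti.field(ti.i64, shape=8)   # 64-bit integer field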

taichi.lang.runtime
taichi.lang.i
taichi.lang.j
taichi.lang.k
taichi.lang.l
taichi.lang.ij
taichi.lang.ik
taichi.lang.il
taichi.lang.jk
taichi.lang.jl
taichi.lang.kl
taichi.lang.ijk
taichi.lang.ijl
taichi.lang.ikl
taichi.lang.jkl
taichi.lang.ijkl
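
The names above are axis objects used to index SNode dimensions; a minimal sketch:

>>> import taichi as ti
>>> ti.init()
>>> x = ti.field(ti.f32)
>>> # a 4x8 dense layout indexed by axes i and j
>>> ti.root.dense(ti.ij, (4, 8)).place(x)
>>> x[3, 7] = 1.0
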
taichi.lang.cfg
taichi.lang.x86_64

The x64 CPU backend.

taichi.lang.x64

The x64 CPU backend.

taichi.lang.arm64

The ARM CPU backend.

taichi.lang.cuda

The CUDA backend.

taichi.lang.metal

The Apple Metal backend.

taichi.lang.opengl

The OpenGL backend. OpenGL 4.3 required.

taichi.lang.cc
taichi.lang.wasm

The WebAssembly backend.

taichi.lang.vulkan

The Vulkan backend.

taichi.lang.dx11

The DX11 backend.

taichi.lang.gpu

A list of GPU backends supported on the current system.

When this is used, Taichi automatically picks the matching GPU backend. If no GPU is detected, Taichi falls back to the CPU backend.

taichi.lang.cpu

A list of CPU backends supported on the current system.

When this is used, Taichi automatically picks the matching CPU backend.
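
A minimal sketch of selecting a backend at initialization:

>>> import taichi as ti
>>> ti.init(arch=ti.gpu)   # picks a supported GPU backend, or falls back to CPU
>>> # or request one backend explicitly:
>>> # ti.init(arch=ti.cuda)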

taichi.lang.timeline_clear
taichi.lang.timeline_save
taichi.lang.type_factory_
taichi.lang.print_kernel_profile_info(mode='count')

Print the profiling results of Taichi kernels.

To enable this profiler, set kernel_profiler=True in ti.init(). The 'count' mode prints the statistics (min, max, avg time) of launched kernels, while the 'trace' mode prints the records of launched kernels with specific profiling metrics (time, memory load/store, core utilization, etc.). The default mode is 'count'.

Parameters

mode (str) – the way to print profiling results.

Example:

>>> import taichi as ti

>>> ti.init(ti.cpu, kernel_profiler=True)
>>> var = ti.field(ti.f32, shape=1)

>>> @ti.kernel
>>> def compute():
>>>     var[0] = 1.0

>>> compute()
>>> ti.print_kernel_profile_info()
>>> # equivalent calls :
>>> # ti.print_kernel_profile_info('count')

>>> ti.print_kernel_profile_info('trace')

Note

Currently the result of KernelProfiler could be incorrect on OpenGL backend due to its lack of support for ti.sync().

For advanced mode of KernelProfiler, please visit https://docs.taichi.graphics/docs/lang/articles/misc/profiler#advanced-mode.

taichi.lang.query_kernel_profile_info(name)

Queries the kernel elapsed time (min, avg, max) on devices by the kernel name.

To enable this profiler, set kernel_profiler=True in ti.init.

Parameters

name (str) – kernel name.

Returns

An object with member variables (counter, min, max, avg).

Return type

KernelProfilerQueryResult (class)

Example:

>>> import taichi as ti

>>> ti.init(ti.cpu, kernel_profiler=True)
>>> n = 1024*1024
>>> var = ti.field(ti.f32, shape=n)

>>> @ti.kernel
>>> def fill():
>>>     for i in range(n):
>>>         var[i] = 0.1

>>> fill()
>>> ti.clear_kernel_profile_info() #[1]
>>> for i in range(100):
>>>     fill()
>>> query_result = ti.query_kernel_profile_info(fill.__name__) #[2]
>>> print("kernel excuted times =",query_result.counter)
>>> print("kernel elapsed time(min_in_ms) =",query_result.min)
>>> print("kernel elapsed time(max_in_ms) =",query_result.max)
>>> print("kernel elapsed time(avg_in_ms) =",query_result.avg)

Note

[1] To get the correct result, query_kernel_profile_info() must be used in conjunction with clear_kernel_profile_info().

[2] Currently the result of KernelProfiler could be incorrect on OpenGL backend due to its lack of support for ti.sync().

taichi.lang.clear_kernel_profile_info()

Clear all KernelProfiler records.

taichi.lang.kernel_profiler_total_time()

Get elapsed time of all kernels recorded in KernelProfiler.

Returns

total time in seconds.

Return type

time (float)
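
Example (a minimal sketch; the field shape and kernel body are arbitrary placeholders):

>>> import taichi as ti
>>> ti.init(ti.cpu, kernel_profiler=True)
>>> x = ti.field(ti.f32, shape=1024)

>>> @ti.kernel
>>> def fill():
>>>     for i in x:
>>>         x[i] = 0.1

>>> ti.clear_kernel_profile_info()
>>> for _ in range(10):
>>>     fill()
>>> print('total kernel time:', ti.kernel_profiler_total_time(), 's')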

taichi.lang.set_kernel_profiler_toolkit(toolkit_name='default')

Set the toolkit used by KernelProfiler.

Currently, only the 'default' and 'cupti' toolkits are supported.

Parameters

toolkit_name (str) – string of toolkit name.

Returns

whether the setting is successful or not.

Return type

status (bool)

Example:

>>> import taichi as ti

>>> ti.init(arch=ti.cuda, kernel_profiler=True)
>>> x = ti.field(ti.f32, shape=1024*1024)

>>> @ti.kernel
>>> def fill():
>>>     for i in x:
>>>         x[i] = i

>>> ti.set_kernel_profiler_toolkit('cupti')
>>> for i in range(100):
>>>     fill()
>>> ti.print_kernel_profile_info()

>>> ti.set_kernel_profiler_toolkit('default')
>>> for i in range(100):
>>>     fill()
>>> ti.print_kernel_profile_info()
taichi.lang.set_kernel_profile_metrics(metric_list=default_cupti_metrics)

Set metrics that will be collected by the CUPTI toolkit.

Parameters

metric_list (list) – a list of CuptiMetric() instances, default value: default_cupti_metrics.

Example:

>>> import taichi as ti

>>> ti.init(kernel_profiler=True, arch=ti.cuda)
>>> ti.set_kernel_profiler_toolkit('cupti')
>>> num_elements = 128*1024*1024

>>> x = ti.field(ti.f32, shape=num_elements)
>>> y = ti.field(ti.f32, shape=())
>>> y[None] = 0

>>> @ti.kernel
>>> def reduction():
>>>     for i in x:
>>>         y[None] += x[i]

>>> # When called without a parameter, Taichi prints its pre-defined metrics list
>>> ti.get_predefined_cupti_metrics()
>>> # get Taichi pre-defined metrics
>>> profiling_metrics = ti.get_predefined_cupti_metrics('shared_access')

>>> global_op_atom = ti.CuptiMetric(
>>>     name='l1tex__t_set_accesses_pipe_lsu_mem_global_op_atom.sum',
>>>     header=' global.atom ',
>>>     val_format='    {:8.0f} ')
>>> # add user defined metrics
>>> profiling_metrics += [global_op_atom]

>>> # metrics setting will be retained until the next configuration
>>> ti.set_kernel_profile_metrics(profiling_metrics)
>>> for i in range(16):
>>>     reduction()
>>> ti.print_kernel_profile_info('trace')

Note

Metrics setting will be retained until the next configuration.

taichi.lang.collect_kernel_profile_metrics(metric_list=default_cupti_metrics)

Set temporary metrics that will be collected by the CUPTI toolkit within this context.

Parameters

metric_list (list) – a list of CuptiMetric() instances, default value: default_cupti_metrics.

Example:

>>> import taichi as ti

>>> ti.init(kernel_profiler=True, arch=ti.cuda)
>>> ti.set_kernel_profiler_toolkit('cupti')
>>> num_elements = 128*1024*1024

>>> x = ti.field(ti.f32, shape=num_elements)
>>> y = ti.field(ti.f32, shape=())
>>> y[None] = 0

>>> @ti.kernel
>>> def reduction():
>>>     for i in x:
>>>         y[None] += x[i]

>>> # When called without a parameter, Taichi prints its pre-defined metrics list
>>> ti.get_predefined_cupti_metrics()
>>> # get Taichi pre-defined metrics
>>> profiling_metrics = ti.get_predefined_cupti_metrics('device_utilization')

>>> global_op_atom = ti.CuptiMetric(
>>>     name='l1tex__t_set_accesses_pipe_lsu_mem_global_op_atom.sum',
>>>     header=' global.atom ',
>>>     val_format='    {:8.0f} ')
>>> # add user defined metrics
>>> profiling_metrics += [global_op_atom]

>>> # The metrics setting is temporary and will be cleared when exiting this context.
>>> with ti.collect_kernel_profile_metrics(profiling_metrics):
>>>     for i in range(16):
>>>         reduction()
>>>     ti.print_kernel_profile_info('trace')

Note

The configuration of the metric_list will be cleared when exiting this context.

taichi.lang.print_memory_profile_info()

Memory profiling tool for LLVM backends with full sparse support.

This profiler is enabled automatically.
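
Example (a minimal sketch on a CPU LLVM backend; the sparse layout is an arbitrary placeholder):

>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> x = ti.field(ti.f32)
>>> ti.root.pointer(ti.i, 32).dense(ti.i, 32).place(x)

>>> @ti.kernel
>>> def activate():
>>>     x[0] = 1.0

>>> activate()
>>> ti.print_memory_profile_info()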

taichi.lang.extension
taichi.lang.is_extension_supported(arch, ext)

Checks whether an extension is supported on an arch.

Parameters
  • arch (taichi_core.Arch) – Specified arch.

  • ext (taichi_core.Extension) – Specified extension.

Returns

Whether ext is supported on arch.

Return type

bool
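
A quick sketch; ti.extension.sparse is assumed here as one example Extension member:

>>> import taichi as ti
>>> # prefer CUDA only if it supports sparse SNodes on this machine
>>> if ti.is_extension_supported(ti.cuda, ti.extension.sparse):
>>>     ti.init(arch=ti.cuda)
>>> else:
>>>     ti.init(arch=ti.cpu)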

taichi.lang.reset()

Resets Taichi to its initial state.

This destroys all fields and kernels.
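
Example (a minimal sketch; fields created before the reset must not be used afterwards):

>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> x = ti.field(ti.f32, shape=4)
>>> ti.reset()              # `x` is destroyed; do not access it anymore
>>> ti.init(arch=ti.cpu)    # start again from a clean state
>>> y = ti.field(ti.f32, shape=4)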

taichi.lang.prepare_sandbox()

Returns a temporary directory, which will be automatically deleted on exit. It may contain the taichi_core shared object or some misc. files.

taichi.lang.check_version()
taichi.lang.try_check_version()
taichi.lang.init(arch=None, default_fp=None, default_ip=None, _test_mode=False, enable_fallback=True, **kwargs)

Initializes the Taichi runtime.

This should always be the entry point of your Taichi program. Most importantly, it sets the backend used throughout the program.

Parameters
  • arch – Backend to use. This is usually cpu or gpu.

  • default_fp (Optional[type]) – Default floating-point type.

  • default_ip (Optional[type]) – Default integral type.

  • **kwargs

    Taichi provides highly customizable compilation through kwargs, which allows for fine-grained control of Taichi compiler behavior. Below we list some of the most frequently used ones; a usage sketch follows the list. For a complete list, please check out https://github.com/taichi-dev/taichi/blob/master/taichi/program/compile_config.h.

    • cpu_max_num_threads (int): Sets the number of threads used by the CPU thread pool.

    • debug (bool): Enables the debug mode, under which Taichi does a few more things like boundary checks.

    • print_ir (bool): Prints the CHI IR of the Taichi kernels.

    • packed (bool): Enables the packed memory layout. See https://docs.taichi.graphics/lang/articles/advanced/layout.
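
For example, a minimal sketch combining only the arguments listed above (the specific values are arbitrary):

>>> import taichi as ti
>>> ti.init(arch=ti.cpu,
>>>         default_fp=ti.f64,
>>>         cpu_max_num_threads=4,
>>>         debug=True)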

taichi.lang.no_activate(*args)
taichi.lang.block_local(*args)

Hints Taichi to cache the fields and to enable the BLS (block local storage) optimization.

Please visit https://docs.taichi.graphics/lang/articles/advanced/performance for how BLS is used.

Parameters

*args (List[Field]) – A list of sparse Taichi fields.
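
Example (a minimal sketch modeled on the BLS article linked above; the layout and stencil are arbitrary placeholders):

>>> import taichi as ti
>>> ti.init(arch=ti.gpu)
>>> a = ti.field(ti.f32)
>>> b = ti.field(ti.f32)
>>> ti.root.pointer(ti.i, 64).dense(ti.i, 64).place(a, b)

>>> @ti.kernel
>>> def stencil():
>>>     ti.block_local(a)          # hint: cache `a` in block local storage
>>>     for i in a:
>>>         b[i] = a[i - 1] + a[i + 1]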

taichi.lang.mesh_local(*args)
taichi.lang.cache_read_only(*args)
taichi.lang.assume_in_range(val, base, low, high)
taichi.lang.loop_unique(val, covers=None)
taichi.lang.parallelize
taichi.lang.serialize
taichi.lang.vectorize
taichi.lang.bit_vectorize
taichi.lang.block_dim
taichi.lang.global_thread_idx
taichi.lang.mesh_patch_idx
taichi.lang.Tape(loss, clear_gradients=True)

Returns a context manager of TapeImpl. The context manager captures all calls to functions decorated with kernel() or grad_replaced() inside the with statement, and computes the partial gradients of a given loss variable by calling the gradient versions of the captured calls in reverse order when the with statement ends.

See also kernel() and grad_replaced() for gradient functions.

Parameters
  • loss (Expr) – The loss field, whose shape should be ().

  • clear_gradients (Bool) – Whether to clear all gradients before the with body starts.

Returns

The context manager.

Return type

TapeImpl

Example:

>>> @ti.kernel
>>> def sum(a: ti.float32):
>>>     for I in ti.grouped(x):
>>>         y[None] += x[I] ** a
>>>
>>> with ti.Tape(loss = y):
>>>     sum(2)
taichi.lang.clear_all_gradients()

Set all fields’ gradients to 0.
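
Example (a minimal sketch with a single gradient field):

>>> import taichi as ti
>>> ti.init()
>>> x = ti.field(ti.f32, shape=(), needs_grad=True)
>>> x.grad[None] = 1.0
>>> ti.clear_all_gradients()
>>> print(x.grad[None])  # 0.0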

taichi.lang.benchmark(_func, repeat=300, args=())
taichi.lang.benchmark_plot(fn=None, cases=None, columns=None, column_titles=None, archs=None, title=None, bars='sync_vs_async', bar_width=0.4, bar_distance=0, left_margin=0, size=(12, 8))
taichi.lang.stat_write(key, value)
taichi.lang.is_arch_supported(arch, use_gles=False)

Checks whether an arch is supported on the machine.

Parameters
  • arch (taichi_core.Arch) – Specified arch.

  • use_gles (bool) – If True, check if GLES is available; otherwise check if GLSL is available. Only effective when arch is ti.opengl. Default is False.

Returns

Whether arch is supported on the machine.

Return type

bool
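
A quick sketch:

>>> import taichi as ti
>>> if ti.is_arch_supported(ti.cuda):
>>>     ti.init(arch=ti.cuda)
>>> else:
>>>     ti.init(arch=ti.cpu)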

taichi.lang.adaptive_arch_select(arch, enable_fallback, use_gles)
taichi.lang.get_host_arch_list()