
Category: Math: Linalg & Ops
GPU: Yes
BLAS/LAPACK

What does the dot function do in MATLAB / RunMat?

dot(A, B) evaluates the inner product between matching slices of A and B. Inputs must have the same size, and complex vectors follow MATLAB's convention sum(conj(A) .* B).
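
For concreteness, dot is equivalent to writing the defining formula out by hand. The snippet below is a small illustrative check with arbitrary values, not code from the RunMat sources:

a = [1+1i, 2];
b = [3, 4-2i];
v1 = dot(a, b);           % built-in inner product
v2 = sum(conj(a) .* b);   % the defining formula
% v1 and v2 are both 11 - 7i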

How does the dot function behave in MATLAB / RunMat?

  • Treats vectors, matrices, and N-D tensors identically as long as A and B share the same size.
  • When no dimension is supplied, the function reduces along the first non-singleton dimension (dim = 1 when all dimensions are singleton); see the example after this list.
  • dot(A, B, dim) collapses dimension dim (1-based) while leaving every other dimension untouched.
  • Complex inputs conjugate the first argument before multiplication; real inputs use a straight element-wise product.
  • Empty reductions yield zeros of the appropriate shape; size mismatches raise the error "A and B must be the same size."
  • Logical and integer inputs are promoted to double precision automatically so the result is always floating point (or complex when any input is complex).
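
To illustrate the default-dimension rule, the sketch below dots two matrices without passing dim, so the reduction runs down the columns (illustrative values, not taken from the RunMat test suite):

M = [1 2; 3 4];
N = [5 6; 7 8];
d = dot(M, N);    % no dim supplied: reduces along dim 1 (the columns)
% d = [1*5+3*7, 2*6+4*8] = [26 44]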

dot Function GPU Execution Behaviour

RunMat Accelerate keeps GPU-resident tensors on the device whenever the active provider exposes a dot hook. When the hook is missing, RunMat gathers both operands to the host, evaluates the reference path, and—when the result is real—uploads it back to the provider so downstream GPU code continues without manual transfers. Complex outputs remain on the host until provider support lands.

Examples of using the dot function in MATLAB / RunMat

Computing the dot product of row vectors

A = [1 2 3];
B = [4 5 6];
val = dot(A, B);

Expected output:

val = 32

Dotting column vectors to obtain a scalar

u = [1; 3; 5];
v = [2; 4; 6];
val = dot(u, v);

Expected output:

val = 44

Applying dot along a chosen dimension

X = [1 2 3; 4 5 6];
Y = [6 5 4; 3 2 1];
cols = dot(X, Y, 1);  % collapse rows
rows = dot(X, Y, 2);  % collapse columns

Expected output:

cols = [18 20 18];
rows = [28; 28];

Dotting complex vectors uses conjugation on the first input

a = [1+2i, 3-4i];
b = [2-3i, -1+5i];
val = dot(a, b);

Expected output:

val = -27 + 4i

Evaluating dot on gpuArray inputs

G1 = gpuArray([1 2 3 4]);
G2 = gpuArray([4 3 2 1]);
G = dot(G1, G2);
result = gather(G);

Expected output:

isa(G, 'gpuArray')   % logical 1
result = 20

GPU residency in RunMat (Do I need gpuArray?)

You rarely need to call gpuArray explicitly for dot. If the provider lacks a dedicated dot kernel, RunMat will gather the operands, compute the host result, and—when the output is real—upload it back to the provider automatically. Complex outputs remain on the host until GPU complex datatypes are implemented; in mixed pipelines consider reintroducing gpuArray explicitly after the call if residency is critical.
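
If residency matters after a complex-valued dot, one option is to re-upload the host result yourself. A minimal sketch, assuming an active provider:

a = gpuArray([1+2i, 3-4i]);
b = gpuArray([2-3i, -1+5i]);
c = dot(a, b);      % complex result is currently returned on the host
c = gpuArray(c);    % re-upload explicitly if later stages expect device data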

FAQ

Does dot require vectors?

No. Any pair of tensors with identical sizes works; specifying a dimension lets you dot slices of higher-dimensional arrays.
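
For example, dotting the corresponding columns of two matrices (illustrative values):

P = magic(3);
Q = eye(3);
colDots = dot(P, Q);   % one inner product per column
% colDots = [8 5 2]; only the diagonal entries of Q are nonzero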

How does the optional dimension behave?

dot(A, B, dim) collapses the dim-th dimension (1-based). Dimensions beyond the array's rank have length 1, so reducing over them performs no summation and simply returns the element-wise product conj(A) .* B.
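
A quick illustrative check of that rule:

A = [1 2; 3 4];
B = [5 6; 7 8];
dot(A, B, 3)    % dim 3 has length 1, so no summation takes place
% ans = [5 12; 21 32], i.e. the element-wise product A .* B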

What happens with complex numbers?

The first input is conjugated before multiplication so the result matches MATLAB's hermitian inner product.

Are empty inputs supported?

Yes. If the reduction dimension has length 0 the result is filled with zeros of the appropriate shape.
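
A small sketch, assuming the zero-fill behaviour described above:

E = zeros(0, 3);
dot(E, E)       % the reduction dimension (dim 1) has length 0
% ans = [0 0 0]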

Will the result stay on the GPU?

When a provider is active the runtime uploads real-valued results back to the device. Complex outputs stay on the host until GPU complex support is available.

What error is raised for size mismatches?

When A and B differ in size, dot raises "A and B must be the same size.", matching MATLAB's wording.
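
For example (illustrative; the message text is quoted from above):

try
    dot([1 2 3], [1 2]);
catch err
    disp(err.message)   % A and B must be the same size.
end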

Does dot accept logical or integer inputs?

Yes. Logical and integer inputs are promoted to double precision before evaluation, so the result is always floating point and matches MATLAB's behaviour.
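
For example, a logical mask dotted with a double vector (illustrative values):

mask = logical([1 0 1 1]);
vals = [10 20 30 40];
s = dot(mask, vals);   % logical input is promoted to double
% s = 80 and class(s) is 'double'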

Can I request conjugation of the second argument instead?

No. MATLAB's dot is fixed to conjugate the first argument. Use sum(A .* conj(B)) manually if you need the opposite orientation.
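
A minimal sketch of the manual alternative:

a = [1+2i, 3-4i];
b = [2-3i, -1+5i];
alt = sum(a .* conj(b));   % conjugates the second argument instead
% alt = -27 - 4i, i.e. conj(dot(a, b))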

See Also

mtimes, mldivide, norm, sum

Source & Feedback