
Category: Math: Elementwise
GPU: Yes

What does the times function do in MATLAB / RunMat?

times(A, B) (or the operator form A .* B) multiplies corresponding elements of A and B, honouring MATLAB's implicit expansion rules so that scalars and singleton dimensions broadcast automatically.

How does the times function behave in MATLAB / RunMat?

  • Supports real, complex, logical, and character inputs; logical and character data are promoted to double precision before multiplication.
  • Implicit expansion works across any dimension, provided the non-singleton extents match. Size mismatches raise the standard MATLAB-compatible error.
  • Complex operands follow the analytic rule (a + ib) .* (c + id) = (ac - bd) + i(ad + bc).
  • Empty dimensions propagate naturally—if either operand has a zero-sized dimension after broadcasting, the result is empty with the broadcasted shape.
  • Integer inputs currently promote to double precision, mirroring the behaviour of other RunMat arithmetic builtins.
  • The optional 'like' prototype makes the output adopt the residency (host or GPU) and complexity characteristics of the prototype, which is particularly useful for keeping implicit-expansion expressions on the device.
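As a quick illustration of the promotion rule above, logical inputs behave like 0/1 doubles:

mask = logical([1 0 1]);
scaled = times(mask, 5);   % logicals promote to double before multiplying

Expected output:

scaled =
     5     0     5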

times Function GPU Execution Behaviour

When a gpuArray provider is active:

  1. If both operands are gpuArrays with identical shapes, RunMat dispatches to the provider's elem_mul hook.
  2. If one operand is a scalar (host or device) and the other is a gpuArray, the runtime calls scalar_mul to keep the result on the device.
  3. The fusion planner treats times as a fusible elementwise node, so adjacent elementwise producers/consumers can execute inside a single WGSL kernel or provider-optimised pipeline, avoiding spurious host↔device transfers.
  4. Implicit-expansion workloads (e.g., mixing row and column vectors) or unsupported operand kinds gather transparently to host memory, compute the result with full MATLAB semantics, and return a host tensor. The documentation callouts below flag this fallback behaviour explicitly.
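A sketch of the fallback in step 4: an implicit-expansion product between gpuArray vectors of different orientations gathers to the host, and the result stays in host memory until auto-offload decides to move it back.

Gc = gpuArray((1:3)');      % 3x1 column on the device
Gr = gpuArray([10 20 30]);  % 1x3 row on the device
M = times(Gc, Gr);          % broadcasting gathers both operands; M is a 3x3 host tensor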

Examples of using the times function in MATLAB / RunMat

Multiply two matrices element-wise

A = [1 2 3; 4 5 6];
B = [7 8 9; 1 2 3];
P = times(A, B);

Expected output:

P =
    7   16   27
    4   10   18

Scale a matrix by a scalar

A = magic(3);
scaled = times(A, 0.5);

Expected output:

scaled =
    4.0    0.5    3.0
    1.5    2.5    3.5
    2.0    4.5    1.0

Use implicit expansion between a column and row vector

col = (1:3)';
row = [10 20 30];
m = times(col, row);

Expected output:

m =
    10    20    30
    20    40    60
    30    60    90

Multiply complex inputs element-wise

z1 = [1+2i, 3-4i];
z2 = [2-1i, -1+1i];
zprod = times(z1, z2);   % avoid shadowing the built-in prod function

Expected output:

zprod =
    4 + 3i   1 + 7i

Multiply character codes by a numeric scalar

letters = 'ABC';
codes = times(letters, 2);

Expected output:

codes = [130 132 134]

Execute times directly on gpuArray inputs

G1 = gpuArray([1 2 3]);
G2 = gpuArray([4 5 6]);
deviceProd = times(G1, G2);
result = gather(deviceProd);

Expected output:

deviceProd =
  1x3 gpuArray
     4     10     18
result =
     4    10    18

Keep the result on the GPU with a 'like' prototype

proto = gpuArray.zeros(1, 1);
A = [1 2 3];
B = [4 5 6];
C = times(A, B, 'like', proto);  % stays on the GPU for downstream work

Expected output:

C =
  1x3 gpuArray
      4     10     18

GPU residency in RunMat (Do I need gpuArray?)

RunMat's auto-offload planner keeps tensors on the GPU whenever fused expressions benefit from device execution. Explicit gpuArray / gather calls are still supported for MATLAB code that manages residency manually. When the active provider lacks the kernels needed for a particular call (for example, implicit expansion between gpuArrays of different shapes), RunMat gathers back to the host, computes the MATLAB-accurate result, and resumes execution seamlessly.

FAQ

Does times support MATLAB implicit expansion?

Yes. Any singleton dimensions expand automatically. If a dimension has incompatible non-singleton extents, times raises the standard size-mismatch error.
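For example, when no dimension is a singleton and the extents differ, the call errors instead of broadcasting:

A = ones(2, 3);
B = ones(3, 2);
C = times(A, B);   % error: 2x3 and 3x2 have incompatible non-singleton extents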

What numeric type does times return?

Results are double precision for real inputs and complex double when either operand is complex. Logical and character inputs are promoted to double before multiplication.

Can I multiply gpuArrays and host scalars?

Yes. RunMat keeps the computation on the GPU when the scalar is numeric. For other host operand types, the runtime gathers the gpuArray and computes on the CPU.

Does times preserve gpuArray residency after a fallback?

When a fallback occurs (for example, implicit expansion that the provider does not implement), the current result remains on the host. Subsequent operations may move it back to the GPU when auto-offload decides it is profitable.

How can I force the result to stay on the GPU?

Provide a 'like' prototype: times(A, B, 'like', gpuArray.zeros(1, 1)) keeps the result on the device even if one of the inputs originated on the host.

How are empty arrays handled?

Empty dimensions propagate. If either operand has an extent of zero in the broadcasted shape, the result is empty with the broadcasted dimensions.
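For instance, a 3x0 operand multiplied by a scalar stays empty with the broadcasted shape:

E = times(zeros(3, 0), 5);
size(E)

Expected output:

ans =
     3     0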

Are integer inputs supported?

Yes. Integers promote to double precision during the multiplication, matching other RunMat arithmetic builtins.
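Because of this promotion, the result class is double rather than the input's integer class (a RunMat-specific behaviour; MATLAB proper would return int32 here):

x = times(int32(6), 2);
class(x)   % 'double' in RunMat; MATLAB itself returns 'int32'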

Can I mix complex and real operands?

Absolutely. The result is complex, with broadcasting rules identical to MATLAB.

What about string arrays?

String arrays are not numeric and therefore raise an error when passed to times.

See Also

mtimes, rdivide, ldivide, plus, gpuArray, gather

Source & Feedback