Re: Efficient multiplication by a diagonal matrix
From: Ted Harding
Subject: Re: Efficient multiplication by a diagonal matrix
Date: Wed, 13 Nov 1996 00:00:20 +0000 (GMT)
( Re Message From: Mario Storti )
>
>
> I have found myself repeatedly facing the following problem. Given a
> matrix A(n,m) and a vector v(n), I have to multiply each row A(j,:) by
> v(j). This is equivalent to computing:
>
> B = diag(v) * A (1)
>
> Now, for large n, (1) is very inefficient, because it requires
> constructing the square matrix diag(v), which wastes storage and
> performs many redundant operations, since most elements of diag(v)
> are zero.
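For concreteness, the two computations Mario contrasts can be sketched as
follows (in NumPy rather than Octave, purely as an illustration; the names
A, v, and the small shapes are made up for the example). Scaling each row
A(j,:) by v(j) directly gives the same result as diag(v)*A without ever
forming the n x n diagonal matrix:

```python
import numpy as np

n, m = 4, 3
A = np.arange(n * m, dtype=float).reshape(n, m)  # example n x m matrix
v = np.array([2.0, 3.0, 5.0, 7.0])               # length-n vector

# Direct translation of (1): builds an n x n matrix, mostly zeros.
B_slow = np.diag(v) @ A

# Row-wise scaling: multiply each row A[j, :] by v[j] directly,
# with no n x n intermediate at all.
B_fast = v[:, np.newaxis] * A
```

For large n the second form needs only O(n*m) work and no extra O(n^2)
storage, which is the whole point of the complaint.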
A good while ago I wrote to John Eaton suggesting an extension of the
Octave ".*" multiplication operator so that, if A is an m x n matrix and
u (m x 1) and v (1 x n) are vectors, one could write:
u .* A = A .* u
= rows of A times corresponding elements of u (the above case)
v .* A = A .* v
= columns of A times corresponding elements of v
If this were implemented as built-in code it could be done very
efficiently and fast. The choice between row-wise and column-wise
multiplication would be made according to the dimensions of u (or v);
any other dimensional relationship (except u or v being scalar) would
be an error.
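The semantics proposed above is essentially what array-broadcasting
systems now provide. A sketch in NumPy, as an illustration only (the
extended ".*" described here is a proposal, not existing Octave syntax):

```python
import numpy as np

# Shapes as in the proposal: A is m x n, u is m x 1, v is 1 x n.
A = np.ones((3, 4))
u = np.array([[1.0], [2.0], [3.0]])        # m x 1 column vector
v = np.array([[10.0, 20.0, 30.0, 40.0]])   # 1 x n row vector

row_scaled = u * A   # rows of A times corresponding elements of u
col_scaled = v * A   # columns of A times corresponding elements of v

# Any other dimensional relationship (except scalars) is an error,
# just as proposed for the extended ".*":
try:
    np.ones((5, 1)) * A
    shape_error = False
except ValueError:
    shape_error = True
```

The dispatch on the dimensions of u or v happens automatically: a 1 in
either dimension is stretched to match A, and incompatible shapes are
rejected outright.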
John reacted favourably at the time, but I have heard no more since.
I certainly have frequent call for such operations: in Statistics,
applying the same weighting factor along a whole row (or column) is a
daily requirement; I have tended to adopt the diag(u)*A method, but as
Mario points out this can be inefficient, especially in computations
involving iterative re-weighting and large matrices (a matrix with
several thousand rows -- "cases" -- is not uncommon).
Best wishes to all,
Ted. (address@hidden)