Re: Writing GPU Array Type: Which Array Class to Inherit From?
From: cjbattagl
Subject: Re: Writing GPU Array Type: Which Array Class to Inherit From?
Date: Wed, 8 Jul 2009 13:52:29 -0700 (PDT)
David Bateman wrote:
>
> Wouldn't it be better to inherit from octave_matrix, store a copy of
> the matrix in the original octave_matrix::matrix value and the version
> in GPU memory in the new class, and synchronize the two values as
> needed? You won't inherit the operators at the octave prompt if you
> inherit directly from a liboctave class.
>
I'm now inheriting from octave_float_matrix, which was a good start: the
class allocates CPU and GPU memory, prints its contents on request by
reading back the GPU data, and can assign the data to other matrix types:
GPUArray (FloatMatrix A) : octave_float_matrix(A) { ... }
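Expanded a bit, the class currently looks roughly like this (the CUDA
calls and member names are illustrative, and error checking is elided):

#include <octave/oct.h>
#include <octave/ov-flt-re-mat.h>
#include <cuda_runtime.h>

// The host copy lives in the inherited octave_float_matrix::matrix
// value; the device copy lives in dev_ptr.  A dirty flag would let the
// two be synchronized lazily, along the lines David suggested.
class GPUArray : public octave_float_matrix
{
public:
  GPUArray (FloatMatrix A)
    : octave_float_matrix (A), dev_ptr (0), host_dirty (false)
  {
    size_t nbytes = A.numel () * sizeof (float);
    cudaMalloc (reinterpret_cast<void **> (&dev_ptr), nbytes);
    cudaMemcpy (dev_ptr, A.data (), nbytes, cudaMemcpyHostToDevice);
  }

  ~GPUArray (void)
  {
    if (dev_ptr)
      cudaFree (dev_ptr);
  }

private:
  float *dev_ptr;    // device copy of the matrix data
  bool host_dirty;   // true when the host copy is stale
};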
It looks like I need to overload every single operator, though; each one
currently fails the same way:
A=single(rand(4)); G=NewGPUArray(A);
G = G + 1;
gives: error: T& Array<T>::checkelem (5, -1): range error
I haven't modified any important functions of octave_float_matrix in
GPUArray, so I suspect the fix should be easy for now.
In examples I see use of the macros INSTALL_BINOP and CAST_BINOP_ARGS:
would these give me a quick-and-dirty way to make GPUArray act exactly
like FloatMatrix? (I would like GPUArray to behave like a normal
FloatMatrix at first, while I go through and add GPU functionality to
each function.)
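Something along these lines is what I have in mind, adapted from the
pattern in the op-*.cc files in the Octave sources. The GPUArray type
name and the registration function are mine, and I haven't verified the
exact macro signatures against my class yet, so treat it as a sketch;
it also assumes GPUArray has been given its own type id via the usual
DECLARE/DEFINE_OV_TYPEID_FUNCTIONS_AND_DATA machinery:

#include <octave/oct.h>
#include <octave/ops.h>
#include <octave/ov-scalar.h>

// Delegate GPUArray + double scalar (the failing 'G + 1' case) back to
// the inherited FloatMatrix code for now; later this body would launch
// a CUDA kernel instead of touching the host copy.
DEFBINOP (gpu_add_s, gpu_array, scalar)
{
  CAST_BINOP_ARGS (const GPUArray&, const octave_scalar&);
  return octave_value (v1.float_matrix_value () + v2.float_value ());
}

// Run once at load time: registers the function for this pair of type
// ids so that 'G + 1' resolves at the Octave prompt.
void
install_gpu_ops (void)
{
  INSTALL_BINOP (op_add, GPUArray, octave_scalar, gpu_add_s);
}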
If not, I will probably end up using the new @-classes in Octave to
implement all of the functions and operators; it just seems like that
wouldn't be as fast, though.
Thanks for your help!
Casey