Solving large matrix equations
From: Martin Heller
Subject: Solving large matrix equations
Date: Tue, 7 Apr 2009 09:55:57 +0200
I am new to Octave and am trying to write a function for differentiating some
data, based on this paper:
<http://math.lanl.gov/Research/Publications/Docs/chartrand-2007-numerical.pdf>
My attempt is shown below and it seems to work, but my data sets contain
10000-15000 points, which is too much for Octave to handle when solving
the problem with my simple-minded approach.
I was wondering if there is a standard way to solve large matrix problems in
Octave that I could take advantage of? At the moment I just feed smaller
chunks of data to my function and this gets the job done (a rough sketch of
that workaround follows the test script below).
% test of chartrand_diff
x = 0:0.01:1;
y = abs(x-0.5)+0.05*rand(size(x));
[dx,dy,ys] = chartrand_diff(x,y,0.00025);
plot(dx,dy)
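For the record, the chunking workaround mentioned above amounts to something
like the following sketch (xbig/ybig stand for one of the long data sets; the
chunk length and the one-point overlap are arbitrary choices):

% sketch of the chunking workaround: process a long record in overlapping chunks
chunk = 2000;                         % arbitrary chunk length
dx = []; dy = [];
for k = 1:chunk:numel(xbig)-1
  idx = k:min(k+chunk, numel(xbig));  % adjacent chunks share one end point
  [dxk, dyk] = chartrand_diff(xbig(idx), ybig(idx), 0.00025);
  dx = [dx; dxk];                     % collect midpoints and derivative values
  dy = [dy; dyk];
endfor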
% chartrand_diff.m
function [dx,u,fs] = chartrand_diff(x,f,alpha,varargin),
# [dx,u,fs] = chartrand_diff(x,f,alpha,['tol',TOL,'maxstep',MAXSTEP])
#
# Computes the derivative and the smoothed data values based on
# Rick Chartrand, "Numerical differentiation of noisy, nonsmooth data".
#
# Input:
# x = equally spaced parameter values
# f = equally spaced function values. Must have same length as x.
# alpha = variance of the noise in f.
# TOL = convergence criterion (default: 1e-3)
# MAXSTEP = max number of iterations (default: 20)
#
# Output:
# u = derivative
# dx = x-values for derivative
# fs = smoothed function values
#
# Default values
max_step = 20;
tol = 1e-3;
# Parse input
if (nargin > 3)
for i = 1:nargin-3
arg = varargin{i};
if ischar(arg)
switch arg
case "maxstep"
max_step = varargin{i+1};
case "tol"
tol = varargin{i+1};
otherwise
printf("Option '%s' is not implemented;\n", arg)
endswitch
endif
endfor
endif
# Make sure x and y are column vectors
x = x(:);
f = f(:);
if (~size_equal(x,f))
error('x and f must have the same length');
end
N = length(f);
% make f(1) = 0;
f0 = f(1);
f = f - f0;
% x spacing
h = x(2) - x(1);
dx = x(1:end-1)+h/2;
u = diff(f)/h;
s = zeros(size(u));
% Differentiation with mid-point rule
D = diff(speye(N-1),1)./h;
Dhat = D.';
% Integration matrix
K = (tril(ones(N,N-1),-1)+tril(ones(N,N-1),-2))*h/2;
Khat = K.';
KK = Khat*K;
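% NB: K and KK are stored as full matrices, which is presumably what makes
% data sets of 10000-15000 points too much to handle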
step = 0;
U=[];
while (step<max_step),
step++;
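% E holds the lagged-diffusivity weights 1/sqrt((diff u).^2 + eps) of the
% total-variation term; eps keeps the division well defined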
E = spdiag(1./sqrt((u(2:end)-u(1:end-1)).^2+eps));
%L = h*D'*E*D;
L = h*Dhat*E*D;
H = KK+alpha*L;
%g = K'*(K*u-f)+alpha*L*u;
g = KK*u-Khat*f+alpha*L*u;
% solve directly
% s = H\(-g);
% solve with LU-factorization
%[LL,U,P] = lu(H); %
%s = U \ (LL \ (P*(-g))); % a forward and a backward substitution
% solve with Cholesky factorisation
[R,p] = chol(H);
s = R \ (R' \ (-g)); % a forward and a backward substitution
% Solve with conjugate gradient method (iterative);
% conjgrad is found at http://en.wikipedia.org/wiki/Conjugate_gradient_method
% s = conjgrad(H,-g,s)
% Solve iteratively
%s = pcg(H,-g);
u = u + s;
conv_criterium = norm(s);
if (conv_criterium<tol),
break
endif
endwhile
# Compute the smoothed data
fs = f0 + K*u;
endfunction
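Building on the commented-out pcg line inside the loop, here is a small sketch
of how the inner solve could be made matrix-free for large data sets. The
helpers Kfun/Ktfun are hypothetical; they apply K and K' via cumsum, so the
dense K and KK never need to be stored, and the commented lines at the end show
how pcg could then be called with a function handle instead of the explicit H:

% sketch: matrix-free application of K and K' (assumes equally spaced x
% with spacing h and column vectors, as inside chartrand_diff)
N = 8;  h = 0.01;                                        % small N just to check
K = (tril(ones(N,N-1),-1)+tril(ones(N,N-1),-2))*h/2;     % same K as in the function
Kfun  = @(u) [0; h*cumsum(u) - (h/2)*u];                 % applies K*u
Ktfun = @(v) (h/2)*v(2:end) + h*[flipud(cumsum(flipud(v(3:end)))); 0];  % applies K'*v
u = randn(N-1,1);  v = randn(N,1);
norm(K*u - Kfun(u))                                      % should be ~0
norm(K.'*v - Ktfun(v))                                   % should be ~0
% Inside the while loop, pcg could then work on function handles so that
% neither KK nor H is ever formed:
%   g    = Ktfun(Kfun(u) - f) + alpha*(L*u);
%   Hfun = @(s) Ktfun(Kfun(s)) + alpha*(L*s);
%   s    = pcg(Hfun, -g, 1e-6, 200, [], [], s);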