Hi all,
The code I am using is fairly simple. I am using it to process satellite images of dimension 3000x2000 pixels. The code goes as follows:
function texlayer(subfn)
clc;
[fn,pn] = uigetfile({"*.TIF;*.tiff;*.tif;*.TIFF;*.jpg;*.bmp;*.JPG;*.png", ...
    "Supported Image Formats"}, 'Select an Image', "/home/");
I = double(imread(fullfile(pn,fn)));
global ld
ld = input('Enter the lag distance = '); % Prompt for the lag distance
fh = str2func(subfn); % Function handle (str2func is cleaner than eval(['@' subfn]))
I2 = uint8(nlfilter(I, [7 7], fh)); % Semicolon added: printing the full result matrix wastes time
imshow(I2);
imwrite(I2,'i1.tiff');
% Zero Degree Variogram
function [gamma] = ewvar(I)
c = (size(I)+1)/2; % Finds the central pixel of moving window
global ld
EW = I(c(1),c(2):end); % Values from the central pixel to the right margin of the window
h = length(EW) - ld; % Number of lag-ld pairs
gamma = 1/(2 * h) * sum((EW(1:end-ld) - EW(1+ld:end)).^2);
% Note: the original indexing, EW(1:ld:end-1) - EW(2:ld:end), pairs values at
% lag 1 strided by ld; it only matches the lag-ld semivariogram when ld = 1.
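For sanity-checking the arithmetic, the ewvar subfunction can be transcribed into a few lines of plain Python (the function and variable names here are mine, purely for illustration):

```python
def ewvar(window, ld=1):
    """Lag-ld semivariogram of the row from the window centre to its
    right edge -- a transcription of the Octave subfunction above."""
    n = len(window)                          # window is n x n, n odd
    c = (n + 1) // 2 - 1                     # 0-based central index
    ew = window[c][c:]                       # centre pixel to right margin
    h = len(ew) - ld                         # number of lag-ld pairs
    diffs = [ew[i] - ew[i + ld] for i in range(h)]
    return sum(d * d for d in diffs) / (2.0 * h)

# Toy 3x3 window: the east ray from the centre is [5, 7].
w = [[1, 2, 3],
     [4, 5, 7],
     [0, 0, 0]]
print(ewvar(w))  # (5 - 7)^2 / (2 * 1) = 2.0
```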
With the above code, I call the function as texlayer('ewvar'), browse to and select the input image, and give the lag distance, which is usually 1. The processing takes more than 45 minutes before I get the output image. Is there any scope for improvement in this code, or would it be possible to implement parallel processing? I have no idea how to implement parallel processing, so any help would be a great time-saver for me.
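[Editorial aside, not the original poster's code.] Before parallelising, note that nlfilter calls the subfunction once per pixel, roughly six million calls for a 3000x2000 image, and that call overhead usually dominates. Since gamma at (r, c) depends only on fixed horizontal offsets within the same row, every term is just a shifted difference of the image. A pure-Python sketch of that equivalence (names and toy data are mine; border pixels, which nlfilter zero-pads, are skipped for brevity):

```python
def ewvar_all(img, win=7, ld=1):
    """Equivalent of nlfilter(I, [win win], @ewvar) for the east-ray
    semivariogram, written so each term is a shifted difference.
    Returns {(row, col): gamma} for pixels where the window fits."""
    half = win // 2
    npairs = (half + 1) - ld                 # h = length(EW) - ld
    out = {}
    for r in range(half, len(img) - half):
        for c in range(half, len(img[0]) - half):
            s = 0.0
            for k in range(npairs):          # k-th shifted difference
                d = img[r][c + k] - img[r][c + k + ld]
                s += d * d
            out[(r, c)] = s / (2.0 * npairs)
    return out

img = [[1, 2, 3, 4],
       [5, 6, 8, 9],
       [0, 1, 2, 3]]
print(ewvar_all(img, win=3))  # {(1, 1): 2.0, (1, 2): 0.5}
```

In Octave the loop over k can become a handful of whole-matrix operations on column slices of I, accumulating the squared differences into a gamma matrix; that removes the per-pixel function-call overhead entirely and will likely matter far more than any parallelism.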
Thanks and regards,
Chethan S.
Hi Chethan,
On Tue, May 17, 2011 at 09:19:59PM +0530, Chethan S wrote:
> I just know that parallel processing can make use of multiple CPU/GPU
> cores and help in completing memory intensive tasks faster. So I was
> just wondering about the possibility/ease of implementing for my case.
> I have a function which makes use of nlfilter and takes more than half
> an hour to complete.
Insufficient information. Please post the code you are using; anything
else is just guessing.
Thomas