From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master dd4d43e 113/113: NoiseChisel and Segment: now working on 3D data cubes
Date: Fri, 16 Apr 2021 10:34:05 -0400 (EDT)
branch: master
commit dd4d43e5777496271b7e3b68023573f795728a9a
Merge: aa80ac4 89db01d
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>
NoiseChisel and Segment: now working on 3D data cubes
Until now, the development of the 3D NoiseChisel and Segment was done in
parallel with the main branch. But the lack of recent commits in those
branches shows that they are now sufficiently mature and ready to be merged
into the master branch.
With this commit, they are imported into the 'master' branch.
---
.gitignore | 1 +
NEWS | 138 +-
README | 7 +-
THANKS | 3 +
bin/arithmetic/arithmetic.c | 6 +-
bin/arithmetic/operands.c | 5 +-
bin/arithmetic/ui.c | 3 +-
bin/convertt/args.h | 19 +
bin/convertt/main.h | 1 +
bin/convertt/ui.c | 115 +-
bin/convertt/ui.h | 3 +-
bin/convolve/ui.c | 3 +-
bin/cosmiccal/ui.c | 23 +-
bin/crop/ui.c | 9 +-
bin/fits/args.h | 13 +
bin/fits/fits.c | 6 +-
bin/fits/keywords.c | 31 +-
bin/fits/main.h | 2 +
bin/fits/ui.c | 14 +-
bin/fits/ui.h | 1 +
bin/gnuastro.conf | 1 +
bin/mkcatalog/args.h | 14 +
bin/mkcatalog/columns.c | 30 +-
bin/mkcatalog/mkcatalog.c | 336 +-
bin/mkcatalog/mkcatalog.h | 10 +-
bin/mkcatalog/ui.c | 21 +-
bin/mkcatalog/ui.h | 1 +
bin/mkcatalog/upperlimit.c | 212 +-
bin/mkcatalog/upperlimit.h | 4 +-
bin/mknoise/mknoise.c | 3 +-
bin/mknoise/ui.c | 3 +-
bin/mkprof/args.h | 10 +-
bin/mkprof/mkprof.c | 5 +-
bin/mkprof/profiles.c | 10 +-
bin/mkprof/ui.c | 7 +-
bin/noisechisel/ui.c | 3 +-
bin/query/astron.c | 5 +-
bin/query/gaia.c | 3 +
bin/query/main.h | 1 +
bin/query/ned.c | 119 +-
bin/query/query.c | 48 +-
bin/query/ui.c | 67 +-
bin/query/vizier.c | 3 +
bin/script/Makefile.am | 21 +-
bin/script/{make-ds9-reg.in => ds9-region.in} | 68 +-
bin/script/radial-profile.in | 551 +++
bin/segment/ui.c | 6 +-
bin/statistics/statistics.c | 3 +-
bin/statistics/ui.c | 3 +-
bin/table/arithmetic.c | 3 +-
bin/table/table.c | 2 +-
bin/table/ui.c | 2 +-
bin/warp/ui.c | 3 +-
bootstrap.conf | 8 -
configure.ac | 17 +
doc/Makefile.am | 26 +-
doc/announce-acknowledge.txt | 3 +
doc/gnuastro.texi | 5106 +++++++++++++++----------
lib/Makefile.am | 1 +
lib/arithmetic-set.c | 6 +-
lib/arithmetic.c | 26 +-
lib/fits.c | 36 +-
lib/gnuastro-internal/commonopts.h | 16 +-
lib/gnuastro-internal/options.h | 8 +-
lib/gnuastro/arithmetic.h | 10 +-
lib/gnuastro/units.h | 14 +-
lib/gnuastro/wcs.h | 47 +-
lib/options.c | 49 +
lib/txt.c | 5 +
lib/units.c | 45 +
lib/wcs.c | 510 ++-
tests/script/list-by-night.sh | 22 +-
72 files changed, 5260 insertions(+), 2676 deletions(-)
diff --git a/.gitignore b/.gitignore
index 26b30b4..c9bc5d4 100644
--- a/.gitignore
+++ b/.gitignore
@@ -44,6 +44,7 @@
*.log
*.pdf
*.png
+*.swp
*.toc
*.trs
*.txt
diff --git a/NEWS b/NEWS
index a5af946..ceb1926 100644
--- a/NEWS
+++ b/NEWS
@@ -8,20 +8,35 @@ See the end of the file for license conditions.
** New features
New program:
- - astscript-make-ds9-reg: Given a table (either as a file or from
+ - astscript-ds9-region: Given a table (either as a file or from
standard input), create an SAO DS9 region file from the requested
positional columns (WCS or image coordinates). For example with the
command below you can select certain rows of a given table, and show
them over an image:
- asttable table.fits --range=MAG,18:20 --column=RA,DEC \
- | astscript-make-ds9-reg --column=1,2 --radius=0.5 \
- --command="ds9 image.fits"
+ asttable table.fits --range=MAGNITUDE,18:20 --column=RA,DEC \
+ | astscript-ds9-region --column=1,2 --radius=0.5 \
+ --command="ds9 image.fits"
+ - astscript-radial-profile: Measure the radial profile of an object on
+ an image. The profile can be centered anywhere in the image, and the
+ radial distance can be measured on any circle or ellipse. The output is
+ a table with the profile's value in one column and any requested
+ measurement in the other columns (any MakeCatalog measurement is
+ possible).
+
+ All programs:
+ --wcslinearmatrix: new option in all programs that lets you select the
+ output WCS linear matrix format. It takes one of two values: 'pc' (for
+ the 'PCi_j' formalism) and 'cd' (for 'CDi_j'). Until now, the outputs
+ were always stored in the 'PCi_j' formalism (which is still the
+ recommended format).
+
+ Book:
+ - New "Image surface brightness limit" section added to the third
+ tutorial (on "Detecting large extended targets"). It describes the
+ different ways to measure a dataset's surface brightness limit and
+ upper-limit surface brightness, while discussing their differences.
Arithmetic:
- - New operators (the trigonometric/hyperbolic functions were previously
- only avaialble in Table's column arithmetic, but they have been moved
- into the Gnuastro library and are thus now available on images within
- Arithmetic also):
+ - New operators (all also available in Table's column arithmetic):
- sin: Trigonometric sine (input in degrees).
- cos: Trigonometric cosine (input in degrees).
- tan: Trigonometric tangent (input in degrees).
@@ -33,22 +48,65 @@ See the end of the file for license conditions.
- tanh: Hyperbolic tangent.
- asinh: Inverse of hyperbolic sine.
- acosh: Inverse of hyperbolic cosine.
- - atabh: Inverse of hyperbolic tangent.
+ - atanh: Inverse of hyperbolic tangent.
+ - counts-to-mag: Convert counts to magnitudes with given zero point.
+ - counts-to-jy: Convert counts to Janskys through a zero point based
+ on AB magnitudes.
+
+ ConvertType:
+ --globalhdu: Use a single HDU identifier for all the input
+ files. Its operation is identical to the similarly named option in
+ Arithmetic. Until now it was necessary to call '--hdu' three times if
+ you had three input FITS files with input in the same HDU.
+
+ Fits:
+ --wcscoordsys: convert the WCS coordinate system of the input into any
+ recognized coordinate system. It currently supports: equatorial
+ (J2000, B1950), ecliptic (J2000, B1950), Galactic and
+ Supergalactic. For example if 'image.fits' is in galactic coordinates,
+ you can use this command to convert its WCS to equatorial (J2000):
+ astfits image.fits --wcscoordsys=eq-j2000
+ This option only works with WCSLIB 7.5 and above (released in March
+ 2021), otherwise it will abort with an informative warning.
+
+ MakeCatalog:
+ - Newly added measurement columns:
+ --upperlimitsb: upper-limit surface brightness for the given label
+ (object or clump). This is useful for measuring a dataset's
+ realistic surface brightness level for each labeled region by random
+ positioning of its footprint over undetected regions (not
+ extrapolated from the single-pixel noise level like the "surface
+ brightness limit").
+
+ NoiseChisel:
+ - Can now work on 3D datacubes. Since the configuration parameters are
+ different from images, it is recommended to manually set the 3D
+ configuration (the '...' can be the input image and options):
+ astnoisechisel --config=/usr/local/etc/astnoisechisel-3d.conf ...
+ Alternatively, you can set an 'astnoisechisel-3d' alias like below and
+ always easily run 'astnoisechisel-3d' on cubes.
+ alias astnoisechisel-3d="astnoisechisel
+ --config=/usr/local/etc/astnoisechisel-3d.conf"
+
+ Segment:
+ - Can now work on 3D datacubes. Similar to NoiseChisel, it requires a
+ separate set of default configurations, so please see the note under
+ NoiseChisel above.
Table:
- When given a value of '_all', the '--noblank' option (that will remove
all rows with a blank value in the given columns) will check all
columns of the final output table. This is handy when you want a
"clean" (no NaN values in any column) table, but the table has many
- columns.
+ columns. Until now, '--noblank' needed the name/number of each column
+ to "clean".
--rowlimit: new option to specify the positional interval of rows to
- show. Until now the '--head' or '--tail' options would just allow
- seeing the first or last few rows. You can use this to view a
- contiguous set of rows in the middle of the table.
+ show. Until now, the '--head' or '--tail' options would just allow
+ seeing the first or last few rows. You can use this new option to view
+ a contiguous set of rows in the middle of the table.
--rowrandom: Make a random selection of the rows. This option is useful
- when you have a large dataset and just want to see a random sub-set of
+ when you have a large table and just want to see a random sub-set of
the rows. It takes an integer, selects that many rows from the input
- randomly.
+ randomly and returns them.
- New column arithmetic operators:
- 'set-AAA' operator (which allows storing the popped operand into a
named variable for easy usage in complex operations) is also usable
@@ -57,17 +115,28 @@ See the end of the file for license conditions.
- 'date-to-sec' Convert FITS date format ('YYYY-MM-DDThh:mm:ss') into
seconds from the Unix epoch (1970-01-01,00:00:00 UTC). This can be
very useful in combination with the new '--keyvalue' option of the
- Fits program to sort your FITS images based on observation time.
+ Fits program to operate on FITS dates (for example sort your FITS
+ images based on observation time).
Fits:
--keyvalue: Print only the values of the FITS keywords given to this
- option in separate columns. This option can take multiple values and
- many FITS files. Thus generating a table of keyword values (with one
- row per file). Its output can thus either be piped to the Table
- program for selecting a certain sub-set of your FITS files, or sorting
- them for example.
+ option in separate columns. This option can take multiple keyword
+ names and many FITS files. Thus generating a table of keyword values
+ (with one row per file where the first column is the file name). Its
+ output can thus be written as a Table file or be piped to the Table
+ program for selecting a certain sub-set of your FITS files based on
+ key values, or sorting them for example.
+
+ Query:
+ - The Galactic extinction calculator of the NASA/IPAC Extragalactic
+ Database (NED) is now available for any coordinate with a command like
+ below. For more, see the manual (the description of the 'extinction'
+ dataset of NED in the "Available datasets" section).
+ astquery ned --dataset=extinction --center=49.9507,41.5116
Library:
+ - gal_units_counts_to_mag: Convert counts to magnitudes.
+ - gal_units_counts_to_jy: Convert counts to Janskys.
- New arithmetic operator macros (for the 'gal_arithmetic' function):
- GAL_ARITHMETIC_OP_SIN: sine (input in deg).
- GAL_ARITHMETIC_OP_COS: cosine (input in deg).
@@ -82,6 +151,17 @@ See the end of the file for license conditions.
- GAL_ARITHMETIC_OP_ASINH: Inverse hyperbolic sine.
- GAL_ARITHMETIC_OP_ACOSH: Inverse hyperbolic cosine.
- GAL_ARITHMETIC_OP_ATANH: Inverse hyperbolic tangent.
+ - GAL_ARITHMETIC_OP_COUNTS_TO_JY: Convert counts to Janskys.
+ - WCS coordinate system identifiers:
+ - GAL_WCS_COORDSYS_EQB1950: 1950.0 (Besselian-year) equatorial coords.
+ - GAL_WCS_COORDSYS_EQJ2000: 2000.0 (Julian-year) equatorial coords.
+ - GAL_WCS_COORDSYS_ECB1950: 1950.0 (Besselian-year) ecliptic coords.
+ - GAL_WCS_COORDSYS_ECJ2000: 2000.0 (Julian-year) ecliptic coords.
+ - GAL_WCS_COORDSYS_GALACTIC: Galactic coordinates.
+ - GAL_WCS_COORDSYS_SUPERGALACTIC: Supergalactic coordinates.
+ - gal_wcs_coordsys_from_string: WCS coordinate system from string.
+ - gal_wcs_coordsys_identify: Parse WCS struct to find coordinate system.
+ - gal_wcs_coordsys_convert: Convert the coordinate system of the WCS.
** Removed features
@@ -96,15 +176,31 @@ See the end of the file for license conditions.
9:00a.m. But in some cases, calibration images may be taken after
that. So to be safer in general it was incremented by 2 hours.
+ MakeCatalog:
+ - Surface brightness limit (SBL) calculations are now written as
+ standard FITS keywords in the output catalog/table. Until now, they
+ were simply stored as 'COMMENT' keywords with no name so it was hard
+ to parse them automatically. From this version, the following keywords
+ are also written into the output table(s), see the "MakeCatalog
+ output" section of the book for more: 'SBLSTD', 'SBLNSIG', 'SBLMAGPX',
+ 'SBLAREA', 'SBLMAG'.
+ - Upper-limit (UP) settings are also written into the output tables as
+ keywords (like surface brightness limit numbers above): 'UPNSIGMA',
+ 'UPNUMBER', 'UPRNGNAM', 'UPRNGSEE', 'UPSCMLTP', 'UPSCTOL'.
+
Library:
- gal_fits_key_write_wcsstr: also takes WCS structure as argument.
- gal_fits_key_read_from_ptr: providing a numerical datatype for the
desired keyword's value is no longer mandatory. When not given, the
smallest numeric datatype that can keep the value will be found and
used.
+ - gal_wcs_read: allows specifying the linear matrix of the WCS.
+ - gal_wcs_read_fitsptr: allows specifying the linear matrix of the WCS.
** Bugs fixed
bug #60082: Arithmetic library crash for integer operators like modulo
+ bug #60121: Arithmetic segfault when multi-operand output given to set-
+ bug #60368: CosmicCalculator fails --setdirconf when redshift isn't given
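The new 'counts-to-mag' and 'counts-to-jy' Arithmetic operators and the new
'date-to-sec' column operator described in the NEWS entries above are simple
unit conversions. The following is a rough Python sketch of the underlying
relations (the standard AB-magnitude definition and the Unix epoch), not
Gnuastro's actual implementation; the function names here are hypothetical:

```python
import math
from datetime import datetime, timezone

def counts_to_mag(counts, zeropoint):
    """Convert counts to a magnitude with the given zero point:
    m = -2.5*log10(counts) + zp."""
    return -2.5 * math.log10(counts) + zeropoint

def counts_to_jy(counts, zeropoint_ab):
    """Convert counts to Janskys through an AB-magnitude zero point.
    AB definition: m = -2.5*log10(f / 3631 Jy), so a zero point of 8.9
    corresponds to 1 Jy per count."""
    return 3631.0 * counts * 10 ** (-zeropoint_ab / 2.5)

def date_to_sec(fits_date):
    """Convert a FITS date string ('YYYY-MM-DDThh:mm:ss', assumed UTC)
    into seconds from the Unix epoch (1970-01-01,00:00:00 UTC)."""
    dt = datetime.strptime(fits_date, "%Y-%m-%dT%H:%M:%S")
    return int(dt.replace(tzinfo=timezone.utc).timestamp())
```

For example, 100 counts with a zero point of 22.5 give a magnitude of 17.5,
and a zero point of 8.9 maps one count to roughly one Jansky.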
diff --git a/README b/README
index 03b6b93..2b368be 100644
--- a/README
+++ b/README
@@ -109,10 +109,15 @@ running a program in a special way), Gnuastro also installs Bash scripts
(all prefixed with 'astscript-'). They can be run like a program and behave
very similarly (with minor differences, as explained in the book).
- - astscript-make-ds9-reg: Given a table (either as a file or from
+ - astscript-ds9-region: Given a table (either as a file or from
standard input), create an SAO DS9 region file from the requested
positional columns (WCS or image coordinates).
+ - astscript-radial-profile: Calculate the radial profile of an object
+ within an image. The object can be at any location in the image, the
+ profile can use various measures (median, sigma-clipped mean, etc.),
+ and the radial distance can be measured on any general ellipse.
+
- astscript-sort-by-night: Given a list of FITS files, and a HDU and
keyword name for a date, this script separates the files in the same
night (possibly over two calendar days).
diff --git a/THANKS b/THANKS
index 6d36246..3ace152 100644
--- a/THANKS
+++ b/THANKS
@@ -41,6 +41,7 @@ support in Gnuastro. The list is ordered alphabetically (by family name).
Pierre-Alain Duc pierre-alain.duc@astro.unistra.fr
Elham Eftekhari elhamea@iac.es
Paul Eggert eggert@cs.ucla.edu
+ Sepideh Eskandarlou sepideh.eskandarlou@gmail.com
Gaspar Galaz ggalaz@astro.puc.cl
Andrés García-Serra Romero alu0101451923@ull.edu.es
Thérèse Godefroy godef.th@free.fr
@@ -64,6 +65,8 @@ support in Gnuastro. The list is ordered alphabetically (by family name).
Sebastián Luna Valero sluna@iaa.es
Alberto Madrigal brt.madrigal@gmail.com
Guillaume Mahler guillaume.mahler@univ-lyon1.fr
+ Joseph Mazzarella mazz@ipac.caltech.edu
+ Juan Miro miro.juan@gmail.com
Alireza Molaeinezhad amolaei@gmail.com
Javier Moldon jmoldon@iaa.es
Juan Molina Tobar juan.a.molina.t@gmail.com
diff --git a/bin/arithmetic/arithmetic.c b/bin/arithmetic/arithmetic.c
index 3975a7a..963633f 100644
--- a/bin/arithmetic/arithmetic.c
+++ b/bin/arithmetic/arithmetic.c
@@ -63,8 +63,10 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
/***************************************************************/
/************* Internal functions *************/
/***************************************************************/
-#define SET_NUM_OP(CTYPE) { \
- CTYPE a=*(CTYPE *)(numpop->array); if(a>0) return a; }
+#define SET_NUM_OP(CTYPE) { \
+ CTYPE a=*(CTYPE *)(numpop->array); \
+ gal_data_free(numpop); \
+ if(a>0) return a; }
static size_t
pop_number_of_operands(struct arithmeticparams *p, int op, char *token_string,
diff --git a/bin/arithmetic/operands.c b/bin/arithmetic/operands.c
index 1c06500..cb10d3c 100644
--- a/bin/arithmetic/operands.c
+++ b/bin/arithmetic/operands.c
@@ -135,8 +135,9 @@ operands_add(struct arithmeticparams *p, char *filename, gal_data_t *data)
: NULL);
/* Read the WCS. */
- p->refdata.wcs=gal_wcs_read(filename, newnode->hdu, 0, 0,
- &p->refdata.nwcs);
+ p->refdata.wcs=gal_wcs_read(filename, newnode->hdu,
+ p->cp.wcslinearmatrix,
+ 0, 0, &p->refdata.nwcs);
/* Remove extra (length of 1) dimensions (if we had an
image HDU). */
diff --git a/bin/arithmetic/ui.c b/bin/arithmetic/ui.c
index 93dcbba..5f13635 100644
--- a/bin/arithmetic/ui.c
+++ b/bin/arithmetic/ui.c
@@ -377,7 +377,8 @@ ui_preparations(struct arithmeticparams *p)
dsize=gal_fits_img_info_dim(p->wcsfile, p->wcshdu, &ndim);
/* Read the WCS. */
- p->refdata.wcs=gal_wcs_read(p->wcsfile, p->wcshdu, 0, 0,
+ p->refdata.wcs=gal_wcs_read(p->wcsfile, p->wcshdu,
+ p->cp.wcslinearmatrix, 0, 0,
&p->refdata.nwcs);
if(p->refdata.wcs)
{
diff --git a/bin/convertt/args.h b/bin/convertt/args.h
index 5f66e04..bea65cc 100644
--- a/bin/convertt/args.h
+++ b/bin/convertt/args.h
@@ -30,6 +30,25 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
/* Array of acceptable options. */
struct argp_option program_options[] =
{
+ /* Input */
+ {
+ "globalhdu",
+ UI_KEY_GLOBALHDU,
+ "STR/INT",
+ 0,
+ "Use this HDU for all inputs, ignore '--hdu'.",
+ GAL_OPTIONS_GROUP_INPUT,
+ &p->globalhdu,
+ GAL_TYPE_STRING,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET
+ },
+
+
+
+
+
/* Output */
{
"quality",
diff --git a/bin/convertt/main.h b/bin/convertt/main.h
index 0d5cda6..bf1bb9b 100644
--- a/bin/convertt/main.h
+++ b/bin/convertt/main.h
@@ -83,6 +83,7 @@ struct converttparams
struct gal_options_common_params cp; /* Common parameters. */
gal_list_str_t *inputnames; /* The names of input files. */
gal_list_str_t *hdus; /* The names of input hdus. */
+ char *globalhdu; /* Global HDU (for all inputs). */
uint8_t quality; /* Quality of JPEG image. */
float widthincm; /* Width in centimeters. */
uint32_t borderwidth; /* Width of border in PostScript points. */
diff --git a/bin/convertt/ui.c b/bin/convertt/ui.c
index d6e306b..219e39e 100644
--- a/bin/convertt/ui.c
+++ b/bin/convertt/ui.c
@@ -518,17 +518,27 @@ ui_make_channels_ll(struct converttparams *p)
if( gal_fits_name_is_fits(name->v) )
{
/* Get the HDU value for this channel. */
- if(p->hdus)
- hdu=gal_list_str_pop(&p->hdus);
+ if(p->globalhdu)
+ hdu=p->globalhdu;
else
- error(EXIT_FAILURE, 0, "not enough HDUs. Every input FITS image "
- "needs a HDU, you can use the '--hdu' ('-h') option once "
- "for each input FITS image (in the same order)");
+ {
+ if(p->hdus)
+ hdu=gal_list_str_pop(&p->hdus);
+ else
+ error(EXIT_FAILURE, 0, "not enough HDUs. Every input FITS "
+ "image needs a HDU (identified by name or number, "
+ "counting from zero). You can use multiple calls to "
+ "the '--hdu' ('-h') option for each input FITS image "
+ "(in the same order as the input FITS files), or use "
+ "'--globalhdu' ('-g') once when the same HDU should "
+ "be used for all of them");
+ }
/* Read in the array and its WCS information. */
data=gal_fits_img_read(name->v, hdu, p->cp.minmapsize,
p->cp.quietmmap);
- data->wcs=gal_wcs_read(name->v, hdu, 0, 0, &data->nwcs);
+ data->wcs=gal_wcs_read(name->v, hdu, p->cp.wcslinearmatrix,
+ 0, 0, &data->nwcs);
data->ndim=gal_dimension_remove_extra(data->ndim, data->dsize,
data->wcs);
gal_list_data_add(&p->chll, data);
@@ -617,6 +627,97 @@ ui_make_channels_ll(struct converttparams *p)
+static void
+ui_prepare_input_channels_check_wcs(struct converttparams *p)
+{
+ int printwarning=0;
+ float wcsmatch=1.0;
+ gal_data_t *tmp, *coords=NULL;
+ size_t one=1, numwcs=0, numnonblank=0;
+ double *c1, *c2, r1=NAN, r2=NAN, *pixscale=NULL;
+
+ /* If all the inputs have WCS, check to see if the inputs are aligned and
+ print a warning if they aren't. */
+ for(tmp=p->chll; tmp!=NULL; tmp=tmp->next)
+ {
+ if(tmp->wcs && tmp->type!=GAL_TYPE_INVALID) ++numwcs;
+ if(tmp->type!=GAL_TYPE_INVALID) ++numnonblank;
+ }
+ if(numwcs==numnonblank)
+ {
+ /* Allocate the coordinate columns. */
+ gal_list_data_add_alloc(&coords, NULL, GAL_TYPE_FLOAT64, 1,
+ &one, NULL, 0, -1, 1, NULL, NULL, NULL);
+ gal_list_data_add_alloc(&coords, NULL, GAL_TYPE_FLOAT64, 1,
+ &one, NULL, 0, -1, 1, NULL, NULL, NULL);
+
+ /* Go over each image and put its central pixel in the coordinates
+ and do the world-coordinate transformation. Recall that the C
+ coordinates are the inverse order of FITS coordinates and that
+ FITS coordinates count from 1 (not 0).*/
+ for(tmp=p->chll; tmp!=NULL; tmp=tmp->next)
+ if(tmp->wcs)
+ {
+ /* Fill the coordinate values. */
+ c1=coords->array;
+ c2=coords->next->array;
+ c1[0] = tmp->dsize[1] / 2 + 1;
+ c2[0] = tmp->dsize[0] / 2 + 1;
+
+ /* Get the RA/Dec. */
+ gal_wcs_img_to_world(coords, tmp->wcs, 1);
+
+ /* If the pixel scale hasn't been calculated yet, do it (we
+ only need it once, should be similar in all). */
+ if(pixscale==NULL)
+ pixscale=gal_wcs_pixel_scale(tmp->wcs);
+
+ /* If the reference/first center is not yet defined then write
+ the conversions in it. If it is defined, compare with it
+ with the new dataset and print a warning if necessary. */
+ if( isnan(r1) )
+ { r1=c1[0]; r2=c2[0]; }
+ else
+ {
+ /* For a check.
+ printf("check: %g, %g\n", fabs(c1[0]-r1)/pixscale[0],
+ fabs(c2[0]-r2)/pixscale[1]);
+ */
+
+ /* See if a warning should be printed. */
+ if( fabs(c1[0]-r1)/pixscale[0] > wcsmatch
+ || fabs(c2[0]-r2)/pixscale[1] > wcsmatch )
+ printwarning=1;
+ }
+ }
+ }
+
+ /* Print the warning message if necessary. */
+ if(printwarning && p->cp.quiet==0)
+ {
+ error(EXIT_SUCCESS, 0, "WARNING: The WCS information of the input "
+         "FITS images doesn't match (by more than %g pixels in the "
+ "center), even though the input images have the same number "
+ "of pixels in each dimension. Therefore the color channels "
+ "of the output colored image may not be aligned. If this is "
+ "not a problem, you can suppress this warning with the "
+ "'--quiet' option.\n\n"
+ "A solution to align your images is provided in the "
+ "\"Aligning images with small WCS offsets\" section of "
+ "Gnuastro's manual. Please run the command below to see "
+ "it (you can return to the command-line by pressing 'q'):\n\n"
+ " info gnuastro \"Aligning images\"\n",
+ wcsmatch);
+ }
+
+ /* Clean up. */
+ free(pixscale);
+}
+
+
+
+
+
/* Read the input(s)/channels. */
static void
ui_prepare_input_channels(struct converttparams *p)
@@ -692,6 +793,8 @@ ui_prepare_input_channels(struct converttparams *p)
wcs=tmp->wcs;
}
+ /* Make sure the images are all aligned to the same grid. */
+ ui_prepare_input_channels_check_wcs(p);
/* If ndim is still NULL, then there were no non-blank inputs, so print
an error. */
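The new ui_prepare_input_channels_check_wcs function above compares each
channel's central world coordinate against the first channel, in units of
the pixel scale, and only warns when the offset exceeds one pixel. A minimal
Python sketch of that comparison logic (a hypothetical helper, not the C
implementation):

```python
def channels_aligned(centers, pixel_scale, tol=1.0):
    """Return True when every (ra, dec) image center agrees with the
    first one to within 'tol' pixels on each axis, given the pixel
    scale (degrees/pixel) of each axis."""
    ref_ra, ref_dec = centers[0]
    for ra, dec in centers[1:]:
        # Offset in world coordinates, converted to pixels via the scale.
        if (abs(ra - ref_ra) / pixel_scale[0] > tol
                or abs(dec - ref_dec) / pixel_scale[1] > tol):
            return False
    return True
```

With a pixel scale of 0.0001 deg/pix, a 0.0002 deg offset in RA is two
pixels and would trigger the warning, while identical centers would not.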
diff --git a/bin/convertt/ui.h b/bin/convertt/ui.h
index f1deb68..b23e8fb 100644
--- a/bin/convertt/ui.h
+++ b/bin/convertt/ui.h
@@ -42,12 +42,13 @@ enum program_args_groups
/* Available letters for short options:
- a d e f g j k l n p r s t v y z
+ a d e f j k l n p r s t v y z
E G J O Q R W X Y
*/
enum option_keys_enum
{
/* With short-option version. */
+ UI_KEY_GLOBALHDU = 'g',
UI_KEY_QUALITY = 'u',
UI_KEY_WIDTHINCM = 'w',
UI_KEY_BORDERWIDTH = 'b',
diff --git a/bin/convolve/ui.c b/bin/convolve/ui.c
index e466975..2c2582f 100644
--- a/bin/convolve/ui.c
+++ b/bin/convolve/ui.c
@@ -438,7 +438,8 @@ ui_read_input(struct convolveparams *p)
INPUT_USE_TYPE,
p->cp.minmapsize,
p->cp.quietmmap);
- p->input->wcs=gal_wcs_read(p->filename, p->cp.hdu, 0, 0,
+ p->input->wcs=gal_wcs_read(p->filename, p->cp.hdu,
+ p->cp.wcslinearmatrix, 0, 0,
&p->input->nwcs);
p->input->ndim=gal_dimension_remove_extra(p->input->ndim,
p->input->dsize,
diff --git a/bin/cosmiccal/ui.c b/bin/cosmiccal/ui.c
index 5d516ec..641dc98 100644
--- a/bin/cosmiccal/ui.c
+++ b/bin/cosmiccal/ui.c
@@ -398,18 +398,6 @@ ui_read_check_only_options(struct cosmiccalparams *p)
error(EXIT_FAILURE, 0, "'--listlines' and '--listlinesatz' can't be "
"called together");
- /* Make sure that atleast one of '--redshift', '--obsline', or
- '--velocity' are given (different ways to set/estimate the
- redshift). However, when '--listlines' and/or '--printparams' are
- called (i.e., when they have a non-zero value) we don't need a
- redshift all and the program can run without any of the three options
- above. */
- if(isnan(p->redshift) && p->obsline==NULL && isnan(p->velocity)
- && p->listlines==0 && p->cp.printparams==0)
- error(EXIT_FAILURE, 0, "no redshift/velocity specified! Please use "
- "'--redshift', '--velocity' (in km/s), or '--obsline' to specify "
- "a redshift, run with '--help' for more");
-
/* Make sure that '--redshift' and '--obsline' aren't called together. */
if( (hasredshift + hasvelocity + hasobsline) > 1 )
error(EXIT_FAILURE, 0, "only one of '--redshift', '--velocity', or "
@@ -477,6 +465,17 @@ ui_preparations(struct cosmiccalparams *p)
{
double *obsline = p->obsline ? p->obsline->array : NULL;
+ /* Make sure that at least one of '--redshift', '--obsline', or
+ '--velocity' are given (different ways to set/estimate the
+ redshift). However, when '--listlines' is called we don't need a
+ redshift and the program can run without any of the three options
+ above. */
+ if(isnan(p->redshift) && p->obsline==NULL && isnan(p->velocity)
+ && p->listlines==0 )
+ error(EXIT_FAILURE, 0, "no redshift/velocity specified! Please use "
+ "'--redshift', '--velocity' (in km/s), or '--obsline' to specify "
+ "a redshift, run with '--help' for more");
+
/* If '--listlines' is given, print them and abort the program
successfully, don't continue with the preparations. Note that
'--listlines' is the rest-frame lines. So we don't need any
diff --git a/bin/crop/ui.c b/bin/crop/ui.c
index a1bc980..8776f15 100644
--- a/bin/crop/ui.c
+++ b/bin/crop/ui.c
@@ -855,8 +855,9 @@ ui_preparations_to_img_mode(struct cropparams *p)
size_t i;
int nwcs;
double *darr, *pixscale;
- struct wcsprm *wcs=gal_wcs_read(p->inputs->v, p->cp.hdu, p->hstartwcs,
- p->hendwcs, &nwcs);
+ struct wcsprm *wcs=gal_wcs_read(p->inputs->v, p->cp.hdu,
+ p->cp.wcslinearmatrix,
+ p->hstartwcs, p->hendwcs, &nwcs);
/* Make sure a WCS actually exists. */
if(wcs==NULL)
@@ -1018,8 +1019,8 @@ ui_preparations(struct cropparams *p)
tmpfits=gal_fits_hdu_open_format(img->name, p->cp.hdu, 0);
gal_fits_img_info(tmpfits, &p->type, &img->ndim, &img->dsize,
NULL, NULL);
- img->wcs=gal_wcs_read_fitsptr(tmpfits, p->hstartwcs, p->hendwcs,
- &img->nwcs);
+ img->wcs=gal_wcs_read_fitsptr(tmpfits, p->cp.wcslinearmatrix,
+ p->hstartwcs, p->hendwcs, &img->nwcs);
if(img->wcs)
{
gal_wcs_decompose_pc_cdelt(img->wcs);
diff --git a/bin/fits/args.h b/bin/fits/args.h
index af1d90c..ffe56a9 100644
--- a/bin/fits/args.h
+++ b/bin/fits/args.h
@@ -341,6 +341,19 @@ struct argp_option program_options[] =
GAL_OPTIONS_NOT_MANDATORY,
GAL_OPTIONS_NOT_SET
},
+ {
+ "wcscoordsys",
+ UI_KEY_WCSCOORDSYS,
+ "STR",
+ 0,
+ "Convert WCS coordinate system.",
+ UI_GROUP_KEYWORD,
+ &p->wcscoordsys,
+ GAL_TYPE_STRING,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET
+ },
diff --git a/bin/fits/fits.c b/bin/fits/fits.c
index 19a7a1b..718d50c 100644
--- a/bin/fits/fits.c
+++ b/bin/fits/fits.c
@@ -323,7 +323,8 @@ fits_pixelscale(struct fitsparams *p)
double multip, *pixelscale;
/* Read the desired WCS. */
- wcs=gal_wcs_read(p->input->v, p->cp.hdu, 0, 0, &nwcs);
+ wcs=gal_wcs_read(p->input->v, p->cp.hdu, p->cp.wcslinearmatrix,
+ 0, 0, &nwcs);
/* If a WCS doesn't exist, let the user know and return. */
if(wcs)
@@ -474,7 +475,8 @@ fits_skycoverage(struct fitsparams *p)
}
/* For the range type of coverage. */
- wcs=gal_wcs_read(p->input->v, p->cp.hdu, 0, 0, &nwcs);
+ wcs=gal_wcs_read(p->input->v, p->cp.hdu, p->cp.wcslinearmatrix,
+ 0, 0, &nwcs);
printf("\nSky coverage by range along dimensions:\n");
for(i=0;i<ndim;++i)
printf(" %-8s %-15.10g%-15.10g\n", gal_wcs_dimension_name(wcs, i),
diff --git a/bin/fits/keywords.c b/bin/fits/keywords.c
index 9a325f6..4a1d483 100644
--- a/bin/fits/keywords.c
+++ b/bin/fits/keywords.c
@@ -438,7 +438,7 @@ keywords_date_to_seconds(struct fitsparams *p, fitsfile *fptr)
static void
-keywords_distortion_wcs(struct fitsparams *p)
+keywords_wcs_convert(struct fitsparams *p)
{
int nwcs;
size_t ndim, *insize;
@@ -463,16 +463,17 @@ keywords_distortion_wcs(struct fitsparams *p)
}
/* Read the input's WCS and make sure one exists. */
- inwcs=gal_wcs_read(p->input->v, p->cp.hdu, 0, 0, &nwcs);
+ inwcs=gal_wcs_read(p->input->v, p->cp.hdu, p->cp.wcslinearmatrix,
+ 0, 0, &nwcs);
if(inwcs==NULL)
error(EXIT_FAILURE, 0, "%s (hdu %s): doesn't have any WCS structure "
- "for converting its distortion",
+ "for converting its coordinate system or distortion",
p->input->v, p->cp.hdu);
/* In case there is no dataset and the conversion is between TPV to SIP,
we need to set a default size and use that for the conversion, but we
also need to warn the user. */
- if(data==NULL)
+ if(p->wcsdistortion && data==NULL)
{
if( !p->cp.quiet
&& gal_wcs_distortion_identify(inwcs)==GAL_WCS_DISTORTION_TPV
@@ -491,14 +492,22 @@ keywords_distortion_wcs(struct fitsparams *p)
else dsize=data->dsize;
/* Do the conversion. */
- outwcs=gal_wcs_distortion_convert(inwcs, p->distortionid, dsize);
+ if(p->wcscoordsys)
+ outwcs=gal_wcs_coordsys_convert(inwcs, p->coordsysid);
+ else if(p->wcsdistortion)
+ outwcs=gal_wcs_distortion_convert(inwcs, p->distortionid, dsize);
+ else
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to fix "
+ "the problem. The requested mode for this function is not "
+ "recognized", __func__, PACKAGE_BUGREPORT);
/* Set the output filename. */
if(p->cp.output)
output=p->cp.output;
else
{
- if( asprintf(&suffix, "-%s.fits", p->wcsdistortion)<0 )
+ if( asprintf(&suffix, "-%s.fits",
+ p->wcsdistortion ? p->wcsdistortion : p->wcscoordsys)<0 )
error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
output=gal_checkset_automatic_output(&p->cp, p->input->v, suffix);
}
@@ -781,10 +790,10 @@ keywords_value(struct fitsparams *p)
size_t i, ii=0, ninput, nkeys;
gal_data_t *out=NULL, *keysll=NULL;
- /* Count how many inputs there are and allocate the first column with the
- name. */
+ /* Count how many inputs there are, and allocate the first column with
+ the name. */
ninput=gal_list_str_number(p->input);
- if(ninput>1)
+ if(ninput>1 || p->cp.quiet==0)
out=gal_data_alloc(NULL, GAL_TYPE_STRING, 1, &ninput, NULL, 0,
p->cp.minmapsize, p->cp.quietmmap, "FILENAME",
"name", "Name of input file.");
@@ -1034,8 +1043,8 @@ keywords(struct fitsparams *p)
}
/* Convert the input's distortion to the desired output distortion. */
- if(p->wcsdistortion)
- keywords_distortion_wcs(p);
+ if(p->wcsdistortion || p->wcscoordsys)
+ keywords_wcs_convert(p);
/* Return. */
return r;
diff --git a/bin/fits/main.h b/bin/fits/main.h
index 9c66221..c0254c2 100644
--- a/bin/fits/main.h
+++ b/bin/fits/main.h
@@ -78,12 +78,14 @@ struct fitsparams
uint8_t *verify; /* Verify the CHECKSUM and DATASUM keys. */
char *copykeys; /* Range of keywords to copy in output. */
char *datetosec; /* Convert FITS date to seconds. */
+ char *wcscoordsys; /* Name of new WCS coordinate system. */
char *wcsdistortion; /* WCS distortion to write in output. */
uint8_t quitonerror; /* Quit if an error occurs. */
uint8_t colinfoinstdout; /* Print column info in output. */
/* Internal: */
int mode; /* Operating on HDUs or keywords. */
+ int coordsysid; /* ID of desired coordinate system.*/
int distortionid; /* ID of desired distortion. */
long copykeysrange[2]; /* Start and end of copy. */
gal_fits_list_key_t *write_keys; /* Keys to write in the header. */
diff --git a/bin/fits/ui.c b/bin/fits/ui.c
index c9cbc63..b8e9436 100644
--- a/bin/fits/ui.c
+++ b/bin/fits/ui.c
@@ -120,6 +120,7 @@ ui_initialize_options(struct fitsparams *p,
case GAL_OPTIONS_KEY_SEARCHIN:
case GAL_OPTIONS_KEY_IGNORECASE:
case GAL_OPTIONS_KEY_TYPE:
+ case GAL_OPTIONS_KEY_WCSLINEARMATRIX:
case GAL_OPTIONS_KEY_DONTDELETE:
case GAL_OPTIONS_KEY_LOG:
case GAL_OPTIONS_KEY_NUMTHREADS:
@@ -324,7 +325,7 @@ ui_read_check_only_options(struct fitsparams *p)
if( p->date || p->comment || p->history || p->asis || p->keyvalue
|| p->delete || p->rename || p->update || p->write || p->verify
|| p->printallkeys || p->copykeys || p->datetosec
- || p->wcsdistortion )
+ || p->wcscoordsys || p->wcsdistortion )
{
/* Check if a HDU is given. */
if(p->cp.hdu==NULL)
@@ -339,15 +340,20 @@ ui_read_check_only_options(struct fitsparams *p)
/* Keyword-related options that must be called alone. */
checkkeys = ( (p->keyvalue!=NULL)
+ (p->datetosec!=NULL)
+ + (p->wcscoordsys!=NULL)
+ (p->wcsdistortion!=NULL) );
if( ( checkkeys
&& ( p->date || p->comment || p->history || p->asis
|| p->delete || p->rename || p->update || p->write
|| p->verify || p->printallkeys || p->copykeys ) )
|| checkkeys>1 )
- error(EXIT_FAILURE, 0, "'--keyvalue', '--datetosec' and "
- "'--wcsdistortion' cannot currently be called with "
- "any other option");
+ error(EXIT_FAILURE, 0, "'--keyvalue', '--datetosec', "
+ "'--wcscoordsys' and '--wcsdistortion' cannot "
+ "currently be called with any other option");
+
+ /* Give an ID to recognized coordinate systems. */
+ if(p->wcscoordsys)
+ p->coordsysid=gal_wcs_coordsys_from_string(p->wcscoordsys);
/* Identify the requested distortion. Note that this also acts as a
sanity check because it will crash with an error if the given
diff --git a/bin/fits/ui.h b/bin/fits/ui.h
index 4ee5333..61f70af 100644
--- a/bin/fits/ui.h
+++ b/bin/fits/ui.h
@@ -77,6 +77,7 @@ enum option_keys_enum
UI_KEY_SKYCOVERAGE,
UI_KEY_OUTHDU,
UI_KEY_COPYKEYS,
+ UI_KEY_WCSCOORDSYS,
UI_KEY_PRIMARYIMGHDU,
UI_KEY_WCSDISTORTION,
};
diff --git a/bin/gnuastro.conf b/bin/gnuastro.conf
index 6c180c0..a93d7cb 100644
--- a/bin/gnuastro.conf
+++ b/bin/gnuastro.conf
@@ -36,6 +36,7 @@
# Output:
tableformat fits-binary
+ wcslinearmatrix pc
# Operating mode
quietmmap 0
diff --git a/bin/mkcatalog/args.h b/bin/mkcatalog/args.h
index 6582d38..a1c18c6 100644
--- a/bin/mkcatalog/args.h
+++ b/bin/mkcatalog/args.h
@@ -1229,6 +1229,20 @@ struct argp_option program_options[] =
ui_column_codes_ll
},
{
+ "upperlimitsb",
+ UI_KEY_UPPERLIMITSB,
+ 0,
+ 0,
+ "Upper-limit surface brightness (mag/arcsec^2).",
+ UI_GROUP_COLUMNS_BRIGHTNESS,
+ 0,
+ GAL_TYPE_INVALID,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET,
+ ui_column_codes_ll
+ },
+ {
"upperlimitonesigma",
UI_KEY_UPPERLIMITONESIGMA,
0,
diff --git a/bin/mkcatalog/columns.c b/bin/mkcatalog/columns.c
index e94bb79..e7340c7 100644
--- a/bin/mkcatalog/columns.c
+++ b/bin/mkcatalog/columns.c
@@ -31,6 +31,7 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
#include <pthread.h>
#include <gnuastro/wcs.h>
+#include <gnuastro/units.h>
#include <gnuastro/pointer.h>
#include <gnuastro-internal/checkset.h>
@@ -257,6 +258,7 @@ columns_wcs_preparation(struct mkcatalogparams *p)
case UI_KEY_HALFMAXSB:
case UI_KEY_HALFSUMSB:
case UI_KEY_AREAARCSEC2:
+ case UI_KEY_UPPERLIMITSB:
case UI_KEY_SURFACEBRIGHTNESS:
pixscale=gal_wcs_pixel_scale(p->objects->wcs);
p->pixelarcsecsq=pixscale[0]*pixscale[1]*3600.0f*3600.0f;
@@ -1415,6 +1417,22 @@ columns_define_alloc(struct mkcatalogparams *p)
oiflag[ OCOL_UPPERLIMIT_B ] = ciflag[ CCOL_UPPERLIMIT_B ] = 1;
break;
+ case UI_KEY_UPPERLIMITSB:
+ name = "UPPERLIMIT_SB";
+ unit = "mag/arcsec^2";
+ ocomment = "Upper limit surface brightness over its footprint.";
+ ccomment = ocomment;
+ otype = GAL_TYPE_FLOAT32;
+ ctype = GAL_TYPE_FLOAT32;
+ disp_fmt = GAL_TABLE_DISPLAY_FMT_FLOAT;
+ disp_width = 8;
+ disp_precision = 3;
+ p->hasmag = 1;
+ p->upperlimit = 1;
+ oiflag[ OCOL_NUMALL ] = ciflag[ CCOL_NUMALL ] = 1;
+ oiflag[ OCOL_UPPERLIMIT_B ] = ciflag[ CCOL_UPPERLIMIT_B ] = 1;
+ break;
+
case UI_KEY_UPPERLIMITONESIGMA:
name = "UPPERLIMIT_ONE_SIGMA";
unit = MKCATALOG_NO_UNIT;
@@ -1995,7 +2013,7 @@ columns_define_alloc(struct mkcatalogparams *p)
/********** Column calculation ***************/
/******************************************************************/
#define MKC_RATIO(TOP,BOT) ( (BOT)!=0.0f ? (TOP)/(BOT) : NAN )
-#define MKC_MAG(B) ( ((B)>0) ? -2.5f * log10(B) + p->zeropoint : NAN )
+#define MKC_MAG(B) ( gal_units_counts_to_mag(B, p->zeropoint) )
#define MKC_SB(B, A) ( ((B)>0 && (A)>0) \
? MKC_MAG(B) + 2.5f * log10((A) * p->pixelarcsecsq) \
: NAN )
@@ -2544,6 +2562,11 @@ columns_fill(struct mkcatalog_passparams *pp)
((float *)colarr)[oind] = MKC_MAG(oi[ OCOL_UPPERLIMIT_B ]);
break;
+ case UI_KEY_UPPERLIMITSB:
+ ((float *)colarr)[oind] = MKC_SB( oi[ OCOL_UPPERLIMIT_B ],
+ oi[ OCOL_NUMALL ] );
+ break;
+
case UI_KEY_UPPERLIMITONESIGMA:
((float *)colarr)[oind] = oi[ OCOL_UPPERLIMIT_S ];
break;
@@ -2888,6 +2911,11 @@ columns_fill(struct mkcatalog_passparams *pp)
((float *)colarr)[cind] = MKC_MAG(ci[ CCOL_UPPERLIMIT_B ]);
break;
+ case UI_KEY_UPPERLIMITSB:
+ ((float *)colarr)[cind] = MKC_SB( ci[ CCOL_UPPERLIMIT_B ],
+ ci[ CCOL_NUMALL ] );
+ break;
+
case UI_KEY_UPPERLIMITONESIGMA:
((float *)colarr)[cind] = ci[ CCOL_UPPERLIMIT_S ];
break;
diff --git a/bin/mkcatalog/mkcatalog.c b/bin/mkcatalog/mkcatalog.c
index e3e75b7..18f5a2d 100644
--- a/bin/mkcatalog/mkcatalog.c
+++ b/bin/mkcatalog/mkcatalog.c
@@ -35,6 +35,7 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
#include <gnuastro/wcs.h>
#include <gnuastro/data.h>
#include <gnuastro/fits.h>
+#include <gnuastro/units.h>
#include <gnuastro/threads.h>
#include <gnuastro/pointer.h>
#include <gnuastro/dimension.h>
@@ -337,86 +338,92 @@ mkcatalog_wcs_conversion(struct mkcatalogparams *p)
void
-mkcatalog_write_inputs_in_comments(struct mkcatalogparams *p,
- gal_list_str_t **comments, int withsky,
- int withstd)
+mkcatalog_outputs_keys_numeric(gal_fits_list_key_t **keylist, void *number,
+ uint8_t type, char *nameliteral,
+ char *commentliteral, char *unitliteral)
{
- char *tmp, *str;
+ void *value;
+ value=gal_pointer_allocate(type, 1, 0, __func__, "value");
+ memcpy(value, number, gal_type_sizeof(type));
+ gal_fits_key_list_add_end(keylist, type, nameliteral, 0,
+ value, 1, commentliteral, 0,
+ unitliteral, 0);
+}
+
+
- /* Basic classifiers for plain text outputs. */
- if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
- {
- if( asprintf(&str, "--------- Input files ---------")<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
- }
+
+
+void
+mkcatalog_outputs_keys_infiles(struct mkcatalogparams *p,
+ gal_fits_list_key_t **keylist)
+{
+ char *stdname, *stdhdu, *stdvalcom;
+
+ gal_fits_key_list_title_add_end(keylist,
+ "Input files and/or configuration", 0);
/* Object labels. */
- if( asprintf(&str, "Objects: %s (hdu: %s).", p->objectsfile, p->cp.hdu)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
+ gal_fits_key_write_filename("INLAB", p->objectsfile, keylist, 0);
+ gal_fits_key_write_filename("INLABHDU", p->cp.hdu, keylist, 0);
/* Clump labels. */
if(p->clumps)
{
- if(asprintf(&str, "Clumps: %s (hdu: %s).", p->usedclumpsfile,
- p->clumpshdu)<0)
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
+ gal_fits_key_write_filename("INCLU", p->usedclumpsfile, keylist, 0);
+ gal_fits_key_write_filename("INCLUHDU", p->clumpshdu, keylist, 0);
}
- /* Values dataset. */
+ /* Values image. */
if(p->values)
{
- if( asprintf(&str, "Values: %s (hdu: %s).", p->usedvaluesfile,
- p->valueshdu)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
+ gal_fits_key_write_filename("INVAL", p->usedvaluesfile, keylist, 0);
+ gal_fits_key_write_filename("INVALHDU", p->valueshdu, keylist, 0);
}
- /* Sky dataset. */
- if(withsky && p->sky)
+ /* Sky image/value. */
+ if(p->sky)
{
if(p->sky->size==1)
- {
- if( asprintf(&str, "Sky: %g.", *((float *)(p->sky->array)) )<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- }
+ mkcatalog_outputs_keys_numeric(keylist, p->sky->array,
+ p->sky->type, "INSKYVAL",
+ "Value of Sky used (a single number).",
+ NULL);
else
{
- if( asprintf(&str, "Sky: %s (hdu: %s).", p->usedskyfile,
- p->skyhdu)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
+ gal_fits_key_write_filename("INSKY", p->usedskyfile, keylist, 0);
+ gal_fits_key_write_filename("INSKYHDU", p->skyhdu, keylist, 0);
}
- gal_list_str_add(comments, str, 0);
}
- /* Sky standard deviation dataset. */
- tmp = p->variance ? "VAR" : "STD";
- if(withstd && p->std)
+ /* Standard deviation (or variance) image. */
+ if(p->variance)
+ {
+ stdname="INVAR"; stdhdu="INVARHDU";
+ stdvalcom="Value of Sky variance (a single number).";
+ }
+ else
+ {
+ stdname="INSTD"; stdhdu="INSTDHDU";
+ stdvalcom="Value of Sky STD (a single number).";
+ }
+ if(p->std)
{
if(p->std->size==1)
- {
- if( asprintf(&str, "Sky %s: %g.", tmp,
- *((float *)(p->std->array)) )<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- }
+ mkcatalog_outputs_keys_numeric(keylist, p->std->array, p->std->type,
+ stdname, stdvalcom, NULL);
else
{
- if( asprintf(&str, "Sky %s: %s (hdu: %s).", tmp, p->usedstdfile,
- p->stdhdu)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
+ gal_fits_key_write_filename(stdname, p->usedstdfile, keylist, 0);
+ gal_fits_key_write_filename(stdhdu, p->stdhdu, keylist, 0);
}
- gal_list_str_add(comments, str, 0);
}
/* Upper limit mask. */
if(p->upmaskfile)
{
- if( asprintf(&str, "Upperlimit mask: %s (hdu: %s).", p->upmaskfile,
- p->upmaskhdu)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
+ gal_fits_key_write_filename("INUPM", p->upmaskfile, keylist, 0);
+ gal_fits_key_write_filename("INUPMHDU", p->upmaskhdu, keylist, 0);
}
}
@@ -424,166 +431,148 @@ mkcatalog_write_inputs_in_comments(struct mkcatalogparams *p,
-/* Write the similar information. */
-static gal_list_str_t *
-mkcatalog_outputs_same_start(struct mkcatalogparams *p, int o0c1,
- char *ObjClump)
+/* Write the output keywords. */
+static gal_fits_list_key_t *
+mkcatalog_outputs_keys(struct mkcatalogparams *p, int o0c1)
{
- char *str, *tstr;
- double pixarea=NAN;
- gal_list_str_t *comments=NULL;
-
- if( asprintf(&str, "%s catalog of %s", o0c1 ? "Object" : "Clump",
- PROGRAM_STRING)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
-
- /* If in a Git controlled directory and output isn't a FITS file (in
- FITS, this will be automatically included). */
- if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT && gal_git_describe())
- {
- if(asprintf(&str, "Working directory commit %s", gal_git_describe())<0)
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
- }
-
- /* Write the date. However, 'ctime' is going to put a new-line character
- in the end of its string, so we are going to remove it manually. */
- if( asprintf(&str, "%s started on %s", PROGRAM_NAME, ctime(&p->rawtime))<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- str[strlen(str)-1]='\0';
- gal_list_str_add(&comments, str, 0);
+ float pixarea=NAN, fvalue;
+ gal_fits_list_key_t *keylist=NULL;
+ /* First, add the file names. */
+ mkcatalog_outputs_keys_infiles(p, &keylist);
- /* Write the basic information. */
- mkcatalog_write_inputs_in_comments(p, &comments, 1, 1);
+ /* Type of catalog. */
+ gal_fits_key_list_add_end(&keylist, GAL_TYPE_STRING, "CATTYPE", 0,
+ o0c1 ? "clumps" : "objects", 0,
+ "Type of catalog ('objects' or 'clumps').", 0,
+ NULL, 0);
-
- /* Write other supplementary information. */
- if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
- {
- if( asprintf(&str, "--------- Supplementary information ---------")<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
- }
+ /* Add project commit information when in a Git-controlled directory and
+ the output isn't a FITS file (in FITS, this will be automatically
+ included). */
+ if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT && gal_git_describe())
+ gal_fits_key_list_add_end(&keylist, GAL_TYPE_STRING, "COMMIT", 0,
+ gal_git_describe(), 1,
+ "Git commit in running directory.", 0,
+ NULL, 0);
/* Pixel area. */
if(p->objects->wcs)
{
pixarea=gal_wcs_pixel_area_arcsec2(p->objects->wcs);
if( isnan(pixarea)==0 )
- {
- if( asprintf(&str, "Pixel area (arcsec^2): %g", pixarea)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
- }
+ mkcatalog_outputs_keys_numeric(&keylist, &pixarea,
+ GAL_TYPE_FLOAT32, "PIXAREA",
+ "Pixel area of input image.",
+ "arcsec^2");
}
- /* Zeropoint magnitude */
- if(p->hasmag)
- {
- if( asprintf(&str, "Zeropoint magnitude: %.4f", p->zeropoint)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
- }
+ /* Zeropoint magnitude. */
+ if( !isnan(p->zeropoint) )
+ mkcatalog_outputs_keys_numeric(&keylist, &p->zeropoint,
+ GAL_TYPE_FLOAT32, "ZEROPNT",
+ "Zeropoint used for magnitude.",
+ "mag");
- /* Print surface brightness limits. */
+ /* Add the title for the keywords. */
+ gal_fits_key_list_title_add_end(&keylist, "Surface brightness limit (SBL)", 0);
+
+ /* Print surface brightness limit. */
if( !isnan(p->medstd) && !isnan(p->sfmagnsigma) )
{
+ /* Used noise value (per pixel) and multiple of sigma. */
+ mkcatalog_outputs_keys_numeric(&keylist, &p->medstd,
+ GAL_TYPE_FLOAT32, "SBLSTD",
+ "Pixel STD for surface brightness limit.",
+ NULL);
+ mkcatalog_outputs_keys_numeric(&keylist, &p->sfmagnsigma,
+ GAL_TYPE_FLOAT32, "SBLNSIG",
+ "Sigma multiple for surface brightness "
+ "limit.", NULL);
+
/* Only print magnitudes if a zeropoint is given. */
if( !isnan(p->zeropoint) )
{
- /* Per pixel. */
- if( asprintf(&str, "%g sigma surface brightness (magnitude/pixel): "
- "%.3f", p->sfmagnsigma, ( -2.5f
- *log10( p->sfmagnsigma
- * p->medstd )
- + p->zeropoint ) )<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
-
- /* Requested projected area: if a pixel area could be measured (a
- WCS was given), then also estimate the surface brightness over
- one arcsecond^2. From the pixel area, we know how many pixels
- are necessary to fill the requested projected area (in
- arcsecond^2). We also know that as the number of samples
- (pixels) increases (to N), the noise increases by sqrt(N), see
- the full discussion in the book. */
- if(!isnan(pixarea) && !isnan(p->sfmagarea))
+ /* Per pixel, Surface brightness limit magnitude. */
+ fvalue=gal_units_counts_to_mag(p->sfmagnsigma * p->medstd,
+ p->zeropoint);
+ mkcatalog_outputs_keys_numeric(&keylist, &fvalue,
+ GAL_TYPE_FLOAT32, "SBLMAGPX",
+ "Surface brightness limit per pixel.",
+ "mag/pix");
+
+ /* Only print the SBL in fixed area if a WCS is present and a
+ pixel area could be deduced. */
+ if( !isnan(pixarea) )
{
- /* Prepare the comment/information. */
- if(p->sfmagarea==1.0f)
- tstr=NULL;
- else
- if( asprintf(&tstr, "%g-", p->sfmagarea)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- if( asprintf(&str, "%g sigma surface brightness "
- "(magnitude/%sarcsec^2): %.3f", p->sfmagnsigma,
- tstr ? tstr : "",
- ( -2.5f * log10( p->sfmagnsigma
- * p->medstd
- * sqrt( p->sfmagarea / pixarea) )
- + p->zeropoint ) )<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-
- /* Add the final string/line to the catalog comments. */
- gal_list_str_add(&comments, str, 0);
-
- /* Clean up (if necessary). */
- if (tstr)
- {
- free(tstr);
- tstr=NULL;
- }
+ /* Area used for measuring SBL. */
+ mkcatalog_outputs_keys_numeric(&keylist, &p->sfmagarea,
+ GAL_TYPE_FLOAT32, "SBLAREA",
+ "Area for surface brightness "
+ "limit.", "arcsec^2");
+
+ /* Per area, Surface brightness limit magnitude. */
+ fvalue=gal_units_counts_to_mag(p->sfmagnsigma
+ * p->medstd
+ / sqrt( p->sfmagarea
+ * pixarea),
+ p->zeropoint);
+ mkcatalog_outputs_keys_numeric(&keylist, &fvalue,
+ GAL_TYPE_FLOAT32, "SBLMAG",
+ "Surf. bright. limit in SBLAREA.",
+ "mag/arcsec^2");
}
+ else
+ gal_fits_key_list_fullcomment_add_end(&keylist, "Can't "
+ "write surface brightness limiting magnitude (SBLM) "
+ "values in fixed area ('SBLAREA' and 'SBLMAG' "
+ "keywords) because input doesn't have a world "
+ "coordinate system (WCS), or the first two "
+ "coordinates of the WCS weren't angular positions "
+ "in units of degrees.", 0);
}
-
- /* Notice: */
- if( asprintf(&str, "Pixel STD for surface brightness calculation%s: %f",
- (!isnan(pixarea) && !isnan(p->sfmagarea))?"s":"",
- p->medstd)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
+ else
+ gal_fits_key_list_fullcomment_add_end(&keylist, "Can't write "
+ "surface brightness limiting magnitude values (e.g., "
+ "'SBLMAG' or 'SBLMAGPX' keywords) because no "
+ "'--zeropoint' has been given.", 0);
}
else
{
- gal_checkset_allocate_copy("No surface brightness calcuations "
- "because no STD image used.", &str);
- gal_list_str_add(&comments, str, 0);
- gal_checkset_allocate_copy("Ask for column that uses the STD image, "
- "or '--forcereadstd'.", &str);
- gal_list_str_add(&comments, str, 0);
+ gal_fits_key_list_fullcomment_add_end(&keylist, "No surface "
+ "brightness calculations (e.g., 'SBLMAG' or 'SBLMAGPX' "
+ "keywords) because STD image didn't have the 'MEDSTD' "
+ "keyword. There are two solutions: 1) Call with "
+ "'--forcereadstd'. 2) Measure the median noise level "
+ "manually (possibly with Gnuastro's Arithmetic program) "
+ "and put the value in the 'MEDSTD' keyword of the STD "
+ "image.", 0);
+ gal_fits_key_list_fullcomment_add_end(&keylist, "", 0);
}
/* The count-per-second correction. */
if(p->cpscorr>1.0f)
- {
- if( asprintf(&str, "Counts-per-second correction: %.3f", p->cpscorr)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
- }
+ mkcatalog_outputs_keys_numeric(&keylist, &p->cpscorr,
+ GAL_TYPE_FLOAT32, "CPSCORR",
+ "Counts-per-second correction.",
+ NULL);
/* Print upper-limit parameters. */
if(p->upperlimit)
- upperlimit_write_comments(p, &comments, 1);
+ upperlimit_write_keys(p, &keylist, 1);
- /* Start column metadata. */
+ /* In plain-text outputs, put a title for column metadata. */
if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
- {
- if( asprintf(&str, "--------- Table columns ---------")<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, str, 0);
- }
+ gal_fits_key_list_title_add_end(&keylist, "Column metadata", 0);
- /* Return the comments. */
- return comments;
+ /* Return the list of keywords. */
+ return keylist;
}
-
/* Since all the measurements were done in parallel (and we didn't know the
number of clumps per object a-priori), the clumps information is just
written in as they are measured. Here, we'll sort the clump columns by
@@ -646,19 +635,20 @@ mkcatalog_write_outputs(struct mkcatalogparams *p)
{
size_t i, scounter;
char str[200], *fname;
- gal_list_str_t *comments;
+ gal_fits_list_key_t *keylist;
+ gal_list_str_t *comments=NULL;
int outisfits=gal_fits_name_is_fits(p->objectsout);
/* If a catalog is to be generated. */
if(p->objectcols)
{
/* OBJECT catalog */
- comments=mkcatalog_outputs_same_start(p, 0, "Detection");
+ keylist=mkcatalog_outputs_keys(p, 0);
/* Reverse the comments list (so it is printed in the same order
here), write the objects catalog and free the comments. */
gal_list_str_reverse(&comments);
- gal_table_write(p->objectcols, NULL, comments, p->cp.tableformat,
+ gal_table_write(p->objectcols, &keylist, NULL, p->cp.tableformat,
p->objectsout, "OBJECTS", 0);
gal_list_str_free(comments, 1);
@@ -667,7 +657,7 @@ mkcatalog_write_outputs(struct mkcatalogparams *p)
if(p->clumps)
{
/* Make the comments. */
- comments=mkcatalog_outputs_same_start(p, 1, "Clumps");
+ keylist=mkcatalog_outputs_keys(p, 1);
/* Write objects catalog
---------------------
diff --git a/bin/mkcatalog/mkcatalog.h b/bin/mkcatalog/mkcatalog.h
index 9cbdbd8..75e9c7a 100644
--- a/bin/mkcatalog/mkcatalog.h
+++ b/bin/mkcatalog/mkcatalog.h
@@ -45,9 +45,13 @@ struct mkcatalog_passparams
};
void
-mkcatalog_write_inputs_in_comments(struct mkcatalogparams *p,
- gal_list_str_t **comments, int withsky,
- int withstd);
+mkcatalog_outputs_keys_numeric(gal_fits_list_key_t **keylist, void *number,
+ uint8_t type, char *nameliteral,
+ char *commentliteral, char *unitliteral);
+
+void
+mkcatalog_outputs_keys_infiles(struct mkcatalogparams *p,
+ gal_fits_list_key_t **keylist);
void
mkcatalog(struct mkcatalogparams *p);
diff --git a/bin/mkcatalog/ui.c b/bin/mkcatalog/ui.c
index 37bef56..aa88388 100644
--- a/bin/mkcatalog/ui.c
+++ b/bin/mkcatalog/ui.c
@@ -538,7 +538,8 @@ ui_wcs_info(struct mkcatalogparams *p)
size_t i;
/* Read the WCS meta-data. */
- p->objects->wcs=gal_wcs_read(p->objectsfile, p->cp.hdu, 0, 0,
+ p->objects->wcs=gal_wcs_read(p->objectsfile, p->cp.hdu,
+ p->cp.wcslinearmatrix, 0, 0,
&p->objects->nwcs);
/* Read the basic WCS information. */
@@ -1423,10 +1424,20 @@ ui_preparations_read_keywords(struct mkcatalogparams *p)
keys[0].array=&minstd; keys[1].array=&p->medstd;
gal_fits_key_read(p->usedstdfile, p->stdhdu, keys, 0, 0);
- /* If the two keywords couldn't be read. We don't want to slow down
- the user for the median (which needs sorting). So we'll just
- calculate the minimum which is necessary for the 'p->cpscorr'. */
- if(keys[1].status) p->medstd=NAN;
+ /* If the two keywords couldn't be read, we don't want to slow
+ down the user for the median (which needs sorting). So we'll
+ just calculate it if '--forcereadstd' is called. However, we
+ need the minimum for 'p->cpscorr'. */
+ if(keys[1].status)
+ {
+ if(p->forcereadstd)
+ {
+ tmp=gal_statistics_median(p->std, 0);
+ p->medstd=*((float *)(tmp->array));
+ }
+ else
+ p->medstd=NAN;
+ }
if(keys[0].status)
{
/* Calculate the minimum STD. */
diff --git a/bin/mkcatalog/ui.h b/bin/mkcatalog/ui.h
index 572720e..3d390da 100644
--- a/bin/mkcatalog/ui.h
+++ b/bin/mkcatalog/ui.h
@@ -158,6 +158,7 @@ enum option_keys_enum
UI_KEY_MAXIMUM,
UI_KEY_CLUMPSMAGNITUDE,
UI_KEY_UPPERLIMIT,
+ UI_KEY_UPPERLIMITSB,
UI_KEY_UPPERLIMITONESIGMA,
UI_KEY_UPPERLIMITSIGMA,
UI_KEY_UPPERLIMITQUANTILE,
diff --git a/bin/mkcatalog/upperlimit.c b/bin/mkcatalog/upperlimit.c
index b1b27bc..e108afe 100644
--- a/bin/mkcatalog/upperlimit.c
+++ b/bin/mkcatalog/upperlimit.c
@@ -35,6 +35,8 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
#include <gnuastro/dimension.h>
#include <gnuastro/statistics.h>
+#include <gnuastro-internal/checkset.h>
+
#include "main.h"
#include "ui.h"
@@ -277,82 +279,64 @@ upperlimit_random_position(struct mkcatalog_passparams *pp, gal_data_t *tile,
used/necessary, so to avoid confusion, we won't write it.
*/
void
-upperlimit_write_comments(struct mkcatalogparams *p,
- gal_list_str_t **comments, int withsigclip)
+upperlimit_write_keys(struct mkcatalogparams *p,
+ gal_fits_list_key_t **keylist, int withsigclip)
{
- char *str;
-
- if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
- {
- if(asprintf(&str, "--------- Upper-limit measurement ---------")<0)
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
- }
-
- if( asprintf(&str, "Number of usable random samples: %zu", p->upnum)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
-
+ /* Write a title for the upper-limit keywords. */
+ gal_fits_key_list_title_add_end(keylist, "Upper-limit (UP) parameters", 0);
+
+ /* Basic settings. */
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT32, "UPNSIGMA", 0,
+ &p->upnsigma, 0,
+ "Multiple of sigma to measure upper-limit.", 0,
+ NULL, 0);
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPNUMBER", 0,
+ &p->upnum, 0,
+ "Number of usable random samples.", 0,
+ "counter", 0);
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_STRING, "UPRNGNAM", 0,
+ (void *)(p->rng_name), 0,
+ "Random number generator name.", 0, NULL, 0);
+ mkcatalog_outputs_keys_numeric(keylist, &p->rng_seed,
+ GAL_TYPE_ULONG, "UPRNGSEE",
+ "Random number generator seed.", NULL);
+
+ /* Range of upper-limit values. */
if(p->uprange)
{
- switch(p->objects->ndim)
- {
- case 2:
- if( asprintf(&str, "Range of random samples about target: "
- "%zu, %zu", p->uprange[1], p->uprange[0])<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- break;
- case 3:
- if( asprintf(&str, "Range of random samples about target: %zu, "
- "%zu, %zu", p->uprange[2], p->uprange[1],
- p->uprange[0])<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- break;
- default:
- error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
- "address the problem. The value %zu is not recognized for "
- "'p->input->ndim'", __func__, PACKAGE_BUGREPORT,
- p->objects->ndim);
- }
- gal_list_str_add(comments, str, 0);
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPRANGE1", 0,
+ &p->uprange[p->objects->ndim-1], 0,
+ "Range about target in axis 1.", 0,
+ "pixels", 0);
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPRANGE2", 0,
+ &p->uprange[p->objects->ndim==2 ? 0 : 1], 0,
+ "Range about target in axis 2.", 0,
+ "pixels", 0);
+ if(p->objects->ndim==3)
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPRANGE3", 0,
+ &p->uprange[0], 0,
+ "Range about target in axis 3.", 0,
+ "pixels", 0);
}
- if( asprintf(&str, "Random number generator name: %s", p->rng_name)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
-
- if( asprintf(&str, "Random number generator seed: %lu", p->rng_seed)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
-
+ /* If the upper-limit measurement included sigma-clipping. */
if(withsigclip)
{
- if( asprintf(&str, "Multiple of STD used for sigma-clipping: %.3f",
- p->upsigmaclip[0])<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
-
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT64, "UPSCMLTP", 0,
+ &p->upsigmaclip[0], 0,
+ "Multiple of STD used for sigma-clipping.", 0,
+ NULL, 0);
if(p->upsigmaclip[1]>=1.0f)
- {
- if( asprintf(&str, "Number of clips for sigma-clipping: %.0f",
- p->upsigmaclip[1])<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- }
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT64, "UPSCNUM", 0,
+ &p->upsigmaclip[1], 0,
+ "Number of clips for sigma-clipping.", 0,
+ NULL, 0);
else
- {
- if( asprintf(&str, "Tolerance level to sigma-clipping: %.3f",
- p->upsigmaclip[1])<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- }
- gal_list_str_add(comments, str, 0);
+ gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT64, "UPSCTOL", 0,
+ &p->upsigmaclip[1], 0,
+ "Tolerance level to sigma-clipping.", 0,
+ NULL, 0);
- if( p->oiflag[ OCOL_UPPERLIMIT_B ] )
- {
- if( asprintf(&str, "Multiple of sigma-clipped STD for upper-limit: "
- "%.3f", p->upnsigma)<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(comments, str, 0);
- }
}
}
@@ -367,8 +351,7 @@ upperlimit_write_check(struct mkcatalogparams *p, gal_list_sizet_t *check_x,
gal_list_f32_t *check_s)
{
float *sarr;
- char *tmp=NULL, *tmp2=NULL;
- gal_list_str_t *comments=NULL;
+ gal_fits_list_key_t *keylist=NULL;
size_t *xarr, *yarr, *zarr=NULL, tnum, ttnum, num;
gal_data_t *x=NULL, *y=NULL, *z=NULL, *s=NULL; /* To avoid warnings. */
@@ -418,35 +401,30 @@ upperlimit_write_check(struct mkcatalogparams *p, gal_list_sizet_t *check_x,
/* Write exactly what object/clump this table is for. */
+ gal_fits_key_list_title_add_end(&keylist, "Target for upper-limit check", 0);
+ mkcatalog_outputs_keys_numeric(&keylist, &p->checkuplim[0],
+ GAL_TYPE_INT32, "UPCHKOBJ",
+ "Object label for upper-limit check target.",
+ NULL);
if( p->checkuplim[1]!=GAL_BLANK_INT32 )
- if( asprintf(&tmp2, ", Clump %d", p->checkuplim[1]) <0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- if( asprintf(&tmp, "Upperlimit distribution for Object %d%s",
- p->checkuplim[0],
- ( p->checkuplim[1]==GAL_BLANK_INT32
- ? "" : tmp2) ) <0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, tmp, 0);
- if(tmp2) {free(tmp2); tmp2=NULL;}
-
-
- /* Write the basic info, and conclude the comments. */
- mkcatalog_write_inputs_in_comments(p, &comments, 0, 0);
- upperlimit_write_comments(p, &comments, 0);
+ mkcatalog_outputs_keys_numeric(&keylist, &p->checkuplim[1],
+ GAL_TYPE_INT32, "UPCHKCLU",
+ "Clump label for upper-limit check target.",
+ NULL);
+
+
+ /* Write the basic info, and conclude the keywords. */
+ mkcatalog_outputs_keys_infiles(p, &keylist);
+ upperlimit_write_keys(p, &keylist, 0);
if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
- {
- if( asprintf(&tmp, "--------- Table columns ---------")<0 )
- error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
- gal_list_str_add(&comments, tmp, 0);
- }
+ gal_fits_key_list_title_add_end(&keylist, "Column metadata", 0);
/* Define a list from the containers and write them into a table. */
x->next=y;
if(check_z) { y->next=z; z->next=s; }
else { y->next=s; }
- gal_list_str_reverse(&comments);
- gal_table_write(x, NULL, comments, p->cp.tableformat, p->upcheckout,
+ gal_table_write(x, &keylist, NULL, p->cp.tableformat, p->upcheckout,
"UPPERLIMIT_CHECK", 0);
/* Inform the user. */
@@ -488,12 +466,33 @@ upperlimit_measure(struct mkcatalog_passparams *pp, int32_t clumplab,
{
switch(column->status)
{
- /* Columns that depend on the sigma of the distribution. */
- case UI_KEY_UPPERLIMIT:
- case UI_KEY_UPPERLIMITMAG:
- case UI_KEY_UPPERLIMITSIGMA:
- case UI_KEY_UPPERLIMITONESIGMA:
+ /* Quantile column. */
+ case UI_KEY_UPPERLIMITQUANTILE:
+
+ /* Also only necessary once (if requested multiple times). */
+ if(qfunc==NULL)
+ {
+ /* Similar to the case for sigma-clipping, we'll need to
+ keep the size here also. */
+ init_size=pp->up_vals->size;
+ sum=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 1, &one, NULL, 0,
+ -1, 1, NULL, NULL, NULL);
+ ((float *)(sum->array))[0]=o[clumplab?CCOL_SUM:OCOL_SUM];
+ qfunc=gal_statistics_quantile_function(pp->up_vals, sum, 1);
+
+ /* Fill in the column. */
+ col = clumplab ? CCOL_UPPERLIMIT_Q : OCOL_UPPERLIMIT_Q;
+ pp->up_vals->size=pp->up_vals->dsize[0]=init_size;
+ o[col] = ((double *)(qfunc->array))[0];
+
+ /* Clean up. */
+ gal_data_free(sum);
+ gal_data_free(qfunc);
+ }
+ break;
+ /* Columns that depend on the sigma of the distribution. */
+ default:
/* We only need to do this once, but the columns can be
requested in any order. */
if(sigclip==NULL)
@@ -525,31 +524,6 @@ upperlimit_measure(struct mkcatalog_passparams *pp, int32_t clumplab,
gal_data_free(sigclip);
}
break;
-
- /* Quantile column. */
- case UI_KEY_UPPERLIMITQUANTILE:
-
- /* Also only necessary once (if requested multiple times). */
- if(qfunc==NULL)
- {
- /* Similar to the case for sigma-clipping, we'll need to
- keep the size here also. */
- init_size=pp->up_vals->size;
- sum=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 1, &one, NULL, 0,
- -1, 1, NULL, NULL, NULL);
- ((float *)(sum->array))[0]=o[clumplab?CCOL_SUM:OCOL_SUM];
- qfunc=gal_statistics_quantile_function(pp->up_vals, sum, 1);
-
- /* Fill in the column. */
- col = clumplab ? CCOL_UPPERLIMIT_Q : OCOL_UPPERLIMIT_Q;
- pp->up_vals->size=pp->up_vals->dsize[0]=init_size;
- o[col] = ((double *)(qfunc->array))[0];
-
- /* Clean up. */
- gal_data_free(sum);
- gal_data_free(qfunc);
- }
- break;
}
}
}
diff --git a/bin/mkcatalog/upperlimit.h b/bin/mkcatalog/upperlimit.h
index 666c3a0..3ddb5ca 100644
--- a/bin/mkcatalog/upperlimit.h
+++ b/bin/mkcatalog/upperlimit.h
@@ -24,8 +24,8 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
#define UPPERLIMIT_H
void
-upperlimit_write_comments(struct mkcatalogparams *p,
- gal_list_str_t **comments, int withsigclip);
+upperlimit_write_keys(struct mkcatalogparams *p,
+ gal_fits_list_key_t **keylist, int withsigclip);
void
upperlimit_calculate(struct mkcatalog_passparams *pp);
diff --git a/bin/mknoise/mknoise.c b/bin/mknoise/mknoise.c
index 6731ed7..5320d5c 100644
--- a/bin/mknoise/mknoise.c
+++ b/bin/mknoise/mknoise.c
@@ -32,6 +32,7 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
#include <gsl/gsl_rng.h> /* Used in setrandoms. */
#include <gnuastro/fits.h>
+#include <gnuastro/units.h>
#include <gsl/gsl_randist.h> /* To make noise. */
#include <gnuastro-internal/timing.h>
@@ -70,7 +71,7 @@ convertsaveoutput(struct mknoiseparams *p)
0, NULL, 0);
if( !isnan(p->zeropoint) )
{
- tmp=-2.5 * log10(p->background) + p->zeropoint;
+ tmp=gal_units_counts_to_mag(p->background, p->zeropoint);
gal_checkset_allocate_copy("BCKGMAG", &keyname);
gal_fits_key_list_add_end(&headers, GAL_TYPE_FLOAT64, keyname, 1,
&tmp, 0,
diff --git a/bin/mknoise/ui.c b/bin/mknoise/ui.c
index f39bbc9..4c99355 100644
--- a/bin/mknoise/ui.c
+++ b/bin/mknoise/ui.c
@@ -311,7 +311,8 @@ ui_preparations(struct mknoiseparams *p)
p->input=gal_array_read_one_ch_to_type(p->inputname, p->cp.hdu, NULL,
GAL_TYPE_FLOAT64, p->cp.minmapsize,
p->cp.quietmmap);
- p->input->wcs=gal_wcs_read(p->inputname, p->cp.hdu, 0, 0, &p->input->nwcs);
+ p->input->wcs=gal_wcs_read(p->inputname, p->cp.hdu, p->cp.wcslinearmatrix,
+ 0, 0, &p->input->nwcs);
p->input->ndim=gal_dimension_remove_extra(p->input->ndim, p->input->dsize,
p->input->wcs);
diff --git a/bin/mkprof/args.h b/bin/mkprof/args.h
index c0d05fe..d6a8887 100644
--- a/bin/mkprof/args.h
+++ b/bin/mkprof/args.h
@@ -209,7 +209,7 @@ struct argp_option program_options[] =
UI_KEY_NUMRANDOM,
"INT",
0,
- "No. of random points in Monte Carlo integration.",
+ "No. of random points in Monte Carlo integ.",
UI_GROUP_PROFILES,
&p->numrandom,
GAL_TYPE_SIZE_T,
@@ -376,7 +376,7 @@ struct argp_option program_options[] =
UI_KEY_CCOL,
"STR/INT",
0,
- "Coordinate columns (one call for each dimension).",
+ "Coordinate columns (one call for each dim.).",
UI_GROUP_CATALOG,
&p->ccol,
GAL_TYPE_STRLL,
@@ -391,7 +391,7 @@ struct argp_option program_options[] =
0,
"sersic (1), moffat (2), gaussian (3), point (4), "
"flat (5), circumference (6), distance (7), "
- "radial-table (8)",
+ "radial-table (8).",
UI_GROUP_CATALOG,
&p->fcol,
GAL_TYPE_STRING,
@@ -430,7 +430,7 @@ struct argp_option program_options[] =
UI_KEY_PCOL,
"STR/INT",
0,
- "Position angle (First X-Z-X Euler angle in 3D).",
+ "Position angle (3D: first X-Z-X Euler angle).",
UI_GROUP_CATALOG,
&p->pcol,
GAL_TYPE_STRING,
@@ -508,7 +508,7 @@ struct argp_option program_options[] =
UI_KEY_TCOL,
"STR/INT",
0,
- "Truncation in units of --rcol, unless --tunitinp.",
+ "Truncation in units of --rcol.",
UI_GROUP_CATALOG,
&p->tcol,
GAL_TYPE_STRING,
diff --git a/bin/mkprof/mkprof.c b/bin/mkprof/mkprof.c
index 60d188b..13b8cde 100644
--- a/bin/mkprof/mkprof.c
+++ b/bin/mkprof/mkprof.c
@@ -32,6 +32,7 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
#include <gnuastro/box.h>
#include <gnuastro/git.h>
#include <gnuastro/fits.h>
+#include <gnuastro/units.h>
#include <gnuastro/threads.h>
#include <gnuastro/pointer.h>
#include <gnuastro/dimension.h>
@@ -655,7 +656,7 @@ mkprof_write(struct mkprofparams *p)
break;
case 2:
((float *)(log->array))[ibq->id] =
- sum>0.0f ? -2.5f*log10(sum)+p->zeropoint : NAN;
+ gal_units_counts_to_mag(sum, p->zeropoint);
break;
case 1:
((unsigned long *)(log->array))[ibq->id]=ibq->id+1;
@@ -883,7 +884,7 @@ mkprof(struct mkprofparams *p)
}
/* If a merged image was created, let the user know.... */
- if(p->mergedimgname)
+ if(p->mergedimgname && p->cp.quiet==0)
printf(" -- Output: %s\n", p->mergedimgname);
/* Clean up. */
diff --git a/bin/mkprof/profiles.c b/bin/mkprof/profiles.c
index ebd69ac..d6a3984 100644
--- a/bin/mkprof/profiles.c
+++ b/bin/mkprof/profiles.c
@@ -56,19 +56,18 @@ profiles_radial_distance(struct mkonthread *mkp)
double
profiles_custom_table(struct mkonthread *mkp)
{
- double out;
- long i; /* May become negative. */
+ long i; /* May become negative. */
double *reg=mkp->p->customregular;
double *min=mkp->p->custom->array;
double *max=mkp->p->custom->next->array;
double *value=mkp->p->custom->next->next->array;
+ double out=0.0f; /* Zero means no value, user may want a NaN value! */
 /* If the table isn't regular ('reg[0]' is NaN), then we have to parse
 over the whole table. However, if it is regular, we can find the proper
 value much more easily. */
if( isnan(reg[0]) )
{
- out=0;
for(i=0;i<mkp->p->custom->size;++i)
if( mkp->r >= min[i] && mkp->r < max[i] )
{ out=value[i]; break; }
@@ -76,12 +75,11 @@ profiles_custom_table(struct mkonthread *mkp)
else
{
i=(mkp->r - reg[0])/reg[1];
- if(i<0 || i>mkp->p->custom->size) out=0;
- else out=value[i];
+ if(i>=0 && i<=mkp->p->custom->size) out=value[i];
}
/* Return the output value. */
- return isnan(out) ? 0 : out;
+ return out;
}
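The regular-grid branch above replaces the linear scan with a direct division. A sketch of that lookup (names like 'min0' and 'delta' are illustrative; the real code keeps them in 'customregular', and note this sketch uses the strict 'i<size' bound):

```c
/* Regular-grid lookup: with a fixed first radius 'min0' and constant
   bin width 'delta', the bin index of radius 'r' is one division
   (truncated toward zero), instead of scanning every table row. */
static double
lookup_regular(double r, double min0, double delta,
               const double *value, long size)
{
  long i=(r-min0)/delta;             /* May be negative when r<min0. */
  return (i>=0 && i<size) ? value[i] : 0.0;   /* 0.0 means no value. */
}
```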
diff --git a/bin/mkprof/ui.c b/bin/mkprof/ui.c
index 5e1ad36..9b29674 100644
--- a/bin/mkprof/ui.c
+++ b/bin/mkprof/ui.c
@@ -1359,6 +1359,10 @@ ui_prepare_wcs(struct mkprofparams *p)
if(status)
error(EXIT_FAILURE, 0, "wcsset error %d: %s", status,
wcs_errmsg[status]);
+
+ /* Convert it to CD if the user wanted it. */
+ if(p->cp.wcslinearmatrix==GAL_WCS_LINEAR_MATRIX_CD)
+ gal_wcs_to_cd(wcs);
}
@@ -1384,7 +1388,8 @@ ui_prepare_canvas(struct mkprofparams *p)
the background image and the number of its dimensions. So
'ndim==0' and what 'dsize' points to is irrelevant. */
tdsize=gal_fits_img_info_dim(p->backname, p->backhdu, &tndim);
- p->wcs=gal_wcs_read(p->backname, p->backhdu, 0, 0, &p->nwcs);
+ p->wcs=gal_wcs_read(p->backname, p->backhdu, p->cp.wcslinearmatrix,
+ 0, 0, &p->nwcs);
tndim=gal_dimension_remove_extra(tndim, tdsize, p->wcs);
free(tdsize);
if(p->nomerged==0)
diff --git a/bin/noisechisel/ui.c b/bin/noisechisel/ui.c
index 2f3e413..9d3e2b5 100644
--- a/bin/noisechisel/ui.c
+++ b/bin/noisechisel/ui.c
@@ -587,7 +587,8 @@ ui_preparations_read_input(struct noisechiselparams *p)
NULL, GAL_TYPE_FLOAT32,
p->cp.minmapsize,
p->cp.quietmmap);
- p->input->wcs = gal_wcs_read(p->inputname, p->cp.hdu, 0, 0,
+ p->input->wcs = gal_wcs_read(p->inputname, p->cp.hdu,
+ p->cp.wcslinearmatrix, 0, 0,
&p->input->nwcs);
p->input->ndim=gal_dimension_remove_extra(p->input->ndim,
p->input->dsize,
diff --git a/bin/query/astron.c b/bin/query/astron.c
index c3f41ad..01dd25d 100644
--- a/bin/query/astron.c
+++ b/bin/query/astron.c
@@ -50,6 +50,9 @@ astron_sanity_checks(struct queryparams *p)
gal_checkset_allocate_copy("tgssadr.main", &p->datasetstr);
}
}
+
+ /* Currently we assume ASTRON only uses TAP. */
+ p->usetap=1;
}
@@ -59,7 +62,7 @@ astron_sanity_checks(struct queryparams *p)
void
astron_prepare(struct queryparams *p)
{
- /* NED-specific. */
+ /* ASTRON-specific. */
astron_sanity_checks(p);
/* Set the URLs, note that this is a simply-linked list, so we need to
diff --git a/bin/query/gaia.c b/bin/query/gaia.c
index 135e3ca..a7dd61f 100644
--- a/bin/query/gaia.c
+++ b/bin/query/gaia.c
@@ -95,6 +95,9 @@ gaia_sanity_checks(struct queryparams *p)
gal_checkset_allocate_copy("public.tycho2", &p->datasetstr);
}
}
+
+ /* Currently we assume GAIA only uses TAP. */
+ p->usetap=1;
}
diff --git a/bin/query/main.h b/bin/query/main.h
index 377d41c..62c70f3 100644
--- a/bin/query/main.h
+++ b/bin/query/main.h
@@ -70,6 +70,7 @@ struct queryparams
char *ra_name; /* Name of RA column. */
char *dec_name; /* Name of Dec columns. */
char *finalcommand; /* The final command used. */
+ int usetap; /* If a TAP-download should be used. */
/* Output: */
time_t rawtime; /* Starting time of the program. */
diff --git a/bin/query/ned.c b/bin/query/ned.c
index f2fd24f..80a9bae 100644
--- a/bin/query/ned.c
+++ b/bin/query/ned.c
@@ -37,7 +37,7 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
-
+/* Basic sanity checks. */
static void
ned_sanity_checks(struct queryparams *p)
{
@@ -51,9 +51,35 @@ ned_sanity_checks(struct queryparams *p)
}
}
+ /* Database-specific checks. For example, if we should use TAP or
+ not. Note that the user may give 'NEDTAP.objdir', so we can't use the
+ 'if' above (for expanding summarized names). */
+ if( !strcmp(p->datasetstr, "NEDTAP.objdir") )
+ p->usetap=1;
+ else if( !strcmp(p->datasetstr, "extinction") )
+ {
+ /* Crash for options that are not compatible with extinction. */
+ if( p->radius || p->width || p->range || p->noblank || p->columns
+ || p->head!=GAL_BLANK_SIZE_T || p->sort )
+ error(EXIT_FAILURE, 0, "NED's extinction calculator returns "
+ "the galactic extinction for a single point (in multiple "
+ "filters), therefore the following options are not "
+ "acceptable with it: '--radius', '--width', '--range', "
+ "'--noblank', '--column', '--head' and '--sort'");
+
+ /* Make sure that '--center' is given. */
+ if(p->center==NULL)
+ error(EXIT_FAILURE, 0, "no coordinate specified! Please use "
+ "'--center' to specify the RA and Dec (in J2000) of "
+ "your desired coordinate, for example "
+ "--center=10.68458,41.269166");
+ }
+
/* Currently NED only has a single table for TAP access, so warn the
users about this if they ask for any other table. */
- if( p->datasetstr==NULL || strcmp(p->datasetstr, "NEDTAP.objdir") )
+ if( p->usetap
+ && ( p->datasetstr==NULL
+ || strcmp(p->datasetstr, "NEDTAP.objdir") ) )
error(EXIT_FAILURE, 0, "NED currently only supports a single "
"dataset with the TAP protocol called 'NEDTAP.objdir' "
"(which you can also call in Query with '--dataset=objdir'). "
@@ -67,22 +93,87 @@ ned_sanity_checks(struct queryparams *p)
+/* Extinction with NED */
+void
+ned_extinction(struct queryparams *p)
+{
+ double *darr;
+ char *command;
+
+ /* The '--information' mode is not yet implemented for NED's
+ extinction calculator, so abort with an informative error. */
+ if(p->information)
+ error(EXIT_FAILURE, 0, "'--information' is not yet supported for "
+ "NED's extinction calculator");
+
+ /* Build the calling command. Note that the query quotes are
+ included by the function building it. */
+ darr=p->center->array;
+ if( asprintf(&command, "curl%s -o%s 'https://ned.ipac.caltech.edu/cgi-bin/calc?in_csys=Equatorial&out_csys=Equatorial&in_equinox=J2000.0&out_equinox=J2000.0&obs_epoch=2000.0&lon=%fd&lat=%fd&of=xml_main&ext=1'", p->cp.quiet ? " -s" : "",
+ p->downloadname, darr[0], darr[1])<0 )
+ error(EXIT_FAILURE, 0, "%s: asprintf allocation ('command')",
+ __func__);
+
+ /* Print the calling command for the user to know. */
+ if(p->dryrun==1 || p->cp.quiet==0)
+ {
+ if(p->dryrun==0) printf("\n");
+ error(EXIT_SUCCESS, 0, "%s: %s",
+ p->dryrun ? "would run" : "running", command);
+ if(p->dryrun==0) printf("\nDownload status:\n");
+ }
+
+ /* Run the command if '--dryrun' isn't called: if the command succeeds
+ 'system' returns 'EXIT_SUCCESS'. */
+ if(p->dryrun==0)
+ {
+ if(system(command)!=EXIT_SUCCESS)
+ error(EXIT_FAILURE, 0, "the query download command %sfailed%s\n",
+ p->cp.quiet==0 ? "printed above " : "",
+ p->cp.quiet==0 ? "" : " (the command can be printed "
+ "if you don't use the option '--quiet', or '-q')");
+ }
+}
+
+
+
+
+
+/* For NED's non-TAP queries. */
+void
+ned_non_tap(struct queryparams *p)
+{
+ if( !strcmp(p->datasetstr, "extinction") )
+ ned_extinction(p);
+}
+
+
+
+
+
void
ned_prepare(struct queryparams *p)
{
/* NED-specific. */
ned_sanity_checks(p);
- /* Set the URLs, note that this is a simply-linked list, so we need to
- reverse it in the end (with 'gal_list_str_reverse') to have the same
- order here. */
- gal_list_str_add(&p->urls,
- "https://ned.ipac.caltech.edu/tap/sync", 0);
-
- /* Name of default RA Dec columns. */
- if(p->ra_name==NULL) p->ra_name="ra";
- if(p->dec_name==NULL) p->dec_name="dec";
-
- /* Basic sanity checks. */
- tap_sanity_checks(p);
+ /* If we should use TAP, do the preparations. */
+ if(p->usetap)
+ {
+ /* Set the URLs, note that this is a simply-linked list, so we need
+ to reverse it in the end (with 'gal_list_str_reverse') to have the
+ same order here. */
+ gal_list_str_add(&p->urls,
+ "https://ned.ipac.caltech.edu/tap/sync", 0);
+
+ /* Name of default RA Dec columns. */
+ if(p->ra_name==NULL) p->ra_name="ra";
+ if(p->dec_name==NULL) p->dec_name="dec";
+
+ /* Basic sanity checks. */
+ tap_sanity_checks(p);
+ }
+ else
+ ned_non_tap(p);
}
diff --git a/bin/query/query.c b/bin/query/query.c
index b8289b5..cae127d 100644
--- a/bin/query/query.c
+++ b/bin/query/query.c
@@ -284,36 +284,49 @@ query_output_data(struct queryparams *p)
void
-query_check_download(struct queryparams *p)
+query_output_finalize(struct queryparams *p)
{
size_t len;
- int status=0;
+ int isxml=0;
char *logname;
fitsfile *fptr;
+ int gooddownload=0, status=0;
- /* Open the FITS file and if the status value is still zero, it means
- everything worked properly. */
- fits_open_file(&fptr, p->downloadname, READONLY, &status);
- if(status==0)
+ /* See if it's a FITS file or a VOTable. */
+ len=strlen(p->downloadname);
+ if( !strcmp(&p->downloadname[len-4], ".xml") )
+ { isxml=1; gooddownload=1; }
+ else
{
- /* Close the FITS file pointer. */
- fits_close_file(fptr, &status);
+ /* Open the FITS file and if the status value is still zero, it means
+ everything worked properly. */
+ fits_open_file(&fptr, p->downloadname, READONLY, &status);
+ if(status==0)
+ {
+ gooddownload=1;
+ fits_close_file(fptr, &status);
+ }
+ }
+ /* If the downloaded file is good, do the preparations. */
+ if(gooddownload)
+ {
/* Prepare the output dataset. */
if(p->information)
{
if(p->datasetstr) query_output_meta_dataset(p);
else query_output_meta_database(p);
}
- else query_output_data(p);
+ else if(isxml==0) query_output_data(p);
/* Delete the raw downloaded file if necessary. */
if(p->keeprawdownload==0) remove(p->downloadname);
}
+
+ /* If there was an error */
else
{
/* Add a '.log' suffix to the output filename. */
- len=strlen(p->downloadname);
logname=gal_pointer_allocate(GAL_TYPE_STRING, len+10, 1,
__func__, "logname");
sprintf(logname, "%s.log", p->downloadname);
@@ -326,8 +339,8 @@ query_check_download(struct queryparams *p)
"retrieved! For more, please see '%s'", logname);
}
- /* Add the query keywords to the first extension (if the output was a
- FITS file). */
+ /* Add the query keywords to the first extension of the output (if the
+ output was a FITS file). */
if( p->information==0 && gal_fits_name_is_fits(p->cp.output) )
{
gal_fits_key_list_title_add_end(&p->cp.okeys,
@@ -360,12 +373,12 @@ query(struct queryparams *p)
}
/* Download the requested query. */
- tap_download(p);
+ if(p->usetap) tap_download(p);
/* Make sure that the result is a readable FITS file, otherwise, abort
with an error. */
if(p->dryrun==0)
- query_check_download(p);
+ query_output_finalize(p);
/* Let the user know that things went well. */
if(p->dryrun==0 && p->cp.quiet==0)
@@ -379,12 +392,7 @@ query(struct queryparams *p)
printf("Query's raw downloaded file: %s\n", p->downloadname);
}
if(p->information==0)
- {
- printf("Query's final output: %s\n", p->cp.output);
- printf("TIP: use the command below for more on the "
- "downloaded table:\n"
- " asttable %s --info\n", p->cp.output);
- }
+ printf("Query's output written: %s\n", p->cp.output);
}
/* Clean up. */
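The VOTable detection above compares the last four characters of the download name with ".xml". A hedged sketch of that test ('ends_with_xml' is an illustrative name; unlike the code above, this adds a length guard so names shorter than four characters aren't read out of bounds):

```c
/* Return 1 when 'name' ends in ".xml", 0 otherwise. The 'len>=4'
   guard keeps '&name[len-4]' inside the string. */
#include <string.h>

static int
ends_with_xml(const char *name)
{
  size_t len=strlen(name);
  return len>=4 && !strcmp(&name[len-4], ".xml");
}
```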
diff --git a/bin/query/ui.c b/bin/query/ui.c
index ed37313..204fa8e 100644
--- a/bin/query/ui.c
+++ b/bin/query/ui.c
@@ -266,9 +266,9 @@ ui_read_check_only_options(struct queryparams *p)
{
size_t i;
double *darray;
- char *basename;
gal_data_t *tmp;
int keepinputdir;
+ char *suffix, *rdsuffix, *basename;
/* See if database has been specified. */
if(p->databasestr==NULL)
@@ -405,26 +405,65 @@ ui_read_check_only_options(struct queryparams *p)
gal_checkset_writable_remove(p->cp.output, p->cp.keep,
p->cp.dontdelete);
+ /* Set the suffix of the default names. */
+ if( p->database==QUERY_DATABASE_NED
+ && !strcmp(p->datasetstr, "extinction") )
+ {
+ suffix=".xml";
+ rdsuffix="-raw-download.xml";
+ }
+ else
+ {
+ suffix=".fits";
+ rdsuffix="-raw-download.fits";
+ }
+
+ /* Currently Gnuastro doesn't read or write XML files (VOTable). So if
+ the downloaded file is an XML file but the user hasn't given an XML
+ suffix, abort and inform the user. */
+ if(p->cp.output)
+ {
+ if( !strcmp(suffix,".xml")
+ && strcmp(&p->cp.output[strlen(p->cp.output)-4], ".xml") )
+ error(EXIT_FAILURE, 0, "this dataset's output is a VOTable (with "
+ "an '.xml' suffix). However, Gnuastro doesn't yet support "
+ "VOTable, so it won't do any checks and corrections on "
+ "the downloaded file. Please give an output name with an "
+ "'.xml' suffix to continue");
+ }
+
/* Set the name for the downloaded and final output name. These are due
to an internal low-level processing that will be done on the raw
downloaded file. */
- if(p->cp.output==NULL)
+ else
{
- basename=gal_checkset_malloc_cat(p->databasestr, ".fits");
- p->cp.output=gal_checkset_make_unique_suffix(basename, ".fits");
+ basename=gal_checkset_malloc_cat(p->databasestr, suffix);
+ p->cp.output=gal_checkset_make_unique_suffix(basename, suffix);
free(basename);
}
- /* Make sure the output name doesn't exist (and report an error if
- '--dontdelete' is called. Just note that for the automatic output, we
- are basing that on the output, not the input. So we are temporarily
- activating 'keepinputdir'. */
- keepinputdir=p->cp.keepinputdir;
- p->cp.keepinputdir=1;
- gal_checkset_writable_remove(p->cp.output, 0, p->cp.dontdelete);
- p->downloadname=gal_checkset_automatic_output(&p->cp, p->cp.output,
- "-raw-download.fits");
- p->cp.keepinputdir=keepinputdir;
+ /* Currently we don't internally process VOTable (in '.xml' suffix)
+ files, so to keep the next steps unaffected, we'll set Query to not
+ delete the raw download and copy the name of the output into the raw
+ download. */
+ if( !strcmp(suffix, ".xml") )
+ {
+ p->keeprawdownload=1;
+ gal_checkset_allocate_copy(p->cp.output, &p->downloadname);
+ }
+ else
+ {
+ /* Make sure the output name doesn't exist (and report an error if
+ '--dontdelete' is called). Just note that for the automatic output,
+ we are basing that on the output, not the input. So we are
+ temporarily activating 'keepinputdir'. */
+ keepinputdir=p->cp.keepinputdir;
+ p->cp.keepinputdir=1;
+ gal_checkset_writable_remove(p->cp.output, 0, p->cp.dontdelete);
+ p->downloadname=gal_checkset_automatic_output(&p->cp, p->cp.output,
+ rdsuffix);
+ p->cp.keepinputdir=keepinputdir;
+ }
}
diff --git a/bin/query/vizier.c b/bin/query/vizier.c
index d7513a0..f2dfcfd 100644
--- a/bin/query/vizier.c
+++ b/bin/query/vizier.c
@@ -143,6 +143,9 @@ vizier_sanity_checks(struct queryparams *p)
gal_checkset_allocate_copy("II/363/unwise", &p->datasetstr);
}
}
+
+ /* Currently we assume VizieR only uses TAP. */
+ p->usetap=1;
}
diff --git a/bin/script/Makefile.am b/bin/script/Makefile.am
index 3bbedcf..3c45589 100644
--- a/bin/script/Makefile.am
+++ b/bin/script/Makefile.am
@@ -26,10 +26,13 @@
## 'prefix/bin' directory ('bin_SCRIPTS'), files necessary to distribute
## with the tarball ('EXTRA_DIST') and output files (to be cleaned with
## 'make clean').
-bin_SCRIPTS = astscript-make-ds9-reg \
+bin_SCRIPTS = astscript-ds9-region \
+ astscript-radial-profile \
astscript-sort-by-night
-EXTRA_DIST = make-ds9-reg.in sort-by-night.in
+EXTRA_DIST = ds9-region.in \
+ radial-profile.in \
+ sort-by-night.in
CLEANFILES = $(bin_SCRIPTS)
@@ -45,11 +48,15 @@ do_subst = sed -e 's,[@]VERSION[@],$(VERSION),g' \
-## Rules to build the scripts
-astscript-sort-by-night: sort-by-night.in Makefile
- $(do_subst) < $(srcdir)/sort-by-night.in > $@
+## Rules to install the scripts.
+astscript-ds9-region: ds9-region.in Makefile
+ $(do_subst) < $(srcdir)/ds9-region.in > $@
+ chmod +x $@
+
+astscript-radial-profile: radial-profile.in Makefile
+ $(do_subst) < $(srcdir)/radial-profile.in > $@
chmod +x $@
-astscript-make-ds9-reg: make-ds9-reg.in Makefile
- $(do_subst) < $(srcdir)/make-ds9-reg.in > $@
+astscript-sort-by-night: sort-by-night.in Makefile
+ $(do_subst) < $(srcdir)/sort-by-night.in > $@
chmod +x $@
diff --git a/bin/script/make-ds9-reg.in b/bin/script/ds9-region.in
old mode 100755
new mode 100644
similarity index 83%
rename from bin/script/make-ds9-reg.in
rename to bin/script/ds9-region.in
index 15d115e..3c55243
--- a/bin/script/make-ds9-reg.in
+++ b/bin/script/ds9-region.in
@@ -5,6 +5,7 @@
# Original author:
# Mohammad Akhlaghi <mohammad@akhlaghi.org>
# Contributing author(s):
+# Samane Raji <samaneraji@protonmail.com>
# Copyright (C) 2021 Free Software Foundation, Inc.
#
# Gnuastro is free software: you can redistribute it and/or modify it under
@@ -32,11 +33,11 @@ set -e
# command-line).
hdu=1
col=""
-name=""
width=1
mode=wcs
radius=""
command=""
+namecol=""
out=ds9.reg
color=green
dontdelete=0
@@ -82,6 +83,7 @@ $scriptname options:
-h, --hdu=STR HDU/extension of all input FITS files.
-c, --column=STR,STR Columns to use as coordinates (name or number).
-m, --mode=wcs|img Coordinates in WCS or image (default: $mode)
+ -n, --namecol=STR ID of each region (name or number of a column)
Output:
-C, --color Color for the regions (read by DS9).
@@ -189,6 +191,9 @@ do
 -m|--mode) mode="$2"; check_v "$1" "$mode"; shift;shift;;
 -m=*|--mode=*) mode="${1#*=}"; check_v "$1" "$mode"; shift;;
 -m*) mode=$(echo "$1" | sed -e's/-m//'); check_v "$1" "$mode"; shift;;
+ -n|--namecol) namecol="$2"; check_v "$1" "$namecol"; shift;shift;;
+ -n=*|--namecol=*) namecol="${1#*=}"; check_v "$1" "$namecol"; shift;;
+ -n*) namecol=$(echo "$1" | sed -e's/-n//'); check_v "$1" "$namecol"; shift;;
# Output parameters
 -C|--color) color="$2"; check_v "$1" "$color"; shift;shift;;
@@ -244,7 +249,7 @@ if [ x$col = x ]; then
else
ncols=$(echo $col | awk 'BEGIN{FS=","}END{print NF}')
if [ x$ncols != x2 ]; then
- echo "$scriptname: only two columns should be given, but $ncols were given"
+ echo "$scriptname: only two columns should be given with '--column' (or '-c'), but $ncols were given"
exit 1
fi
fi
@@ -266,6 +271,15 @@ if [ -f $out ]; then
fi
fi
+# Make sure a single column is given to '--namecol':
+if [ x"$namecol" != x ]; then
+ ncols=$(echo $namecol | awk 'BEGIN{FS=","}END{print NF}')
+ if [ x$ncols != x1 ]; then
+ echo "$scriptname: only one column should be given to '--namecol'"
+ exit 1
+ fi
+fi
+
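The single-column check above (like the two-column '--column' check earlier in the script) counts comma-separated values with awk. A small illustration of that idiom, with made-up column names:

```shell
# With the field separator set to a comma, NF on the last line is the
# number of comma-separated values the user gave.
ncols=$(echo "RA_ICRS,DE_ICRS" | awk 'BEGIN{FS=","}END{print NF}')
echo $ncols
```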
@@ -288,8 +302,11 @@ if [ x$mode = x"wcs" ]; then unit="\""; else unit=""; fi
# Write the metadata in the output.
printf "# Region file format: DS9 version 4.1\n" > $out
printf "# Created by $scriptname (GNU Astronomy Utilities) $version\n" >> $out
-printf "# Input: $input (hdu $hdu)\n" >> $out
+printf "# Input file: $input (hdu $hdu)\n" >> $out
printf "# Columns: $col\n" >> $out
+if [ x"$namecol" != x ]; then
+ printf "# Region name (or label) column: $namecol\n" >> $out
+fi
printf "global color=%s width=%d\n" $color $width >> $out
if [ $mode = "wcs" ]; then printf "fk5\n" >> $out
else printf "image\n" >> $out; fi
@@ -300,27 +317,44 @@ else printf "image\n" >> $out; fi
# Write each region's results (when no input file is given, read from the
# standard input).
-if [ x"$input" = x ]; then
- cat /dev/stdin \
- | asttable $input --column=$col \
- | while read a b; do \
- printf "circle(%g,%g,%g%s)\n" $a $b $radius $unit >> $out; \
- done
+if [ x"$namecol" = x ]; then
+ if [ x"$input" = x ]; then
+ cat /dev/stdin \
+ | asttable $input --column=$col \
+ | while read a b; do \
+ printf "circle(%g,%g,%g%s)\n" \
+ $a $b $radius $unit >> $out; \
+ done
+ else
+ asttable $input --column=$col \
+ | while read a b; do \
+ printf "circle(%g,%g,%g%s)\n" \
+ $a $b $radius $unit >> $out; \
+ done
+ fi
else
- asttable $input --column=$col \
- | while read a b; do \
- printf "circle(%g,%g,%g%s)\n" $a $b $radius $unit >> $out; \
- done
+ if [ x"$input" = x ]; then
+ cat /dev/stdin \
+ | asttable $input --column=$col --column=$namecol \
+ | while read a b c; do \
+ printf "circle(%g,%g,%g%s) # text={%g}\n" \
+ $a $b $radius $unit $c >> $out; \
+ done
+ else
+ asttable $input --column=$col --column=$namecol \
+ | while read a b c; do \
+ printf "circle(%g,%g,%g%s) # text={%g}\n" \
+ $a $b $radius $unit $c >> $out; \
+ done
+ fi
fi
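For reference, the two printf formats above produce region lines like the following (coordinate, radius and label values here are made up; the '"' unit suffix is what the script appends in WCS mode):

```shell
# One unlabeled and one labeled circle region, as written to the
# output region file by the loops above.
printf 'circle(%g,%g,%g%s)\n' 10.5 41.25 5 '"'
printf 'circle(%g,%g,%g%s) # text={%g}\n' 10.5 41.25 5 '"' 7
```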
-# Run ds9 with the desired region over-plotted.
-if [ x"$command" = x ]; then
- junk=1
-else
+# Run the user's command (while appending the region).
+if [ x"$command" != x ]; then
$command -regions $out
if [ $dontdelete = 0 ]; then rm $out; fi
fi
diff --git a/bin/script/radial-profile.in b/bin/script/radial-profile.in
new file mode 100644
index 0000000..a89eb29
--- /dev/null
+++ b/bin/script/radial-profile.in
@@ -0,0 +1,551 @@
+#!/bin/sh
+
+# Obtain averaged radial profiles, run with `--help', or see description
+# under `print_help' (below) for more.
+#
+# Original author:
+# Raul Infante-Sainz <infantesainz@gmail.com>
+# Contributing author(s):
+# Mohammad Akhlaghi <mohammad@akhlaghi.org>
+# Zahra Sharbaf <zahra.sharbaf2@gmail.com>
+# Carlos Morales-Socorro <cmorsoc@gmail.com>
+# Copyright (C) 2020-2021, Free Software Foundation, Inc.
+#
+# Gnuastro is free software: you can redistribute it and/or modify it under
+# the terms of the GNU General Public License as published by the Free
+# Software Foundation, either version 3 of the License, or (at your option)
+# any later version.
+#
+# Gnuastro is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+# more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
+
+
+# Exit the script in the case of failure
+set -e
+
+
+
+
+
+# Default option values (can be changed with options on the command-line).
+hdu=1
+rmax=""
+quiet=""
+center=""
+tmpdir=""
+output=""
+keeptmp=0
+mode="img"
+measure=""
+axisratio=1
+sigmaclip=""
+oversample=""
+positionangle=0
+version=@VERSION@
+scriptname=@SCRIPT_NAME@
+
+
+
+
+
+# Output of `--usage' and `--help':
+print_usage() {
+ cat <<EOF
+$scriptname: run with '--help' for list of options
+EOF
+}
+
+print_help() {
+ cat <<EOF
+Usage: $scriptname [OPTION] FITS-files
+
+This script is part of GNU Astronomy Utilities $version.
+
+This script constructs a radial profile from the input image, around a
+given center, using elliptical apertures.
+
+For more information, please run any of the following commands. In
+particular the first contains a very comprehensive explanation of this
+script's invocation: expected input(s), output(s), and a full description
+of all the options.
+
+ Inputs/Outputs and options: $ info $scriptname
+ Full Gnuastro manual/book: $ info gnuastro
+
+If you couldn't find your answer in the manual, you can get direct help from
+experienced Gnuastro users and developers. For more information, please run:
+
+ $ info help-gnuastro
+
+$scriptname options:
+ Input:
+ -h, --hdu=STR HDU/extension of all input FITS files.
+ -O, --mode=STR Coordinate mode: img or wcs.
+ -c, --center=FLT,FLT Coordinate of the center along 2 axes.
+ -R, --rmax=FLT Maximum radius for the radial profile (in pixels).
+ -Q, --axisratio=FLT Axis ratio for ellipse profiles (A/B).
+ -p, --positionangle=FLT Position angle for ellipse profiles.
+ -s, --sigmaclip=FLT,FLT Sigma-clip multiple and tolerance.
+
+ Output:
+ -t, --tmpdir Directory to keep temporary files.
+ -k, --keeptmp Keep temporary/auxiliary files.
+ -m, --measure=STR Measurement operator (mean, sigclip-mean, etc.).
+ -o, --output Output table with the radial profile.
+ -v, --oversample Oversample for higher resolution radial profile.
+
+ Operating mode:
+ -?, --help Print this help list.
+ --cite BibTeX citation for this program.
+ -q, --quiet Don't print the list.
+ -V, --version Print program version.
+
+Mandatory or optional arguments to long options are also mandatory or optional
+for any corresponding short options.
+
+GNU Astronomy Utilities home page: http://www.gnu.org/software/gnuastro/
+
+Report bugs to bug-gnuastro@gnu.org.
+EOF
+}
+
+
+
+
+
+# Output of `--version':
+print_version() {
+ cat <<EOF
+$scriptname (GNU Astronomy Utilities) $version
+Copyright (C) 2020-2021, Free Software Foundation, Inc.
+License GPLv3+: GNU General public license version 3 or later.
+This is free software: you are free to change and redistribute it.
+There is NO WARRANTY, to the extent permitted by law.
+
+Written/developed by Raul Infante-Sainz
+EOF
+}
+
+
+
+
+
+# Functions to check option values and complain if necessary.
+on_off_option_error() {
+ if [ "x$2" = x ]; then
+ echo "$scriptname: '$1' doesn't take any values."
+ else
+ echo "$scriptname: '$1' (or '$2') doesn't take any values."
+ fi
+ exit 1
+}
+
+check_v() {
+ if [ x"$2" = x ]; then
+ echo "$scriptname: option '$1' requires an argument."
+ echo "Try '$scriptname --help' for more information."
+ exit 1;
+ fi
+}
+
+
+
+
+
+# Separate command-line arguments from options. Then put the option
+# value into the respective variable.
+#
+# OPTIONS WITH A VALUE:
+#
+# Each option has three lines because we want to accept all common formats: for
+# long option names: `--longname value' and `--longname=value'. For short
+# option names we want `-l value', `-l=value' and `-lvalue' (where `-l'
+# is the short version of the hypothetical `--longname' option).
+#
+# The first case (with a space between the name and value) is two
+# command-line arguments. So, we'll need to shift it two times. The
+# latter two cases are a single command-line argument, so we just need to
+# "shift" the counter by one. IMPORTANT NOTE: the ORDER OF THE LATTER TWO
+# cases matters: `-h*' should be checked only when we are sure that it's
+# not `-h=*').
+#
+# OPTIONS WITH NO VALUE (ON-OFF OPTIONS)
+#
+# For these, we just want the two forms of `--longname' or `-l'. Nothing
+# else. So if an equal sign is given we should definitely crash and also,
+# if a value is appended to the short format it should crash. So in the
+# second test for these (`-l*') will account for both the case where we
+# have an equal sign and where we don't.
+while [ $# -gt 0 ]
+do
+ case "$1" in
+ # Input parameters.
+ -h|--hdu) hdu="$2"; check_v "$1" "$hdu"; shift;shift;;
+ -h=*|--hdu=*) hdu="${1#*=}"; check_v "$1" "$hdu"; shift;;
+ -h*) hdu=$(echo "$1" | sed -e's/-h//'); check_v "$1" "$hdu"; shift;;
+ -O|--mode) mode="$2"; check_v "$1" "$mode"; shift;shift;;
+ -O=*|--mode=*) mode="${1#*=}"; check_v "$1" "$mode"; shift;;
+ -O*) mode=$(echo "$1" | sed -e's/-O//'); check_v "$1" "$mode"; shift;;
+ -c|--center) center="$2"; check_v "$1" "$center"; shift;shift;;
+ -c=*|--center=*) center="${1#*=}"; check_v "$1" "$center"; shift;;
+ -c*) center=$(echo "$1" | sed -e's/-c//'); check_v "$1" "$center"; shift;;
+ -R|--rmax) rmax="$2"; check_v "$1" "$rmax"; shift;shift;;
+ -R=*|--rmax=*) rmax="${1#*=}"; check_v "$1" "$rmax"; shift;;
+ -R*) rmax=$(echo "$1" | sed -e's/-R//'); check_v "$1" "$rmax"; shift;;
+ -Q|--axisratio) axisratio="$2"; check_v "$1" "$axisratio"; shift;shift;;
+ -Q=*|--axisratio=*) axisratio="${1#*=}"; check_v "$1" "$axisratio"; shift;;
+ -Q*) axisratio=$(echo "$1" | sed -e's/-Q//'); check_v "$1" "$axisratio"; shift;;
+ -p|--positionangle) positionangle="$2"; check_v "$1" "$positionangle"; shift;shift;;
+ -p=*|--positionangle=*) positionangle="${1#*=}"; check_v "$1" "$positionangle"; shift;;
+ -p*) positionangle=$(echo "$1" | sed -e's/-p//'); check_v "$1" "$positionangle"; shift;;
+ -s|--sigmaclip) sigmaclip="$2"; check_v "$1" "$sigmaclip"; shift;shift;;
+ -s=*|--sigmaclip=*) sigmaclip="${1#*=}"; check_v "$1" "$sigmaclip"; shift;;
+ -s*) sigmaclip=$(echo "$1" | sed -e's/-s//'); check_v "$1" "$sigmaclip"; shift;;
+
+ # Output parameters
+ -k|--keeptmp) keeptmp=1; shift;;
+ -k*|--keeptmp=*) on_off_option_error --keeptmp -k;;
+ -t|--tmpdir) tmpdir="$2"; check_v "$1" "$tmpdir"; shift;shift;;
+ -t=*|--tmpdir=*) tmpdir="${1#*=}"; check_v "$1" "$tmpdir"; shift;;
+ -t*) tmpdir=$(echo "$1" | sed -e's/-t//'); check_v "$1" "$tmpdir"; shift;;
+ -m|--measure) measuretmp="$2"; check_v "$1" "$measuretmp"; shift;shift;;
+ -m=*|--measure=*) measuretmp="${1#*=}"; check_v "$1" "$measuretmp"; shift;;
+ -m*) measuretmp=$(echo "$1" | sed -e's/-m//'); check_v "$1" "$measuretmp"; shift;;
+ -o|--output) output="$2"; check_v "$1" "$output"; shift;shift;;
+ -o=*|--output=*) output="${1#*=}"; check_v "$1" "$output"; shift;;
+ -o*) output=$(echo "$1" | sed -e's/-o//'); check_v "$1" "$output"; shift;;
+ -v|--oversample) oversample="$2"; check_v "$1" "$oversample"; shift;shift;;
+ -v=*|--oversample=*) oversample="${1#*=}"; check_v "$1" "$oversample"; shift;;
+ -v*) oversample=$(echo "$1" | sed -e's/-v//'); check_v "$1" "$oversample"; shift;;
+
+ # Non-operating options.
+ -q|--quiet) quiet="--quiet"; shift;;
+ -q*|--quiet=*) on_off_option_error --quiet -q;;
+ -?|--help) print_help; exit 0;;
+ -'?'*|--help=*) on_off_option_error --help -?;;
+ -V|--version) print_version; exit 0;;
+ -V*|--version=*) on_off_option_error --version -V;;
+ --cite) astfits --cite; exit 0;;
+ --cite=*) on_off_option_error --cite;;
+
+ # Unrecognized option:
+ -*) echo "$scriptname: unknown option '$1'"; exit 1;;
+
+ # Not an option (not starting with a `-'): assumed to be input FITS
+ # file name.
+ *) inputs="$1 $inputs"; shift;;
+ esac
+
+    # If a measurement was given, add it to possibly existing previous
+    # measurements into a comma-separated list.
+ if [ x"$measuretmp" != x ]; then
+ if [ x"$measure" = x ]; then measure=$measuretmp;
+ else measure="$measure,$measuretmp";
+ fi
+ fi
+done
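The measurement-accumulation logic at the end of the parsing loop can be tried on its own. Below is a minimal standalone sketch, with a hypothetical sequence of two parsed `--measure` values standing in for the real command-line loop:

```shell
# Accumulate repeated '--measure' values into one comma-separated list.
# The two values iterated over here are hypothetical user input.
measure=""
for measuretmp in mean median; do
    if [ x"$measuretmp" != x ]; then
        if [ x"$measure" = x ]; then measure=$measuretmp
        else measure="$measure,$measuretmp"
        fi
    fi
done
echo "$measure"   # -> mean,median
```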
+
+
+
+
+
+# Basic sanity checks
+# ===================
+
+# If an input image is given at all.
+if [ x"$inputs" = x ]; then
+ echo "$scriptname: no input FITS image files."
+ echo "Run with '--help' for more information on how to run."
+ exit 1
+fi
+
+# If a '--center' has been given, make sure it only has two numbers.
+if [ x"$center" != x ]; then
+    ncenter=$(echo $center | awk 'BEGIN{FS=","}END{print NF}')
+    if [ x$ncenter != x2 ]; then
+        echo "$scriptname: '--center' (or '-c') only takes two values, but $ncenter were given"
+        exit 1
+    fi
+fi
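The comma-counting awk call in the check above can be exercised in isolation, without any Gnuastro program. A minimal sketch with a hypothetical center value:

```shell
# Count comma-separated values the same way the '--center' check does.
center="53.16,-27.78"   # hypothetical RA,Dec (or X,Y) center
ncenter=$(echo "$center" | awk 'BEGIN{FS=","}END{print NF}')
echo "$ncenter"   # -> 2
```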
+
+# Make sure the value to '--mode' is either 'wcs' or 'img'.
+if [ "$mode" = "wcs" ] || [ "$mode" = "img" ]; then
+    junk=1
+else
+    echo "$scriptname: value to '--mode' ('-O') should be 'wcs' or 'img'"
+    exit 1
+fi
+
+# If no specific measurement has been requested, use the mean.
+if [ x"$measure" = x ]; then measure=mean; fi
+
+
+
+
+
+# Finalize the center value
+# -------------------------
+#
+# Beyond this point, we know the image-based, central coordinate for the
+# radial profile as two values (one along each dimension).
+if [ x"$center" = x ]; then
+
+    # No center has been given: we thus assume that the object is already
+    # centered on the input image and set the center to the central pixel
+    # of the image. In the FITS standard, pixels are counted from 1 and
+    # integer coordinates lie at pixel centers. So after dividing the
+    # image size by 2, we add 0.5 to get the center of the image.
+ xcenter=$(astfits $inputs --hdu=$hdu | awk '/^NAXIS1/{print $3/2+0.5}')
+ ycenter=$(astfits $inputs --hdu=$hdu | awk '/^NAXIS2/{print $3/2+0.5}')
+
+else
+
+ if [ $mode = img ]; then
+
+        # A center has been given; we just need to separate its two values.
+ xcenter=$(echo "$center" | awk 'BEGIN{FS=","} {print $1}')
+ ycenter=$(echo "$center" | awk 'BEGIN{FS=","} {print $2}')
+
+ else
+
+ # WCS coordinates have been given. We should thus convert them to
+ # image coordinates at this point. To do that, WCS information from
+ # the input header image is used.
+ xy=$(echo "$center" \
+ | sed 's/,/ /' \
+ | asttable -c'arith $1 $2 wcstoimg' \
+ --wcsfile=$inputs --wcshdu=$hdu)
+ xcenter=$(echo $xy | awk '{print $1}');
+ ycenter=$(echo $xy | awk '{print $2}');
+
+ fi
+fi
+
+
+
+
+
+# Calculate the maximum radius
+# ----------------------------
+#
+# If the user didn't set the '--rmax' parameter, then compute the maximum
+# radius possible on the image.
+#
+# If the user has not given any maximum radius, we give the most reliable
+# maximum radius (where the full circumference will be within the
+# image). If the radius goes outside the image, then the measurements and
+# calculations can be biased, so when the user has not provided any maximum
+# radius, we should only confine ourselves to a radius where the results
+# are reliable.
+#
+# Y--------------
+# | | The maximum radius (to ensure the profile
+# y |........* | lies within the image) is the smallest
+# | . | one of these values:
+# | . | x, y, X-x, Y-y
+# --------------
+# 0 x X
+#
+if [ x"$rmax" = x ]; then
+ rmax=$(astfits $inputs --hdu=$hdu \
+ | awk '/^NAXIS1/{X=$3} /^NAXIS2/{Y=$3} \
+ END{ x='$xcenter'; y='$ycenter'; \
+ printf("%s\n%s\n%s\n%s", x, y, X-x, Y-y); }' \
+ | aststatistics --minimum )
+fi
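The min(x, y, X-x, Y-y) logic above can be sketched without astfits or aststatistics; the image size and center values below are hypothetical:

```shell
# Default rmax: the largest radius whose full circumference fits inside
# the image, i.e. the minimum of x, y, X-x and Y-y (hypothetical values).
X=100; Y=80; xcenter=30; ycenter=50
rmax=$(awk -v x="$xcenter" -v y="$ycenter" -v X="$X" -v Y="$Y" \
           'BEGIN{m=x; if(y<m) m=y; if(X-x<m) m=X-x; if(Y-y<m) m=Y-y;
                  print m}')
echo "$rmax"   # -> 30
```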
+
+
+
+
+
+# Define the final output file and temporary directory
+# ----------------------------------------------------
+#
+# Here, the final output file containing the radial profile is defined. If
+# the user has given a specific path/name for the output, it will be used
+# for saving the output file. If no output name is specified, a default
+# value containing the center and mode will be generated.
+bname_prefix=$(basename $inputs | sed 's/\.fits/ /' | awk '{print $1}')
+defaultname=$(pwd)/"$bname_prefix"_radial_profile_$mode"_$xcenter"_"$ycenter"
+if [ x$output = x ]; then output="$defaultname.fits"; fi
+
+# Construct the temporary directory. If the user does not specify any
+# directory, then a default one with the base name of the input image will
+# be constructed. If the user set the directory, then make it. This
+# directory will be deleted at the end of the script if the user does not
+# want to keep it (with the `--keeptmp' option).
+if [ x$tmpdir = x ]; then tmpdir=$defaultname; fi
+if [ -d $tmpdir ]; then junk=1; else mkdir $tmpdir; fi
+
+
+
+
+
+# Crop image
+# ----------
+#
+# Crop the input image around the desired point so we can continue
+# processing only on those pixels (we do not need the other pixels).
+#
+# The crop's output always has the range of pixels from the original image
+# used in the `ICF1PIX' keyword value. So, to find the new center
+# (important if it is at sub-pixel precision), we can simply get the first
+# and third value of that string, and convert to the cropped coordinate
+# system. Note that because FITS pixel counting starts from 1, we need to
+# subtract `1'.
+crop=$tmpdir/crop.fits
+cropwidth=$(echo $rmax | awk '{print $1*2+1}')
+astcrop $inputs --hdu=$hdu --center=$xcenter,$ycenter --mode=img \
+ --width=$cropwidth --output=$crop $quiet
+dxy=$(astfits $crop -h1 \
+ | grep ICF1PIX \
+ | sed -e"s/'/ /g" -e's/\:/ /g' -e's/,/ /' \
+ | awk '{print $3-1, $5-1}')
+xcenter=$(echo "$xcenter $cropwidth $dxy" \
+ | awk '{ if($1>int($2/2)) print $1-$3; \
+ else print int($2/2)+$1-int($1) }')
+ycenter=$(echo "$ycenter $cropwidth $dxy" \
+ | awk '{ if($1>int($2/2)) print $1-$4; \
+ else print int($2/2)+$1-int($1) }')
+
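The ICF1PIX parsing above can be checked in isolation. The keyword line below is a hypothetical example of the line that `astfits | grep ICF1PIX` would print:

```shell
# Extract the zero-based offset of the crop inside the original image
# from a (hypothetical) ICF1PIX keyword line.
line="ICF1PIX = '148:188,246:286' / Range of pixels used in input"
dxy=$(echo "$line" \
          | sed -e"s/'/ /g" -e's/\:/ /g' -e's/,/ /' \
          | awk '{print $3-1, $5-1}')
echo "$dxy"   # -> 147 245
```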
+
+
+# Over-sample the input if necessary
+# ----------------------------------
+values=$tmpdir/values.fits
+if [ x$oversample = x ]; then
+ ln -fs $crop $values
+else
+ astwarp $crop --scale=$oversample,$oversample -o$values
+ xcenter=$(echo $xcenter | awk '{print '$oversample'*$1}')
+ ycenter=$(echo $ycenter | awk '{print '$oversample'*$1}')
+ rmax=$(echo $rmax | awk '{print '$oversample'*$1}')
+fi
+
+
+
+
+# Generate the apertures image
+# ----------------------------
+#
+# The apertures image is generated using MakeProfiles with the parameters
+# specified in the echo statement:
+#
+# 1             -- ID of profile (irrelevant here!).
+# xcenter       -- X center position (in pixels).
+# ycenter       -- Y center position (in pixels).
+# 7             -- Type of the profile (radial distance).
+# rmax          -- Truncation radius of the profile (in pixels).
+# 1             -- The Sersic or Moffat index (irrelevant here!).
+# positionangle -- Position angle.
+# axisratio     -- Axis ratio.
+# 1             -- Magnitude of the profile within the truncation radius
+#                  (irrelevant here!).
+# 1             -- Truncation in units of the radius.
+aperturesraw=$tmpdir/apertures-raw.fits
+echo "1 $xcenter $ycenter 7 $rmax 1 $positionangle $axisratio 1 1" \
+ | astmkprof --background=$values --backhdu=1 --mforflatpix \
+ --mode=img --clearcanvas --type=int16 \
+ --circumwidth=1 --replace --output=$aperturesraw \
+ $quiet
+
+
+
+
+
+# Fill the central pixel(s)
+# -------------------------
+#
+# The central pixel(s) have a distance of 0! So we need to add a single
+# value to all the profile pixels (but keep the outer parts at 0).
+apertures=$tmpdir/apertures.fits
+astarithmetic $aperturesraw set-i \
+ i 0 ne 1 fill-holes set-good \
+ i good i 1 + where -o$apertures
+
+
+
+
+
+# Extract each measurement column(s)
+# ----------------------------------
+#
+# The user gives each desired MakeCatalog option name as a value to the
+# '--measure' option here as a comma-separated list of values. But we want
+# to feed them into MakeCatalog (which needs each one of them to be
+# prefixed with '--' and separated by a space).
+finalmeasure=$(echo "$measure" \
+ | awk 'BEGIN{FS=","} \
+ END{for(i=1;i<=NF;++i) printf "--%s ", $i}')
+
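The conversion from the comma-separated `--measure` list to MakeCatalog options can be sketched standalone; the measurement names here are hypothetical user input:

```shell
# Turn 'a,b,c' into '--a --b --c ' for passing to MakeCatalog.
measure="mean,median,std"   # hypothetical '--measure' values
finalmeasure=$(echo "$measure" \
                   | awk 'BEGIN{FS=","} \
                          END{for(i=1;i<=NF;++i) printf "--%s ", $i}')
echo "$finalmeasure"   # -> --mean --median --std
```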
+
+
+
+
+# Set the used sigma-clipping parameters
+# --------------------------------------
+#
+# If not given, don't use anything and just use MakeCatalog's default
+# values.
+if [ x"$sigmaclip" = x ]; then
+ finalsigmaclip=""
+else
+ finalsigmaclip="--sigmaclip=$sigmaclip";
+fi
+
+
+
+
+
+# Obtain the radial profile
+# -------------------------
+#
+# The radial profile is obtained using MakeCatalog. In practice, we obtain
+# a catalogue using the segmentation image previously generated (the
+# elliptical apertures) and the original input image for measuring the
+# values.
+cat=$tmpdir/catalog.fits
+astmkcatalog $apertures -h1 --valuesfile=$values --valueshdu=1 \
+ --ids $finalmeasure $finalsigmaclip --output=$cat \
+ $quiet
+
+
+
+
+
+# Prepare the final output
+# ------------------------
+#
+# The raw MakeCatalog output isn't clear to users of this script (for
+# example the radius column is called 'OBJ_ID'!). Also, when oversampling
+# is requested we need to divide the radii by the over-sampling factor.
+#
+# But before anything, we need to set the options to print the other
+# columns untouched (we only want to change the first column).
+restcols=$(astfits $cat -h1 \
+ | awk '/^TFIELDS/{for(i=2;i<=$3;++i) printf "-c%d ", i}')
+if [ x"$oversample" = x ]; then
+ asttable $cat -c'arith OBJ_ID float32 1 -' $restcols -o$output \
+ --colmetadata=1,RADIUS,pix,"Radial distance"
+else
+ asttable $cat -c'arith OBJ_ID float32 '$oversample' /' $restcols \
+ -o$output --colmetadata=ARITH_2,RADIUS,pix,"Radial distance"
+fi
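The `-cN` option list for the untouched columns is built from the TFIELDS keyword of the catalog's FITS header. A standalone sketch, with a hypothetical header line in place of the `astfits` output:

```shell
# Build '-c2 -c3 ...' for every column after the first, from the TFIELDS
# keyword (hypothetical 4-column catalog header line).
line="TFIELDS = 4 / number of fields in each row"
restcols=$(echo "$line" \
               | awk '/^TFIELDS/{for(i=2;i<=$3;++i) printf "-c%d ", i}')
echo "$restcols"   # -> -c2 -c3 -c4
```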
+
+
+
+
+
+# Remove temporary files
+# ----------------------
+#
+# If the user does not specify to keep the temporary files with the option
+# `--keeptmp', then remove the whole directory.
+if [ $keeptmp = 0 ]; then
+ rm -rf $tmpdir
+fi
diff --git a/bin/segment/ui.c b/bin/segment/ui.c
index 773c323..62f8f0f 100644
--- a/bin/segment/ui.c
+++ b/bin/segment/ui.c
@@ -411,8 +411,10 @@ ui_prepare_inputs(struct segmentparams *p)
/* Read the input as a single precision floating point dataset. */
p->input = gal_array_read_one_ch_to_type(p->inputname, p->cp.hdu,
NULL, GAL_TYPE_FLOAT32,
- p->cp.minmapsize, p->cp.quietmmap);
- p->input->wcs = gal_wcs_read(p->inputname, p->cp.hdu, 0, 0,
+ p->cp.minmapsize,
+ p->cp.quietmmap);
+ p->input->wcs = gal_wcs_read(p->inputname, p->cp.hdu,
+ p->cp.wcslinearmatrix, 0, 0,
&p->input->nwcs);
p->input->ndim=gal_dimension_remove_extra(p->input->ndim,
p->input->dsize,
diff --git a/bin/statistics/statistics.c b/bin/statistics/statistics.c
index 48986eb..afacdc0 100644
--- a/bin/statistics/statistics.c
+++ b/bin/statistics/statistics.c
@@ -832,7 +832,8 @@ histogram_2d(struct statisticsparams *p)
cunit[0] = p->input->unit; cunit[1] = p->input->next->unit;
ctype[0] = histogram_2d_set_ctype(p->input->name, "X");
ctype[1] = histogram_2d_set_ctype(p->input->next->name, "Y");
- img->wcs=gal_wcs_create(crpix, crval, cdelt, pc, cunit, ctype, 2);
+ img->wcs=gal_wcs_create(crpix, crval, cdelt, pc, cunit, ctype, 2,
+ p->cp.wcslinearmatrix);
/* Write the output. */
output=statistics_output_name(p, suf, &isfits);
diff --git a/bin/statistics/ui.c b/bin/statistics/ui.c
index 0b2ad5c..7c2b7fc 100644
--- a/bin/statistics/ui.c
+++ b/bin/statistics/ui.c
@@ -971,7 +971,8 @@ ui_preparations(struct statisticsparams *p)
p->inputformat=INPUT_FORMAT_IMAGE;
p->input=gal_array_read_one_ch(p->inputname, cp->hdu, NULL,
cp->minmapsize, p->cp.quietmmap);
- p->input->wcs=gal_wcs_read(p->inputname, cp->hdu, 0, 0,
+ p->input->wcs=gal_wcs_read(p->inputname, cp->hdu,
+ p->cp.wcslinearmatrix, 0, 0,
&p->input->nwcs);
p->input->ndim=gal_dimension_remove_extra(p->input->ndim,
p->input->dsize,
diff --git a/bin/table/arithmetic.c b/bin/table/arithmetic.c
index dfa576c..b964535 100644
--- a/bin/table/arithmetic.c
+++ b/bin/table/arithmetic.c
@@ -159,7 +159,8 @@ arithmetic_init_wcs(struct tableparams *p, char *operator)
"for the '%s' operator", operator);
/* Read the WCS. */
- p->wcs=gal_wcs_read(p->wcsfile, p->wcshdu, 0, 0, &p->nwcs);
+ p->wcs=gal_wcs_read(p->wcsfile, p->wcshdu, p->cp.wcslinearmatrix,
+ 0, 0, &p->nwcs);
if(p->wcs==NULL)
error(EXIT_FAILURE, 0, "%s (hdu: %s): no WCS could be read by "
"WCSLIB", p->wcsfile, p->wcshdu);
diff --git a/bin/table/table.c b/bin/table/table.c
index 86fda2b..912e408 100644
--- a/bin/table/table.c
+++ b/bin/table/table.c
@@ -661,7 +661,7 @@ table_select_by_position(struct tableparams *p)
error(EXIT_SUCCESS, 0, "'--rowrandom' not activated because "
"the number of rows in the table at this stage (%zu) "
"is smaller than the number of requested random rows "
- "(%zu). You can supress this message with '--quiet'",
+ "(%zu). You can suppress this message with '--quiet'",
p->table->size, p->rowrandom);
return;
}
diff --git a/bin/table/ui.c b/bin/table/ui.c
index adbd834..ece4e24 100644
--- a/bin/table/ui.c
+++ b/bin/table/ui.c
@@ -1273,7 +1273,7 @@ ui_read_check_inputs_setup(int argc, char *argv[], struct
tableparams *p)
printf("Parameters used for '--randomrows':\n");
printf(" - Random number generator name: %s\n", p->rng_name);
printf(" - Random number generator seed: %lu\n", p->rng_seed);
- printf("(use '--quiet' to supress this starting message)\n");
+ printf("(use '--quiet' to suppress this starting message)\n");
}
}
diff --git a/bin/warp/ui.c b/bin/warp/ui.c
index a91fb14..72659c1 100644
--- a/bin/warp/ui.c
+++ b/bin/warp/ui.c
@@ -352,7 +352,8 @@ ui_check_options_and_arguments(struct warpparams *p)
p->cp.quietmmap);
/* Read the WCS and remove one-element wide dimension(s). */
- p->input->wcs=gal_wcs_read(p->inputname, p->cp.hdu, p->hstartwcs,
+ p->input->wcs=gal_wcs_read(p->inputname, p->cp.hdu,
+ p->cp.wcslinearmatrix, p->hstartwcs,
p->hendwcs, &p->input->nwcs);
p->input->ndim=gal_dimension_remove_extra(p->input->ndim,
p->input->dsize,
diff --git a/bootstrap.conf b/bootstrap.conf
index bbf7318..e2f2630 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
@@ -123,14 +123,6 @@ bootstrap_post_import_hook()
done
fi
- # With Autoconf 2.70, the 'as_echo' has been depreciated and will cause
- # an error with autoreconf. But unfortunately the ax_pthread test still
- # uses it. So until it is fixed there, we need to manually correct it
- # here.
- sed -e's|\$as_echo \"\$ac_link\"|AS_ECHO([\"\$ac_link\"])|' \
- $m4_base/ax_pthread.m4 > $m4_base/ax_pthread_tmp.m4
- mv $m4_base/ax_pthread_tmp.m4 $m4_base/ax_pthread.m4
-
# Hack in 'AC_LIB_HAVE_LINKFLAGS' so it doesn't search for shared
# libraries when '--disable-shared' is used.
sed 's|if test -n \"$acl_shlibext\"; then|if test -n \"\$acl_shlibext\" -a
\"X$enable_shared\" = \"Xyes\"; then|' bootstrapped/m4/lib-link.m4 >
bootstrapped/m4/lib-link_tmp.m4
diff --git a/configure.ac b/configure.ac
index 173763d..bd2e865 100644
--- a/configure.ac
+++ b/configure.ac
@@ -555,6 +555,12 @@ AC_DEFINE_UNQUOTED([GAL_CONFIG_HAVE_WCSLIB_OBSFIX],
[$has_wcslib_obsfix],
[WCSLIB comes with OBSFIX macro])
AC_SUBST(HAVE_WCSLIB_OBSFIX, [$has_wcslib_obsfix])
+# If the WCS library has the 'wcsccs' function.
+AC_CHECK_LIB([wcs], [wcsccs], [has_wcslib_wcsccs=1],
+ [has_wcslib_wcsccs=0; anywarnings=yes], [-lcfitsio -lm])
+AC_DEFINE_UNQUOTED([GAL_CONFIG_HAVE_WCSLIB_WCSCCS], [$has_wcslib_wcsccs],
+ [WCSLIB comes with wcsccs])
+AC_SUBST(HAVE_WCSLIB_WCSCCS, [$has_wcslib_wcsccs])
# If the pthreads library has 'pthread_barrier_wait'.
AC_CHECK_LIB([pthread], [pthread_barrier_wait], [has_pthread_barrier=1],
@@ -1115,6 +1121,17 @@ AS_IF([test x$enable_guide_message = xyes],
AS_ECHO([" operations you can ignore this warning."])
AS_ECHO([]) ])
+ AS_IF([test "x$has_wcslib_wcsccs" = "x0"],
+ [dependency_notice=yes
+ AS_ECHO([" - WCSLIB
(https://www.atnf.csiro.au/people/mcalabre/WCS) version"])
+ AS_ECHO([" on this system doesn't support conversion of
coordinate systems"])
+ AS_ECHO([" (through the 'wcsccs' function that was
introduced in WCSLIB 7.5, "])
+ AS_ECHO([" March 2021). For example converting from
equatorial J2000 to"])
+ AS_ECHO([" Galactic coordinates). This build won't crash but
the related"])
+ AS_ECHO([" functionalities in Gnuastro will be disabled. If
you don't need"])
+ AS_ECHO([" such operations you can ignore this warning."])
+ AS_ECHO([]) ])
+
AS_IF([test "x$has_libjpeg" = "xno"],
[dependency_notice=yes
AS_ECHO([" - libjpeg (http://ijg.org), could not be linked
with in your library"])
diff --git a/doc/Makefile.am b/doc/Makefile.am
index 12dc957..c0c4a43 100644
--- a/doc/Makefile.am
+++ b/doc/Makefile.am
@@ -153,6 +153,7 @@ dist_man_MANS = $(MAYBE_ARITHMETIC_MAN)
$(MAYBE_BUILDPROG_MAN) \
$(MAYBE_MKCATALOG_MAN) $(MAYBE_MKNOISE_MAN) $(MAYBE_MKPROF_MAN) \
$(MAYBE_NOISECHISEL_MAN) $(MAYBE_QUERY_MAN) $(MAYBE_SEGMENT_MAN) \
$(MAYBE_STATISTICS_MAN) $(MAYBE_TABLE_MAN) $(MAYBE_WARP_MAN) \
+ man/astscript-ds9-region.1 man/astscript-radial-profile.1 \
man/astscript-sort-by-night.1
@@ -224,11 +225,6 @@ man/astquery.1: $(top_srcdir)/bin/query/args.h
$(ALLMANSDEP)
$(MAYBE_HELP2MAN) -n "query remote data servers and download" \
--libtool $(toputildir)/query/astquery
-man/astscript-sort-by-night.1: $(top_srcdir)/bin/script/sort-by-night.in \
- $(ALLMANSDEP)
- $(MAYBE_HELP2MAN) -n "Sort input FITS files by night" \
- --libtool $(toputildir)/script/astscript-sort-by-night
-
man/astsegment.1: $(top_srcdir)/bin/segment/args.h $(ALLMANSDEP)
$(MAYBE_HELP2MAN) -n "segmentation based on signal structure" \
--libtool $(toputildir)/segment/astsegment
@@ -248,3 +244,23 @@ man/asttable.1: $(top_srcdir)/bin/table/args.h
$(ALLMANSDEP)
man/astwarp.1: $(top_srcdir)/bin/warp/args.h $(ALLMANSDEP)
$(MAYBE_HELP2MAN) -n "warp (transform) input dataset" \
--libtool $(toputildir)/warp/astwarp
+
+
+
+
+
+# The Scripts:
+man/astscript-ds9-region.1: $(top_srcdir)/bin/script/ds9-region.in \
+ $(ALLMANSDEP)
+ $(MAYBE_HELP2MAN) -n "Create an SAO DS9 region file from a table" \
+ --libtool $(toputildir)/script/astscript-ds9-region
+
+man/astscript-radial-profile.1: $(top_srcdir)/bin/script/radial-profile.in \
+ $(ALLMANSDEP)
+ $(MAYBE_HELP2MAN) -n "Create a radial profile of an object in an image"
\
+ --libtool
$(toputildir)/script/astscript-radial-profile
+
+man/astscript-sort-by-night.1: $(top_srcdir)/bin/script/sort-by-night.in \
+ $(ALLMANSDEP)
+ $(MAYBE_HELP2MAN) -n "Sort input FITS files by night" \
+ --libtool $(toputildir)/script/astscript-sort-by-night
diff --git a/doc/announce-acknowledge.txt b/doc/announce-acknowledge.txt
index 8ff3cba..aabddfd 100644
--- a/doc/announce-acknowledge.txt
+++ b/doc/announce-acknowledge.txt
@@ -1,8 +1,11 @@
Alphabetically ordered list to acknowledge in the next release.
Mark Calabretta
+Sepideh Eskandarlou
Raul Infante-Sainz
Alberto Madrigal
+Juan Miro
+Carlos Morales Socorro
Sylvain Mottet
Francois Ochsenbein
Samane Raji
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 54efea1..f8241ad 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -107,7 +107,7 @@ A copy of the license is included in the section entitled
``GNU Free Documentati
* astscript-sort-by-night: (gnuastro)Invoking astscript-sort-by-night. Options
to this script
-* astscript-make-ds9-reg: (gnuastro)Invoking astscript-make-ds9-reg. Options
to this script
+* astscript-ds9-region: (gnuastro)Invoking astscript-ds9-region. Options to
this script
@end direntry
@@ -202,6 +202,7 @@ To go to the sections or subsections, you have to click on
the menu entries that
* Data analysis:: Analyze images.
* Modeling and fittings:: Make and fit models.
* High-level calculations:: Physical calculations.
+* Installed scripts:: Installed scripts that operate like programs.
* Library:: Gnuastro's library of useful functions.
* Developing:: The development environment.
* Gnuastro programs list:: List and short summary of Gnuastro.
@@ -272,12 +273,9 @@ Detecting large extended targets
* Downloading and validating input data:: How to get and check the input data.
* NoiseChisel optimization:: Detect the extended and diffuse wings.
+* Image surface brightness limit:: Standards to quantify the noise level.
* Achieved surface brightness level:: Calculate the outer surface brightness.
-
-Downloading and validating input data
-
-* NoiseChisel optimization:: Optimize NoiseChisel to dig very deep.
-* Achieved surface brightness level:: Measure how much you detected.
+* Extract clumps and objects:: Find sub-structure over the detections.
Installation
@@ -328,7 +326,6 @@ Common program behavior
* Command-line:: How to use the command-line.
* Configuration files:: Values for unspecified variables.
* Getting help:: Getting more information on the go.
-* Installed scripts:: Installed Bash scripts, not compiled programs.
* Multi-threaded operations:: How threads are managed in Gnuastro.
* Numeric data types:: Different types and how to specify them.
* Memory management:: How memory is allocated (in RAM or HDD/SSD).
@@ -404,6 +401,7 @@ ConvertType
* Recognized file formats:: Recognized file formats
* Color:: Some explanations on color.
+* Aligning images with small WCS offsets:: When the WCS slightly differs.
* Invoking astconvertt:: Options and arguments to ConvertType.
Table
@@ -481,8 +479,6 @@ Data analysis
* Segment:: Segment detections based on signal structure.
* MakeCatalog:: Catalog from input and labeled images.
* Match:: Match two datasets.
-* Sort FITS files by night:: Sort and classify images in separate nights.
-* SAO DS9 region files from table:: Table's positional columns into DS9
region file.
Statistics
@@ -527,11 +523,20 @@ Invoking Segment
MakeCatalog
* Detection and catalog production:: Discussing why/how to treat these
separately
+* Brightness flux magnitude:: More on Magnitudes, surface brightness and etc.
* Quantifying measurement limits:: For comparing different catalogs.
* Measuring elliptical parameters:: Estimating elliptical parameters.
* Adding new columns to MakeCatalog:: How to add new columns.
* Invoking astmkcatalog:: Options and arguments to MakeCatalog.
+Quantifying measurement limits
+
+* Magnitude measurement error of each detection:: Derivation of mag error
equation
+* Completeness limit of each detection:: Possibility of detecting similar
objects?
+* Upper limit magnitude of each detection:: How reliable is your magnitude?
+* Surface brightness limit of image:: How deep is your data?
+* Upper limit magnitude of image:: How deep is your data for certain
footprint?
+
Invoking MakeCatalog
* MakeCatalog inputs and basic settings:: Input files and basic settings.
@@ -543,14 +548,6 @@ Match
* Invoking astmatch:: Inputs, outputs and options of Match
-Sort FITS files by night
-
-* Invoking astscript-sort-by-night:: Inputs and outputs to this script.
-
-SAO DS9 region files from table
-
-* Invoking astscript-make-ds9-reg:: How to call astscript-make-ds9-reg
-
Modeling and fitting
* MakeProfiles:: Making mock galaxies and stars.
@@ -560,7 +557,6 @@ MakeProfiles
* Modeling basics:: Astronomical modeling basics.
* If convolving afterwards:: Considerations for convolving later.
-* Brightness flux magnitude:: About these measures of energy.
* Profile magnitude:: Definition of total profile magnitude.
* Invoking astmkprof:: Inputs and Options for MakeProfiles.
@@ -608,6 +604,24 @@ Invoking CosmicCalculator
* CosmicCalculator basic cosmology calculations:: Like distance modulus,
distances and etc.
* CosmicCalculator spectral line calculations:: How they get affected by
redshift.
+Installed scripts
+
+* Sort FITS files by night:: Sort many files by date.
+* Generate radial profile:: Radial profile of an object in an image.
+* SAO DS9 region files from table:: Create ds9 region file from a table.
+
+Sort FITS files by night
+
+* Invoking astscript-sort-by-night:: Inputs and outputs to this script.
+
+Generate radial profile
+
+* Invoking astscript-radial-profile:: How to call astscript-radial-profile
+
+SAO DS9 region files from table
+
+* Invoking astscript-ds9-region:: How to call astscript-ds9-region
+
Library
* Review of library fundamentals:: Guide on libraries and linking.
@@ -844,7 +858,7 @@ In @ref{Tutorials} some real life examples of how these
programs might be used a
@node Science and its tools, Your rights, Quick start, Introduction
-@section Science and its tools
+@section Gnuastro manifesto: Science and its tools
History of science indicates that there are always inevitably unseen faults,
hidden assumptions, simplifications and approximations in all our theoretical
models, data acquisition and analysis techniques.
It is precisely these that will ultimately allow future generations to advance
the existing experimental and theoretical knowledge through their new solutions
and corrections.
@@ -880,7 +894,7 @@ This kind of subjective experience is prone to serious
misunderstandings about t
This attitude is further encouraged through non-free
software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly
written (or non-existent) scientific software manuals, and non-reproducible
papers@footnote{Where the authors omit many of the analysis/processing
``details'' from the paper by arguing that they would make the paper too
long/unreadable.
However, software engineers have been dealing with such issues for a long time.
There are thus software management solutions that allow us to supplement
papers with all the details necessary to exactly reproduce the result.
-For example see @url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746}
and @url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{
http://akhlaghi.org/reproducible-science.html, general discussion}.}.
+For example see Akhlaghi et al. (2021,
@url{https://arxiv.org/abs/2006.03018,arXiv:2006.03018}).}.
This approach to scientific software and methods only helps in producing
dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in
his personal knowledge and authority}''@footnote{Karl Popper. The logic of
scientific discovery. 1959.
Larger quote is given at the start of the PDF (for print) version of this
book.}.
@@ -916,11 +930,7 @@ Therefore, while it empowers the privileged individual who
has access to it, it
Exactly at the opposite end of the spectrum, Gnuastro's source code is
released under the GNU general public license (GPL) and this book is released
under the GNU free documentation license.
You are therefore free to distribute any software you create using parts of
Gnuastro's source code or text, or figures from this book, see @ref{Your
rights}.
-With these principles in mind, Gnuastro's developers aim to impose the
-minimum requirements on you (in computer science, engineering and even the
-mathematics behind the tools) to understand and modify any step of Gnuastro
-if you feel the need to do so, see @ref{Why C} and @ref{Program design
-philosophy}.
+With these principles in mind, Gnuastro's developers aim to impose the minimum
requirements on you (in computer science, engineering and even the mathematics
behind the tools) to understand and modify any step of Gnuastro if you feel the
need to do so, see @ref{Why C} and @ref{Program design philosophy}.
@cindex Brahe, Tycho
@cindex Galileo, Galilei
@@ -941,9 +951,16 @@ The same is true today: science cannot progress with a
black box, or poorly rele
The source code of a research is the new (abstractified) communication
language in science, understandable by humans @emph{and} computers.
Source code (in any programming language) is a language/notation designed to
express all the details that would be too tedious/long/frustrating to report in
spoken languages like English, similar to mathematic notation.
+@quotation
+An article about computational science [almost all sciences today] ... is not
the scholarship itself, it is merely advertising of the scholarship.
+The Actual Scholarship is the complete software development environment and
the complete set of instructions which generated the figures.
+@author Buckheit & Donoho, Lecture Notes in Statistics, Vol 103, 1996
+@end quotation
+
Today, the quality of the source code that goes into a scientific result (and
the distribution of that code) is as critical to scientific vitality and
integrity, as the quality of its written language/English used in
publishing/distributing its paper.
A scientific paper will not even be reviewed by any respectable journal if its
written in a poor language/English.
A similar level of quality assessment is thus increasingly becoming necessary
regarding the codes/methods used to derive the results of a scientific paper.
+For more on this, please see Akhlaghi et al. (2021) at
@url{https://arxiv.org/abs/2006.03018, arXiv:2006.03018}.
@cindex Ken Thomson
@cindex Stroustrup, Bjarne
@@ -975,7 +992,7 @@ Our future discoveries must be looked for in the sixth
place of decimals.
@cindex Puzzle solving scientist
@cindex Scientist, puzzle solver
-If scientists are considered to be more than mere ``puzzle''
solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific
Revolutions}, University of Chicago Press, 1962.} (simply adding to the
decimals of existing values or observing a feature in 10, 100, or 100000 more
galaxies or stars, as Kelvin and Michelson clearly believed), they cannot just
passively sit back and uncritically repeat the previous (observational or
theoretical) methods/tools on new data.
+If scientists are considered to be more than mere puzzle
solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific
Revolutions}, University of Chicago Press, 1962.} (simply adding to the
decimals of existing values or observing a feature in 10, 100, or 100000 more
galaxies or stars, as Kelvin and Michelson clearly believed), they cannot just
passively sit back and uncritically repeat the previous (observational or
theoretical) methods/tools on new data.
Today there is a wealth of raw telescope images ready (mostly for free) at the
finger tips of anyone who is interested with a fast enough internet connection
to download them.
The only thing lacking is new ways to analyze this data and dig out the
treasure that is lying hidden in them to existing methods and techniques.
@@ -1392,6 +1409,10 @@ When it is a last character in a line (the next
character is a new-line characte
@end itemize
+This is not a convention, but a by-product of the PDF building process of the
manual:
+In the PDF version of this manual, a single quote (or apostrophe) character in
the commands or codes is shown like this: @code{'}.
+Single quotes are sometimes necessary in combination with commands like
@code{awk} or @code{sed}, or when using Column arithmetic in Gnuastro's own
Table (see @ref{Column arithmetic}).
+Therefore when typing (recommended) or copy-pasting (not recommended) the
commands that have a @code{'}, please correct it to the single-quote (or
apostrophe) character, otherwise the command will fail.
@node Acknowledgments, , Conventions, Introduction
@@ -1649,10 +1670,11 @@ $ astmkprof -P
[[[ ... Truncated lines ... ]]]
# Columns, by info (see `--searchin'), or number (starting from 1):
- ccol 2 # Coordinate columns (one call for each dimension).
- ccol 3 # Coordinate columns (one call for each dimension).
+ ccol 2 # Coord. columns (one call for each dim.).
+ ccol 3 # Coord. columns (one call for each dim.).
fcol 4 # sersic (1), moffat (2), gaussian (3),
- # point (4), flat (5), circumference (6).
+ # point (4), flat (5), circumference (6),
+ # distance (7), radial-table (8).
rcol 5 # Effective radius or FWHM in pixels.
ncol 6 # Sersic index or Moffat beta.
pcol 7 # Position angle.
@@ -1884,9 +1906,9 @@ astconvolve --kernel=0_"$base".fits "$base".fits
# Scale the image back to the intended resolution.
astwarp --scale=1/5 --centeroncorner "$base"_convolved.fits
-# Crop the edges out (dimmed during convolution). ‘--section’ accepts
-# inclusive coordinates, so the start of start of the section must be
-# one pixel larger than its end.
+# Crop the edges out (dimmed during convolution). ‘--section’
+# accepts inclusive coordinates, so the start of the section
+# must be one pixel larger than its end.
st_edge=$(( edge + 1 ))
astcrop "$base"_convolved_scaled.fits --zeroisnotblank \
--mode=img --section=$st_edge:*-$edge,$st_edge:*-$edge
@@ -2233,6 +2255,15 @@ The second row is the coverage range along RA and Dec
(compare with the outputs
We can thus simply subtract the second from the first column and multiply it
with the difference of the fourth and third columns to calculate the image area.
We'll also multiply each by 60 to have the area in arc-minutes squared.
+@iftex
+@cartouche
+@noindent
+@strong{Single quotes in PDF format:} in the PDF version of this manual, a
single quote (or apostrophe) character in the commands or codes is shown like
this: @code{'}.
+Single quotes are sometimes necessary in combination with commands like
@code{awk} or @code{sed} (like the command below), or when using Column
arithmetic in Gnuastro's own Table (see @ref{Column arithmetic}).
+Therefore when typing (recommended) or copy-pasting (not recommended) the
commands that have a @code{'}, please correct it to the single-quote (or
apostrophe) character; otherwise the command will fail.
+@end cartouche
+@end iftex
+
@example
astfits flat-ir/xdf-f160w.fits --skycoverage --quiet \
| awk 'NR==2@{print ($2-$1)*60*($4-$3)*60@}'
@@ -3075,7 +3106,7 @@ We are now ready to finally run NoiseChisel on the three
filters and keep the ou
$ rm *.fits
$ mkdir nc
$ for f in f105w f125w f160w; do \
- astnoisechisel flat-ir/xdf-$f.fits --output=nc/xdf-$f.fits
+ astnoisechisel flat-ir/xdf-$f.fits --output=nc/xdf-$f.fits; \
done
@end example
@@ -3171,7 +3202,18 @@ The clumps are not affected by the hard-to-deblend and
low signal-to-noise diffu
From this step onward, we'll continue with clumps.
Having localized the regions of interest in the dataset, we are ready to do
measurements on them with @ref{MakeCatalog}.
-Besides the IDs, we want to measure (in this order) the Right Ascension (with
@option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}),
and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
+MakeCatalog is specialized and optimized for doing measurements over labeled
regions of an image.
+In other words, through MakeCatalog you can ``reduce'' an image to a table (a
catalog of certain properties of the objects in the image).
+Each requested measurement (over each label) will be given a column in the
output table.
+To see the full set of available measurements, run it with @option{--help} as
below (and scroll up); note that the measurements are classified by context.
+
+@example
+$ astmkcatalog --help
+@end example
+
+So let's select the properties we want to measure in this tutorial.
+First of all, we need to know which measurement belongs to which object or
clump, so we'll start with the @option{--ids} (read as: IDs@footnote{This
option is plural because we need two ID columns for identifying ``clumps'' in
the clumps catalog/table: the first column will be the ID of the host
``object'', and the second one will be the ID of the clump within that object.
In the ``objects'' catalog/table, only a single column will be returned for
this option.}).
+We also want to measure (in this order) the Right Ascension (with
@option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}),
and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
Furthermore, as mentioned above, we also want measurements on clumps, so we
also need to call @option{--clumpscat}.
The following command will make these measurements on Segment's F160W output
and write them in a catalog for each object and clump in a FITS table.
@@ -3212,9 +3254,10 @@ However, the measurements of each column are also done
on different pixels: the
Please open them and focus on one object to see for yourself.
This can bias the result if you match catalogs.
-An accurate color calculation can only be done when magnitudes are measured
from the same pixels on both images.
-Fortunately in these images, the Point spread function (PSF) are very similar,
allowing us to do this directly@footnote{When the PSFs between two images
differ largely, you would have to PSF-match the images before using the same
pixels for measurements.}.
-You can do this with MakeCatalog and is one of the reasons that NoiseChisel or
Segment don't generate a catalog at all (to give you the freedom of selecting
the pixels to do catalog measurements on).
+An accurate color calculation can only be done when magnitudes are measured
from the same pixels on all images and this can be done easily with MakeCatalog.
+In fact, this is one of the reasons that NoiseChisel or Segment don't generate
a catalog like most other detection/segmentation software.
+This gives you the freedom of selecting the pixels for measurement in any way
you like (from other filters, other software, manually, etc.).
+Fortunately in these images, the Point spread function (PSF) is very similar,
allowing us to use a single labeled image output for all filters@footnote{When
the PSFs between two images differ largely, you would have to PSF-match the
images before using the same pixels for measurements.}.
The F160W image is deeper, thus providing better detection/segmentation, and
redder, thus observing smaller/older stars and representing more of the mass in
the galaxies.
We will thus use the F160W filter as a reference and use its segment labels to
identify which pixels to use for which objects/clumps.
@@ -3232,8 +3275,9 @@ $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec
--magnitude --sn \
@end example
After running the commands above, look into what MakeCatalog printed on the
command-line.
-You can see that (as requested) the object and clump labels for both were
taken from the respective extensions in @file{seg/xdf-f160w.fits}, while the
values and Sky standard deviation were taken from @file{nc/xdf-f105w.fits} and
@file{nc/xdf-f125w.fits}.
-Since we used the same labeled image on both filters, the number of rows in
both catalogs are now identical.
+You can see that (as requested) the object and clump pixel labels in both were
taken from the respective extensions in @file{seg/xdf-f160w.fits}.
+However, the pixel values and pixel Sky standard deviation were respectively
taken from @file{nc/xdf-f105w.fits} and @file{nc/xdf-f125w.fits}.
+Since we used the same labeled image on all filters, the number of rows in
both catalogs is now identical.
Let's have a look:
@example
@@ -3242,11 +3286,12 @@ $ asttable cat/xdf-f125w-on-f160w-lab.fits -hCLUMPS -i
$ asttable cat/xdf-f160w.fits -hCLUMPS -i
@end example
-Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in the
FITS headers, or lines starting with @code{#} in plain text) contain some
important information about the input datasets and other useful info (for
example pixel area or per-pixel surface brightness limit).
-You can see them with this command:
+Finally, MakeCatalog also does basic calculations on the full dataset
(independent of each labeled region, but related to the whole dataset), for
example the pixel area or the per-pixel surface brightness limit.
+They are stored as keywords in the FITS headers (or lines starting with
@code{#} in plain text).
+You can see them with this command (for more, see @ref{Image surface
brightness limit} in the next tutorial):
@example
-$ astfits cat/xdf-f160w.fits -h1 | grep COMMENT
+$ astfits cat/xdf-f160w.fits -h1
@end example
@@ -4233,7 +4278,9 @@ Due to its more peculiar low surface brightness
structure/features, we'll focus
@menu
* Downloading and validating input data:: How to get and check the input data.
* NoiseChisel optimization:: Detect the extended and diffuse wings.
+* Image surface brightness limit:: Standards to quantify the noise level.
* Achieved surface brightness level:: Calculate the outer surface brightness.
+* Extract clumps and objects:: Find sub-structure over the detections.
@end menu
@node Downloading and validating input data, NoiseChisel optimization,
Detecting large extended targets, Detecting large extended targets
@@ -4318,13 +4365,7 @@ Here, we don't need the compressed file any more, so
we'll just let @command{bun
$ bunzip2 r.fits.bz2
@end example
-
-@menu
-* NoiseChisel optimization:: Optimize NoiseChisel to dig very deep.
-* Achieved surface brightness level:: Measure how much you detected.
-@end menu
-
-@node NoiseChisel optimization, Achieved surface brightness level, Downloading
and validating input data, Detecting large extended targets
+@node NoiseChisel optimization, Image surface brightness limit, Downloading
and validating input data, Detecting large extended targets
@subsection NoiseChisel optimization
In @ref{Detecting large extended targets} we downloaded the single exposure
SDSS image.
Let's see how NoiseChisel operates on it with its default parameters:
@@ -4568,19 +4609,251 @@ However, given the many problems in existing ``smart''
solutions, such automatic
So even when they are implemented, we would strongly recommend quality checks
for a robust analysis.
@end cartouche
-@node Achieved surface brightness level, , NoiseChisel optimization,
Detecting large extended targets
-@subsection Achieved surface brightness level
+@node Image surface brightness limit, Achieved surface brightness level,
NoiseChisel optimization, Detecting large extended targets
+@subsection Image surface brightness limit
+@cindex Surface brightness limit
+@cindex Limit, surface brightness
In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel for a
single-exposure SDSS image of the M51 group.
-Let's measure how deep we carved the signal out of noise.
-For this measurement, we'll need to estimate the average flux on the outer
edges of the detection.
-Fortunately all this can be done with a few simple commands (and no
higher-level language mini-environments like Python or IRAF) using
@ref{Arithmetic} and @ref{MakeCatalog}.
+When presenting your detection results in a paper or scientific conference,
usually the first thing that someone will ask (if you don't explicitly say it!),
is the dataset's @emph{surface brightness limit} (a standard measure of the
noise level), and your target's surface brightness (a measure of the signal,
either in the center or outskirts, depending on context).
+For more on the basics of these important concepts, please see
@ref{Quantifying measurement limits}.
+Here, we'll measure these values for this image.
+
+Let's start by measuring the surface brightness limit: mask all the detected
pixels and have a look at the noise distribution, with the
@command{astarithmetic} and @command{aststatistics} commands below.
+
+@example
+$ astarithmetic r_detected.fits -hINPUT-NO-SKY set-in \
+ r_detected.fits -hDETECTIONS set-det \
+ in det nan where -odet-masked.fits
+$ ds9 det-masked.fits
+$ aststatistics det-masked.fits
+@end example
+
+@noindent
+From the ASCII histogram, we see that the distribution is roughly symmetric.
+We can also quantify this by measuring the skewness (the difference between
the mean and the median, divided by the standard deviation):
+
+@example
+$ aststatistics det-masked.fits --mean --median --std \
+ | awk '@{print ($1-$2)/$3@}'
+@end example
+
+@noindent
+This shows that the mean is larger than the median by @mymath{0.08\sigma}; in
other words, as we saw in @ref{NoiseChisel optimization}, a very small residual
signal still remains in the undetected regions (it was up to you, as an
exercise, to improve it).
+So let's continue with this value.
+Now, we will use the masked image and the surface brightness limit equation in
@ref{Quantifying measurement limits} to measure the @mymath{3\sigma} surface
brightness limit over an area of @mymath{25 \rm{arcsec}^2}:
+
+@example
+$ nsigma=3
+$ zeropoint=22.5
+$ areaarcsec2=25
+$ std=$(aststatistics det-masked.fits --sigclip-std)
+$ pixarcsec2=$(astfits det-masked.fits --pixelscale --quiet \
+ | awk '@{print $3*3600*3600@}')
+$ astarithmetic --quiet $nsigma $std x \
+ $areaarcsec2 $pixarcsec2 x \
+ sqrt / $zeropoint counts-to-mag
+26.0241
+@end example
+
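In symbols, the reverse-polish command above evaluates the relation below (a
sketch only, written from the steps of the command itself: @mymath{\sigma} is
the sigma-clipped per-pixel standard deviation, @mymath{p} the pixel area in
arcsec@mymath{^2}, @mymath{A} the requested area and @mymath{z} the zero
point):

```latex
% Sketch of the surface brightness limit evaluated by the Arithmetic
% command above: sigma = per-pixel standard deviation, p = pixel area
% (arcsec^2), A = requested area (arcsec^2), z = zero point magnitude.
SB_{n\sigma,A} = -2.5 \log_{10}\!\left(\frac{n\,\sigma}{\sqrt{A\,p}}\right) + z
```

The @mymath{\sqrt{A\,p}} term comes from averaging the per-pixel noise over the
@mymath{A/p} pixels that cover the requested area.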
+The customizable steps above are good for any type of mask.
+For example, your field of view may contain a very deep part, so you would
need to mask all the shallow parts @emph{as well as} the detections before
these steps.
+But when your image has a single depth (like this one), there is a much
simpler method to obtain the same value through MakeCatalog (when the standard
deviation image is made by NoiseChisel).
+NoiseChisel has already calculated the minimum (@code{MINSTD}), maximum
(@code{MAXSTD}) and median (@code{MEDSTD}) standard deviation within the tiles
during its processing and has stored them as FITS keywords within the
@code{SKY_STD} HDU.
+You can see them by piping all the keywords in this HDU into @command{grep}.
+In Grep, each @samp{.} represents one character that can be anything, so
@code{M..STD} will match all three keywords mentioned above.
+
+@example
+$ astfits r_detected.fits --hdu=SKY_STD | grep 'M..STD'
+@end example
+
+The @code{MEDSTD} value is very similar to the standard deviation derived
above, so we can safely use it instead of having to mask and run Statistics.
+In fact, MakeCatalog also uses this keyword and will report the dataset's
@mymath{n\sigma} surface brightness limit as keywords in the output (not as
measurement columns, since it is related to the noise, not the labeled signal):
+
+@example
+$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
+ --forcereadstd --ids
+@end example
+
+@noindent
+Before looking into the measured surface brightness limits, let's review some
important points about this call to MakeCatalog:
+@itemize
+@item
+We are only concerned with the noise (not the signal), so we don't ask for any
further measurements, because they can unnecessarily slow it down.
+However, MakeCatalog requires at least one column, so we'll only ask for the
@option{--ids} column (which doesn't need any measurement!).
+The output catalog will therefore have a single row and a single column, with
1 as its value@footnote{Recall that NoiseChisel's output is a binary image:
0-valued pixels are noise and 1-valued pixels are signal.
+NoiseChisel doesn't identify sub-structure over the signal, this is the job of
Segment, see @ref{Extract clumps and objects}.}.
+@item
+If we don't ask for any noise-related column (for example the signal-to-noise
ratio column with @option{--sn}, among other noise-related columns),
MakeCatalog is not going to read the noise standard deviation image (again, to
speed up its operation when it is redundant).
+We are thus using the @option{--forcereadstd} option (short for ``force read
standard deviation image'') here so it is ready for the surface brightness
limit measurements that are written as keywords.
+@end itemize
+
+With the command below you can see all the keywords that were measured and
stored with the table.
+Notice the group of keywords that are under the ``Surface brightness limit
(SBL)'' title.
+
+@example
+$ astfits sbl.fits -h1
+@end example
+
+@noindent
+Since all the keywords of interest here start with @code{SBL}, we can get a
cleaner view with this command:
+
+@example
+$ astfits sbl.fits -h1 | grep ^SBL
+@end example
+
+Notice how @code{SBLSTD} has the same value as NoiseChisel's @code{MEDSTD}
keyword above.
+Using @code{SBLSTD}, MakeCatalog has determined the @mymath{n\sigma} surface
brightness limiting magnitude in these header keywords.
+The multiple of @mymath{\sigma}, or @mymath{n}, is the value of the
@code{SBLNSIG} keyword, which you can change with the @option{--sfmagnsigma}
option.
+The surface brightness limiting magnitude within a pixel, and within a
pixel-agnostic area of @code{SBLAREA} arcsec@mymath{^2} (stored in
@code{SBLMAG}), are also reported.
+
+@cindex SDSS
+@cindex Nanomaggy
+@cindex Zero point magnitude
+You will notice that the two surface brightness limiting magnitudes above have
values around 3 and 4 (which is not correct!).
+This is because we haven't given a zero point magnitude to MakeCatalog, so it
uses the default value of @code{0}.
+SDSS image pixel values are calibrated in units of ``nanomaggy'' which are
defined to have a zero point magnitude of 22.5@footnote{From
@url{https://www.sdss.org/dr12/algorithms/magnitudes}}.
+So with the first command below we give the zero point value, and with the
second we can see the surface brightness limiting magnitudes with the correct
values (around 25 and 26).
+
+@example
+$ astmkcatalog r_detected.fits -hDETECTIONS --zeropoint=22.5 \
+ --output=sbl.fits --forcereadstd --ids
+$ astfits sbl.fits -h1 | grep ^SBL
+@end example
+
+As you see from @code{SBLNSIG} and @code{SBLAREA}, the default multiple of
sigma is 1 and the default area is 1 arcsec@mymath{^2}.
+Usually higher values are used for these two parameters.
+Following the manual example we did above, you can ask for the multiple of
sigma to be 3 and the area to be 25 arcsec@mymath{^2}:
+
+@example
+$ astmkcatalog r_detected.fits -hDETECTIONS --zeropoint=22.5 \
+ --output=sbl.fits --sfmagarea=25 --sfmagnsigma=3 \
+ --forcereadstd --ids
+$ astfits sbl.fits -h1 | awk '/^SBLMAG /@{print $3@}'
+26.02296
+@end example
+
+You see that the value is identical to the custom surface brightness limiting
magnitude we measured above (a difference of @mymath{0.00114} magnitudes is
negligible; it is far smaller than the typical errors in the zero point
magnitude or magnitude measurements).
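The quoted difference is just the arithmetic on the two values reported above;
if you want to verify it on the command-line (a quick sanity check, not part of
the analysis):

```shell
## Difference between the manual measurement (26.0241) and
## MakeCatalog's reported limit (26.02296).
awk 'BEGIN{printf "%.5f\n", 26.0241 - 26.02296}'
```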
+But it is much easier to have MakeCatalog do this measurement, because these
values will be appended (as keywords) to your final catalog of objects within
that image.
+
+@cartouche
+@noindent
+@strong{Custom STD for MakeCatalog's Surface brightness limit:} You can
manually change/set the value of the @code{MEDSTD} keyword in your input STD
image with @ref{Fits}:
+
+@example
+$ std=$(aststatistics masked.fits --sigclip-std)
+$ astfits noisechisel.fits -hSKY_STD --update=MEDSTD,$std
+@end example
+
+With this change, MakeCatalog will use your custom standard deviation for the
surface brightness limit.
+This is necessary in scenarios where your image has multiple depths, and
during your masking you also mask the shallow regions (as well as the
detections, of course).
+@end cartouche
+
+We have successfully measured the image's @mymath{3\sigma} surface brightness
limiting magnitude over 25 arcsec@mymath{^2}.
+However, as discussed in @ref{Quantifying measurement limits}, this value is
just an extrapolation of the per-pixel standard deviation.
+Issues like correlated noise will cause the real noise over a large area to be
different.
+So for a more robust measurement, let's use the upper-limit magnitude of a
similarly sized region.
+For more on the upper-limit magnitude, see the respective item in
@ref{Quantifying measurement limits}.
+
+In summary, the upper-limit measurements involve randomly placing the
footprint of an object in undetected parts of the image many times.
+This results in a random distribution of brightness measurements; the standard
deviation of that distribution is then converted into magnitudes.
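The conversion described above can be sketched as below (an assumed form for
illustration, not quoted from the implementation: @mymath{\sigma_{up}} is the
standard deviation of the randomly placed aperture sums, @mymath{n} the
requested multiple, @mymath{A} the aperture area in arcsec@mymath{^2} and
@mymath{z} the zero point; see @ref{Quantifying measurement limits} for the
exact definition):

```latex
% Sketch of the upper-limit surface brightness over an aperture of
% area A (arcsec^2): sigma_up is the standard deviation of the sums
% measured over many randomly placed apertures.
SB_{\rm up} = -2.5 \log_{10}\!\left(\frac{n\,\sigma_{\rm up}}{A}\right) + z
```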
+To be comparable with the results above, let's make a circular aperture that
has an area of 25 arcsec@mymath{^2} (thus with a radius of @mymath{2.82095}
arcsec).
+
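The radius quoted above follows directly from @mymath{A=\pi r^2}; you can
verify the number quickly (this check is not part of the processing):

```shell
## Radius (in arcsec) of a circle covering 25 arcsec^2 (A = pi r^2).
awk 'BEGIN{A=25; printf "%.5f\n", sqrt(A/3.14159265358979)}'
```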
+@example
+zeropoint=22.5
+r_arcsec=2.82095
+
+## Convert the radius (in arcseconds) to pixels.
+r_pixel=$(astfits r_detected.fits --pixelscale -q \
+ | awk '@{print '$r_arcsec'/($1*3600)@}')
+
+## Make a circular aperture at pixel (100,100); the position is irrelevant.
+echo "1 100 100 5 $r_pixel 0 0 1 1 1" \
+ | astmkprof --background=r_detected.fits \
+ --clearcanvas --mforflatpix --type=uint8 \
+ --output=lab.fits
+
+## Do the upper-limit measurement, using all NoiseChisel's
+## detections as a mask (so the random apertures avoid them).
+astmkcatalog lab.fits -h1 --zeropoint=$zeropoint -osbl.fits \
+ --sfmagarea=25 --sfmagnsigma=3 --forcereadstd \
+ --valuesfile=r_detected.fits --valueshdu=INPUT-NO-SKY \
+ --upmaskfile=r_detected.fits --upmaskhdu=DETECTIONS \
+ --upnsigma=3 --checkuplim=1 --upnum=1000 \
+ --ids --upperlimitsb
+@end example
+
+The @file{sbl.fits} catalog now contains the upper-limit surface brightness
for a circle with an area of 25 arcsec@mymath{^2}.
+You can check the value with the command below; the great thing is that you
now have both the surface brightness limiting magnitude in the headers
discussed above, and the upper-limit surface brightness within the table.
+You can also add more profiles with different shapes and sizes if necessary.
+Of course, you can also use @option{--upperlimitsb} on your actual science
objects and clumps to get an object-specific or clump-specific value.
+
+@example
+$ asttable sbl.fits -cUPPERLIMIT_SB
+25.9119
+@end example
+
+@cindex Random number generation
+@cindex Seed, random number generator
+@noindent
+You will get a slightly different value from the command above.
+In fact, if you run the MakeCatalog command again and look at the measured
upper-limit surface brightness, it will be slightly different from your first
trial!
+Please try exactly the same MakeCatalog command above a few times to see how
it changes.
+
+This is because of the @emph{random} factor in the upper-limit measurements:
every time you run it, different random points will be checked, resulting in a
slightly different distribution.
+You can decrease the random scatter by increasing the number of random checks
(for example setting @option{--upnum=100000}, compared to 1000 in the command
above).
+But this will be slower and the results won't be exactly reproducible.
+The only way to ensure you get an identical result later is to fix the random
number generator function and seed like the command below@footnote{You can use
any integer for the seed. One recommendation is to run MakeCatalog without
@option{--envseed} once and use the randomly generated seed that is printed on
the terminal.}.
+This is a very important point regarding any statistical process involving
random numbers, please see @ref{Generating random numbers}.
+
+@example
+export GSL_RNG_TYPE=ranlxs1
+export GSL_RNG_SEED=1616493518
+astmkcatalog lab.fits -h1 --zeropoint=$zeropoint -osbl.fits \
+ --sfmagarea=25 --sfmagnsigma=3 --forcereadstd \
+ --valuesfile=r_detected.fits --valueshdu=INPUT-NO-SKY \
+ --upmaskfile=r_detected.fits --upmaskhdu=DETECTIONS \
+ --upnsigma=3 --checkuplim=1 --upnum=1000 \
+ --ids --upperlimitsb --envseed
+@end example
+
+But where do all the random apertures of the upper-limit measurement fall on
the image?
+It is good to actually inspect their locations, to get a better understanding
of the process and also to detect possible bugs/biases.
+When MakeCatalog is run with the @option{--checkuplim} option, it will print
all the random locations and their measured brightness as a table in a file
with the suffix @file{_upcheck.fits}.
+With the first command below you can use Gnuastro's @command{asttable} and
@command{astscript-ds9-region} to convert the successful aperture locations
into a DS9 region file, and with the second you can load the region file over
the detections and sky-subtracted image to visually see where they are.
+
+@example
+## Create a DS9 region file from the check table (activated
+## with '--checkuplim')
+asttable lab_upcheck.fits --noblank=RANDOM_SUM \
+ | astscript-ds9-region -c1,2 --mode=img \
+ --radius=$r_pixel
+
+## Have a look at the regions in relation with NoiseChisel's
+## detections.
+ds9 r_detected.fits[INPUT-NO-SKY] -regions load ds9.reg
+ds9 r_detected.fits[DETECTIONS] -regions load ds9.reg
+@end example
+
+In this example, we were looking at a single-exposure image that has no
correlated noise.
+Because of this, the surface brightness limit and the upper-limit surface
brightness are very close.
+They will have a bigger difference on deep datasets with stronger correlated
noise (which are the result of stacking many individual exposures).
+As an exercise, please try measuring the upper-limit surface brightness level
and surface brightness limit for the deep HST data that we used in the previous
tutorial (@ref{General program usage tutorial}).
+
+@node Achieved surface brightness level, Extract clumps and objects, Image
surface brightness limit, Detecting large extended targets
+@subsection Achieved surface brightness level
+
+In @ref{NoiseChisel optimization} we customized NoiseChisel for a
single-exposure SDSS image of the M51 group and in @ref{Image surface
brightness limit} we measured the surface brightness limit and the upper-limit
surface brightness level (which are both measures of the noise level).
+In this section, let's do some measurements on the outermost edges of the M51
group to see how they relate to the noise measurements found in the previous
section.
@cindex Opening
-First, let's separate each detected region, or give a unique label/counter to
all the connected pixels of NoiseChisel's detection map:
+For this measurement, we'll need to estimate the average flux on the outer
edges of the detection.
+Fortunately all this can be done with a few simple commands using
@ref{Arithmetic} and @ref{MakeCatalog}.
+First, let's separate each detected region, or give a unique label/counter to
all the connected pixels of NoiseChisel's detection map with the command below.
+Recall that with the @code{set-} operator, the popped operand will be given a
name (@code{det} in this case) for easy usage later.
@example
-$ det="r_detected.fits -hDETECTIONS"
-$ astarithmetic $det 2 connected-components -olabeled.fits
+$ astarithmetic r_detected.fits -hDETECTIONS set-det \
+ det 2 connected-components -olabeled.fits
@end example
You can find the label of the main galaxy visually (by opening the image and
hovering your mouse over the M51 group's label).
@@ -4608,8 +4881,16 @@ $ id=$(asttable cat.fits --sort=AREA_FULL --tail=1
--column=OBJ_ID)
$ echo $id
@end example
+@noindent
+We can now use the @code{id} variable to reject all other detections:
+
+@example
+$ astarithmetic labeled.fits $id eq -oonly-m51.fits
+@end example
+
+Open the image and have a look.
To separate the outer edges of the detections, we'll need to ``erode'' the M51
group detection.
-We'll erode three times (to have more pixels and thus less scatter), using a
maximum connectivity of 2 (8-connected neighbors).
+So in the same Arithmetic command as above, we'll erode three times (to have
more pixels and thus less scatter), using a maximum connectivity of 2
(8-connected neighbors).
We'll then save the output in @file{eroded.fits}.
@example
@@ -4620,8 +4901,7 @@ $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2
erode \
@noindent
In @file{labeled.fits}, we can now set all the 1-valued pixels of
@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the
previous command.
We'll need the pixels of the M51 group in @code{labeled.fits} two times: once
to do the erosion, another time to find the outer pixel layer.
-To do this (and be efficient and more readable) we'll use the @code{set-i}
operator.
-In the command below, it will save/set/name the pixels of the M51 group as the
`@code{i}'.
+To do this (and be efficient and more readable) we'll use the @code{set-i}
operator (to give this image the name `@code{i}').
In this way we can use it any number of times afterwards, while only reading
it from disk and finding M51's pixels once.
@example
@@ -4634,58 +4914,74 @@ You'll see that the detected edge of the M51 group is
now clearly visible.
You can use @file{edge.fits} to mark (set to blank) this boundary on the input
image and get a visual feeling of how far it extends:
@example
-$ astarithmetic r.fits edge.fits nan where -oedge-masked.fits -h0
+$ astarithmetic r.fits -h0 edge.fits nan where -oedge-masked.fits
@end example
To quantify how deep we have detected the low surface brightness regions (in
units of signal-to-noise ratio), we'll use the command below.
In short, it just divides all the non-zero pixels of @file{edge.fits} in the
Sky-subtracted input (first extension of NoiseChisel's output) by the Sky
standard deviation of the same pixel.
This will give us a signal-to-noise ratio image.
The mean value of this image shows the level of surface brightness that we
have achieved.
-
You can also break the command below into multiple calls to Arithmetic and
create temporary files to understand it better.
However, if you have a look at @ref{Reverse polish notation} and
@ref{Arithmetic operators}, you should be able to easily understand what your
computer does when you run this command@footnote{@file{edge.fits} (extension
@code{1}) is a binary (0 or 1 valued) image.
-Applying the @code{not} operator on it, just flips all its pixels.
-Through the @code{where} operator, we are setting all the newly 1-valued
pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY}) to NaN/blank.
-In the second line, we are dividing all the non-blank values by
@file{r_detected.fits} (extension @code{SKY_STD}).
-This gives the signal-to-noise ratio for each of the pixels on the boundary.
+Applying the @code{not} operator on it, just flips all its pixels (from
@code{0} to @code{1} and vice-versa).
+Using the @code{where} operator, we are then setting all the newly 1-valued
pixels (pixels that aren't on the edge) to NaN/blank in the sky-subtracted
input image (@file{r_detected.fits}, extension @code{INPUT-NO-SKY}, which we
call @code{skysub}).
+We are then dividing all the non-blank pixels (only those on the edge) by the
sky standard deviation (@file{r_detected.fits}, extension @code{SKY_STD}, which
we called @code{skystd}).
+This gives the signal-to-noise ratio (S/N) for each of the pixels on the
boundary.
Finally, with the @code{meanvalue} operator, we are taking the mean value of
all the non-blank pixels and reporting that as a single number.}.
@example
-$ edge="edge.fits -h1"
-$ skystd="r_detected.fits -hSKY_STD"
-$ skysub="r_detected.fits -hINPUT-NO-SKY"
-$ astarithmetic $skysub $skystd / $edge not nan where \
- meanvalue --quiet
+$ astarithmetic edge.fits -h1 set-edge \
+ r_detected.fits -hSKY_STD set-skystd \
+ r_detected.fits -hINPUT-NO-SKY set-skysub \
+ skysub skystd / edge not nan where meanvalue --quiet
@end example
@cindex Surface brightness
-We have thus detected the wings of the M51 group down to roughly 1/3rd of the
noise level in this image! But the signal-to-noise ratio is a relative
measurement.
+We have thus detected the wings of the M51 group down to roughly 1/3rd of the
noise level in this image, which is a very good achievement!
+But the per-pixel S/N is a relative measurement.
Let's also measure the depth of our detection in absolute surface brightness
units; or magnitudes per square arc-seconds (see @ref{Brightness flux
magnitude}).
-Fortunately Gnuastro's MakeCatalog does this operation easily.
-SDSS image pixel values are calibrated in units of ``nanomaggy'', so the zero
point magnitude is 22.5@footnote{From
@url{https://www.sdss.org/dr12/algorithms/magnitudes}}.
+We'll also ask for the S/N and magnitude of the full edge we have defined.
+Fortunately, doing this is very easy with Gnuastro's MakeCatalog:
@example
-astmkcatalog edge.fits -h1 --valuesfile=r_detected.fits \
- --zeropoint=22.5 --ids --surfacebrightness
-asttable edge_cat.fits
+$ astmkcatalog edge.fits -h1 --valuesfile=r_detected.fits \
+ --zeropoint=22.5 --ids --surfacebrightness --sn \
+ --magnitude
+$ asttable edge_cat.fits
+1 25.6971 55.2406 15.8994
@end example
-We have thus reached an outer surface brightness of @mymath{25.69}
magnitudes/arcsec@mymath{^2} (second column in @file{edge_cat.fits}) on this
single exposure SDSS image!
+We have thus reached an outer surface brightness of @mymath{25.70}
magnitudes/arcsec@mymath{^2} (second column in @file{edge_cat.fits}) on this
single exposure SDSS image!
+This is very similar to the surface brightness limit measured in @ref{Image
surface brightness limit} (which is a big achievement!).
+But another point in the result above is very interesting: the total S/N of
the edge is @mymath{55.24}, with a total edge magnitude@footnote{You can run
MakeCatalog on @file{only-m51.fits} instead of @file{edge.fits} to see the full
magnitude of the M51 group in this image.} of 15.90!
+This is very large for such a faint signal (recall that the mean S/N per pixel
was 0.32) and shows a very important point in the study of galaxies:
+While the per-pixel signal in their outer edges may be very faint (and
invisible to the eye in the noise), a lot of signal lies deeply buried in the
noise.
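The jump from a mean per-pixel S/N of 0.32 to a total S/N of 55.24 can be checked with a back-of-the-envelope estimate: assuming uncorrelated noise and roughly uniform signal, summing N pixels multiplies the per-pixel S/N by the square root of N (the signal adds as N, the noise only as sqrt(N)). The sketch below is that simplification, not MakeCatalog's actual measurement:

```python
snr_pixel = 0.32    # mean per-pixel S/N measured above
snr_total = 55.24   # total S/N reported by MakeCatalog

# Idealized model: snr_total = sqrt(n_pixels) * snr_pixel, so the
# number of (assumed independent) edge pixels needed to explain it:
n_pixels = (snr_total / snr_pixel) ** 2
print(round(n_pixels))   # on the order of 30,000 pixels
```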
+
In interpreting this value, you should just have in mind that NoiseChisel
works based on the contiguity of signal in the pixels.
-Therefore the larger the object (with a similarly diffuse emission), the
deeper NoiseChisel can carve it out of the noise.
+Therefore the larger the object, the deeper NoiseChisel can carve it out of
the noise (for the same outer surface brightness).
In other words, this reported depth is the depth we have reached for this
object in this dataset, processed with this particular NoiseChisel
configuration.
If the M51 group in this image was larger/smaller than this (the field of view
was smaller/larger), or if the image was from a different instrument, or if we
had used a different configuration, we would go deeper/shallower.
-To continue your analysis of such datasets with extended emission, you can use
@ref{Segment} to identify all the ``clumps'' over the diffuse regions:
background galaxies and foreground stars.
+
+@node Extract clumps and objects, , Achieved surface brightness level,
Detecting large extended targets
+@subsection Extract clumps and objects (Segmentation)
+In @ref{NoiseChisel optimization} we found a good detection map over the
image, so pixels harboring signal have been differentiated from those that
don't.
+For noise-related measurements like the surface brightness limit, this is fine.
+However, after finding the pixels with signal, you are most likely interested
in knowing the sub-structure within them.
+For example, how many star-forming regions (those bright dots along the spiral
arms) of M51 are within this image?
+What are the colors of each of these star-forming regions?
+In the outermost wings of M51, which pixels belong to background galaxies and
foreground stars?
+And many more similar questions.
+To address these questions, you can use @ref{Segment} to identify all the
``clumps'' and ``objects'' over the detection.
@example
$ astsegment r_detected.fits --output=r_segmented.fits
-$ ds9 -mecube r_segmented.fits -zscale -cmap sls -zoom to fit
+$ ds9 -mecube r_segmented.fits -cmap sls -zoom to fit -scale limits 0 2
@end example
@cindex DS9
@cindex SAO DS9
-Open the output @file{r_segmented.fits} as a multi-extension data cube like
before and flip through the first and second extensions to see the detected
clumps (all pixels with a value larger than 1).
+Open the output @file{r_segmented.fits} as a multi-extension data cube with
the second command above and flip through the first and second extensions;
zoom in to the spiral arms of M51 and see the detected clumps (all pixels with
a value larger than 1 in the second extension).
To optimize the parameters and make sure you have detected what you wanted, we
recommend visually inspecting the detected clumps on the input image.
For visual inspection, you can make a simple shell script like below.
@@ -4706,14 +5002,11 @@ set -u # Stop execution when a variable is not
initialized.
# Default output is `$1_cat.fits'.
astmkcatalog $1.fits --clumpscat --ids --ra --dec
-# Use Gnuastro's Table program to read the RA and Dec columns of the
-# clumps catalog (in the `CLUMPS' extension). Then pipe the columns
-# to AWK for saving as a DS9 region file.
-asttable $1"_cat.fits" -hCLUMPS -cRA,DEC \
- | awk 'BEGIN @{ print "# Region file format: DS9 version 4.1"; \
- print "global color=green width=1"; \
- print "fk5" @} \
- @{ printf "circle(%s,%s,1\")\n", $1, $2 @}' > $1.reg
+# Use Gnuastro's Table and astscript-ds9-region to build the DS9
+# region file (a circle of radius 1 arcseconds on each point).
+asttable $1"_cat.fits" -hCLUMPS -cRA,DEC \
+ | astscript-ds9-region -c1,2 --mode=wcs --radius=1 \
+ --output=$1.reg
# Show the image (with the requested color scale) and the region file.
ds9 -geometry 1800x3000 -mecube $1.fits -zoom to fit \
@@ -6516,7 +6809,6 @@ When the output is a FITS file, all the programs also
store some very useful inf
* Command-line:: How to use the command-line.
* Configuration files:: Values for unspecified variables.
* Getting help:: Getting more information on the go.
-* Installed scripts:: Installed Bash scripts, not compiled programs.
* Multi-threaded operations:: How threads are managed in Gnuastro.
* Numeric data types:: Different types and how to specify them.
* Memory management:: How memory is allocated (in RAM or HDD/SSD).
@@ -6860,6 +7152,17 @@ But there are two types of FITS tables: FITS ASCII, and
FITS binary.
Thus, with this option, the program is able to identify which type you want.
The currently recognized values to this option are:
+@item --wcslinearmatrix=STR
+Select the linear transformation matrix of the output's WCS.
+This option only takes two values: @code{pc} (for the @code{PCi_j} formalism)
and @code{cd} (for @code{CDi_j}).
+For more on the different formalisms, please see Section 8.1 of the FITS
standard@footnote{@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}},
version 4.0.
+
+@cindex @code{CDELT}
+In short, in the @code{PCi_j} formalism, we only keep the linear rotation
matrix in these keywords and put the scaling factor (or the pixel scale in
astronomical imaging) in the @code{CDELTi} keywords.
+In the @code{CDi_j} formalism, we blend the scaling and the rotation into a
single matrix and keep that matrix in these FITS keywords.
+By default, Gnuastro uses the @code{PCi_j} formalism, because it greatly helps
in human readability of the raw keywords and is also the default mode of WCSLIB.
+However, in some circumstances it may be necessary to have the keywords in the
CD format; for example when you need to feed the outputs into other software
that doesn't follow the full FITS standard and only recognizes the @code{CDi_j}
formalism.
+
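The relation between the two formalisms is a simple scaling: each row of the @code{PCi_j} rotation matrix is multiplied by the corresponding @code{CDELTi} to give @code{CDi_j}. A minimal sketch (the rotation matrix and pixel scale below are illustrative values, not from any real file):

```python
# CDi_j = CDELTi * PCi_j: scale each row of PC by its CDELT.
pc = [[ 0.0, -1.0],
      [-1.0,  0.0]]          # pure rotation/flip (no scaling)
cdelt = [1e-4, 1e-4]         # pixel scale in deg/pixel (illustrative)

cd = [[cdelt[i] * pc[i][j] for j in range(2)] for i in range(2)]
print(cd)   # [[0.0, -0.0001], [-0.0001, 0.0]]
```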
@table @command
@item txt
A plain text table with white-space characters between the columns (see
@@ -7411,7 +7714,7 @@ The prefix of @file{/usr/local/} is conventionally used
for programs you install
-@node Getting help, Installed scripts, Configuration files, Common program
behavior
+@node Getting help, Multi-threaded operations, Configuration files, Common
program behavior
@section Getting help
@cindex Help
@@ -7663,75 +7966,7 @@ We have other mailing lists and tools for those
purposes, see @ref{Report a bug}
-
-
-
-
-
-@node Installed scripts, Multi-threaded operations, Getting help, Common
program behavior
-@section Installed scripts
-
-Gnuastro's programs (introduced in previous chapters) are designed to be
highly modular and thus mainly contain lower-level operations on the data.
-However, in many contexts, higher-level operations (for example a sequence of
calls to multiple Gnuastro programs, or a special way of running a program and
using the outputs) are also very similar between various projects.
-
-To facilitate data analysis on these higher-level steps also, Gnuastro also
installs some scripts on your system with the (@code{astscript-}) prefix (in
contrast to the other programs that only have the @code{ast} prefix).
-
-@cindex GNU Bash
-Like all of Gnuastro's source code, these scripts are also heavily commented.
-They are written in GNU Bash, which doesn't need compilation.
-Therefore, if you open the installed scripts in a text editor, you can
actually read them@footnote{Gnuastro's installed programs (those only starting
with @code{ast}) aren't human-readable.
-They are written in C and are thus compiled (optimized in binary CPU
instructions that will be given directly to your CPU).
-Because they don't need an interpreter like Bash on every run, they are much
faster and more independent than scripts.
-To read the source code of the programs, look into the @file{bin/progname}
directory of Gnuastro's source (@ref{Downloading the source}).
-If you would like to read more about why C was chosen for the programs, please
see @ref{Why C}.}.
-Bash is the same language that is mainly used when typing on the command-line.
-Because of these factors, Bash is much more widely known and used than C (the
language of other Gnuastro programs).
-Gnuastro's installed scripts also do higher-level operations, so customizing
these scripts for a special project will be more common than the programs.
-You can always inspect them (to customize, check, or educate your self) with
this command (just replace @code{emacs} with your favorite text editor):
-
-@example
-$ emacs $(which astscript-NAME)
-@end example
-
-These scripts also accept options and are in many ways similar to the programs
(see @ref{Common options}) with some minor differences:
-
-@itemize
-@item
-Currently they don't accept configuration files themselves.
-However, the configuration files of the Gnuastro programs they call are indeed
parsed and used by those programs.
-
-As a result, they don't have the following options: @option{--checkconfig},
@option{--config}, @option{--lastconfig}, @option{--onlyversion},
@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
-
-@item
-They don't directly allocate any memory, so there is no @option{--minmapsize}.
-
-@item
-They don't have an independent @option{--usage} option: when called with
@option{--usage}, they just recommend running @option{--help}.
-
-@item
-The output of @option{--help} is not configurable like the programs (see
@ref{--help}).
-
-@item
-@cindex GNU AWK
-@cindex GNU SED
-The scripts will commonly use your installed Bash and other basic command-line
tools (for example AWK or SED).
-Different systems have different versions and implementations of these basic
tools (for example GNU/Linux systems use GNU AWK and GNU SED which are far more
advanced and up to date then the minimalist AWK and SED of most other systems).
-Therefore, unexpected errors in these tools might come up when you run these
scripts.
-We will try our best to write these scripts in a portable way.
-However, if you do confront such strange errors, please submit a bug report so
we fix it (see @ref{Report a bug}).
-
-@end itemize
-
-
-
-
-
-
-
-
-
-
-@node Multi-threaded operations, Numeric data types, Installed scripts, Common
program behavior
+@node Multi-threaded operations, Numeric data types, Getting help, Common
program behavior
@section Multi-threaded operations
@pindex nproc
@@ -8930,7 +9165,7 @@ Also, unlike the rest of the options in this section,
with @option{--keyvalue},
@item -l STR[,STR[,...]
@itemx --keyvalue=STR[,STR[,...]
Only print the value of the requested keyword(s): the @code{STR}s.
-@option{--keyvalue} can be called multiple times, and each call can contain
multiple comma-separated values.
+@option{--keyvalue} can be called multiple times, and each call can contain
multiple comma-separated keywords.
If more than one file is given, this option uses the same HDU/extension for
all of them (value to @option{--hdu}).
For example, you can get the number of dimensions of the three FITS files in
the running directory, as well as the length along each dimension, with this
command:
@@ -8941,11 +9176,11 @@ image-b.fits 2 774 672
image-c.fits 2 387 336
@end example
-If a single dataset is given, its name is not printed on the first column,
only the values of the requested keywords.
+If only one input is given, and the @option{--quiet} option is activated, the
file name is not printed on the first column, only the values of the requested
keywords.
@example
$ astfits image-a.fits --keyvalue=NAXIS,NAXIS1 \
- --keyvalue=NAXIS2
+ --keyvalue=NAXIS2 --quiet
2 774 672
@end example
@@ -8981,7 +9216,7 @@ image-a.fits
image-b.fits
@end example
-Note that @option{--colinfoinstdout} is necessary to use column names in the
subsequent @command{asttable} command.
+Note that @option{--colinfoinstdout} is necessary to use column names when
piping to other programs (like @command{asttable} above).
Also, with the @option{-cFILENAME} option, we are asking Table to only print
the final file names (we don't need the sizes any more).
The commands with multiple files above used @file{*.fits}, which is only
useful when all your FITS files are in the same directory.
@@ -9216,16 +9451,16 @@ Since its not too long, you can also simply put the
variable values of the first
@cindex @code{DATASUM}: FITS keyword
@cindex @code{CHECKSUM}: FITS keyword
When nothing is given afterwards, the header integrity keywords @code{DATASUM}
and @code{CHECKSUM} will be calculated and written/updated.
-This is calculation and writing is done fully by CFITSIO.
-They thus comply with the FITS standard
4.0@footnote{@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}
that defines these keywords (its Appendix J).
+The calculation and writing are done fully by CFITSIO, so they comply
with the FITS standard
4.0@footnote{@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}
that defines these keywords (its Appendix J).
If a value is given (e.g., @option{--write=checksum,MyOwnCheckSum}), then
CFITSIO won't be called to calculate these two keywords and the value (as well
as possible comment and unit) will be written just like any other keyword.
-This is generally not recommended, but necessary in special circumstances (for
example when the checksum needs to be manually updated).
+This is generally not recommended since @code{CHECKSUM} is a reserved FITS
standard keyword.
+If you want to manually calculate the checksum with another hashing standard
and write it into the header, it is recommended to use another keyword name.
-@code{DATASUM} only depends on the data section of the HDU/extension, so it is
not changed when you update the keywords.
-But @code{CHECKSUM} also depends on the header and will not be valid if you
make any further changes to the header.
+In the FITS standard, @code{CHECKSUM} depends on the HDU's data @emph{and}
header keywords; it will therefore not be valid if you make any further changes
to the header after writing the @code{CHECKSUM} keyword.
This includes any further keyword modification options in the same call to the
Fits program.
-Therefore it is recommended to write these keywords as the last keywords that
are written/modified in the extension.
+However, @code{DATASUM} only depends on the data section of the HDU/extension,
so it is not changed when you add, remove or update the header keywords.
+Therefore, it is recommended to write these keywords as the last keywords that
are written/modified in the extension.
You can use the @option{--verify} option (described below) to verify the
values of these two keywords.
@item datasum
@@ -9308,7 +9543,46 @@ In this case (following the GNU C Library), this option
will make the following
This is a very useful option for operations on the FITS date values, for
example sorting FITS files by their dates, or finding the time difference
between two FITS files.
The advantage of working with the Unix epoch time is that you don't have to
worry about calendar details (for example the number of days in different
months, or leap years, etc).
-@item --wcsdistortion STR
+@item --wcscoordsys=STR
+@cindex Galactic coordinate system
+@cindex Ecliptic coordinate system
+@cindex Equatorial coordinate system
+@cindex Supergalactic coordinate system
+@cindex Coordinate system: Galactic
+@cindex Coordinate system: Ecliptic
+@cindex Coordinate system: Equatorial
+@cindex Coordinate system: Supergalactic
+Convert the coordinate system of the image's world coordinate system (WCS) to
the given coordinate system (@code{STR}) and write it into the file given to
@option{--output} (or an automatically named file if no @option{--output} has
been given).
+
+For example with the command below, @file{img-eq.fits} will have an identical
dataset (pixel values) as @file{image.fits}.
+However, the WCS coordinate system of @file{img-eq.fits} will be the
equatorial coordinate system in the Julian calendar epoch 2000 (which is the
most common epoch used today).
+Fits will automatically extract the current coordinate system of
@file{image.fits} and, as long as it is one of the recognized coordinate
systems listed below, will do the conversion.
+
+@example
+$ astfits image.fits --wcscoordsys=eq-j2000 --output=img-eq.fits
+@end example
+
+The currently recognized coordinate systems are listed below (the most common
one today is @code{eq-j2000}):
+
+@table @code
+@item eq-j2000
+2000.0 (Julian-year) equatorial coordinates.
+@item eq-b1950
+1950.0 (Besselian-year) equatorial coordinates.
+@item ec-j2000
+2000.0 (Julian-year) ecliptic coordinates.
+@item ec-b1950
+1950.0 (Besselian-year) ecliptic coordinates.
+@item galactic
+Galactic coordinates.
+@item supergalactic
+Supergalactic coordinates.
+@end table
+
+The Equatorial and Ecliptic coordinate systems are defined by the mean equator
and equinox epoch: either the Besselian year 1950.0, or the Julian year 2000.
+For more on their difference and links for further reading about epochs in
astronomy, please see the description in
@url{https://en.wikipedia.org/wiki/Epoch_(astronomy), Wikipedia}.
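To get a feel for what such a conversion involves, the sketch below rotates J2000 equatorial coordinates into the Galactic system using the standard J2000 position of the north Galactic pole. This is a hand-rolled approximation for illustration only; the Fits program does the conversion through WCSLIB.

```python
import math

# Standard J2000 orientation of the Galactic frame: RA/Dec of the
# north Galactic pole, and Galactic longitude of the celestial pole.
RA_GP, DEC_GP, L_NCP = 192.85948, 27.12825, 122.93192

def eq_j2000_to_galactic(ra, dec):
    """Approximate equatorial (J2000) -> Galactic, all in degrees."""
    ra, dec = math.radians(ra), math.radians(dec)
    ra_gp, dec_gp = math.radians(RA_GP), math.radians(DEC_GP)
    b = math.asin(math.sin(dec) * math.sin(dec_gp)
                  + math.cos(dec) * math.cos(dec_gp)
                  * math.cos(ra - ra_gp))
    l = L_NCP - math.degrees(math.atan2(
            math.cos(dec) * math.sin(ra - ra_gp),
            math.sin(dec) * math.cos(dec_gp)
            - math.cos(dec) * math.sin(dec_gp) * math.cos(ra - ra_gp)))
    return l % 360.0, math.degrees(b)

# Sanity check: Sgr A* (the Galactic center) should land near l=0, b=0.
l, b = eq_j2000_to_galactic(266.41683, -29.00781)
print(l, b)
```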
+
+@item --wcsdistortion=STR
@cindex WCS distortion
@cindex Distortion, WCS
@cindex SIP WCS distortion
@@ -9373,6 +9647,7 @@ So before explaining the options and arguments (in
@ref{Invoking astconvertt}),
@menu
* Recognized file formats:: Recognized file formats
* Color:: Some explanations on color.
+* Aligning images with small WCS offsets:: When the WCS slightly differs.
* Invoking astconvertt:: Options and arguments to ConvertType.
@end menu
@@ -9508,7 +9783,7 @@ To print to the standard output, set the output name to
`@file{stdout}'.
@end table
-@node Color, Invoking astconvertt, Recognized file formats, ConvertType
+@node Color, Aligning images with small WCS offsets, Recognized file formats,
ConvertType
@subsection Color
@cindex RGB
@@ -9576,7 +9851,67 @@ But thanks to the JPEG compression algorithms, when all
the pixels of one channe
Therefore a Grayscale image and a CMYK image that has only the K-channel
filled are approximately the same file size.
-@node Invoking astconvertt, , Color, ConvertType
+@node Aligning images with small WCS offsets, Invoking astconvertt, Color,
ConvertType
+@subsection Aligning images with small WCS offsets
+
+In order to have nice color images, it is important that the images be
properly aligned.
+This is usually the case in many scenarios, but it sometimes happens that the
images have a small WCS offset, even though they have the same size.
+In such cases you can use the script below to align the images onto
approximately the same pixel grid (to within about 0.5 pixels, which is
sufficient in many color-image usage scenarios).
+
+The script below does the job using Gnuastro's @ref{Warp} and @ref{Crop}
programs.
+Simply copy the lines below into a plain-text file with your favorite text
editor and save it as @file{my-align.sh}.
+Don't forget to set the variables of the first three lines to specify the
file names (without the @file{.fits} suffix) and the HDUs of your inputs.
+These three lines are all you need to edit; leave the rest unchanged.
+Also, if you are copy/pasting the script from a PDF, be careful that the
single-quotes used in AWK may need to be corrected.
+
+@example
+#!/bin/sh
+
+# Set the input names (without the '.fits' suffix),
+# and their HDUs.
+r=RED_IMAGE_NO_SUFFIX; rhdu=1
+g=GREEN_IMAGE_NO_SUFFIX; ghdu=1
+b=BLUE_IMAGE_NO_SUFFIX; bhdu=1
+
+# To stop the script if there is a crash
+set -e
+
+# Align all the images to the celestial poles.
+astwarp $r.fits --align -h$rhdu -o $r-aligned.fits
+astwarp $g.fits --align -h$ghdu -o $g-aligned.fits
+astwarp $b.fits --align -h$bhdu -o $b-aligned.fits
+
+# Calculate the final WCS-based center and image-based width based on
+# the G-band (in RGB) image.
+centerwcs=$(astfits $g-aligned.fits --skycoverage --quiet \
+ | awk 'NR==1@{printf "%g %g", $1,$2@}')
+widthpix=$(astfits $g-aligned.fits -h1 --quiet \
+ --keyvalue=NAXIS1,NAXIS2 \
+ | awk '@{printf "%d,%d", $1, $2@}')
+
+# Crop all the images around the desired center and width.
+for f in $r $g $b; do
+ centerpix=$(echo $centerwcs \
+ | asttable -c'arith $1 $2 wcstoimg' \
+ --wcsfile=$f-aligned.fits \
+ | awk '@{printf "%g,%g", $1, $2@}')
+ astcrop $f-aligned.fits --mode=img --width=$widthpix \
+ --center=$centerpix -o$f-use.fits
+ rm $f-aligned.fits
+done
+@end example
+
+Once you have saved the file and come back to your command-line, you can run
the script like this:
+
+@example
+$ chmod +x my-align.sh
+$ ./my-align.sh
+@end example
+
+@noindent
+Of course, feel free to hack and modify it to fit your datasets; like the
rest of Gnuastro, this script is released under the GNU GPL v3 or later, see
@ref{Your rights}.
+
+@node Invoking astconvertt, , Aligning images with small WCS offsets,
ConvertType
@subsection Invoking ConvertType
ConvertType will convert any recognized input file type to any specified
output type.
@@ -9641,6 +9976,8 @@ Input:
@table @option
@item -h STR/INT
@itemx --hdu=STR/INT
+Input HDU name or counter (counting from 0) for each input FITS file.
+If the same HDU should be used from all the FITS files, you can use the
@option{--globalhdu} option described below.
In ConvertType, it is possible to call the HDU option multiple times for the
different input FITS or TIFF files in the same order that they are called on
the command-line.
Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may
contain multiple color channels (for example when the image is in RGB).
@@ -9649,6 +9986,11 @@ The number of calls to this option cannot be less than
the number of input FITS
Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes
numbers (counting from zero, similar to CFITSIO) for `directory' identification.
Hence the concept of names is not defined for the directories and the values
to this option for TIFF files must be numbers.
+
+@item -g STR/INT
+@itemx --globalhdu=STR/INT
+Use the value given to this option (a HDU name or a counter, starting from 0)
for the HDU identifier of all the input FITS files.
+This is useful when all the inputs are distributed in different files, but
have the same HDU in those files.
@end table
@noindent
@@ -10597,12 +10939,42 @@ Here is the list of short names for popular datasets
within Gaia:
@cindex NED (NASA/IPAC Extragalactic Database)
The NASA/IPAC Extragalactic Database (NED, @url{http://ned.ipac.caltech.edu})
is a fusion database, integrating the information about extra-galactic sources
from many large sky surveys into a single catalog.
It covers the full spectrum, from Gamma rays to radio frequencies and is
updated when new data arrives.
-A query to @code{ned} is submitted to
@code{https://ned.ipac.caltech.edu/tap/sync}.
+A TAP query to @code{ned} is submitted to
@code{https://ned.ipac.caltech.edu/tap/sync}.
-Currently NED only has its main dataset for TAP access (shown below), more
datasets will be added for TAP access in the future.
@itemize
@item
-@code{objdir --> NEDTAP.objdir}
+@code{objdir --> NEDTAP.objdir}: default TAP-based dataset in NED.
+
+@item
+@cindex VOTable
+@code{extinction}: A command-line interface to the
@url{https://ned.ipac.caltech.edu/extinction_calculator, NED Extinction
Calculator}.
+It only takes a central coordinate and returns a VOTable of the calculated
extinction in many commonly used filters at that point.
+As a result, options like @option{--width} or @option{--radius} are not
supported.
+However, Gnuastro doesn't yet support the VOTable format.
+Therefore, if you specify an @option{--output} file, it should have an
@file{.xml} suffix and the downloaded file will not be checked.
+
+Until VOTable support is added to Gnuastro, you can use GREP, AWK and SED to
convert the VOTable data into a FITS table with a command like below (assuming
the queried VOTable is called @file{ned-extinction.xml}):
+
+@verbatim
+grep '^<TR><TD>' ned-extinction.xml \
+ | sed -e's|<TR><TD>||' \
+ -e's|</TD></TR>||' \
+ -e's|</TD><TD>|@|g' \
+ | awk 'BEGIN{FS="@"; \
+ print "# Column 1: FILTER [name,str15] Filter name"; \
+ print "# Column 2: CENTRAL [um,f32] Central Wavelength"; \
+ print "# Column 3: EXTINCTION [mag,f32] Galactic Ext."; \
+ print "# Column 4: ADS_REF [ref,str50] ADS reference"} \
+ {printf "%-15s %g %g %s\n", $1, $2, $3, $4}' \
+ | asttable -oned-extinction.fits
+@end verbatim
+
+Once the table is in FITS, you can easily get the extinction for a certain
filter (for example the @code{SDSS r} filter) like the command below:
+
+@example
+asttable ned-extinction.fits --equal=FILTER,"SDSS r" \
+ -cEXTINCTION
+@end example
@end itemize
@item vizier
@@ -10740,7 +11112,7 @@ If this option is given, the raw string is directly
passed to the server and all
With the high-level options (like @option{--column}, @option{--center},
@option{--radius}, @option{--range} and other constraining options below), the
low-level query will be constructed automatically for the particular database.
This method is only limited to the generic capabilities that Query provides
for all servers.
So @option{--query} is more powerful, however, in this mode, you don't need
any knowledge of the database's query language.
-You can see the internally generated query on the terminal (if
@option{--quiet} is not used) or in the 0-th extension of the output (if its a
FITS file).
+You can see the internally generated query on the terminal (if
@option{--quiet} is not used) or in the 0-th extension of the output (if it is
a FITS file).
This full command contains the internally generated query.
@end itemize
@@ -11719,6 +12091,34 @@ These operators take a single operand.
Inverse Hyperbolic sine, cosine, and tangent.
These operators take a single operand.
+@item counts-to-mag
+Convert counts (usually CCD outputs) to magnitudes using the given zeropoint.
+The zero point is the first popped operand and the count value is the second.
+For example assume you have measured the standard deviation of the noise in an
image to be @code{0.1}, and the image's zero point is @code{22.5}.
+You can therefore measure the @emph{per-pixel} surface brightness limit of
the dataset (which is the magnitude of the noise standard deviation) with the
simple command below.
+Note that because the output is a simple number, we are using @option{--quiet}
to avoid printing extra information.
+
+@example
+astarithmetic 0.1 22.5 counts-to-mag --quiet
+@end example
+
+Of course, you can also convert every pixel in an image (or table column in
Table's @ref{Column arithmetic}) with this operator if you replace the second
popped operand with an image/column.
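The operator implements the standard magnitude relation, m = -2.5 log10(counts) + zeropoint; a quick Python sketch of the same arithmetic:

```python
import math

def counts_to_mag(counts, zeropoint):
    # m = -2.5 * log10(counts) + zeropoint
    return -2.5 * math.log10(counts) + zeropoint

# The example from the text: noise std of 0.1 counts, zero point 22.5.
print(counts_to_mag(0.1, 22.5))   # 25.0
```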
+
+@item counts-to-jy
+@cindex AB magnitude
+@cindex Magnitude, AB
+Convert counts (usually CCD outputs) to Janskys through an AB-magnitude based
zeropoint.
+The top-popped operand is assumed to be the AB-magnitude zero point and the
second-popped operand is assumed to be a dataset in units of counts (an image
in Arithmetic, and a column in Table's @ref{Column arithmetic}).
+For the full equation and basic definitions, see @ref{Brightness flux
magnitude}.
+
+@cindex SDSS
+For example SDSS images are calibrated in units of nano-maggies, with a fixed
zero point magnitude of 22.5.
+Therefore you can convert the units of SDSS image pixels to Janskys with the
command below:
+
+@example
+$ astarithmetic sdss-image.fits 22.5 counts-to-jy
+@end example
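Under the hood this is the standard AB-magnitude relation: since the zero-magnitude flux of the AB system is 3631 Jy, a dataset in counts with AB zero point zp converts as f[Jy] = counts x 3631 x 10^(-zp/2.5). A quick sketch reproducing the SDSS case (where one count, a nanomaggy, comes out as 3.631 micro-Janskys):

```python
def counts_to_jy(counts, zeropoint_ab):
    # f[Jy] = counts * 3631 * 10**(-zeropoint/2.5)
    # (3631 Jy is the zero-magnitude flux of the AB system)
    return counts * 3631.0 * 10 ** (-zeropoint_ab / 2.5)

# One count in an SDSS image (zero point 22.5) is one "nanomaggy":
print(counts_to_jy(1.0, 22.5))   # about 3.631e-06 Jy
```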
+
@item minvalue
Minimum value in the first popped operand, so ``@command{a.fits minvalue}''
will push the minimum pixel value in this image onto the stack.
When this operator acts on a single image, the output (operand that is put
back on the stack) will no longer be an image, but a number.
@@ -12140,7 +12540,7 @@ In effect, this expands the outer borders of the
foreground.
This operator assumes a binary dataset (all pixels are @code{0} and @code{1}).
The usage is similar to @code{erode}, for example:
@example
-$ astarithmetic binary.fits 2 erode -oout.fits
+$ astarithmetic binary.fits 2 dilate -oout.fits
@end example
@item connected-components
@@ -14119,8 +14519,6 @@ For example getting general or specific statistics of
the dataset (with @ref{Sta
* Segment:: Segment detections based on signal structure.
* MakeCatalog:: Catalog from input and labeled images.
* Match:: Match two datasets.
-* Sort FITS files by night:: Sort and classify images in separate nights.
-* SAO DS9 region files from table:: Table's positional columns into DS9
region file.
@end menu
@node Statistics, NoiseChisel, Data analysis, Data analysis
@@ -16551,13 +16949,14 @@ For those who feel MakeCatalog's existing
measurements/columns aren't enough and
@menu
* Detection and catalog production:: Discussing why/how to treat these
separately
+* Brightness flux magnitude:: More on Magnitudes, surface brightness and etc.
* Quantifying measurement limits:: For comparing different catalogs.
* Measuring elliptical parameters:: Estimating elliptical parameters.
* Adding new columns to MakeCatalog:: How to add new columns.
* Invoking astmkcatalog:: Options and arguments to MakeCatalog.
@end menu
-@node Detection and catalog production, Quantifying measurement limits,
MakeCatalog, MakeCatalog
+@node Detection and catalog production, Brightness flux magnitude,
MakeCatalog, MakeCatalog
@subsection Detection and catalog production
Most existing common tools in low-level astronomical data-analysis (for
example
SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}})
merge the two processes of detection and measurement (catalog production) in
one program.
@@ -16605,124 +17004,162 @@ It might even be so intertwined with its
processing, that adding new columns mig
-@node Quantifying measurement limits, Measuring elliptical parameters,
Detection and catalog production, MakeCatalog
-@subsection Quantifying measurement limits
-@cindex Depth
-@cindex Clump magnitude limit
-@cindex Object magnitude limit
-@cindex Limit, object/clump magnitude
-@cindex Magnitude, object/clump detection limit
-No measurement on a real dataset can be perfect: you can only reach a certain
level/limit of accuracy.
-Therefore, a meaningful (scientific) analysis requires an understanding of
these limits for the dataset and your analysis tools: different datasets have
different noise properties and different detection methods (one
method/algorithm/software that is run with a different set of parameters is
considered as a different detection method) will have different abilities to
detect or measure certain kinds of signal (astronomical objects) and their
properties in the dataset.
-Hence, quantifying the detection and measurement limitations with a particular
dataset and analysis tool is the most crucial/critical aspect of any high-level
analysis.
-Here, we'll review some of the most general limits that are important in any
astronomical data analysis and how MakeCatalog makes it easy to find them.
-Depending on the higher-level analysis, there are more tests that must be
done, but these are relatively low-level and usually necessary in most cases.
-In astronomy, it is common to use the magnitude (a unit-less scale) and
physical units, see @ref{Brightness flux magnitude}.
-Therefore the measurements discussed here are commonly used in units of
magnitudes.
-@table @asis
+@node Brightness flux magnitude, Quantifying measurement limits, Detection and
catalog production, MakeCatalog
+@subsection Brightness, Flux, Magnitude and Surface brightness
-@item Surface brightness limit (of whole dataset)
-@cindex Surface brightness
-As we make more observations on one region of the sky, and add the
observations into one dataset, the signal and noise both increase.
-However, the signal increase much faster than the noise: assuming you add
@mymath{N} datasets with equal exposure times, the signal will increases as a
multiple of @mymath{N}, while noise increases as @mymath{\sqrt{N}}.
-Thus this increases the signal-to-noise ratio.
-Qualitatively, fainter (per pixel) parts of the objects/signal in the image
will become more visible/detectable.
-The noise-level is known as the dataset's surface brightness limit.
+@cindex ADU
+@cindex Gain
+@cindex Counts
+Astronomical data pixels are usually in units of counts@footnote{Counts are
also known as analog to digital units (ADU).} or electrons, or either one
divided by seconds.
+To convert from the counts to electrons, you will need to know the instrument
gain.
+In any case, they can be directly converted to energy or energy/time using the
basic hardware (telescope, camera and filter) information.
+We will continue the discussion assuming the pixels are in units of
energy/time.
-You can think of the noise as muddy water that is completely covering a flat
ground@footnote{The ground is the sky value in this analogy, see @ref{Sky
value}.
-Note that this analogy only holds for a flat sky value across the surface of
the image or ground.}.
-The signal (or astronomical objects in this analogy) will be summits/hills
that start from the flat sky level (under the muddy water) and can sometimes
reach outside of the muddy water.
-Let's assume that in your first observation the muddy water has just been
stirred and you can't see anything through it.
-As you wait and make more observations/exposures, the mud settles down and the
@emph{depth} of the transparent water increases, making the summits visible.
-As the depth of clear water increases, the parts of the hills with lower
heights (parts with lower surface brightness) can be seen more clearly.
-In this analogy, height (from the ground) is @emph{surface
brightness}@footnote{Note that this muddy water analogy is not perfect, because
while the water-level remains the same all over a peak, in data analysis, the
Poisson noise increases with the level of data.} and the height of the muddy
water is your surface brightness limit.
+@table @asis
+@cindex Flux
+@cindex Luminosity
+@cindex Brightness
+@item Brightness
+The @emph{brightness} of an object is defined as its total detected energy per
time.
+In the case of an imaged source, this is simply the sum of the pixels that are
associated with that detection by our detection tool (for example
@ref{NoiseChisel}@footnote{If further processing is done, for example the Kron
or Petrosian radii are calculated, then the detected area is not sufficient and
the total area that was within the respective radius must be used.}).
+The @emph{flux} of an object is defined in units of
energy/time/collecting-area.
+For an astronomical target, the flux is therefore defined as its brightness
divided by the area used to collect the light from the source, in other words
the telescope aperture (for example in units of @mymath{cm^2}).
+Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can
define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
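+As a rough sanity check of this relation (a sketch with illustrative values
that are not part of the discussion above: the solar constant
@mymath{f\sim1361 W/m^2} and the Earth-Sun distance
@mymath{r\sim1.496\times10^{11}m}), AWK recovers the Sun's well-known
luminosity of roughly @mymath{3.8\times10^{26}} Watts:

```shell
# Luminosity from flux and distance: L = 4*pi*r^2*f.
# Illustrative (assumed) values: f=1361 (solar constant, W/m^2),
# r=1.496e11 (Earth-Sun distance, m).
echo "1361 1.496e11" \
    | awk '{pi=atan2(0,-1); printf "%.3e\n", 4*pi*$2*$2*$1}'
```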
-@cindex Data's depth
-The outputs of NoiseChisel include the Sky standard deviation
(@mymath{\sigma}) on every group of pixels (a mesh) that were calculated from
the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel
output}.
-Let's take @mymath{\sigma_m} as the median @mymath{\sigma} over the successful
meshes in the image (prior to interpolation or smoothing).
+Therefore, while flux and luminosity are intrinsic properties of the object,
brightness depends on our detecting tools (hardware and software).
+In low-level observational astronomy data analysis, we are usually more
concerned with measuring the brightness, because it is what we directly
measure from the image pixels and record in catalogs.
+On the other hand, luminosity is used in higher-level analysis (after image
contents are measured as catalogs to deduce physical interpretations).
+It is just important to avoid possible confusion between luminosity and
brightness, because both have the same units of energy per second.
-On different instruments, pixels have different physical sizes (for example in
micro-meters, or spatial angle over the sky).
-Nevertheless, a pixel is our unit of data collection.
-In other words, while quantifying the noise, the physical or projected size of
the pixels is irrelevant.
-We thus define the Surface brightness limit or @emph{depth}, in units of
magnitude/pixel, of a data-set, with zeropoint magnitude @mymath{z}, with the
@mymath{n}th multiple of @mymath{\sigma_m} as (see @ref{Brightness flux
magnitude}):
+@item Magnitude
+@cindex Magnitudes from flux
+@cindex Flux to magnitude conversion
+@cindex Astronomical Magnitude system
+Images of astronomical objects span a very large range of brightness:
+the Sun (as the brightest object) is roughly @mymath{2.5^{60}=10^{24}} times
brighter than the faintest galaxies we can currently detect in the deepest
images.
+Therefore discussing brightness directly would involve an inconveniently
large range of values.
+So astronomers have chosen to use a logarithmic scale to talk about the
brightness of astronomical objects.
-@dispmath{SB_{\rm Pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z}
+@cindex Hipparchus of Nicaea
+But the logarithm is only usable with a dimensionless value that is always
positive.
+Fortunately brightness is always positive (at least in theory@footnote{In
practice, for very faint objects, if the background brightness is
over-subtracted, we may end up with a negative brightness in a real object.}).
+To remove the dimensions, we divide the brightness of the object (@mymath{B})
by a reference brightness (@mymath{B_r}).
+We then define a logarithmic scale as @mymath{magnitude} through the relation
below.
+The @mymath{-2.5} factor in the definition of magnitudes is a legacy of our
ancient colleagues, in particular Hipparchus of Nicaea (190-120 BC).
-@cindex XDF survey
-@cindex CANDELS survey
-@cindex eXtreme Deep Field (XDF) survey
-As an example, the XDF survey covers part of the sky that the Hubble space
telescope has observed the most (for 85 orbits) and is consequently very small
(@mymath{\sim4} arcmin@mymath{^2}).
-On the other hand, the CANDELS survey, is one of the widest multi-color
surveys covering several fields (about 720 arcmin@mymath{^2}) but its deepest
fields have only 9 orbits observation.
-The depth of the XDF and CANDELS-deep surveys in the near infrared WFC3/F160W
filter are respectively 34.40 and 32.45 magnitudes/pixel.
-In a single orbit image, this same field has a depth of 31.32.
-Recall that a larger magnitude corresponds to less brightness.
-
-The low-level magnitude/pixel measurement above is only useful when all the
datasets you want to use belong to one instrument (telescope and camera).
-However, you will often find yourself using datasets from various instruments
with different pixel scales (projected pixel sizes).
-If we know the pixel scale, we can obtain a more easily comparable surface
brightness limit in units of: magnitude/arcsec@mymath{^2}.
-Let's assume that the dataset has a zeropoint value of @mymath{z}, and every
pixel is @mymath{p} arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels
that cover an area of @mymath{A} arcsec@mymath{^2}).
-If the surface brightness is desired at the @mymath{n}th multiple of
@mymath{\sigma_m}, the following equation (in units of magnitudes per
@mymath{A} arcsec@mymath{^2}) can be used:
+@dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}
+
+@noindent
+@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the
pre-defined magnitude of the reference brightness.
+For estimating the error in measuring a magnitude, see @ref{Quantifying
measurement limits}.
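+For a quick feeling of the @mymath{-2.5} factor in the relation above, note
that a brightness ratio of 100 corresponds to exactly 5 magnitudes (the
numbers below are purely illustrative):

```shell
# m - m_r = -2.5*log10(B/B_r); for B/B_r=100 (an object 100 times
# brighter than the reference), the magnitude difference is -5.
echo 100 | awk '{printf "%.1f\n", -2.5*log($1)/log(10)}'
```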
+
+@item Zero point
+@cindex Zero point magnitude
+@cindex Magnitude zero point
+A unique situation in the magnitude equation above occurs when the reference
brightness is unity (@mymath{B_r=1}).
+This brightness will thus summarize all the hardware-specific parameters
discussed above (like the conversion of pixel values to physical units) into
one number.
+That reference magnitude is commonly known as the @emph{Zero point} magnitude
because when @mymath{B=B_r=1}, the right side of the magnitude definition above
will be zero.
+Using the zero point magnitude (@mymath{Z}), we can write the magnitude
relation above in a simpler format:
+
+@dispmath{m = -2.5\log_{10}(B) + Z}
+
+@cindex Janskys (Jy)
+@cindex AB magnitude
+@cindex Magnitude, AB
+Having the zero point of an image, you can convert its pixel values to
physical units of microJanskys (or @mymath{\mu{}Jy}) to enable direct
pixel-based comparisons with images from other instruments@footnote{Comparing
data from different instruments assumes that the instrument and observation
signatures (for example the flat-field and the Sky) have been properly
corrected.}.
+Jansky is a commonly used unit for measuring spectral flux density and one
Jansky is equivalent to @mymath{10^{-26} W/m^2/Hz} (watts per square meter per
hertz).
-@dispmath{SB_{\rm Projected}=-2.5\times\log_{10}{\left(n\sigma_m\sqrt{A\over
p}\right)+z}}
+This conversion is possible because in the AB magnitude
standard@footnote{@url{https://en.wikipedia.org/wiki/AB_magnitude}},
@mymath{3631Jy} corresponds to the zero-th magnitude, therefore
@mymath{B\equiv3631\times10^{6}\mu{Jy}} and @mymath{m\equiv0}.
+We can therefore estimate the brightness (@mymath{B_z}, in @mymath{\mu{Jy}})
corresponding to the image zero point (@mymath{Z}) using this equation:
-The @mymath{\sqrt{A/p}} term comes from the fact that noise is added in RMS:
if you add three datasets with noise @mymath{\sigma_1}, @mymath{\sigma_2} and
@mymath{\sigma_3}, the resulting noise level is
@mymath{\sigma_t=\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}}, so when
@mymath{\sigma_1=\sigma_2=\sigma_3=\sigma}, then
@mymath{\sigma_t=\sqrt{3}\sigma}.
-As mentioned above, there are @mymath{A/p} pixels in the area @mymath{A}.
-Therefore, as @mymath{A/p} increases, the surface brightness limiting
magnitude will become brighter.
+@dispmath{m - Z = -2.5\log_{10}(B/B_z)}
+@dispmath{0 - Z = -2.5\log_{10}({3631\times10^{6}\over B_z})}
+@dispmath{B_z = 3631\times10^{\left(6 - {Z \over 2.5} \right)} \mu{Jy}}
-It is just important to understand that the surface brightness limit is the
raw noise level, @emph{not} the signal-to-noise.
-To get a feeling for it you can try these commands on any FITS image (let's
assume its called @file{image.fits}), the output of the first command
(@file{zero.fits}) will be the same size as the input, but all pixels will have
a value of zero.
-We then add an ideal noise to this image and warp it to a new pixel size (such
that the area of the new pixels is @code{area_per_pixel} times the input's),
then we print the standard deviation of the raw noise and warped noise.
-Please open the output images an compare them (their sizes, or their pixel
values) to get a good feeling of what is going on.
-Just note that this demo only works when @code{area_per_pixel} is larger than
one.
+@cindex SDSS
+Because the image zero point corresponds to a pixel value of @mymath{1}, the
@mymath{B_z} value calculated above also corresponds to a pixel value of
@mymath{1}.
+Therefore you simply have to multiply your image by @mymath{B_z} to convert it
to @mymath{\mu{Jy}}.
+Don't forget that this only applies when your zero point was also estimated in
the AB magnitude system.
+On the command-line, you can estimate this value for a certain zero point with
AWK, then multiply it to all the pixels in the image with @ref{Arithmetic}.
+For example let's assume you are using an SDSS image with a zero point of 22.5:
@example
-area_per_pixel=25
-scale=$(echo $area_per_pixel | awk '@{print sqrt($1)@}')
-astarithmetic image.fits -h0 nan + isblank not -ozero.fits
-astmknoise zero.fits -onoise.fits
-astwarp --scale=1/$scale,1/$scale noise.fits -onoise-w.fits
-std_raw=$(aststatistics noise.fits --std)
-std_warped=$(aststatistics noise-w.fits --std)
-echo;
-echo "(warped pixel area) = $area_per_pixel x (pixel area)"
-echo "Raw STD: $std_raw"
-echo "Warped STD: $std_warped"
+bz=$(echo 22.5 | awk '@{print 3631 * 10^(6-$1/2.5)@}')
+astarithmetic sdss.fits $bz x --output=sdss-in-muJy.fits
@end example
-As you see in this example, this is thus just an extrapolation of the
per-pixel measurement @mymath{\sigma_m}.
-So it should be used with extreme care: for example the dataset must have an
approximately flat depth or noise properties overall.
-A more accurate measure for each detection is known as the @emph{upper-limit
magnitude} which actually uses random positioning of each detection's
area/footprint, see the respective item below.
-The upper-limit magnitude doesn't extrapolate and even accounts for correlated
noise patterns in relation to that detection.
-Therefore, the upper-limit magnitude is a much better measure of your
dataset's surface brightness limit for each particular object.
+@noindent
+But in Gnuastro, it gets even easier: Arithmetic has an operator called
@code{counts-to-jy}.
+This will directly convert your image pixels (in units of counts) to Janskys
through a provided AB Magnitude-based zero point, like below.
+See @ref{Arithmetic operators} for more.
-MakeCatalog will calculate the input dataset's @mymath{SB_{\rm Pixel}} and
@mymath{SB_{\rm Projected}} and write them as comments/meta-data in the output
catalog(s).
-Just note that @mymath{SB_{\rm Projected}} is only calculated if the input has
World Coordinate System (WCS).
+@example
+$ astarithmetic sdss.fits 22.5 counts-to-jy
+@end example
-@item Completeness limit (of each detection)
-@cindex Completeness
-As the surface brightness of the objects decreases, the ability to detect them
will also decrease.
-An important statistic is thus the fraction of objects of similar morphology
and brightness that will be identified with our detection algorithm/parameters
in the given image.
-This fraction is known as completeness.
-For brighter objects, completeness is 1: all bright objects that might exist
over the image will be detected.
-However, as we go to objects of lower overall surface brightness, we will fail
to detect some, and gradually we are not able to detect anything any more.
-For a given profile, the magnitude where the completeness drops below a
certain level (usually above @mymath{90\%}) is known as the completeness limit.
+@item Surface brightness
+@cindex Steradian
+@cindex Angular coverage
+@cindex Celestial sphere
+@cindex Surface brightness
+@cindex SI (International System of Units)
+Another important concept is the distribution of an object's brightness over
its area.
+For this, we define the @emph{surface brightness} to be the magnitude of an
object's brightness divided by its solid angle over the celestial sphere (or
coverage in the sky, commonly in units of arcsec@mymath{^2}).
+The solid angle is expressed in units of arcsec@mymath{^2} because
astronomical targets are usually much smaller than one steradian.
+Recall that the steradian is the dimension-less SI unit of a solid angle and 1
steradian covers @mymath{1/(4\pi)} (almost @mymath{8\%}) of the full celestial
sphere.
-@cindex Purity
-@cindex False detections
-@cindex Detections false
-Another important parameter in measuring completeness is purity: the fraction
of true detections to all true detections.
-In effect purity is the measure of contamination by false detections: the
higher the purity, the lower the contamination.
-Completeness and purity are anti-correlated: if we can allow a large number of
false detections (that we might be able to remove by other means), we can
significantly increase the completeness limit.
+Surface brightness is therefore most commonly expressed in units of
mag/arcsec@mymath{^2}.
+For example when the brightness is measured over an area of @mymath{A}
arcsec@mymath{^2}, then the surface brightness becomes:
-One traditional way to measure the completeness and purity of a given sample
is by embedding mock profiles in regions of the image with no detection.
-However in such a study we must be really careful to choose model profiles as
similar to the target of interest as possible.
+@dispmath{S = -2.5\log_{10}(B/A) + Z = -2.5\log_{10}(B) + 2.5\log_{10}(A) + Z}
+
+@noindent
+In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is
related to the object's magnitude (@mymath{m}) and area (@mymath{A}, in units
of arcsec@mymath{^2}) through this equation:
+
+@dispmath{S = m + 2.5\log_{10}(A)}
+
+A common mistake is to follow the mag/arcsec@mymath{^2} unit literally, and
divide the object's magnitude by its area.
+But this is wrong because magnitude is a logarithmic scale while area is
linear.
+It is the brightness that should be divided by the solid angle because both
have linear scales.
+The magnitude of that ratio is then defined to be the surface brightness.
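+As a small numeric sketch of the relation above (with hypothetical values,
not taken from any real catalog): an object of magnitude 20 spread over 100
arcsec@mymath{^2} has a mean surface brightness of
@mymath{20+2.5\log_{10}(100)=25} mag/arcsec@mymath{^2}:

```shell
# S = m + 2.5*log10(A); hypothetical m=20 mag, A=100 arcsec^2.
echo "20 100" | awk '{printf "%.1f\n", $1 + 2.5*log($2)/log(10)}'
```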
+@end table
-@item Magnitude measurement error (of each detection)
-Any measurement has an error and this includes the derived magnitude for an
object.
-Note that this value is only meaningful when the object's magnitude is
brighter than the upper-limit magnitude (see the next items in this list).
+
+
+
+
+
+@node Quantifying measurement limits, Measuring elliptical parameters,
Brightness flux magnitude, MakeCatalog
+@subsection Quantifying measurement limits
+
+@cindex Depth
+@cindex Clump magnitude limit
+@cindex Object magnitude limit
+@cindex Limit, object/clump magnitude
+@cindex Magnitude, object/clump detection limit
+No measurement on a real dataset can be perfect: you can only reach a certain
level/limit of accuracy and a meaningful (scientific) analysis requires an
understanding of these limits.
+Different datasets have different noise properties and different detection
methods (one method/algorithm/software that is run with a different set of
parameters is considered as a different detection method) will have different
abilities to detect or measure certain kinds of signal (astronomical objects)
and their properties in the dataset.
+Hence, quantifying the detection and measurement limitations with a particular
dataset and analysis tool is the most crucial/critical aspect of any high-level
analysis.
+
+Here, we'll review some of the most commonly used methods to quantify the
limits in astronomical data analysis and how MakeCatalog makes it easy to
measure them.
+Depending on the higher-level analysis, there are more tests that must be
done, but these are relatively low-level and usually necessary in most cases.
+In astronomy, it is common to use the magnitude (a unit-less scale) and
physical units, see @ref{Brightness flux magnitude}.
+Therefore the measurements discussed here are commonly used in units of
magnitudes.
+
+@menu
+* Magnitude measurement error of each detection:: Derivation of mag error
equation
+* Completeness limit of each detection:: Possibility of detecting similar
objects?
+* Upper limit magnitude of each detection:: How reliable is your magnitude?
+* Surface brightness limit of image:: How deep is your data?
+* Upper limit magnitude of image:: How deep is your data for certain
footprint?
+@end menu
+
+@node Magnitude measurement error of each detection, Completeness limit of
each detection, Quantifying measurement limits, Quantifying measurement limits
+@subsubsection Magnitude measurement error of each detection
+The raw error in measuring the magnitude is only meaningful when the object's
magnitude is brighter than the upper-limit magnitude (see below).
As discussed in @ref{Brightness flux magnitude}, the magnitude (@mymath{M}) of
an object with brightness @mymath{B} and zero point magnitude @mymath{z} can be
written as:
@dispmath{M=-2.5\log_{10}(B)+z}
@@ -16745,39 +17182,154 @@ But, @mymath{\Delta{B}/B} is just the inverse of the
Signal-to-noise ratio (@mym
MakeCatalog uses this relation to estimate the magnitude errors.
The signal-to-noise ratio is calculated in different ways for clumps and
objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
[2015]}), but this single equation can be used to estimate the measured
magnitude error afterwards for any type of target.
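Since @mymath{\Delta{B}/B} is the inverse of the signal-to-noise ratio,
standard error propagation on the magnitude relation gives roughly
@mymath{\Delta{M}\approx1.0857/(S/N)}; as a sketch, for a hypothetical
detection with @mymath{S/N=5}:

```shell
# dM = (2.5/ln(10)) * (dB/B) = (2.5/ln(10)) / (S/N) ~ 1.0857/(S/N).
# Hypothetical S/N of 5:
echo 5 | awk '{printf "%.3f\n", 2.5/log(10)/$1}'
```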
-@item Upper limit magnitude (of each detection)
+@node Completeness limit of each detection, Upper limit magnitude of each
detection, Magnitude measurement error of each detection, Quantifying
measurement limits
+@subsubsection Completeness limit of each detection
+@cindex Completeness
+As the surface brightness of the objects decreases, the ability to detect them
will also decrease.
+An important statistic is thus the fraction of objects of similar morphology
and brightness that will be detected with our detection algorithm/parameters in
a given image.
+This fraction is known as @emph{completeness}.
+For brighter objects, completeness is 1: all bright objects that might exist
over the image will be detected.
+However, as we go to objects of lower overall surface brightness, we will fail
to detect a fraction of them, and fainter than a certain surface brightness
level (for each morphology), nothing will be detectable in the image: you will
need more data to construct a ``deeper'' image.
+For a given profile and dataset, the magnitude where the completeness drops
below a certain level (usually chosen to be @mymath{90\%}) is known as the
completeness limit.
+
+@cindex Purity
+@cindex False detections
+@cindex Detections false
+Another important parameter in measuring completeness is purity: the fraction
of true detections to all detections.
+In effect purity is the measure of contamination by false detections: the
higher the purity, the lower the contamination.
+Completeness and purity are anti-correlated: if we can allow a large number of
false detections (that we might be able to remove by other means), we can
significantly increase the completeness limit.
+
+One traditional way to measure the completeness and purity of a given sample
is by embedding mock profiles in regions of the image with no detection.
+However in such a study we must be really careful to choose model profiles as
similar to the target of interest as possible.
+
+
+
+@node Upper limit magnitude of each detection, Surface brightness limit of
image, Completeness limit of each detection, Quantifying measurement limits
+@subsubsection Upper limit magnitude of each detection
Due to the noisy nature of data, it is possible to get arbitrarily low values
for a faint object's brightness (or arbitrarily high @emph{magnitudes}).
Given the scatter caused by the dataset's noise, values fainter than a certain
level are meaningless: another similar depth observation will give a radically
different value.
-For example, while the depth of the image is 32 magnitudes/pixel, a
measurement that gives a magnitude of 36 for a @mymath{\sim100} pixel object is
clearly unreliable.
-In another similar depth image, we might measure a magnitude of 30 for it, and
yet another might give 33.
-Furthermore, due to the noise scatter so close to the depth of the data-set,
the total brightness might actually get measured as a negative value, so no
magnitude can be defined (recall that a magnitude is a base-10 logarithm).
-This problem usually becomes relevant when the detection labels were not
derived from the values being measured (for example when you are estimating
colors, see @ref{MakeCatalog}).
+For example, assume that you have done your detection and segmentation on one
filter and now you do measurements over the same labeled regions, but on other
filters to measure colors (as we did in the tutorial @ref{Segmentation and
making a catalog}).
+Some objects are not going to have any significant signal in the other
filters, but you measure, for example, a magnitude of 36 for one of them!
+This is clearly unreliable (no dataset in current astronomy is able to detect
such a faint signal).
+In another image with the same depth, using the same filter, you might measure
a magnitude of 30 for it, and yet another might give you 33.
+Furthermore, the total brightness might actually be negative in some images of
the same depth (due to noise).
+In these cases, no magnitude can be defined and MakeCatalog will place a NaN
there (recall that a magnitude is a base-10 logarithm).
@cindex Upper limit magnitude
@cindex Magnitude, upper limit
Using such unreliable measurements will directly affect our analysis, so we
must not use the raw measurements.
-But how can we know how reliable a measurement on a given dataset is?
+When approaching the limits of your detection method, it is therefore
important to be able to identify such cases.
+But how can we know how reliable a measurement of one object on a given
dataset is?
-When we confront such unreasonably faint magnitudes, there is one thing we can
deduce: that if something actually exists here (possibly buried deep under the
noise), it's inherent magnitude is fainter than an @emph{upper limit magnitude}.
-To find this upper limit magnitude, we place the object's footprint
(segmentation map) over random parts of the image where there are no
detections, so we only have pure (possibly correlated) noise, along with
undetected objects.
+When we confront such unreasonably faint magnitudes, there is one thing we can
deduce: that if something actually exists under our labeled pixels (possibly
buried deep under the noise), its inherent magnitude is fainter than an
@emph{upper limit magnitude}.
+To find this upper limit magnitude, we place the object's footprint
(segmentation map) over a random part of the image where there are no
detections, and measure the total brightness within the footprint.
Doing this a large number of times will give us a distribution of brightness
values.
-The standard deviation (@mymath{\sigma}) of that distribution can be used to
quantify the upper limit magnitude.
+The standard deviation (@mymath{\sigma}) of that distribution can be used to
quantify the upper limit magnitude for that particular object (given its
particular shape and area):
+
+@dispmath{M_{up,n\sigma}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad
[mag/target]}
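+The logic of this distribution can be sketched with a toy Monte Carlo in AWK
(a sketch only: it assumes pure, uncorrelated Gaussian noise of
@mymath{\sigma=1} per pixel and a hypothetical 100-pixel footprint; real
images also contain correlated noise and undetected signal):

```shell
# Sum a 100-pixel footprint over pure Gaussian noise many times;
# the standard deviation of the sums approaches sqrt(100)*sigma=10.
awk 'BEGIN{
  srand(1); npix=100; ntrial=2000
  for(t=1; t<=ntrial; t++){
    s=0
    for(p=1; p<=npix; p++)
      # Box-Muller: one Gaussian deviate from two uniform deviates.
      s += sqrt(-2*log(1-rand())) * cos(2*atan2(0,-1)*rand())
    sum+=s; sumsq+=s*s
  }
  mean=sum/ntrial
  printf "std of footprint sums: %.2f\n", sqrt(sumsq/ntrial-mean*mean)
}'
```

+The printed standard deviation (close to 10) plays the role of the
@mymath{\sigma} that enters the upper-limit magnitude equation above.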
@cindex Correlated noise
-Traditionally, faint/small object photometry was done using fixed circular
apertures (for example with a diameter of @mymath{N} arc-seconds).
-Hence, the upper limit was like the depth discussed above: one value for the
whole image.
+Traditionally, faint/small object photometry was done using fixed circular
apertures (for example with a diameter of @mymath{N} arc-seconds) and there
wasn't much processing involved (to make a deep stack).
+Hence, the upper limit was synonymous with the surface brightness limit
discussed above: one value for the whole image.
The problem with this simplified approach is that the number of pixels in the
aperture directly affects the final distribution and thus magnitude.
-Also the image correlated noise might actually create certain patters, so the
shape of the object can also affect the final result result.
-Fortunately, with the much more advanced hardware and software of today, we
can make customized segmentation maps for each object.
+Also the image correlated noise might actually create certain patterns, so the
shape of the object can also affect the final result.
+Fortunately, with the much more advanced hardware and software of today, we
can make customized segmentation maps (footprint) for each object and have
enough computing power to actually place that footprint over many random places.
+As a result, the per-target upper-limit magnitude and general surface
brightness limit have diverged.
-When requested, MakeCatalog will randomly place each target's footprint over
the dataset as described above and estimate the resulting distribution's
properties (like the upper limit magnitude).
+When any of the upper-limit-related columns are requested, MakeCatalog will
randomly place each target's footprint over the undetected parts of the dataset
as described above, and estimate the required properties.
The procedure is fully configurable with the options in @ref{Upper-limit
settings}.
-If one value for the whole image is required, you can either use the surface
brightness limit above or make a circular aperture and feed it into MakeCatalog
to request an upper-limit magnitude for it@footnote{If you intend to make
apertures manually and not use a detection map (for example from
@ref{Segment}), don't forget to use the @option{--upmaskfile} to give
NoiseChisel's output (or any a binary map, marking detected pixels, see
@ref{NoiseChisel output}) as a mask.
-Otherwise, the footprints may randomly fall over detections, giving highly
skewed distributions, with wrong upper-limit distributions.
-See The description of @option{--upmaskfile} in @ref{Upper-limit settings} for
more.}.
+You can get the full list of upper-limit related columns of MakeCatalog with
this command (the extra @code{--} before @code{--upperlimit} is
necessary@footnote{Without the extra @code{--}, grep will assume that
@option{--upperlimit} is one of its own options, and will thus abort,
complaining that it has no option with this name.}):
+
+@example
+$ astmkcatalog --help | grep -- --upperlimit
+@end example
+
+@node Surface brightness limit of image, Upper limit magnitude of image, Upper
limit magnitude of each detection, Quantifying measurement limits
+@subsubsection Surface brightness limit of image
+@cindex Surface brightness
+As we make more observations on one region of the sky and add/combine the
observations into one dataset, both the signal and the noise increase.
+However, the signal increases much faster than the noise:
+assuming you add @mymath{N} datasets with equal exposure times, the signal
will increase as a multiple of @mymath{N}, while the noise increases as
@mymath{\sqrt{N}}.
+Therefore the signal-to-noise ratio increases by a factor of @mymath{\sqrt{N}}.
+Visually, fainter (per pixel) parts of the objects/signal in the image will
become more visible/detectable.
+The noise-level is known as the dataset's surface brightness limit.
+
+You can think of the noise as muddy water that is completely covering a flat
ground@footnote{The ground is the sky value in this analogy, see @ref{Sky
value}.
+Note that this analogy only holds for a flat sky value across the surface of
the image or ground.}.
+The signal (coming from astronomical objects in real data) will be
summits/hills that start from the flat sky level (under the muddy water) and
their summits can sometimes reach above the muddy water.
+Let's assume that in your first observation the muddy water has just been
stirred and except a few small peaks, you can't see anything through the mud.
+As you wait and make more observations/exposures, the mud settles down and the
@emph{depth} of the transparent water increases.
+As a result, more and more summits become visible and the lower parts of the
hills (parts with lower surface brightness) can be seen more clearly.
+In this analogy@footnote{Note that this muddy water analogy is not perfect,
because while the water-level remains the same all over a peak, in data
analysis, the Poisson noise increases with the level of data.}, height (from
the ground) is the @emph{surface brightness} and the height of the muddy water
at the moment you combine your data, is your @emph{surface brightness limit}
for that moment.
+
+@cindex Data's depth
+The outputs of NoiseChisel include the Sky standard deviation
(@mymath{\sigma}) on every group of pixels (a tile) that were calculated from
the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel
output}.
+Let's take @mymath{\sigma_m} as the median @mymath{\sigma} over the successful
tiles in the image (prior to interpolation or smoothing).
+It is recorded in the @code{MEDSTD} keyword of the @code{SKY_STD} extension of
NoiseChisel's output.
+
+@cindex ACS camera
+@cindex Surface brightness limit
+@cindex Limit, surface brightness
+On different instruments, pixels cover different spatial angles over the sky.
+For example, the width of each pixel on the ACS camera on the Hubble Space
Telescope (HST) is roughly 0.05 seconds of arc, while the pixels of SDSS are
each 0.396 seconds of arc (almost eight times wider@footnote{Ground-based
instruments like the SDSS suffer from strong smoothing due to the atmosphere.
+Therefore, increasing the pixel resolution (or decreasing the width of a
pixel) won't increase the received information.}).
+Nevertheless, irrespective of its sky coverage, a pixel is our unit of data
collection.
+
+To start with, we define the low-level surface brightness limit or
@emph{depth}, in units of magnitude/pixel, with the equation below (assuming
the image has zero point magnitude @mymath{z} and we want the @mymath{n}th
multiple of @mymath{\sigma_m}).
+
+@dispmath{SB_{n\sigma,\rm pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad
[mag/pixel]}
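As a quick sanity check, the equation above can be evaluated directly.
In the Python snippet below, the zero point and median Sky standard deviation
are hypothetical example values, not measurements of any real survey:

```python
# Evaluate SB_{n sigma, pixel} = -2.5 * log10(n * sigma_m) + z.
# 'z' and 'sigma_m' are assumed example values, not real survey numbers.
import math

z = 25.94          # hypothetical zero point magnitude
sigma_m = 0.0005   # hypothetical median per-pixel Sky STD (image units)
n = 1              # multiple of sigma_m

sb_pixel = -2.5 * math.log10(n * sigma_m) + z
print(f"{sb_pixel:.2f} mag/pixel")
```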
+
+@cindex XDF survey
+@cindex CANDELS survey
+@cindex eXtreme Deep Field (XDF) survey
+As an example, the XDF survey covers part of the sky that the HST has observed
the most (for 85 orbits) and is consequently very small (@mymath{\sim4} minutes
of arc, squared).
+On the other hand, the CANDELS survey is one of the widest multi-color
surveys done by the HST, covering several fields (about 720
arcmin@mymath{^2}), but its deepest fields have only 9 orbits of observation.
+The @mymath{1\sigma} depth of the XDF and CANDELS-deep surveys in the near
infrared WFC3/F160W filter are respectively 34.40 and 32.45 magnitudes/pixel.
+In a single orbit image, this same field has a @mymath{1\sigma} depth of 31.32
magnitudes/pixel.
+Recall that a larger magnitude corresponds to less brightness, see
@ref{Brightness flux magnitude}.
+
+@cindex Pixel scale
+The low-level magnitude/pixel measurement above is only useful when all the
datasets you want to use, or compare, have the same pixel size.
+However, you will often find yourself using, or comparing, datasets from
various instruments with different pixel scales (projected pixel width, in
arc-seconds).
+If we know the pixel scale, we can obtain a more easily comparable surface
brightness limit in units of: magnitude/arcsec@mymath{^2}.
+But another complication is that astronomical objects are usually larger
than 1 arcsec@mymath{^2}, so it's common to measure the surface brightness
depth over a larger (but fixed, depending on context) area.
+
+Let's assume that every pixel is @mymath{p} arcsec@mymath{^2} and we want
the surface brightness limit for an object covering @mymath{A}
arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels that cover an area
of @mymath{A} arcsec@mymath{^2}).
+On the other hand, noise is added in RMS@footnote{If you add three datasets
with noise @mymath{\sigma_1}, @mymath{\sigma_2} and @mymath{\sigma_3}, the
resulting noise level is
@mymath{\sigma_t=\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}}, so when
@mymath{\sigma_1=\sigma_2=\sigma_3\equiv\sigma}, then
@mymath{\sigma_t=\sigma\sqrt{3}}.
+In this case, the area @mymath{A} is covered by @mymath{A/p} pixels, so the
noise level is @mymath{\sigma_t=\sigma\sqrt{A/p}}.}, hence the noise level in
@mymath{A} arcsec@mymath{^2} is @mymath{n\sigma_m\sqrt{A/p}}.
+But we want the result in units of arcsec@mymath{^2}, so we should divide this
by @mymath{A} arcsec@mymath{^2}:
+@mymath{n\sigma_m\sqrt{A/p}/A=n\sigma_m\sqrt{A/(pA^2)}=n\sigma_m/\sqrt{pA}}.
+Plugging this into the magnitude equation, we get the @mymath{n\sigma}
surface brightness limit, over an area of @mymath{A} arcsec@mymath{^2}, in
units of magnitudes/arcsec@mymath{^2}:
+
+@dispmath{SB_{{n\sigma,\rm A
arcsec}^2}=-2.5\times\log_{10}{\left(n\sigma_m\over \sqrt{pA}\right)}+z
\quad\quad [mag/arcsec^2]}
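The same kind of check works for the per-area limit; all the numbers below
are again hypothetical (a 0.05 arcsec pixel scale and a 100
arcsec@mymath{^2} area are assumptions for illustration only):

```python
# Evaluate SB over an area A: -2.5 * log10( n*sigma_m / sqrt(p*A) ) + z.
# All parameter values are assumed examples for illustration.
import math

z = 25.94           # hypothetical zero point magnitude
sigma_m = 0.0005    # hypothetical median per-pixel Sky STD
n = 3               # a 3-sigma limit
p = 0.05 ** 2       # pixel area in arcsec^2 (assuming 0.05"/pixel)
A = 100.0           # area the limit is defined over, in arcsec^2

sb_area = -2.5 * math.log10(n * sigma_m / math.sqrt(p * A)) + z
print(f"{sb_area:.2f} mag/arcsec^2")
```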
+
+@cindex World Coordinate System (WCS)
+MakeCatalog will calculate the input dataset's @mymath{SB_{n\sigma,\rm
pixel}} and @mymath{SB_{{n\sigma,\rm A arcsec}^2}} and will write them into
the @code{SBLMAGPX} and @code{SBLMAG} keywords of the output catalog(s), see
@ref{MakeCatalog output}.
+You can set your desired @mymath{n}-th multiple of @mymath{\sigma} and the
@mymath{A} arcsec@mymath{^2} area using the following two options respectively:
@option{--sfmagnsigma} and @option{--sfmagarea} (see @ref{MakeCatalog output}).
+Just note that @mymath{SB_{{n\sigma,\rm A arcsec}^2}} is only calculated if
the input has World Coordinate System (WCS).
+Without WCS, the pixel scale cannot be derived.
+
+@cindex Correlated noise
+@cindex Noise, correlated
+As you saw in its derivation, the calculation above extrapolates the noise in
one pixel over all the input's pixels!
+It therefore implicitly assumes that the noise is the same in all of the
pixels.
+But this only happens in individual exposures: reduced data will have
correlated noise because they are a stack of many individual exposures that
have been warped (thus mixing the pixel values).
+A more accurate measure which will provide a realistic value for every labeled
region is known as the @emph{upper-limit magnitude}, which is discussed below.
+
+
+@node Upper limit magnitude of image, , Surface brightness limit of image,
Quantifying measurement limits
+@subsubsection Upper limit magnitude of image
+As mentioned above, the upper-limit magnitude will depend on the shape of each
object's footprint.
+Therefore we can measure the dataset's upper-limit magnitude using standard
shapes.
+Traditionally a circular aperture of a fixed size (in arcseconds) has been
used.
+For a full example of implementing this, see the respective section in the
tutorial (@ref{Image surface brightness limit}).
+
+
+
+
+
-@end table
@@ -17143,6 +17695,8 @@ The dataset given to @option{--stdfile} (and
@option{--stdhdu} has the Sky varia
Read the input STD image even if it is not required by any of the requested
columns.
This is because some of the output catalog's metadata may need it, for example
to calculate the dataset's surface brightness limit (see @ref{Quantifying
measurement limits}, configured with @option{--sfmagarea} and
@option{--sfmagnsigma} in @ref{MakeCatalog output}).
+Furthermore, if the input STD image doesn't have the @code{MEDSTD} keyword
(that is meant to contain the representative standard deviation of the full
image), with this option, the median will be calculated and used for the
surface brightness limit.
+
@item -z FLT
@itemx --zeropoint=FLT
The zero point magnitude for the input image, see @ref{Brightness flux
magnitude}.
@@ -17152,7 +17706,8 @@ The sigma-clipping parameters when any of the
sigma-clipping related columns are
This option takes two values: the first is the multiple of @mymath{\sigma},
and the second is the termination criteria.
If the latter is larger than 1, it is read as an integer number and will be
the number of times to clip.
-If it is smaller than 1, it is interpreted as the tolerance level to stop
clipping. See @ref{Sigma clipping} for a complete explanation.
+If it is smaller than 1, it is interpreted as the tolerance level to stop
clipping.
+See @ref{Sigma clipping} for a complete explanation.
@item --fracmax=FLT[,FLT]
The fractions (one or two) of maximum value in objects or clumps to be used in
the related columns, for example @option{--fracmaxarea1},
@option{--fracmaxsum1} or @option{--fracmaxradius1}, see @ref{MakeCatalog
measurements}.
@@ -17510,6 +18065,7 @@ For now these factors have to be found by other means.
@item --upperlimit
The upper limit value (in units of the input image) for this object or clump.
+This is the sigma-clipped standard deviation of the random distribution,
multiplied by the value of @option{--upnsigma}.
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a
complete explanation.
This is very important for the fainter and smaller objects in the image where
the measured magnitudes are not reliable.
@@ -17518,6 +18074,10 @@ The upper limit magnitude for this object or clump.
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a
complete explanation.
This is very important for the fainter and smaller objects in the image where
the measured magnitudes are not reliable.
+@item --upperlimitsb
+The upper-limit surface brightness (in units of mag/arcsec@mymath{^2}) of this
labeled region (object or clump).
+This is just a simple wrapper over lower-level columns: setting @mymath{B}
and @mymath{A} as the values of the @option{--upperlimit} and
@option{--areaarcsec2} columns, this column is filled by simply using the
surface brightness equation of @ref{Brightness flux magnitude}.
+
@item --upperlimitonesigma
The @mymath{1\sigma} upper limit value (in units of the input image) for this
object or clump.
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a
complete explanation.
@@ -17791,13 +18351,48 @@ If an output filename is given (see @option{--output}
in @ref{Input output optio
When it isn't given, the input name will be appended with a @file{_cat} suffix
(see @ref{Automatic output}) and its format will be determined from the
@option{--tableformat} option, which is also discussed in @ref{Input output
options}.
@option{--tableformat} is also necessary when the requested output name is a
FITS table (recall that FITS can accept ASCII and binary tables, see
@ref{Table}).
-By default (when @option{--spectrum} isn't called) only a single catalog/table
will be created for ``objects'', however, if @option{--clumpscat} is called, a
secondary catalog/table will also be created.
-For more on ``objects'' and ``clumps'', see @ref{Segment}.
-In short, if you only have one set of labeled images, you don't have to worry
about clumps (they are deactivated by default).
+By default (when @option{--spectrum} or @option{--clumpscat} aren't called)
only a single catalog/table will be created for the labeled ``objects''.
+
+@itemize
+@item
+If @option{--clumpscat} is called, a secondary catalog/table will also be
created for ``clumps'' (one of the outputs of the Segment program; for more
on ``objects'' and ``clumps'', see @ref{Segment}).
+In short, if you only have one labeled image, you don't have to worry about
clumps and can just ignore this.
+@item
+When @option{--spectrum} is called, it is not mandatory to specify any
single-valued measurement columns. In this case, the output will only be the
spectra of each labeled region within a 3D datacube.
+For more, see the description of @option{--spectrum} in @ref{MakeCatalog
measurements}.
+@end itemize
+
+@cindex Surface brightness limit
+@cindex Limit, Surface brightness
+When possible, MakeCatalog will also measure the full input's noise level
(also known as surface brightness limit, see @ref{Quantifying measurement
limits}).
+Since these measurements are related to the noise and not any particular
labeled object, they are stored as keywords in the output table.
+Furthermore, they are only possible when a standard deviation image has been
loaded (done automatically for any column measurement that involves noise, for
example @option{--sn} or @option{--std}).
+But if you just want the surface brightness limit and no noise-related column,
you can use @option{--forcereadstd}.
+All these keywords start with @code{SBL} (for ``surface brightness limit'')
and are described below:
+
+@table @code
+@item SBLSTD
+Per-pixel standard deviation.
+If a @code{MEDSTD} keyword exists in the standard deviation dataset, then that
value is directly used.
+
+@item SBLNSIG
+Sigma multiple for surface brightness limit (value you gave to
@option{--sfmagnsigma}), used for @code{SBLMAGPX} and @code{SBLMAG}.
+
+@item SBLMAGPX
+Per-pixel surface brightness limit (in units of magnitudes/pixel).
+
+@item SBLAREA
+Area (in units of arcsec@mymath{^2}) used in @code{SBLMAG} (value you gave to
@option{--sfmagarea}).
-When @option{--spectrum} is called, it is not mandatory to specify any
single-valued measurement columns. In this case, the output will only be the
spectra of each labeled region. See the description of @option{--spectrum} in
@ref{MakeCatalog measurements}.
+@item SBLMAG
+Surface brightness limit of data calculated over @code{SBLAREA} (in units of
mag/arcsec@mymath{^2}).
+@end table
+
+When any of the upper-limit measurements are requested, the input parameters
for the upper-limit measurement are stored in the keywords starting with
@code{UP}: @code{UPNSIGMA}, @code{UPNUMBER}, @code{UPRNGNAM}, @code{UPRNGSEE},
@code{UPSCMLTP}, @code{UPSCTOL}.
+These are primarily input arguments, so they correspond to the options with a
similar name.
-The full list of MakeCatalog's output options are elaborated below.
+The full list of MakeCatalog's options relating to the output file format and
keywords are listed below.
+See @ref{MakeCatalog measurements} for specifying which columns you want in
the final catalog.
@table @option
@item -C
@@ -17845,7 +18440,23 @@ For random measurements on any area, please use the
upper-limit columns of MakeC
-@node Match, Sort FITS files by night, MakeCatalog, Data analysis
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+@node Match, , MakeCatalog, Data analysis
@section Match
Data can come from different telescopes, filters, software and even
different configurations of a single software.
@@ -18105,2434 +18716,2602 @@ The last three are the three Euler angles in
units of degrees in the ZXZ order a
-@node Sort FITS files by night, SAO DS9 region files from table, Match, Data
analysis
-@section Sort FITS files by night
+@node Modeling and fittings, High-level calculations, Data analysis, Top
+@chapter Modeling and fitting
-@cindex Calendar
-FITS images usually contain (several) keywords for preserving important dates.
-In particular, for lower-level data, this is usually the observation date and
time (for example, stored in the @code{DATE-OBS} keyword value).
-When analyzing observed datasets, many calibration steps (like the dark, bias
or flat-field), are commonly calculated on a per-observing-night basis.
+@cindex Fitting
+@cindex Modeling
+In order to fully understand observations after the initial analysis of the
image, it is very important to compare them with existing models, allowing a
better understanding of both the models and the data.
+The tools in this chapter create model galaxies and provide 2D fits to help
understand the detections.
-However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd}) is
based on the western (Gregorian) calendar.
-Dates that are stored in this format are complicated for automatic processing:
a night starts in the final hours of one calendar day, and extends to the early
hours of the next calendar day.
-As a result, to identify datasets from one night, we commonly need to search
for two dates.
-However calendar peculiarities can make this identification very difficult.
-For example when an observation is done on the night separating two months
(like the night starting on March 31st and going into April 1st), or two years
(like the night starting on December 31st 2018 and going into January 1st,
2019).
-To account for such situations, it is necessary to keep track of how many days
are in a month, and leap years, etc.
+@menu
+* MakeProfiles:: Making mock galaxies and stars.
+* MakeNoise:: Make (add) noise to an image.
+@end menu
-@cindex Unix epoch time
-@cindex Time, Unix epoch
-@cindex Epoch, Unix time
-Gnuastro's @file{astscript-sort-by-night} script is created to help in such
important scenarios.
-It uses @ref{Fits} to convert the FITS date format into the Unix epoch time
(number of seconds since 00:00:00 of January 1st, 1970), using the
@option{--datetosec} option.
-The Unix epoch time is a single number (integer, if not given in sub-second
precision), enabling easy comparison and sorting of dates after January 1st,
1970.
-You can use this script as a basis for making a much more highly customized
sorting script.
-Here are some examples
-@itemize
+
+@node MakeProfiles, MakeNoise, Modeling and fittings, Modeling and fittings
+@section MakeProfiles
+
+@cindex Checking detection algorithms
+@pindex @r{MakeProfiles (}astmkprof@r{)}
+MakeProfiles will create mock astronomical profiles from a catalog, either
individually or together in one output image.
+In data analysis, making a mock image can act like a calibration tool, through
which you can test how successfully your detection technique is able to detect
a known set of objects.
+There are commonly two aspects to detection: the detection of the fainter
parts of bright objects (which, in the case of galaxies, fade into the noise
very slowly) and the complete detection of an overall faint object.
+Making mock galaxies is the most accurate (and idealistic) way these two
aspects of a detection algorithm can be tested.
+Mock profiles are also needed when fitting known functional profiles to
observations.
+
+MakeProfiles was initially built for extragalactic studies, so currently the
only astronomical objects it can produce are stars and galaxies.
+We welcome the simulation of any other astronomical object.
+The general outline of the steps that MakeProfiles takes is the following:
+
+@enumerate
+
@item
-If you need to copy the files, but only need a single extension (not the whole
file), you can add a step just before the making of the symbolic links, or
copies, and change it to only copy a certain extension of the FITS file using
the Fits program's @option{--copy} option, see @ref{HDU information and
manipulation}.
+Build the full profile out to its truncation radius in a possibly over-sampled
array.
@item
-If you need to classify the files with finer detail (for example the purpose
of the dataset), you can add a step just before the making of the symbolic
links, or copies, to specify a file-name prefix based on other certain keyword
values in the files.
-For example when the FITS files have a keyword to specify if the dataset is a
science, bias, or flat-field image.
-You can read it and to add a @code{sci-}, @code{bias-}, or @code{flat-} to the
created file (after the @option{--prefix}) automatically.
+Multiply all the elements by a fixed constant so the profile's total
magnitude equals the desired total magnitude.
-For example, let's assume the observing mode is stored in the hypothetical
@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE},
@code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
-With the step below, you can generate a mode-prefix, and add it to the
generated link/copy names (just correct the filename and extension of the first
line to the script's variables):
+@item
+If @option{--individual} is called, save the array for each profile to a FITS
file.
-@example
-modepref=$(astfits infile.fits -h1 \
- | sed -e"s/'/ /g" \
- | awk '$1=="MODE"@{ \
- if($3=="BIAS-IMAGE") print "bias-"; \
- else if($3=="SCIENCE-IMAGE") print "sci-"; \
- else if($3==FLAT-EXP) print "flat-"; \
- else print $3, "NOT recognized"; exit 1@}')
-@end example
+@item
+If @option{--nomerged} is not called, add the overlapping pixels of all the
created profiles to the output image and abort.
-@cindex GNU AWK
-@cindex GNU Sed
-Here is a description of it.
-We first use @command{astfits} to print all the keywords in extension @code{1}
of @file{infile.fits}.
-In the FITS standard, string values (that we are assuming here) are placed in
single quotes (@key{'}) which are annoying in this context/use-case.
-Therefore, we pipe the output of @command{astfits} into @command{sed} to
remove all such quotes (substituting them with a blank space).
-The result is then piped to AWK for giving us the final mode-prefix: with
@code{$1=="MODE"}, we ask AWK to only consider the line where the first column
is @code{MODE}.
-There is an equal sign between the key name and value, so the value is the
third column (@code{$3} in AWK).
-We thus use a simple @code{if-else} structure to look into this value and
print our custom prefix based on it.
-The output of AWK is then stored in the @code{modepref} shell variable which
you can add to the link/copy name.
+@end enumerate
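The second step above (scaling the array to a desired total magnitude) can be
sketched as follows.
This is a minimal illustration of the idea, not MakeProfiles' actual
implementation; the zero point and profile values are invented:

```python
# Scale a raw profile so its total magnitude equals 'target_mag'.
# All values are invented for illustration only.
import math

zeropoint = 25.0                  # assumed zero point magnitude
target_mag = 20.0                 # desired total magnitude
profile = [4.0, 2.0, 1.0, 1.0]    # raw (unnormalized) profile pixels

target_sum = 10 ** ((zeropoint - target_mag) / 2.5)   # required pixel sum
scale = target_sum / sum(profile)                     # the fixed constant
profile = [v * scale for v in profile]

mag = -2.5 * math.log10(sum(profile)) + zeropoint     # recovers target_mag
```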
-With the solution above, the increment of the file counter for each night will
be independent of the mode.
-If you want the counter to be mode-dependent, you can add a different counter
for each mode and use that counter instead of the generic counter for each
night (based on the value of @code{modepref}).
-But we'll leave the implementation of this step to you as an exercise.
+Using input values, MakeProfiles adds the World Coordinate System (WCS)
headers of the FITS standard to all its outputs (except PSF images!).
+For a simple test on a set of mock galaxies in one image, there is no need for
the third step or the WCS information.
-@end itemize
+@cindex Transform image
+@cindex Lensing simulations
+@cindex Image transformations
+However, in complicated simulations like weak lensing, where each galaxy
undergoes various types of individual transformations based on its position,
those transformations can be applied to the different individual images with
other programs.
+After all the transformations are applied, using the WCS information in each
individual profile image, they can be merged into one output image for
convolution and adding noise.
@menu
-* Invoking astscript-sort-by-night:: Inputs and outputs to this script.
+* Modeling basics:: Astronomical modeling basics.
+* If convolving afterwards:: Considerations for convolving later.
+* Profile magnitude:: Definition of total profile magnitude.
+* Invoking astmkprof:: Inputs and Options for MakeProfiles.
@end menu
-@node Invoking astscript-sort-by-night, , Sort FITS files by night, Sort FITS
files by night
-@subsection Invoking astscript-sort-by-night
-
-This installed script will read a FITS date formatted value from the given
keyword, and classify the input FITS files into individual nights.
-For more on installed scripts please see (see @ref{Installed scripts}).
-This script can be used with the following general template:
-
-@example
-$ astscript-sort-by-night [OPTION...] FITS-files
-@end example
-@noindent
-One line examples:
-@example
-## Use the DATE-OBS keyword
-$ astscript-sort-by-night --key=DATE-OBS /path/to/data/*.fits
+@node Modeling basics, If convolving afterwards, MakeProfiles, MakeProfiles
+@subsection Modeling basics
-## Make links to the input files with the `img-' prefix
-$ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits
-@end example
+In the subsections below, first a review of some very basic information and
concepts behind modeling a real astronomical image is given.
+You can skip this subsection if you are already sufficiently familiar with
these concepts.
-This script will look into a HDU/extension (@option{--hdu}) for a keyword
(@option{--key}) in the given FITS files and interpret the value as a date.
-The inputs will be separated by "night"s (11:00a.m to next day's 10:59:59a.m,
spanning two calendar days, exact hour can be set with @option{--hour}).
+@menu
+* Defining an ellipse and ellipsoid:: Definition of these important shapes.
+* PSF:: Radial profiles for the PSF.
+* Stars:: Making mock star profiles.
+* Galaxies:: Radial profiles for galaxies.
+* Sampling from a function:: Sample a function on a pixelated canvas.
+* Oversampling:: Oversampling the model.
+@end menu
-The default output is a list of all the input files along with the following
two columns: night number and file number in that night (sorted by time).
-With @option{--link} a symbolic link will be made (one for each input) that
contains the night number, and number of file in that night (sorted by time),
see the description of @option{--link} for more.
-When @option{--copy} is used instead of a link, a copy of the inputs will be
made instead of symbolic link.
+@node Defining an ellipse and ellipsoid, PSF, Modeling basics, Modeling basics
+@subsubsection Defining an ellipse and ellipsoid
-Below you can see one example where all the @file{target-*.fits} files in the
@file{data} directory should be separated by observing night according to the
@code{DATE-OBS} keyword value in their second extension (number @code{1},
recall that HDU counting starts from 0).
-You can see the output after the @code{ls} command.
+@cindex Ellipse
+@cindex Axis ratio
+@cindex Position angle
+The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on an
ellipse.
+Therefore, in this section we'll start by defining an ellipse on a pixelated
2D surface.
+Labeling the major axis of an ellipse with @mymath{a} and its minor axis
with @mymath{b}, the @emph{axis ratio} is defined as: @mymath{q\equiv b/a}.
+The major axis of an ellipse can be aligned in any direction, therefore the
angle of the major axis with respect to the horizontal axis of the image is
defined to be the @emph{position angle} of the ellipse and in this book, we
show it with @mymath{\theta}.
-@example
-$ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
-$ ls
-img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
-@end example
+@cindex Radial profile on ellipse
+Our aim is to put a radial profile of any functional form @mymath{f(r)} over
an ellipse.
+Hence we need to associate a radius/distance to every point in space.
+Let's define the radial distance @mymath{r_{el}} as the distance on the major
axis to the center of an ellipse which is located at @mymath{i_c} and
@mymath{j_c} (in other words @mymath{r_{el}\equiv{a}}).
+We want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the
image coordinate system) from the center of the ellipse with axis ratio
@mymath{q} and position angle @mymath{\theta}.
+First the coordinate system is rotated@footnote{Do not confuse the signs of
@mymath{sin} with the rotation matrix defined in @ref{Warping basics}.
+In that equation, the point is rotated, here the coordinates are rotated and
the point is fixed.} by @mymath{\theta} to get the new rotated coordinates of
that point @mymath{(i_r,j_r)}:
-The outputs can be placed in a different (already existing) directory by
including that directory's name in the @option{--prefix} value, for example
@option{--prefix=sorted/img-} will put them all under the @file{sorted}
directory.
+@dispmath{i_r(i,j)=+(i_c-i)\cos\theta+(j_c-j)\sin\theta}
+@dispmath{j_r(i,j)=-(i_c-i)\sin\theta+(j_c-j)\cos\theta}
-This script can be configured like all Gnuastro's programs (through
command-line options, see @ref{Common options}), with some minor differences
that are described in @ref{Installed scripts}.
-The particular options to this script are listed below:
+@cindex Elliptical distance
+@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1}
and that we defined @mymath{r_{el}\equiv{a}}.
+Hence, multiplying all elements of the ellipse definition with
@mymath{r_{el}^2} we get the elliptical distance of this point:
@mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}.
+To place the radial profiles explained below over an ellipse,
@mymath{f(r_{el})} is calculated based on the functional radial profile desired.
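The rotated coordinates and the elliptical radius above can be written as a
short function.
This is an illustrative sketch in plain Python, not Gnuastro's
implementation:

```python
# Elliptical radius r_el of pixel (i, j), following the definitions above.
import math

def r_el(i, j, ic, jc, q, theta):
    """Elliptical radius of pixel (i, j) for an ellipse centered at
    (ic, jc), with axis ratio q and position angle theta (radians)."""
    ir = +(ic - i) * math.cos(theta) + (jc - j) * math.sin(theta)
    jr = -(ic - i) * math.sin(theta) + (jc - j) * math.cos(theta)
    return math.sqrt(ir**2 + (jr / q)**2)
```

For a circle (@mymath{q=1}, any @mymath{\theta}) this reduces to the ordinary
Euclidean distance from the center.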
-@table @option
-@item -h STR
-@itemx --hdu=STR
-The HDU/extension to use in all the given FITS files.
-All of the given FITS files must have this extension.
+@cindex Ellipsoid
+@cindex Euler angles
+An ellipse in 3D, or an @url{https://en.wikipedia.org/wiki/Ellipsoid,
ellipsoid}, can be defined following similar principles as before.
+Labeling the major (largest) axis length as @mymath{a}, the second and third
(in a right-handed coordinate system) axis lengths can be labeled as @mymath{b}
and @mymath{c}.
+Hence we have two axis ratios: @mymath{q_1\equiv{b/a}} and
@mymath{q_2\equiv{c/a}}.
+The orientation of the ellipsoid can be defined from the orientation of its
major axis.
+There are many ways to define 3D orientation and order matters.
+So to be clear, here we use the ZXZ (or @mymath{Z_1X_2Z_3}) proper
@url{https://en.wikipedia.org/wiki/Euler_angles, Euler angles} to define the 3D
orientation.
+In short, when a point is rotated in this order, we first rotate it around the
Z axis (third axis) by @mymath{\alpha}, then about the (rotated) X axis by
@mymath{\beta} and finally about the (rotated) Z axis by @mymath{\gamma}.
-@item -k STR
-@itemx --key=STR
-The keyword name that contains the FITS date format to classify/sort by.
+Following the discussion in @ref{Merging multiple warpings}, we can define the
full rotation with the following matrix multiplication.
+However, here we are rotating the coordinates, not the point.
+Therefore, both the rotation angles and rotation order are reversed.
+We are also not using homogeneous coordinates (see @ref{Warping basics}) since
we aren't concerned with translation in this context:
-@item -H FLT
-@itemx --hour=FLT
-The hour that defines the next ``night''.
-By default, all times before 11:00a.m are considered to belong to the previous
calendar night.
-If a sub-hour value is necessary, it should be given in units of hours, for
example @option{--hour=9.5} corresponds to 9:30a.m.
+@dispmath{\left[\matrix{i_r\cr j_r\cr k_r}\right] =
+    \left[\matrix{\cos\gamma&\sin\gamma&0\cr
+                  -\sin\gamma&\cos\gamma&0\cr 0&0&1}\right]
+    \left[\matrix{1&0&0\cr 0&\cos\beta&\sin\beta\cr
+                  0&-\sin\beta&\cos\beta}\right]
+    \left[\matrix{\cos\alpha&\sin\alpha&0\cr
+                  -\sin\alpha&\cos\alpha&0\cr 0&0&1}\right]
+    \left[\matrix{i_c-i\cr j_c-j\cr k_c-k}\right] }
-@cartouche
@noindent
-@cindex Time zone
-@cindex UTC (Universal time coordinate)
-@cindex Universal time coordinate (UTC)
-@strong{Dealing with time zones:}
-The time that is recorded in @option{--key} may be in UTC (Universal Time
Coordinate).
-However, the organization of the images taken during the night depends on the
local time.
-It is possible to take this into account by setting the @option{--hour} option
to the local time in UTC.
-
-For example, consider a set of images taken in Auckland (New Zealand, UTC+12)
during different nights.
-If you want to classify these images by night, you have to know at which time
(in UTC time) the Sun rises (or any other separator/definition of a different
night).
-For example if your observing night finishes before 9:00a.m in Auckland, you
can use @option{--hour=21}.
-Because in Auckland the local time of 9:00 corresponds to 21:00 UTC.
-@end cartouche
-
-@item -l
-@itemx --link
-Create a symbolic link for each input FITS file.
-This option cannot be used with @option{--copy}.
-The link will have a standard name in the following format (variable parts are
written in @code{CAPITAL} letters and described after it):
+Recall that an ellipsoid can be characterized with
+@mymath{(i_r/a)^2+(j_r/b)^2+(k_r/c)^2=1}, so similar to before
+(@mymath{r_{el}\equiv{a}}), we can find the ellipsoidal radius at pixel
+@mymath{(i,j,k)} as: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q_1)^2+(k_r/q_2)^2}}.
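As a sketch (again illustrative Python, not Gnuastro's code), the ZXZ
coordinate rotation and the ellipsoidal radius above can be combined into one
function:

```python
# Ellipsoidal radius: rotate coordinates with Rz(gamma) Rx(beta) Rz(alpha),
# then apply the two axis ratios q1 and q2 (angles in radians).
import math

def r_el_3d(p, center, q1, q2, alpha, beta, gamma):
    """Ellipsoidal radius of point p=(i,j,k) about 'center'=(ic,jc,kc)."""
    d = [c - x for c, x in zip(center, p)]       # (ic-i, jc-j, kc-k)
    def rot_z(v, t):                             # rotate coords about Z
        return [ v[0]*math.cos(t) + v[1]*math.sin(t),
                -v[0]*math.sin(t) + v[1]*math.cos(t), v[2]]
    def rot_x(v, t):                             # rotate coords about X
        return [v[0],  v[1]*math.cos(t) + v[2]*math.sin(t),
                      -v[1]*math.sin(t) + v[2]*math.cos(t)]
    ir, jr, kr = rot_z(rot_x(rot_z(d, alpha), beta), gamma)
    return math.sqrt(ir**2 + (jr/q1)**2 + (kr/q2)**2)
```

With all angles zero and @mymath{q_1=q_2=1}, this reduces to the Euclidean
distance, as expected.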
-@example
-PnN-I.fits
-@end example
+@cindex Breadth first search
+@cindex Inside-out construction
+@cindex Making profiles pixel by pixel
+@cindex Pixel by pixel making of profiles
+MakeProfiles builds the profile starting from the element (pixel in an
image) in the dataset that is nearest to the profile center.
+The profile value is calculated for that central pixel using Monte Carlo
integration, see @ref{Sampling from a function}.
+The next pixel is the next nearest neighbor to the central pixel as defined
by @mymath{r_{el}}.
+This process goes on until the profile is fully built up to the truncation
radius.
+This is done fairly efficiently using a breadth first parsing
strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}}
which is implemented through an ordered linked list.
-@table @code
-@item P
-This is the value given to @option{--prefix}.
-By default, its value is @code{./} (to store the links in the directory this
script was run in).
-See the description of @code{--prefix} for more.
-@item N
-This is the night-counter: starting from 1.
-@code{N} is just incremented by 1 for the next night, no matter how many
nights (without any dataset) there are between two subsequent observing nights
(its just an identifier for each night which you can easily map to different
calendar nights).
-@item I
-File counter in that night, sorted by time.
-@end table
+Using this approach, we build the profile by expanding outwards from the
center, ring by ring.
+Not a single extra pixel has to be checked (the calculation of
@mymath{r_{el}} above is not cheap in CPU terms).
+Another consequence of this strategy is that extending MakeProfiles to three
dimensions becomes very simple: only the neighbors of each pixel have to be
changed.
+Everything else after that (when the pixel index and its radial profile have
entered the linked list) is the same, no matter the number of dimensions we are
dealing with.
-@item -c
-@itemx --copy
-Make a copy of each input FITS file with the standard naming convention
described in @option{--link}.
-With this option, instead of making a link, a copy is made.
-This option cannot be used with @option{--link}.
-@item -p STR
-@itemx --prefix=STR
-Prefix to append before the night-identifier of each newly created link or
copy.
-This option is thus only relevant with the @option{--copy} or @option{--link}
options.
-See the description of @option{--link} for how its used.
-For example, with @option{--prefix=img-}, all the created file names in the
current directory will start with @code{img-}, making outputs like
@file{img-n1-1.fits} or @file{img-n3-42.fits}.
-@option{--prefix} can also be used to store the links/copies in another
directory relative to the directory this script is being run (it must already
exist).
-For example @code{--prefix=/path/to/processing/img-} will put all the
links/copies in the @file{/path/to/processing} directory, and the files (in
that directory) will all start with @file{img-}.
-@end table
+@node PSF, Stars, Defining an ellipse and ellipsoid, Modeling basics
+@subsubsection Point spread function
+@cindex PSF
+@cindex Point source
+@cindex Diffraction limited
+@cindex Point spread function
+@cindex Spread of a point source
+Assume we have a `point' source, or a source that is far smaller than the
maximum resolution (a pixel).
+When we take an image of it, it will `spread' over an area.
+To quantify that spread, we can define a `function'.
+This is how the point spread function or the PSF of an image is defined.
+This `spread' can have various causes, for example in ground based astronomy,
due to the atmosphere.
+In practice, we can never do better than the `spread' caused by the
diffraction of the lens aperture.
+Various other effects can also be quantified through a PSF.
+For example, the simple fact that we are sampling in a discrete space, namely
the pixels, also produces a very small `spread' in the image.
+@cindex Blur image
+@cindex Convolution
+@cindex Image blurring
+@cindex PSF image size
+Convolution is the mathematical process by which we can apply a `spread' to an
image, or in other words blur the image, see @ref{Convolution process}.
+The brightness of an object should remain unchanged after convolution, see
@ref{Brightness flux magnitude}.
+Therefore, it is important that the sum of all the pixels of the PSF be unity.
+The PSF image also has to have an odd number of pixels on its sides so one
pixel can be defined as the center.
+In MakeProfiles, the PSF can be set by the two methods explained below.
+@table @asis
+@item Parametric functions
+@cindex FWHM
+@cindex PSF width
+@cindex Parametric PSFs
+@cindex Full Width at Half Maximum
+A known mathematical function is used to make the PSF.
+In this case, only the parameters to define the functions are necessary and
MakeProfiles will make a PSF based on the given parameters for each function.
+In both cases, the center of the profile has to be exactly in the middle of
the central pixel of the PSF (which is automatically done by MakeProfiles).
+When talking about the PSF, usually, the full width at half maximum or FWHM is
used as a scale of the width of the PSF.
+@table @cite
+@item Gaussian
+@cindex Gaussian distribution
+In older papers, and to a lesser extent even today, some researchers use the
2D Gaussian function to approximate the PSF of ground-based images.
+In its most general form, a Gaussian function can be written as:
+@dispmath{f(r)=a \exp \left( -(r-\mu)^2 \over 2\sigma^2 \right)+d}
+Since the center of the profile is pre-defined, @mymath{\mu} and @mymath{d}
are constrained.
+@mymath{a} can also be found because the function has to be normalized.
+So the only important parameter for MakeProfiles is the @mymath{\sigma}.
+In the Gaussian function we have this relation between the FWHM and
@mymath{\sigma}:
+@cindex Gaussian FWHM
+@dispmath{\rm{FWHM}_g=2\sqrt{2\ln{2}}\sigma \approx 2.35482\sigma}
+@item Moffat
+@cindex Moffat function
+The Gaussian profile is much sharper than the images taken from stars on
photographic plates or CCDs.
+Therefore in 1969, Moffat proposed this functional form for the image of stars:
+@dispmath{f(r)=a \left[ 1+\left( r\over \alpha \right)^2 \right]^{-\beta}}
+@cindex Moffat beta
+Again, @mymath{a} is constrained by the normalization, therefore two
parameters define the shape of the Moffat function: @mymath{\alpha} and
@mymath{\beta}.
+The radial parameter is @mymath{\alpha} which is related to the FWHM by
+@cindex Moffat FWHM
+@dispmath{\rm{FWHM}_m=2\alpha\sqrt{2^{1/\beta}-1}}
+@cindex Compare Moffat and Gaussian
+@cindex PSF, Moffat compared Gaussian
+@noindent
+Comparing the PSF predicted from atmospheric turbulence theory with a Moffat
function, Trujillo et al.@footnote{
+Trujillo, I., J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The
effects of seeing on S@'ersic profiles - II. The Moffat PSF''. In: MNRAS 328,
pp. 977--985.}
+claim that @mymath{\beta} should be 4.765.
+They also show how the Moffat PSF contains the Gaussian PSF as a limiting case
when @mymath{\beta\to\infty}.
+@end table
+@item An input FITS image
+An input image file can also be specified to be used as a PSF.
+If the sum of its pixels is not equal to 1, the pixels will be multiplied by
a fraction so the sum becomes 1.
+@end table
+While the Gaussian is only dependent on the FWHM, the Moffat function is also
dependent on @mymath{\beta}.
+Comparing these two functions with a fixed FWHM gives the following results:
-@node SAO DS9 region files from table, , Sort FITS files by night, Data
analysis
-@section SAO DS9 region files from table
+@itemize
+@item
+Within the FWHM, the functions don't have significant differences.
+@item
+For a fixed FWHM, as @mymath{\beta} increases, the Moffat function becomes
sharper.
+@item
+The Gaussian function is much sharper than the Moffat functions, even when
@mymath{\beta} is large.
+@end itemize
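These comparisons are easy to verify numerically. The short Python sketch below (not part of Gnuastro; only an illustration of the FWHM relations given above) builds peak-normalized Gaussian and Moffat profiles with the same FWHM:

```python
import math

def gaussian(r, fwhm):
    """Gaussian profile (peak normalized to 1) with the given FWHM,
    using sigma = FWHM / (2*sqrt(2*ln 2))."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-r**2 / (2.0 * sigma**2))

def moffat(r, fwhm, beta):
    """Moffat profile (peak normalized to 1) with the given FWHM,
    using alpha = FWHM / (2*sqrt(2^(1/beta) - 1))."""
    alpha = fwhm / (2.0 * math.sqrt(2.0**(1.0 / beta) - 1.0))
    return (1.0 + (r / alpha)**2) ** (-beta)

# Both are at half maximum at r = FWHM/2 by construction, while the
# Moffat wings fall off much more slowly than the Gaussian's.
```

Evaluating both at a few radii beyond the FWHM shows the Moffat wings dominating the Gaussian for any reasonable @mymath{\beta}.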
-Once your desired catalog (containing the positions of some objects) is
created (for example with @ref{MakeCatalog}, @ref{Match}, or @ref{Table}) it
often happens that you want to see your selected objects on an image for a
feeling of the spatial properties of your objects.
-For example you want to see their positions relative to each other.
-In this section we describe a simple installed script that is provided within
Gnuastro for converting your given columns to an SAO DS9 region file to help in
this process.
-SAO DS9@footnote{@url{https://sites.google.com/cfa.harvard.edu/saoimageds9}}
is one of the most common FITS image vizualization tools in astronomy and is
free software.
-@menu
-* Invoking astscript-make-ds9-reg:: How to call astscript-make-ds9-reg
-@end menu
-@node Invoking astscript-make-ds9-reg, , SAO DS9 region files from table, SAO
DS9 region files from table
-@subsection Invoking astscript-make-ds9-reg
+@node Stars, Galaxies, PSF, Modeling basics
+@subsubsection Stars
-This installed script will read two positional columns within an input table
and generate an SAO DS9 region file to visualize the position of the given
objects over an image.
-For more on installed scripts please see (see @ref{Installed scripts}).
-This script can be used with the following general template:
+@cindex Modeling stars
+@cindex Stars, modeling
+In MakeProfiles, stars are generally considered to be point sources.
+This is usually the case for extragalactic studies, where nearby stars are
also in the field.
+Since a star is only a point source, we assume that it only fills one pixel
prior to convolution.
+In fact, exactly for this reason, in astronomical images the light profiles
of stars are one of the best methods to understand the shape of the PSF, and
a very large fraction of scientific research is performed by assuming the
shapes of stars to be the PSF of the image.
-@example
-## Use the RA and DEC columns of 'table.fits' for the region file.
-$ astscript-make-ds9-reg table.fits --column=RA,DEC \
- --output=ds9.reg
-## Select objects with a magnitude between 18 to 20, and generate the
-## region file directly (through a pipe), each region with radius of
-## 0.5 arcseconds.
-$ asttable table.fits --range=MAG,18:20 --column=RA,DEC \
- | astscript-make-ds9-reg --column=1,2 --radius=0.5
-## With the first command, select objects with a magnitude of 25 to 26
-## as red regions in 'bright.reg'. With the second command, select
-## objects with a magnitude between 28 to 29 as a green region and
-## show both.
-$ asttable cat.fits --range=MAG_F160W,25:26 -cRA,DEC \
- | ./astscript-make-ds9-reg -c1,2 --color=red -obright.reg
-$ asttable cat.fits --range=MAG_F160W,28:29 -cRA,DEC \
- | ./astscript-make-ds9-reg -c1,2 --color=green \
- --command="ds9 image.fits -regions bright.reg"
-@end example
-The input can either be passed as a named file, or from standard input (a
pipe).
-Only the @option{--column} option is mandatory (to specify the input table
columns): two colums from the input table must be specified, either by name
(recommended) or number.
-You can optionally also specify the region's radius, width and color of the
regions with the @option{--radius}, @option{--width} and @option{--color}
options, otherwise default values will be used for these (described under each
option).
-The created region file will be written into the file name given to
@option{--output}.
-When @option{--output} isn't called, the default name of @file{ds9.reg} will
be used (in the running directory).
-If the file exists before calling this script, it will be overwritten, unless
you pass the @option{--dontdelete} option.
-Optionally you can also use the @option{--command} option to give the full
command that should be run to execute SAO DS9 (see example above and
description below).
-In this mode, the created region file will be deleted once DS9 is closed
(unless you pass the @option{--dontdelete} option).
-A full description of each option is given below.
+@node Galaxies, Sampling from a function, Stars, Modeling basics
+@subsubsection Galaxies
-@table @option
+@cindex Galaxy profiles
+@cindex S@'ersic profile
+@cindex Profiles, galaxies
+@cindex Generalized de Vaucouleur profile
+Today, most practitioners agree that the flux of galaxies can be modeled
with one or a few generalized de Vaucouleurs (or S@'ersic) profiles.
-@item -h INT/STR
-@item --hdu INT/STR
-The HDU of the input table when a named FITS file is given as input.
-The HDU (or extension) can be either a name or number (counting from zero).
-For more on this option, see @ref{Input output options}.
+@dispmath{I(r) = I_e \exp \left ( -b_n \left[ \left( r \over r_e \right)^{1/n}
-1 \right] \right )}
-@item -c STR,STR
-@itemx --column=STR,STR
-Identifiers of the two positional columns to use in the DS9 region file from
the table.
-They can either be in WCS (RA and Dec) or image (pixel) coordiantes.
-The mode can be specified with the @option{--mode} option, described below.
+@cindex Brightness
+@cindex S@'ersic, J. L.
+@cindex S@'ersic index
+@cindex Effective radius
+@cindex Radius, effective
+@cindex de Vaucouleur profile
+@cindex G@'erard de Vaucouleurs
+G@'erard de Vaucouleurs (1918--1995) was the first to show, in 1948, that
this function resembles galaxy light profiles, with the only difference that
he held @mymath{n} fixed to a value of 4.
+Twenty years later in 1968, J. L. S@'ersic showed that @mymath{n} can have a
variety of values and does not necessarily need to be 4.
+This profile depends on the effective radius (@mymath{r_e}) which is defined
as the radius which contains half of the profile brightness (see @ref{Profile
magnitude}).
+@mymath{I_e} is the flux at the effective radius.
+The S@'ersic index @mymath{n} is used to define the concentration of the
profile within @mymath{r_e} and @mymath{b_n} is a constant dependent on
@mymath{n}.
+MacArthur et al.@footnote{MacArthur, L. A., S. Courteau, and J. A. Holtzman
(2003). ``Structure of Disk-dominated Galaxies. I. Bulge/Disk Parameters,
Simulations, and Secular Evolution''. In: ApJ 582, pp. 689--722.} show that
for @mymath{n>0.35}, @mymath{b_n} can be accurately approximated using this
equation:
-@item -m wcs|img
-@itemx --mode=wcs|org
-The coordinate system of the positional columns (can be either
@option{--mode=wcs} and @option{--mode=img}).
-In the WCS mode, the values within the columns are interpreted to be RA and
Dec.
-In the image mode, they are interpreted to be pixel X and Y positions.
-This option also affects the interpretation of the value given to
@option{--radius}.
-When this option isn't explicitly given, the columns are assumed to be in WCS
mode.
+@dispmath{b_n=2n - {1\over 3} + {4\over 405n} + {46\over 25515n^2} + {131\over
1148175n^3}-{2194697\over 30690717750n^4}}
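As an illustration, the S@'ersic profile and the @mymath{b_n} approximation above translate directly into a few lines of Python (the symbols follow the equations above; this is only a sketch, not Gnuastro's implementation):

```python
import math

def sersic_bn(n):
    """MacArthur et al. (2003) approximation for b_n, accurate for n > 0.35."""
    return (2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
            + 46.0 / (25515.0 * n**2) + 131.0 / (1148175.0 * n**3)
            - 2194697.0 / (30690717750.0 * n**4))

def sersic(r, re, n, Ie):
    """Sersic profile value at radius r, with effective radius re,
    Sersic index n and flux Ie at the effective radius."""
    bn = sersic_bn(n)
    return Ie * math.exp(-bn * ((r / re)**(1.0 / n) - 1.0))

# By definition, the profile equals Ie at r = re for any index n.
```

For the classic de Vaucouleurs case (@mymath{n=4}), this approximation gives the well-known value @mymath{b_4\approx7.669}.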
-@item -C STR
-@itemx --color=STR
-The color to use for created regions.
-These will be directly interpreted by SAO DS9 when it wants to open the region
file so it must be recognizable by SAO DS9.
-As of SAO DS9 8.2, the recognized color names are @code{black}, @code{white},
@code{red}, @code{green}, @code{blue}, @code{cyan}, @code{magenta} and
@code{yellow}.
-The default color (when this option is not called) is @code{green}
-@item -w INT
-@itemx --width=INT
-The line width of the regions.
-These will be directly interpreted by SAO DS9 when it wants to open the region
file so it must be recognizable by SAO DS9.
-The default value is @code{1}.
-@item -r FLT
-@itemx --radius=FLT
-The radius of all the regions.
-In WCS mode, the radius is assumed to be in arc-seconds, in image mode, it is
in pixel units.
-If this option is not explicitly given, in WCS mode the default radius is 1
arc-seconds and in image mode it is 3 pixels.
-@item --dontdelete
-If the output file name exists, abort the program and don't over-write the
contents of the file.
-This option is thus good if you want to avoid accidentally writing over an
important file.
-Also, don't delete the created region file when @option{--command} is given
(by default, when @option{--command} is given, the created region file will be
deleted after SAO DS9 closes).
-@item -o STR
-@itemx --output=STR
-Write the created SAO DS9 region file into the name given to this option.
-If not explicity given on the command-line, a default name of @file{ds9.reg}
will be used.
-If the file already exists, it will be over-written, you can avoid the
deletion (or over-writing) of an existing file with the @option{--dontdelete}.
+@node Sampling from a function, Oversampling, Galaxies, Modeling basics
+@subsubsection Sampling from a function
-@item --command="STR"
-After creating the region file, run the string given to this option as a
command-line command.
-The SAO DS9 region command will be appended to the end of the given command.
-Because the command will mostly likely contain white-space characters it is
recommended to put the given string in double quotations.
+@cindex Sampling
+A pixel is the finest level of accuracy with which we can gather data in one
image; in signal processing, this discretization is known as sampling.
+However, the mathematical profiles which describe our models have infinite
accuracy.
+Over a large fraction of the area of astrophysically interesting profiles (for
example galaxies or PSFs), the variation of the profile over the area of one
pixel is not too significant.
+In such cases, the elliptical radius (@mymath{r_{el}}) of the center of the
pixel can be assigned as the final value of the pixel (see @ref{Defining an
ellipse and ellipsoid}).
-For example, let's assume @option{--command="ds9 image.fits -zscale"}.
-After making the region file (assuming it is called @file{ds9.reg}), the
following command will be executed:
+@cindex Integration over pixel
+@cindex Gradient over pixel area
+@cindex Function gradient over pixel area
+As you approach their center, some galaxies become very sharp (their value
significantly changes over one pixel's area).
+This sharpness increases with smaller effective radius and larger S@'ersic
index, rendering the central pixel value extremely inaccurate.
+The first method that comes to mind for solving this problem is integration.
+The functional form of the profile can be integrated over the pixel area in a
2D integration process.
+However, unfortunately numerical integration techniques also have their
limitations and when such sharp profiles are needed they can become extremely
inaccurate.
-@example
-ds9 image.fits -zscale -regions ds9.reg
-@end example
+@cindex Monte carlo integration
+The most accurate method of sampling a continuous profile on a discrete space
is by choosing a large number of random points within the boundaries of the
pixel and taking their average value (or Monte Carlo integration).
+This is also, generally speaking, what happens in practice with the photons on
the pixel.
+The number of random points can be set with @option{--numrandom}.
-You can customize all aspects of SAO DS9 with its command-line options,
therefore the value of this option can be as long and complicated as you like.
-For example if you also want the image to fit into the window, this option
will be: @command{--command="ds9 image.fits -zscale -zoom to fit"}.
-You can see the SAO DS9 command-line descriptions by clicking on the ``Help''
menu and selecting ``Reference Manual''.
-In the opened window, click on ``Command Line Options''.
-@end table
+Unfortunately, repeating this Monte Carlo process would be extremely time and
CPU consuming if it is to be applied to every pixel.
+In order not to lose too much accuracy, in MakeProfiles the profile is built
using both methods explained below.
+The building of the profile begins from its central pixel and continues
(radially) outwards.
+Monte Carlo integration is first applied (which yields @mymath{F_r}), then the
central pixel value (@mymath{F_c}) is calculated on the same pixel.
+If the fractional difference (@mymath{|F_r-F_c|/F_r}) is lower than a given
tolerance level (specified with @option{--tolerance}) MakeProfiles will stop
using Monte Carlo integration and only use the central pixel value.
+@cindex Inside-out construction
+The ordering of the pixels in this inside-out construction is based on
@mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see @ref{Defining
an ellipse and ellipsoid}.
+When the axis ratios are large (near one) this is fine.
+But when they are small and the object is highly elliptical, it might seem
more reasonable to follow @mymath{r_{el}} not @mymath{r}.
+The problem is that the gradient is stronger in pixels with smaller
@mymath{r} (and larger @mymath{r_{el}}) than in those with smaller
@mymath{r_{el}} (and larger @mymath{r}).
+In other words, the gradient is strongest along the minor axis.
+So if the next pixel is chosen based on @mymath{r_{el}}, the tolerance level
will be reached sooner and lots of pixels with large fractional differences
will be missed.
+Monte Carlo integration uses randomly positioned points.
+Thus, every time you run it, by default, you will get a different distribution
of points to sample within the pixel.
+In the case of large profiles, this will result in a slight difference of the
pixels which use Monte Carlo integration each time MakeProfiles is run.
+To have a deterministic result, you have to fix the properties of the random
number generator that is used to build the random distribution.
+This can be done by setting the @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED}
environment variables and calling MakeProfiles with the @option{--envseed}
option.
+To learn more about the process of generating random numbers, see
@ref{Generating random numbers}.
+@cindex Seed, Random number generator
+@cindex Random number generator, Seed
+The seed values are fixed for every profile: with @option{--envseed}, all the
profiles have the same seed and without it, each will get a different seed
using the system clock (which is accurate to within one microsecond).
+The same seed will be used to generate a random number for all the sub-pixel
positions of all the profiles.
+So in the former, the sub-pixel points checked for all the pixels undergoing
Monte Carlo integration in all profiles will be identical.
+In other words, the sub-pixel points in the first (closest to the center)
pixel of all the profiles will be identical with each other.
+All the second pixels studied for all the profiles will also receive an
identical (different from the first pixel) set of sub-pixel points and so on.
+As long as the number of random points used is large enough or the profiles
are not identical, this should not cause any systematic bias.
+@node Oversampling, , Sampling from a function, Modeling basics
+@subsubsection Oversampling
+@cindex Oversampling
+The steps explained in @ref{Sampling from a function} do give an accurate
representation of a profile prior to convolution.
+However, in an actual observation, the image is first convolved with or
blurred by the atmospheric and instrument PSF in a continuous space and then it
is sampled on the discrete pixels of the camera.
+@cindex PSF over-sample
+In order to more accurately simulate this process, the unconvolved image and
the PSF are created on a finer pixel grid.
+In other words, the output image is a certain odd-integer multiple of the
desired size; we can call this `oversampling'.
+The user can specify this multiple as a command-line option.
+The reason this has to be an odd number is that the PSF has to be centered on
the center of its image.
+An image with an even number of pixels on each side does not have a central
pixel.
+The image can then be convolved with the PSF (which should also be oversampled
on the same scale).
+Finally, the image can be sub-sampled to get to the initially desired pixel
size of the output image.
+After this, mock noise can be added as explained in the next section.
+Noise is only added at this final stage because, unlike the PSF, it occurs
independently in each output pixel, not in a continuous space like all the
prior steps.
+@node If convolving afterwards, Profile magnitude, Modeling basics,
MakeProfiles
+@subsection If convolving afterwards
+In case you want to convolve the image later with a given point spread
function, make sure to use a larger image size.
+After convolution, the profiles become larger and a profile that is normally
completely outside of the image might fall within it.
+On one axis, if you want your final (convolved) image to be @mymath{m} pixels
and your PSF is @mymath{2n+1} pixels wide, then when calling MakeProfiles, set
the axis size to @mymath{m+2n}, not @mymath{m}.
+You also have to shift all the pixel positions of the profile centers on
that axis by @mymath{n} pixels in the positive direction.
+After convolution, you can crop the outer @mymath{n} pixels with the section
crop box specification of Crop: @option{--section=n:*-n,n:*-n} assuming your
PSF is a square, see @ref{Crop section syntax}.
+This will also remove all discrete Fourier transform artifacts (blurred sides)
from the final image.
+To facilitate this shift, MakeProfiles has the options @option{--xshift},
@option{--yshift} and @option{--prepforconv}, see @ref{Invoking astmkprof}.
-@node Modeling and fittings, High-level calculations, Data analysis, Top
-@chapter Modeling and fitting
-@cindex Fitting
-@cindex Modeling
-In order to fully understand observations after initial analysis on the image,
it is very important to compare them with the existing models to be able to
further understand both the models and the data.
-The tools in this chapter create model galaxies and will provide 2D fittings
to be able to understand the detections.
-@menu
-* MakeProfiles:: Making mock galaxies and stars.
-* MakeNoise:: Make (add) noise to an image.
-@end menu
+@node Profile magnitude, Invoking astmkprof, If convolving afterwards,
MakeProfiles
+@subsection Profile magnitude
-@node MakeProfiles, MakeNoise, Modeling and fittings, Modeling and fittings
-@section MakeProfiles
+@cindex Brightness
+@cindex Truncation radius
+@cindex Sum for total flux
+To find the profile brightness or its magnitude (see @ref{Brightness flux
magnitude}), it is customary to use the 2D integration of the flux to
infinity.
+However, in MakeProfiles we do not follow this idealistic approach and apply a
more realistic method to find the total brightness or magnitude: the sum of all
the pixels belonging to a profile within its predefined truncation radius.
+Note that if the truncation radius is not large enough, this can be
significantly different from the total integrated light to infinity.
-@cindex Checking detection algorithms
-@pindex @r{MakeProfiles (}astmkprof@r{)}
-MakeProfiles will create mock astronomical profiles from a catalog, either
individually or together in one output image.
-In data analysis, making a mock image can act like a calibration tool, through
which you can test how successfully your detection technique is able to detect
a known set of objects.
-There are commonly two aspects to detecting: the detection of the fainter
parts of bright objects (which in the case of galaxies fade into the noise very
slowly) or the complete detection of an over-all faint object.
-Making mock galaxies is the most accurate (and idealistic) way these two
aspects of a detection algorithm can be tested.
-You also need mock profiles in fitting known functional profiles with
observations.
+@cindex Integration to infinity
+An integration to infinity is not a realistic condition because no galaxy
extends indefinitely (this is especially important for high S@'ersic index
profiles).
+Pixelation can also cause a significant difference between the actual total
pixel sum value of the profile and that of integration to infinity,
especially in small and high S@'ersic index profiles.
+To be safe, you can specify a large enough truncation radius for such compact
high S@'ersic index profiles.
-MakeProfiles was initially built for extra galactic studies, so currently the
only astronomical objects it can produce are stars and galaxies.
-We welcome the simulation of any other astronomical object.
-The general outline of the steps that MakeProfiles takes are the following:
+If oversampling is used, the brightness is calculated on the over-sampled
image (see @ref{Oversampling}), which is much more accurate.
+The profile is first built in an array that completely bounds it, with a
normalization constant of unity (see @ref{Galaxies}).
+Taking @mymath{B} to be the desired brightness and @mymath{S} to be the sum of
the pixels in the created profile, every pixel is then multiplied by
@mymath{B/S} so the sum is exactly @mymath{B}.
-@enumerate
+If the @option{--individual} option is called, this same array is written to a
FITS file.
+If not, only the overlapping pixels of this array and the output image are
kept and added to the output array.
-@item
-Build the full profile out to its truncation radius in a possibly over-sampled
array.
-@item
-Multiply all the elements by a fixed constant so its total magnitude equals
the desired total magnitude.
-@item
-If @option{--individual} is called, save the array for each profile to a FITS
file.
-@item
-If @option{--nomerged} is not called, add the overlapping pixels of all the
created profiles to the output image and abort.
-@end enumerate
-Using input values, MakeProfiles adds the World Coordinate System (WCS)
headers of the FITS standard to all its outputs (except PSF images!).
-For a simple test on a set of mock galaxies in one image, there is no need for
the third step or the WCS information.
+@node Invoking astmkprof, , Profile magnitude, MakeProfiles
+@subsection Invoking MakeProfiles
-@cindex Transform image
-@cindex Lensing simulations
-@cindex Image transformations
-However in complicated simulations like weak lensing simulations, where each
galaxy undergoes various types of individual transformations based on their
position, those transformations can be applied to the different individual
images with other programs.
-After all the transformations are applied, using the WCS information in each
individual profile image, they can be merged into one output image for
convolution and adding noise.
+MakeProfiles will make any number of profiles specified in a catalog either
individually or in one image.
+The executable name is @file{astmkprof}, with the following general template:
-@menu
-* Modeling basics:: Astronomical modeling basics.
-* If convolving afterwards:: Considerations for convolving later.
-* Brightness flux magnitude:: About these measures of energy.
-* Profile magnitude:: Definition of total profile magnitude.
-* Invoking astmkprof:: Inputs and Options for MakeProfiles.
-@end menu
+@example
+$ astmkprof [OPTION ...] [Catalog]
+@end example
+@noindent
+One line examples:
+@example
+## Make an image with profiles in catalog.txt (with default size):
+$ astmkprof catalog.txt
-@node Modeling basics, If convolving afterwards, MakeProfiles, MakeProfiles
-@subsection Modeling basics
+## Make the profiles in catalog.txt over image.fits:
+$ astmkprof --background=image.fits catalog.txt
-In the subsections below, first a review of some very basic information and
concepts behind modeling a real astronomical image is given.
-You can skip this subsection if you are already sufficiently familiar with
these concepts.
+## Make a Moffat PSF with FWHM 3pix, beta=2.8, truncation=5
+$ astmkprof --kernel=moffat,3,2.8,5 --oversample=1
-@menu
-* Defining an ellipse and ellipsoid:: Definition of these important shapes.
-* PSF:: Radial profiles for the PSF.
-* Stars:: Making mock star profiles.
-* Galaxies:: Radial profiles for galaxies.
-* Sampling from a function:: Sample a function on a pixelated canvas.
-* Oversampling:: Oversampling the model.
-@end menu
+## Make profiles in catalog, using RA and Dec in the given column:
+$ astmkprof --ccol=RA_CENTER --ccol=DEC_CENTER --mode=wcs catalog.txt
-@node Defining an ellipse and ellipsoid, PSF, Modeling basics, Modeling basics
-@subsubsection Defining an ellipse and ellipsoid
+## Make a 1500x1500 merged image (a 500x500 image oversampled by 3),
+## along with an individual image for each profile in the catalog:
+$ astmkprof --individual --oversample 3 --mergedsize=500,500 cat.txt
+@end example
-@cindex Ellipse
-@cindex Axis ratio
-@cindex Position angle
-The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on an
ellipse.
-Therefore, in this section we'll start defining an ellipse on a pixelated 2D
surface.
-Labeling the major axis of an ellipse @mymath{a}, and its minor axis with
@mymath{b}, the @emph{axis ratio} is defined as: @mymath{q\equiv b/a}.
-The major axis of an ellipse can be aligned in any direction, therefore the
angle of the major axis with respect to the horizontal axis of the image is
defined to be the @emph{position angle} of the ellipse and in this book, we
show it with @mymath{\theta}.
+@noindent
+The parameters of the mock profiles can either be given through a catalog
(which stores the parameters of many mock profiles, see @ref{MakeProfiles
catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output
dataset}).
+The catalog can be in FITS ASCII, FITS binary, or plain text format (see
@ref{Tables}).
+A plain text catalog can also be provided using the Standard input (see
@ref{Standard input}).
+The columns related to each parameter can be determined either by number, or
by match/search criteria using the column names, units, or comments, with the
options ending in @option{col}, see below.
-@cindex Radial profile on ellipse
-Our aim is to put a radial profile of any functional form @mymath{f(r)} over
an ellipse.
-Hence we need to associate a radius/distance to every point in space.
-Let's define the radial distance @mymath{r_{el}} as the distance on the major
axis to the center of an ellipse which is located at @mymath{i_c} and
@mymath{j_c} (in other words @mymath{r_{el}\equiv{a}}).
-We want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the
image coordinate system) from the center of the ellipse with axis ratio
@mymath{q} and position angle @mymath{\theta}.
-First the coordinate system is rotated@footnote{Do not confuse the signs of
@mymath{sin} with the rotation matrix defined in @ref{Warping basics}.
-In that equation, the point is rotated, here the coordinates are rotated and
the point is fixed.} by @mymath{\theta} to get the new rotated coordinates of
that point @mymath{(i_r,j_r)}:
+Without any file given to the @option{--background} option, MakeProfiles will
make a zero-valued image and build the profiles on that (its size and main WCS
parameters can also be defined through the options described in
@ref{MakeProfiles output dataset}).
+Besides the main/merged image containing all the profiles in the catalog, it
is also possible to build individual images for each profile (only enclosing
one full profile to its truncation radius) with the @option{--individual}
option.
-@dispmath{i_r(i,j)=+(i_c-i)\cos\theta+(j_c-j)\sin\theta}
-@dispmath{j_r(i,j)=-(i_c-i)\sin\theta+(j_c-j)\cos\theta}
+If an image is given to the @option{--background} option, its pixels are used as the background values: the flux of each profile pixel is added to the corresponding pixel of that background image.
+You can disable this with the @option{--clearcanvas} option (which will initialize the background to zero-valued pixels and build profiles over that).
+With the @option{--background} option, the values given to all options relating to the ``canvas'' (output size and WCS) will be ignored if specified, for example @option{--oversample}, @option{--mergedsize}, and @option{--prepforconv}.
-@cindex Elliptical distance
-@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1}
and that we defined @mymath{r_{el}\equiv{a}}.
-Hence, multiplying all elements of the ellipse definition with
@mymath{r_{el}^2} we get the elliptical distance at this point point located:
@mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}.
-To place the radial profiles explained below over an ellipse,
@mymath{f(r_{el})} is calculated based on the functional radial profile desired.
+The sections below discuss the options specific to MakeProfiles based on context: the input catalog settings (which can have many rows for different profiles) are discussed in @ref{MakeProfiles catalog}; in @ref{MakeProfiles profile settings}, we discuss the general profile settings (that are the same for all the profiles in the catalog).
+Finally @ref{MakeProfiles output dataset} and @ref{MakeProfiles log file} discuss the outputs of MakeProfiles and how you can configure them.
+Besides these, MakeProfiles also supports all the common Gnuastro program options that are discussed in @ref{Common options}, so please flip through them as well for a more comfortable usage.
-@cindex Ellipsoid
-@cindex Euler angles
-An ellipse in 3D, or an @url{https://en.wikipedia.org/wiki/Ellipsoid,
ellipsoid}, can be defined following similar principles as before.
-Labeling the major (largest) axis length as @mymath{a}, the second and third
(in a right-handed coordinate system) axis lengths can be labeled as @mymath{b}
and @mymath{c}.
-Hence we have two axis ratios: @mymath{q_1\equiv{b/a}} and
@mymath{q_2\equiv{c/a}}.
-The orientation of the ellipsoid can be defined from the orientation of its
major axis.
-There are many ways to define 3D orientation and order matters.
-So to be clear, here we use the ZXZ (or @mymath{Z_1X_2Z_3}) proper
@url{https://en.wikipedia.org/wiki/Euler_angles, Euler angles} to define the 3D
orientation.
-In short, when a point is rotated in this order, we first rotate it around the
Z axis (third axis) by @mymath{\alpha}, then about the (rotated) X axis by
@mymath{\beta} and finally about the (rotated) Z axis by @mymath{\gamma}.
+When building 3D profiles, there are more degrees of freedom.
+Hence, more columns are necessary and all the values related to dimensions
(for example size of dataset in each dimension and the WCS properties) must
also have 3 values.
+To allow having an independent set of default values for creating 3D profiles,
MakeProfiles also installs a @file{astmkprof-3d.conf} configuration file (see
@ref{Configuration files}).
+You can use this for default 3D profile values.
+For example, if you installed Gnuastro with the prefix @file{/usr/local} (the
default location, see @ref{Installation directory}), you can benefit from this
configuration file by running MakeProfiles like the example below.
+As with all configuration files, if you want to customize a given option, call
it before the configuration file.
-Following the discussion in @ref{Merging multiple warpings}, we can define the
full rotation with the following matrix multiplication.
-However, here we are rotating the coordinates, not the point.
-Therefore, both the rotation angles and rotation order are reversed.
-We are also not using homogeneous coordinates (see @ref{Warping basics}) since
we aren't concerned with translation in this context:
+@example
+$ astmkprof --config=/usr/local/etc/astmkprof-3d.conf catalog.txt
+@end example
-@dispmath{\left[\matrix{i_r\cr j_r\cr k_r}\right] =
- \left[\matrix{cos\gamma&sin\gamma&0\cr -sin\gamma&cos\gamma&0\cr
0&0&1}\right]
- \left[\matrix{1&0&0\cr 0&cos\beta&sin\beta\cr
0&-sin\beta&cos\beta }\right]
- \left[\matrix{cos\alpha&sin\alpha&0\cr -sin\alpha&cos\alpha&0\cr
0&0&1}\right]
- \left[\matrix{i_c-i\cr j_c-j\cr k_c-k}\right] }
+@cindex Shell alias
+@cindex Alias, shell
+@cindex Shell startup
+@cindex Startup, shell
+To further simplify the process, you can define a shell alias in any startup
file (for example @file{~/.bashrc}, see @ref{Installation directory}).
+Assuming that you installed Gnuastro in @file{/usr/local}, you can add this
line to the startup file (you may put it all in one line, it is broken into two
lines here for fitting within page limits).
+
+@example
+alias astmkprof-3d="astmkprof --config=/usr/local/etc/astmkprof-3d.conf"
+@end example
@noindent
-Recall that an ellipsoid can be characterized with
-@mymath{(i_r/a)^2+(j_r/b)^2+(k_r/c)^2=1}, so similar to before
-(@mymath{r_{el}\equiv{a}}), we can find the ellipsoidal radius at pixel
-@mymath{(i,j,k)} as: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q_1)^2+(k_r/q_2)^2}}.
+Using this alias, you can call MakeProfiles with the name
@command{astmkprof-3d} (instead of @command{astmkprof}).
+It will automatically load the 3D specific configuration file first, and then
parse any other arguments, options or configuration files.
+You can change the default values in this 3D configuration file by calling
them on the command-line as you do with @command{astmkprof}@footnote{Recall
that for single-invocation options, the last command-line invocation takes
precedence over all previous invocations (including those in the 3D
configuration file).
+See the description of @option{--config} in @ref{Operating mode options}.}.
-@cindex Breadth first search
-@cindex Inside-out construction
-@cindex Making profiles pixel by pixel
-@cindex Pixel by pixel making of profiles
-MakeProfiles builds the profile starting from the nearest element (pixel in an
image) in the dataset to the profile center.
-The profile value is calculated for that central pixel using monte carlo
integration, see @ref{Sampling from a function}.
-The next pixel is the next nearest neighbor to the central pixel as defined by
@mymath{r_{el}}.
-This process goes on until the profile is fully built upto the truncation
radius.
-This is done fairly efficiently using a breadth first parsing
strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}}
which is implemented through an ordered linked list.
+Please see @ref{Sufi simulates a detection} for a very complete tutorial explaining how one could use MakeProfiles in conjunction with Gnuastro's other programs to make a complete simulated image of a mock galaxy.
-Using this approach, we build the profile by expanding the circumference.
-Not one more extra pixel has to be checked (the calculation of @mymath{r_{el}}
from above is not cheap in CPU terms).
-Another consequence of this strategy is that extending MakeProfiles to three
dimensions becomes very simple: only the neighbors of each pixel have to be
changed.
-Everything else after that (when the pixel index and its radial profile have
entered the linked list) is the same, no matter the number of dimensions we are
dealing with.
+@menu
+* MakeProfiles catalog:: Required catalog properties.
+* MakeProfiles profile settings:: Configuration parameters for all profiles.
+* MakeProfiles output dataset:: The canvas/dataset to build profiles over.
+* MakeProfiles log file:: A description of the optional log file.
+@end menu
+@node MakeProfiles catalog, MakeProfiles profile settings, Invoking astmkprof,
Invoking astmkprof
+@subsubsection MakeProfiles catalog
+The catalog containing information about each profile can be in the FITS
ASCII, FITS binary, or plain text formats (see @ref{Tables}).
+The latter can also be provided using standard input (see @ref{Standard
input}).
+Its columns can be ordered in any desired manner.
+You can specify which columns belong to which parameters using the set of
options discussed below.
+For example, through the @option{--rcol} and @option{--tcol} options, you can specify the columns that contain each profile's radial parameter and truncation radius respectively.
+See @ref{Selecting table columns} for a thorough discussion on the values to
these options.
+The value for the profile center in the catalog (the @option{--ccol} option)
can be a floating point number so the profile center can be on any sub-pixel
position.
+Note that pixel positions in the FITS standard start from 1 and an integer is
the pixel center.
+So a 2D image actually starts from the position (0.5, 0.5), which is the
bottom-left corner of the first pixel.
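As an editorial aside, the convention above (1-based FITS coordinates, with an integer value marking the pixel center) can be sketched with a couple of hypothetical helper functions:

```python
# Hypothetical helpers (not part of MakeProfiles): relate 0-based array
# indices to 1-based FITS pixel coordinates, where an integer coordinate
# marks the *center* of a pixel, so pixel 1 covers the interval [0.5, 1.5].

def index_to_fits(i):
    """0-based array index of a pixel -> FITS coordinate of its center."""
    return i + 1.0

def fits_pixel_range(p):
    """Integer FITS coordinate of a pixel -> the interval it covers."""
    return (p - 0.5, p + 0.5)

print(index_to_fits(0))       # 1.0: the first pixel's center
print(fits_pixel_range(1))    # (0.5, 1.5): the image starts at 0.5
```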
+When a @option{--background} image with WCS information is provided or you
specify the WCS parameters with the respective options, you may also use RA and
Dec to identify the center of each profile (see the @option{--mode} option
below).
+In MakeProfiles, profile centers do not have to be within (overlap with) the final image.
+Even if only one pixel of the profile within its truncation radius overlaps with the final image size, the profile is built and included in the final image.
+Profiles that are completely out of the image will not be created (unless you
explicitly ask for it with the @option{--individual} option).
+You can use the output log file (created with @option{--log}) to see which profiles were within the image, see @ref{Common options}.
+If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and the
profiles are to be built in one image (when @option{--individual} is not used),
it is assumed they are the PSF(s) you want to convolve your created image with.
+So by default, they will not be built in the output image but as separate
files.
+The sum of pixels of these separate files will also be set to unity (1) so you
are ready to convolve, see @ref{Convolution process}.
+As a summary, the position and magnitude of the PSF profile will be ignored.
+This behavior can be disabled with the @option{--psfinimg} option.
+If you want to create all the profiles separately (with @option{--individual})
and you want the sum of the PSF profile pixels to be unity, you have to set
their magnitudes in the catalog to the zero point magnitude and be sure that
the central positions of the profiles don't have any fractional part (the PSF
center has to be in the center of the pixel).
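To see why this works, recall that a profile's pixel sum @mymath{B} and its magnitude @mymath{m} relate through the zero point as @mymath{m=-2.5\log_{10}(B)+\rm{zeropoint}}, so a magnitude equal to the zero point gives a sum of unity. A quick illustrative check (not MakeProfiles code):

```python
# Pixel sum implied by a catalog magnitude, from the standard relation
# m = -2.5*log10(B) + zeropoint  =>  B = 10**((zeropoint - m)/2.5).

def pixel_sum(magnitude, zeropoint):
    return 10 ** ((zeropoint - magnitude) / 2.5)

# Magnitude equal to the zero point -> the profile pixels sum to unity:
print(pixel_sum(magnitude=25.0, zeropoint=25.0))   # 1.0
```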
-@node PSF, Stars, Defining an ellipse and ellipsoid, Modeling basics
-@subsubsection Point spread function
+The list of options directly related to the input catalog columns is shown
below.
-@cindex PSF
-@cindex Point source
-@cindex Diffraction limited
-@cindex Point spread function
-@cindex Spread of a point source
-Assume we have a `point' source, or a source that is far smaller than the
maximum resolution (a pixel).
-When we take an image of it, it will `spread' over an area.
-To quantify that spread, we can define a `function'.
-This is how the point spread function or the PSF of an image is defined.
-This `spread' can have various causes, for example in ground based astronomy,
due to the atmosphere.
-In practice we can never surpass the `spread' due to the diffraction of the
lens aperture.
-Various other effects can also be quantified through a PSF.
-For example, the simple fact that we are sampling in a discrete space, namely
the pixels, also produces a very small `spread' in the image.
+@table @option
-@cindex Blur image
-@cindex Convolution
-@cindex Image blurring
-@cindex PSF image size
-Convolution is the mathematical process by which we can apply a `spread' to an
image, or in other words blur the image, see @ref{Convolution process}.
-The Brightness of an object should remain unchanged after convolution, see
@ref{Brightness flux magnitude}.
-Therefore, it is important that the sum of all the pixels of the PSF be unity.
-The PSF image also has to have an odd number of pixels on its sides so one
pixel can be defined as the center.
-In MakeProfiles, the PSF can be set by the two methods explained below.
+@item --ccol=STR/INT
+Center coordinate column for each dimension.
+This option must be called once for each dimension of the dataset (for example, two times when building profiles in a 2D image).
+For example @option{--ccol=RA} and @option{--ccol=DEC} (along with
@option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns
named @option{RA} and @option{DEC} for the Right Ascension and Declination of
the profile centers.
-@table @asis
+@item --fcol=INT/STR
+The functional form of the profile with one of the values below depending on
the desired profile.
+The column can contain either the numeric codes (for example `@code{1}') or
string characters (for example `@code{sersic}').
+The numeric codes are easier to use in scripts which generate catalogs with
hundreds or thousands of profiles.
-@item Parametric functions
-@cindex FWHM
-@cindex PSF width
-@cindex Parametric PSFs
-@cindex Full Width at Half Maximum
-A known mathematical function is used to make the PSF.
-In this case, only the parameters to define the functions are necessary and
MakeProfiles will make a PSF based on the given parameters for each function.
-In both cases, the center of the profile has to be exactly in the middle of
the central pixel of the PSF (which is automatically done by MakeProfiles).
-When talking about the PSF, usually, the full width at half maximum or FWHM is
used as a scale of the width of the PSF.
+The string format can be easier when the catalog is to be written/checked by
hand/eye before running MakeProfiles.
+It is much more readable and provides a level of documentation.
+All Gnuastro's recognized table formats (see @ref{Recognized table formats})
accept string type columns.
+To have string columns in a plain text table/catalog, see @ref{Gnuastro text
table format}.
-@table @cite
-@item Gaussian
-@cindex Gaussian distribution
-In the older papers, and to a lesser extent even today, some researchers use
the 2D Gaussian function to approximate the PSF of ground based images.
-In its most general form, a Gaussian function can be written as:
+@itemize
+@item
+S@'ersic profile with `@code{sersic}' or `@code{1}'.
-@dispmath{f(r)=a \exp \left( -(x-\mu)^2 \over 2\sigma^2 \right)+d}
+@item
+Moffat profile with `@code{moffat}' or `@code{2}'.
-Since the center of the profile is pre-defined, @mymath{\mu} and @mymath{d}
are constrained.
-@mymath{a} can also be found because the function has to be normalized.
-So the only important parameter for MakeProfiles is the @mymath{\sigma}.
-In the Gaussian function we have this relation between the FWHM and
@mymath{\sigma}:
+@item
+Gaussian profile with `@code{gaussian}' or `@code{3}'.
-@cindex Gaussian FWHM
-@dispmath{\rm{FWHM}_g=2\sqrt{2\ln{2}}\sigma \approx 2.35482\sigma}
+@item
+Point source with `@code{point}' or `@code{4}'.
-@item Moffat
-@cindex Moffat function
-The Gaussian profile is much sharper than the images taken from stars on
photographic plates or CCDs.
-Therefore in 1969, Moffat proposed this functional form for the image of stars:
+@item
+Flat profile with `@code{flat}' or `@code{5}'.
-@dispmath{f(r)=a \left[ 1+\left( r\over \alpha \right)^2 \right]^{-\beta}}
+@item
+Circumference profile with `@code{circum}' or `@code{6}'.
+A fixed value will be used for all pixels less than or equal to the truncation
radius (@mymath{r_t}) and greater than @mymath{r_t-w} (@mymath{w} is the value
to the @option{--circumwidth}).
-@cindex Moffat beta
-Again, @mymath{a} is constrained by the normalization, therefore two
parameters define the shape of the Moffat function: @mymath{\alpha} and
@mymath{\beta}.
-The radial parameter is @mymath{\alpha} which is related to the FWHM by
-
-@cindex Moffat FWHM
-@dispmath{\rm{FWHM}_m=2\alpha\sqrt{2^{1/\beta}-1}}
+@item
+Radial distance profile with `@code{distance}' or `@code{7}'.
+At the lowest level, each pixel only has an elliptical radial distance given
the profile's shape and orientation (see @ref{Defining an ellipse and
ellipsoid}).
+When this profile is chosen, the pixel's elliptical radial distance from the
profile center is written as its value.
+For this profile, the value in the magnitude column (@option{--mcol}) will be
ignored.
-@cindex Compare Moffat and Gaussian
-@cindex PSF, Moffat compared Gaussian
-@noindent
-Comparing with the PSF predicted from atmospheric turbulence theory with a
Moffat function, Trujillo et al.@footnote{
-Trujillo, I., J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The
effects of seeing on S@'ersic profiles - II. The Moffat PSF''. In: MNRAS 328,
pp. 977---985.}
-claim that @mymath{\beta} should be 4.765.
-They also show how the Moffat PSF contains the Gaussian PSF as a limiting case
when @mymath{\beta\to\infty}.
+You can use this for checks or as a first approximation to define your own
higher-level radial function.
+In the latter case, just note that the central values are going to be
incorrect (see @ref{Sampling from a function}).
-@end table
+@item
+Custom profile with `@code{custom}' or `@code{8}'.
+The values to use for each radial interval should be in the table given to
@option{--customtable}. For more, see @ref{MakeProfiles profile settings}.
+@end itemize
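To make the `distance' and `circum' items above concrete, here is a small sketch of the elliptical radial distance and of the circumference condition (an editorial illustration with hypothetical names; see @ref{Defining an ellipse and ellipsoid} for the actual definitions):

```python
import math

def r_el(i, j, ic, jc, q, theta_deg):
    """Elliptical distance of point (i,j) from the center (ic,jc) of an
    ellipse with axis ratio q and position angle theta (in degrees):
    rotate the coordinates, then r_el = sqrt(i_r^2 + (j_r/q)^2)."""
    t = math.radians(theta_deg)
    i_r = (ic - i) * math.cos(t) + (jc - j) * math.sin(t)
    j_r = -(ic - i) * math.sin(t) + (jc - j) * math.cos(t)
    return math.sqrt(i_r ** 2 + (j_r / q) ** 2)

def in_circum(r, r_t, w):
    """The 'circum' profile keeps pixels with r_t - w < r <= r_t."""
    return r_t - w < r <= r_t

# For a circle (q=1), r_el reduces to the ordinary Euclidean radius:
print(r_el(3.0, 4.0, 0.0, 0.0, q=1.0, theta_deg=0.0))   # 5.0
```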
-@item An input FITS image
-An input image file can also be specified to be used as a PSF.
-If the sum of its pixels are not equal to 1, the pixels will be multiplied by
a fraction so the sum does become 1.
-@end table
+@item --rcol=STR/INT
+The radius parameter of the profiles.
+Effective radius (@mymath{r_e}) if S@'ersic, FWHM if Moffat or Gaussian.
+@item --ncol=STR/INT
+The S@'ersic index (@mymath{n}) or Moffat @mymath{\beta}.
-While the Gaussian is only dependent on the FWHM, the Moffat function is also
dependent on @mymath{\beta}.
-Comparing these two functions with a fixed FWHM gives the following results:
+@item --pcol=STR/INT
+The position angle (in degrees) of the profiles relative to the first FITS
axis (horizontal when viewed in SAO ds9).
+When building a 3D profile, this is the first Euler angle: first rotation of
the ellipsoid major axis from the first FITS axis (rotating about the third
axis).
+See @ref{Defining an ellipse and ellipsoid}.
-@itemize
-@item
-Within the FWHM, the functions don't have significant differences.
-@item
-For a fixed FWHM, as @mymath{\beta} increases, the Moffat function becomes
sharper.
-@item
-The Gaussian function is much sharper than the Moffat functions, even when
@mymath{\beta} is large.
-@end itemize
+@item --p2col=STR/INT
+Second Euler angle (in degrees) when building a 3D ellipsoid.
+This is the second rotation of the ellipsoid major axis (following
@option{--pcol}) about the (rotated) X axis.
+See @ref{Defining an ellipse and ellipsoid}.
+This column is ignored when building a 2D profile.
+@item --p3col=STR/INT
+Third Euler angle (in degrees) when building a 3D ellipsoid.
+This is the third rotation of the ellipsoid major axis (following
@option{--pcol} and @option{--p2col}) about the (rotated) Z axis.
+See @ref{Defining an ellipse and ellipsoid}.
+This column is ignored when building a 2D profile.
+@item --qcol=STR/INT
+The axis ratio of the profiles (minor axis divided by the major axis in a 2D
ellipse).
+When building a 3D ellipse, this is the ratio of the major axis to the
semi-axis length of the second dimension (in a right-handed coordinate system).
+See @mymath{q_1} in @ref{Defining an ellipse and ellipsoid}.
+@item --q2col=STR/INT
+The ratio of the ellipsoid major axis to the third semi-axis length (in a
right-handed coordinate system) of a 3D ellipsoid.
+See @mymath{q_2} in @ref{Defining an ellipse and ellipsoid}.
+This column is ignored when building a 2D profile.
-@node Stars, Galaxies, PSF, Modeling basics
-@subsubsection Stars
+@item --mcol=STR/INT
+The total pixelated magnitude of the profile within the truncation radius, see
@ref{Profile magnitude}.
-@cindex Modeling stars
-@cindex Stars, modeling
-In MakeProfiles, stars are generally considered to be a point source.
-This is usually the case for extra galactic studies, were nearby stars are
also in the field.
-Since a star is only a point source, we assume that it only fills one pixel
prior to convolution.
-In fact, exactly for this reason, in astronomical images the light profiles of
stars are one of the best methods to understand the shape of the PSF and a very
large fraction of scientific research is preformed by assuming the shapes of
stars to be the PSF of the image.
+@item --tcol=STR/INT
+The truncation radius of this profile.
+By default it is in units of the radial parameter of the profile (the value in
the @option{--rcol} of the catalog).
+If @option{--tunitinp} is given, this value is interpreted in units of pixels
(prior to oversampling) irrespective of the profile.
+@end table
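The interaction between @option{--tcol} and @option{--tunitinp} (described under @option{--tcol} above) amounts to a one-line conversion; the sketch below is illustrative only, with hypothetical names:

```python
def truncation_in_pixels(tcol, rcol, tunitinp=False):
    """Truncation radius in pixels (before oversampling): by default the
    --tcol value is a multiple of the radial parameter (--rcol); with
    --tunitinp it is already in pixels."""
    return tcol if tunitinp else tcol * rcol

print(truncation_in_pixels(tcol=5.0, rcol=3.0))                  # 15.0
print(truncation_in_pixels(tcol=5.0, rcol=3.0, tunitinp=True))   # 5.0
```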
+@node MakeProfiles profile settings, MakeProfiles output dataset, MakeProfiles
catalog, Invoking astmkprof
+@subsubsection MakeProfiles profile settings
+The profile parameters that differ between each created profile are specified
through the columns in the input catalog and described in @ref{MakeProfiles
catalog}.
+Besides those, there are general settings for some profiles that do not differ from one profile to another: they are a property of the general process.
+For example, the number of random points to use in the Monte Carlo integration is fixed for all the profiles.
+The options described in this section are for configuring such properties.
+@table @option
-@node Galaxies, Sampling from a function, Stars, Modeling basics
-@subsubsection Galaxies
+@item --mode=STR
+Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles
catalog}) in image or WCS coordinates.
+This option thus accepts only two values: @option{img} and @option{wcs}.
+It is mandatory when a catalog is being used as input.
-@cindex Galaxy profiles
-@cindex S@'ersic profile
-@cindex Profiles, galaxies
-@cindex Generalized de Vaucouleur profile
-Today, most practitioners agree that the flux of galaxies can be modeled with
one or a few generalized de Vaucouleur's (or S@'ersic) profiles.
+@item -r INT
+@itemx --numrandom=INT
+The number of random points used in the central regions of the profile, see
@ref{Sampling from a function}.
-@dispmath{I(r) = I_e \exp \left ( -b_n \left[ \left( r \over r_e \right)^{1/n}
-1 \right] \right )}
+@item -e
+@itemx --envseed
+@cindex Seed, Random number generator
+@cindex Random number generator, Seed
+Use the value of the @code{GSL_RNG_SEED} environment variable to generate the random Monte Carlo sampling distribution, see @ref{Sampling from a function} and @ref{Generating random numbers}.
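The reproducibility this option gives can be sketched as follows (MakeProfiles itself uses GSL's generators; this Python stand-in only illustrates the idea of seeding from the environment):

```python
import os
import random

def make_rng():
    """Seed from GSL_RNG_SEED when it is set, otherwise use a default."""
    seed = os.environ.get("GSL_RNG_SEED")
    return random.Random(int(seed)) if seed is not None else random.Random(0)

os.environ["GSL_RNG_SEED"] = "7"
a = [make_rng().random() for _ in range(3)]
b = [make_rng().random() for _ in range(3)]
print(a == b)   # True: the same seed gives the same sampling sequence
```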
-@cindex Brightness
-@cindex S@'ersic, J. L.
-@cindex S@'ersic index
-@cindex Effective radius
-@cindex Radius, effective
-@cindex de Vaucouleur profile
-@cindex G@'erard de Vaucouleurs
-G@'erard de Vaucouleurs (1918-1995) was first to show in 1948 that this
function resembles the galaxy light profiles, with the only difference that he
held @mymath{n} fixed to a value of 4.
-Twenty years later in 1968, J. L. S@'ersic showed that @mymath{n} can have a
variety of values and does not necessarily need to be 4.
-This profile depends on the effective radius (@mymath{r_e}) which is defined
as the radius which contains half of the profile brightness (see @ref{Profile
magnitude}).
-@mymath{I_e} is the flux at the effective radius.
-The S@'ersic index @mymath{n} is used to define the concentration of the
profile within @mymath{r_e} and @mymath{b_n} is a constant dependent on
@mymath{n}.
-MacArthur et al.@footnote{MacArthur, L. A., S. Courteau, and J. A. Holtzman
(2003). ``Structure of Disk-dominated Galaxies. I. Bulge/Disk Parameters,
Simulations, and Secular Evolution''. In: ApJ 582, pp. 689---722.} show that
for @mymath{n>0.35}, @mymath{b_n} can be accurately approximated using this
equation:
+@item -t FLT
+@itemx --tolerance=FLT
+The tolerance to switch from Monte Carlo integration to the central pixel
value, see @ref{Sampling from a function}.
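The criterion can be sketched as follows: estimate one pixel by Monte Carlo sampling (@mymath{F_r}), compare with the value at the pixel center (@mymath{F_c}), and switch to the central value once the fractional difference @mymath{|F_r-F_c|/F_r} drops below the tolerance. The toy profile and numbers below are illustrative only:

```python
import random

def profile(r):
    """A toy, sharply peaked radial profile (illustrative only)."""
    return (1.0 + r) ** -4

def pixel_estimates(center_r, numrandom=1000, seed=1):
    """Monte Carlo estimate F_r over a 1-pixel interval around center_r,
    and the central value F_c at center_r itself."""
    rng = random.Random(seed)
    f_r = sum(profile(abs(center_r + rng.uniform(-0.5, 0.5)))
              for _ in range(numrandom)) / numrandom
    f_c = profile(center_r)
    return f_r, f_c

f_r, f_c = pixel_estimates(center_r=10.0)
print(abs(f_r - f_c) / f_r < 0.05)   # far from the center the two agree
```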
-@dispmath{b_n=2n - {1\over 3} + {4\over 405n} + {46\over 25515n^2} + {131\over
1148175n^3}-{2194697\over 30690717750n^4}}
+@item -p
+@itemx --tunitinp
+The truncation column of the catalog is in units of pixels.
+By default, the truncation column is considered to be in units of the radial
parameters of the profile (@option{--rcol}).
+Read it as `t-unit-in-p' for `truncation unit in pixels'.
+@item -f
+@itemx --mforflatpix
+When making fixed value profiles (flat and circumference, see
`@option{--fcol}'), don't use the value in the column specified by
`@option{--mcol}' as the magnitude.
+Instead use it as the exact value that all the pixels of these profiles should
have.
+This option is irrelevant for other types of profiles.
+This option is very useful for creating masks, or labeled regions in an image.
+Any integer or floating point value can be used in this column with this option, including @code{NaN} (or `@code{nan}', or `@code{NAN}', case is irrelevant), and infinities (@code{inf}, @code{-inf}, or @code{+inf}).
+For example, with this option if you set the value in the magnitude column
(@option{--mcol}) to @code{NaN}, you can create an elliptical or circular mask
over an image (which can be given as the argument), see @ref{Blank pixels}.
+Another useful application of this option is to create labeled elliptical or
circular apertures in an image.
+To do this, set the value in the magnitude column to the label you want for
this profile.
+This labeled image can then be used in combination with NoiseChisel's output
(see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see
@ref{MakeCatalog}).
+Alternatively, if you want to mark regions of the image (for example with an
elliptical circumference) and you don't want to use NaN values (as explained
above) for some technical reason, you can get the minimum or maximum value in
the image @footnote{
+The minimum will give a better result, because the maximum can be too high
compared to most pixels in the image, making it harder to display.}
+using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude
column along with this option for all the profiles.
+Please note that when using MakeProfiles on an already existing image, you
have to set `@option{--oversample=1}'.
+Otherwise all the profiles will be scaled up based on the oversampling scale
in your configuration files (see @ref{Configuration files}) unless you have
accounted for oversampling in your catalog.
-@node Sampling from a function, Oversampling, Galaxies, Modeling basics
-@subsubsection Sampling from a function
+@item --mcolisbrightness
+The value given in the ``magnitude column'' (specified by @option{--mcol}, see
@ref{MakeProfiles catalog}) must be interpreted as brightness, not magnitude.
+The zero point magnitude (value to the @option{--zeropoint} option) is ignored
and the given value must have the same units as the input dataset's pixels.
-@cindex Sampling
-A pixel is the ultimate level of accuracy to gather data, we can't get any
more accurate in one image, this is known as sampling in signal processing.
-However, the mathematical profiles which describe our models have infinite
accuracy.
-Over a large fraction of the area of astrophysically interesting profiles (for
example galaxies or PSFs), the variation of the profile over the area of one
pixel is not too significant.
-In such cases, the elliptical radius (@mymath{r_{el}} of the center of the
pixel can be assigned as the final value of the pixel, see @ref{Defining an
ellipse and ellipsoid}).
+Recall that the total profile magnitude or brightness that is specified in the @option{--mcol} column of the input catalog is not an integration to infinity, but the actual sum of pixels in the profile (until the desired truncation radius).
+See @ref{Profile magnitude} for more on this point.
-@cindex Integration over pixel
-@cindex Gradient over pixel area
-@cindex Function gradient over pixel area
-As you approach their center, some galaxies become very sharp (their value
significantly changes over one pixel's area).
-This sharpness increases with smaller effective radius and larger S@'ersic
values.
-Thus rendering the central value extremely inaccurate.
-The first method that comes to mind for solving this problem is integration.
-The functional form of the profile can be integrated over the pixel area in a
2D integration process.
-However, unfortunately numerical integration techniques also have their
limitations and when such sharp profiles are needed they can become extremely
inaccurate.
+@item --magatpeak
+The magnitude column in the catalog (see @ref{MakeProfiles catalog}) will be
used to find the brightness only for the peak profile pixel, not the full
profile.
+Note that this is the flux of the profile's peak pixel in the final output of
MakeProfiles.
+So beware of the oversampling, see @ref{Oversampling}.
-@cindex Monte carlo integration
-The most accurate method of sampling a continuous profile on a discrete space
is by choosing a large number of random points within the boundaries of the
pixel and taking their average value (or Monte Carlo integration).
-This is also, generally speaking, what happens in practice with the photons on
the pixel.
-The number of random points can be set with @option{--numrandom}.
+This option can be useful if you want to check a mock profile's total
magnitude at various truncation radii.
+Without this option, no matter what the truncation radius is, the total
magnitude will be the same as that given in the catalog.
+But with this option, the total magnitude will become brighter as you increase
the truncation radius.
-Unfortunately, repeating this Monte Carlo process would be extremely time and
CPU consuming if it is to be applied to every pixel.
-In order to not loose too much accuracy, in MakeProfiles, the profile is built
using both methods explained below.
-The building of the profile begins from its central pixel and continues
(radially) outwards.
-Monte Carlo integration is first applied (which yields @mymath{F_r}), then the
central pixel value (@mymath{F_c}) is calculated on the same pixel.
-If the fractional difference (@mymath{|F_r-F_c|/F_r}) is lower than a given
tolerance level (specified with @option{--tolerance}) MakeProfiles will stop
using Monte Carlo integration and only use the central pixel value.
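The Monte Carlo step can be sketched with AWK (a toy profile @mymath{f(r)=e^{-r}} over one unit pixel centered at @mymath{r=3}; the seed and sample count are arbitrary choices for this illustration):

```shell
## Monte Carlo estimate of the pixel-averaged value (F_r) versus
## the central value (F_c) for f(r)=exp(-r) on a unit pixel
## centered at (3,0); a fixed seed makes the run deterministic.
awk 'BEGIN{
  srand(1); n=10000; xc=3; yc=0;
  for(i=0;i<n;i++){
    x=xc-0.5+rand(); y=yc-0.5+rand();
    sum+=exp(-sqrt(x*x+y*y));
  }
  Fr=sum/n; Fc=exp(-sqrt(xc*xc+yc*yc));
  d=(Fr>Fc?Fr-Fc:Fc-Fr)/Fr;
  printf "F_r=%.4f F_c=%.4f frac-diff=%.4f\n", Fr, Fc, d;
}'
```

When the printed fractional difference falls below the tolerance, the central value alone is a good enough approximation, which is the switch-over criterion described above.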
+In sharper profiles, the peak profile flux is sometimes more important to measure accurately than the overall object brightness.
+In such cases, with this option, the final profile will be built such that its
peak has the given magnitude, not the total profile.
-@cindex Inside-out construction
-The ordering of the pixels in this inside-out construction is based on
@mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see @ref{Defining
an ellipse and ellipsoid}.
-When the axis ratios are large (near one) this is fine.
-But when they are small and the object is highly elliptical, it might seem
more reasonable to follow @mymath{r_{el}} not @mymath{r}.
-The problem is that the gradient is stronger in pixels with smaller @mymath{r} (and larger @mymath{r_{el}}) than in those with smaller @mymath{r_{el}} (and larger @mymath{r}).
-In other words, the gradient is strongest along the minor axis.
-So if the next pixel is chosen based on @mymath{r_{el}}, the tolerance level
will be reached sooner and lots of pixels with large fractional differences
will be missed.
+@cartouche
+@strong{CAUTION:} If you want to use this option for comparing with
observations, please note that MakeProfiles does not do convolution.
+Unless you have de-convolved your data, your images are convolved with the
instrument and atmospheric PSF, see @ref{PSF}.
+Particularly in sharper profiles, the flux in the peak pixel is strongly
decreased after convolution.
+Also note that in such cases, besides de-convolution, you will have to set
@option{--oversample=1} otherwise after resampling your profile with Warp (see
@ref{Warp}), the peak flux will be different.
+@end cartouche
-Monte Carlo integration uses a random distribution of points.
-Thus, every time you run it, by default, you will get a different distribution
of points to sample within the pixel.
-In the case of large profiles, this will result in a slight difference of the
pixels which use Monte Carlo integration each time MakeProfiles is run.
-To have a deterministic result, you have to fix the properties of the random number generator that is used to build the random distribution.
-This can be done by setting the @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED}
environment variables and calling MakeProfiles with the @option{--envseed}
option.
-To learn more about the process of generating random numbers, see
@ref{Generating random numbers}.
+@item --customtable FITS/TXT
+The filename of the table to use for the custom profiles (see the description of @option{--fcol} in @ref{MakeProfiles catalog}).
+This can be a plain-text or a FITS table (see @ref{Tables}). If it is a FITS table, you can use @option{--customtablehdu} to specify which HDU should be used (described below).
-@cindex Seed, Random number generator
-@cindex Random number generator, Seed
-The seed values are fixed for every profile: with @option{--envseed}, all the
profiles have the same seed and without it, each will get a different seed
using the system clock (which is accurate to within one microsecond).
-The same seed will be used to generate a random number for all the sub-pixel
positions of all the profiles.
-So in the former, the sub-pixel points checked for all the pixels undergoing Monte Carlo integration in all profiles will be identical.
-In other words, the sub-pixel points in the first (closest to the center)
pixel of all the profiles will be identical with each other.
-All the second pixels studied for all the profiles will also receive an
identical (different from the first pixel) set of sub-pixel points and so on.
-As long as the number of random points used is large enough or the profiles
are not identical, this should not cause any systematic bias.
+A custom profile can have any value you want for a given radial profile
(including NaN/blank values).
+Each interval is defined by its minimum (inclusive) and maximum (exclusive) radius. When a pixel center falls within an interval, the value specified for that interval will be used.
+If a pixel is not in the given intervals, a value of 0.0 will be used for that
pixel.
+The table should have 3 columns as shown below.
+If the intervals are contiguous (the maximum value of each interval is equal to the minimum value of the next) and all the intervals have the same size (difference between minimum and maximum values), the creation of these profiles will be fast.
+However, if the intervals are not sorted and contiguous, MakeProfiles will parse the intervals from the top of the table and use the first interval that contains the pixel center.
-@node Oversampling, , Sampling from a function, Modeling basics
-@subsubsection Oversampling
+@table @asis
+@item Column 1:
+The interval's minimum radius.
+@item Column 2:
+The interval's maximum radius.
+@item Column 3:
+The value to be used for pixels within the given interval (including
NaN/blank).
+@end table
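The lookup rule described above can be sketched in AWK (with hypothetical interval values; a pixel-center radius is matched against each @mymath{[min,max)} row from the top, falling back to 0.0):

```shell
## First-matching-interval lookup for a custom profile table:
## columns are minimum (inclusive), maximum (exclusive), value.
cat > intervals.txt <<EOF
0 1 100
1 2 90
2 3 50
EOF
for r in 0.5 1.7 2.9 7.0; do
  awk -v r=$r '($1<=r && r<$2){print r": "$3; found=1; exit}
               END{if(!found) print r": 0.0"}' intervals.txt
done
```

A radius of 7.0 falls outside all the intervals, so it gets the default value of 0.0.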
-@cindex Oversampling
-The steps explained in @ref{Sampling from a function} do give an accurate
representation of a profile prior to convolution.
-However, in an actual observation, the image is first convolved with or
blurred by the atmospheric and instrument PSF in a continuous space and then it
is sampled on the discrete pixels of the camera.
+For example, let's assume you have the radial profile below in a file called @file{radial.txt}.
+The first column is each interval's larger (maximum) radius in units of pixels and the second column is the value in that interval:
-@cindex PSF over-sample
-In order to more accurately simulate this process, the unconvolved image and
the PSF are created on a finer pixel grid.
-In other words, the output image is a certain odd-integer multiple of the desired size; we can call this `oversampling'.
-The user can specify this multiple as a command-line option.
-The reason this has to be an odd number is that the PSF has to be centered on
the center of its image.
-An image with an even number of pixels on each side does not have a central
pixel.
+@example
+1 100
+2 90
+3 50
+4 10
+5 2
+6 0.1
+7 0.05
+@end example
-The image can then be convolved with the PSF (which should also be oversampled
on the same scale).
-Finally, the image can be sub-sampled to reach the initially desired pixel size of the output image.
-After this, mock noise can be added as explained in the next section.
-This is because unlike the PSF, the noise occurs in each output pixel, not on
a continuous space like all the prior steps.
+@noindent
+You can construct the table to give to @option{--customtable} with the command
below (using Gnuastro's @ref{Column arithmetic}).
+@example
asttable radial.txt -c'arith $1 1 -' -c1,2 -ocustom.fits
+@end example
+@noindent
+In case the interval widths are different from 1 (for example 0.5), change @code{$1 1 -} to @code{$1 0.5 -}.
+On a side-note, Gnuastro has features to extract the radial profile of an
object from the image, see @ref{Generate radial profile}.
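If Gnuastro's Table program is not at hand, the same three-column table can be built with plain AWK (assuming, as above, contiguous intervals of width 1 pixel; the file names are just for this illustration):

```shell
## Prepend a minimum-radius column (one less than the maximum)
## to the two-column radial.txt shown above.
cat > radial.txt <<EOF
1 100
2 90
3 50
4 10
5 2
6 0.1
7 0.05
EOF
awk '{print $1-1, $1, $2}' radial.txt > custom.txt
head -3 custom.txt
```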
-@node If convolving afterwards, Brightness flux magnitude, Modeling basics,
MakeProfiles
-@subsection If convolving afterwards
+@item --customtablehdu INT/STR
+The HDU/extension in the FITS file given to @option{--customtable}.
-In case you want to convolve the image later with a given point spread
function, make sure to use a larger image size.
-After convolution, the profiles become larger and a profile that is normally
completely outside of the image might fall within it.
+@item -X INT,INT
+@itemx --shift=INT,INT
+Shift all the profiles and enlarge the image along each dimension.
+To better understand this option, please see @mymath{n} in @ref{If convolving
afterwards}.
+This is useful when you want to convolve the image afterwards.
+If you are using an external PSF, be sure to oversample it to the same scale
used for creating the mock images.
+If a background image is specified, any possible value to this option is
ignored.
-On one axis, if you want your final (convolved) image to be @mymath{m} pixels
and your PSF is @mymath{2n+1} pixels wide, then when calling MakeProfiles, set
the axis size to @mymath{m+2n}, not @mymath{m}.
-You also have to shift all the pixel positions of the profile centers on that axis by @mymath{n} pixels in the positive direction.
+@item -c
+@itemx --prepforconv
+Shift all the profiles and enlarge the image based on half the width of the first Moffat or Gaussian profile in the catalog, considering any possible oversampling (see @ref{If convolving afterwards}).
+@option{--prepforconv} is only checked and possibly activated if
@option{--xshift} and @option{--yshift} are both zero (after reading the
command-line and configuration files).
+If a background image is specified, any possible value to this option is
ignored.
-After convolution, you can crop the outer @mymath{n} pixels with the section
crop box specification of Crop: @option{--section=n:*-n,n:*-n} assuming your
PSF is a square, see @ref{Crop section syntax}.
-This will also remove all discrete Fourier transform artifacts (blurred sides)
from the final image.
-To facilitate this shift, MakeProfiles has the options @option{--xshift},
@option{--yshift} and @option{--prepforconv}, see @ref{Invoking astmkprof}.
+@item -z FLT
+@itemx --zeropoint=FLT
+The zero point magnitude of the input.
+For more on the zero point magnitude, see @ref{Brightness flux magnitude}.
+@item -w FLT
+@itemx --circumwidth=FLT
+The width of the circumference if the profile is to be an elliptical
circumference or annulus.
+See the explanations for this type of profile in @option{--fcol}.
+@item -R
+@itemx --replace
+Do not add the pixels of each profile over the background or other profiles; replace the values instead.
+By default, when two profiles overlap, the final pixel value is the sum of all
the profiles that overlap on that pixel.
+This is the expected situation when dealing with physical object profiles like
galaxies or stars/PSF.
+However, when MakeProfiles is used to build integer labeled images (for
example in @ref{Aperture photometry}), this is not the expected situation: the
sum of two labels will be a new label.
+With this option, the pixels are not added but the largest (maximum) value
over that pixel is used.
+Because the maximum operator is independent of the order of values, the output
is also thread-safe.
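The difference between the two modes can be seen with a one-line sketch: where (hypothetical) labels 2 and 3 overlap, addition fabricates a new label while the maximum keeps an existing one, independently of operand order:

```shell
## Sum creates a spurious label (5); maximum keeps a valid one (3)
## and gives the same answer regardless of the order of operands.
echo 2 3 | awk '{print "sum:", $1+$2, "  max:", ($1>$2?$1:$2)}'
echo 3 2 | awk '{print "sum:", $1+$2, "  max:", ($1>$2?$1:$2)}'
```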
-@node Brightness flux magnitude, Profile magnitude, If convolving afterwards,
MakeProfiles
-@subsection Brightness, Flux, Magnitude and Surface brightness
+@end table
-@cindex ADU
-@cindex Gain
-@cindex Counts
-Astronomical data pixels are usually in units of counts@footnote{Counts are
also known as analog to digital units (ADU).} or electrons or either one
divided by seconds.
-To convert from the counts to electrons, you will need to know the instrument
gain.
-In any case, they can be directly converted to energy or energy/time using the
basic hardware (telescope, camera and filter) information.
-We will continue the discussion assuming the pixels are in units of
energy/time.
+@node MakeProfiles output dataset, MakeProfiles log file, MakeProfiles profile
settings, Invoking astmkprof
+@subsubsection MakeProfiles output dataset
+MakeProfiles takes an input catalog and uses the basic properties that are defined there to build a dataset, for example a 2D image containing the profiles in the catalog.
+In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile settings}, the
catalog and profile settings were discussed.
+The options of this section allow you to configure the output dataset (or the canvas that will host the built profiles).
-@cindex Flux
-@cindex Luminosity
-@cindex Brightness
-The @emph{brightness} of an object is defined as its total detected energy per
time.
-In the case of an imaged source, this is simply the sum of the pixels that are
associated with that detection by our detection tool (for example
@ref{NoiseChisel}@footnote{If further processing is done, for example the Kron
or Petrosian radii are calculated, then the detected area is not sufficient and
the total area that was within the respective radius must be used.}).
-The @emph{flux} of an object is defined in units of
energy/time/collecting-area.
-For an astronomical target, the flux is therefore defined as its brightness divided by the area used to collect the light from the source, i.e., the telescope aperture (for example in units of @mymath{cm^2}).
-Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can
define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
+@table @option
-Therefore, while flux and luminosity are intrinsic properties of the object,
brightness depends on our detecting tools (hardware and software).
-In low-level observational astronomy data analysis, we are usually more
concerned with measuring the brightness, because it is the thing we directly
measure from the image pixels and create in catalogs.
-On the other hand, luminosity is used in higher-level analysis (after image
contents are measured as catalogs to deduce physical interpretations).
-It is just important to avoid possible confusion between luminosity and brightness, because both have the same units of energy per second.
+@item -k FITS
+@itemx --background=FITS
+A background image FITS file to build the profiles on.
+The extension that contains the image should be specified with the
@option{--backhdu} option, see below.
+When a background image is specified, it will be used to derive all the
information about the output image.
+Hence, the following options will be ignored: @option{--mergedsize},
@option{--oversample}, @option{--crpix}, @option{--crval} (generally, all other
WCS related parameters) and the output's data type (see @option{--type} in
@ref{Input output options}).
-@cindex Magnitudes from flux
-@cindex Flux to magnitude conversion
-@cindex Astronomical Magnitude system
-Images of astronomical objects span a very large range of brightness: the Sun (as the brightest object) is roughly @mymath{2.5^{60}=10^{24}} times brighter than the faintest galaxies we can currently detect in the deepest images.
-Therefore discussing brightness directly will involve a large range of values
which is inconvenient.
-So astronomers have chosen to use a logarithmic scale to talk about the
brightness of astronomical objects.
+The image will act like a canvas to build the profiles on: profile pixel
values will be summed with the background image pixel values.
+With the @option{--replace} option you can disable this behavior and have the profile pixel values replace the background pixel values.
+If you want to use all the image information above, except for the pixel
values (you want to have a blank canvas to build the profiles on, based on an
input image), you can call @option{--clearcanvas} to set all the input image's pixels to zero before starting to build the profiles over it (this is done in memory after reading the input, so nothing will happen to your input file).
-@cindex Hipparchus of Nicaea
-But the logarithm can only be used with a dimensionless value that is always positive.
-Fortunately brightness is always positive (at least in theory@footnote{In
practice, for very faint objects, if the background brightness is
over-subtracted, we may end up with a negative brightness in a real object.}).
-To remove the dimensions, we divide the brightness of the object (@mymath{B})
by a reference brightness (@mymath{B_r}).
-We then define a logarithmic scale as @mymath{magnitude} through the relation
below.
-The @mymath{-2.5} factor in the definition of magnitudes is a legacy of our ancient colleagues, in particular Hipparchus of Nicaea (190-120 BC).
+@item -B STR/INT
+@itemx --backhdu=STR/INT
+The header data unit (HDU) of the file given to @option{--background}.
-@dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}
+@item -C
+@itemx --clearcanvas
+When an input image is specified (with the @option{--background} option), set all its pixels to 0.0 immediately after reading it into memory.
+Effectively, this will allow you to use all its properties (described under
the @option{--background} option), without having to worry about the pixel
values.
-@cindex Zero point magnitude
-@cindex Magnitude zero point
-@noindent
-@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the
pre-defined magnitude of the reference brightness.
-One particularly easy condition is when the reference brightness is unity
(@mymath{B_r=1}).
-This brightness will thus summarize all the hardware-specific parameters
discussed above (like the conversion of pixel values to physical units) into
one number.
-That reference magnitude is commonly known as the @emph{Zero point} magnitude (because when @mymath{B=B_r=1}, the right side of the magnitude definition above will be zero).
-Using the zero point magnitude (@mymath{Z}), we can write the magnitude relation above in a simpler format:
+@option{--clearcanvas} can come in handy in many situations, for example if
you want to create a labeled image (segmentation map) for creating a catalog
(see @ref{MakeCatalog}).
+In other cases, you might have modeled the objects in an image and want to
create them on the same frame, but without the original pixel values.
-@dispmath{m = -2.5\log_{10}(B) + Z}
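For a quick numerical check of this relation on the command-line (the pixel sum and zero point below are arbitrary example values):

```shell
## m = -2.5*log10(B) + Z for a summed value B=100 and zero point
## Z=22.5; AWK's log() is natural, hence the division by log(10).
echo 100 22.5 | awk '{printf "m = %.2f\n", -2.5*log($1)/log(10)+$2}'
```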
+@item -E STR/INT,FLT[,FLT,[...]]
+@itemx --kernel=STR/INT,FLT[,FLT,[...]]
+Only build one kernel profile with the parameters given as the values to this
option.
+The different values must be separated by a comma (@key{,}).
+The first value identifies the radial function of the profile, either through
a string or through a number (see description of @option{--fcol} in
@ref{MakeProfiles catalog}).
+Each radial profile needs a different total number of parameters: S@'ersic and Moffat functions need 3 (the radial parameter, the S@'ersic index or Moffat @mymath{\beta}, and the truncation radius).
+The Gaussian function needs two parameters: radial and truncation radius.
+The point function doesn't need any parameters, while the flat and circumference profiles just need one parameter (the truncation radius).
-@cindex Janskys (Jy)
-@cindex AB magnitude
-@cindex Magnitude, AB
-Having the zero point of an image, you can convert its pixel values to
physical units of microJanskys (or @mymath{\mu{}Jy}) to enable direct
pixel-based comparisons with images from other instruments (just note that this
assumes instrument and observation signatures are corrected, things like the
flat-field or the Sky).
-This conversion can be done with the fact that in the AB magnitude
standard@footnote{@url{https://en.wikipedia.org/wiki/AB_magnitude}},
@mymath{3631Jy} corresponds to the zero-th magnitude, therefore
@mymath{B\equiv3631\times10^{6}\mu{Jy}} and @mymath{m\equiv0}.
-We can therefore estimate the brightness (@mymath{B_z}, in @mymath{\mu{Jy}})
corresponding to the image zero point (@mymath{Z}) using this equation:
+The PSF or kernel is a unique (and highly constrained) type of profile: the
sum of its pixels must be one, its center must be the center of the central
pixel (in an image with an odd number of pixels on each side), and commonly it
is circular, so its axis ratio and position angle are one and zero respectively.
+Kernels are commonly necessary for various data analysis and data manipulation steps (for example, see @ref{Convolve} and @ref{NoiseChisel}).
+Because of this, it is inconvenient to define a catalog with one row and many zero-valued columns (for all the unnecessary parameters).
+Hence, with this option, it is possible to create a kernel with MakeProfiles
without the need to create a catalog.
+Here are some examples:
-@dispmath{m - Z = -2.5\log_{10}(B/B_z)}
-@dispmath{0 - Z = -2.5\log_{10}({3631\times10^{6}\over B_z})}
-@dispmath{B_z = 3631\times10^{\left(6 - {Z \over 2.5} \right)} \mu{Jy}}
+@table @option
+@item --kernel=moffat,3,2.8,5
+A Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is truncated
at 5 times the FWHM.
-@cindex SDSS
-Because the image zero point corresponds to a pixel value of @mymath{1}, the
@mymath{B_z} value calculated above also corresponds to a pixel value of
@mymath{1}.
-Therefore you simply have to multiply your image by @mymath{B_z} to convert it
to @mymath{\mu{Jy}}.
-Don't forget that this only applies when your zero point was also estimated in
the AB magnitude system.
-On the command-line, you can easily estimate this value for a certain zero
point with AWK, then multiply it to all the pixels in the image with
@ref{Arithmetic}.
-For example let's assume you are using an SDSS image with a zero point of 22.5:
+@item --kernel=gaussian,2,3
+A circular Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
+the FWHM.
+@end table
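The unit-sum constraint mentioned above can be sketched numerically (a 1D Gaussian with FWHM of 2 pixels sampled on 7 pixels; these sizes are arbitrary illustrative choices):

```shell
## Sample a 1D Gaussian kernel (FWHM=2 -> sigma=FWHM/2.35482) on
## an odd number of pixels (so it has a central pixel) and
## normalize it by the pixel sum, so the pixels sum to one.
awk 'BEGIN{
  s=2/2.35482; n=7; c=(n-1)/2;
  for(i=0;i<n;i++){k[i]=exp(-(i-c)*(i-c)/(2*s*s)); sum+=k[i]}
  for(i=0;i<n;i++) printf "%.4f ", k[i]/sum;
  printf "\n";
}'
```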
-@example
-bz=$(echo 22.5 | awk '@{print 3631 * 10^(6-$1/2.5)@}')
-astarithmetic sdss.fits $bz x --output=sdss-in-muJy.fits
-@end example
+This option may also be used to create a 3D kernel.
+To do that, two small modifications are necessary: add a @code{-3d} (or
@code{-3D}) to the profile name (for example @code{moffat-3d}) and add a number
(axis-ratio along the third dimension) to the end of the parameters for all
profiles except @code{point}.
+The main reason behind providing an axis ratio in the third dimension is that
in 3D astronomical datasets, commonly the third dimension doesn't have the same
nature (units/sampling) as the first and second.
-@cindex Steradian
-@cindex Angular coverage
-@cindex Celestial sphere
-@cindex Surface brightness
-@cindex SI (International System of Units)
-Another important concept is the distribution of an object's brightness over
its area.
-For this, we define the @emph{surface brightness} to be the magnitude of an
object's brightness divided by its solid angle over the celestial sphere (or
coverage in the sky, commonly in units of arcsec@mymath{^2}).
-The solid angle is expressed in units of arcsec@mymath{^2} because
astronomical targets are usually much smaller than one steradian.
-Recall that the steradian is the dimension-less SI unit of a solid angle and 1
steradian covers @mymath{1/4\pi} (almost @mymath{8\%}) of the full celestial
sphere.
+For example in IFU datacubes, the first and second dimensions are angular positions (like RA and Dec), but the third is the wavelength in units of Angstroms.
+Because of this different nature (which also affects the processing), it may be necessary for the kernel to have a different extent in that direction.
-Surface brightness is therefore most commonly expressed in units of mag/arcsec@mymath{^2}.
-For example when the brightness is measured over an area of A
arcsec@mymath{^2}, then the surface brightness becomes:
+If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel will
be a spheroid.
+If it is smaller than @mymath{1.0}, the kernel will be button-shaped: extended less in the third dimension.
+However, when it is larger than @mymath{1.0}, the kernel will be bullet-shaped: extended more in the third dimension.
+In the latter case, the radial parameter will correspond to the length along
the 3rd dimension.
+For example, let's have a look at the two examples above but in 3D:
-@dispmath{S = -2.5\log_{10}(B/A) + Z = -2.5\log_{10}(B) + 2.5\log_{10}(A) + Z}
+@table @option
+@item --kernel=moffat-3d,3,2.8,5,0.5
+An ellipsoid Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is
truncated at 5 times the FWHM.
+The ellipsoid is circular in the first two dimensions, but in the third
dimension its extent is half the first two.
-@noindent
-In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is
related to the object's magnitude (@mymath{m}) and area (@mymath{A}, in units
of arcsec@mymath{^2}) through this equation:
+@item --kernel=gaussian-3d,2,3,1
+A spherical Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
+the FWHM.
+@end table
-@dispmath{S = m + 2.5\log_{10}(A)}
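As a worked example of this equation (with arbitrary illustrative numbers), an object of magnitude 20 spread over 100 arcsec@mymath{^2}:

```shell
## S = m + 2.5*log10(A): m=20 over A=100 arcsec^2 gives a mean
## surface brightness of 25 mag/arcsec^2.
echo 20 100 | awk '{printf "S = %.2f mag/arcsec^2\n",
                    $1+2.5*log($2)/log(10)}'
```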
+Of course, if a specific kernel is needed that doesn't fit the constraints imposed by this option, you can always use a catalog to define any arbitrary kernel.
+Just call the @option{--individual} and @option{--nomerged} options to make
sure that it is built as a separate file (individually) and no ``merged'' image
of the input profiles is created.
-A common mistake is to follow the mag/arcsec@mymath{^2} unit literally, and
divide the object's magnitude by its area.
-But this is wrong because magnitude is a logarithmic scale while area is
linear.
-It is the brightness that should be divided by the solid angle because both
have linear scales.
-The magnitude of that ratio is then defined to be the surface brightness.
+@item -x INT,INT
+@itemx --mergedsize=INT,INT
+The number of pixels along each axis of the output, in FITS order.
+This is before over-sampling.
+For example, if you call MakeProfiles with @option{--mergedsize=100,150 --oversample=5} (assuming no shift for later convolution), then the final image will be 500 pixels along the first axis and 750 along the second.
+Fractions are acceptable as values for each dimension, however, they must
reduce to an integer, so @option{--mergedsize=150/3,300/3} is acceptable but
@option{--mergedsize=150/4,300/4} is not.
-@node Profile magnitude, Invoking astmkprof, Brightness flux magnitude,
MakeProfiles
-@subsection Profile magnitude
+When viewing a FITS image in DS9, the first FITS dimension is in the
horizontal direction and the second is vertical.
+As an example, the image created with the example above will have 500 pixels
horizontally and 750 pixels vertically.
-@cindex Brightness
-@cindex Truncation radius
-@cindex Sum for total flux
-To find the profile brightness or its magnitude (see @ref{Brightness flux magnitude}), it is customary to use the 2D integration of the flux to infinity.
-However, in MakeProfiles we do not follow this idealistic approach and apply a
more realistic method to find the total brightness or magnitude: the sum of all
the pixels belonging to a profile within its predefined truncation radius.
-Note that if the truncation radius is not large enough, this can be
significantly different from the total integrated light to infinity.
+If a background image is specified, this option is ignored.
-@cindex Integration to infinity
-An integration to infinity is not a realistic condition because no galaxy extends indefinitely (important for high S@'ersic index profiles).
-Pixelation can also cause a significant difference between the actual total pixel sum of the profile and that of integration to infinity, especially in small and high S@'ersic index profiles.
-To be safe, you can specify a large enough truncation radius for such compact
high S@'ersic index profiles.
+@item -s INT
+@itemx --oversample=INT
+The scale to over-sample the profiles and final image.
+If it is not an odd number, it will be incremented by one (see @ref{Oversampling}).
+Note that this @option{--oversample} will remain active even if an input image
is specified.
+If your input catalog is based on the background image, be sure to set
@option{--oversample=1}.
-If oversampling is used, the brightness is calculated using the over-sampled image (see @ref{Oversampling}), which is much more accurate.
-The profile is first built in an array completely bounding it with a
normalization constant of unity (see @ref{Galaxies}).
-Taking @mymath{B} to be the desired brightness and @mymath{S} to be the sum of
the pixels in the created profile, every pixel is then multiplied by
@mymath{B/S} so the sum is exactly @mymath{B}.
+@item --psfinimg
+Build the possibly existing PSF profiles (Moffat or Gaussian) in the catalog
into the final image.
+By default they are built separately so you can convolve your images with them; in that case their magnitudes and positions are ignored.
+With this option, they will be built in the final image like every other
galaxy profile.
+To have a final PSF in your image, make a point profile where you want the PSF
and after convolution it will be the PSF.
-If the @option{--individual} option is called, this same array is written to a
FITS file.
-If not, only the overlapping pixels of this array and the output image are
kept and added to the output array.
+@item -i
+@itemx --individual
+@cindex Individual profiles
+@cindex Build individual profiles
+If this option is called, each profile is created in a separate FITS file within the same directory as the output, with the row number of the profile (starting from zero) in the name.
+The file for each row's profile will have the final combined image's name as a suffix.
+So for example if the final combined image is named
@file{./out/fromcatalog.fits}, then the first profile that will be created with
this option will be named @file{./out/0_fromcatalog.fits}.
+Since each image only contains one full profile out to the truncation radius, the profile is centered; so only the sub-pixel position of the profile center is important for the outputs of this option.
+The output will have an odd number of pixels.
+If there is no oversampling, the central pixel will contain the profile center.
+If the value to @option{--oversample} is larger than unity, then the profile
center is on any of the central @option{--oversample}'d pixels depending on the
fractional value of the profile center.
+If the fractional value is larger than half, it is on the bottom half of the
central region.
+This is due to the FITS definition of a real number position: the center of a pixel has a fractional value of @mymath{0.00}, so each pixel covers these fractions: .5 -- .75 -- .00 (pixel center) -- .25 -- .5.
+@item -m
+@itemx --nomerged
+Don't make a merged image.
+By default after making the profiles, they are added to a final image with
side lengths specified by @option{--mergedsize} if they overlap with it.
+@end table
-@node Invoking astmkprof, , Profile magnitude, MakeProfiles
-@subsection Invoking MakeProfiles
+@noindent
+The options below can be used to define the world coordinate system (WCS)
properties of the MakeProfiles outputs.
+The option names are deliberately chosen to be the same as the FITS standard
WCS keywords.
+See Section 8 of @url{https://doi.org/10.1051/0004-6361/201015362, Pence et al
[2010]} for a short introduction to WCS in the FITS standard@footnote{The world
coordinate standard in FITS is a very beautiful and powerful concept to
link/associate datasets with the outside world (other datasets).
+The description in the FITS standard (link above) only touches the tip of the iceberg.
+To learn more please see @url{https://doi.org/10.1051/0004-6361:20021326,
Greisen and Calabretta [2002]},
@url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen
[2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen et al.
[2006]}, and
@url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf, Calabretta
et al.}}.
-MakeProfiles will make any number of profiles specified in a catalog either
individually or in one image.
-The executable name is @file{astmkprof} with the following general template
+If you look into the headers of a FITS image with WCS for example you will see
all these names but in uppercase and with numbers to represent the dimensions,
for example @code{CRPIX1} and @code{PC2_1}.
+You can see the FITS headers with Gnuastro's @ref{Fits} program using a
command like this: @command{$ astfits -p image.fits}.
-@example
-$ astmkprof [OPTION ...] [Catalog]
-@end example
+If the values given to any of these options do not correspond to the number of dimensions in the output dataset, then no WCS information will be added.
-@noindent
-One line examples:
+@table @option
-@example
-## Make an image with profiles in catalog.txt (with default size):
-$ astmkprof catalog.txt
+@item --crpix=FLT,FLT
+The pixel coordinates of the WCS reference point.
+Fractions are acceptable for the values of this option.
-## Make the profiles in catalog.txt over image.fits:
-$ astmkprof --background=image.fits catalog.txt
-
-## Make a Moffat PSF with FWHM 3pix, beta=2.8, truncation=5
-$ astmkprof --kernel=moffat,3,2.8,5 --oversample=1
-
-## Make profiles in catalog, using RA and Dec in the given column:
-$ astmkprof --ccol=RA_CENTER --ccol=DEC_CENTER --mode=wcs catalog.txt
-
-## Make a 1500x1500 merged image (oversampled 500x500) image along
-## with an individual image for all the profiles in catalog:
-$ astmkprof --individual --oversample 3 --mergedsize=500,500 cat.txt
-@end example
-
-@noindent
-The parameters of the mock profiles can either be given through a catalog
(which stores the parameters of many mock profiles, see @ref{MakeProfiles
catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output
dataset}).
-The catalog can be in the FITS ASCII, FITS binary format, or plain text
formats (see @ref{Tables}).
-A plain text catalog can also be provided using the Standard input (see
@ref{Standard input}).
-The columns related to each parameter can be determined both by number, or by
match/search criteria using the column names, units, or comments, with the
options ending in @option{col}, see below.
-
-Without any file given to the @option{--background} option, MakeProfiles will
make a zero-valued image and build the profiles on that (its size and main WCS
parameters can also be defined through the options described in
@ref{MakeProfiles output dataset}).
-Besides the main/merged image containing all the profiles in the catalog, it
is also possible to build individual images for each profile (only enclosing
one full profile to its truncation radius) with the @option{--individual}
option.
-
-If an image is given to the @option{--background} option, the pixels of that
image are used as the background value for every pixel hence flux value of each
profile pixel will be added to the pixel in that background value.
-You can disable this with the @code{--clearcanvas} option (which will
initialize the background to zero-valued pixels and build profiles over that).
-With the @option{--background} option, the values to all options relating to
the ``canvas'' (output size and WCS) will be ignored if specified, for example
@option{--oversample}, @option{--mergedsize}, and @option{--prepforconv}.
-
-The sections below discuss the options specific to MakeProfiles based on
context: the input catalog settings which can have many rows for different
profiles are discussed in @ref{MakeProfiles catalog}, in @ref{MakeProfiles
profile settings}, we discuss how you can set general profile settings (that
are the same for all the profiles in the catalog).
-Finally @ref{MakeProfiles output dataset} and @ref{MakeProfiles log file}
discuss the outputs of MakeProfiles and how you can configure them.
-Besides these, MakeProfiles also supports all the common Gnuastro program
options that are discussed in @ref{Common options}, so please flip through them
is well for a more comfortable usage.
-
-When building 3D profiles, there are more degrees of freedom.
-Hence, more columns are necessary and all the values related to dimensions
(for example size of dataset in each dimension and the WCS properties) must
also have 3 values.
-To allow having an independent set of default values for creating 3D profiles,
MakeProfiles also installs a @file{astmkprof-3d.conf} configuration file (see
@ref{Configuration files}).
-You can use this for default 3D profile values.
-For example, if you installed Gnuastro with the prefix @file{/usr/local} (the
default location, see @ref{Installation directory}), you can benefit from this
configuration file by running MakeProfiles like the example below.
-As with all configuration files, if you want to customize a given option, call
it before the configuration file.
-
-@example
-$ astmkprof --config=/usr/local/etc/astmkprof-3d.conf catalog.txt
-@end example
-
-@cindex Shell alias
-@cindex Alias, shell
-@cindex Shell startup
-@cindex Startup, shell
-To further simplify the process, you can define a shell alias in any startup
file (for example @file{~/.bashrc}, see @ref{Installation directory}).
-Assuming that you installed Gnuastro in @file{/usr/local}, you can add this
line to the startup file (you may put it all in one line, it is broken into two
lines here for fitting within page limits).
-
-@example
-alias astmkprof-3d="astmkprof --config=/usr/local/etc/astmkprof-3d.conf"
-@end example
-
-@noindent
-Using this alias, you can call MakeProfiles with the name
@command{astmkprof-3d} (instead of @command{astmkprof}).
-It will automatically load the 3D specific configuration file first, and then
parse any other arguments, options or configuration files.
-You can change the default values in this 3D configuration file by calling
them on the command-line as you do with @command{astmkprof}@footnote{Recall
that for single-invocation options, the last command-line invocation takes
precedence over all previous invocations (including those in the 3D
configuration file).
-See the description of @option{--config} in @ref{Operating mode options}.}.
-
-Please see @ref{Sufi simulates a detection} for a very complete tutorial
explaining how one could use MakeProfiles in conjunction with other Gnuastro's
programs to make a complete simulated image of a mock galaxy.
-
-@menu
-* MakeProfiles catalog:: Required catalog properties.
-* MakeProfiles profile settings:: Configuration parameters for all profiles.
-* MakeProfiles output dataset:: The canvas/dataset to build profiles over.
-* MakeProfiles log file:: A description of the optional log file.
-@end menu
-
-@node MakeProfiles catalog, MakeProfiles profile settings, Invoking astmkprof,
Invoking astmkprof
-@subsubsection MakeProfiles catalog
-The catalog containing information about each profile can be in the FITS
ASCII, FITS binary, or plain text formats (see @ref{Tables}).
-The latter can also be provided using standard input (see @ref{Standard
input}).
-Its columns can be ordered in any desired manner.
-You can specify which columns belong to which parameters using the set of
options discussed below.
-For example through the @option{--rcol} and @option{--tcol} options, you can
specify the column that contains the radial parameter for each profile and its
truncation respectively.
-See @ref{Selecting table columns} for a thorough discussion on the values to
these options.
-
-The value for the profile center in the catalog (the @option{--ccol} option)
can be a floating point number so the profile center can be on any sub-pixel
position.
-Note that pixel positions in the FITS standard start from 1 and an integer is
the pixel center.
-So a 2D image actually starts from the position (0.5, 0.5), which is the
bottom-left corner of the first pixel.
-When a @option{--background} image with WCS information is provided or you
specify the WCS parameters with the respective options, you may also use RA and
Dec to identify the center of each profile (see the @option{--mode} option
below).
+@item --crval=FLT,FLT
+The WCS coordinates of the reference point.
+Fractions are acceptable for the values of this option.
-In MakeProfiles, profile centers do not have to be in (overlap with) the final
image.
-Even if only one pixel of the profile within the truncation radius overlaps
with the final image size, the profile is built and included in the final image
image.
-Profiles that are completely out of the image will not be created (unless you
explicitly ask for it with the @option{--individual} option).
-You can use the output log file (created with @option{--log} to see which
profiles were within the image, see @ref{Common options}.
+@item --cdelt=FLT,FLT
+The resolution (size of one data-unit or pixel in WCS units) of the
non-oversampled dataset.
+Fractions are acceptable for the values of this option.
-If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and the
profiles are to be built in one image (when @option{--individual} is not used),
it is assumed they are the PSF(s) you want to convolve your created image with.
-So by default, they will not be built in the output image but as separate
files.
-The sum of pixels of these separate files will also be set to unity (1) so you
are ready to convolve, see @ref{Convolution process}.
-As a summary, the position and magnitude of PSF profile will be ignored.
-This behavior can be disabled with the @option{--psfinimg} option.
-If you want to create all the profiles separately (with @option{--individual})
and you want the sum of the PSF profile pixels to be unity, you have to set
their magnitudes in the catalog to the zero point magnitude and be sure that
the central positions of the profiles don't have any fractional part (the PSF
center has to be in the center of the pixel).
+@item --pc=FLT,FLT,FLT,FLT
+The PC matrix of the WCS rotation; see the FITS standard (link above) for a
complete description of this matrix.
-The list of options directly related to the input catalog columns is shown
below.
+@item --cunit=STR,STR
+The units of each WCS axis, for example @code{deg}.
+Note that these values are part of the FITS standard (link above).
+MakeProfiles won't complain if you use non-standard values, but later usage of
them might cause trouble.
-@table @option
+@item --ctype=STR,STR
+The type of each WCS axis, for example @code{RA---TAN} and @code{DEC--TAN}.
+Note that these values are part of the FITS standard (link above).
+MakeProfiles won't complain if you use non-standard values, but later usage of
them might cause trouble.
-@item --ccol=STR/INT
-Center coordinate column for each dimension.
-This option must be called two times to define the center coordinates in an
image.
-For example @option{--ccol=RA} and @option{--ccol=DEC} (along with
@option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns
named @option{RA} and @option{DEC} for the Right Ascension and Declination of
the profile centers.
+@end table
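To clarify what these parameters do, here is a minimal sketch of the linear part of the FITS WCS transform they define (world = CRVAL + CDELT * PC(pixel - CRPIX)). This is a standalone illustration, not Gnuastro code: it ignores the spherical projection stage implied by @code{CTYPE} values such as @code{RA---TAN}, and all numeric values are made up.

```python
def pix_to_world(pixel, crpix, crval, cdelt, pc):
    """Linear part of the FITS WCS transform for a 2D image.

    Spherical projections (CTYPE) are ignored; this only shows how
    CRPIX, CRVAL, CDELT and the PC matrix combine.
    """
    # Offset from the reference pixel (FITS pixel coordinates are 1-based).
    dx = pixel[0] - crpix[0]
    dy = pixel[1] - crpix[1]
    # Rotate with the PC matrix, then scale each axis by CDELT.
    x = pc[0][0] * dx + pc[0][1] * dy
    y = pc[1][0] * dx + pc[1][1] * dy
    return (crval[0] + cdelt[0] * x, crval[1] + cdelt[1] * y)

# Hypothetical values: reference pixel (50,50) at RA=180, Dec=30 degrees,
# 0.4 arcsec/pixel resolution, identity rotation.
world = pix_to_world(pixel=(51.0, 51.0), crpix=(50.0, 50.0),
                     crval=(180.0, 30.0),
                     cdelt=(-0.4 / 3600, 0.4 / 3600),
                     pc=[[1.0, 0.0], [0.0, 1.0]])
```

Moving one pixel from the reference pixel thus shifts the world coordinate by one @option{--cdelt} step along each axis.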
-@item --fcol=INT/STR
-The functional form of the profile with one of the values below depending on
the desired profile.
-The column can contain either the numeric codes (for example `@code{1}') or
string characters (for example `@code{sersic}').
-The numeric codes are easier to use in scripts which generate catalogs with
hundreds or thousands of profiles.
+@node MakeProfiles log file, , MakeProfiles output dataset, Invoking astmkprof
+@subsubsection MakeProfiles log file
-The string format can be easier when the catalog is to be written/checked by
hand/eye before running MakeProfiles.
-It is much more readable and provides a level of documentation.
-All Gnuastro's recognized table formats (see @ref{Recognized table formats})
accept string type columns.
-To have string columns in a plain text table/catalog, see @ref{Gnuastro text
table format}.
+Besides the final merged dataset of all the profiles, or the individual
datasets (see @ref{MakeProfiles output dataset}), if the @option{--log} option
is called, MakeProfiles will also create a log file in the current directory
(where you run MakeProfiles).
+See @ref{Common options} for a full description of @option{--log} and other
options that are shared between all Gnuastro programs.
+The values for each column are explained in the first few commented lines of
the log file (starting with the @command{#} character).
+Here is a more complete description.
@itemize
@item
-S@'ersic profile with `@code{sersic}' or `@code{1}'.
+An ID (row number of profile in input catalog).
@item
-Moffat profile with `@code{moffat}' or `@code{2}'.
+The total magnitude of the profile in the output dataset.
+When the profile does not completely overlap with the output dataset, this
will be different from your input magnitude.
@item
-Gaussian profile with `@code{gaussian}' or `@code{3}'.
+The number of pixels (in the oversampled image) that used Monte Carlo
integration rather than the central pixel value, see @ref{Sampling from a function}.
@item
-Point source with `@code{point}' or `@code{4}'.
+The fraction of flux in the Monte Carlo integrated pixels.
@item
-Flat profile with `@code{flat}' or `@code{5}'.
+If an individual image was created, this column will have a value of @code{1},
otherwise it will have a value of @code{0}.
+@end itemize
-@item
-Circumference profile with `@code{circum}' or `@code{6}'.
-A fixed value will be used for all pixels less than or equal to the truncation
radius (@mymath{r_t}) and greater than @mymath{r_t-w} (@mymath{w} is the value
to the @option{--circumwidth}).
-@item
-Radial distance profile with `@code{distance}' or `@code{7}'.
-At the lowest level, each pixel only has an elliptical radial distance given
the profile's shape and orientation (see @ref{Defining an ellipse and
ellipsoid}).
-When this profile is chosen, the pixel's elliptical radial distance from the
profile center is written as its value.
-For this profile, the value in the magnitude column (@option{--mcol}) will be
ignored.
-You can use this for checks or as a first approximation to define your own
higher-level radial function.
-In the latter case, just note that the central values are going to be
incorrect (see @ref{Sampling from a function}).
-@item
-Custom profile with `@code{custom}' or `@code{8}'.
-The values to use for each radial interval should be in the table given to
@option{--customtable}. For more, see @ref{MakeProfiles profile settings}.
-@end itemize
-@item --rcol=STR/INT
-The radius parameter of the profiles.
-Effective radius (@mymath{r_e}) if S@'ersic, FWHM if Moffat or Gaussian.
-@item --ncol=STR/INT
-The S@'ersic index (@mymath{n}) or Moffat @mymath{\beta}.
-@item --pcol=STR/INT
-The position angle (in degrees) of the profiles relative to the first FITS
axis (horizontal when viewed in SAO ds9).
-When building a 3D profile, this is the first Euler angle: first rotation of
the ellipsoid major axis from the first FITS axis (rotating about the third
axis).
-See @ref{Defining an ellipse and ellipsoid}.
-@item --p2col=STR/INT
-Second Euler angle (in degrees) when building a 3D ellipsoid.
-This is the second rotation of the ellipsoid major axis (following
@option{--pcol}) about the (rotated) X axis.
-See @ref{Defining an ellipse and ellipsoid}.
-This column is ignored when building a 2D profile.
-@item --p3col=STR/INT
-Third Euler angle (in degrees) when building a 3D ellipsoid.
-This is the third rotation of the ellipsoid major axis (following
@option{--pcol} and @option{--p2col}) about the (rotated) Z axis.
-See @ref{Defining an ellipse and ellipsoid}.
-This column is ignored when building a 2D profile.
-@item --qcol=STR/INT
-The axis ratio of the profiles (minor axis divided by the major axis in a 2D
ellipse).
-When building a 3D ellipse, this is the ratio of the major axis to the
semi-axis length of the second dimension (in a right-handed coordinate system).
-See @mymath{q1} in @ref{Defining an ellipse and ellipsoid}.
-@item --q2col=STR/INT
-The ratio of the ellipsoid major axis to the third semi-axis length (in a
right-handed coordinate system) of a 3D ellipsoid.
-See @mymath{q1} in @ref{Defining an ellipse and ellipsoid}.
-This column is ignored when building a 2D profile.
-@item --mcol=STR/INT
-The total pixelated magnitude of the profile within the truncation radius, see
@ref{Profile magnitude}.
+@node MakeNoise, , MakeProfiles, Modeling and fittings
+@section MakeNoise
-@item --tcol=STR/INT
-The truncation radius of this profile.
-By default it is in units of the radial parameter of the profile (the value in
the @option{--rcol} of the catalog).
-If @option{--tunitinp} is given, this value is interpreted in units of pixels
(prior to oversampling) irrespective of the profile.
+@cindex Noise
+Real data are always buried in noise; therefore, to finalize a simulation of
real data (for example to test our observational algorithms) it is essential to
add noise to the mock profiles created with MakeProfiles, see
@ref{MakeProfiles}.
+Below, the general principles and concepts that help in understanding how
noise is quantified are discussed.
+MakeNoise's options and arguments are then discussed in @ref{Invoking
astmknoise}.
-@end table
+@menu
+* Noise basics:: Noise concepts and definitions.
+* Invoking astmknoise:: Options and arguments to MakeNoise.
+@end menu
-@node MakeProfiles profile settings, MakeProfiles output dataset, MakeProfiles
catalog, Invoking astmkprof
-@subsubsection MakeProfiles profile settings
-The profile parameters that differ between each created profile are specified
through the columns in the input catalog and described in @ref{MakeProfiles
catalog}.
-Besides those there are general settings for some profiles that don't differ
between one profile and another, they are a property of the general process.
-For example how many random points to use in the monte-carlo integration, this
value is fixed for all the profiles.
-The options described in this section are for configuring such properties.
-@table @option
+@node Noise basics, Invoking astmknoise, MakeNoise, MakeNoise
+@subsection Noise basics
-@item --mode=STR
-Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles
catalog}) in image or WCS coordinates.
-This option thus accepts only two values: @option{img} and @option{wcs}.
-It is mandatory when a catalog is being used as input.
+@cindex Noise
+@cindex Image noise
+Deep astronomical images, like those used in extragalactic studies, seriously
suffer from noise in the data.
+Generally speaking, the sources of noise in an astronomical image are photon
counting noise and instrumental noise, which are discussed in @ref{Photon
counting noise} and @ref{Instrumental noise}.
+This review finishes with @ref{Generating random numbers}, which is a short
introduction on how random numbers are generated.
allow us to obtain a reproducible series of random numbers through setting the
random number generator function and seed value.
+Therefore in this section, we'll also discuss how you can set these two
parameters in Gnuastro's programs (including MakeNoise).
-@item -r
-@itemx --numrandom
-The number of random points used in the central regions of the profile, see
@ref{Sampling from a function}.
+@menu
+* Photon counting noise:: Poisson noise
+* Instrumental noise:: Readout, dark current and other sources.
+* Final noised pixel value:: How the final noised value is calculated.
+* Generating random numbers:: How random numbers are generated.
+@end menu
-@item -e
-@itemx --envseed
-@cindex Seed, Random number generator
-@cindex Random number generator, Seed
-Use the value to the @code{GSL_RNG_SEED} environment variable to generate the
random Monte Carlo sampling distribution, see @ref{Sampling from a function}
and @ref{Generating random numbers}.
+@node Photon counting noise, Instrumental noise, Noise basics, Noise basics
+@subsubsection Photon counting noise
-@item -t FLT
-@itemx --tolerance=FLT
-The tolerance to switch from Monte Carlo integration to the central pixel
value, see @ref{Sampling from a function}.
+@cindex Counting error
+@cindex de Moivre, Abraham
+@cindex Poisson distribution
+@cindex Photon counting noise
+@cindex Poisson, Sim@'eon Denis
+With the very accurate electronics used in today's detectors, photon counting
noise@footnote{In practice, we are actually counting the electrons that are
produced by each photon, not the actual photons.} is the most significant
source of uncertainty in most datasets.
+To understand this noise (error in counting), we need to take a closer look at
how a distribution produced by counting can be modeled as a parametric function.
-@item -p
-@itemx --tunitinp
-The truncation column of the catalog is in units of pixels.
-By default, the truncation column is considered to be in units of the radial
parameters of the profile (@option{--rcol}).
-Read it as `t-unit-in-p' for `truncation unit in pixels'.
+Counting is an inherently discrete operation, which can only produce positive
(including zero) integer outputs.
+For example we can't count @mymath{3.2} or @mymath{-2} of anything.
+We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
+The distribution of values that results from a counting effort is formally
known as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson
distribution}.
+It is associated with Sim@'eon Denis Poisson, because he discussed it while
working on the number of wrongful convictions in court cases in his 1837
book@footnote{[From Wikipedia] Poisson's result was also derived in a previous
study by Abraham de Moivre in 1711.
+Therefore some people suggest it should rightly be called the de Moivre
distribution.}.
-@item -f
-@itemx --mforflatpix
-When making fixed value profiles (flat and circumference, see
`@option{--fcol}'), don't use the value in the column specified by
`@option{--mcol}' as the magnitude.
-Instead use it as the exact value that all the pixels of these profiles should
have.
-This option is irrelevant for other types of profiles.
-This option is very useful for creating masks, or labeled regions in an image.
-Any integer, or floating point value can used in this column with this option,
including @code{NaN} (or `@code{nan}', or `@code{NAN}', case is irrelevant),
and infinities (@code{inf}, @code{-inf}, or @code{+inf}).
+@cindex Probability density function
+Let's take @mymath{\lambda} to represent the expected mean count of something.
+Furthermore, let's take @mymath{k} to represent the result of one particular
counting attempt.
+The probability density function of getting @mymath{k} counts (in each
attempt, given the expected/mean count of @mymath{\lambda}) can be written as:
-For example, with this option if you set the value in the magnitude column
(@option{--mcol}) to @code{NaN}, you can create an elliptical or circular mask
over an image (which can be given as the argument), see @ref{Blank pixels}.
-Another useful application of this option is to create labeled elliptical or
circular apertures in an image.
-To do this, set the value in the magnitude column to the label you want for
this profile.
-This labeled image can then be used in combination with NoiseChisel's output
(see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see
@ref{MakeCatalog}).
+@cindex Poisson distribution
+@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2, 3,
\dots @}}
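As a quick numeric sanity check of this probability function, the standalone Python sketch below evaluates it directly (not Gnuastro code; the value of @mymath{\lambda} and the summation cutoff are arbitrary choices for illustration):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of counting k events when the expected mean is lam."""
    return (lam ** k / factorial(k)) * exp(-lam)

lam = 2.0
ks = range(50)  # cutoff; terms beyond this are negligibly small for lam=2

# The probabilities over all k must sum to 1.
total = sum(poisson_pmf(k, lam) for k in ks)

# Both the mean and the variance of the distribution equal lam.
mean = sum(k * poisson_pmf(k, lam) for k in ks)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in ks)
```

The equality of the mean and variance computed here is the key property used throughout the rest of this section.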
-Alternatively, if you want to mark regions of the image (for example with an
elliptical circumference) and you don't want to use NaN values (as explained
above) for some technical reason, you can get the minimum or maximum value in
the image @footnote{
-The minimum will give a better result, because the maximum can be too high
compared to most pixels in the image, making it harder to display.}
-using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude
column along with this option for all the profiles.
+@cindex Skewed Poisson distribution
+Because the Poisson distribution is only applicable to positive values (note
the factorial operator, which only applies to non-negative integers), naturally
it is very skewed when @mymath{\lambda} is near zero.
+One qualitative way to understand this behavior is that there simply aren't
as many integers smaller than @mymath{\lambda} as there are integers larger
than it.
+Therefore to accommodate all possibilities/counts, it has to be strongly
skewed when @mymath{\lambda} is small.
-Please note that when using MakeProfiles on an already existing image, you
have to set `@option{--oversample=1}'.
-Otherwise all the profiles will be scaled up based on the oversampling scale
in your configuration files (see @ref{Configuration files}) unless you have
accounted for oversampling in your catalog.
+@cindex Compare Poisson and Gaussian
+As @mymath{\lambda} becomes larger, the distribution becomes more and more
symmetric.
+A very useful property of the Poisson distribution is that the mean value is
also its variance.
+When @mymath{\lambda} is very large, say @mymath{\lambda>1000}, then the
@url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian)
distribution}, is an excellent approximation of the Poisson distribution with
mean @mymath{\mu=\lambda} and standard deviation @mymath{\sigma=\sqrt{\lambda}}.
+In other words, a Poisson distribution (with a sufficiently large
@mymath{\lambda}) is simply a Gaussian that only has one free parameter
(@mymath{\mu=\lambda} and @mymath{\sigma=\sqrt{\lambda}}), instead of the two
parameters (independent @mymath{\mu} and @mymath{\sigma}) that it originally
has.
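This convergence can be verified numerically; the standalone Python sketch below compares the Poisson probability with the Gaussian density near the mean for a large @mymath{\lambda} (not Gnuastro code; the sample points and the 1% bound are arbitrary illustrative choices):

```python
from math import exp, lgamma, log, pi, sqrt

lam = 1000.0

def poisson_pmf(k, lam):
    # Log-gamma avoids overflowing the factorial for large k.
    return exp(k * log(lam) - lgamma(k + 1) - lam)

def gaussian_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

# Near the mean, the two values agree to well below one percent.
max_rel = max(
    abs(poisson_pmf(k, lam) - gaussian_pdf(k, lam, sqrt(lam)))
    / poisson_pmf(k, lam)
    for k in (950, 1000, 1050)
)
```

Repeating this with a small @mymath{\lambda} (for example 2) shows the agreement break down, which is exactly the skewness discussed above.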
-@item --mcolisbrightness
-The value given in the ``magnitude column'' (specified by @option{--mcol}, see
@ref{MakeProfiles catalog}) must be interpreted as brightness, not magnitude.
-The zero point magnitude (value to the @option{--zeropoint} option) is ignored
and the given value must have the same units as the input dataset's pixels.
+@cindex Sky value
+@cindex Background flux
+@cindex Undetected objects
+In real situations, the photons/flux from our targets are added to a certain
background flux (observationally, the @emph{Sky} value).
+The Sky value is defined to be the average flux of a region in the dataset
with no targets.
+Its physical origin can be the brightness of the atmosphere (for ground-based
instruments), possible stray light within the imaging instrument, the average
flux of undetected targets, etc.
+The Sky value is thus an ideal definition, because in real datasets, what lies
deep in the noise (far lower than the detection limit) is never
known@footnote{In a real image, a relatively large number of very faint objects
can be fully buried in the noise and never detected.
+These undetected objects will bias the background measurement to slightly
larger values.
+Our best approximation is thus to simply assume they are uniform, and consider
their average effect.
+See Figure 1 (a.1 and a.2) and Section 2.2 in
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
+To account for all of these, the sky value is defined to be the average
count/value of the undetected regions in the image.
+In a mock image/dataset, we have the luxury of setting the background (Sky)
value.
-Recall that the total profile magnitude or brightness that is specified with
in the @option{--mcol} column of the input catalog is not an integration to
infinity, but the actual sum of pixels in the profile (until the desired
truncation radius).
-See @ref{Profile magnitude} for more on this point.
+@cindex Simulating noise
+@cindex Noise simulation
+In each element of the dataset (pixel in an image), the flux is the sum of
contributions from various sources (after convolution by the PSF, see
@ref{PSF}).
+Let's name the convolved sum of possibly overlapping objects
@mymath{I_{nn}}, with @mymath{nn} representing `no noise'.
+For now, let's assume the background (@mymath{B}) is constant and sufficiently
high for the Poisson distribution to be approximated by a Gaussian.
+Then the flux after adding noise is a random value taken from a Gaussian
distribution with the following mean (@mymath{\mu}) and standard deviation
(@mymath{\sigma}):
-@item --magatpeak
-The magnitude column in the catalog (see @ref{MakeProfiles catalog}) will be
used to find the brightness only for the peak profile pixel, not the full
profile.
-Note that this is the flux of the profile's peak pixel in the final output of
MakeProfiles.
-So beware of the oversampling, see @ref{Oversampling}.
+@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{B+I_{nn}}}
-This option can be useful if you want to check a mock profile's total
magnitude at various truncation radii.
-Without this option, no matter what the truncation radius is, the total
magnitude will be the same as that given in the catalog.
-But with this option, the total magnitude will become brighter as you increase
the truncation radius.
+Since this type of noise is inherent in the objects we study, it is usually
measured on the same scale as the astronomical objects, namely the magnitude
system, see @ref{Brightness flux magnitude}.
+It is then internally converted to the flux scale for further processing.
-In sharper profiles, sometimes the accuracy of measuring the peak profile flux
is more than the overall object brightness.
-In such cases, with this option, the final profile will be built such that its
peak has the given magnitude, not the total profile.
+@node Instrumental noise, Final noised pixel value, Photon counting noise,
Noise basics
+@subsubsection Instrumental noise
-@cartouche
-@strong{CAUTION:} If you want to use this option for comparing with
observations, please note that MakeProfiles does not do convolution.
-Unless you have de-convolved your data, your images are convolved with the
instrument and atmospheric PSF, see @ref{PSF}.
-Particularly in sharper profiles, the flux in the peak pixel is strongly
decreased after convolution.
-Also note that in such cases, besides de-convolution, you will have to set
@option{--oversample=1} otherwise after resampling your profile with Warp (see
@ref{Warp}), the peak flux will be different.
-@end cartouche
+@cindex Readout noise
+@cindex Instrumental noise
+@cindex Noise, instrumental
+While taking images with a camera, a dark current is fed to the pixels; the
variation of this dark current over the pixels also adds to the final image
noise.
+Another source of noise is the readout noise, produced by the detector
electronics: specifically, the parts that digitize the voltage produced by the
photo-electrons in the analog-to-digital converter.
+With the current generation of instruments, this source of noise is not as
significant as the noise due to the background Sky discussed in @ref{Photon
counting noise}.
-@item --customtable FITS/TXT
-The filename of the table to use in the custom profiles (see description of
@option{--fcol} in @ref{MakeProfiles catalog}.
-This can be a plain-text table, or FITS table, see @ref{Tables}, if its a FITS
table, you can use @option{--customtablehdu} to specify which HDU should be
used (described below).
+Let @mymath{C} represent the combined standard deviation of all these
instrumental sources of noise.
+When only this source of noise is present, the noised pixel value would be a
random value chosen from a Gaussian distribution with
-A custom profile can have any value you want for a given radial profile.
-Each interval is defined by its minimum (inclusive) and maximum (exclusive)
radius, when a pixel falls within this radius the value specified for that
interval will be used.
-If a pixel is not in the given intervals, a value of 0 will be used for it.
+@dispmath{\mu=I_{nn}, \quad \sigma=\sqrt{C^2+I_{nn}}}
-The table should have 3 columns as shown below.
-If the intervals are contiguous (the maximum value of the previous interval is
equal to the minimum value of an interval) and the intervals all have the same
size (difference between minimum and maximum values) the creation of these
profiles will be fast.
-However, if the intervals are not sorted and contiguous, Makeprofiles will
parse the intervals from the top of the table and use the first interval that
contains the pixel center.
+@cindex ADU
+@cindex Gain
+@cindex Counts
+This type of noise is independent of the signal in the dataset; it is
determined only by the instrument.
+So the flux scale (and not magnitude scale) is most commonly used for this
type of noise.
+In practice, this value is usually reported in analog-to-digital units or
ADUs, not flux or electron counts.
+The gain value of the device can be used to convert between these two, see
@ref{Brightness flux magnitude}.
-@table @asis
-@item Column 1:
-The interval's minimum radius.
-@item Column 2:
-The interval's maximum radius.
-@item Column 3:
-The value to be used for pixels within the given interval.
-@end table
+@node Final noised pixel value, Generating random numbers, Instrumental noise,
Noise basics
+@subsubsection Final noised pixel value
+Based on the discussions in @ref{Photon counting noise} and @ref{Instrumental
noise}, depending on the values you specify for @mymath{B} and @mymath{C} from
the above, the final noised value for each pixel is a random value chosen from
a Gaussian distribution with
-For example let's assume you have the radial profile below in a file called
@file{radial.txt}.
-The first column is the larger interval radius and the second column is the
value in that interval:
+@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{C^2+B+I_{nn}}}
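For readers who want to experiment with this model directly, the draw for a single pixel can be sketched in Python (a toy illustration only; Gnuastro itself uses the GNU Scientific Library for random numbers, and the names below are ours, not Gnuastro's):

```python
import math
import random

def noised_pixel(i_nn, background, instrumental, rng=random):
    """One draw of the final noised pixel value described above:
    a Gaussian sample with mean B + I_nn and standard deviation
    sqrt(C^2 + B + I_nn), where B is the background, C the
    instrumental noise, and I_nn the noiseless pixel value."""
    mu = background + i_nn
    sigma = math.sqrt(instrumental**2 + background + i_nn)
    return rng.gauss(mu, sigma)
```

For example, with @mymath{B=10}, @mymath{C=5} and @mymath{I_{nn}=100}, repeated draws will scatter around 110 with a standard deviation of @mymath{\sqrt{135}\approx11.6}.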
+
+
+
+@node Generating random numbers, , Final noised pixel value, Noise basics
+@subsubsection Generating random numbers
+
+@cindex Random numbers
+@cindex Numbers, random
+As discussed above, to generate noise we need to make random samples of a
particular distribution.
+So it is important to understand some general concepts regarding the
generation of random numbers.
+For a very complete and nice introduction we strongly advise reading Donald
Knuth's ``The art of computer programming'', volume 2, chapter
3@footnote{Knuth, Donald. 1998.
+The art of computer programming. Addison--Wesley. ISBN 0-201-89684-2 }.
+Quoting from the GNU Scientific Library manual, ``If you don't own it, you
should stop reading right now, run to the nearest bookstore, and buy
it''@footnote{For students, running to the library might be more affordable!}!
+
+@cindex Pseudo-random numbers
+@cindex Numbers, pseudo-random
+Using only software, we can only produce what is called a pseudo-random
sequence of numbers.
+A true random number generator is a hardware device (let's assume we have made
sure it has no systematic biases), for example throwing dice or flipping coins
(methods that have been used since ancient times).
+More modern hardware methods use atmospheric noise, thermal noise or other
types of external electromagnetic or quantum phenomena.
+All pseudo-random number generators (software) require a seed to be the basis
of the generation.
+The advantage of having a seed is that if you specify the same seed for
multiple runs, you will get an identical sequence of random numbers which
allows you to reproduce the same final noised image.
+
+@cindex Environment variables
+@cindex GNU Scientific Library
+The programs in GNU Astronomy Utilities (for example MakeNoise or
MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
+GSL allows the user to set the random number generator through environment
variables, see @ref{Installation directory} for an introduction to environment
variables.
+In the chapter titled ``Random Number Generation'' they have fully explained
the various random number generators that are available (there are a lot of
them!).
+Through the two environment variables @code{GSL_RNG_TYPE} and
@code{GSL_RNG_SEED} you can specify the generator and its seed respectively.
+
+@cindex Seed, Random number generator
+@cindex Random number generator, Seed
+If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its default
random number generator type.
+The default type is sufficient for most general applications.
+If no value is given for the @code{GSL_RNG_SEED} environment variable and you
have asked Gnuastro to read the seed from the environment (through the
@option{--envseed} option), then GSL will use the default value of each
generator to give identical outputs.
+If you don't explicitly tell Gnuastro programs to read the seed value from the
environment variable, then they will use the system time (accurate to within a
microsecond) to generate (apparently random) seeds.
+In this manner, every time you run the program, you will get a different
random number distribution.
+
+There are two ways you can specify values for these environment variables.
+You can call them on the same command-line for example:
@example
-1 100
-2 90
-3 50
-4 10
-5 2
-6 0.1
-7 0.05
+$ GSL_RNG_TYPE="taus" GSL_RNG_SEED=345 astmknoise input.fits
@end example
@noindent
-You can construct the table to give to @option{--customtable} with either of
the commands below: the first one with Gnuastro's @ref{Column arithmetic} which
can also work on FITS tables, and the second one with an AWK command that only
works on plain-text tables.
+In this manner the values will only be used for this particular execution of
MakeNoise.
+Alternatively, you can define them for the full period of your terminal
session or script length, using the shell's @command{export} command with the
two separate commands below (for a script remove the @code{$} signs):
@example
-asttable radial.fits -c'arith $1 1 -' -c1,2 -ocustom.fits
-awk '@{print $1-1, $1, $2@}' radial.txt > custom.txt
+$ export GSL_RNG_TYPE="taus"
+$ export GSL_RNG_SEED=345
@end example
+@cindex Startup scripts
+@cindex @file{.bashrc}
@noindent
-In case the intervals are different from 1 (for example 0.5), change them
respectively: for Gnuastro's table change @code{$1 1 -} to @code{$1 0.5 -} and
for AWK change @code{$1-1} to @code{$1-0.5}.
+The subsequent programs which use GSL's random number generators will
henceforth use these values in this session of the terminal you are running or
while executing this script.
+In case you want to set fixed values for these parameters every time you use
the GSL random number generator, you can add these two lines to your
@file{.bashrc} startup script@footnote{Don't forget that if you are going to
give your scripts (that use the GSL random number generator) to others you have
to make sure you also tell them to set these environment variables separately.
+So for scripts, it is best to keep all such variable definitions within the
script, even if they are within your @file{.bashrc}.}, see @ref{Installation
directory}.
+@cartouche
+@noindent
+@strong{NOTE:} If the two environment variables @code{GSL_RNG_TYPE} and
@code{GSL_RNG_SEED} are defined, GSL will report them by default, even if you
don't use the @option{--envseed} option.
+For example you can see the top few lines of the output of MakeProfiles:
-@item --customtablehdu INT/STR
-The HDU/extension in the FITS file given to @option{--customtable}.
+@example
+$ export GSL_RNG_TYPE="taus"
+$ export GSL_RNG_SEED=345
+$ astmkprof -s1 --kernel=gaussian,2,5 --envseed
+GSL_RNG_TYPE=taus
+GSL_RNG_SEED=345
+MakeProfiles A.B started on DDD MMM DD HH:MM:SS YYYY
+ - Building one gaussian kernel
+ - Random number generator (RNG) type: ranlxs1
+ - RNG seed for all profiles: 345
+ ---- ./kernel.fits created.
+MakeProfiles finished in 0.111271 seconds
+@end example
-@item -X INT,INT
-@itemx --shift=INT,INT
-Shift all the profiles and enlarge the image along each dimension.
-To better understand this option, please see @mymath{n} in @ref{If convolving
afterwards}.
-This is useful when you want to convolve the image afterwards.
-If you are using an external PSF, be sure to oversample it to the same scale
used for creating the mock images.
-If a background image is specified, any possible value to this option is
ignored.
+@noindent
+@cindex Seed, Random number generator
+@cindex Random number generator, Seed
+The first two output lines (showing the names of the environment variables)
are printed by GSL before MakeProfiles actually starts generating random
numbers.
+The Gnuastro programs will report the values they use independently; you
should check them for the final values used.
+For example if @option{--envseed} is not given, @code{GSL_RNG_SEED} will not
be used and the last line shown above will not be printed.
+In the case of MakeProfiles, each profile will get its own seed value.
+@end cartouche
-@item -c
-@itemx --prepforconv
-Shift all the profiles and enlarge the image based on half the width of the
first Moffat or Gaussian profile in the catalog, considering any possible
oversampling see @ref{If convolving afterwards}.
-@option{--prepforconv} is only checked and possibly activated if
@option{--xshift} and @option{--yshift} are both zero (after reading the
command-line and configuration files).
-If a background image is specified, any possible value to this option is
ignored.
-@item -z FLT
-@itemx --zeropoint=FLT
-The zero point magnitude of the input.
-For more on the zero point magnitude, see @ref{Brightness flux magnitude}.
+@node Invoking astmknoise, , Noise basics, MakeNoise
+@subsection Invoking MakeNoise
-@item -w FLT
-@itemx --circumwidth=FLT
-The width of the circumference if the profile is to be an elliptical
circumference or annulus.
-See the explanations for this type of profile in @option{--fcol}.
+MakeNoise will add noise to an existing image.
+The executable name is @file{astmknoise} with the following general template
-@item -R
-@itemx --replace
-Do not add the pixels of each profile over the background, or other profiles.
-But replace the values.
+@example
+$ astmknoise [OPTION ...] InputImage.fits
+@end example
-By default, when two profiles overlap, the final pixel value is the sum of all
the profiles that overlap on that pixel.
-This is the expected situation when dealing with physical object profiles like
galaxies or stars/PSF.
-However, when MakeProfiles is used to build integer labeled images (for
example in @ref{Aperture photometry}), this is not the expected situation: the
sum of two labels will be a new label.
-With this option, the pixels are not added but the largest (maximum) value
over that pixel is used.
-Because the maximum operator is independent of the order of values, the output
is also thread-safe.
+@noindent
+One line examples:
-@end table
+@example
+## Add noise with a standard deviation of 100 to image.
+## (this is independent of the pixel value: not Poisson noise)
+$ astmknoise --sigma=100 image.fits
-@node MakeProfiles output dataset, MakeProfiles log file, MakeProfiles profile
settings, Invoking astmkprof
-@subsubsection MakeProfiles output dataset
-MakeProfiles takes an input catalog and uses basic properties that are defined
there to build a dataset, for example a 2D image containing the profiles in the
catalog.
-In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile settings}, the
catalog and profile settings were discussed.
-The options of this section allow you to configure the output dataset (or the
canvas that will host the built profiles).
+## Add noise to input image assuming a background magnitude (with
+## zero point magnitude of 0) and a certain instrumental noise:
+$ astmknoise --background=-10 -z0 --instrumental=20 mockimage.fits
+@end example
+
+@noindent
+If actual processing is to be done, the input image is a mandatory argument.
+The full list of options common to all the programs in Gnuastro can be seen in
@ref{Common options}.
+The type (see @ref{Numeric data types}) of the output can be specified with
the @option{--type} option, see @ref{Input output options}.
+The header of the output FITS file keeps all the parameters that were
influential in making it.
+This is done for future reproducibility.
@table @option
-@item -k FITS
-@itemx --background=FITS
-A background image FITS file to build the profiles on.
-The extension that contains the image should be specified with the
@option{--backhdu} option, see below.
-When a background image is specified, it will be used to derive all the
information about the output image.
-Hence, the following options will be ignored: @option{--mergedsize},
@option{--oversample}, @option{--crpix}, @option{--crval} (generally, all other
WCS related parameters) and the output's data type (see @option{--type} in
@ref{Input output options}).
+@item -b FLT
+@itemx --background=FLT
+The background value (per pixel) that will be added to each pixel value
(internally) to estimate Poisson noise, see @ref{Photon counting noise}.
+By default the units of this value are assumed to be in magnitudes, hence a
@option{--zeropoint} is also necessary.
+But if the background is in units of brightness, you need to add
@option{--bgisbrightness}, see @ref{Brightness flux magnitude}.
-The image will act like a canvas to build the profiles on: profile pixel
values will be summed with the background image pixel values.
-With the @option{--replace} option you can disable this behavior and replace
the profile pixels with the background pixels.
-If you want to use all the image information above, except for the pixel
values (you want to have a blank canvas to build the profiles on, based on an
input image), you can call @option{--clearcanvas}, to set all the input image's
pixels to zero before starting to build the profiles over it (this is done in
memory after reading the input, so nothing will happen to your input file).
+Internally, the value given to this option will be converted to brightness
(@mymath{b}; when @option{--bgisbrightness} is called, the value will be used
directly).
+Assuming the pixel value is @mymath{p}, the random value for that pixel will
be taken from a Gaussian distribution with mean of @mymath{p+b} and standard
deviation of @mymath{\sqrt{p+b}}.
+With this option, the noise will therefore be dependent on the pixel values:
according to the Poisson noise model, as the pixel value becomes larger, its
noise will also become larger.
+This is thus a realistic way to model noise, see @ref{Photon counting noise}.
-@item -B STR/INT
-@itemx --backhdu=STR/INT
-The header data unit (HDU) of the file given to @option{--background}.
+@item -B
+@itemx --bgisbrightness
+The value given to @option{--background} should be interpreted as brightness,
not as a magnitude.
-@item -C
-@itemx --clearcanvas
-When an input image is specified (with the @option{--background} option), set
all its pixels to 0.0 immediately after reading it into memory.
-Effectively, this will allow you to use all its properties (described under
the @option{--background} option), without having to worry about the pixel
values.
+@item -z FLT
+@itemx --zeropoint=FLT
+The zero point magnitude used to convert the value of @option{--background}
(in units of magnitude) to flux, see @ref{Brightness flux magnitude}.
-@option{--clearcanvas} can come in handy in many situations, for example if
you want to create a labeled image (segmentation map) for creating a catalog
(see @ref{MakeCatalog}).
-In other cases, you might have modeled the objects in an image and want to
create them on the same frame, but without the original pixel values.
+@item -i FLT
+@itemx --instrumental=FLT
+The instrumental noise which is in units of flux, see @ref{Instrumental noise}.
-@item -E STR/INT,FLT[,FLT,[...]]
-@itemx --kernel=STR/INT,FLT[,FLT,[...]]
-Only build one kernel profile with the parameters given as the values to this
option.
-The different values must be separated by a comma (@key{,}).
-The first value identifies the radial function of the profile, either through
a string or through a number (see description of @option{--fcol} in
@ref{MakeProfiles catalog}).
-Each radial profile needs a different total number of parameters: S@'ersic and
Moffat functions need 3 parameters: radial, S@'ersic index or Moffat
@mymath{\beta}, and truncation radius.
-The Gaussian function needs two parameters: radial and truncation radius.
-The point function doesn't need any parameters and flat and circumference
profiles just need one parameter (truncation radius).
+@item -s FLT
+@itemx --sigma=FLT
+The total noise sigma in the same units as the pixel values.
+With this option, the @option{--background}, @option{--zeropoint} and
@option{--instrumental} will be ignored.
+With this option, the noise will be independent of the pixel values (which is
not realistic, see @ref{Photon counting noise}).
+Hence it is only useful if you are working on low surface brightness regions
where the change in pixel value (and thus real noise) is insignificant.
-The PSF or kernel is a unique (and highly constrained) type of profile: the
sum of its pixels must be one, its center must be the center of the central
pixel (in an image with an odd number of pixels on each side), and commonly it
is circular, so its axis ratio and position angle are one and zero respectively.
-Kernels are commonly necessary for various data analysis and data manipulation
steps (for example see @ref{Convolve} and @ref{NoiseChisel}).
-Because of this it is inconvenient to define a catalog with one row and many
zero valued columns (for all the non-necessary parameters).
-Hence, with this option, it is possible to create a kernel with MakeProfiles
without the need to create a catalog.
-Here are some examples:
+Generally, @strong{usage of this option is discouraged} unless you understand
the risks of not simulating real noise.
+This is because with this option, you will not get Poisson noise (the common
noise model for astronomical imaging), where the noise varies based on pixel
value.
+Use @option{--background} for adding Poisson noise.
-@table @option
-@item --kernel=moffat,3,2.8,5
-A Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is truncated
at 5 times the FWHM.
+@item -e
+@itemx --envseed
+@cindex Seed, Random number generator
+@cindex Random number generator, Seed
+Use the @code{GSL_RNG_SEED} environment variable for the seed used in the
random number generator, see @ref{Generating random numbers}.
+With this option, the output image noise is always going to be identical (or
reproducible).
-@item --kernel=gaussian,2,3
-A circular Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
-the FWHM.
-@end table
+@item -d
+@itemx --doubletype
+Save the output in the double precision floating point format that was used
internally.
+This option will be most useful if the input images were of integer types.
-This option may also be used to create a 3D kernel.
-To do that, two small modifications are necessary: add a @code{-3d} (or
@code{-3D}) to the profile name (for example @code{moffat-3d}) and add a number
(axis-ratio along the third dimension) to the end of the parameters for all
profiles except @code{point}.
-The main reason behind providing an axis ratio in the third dimension is that
in 3D astronomical datasets, commonly the third dimension doesn't have the same
nature (units/sampling) as the first and second.
+@end table
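To make the interplay of @option{--background} and @option{--zeropoint} concrete, the magnitude-to-brightness conversion can be sketched as below; this follows the standard definition (see @ref{Brightness flux magnitude}), but the function name is ours, not part of Gnuastro:

```python
def mag_to_brightness(mag, zeropoint):
    """Invert m = -2.5*log10(b) + zeropoint to recover the
    brightness b that would serve as the Poisson mean."""
    return 10.0 ** ((zeropoint - mag) / 2.5)
```

For example, a background of magnitude 22 with a zero point of 25 corresponds to a brightness of @mymath{10^{3/2.5}\approx15.85}, and the one-line example above (@option{--background=-10 -z0}) corresponds to @mymath{10^4}.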
-For example in IFU datacubes, the first and second dimensions are
angular positions (like RA and Dec) but the third is in units of Angstroms for
wavelength.
-Because of this different nature (which also affects the processing), it may be
necessary for the kernel to have a different extent in that direction.
-If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel will
be a spheroid.
-If it is smaller than @mymath{1.0}, the kernel will be button-shaped: extended
less in the third dimension.
-However, when it is larger than @mymath{1.0}, the kernel will be bullet-shaped:
extended more in the third dimension.
-In the latter case, the radial parameter will correspond to the length along
the 3rd dimension.
-For example, let's have a look at the two examples above but in 3D:
-@table @option
-@item --kernel=moffat-3d,3,2.8,5,0.5
-An ellipsoid Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is
truncated at 5 times the FWHM.
-The ellipsoid is circular in the first two dimensions, but in the third
dimension its extent is half the first two.
-@item --kernel=gaussian-3d,2,3,1
-A spherical Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
-the FWHM.
-@end table
-Of course, if a specific kernel is needed that doesn't fit the constraints
imposed by this option, you can always use a catalog to define any arbitrary
kernel.
-Just call the @option{--individual} and @option{--nomerged} options to make
sure that it is built as a separate file (individually) and no ``merged'' image
of the input profiles is created.
-@item -x INT,INT
-@itemx --mergedsize=INT,INT
-The number of pixels along each axis of the output, in FITS order.
-This is before over-sampling.
-For example if you call MakeProfiles with @option{--mergedsize=100,150
--oversample=5} (assuming no shift due to later convolution), then the final
image size along the first axis will be 500 by 750 pixels.
-Fractions are acceptable as values for each dimension, however, they must
reduce to an integer, so @option{--mergedsize=150/3,300/3} is acceptable but
@option{--mergedsize=150/4,300/4} is not.
-When viewing a FITS image in DS9, the first FITS dimension is in the
horizontal direction and the second is vertical.
-As an example, the image created with the example above will have 500 pixels
horizontally and 750 pixels vertically.
-If a background image is specified, this option is ignored.
-@item -s INT
-@itemx --oversample=INT
-The scale to over-sample the profiles and final image.
-If it is not an odd number, it will be incremented by one, see
@ref{Oversampling}.
-Note that this @option{--oversample} will remain active even if an input image
is specified.
-If your input catalog is based on the background image, be sure to set
@option{--oversample=1}.
-@item --psfinimg
-Build the possibly existing PSF profiles (Moffat or Gaussian) in the catalog
into the final image.
-By default they are built separately so you can convolve your images with
them, thus their magnitude and positions are ignored.
-With this option, they will be built in the final image like every other
galaxy profile.
-To have a final PSF in your image, make a point profile where you want the PSF
and after convolution it will be the PSF.
-@item -i
-@itemx --individual
-@cindex Individual profiles
-@cindex Build individual profiles
-If this option is called, each profile is created in a separate FITS file
within the same directory as the output, with the row number of the profile
(starting from zero) in the name.
-The file for each row's profile will be in the same directory as the final
combined image of all the profiles and will have the final image's name as a
suffix.
-So for example if the final combined image is named
@file{./out/fromcatalog.fits}, then the first profile that will be created with
this option will be named @file{./out/0_fromcatalog.fits}.
-Since each image only has one full profile out to the truncation radius, the
profile is centered; so only the sub-pixel position of the profile center is
important for the outputs of this option.
-The output will have an odd number of pixels.
-If there is no oversampling, the central pixel will contain the profile center.
-If the value to @option{--oversample} is larger than unity, then the profile
center is on any of the central @option{--oversample}'d pixels depending on the
fractional value of the profile center.
-If the fractional value is larger than half, it is on the bottom half of the
central region.
-This is due to the FITS definition of a real number position: The center of a
pixel has fractional value @mymath{0.00} so each pixel contains these
fractions: .5 -- .75 -- .00 (pixel center) -- .25 -- .5.
-@item -m
-@itemx --nomerged
-Don't make a merged image.
-By default after making the profiles, they are added to a final image with
side lengths specified by @option{--mergedsize} if they overlap with it.
-@end table
-@noindent
-The options below can be used to define the world coordinate system (WCS)
properties of the MakeProfiles outputs.
-The option names are deliberately chosen to be the same as the FITS standard
WCS keywords.
-See Section 8 of @url{https://doi.org/10.1051/0004-6361/201015362, Pence et al
[2010]} for a short introduction to WCS in the FITS standard@footnote{The world
coordinate standard in FITS is a very beautiful and powerful concept to
link/associate datasets with the outside world (other datasets).
-The description in the FITS standard (link above) only touches the tip of the
iceberg.
-To learn more please see @url{https://doi.org/10.1051/0004-6361:20021326,
Greisen and Calabretta [2002]},
@url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen
[2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen et al.
[2006]}, and
@url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf, Calabretta
et al.}}.
-If you look into the headers of a FITS image with WCS for example you will see
all these names but in uppercase and with numbers to represent the dimensions,
for example @code{CRPIX1} and @code{PC2_1}.
-You can see the FITS headers with Gnuastro's @ref{Fits} program using a
command like this: @command{$ astfits -p image.fits}.
+@node High-level calculations, Installed scripts, Modeling and fittings, Top
+@chapter High-level calculations
-If the values given to any of these options do not correspond to the number
of dimensions in the output dataset, then no WCS information will be added.
+After the reduction of raw data (for example with the programs in @ref{Data
manipulation}) you will have reduced images/data ready for processing/analyzing
(for example with the programs in @ref{Data analysis}).
+But the processed/analyzed data (or catalogs) are still not enough to derive
any scientific result.
+Even higher-level analysis is still needed to convert the observed magnitudes,
sizes or volumes into physical quantities that we associate with each catalog
entry or detected object; this is the purpose of the tools in this section.
-@table @option
-@item --crpix=FLT,FLT
-The pixel coordinates of the WCS reference point.
-Fractions are acceptable for the values of this option.
-@item --crval=FLT,FLT
-The WCS coordinates of the Reference point.
-Fractions are acceptable for the values of this option.
-@item --cdelt=FLT,FLT
-The resolution (size of one data-unit or pixel in WCS units) of the
non-oversampled dataset.
-Fractions are acceptable for the values of this option.
-@item --pc=FLT,FLT,FLT,FLT
-The PC matrix of the WCS rotation, see the FITS standard (link above) to
better understand the PC matrix.
+@menu
+* CosmicCalculator:: Calculate cosmological variables
+@end menu
-@item --cunit=STR,STR
-The units of each WCS axis, for example @code{deg}.
-Note that these values are part of the FITS standard (link above).
-MakeProfiles won't complain if you use non-standard values, but later usage of
them might cause trouble.
+@node CosmicCalculator, , High-level calculations, High-level calculations
+@section CosmicCalculator
-@item --ctype=STR,STR
-The type of each WCS axis, for example @code{RA---TAN} and @code{DEC--TAN}.
-Note that these values are part of the FITS standard (link above).
-MakeProfiles won't complain if you use non-standard values, but later usage of
them might cause trouble.
+To derive higher-level information regarding our sources in extra-galactic
astronomy, cosmological calculations are necessary.
+In Gnuastro, CosmicCalculator is in charge of such calculations.
+Before discussing how CosmicCalculator is called and operates (in
@ref{Invoking astcosmiccal}), it is important to provide a rough but mostly
self-sufficient review of the basics and the equations used in the analysis.
+In @ref{Distance on a 2D curved space} the basic idea of understanding
distances in a curved and expanding 2D universe (which we can visualize) is
reviewed.
+Having solidified the concepts there, in @ref{Extending distance concepts to
3D}, the formalism is extended to the 3D universe we are trying to study in our
research.
-@end table
+The focus here is obtaining a physical insight into these equations (mainly
for use in real observational studies).
+There are many books thoroughly deriving and proving all the equations with
all possible initial conditions and assumptions for any abstract universe;
interested readers can study those books.
-@node MakeProfiles log file, , MakeProfiles output dataset, Invoking astmkprof
-@subsubsection MakeProfiles log file
+@menu
+* Distance on a 2D curved space:: Distances in 2D for simplicity
+* Extending distance concepts to 3D:: Going to 3D (our real universe).
+* Invoking astcosmiccal:: How to run CosmicCalculator
+@end menu
-Besides the final merged dataset of all the profiles, or the individual
datasets (see @ref{MakeProfiles output dataset}), if the @option{--log} option
is called MakeProfiles will also create a log file in the current directory
(where you run MakeProfiles).
-See @ref{Common options} for a full description of @option{--log} and other
options that are shared between all Gnuastro programs.
-The values for each column are explained in the first few commented lines of
the log file (starting with @command{#} character).
-Here is a more complete description.
+@node Distance on a 2D curved space, Extending distance concepts to 3D,
CosmicCalculator, CosmicCalculator
+@subsection Distance on a 2D curved space
-@itemize
-@item
-An ID (row number of profile in input catalog).
+The observations to date (for example the Planck 2015 results), have not
measured@footnote{The observations are interpreted under the assumption of
uniform curvature.
+For a relativistic alternative to dark energy (and maybe also some part of
dark matter), non-uniform curvature may even be more critical, but that is
beyond the scope of this brief explanation.} the presence of significant
curvature in the universe.
+However, to be generic (and allow its measurement if it does in fact exist), it
is very important to create a framework that allows non-zero uniform curvature.
+That said, this section is not intended to be a fully thorough and
mathematically complete derivation of these concepts.
+There are many references available for such reviews that go deep into the
abstract mathematical proofs.
+The emphasis here is on visualization of the concepts for a beginner.
-@item
-The total magnitude of the profile in the output dataset.
-When the profile does not completely overlap with the output dataset, this
will be different from your input magnitude.
+As 3D beings, it is difficult for us to mentally create (visualize) a picture
of the curvature of a 3D volume.
+Hence, here we will assume a 2D surface/space and discuss distances on that 2D
surface when it is flat and when it is curved.
+Once the concepts have been created/visualized here, we will extend them, in
@ref{Extending distance concepts to 3D}, to a real 3D spatial @emph{slice} of
the Universe we live in and hope to study.
-@item
-The number of pixels (in the oversampled image) which used Monte Carlo
integration and not the central pixel value, see @ref{Sampling from a function}.
+To make this more tangible (and discuss actively, from an observer's point of
view), let's assume there's an imaginary 2D creature living on the 2D space
(which @emph{might} be curved in 3D).
+Here, we will be working with this creature in its efforts to analyze
distances in its 2D universe.
+The start of the analysis might seem too mundane, but since it is difficult to
imagine a 3D curved space, it is important to review all the very basic
concepts thoroughly for an easy transition to a universe that is more difficult
to visualize (a curved 3D space embedded in 4D).
-@item
-The fraction of flux in the Monte Carlo integrated pixels.
+To start, let's assume a static (not expanding or shrinking), flat 2D surface
similar to @ref{flatplane} and that the 2D creature is observing its universe
from point @mymath{A}.
+One of the most basic ways to parameterize this space is through the Cartesian
coordinates (@mymath{x}, @mymath{y}).
+In @ref{flatplane}, the basic axes of these two coordinates are plotted.
+An infinitesimal change in the direction of each axis is written as
@mymath{dx} and @mymath{dy}.
+For each point, the infinitesimal changes are parallel with the respective
axes and are not shown for clarity.
+Another very useful way of parameterizing this space is through polar
coordinates.
+For each point, we define a radius (@mymath{r}) and angle (@mymath{\phi}) from
a fixed (but arbitrary) reference axis.
+In @ref{flatplane} the infinitesimal changes for each polar coordinate are
plotted for a random point and a dashed circle is shown for all points with the
same radius.
-@item
-If an individual image was created, this column will have a value of @code{1},
otherwise it will have a value of @code{0}.
-@end itemize
+@float Figure,flatplane
+@center@image{gnuastro-figures/flatplane, 10cm, , }
+@caption{Two dimensional Cartesian and polar coordinates on a flat
+plane.}
+@end float
+Assuming an object is placed at a certain position, which can be parameterized
as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general infinitesimal change in its
position will place it in the coordinates @mymath{(x+dx,y+dy)} and
@mymath{(r+dr,\phi+d\phi)}.
+The distance (on the flat 2D surface) that is covered by this infinitesimal
change in the static universe (@mymath{ds_s}, the subscript signifies the
static nature of this universe) can be written as:
+@dispmath{ds_s^2=dx^2+dy^2=dr^2+r^2d\phi^2}
+The main question is this: how can the 2D creature incorporate the (possible)
curvature in its universe when it's calculating distances? The universe that it
lives in might equally be a curved surface like @ref{sphereandplane}.
+Answering this question, but for a 3D being (us), is the whole purpose of
this discussion.
+Here, we want to give the 2D creature (and later, ourselves) the tools to
measure distances if the space (that hosts the objects) is curved.
+@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as the
curved 2D plane for simplicity.
+The 2D plane is tangent to the spherical shell and only touches it at
@mymath{A}.
+This idea will be generalized later.
+The first step in measuring the distance in a curved space is to imagine a
third dimension along the @mymath{z} axis as shown in @ref{sphereandplane}.
+For simplicity, the @mymath{z} axis is assumed to pass through the center of
the spherical shell.
+Our imaginary 2D creature cannot visualize the third dimension or a curved 2D
surface within it, so the remainder of this discussion is purely abstract for
it (similar to us having difficulty in visualizing a 3D curved space in 4D).
+But since we are 3D creatures, we have the advantage of visualizing the
following steps.
+Fortunately the 2D creature is already familiar with our mathematical
constructs, so it can follow our reasoning.
+With the third axis added, a generic infinitesimal change over @emph{the full}
3D space corresponds to the distance:
+@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2d\phi^2+dz^2.}
+@float Figure,sphereandplane
+@center@image{gnuastro-figures/sphereandplane, 10cm, , }
+@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light
gray) tangent to it at point @mymath{A}.}
+@end float
+It is very important to recognize that this change of distance is for
@emph{any} point in the 3D space, not just those changes that occur on the 2D
spherical shell of @ref{sphereandplane}.
+Recall that our 2D friend can only make measurements on the 2D surface, not the
full 3D space.
+So we have to constrain this general change to changes that remain on the 2D
spherical shell.
+To do that, let's look at the arbitrary point @mymath{P} on the 2D spherical
shell.
+Its image (@mymath{P'}) on the flat plane is also displayed. From the dark
gray triangle, we see that
+@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}These relations
allow the 2D creature to find the value of @mymath{z} (an abstract dimension
for it) as a function of @mymath{r} (distance on a flat 2D plane, which it can
visualize) and thus eliminate @mymath{z}.
+From @mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{z^2-2Rz+r^2=0} and
solving for @mymath{z}, we find:
-@node MakeNoise, , MakeProfiles, Modeling and fittings
-@section MakeNoise
+@dispmath{z=R\left(1\pm\sqrt{1-{r^2\over R^2}}\right).}
-@cindex Noise
-Real data are always buried in noise, therefore to finalize a simulation of
real data (for example to test our observational algorithms) it is essential to
add noise to the mock profiles created with MakeProfiles, see
@ref{MakeProfiles}.
-Below, the general principles and concepts to help understand how noise is
quantified is discussed.
-MakeNoise options and argument are then discussed in @ref{Invoking astmknoise}.
+The @mymath{\pm} can be understood from @ref{sphereandplane}: For each
@mymath{r}, there are two points on the sphere, one in the upper hemisphere and
one in the lower hemisphere.
+An infinitesimal change in @mymath{r}, will create the following infinitesimal
change in @mymath{z}:
-@menu
-* Noise basics:: Noise concepts and definitions.
-* Invoking astmknoise:: Options and arguments to MakeNoise.
-@end menu
+@dispmath{dz={\mp r\over R}\left(1\over
+\sqrt{1-{r^2/R^2}}\right)dr.}Substituting this relation for @mymath{dz} in
the @mymath{ds_s^2} equation above (the sign is irrelevant, since @mymath{dz}
is squared), we get:
+@dispmath{ds_s^2={dr^2\over 1-r^2/R^2}+r^2d\phi^2.}
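This substitution is easy to verify numerically. The short Python sketch below (only an illustration for this manual's derivation, not part of Gnuastro's source) takes @mymath{z(r)} on the shell, estimates @mymath{dz/dr} with a central finite difference, and confirms that @mymath{dr^2+dz^2} reduces to @mymath{dr^2/(1-r^2/R^2)}:

```python
import math

def ds2_coefficient(r, R, h=1e-6):
    """Coefficient of dr^2 in dr^2 + dz^2 on the spherical shell, using
    z(r) = R(1 - sqrt(1 - r^2/R^2)) (the solution near point A) and a
    central finite-difference estimate of dz/dr."""
    z = lambda x: R * (1.0 - math.sqrt(1.0 - (x / R)**2))
    dz_dr = (z(r + h) - z(r - h)) / (2.0 * h)
    return 1.0 + dz_dr**2

# The coefficient should equal 1/(1 - r^2/R^2) for any r < R:
R = 2.0
for r in (0.3, 0.9, 1.5):
    exact = 1.0 / (1.0 - (r / R)**2)
    assert abs(ds2_coefficient(r, R) - exact) < 1e-5
```

The same check passes for the other sign of the @mymath{z} solution, since only @mymath{dz^2} enters the distance.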
+The derivation above was done for a spherical shell of radius @mymath{R} as a
curved 2D surface.
+To generalize it to any surface, we can define @mymath{K=1/R^2} as the
curvature parameter.
+Then the general infinitesimal change in a static universe can be written as:
-@node Noise basics, Invoking astmknoise, MakeNoise, MakeNoise
-@subsection Noise basics
+@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2d\phi^2.}
-@cindex Noise
-@cindex Image noise
-Deep astronomical images, like those used in extragalactic studies, seriously
suffer from noise in the data.
-Generally speaking, the sources of noise in an astronomical image are photon
counting noise and Instrumental noise which are discussed in @ref{Photon
counting noise} and @ref{Instrumental noise}.
-This review finishes with @ref{Generating random numbers} which is a short
introduction on how random numbers are generated.
-We will see that while software random number generators are not perfect, they
allow us to obtain a reproducible series of random numbers through setting the
random number generator function and seed value.
-Therefore in this section, we'll also discuss how you can set these two
parameters in Gnuastro's programs (including MakeNoise).
+Therefore, when @mymath{K>0} (and curvature is the same everywhere), we have a
finite universe, where @mymath{r} cannot become larger than @mymath{R} as in
@ref{sphereandplane}.
+When @mymath{K=0}, we have a flat plane (@ref{flatplane}) and a negative
@mymath{K} will correspond to an imaginary @mymath{R}.
+The latter two cases may be infinite in area (which is not a simple concept,
but mathematically can be modeled with @mymath{r} extending infinitely), or
finite in area (for example a cylinder, which is flat everywhere, with
@mymath{ds_s^2=dx^2+dy^2}, but finite in one direction).
-@menu
-* Photon counting noise:: Poisson noise
-* Instrumental noise:: Readout, dark current and other sources.
-* Final noised pixel value:: How the final noised value is calculated.
-* Generating random numbers:: How random numbers are generated.
-@end menu
+@cindex Proper distance
+A very important issue that can be discussed now (while we are still in 2D and
can actually visualize things) is that @mymath{\overrightarrow{r}} is tangent
to the curved space at the observer's position.
+In other words, it is on the gray flat surface of @ref{sphereandplane}, even
when the universe is curved: @mymath{\overrightarrow{r}=P'-A}.
+Therefore for the point @mymath{P} on a curved space, the raw coordinate
@mymath{r} is the distance to @mymath{P'}, not @mymath{P}.
+The distance to the point @mymath{P} (at a specific coordinate @mymath{r} on
the flat plane) over the curved surface (thick line in @ref{sphereandplane}) is
called the @emph{proper distance} and is displayed with @mymath{l}.
+For the specific example of @ref{sphereandplane}, the proper distance can be
calculated with: @mymath{l=R\theta} (@mymath{\theta} is in radians).
+Using the @mymath{\sin\theta} relation found above, we can find @mymath{l} as
a function of @mymath{r}:
-@node Photon counting noise, Instrumental noise, Noise basics, Noise basics
-@subsubsection Photon counting noise
+@dispmath{\theta=\sin^{-1}\left({r\over R}\right)\quad\rightarrow\quad
+l(r)=R\sin^{-1}\left({r\over R}\right)}
-@cindex Counting error
-@cindex de Moivre, Abraham
-@cindex Poisson distribution
-@cindex Photon counting noise
-@cindex Poisson, Sim@'eon Denis
-With the very accurate electronics used in today's detectors, photon counting
noise@footnote{In practice, we are actually counting the electrons that are
produced by each photon, not the actual photons.} is the most significant
source of uncertainty in most datasets.
-To understand this noise (error in counting), we need to take a closer look at
how a distribution produced by counting can be modeled as a parametric function.
-Counting is an inherently discrete operation, which can only produce positive
(including zero) integer outputs.
-For example we can't count @mymath{3.2} or @mymath{-2} of anything.
-We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
-The distribution of values, as a result of counting efforts is formally known
as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson
distribution}.
-It is associated to Sim@'eon Denis Poisson, because he discussed it while
working on the number of wrongful convictions in court cases in his 1837
book@footnote{[From Wikipedia] Poisson's result was also derived in a previous
study by Abraham de Moivre in 1711.
-Therefore some people suggest it should rightly be called the de Moivre
distribution.}.
+@mymath{R} is just an arbitrary constant and can be directly found from
@mymath{K}, so for cleaner equations, it is common practice to set
@mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}.
+Also note that when @mymath{R=1}, then @mymath{l=\theta}.
+Generally, depending on the curvature, in a @emph{static} universe the proper
distance can be written as a function of the coordinate @mymath{r} as (from now
on we are assuming @mymath{R=1}):
-@cindex Probability density function
-Let's take @mymath{\lambda} to represent the expected mean count of something.
-Furthermore, let's take @mymath{k} to represent the result of one particular
counting attempt.
-The probability density function of getting @mymath{k} counts (in each
attempt, given the expected/mean count of @mymath{\lambda}) can be written as:
+@dispmath{l(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
+l(r)=r\quad(K=0),\quad\quad l(r)=\sinh^{-1}(r)\quad(K<0).}With
+@mymath{l}, the infinitesimal change of distance can be written in a
+simpler and more abstract form:
-@cindex Poisson distribution
-@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2, 3,
\dots @}}
+@dispmath{ds_s^2=dl^2+r^2d\phi^2.}
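The three proper-distance forms above (with @mymath{R=1}) are easy to experiment with. The following Python sketch (an illustration for this discussion, not Gnuastro code) shows how curvature changes the proper distance relative to the flat-space coordinate @mymath{r}:

```python
import math

def proper_distance(r, K):
    """Proper distance l(r) in a static universe with R=1, for the
    three signs of the curvature parameter K."""
    if K > 0:
        return math.asin(r)     # closed (positively curved) space
    elif K == 0:
        return r                # flat space
    else:
        return math.asinh(r)    # open (negatively curved) space

# Positive curvature makes the proper distance larger than the flat
# coordinate r; negative curvature makes it smaller:
r = 0.5
assert proper_distance(r, +1) > r > proper_distance(r, -1)
```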
-@cindex Skewed Poisson distribution
-Because the Poisson distribution is only applicable to positive values (note
the factorial operator, which only applies to non-negative integers), naturally
it is very skewed when @mymath{\lambda} is near zero.
-One qualitative way to understand this behavior is that there simply aren't
enough integers smaller than @mymath{\lambda}, than integers that are larger
than it.
-Therefore to accommodate all possibilities/counts, it has to be strongly
skewed when @mymath{\lambda} is small.
+@cindex Comoving distance
+Until now, we had assumed a static universe (not changing with time).
+But our observations so far appear to indicate that the universe is expanding
(it isn't static).
+Since there is no reason to expect the observed expansion to be unique to our
particular position in the universe, we expect the universe to be expanding at
all points at the same rate at the same time.
+Therefore, to add a time dependence to our distance measurements, we can
include a multiplicative scaling factor, which is a function of time:
@mymath{a(t)}.
+The functional form of @mymath{a(t)} comes from the cosmology we assume: the
physics (general relativity), and the choice of whether the universe is
uniform (`homogeneous') in density and curvature or inhomogeneous.
+In this section, the functional form of @mymath{a(t)} is irrelevant, so we can
avoid these issues.
-@cindex Compare Poisson and Gaussian
-As @mymath{\lambda} becomes larger, the distribution becomes more and more
symmetric.
-A very useful property of the Poisson distribution is that the mean value is
also its variance.
-When @mymath{\lambda} is very large, say @mymath{\lambda>1000}, then the
@url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian)
distribution}, is an excellent approximation of the Poisson distribution with
mean @mymath{\mu=\lambda} and standard deviation @mymath{\sigma=\sqrt{\lambda}}.
-In other words, a Poisson distribution (with a sufficiently large
@mymath{\lambda}) is simply a Gaussian that only has one free parameter
(@mymath{\mu=\lambda} and @mymath{\sigma=\sqrt{\lambda}}), instead of the two
parameters (independent @mymath{\mu} and @mymath{\sigma}) that it originally
has.
+With this scaling factor, the proper distance will also depend on time.
+As the universe expands, the distance between two given points will shift to
larger values.
+We thus define a distance measure, or coordinate, that is independent of time
and thus doesn't `move'.
+We call it the @emph{comoving distance} and display with @mymath{\chi} such
that: @mymath{l(r,t)=\chi(r)a(t)}.
+We have therefore shifted the @mymath{r} dependence of the proper distance we
derived above for a static universe to the comoving distance:
-@cindex Sky value
-@cindex Background flux
-@cindex Undetected objects
-In real situations, the photons/flux from our targets are added to a certain
background flux (observationally, the @emph{Sky} value).
-The Sky value is defined to be the average flux of a region in the dataset
with no targets.
-Its physical origin can be the brightness of the atmosphere (for ground-based
instruments), possible stray light within the imaging instrument, the average
flux of undetected targets, etc.
-The Sky value is thus an ideal definition, because in real datasets, what lies
deep in the noise (far lower than the detection limit) is never
known@footnote{In a real image, a relatively large number of very faint objects
can been fully buried in the noise and never detected.
-These undetected objects will bias the background measurement to slightly
larger values.
-Our best approximation is thus to simply assume they are uniform, and consider
their average effect.
-See Figure 1 (a.1 and a.2) and Section 2.2 in
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
-To account for all of these, the sky value is defined to be the average
count/value of the undetected regions in the image.
-In a mock image/dataset, we have the luxury of setting the background (Sky)
value.
+@dispmath{\chi(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
+\chi(r)=r\quad(K=0),\quad\quad \chi(r)=\sinh^{-1}(r)\quad(K<0).}
-@cindex Simulating noise
-@cindex Noise simulation
-In each element of the dataset (pixel in an image), the flux is the sum of
contributions from various sources (after convolution by the PSF, see
@ref{PSF}).
-Let's name the convolved sum of possibly overlapping objects, @mymath{I_{nn}}.
-@mymath{nn} representing `no noise'.
-For now, let's assume the background (@mymath{B}) is constant and sufficiently
high for the Poisson distribution to be approximated by a Gaussian.
-Then the flux after adding noise is a random value taken from a Gaussian
distribution with the following mean (@mymath{\mu}) and standard deviation
(@mymath{\sigma}):
+Therefore, @mymath{\chi(r)} is the proper distance to an object at a specific
reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies
``reference'') when @mymath{a(t_r)=1}.
+At any arbitrary moment (@mymath{t\neq{t_r}}) before or after @mymath{t_r},
the proper distance to the object can be scaled with @mymath{a(t)}.
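As a toy illustration of this separation (with hypothetical numbers, not a Gnuastro calculation): the comoving distance @mymath{\chi} stays fixed while the scale factor carries all the time dependence:

```python
def proper_distance_at(chi, a):
    """l(r,t) = chi(r) * a(t): the comoving distance chi is fixed in
    time, while the scale factor a(t) carries the time dependence."""
    return chi * a

chi = 0.8  # hypothetical comoving distance to some object
assert proper_distance_at(chi, 1.0) == chi        # at t_r, where a(t_r)=1
assert proper_distance_at(chi, 2.0) == 2.0 * chi  # scale factor doubled
```

The object has not `moved' in comoving coordinates, but its proper distance has doubled along with the scale factor.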
-@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{B+I_{nn}}}
+Measuring the change of distance in a time-dependent (expanding) universe only
makes sense if we can add up space and time@footnote{In other words, making our
space-time consistent with Minkowski space-time geometry.
+In this geometry, different observers at a given point (event) in space-time
split up space-time into `space' and `time' in different ways, just like people
at the same spatial position can make different choices of splitting up a map
into `left--right' and `up--down'.
+This model is well supported by twentieth and twenty-first century
observations.}.
+But we can only add bits of space and time together if we measure them in the
same units: with a conversion constant (similar to how 1000 is used to convert
a kilometer into meters).
+Experimentally, we find strong support for the hypothesis that this conversion
constant is the speed of light (or gravitational waves@footnote{The speed of
gravitational waves was recently found to be very similar to that of light in
vacuum, see @url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a
vacuum.
+This speed is postulated to be constant@footnote{In @emph{natural units},
speed is measured in units of the speed of light in vacuum.} and is almost
always written as @mymath{c}.
+We can thus parameterize the change in distance on an expanding 2D surface as
-Since this type of noise is inherent in the objects we study, it is usually
measured on the same scale as the astronomical objects, namely the magnitude
system, see @ref{Brightness flux magnitude}.
-It is then internally converted to the flux scale for further processing.
+@dispmath{ds^2=c^2dt^2-a^2(t)ds_s^2 = c^2dt^2-a^2(t)(d\chi^2+r^2d\phi^2).}
-@node Instrumental noise, Final noised pixel value, Photon counting noise,
Noise basics
-@subsubsection Instrumental noise
-@cindex Readout noise
-@cindex Instrumental noise
-@cindex Noise, instrumental
-While taking images with a camera, a dark current is fed to the pixels, the
variation of the value of this dark current over the pixels, also adds to the
final image noise.
-Another source of noise is the readout noise that is produced by the
electronics in the detector.
-Specifically, the parts that attempt to digitize the voltage produced by the
photo-electrons in the analog to digital converter.
-With the current generation of instruments, this source of noise is not as
significant as the noise due to the background Sky discussed in @ref{Photon
counting noise}.
+@node Extending distance concepts to 3D, Invoking astcosmiccal, Distance on a
2D curved space, CosmicCalculator
+@subsection Extending distance concepts to 3D
-Let @mymath{C} represent the combined standard deviation of all these
instrumental sources of noise.
-When only this source of noise is present, the noised pixel value would be a
random value chosen from a Gaussian distribution with
+The concepts of @ref{Distance on a 2D curved space} are here extended to a 3D
space that @emph{might} be curved.
+We can start with the generic infinitesimal distance in a static 3D universe,
but this time in spherical coordinates instead of polar coordinates.
+@mymath{\theta} is shown in @ref{sphereandplane}, but here we are 3D beings,
positioned at @mymath{O} (the center of the sphere), and our 3D space is
tangent to a 4D sphere at the point @mymath{O}.
+In our 3D space, a generic infinitesimal displacement will correspond to the
following distance in spherical coordinates:
-@dispmath{\mu=I_{nn}, \quad \sigma=\sqrt{C^2+I_{nn}}}
+@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
-@cindex ADU
-@cindex Gain
-@cindex Counts
-This type of noise is independent of the signal in the dataset, it is only
determined by the instrument.
-So the flux scale (and not magnitude scale) is most commonly used for this
type of noise.
-In practice, this value is usually reported in analog-to-digital units or
ADUs, not flux or electron counts.
-The gain value of the device can be used to convert between these two, see
@ref{Brightness flux magnitude}.
+Like the 2D creature before, we now have to assume an abstract dimension which
we cannot visualize easily.
+Let's call the fourth dimension @mymath{w}, then the general change in
coordinates in the @emph{full} four dimensional space will be:
-@node Final noised pixel value, Generating random numbers, Instrumental noise,
Noise basics
-@subsubsection Final noised pixel value
-Based on the discussions in @ref{Photon counting noise} and @ref{Instrumental
noise}, depending on the values you specify for @mymath{B} and @mymath{C} from
the above, the final noised value for each pixel is a random value chosen from
a Gaussian distribution with
+@dispmath{ds_s^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)+dw^2.}
-@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{C^2+B+I_{nn}}}
+@noindent
+But we can only work on a 3D curved space, so following exactly the same steps
and conventions as our 2D friend, we arrive at:
+@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
+@noindent
+In a non-static universe (with a scale factor @mymath{a(t)}), the distance can
be written as:
written as:
-@node Generating random numbers, , Final noised pixel value, Noise basics
-@subsubsection Generating random numbers
+@dispmath{ds^2=c^2dt^2-a^2(t)[d\chi^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)].}
-@cindex Random numbers
-@cindex Numbers, random
-As discussed above, to generate noise we need to make random samples of a
particular distribution.
-So it is important to understand some general concepts regarding the
generation of random numbers.
-For a very complete and nice introduction we strongly advise reading Donald
Knuth's ``The art of computer programming'', volume 2, chapter
3@footnote{Knuth, Donald. 1998.
-The art of computer programming. Addison--Wesley. ISBN 0-201-89684-2 }.
-Quoting from the GNU Scientific Library manual, ``If you don't own it, you
should stop reading right now, run to the nearest bookstore, and buy
it''@footnote{For students, running to the library might be more affordable!}!
-@cindex Psuedo-random numbers
-@cindex Numbers, psuedo-random
-Using only software, we can only produce what is called a psuedo-random
sequence of numbers.
-A true random number generator is a hardware (let's assume we have made sure
it has no systematic biases), for example throwing dice or flipping coins
(which have remained from the ancient times).
-More modern hardware methods use atmospheric noise, thermal noise or other
types of external electromagnetic or quantum phenomena.
-All pseudo-random number generators (software) require a seed to be the basis
of the generation.
-The advantage of having a seed is that if you specify the same seed for
multiple runs, you will get an identical sequence of random numbers which
allows you to reproduce the same final noised image.
-@cindex Environment variables
-@cindex GNU Scientific Library
-The programs in GNU Astronomy Utilities (for example MakeNoise or
MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
-GSL allows the user to set the random number generator through environment
variables, see @ref{Installation directory} for an introduction to environment
variables.
-In the chapter titled ``Random Number Generation'' they have fully explained
the various random number generators that are available (there are a lot of
them!).
-Through the two environment variables @code{GSL_RNG_TYPE} and
@code{GSL_RNG_SEED} you can specify the generator and its seed respectively.
+@c@dispmath{H(z){\equiv}\left(\dot{a}\over a\right)(z)=H_0E(z) }
-@cindex Seed, Random number generator
-@cindex Random number generator, Seed
-If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its default
random number generator type.
-The default type is sufficient for most general applications.
-If no value is given for the @code{GSL_RNG_SEED} environment variable and you
have asked Gnuastro to read the seed from the environment (through the
@option{--envseed} option), then GSL will use the default value of each
generator to give identical outputs.
-If you don't explicitly tell Gnuastro programs to read the seed value from the
environment variable, then they will use the system time (accurate to within a
microsecond) to generate (apparently random) seeds.
-In this manner, every time you run the program, you will get a different
random number distribution.
+@c@dispmath{E(z)=[ \Omega_{\Lambda,0} + \Omega_{C,0}(1+z)^2 +
+@c\Omega_{m,0}(1+z)^3 + \Omega_{r,0}(1+z)^4 ]^{1/2}}
-There are two ways you can specify values for these environment variables.
-You can call them on the same command-line for example:
+@c Let's take @mymath{r} to be the radial coordinate of the emitting
+@c source, which emitted its light at redshift $z$. Then the comoving
+@c distance of this object would be:
-@example
-$ GSL_RNG_TYPE="taus" GSL_RNG_SEED=345 astmknoise input.fits
-@end example
+@c@dispmath{ \chi(r)={c\over H_0a_0}\int_0^z{dz'\over E(z')} }
+
+@c@noindent
+@c So the proper distance at the current time to that object is:
+@c @mymath{a_0\chi(r)}, therefore the angular diameter distance
+@c (@mymath{d_A}) and luminosity distance (@mymath{d_L}) can be written
+@c as:
+
+@c@dispmath{ d_A={a_0\chi(r)\over 1+z}, \quad d_L=a_0\chi(r)(1+z) }
-@noindent
-In this manner the values will only be used for this particular execution of
MakeNoise.
-Alternatively, you can define them for the full period of your terminal
session or script length, using the shell's @command{export} command with the
two separate commands below (for a script remove the @code{$} signs):
+
+
+
+@node Invoking astcosmiccal, , Extending distance concepts to 3D,
CosmicCalculator
+@subsection Invoking CosmicCalculator
+
+CosmicCalculator will calculate cosmological variables based on the input
parameters.
+The executable name is @file{astcosmiccal} with the following general template
@example
-$ export GSL_RNG_TYPE="taus"
-$ export GSL_RNG_SEED=345
+$ astcosmiccal [OPTION...] ...
@end example
-@cindex Startup scripts
-@cindex @file{.bashrc}
-@noindent
-The subsequent programs which use GSL's random number generators will hence
forth use these values in this session of the terminal you are running or while
executing this script.
-In case you want to set fixed values for these parameters every time you use
the GSL random number generator, you can add these two lines to your
@file{.bashrc} startup script@footnote{Don't forget that if you are going to
give your scripts (that use the GSL random number generator) to others you have
to make sure you also tell them to set these environment variable separately.
-So for scripts, it is best to keep all such variable definitions within the
script, even if they are within your @file{.bashrc}.}, see @ref{Installation
directory}.
-@cartouche
@noindent
-@strong{NOTE:} If the two environment variables @code{GSL_RNG_TYPE} and
@code{GSL_RNG_SEED} are defined, GSL will report them by default, even if you
don't use the @option{--envseed} option.
-For example you can see the top few lines of the output of MakeProfiles:
+One line examples:
@example
-$ export GSL_RNG_TYPE="taus"
-$ export GSL_RNG_SEED=345
-$ astmkprof -s1 --kernel=gaussian,2,5 --envseed
-GSL_RNG_TYPE=taus
-GSL_RNG_SEED=345
-MakeProfiles A.B started on DDD MMM DD HH:MM:SS YYYY
- - Building one gaussian kernel
- - Random number generator (RNG) type: ranlxs1
- - RNG seed for all profiles: 345
- ---- ./kernel.fits created.
-MakeProfiles finished in 0.111271 seconds
-@end example
+## Print basic cosmological properties at redshift 2.5:
+$ astcosmiccal -z2.5
-@noindent
-@cindex Seed, Random number generator
-@cindex Random number generator, Seed
-The first two output lines (showing the names of the environment variables)
are printed by GSL before MakeProfiles actually starts generating random
numbers.
-The Gnuastro programs will report the values they use independently, you
should check them for the final values used.
-For example if @option{--envseed} is not given, @code{GSL_RNG_SEED} will not
be used and the last line shown above will not be printed.
-In the case of MakeProfiles, each profile will get its own seed value.
-@end cartouche
+## Only print comoving volume over 4pi steradian to z (Mpc^3):
+$ astcosmiccal --redshift=0.8 --volume
+## Print redshift and age of universe when Lyman-alpha line is
+## at 6000 Angstroms (another way to specify redshift).
+$ astcosmiccal --obsline=lyalpha,6000 --age
-@node Invoking astmknoise, , Noise basics, MakeNoise
-@subsection Invoking MakeNoise
+## Print luminosity distance, angular diameter distance and age
+## of universe in one row at redshift 0.4
+$ astcosmiccal -z0.4 -LAg
-MakeNoise will add noise to an existing image.
-The executable name is @file{astmknoise} with the following general template
+## Assume Lambda and matter density of 0.7 and 0.3 and print
+## basic cosmological parameters for redshift 2.1:
+$ astcosmiccal -l0.7 -m0.3 -z2.1
-@example
-$ astmknoise [OPTION ...] InputImage.fits
+## Print wavelength of all pre-defined spectral lines when
+## Lyman-alpha is observed at 4000 Angstroms.
+$ astcosmiccal --obsline=lyalpha,4000 --listlinesatz
@end example
-@noindent
-One line examples:
+The input parameters (for example, the current matter density) can be given as
command-line options or in the configuration files, see @ref{Configuration
files}.
+For a definition of the different parameters, please see the sections prior to
this.
+If no redshift is given, CosmicCalculator will just print its input parameters
and abort.
+For a full list of the input options, please see @ref{CosmicCalculator input
options}.
-@example
-## Add noise with a standard deviation of 100 to image.
-## (this is independent of the pixel value: not Poission noise)
-$ astmknoise --sigma=100 image.fits
+Without any particular output requested (and only a given redshift),
CosmicCalculator will print all basic cosmological calculations (one per line)
with some explanations before each.
+This can be good when you want a general feeling of the conditions at a
specific redshift.
+Alternatively, if any specific calculation(s) are requested (it is possible to
call more than one), only the requested value(s) will be calculated and printed
with a single space between them.
+In this case, no description or units will be printed.
+See @ref{CosmicCalculator basic cosmology calculations} for the full list of
these options along with some explanations of when/how they can be useful.
-## Add noise to input image assuming a background magnitude (with
-## zero point magnitude of 0) and a certain instrumental noise:
-$ astmknoise --background=-10 -z0 --instrumental=20 mockimage.fits
-@end example
+Another common operation in observational cosmology is dealing with spectral
lines at different redshifts.
+CosmicCalculator also has features to help in such situations, please see
@ref{CosmicCalculator spectral line calculations}.
-@noindent
-If actual processing is to be done, the input image is a mandatory argument.
-The full list of options common to all the programs in Gnuastro can be seen in
@ref{Common options}.
-The type (see @ref{Numeric data types}) of the output can be specified with
the @option{--type} option, see @ref{Input output options}.
-The header of the output FITS file keeps all the parameters that were
influential in making it.
-This is done for future reproducibility.
+@menu
+* CosmicCalculator input options:: Options to specify input conditions.
+* CosmicCalculator basic cosmology calculations:: Like distance modulus,
distances and etc.
+* CosmicCalculator spectral line calculations:: How they get affected by
redshift.
+@end menu
+
+@node CosmicCalculator input options, CosmicCalculator basic cosmology
calculations, Invoking astcosmiccal, Invoking astcosmiccal
+@subsubsection CosmicCalculator input options
+
+The inputs to CosmicCalculator can be specified with the following options:
+@table @option
+
+@item -z FLT
+@itemx --redshift=FLT
+The redshift of interest.
+There are two other ways that you can specify the target redshift:
+1) Spectral lines and their observed wavelengths, see @option{--obsline}.
+2) Velocity, see @option{--velocity}.
+Hence this option cannot be called with @option{--obsline} or
@option{--velocity}.
+
+@item -y FLT
+@itemx --velocity=FLT
+Input velocity in km/s.
+The given value will be converted to redshift internally, and used in any
subsequent calculation.
+This option is thus an alternative to @option{--redshift} or
@option{--obsline}; it cannot be used with them.
+The conversion will be done with the more general and accurate relativistic
equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified
@mymath{z\approx v/c}.
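For reference, the conversion itself is simple. Below is a minimal Python sketch of that relativistic relation (only an illustration of the formula, not CosmicCalculator's actual C source):

```python
import math

C_KMS = 299792.458  # speed of light in vacuum (km/s)

def redshift_from_velocity(v_kms):
    """Relativistic Doppler relation: 1 + z = sqrt((c+v)/(c-v))."""
    return math.sqrt((C_KMS + v_kms) / (C_KMS - v_kms)) - 1.0

# At low velocities this approaches the simplified z ~ v/c, but is
# always slightly larger:
v = 300.0  # km/s
z = redshift_from_velocity(v)
assert z > v / C_KMS
assert abs(z - v / C_KMS) < 1e-5
```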
+
+@item -H FLT
+@itemx --H0=FLT
+Current expansion rate (in km sec@mymath{^{-1}} Mpc@mymath{^{-1}}).
+
+@item -l FLT
+@itemx --olambda=FLT
+Cosmological constant density divided by the critical density in the current
Universe (@mymath{\Omega_{\Lambda,0}}).
+
+@item -m FLT
+@itemx --omatter=FLT
+Matter (including massive neutrinos) density divided by the critical density
in the current Universe (@mymath{\Omega_{m,0}}).
+
+@item -r FLT
+@itemx --oradiation=FLT
+Radiation density divided by the critical density in the current Universe
(@mymath{\Omega_{r,0}}).
+
+@item -O STR/FLT,FLT
+@itemx --obsline=STR/FLT,FLT
+@cindex Rest-frame wavelength
+@cindex Wavelength, rest-frame
+Find the redshift to use in next steps based on the rest-frame and observed
wavelengths of a line.
+This option is thus an alternative to @option{--redshift} or @option{--velocity}; it cannot be used with them.
+Wavelengths are assumed to be in Angstroms.
+The first argument identifies the line.
+It can be one of the standard names below, or any rest-frame wavelength in
Angstroms.
+The second argument is the observed wavelength of that line.
+For example @option{--obsline=lyalpha,6000} is the same as @option{--obsline=1215.67,6000}.
+
+The pre-defined names are listed below, sorted from red (longer wavelength) to
blue (shorter wavelength).
+You can get this list on the command-line with the @option{--listlines} option.
+
+@table @code
+@item siired
+[6731@AA{}] SII doublet's redder line.
+
+@item sii
+@cindex Doublet: SII
+@cindex SII doublet
+[6724@AA{}] SII doublet's mean center.
+
+@item siiblue
+[6717@AA{}] SII doublet's bluer line.
+
+@item niired
+[6584@AA{}] NII doublet's redder line.
+
+@item nii
+@cindex Doublet: NII
+@cindex NII doublet
+[6566@AA{}] NII doublet's mean center.
+
+@item halpha
+@cindex H-alpha
+[6562.8@AA{}] H-@mymath{\alpha} line.
+
+@item niiblue
+[6548@AA{}] NII doublet's bluer line.
+
+@item oiiired-vis
+[5007@AA{}] OIII doublet's redder line in the visible.
+
+@item oiii-vis
+@cindex Doublet: OIII (visible)
+@cindex OIII doublet in visible
+[4983@AA{}] OIII doublet's mean center in the visible.
+
+@item oiiiblue-vis
+[4959@AA{}] OIII doublet's bluer line in the visible.
+
+@item hbeta
+@cindex H-beta
+[4861.36@AA{}] H-@mymath{\beta} line.
+
+@item heii-vis
+[4686@AA{}] HeII doublet's redder line in the visible.
+
+@item hgamma
+@cindex H-gamma
+[4340.46@AA{}] H-@mymath{\gamma} line.
+
+@item hdelta
+@cindex H-delta
+[4101.74@AA{}] H-@mymath{\delta} line.
+
+@item hepsilon
+@cindex H-epsilon
+[3970.07@AA{}] H-@mymath{\epsilon} line.
+
+@item neiii
+[3869@AA{}] NEIII line.
+
+@item oiired
+[3729@AA{}] OII doublet's redder line.
+
+@item oii
+@cindex Doublet: OII
+@cindex OII doublet
+[3727.5@AA{}] OII doublet's mean center.
+
+@item oiiblue
+[3726@AA{}] OII doublet's bluer line.
+
+@item blimit
+@cindex Balmer limit
+[3646@AA{}] Balmer limit.
+
+@item mgiired
+[2803@AA{}] MgII doublet's redder line.
+
+@item mgii
+@cindex Doublet: MgII
+@cindex MgII doublet
+[2799.5@AA{}] MgII doublet's mean center.
+
+@item mgiiblue
+[2796@AA{}] MgII doublet's bluer line.
+
+@item ciiired
+[1909@AA{}] CIII doublet's redder line.
+
+@item ciii
+@cindex Doublet: CIII
+@cindex CIII doublet
+[1908@AA{}] CIII doublet's mean center.
+
+@item ciiiblue
+[1907@AA{}] CIII doublet's bluer line.
+
+@item si_iiired
+[1892@AA{}] SiIII doublet's redder line.
-@table @option
+@item si_iii
+@cindex Doublet: SiIII
+@cindex SiIII doublet
+[1887.5@AA{}] SiIII doublet's mean center.
-@item -b FLT
-@itemx --background=FLT
-The background value (per pixel) that will be added to each pixel value
(internally) to estimate Poisson noise, see @ref{Photon counting noise}.
-By default the units of this value are assumed to be in magnitudes, hence a
@option{--zeropoint} is also necessary.
-But if the background is in units of brightness, you need add
@option{--bgisbrightness}, see @ref{Brightness flux magnitude}
+@item si_iiiblue
+[1883@AA{}] SiIII doublet's bluer line.
-Internally, the value given to this option will be converted to brightness
(@mymath{b}, when @option{--bgisbrightness} is called, the value will be used
directly).
-Assuming the pixel value is @mymath{p}, the random value for that pixel will
be taken from a Gaussian distribution with mean of @mymath{p+b} and standard
deviation of @mymath{\sqrt{p+b}}.
-With this option, the noise will therefore be dependent on the pixel values:
according to the Poission noise model, as the pixel value becomes larger, its
noise will also become larger.
-This is thus a realistic way to model noise, see @ref{Photon counting noise}.
+@item oiiired-uv
+[1666@AA{}] OIII doublet's redder line in the ultra-violet.
-@item -B
-@itemx --bgisbrightness
-The value given to @option{--background} should be interpretted as brightness,
not as a magnitude.
+@item oiii-uv
+@cindex Doublet: OIII (in UV)
+@cindex OIII doublet in UV
+[1663.5@AA{}] OIII doublet's mean center in the ultra-violet.
-@item -z FLT
-@itemx --zeropoint=FLT
-The zero point magnitude used to convert the value of @option{--background}
(in units of magnitude) to flux, see @ref{Brightness flux magnitude}.
+@item oiiiblue-uv
+[1661@AA{}] OIII doublet's bluer line in the ultra-violet.
-@item -i FLT
-@itemx --instrumental=FLT
-The instrumental noise which is in units of flux, see @ref{Instrumental noise}.
+@item heii-uv
+[1640@AA{}] HeII doublet's bluer line in the ultra-violet.
-@item -s FLT
-@item --sigma=FLT
-The total noise sigma in the same units as the pixel values.
-With this option, the @option{--background}, @option{--zeropoint} and
@option{--instrumental} will be ignored.
-With this option, the noise will be independent of the pixel values (which is
not realistic, see @ref{Photon counting noise}).
-Hence it is only useful if you are working on low surface brightness regions
where the change in pixel value (and thus real noise) is insignificant.
+@item civred
+[1551@AA{}] CIV doublet's redder line.
-Generally, @strong{usage of this option is discouraged} unless you understand
the risks of not simulating real noise.
-This is because with this option, you will not get Poisson noise (the common
noise model for astronomical imaging), where the noise varies based on pixel
value.
-Use @option{--background} for adding Poission noise.
+@item civ
+@cindex Doublet: CIV
+@cindex CIV doublet
+[1549@AA{}] CIV doublet's mean center.
-@item -e
-@itemx --envseed
-@cindex Seed, Random number generator
-@cindex Random number generator, Seed
-Use the @code{GSL_RNG_SEED} environment variable for the seed used in the
random number generator, see @ref{Generating random numbers}.
-With this option, the output image noise is always going to be identical (or
reproducible).
+@item civblue
+[1548@AA{}] CIV doublet's bluer line.
-@item -d
-@itemx --doubletype
-Save the output in the double precision floating point format that was used
internally.
-This option will be most useful if the input images were of integer types.
+@item nv
+[1240@AA{}] NV (four times ionized Nitrogen).
-@end table
+@item lyalpha
+@cindex Lyman-alpha
+[1215.67@AA{}] Lyman-@mymath{\alpha} line.
+
+@item lybeta
+@cindex Lyman-beta
+[1025.7@AA{}] Lyman-@mymath{\beta} line.
+@item lygamma
+@cindex Lyman-gamma
+[972.54@AA{}] Lyman-@mymath{\gamma} line.
+@item lydelta
+@cindex Lyman-delta
+[949.74@AA{}] Lyman-@mymath{\delta} line.
+@item lyepsilon
+@cindex Lyman-epsilon
+[937.80@AA{}] Lyman-@mymath{\epsilon} line.
+@item lylimit
+@cindex Lyman limit
+[912@AA{}] Lyman limit.
+@end table
+@end table
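To make the @option{--obsline} logic above concrete, here is a small Python sketch (an illustration only, using a few rest-frame wavelengths from the list above) that recovers the redshift from an observed line:

```python
# Rest-frame wavelengths (Angstroms) for a few of the pre-defined names.
LINES = {"lyalpha": 1215.67, "halpha": 6562.8, "mgii": 2799.5}

def redshift_from_obsline(line, observed):
    """Redshift z satisfying observed = rest * (1+z).

    'line' can be a pre-defined name (string) or a rest-frame
    wavelength in the same units as 'observed'."""
    rest = LINES[line] if isinstance(line, str) else float(line)
    return observed / rest - 1.0

# Equivalent of '--obsline=lyalpha,6000': Lyman-alpha observed at 6000A.
z = redshift_from_obsline("lyalpha", 6000.0)   # about 3.94
```

Passing the name or the raw rest-frame wavelength gives the same result, mirroring the STR/FLT choice in the option's first argument.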
+@node CosmicCalculator basic cosmology calculations, CosmicCalculator spectral
line calculations, CosmicCalculator input options, Invoking astcosmiccal
+@subsubsection CosmicCalculator basic cosmology calculations
+By default, when no specific calculations are requested, CosmicCalculator will print a complete set of all its calculations (one line for each calculation, see @ref{Invoking astcosmiccal}).
+The full list of calculations can be useful when you don't want any specific
value, but just a general view.
+In other contexts (for example in a batch script or during a discussion), you
know exactly what you want and don't want to be distracted by all the extra
information.
+You can use any number of the options described below in any order.
+When any of these options are requested, CosmicCalculator's output will just
be a single line with a single space between the (possibly) multiple values.
+In the example below, only the tangential distance along one arc-second (in
kpc), absolute magnitude conversion, and age of the universe at redshift 2 are
printed (recall that you can merge short options together, see @ref{Options}).
+@example
+$ astcosmiccal -z2 -sag
+8.585046 44.819248 3.289979
+@end example
+For example, you can add the following two lines to a script to keep/use the comoving volume at varying redshifts:
+@example
+z=3.12
+vol=$(astcosmiccal --redshift=$z --volume)
+@end example
+@cindex GNU Grep
+@noindent
+In a script, this operation might be necessary for a large number of objects (for example all the galaxies of a catalog).
+So the fact that all the other default calculations are ignored will also help
you get to your result faster.
+If you are indeed dealing with many (for example thousands of) redshifts, using CosmicCalculator is not the best/fastest solution, because it has to go through all the configuration files and preparations for each invocation.
+To get the best efficiency (least overhead), we recommend using Gnuastro's
cosmology library (see @ref{Cosmology library}).
+CosmicCalculator also calls the library functions defined there for its
calculations, so you get the same result with no overhead.
+Gnuastro also has libraries for easily reading tables into a C program, see
@ref{Table input output}.
+Afterwards, you can easily build and run your C program for the particular
processing with @ref{BuildProgram}.
+If you just want to inspect the value of a variable visually, the description
(which comes with units) might be more useful.
+In such cases, the following command might be better.
+The other calculations will also be done, but they are so fast that you will
not notice on modern computers (the time it takes your eye to focus on the
result is usually longer than the processing: a fraction of a second).
+@example
+$ astcosmiccal --redshift=0.832 | grep volume
+@end example
-@node High-level calculations, Library, Modeling and fittings, Top
-@chapter High-level calculations
+The full list of CosmicCalculator's specific calculations is presented below in two groups: basic cosmology calculations and those related to spectral lines.
+In case you have forgotten the units, you can use the @option{--help} option which has the units along with a short description.
-After the reduction of raw data (for example with the programs in @ref{Data
manipulation}) you will have reduced images/data ready for processing/analyzing
(for example with the programs in @ref{Data analysis}).
-But the processed/analyzed data (or catalogs) are still not enough to derive
any scientific result.
-Even higher-level analysis is still needed to convert the observed magnitudes,
sizes or volumes into physical quantities that we associate with each catalog
entry or detected object which is the purpose of the tools in this section.
+@table @option
+@item -e
+@itemx --usedredshift
+The redshift that was used in this run.
+In many cases the redshift is the main input parameter to CosmicCalculator, but this option is useful when the redshift is derived from other inputs: for example in combination with @option{--obsline} (where you give an observed and rest-frame wavelength and would like to know the redshift) or with @option{--velocity} (where you specify the velocity instead of the redshift).
+Another example is when you run CosmicCalculator in a loop, while changing the
redshift and you want to keep the redshift value with the resulting calculation.
+@item -Y
+@itemx --usedvelocity
+The velocity (in km/s) that was used in this run.
+The conversion from redshift will be done with the more general and accurate
relativistic equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified
@mymath{z\approx v/c}.
+@item -G
+@itemx --agenow
+The current age of the universe (given the input parameters) in Ga (Giga
annum, or billion years).
+@item -C
+@itemx --criticaldensitynow
+The current critical density (given the input parameters) in grams per
centimeter-cube (@mymath{g/cm^3}).
-@menu
-* CosmicCalculator:: Calculate cosmological variables
-@end menu
+@item -d
+@itemx --properdistance
+The proper distance (at current time) to an object at the given redshift, in Megaparsecs (Mpc).
+See @ref{Distance on a 2D curved space} for a description of the proper
distance.
-@node CosmicCalculator, , High-level calculations, High-level calculations
-@section CosmicCalculator
+@item -A
+@itemx --angulardimdist
+The angular diameter distance to an object at the given redshift, in Megaparsecs (Mpc).
-To derive higher-level information regarding our sources in extra-galactic
astronomy, cosmological calculations are necessary.
-In Gnuastro, CosmicCalculator is in charge of such calculations.
-Before discussing how CosmicCalculator is called and operates (in
@ref{Invoking astcosmiccal}), it is important to provide a rough but mostly
self sufficient review of the basics and the equations used in the analysis.
-In @ref{Distance on a 2D curved space} the basic idea of understanding
distances in a curved and expanding 2D universe (which we can visualize) are
reviewed.
-Having solidified the concepts there, in @ref{Extending distance concepts to
3D}, the formalism is extended to the 3D universe we are trying to study in our
research.
+@item -s
+@itemx --arcsectandist
+The tangential distance covered by 1 arc-second at the given redshift, in kiloparsecs (kpc).
+This can be useful when trying to estimate the resolution or pixel scale of an
instrument (usually in units of arc-seconds) at a given redshift.
-The focus here is obtaining a physical insight into these equations (mainly
for the use in real observational studies).
-There are many books thoroughly deriving and proving all the equations with
all possible initial conditions and assumptions for any abstract universe,
interested readers can study those books.
+@item -L
+@itemx --luminositydist
+The luminosity distance to an object at the given redshift, in Megaparsecs (Mpc).
-@menu
-* Distance on a 2D curved space:: Distances in 2D for simplicity
-* Extending distance concepts to 3D:: Going to 3D (our real universe).
-* Invoking astcosmiccal:: How to run CosmicCalculator
-@end menu
+@item -u
+@itemx --distancemodulus
+The distance modulus at the given redshift.
-@node Distance on a 2D curved space, Extending distance concepts to 3D,
CosmicCalculator, CosmicCalculator
-@subsection Distance on a 2D curved space
+@item -a
+@itemx --absmagconv
+The conversion factor (addition) to absolute magnitude.
+Note that this is practically the distance modulus added with
@mymath{-2.5\log{(1+z)}} for the desired redshift based on the input parameters.
+Once the apparent magnitude and redshift of an object are known, this value may be added to the apparent magnitude to give the object's absolute magnitude.
-The observations to date (for example the Planck 2015 results), have not
measured@footnote{The observations are interpreted under the assumption of
uniform curvature.
-For a relativistic alternative to dark energy (and maybe also some part of
dark matter), non-uniform curvature may be even be more critical, but that is
beyond the scope of this brief explanation.} the presence of significant
curvature in the universe.
-However to be generic (and allow its measurement if it does in fact exist), it
is very important to create a framework that allows non-zero uniform curvature.
-However, this section is not intended to be a fully thorough and
mathematically complete derivation of these concepts.
-There are many references available for such reviews that go deep into the
abstract mathematical proofs.
-The emphasis here is on visualization of the concepts for a beginner.
+@item -g
+@itemx --age
+Age of the universe at given redshift in Ga (Giga annum, or billion years).
-As 3D beings, it is difficult for us to mentally create (visualize) a picture
of the curvature of a 3D volume.
-Hence, here we will assume a 2D surface/space and discuss distances on that 2D
surface when it is flat and when it is curved.
-Once the concepts have been created/visualized here, we will extend them, in
@ref{Extending distance concepts to 3D}, to a real 3D spatial @emph{slice} of
the Universe we live in and hope to study.
+@item -b
+@itemx --lookbacktime
+The look-back time to given redshift in Ga (Giga annum, or billion years).
+The look-back time at a given redshift is defined as the current age of the
universe (@option{--agenow}) subtracted by the age of the universe at the given
redshift.
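As a rough cross-check of the age and look-back time outputs, the integral behind them can be sketched in Python. This sketch assumes a flat universe with only matter and a cosmological constant (radiation ignored) and illustrative parameter values; it is not CosmicCalculator's actual implementation, which uses Gnuastro's cosmology library:

```python
import math

def age_gyr(z, H0=70.0, omatter=0.3, olambda=0.7):
    """Age of a flat matter+Lambda universe at redshift z, in Gyr.

    Integrates dz' / ((1+z') E(z')) from z to (effectively) infinity,
    with E(z) = sqrt(Omega_m (1+z)^3 + Omega_Lambda); radiation ignored."""
    # Hubble time 1/H0 in Gyr (H0 in km/s/Mpc; 1 Mpc = 3.0857e19 km,
    # 1 Gyr = 3.1557e16 s).
    hubble_time = (3.0857e19 / H0) / 3.1557e16
    zmax, n = 1000.0, 100000          # midpoint-rule numerical integration
    dz = (zmax - z) / n
    total = 0.0
    for i in range(n):
        zp = z + (i + 0.5) * dz
        total += dz / ((1 + zp) * math.sqrt(omatter * (1 + zp)**3 + olambda))
    return hubble_time * total

age_now = age_gyr(0.0)                 # roughly 13.5 Gyr for these parameters
lookback = age_now - age_gyr(2.0)      # look-back time to z=2
```

The look-back time is computed exactly as described above: the current age minus the age at the given redshift.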
-To be more understandable (actively discuss from an observer's point of view)
let's assume there's an imaginary 2D creature living on the 2D space (which
@emph{might} be curved in 3D).
-Here, we will be working with this creature in its efforts to analyze
distances in its 2D universe.
-The start of the analysis might seem too mundane, but since it is difficult to
imagine a 3D curved space, it is important to review all the very basic
concepts thoroughly for an easy transition to a universe that is more difficult
to visualize (a curved 3D space embedded in 4D).
+@item -c
+@itemx --criticaldensity
+The critical density at given redshift in grams per centimeter-cube
(@mymath{g/cm^3}).
-To start, let's assume a static (not expanding or shrinking), flat 2D surface
similar to @ref{flatplane} and that the 2D creature is observing its universe
from point @mymath{A}.
-One of the most basic ways to parameterize this space is through the Cartesian
coordinates (@mymath{x}, @mymath{y}).
-In @ref{flatplane}, the basic axes of these two coordinates are plotted.
-An infinitesimal change in the direction of each axis is written as
@mymath{dx} and @mymath{dy}.
-For each point, the infinitesimal changes are parallel with the respective
axes and are not shown for clarity.
-Another very useful way of parameterizing this space is through polar
coordinates.
-For each point, we define a radius (@mymath{r}) and angle (@mymath{\phi}) from
a fixed (but arbitrary) reference axis.
-In @ref{flatplane} the infinitesimal changes for each polar coordinate are
plotted for a random point and a dashed circle is shown for all points with the
same radius.
+@item -v
+@itemx --volume
+The comoving volume, in cubic Megaparsecs (Mpc@mymath{^3}), out to the desired redshift based on the input parameters.
-@float Figure,flatplane
-@center@image{gnuastro-figures/flatplane, 10cm, , }
+@end table
-@caption{Two dimensional Cartesian and polar coordinates on a flat
-plane.}
-@end float
-Assuming an object is placed at a certain position, which can be parameterized
as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general infinitesimal change in its
position will place it in the coordinates @mymath{(x+dx,y+dy)} and
@mymath{(r+dr,\phi+d\phi)}.
-The distance (on the flat 2D surface) that is covered by this infinitesimal
change in the static universe (@mymath{ds_s}, the subscript signifies the
static nature of this universe) can be written as:
-@dispmath{ds_s=dx^2+dy^2=dr^2+r^2d\phi^2}
-The main question is this: how can the 2D creature incorporate the (possible)
curvature in its universe when it's calculating distances? The universe that it
lives in might equally be a curved surface like @ref{sphereandplane}.
-The answer to this question but for a 3D being (us) is the whole purpose to
this discussion.
-Here, we want to give the 2D creature (and later, ourselves) the tools to
measure distances if the space (that hosts the objects) is curved.
+@node CosmicCalculator spectral line calculations, , CosmicCalculator basic
cosmology calculations, Invoking astcosmiccal
+@subsubsection CosmicCalculator spectral line calculations
-@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as the
curved 2D plane for simplicity.
-The 2D plane is tangent to the spherical shell and only touches it at
@mymath{A}.
-This idea will be generalized later.
-The first step in measuring the distance in a curved space is to imagine a
third dimension along the @mymath{z} axis as shown in @ref{sphereandplane}.
-For simplicity, the @mymath{z} axis is assumed to pass through the center of
the spherical shell.
-Our imaginary 2D creature cannot visualize the third dimension or a curved 2D
surface within it, so the remainder of this discussion is purely abstract for
it (similar to us having difficulty in visualizing a 3D curved space in 4D).
-But since we are 3D creatures, we have the advantage of visualizing the
following steps.
-Fortunately the 2D creature is already familiar with our mathematical
constructs, so it can follow our reasoning.
+@cindex Rest frame wavelength
+At different redshifts, observed spectral lines are shifted compared to their
rest frame wavelengths with this simple relation:
@mymath{\lambda_{obs}=\lambda_{rest}(1+z)}.
+Although this relation is very simple and can be evaluated for one line in your head (or with a simple calculator!), it slowly becomes tiring when dealing with many lines or redshifts, or when some precision is necessary.
+The options in this section are thus provided to greatly simplify usage of this simple equation, and also help by providing a list of pre-defined spectral line wavelengths.
-With the third axis added, a generic infinitesimal change over @emph{the full}
3D space corresponds to the distance:
+For example if you want to know the wavelength of the @mymath{H\alpha} line
(at 6562.8 Angstroms in rest frame), when @mymath{Ly\alpha} is at 8000
Angstroms, you can call CosmicCalculator like the first example below.
+And if you want the wavelength of all pre-defined spectral lines at this
redshift, you can use the second command.
-@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2d\phi^2+dz^2.}
+@example
+$ astcosmiccal --obsline=lyalpha,8000 --lineatz=halpha
+$ astcosmiccal --obsline=lyalpha,8000 --listlinesatz
+@end example
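The arithmetic behind those two commands can be sketched as follows (an illustration only; the wavelengths are rest-frame values from the pre-defined list):

```python
LYALPHA = 1215.67   # rest-frame Lyman-alpha, Angstroms
HALPHA = 6562.8     # rest-frame H-alpha, Angstroms

# Redshift implied by Lyman-alpha being observed at 8000 Angstroms:
z = 8000.0 / LYALPHA - 1.0             # about 5.58

# H-alpha shifted to that redshift: lambda_obs = lambda_rest * (1+z).
halpha_observed = HALPHA * (1.0 + z)   # about 43190 Angstroms
```

At such redshifts the optical lines move well into the infrared, which is why this conversion is so often needed when planning observations.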
-@float Figure,sphereandplane
-@center@image{gnuastro-figures/sphereandplane, 10cm, , }
+Below you can see the printed/output calculations of CosmicCalculator that are related to spectral lines.
+Note that @option{--obsline} is an input parameter, so it is discussed (with the full list of known lines) in @ref{CosmicCalculator input options}.
-@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light
gray) tangent to it at point @mymath{A}.}
-@end float
+@table @option
-It is very important to recognize that this change of distance is for
@emph{any} point in the 3D space, not just those changes that occur on the 2D
spherical shell of @ref{sphereandplane}.
-Recall that our 2D friend can only do measurements on the 2D surfaces, not the
full 3D space.
-So we have to constrain this general change to any change on the 2D spherical
shell.
-To do that, let's look at the arbitrary point @mymath{P} on the 2D spherical
shell.
-Its image (@mymath{P'}) on the flat plain is also displayed. From the dark
gray triangle, we see that
+@item --listlines
+List the pre-defined rest frame spectral line wavelengths and their names on
standard output, then abort CosmicCalculator.
+When this option is given, other operations on the command-line will be
ignored.
+This is convenient when you forget the specific name of the spectral line used
within Gnuastro, or when you forget the exact wavelength of a certain line.
-@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}These relations
allow the 2D creature to find the value of @mymath{z} (an abstract dimension
for it) as a function of r (distance on a flat 2D plane, which it can
visualize) and thus eliminate @mymath{z}.
-From @mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{z^2-2Rz+r^2=0} and
solving for @mymath{z}, we find:
+These names can be used with the options that deal with spectral lines, for example @option{--obsline} (see @ref{CosmicCalculator input options}) and @option{--lineatz} (below).
-@dispmath{z=R\left(1\pm\sqrt{1-{r^2\over R^2}}\right).}
+The format of the output list is a two-column table, with Gnuastro's text
table format (see @ref{Gnuastro text table format}).
+Therefore, if you are only looking for lines in a specific range, you can pipe
the output into Gnuastro's table program and use its @option{--range} option on
the @code{wavelength} (first) column.
+For example, if you only want to see the lines between 4000 and 6000
Angstroms, you can run this command:
-The @mymath{\pm} can be understood from @ref{sphereandplane}: For each
@mymath{r}, there are two points on the sphere, one in the upper hemisphere and
one in the lower hemisphere.
-An infinitesimal change in @mymath{r}, will create the following infinitesimal
change in @mymath{z}:
+@example
+$ astcosmiccal --listlines \
+ | asttable --range=wavelength,4000,6000
+@end example
-@dispmath{dz={\mp r\over R}\left(1\over
-\sqrt{1-{r^2/R^2}}\right)dr.}Using the positive signed equation instead of
@mymath{dz} in the @mymath{ds_s^2} equation above, we get:
+@noindent
+And if you want to use the list later and have it as a table in a file, you
can easily add the @option{--output} (or @option{-o}) option to the
@command{asttable} command, and specify the filename, for example
@option{--output=lines.fits} or @option{--output=lines.txt}.
-@dispmath{ds_s^2={dr^2\over 1-r^2/R^2}+r^2d\phi^2.}
+@item --listlinesatz
+Similar to @option{--listlines} (above), except that the printed wavelengths are not in the rest frame, but redshifted to the given redshift.
+Recall that the redshift can be specified by @option{--redshift} directly or
by @option{--obsline}, see @ref{CosmicCalculator input options}.
-The derivation above was done for a spherical shell of radius @mymath{R} as a
curved 2D surface.
-To generalize it to any surface, we can define @mymath{K=1/R^2} as the
curvature parameter.
-Then the general infinitesimal change in a static universe can be written as:
+@item -i STR/FLT
+@itemx --lineatz=STR/FLT
+The wavelength of the specified line at the redshift given to CosmicCalculator.
+The line can be specified either by its name or directly as a number (its
wavelength).
+To get the list of pre-defined names for the lines and their wavelength, you
can use the @option{--listlines} option, see @ref{CosmicCalculator input
options}.
+In the former case (when a name is given), the returned number is in units of
Angstroms.
+In the latter (when a number is given), the returned value is in the same units as the input number (assuming it is a wavelength).
-@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2d\phi^2.}
+@end table
-Therefore, when @mymath{K>0} (and curvature is the same everywhere), we have a
finite universe, where @mymath{r} cannot become larger than @mymath{R} as in
@ref{sphereandplane}.
-When @mymath{K=0}, we have a flat plane (@ref{flatplane}) and a negative
@mymath{K} will correspond to an imaginary @mymath{R}.
-The latter two cases may be infinite in area (which is not a simple concept,
but mathematically can be modeled with @mymath{r} extending infinitely), or
finite-area (like a cylinder is flat everywhere with @mymath{ds_s^2={dx^2 +
dy^2}}, but finite in one direction in size).
-@cindex Proper distance
-A very important issue that can be discussed now (while we are still in 2D and
can actually visualize things) is that @mymath{\overrightarrow{r}} is tangent
to the curved space at the observer's position.
-In other words, it is on the gray flat surface of @ref{sphereandplane}, even
when the universe if curved: @mymath{\overrightarrow{r}=P'-A}.
-Therefore for the point @mymath{P} on a curved space, the raw coordinate
@mymath{r} is the distance to @mymath{P'}, not @mymath{P}.
-The distance to the point @mymath{P} (at a specific coordinate @mymath{r} on
the flat plane) over the curved surface (thick line in @ref{sphereandplane}) is
called the @emph{proper distance} and is displayed with @mymath{l}.
-For the specific example of @ref{sphereandplane}, the proper distance can be
calculated with: @mymath{l=R\theta} (@mymath{\theta} is in radians).
-Using the @mymath{\sin\theta} relation found above, we can find @mymath{l} as
a function of @mymath{r}:
-@dispmath{\theta=\sin^{-1}\left({r\over R}\right)\quad\rightarrow\quad
-l(r)=R\sin^{-1}\left({r\over R}\right)}
-@mymath{R} is just an arbitrary constant and can be directly found from
@mymath{K}, so for cleaner equations, it is common practice to set
@mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}.
-Also note that when @mymath{R=1}, then @mymath{l=\theta}.
-Generally, depending on the curvature, in a @emph{static} universe the proper
distance can be written as a function of the coordinate @mymath{r} as (from now
on we are assuming @mymath{R=1}):
-@dispmath{l(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
-l(r)=r\quad(K=0),\quad\quad l(r)=\sinh^{-1}(r)\quad(K<0).}With
-@mymath{l}, the infinitesimal change of distance can be written in a
-more simpler and abstract form of
-@dispmath{ds_s^2=dl^2+r^2d\phi^2.}
-@cindex Comoving distance
-Until now, we had assumed a static universe (not changing with time).
-But our observations so far appear to indicate that the universe is expanding
(it isn't static).
-Since there is no reason to expect the observed expansion is unique to our
particular position of the universe, we expect the universe to be expanding at
all points with the same rate at the same time.
-Therefore, to add a time dependence to our distance measurements, we can
include a multiplicative scaling factor, which is a function of time:
@mymath{a(t)}.
-The functional form of @mymath{a(t)} comes from the cosmology, the physics we
assume for it: general relativity, and the choice of whether the universe is
uniform (`homogeneous') in density and curvature or inhomogeneous.
-In this section, the functional form of @mymath{a(t)} is irrelevant, so we can
avoid these issues.
+@node Installed scripts, Library, High-level calculations, Top
+@chapter Installed scripts
-With this scaling factor, the proper distance will also depend on time.
-As the universe expands, the distance between two given points will shift to
larger values.
-We thus define a distance measure, or coordinate, that is independent of time
and thus doesn't `move'.
-We call it the @emph{comoving distance} and display with @mymath{\chi} such
that: @mymath{l(r,t)=\chi(r)a(t)}.
-We have therefore, shifted the @mymath{r} dependence of the proper distance we
derived above for a static universe to the comoving distance:
+Gnuastro's programs (introduced in previous chapters) are designed to be
highly modular and thus contain lower-level operations on the data.
+However, certain higher-level operations are also shared between many contexts, for example a sequence of calls to multiple Gnuastro programs, or a special way of running a program and treating the output.
+To facilitate such higher-level data analysis, Gnuastro also installs some
scripts on your system with the (@code{astscript-}) prefix (in contrast to the
other programs that only have the @code{ast} prefix).
-@dispmath{\chi(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
-\chi(r)=r\quad(K=0),\quad\quad \chi(r)=\sinh^{-1}(r)\quad(K<0).}
+@cindex GNU Bash
+@cindex Portable shell
+@cindex Shell, portable
+Like all of Gnuastro's source code, these scripts are also heavily commented.
+They are written as portable shell scripts (for command-line environments), which do not need compilation.
+Therefore, if you open the installed scripts in a text editor, you can
actually read them@footnote{Gnuastro's installed programs (those only starting
with @code{ast}) aren't human-readable.
+They are written in C and need to be compiled before execution.
+Compilation optimizes the steps into the low-level hardware CPU
instructions/language to improve efficiency.
+Because compiled programs don't need an interpreter like Bash on every run,
they are much faster and more independent than scripts.
+To read the source code of the programs, look into the @file{bin/progname}
directory of Gnuastro's source (@ref{Downloading the source}).
+If you would like to read more about why C was chosen for the programs, please
see @ref{Why C}.}.
+For example with this command (just replace @code{nano} with your favorite
text editor, like @command{emacs} or @command{vim}):
-Therefore, @mymath{\chi(r)} is the proper distance to an object at a specific
reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies
``reference'') when @mymath{a(t_r)=1}.
-At any arbitrary moment (@mymath{t\neq{t_r}}) before or after @mymath{t_r},
the proper distance to the object can be scaled with @mymath{a(t)}.
+@example
+$ nano $(which astscript-NAME)
+@end example
-Measuring the change of distance in a time-dependent (expanding) universe only
makes sense if we can add up space and time@footnote{In other words, making our
space-time consistent with Minkowski space-time geometry.
-In this geometry, different observers at a given point (event) in space-time
split up space-time into `space' and `time' in different ways, just like people
at the same spatial position can make different choices of splitting up a map
into `left--right' and `up--down'.
-This model is well supported by twentieth and twenty-first century
observations.}.
-But we can only add bits of space and time together if we measure them in the
same units: with a conversion constant (similar to how 1000 is used to convert
a kilometer into meters).
-Experimentally, we find strong support for the hypothesis that this conversion
constant is the speed of light (or gravitational waves@footnote{The speed of
gravitational waves was recently found to be very similar to that of light in
vacuum, see @url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a
vacuum.
-This speed is postulated to be constant@footnote{In @emph{natural units},
speed is measured in units of the speed of light in vacuum.} and is almost
always written as @mymath{c}.
-We can thus parameterize the change in distance on an expanding 2D surface as
+Shell scripting is the same language that you use when typing on the
command-line.
+Therefore, shell scripting is much more widely known and used than C (the
language of the other Gnuastro programs).
+Because Gnuastro's installed scripts do higher-level operations, customizing
them for a special project will be more common than customizing the programs.
-@dispmath{ds^2=c^2dt^2-a^2(t)ds_s^2 = c^2dt^2-a^2(t)(d\chi^2+r^2d\phi^2).}
+These scripts also accept options and are in many ways similar to the programs
(see @ref{Common options}) with some minor differences:
+@itemize
+@item
+Currently they don't accept configuration files themselves.
+However, the configuration files of the Gnuastro programs they call are indeed
parsed and used by those programs.
-@node Extending distance concepts to 3D, Invoking astcosmiccal, Distance on a
2D curved space, CosmicCalculator
-@subsection Extending distance concepts to 3D
+As a result, they don't have the following options: @option{--checkconfig},
@option{--config}, @option{--lastconfig}, @option{--onlyversion},
@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
-The concepts of @ref{Distance on a 2D curved space} are here extended to a 3D
space that @emph{might} be curved.
-We can start with the generic infinitesimal distance in a static 3D universe,
but this time in spherical coordinates instead of polar coordinates.
-@mymath{\theta} is shown in @ref{sphereandplane}, but here we are 3D beings,
positioned on @mymath{O} (the center of the sphere) and the point @mymath{O} is
tangent to a 4D-sphere.
-In our 3D space, a generic infinitesimal displacement will correspond to the
following distance in spherical coordinates:
+@item
+They don't directly allocate any memory, so there is no @option{--minmapsize}.
-@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
+@item
+They don't have an independent @option{--usage} option: when called with
@option{--usage}, they just recommend running @option{--help}.
-Like the 2D creature before, we now have to assume an abstract dimension which
we cannot visualize easily.
-Let's call the fourth dimension @mymath{w}, then the general change in
coordinates in the @emph{full} four dimensional space will be:
+@item
+The output of @option{--help} is not configurable like the programs (see
@ref{--help}).
-@dispmath{ds_s^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)+dw^2.}
+@item
+@cindex GNU AWK
+@cindex GNU SED
+The scripts will commonly use your installed shell and other basic
command-line tools (for example AWK or SED).
+Different systems have different versions and implementations of these basic
tools (for example GNU/Linux systems use GNU Bash, GNU AWK and GNU SED which
are far more advanced and up to date than the minimalist AWK and SED of most
other systems).
+Therefore, unexpected errors in these tools might come up when you run these
scripts on non-GNU/Linux operating systems.
+If you confront such errors, please submit a bug report so we can fix them as
soon as possible (see @ref{Report a bug}).
-@noindent
-But we can only work on a 3D curved space, so following exactly the same steps
and conventions as our 2D friend, we arrive at:
+@end itemize
-@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
+@menu
+* Sort FITS files by night:: Sort many files by date.
+* Generate radial profile:: Radial profile of an object in an image.
+* SAO DS9 region files from table:: Create ds9 region file from a table.
+@end menu
-@noindent
-In a non-static universe (with a scale factor a(t)), the distance can be
written as:
+@node Sort FITS files by night, Generate radial profile, Installed scripts,
Installed scripts
+@section Sort FITS files by night
-@dispmath{ds^2=c^2dt^2-a^2(t)[d\chi^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)].}
+@cindex Calendar
+FITS images usually contain (several) keywords for preserving important dates.
+In particular, for lower-level data, this is usually the observation date and
time (for example, stored in the @code{DATE-OBS} keyword value).
+When analyzing observed datasets, many calibration steps (like the dark, bias
or flat-field) are commonly calculated on a per-observing-night basis.
+However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd}) is
based on the western (Gregorian) calendar.
+Dates that are stored in this format are complicated for automatic processing:
a night starts in the final hours of one calendar day, and extends to the early
hours of the next calendar day.
+As a result, to identify datasets from one night, we commonly need to search
for two dates.
+However, calendar peculiarities can make this identification very difficult.
+For example when an observation is done on the night separating two months
(like the night starting on March 31st and going into April 1st), or two years
(like the night starting on December 31st 2018 and going into January 1st,
2019).
+To account for such situations, it is necessary to keep track of the number
of days in each month, leap years, and so on.
+@cindex Unix epoch time
+@cindex Time, Unix epoch
+@cindex Epoch, Unix time
+Gnuastro's @file{astscript-sort-by-night} script was created to help in such
scenarios.
+It uses @ref{Fits} to convert the FITS date format into the Unix epoch time
(number of seconds since 00:00:00 of January 1st, 1970), using the
@option{--datetosec} option.
+The Unix epoch time is a single number (integer, if not given in sub-second
precision), enabling easy comparison and sorting of dates after January 1st,
1970.
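As an illustration of this conversion (not the script's actual implementation,
which uses the Fits program's @option{--datetosec} option), GNU @command{date}
can do the same on a GNU/Linux system:

```shell
# Convert a FITS-format date (assumed to be UTC) into Unix epoch
# seconds with GNU date; 'astfits --datetosec' does an equivalent
# conversion internally.
fitsdate="2019-04-23T21:30:00"
epoch=$(date -u -d "$fitsdate" +%s)
echo "$epoch"    # 1556055000
```

Once the dates are single integers like this, classifying and sorting them
only needs basic integer arithmetic and comparisons.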
-@c@dispmath{H(z){\equiv}\left(\dot{a}\over a\right)(z)=H_0E(z) }
+You can use this script as the basis for a more customized sorting script.
+Here are some examples:
-@c@dispmath{E(z)=[ \Omega_{\Lambda,0} + \Omega_{C,0}(1+z)^2 +
-@c\Omega_{m,0}(1+z)^3 + \Omega_{r,0}(1+z)^4 ]^{1/2}}
+@itemize
+@item
+If you need to copy the files, but only need a single extension (not the whole
file), you can add a step just before the making of the symbolic links, or
copies, and change it to only copy a certain extension of the FITS file using
the Fits program's @option{--copy} option, see @ref{HDU information and
manipulation}.
-@c Let's take @mymath{r} to be the radial coordinate of the emitting
-@c source, which emitted its light at redshift $z$. Then the comoving
-@c distance of this object would be:
+@item
+If you need to classify the files with finer detail (for example the purpose
of the dataset), you can add a step just before the making of the symbolic
links, or copies, to specify a file-name prefix based on certain other keyword
values in the files.
+For example, the FITS files may have a keyword specifying if the dataset is a
science, bias, or flat-field image.
+You can read it and automatically add a @code{sci-}, @code{bias-}, or
@code{flat-} to the created file name (after the @option{--prefix}).
-@c@dispmath{ \chi(r)={c\over H_0a_0}\int_0^z{dz'\over E(z')} }
+For example, let's assume the observing mode is stored in the hypothetical
@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE},
@code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
+With the step below, you can generate a mode-prefix, and add it to the
generated link/copy names (just correct the filename and extension of the first
line to the script's variables):
-@c@noindent
-@c So the proper distance at the current time to that object is:
-@c @mymath{a_0\chi(r)}, therefore the angular diameter distance
-@c (@mymath{d_A}) and luminosity distance (@mymath{d_L}) can be written
-@c as:
+@example
+modepref=$(astfits infile.fits -h1 \
+ | sed -e"s/'/ /g" \
+ | awk '$1=="MODE"@{ \
+ if($3=="BIAS-IMAGE") print "bias-"; \
+ else if($3=="SCIENCE-IMAGE") print "sci-"; \
+        else if($3=="FLAT-EXP")  print "flat-"; \
+ else print $3, "NOT recognized"; exit 1@}')
+@end example
-@c@dispmath{ d_A={a_0\chi(r)\over 1+z}, \quad d_L=a_0\chi(r)(1+z) }
+@cindex GNU AWK
+@cindex GNU Sed
+Here is a description of it.
+We first use @command{astfits} to print all the keywords in extension @code{1}
of @file{infile.fits}.
+In the FITS standard, string values (that we are assuming here) are placed in
single quotes (@key{'}), which get in the way in this use case.
+Therefore, we pipe the output of @command{astfits} into @command{sed} to
remove all such quotes (substituting them with a blank space).
+The result is then piped to AWK for giving us the final mode-prefix: with
@code{$1=="MODE"}, we ask AWK to only consider the line where the first column
is @code{MODE}.
+There is an equal sign between the key name and value, so the value is the
third column (@code{$3} in AWK).
+We thus use a simple @code{if-else} structure to look into this value and
print our custom prefix based on it.
+The output of AWK is then stored in the @code{modepref} shell variable which
you can add to the link/copy name.
+With the solution above, the increment of the file counter for each night will
be independent of the mode.
+If you want the counter to be mode-dependent, you can add a different counter
for each mode and use that counter instead of the generic counter for each
night (based on the value of @code{modepref}).
+But we'll leave the implementation of this step to you as an exercise.
+@end itemize
+@menu
+* Invoking astscript-sort-by-night:: Inputs and outputs to this script.
+@end menu
-@node Invoking astcosmiccal, , Extending distance concepts to 3D,
CosmicCalculator
-@subsection Invoking CosmicCalculator
+@node Invoking astscript-sort-by-night, , Sort FITS files by night, Sort FITS
files by night
+@subsection Invoking astscript-sort-by-night
-CosmicCalculator will calculate cosmological variables based on the input
parameters.
-The executable name is @file{astcosmiccal} with the following general template
+This installed script will read a FITS date formatted value from the given
keyword, and classify the input FITS files into individual nights.
+For more on installed scripts, please see @ref{Installed scripts}.
+This script can be used with the following general template:
@example
-$ astcosmiccal [OPTION...] ...
+$ astscript-sort-by-night [OPTION...] FITS-files
@end example
-
@noindent
One line examples:
@example
-## Print basic cosmological properties at redshift 2.5:
-$ astcosmiccal -z2.5
-
-## Only print Comoving volume over 4pi stradian to z (Mpc^3):
-$ astcosmiccal --redshift=0.8 --volume
-
-## Print redshift and age of universe when Lyman-alpha line is
-## at 6000 angstrom (another way to specify redshift).
-$ astcosmiccal --obsline=lyalpha,6000 --age
-
-## Print luminosity distance, angular diameter distance and age
-## of universe in one row at redshift 0.4
-$ astcosmiccal -z0.4 -LAg
-
-## Assume Lambda and matter density of 0.7 and 0.3 and print
-## basic cosmological parameters for redshift 2.1:
-$ astcosmiccal -l0.7 -m0.3 -z2.1
+## Use the DATE-OBS keyword
+$ astscript-sort-by-night --key=DATE-OBS /path/to/data/*.fits
-## Print wavelength of all pre-defined spectral lines when
-## Lyman-alpha is observed at 4000 Angstroms.
-$ astcosmiccal --obsline=lyalpha,4000 --listlinesatz
+## Make links to the input files with the `img-' prefix
+$ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits
@end example
-The input parameters (for example current matter density, etc) can be given as
command-line options or in the configuration files, see @ref{Configuration
files}.
-For a definition of the different parameters, please see the sections prior to
this.
-If no redshift is given, CosmicCalculator will just print its input parameters
and abort.
-For a full list of the input options, please see @ref{CosmicCalculator input
options}.
+This script will look into a HDU/extension (@option{--hdu}) for a keyword
(@option{--key}) in the given FITS files and interpret the value as a date.
+The inputs will be separated by ``nights'' (from 11:00 a.m. to the next day's
10:59:59 a.m., spanning two calendar days; the exact hour can be set with
@option{--hour}).
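The core of this classification can be sketched with simple integer arithmetic
on the Unix epoch time (a minimal illustration with an assumed boundary of
11:00 a.m., not the script's exact implementation):

```shell
# Shift each epoch time back by the boundary hour (11:00 here),
# then count whole days: times on both sides of midnight within
# one observing night get the same integer "night" number.
hour=11
for epoch in 1556062200 1556071200 1556107200; do
    echo $(( (epoch - hour*3600) / 86400 ))
done
# The first two (23:30 and 02:00 around one midnight) print the
# same number; the third (noon of the next day) prints one more.
```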
-Without any particular output requested (and only a given redshift),
CosmicCalculator will print all basic cosmological calculations (one per line)
with some explanations before each.
-This can be good when you want a general feeling of the conditions at a
specific redshift.
-Alternatively, if any specific calculation(s) are requested (its possible to
call more than one), only the requested value(s) will be calculated and printed
with one character space between them.
-In this case, no description or units will be printed.
-See @ref{CosmicCalculator basic cosmology calculations} for the full list of
these options along with some explanations how when/how they can be useful.
+The default output is a list of all the input files along with the following
two columns: night number and file number in that night (sorted by time).
+With @option{--link}, a symbolic link will be made (one for each input) whose
name contains the night number and the file's number in that night (sorted by
time); see the description of @option{--link} for more.
+When @option{--copy} is used, a copy of each input will be made instead of a
symbolic link.
-Another common operation in observational cosmology is dealing with spectral
lines at different redshifts.
-CosmicCalculator also has features to help in such situations, please see
@ref{CosmicCalculator spectral line calculations}.
+Below you can see one example where all the @file{target-*.fits} files in the
@file{data} directory should be separated by observing night according to the
@code{DATE-OBS} keyword value in their second extension (number @code{1},
recall that HDU counting starts from 0).
+You can see the output after the @code{ls} command.
-@menu
-* CosmicCalculator input options:: Options to specify input conditions.
-* CosmicCalculator basic cosmology calculations:: Like distance modulus,
distances and etc.
-* CosmicCalculator spectral line calculations:: How they get affected by
redshift.
-@end menu
+@example
+$ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
+$ ls
+img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
+@end example
-@node CosmicCalculator input options, CosmicCalculator basic cosmology
calculations, Invoking astcosmiccal, Invoking astcosmiccal
-@subsubsection CosmicCalculator input options
+The outputs can be placed in a different (already existing) directory by
including that directory's name in the @option{--prefix} value, for example
@option{--prefix=sorted/img-} will put them all under the @file{sorted}
directory.
-The inputs to CosmicCalculator can be specified with the following options:
-@table @option
+This script can be configured like all Gnuastro's programs (through
command-line options, see @ref{Common options}), with some minor differences
that are described in @ref{Installed scripts}.
+The particular options to this script are listed below:
-@item -z FLT
-@itemx --redshift=FLT
-The redshift of interest.
-There are two other ways that you can specify the target redshift:
-1) Spectral lines and their observed wavelengths, see @option{--obsline}.
-2) Velocity, see @option{--velocity}.
-Hence this option cannot be called with @option{--obsline} or
@option{--velocity}.
+@table @option
+@item -h STR
+@itemx --hdu=STR
+The HDU/extension to use in all the given FITS files.
+All of the given FITS files must have this extension.
-@item -y FLT
-@itemx --velocity=FLT
-Input velocity in km/s.
-The given value will be converted to redshift internally, and used in any
subsequent calculation.
-This option is thus an alternative to @code{--redshift} or @code{--obsline},
it cannot be used with them.
-The conversion will be done with the more general and accurate relativistic
equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified
@mymath{z\approx v/c}.
+@item -k STR
+@itemx --key=STR
+The keyword name that contains the FITS date format to classify/sort by.
@item -H FLT
-@itemx --H0=FLT
-Current expansion rate (in km sec@mymath{^{-1}} Mpc@mymath{^{-1}}).
-
-@item -l FLT
-@itemx --olambda=FLT
-Cosmological constant density divided by the critical density in the current
Universe (@mymath{\Omega_{\Lambda,0}}).
+@itemx --hour=FLT
+The hour that defines the next ``night''.
+By default, all times before 11:00 a.m. are considered to belong to the previous
calendar night.
+If a sub-hour value is necessary, it should be given in units of hours, for
example, @option{--hour=9.5} corresponds to 9:30 a.m.
-@item -m FLT
-@itemx --omatter=FLT
-Matter (including massive neutrinos) density divided by the critical density
in the current Universe (@mymath{\Omega_{m,0}}).
+@cartouche
+@noindent
+@cindex Time zone
+@cindex UTC (Universal time coordinate)
+@cindex Universal time coordinate (UTC)
+@strong{Dealing with time zones:}
+The time that is recorded in @option{--key} may be in UTC (Universal Time
Coordinate).
+However, the organization of the images taken during the night depends on the
local time.
+It is possible to take this into account by setting the @option{--hour} option
to the local time in UTC.
-@item -r FLT
-@itemx --oradiation=FLT
-Radiation density divided by the critical density in the current Universe
(@mymath{\Omega_{r,0}}).
+For example, consider a set of images taken in Auckland (New Zealand, UTC+12)
during different nights.
+If you want to classify these images by night, you have to know at which time
(in UTC time) the Sun rises (or any other separator/definition of a different
night).
+For example, if your observing night finishes before 9:00 a.m. in Auckland,
you can use @option{--hour=21}, because a local time of 9:00 a.m. in Auckland
corresponds to 21:00 UTC.
+@end cartouche
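If GNU @command{date} is available, you can check such a conversion directly
(a hedged illustration; the @code{TZ=} construct inside the @option{-d} string
is a GNU extension and needs the system's time zone database):

```shell
# What is 9:00 a.m. in Auckland (NZST, UTC+12 in June) in UTC?
date -u -d 'TZ="Pacific/Auckland" 2019-06-10 09:00' +%H:%M
# --> 21:00 (of the previous UTC day)
```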
-@item -O STR/FLT,FLT
-@itemx --obsline=STR/FLT,FLT
-@cindex Rest-frame wavelength
-@cindex Wavelength, rest-frame
-Find the redshift to use in next steps based on the rest-frame and observed
wavelengths of a line.
-This option is thus an alternative to @code{--redshift} or @code{--velocity},
it cannot be used with them.
-Wavelengths are assumed to be in Angstroms.
-The first argument identifies the line.
-It can be one of the standard names below, or any rest-frame wavelength in
Angstroms.
-The second argument is the observed wavelength of that line.
-For example @option{--obsline=lyalpha,6000} is the same as
@option{--obsline=1215.64,6000}.
+@item -l
+@itemx --link
+Create a symbolic link for each input FITS file.
+This option cannot be used with @option{--copy}.
+The link will have a standard name in the following format (variable parts are
written in @code{CAPITAL} letters and described after it):
-The pre-defined names are listed below, sorted from red (longer wavelength) to
blue (shorter wavelength).
-You can get this list on the command-line with the @option{--listlines}.
+@example
+PnN-I.fits
+@end example
@table @code
-@item siired
-[6731@AA{}] SII doublet's redder line.
+@item P
+This is the value given to @option{--prefix}.
+By default, its value is @code{./} (to store the links in the directory this
script was run in).
+See the description of @code{--prefix} for more.
+@item N
+This is the night counter, starting from 1.
+@code{N} is simply incremented by 1 for each new night, no matter how many
(dataset-less) nights lie between two subsequent observing nights (it is just
an identifier for each night, which you can easily map to calendar nights).
+@item I
+File counter in that night, sorted by time.
+@end table
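Putting the three components together, the construction of each name can be
sketched like this (the variable names are hypothetical, only for
illustration):

```shell
# P: the --prefix value; N: night counter; I: in-night counter.
prefix=img-      # P
night=1          # N
count=3          # I
name="${prefix}n${night}-${count}.fits"
echo "$name"     # img-n1-3.fits
```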
-@item sii
-@cindex Doublet: SII
-@cindex SII doublet
-[6724@AA{}] SII doublet's mean center at .
+@item -c
+@itemx --copy
+Make a copy of each input FITS file (instead of a symbolic link), with the
standard naming convention described in @option{--link}.
+This option cannot be used with @option{--link}.
-@item siiblue
-[6717@AA{}] SII doublet's bluer line.
+@item -p STR
+@itemx --prefix=STR
+Prefix to prepend to the night-identifier of each newly created link or copy.
+This option is thus only relevant with the @option{--copy} or @option{--link}
options.
+See the description of @option{--link} for how it is used.
+For example, with @option{--prefix=img-}, all the created file names in the
current directory will start with @code{img-}, making outputs like
@file{img-n1-1.fits} or @file{img-n3-42.fits}.
-@item niired
-[6584@AA{}] NII doublet's redder line.
+@option{--prefix} can also be used to store the links/copies in another
directory relative to the directory this script is run in (that directory
must already exist).
+For example @code{--prefix=/path/to/processing/img-} will put all the
links/copies in the @file{/path/to/processing} directory, and the files (in
that directory) will all start with @file{img-}.
+@end table
-@item nii
-@cindex Doublet: NII
-@cindex NII doublet
-[6566@AA{}] NII doublet's mean center.
-@item halpha
-@cindex H-alpha
-[6562.8@AA{}] H-@mymath{\alpha} line.
-@item niiblue
-[6548@AA{}] NII doublet's bluer line.
-@item oiiired-vis
-[5007@AA{}] OIII doublet's redder line in the visible.
-@item oiii-vis
-@cindex Doublet: OIII (visible)
-@cindex OIII doublet in visible
-[4983@AA{}] OIII doublet's mean center in the visible.
-@item oiiiblue-vis
-[4959@AA{}] OIII doublet's bluer line in the visible.
-@item hbeta
-@cindex H-beta
-[4861.36@AA{}] H-@mymath{\beta} line.
-@item heii-vis
-[4686@AA{}] HeII doublet's redder line in the visible.
-@item hgamma
-@cindex H-gamma
-[4340.46@AA{}] H-@mymath{\gamma} line.
-@item hdelta
-@cindex H-delta
-[4101.74@AA{}] H-@mymath{\delta} line.
-@item hepsilon
-@cindex H-epsilon
-[3970.07@AA{}] H-@mymath{\epsilon} line.
-@item neiii
-[3869@AA{}] NEIII line.
-@item oiired
-[3729@AA{}] OII doublet's redder line.
-@item oii
-@cindex Doublet: OII
-@cindex OII doublet
-[3727.5@AA{}] OII doublet's mean center.
-@item oiiblue
-[3726@AA{}] OII doublet's bluer line.
-@item blimit
-@cindex Balmer limit
-[3646@AA{}] Balmer limit.
-@item mgiired
-[2803@AA{}] MgII doublet's redder line.
-@item mgii
-@cindex Doublet: MgII
-@cindex MgII doublet
-[2799.5@AA{}] MgII doublet's mean center.
-@item mgiiblue
-[2796@AA{}] MgII doublet's bluer line.
-@item ciiired
-[1909@AA{}] CIII doublet's redder line.
+@node Generate radial profile, SAO DS9 region files from table, Sort FITS
files by night, Installed scripts
+@section Generate radial profile
-@item ciii
-@cindex Doublet: CIII
-@cindex CIII doublet
-[1908@AA{}] CIII doublet's mean center.
+@cindex Radial profile
+@cindex Profile, radial
+The one-dimensional radial profile of an object is an important tool in many
aspects of astronomical image processing.
+For example, you may want to study how the light of a galaxy is distributed
as a function of the radial distance from its center.
+In other cases, the radial profile of a star can show the PSF (see @ref{PSF}).
+Gnuastro's @file{astscript-radial-profile} script was created to obtain such
radial profiles for one object within an image.
+This script uses @ref{MakeProfiles} to generate elliptical apertures whose
pixel values are the distance from the object's center, and @ref{MakeCatalog}
to measure the values over those apertures.
-@item ciiiblue
-[1907@AA{}] CIII doublet's bluer line.
+@menu
+* Invoking astscript-radial-profile:: How to call astscript-radial-profile
+@end menu
-@item si_iiired
-[1892@AA{}] SiIII doublet's redder line.
+@node Invoking astscript-radial-profile, , Generate radial profile, Generate
radial profile
+@subsection Invoking astscript-radial-profile
-@item si_iii
-@cindex Doublet: SiIII
-@cindex SiIII doublet
-[1887.5@AA{}] SiIII doublet's mean center.
+This installed script will measure the radial profile of an object within an
image.
+For more on installed scripts, please see @ref{Installed scripts}.
+This script can be used with the following general template:
-@item si_iiiblue
-[1883@AA{}] SiIII doublet's bluer line.
+@example
+$ astscript-radial-profile [OPTION...] FITS-file
+@end example
-@item oiiired-uv
-[1666@AA{}] OIII doublet's redder line in the ultra-violet.
+@noindent
+Examples:
-@item oiii-uv
-@cindex Doublet: OIII (in UV)
-@cindex OIII doublet in UV
-[1663.5@AA{}] OIII doublet's mean center in the ultra-violet.
+@example
+## Generate the radial profile with default options (assuming the
+## object is in the center of the image, and using the mean).
+$ astscript-radial-profile image.fits
-@item oiiiblue-uv
-[1661@AA{}] OIII doublet's bluer line in the ultra-violet.
+## Generate the radial profile centered at x=44 and y=37 (in pixels),
+## up to a radial distance of 19 pixels, use the mean value.
+$ astscript-radial-profile image.fits \
+ --xcenter=44 \
+ --ycenter=37 \
+ --rmax=19
-@item heii-uv
-[1640@AA{}] HeII doublet's bluer line in the ultra-violet.
+## Generate the radial profile centered at x=44 and y=37 (in pixels),
+## up to a radial distance of 100 pixels, compute sigma clipped
+## mean and standard deviation (sigclip-mean and sigclip-std) using
+## 3 sigma and 10 iterations.
+$ astscript-radial-profile image.fits \
+ --xcenter=44 \
+ --ycenter=37 \
+ --rmax=100 \
+ --sigmaclip=3,10 \
+ --measure=sigclip-mean,sigclip-std
-@item civred
-[1551@AA{}] CIV doublet's redder line.
+## Generate the radial profile centered at RA=20.53751695,
+## DEC=0.9454292263, up to a radial distance of 88 pixels,
+## axis ratio equal to 0.32, and position angle of 148 deg.
+## Name the output table as `radial-profile.fits'
+$ astscript-radial-profile image.fits --mode=wcs \
+ --xcenter=20.53751695 \
+ --ycenter=0.9454292263 \
+ --rmax=88 \
+ --axisratio=0.32 \
+ --positionangle=148 -oradial-profile.fits
-@item civ
-@cindex Doublet: CIV
-@cindex CIV doublet
-[1549@AA{}] CIV doublet's mean center.
+@end example
-@item civblue
-[1548@AA{}] CIV doublet's bluer line.
+This installed script will read a FITS image and will use it as the basis for
constructing the radial profile.
+The output radial profile is a table (FITS or plain-text) containing the
radial distance from the center in the first column and the specified
measurements in the other columns (mean, median, sigclip-mean,
sigclip-median, etc.).
-@item nv
-[1240@AA{}] NV (four times ionized Sodium).
+To measure the radial profile, this script needs to generate temporary files.
+All these temporary files will be created within the directory given to the
@option{--tmpdir} option.
+When @option{--tmpdir} is not called, a temporary directory (with a name based
on the inputs) will be created in the running directory.
+If the directory doesn't exist at run-time, this script will create it.
+After the output is created, this script will delete the directory by default,
unless you call the @option{--keeptmp} option.
-@item lyalpha
-@cindex Lyman-alpha
-[1215.67@AA{}] Lyman-@mymath{\alpha} line.
+With the default options, the script will generate a circular radial profile
using the mean value and centered at the center of the image.
+For more flexibility, several options are available to configure the desired
radial profile.
+For example, you can change the center position, the maximum radius, the axis
ratio and the position angle (elliptical apertures are supported), the operator
for obtaining the profile, and more (described below).
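The elliptical radial distance behind such apertures can be sketched with AWK
(a minimal sketch under the stated conventions: position angle @mymath{p} in
degrees from the first FITS axis and axis ratio @mymath{q}; it is not this
script's actual code, which derives the distances with MakeProfiles):

```shell
# Elliptical radial distance of pixel (x,y) from center (xc,yc):
# rotate into the ellipse's frame, then stretch the minor-axis
# offset by 1/q before taking the usual Euclidean distance.
echo "47 40" | awk -v xc=44 -v yc=37 -v p=45 -v q=0.5 '{
    d  = atan2(0,-1)/180              # degrees to radians
    dx = $1-xc; dy = $2-yc
    xa =  dx*cos(p*d) + dy*sin(p*d)   # along the major axis
    ya = -dx*sin(p*d) + dy*cos(p*d)   # along the minor axis
    printf "%.3f\n", sqrt(xa^2 + (ya/q)^2) }'
# --> 4.243
```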
-@item lybeta
-@cindex Lyman-beta
-[1025.7@AA{}] Lyman-@mymath{\beta} line.
+@cartouche
+@noindent
+@strong{Debug your profile:} to debug your results, especially close to the
center of your object, you can see the radial distance associated with every
pixel in your input.
+To do this, use @option{--keeptmp} to keep the temporary files, and compare
@file{crop.fits} (crop of your input image centered on your desired coordinate)
with @file{apertures.fits} (radial distance of each pixel).
+@end cartouche
-@item lygamma
-@cindex Lyman-gamma
-[972.54@AA{}] Lyman-@mymath{\gamma} line.
+@cartouche
+@noindent
+@strong{Finding properties of your elliptical target:} you may want to
measure the radial profile of a galaxy, but not know its exact location,
position angle or axis ratio.
+To obtain these values, you can use @ref{NoiseChisel} to detect signal in the
image, feed it to @ref{Segment} to do basic segmentation, then use
@ref{MakeCatalog} to measure the center (@option{--x} and @option{--y} in
MakeCatalog), axis ratio (@option{--axisratio}) and position angle
(@option{--positionangle}).
+@end cartouche
-@item lydelta
-@cindex Lyman-delta
-[949.74@AA{}] Lyman-@mymath{\delta} line.
+@cartouche
+@noindent
+@strong{Masking other sources:} The image of an astronomical object will
usually contain many other sources along with your main target.
+A crude solution is to use sigma-clipped measurements for the profile.
+However, sigma-clipped measurements can easily be biased when the number of
sources at each radial distance increases at larger distances.
+Therefore a robust solution is to mask all other detections within the image.
+You can use @ref{NoiseChisel} and @ref{Segment} to detect and segment the
sources, then set all pixels that don't belong to your target to blank using
@ref{Arithmetic} (in particular, its @code{where} operator).
+@end cartouche
-@item lyepsilon
-@cindex Lyman-epsilon
-[937.80@AA{}] Lyman-@mymath{\epsilon} line.
+@table @option
+@item -h STR
+@itemx --hdu=STR
+The HDU/extension of the input image to use.
-@item lylimit
-@cindex Lyman limit
-[912@AA{}] Lyman limit.
+@item -o STR
+@itemx --output=STR
+Filename of the measured radial profile.
+It can be either a FITS table or a plain-text table (determined from the
given file name suffix).
-@end table
+@item -c FLT[,FLT[,...]]
+@itemx --center=FLT[,FLT[,...]]
+The central position of the radial profile.
+This parameter is passed to @ref{Crop} to center and crop the region around
the profile's center.
+The positions along each dimension must be separated by a comma (@key{,}) and
fractions are also acceptable.
+The number of values given to this option must be the same as the dimensions
of the input dataset.
+The units of the coordinates are read based on the value to the
@option{--mode} option, see below.
-@end table
+@item -O STR
+@itemx --mode=STR
+Interpret the center position of the object (values given to
@option{--center}) in image or WCS coordinates.
+This option thus accepts only two values: @option{img} or @option{wcs}.
+By default, it is @option{--mode=img}.
+
+@item -R FLT
+@itemx --rmax=FLT
+Maximum radius for the radial profile (in pixels).
+By default, the radial profile will be computed up to a radial distance equal
to the maximum radius that fits into the image (assuming circular shape).
+@item -Q FLT
+@itemx --axisratio=FLT
+The axis ratio of the apertures (minor axis divided by the major axis in a 2D
ellipse).
+By default (when this option isn't given), the radial profile will be circular
(axis ratio of 1).
+This parameter is used as the option @option{--qcol} in the generation of the
apertures with @command{astmkprof}.
+@item -p FLT
+@itemx --positionangle=FLT
+The position angle (in degrees) of the profiles relative to the first FITS
axis (horizontal when viewed in SAO ds9).
+By default, it is @option{--positionangle=0}, which means that the semi-major
axis of the profiles will be parallel to the first FITS axis.
-@node CosmicCalculator basic cosmology calculations, CosmicCalculator spectral
line calculations, CosmicCalculator input options, Invoking astcosmiccal
-@subsubsection CosmicCalculator basic cosmology calculations
-By default, when no specific calculations are requested, CosmicCalculator will
print a complete set of all its calculators (one line for each calculation, see
@ref{Invoking astcosmiccal}).
-The full list of calculations can be useful when you don't want any specific
value, but just a general view.
-In other contexts (for example in a batch script or during a discussion), you
know exactly what you want and don't want to be distracted by all the extra
information.
+@item -m STR
+@itemx --measure=STR
+The operator for measuring the values over each radial distance.
+The values given to this option will be directly passed to @ref{MakeCatalog}.
+As a consequence, all MakeCatalog measurements like the median, mean, std,
sigclip-mean, sigclip-number, etc. can be used here.
+For a full list of MakeCatalog's measurements, please run
@command{astmkcatalog --help}.
+Multiple values can be given to this option, each separated by a comma.
+This option can also be called multiple times.
+
+For example, by setting @option{--measure=mean,sigclip-mean --measure=median},
the mean, sigma-clipped mean and median values will be computed.
+The output radial profile will have 4 columns in this order: radial distance,
mean, sigma-clipped mean, and median.
+By default (when this option isn't given), the mean of all pixels at each
radial position will be computed.
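+For example, the call corresponding to the measurements mentioned above
+(mean, sigma-clipped mean and median) might look like this (the input
+@file{image.fits} is hypothetical):
+
+@example
+$ astscript-radial-profile image.fits \
+           --measure=mean,sigclip-mean --measure=median
+@end example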
-You can use any number of the options described below in any order.
-When any of these options are requested, CosmicCalculator's output will just
be a single line with a single space between the (possibly) multiple values.
-In the example below, only the tangential distance along one arc-second (in
kpc), absolute magnitude conversion, and age of the universe at redshift 2 are
printed (recall that you can merge short options together, see @ref{Options}).
+@item -s FLT,FLT
+@itemx --sigmaclip=FLT,FLT
+Sigma clipping parameters: only relevant if sigma-clipping operators are
requested by @option{--measure}.
+For more on sigma-clipping, see @ref{Sigma clipping}.
+If given, the value to this option is directly passed to the
@option{--sigmaclip} option of @ref{MakeCatalog}, see @ref{MakeCatalog inputs
and basic settings}.
+By default (when this option isn't given), the default values within
MakeCatalog will be used.
+To see the default value of this option in MakeCatalog, you can run this
command:
@example
-$ astcosmiccal -z2 -sag
-8.585046 44.819248 3.289979
+$ astmkcatalog -P | grep " sigmaclip "
@end example
-Here is one example of using this feature in scripts: by adding the following
two lines in a script to keep/use the comoving volume with varying redshifts:
-
-@example
-z=3.12
-vol=$(astcosmiccal --redshift=$z --volume)
-@end example
+@item -v INT
+@itemx --oversample=INT
+Oversample the input dataset by the factor given to this option.
+Therefore if you set @option{--rmax=20} for example and
@option{--oversample=5}, your output will have 100 rows (without
@option{--oversample} it will only have 20 rows).
+Unless the object is heavily undersampled (the pixels are larger than the
actual object), this method provides a much more accurate result, since there
will be a sufficient number of pixels to measure the profile accurately.
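+For example, with a (hypothetical) input @file{image.fits}, the values
+mentioned above can be requested like this:
+
+@example
+$ astscript-radial-profile image.fits --rmax=20 --oversample=5
+@end example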
-@cindex GNU Grep
-@noindent
-In a script, this operation might be necessary for a large number of objects
(several of galaxies in a catalog for example).
-So the fact that all the other default calculations are ignored will also help
you get to your result faster.
+@item -t STR
+@itemx --tmpdir=STR
+Several intermediate files are necessary to obtain the radial profile.
+All of these temporary files are saved into a temporary directory.
+With this option, you can directly specify this directory.
+By default (when this option isn't called), it will be built in the running
directory and given an input-based name.
+If the directory doesn't exist at run-time, this script will create it.
+Once the radial profile has been obtained, this directory is removed.
+You can disable the deletion of the temporary directory with the
@option{--keeptmp} option.
-If you are indeed dealing with many (for example thousands) of redshifts,
using CosmicCalculator is not the best/fastest solution.
-Because it has to go through all the configuration files and preparations for
each invocation.
-To get the best efficiency (least overhead), we recommend using Gnuastro's
cosmology library (see @ref{Cosmology library}).
-CosmicCalculator also calls the library functions defined there for its
calculations, so you get the same result with no overhead.
-Gnuastro also has libraries for easily reading tables into a C program, see
@ref{Table input output}.
-Afterwards, you can easily build and run your C program for the particular
processing with @ref{BuildProgram}.
+@item -k
+@itemx --keeptmp
+Don't delete the temporary directory (see description of @option{--tmpdir}
above).
+This option is useful for debugging; for example, to check that the profiles
generated for obtaining the radial profile have the desired center, shape and
orientation.
+@end table
-If you just want to inspect the value of a variable visually, the description
(which comes with units) might be more useful.
-In such cases, the following command might be better.
-The other calculations will also be done, but they are so fast that you will
not notice on modern computers (the time it takes your eye to focus on the
result is usually longer than the processing: a fraction of a second).
-@example
-$ astcosmiccal --redshift=0.832 | grep volume
-@end example
-The full list of CosmicCalculator's specific calculations is present below in
two groups: basic cosmology calculations and those related to spectral lines.
-In case you have forgot the units, you can use the @option{--help} option
which has the units along with a short description.
-@table @option
-@item -e
-@itemx --usedredshift
-The redshift that was used in this run.
-In many cases this is the main input parameter to CosmicCalculator, but it is
useful in others.
-For example in combination with @option{--obsline} (where you give an observed
and rest-frame wavelength and would like to know the redshift) or with
@option{--velocity} (where you specify the velocity instead of redshift).
-Another example is when you run CosmicCalculator in a loop, while changing the
redshift and you want to keep the redshift value with the resulting calculation.
-@item -Y
-@itemx --usedvelocity
-The velocity (in km/s) that was used in this run.
-The conversion from redshift will be done with the more general and accurate
relativistic equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified
@mymath{z\approx v/c}.
-@item -G
-@itemx --agenow
-The current age of the universe (given the input parameters) in Ga (Giga
annum, or billion years).
-@item -C
-@itemx --criticaldensitynow
-The current critical density (given the input parameters) in grams per
centimeter-cube (@mymath{g/cm^3}).
-@item -d
-@itemx --properdistance
-The proper distance (at current time) to object at the given redshift in
Megaparsecs (Mpc).
-See @ref{Distance on a 2D curved space} for a description of the proper
distance.
-@item -A
-@itemx --angulardimdist
-The angular diameter distance to object at given redshift in Megaparsecs (Mpc).
-@item -s
-@itemx --arcsectandist
-The tangential distance covered by 1 arc-seconds at the given redshift in
kiloparsecs (Kpc).
-This can be useful when trying to estimate the resolution or pixel scale of an
instrument (usually in units of arc-seconds) at a given redshift.
-@item -L
-@itemx --luminositydist
-The luminosity distance to object at given redshift in Megaparsecs (Mpc).
-@item -u
-@itemx --distancemodulus
-The distance modulus at given redshift.
-@item -a
-@itemx --absmagconv
-The conversion factor (addition) to absolute magnitude.
-Note that this is practically the distance modulus added with
@mymath{-2.5\log{(1+z)}} for the desired redshift based on the input parameters.
-Once the apparent magnitude and redshift of an object is known, this value may
be added with the apparent magnitude to give the object's absolute magnitude.
-@item -g
-@itemx --age
-Age of the universe at given redshift in Ga (Giga annum, or billion years).
-@item -b
-@itemx --lookbacktime
-The look-back time to given redshift in Ga (Giga annum, or billion years).
-The look-back time at a given redshift is defined as the current age of the
universe (@option{--agenow}) subtracted by the age of the universe at the given
redshift.
-@item -c
-@itemx --criticaldensity
-The critical density at given redshift in grams per centimeter-cube
(@mymath{g/cm^3}).
-@item -v
-@itemx --onlyvolume
-The comoving volume in Megaparsecs cube (Mpc@mymath{^3}) until the desired
redshift based on the input parameters.
-@end table
+@node SAO DS9 region files from table, , Generate radial profile, Installed
scripts
+@section SAO DS9 region files from table
+Once your desired catalog (containing the positions of some objects) is
created (for example with @ref{MakeCatalog}, @ref{Match}, or @ref{Table}), it
often happens that you want to see the selected objects over an image to get
a feeling of their spatial properties.
+For example, you may want to see their positions relative to each other.
+In this section we describe a simple installed script, provided with
Gnuastro, that converts your given columns to an SAO DS9 region file to help
in this process.
+SAO DS9@footnote{@url{https://sites.google.com/cfa.harvard.edu/saoimageds9}}
is one of the most common FITS image visualization tools in astronomy and is
free software.
-@node CosmicCalculator spectral line calculations, , CosmicCalculator basic
cosmology calculations, Invoking astcosmiccal
-@subsubsection CosmicCalculator spectral line calculations
+@menu
+* Invoking astscript-ds9-region:: How to call astscript-ds9-region
+@end menu
-@cindex Rest frame wavelength
-At different redshifts, observed spectral lines are shifted compared to their
rest frame wavelengths with this simple relation:
@mymath{\lambda_{obs}=\lambda_{rest}(1+z)}.
-Although this relation is very simple and can be done for one line in the head
(or a simple calculator!), it slowly becomes tiring when dealing with a lot of
lines or redshifts, or some precision is necessary.
-The options in this section are thus provided to greatly simplify usage of
this simple equation, and also helping by storing a list of pre-defined
spectral line wavelengths.
+@node Invoking astscript-ds9-region, , SAO DS9 region files from table, SAO
DS9 region files from table
+@subsection Invoking astscript-ds9-region
-For example if you want to know the wavelength of the @mymath{H\alpha} line
(at 6562.8 Angstroms in rest frame), when @mymath{Ly\alpha} is at 8000
Angstroms, you can call CosmicCalculator like the first example below.
-And if you want the wavelength of all pre-defined spectral lines at this
redshift, you can use the second command.
+This installed script will read two positional columns within an input table
and generate an SAO DS9 region file to visualize the position of the given
objects over an image.
+For more on installed scripts, see @ref{Installed scripts}.
+This script can be used with the following general template:
@example
-$ astcosmiccal --obsline=lyalpha,8000 --lineatz=halpha
-$ astcosmiccal --obsline=lyalpha,8000 --listlinesatz
+## Use the RA and DEC columns of 'table.fits' for the region file.
+$ astscript-ds9-region table.fits --column=RA,DEC \
+ --output=ds9.reg
+
+## Select objects with a magnitude between 18 to 20, and generate the
+## region file directly (through a pipe), each region with radius of
+## 0.5 arcseconds.
+$ asttable table.fits --range=MAG,18:20 --column=RA,DEC \
+ | astscript-ds9-region --column=1,2 --radius=0.5
+
+## With the first command, select objects with a magnitude of 25 to 26
+## as red regions in 'bright.reg'. With the second command, select
+## objects with a magnitude between 28 to 29 as a green region and
+## show both.
+$ asttable cat.fits --range=MAG_F160W,25:26 -cRA,DEC \
+ | ./astscript-ds9-region -c1,2 --color=red -obright.reg
+$ asttable cat.fits --range=MAG_F160W,28:29 -cRA,DEC \
+ | ./astscript-ds9-region -c1,2 --color=green \
+ --command="ds9 image.fits -regions bright.reg"
@end example
-Bellow you can see the printed/output calculations of CosmicCalculator that
are related to spectral lines.
-Note that @option{--obsline} is an input parameter, so its discussed (with the
full list of known lines) in @ref{CosmicCalculator input options}.
+The input can either be passed as a named file, or from standard input (a
pipe).
+Only the @option{--column} option is mandatory (to specify the input table
columns): two columns from the input table must be specified, either by name
(recommended) or number.
+You can optionally also specify the radius, width and color of the regions
with the @option{--radius}, @option{--width} and @option{--color} options;
otherwise, default values will be used for these (described under each
option).
+
+The created region file will be written into the file name given to
@option{--output}.
+When @option{--output} isn't called, the default name of @file{ds9.reg} will
be used (in the running directory).
+If the file exists before calling this script, it will be overwritten, unless
you pass the @option{--dontdelete} option.
+Optionally you can also use the @option{--command} option to give the full
command that should be run to execute SAO DS9 (see example above and
description below).
+In this mode, the created region file will be deleted once DS9 is closed
(unless you pass the @option{--dontdelete} option).
+A full description of each option is given below.
@table @option
-@item --listlines
-List the pre-defined rest frame spectral line wavelengths and their names on
standard output, then abort CosmicCalculator.
-When this option is given, other operations on the command-line will be
ignored.
-This is convenient when you forget the specific name of the spectral line used
within Gnuastro, or when you forget the exact wavelength of a certain line.
+@item -h INT/STR
+@itemx --hdu=INT/STR
+The HDU of the input table when a named FITS file is given as input.
+The HDU (or extension) can be either a name or number (counting from zero).
+For more on this option, see @ref{Input output options}.
-These names can be used with the options that deal with spectral lines, for
example @option{--obsline} and @option{--lineatz} (@ref{CosmicCalculator basic
cosmology calculations}).
+@item -c STR,STR
+@itemx --column=STR,STR
+Identifiers of the two positional columns to use in the DS9 region file from
the table.
+They can either be in WCS (RA and Dec) or image (pixel) coordinates.
+The mode can be specified with the @option{--mode} option, described below.
-The format of the output list is a two-column table, with Gnuastro's text
table format (see @ref{Gnuastro text table format}).
-Therefore, if you are only looking for lines in a specific range, you can pipe
the output into Gnuastro's table program and use its @option{--range} option on
the @code{wavelength} (first) column.
-For example, if you only want to see the lines between 4000 and 6000
Angstroms, you can run this command:
+@item -n STR
+@itemx --namecol=STR
+The column containing the name (or label) of each region.
+The type of the column (numeric or a character-based string) is irrelevant:
you can use both types of columns as a name or label for the region.
+This feature is useful when you need to recognize each region with a certain
ID or property (for example magnitude or redshift).
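+For example, a (hypothetical) call labeling each region with a magnitude
+column might look like this:
+
+@example
+$ asttable cat.fits -cRA,DEC,MAG \
+          | astscript-ds9-region -c1,2 --namecol=3
+@end example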
-@example
-$ astcosmiccal --listlines \
- | asttable --range=wavelength,4000,6000
-@end example
+@item -m wcs|img
+@itemx --mode=wcs|img
+The coordinate system of the positional columns (can be either
@option{--mode=wcs} or @option{--mode=img}).
+In the WCS mode, the values within the columns are interpreted to be RA and
Dec.
+In the image mode, they are interpreted to be pixel X and Y positions.
+This option also affects the interpretation of the value given to
@option{--radius}.
+When this option isn't explicitly given, the columns are assumed to be in WCS
mode.
-@noindent
-And if you want to use the list later and have it as a table in a file, you
can easily add the @option{--output} (or @option{-o}) option to the
@command{asttable} command, and specify the filename, for example
@option{--output=lines.fits} or @option{--output=lines.txt}.
+@item -C STR
+@itemx --color=STR
+The color to use for created regions.
+This value is directly interpreted by SAO DS9 when opening the region file,
so it must be a color that SAO DS9 recognizes.
+As of SAO DS9 8.2, the recognized color names are @code{black}, @code{white},
@code{red}, @code{green}, @code{blue}, @code{cyan}, @code{magenta} and
@code{yellow}.
+The default color (when this option is not called) is @code{green}.
-@item --listlinesatz
-Similar to @option{--listlines} (above), but the printed wavelength is not in
the rest frame, but redshifted to the given redshift.
-Recall that the redshift can be specified by @option{--redshift} directly or
by @option{--obsline}, see @ref{CosmicCalculator input options}.
+@item -w INT
+@itemx --width=INT
+The line width of the regions.
+This value is directly interpreted by SAO DS9 when opening the region file,
so it must be a width that SAO DS9 recognizes.
+The default value is @code{1}.
-@item -i STR/FLT
-@itemx --lineatz=STR/FLT
-The wavelength of the specified line at the redshift given to CosmicCalculator.
-The line can be specified either by its name or directly as a number (its
wavelength).
-To get the list of pre-defined names for the lines and their wavelength, you
can use the @option{--listlines} option, see @ref{CosmicCalculator input
options}.
-In the former case (when a name is given), the returned number is in units of
Angstroms.
-In the latter (when a number is given), the returned value is the same units
of the input number (assuming its a wavelength).
+@item -r FLT
+@itemx --radius=FLT
+The radius of all the regions.
+In WCS mode, the radius is assumed to be in arc-seconds; in image mode, it is
in pixel units.
+If this option is not explicitly given, the default radius is 1 arc-second in
WCS mode and 3 pixels in image mode.
+
+@item --dontdelete
+If the output file name exists, abort the program and don't over-write the
contents of the file.
+This option is thus good if you want to avoid accidentally writing over an
important file.
+Also, don't delete the created region file when @option{--command} is given
(by default, when @option{--command} is given, the created region file will be
deleted after SAO DS9 closes).
+
+@item -o STR
+@itemx --output=STR
+Write the created SAO DS9 region file into the name given to this option.
+If not explicitly given on the command-line, a default name of @file{ds9.reg}
will be used.
+If the file already exists, it will be over-written; you can avoid the
deletion (or over-writing) of an existing file with the @option{--dontdelete}
option.
+
+@item --command="STR"
+After creating the region file, run the string given to this option as a
command-line command.
+The SAO DS9 region command will be appended to the end of the given command.
+Because the command will most likely contain white-space characters, it is
recommended to put the given string in double quotations.
+
+For example, let's assume @option{--command="ds9 image.fits -zscale"}.
+After making the region file (assuming it is called @file{ds9.reg}), the
following command will be executed:
+
+@example
+ds9 image.fits -zscale -regions ds9.reg
+@end example
+You can customize all aspects of SAO DS9 with its command-line options;
therefore, the value of this option can be as long and complicated as you like.
+For example if you also want the image to fit into the window, this option
will be: @command{--command="ds9 image.fits -zscale -zoom to fit"}.
+You can see the SAO DS9 command-line descriptions by clicking on the ``Help''
menu and selecting ``Reference Manual''.
+In the opened window, click on ``Command Line Options''.
@end table
@@ -20554,7 +21333,7 @@ In the latter (when a number is given), the returned
value is the same units of
-@node Library, Developing, High-level calculations, Top
+@node Library, Developing, Installed scripts, Top
@chapter Library
Each program in Gnuastro that was discussed in the prior chapters (or any
program in general) is a collection of functions that is compiled into one
executable file which can communicate directly with the outside world.
@@ -25063,22 +25842,53 @@ TPD is a superset of all these, hence it has both
prior and sequeal distortion c
More information is given in the documentation of @code{dis.h}, from the
WCSLIB
manual@footnote{@url{https://www.atnf.csiro.au/people/mcalabre/WCS/wcslib/dis_8h.html}}.
@end deffn
+@deffn Macro GAL_WCS_COORDSYS_EQB1950
+@deffnx Macro GAL_WCS_COORDSYS_EQJ2000
+@deffnx Macro GAL_WCS_COORDSYS_ECB1950
+@deffnx Macro GAL_WCS_COORDSYS_ECJ2000
+@deffnx Macro GAL_WCS_COORDSYS_GALACTIC
+@deffnx Macro GAL_WCS_COORDSYS_SUPERGALACTIC
+@deffnx Macro GAL_WCS_COORDSYS_INVALID
+@cindex Galactic coordinate system
+@cindex Ecliptic coordinate system
+@cindex Equatorial coordinate system
+@cindex Supergalactic coordinate system
+@cindex Coordinate system: Galactic
+@cindex Coordinate system: Ecliptic
+@cindex Coordinate system: Equatorial
+@cindex Coordinate system: Supergalactic
+Recognized WCS coordinate systems in Gnuastro.
+@code{EQ} and @code{EC} stand for the Equatorial and Ecliptic coordinate
systems.
+In the equatorial and ecliptic coordinates, @code{B1950} stands for the
Besselian 1950 epoch and @code{J2000} stands for the Julian 2000 epoch.
+@end deffn
+
+@deffn Macro GAL_WCS_LINEAR_MATRIX_PC
+@deffnx Macro GAL_WCS_LINEAR_MATRIX_CD
+@deffnx Macro GAL_WCS_LINEAR_MATRIX_INVALID
+Identifiers of the linear transformation matrix: either in the @code{PCi_j} or
the @code{CDi_j} formalism.
+For more, see the description of @option{--wcslinearmatrix} in @ref{Input
output options}.
+@end deffn
+
+
@deffn Macro GAL_WCS_FLTERROR
Limit of rounding for floating point errors.
@end deffn
-@deftypefun {struct wcsprm *} gal_wcs_read_fitsptr (fitsfile @code{*fptr},
size_t @code{hstartwcs}, size_t @code{hendwcs}, int @code{*nwcs})
-[@strong{Not thread-safe}] Return the WCSLIB @code{wcsprm} structure that
-is read from the CFITSIO @code{fptr} pointer to an opened FITS file. Also
-put the number of coordinate representations found into the space that
-@code{nwcs} points to. To read the WCS structure directly from a filename,
-see @code{gal_wcs_read} below. After processing has finished, you can free
-the returned structure with WCSLIB's @code{wcsvfree} keyword:
+@deftypefun {struct wcsprm *} gal_wcs_read_fitsptr (fitsfile @code{*fptr}, int
@code{linearmatrix}, size_t @code{hstartwcs}, size_t @code{hendwcs}, int
@code{*nwcs})
+[@strong{Not thread-safe}] Return the WCSLIB @code{wcsprm} structure that is
read from the CFITSIO @code{fptr} pointer to an opened FITS file.
+Also put the number of coordinate representations found into the space that
@code{nwcs} points to.
+To read the WCS structure directly from a filename, see @code{gal_wcs_read}
below.
+After processing has finished, you can free the returned structure with
WCSLIB's @code{wcsvfree} keyword:
@example
status = wcsvfree(&nwcs,&wcs);
@end example
+The @code{linearmatrix} argument takes one of three values: @code{0},
@code{GAL_WCS_LINEAR_MATRIX_PC} and @code{GAL_WCS_LINEAR_MATRIX_CD}.
+It will determine the format of the WCS when it is later written to file with
@code{gal_wcs_write} or @code{gal_wcs_write_in_fitsptr} (which is called by
@code{gal_fits_img_write}).
+So if you don't want to write the WCS into a file later, just give it a value
of @code{0}.
+For more on the difference between these modes, see the description of
@option{--wcslinearmatrix} in @ref{Input output options}.
+
If you don't want to search the full FITS header for WCS-related FITS keywords
(for example due to conflicting keywords), but only a specific range of the
header keywords you can use the @code{hstartwcs} and @code{hendwcs} arguments
to specify the keyword number range (counting from zero).
If @code{hendwcs} is larger than @code{hstartwcs}, then only keywords in the
given range will be checked.
Hence, to ignore this feature (and search the full FITS header), give both
these arguments the same value.
@@ -25090,7 +25900,7 @@ This function is just a wrapper over WCSLIB's
@code{wcspih} function which is no
Therefore, be sure to not call this function simultaneously (over multiple
threads).
@end deftypefun
-@deftypefun {struct wcsprm *} gal_wcs_read (char @code{*filename}, char
@code{*hdu}, size_t @code{hstartwcs}, size_t @code{hendwcs}, int @code{*nwcs})
+@deftypefun {struct wcsprm *} gal_wcs_read (char @code{*filename}, char
@code{*hdu}, int @code{linearmatrix}, size_t @code{hstartwcs}, size_t
@code{hendwcs}, int @code{*nwcs})
[@strong{Not thread-safe}] Return the WCSLIB structure that is read from the
HDU/extension @code{hdu} of the file @code{filename}.
Also put the number of coordinate representations found into the space that
@code{nwcs} points to.
Please see @code{gal_wcs_read_fitsptr} for more.
@@ -25176,6 +25986,27 @@ correspond to the pixel scale, and the @code{PCi_j}
will correction show
the rotation.
@end deftypefun
+@deftypefun void gal_wcs_to_cd (struct wcsprm @code{*wcs})
+Make sure that the WCS structure's @code{PCi_j} and @code{CDi_j} keywords have
the same value and that the @code{CDELTi} keywords have a value of 1.0.
+Also, set the @code{wcs->altlin=2} (for the @code{CDi_j} formalism).
+With these changes @code{gal_wcs_write_in_fitsptr} (and thus
@code{gal_wcs_write} and @code{gal_fits_img_write} and its derivates) will have
an output file in the format of @code{CDi_j}.
+@end deftypefun
+
+@deftypefun int gal_wcs_coordsys_from_string (char @code{*coordsys})
+Convert the given string to Gnuastro's integer-based WCS coordinate system
identifier (one of the @code{GAL_WCS_COORDSYS_*}, listed above).
+The expected strings can be seen in the description of the
@option{--wcscoordsys} option of the Fits program, see @ref{Keyword inspection
and manipulation}.
+@end deftypefun
+
+@deftypefun int gal_wcs_coordsys_identify (struct wcsprm @code{*wcs})
+Read the given WCS structure and return its coordinate system as one of
Gnuastro's WCS coordinate system identifiers (the macros
@code{GAL_WCS_COORDSYS_*}, listed above).
+@end deftypefun
+
+@deftypefun {struct wcsprm *} gal_wcs_coordsys_convert (struct wcsprm
@code{*inwcs}, int @code{coordsysid})
+Return a newly allocated WCS structure with the @code{coordsysid} coordinate
system identifier.
+The Gnuastro WCS coordinate system identifiers are defined in the
@code{GAL_WCS_COORDSYS_*} macros mentioned above.
+Since the returned dataset is newly allocated, if you don't need the original
dataset after this, use the WCSLIB library function @code{wcsfree} to free the
input, for example @code{wcsfree(inwcs)}.
+@end deftypefun
+
@deftypefun int gal_wcs_distortion_from_string (char @code{*distortion})
Convert the given string (assumed to be a FITS-standard, string-based
distortion identifier) to a Gnuastro's integer-based distortion identifier (one
of the @code{GAL_WCS_DISTORTION_*} macros defined above).
The string-based distortion identifiers have three characters and are all in
capital letters.
@@ -25248,9 +26079,9 @@ return @code{NULL}.
@end deftypefun
@deftypefun double gal_wcs_pixel_area_arcsec2 (struct wcsprm @code{*wcs})
-Return the pixel area of @code{wcs} in arc-second squared. If the input WCS
-structure is not two dimensional and the units (@code{CUNIT} keywords) are
-not @code{deg} (for degrees), then this function will return a NaN.
+Return the pixel area of @code{wcs} in arc-second squared.
+This only works when the input dataset has at least two dimensions and the
units of the first two dimensions (@code{CUNIT} keywords) are @code{deg} (for
degrees).
+In other cases, this function will return a NaN.
@end deftypefun
@deftypefun int gal_wcs_coverage (char @code{*filename}, char @code{*hdu},
size_t @code{*ondim}, double @code{**ocenter}, double @code{**owidth}, double
@code{**omin}, double @code{**omax})
@@ -28179,6 +29010,20 @@ Convert the input Declination (Dec) degree (a single
floating point number) to o
If @code{usecolon!=0}, then the delimiters between the components will be
colons: @code{_:_:_}.
@end deftypefun
+@deftypefun double gal_units_counts_to_mag (double @code{counts}, double
@code{zeropoint})
+@cindex Magnitude
+Convert counts to magnitudes through the given zero point.
+For more on the equation, see @ref{Brightness flux magnitude}.
+@end deftypefun
+
+@deftypefun double gal_units_counts_to_jy (double @code{counts}, double
@code{zeropoint_ab})
+@cindex Jansky (Jy)
+@cindex AB Magnitude
+@cindex Magnitude, AB
+Convert counts to Janskys through an AB magnitude-based zero point.
+For more on the equation, see @ref{Brightness flux magnitude}.
+@end deftypefun
+
@node Spectral lines library, Cosmology library, Unit conversion library
(@file{units.h}), Gnuastro library
@subsection Spectral lines library (@file{speclines.h})
@@ -30275,10 +31120,38 @@ the body with `@code{This fixes bug #ID.}', or
`@code{This finishes task
full description when reading the commit message, so give a short
introduction too.
@end itemize
-
@end table
+Below you can see a good commit message example (don't forget to read it, it
has tips for you).
+After reading this, please run @command{git log} on the @code{master} branch
and read some of the recent commits for more realistic examples.
+
+@example
+The first line should be the title of the commit
+
+An empty line is necessary after the title so Git doesn't confuse
+lines. This top paragraph of the body of the commit usually describes
+the reason this commit was done. Therefore it usually starts with
+"Until now ...". It is very useful to explain the reason behind the
+change, things that aren't immediately obvious when looking into the
+code. You don't need to list the names of the files, or what lines
+have been changed, don't forget that the code changes are fully
+stored within Git :-).
+
+In the second paragraph (or any later paragraph!) of the body, we
+describe the solution and why (not "how"!) the particular solution
+was implemented. So we usually start this part of the commit body
+with "With this commit ...". Again, you don't need to go into the
+details that can be seen from the 'git diff' command (like the
+file names that have been changed or the code that has been
+implemented). The important thing here is the things that aren't
+immediately obvious from looking into the code.
+
+You can continue the explanation; you are encouraged to be as
+explicit as possible about the "human factor" of the change, not the
+technical details.
+@end example
+
@node Production workflow, Forking tutorial, Commit guidelines, Contributing
to Gnuastro
@subsection Production workflow
@@ -30569,10 +31442,15 @@ in a special way), are done with installed Bash
scripts (all prefixed with
similarly (with minor differences, see @ref{Installed scripts}).
@table @code
+@item astscript-ds9-region
+(See @ref{SAO DS9 region files from table}) Given a table (either as a file or
from standard input), create an SAO DS9 region file from the requested
positional columns (WCS or image coordinates).
+
+@item astscript-radial-profile
+(See @ref{Generate radial profile}) Calculate the radial profile of an object
within an image.
+The object can be at any location in the image, various measures can be
used (median, sigma-clipped mean, etc.), and the radial distance can also
be measured on any general ellipse.
+
@item astscript-sort-by-night
-(See @ref{Sort FITS files by night}) Given a list of FITS files, and a HDU
-and keyword name (for a date), this script separates the files in the same
-night (possibly over two calendar days).
+(See @ref{Sort FITS files by night}) Given a list of FITS files, and a HDU and
keyword name (for a date), this script separates the files in the same night
(possibly over two calendar days).
@end table
diff --git a/lib/Makefile.am b/lib/Makefile.am
index 5b427c6..4b4b297 100644
--- a/lib/Makefile.am
+++ b/lib/Makefile.am
@@ -150,6 +150,7 @@ gnuastro/config.h: Makefile $(internaldir)/config.h.in
-e 's|@HAVE_WCSLIB_DIS_H[@]|$(HAVE_WCSLIB_DIS_H)|g' \
-e 's|@HAVE_WCSLIB_MJDREF[@]|$(HAVE_WCSLIB_MJDREF)|g' \
-e 's|@HAVE_WCSLIB_OBSFIX[@]|$(HAVE_WCSLIB_OBSFIX)|g' \
+ -e 's|@HAVE_WCSLIB_WCSCCS[@]|$(HAVE_WCSLIB_WCSCCS)|g' \
-e 's|@HAVE_WCSLIB_VERSION[@]|$(HAVE_WCSLIB_VERSION)|g' \
-e 's|@HAVE_PTHREAD_BARRIER[@]|$(HAVE_PTHREAD_BARRIER)|g' \
-e 's|@RESTRICT_REPLACEMENT[@]|$(RESTRICT_REPLACEMENT)|g' \
diff --git a/lib/arithmetic-set.c b/lib/arithmetic-set.c
index 24da3cb..f4d9d68 100644
--- a/lib/arithmetic-set.c
+++ b/lib/arithmetic-set.c
@@ -116,8 +116,10 @@ gal_arithmetic_set_name(struct gal_arithmetic_set_params
*p, char *token)
error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
"fix the problem. The 'name' element should be NULL at "
"this point, but it isn't", __func__, PACKAGE_BUGREPORT);
- if(p->named->unit) { free(p->named->unit); p->named->unit=NULL;
}
- if(p->named->comment) { free(p->named->comment); p->named->comment=NULL;
}
+ if(p->named->unit)
+ { free(p->named->unit); p->named->unit=NULL; }
+ if(p->named->comment)
+ { free(p->named->comment); p->named->comment=NULL; }
gal_checkset_allocate_copy(varname, &p->named->name);
}
else
diff --git a/lib/arithmetic.c b/lib/arithmetic.c
index e7f68d9..8f98575 100644
--- a/lib/arithmetic.c
+++ b/lib/arithmetic.c
@@ -1384,9 +1384,9 @@ arithmetic_multioperand(int operator, int flags,
gal_data_t *list,
{
case GAL_ARITHMETIC_OP_QUANTILE:
if(p1<0 || p1>1)
- error(EXIT_FAILURE, 0, "%s: the parameter given to the 'quantile' "
- "operator must be between (and including) 0 and 1. The "
- "given value is: %g", __func__, p1);
+ error(EXIT_FAILURE, 0, "%s: the parameter given to the "
+ "'quantile' operator must be between (and including) "
+ "0 and 1. The given value is: %g", __func__, p1);
break;
}
}
@@ -1465,15 +1465,17 @@ arithmetic_multioperand(int operator, int flags,
gal_data_t *list,
/* Clean up and return. Note that the operation might have been done in
- place. In that case, the top most list element was used. So we need to
- check before freeing each data structure. */
+ place. In that case, a list element was used. So we need to check
+ before freeing each data structure. If we are on the designated output
+ dataset, we should set its 'next' pointer to NULL so it isn't treated
+ as a list any more by future functions. */
if(flags & GAL_ARITHMETIC_FREE)
{
tmp=list;
while(tmp!=NULL)
{
ttmp=tmp->next;
- if(tmp!=out) gal_data_free(tmp);
+ if(tmp==out) tmp->next=NULL; else gal_data_free(tmp);
tmp=ttmp;
}
if(params) gal_list_data_free(params);
@@ -1775,6 +1777,10 @@ arithmetic_function_binary_flt(int operator, int flags,
gal_data_t *il,
BINFUNC_F_OPERATOR_SET( pow, +0 ); break;
case GAL_ARITHMETIC_OP_ATAN2:
BINFUNC_F_OPERATOR_SET( atan2, *180.0f/pi ); break;
+ case GAL_ARITHMETIC_OP_COUNTS_TO_MAG:
+ BINFUNC_F_OPERATOR_SET( gal_units_counts_to_mag, +0 ); break;
+ case GAL_ARITHMETIC_OP_COUNTS_TO_JY:
+ BINFUNC_F_OPERATOR_SET( gal_units_counts_to_jy, +0 ); break;
default:
error(EXIT_FAILURE, 0, "%s: operator code %d not recognized",
__func__, operator);
@@ -1958,6 +1964,10 @@ gal_arithmetic_set_operator(char *string, size_t
*num_operands)
{ op=GAL_ARITHMETIC_OP_DEGREE_TO_RA; *num_operands=1; }
else if (!strcmp(string, "degree-to-dec"))
{ op=GAL_ARITHMETIC_OP_DEGREE_TO_DEC; *num_operands=1; }
+ else if (!strcmp(string, "counts-to-mag"))
+ { op=GAL_ARITHMETIC_OP_COUNTS_TO_MAG; *num_operands=2; }
+ else if (!strcmp(string, "counts-to-jy"))
+ { op=GAL_ARITHMETIC_OP_COUNTS_TO_JY; *num_operands=2; }
/* Statistical/higher-level operators. */
else if (!strcmp(string, "minvalue"))
@@ -2131,6 +2141,8 @@ gal_arithmetic_operator_string(int operator)
case GAL_ARITHMETIC_OP_DEC_TO_DEGREE: return "dec-to-degree";
case GAL_ARITHMETIC_OP_DEGREE_TO_RA: return "degree-to-ra";
case GAL_ARITHMETIC_OP_DEGREE_TO_DEC: return "degree-to-dec";
+ case GAL_ARITHMETIC_OP_COUNTS_TO_MAG: return "counts-to-mag";
+ case GAL_ARITHMETIC_OP_COUNTS_TO_JY: return "counts-to-jy";
case GAL_ARITHMETIC_OP_MINVAL: return "minvalue";
case GAL_ARITHMETIC_OP_MAXVAL: return "maxvalue";
@@ -2253,6 +2265,8 @@ gal_arithmetic(int operator, size_t numthreads, int
flags, ...)
/* Binary function operators. */
case GAL_ARITHMETIC_OP_POW:
case GAL_ARITHMETIC_OP_ATAN2:
+ case GAL_ARITHMETIC_OP_COUNTS_TO_MAG:
+ case GAL_ARITHMETIC_OP_COUNTS_TO_JY:
d1 = va_arg(va, gal_data_t *);
d2 = va_arg(va, gal_data_t *);
out=arithmetic_function_binary_flt(operator, flags, d1, d2);
diff --git a/lib/fits.c b/lib/fits.c
index 10a0bac..0f31ec7 100644
--- a/lib/fits.c
+++ b/lib/fits.c
@@ -1899,6 +1899,36 @@ gal_fits_key_write(gal_fits_list_key_t **keylist, char
*title,
+/* FITS doesn't allow NaN values, so if the type is float or double, we'll
+   just check to see if it's NaN or not and let the user know the keyword
+   name (to help them fix it). */
+static void
+gal_fits_key_write_in_ptr_nan_check(gal_fits_list_key_t *tmp)
+{
+ int nanwarning=0;
+
+ /* Check the value. */
+ switch(tmp->type)
+ {
+ case GAL_TYPE_FLOAT32:
+ if( isnan( ((float *)(tmp->value))[0] ) ) nanwarning=1;
+ break;
+ case GAL_TYPE_FLOAT64:
+ if( isnan( ((double *)(tmp->value))[0] ) ) nanwarning=1;
+ break;
+ }
+
+ /* Print the warning. */
+ if(nanwarning)
+ error(EXIT_SUCCESS, 0, "%s: (WARNING) value of '%s' is NaN "
+ "and FITS doesn't recognize a NaN key value", __func__,
+ tmp->keyname);
+}
+
+
+
+
+
/* Write the keywords in the gal_fits_list_key_t linked list to the FITS
file. Every keyword that is written is freed, that is why we need the
pointer to the linked list (to correct it after we finish). */
@@ -1928,6 +1958,10 @@ gal_fits_key_write_in_ptr(gal_fits_list_key_t **keylist,
fitsfile *fptr)
/* Write the basic key value and comments. */
if(tmp->value)
{
+ /* Print a warning if the value is NaN. */
+ gal_fits_key_write_in_ptr_nan_check(tmp);
+
+ /* Write/Update the keyword value. */
if( fits_update_key(fptr, gal_fits_type_to_datatype(tmp->type),
tmp->keyname, tmp->value, tmp->comment,
&status) )
@@ -2052,7 +2086,7 @@ gal_fits_key_write_version_in_ptr(gal_fits_list_key_t
**keylist, char *title,
if(gitdescribe)
{
fits_update_key(fptr, TSTRING, "COMMIT", gitdescribe,
- "Git's commit description in running dir.", &status);
+ "Git commit in running directory.", &status);
free(gitdescribe);
}
diff --git a/lib/gnuastro-internal/commonopts.h
b/lib/gnuastro-internal/commonopts.h
index d55f98b..16fc496 100644
--- a/lib/gnuastro-internal/commonopts.h
+++ b/lib/gnuastro-internal/commonopts.h
@@ -276,6 +276,20 @@ struct argp_option gal_commonopts_options[] =
gal_options_read_tableformat
},
{
+ "wcslinearmatrix",
+ GAL_OPTIONS_KEY_WCSLINEARMATRIX,
+ "STR",
+ 0,
+ "WCS linear matrix of output ('pc' or 'cd').",
+ GAL_OPTIONS_GROUP_OUTPUT,
+ &cp->wcslinearmatrix,
+ GAL_TYPE_STRING,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET,
+ gal_options_read_wcslinearmatrix
+ },
+ {
"dontdelete",
GAL_OPTIONS_KEY_DONTDELETE,
0,
@@ -342,7 +356,7 @@ struct argp_option gal_commonopts_options[] =
GAL_OPTIONS_KEY_MINMAPSIZE,
"INT",
0,
- "Minimum bytes in array to not use ram RAM.",
+ "Min. bytes to avoid RAM automatically.",
GAL_OPTIONS_GROUP_OPERATING_MODE,
&cp->minmapsize,
GAL_TYPE_SIZE_T,
diff --git a/lib/gnuastro-internal/options.h b/lib/gnuastro-internal/options.h
index fdb7478..343ebf8 100644
--- a/lib/gnuastro-internal/options.h
+++ b/lib/gnuastro-internal/options.h
@@ -125,6 +125,7 @@ enum options_common_keys
GAL_OPTIONS_KEY_INTERPONLYBLANK,
GAL_OPTIONS_KEY_INTERPMETRIC,
GAL_OPTIONS_KEY_INTERPNUMNGB,
+ GAL_OPTIONS_KEY_WCSLINEARMATRIX,
};
@@ -194,9 +195,10 @@ struct gal_options_common_params
/* Output. */
char *output; /* Directory containing output. */
uint8_t type; /* Data type of output. */
+ uint8_t tableformat; /* Internal code for output table format. */
+ uint8_t wcslinearmatrix; /* WCS matrix to use (PC or CD). */
uint8_t dontdelete; /* ==1: Don't delete existing file. */
uint8_t keepinputdir; /* Keep input directory for auto output. */
- uint8_t tableformat; /* Internal code for output table format. */
/* Operating modes. */
uint8_t quiet; /* Only print errors. */
@@ -276,6 +278,10 @@ gal_options_read_searchin(struct argp_option *option, char
*arg,
char *filename, size_t lineno, void *junk);
void *
+gal_options_read_wcslinearmatrix(struct argp_option *option, char *arg,
+ char *filename, size_t lineno, void *junk);
+
+void *
gal_options_read_tableformat(struct argp_option *option, char *arg,
char *filename, size_t lineno, void *junk);
diff --git a/lib/gnuastro/arithmetic.h b/lib/gnuastro/arithmetic.h
index b1719df..49d6930 100644
--- a/lib/gnuastro/arithmetic.h
+++ b/lib/gnuastro/arithmetic.h
@@ -120,10 +120,12 @@ enum gal_arithmetic_operators
GAL_ARITHMETIC_OP_ACOSH, /* Inverse hyperbolic cosine. */
GAL_ARITHMETIC_OP_ATANH, /* Inverse hyperbolic tangent. */
- GAL_ARITHMETIC_OP_RA_TO_DEGREE, /* right ascension to decimal */
- GAL_ARITHMETIC_OP_DEC_TO_DEGREE,/* declination to decimal */
- GAL_ARITHMETIC_OP_DEGREE_TO_RA, /* right ascension to decimal */
- GAL_ARITHMETIC_OP_DEGREE_TO_DEC,/* declination to decimal */
+ GAL_ARITHMETIC_OP_RA_TO_DEGREE, /* right ascension to decimal. */
+ GAL_ARITHMETIC_OP_DEC_TO_DEGREE,/* declination to decimal. */
+  GAL_ARITHMETIC_OP_DEGREE_TO_RA, /* decimal to right ascension. */
+  GAL_ARITHMETIC_OP_DEGREE_TO_DEC,/* decimal to declination. */
+ GAL_ARITHMETIC_OP_COUNTS_TO_MAG,/* Counts to magnitude. */
+ GAL_ARITHMETIC_OP_COUNTS_TO_JY, /* Counts to Janskys with AB-mag zeropoint.
*/
GAL_ARITHMETIC_OP_MINVAL, /* Minimum value of array. */
GAL_ARITHMETIC_OP_MAXVAL, /* Maximum value of array. */
diff --git a/lib/gnuastro/units.h b/lib/gnuastro/units.h
index 0543e67..3243f1f 100644
--- a/lib/gnuastro/units.h
+++ b/lib/gnuastro/units.h
@@ -62,16 +62,22 @@ gal_units_extract_decimal(char *convert, const char
*delimiter,
double *args, size_t n);
double
-gal_units_ra_to_degree (char *convert);
+gal_units_ra_to_degree(char *convert);
double
-gal_units_dec_to_degree (char *convert);
+gal_units_dec_to_degree(char *convert);
char *
-gal_units_degree_to_ra (double decimal, int usecolon);
+gal_units_degree_to_ra(double decimal, int usecolon);
char *
-gal_units_degree_to_dec (double decimal, int usecolon);
+gal_units_degree_to_dec(double decimal, int usecolon);
+
+double
+gal_units_counts_to_mag(double counts, double zeropoint);
+
+double
+gal_units_counts_to_jy(double counts, double zeropoint_ab);
__END_C_DECLS /* From C++ preparations */
diff --git a/lib/gnuastro/wcs.h b/lib/gnuastro/wcs.h
index 8ad29ba..dfe430a 100644
--- a/lib/gnuastro/wcs.h
+++ b/lib/gnuastro/wcs.h
@@ -70,6 +70,28 @@ enum gal_wcs_distortions
GAL_WCS_DISTORTION_WAT, /* The WAT polynomial distortion. */
};
+/* Macros to identify the coordinate system for conversions. */
+enum gal_wcs_coordsys
+{
+ GAL_WCS_COORDSYS_INVALID, /* Invalid (=0 by C standard). */
+
+ GAL_WCS_COORDSYS_EQB1950, /* Equatorial B1950 */
+ GAL_WCS_COORDSYS_EQJ2000, /* Equatorial J2000 */
+ GAL_WCS_COORDSYS_ECB1950, /* Ecliptic B1950 */
+ GAL_WCS_COORDSYS_ECJ2000, /* Ecliptic J2000 */
+ GAL_WCS_COORDSYS_GALACTIC, /* Galactic */
+ GAL_WCS_COORDSYS_SUPERGALACTIC, /* Super-galactic */
+};
+
+/* Macros to identify the type of linear transformation matrix. */
+enum gal_wcs_linear_matrix
+{
+ GAL_WCS_LINEAR_MATRIX_INVALID, /* Invalid (=0 by C standard). */
+
+ GAL_WCS_LINEAR_MATRIX_PC,
+ GAL_WCS_LINEAR_MATRIX_CD,
+};
+
@@ -78,16 +100,17 @@ enum gal_wcs_distortions
*********** Read WCS ***********
*************************************************************/
struct wcsprm *
-gal_wcs_read_fitsptr(fitsfile *fptr, size_t hstartwcs, size_t hendwcs,
- int *nwcs);
+gal_wcs_read_fitsptr(fitsfile *fptr, int linearmatrix, size_t hstartwcs,
+ size_t hendwcs, int *nwcs);
struct wcsprm *
-gal_wcs_read(char *filename, char *hdu, size_t hstartwcs,
+gal_wcs_read(char *filename, char *hdu, int linearmatrix, size_t hstartwcs,
size_t hendwcs, int *nwcs);
struct wcsprm *
gal_wcs_create(double *crpix, double *crval, double *cdelt,
- double *pc, char **cunit, char **ctype, size_t ndim);
+ double *pc, char **cunit, char **ctype, size_t ndim,
+ int linearmatrix);
char *
gal_wcs_dimension_name(struct wcsprm *wcs, size_t dimension);
@@ -107,6 +130,19 @@ gal_wcs_write_in_fitsptr(fitsfile *fptr, struct wcsprm
*wcs);
+/*************************************************************
+ ***********        Coordinate systems        ***********
+ *************************************************************/
+int
+gal_wcs_coordsys_from_string(char *coordsys);
+
+int
+gal_wcs_coordsys_identify(struct wcsprm *inwcs);
+
+struct wcsprm *
+gal_wcs_coordsys_convert(struct wcsprm *inwcs, int coordsysid);
+
+
/*************************************************************
*********** Distortions ***********
@@ -149,6 +185,9 @@ gal_wcs_clean_errors(struct wcsprm *wcs);
void
gal_wcs_decompose_pc_cdelt(struct wcsprm *wcs);
+void
+gal_wcs_to_cd(struct wcsprm *wcs);
+
double
gal_wcs_angular_distance_deg(double r1, double d1, double r2, double d2);
diff --git a/lib/options.c b/lib/options.c
index e029072..ba51988 100644
--- a/lib/options.c
+++ b/lib/options.c
@@ -28,6 +28,7 @@ along with Gnuastro. If not, see
<http://www.gnu.org/licenses/>.
#include <stdlib.h>
#include <string.h>
+#include <gnuastro/wcs.h>
#include <gnuastro/git.h>
#include <gnuastro/txt.h>
#include <gnuastro/fits.h>
@@ -477,6 +478,54 @@ gal_options_read_searchin(struct argp_option *option, char
*arg,
void *
+gal_options_read_wcslinearmatrix(struct argp_option *option, char *arg,
+ char *filename, size_t lineno, void *junk)
+{
+ char *str;
+ uint8_t value=GAL_WCS_LINEAR_MATRIX_INVALID;
+ if(lineno==-1)
+ {
+ /* The output must be an allocated string (will be 'free'd later). */
+ value=*(uint8_t *)(option->value);
+ switch(value)
+ {
+ case GAL_WCS_LINEAR_MATRIX_PC: gal_checkset_allocate_copy("pc", &str);
+ break;
+ case GAL_WCS_LINEAR_MATRIX_CD: gal_checkset_allocate_copy("cd", &str);
+ break;
+ default:
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at '%s' "
+ "to fix the problem. %u is not a recognized WCS rotation "
+ "matrix code", __func__, PACKAGE_BUGREPORT, value);
+ }
+ return str;
+ }
+ else
+ {
+ /* If the option is already set, just return. */
+ if(option->set) return NULL;
+
+ /* Read the value. */
+ if( !strcmp(arg, "pc") ) value = GAL_WCS_LINEAR_MATRIX_PC;
+ else if( !strcmp(arg, "cd") ) value = GAL_WCS_LINEAR_MATRIX_CD;
+ else
+ error_at_line(EXIT_FAILURE, 0, filename, lineno, "'%s' (value "
+ "to '%s' option) couldn't be recognized as a known "
+ "WCS rotation matrix. Acceptable values are 'pc' "
+ "or 'cd'", arg, option->name);
+ *(uint8_t *)(option->value)=value;
+
+ /* For no un-used variable warning. This function doesn't need the
+ pointer.*/
+ return junk=NULL;
+ }
+}
+
+
+
+
+
+void *
gal_options_read_tableformat(struct argp_option *option, char *arg,
char *filename, size_t lineno, void *junk)
{
diff --git a/lib/txt.c b/lib/txt.c
index 7f2c942..5ed3978 100644
--- a/lib/txt.c
+++ b/lib/txt.c
@@ -1513,6 +1513,11 @@ txt_write_keys(FILE *fp, struct gal_fits_list_key_t
**keylist)
tmp->title);
if(tmp->tfree) free(tmp->title);
}
+ else if (tmp->fullcomment)
+ {
+ fprintf(fp, "# %s\n", tmp->fullcomment);
+ if(tmp->fcfree) free(tmp->fullcomment);
+ }
else
{
/* For a string type, we need to return a pointer to the
diff --git a/lib/units.c b/lib/units.c
index 20146e2..ce9e873 100644
--- a/lib/units.c
+++ b/lib/units.c
@@ -331,3 +331,48 @@ gal_units_degree_to_dec(double decimal, int usecolon)
/* Return the final string. */
return dec;
}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+/**********************************************************************/
+/**************** Flux conversions *****************/
+/**********************************************************************/
+
+/* Convert counts to magnitude using the given zeropoint. */
+double
+gal_units_counts_to_mag(double counts, double zeropoint)
+{
+ return ( counts > 0.0f
+ ? ( -2.5f * log10(counts) + zeropoint )
+ : NAN );
+}
+
+
+
+
+
+/* Convert pixel values (counts) to Janskys through an AB-magnitude-based
+   zero point. See the "Brightness, flux, magnitude and surface
+   brightness" section of the Gnuastro book. */
+double
+gal_units_counts_to_jy(double counts, double zeropoint_ab)
+{
+ return counts * 3631 * pow(10, -1 * zeropoint_ab / 2.5);
+}
diff --git a/lib/wcs.c b/lib/wcs.c
index 0d2a508..0b2b33a 100644
--- a/lib/wcs.c
+++ b/lib/wcs.c
@@ -53,14 +53,6 @@ along with Gnuastro. If not, see
<http://www.gnu.org/licenses/>.
-/* Static functions on for this file. */
-static void
-gal_wcs_to_cd(struct wcsprm *wcs);
-
-
-
-
-
/*************************************************************
*********** Read WCS ***********
*************************************************************/
@@ -149,8 +141,8 @@ wcs_read_correct_pc_cd(struct wcsprm *wcs)
Don't call this function within a thread or use a mutex.
*/
struct wcsprm *
-gal_wcs_read_fitsptr(fitsfile *fptr, size_t hstartwcs, size_t hendwcs,
- int *nwcs)
+gal_wcs_read_fitsptr(fitsfile *fptr, int linearmatrix, size_t hstartwcs,
+ size_t hendwcs, int *nwcs)
{
  /* Declarations: */
int sumcheck;
@@ -361,6 +353,11 @@ gal_wcs_read_fitsptr(fitsfile *fptr, size_t hstartwcs,
size_t hendwcs,
}
}
+  /* If the user wants a CD linear matrix, do the conversion here;
+     otherwise, make sure the PC matrix is used. */
+ if(linearmatrix==GAL_WCS_LINEAR_MATRIX_CD) gal_wcs_to_cd(wcs);
+ else gal_wcs_decompose_pc_cdelt(wcs);
+
/* Clean up and return. */
status=0;
if (fits_free_memory(fullheader, &status) )
@@ -374,8 +371,8 @@ gal_wcs_read_fitsptr(fitsfile *fptr, size_t hstartwcs,
size_t hendwcs,
struct wcsprm *
-gal_wcs_read(char *filename, char *hdu, size_t hstartwcs,
- size_t hendwcs, int *nwcs)
+gal_wcs_read(char *filename, char *hdu, int linearmatrix,
+ size_t hstartwcs, size_t hendwcs, int *nwcs)
{
int status=0;
fitsfile *fptr;
@@ -389,7 +386,8 @@ gal_wcs_read(char *filename, char *hdu, size_t hstartwcs,
fptr=gal_fits_hdu_open_format(filename, hdu, 0);
/* Read the WCS information: */
- wcs=gal_wcs_read_fitsptr(fptr, hstartwcs, hendwcs, nwcs);
+ wcs=gal_wcs_read_fitsptr(fptr, linearmatrix, hstartwcs,
+ hendwcs, nwcs);
/* Close the FITS file and return. */
fits_close_file(fptr, &status);
@@ -403,7 +401,8 @@ gal_wcs_read(char *filename, char *hdu, size_t hstartwcs,
struct wcsprm *
gal_wcs_create(double *crpix, double *crval, double *cdelt,
- double *pc, char **cunit, char **ctype, size_t ndim)
+ double *pc, char **cunit, char **ctype,
+ size_t ndim, int linearmatrix)
{
size_t i;
int status;
@@ -441,6 +440,10 @@ gal_wcs_create(double *crpix, double *crval, double *cdelt,
error(EXIT_FAILURE, 0, "wcsset error %d: %s", status,
wcs_errmsg[status]);
+  /* If a CD matrix is desired, make it here. */
+ if(linearmatrix==GAL_WCS_LINEAR_MATRIX_CD)
+ gal_wcs_to_cd(wcs);
+
/* Return the output WCS. */
return wcs;
}
@@ -497,15 +500,21 @@ void
gal_wcs_write_in_fitsptr(fitsfile *fptr, struct wcsprm *wcs)
{
char *wcsstr;
- int tpvdist, status=0, nkeyrec;
-
- /* Prepare the main rotation matrix. Note that for TPV distortion, WCSLIB
- versions 7.3 and before couldn't deal with the CDELT keys, so to be
- safe, in such cases, we'll remove the effect of CDELT in the
- 'gal_wcs_to_cd' function. */
- tpvdist=wcs->lin.disseq && !strcmp(wcs->lin.disseq->dtype[1], "TPV");
- if( tpvdist ) gal_wcs_to_cd(wcs);
- else gal_wcs_decompose_pc_cdelt(wcs);
+ int cdfordist, status=0, nkeyrec;
+
+  /* For the TPV, TNX and ZPX distortions, WCSLIB can't deal with the CDELT
+     keys properly and it's better to use the CD matrix instead, so we'll
+     use the 'gal_wcs_to_cd' function. */
+ cdfordist = ( wcs->lin.disseq
+ && ( !strcmp( wcs->lin.disseq->dtype[1], "TPV")
+ || !strcmp(wcs->lin.disseq->dtype[1], "TNX")
+ || !strcmp(wcs->lin.disseq->dtype[1], "ZPX") ) );
+
+ /* Finalize the linear transformation matrix. Note that some programs may
+ have worked on the WCS. So even if 'altlin' is already 2, we'll just
+ ensure that the final matrix is CD here. */
+ if(wcs->altlin==2 || cdfordist) gal_wcs_to_cd(wcs);
+ else gal_wcs_decompose_pc_cdelt(wcs);
/* Clean up small errors in the PC matrix and CDELT values. */
gal_wcs_clean_errors(wcs);
@@ -523,33 +532,33 @@ gal_wcs_write_in_fitsptr(fitsfile *fptr, struct wcsprm
*wcs)
status=0;
/* WCSLIB is going to write PC+CDELT keywords in any case. But when we
- have a TPV distortion, it is cleaner to use a CD matrix. Also,
- including and before version 7.3, WCSLIB wouldn't convert coordinates
- properly if the PC matrix is used with the TPV distortion. So to help
- users with WCSLIB 7.3 or earlier, we need to conver the PC matrix to
- CD. 'gal_wcs_to_cd' function already made sure that CDELT=1, so
- effectively the CD matrix and PC matrix are equivalent, we just need
- to convert the keyword names and delete the CDELT keywords. Note that
- zero-valued PC/CD elements may not be present, so we'll manually set
- 'status' to zero and not let CFITSIO crash.*/
+ have a TPV, TNX or ZPX distortion, it is cleaner to use a CD matrix
+ (WCSLIB can't convert coordinates properly if the PC matrix is used
+     with the TPV distortion). So, to help users avoid potential
+     problems with WCSLIB, the 'gal_wcs_to_cd' function has already
+     made sure that CDELTi=1.0. The CD matrix and PC matrix are thus
+     equivalent; we just need to convert the keyword names and delete
+     the CDELT keywords. Note that zero-valued PC/CD elements may not
+     be present, so we'll manually set 'status' to zero to keep CFITSIO
+     from crashing. */
if(wcs->altlin==2)
{
+ status=0; fits_delete_str(fptr, "CDELT1", &status);
+ status=0; fits_delete_str(fptr, "CDELT2", &status);
status=0; fits_modify_name(fptr, "PC1_1", "CD1_1", &status);
status=0; fits_modify_name(fptr, "PC1_2", "CD1_2", &status);
status=0; fits_modify_name(fptr, "PC2_1", "CD2_1", &status);
status=0; fits_modify_name(fptr, "PC2_2", "CD2_2", &status);
- status=0; fits_delete_str(fptr, "CDELT1", &status);
- status=0; fits_delete_str(fptr, "CDELT2", &status);
+ if(wcs->naxis==3)
+ {
+ status=0; fits_delete_str(fptr, "CDELT3", &status);
+ status=0; fits_modify_name(fptr, "PC1_3", "CD1_3", &status);
+ status=0; fits_modify_name(fptr, "PC2_3", "CD2_3", &status);
+ status=0; fits_modify_name(fptr, "PC3_1", "CD3_1", &status);
+ status=0; fits_modify_name(fptr, "PC3_2", "CD3_2", &status);
+ status=0; fits_modify_name(fptr, "PC3_3", "CD3_3", &status);
+ }
status=0;
- fits_write_comment(fptr, "The CD matrix is used instead of the "
- "PC+CDELT due to conflicts with TPV distortion "
- "in WCSLIB 7.3 (released on 2020/06/03) and "
- "ealier. By default Gnuastro will write "
- "PC+CDELT matrices because the rotation (PC) and "
- "pixel-scale (CDELT) are separate; providing "
- "more physically relevant metadata for human "
- "readers (PC+CDELT is also the default format "
- "of WCSLIB).", &status);
}
}
@@ -622,6 +631,406 @@ gal_wcs_write(struct wcsprm *wcs, char *filename,
+
+/*************************************************************
+ *********** Coordinate system ***********
+ *************************************************************/
+int
+gal_wcs_coordsys_from_string(char *coordsys)
+{
+ if( !strcmp(coordsys,"eq-j2000") ) return GAL_WCS_COORDSYS_EQJ2000;
+ else if( !strcmp(coordsys,"eq-b1950") ) return GAL_WCS_COORDSYS_EQB1950;
+ else if( !strcmp(coordsys,"ec-j2000") ) return GAL_WCS_COORDSYS_ECJ2000;
+ else if( !strcmp(coordsys,"ec-b1950") ) return GAL_WCS_COORDSYS_ECB1950;
+ else if( !strcmp(coordsys,"galactic") ) return GAL_WCS_COORDSYS_GALACTIC;
+ else if( !strcmp(coordsys,"supergalactic") )
+ return GAL_WCS_COORDSYS_SUPERGALACTIC;
+ else
+    error(EXIT_FAILURE, 0, "WCS coordinate system name '%s' not "
+          "recognized, currently recognized names are 'eq-j2000', "
+          "'eq-b1950', 'ec-j2000', 'ec-b1950', 'galactic' and "
+          "'supergalactic'", coordsys);
+
+ /* Control should not reach here. */
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to fix the "
+ "problem. Control should not reach the end of this function",
+ __func__, PACKAGE_BUGREPORT);
+ return GAL_WCS_COORDSYS_INVALID;
+}
+
+
+
+
+/* Identify the coordinate system of the WCS. */
+int
+gal_wcs_coordsys_identify(struct wcsprm *wcs)
+{
+  /* Equatorial (we keep the dashes ('-') to make sure the CTYPEs follow
+     the standard). */
+ if ( !strncmp(wcs->ctype[0], "RA---", 5)
+ && !strncmp(wcs->ctype[1], "DEC--", 5) )
+ {
+ if ( !strncmp(wcs->radesys, "FK4", 3) )
+ return GAL_WCS_COORDSYS_EQB1950;
+ else if ( !strncmp(wcs->radesys, "FK5", 3) )
+ return GAL_WCS_COORDSYS_EQJ2000;
+ else
+ error(EXIT_FAILURE, 0, "%s: the '%s' value for 'RADESYS' is "
+ "not yet implemented! Please contact us at %s to "
+ "implement it", __func__, wcs->radesys, PACKAGE_BUGREPORT);
+ }
+
+ /* Ecliptic. */
+ else if ( !strncmp(wcs->ctype[0], "ELON-", 5)
+ && !strncmp(wcs->ctype[1], "ELAT-", 5) )
+ if ( !strncmp(wcs->radesys, "FK4", 3) )
+ return GAL_WCS_COORDSYS_ECB1950;
+ else if ( !strncmp(wcs->radesys, "FK5", 3) )
+ return GAL_WCS_COORDSYS_ECJ2000;
+ else
+ error(EXIT_FAILURE, 0, "%s: the '%s' value for 'RADESYS' is "
+ "not yet implemented! Please contact us at %s to "
+ "implement it", __func__, wcs->radesys, PACKAGE_BUGREPORT);
+
+ /* Galactic. */
+ else if ( !strncmp(wcs->ctype[0], "GLON-", 5)
+ && !strncmp(wcs->ctype[1], "GLAT-", 5) )
+ return GAL_WCS_COORDSYS_GALACTIC;
+
+ /* SuperGalactic. */
+ else if ( !strncmp(wcs->ctype[0], "SLON-", 5)
+ && !strncmp(wcs->ctype[1], "SLAT-", 5) )
+ return GAL_WCS_COORDSYS_SUPERGALACTIC;
+
+ /* Other. */
+ else
+ error(EXIT_FAILURE, 0, "%s: the CTYPE values '%s' and '%s' are "
+ "not yet implemented! Please contact us at %s to "
+ "implement it", __func__, wcs->ctype[0], wcs->ctype[1],
+ PACKAGE_BUGREPORT);
+
+ /* Control should not reach here. */
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to fix the "
+ "problem. Control should not reach the end of this function",
+ __func__, PACKAGE_BUGREPORT);
+ return GAL_WCS_COORDSYS_INVALID;
+}
+
+
+
+
+
+/* Set the pole coordinates (current values taken from the WCSLIB
+   manual).
+   lng2p1: pole of input (1) system in output (2) system's longitude.
+ lat2p1: pole of input (1) system in output (2) system's latitude.
+ lng1p2: pole of output (2) system in input (1) system's longitude.
+
+ Values from NED (inspired by WCSLIB manual's example).
+ https://ned.ipac.caltech.edu/coordinate_calculator
+
+ longi (deg) latit (deg) OUTPUT INPUT
+ ----- ----- ------ -----
+ (------------, -----------) B1950 equ. coords. of B1950 equ. pole.
+ (180.31684301, 89.72174782) J2000 equ. coords. of B1950 equ. pole.
+ (90.000000000, 66.55421111) B1950 ecl. coords. of B1950 equ. pole.
+ (90.699521110, 66.56068919) J2000 ecl. coords. of B1950 equ. pole.
+ (123.00000000, 27.40000000) Galactic coords. of B1950 equ. pole.
+ (26.731537070, 15.64407736) Supgalactic coords. of B1950 equ. pole.
+
+ (359.68621044, 89.72178502) B1950 equ. coords. of J2000 equ. pole.
+ (------------, -----------) J2000 equ. coords. of J2000 equ. pole.
+ (89.300755510, 66.55417728) B1950 ecl. coords. of J2000 equ. pole.
+ (90.000000000, 66.56070889) J2000 ecl. coords. of J2000 equ. pole.
+ (122.93200023, 27.12843056) Galactic coords. of J2000 equ. pole.
+ (26.450516650, 15.70886131) Supgalactic coords. of J2000 equ. pole.
+
+ (270.00000000, 66.55421111) B1950 equ. coords. of B1950 ecl. pole.
+ (269.99920697, 66.55421892) J2000 equ. coords. of B1950 ecl. pole.
+ (------------, -----------) B1950 ecl. coords. of B1950 ecl. pole.
+ (267.21656404, 89.99350237) J2000 ecl. coords. of B1950 ecl. pole.
+ (96.376479150, 29.81195400) Galactic coords. of B1950 ecl. pole.
+ (33.378919140, 38.34766498) Supgalactic coords. of B1950 ecl. pole.
+
+ (270.00099211, 66.56069675) B1950 equ. coords. of J2000 ecl. pole.
+ (270.00000000, 66.56070889) J2000 equ. coords. of J2000 ecl. pole.
+ (86.517962160, 89.99350236) B1950 ecl. coords. of J2000 ecl. pole.
+ (------------, -----------) J2000 ecl. coords. of J2000 ecl. pole.
+ (96.383958840, 29.81163604) Galactic coords. of J2000 ecl. pole.
+ (33.376119480, 38.34154959) Supgalactic coords. of J2000 ecl. pole.
+
+ (192.25000000, 27.40000000) B1950 equ. coords. of Galactic pole.
+ (192.85949646, 27.12835323) J2000 equ. coords. of Galactic pole.
+ (179.32094769, 29.81195400) B1950 ecl. coords. of Galactic pole.
+ (180.02317894, 29.81153742) J2000 ecl. coords. of Galactic pole.
+ (------------, -----------) Galactic coords. of Galactic pole.
+ (90.000000000, 6.320000000) Supgalactic coords. of Galactic pole.
+
+ (283.18940711, 15.64407736) B1950 equ. coords. of SupGalactic pole.
+ (283.75420420, 15.70894043) J2000 equ. coords. of SupGalactic pole.
+ (286.26975051, 38.34766498) B1950 ecl. coords. of SupGalactic pole.
+ (286.96654469, 38.34158720) J2000 ecl. coords. of SupGalactic pole.
+ (47.370000000, 6.320000000) Galactic coords. of SupGalactic pole.
+ (------------, -----------) Supgalactic coords. of SupGalactic pole.
+ */
+static void
+wcs_coordsys_insys_pole_in_outsys(int insys, int outsys, double *lng2p1,
+ double *lat2p1, double *lng1p2)
+{
+  switch( insys )
+    {
+    case GAL_WCS_COORDSYS_EQB1950:
+      switch( outsys)
+        {
+        case GAL_WCS_COORDSYS_EQB1950:
+          *lng2p1=NAN; *lat2p1=NAN; *lng1p2=NAN;
+          return;
+        case GAL_WCS_COORDSYS_EQJ2000:
+          *lng2p1=180.31684301; *lat2p1=89.72174782; *lng1p2=359.68621044;
+          return;
+        case GAL_WCS_COORDSYS_ECB1950:
+          *lng2p1=90.000000000; *lat2p1=66.55421111; *lng1p2=270.00000000;
+          return;
+        case GAL_WCS_COORDSYS_ECJ2000:
+          *lng2p1=90.699521110; *lat2p1=66.56068919; *lng1p2=270.00099211;
+          return;
+        case GAL_WCS_COORDSYS_GALACTIC:
+          *lng2p1=123.00000000; *lat2p1=27.40000000; *lng1p2=192.25000000;
+          return;
+        case GAL_WCS_COORDSYS_SUPERGALACTIC:
+          *lng2p1=26.731537070; *lat2p1=15.64407736; *lng1p2=283.18940711;
+          return;
+        default:
+          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+                "fix the problem. The code '%d' isn't a recognized WCS "
+                "coordinate system ID for 'outsys' (input EQB1950)",
+                __func__, PACKAGE_BUGREPORT, outsys);
+        }
+      break;
+    case GAL_WCS_COORDSYS_EQJ2000:
+      switch( outsys)
+        {
+        case GAL_WCS_COORDSYS_EQB1950:
+          *lng2p1=359.68621044; *lat2p1=89.72178502; *lng1p2=180.31684301;
+          return;
+        case GAL_WCS_COORDSYS_EQJ2000:
+          *lng2p1=NAN; *lat2p1=NAN; *lng1p2=NAN;
+          return;
+        case GAL_WCS_COORDSYS_ECB1950:
+          *lng2p1=89.300755510; *lat2p1=66.55417728; *lng1p2=269.99920697;
+          return;
+        case GAL_WCS_COORDSYS_ECJ2000:
+          *lng2p1=90.000000000; *lat2p1=66.56070889; *lng1p2=270.00000000;
+          return;
+        case GAL_WCS_COORDSYS_GALACTIC:
+          *lng2p1=122.93200023; *lat2p1=27.12843056; *lng1p2=192.85949646;
+          return;
+        case GAL_WCS_COORDSYS_SUPERGALACTIC:
+          *lng2p1=26.450516650; *lat2p1=15.70886131; *lng1p2=283.75420420;
+          return;
+        default:
+          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+                "fix the problem. The code '%d' isn't a recognized WCS "
+                "coordinate system ID for 'outsys' (input EQJ2000)",
+                __func__, PACKAGE_BUGREPORT, outsys);
+        }
+      break;
+    case GAL_WCS_COORDSYS_ECB1950:
+      switch( outsys)
+        {
+        case GAL_WCS_COORDSYS_EQB1950:
+          *lng2p1=270.00000000; *lat2p1=66.55421111; *lng1p2=90.000000000;
+          return;
+        case GAL_WCS_COORDSYS_EQJ2000:
+          *lng2p1=269.99920697; *lat2p1=66.55421892; *lng1p2=89.300755510;
+          return;
+        case GAL_WCS_COORDSYS_ECB1950:
+          *lng2p1=NAN; *lat2p1=NAN; *lng1p2=NAN;
+          return;
+        case GAL_WCS_COORDSYS_ECJ2000:
+          *lng2p1=267.21656404; *lat2p1=89.99350237; *lng1p2=86.517962160;
+          return;
+        case GAL_WCS_COORDSYS_GALACTIC:
+          *lng2p1=96.383958840; *lat2p1=29.81163604; *lng1p2=179.32094769;
+          return;
+        case GAL_WCS_COORDSYS_SUPERGALACTIC:
+          *lng2p1=33.378919140; *lat2p1=38.34766498; *lng1p2=286.26975051;
+          return;
+        default:
+          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+                "fix the problem. The code '%d' isn't a recognized WCS "
+                "coordinate system ID for 'outsys' (input ECB1950)",
+                __func__, PACKAGE_BUGREPORT, outsys);
+        }
+      break;
+    case GAL_WCS_COORDSYS_ECJ2000:
+      switch( outsys)
+        {
+        case GAL_WCS_COORDSYS_EQB1950:
+          *lng2p1=270.00099211; *lat2p1=66.56069675; *lng1p2=90.699521110;
+          return;
+        case GAL_WCS_COORDSYS_EQJ2000:
+          *lng2p1=270.00000000; *lat2p1=66.56070889; *lng1p2=90.000000000;
+          return;
+        case GAL_WCS_COORDSYS_ECB1950:
+          *lng2p1=86.517962160; *lat2p1=89.99350236; *lng1p2=267.21656404;
+          return;
+        case GAL_WCS_COORDSYS_ECJ2000:
+          *lng2p1=NAN; *lat2p1=NAN; *lng1p2=NAN;
+          return;
+        case GAL_WCS_COORDSYS_GALACTIC:
+          *lng2p1=96.383958840; *lat2p1=29.81163604; *lng1p2=180.02317894;
+          return;
+        case GAL_WCS_COORDSYS_SUPERGALACTIC:
+          *lng2p1=33.376119480; *lat2p1=38.34154959; *lng1p2=286.96654469;
+          return;
+        default:
+          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+                "fix the problem. The code '%d' isn't a recognized WCS "
+                "coordinate system ID for 'outsys' (input ECJ2000)",
+                __func__, PACKAGE_BUGREPORT, outsys);
+        }
+      break;
+    case GAL_WCS_COORDSYS_GALACTIC:
+      switch( outsys)
+        {
+        case GAL_WCS_COORDSYS_EQB1950:
+          *lng2p1=192.25000000; *lat2p1=27.40000000; *lng1p2=123.00000000;
+          return;
+        case GAL_WCS_COORDSYS_EQJ2000:
+          *lng2p1=192.85949646; *lat2p1=27.12835323; *lng1p2=122.93200023;
+          return;
+        case GAL_WCS_COORDSYS_ECB1950:
+          *lng2p1=179.32094769; *lat2p1=29.81195400; *lng1p2=96.376479150;
+          return;
+        case GAL_WCS_COORDSYS_ECJ2000:
+          *lng2p1=180.02317894; *lat2p1=29.81153742; *lng1p2=96.383958840;
+          return;
+        case GAL_WCS_COORDSYS_GALACTIC:
+          *lng2p1=NAN; *lat2p1=NAN; *lng1p2=NAN;
+          return;
+        case GAL_WCS_COORDSYS_SUPERGALACTIC:
+          *lng2p1=90.000000000; *lat2p1=6.320000000; *lng1p2=47.370000000;
+          return;
+        default:
+          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+                "fix the problem. The code '%d' isn't a recognized WCS "
+                "coordinate system ID for 'outsys' (input GALACTIC)",
+                __func__, PACKAGE_BUGREPORT, outsys);
+        }
+      break;
+    case GAL_WCS_COORDSYS_SUPERGALACTIC:
+      switch( outsys)
+        {
+        case GAL_WCS_COORDSYS_EQB1950:
+          *lng2p1=283.18940711; *lat2p1=15.64407736; *lng1p2=26.731537070;
+          return;
+        case GAL_WCS_COORDSYS_EQJ2000:
+          *lng2p1=283.75420420; *lat2p1=15.70894043; *lng1p2=26.450516650;
+          return;
+        case GAL_WCS_COORDSYS_ECB1950:
+          *lng2p1=286.26975051; *lat2p1=38.34766498; *lng1p2=33.378919140;
+          return;
+        case GAL_WCS_COORDSYS_ECJ2000:
+          *lng2p1=286.96654469; *lat2p1=38.34158720; *lng1p2=33.376119480;
+          return;
+        case GAL_WCS_COORDSYS_GALACTIC:
+          *lng2p1=47.370000000; *lat2p1=6.320000000; *lng1p2=90.000000000;
+          return;
+        case GAL_WCS_COORDSYS_SUPERGALACTIC:
+          *lng2p1=NAN; *lat2p1=NAN; *lng1p2=NAN;
+          return;
+        default:
+          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+                "fix the problem. The code '%d' isn't a recognized WCS "
+                "coordinate system ID for 'outsys' (input SUPERGALACTIC)",
+                __func__, PACKAGE_BUGREPORT, outsys);
+        }
+      break;
+    default:
+      error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+            "fix the problem. The code '%d' isn't a recognized WCS "
+            "coordinate system ID for 'insys'", __func__,
+            PACKAGE_BUGREPORT, insys);
+    }
+
+}
+
+
+
+
+
+static void
+wcs_coordsys_ctypes(int coordsys, char **clng, char **clat, char **radesys)
+{
+ switch( coordsys)
+ {
+ case GAL_WCS_COORDSYS_EQB1950:
+ *clng="RA"; *clat="DEC"; *radesys="FK4"; break;
+ case GAL_WCS_COORDSYS_EQJ2000:
+ *clng="RA"; *clat="DEC"; *radesys="FK5"; break;
+ case GAL_WCS_COORDSYS_ECB1950:
+ *clng="ELON"; *clat="ELAT"; *radesys="FK4"; break;
+ case GAL_WCS_COORDSYS_ECJ2000:
+ *clng="ELON"; *clat="ELAT"; *radesys="FK5"; break;
+ case GAL_WCS_COORDSYS_GALACTIC:
+ *clng="GLON"; *clat="GLAT"; *radesys=NULL; break;
+ case GAL_WCS_COORDSYS_SUPERGALACTIC:
+ *clng="SLON"; *clat="SLAT"; *radesys=NULL; break;
+ default:
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
+ "fix the problem. The code '%d' isn't a recognized WCS "
+ "coordinate system ID for 'coordsys'", __func__,
+ PACKAGE_BUGREPORT, coordsys);
+ }
+}
+
+
+
+
+/* Convert the coordinate system. */
+struct wcsprm *
+gal_wcs_coordsys_convert(struct wcsprm *wcs, int outcoordsys)
+{
+ int incoordsys;
+ char *alt=NULL; /* Only concerned with primary wcs. */
+ double equinox=0.0f; /* To preserve current value. */
+ struct wcsprm *out=NULL;
+ char *clng, *clat, *radesys;
+ double lng2p1=NAN, lat2p1=NAN, lng1p2=NAN;
+
+
+  /* Just in case the input is a NULL pointer. */
+ if(wcs==NULL) return NULL;
+
+  /* Get the input's coordinate system and see if it needs to be
+     converted at all (i.e., if the output coordinate system is
+     different). If it's the same, just copy the input and return. */
+ incoordsys=gal_wcs_coordsys_identify(wcs);
+ if(incoordsys==outcoordsys)
+ {
+ out=gal_wcs_copy(wcs);
+ return out;
+ }
+
+ /* Find the necessary pole coordinates. Note that we have already
+ accounted for the fact that the input and output coordinate systems
+ may be the same above, so the NaN outputs will never occur here. */
+ wcs_coordsys_insys_pole_in_outsys(incoordsys, outcoordsys,
+ &lng2p1, &lat2p1, &lng1p2);
+
+ /* Find the necessary CTYPE names of the output. */
+ wcs_coordsys_ctypes(outcoordsys, &clng, &clat, &radesys);
+
+ /* Convert the WCS's coordinate system (if 'wcsccs' is available). */
+#if GAL_CONFIG_HAVE_WCSLIB_WCSCCS
+ out=gal_wcs_copy(wcs);
+ wcsccs(out, lng2p1, lat2p1, lng1p2, clng, clat, radesys, equinox, alt);
+#else
+
+ /* Just to avoid compiler warnings for 'equinox' and 'alt'. */
+ if(alt) lng2p1+=equinox;
+
+ /* Print error message and abort. */
+ error(EXIT_FAILURE, 0, "%s: the 'wcsccs' function isn't available "
+ "in the version of WCSLIB that this Gnuastro was built with "
+        "('wcsccs' was first available in WCSLIB 7.5, released in "
+        "March 2021). Therefore, Gnuastro can't perform the requested "
+        "WCS coordinate system conversion. Please update your "
+ "WCSLIB and re-build Gnuastro with it to use this feature. "
+ "You can follow the instructions here to install the latest "
+ "version of WCSLIB:\n"
+ " https://www.gnu.org/software/gnuastro/manual/html_node/"
+ "WCSLIB.html\n"
+ "And then re-build Gnuastro as described here:\n"
+ " https://www.gnu.org/software/gnuastro/manual/"
+ "html_node/Quick-start.html\n\n",
+ __func__);
+#endif
+
+ /* Return. */
+ return out;
+}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
/*************************************************************
*********** Distortions ***********
*************************************************************/
@@ -1307,7 +1716,7 @@ gal_wcs_decompose_pc_cdelt(struct wcsprm *wcs)
/* Set the WCS structure to use the CD matrix. */
-static void
+void
gal_wcs_to_cd(struct wcsprm *wcs)
{
size_t i, j, naxis;
@@ -1551,10 +1960,9 @@ gal_wcs_pixel_area_arcsec2(struct wcsprm *wcs)
double out;
double *pixscale;
- /* A small sanity check. Later, when higher dimensions are necessary, we
- can find which ones correlate to RA and Dec and use them to find the
- pixel area in arcsec^2. */
- if(wcs->naxis!=2) return NAN;
+ /* Some basic sanity checks. */
+ if(wcs==NULL) return NAN;
+ if(wcs->naxis==1) return NAN;
/* Check if the units of the axis are degrees or not. Currently all FITS
images I have worked with use 'deg' for degrees. If other alternatives
@@ -1589,8 +1997,10 @@ gal_wcs_coverage(char *filename, char *hdu, size_t *ondim,
size_t i, ndim, *dsize=NULL, numrows;
double *x=NULL, *y=NULL, *z=NULL, *min, *max, *center, *width;
- /* Read the desired WCS. */
- wcs=gal_wcs_read(filename, hdu, 0, 0, &nwcs);
+  /* Read the desired WCS (note that the linear matrix is irrelevant
+     here; we'll just select PC because it's the default WCS mode). */
+ wcs=gal_wcs_read(filename, hdu, GAL_WCS_LINEAR_MATRIX_PC,
+ 0, 0, &nwcs);
/* If a WCS doesn't exist, return NULL. */
if(wcs==NULL) return 0;
diff --git a/tests/script/list-by-night.sh b/tests/script/list-by-night.sh
index 4871eeb..3333a8e 100755
--- a/tests/script/list-by-night.sh
+++ b/tests/script/list-by-night.sh
@@ -16,24 +16,24 @@
+
# Preliminaries
# =============
#
# Set the variables (The executable is in the build tree). Do the
# basic checks to see if the executable is made or if the defaults
# file exists (basicchecks.sh is in the source tree).
-#
-# We will be adding noise to two images: the warped (smaller) and unwarped
-# (larger) mock images. The warped one will be used by programs that don't
-# care about the size of the image, but the larger one will be used by
-# those that do: for example SubtractSky and NoiseChisel will be better
-# tested on a larger image.
prog=sort-by-night
dep1=fits
dep2=table
dep1name=../bin/$dep1/ast$dep1
dep2name=../bin/$dep2/ast$dep2
execname=../bin/script/astscript-$prog
+fits1name=clearcanvas.fits
+fits2name=aperturephot.fits
+fits3name=convolve_spatial.fits
+fits4name=convolve_spatial_noised.fits
+fits5name=convolve_spatial_noised_detected.fits
@@ -47,9 +47,16 @@ execname=../bin/script/astscript-$prog
#
# - The executable script was not made.
# - The programs it uses weren't made.
+# - The input data weren't made.
if [ ! -f $execname ]; then echo "$execname doesn't exist."; exit 77; fi
if [ ! -f $dep1name ]; then echo "$dep1name doesn't exist."; exit 77; fi
if [ ! -f $dep2name ]; then echo "$dep2name doesn't exist."; exit 77; fi
+if [ ! -f $fits1name ]; then echo "$fits1name doesn't exist."; exit 77; fi
+if [ ! -f $fits2name ]; then echo "$fits2name doesn't exist."; exit 77; fi
+if [ ! -f $fits3name ]; then echo "$fits3name doesn't exist."; exit 77; fi
+if [ ! -f $fits4name ]; then echo "$fits4name doesn't exist."; exit 77; fi
+if [ ! -f $fits5name ]; then echo "$fits5name doesn't exist."; exit 77; fi
+
@@ -74,4 +81,5 @@ ln -sf $dep2name ast$dep2
# Since we want the script to recognize the programs that it will use from
# this same build of Gnuastro, we'll add the current directory to PATH.
export PATH="./:$PATH"
-$check_with_program $execname *.fits
+$check_with_program $execname $fits1name $fits2name $fits3name \
+ $fits4name $fits5name