From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 5dac3c3 075/113: Recent work in master imported, small conflict in book fixed
Date: Fri, 16 Apr 2021 10:33:51 -0400 (EDT)
branch: master
commit 5dac3c33e0981c07e58211209ebf71e7715961a4
Merge: d6051a1 ab6a787
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Recent work in master imported, small conflict in book fixed
A small typo conflict in the book was fixed in this commit.
---
NEWS | 42 +++-
README | 50 ++--
bin/convertt/convertt.c | 2 +-
bin/fits/fits.c | 2 +-
bin/match/match.c | 6 +-
bin/mkcatalog/mkcatalog.c | 4 +-
bin/mkcatalog/ui.c | 10 +
bin/mkcatalog/upperlimit.c | 2 +-
bin/noisechisel/threshold.c | 2 +-
bin/segment/clumps.c | 2 +-
bin/segment/segment.c | 2 +-
bin/statistics/statistics.c | 2 +-
bin/statistics/ui.c | 2 +-
bin/table/args.h | 20 +-
bin/table/main.h | 3 +-
bin/table/table.c | 3 +-
bin/table/ui.c | 6 +
bin/table/ui.h | 7 +-
bootstrap | 48 +++-
configure.ac | 16 +-
developer-build | 111 +++++++--
doc/announce-acknowledge.txt | 17 +-
doc/gnuastro.en.html | 10 +-
doc/gnuastro.fr.html | 8 +-
doc/gnuastro.texi | 530 ++++++++++++++++++++++++++-----------------
doc/release-checklist.txt | 75 +++---
lib/data.c | 58 ++---
lib/gnuastro/data.h | 10 +-
lib/gnuastro/table.h | 3 +-
lib/gnuastro/txt.h | 3 +-
lib/interpolate.c | 7 +
lib/table.c | 9 +-
lib/txt.c | 67 +++---
tests/match/merged-cols.sh | 4 +-
34 files changed, 742 insertions(+), 401 deletions(-)
diff --git a/NEWS b/NEWS
index db4fa6b..ef6d63d 100644
--- a/NEWS
+++ b/NEWS
@@ -1,10 +1,38 @@
GNU Astronomy Utilities NEWS -*- outline -*-
-* Noteworthy changes in release A.A (library 4.0.0) (YYYY-MM-DD) [stable]
+* Noteworthy changes in release X.X (library 5.0.0) (YYYY-MM-DD) [alpha]
** New features
+ Table:
+ - `--colinfoinstdout': column information when writing to standard output.
+
+** Removed features
+
+** Changed features
+
+ Library:
+ - gal_txt_write: new `colinfoinstdout' argument.
+ - gal_table_write: new `colinfoinstdout' argument.
+
+** Bugs fixed
+
+ bug #54057: Building failure due to not finding gsl_interp_steffen.
+ bug #54063: Match tests in make check fail randomly.
+ bug #54186: MakeCatalog's --checkupperlimit not keeping output's name.
+
+
+
+
+
+* Noteworthy changes in release 0.6 (library 4.0.0) (2018-06-04) [stable]
+
+** New features
+
+ Building:
+ - New optional dependency: The TIFF library (libtiff).
+
All programs:
- Input image dataset(s) can be in any of the formats recognized by
Gnuastro (e.g., FITS, TIFF, JPEG), provided that their libraries
@@ -195,7 +223,7 @@ GNU Astronomy Utilities NEWS -*- outline -*-
gal_statistics_number: the output dataset now has a `size_t' type. Until
now it was `uint64_t'.
-** Bug fixes
+** Bugs fixed
bug #50957: --version output not possible on Mac OS X
bug #52979: Many unused result warnings for asprintf in some compilers.
@@ -378,7 +406,7 @@ GNU Astronomy Utilities NEWS -*- outline -*-
the input coordinates, thus their API has been greatly simplified and
their functionality increased.
-** Bug fixes
+** Bugs fixed
ConvertType crash when changing values (bug #52010).
@@ -515,7 +543,7 @@ GNU Astronomy Utilities NEWS -*- outline -*-
not their allocated blocks of memory). Until now, it was necessary for
the two blocks to have the same size and this is no longer the case.
-** Bug fixes
+** Bugs fixed
MakeProfiles long options on 32bit big endian systems (bug #51341).
@@ -835,7 +863,7 @@ GNU Astronomy Utilities NEWS -*- outline -*-
this to set edge pixels that are not fully covered in the new grid to
blank and have a flat warped image.
-** Bug fixes
+** Bugs fixed
Using `%zu' to print `size_t' variables for clean build on 32-bit
systems.
@@ -871,7 +899,7 @@ GNU Astronomy Utilities NEWS -*- outline -*-
* Noteworthy changes in release 0.2 (library 0.0.0) (2016-10-03) [stable]
-** Bug fixes
+** Bugs fixed
Linker errors on some operating systems have been fixed (bug #48076).
@@ -958,7 +986,7 @@ GNU Astronomy Utilities NEWS -*- outline -*-
* Noteworthy changes in release 0.1 (2016-05-30) [stable]
-** Bug fixes
+** Bugs fixed
MakeCatalog's problem in checking the sizes of all input images is now
fixed.
diff --git a/README b/README
index 30c0247..94d965a 100644
--- a/README
+++ b/README
@@ -19,18 +19,19 @@ sections). There is also a separate chapter devoted to tutorials for
effectively use Gnuastro combined with other software already available on
your Unix-like operating system (see Chapter 2).
-If you have already installed gnuastro, you can read the full book by
-running the following command. You can go through the whole book by
-pressing the 'SPACE' key, and leave the Info environment at any time by
-pressing 'q' key. See the "Getting help" section below (in this file) or in
-the book for more.
+To install Gnuastro, follow the instructions in the "Installing Gnuastro"
+section below. If you have already installed Gnuastro, you can read the
+full book by running the following command. You can go through the whole
+book by pressing the 'SPACE' key, and leave the Info environment at any
+time by pressing the 'q' key. See the "Getting help" section below (in this
+file) or in the book for more.
info gnuastro
-The programs released in version 0.2 are listed below followed by their
-executable name in parenthesis and a short description. This list is
-ordered alphabetically. In the book, they are grouped and ordered by
-context under categories/chapters.
+Gnuastro's programs are listed below followed by their executable name in
+parentheses and a short description. This list is ordered
+alphabetically. In the book, they are grouped and ordered by context under
+categories/chapters.
- Arithmetic (astarithmetic): For arithmetic operations on multiple
(theoretically unlimited) number of datasets (images). It has a large
@@ -120,17 +121,32 @@ Installing Gnuastro
-------------------
The mandatory dependencies which are required to install Gnuastro from the
-tarball are listed below. See the "Dependencies" section of the book for
-their detailed installation guides and optional dependencies to enable
-extra features. If you have just cloned Gnuastro and want to install from
-the version controlled source, please read the 'README-hacking' file (not
-available in the tarball) or the "Bootstrapping dependencies" subsection of
-the manual before continuing.
+tarball are listed below.
- GNU Scientific Library (GSL): https://www.gnu.org/software/gsl/
- CFITSIO: http://heasarc.gsfc.nasa.gov/fitsio/
- WCSLIB: http://www.atnf.csiro.au/people/mcalabre/WCS/
+The optional dependencies are:
+
+ - GNU Libtool: https://www.gnu.org/software/libtool/
+ - Git library (libgit2): https://libgit2.github.com/
+ - JPEG library (libjpeg): http://ijg.org/
+ - TIFF library (libtiff): http://simplesystems.org/libtiff/
+ - Ghostscript: https://www.ghostscript.com/
+
+See the "Dependencies" section of the book for their detailed installation
+guides and optional dependencies to enable extra features. Prior to
+installation, you can find it in the `doc/gnuastro.texi' file (source of
+the book), or on the web:
+
+ https://www.gnu.org/software/gnuastro/manual/html_node/Dependencies.html
+
+If you have just cloned Gnuastro and want to install from the version
+controlled source, please read the 'README-hacking' file (not available in
+the tarball) or the "Bootstrapping dependencies" subsection of the manual
+before continuing.
+
The most recent stable Gnuastro release can be downloaded from the
following link. Please see the "Downloading the source" section of the
Gnuastro book for a more complete discussion of your download options.
@@ -142,8 +158,8 @@ the standard GNU Build system as shown below. After the './configure'
command, Gnuastro will print messages upon the successful completion of
each step, giving further information and suggestions for the next steps.
- tar xf gnuastro-latest.tar.gz # Also works for `tar.lz' files
- cd gnuastro-0.1
+ tar xf gnuastro-latest.tar.lz # Also works for `tar.gz' files
+ cd gnuastro-X.X
./configure
make
make check
diff --git a/bin/convertt/convertt.c b/bin/convertt/convertt.c
index 7ed7ebf..be79d20 100644
--- a/bin/convertt/convertt.c
+++ b/bin/convertt/convertt.c
@@ -336,7 +336,7 @@ convertt(struct converttparams *p)
/* Plain text: only one channel is acceptable. */
case OUT_FORMAT_TXT:
gal_checkset_writable_remove(p->cp.output, 0, p->cp.dontdelete);
- gal_txt_write(p->chll, NULL, p->cp.output);
+ gal_txt_write(p->chll, NULL, p->cp.output, 0);
break;
/* JPEG: */
diff --git a/bin/fits/fits.c b/bin/fits/fits.c
index 0381d72..320ec50 100644
--- a/bin/fits/fits.c
+++ b/bin/fits/fits.c
@@ -235,7 +235,7 @@ fits_print_extension_info(struct fitsparams *p)
printf(" Column 4: Size of data in HDU.\n");
printf("-----\n");
}
- gal_table_write(cols, NULL, GAL_TABLE_FORMAT_TXT, NULL, NULL);
+ gal_table_write(cols, NULL, GAL_TABLE_FORMAT_TXT, NULL, NULL, 0);
gal_list_data_free(cols);
}
diff --git a/bin/match/match.c b/bin/match/match.c
index bfbbf21..bd659c3 100644
--- a/bin/match/match.c
+++ b/bin/match/match.c
@@ -104,7 +104,7 @@ match_catalog_read_write_all(struct matchparams *p, size_t *permutation,
else
{
/* Write the catalog to a file. */
- gal_table_write(cat, NULL, p->cp.tableformat, outname, extname);
+ gal_table_write(cat, NULL, p->cp.tableformat, outname, extname, 0);
/* Correct arrays and sizes (when `notmatched' was called). The
`array' element has to be corrected for later freeing.
@@ -169,7 +169,7 @@ match_catalog_write_one(struct matchparams *p, gal_data_t *a, gal_data_t *b,
/* Reverse the table and write it out. */
gal_list_data_reverse(&cat);
- gal_table_write(cat, NULL, p->cp.tableformat, p->cp.output, "MATCHED");
+ gal_table_write(cat, NULL, p->cp.tableformat, p->cp.output, "MATCHED", 0);
}
@@ -255,7 +255,7 @@ match_catalog(struct matchparams *p)
/* Write them into the table. */
gal_table_write(mcols, NULL, p->cp.tableformat, p->logname,
- "LOG_INFO");
+ "LOG_INFO", 0);
/* Set the comment pointer to NULL: they weren't allocated. */
mcols->comment=NULL;
diff --git a/bin/mkcatalog/mkcatalog.c b/bin/mkcatalog/mkcatalog.c
index 3d39c67..97ef3bc 100644
--- a/bin/mkcatalog/mkcatalog.c
+++ b/bin/mkcatalog/mkcatalog.c
@@ -617,7 +617,7 @@ mkcatalog_write_outputs(struct mkcatalogparams *p)
write the objects catalog and free the comments. */
gal_list_str_reverse(&comments);
gal_table_write(p->objectcols, comments, p->cp.tableformat, p->objectsout,
- "OBJECTS");
+ "OBJECTS", 0);
gal_list_str_free(comments, 1);
@@ -636,7 +636,7 @@ mkcatalog_write_outputs(struct mkcatalogparams *p)
write the objects catalog and free the comments. */
gal_list_str_reverse(&comments);
gal_table_write(p->clumpcols, comments, p->cp.tableformat, p->clumpsout,
- "CLUMPS");
+ "CLUMPS", 0);
gal_list_str_free(comments, 1);
}
diff --git a/bin/mkcatalog/ui.c b/bin/mkcatalog/ui.c
index 0422049..b7b9311 100644
--- a/bin/mkcatalog/ui.c
+++ b/bin/mkcatalog/ui.c
@@ -1170,6 +1170,7 @@ static void
ui_preparations_outnames(struct mkcatalogparams *p)
{
char *suffix;
+ uint8_t keepinputdir=p->cp.keepinputdir;
/* The process differs if an output filename has been given. */
if(p->cp.output)
@@ -1211,13 +1212,22 @@ ui_preparations_outnames(struct mkcatalogparams *p)
/* If an upperlimit check image is requsted, then set its filename. */
if(p->checkupperlimit)
{
+ /* See if the directory should be respected. */
+ p->cp.keepinputdir = p->cp.output ? 1 : p->cp.keepinputdir;
+
+ /* Set the suffix. */
suffix = ( p->cp.tableformat==GAL_TABLE_FORMAT_TXT
? "_upcheck.txt" : "_upcheck.fits" );
+
+ /* Set the file name. */
p->upcheckout=gal_checkset_automatic_output(&p->cp,
( p->cp.output
? p->cp.output
: p->objectsfile),
suffix);
+
+ /* Set `keepinputdir' to what it was before. */
+ p->cp.keepinputdir=keepinputdir;
}
/* Just to avoid bugs (`p->cp.output' must no longer be used), we'll free
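The hunk above addresses bug #54186 by temporarily forcing `keepinputdir' while the name of the upper-limit check table is constructed, so the check table follows the name given with `--output'. A minimal command-line sketch of the intended behavior; only `--checkupperlimit', `--output' and the `_upcheck' suffix come from the sources above, the file names are hypothetical:

    $ astmkcatalog seg.fits --checkupperlimit --output=out/mycat.fits
    # Expected effect of the fix: the check table's name is derived from
    # the '--output' value (something like 'out/mycat_upcheck.fits', or
    # '..._upcheck.txt' with a plain-text --tableformat), instead of
    # ignoring the requested output name.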
diff --git a/bin/mkcatalog/upperlimit.c b/bin/mkcatalog/upperlimit.c
index d883a4e..a338a10 100644
--- a/bin/mkcatalog/upperlimit.c
+++ b/bin/mkcatalog/upperlimit.c
@@ -447,7 +447,7 @@ upperlimit_write_check(struct mkcatalogparams *p, gal_list_sizet_t *check_x,
else { y->next=s; }
gal_list_str_reverse(&comments);
gal_table_write(x, comments, p->cp.tableformat, p->upcheckout,
- "UPPERLIMIT_CHECK");
+ "UPPERLIMIT_CHECK", 0);
/* Inform the user. */
if(!p->cp.quiet)
diff --git a/bin/noisechisel/threshold.c b/bin/noisechisel/threshold.c
index c489f54..70ab66d 100644
--- a/bin/noisechisel/threshold.c
+++ b/bin/noisechisel/threshold.c
@@ -226,7 +226,7 @@ threshold_write_sn_table(struct noisechiselparams *p, gal_data_t *insn,
/* write the table. */
gal_checkset_writable_remove(filename, 0, 1);
- gal_table_write(cols, comments, p->cp.tableformat, filename, "SN");
+ gal_table_write(cols, comments, p->cp.tableformat, filename, "SN", 0);
/* Clean up (if necessary). */
diff --git a/bin/segment/clumps.c b/bin/segment/clumps.c
index d9bf72d..b01f12a 100644
--- a/bin/segment/clumps.c
+++ b/bin/segment/clumps.c
@@ -548,7 +548,7 @@ clumps_write_sn_table(struct segmentparams *p, gal_data_t *insn,
/* write the table. */
gal_checkset_writable_remove(filename, 0, 1);
- gal_table_write(cols, comments, p->cp.tableformat, filename, "SN");
+ gal_table_write(cols, comments, p->cp.tableformat, filename, "SN", 0);
/* Clean up (if necessary). */
diff --git a/bin/segment/segment.c b/bin/segment/segment.c
index a0dbd6a..189f7d0 100644
--- a/bin/segment/segment.c
+++ b/bin/segment/segment.c
@@ -797,7 +797,7 @@ segment_save_sn_table(struct clumps_params *clprm)
objind->next=clumpinobj;
gal_checkset_writable_remove(p->clumpsn_d_name, 0, 1);
gal_table_write(objind, comments, p->cp.tableformat, p->clumpsn_d_name,
- "CLUMPS_SN");
+ "CLUMPS_SN", 0);
/* Clean up. */
diff --git a/bin/statistics/statistics.c b/bin/statistics/statistics.c
index 3ecdf48..5cd1c63 100644
--- a/bin/statistics/statistics.c
+++ b/bin/statistics/statistics.c
@@ -588,7 +588,7 @@ write_output_table(struct statisticsparams *p, gal_data_t *table,
/* Write the table. */
gal_checkset_writable_remove(output, 0, p->cp.dontdelete);
- gal_table_write(table, comments, p->cp.tableformat, output, "TABLE");
+ gal_table_write(table, comments, p->cp.tableformat, output, "TABLE", 0);
/* Let the user know, if we aren't in quiet mode. */
diff --git a/bin/statistics/ui.c b/bin/statistics/ui.c
index 3819c11..edf2e85 100644
--- a/bin/statistics/ui.c
+++ b/bin/statistics/ui.c
@@ -834,7 +834,7 @@ ui_preparations(struct statisticsparams *p)
gal_checkset_writable_remove(tl->tilecheckname, 0,
cp->dontdelete);
gal_table_write(check, NULL, cp->tableformat, tl->tilecheckname,
- "TABLE");
+ "TABLE", 0);
}
gal_data_free(check);
}
diff --git a/bin/table/args.h b/bin/table/args.h
index f67bcb5..0123f24 100644
--- a/bin/table/args.h
+++ b/bin/table/args.h
@@ -49,21 +49,39 @@ struct argp_option program_options[] =
+ /* Output. */
{
"information",
UI_KEY_INFORMATION,
0,
0,
"Only print table and column information.",
- GAL_OPTIONS_GROUP_OPERATING_MODE,
+ GAL_OPTIONS_GROUP_OUTPUT,
&p->information,
GAL_OPTIONS_NO_ARG_TYPE,
GAL_OPTIONS_RANGE_0_OR_1,
GAL_OPTIONS_NOT_MANDATORY,
GAL_OPTIONS_NOT_SET
},
+ {
+ "colinfoinstdout",
+ UI_KEY_COLINFOINSTDOUT,
+ 0,
+ 0,
+ "Column info/metadata when printing to stdout.",
+ GAL_OPTIONS_GROUP_OUTPUT,
+ &p->colinfoinstdout,
+ GAL_OPTIONS_NO_ARG_TYPE,
+ GAL_OPTIONS_RANGE_0_OR_1,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET
+ },
+
+
+
+ /* End. */
{0}
};
diff --git a/bin/table/main.h b/bin/table/main.h
index bbd8288..89b885e 100644
--- a/bin/table/main.h
+++ b/bin/table/main.h
@@ -46,7 +46,8 @@ struct tableparams
struct gal_options_common_params cp; /* Common parameters. */
char *filename; /* Input filename. */
gal_list_str_t *columns; /* List of given columns. */
- uint8_t information; /* ==1, only print FITS information. */
+ uint8_t information; /* ==1: only print FITS information. */
+ uint8_t colinfoinstdout; /* ==1: print column metadata in CL. */
/* Output: */
gal_data_t *table; /* Linked list of output table columns. */
diff --git a/bin/table/table.c b/bin/table/table.c
index 2411467..b0afbf8 100644
--- a/bin/table/table.c
+++ b/bin/table/table.c
@@ -47,5 +47,6 @@ void
table(struct tableparams *p)
{
gal_checkset_writable_remove(p->cp.output, 0, p->cp.dontdelete);
- gal_table_write(p->table, NULL, p->cp.tableformat, p->cp.output, "TABLE");
+ gal_table_write(p->table, NULL, p->cp.tableformat, p->cp.output,
+ "TABLE", p->colinfoinstdout);
}
diff --git a/bin/table/ui.c b/bin/table/ui.c
index a401692..6a77393 100644
--- a/bin/table/ui.c
+++ b/bin/table/ui.c
@@ -120,11 +120,17 @@ ui_initialize_options(struct tableparams *p,
/* Select individually. */
switch(cp->coptions[i].key)
{
+ /* Mandatory options. */
case GAL_OPTIONS_KEY_SEARCHIN:
case GAL_OPTIONS_KEY_MINMAPSIZE:
case GAL_OPTIONS_KEY_TABLEFORMAT:
cp->coptions[i].mandatory=GAL_OPTIONS_MANDATORY;
break;
+
+ /* Options to ignore. */
+ case GAL_OPTIONS_KEY_TYPE:
+ cp->coptions[i].flags=OPTION_HIDDEN;
+ break;
}
/* Select by group. */
diff --git a/bin/table/ui.h b/bin/table/ui.h
index 1402ee3..c7482e5 100644
--- a/bin/table/ui.h
+++ b/bin/table/ui.h
@@ -32,14 +32,15 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
/* Available letters for short options:
- a b d e f g j k l m n p r s t u v w x y z
+ a b d e f g j k l m n p r t u v w x y z
A B C E G H J L O Q R W X Y
*/
enum option_keys_enum
{
/* With short-option version. */
- UI_KEY_COLUMN = 'c',
- UI_KEY_INFORMATION = 'i',
+ UI_KEY_COLUMN = 'c',
+ UI_KEY_INFORMATION = 'i',
+ UI_KEY_COLINFOINSTDOUT = 's',
/* Only with long version (start with a value 1000, the rest will be set
automatically). */
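Summing up the Table changes above: the new `--colinfoinstdout' option (short form `-s' as defined in the enum) is stored in `p->colinfoinstdout' and passed as the new last argument of `gal_table_write'. A sketch of how it would be used on the command line; the input file and column names are made-up examples:

    $ asttable catalog.fits -cRA,DEC                     # data rows only
    $ asttable catalog.fits -cRA,DEC --colinfoinstdout   # keep column metadata
    $ asttable catalog.fits -cRA,DEC -s                  # same, with the short option

Presumably the extra metadata lines (column names, units, comments) make the printed stream usable as a self-describing plain-text table in later commands.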
diff --git a/bootstrap b/bootstrap
index eddacfb..002edf6 100755
--- a/bootstrap
+++ b/bootstrap
@@ -1,6 +1,6 @@
#! /bin/sh
# Print a version string.
-scriptversion=2018-04-28.14; # UTC
+scriptversion=2018-05-27.20; # UTC
# Bootstrap this package from checked-out sources.
@@ -47,6 +47,8 @@ PERL="${PERL-perl}"
me=$0
+default_gnulib_url=git://git.sv.gnu.org/gnulib
+
usage() {
cat <<EOF
Usage: $me [OPTION]...
@@ -76,6 +78,37 @@ contents are read as shell variables to configure the bootstrap.
For build prerequisites, environment variables like \$AUTOCONF and \$AMTAR
are honored.
+Gnulib sources can be fetched in various ways:
+
+ * If this package is in a git repository with a 'gnulib' submodule
+ configured, then that submodule is initialized and updated and sources
+ are fetched from there. If \$GNULIB_SRCDIR is set (directly or via
+ --gnulib-srcdir) and is a git repository, then it is used as a reference.
+
+ * Otherwise, if \$GNULIB_SRCDIR is set (directly or via --gnulib-srcdir),
+ then sources are fetched from that local directory. If it is a git
+ repository and \$GNULIB_REVISION is set, then that revision is checked
+ out.
+
+ * Otherwise, if this package is in a git repository with a 'gnulib'
+ submodule configured, then that submodule is initialized and updated and
+ sources are fetched from there.
+
+ * Otherwise, if the 'gnulib' directory does not exist, Gnulib sources are
+ cloned into that directory using git from \$GNULIB_URL, defaulting to
+ $default_gnulib_url.
+ If \$GNULIB_REVISION is set, then that revision is checked out.
+
+ * Otherwise, the existing Gnulib sources in the 'gnulib' directory are
+ used. If it is a git repository and \$GNULIB_REVISION is set, then that
+ revision is checked out.
+
+If you maintain a package and want to pin a particular revision of the
+Gnulib sources that has been tested with your package, then there are two
+possible approaches: either configure a 'gnulib' submodule with the
+appropriate revision, or set \$GNULIB_REVISION (and if necessary
+\$GNULIB_URL) in $me.conf.
+
Running without arguments will suffice in most cases.
EOF
}
@@ -634,9 +667,11 @@ if $use_gnulib; then
trap cleanup_gnulib 1 2 13 15
shallow=
- git clone -h 2>&1 | grep -- --depth > /dev/null && shallow='--depth 2'
- git clone $shallow git://git.sv.gnu.org/gnulib "$gnulib_path" ||
- cleanup_gnulib
+ if test -z "$GNULIB_REVISION"; then
+ git clone -h 2>&1 | grep -- --depth > /dev/null && shallow='--depth 2'
+ fi
+ git clone $shallow ${GNULIB_URL:-$default_gnulib_url} "$gnulib_path" \
+ || cleanup_gnulib
trap - 1 2 13 15
fi
@@ -671,6 +706,11 @@ if $use_gnulib; then
;;
esac
+ if test -d "$GNULIB_SRCDIR"/.git && test -n "$GNULIB_REVISION" \
+ && ! git_modules_config submodule.gnulib.url >/dev/null; then
+ (cd "$GNULIB_SRCDIR" && git checkout "$GNULIB_REVISION") || cleanup_gnulib
+ fi
+
# $GNULIB_SRCDIR now points to the version of gnulib to use, and
# we no longer need to use git or $gnulib_path below here.
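The bootstrap changes above (imported from upstream Gnulib) let a maintainer pin a tested Gnulib revision. A sketch of a `bootstrap.conf' fragment (the file is read as shell variables by `./bootstrap'); the revision value is a placeholder, not a real tested commit:

    # In bootstrap.conf, next to the bootstrap script:
    GNULIB_URL=git://git.sv.gnu.org/gnulib          # this is already the default
    GNULIB_REVISION=PUT_A_TESTED_COMMIT_HASH_HERE   # placeholder value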
diff --git a/configure.ac b/configure.ac
index 07b37b3..dcb470f 100644
--- a/configure.ac
+++ b/configure.ac
@@ -50,7 +50,7 @@ AC_CONFIG_MACRO_DIRS([bootstrapped/m4])
# Library version, see the GNU Libtool manual ("Library interface versions"
# section for the exact definition of each) for
-GAL_CURRENT=4
+GAL_CURRENT=5
GAL_REVISION=0
GAL_AGE=0
GAL_LT_VERSION="${GAL_CURRENT}:${GAL_REVISION}:${GAL_AGE}"
@@ -192,7 +192,7 @@ PATH=$(AS_ECHO([$PATH]) | $SED -e 's|'"$currpwd"'||g' \
-e 's|^:||' \
-e 's|:$||' )
AS_IF([test $oldPATH = $PATH],
- [ path_warning=no ],
+ [ path_warning=no ],
[ path_warning=yes; anywarnings=yes ])
AC_MSG_RESULT( $path_warning )
@@ -213,6 +213,10 @@ AC_SEARCH_LIBS([cblas_sdsdot], [gslcblas], [],
[AC_MSG_ERROR([GSL CBLAS not present, cannot continue.])])
AC_SEARCH_LIBS([gsl_integration_qng], [gsl], [],
[AC_MSG_ERROR([GSL not found, cannot continue.])])
+AC_CHECK_DECLS(gsl_interp_steffen,
+ [ gsl_version_old=no ],
+ [ gsl_version_old=yes; anywarnings=yes ],
+ [[#include <gsl/gsl_interp.h>]])
# Since version 0.42, if `libcurl' is installed, CFITSIO will link with it
# and thus it will be necessary to explicitly link with libcurl also. If it
@@ -657,6 +661,14 @@ AS_IF([test x$enable_guide_message = xyes],
AS_ECHO(["Configuration warning(s):"])
AS_ECHO([])
+ AS_IF([test "x$gsl_version_old" = "xyes"],
+ [AS_ECHO([" - The version of GNU Scientific Library (GSL) on this system doesn't"])
+ AS_ECHO([" have some features that can be useful in Gnuastro. This build"])
+ AS_ECHO([" won't crash, but Gnuastro will have less functionality afterwards."])
+ AS_ECHO([" We thus recommend building and installing a more recent version"])
+ AS_ECHO([" of GSL (version >= 2.0, released in October 2015)."])
+ AS_ECHO([]) ])
+
AS_IF([test "x$has_libjpeg" = "xno"],
[AS_ECHO([" - libjpeg, could not be linked with in your library
search path."])
AS_ECHO([" If JPEG inputs/outputs are requested, the
respective tool will"])
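The new `AC_CHECK_DECLS' test above only prints a configuration warning when `gsl_interp_steffen' is not declared (GSL older than 2.0); the build continues with reduced functionality. To check the installed GSL before configuring, something like the following should work (`gsl-config' ships with GSL, but package names and paths differ between systems):

    $ gsl-config --version    # >= 2.0 should provide gsl_interp_steffen
    $ grep -c gsl_interp_steffen "$(gsl-config --prefix)/include/gsl/gsl_interp.h"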
diff --git a/developer-build b/developer-build
index 0b917d9..22b8ae2 100755
--- a/developer-build
+++ b/developer-build
@@ -37,11 +37,13 @@ set -e
# Default values for variables.
jobs=8
+dist=0
debug=0
clean=0
check=0
+reconf=0
+upload=0
install=0
-tar_pdf_upload=0
top_build_dir=/dev/shm
if [ -f .version ]; then
version=$(cat .version)
@@ -57,6 +59,13 @@ fi
me=$0 # Executable file name.
help_print() {
+ # See if autoreconf is enabled or not.
+ if [ $reconf = "0" ]; then
+ reconf_status="DISABLED"
+ else
+ reconf_status="ENABLED"
+ fi
+
# See if debug is enabled or not.
if [ $debug = "0" ]; then
debug_status="DISABLED"
@@ -85,11 +94,18 @@ help_print() {
install_status="ENABLED"
fi
- # See if tar_pdf_upload is enabled or not.
- if [ $tar_pdf_upload = "0" ]; then
- tpu_status="DISABLED"
+ # See if dist is enabled or not.
+ if [ $dist = "0" ]; then
+ dist_status="DISABLED"
+ else
+ dist_status="ENABLED"
+ fi
+
+ # See if upload is enabled or not.
+ if [ $upload = "0" ]; then
+ upload_status="DISABLED"
else
- tpu_status="ENABLED"
+ upload_status="ENABLED"
fi
# Print the output.
@@ -117,6 +133,10 @@ Options:
build directory name.
Current value: $version
+ -a, --autoreconf Run 'autoreconf -f' (to set the version and
+ update the build system) before anything else.
+ Current status: $reconf_status
+
-c, --clean Delete (with 'rm') all its contents of the build
directory before starting new configuration.
Current status: $clean_status
@@ -135,10 +155,19 @@ Options:
-i, --install Run 'sudo make install' after the build.
Current status: $install_status
- -u, --tar-pdf-upload STR Build a tar.lz tarball and PDF manual, then
- upload them to the given server:folder.
+ -D, --dist Build a tar.lz tarball and PDF manual.
+ Current status: $dist_status
+
+ -u, --upload STR First run '--dist', then upload tarball and PDF
+ manual to the given 'server:folder'.
For example: -u my-server:folder
- Current status: $tpu_status
+ Current status: $upload_status
+
+ -p, --publish STR Short for '-a -c -d -C -u STR'. '-d' is added
+ because it will greatly speed up the build. It
+ will have no effect on the produced tarball.
+
+ -I, --install-archive Short for '-a -c -C -i -D'.
-P, --printparams Another name for '--help', for similarity with
Gnuastro's programs. Note that the output of
@@ -177,6 +206,10 @@ do
echo $version
exit 0
;;
+ -a|--autoreconf)
+ reconf=1
+ shift # past argument
+ ;;
-c|--clean)
clean=1
shift # past argument
@@ -202,16 +235,44 @@ do
install=1
shift # past argument
;;
- -u|--tar-pdf-upload)
- tar_pdf_upload=1
+ -D|--dist)
+ dist=1
+ shift # past argument
+ ;;
+ -u|--upload)
+ dist=1
+ upload=1
+ url="$2"
+ if [ x"$url" = x ]; then
+ echo "No SERVER:DIR given to '--upload' ('-u') ."
+ exit 1;
+ fi
+ shift # past argument
+ shift # past value
+ ;;
+ -p|--publish)
+ dist=1
+ clean=1
+ debug=1
+ check=1
+ reconf=1
+ upload=1
url="$2"
if [ x"$url" = x ]; then
- echo "No argument given to '--tar-pdf-upload' ('-u')."
+ echo "No SERVER:DIR given to '--publish' ('-p') ."
exit 1;
fi
shift # past argument
shift # past value
;;
+ -I|--install-archive)
+ dist=1
+ clean=1
+ check=1
+ reconf=1
+ install=1
+ shift # past argument
+ ;;
-h|-P|--help|--printparams)
help_print
exit 0
@@ -240,6 +301,15 @@ fi
+# If reconfiguration was requested, do it.
+if [ $reconf = 1 ]; then
+ autoreconf -f
+fi
+
+
+
+
+
# Keep the address of this source directory (where this script is being run
# from) which we will need later.
srcdir=$(pwd)
@@ -363,13 +433,22 @@ fi
-# Build a tarball, and upload it to the requested server.
-if [ x$tar_pdf_upload = x1 ]; then
-
- # Make the distribution tarball and pdf manual.
+# Make the tarball and PDF for distribution.
+if [ x$dist = x1 ]; then
make dist-lzip pdf
+fi
+
+
+
+
+
+# Build a tarball, and upload it to the requested server.
+if [ x$upload = x1 ]; then
- # Get the base package name, and use it to make a generic tarball name.
+ # Get the base package name, and use it to make a generic tarball
+ # name. Note that with the `--upload' option, `--dist' is also
+ # activated, so the tarball is already built and ready by this
+ # step.
base=$(ls *.tar.lz | sed -e's/-/ /' | awk '{print $1}')
mv *.tar.lz $base"-latest.tar.lz"
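For reference, a sketch of how the reworked `developer-build' options above would be called; `my-server:www/src' is a placeholder for your own SERVER:DIR value:

    $ ./developer-build -a -c                   # autoreconf -f, clean, then build
    $ ./developer-build -D                      # also make the tar.lz tarball and PDF
    $ ./developer-build -u my-server:www/src    # dist, then scp the tarball and PDF
    $ ./developer-build -p my-server:www/src    # short for '-a -c -d -C -u STR'
    $ ./developer-build -I                      # short for '-a -c -C -i -D'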
diff --git a/doc/announce-acknowledge.txt b/doc/announce-acknowledge.txt
index b36eb58..1f82afb 100644
--- a/doc/announce-acknowledge.txt
+++ b/doc/announce-acknowledge.txt
@@ -1,19 +1,6 @@
-People who's help must be acknowledged in the next release.
+Alphabetically ordered list to acknowledge in the next release.
Leindert Boogaard
Nushkia Chamba
-Nima Dehdilani
-Antonio Diaz Diaz
-Lee Kelvin
-Brandon Kelly
+Takashi Ichikawa
Alan Lefor
-Guillaume Mahler
-Bertrand Pain
-Ole Streicher
-Michel Tallon
-Juan C. Tello
-Éric Thiébaut
-David Valls-Gabaud
-Aaron Watkins
-Sara Yousefi Taemeh
-Johannes Zabl
diff --git a/doc/gnuastro.en.html b/doc/gnuastro.en.html
index 5f23131..323e152 100644
--- a/doc/gnuastro.en.html
+++ b/doc/gnuastro.en.html
@@ -85,9 +85,9 @@ for entertaining and easy to read real world examples of using
<p>
The current stable release
- is <a href="http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.5.tar.gz">Gnuastro
- 0.5</a> (December 22nd, 2017).
- Use <a href="http://ftpmirror.gnu.org/gnuastro/gnuastro-0.5.tar.gz">a
+ is <a href="http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.6.tar.gz">Gnuastro
+ 0.6</a> (June 4th, 2018).
+ Use <a href="http://ftpmirror.gnu.org/gnuastro/gnuastro-0.6.tar.gz">a
mirror</a> if possible.
<!-- Comment the test release notice when the test release is not more
@@ -97,8 +97,8 @@ for entertaining and easy to read real world examples of using
in <a href="https://lists.gnu.org/mailman/listinfo/info-gnuastro">info-gnuastro</a>.
To stay up to date, please subscribe.</p>
-<p>For details of the significant changes please see the
- <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.5">NEWS</a>
+<p>For details of the significant changes in this release, please see the
+ <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.6">NEWS</a>
file.</p>
<p>The
diff --git a/doc/gnuastro.fr.html b/doc/gnuastro.fr.html
index 26b7785..5705db3 100644
--- a/doc/gnuastro.fr.html
+++ b/doc/gnuastro.fr.html
@@ -85,15 +85,15 @@ h3 { clear: both; }
<h3 id="download">Téléchargement</h3>
<p>La version stable actuelle
- est <a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.5.tar.gz">Gnuastro
- 0.5</a> (sortie le 22 december
- 2017). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.5.tar.gz">un
+ est <a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.6.tar.gz">Gnuastro
+ 0.6</a> (sortie le 4 juin
+ 2018). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.6.tar.gz">un
miroir</a> si possible. <br />Les nouvelles publications sont annoncées
sur <a href="https://lists.gnu.org/mailman/listinfo/info-gnuastro">info-gnuastro</a>.
Abonnez-vous pour rester au courant.</p>
<p>Les changements importants sont décrits dans le
- fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.5">
+ fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.6">
NEWS</a>.</p>
<p>Le lien
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 081f01c..10b7e93 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -699,7 +699,7 @@ to expect a fully familiar experience in the source code, building,
installing and command-line user interaction that they have seen in all the
other GNU software that they use. The official and always up to date
version of this book (or manual) is freely available under @ref{GNU Free
-Doc. License} in various formats (pdf, html, plain text, info, and as its
+Doc. License} in various formats (PDF, HTML, plain text, info, and as its
Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
For users who are new to the GNU/Linux environment, unless otherwise
@@ -801,7 +801,6 @@ $ sudo make install
@end example
@noindent
-
See @ref{Known issues} if you confront any complications. For each program
there is an `Invoke ProgramName' sub-section in this book which explains
how the programs should be run on the command-line (for example
@@ -830,16 +829,16 @@ is precisely these that will ultimately allow future generations to
advance the existing experimental and theoretical knowledge through
their new solutions and corrections.
-In the past, scientists would gather data and process them
-individually to achieve an analysis thus having a much more intricate
-knowledge of the data and analysis. The theoretical models also
-required little (if any) simulations to compare with the data. Today
-both methods are becoming increasingly more dependent on pre-written
-software. Scientists are dissociating themselves from the intricacies
-of reducing raw observational data in experimentation or from bringing
-the theoretical models to life in simulations. These `intricacies'
-are precisely those unseen faults, hidden assumptions, simplifications
-and approximations that define scientific progress.
+In the past, scientists would gather data and process them individually to
+achieve an analysis thus having a much more intricate knowledge of the data
+and analysis. The theoretical models also required little (if any)
+simulations to compare with the data. Today both methods are becoming
+increasingly more dependent on pre-written software. Scientists are
+dissociating themselves from the intricacies of reducing raw observational
+data in experimentation or from bringing the theoretical models to life in
+simulations. These `intricacies' are precisely those unseen faults, hidden
+assumptions, simplifications and approximations that define scientific
+progress.
@quotation
@cindex Anscombe F. J.
@@ -874,7 +873,7 @@ will probably be a false positive.
Users of statistical (scientific) methods (software) are therefore not
passive (objective) agents in their result. Therefore, it is necessary to
-actually understand the method not just use it as a black box. The
+actually understand the method, not just use it as a black box. The
subjective experience gained by frequently using a method/software is not
sufficient to claim an understanding of how the tool/method works and how
relevant it is to the data and analysis. This kind of subjective experience
@@ -886,11 +885,12 @@ software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}},
poorly written (or non-existent) scientific software manuals, and
non-reproducible papers@footnote{Where the authors omit many of the
analysis/processing ``details'' from the paper by arguing that they would
-make the paper too long/unreadable. However, software methods do allows us
-to supplement papers with all the details necessary to exactly reproduce
-the result. For example see @url{https://doi.org/10.5281/zenodo.1163746,
-zenodo.1163746} and @url{https://doi.org/10.5281/zenodo.1164774,
-zenodo.1164774} and this @url{
+make the paper too long/unreadable. However, software engineers have been
+dealing with such issues for a long time. There are thus software
+management solutions that allow us to supplement papers with all the
+details necessary to exactly reproduce the result. For example see
+@url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746} and
+@url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{
http://akhlaghi.org/reproducible-science.html, general discussion}.}. This
approach to scientific software and methods only helps in producing dogmas
and an ``@emph{obscurantist faith in the expert's special skill, and in his
@@ -938,6 +938,8 @@ and this book are thus intimately linked, and when considered as a single
entity can be thought of as a real (an actual software accompanying the
algorithms) ``Numerical Recipes'' for astronomy.
+@cindex GNU free documentation license
+@cindex GNU General Public License (GPL)
The second major, and arguably more important, difference is that
``Numerical Recipes'' does not allow you to distribute any code that you
have learned from it. In other words, it does not allow you to release your
@@ -969,7 +971,7 @@ something no astronomer of the time took seriously. In the paradigm of the
day, what could be the purpose of enlarging geometric spheres (planets) or
points (stars)? In that paradigm only the position and movement of the
heavenly bodies was important, and that had already been accurately studied
-(recently by Tyco Brahe).
+(recently by Tycho Brahe).
In the beginning of his ``The Sidereal Messenger'' (published in 1610) he
cautions the readers on this issue and @emph{before} describing his
@@ -986,14 +988,24 @@ the universe: roughly 14 billion years (suggested by the current consensus
of the standard model of cosmology) and less than 10,000 years (suggested
from some interpretations of the Bible). Both these numbers are
@emph{results}. What distinguishes these two results, is the tools/methods
-used to derive them. Therefore, as the term ``Scientific method'' also
-signifies, it is the @emph{method} that defines a scientific statement, not
-the result of one implementation of the method.}.
-
-The same is true today: science cannot progress with a black box. Technical
-knowledge and experience (to experiment on its tools, or software in this
-context@footnote{Of course, this also applies to hardware.}), is critical
-to scientific vitality.
+that were used to derive them. Therefore, as the term ``Scientific method''
+also signifies, a scientific statement is defined by its @emph{method}, not
+its result.}.
+
+The same is true today: science cannot progress with a black box, or poorly
+released code. Technical knowledge and experience (to experiment on its
+tools, or software in this context@footnote{Of course, this also applies to
+hardware.}), is critical to scientific vitality. Scientific research is
+only considered for peer review and publication if it has a sufficiently
+high standard of English style. A similar level of quality assessment is
+necessary regarding the codes/methods scientists use to derive their
+results. Therefore, a scientist saying ``software is not my specialty, I
+am not a software engineer. So the quality of my code/processing doesn't
+matter. Why should I master good coding style, or release my code, when I
+am hired to do Astronomy/Biology?'' is akin to a French scientist saying
+``English is not my language, I am not Shakespeare. So the quality of my
+English writing doesn't matter. Why should I master good English style,
+when I am hired to do Astronomy/Biology?''.
@cindex Ken Thomson
@cindex Stroustrup, Bjarne
@@ -1009,7 +1021,7 @@ This can happen when scientists get too distant from the raw data and
methods, and are mainly discussing results. In other words, when they feel
they have tamed Nature into their own high-level (abstract) models
(creations), and are mainly concerned with scaling up, or industrializing
-science. Roughly five years before special relativity, and about two
+those results. Roughly five years before special relativity, and about two
decades before quantum mechanics fundamentally changed Physics, Lord Kelvin
is quoted as saying:
@@ -1242,32 +1254,24 @@ astronomer or astronomical facility or project.
@node New to GNU/Linux?, Report a bug, Version numbering, Introduction
@section New to GNU/Linux?
-Some astronomers initially install and use the GNU/Linux operating systems
-because the software that their research community use can only be run in
-this environment, the transition is not necessarily easy. To encourage you
-in investing the patience and time to make this transition, we define the
-GNU/Linux system and argue for the command-line interface of scientific
-software and how it is worth the (apparently steep) learning curve.
-@ref{Command-line interface} contains a short overview of the powerful
-command-line user interface. @ref{Tutorials} is a complete chapter with
-some real world example applications of Gnuastro making good use of
-GNU/Linux capabilities written for newcomers to this environment. It is
-fully explained, easy and (hopefully) entertaining.
+Some astronomers initially install and use a GNU/Linux operating system
+because their necessary tools can only be installed in this environment.
+However, the transition is not necessarily easy. To encourage you in
+investing the patience and time to make this transition, and actually enjoy
+it, we will first start with a basic introduction to GNU/Linux operating
+systems. Afterwards, in @ref{Command-line interface} we'll discuss the
+wonderful benefits of the command-line interface, how it beautifully
+complements the graphic user interface, and why it is worth the (apparently
+steep) learning curve. Finally a complete chapter (@ref{Tutorials}) is
+devoted to real world scenarios of using Gnuastro (on the
+command-line). Therefore if you don't yet feel comfortable with the
+command-line we strongly recommend going through that chapter after
+finishing this section.
-@cindex Linux
-@cindex GNU/Linux
-@cindex GNU C library
-@cindex GNU Compiler Collection
You might have already noticed that we are not using the name ``Linux'',
but ``GNU/Linux''. Please take the time to have a look at the following
essays and FAQs for a complete understanding of this very important
-distinction. In short, the Linux kernel is built using the GNU C library
-(glibc) and GNU compiler collection (gcc). The Linux kernel software alone
-is useless, in order have an operating system you need many more packages
-and the majority of such low-level packages in most distributions are
-developed as part of the GNU project: ``the whole system is basically GNU
-with Linux loaded''. In the form of an analogy: to say “running Linux”, is
-like saying “driving your carburetor”.
+distinction.
@itemize
@@ -1285,6 +1289,24 @@ like saying “driving your carburetor”.
@end itemize
+@cindex Linux
+@cindex GNU/Linux
+@cindex GNU C library
+@cindex GNU Compiler Collection
+In short, the Linux kernel@footnote{In Unix-like operating systems, the
+kernel connects software and hardware worlds.} is built using the GNU C
+library (glibc) and GNU compiler collection (gcc). The Linux kernel
+software alone is just a means for other software to access the hardware
+resources; by itself it is useless: to say “running Linux” is like saying
+“driving your carburetor”.
+
+To have an operating system, you need lower-level (to build the kernel),
+and higher-level (to use it) software packages. The majority of such
+software in most Unix-like operating systems are GNU software: ``the whole
+system is basically GNU with Linux loaded''. Therefore to acknowledge GNU's
+instrumental role in the creation and usage of the Linux kernel and the
+operating systems that use it, we should call these operating systems
+``GNU/Linux''.
@menu
@@ -1299,24 +1321,25 @@ like saying “driving your carburetor”.
@cindex GUI: graphic user interface
@cindex CLI: command-line user interface
One aspect of Gnuastro that might be a little troubling to new GNU/Linux
-users is that (at least for the time being) it only has a command-line
-user interface (CLI). This might be contrary to the mostly graphical user
-interface (GUI) experience with proprietary operating systems. To a first
-time user, the command-line does appear much more complicated and adapting
-to it might not be easy and a little frustrating at first. This is
-understandable and also experienced by anyone who started using the
-computer (from childhood) in a graphical user interface. Here we hope to
-convince you of the unique benefits of this interface which can greatly
-enhance your productivity while complementing your GUI experience.
+users is that (at least for the time being) it only has a command-line user
+interface (CLI). This might be contrary to the mostly graphical user
+interface (GUI) experience with proprietary operating systems. Since the
+various actions available aren't always on the screen, the command-line
+interface can be complicated, intimidating, and frustrating for a
+first-time user. This is understandable and also experienced by anyone who
+started using the computer (from childhood) in a graphical user interface
+(this includes most of Gnuastro's authors). Here we hope to convince you of
+the unique benefits of this interface which can greatly enhance your
+productivity while complementing your GUI experience.
@cindex GNOME 3
Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based
operating systems now have an advanced and useful GUI. Since the GUI was
created long after the command-line, some wrongly consider the command line
-to be obsolete. Both interfaces are useful for different tasks (for example
+to be obsolete. Both interfaces are useful for different tasks. For example
you can't view an image, video, pdf document or web page on the
-command-line!), on the other hand you can't reproduce your results easily
-in the GUI. Therefore they should not be regarded as rivals but as
+command-line. On the other hand you can't reproduce your results easily in
+the GUI. Therefore they should not be regarded as rivals but as
complementary user interfaces, here we will outline how the CLI can be
useful in scientific programs.
@@ -1697,7 +1720,7 @@ Kelvin, Brandon Kelly, Mohammad-Reza Khellat, Floriane Leclercq, Alan
Lefor, Guillaume Mahler, Francesco Montanari, Bertrand Pain, William Pence,
Bob Proulx, Yahya Sefidbakht, Alejandro Serrano Borlaff, Lee Spitler,
Richard Stallman, Ole Streicher, Alfred M. Szmidt, Michel Tallon, Juan
-C. Tello, Éric Thi@'ebaut, Ignacio Trujillo, David Valls-Gabaud, Aaron
+C. Tello, @'Eric Thi@'ebaut, Ignacio Trujillo, David Valls-Gabaud, Aaron
Watkins, Christopher Willmer, Sara Yousefi Taemeh, Johannes Zabl. The GNU
French Translation Team is also managing the French version of the top
Gnuastro webpage which we highly appreciate. Finally we should thank all
@@ -2276,7 +2299,7 @@ tool) in special situations.
During the tutorial, we will take many detours to explain, and practically
demonstrate, the many capabilities of Gnuastro's programs. In the end you
-will see that the things you learned during this toturial are much more
+will see that the things you learned during this tutorial are much more
generic than this particular problem and can be used in solving a wide
variety of problems involving the analysis of data (images or tables). So
please don't rush, and go through the steps patiently to optimally master
@@ -2993,7 +3016,7 @@ second (NoiseChisel's main output, @code{DETECTIONS}) has a numeric data
type of @code{uint8} with only two possible values for all pixels: 0 for
noise and 1 for signal. The third and fourth (called @code{SKY} and
@code{SKY_STD}), have the Sky and its standard deviation values of the
-input on a tessellation and were calculated over the un-detected regions.
+input on a tessellation and were calculated over the undetected regions.
@cindex DS9
@cindex GNOME
@@ -3389,9 +3412,9 @@ all apertures to be identical after all).
@example
$ rm *.txt
-$ $ asttable cat/xdf-f160w.fits -h2 -cRA,DEC \
- | awk '!/^#/@{print NR, $1, $2, 5, 5, 0, 0, 1, NR, 1@}' \
- > apertures.txt
+$ asttable cat/xdf-f160w.fits -h2 -cRA,DEC \
+ | awk '!/^#/@{print NR, $1, $2, 5, 5, 0, 0, 1, NR, 1@}' \
+ > apertures.txt
@end example
We can now feed this catalog into MakeProfiles to build the apertures for
@@ -5362,8 +5385,10 @@ outputs from the previous version being mixed with outputs from the newly
pulled work. Therefore, the first step is to clean/delete all the built
files with @command{make distclean}. Fortunately the GNU build system
allows the separation of source and built files (in separate
-directories). You can use it to avoid the cleaning step. Gnuastro already
-has script for this, see @ref{Configure and build in RAM}.
+directories). This is a great feature to keep your source directory clean
+and you can use it to avoid the cleaning step. Gnuastro comes with a script
+with some useful options for this job. It is useful if you regularly pull
+recent changes, see @ref{Separate build and source directories}.
After the pull, we must re-configure Gnuastro with @command{autoreconf -f}
(part of GNU Autoconf). It will update the @file{./configure} script and
@@ -6142,7 +6167,7 @@ behaviors or configuration files for example.
@noindent
@strong{White space between option and value:} @file{developer-build}
doesn't accept an @key{=} sign between the options and their values. It
-also needs atleast one character between the option and its
+also needs at least one character between the option and its
value. Therefore @option{-n 4} or @option{--numthreads 4} are acceptable,
while @option{-n4}, @option{-n=4}, or @option{--numthreads=4}
aren't. Finally multiple short option names cannot be merged: for example
@@ -6170,6 +6195,13 @@ GNU/Linux operating systems, see @ref{Configure and build in RAM}).
Print the version string of Gnuastro that will be used in the build. This
string will be appended to the directory name containing the built files.
+@item -a
+@itemx --autoreconf
+Run @command{autoreconf -f} before building the package. In Gnuastro, this
+is necessary when a new commit has been made to the project history. In
+Gnuastro's build system, the Git description will be used as the version,
+see @ref{Version numbering} and @ref{Synchronizing}.
+
@item -c
@itemx --clean
@cindex GNU Autoreconf
@@ -6177,7 +6209,7 @@ Delete the contents of the build directory (clean it) before starting the
configuration and building of this run.
This is useful when you have recently pulled changes from the main Git
-repository, or commited a change your self and ran @command{autoreconf -f},
+repository, or committed a change yourself and ran @command{autoreconf -f},
see @ref{Synchronizing}. After running GNU Autoconf, the version will be
updated and you need to do a clean build.
@@ -6215,15 +6247,44 @@ checks to work on (for example defined in @file{tests/during-dev.sh}).
@itemx --install
After finishing the build, also run @command{make install}.
+@item -D
+@itemx --dist
+Run @code{make dist-lzip pdf} to build a distribution tarball (in
+@file{.tar.lz} format) and a PDF manual. This can be useful for archiving,
+or sending to colleagues who don't use Git for an easy build and manual.
+
@item -u STR
-@item --tar-pdf-upload STR
-After finishing the build, run @command{make dist-lzip pdf} to build an
-Lzip tarball and pdf manual. Then rename the tarball suffix to
-@file{-latest.tar.lz} (instead of the version number). Then use secure copy
-(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the
-server and directory specified in the value to this option. For example
-@command{--tar-pdf-upload my-server:dir}, will copy the two files to the
-@file{dir} directory of @code{my-server}.
+@itemx --upload STR
+Activate the @option{--dist} (@option{-D}) option, but also rename the
+tarball suffix to @file{-latest.tar.lz} (instead of the version
+number). Then use secure copy (@command{scp}, part of the SSH tools) to
+copy the tarball and PDF to the server and directory specified in the value
+to this option. For example @command{--upload my-server:dir}, will copy the
+two files to the @file{dir} directory of the @code{my-server} server.
+
+@item -p
+@itemx --publish
+Short for @option{--autoreconf --clean --debug --check --upload
+STR}. @option{--debug} is added because it will greatly speed up the
+build. It will have no effect on the produced tarball. This is good when
+you have made a commit and are ready to publish it on your server (if
+nothing crashes). Recall that if any of the previous steps fail the script
+aborts.
+
+@item -I
+@itemx --install-archive
+Short for @option{--autoreconf --clean --check --install --dist}. This is
+useful when you actually want to install the commit you just made (if the
+build and checks succeed). It will also produce a distribution tarball and
+PDF manual for easy access to the installed tarball on your system at a
+later time.
+
+Ideally, Gnuastro's Git version history makes it easy for a prepared system
+to revert back to a different point in history. But the version-controlled
+source also needs to be bootstrapped, and your collaborators might (and
+usually do!) find it too much of a burden to do the bootstrapping
+themselves. So it is convenient to have a tarball and PDF manual of the
+version you have installed (and are using in your research) handily
+available.
@item -h
@itemx --help
@@ -7277,6 +7338,15 @@ file. When this option is called in those programs, the log file will also
be printed. If the program doesn't generate a log file, this option is
ignored.
+@cartouche
+@noindent
+@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed
+name. Therefore if two simultaneous calls (with @option{--log}) of a
+program are made in the same directory, the program will try to write to
+the same file. This will cause problems like an inconsistent log file,
+undefined behavior, or a crash.
+@end cartouche
+
@cindex CPU threads, set number
@cindex Number of CPU threads to use
@item -N INT
@@ -14755,16 +14825,16 @@ signal, where non-random factors (for example light from a distant galaxy)
are present. This classification of the elements in a dataset is formally
known as @emph{detection}.
-In an observational/experimental dataset, signal is always burried in
-noise: only mock/simulated datasets are free of noise. Therefore detection,
-or the process of separating signal from noise, determines the number of
-objects you study and the accuracy of any higher-level measurement you do
-on them. Detection is thus the most important step of any analysis and is
-not trivial. In particular, the most scientifically interesting
-astronomical targets are faint, can have a large variety of morphologies,
-along with a large distribution in brightness and size. Therefore when
-noise is significant, proper detection of your targets is a uniquely
-decisive step in your final scientific analysis/result.
+In an observational/experimental dataset, signal is always buried in noise:
+only mock/simulated datasets are free of noise. Therefore detection, or the
+process of separating signal from noise, determines the number of objects
+you study and the accuracy of any higher-level measurement you do on
+them. Detection is thus the most important step of any analysis and is not
+trivial. In particular, the most scientifically interesting astronomical
+targets are faint, can have a large variety of morphologies, along with a
+large distribution in brightness and size. Therefore when noise is
+significant, proper detection of your targets is a uniquely decisive step
+in your final scientific analysis/result.
@cindex Erosion
NoiseChisel is Gnuastro's program for detection of targets that don't have
@@ -14781,7 +14851,7 @@ $ astarithmetic in.fits 100 gt 2 connected-components
Since almost no astronomical target has such sharp edges, we need a more
advanced detection methodology. NoiseChisel uses a new noise-based paradigm
-for detection of very exteded and diffuse targets that are drowned deeply
+for detection of very extended and diffuse targets that are drowned deeply
in the ocean of noise. It was initially introduced in
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. The
name of NoiseChisel is derived from the first thing it does after
@@ -14801,7 +14871,7 @@ as the input but only with two values: 0 and 1. Pixels that don't harbor
any detected signal (noise) are given a label (or value) of zero and those
with a value of 1 have been identified as hosting signal.
-Segmentation is the process of classifing the signal into higher-level
+Segmentation is the process of classifying the signal into higher-level
constructs. For example if you have two separate galaxies in one image, by
default NoiseChisel will give a value of 1 to the pixels of both, but after
segmentation, the pixels in each will get separate labels. NoiseChisel is
@@ -14813,8 +14883,8 @@ directly/readily fed into Segment.
For more on NoiseChisel's output format and its benefits (especially in
conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see
@url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}. Just note that
-when that paper was published, Segment was not yet spinned-off, and
-NoiseChisel done both detection and segmentation.
+when that paper was published, Segment was not yet spun-off into a separate
+program, and NoiseChisel did both detection and segmentation.
NoiseChisel's output is designed to be generic enough to be easily used in
any higher-level analysis. If your targets are not touching after running
@@ -15274,15 +15344,23 @@ The HDU/extension containing the convolved image in the file given to
@itemx --widekernel=STR
File name of a wider kernel to use in estimating the difference of the mode
and median in a tile (this difference is used to identify the significance
-of signal in that tile, see @ref{Quantifying signal in a tile}). This
-convolved image is only used for this purpose. Once the mode is found to be
+of signal in that tile, see @ref{Quantifying signal in a tile}). As
+displayed in Figure 4 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi
+and Ichikawa [2015]}, a wider kernel will help in identifying the skewness
+caused by data in noise. The image that is convolved with this kernel is
+@emph{only} used for this purpose. Once the mode is found to be
sufficiently close to the median, the quantile threshold is found on the
image convolved with the sharper kernel (@option{--kernel}), see
@option{--qthresh}).
-Since convolution will significantly slow down the processing, this option
-is optional. If it isn't given, a single convolved image will be used in
-all situations.
+Since convolution will significantly slow down the processing, this feature
+is optional. When it isn't given, the image that is convolved with
+@option{--kernel} will be used to identify good tiles @emph{and} apply the
+quantile threshold. This option is mainly useful in conditions where you
+have a very large, extended, diffuse signal that is still present in the
+usable tiles when using @option{--kernel}. See @ref{Detecting large
+extended targets} for a practical demonstration on how to inspect the tiles
+used in identifying the quantile threshold.
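
For example, a minimal sketch of a call with both kernels (the kernel
file names here are hypothetical), also checking the tiles used for the
quantile threshold:

@example
$ astnoisechisel image.fits --kernel=sharp-kernel.fits \
                 --widekernel=wide-kernel.fits --checkqthresh
@end example
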
@item --whdu=STR
HDU containing the kernel file given to the @option{--widekernel} option.
@@ -15354,7 +15432,7 @@ This option is useful when a large and diffuse (almost
flat within each
tile) signal exists with very small regions of Sky. The flatness of the
profile will cause it to successfully pass the tests of @ref{Quantifying
signal in a tile}. As a result, without this option the flat and diffuse
-signal will be interpretted as sky. In such cases, you can see the status
+signal will be interpreted as sky. In such cases, you can see the status
of the tiles with the @option{--checkqthresh} option (first image extension
is enough) and select a quantile through this option to ignore the measured
values in the higher-valued tiles.
@@ -15591,17 +15669,24 @@ are 27 neighbors.
The maximum hole size to fill during the final expansion of the true
detections as described in @option{--detgrowquant}. This is necessary when
the input contains many smaller objects and can be used to avoid marking
-blank sky regions as detections. For example multiple galaxies can be
-positioned such that they surround an empty region of sky. If all the holes
-are filled, the Sky region in between them will be taken as a detection
-which is not desired. To avoid such cases, the integer given to this option
-must be smaller than the hole between the objects.
+blank sky regions as detections.
+
+For example, multiple galaxies can be positioned such that they surround an
+empty region of sky. If all the holes are filled, the Sky region in between
+them will be taken as a detection, which is not desired. To avoid such
+cases, the integer given to this option must be smaller than the hole
+between such objects. However, we should caution that unless the ``hole''
+is very large, the combined faint wings of the galaxies might actually be
+present in between them, so be careful not to fill such holes.
On the other hand, if you have a very large (and extended) galaxy, the
-diffuse wings of the galaxy may create very large holes. In such cases, a
-larger value to this option will cause the whole region to be detected as
-part of the large galaxy and thus detect it to extremely low surface
-brightness limits.
+diffuse wings of the galaxy may create very large holes over the
+detections. In such cases, a large enough value for this option will cause
+all such holes to be detected as part of the large galaxy and thus help in
+detecting it to extremely low surface brightness limits. Therefore,
+especially when large and extended objects are present in the image, it is
+recommended to give this option (very) large values. For one real-world
+example, see @ref{Detecting large extended targets}.
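
For instance, a rough sketch of a run that fills very large holes during
the final growth (assuming this is NoiseChisel's
@option{--detgrowmaxholesize} option; the values are only illustrative):

@example
$ astnoisechisel image.fits --detgrowquant=0.75 \
                 --detgrowmaxholesize=10000
@end example
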
@item --cleangrowndet
After dilation, if the signal-to-noise ratio of a detection is less than
@@ -15723,13 +15808,13 @@ identify which pixels are connected to which. By
default the main output is
a binary dataset with only two values: 0 (for noise) and 1 (for
signal/detections). See @ref{NoiseChisel output} for more.
-The purpose of Noisechisel is to detect targets that are extended and
+The purpose of NoiseChisel is to detect targets that are extended and
diffuse, with outer parts that sink into the noise very gradually (galaxies
and stars for example). Since NoiseChisel digs down to extremely low
surface brightness values, many such targets will commonly be detected
together as a single large body of connected pixels.
-To properly separte connected objects, sophisticated segmentation methods
+To properly separate connected objects, sophisticated segmentation methods
are commonly necessary on NoiseChisel's output. Gnuastro has the dedicated
@ref{Segment} program for this job. Since input images are commonly large
and can take a significant volume, the extra volume necessary to store the
@@ -15760,7 +15845,7 @@ output and comparing the detection map with the input:
visually see if
everything you expected is detected (reasonable completeness) and that you
don't have too many false detections (reasonable purity). This visual
inspection is simplified if you use SAO DS9 to view NoiseChisel's output as
-a multi-extension datacube, see @ref{Viewing multiextension FITS images}.
+a multi-extension data-cube, see @ref{Viewing multiextension FITS images}.
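
For example, a quick sketch of such an inspection in SAO DS9 (the output
file name here is hypothetical):

@example
$ ds9 -mecube noisechisel-output.fits -zscale -zoom to fit
@end example
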
When you are satisfied with your NoiseChisel configuration (therefore you
don't need to check on every run), or you want to archive/transfer the
@@ -15989,7 +16074,7 @@ of this correction factor is irrelevant: because it
uses the ambient noise
applies that over the detected regions.
A distribution's extremum (maximum or minimum) values, used in the new
-critera, are strongly affected by scatter. On the other hand, the convolved
+criteria, are strongly affected by scatter. On the other hand, the convolved
image has much less scatter@footnote{For more on the effect of convolution
on a distribution, see Section 3.1.1 of
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
@@ -16111,7 +16196,7 @@ file given to @option{--detection}. Ultimately, if all
three are in
separate files, you need to call both @option{--detection} and
@option{--std}.
-The extensions of the three mandatory inputs can be speicified with
+The extensions of the three mandatory inputs can be specified with
@option{--hdu}, @option{--dhdu}, and @option{--stdhdu}. For a full
discussion on what to give to these options, see the description of
@option{--hdu} in @ref{Input output options}. To see their default values
@@ -16150,7 +16235,7 @@ name. When the value is a file, the extension can be
specified with
either have the same size as the output or the same size as the
tessellation (so there is one pixel per tile, see @ref{Tessellation}).
-When this option is given, its value(s) will be subtraced from the input
+When this option is given, its value(s) will be subtracted from the input
and the (optional) convolved dataset (given to @option{--convolved}) prior
to starting the segmentation process.
@@ -16253,7 +16338,7 @@ with this option (that is also available in
NoiseChisel). However, just be
careful to use the input to NoiseChisel as the input to Segment also, then
use the @option{--sky} and @option{--std} to specify the Sky and its
standard deviation (from NoiseChisel's output). Recall that when
-NoiseChisel is not called with @option{--rawoutput}, the first extention of
+NoiseChisel is not called with @option{--rawoutput}, the first extension of
NoiseChisel's output is the @emph{Sky-subtracted} input (see
@ref{NoiseChisel output}). So if you use the same convolved image that you
fed to NoiseChisel, but use NoiseChisel's output with Segment's
@@ -16487,7 +16572,7 @@ dataset and the sky standard deviation dataset (if it
wasn't a constant
number). This can help in visually inspecting the result when viewing the
images as a ``Multi-extension data cube'' in SAO DS9 for example (see
@ref{Viewing multiextension FITS images}). You can simply flip through the
-exetensions and see the same region of the image and its corresponding
+extensions and see the same region of the image and its corresponding
clumps/object labels. It also makes it easy to feed the output (as one
file) into MakeCatalog when you intend to make a catalog afterwards (see
@ref{MakeCatalog}). To remove these redundant extensions from the output
@@ -16585,7 +16670,7 @@ properties: its position (relative to the rest) and its
value. In
higher-level analysis, an entire dataset (an image for example) is rarely
treated as a singular entity@footnote{You can derive the over-all
properties of a complete dataset (1D table column, 2D image, or 3D
-datacube) treated as a single entity with Gnuastro's Statistics program
+data-cube) treated as a single entity with Gnuastro's Statistics program
(see @ref{Statistics}).}. You usually want to know/measure the properties
of the (separate) scientifically interesting targets that are embedded in
it. For example the magnitudes, positions and elliptical properties of the
@@ -16605,7 +16690,7 @@ It is important to define your regions of interest
@emph{before} running
MakeCatalog. MakeCatalog is specialized in doing measurements accurately
and efficiently. Therefore MakeCatalog will not do detection, segmentation,
or defining apertures on requested positions in your dataset. Following
-Gnuastro's modularity principle, There are separate and higly specialized
+Gnuastro's modularity principle, there are separate and highly specialized
and customizable programs in Gnuastro for these other jobs:
@itemize
@@ -16639,12 +16724,12 @@ MakeCatalog's output is already known before running
it.
Before getting into the details of running MakeCatalog (in @ref{Invoking
astmkcatalog}), we'll start with a discussion on the basics of its approach
-to separating detection from measuremens in @ref{Detection and catalog
+to separating detection from measurements in @ref{Detection and catalog
production}. A very important factor in any measurement is understanding
its validity range, or limits. Therefore in @ref{Quantifying measurement
limits}, we'll discuss how to estimate the reliability of the detection and
basic measurements. This section will continue with a derivation of
-elliptical parameters from the labelled datasets in @ref{Measuring
+elliptical parameters from the labeled datasets in @ref{Measuring
elliptical parameters}. For those who feel MakeCatalog's existing
measurements/columns aren't enough and would like to add further
measurements, in @ref{Adding new columns to MakeCatalog}, a checklist of
@@ -16786,7 +16871,7 @@ of clear water increases, the parts of the hills with
lower heights (parts
with lower surface brightness) can be seen more clearly. In this analogy,
height (from the ground) is @emph{surface brightness}@footnote{Note that
this muddy water analogy is not perfect, because while the water-level
-remains the same all over a peak, in data analysis, the poisson noise
+remains the same all over a peak, in data analysis, the Poisson noise
increases with the level of data.} and the height of the muddy water is
your surface brightness limit.
@@ -16960,7 +17045,7 @@ result. Fortunately, with the much more advanced
hardware and software of
today, we can make customized segmentation maps for each object.
-If requested, MakeCatalog will estimate teh the upper limit magnitude is
+If requested, MakeCatalog will estimate the upper limit magnitude
for each object in the image separately. The procedure is fully
configurable with the options in @ref{Upper-limit settings}. If one value
for the whole image is required, you can either use the surface brightness
@@ -17397,7 +17482,7 @@ file (given as an argument) for the required
extension/HDU (value to
@item --clumpshdu=STR
The HDU/extension of the clump labels dataset. Only pixels with values
-above zero will be considered. The clump lables dataset has to be an
+above zero will be considered. The clump labels dataset has to be an
integer data type (see @ref{Numeric data types}) and only pixels with a
value larger than zero will be used. See @ref{Segment output} for a
description of the expected format.
@@ -17490,12 +17575,12 @@ One very important consideration in Gnuastro is
reproducibility. Therefore,
the values to all of these parameters along with others (like the random
number generator type and seed) are also reported in the comments of the
final catalog when the upper limit magnitude column is desired. The random
-seed that is used to define the random positionings for each object or
-clump is unique and set based on the given seed, the total number of
-objects and clumps and also the labels of the clumps and objects. So with
-identical inputs, an identical upper-limit magnitude will be found. But
-even if the ordering of the object/clump labels differs (and the seed is
-the same) the result will not be the same.
+seed that is used to define the random positions for each object or clump
+is unique and set based on the given seed, the total number of objects and
+clumps and also the labels of the clumps and objects. So with identical
+inputs, an identical upper-limit magnitude will be found. But even if the
+ordering of the object/clump labels differs (and the seed is the same) the
+result will not be the same.
MakeCatalog will randomly place the object/clump footprint over the image
and when the footprint doesn't fall on any object or masked region (see
@@ -17572,7 +17657,7 @@ faint undetected wings of bright/large objects in the
image). This option
takes two values: the first is the multiple of @mymath{\sigma}, and the
second is the termination criteria. If the latter is larger than 1, it is
read as an integer number and will be the number of times to clip. If it is
-smaller than 1, it is interpretted as the tolerance level to stop
+smaller than 1, it is interpreted as the tolerance level to stop
clipping. See @ref{Sigma clipping} for a complete explanation.
@item --upnsigma=FLT
@@ -17583,9 +17668,9 @@ magnitude.
@item --checkupperlimit=INT[,INT]
Print a table of positions and measured values for all the full random
distribution used for one particular object or clump. If only one integer
-is given to this option, it is interpretted to be an object's label. If two
+is given to this option, it is interpreted to be an object's label. If two
values are given, the first is the object label and the second is the ID of
-requested clump wihtin it.
+the requested clump within it.
The output is a table with three columns (its type is determined with the
@option{--tableformat} option, see @ref{Input output options}). The first
@@ -17770,8 +17855,8 @@ integral field unit data cubes.
The brightness (sum of all pixel values), see @ref{Flux Brightness and
magnitude}. For clumps, the ambient brightness (flux of river pixels around
the clump multiplied by the area of the clump) is removed, see
-@option{--riverflux}. So the sum of clump brightnesses in the clump catalog
-will be smaller than the total clump brightness in the
+@option{--riverflux}. So the sum of all the clump brightnesses in the clump
+catalog will be smaller than the total clump brightness in the
@option{--clumpbrightness} column of the objects catalog.
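
Roughly speaking (with symbols that are only defined here for
illustration): if @mymath{S} is the sum of a clump's pixel values,
@mymath{r} the average value of the river pixels surrounding it, and
@mymath{A} the clump's area in pixels, the reported clump brightness is
approximately @mymath{S-rA}.
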
If no usable pixels (blank or below the threshold) are present over the
@@ -17878,7 +17963,7 @@ than the minimum, a value of @code{-inf} is reported.
@item --upperlimitskew
@cindex Skewness
-This column contains the nonparametric skew of the sigma-clipped random
+This column contains the non-parametric skew of the sigma-clipped random
distribution that was used to estimate the upper-limit magnitude. Taking
@mymath{\mu} as the mean, @mymath{\nu} as the median and @mymath{\sigma} as
the standard deviation, the traditional definition of skewness is defined
@@ -18016,7 +18101,7 @@ be stored as multiple extensions of one FITS file. You
can use @ref{Table}
to inspect the column meta-data and contents in this case. However, in
plain text format (see @ref{Gnuastro text table format}), it is only
possible to keep one table per file. Therefore, if the output is a text
-file, two ouput files will be created, ending in @file{_o.txt} (for
+file, two output files will be created, ending in @file{_o.txt} (for
objects) and @file{_c.txt} (for clumps).
@item --noclumpsort
@@ -18116,7 +18201,7 @@ $ astmatch --aperture=2 input1.txt input2.fits
## Similar to before, but the output is created by merging various
## columns from the two inputs: columns 1, RA, DEC from the first
-## input, followed by all columns starting with MAG and the BRG
+## input, followed by all columns starting with `MAG' and the `BRG'
## column from second input and finally the 10th from first input.
$ astmatch --aperture=2 input1.txt input2.fits \
--outcols=a1,aRA,aDEC,b/^MAG/,bBRG,a10
@@ -18176,6 +18261,16 @@ below). When @option{--logasoutput} is called, no log
file (with a fixed
name) will be created. In this case, the output file (possibly given by the
@option{--output} option) will have the contents of this log file.
+@cartouche
+@noindent
+@strong{@option{--log} isn't thread-safe}: As described above, when
+@option{--logasoutput} is not called, the Log file has a fixed name for all
+calls to Match. Therefore if a separate log is requested in two
+simultaneous calls to Match in the same directory, Match will try to write
+to the same file. This will cause problems like an inconsistent log file,
+undefined behavior, or a crash.
+@end cartouche
+
@table @option
@item -H STR
@itemx --hdu2=STR
@@ -18512,7 +18607,7 @@ an image) in the dataset to the profile center. The
profile value is
calculated for that central pixel using monte carlo integration, see
@ref{Sampling from a function}. The next pixel is the next nearest neighbor
to the central pixel as defined by @mymath{r_{el}}. This process goes on
-until the profile is fully built upto the trunctation radius. This is done
+until the profile is fully built up to the truncation radius. This is done
fairly efficiently using a breadth first parsing
strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}}
which is implemented through an ordered linked list.
@@ -19119,7 +19214,7 @@ In MakeProfiles, profile centers do not have to be in
(overlap with) the
final image. Even if only one pixel of the profile within the truncation
radius overlaps with the final image size, the profile is built and
included in the final image. Profiles that are completely out of the
-image will not be created (unless you explicity ask for it with the
+image will not be created (unless you explicitly ask for it with the
@option{--individual} option). You can use the output log file (created
with @option{--log}) to see which profiles were within the image, see
@ref{Common options}.
@@ -19191,7 +19286,7 @@ the @option{--circumwidth}).
@item
Radial distance profile with `@code{distance}' or `@code{7}'. At the lowest
level, each pixel only has an elliptical radial distance given the
-profile's shape and orentiation (see @ref{Defining an ellipse and
+profile's shape and orientation (see @ref{Defining an ellipse and
ellipsoid}). When this profile is chosen, the pixel's elliptical radial
distance from the profile center is written as its value. For this profile,
the value in the magnitude column (@option{--mcol}) will be ignored.
@@ -19336,7 +19431,7 @@ your catalog.
@item --mcolisbrightness
The value given in the ``magnitude column'' (specified by @option{--mcol},
-see @ref{MakeProfiles catalog}) must be interpretted as brightness, not
+see @ref{MakeProfiles catalog}) must be interpreted as brightness, not
magnitude. The zeropoint magnitude (value to the @option{--zeropoint}
option) is ignored and the given value must have the same units as the
input dataset's pixels.
@@ -19631,11 +19726,11 @@ they overlap with it.
@noindent
The options below can be used to define the world coordinate system (WCS)
-properties of the MakeProfiles outputs. The option names are delibarately
+properties of the MakeProfiles outputs. The option names are deliberately
chosen to be the same as the FITS standard WCS keywords. See Section 8 of
@url{https://doi.org/10.1051/0004-6361/201015362, Pence et al [2010]} for a
short introduction to WCS in the FITS standard@footnote{The world
-coordinate standard in FITS is a very beatiful and powerful concept to
+coordinate standard in FITS is a very beautiful and powerful concept to
link/associate datasets with the outside world (other datasets). The
description in the FITS standard (link above) only touches the tip of the
iceberg. To learn more, please see
@@ -19792,7 +19887,7 @@ noise (error in counting), we need to take a closer
look at how a
distribution produced by counting can be modeled as a parametric function.
Counting is an inherently discrete operation, which can only produce
-positive (including zero) interger outputs. For example we can't count
+positive (including zero) integer outputs. For example we can't count
@mymath{3.2} or @mymath{-2} of anything. We only count @mymath{0},
@mymath{1}, @mymath{2}, @mymath{3} and so on. The distribution of values,
as a result of counting efforts is formally known as the
@@ -21793,7 +21888,7 @@ other functions are also finished.
@deftypefun int pthread_barrier_destroy (pthread_barrier_t @code{*b})
Destroy all the information in the barrier structure. This should be called
-by the function that spinned-off the threads after all the threads have
+by the function that spun off the threads after all the threads have
finished.
@cartouche
@@ -22156,7 +22251,7 @@ allocated and the value will be simply put the memory.
If
string will be read into that type.
Note that when we are dealing with a string type, @code{*out} should be
-interpretted as @code{char **} (one element in an array of pointers to
+interpreted as @code{char **} (one element in an array of pointers to
different strings). In other words, @code{out} should be @code{char ***}.
This function can be used to fill in arrays of numbers from strings (in an
@@ -22200,7 +22295,7 @@ suggests, they @emph{point} to a byte in memory (like
an address in a
city). The C programming language gives you complete freedom in how to use
the byte (and the bytes that follow it). Pointers are thus a very powerful
feature of C. However, as the saying goes: ``With great power comes great
-responsability'', so they must be approached with care. The functions in
+responsibility'', so they must be approached with care. The functions in
this header are not very complex, they are just wrappers over some basic
pointer functionality regarding pointer arithmetic and allocation (in
memory or HDD/SSD).
@@ -22735,6 +22830,17 @@ The functions listed in this section describe the most
basic operations on
are declared in @file{gnuastro/data.h} which is also visible from the
function names (see @ref{Gnuastro library}).
+@deftypefun {gal_data_t *} gal_data_alloc (void @code{*array}, uint8_t
@code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm
@code{*wcs}, int @code{clear}, size_t @code{minmapsize}, char @code{*name},
char @code{*unit}, char @code{*comment})
+
+Dynamically allocate a @code{gal_data_t} and initialize it with all the
+given values. See the description of @code{gal_data_initialize} and
+@ref{Generic data container} for more information. This function will often
+be the most frequently used because it allocates the @code{gal_data_t}
+hosting all the values @emph{and} initializes it. Once you are done with
+the dataset, be sure to clean up all the allocated spaces with
+@code{gal_data_free}.
+@end deftypefun
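
As a minimal, self-contained sketch (the name, unit and comment strings
below are arbitrary), a cleared two-dimensional 32-bit floating point
dataset can be allocated and freed like this:

@example
#include <gnuastro/type.h>
#include <gnuastro/data.h>

int
main(void)
@{
  gal_data_t *image;
  size_t dsize[2]=@{100, 150@};  /* Length along each dimension. */

  /* Allocate a cleared dataset with no WCS. The `-1' given to
     `minmapsize' is the largest possible `size_t' value.        */
  image=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 2, dsize, NULL, 1,
                       -1, "demo", "counts", "an example dataset");

  /* ... use `image->array' as a `float *' here ... */

  /* Clean up. */
  gal_data_free(image);
  return 0;
@}
@end example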
+
@deftypefun void gal_data_initialize (gal_data_t @code{*data}, void
@code{*array}, uint8_t @code{type}, size_t @code{ndim}, size_t @code{*dsize},
struct wcsprm @code{*wcs}, int @code{clear}, size_t @code{minmapsize}, char
@code{*name}, char @code{*unit}, char @code{*comment})
Initialize the given data structure (@code{data}) with all the given
@@ -22765,21 +22871,10 @@ memory or it is used in multiple datasets, be sure to
set it to @code{NULL}
not have any zero values (a dimension of length zero is not defined).
@end deftypefun
-@deftypefun {gal_data_t *} gal_data_alloc (void @code{*array}, uint8_t
@code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm
@code{*wcs}, int @code{clear}, size_t @code{minmapsize}, char @code{*name},
char @code{*unit}, char @code{*comment})
-
-Dynamically allocate a @code{gal_data_t} and initialize it will all the
-given values. See the description of @code{gal_data_initialize} and
-@ref{Generic data container} for more information. This function will often
-be the most frequently used because it allocates the @code{gal_data_t}
-hosting all the values @emph{and} initializes it. Once you are done with
-the dataset, be sure to clean up all the allocated spaces with
-@code{gal_data_free}.
-@end deftypefun
-
@deftypefun void gal_data_free_contents (gal_data_t @code{*data})
Free all the non-@code{NULL} pointers in @code{gal_data_t}. If @code{data}
is actually a tile (@code{data->block!=NULL}, see @ref{Tessellation
-library}), then @code{tile->array} is not freed. For a complete description
+library}), then @code{data->array} is not freed. For a complete description
of @code{gal_data_t} and its contents, see @ref{Generic data container}.
@end deftypefun
@@ -24045,21 +24140,34 @@ this line is ignored.
@end itemize
@end deftypefun
-@deftypefun void gal_table_write (gal_data_t @code{*cols}, gal_list_str_t
@code{*comments}, int @code{tableformat}, char @code{*filename}, char
@code{*extname})
-Write the @code{cols} list of datasets into a table in @code{filename} (see
-@ref{List of gal_data_t}). The format of the table can be determined with
-@code{tableformat} that accepts the macros defined above. If
-@code{comments} is not @code{NULL}, then the list of comments will also be
-printed into the output table. When the output table is a plain text file,
-each node's string will be printed after a @code{#} (so it can be
-considered as a comment) and in FITS table they will follow a
-@code{COMMENT} keyword. If @file{filename} is a FITS file, the table
-extension that will be written will have the name @code{extname}.
+@deftypefun void gal_table_write (gal_data_t @code{*cols}, gal_list_str_t
@code{*comments}, int @code{tableformat}, char @code{*filename}, char
@code{*extname}, uint8_t @code{colinfoinstdout})
+Write @code{cols} (a list of datasets, see @ref{List of gal_data_t}) into a
+table stored in @code{filename}. The format of the table can be determined
+with @code{tableformat} that accepts the macros defined above. When
+@code{filename==NULL}, the column information will be printed on the
+standard output (command-line).
+
+If @code{comments!=NULL}, the list of comments (see @ref{List of strings})
+will also be printed into the output table. When the output table is a
+plain text file, every node of @code{comments} will be printed after a
+@code{#} (so it can be considered as a comment) and in FITS table they will
+follow a @code{COMMENT} keyword.
-If a file named @file{filename} already exists, the operation depends on
-the type of output. When @file{filename} is a FITS file, the table will be
-added as a new extension after all existing ones. If @file{filename} is a
-plain text file, this function will abort with an error.
+If a file named @code{filename} already exists, the operation depends on
+the type of output. When @code{filename} is a FITS file, the table will be
+added as a new extension after all existing extensions. If @code{filename}
+is a plain text file, this function will abort with an error.
+
+If @code{filename} is a FITS file, the table extension will have the name
+@code{extname}.
+
+When @code{colinfoinstdout!=0} and @code{filename==NULL} (columns are
+printed on the standard output), the dataset metadata will also be printed
+on the standard output. When printing to the standard output, the column
+values can be piped into another program for further processing, and the
+meta-data (lines starting with a @code{#}) would then have to be ignored by
+that program. In such cases, you can print only the column values by
+passing @code{0} to @code{colinfoinstdout}.
@end deftypefun
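
As a rough usage sketch (assuming @code{cols} is a list of columns that
was already filled elsewhere; the file name is hypothetical):

@example
#include <gnuastro/data.h>
#include <gnuastro/table.h>

void
write_demo(gal_data_t *cols)
@{
  /* Write into a plain-text file (recall that an already existing
     `out.txt' will cause an abort with an error). The `extname' is
     only used when the output is a FITS file.                      */
  gal_table_write(cols, NULL, GAL_TABLE_FORMAT_TXT, "out.txt",
                  NULL, 0);

  /* Print on the standard output, also printing the column
     metadata (the last argument is non-zero).                      */
  gal_table_write(cols, NULL, GAL_TABLE_FORMAT_TXT, NULL, NULL, 1);
@}
@end example
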
@deftypefun void gal_table_write_log (gal_data_t @code{*logll}, char
@code{*program_string}, time_t @code{*rawtime}, gal_list_str_t
@code{*comments}, char @code{*filename}, int @code{quiet})
@@ -24373,7 +24481,7 @@ allocations may be done by this function (one if
@code{array!=NULL}).
Therefore, when using the values of strings after this function,
-@code{keysll[i].array} must be interpretted as @code{char **}: one
+@code{keysll[i].array} must be interpreted as @code{char **}: one
allocation for the pointer, one for the actual characters. If you use
something like the example above, you don't have to worry about the
freeing, @code{gal_data_array_free} will free both allocations. So to read
@@ -24676,7 +24784,9 @@ files. They can be viewed and edited on any text editor
or even on the
command-line. This section describes some functions that help in
reading from and writing to plain text files.
-Lines are one of the most basic buiding blocks (delimiters) of a text
+@cindex CRLF line terminator
+@cindex Line terminator, CRLF
+Lines are one of the most basic building blocks (delimiters) of a text
file. Some operating systems like Microsoft Windows, terminate their ASCII
text lines with a carriage return character and a new-line character (two
characters, also known as CRLF line terminators). While Unix-like operating
@@ -24744,7 +24854,7 @@ the dataset. If the necessary space for the image is
larger than
see the description under the same name in @ref{Generic data container}.
@end deftypefun
-@deftypefun void gal_txt_write (gal_data_t @code{*cols}, gal_list_str_t
@code{*comment}, char @code{*filename})
+@deftypefun void gal_txt_write (gal_data_t @code{*cols}, gal_list_str_t
@code{*comment}, char @code{*filename}, uint8_t @code{colinfoinstdout})
Write @code{cols} in a plain text file @code{filename}. @code{cols} may
have one or two dimensions which determines the output:
@@ -24761,16 +24871,25 @@ table.
be written.
@end table
-If @code{filename} already exists this function will abort with an error
-and will not write over the existing file. Please make sure if the file
-exists or not and take the appropriate action before calling this
-function. If @code{comments!=NULL}, a @code{#} will be put at the start of
-each node of the list of strings and will be written in the file before the
-column meta-data in @code{filename} (see @ref{List of strings}).
-
This is a low-level function for tables. It is recommended to use
@code{gal_table_write} for generic writing of tables in a variety of
formats, see @ref{Table input output}.
+
+If @code{filename} already exists, this function will abort with an error
+and will not write over the existing file. Before calling this function,
+check whether the file already exists. If @code{comments!=NULL}, a @code{#}
+will be put at the start of each node of the list of strings and will be
+written in the file before the column meta-data in @code{filename} (see
+@ref{List of strings}).
+
+When @code{filename==NULL}, the columns will be printed on the standard
+output (command-line). When @code{colinfoinstdout!=0} and
+@code{filename==NULL} (columns are printed on the standard output), the
+dataset metadata will also be printed on the standard output. When printing
+to the standard output, the column values can be piped into another program
+for further processing, and the meta-data (lines starting with a @code{#})
+would then have to be ignored by that program. In such cases, you can print
+only the column values by passing @code{0} to @code{colinfoinstdout}.
@end deftypefun
@@ -24836,7 +24955,7 @@ algorithm, JPEG files can have low volumes, making it
used heavily on the
internet. For more on this file format, and a comparison with others,
please see @ref{Recognized file formats}.
-For scientific purposes, the lossy compression and very limited dymanic
+For scientific purposes, the lossy compression and very limited dynamic
range (8-bit integers) make JPEG very unattractive for storing valuable
data. However, because of its commonality, it will inevitably be needed in
some situations. The functions here can be used to read and write JPEG
@@ -24893,7 +25012,7 @@ aren't defined in their context.
To display rasterized images, PostScript does allow arrays of
pixels. However, since the over-all EPS file may contain many vectorized
elements (for example borders, text, or other lines over the text) and
-interpretting them is not trivial or necessary within Gnuastro's scope,
+interpreting them is not trivial or necessary within Gnuastro's scope,
Gnuastro only provides some functions to write a dataset (in the
@code{gal_data_t} format, see @ref{Generic data container}) into EPS.
@@ -24923,7 +25042,7 @@ Write the @code{in} dataset into an EPS file called
(@code{GAL_TYPE_UINT8}, see @ref{Numeric data types}). The desired width of
the image in human/non-pixel units (to help the displayer) can be set with
the @code{widthincm} argument. If @code{borderwidth} is non-zero, it is
-interpretted as the width (in points) of a solid black border around the
+interpreted as the width (in points) of a solid black border around the
image. A border can be helpful when importing the EPS file into a document.
@cindex ASCII85 encoding
@@ -24932,7 +25051,7 @@ EPS files are plain-text (can be opened/edited in a
text editor), therefore
there are different encodings to store the data (pixel values) within
them. Gnuastro supports the Hexadecimal and ASCII85 encoding. ASCII85 is
more efficient (producing small file sizes), so it is the default
-encoding. To use Hexademical encoding, set @code{hex} to a non-zero
+encoding. To use Hexadecimal encoding, set @code{hex} to a non-zero
value. Currently, if you don't want to directly import the EPS file into a
PostScript document but want to later compile it into a PDF file, set the
@code{forpdf} argument to @code{1}.
@@ -24970,14 +25089,13 @@ Write the @code{in} dataset into an EPS file called
(@code{GAL_TYPE_UINT8}, see @ref{Numeric data types}). The desired width of
the image in human/non-pixel units (to help the displayer) can be set with
the @code{widthincm} argument. If @code{borderwidth} is non-zero, it is
-interpretted as the width (in points) of a solid black border around the
-image. A border can helpful when importing the PDF file into a
-document.
+interpreted as the width (in points) of a solid black border around the
+image. A border can be helpful when importing the PDF file into a document.
This function is just a wrapper for the @code{gal_eps_write} function in
@ref{EPS files}. After making the EPS file, Ghostscript (with a version of
9.10 or above, see @ref{Optional dependencies}) will be used to compile the
-EPS file to a PDF file. Therfore if GhostScript doesn't exist, doesn't have
+EPS file to a PDF file. Therefore if GhostScript doesn't exist, doesn't have
the proper version, or fails for any other reason, the EPS file will
remain. It can be used to find the cause, or use another converter or
PostScript compiler.
@@ -25303,7 +25421,7 @@ Unary operand absolute-value operator.
@deffnx Macro GAL_ARITHMETIC_OP_MEDIAN
Multi-operand statistical operations. When @code{gal_arithmetic} is called
with any of these operators, it will expect only a single operand that will
-be interpretted as a list of datasets (see @ref{List of gal_data_t}. The
+be interpreted as a list of datasets (see @ref{List of gal_data_t}). The
output will be a single dataset with each of its elements replaced by the
respective statistical operation on the whole list. See the discussion
under the @code{min} operator in @ref{Arithmetic operators}.
@@ -25615,7 +25733,7 @@ operation will be done on @code{numthreads} threads.
@deffn {Function-like macro} GAL_TILE_PARSE_OPERATE (@code{IN}, @code{OTHER},
@code{PARSE_OTHER}, @code{CHECK_BLANK}, @code{OP})
Parse @code{IN} (which can be a tile or a fully allocated block of memory)
and do the @code{OP} operation on it. @code{OP} can be any combination of C
-expressions. If @code{OTHER!=NULL}, @code{OTHER} will be interpretted as a
+expressions. If @code{OTHER!=NULL}, @code{OTHER} will be interpreted as a
dataset and this macro will allow access to its element(s) and it can
optionally be parsed while parsing over @code{IN}.
@@ -25630,7 +25748,7 @@ different) may have different sizes. Using @code{OTHER}
(along with
@code{PARSE_OTHER}), this function-like macro will thus enable you to parse
and define your own operation on two fixed size regions in one or two
blocks of memory. In the latter case, they may have different numeric
-datatypes, see @ref{Numeric data types}).
+data types, see @ref{Numeric data types}).
The input arguments to this macro are explained below, the expected type of
each argument are also written following the argument name:
@@ -26073,7 +26191,7 @@ An @code{ndim}-dimensional dataset of size @code{naxes}
(along each
dimension, in FITS order) and a box with first and last (inclusive)
coordinate of @code{fpixel_i} and @code{lpixel_i} is given. This box
doesn't necessarily have to lie within the dataset, it can be outside of
-it, or only patially overlap. This function will change the values of
+it, or only partially overlap. This function will change the values of
@code{fpixel_i} and @code{lpixel_i} to exactly cover the overlap in the
input dataset's coordinates.
@@ -26754,8 +26872,8 @@ round of clipping).
The role of @code{param} is determined based on its value. If @code{param}
is larger than @code{1} (one), it must be an integer and will be
-interpretted as the number clips to do. If it is less than @code{1} (one),
-it is interpretted as the tolerance level to stop the iteration.
+interpreted as the number of clips to do. If it is less than @code{1} (one),
+it is interpreted as the tolerance level to stop the iteration.
The output dataset has the following elements with a
@code{GAL_TYPE_FLOAT32} type:
@@ -27046,7 +27164,7 @@ For example, if the returned array is called
@code{indexs}, then
casting to @code{size_t *}) containing the indexs of each one of those
elements/pixels.
-By @emph{index} we mean the 1D position: the input number of dimentions is
+By @emph{index} we mean the 1D position: the input number of dimensions is
irrelevant (any dimensionality is supported). In other words, each
element's index is the number of elements/pixels between it and the
dataset's first element/pixel. Therefore it is always greater or equal to
@@ -27118,7 +27236,7 @@ input->flags &= ~GAL_DATA_FLAG_HASBLANK; /* Set bit to
0. */
@deftypefun void gal_label_clump_significance (gal_data_t @code{*values},
gal_data_t @code{*std}, gal_data_t @code{*label}, gal_data_t @code{*indexs},
struct gal_tile_two_layer_params @code{*tl}, size_t @code{numclumps}, size_t
@code{minarea}, int @code{variance}, int @code{keepsmall}, gal_data_t
@code{*sig}, gal_data_t @code{*sigind})
@cindex Clump
This function is usually called after @code{gal_label_watershed}, and is
-used as a measure to idenfity which over-segmented ``clumps'' are real and
+used as a measure to identify which over-segmented ``clumps'' are real and
which are noise.
A measurement is done on each clump (using the @code{values} and @code{std}
@@ -27137,7 +27255,7 @@ make a measurement on each clump and over all the
river/watershed
pixels. The number of clumps (@code{numclumps}) must be given as an input
argument and any clump that is smaller than @code{minarea} is ignored
(because of scatter). If @code{variance} is non-zero, then the @code{std}
-dataset is interpretted as variance, not standard deviation.
+dataset is interpreted as variance, not standard deviation.
The @code{values} and @code{std} datasets must have a @code{float} (32-bit
floating point) type. Also, @code{label} and @code{indexs} must
@@ -27299,7 +27417,7 @@ During data analysis, it happens that parts of the data
cannot be given a
value, but one is necessary for the higher-level analysis. For example a
very bright star saturated part of your image and you need to fill in the
saturated pixels with some values. Another common usage case are masked
-sky-lines in 1D specra that similarly need to be assigned a value for
+sky-lines in 1D spectra that similarly need to be assigned a value for
higher-level analysis. In other situations, you might want a value in an
arbitrary point: between the elements/pixels where you have data. The
functions described in this section are for such operations.
@@ -28805,9 +28923,9 @@ $ mv TEMPLATE.h myprog.h
@item
@cindex GNU Grep
-Correct all occurances of @code{TEMPLATE} in the input files to
+Correct all occurrences of @code{TEMPLATE} in the input files to
@code{myprog} (in short or long format). You can get a list of all
-occurances with the following command. If you use Emacs, it will be able to
+occurrences with the following command. If you use Emacs, it will be able to
parse the Grep output and open the proper file and line automatically. So
this step can be very easy.
@@ -28867,7 +28985,7 @@ program or library). Later on, when you are coding,
this general context
will significantly help you as a road-map.
A very important part of this process is the program/library introduction.
-These first few paragraphs explain the purposes of the program or libirary
+These first few paragraphs explain the purposes of the program or library
and are fundamental to Gnuastro. Before actually starting to code, explain
your idea's purpose thoroughly in the start of the respective/new section
you wish to work on. While actually writing its purpose for a new reader,
@@ -29925,7 +30043,7 @@ popular graphic user interface for GNU/Linux systems),
version 3. For GNOME
make it your self (with @command{mkdir}). Using your favorite text editor,
you can now create @file{~/.local/share/applications/saods9.desktop} with
the following contents. Just don't forget to correct @file{BINDIR}. If you
-would also like to have ds9's logo/icon in GNOME, download it, uncomment
+would also like to have ds9's logo/icon in GNOME, download it, un-comment
the @code{Icon} line, and write its address in the value.
@example
diff --git a/doc/release-checklist.txt b/doc/release-checklist.txt
index 05a851f..76f2660 100644
--- a/doc/release-checklist.txt
+++ b/doc/release-checklist.txt
@@ -6,10 +6,23 @@ set of operations to do for making each release. This should
be done after
all the commits needed for this release have been completed.
- - [ALPHA] Only in the first alpha release after a stable release: update
- the library version (the values starting with `GAL_' in
- `configure.ac'). See the `Updating library version information' section
- of the GNU Libtool manual as a guide.
+  - Build the Debian distribution (just for a test) and correct any build
+    or Lintian warnings before the alpha or main release. This is
+    recommended even if you don't actually want to make a Debian release,
+    because the warnings are very useful in making the package build
+    cleanly on other systems.
+
+    If you don't actually want to make a Debian release, then in the end,
+    instead of running `git push', just delete the top commits that you
+    made in the three branches with the following commands:
+
+ $ git checkout master
+ $ git reset --hard HEAD~1
+ $ git checkout pristine-tar
+ $ git reset --hard HEAD~1
+ $ git checkout upstream
+ $ git tag -d upstream/$ver
+ $ git reset --hard HEAD~1
+ $ git checkout master
- [STABLE] Run a spell-check (in emacs with `M-x ispell') on the whole book.
@@ -22,7 +35,11 @@ all the commits needed for this release have been completed.
- Check if THANKS and the book's Acknowledgments section have everyone in
- `doc/announce-acknowledge.txt' in them.
+ `doc/announce-acknowledge.txt' in them. To see who has been added in the
+ `THANKS' file since the last stable release (to add in the book), you
+ can use this command:
+
+ $ git diff gnuastro_vP.P..HEAD THANKS
- [STABLE] Correct the links in the webpage (`doc/gnuastro.en.html' and
@@ -41,7 +58,6 @@ all the commits needed for this release have been completed.
> doc/announce-acknowledge.txt
-
- Commit all these changes:
$ git add -u
@@ -53,7 +69,7 @@ all the commits needed for this release have been completed.
$ git clean -fxd
$ ./bootstrap --copy --gnulib-srcdir=/path/to/gnulib
- $ ./developer-build
+ $ ./developer-build -c -C -d # using `-d' to speed up the build.
$ cd build
$ make distcheck -j8
@@ -75,12 +91,10 @@ all the commits needed for this release have been completed.
- [STABLE] The tag will cause a change in the tarball version. So clean
the build directory, and repeat the steps for the final release:
- $ rm -rf ./build/*
$ autoreconf -f
- $ ./developer-build
+ $ ./developer-build -c -C -d
$ cd build
- $ make distcheck -j8
- $ make dist-lzip
+ $ make dist dist-lzip # to build `tar.gz' and `tar.lz'.
- Upload the tarball with the command below: Note that `gnupload'
@@ -108,7 +122,7 @@ all the commits needed for this release have been completed.
that you will need to configure and build Gnuastro in the main source
directory to build the full webpage with this script.
- $ ./configure
+ $ ./configure CFLAGS="-g -O0" --disable-shared
$ make -j8
$ cd doc
$ ./forwebpage /path/to/local/copy/of/webpage
@@ -140,8 +154,8 @@ all the commits needed for this release have been completed.
$ /path/to/gnulib/build-aux/announce-gen --release-type=XXXX \
--package-name=gnuastro --previous-version=0.1 \
--current-version=0.2 --gpg-key-id=$mykeyid \
- --url-directory=http://YYYY.gnu.org/gnu/gnuastro \
- --archive-suffix=tar.lz > announcement.txt
+ --url-directory=https://YYYY.gnu.org/gnu/gnuastro \
+ --archive-suffix=tar.lz > ~/announcement.txt
- Based on previous announcements, add an intro, the NEWS file and the
@@ -160,6 +174,10 @@ all the commits needed for this release have been
completed.
(only for STABLE) and Savannah news (only for STABLE).
+ - Open `configure.ac' and increment `GAL_CURRENT' for the next
+ release. See the `Updating library version information' section of the
+ GNU Libtool manual as a guide. Note that we are assuming that until the
+ next release some change will be made in the library.
@@ -183,6 +201,7 @@ Steps necessary to Package Gnuastro for Debian.
$ sudo apt-get update
$ sudo apt-get upgrade
+
- If this is the first time you are packaging on this system, you will
need to install the following programs. The first group of packages are
general for package building, and the second are only for Gnuastro.
@@ -232,13 +251,6 @@ Steps necessary to Package Gnuastro for Debian.
$ rm -f gnuastro_* gnuastro-*
- - To keep things clean, define Gnuastro's version as a variable (if this
- isn't a major release, we won't use the last four or five characters
- that are the first commit hash characters):
-
- $ export ver=A.B.CCC
-
-
- [ALPHA] Build an ASCII-armored, detached signature for the tarball with
this command (it will make a `.asc' file by default, so use that instead
of `.sig' in the two following steps).
@@ -247,10 +259,17 @@ Steps necessary to Package Gnuastro for Debian.
- Put a copy of the TARBALL and its SIGNATURE to be packaged in this
- directory (use a different address for the experimental releases)
+ directory (use a different address for the experimental releases).
+
+ $ wget https://ftp.gnu.org/gnu/gnuastro/gnuastro-XXXXX.tar.gz
+ $ wget https://ftp.gnu.org/gnu/gnuastro/gnuastro-XXXXX.tar.gz.sig
+
- $ wget https://ftp.gnu.org/gnu/gnuastro/gnuastro-$ver.tar.gz
- $ wget https://ftp.gnu.org/gnu/gnuastro/gnuastro-$ver.tar.gz.sig
+ - To keep things clean, define Gnuastro's version as a variable (if this
+ isn't a major release, we won't use the last four or five characters
+ that are the first commit hash characters):
+
+ $ export ver=A.B.CCC
- Make a standard symbolic link to the tarball (IMPORTANT: the `dash' is
@@ -278,7 +297,7 @@ Steps necessary to Package Gnuastro for Debian.
$ git add --all
$ git commit -m "Upstream Gnuastro $ver"
- $ git tag upstream/$ver
+ $ git tag -a upstream/$ver
$ pristine-tar commit ../gnuastro_$ver.orig.tar.gz \
-s ../gnuastro_$ver.orig.tar.gz.asc
@@ -310,7 +329,7 @@ Steps necessary to Package Gnuastro for Debian.
- In `debian/control', change all the old sonames to the new value.
- - Update `debian/changeLog' with all the Debian-related changes (since
+ - Update `debian/changelog' with all the Debian-related changes (since
merging with the upstream branch). Gnuastro's changes don't need to be
mentioned here. If there was no major changes, just say "New upstream
version".
@@ -349,6 +368,7 @@ Steps necessary to Package Gnuastro for Debian.
$ git add --all
$ git status # For a visual check
$ git commit -m "Gnuastro $ver"
+ $ git tag -a debian/$ver-1
- Push all the changes to the repository (you can't call `--all' and
@@ -358,4 +378,5 @@ Steps necessary to Package Gnuastro for Debian.
$ git push --tags
- - Inform Debian Astro.
+ - Inform Debian Astro: Ole Streicher (olebole@debian.org) has been
+ uploading Gnuastro to Debian until now.
diff --git a/lib/data.c b/lib/data.c
index b1e40a5..45b4762 100644
--- a/lib/data.c
+++ b/lib/data.c
@@ -63,6 +63,35 @@ along with Gnuastro. If not, see
<http://www.gnu.org/licenses/>.
/*********************************************************************/
/************* Allocation *******************/
/*********************************************************************/
+/* Allocate a data structure based on the given parameters. If you want to
+ force the array into the hdd/ssd (mmap it), then set minmapsize=-1
+ (largest possible size_t value), in this way, no file will be larger. */
+gal_data_t *
+gal_data_alloc(void *array, uint8_t type, size_t ndim, size_t *dsize,
+ struct wcsprm *wcs, int clear, size_t minmapsize,
+ char *name, char *unit, char *comment)
+{
+ gal_data_t *out;
+
+ /* Allocate the space for the actual structure. */
+ errno=0;
+ out=malloc(sizeof *out);
+ if(out==NULL)
+ error(EXIT_FAILURE, errno, "%s: %zu bytes for gal_data_t",
+ __func__, sizeof *out);
+
+ /* Initialize the allocated array. */
+ gal_data_initialize(out, array, type, ndim, dsize, wcs, clear, minmapsize,
+ name, unit, comment);
+
+ /* Return the final structure. */
+ return out;
+}
+
+
+
+
+
/* Initialize the data structure.
Some notes:
@@ -185,35 +214,6 @@ gal_data_initialize(gal_data_t *data, void *array, uint8_t
type,
-/* Allocate a data structure based on the given parameters. If you want to
- force the array into the hdd/ssd (mmap it), then set minmapsize=-1
- (largest possible size_t value), in this way, no file will be larger. */
-gal_data_t *
-gal_data_alloc(void *array, uint8_t type, size_t ndim, size_t *dsize,
- struct wcsprm *wcs, int clear, size_t minmapsize,
- char *name, char *unit, char *comment)
-{
- gal_data_t *out;
-
- /* Allocate the space for the actual structure. */
- errno=0;
- out=malloc(sizeof *out);
- if(out==NULL)
- error(EXIT_FAILURE, errno, "%s: %zu bytes for gal_data_t",
- __func__, sizeof *out);
-
- /* Initialize the allocated array. */
- gal_data_initialize(out, array, type, ndim, dsize, wcs, clear, minmapsize,
- name, unit, comment);
-
- /* Return the final structure. */
- return out;
-}
-
-
-
-
-
/* Free the allocated contents of a data structure, not the structure
itself. The reason that this function is separate from `gal_data_free'
is that the data structure might be allocated as an array (statically
diff --git a/lib/gnuastro/data.h b/lib/gnuastro/data.h
index 590917f..32adbd7 100644
--- a/lib/gnuastro/data.h
+++ b/lib/gnuastro/data.h
@@ -232,17 +232,17 @@ typedef struct gal_data_t
/*********************************************************************/
/************* allocation *******************/
/*********************************************************************/
-void
-gal_data_initialize(gal_data_t *data, void *array, uint8_t type, size_t ndim,
- size_t *dsize, struct wcsprm *wcs, int clear,
- size_t minmapsize, char *name, char *unit, char *comment);
-
gal_data_t *
gal_data_alloc(void *array, uint8_t type, size_t ndim, size_t *dsize,
struct wcsprm *wcs, int clear, size_t minmapsize,
char *name, char *unit, char *comment);
void
+gal_data_initialize(gal_data_t *data, void *array, uint8_t type, size_t ndim,
+ size_t *dsize, struct wcsprm *wcs, int clear,
+ size_t minmapsize, char *name, char *unit, char *comment);
+
+void
gal_data_free_contents(gal_data_t *data);
void
diff --git a/lib/gnuastro/table.h b/lib/gnuastro/table.h
index 456380e..b92a340 100644
--- a/lib/gnuastro/table.h
+++ b/lib/gnuastro/table.h
@@ -152,7 +152,8 @@ gal_table_comments_add_intro(gal_list_str_t **comments,
void
gal_table_write(gal_data_t *cols, gal_list_str_t *comments,
- int tableformat, char *filename, char *extname);
+ int tableformat, char *filename, char *extname,
+ uint8_t colinfoinstdout);
void
gal_table_write_log(gal_data_t *logll, char *program_string,
diff --git a/lib/gnuastro/txt.h b/lib/gnuastro/txt.h
index d6ec0be..e26e7f5 100644
--- a/lib/gnuastro/txt.h
+++ b/lib/gnuastro/txt.h
@@ -89,7 +89,8 @@ gal_data_t *
gal_txt_image_read(char *filename, size_t minmapsize);
void
-gal_txt_write(gal_data_t *input, gal_list_str_t *comment, char *filename);
+gal_txt_write(gal_data_t *input, gal_list_str_t *comment, char *filename,
+ uint8_t colinfoinstdout);
diff --git a/lib/interpolate.c b/lib/interpolate.c
index 51631e8..319fde9 100644
--- a/lib/interpolate.c
+++ b/lib/interpolate.c
@@ -512,7 +512,14 @@ gal_interpolate_1d_make_gsl_spline(gal_data_t *X,
gal_data_t *Y, int type_1d)
case GAL_INTERPOLATE_1D_AKIMA_PERIODIC:
itype=gsl_interp_akima_periodic; break;
case GAL_INTERPOLATE_1D_STEFFEN:
+#if HAVE_DECL_GSL_INTERP_STEFFEN
itype=gsl_interp_steffen; break;
+#else
+ error(EXIT_FAILURE, 0, "%s: Steffen interpolation isn't available "
+ "in the system's GNU Scientific Library (GSL). Please install "
+ "a more recent GSL (version >= 2.0, released in October 2015) "
+ "and rebuild Gnuastro", __func__);
+#endif
default:
error(EXIT_FAILURE, 0, "%s: code %d not recognizable for the GSL "
"interpolation type", __func__, type_1d);
diff --git a/lib/table.c b/lib/table.c
index ec174e9..84bfdd5 100644
--- a/lib/table.c
+++ b/lib/table.c
@@ -521,7 +521,8 @@ gal_table_comments_add_intro(gal_list_str_t **comments,
char *program_string,
specified by `tableformat'. */
void
gal_table_write(gal_data_t *cols, gal_list_str_t *comments,
- int tableformat, char *filename, char *extname)
+ int tableformat, char *filename, char *extname,
+ uint8_t colinfoinstdout)
{
/* If a filename was given, then the tableformat is relevant and must be
used. When the filename is empty, a text table must be printed on the
@@ -531,11 +532,11 @@ gal_table_write(gal_data_t *cols, gal_list_str_t
*comments,
if(gal_fits_name_is_fits(filename))
gal_fits_tab_write(cols, comments, tableformat, filename, extname);
else
- gal_txt_write(cols, comments, filename);
+ gal_txt_write(cols, comments, filename, colinfoinstdout);
}
else
/* Write to standard output. */
- gal_txt_write(cols, comments, filename);
+ gal_txt_write(cols, comments, filename, colinfoinstdout);
}
@@ -553,7 +554,7 @@ gal_table_write_log(gal_data_t *logll, char *program_string,
gal_table_comments_add_intro(&comments, program_string, rawtime);
/* Write the log file to disk */
- gal_table_write(logll, comments, GAL_TABLE_FORMAT_TXT, filename, "LOG");
+ gal_table_write(logll, comments, GAL_TABLE_FORMAT_TXT, filename, "LOG", 0);
/* In verbose mode, print the information. */
if(!quiet)
diff --git a/lib/txt.c b/lib/txt.c
index 097be54..a8655e0 100644
--- a/lib/txt.c
+++ b/lib/txt.c
@@ -1141,36 +1141,14 @@ txt_print_value(FILE *fp, void *array, int type, size_t
ind, char *fmt)
-static FILE *
-txt_open_file_write_info(gal_data_t *datall, char **fmts,
- gal_list_str_t *comment, char *filename)
+static void
+txt_write_metadata(FILE *fp, gal_data_t *datall, char **fmts)
{
- FILE *fp;
gal_data_t *data;
char *tmp, *nstr;
size_t i, j, num=0;
- gal_list_str_t *strt;
int nlen, nw=0, uw=0, tw=0, bw=0;
- /* Make sure the file doesn't already eixist. */
- if( gal_checkset_check_file_return(filename) )
- error(EXIT_FAILURE, 0, "%s: %s already exists. For safety, this "
- "function will not over-write an existing file. Please delete "
- "it before calling this function", __func__, filename);
-
- /* Open the output file. */
- errno=0;
- fp=fopen(filename, "w");
- if(fp==NULL)
- error(EXIT_FAILURE, errno, "%s: couldn't be open to write text "
- "table by %s", filename, __func__);
-
-
- /* Write the comments if there were any. */
- for(strt=comment; strt!=NULL; strt=strt->next)
- fprintf(fp, "# %s\n", strt->v);
-
-
/* Get the maximum width for each information field. */
for(data=datall;data!=NULL;data=data->next)
{
@@ -1187,14 +1165,7 @@ txt_open_file_write_info(gal_data_t *datall, char **fmts,
}
- /* Write the column information if the output is a file. When the
- output is directed to standard output (the command-line), it is
- most probably intended for piping into another program (for
- example AWK for further processing, or sort, or anything) so the
- user already has the column information and is probably going to
- change them, so they are just a nuisance.
-
- When there are more than 9 columns, we don't want to have cases
+ /* When there are more than 9 columns, we don't want to have cases
like `# Column 1 :' (note the space between `1' and `:', this
space won't exist for the 2 digit column numbers).
@@ -1231,7 +1202,6 @@ txt_open_file_write_info(gal_data_t *datall, char **fmts,
/* Clean up and return. */
free(nstr);
- return fp;
}
@@ -1239,10 +1209,12 @@ txt_open_file_write_info(gal_data_t *datall, char
**fmts,
void
-gal_txt_write(gal_data_t *input, gal_list_str_t *comment, char *filename)
+gal_txt_write(gal_data_t *input, gal_list_str_t *comment, char *filename,
+ uint8_t colinfoinstdout)
{
FILE *fp;
char **fmts;
+ gal_list_str_t *strt;
size_t i, j, num=0, fmtlen;
gal_data_t *data, *next2d=NULL;
@@ -1290,10 +1262,31 @@ gal_txt_write(gal_data_t *input, gal_list_str_t *comment, char *filename)
/* Set the output FILE pointer: if it isn't NULL, it's an actual file;
otherwise, it's the standard output. */
- fp = ( filename
- ? txt_open_file_write_info(input, fmts, comment, filename)
- : stdout );
+ if(filename)
+ {
+ /* Make sure the file doesn't already exist. */
+ if( gal_checkset_check_file_return(filename) )
+ error(EXIT_FAILURE, 0, "%s: %s already exists. For safety, this "
+ "function will not over-write an existing file. Please delete "
+ "it before calling this function", __func__, filename);
+
+ /* Open the output file. */
+ errno=0;
+ fp=fopen(filename, "w");
+ if(fp==NULL)
+        error(EXIT_FAILURE, errno, "%s: couldn't be opened to write the "
+              "text table by %s", filename, __func__);
+
+ /* Write the comments if there were any. */
+ for(strt=comment; strt!=NULL; strt=strt->next)
+ fprintf(fp, "# %s\n", strt->v);
+ }
+ else
+ fp=stdout;
+ /* Write the meta-data if necessary. */
+ if(filename ? 1 : colinfoinstdout)
+ txt_write_metadata(fp, input, fmts);
/* Print the dataset */
switch(input->ndim)
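
A minimal sketch of the same change from the `gal_txt_write' side (again
not part of this patch): a non-NULL filename always gets the column
information block, while standard output only gets it when the new last
argument is non-zero, which is what the `filename ? 1 : colinfoinstdout'
condition above implements.  The output file name below is hypothetical
and, as the code above shows, must not already exist.

    /* Sketch only: the same columns written two ways.  The file output
       carries the `# Column N: ...' lines unconditionally; the standard
       output copy carries them only because the last argument is 1.  */
    #include <gnuastro/data.h>
    #include <gnuastro/list.h>
    #include <gnuastro/txt.h>

    static void
    demo_txt_write(gal_data_t *cols, gal_list_str_t *comments)
    {
      gal_txt_write(cols, comments, "demo-table.txt", 0);  /* To a file.  */
      gal_txt_write(cols, comments, NULL, 1);              /* To stdout.  */
    }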
diff --git a/tests/match/merged-cols.sh b/tests/match/merged-cols.sh
index ea1e124..3cc7245 100755
--- a/tests/match/merged-cols.sh
+++ b/tests/match/merged-cols.sh
@@ -49,5 +49,5 @@ if [ ! -f $execname ]; then echo "$execname not created."; exit 77; fi
# Actual test script
# ==================
-$execname $cat1 $cat2 --aperture=0.5 --log --output=match-positions.fits \
- --outcols=a1,aEFGH,bACCU1,aIJKL,bACCU2 -omatch-merged-cols.txt
+$execname $cat1 $cat2 --aperture=0.5 -omatch-merged-cols.txt \
+ --outcols=a1,aEFGH,bACCU1,aIJKL,bACCU2