Commit d5828815 authored by Mohammad Akhlaghi's avatar Mohammad Akhlaghi

Degraded HST images are now sky subtracted

Until now, NoiseChisel was run on the degraded HST images only for the
segmentation map. However, because of the bad Sky subtraction in the
HST images (as shown in the previous commit), the F814W result was
strongly affected. So we now run NoiseChisel on the degraded HST
images first, save the segments, and then subtract the Sky value from
the degraded image. The result in the F814W image is now reasonable.
parent 526d8e92
......@@ -86,4 +86,4 @@ include $(foreach m, preparations download degrade-hst input-cutouts \
#
# To clean the outputs if necessary.
clean:
rm -rf $(BDIR)/* $(BSYM)
rm -rf $(BDIR)/* $(BSYM) tikz tex/pipeline.tex
......@@ -75,20 +75,20 @@ The resulting plots for the UDF field (combined from all 9 subfields)
can be seen in Figures \ref{udff606w} to \ref{udff850lp} and those for
the deep UDF-10 field are shown in Figures \ref{udf10f606w} to
\ref{udf10f850lp}. The UDF-10 and mosaic magnitudes mostly agree with
each other. The F606W plot closely resembles the equivalent plot for
the HDFS field, see Figure 10 in \citet{bacon15}. \tonote{The results from
F775W and F850LP images are also consistent, but the F814W image is
very strange: the HST magnitudes get much larger for fainter objects
than the MUSE magnitudes. Exactly the same scripts have been used in
all the steps for all the filters, so it is surprising to me that
only the F814W image shows this strange behavior.}
\tonote{Looking at the degraded and HST images (Figure
\ref{f814w-demo}), I feel that the HST images for the F814W image in
the XDF survey have very bad sky residuals. Currently I feel that
this strong residual is the cause of this offset that is unique to
this filter. But if you have any ideas or suggestions for further
tests, please let me know so I can try them out.}
each other. The F606W \new{and F814W} plots closely resemble the
equivalent plot for the HDFS field, see Figure 10 in \citet{bacon15}.
\new{The degraded F814W image in the XDF survey has a strong bias in
its Sky value, which dramatically affects the magnitudes of objects
fainter than the 24th magnitude (Figure \ref{f814w-demo}). To correct this
residual, the Sky value generated by NoiseChisel during the creation
of segmentation maps has been subtracted from the image. \tonote{The
fact that the big problem was removed once we subtracted the Sky
value may be one confirmation that the problem was in the XDF
images, not in our MUSE processing. If you can think of any other
test, I would be happy to apply it. This may be an important issue
with the XDF survey and we should be very sure before confirming this
claim.}}
\begin{figure}
\centering
......
......@@ -24,58 +24,6 @@
# One pixel kernel
# ----------------
#
# This kernel is created because, in practice, it amounts to no
# convolution within NoiseChisel.
segdir = $(BDIR)/seg-maps
onepkernel = $(segdir)/one-pix-kernel.fits
$(onepkernel): | $(segdir)
echo "1" > $(segdir)/one-pix-kernel.txt
astconvertt $(segdir)/one-pix-kernel.txt -o$@
rm $(segdir)/one-pix-kernel.txt
# Create the segmentation map
# ---------------------------
#
# The first thing we need to do is to create a segmentation map that
# will be fed into Gnuastro's MakeCatalog to generate the final
# catalog. Unfortunately as it currently stands, the MUSE-generated
# broad-band image has too many artifacts at low surface
# brightnesses. So if we want to do detection over it, we are going to
# miss a lot of the fainter objects. Thus, the convolved and scaled
# HST image is our only choice.
#
# However, the HST image doesn't have much noise left (because of the
# huge kernel). So we will be convolving it with a 1 pixel kernel
# (effectively no convolution), and then using much looser
# NoiseChisel parameters to give a reasonable result.
segments = $(foreach uid, 1 2 3 4 5 6 7 8 9 10, \
$(foreach f, $(filters), $(segdir)/udf$(uid)-$(f).fits) )
$(segments): $(segdir)/%.fits: $(cutdir)/%-h.fits $(onepkernel) \
$(hdegdir)/pix-res-scale.txt $(noisechisel) | $(segdir)
# Generate a segmentation map on the convolved and rescaled
# HST image.
astnoisechisel $< -o$(@D)/$*-nc.fits --kernel=$(onepkernel) \
--minbfrac=0.0 --minmodeq=0.3 --qthresh=0.4 \
--dthresh=0.8 --detsnminarea=5 --minnumfalse=50 \
--segquant=0.5 --gthresh=1e10 --objbordersn=1e10
# Make the clumps image an objects image. Note that because we
# disabled growth and object separation, each "object" only
# has one clump. So simply setting all the non-1 valued pixels
# in the objects image to zero will do the job for us.
astarithmetic $(@D)/$*-nc.fits $(@D)/$*-nc.fits 1 neq 0 where \
-h1 -h2 -o$@ --type=long
......@@ -206,8 +154,6 @@ $(cleancats): $(ccatdir)/%.txt: $(catdir)/%-h.txt $(catdir)/%-m.txt | $(ccatdir)
# Full field comparisons
# ----------------------
#
......
......@@ -134,22 +134,91 @@ $(hdegdir)/pix-res-scale.txt: $(mcutdir)/udf1-f606w.fits \
# The HST images were convolved with the MUSE PSF for the same spatial
# resolution, now, we need to warp them to the MUSE pixel grid to
# easily use one segmentation map over both images.
cutdir = $(BDIR)/cutouts
h-to-m-pixres = $(foreach uid, 1 2 3 4 5 6 7 8 9 10, \
$(foreach f, $(filters), $(cutdir)/udf$(uid)-$(f)-h.fits) )
$(h-to-m-pixres): $(cutdir)/%-h.fits: $(hdegdir)/%-c.fits $(mcutdir)/%.fits \
$(hdegdir)/pix-res-scale.txt $(imgwarp) $(imgcrop) \
| $(cutdir)
$(foreach f, $(filters), $(hdegdir)/udf$(uid)-$(f).fits) )
$(h-to-m-pixres): $(hdegdir)/%.fits: $(hdegdir)/%-c.fits \
$(hdegdir)/pix-res-scale.txt $(imgwarp)
# Warp the HST image to the MUSE pixel scale, first find the
# scale factor (sf), then wap the image.
scalefactor=$$(cat $(hdegdir)/pix-res-scale.txt); \
# scale factor (sf), then warp the image.
scalefactor=$$(cat $(hdegdir)/pix-res-scale.txt); \
astimgwarp $< --scale=$$scalefactor -o$@ --numthreads=1
# One pixel kernel
# ----------------
#
# This kernel is created because, in practice, it amounts to no
# convolution within NoiseChisel.
segdir = $(BDIR)/seg-maps
onepkernel = $(segdir)/one-pix-kernel.fits
$(onepkernel): | $(segdir)
echo "1" > $(segdir)/one-pix-kernel.txt
astconvertt $(segdir)/one-pix-kernel.txt -o$@
rm $(segdir)/one-pix-kernel.txt
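As a side note on why a single-pixel kernel disables convolution: convolving with a 1x1 kernel whose only value is 1 multiplies each pixel by 1 and returns the image unchanged. A minimal numpy sketch with a toy array (not a real FITS file):

```python
import numpy as np

# Toy image standing in for a FITS extension; the one-pixel kernel
# is just [[1]].
img = np.arange(12.0).reshape(3, 4)
kernel = np.array([[1.0]])

# Hand-rolled convolution with a 1x1 kernel: each output pixel is
# kernel[0, 0] times the corresponding input pixel, so the image
# comes back untouched.
out = np.empty_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        out[i, j] = kernel[0, 0] * img[i, j]

print(np.array_equal(out, img))   # → True
```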
# Create the segmentation map
# ---------------------------
#
# The first thing we need to do is to create a segmentation map that
# will be fed into Gnuastro's MakeCatalog to generate the final
# catalog. Unfortunately as it currently stands, the MUSE-generated
# broad-band image has too many artifacts at low surface
# brightnesses. So if we want to do detection over it, we are going to
# miss a lot of the fainter objects. Thus, the convolved and scaled
# HST image is our only choice.
#
# However, the HST image doesn't have much noise left (because of the
# huge kernel). So we will be convolving it with a 1 pixel kernel
# (effectively no convolution), and then using much looser
# NoiseChisel parameters to give a reasonable result.
segments = $(foreach uid, 1 2 3 4 5 6 7 8 9 10, \
$(foreach f, $(filters), $(segdir)/udf$(uid)-$(f).fits) )
$(segments): $(segdir)/%.fits: $(hdegdir)/%.fits $(onepkernel) \
$(hdegdir)/pix-res-scale.txt $(noisechisel)
# Generate a segmentation map on the convolved and rescaled
# HST image.
astnoisechisel $< -o$(@D)/$*-nc.fits --kernel=$(onepkernel) \
--minbfrac=0.0 --minmodeq=0.3 --qthresh=0.4 \
--dthresh=0.8 --detsnminarea=5 --minnumfalse=50 \
--segquant=0.5 --gthresh=1e10 --objbordersn=1e10
# Make the clumps image an objects image. Note that because we
# disabled growth and object separation, each "object" only
# has one clump. So simply setting all the non-1 valued pixels
# in the objects image to zero will do the job for us.
astarithmetic $(@D)/$*-nc.fits $(@D)/$*-nc.fits 1 neq 0 where \
-h1 -h2 -o$@ --type=long
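The `astarithmetic ... 1 neq 0 where -h1 -h2` call can be sketched with numpy on toy arrays (assuming, as the comment above describes, that the first HDU holds the object labels and the second the clump labels; the real inputs are FITS extensions):

```python
import numpy as np

# Toy stand-ins for the two HDUs of the NoiseChisel output.
objects = np.array([[0, 2, 2],
                    [1, 1, 0]])
clumps  = np.array([[0, 1, 2],
                    [1, 1, 0]])

# Reverse-Polish reading of the call: push the objects image (h1),
# build the condition (clumps != 1), then 'where' zeros the object
# labels wherever the condition holds, keeping only pixels whose
# clump label is exactly 1.
segmap = np.where(clumps != 1, 0, objects)
print(segmap.tolist())   # → [[0, 2, 0], [1, 1, 0]]
```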
# Subtract the Sky value from the HST images
# ------------------------------------------
#
# Once NoiseChisel is run on the degraded HST images, we have the Sky
# value and we can subtract it from the input image to clean it up. As
# described in commit 526d8e9 (titled: `A description is written for
# the process and results'), the F814W image in particular shows
# strong sky residuals, so this step is necessary.
cutdir = $(BDIR)/cutouts
finalhstdeg = $(foreach uid, 1 2 3 4 5 6 7 8 9 10, \
$(foreach f, $(filters), $(cutdir)/udf$(uid)-$(f)-h.fits) )
$(finalhstdeg): $(cutdir)/%-h.fits: $(segdir)/%.fits | $(cutdir)
astarithmetic $(segdir)/$*-nc.fits $(segdir)/$*-nc.fits - -h0 -h3 -o$@
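The rule above is a plain pixel-by-pixel subtraction: assuming, per the `-h0 -h3` options, that HDU 0 of the NoiseChisel output holds the input image and HDU 3 the estimated Sky, a numpy sketch with toy arrays would be:

```python
import numpy as np

# Toy stand-ins for HDU 0 (input image) and HDU 3 (Sky estimate) of
# the NoiseChisel output.
image = np.array([[1.2, 0.9],
                  [1.1, 1.0]])
sky   = np.array([[0.2, 0.2],
                  [0.1, 0.1]])

# 'astarithmetic a b - -h0 -h3' subtracts the Sky image from the
# input image element-wise.
sky_subtracted = image - sky
print(np.allclose(sky_subtracted, [[1.0, 0.7],
                                   [1.0, 0.9]]))   # → True
```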
# Correct MUSE image size
# -----------------------
#
......@@ -163,7 +232,8 @@ $(h-to-m-pixres): $(cutdir)/%-h.fits: $(hdegdir)/%-c.fits $(mcutdir)/%.fits \
# column in the image, so it shouldn't be significant.
muse-corr = $(foreach uid, 1 2 3 4 5 6 7 8 9 10, \
$(foreach f, $(filters), $(cutdir)/udf$(uid)-$(f)-m.fits) )
$(muse-corr): $(cutdir)/%-m.fits: $(mcutdir)/%.fits $(cutdir)/%-h.fits
$(muse-corr): $(cutdir)/%-m.fits: $(mcutdir)/%.fits $(cutdir)/%-h.fits \
| $(cutdir)
# If the sizes are identical, then just copy the actual
# cropped MUSE image, otherwise, using ImageCrop's `--section'
......@@ -193,5 +263,5 @@ $(muse-corr): $(cutdir)/%-m.fits: $(mcutdir)/%.fits $(cutdir)/%-h.fits
# Crops from the HST images for a demonstration of bad F814W results.
h814demodir = $(BDIR)/tex/f814w-demo
deghst-demo = $(h814demodir)/udf1-f814w.pdf $(h814demodir)/udf1-f850lp.pdf
$(deghst-demo): $(h814demodir)/%.pdf: $(cutdir)/%-h.fits | $(h814demodir)
$(deghst-demo): $(h814demodir)/%.pdf: $(hdegdir)/%.fits | $(h814demodir)
astconvertt $< -o$@ --fluxlow=-0.001 --fluxhigh=0.02 --noinvert