Friday, 30 January 2009

GOODS progress

Implemented new code that should, hopefully, cope with the seemingly fragile network running here. I've also modularized the initialization section of my rastermap v7 code: there are now two code blocks that run independently. I'll need to document them properly at some point, but first I need to keep track of the files that come out of them.

AORBUILD.pro

Outputs:
"meanmap.fits" - hashed up map of the summed images normalized by weight.

"meanmap.idl" - stores data used to construct the initial mean map:
  • "file[]" - list of filenames of files that contain fair images to work with.
  • "mean" - the map written out in the meanmap.fits file
  • "hd" - the header for the map
  • "wt" - the summed weight of all the images in the map plane
  • "usedfiles" - number of files in "file"
"aorprops.idl" - stores useful tidbits of information that relate images to their AORs:
  • "filesperaor[]" - number of (useful) images contained within the AOR
  • "medsky[]" - median image produced by simply combining the images without interpolating; serves as the sky background employed in the initial sky subtraction. Good for spotting detector failures.
  • "aornames[]" - what it says on the tin
"meta/.idl" - name of the AOR is used to build idl save files that store:
  • "stack[]" which holds stack.wt and stack.map which are the individual weight and image files interpolated onto the map plane.
  • "wtaor" which is simply the summation of the individual weights of the image files in the map plane for each particular AOR.

STACKBUILD.pro

Outputs:
"meta/STKx####y&&&&.idl" is a stack of pixels in the map plane where #### and &&&& represent the x and y coordinates:
  • "pxfil[]" - array containing full path of the file containing originating pixel.
  • "pxsrc[]" - the key part of the filename used to describe the above file, allowing a full AOR/file/image/location search of the file in question.
  • "pxval []" - the actual value of the pixel.

Sunday, 25 January 2009

Lost in Translation

So, having got back to work after the New Year, a trip to India and a touch of lurgy, things seem to have been happening! The GoogleSky problem I was faced with seems to have been magically solved using Google App Engine. Instead of my Python code instantiating a web page and calling methods within it, the page is now delivered with JavaScript, and the JavaScript handlers call back into the Python side. I think avoiding JavaScript is no longer an option, and I'm going to have to face learning yet another programming language for this project, or else I'm going to be fairly lost in translation.

In the meantime, I've been ploughing through the C4 algorithm, and I think I'm happy with what the component parts do in terms of input and output. I still need to figure out what's actually going on inside the subroutines, though, especially the count and testcount functions: they have a lot more going on than I initially thought. The net output of the program is also still gibberish to me, so I need to see if I can come up with some sensible output, or maybe find some standardized naming for the variables; I'll look at the NVO to see if they have anything useful I can call on later. I still haven't geared it up for dynamic array setting... the problem is that when I have to deal with an observing footprint, specifically the DES observing footprint, I'm going to get a lot of junk from the testcount function. I'll have to dig a bit more and see whether this has been solved before or whether I'm going to have to start from scratch. If I do have to start from scratch, I think I'll try to use the SQL syntax used in the NVO, though that might mean extra work writing a C++ parser for an SQL query... a risk of being lost in translation all over again. I think perhaps a lookup table might be the fastest/easiest way to do this.
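The lookup-table idea could be as simple as pre-gridding the footprint into a boolean mask, so a point query becomes a single array lookup instead of a geometry test or an SQL query. A toy sketch of that approach (the grid resolution, the rectangular "footprint", and all the names here are my own placeholders; nothing to do with the actual DES footprint or the C4 code, which is C++):

```python
# Toy sketch of a footprint lookup table: evaluate the footprint test
# once per grid cell up front, then answer point queries by indexing.

RES = 0.5  # grid cell size in degrees (placeholder resolution)

def build_mask(inside, ra_range=(0.0, 360.0), dec_range=(-90.0, 90.0)):
    """inside(ra, dec) -> bool is whatever defines the footprint."""
    n_ra = int((ra_range[1] - ra_range[0]) / RES)
    n_dec = int((dec_range[1] - dec_range[0]) / RES)
    # evaluate the footprint test at each cell center
    return [[inside(ra_range[0] + (i + 0.5) * RES,
                    dec_range[0] + (j + 0.5) * RES)
             for i in range(n_ra)] for j in range(n_dec)]

def in_footprint(mask, ra, dec):
    # point query is now just an index lookup
    j = int((dec + 90.0) / RES)
    i = int(ra / RES)
    return mask[j][i]

# placeholder footprint: a simple RA/Dec box, standing in for the real one
mask = build_mask(lambda ra, dec: 40.0 <= ra <= 60.0 and -30.0 <= dec <= 5.0)
```

The trade-off is resolution against memory: cells straddling the footprint edge get misclassified, so this only works if a half-degree (or finer) fuzziness at the boundary is tolerable.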

Thursday, 22 January 2009

It begins...

Seeing as the department webserver is down for now, and I have no useful way of tracking my academic thoughts at present, I may as well kick this off. The name of this blog comes from what seems to be more of a hobby than my actual PhD research, but here goes: I'm trying to make super high-quality maps from deep infrared data taken by the Spitzer Space Telescope as part of the GOODS programme. These will hopefully be the highest-quality maps produced from this data set. On top of that, I met some excellent folks at Santa Fe last year, and we've begun setting up scientific tools and an interface for the web version of GoogleSky: sort of a Google Maps for astronomers. Tons of potential for growth in both of these!

My PhD work, in stark contrast, is directed towards finding galaxy clusters in the Dark Energy Survey. Hmm... this might take some explaining. Dark energy is this "stuff" present in the universe that
(a) we can't see because it's made up of something weird,
(b) we can't find because we can't see it,
(c) but know it's all over the place and blowing the universe apart.
Intriguing, no? So in the infinite wisdom of a bunch of cosmology geeks, the Dark Energy Survey (or DES) was set up to solve problems (a) and (b). Or at least to get a better answer than shrugging our shoulders and mumbling something about scalar fields, maybe. I'll indulge in how I fit in further down the line... I do stuff faaaarr less exciting!

So, for building maps of the sky, searching the heavens, and hunting for clusters buried in the quintessence of the universe; cosmic cartography begins thus.