What was the bit depth of the DISR cameras and what is the difference between the raw images posted on the internet and raw imagery downloaded from Huygens? -Bartek Okonek
The camera has a bit depth of 12 bits per pixel, which makes it 16 times more sensitive than a common digital camera. This sensitivity was VITAL to seeing Titan’s surface through the thicker-than-expected haze. For the raw download from Huygens, the image data was ‘square rooted’ to go from 12 bits down to 8 bits with minimal loss of dynamic range or signal-to-noise ratio. The 8-bit images were then compressed using a hardware discrete cosine transform compression routine, typically down to about 1 bit per pixel. The data was then broken up into packets of 56 bytes for transmission on the probe telemetry links. The software I wrote to process this raw telemetry stream then had to undo all of these steps to reassemble the images in 12-bit-per-pixel format.
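The square-root step can be illustrated with a small sketch. This is not the actual flight lookup table, just the idea: square-root companding spends the coarse 8-bit steps on the bright end, where photon shot noise is largest anyway, so little real information is lost. The function names and the exact scaling are mine, chosen for illustration.

```python
import numpy as np

# Map the full 12-bit range (0-4095) onto 8 bits (0-255).
SCALE = 255.0 / np.sqrt(4095.0)

def encode_sqrt(pixels12):
    """Compress 12-bit pixel values to 8 bits by square-root companding,
    preserving dynamic range with minimal signal-to-noise loss."""
    p = np.asarray(pixels12, dtype=np.float64)
    return np.round(np.sqrt(p) * SCALE).astype(np.uint8)

def decode_sqrt(pixels8):
    """Approximately invert the companding back to 12-bit values, as the
    ground software must do before reassembling images."""
    p = np.asarray(pixels8, dtype=np.float64)
    return np.round((p / SCALE) ** 2).astype(np.uint16)
```

The round trip is lossy only at the level of the camera's own noise: a mid-range 12-bit value comes back within a few counts of where it started.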
What is that honeycomb pattern seen, e.g., in this image?
The camera optics consisted of three lenses on the front cover and three coherent fiber optic bundles to bring the image to the CCD surface. The honeycomb, or ‘chicken wire’ as we tend to call it, pattern is the slight drop in brightness at the boundaries between the clumps of fibers that were drawn out to make the full bundles.
Was there any noticeable difference between the first and last image taken on Titan’s surface? -Bill Leung
I have not yet had time to fully analyze the post-impact images. We have not yet seen any significant variations, although some amateurs are suggesting possibilities for us to look at. Time will tell.
What was the relative or absolute exposure time of the images? Which filters were used with the Huygens images? -Bob Webster
The typical camera exposure times were around 20 milliseconds. This was limited by the spin rate of the probe and the angular resolution of the pixels in each camera. The onboard software limited the exposure time either to keep the average exposure at half of full range, or to be just short enough to avoid blurring. The cameras each have a long-pass filter that allows only near-infrared light to pass through. This minimizes the effects of the haze and allows for the best possible pictures. The filter passes light from 650 nanometers longward to the cutoff wavelength of the CCD.
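The two exposure limits described above can be sketched in a few lines. This is a minimal illustration, not the flight code; the function names and the sample numbers (pixel field of view, spin rate) are hypothetical.

```python
def max_unblurred_exposure(pixel_ifov_deg, spin_rate_deg_per_s):
    """Longest exposure (seconds) that keeps the smear from the probe's
    spin under one pixel of angular motion."""
    return pixel_ifov_deg / spin_rate_deg_per_s

def choose_exposure(mean_signal, current_exposure_s, blur_limit_s,
                    full_range=4095):
    """Scale the exposure so the average signal lands at half of full
    range (2047.5 counts for 12 bits), then cap it at the blur limit."""
    target = full_range / 2.0
    scaled = current_exposure_s * target / mean_signal
    return min(scaled, blur_limit_s)

# Hypothetical numbers: a camera with 0.12 degrees per pixel on a probe
# spinning at 6 degrees per second can expose for at most 20 milliseconds.
blur_limit = max_unblurred_exposure(0.12, 6.0)
```

Whichever limit is smaller wins: in bright scenes the half-range rule shortens the exposure first; in dim scenes the spin blur limit caps it.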
Why didn’t you include mirrors in order to widen surface imaging, as was planned on early lunar probes? -Bernard Bouquin
The probe mission was an atmospheric descent mission; surface operations were a possible added bonus. We were quite limited in total mass and power consumption, as well as in funds to build and test. Plus, mirrors on the Moon operate in a vacuum at close to Earth temperatures. On Titan, they would have to work in a thick atmosphere, at nearly liquid-nitrogen temperatures, after 7 years in vacuum. We decided to build the entire instrument with only one moving part to reduce the chances of failure under such difficult conditions. We count ourselves very lucky to have ANY surface pictures, although we of course would not refuse the extra images a mirror might add, if we could have added it for free after the probe safely landed without it :-)!
What are the dimensions in width and height in pixels of the imaging surface of the CCD, for example, 150 x 300 pixels? It would be nice to have the FoV angles as well. Could you please send me the values or direct me to the site where detailed engineering information is given? -Christopher Batory
The active area of the CCD is 256 by 524 pixels. Each camera has approximately a 15 degree wide FOV, so that 12 properly timed triplets combined will make a full panorama. The SLIs don’t quite meet, and the HRIs have LOTS of overlap. The SLI ranges from approximately 7 degrees above the horizon to 43 degrees below it, the MRI from 43 degrees below to 73 degrees down, and the HRI from 67 degrees down to 82 degrees (nadir angle of 8 degrees). We have EXTREMELY detailed calibration reports on every aspect of the imaging system, as well as the other parts of DISR. However, each instrument team is granted proprietary access to their instrument’s data for 18 months after the descent. After that time is up, we will release both all of our raw data and all of the detailed calibration reports. They will be released to the public and maintained by NASA in the Planetary Data System.
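Those vertical ranges can be checked with a small sketch. The depression angles below the horizon are taken from the answer above (negative means above the horizon); the function and names are mine, for illustration only.

```python
# Depression angle below the horizon, in degrees, for each camera,
# as quoted above (SLI = side-looking, MRI = medium-resolution,
# HRI = high-resolution imager).
CAMERAS = {
    'SLI': (-7.0, 43.0),
    'MRI': (43.0, 73.0),
    'HRI': (67.0, 82.0),
}

def total_coverage(cameras):
    """Merge the per-camera depression-angle ranges and return the union
    span plus the overlap (degrees) with each previous span -- a quick
    sanity check that the three fields of view tile the vertical strip."""
    spans = sorted(cameras.values())
    lo, hi = spans[0]
    overlaps = []
    for a, b in spans[1:]:
        overlaps.append(max(0.0, hi - a))
        hi = max(hi, b)
    return (lo, hi), overlaps
```

The check confirms the quoted geometry: continuous coverage from 7 degrees above the horizon down to 82 degrees below, with the SLI and MRI just meeting and the MRI and HRI overlapping by 6 degrees.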
What color is the surface science lamp? If it is not orange, why did we not see any colors other than orange on the surface? -Gene Cotton
The surface science lamp is a standard tungsten lamp, run slightly below rated power to increase its durability. The reason the surface appears orange is not due to the surface science lamp at all, but due to the absorption of blue light by the atmosphere and the reflected color of the ‘soil’. If you were able to stand on Titan’s surface, it would look roughly that way to your naked eye (until you froze to death :-)!). The colored images of the surface have already had the color of the lamp factored out. It is because we work hard to remove such confusing factors with our calibration data that it takes us longer to release new images than amateurs on the net. But ours have to be scientifically accurate, and that takes time and effort.
One of the Q&As was "Did we use any SCHOTT optical glasses or filter glasses in our instrument?" Would one of those happen to be a Schott VERIL variable linear interference filter? -Karl Dube
I don’t know. We contracted Martin Marietta to manufacture the flight instrument, and they would have handled that kind of detail.
Is there only one image of the surface of Titan? Why don't we see more images as the probe gets closer (feet, or yards) to the surface? It seems to stop at about five miles up until we get to the ground, and then there is only one image of the surface. -Neil Robinson
During the last moments of the descent, we only took new images when the telemetry buffer was empty. That way, if the probe was destroyed on impact, we would not lose the lowest, most interesting data. Since the telemetry rate is very low, that means a long time between images, and we were taking spectral data as well. In hindsight, the probe survived impact just fine, and we could have loaded up the telemetry buffer with interesting pictures and spectra. But we had no way of knowing that in advance. There is only one image of the post-impact surface because the probe was no longer moving or rotating, and our cameras are fixed. See above for why we chose to do it that way.
Was there ever any concern that very little visible light would reach Titan’s surface through the atmosphere? -Mike Schultz
We had very detailed models of how bright the light would be on Titan’s surface. While it would be very dark, it would still be far brighter than at night under a full moon on Earth. The cameras were specifically designed to work well under those lighting conditions. It turned out that we had plenty of light, just more haze than we expected!
Why are the mosaic pictures delayed so much? When will more images be released? -Per Torphammar
There are many steps to each stage of improving the panoramic images, and each takes more time the harder we try to improve the images. Without the full set of images and without the azimuth angle information from the sun sensor, even piecing the images together accurately is difficult. The many images seen by amateurs on the net are often stitched together in Photoshop or other software. This treats each little piece as a ‘rubber’ sheet, and stretches each to get them to fit. That is not scientifically accurate, and we do not distort ANY image piece except as the geometric projection techniques require. To make the images still fit and look good takes quite a bit of tweaking of the ‘free parameters’, namely the height, latitude, longitude, azimuth, roll, and tip of the spacecraft at the moment each image was taken. Also, the wind slowed down just as we descended far enough through the haze to clearly see the surface, so all the clear pans are of the same area.
We have 3 new panoramas appearing in the science journal Nature sometime this summer, and 2 new images in the April 30th issue of Science News. Others will take more time to make significant enhancements over what we have already released.
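The geometric projection idea mentioned above can be shown at its very simplest: assuming a flat surface, a line of sight at a given nadir angle from altitude h strikes the ground at range h·tan(nadir). This is only a toy sketch under that flat-terrain assumption; the real processing also fits the latitude, longitude, roll, and tip free parameters, and the function name is mine, not DISR's.

```python
import math

def pixel_to_ground(altitude_m, nadir_deg, azimuth_deg):
    """Project a camera line of sight onto a flat surface.

    Given the probe altitude and a pixel's nadir and azimuth angles
    (after the probe's roll and tip have been folded in), the line of
    sight hits the ground at range altitude * tan(nadir); azimuth then
    splits that range into east and north components.
    """
    r = altitude_m * math.tan(math.radians(nadir_deg))
    x = r * math.sin(math.radians(azimuth_deg))  # east offset, meters
    y = r * math.cos(math.radians(azimuth_deg))  # north offset, meters
    return x, y
```

For example, from 1000 m altitude a pixel looking 45 degrees off nadir toward the north lands about 1000 m north of the probe's ground track; unlike rubber-sheet stitching, every pixel of every image piece is placed by this kind of geometry.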
Why do the same images seem to repeat dozens of times in the raw images? -Fred Wolke
When the channel A receiver was not powered up on Cassini, we lost half the images. When the raw image data was converted to triplets, the program did not expect half the images to be missing, so it did not blank out each part of the triplet before overwriting it with the new image. Thus, everywhere an image was missing, the old image was left in place, so that it looked like it repeated. In the actual science data images, the only repeats are the post-impact surface images, and that is because we are looking at exactly the same thing over and over again, so they SHOULD be nearly identical. We apologize for the confusion, but we were just a bit busy in Darmstadt trying to make sense of the data we had, despite the loss of half the images!
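The stale-buffer effect described above is easy to reconstruct in a sketch. This is a hypothetical reconstruction of the logic, not the actual DISR ground software; the fix is simply to blank every slot before writing the new frames in.

```python
BLANK = None  # placeholder for a camera frame that was lost in telemetry

def update_triplet(triplet, new_frames):
    """Refresh a triplet buffer with newly received frames.

    triplet: dict mapping camera name -> last image (or BLANK)
    new_frames: dict mapping camera name -> image; cameras whose frame
    was lost (e.g. on the unpowered channel A receiver) are absent.

    Assigning every slot, rather than only the slots that arrived,
    guarantees a lost frame shows up as BLANK instead of silently
    leaving the previous image in place and appearing to repeat.
    """
    for cam in ('SLI', 'MRI', 'HRI'):
        triplet[cam] = new_frames.get(cam, BLANK)
    return triplet
```

With the original skip-if-missing behavior, a triplet whose MRI frame was lost would still show the previous MRI image; with this version, the slot is blanked instead.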
Ask an Expert Featuring Lyn Doose | Ask an Expert Featuring Mike Bushroe, 1st Edition | Ask an Expert Main