NASA's Universe of Learning Science Briefing: Visualizing the Universe with NASA Data

Moderator: Jeff Nee

April 12, 2018

Jeff Nee: Wonderful. Hello everybody, happy Yuri Gagarin day. I'd like to welcome you to this Universe of Learning telecon hosted by the NASA Museum Alliance. Thanks to all of you for joining us and to anyone listening to the recording in the future. Today we're talking about Visualizing the Universe with NASA Data. As always if you have any issues or questions now or in the future, you can email Jeff Nee of the Museum Alliance at

You can read the full bios of all of our speakers on the web sites. But as a brief introduction our facilitator today is Dr. Emma Marcucci who is an Education and Outreach Scientist from the Space Telescope Science Institute. Emma it's all you.

Emma Marcucci: Thanks Jeff. And thanks everyone for being on the call today.

So we're going to jump right in. Starting on Slide 2, these are our resources. As always, they are collected on a NASA Wavelength list. We have our resources today roughly broken into three different categories. We have some examples of hands-on activities, some information about products and process, as well as some databases that you may find helpful. And several of these you will hear about during our talks today.

So if we move onto Slide 3, as Jeff mentioned this is going to be a panel style talk. Each of our speakers will have about 10 to 15 minutes to share their experiences and work with visualizing the universe.

Go to Slide 4, we'll get started with our first speaker. I will briefly introduce each speaker. But please see their full bios on the web site.

So our first speaker is Kim Arcand. She is the Visualization Lead for NASA's Chandra X-ray Observatory, which is headquartered at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts. She is a leading expert in studying the perception and comprehension of high energy data visualization across the novice-expert spectrum. Kim, please take it away.

Kim Arcand: Thanks so much Emma. I'm going to start us on Slide 5. So, as Emma mentioned I work for the Chandra X-ray Observatory as a Visualization Lead. And I've been there almost 20 years. I'm very excited to celebrate my 20th anniversary actually in just about another month or so.

So in that many years I've had the pleasure of working with a lot of X-ray data of course but also if you look on Slide 6, I do like to spend just a minute talking about different kinds of light across the electromagnetic spectrum because it's just incredibly important to how we understand the universe and how we visualize the universe.

For me, every different kind of light essentially offers another tool in the astronomer's toolkit, if you will. And the same is essentially true for the data visualizer's toolkit.

So the image that we're looking at on Slide 6 is an image from NASA's Solar Dynamics Observatory. And it's just showing some of the different kinds of light that our sun emits. I just think it's a lovely image, so I like to use it for the electromagnetic spectrum.

Kim Arcand: So if we're looking at Slide 7, we are looking at one of my favorite objects in the universe. This is called Cassiopeia A, a really lovely supernova remnant and of course a very famous one as well.

The handy nature of this being so famous is that a lot of telescopes have observed it for quite a bit of time. The image that we're looking at here is from the Hubble Space Telescope. And it's showing some really amazing filamentary structure at around the 10,000-degree mark.

Now we can look at that same patch of sky, that same field of view at X-ray wavelengths. And if you go to Slide 8 you can see you're getting a pretty different picture now of what the supernova remnant Cassiopeia A looks like.

And to me this picture really shows how stuff can sort of come alive. The beautiful sort of star remains of this stellar gas I think is just a really lovely way to look at it.

If we look on Slide 9, we can see what happens when we combine these images together, the optical field from Hubble as well as the X-ray field from Chandra, and get a little bit better understanding of this object's place in the universe.

I'm going to come back towards the end of my piece to why we would include optical and X-ray data in this image, both aesthetically and also contextually, but…

Kim Arcand: So on Slide 10, this is kind of a better picture of what the data looks like before we've done too much processing to it.

But if you look at Slide 11 you can actually see down in the lower right corner, that's a little bit closer still to what the raw data looks like. But then of course if we go to Slide 12, you can see this is really what the data starts off looking like.

So I'm going to talk very briefly about how we do this kind of transformation or translation of data from the ones and zeros, the binary code all the way to the processed image at the end.

So if you're looking at Slide 13, essentially this is just showing the data path. So there is this really amazing object in space, Cassiopeia A. It's about 10,000 lightyears away from us, so about 10,000 times 10 trillion kilometers. I'm really rounding numbers just to make the math simpler.

Kim Arcand: So the light from Cassiopeia A has been traveling to, in this case, the Chandra X-ray Observatory for a long time, that 10,000 years. And it's essentially observed by the scientific equipment. It comes down in the form of ones and zeroes, binary code, and is then sent through NASA's Deep Space Network before eventually it makes its way to my laptop or one of my teammates' laptops over here in Cambridge, Massachusetts.

So gathering these photons is a really important part of the whole image creation process of course. But the following steps that come after that are also really important.

So if you look at Slide 14, after we get the really raw data, the binary code, it's processed through some scientific software to create a table. And the table just shows the time, the location in X and Y coordinates, and the energy of each photon, each packet of energy that struck the detector during the observation.

And then after we do some more translation through additional scientific software if you look on Slide 15, we get the digital representation of the object.
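The step Kim describes here, going from a table of photon events to a digital image, can be sketched in a few lines. This is a hedged toy illustration, not the actual Chandra software; the event values and bin sizes are invented for the example:

```python
import numpy as np

# Toy photon event list standing in for a Chandra observation table:
# each row is (time, x, y, energy). Real event files are FITS tables;
# all values here are invented for illustration.
events = np.array([
    [100.0, 2.1, 3.7, 1.2],
    [101.5, 2.3, 3.6, 4.8],
    [103.2, 7.9, 1.1, 0.9],
    [104.8, 2.2, 3.9, 2.5],
])

# Bin the (x, y) detector positions into a 2D image: each pixel simply
# counts how many photons struck that spot during the observation.
image, _, _ = np.histogram2d(
    events[:, 1], events[:, 2],
    bins=8, range=[[0, 8], [0, 8]],
)

print(image.sum())  # 4.0: total counts equals the number of photons
```

A real pipeline would also apply the calibration, smoothing, and artifact removal Kim mentions, but the core idea is this photon-counting histogram.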

So right now we're looking at Cas A in two dimensions. And we're seeing it kind of raw. It's been processed a little bit: a tiny bit of smoothing has been done, a tiny bit of artifact removal has been done, a little bit of scaling has been done. But no color has been applied yet.

I like to think of color as sort of the finishing touch in any data visualization process like that.

So for example on Slide 16 you can see that in this case the color that we've applied is by energy cut. So the highest energy X-rays are assigned to blue. On Slide 17 you can see that the medium energy X-rays are assigned to green. On Slide 18 you can see that the lowest energy X-rays are assigned to red.

So this coloring, this chromatic ordering of the color, actually provides more information for the image. And if you look on Slide 19 you can see a version of the image combining the three colors by the energy color code that I just showed.

Additionally, we've been looking at Cassiopeia A with Chandra for a really long time. So we've got data spanning more than 17 years now, about 2 million seconds worth of data to actually work with.

So we can combine this data essentially and see it moving over time. And the URL on this slide just points to the time-lapse movie; you can bring that up separately in a browser either as I'm talking or after I'm finished talking.

But that movie shows just a few different epochs from the Chandra observations taken in 2000, 2002, 2004 and 2007.

And that ability to look at things over time is really useful. I think it's one of the benefits of having a mature mission like Chandra that's been around for almost two decades.

You know, when we look at how these objects change over time, for one, I think it's really useful to show non-experts that the universe is ever-changing. I think there's this idea that these images that we produce and send out into the big wide world are these giant cosmic selfies, that they're photographs of objects, right, and that they're really not changing all that much.

What I like about being able to show things like time-lapse movies is that you can see with your own eyes that these objects are changing.

So for me astronomy is kind of essentially the ultimate time-traveling gig, albeit from the safety of your desktop. You can peer back in the universe and see these objects as they were a long time ago.

And if you're lucky enough to get the data over a number of observations you can see a change.

If you're looking at the video of the time-lapse now, you can see that the outer shock wave, that blue ring around the rim, is moving outward at around 11 million miles per hour. So not only is it useful to understand that the object is changing, that the universe is dynamic, but you can also of course learn something from it as well.

So we have two-dimensional images. We have two-dimensional images changing over time. And then if you look on Slide 20 you can see a still from another video if you go to that URL. You can play a very simple movie that shows this object in 3-dimensions.

Now this was a really cool project to work on. It was the first time we'd ever successfully created a three-dimensional reconstruction of a supernova remnant in this type of software. We actually used something called 3D Slicer/Astronomical Medicine to create this 3D version, because at the time, this was back in 2009, we didn't have our own astronomy-based software to do the modeling in, which is why some of the textures of this 3D object look a little strange. It's a brain imaging software, so you can see why this looks a little bit brain-like.

And the way this 3D object was created was essentially by using the Doppler Effect and I think a lot of people are familiar with the Doppler Effect. But even if not I think it's a relatively straightforward concept to be able to talk about.

So using the Doppler Effect and understanding the basic geometry of the object, that this material is emanating from a very specific point, we could essentially do this mapping into three dimensions.
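As a rough illustration of that Doppler-based mapping (simplified physics only, not the team's actual reconstruction pipeline; the wavelengths below are hypothetical numbers): for material flying freely outward from a single explosion point, distance is just velocity times age, so the line-of-sight velocity from a spectral line's Doppler shift supplies the depth coordinate to pair with the 2D position on the sky.

```python
C_KM_S = 299_792.458  # speed of light in km/s

def line_of_sight_velocity(observed_wavelength, rest_wavelength):
    """Radial velocity in km/s from a line's Doppler shift.
    Positive means the material is moving away from us (redshifted)."""
    return C_KM_S * (observed_wavelength - rest_wavelength) / rest_wavelength

def depth_km(v_los_km_s, age_seconds):
    """Depth relative to the explosion center, assuming free expansion."""
    return v_los_km_s * age_seconds

# Hypothetical observation: a knot whose 6563-angstrom line shows up at 6607.
v = line_of_sight_velocity(6607.0, 6563.0)
print(round(v))  # 2010 (km/s, receding)
```

Doing this for many knots across the remnant, each with its own sky position and velocity, builds up the 3D point cloud that the modeling software then surfaces.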

And the color codes show where the iron is, where the silicon is, where the argon is, for example.

So again, for an object that's over 10,000 lightyears away, being able to see around it and through it and above it and below it is a really amazing thing for me to have experienced.

On Slide 21 there's another movie if you want to take a look at that version. For that version, we took a wire frame of the data and essentially brought it into a Hollywood animation package, so we had more control over the colorization and texturization for that 3D model.

So you can see that we applied colors that were a little bit more aesthetically pleasing and also textures to make it look a little bit more nebulous like it is.

But this idea of translating our information into various forms kind of just kept us thinking, what else can we do with this object?

So if you look on Slide 22 you can see that eventually we learned how to 3D print it. And I don't know if any of you have 3D printers at your venues. But for us, having a tactile version of the data has been incredibly important and has launched essentially a new area of our communications and outreach skillset, so that we can offer these tactile, touchable versions of our data for many different kinds of audiences, different kinds of learners, experts and non-experts, sighted and non-sighted individuals.

And it's been a really powerful experience to be able to offer our data in a truly accessible way to a much, much larger portion of the population.

You can see on Slide 23, my latest toy is that we've actually been able to figure out how to 3D print the same object in three colors. So now we have the tactile information, but we're also not losing the color-based information that tells you where the elements are located in the remnant. And that's been a lot of fun too.

And I put up a URL that goes to the 3D print file page for it, because these files are free for anybody to print themselves.

So looking at Slide 24, and I realize I have too many slides so I'll try to go faster, we did a virtual reality version of this recently; there's an Oculus Rift and also a Google Cardboard version. So if you're starting to experiment with immersive technologies, we can provide that for you as well.

But essentially there are all of these versions of Cassiopeia A that I've shown you, and some on this list that I haven't shown you. All of that came from that first original batch of ones and zeroes, a lot of ones and zeroes; it's just a process of data translation into various kinds of forms.

If you look on Slide 26, just to show this is not a one-trick pony, we've done similar processes of 2D imaging, 2D over time, 3D, and also 3D over time for Supernova 1987A, which is another really lovely supernova remnant to be able to talk about.

And what you can see on Slide 27 is the 3D print in progress of that object as well.

So just very quickly I'll talk about some of the psychology of how we approach creating these images and other visual representations of these objects. We've done a lot of work in understanding the psychology behind them in a program called Aesthetics and Astronomy.

And that essentially helps us understand how both experts and non-experts look at, perceive, and comprehend this data.

So a key factor for us is that when an expert looks at the image, he or she sees the questions first: what is it, what data is included, what is the scientist trying to say?

And then eventually moves to the -- oh wow, that's pretty good-looking, or whatever the case may be.

But a non-expert does the reverse. They start with, wow, beautiful, or, you know, really lovely, or cool. And then eventually it's, what does that mean, or perhaps how big is it, or what does a scientist see when he or she looks at it? So understanding that experts and non-experts move in opposite directions, from the actual data questions to awe and wonder and vice versa, is useful for us to know.

Color is another really important thing. I think Robert's talking about color so I won't spend too long. But the idea of what kind of colors to use is really important, because color has power and a sort of contextual value for a lot of people.

So if we're assigning an object red colors, that offers a specific storyline immediately. Most non-experts see red as hot, so if you're presenting an object like what we're looking at now on Slide 29, that's NGC 4696, an elliptical galaxy with a supermassive black hole in that central white region. It's emitting all of this super high energy material around it, and it's a pretty darn hot thing.

So for us, using the red color made sense. If you look at Slide 30 you can see the blue version that we started out with originally. Now the physics story would probably dictate that the blue version should be the one we show, because in physics blue is really hot, but for non-experts it's red. You don't hear a parent saying to his or her child, don't touch that stove, it's blue hot. Unless maybe you're married to a physicist.

But the idea essentially is that color has power. And that's something that we try to consider when we are thinking of these data representations.

If you look at Slide 31, scale is also incredibly important. How people look at images is actually impacted by whether there's a sense of scale or a reference or not. Somebody can look at an image at the Museum of Art in New York City and spend about 17 seconds in front of it. Looking at an astronomical image on a computer, in our studies, has taken about the same amount of time, though obviously in any online study there's a lot of variability.

But then as soon as we show some sort of sense of scale, the time that they spend increases by up to 50%, and as soon as you add a really cool caption or descriptor, that doubles it as well.

Slide 32, you may ask why I am showing a picture of this older lady. Well, the point is context. If I was in a museum I would look at this image and think, well, I don't know her, I don't know the painter. But there's a fantastic backstory about her, which you can read in my notes if you're interested.

But essentially she had a fantastic history and was really well loved by her nephew, the artist who painted it.

And if you look on Slide 33 you'll see that that story just changes the context and can make it more interesting.

If you look on Slide 34, we can do the same thing with astronomical images. So I can talk about this image being Saturn in mostly ultraviolet light.

But a lot of people are highly interested in the fact that at around 10 o'clock there's a little pixel that represents Earth. So being able to provide relevance, being able to provide information about this image that helps people connect to it, not only helps comprehension but aesthetic enjoyment as well.

So, Slide 35, I'll stop here. There are some references you can peruse if you're interested in learning a little bit more, and hopefully I didn't go through this too fast for everybody.

Emma Marcucci: Great, thanks so much Kim. So, we will move onto our second speaker. Dr. Robert Hurt is the Visualization Scientist for NASA's Spitzer Space Telescope at the Spitzer Science Center at Caltech and has worked on the visual public identity of a number of other NASA missions. He is closely involved with the astronomy visualization community and has led efforts to develop metadata standards and best practices in the field.

So we'll start on Slide 36 and Robert please take it away.

Robert Hurt: Thank you. And so I too, like Kim, have a lot of slides, so let's be prepared to flip through them quickly as I sift through a lot of ideas here.

So I'm just talking about two aspects of this, both starting with the data: whether we end up referencing the data directly, or whether we create other, more artistically oriented visual stories to help tell the science.

So on Slide 37 I ask the question, can the data speak for itself?

And sometimes the answer is a resounding yes. You look at these images and there are stories, and those stories can lead people through the actual science results.

So what we need to do sometimes is just look at the data and try to extract the most interesting visual information out of that.

And talking on some topics parallel to what Kim's already covered, if you go to Slide 38, one of the first things that we've got to do is consider exposure. Astronomical data is intrinsically what a photographer would call a high dynamic range image. We have a tremendous range of brightness represented, from incredibly faint filamentary structure in a galaxy to the deep bright cores.

So one of the games we have to play as visualization specialists is to understand what is the right exposure range to bring out all of the structure we want to see, but maybe not to overemphasize artifacts or things that we're not interested in.
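One common way to play that exposure game, though not necessarily the speaker's exact method, is a non-linear stretch such as asinh: roughly linear for faint pixels and logarithmic for bright ones, so faint filaments and bright cores can share one displayable image. The pixel values and softening parameter below are made up for illustration:

```python
import numpy as np

def asinh_stretch(data, soften=10.0):
    """Map raw values to 0-1 display values with an asinh curve.
    Small values stay roughly linear; large values are compressed."""
    scaled = np.arcsinh(data / soften)
    return scaled / scaled.max()

raw = np.array([1.0, 10.0, 100.0, 10_000.0])  # toy high dynamic range pixels
display = asinh_stretch(raw)
print(display.max())  # 1.0: the brightest pixel, with the faint ones intact
```

A plain linear scaling would push the three faint pixels here to nearly zero; the stretch keeps them visible while the bright core still tops out at full brightness.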

And then by taking different channels of data pulled from different parts of the spectrum we can construct a color image.

But this brings me to a few points I'd like to make, a little bit of a soapbox I have on Slide 39. I want to call out a couple of really bad words that we kind of hate to use for this process. One is false color, the other is colorize, right? False color is a term that had a purpose and history: it comes from an era when almost every photo you ever saw was a photo representing the way light hits the eye. People needed a way of calling out images that had been processed in some other way.

And so the term false color naturally came about as a way of distinguishing those images. But it carries with it a horrible connotation that we are doing something wrong with the imagery and that somehow these colors aren't real.

Likewise the word colorize is very deeply associated I think with taking a black and white picture and arbitrarily choosing colors to apply to it with no motivation other than artistic inspiration.

Instead I like to divert people to what we consider good words, which might be phrases like representative color, colors that are representing light from other parts of the spectrum, or one I particularly enjoy, translated color. The idea is that color is real; color exists across the whole spectrum. Color is just slight variations in relative brightness from one band to another.

But most of the spectrum of light is at wavelengths our eyes can't see. So we want to take those colors that are real at other parts of the spectrum and translate them into the red, green, and blue that our eyes can see. That's my big bailiwick: colors aren't false, they're only misunderstood.

Going onto Slide 40, one of the tricks you can use to really better understand color is to actually stop and ask -- how is color being represented in the image that you're looking at.

So in this case I'm showing a picture of the Pinwheel Galaxy represented in infrared light that has been translated into red, green, blue.

And by having a little lexicon on the side, if you will, that helps guide the eye on how these colors have been interpreted, I think it can help people understand what they're looking at.

And if you go to the next slide, Slide 41, I have the same galaxy but a very different rendering of it. And this is one that actually spans the spectrum from X-rays through ultraviolet, visible, and infrared. Again, color in every astronomical image you see can mean something different.

And part of the job, I think, of anyone who translates images for the public and communicates astronomy to the public is to try to understand how color has been used, and to make that part of the narrative.

So we do have a web site that helps with this, and I believe it's been mentioned a number of times on these calls in the past. The AstroPix site is a collaboration between those of us on the call and many other institutions, where we've been trying to basically pull together all of the institutional imagery from telescopes like Chandra and Spitzer and Hubble, and put it into one place you can search. But more importantly, it brings together a lot of contextual information on those images, including the color mapping.

And you'll notice that for the images that have that metadata supplied, on the lower right-hand side there you see a color mapping widget that does pretty much what my previous slides did. It actually indicates which colors in the image correspond to which parts of the spectrum.

And so on a site like AstroPix you can do this for many, many images, and I think it really helps people understand the interpretation of what color means in that particular case.

So the next slide, Slide 43, asks the question: what if the data cannot speak for itself?

Going to Slide 44, I think here's a nice example. There's a data plot straight out of the paper, which doesn't tell much of a story. Even if you're a scientist, you might recognize these are two blackbody curves, but what it means is not evident by looking at the graphic.

But if you go to Slide 45, here's a different way of taking that same data and reinterpreting it, with imagery that reinforces the data plot and guides the eye on how you might interpret and understand it, like -- oh, there are two different things contributing to these new data points.

And there are ways, I think, we can motivate exposing people to the real data plots, the real science, but by using illustration and context we can have even those basic data plots tell more of the story.

Going onto Slide 46, here's another example, in this case an infrared monitoring of a young protostar. This is basically the brightness of this young stellar system over the course of many months, with a clear but not very elegant way of explaining what was interesting in this plot: the fact that after a gap in observation, the brightness of the system increased noticeably and went through a lot of weird variation. That was interpreted to mean that there was a collision event.

But again, going to Slide 47, we see that we can take that same kind of data, and with an illustration that speaks to that point, that shows the asteroids in some calamitous event that has created rubble moving through the system, you can actually connect people using the original data but with more of a visual narrative to help enhance and tell that story.

On Slide 48 we go onto exoplanets. So this is something that needs a lot of hand-holding, I think, for the public to interpret the science, because the science is incredibly visually uninteresting, right? When we study exoplanets, and particularly I'm going to speak now to things we've studied through the transit method, all we're looking at is a blurry image of a typically very faint star that barely wiggles in brightness at the, you know, 1% or less than 1% level over time.

But from those really minor, tiny little tweaks in light we can actually directly figure out things like the type of star, the orbital period of the planet, the orbital distance from the star, the size, and in some cases the planet's mass, density, and maybe even characteristics of its atmosphere. But those are things that don't stand out in the data.
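A couple of those quantities fall out of textbook relations. This sketch uses standard formulas with invented inputs, not any mission's actual pipeline:

```python
import math

SUN_RADIUS_KM = 696_000.0
EARTH_RADIUS_KM = 6_371.0

def planet_radius_km(depth, star_radius_km):
    """Transit depth is (Rp/Rs)^2, so Rp = Rs * sqrt(depth)."""
    return star_radius_km * math.sqrt(depth)

def orbital_distance_au(period_years, star_mass_solar):
    """Kepler's third law: a^3 = M * P^2, with a in AU, P in years,
    and M in solar masses."""
    return (star_mass_solar * period_years ** 2) ** (1.0 / 3.0)

# A 1% dip around a Sun-like star implies a roughly Jupiter-sized planet.
rp = planet_radius_km(0.01, SUN_RADIUS_KM)
print(round(rp / EARTH_RADIUS_KM, 1))  # 10.9 (Earth radii)

# An Earth-like one-year period around one solar mass sits at 1 AU.
print(orbital_distance_au(1.0, 1.0))  # 1.0
```

So the size comes straight from how deep the dip is, and the orbital distance from how often it repeats, which is why so much can be inferred from such visually unremarkable data.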

But going onto Slide 49, here are a couple of sketches of how we take exoplanet results and then try to create artwork, and it helps. Whenever we approach doing artwork to illustrate science, it really starts with a science bullet list: the list of ideas that we want the art to represent, so that even without knowing it, when you glance at the art you might take away some core ideas we want people to understand.

It works on two levels. One, to engage people, to be pretty enough that they want to know what that picture is telling them, but also, after having looked at the picture, to leave them armed with a little bit of the information. Whether it's a tabletop rendering of a planet next to the Earth, which establishes immediately its relative scale, that its composition is much gassier, and that it has a higher temperature; or, even more subtly in the case of planets that may be within habitable zones, showing the presence of water on the surface to indicate habitable zone location, or using an icecap to indicate relative temperature. We can do things like represent the actual color temperature of the star to get some of those ideas across, or show that, in this case, there are a lot more planets in the system.

But of course my favorite exoplanet system, on Slide 50, is TRAPPIST-1. And this is a remarkable system. We actually know more now about the TRAPPIST-1 planetary system than any other exoplanet system, or planetary system beyond our own of course. And this is because of the sort of remarkable data that comes from the system, shown in Slide 51.

Now again, this data is remarkable. But it doesn't tell a clear story. Another astronomer might be able to glance at this and realize they're looking at a time sequence, but they won't know the interpretation.

That's why we turn to art on Slide 52. Because what gets you on the cover of Nature to tell the story, or even better, above the fold in the New York Times, is to have illustrations that speak to the data but frame out of it the story that we're trying to tell: a collection of seven Earth-sized planets all orbiting a relatively cool, orangish star.

And going onto Slide 53, just a little bit of what goes into the work of the art. Sometimes artistic inspiration can come from weird places. In the case of TRAPPIST-1 I had this idea that I wanted to show, again, marbles on a tabletop as a way of abstractly representing relative sizes and some of the surface properties of the planets.

But I also wanted to get across the idea of the habitable zone in a way that was a little more visually interesting than just laying down the green band like we tend to do in schematic diagrams. And that led me to throw a bunch of my marbles on the floor in the living room and play around.

So I found something that ended up being very, very close to the image that we put together for Nature, which included elements like showing how water turns to steam if you're too close to the star, or frost if you're too far away from the star.

But in the middle there's a point at which it transitions to a liquid form. It's a way of talking about habitability without overselling any particular issue on any of the planets.

Slide 54 just gives a little view from the inside on how we go about rendering some of the data. This is a procedurally generated planet surface, and this is the shader network that we used in the LightWave 3D program to create the look for this particular planet.

Of course, on Slide 55, this is the lineup that we had back in February 2017. Anytime we do art that's inspired by the science, we attempt for each planet to come up with a plausible visual model that is consistent with what we knew at that time, and that may change over time as the science evolves.

And in fact the masses that we determined in 2017 were actually still very uncertain. They had fairly large error bars.

By 2018 if you go to Slide 56, we actually know these masses a lot more precisely which put much tighter constraints on what would be plausible models for each of these planets.

So as a result, we actually ended up revisiting and reworking the art for all of the worlds. In fact, in 2017 the one that we thought was probably most likely rocky and might have some liquid water on the surface ends up being the one that probably has maybe up to 5% of its mass in the form of water, and it became the water world.

And likewise one we thought was the water world is the one that really seems to be the closer analog to Earth.

Slide 57 shows the view from the surface of TRAPPIST-1e looking back toward the inner system, capturing again the idea of the warm star, the possibility of liquid water, and in this case the amazing idea of just how large your neighboring planets would be in the sky, in a system where your star is barely as big as Jupiter and the spacing between the planets is more akin to the spacing between the Jovian moons.

For TRAPPIST-1, on Slide 58, of course there are a lot of visual resources. On the Spitzer side, I've given you a link where we are collecting all the major ones that we've got through JPL and our partners.

And of course, what's good about the art is that everyone can kind of play that game, right? ESO has done this. JPL has done pieces, the Exoplanet Travel Posters. The beauty of this is that everyone can take the same data and find different, consistent ways of telling that story, and it can make for a very rich experience, I think, and one worth exploring.

Of course, Slide 59 throws out the single word fun, and that's because sometimes we do want to have a little fun when we're trying to tell these stories, and we want to talk about things that lead into maybe a broader narrative than just the technical details of a particular result.

And I have a couple of examples of that on Slide 60, where basically sometimes scientists do like to play with their data, and sometimes as visualizers we like to play with the scientists playing with their data: whether it's doing a story about a researcher who tries to study planets in the laboratory by doing experiments on gases, or doing theoretical models, representing that as holding a hot beaker over a Bunsen burner; or people searching deep fields of data to find interesting galaxies, represented by fishermen working on a deep galaxy fishing expedition.

But the fun part about the art side is that you don't have to be an astronomer to do it.

On my last Slide 61, my kind of parting theme thought for the group is that science-motivated art is a game that anyone can play.

And I think it would be great to encourage everyone from kids to adults to learn everything they can about some interesting piece of science and make it into their own art. And then the art is an awesome narrative tool for telling the story to other people, and why choices were made the way they were. So I toss that out for everyone. Go tell your science with art.

Emma Marcucci: All right, well thank you so much Robert. That was great. Our last speaker today is Dr. Frank Summers. He is an “astrovizicist” at the Space Telescope Science Institute contributing to the Hubble and James Webb Space Telescopes. His specialty is creating accurate and aesthetic scientific visualizations by combining research data and computer simulations with Hollywood rendering techniques, which has led to his involvement in several IMAX films.

And when not attempting to reimagine the universe he is likely to be found at the disc golf course, so starting on Slide 62, Frank take it away.

Frank Summers: Afternoon everybody, glad to be back here. Let's see, on 62 you see me as an astrophysicist on the red carpet. I am actually at the theater where they give away the Academy Awards doing a NASA at the Movies presentation. The woman on my arm is Toni Myers, the Director of IMAX Hubble. And she and I were there to talk about creation of IMAX Hubble.

And so to echo what Robert just said about anyone can do STEAM it's also true that if you have a Ph.D. in astronomy you can also do some art stuff too. So it goes both ways.

All right, let's go to Slide 63. I'm going to take you through the pipeline of creating cinematic scientific visualizations and just give you the sort of feel of just how much depth you need to go to. We're going to start with the data. And this is what we did for this IMAX short film, Hubble Galaxies Across Space and Time.

And in this film we took you 10 billion lightyears into the universe using the Great Observatories Origins Deep Survey.

Next Slide 64, this is the actual image. It's 627 megapixels. Yes, that's why we did an IMAX film, because we have so much data. At the time we did this, 2003 I think it was, just handling a 600-million-pixel image was very difficult for Photoshop and other tools. I actually had to do a lot of work in scientific software to handle it, because we had to cut it up into these 15 pieces. It's a mosaic of a whole slew of observations from Hubble.

And so just handling the data because it's so large was difficult back then. It's much easier these days.

But on the next Slide 65, you'll see one of the major problems we had to deal with is that the top image is the press release image as processed. And the bottom left image we brought up the gamma on the image to show you the faint details.

And the faint details show off all of the mosaicking artifacts in the image. This is the sort of stuff that, you know, astronomers really try to get all the detail out of their images, and they don't care about ugliness like that. But if you're doing an IMAX film you need a high dynamic range for the images.

And we had to go in, and in the lower right you can see how we cleaned that up at this high gamma contrast, and then when we brought it back down to normal it doesn't appear.
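Frank's gamma trick, brightening an image to expose faint mosaic seams, can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not the actual STScI pipeline: raising gamma lifts the dark end of the normalized range, so low-level artifacts that are invisible at normal contrast stand out.

```python
import numpy as np

def gamma_stretch(image, gamma=2.0):
    """Brighten faint detail: normalize to [0, 1], then raise to 1/gamma.

    gamma > 1 lifts the dark end of the range, which is exactly what
    exposes faint mosaic seams that are invisible at normal contrast.
    """
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo)  # scale into [0, 1]
    return norm ** (1.0 / gamma)

# A faint seam (value 0.02 against a 0.0 background) is hard to see;
# at gamma = 2 its displayed brightness jumps to about 0.14.
frame = np.zeros((4, 4))
frame[2, :] = 0.02   # faint horizontal mosaic seam
frame[0, 0] = 1.0    # a bright star sets the top of the range
stretched = gamma_stretch(frame, gamma=2.0)
```

Bringing the gamma back down (the inverse power) restores normal contrast, which is why a seam cleaned at high gamma stays invisible in the final print.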

And I will say we learned our lesson then but we still have made a mistake. This year we found that we had a problem in our image that we didn't know about. And we're in the process of solving it now. So we're relearning these lessons every year unfortunately.

Next Slide 66, let's talk about the modeling of the data into 3-D. This is the flight into the star cluster Westerlund 2 within the nebula Gum 29 that we did for Hubble's 25th Anniversary.

And on Slide 67 we show you screenshots from our 3D Modeling Program. The top image shows you the star cluster. There are about 3000 really bright stars. This is a honking big star cluster with really honking big stars in it. Someone described it as the biggest cluster of big stars in the nearby universe.

I'm not a star specialist so I couldn't confirm it. But hey I'll believe them.

And so you can see how we've modeled that star cluster. The lower left shows you the stars we extracted from the Hubble image. And these are placed in the foreground. These are the stars that are not in the cluster. They're placed in the foreground to give us a 3D sense as we fly through it.

And we did not of course measure the distances for all of these stars. We did a statistical model.

And then we were doing a 16 x 9 widescreen presentation. And so, we needed extra stars which is in the lower right. Those come from the 2MASS Survey. And they are also statistically modeled so they give us a good three-dimensional feel.
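The statistical placement Frank describes can be sketched roughly as follows. This is an assumption-laden illustration, not the production code: the distance law (stars uniform in volume along the line of sight) and all the numbers are invented, and each star with a known 2D sky position simply gets a random depth drawn from that law.

```python
import numpy as np

rng = np.random.default_rng(42)

def place_stars(xy_pixels, d_min, d_max):
    """Assign statistically modeled distances to stars whose sky
    positions (x, y) are known but whose true distances are not.

    Distances are drawn so stars are uniform in volume along the line
    of sight, P(d) proportional to d^2, via inverse-transform sampling.
    """
    xy = np.asarray(xy_pixels, dtype=float)
    u = rng.random(len(xy))
    # inverse CDF of p(d) ~ d^2 on [d_min, d_max]
    d = (u * (d_max**3 - d_min**3) + d_min**3) ** (1.0 / 3.0)
    return np.column_stack([xy, d])

# Three made-up foreground stars with pixel positions but no distances
stars_2d = [(10.0, 20.0), (30.0, 5.0), (18.0, 42.0)]
stars_3d = place_stars(stars_2d, d_min=100.0, d_max=5000.0)
```

Any plausible distance distribution works here, because the goal is a convincing sense of parallax during the flight, not astrometry.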

The next slide, Slide 68, shows you the layers of the nebula, starting in the upper left with the layers behind the star cluster. In the upper right, we have the pieces of the star cluster. I'll show you these in just a second. But these are the pillars and the filaments and such inside the cluster.

The lower left is the material on the front side of the nebula. That gives it the near foreground and then the lower right is a very thin wispy layer called the Veil which is just sort of gas that's been blown out of the nebula. But it's right in front of it. And actually it's a really cool layer because we get to fly through it on our way to getting to the cluster.

On Slide 69 you see the visualization, sort of a buildup from back to front, starting in the top left and going down the page. You get the background, the star cluster, and then you start to get these pillars and filaments inside the nebula.

On the right side, starting at the top and going down, you get the front of the nebula, you get the veil of the nebula, and finally you add in the stars from Hubble and from 2MASS.

So this is about 15 to 20 layers of gaseous material and about 27,000 stars that have to be modeled in 3D here.

And I will definitely say we did not model all 27,000 stars by hand. That of course requires computer programming with scripts to be able to do stuff like that.

Slide 70, we'll talk a little bit about the camera. One of the reasons we use software from Hollywood is that they put all sorts of money into doing these really cool things with their cameras. You can see the camera view as you're choreographing it in the software on this Slide 70.

But the real trick, on Slide 71, is that you take the curves for your camera track in X, Y and Z. And this is actually a pretty simple camera here.

And if you ever took calculus and worked with the tangents and derivatives of curves: you need the curves for these camera tracks to be not just first-derivative smooth but second-derivative smooth. You want your third derivative to be near zero for all of these. I'm speaking in mathematical terms.

But the tangents in the lower picture here on Slide 71, those pink tangents, you can adjust those carefully. Make them bigger or smaller and tilt them a little. You can really do some amazing adjustments with your camera to make it nice and smooth, and Hollywood does a great job of that. We could never do it ourselves.
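The smoothness criterion Frank describes, a camera track whose second derivative is continuous and whose third derivative (jerk) stays near zero, can be checked numerically. A minimal sketch, with a made-up quintic ease curve standing in for a real camera track:

```python
import numpy as np

def motion_profile(positions, dt=1.0):
    """Finite-difference velocity, acceleration, and jerk along one
    axis of a camera track sampled once per frame.

    A comfortable cinematic move keeps acceleration continuous and
    jerk (the third derivative) near zero, which is what adjusting
    the tangent handles on the animation curves accomplishes.
    """
    p = np.asarray(positions, dtype=float)
    v = np.gradient(p, dt)        # first derivative: velocity
    a = np.gradient(v, dt)        # second derivative: acceleration
    j = np.gradient(a, dt)        # third derivative: jerk
    return v, a, j

# A quintic "smoothstep" ease starts and ends with zero velocity AND
# zero acceleration, which reads as far smoother than a cubic ease.
t = np.linspace(0.0, 1.0, 101)
quintic = 6 * t**5 - 15 * t**4 + 10 * t**3
v, a, j = motion_profile(quintic, dt=t[1] - t[0])
```

Plotting the jerk curve for a candidate camera move is a quick way to spot the "kicks" a viewer would feel, even when the position curve itself looks perfectly smooth.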

Next slide, Slide 72, something we do have to do ourselves is rendering. And this is me standing in front of a painting by Georges Seurat, who was the father of pointillism, a definitely masochistic painting style where he drew individual points on a canvas. And you can see just how big this canvas is. This is A Sunday on La Grande Jatte, which is at the Art Institute of Chicago.

And what we do is similar: we're actually drawing points, but semitransparent points.

On Slide 73, the top is a test image I did with a Hollywood piece of software called Blue Moon Rendering Tools, BMRT. And I found, back in 2003 or so when I did this, that it took 29 hours to render that one frame. That told me there was something wrong here.

And so over the next two weeks I wrote my own code to do the same thing. And my code took 27 minutes. And it's not that I'm smarter than Hollywood. I'm not, by any means whatsoever.

But I'm doing one thing and one thing only in my code, and it's something that Hollywood doesn't do extremely well: drawing these semitransparent splats. They're used to hard surfaces where they can do occlusion, color and all sorts of other cool things.

And so I bit the bullet and had to write my own software. On Slide 74, my software has now morphed into something called Pointillism, after Georges Seurat.

And it's just this point-based rendering code that I wrote in C, and it does various things. It's got all sorts of bells and whistles that I put in, including lots of cameras to do domes and all-skies and VR stuff.

But as I said, it only does one thing, but it does it really, really well. It does it better than the Hollywood software. So sometimes you do have to write your own stuff. It's a back and forth in terms of taking the tools where you've got them and building your own where you don't.
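The one job Frank's renderer does, blending semitransparent "splats" back to front, can be sketched minimally. This toy version (single channel, one pixel per splat, invented data) only illustrates the over-compositing order; the real Pointillism code in C is far more capable:

```python
import numpy as np

def composite_splats(splats, width=8, height=8):
    """Render semitransparent point 'splats' by back-to-front
    alpha compositing (the "over" operator) into a grayscale frame.

    Each splat is (x, y, depth, brightness, alpha). Sorting by depth
    and blending far-to-near is the one job a dedicated point
    renderer has to do well.
    """
    frame = np.zeros((height, width))
    # farthest first, so nearer splats are blended over them
    for x, y, depth, value, alpha in sorted(splats, key=lambda s: -s[2]):
        frame[y, x] = alpha * value + (1.0 - alpha) * frame[y, x]
    return frame

# Two overlapping splats: a dim far one under a brighter near one.
splats = [
    (3, 3, 10.0, 0.4, 0.5),  # far, dim
    (3, 3, 2.0, 1.0, 0.5),   # near, bright
]
frame = composite_splats(splats)
```

Because the blend depends on draw order, depth sorting is not optional here; skipping it is exactly the kind of shortcut that hard-surface renderers can take and a semitransparent-point renderer cannot.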

Slide 75, we'll talk about the render farm we had to use on the IMAX film Hubble 3D, because the day, the very day, that Director Toni Myers greenlit us to produce our sequences, we got, Slide 76, 24 inches of snow.

And I don't know where you are but here in Baltimore, 24 inches of snow shuts down the city.

And if it wasn't bad enough, four days later we got 27 more inches of snow. We were shut down for a week. And I was supposed to start the rendering. And I couldn't go into the office to do it.

So Slide 77 shows you what I was doing. I was sitting in my basement coming in over a terminal connection to my machine Urania. And from there I was using my Perl scripts and my SSH connections to all these machines around the building, telling them what frames to render, getting them to send their frames to the central server, which is AVL Serv1, and then pulling all that stuff back to my machine.

And then I actually had to do compositing from the command line. Fortunately, ImageMagick is out there and I could do command-line compositing, set it all up. And finally, when we were able to get back into the office, we pulled it all together, put it on some USB disks and sent it off to the folks in California.

So I guess my advice on this is be prepared for the worst. Sometimes it does happen. But if you're going to try cinematic stuff you're going to need a lot of machines. And several people sitting around the table here at STScI lent their machines. As I say I like to keep CPUs warm at night when we're doing a big render.

Slide 78, just want to mention music. I have three movies here that if you just think about them, you know exactly what the music there sounds like because it carries so much of the emotional content, okay.

And you learn from that. If you want to make it cinematic you've got to have good music. Slide 79 shows where I steal mine from. It is the freebies archive. And this piece here, the Serenade for Strings, Opus 22, was a perfect fit for the sequence that I'm going to talk about next. You spend a few hours listening to things. Yes, it takes a while.

It's not my specialty -- music. Matter of fact Joe sitting right here has a specialty in music so I'm going to use his expertise moving forward. But you got to have music to really make it cinematic.

Slide 80, the message, okay, and this is actually the very first thing we think about when we do something. We think about what is the message we're going to send.

But I saved it for last because it's the most important thing here. In this image, you see on the left the Hubble image of the Orion Nebula, and on the right the Spitzer image of the Orion Nebula. And you can see what you see in visible versus infrared. That's the major theme of the sequence we put together: how different wavelengths show us different characteristics of the same object, okay.

And then what we did is we took this two-dimensional comparison, and on Slide 81 you can see that we took it into 3D. We had built the visible-light model for IMAX Hubble. We built a corresponding infrared model for this sequence. And then we cross-fade back and forth to show you what's going on.

The message: you get to give the viewer an experience of flying through a nebula. They build up a mental model of what these star forming regions look like. They get a feel for the size and scale and structure.

Now is this absolutely scientifically accurate? No, it can't be. We don't have this much information.

But we make something that I call scientifically reasonable. Something that when I show it to an expert he goes, yes, well you could tweak this or that but overall it's pretty good, you know.

And I guess I would say one of the greatest compliments I ever got was from a professor at a university who said, you know, this fly-through is worth three of my lectures. It just gives people a feel for what's going on.

And so in trying to do cinematic scientific visualizations what we've learned over the 15 years or so that we've been doing them here is that you got to be prepared to go big. You got to be prepared to learn all the techniques that Hollywood has created.

And work it through, and only select objects in the universe that have enough 3D information that we can make something really cool and really big like this. But the payoff is really, really good. We got over 500,000 views of this in the first week alone. It's approaching 1 million views on YouTube now. It was a lot of fun to work on.

And by the way I should mention Robert Hurt was my partner in working on this. He was the Spitzer person working with me.

All right, thank you.

Emma Marcucci: All right, well thank you very much Frank and Robert and Kim for sharing all that information. We'll now go into questions. So if you have a question feel free to unmute and speak up.

Lisa: Yes. This is a question that comes up quite often during presentations. I know, as a former programmer, that the W3 Consortium basically all agreed on a basic language that everyone would use as a basis and then build out on that.

On the color palette, I know we have the Hubble palette. And in one of the first presentations they did their color palette based on X-rays. Is there any consistency in the color coding based on gases or elements?

And where we would find that so I can answer questions? And most of them are wondering if it's just a person's artistic interpretation as opposed to based on science.

Robert Hurt: This is Robert. I can maybe jump in and start a response on that. I would say ultimately the bottom line is there is no standard palette, no simple thing where you can say, oh, if it's Hubble it's this; if it's Spitzer it's that.

But it's not just arbitrariness. It's because every time you're approaching an image there may be very specific bands or channels of data that you're using for one reason or another.

And that can change literally from image to image. And ultimately we only have three places to put color: into red, into green and into blue, in different combinations.

And so depending on what data you have access to and where it comes from in the spectrum, for some of them we have to reinvent every time how you're going to represent that. Kim pointed out that there are some clear reasons why you might choose, when you have an arbitrary choice on how to map it one way or the other: there are perceptual reasons you may go for it. There are also reasons based on printability, or on which part of the spectrum would muddle the information you're trying to show.

There are certain kinds of information if you put it into red and green you can see much more subtle color variations than if you put it in say green and into blue.

That said, I can say there are a few general things that we have often tended to do, which is that for, say, the infrared data we try to keep the relative ordering of the colors the same, so the longer-wavelength things more often than not tend to get put into the redder colors, and the shorter-wavelength parts of the spectrum tend to get put into the bluer colors. So we tend to preserve that relative ordering.

But, you know, in any given case, particularly in the case of Hubble data, we have narrow-line observations. One nebula may have been observed in Oxygen III but not silicon. And another might have been observed in H-alpha but not Oxygen III.

So you just have to take which data you have and assign it where you can.
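Robert's rule of thumb, keeping chromatic ordering so the longest wavelength goes to red and the shortest to blue, can be sketched like this. The band wavelengths here are Spitzer-like values chosen purely as an illustration, and the middle-band choice is an assumption, not anyone's actual pipeline:

```python
import numpy as np

def chromatic_compose(bands):
    """Combine grayscale exposures into one RGB image, preserving
    chromatic ordering: longest-wavelength band to red, shortest
    to blue, a middle band to green.

    `bands` maps wavelength (in micrometers, say) to a 2-D array.
    """
    ordered = sorted(bands.items())            # shortest wavelength first
    waves = [w for w, _ in ordered]
    imgs = [np.asarray(img, float) for _, img in ordered]
    blue, green, red = imgs[0], imgs[len(imgs) // 2], imgs[-1]
    return np.dstack([red, green, blue]), waves

# Three illustrative infrared channels (tiny 2x2 flat fields)
shape = (2, 2)
bands = {
    3.6: np.full(shape, 0.2),   # shortest wavelength -> blue channel
    4.5: np.full(shape, 0.5),   # middle              -> green channel
    8.0: np.full(shape, 0.9),   # longest             -> red channel
}
rgb, order = chromatic_compose(bands)
```

The point is only the ordering convention; which physical bands exist, and whether you even have three of them, changes from image to image, exactly as Robert says.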

Kim Arcand: And this is Kim. I would just add that as Robert was saying, you essentially let the data drive where you're going with the image. So the science story is driving how the resulting image will look.

When you're dealing with things in X-ray light, for example, that you can't see, there is more arbitrary color being assigned. But the colors that you're choosing are for a specific reason based on the story that you're trying to tell, whether it's the chemical emission in a supernova remnant like G292, where you're applying multiple colors to map the silicon or the sulfur or what have you.

That said, there is always an aesthetic judgment that does come into play because if you combine certain colors it might not be simple for somebody who's say color blind or it might not be the most aesthetically pleasing result.

So you try to work within the science story to end up with an image that is both scientifically accurate but also aesthetically pleasing.

Robert Hurt: This is Robert again. Just to say very quickly, that's also one of the motivations behind the AstroPix web site. While not all of the images in AstroPix have been provided with the full color metadata, a lot of them have.

And so the motivation to create that color widget is that for any image that has the supporting metadata, you can look and immediately identify, oh okay, in this case the blue came from H-alpha, it's hydrogen gas, or the red came from radio, right. And that's why we really felt that kind of color widget was really important to help interpretation of color on a case-by-case basis.

Joel Goodman: Hi. Joel Goodman from Robinson Nature Center in Columbia, Maryland. Frank the visualization of the Orion Nebula, is that available for fulldome?

Frank Summers: That has just finished rendering this month. And when I mentioned that relearning of lessons, processing your images at gamma equals 2 is where the problem cropped up again. We're fixing that problem this week. And we should have the fulldome available within a month or so, okay.

Joel Goodman: And where should we look for that?

Frank Summers: It will be available on HubbleSite. I will also post to the news groups and such on Dome-l and the Facebook Fulldome I think it is or whatever.

Emma Marcucci: And on the NASA Wavelengths List that's on Slide 2, there is a link to that HubbleSite video resources that Frank was just mentioning.

Joel Goodman: Great. Thank you. And that was a great answer about the color. That's the most common question I get from my 6th graders doing Skynet junior scholars. Thank you.

Emma Marcucci: Thanks.

Kevin: Yes. This is Kevin from the U.S. Space & Rocket Center. And I'm trying to find a resource for some -- I noticed you had these different images at the different energy levels, so you get the RGB type of thing.

And I'm wondering where I can get some FITS format images so that I can bring those down, use my FITS Liberator to transform them, and then put them into Photoshop to try and work them out. I'm trying to put together maybe a workshop, an image-processing workshop, but with some of this raw data that we're getting.

Kim Arcand: I can answer that from Chandra's perspective. We do have a selection of FITS files that are ready to go on the Chandra web site. I can post the URL. I can send it to you Emma. Would that be the easiest way to add it? Okay.

Emma Marcucci: Yes, that would be great.

Frank Summers: And here at Space Telescope we have what we call the Multimission Archive at Space Telescope, now the Mikulski Archive for Space Telescopes, for our former Senator who's a big fan of Hubble.

And from there the FITS images are available. I don't personally deal with them, but Zolt and Joe are in the room and they do. How easy is it to get them from MAST? Can you get good FITS images easily?

Zolt Levay: It's a little complicated. There are several different interfaces to the MAST archive. Probably the most user-friendly is called the HLA, so if you go to

Man 2: Yes. The -- is that the Hubble Legacy Archive, right.

Zolt Levay: Hubble Legacy Archive.

Man 2: Yes.

Zolt Levay: You can do a search for Hubble Legacy Archive.

Man 2: Right.

Robert Hurt: Okay. And for the other side, for the infrared part of the spectrum, the other major archive is maintained at Caltech, the Infrared Science Archive or IRSA,

And likewise there's a little bit of a learning curve to their interface. But it actually is not that bad. They have YouTube videos and things that will walk you through it.

And honestly I think pretty much anyone with just a little bit of computer expertise can go in within 10 or 15 minutes, kind of get the idea.

And it's nice because they do have a lot of visualization tools actually built into the archive. And so you can actually get good, meaningful previews of what the FITS data sets look like before you decide which ones you want to pull down. You know, again, this is kind of the raw science data, so it hasn't all necessarily been super neatly mosaicked for you.

But because you can change your stretch and apply different color palettes and even construct RGB images from within the archive interface, it gives you a lot of opportunity to find out which data sets are going to be the most useful for you.

Kevin: Oh well cool. That was

Robert Hurt: Or just Google Infrared Science Archive and you should find it.

Kevin: Oh okay. Well thank you so much.

Kim Arcand: I would just add that, depending on the ages that you're going for, if you have a younger audience you'd like to have look at how to color images, the hands-on activities list that Emma mentioned at the beginning of the talk includes Re-Coloring the Universe, a free, open-source, Pencil Code-based activity where you can step kids through how we do colorizing of images using a very simple drag-and-drop block-code type of activity. So it's great for middle school kids too.

Kevin: Thank you.

Man 3: I just have a quick question about Slide 8, the different colors obviously represent different energy, X-ray energies coming from Cas A. Do we know the mechanisms that are producing those different energies?

Kim Arcand: So Slide 8 is actually a version from -- I can send you the direct URL to that release. But that's actually showing the chemical emission, so yes: silicon is red, sulfur is yellow, calcium is green, iron is purple. And then the blue is the blast wave from the explosion, seen as mostly the outer ring and then elsewhere. So I can actually send you the press release for that image if you're interested in it.

Man 3: Yes. I am. Thank you.

Kim Arcand: Sure.

Emma Marcucci: All right, excellent. Well thank you again to our speakers Kim, Robert and Frank for that fascinating science briefing today.

If we go to Slide 82, I just have some final words to say. If you've been on these calls before, you may know that we have a Professional Development Series. It's a series of seven webinars. We're about halfway through them.

But they are being archived for later viewing. If anyone is interested in those, the URL is in the middle of the page here.

And if we go to Slide 83, the last slide, as always, we want to make these briefings as beneficial to you our listeners as possible. So we do do regular evaluations.

If you would prefer not to participate in this process, please opt out by contacting Kay Ferrari.

The last note I will say is that our next science briefing will be Thursday, May 3rd. And it will be about the birth of stars both near and far.

With that, I will pass it back to Jeff for any last comments.

Jeff Nee: Great, thanks everybody for joining us today. And thanks of course for the whole Universe of Learning Team and to all of our speakers. Remember that all the talks are always recorded and posted on the member web sites. And you are encouraged to share these presentations as professional development with your colleagues including your education staff and your museum docents even.

If you have further questions like Emma said, always feel free to email me. Again this is Jeff Nee with the Museum Alliance

And I think that in general we have another webinar next week for MAVEN if you want to join us for that as well.

Okay, thanks everybody. Have a great Thursday. And we hope to hear from you next time.