The Geeky Geological Features of Charon

As talked about in a previous post, Charon was named after the wife of its discoverer, James Christy. Since then the New Horizons probe has visited and taken some amazing pictures of the surface. As part of the mapping, the team have also started naming some of the craters and other geological features found on the surface, and they all have names drawn from fiction and mythology. Although some have been accepted by the International Astronomical Union (IAU), there are still many that haven't. As of April 2018 the IAU has set out an agreed naming convention and set of rules: names should conform to one of the following themes:

  • Destinations or milestones of fictional space and other exploration.
  • Fictional and mythological vessels of space or other exploration.
  • Fictional and mythological voyagers, travelers and explorers.
  • Authors and artists associated with space exploration, especially Pluto and the Kuiper Belt.

So far the New Horizons team has given many provisional names, based mostly on science fiction franchises such as Star Wars, Star Trek, Doctor Who and Firefly. Most are still provisional, but some have been accepted.

An enhanced colour version of Charon taken by the New Horizons space probe. It is enhanced to show the differences in surface composition. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

A terra is a large landmass or highland, and there is only one highland region on Charon. It was named Oz Terra after The Wonderful Wizard of Oz, the children's novel by L. Frank Baum. The dark spots on the surface are called maculae in planetary science. The first is named Gallifrey Macula after the Doctor's home planet in Doctor Who. The second is Mordor Macula, after the stronghold of Sauron in The Lord of the Rings books by J.R.R. Tolkien. A planum is the scientific name for a plateau (an elevated plain), and Charon has only one, named Vulcan Planum after the home planet of Spock in the Star Trek series. Terrae, maculae and plana are all being named after fictional destinations. A mons is a planetary mountain; you may have heard of some of the montes currently being studied by NASA's rovers on Mars. Charon has three major mountains, and they are named after authors and artists. Butler Mons is named after Octavia E. Butler, an American science fiction author. Clarke Montes is named after Arthur C. Clarke, the famous English science fiction author who wrote 2001: A Space Odyssey. Kubrick Mons is named after Stanley Kubrick, director of films such as The Shining and A Clockwork Orange. All three of the mountain names have been accepted by the IAU.

Mordor Macula is a large dark area, about 475 km in diameter, near the north pole of Charon, Pluto's largest moon. It is named after the shadow lands in J.R.R. Tolkien's The Lord of the Rings. It is not currently known what Mordor Macula actually is: it may be frozen gases captured from Pluto's escaping atmosphere, a large impact basin, or both. Credit: NASA

A chasma is a deep, steep-sided depression (a chasm), and Charon's chasmata are being named after fictional and mythological vessels. Argo Chasma is named after the ship in the Greek myth of Jason and the Argonauts; Argo is also the spaceship in the English translation of the Space Battleship Yamato anime series. Caleuche Chasma is named after the mythological ghost ship that travels the seas around Chiloé Island off the coast of Chile, collecting the dead, who then live aboard forever (much like Davy Jones). Mandjet Chasma is named after the solar boat of the ancient Egyptian god Ra. All three of the above chasmata are recognised by the IAU. Macross Chasma is named after the SDF-1 spaceship in the Macross anime series. Nostromo Chasma should be known to most as the spaceship in the Alien films. Serenity Chasma comes from the spaceship used in the Firefly series. Tardis Chasma is named after the famous blue box flown by the Doctor in Doctor Who.

Annotated map of Charon, with provisional names for features. Credit: NASA/JPL.

There are 16 notable craters found on Charon's surface, of which six have officially recognised names. They have all been named after characters associated with science fiction and fantasy. Dorothy Crater is named after the main character in The Wizard of Oz, the same books that gave the only terra on Charon its name. Nasreddin Crater is named after the Sufi traveler of folklore. Nemo Crater is named after Captain Nemo from the novels by Jules Verne. Pirx Crater is named after the main character of the short stories by Stanislaw Lem. Revati Crater is named after the main character in the Hindu epic narrative Mahabharata. Sadko Crater is named after the adventurer who traveled to the bottom of the sea in the medieval Russian bylina epics. All of the above craters have been officially recognised by the IAU. Alice Crater is named after the main character of the Lewis Carroll novels. Kaguyahime Crater is named after the princess of the Moon in Japanese folklore. Organa Crater is named after Princess Leia in the Star Wars films, along with Vader Crater and Skywalker Crater. Ripley Crater is one of the more studied craters and is named after the main character in the Alien films. Kirk Crater, Spock Crater, Sulu Crater, and Uhura Crater are all named after main characters in the Star Trek TV franchise.

Photo of Charon centered on Ripley Crater. Nostromo Chasma crosses Ripley vertically. Vader is the dark crater at 12:00, Organa Crater is at 9:00, Skywalker Crater at 8:00, Gallifrey Macula and Tardis Chasma at 4:00. Credit: NASA/JPL

Thank you for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


Notes From NASA’s Chief Scientist Jim Green’s Talk on The Search For Extraterrestrial Life

A few weeks ago my place of work, STFC, was lucky enough to host NASA's chief scientist Jim Green for a talk titled "The search for life on Earth in space and time". At the time of writing there is a version of the talk on the University of Oxford's Facebook page. It is a really interesting talk for anyone interested in space and our solar system. It also goes into much more depth than this post does, and gives a real insight into the current science of our solar system. A planetary scientist himself, he talks about the planets in our solar system that could harbor life and those that might have done previously. I found it a real insight into what NASA's goals are and where they are looking for signs of life. I particularly enjoyed the talk as Jim Green hosts the "Gravity Assist" podcast made by NASA.

The logo for NASA's Gravity Assist podcast, hosted by Jim Green. Credit: NASA.

The first real point he made was how to define what life is, which is a reasonable question. If you want to go out and find life on other planets, how do you know when you have found it? Spacecraft and astronauts need instruments and tools to detect things, and to build those instruments you need to know what they are looking for. The definition they came up with was that life needs to do three things: metabolize, reproduce and evolve. This is a pain because it's difficult to see any of those things directly. If you take just the metabolizing part and break it down, it becomes a bit simpler: you need organics, an energy source, and water. You also need some way to get rid of waste. We also need to take time into account: you could have a fully habitable environment but no life, simply because it isn't the right time.

The ingredients needed for life, from a slide in Jim Green's talk. Credit: NASA

Time is a really important factor. Earth has existed for 4.6 billion years, and it hasn't always had life; there have been at least five mass extinction events in that time as well. To really see what is happening we need to look at how the sun has changed over that time, as it is the thing in the solar system with the most effect on us. Since its birth 4.6 billion years ago it has brightened, with its luminosity increasing by up to 25 or 30% by some estimates. We know that the Goldilocks region, or habitable zone, of a star is where water can exist in all three states, but that depends on how big and how bright the star is, and therefore the Goldilocks region moves over time. This would seem to make looking for life on exoplanets simple: just work out where the habitable zone is and choose planets in it. Unfortunately it isn't that simple.
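
To put a rough number on how the zone moves: the light a planet receives falls off with the square of distance, so the distance receiving Earth-like flux scales with the square root of the star's luminosity. This is my own back-of-the-envelope sketch, not a figure from the talk:

```python
import math

def earth_equivalent_distance_au(luminosity_ratio):
    # Received flux ~ L / d^2, so the distance getting Earth-like flux
    # scales as sqrt(L / L_sun).
    return math.sqrt(luminosity_ratio)

# If the young sun was ~30% dimmer (L = 0.7 L_sun), the Earth-equivalent
# distance was ~0.84 AU, so the habitable zone has crept outward over
# the 4.6 billion years the sun has been brightening.
print(earth_equivalent_distance_au(0.7))  # ~0.84 AU back then
print(earth_equivalent_distance_au(1.0))  # 1.00 AU today
```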

A diagram of how the habitable zone of a star changes over time with different brightnesses. Credit: NASA

Let's start off with Mercury, the closest planet to the sun. It is larger than the Moon, but it isn't large by any means. It has a magnetic field, it is nearly tidally locked, and it is incredibly hot. It outgasses, and from MESSENGER data most scientists have agreed that it has never had a substantial atmosphere, so water is very unlikely to have existed there. The next candidate would be Venus; it is a similar size to the Earth after all. The Soviet Venera missions looked at the atmosphere and the temperature, and found it is extremely hot. The surface is hot enough to melt lead, and the pressure is 90 times that of our own planet. NASA's Magellan probe found it to be highly volcanic, with a very thick atmosphere. This means there is basically no chance of water, which makes Venus a bad choice for finding life today. Using some fairly interesting concepts, scientists have modeled what early Venus may have looked like and found it likely had water at some point, but the runaway greenhouse effect, along with the lack of a magnetic field, has stripped all that water away. That being said, one day we could produce probes good enough to dig through the surface and look for signs of life below the ever evolving surface layer.

Five global views of Venus by the Magellan probe. Credit: NASA.

The next obvious choice is Mars, much larger than the Moon, but only about half the size of Earth. It's a bit of a runt due to Jupiter: the asteroid belt between Mars and Jupiter is made of rocks that could have become part of Mars, but Jupiter's massive gravitational pull denied that. We also know that at some point in its life it had oceans covering two thirds of the surface that could have been up to a mile deep in places. It then went through massive climate change and lost its magnetic field. That means the solar wind has stripped away the atmosphere and left a dry and arid surface; the pressure is now about 1% that of Earth. Plus, as it is fairly close to Earth, we can visit it fairly easily. From a number of missions, including satellites and several rovers, we know that there are organic compounds on the surface, and likely water under the surface. Although not a guarantee of life, it is a big hint. There are a number of missions planned, including ESA's ExoMars, and NASA's InSight and the 2020 rover. These missions are designed to drill into the surface and understand more about the planet, and what the water held.

True color image of Mars taken by the OSIRIS instrument on the ESA Rosetta spacecraft during its February 2007 flyby of the planet. Credit: ESA.

We talked about the habitable zone, but there is another line (or sphere, technically) that planetary scientists use, called the snow line. Sitting out around the asteroid belt, it marks the distance beyond which surface water should exist only as ice. For a long time that was thought to rule out liquid water in the outer solar system, but research has revealed that some moons have liquid water below their icy surfaces. In 1610 Galileo discovered Jupiter's four largest moons, and they have since been visited and studied by the Galileo and Juno probes. All the moons at one point had an ice crust. Scientists have found that some moons, such as Io, lost this crust and have become very volcanic and volatile. Ganymede, Callisto and Europa still have this ice crust. Only Ganymede and Europa show signs of a watery ocean underneath the crust, but Ganymede is somewhat ruled out for life because of its very cold temperatures. This leaves Europa in this Jovian habitable zone. Slightly smaller than our Moon, it has been shown to have watery geysers that reach 400 km above the surface; that would be equivalent to Earth geysers hitting the space station. Analysis of Galileo data suggests it has twice as much water as Earth. Plus it has been like that for around 4.6 billion years, so there is a good chance there could be microbial or even complex life below the surface. There is a mission planned to visit Europa called Europa Clipper.

An image showing the icy crust of Jupiter’s moon Europa. Europa is about 3,160 kilometers (1,950 miles) in diameter, or about the size of Earth’s moon. This image was taken on September 7, 1996, by the camera on board the Galileo spacecraft during its second orbit around Jupiter. Credit: NASA/JPL/DLR.

Then there is Saturn, which has been studied extensively, and the thing that stands out is its moon Enceladus. It is the moon that really drew NASA's attention to the possibility of water on these distant moons. It also has geysers, coming from huge cracks in the southern hemisphere; they are huge walls of water just pouring out of the body. Being only a small moon, around 500 km across, it suffers from tidal forces: the water pours out less when it is closer to Saturn, and more when it is further away (due to its elliptical orbit). This has been measured and shown, as the Cassini spacecraft actually flew through one of the geysers early on without knowing it. We have a spacecraft that has literally tasted this water. About 98% of the water that comes out of the geysers falls back onto the moon, but the other 2% escapes and forms Saturn's E ring. Cassini later flew through these plumes deliberately and managed to measure some of this water and, more importantly, small bits of rock. This gives indications that hydrothermal vents are the cause of these plumes of water.

NASA’s Cassini spacecraft captured this view as it neared icy Enceladus for its closest-ever dive past the moon’s active south polar region. Credit: NASA/JPL

Another spectacular moon of Saturn in the running is the famous Titan. It is bigger than the planet Mercury, and its atmosphere is thicker than ours, with around one and a half times the surface pressure, dominated by nitrogen. Trace gases of methane and ethane have been detected, and it has large bodies of liquid. Radar images piercing through the thick atmosphere show rocky terrain and flat lakes of liquid methane. This has spurred on the idea that life could be very different from us, and could survive in liquids such as methane. So if we want a chance of finding life not like us, Titan would be the best place to go. There are a number of important missions proposed to visit Titan and make much better measurements of the surface, including robotic missions and maybe even very simple rovers, though by all accounts these are still in the early stages.

These six infrared images of Saturn’s moon Titan represent some of the clearest, most seamless-looking global views of the icy moon’s surface produced so far. The views were created using 13 years of data acquired by the Visual and Infrared Mapping Spectrometer (VIMS) instrument on board NASA’s Cassini spacecraft. Credit: NASA/JPL-Caltech/University of Nantes/University of Arizona

The data from these missions has allowed us to look further afield for exoplanets that fit what we now use to define habitable planets. Missions such as Kepler have refined the way we detect planets, by staring at stars for long periods of time and looking at how they dim and wobble when planets pass in front of them. The big exoplanet mission for NASA currently is TESS. Launched in April, it has gone through its commissioning and is already finding planets out there. The idea is to take large numbers of images over a long time and find as many exoplanets as possible, hopefully producing thousands of candidates; the best looking ones can then be followed up with much more powerful and advanced telescopes such as JWST to make better measurements and tease out the atmospheres and makeup of these exoplanets. One closing point that Jim Green made: when you go out and look at the stars at night, just remember that there are more planets in our galaxy than there are stars visible in the sky.

One of the first images taken by NASA's TESS, centered on the southern constellation Centaurus, reveals more than 200,000 stars. Credit: NASA.

Thank you for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


NASA Turns 60

The official logo for NASA turning 60.

As of today, the 1st of October 2018, NASA has turned 60. It was created as a new agency based on its precursor NACA, which was started in 1915. The Cold War between the USA and the Soviet Union created a space race in the late 1950s. From 1946, the National Advisory Committee for Aeronautics (NACA) had been experimenting with rocket planes. One of the most famous was the Bell X-1 that took Chuck Yeager past the speed of sound (the first aircraft to do so). They were also the team behind the running of the X-15 rocket plane that Neil Armstrong famously flew. In the early 1950s there was a call to look into launching artificial satellites towards the end of the decade, mainly driven by the International Geophysical Year of 1957/58.

The X-15 rocket plane, still the fastest crewed aircraft ever; it reached nearly Mach 7 and was developed by NACA. Credit: NASA.

An effort towards this by the USA started with Project Vanguard, led by the United States Naval Research Laboratory, which ended in catastrophic failure. This was the perceived state of the US side of the space race at the time. On October 4th, 1957 Sputnik 1 launched and instantly grabbed the attention of the United States public. The perceived threat to national security became known as the Sputnik crisis, and the US Congress urged immediate action. President Dwight D. Eisenhower and his advisers worked on immediate measures to catch up. This eventually led to an agreement to create a new federal agency based on the activity of NACA, which would conduct all non-military activity in space. The Advanced Research Projects Agency was also created, to develop space technology for military applications.

The failed launch of Project Vanguard by the Naval Research Laboratory; it was meant to put the first US satellite in space but ended in disaster.

Between 1957 and 1958, NACA began studying what a new non-military space agency would be and what it would do. On January 12th, 1958, NACA convened a "Special Committee on Space Technology" headed by Guyford Stever. The committee took consultation from the Army Ballistic Missile Agency, headed by the famous Wernher von Braun, the soon-to-be architect of the Saturn V. On January 14th, 1958, the NACA director Hugh Dryden published "A National Research Program for Space Technology", which stated:

It is of great urgency and importance to our country both from consideration of our prestige as a nation as well as military necessity that this challenge [Sputnik] be met by an energetic program of research and development for the conquest of space… It is accordingly proposed that the scientific research be the responsibility of a national civilian agency… NACA is capable, by rapid extension and expansion of its effort, of providing leadership in space technology

On January 31st, 1958, Explorer 1 was launched. Officially named Satellite 1958 Alpha, it was the first satellite of the United States. As talked about in a recent post, the payload consisted of the Iowa Cosmic Ray Instrument, without a tape recorder (there was not enough time to install it). A big turning point for the US side of the space race, it gave civilian space activities a chance in the spotlight and helped the case for more funding.

The logo for Explorer 1, the first US satellite in space. It was the first satellite to detect the Van Allen belts. Credit: NASA/JPL.

In April 1958, Eisenhower delivered to the U.S. Congress an address supporting the formation of a civilian space agency, and submitted a bill to create a "National Aeronautical and Space Agency". Somewhat reworked, the bill was passed as the National Aeronautics and Space Act of 1958 on July 16th. Two days later von Braun's working group submitted a report criticizing the duplication of effort between departments on space related programs in the US government. On July 29th the bill was signed by Eisenhower, and NASA was formed. It began operations on October 1st, 1958. NASA absorbed NACA in its entirety, including its 8,000 employees, annual budget of $100 million, and the research labs under its jurisdiction. The three main labs were the Langley Aeronautical Laboratory, the Ames Aeronautical Laboratory, and the Lewis Flight Propulsion Laboratory, and it also inherited two small test facilities. Elements of the Army Ballistic Missile Agency were transferred to NASA, including Wernher von Braun's working group, as were elements of the Naval Research Laboratory behind the failed Project Vanguard. In December of that year NASA gained control of the Jet Propulsion Laboratory (JPL). It is important to remember that NASA was built upon the work of the rocket scientist Robert Goddard, who inspired Wernher von Braun and the other German rocket scientists brought over by Project Paperclip. There were also huge influences from research conducted by ARPA and US Air Force research programs.

Thank You for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


JAXA Lands Rovers on an Asteroid

An artist’s impression of the Hayabusa 2 probe. Targeting an asteroid, it plans to land, sample it and then return with the sample by 2020.

The Japanese space agency JAXA has successfully landed and deployed two small rovers onto the surface of a near-Earth asteroid from the Hayabusa 2 probe. Following on from its predecessor Hayabusa, this second mission is an asteroid sample return mission, building on and addressing the weak points of the first mission. It launched on the 3rd of December 2014 and rendezvoused with the near-Earth asteroid 162173 Ryugu on the 27th of June 2018. Currently in the process of surveying the asteroid for a year and a half, it will depart in December 2019, returning to Earth in December 2020.

Photo taken by Rover-1B on Sept 21 at ~13:07 JST. It was captured just after separation from the spacecraft. Ryugu’s surface is in the lower right. The misty top left region is due to the reflection of sunlight. 1B seems to rotate slowly after separation, minimising image blur. Credit: JAXA

The Hayabusa 2 probe carries four small rovers designed to investigate the asteroid surface in situ, providing data on, and context for, the environment the returned samples come from. Unlike the rovers we are used to, these all use a hopping mechanism to get around; none of them have wheels, as there is so little gravity that wheels would be very inefficient. Deployed on different dates, they are all dropped onto the surface from 60-80 m altitude and fall to the surface under the very weak gravity. The MINERVA-II-1 lander is the container that deployed the first two rovers, ROVER-1A and ROVER-1B, on the 21st of September 2018. Developed by JAXA and the University of Aizu, the rovers are identical: 18cm in diameter and 7cm tall, with a mass of 1.1kg (2.4lb) each. They hop by using rotating masses within the rover. They have stereo cameras, a wide angle camera, and thermometers aboard, and are powered by solar cells and a double-layer capacitor.

First pictures from a MINERVA-II-1 rover that landed on the asteroid. Credit: JAXA.

The MINERVA-II-2 container holds ROVER-2, developed by a consortium of universities led by Tohoku University. It is an octagonal prism shape, 15cm in diameter and 16cm tall, with a mass of about 1kg (2.2lb), and it has two cameras, a thermometer and an accelerometer on board. It has optical and UV LEDs for illumination, to detect floating dust particles, and four mechanisms for hopping and relocating. The fourth rover, named MASCOT (Mobile Asteroid Surface Scout), was developed by the German Aerospace Center in cooperation with the French space agency CNES. It measures 29.5cm x 27.5cm x 19.5cm and has a mass of 9.6kg (21lb). It carries an infrared spectrometer, a magnetometer, a radiometer and a camera that will image the small-scale structure, distribution and texture of the regolith. It is capable of tumbling to reposition itself, and is designed to measure the mineralogical composition, thermal behavior and magnetic properties of the asteroid. Its non-rechargeable battery will only last for 16 hours. The infrared radiometer on the InSight Mars lander, launched in 2018, is based on the MASCOT radiometer.

An artistic rendering of Hayabusa 2 collecting a surface sample.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


Locating Where a Sound is Coming From

For my masters year, half the marks came from one module, the masters project. Being a team effort, we worked in a group of three. Putting our heads together, and taking ideas from lecturers, we made a list of potential projects. We knew that I wanted to be making hardware, and the other two wanted to use and learn machine learning and maybe FPGAs. After much deliberation we decided on a project that listened for a sound and, using time difference of arrival, worked out where the sound came from. This post is mostly about the hardware and circuitry designed for the project.

The final board for our masters project. It contains four amplifier sections for the microphones and a microcontroller with a USB interface.

In a world with a big focus on safety in public places, we thought it would be a good product for the security industry, potentially with links to smart cities. Imagine a shopping center, somewhere with lots of security already: security cameras, alarm systems and dedicated guards. This isn't uncommon in big public places and attractions, especially in the UK; sports stadiums, train stations and museums are always looking for new ways to protect themselves and isolate problems. The example that inspired us was the horrendous shooting at a concert in Las Vegas in October 2017, just as we were picking projects. A major problem there was that the security services did not know where the shooter was, meaning it took longer to get to him. With a system like we envisaged, the microphones would pick up the sound and triangulate it, and the location could then be sent to the relevant authorities.

The front page of The New York Times just days after the Las Vegas shooting.

To start with we needed microphones. We didn't need to reinvent the wheel, as microphones can easily be bought off the shelf. For ease we used standard stage microphones that had 3-pin XLR outputs. Although we had been warned that they might not work, they had a good omnidirectional pattern and lots of good documentation. One quirk is that their output is balanced, which means it needs to go through a pre-amp. To get an idea of what a balanced signal is, imagine a ground connection and two signal wires, where the two signals are the same but one is inverted. Any interference picked up along the cable lands on both wires equally, so it cancels when the receiver takes the difference, making the link much less susceptible to noise. This is part of the reason we liked using stage rated equipment: sound engineers have already worked out how to transport audio signals long distances through noisy environments. We concluded from research that the signals could reach over 100m, which was the number we were aiming for.
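
To see why the balanced pair helps, here is a tiny numpy sketch (my own illustration, not project code): the same noise lands on both conductors, and subtracting them recovers a doubled signal with the noise gone.

```python
import numpy as np

t = np.linspace(0, 0.01, 480)                # 10 ms at 48 kHz
signal = 0.02 * np.sin(2 * np.pi * 440 * t)  # ~20 mV microphone signal
noise = 0.1 * np.random.randn(t.size)        # interference picked up by the cable

hot = signal + noise     # signal conductor
cold = -signal + noise   # inverted copy; the cable run induces the same noise
received = hot - cold    # differential receiver takes the difference

print(np.allclose(received, 2 * signal))  # True: noise cancelled, signal doubled
```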

One of the pre-amplifier sections used on the board, using four operational amplifiers.

Once the signal got to the box it needed to be converted to something that could be read by an ADC. To do this we used an INA217, a pre-amp designed for basically this purpose. An instrumentation amplifier, it measures the difference between the two signals and amplifies it, outputting a voltage referenced to ground. The signal from the microphone is tiny, in the tens of millivolts range, so it needed some dramatic amplification to get it near the 5V range of the ADC. The INA217 did a good job, but we added a second stage amplifier to give it the extra push, as very large gains in a single stage can be bad for a number of reasons. We used an OP07D, but if we were to do it again we would pick a rail-to-rail part for better results. This amp had a pot as one of the gain resistors so that we could easily trim the gain during testing. Finally, the signal at this point sat between -2.5V and +2.5V, so we needed to shift it up to sit between 0 and 5V. This was done with a simple shift circuit and an amplifier; we used another OP07D to make buying easier.
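
As a sanity check of the chain, here are the sums in code form. The resistor values are illustrative, not the ones on our board; the first-stage gain equation (G = 1 + 10kΩ/RG) is from the INA217 datasheet, and the second stage is a standard non-inverting amplifier.

```python
def ina217_gain(rg_ohms):
    # INA217 datasheet gain equation: G = 1 + (10 kOhm / RG)
    return 1 + 10_000 / rg_ohms

def non_inverting_gain(rf_ohms, rg_ohms):
    # Classic non-inverting op amp stage: G = 1 + Rf/Rg
    return 1 + rf_ohms / rg_ohms

mic_peak = 0.020                        # ~20 mV peak from the microphone
g1 = ina217_gain(1_000)                 # 11x in the instrumentation amp
g2 = non_inverting_gain(10_000, 1_000)  # 11x in the second stage
peak = mic_peak * g1 * g2               # ~2.42 V, i.e. a +/-2.42 V swing

# Level shift the bipolar signal into the 0-5 V ADC window by adding 2.5 V.
print(-peak + 2.5, peak + 2.5)          # ~0.08 V to ~4.92 V at the ADC pin
```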

Me manufacturing the PCB; at this point I was inspecting how well it had been soldered.

From here the signal gets read by the 12-bit ADC in an STM32 microcontroller, which streams the data over USB to a PC where MATLAB picks it up. This is where my knowledge is a bit lacking, as I did not build this part. In essence, MATLAB runs a machine learning model trained on over 1,000 recordings of gunshots, screams and explosions, using a number of features to tell the categories apart. When it is then played a new sound of one of these types (one it has not heard before), it categorizes it and outputs the result to the user. It was also trained on a selection of background sounds so that it knows when none of these events is happening; otherwise there would be false alarms.
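
The time difference of arrival part is easier to show than to describe. This is not our MATLAB code, just a minimal numpy sketch of the idea: cross-correlate two microphone channels and take the lag where they match best.

```python
import numpy as np

def tdoa_seconds(mic_a, mic_b, sample_rate):
    # Cross-correlate the two channels; the lag with the biggest peak is
    # the sample offset between them.
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)
    return lag / sample_rate

# Toy test: the same 'bang' arrives 25 samples later on mic B.
fs = 48_000
bang = np.random.randn(256)
mic_a = np.concatenate([bang, np.zeros(100)])
mic_b = np.concatenate([np.zeros(25), bang, np.zeros(75)])
print(tdoa_seconds(mic_a, mic_b, fs))  # ~-0.00052 s: A heard it ~0.5 ms first
```

With three or more microphones, it is these pairwise delays that the triangulation solves over to place the sound.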

One of our set-ups for getting a useful output and testing that the amplifiers were working properly.

All in all, the project did actually work. It detected gunshots and screams played into the microphones, and the triangulation algorithm worked, just not in real time. We managed to win the best masters project award, mainly because we had good quality hardware, a partially working system and a good business case behind it. There is a lot of scope for where this project could go, and many things that could be improved, but we were happy with how it came out. I may be able to reuse some of the circuitry on other projects, who knows. If you are interested in more of the project, maybe some more detail about the hardware or manufacture, comment or message me on Twitter. Thanks for reading.

A good example of how much difference there is between the microphones when a big sound was made. Even small differences in distance produce a clear difference in arrival time.

The Dawn of Ion Engines

Ion thrusters are becoming a bigger and bigger part of modern satellite design: over 100 communication satellites in geosynchronous Earth orbit are being kept in their desired orbital slots using this revolutionary technology. This post is about its most amazing achievement to date, the Dawn spacecraft. Just reported to be at the end of its second mission extension, it has a few records under its belt. It is the first spacecraft to orbit two different celestial bodies, and the first to orbit any object in the main asteroid belt between Mars and Jupiter. It is also a record breaker for electric propulsion: having changed its velocity by over 25,700 mph, it has achieved 2.7 times the velocity change of the previous record holder among electrically propelled spacecraft. That is comparable to the speed provided by the Delta 2 launch vehicle that got it to space in the first place.

The Dawn spacecraft launching on a Delta 2 rocket from Cape Canaveral Air Force Station SLC 17 on Sept 27th, 2007. Credit: NASA/Tony Gray & Robert Murra

The Dawn mission was designed to study two large bodies in the main asteroid belt, to get a deeper insight into the formation of the solar system. It also had the added benefit of testing the ion drive in deep space for much longer than any previous spacecraft. Ceres and Vesta are the two most massive bodies in the belt, and are also very useful protoplanets from a scientific standpoint: Ceres is an icy, cold dwarf planet whereas Vesta is a rocky, dry asteroid. Understanding these bodies can bridge our understanding of how the rocky planets and the icy bodies of the solar system formed, and could also show how some rocky planets can hold water and ice. In 2006 the International Astronomical Union (IAU) changed the definition of what a planet is and introduced the term "dwarf planet". This is the change that downgraded Pluto from its planet status, although that has been argued to be wrong by Dr. Phil Metzger in a recent paper. Ceres is classified as a dwarf planet, and as Dawn arrived at Ceres a few months before New Horizons reached Pluto, Dawn was the first spacecraft to study a dwarf planet.

Dawn prior to encapsulation at its launch pad on July 1, 2007. Credit: NASA/Amanda Diller

The ion engines are so efficient that without them a trip to just Vesta would have needed 10 times more propellant, a much larger spacecraft, and therefore a much larger launch vehicle (making it much more expensive). The ion propulsion system it uses was first proven on Deep Space 1, along with 11 other technologies. Dawn has three 30 cm diameter (12 inch) ion thrust units. They can move in two axes to allow for migration of the center of mass as the mission progresses, and the attitude control system can also use the movable thrusters to control the spacecraft's attitude. Only two of the thrusters are needed to complete the mission, the third being a spare, but all three have been used at some point, one at a time. As of September 7th, 2018 the spacecraft had spent 5.9 years with its ion thrusters on, which is about 54% of its total time in space. The thrust to its first orbit took 979 days, and over the entire mission more than 2,000 days have been spent thrusting. Deep Space 1's engine, in contrast, ran for 678 days before its fuel ran out.

An artist’s impression of Dawn with its ion thrusters on. Credit: NASA

The thrusters work by using electric fields to accelerate xenon ions to speeds 7-10 times those achieved by chemical engines. The power level and the fuel feed can be adjusted to act like a throttle. The thruster is very thrifty with its fuel, using a mere 3.25 milligrams of xenon per second, roughly 280g per day, at maximum thrust. The spacecraft carried 425 kg (937 pounds) of xenon propellant at launch. Xenon is a great propellant because it is chemically inert and easily stored in compact form, and its atoms are heavy, so they provide more thrust than other comparable candidate propellants. At launch the xenon was stored at 1.5 times the density of water. At full throttle the ion engines produce a thrust of 91 mN, roughly the force needed to hold up a small sheet of paper. Over the course of years these minute forces add up to produce very large speeds. The electrical power is produced by two 8.3 m (27 ft) x 2.3 m (7.7 ft) solar arrays. Each 18-square-meter array is covered in 5,740 individual photovoltaic cells that can convert 28% of the sun's energy into electricity; on Earth these panels would produce 10 kW of power. Each panel is on a gimbal so it can be turned at any time to face the sun. The spacecraft uses a nickel-hydrogen battery to store power for the dark parts of the mission.
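
To get a feel for how such a tiny force adds up, here is a quick back-of-the-envelope calculation of my own, using the figures above and an assumed spacecraft mass (Dawn's launch mass was around 1,200 kg; the real figure falls as the xenon is spent):

```python
thrust_n = 0.091   # 91 mN at full throttle
mass_kg = 1_200    # assumption; shrinks as the 425 kg of xenon is used up

accel = thrust_n / mass_kg          # ~7.6e-5 m/s^2
delta_v_per_day = accel * 86_400    # ~6.6 m/s gained per day of thrusting
print(26.8 / delta_v_per_day)       # ~4 days to go 0-60 mph (26.8 m/s)
```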

The Dawn mission patch. During its nearly decade-long mission, Dawn will study the asteroid Vesta and the dwarf planet Ceres. Credit: NASA.

Vesta was discovered on March 29th, 1807 by the astronomer Heinrich Wilhelm Olbers, and is named after the Roman virgin goddess of home and hearth. The Dawn mission uncovered many unique surface features of the protoplanet, which has about twice the area of California, that have intrigued scientists. Two colossal impact craters were found in the southern hemisphere: the 500 km (310 mile) wide Rheasilvia basin, and the older 400 km (250 mile) wide Veneneia crater. The combined outline of these craters was apparent even to the Hubble telescope. Dawn showed that the Rheasilvia crater's width is 95% of the width of Vesta (it's not perfectly spherical) and that it is roughly 19 km (12 miles) deep. The central peak of the crater rises 19-25 km (12-16 miles) high and is more than 160 km (100 miles) wide, so it competes with Mars' Olympus Mons as the largest mountain in the solar system. The debris propelled away from Vesta during the impacts made up 1% of its mass, and is now on its own journey through the solar system. These fragments are known as vestoids, ranging from sand and gravel all the way up to boulders and smaller asteroids. About 6% of all meteorites that land on Earth are a result of this impact.


The brave new world of 4 Vesta, courtesy of NASA’s Dawn spacecraft. Credit: NASA/JPL-Caltech/UCAL/MPS/DLR/IDA

Dawn mapped Vesta's geology, composition, cratering record and more during its orbit. It also managed to determine the inner structure by measuring the gravitational field. The measurements were consistent with the presence of an iron core around 225 km (140 miles) across, in agreement with the size predicted by howardite-eucrite-diogenite (HED) based differentiation models. The Dawn mission confirmed that Vesta is the parent body of the HED meteorites by matching its surface to lab based measurements of them; these experiments measured the elemental composition of Vesta's surface and its specific mineralogy. The results confirm that Vesta experienced pervasive, maybe even global, melting, implying that differentiation may be a common history for large planetesimals that condensed before short-lived heat-producing radioactive elements decayed away. Pitted terrains and gullies were found in several young craters, which could be interpreted as evidence of volatile releases and transient water flow. Vesta's composition is volatile-depleted, so these hydrated materials are likely exogenic (delivered from elsewhere rather than formed in place).

A colour coded topographic map from the Dawn mission of the giant asteroid Vesta. Credit: NASA/JPL

The first object ever discovered in the main asteroid belt was Ceres. Named after the Roman goddess of corn and harvest, it was discovered by the Italian astronomer Father Giuseppe Piazzi in 1801. Initially classified as a planet, it was later reclassified as an asteroid as more objects were found in the same region. In recognition of its planet-like properties (being very spherical), it was designated a dwarf planet in 2006, along with Pluto and Eris. Observed by the Hubble telescope between 2003 and 2004, it was shown to be nearly spherical and approximately 940 km (585 miles) wide, and it makes up 35% of the mass of the main asteroid belt. Before Dawn there were already plenty of signs of water on Ceres. First, its low density indicates that it is 25% ice by mass, which makes it the most water rich body in the inner solar system after Earth (in absolute amount of water). Second, observations with Herschel in 2012 and 2013 found evidence of water vapor, probably produced by ice near the surface sublimating (turning straight from solid to gas).

Dwarf planet Ceres is shown in these false-color renderings, which highlight differences in surface materials. Credit: NASA/JPL-CalTech/UCLA/MPS/DLR/IDA

Having acquired all the data it needed by the middle of 2016, Dawn measured Ceres' global shape, mean density, surface morphology, mineralogy, elemental composition, regional gravity and topography at unprecedented resolutions. The imaging showed a heavily cratered surface with bright features. Often referred to as "bright spots", they are deposits of carbonates and other salts. Multiple measurements showed an abundance of ice at higher latitudes. However, the retention of craters up to 275 km (170 miles) in diameter argues for a strong crust, with lots of hydrated salts, rocks and clathrates (molecules trapped in a cage of water molecules). Gravity and topography data also indicated that Ceres' internal density increases with depth, which is evidence for internal differentiation resulting from the separation of dense rock from the low density water-rich phases early in Ceres' history. The rock settled to form an inner mantle overlain by a water-rich crust. This internal differentiation is typical of small planets like Ceres and Vesta, and sets them apart from asteroids.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


Considerations When Making a Current Shunt Sensor

For battery powered projects, current consumption is a really important consideration when designing the circuitry. While designing my final year project I spent a huge amount of time researching how to put together a simple current sensor. Considering most applications for me are DC, fairly low current and low voltage, the most obvious design is a current shunt. The basic idea of a current shunt is that you put a very low value resistor between the circuit you want to measure and ground, and measure the voltage across it. When one side of the shunt resistor is connected to ground it is a low-side sensor; there are also high-side versions, but they are more complex. As the resistor has a small resistance, there will be only a low voltage drop (usually millivolts) across it, meaning it shouldn't affect the load circuitry. The voltage is also proportional to the current running through it, meaning if you measure it and do the right maths you get a consistent and reliable current reading. This post is about how to get that tiny voltage into 1's and 0's, while thinking through the design considerations that make it accurate and reliable in the environments you care about.

Final Year Project
My final year project needed current sensors on the motors as well as monitoring the drain on the battery.

The first thing that needs to be decided is the shunt resistor itself. A shunt resistor is basically a low value resistor with a very tight, known tolerance, usually with a fairly high power rating. It can be used in AC and DC circuitry, the concept being that as a current flows through it, a voltage develops across it. The voltage can then be measured and, using a simple calculation based on Ohm's law, converted into a value for current. The value of the resistor depends on what it is measuring and what is measuring it. Start with what is measuring it. If you are like me, it is likely to be read by an ADC, probably on a 5V or 3V3 microcontroller. The voltage across the resistor is going to be amplified between 10 and 100 times (we will get to why in a moment), so pick a maximum voltage within that range. I tend to go with a 100mV maximum voltage drop, which for a 5V ADC would require an amplification of 50. Then take the maximum current you want to be able to measure and use Ohm's law to work out the resistance you need. For example, if I wanted to measure 1A, the resistor would be 100mV/1A = 100 mΩ. Now we know the resistor value, use the power equation to work out the power rating we need. For this example it would be P = IV = 1 x 0.1 = 100mW. This is the minimum power rating you need; I personally would use a 250mW or even a 500mW part just to keep the temperature of the circuit down. The sums are worked through in the sketch below.
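
Here is that sizing calculation as a small script, using the same example numbers as the text (swap in your own current and voltage targets):

```python
i_max = 1.0     # maximum current to measure (A)
v_max = 0.100   # chosen full-scale drop across the shunt (V)

r_shunt = v_max / i_max   # Ohm's law: R = V/I -> 0.1 ohm (100 mOhm)
p_max = i_max * v_max     # power equation: P = I*V -> 0.1 W at full current

# Derate generously; a 250-500 mW part stays cooler, and a cooler resistor
# drifts less, which keeps the reading accurate.
print(f"{r_shunt} ohm shunt, dissipating up to {p_max} W")
```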

The simple equation to work out what size shunt resistor to use. Credit: Texas Instruments

Now we have a voltage that will be somewhere between 0 and 100mV with reference to ground, and we want it scaled up to 0 to 5V. To do this we use an operational amplifier. There are plenty out there, most people have their favourites, and I'm not here to convince you otherwise; I tend to use an op amp that I am already using somewhere else in the circuit to make life easier. There are a few things you do need from an op amp in this circuit though: it needs to be rail-to-rail, and it needs a low input offset voltage. Offset voltage is a small spurious voltage difference between the op amp's inputs, and even though it is tiny it can have a big effect, because we are amplifying small voltages and any noise or offset gets amplified too. The op amp is used in a simple non-inverting configuration. The equations needed to design this are in most first year textbooks and there are plenty of calculators online; I have set a gain of 50 in my calculation, which is in the fairly common range. The output of the amplifier can then go straight into an ADC to be measured, as sketched below.
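
A sketch of the gain sums, plus the maths that turns an ADC reading back into amps. The resistor values are examples only; any pair with Rf/Rg = 49 gives the gain of 50.

```python
rf, rg = 49_000, 1_000
gain = 1 + rf / rg   # non-inverting amplifier: G = 1 + Rf/Rg = 50

def counts_to_amps(counts, vref=5.0, bits=12, r_shunt=0.1):
    # Undo the signal chain: ADC counts -> volts -> shunt volts -> amps.
    v_adc = counts / (2**bits - 1) * vref
    return v_adc / gain / r_shunt

print(counts_to_amps(4095))  # full scale -> 1.0 A
print(counts_to_amps(2048))  # mid scale  -> ~0.5 A
```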

The basic layout of a current shunt sensor showing where the shunt resistors and gain resistors go in the circuit. Credit: Texas Instruments 
The first version of my current sense test circuit, using an OP170 made by TI.

Now let's look at a few places where errors can come into a design like this. There are two types of error that occur in a circuit like this: gain error and offset error. A gain error is one where the output gets further away from the ideal output as the current gets higher. An offset error is one that adds the same amount of error whatever the input, just like an offset. The only common source of offset error in a circuit like this is the offset voltage of the op amp discussed previously, solved with a better choice of amplifier. Gain errors are usually due to a difference in resistance from the ideal. Many things can cause this. One is the tolerance of the resistor used; we want a precision resistor of 1% tolerance or better. Another is temperature change in the resistor itself: it may be next to a large MOSFET or other hot component, or may have too low a power rating, making it heat up; either way, a change in temperature means a change in resistance. Layout can also be an issue: if tracks are too thin or too long they add extra unwanted resistance.

Great graphs showing the difference between gain and offset error. Credit: Texas Instruments.

If you want to add a bit of fanciness to the project, or really need to measure down to very low currents, you need to tackle the zero-current error. The problem is that an op amp, even a rail-to-rail one, never quite reaches the power rails; even the best can only get within 100mV or so of the rails, which is known as saturation. Solving this involves moving the negative supply slightly so the saturation point sits below ground. If you have a negative voltage rail you can use that, but home projects tend to be single supply, so we need another power source. This can be provided by a voltage inverter (a type of charge pump). Usually needing only an external capacitor or two, they are cheap and easy to integrate into a project. I used an LTC1983, which creates a -5V rail, but there are plenty of others, such as the LM7705. Research what fits your circuit and cost point, and just attach the negative output to the negative supply rail of the op amp.

A great graph showing how the zero-current error occurs, and what it would look like if you tested it. Credit: Texas Instruments.

Most issues with error can be fixed during the hardware design phase. You can pick better op amps, such as ones designed to combat offset voltage; some amplifiers have internal calibration procedures, and some, such as chopper-stabilized amplifiers, are specifically designed to correct these problems. You can also use a potentiometer instead of a fixed resistor, but pots are more susceptible to temperature and can be knocked out of adjustment. Another way is to fix the issues in software by creating a calibration procedure. Using a calibrated precision current source and a multimeter, measure the reading of the ADC and compare it to the reading from the instruments. You should get offset and gain values that can then be used to correct the sensor's readings, as in the sketch below.
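
A minimal version of that software calibration, assuming the errors follow the linear model described above (measured = gain x true + offset). The bench readings here are made-up example numbers:

```python
def calibrate(true_1, meas_1, true_2, meas_2):
    # Fit measured = g*true + b through two known points.
    g = (meas_2 - meas_1) / (true_2 - true_1)
    b = meas_1 - g * true_1
    return g, b

def correct(measured, g, b):
    # Invert the error model to recover the true current.
    return (measured - b) / g

# Precision source supplied 0.100 A and 0.900 A; our sensor read
# 0.104 A and 0.927 A respectively.
g, b = calibrate(0.100, 0.104, 0.900, 0.927)
print(correct(0.104, g, b), correct(0.927, g, b))  # ~0.100, ~0.900
```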

A simple set up that I used to calibrate an early sensor, with a big power resistor as the load and a variable power supply to change the current. The readings were noted down to put into the calibration.

I would suggest trying out a few of these sensors in future projects; they don't cost much, and can be a valuable addition to a design. Especially for power sensitive devices, or smart sensors, this can be a better solution than an off-the-shelf or breakout board solution. If you want to hear more about my current sensor designs, and how the testing and calibration went, then comment or tweet at me. I already have some documentation that I may release at some point.


Getting Ugly, Dead Bugs, and Manhattan Style

If you are anything like me, you love to build small circuits. I like to try to get my head around how things work by building them in front of me. This usually takes the form of breadboarding, but sometimes that doesn't cut it and soldering is needed. Veroboard tends to be my go-to for building a simple circuit as something a bit more permanent, but it doesn't always lend itself to certain designs. Take a design with lots of grounding points, like an RF circuit: it can be difficult to have ground strips everywhere, and the extra capacitance can mess with those high frequency signals. Designs with lots of separate signal traces running round the board can also be a real pain, with lots of slicing of traces, which tends to lead to mistakes. With my constant desire for order, straight lines and pretty layouts, this gets annoying quickly. Recently I have found a few new and simpler ways to throw together simple circuitry. For any budding electronic engineer, they are good skills to add to an arsenal.

A hybrid microphone amplifier and level shifter I prototyped on veroboard for a recent project; many mistakes were made.

Ugly Circuits

As the name suggests, ugly circuits are not always the prettiest of designs. There are a few different definitions of what makes an ugly circuit, but my favourite is any circuit where the components are not completely mechanically connected to the substrate, the substrate usually (but not always) being a copper clad board. This method can be tricky to master, as it is literally a balancing act. The preferred approach I see is a single copper clad board used as a giant unobstructed ground plane. Two-wire passives (standard resistors and capacitors) are usually the easiest to start with: solder one side to ground, then solder the other side to another component in the air. This means any point that is not grounded is usually floating physically in the air (and depending on how good you are, it could have a floating voltage too). This can be a big benefit for RF circuitry or circuits that need good solid grounding, as the unobstructed copper clad board gives anything connected to it a great connection to ground. It is fairly easy to build simple passive filters this way, but it gets very fiddly and fragile if you aren't careful.

A very "haywire" circuit constructed in the ugly style by Rick Anderson, KE3IJ, in 2006. An experimental stage of his AGC-80 regen receiver.
Not sure of the origin of this one; it is more chaotic than pretty, but it is definitely in the ugly style.

Technically, ugly circuits don't have to have a substrate at all, although one makes life easier. There are plenty of examples out there of ugly circuitry that just connects pins to pins via small wires. As said before, it can be very fiddly to make a circuit like this, but it is much cheaper for one-offs as there is no need for copper clad board, and after plenty of practice one can get very good at it. The wires connecting the parts together become part of the structure of the unit, and if designed correctly can be very strong. The construction method can be useful in certain circumstances, and as long as you have the components, a circuit can be built easily with just a soldering iron and solder. Although there are some amazing looking circuits made with this method, the majority do earn the name of an ugly circuit. If you can make a pretty one I would love to see it.

Nathanxl at the Electro-music forums creates amazing, almost artistic music projects using the ugly style, but they look incredibly hard to make.
An Arduino Uno made without any substrate, just wires and components. Made by Kimio Kosaka, it is not exactly ugly, but it uses the ugly style of construction.

Dead Bugs

No, this method does not actually use dead insects as a manufacturing material, but it can look like it. The idea is to take an IC, traditionally in a DIP package, and place it upside down on the substrate, usually (but not always) glued down, with the pins facing upwards so it has the look of a dead bug. The pins can be bent down to attach to the substrate if required, but they tend to be left facing up. Borrowing many methods from ugly construction, the pins are usually directly connected to passives or to wires running to other chips, so the mechanical connections are usually in the air. The benefit of this method is that you don't have to waste time drilling holes in the substrate, and you can integrate ICs into an ugly design fairly easily. If trying this method, just be wary that all the pins on the chip will be the wrong way round, as it is flipped when placed upside down. I recommend drawing your own pinout diagram to work from to make life easier.

In this mix of dead bug and ugly construction by JCHaywire, the chip is flipped over and the pins splayed out, with all the connections floating in the air.

Although not really dead bugs, the concept can be seen in many modifications of PCBs. It is easy to order the wrong package or get sections of pins wrong when designing and ordering PCBs, especially if you have manually made the part, so it is not uncommon to find upside down ICs on prototype PCBs, or even sometimes on short runs. That being said, anything smaller than a DIP or SOIC package gets very fiddly, is difficult to hand solder, and will need some extra magnification. Don't be deterred though; there are examples of even QFN and BGA devices being hand soldered in dead bug form, with very thin gauge jumper wires. With plenty of practice and spares, it can be a useful way of saving money without ordering a new run of PCBs.

A bodge on a PCB before the real chip arrives: a 6650 being used dead bug style to get the circuit working, by Dave Curran.

Manhattan Style

This one is my favourite style of circuit design on the cheap and quick, and if done right it can be very pretty and efficient. The big issue with the ugly method is that it is difficult to create, often difficult to follow, and horrible to document. Manhattan style is an upgrade, using cut out sections of copper board as small islands on the ground plane. This means there are no connections floating in the air: every point of a component is mechanically connected to copper clad in some way, even if only a small pad. This generally leads to a much more nicely laid out board that can easily be followed and replicated. It also allows for the use of SMD components, which is possible with ugly construction but very difficult. The small pads don't have to be separate pieces; they can simply be outlines cut into the same backplane, making the process cheaper, though get it wrong and it can get messy. I much prefer Manhattan as a quick construction method, due to its neat look and ease of use. Another reason for the name is that the capacitors and resistors tend to line up perpendicular to the substrate, looking a bit like the tower blocks and skyscrapers of Manhattan itself.

A great example of Manhattan style soldering by Dave Richards. Solid copper substrate with QRPme pads to attach components together. 
Another impressive circuit in the Manhattan style by Dave Richards, this one is a high performance regen receiver, with a full write up on his blog.
An example by VE7SL, Steve, of making his own pads for his amateur radio transmitters and receivers, using an eBay punch to make the pads.

One step on from Manhattan style, and the final step before fully fledged PCBs, is a little known style called Pittsburgh (much like the steak). I have also seen it called muppet style, and I am sure there are many other names for it. It is very similar to an actual PCB: a layout is etched or carved into the board, with traces and pads. The difference from a PCB is that there are no holes anywhere to be seen, meaning you get the benefit of etching a nice looking layout at home without needing the expensive routers and drills that quickly break. Because the pads are the main mechanical connection, they are made much bigger, allowing more solder over a larger surface area. These pads would be overkill for a through-hole project, but they also allow for easy use of SMD components; you sometimes see specialist Pittsburgh-like pads used for SMD chips on a Manhattan style board. It is a matter of taste and confidence. These methods are obviously not suitable for all prototypes, but could come in useful for your next project.

A Pittsburgh style PCB at one point sold by Joe Porter; unsure if they are still sold.

A good source of the small pads used in Manhattan style can be found here. They are reasonably priced, and if you are doing lots of prototyping you can even buy the tools to make them yourself!

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


Roundup: Parker Solar Probe Launch

An awesome image of the Delta IV Heavy launching from pad 37B. Credit: Aerojet Rocketdyne.

At 07:31 UTC on August 12th 2018, the 10th ever Delta IV Heavy launched the long awaited Parker Solar Probe from Cape Canaveral Space Launch Complex 37B, sending it towards a heliocentric orbit. The mission aims to "touch the Sun", getting closer than any spacecraft before it: as close as 3.9 million miles from the Sun, roughly 4% of the distance between the Earth and the Sun (roughly 93 million miles).
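As a quick sanity check on that 4% figure, and to put the distance in solar terms, here is a little back-of-envelope Python; the solar radius constant is my own addition, not a mission figure:

```python
# Sanity check the "roughly 4%" figure using the numbers quoted above.
closest_mi = 3.9e6       # closest approach to the Sun, miles
earth_sun_mi = 93e6      # rough average Earth-Sun distance, miles

print(f"fraction of Earth-Sun distance ~ {closest_mi / earth_sun_mi:.1%}")  # ~4.2%

# The same distance expressed in solar radii (Sun radius ~696,000 km):
closest_km = closest_mi * 1.609344   # miles to km
print(f"~{closest_km / 696_000:.1f} solar radii from the Sun's centre")  # ~9
```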

A great timelapse of the Delta IV Heavy launching towards the Sun. Credit: Marcus Cote.

The Parker Solar Probe is named after Dr Eugene Parker, who predicted the existence of the solar wind in 1958. He was present at the Kennedy Space Centre to see the 685 kg spacecraft lift off. Over its 7 year mission it will make 24 elliptical orbits of the Sun, using 7 flybys of Venus to drop the low point of the orbit, bringing its perihelion closer to the Sun than any other man-made object in heliocentric orbit. It will enter the Sun's "atmosphere", passing through the corona, the outermost layer. Protected by a 4.5 inch sunshield, it can withstand temperatures of 2,500°F (1,377°C). The aim is to understand how the Sun creates and evolves solar flares and the solar wind, and how the highest energy particles that pass the Earth are formed. It is hoped that it will revolutionise our understanding of the Sun, helping us develop and create technology here on Earth.
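To get a feel for how short those 24 orbits are, Kepler's third law gives a ballpark period. The orbital elements below are my own rough assumptions (perihelion from the 3.9 million mile figure, aphelion taken near Venus's orbit), not published mission values:

```python
# Ballpark the probe's final orbital period with Kepler's third law.
# Assumed elements: perihelion ~0.046 AU (3.9 million miles), aphelion
# near Venus's orbit at ~0.73 AU - rough guesses, not mission data.
perihelion_au = 0.046
aphelion_au = 0.73

a = (perihelion_au + aphelion_au) / 2   # semi-major axis, AU
period_years = a ** 1.5                 # T^2 = a^3 with T in years, a in AU
print(f"orbital period ~ {period_years * 365.25:.0f} days")  # ~88 days
```

That works out at roughly three months per orbit for the final, tightest orbits; the earlier orbits, before all the Venus flybys have done their work, are longer.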

The rocket has three RS-68A powered common booster cores, with the two outboard boosters cutting off at T+3 min 57 sec and the core following a minute and a half later at T+5 min 36 sec. The Delta's cryogenic second stage RL10B-2 engine began its first burn at T+5 min 55 sec and shut down at T+10 min 37 sec, placing the 3,044 kg stack into a 168 km x 183 km x 28.38 deg parking orbit. The second burn started at T+22 min 25 sec and ended at T+36 min 39 sec, accelerating the stack to a C3 of 59 km²/s², roughly 5,300 m/s beyond LEO speed. At this point the probe was in solar orbit. The Star 48BV third stage separated at T+37 min 9 sec and ignited at T+37 min 29 sec; the burn ended a minute and a half later at T+38 min 58 sec, accelerating it to roughly 8,750 m/s beyond LEO. The Parker Solar Probe separated four and a half minutes later. The orbits after this point become much more complicated, working towards the preferred Sun-touching orbit.
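Those C3 and speed figures can be roughly cross-checked, since C3 (characteristic energy) is just the square of the hyperbolic excess speed. A minimal sketch, assuming a ~180 km circular parking orbit for simplicity (the real one was 168 x 183 km):

```python
import math

MU_EARTH = 398_600.4   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0      # mean Earth radius, km

c3 = 59.0              # characteristic energy after burn two, km^2/s^2
r = R_EARTH + 180.0    # assumed parking orbit radius, km

v_inf = math.sqrt(c3)                           # hyperbolic excess speed
v_injection = math.sqrt(c3 + 2 * MU_EARTH / r)  # speed at injection altitude
v_leo = math.sqrt(MU_EARTH / r)                 # circular LEO speed

print(f"v_inf ~ {v_inf:.2f} km/s")                        # ~7.68 km/s
print(f"gain over LEO ~ {v_injection - v_leo:.2f} km/s")  # ~5.6 km/s
```

The ~5.6 km/s this gives is in the same ballpark as the quoted 5,300 m/s; the difference comes down to the real elliptical parking orbit and exactly where along it the speed is measured.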

Engineers at the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, work on NASA’s Parker Solar Probe spacecraft. Parker Solar Probe will be the first-ever mission to fly directly through the Sun’s atmosphere. Photo & Caption Credit: NASA / JHU-APL

Delta 380 was the first Cape Canaveral Delta to fly the upgraded "common avionics" flight control system. The rocket was shipped to the Cape over a year before launch and assembled in the SLC-37 Horizontal Integration Facility, then rolled out to the pad in April 2018, with wet dress rehearsals on June 2nd and 6th. The initial launch date was the day before, August 11th, but that attempt was scrubbed at T-1 min 55 sec. Some of the best images of these launches are now taken by amateurs, and I usually post a few of them, but this launch was different: most of those who placed their cameras just a few hundred feet from the rocket got their equipment back badly damaged.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


The Items Apollo 11 Left Behind on the Moon

Buzz Aldrin looks back at Tranquility Base just after deploying the Early Apollo Scientific Experiments Package (EASEP). Credit: NASA.

July 21st 1969. The time is 02:56 UTC, and Neil Armstrong is taking the first steps on the Moon; 20 minutes later Buzz Aldrin follows. The landing site looks clean, apart from the big lander that is their lift home. By the end of the two hour EVA on the lunar surface the site would be walked over, science experiments laid out, and a pile of rubbish left in a pit. It's a view you don't get to see in the images from Apollo 11: the astronauts left over 100 items on the lunar surface, some commemorative, but mostly items they didn't need for the return journey.

The plaque attached to the lunar lander, with a message from all mankind, just in case some other being finds it. It commemorates the first steps on the Moon. Credit: NASA.

Famously landing in the Sea of Tranquillity, the Eagle lander has a number of official commemorative items attached to it. The main one is a plaque proclaiming "Here men from planet Earth first set foot upon the Moon. July 1969, A.D. We came in peace for all mankind." Under the words is a golden replica of an olive branch. Nearby is a small aluminium capsule containing a tiny silicon disc, etched in microscopic lettering with messages from four US presidents and seventy-three other heads of state; the wording can be found here. There are also a few unofficial items taken there by the astronauts: an Apollo 1 patch in memory of Roger Chaffee, Gus Grissom, and Ed White, who died in January 1967 in a fire inside the first Apollo capsule, and two military medals that belonged to Yuri Gagarin and Vladimir Komarov, both famous USSR cosmonauts. Leaving the medals showed the respect these men had for the Soviet cosmonauts, who had achieved so many firsts and went through the same trials and tests they did.

The patch for the famous Apollo 1 where Roger Chaffee, Gus Grissom, and Ed White tragically died in a fire. The patch was left on the Moon. Credit: NASA

On top of this they left the science experiments that they had used, such as the passive seismic experiment, which used meteorite impacts on the surface to map the internal structure of the Moon. They also placed a laser retroreflector so that scientists could measure the distance from the Earth to the Moon precisely. This retroreflector still works, and if you have access to a powerful enough laser you can measure it yourself. They also had to pick up lots of Moon rocks and Moon dust as part of the science mission, using sample scoops, scales and even a small hammer. Many of these specific tools were needed on the surface, but were discarded before the return journey.
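The ranging principle is simple enough to sketch: fire a laser pulse at the reflector, time the round trip, and halve it. A rough version with average figures (the real experiments resolve the distance to millimetres, and the measured time here is a made-up example):

```python
# Lunar laser ranging arithmetic: distance from round-trip light time.
C_KM_S = 299_792.458         # speed of light, km/s

avg_earth_moon_km = 384_400  # average Earth-Moon distance, km
round_trip_s = 2 * avg_earth_moon_km / C_KM_S
print(f"expected round trip ~ {round_trip_s:.2f} s")   # ~2.56 s

# Working backwards from a timed pulse (hypothetical measurement):
measured_s = 2.562
print(f"implied distance ~ {C_KM_S * measured_s / 2:,.0f} km")
```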

Map of Tranquillity Base, including the Toss Zone where all the rubbish was discarded. Credit: NASA

Overall they left roughly 106 random bits of rubbish at the landing site. This included tools like the hammers, chisels and brushes needed for sampling; astronaut EVA gear such as the overboots and life support systems; and actual rubbish like empty food bags, some armrests they wanted to dispose of, a TV camera, an insulation blanket, pins and plastic covers for items like the flag (and the flag itself), plus the urine, defecation and sickness bags, although there is no word on whether they were used. They threw all the items into an area behind the lander known as the "Toss Zone", basically just a rubbish pit.

Buzz Aldrin carrying science experiments to their deployment site slightly away from Tranquility Base. Credit: NASA

The astronauts left a surprisingly large amount of stuff on the Moon, but it does make sense: every kilogram discarded made room in the weight budget for the roughly 22 kg of Moon rocks and dust they wanted to bring back, so they just left it all there. There is a full list of the items on this webpage, and it's worth a look. Archived by the Lunar Legacy Project, the list counts over 106 items, and depending on how you count there can be over 116 items left by the Apollo astronauts.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.