How to Define a Kilogram

So I recently read a great Twitter thread by Max Fagin about the redefinition of the kilogram. It was on the news, and I saw a few TV segments about it, but I don’t think most people appreciate the enormity of changing this basic building block of science. I learnt because of this story that there is a whole branch of science dedicated to the study of measurement: metrology. Metrologists ask, and have answered, the question: what actually is a unit of measurement? A weird question, because we all probably have an idea in our head of what it means. If you are in the UK (or most of the world) then you would use metres to measure distance, litres to measure volume, and kilograms to measure weight. In some parts of the world such as the USA you may use gallons, feet, inches and pounds, but they all have to reference something. There are 2.2 pounds in a kilogram and 25.4 mm in an inch, but which one defines the other? Is it just a tangled mess of interconnecting reliance?

It turns out it is not. All official units used in science (even the imperial units used in America) are defined in relation to the SI (Système International) unit definitions. They were established and have been maintained by the Bureau International des Poids et Mesures (BIPM) in France. It all starts with seven base units, and from them every other unit of measurement in existence can be defined. They are:

  • Kilogram, kg (mass)
  • Metre, m (distance)
  • Second, s (time)
  • Kelvin, K (temperature)
  • Ampere, A (electric current)
  • Candela, cd (luminous intensity)
  • Mole, mol (quantity)

Every unit you have ever used is “officially” derived from these seven units. Ever used watts? One watt is defined as 1 kg*m^2/s^3. 5 volts is officially 5 kg*m^2/(s^3*A). What about a gallon? Officially it is 0.003785 m^3. One atmosphere of pressure would seem simple enough until you realise it is 101,325 kg/(m*s^2) of pressure.
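To make that bookkeeping concrete, here is a small Python sketch (the dictionary representation and `multiply` helper are my own illustration, not any standard library): a derived unit is nothing more than a set of exponents over the seven base units.

```python
# A unit is a dict mapping SI base symbols to exponents.
# 1 W = 1 kg*m^2/s^3, and 1 V = 1 kg*m^2/(s^3*A):
watt = {"kg": 1, "m": 2, "s": -3}
volt = {"kg": 1, "m": 2, "s": -3, "A": -1}

def multiply(u, v):
    """Combine two units by adding their base-unit exponents."""
    out = dict(u)
    for base, exp in v.items():
        out[base] = out.get(base, 0) + exp
    return {b: e for b, e in out.items() if e != 0}

# Power = voltage * current, so volts times amperes must give watts:
assert multiply(volt, {"A": 1}) == watt
```

The same exponent arithmetic chains any derived unit back to the base seven, which is exactly what “traceable” means.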

The difference between the SI base units and the SI units derived from them. Image from @usnistgov

There is no standard foot, pound or gallon sitting in a vault in Washington or London, although there was at one point. All units are defined through (also known as traceable to) the SI base units. But how do we define the seven base units themselves? Well, historically they were all based off of a thing, or artifact. For instance, in the 1940s one second was defined as 1/86,400 of the time it takes the Earth to rotate once. Then in the 1950s it was redefined as 1/31,556,925.9747 of the time it takes the Earth to orbit the sun. It took until the 1960s for technology to advance far enough to redefine the second in terms of something that isn’t an artifact (in this case the Earth). The time it takes the Earth to orbit the sun changes over time; it may change slowly, but it does change. And if the thing you are defining from changes, the unit changes with it. If the Earth takes slightly longer to orbit the sun, then the second becomes slightly longer. Although most of us wouldn’t care, some very important science is based off of this very specific value. Now the second is based off of a fundamental property of the universe, something that doesn’t ever change: the time it takes for 9,192,631,770 oscillations of the microwave radiation emitted by a caesium-133 atom as it flips between two energy states. As current science believes these oscillations are a constant property of the universe, the definition is now good forever.
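Because the definition is just a count, converting between caesium cycles and seconds is a one-liner. A tiny sketch (the function name is my own):

```python
# One second is defined as exactly this many caesium-133 oscillations:
CAESIUM_HZ = 9_192_631_770

def seconds_from_cycles(cycles):
    """Elapsed time implied by a count of caesium oscillations."""
    return cycles / CAESIUM_HZ

# Counting the full 9,192,631,770 cycles takes exactly one second,
# and a minute is sixty times that many:
assert seconds_from_cycles(9_192_631_770) == 1.0
assert seconds_from_cycles(60 * 9_192_631_770) == 60.0
```

An atomic clock is, at heart, just this counter attached to very good microwave hardware.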

The new caesium fountain atomic clock NPL-CsF3 becomes operational, allowing continuous operation of a primary clock at NPL. Work continues on the next generation of optical atomic clocks at NPL, which should achieve accuracies equivalent to losing or gaining one second in the age of the universe. Credit: NPL

The metre had a similarly long journey. (British and BIPM spelling is metre; American is meter.) First it was defined as 1/10,000,000 of the distance from the equator to the north pole, but then they realised that the Earth’s surface is not consistent, and is definitely not constant; the Earth’s shape will change over time. So they defined it as the distance between two marks on a platinum-iridium bar in France, literally an artifact. The bar was 90% platinum and 10% iridium, and was measured at the melting point of ice. This original bar is still kept under the conditions specified at its creation in 1889. In the 1980s, timing equipment became precise enough to move away from this artifact-based definition. The constant of the universe used is the speed of light in a vacuum, 299,792,458 m/s (roughly 300 million). Basically, a metre is defined as the distance light travels in 1/299,792,458 of a second. As the speed of light is constant, it can be measured literally anywhere in space and time and it would be exactly the same, so it makes a good definition. It also relies on time being measured using a constant of the universe too.
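Because the speed of light is now exact by definition, measuring length reduces to measuring time. A small sketch of the idea (the range-finder numbers are made up for illustration):

```python
c = 299_792_458        # m/s, exact by definition since 1983

# One metre is the distance light covers in 1/299,792,458 of a second:
one_metre = c * (1 / c)

# The same trick in practice, for an idealised laser range-finder:
# a pulse that comes back after 2 microseconds hit something ~300 m away.
round_trip = 2e-6                  # seconds, there and back
distance = c * round_trip / 2      # metres, one-way
```

This is also why the metre could not be redefined this way until clocks got good enough: the timing has to be far more precise than the length you want to realise.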

Closeup of National Prototype Metre Bar No. 27, made in 1889 by the International Bureau of Weights and Measures (BIPM) and given to the United States, which served as the standard for defining all units of length in the US from 1893 to 1960

All of the other base SI units have been redefined in some way based on constants of the universe, some more complex than others. All, that is, but the kilogram, the stubborn one that has up until this point eluded redefinition. For 129 years the kilogram has been defined as the mass of an artifact stored in a vault in France, called the International Prototype Kilogram (IPK). Made like the metre bar, it is 90% platinum and 10% iridium. Unlike time and distance, there is no easy way to precisely measure mass, so every attempt up until this point to redefine the kilogram has not matched the precision of just taking the IPK out of the vault and giving it a good measure every once in a while. This is somewhat frustrating to metrologists, as over the past 100 years there is evidence that the IPK has actually changed in mass, drifting by about 50 micrograms compared to its replicas. This leads to an odd paradox, because the IPK is the one thing in the universe that cannot not be a kilogram: it always weighs exactly 1 kg by definition. So if some technician or scientist dropped it or chipped it, the officially defined mass of everything else, right down to atoms, would literally change. This is equally annoying because so many other measurements are based on the kilogram. The newton, for instance, is defined as the force needed to accelerate one kilogram by 1 m/s^2.

Mass drift over time of national prototypes K21–K40, plus two of the IPK’s sister copies: K32 and K8(41). All mass changes are relative to the IPK.

The new way to define the kilogram is very complex, and I honestly don’t understand all the details; there are better sites out there for those. Essentially it is based on the Planck constant, a fundamental property of the universe that relates the energy of a photon to its frequency. Because you can know the energy of a photon, you can know its equivalent mass, and this is directly related to the photon’s frequency. This means the kilogram can be defined in terms of the metre, the second and a few constants of the universe. This has been hypothesized for some time, but until now the technology has not been good enough to measure the Planck constant to a sufficient accuracy. The best way we currently have to do this measurement is something called a Kibble balance, previously known as a watt balance. It uses the electric power needed to oppose the force of the kilogram. As current and electrical potential are already defined by constants of the universe, the Kibble balance, with extensive calibration, can define 1 kilogram in terms of current. When I say extensive I do mean it: there needs to be an extremely precise measurement of gravity at that exact point. This extra complexity means most countries are unlikely to invest in such devices, for the moment. But this is an important time: all measurements we currently know of are now based on actual constants of the universe. Gone are the days where we have to take something out of a vault to calibrate our measuring devices.
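The balance condition at the heart of the Kibble balance is simple even if the engineering is not: in its moving phase, the mechanical power of the mass, m·g·v, equals the electrical power U·I in the coil. A sketch with made-up numbers (these are illustrative, not real calibration values):

```python
# Kibble balance in one equation: m * g * v = U * I, so m = U*I / (g*v).
# All values below are illustrative, not real calibration data.
g = 9.80665      # m/s^2, local gravity, measured extremely precisely
v = 0.002        # m/s, velocity of the coil in the moving phase
U = 1.0          # V, voltage induced across the coil
I = 0.0196133    # A, current needed to balance the weight

m = U * I / (g * v)   # kg; U and I trace back to the Planck constant
```

The reason this links mass to the Planck constant is that U and I are themselves realised through quantum electrical standards, which is also why the local value of g has to be known so precisely: any error in g lands directly in m.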

Thank You for reading, take a look at my other posts if you are interested in space, electronics, or any other sort of history. Alternatively follow me on Twitter to get updates on projects I am currently working on.

Follow @TheIndieG
Tweet to @TheIndieG

Why The Moon Could Fuel Future Space Missions

A recent report funded and published by the United Launch Alliance outlines the potential viability of mining the Moon for rocket fuel. At over 170 pages it is quite a read, but Philip Metzger, one of the authors, wrote a good summary on his website. I thought I would try and make a similar summary here with slightly more explanation, and show where it fits into NASA’s future plans of getting back to the Moon and going onward to Mars and beyond. It all revolves around water, or more specifically ice. For many years we thought the Moon was a barren, rocky, cold desert. The samples that the Apollo astronauts brought back from the Moon, and the Soviet Luna samples, implied that there is no water in the rocks or regolith; the trace amounts of water found in the samples were assumed to be contamination. Although, in 2008, a study of the Apollo rock samples did show some water molecules trapped in volcanic glass beads. Also, in 1978, Soviet scientists published a paper claiming the 1976 Luna 24 probe samples contained 0.1% water by mass. Plus the Apollo 14 ALSEP Suprathermal Ion Detector Experiment (SIDE) detected the first direct evidence of water vapour ions on March 7th 1971. None of these discoveries were taken as conclusive proof of water on the Moon at the time.

An image of the SIDE experiment from Apollo 14. It measured the energies and masses of positively charged ions near the surface of the Moon and also studied the interaction between the solar wind and the Moon as the Moon moved through the Earth’s magnetic field. Credit: NASA.

On September 24th 2009 it was reported that the Moon Mineralogy Mapper (M3) spectrometer on India’s Chandrayaan-1 spacecraft had detected water ice on the Moon. The map of the features shows that there is more at cooler, higher latitudes, and in some deep craters. Basically, the parts of the Moon that see the least light, like the poles and far into craters near the poles, have managed to keep water on and below the surface. This water is in a number of different states: some locked up in minerals, some in ice form, and some in OH form, not technically water but close. This has led to a number of new possibilities for inhabiting the Moon and using its resources, and it is why the United Launch Alliance funded a paper on the possible use of mining this water as a future fuel source. By some estimates there could be as much as 10 billion tonnes of water on the Moon. The water could in theory be mined and, through electrolysis, turned into hydrogen and oxygen, the propellant combination that powered the upper stages of the famous Saturn V.
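The chemistry fixes the mass budget: splitting water (2 H2O → 2 H2 + O2) always yields hydrogen and oxygen in roughly a 1:8 ratio by mass. A quick sketch (the `electrolyse` function is my own illustration):

```python
# Molar masses in g/mol: water, hydrogen gas, oxygen gas.
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998

def electrolyse(water_kg):
    """Split water: return (hydrogen_kg, oxygen_kg) from 2 H2O -> 2 H2 + O2."""
    mol = water_kg * 1000 / M_H2O     # moles of water
    h2 = mol * M_H2 / 1000            # one H2 per H2O
    o2 = (mol / 2) * M_O2 / 1000      # one O2 per two H2O
    return h2, o2

# A tonne of lunar ice gives roughly 112 kg of fuel and 888 kg of oxidiser:
h2, o2 = electrolyse(1000.0)
```

Conveniently, hydrogen-oxygen engines burn oxygen-rich mixtures anyway, so the split that electrolysis hands you is close to what a rocket wants.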

Left side of the Moon Mineralogy Mapper that was located on the Chandrayaan-1 lunar orbiter. Credit: NASA

But you have to ask, why does it matter? The Moon is far away, and surely water is just useful for astronauts living on the Moon to survive. Well, the report makes the business case for using the water as fuel, in the form of hydrogen and oxygen. First we need to understand the commercial satellite world, and geostationary satellites. When such satellites were first sent up, the way to do it was with a multi-stage rocket, with the first stage getting to a Geostationary Transfer Orbit (GTO) and then the second (upper) stage being used to get into Geostationary Orbit (GEO). Recent years have seen the first stage hugely improved, and often made reusable, by companies like SpaceX and Blue Origin. The second stage hasn’t had the same improvements though. Traditionally the second stage was a normal, and very heavy, liquid rocket. This meant that it was very expensive to get things to GTO, much more so than LEO. The rocket was also thrown away afterwards, and as it is so far out it will take hundreds or even thousands of years to fall back and burn up in the atmosphere.

A diagram of the traditional way to boost communication satellites into orbit. Credit Dr Phil Metzger

Now we do have a better way to do it, sort of. I talked recently about the rise of electric thrusters: a lightweight, cheap and powerful solution when used over a long period of time. Over the span of years they can pick up speeds of thousands of miles per hour. That is also their biggest downside in this situation: slowly pushing the satellite towards its orbit, they can take up to a year to get it into position. That is a year it could be making the owner money. By some estimates that year of waiting on slow thrusters could cost $100 million in lost revenue. By all accounts, though, this is still cheaper than launching a large traditional rocket upper stage. The electric thrusters are amazingly light by comparison, which means you need a smaller first stage to get the satellite up to space in the first place. The key thing to remember about these geosynchronous satellites is that they already have a huge price tag; some can cost upwards of half a billion dollars to build and launch. Part of the reason is that they tend to be huge, in size and weight. Some have been as big as London double-decker buses and weigh 6 tonnes. The rockets then need to get them to one of the furthest and most time-consuming orbits, a costly exercise.
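The arithmetic behind those speeds is just a tiny acceleration applied for a very long time. A back-of-the-envelope sketch (thrust and satellite mass are illustrative round numbers, not any specific spacecraft):

```python
# delta-v = (thrust / mass) * time, ignoring gravity losses and the
# propellant mass burned along the way.
thrust = 0.2                 # N, a typical-scale electric thruster
mass = 6000.0                # kg, a large comms satellite
year = 365.25 * 24 * 3600    # seconds in a year

dv_ms = (thrust / mass) * year   # roughly 1,000 m/s gained in a year
dv_mph = dv_ms * 2.23694         # the same figure in mph
```

A force about the weight of a sheet of paper, left running for a year, adds up to over two thousand miles per hour, which is why the trade-off is entirely between propellant mass and time.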

A diagram of the current way to boost communication satellites into orbit. Credit Dr Phil Metzger

So why can this Moon mining idea help? Well, the Moon is in lots of countries’ space plans at the moment. China is currently sending lots of probes, and by some accounts looking to get humans there. The USA is building the SLS, which should be able to get humans to the Moon, and is also developing the LOP-G: a concept for an orbital station around the Moon, almost like a fuel stop for rockets going on to further parts of the solar system. The hydrogen and oxygen mined on the Moon could be sent up to this orbital station and transferred to the rockets that need it. This is where the geosynchronous satellites come in. Imagine if this fuel, dug up and processed by robots and sent up to an orbital station, could be brought back closer to Earth by a space tug. This space tug could meet up with the rocket carrying the satellite. The upper stage could have been launched with no fuel (the heaviest bit) and be fuelled by the space tug. It would allow for the speed of the old-style engines with the weight of the newer electric ones. As long as the price of this whole system is less than the $100 million the wait currently costs, it could be a viable option. All the while it would lay the groundwork for space agencies to have viable water sources for future exploration. It may be the future of space travel.

Atlas 5 lifting off from pad 41 at Cape Canaveral Air Force Base. If this idea takes off, these rockets could propel much larger payloads into much bigger orbits. Credit: @marcuscotephoto on Twitter

Thank You for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


Where Did the Ending of First Man Come From

For those out there who love space and the history behind it, of which I count myself one, Damien Chazelle and Ryan Gosling have created First Man. The film follows the life of Neil Armstrong in the run-up to the Apollo 11 landing, where he became the first man to step foot on the Moon. All in all it is a great film, with lots of historical details for those who know where to look. Beyond a few big plot points that Chazelle took minor liberties with, it gives a good account of the run-up to a huge moment for human engineering. The thing this post is focused on, though, is whether the ending was accurate: did Neil Armstrong actually throw his daughter’s bracelet into the crater?

A promotional still from the First Man film of Ryan Gosling as Neil Armstrong.

On the 20th of July 1969, Neil Armstrong and Buzz Aldrin spent 2 hours and 31 minutes exploring the lunar landscape, conducting experiments and collecting samples. All of it was scripted by NASA, practiced down to the last minute. There was a moment, though, where Neil took a short deviation from the plan, and that did actually happen. He wandered over to an area known as Little West Crater and took a moment there. It is not publicly known what happened at the edge of the crater: he may just have been reflecting, or, like in the movie, he may have thrown something into it. Either way it is unclear what actually happened, but some effort has been made to find out, mainly by James R. Hansen, the author of the official First Man biography.

The front cover of the book First Man. The official biography of Neil Armstrong, written by James R. Hansen.

Hansen spent four years researching the book about Armstrong, speaking to Neil himself and most of his family, including his ex-wife Janet, his sister June, and his children Eric and Mark. Throughout the interviews he developed a hunch that Neil may have left something on the Moon. This isn’t a crazy idea either, because the astronauts did leave sentimental items on the Moon. On that very mission Buzz Aldrin left an Apollo 1 mission patch to commemorate the astronauts lost in the fire. The 10th person on the Moon, Charlie Duke, left a photo of his family on the surface in 1972.

One of the photographs taken of the picture of Charlie Duke’s family left on the lunar surface. Part of the Apollo archived photos. Credit: NASA

The big question is whether he ever took the bracelet in the first place. If he did, he wouldn’t have just snuck it on; it would be in the manifest known as the personal property kit (PPK), and Neil had a copy of this. When probed by Hansen he claimed to have lost the document, but on his death in August 2012 all of his archives were donated to his alma mater, Purdue University, and the document was among them. The archives are under seal until 2020. When Hansen asked Neil’s sister June whether she thought he left something on the Moon for Karen, she said “Oh, I hope so”. Some may see the ending as a dreamt-up, Hollywood-ised version of the Moon landing. The decision not to show the planting of the flag upset many Americans, and led some to label the film un-American. For me, though, the scene where he steps foot upon the Moon is more important. That is the moment people remember, the bit that really counted. Plus the flag was included in the film, just not the planting of it.

A promotional still from the First Man movie, with Ryan Gosling as Neil Armstrong on the lunar surface with the sun visor down.

On a final note, I really liked some of the additions the film made. I loved the bit at the start where Chuck Yeager, who famously disliked Armstrong, grounded him. There were lots of titbits and facts added in just to show that they had done their research. There were some inconsistencies: his daughter actually died well before the exact X-15 flight that got him grounded. There was also a famous point where Armstrong had to eject from the flying bedstead, which got him in trouble. In the film he is seen talking and arguing afterwards, but in real life he had bitten his tongue and couldn’t speak for days. Also, after the Apollo 1 fire the administrator, James Webb, resigned, whereas the film doesn’t seem to change the character, presumably to keep things simple. These are not really massive plot problems though; they make little difference to the story and don’t change our view of him. The minor changes made the film flow better, and those who care know the issues. Overall, it is a film people need to see.

Ryan Gosling as Neil Armstrong in First Man, just after he crashes the flying bedstead. In real life he bit his tongue so badly that he couldn’t speak for days afterwards.

Thank You for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


The Geeky Geological Features of Charon

As talked about in a previous post, Charon was named after the wife of its discoverer, James Christy. Since then the New Horizons probe has visited and taken some amazing pictures of the surface. As part of the mapping, the team have also started naming some of the craters and other geological features found on the surface, and they all have names drawn from fiction and mythology. Although some have been accepted by the International Astronomical Union (IAU), there are still many that haven’t. As of April 2018 there is an agreed naming convention and set of rules for the names. They should conform to one of the following:

  • Destinations or milestones of fictional space and other exploration.
  • Fictional and mythological vessels of space or other exploration.
  • Fictional and mythological voyagers, travelers and explorers.
  • Authors and artists associated with space exploration, especially Pluto and the Kuiper Belt.

So far many provisional names have been given by the New Horizons team, based mostly on science fiction franchises such as Star Wars, Star Trek, Doctor Who and Firefly. Most are still provisional, but some have been accepted.

An enhanced colour version of Charon taken by the New Horizons space probe. It is enhanced to show the differences in surface composition. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.

A terra is a large landmass or highland, and there is only one highland region on Charon. It was named Oz Terra after The Wonderful Wizard of Oz, the children’s novel by L. Frank Baum. The dark spots on the surface are called maculae in planetary science. The first is named Gallifrey Macula after the home planet of the Doctor in Doctor Who. The second is Mordor Macula, after the base of Sauron in The Lord of the Rings books by J.R.R. Tolkien. A planum is the scientific name for a plateau (elevated plain), and Charon has only one, named Vulcan Planum after the home planet of Spock in the Star Trek series. Terrae, maculae and plana are all being named after fictional destinations. A mons is a planetary mountain; you may have heard of some of the montes currently being explored by NASA rovers on Mars. Charon has three major mountains, and they are named after authors and artists. Butler Mons is named after Octavia E. Butler, an American science fiction author. Clarke Montes is named after Arthur C. Clarke, the famous English science fiction author who wrote 2001: A Space Odyssey. Kubrick Mons is named after Stanley Kubrick, director of films such as The Shining and A Clockwork Orange. All three of the mons names are accepted by the IAU.

Mordor Macula is a large dark area, about 475 km in diameter, near the north pole of Charon, Pluto’s largest moon. It is named after the shadow lands in J.R.R. Tolkien’s The Lord of the Rings. It is not currently known what Mordor Macula is: it may be frozen gases captured from Pluto’s escaping atmosphere, a large impact basin, or both. Credit: NASA

A chasma is a deep, steep-sided depression (a chasm), and chasmata are being named after fictional vessels. Argo Chasma is named after the ship in the Greek myth of Jason and the Argonauts; it is also the spaceship in the English translation of the Space Battleship Yamato anime series. Caleuche Chasma is named after the mythological ghost ship that travels the seas around Chiloé Island off the coast of Chile, collecting the dead, who then live aboard forever (much like Davy Jones). Mandjet Chasma is named after the solar boat of the ancient Egyptian god Ra. All three of the above chasmata are recognised by the IAU. Macross Chasma is named after the SDF-1 spaceship in the Macross anime series. Nostromo Chasma should be known to most as the spaceship in the Alien films. Serenity Chasma is from the spaceship used in the Firefly series. Tardis Chasma is named after the famous blue box flown by the Doctor in Doctor Who.

Annotated map of Charon, with provisional names for features. Credit: NASA/JPL.

There are 16 notable craters found on Charon’s surface, of which six have officially recognised names. They have all been named after characters associated with science fiction and fantasy. Dorothy Crater is named after the main character in The Wizard of Oz, the same story that names the only terra on Charon. Nasreddin Crater is named after a Sufi traveler from folklore. Nemo Crater is after Captain Nemo from the novels of Jules Verne. Pirx Crater is named after the main character of the short stories by Stanisław Lem. Revati Crater is named after the main character in the Hindu epic narrative Mahabharata. Sadako Crater is named after the adventurer who traveled to the bottom of the sea in the medieval Russian epic Bylina. All of the above craters have been officially recognised by the IAU. Alice Crater is named after the main character of the Lewis Carroll novels. Kaguyahime Crater is named after the princess of the Moon in Japanese folklore. Organa Crater is named after Princess Leia in the Star Wars films, along with Vader Crater and Skywalker Crater. Ripley Crater is one of the more studied craters and is named after the main character in the Alien films. Kirk Crater, Spock Crater, Sulu Crater and Uhura Crater are all named after main characters in the Star Trek TV franchise.

Photo of Charon centered on Ripley Crater. Nostromo Chasma crosses Ripley vertically. Vader is the dark crater at 12:00, Organa Crater is at 9:00, Skywalker Crater at 8:00, Gallifrey Macula and Tardis Chasma at 4:00. Credit: NASA/JPL

Thank You for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


Notes From NASA’s Chief Scientist Jim Green’s Talk on The Search For Extraterrestrial Life

A few weeks ago my place of work, STFC, was lucky enough to host NASA’s chief scientist Jim Green for a talk titled “The search for life on Earth in space and time”. At the time of writing there is a version of the talk on the University of Oxford’s Facebook page. It is a really interesting talk for anyone interested in space and our solar system. It also goes into much more depth than this post and gives a real insight into the current science of our solar system. A planetary scientist himself, he talks about the planets in our solar system that could harbor life, and those that might have done previously. I found it a real insight into what NASA’s goals are and where they are looking for signs of life. I particularly enjoyed the talk as Jim Green hosts the “Gravity Assist” podcast made by NASA.

Logo for NASA’s Gravity Assist podcast hosted by Jim Green. Credit: NASA.

The first real point he made was how to define what life is, which is a reasonable question. If you want to go out and find life on other planets, how do you know when you have found it? Spacecraft and astronauts need instruments and tools to detect things, and to build those instruments you need to know what they are looking for. The definition they came up with is that life needs to do three things: metabolize, reproduce and evolve. This is a pain because it’s difficult to see any of those things directly. If you take just the metabolizing part and break it down, it becomes a bit simpler: you need organics, an energy source, and water. You also need some way to get rid of waste. Plus we need to take time into account; you could have a fully habitable environment but no life, simply because it isn’t the right time.

The ingredients needed for life, a slide in the Jim Green Talk. Credit: NASA

Time is a really important factor. Earth has existed for 4.6 billion years, and it hasn’t always had life; there have been at least five mass extinction events in that time as well. To really see what is happening we need to look at how the sun has changed over that time, as it is the thing in the solar system with the most effect on us. Since its birth 4.6 billion years ago it has brightened, with its luminosity increasing by 25 or 30% by some estimates. We know that the Goldilocks region, or habitable zone, of a star is where water can exist in all three states, but it depends on how big and how bright the star is, and therefore this Goldilocks region changes over time. This would seem to make looking for life on exoplanets simple: just work out where the habitable zone is and choose planets in it. Unfortunately it isn’t that simple.
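The scaling behind that moving zone is simple: sunlight falls off with the square of distance, so the orbit receiving Earth-like flux moves outward with the square root of the star’s luminosity. A rough sketch (a crude flux-only model, ignoring atmospheres and greenhouse effects):

```python
import math

def earth_flux_distance_au(luminosity_solar):
    """Distance (AU) receiving today's Earth flux, for luminosity in solar units."""
    return math.sqrt(luminosity_solar)

# With the young sun at ~75% of today's brightness, the distance getting
# Earth-like sunlight sat around 0.87 AU; it has since moved out to 1 AU.
early = earth_flux_distance_au(0.75)
today = earth_flux_distance_au(1.0)
```

So a planet can sit still while the habitable zone sweeps past it, which is exactly the time dimension the talk stresses.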

A diagram of how the habitable zone of a star changes over time with different brightnesses. Credit: NASA

Let’s start off with Mercury, the closest planet to the sun. It is larger than the Moon, but it isn’t large by any means. It has a magnetic field, it rotates in a 3:2 spin-orbit resonance (close to tidally locked) and it is incredibly hot. It outgasses, and from MESSENGER data most scientists have agreed that it has never had a substantial atmosphere, so water is very unlikely to have existed there. The next candidate would be Venus; it is a similar size to the Earth after all. The Soviet Venera missions looked at the atmosphere and the temperature, and found it is extremely hot. The surface is hot enough to melt lead, and the pressure is 90 times that of our own planet. The NASA Magellan probe found it to be highly volcanic, with a very thick atmosphere. This means there is basically no chance of water, and makes Venus a bad choice for finding life today. Using some fairly interesting concepts, scientists have modeled what early Venus may have looked like and found it likely had water at some point, but the runaway greenhouse effect, along with the lack of a magnetic field, has stripped all the water away. That being said, one day we could produce probes good enough to dig through the surface and look for signs of past life below the ever-evolving surface layer.

Five global views of Venus by the Magellan probe. Credit: NASA.

The next obvious choice is Mars, much larger than the Moon, but only about half the size of Earth. It’s a bit of a runt due to Jupiter: the asteroid belt between Mars and Jupiter is made of rocks that could have been a part of Mars, but Jupiter’s massive gravitational pull denied that. We also know that at some point in its life it had oceans that covered two thirds of the surface and could have been up to a mile deep in places. It then went through massive climate change and lost its magnetic field, which meant the solar wind stripped away the atmosphere and left a dry and arid surface. The pressure is now about 1% that of Earth. Plus, as it is fairly close to Earth, we can visit it fairly easily. From a number of missions, including satellites and several rovers, we know that there are organic compounds on the surface, and likely water under the surface. Although not a guarantee of life, it is a big hint. There are a number of missions planned, including ESA’s ExoMars and NASA’s InSight and 2020 rover. These missions are designed to drill into the surface and understand more about the planet, and what the water held.

True color image of Mars taken by the OSIRIS instrument on the ESA Rosetta spacecraft during its February 2007 flyby of the planet. Credit: ESA.

We talked about the habitable zone, but there is another line (or sphere, technically) that planetary scientists use, called the snow line. Lying out beyond Mars, it marks the distance from the sun beyond which liquid water was thought unable to exist. For a long time that was thought to be true, but research has revealed that some moons have liquid water below their icy surfaces. In 1610 Galileo discovered some of Jupiter’s moons, and they have since been visited and studied by the Galileo and Juno probes. All the moons at one point had an ice crust. Scientists have found that some moons, such as Io, lost this crust and have become very volcanic and volatile. Ganymede, Callisto and Europa still have an ice crust. Only Ganymede and Europa show signs of a watery ocean underneath the crust, but Ganymede is somewhat ruled out from having life because of its very cold temperatures. This leaves Europa in this Jovian habitable zone. Slightly smaller than our Moon, it has been shown to have watery geysers that reach 400 km above the surface; that would be equivalent to Earth geysers hitting the space station. Galileo data suggests it has twice as much water as Earth. Plus it has been like that for around 4.6 billion years, which is a good indication that there could be microbial or even complex life below the surface. There is a mission planned to visit Europa, called Europa Clipper.

An image showing the icy crust of Jupiter’s moon Europa. Europa is about 3,160 kilometers (1,950 miles) in diameter, or about the size of Earth’s moon. This image was taken on September 7, 1996, by the camera on board the Galileo spacecraft during its second orbit around Jupiter. Credit: NASA/JPL/DLR.

Then there is Saturn, which has been studied extensively, and the thing that stands out is its moon Enceladus. It is the moon that really drew NASA's attention to the possibility of water on these distant moons. It also has geysers, erupting from huge cracks in the southern hemisphere: huge walls of water just pouring out of the body. Being a small moon of around 500 km across, it is dominated by tidal forces; because of its elliptical orbit, the water pours out less when it is closer to Saturn and more when it is further away. This has been measured directly, because the Cassini spacecraft actually flew through one of the plumes before anyone knew they existed. We have a spacecraft that has literally tasted this water. About 98% of the water that comes out of the geysers falls back onto the moon; the other 2% escapes and forms Saturn's E ring. On later passes Cassini flew through the plumes deliberately and measured some of this water and, more importantly, small grains of rock, which point to hydrothermal vents as the source of these plumes.

NASA’s Cassini spacecraft captured this view as it neared icy Enceladus for its closest-ever dive past the moon’s active south polar region. Credit: NASA/JPL

Another spectacular moon of Saturn in the running is the famous Titan. It is bigger than the planet Mercury, and its atmosphere, at about 1.5 times the surface pressure of ours, is dominated by nitrogen. Trace gases of methane and ethane have been detected, and it has large bodies of liquid: radar images piercing through the thick atmosphere show rocky terrain and flat lakes of liquid methane. This has spurred on the idea that life could be very different from ours and could survive in liquids such as methane. So if we want a chance of finding life not like us, Titan would be the best place to go. A number of missions to visit Titan and make much better measurements of the surface are being planned, including robotic landers and maybe even very simple rovers, though by all accounts they are still in the early stages.

These six infrared images of Saturn’s moon Titan represent some of the clearest, most seamless-looking global views of the icy moon’s surface produced so far. The views were created using 13 years of data acquired by the Visual and Infrared Mapping Spectrometer (VIMS) instrument on board NASA’s Cassini spacecraft. Credit: NASA/JPL-Caltech/University of Nantes/University of Arizona

The data from these missions has allowed us to look further afield for exoplanets that fit what we now use to define habitable planets. Missions such as Kepler refined the way we detect planets, staring at stars for long periods of time and watching how they dim and wobble when planets pass in front of them. The big exoplanet mission for NASA currently is TESS. Launched in April 2018, it has gone through commissioning and is already finding planets. The idea is to take large numbers of images over a long time and find as many exoplanets as possible, hopefully producing thousands of candidates; the most promising can then be followed up with much more powerful telescopes such as JWST to tease out the atmosphere and makeup of these worlds. One closing point that Jim Green made: when you go out and look at the stars at night, just remember that there are more planets in our galaxy than there are stars visible in the sky.

One of the first images taken by NASA TESS, centered on the southern constellation Centaurus, reveals more than 200,000 stars. Credit: NASA.

Thank You for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.

Follow @TheIndieG
Tweet to @TheIndieG

NASA Turns 60

The official logo for NASA turning 60.

As of today, the 1st of October 2018, NASA has turned 60. It was created as a new agency based on its precursor NACA, which was started in 1915. The Cold War between the USA and the Soviet Union created a space race in the late 1950s. From 1946 the National Advisory Committee for Aeronautics (NACA) had been experimenting with rocket planes, one of the most famous being the Bell X-1 that took Chuck Yeager past the speed of sound (the first aircraft to do so). They were also the team behind the running of the X-15 rocket plane that Neil Armstrong famously flew. In the early 1950s there were calls to launch artificial satellites towards the end of the decade, mainly driven by the International Geophysical Year of 1957/58.

The X-15 rocket plane, still the fastest crewed aircraft ever flown; it reached Mach 6.7 under a program begun by NACA. Credit: NASA.

The USA's first effort, Project Vanguard, led by the United States Naval Research Laboratory, ended in catastrophic failure, and this became the perceived state of the US side of the space race at the time. On October 4th, 1957, Sputnik 1 launched and instantly grabbed the attention of the American public. The perceived threat to national security became known as the Sputnik crisis, and the US Congress urged immediate action. President Dwight D. Eisenhower and his advisers worked on immediate measures to catch up, eventually agreeing to create a new federal agency based on the activity of NACA, which would conduct all non-military activity in space. The Advanced Research Projects Agency was also created, to develop space technology for military applications.

A failed launch from Project Vanguard, the Naval Research Laboratory's attempt at the first US satellite, which ended in disaster.

Between 1957 and 1958 NACA began studying what a new non-military space agency would be and what it would do. On January 12th, 1958, NACA convened a “Special Committee on Space Technology” headed by Guyford Stever (director of the National Science Foundation). The committee took consultation from the Army Ballistic Missile Agency, headed by the famous Wernher von Braun, soon to be the architect of the Saturn V. On January 14th, 1958, the NACA director Hugh Dryden published “A National Research Program for Space Technology”, which stated:

It is of great urgency and importance to our country both from consideration of our prestige as a nation as well as military necessity that this challenge [Sputnik] be met by an energetic program of research and development for the conquest of space… It is accordingly proposed that the scientific research be the responsibility of a national civilian agency… NACA is capable, by rapid extension and expansion of its effort, of providing leadership in space technology

On January 31st, 1958, Explorer 1 was launched. Officially named Satellite 1958 Alpha, it was the first satellite of the United States. As covered in a recent post, the payload consisted of the Iowa Cosmic Ray Instrument, minus its tape recorder (there was not enough time to install it). A big turning point in the US side of the space race, it gave civilian space activities a chance in the spotlight and a case for more funding.

The logo for Explorer 1, the first US satellite in space and the first satellite to detect the Van Allen belts. Credit: NASA/JPL.

In April 1958 Eisenhower delivered an address to the U.S. Congress supporting the formation of a civilian space agency, and then submitted a bill to create the “National Aeronautical and Space Agency”. Somewhat reworked, the bill was passed as the National Aeronautics and Space Act of 1958 on July 16th. Two days later von Braun's working group submitted a report criticising the duplication of effort between departments on space-related programs in the US government. On July 29th the bill was signed by Eisenhower and NASA was formed, beginning operations on October 1st, 1958. NASA absorbed NACA in its entirety: its 8,000 employees, its annual budget of $100 million, and the research labs under its jurisdiction. The three main labs were the Langley Aeronautical Laboratory, the Ames Aeronautical Laboratory, and the Lewis Flight Propulsion Laboratory, along with two small test facilities. Elements of the Army Ballistic Missile Agency, including Wernher von Braun's working group, and of the Naval Research Laboratory team that had failed with Project Vanguard were also transferred to NASA. In December of that year NASA gained control of the Jet Propulsion Laboratory (JPL). It is important to remember that NASA was built upon the success of the rocket scientist Robert Goddard, who inspired von Braun and the other German rocket scientists brought over by Operation Paperclip. There were also huge influences from the research conducted by ARPA and US Air Force research programs.

Thank You for reading, take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


JAXA Lands Rovers on an Asteroid

An artist’s impression of the Hayabusa 2 probe. Targeting an asteroid, it plans to land, sample it and then return with the sample by 2020.

The Japanese space agency JAXA has successfully landed two small rovers on the surface of a near-Earth asteroid, deployed from the Hayabusa 2 probe. Following on from its predecessor Hayabusa, this second mission is an asteroid sample-return mission, building on and addressing the weak points of the first. It launched on the 3rd of December 2014 and rendezvoused with the near-Earth asteroid 162173 Ryugu on the 27th of June 2018. Currently surveying the asteroid, a process lasting a year and a half, it will depart in December 2019 and return to Earth in December 2020.

Photo taken by Rover-1B on Sept 21 at ~13:07 JST. It was captured just after separation from the spacecraft. Ryugu’s surface is in the lower right. The misty top left region is due to the reflection of sunlight. 1B seems to rotate slowly after separation, minimising image blur. Credit: JAXA

The Hayabusa 2 probe carries four small rovers designed to investigate the asteroid surface in situ and provide data and context for the environment the returned samples come from. Unlike the rovers we are used to, these all use a hopping mechanism to get around; none of them have wheels, as there is so little gravity that wheels would be very inefficient. Deployed on different dates, they are all dropped from 60-80 m altitude and fall to the surface under the very weak gravity. The MINERVA-II-1 container deployed the first two rovers, ROVER-1A and ROVER-1B, on the 21st of September 2018. Developed by JAXA and the University of Aizu, the two are identical: 18 cm in diameter, 7 cm tall, with a mass of 1.1 kg (2.4 lb) each. They hop using rotating masses within the rover, carry stereo cameras, a wide-angle camera and thermometers, and are powered by solar cells and a double-layer capacitor.

First pictures from a MINERVA-II-1 rover that landed on the asteroid. Credit: JAXA.

The MINERVA-II-2 container holds ROVER-2, developed by a consortium of universities led by Tohoku University. It is an octagonal prism, 15 cm in diameter and 16 cm tall, with a mass of about 1 kg (2.2 lb). It has two cameras, a thermometer and an accelerometer on board, plus optical and UV LEDs to illuminate floating dust particles, and four mechanisms to hop and relocate. The fourth rover, named MASCOT (Mobile Asteroid Surface Scout), was developed by the German Aerospace Center in cooperation with the French space agency CNES. It measures 29.5 cm x 27.5 cm x 19.5 cm and has a mass of 9.6 kg (21 lb). It carries an infrared spectrometer, a magnetometer, a radiometer, and a camera that will image the small-scale structure, distribution and texture of the regolith. It is capable of tumbling to reposition itself, and is designed to measure the mineralogical composition, thermal behaviour and magnetic properties of the asteroid. Its non-rechargeable battery will only last for 16 hours. The infrared radiometer on the InSight Mars lander, launched in 2018, is based on the MASCOT radiometer.

An artistic rendering of Hayabusa 2 collecting a surface sample.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


Locating Where a Sound is Coming From

For my masters year, half the marks came from one module: the masters project. It was a team effort, and we were a group of three. Putting our heads together and taking ideas from lecturers, we made a list of potential projects. We knew that I wanted to be making hardware, and the other two wanted to use and learn machine learning and maybe FPGAs. After much deliberation we decided on a project that listened for a sound and, using time difference of arrival, worked out where the sound came from. This post is mostly about the hardware and circuitry designed for the project.

The final board for our masters project. It contains four amplifier sections for the microphones and a microcontroller with a USB interface.

In a world with a big focus on safety in public places, we thought it would be a good product for the security industry, potentially with links to smart cities. Imagine a shopping center, somewhere with lots of security already: security cameras, alarm systems and dedicated guards. This isn't uncommon in big public places and attractions, especially in the UK; sports stadiums, train stations and museums are always looking for new ways to protect themselves and isolate problems. The example that inspired us was the horrendous shooting at a Las Vegas concert in October 2017, just as we were picking projects. The main problem there was that the security services did not know where the shooter was, meaning it took longer to get to him. With a system like we envisaged, the microphones would pick up the sound and triangulate it, and the location could then be sent to the relevant authorities.

The front page of the New York Times just days after the Las Vegas shooting.

To start with, we needed microphones. We didn't need to reinvent the wheel: microphones can be easily bought off the shelf. For ease we used standard stage microphones with 3-pin XLR outputs. Although we had been warned that they might not work, they had a good omnidirectional pattern and lots of good documentation. One complication is that their output is balanced, which means it needs to go through a pre-amp. To get an idea of what a balanced signal is, imagine a ground connection and two signal wires carrying the same signal, but with one inverted; a signal sent this way is much less susceptible to interference as it travels down the cable. This is part of the reason we liked using stage-rated equipment: sound engineers have already worked out how to transport sound signals long distances through noisy environments. We concluded from research that the signals could run over 100 m, which was the figure we were aiming for.

One of the pre-amplifier sections used on the board, using four operational amplifiers.

Once the signal got to the box it needed to be converted into something an ADC could read. To do this we used an INA217, a pre-amp designed for basically this purpose. An instrumentation amplifier, it measures the difference between the two signals and amplifies it, outputting a voltage referenced to ground. The signal from the microphone is tiny, in the tens-of-millivolts range, so it needed dramatic amplification to get near the 5V ADC range. The INA217 did a good job, but we added a second-stage amplifier to give it the extra push, as very large gains in a single stage can be bad for a number of reasons. We used an OP07D, though if we did it again we would pick a rail-to-rail part for better results. This amp had a pot as one of its gain resistors so we could easily trim the gain during testing. Finally, the signal at this point sat between -2.5V and +2.5V, so we needed to shift it up to between 0 and 5V. This was done with a simple level-shift circuit and another OP07D, reused to make buying easier.
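The numbers in that chain can be sketched end to end. This is just an illustrative model in Python; the individual stage gains below are made-up values (the real board trimmed its gain with a pot):

```python
# Sketch of the analogue chain described above: two amplification
# stages, then a level shift from the +/-2.5 V range into 0-5 V.
# The stage gains are illustrative, not the board's real values.
def chain(v_mic, gain_first_stage=20, gain_second_stage=5):
    v = v_mic * gain_first_stage * gain_second_stage
    return v + 2.5            # shift +/-2.5 V up to 0-5 V

# A 20 mV peak from the microphone becomes a 0.5-4.5 V swing,
# sitting nicely inside the range of a 5 V ADC.
v_high = chain(0.020)         # ~4.5 V
v_low = chain(-0.020)         # ~0.5 V
```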

Me manufacturing the PCB, at this point I was inspecting to see how well it had been soldered.

From here the signal gets read by the 12-bit ADC in an STM32 microcontroller, which streams the data over USB to a PC where MATLAB picks it up. This is where my knowledge is a bit lacking, as I did not build this part. In essence, MATLAB runs a machine learning algorithm trained on over 1000 gunshots, screams and explosions. It categorised them, using a number of features to tell them apart. When played a new sound of one of these types (one it has not heard before), it categorises it and reports it to the user. It was also trained on a selection of background sounds so it knows when none of these events is happening; otherwise there would be false positives.
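The time-difference-of-arrival idea itself is easy to sketch. The snippet below is a minimal illustration in Python with NumPy, not our MATLAB code: it recovers the delay between two microphone channels by finding the peak of their cross-correlation.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, sample_rate):
    """Estimate the delay of sig_b relative to sig_a (in seconds)
    from the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # lag in samples
    return lag / sample_rate

# Synthetic test: the same noise burst arrives 25 samples later
# on the second microphone channel.
rate = 48_000                       # samples per second
rng = np.random.default_rng(0)
burst = rng.standard_normal(1024)
mic_a = np.concatenate([burst, np.zeros(100)])
mic_b = np.concatenate([np.zeros(25), burst, np.zeros(75)])

delay = estimate_delay(mic_a, mic_b, rate)
# 25 samples at 48 kHz is ~0.52 ms; at 343 m/s that corresponds to
# roughly 0.18 m of extra path length to the second microphone.
```

With three or more microphones, pairwise delays like this feed the triangulation step.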

One of our setups for getting a useful output and testing that the amplifiers were working properly.

All in all, the project did actually work. It detected gunshots and screams played into the microphones, and the triangulation algorithm worked, just not in real time. We managed to win best masters project, mainly because we had good quality hardware, a partially working system and a good business case behind it. There is a lot of scope for where this project could go, and many things that could be improved, but we were happy with how it came out. I may be able to reuse some of the circuitry on other projects, who knows. If you are interested in more detail about the project, the hardware or the manufacturing, comment or message me on Twitter. Thanks for reading.

A good example of how much the microphone signals differ when a big sound is made: minute differences in distance produce measurable time differences.

The Dawn of Ion Engines

Ion thrusters are becoming a bigger and bigger part of modern satellite design. Over 100 communication satellites in geosynchronous Earth orbit are kept in their desired slots using this revolutionary technology. This post is about its most amazing achievement to date: the Dawn spacecraft. Just reported to be at the end of its second mission extension, it has a few records under its belt. It is the first spacecraft to orbit two different celestial bodies, and the first to orbit any object in the main asteroid belt between Mars and Jupiter. It is also a record breaker for electric propulsion: its engines have accumulated over 25,700 mph of velocity change, 2.7 times more than any previous electrically propelled spacecraft, and comparable to what the Delta 2 launch vehicle provided to get it into space in the first place.

The Dawn spacecraft launching on a Delta 2 rocket from Cape Canaveral Air Force Station SLC 17 on Sept 27th, 2007. Credit: NASA/Tony Gray & Robert Murra

The Dawn mission was designed to study two large bodies in the main asteroid belt, to get a deeper insight into the formation of the solar system. It also had the added benefit of testing the ion drive in deep space for much longer than previous spacecraft. Ceres and Vesta are the two most massive bodies in the belt, and are also very useful protoplanets from a scientific standpoint. Ceres is an icy, cold dwarf planet whereas Vesta is a rocky, dry asteroid. Understanding these bodies can bridge our understanding of how the rocky planets and the icy bodies of the solar system formed, and could show how some rocky planets can hold water or ice. In 2006 the International Astronomical Union (IAU) changed the definition of a planet and introduced the term “dwarf planet”. This is the change that downgraded Pluto from planet status, although Dr. Phil Metzger has argued in a recent paper that this was wrong. Ceres is classified as a dwarf planet, and as Dawn arrived at Ceres a few months before New Horizons reached Pluto, Dawn was the first spacecraft to study one.

Dawn prior to encapsulation at its launch pad on July 1, 2007. Credit: NASA/Amanda Diller

The ion engines are so efficient that without them a trip to just Vesta would have needed 10 times more propellant, a much larger spacecraft, and therefore a much larger, more expensive launch vehicle. The ion propulsion system Dawn uses was first proven by Deep Space 1, along with 11 other technologies. Dawn has three 30 cm (12 inch) diameter ion thrust units. They can move in two axes to follow the migration of the centre of mass as the mission progresses, and the attitude control system can also use the movable thrusters to control attitude. The mission only needed two of the thrusters, the third being a spare, but all three have been used at some point, one at a time. As of September 7th, 2018, the spacecraft had spent 5.9 years with its ion thrusters on, about 54% of its total time in space. The thrust to its first orbit took 979 days, with the entire mission involving over 2000 days of thrusting. Deep Space 1's mission, in contrast, lasted 678 days before the fuel ran out.

An artist’s impression of Dawn with its ion thrusters on. Credit: NASA

The thrusters work by using electrical charge to accelerate xenon ions to speeds 7-10 times those of chemical engines, and the power level and fuel feed can be adjusted like a throttle. The thruster is very thrifty with its fuel, using a mere 3.25 milligrams of xenon per second, roughly 280 g per day, at maximum thrust; the spacecraft carried 425 kg (937 lb) of xenon propellant at launch. Xenon is a great fuel because it is chemically inert and easily stored in compact form (at launch it was held at 1.5 times the density of water), and its atoms are heavy, so they provide more thrust than comparable candidate propellants. At full throttle the engines produce 91 mN of thrust, roughly the force needed to hold up a small sheet of paper. Over the course of years, though, these minute forces add up to very large speeds. The electrical power comes from two 8.3 m (27 ft) x 2.3 m (7.7 ft) solar arrays. Each 18-square-metre array is covered in 5,740 individual photovoltaic cells that convert 28% of the sun's energy into electricity; on Earth the panels would produce 10 kW. Each panel is on a gimbal so it can turn to face the sun at any time, and a nickel-hydrogen battery charges up to power the spacecraft through dark parts of the mission.
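The quoted figures are easy to sanity-check. A back-of-the-envelope sketch in Python, using only numbers from the text; note the engines rarely run at full throttle, so the real thrusting time stretches far longer than the full-throttle endurance computed here:

```python
# Back-of-the-envelope check of the Dawn ion-engine figures quoted
# above (thrust, fuel rate and propellant load are from the text).
thrust = 0.091            # N, maximum thrust (91 mN)
fuel_rate = 3.25e-6       # kg/s of xenon at maximum thrust

exhaust_velocity = thrust / fuel_rate          # v_e = F / mdot, m/s
specific_impulse = exhaust_velocity / 9.81     # seconds

xenon_per_day = fuel_rate * 86_400             # kg/day (~280 g)
days_of_thrust = 425 / xenon_per_day           # 425 kg at launch

print(f"exhaust velocity  ~ {exhaust_velocity / 1000:.0f} km/s")
print(f"specific impulse  ~ {specific_impulse:.0f} s")
print(f"xenon per day     ~ {xenon_per_day * 1000:.0f} g")
print(f"full-thrust endurance ~ {days_of_thrust / 365:.1f} years")
```

The 28 km/s exhaust velocity this gives is indeed roughly 7-10 times that of a chemical engine, and the ~280 g/day matches the figure in the text.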

The Dawn mission patch. This logo represents the mission of the Dawn spacecraft: during its nearly decade-long mission, Dawn studied the asteroid Vesta and the dwarf planet Ceres. Credit: NASA.

Vesta was discovered on March 29th, 1807 by the astronomer Heinrich Wilhelm Olbers, and is named after the Roman virgin goddess of home and hearth. The Dawn mission uncovered many unique surface features of the protoplanet, which has about twice the area of California, that have intrigued scientists. Two colossal impact craters were found in the southern hemisphere: the 500 km (310 mile) wide Rheasilvia basin, and the older 400 km (250 mile) wide Veneneia crater. The combined outline of these craters was apparent even to the Hubble telescope. Dawn showed that the Rheasilvia crater's width is 95% of the width of Vesta itself (it is not perfectly spherical), and that it is roughly 19 km (12 miles) deep. The central peak of the crater rises 19-25 km (12-16 miles) high and, being more than 160 km (100 miles) wide, competes with Mars' Olympus Mons as the largest mountain in the solar system. The debris propelled away from Vesta during the impacts made up about 1% of its mass and is now journeying through the solar system. These fragments, known as vestoids, range from sand and gravel all the way up to boulders and smaller asteroids; about 6% of all meteorites that land on Earth are a result of these impacts.

The brave new world of 4 Vesta, courtesy of NASA’s Dawn spacecraft. Credit: NASA/JPL-Caltech/UCAL/MPS/DLR/IDA

Dawn mapped Vesta's geology, composition, cratering record and more during its orbit. It also managed to determine the internal structure by measuring the gravitational field. The measurements were consistent with the presence of an iron core around 225 km (140 miles) across, in agreement with the size predicted by howardite-eucrite-diogenite (HED) based differentiation models. The mission confirmed that Vesta is the parent body of the HED meteorites by matching its measurements of the elemental composition and specific mineralogy of Vesta's surface against laboratory measurements of the meteorites. These results confirm that Vesta experienced pervasive, maybe even global, melting, implying that differentiation may be a common history for large planetesimals that condensed before the short-lived heat-producing radioactive elements decayed away. Pitted terrains and gullies were found in several young craters, which could be interpreted as evidence of volatile releases and transient water flow. Vesta's own composition is volatile-depleted, so these hydrated materials were most likely exogenic (delivered from outside).

A colour coded topographic map from the Dawn mission of the giant asteroid Vesta. Credit: NASA/JPL

The first object ever discovered in the main asteroid belt was Ceres. Named after the Roman goddess of corn and harvest, it was discovered by the Italian astronomer Father Giuseppe Piazzi in 1801. Initially classified as a planet, it was later reclassified as an asteroid as more objects were found in the same region, then, in recognition of its planet-like properties (it is very nearly spherical), designated a dwarf planet in 2006 along with Pluto and Eris. Observations by the Hubble telescope between 2003 and 2004 showed it to be nearly spherical and approximately 940 km (585 miles) wide; it makes up about 35% of the mass of the main asteroid belt. Even before Dawn there were plenty of signs of water on Ceres. Its low density indicates that it is 25% ice by mass, which makes it the most water-rich body in the inner solar system after Earth (in absolute amount of water). And in 2012 and 2013 the Herschel space observatory found evidence of water vapour, probably produced by ice near the surface sublimating (transforming directly from solid to gas).

Dwarf planet Ceres is shown in these false-color renderings, which highlight differences in surface materials. Credit: NASA/JPL-CalTech/UCLA/MPS/DLR/IDA

Dawn acquired all the data it needed by the middle of 2016, measuring Ceres' global shape, mean density, surface morphology, mineralogy, elemental composition, regional gravity and topography at unprecedented resolutions. The imaging showed a heavily cratered surface with bright features: often referred to as “bright spots”, they are deposits of carbonates and other salts. Multiple measurements showed an abundance of ice at higher latitudes. However, the retention of craters up to 275 km (170 miles) in diameter argues for a strong crust, with lots of hydrated salts, rocks and clathrates (molecules trapped in a cage of water molecules). Gravity and topography data also indicated that Ceres' internal density increases with depth, evidence of internal differentiation resulting from the separation of dense rock from the low-density water-rich phases in Ceres' history. The rock settled to form an inner mantle, overlain by a water-rich crust. This internal differentiation is typical of small planets like Ceres and Vesta and sets them apart from asteroids.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


Considerations When Making a Current Shunt Sensor

For battery-powered projects, current consumption is a really important consideration when designing the circuitry. While designing my final year project I spent a huge amount of time researching how to put together a simple current sensor. Considering most of my applications are DC, fairly low current and low voltage, the most obvious design is a current shunt. The basic idea is that you put a very low value resistor between the circuit you want to measure and ground, and measure the voltage across it. When one side of the shunt resistor is at ground it is a low-side sensor; high-side versions also exist but are more complex. As the resistor has a small resistance there will be a low voltage drop across it (usually millivolts), so it shouldn't affect the load circuitry. The voltage is proportional to the current running through it, so if you measure it and do the right maths you get a consistent and reliable current reading. This post is about how to get that tiny voltage into 1s and 0s, while thinking through the considerations that make the design accurate and reliable in the environments you want.

My final year project needed current sensors on the motors as well as monitoring the drain on the battery.

The first thing to decide is the shunt resistor itself. A shunt resistor is basically a low value resistor with a very tight, known tolerance and usually a fairly high power rating. It can be used in AC and DC circuitry; the concept is that as a current flows through it, a voltage develops across it. The voltage can then be measured and, using a simple calculation based on Ohm's law, converted into a value for current. The value of the resistor depends on what it is measuring and what is measuring it. Start with what is measuring it. If you are like me, it will likely be read by an ADC, probably on a 5V or 3V3 microcontroller. The voltage across the resistor is going to be amplified between 10 and 100 times (we will get to why in a moment), so pick a maximum voltage drop within that range; I tend to go with 100mV, which for a 5V ADC requires an amplification of 50. Then take the maximum current you want to be able to measure and use Ohm's law to figure out the resistance you need. For example, if I wanted to measure 1A, the resistor would be 100mV/1A = 100 mΩ. Now we know the resistor value, use the power equation to work out the power rating we need. For this example P = IV = 1 x 0.1 = 100mW. That is the minimum; I personally would get a 250mW or even a 500mW part just to keep the temperature of the circuit down.
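Those sizing steps can be captured in a few lines. A sketch in Python, using the 1 A / 100 mV / 5 V example values from the text:

```python
# Sizing a low-side shunt, following the steps described above.
def size_shunt(i_max, v_shunt_max, v_adc_full_scale):
    """Return (resistance in ohms, dissipation in watts, required gain)."""
    r = v_shunt_max / i_max                 # Ohm's law: R = V / I
    p = i_max * v_shunt_max                 # worst-case dissipation
    gain = v_adc_full_scale / v_shunt_max   # amplification to fill the ADC
    return r, p, gain

r, p, gain = size_shunt(i_max=1.0, v_shunt_max=0.1, v_adc_full_scale=5.0)
# r = 0.1 ohm (100 mOhm), p = 0.1 W (so pick a 250 mW+ part), gain = 50
```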

The simple equation to work out what size shunt resistor to use. Credit: Texas Instruments.

Now we have a voltage that will sit somewhere between 0 and 100mV with reference to ground, and we want it scaled up to 0 to 5V. To do this we use an operational amplifier. There are plenty out there, most people have their favourites, and I'm not here to convince you otherwise; I tend to use an op amp I am already using elsewhere in the circuit to make life easier. There are a few things you do need from the op amp in this circuit though: it needs to be rail-to-rail, and it needs a low input offset voltage. Offset voltage is the voltage difference between the op amp's inputs; even though it is tiny it can have a big effect, because we are amplifying small voltages and any noise or offset gets amplified too. The op amp goes in a simple non-inverting configuration. The equations needed to design this are in most first-year textbooks and there are plenty of calculators online. I set a gain of 50 in my calculation, which is in the fairly common range. The output of the amplifier can then go straight into an ADC to be measured.
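For a standard non-inverting stage the gain is 1 + Rf/Rg. A quick sketch; the resistor values here are illustrative picks, not the ones on our board:

```python
# Gain of a non-inverting op-amp stage: G = 1 + Rf / Rg.
# Resistor values are illustrative, not the project's actual parts.
def noninverting_gain(r_feedback, r_ground):
    return 1 + r_feedback / r_ground

# A 49k feedback resistor with a 1k resistor to ground gives
# exactly the gain of 50 used in the text.
gain = noninverting_gain(r_feedback=49_000, r_ground=1_000)
# gain == 50.0
```

In practice one of these resistors can be a trimmer pot, which is how the gain gets tuned during testing.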

The basic layout of a current shunt sensor showing where the shunt resistors and gain resistors go in the circuit. Credit: Texas Instruments 
The first version of my current sense test circuit, using an OPA170 made by TI.

Now let’s look at a few places where errors can creep into a design like this. There are two types of error in a circuit like this: gain error and offset error. A gain error is one where the output drifts further from the ideal output as the current gets higher. An offset error is one with the same amount of error whatever the input, just like an offset. The only common source of offset error in a circuit like this is the op amp's input offset voltage discussed previously, solved with a better choice of amplifier. Gain errors are usually due to the resistance differing from its ideal value. Many things can cause this. One is the tolerance of the resistor used; we want a precision resistor with a tolerance of 1% or less. Another is temperature change in the resistor itself: it may be next to a large MOSFET or other hot component, or may have too low a power rating and heat itself up; either way, a change in temperature means a change in resistance. Layout can also be an issue: tracks that are too thin or too long add extra, unwanted resistance.
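The distinction is easy to see with a toy model (the 2% gain error and 5mA offset are invented numbers, purely for illustration):

```python
# Toy model of the two error types: an offset error shifts every
# reading by the same amount, while a gain error grows in proportion
# to the measured current.
def measured_current(true_current_a, gain_error=0.02, offset_error_a=0.005):
    """Reading corrupted by a 2% gain error and a 5 mA offset error."""
    return true_current_a * (1 + gain_error) + offset_error_a

for i in (0.0, 0.5, 1.0):
    print(i, measured_current(i))
# At 0 A the error is just the 5 mA offset; at 1 A the gain error
# contributes a further 20 mA on top of it.
```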

Great graphs showing the difference between gain and offset error. Credit: Texas Instruments.

If you want to add a bit of fanciness to the project, or really need to measure down to low currents, you need to tackle the zero-current error. The problem is that an op amp, even a rail-to-rail one, never quite reaches the power rails. Even the best can only get within 100mV or so of the rails; this is known as saturation. Solving this means shifting the power rails slightly so that the saturation point sits below ground. If you have a negative voltage rail you can use that, but home projects tend to be single supply, so we need another power source. This can be made with a voltage inverter (a type of charge pump). Usually needing only an external capacitor or two to work, they are cheap and easy to integrate into a project. I used an LTC1983, which creates a -5V rail, but there are plenty of others, such as the LM7705. Research what fits your circuit and price point, and just attach the inverter's negative output to the op amp's negative supply rail.

A great graph showing how the zero-current error occurs, and what it would look like if you tested for it. Credit: Texas Instruments.

Most error issues can be fixed during the hardware design phase. You can pick better op amps, such as ones designed to combat offset voltage: some have internal calibration procedures, and some, such as chopper-stabilised amplifiers, are specifically designed to correct these problems. You can also use a potentiometer instead of a fixed resistor, but potentiometers are more susceptible to temperature and can be knocked out of adjustment. Another way is to fix issues in software with a calibration procedure. Using a calibrated precision current source and a multimeter, measure the reading of the ADC and compare it to the reading from the instruments. You should get an offset and a gain value that can then be used to calibrate the sensor.
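The software calibration step boils down to fitting a straight line through two known points. A minimal sketch, assuming a two-point calibration against a bench supply and multimeter (the ADC counts and currents below are made-up example numbers):

```python
# Two-point calibration: take ADC readings at two known currents,
# fit a gain and offset, then use them to correct later readings.
def fit_calibration(i_low, adc_low, i_high, adc_high):
    """Return (gain, offset) such that current = gain * adc + offset."""
    gain = (i_high - i_low) / (adc_high - adc_low)
    offset = i_low - gain * adc_low
    return gain, offset

def adc_to_current(adc, gain, offset):
    """Convert a raw ADC reading into a calibrated current in amps."""
    return gain * adc + offset

# Suppose the multimeter read 0.100 A when the ADC read 210 counts,
# and 0.900 A at 1850 counts (hypothetical values):
gain, offset = fit_calibration(0.100, 210, 0.900, 1850)
print(adc_to_current(1030, gain, offset))  # a mid-scale reading, in amps
```

Two points correct both the gain and offset errors at once; a few more points and a least-squares fit would also let you check the sensor really is linear.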

A simple setup that I used to calibrate an early sensor, with a big power resistor as the load and a variable power supply to change the current. Readings were noted down to put into the calibration.

I would suggest trying out one of these sensors in a future project; they don't cost much and can be a valuable addition to a design. Especially for power-sensitive devices or smart sensors, this can be a better solution than an off-the-shelf part or breakout board. If you want to hear more about my current sensor designs, and how the testing and calibration went, then comment or tweet at me. I already have some documentation that I may release at some point.
