Taking a Selfie on Mars

Curiosity in a dust storm
An image shared by Seán Doran on Sunday of the Curiosity rover in the middle of a dust storm reported to cover an area the size of the US and Russia combined. Credit: NASA/JPL/Seán Doran.

Curiosity is a famous, car-sized rover currently exploring Gale Crater on Mars. Famous because it has an impressive track record: after landing on Mars in August 2012, the rover was designed to last 687 Earth days (668 sols, or Martian days), but its mission was extended indefinitely in December 2012. Although at the time of writing it is trying to wait out a dust storm that has forced Opportunity into a deep sleep, it is still going strong, and has even managed to take a selfie while waiting for it all to blow over. That is over 2,100 Earth days of functioning and completing chemical analysis on soil, from 560 million km (350 million mi) away!

Mars Curiosity Rover MAHLI
The Mars Hand Lens Imager (MAHLI) on NASA’s Curiosity Rover, taken by Curiosity’s Mast Camera on the 32nd Martian day. Credit: NASA/JPL.
Curiosity first space selfie
The first selfie that Curiosity took of itself with its MAHLI camera, with its dust cover closed. Taken September 7th, 2012. Credit: JPL/NASA.

Even though this impressive piece of engineering has been collecting samples and completing scientific experiments for over five years, the rover still has time to take the occasional selfie. It has a 2.1 m robotic arm with a sophisticated camera (MAHLI) mounted on the end of it. The obvious thing you will notice about the images is that you can’t see the arm taking the image. To many NASA sceptics and flat earthers this is conclusive proof that the rover is in a film studio somewhere in California rather than on our nearest neighbouring planet. At first glance you can understand the confusion: where is the arm? The puzzle is that the arm isn’t in the picture at all, even though the images taken of the rover here on Earth show it to be a very prominent feature.

Mars Rover selfie October 2012
The Curiosity Rover taking a selfie at “Rocknest”, a sand patch on the surface of Aeolis Palus, between Peace Vallis and Aeolis Mons (“Mount Sharp”). Taken in October 2012, not long after landing. Credit: NASA/JPL.

The simple answer was explained by NASA/JPL when these questions came up after the first selfie. The camera has a limited field of view, so it cannot get the entire rover into one shot, and even when it nearly can, the result looks slightly odd depending on the angle. This is also a problem when taking images of the Martian landscape. To get around it, the camera takes many images at differing angles, which engineers can then stitch together in Photoshop. They did something similar when putting together images of the Moon taken by satellites. As the following image posted by NASA shows, the arm has to move between camera positions, often moving out of frame. Even when the arm is slightly in an image, they tend to cover it with another image so it doesn’t confuse the people looking at it. The selfie would look odd if it had more than one arm showing.
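As a toy illustration of that compositing idea (a sketch with an invented frame format, not JPL's actual pipeline), each partial frame can be pasted onto a shared canvas, with later frames simply covering whatever sits underneath them:

```python
def composite(canvas_w, canvas_h, frames):
    """Paste partial frames onto a blank canvas; later frames win.

    frames: list of (x, y, pixels), where pixels is a 2-D grid and
    (x, y) is where its top-left corner lands on the canvas.
    """
    canvas = [[None] * canvas_w for _ in range(canvas_h)]
    for x0, y0, pixels in frames:
        for dy, row in enumerate(pixels):
            for dx, value in enumerate(row):
                # The newest frame overwrites anything below it, which
                # is how a frame showing the arm can be covered up.
                canvas[y0 + dy][x0 + dx] = value
    return canvas
```

A real mosaic tool also warps and blends frames taken at different angles; this only shows the "cover it with another image" step.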

Even though they take care to assemble the images in a way that doesn’t look like many stitched together, there are still sometimes inconsistencies. Notice that in the next image the shadow of the arm is still visible, and there is a slight ghost of the arm below the rover. As you can see below, this shot took 72 stitched images to make; 20 of those images, over 2 tiers, just make up the horizon. Selfies are generally taken at each new drill site, as part of an overall effort to document the trip and the site. The entire picture-taking sequence has now been automated, and tested rigorously on the identical second rover here on Earth. If the rover had to take the multiple pictures from individual commands, the process would be too long and drawn out.

Mars Rover Selfie August 2015
The Mars rover from a different, lower angle, taken at “Buckskin” on Aeolis Mons on Mars on Aug. 5, 2015, the 1,065th Martian day. Credit: NASA/JPL.
Mars rover selfie component images
The 72 images taken by the rover over the period of an hour. Credit: NASA/JPL/MSSS/Emily Lakdawalla.

There are at least seven of these selfies taken over the years, all from a very similar angle. The big thing to notice is the change in the rover itself. Over time it slowly gets covered in more and more dust, starting to blend in with the Martian soil behind it. The saddest part is the slow deterioration of the wheels: small holes are developing and growing in the metalwork, and in some images they are quite prominent. Either way, these selfies show a slightly human side to the robot. Many people across Twitter anthropomorphize Curiosity and its predecessors, wishing them well on their journey.

Mars Rover selfie September 2016
A slightly newer selfie taken at “MurrayB”, a named rock on Aeolis Mons in Gale Crater. An awesome image taken in September 2016. Credit: NASA/JPL.

Thank you for reading, and take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


How the Type G Gate Worked

apollo 3 input NOR gate
An image of the silicon die inside the Type G 3-input NOR gate used to power the Apollo Guidance Computer.

Previously I went through the 3-input NOR gate that ran the Apollo Guidance Computer and how the circuit works. Before that I also told the story of how this chip partially funded Silicon Valley as we know it today. This post builds on those and goes through how the silicon works, and the simplicity of the circuit. It was spurred on by quite a famous, fairly detailed image of the silicon inside the device, which taught me a lot about silicon that I want to pass on.

apollo 3 input NOR gate schem annotated
The schematic of the 3-input NOR gate, from the NASA schematics of the Apollo Guidance Computer. Annotated with my own designators for reference.

The above schematic of the 3-input NOR gate is also shown in previous posts. It is from the NASA Apollo Guidance Computer schematics, but I have annotated it so that I can reference specific parts. It is a handy schematic, considering it was drawn right at the start of the development of semiconductors. The first image in this post is the sharpest image of the silicon, but is not very big. The biggest image I can find is not quite as sharp, but is much better to annotate; it is the same chip. The first annotation shows the pinout of the device, and how the pads on the die actually connect to the pins of the package.

apollo 3 input NOR pin out
The silicon of the 3-input NOR gate with annotations to show which pad connects to which pin. The pin numbers are from the schematic.
Showing how pins are connected
An image showing how the pads coming off of the silicon are connected to the pins of the flat pack.

The parts noted in the above images are pins 5 and 10, which are the starting points for deciphering the layout. If you look at pins 5 and 10 on the schematic, they correspond to GND and power respectively. They are the only pins shared between the two NOR gates. Apart from that, the two sides look remarkably similar, and are basically mirrored versions of each other. To figure out which is ground and which is power, the resistors need to be taken into account.

apollo 3 input NOR gate resistors
The resistors on the silicon of the device. Shown above as brown lines, they are strips of P-doped silicon that act as resistors.

The above image shows the resistors found on the device. Each tends to be just a thin section of P-doped silicon, connecting two sections of aluminium to form a resistor. Note also that there is a big section of brown surrounding the whole circuit. Although it functions like a resistor and is made in the same way, it is purely for ESD purposes, protecting the circuit. This big ring is one hint that it is connected to ground (pin 5). The second hint is that GND has no resistors attached to it on the schematic, but power has two: R1 and R2, connecting to pins 9 and 1 respectively as pull-up resistors. R3 to R8 are simply the base resistors for the transistors. They are all roughly the same size, and there are six of them. The transistors are also fairly obvious in the centre of the silicon.

apollo NOR gate transistors
The centre of the silicon from the Apollo 3-input NOR gate. The transistors are marked, with the collector, base and emitter of each also shown.

The above image shows the heart of the device: the six transistors that make it resistor–transistor logic. As you can see, all the collectors of each gate are connected together, to pins 1 and 9. If you look closely, the base and emitter of each transistor sit inside a brown section like the resistors. This is P-doped silicon and forms the base–emitter junction, which allows the base and emitter contacts to sit anywhere within that P-doped section and still work. This means the transistors do not conform to the standard collector–base–emitter topology. All of the emitters are also connected together via the aluminium placed on top, but the P-doped sections of each device are separate. As all the transistors of each gate share a common emitter connection, it doesn’t matter that they are all tied together; by design, only one of the transistors needs to be on for the gate to function.

Ken Shirriff transistor side view
A great image showing how the transistor works from a side view by Ken Shirriff.

The above image, found on Ken Shirriff’s blog, shows how the transistor works with the emitter and base in the P-doped silicon. I may do some more posts about it, but his blog is a great place to find more information on silicon reverse engineering.

Electronics world 1963
A clipping from Electronics World in 1963 showing the new planar technology process. This method was used to make the NOR gate.

The above image is an interesting one I found while researching this chip: a section in Electronics World from 1963 showing how Micrologic is made. The Type G chip was part of the second batch of Micrologic circuits. This section was useful to see how silicon was actually manufactured then, and in some ways, still is today.

McMoon: How the Earliest Images of the Moon Were so Much Better than we Realised

Earthrise
An Earthrise over the moon’s horizon, taken by Lunar Orbiter 1 on August 24th 1966. Credit: NASA/LOIRP.

Fifty years ago, five unmanned lunar orbiters circled the Moon, taking extremely high resolution photos of the surface, trying to find the perfect landing site for the Apollo missions. The images were good enough to blow up into 40 x 45 ft prints that the astronauts would walk across looking for the right spot. After their use, the images were locked away from the public until after the bulk of the Moon landings, as at the time they would have revealed the superior technology of the USA’s spy satellite cameras, from which the orbiters’ cameras were derived. The main worry was the USSR gaining valuable information about landing sites the US wanted to use. In 1971 many of the images were released, but nowhere near their potential quality, and mainly to an academic audience, as public interest in the Moon had waned. Up until 2008, most of the published images from the project were the grainy, lower quality 1966 versions.

Earthrise difference
Comparison of the Earthrise image shown to the public in 1966 on top, and the restored image directly from the tape on the bottom. The bottom image was released in 2008, 42 years after it was taken. Credit: NASA/LOIRP.

These spacecraft were Lunar Orbiters I to V, sent by NASA during 1966 and 1967. After the Apollo era, the analog tapes holding the data they sent back were placed in storage in Maryland. In the mid 1980s the tapes were transferred to JPL, under the care of Nancy Evans, co-founder of the NASA Planetary Data System (PDS). They were moved around for many years, until Nancy found Dennis Wingo and Keith Cowing. They decided the tapes needed to be digitised for future generations, and brought them to NASA Ames Research Centre, setting up shop in an abandoned McDonald’s offered to them as free space. They christened the place McMoon. The aim was to digitise the tapes before the technology used to read them disappeared, or the tapes themselves were destroyed.

The Mcdonalds
The McDonald’s nicknamed McMoon, with the trademark skull-and-crossbones flag denoting the “hacker” methodology. Credit: MIT Technology Review.

The Lunar Orbiters never returned to Earth with the imagery. Instead, each Orbiter developed its 70 mm film (yes, film) on board, raster-scanned the negatives with a 5 micron spot (200 lines/mm resolution), and beamed the data back to Earth using lossless analog compression, which was yet to actually be patented by anyone. Three ground stations on Earth, in Madrid, Australia and California, received the signals and recorded them onto magnetic tape. The tapes needed Ampex FR-900 drives to read them: a refrigerator-sized device that cost $300,000 new in the 1960s.

FR-900
The FR-900 that was used to restore the old images, a mix of old and new equipment to get the images onto modern PCs. Credit: MIT Technology Review.
FR-900 signed
The back of the first FR-900 has been signed by the people who brought the project to life, including Nancy Evans. Credit: MIT Technology Review.

The tape drive they found first had to be restored, beginning with a wash in the former restaurant’s sink. The machine needed a custom-built demodulator to extract the image, an analog-to-digital converter, and a monitor connection to view what was happening. As the labelling system of the tapes had been forgotten, and documentation was not readily available, they had to hand-decode the coordinates on the tapes. They also had a big collection of parts from other FR-900s and similar machines. The spare parts were constantly needed to keep the recorder going; there was good reason the format didn’t last long.

moon image reels
These are just some of the reels of moon images. They use this machine to hand inspect the reels, mainly to figure out the coordinate labelling system. Credit: MIT Technology Review.

To read the tapes, the heads of the FR-900 pick up the magnetic field recorded on the tape, which induces a current in the heads. That current is run through the demodulator, which pulls out the image signal, and then through an analog-to-digital converter. The data is then processed on a computer using the custom system they set up, including custom software that interfaced with Photoshop to link the relevant parts of the image together. The orbiters sent each image down in multiple transmissions, with each strip (one tin of tape) making up part of the image. The software manages to link up the strips nearly seamlessly at the full potential resolution. The best of the images can show the lunar surface at a resolution better than 1 m, better than any other orbiter that has been there.
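The strip-reassembly step can be sketched in a few lines (a minimal sketch with a made-up data layout, not LOIRP's actual software): each transmission contributes a narrow vertical strip of pixels, and placing the strips side by side rebuilds the full frame.

```python
def assemble_strips(strips):
    """Rebuild a frame from vertical strips of equal height.

    strips: list of strips, each a list of pixel rows; strip i
    supplies the columns immediately to the right of strip i-1.
    """
    height = len(strips[0])
    image = []
    for y in range(height):
        row = []
        for strip in strips:
            row.extend(strip[y])  # append this strip's pixels for row y
        image.append(row)
    return image
```

The real software also had to align the strips and even out exposure differences between them; this only shows the basic side-by-side joining.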

tapes tapes tapes
The image shows the sheer amount of tapes that the few images are stored on. Inside McMoon you can also see a sleeping bag some poor guy had to stay in. Credit: thelivingmoon.com.

These were huge files, even by today’s standards. One of the later images can be as big as 2 GB on a modern PC; with photos from top-resolution DSLRs only being in the region of 60 MB, you can see just how big these images are. One engineer said you could blow the images up to the size of a billboard without losing any quality. When the initial NASA engineers printed off these images, they had to hang them in a church because they were so big. The below images give some idea of the scale. Each individual image, when printed out, was 1.58 m by 0.4 m.

NASA printing
This image shows the large thin strip images being laid out on the floor of a large room so the engineers could look for good landing spots. Credit: NASA.
NASA Engineer
The image shows a NASA technician with a ream of photograph printouts used to assemble the overall image. Credit: NASA.

Orbiter IV’s job was to produce a single big image of the near side of the Moon. Between May 11 and 25, 1967, it took a series of images spanning from the north pole to the south pole, and from the eastern limb to the western limb. The complete mosaic stretched 40 by 45 ft. The engineers laid it out on the floor, and all the observers, including the astronauts, had to take off their shoes and crawl over it. The images were so good, even at this size, that some astronomers used magnifying glasses. This giant image was the primary source for selecting the sites for Orbiter V to photograph at higher resolution. The images taken by Orbiter V then decided the exact locations for Apollo 11 to land.

Tsiolkovskiy Crater
The very prominent feature in this image is the Tsiolkovskiy Crater on the far side of the moon. Taken by Orbiter 3 on 19 February 1967. Credit: NASA/LOIRP.

Since 2007 the Lunar Orbiter Image Recovery Project has brought back 2,000 images from 1,500 analog tapes, including the first ever picture of an Earthrise. As Keith Cowing said, “an image taken a quarter of a fucking million miles away in 1966. The Beatles were warming up to play Shea Stadium at the moment it was being taken.” To find more of the images, go to their website, but be warned: those images are huge.

Thank you for reading, and take a look at my other posts if you are interested in space or electronics, or follow me on Twitter to get updates on projects I am currently working on.


The Manned Orbiting Laboratory

NASA Special Agent Dan Oakland holds up a long-lost spacesuit uncovered at the Cape Canaveral Air Force Station (CCAFS) in Florida. Credit: NASA.

In early 2005, two security officers at Cape Canaveral Air Force Station in Florida were doing a check of a facility known as the Launch Complex 5/6 museum. NASA Special Agent Dann E. Oakland and Security Manager Henry Butler, of the company that oversees the museum, Delaware North Parks and Resorts, discovered a locked room. The problem was they had no key, and neither did anybody else! Luckily, being security officers, they tracked down a master key and gained entry. By the looks of things the room hadn’t been accessed in many years, at least not by people; the rodents had made themselves at home. With no power, the officers explored by torchlight and found some interesting things.

This is Launch Complex 5/6 blockhouse, now a museum at the Cape Canaveral Air Force Station (CCAFS) in Florida, where long-lost space suits were found. Credit: NASA.

They found retired spacesuits designed for the Americans of the 1960s who were training to be space spies. Initially it was assumed the suits were training suits from the end of the Gemini or the beginning of the Apollo space programs, but when they were inspected by their manufacturer, the Hamilton Standard Corporation, they were identified as MH-7 training suits. Kept in surprisingly good condition, the suits were made for a short-lived Cold War-era military program to put a manned space station in orbit.

This locker reveals a long-lost spacesuit uncovered at the Cape Canaveral Air Force Station (CCAFS) in Florida. Credit: NASA

Started in 1964, the Manned Orbiting Laboratory (MOL) program was an Air Force initiative to send Air Force astronauts to a space station in a Gemini capsule, hardware they already had plenty of experience with. While up there, the crew would take part in surveillance and reconnaissance efforts. After spending a few weeks in orbit, they would simply undock and return to Earth. A test launch of a MOL, with an unmanned Gemini capsule, was conducted from Complex 40 on Nov. 30, 1966. The MOL was constructed from the tankage of a Titan II rocket. The program was abandoned by the Air Force in 1969, but not before a great deal of technological development had been done. When the USAF abandoned the MOL program, they transferred all the equipment and their astronaut corps to NASA.

A 1960 conceptual drawing of the Manned Orbiting Laboratory. Credit: NASA

There were two spacesuits found, identified as 007 and 008. The suit numbered 008 had the name “LAWYER” on the left sleeve, and was traced to Lt. Col. Richard E. Lawyer, a member of the first group recruited to be MOL astronauts in 1965. Three groups of military officers trained to be MOL astronauts; when the program was cancelled, seven of the younger ones were transferred to NASA’s human space flight program and went on to have standout careers. Notable mentions are Robert Crippen, pilot of the first Space Shuttle mission, and Richard H. “Dick” Truly, who later became a NASA Administrator. All the MOL astronauts who were under age 35 and survived eventually flew in NASA programs, either on board Skylab or the Space Shuttle.

When Planes Need an Eye Test: NOLF Webster.

webster overall map
From Google Maps: the locations of, and distances between, the four photo resolution markers. Taken in 2007.

In a previous post, I put together lots of images of photo resolution markers from across the USA. This post is about the four markers found at a little-known airfield in Maryland named Naval Outlying Field Webster. Just to make things confusing, posts on this subject on other blogs often incorrectly name it Walker Field. The four markers are in a straight line, with an almost exact 2,000 ft between them. This is likely for some sort of calibration testing, so the planes have an exactly known distance to calibrate their cameras against. The markers are parallel to one of the main runways, which makes them easy to maintain and gives the planes another reference.

Photo res marker 1
The easternmost photo resolution marker at Naval Outlying Field Webster. Taken in 2007 by Google Maps.

NOLF Webster is located 12 miles southwest of Naval Air Station PAX River. It was bought by the military from a group of Jesuit fathers during WWII for just $96,000, as an auxiliary airfield for PAX River to send aircraft to on busy days. PAX River is a very famous aircraft testing base with lots of history associated with it, including the photo-reconnaissance training school found there. That explains the photo resolution markers just 12 miles to the southwest.

Photo res 2
The second photo resolution marker at Naval Outlying Field Webster. Taken in 2007 by Google Maps.

NOLF Webster works well as an air base due to its great location. It has a good approach over water from two sides, which is especially useful for testing and training; the other approaches were mainly woodland and fields. The three runways are aligned with the prevailing winds, with two of them being 5,000 ft long. The base was heavily used in the 1950s as a “touch and go” site for training at PAX.

photo res 3
The third photo resolution marker at Naval Outlying Field Webster. Taken in 2007 by Google Maps.

In the 1960s the former electronics test division, by then known as the Naval Air Navigation Electronics Project (NANEP), moved in. They helped develop many air navigation systems, and basing them at Webster stopped their work from interfering with operations at PAX River. They may also have played a big part in the development of the photo resolution markers found there.

photo res 4
The fourth photo resolution marker at Naval Outlying Field Webster. Taken in 2007 by Google Maps.

Most of the images I have used were taken in 2007, but the final one (of the fourth marker) was taken in 2015, by which time it had a slightly different pattern. This is maybe to distinguish the markers from one another, so the planes know which is the final one. There don’t seem to be any other changes according to the images found on Google Earth.

photo res 4
The fourth photo resolution marker at Naval Outlying Field Webster. Taken in 2015 by Google Maps.

Hope you enjoyed this short post. If you enjoy stories and posts on space and electronics, take a look at some of the other posts on my blog. Thank you for reading.

The NOR Gate That Got Us To The Moon

Type G micrologic
The Fairchild Type “G” Micrologic gate for the Apollo Guidance Computer – this is the flat-pack version.

In a previous post I talked about how going to the Moon kick-started the silicon age. If you haven’t read it, it is a short but really interesting story about how NASA made integrated circuits cheap, and partially funded what we now know as Silicon Valley. In this post I am going to take a slightly closer look at the circuit of the famous Type “G” Micrologic gate that ran the Apollo Guidance Computer.

apollo 3 input NOR gate
The official NASA schematic of the Type G Micrologic gate found in the Apollo Guidance Computer.

As you can see in the above image, the circuit is not particularly complicated. You have to remember that this is very early logic, before CMOS or NMOS or any other fancy IC technologies. It is basically two 3-input NOR gates that both run off the same power, with the positive supply on pin 10 and the negative, most likely ground, shared on pin 5. The output for the left NOR gate is pin 1, and the output for the right is pin 9. The three inputs for the left gate are pins 4, 2 and 3, with the right gate having pins 6, 7 and 8. Simply put, the output is “pulled” high to power when all the inputs are OFF. The resistor between pin 10 and pin 1 (or 10 and 9) is a simple pull-up resistor, as you would find in most electronic circuits. As expected with a NOR gate, the output will only be ON when all the inputs are OFF. When any of the inputs is ON, the output of that gate is pulled to ground. One, two, or all of the inputs can be ON; it only takes one to turn the output OFF. The resistors going into the base of each transistor just limit the current.
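That behaviour is easy to model in a few lines of Python (a logic-level sketch of one half of the chip, ignoring voltages and resistor values):

```python
def rtl_nor3(a, b, c):
    """One half of the Type G gate: a 3-input RTL NOR.

    Each input drives a transistor base; any ON transistor pulls the
    shared output node to ground. Only when all three are OFF does the
    pull-up resistor hold the output at the supply rail.
    """
    return not (a or b or c)

# The output is ON only in the all-inputs-OFF row of the truth table.
for inputs in [(0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1)]:
    print(inputs, rtl_nor3(*inputs))
```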

3 input NOR
My breadboarded version of the 3-input NOR gate, made with BC547 transistors and a DIP switch. The output has been inverted by the LED.

I made a simple recreation of this circuit using BC547 NPN transistors, though most NPN transistors would work; these were just the ones I found in my parts box. As you can see in the image above, I made it on a breadboard, with the inputs being a DIP switch attached to the power (5 V in this case). The base resistors for the transistors are 1K and the pull-up to 5 V is 10K. I recommend making up this circuit if you want to learn a bit more about logic; it is also a cheaper method than going out to buy 74-series logic chips! The images show the circuit in a number of states. Notice that if any of the switches are on, the LED turns on, which seems to contradict what I said earlier. That is because the LED uses the transistors as a current sink, not a source, so the output is inverted: when the output is 0, the LED turns on. The only time the LED is off (output high) is when no switches are on, meaning all the transistors are off.
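The inverted LED behaviour can be captured in the same logic-level style (a sketch of how my breadboard is wired, not a circuit simulation):

```python
def nor3(a, b, c):
    # The gate itself: output is high only when every input is low.
    return not (a or b or c)

def led_lit(a, b, c):
    # The LED runs from the 5 V rail into the output node, using the
    # transistors as a current sink, so it lights when the gate output
    # is pulled LOW, i.e. whenever any switch is closed.
    return not nor3(a, b, c)
```

So the LED is really displaying the OR of the switches, the inverse of the NOR output.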

apollo 3 input NOR gate
An image of the silicon die inside the Type G 3-input NOR gate. We will be going through how the layout works in a future post.

The final point for this post is that the circuit is actually quite inefficient; modern logic is amazingly low power compared to this. One of the biggest issues is that it is always drawing power in some way. When an input is ON, current flows through the pull-up resistor to ground, and even when the inputs are OFF there is still some leakage. By their nature, the transistors also have parasitic leakages and process inefficiencies. These are only small numbers, but the AGC used over 3,000 of these circuits, so the small currents soon add up to a hefty power draw, especially for battery-powered operation.
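To put a rough number on that, here is the worst case for the pull-up current alone, using the 5 V rail and 10K pull-up from my breadboard version (illustrative values only; the AGC's actual supply voltage and resistor values differ):

```python
supply_v = 5.0        # breadboard supply rail, not the AGC's
pullup_ohms = 10_000  # the 10K pull-up used on the breadboard
gates = 3000          # rough count of NOR circuits in the AGC

# When a gate's output is pulled low, current flows through the
# pull-up continuously: I = V / R, P = V * I.
current_per_gate = supply_v / pullup_ohms     # 0.5 mA
power_per_gate = supply_v * current_per_gate  # 2.5 mW
total_power = power_per_gate * gates
print(f"{total_power:.1f} W if every gate sat at a low output")  # 7.5 W
```

Even with these made-up values, whole watts disappear into the pull-ups alone, before counting base currents and leakage.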

If you enjoyed this post, take a look at the rest of my blog; there is lots about space, electronics and random history. I am always open to ideas and feedback, including on where best to post links to my posts.

What We Learnt From The Peter Beck AMA

Peter Beck is the CEO and founder of Rocket Lab, a US/New Zealand orbital launch provider trying to provide access to space for small satellites. At 19:00 UTC on April 5th he participated in a Reddit AMA on /r/space, where he answered as many questions as he could about the Electron launch vehicle and the upcoming “It’s Business Time” launch, as well as what the future of space access looks like. It was a good AMA; he answered lots of questions, and the full post can be found here. This post rounds up the most common and important questions he was asked, for those interested.

Peter Beck by Electron
Peter Beck, president of Rocket Lab in front of the Electron launcher. Credit: Rocket Lab

The most questions came with reference to SpaceX, and how their business model compares to Rocket Lab’s.

SpaceX didn’t see a market – It’s known that the Falcon 1 was a similar size to the Electron, and that SpaceX quickly moved on from it. So people asked: if SpaceX didn’t stay with it, why will it work for Rocket Lab? Peter makes the point that SpaceX retired that rocket 10 years ago, and most of Rocket Lab’s customers didn’t even exist then. He mentioned that Electron’s manifest is fully booked for the next 2 years with dedicated flights, and he doesn’t see a slowdown in demand anytime soon.

Reusability – On the SpaceX front, they have made big inroads into reusability, and the Electron is not reusable, so many asked about plans for a reusable version. The simple answer he gave was that reusability makes sense for medium-lift vehicles like the Falcon 9, but it doesn’t scale well to small vehicles, so it isn’t on their radar at the moment.

Electron Launch Vehicle
Rocket Lab’s first Electron rocket, seen here in a hangar at the company’s New Zealand launch site. Credit: Rocket Lab

Other Rocket Manufacturers – As there are many small rocket manufacturers popping up and attempting to compete in this space, many wanted to know what the market actually looks like for them. His comment was that not all of those manufacturers will make it, and that Rocket Lab is currently the only dedicated small launcher that has actually made it to orbit. Others were quick to point out that rockets of a similar size do launch, but nowhere near as frequently, and without the same quality as the Electron.

Where else will they launch from – Currently they have a single launch site, but many wanted to know if they will branch out to different pads or even different countries, maybe even Pad 39A. He mentions that he wants to have many potential launch pads to serve many different inclinations, but Launch Complex 1 is a good start.

Going Bigger – There were lots of questions about making a bigger rocket, like an Electron Heavy. He made a point of saying they are currently making one product really well. They have no plans for bigger rockets, and they understand the market they are in; Rocket Lab does not want to compete with SpaceX on those launches. He mentions that they can launch a huge number of spacecraft to LEO, and that going bigger would only open up about 2% more of the market at the moment. That being said, they will continue improving the rocket as they go along.

electron on the pad
The Electron launch vehicle waiting on the pad for takeoff. Credit: Rocket Lab

Using composites – As the LOX tank and other parts are made of carbon composites, there were questions about the difficulty surrounding their design and development. He talked about the several years spent developing and testing the composite tanks, the two main issues being microcracking and oxygen compatibility. They ended up with liner-less tanks with common bulkheads that have similar oxygen compatibility to aluminium, but much lighter mass. All the composite manufacturing is done in house. Some wanted to know how they can afford such expensive processes, and he says that although carbon fibre is expensive, when done right you can use very little of it.

Why black – With most rockets out there being white to help with thermal control, why did they go for black? The simple answer he gave was that it looks better. Many engineers wanted to paint it, but the thermal experts made a special effort to make sure they could keep it black. It also saves some time, money and weight on paint.

It’s all about the money – The key question: is it profitable, and when will they start making those profits? Peter states that they will see positive cash flow after their 5th flight. Each launch costs the customer $4.9 million, and they get a dedicated launch, so there is no need to worry about rideshares, where they would have less control.

Electron Launch
Electron Rocket takes off from Rocket Lab Launch Complex 1 during the “Still Testing” mission. Credit: Rocket Lab

Adding to space junk – There has been lots of news coverage recently about the junk that currently floats in space, so there were some questions on how the Electron avoids becoming just more rubbish. Peter talked about the Curie kick stage of the rocket, which is designed to address this issue. It puts the second stage into an orbit that decays quickly, and the kick stage can deorbit itself. Also, most of the LEO payloads they launch will deorbit within 5-7 years.

Launch cadence – A few asked how often they are able to launch rockets, or at least what the plans are. He mentioned that the current plan is to launch once a month for the next year, then once every two weeks, and to ramp up from there. Launch Complex 1 can support a launch every 72 hours, which is pretty impressive.

Job opportunities – As you would expect, many people asked how to get a job or internship at Rocket Lab. Peter gave a link to email a resume to, but mentioned that the bar is high: they are open to new people, but those people have to be passionate about, enjoy, and be good at what they do. They are a small team trying to do big things! They care about what you do outside your formal education. What are you passionate about? What have you built, tested and broken?

Rocket testing
Rocket Lab testing its engines for the Electron launch vehicle. Credit: Rocket Lab

Some hardcore technical answers

  • Each propellant has a dedicated and independent pump system rather than a single electric motor, due to wanting very accurate control over the oxygen-fuel ratio and the startup and shutdown transients.
  • Ignition is from an augmented spark igniter (a spark plug surrounded by a tube, which acts sort of like a blowtorch).
  • The engine is fully regeneratively cooled, with a 3D-printed chamber.
  • The area ratios for the booster and vacuum nozzles are 14 and 100 respectively.
  • Steering and ullage on the upper stage are handled by a cold gas RCS and a PMD (propellant management device).
  • The whole vehicle is non-pyrotechnic; the decouplers are all pneumatic.
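
To put the nozzle figures in context, the expansion (area) ratio is A_exit / A_throat, so nozzle exit diameters scale with the square root of the ratio. A minimal sketch of that relationship, using a purely hypothetical throat diameter (Rocket Lab has not published one here):

```python
import math

def exit_diameter(throat_diameter_m: float, area_ratio: float) -> float:
    """Exit diameter implied by a nozzle expansion (area) ratio.

    Area ratio is A_exit / A_throat, so diameters scale with sqrt(ratio).
    """
    return throat_diameter_m * math.sqrt(area_ratio)

# Hypothetical 0.10 m throat, for illustration only
booster_exit = exit_diameter(0.10, 14)   # sea-level nozzle, ratio 14
vacuum_exit = exit_diameter(0.10, 100)   # vacuum nozzle, ratio 100

print(round(booster_exit, 3), round(vacuum_exit, 3))  # 0.374 1.0
```

The same throat therefore needs a nozzle almost three times wider to reach the vacuum ratio, which is why vacuum-optimised upper-stage engines have such large bells.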

Explorer 1 and the Van Allen Story

On February 1st, 1958 at 03:48 UTC (January 31st at 22:48 EST), the first Juno booster launched Explorer 1 into low Earth orbit. It was the first satellite to be successfully launched by the United States, and the third ever, after Sputnik 1 and 2 in 1957. It was launched from the Army Ballistic Missile Agency’s (ABMA) Cape Canaveral Missile Annex in Florida, now known as Launch Complex 26. The launch played a pivotal part in the discovery of the Van Allen Belt, and Explorer 1 was the start of the Explorer series, a set of over 80 scientific satellites. Although sometimes overlooked in the history of spaceflight, it guided the US space program to what it eventually became.

William Hayward Pickering, James Van Allen, and Wernher von Braun display a full-scale model of Explorer 1 at a crowded news conference in Washington, DC after confirmation the satellite was in orbit.

In 1954 the US Navy and US Army had a joint project known as Project Orbiter, aiming to get a satellite into orbit during 1957. It was going to be launched on a Redstone missile, but the Eisenhower administration rejected the idea in 1955 in favour of the Navy’s Project Vanguard, an attempt to use a more civilian-styled booster rather than a repurposed missile. Vanguard failed fairly spectacularly in 1957 when Vanguard TV3 exploded on the launchpad on live TV, less than a month after the launch of Sputnik 2, deepening American public dismay at the state of the space race. This led to the Army getting a shot at launching the first American satellite.

The launch
Launch of Jupiter-C/Explorer 1 at Cape Canaveral, Florida on January 31, 1958.

Explorer 1 came together in somewhat of a mad dash. The Army Ballistic Missile Agency had been building reentry vehicles for ballistic missiles, but kept up hope of getting something into orbit. Meanwhile, physicist James Van Allen of the University of Iowa was building the primary scientific instrument payload for the mission, JPL director William H. Pickering was providing the satellite itself, and Wernher von Braun had the skills to create the launch system. After the Vanguard failure, the JPL-ABMA group was given permission to adapt a Jupiter-C reentry test vehicle (renamed Juno) to launch the satellite. The Jupiter IRBM reentry nose cone had already been flight tested, speeding up the process. It took the team a total of 84 days to modify the rocket and build Explorer 1.

Preparing Explorer 1
Explorer 1 is mated to its booster at LC-26

The satellite itself, designed and built by graduate students at the California Institute of Technology’s JPL under the direction of William H. Pickering, was the second satellite to carry a science payload (Sputnik 2 being the first). Shaped much like a rocket itself, it weighed only 13.37 kg (30.8 lb), of which 8.3 kg (18.3 lb) was instrumentation. The instrumentation sat at the front of the satellite, with the rear being a small rocket motor acting as the fourth stage; this section didn’t detach. Data was transmitted to the ground by two antennas of differing types: a dipole antenna formed from two fiberglass slot antennas in the body of the satellite, fed by a 60-milliwatt transmitter operating at 108.03 MHz, and four flexible whips acting as a turnstile antenna, fed by a 10-milliwatt transmitter operating at 108.00 MHz.
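
As a rough sanity check on those antenna dimensions, a resonant dipole’s length follows directly from the operating frequency (wavelength λ = c/f, half-wave dipole ≈ λ/2). A small sketch, ignoring the usual few-percent end-effect shortening:

```python
C = 299_792_458.0  # speed of light, m/s

def half_wave_dipole_length(freq_hz: float) -> float:
    """Physical length of an ideal half-wave dipole (no end-effect correction)."""
    return C / freq_hz / 2

# Explorer 1's two transmitters operated just above 108 MHz
length_m = half_wave_dipole_length(108.0e6)
print(round(length_m, 2))  # 1.39 (metres)
```

A roughly 1.4 m antenna fits comfortably along the body of a satellite about two metres long, which is why the slot-dipole arrangement worked without external booms.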

Explorer 1 parts
A diagram showing some of the main parts of the Explorer 1 satellite

As there was a limited timeframe, limited space available, and a requirement for low weight, the instrumentation was designed to be simple and highly reliable. The Iowa cosmic ray instrument was used, built around germanium and silicon transistors: 29 transistors were used in the Explorer 1 payload instrumentation, with others in the Army’s micrometeorite amplifier. Power was provided by mercury chemical batteries, which made up roughly 40% of the total payload weight. The outside of the instrumentation section was sandblasted stainless steel with white and black stripes. There were many potential colour schemes, which is why articles, models and photographs show different configurations; the final scheme was decided by studies of shadow-sunlight intervals based on firing time, trajectory, orbit and inclination. The same stripes are also seen on many of the early Wernher von Braun rockets.

NASM flight spare
The flight-ready spare of Explorer 1, now on display at the National Air and Space Museum.

The instrument was meant to have a tape recorder on board, but it was not modified in time to be put onto the spacecraft. This meant that all the data received was real-time, straight from the onboard antennas. And as there were no downrange tracking stations, ground stations could only pick up signals while the satellite was overhead, so there was no coverage from the whole orbit. It also meant that when the rocket went up and dipped over the horizon, nobody knew whether it had reached orbit. Half an hour after the launch, Albert Hibbs, Explorer’s systems designer from JPL, who was responsible for the orbit calculations, walked into the room and declared there was a 95% chance the satellite was in orbit. In response, Major General John Medaris snapped: “Don’t give me any of this probability crap, Hibbs. Is the thing up there or not?”.

Explorer 1 Mission Badge
The official JPL mission patch for the Explorer 1 mission.

The instrument was the baby of one of Van Allen’s graduate students, George Ludwig. When he heard the payload was going onto Explorer 1 (and not Vanguard), he packed up his family and set off for JPL to work with the engineers there. He gives a good oral history at this link, talking about designing some of the first electronics in space. He was there watching the rocket launch and waiting for results. From the Navy’s Vanguard Microlock receiving station they watched the telemetry that reported the health of the cosmic-ray package. The first 300 seconds were very hopeful, with a quick rise in counting rates followed by a drop to a constant 10-20 counts per second, as expected. The calculations told them when they should hear from the satellite again, but 12 minutes after the expected time nothing had shown up. Eventually, after the long silence, Explorer 1 finally reported home.

The Van Allen Belt
This diagram showcases the Van Allen belts, which were first detected by instruments aboard Explorer 1 and Explorer 3. The Van Allen belts were the first major scientific discovery of the space age.

Once in orbit, Explorer 1 transmitted data for 105 days. The satellite was reported to be a success within its first month of operation, but from the scientists’ point of view the limited data made the results difficult to interpret. The data also differed from expectations: it recorded less meteoric dust than expected and varying amounts of cosmic radiation, with the counter sometimes falling silent above 600 miles. The puzzle was solved with Explorer 3, when they realised the counters were being saturated by too much radiation, leading to the discovery of the Van Allen radiation belt. Although it was described at the time as “death lurking 70 miles up”, the belt actually deflects high-energy particles away from Earth, helping make life on Earth sustainable. The satellite’s batteries powered the high-powered transmitter for 31 days, and after 105 days it sent its last transmission, on May 23rd 1958. It remained in orbit for another 12 years, reentering the atmosphere over the Pacific Ocean on March 31st, 1970, after 58,000 orbits.
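
Those last two figures can be cross-checked with simple arithmetic: 58,000 orbits over roughly 12 years implies a mean orbital period of a little under two hours, as expected for a decaying low Earth orbit. A quick sketch:

```python
MINUTES_PER_DAY = 24 * 60

def average_period_minutes(orbits: int, years: float) -> float:
    """Mean orbital period implied by a total orbit count over a lifetime."""
    return years * 365.25 * MINUTES_PER_DAY / orbits

avg = average_period_minutes(58_000, 12)
print(round(avg, 1))  # 108.8 (minutes)
```

Explorer 1’s initial period was somewhat longer, so a lifetime average below that is consistent with the orbit shrinking as it decayed.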

When Planes Need an Eye Test

Naval Outlying Field Webster
The photo resolution marker at Naval Outlying Field Webster. From Google Maps

A few years ago, The Center for Land Use Interpretation (CLUI) reported on the dozens of photo calibration targets found in the USA. They are odd-looking two-dimensional targets covered in lines of various sizes, used in the development of aerial photography. Most were built in the 1950s and ’60s as part of the US Cold War effort.

Shaw Air Force Base
The photo resolution marker at Shaw Air Force Base. From Google Maps

At this point, just after the Second World War, there was a huge push to get better information about the enemy; the military needed better aerial reconnaissance. This very problem led to the development of the U-2 and the SR-71. As part of this, there needed to be methods of testing these planes and the big camera systems attached to them. This was before the development of digital photography, so resolution was much more difficult to test.

The USAF test target
The 1951 USAF test target, from Wikipedia; they can still be bought.
Fort Huachuca
The photo resolution marker at Fort Huachuca. From Google Maps

This is where the photo resolution markers come in. Much like an optometrist uses an eye chart, military aerial cameras used these giant markers. Defined in the milspec MIL-STD-150A, they are generally 78 ft x 53 ft concrete or asphalt rectangles with heavy black and white paint. The pattern of bars is sometimes called a tri-bar array, but the markers come in all forms, such as white circles, squares, and checkered patterns.
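
The way such a target measures resolution comes down to simple arithmetic: one “line pair” is a bar plus an equal-width gap, so the width of the smallest bars a camera can still separate sets its ground resolution. A minimal sketch, with an illustrative (not historical) bar width:

```python
def line_pairs_per_metre(bar_width_m: float) -> float:
    """Resolving power implied by the smallest separable bar width.

    One line pair = one bar plus one equal-width gap, so a camera that just
    resolves bars of width w sees 1 / (2 * w) line pairs per metre.
    """
    return 1.0 / (2.0 * bar_width_m)

# Illustrative: a camera that can just separate 0.5 m wide bars
print(line_pairs_per_metre(0.5))   # 1.0 lp/m, i.e. 0.5 m ground resolved distance
print(line_pairs_per_metre(0.25))  # 2.0 lp/m
```

By painting bar groups in steadily shrinking sizes, a single overflight reveals the smallest group the camera can separate, and hence its resolving power, just as the smallest readable row on an eye chart gives visual acuity.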

Beaufort Marine Corps Base
The photo resolution marker at Beaufort Marine Corps Base. From Google Maps

The largest concentration of resolution targets is in the Mojave Desert, around Edwards Air Force Base. This is where most new planes were tested during this era, with the U-2, SR-71 and X-15 being just some of them. There is a set of 15 targets over 20 miles, known as photo resolution road. There are also plenty of other resolution targets at aerial reconnaissance bases across the US, such as Travis AFB, Beaufort Marine Corps Base and Shaw Air Force Base.

Eglin Air Force Base
The photo resolution marker at Eglin Air Force Base. From Google Maps

How Going To The Moon Kick-started the Silicon Age

In the late 1950s, three people were at the epicenter of a huge breakthrough in the world of electronics, the invention of the integrated circuit (IC): Jack Kilby of Texas Instruments, Kurt Lehovec of Sprague Electric Company, and Robert Noyce of Fairchild Semiconductor. In August 1959, Fairchild Semiconductor’s director of R&D, Robert Noyce, asked Jay Last to begin development of the first integrated circuit. They developed a flip-flop with four transistors and five resistors using a modified direct-coupled transistor logic. Named the type “F” flip-flop, the die was etched to fit into a round TO-18 package, previously used for transistors. Under the name Micrologic, the type “F” was announced to the public in March 1961 via a press conference in New York and a photograph in LIFE magazine. Then in October, five new circuits were released, including the type “G” gate function, a half adder, and a half shift register.

The Type F flip flop
Junction-isolated version of the type “F” flip-flop. The die were etched to fit into a round TO-18 transistor package
Type F life image
Physically-isolated Micrologic flip-flop compared to a dime from LIFE magazine March 10, 1961

These first few integrated circuits were relatively slow and only replaced a handful of components, while selling for many times the price of a discrete transistor. The only applications that could afford the high prices were aerospace and military systems, where the low power consumption and small size outweighed the price drawbacks and allowed new, more complex designs. In 1961, Jack Kilby’s colleague Harvey Cragon built a “molecular electronic computer” as a demonstration for the US Air Force, showing that 587 Texas Instruments ICs could replace the 8,500 discrete components (like transistors and resistors) that performed the same function. The most significant use of Fairchild Micrologic devices was in the Apollo Guidance Computer (AGC), designed by MIT, which used 4,000 type “G” three-input NOR gates. Over the Apollo project, more than 200,000 units were purchased by NASA. The very early versions were $1,000 each ($8,000 today), but over the years prices fell to $20-$30 each. The AGC was the largest single user of ICs through 1965.
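
To illustrate why a single three-input NOR gate was enough to build an entire computer, here is a small sketch (in Python, purely illustrative) of the AGC’s basic logic element, and of a set-reset latch, the building block behind flip-flops and registers, made from two cross-coupled NORs:

```python
def nor3(a: int, b: int, c: int = 0) -> int:
    """A three-input NOR gate: output is 1 only when all inputs are 0."""
    return 0 if (a or b or c) else 1

def rs_latch_step(set_bit: int, reset_bit: int, q: int) -> int:
    """One settling step of a latch made from two cross-coupled NOR gates."""
    q_bar = nor3(set_bit, q)       # first gate:  Q-bar = NOR(S, Q)
    return nor3(reset_bit, q_bar)  # second gate: Q     = NOR(R, Q-bar)

q = 0
q = rs_latch_step(1, 0, q)  # set   -> q becomes 1
q = rs_latch_step(0, 0, q)  # hold  -> q stays 1 (the latch remembers)
q = rs_latch_step(0, 1, q)  # reset -> q becomes 0
print(q)  # 0
```

Because NOR is functionally complete, any logic function can be composed from it; using one gate type everywhere simplified manufacturing and qualification, which mattered when every device was hand-inspected for flight.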

apollo guidance computer logic module
Apollo logic module assembled by Raytheon to be used in the AGC
Type G micrologic
Philco Ford also produced the Fairchild Type ‘G’ Micrologic gate for the Apollo Guidance Computer – this is the flat-pack version

Note that although Fairchild designed and owned the type “G” device, most were made by Raytheon and Philco Ford under licence from Fairchild. Over this time, many semiconductor manufacturers, such as Texas Instruments, Raytheon and Philco Ford, were also running large-scale silicon production for other military equipment, including the LGM-30 Minuteman ballistic missiles and a series of chips for space satellites. This major investment from the government and the military kick-started the development of increasingly complex semiconductors, and eventually forced prices low enough for non-military applications. The processes improved, and by the end of the Apollo program hundreds of transistors could be fitted into an IC and more complex circuits were being made. Eventually the cost of adding more transistors to a circuit became extremely low, with the difficulty shifting to the quality of manufacturing. It could be argued that NASA and the Pentagon paved the way for silicon device production as we know it today.