A Lesson in Quarantine from 1665

We are all fairly knowledgeable about pandemics now. We watch governments try to advise us on our everyday lives, which can sometimes seem confusing and unreasonable. We have seen medical experts explain how pandemics and epidemics happen, and we see plenty of graphs trying to describe it all. In the course of human history there have been countless pandemics, some famous, some only documented in niche medical journals. The one we are talking about today is the bubonic plague, specifically the Black Death, which in the space of four years killed perhaps half of the population of Eurasia (the land mass made up of Europe and Asia). That equates to somewhere between 75 and 200 million people. For scale, it’s estimated that around half of Europe’s population died, and the total population of the world before the Black Death is estimated to have been only around 475 million.

A copper engraving of Doctor Schnabel (known as Dr Beak), a plague doctor in 17th century Rome. This was typical apparel during the 17th century outbreak. It was created in 1656 by Paul Fuerst. Credit: Wissen Media

Interestingly, there was not just one large spike of bubonic plague, as most history books simplify it; there were actually three major pandemic waves, and to this day there are still a few hundred cases every year, even with widespread vaccines and treatments available. The Black Death was the name attributed to the second of these waves, and although it reached Europe in 1348, smaller epidemics kept recurring into the 17th century (and smaller outbreaks for a few more centuries after that). The third pandemic is also not often mentioned; it started only around 150 years ago, in the mid 19th century. This post concentrates on the UK in the 17th century, towards the end of the second wave. When it comes to this part of history, there is a real concentration on the city of London, and the plagues that ravaged the British Isles during the second wave were often named after it. It makes sense for London to be at the heart of these epidemics: it was a heavily concentrated group of people, in less than sanitary conditions on average, very much reliant on each other.

A recreation of a map of London by Wenceslaus Hollar in 1665. Note that this is before the Great Fire of London. Credit: University of Toronto Wenceslaus Hollar Digital Collection

We also have to remember that this was in no way isolated to London. Being the heart of a country meant travel and trade, which spread the disease slowly throughout the country, with lots of little epidemics popping up. This leads into the topic of this post. Tucked away in a northern corner of Derbyshire is a small Anglo-Saxon village named Eyam. Lying within the Peak District National Park, it relied at one point on lead mining, an industry there going all the way back to at least the Romans. There is even evidence of stone circles and earth barrows on the moors above the village, which implies earlier settlement. Its name was recorded in the Domesday Book as Aium, a dative form of the Old English noun ēg (an island). It probably refers to a patch of land where crops could grow in the moors, but the village is also settled between two brooks, Jumber Brooke and Hollow Brooke. The word ēg often refers to dry ground surrounded by marsh, but can also mean well watered land.

The village of Eyam taken from a nearby hill. Credit: letsgopeakdistrict

On a fateful day in 1665, in the run up to a religious festival called Wake’s Week, a bundle of cloth arrived from London at the local tailor’s, run by Alexander Hadfield. A week later his assistant, George Viccars, noticed the bundle was damp, so he opened it up and hung it by the fire in an attempt to dry it. He didn’t notice that the cloth was infested with fleas, which became much more active with the warmth of the fire. A few days later he woke up with a headache and flu-like symptoms, so rested more, but soon after that his lymph nodes began to swell up and turn black, and within the week he was dead. People in his household, and those of his neighbours, were soon also suffering. The village was in the grip of an epidemic, at a time before antibiotics, when the plague had a fatality rate of around 60%. Understandably some people fled in the night, and panic set in. In the next three months 42 villagers died. Consider that the population of the village at the start of the year was 350.

Celtic cross in Eyam churchyard. A Saxon cross, notable as the head still survives; it’s estimated to be 8th century and was moved to the churchyard from the moor. Credit: Dave Dunford.

In comes the church. Anyone from England will know that every town and village has some sort of church; a town near where I grew up, Shaftesbury, at one point had over 10! So it is safe to say that at this point in time religion was at the heart of how people lived their daily lives, and understandably people looked to their local priest. As it happened, the church had recently replaced Eyam’s very popular rector with a new man named William Mompesson. Mompesson was no stranger to the plague, and had seen how fast it spreads and how it devastates communities. So, recruiting his popular predecessor, he came up with a plan to ensure the disease did not spread to the local area and the rest of the country. They would put themselves in isolation: by today’s standards, a lockdown. It was a particularly unpopular plan, but a simple one. No one was allowed out of the village.

In front of Eyam Hall – the stocks built in Saxon times for miscreants, felons & thieves of ore from the lead mines to reflect upon their trajectory. Credit: pastmasters

The local lord, the Earl of Devonshire, understood the plan and agreed to have food and supplies sent to the villagers. The local merchants would bring everything needed to the edge of the village and leave it at marked rocks for collection. The villagers would then leave coins as payment in holes in the rocks filled with vinegar. The vinegar would act as a disinfectant, shielding the merchants from the infected village folk, a very innovative idea. The isolation started in June 1666, and two months later the disease reached its peak, with up to 6 people dying of the disease every day. That’s nearly 2% of the original population, every single day! Not everybody died, though. One of the survivors, Marshall Howe, caught the disease early on but recovered. Assuming he couldn’t catch it twice, he took the job of burying the village’s dead, getting paid from the dead person’s estate. Unfortunately it resulted in his own wife and 2 year old child dying of the disease.

The Boundary Stone – looking down the valley to the village of Stoney Middleton, money still sits in vinegar as a reminder. Credit: pastmasters.
The Mompesson Well on the edge of Eyam, a place that can still be visited (see links at the bottom of the post) Credit: derbyshire-peakdistrict

There are many horror stories from these few months. There was the farmer’s wife Elizabeth Hancock, who was watched from a hilltop by those outside the cordon as, over a period of eight days, she dragged the bodies of her husband and six children out of the house and buried them in the garden. These graves have since been known as the Riley graves, after the farm where the family lived, and can still be visited. Elizabeth herself never got sick. Then there was Mompesson, the leader: his wife, just 27 years old, was walking with him one day, and the next she was dead. By this time the worst of the epidemic was over. Through September, October and November the disease fizzled out, until it had essentially disappeared by the end of the year. The last person in the village to die was Abraham Morton, a farmer, and the 18th member of his family to fall to the disease.

The famous Riley graves, where Elizabeth Hancock buried her entire family over an 8 day period at the height of the epidemic.

In all, over 260 people from 76 families died in Eyam that year. The exact number is disputed, but the church has records of 273 individual victims. Survival among those who caught the disease appeared to be random, as many of those who survived had close contact with those who died. It is unknown quite how much the quarantine helped stop the spread in Derbyshire, but it undoubtedly saved thousands of lives; it is often said to have saved the nearby town of Sheffield, which even then was very large. Many of the cottages now carry plaques, and some of the stones that held the vinegar soaked money still reside around the village. There is also a village tradition called “Plague Sunday”, where a memorial wreath is laid on the tomb of Mompesson’s wife, Catherine, on the last Sunday of August every year. A yearly reminder of how isolating at the right time can save countless lives.

Places to visit to see more

Getting Started With The STM32

So this post is part tutorial, and part reminder to myself for when I come back to trying to do the same thing again. It describes the process of programming an STM32 L1 discovery board, specifically (for me) an STM32 L100C discovery board, but it should work with most STM32 devices with some changes. This was a free board that a friend got in a goody bag from an ARM conference. Essentially it covers how to create a project, how to run it in debug mode, and how to make a couple of LEDs flash. The basic process is the same for most STM32 boards/chips using the ST-Link/V2, which most discovery boards have.

The STM32L100C-discovery board used in this tutorial

To start, make sure you have the following downloaded and installed:

  • STM32CubeMX (ST’s graphical configuration and code generation tool, used below)
  • STM32CubeIDE (the IDE used later to build and debug the generated project)

Go ahead and open STM32CubeMX and wait for it to load. Once you get to the home screen, go to File->New Project, and a new window will pop up. Now is the time to find the MCU that you are using. In the MCU/MPU Selector tab, use the series/search options to find your MCU. As I am using the STM32L100C-discovery board, I have to find and select the STM32L100RCT. When you have selected the right part, click on the Start Project button in the top right.

Page searching for the STM32 MCU you are using

When the project then loads up it should look like this in the pinout view.

The pinout screen when using STM32CubeMX

At this point it is a good idea to save it under File->Save Project. Save it somewhere you will remember. I have a folder specifically for STM32 projects like this.

At this point it is also a good idea to get the schematic up for the board you have. If you had chosen the part via the board selector menu instead of the MCU menu, it would fill in all the pins for you, but this tutorial shows how to assign the pins manually. This board has two LEDs and a button that I need to assign to pins; remember your board may be slightly different. ST publish schematics for these discovery boards, they are just difficult to find sometimes.

  • USER button: PA0
  • Green LED (LED 3): PA9
  • Blue LED (LED 4): PA8

Firstly, on the left, under System Core, select the SYS configuration, and a window should pop out. Make sure that the drop down next to Debug is set to Serial Wire. Then you can close this window using the little arrows. Some of the pins in the chip diagram should have changed to green now, which means they have been set.

Next we want to assign what each pin needs to do. This is done by left clicking on the pin in question and choosing its mode. So for me, I need to click on PA0 and set it as GPIO_Input (as it’s a button), and set PA8 and PA9 as GPIO_Output (as they are LEDs). When they are selected, they should go green.

One thing that may make life easier is naming the pins. If you right click on an assigned pin you can “assign user label”, giving the pin a more meaningful name. I have assigned names like LED 3 and User button to make life easier.

An example of what the pin out should look like when you have set the pins

You can probably now see how useful this is for very complex projects where almost all the pins are taken up by different inputs and outputs. You can also do things like set clock speeds and set up ADCs and other peripherals such as USB.

Now, in the Project Manager section, I have made sure the Toolchain/IDE is set to STM32CubeIDE; also check that the project location and name are set how you want them.

Now click the “Generate Code” button at the top.


It will generate the code and open the STM32CubeIDE. I use the recommended workspace when it prompts for it. Then the window should open. On the left hand side, go to your project name->Src->main.c and double click on main.c.

If you have a look in this main.c it will look like a lot to start with, but we just need to find the while(1) loop, for me it is line 98. The rest of the file is doing useful things for us, like setting up the pins we designated before in MX, and also setting up all the required HAL libraries.

For this next part we are going to use the HAL libraries to toggle the GPIO pins for the LEDs, lighting them up. In the while loop, between the comments USER CODE BEGIN WHILE and USER CODE END WHILE, I have put:
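With the PA8 and PA9 assignments from earlier, that comes down to two HAL toggle calls and a delay, something like:

```c
/* Inside the while (1) loop, between the USER CODE BEGIN/END WHILE comments */
HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_8);  /* blue LED (LED 4) on PA8 */
HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_9);  /* green LED (LED 3) on PA9 */
HAL_Delay(100);                         /* wait 100 ms before toggling again */
```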


which will toggle each pin on and off every 100ms.

What my code looks like

To upload this to the board, first we build it by going to Project->Build Project and waiting for the build; hopefully there should be no errors. Then, making sure the board you are uploading to is connected, go to Run->Debug. It may ask you to save the file; go ahead and do so. It may also ask you to switch to a new view; again, go ahead. Finally go to Run->Resume, and the LEDs on the board should then flash fast!

When you wish to stop, go to Run->Stop and you should return to main.c to do some more editing.

Well done on getting this far, and if you didn’t get through some parts then remember Google is your friend, but the comments section below is also open for questions. Equally, get in touch on Twitter and share this if you found it useful. I may even do more like this, for instance more on the HAL libraries.

Fogbank: That time the US forgot how to make Nukes

The U.S. nuclear weapons program is extremely secretive, yet oddly we can still figure some things out about it from the information that is out there. The U.S. Department of Energy, the government department concerned with the safety and handling of nuclear material, is in charge of one of the largest nuclear stockpiles in the world, and those weapons need to be constantly upgraded and refurbished with newer technologies. Occasionally those technologies get hinted at by politicians when asking for more money or defending delays, and sometimes it’s clear that the processes used to create them have been forgotten as engineers retire. The secret material publicly referred to as Fogbank is a great example of this.

Aerial photo of the Y-12 facility
Aerial photograph of the Y-12 facility at Oak Ridge taken around 2007 (Credit: NNSA)

Fogbank first became well known around 2007, when it came to light that it was the root problem behind technical delays in the life extension project for the W76 warhead. This particular warhead is used on the Trident II submarine-launched ballistic missiles, known as D5s, which are used by both the U.S. Navy and the Royal Navy of the UK. It took until 2018 before the W76-1 warheads were finally delivered.

A Trident II (D-5) launch from a submerged Royal Navy submarine
A Trident II (D-5) launch from a submerged Royal Navy submarine. Image via Lockheed Martin, but it is a U.S. Department of Defense photo.

There is a material that we currently use and it’s in a facility that we built … at Y-12 … It’s a very complicated material that – call it the Fogbank. That’s not classified, but it’s a material that’s very important to, you know, our [W76] life extension activity.

The then NNSA director Thomas D’Agostino, talking to members of the House of Representatives in 2007. He is referring to the Y-12 National Security Complex, located near Oak Ridge.
Y-12 Security Complex
The Entrance building of the Y-12 security complex, a Department of Energy facility located near Oak Ridge (Credit: NNSA).

Late in 2007, D’Agostino described Fogbank as an “interstage material”, which heavily implies that it sits somewhere between the primary and secondary stages of a two stage thermonuclear weapon. The W76 is a two stage warhead, so would need some such material to trigger the second stage. In a thermonuclear bomb, also known as an H-bomb, the first stage produces the comparatively easily achieved fission reaction (what happens in a nuclear reactor), which creates a superheated plasma at the interstage (the expected role of Fogbank), which in turn triggers the fusion reaction in the second stage. Fusion is the process that happens in the Sun, and may be the way we power future nuclear reactors.

There’s another material in the [W76] – it’s called interstage material, also known as Fogbank, but the chemical details, of course, are classified,

Thomas D’Agostino speaking to senators late in 2007
NNSA director Thomas D’Agostino in 2009 (Credit: NNSA)

This has naturally led many engineers and scientists to comment on what material Fogbank could be. The most common recurring theme is that it is likely to be an aerogel, an ultralight class of material. Aerogels are extremely light while being surprisingly strong. The concept has been around for a long time, aerogels having been invented in the 1930s, but it was NASA’s Glenn Research Center that introduced modern methods of manufacture, and even used them on a few space missions such as Stardust. There have been suggestions, such as one from Jeffrey Lewis, an expert on missiles and nuclear weapons, that the code name Fogbank could be a reference to the nicknames for aerogels: names such as “frozen smoke” and “San Francisco fog” have all been used for the light and almost see through solid.

The Stardust dust collector with aerogel blocks. (NASA)
The dust collector with aerogel blocks on NASA’s Stardust spacecraft. (Credit: NASA)

There are a few other things that imply Fogbank could be an aerogel. In 2007, D’Agostino told legislators that Fogbank’s production process required the material to be purified using “a cleaning agent that is extremely flammable”. Then in the same year, at the Woodrow Wilson Center, he talked about “another material that requires a special solvent to be cleaned”, potentially the same material, and the solvent was identified as “ACN”, the common abbreviation for acetonitrile, a solvent commonly used in aerogel production. He described the solvent by saying “That solvent is very volatile, it’s very dangerous. It’s explosive.”, which describes acetonitrile well.

A 2.5 kg brick is supported by a piece of aerogel with a mass of 2 g.
A 2.5 kg brick is supported by a piece of aerogel with a mass of 2 g. (Credit: NASA/JPL)

More evidence comes from a 2007 briefing slide on a program known as the Reliable Replacement Warhead (RRW), which aimed to develop a new design of warhead to replace a selection of the existing versions. The slide points at an interstage material that is an expensive “specialty” material which, if replaced, would eliminate the need for unique facilities. The RRW effort was de-funded in 2008 and cancelled by President Obama the following year.

NNSA slide from a 2007 briefing
The NNSA slide from a 2007 briefing about the RRW program that details the properties of the current interstage material. (Credit: NNSA)

Up until 1989 the Y-12 facility at Oak Ridge had a specialised site known as Facility 9404-11, which was apparently used to create Fogbank. 1989 was the year that the final W76 warhead was finished, so the facility was closed down and a new “purification facility” eventually took its place. The original building was torn down in 2004 and replaced with a new facility, named Facility 9225-3. According to Dennis Ruddy, who served as president of the division of the Babcock and Wilcox Company (BWXT Y-12) that ran the facility between 2000 and 2014, the purification facility reprocessed a material taken out of weapons so that it could be used in refurbished weapons. The material is classified, its use in the weapon is classified, and, unsurprisingly, so is the process that they follow.

It is also public knowledge that this site uses ACN as part of at least one of the processes going on in it. On three separate occasions in March 2006, workers had to evacuate the facility after alarms were triggered. According to the Department of Energy, the facility is alarmed to monitor ACN levels, but it was not confirmed whether the alarms that went off were for ACN. There was also an ACN spill that forced an evacuation of the facility in December 2014, and although there were no injuries, it took months for the facility to get back up and running.

The Purification Facility, also known as Facility 9225-3, at Y-12.
The Purification Facility, also known as Facility 9225-3, at Y-12.

In 2009, an issue of the Nuclear Weapons Journal, an official publication of the Los Alamos National Laboratory, published an article disclosing that a decision had been made in 2000 to restart the manufacture of Fogbank, and confirming that it was linked to the W76-1 warhead project. It also revealed that in the years between the projects the NNSA had lost almost all of its institutional knowledge regarding the manufacture and development of Fogbank. The article said “Most personnel involved with the original production process were no longer available”, so the newer personnel had to reconstruct the production process from the records. It also mentioned that “a new facility had to be constructed, one that met modern health and safety requirements”; this would make sense as the 9225-3 facility, which stands on the site of the old one. The best part of it all: the new facility, with new staff manufacturing from old records, produced a higher purity final product than the initial W76 warheads used. The problem with this “improved” material was that it was actually too pure; the impurity turned out to be essential for the final product to work as intended. The process was finally mastered in 2008, almost a decade after the decision to restart production, and deliveries of the W76-1 warheads began that year.

The improved production process
The extremely vague diagrams shown in the article describing the production process the NNSA initially developed to produce Fogbank in the 2000s, and the improved versions. (Credit: NNSA)

This is one example of how organisations such as the NNSA lose technical institutional knowledge over relatively short periods when a technology isn’t being used. They have had similar problems recently with high explosives. In March 2020, the director of the Natural Resources and Environment team at the Government Accountability Office, Allison B. Bawden, noted that the NNSA had not produced a particular type of high explosive at scale since 1993, and pointed to the issues with Fogbank production as a cautionary example. It shows one problem with governmental technology that is never taken on by commercial business. Either way, Fogbank is a prime example of the complexity and secrecy of nuclear warhead production, and of how, in that secrecy, technology is lost.

Why You Should Care About Quantum Key Distribution

We live in an ever changing world, and on the horizon is the era of quantum. We all hear about quantum computers, and how some may or may not have hit quantum supremacy, but one of the lesser known impacts of the advent of quantum computing is the impact it will have on encryption. In this day and age encryption is used everywhere, and it is highly important. From protecting the messages to your friends on WhatsApp to keeping your credit card details safe when buying online, encryption keeps data safe by cleverly encoding it using secret keys that only the sender and receiver can decode… Or at least that is the current working assumption.

For a moment think about how much data is out there. We are far away from the days where websites had a single static page and an email address. The internet of things is creating sensors that monitor our every moment, companies like Amazon are amassing our buying habits over generations, and almost all banking in the modern world is conducted online. Data is everywhere, and it is worth money, hence the need for solid encryption technologies. There are many working on what those technologies will one day look like, and to many, quantum key distribution (QKD) could be the answer to all of our problems.

Why is Current Encryption Tech Not Good Enough?

Encryption is split into two types, symmetric and asymmetric.

Starting with the simpler one, symmetric key encryption uses the same “secret” key for encryption and decryption. So you encrypt the data using your “secret” key, and give the encrypted data and the key to the person you want to read it. This is fine if you are happy to literally take the data to a person by hand on a memory stick, but how do you send the “secret” key secretly over the internet? It also has the problem that you are giving away your key, so your next message to a different person will need a different key; with a large user base, this becomes very difficult to manage. Asymmetric or “public” key encryption instead uses a different key for encryption and decryption: every user in the network has a “public” key and a “private” key associated with them. If you want to know more about this modern form of encryption, look into RSA.

This “public” key system is the bedrock of modern encryption algorithms, and its security rests on the difficulty of factorising very large numbers. When you multiply two large prime numbers together, you get a very large number that you can use for your key; the only reason computers struggle to beat the system is that classical computers cannot factorise these huge numbers efficiently, so the process is far too slow to be useful for malicious intent. The problem comes when we bring in quantum computing. There is a famous algorithm known as Shor’s algorithm, developed in 1994, which needs a functioning quantum computer to run at scale. If that ever works, it could in theory crack any encrypted data that relies on these prime factors as keys. So our banking data, all our messages, and everything we currently encrypt could be read by anyone with a quantum computer. What all this means for us is that we need a fundamentally different way of encrypting data, or at least of generating and sharing keys.

Researchers at the National University of Singapore work on a QKD light source for a satellite (Source: NUS)

Why is QKD any better?

Quantum physics is basically the way we describe how matter and energy interact at the sub-atomic level. It follows on from classical physics, which describes things like why the apple hit Newton on the head, and why we don’t sink in a swimming pool.

I could go into detail at this point about all the nuances of the differences between quantum and classical physics, but most of it doesn’t really impact QKD, as all we really care about are photons, the carriers of light. All we need to know about photons is that they have a direction and a polarisation. The direction bit is fairly self explanatory, but polarisation needs a quick explanation. Photons oscillate (go up and down like a wave) at a frequency depending on their energy; for this we don’t really care about the frequency, but imagine what that looks like if it was going slowly. A photon only oscillates in one plane, so if it were to go through a slit, the slit would have to be at the same angle as the plane the light was oscillating in for it to pass through. This is actually how polarised sunglasses work: they only let through light of a certain polarisation, so only some of it gets through.

The creepy thing about this polarised light is what quantum physics tells us about the polarisation. You can only know what polarisation a photon has by measuring it, which might seem like an obvious thing to say, but it’s a key part of quantum physics: until that photon is measured it has an unknown state, so it can be seen as having every state. This is the part that makes QKD work. Imagine you sent a series of photons with binary bits encoded in their polarisation, so each photon is at a different angle depending on the bit it represents. The person reading the photons measures the polarisation, which means the photons now have a known state, i.e. it has been chosen. The key point is that if somebody tried to intercept a photon and then resend what they read, the resent photon cannot be guaranteed to be exactly the same, so there will be mismatches between the data sent by the sender and that received by the receiver. If the two parties compare a small portion of the data over an insecure channel, they will realise the data has been tampered with and start again.

Now imagine that data was a “secret” key. It would mean there is a way to guarantee that the key you sent was not corrupted or intercepted by anyone, and that it is shared safely between the people communicating. On top of that, you don’t have the problem of prime factors and quantum computers defeating it. In fact it may be the way that quantum computers (especially optical ones) communicate among themselves and with others, as they utilise the same phenomena to function.

That is QKD in a nutshell, and why it could be the thing that changes encryption, something we all rely on every day, for the better. For further reading, look up the first ever QKD protocol, named BB84, which explains how QKD started to be implemented in the real world.

Wartime RAF Harwell

As we found out in a previous post on how RAF Harwell was created, the site was taken over by the RAF between the 2nd and 12th of February 1937. The first aircraft, flown in that April, were Hawker Audaxes of No. 226 Squadron, in from Upper Heyford. They were quickly followed by Hawker Hinds of No. 105 Squadron from Old Sarum in Wiltshire. These were all biplanes with open cockpits, the pilots wearing leather flying helmets with huge goggles, maybe even a trademark scarf and bomber jacket to go with it. Just imagine that scene in Blackadder where Baldrick is hanging out the back of the plane. That was until later that year, when No. 105 (B) and No. 107 (B) Squadrons brought in the brand new monoplanes: the Fairey Battle and the Bristol Blenheim. The first Fairey Battle arrived in August, with both squadrons fully equipped by October 1937.

On the 9th of May 1938, His Majesty King George VI and Air Chief Marshal Sir Edgar Ludlow-Hewitt visited Harwell as part of a tour of four airfields, one for each of the major commands: fighter, bomber, coastal and training. At this point in time RAF Harwell was still a bomber station, so was visited as such. The tour itself was brief at only 50 minutes, with the King inspecting a line of bombers, most of which were flown in for the occasion. He also visited the aircraft hangars, stores, dining halls and armament sections, finishing up in North Drive to inspect the married officers’ quarters, allegedly some of the best in the country. He was then whisked off to RAF Upton. During the short visit, the A34, which runs right by the site, was lined with waving crowds. Just one week later, on the 16th of May, the bomb stores began loading what would eventually be 240 tons of bombs, shells and bullets supplied from the depot at Altrincham. This is the same bomb store that sat at the end of the runway, meaning there were a few close run-ins with pilots that didn’t gain enough speed to take off. When the site became an Operational Training Unit (OTU) in 1939 the King made a second visit to inspect No. 15 OTU.

King visiting Harwell
His Majesty King George VI, Marshal of the RAF, visiting Harwell on the 9th of May 1938. Credit: RAF, National Archives.

On the 10th of June 1938, four German officers visited the airfield by arrangement with the Air Ministry, with the German Air Attaché (an Air Force officer who is part of a diplomatic mission) visiting a year later in June 1939. They were likely looking for weaknesses in the airfield's design. On Empire Day 1939 (24th May) RAF Harwell held a public open day, inviting 11,000 visitors to come and see what the airfield looked like. There were reportedly many 'charabancs' (motor coaches) from around the UK. There were also an unknown number of guests from Europe, likely including a few German spies, who could visit easily thanks to the reduced security of an open day. Many areas of the site were of course fenced off to the public for safety and secrecy. There was one notable visit later on, by King Haakon of Norway. During the visit a display was put on in which three Avro Ansons flew in formation. Unfortunately two of them collided at low altitude, and one of the pilots' parachutes failed to open in time. He died, with his plane crashing near Hendred Wood.

Hawker Hurricane, likely at Harwell
A Hawker Hurricane II held down, likely at Harwell. Taken in 1940. Credit: Paul Nash, the Tate.

This accident showed that flying was still a dangerous job, and the most dangerous flying (outside battle) was at night. The landing strips were marked out at night by "goose-necked flares", which looked a bit like a watering can or oil lamp. They burnt paraffin, with a big wick sticking out of the spout. The danger with them was that when the wind changed, the flame could warm the chamber, potentially ending in an explosion. The ground of the airfield was well suited to its job as it had very deep ground water, meaning it was very unlikely to flood. That being said, anyone living in the area knows the ground is full of clay at the surface, and the famous chalk ridges to the south reach the site. When it all mixed together it gave everything a sticky white coating; planes, cars and boots were all affected. In 1940 all this was over though, with the McAlpine company contracted to build three concrete runways. It used stone from a quarry just up the road in Sutton Courtenay, which afterwards became a water treatment plant and is now a lake (bounded by Churchmere Rd and All Saints Ln). As well as this, the old paraffin lamps were replaced with electric runway lights. These lamps were built to last, with some uncovered half a century later still working after decades buried!

Goose-necked flare
Goose-neck runway light from Tiree Airport. Similar flares would have been used at Harwell. Credit: An Iodhlann

The winter of 1940 was known as a particularly cold one. Before planes could land, men with shovels would have to go out to move the snow out of the way. At the start of the war, the Fairey Battles left for France, with Wellington bombers taking their place. The first attack on the site came in February 1940, by a Heinkel bomber with its pale grey body and black crosses on its sides. Later that year, on the evening of the 16th of August, two bombers were being refuelled by the mound at the rear of hangar 7. A lone German plane came in via Rowstock (NE of the site), dropping four bombs and strafing First Street. Both aircraft were destroyed, along with another nearby, and two men were killed. One of the airmen died trying to pull a burning bowser (a type of storage tank on wheels) away from the storage tanks. A bullet did get into the ventilation pipe but did not set the main fuel tanks on fire. There was another raid that night at midnight, then another three days later. The raid of the 26th of August was the most serious, with four bombs dropped on the bomb dump, and six civilian men killed while building a wall. In August 1942 a single aircraft managed to drop seven large bombs on the airfield, four of which failed to explode. It was at night, and some pilots thought they saw a cat in a shower of sparks running between hangars 9 and 10. It was actually a 500 kg bomb! The unexploded bombs were made safe, emptied, painted white and mounted on the wall of the CO's office in B77. After the war the scientists buried them in the bomb dump, where they were found again in 2002.

storage tanks
The storage tanks at the rear of hangar 7. Credit: RAF, National Archives.

There was plenty of defence against attacks, an important part being the air raid shelters littered around the site. Land surveys in 2003 in the SW corner of the campus revealed four underground air-raid shelters. There are also lots of concrete tunnels connecting buildings around the site. Most of these tunnels are long forgotten, and many were not on any maps or plans even at the time, for security reasons. Subterranean tunnels linking B150 with B151, and many air raid shelters, came to light in surveying by UKAEA in the late 1990s. The cellar underneath 'B' mess (B173) was also served by a tunnel that emerged via vertical steel steps into shrubbery 15 m away. This was apparently still accessible in 2005. Other similar structures and tunnels were constructed with half-inch-thick steel blast doors.

RAF Harwell Pill Box
A pill box just outside the Curie entrance of what is now Harwell Campus. Credit: Steve Carvel
pill box in the snow
The same pill box as above, but in the snow.

During an air raid in 1943 a German Junkers 88 bomber got into trouble and dropped its bombs over the countryside between Upton and the A417 to Rowstock. The aircraft came down on the airfield and the two crew were captured as prisoners of war. Interestingly, when released a few years later they actually stayed in England and worked for the Thames water board. The last attack was in 1944 by a 'doodlebug' flying bomb, which destroyed three aircraft. The war ended on the 2nd of September 1945, and just a couple of months later there was a visit by John Cockcroft of DSIR, the Department of Scientific and Industrial Research. This was a very special reconnaissance mission, and the beginning of the end of the RAF's occupancy of Harwell. Cockcroft got a "somewhat frosty reception" by all accounts, but it made sense given the military secrets held at RAF Harwell, a heritage that was seen as useful to DSIR. This was the beginning of the age of Harwell at the heart of atomic research, but that is for another post.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or any other sort of history. Alternatively follow me on Twitter to get updates on projects I am currently working on.


Red Star: The Soviets can Capture Enemy Planes Too


Read any book about the United States Air Force during the Cold War and you will probably find a section about the secret fleet of Soviet fighter jets that they kept, tested and stole technology from. The lesser-known part is that the Soviets also captured US planes during conflicts, although seemingly fewer overall. This is the story of the F-5 that ended up deep in Russia.


It wasn’t actually the Russians that captured the plane in the first place, it was the Vietnamese. At the end of the Vietnam War there was a lot of captured American military equipment in different forms. Vietnam, a famously communist country, gave several samples of captured US aviation equipment to the USSR, among them an F-5E light fighter-bomber. In total 27 F-5Es were captured during the war, along with 87 F-5As; overall 877 aircraft were captured. The Vietnamese actually plan to bring some back into service. This particular F-5E had serial number 73-00807, and was an extremely valuable intelligence coup with the ability to tell the communists about American design, how this kind of mass-produced plane could function, and therefore how they could design planes to counter it.


The plane was sent to the VVS airbase at Chkalovsky before being transferred to the Akhtubinsk air base not long after. Engineers and research staff from the aeronautical research institute formed a test team to investigate the American fighter jet and test its abilities. Overall they were impressed with the design of the jet, and admired the ease of maintenance of the F-5E while they operated it. They were also impressed with the wing design, as it gave the jet impressive flying ability at high angles of attack and minimum speeds. The F-5E was known officially as the Tiger II. From the end of July 1976 to May 1977, a full-scale flight test programme was conducted at the Air Force Research Institute. A. S. Byezhyevets and V. N. Kondaurov, both decorated Heroes of the Soviet Union, were the pilots in charge of the test flights.

test report of the USSR F-5E

They were surprised by the results: the F-5E was much more maneuverable than most Soviet aircraft, especially the MiG-21, the highly capable Soviet dogfighter of the time. It even showed some advantages over the MiG-23, the most advanced Soviet fighter of the time. That being said, it was noted that the F-5E did have a disadvantage in vertical maneuverability and energy when compared to the MiG-23. It also had a lacking arsenal, with no beyond-visual-range medium-range missiles, which the MiG-23 could carry. The Central Aerohydrodynamic Institute (TsAGI) in Moscow was in charge of static tests of the aircraft, with the results exhaustively recorded. It is interesting when you look at planes such as the T-8 and the T-10, as you can see some design features obviously lifted from the F-5E. Eventually it was moved in the 1990s, or at least the nose was, to a display area known as Hangar 1, which is now virtually impossible for any outsiders to visit.

The USSR F-5E on display with descriptions around it

Thank you for reading, take a look at my other posts if you are interested in space, electronics, physics or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


What is an Atom Chip?

Atom chip by RAL Space
Cold atom chip as a source for atom interferometer​. Credit: RAL Space, STFC, UKRI

If you follow physics or science news, you will know that a huge part of current physics research is in the field of particle physics. Scientists aim to understand and harness the power of atoms. In laboratories across the world, scientists have been using silicon circuitry to sense the effects of their experiments, with huge silicon detectors being commonplace. You will also find silicon circuitry in the driving circuits of things like magnets and lasers, but these instruments are usually large, as they only need to fit in a lab; there is no need to minimise. There is also an exciting up-and-coming area of physics that uses all of these techniques to truly harness the power of the atom, known as cold atoms. The world of cold atoms is built on trapping a small number of atoms in a very small area and cooling them to very close to absolute zero. At this temperature the quantum effects of the atoms take over, and can be observed and maybe even harnessed.

Atom chip
A close-up of an atom chip by The Atom Chip Lab at Ben-Gurion University.

This is where atom chips come in. They are not the only way to do cold-atom experiments by any means, but they are becoming a popular way to practice the art. The popularity is down to how small the overall circuit is, and the smaller amount of instrumentation needed to drive it. That being said, they are also more temperamental, and much more sensitive to things like noise. The way to trap atoms in an area is to use electric, magnetic and optical fields, all of which can control the location and activity of the atoms. Atom chips use these three fields to confine, control and manipulate the cold atoms. If you imagine a normal Integrated Circuit (IC), the electrons move through the surface, through things like transistors, capacitors and resistors. In atom chips the atoms are trapped above the surface, and using forces that we can control, we manipulate their motion and internal state. The electric, magnetic and optical fields come from small structures on the chip, sometimes protruding out.

Atom chip at Vienna University of Technology
Another example of an atom chip at TU Wien. Credit: Vienna University of Technology

The area that the atoms are held in is often around 1 micrometre squared, and the number of atoms is around 10,000. This is a surprisingly small number when you think about it; it is roughly the number of students you would find at most universities. The atoms are held at a few hundred nanokelvin, and due to the design are often well isolated from the warm solid-state environment around them. This allows their quantum state to remain undisturbed for tens or even hundreds of seconds. This is partly the basis of modern atomic clocks. In fact the atoms used are usually the same, strontium or cesium. When you see images of modern atomic clocks, there is usually some sort of atom chip controlling the cold atom cloud directly. This is down to the ease of reducing both the size and complexity of the clocks without impacting the resolution of the clock circuit itself.

Cold Atoms Lab ISS
An artists impression of the Cold Atoms Lab on the International Space Station. Using techniques similar to the ones mentioned here. Credit: NASA/JPL-Caltech

The basis of the trapping part of the circuit is something known as a magnetic trap (sometimes known as a micro trap). Imagine a wire; for the moment we will imagine it is straight. When a current runs through it a magnetic field is created around it, a bit like a tube around the wire at a certain distance. This is the red line on the diagram below. As you learnt in physics class, the intensity of that magnetic field is directly proportional to the current running through the wire: control the current and we control the magnetic field. In a magnetic trap there is also another magnetic field applied across the entire experiment, which we can assume is constant and uniform. This is represented by the green line, and is called B. Although there is only one green line, the magnetic field is everywhere; the green line is just the bit we really care about. Now, it took me a while to visualise this, but these two magnetic fields interact and add up. So if the wire's magnetic field is travelling the same way as the field B at some point, then the total field gets stronger there; if the fields oppose, the field gets weaker at that point. This means there is a magnetic gradient across the entire experiment.

The point we care about is where the total magnetic field is zero, meaning the wire's magnetic field is equal to, and opposing, the field B. As the magnetic field from the wire gets weaker further from the wire, there is a point at a certain radius (R on the diagram) away from the wire where this is the case. The atoms used want to be in the lowest energy state and try to get away from the magnetic field, so they will "seek out" the point with the minimum magnetic field, in this case the point a distance R away from the wire. The wire now has a single line of trapped atoms a distance R away from it. Now imagine that wire is bent into a circle with a radius of R. All those atoms are no longer trapped in a line, but at a single point in the center of the circle. In practice, to get the required magnetic field strength it will be a coil rather than a single wire, but the concept is the same. You now have a collection of atoms trapped in a small area defined by you, ready for an experiment. Most of the time the atoms are then cooled further with lasers, or trapped and compressed further. This allows experiments with Bose-Einstein condensates, and the potential to make "qubits" for quantum computers, but that is a post for another day.
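To put a rough number on where that field minimum sits: the field of a straight wire falls off as μ0·I/(2πr), so setting it equal to the bias field B gives the trap height. A minimal sketch, with purely illustrative values for the current and bias field (not taken from any specific experiment):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def trap_radius(current_a, bias_field_t):
    """Radius R where the wire's field mu0*I/(2*pi*R) exactly cancels
    the uniform bias field B, creating the zero-field trapping point."""
    return MU_0 * current_a / (2 * math.pi * bias_field_t)

# Illustrative numbers: 2 A through the wire, 2 mT (20 gauss) bias field
r = trap_radius(2.0, 2e-3)
print(f"field minimum sits {r * 1e6:.0f} um from the wire")  # -> 200 um
```

Notice the knobs this gives an experimenter: turning the current up pushes the trap further from the chip surface, and strengthening the bias field pulls it closer in.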

Thank you for reading, take a look at my other posts if you are interested in space, electronics, physics or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


The Difference Between Bolts and Screws

It is almost an age-old question, and for many out there these two words are almost interchangeable. Ask one engineer and they will give you an answer; ask another and you will likely get a slightly different one. Over the last few hundred years engineers have developed fasteners, sometimes for very specific applications, so the line can sometimes become blurred. Some fasteners can only be defined once they have been put into an assembly, being dependent on the design. There are a few standards out there that make an effort to define this, with varying degrees of vagueness, but I am going to try and make sense of it in this post and give you a few examples to make it more obvious. So let's put a few definitions into the mix. The best definition I can find is from the Specification for Identification of Bolts and Screws, ANSI-ASME B18.2.1 1981. This document has been superseded a few times, but the changes have added many more extra definitions rather than changing this one. Their definitions are:


A bolt is an externally threaded fastener designed for insertion through the holes in assembled parts, and is normally intended to be tightened or released by torquing a nut.


A screw is an externally threaded fastener capable of being inserted into holes in assembled parts, of mating with a preformed internal thread or forming its own thread, and of being tightened or released by torquing the head.

I like these definitions as they put it in super simple terms that you can get your head around. Think of how you tighten the fastener: if you have to use the head then it must be a screw, if you have a nut on the other end it must be a bolt. From there it gets a bit more complicated, as you can also often tighten a bolt by the head, but the point is that you can use either, whereas a screw can only be tightened or loosened by turning the head. The other key point is that a bolt should not be tapping its own thread in the part it fastens to. A screw doesn't have to form its own thread in the material, but if it does, it can only be a screw. Basically, if it is pointy it is likely to be a screw; if it is flat-ended it is more likely a bolt (but not always, as we will see). There is also one other way to loosely define a bolt, and that is the way it drives. A screw drives from the center (like an Allen head or flat head screw), but a bolt tends to need to be fastened with a wrench, so away from the center. Some companies like Accu group use this as a definition, but it is not defined in most standards I have read; still, it is a good rule of thumb. Now let's look at a few examples to get a better idea:

Bolts That Cannot be Unfastened by the Head

A definite subset of bolts: the round head, oval head and plow bolt have no way to be undone via the head. The round head and oval head both protrude above the surface, but are completely smooth and rounded at the edges, so there is no surface for a wrench to lever against, and they have to be fastened by a nut on the other end. These bolts usually have a non-circular area near the head to stop them from turning while the nut is being attached.

Externally threaded fasteners with a head that cannot be used to fasten them in place are bolts. Credit: (1)

Screws That Cannot Use a Nut

The classic screw is something we are all familiar with: it tapers to a point, often has a coarse, wide-pitched thread running along its length, and cannot use a nut. The tapering prevents the use of a nut; this describes the classic wood screw. Other screws, such as tapping screws and grub screws with points or shanks, are also definitely screws by the fact that they often make their own thread and have a point at the end to make some sort of non-threaded connection with another part.

An externally threaded fastener with a thread that cannot be used with a nut is a screw. Credit: (1)

Bolts That Need a Nut to Function

Some bolts, such as the hex structural bolt, have a shaft the same diameter as the thread (no shoulder), and therefore go through a part and need to be attached to a nut on the other side to be fastened. The bolt has a smooth shaft near the head which cannot be fastened on its own, therefore needing a nut. The fact that it needs a nut means it has to be a bolt. Most classic bolts need a nut to work in an assembly, and this is the best way to recognise a bolt over a screw.

A hex Structural bolt is a great example of a bolt that needs a nut to function as the lack of thread near the head cannot be used to fasten. Credit: (1)

Screws That Look Like Bolts

This is where things can get a bit iffy: fasteners that on the face of it look like bolts but act a bit more like screws. Set screws, for instance, are screws, as they never use a nut; they are usually used to secure an object within or against another object, such as attaching a gear or pulley to a shaft. The other is the shoulder screw, which looks much like a normal bolt but differs in that the non-threaded shaft is bigger than the thread, hence the shoulder. The threaded part does not tend to be screwed into a nut, leading to the definition of it being a screw rather than a bolt. They tend to be used as a shaft for rotating parts like pulleys or gears.
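The rules of thumb above can be condensed into a toy decision function. To be clear, this is only a sketch of the logic as described in this post, not anything taken from the ANSI/ASME standard itself, and the "iffy" edge cases show why real classification ultimately depends on the assembly:

```python
def classify_fastener(forms_own_thread, needs_nut, head_drivable):
    """Toy classifier following the rules of thumb in this post."""
    if forms_own_thread:
        return "screw"  # a bolt never taps its own thread
    if needs_nut:
        return "bolt"   # e.g. a hex structural bolt or round-head bolt
    if head_drivable:
        return "screw"  # tightened or released only by torquing the head
    return "depends on the assembly"

print(classify_fastener(True, False, True))    # wood screw -> screw
print(classify_fastener(False, True, False))   # round-head bolt -> bolt
print(classify_fastener(False, False, True))   # shoulder screw -> screw
```

Note the order of the checks matters: self-tapping wins outright, then the need for a nut, and only then the head-driven test.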


  1. Distinguishing Bolts from Screws – U.S. Customs and Border Protection – July 2012
  2. If you can get access – Specification for Identification of Bolts and Screws, ANSI – ASME B18.2.1 1981
  3. If you can get access – Square, Hex, Heavy Hex, and Askew Head Bolts and Hex, Heavy Hex, Hex Flange, Lobed Head, and Lag Screws (Inch Series), ANSI – ASME B18.2.1 2012
  4. For interesting reading about a court case about this: Rocknel Fastener, Inc. v. United States, 24 C.I.T. 900, 118 F.Supp. 2d 1238 (Ct. Int’l. Trade 2000)

LM3909 – An IC Just to Flash an LED

So during my placement year I was getting really into old electronics and old ICs, especially those no longer in production. We were also on a project where we were trying to design a circuit that would flash an LED for a short period of time from the charge on a small supercapacitor. The big issue we had was how to minimise current flow and power an LED at really low voltages, less than 2V. On the face of it this seems like a simple problem, until you start to think about it.

Million Mile Light
Products like the Million Mile Light, a flashing low powered, high brightness LED indicator need to flash for long periods of time on very little charge, much like the problem I faced. Credit: Million Mile Light

There are two go-to ways that most engineers would use to make an LED flash at a constant rate. The first is to use a small microcontroller, such as an ATtiny or a PIC12F series, and use software to flash the LED. This seems good on the surface (and it is what we used in the end product) but it has a big drawback: it can only output a voltage less than the power rail. Some versions of the PIC12LF can function down to 1.8V, perfect for our power supply needs, but LEDs usually need upwards of 2.7V before they start to light, so although our micro will work, the LED won't. The second go-to way to make an LED flash would be to use the classic 555 timer, one of the most manufactured chips of all time. There is a good reason it is famous: it is extremely versatile, and you can set the frequency purely through the capacitor and resistor choices. We still have a similar drawback though: a 555 timer needs at least 4.5V as a power supply. So with our potential sub-2V power supply, neither the IC nor the LED will turn on. That is one way to conserve energy!
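For reference, the 555's astable blink rate really is set by just two resistors and a capacitor; the standard datasheet relation is f = 1.44 / ((R1 + 2·R2)·C). A quick sketch with example component values (my own illustrative picks, not from the project):

```python
def ne555_astable_hz(r1_ohms, r2_ohms, c_farads):
    """Standard 555 astable frequency: f = 1.44 / ((R1 + 2*R2) * C)."""
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

# Example values: R1 = 1 kohm, R2 = 470 kohm, C = 1 uF
f = ne555_astable_hz(1_000, 470_000, 1e-6)
print(f"{f:.2f} Hz")  # -> 1.53 Hz, a slow LED blink
```

Making R2 much larger than R1, as here, also keeps the duty cycle close to 50%, since the capacitor charges through R1 + R2 but discharges through R2 alone.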

A cut out from the datasheet, with a basic view of the circuit inside the IC, which I will go through in another post, and a pinout diagram, very useful for prototyping. Credit: National Semiconductor.

This is where the LM3909 came into play. You have to remember that this chip was developed prior to 1995 (so it is older than me) when the electronics market was very different. Battery technology was not the same, and nowhere near as cheap. It was much more common for people to want to use off-the-shelf single-use batteries such as AA, C and D cells, or even coin cells, in most projects. If you wanted something with a little flashing light on it, there were plenty of applications: buoys in the ocean, store signs and displays, Christmas lights, all of which would benefit from minimising the weight of batteries while lasting for serious amounts of time. Just as a reference, you could get to 4.5V (to power a 555) by using 3 AA batteries, but the voltage across them would soon dip below this, so you would need at least 4 in most applications. 4 AA batteries take up a lot of space and weight, not great for many of these applications. Plus most of the chips we have discussed use a fair amount of power; the 555 uses at least 3mA while running, not including the dissipation in the resistors and all of the power wasted charging the capacitor.

1.5v schematic
A snippet of the datasheet, showing the simplest connection diagram, and a graph of typical current consumption with relation to the battery voltage. It also has a great table describing how long standard batteries tend to last in this configuration, up to 2 years! Credit: National Semiconductor.

So how does the LM3909 solve these issues? Well, it makes use of a clever concept, similar to the 555, of charging up a capacitor. The difference is that the 3909 uses the charge in that capacitor to flash the LED. Although it is slightly more complex than the below schematic, you can think of it as a switch inside that oscillates between two states. We will go through how it actually works in a future post. To start with, the capacitor is in series with the battery and in parallel with the LED. The LED won't light, but the capacitor charges up to near the power supply voltage. Once charged, the switch inside flips, and now the power supply, charged capacitor and LED are all in series with each other. The LED now sees the capacitor (charged to near 1.5V) plus the 1.5V power supply, equivalent to nearly 3V, more than the forward voltage it needs to turn on. As there is very little resistance, the LED will be on as long as the capacitor has some charge, which isn't very long as it discharges fairly quickly. This is the "flash": once the cap is discharged the LED turns off, and the switch flips back. The capacitor starts charging again, and the whole process restarts. This goes on for as long as the battery has power to give.
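The voltage-stacking trick is easiest to see with numbers. A toy sketch of the two phases follows; the LED forward voltage is an assumed typical red-LED figure, not something from the datasheet:

```python
V_BATT = 1.5   # single AA cell
V_F_LED = 2.0  # assumed forward voltage of a typical red LED

def led_sees(v_cap, flash_phase):
    """Voltage available to the LED in each phase: while charging it sees
    at most the cell voltage; in the flash phase the charged capacitor is
    stacked in series with the cell."""
    return V_BATT + v_cap if flash_phase else V_BATT

v_cap = 1.4  # cap charged to near the supply during the charge phase
print(led_sees(v_cap, flash_phase=False))  # 1.5 V, below V_F -> LED dark
print(led_sees(v_cap, flash_phase=True))   # 2.9 V, above V_F -> LED flashes
```

The key point: neither the cell nor the capacitor alone ever exceeds the LED's forward voltage, but stacked in series they do, briefly, which is exactly the flash.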

Rob Paisley
A great description of the basic principle of how the LM3909 works, charging the capacitor up, and then releasing all that energy through the LED, with increased voltage. Credit: Rob Paisley.

A couple of points to note: the timing and the brightness of the flashing are based upon the capacitor you use, which is quite clever. There are two settings; depending on the pin you connect the capacitor to, the time the cap takes to charge will double (or halve). This means slower flashing, but a longer lifetime. A smaller capacitor will mean faster but less bright flashing, and a bigger cap will therefore give slower and much brighter flashing. The design of the chip also means that only two external components are needed for it to work, a capacitor and the LED, compared to the several resistors and extra cap needed for something like a 555 timer. The fact it can run from less than a 1.5V power source means we can use a single AA battery to power this device, and according to the datasheet it can last up to 6 months on one battery! I have one on my desk that has lasted longer than this.

my LM3909 circuit
My version of this circuit, fitted into an AA battery box and powered by a single AA battery. It has a switch so I can turn it on and off. Poundland LED lights are a good source of these!

All in all, I can see why National Semiconductor decided to make this chip; it filled a gap and was used widely for a long time. Developments in battery technology, and the more complex designs needed for its old applications, have meant that the LM3909 is no longer made, but they are still available on eBay and some Chinese manufacturers make them. There is also a design out there for a discrete version of the LM3909, and I may try that for a future post, as it looks interesting.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.


Delta 4 Medium Makes Penultimate Launch

John Kraus photos
A great image taken by John Kraus of the Delta 4’s main booster and four smaller boosters, and the awesome power they produce. Visit his patreon to find more! Credit: John Kraus

Just after midnight, at 00:23 UTC on March 16th 2019, a Delta 4 Medium rocket placed a US military network relay satellite into orbit. Launching from Space Launch Complex 37B at Cape Canaveral Air Force Station in Florida, the 66 meter tall Delta 4 is nearing retirement, with this being its second-to-last launch. After several technical issues, the ground teams eventually got the rocket and the satellite tracking network functioning correctly. The hydrogen-fueled RS-68A main engine ignited around 5 seconds before liftoff, with the hold-down bolts releasing at T-0, sending the rocket away with 1.8 million pounds of thrust. This mission extended ULA's streak of successful missions to 133 since its inception in 2006.

Marcus Cote
Maybe the photo of the night by Marcus Cote, showing the huge exhaust plume created by the Delta 4 in 5, 4 configuration. Credit: Marcus Cote
Marcus Cote
A great time lapse of the Delta 4 launching WGS10 satellite into a geostationary orbit. Credit: Marcus Cote.

The rocket headed east over the Atlantic Ocean, aiming to place the communications satellite into its final operating position 36,000 km (22,000 miles) above the equator in geostationary orbit. The solid rocket boosters burned out and were jettisoned in pairs roughly 1 minute and 40 seconds into flight. The first stage continued to burn until around 4 minutes into flight, when the main engine cut off, with stage separation shortly after; the spent stage then fell back into the Atlantic Ocean. The upper stage was powered by an RL10B-2 engine made by Aerojet Rocketdyne, the same manufacturer as the main engine. The upper stage engine ignited twice to push the satellite into an elliptical transfer orbit, and the satellite separated from the second stage at T+36 minutes 50 seconds.

An image showing the scary power of the rocket boosters at liftoff, the rocket firing 1.8 million pounds of thrust into the ground trying to escape the Earth. Credit: ULA.

On board was the WGS 10 military communications satellite, a 6,000 kg (13,200 lb) broadband satellite joining nine others that have been placed in orbit since 2007. The idea is to form a globe-spanning network that can relay video, data and other useful information between the battlefield and headquarters, wherever they may be. The WGS fleet transmits both classified and unclassified information, and supports the US and its allies. On board is a digital channelizer that allows the satellite to relay signals using high data-rate X-band and Ka-band frequencies during its 14-year expected life. All of the WGS satellites were launched on ULA rockets, the first two on Atlas Vs and the rest on Delta 4s. This mission had an estimated price tag of $400 million.

Glen Davis
An almost artistic image of the Delta 4 medium launching. Heavily edited, but still capturing that raw power. Credit: Glen Davis

This marked the second-to-last flight of the Delta 4 Medium variant, which is recognisable as having only a single first stage core, whereas the Delta 4 Heavy has three. ULA are retiring certain areas of their launch family as they plan to debut the new Vulcan booster soon, which will apparently be cheaper than their current offering. The decision to stop selling Delta 4 Medium flights was made in 2014, but this and the next launch were already on the books at that time. The Delta 4 Medium is apparently more expensive than the Atlas V, with a similar launch capability, which is the reason for its retirement; ULA described it as cheaper to fly a few launchers frequently than many launchers sporadically. The bigger Delta 4 Heavy will continue to launch heavier payloads well into the mid-2020s. Another reason for keeping the Delta 4 Medium had been to give the US military two choices for launching their payloads, this and the Atlas V; now that the Falcon 9 is cleared to fly military satellites there is less need for the Delta variant.

Marcus Cote
The Delta 4 sitting on the pad, ready to launch the WGS10 satellite. Taken close up by Marcus Cote the day before, when setting up the remote cameras for the launch. Credit: Marcus Cote.
Mike Seeley
A behind the scenes photo of setting up cameras before the launch. Credit: Mike Seeley.

Thank you for reading, take a look at my other posts if you are interested in space, electronics, or military history. If you are interested, follow me on Twitter to get updates on projects I am currently working on.
