
Thursday, 11 February 2021

System safety

A couple of decades ago, a company was working on a new transport system that was *the future*. It promised fast, silent, and comfortable travel that had the potential to replace both rail and air.

They got millions in funding, and developed a fully-functioning record-breaking prototype.

A publicity document (1) mentioned 'safety' several times. It claimed:

"Collisions between (the) vehicles are also ruled out due to the technical layout of the system and the section-wise switching of the ”guideway motor“. The vehicle and the traveling field of the guideway motor move synchronously, i.e. with the same speed and in the same direction. Additionally, the section of the longstator linear motor in which the vehicle is moving is only switched on as the vehicle passes."

In other words, you can only have one vehicle on a track at once. This sounds brilliant, as you can only have a collision if two vehicles are on the same track, and the system does not allow two vehicles on the same section of track.

The system was the German Transrapid Maglev system. 

In September 2006 (2), a Transrapid Maglev vehicle was in a collision at Lathen (3), killing 23 people. It collided with a maintenance vehicle on the track; a maintenance vehicle that did not depend on power from the track, and therefore 'defeated' the inherent safety systems mentioned in the paragraph above. Add in an earlier-than-usual Maglev test run, and multiple staff errors, and you had a tragedy. 

No-one wanted the crash to occur; it was an accident, and yet it was caused entirely by human error, not an act of nature. The systems were not in place to prevent it.

What can we learn from this? Simply, safety is difficult. Human and technical errors compound safety issues, and therefore you require safety in depth with many fail-safes. These lessons have been learnt the hard way over a couple of centuries on the 'traditional' railway; they should not be forgotten by new systems, as the lessons are often paid for in human blood.

Most of all, safety has to be built-in to the system, not an afterthought. No system can be made safe by liberal applications of handwavium. And I fear this is a major issue with the proposed Hyperloop systems.

(1): TRI_Flug_Hoehe_e_5_021.pdf

(2): Sadly, the document is undated. However, it obviously dates from before the crash.

(3): https://en.wikipedia.org/wiki/Lathen_train_collision

Sunday, 20 January 2019

Book review: "Slide Rule: Autobiography of an Engineer", by Nevil Shute

Many books immerse you in a bygone world. Sherlock Holmes plunges you into a mid- and late-Victorian London, whilst Philippa Gregory drowns you in Tudor intrigue. "Slide Rule" takes you soaring through the aeronautical world of the 1930s.

Nevil Shute was one of the best-selling authors of the 1950s, with books such as 'On the Beach' and 'A Town Like Alice', and he is most famed for his writing.

However, Shute was also an engineer, and 'Slide Rule' covers that portion of his life, before the Second World War and literary fame intervened. His early life is mentioned, including a fascinating portion about his time in Dublin during the Easter Rising (his father was head of the Post Office in the city at the time, although he was fortunately outside the building when it was taken over). To get him away from the troubles, his parents sent him to Oxford.

After Oxford, he went to work for de Havilland at the start of that distinguished company, and learned to fly - a skill he loved, and one that proved very useful in his later work. It was a time of rapid change in the aeronautical industry, and he soon moved on to Vickers for the start of the massive R100 airship project. Much of the book covers his work on this ship, and its rivalry with the ill-fated government-run R101 airship. He became the project's Deputy Chief Engineer by the age of 30 - something that perhaps could only happen in what was a 'young' industry.

One theme of this book is socialism versus capitalism, especially when it comes to engineering. In his view, the excess money (unfairly) thrown at the R101 project hindered it, whilst the fixed-cost contract Vickers had for the R100 forced them to be efficient. I got the impression that he was too involved with the project to be truly impartial, and besides, the costs of such projects are now so great that any lessons are probably irrelevant: even SpaceX relied on government money via NASA to develop their Falcon 9 rocket.

An interesting section of the book details how stress calculations for the R100 were completed. Two men ('calculators') would work for weeks calculating the stresses on the ring of girders forming a section of the ship, finding mistakes or problems and recalculating, until eventually the calculations done by different means agreed. These would just have been a small part of the calculations the ship required, and it highlights how much time and effort was required to do something that nowadays might only take a few seconds on a computer.

The R101 disaster caused the government to turn its back on airships - a move Shute admits was probably for the best given the rapid increase in aeroplane performance throughout the 1930s. Out of a job, he decided to start his own company with a fellow R100 engineer, Alfred Tiltman. They named their company 'Airspeed', and the second half of the book highlights the problems of starting a new company in a rapidly - and radically - changing industry. Shute is disarmingly honest about some of the financial techniques he used to keep the company afloat and how, if the dice had rolled differently, he could have ended up in jail!

He eventually left Airspeed in 1938, his capabilities as Managing Director being more suited to running a young company than a relatively mature one with a bulging order book. He does not give the impression he minded leaving, nor does he appear to object when, during the war, de Havilland took over Airspeed.

This is very much the autobiography of an engineer, and his personal life is scarcely mentioned. His wife only graces the pages on a few occasions - mostly in how her job allowed him a little financial security. It would have been nice to have heard more about her, and his two children only get a short mention at the end of the book. It would also have been nice to hear more about Shute's work during the Second World War, when he worked on special projects and weapons. Perhaps that was because this book was published in 1953, when the events of the war were still raw and many special projects were still secret. It feels as though the book ends too soon.

I'm slightly surprised I had never read this book before, but I shall be reading it again in the future. Shute may have got fame from his writing, but his other work probably had more of an impact on the world.

4 out of 5.

Thursday, 24 July 2014

Dawlish diversions

In February, part of the railway line at Dawlish was destroyed by the sea. Some heroic work by Network Rail and its contractors saw the line reopen in April at a cost (to the railway) of £40 to £45 million. However, that work left longer-term questions about the viability of the coastal route, especially if sea levels rise as expected.

Campaigners favoured various options:
  • Reopening the Teign Valley branch. This was a heavily-graded, single-track line, most of which was closed in the 1960s after fluvial flooding.
  • Reopening Tavistock to Okehampton. This LSWR line skirted around the north of Dartmoor. A long branch remained open to Okehampton until recently to serve a quarry at Meldon. Reopening this line would open up large areas of North Devon to rail services, but would not be ideal operationally due to time-consuming reversals at Exeter and Plymouth.
  • Tunnel under Dawlish and Teignmouth to avoid the sea wall. I have previously written about the GWR's pre-war proposals to tunnel under the hills inland, avoiding the tidal and estuarine sections. There are several options for the route.
Network Rail have now released an initial study into these alternatives, and it does not make pleasant reading for people supporting any of these proposals. The government compare planned infrastructure improvements by something called the Benefit-Cost Ratio (BCR). This is a calculation of the return on every pound invested over a period - in this case, sixty years. A BCR of two or above is seen as very good, and represents two pounds back for every pound invested.
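As a rough illustration of the arithmetic (with hypothetical numbers of my own, not Network Rail's), each year's benefits are discounted back to present value before being divided by the cost:

#include <math.h>
#include <stdio.h>

int main(void)
{
  double capital_cost = 500.0;   // up-front cost in £m (hypothetical)
  double annual_benefit = 12.0;  // benefit per year in £m (hypothetical)
  double discount_rate = 0.035;  // 3.5% per year, a typical appraisal rate
  double benefits = 0.0;

  // Sum the discounted benefits over the sixty-year appraisal period.
  for (int year = 1; year <= 60; year++)
    benefits += annual_benefit / pow(1.0 + discount_rate, year);

  // BCR = discounted benefits divided by costs; about 0.60 here.
  printf("BCR = %.2f\n", benefits / capital_cost);
  return 0;
}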

It should be said that calculating BCR is an imprecise art: working out the benefits of a scheme and allocating monetary values to them is difficult at best. But as long as the BCR is calculated in the same way, then it provides a reasonable means of comparing projects.

The Network Rail study shows that the BCRs for the alternatives range from 0.08 to 0.29. Unless flaws can be found in the BCR calculations, these schemes are absolute non-starters. Such flaws might be omitted factors, such as wider social and economic benefits, which are not currently included in the calculations. If the BCR figures are unchanged, then investment in such schemes would have to be made on an emotional, rather than financial, footing.

Therefore it looks likely that the existing route will be hardened against the sea. This too is costly, but is very much a known quantity and has the best BCR. Personally I view this as a shame as I favoured the tunnelling option, but if it is not economic, fair enough.

If the government and Network Rail are sensible, they may throw some extra money at improving the rest of the rail routes into Devon and Cornwall. For instance, a significant cause of lost time is not the stretch where the line was breached, but the South Devon banks, which include the third, fourth and seventh steepest inclines on Britain's main line railways. Whilst the gradients cannot be improved, there are many things that can be done to improve journey times.

Cornwall is amongst the poorest regions in the UK, and perhaps improving the rail line into the southwest would be a step towards changing that.

Thursday, 27 February 2014

The Dawlish Diversion

The recent problems with the seawall at Dawlish have left much of Devon and the entirety of Cornwall cut off from the rest of the rail network. This is problematic, but as I said in an earlier post, it is far from the first time it has happened. This week, Network Rail released a document that shows some of the damage done by the storms, and the work being done to fix it. The last page includes a low-resolution map showing potential long-term diversionary routes that are being looked at. These include two potential routes (C1 and C3) that have a long history.

Because of the problems with the line, in the 1930s the GWR proposed a couple of alternative routes that would bypass the troublesome coastal route. Unfortunately there is a very big hill called Holcombe Moor in the way, and therefore any line was going to be very heavily engineered.

The first alternative route headed south from Exminster, bypassing to the west of Kenton before heading more or less directly south to Dawlish, where it headed to the west of the town before curving southwestwards under Teignmouth to rejoin the existing line to the southwest of Bishopsteignton. This would require about eleven miles of new double-track railway, and is depicted by the red line below.

The alternative replaced the northern half of that route, leaving the existing line to the north of Dawlish Warren and heading under the town before rejoining the route mentioned above in Dawlish. This route would require seven miles of new double-track railway, but would be subject to tidal conditions along the Exe estuary, and would probably be slower. It is depicted by the light-blue line below.

The GWR got the former plan through parliament, and had actually started construction when war intervened in 1939. Given recent events, many people are thinking of building another diversion line, so I thought I'd take a look at what the GWR proposed.

Thanks to David Brown on the Railform Blog, I've found a map of the proposed routes. This is not the whole story as there were other proposals, but it's interesting nonetheless. I have transcribed the routes onto a modern OS map:

As can be seen, Dawlish Warren and Dawlish could still be served by trains, albeit the latter from a station a kilometre from the seafront. Teignmouth, however, would be more difficult, as the line passes behind the town in a very deep tunnel.

To my surprise, there does not appear to have been a massive amount of development over the last seventy-five years that would stop either of these lines being built. True, there would be some demolition, but not as much as I feared.

Either of these routes would be fast and weather-proof, and would serve the south of Devon with similar service patterns to those that already exist. The line could also be electrified. The downsides that I can see are cost, and the problem of giving Teignmouth a station.

If more information comes out on the C2 alternative in the Network Rail document (which seems to leave from north of Starcross, midway between the two routes above), then I shall do another post.

For another alternative proposal, see http://www.townend.me/files/southdevon.pdf

Sunday, 9 February 2014

Dawlish

I'd like to pay tribute to the engineers that are currently working to fix the railway line at Dawlish, where the sea wall that protects the railway line and houses has been breached by the recent bad weather.

When I first saw the pictures, my inexpert reaction was that the house that was left right at the edge of the breach would have to be demolished. But their work has saved it for the moment - they laid sections of the damaged track against the remaining earth, and are covering the lot with shotcrete (sprayable concrete) to form a temporary barrier.

In the meantime, they are placing 20-foot shipping containers along the recently-installed concrete toe of the wall, which is undamaged, and filling them with rubble to act as barriers to protect workmen from the worst of the waves. This is an act of genius, and will hopefully allow them to speedily rebuild this breach.

It is at times like these that engineers really come into their own, and I've been very impressed with the work that they are doing. They have some excellent people on the job, despite other problems on the network caused by flooding.

It's also given me a reason to do a little research on the sea wall, which I know well from my childhood. The wall has withstood the weather over the last 160 years well, despite the infrequent breaches. In fact, the line is more often closed because of rock falls from the cliffs on the other side of the railway.

Let's hope the solutions modern engineers come up with last for a similar period.

The seawall near the breach, seen in happier times in 2003.

Sunday, 2 February 2014

Tunnel vision

As some of you may have noticed, I am a tunnel junkie. Whether canal or railway, Victorian or modern, hand-dug or immersed tube, I love them all.

It was therefore with interest that I found Graeme Bickerdike's video on the excellent 'Forgotten Relics' website. The video explains the way some of the tunnels were built in Victorian times, and is well worth a watch if you're into such things. The production standards are surprisingly high for such an esoteric video, and Graeme does a splendid job as presenter.

If you want to know more, then there are a couple of books available on-line that also detail Victorian tunnelling techniques:

Railway Tunnelling in Heavy Ground (1879)
and
Practical Tunnelling (1896), which also had some chapters by the famous D.K. Clark.

They make you realise how amazing the modern Tunnel Boring Machines are, and the way we can bore so many miles of tunnels deep under our capital city without any deaths or major injuries.

The men and boys who built our canal and railway network - now so dismissively called navvies - really were a breed apart.

I hope you haven't found this a boring post ...

Sunday, 18 August 2013

Hyperloop

I have long been a fan of Elon Musk, the Internet entrepreneur who has made the difficult jump into hardware with his successful SpaceX company, which sends cargo (and soon passengers) to the International Space Station.

He also co-founded Tesla, the company that proved that electric cars can be sporty.

But SpaceX is just an iteration on existing technology: that is not to denigrate what they have done, but at the end of the day it is just a long tube of fuel sitting on top of rocket motors, just as all rockets have been since Sputnik 1 first orbited the earth. And Tesla also uses proven technology, albeit in a novel way.

It is therefore with interest that I note that Mr Musk and his team have come up with the Hyperloop, a solution for mass transit between Los Angeles and San Francisco.

The Hyperloop is a tube running between the two cities. A partial vacuum is maintained in the tube whilst a linear induction motor fires off a pod containing passengers (and in some designs cars) through it. The remaining air is sucked in at the front of the pod, compressed, and used to levitate the pod on a cushion of air (so-called 'air bearings'). Occasional linear induction motors continue to accelerate the pod to account for the small amount of friction and aerodynamic drag; for the rest of the time the pod coasts. One pod can be fired off every 30 seconds, and they travel at high subsonic speeds (to a maximum of 760 MPH).

The tube is supported on pylons above ground, and is covered with solar panels which will provide the power for the system. The pods contain batteries that run the compressors that provide the lift air.

The whole scheme is described in the following link:
http://www.spacex.com/sites/spacex/files/hyperloop_alpha-20130812.pdf

I have read the paper, and the following issues come to mind. None of these are necessarily game changers, but will need addressing:
  • Crashworthiness: The energies involved in high-subsonic travel are immense. What happens if a component breaks off and is left in the tube to be hit by the next vehicle? Or if the vehicle makes contact with the sides somehow, imparting great energies to the tube and pod? Even a 5 gram nut has significant energy when hit at over 700 MPH (see the rough calculation after this list).
  • Evacuation: If there is a problem and people need to evacuate, how does that happen? Remember, the tube is sealed and in a partial vacuum. And as the tube is intended to be supported on pylons above ground, how do passengers get from the tube to the ground?
  • Life support: the air pressure within the tube (i.e. outside the pod) will be harmful to human life. The pods will have to maintain a pressure that we can survive in, and all hatches and seals will have to be foolproof. They have addressed this in the document, but I'm not sure they have the whole answer, especially with hatches and seals that will have to be repeatedly used over a period of days, months or years. Will the air inside the pod be at normal sea-level atmospheric pressure, or reduced as in aircraft?
  • Claustrophobia: the passenger-only vehicle appears rather cramped. Claustrophobia may be a significant problem for many passengers - aeroplanes are bad enough for some people. This effect may be worsened by G-forces, which will be considerably greater than is the case for high-speed rail.
  • Breakdowns: With one pod every 30 seconds, what happens if one breaks down mid-tube and away from one of the accelerator areas? The paper says there will be deployable wheels that can be driven along using electric motors; this is not only extra complexity, but the power required may be significant if the pod is a long way from an accelerator area. In addition, there is no mention of gradients. If the tube has a significant gradient, the amount of energy required to take a pod up the slope will be large. And what happens if the emergency wheel system fails?
  • Construction: in the paper, I fear the team underestimate the costs and complexities of construction. They have designed the route on Google Earth to follow existing transport corridors where possible (for example Interstates). This takes no account of ground conditions: if the line passes through an area of soft or difficult ground, the costs of constructing the pylons will grow significantly. As the route will be passing through an area that can exhibit significant seismic activity, the pylon foundations will have to be designed to cope with liquefaction and other effects.
  • Braking and signalling: If a pod does stop, how do the others get messages to stop? What sort of signalling system will be used, and how fail-safe will it be? The proposed method of braking is simply referred to as an 'emergency mechanical braking system'. What is this, and how does it work?
  • Pointwork: One of the deal breakers with Maglev systems is pointwork. With one pod leaving every 30 seconds, there will be many pods at the stations unloading and loading. The paper suggests that there may be branch lines to other cities in the area. How are the pods transferred to different tubes or tracks at the stations (or indeed into depots or maintenance areas)? If this is done in tubes, you will need moving tubes and/or walls, preferably whilst maintaining the partial vacuum. Not an easy task.
  • Charging: the on-board batteries will need charging every few journeys. How is this done during intensive usage of the pods?
  • Fire: all mechanical and electrical systems suffer from the risk of fire, and those risks need managing. Being in a sealed pod with a fire, and a vacuum outside, is not necessarily healthy. In addition, there are the risks of smoke for other pods further down the line. Fire and smoke management are very costly in similar tube-like systems such as the Channel Tunnel, which has a service tunnel and refuges at regular intervals.
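To put a number on the crashworthiness worry, here is a back-of-the-envelope kinetic energy calculation (a sketch of my own, not from the Hyperloop paper):

#include <stdio.h>

int main(void)
{
  double mass_kg = 0.005;             // a 5 gram nut
  double speed_ms = 700.0 * 0.44704;  // 700 MPH converted to metres per second
  double energy_j = 0.5 * mass_kg * speed_ms * speed_ms;  // E = mv^2 / 2

  // Prints roughly 245 J - of the same order as a small-calibre bullet.
  printf("Impact energy: %.0f J\n", energy_j);
  return 0;
}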
I could be wrong about all of this, and could end up sounding like Doctor Dionysius Lardner, who in the 1830s said the following:
Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia.
On a positive note: engineering-wise, it is perfectly feasible to construct a partially-evacuated tube that is supported by pylons. The propulsion system also appears feasible at first glance, as does the air-cushion support mechanism. Engineering difficulties will happen, but if you throw enough money at it, it should work.

However, getting such a system to work reliably and safely is a whole different matter, and I am unsure that anywhere near enough thought has been put into this.

Another view on the Hyperloop's feasibility is at the Ambivalent Engineer blog. The costings are explored at the New York Times. The New Statesman is sceptical.

And the Daily Mash has its own take on the Hyperloop....

Saturday, 19 January 2013

Boeing's woes.


Further to my last post, the FAA in America have grounded all the Boeing 787s currently flying after two battery-related incidents this year (1). This includes a battery fire (2) that took fire crews forty minutes to extinguish whilst the plane was on the ground.

The 787 is revolutionary in several ways - most prominently the extensive use of carbon-fibre, but also for its 'all-electric' architecture. Amongst other things, this means that pressurisation is not performed by bleed-air off the engines, but by using compressors.

A desire to reduce weight led to lithium-ion batteries being used, as they are smaller and lighter than the batteries used in other planes. This technology is relatively new in aerospace, and lithium-ion batteries carried as freight are suspected to have caused at least one crash already, and numerous other problems. (3)

The administration building of the firm that created the charging system for the 787's batteries burnt to the ground in 2006 after a battery caught fire. Additionally, a 787's power-control panel caught fire during flight testing in November 2010 (4), causing further large delays to its entry into service. Whilst such problems are to be expected in flight test, it does look worrying with hindsight, and raises serious questions about Boeing's knowledge of the 787's electrical systems.

So what does this mean for Boeing? It is unlikely that the flight ban will be lifted until the reasons for the battery fires are understood and fixes developed. These fixes (they could be fairly simple or massively complex - we should not prejudge) then need applying to each airframe. This will certainly take time and be costly.

Initial suspicions are that the batteries are overcharging. If this is the case (and it may take some time to know for certain and to reproduce), then there are issues of why such problems were not experienced or anticipated before. Boeing will not want to replace the lithium-ion batteries with alternative batteries that are heavier and bulkier.

Worse, the FAA certified the use of lithium-ion batteries on the 787, a first for civil aircraft. If the certification process has been proved wrong, the burden of proof for safety will be much higher this time around. As well as alterations to prevent the batteries from catching fire, the FAA may well insist on systems to negate the effects of any fire.

In the meantime, the uncertainty means it will be hard for prospective purchasers to arrange funding for 787s. And this gives an advantage to Airbus, who were massively behind with their competing A350, but who are catching up due to Boeing's woes. Although they have plenty of time to develop their own problems with the A350...

(1): http://www.flightglobal.com/news/articles/analysis-grounding-orders-moves-787-into-uncharted-territory-381148/
(2): http://blogs.crikey.com.au/planetalking/2013/01/15/burned-787-battery-underlines-seriousness-of-incident/
(3): http://gigaom.com/2011/04/04/lithium-ion-batteries-faulted-for-jet-crash/
(4): http://www.flightglobal.com/blogs/flightblogger/2010/11/a-closer-look-787-fire-investi.html

Thursday, 15 March 2012

William Jessop

It may have been noted that I am rather fond of engineering. Indeed, the heavier the engineering - whether planes, trains, bridges, tunnels etc - the better. Given this, it is strange that I went into computer software, where the engineering is as light as it is possible to get. But my love of engineering - and especially civil engineering - has continued unabated.

In 1992 I found a copy of Samuel Smiles' 'The Lives of the Engineers' in the university library. If you wish to read this excellent book, it is available for download from Project Gutenberg. The book, written in 1862, describes the lives of the great early Victorian engineers. I read it, rapt at the descriptions of the great men and their equally great works. Many of the names were familiar to me, but there was one sad omission: William Jessop was only mentioned in three places. Indeed, the great engineers of the canal age were sadly forgotten in Smiles' fascinating project.

Many of the great names of the canal-building era (spanning from the opening of the Bridgewater Canal in 1761 to about 1840) are well-known: John Smeaton for his pioneering lighthouse on Eddystone Rock, now rebuilt on Plymouth Hoe; James Brindley, responsible for the Bridgewater, the Trent and Mersey and other canals; and Thomas Telford, whose fame is such that a town was named after him.

Yet arguably the most influential canal engineer, and one who was at his best at the height of the canal mania in the 1790s, was William Jessop. Born in Devonport in 1745, at the age of 16 he started work for the famous engineer John Smeaton. Soon the pupil overshadowed his tutor, although the two remained close until Smeaton's death.

Unlike many engineers he was keen to try new technologies; he was a pioneer in ironworking and was responsible for several early cast-iron aqueducts. He was also not entirely wedded to canals and often recommended the construction of plateways (a form of early railways) where canals were impractical.

Rather than give an in-depth description of his life, it is perhaps best to list some of the works with which he was involved to a large degree:

  • Grand Junction Canal
  • Grantham Canal
  • Nottingham Canal
  • Cromford Canal
  • Caledonian Canal
  • Grand Canal of Ireland
  • The West India Docks
  • Bristol Floating Harbour
  • Surrey Iron Railway

He was also responsible for a multitude of harbour and drainage works; he was a master at the manipulation of water. Much of the design of the Pontcysyllte Aqueduct (routinely attributed to Telford) was performed by Jessop, who oversaw the younger man's work.

He also jointly started one of my favourite Victorian companies - Butterley Engineering, a steelwork company that sadly went into administration in 2009, over 200 years after it was founded. Butterley made the grand spans of the overall roof at St Pancras, and the company's stamps can still be seen on the ironwork. More recently they made the steelwork for the Falkirk Wheel and the Spinnaker Tower.

In addition, he was held in such high regard that he was often called to parliament to give his judgement on schemes proposed by various other engineers, and investors would call on him to inspect plans drawn up by others.

To be remembered as a great engineer you need to be a self-publicist; both Brunel and Telford were excellent at this part of their work. Jessop, however, was not - his family did not allow his personal papers to be used, and no biography of him was written for decades. For this reason, works that he deserves major credit for - such as the Caledonian Canal - are routinely credited to others, such as Telford.

Part of the problem is that he had his fingers in so many pies that he often had to let more junior engineers perform the actual construction. The same is true of other engineers such as Brunel, but they were better at making sure that they got the credit for the resulting works.

Wherever you go in Britain you come across his works: from the Caledonian Canal through the Great Glen in Scotland to the docks that lie in the shadow of Canary Wharf. What is more, his capability to swap between water and iron, canals and railways, helped set the foundation of the railway revolution of the 1830s.

He deserves more recognition.

Wednesday, 22 February 2012

The elegance of FM stereo

FM radio is going to die. Slowly, inevitably, it is going to be overtaken by digital radio that squeezes many more stations into the same frequencies. Which is a shame, as FM exhibits what is, for me, an elegant engineering solution to a problem.

Originally FM was mono-only. That is, stations broadcast just one signal that was fed to all speakers on the radio. In the 1950s it was realised you could broadcast stereo signals easily on FM. However, this required sending two signals: one for the left-hand speaker and one for the right. An obvious approach would be for the left-hand signal to be broadcast on the mono frequency, and the right-hand signal on a second frequency a short distance away.

However by this stage there were many mono FM radios in homes, and this approach would have made these useless as they would only get the left-hand signal (try listening to only one speaker of a stereo system to hear the problem).

To solve this, they came up with a cunningly simple solution.

They broadcast a sum signal (left + right) on the main (mono) frequency and, a short frequency hop away, a difference (left - right). From these, both the left-hand and right-hand signals can be retrieved using simple analogue circuitry, and the mono signal maintains the qualities of the combined stereo channels.

Say at any one instant the left-hand signal is at 5, and the right-hand at 7 (they are really sine waves, but the maths works well enough for discrete digital values).
This means 12 (5+7) is broadcast on the sum frequency, and -2 (5-7) on the difference.

To obtain the original left and right stereo values, you simply:
1) To get the left-hand signal, you add the difference and sum values, i.e. 10, then divide by 2 to get 5
2) To get the right-hand signal, you subtract the difference from the sum, i.e. 14, then divide by 2 to get 7
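The arithmetic is simple enough to sketch in a few lines of C, using the discrete values from the example above (real receivers, of course, do this with analogue circuitry):

#include <stdio.h>

int main(void)
{
  double left = 5.0, right = 7.0;

  // Broadcast side: the sum goes out on the main (mono) frequency,
  // and the difference a short frequency hop away.
  double sum = left + right;   // 12 - what a mono radio plays
  double diff = left - right;  // -2

  // Receiver side: recover the two stereo channels.
  double decoded_left = (sum + diff) / 2.0;   // (12 + -2) / 2 = 5
  double decoded_right = (sum - diff) / 2.0;  // (12 - -2) / 2 = 7

  printf("left = %.0f, right = %.0f\n", decoded_left, decoded_right);
  return 0;
}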

Of course there are other complexities, but the basic approach is simple: what is even better, it was easy to perform in 1950s-era electronics.

It is an utterly elegant solution. It also explains why, if you have poor signal quality, the radio degrades to mono, which is broadcast on the main frequency.

Digital radio has many interesting engineering and mathematical tricks (for instance the magnificent Fast Fourier Transform), but nothing beats the simple elegance of the FM stereo solution.

As usual Wikipedia has much more information and this page goes into more detail than almost anyone will want...

Sunday, 16 January 2011

Planning for failure.

Years ago I heard a story - possibly apocryphal - about the emerging electronics industry in the sixties. A large American company wanted to win the contract for building some of the Saturn V / Apollo hardware. They worked on their proposal, costed it and got ready for the meeting with NASA.

They were surprised to find that the NASA team mostly consisted of engineers. This team sat through the company's slick presentation without comment until the end, when they were asked if they had any questions. One of the NASA engineers asked a simple question: "How does it fail?"

The company's marketing men were shocked and did not have an answer. They had prepared for the meeting with lots of questions relating to cost, timescales and capabilities, but this first question totally stumped them.

So why did NASA want to know how it would fail? And why was it their first question? The answer is simple: they trusted the company to meet the specification requested; after all, that was their job. However, they wanted to ensure that if it failed it would not damage any of the other components made by other companies.

After that, the company always had engineers in their meetings with NASA, and always made sure they knew how failure of their devices would affect the rest of the system.

Many of the common bugs in computer programs are caused by the programmer not planning for failure.

Let us take one simple and common function in the C programming language. malloc() allocates an area of memory for use by the programmer. On the vast majority of occasions it will succeed, returning a pointer to the memory. However, sometimes it will fail. It is common to see code where the programmer does not check for this failure case. The reason is that checking for all possible failures takes time, and programmers are more interested in the cases where it works.

For instance, the following line of code, whilst nominally correct, will have me tearing my hair out:
int *broken_ptr = malloc(20);

A better example would be the following:
// Requires #include <stdlib.h> for malloc() and free().
int *good_ptr = malloc(20 * sizeof(*good_ptr));  // Space for 20 ints.
if (good_ptr == NULL)
{
  // Failed to allocate memory, must recover.
}
else
{
  // We can now do something.
  ...
  // We have finished with the buffer. Free the memory.
  free(good_ptr);
  good_ptr = NULL;
}

Even a non-programmer can see that the second example takes far longer to write and requires much more thought. It is, however, much better code (although still not perfect). In particular the programmer will need to consider exactly how to recover from the failure to allocate the memory. Unfortunately, misuse of malloc() in C is a prominent cause of programming bugs.

Similar problems can be seen in many other forms of engineering. It can be seen when 'cascade failures' occur; the failure of one part of a system causes other parts to fail in a cascade. This particularly occurs in power transmission systems, and engineers strive to design against it.

The key is to give engineers the time to design and implement systems fully. It is relatively trivial to get a system working; the real work lies in making it work properly in all cases, including the unforeseen.

Wednesday, 12 January 2011

Rambling thoughts on wind and power generation, part 4

There was never going to be a fourth part to this rambling, but comments on- and off-line have rather forced my hand. This section will go into what I believe environmentalists should do.

Firstly, let me say that most environmentalists I have met have had their hearts in the right place. They care deeply about the environment, and many are guided by their own personal morals. I may disagree with some of what they say, but much of it makes sense. (And let's face it: there is no one environmental movement, and there are many disagreements between environmentalists about the way forward).

Having got that unheralded unanimity out of the way, this is what I think they should do.

All interested parties (i.e. the government, environmentalists and even armchair commenters such as myself) should sit down and produce detailed figures of where they foresee our power coming from by 2021 and 2031. In doing so, they are only allowed to reduce the maximum power used by the country by 10% (history has shown that efficiency savings are swamped by new uses for power). Their figures should include costs and risks.

Again, I would recommend David MacKay's book, 'Sustainable Energy - without the hot air', for anyone wanting to start on this process. At the very least you will learn a great deal about the issues. Knowledge is key - I have certainly learnt a great deal as I have written these posts. Try to throw your preconceptions into the long grass as you do the work, and try out various scenarios. Of course this is exceptionally hard to do in practice.

If anyone wants to decrease the available power by more than 10% then they need to explain:
  • What the coping strategies will be (i.e. how to ensure that our economic and social life can continue with that reduced power).
  • What the effect of that change will be with respect to the world's total energy consumption.
I read with interest the Green Party's manifesto at the last election. Amongst more sensible proposals (e.g. introducing smart meters), the 'energy' section contains the following:
Prioritise the new 3 Rs: Remove, Reduce, Replace. First remove demand altogether where possible (e.g. by stopping the carbon-intensive activity altogether, or by true zero-carbon technology); then reduce demand (e.g. by energy-efficiency measures); then switch to renewables for whatever energy need is left.
I would like to know what 'true zero-carbon technology' is, as it does not currently exist in any form for many industries. Just look at the problem with electric cars: the range of such cars is far too low to be usable for most people, and the charging time is prohibitive. There is currently no acceptable replacement for the petrol and diesel engine. This is called betting the future on the unknown ('oh, something will come along...'). It may, but it may not, and possibly not in the required timescale. Even if we all moved to electric cars tomorrow, we would need a way to generate the power for them. The only solution is for us all to travel less, and it would take a brave politician to demand this of his or her electorate.

Stopping carbon-intensive activity is also immensely difficult. For instance, do they want to ban the use of cement (responsible for about 5% of man-made CO2 emissions)? If so, how does that conflict with their other manifesto commitments, for instance to build new houses? Can we build high-speed rail without cement?

Reducing demand is economically dangerous. How do you reduce demand? Agriculture is a major source of CO2 emissions, yet how do we reduce demand and still feed the world's population? Oh, and the environmentalists will not let us use genetic modification to increase yields either.


Additionally, they say:
Aim to obtain about half our energy from renewable sources by 2020 and ensure that emissions from power generation are zero by 2030.
Yet they do not give details of how we will meet these targets without risking massive social upheaval (remember, there are only nine years before 2020). Saying 'build more windfarms!' is not a solution.

It seems to me that few in the environmental movement are being honest about their plans. Frankly, their sums do not appear to add up.

The energy section of the Green Party manifesto details their obsession with carbon, and says nothing about how to mitigate the effect that their policies will have on the population. Whilst depressingly vague on the form these amazing zero-carbon technologies will take, it contains an entire section detailing their reasoning against nuclear power. This includes the staggering claim that, as doubling nuclear power would only reduce carbon emissions by 8%, it is not worth doing. They also say that consumers would have to pay for nuclear reactors, yet they conveniently forget that consumers are already paying a premium for renewable power (indeed, they want to increase such payments by increasing the feed-in tariffs).

It was not an energy policy; it was a series of wishes wrapped up in an unsustainable package.

The environmentalists need to come up with full solutions, including figures, risks and costs, rather than just sniping from the sidelines. Only then can there be true debate. MacKay has made a good stab at some of this (see chapter 27 of his book, where he details five possible low-carbon plans - page 212 shows these in comparison). None of these plans are perfect. I have yet to see similar breakdowns from the Green Party, Greenpeace or any of the other campaigners. (*)

If they cannot come up with the figures then their comments should be treated as a small part of a much larger whole.

(*) It would be interesting to see a website that takes MacKay's work and allows the user to build his or her personal energy policy for the country. Each decision could come with estimated costs and risks. At the very least it would give people an indication of the awful complexity of the issues. It could also give you CO2 emission totals and the geopolitical problems (e.g. of getting oil from the Middle East, or solar power from northern Africa). There is something similar to this that can be downloaded from the http://2050-calculator-tool.decc.gov.uk/ website, although based on Excel (**). Unfortunately it does require more than a little knowledge to use. A little extra work should get it there.

(**) This is a freakishly powerful Excel spreadsheet, and shows the power of this brilliant package. As a further aside, I once worked with a project manager who had written a comprehensive project management system in Excel. It was amazing, but I could never quite get my head around it.

Tuesday, 11 January 2011

Rambling thoughts on wind and power generation, part 3

As I stated in part 1, our nation is faced with two significant energy problems.
  • Global warming
  • Energy security
In the first and second parts of this post I concentrated on wind power. In this part I will talk about energy security.

So what is energy security? As is often the case, the term covers a series of issues. Firstly, it means that we have to have continuity of the raw sources of our energy. In the case of traditional power stations, it means we need uninterrupted supplies of gas, coal and oil, allowing us to generate power for end-users at an economic price. This is a problem, as much of our gas and oil comes from countries whose governments are far from stable, and supply is subject to their whims.

Secondly, it means that we have to be able to generate enough power to meet our requirements. It is no good having enough oil and gas if our generation and refinery capacity is too low. This is an issue as power stations built in the seventies and eighties reach the end of their lifespan.

Our politicians and media are concentrating almost solely on global warming, and little on energy security. This is a problem, as energy security poses a much more significant threat to our way of life than global warming. Prolonged brown-outs (reductions in voltage to conserve power) and blackouts (power cuts) were common in the 1970s. Unfortunately many people (including the National Grid chief) say we are heading towards blackouts by 2015. The Economist has a very good article about this. Some experts I have talked to say that blackouts will occur in some parts of the country in the next year or two.

It takes many years to bring a new power plant on-line, and we need to be planning for the problems now. The Labour government cynically kept kicking this issue into the long grass, and it is now far too late to prevent it from happening. I hope I am wrong, but one of the issues at the next election will be a looming energy crisis. And the coalition government will be getting the blame for Labour's cowardice, especially in relation to power generation.

However, the coalition are not blameless. They are continuing the last Government's plans that make it uneconomical for companies to build new power plants. Several Trent Valley power stations (e.g. Willington) were going to be rebuilt, but many of these plans have been thrown into doubt by the economics. Compare this with wind farms, which receive massive subsidies from the public. At the same time, our existing power stations are being targeted by green activists, making it harder for councils and the government to grant planning permission for new-build plants.

The ideal would be for us to rely on a varied combination of power sources. Wind power would be a part of this, as would tidal, hydro and wave. However, with the best will in the world these renewable sources will come nowhere near matching our requirements.

All politicians (indeed, anyone) who profess knowledge on this subject should read and digest David MacKay's book 'Sustainable Energy - without the hot air'. It is available free on-line, or a hard-copy version can be bought from Amazon. It is so easy to come up with soundbites about this subject, but MacKay's book honestly describes the complexity of the issues in a readable manner. What is more, he tries to show how hard it is to meet the country's energy requirements from each source. I do not agree with everything within, but it is undoubtedly a vitally important read.

So what steps would I like to see the government make to improve energy security?
  • Firstly, the Government should set a per-capita target for power requirements in twenty years' time.
  • Secondly, they should work out what proportion of this energy should come from each source.
  • Thirdly, they should make it economical for the power generators to build that capacity.
  • Fourthly, we should invest in research and development of other energy sources (e.g. new nuclear designs and wave power).
  • Fifthly, we should reduce the use of oil and gas in power generation, cutting our dependence on gas from Russia and oil from the Middle East.
Power generation should be seen as a critical issue for our country. Many people say that the free market should be allowed to just get on with it; that may or may not work. What is not working is the current situation, where the government are interfering with the markets and forcing them onto a path (wind) that could never supply enough power to the country. We either let the markets rule, or control them more. The current halfway house is a farce.

Monday, 10 January 2011

Rambling thoughts on wind and power generation, part 2

A turbine blade outside the Vestas factory on the Isle of Wight
In part 1 I discussed whether the environmental benefits of wind power generation were worth the environmental disadvantages. I thought that I would try to find some figures for the efficiency of wind power, especially in comparison with the environmentalists' bête noire: nuclear.

Many people claim that wind power is inefficient; after all, it is obviously at the whim of the wind, and no power can be generated when there is no wind.

So how bad is the situation? I have read many claims over the years, either stating that they are very efficient or terribly inefficient; naturally enough, environmentalists tend towards the former.

So what is the truth? A letter in issue 1278 of Private Eye gave me a useful pointer. A company called Elexon monitors power usage in Britain. Its day-by-day reports can be found at http://www.bmreports.com/bsp/bsp_home.htm (note: this website does not seem to work in Chrome, but works in IE 8). Scroll down the page to 'Peak Wind Generation Forecast'.

There is a great deal of interesting information on this page, but one thing that stands out is the variability of the power generated by the wind farms metered by Elexon. They currently estimate that 2.4GW of power can be generated by such wind farms; today (07/01), 406MW, or one sixth of the installed capacity, is estimated as being generated. Tomorrow it should be 1263MW, or one half of installed capacity.
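Those fractions are trivial to check; a sketch using the figures quoted above:

#include <stdio.h>

int main(void)
{
  double installed_mw = 2400.0;  // Elexon's estimate of metered wind capacity
  double today_mw = 406.0;       // estimated generation on 07/01
  double tomorrow_mw = 1263.0;   // forecast for the following day

  printf("Today:    %.0f%% of capacity\n", 100.0 * today_mw / installed_mw);     // ~17%, one sixth
  printf("Tomorrow: %.0f%% of capacity\n", 100.0 * tomorrow_mw / installed_mw);  // ~53%, one half
  return 0;
}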

As can be seen, these figures are risible.

It should be remembered that whilst the maximum installed capacity of 2.4GW is double the 1.2GW generated by the Sizewell B power station, the actual power generated can be far less. Think about this for a moment: *all* the installed wind power in the country can generate only double what one of our nuclear power stations generates. Think of the 3,000 wind turbines on land and out at sea, and realise that there is no chance of wind providing anything near all of our power.

So what about cost? Sizewell B cost £2 billion to build, and was designed to produce power at about 8 pence per kWh, including construction costs. It is believed that modern designs will allow the costs of nuclear power to be reduced significantly.

Modern designs are estimated to cost 2.3 pence per kWh, including decommissioning costs. In comparison, wind power is estimated to cost 3.7 pence per kWh for onshore wind and 5.5 pence per kWh for offshore wind. Such figures should always be taken with a pinch of salt, as the devil is truly in the details, but they show the problem of wind power. Not only can we not generate enough power using wind, but the power we do generate is massively costly.

The sad thing is that successive governments have seen fit to reduce the skillsets available in this country to the degree that we will need to buy nuclear reactor designs from other countries. The situation is not much better with respect to wind power, where the majority of turbines are constructed abroad. (*)

The answer seems obvious to me: nuclear is far better, if only because of the sheer reliability of its base-load generation. The French realise this, yet we are going down the road towards installing as much wind power as possible. In my opinion this is a mistake that British consumers will pay for in the future.

The Vestas R&D facility under construction
(*) There was a great deal of fuss in the media when Vestas closed down their factory making turbine blades on the Isle of Wight last year. Imagine my surprise when I walked beside the Medina River last week and found a truly massive new building being built by... Vestas. It is part of a £50 million research and development complex. Although it will not employ as many people as the old manufacturing plant, it surely is a welcome development.

The scale of the building was quite something to behold.

Sunday, 9 January 2011

Rambling thoughts on wind and power generation, part 1

There is a lot of talk on the outdoor blogs about the number and size of windfarms being created in the Scottish mountains; Alan Sloman has written a number of excellent articles about a new wind farm in the Monadhliath range of hills.

Not being able to better his prose, and also not having a particular knowledge of that area of Scotland, I thought that I would look at the problem from other angles. Mainly: is it actually worth building wind farms?

What problems are we trying to solve in building wind farms? Put simply, our nation is faced with two significant energy-related problems:
  • Global warming
  • Energy security
Unfortunately, wind power does little to solve either of these. Wind power is intermittent in nature, whilst energy use is cyclical according to time of day and season. For much of the time we will have nowhere near enough power to meet demand. Part 2 will look into this a little further.

The answer, according to environmentalists, is to store the power for when it is needed. This is done in various places, such as the Ffestiniog and Dinorwig pumped-storage schemes in Wales. These pump water up to reservoirs using electricity during the night, when there is a surplus of cheap power, and release it at times of peak demand. I have heard claims that we just need to build more of these. There are several obvious problems with this:
  • There are few sites suitable for such schemes; you need a large height difference between the storage reservoirs and the generating plant, and the upper lake needs to be large to store the water (see the rough sketch after this list).
  • Building such schemes is hardly green; a million tonnes of concrete were used at Dinorwig. Building large lakes in our upland areas also has obvious environmental consequences.
  • They depend on cheap electricity to pump the water up; wind power is hardly cheap and is currently massively subsidised.
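To see why the sites need such height and volume, here is a back-of-the-envelope sketch of the energy stored (illustrative numbers of my own, not Dinorwig's actual figures):

#include <stdio.h>

int main(void)
{
  double mass_kg = 1.0e9;  // a million tonnes of water (hypothetical)
  double head_m = 500.0;   // height difference between the lakes (hypothetical)
  double g = 9.81;         // gravitational acceleration, m/s^2

  double energy_j = mass_kg * g * head_m;  // potential energy, E = mgh
  double energy_mwh = energy_j / 3.6e9;    // joules to megawatt-hours

  // About 1,360 MWh: a million tonnes of water falling half a kilometre
  // buys roughly an hour of output from one large power station.
  printf("Stored energy: %.0f MWh\n", energy_mwh);
  return 0;
}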
Of course, there are proposals for other means of storing energy, for instance molten salt storage. However these have only been built on a small scale, and there are a number of concerns about them, including pollution. We cannot bet the future on untried technologies.

We need maximum power in winter and yet, as happened recently, the cold weather coincided with low wind speeds. Therefore the wind farms were at low efficiency when we needed them most. This means that we will either need a massive over-capacity of wind power, some form of (currently untested at scale) power storage mechanism, or more traditional power plants to provide back-up power.

There is also the issue of how inefficient wind power is. According to the Telegraph, an area of land the size of Wales would need to be covered with turbines to generate just one-sixth of the country's energy needs. From this, it is clear that wind power is not the answer to either of the two problems that face us.

We need more honesty in the debate. What I would like to see are publicly-available and honest (*) figures about the power generated by wind farms compared to their stated capacity. Fortunately we have such figures (see part 2).

Wind farms have other problems. People campaigning against wind farms are often called NIMBYs, sometimes rightly. However such name-calling does not hide the fact that, in many cases, they have a point. Our uplands are precious, and anything that permanently alters them should only be done with care. It would be exceptionally hard for me to get planning permission to build a cottage in Brassington in Derbyshire, yet the Government are allowing four massive 102-metre tall turbines to be built nearby. A house can have negligible visible impact on a landscape; these turbines will be visible for miles around.

I was once told by a Greenpeace representative that, if necessary, windfarms could be dismantled and the wilderness reinstated. He was assuming that the turbines just sat on large blocks of concrete that could be easily removed. That may be the case; but it does not account for the miles of haul roads and power lines that are needed for construction and maintenance of the turbines, or to distribute the power. It may not be fashionable to say so, but this is a significant form of pollution of some of our most precious places.

I am not against all wind farms; off-shore ones may be useful (and expensive). But given the manifest disadvantages of wind, we really have to weigh the advantages of wind against the disadvantages on a case-by-case basis. The Scottish Government in particular is failing in this regard.

It seems to me that many proponents of wind power are looking at the advantages and ignoring the disadvantages. They think of 'green' as being solely about power, and not about wildernesses.

Will future generations thank us for destroying some of the last wildernesses in Britain in a perhaps-pointless quest for 'green' energy? I think not.

(*) I say honest because figures have been massaged in the past. Solar installations in Spain have been accused of fraud. In one case, investigators noticed that a solar power plant was impossibly generating significant power at night. It turned out that, as solar power generators can charge more for their power, operators were running diesel generators and selling the electricity at the higher solar rate, pocketing the profit and defrauding the public.

Wednesday, 22 December 2010

Hysterical journalism

The media narrative on the snow is becoming quite hysterical. Of course it is easy for me to say this, as the snow down here in Southampton has not been particularly bothersome. They are all at it: the BBC, Sky and the newspapers, hand-wringing and asking why 'we' (by which they mean anyone but themselves) cannot cope with snow.

Take BBC News 24 on Monday afternoon. They interviewed a spokeswoman for Burlington International Airport in Vermont, who claimed with pride that her airport rarely closed due to snow. The presenter did not ask any particularly pertinent questions, and seemed keen to push the blame onto BAA, the company that operates Heathrow.

So I thought that I would look up Burlington International Airport. The link shows that in 2008 the airport performed 72,189 individual aircraft operations.

Compare this with Heathrow, which handled 466,393 individual aircraft operations in 2009. As can be seen, Heathrow is roughly six and a half times as busy, with only two runways. It is far busier, and has less slack for maintaining runways, taxiways and stands between flights. Indeed, Heathrow operates at 98% of capacity, which means that even the slightest delay to operations can cascade. What is amazing about Heathrow is that they manage to run services as well as they do.

As a further comparison, Birmingham Airport handled 101,221 flights in 2009.
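The arithmetic behind those comparisons is trivial to check; here it is, using the figures quoted above:

# Annual aircraft movements, as quoted above.
heathrow_2009 = 466_393
burlington_2008 = 72_189
birmingham_2009 = 101_221

print(f"Heathrow vs Burlington: {heathrow_2009 / burlington_2008:.1f}x")   # ~6.5x
print(f"Heathrow vs Birmingham: {heathrow_2009 / birmingham_2009:.1f}x")   # ~4.6x

And remember that Heathrow funnels all of that traffic through just two runways.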

True, things could have been done better. But I am getting fed up with journalists - many of whom have had no experience of engineering - criticising things they have little idea of.

Take a common complaint: that the organisations involved (the airlines, the airports or the railway companies) do not give out enough information. This complaint assumes one massively important thing: that the organisations *know* what the situation is. Snowfall in Britain can be hard to predict in its timing, its severity and its duration. We have all driven through heavy snow lying in one area and seen green fields just a few miles away.

The BAA people will be spending all their time trying to get as many planes in the air as possible, and the situation must be extremely fluid. Planes take time to clear, and the authorities will not know with any certainty which plane might be the next to be ready to go. Therefore it must be next to impossible to tell an individual passenger when his plane will be leaving.

There is one thing that I find amazing: that passengers were left on a plane for hours after it had left the gate. This was wrong, and should be avoided in the future. Again, this can be easier said than done. It would be interesting to see where the fault for that lies. Was it the airline or BAA who made those passengers suffer?

Much credit to Channel 4's seven o'clock news, whose reporting and criticisms appear to be much more valid. Having said that, they did broadcast an interview with an American lady last night who said that the travel chaos was similar to the images she had seen of the aftermath of Hurricane Katrina. Yeah, right. As if a few people being delayed or changing their travel plans is anything like a disaster in which around 1,800 people died and thousands lost their homes. Some people need to get a sense of perspective...

Saturday, 18 December 2010

Moore and Heisenberg

In 1965 Gordon Moore, one of Intel's co-founders, wrote a paper observing that the number of individual switches (transistors) on an integrated circuit was doubling roughly every eighteen to twenty-four months. This became known as Moore's Law, and remarkably his prediction has held true ever since. Today's unbelievably fast processors contain roughly twice as many transistors - and are roughly twice as complex - as those of two years ago. The greater the complexity, and the smaller the components, the faster a chip can operate. There are other side effects as well: in the case of computer memory, more transistors can be fitted on the same-size memory chip.

The end of Moore's Law has been predicted since at least the early eighties, yet it has never come to pass. Each time a limitation has been approached, engineers have improved their processes to push it back. Unfortunately this will not continue forever. In a previous post, I wrote about how the speed of light was becoming a limitation in the clock speed - and therefore the size - of computer chips.

There are several other limitations, perhaps the most fundamental of which is Heisenberg's uncertainty principle. This is a complex topic, but it can perhaps best be summarised as follows: it is impossible to know both the position and the momentum of an individual particle with certainty. If that sounds confusing, then consider the following.

You have a simple light switch. Flick it into one position, and bulb 1 will light. Flick it into the second position, and bulb 2 will light. As long as a mechanical or electrical problem does not interrupt the circuit, you can always guarantee that the correct bulb will light for the switch position.

Unfortunately, a man called Werner Heisenberg worked out in 1927 that this consistency does not hold at the level of an individual electron or other particle. This is unimportant at the scale of a light switch, as other factors massively outweigh the uncertainty. As computer chips shrink, however, the individual transistors get smaller and the uncertainty principle starts to have an effect. Taking the analogy above, you could flick the switch without knowing with any certainty which bulb would light. Obviously this is a very bad thing for chips.
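To put rough numbers on this (a back-of-envelope estimate of my own, not taken from any paper): confine an electron to a transistor-sized region and the principle dictates a minimum spread in its velocity.

# Back-of-envelope Heisenberg estimate: dx * dp >= hbar / 2.
# Confining an electron to a ~5 nm region forces a velocity spread.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837e-31       # electron mass, kg
dx = 5e-9                 # position uncertainty: a 5 nm transistor feature

dp = hbar / (2 * dx)      # minimum momentum uncertainty, kg*m/s
dv = dp / m_e             # corresponding velocity uncertainty, m/s
print(f"Velocity uncertainty: {dv:.2e} m/s")   # ~1.2e4 m/s, i.e. ~12 km/s

An electron whose velocity is uncertain by a dozen kilometres a second is not an electron you can reliably steer through a switch.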

Today I came across a short article in the August 2008 Proceedings of the IEEE, entitled 'The Quantum Limits to Moore's Law' (available to subscribers on the IEEE website). In it, the author calculates when, if Moore's Law continues to hold, the uncertainty limit will be reached. There is little point in reproducing the equations here, but the end result is noteworthy: if chip technology were altered to use electron spin as the switching element (a technology demonstrated in labs, but a long way from production), the uncertainty limit would be reached in 2036.
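The shape of such a calculation is easy to reproduce. The toy projection below is my own, not the paper's: assume each process node shrinks features by the square root of two (so transistor counts double) every two years, and ask when features reach an atomic-scale floor.

import math

# Toy projection: when does Moore's-Law scaling hit an atomic floor?
# All numbers are my own illustrative assumptions, not the paper's.
feature_nm = 32.0             # leading-edge feature size around 2010
year = 2010
shrink = 1 / math.sqrt(2)     # linear shrink per node (area halves)
floor_nm = 0.3                # roughly the size of a single atom

while feature_nm > floor_nm:
    feature_nm *= shrink      # one process node...
    year += 2                 # ...roughly every two years

print(f"Atomic floor reached around {year}")   # ~2038 with these inputs

That this lands in the same decade as the paper's 2036 figure is a pleasing coincidence; the real point is how quickly any exponential runs out, whatever reasonable inputs you choose.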

It should be noted that the paper's 2036 figure is a best-case estimate; there are many other physical limitations, such as heat and noise (*), that could stop chips from getting more powerful. As noted above, however, engineers have proved remarkably adept at pushing these physical limitations back.

As you might have gathered, I am fascinated by the ultimate limits of the amazing technology we have today. Perhaps the most important of these is not physical at all, but economic: a limit may simply cost too much to work around. When that happens, the engineers will have to look elsewhere in their never-ending quest for more speed.

(*) There are many types of electronic noise. Particularly important for chips is thermal noise: 'the noise generated by the equilibrium fluctuations of the electric current inside an electrical conductor, which happens regardless of any applied voltage, due to the random thermal motion of electrons in the conducting medium' (from http://thermalnoise.wordpress.com/about/). This noise can cause problems both in the circuit itself and in adjacent circuits.
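Thermal (Johnson-Nyquist) noise has a simple closed form - the RMS noise voltage is sqrt(4 k T R B), for Boltzmann's constant k, temperature T, resistance R and bandwidth B - and the numbers become awkward at chip scales. A quick calculation with illustrative values:

import math

# Johnson-Nyquist thermal noise: v_rms = sqrt(4 * k * T * R * B).
# The R and B values are illustrative, not from any real chip.
k = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0           # temperature, K (roughly room temperature)
R = 1_000.0         # resistance, ohms
B = 1e9             # bandwidth, Hz (a 1 GHz signal path)

v_rms = math.sqrt(4 * k * T * R * B)
print(f"RMS noise voltage: {v_rms * 1e6:.0f} microvolts")   # ~129 uV

A hundred-odd microvolts matters little at five volts, but as supply voltages fall towards a volt and below, the margin between signal and noise narrows.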

Saturday, 20 November 2010

Stupid blogpost comment of the day

The Internet allows anyone with half a brain to make comments and pronouncements on topics that they have not the least knowledge of. This blog is, of course, a supreme example.

I allow most inane comments to wash over me, but some are hard to ignore. Today I was browsing a FlightGlobal article about the A380 engine failure. FlightGlobal blog posts are often worth reading, as knowledgeable people comment and the signal-to-noise ratio is quite high.

However, a comment by someone called Jen made me both furious and amused:
RR do not have the metallurgy expertise that GE or Pratt & Whitney Rockedyne have unless they steal the technology. Advanced metallurgy is an extremely important factor in modern turbofan engines.
Which shows that s/he is just a fan boy who knows little about the industry. The last sentence is, of course, true: metallurgy (and especially the weird mechanics of crystal growth in superalloys) is essential in modern engine design. The idea that Rolls Royce could not do it without stealing the technology is the bit that gets my goat. S/he offers no evidence, just wild accusations.

Materials science is one area of technology that we Brits are particularly good at. It requires both scientific and engineering prowess, and the presence in the country of Rolls Royce and others has allowed us to be competitive. We should not rest on our laurels, however: China, Russia and others have the capability and desire to overtake the West in this and other areas.

This is where blind patriotism such as Jen's is so dangerous. The mere idea that another country might be capable of making technology competitive with America's is such anathema that s/he has to accuse them of stealing it. After all, only American engineers can do this cool stuff, okay?

And whilst s/he is in this happy la-la land, other countries will continue to make progress. And if they do a good job and actually beat American technology, then it can only be because they stole it.

Putting your fingers in your ears and downplaying the competition is not a way to advance.

Thursday, 18 November 2010

Richard Noble

Richard Noble is a unique man. In 1983 he broke the land speed record in his Thrust 2 car, recording a maximum speed of 633.468 mph.

Thrust 2 was designed, built and run on a shoestring budget, far smaller than those of some of his rivals. Thanks to the work of his team, Britain regained the land speed record.

Move on fourteen years, and other teams were looking at breaking that record. This, of course, is the way record attempts go: one team breaks a record, and other teams look at how they can respond; national honour is at stake. Richard Noble saw this activity and wondered whether he should try to raise the record, to put it out of the other teams' reach. And a milestone figure lay not far above the existing record: the sound barrier, roughly 760 MPH at sea level.

Thus Thrust SSC was born. On 15 October 1997, Thrust SSC, driven by RAF pilot Andy Green, reached a speed of 763 mph in the Black Rock Desert in Nevada, breaking the sound barrier in the process.

In doing so, Richard Noble became unique in the annals of the land speed record. Most holders of the record raise the funding, build and drive the machine themselves, or lead the team that does so. That is utterly understandable; if you are going to risk your life, you want to be in charge. Yet Richard Noble, the fastest man on earth, knew that he did not have the skills to drive Thrust SSC. It needed someone with reactions and experience far greater than his own. So a competition was held to find a suitable driver, and RAF pilot Andy Green was selected. In the process, Richard Noble arranged for his own record to be broken by someone else.

It was an incredibly noble thing to do.

Move on another eleven years from Thrust SSC, and Noble is at it again with the Bloodhound SSC project. Having broken the sound barrier, they are going for the next obvious target: 1,000 MPH. This is an amazing speed - 237 MPH over the current record - and it would be by far the biggest jump in the history of the land speed record.

Noble has not been resting on his laurels in the intervening years. In 1998 JCB, a British construction equipment manufacturer (based right by where I used to go to school), had a problem. For decades they had used engines from the British company Perkins in many of their machines. Then it was announced that Perkins was being taken over by JCB's massive US competitor, Caterpillar. Not wanting control of their engines to be in the hands of a competitor, JCB set about designing and making their own. In 2004, the first JCB444 engine rolled off the production line.

JCB wanted to do something to publicise this new capability. As well as having the JCB 'Dancing Diggers' display team (which I saw on several occasions when I was a child), they also built the JCB GT, the fastest digger on earth, capable of easily reaching 100 MPH. But JCB wanted something extra, and decided upon the diesel-powered land speed record. But who could they get to run such a project?

Step forward Richard Noble and Andy Green. They and their team designed, built and drove the JCB DieselMax car. In 2006 this won them the diesel-powered land speed record at a speed of 350 MPH, an improvement of 90 MPH over the previous record. In doing so they only used fifth of the car's six gears; the limitation on speed was down to the tyres. If they had wanted, they could have pushed the car further.

So I wish Noble the best of luck with the Bloodhound SSC. Not all of his projects have worked out, but I hope this one will. We can all dream impossible dreams, but it takes a special man to make them possible.

Saturday, 13 November 2010

Three heroes

What do the locomotive cow-catcher, Lord Byron's daughter, and the standardised screw thread all have in common?

Answer: the first computer.

And all three involved heroes of mine.

Firstly, the easy connection. The locomotive cow-catcher was one of the less well-known inventions of a certain Charles Babbage. Anyone who knows about the history of computing (or has been to London's Science Museum) will recognise the name. Charles Babbage designed the first computer, the mechanical Analytical Engine.

The connection with Lord Byron's daughter? Her name was Ada Lovelace, and she wrote a mathematical description of the Analytical Engine, in the process becoming the first ever computer programmer. Her fame is cemented by the fact that Ada, a computer language used by the military and others, was named after her.

Then the third connection: what does the standardised screw thread have to do with Babbage and Ada Lovelace? At first sight, nothing. The standardised screw thread is something that we take for granted nowadays; we expect a nut and bolt to fit together well enough. Yet nothing could have been further from the truth in the early nineteenth century. You could go to a local blacksmith and get a 1/2" nut and bolt, but the pitch and depth of the thread could be very different from those made in the next town. This mattered little until the industrial revolution began to demand precise, interchangeable parts.

One man, Joseph Whitworth, saw this problem, and in 1841 came up with a very simple idea: a standardised screw thread, which became known as the 'Whitworth' standard. He not only had the idea, but also designed and manufactured machines capable of cutting such threads and other high-tolerance parts. In the process he made a personal fortune and started an engineering colossus - the Whitworth company.

The Whitworth standard was later replaced by metric threads, but you can still find Whitworth nuts and bolts in the strangest places - for instance the thread that attaches a camera to a tripod.

So what is the connection with Babbage and Lovelace? Whitworth spent some time working for the engineer Joseph Clement. While at Clement's workshop he helped with the abortive manufacture of the Difference Engine. The parts of the Difference Engine required unheard-of tolerances, and the failure to mass-produce those parts was one reason the project failed. It is entirely conceivable that Whitworth's work on standardisation was a response to all that he had learnt working for Clement.

Two replica Difference Engines have been made; one can be seen in the Science Museum. John Graham-Cumming has launched Plan 28, a project to actually build Charles Babbage's Analytical Engine. It is an ambitious, some say impossible, project; but I have pledged my £10.

If you want to see Babbage and Lovelace in cartoon form, then Sydney Padua's excellent 2D Goggles is a must-see. I am just waiting for her to include Whitworth in cartoon form.