Showing posts with label science. Show all posts

Tuesday, 31 March 2020

How do we test for Covid-19?

(Yet again, IANAE. Below is information I have gleaned from various sources for my own interest. Any mistakes are my own.)

In all the information being written about Covid-19, there seem to be few resources that explain the test's methodology and its limitations. 'The test' has become a black box, and many - sadly including journalists - seem to treat it as an all-conquering miracle.

In reality, whilst it's the best we've got, it's awkward.

The current Covid-19 tests have been produced very rapidly, and are a tribute to the companies and organisations that have developed them. They were aided by the fact that there have been several close calls over the last couple of decades - for instance SARS in 2002 and MERS from 2012. These earlier diseases proved to be less able to spread between humans, and gave investigators a target to concentrate on. The Covid-19 tests are built on that earlier work, which is why we got a test for Covid-19 within a couple of weeks of the outbreak starting.

The current commonly-used Covid-19 tests are variants of a PCR test.

So (deep breath), what is a PCR test?

A polymerase chain reaction (PCR) test detects a pathogen's genetic material in bodily fluids, such as blood. It is essentially molecular photocopying: small amounts of the pathogen's DNA or RNA are copied many times (amplified) to a level where they can be detected. Without this amplification, the virus's RNA would be at too low a level for detection. I like to think of it as a gigantic magnifying glass, although perhaps not one wielded by Sherlock Holmes.
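
To get a feel for why amplification makes vanishingly small amounts of genetic material detectable, here is a rough back-of-the-envelope sketch in Python. The perfect-doubling model, the starting copy count and the cycle numbers are simplifying assumptions for illustration, not a description of any particular assay.

    # Rough sketch: PCR amplification is roughly exponential, with the target
    # sequence (ideally) doubling on each thermal cycle.
    def amplified_copies(initial_copies, cycles, efficiency=1.0):
        # efficiency = 1.0 means perfect doubling each cycle; real reactions
        # fall somewhat short of this.
        return initial_copies * (1 + efficiency) ** cycles

    # Even a handful of starting copies becomes an enormous number after the
    # 30-40 cycles a typical run uses.
    for cycles in (10, 20, 30, 40):
        print(cycles, "cycles:", f"{amplified_copies(10, cycles):.2e}", "copies")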

PCR's inventor, Kary B. Mullis, won the Nobel Prize in Chemistry for it in 1993. It initially proved useful for the Human Genome Mapping Project, although it is also used for purposes such as DNA fingerprinting and genetic research (1).

So, what is the testing procedure?

  1. A swab is taken from the patient's nose, or a sample is taken from the back of the throat.
  2. The sample is sealed into a tube and sent to a lab for processing.
  3. In the lab, the sample's RNA is extracted.
  4. Chemicals are mixed with the sample in different combinations.
  5. These mixtures are tested in a PCR machine.
  6. The result is given as positive, negative, or uncertain (a catch-all for various errors and problems, for instance the presence of similar viruses).

There are many issues with the test:

  • The PCR tests can only tell if you currently have the disease, not if you have had it and have recovered. For that, we need an antibody test.
  • The test is not instant; samples have to be sent to labs (often distant) for testing.
  • It is not just a case of having enough testing kits: you also need the downstream laboratory to process the samples. It is pointless having a test that you do not get a result from for weeks or months. Tales of countries or organisations ordering tens of thousands of kits seem to neglect the downstream processing. This processing means it is perfectly possible for (say) 10,000 tests to be performed in a day, but for results of only 8,000 to come through, as there is a lag between tests and results - especially if the labs are inundated with tests.
  • The test takes time. Getting samples to a lab takes time. Extracting the RNA takes time. Mixing it with the chemicals takes time. Performing the PCR test itself takes time. Even when samples are batched up, it can take many hours for a sample to be tested, and that does not include transport from patient to lab.
  • The tests require consumables: from reagents to protective equipment for the lab workers. These consumables and workers are in short supply at a time when every country in the world is demanding them.
  • The tests may be inaccurate. False positives (a patient reported to have the disease when they do not) are less important, as the patient will then be treated with caution, e.g. self-isolation. The big problem is false negatives: a patient reported to be clear of the disease when, in fact, they have it. Some reports give the current test a sensitivity of about 70%: in other words, it will detect Covid-19 in an infected patient only about 70% of the time (a rough illustration of what that means follows this list).
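
As a rough sketch of what a 70% detection rate means in practice - the sensitivity figure is the one quoted above, and the group size is an arbitrary assumption:

    # Rough illustration only: of 1,000 genuinely infected people tested once,
    # how many are wrongly told they are clear?
    sensitivity = 0.70        # assumed fraction of true cases the test detects
    infected_tested = 1_000   # hypothetical group of infected patients

    false_negatives = infected_tested * (1 - sensitivity)
    print(f"Expected false negatives: about {false_negatives:.0f} of {infected_tested}")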

Why might false negatives be reported? (2)

  1. In the early stages of the illness, the patient may have too low a viral load to be detected.
  2. The swabs are taken from the nose and/or the back of the throat, and if the patient's respiratory illness is not too severe, not much of the virus may make it up the respiratory tract.
  3. The sample may simply have been taken incorrectly.
  4. The samples may have been poorly handled.
  5. There might be technical issues in the test.

PCR is a tool, and as with any tool, it needs using with care, and with a deep understanding of the tool's limitations.

As an aside, PCR tests are used in Low Copy Number (LCN) DNA fingerprinting techniques, which allow tiny amounts of DNA to be fingerprinted in criminal cases. This is particularly useful in cold cases, where DNA might have degraded over time. The LCN technique proved somewhat controversial a little over a decade ago (3).

Hopefully we will get better, more immediate tests that do not require such a complex process. But in the meantime, thanks to all the companies, organisations and people who are working their socks off to increase the availability of testing kits and increase the testing capability.

(1): https://www.genome.gov/about-genomics/fact-sheets/Polymerase-Chain-Reaction-Fact-Sheet
(2): https://ourworldindata.org/covid-testing
(3): https://en.wikipedia.org/wiki/Low_copy_number#Criticism




Thursday, 10 January 2019

Book review: "The Planet Factory", by Elizabeth Tasker

It is unusual for a popular science book to start by saying that what you are reading will probably be proved wrong in a very short period. Yet that is exactly what astrophysicist Elizabeth Tasker states in this excellent book about how planets and moons form.

There is a good reason for this: twenty-five years ago we did not know of any planets outside our solar system, and some people claimed that our planetary system might be unique. All our models on how planets formed had to be based on what we could see in our own system. Yet by mid-2018 we knew of 3,700 planetary systems, and virtually every one has posed more questions than it has answered. Together, they have caused us to question our assumptions on how all planets - including the Earth - formed.

Ms Tasker details how primordial clouds of dust and gas collapse to form full solar systems with stars and planets, and how much we still have to learn about this most fundamental of processes.

She examines the weird planets that may exist: ones that orbit within their star, ones with seas of tar, ones made of lava, or others where it rains diamonds. Truly alien worlds that belong in science fiction - and indeed, science fiction worlds may not be as fictional as we once thought. Want a planet with two suns, such as Star Wars' Tatooine? They exist. Want an ice world? Take your pick.

The reason many people are interested in planets is the possibility that they may harbour life. In reality, this is the only time the media pays attention to the discovery of a new planet, usually with headlines such as "Most Earth-like planet could harbour life." Ms Tasker dives behind the headlines and looks at why they are often misleading, and how life might occur on planets that are very different from the Earth. Finally, she examines how in the future we might be able to detect the presence of life on a distant planet - if not its form - even from tremendous distances.

Planetary formation can be a very dry subject, and Ms Tasker does a good job in explaining the terminology in a light and accessible manner. Even so, this is not a children's book, and in places will require a little perseverance to understand the concepts, and some thumbing back through the pages to find definitions. But the perseverance certainly pays off.

The biggest issue I found with this book is that the uncertainty over how things happen, and the resulting speculation, can become confusing as competing provisional theories are presented. An expanded glossary of commonly-used terms at the end of the book would also be helpful.

If you have any interest in how the Earth formed, or in whether there is life elsewhere in the universe, then this book is an invaluable primer.

4 out of 5.

Monday, 20 January 2014

Drake's equation redux

A quite startling five years ago (have I really been blogging that long?) I wrote a post on the Drake Equation, the formula proposed by Dr Frank Drake in 1960 to try to guess the number of intelligent species ("Intelligent Life Elsewhere") in the galaxy.

Since that post, there have been a number of developments:

  • Many more extrasolar planets have been discovered; there are now 1070 planets in 810 systems (some systems have multiple planets). Most of these planets are large, some even larger than Jupiter, but a few Earth-sized ones have been detected. When I wrote that post five years ago, it was just 339 planets. When I was a child, some scientists claimed that our own solar system might be unique - they have been proved utterly wrong.
  • A couple of dozen planetary atmospheres have been detected, although most of these belong to gas giant planets like Jupiter. This allows temperatures and atmospheric composition to be detected in some cases.
  • The first direct pictures of an extrasolar planet have been taken by the Gemini telescopes. They are of Beta Pictoris b, a gas giant several times the size of Jupiter, which orbits the young star Beta Pictoris, 63.4 light years from us.
  • As we discover more extrasolar planets, we are able to classify them. This has led to a list of planet types. My favourites have to be the puffy planet and Super Earth, the latter of which brings to mind a planet populated with the likes of Clark Kent and Kara Jor-El.
  • Latest estimates suggest that there might be 11 billion Earth-sized planets in our galaxy.
  • Only a small fraction of these planets may be in the so-called 'habitable zone', where life as we know it can exist. However, recent research suggests that life may exist much further from the sun than we previously thought.
This progress can only continue. I find it an absolute wonder: the galaxy is turning out to be so much more interesting than I could ever have guessed as a child.

And the conclusion must be that we are not alone. The chances of life only developing on our planet in our solar system, when there are so many similar planets, must be minute.

So why are we not hearing them? Why are we not conversing with them? My views have not changed from five years ago: I still think one of the following options is true:
  • We are the only intelligent lifeform in the Galaxy;
  • We are not listening for messages in the right way;
  • Other ILE is too far away from us, and their signals too weak for us to detect;
  • Other ILE has developed, perhaps several times over, but died out many years ago (disease, nuclear devastation etc).
  • They are around us as we speak, watching us and waiting for the right moment to intervene...
Let's see how things change in the next five years.


Saturday, 18 January 2014

Human Genome Mapping

For the last three decades or so, it has been possible to 'map' the human genome, to untangle the code of guanine, cytosine, adenine and thymine (GCAT) that comprises our genetic make-up.

This has revolutionised parts of our life, including crime detection and paternity tests. It has had a much smaller effect in medicine, where there are few treatments available that use genetics. In fact, the whole area of genetics is more complex than anyone realised thirty years ago, and now other concepts such as epigenetics are coming to the fore. It has proved relatively easy to find genetic markers for certain diseases; it has proved much more difficult to produce the long-promised cures from that information.

For years, scientists strove to create the first map of the entire human genome. An international collaborative project called the Human Genome Project started work in 1990. The machines were expensive, and worked slowly, with some human interaction required. The project was scheduled to run for around 15 years to produce a typical map.

Nor was it to be the genome of one individual: the map produced was to be a composite of several people.

However, the technology continued improving, and in 1998 an American scientist, Craig Venter, set up a company called Celera Genomics to sequence the entire genome of an unknown individual by 2001, a few years earlier than the public project. To pay for it, he wanted to patent important parts of the genome, meaning that any scientists wanting to use that genetic information would have to pay Celera for the honour.

To make matters worse, the public project had released lots of the information they had already sequenced, and Celera did not need to resequence those parts - they used the public information.

This got the scientific world in a tizzy. The public project held a series of meetings, and the Wellcome Trust threw a massive amount of money at it, agreeing to sequence a third of the map by itself, rather than the sixth it had been scheduled to do. Other companies pledged to give more money to the public project: science could not allow genetics to become patented.

It became an arms race between the private company and the public effort. Thanks to this massive effort by the Wellcome Trust and others around the world, the first drafts of the HGP were completed in 2001, at roughly the same time as Celera's project.

Later, it turned out that Celera's unknown individual was Venter himself. He is, in my opinion, one of the greatest scientific villains of the last few decades.

It's worth looking at some figures.

In 1990, the project believed it would cost $3 billion and take 15 years to sequence the genome.

In 1998, Venter believed it would cost $300 million and be done in five years.

Now, we have machines that can sequence the map of 1,800 individuals a year, at a cost of $1,000 per sample.
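
Just to put those numbers side by side, here is a tiny back-of-the-envelope calculation. Treating the 1990 and 1998 project budgets as if they were per-genome costs is a simplification, but it shows the trend.

    # Rough comparison of the figures quoted above.
    baseline = 3_000_000_000   # 1990 estimate for the Human Genome Project
    celera   = 300_000_000     # Venter's 1998 estimate
    today    = 1_000           # quoted per-sample cost now

    for label, cost in (("1998 estimate", celera), ("per-sample cost now", today)):
        factor = baseline / cost
        print(f"{label}: ${cost:,}, about {factor:,.0f} times cheaper than the 1990 estimate")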

The march of this technology is absolutely fantastic.

Sunday, 28 August 2011

Scale

Sencan pointed the following webpage out to me this morning:
http://primaxstudio.com/stuff/scale_of_universe/

I have seen several versions of this sort of thing before, but I have never seen such an impressively interactive version. Seeing the scale of man in the middle gives a good idea of our insignificant place in the universe, at both the micro and the macro ends of the scale.

There is so much remaining to be learnt at both ends of the scale - the science behind subatomic particles is still hotly debated, and we are still finding new frontiers in our knowledge of the wider universe.

It is awe-inspiring. I want to be a scientist.

Sunday, 2 January 2011

In praise of the Royal Institution Christmas Lectures

This Christmas I was glued to the TV for the Royal Institution Christmas Lectures. This year the topic was 'size matters', presented by the spectacularly-named Dr Mark Miodownik.

This year it moved back to its spiritual home on the BBC, but was unfortunately reduced from five to three episodes.

Being of a relatively scientific bent, there was little that was new to me in this year's presentation. Despite this, it was still fascinating stuff. Science is notoriously difficult to present to children, yet the lectures never fail to arrange complex topics into a form that children can comprehend. Strangely, I never fail to learn something, even if it is something long forgotten.


For instance, take the last of this year's three programs. It went from why some materials look like solids but are actually liquids, to how mountains sink into the earth's mantle, to the limits of skyscraper height, and from carbon nanotubes to space elevators. All of this was told in an accessible manner without a single equation.


These lectures have been running since 1825, and the list of past presenters runs through the luminaries of British and world science: Michael Faraday, John Tyndall, Frank Whittle (strangely talking about petroleum and not the jet engine), the genius Eric Laithwaite, Desmond Morris, David Attenborough, Heinz Wolff, Carl Sagan, Richard Dawkins, Kevin Warwick and Susan Greenfield, amongst others. The lectures have been televised since 1966. Google Books has a potted history of the lectures.

Faraday started the lectures to teach children about science; a pioneering ambition, especially for those pre-Victorian times. Children still dominate the audience, and the presenter often encourages them to take part in experiments.

It is so easy to dumb down science - something that the media never fail to achieve with sensationalist headlines. It is therefore somewhat amazing that the Royal Institution manage to make science accessible without dumbing it down. I can only hope it will continue in a world where the truly good science programs - QED, Horizon, and Equinox - have all disappeared.

Wednesday, 3 November 2010

Faster, Better, Cheaper

A decade ago, NASA had some well-noted disasters with unmanned spacecraft: the Mars Polar Lander, the Lewis earth-observing satellite, and the Mars Climate Orbiter. Fortunately none cost any lives, but they all proved embarrassing to NASA, which is supposed to be the pinnacle of American scientific and engineering achievement.

What is perverse is that many of the problems could be put down to one phrase: "Faster, Better, Cheaper". This phrase was dreamt up by NASA Administrator Dan Goldin, who took up the post in the early 1990s. It is now widely seen as having been a disaster, even in official reports.

So what was the problem? The problem was, in my opinion, simple. Engineers need to be able to measure things. You can measure time, speed, money, weight, distance, and any other number of metrics. In the phrase "Faster, Better, Cheaper", it is easy to measure 'faster'. Has a project been delivered faster than would have been the case under the old system? Cheaper is also easy: has the project cost less than it would under the old system?

Of course the actual metrics used will be more complex than that, but with both 'faster' and 'cheaper' the measurement is possible and obvious.

The devil is in the word 'better'. How do you measure betterness? Could a project that didn't work fully still be called better because of some arbitrary other metric? "Gee, the craft crashed into the moon instead of orbiting, but it was better because we all got more publicity!"

Perversely, 'better' allows you to mask failures, and it does not give engineers direction.

Many engineers say that it is possible only to have two out of the three; you can have faster and cheaper, but you won't get better. Or you can have faster and better, but you can't have cheaper. Then there is another viewpoint, where you can have all three. There is the following quote from that link:
No, it’s not a fact of life. It is possible. There are two cultures. The second culture is the culture that dominates the new information-age industries -- like Microsoft -- which is, you can simultaneously improve cost, schedule and performance.
And herein lies the problem. The writer talks about cost, schedule and performance. Cost is related to 'cheaper', and schedule to 'faster'. However, performance is just one part of 'better'. A measure of 'better' might be something different from performance, depending on the mission. 'Better' on the Space Shuttle might be measured by the safety rating for the crew, whilst performance might be the maximum payload lifted, or the thrust of the engine, or any other such metric. He has quietly altered 'Faster, Better, Cheaper' into 'Faster, Cheaper, Performance'.

Additionally, it is a fallacy to say that the high-tech industries such as Microsoft have any relation to the space industry. They do not. A company like Microsoft can afford to take limited risks, whereas in space they cannot. Put simply, if software goes wrong, most of the time it can be updated and fixed (there are exceptions to this; such as firmware updates, but these are relatively rare). A rocket launch or a space mission is a one-off shot; if it fails, it can cost hundreds of millions or even billions of dollars.

By all means, keep faster and cheaper. Space access needs faster and cheaper. But instead of 'better', pick another, narrower metric. For manned systems, perhaps they should use 'faster, cheaper, safer'.

Thursday, 28 October 2010

The state of science coverage

I am not a scientist. Indeed, I am nowhere near a scientist. I am as likely to pen a scientific paper as my mother is to write a computer program to generate a website. (*)

However, I have a fair idea of what science is, and have always enjoyed reading about the latest advances. Indeed, when I was nine or ten I was designing simple PWR nuclear reactors. Yes, I was that sad.

Unfortunately, there is a great problem with science, and that is the scientific media. I used to read Scientific American avidly (especially for the bimonthly 'mathematical recreations' section), but that has gone really downhill. New Scientist always seemed like a joke to me, and it has just got worse.

The specialist literature is far better, but also much harder to get into.

Scientific American and New Scientist are faced with a problem: they are the public face of science. If you want to know what is going on, then they will tell you. Or that used to be the case. Nowadays (**), sadly, they are going for the populist vote, and covering stories from a headline-making angle. They depend on circulation, and they therefore want people to read them. To reach as broad a base of readers as possible, they dumb down and create sensationalist headlines. Sometimes they even forget basic science.

Perversely, the best general scientific coverage tends to be in, of all things, the Economist, especially in their technology quarterlies. It is good science written in clear, concise terms that is accessible to the layman.

The problem has also worked its way into broadcast media. Tomorrow's World, once a great program that made a good attempt at explaining science (and sometimes failed), was converted into a load of populist tosh before it was finally put out of its misery. Its eventual replacement, 'Bang Goes the Theory', is risible and almost unwatchable if you know anything about the topics it is covering. It has been dumbed down to the point of insensibility.

Then there was the excellent QED, which they renamed 'Living Proof' (allegedly as no-one understood what QED meant). The quality of the programs fell at the same time. Channel 4's Equinox series seems to have died a death.

However, there is hope. Earlier this week there was a program on BBC Four called 'Atom', where Professor Jim Al-Khalili talked about the history of the atom. It was great, thrilling watching. Even better, it was followed by the lovely Victoria Coren and 'Only Connect'; the only truly intelligent quiz show on TV. So good scientific programming can still be done, but it has to be broadcast in quiet backwaters.

(*) I should stress that my mother is hardly unintelligent. It is just that her skills lie elsewhere; like managing three unruly children.

(**) I really hope that I do not sound like an old codger.

Friday, 22 October 2010

Internet encryption and the Clipper chip

A recent announcement about the invention of public-key encryption has made me think about encryption and the Internet. I have been watching developments in encryption for over twenty years, and it is interesting to see how it has - and has not - developed.

Almost all consumer encryption on the web uses Secure Sockets Layer (SSL); a secure webpage is often denoted by a padlock in the URL bar of the browser. Common strengths of SSL encryption are 40-bit, 128-bit and 256-bit. The numbers represent the length of the 'key' - in effect, the security pass. If you know the key, then you can decipher the information being sent over the Internet. If you do not know the key, then it is extremely difficult to decipher the message.

Of course, the recipient of the message also needs to decode it, and therefore needs a key. If a key has to be transmitted, then it might be intercepted. For this reason, SSL uses a complex system called public-key cryptography. The full theory is not necessary for this post, but essentially it lets anyone encrypt a message using the recipient's freely-published public key, while only the recipient's matching private key - which never has to be transmitted - can decrypt it.
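
The core trick can be shown with a toy example. This is a minimal sketch of the RSA idea with deliberately tiny numbers; it bears no resemblance to the key sizes, padding or protocols real SSL uses, and the primes and exponents are simply illustrative choices.

    # Toy RSA-style demo - purely illustrative, trivially breakable key size.
    p, q = 61, 53                  # two small primes, kept secret
    n = p * q                      # modulus, published as part of the public key
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent (the public key is (n, e))
    d = pow(e, -1, phi)            # private exponent, derived from the secret primes

    def encrypt(m):
        # Anyone can do this knowing only the public key (n, e).
        return pow(m, e, n)

    def decrypt(c):
        # Only the holder of the private exponent d can reverse it.
        return pow(c, d, n)

    message = 42
    ciphertext = encrypt(message)
    print("ciphertext:", ciphertext, "-> decrypted:", decrypt(ciphertext))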

The ability for people to send messages with only the intended recipient reading them is the basis of Internet commerce. It allows me to order a few books on Amazon, or to check the balance on my bank accounts without third parties seeing what I am doing. Such encryption is an essential part of modern life.

That same ability has frightened security services for the last couple of decades. In the days of postal letters, laws were passed allowing the authorities to open and read the contents. Phones could be tapped, subject to a legal process. Both of these had obvious privacy issues, and the law had to tread a difficult line between privacy and national security. They did not always get it right.

Unfortunately, public-key cryptography meant that, although the messages could still be intercepted, they could not be read. 40-bit messages were just about feasible to crack using massive computing power; 128-bit was essentially impossible, and is still very difficult. This left the authorities with an obvious problem: what would happen if organised crime, terrorists or any other ne'er-do-wells started using unbreakable encryption?
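
To see why 40-bit was crackable while 128-bit was not, here is a rough sketch; the assumed rate of a billion key attempts per second is plucked out of the air purely to show the scale.

    # Rough sense of scale for brute-forcing keys of different lengths.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    keys_per_second = 1e9   # assumed attacker speed, for illustration only

    for bits in (40, 128, 256):
        keyspace = 2 ** bits
        years = keyspace / keys_per_second / SECONDS_PER_YEAR
        print(f"{bits}-bit: {keyspace:.2e} keys, about {years:.2e} years to try them all")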

Initially they tried to ban the technology for export. A decade ago I attended a few export control meetings in London. Export of SSL technology was limited to 40-bit, which was weak, and the company I was working for wanted to export 128-bit to secure Internet banking. As our customer was in Scandinavia, we needed an export licence.

This would all have been very amusing if it had not been so time-consuming and pointless. The code for 40-bit SSL was freely-available open-source, which must have spread all over the world before anyone had even thought of slapping controls on it. Adding support for 128-bit was quite simple for anyone with good mathematical and coding skills.

For this reason, the US Government came up with the idea of Clipper. This was a chip and associated architecture that would allow encrypted phone calls, but would also give the US Government a back door to listen to conversations if required. An associated chip, Capstone, would be used to encrypt data on computers.

Think about the problems involved with this: it required the public to trust the US Government to use the system properly (i.e. only listening to messages when there was a real need); the protocols and encryption standard were secret and could not be evaluated; and there was little idea what the rest of the world would do. Indeed, the mere threat of Clipper led to the creation of more open-source, publicly-available software systems to enable encryption.

Clipper seemed like an advanced concept in the early nineties. Yet its critics rightly sought its abandonment. So would the world be safer if Clipper had been introduced? I doubt it. Public-key cryptography had been invented long before Clipper, and criminals would surely have used it instead of Clipper-enabled systems. How could the US government have forced, say, the Iranians or other governments to use Clipper?

For these reasons, the Clipper proposal was really a non-starter. The project was abandoned in 1996 after three years and a great deal of money had been spent.

Instead, some countries have introduced laws that make it illegal to fail to produce an encryption key when demanded. In the UK this is enshrined as part of the Regulation of Investigatory Powers Act 2000, otherwise known as RIPA. RIPA has led to arrests. Although imperfect, this seems like a far better system than any Clipper-type system. It means that the authorities have to go through a legally-defined process to read encrypted messages (*).

The mathematics behind public-key encryption is fascinating, but so are the legal and moral dilemmas that it produces. If you want to read more about cryptography and encryption, then you can do worse than read Simon Singh's The Code Book. If you want a web resource, then Greg Goebel's website has an excellent primer to codes and ciphers, and also a guide to codes, ciphers and codebreaking that has a chapter on public-key cryptography.

(*) It may be possible, perhaps even probable, that GCHQ and others have computer systems capable of breaking all encryption using brute-force or other techniques. If so, then it is little known, and it is doubtful whether data from such systems could be used in a court of law.

Thursday, 21 October 2010

In memory of Benoit Mandelbrot

The mathematician Benoit Mandelbrot has died, aged 85.

Although hardly a household name, he is known amongst mathematicians and computer scientists as the father of the fractal. Fractals are mathematical constructs in which an object can be split into smaller parts, each of which resembles the larger whole.

Fractals would probably have remained a mathematical curiosity except for the fact that fractal geometry can explain many of the things we observe in nature - a classic example is a fern frond, where the entire frond consists of small parts that resemble the whole. Ice crystals and clouds exhibit fractal characteristics, as can some financial systems.

This means that a fairly complex system such as the shape of a leaf can be controlled by very simple rules; understand the rules and you can recreate the shape.

Fractals can be used to generate images of startling beauty, for instance the Mandelbrot or Julia sets. These can be zoomed into endlessly, each level of zoom revealing yet more detail. They are the best of maths: relatively simple in theory, with vast implications for the real world, and capable of producing genuine beauty.
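
For anyone who fancies recreating the famous image, remarkably little code is needed. Here is a minimal sketch that prints a coarse text rendering of the set; the grid size and iteration limit are arbitrary choices.

    # Minimal Mandelbrot sketch: for each point c on a grid in the complex
    # plane, count how quickly z -> z*z + c escapes. Points that have not
    # escaped after MAX_ITER steps are treated as inside the set ('#').
    MAX_ITER = 50

    def escapes_after(c):
        z = 0
        for i in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2:      # once |z| exceeds 2 it is guaranteed to diverge
                return i
        return MAX_ITER

    width, height = 60, 24
    for row in range(height):
        line = ""
        for col in range(width):
            c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
            line += "#" if escapes_after(c) == MAX_ITER else " "
        print(line)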

There is one other reason why Mandelbrot appeals to me: one of his first papers on fractals was called "How long is the coast of Britain?", published in 1967. In this, he details how finding a 'correct' length for the coastline of Britain is next to impossible, as it depends on the scale you measure it at. The closer you look, the more detailed and longer the coastline becomes.

I first wrote Mandelbrot and Julia set creation programs a couple of decades ago, when the computer power required meant that the zooming was exceptionally slow. I loved both the maths and the resultant images. So, courtesy of Wikipedia, here is a Mandelbrot set:


And why not discover the beauty for yourself: have a play at Yale's website.

Wednesday, 15 September 2010

The speed of light is too slow!

The speed of light is too slow. No, seriously. This marvellous giver of life, the sustainer of every living thing, is just too darned slow.

Okay, I know you think that I'm mad. But it is true, for computer chips at least.

Almost all (*) digital computer chips rely on something called clock signals. These are the timekeepers of the chip, keeping all the operations synchronised. Want to add two numbers? Do it now. Want to fetch something? wait... wait... now! It is vital for operations to occur in the correct sequence, and the clock signal helps control this.

These clock speeds are the 25 Megahertz (MHz) or 33 MHz numbers we used to see in the early to mid-1990s. These numbers mean that the 386 or 486 chips of the day performed 25 or 33 million operations a second.

Both light and electricity travel at a smidgen under 300 million metres a second (**); that is 300 thousand kilometres every second. In the one 33-millionth of a second between ticks of the fastest chips that the bearded engineers of the early 90s could design, light would travel about nine metres. This is considerably larger than the roughly 16mm longest side of the chip.

However, modern chips operate much faster. A 2 Gigahertz (GHz) chip performs 2,000,000,000 operations every second (***). In this case, light can only travel 15 centimetres between each tick of the clock. That is getting very near to the physical size of the chip. Consider what this means; if the distance light can travel in a tick of the clock is less than the size of the chip, then it is impossible for the chip to use a clock signal to control all its parts. Things become much more complex. In many cases that effective distance is much less due to the convoluted path that signals have to take through the chip.
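
The sums behind those figures are simple enough to check. A quick sketch, using the vacuum speed of light, so the real on-chip distances are even shorter:

    # How far light travels in one clock tick at the speeds mentioned above.
    C = 299_792_458   # speed of light in a vacuum, metres per second

    for label, hertz in (("33 MHz", 33e6), ("2 GHz", 2e9), ("3 GHz", 3e9)):
        metres_per_tick = C / hertz
        print(f"{label}: about {metres_per_tick:.3f} m per clock tick")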

This is one of the reasons why the increase in clock speeds is slowing down. Until recently chip manufacturers proudly displayed the clock speeds of their chips; a consumer knew that a 66MHz chip would, everything else being equal, be faster than a 33MHz chip. Unfortunately clock speeds have stalled around the 2 to 3GHz mark. One of the reasons for this are the problems caused by the speed of light within the chip.

Chip designers are constantly pushing at the limits of the possible. In many cases new technologies or materials can push those limits a little further away, buying a few more years. In the case of the speed of light, however, there can be no improvement. It is a fundamental limit that cannot be broken.

(*) Some attempts have been made to make asynchronous, or unclocked chips, such as the Amulet project at Manchester University. These are rare and can be ignored for the purposes of this discussion.

(**) This is the speed of light in a vacuum. The speed in most electrical circuits is somewhat less.

(***) This is not quite true; modern chips have some parallelisation that allows multiple operations to be performed at the same time. The controlling clock still operates at this speed, however.

Wednesday, 11 March 2009

Book review: "Supercontinent" by Ted Nield

This was another book that we picked up after a talk at the Bath Literature Festival. Of the four talks we attended, this was (for me, at least) the most interesting, as it covered a subject that I have long been interested in.

The book starts off with a piece of science fiction; an alien race coming back to visit Earth a few hundred million years in the future, only to find no trace of life - the entire surface of earth has essentially been wiped clean as the continents have reformed into one massive landmass. Only when the aliens turn their attention to the moon do they find traces of the race who once inhabited the planet below.

This is an interesting way of introducing this book on what is called 'deep time', or geologic time. In particular, it talks about how continental drift formed, then broke apart, one massive landmass on Earth. It also details how the current map of Earth that we all know is in transit, and how another large landmass will one day form. This cycle, in which a supercontinent is created and then broken up, takes about 500 to 750 million years. It is the longest cycle in nature, longer even than the time it takes for the sun to revolve around the galaxy. As an aside, for maps of what past Earth and future Earth may look like, see the excellent www.scotese.com website.

That is one thing that needs noting about this book - the numbers mentioned are either very, very large, or infinitesimally tiny. As you read it, you are exploring things that are far away from our ordinary everyday understanding. Despite this, the information is presented in a way that is far from overwhelming.

The author outlines the botanical and geological reasons for believing that now-separated landmasses were once together, and goes into the theories that gained currency before continental drift, mainly involving the 'lost continents' of Atlantis, Lemuria and Mu (fans of the KLF can sing 'All bound for Mu Mu land' at that mention). It is a fascinating history, one that even includes Marie Stopes and Scott of the Antarctic.

There is a certain amount of humour in his writing, not least when the author discusses uniformitarianism (the concept that natural processes that operated in the past are the same as those that are observed today). He also details the conflict over Graham Bank, an island near Sicily whose tendency to rise out of the sea routinely causes diplomatic incidents as countries attempt to claim the land mass. Before it is all sorted out, it invariably sinks beneath the waves once more. This lightness of touch does much to improve the readability of what could otherwise have been a very dry academic text.

It also mentions the geological time charts that are so familiar. These charts outline the various geological epochs, and should be instantly recognisable to anyone who has done elementary geography. Right at the bottom, past the Cretaceous and the Jurassic, is the Precambrian, little more than a small, ill-regarded sliver on the diagram. It is like this, so the author claims, because Precambrian rocks have very few fossils - it was before complex life had evolved. Later bands (such as the Jurassic) can be classified by the lifeforms within them. Earlier ones cannot. This led all those early rocks to be rolled up into one big 'Precambrian' chunk.

Only now do scientists realise that 88% of the Earth's history is contained within that chunk. The rest, that long list of familiar names, takes up only 542 million years, or just 12% of our planet's life. Later chapters in this book detail some of the ways that geologists are attempting to uncover what the Earth was like in those ancient times.

Two timely lessons are embedded in this book: one is that the consensus in science can be wrong; the other is that scientists will fight very hard to defend that consensus despite the evidence. Both are embodied in the way that many geologists and geophysicists declined to believe in the theory of continental drift, despite the ever-increasing body of evidence for it.

Perhaps the most interesting part of the book is when he talks about snowball earth, the time when the entirety of the earth's surface was covered by thick layers of ice. I have always had some problems with this idea, and the author explains the situation far more clearly than the several TV programs I have seen on the matter. He also takes time to present some of the counter-arguments to this controversial theory.

The book started with a fiction, and ends with a tragic fact. It ends with a description of the 2004 Asian Tsunami, and the author then makes an eloquent case not just for his science, but all science:
If today there is fresh water on Namibian farms and in Vienna, and an emerging tsunami early-warning system in the Indian Ocean, it is because geologists in the past have done the science that brings a closer understanding of deep time and the inner workings of the Earth. You cannot pick and choose with science. A seemingly rarefied geology that reconstructs the lost supercontinents of Earth's deep past is the same science that (with political will) can save hundreds of thousands of lives in the Indian Ocean when the next tsunami strikes. The arcane business of how our Earth's atmosphere evolved during the Precambrian under the influence of evolving life is the same science that helped us understand the massive, uncontrolled climate experiment in which the human race is currently engaged. But to deny one part of science is to deny it all. Science hangs together. It is a supercontinent.
If you want a detailed introduction to plate tectonics and deep time, then this could be just the book for you. It is very readable, and is (for a science book) fairly accessible. I would give it 5 out of 5, as the author has managed to make an often impenetrable subject understandable.

Thursday, 5 February 2009

The Drake Equation and Intelligent Life Elsewhere

I thought I'd have a little fun today.

A BBC News article published today claims that scientists have calculated that there could be between 361 and 38,000 intelligent civilisations in our Galaxy. This has long been a question that has interested many scientists and members of the public, but the problem was that there was so little information available. People would look at the issue; some would say that there were thousands, others that we were unique. Although rooted in science, such estimates were little more than guesswork.

Until 1995 it was not even known whether any planets existed outside our solar system. It is perhaps reasonable to assume that life cannot evolve without such planets, called extrasolar planets. However, since then, thanks to some rather nifty astronomy, we have found 339, a number that is increasing all the time. This makes the odds of there being intelligent life elsewhere (ILE) much greater. All of the planets found so far outsize the Earth; most are gas giants the size of Jupiter or larger. However, it is believed that if gas giants can form in a system, then smaller, rocky planets (such as Earth, Mars or Venus) are likely, if not inevitable.

So, how to work this out? When I was a teenager I was fascinated with the Drake Equation, created by Dr Frank Drake in 1960 as part of the Search for Extraterrestrial Intelligence (SETI) project. It was developed to try to work out the probability of radio signals being sent out by ILE.

There are various forms of the equation; perhaps the most accessible is:
N = R * fp * ne * fl * fi * fc * L

This is not as complex as it looks. Basically to work out N, (the number of races capable of communicating with us), you need to know or estimate:
  1. How many stars there are in the Galaxy at the current time (R)
  2. The number of such stars that have planets (fp)
  3. The number of those planets that can support life. In our solar system, this is one. (ne)
  4. The probability that such a planet has developed life (fl)
  5. The probability that such life is intelligent (fi)
  6. The probability that the society survives long enough to send detectable signals into space (fc)
  7. The length of time that society exists (i.e. sends radio signals) (L)
As can be seen, the odds of detecting intelligent life reduce through every step; we have firm figures for the number of stars in the Galaxy, and we now know that a good number of those stars have planets. After this, we get into total guesswork. For instance, if life develops, how likely is it for intelligent life to develop? In the 4.5 billion years that Earth has existed, only one race has developed enough intelligence to send radio waves into outer space. Is it inevitable that life, given enough time, becomes intelligent, or were we a fluke?

If you wish to try your own values, there is a Drake Equation calculator on the www.classbrain.com server. Remember, your guesses may be as accurate as any scientists...
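
If you would rather plug numbers in locally than use a web calculator, a few lines of Python will do it. This is a minimal sketch; the example values are pure guesses, and it uses the standard form of the equation, in which R is a rate of star formation per year rather than the simple star count described in step 1 above.

    # Minimal Drake Equation sketch: N = R * fp * ne * fl * fi * fc * L.
    def drake(R, fp, ne, fl, fi, fc, L):
        return R * fp * ne * fl * fi * fc * L

    N = drake(
        R=10,       # suitable stars formed in the Galaxy per year (guess)
        fp=0.5,     # fraction of those stars with planets (guess)
        ne=2,       # life-capable planets per such system (guess)
        fl=0.3,     # fraction of those planets where life appears (guess)
        fi=0.01,    # fraction of life-bearing planets developing intelligence (guess)
        fc=0.1,     # fraction of those that send detectable signals (guess)
        L=10_000,   # years such a civilisation keeps transmitting (guess)
    )
    print(f"Estimated communicating civilisations: about {N:.0f}")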

Within my lifetime I expect many of these factors to be increasingly firmed up. Science is improving all the time. New telescopes such as the postponed Terrestrial Planet Finder from NASA or the planned Darwin mission from ESA should allow us to see extrasolar planets in great detail, even to the extent of detecting chemicals required by life in atmospheric gases. However, we are finding it hard enough to decide if there has ever been life on Mars, and that is literally in our own backyard. Any evidence found will be interpreted and argued over ad nauseam, just as the Martian meteorites have been.

In many ways this is pointless information; I cannot foresee us ever having the capability to travel to these worlds, and the knowledge that life exists on other planets will not affect the human consciousness in the long term. Scientists will be excited, theologians worried, and the rest of us will continue living our lives regardless.

So what are my views? Basically, they have not changed in twenty years. I am certain there is life elsewhere amongst the 200 to 400 billion stars in our galaxy. The number is just too large, and you would have to be very, very insular to believe that Earth is unique in having developed life. Intelligent life, however, is a different matter. The fact is, after nearly fifty years of searching we have not heard anything from outer space (the Wow! signal notwithstanding). This makes me believe one of the following is probably true:
  • We are the only intelligent lifeform in the Galaxy;
  • We are not listening for messages in the right way;
  • Other ILE is too far away from us, and their signals too weak for us to detect;
  • Other ILE has developed, perhaps several times over, but died out many years ago (disease, nuclear devastation etc).
  • They are around us as we speak, watching us and waiting for the right moment to intervene...
As much as I would like the last of these to be true, and for friendly aliens to land tomorrow outside Washington (*), I think it is highly unlikely. I like sci-fi, but I never forget the 'fi' part of the title. The distances are just so vast, even to our nearest star, Proxima Centauri, that it would be exceptionally hard to travel there using current or realistically envisaged technology.

(*) Why is it always Washington and America that the aliens land at first? My argument would be for New Delhi or Beijing. Then again, I would love it if the aliens read the wrong map and landed outside the Old Hall in Washington, Tyne and Wear...