Studying Complexity Science

In the news

Fully funded PhD places are available at the new Centre for Doctoral Training in Next Generation Computational Modelling. The CDT pursues computational modelling research spanning engineering, computer science, mathematics, and the physical, natural and life sciences.

Scratch the surface and complexity issues turn out to be central to many of today’s most significant news stories. Are we prepared for a bird flu outbreak? Can patterns of financial activity be used to identify terrorist cells? Could smart road use charges automatically manage traffic congestion? Why is yet another massive government IT project over budget and behind schedule? Will drug simulation allow bespoke medicines to be tailored to the needs of specific patients?

Bird Flu

When epidemiologists and biologists first determined that the arrival of bird flu in the UK was simply a matter of time, and that the evolution of a strain capable of being transmitted from person to person could not be ruled out, preparation for a bird flu outbreak became a priority. But how can we gauge the adequacy of our preparation? How quickly can we detect an outbreak? How far will it have spread before detection? In which parts of the population will it spread fastest? What reserves of drugs must we stockpile? Where should they be held? What mechanism for releasing and deploying them should be implemented? Answering these questions is the preserve of epidemiological modellers. But are our models and methods sophisticated enough? What does the future hold?
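The workhorse of such epidemiological modelling is the compartment model. The sketch below implements the classic SIR (Susceptible-Infected-Recovered) equations; the parameters and population size are purely illustrative, not real bird-flu figures.

```python
# A minimal SIR compartment model, integrated with Euler steps.
# beta: transmission rate; gamma: recovery rate (both illustrative).

def sir(beta, gamma, s0, i0, r0, steps, dt=0.1):
    """Integrate the classic SIR equations on a closed population."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i / n * dt   # contacts that transmit
        new_recoveries = gamma * i * dt          # infecteds who recover
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# One infected individual seeded into a town of 10,000.
history = sir(beta=0.4, gamma=0.1, s0=9999, i0=1, r0=0, steps=1000)
peak_infected = max(i for _, i, _ in history)
```

Even this toy version lets one ask outbreak-planning questions quantitatively: the peak of the infected curve indicates the load on health services, and varying beta mimics interventions such as quarantine. Real models layer spatial structure, age groups and contact networks on top of this skeleton.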

The Digital NHS

Notwithstanding previous failures to deliver large-scale software systems (see Swanwick, below), the government is committing tens of billions of pounds to the largest ever civil IT project: the NHS National Programme for IT (NPfIT). A high-speed broadband network will connect the UK's 30,000 GPs to 300 hospitals in order to share electronic patient records, booking and prescription systems, and other services. But how can it be made to work when so many large, interconnected systems have failed in the past? Is complexity the enemy within? Or could a complex adaptive systems perspective help?

Home Office Deportation Row

The recent eruption of public concern over the Home Office's failure to properly process foreign nationals released from prison is a good example of a problem at the interfaces between large systems. A lack of communication between the prison service and other parts of the criminal justice system had long been identified, but taking action was like "turning a tanker", according to Home Secretary Charles Clarke.

The resulting situation prompted disparaging questions in the House of Commons.

"Taking, for example, the Home Office ... can you tell us whether the prisons service computer does talk to the immigration and nationality directorate computer and whether either communicates with the police national computer or the courts service computers? If none of this happens, can we go back to an easier technology of each prisoner's file having a big red stamp on the front saying: 'Don't release – deport'?" – David Heath, Liberal Democrat.

Others see wider problems with a Home Office that is expected to cover too much ground. But would breaking it up solve the problem, or create more opportunities for communication to break down? How should we structure large-scale, interconnected organisations so that they can carry out the "joined-up thinking" required of them today?

Iraq & Al Qaeda

Ever since Al Qaeda became a badge for the terrorist menace posed by Islamic fundamentalism in 2001, there have been attempts to describe the organisation, its structure or lack of structure, and how it operates. Is it a huge multinational logistic and financial network supporting a large number of terrorist cells, a small cadre of Osama Bin Laden's close associates, or just a flag of convenience? Could analysis of internet or financial activity answer these questions? Determining whether Al Qaeda is indeed a "complex but dull" international corporation-like entity or merely "an idea about cleansing a corrupt world through religious violence" is a first step toward determining how its activities might best be dealt with. The LA Times reports that The Sims (or something like them) are being mobilised in an effort to simulate terrorist networks and assess the effectiveness of counter-terrorist strategies.
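The flavour of such structural analysis can be conveyed with a toy sketch. Nothing below models real data; the two six-person "cells" and the remove-the-hub strategy are purely illustrative of why a network's topology matters to its resilience.

```python
# Toy comparison: how a centralised vs a decentralised network
# responds to removal of its most-connected member.
from collections import defaultdict, deque

def components(nodes, edges):
    """Count connected components using breadth-first search."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for n in nodes:
        if n in seen:
            continue
        count += 1
        seen.add(n)
        queue = deque([n])
        while queue:
            for m in adj[queue.popleft()]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
    return count

def remove_hub(nodes, edges):
    """Delete the highest-degree node and all of its edges."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    hub = max(nodes, key=lambda n: degree[n])
    return ([n for n in nodes if n != hub],
            [(a, b) for a, b in edges if hub not in (a, b)])

# A centralised "star" cell: everyone reports to one coordinator.
star_nodes, star_edges = list(range(6)), [(0, k) for k in range(1, 6)]
# A decentralised "ring" cell: each member knows only two others.
ring_nodes, ring_edges = list(range(6)), [(k, (k + 1) % 6) for k in range(6)]
```

Removing the hub shatters the star into five isolated members, while the ring merely degrades into a chain that remains connected. This is the kind of question that drives interest in mapping an organisation's communication structure.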

In Iraq, the fallout of war continues to frustrate attempts to establish democratic governance. Could the nature and extent of the continued insurgency within Iraq have been predicted by the US and its allies prior to invasion, perhaps via “war-games” modelling of some kind? Could we at least have better estimated the likelihood of protracted insurgency vs. regime change of a smoother kind? Or must it be the case that a fast-changing, highly sensitive, and politically charged scenario is a kind of complex adaptive system that cannot be predicted in detail or with confidence? What are the limits on our ability to predict the costs of war in terms of lives, resources, money and political instability?

Smart Roads

Traffic congestion is a problem on many levels, wasting time and fuel, eroding services, harming the environment and damaging the economy. A combination of wireless technologies, IT, smart sensors, in-car systems and roadside infrastructure offers the possibility of managing road traffic to minimise congestion and improve safety. But how can these technologies be mobilised effectively and efficiently? Tinkering with traffic networks is costly, and the effects (both immediate and longer term) of even small interventions can be large and difficult to predict. Can we understand road use well enough to automate its management?
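Complexity science already offers simple models of how congestion emerges from individual driving decisions. The sketch below implements the well-known Nagel-Schreckenberg cellular automaton; it is a generic illustration, not a model of any specific smart-road system, and all parameters are illustrative.

```python
# Nagel-Schreckenberg traffic cellular automaton on a circular road.
# cells[i] holds a car's speed at cell i, or None if the cell is empty.
import random

def step(cells, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of every car on the ring."""
    L = len(cells)
    new = [None] * L
    for i, v in enumerate(cells):
        if v is None:
            continue
        # 1. Accelerate toward the speed limit.
        v = min(v + 1, v_max)
        # 2. Brake to avoid the car ahead (gap = empty cells in front + 1).
        gap = 1
        while cells[(i + gap) % L] is None:
            gap += 1
        v = min(v, gap - 1)
        # 3. Random slowdown: the "driver imperfection" that seeds jams.
        if v > 0 and rng.random() < p_slow:
            v -= 1
        # 4. Move forward by v cells.
        new[(i + v) % L] = v
    return new

rng = random.Random(1)
# Roughly 20% of a 100-cell ring road starts with a stationary car.
road = [0 if rng.random() < 0.2 else None for _ in range(100)]
n_cars = sum(v is not None for v in road)
for _ in range(50):
    road = step(road, rng=rng)
```

Despite its four simple rules, this model reproduces the spontaneous "phantom" jams seen on real motorways, and it is exactly this sensitivity of global flow to small local rules that makes automated traffic management both promising and hard.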

Swanwick Air Traffic Control Centre

The consolidation of UK air traffic control at the National Air Traffic Services centre at Swanwick draws attention to the UK government's poor track record in engineering large-scale centralised technological infrastructure, in this case a combination of people, buildings, software, networks and hardware. The Swanwick story is not a pretty one: deadlines were missed, costs spiralled, design issues were fudged, and delays piled up. Once the centre was finally operational, a series of system failures brought UK air travel to its knees and propelled Swanwick into the public gaze. A similar catalogue of woes has struck other high-profile government IT projects: immigration, passports, pensions and more have cost the government billions of pounds in failed systems, prompting Paul Nightingale, a specialist in complex engineered systems, to coin the first rule of building large-scale software systems: "Don't!"