
Intentional Electromagnetic Interference (IEMI) IoT Threat


As IoT adoption continues to proliferate, manufacturers and adopters are increasingly aware of the cybersecurity risks to IoT. Yet even among IoT security professionals, one significant potential remote attack vector is often overlooked: intentional electromagnetic interference (IEMI).


Electromagnetic interference (EMI) surrounds us. Natural sources, such as solar flares and lightning, and man-made sources, such as radio and TV broadcasting, radars and microwave ovens, all emit electromagnetic waves that could disrupt the operation of electrical and electronic devices – if those devices did not comply with the numerous electromagnetic compatibility (EMC) standards that ensure correct operation in common electromagnetic environments and resilience to unintentional EMI. Unfortunately, adversaries are increasingly turning to intentional electromagnetic interference (IEMI): electromagnetic (EM) pulses generated at power levels beyond what EMC standards protect against, capable of disrupting or even damaging digital devices.

Recognizing the threat

The decreasing power requirements of IoT devices provide a perfect target for analog interference with the EM fields that surround their circuits and the wiring that connects them. IoT devices operate at ever-lower internal voltages and communicate over low-power wireless networks. Both can easily be disrupted by IEMI attacks using tools obtainable by any garage enthusiast. Short, sharp pulses of high-voltage, low-energy interference capable of disrupting systems can be generated by a device the size of a suitcase.

Many people associate IEMI threats only with the High-Altitude Electromagnetic Pulse (HEMP) produced by nuclear explosions, thanks to frequent depictions in popular media. Although such an attack would be devastating – inflicting permanent damage on all electronics over a large area – the resources needed for such a powerful punch are beyond the capabilities of even rogue nations, limiting the field of potential attackers.

Lesser, but still damaging, attacks can be accomplished by anyone willing to study information readily available on the internet, using off-the-shelf devices like microwave ovens, electromagnetic jammers or the ESD guns used to test electronic devices for electrostatic discharge resistance. Specialty components can be assembled into even more powerful tools with greater range.

IEMI attacks can be accomplished through either a hard-wired attack or a broadcast attack. Hard-wired attacks produce a more powerful jolt, but broadcast attacks let attackers disrupt a facility from outside rather than requiring them to physically breach it. A poorly protected system could be disrupted by a device sitting in a truck parked outside it or in a briefcase in a public part of the facility.

The nature of IEMI attacks

IEMI attacks deliver sharp, high-voltage pulses that temporarily disable the target's digital systems. They are almost undetectable. Unlike a hacker, whose attempts to breach a system can be detected while they are underway, the first sign of an IEMI attack is usually the system failure itself.

IEMI attacks leave no physical trace in the equipment they disrupt. Even error logs offer little evidence of an attack's IEMI nature: they tend to record ordinary operational error codes for the failure, masking its true cause.

The lack of a physical or digital footprint makes it impossible to determine how common IEMI attacks are. Add the fact that many suspected attacks are hushed up to avoid damaging the affected organization's reputation, and assessing the full scope of the threat becomes even more difficult.
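
Because an IEMI pulse tends to upset every susceptible device within its field simultaneously, one possible detection heuristic is to watch for tight time-clustering of otherwise unrelated failures. Below is a minimal sketch of that idea in Python; the log format, window size and device threshold are illustrative assumptions, not an established standard.

    # Hypothetical heuristic: flag failure bursts across unrelated devices.
    # An IEMI pulse tends to upset many devices at once, so a tight cluster
    # of errors from independent systems is worth investigating, even when
    # each log entry carries an ordinary operational error code.
    from datetime import datetime, timedelta

    def iemi_suspect_windows(events, window=timedelta(seconds=5), min_devices=3):
        """events: list of (timestamp, device_id) pairs parsed from logs.
        Returns (start_time, devices) for each window in which at least
        min_devices distinct devices failed together."""
        events = sorted(events)
        suspects = []
        for i, (t0, _) in enumerate(events):
            devices = {dev for t, dev in events[i:] if t - t0 <= window}
            if len(devices) >= min_devices:
                suspects.append((t0, devices))
        return suspects

    # Example: three unrelated devices erroring within two seconds.
    log = [
        (datetime(2018, 4, 30, 10, 0, 0), "plc-7"),
        (datetime(2018, 4, 30, 10, 0, 1), "camera-2"),
        (datetime(2018, 4, 30, 10, 0, 2), "router-1"),
    ]
    print(iemi_suspect_windows(log))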

What IEMI attacks can accomplish

The most widely suspected IEMI attack to date was the May 2012 North Korean jamming that affected two South Korean airports. More than 300 airplanes flying into or out of those airports were affected, along with more than 100 ships and fishing vessels that were in the sea near those airports and an untold number of car navigation systems on nearby roadways.

The disruptions were sporadic and likely were part of a series of disruptions over the previous few years. No direct damage is known to have come from these attacks, but experts believe they were merely tests of the effectiveness of North Korean jamming systems, precursors to future, more damaging attacks.

Researchers irradiating an automobile with a van-mounted IEMI source demonstrated that it was possible to stop the automobile's operation at a distance of 500 meters and to cause permanent damage at 15 meters. A Swedish Defence researcher estimates that a suitcase-based IEMI source could cause upset or damage to cars, PCs and similar devices at distances of up to 50 meters, and even permanent damage in close vicinity.

System disruption is not the only potential problem with IEMI attacks. Researchers have found it possible to use EM fields to intercept and reconstruct sensitive information from systems as well. They can reconstruct information that passes through monitors, keyboards, printers and cryptographic devices. And the methods used to reconstruct such information are now well within the capabilities of any determined hacker.

Other research has shown that attackers could use VHF waves to inject commands into voice-interface-capable devices, a category that includes a growing number of IoT products as voice interfaces become increasingly popular.

Protecting against IEMI

One clear conclusion is that standard EMI testing is not sufficient to guard against this threat. It checks only components' ability to withstand normal interference, and testing components individually cannot ensure the security of the complete system. Test environments for protection against IEMI are being developed, but they still have a long way to go.

In the meantime, there are steps that can be taken to reduce system vulnerabilities to IEMI:

  • Proper grounding procedures are essential. Make sure, however, that the technicians who create your grounding system are well-versed in grounding procedures or you may inadvertently increase your vulnerability.
  • If possible, ensure your facility has a large, open space around it to make it hard for an attacker to get a disruptive device close enough to your system to be effective.
  • Include metal rebar in outside walls, metal mesh in windows and specialized filtering on cables at their entry points to minimize EMI penetration.
  • If possible, replace copper cables with EMI-proof fiber-optic cables.
  • Install EMI warning systems appropriate to your level of risk (a sketch of the basic alerting logic follows this list).
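
As a hedged illustration of that last item, here is a minimal sketch of the alerting loop an EMI warning system might implement. The sensor call (read_field_strength_v_per_m) and the threshold value are assumptions made for the example; real thresholds must be derived from your equipment's EMC ratings.

    # Illustrative alerting loop for an EMI warning sensor.
    # read_field_strength_v_per_m() is a hypothetical driver call, and the
    # 200 V/m threshold is a placeholder, not a standards-derived value.
    import time

    ALERT_THRESHOLD_V_PER_M = 200.0  # placeholder; derive from EMC ratings

    def read_field_strength_v_per_m():
        raise NotImplementedError("replace with your sensor driver")

    def monitor(poll_seconds=0.1):
        while True:
            field = read_field_strength_v_per_m()
            if field > ALERT_THRESHOLD_V_PER_M:
                # In practice: alert on-call staff and preserve the reading,
                # so later "unexplained" failures can be correlated with it.
                print(f"EMI alert: field strength {field:.0f} V/m")
            time.sleep(poll_seconds)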

Takeaways

IEMI threats are often overlooked in security assessments. Attackers require little technical expertise and can use easily obtained EMI-generating devices. Protecting systems against such attacks should always be part of a comprehensive security plan. To find out more about the IEMI threat and what you can do about it, see my more in-depth article, “The growing threat of intentional electromagnetic interference IEMI attacks.”


Originally published on CSOonline on April 30, 2018.

Canadian Critical Infrastructure Cyber Protection


Targeted cyberattacks against critical infrastructure (CI) are increasing on a global scale. Critical systems are rapidly being connected to the internet, affording attackers opportunities to target virtual systems that operate and monitor physical structures and physical processes through various modes of cyberattack.

When people think of cyberattacks, their minds often go first to the financial sector. After all, that’s the type of attack people hear about most frequently; it’s where the money is and it’s what seems most natural for cybercriminals to target. Enterprises frequently focus on such cyber-enabled financial crimes to the point that they give too little thought to attacks that target CI. Among the rapidly growing ranks of cyberattackers, however, motivations are far more varied.

Industrial Control Systems (ICS) and the Internet of Things (IoT) give CI operators the ability to shift CI control increasingly toward remote monitoring and operation of distant systems. This increases efficiency and cuts costs, but leaves those systems more vulnerable to both attack and degradation. That's why I will focus in this article on the threats to those areas of CI that are less talked about – those outside the financial sector and those vulnerable to cyber-kinetic attacks – and examine how Canada is dealing with these real and present threats.

Having worked on critical infrastructure cyber protection in the US, UK, EU, Australia, Singapore, Hong Kong and the rest of Greater China, South Korea and other countries, I can compare the approaches with my new country – Canada.

The Changing Threat Landscape for Critical Infrastructure Cyber Protection

CI is at the core of the cyberwarfare worry. It’s a rapidly growing concern and, these days, the majority of countries are developing at least some defensive cyberwarfare capabilities. Furthermore, more than 30 countries are allegedly building offensive cyberwarfare capabilities. While few public examples exist of overt cyberwarfare, we see lots of cyberwarfare “positioning” – geopolitical adversaries breaking into each other’s CI in order to test defences and create footholds for future potential destructive attacks.

Traditional boundaries between cybersecurity concerns, however, are disappearing. As the US Department of Homeland Security (DHS) noted last month: while "nation-states continue to present a considerable cyber threat," non-state actors, too, "are emerging with capabilities that match those of sophisticated nation-states."

Financially motivated cybercriminals have come up with many innovative ways to monetize their craft and have started, intentionally or inadvertently, impacting CI. Over the last few years, a number of CI operations have been impacted by ransomware or cryptojacking infections, for example.

We increasingly see hacktivists, agents of hostile countries, or potentially terrorists seeking to use cyberattacks to strike a blow for their cause or disrupt the lives of those they see as enemies.

We have observed employees in some critical industries (e.g., critical manufacturing) seeking to harm competitors through cyberattacks that give the attacker's company a competitive advantage.

Disgruntled employees who work within critical systems seek revenge on employers they feel have wronged them, using cyber means to sabotage systems. Even individuals as seemingly benign as bored teens may launch attacks on digitized physical systems essential to our lives, simply to test how much of an impact they can have on them.

Attacks from all these vectors have occurred repeatedly.

Defining Critical Infrastructure (CI)

Every country has its own unique definition of CI, and these definitions don't always include the same industries. The Canadian government's Public Safety Canada (PSC) website defines Canada's CI as follows:

Critical infrastructure refers to processes, systems, facilities, technologies, networks, assets and services essential to the health, safety, security or economic well-being of Canadians and the effective functioning of government.

Its explanation of Canadian CI includes the following ten sectors: Health, Food, Finance, Water, Information and Communication Technology, Safety, Energy and Utilities, Manufacturing, Government, Transportation.

In reality, the divisions are not entirely clear-cut. These sectors rely heavily on one another in order to function smoothly. A report by the Macdonald-Laurier Institute states:

CI is so widely distributed and pervasive that it is impossible to say who is responsible and who is accountable for CI either as a system or a set of sub-systems. In fact, it can be argued that this is a case of multiple interests crossing all the normal divides of public and private to the degree that there is a real danger that even with titular leadership being lodged with Public Safety Canada, in the end no one is responsible and no one is accountable. In reality, the entire policy process is merely a combination of muddling through and ad hoc problem solving. When faced with an assault on key CI or its breakdown in the face of a natural disaster or total failure due to neglect, the responsibility to prevent, respond, and restore must be clear.

Yet breaking CI down into sectors is at least one step toward getting a handle on the different parts of the interrelated whole. These sectors align closely with those identified by the nations with which Canada works most closely – Australia, New Zealand, the UK and the US – in its efforts to develop a joint understanding of, and a cooperative effort toward, protecting those infrastructures.

Canadian and US critical infrastructure cyber protection cooperation

Because of their long shared border and the vast number of interconnections their proximity provides, Canada is especially closely aligned with the US on CI issues. Canada and the US cooperate on CI cybersecurity, and on emergency and disaster management, in numerous ways.

This cooperation is essential, because CI failures in one nation do not necessarily stop at the neighbouring country’s border. This is clearly demonstrated by the cascading power failure that blacked out much of the Northeastern US and Southeastern Canada on August 14, 2003, or the 2006 cascade failure that left 15 million Europeans across several countries without power. These examples show that small disruptions can have enormous effects, not just on the power grids themselves, but on all services – telecommunications, transportation, financial, health and emergency services – that rely on those grids.

The two nations have long cooperated in three key areas: building partnerships, increasing information sharing and managing risk jointly. A joint Emergency Management Consultative Group oversees joint emergency management, promoting dialogue between stakeholders in both countries, including on CI issues. Sector-specific networks in both nations promote cross-border collaboration. A Critical Infrastructure Risk Analysis Cell also enables the nations to jointly enhance information-sharing and develop risk management and analytics tools that are equally effective on both sides of the border.

PSC endorses the NIST Framework, developed by the United States' National Institute of Standards and Technology (NIST), and acknowledges the framework's relevance and applicability in the Canadian context.

In addition, the Canadian Cyber Incident Response Centre (CCIRC) works jointly with the US DHS’ United States Computer Emergency Readiness Team and Industrial Control Systems Cyber Emergency Response Team on several key initiatives. They facilitate real-time collaboration between analysts across both countries and help develop standardized incident management processes and escalation procedures. They also seek enhanced ways to more effectively communicate cybersecurity issues to the private sector and general public.

Canadian government agencies involved in cybersecurity

Canada also deals with CI issues through a number of internal agencies. The CCIRC serves as Canada’s computer security incident response team. It monitors and advises on cyber threats and coordinates the national response to any incidents. Also, the Cyber Security Cooperation Program helps owners and operators of vital cyber systems reduce vulnerabilities in Canada’s cyber systems through grants.

Many other departments and agencies also play roles in protecting CI. Departments like the Canadian Security Intelligence Service, Communications Security Establishment and Defence Research and Development Canada form the front line in assessing and mitigating threats. The Department of Justice and Global Affairs Canada focuses on cyber-related legislation within Canada and helps shape multinational cyber law. And many other departments play specific roles to help provide a secure online environment and resilience.

Plans and successes

Although the National Strategy for Critical Infrastructure (published in 2009) didn’t mention cybersecurity specifically, it set high-level guidelines that formed the foundation for the strategies that developed from it. Most notably, it established the National Cross-Sector Forum (NCSF), which publishes regular action plans for CI. These plans have fostered collaboration between the federal government, provincial/territorial governments and CI owners and operators in sharing information, developing policies to protect CI and enhancing its resilience.

The Regional Resilience Assessment Program (RRAP) enables government officials to work directly with CI owners and operators to identify vulnerabilities, assess threats and develop strategies to reduce exposures. The Virtual Risk Assessment Cell (VRAC) provides experts to help the CI community identify impacts of potential threats and improve response planning for disruptive events.

The NCSF’s Fundamentals of Cyber Security for Canada’s CI Community offers industries specific steps they can take to achieve baseline levels of cybersecurity recommended in the NIST Cybersecurity Framework. It also identifies measurement indicators and further resources for building greater resilience.

The government is currently updating its Cybersecurity Strategy. In 2016 it held a public consultation on cybersecurity, and the consultation report was issued in January 2017. In addition, PSC offers a free, one-day cybersecurity assessment to organizations and has a certification program in the works for CI operators.

Many other successes have occurred over the years. PSC has issued guidance and mandates for organizations of all sizes and across all industries; helped strengthen the reporting of cybercrime and developed the Cyber Incident Management Framework for Canada; forged partnerships with cyber professional associations; and much more.

As the threat environment continues to evolve, so do these groups and programs. They continue to meet with and deepen relationships between international partners, different levels of government and the private sector, constantly reassessing vulnerabilities and seeking to strengthen resilience. They continue to build on past successes. Clearly those involved in protecting Canadian CI take cybersecurity seriously.

Room for improvement

That is not to say that Canada cannot do even better in mitigating the risks to which its CI is vulnerable.

Despite all the achievements, I believe Canada still lags behind many of its peer countries and could do better in CI protection. Bear in mind that my opinions are based only on public sources and my discussions with Canadian CI operators. I am not speaking as someone with insider info not known to the public.

In fairness, I must state that the Canadian government is confident in its present preparedness efforts, based on the remarks of Public Safety Minister Ralph Goodale at a March 1, 2018, CyberCanada Conference. Based on what I see from my perspective, however, I feel some areas could be further improved.

Cyber-kinetic attacks, Internet of Things (IoT) and Intentional Electromagnetic Interference (IEMI)

One area that concerns me is the lack of awareness of cyber-kinetic attacks. I believe this area is becoming the greatest cyber threat but is often overlooked outside of the small, specialized subset of national cyber defence organizations.

Industries are rapidly connecting Industrial Control Systems (ICSes) to the internet. In the past, such systems' principal security rested on the fact that they were physical features, attached to physical systems, inside secured buildings. Increasingly, though, they have been connected to the internet over recent years. Doing so multiplies efficiencies, but it multiplies vulnerabilities too. Failing to account for the security vulnerabilities, and all the potential security and safety impacts such access creates, is risky at best and could prove disastrous.

Similarly, Internet of Things (IoT) solutions are rapidly proliferating across all of society – and especially in CI, where they offer the opportunity to gather and analyse data at a level of detail never before possible. In most cases, these solutions come to market with far too little thought given to security.

Those demanding IoT solutions focus more on the efficiencies the solutions bring than on the risks they create. Meanwhile, those supplying the demand often give little thought to the question that prospective customers are not asking – namely, how these devices are secured. With the IoT, the attack surface is expanding greatly, far beyond that of legacy ICS technology. This must be explicitly recognized and addressed in CI cyber protection approaches.

To complicate things further, IoT devices are even more geographically distributed than legacy ICS, and physical access to them cannot easily be controlled. With such geographic distribution, increasing interdependencies, constantly decreasing power requirements and growing communication over low-power networks, the threat of Intentional Electromagnetic Interference (IEMI) to IoT used in CI is rapidly growing and should be addressed alongside cybersecurity approaches to CI protection.

Cybersecurity in Canadian industries

According to data from IDC Canada's IT Advisory Panel (June 2017), infrastructure services was the largest adopter of IoT solutions: as of mid-2017, 77% were using IoT in some form in production, another 2% had IoT in pilot programs or proofs of concept, and 19% were considering adoption. That places 98% of the sector either actively using IoT or likely to adopt it soon, with other industries not far behind.

What makes this concerning are the results of interviews with 296 Canadian executives on the matter of cybersecurity. Only 52% stated that security strategies were addressed at the highest level of their organizations. The majority were most concerned about traditional cyberattack targets: proprietary information and customer data. Only 26% saw cyberattacks as a threat to physical property and 22% as a threat to human life, although nearly half (49%) acknowledged the threat of cyberattacks disrupting operations.

Even more concerning is the low level of security testing described by respondents. Only 40% run penetration testing; 37% conduct threat assessments; 41% conduct vulnerability assessments; and 38% actively monitor information security intelligence. Even assuming those numbers are higher in the CI community, they are still disturbing, considering the domino effect possible in a highly interconnected digital world.

Gaps in governmental statements

Although the Canadian government has clearly taken cybersecurity seriously, its approach to the concept of cyber-kinetic attacks is surprisingly vague. The bedrock document on the need to secure CI, its National Strategy for Critical Infrastructure, doesn’t mention cyber threats at all. Granted, it may not be fair to point to a document drafted in 2009 for not focusing more heavily on the threats that have become more prominent in the past decade, but more recent documents also seem to touch on the issue only lightly, if at all.

Fundamentals of Cyber Security for Canada’s CI Community, published in 2016, describes the risk to CI by saying, “Critical infrastructure sectors are interconnected and dependent upon secure cyber systems, and cyber disruptions to critical infrastructure can have significant economic implications – creating the potential of extensive losses for businesses and negatively affecting local, national and global economies.” This assessment is good but doesn’t go far enough. It doesn’t explicitly mention risks to the physical world and safety risks to people. It talks only about economic implications.

Or consider Canada’s official definition of a cyberattack: “[T]he unintentional or unauthorized access, use, manipulation, interruption or destruction (via electronic means) of electronic information and/or the electronic and physical infrastructure used to process, communicate and/or store that information.” While being good as far as it goes, it, too, appears to overlook the threat of cyber-kinetic attacks.

It appears to focus merely on information. Certainly, one could extrapolate this to include cyber-kinetic attacks by saying that every Cyber-Physical System (CPS) – whether ICS or IoT – is, at its root, merely processing information. It would seem wise, though, to recognize the very real effect that digitally connected systems have on our physical world, delivering critical services such as water, electricity, transportation and even – to a growing degree – our food and our health care. We're not talking just about disrupted data here; we're talking about lives, well-being and the environment.

The much older (2001) Canadian Anti-Terrorism Act comes closer on this point, defining cyber terrorism as the use of internet facilities or a computer system to interfere with or disrupt an essential service in a way that harms the federal government of Canada and its citizens. Yet even this cyber-terrorism definition focuses only on the disruption of services.

The one description addressing the issue of cyber-kinetic attacks that I found in the public documents I reviewed was in the recent National Cross Sector Forum 2018-2020 Action Plan for Critical Infrastructure. It stated:

[T]he increased reliance of organizations on cyber systems and technologies creates exposure to new risks that could produce significant physical consequences. ICS are at the intersection of the cyber and physical security domains. These systems, many of which were developed prior to the internet era, are used in a variety of critical applications, including within the energy and utilities, transportation, health, manufacturing, food and water sectors. For a variety of reasons, including efforts to minimize costs and increase efficiencies, these systems are increasingly connected to the internet, which can result in exposure to more advanced threats than those considered at the time of their design.

It's good to see that the physical impacts of cyberattacks are slowly being recognized. Yet even the rest of this document mentions only ICS, which is unnecessarily specific. It makes no mention of IoT, robotics, autonomous transportation, medical devices, etc. The authors might want to consider using the term "cyber-physical systems" as the broadest term encompassing all the others.

In my opinion, recognition of cyber-kinetic risks – or, physical impact risks of cyberattacks – in Canada lags behind many other countries with which I have worked. Even though the most recent documents are beginning to mention cyber-kinetic risks, those documents still don’t mention IoT.

CI providers are rapidly digitizing and adopting new technologies such as IoT, robotics, autonomous transportation, drones and many others. As they do, the physical impacts of cyberattacks are becoming increasingly important. We need to recognize this and prepare accordingly. Not only that, but the government should take a stronger leading role in protecting its people from this growing threat.

Problems with funding

Another issue is the matter of underfunding cybersecurity efforts. British Columbia Senator Mobina Jaffer pointed out that while countries like the US, UK and Australia are budgeting billions for cybersecurity, Canada’s budget for 2018 was only CA$77.4 million.

Canadian Finance Minister Bill Morneau outlined plans to spend CA$500 million on cybersecurity initiatives over the next five years in his 2018 federal budget. After that initial period, the government will set aside CA$108 million a year for continuing cybersecurity initiatives. This is a significant improvement and movement in the right direction. In my mind, however, it is still not commensurate with the risks. For comparison, the United States spends US$15 billion a year on cybersecurity.

A speaker at the CyberCanada Conference was not as optimistic about Canada’s cybersecurity readiness as Public Safety Minister Goodale had been. Melissa Hathaway, former cybersecurity advisor to US presidents George W. Bush and Barack Obama, fellow at the Canadian Centre for International Governance Innovation and president of Hathaway Global Strategies, pointed out a discrepancy between what government officials say and the level of funding designated to achieve those goals.

She said of Goodale’s aspiration to see Canada as a leading digital economy:

…there’s not an underlying ‘How do I make it resilient, secure and safe?’ You can’t have an economic vision taking the country in one direction and then security and resilience on the back burner. They have to be of equal importance.

The PSC free cybersecurity assessment program and certification programs are a good example of this. While they are great initiatives by PSC and valuable steps toward mitigating vulnerabilities, their contribution to the overall cybersecurity posture of Canadian CI could be greater. The assessment program and the upcoming certification program are voluntary, and so far only a small percentage of Canadian CI organizations have taken the free assessment. The assessments, being quite short (2-3 days), can only scratch the surface. It apparently takes PSC a long time to issue the report to the CI operator. And based on my discussions with CI operators, they rarely take advantage of further follow-ups with PSC.

PSC efforts at forming partnerships are worthwhile. It must be acknowledged, though, that getting people talking is only a starting point, not a final destination.

Even there, weaknesses exist. Organizations that have been breached are often reluctant to disclose it, fearful of losing the public's confidence and damaging their brand. Such fears are being mitigated somewhat by the new mandatory breach notification – although it covers only breaches of personal data – and by the existence of the Canadian Cyber Threat Exchange, a not-for-profit, independent organization to which organizations can disclose attacks with confidence that their data will be anonymized before it is analysed or shared. This is a step in the right direction, but reluctance still exists, impairing solid analysis.

What others are doing

Is this just a matter of semantics, though? Is the terminology that Canadian policy documents use insignificant, or does it reflect a position that is leaving Canada behind other countries in the world community?

Having worked on these issues in other countries puts me in a position to share what I have seen there. There is broad-scale recognition among policy makers in all the countries I mentioned that collaborative efforts between the public and private sectors are required to develop and enforce effective CI protection. Different jurisdictions, however, approach it differently.

UK

In 2010, the British government recognized cybersecurity as a "Tier One risk" to its economic and national security. UK cybersecurity strategy also recognizes that "The 'Internet of Things' creates new opportunities for exploitation and increases the potential impact of attacks which have the potential to cause physical damage, injury to persons and, in a worst case scenario, death."

In an effort to comply with the EU Network and Information Systems (NIS) Directive, the UK government announced in January that UK CI operators could face fines of up to £17 million (~CA$30 million) if they do not have adequate cybersecurity measures in place.

In 2016, the British government announced plans to almost double its investment in cybersecurity – up to £1.9 billion (~CA$3.6 billion) in new funding over the following five years.

The UK has conducted several national cybersecurity exercises in collaboration with industry to practice crisis response plans for government agencies and specific operators of CI. For instance, in November 2013, the Bank of England conducted the “Waking Shark II Cyber Security Exercise” – a stress-test exercise developed to evaluate the response of the wholesale banking sector to a simulated cyberattack.

China

Over the last few years, China has included CI protection in numerous government strategy documents, laws and regulations. This includes their controversial Cybersecurity Law that gives the government broad powers to spot-check networks within Chinese borders and requires that data generated in China also be stored in China.

Interestingly, in addition to traditional CI sectors, China also includes large-scale commercial internet services, including search and social media, in their CI definition.

China’s Ministry of Public Security has also been managing its risk-based security certification – Multi-Level Protection Scheme (MLPS) – compliance for years.

In contrast to Canada, China, in many ways, takes a very hands-on approach to CI protection, from enforcing compliance with regulations to stepping in when CI incidents occur. This is easier to execute there, of course, because most CI operators in China are state-owned – apart from the internet web service sector – whereas in Canada, Europe and the US, CI operators are mostly privately owned.

EU

Similarly, the EU has moved steadily toward greater standardization of cybersecurity and notification requirements across member states. Back in 2014, France passed the first mandatory CI cybersecurity installation and maintenance requirements. Germany followed in 2015 with legislation that imposed stiff monetary penalties for noncompliance with mandatory standards.

The EU has since followed with its Network and Information Security Directive (NIS Directive), which took effect in 2016 and required that all member nations transpose the directive’s mandatory standards into their own national law by mid-2018. This directive takes cybersecurity standards EU-wide, targeting “operators of essential services” (OESes) – which is the EU term for CI providers. The directive will carry significant fines.

There, as with China, the nature of the governmental body made it easier to develop and enforce stricter standards. In China’s case, it was their control of state-run CI providers. In the EU, it was the extreme interconnectedness of CI between the nations that motivated standardization across the continent.

In addition to the country-level cyber exercises that the likes of the UK and Germany perform, ENISA (the European Union Agency for Network and Information Security) manages a programme of pan-European exercises named Cyber Europe. This is a series of biennial EU-level cyber incident and crisis management exercises involving both the public and private sectors. The Cyber Europe exercises simulate large-scale cybersecurity incidents that escalate into cyber crises.

US

Canada perhaps comes closest in its approach to the US because of its shared geographic space and interconnected CI. The nature of their geographic proximity and interconnected CI is markedly different from the partnership between EU nations, though, and that difference is what is likely responsible for Canada and the US lagging behind.

Sharing of resources and CI involves far more partners in the EU, motivating nations to a higher degree of standardization. Sharing of resources and CI between Canada and the US, on the other hand, is far more limited, with both countries being vast enough in size that even their long, shared border represents only a small volume of interaction compared to what EU nations experience.

Yet even though Canada and the US have much in common in their approaches to CI vulnerabilities, the US still appears to stand significantly ahead of Canada in addressing them. For example, the US DHS uses a more comprehensive CI definition than those adopted in Canada or in the UK/EU under the NIS Directive. The US recognizes 16 CI sectors compared to Canada's 10, adding the chemical industry, dams, the defence industry and emergency services, and breaking other Canadian sectors into separate entities for more comprehensive attention.

US President Barack Obama's 2013 Executive Order – Improving Critical Infrastructure Cybersecurity – tasked the US government with leading the development of a framework to minimize cybersecurity risks to CI. It sought feedback from public and private sector stakeholders and incorporated industry best practices to the fullest extent possible.

The framework developed by the US NIST in 2014 has had much broader success in achieving adoption by US CI providers than Canadian efforts have had. Many key US stakeholders, including the private sector, actively participated in its development and made adaptations in its implementation to meet their respective security needs.

PSC has endorsed the NIST Framework but doesn't enforce it in the same way. And private sector organizations, unfortunately, have not bought into it the way US organizations have. Even though approximately 85% of US CI is privately owned and free-market principles typically steer US industry, the US government is more hands-on in CI protection than Canada has been.

Not only has the US’ strong Department of Homeland Security been effective in encouraging compliance, but also the US military has been deeply involved in domestic affairs regarding cybersecurity. This is surprising, because US constitutional power-sharing principles prohibit the military from policing inside the US. But this only goes to show how seriously the US approaches the cyber protection of its CI.

Singapore

In 2015, Singapore established the overarching cybersecurity agency – The Cyber Security Agency of Singapore (CSA). The CSA is a Singaporean government agency under the Prime Minister’s Office that provides centralised oversight of national cybersecurity functions and works with public and private sector leads to protect Singapore’s critical services.

Singapore’s Cybersecurity Bill was officially passed into law in February 2018. The new Act applies to “any critical information infrastructure located wholly or partly in Singapore.” A critical information infrastructure (CII) is any “computer or computer system” deemed necessary for the continuous delivery of Singapore’s 11 primary essential services. Essential services are considered any service that, if compromised, would have a “debilitating impact on the national security, defense, foreign relations, economy, public health, public safety or order of Singapore.”

Under the new law, entities considered CII providers will be required to report cybersecurity events to the CSA. Additionally, CIIs will be required to report on technical architecture related to interconnected infrastructure, conduct regular compliance audits and ongoing risk assessments, and participate in required cybersecurity exercises put in place by the Commissioner. Failure to comply will result in financial penalties of up to US$100,000 and/or two years in jail.

Recommendations

In Canada, the threat of cyber-kinetic attacks does not appear to be as recognized as it is elsewhere. Neither are the growing risks to CI that come from IoT adoption. Canada, more than any other country I know, takes a hands-off approach. While it tries to encourage information sharing and collaboration – and has had some successes – it doesn't get involved directly in CI protection as much as other nations do.

  • In my view, CI is only as strong as its weakest link. I believe Canada should develop mandatory standards across CI and enforce them more strictly. If mandatory standards are not implemented, voluntary programs and activities will leave too many gaps.
  • This implies that government agencies should receive more authority to establish, to direct and to enforce compliance with cybersecurity standards.
  • Cyber-kinetic threats – i.e., cyber threats to well-being, lives or the environment – should be given more importance in government initiatives – from awareness to policy formulation, controls definition and crisis and emergency exercising.
  • Related to the above, it only makes sense that cybersecurity emerge from its silo and that closer cooperation form between cybersecurity, engineering, safety and emergency management teams at and across CI operators. This is another area in which the government could play a role – by raising awareness, developing frameworks that cut across traditional silos and facilitating or demanding cross-silo crisis exercises.
  • Canada's NCSF and other relevant organizations need to continue updating and aligning national cybersecurity strategies with the ever-evolving world of cyberthreats. IoT is a current threat that should be addressed now. Even though both ICS and IoT can be defined as cyber-physical systems, there are IoT-specific threats and approaches. Intentional Electromagnetic Interference (IEMI) is another type of IoT vulnerability that should be considered. Artificial Intelligence (AI) introduces a whole new set of threats and vulnerabilities; while AI adoption among CI operators is still slow, we should start thinking about it now, as we should about quantum computing.
  • Canada should also make greater use of sector-wide crisis exercises, similar to the biennial exercises conducted by the European Union Agency for Network and Information Security (ENISA). Such exercises are regularly performed not only by the EU, but by almost all other G7 countries and many others, such as Singapore. The midst of an all-but-inevitable cybersecurity crisis is not a good time to test a CI cyber response plan.
  • The government could also provide stronger liability protection for organizations that voluntarily share critical information, so as to better benefit from the successes so far. This is a common problem everywhere – private sector organizations fear that sharing sensitive information would open them up to penalties or civil suits affecting reputations and profit. If Canada wants to be hands-off and rely on voluntary collaboration and on private companies doing the right thing, it should consider innovative approaches to make that collaboration effective.
  • A 2017 study by PSC shows some concerning failures in carrying out Canada’s cybersecurity strategies, such as inadequate documentation by oversight bodies that make it impossible to assess those bodies’ effectiveness in carrying out their responsibilities, or confusion between oversight bodies regarding what responsibilities fall within their domain. To PSC’s credit, these gaps are being identified and addressed. Correcting these and other flaws in governance must continue.
  • Yet even with strong government leadership, securing Canada’s CI is not a simple task. Much of Canada’s CI lies in the hands of a wide variety of private organizations. Without their buy-in, there will be only slow progress. Thus, there should also be more awareness activities across private CI operators and the general public.

Canada has made a good start on this but needs to keep pressing the point. Private owners will address risks to the degree that they are aware of them. They will balk at investing beyond what they feel meets those risks and often will underestimate risks until they are convinced that those risks pose a clear and present threat. Whether through stronger mandatory standards or increased awareness of the scope of threats, government can help motivate private owners to make the decisions necessary for increased security.

Does it really matter?

General opinion outside of the Canadian government sometimes appears unconcerned about the possibility of attacks on Canadian CI. The common thought is that cyberattackers are so focused on the US that they have no interest in attacking Canadian interests. Or, as quite a few of my interlocutors put it: "We are Canadians. We are friends with everybody. Why would anybody attack us?"

I don't agree, though. For one, Canada and the US have significant interdependencies; impacts on the US could cascade into Canada. Also, if attackers seeking to disrupt US interests find greater vigilance in protecting CI on the US side of the border, they are likely to turn to Canadian CI, hoping their efforts can start a cascade that causes disruption in the US.

Such is the case with the recent alert issued by the US Computer Emergency Readiness Team. This alert warned of efforts by Russian state actors to compromise both US and Canadian networks in energy, nuclear, aviation, critical manufacturing facilities and others to gather intelligence on the ICSes that control processes there.

The CCIRC’s 2018 report for the year 2017 also described an ongoing campaign of malicious cyber incidents that reveal a high degree of effort to compromise Canada’s electrical grids and nuclear facilities, as well as lesser efforts directed against financial, academic and critical manufacturing sectors. The report also showed a growing trend toward targeting hospital systems and network-connected medical devices.

It is nothing but wishful thinking to believe that no one is targeting Canada. Canada is indeed being heavily targeted. A 2013 study of hacker forums came to some disturbing conclusions:

Through data mining of data and manual revision, this study sought to identify information pertinent to critical infrastructures, with specific emphasis on Canadian systems, posted to publicly accessible hacker forums. Results indicate that Canadian IP addresses were overrepresented in the data, meaning that Canadian critical systems are of particular interest to hacker forum users…

This prevalence of Canadian IP addresses across hacker forums highlights the potential threats to Canadian systems. In addition, a 2017 study showed that 38% of Canadians had been victimized by cybercrime in the previous year. A 2018 study reported that Canada is the No. 1 target for phishing scams. Another 2017 study of cybersecurity readiness showed that the number of organizations reporting a breach that exposed sensitive data had increased by 46% since 2014, and that the share of organizations that believe they are winning the war against hackers dropped from 41% to 34% over the same period.

A 2013 study showed that 69% of Canadian businesses had suffered some kind of attack in the previous 12-month period. The idea that nobody’s interested in targeting Canada is blatantly false.

Even in 2012, security experts were recognizing the growing threat: "Given the rapid changes in information and communication technologies, the Canadian government is not alone in concluding that existing defences will not be enough to ensure the integrity and availability of its information systems nor prevent critical infrastructure from being destroyed or shut down."

Takeaways

With an estimated 21 billion devices and 212 billion sensor-enabled objects predicted to be part of the IoT by 2020, it is essential to ensure that cyberspace is secure, particularly where it applies to CI. Realistically, making CI absolutely secure is unrealistic. Situations change rapidly, new technologies are constantly being incorporated, and vulnerabilities and risks continually evolve. You can't confine such a moving target – especially one whose purpose is to be open enough to deliver services to the public.

Continually building more robust CI, however, is essential for the safety and security of Canada and its citizens. We cannot eliminate all possibility of disruptions, but we can work to ensure that they are as few and as limited in scope as possible.

What we can come closest to calling a solution, I believe, involves three elements:

  • Continue to work toward bringing all stakeholders into a greater awareness of vulnerabilities and threats.
  • Achieve a better understanding of the different ways that public and private stakeholders perceive risks.
  • Facilitate the flow of information so all parties can work together to form strategies that provide the best possible protections against the wide variety of threats.

No one strategy and no one tool will suffice. Complex and interconnected issues like this require complex and interconnected efforts that involve all stakeholders. But with continued effort, Canadian stakeholders can better protect the country's CI – and its citizens.

Stuxnet: The Father of Cyber-Kinetic Weapons


While Stuxnet is gone, the world now knows what can be accomplished through cyber-kinetic attacks.


As we approach the 10th anniversary of when Stuxnet was (likely) deployed, it is worthwhile to examine the effect it still has on our world. As the world's first-ever cyberweapon, it opened Pandora's box. It was the first true cyber-kinetic weapon, and it changed military history and is changing world history as well. Its impact on the future cannot be overstated.

Stuxnet’s beginnings

Stuxnet is believed to have been conceived jointly by the U.S. and Israel in 2005 or 2006 to cripple Iran’s nuclear weapon development without Iran even realizing that it had been attacked. An early version appears to have been deployed in 2007, but it didn’t reach its target. Perhaps that version’s goal was merely to gather intelligence. Its sophisticated platform was readily adaptable to espionage purposes and several related pieces of malware were primarily designed for that purpose.

The intelligence that its developers eventually obtained about Iranian operations enabled them to get Stuxnet inside Iran’s air-gapped (not connected to the internet) Natanz facility in 2009. They did this by infecting five Iranian companies that installed equipment in Natanz. When technicians at these companies connected their laptops to Natanz equipment, they unwittingly caused Stuxnet to download and spread throughout the facility. Through this indirect connection, Stuxnet’s developers were able to upload and command the malware through 2010, even though they did not have a direct connection with it.

How it worked

Stuxnet is considered the largest and most expensive malware development effort in history, a project too big for anyone but a nation-state to produce. It was also far too precisely targeted to damage anything other than equipment used only in Iranian uranium enrichment facilities.

Stuxnet contained valid security certificates, stolen from legitimate software companies, and multiple zero-day exploits to infect the technicians’ PCs. This combination enabled Stuxnet to easily compromise the PCs once the infected thumb drives were plugged into USB ports.

These three approaches underscore the extraordinary resources Stuxnet's developers had. Valid security certificates are well protected. Zero-day exploits (vulnerabilities unknown to the manufacturer of the exploited software) are very difficult to find; even a single zero-day is rare in malware, and dedicating multiple ones to a single piece of malware was unheard of at the time. Finally, making the attack depend on getting a physical thumb drive into the hands of technicians protected by tight security required extraordinary skill.

Once on the Natanz network, Stuxnet looked for Siemens PLCs that possessed two specific blocks of code used to control Iranian uranium enrichment centrifuges. Stuxnet also used rootkit functions that made it hard to discover or remove.
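
A common defensive counterpart to this kind of controller tampering is integrity monitoring of PLC logic. The sketch below assumes you can export each controller's program blocks as bytes; the export mechanism is hypothetical and vendor-specific. Note that Stuxnet's rootkit could defeat checks run from a compromised engineering workstation, so baselining and verification should run from independent, trusted media.

    # Sketch: detect unauthorized changes to PLC program blocks by comparing
    # hashes against a known-good baseline captured at commissioning time.
    # How the blocks are exported is vendor-specific and assumed here.
    import hashlib

    def fingerprint(blocks):
        """blocks: mapping of block name -> program bytes."""
        return {name: hashlib.sha256(code).hexdigest()
                for name, code in blocks.items()}

    def verify(baseline, current_blocks):
        """Return the names of blocks that are missing or modified."""
        now = fingerprint(current_blocks)
        return [name for name, digest in baseline.items()
                if now.get(name) != digest]

    # Usage: baseline = fingerprint(export_program_blocks())  # trusted state
    #        changed  = verify(baseline, export_program_blocks())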

The attack damaged centrifuge rotors through two different routines. The first involved dramatically, but briefly, speeding centrifuges above their maximum safe speed, then briefly slowing them dramatically below their minimum safe speed. The malware would then wait weeks before repeating the cycle, to reduce the chances of detection. The second, more complex routine involved over-pressurizing centrifuges to increase rotor stress over time.

Thus, Stuxnet exerted years of wear on the centrifuges in mere months, causing them to fail faster than the Iranians could replace them. Experts believe that Stuxnet disabled one-fifth of Natanz centrifuges in a year.
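
A toy fatigue model shows why brief excursions are so destructive: stress outside the safe speed band can consume rotor life far faster than steady operation. All numbers below are invented for illustration; they are not actual centrifuge parameters.

    # Toy illustration (invented numbers): brief over/under-speed excursions
    # consume rotor fatigue life far faster than steady in-band operation.
    SAFE_BAND_WEAR_PER_HOUR = 1.0      # arbitrary wear units
    EXCURSION_WEAR_PER_MINUTE = 500.0  # out-of-band stress is punishing

    def wear_after(months, excursions_per_month, excursion_minutes=15):
        steady = months * 30 * 24 * SAFE_BAND_WEAR_PER_HOUR
        excursions = (months * excursions_per_month
                      * excursion_minutes * EXCURSION_WEAR_PER_MINUTE)
        return steady + excursions

    # Three attacked months exceed years of normal wear in this toy model:
    print(wear_after(12, excursions_per_month=0))  # 8640: one normal year
    print(wear_after(3, excursions_per_month=1))   # 24660: ~3 years' wear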

A chilling discovery

When Stuxnet was discovered in the wild, security experts were baffled by this complex malware that contained both IT and Industrial Control System (ICS) components. Experts in each discipline had little experience with the other. Working together, they unravelled Stuxnet's purpose: it was the world's first true cyberweapon, designed to cause physical damage through infected computer systems.

Natanz was ultimately identified as the target because of the unexpectedly high replacement rate of centrifuges that international inspectors had noticed there. Stuxnet fulfilled cybersecurity experts' warnings about the threat of such attacks as IT and industrial control systems converged.

Stuxnet successfully targeted each of the three layers of a cyber-physical system. 1) It used the cyber layer to distribute the malware and identify its targets. 2) It used the control system layer (in this case, PLCs) to control physical processes. 3) Finally, it affected the physical layer, causing physical damage. Stuxnet thus was 1) a cyberattack 2) that created kinetic impacts 3) that resulted in physical destruction.

Moreover, it demonstrated how it is possible to:

  • Infect an air-gapped system
  • Target precise cyber-physical systems for infection
  • Introduce subtle flaws into physical processes that can be just as damaging as crashing a system, if not more so, while being much harder to detect.

Consider the implications of such a subtle attack. Defects built into cars or airplanes could cause the finished products to malfunction only once they are in use. In the same way, weaknesses could be built into power grids, making them prone to failure when the original attacker triggers the condition under which the grid was designed to fail. Or what about food or water processing? What if toxic additives were introduced whose danger lies not in a single dose, but in accumulation over time? All such scenarios could inflict devastating damage without the target even realizing it was under attack.

A continuing threat

Stuxnet itself is gone. Experts believe it stopped functioning in 2012. But what it did continues to affect us.

Despite developer efforts to keep Stuxnet confined to the Natanz facility, it reached the wild and was discovered there. The innovative techniques that cost millions of dollars and thousands of hours of time to create now are available to other malware developers to adapt to new cyberweapons.

By revealing the vulnerability of cyber-physical systems, Stuxnet made them an inviting target. And although the gap between antivirus experts and CPS security experts, which kept either group from unravelling Stuxnet on its own, is not as extreme as it was then, defending against cyber-kinetic attacks still requires skill in both fields.

When Stuxnet’s developers launched this cyber-kinetic attack on their enemies, it legitimized cyberweapons. Just as use of nuclear weapons on Japan in World War II spurred a nuclear arms race, today’s nations are believed to be pursuing a similar cyberweapons race.

North Korea’s alleged connection to the 2017 WannaCry ransomware attacks is a prime example. While those attacks affected information systems more than cyber-physical ones, physical hospital equipment was compromised in some locations, forcing delays or cancellations of medical procedures. In addition, five Iranian nationals have been charged with cyberattacks against U.S. targets, including failed cyber-kinetic attacks against the Bowman Avenue Dam in Rye Brook, New York.

Takeaways

While Stuxnet is gone, it forever changed our world. It showed how to inflict damage by targeting cyber-physical systems. It made advanced techniques for breaching secure systems available to cybercriminals and terrorists, and opened the doors to the threat of cyberwarfare.

The world now knows what can be accomplished through cyber-kinetic attacks. Developing Stuxnet required deep pockets and the talents of some of the world’s best minds. Dare we put anything less toward securing the cyber-physical systems that Stuxnet exposed?


For my more detailed write-up on Stuxnet, see Stuxnet and the Birth of Cyber-Kinetic Weapons.

Growing Cyber-Kinetic Vulnerabilities of Railway Systems


In their growing efforts to increase efficiencies through digitization and automation, railways are becoming increasingly vulnerable to cyber-kinetic attacks as they move away from strictly mechanical systems and bespoke standalone systems to digital, open-platform, standardized equipment built using Commercial Off the Shelf (COTS) components.

In addition, the increasing use of networked control and automation systems enables remote access over public and private networks. Finally, the large geographical spread of railway systems, involving multiple providers and even multiple countries, and the vast number of people involved in operating and maintaining those widespread systems offer attackers an almost unlimited number of attack vectors. Railway systems are rampant with cyber-kinetic vulnerabilities.

Railways at cyber-kinetic risk

My team performed a comprehensive assessment for a rail provider in the middle of a large digitization project. The bottom line was that we found more than 20 cyber and intentional electromagnetic interference (IEMI) ways to cause kinetic impacts that would lead to the application of emergency brakes, derailment or a crash.

Other researchers’ three-year security assessment of a wider range of railway systems found glaringly poor practices in some systems’ security. These failures include:

  • Continued use of obsolete software whose manufacturers no longer provide patches for discovered security vulnerabilities
  • Hard-coded passwords for remote access
  • Networks that housed both engineering systems and easy-to-access passenger entertainment systems, making it possible to hack into operations equipment through the passenger systems

Another study revealed other common vulnerabilities. It showed that one type of programmable logic controller (PLC) that is known to be vulnerable to cyberattacks is commonly used by rail systems across Europe. Furthermore, GSM-R SIM cards commonly used for communication with trains in some countries could be compromised by GSM jamming or hijacked by a skilled attacker through their OTA (over-the-air) firmware update function.

Clearly, vulnerabilities abound. But what is even more shocking is to find how aware potential attackers are of those vulnerabilities.

“Project HoneyTrain”

In 2015, automation experts Koramis GmbH and security experts Sophos joined forces to learn what railway systems are up against in terms of the skill level, methods and apparent motives of potential cyberattackers of mass transit and rail systems. They created a simulated rail infrastructure, dubbed “Project HoneyTrain,” designed to look like a real railway to online attackers. The project ran online for six weeks so that researchers could record and analyze attacks.

This system simulated much more than just the digital systems of a railway. To someone hacking it, it was indistinguishable from a real rail system, all the way down to closed-circuit TV feeds from the supposed train operator cabins, working control interfaces that attackers could access, and websites that offered real-time train schedules and statuses. The system was intentionally configured with some of the poor security practices found in real rail systems, to see how attackers would respond.

Results were jaw-dropping. In those mere six weeks, 2,745,267 attacks were recorded. Approximately 10% of them reached industrial components, and four valid login attempts gained access to the human machine interface (HMI), through which the attackers took limited control of the virtual trains, though not enough control to cause serious damage had the trains been real. There was also evidence that once the HMI was breached, attackers later returned to try to penetrate deeper into the systems.

Overall, the sequence of attacks showed the researchers “… that the attacker has a deep knowledge of the industrial control systems used for our HoneyTrain project. These actions were not performed randomly, but deliberately.” The project shows that malicious actors with knowledge of ICSes actively scan for and attempt to exploit railway infrastructure online.

The attractiveness of targeting mass transit and railway systems with cyber-kinetic attacks

Railways and metropolitan mass transit are prime targets for terrorists. Causing death or injury to a large number of people traveling by rail would make just the kind of impact terrorists seek.

Freight train transit is also an attractive target for terrorists. A large train derailment can take days to clear. It blocks not only the rail line, but also intersecting roadways.

Railways are also the preferred transportation mode for hazardous materials. So, from a terrorist’s point of view, derailing such materials can potentially multiply the impact of an attack. Release of hazardous materials can force nearby residents to evacuate their homes and cause long-term damage to the environment.

With both people and goods being highly dependent on rail transport, disruption to it would have huge economic impact. The U.S. Department of Transportation Federal Railroad Administration quotes a 2012 report that states:

“Since each person in the U.S. requires the movement of approximately 40 tons of freight every year (emphasis added), many of the goods people use daily are either wholly shipped or contain components shipped by rail. Of rail freight, 91 percent are bulk commodities, such as agriculture and energy products, automobiles and components, construction materials, chemicals, coal, equipment, food, metals, minerals, and paper and pulp. The remaining 9 percent is intermodal traffic which generally consists of consumer goods and other miscellaneous products.”

How attractive are rail systems to terrorists? Attractive enough that Al Qaeda published in its online magazine for sympathizers detailed instructions for constructing an untraceable tool to derail trains, along with the best rail lines to target in the U.S. and Europe. And the group has proven its willingness to target rail systems: it was responsible for the 2004 Madrid railway bombings that killed 191 people and wounded more than 1,800.

Railways are also attractive to criminals because of their economic importance. Criminals’ goal is not to harm people, but to extort municipalities into paying ransom to release compromised mass transit systems from their control. Ransomware attacks of this kind have already hit the San Francisco Muni system in November 2016 and Germany’s Deutsche Bahn railway in 2017.

Railway systems would offer attackers a wide range of targets even if they weren’t connected to the cyberworld: a multitude of stations, signaling systems and miles of tracks, often in remote areas, make securing everything highly challenging. Urban transit stations move hundreds of thousands of passengers through crowded concourses every day and have already attracted attackers in London, Russia, Belgium, Turkey and India in recent years.

Add to that the increased vulnerabilities that arise through the rapid digital transformation of the industry, and the potential attack footprint grows even larger. Digital vulnerabilities pose a painful conundrum for the rail industry, as Waterfall Security’s CEO and co-founder Lior Frenkel describes:

“The biggest risk to industrial networks occurs when there is a connection to an external network. In many ways, connecting rail systems to the internet is quite reckless, but delivers so many efficiencies that it’s hard to see a day when public transport won’t be connected. What is most concerning is when the mission-critical control systems are connected to the same networks used by the passengers or the business networks. Here you open up the control system to the bad guys, who needn’t even be on-board the train to find a way into the control system.”

Known railway system cyberattacks

Known cyberattacks on railway services, so far, have caused few injuries and no deaths. Their frequency, however, has increased over the years. A growing number of what might be called “preparatory attacks” cause no immediate disruption but establish a foothold in systems for future attacks. Similarly, ransomware attacks that extort money from rail services in exchange for returning control of systems to the owners have increased. The following examples show the history of rail service cyberattacks and their developing trends.

CSX U.S. railway system

A worm that infected millions of computers in August 2003 caused nearly a day’s worth of delays across CSX’s entire East Coast rail operations. Although the attack did not target the rail system specifically, the lack of segmentation in CSX’s systems allowed an infection from an email opened by an employee to shut down signaling, dispatching and other critical systems.

Lodz, Poland, tram hack

In 2008, a 14-year-old described by his teachers as a model student and computer genius studied his city’s tram system for months to reverse-engineer its remote operations. He converted a television remote control into an infrared device that enabled him to control key points of the tram network’s track switching system. Without considering the consequences, he caused four trams to derail, injuring 12 people.

Unidentified U.S. railway system

An unnamed U.S. rail service in the Pacific Northwest suffered a cyberattack in December 2011 that had a minor effect on its operations. The official investigation described the attack as part of random exploration of U.S. digital systems by overseas hackers rather than as an attack that specifically targeted that rail service.

Four cyberattacks on UK rail

In mid-2016, security researchers uncovered at least four cyberattacks on UK rail systems over the previous year. Although the incidents did no discernible damage, they suggest that state actors were compromising systems to establish a presence for possible future exploitation.

San Francisco Municipal Transit System ransomware attack

A ransomware attack on the San Francisco Municipal Transit Authority (SF Muni) paralyzed the ticketing system for one day, until backup data for those systems was restored to get the system running again. Disruption to passengers was minimal. In fact, the transit system remained operational during the attack, giving riders free fares until the problem was resolved. SF Muni’s systematic backups of all data, kept segmented from the main system, enabled the fast recovery.

German railway ransomware attack

In May 2017, a ransomware attack on German railway operator Deutsche Bahn disrupted service for half a day. The initial infection likely entered through a phishing email, but it spread further than expected because of inadequate segmentation of systems.

Critical railway systems

Railway and mass transit services rely on a multitude of critical component systems for their operations. These include communications-based train control, traction power control, emergency ventilation control, alarms and indications, fire detection systems, intrusion detection systems, train control/signaling, fare collection, automatic vehicle location, physical security camera feeds (CCTV), access control, public information systems, public address systems, and radio/wireless/related communication, to name a few.

Each system offers attackers an attractive target, and a growing number of operational systems are COTS solutions. Such generic solutions save the railway companies money compared to proprietary solutions, but often offer questionable security.

Typically, railway systems consist of a mix of old, nondigital legacy systems and newer digital ones. Obviously, nondigital systems pose no cyber-connectivity vulnerabilities and require only physical and administrative security. Digital systems may vary on the level of connectivity they possess, with greater connectivity posing greater vulnerability.

Specific cyber-kinetic security challenges in railway systems

Rail systems pose some unique security challenges. One challenge is the massive shift from mechanical to digital systems.

As tests proceed on replacing old, mechanical signaling systems across the UK with digital ones of the European Rail Traffic Management System (ERTMS), researchers warn about the possibility of cyberattacks. They express concern that rail leaders have developed a false sense of security about the changeover to digital systems, because the old mechanical systems have been so rarely compromised. Assuming that this low rate will carry over to digital systems is not realistic, though.

Mechanical systems required attackers to be physically present at a physical control point to compromise it. As digitization increases, this restriction on attackers’ location will disappear. Systems will be hackable remotely.

Adding to the difficulty in securing rail systems is the fact that they are physically spread over large geographic areas. This multiplies potential access points to systems, which increases the difficulty in securing them.

The large number of personnel needed to access and run systems adds another security challenge. Disgruntled workers, or workers who have been bribed or coerced, will have access to digital input points across vast geographical areas. Each of these many input points offers an opportunity to introduce malware into the system, knowingly or accidentally.

Rail systems also share a problem common to many cyber-physical systems, especially those that are widely distributed like rail services. Maintaining the physical components of the system is critical for maintaining safety, resulting in safety personnel and security personnel having responsibility for the same systems. This opens the possibility of the two sets of personnel inadvertently working at cross purposes.

Ensuring the safety of signaling systems, for example, is crucial to preventing collisions. A physical breakdown of signaling systems is just as hazardous to safety as someone digitally altering their intended operation. Both safety and security need to be designed into systems and constantly maintained. Yet those who ensure safety of physical systems prone to wear and tear and those who ensure security of digital systems often approach those systems in completely opposite ways.

The broad geographic area that rail systems cover requires safety professionals to break them into separate components on which they can conduct systematic, rotating checks over time. Security professionals, on the other hand, approach the system holistically rather than by its component parts.

These different approaches create the potential that either safety or security personnel might unknowingly make changes that impair the efforts of the other. Thus, it is imperative that both work closely together to achieve an optimal combination of security and safety.

Furthermore, while safety personnel need concern themselves only with the equipment owned by their own rail service, security needs may involve multiple stakeholders, as systems connect between those stakeholders – not to mention, at times, different countries. All connections between stakeholders need to be secured, as well as the individual systems within them. Thus, cooperation is crucial between not only different disciplines, but also a wide variety of stakeholders.

Finally, we need to add the time factor into the challenges of securing rail systems. The organizations running them are used to dealing with mechanical systems whose replacement windows are measured in decades. Incorporating digital controls into the mix changes that dynamic dramatically. The speed of digital advancement is measured in mere years. If rail organizations digitize and yet expect to maintain the same extended replacement windows, they will only increase vulnerabilities, because digital systems become obsolete much faster than mechanical ones and are prone to undiscovered vulnerabilities.

What governments and rail services are doing

Fortunately, governments and rail services are not turning a blind eye to the new vulnerabilities. Many are actively involved in developing new standards to address security needs as rail transport systems become increasingly digitized. Here are some of the efforts they are making.

UK

The Rail Safety and Standards Board (RSSB) and the UK Department for Transport have released a guideline, Rail Cyber Security Guidance to Industry, to address emerging concerns about digitized railway systems and offer high-level recommendations for enhancing security. RSSB and the UK Department for Transport also work together with the Centre for the Protection of National Infrastructure (CPNI) on developing a cross-industry cybersecurity strategy for the rail industry.

In addition, the rail industry itself is working to reduce vulnerabilities. The Rail Delivery Group, a UK rail industry body, has released its own Rail Cyber Security Strategy, which seeks to set UK railways on a course toward a culture of cybersecurity that could make UK railway cybersecurity a model for railway systems around the world.

UK railway providers have taken up this challenge, seeking to embed cybersecurity into the end-to-end systems lifecycle, and have taken a strong risk management approach, borrowing methodologies from the financial services sector to better recognize potential attack vectors. They also have focused on better managing their supply chain risk, recognizing that many successful attacks started from a third-party relationship. Finally, they lead the way among rail services in developing rapid response capabilities in case someone succeeds in defeating their preventative measures.

U.S.

The U.S. Department of Homeland Security (DHS) has issued a series of guides on railway cybersecurity standards, including Transportation Industrial Control Systems (ICS) Cybersecurity Standards Strategy in 2012, as well as Transportation Systems Sector Cybersecurity Framework Implementation Guidance in 2015. In addition, the U.S. Transportation Security Administration (TSA), which is part of DHS, developed a Surface Transportation Cybersecurity Toolkit which refers to ICS framework and guidelines from the U.S. National Institute of Standards and Technology (NIST) and includes emergency response support from the US-CERT.

TSA also added cybersecurity into its Mass Transit BASE assessment framework to help transit providers evaluate the status of their security and emergency response. Finally, DHS, together with the American Public Transportation Association (APTA) and the Association of American Railroads (AAR), established the Surface Transportation Information Sharing & Analysis Center (ST-ISAC) to enable sharing of incidents and suspicious patterns among surface transportation organizations. This initiative includes the Transit and Rail Intelligence Awareness Daily Report (TRIAD).

In addition, individual U.S. rail services focus on continuous assessment, monitoring and awareness. They monitor thousands of log sources and assess hundreds of millions of log events each day. Cyber intelligence is widely and frequently shared. Security staffs monitor a wide number of intelligence sources daily and keep personnel apprised of the latest attacks and techniques.

Some rail services have worked hard to bring IT and OT security staffs together into a converged body that promotes enhanced communication between the two disciplines. They also have developed dedicated security teams to give intimate attention to key areas of vulnerability, to train rank-and-file staff in vulnerabilities of which they need to be aware and to decrease response time to attacks.

Canada

Transport Canada cooperates closely with the U.S. TSA on cybersecurity guidance and frameworks, and Canadian railways share cyber threat information through the cross-industry Canadian Cyber Threat Exchange and the US Surface Transportation Information Sharing and Analysis Center (ST-ISAC). Like U.S. railway services, Canadian ones focus on making staff aware of threats, attacks and new techniques. And like UK railway services, they have established a strong focus on standards for working with third-party technology providers, to reduce their vulnerabilities there.

France

The ANSSI (National Agency for Information System Security) has identified Vital Importance Operators (VIO), which include the main rail and metro operators. It also publishes step-by-step cybersecurity recommendations, which are being formalized through laws.

EU

The European Union Agency for Network and Information Security (ENISA) published numerous guidelines. These include Cyber Security and Resilience of Intelligent Public Transport and Cybersecurity for Railway Signaling. The EU has also funded multiple public-private projects on cybersecurity of railways, such as CYRAIL (CYbersecurity in the RAILways sector); SECRET Project, to address the risks of electromagnetic interference (EMI) attacks; and ECOSSIAN (European Control System Security Incident Analysis Network), the pan-European program for detection and management of attacks on critical infrastructure.

Other EU/ENISA initiatives include CERT support for cyber incidents in railways, as well as EU-wide disaster alerting, management and exercises (including cyberattack exercises). Also, the Network and Information Security Directive (NIS) implemented into law in 2018 focuses particularly on cybersecurity of operators of essential services, including rail.

International

The International Union of Railways now includes cybercrime in their Rail High Speed Network Security Handbook.

Further steps to take

Norbert Howe, in connection with the Institution of Railway Signal Engineers (IRSE) Technical Committee, makes the following basic suggestions, which can be applied across all railway systems:

  1. Establish cybersecurity design principles – Develop a mindset for security within the organization and apply design rules on several levels of the system architecture.
  2. Create a stronger perimeter – Apply strong measures such as Security Gateways and Web Application Firewalls and VPN on the external interfaces.
  3. Deploy system security and detection/recovery controls – Use a Security Information and Event Management (SIEM) solution and centralized antivirus platforms to immediately alert those responsible for security.
  4. Meet cybersecurity standards – Follow industry best practices and implement recognized, robust standards.
  5. Embed cybersecurity in the project life cycle – Signaling system projects have to incorporate and apply security measures during all phases – from bid to testing and putting into operation.
  6. Establish and perform risk assessments and penetration testing – Map the threats and vulnerabilities for each asset of the system and establish a clear view on the related cybersecurity risk. Ensure the implementation of the relevant measures and perform penetration tests in order to verify their effectiveness.
  7. Maintain good operational conditions – Apply and regularly check the application of clear rules and procedures in order to keep up the level of security management throughout the system lifetime, including updates and system changes.
  8. Mandate safety protection – Implement measures to ensure message integrity and safety control in order to safeguard the system against unauthorized messages and malicious software.

The following list summarizes the most common initiatives I end up recommending to my railway clients for securing railway systems against their present vulnerabilities, organized according to the National Institute of Standards and Technology (NIST) Cybersecurity Framework. That many of them are basic and obvious to cybersecurity professionals simply reflects the frequently low cybersecurity and cyber-kinetic security maturity of railways around the world.

Detect

Issues to consider:

  • Anomalies and Events
  • Security Continuous Monitoring
  • Detection Processes

Recommended priorities:

  • Design security operations for rapid detection and triage.
  • Expand detection beyond logs to include network flows.
  • Expand detection to ICS, taking into account the security and resilience constraints specific to ICS environments.
  • Utilize threat intelligence (both strategic and tactical) and industry collaboration to improve detection.
  • Assess cybersecurity on a continuous/regular basis, using multiple assessment types, from vulnerability assessments of technology, people and processes to red teaming.
  • Explore mechanisms to detect intentional electromagnetic interference (IEMI) attacks, if justified by the Threat, Vulnerability and Risk Assessment (TVRA).

Respond

Issues to consider:

  • Response Planning
  • Communications
  • Mitigation
  • Improvements

Recommended priorities:

  • Establish cybersecurity response playbook (operation and IT level).
  • Strengthen the linkage between cybersecurity response and emergency, incident and crisis management processes.
  • Engage third-party expertise for incident response, digital forensics, crisis communication, etc.
  • Conduct regular and comprehensive cyber-crisis exercises, including all relevant railway functions (primarily OT and IT) as well as third-party responders and key suppliers.

Recover

Issues to consider:

  • Recovery Planning
  • Improvements
  • Communications

Recommended priorities:

  • Develop Business Impact Assessment (BIA) and Business Continuity Planning (BCP).
  • Disaster Recovery Planning.
  • Develop comprehensive and regularly tested backup and restore practices.
  • Include third-party suppliers in recovery planning and recovery activities.
  • Feed recovery lessons learned back into problem management.

Takeaways

Much is being done to reduce the cyber-kinetic vulnerabilities that fully cyber-enabled railway systems potentially offer. But much more needs to be done. Fully cyber-enabled railway systems offer attackers a range of vulnerabilities perhaps unmatched by any other type of industrial control system.

And potential attackers are well aware of their opportunities, as Project HoneyTrain demonstrates. Prevention is the best form of defense; the most vulnerable victims will be attacked first. Rail operators that do not want to fall victim to disasters that could dwarf the noncyber tragedies already occurring on rail lines must be vigilant in their efforts to make, and keep, their cyber-systems secure.

IEMI – Threat of Intentional Electromagnetic Interference

IEMI

As our cities, our transportation, our energy and manufacturing – our everything – increasingly embrace the Internet of Things (IoT) and Industrial Control Systems (ICS), securing their underlying cyber-physical systems (CPS) grows ever more crucial. Yet, even among engineers and cybersecurity specialists, one potential attack vector is often overlooked: Intentional Electromagnetic Interference (IEMI).

ICS and IoT – digital systems that run today’s modern society – rely on changes in electrical charges flowing through physical equipment. Creating the 1s and 0s of which all digital information is composed requires electronic switching processes in circuits. The current used in this process is not confined to the circuits and the wiring connected to them; it also creates electromagnetic (EM) fields around them that could leak information. Potentially even more concerning, the flow of 1s and 0s through physical equipment or through electromagnetic wave-based communication can be disturbed or stopped by external electromagnetic interference (EMI) sources, causing unpredictable results.

Electromagnetic interference (EMI) that can affect the performance of electronic circuits or impact digital data paths can be caused by any of the myriad electromagnetic fields that surround us. Electric and magnetic fields are produced by natural sources like solar flares, lightning or auroras, and by man-made sources such as mobile networks, radio and TV broadcasts, radars, integrated circuits, ignition systems, electric power transmission lines, toaster ovens, and countless other things.

The industry has established a number of legal requirements and standards for electromagnetic compatibility (EMC). The goal is to ensure that devices operate correctly in common electromagnetic environments and resist unintentional EMI. With the exception of aircraft, however, these EMC requirements are typically not sufficient to protect against intentional electromagnetic interference (IEMI) generated by malicious actors to disrupt the performance of electronic equipment. This invites terrorists, criminals and other adversaries to intentionally interfere with or damage critically important CPSes such as telecommunications, power networks, financial systems, medical care, broadcast media, industrial plants, traffic control systems, food and water supply, critical manufacturing, mass transit and others.

Until the turn of the millennium, IEMI was essentially a military concern, but it has since generated considerable interest in the civil arena. The capabilities of IEMI attackers have grown steadily over the last two decades. At the same time, the growing complexity and distribution of CPSes and the decreasing power requirements of the devices that make up the Internet of Things (IoT) make it possible to connect ever more components of our physical world to monitoring and control devices. This provides a growing pool of increasingly vulnerable targets. Together, these trends sharply increase the threat of IEMI attacks.

What IEMI is

A widely accepted definition for IEMI came out of a workshop held at the Zurich EMC Symposium in February 1999:

“Intentional malicious generation of electromagnetic energy introducing noise or signals into electric and electronic systems, thus disrupting, confusing or damaging these systems for terrorist or criminal purposes.”

To put it bluntly, “[Systems] can be laid low by short, sharp pulses high in voltage but low in energy – output that can now be generated by a machine the size of a suitcase, batteries included.”

Why IEMI matters in cybersecurity

Although most people view cybersecurity as applying only to protecting systems against remote malicious hackers, it should go well beyond that limitation, especially when applied to cyber-physical systems. It should cover protection against all threats to such digital systems.

Technically speaking, IEMI is a cyberattack only in the sense that it targets “cyber” elements, such as computers, networks and devices. It is not “cyber” in the sense of being a digital attack that directly manipulates 1s and 0s. Rather than manipulating 1s and 0s, IEMI relies on analog interference with the EM signals used in our electronic devices and communications.

With that in mind, threats to digital systems, and therefore areas of concern for cyber-physical systems security specialists, could be broken down into three areas:

  • Physical security – protecting the physical components of digital systems from unauthorized physical access or tampering;
  • [Core] Cybersecurity – protecting embedded computation core – that controls physical processes – from being hacked, its data from being compromised or its functions from being hijacked by unauthorized persons;
  • EMI security – protecting the electronic components of CPSes, which operate at low internal voltages and communicate via low-power wireless networks, from IEMI disruption.

Cyber-physical systems security thus must protect systems on three levels, with the EMI level being the most often overlooked. As the physical and cyber worlds converge, though, we cannot afford to maintain artificial distinctions between these conjoined systems. Engineers charged with creating and maintaining the physical devices and systems must work together with cybersecurity professionals to ensure that vulnerabilities are addressed on a comprehensive level, with no cracks left open between the work of the different disciplines.

Nuclear Explosions and NEMP / HEMP

Most people are familiar with one type of IEMI through its frequent (albeit often incorrect) depiction in popular media: the Nuclear Electromagnetic Pulse (NEMP), or High Altitude Electromagnetic Pulse (HEMP), a side-effect of a nuclear explosion – a short burst of strong electromagnetic radiation that “fries” all electronics and throws the target area back into the Middle Ages.

The U.S. and former USSR governments are known to have conducted extensive research since the 1950s on how to produce such pulses without a nuclear explosion, because that capability would let a nation disable an enemy’s communications and critical infrastructure without direct human losses. Since the dissolution of the USSR, this research has spread to dozens of countries. Details of this work, and of how to generate non-nuclear electromagnetic pulses with an impact similar to a NEMP, remain relatively unknown outside the military.

More realistic types of IEMI attacks

There are, however, other types of IEMI attacks squarely within the reach of non-state adversaries such as criminals, terrorists, disgruntled insiders, hacktivists or simply curious teenagers. These are based on injecting EM pulses directly into electric or electronic systems, or on jamming electromagnetic wave-based communication (radio, WiFi, GSM, GPS, etc.). Damage from IEMI attacks ranges from extremely subtle disruptions that cause data stream errors to massive disruptions, caused by narrowband attacks or nuclear explosions, that produce system failure or even irreparable damage to equipment. The more severe the disruption, the more serious the results.

IEMI attacks that aim to disrupt systems come, in the simplest of terms, in two forms: potentially destructive, but harder to accomplish, narrowband attacks, and wideband attacks that have greater success in targeting systems, but pack less of a punch.

A narrowband attack transmits a single frequency, typically in a pulse on the order of microseconds in length. This type of threat is often referred to as HPM (high power microwave). Narrowband attacks can do permanent damage when successful, but must match the precise resonance pattern of the targeted equipment to cause that damage – not an easy task. While the pulses themselves are relatively easy to generate, attacks that actually achieve their objective require sophisticated technology, infrastructure, knowledge and resources to design and build. Non-state actors or individuals could not easily replicate such devices, although the devices have proliferated throughout the world and could possibly be acquired through sale.

A wideband EMI pulse produces frequency content over a wide range of frequencies. The advantage of wideband pulses is that the resonances of different-sized systems can be stimulated simultaneously; by covering more bandwidth, a wideband attack stands a greater chance of causing a disruption. The disadvantage is that the energy of a single pulse is spread over many frequencies, so it packs less of a punch at any one of them. Apart from the wideband disruption caused by a nuclear explosion, such pulses are generally not powerful enough to “fry” a system; their effectiveness is generally limited to temporary disruption. Wideband EMI sources are simple and inexpensive enough that garage enthusiasts could construct them. As such, they are of more immediate concern, especially because the technology for these kinds of attacks has been advancing rapidly and the targets for them have multiplied in our increasingly cyber-connected society.
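
The bandwidth/energy tradeoff described above can be seen in a few lines of numerical experimentation. The following Python sketch is my own illustration, with arbitrary signal parameters: it compares the spectrum of a long single-frequency burst with that of a brief impulse of the kind a wideband source emits.

```python
# Illustrative comparison of narrowband vs. wideband energy distribution.
# All parameters (sample rate, frequencies, durations) are arbitrary.
import numpy as np

fs = 1_000_000                   # sample rate: 1 MHz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms observation window

narrow = np.zeros_like(t)
narrow[:5000] = np.sin(2 * np.pi * 100_000 * t[:5000])  # 5 ms burst at 100 kHz

wide = np.zeros_like(t)
wide[:10] = 1.0                  # 10 microsecond impulse-like pulse

for name, sig in (("narrowband", narrow), ("wideband", wide)):
    power = np.abs(np.fft.rfft(sig)) ** 2
    # count frequency bins whose power is within 10 dB of the spectral peak
    strong = np.count_nonzero(power > power.max() / 10)
    print(f"{name}: {strong} frequency bins within 10 dB of the peak")
```

Running this shows the narrowband burst concentrating its energy in a handful of bins around 100 kHz, while the impulse spreads a much smaller peak power across thousands of bins.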

Either narrowband or wideband attacks can be delivered as broadcast attacks or hard-wired ones. Broadcast attacks have the advantage of not requiring attackers to physically breach a secured facility, but still require the attacker to be close, because the EM field diminishes quickly over distance. Hard-wired attacks require the attacker to physically connect to the targeted system. This ensures pulses strong enough to disrupt the system, but eliminates the advantage of attacking from outside the facility.

Weapons for such attacks range from simple, off-the-shelf devices, like microwave ovens, ESD guns used to test electronic devices for resistance to electro-static discharge, or electro-magnetic jammers that disrupt radio communications (GSM jammers, GPS jammers, etc.), to more complex devices with greater ranges built from specialty components that are still readily available, using tools that most handymen own. They can be large enough to require transport by truck or small enough to fit into a briefcase.

The goal in most IEMI attacks is to inflict sharp high-voltage pulses capable of temporarily disabling digital systems. Researchers have also shown it possible, though, to use a targeted digital device’s EM properties to covertly access sensitive data. It is even possible to take limited control of some devices through IEMI attacks.

Known IEMI incidents

EM vulnerabilities came to public attention through a 1967 accident in which a degraded cable shield termination on a military aircraft landing on an aircraft carrier failed to protect the aircraft from the ship’s radar. The electromagnetic interference caused the aircraft to fire its weapons at a fully armed and fully fueled aircraft on the deck. The resulting explosion killed 134 people.

Other accidental incidents occurred when antilock braking systems first came out. Cars equipped with the feature experienced braking malfunctions on Germany’s Autobahn when passing a radio transmitter. In another incident, a heart-attack victim’s defibrillator shut down every time the ambulance’s radio transmitter was used.

Not all incidents have been accidental. The following criminal uses of IEMI display some of the threats:

  • An EM disruptor was used to interfere with a gaming machine and trigger a payout in Japan.
  • An EM disruptor was used to disable a jewelry store security system in Russia.
  • An RF jammer was used by Chechen rebels to disrupt Russian police communications during a raid.
  • Jammers were used to disable security on limousines in Berlin.
  • An EM disruptor was used by Chechen rebels to disable security and access a secured area.
  • An EM disruptor was built by a disgruntled customer of a Netherlands bank to disable the bank’s IT system.

In each case, the devices used were extremely mobile and required little technical expertise. IEMI attacks are within reach of anyone willing to invest a little time studying documentation on the internet and purchasing readily available tools.

More advanced IEMI attacks can cause widescale disruption. In May 2012, North Korean jamming equipment caused GPS failures of more than 500 airplanes flying into or out of two South Korean airports, as well as hundreds of ships and fishing vessels in the nearby sea. Experts have quietly expressed concern that North Korea’s motives may have been to conduct testing for future, more destructive attacks.

It is impossible to document all IEMI attacks. On one hand, they are difficult to positively identify because of their lack of a physical or digital footprint. On the other hand, many suspected attacks are hushed up for security reasons or to avoid damaging the victim’s reputation.

Potential kinetic impacts of IEMI attacks

A paper by French national defence researchers calculates that a truck-mounted IEMI weapon with a parabolic antenna could disrupt an aircraft at a distance of up to 1 km.

An experiment irradiating an automobile with narrowband waveforms at high power and field levels (HPM) demonstrated that a van-mounted HPM source could stop an automobile at a distance of 500 meters and cause permanent damage at 15 meters. Damage was observed in engine control units, relays, the speedometer, the revolution counter, the burglar alarm, and a video camera.

Another experiment injected both narrowband and wideband waveforms onto the power lines entering a five-story office building. The measurements indicated that voltages injected on external wiring propagate well through the internal wiring of a building, even across multiple switchboards. The experiment demonstrated that the injected test pulses could easily have propagated through the building and damaged computer power supplies and potentially other types of connected equipment.

A Swedish defence researcher estimates that a suitcase-based HPM source could upset or damage cars, PCs and similar equipment at distances of up to 50 meters, and even cause permanent damage in close vicinity.

In my own practice, we have demonstrated how a simple GPS jammer could be abused to cause kinetic impacts on maritime systems or drones, and how a $15 GSM jammer could disrupt the GSM-R communication that a train uses for Communication-Based Train Control (CBTC), leading to the application of the emergency brakes.

Examples of other types of IEMI threats

EM information leakage

Another way that EM emissions can be misused is as an avenue for intercepting and reconstructing sensitive information. Research on how to reconstruct information from EM fields has been conducted since the 1950s and is commonly referred to by the codename TEMPEST. Techniques have been demonstrated that enable an attacker to reconstruct information by analyzing the electromagnetic field generated by monitors, keyboards, printers or cryptographic devices. Such reconstructions were once thought possible only for government-level attackers, but they have long since passed into the capabilities of virtually any determined attacker.

For example, different colors of pixels displayed on a screen generate different patterns of electromagnetic fields that can be analyzed and reconstructed. Different keys on a keyboard translate into different electromagnetic patterns for each key pressed. The messages received by printers are translated from electromagnetic patterns into print instructions. And secret keys on cryptographic devices can be detected by finding common patterns in the electromagnetic fields they generate.

This last one is particularly concerning, considering the rapidly increasing demand for secure transmission of confidential information. Standard cryptographic algorithms have stood up exceptionally well to digital attacks, but their hardware and software have proven vulnerable to side-channel attacks, such as power analysis and fault injection attacks.

In fault injection attacks, the attacker injects faults into a cryptographic device to obtain faulty ciphertexts, then derives the secret key from those faulty outputs. This was not seen as a threat in the past because it required physical access to the cryptographic module. Recent research, however, has shown ways to accomplish IEMI fault injection from a distance, using EM waves directed at the cryptographic module.
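
To make the threat more concrete, below is a toy Python sketch of the classic Bellcore fault attack on RSA-CRT signatures, one well-studied example of how a single induced fault leaks a secret key. Everything here is illustrative: the key is deliberately tiny, and the fault is simulated with a bit flip in software where a real attacker would induce it with an EM pulse.

```python
# Toy illustration (deliberately insecure key sizes) of the Bellcore
# fault attack on RSA-CRT signatures.
from math import gcd

p, q = 61, 53                      # toy primes; real keys use 1024+ bit primes
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (pow(x, -1, m) needs Python 3.8+)

def sign_crt(m: int, fault: bool = False) -> int:
    """Sign m with RSA using the CRT speed-up; optionally inject a fault."""
    sp = pow(m, d % (p - 1), p)    # signature half mod p
    sq = pow(m, d % (q - 1), q)    # signature half mod q
    if fault:
        sq ^= 1                    # simulated single bit flip in the mod-q half
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + q * h              # CRT recombination

m = 42
s_faulty = sign_crt(m, fault=True)

# One faulty signature suffices: s_faulty^e = m (mod p) but not (mod q),
# so the gcd below reveals the secret prime factor p of N.
leaked = gcd((pow(s_faulty, e, N) - m) % N, N)
print(leaked == p)                 # True: the attacker has factored N
```

The attack needs no knowledge of the key itself, only one correct message and one faulty signature, which is what makes remotely induced faults so dangerous.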

Countermeasures to this “information leakage” generally involve masking the telltale patterns by introducing additional “noise” that makes them more difficult to decipher, narrowing the range of differences within the signals, or reducing the escape of EM signals. The third approach, reducing unintended emissions and increasing shielding, is the most commonly practiced.

Compromising smartphones through EM attacks

A 2015 study showed how vulnerable FM-capable smartphones are to being hijacked by attackers. Such smartphones can receive FM’s VHF waves only when headphones, which double as the antenna, are plugged in. This opens the possibility that the headphones could also serve as an antenna for injecting commands into the smartphone.

In most voice-interface-capable devices, the voice interpreter always runs in the background, so users can access it instantly by speaking the keyword that alerts the device to an incoming voice command. This practice, itself, poses a well-documented risk to users’ privacy. But the fact that it can be used to insert commands into the device remotely is of even greater concern.

Speaking the keyword activates the voice interpreter, which then sends the subsequent command to the service provider to carry out. Smartphone users who leave the voice command function always on are especially at risk, but even users who configure their smartphones to activate the voice interpreter only after a button push remain vulnerable. By sending an AM radio wave that simulates the button push, followed by an FM voice command, an attacker can issue a command that bypasses the speaker and activates silently.

By doing this, an attacker could:

  • Covertly track the smartphone user
  • Activate a phone call to the attacker’s phone and eavesdrop on the smartphone user
  • Covertly place texts or calls to pay-per-text or pay-per-call services
  • Use the smartphone owner’s text, phone, social networks or email to send false information under the smartphone owner’s identity or to conduct phishing
  • Cause the browser to visit a site that downloads malware onto the smartphone to further compromise the operating system

Voice command is one of the most commonly used interfaces. It is already prevalent in smartphones, cars, desktop computers, smart watches and other “smart” devices in the Internet of Things (IoT). Because it fits so naturally with the way humans interact, it is perfectly positioned to become the most common user interface of the future as technology advances make it more reliable. That makes it a possible threat not only to smartphones, but to a wide range of devices.

Countermeasures are available for this vulnerability, but they reduce usability, a tradeoff that many users are unwilling to make. Users can improve headphone shielding and turn off the always-on access of the voice interpreter to reduce opportunities for attack. Other features that manufacturers could build into smartphones are voice recognition, which would require the voice commands received to match the user’s voice signature, and internal sensors that detect unusual EM activity concurrent with the voice command and reject the command.

Problems with securing against IEMI

Intrusions via IEMI are largely undetectable. Unlike a hacker, whose attempts to access a system can be identified and countered, the only way to detect an IEMI attack is by seeing the attacked system fail.

In addition, IEMI attacks leave no physical trace in the materials they penetrate or digital trace in the systems they compromise. They are hard to detect even by examining system error logs, because error detection systems are programmed to record errors based on normal system operations. Thus, they are likely to misidentify disruptions caused by IEMI attacks as internal system malfunctions rather than as an attack.

Additional difficulties with large grids

Adding to the problems in securing systems from IEMI is the fact that many of today’s systems are quite widespread. An electrical grid or railway network, for example, can cover hundreds of square miles and offer attackers thousands of potential access points. Securing such a massive network can border on impossible.

EMI testing

It is not enough to assume that systems are protected because their components have been tested to meet industry standards. Such testing checks only for components’ ability to withstand normal interference. Components are not tested for their ability to withstand IEMI attacks stronger than anything encountered in normal operating conditions.

Typical EMI testing also is done in a fragmented way. Individual components are tested rather than complete systems. And they are generally tested in environments that do not match their ultimate destination.

And just because individual parts prove resilient doesn’t mean that the assembled system will. Continued efforts like the Bundeswehr Research Institute for Protective Technologies and NBC-Protection (WIS) testing of systems in a mobile office facility may help to close that gap.

The risk of an IEMI attack can be assessed by weighing three parameters: technological challenge, threat level and mobility. Technological challenge covers the skill level required of the attacker, the availability of off-the-shelf IEMI sources that could be used against the target (or of components needed to build one), and the cost of such equipment or components. Threat level covers how severely a system would be disrupted if exposed to an IEMI attack. Mobility covers how close to the targeted system the attacker’s equipment must be for the attack to succeed.

These parameters are useful in that they consider more factors than traditional IEMI risk assessment processes do. Researchers Frank Sabath and Heyno Garbe have taken this approach even further, breaking its components down further to quantify and calculate what level of protection is needed against the level and severity of potential threats.
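
To make such a three-parameter assessment concrete, here is a minimal Python sketch of how the scoring might be wired up. The 1-to-5 scales, the inversion of technological challenge, and the multiplicative combination are my own illustrative assumptions, not a published model; Sabath and Garbe’s actual method is considerably more granular.

```python
# Hypothetical scoring of IEMI attack scenarios along the three parameters
# discussed above. Scales and weighting are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class IEMIScenario:
    name: str
    technological_challenge: int  # 1 = off-the-shelf jammer, 5 = state-level lab
    threat_level: int             # 1 = brief upset, 5 = permanent damage
    mobility: int                 # 1 = requires hard-wired access, 5 = distant broadcast

    def risk_score(self) -> int:
        # Lower technological challenge means more capable attackers, so invert it.
        feasibility = 6 - self.technological_challenge
        return feasibility * self.threat_level * self.mobility

scenarios = [
    IEMIScenario("handheld GSM jammer", technological_challenge=1, threat_level=2, mobility=4),
    IEMIScenario("van-mounted HPM source", technological_challenge=4, threat_level=5, mobility=3),
]
for s in sorted(scenarios, key=lambda s: s.risk_score(), reverse=True):
    print(f"{s.name}: {s.risk_score()}")
```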

So what can be done

Research on potential IEMI attacks shows more vulnerabilities with each passing year. That is not surprising considering how our society increasingly uses systems that rely on EM sources. Certain countermeasures can help protect against IEMI attacks, though.

Grounding

Proper equipment grounding is essential. Run each chassis to a single ground system rather than grounding each chassis individually. And make sure that the technician creating the grounding system is well versed in proper grounding procedures; otherwise you may unknowingly create a grounding system that increases EM resonance and makes your systems more prone to IEMI attacks.

Distance

Provide, if possible, a large open area outside critical facilities to reduce the chances of attackers getting close enough to launch a broadcast attack without being detected. This takes advantage of the fact that electromagnetic energy falls off with the square of the distance between the interfering and the interfered-with devices.
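
The practical value of standoff distance follows directly from that inverse-square relationship. The short Python sketch below, an illustration with an arbitrary 1 kW isotropic source rather than a model of any particular weapon, shows how quickly received power density decays with distance.

```python
# Inverse-square falloff: doubling the standoff distance cuts the received
# power density to a quarter. The 1 kW source figure is illustrative only.
from math import pi

def power_density(source_watts: float, distance_m: float) -> float:
    """Free-space power density (W/m^2) of an isotropic radiator."""
    return source_watts / (4 * pi * distance_m ** 2)

for d in (10, 20, 40, 80):
    print(f"{d:>3} m: {power_density(1000, d):.4f} W/m^2")
```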

Shielding

Building architecture can provide protection. Outside walls reinforced with metal rebar, windows with metal mesh and cables with metal casings can all attenuate EM fields. But, again, beware of unwarranted assumptions: a metal-clad building would seem to protect the systems within it from IEMI attacks, yet if unshielded conductors lead into the building, the field strength inside can actually be intensified.

Filtering

Because cables that come in from the outside could potentially be used as avenues for an IEMI attack, it is best to run them through filters. Don’t count on filters designed to protect against the surges that come from lightning strikes, though. Although lightning strikes are high-voltage disruptions, their rise times are measured in microseconds (millionths of a second), slow compared to most IEMI attacks, whose rise times are measured in nanoseconds (billionths of a second) or even picoseconds (trillionths of a second).

Using fiber-optics

Where possible, replace copper wires with fiber-optic cables, which are immune to EM pulse interference.

Warning systems

In recent years, it has become increasingly feasible to install affordable detection systems in facilities whose protection is robust enough to withstand minor interruptions without permanent damage. Although systems capable of providing interruption-free operation are exorbitantly expensive, more basic systems can still gather useful data for deciding whether a more expensive detection system is needed.

Government and industry initiatives

Governments and industry bodies are taking these threats seriously. Steps are being taken to address them.

For example, the STRUCTURES (Strategies for The improvement of critical infrastrUCTUres Resilience to Electromagnetic attackS) program is one of several EU-funded research programs that evaluate the effects of IEMI on critical infrastructure, including protection and detection.

Similarly, SECRET – SECurity of Railways against Electromagnetic aTtacks – focuses on assessing and mitigating risks of EM attacks on railways. Finally, the EU’s HIPOW Project aims to develop a holistic regime for protection of critical infrastructures against threats from electromagnetic radiation.

Takeaways

IEMI attacks take advantage of the very nature of digital systems and are becoming more common. They often require little technical expertise and frequently make use of off-the-shelf EMI-generating devices. While such attacks are not as well documented as other forms of attack on digital systems, it is essential to recognize their danger and protect against them.

When Hackers Threaten your Life – Cyber-Kinetic Attacks Intro

Cyber-Kinetic

The attacker stepped out from behind a hedge in the upper-class suburban neighborhood, being careful to stay in the shadows. Across the street, the last lights shining through the windows of the house had just flickered out. She tugged the bottom of her black hoodie into place and pulled the hood up over her head, casting her face deeper in shadow.

Her target sat in the driveway at the front of the house, a bright red and completely decked-out SUV. Glancing up and down the street to ensure no one was looking, she slipped across the street into the driveway, ducking down between the side of the SUV and the landscaping. She pulled a small device out of her pocket and mashed the button down for a few seconds, smiling as she heard the answering thunk of the car doors unlocking.

“Hooray for technology.”

Quickly opening the driver’s door, she reached up under the steering wheel with a gloved hand and inserted a small device into the vehicle’s diagnostic port. She quickly ducked back out of the vehicle, shutting the door and locking it behind her again. She vanished swiftly into the night.

The next morning, a federal judge sat waiting in the traffic backed up on the highway entry ramp. He was on the way to begin the first day of what was to be a landmark case in combating drug and gang violence. As he accelerated on to the highway, he passed a car sitting off on the shoulder, the directional antenna pointing out its window completely escaping his notice.

As he got up to speed, the accelerator suddenly slammed to the floor. He desperately jammed his foot on the brakes, to no effect. As he began to panic in earnest, the steering wheel turned sharply, aiming the vehicle toward the oncoming lanes of traffic…

Though this tale does not describe an actual event, its ingredients are far from fictional. The technology described is very real, as are the attacks it facilitates. This is the world of cyber-kinetic attacks and cyber-physical systems (CPS), and it is growing more dangerous by the day.

Defining cyber-physical systems

Cyber-physical systems have been described by Dr. Helen Gill of the National Science Foundation as:

physical, biological, and engineered systems whose operations are integrated, monitored, and/or controlled by a computational core. Components are networked at every scale. Computing is deeply embedded into every physical component, possibly even into materials. The computational core is an embedded system, usually demands real-time response, and is most often distributed

Though this is a useful definition, it doesn’t technically incorporate Industrial Control Systems (ICS), those systems we would normally find in control of power generation and distribution, water mains, air handling systems, building control systems, smart factories, and the like.

A more inclusive and relevant definition is provided by the Cyber-Physical Systems Security Institute (CPSSI):

Cyber-Physical Systems are any physical or biological systems with an embedded computational core in which a cyber attack could adversely affect physical space, potentially impacting well-being, lives or the environment

Why is this definition more relevant? Because it focuses specifically on a critical aspect of this technology: security. Lumping different types of systems under the umbrella term of CPS would be purely academic were it not for the benefits of addressing the security risks they share. ICS, SCADA, IoT devices, drones, smart grid, self-piloting transportation (automobiles, aircraft, ships, etc.), computer-controlled artificial organs and connected medical implants, wearable technology – these digital systems differ hugely in design and application, but they share one major trait: they can be hijacked to effect widespread damage in the physical world.

Threats of cyber-kinetic attacks against cyber-physical systems

Cyber-physical systems offer attackers a unique opportunity to go beyond the ‘traditional’ hacking of digital systems used to obtain information, steal money, extort money or bring down networks. The nation states, extortionists, terrorists, hackers, and criminals targeting CPS do so because these systems give them the ability to reach into and manipulate physical environments.

Nation state attacks against cyber-physical systems are becoming routine. The first documented case of a nation state attack against CPS came in 2010, when the Stuxnet malware was used to disrupt uranium enrichment at the Iranian plant in Natanz. The malware interfered with the Programmable Logic Controllers (PLCs), causing the centrifuges at the plant to run at speeds alternately above and below specifications, resulting in both damage to the equipment and improperly processed output. Stuxnet has since been attributed to a partnership between Israel and the National Security Agency (NSA) of the United States.

Malware of a similar nature has been used in numerous attacks since. The Dragonfly/Crouching Yeti attacks, estimated to have taken place from 2011 to 2014, were espionage campaigns against targets in the aviation and defense industries in the US and Canada, and various energy industry targets in the US, Spain, France, Italy, Germany, Turkey, and Poland. Similar tactics could be seen in the BlackEnergy malware that caused power outages in Ukraine in 2015.

But acts of aggression against CPS are not confined to major industrial targets, nor are they exclusively carried out by nation states with immense resources. For a less glamorous example of cyber-kinetic attacks we need only look as far as Vitek Boden. Boden, then in the employ of Hunter Watertech, an Australian installer of SCADA-controlled sewage valves, had a difficult relationship with both his employer and the Maroochy Shire council, where he had installed equipment. As an act of retribution for these perceived slights, Boden remotely took unauthorized control of the valve network, spilling over 800,000 liters of raw sewage into area parks, rivers, and businesses.

Closer to home, the connected devices and equipment we use every day have increasingly been shown to be susceptible to cyber-kinetic attacks. Security researchers have demonstrated vulnerabilities in Tesla and Jeep automotive systems that would enable attackers to alter a variety of functions in increasingly automated vehicles. Such incursions could include suddenly activating the brakes or disabling the engine while at speed, either of which could have fatal repercussions.

Finally, to get to the heart of the threat, so to speak, we should consider hacks of medical devices. Pacemakers and other wireless implantable medical devices are clear candidates for CPS strikes. In 2007, doctors for then-US Vice President Dick Cheney had the wireless functionality of his implanted defibrillator disabled over concerns that terrorists could use it to assassinate him. But nearly any hackable medical technology is cause for great concern. Serious security vulnerabilities have been demonstrated in drug pumps (which could be made to deliver overdoses), in CT scanners (whose radiation output could be altered) and in a number of other such devices.

To date, all widely reported medical device hacks have come from security researchers, but it is only a matter of time before we see such attacks in the wild. A 2016 Forrester report even named ransomware in medical devices the number one security threat for the year.

Concerns for CPS security grow with the imminent introduction of 5G. With a network capable of supporting a million devices per square kilometre at lightning connection speeds and super-low latency, sci-fi dreams like fleets of driverless cars finally become a reality. With this fusion of the cyber and the real, the possibilities for innovation and improvement become limitless. Unfortunately, so do the opportunities for security breaches and attacks that could impact the lives of millions.

Securing cyber-physical systems

“In contrast to cyber security, the goal of cyber-physical security is to protect the whole cyber-physical system.”

Planning security for CPS needs to take into account aspects of both information security and physical security, addressing the weaknesses of both.

Threat, Vulnerability, and Risk (TVR) assessments of cyber-physical systems differ from those conducted against enterprise information systems. When examining CPS, the potential impact on the lives and well-being of those using the systems, as well as the potential impact on the environments in which they are used, needs to be considered. This requires a full understanding of the risks, potential risk exposures, and dependencies between the systems. Without this, it is impossible to plan technology deployments or decide on cybersecurity investment.

The interface between digital and physical realms also makes CPS more difficult to test and secure. Penetration testing and vulnerability assessment need to be approached differently. If performed in the same way as is currently done for business information systems, penetration testing could pose significant risks to CPS. At a minimum it might cause network performance issues. Worse, it could render some CPS components inoperable, alter data sent from sensors, or even provide an avenue for unintended or unauthorized changes to physical systems. Where human safety is involved, these risks become unacceptable.

Best practices like using experienced penetration testers, carefully setting rules of engagement, and testing in a methodical fashion will not eliminate the possible physical dangers from cyber-kinetic attacks. Nor will limiting penetration testing to the cyber portions of systems – the complex integration between cyber and physical that makes these systems so powerful also makes them impossible to separate. Testing environments need to represent both sides of the equation, so we have to develop new and innovative ways to assess CPS security, such as building comprehensive testbeds.

Physical security is a concern in itself. We regularly consider the physical security of our information systems (data centers, etc.), but in CPS the physical threat potential is far higher. The industry often overlooks the fact that something as generic as a remote control or an IoT device could be used to breach the business side of the network. Physical systems have traditionally not been secured well, but such ancillary access points have begun to see a higher level of scrutiny after incidents such as those that led to the Target breach in 2013.

In addition to TVR assessments, other portions of our security programs need to be tailored when dealing with cyber-physical systems. Security monitoring, incident readiness and response, forensics, and a host of others all have specific challenges we need to address.

Viewing cyber-physical systems through the same lenses typically used to address cyber security is insufficient, and leaves us with an incomplete picture. The standard CIA model (Confidentiality, Integrity, Availability) for assessing information security issues still has relevance but it needs to be augmented. Integrity remains critical – we obviously still need to be in charge of the data transfer points in the system, especially where they connect to the operation of physical objects. System availability also remains important in CPS, but confidentiality is less of a concern than in typical information systems.

A useful addition to the CIA triad in the context of cyber-kinetic attacks could be the Parkerian Hexad, which adds the criteria of Possession or Control, Authenticity, and Utility. Possession or control refers to either physical loss of, or loss of control over, the item in question. Authenticity refers to the truth of claims about the item’s origin. Utility refers to the item’s usefulness.

Taking all of these factors into account, we can revisit the vehicle attack example from the beginning of this article (a short code sketch of the same assessment follows the list).

  • Confidentiality – Not a major part of this case. Theoretically, the inner workings of a device should be confidential, but when one is dealing with the Internet of Things and objects like cars, such information is freely available.
  • Possession or control – The operator of the vehicle was clearly no longer in control of it.
  • Integrity – The configuration information of the vehicle and its internal signaling between systems had lost integrity.
  • Authenticity – The origin of the signals being sent to the various vehicle systems was not authentic.
  • Availability – The vehicle was not available for the operator to use as intended.
  • Utility – The vehicle was no longer useful for its intended purpose, as it could not be safely driven.
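
To make that mapping concrete, here is a minimal sketch – purely an illustration of the idea, not a standard assessment tool – of how the hexad could structure such an incident review in code. All class and attribute names are invented for the example:

```python
# Minimal sketch: structuring a CPS incident review around the Parkerian Hexad.
from dataclasses import dataclass, field

ATTRIBUTES = (
    "confidentiality", "possession_or_control", "integrity",
    "authenticity", "availability", "utility",
)

@dataclass
class HexadAssessment:
    incident: str
    findings: dict = field(default_factory=dict)

    def record(self, attribute: str, compromised: bool, note: str) -> None:
        if attribute not in ATTRIBUTES:
            raise ValueError(f"unknown attribute: {attribute}")
        self.findings[attribute] = (compromised, note)

    def compromised(self) -> list:
        """Return the attributes the attack actually violated."""
        return [a for a, (hit, _) in self.findings.items() if hit]

assessment = HexadAssessment("hijacked vehicle")
assessment.record("confidentiality", False, "device internals already public")
assessment.record("possession_or_control", True, "operator no longer in control")
assessment.record("integrity", True, "internal signalling altered")
assessment.record("authenticity", True, "commands originated from the attacker")
assessment.record("availability", True, "vehicle unavailable for intended use")
assessment.record("utility", True, "vehicle cannot be safely driven")
print(assessment.compromised())  # every attribute except confidentiality
```

The value of framing an incident this way is that it forces the reviewer to consider all six attributes, not just the three the CIA triad covers.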

CPS security is complicated by the fact that cyber-physical systems are as varied as they are numerous. Even a standard security intervention like regular patching cannot be routinely applied in the CPSverse. Where critical physical systems are being controlled, patching can become much more difficult or even impossible. If our CPS is controlling an artificial organ, or the systems of a submarine a thousand feet under the ocean, the idea of patching it while in use becomes almost ludicrous. The alternatives are not much more sensible: long-term planning and extensive testing, system replacement, or interventions spanning multiple generations of legacy systems.

Winning the race to secure cyber-physical systems against cyber-kinetic attacks requires a realization that CPSs are unique. They do not operate by the same rules as either strictly cyber or strictly physical systems. Security and testing need to be more imaginative, pre-empting the many novel ways in which an attacker might engage a cyber-physical system. Security professionals also need to be willing to let go of practices or controls that do not work in the CPS space. As we move into an era of hyper-connectivity founded on 5G and facilitated by the IoT, the risks to CPSs will only increase, as will our responsibility to protect them.

Defeating 21st Century Pirates: the Maritime Cyberattacks

Maritime Cyber-Kinetic

The maritime industry faces a not-so-distant future when ships will be completely autonomous, using navigation data that they receive to plot their own courses with only minimal input from shoreside control centers. The efficiencies this could bring are massive, but before this happens, cybersecurity issues must be addressed. Not only are many vessels configured in ways that invite cyberattacks, but security practices also need to be improved before the industry can safely navigate its future.

An increasingly digitized maritime industry

A fleet of 250 autonomous vessels may launch soon. And that would be only the beginning, according to McKinsey and Co. McKinsey sees autonomous ships becoming the norm, spurring consolidation of different shipping branches – trucking, railroads, maritime, port services – into an end-to-end system that combines them all and increases the volume of container trade as much as fivefold.

System vulnerabilities

Such a digitally driven industry is not without pitfalls, though. The vulnerability of digital systems is a real threat. Michael Mullen, U.S. Navy Admiral and former Chairman of the Joint Chiefs of Staff, warns: “We are vulnerable in the military and in our governments, but I think we’re most vulnerable to cyberattacks commercially. This challenge is going to significantly increase. It’s not going to go away.”

Three navigation-critical systems have proven to be vulnerable:

  • Global Navigation Satellite Systems (GNSS) – such as GPS, GLONASS, Galileo or BeiDou – pinpoint the vessel’s exact location, but can be spoofed to fool the crew into changing course.
  • Electronic Chart Display & Information System (ECDIS) – contains digital charts of ocean routes, but, when fed false information, can lead the crew to plot an erroneous course, or can lead them to believe they are on the correct course when they aren’t.
  • Automatic Identification System (AIS) – monitors surrounding traffic and continuously broadcasts the ship’s location to help avoid collisions, but its messages can be intercepted and modified to give inaccurate information about the ship’s location, movements or identity (a simple plausibility cross-check is sketched below).
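
As an illustration of the kind of plausibility checking that can help against AIS tampering, the sketch below cross-checks an AIS-reported own-ship position against an independent GNSS fix. The function names and the half-mile threshold are assumptions invented for the example, not a real bridge-system API:

```python
# Hypothetical sketch: flag an AIS own-ship report that disagrees with GNSS.
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in nautical miles."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def ais_position_plausible(ais_fix, gnss_fix, threshold_nm=0.5):
    """True if the AIS-reported position is within tolerance of the GNSS fix."""
    d = haversine_nm(ais_fix[0], ais_fix[1], gnss_fix[0], gnss_fix[1])
    return d <= threshold_nm

# Example: AIS claims a position roughly 30 nm from the GNSS fix.
print(ais_position_plausible((51.5, 0.5), (51.0, 0.5)))  # False -> investigate
```

A check like this only helps when the two sources fail independently; if GNSS itself is being spoofed, both will agree on the wrong answer, which is why additional references such as dead reckoning or radar fixes matter.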

The increasing practice of connecting shipboard systems to shoreside stakeholders via satellite or RF radio offers hackers opportunities to intercept and transmit falsified data, either to the ship or to stakeholders onshore. And anyone who can access system USB ports may – maliciously or unknowingly – download false data or malware.

These critical systems are not the only onboard systems that are vulnerable, either. Others include cargo management systems, bridge systems, propulsion and machinery management and power control systems, communication systems, access control systems, and others.

Compromise of these systems is not theoretical. Such cyber-kinetic attacks have happened. For examples, have a look at my more detailed article on maritime cybersecurity threats. Unfortunately, though, those incidents have not spurred significant security improvements.

Security practice vulnerabilities

Much of that is due to laxity in security practices. Ninety-nine percent of successful attacks on maritime systems exploit known but unpatched vulnerabilities. Security is often limited to protecting system perimeters, with thought rarely given to detecting intruders who have penetrated the perimeter or to stopping them from penetrating further.

Vessel systems often give all users admin access, which multiplies the number of vectors through which an attacker can compromise systems. Finally, many vessels network both their information technology (IT) and operations technology (OT) systems and then connect both to the internet for shoreside monitoring. This, however, gives any hacker who penetrates the perimeter full access to even the most critical onboard systems.

An attractive target

Maritime companies often fail to recognize what an attractive target they are to cybercriminals. The industry moves large sums of money between shipping lines and bunker suppliers or shipyards, not to mention the sums paid to shipping companies for their services. Not only that, but compromising maritime systems offers an inviting way for criminals to move illicit goods.

An August 2011 cyberattack on the Islamic Republic of Iran Shipping Lines (IRISL) caused weeks of chaos and severe financial loss. A 2017 attack on A.P. Moeller-Maersk cost the company more than $200 million. Organized crime groups have ensured the unchallenged delivery of their illicit goods by hacking into cargo systems in the Netherlands and Australia.

A fraudulent scheme victimized World Fuel Services (WFS) when cybercriminals forged a fake fuel supply tender to take delivery of $18 million worth of marine gas oil. Similar losses reported across the shipping industry have cost it hundreds of millions of dollars. Combine all this with ransomware attacks, sabotage and industrial espionage, and the cost to the shipping industry is astronomical.

Taking steps to reduce vulnerabilities

Vulnerabilities like those that led to these losses could be reduced by applying to both IT and OT systems the principles recommended for other cyber-physical systems: defense in depth and defense in breadth. Defense in depth provides multiple security layers, so the more critical a system is, the more layers of security protect it. Defense in breadth provides multiple security defenses within each layer, so that penetrating one system gives access to that one system only and not automatically to others.
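
The distinction is easiest to see in code. The toy model below uses invented zone names, systems and rules – not a real shipboard configuration – to show critical systems guarded by more layers (depth) and each system holding its own credentials (breadth):

```python
# Illustrative-only sketch of defense in depth and defense in breadth.

# Depth: the more critical a system, the more independent layers guard it.
LAYERS_BY_CRITICALITY = {
    "cargo_manifest": ["perimeter_firewall"],
    "ecdis":          ["perimeter_firewall", "ot_zone_acl"],
    "propulsion":     ["perimeter_firewall", "ot_zone_acl", "engine_room_auth"],
}

def reachable(system: str, cleared_layers: set) -> bool:
    """A request reaches a system only if it clears every layer guarding it."""
    return all(layer in cleared_layers for layer in LAYERS_BY_CRITICALITY[system])

# Breadth: each system keeps its own credentials, so compromising one
# system's key does not unlock its neighbors.
CREDENTIALS = {"ecdis": "ecdis-key", "propulsion": "engine-key"}

def authorized(system: str, presented_key: str) -> bool:
    return CREDENTIALS.get(system) == presented_key

# An attacker who breaches only the perimeter reaches the manifest, not ECDIS:
print(reachable("cargo_manifest", {"perimeter_firewall"}))  # True
print(reachable("ecdis", {"perimeter_firewall"}))           # False
# Even a stolen ECDIS key does not open propulsion:
print(authorized("propulsion", "ecdis-key"))                # False
```

Contrast this with the flat networks described above, where clearing one perimeter layer with one shared admin credential unlocks everything at once.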

Steps to reduce vulnerabilities:

  • Bridge the divide between IT, operational technology, safety and other relevant functions.
  • Replace any outdated or obsolete operating systems or antivirus software.
  • Stop using default passwords. Assign users separate accounts that possess only as much access as their job needs.
  • Practice both defense-in-depth and defense-in-breadth.
  • Limit shoreside remote access to critical systems to only when it is needed.
  • Give contractors and service providers only as much access as needed.

Ultimately, the most important step is to make security decisions a high-level responsibility. When security decisions are delegated downward to the IT department or to individual ship level, security often becomes a low priority. But when security decisions are made near the top of the organization, they are carried out.

Facing a digital future

Digitization in the maritime industry is growing, and cyberattacks are growing along with it. Attackers achieve massive paydays when maritime targets leave vulnerabilities open. If the maritime industry is to enjoy the potential that digitization can bring, it must put cybersecurity in the forefront instead of on the back burner.


For a more detailed article on maritime cybersecurity, see Navigating a Safe Course Through the Threat of Maritime Cyberattacks.


Originally published on CSOonline.com on January 8, 2018.

Regulating the Security of the Internet of Things (IoT) in a 5G World

Regulation IoT 5G Cybersecurity

In one of those strange inversions of reason, the Internet of Things (IoT) arguably began before the Internet itself. In 1980, a thirsty graduate student in Carnegie Mellon University’s computer science department, David Nichols, grew tired of hiking to the local Coca-Cola vending machine only to find it empty or stocked entirely with warm cola. So Nichols connected the machine to a network and wrote a program that updated him and his colleagues on cola stock levels. The first IoT device was born.

Things have moved on somewhat. Today, the world is home to 8 billion connected devices or “things”, with that number projected to leap to 20 billion devices by 2020. As Bruce Schneier says, “We’re building a robot the size of the world, and most people don’t even realize it.” This exponential growth has been driven by an increasing appetite for connection, convenience and consumption. These are good for business and, for the most part, private individuals. But there is a dark side.

Security of the Internet of Things is historically poor and fraught with challenges. According to researchers, in the ten largest US cities alone there are over 178 million IoT devices that lack basic security features and are visible to attackers. That is alarming enough, but what will be the impact as we move into a new era of hyper-connectivity under 5G?

As the 2016 Mirai botnet attack made all too clear, the IoT is dangerously vulnerable to disruption. Mirai impacted millions of people in the U.S., as well as affecting service for internet users in Germany and the UK, yet public discourse on IoT security remains patchy and inconsistent.

Why isn’t there more discussion about IoT hacks outside of the cybersecurity community? Are people naïve or do they simply not care? In reality it’s a bit of both.

The IoT is notoriously difficult to regulate, with service providers being scattered across borders and jurisdictional lines. Your network operator may be local, but your device may have been made in another country, its components in yet another. In such a scenario, the end user is seldom educated on how to – or why to – keep their devices secure.

The second part of the answer is embedded in human nature: we tend not to pay much attention to things until they impact our daily life. Mirai, for example, didn’t hack devices in order to get at their owners. Instead, the hacked devices were used to aggregate enough computing power to launch a distributed denial-of-service attack on Dyn, which supports the internet access of millions of Americans.

Aside from losing access to Netflix and Twitter, however, most people don’t feel directly impacted by such security breaches, so they aren’t too worried about their vulnerable devices. As long as their devices continue to function and their sensitive information isn’t compromised, the average end user is unlikely to get worked up about headlines like ‘Major IoT Cyber Attack’.

That’s all about to change under 5G. The first commercial 5G networks are expected to launch by the end of this year, with significant uptake expected by 2025. The promises of 5G are also its greatest threats. In a 5G environment, for example, autonomous cars and remote surgery become truly viable for the first time. With latency of one millisecond or less, connection over 5G will essentially represent real-time engagement.

Imagine the possibilities: a network of driverless automobiles traveling at the high speeds (200 km/h plus) that become possible when every car knows where every other car is and can respond to changes within a millisecond (by comparison, the average human response time is 200 milliseconds). The risk of collision approaches zero.
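
The arithmetic behind those figures is worth a quick sanity check. The snippet below simply computes how far a vehicle travels during the reaction latency at 200 km/h:

```python
# Back-of-the-envelope check: distance covered during reaction latency.
def distance_during_latency_m(speed_kmh: float, latency_s: float) -> float:
    return speed_kmh / 3.6 * latency_s  # km/h -> m/s, times latency in seconds

print(distance_during_latency_m(200, 0.001))  # 1 ms (5G-class link): ~0.06 m
print(distance_during_latency_m(200, 0.200))  # 200 ms (human driver): ~11.1 m
```

A networked car reacting over a 1 ms link travels about six centimetres before responding; a human driver travels about eleven metres. That two-orders-of-magnitude gap is what makes tightly packed, high-speed convoys conceivable – and what makes the failure mode below so alarming.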

Now imagine that the IoT services are corrupted and the network connecting those hundreds of high speed vehicles suddenly collapses. Or cyber attackers target individuals with vulnerable pacemakers, as could have happened here.

When one looks beyond smart homes and sexy wearables, one sees the Internet of Things as the Internet of Everything, not just the harmless stuff.

We are living in a time where cybersecurity increasingly means human security.

What about the regulations that ensure IoT devices are safe?

“Aren’t there regulations to protect us?” This is a reasonable response when faced with the prospect of potentially devastating and fatal cyber attacks. Unfortunately, the answer to that question is not simple.

There are regulations and standards, but they are disparate and inconsistent. We need only look as recently as the passing of the California Consumer Privacy Act for an example of how state and federal laws can conflict on key issues like privacy. At the moment, all we have is a potpourri of guidelines and frameworks. I’m tracking more than 60 of them. Ideally, these separate guidelines would be amalgamated to form a user-friendly set of global standards that companies could use to build and maintain safe devices.

But we don’t live in an ideal world.

Firstly, IoT security is new enough and complicated enough that developing these guidelines takes time and expertise. Even then the results are not necessarily easy to implement. With so many devices going to market without sufficient security baked in, those responsible for developing or deploying IoT devices within their organisations need to apply IoT guidelines such as those published by NIST and DHS. But this is easier said than done.

As a result of difficulties interpreting and implementing these guidelines, regulations may be the only way to force security into IoT devices. But drawing up such regulations is a daunting task for lawmakers who are themselves still coming to terms with the complexity and nuances of the IoT landscape.

Secondly, there’s the human issue: We are thirsty for innovation. As Mike Gillespie put it, “At the moment, IoT is driven by the desire to innovate on the part of developers and functional need on behalf of the buyers.”

With the demand for IoT devices on a seemingly ceaseless growth curve, it is impossible to maintain full regulatory oversight, especially as suppliers rush to market with insecure products in hand.

To regulate or self-regulate: Where do we stand on IoT security regulations?

There are quite a few Internet of Things security guidelines available from different organizations. While there isn’t yet a framework that has attained the status of global standard, experts, bloggers, and IoT enthusiasts frequently cite some more than others. Here are a few that I’d highlight:

  • Baseline Security Recommendations for IoT – European Union Agency for Network and Information Security (ENISA), 2017
  • Security and Privacy Controls for Information Systems and Organizations – National Institute of Standards and Technology (NIST), 2017
  • Internet of Things Security Guideline – IoT Alliance Australia (IoTAA), 2017
  • Strategic Principles for Securing the Internet of Things – U.S. Department of Homeland Security, 2016
  • IoT Security Guidelines and Assessment – GSMA, 2016
  • IoT Security Compliance Framework – Internet of Things Security Foundation, 2016
  • Industrial Internet Security Framework – The Industrial Internet Consortium, 2016

Table: Internet of Things Security Guidelines and Frameworks

Normally, when an industry reveals a weakness in self-moderation – especially where it concerns national and international security threats – governments and regulatory bodies step in. Why is that not the case with the Internet of Things?

To begin with, regulation needs to be carefully balanced with freedom to innovate. Opponents of government regulation point to the software industry, which they say managed to work security into its products through trial and error. They believe device manufacturers will figure it out in time because their long-term success depends on it.

IoT security regulations would also need to be enforced. This would require that governments agree on the specificity and extent of the standards. Will companies have to follow a handful of basic guidelines, or will they be legally obligated to take a comprehensive, security-by-design approach?

Supporters of self-regulation argue that market forces will push companies to adhere to a global IoT standard – accreditation leads to increased consumer trust which leads to higher sales.

But others are unconvinced by this view of corporate motivation. As Bruce Schneier explains, the market can’t solve IoT security on its own because markets are driven by short-term profit-making.

If there’s one thing everyone agrees on, though, it’s the need for some sort of global standard to encourage greater alignment and better IoT security. Perhaps the dawn of 5G will be seen as an opportunity for industry leaders to combine forces in creating the framework the Internet of Things so desperately needs. With a new generation of 5G-enabled devices set for production, and the creation of entirely new networks on the horizon, now is the time to build a system that is inherently as safe as it is exciting.

Tangible Threat of Cyber-Kinetic Attacks

Tangible threat of cyber kinetic attacks

Connecting physical objects and processes to the cyber world offers us capabilities that exponentially exceed the expectations of science fiction writers and futurists of past generations. But it also introduces disquieting possibilities. Those possibilities reach beyond cyberspace to threaten the physical world in which we live and – potentially – our own physical well-being. That’s the threat of cyber-kinetic attacks.


Our physical world is becoming more connected – which makes it more dependent on the cyber world. Many physical objects around us are no longer just physical, but extend into cyberspace, being remotely monitored and controlled. Increasingly, our factories, cities, homes, cars and, perhaps, even bodies are part of vast cyber systems created to run our physical world more efficiently.

Does that sound like the premise of a science fiction story? It shouldn’t, because it describes our current world. Does it sound a bit chilling? That’s because society’s rush to give us more ability to cyber-connect our world opens vulnerabilities that give others ability to use our physical world to harm us.

The growing reality of Cyber-Physical Systems (CPSes) such as Internet of Things (IoT) and Industrial Control Systems (ICS)

As you approached your car this morning, its door automatically unlocked. You found your engine running and your car interior at a comfortable temperature. As you drove, your car’s safety systems monitored traffic; they warned you if you strayed outside your lane; they would even apply your brakes themselves if they detected a potential collision. While you drove, cyber-connected traffic monitoring systems adjusted traffic light timing to reduce congestion.

These examples are just the tip of the iceberg when it comes to physical objects enhanced with cyber-connections.

Your water and power delivery depend on similar CPSes. Our civilization already runs on industrial control systems (ICS). As if that weren’t enough, we are adopting Internet of Things (IoT) technologies ever faster. CPSes can even include your body, as implanted devices such as heart monitors, defibrillators and insulin pumps are cyber-connected to medical personnel who can adjust them as needed.

The growing threat of cyber-kinetic attacks

CPSes enhance your life. But they carry with them the risk of cyber-kinetic attacks: attackers hijacking CPSes – whether in homes, cities, cars or human bodies – and using them to harm people or damage the environment.

Such attacks have already occurred, with physical damage inflicted on nuclear power plants, water facilities, oil pipelines, factories, hospitals, transit systems, apartment buildings and more. Only their scattered nature has prevented them from gaining more attention. I’ve been tracking many of them here.

It is not rare to find CPSes that serve people’s critical needs already compromised with malware or backdoors. When my research team assesses critical CPSes for vulnerabilities, we rarely fail to find systems already infected and ready to be exploited whenever the adversaries choose.

The rush to market

Connecting physical systems and devices to the cyber-world has obvious benefits. The process of connecting them, however, has consistently skipped a crucial step: making sure that critical CPSes are secure from unauthorized access.

Factories, water management facilities and power providers find great benefits in enabling administrators to monitor systems remotely. Their thoughts about security, though, often go only so far as wishful thinking that their systems are not attractive enough or are configured too obscurely for hackers to want to breach them.

Yet history tells us otherwise. Numerous incidents have been reported of disgruntled former employees seeking revenge, terrorists or state actors seeking disruption, and cybercriminals launching attacks.

Similarly, implanted medical devices have good reason to be cyber-connected to doctors for monitoring and adjustments. While those devices are rigorously tested to ensure they perform as designed, rarely are they tested for their ability to prevent unauthorized access. Manufacturers assume that, even though security flaws have repeatedly been demonstrated, no one would bother to exploit them.

Only when a person implanted with a cyber-connected medical device has been deemed to be a prominent enough target – such as former U.S. Vice President Dick Cheney in 2007 – have device manufacturers considered the potential dangers that inadequate cybersecurity in their devices poses.

Rethinking traditional security paradigms

Complicating the problem of preventing cyber-kinetic attacks is the fact that what needs protection in CPSes is different from what needs protection in traditional information systems. That calls for rethinking security approaches.

Traditional information systems protect sensitive information, so it doesn’t fall into the hands of those who would use it against the system owners. With CPSes, having unauthorized persons access their information is the least of administrators’ worries.

Keeping someone who breaches a nuclear power plant CPS from knowing system components’ temperature or pressure pales in significance compared to keeping them from compromising the system and destroying critical components. Similarly, keeping someone from knowing the insulin level of a person’s implanted insulin pump pales in significance compared to keeping them from causing the pump to administer a harmful dose.

Add to this the fact that security testing processes for information systems are not suitable for critical CPSes. Penetration testing can cause brief, but often acceptable system failures when searching for vulnerabilities in enterprise IT. In CPSes on which the well-being of an entire city or the life of a single patient depend, even a momentary failure could be devastating. Thus, new CPS testing approaches are needed.

Dealing with cyber-kinetic attacks

Concerns about cyber-kinetic attacks are not merely hype. They happen, and are increasing. Despite evidence that cyber-kinetic attacks are rising, those who drive adoption of increased cyber-connectedness overlook security, trusting that system flaws simply won’t be exploited.

Yet the numbers of hackers are growing. Hackers trained in cyber-armies sponsored by unfriendly countries are discovering more profit in choosing their own targets than in working for their homelands. Ransomware attacks that, only months ago, were known only to their few early victims and the security community have now vaulted into the headlines. And the dark web is training a growing number of disaffected “script kiddies” in how to make their mark on the world.

Our current crossroad

Our journey into a more cyber-connected world offers a utopian view of placing control of the physical objects and services we rely on literally at our fingertips. But the way that security is overlooked adds dystopian undertones of placing control of critical physical systems within reach of those who would disrupt our physical well-being.

In a world where cyber-kinetic attacks on critical CPSes are a reality, ignoring the potential for people to use our physical world against us is not an option. Security for the growing number of CPSes must be addressed to ensure that their benefits – and not their risks – are what define our future.


Originally published on CSO Online on 2 January 2018

Stuxnet and the Birth of Cyber-Kinetic Weapons and Cyber Warfare

Stuxnet Cyber-Physical Weapon

Stuxnet was the first true cyber-kinetic weapon, designed to cripple the Iranian – and perhaps also the North Korean – nuclear weapon programs. It succeeded in slowing the Iranian program, although it was discovered before it could deal the program a fatal blow.

Its significance goes far beyond what it did. It marks a clear turning point in military history and in cybersecurity. Its developers hoped for a weapon that could destroy strategic targets without the civilian damage possible in traditional warfare. Instead, it opened the door to cyberattacks that can deliver widespread disruption to the very civilian populations it was designed to protect.

Stuxnet itself disappeared from the digital world years ago. Its unintended spread beyond its target, though, made its code readily available to other nations, cybercriminals and terrorist groups, providing them with a wealth of advanced techniques to incorporate into their own malicious cyber efforts. Its impact on the future cannot be overstated.

Stuxnet’s origins

Stuxnet is thought to have been conceived in 2005 or 2006 as a joint U.S.-Israeli plan to slow Iran’s nuclear weapon development without military strikes. Indications suggest that the U.S. also wanted to use it to slow North Korea’s nuclear weapon program. Ideally, from a U.S.-Israeli perspective, it would accomplish its purpose without Iran or North Korea even realizing that they had been attacked.

An early version of Stuxnet was deployed in 2007, judging from malware discovered in the wild at that time that later was identified as an early version of Stuxnet. This version failed to infect either Iranian or North Korean facilities. It appears, however, that related espionage malware eventually gathered enough intelligence about Iranian operations to facilitate a successful 2009 infection of Iran’s Natanz facility.

Attackers covertly infected five Iranian companies that installed equipment in Natanz. The malware was then carried on infected laptops into the air-gapped facility (a facility not connected to the internet), where it spread to its targets. Additional attacks appear to have used this process again to breach Natanz equipment in 2010, and the spread of these versions beyond their targets onto untargeted devices ultimately led to its discovery in the wild.

North Korean facilities were never breached because North Korea placed much tighter controls on its program than Iran did. Not only are North Korean facilities air-gapped, but all who work in them are banned from computer access outside those facilities. That effectively prevented either espionage or sabotage malware from entering.

How it was designed

Experts believe that Stuxnet required the largest and most expensive development effort in malware history. They agree that only a nation-state would have the resources to produce it, and the safeguards and self-destruct functionalities built into it to prevent it from damaging untargeted networks suggest that its target was extremely specific.

How it infected devices

Unlike most malware, Stuxnet was not designed to spread via the internet. Instead, it traveled via infected thumb drives that contained malicious code. Thus, it was necessary to get the malicious code onto the laptops of personnel who had access to the targeted facility, not an easy task.

The malicious code used several zero-day exploits of Windows Explorer to infect the laptops of targeted personnel. Here again, use of multiple zero-day exploits suggests developers who had extraordinary resources.

Zero-day exploits are extremely rare. They target vulnerabilities that are – at the time of their launch – unknown to the software manufacturer whose software is exploited. Finding a single zero-day exploit is extremely difficult and considered the Holy Grail of hackers. Uncovering multiple zero-day exploits and reserving them for a single piece of malware is unheard of in the hacker community.

When Windows Explorer scanned an infected thumb drive inserted into a USB port, the malicious files would instantly load onto the device. Then, whenever the device connected to a network, those files would upload and spread across it.

The malware was able to bypass antivirus software because it contained one of several valid security certificates that had been stolen from trusted software companies. Certificates like those used are highly secured, making them almost impossible to obtain. This, too, suggests that Stuxnet’s creators had far greater resources than any typical hacker.

What it did with PLCs

Once Stuxnet entered a network, it looked for PLCs. It looked only for Siemens PLCs and only those that possessed two specific blocks of code that were known to be used by PLCs that controlled Iranian uranium enrichment centrifuges. If it didn’t find the PLCs it sought, it would erase itself from the system. In this way, it was designed not to spread beyond its targeted facilities, although, eventually, it did.

Once it found the PLCs it targeted, Stuxnet renamed the library file through which the programmer communicated with them and replaced it with a fake duplicate; the Stuxnet-installed library could then intercept communications and effectively control each PLC. It passed most commands through to the original library file to process as usual, which prevented the operator from detecting anything wrong with the PLC.

The Stuxnet-installed library file did not pass all communication to the original, though. It intercepted 16 key read/write requests and reprogrammed them so they could be used to cause physical damage to the centrifuges.

How it disguised its presence

Stuxnet also used rootkit functions to counteract any attempts to discover or remove it. It monitored the blocks it had altered so, if a programmer requested to see them, Stuxnet could either redirect the request so the programmer saw the unaltered block, or rewrite the altered block on the fly so the programmer would see the PLC’s original code. Stuxnet would then re-infect the block after the programmer left. It also did the same with any changes a programmer would make to key blocks, rewriting them after the programmer left to restore or maintain the malicious code.

If a programmer attempted to detect malicious code by comparing file sizes against the sizes they should be, Stuxnet countered by selectively skipping malicious blocks, so all file sizes would match the programmer’s expected profile. These built-in “safeguards” made Stuxnet extremely difficult to detect or disable.
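Taken together, the interception and the masking describe a man-in-the-middle sitting between the engineering software and the PLC. The Python below is a toy model of that behavior only – the real attack replaced a Windows library used by the Siemens engineering software, and the class names and block IDs here are invented for the sketch:

```python
# Toy model (illustration only) of Stuxnet's library-file man-in-the-middle.
class RealPLCLink:
    """Stands in for the genuine communication library."""
    def __init__(self):
        self.blocks = {}  # block_id -> code actually running on the PLC

    def read(self, block_id):
        return self.blocks.get(block_id)

    def write(self, block_id, code):
        self.blocks[block_id] = code

class InterceptingLink:
    """Forwards most requests, hijacks selected blocks, and hides the change."""
    HIJACKED = {"DB8061", "FC1869"}  # invented stand-ins for targeted blocks

    def __init__(self, real, payload):
        self.real = real
        self.payload = payload    # the malicious replacement code
        self.clean_copies = {}    # what the operator is allowed to see

    def write(self, block_id, code):
        if block_id in self.HIJACKED:
            self.clean_copies[block_id] = code       # remember the legit version
            self.real.write(block_id, self.payload)  # install sabotage code
        else:
            self.real.write(block_id, code)          # pass through untouched

    def read(self, block_id):
        if block_id in self.HIJACKED:
            return self.clean_copies.get(block_id)   # show the unaltered block
        return self.real.read(block_id)

real = RealPLCLink()
link = InterceptingLink(real, payload="sabotage code")
link.write("DB8061", "legitimate block")  # sabotage silently installed
print(real.read("DB8061"))   # 'sabotage code'      -- what the PLC runs
print(link.read("DB8061"))   # 'legitimate block'   -- what the operator sees
```

The essential trick is that every view the operator has of the PLC passes through the compromised layer, so the evidence of tampering is filtered out before anyone can see it.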

Its additional complexities

Stuxnet’s overall complexity also made it hard to combat. It contained three main systems and 15 separate components, each with layer upon layer of encryption and complex interconnections that allowed the malware to decrypt and extract each component only when needed. It even contained an ingenious system that allowed it to carry updates to the malware when infected devices were connected to the internet.

It searched for other versions of Stuxnet on the network on a peer-to-peer basis and determined which version was the newest. The older version would then update itself with the newer one to ensure that all versions in the network were equipped with the latest enhancements.

Through this complicated delivery, the malware developers could tweak more than 400 functions of Stuxnet on devices that were, themselves, not connected to the internet. And once one PLC was updated in this manner, it would spread the instructions to every PLC on its closed network.
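
A rough sketch of that peer-to-peer update logic, with data structures invented for the example (the real mechanism lived inside Stuxnet’s own components, not in Python):

```python
# Sketch: two infected peers compare versions; the older updates itself.
def sync_peers(peer_a: dict, peer_b: dict) -> None:
    """Each peer carries a version number and its module payload.
    After syncing, both carry the newer version."""
    newer = peer_a if peer_a["version"] >= peer_b["version"] else peer_b
    older = peer_b if newer is peer_a else peer_a
    older["version"] = newer["version"]
    older["modules"] = dict(newer["modules"])  # copy the latest components

a = {"version": 3, "modules": {"payload": "v3"}}
b = {"version": 5, "modules": {"payload": "v5"}}
sync_peers(a, b)
print(a["version"])  # 5 -- the older instance has updated itself
```

This is how a single internet-connected machine could propagate an update throughout an otherwise isolated network: each pairwise sync pulls the whole population up to the newest version.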

The malware was highly modular in construction, allowing it to be tailored for almost any purpose by simply reconfiguring modules to fit the desired target. This type of platform was designed to enable rapid development of new targeted tools by simply rearranging or reconfiguring existing modules.

How it damaged its target equipment

The actual attack involved two different routines aimed at damaging centrifuge rotors. One attempted to speed the centrifuges up well above their maximum safe speed for short periods of time, and later to slow them dramatically below their minimum safe speed. The malware generally waited weeks between these cycles of altered behavior to reduce the risk of operators detecting the sabotage. The second routine, an order of magnitude more complex, involved over-pressurizing centrifuges to increase rotor stress over time.

The goal was to exert years’ worth of wear on the centrifuges in a matter of months, thus speeding the failure rate of the equipment to the point where they were failing faster than the Iranians could replace them. It is thought that Stuxnet disabled one-fifth of Natanz centrifuges in a year’s time.
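
Public analyses of Stuxnet (notably Symantec’s dossier) reported approximate drive frequencies for the first routine; the schematic below simply restates that reported cadence and should be read as approximate, not as recovered attack code:

```python
# Schematic timeline of the reported speed-manipulation routine.
NOMINAL_HZ = 1064     # reported normal drive frequency
OVERSPEED_HZ = 1410   # brief burst above the rotors' safe range
UNDERSPEED_HZ = 2     # near-standstill burst weeks later

attack_plan = [
    ("run normally", NOMINAL_HZ, "several weeks"),
    ("overspeed burst", OVERSPEED_HZ, "minutes"),
    ("run normally", NOMINAL_HZ, "several weeks"),
    ("underspeed burst", UNDERSPEED_HZ, "under an hour"),
]

for phase, hz, duration in attack_plan:
    print(f"{phase}: hold {hz} Hz for {duration}")
```

The long dormant stretches are the point: each cycle added rotor stress while keeping the anomalies rare enough that operators saw only an inexplicably high failure rate.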

Stuxnet’s discovery and unraveling

When researchers first detected Stuxnet, they were puzzled. It clearly was the most sophisticated piece of malware they had ever seen, yet it appeared to have only the modest goal of finding and monitoring PLCs, something that offered neither financial incentive nor bragging rights for hackers. Those early researchers were inclined to write the malware off as nothing more than a surprisingly sophisticated tool for low-level industrial espionage.

As researchers traced infections, they found an unexpected pattern. Rather than finding the bulk of infections in highly digitized places like the U.S. and Europe, most infections appeared in Iran, a country that rarely features prominently on malware infection lists. Researchers ultimately traced the infection’s source to the five targeted Iranian organizations.

By this time, researchers realized they were on to something significant, but they couldn’t figure out what it was. Antivirus experts had little experience with ICS security and turned to the ICS security world for help. Even then, progress in figuring out Stuxnet’s purpose was slow. Apparently, the only people who had sufficient expertise in both disciplines to unravel Stuxnet were those who created it. And they were unknown.

Ultimately, the two sets of researchers concluded that Stuxnet was designed not to perform industrial espionage, but industrial sabotage. This was the first malware confirmed to be a true cyberweapon, inflicting physical damage through cyber means.

Attention focused on Iran’s Natanz facility, which had been rumored to have suffered an attack, and in which international inspectors had noticed an unexpectedly high replacement rate of centrifuges. This fulfilled the warnings that cybersecurity experts had been giving for years that even closed systems, like Natanz, were becoming vulnerable to cyberattacks as industries increasingly incorporated cyber-connectivity into their industrial controls.

Experts in ICS security were stunned. The only positive to Stuxnet was the fact that it was extremely selective in delivering its payload, deleting itself from systems it was not designed to target. But if it could be so effective on such specific systems, what implications could it hold for adaptation to a broader range of attacks, especially with its capacity to be readily modified for new targets?

Stuxnet damaged nearly 1,000 centrifuges, but was discovered – ironically, by Western security organizations – long before it could deliver a crippling blow to Iran’s nuclear program. Its use for its original purpose was thwarted; any similar mass malfunction of Iranian uranium enrichment equipment would immediately raise suspicions before an attack could accomplish its goals.

What Stuxnet means in our physical world

The key significance of Stuxnet lies in its targeting of each of the three layers of a cyber-physical system: the cyber layer, used to distribute the malware and identify the targets; the control system layer (PLCs in this case), used to manipulate physical processes; and the physical layer, in which the actual damage was done. Stuxnet became the epitome of a cyber attack that leads to kinetic impacts resulting in physical destruction.

Moreover, it set a precedent and an example for:

  • infecting air-gapped systems (systems not connected to the internet);
  • precisely targeting the systems to be infected;
  • introducing subtle, almost undetectable flaws into physical processes that could be just as damaging as crashing a system, if not more so, while being much harder to detect. Consider the result of subtle defects built into cars or airplanes so that the finished product malfunctions only after it has been placed into service. Similar weaknesses could be built into power grids, or toxic additions could be made to food or water where the danger is not in a single dose but in a cumulative effect over time.

The Stuxnet family

Stuxnet is not the only such sophisticated cyberweapon discovered. Kaspersky Lab found a whole family of them. Its researchers found significant enough similarities between Stuxnet and other malware to believe that they were based on the same platform, which they dubbed Tilded. Their research suggested that this platform produced Stuxnet, another highly advanced malware known as Duqu, and possibly three other pieces of malware that remained undetected as of their assessment in 2011.

Since then, two other pieces of malware have been linked to Stuxnet and Duqu, namely, Flame and Gauss, although it is not clear whether either is among the undetected cyberweapons mentioned by Kaspersky. Whether they are or not, the point remains that the malware based on the Tilded platform are not necessarily the only advanced cyberweapons that were developed – and the existence of such a platform increases the possibility that developers could create more super-cyberweapons.

The use of an established platform aids developers in two crucial components of cyberweapon building: time and secrecy. It reduces the time needed for development, as only the payload needs to change when the weapon is used for a different target instead of developing an entirely new weapon. It also reduces the number of people that need to be involved in developing and approving the new weapon. This offers fewer possibilities for leaks.

Duqu

Duqu is built upon the same modular platform as Stuxnet. Researchers at Kaspersky Lab believe that Duqu was originally a surveillance tool that enabled its developers to copy blueprints from Iran’s nuclear program, providing the information Stuxnet needed to sabotage it. Its code is so similar to Stuxnet’s that some automated detection systems misidentified it as Stuxnet.

Its purpose, however, did not appear to be to take control of PLCs, but espionage. Duqu looked for information useful in attacking ICSes. As far as has been discovered, it has been used both for intelligence and for stealing certificates and their private keys. It operated even more stealthily than Stuxnet, deleting itself from infected systems after 36 days.

Unlike Stuxnet, Duqu infected systems via a phishing email and used only one zero-day vulnerability, one targeting Microsoft Word font handling. Once the Word document was opened, Duqu installed a keylogger to capture keystrokes. It also allowed its operators to download critical files from infected computers, such as security certificates.

A more advanced version of Duqu, dubbed Duqu 2.0, was detected in 2014-2015, suggesting that the team behind Stuxnet remained in operation – at least at that time – and that further attacks using the same basic platform were still possible.

Flame

Flame, which emerged in 2012, used the same zero-day exploits as Stuxnet. Whether it was created by the same team, a separate team with access to the original Stuxnet framework, or a well-financed team reverse engineering Stuxnet is unknown. Flame, like Duqu, was an espionage tool, although it could capture a wider variety of data.

Flame could grab infected computer screen images, emails and user chats, as well as monitor keystrokes and network traffic. It could spread to devices not connected to the internet via Bluetooth connectivity and could also turn on microphones remotely.

It sent the information it gathered to its command and control server in small increments to avoid detection. It also infected machines in a highly atypical way, by posing as a Windows 7 update. It is estimated that only a handful of developers in the world could achieve such a feat, and it most likely took a team of programmers working with a supercomputer to accomplish it.

It, like Stuxnet, appears to have targeted high-ranking Iranian oil ministry officials. It may also have had sabotage capabilities via a command module named Wiper that previously was thought to be an independent computer virus. Wiper was designed to erase the hard drive of targeted computers.

Flame was 20 megabytes in size – 40 times as large as Stuxnet, which was itself unusually large for malware. Yet it was so good at hiding itself that it escaped detection for nearly five years.

Gauss

Flame is believed to have begotten Gauss. Gauss has proven even harder to study than Flame because of its large number of object-oriented structures and advanced encryption. Like Stuxnet, Gauss was designed to be extremely targeted, deploying only on computers that gave it access to Lebanese banking credentials. Its goal remains unknown to the public and its payload has not been decrypted.

Stuxnet as a continuing weapon

Although Stuxnet failed to cripple either the Iranian or North Korean nuclear weapon programs, it appears that what success it had made it a continuing weapon in U.S. diplomacy. As negotiations to limit Iran’s nuclear weapons program dragged on unsuccessfully during Barack Obama’s U.S. presidency, information leaked about an even more advanced cyberweapon, code-named Nitro Zeus, that the U.S. was prepared to unleash on Iran if negotiations failed.

The leaks suggested that Nitro Zeus could enable the U.S to cripple Iran’s air control command, communication network and power grid. The program may have been real, or it may have been nothing more than a well-crafted misinformation campaign to leverage the Iranian government, which already had been embarrassed by Stuxnet, into reaching an agreement with Western nations rather than risking the possibility that Nitro Zeus was real.

Stuxnet’s continuing legacy

Although Stuxnet has long since ceased to function – researchers believe it was programmed to stop in 2012 – its legacy continues. Its threat plays out in how it changed the world.

Blueprints for the proliferation of future cyberweapons

Most likely, those who launched Stuxnet hoped it would be confined to the Natanz facility, but that was not the case. Stuxnet spread beyond its intended target and was discovered on other computers. Safeguards in the software rendered Stuxnet inoperative on systems that did not have the targeted configuration, but its spread points out the unpredictability of cyberweapons.

The unintended release of the code onto the internet allowed other malware developers to capture this malware into which millions of dollars and thousands of hours had been poured. Granted, the four zero-day exploits that Stuxnet used have been patched, but the many targeting and infection techniques developed for this cyberweapon now provide hackers and cyberterrorists with platforms on which they can build the next generation of cyberweapons.

The problem with Stuxnet does not lie so much with hackers copying its code, but in offering them innovative techniques to adapt to their cyberweapons. Defending against specific code is easy for a security expert. Defending against an innovative technique is harder, and Stuxnet is filled with them.

Threats to cyber-physical systems

By targeting cyber-physical systems, Stuxnet revealed their vulnerability and made them an inviting target. Recall that one of the things that hampered early research on Stuxnet was the fact that those who specialized in antivirus protection and those who specialized in ICS security had almost no experience in the other field. While that situation is no longer as extreme, cyber-kinetic attacks still represent a threat that requires skill in two disciplines.

Furthermore, cyber-kinetic attacks, when used for maximum disruption, could be crippling. Remember the early days of the internet, when many users were wary of making purchases online? Commerce is now so dependent on the internet that retail stores are closing at record rates.

Imagine, now, how it would affect the global economy if consumers suddenly went back to that level of wariness, but now with a large chunk of brick-and-mortar stores gone, and the remaining ones highly dependent on ecommerce. Imagine also the chaos if essential services, like utilities, were suddenly no longer able to be taken for granted.

The legitimization of cyberwarfare

The use of Stuxnet to launch cyber-kinetic attacks on a nation’s enemies has legitimized them as a weapon. Just as U.S. use of nuclear weapons on Japan in World War II led to a global race to obtain and perfect nuclear weapons, today’s nations are believed to be pursuing similar cyberweapon capability.

The 2017 WannaCry ransomware attacks, now tied to North Korea, are an example of a nation pursuing such capability as it tries to disrupt a global community that has united to curb its nuclear aspirations. Would other nations have ignored the tantalizing promise of attacking enemies from the comfort of their own capitals had the U.S. and Israel not developed Stuxnet? Probably not. But the success Stuxnet demonstrated encouraged other nations to follow its lead.

Known attacks on cyber-physical systems

Although examples of cyberwarfare attacks are not yet commonly known, some have been detected. North Korea’s WannaCry attack stands out. While it largely affected information systems rather than cyber-physical ones, physical hospital equipment in some locations was affected, causing delays or cancellations of medical procedures.

Most other known cyberattacks were less successful. In 2016, the U.S. formally charged seven Iranian nationals believed to be working on behalf of the Iranian government with cybercrimes for cyberattacks on 46 financial institutions and financial sector companies. Notably, they were also accused of a failed attempt to compromise the Bowman Avenue Dam in Rye Brook, New York, between 2011 and 2012.

Leading us into uncharted territory

Conventional weapon attacks contain a clear expectation of how the recipient will respond. Cyberweapons do not yet have such a clear escalation ladder. How an adversary will respond is unknown, making any cyberattack fraught with uncertainty.

Attempts have been made to define and codify what constitutes an act of war in a digital environment and what responses are appropriate. The Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations seeks to define how international law applies to cyberspace, but it is a non-binding study – its provisions restrain neither states that reject them nor cybercriminals and terrorists. In addition, even its supporters differ in their interpretations of its provisions. The threat remains real, but not clearly defined.

Takeaways

While Stuxnet – at least in the form that was discovered and publicized – has long disappeared from the scene, it has forever changed our world. It brought a recognition of how cyber-physical systems can be targeted to inflict physical damage. It blurred the lines between cybersecurity and industrial security and left experts in both fields scrambling to address this convergence of their disciplines.

It made some of the most advanced strategies and techniques for infiltrating the most secure systems available to whoever wants them. It threw open the doors to the possibility of cyberwarfare and left us to sort through the details of what widespread cyberwarfare might entail, so we can at least try to proactively apply some limits to its possible effects.

Pandora’s box has been opened. The world has seen the harm that can be done by those intent on inflicting it through cyber-kinetic attacks. Developing Stuxnet called for deep pockets and the talents of some of the world’s best minds. Dare we put anything less toward securing the cyber-physical systems that Stuxnet has exposed?

How Telecom Operators Can Strike Back with IoT, Fog & Cyber

fog iot security telcos

Telecom operators sat back as the new over-the-top (OTT) service providers, internet and tech companies slowly ate away at their business, particularly in the B2C space. A combination of institutional laziness and poor execution on promising initiatives gave these new entrants the time to jump in and snatch away customers. The future doesn’t look too bright either, with a worldwide CAGR put at 0.7 percent through 2020.

For the time being, wooing back B2C customers is a losing battle. While OTTs ride on telecom operators’ networks to deliver their services, the operators can’t muscle out the competition, since public support for net neutrality is remarkably strong.

Fortunately for them, it’s not an entirely hopeless situation. Telecom operators have an opportunity to rethink their business model, transform their organization, and execute on competitive ideas. This is especially true in the B2B space where telecom operators can use their existing infrastructure to offer premium network solutions for large enterprises, particularly when it comes to the Internet of Things (IoT).

The Exploding IoT Market Offers Huge Opportunities for Telcos

Connecting millions of devices across long distances is a telecom company’s reason for being. Not only do they have an enormous amount of experience connecting devices, they understand the security, privacy, and customer service components of connectivity as well.

The growing Internet of Things (soon to become the Internet of Everything), enabled by upcoming 5G, presents an exciting web of connected devices. But as Bain Insights explains in Forbes, we’re so busy talking about all these cool devices that we haven’t spent enough time thinking about how exactly we’re going to manage, protect, and store the immense data they produce. According to Bain, telecom companies are uniquely positioned to assist businesses with:

  • Connectivity and Compliance: Telecom companies have experience and systems in place for keeping devices connected while also complying with the regulatory requirements of multiple governments.
  • Life Cycle Management: Think of how effortlessly we set up and upgrade our smartphones. Telecom companies have experience with managing products from beginning to end, and this experience will be essential for managing and upgrading the tremendous number of connected devices in the IoT.
  • Vertical Platforms: Telecom operators can offer platforms and turnkey solutions for companies that need help with device management, data storage, and data analysis.

This entire project of turning the IoT into long-term business growth for telecom operators rests on one big factor: security.

Why Telcos Will Profit From Aligning IoT Security Strategies with IoT Market Ambitions

With great connectivity comes great responsibility.

For all its exciting applications, the IoT poses many security challenges, privacy challenges, and even safety challenges. There are significantly more devices to protect beyond our computers and smartphones. If manufacturers aren’t diligent about updates, users will fall victim to hackers who discover and exploit vulnerabilities. On the privacy side, companies will be able to collect even more data on their customers and employees, and without proper oversight these individuals will have little to no say on how that data is used.

What all of this means is that there will be a need for IoT services that are tied to IoT security solutions.

IoT Business News argues that while prioritizing IoT security will be a market differentiator for telecom operators, it won’t amount to a significant revenue stream.

This isn’t entirely true. As we discussed earlier, the key qualifier for telecom operators to dominate the IoT market comes from their experience with network management and security. IoT security capabilities should be more than just a differentiating factor. They have to be central to their service offering.

To successfully offer those services we discussed earlier – connectivity and compliance, lifecycle management, vertical platforms – IoT security solutions must be an integral component. Moreover, telecom operators who prioritize security will be able to build off of this to develop even more revenue streams.

Real-time analytics is one of the more compelling value propositions of the Internet of Things. When it comes to problem solving, sooner is better than later. But often, no one notices a problem until much later, when money has already been spent or the problem has worsened.

The IoT and (close to) real-time analytics go hand in hand. Data collected by this multitude of devices can be analyzed quickly and fed back for automated action, or passed to humans who can make decisions with seconds-old information.

Offering the infrastructure and services to make such fast analytics possible and secure is one huge opportunity for telecom operators. That said, close-to-real-time analytics from the myriad devices spread out over wide geographical areas will require fog computing capabilities.

This is another way that telecom operators are poised to profit off of IoT security solutions: They already have the infrastructure in place to give them a head start on becoming the leading facilitators of fog computing.

Fog Computing Presents a Clear Way Forward For Telecom Companies Pursuing IoT

At present, IoT devices often rely on a centralized, cloud-based system. That’s all well and good for now, but as IoT moves toward closer-to-real-time analytics and insight on data generated by widely distributed objects, there needs to be computing power closer to where those objects are (aka “the edge”).

This is where fog computing – also known as edge computing – comes in. While cloud computing works well in most situations, it isn’t as effective in scenarios where time is of the essence, such as autonomous vehicles. There are also the security implications of transmitting large amounts of data, not to mention the complicated regulatory considerations of data generated in one jurisdiction being analyzed in another.

We’re stuck then with IoT devices that aren’t powerful enough and cloud servers that are certainly powerful, but too far away. This is where fog computing offers a middle ground by mimicking cloud capabilities.
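To make that middle ground concrete, here is a minimal sketch of the fog pattern in Python. Everything in it – the threshold, the payload fields, the send_to_cloud stub – is hypothetical; it only illustrates how a node near the devices can reduce a high-frequency sensor stream to compact summaries and immediate alerts, so only a fraction of the raw data ever crosses the WAN to a distant cloud.

```python
# Hypothetical fog-node sketch: summarize locally, forward only compact results.
import statistics

ALERT_THRESHOLD = 80.0  # illustrative limit for whatever metric the devices report

def send_to_cloud(payload: dict) -> None:
    """Stand-in for the real uplink (MQTT, HTTPS, etc.)."""
    print("uplink:", payload)

def fog_aggregate(readings: list[float], window_id: int) -> None:
    # Local, low-latency decision: flag out-of-range values immediately.
    alerts = [r for r in readings if r > ALERT_THRESHOLD]
    # Ship a summary upstream instead of the raw high-frequency stream.
    send_to_cloud({
        "window": window_id,
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "alerts": alerts,
    })

fog_aggregate([71.2, 69.8, 84.5, 70.1], window_id=1)
```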

So, where do telecom companies come in?

Presently, traditional cloud computing providers like Amazon, Google, Microsoft, or IBM deliver their famous services from large data centers, which means it will take them time to develop the infrastructure needed to offer fog computing services.

On the other hand, telecom operators have networks that more closely echo this need for “small data centers” that exist at the level of a city block. They can repurpose their existing properties and infrastructure to meet what will be a growing demand for computing at the edge.

Telecom Operators Can Take a Bite Out of the IoT Market — They Just Need The Teeth

Telecom operators are close enough to take a bite out of the IoT market. What they’re missing are the teeth and the determination. While the legacy tools and procedures exist to allow telecom operators to be the leading providers of IoT services and security solutions, it will take an organizational shift to make such a strategy successful. By doing this, telecom operators will not only make money, they’ll secure their future in the presence of ambitious OTT competitors.

Navigating a Safe Course Through Maritime Cyberattacks

Maritime Cybersecurity

The open seas have long attracted those who yearned for adventure. The risk of pitting oneself against a vast and unforgiving sea has tested sailors’ mettle for millennia. It’s not surprising that the maritime industry is one that thrives on facing – and overcoming – risks. But, as technology increasingly dominates it, growing risks exist that the industry dare not ignore.

Its growing effort to increase efficiencies through digitization and automation has made it an inviting target for 21st century pirates whose weapons are not cutlasses, but computers. Vulnerabilities in maritime systems and security practices threaten to inflict huge losses on the industry as digitization increases.

This article looks at the current state of digitization in the maritime industry and emerging industry trends. In addition, it assesses current vulnerabilities, how they are being exploited and how maritime firms can combat them.

Increase in digitization

Ships are increasingly digitizing their operations. Digitization already has reduced the number of crewmen and the time is fast approaching when ships will be autonomous, requiring only a small, land-based operations center to handle unusual situations.

Unmanned utility vessels

The Hrönn, an unmanned utility vessel designed to service offshore oil rigs, deliver cargo to remote locations and launch and retrieve unmanned submersible crafts, is expected to be fully functional sometime in 2018.

The technology for such vessels is well ahead of maritime regulations; the main issues delaying their widespread use are regulatory rather than technological. Many ports require a licensed captain to pilot any vessel within them, and regulators fear that unmanned vessels would be prone to breakdowns that could clog shipping lanes.

Unmanned cargo ships

At least one autonomous container ship may be on the seas by 2020, although on a route limited to three Norwegian ports. The YARA Birkeland aims to be the first autonomous, electric container ship on the seas by that date.

It will be a 120 TEU (Twenty-foot Equivalent Units) open-top unmanned container carrier. Loading and unloading, too, will use automated electric cranes. Berthing and unberthing will not require human intervention, either. Three control centers will handle monitoring and exception processing of ship operations, as well as any emergency or safety issues.

The limited route is due to current international shipping regulations, which do not allow unmanned ships to cross oceans. The UN’s International Maritime Organization is expected to permit such crossings soon, though, because autonomous shipping promises cheaper options, fewer accidents and fewer delays.

In anticipation of such a decision, Japanese shipping companies plan a fleet of 250 autonomous ships by 2025. Rolls Royce, meanwhile, pursues a target date of 2020 for its first remote-controlled, unmanned ships. Such ships would use AI to plot the fastest, most direct and most fuel-efficient routes, while also monitoring to avoid collisions and to diagnose and prevent equipment breakdowns before they happen.

Early versions of these ships are expected to carry a small crew to oversee operations. Eventually, though, they will be completely autonomous, with a land-based “captain” monitoring ships at sea and stepping in only where critical decisions are required.

The future of autonomous ships

McKinsey sees digitization of the shipping industry completely transforming it, with autonomous ships becoming the norm. This would increase the volume of container trade as much as fivefold.

They envision that, over the next 50 years, digitization will make the shipping process far more integrated, forced by the need to roll out technological advancements across entire value chains to achieve optimal results. Logistics will shift from a fragmented system, with many players involved in moving shipments from one carrier to another, to an end-to-end system in which a fully integrated, digitized process controls the entire chain.

Increase in vulnerabilities

Such developments, however, are likely to increase current vulnerabilities exponentially if maritime organizations do not deal with them. Michael Mullen – retired U.S. Navy Admiral and former Chairman of the Joint Chiefs of Staff – warns, “We are vulnerable in the military and in our governments, but I think we’re most vulnerable to cyberattacks commercially. This challenge is going to significantly increase. It’s not going to go away.”

Vulnerabilities in critical shipboard systems

Three critical systems are especially vulnerable: the Global Navigation Satellite System (GNSS) – such as GPS, GLONASS, Galileo or BeiDou; the Electronic Chart Display & Information System (ECDIS); and the Automatic Identification System (AIS). The GNSS identifies the vessel’s exact location, the ECDIS provides digital charts of ocean routes, and the AIS monitors surrounding traffic while continuously broadcasting the ship’s own location to help avoid collisions.

GNSS signals can be spoofed to trick the crew into changing course. ECDIS, a mandatory system for all vessels engaged in international voyages, can be fed inaccurate data to, again, trick the crew into changing course, or it can be compromised to enable attackers to set the ship on a new course – sometimes while feeding false data to the crew to make them think they remain on their original one. Ongoing AIS transmissions can be intercepted and modified so that other ships or monitoring stations receive inaccurate information about the ship’s location, movements, identity or other details.

Someone with physical access to these systems can feed inaccurate data into them via a USB data port, or load malware or ransomware onto them. Because many vessels do not properly segment their systems, that can lead to infection of other systems, such as propulsion or power, as well.

The transmission of sensitive information for the use of mainland stakeholders also creates unintended risks, and stands to create even greater vulnerabilities as vessels become more autonomous. Marine vessel tracking websites publish vessel location information on the internet, making it easily available not just to legitimate stakeholders, but also to those with malicious intent.

Transmission over radio frequencies enables nearby parties who have an RF receiver to listen to the messages. These messages contain no authentication or integrity checks. This vulnerability offers a wide variety of ways to compromise either ship-to-shore data or to send fake data to the ship. In addition, pirates have been known to use transmissions to help them determine the locations of the most profitable cargoes.
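The integrity gap is visible in the message format itself. The NMEA 0183 framing that carries AIS sentences ends in a checksum that is just a single XOR over the characters of the sentence body – enough to catch radio noise, but trivially recomputed by anyone forging a message. A simplified sketch (not a full AIS decoder) makes the point:

```python
# The NMEA 0183 checksum on AIS sentences: an XOR of every character between
# the leading '!' (or '$') and the '*'. It detects transmission errors but is
# trivially recomputed by a forger -- error detection, not authentication.
from functools import reduce

def nmea_checksum(body: str) -> str:
    return format(reduce(lambda acc, ch: acc ^ ord(ch), body, 0), "02X")

def with_checksum(body: str) -> str:
    """Build a sentence every receiver will accept -- for ANY body."""
    return f"!{body}*{nmea_checksum(body)}"

# An attacker fabricating a position report simply stamps a fresh checksum
# (the payload field here is a placeholder, not a decoded AIS message):
print(with_checksum("AIVDM,1,1,,A,<fabricated-position-payload>,0"))
```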

Satellite communications offer vulnerabilities, too. Considering how expensive satellite time is, and the growing demand for internet connectivity at sea, multiple solutions have emerged to compress data and reduce the amount of satellite time used by vessels. Unfortunately, some of these solutions – especially older, outdated ones that are no longer supported by the companies that developed them – contain vulnerabilities that can allow an unauthorized user to compromise ship systems.

Numerous incidents of compromise in these critical systems have occurred. Such incidents, however, have not instigated widespread security improvements. Most of the underlying vulnerabilities have been left unpatched.

Many vessels look chiefly to the expertise of their crews to counteract possible system compromises. Traditional, manual methods of checking physical charts are still used in addition to digital systems. Security audits of systems are performed and contingency plans are in place. But many of these positive safety practices rely on human presence. As vessels increasingly move toward autonomous operation, those current safeguards will decrease. Thus, further work must be done to close digital vulnerabilities.

Vulnerabilities in other shipboard systems

These critical systems are not the only onboard systems that are vulnerable, either. Others include:

  • Cargo management systems – These digital systems often have components that allow onshore tracking via the internet. Such interfaces put cargo management and ship manifest data at risk of being compromised.
  • Bridge systems – The increased use of digital navigation systems that interface with shoreside facilities creates vulnerabilities. Even standalone digital navigation systems are at risk: they can be compromised through malware loaded – intentionally or unknowingly – through the USB ports used to update navigation data. Compromise can lead to outside manipulation of these systems, or to complete failure of all navigation control.
  • Propulsion and machinery management and power control systems – Digitization of these systems to allow remote monitoring and control can similarly facilitate the introduction of false data, or outright seizure of the systems. When integrated with bridge systems, they can also serve as an avenue by which hackers compromise those systems.
  • Communication systems – As mentioned earlier, satellite and internet communications are vulnerable and need to be secured to a greater degree than most provider-supplied security features offer.
  • Access control systems – Digital systems that control surveillance, security alarms and electronic “person-on-board” systems can also be used for cyberattacks and should be segmented apart from critical systems.
  • All other internet-connected interfaces – Basically, anything that provides internet service for crew or passengers should be considered an unsecured system and should be segmented apart from critical systems.

Vulnerabilities in shipboard security practices

Common security practices also provide reason for concern. Security within networks is often ignored, based on the false assumption that all that needs to be protected is the system perimeter. Once inside a network, though, intruders are usually free to map and exploit it without detection.

The maritime industry is particularly prone to penetration through known vulnerabilities, such as the SATCOM vulnerabilities mentioned previously. Because the industry’s computer assets are widely dispersed – sailing in remote locations with crews trained mainly in other skills – patching known vulnerabilities is often neglected. As a result, 99% of incursions into maritime systems come through unpatched known vulnerabilities.

Another problem is the tendency to operate with default passwords, giving the entire crew unlimited access to all digital systems, rather than limiting users to no more access than their job requires. A compromised account that does not have admin privileges compromises only that account. A compromised account with admin privileges compromises the entire system.
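A minimal sketch of the alternative (the role names and permissions here are hypothetical, purely for illustration): scope each account to the permissions its job requires, so a compromised low-privilege login has a contained blast radius.

```python
# Hypothetical least-privilege sketch: per-role permissions instead of one
# shared admin password. A stolen "deckhand" login exposes only what that
# role can do; it does not hand over chart updates or system configuration.
ROLE_PERMISSIONS = {
    "deckhand":  {"view_status"},
    "navigator": {"view_status", "update_charts"},
    "admin":     {"view_status", "update_charts", "configure_system"},
}

def authorize(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("navigator", "update_charts")
assert not authorize("deckhand", "configure_system")  # contained blast radius
```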

As current technologies have developed, maritime companies have tended to network information technology (IT) and operations technology (OT) systems onboard ships and connect them to the internet for easier access by stakeholders. This, however, increases vulnerability and makes vessels a more inviting target for attackers.

The reality of maritime cyberattacks

Maritime companies often downplay the threat of cyberattacks. They rationalize that cybercriminals are more interested in the cash assets of banks or other financial targets than in cargoes that are both difficult to convert into ready cash and situated in remote locations. That rationalization, though, ignores the huge concentration of value in shipping and the large sums of money exchanged between shipping lines and bunker suppliers or shipyards, not to mention the sums paid to the shipping companies for their services.

Adding to the attractiveness of the shipping industry to attackers are the facts that vessels are isolated from potential sources of help and that cybersecurity in the maritime industry is more immature than in the financial industry. The target thus drawn on the industry should move its major players to action. Not only are financial risks high, but maritime cyberattacks could put lives and the environment at risk, too.

Motivations for maritime cyberattacks

The shipping industry is highly competitive with significant value to each company’s private information. Criminals may want to steal or ransom sensitive information a vessel carries. They may want to take control of ship operations and demand ransom in return for releasing control back to the company. Criminals may want to falsify shipping information to enable them to covertly use vessels to ship their own, illegal goods into other countries. Or they may be gathering intelligence to help them commit some complex criminal scheme.

Add to that threats that have nothing to do with financial gain. They can come from disgruntled employees or activists motivated by the desire to damage the reputation of the organization or industry, or disrupt operations. Depending on the flag under which a ship sails, it may also be the target of hostile nations or terrorist groups seeking political gain, or disruption of economic trade. The shipping industry has no lack of people motivated to attack it.

Examples of maritime cyberattacks

In August 2011, the Islamic Republic of Iran Shipping Lines (IRISL) experienced a devastating cyberattack on its data that left it unable to determine which containers were aboard which vessels, or where they were supposed to go. In addition, its internal communication network was inoperative, resulting in weeks of chaos and severe financial loss.

Another massive cyberattack was the NotPetya ransomware that paralyzed A.P. Moeller-Maersk, the largest shipping firm in the world, for weeks in mid-2017. The company estimates their loss at more than $200 million.

Organized crime groups have moved into cyberattacks against shipping companies, too. They hacked into cargo systems in the Netherlands and Australia, to name just two such attacks, to keep track of containers in which they were smuggling illicit goods. Their access to shipping information enabled them to pick up the containers without interference from law enforcement.

Fraud offers cybercriminals another route to financial gain. World Fuel Services (WFS) was victimized by cybercriminals who forged a fake fuel supply tender that claimed to be from the U.S. Defense Logistics Agency. WFS delivered 17,000 metric tons of marine gas oil to a tanker off the Ivory Coast. When WFS presented its invoice to the U.S. agency, it discovered that the agency had never placed the order. WFS was out the $18 million it paid for the fuel. Losses from fraudulent schemes like this run into hundreds of millions of dollars across the shipping industry.

Another commonly used scheme involves cybercriminals inserting themselves into communications between two companies. Once they accomplish this, they can easily redirect funds exchanged between the companies into the criminals’ accounts.

Shipping companies also are vulnerable to attacks geared more toward industrial espionage than direct financial gain. Malware such as Zombie Zero installed on hardware scanners used in shipping infected the financial, customer and planning systems of at least nine shipping companies in 2014. Similarly, Icefog malware targeted Japanese and South Korean shipping companies in 2013 to gather sensitive data in a hit-and-run attack. Such malware makes it possible for attackers not only to see sensitive data, but also to modify it so they could make packages appear and disappear at the attacker’s will.

Maritime cyberattacks have been directed at offshore facilities, too. Cyberattacks on oil rigs moved or tilted them, requiring that they be shut down until control was restored. Such attacks have taken up to 19 days and cost millions of dollars to overcome.

Reducing vulnerabilities

The first step in improving maritime cybersecurity is to apply the same principles to shipboard systems as are recommended for other cyber-physical systems. This involves applying both defense in depth and defense in breadth to both IT and OT systems.

Defense-in-depth refers to segmenting the most critical systems, so they are protected by multiple, independent, redundant security layers. Unlike the practice often followed on vessels today, this segmentation ensures that no single layer of the security architecture is relied upon solely. The more critical a system is, the more levels of security protect it.

Thus, notoriously insecure functions such as crew internet connectivity would be protected, but reside in the least protected level. Navigation and other systems that are not essential to onboard safety, but nonetheless important to ship functioning, would reside at a deeper level, making it necessary for attackers to penetrate two levels to compromise them. The systems that most affect safety – power, propulsion and automation – would be protected by a third level, making them the most difficult to breach.

Defense-in-breadth, on the other hand, incorporates multiple security defenses within each level. That means not only having strong firewalls, but also providing protection measures at each point of integration between systems. This protects against attackers using one system to circumvent protections in place on other systems.
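As a rough illustration of the layering (the zone names and the one-step rule are simplified assumptions, not a maritime standard), the sketch below lets traffic stay at its own level or cross exactly one layer deeper through a controlled gateway, but never jump from the crew internet straight to propulsion:

```python
# Hypothetical zoning sketch for defense-in-depth: each zone has a trust level,
# and a flow may stay at its level or step one level deeper via an inspected
# gateway. Real deployments add per-interface controls (defense-in-breadth)
# at every allowed crossing.
ZONE_LEVEL = {
    "crew_internet": 0,   # least protected
    "navigation":    1,   # important, but not safety-critical
    "propulsion":    2,   # most protected
}

def flow_allowed(src: str, dst: str) -> bool:
    # Same level, or exactly one layer deeper; anything that skips a layer is blocked.
    return ZONE_LEVEL[dst] - ZONE_LEVEL[src] in (0, 1)

assert flow_allowed("navigation", "propulsion")          # one step, via gateway
assert not flow_allowed("crew_internet", "propulsion")   # would skip a layer
```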

Doing both provides a good start on closing vulnerabilities. The following are also recommended:

  • Replace obsolete operating systems no longer supported by their developer. Unsupported systems do not receive patches when new vulnerabilities are discovered.
  • Replace outdated antivirus software.
  • Make sure antivirus software also protects from malware.
  • Identify the amount of access each person who will use a system needs and assign them only that much, instead of assigning everyone admin privileges.
  • Upgrade boundary protections for networks.
  • Segment systems for defense-in-depth.
  • Identify which systems, equipment and cargo possess potential attack vectors, and ensure that each one is adequately protected from incursions (defense-in-breadth).
  • Avoid leaving shoreside remote access to critical systems always connected. Such functions should be active only when needed.
  • Determine appropriate access control for third parties (contractors, service providers).

In addition to these, make sure that decision-making about security rests high enough in the organization to adequately balance risk and reward. When security decisions are made at an IT-department or individual-ship level, initiatives that come from higher up in the organization take priority and security shifts to the back burner. That is less likely to happen when security is addressed at a higher organizational level. Only then is security adequately addressed, instead of becoming a task that competes for time and resources with other management goals.

Facing a digital future

Digitization will continue to grow in the maritime industry and, with it, the threat of cyberattacks and cyber-kinetic attacks. The industry’s historic willingness to accept the risks that the open seas offer and meet them head-on when they occur should not also be its approach to cybersecurity.

The stakes are high, with attackers employing increasingly ingenious strategies to achieve massive paydays from the vessels – and their companies – that leave unneeded vulnerabilities open to them. And not only are massive amounts of money at stake, but also people’s lives and well-being. As digitization of the maritime industry grows, attention to cybersecurity must grow with it.

Our Smart Future and the Threat of Cyber-Kinetic Attacks

Cyber-Kinetic Threat

A growing number of today’s entertainment options show protagonists battling cyberattacks that target the systems at the heart of our critical infrastructure – systems whose failure would cripple modern society. It’s easy to watch such shows and pass off their plots as something that could never happen. The chilling reality is that those plots are often based on real cyber-kinetic threats that either have already happened, are already possible, or are dangerously close to becoming reality.

Cyberattacks occur daily around the world. Only when one achieves sufficient scope to grab the attention of the news media – such as the WannaCry ransomware attacks of early 2017 – does the public get a brief glimpse of how widespread vulnerabilities are. Those of us who are actively involved in strengthening cybersecurity see the full scope of the problem every day.

Our modern world of cyber-physical systems and cyber-kinetic threat

Our lives increasingly revolve around Cyber-Physical Systems (CPSes). That term goes much deeper than you might think. It’s not simply a matter of computers controlling large mechanical systems as is the case with the Industrial Control Systems (ICS). Today’s CPSes, such as the Internet of Things (IoT), integrate computational devices into an increasing range of everyday physical objects and even biological systems.

Picture the power plant or water plant that provides your electricity or water. Those systems have single-purpose computers embedded with each switch or valve. Each computer monitors system conditions and determines whether to open or close that switch or valve to keep its part of the system running optimally.

They monitor and control systems at a level that humans would find too granular and too tedious to warrant their undivided attention. They also send a constant stream of data upward in the system to provide actionable information to more complex computers that control larger parts of the process.
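A toy sketch of that embedded pattern is below. The function names stand in for real hardware and network interfaces – none of this is any vendor’s actual controller code – but it shows the shape of the loop: read one sensor, actuate one valve, report telemetry upward.

```python
# Hypothetical embedded control step: one sensor, one valve, one upward report.
TARGET_PRESSURE = 50.0  # illustrative setpoint, arbitrary units

def control_step(read_pressure, set_valve, report) -> None:
    pressure = read_pressure()
    valve_open = pressure > TARGET_PRESSURE
    # The granular, tedious decision no human would want to babysit:
    set_valve(valve_open)
    # The constant upward data stream for the supervisory computers:
    report({"pressure": pressure, "valve_open": valve_open})

# Demo with stand-in hardware and network hooks:
control_step(
    read_pressure=lambda: 53.2,
    set_valve=lambda is_open: print("valve open:", is_open),
    report=lambda telemetry: print("telemetry:", telemetry),
)
```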

Or, let’s bring this closer to home. Let’s say you have a pacemaker or heart monitor or insulin pump to make up for the shortcomings of your heart or pancreas. In such a case, your body has become part of a CPS, with a mechanical device, guided by an embedded device, monitoring and automatically compensating for your organs’ limitations.

Here, too, the internal components are part of a larger system. They report their data to systems controlled by your doctor, who can monitor your condition remotely and adjust your devices if needed.

CPSes are increasingly prevalent in all aspects of modern life. If you drive a car with the latest safety features, they monitor traffic and apply the brakes if they detect a possible collision. They control the way your appliances operate. They work behind the scenes of your city’s traffic system to monitor traffic flow and time traffic lights to minimize gridlock. They operate in virtually every aspect of your life – often without you even realizing it.

Cyber-kinetic threat

With the spread of connected devices through all aspects of daily life comes increased vulnerability. These devices are designed to communicate and, as such, can potentially be compromised through cyber-kinetic attacks.

Such cyber-initiated attacks have already caused physical damage to power plants, gas pipelines, water facilities, emergency notification systems, apartment buildings, transit systems, factories and more. Researchers, including my own teams, have also demonstrated the potential for determined hackers to hack into the systems of – and even take limited control of – the more recent models of cyber-enhanced automobiles, drones, or digital railways.

Why cyber-physical systems are vulnerable

With the growing move toward connecting more and more formerly standalone pieces of equipment to cyberspace, that equipment has become vulnerable. The motivation for connecting them is sound – cyber-enabling equipment and devices helps them work together more efficiently, gathers more relevant data about their interactions and expands their potential functionality.

That ability to communicate, however – if left unprotected – provides a potential entry point for unauthorized parties to hijack the device.

  • The Stuxnet worm destroyed uranium enrichment centrifuges at Iran’s Natanz nuclear facility.
  • Security flaws in consumer electronics devices enabled the 2016 attack on major U.S. websites that was dubbed “the attack that brought down the internet” even if it was only for a day.
  • A bored Polish teenager took control of a city’s tram system in 2008 and carelessly rerouted trams into crashes that caused multiple rider injuries.
  • An Australian wastewater engineer took remote control of parts of the wastewater equipment of the town that refused to hire him and, over a period of weeks in 2000, released hundreds of thousands of liters of raw sewage into lakes and rivers throughout the town before his involvement was discovered.
  • Multiple hospitals had to shut down critical equipment or postpone operations not only during the WannaCry ransomware attack, but also in scattered ransomware attacks in the months that preceded it.

These are just a small sample of documented cyber-kinetic attacks. I’ve been tracking many more of the key historic cyber-kinetic incidents and attacks here. You would think that such incidents would motivate improved security for this ever-expanding web of interconnectedness, but that has not been the case.

Wishful thinking and denial

Connecting every key component of a particular physical process to computer monitoring and control offers greater efficiencies for the process. Making that data available on an open network offers those controlling the process the convenience of having the data they need at their fingertips no matter where in the world they are. Motives are the same whether the physical process is a manufacturing process, temperature measurement and control, a chemical process, traffic control, adjustment of abnormal heart rhythms, or a myriad of other options.

Thus, use cases consistently favor connecting more devices and increasing accessibility. Building more comprehensive cyber-connections becomes the chief priority and security is overlooked.

As physical processes are increasingly being monitored or controlled by embedded computational devices, those physical processes become hackable in the same way as the embedded devices controlling them.

Security of such CPSes is often considered to be effectively covered merely by tossing out the industry adage “security by obscurity.” The term implies that the system’s design is sufficiently different from other companies’ systems that no hacker would want to spend time figuring out how to compromise it.

The fact that security systems of multiple factories, utilities, smart buildings, connected vehicles and even nuclear power plants have been breached demonstrates that adage to be wishful thinking.

Determined hackers have shown a willingness to attack any system in which they can find a vulnerability. In fact, when we assess the security of industrial operations, we rarely find a system that hackers have not already infected with some type of malware or backdoor that they could use at any time to inflict further damage.

A similar form of denial applies to the health-preserving technologies described earlier – the implanted medical devices like pacemakers, defibrillators, heart monitors and insulin pumps. Here, too, use cases encourage connecting them to the cyberworld. What could be better than having such devices feed ongoing data to medical personnel and alert them of problems before those problems become serious?

Such devices undergo strenuous testing to ensure that they function as designed. That, however, is as far as testing goes. Device testing does not take into consideration the possibility of some third party gaining access to a device and causing it to malfunction.

To date, no case has been documented of such sabotage. That, however, doesn’t prove that such sabotage has never happened. Unfortunately, if such sabotage ever occurred, it would be almost impossible to identify that it was sabotage instead of a simple device malfunction.

Yet this vulnerability was considered a real enough threat that when then-U.S. Vice President Dick Cheney had a defibrillator implanted in his chest in 2007, his doctors disabled its remote functionality as a precaution against a potential assassination attempt. Despite this awareness a decade ago, testing the cybersecurity of implanted devices today remains overlooked by most medical device manufacturers.

The challenges of securing critical systems

Outright failure to test security of connected devices is not the only problem. Providing security for CPSes is far more complex than providing security for traditional, information-only systems. If something goes wrong when testing security of an information-only system, the worst that happens is that people lose access to the system’s data until the problem is fixed. But when systems control functions that could mean life or death for people, even a brief failure could be catastrophic.

Past cybersecurity attention focused primarily on three aspects: maintaining data confidentiality, integrity and availability, with the strongest focus on confidentiality. Connecting devices that control aspects of our physical world to cyberspace requires that greater focus land on integrity and availability.

When dealing with systems that affect our physical world, keeping outsiders from discovering what data these devices are processing is far less important than keeping outsiders from changing the data to make the system err in what it does or, even more important, keeping outsiders from blocking data so the system completely fails to provide its essential services.

Connecting critical physical systems also adds more elements to this traditional three-element paradigm of security concerns. Control of the system is not an issue when it comes to traditional information systems: outsiders gain no benefit from wresting control of the system away from its administrators. Leaving vulnerabilities that allow outsiders to take control of a connected vehicle or an implanted medical device, on the other hand, could be fatal.

Similarly, with a traditional information system, the introduction of fake data may be a minor inconvenience to the authorized users. But fake information that says that the water pressure on a dam is much less than it really is could cause the system not to take the proper action, putting the dam at risk of collapse.
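One standard countermeasure is to authenticate readings at the source, so a forged “pressure is low” value fails verification before it ever reaches control logic. A minimal sketch follows; the key handling and message format are deliberately simplified assumptions, not any dam-control standard.

```python
# Minimal sketch: sensor readings carry an HMAC computed with a key shared
# between the sensor and the control system. An attacker injecting fake
# values without the key produces messages that fail verification.
import hashlib
import hmac

SECRET_KEY = b"per-device-key"  # in practice: provisioned per device and rotated

def sign_reading(reading: str) -> tuple[str, str]:
    tag = hmac.new(SECRET_KEY, reading.encode(), hashlib.sha256).hexdigest()
    return reading, tag

def verify_reading(reading: str, tag: str) -> bool:
    expected = hmac.new(SECRET_KEY, reading.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

reading, tag = sign_reading("pressure=87.4")
print(verify_reading(reading, tag))          # True: genuine sensor data
print(verify_reading("pressure=12.0", tag))  # False: forged value rejected
```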

Finally, with a traditional information system, no risks ensue from installing security protocols that create delays for authorized users in gaining system access. When dealing with security for a remote device to which a medical professional needs quick access in a medical emergency, though, creating a workable balance between security against unauthorized users and ease of access for authorized ones can be a matter of life or death.

Rethinking our security approaches to tackle cyber-kinetic threat

The continued growth of CPSes as an integral part of our physical well-being forces not only security professionals, but all stakeholders in our journey into a highly connected world to rethink traditional security concepts and solutions. Security must not take a back seat to rushing new technologies to market as quickly as possible. Hoping that past security approaches or – worse yet – blind, wishful thinking will prevent the disasters that inadequate security can bring is not an option.

Ignoring the reality of vulnerabilities will not restrict them to the realm of fiction. The cyber-kinetic threats are real. Many have already occurred. Many others are not far removed from dominating our news instead of our entertainment. Only by recognizing the new challenges that our connected world poses and coming together to address them will we be able to make our leap into this new way of life secure and safe, and get the fullest benefits from it.


Originally published on HelpNetSecurity on December 15, 2017.

History of Cyber-Kinetic Incidents and Research

Cyber-Kinetic Attacks History

The fact that cyber-kinetic attacks rarely appear on mainstream news doesn’t mean they don’t happen. They happen more frequently than you would think. Many, for various reasons, aren’t even reported to agencies charged with combatting them.

This hinders security experts in understanding the full scope and recognizing the trends in this growing problem. We’ll highlight examples of cyber-kinetic incidents and attacks in this chapter. Some were malfunctions that, nonetheless, demonstrated cyber-physical system vulnerabilities. Some were collateral damage from hacking or computer viruses. The vulnerabilities these exposed inspired a growing number of targeted cyber-kinetic attacks in recent years.

The Beginning of Cyber-Kinetic Threat

The previous chapter mentioned how the concept of using cyber-physical systems to disrupt industrial processes appears to date back to the early 1980s, when the U.S. Central Intelligence Agency supposedly introduced defects into oil pipeline control software that the Soviets were trying to steal from Canada. A former CIA operative claimed the flawed software caused a catastrophic explosion in a Soviet natural gas pipeline. Whether or not it really happened as claimed, the concept of cyber-kinetic attacks was born – likely well before most people even had an inkling of the cyber-enabled systems that industries and energy distribution networks were developing.

Early warnings of potential cyber-kinetic problems

Early problems with cyber systems involved simple malfunctions. These malfunctions, however, demonstrated the damage that improperly functioning cyber systems could inflict.

Software glitches turn deadly – cyber-kinetic malfunction

Between 1985 and 1987, six individuals either died or suffered serious injuries from radiation treatment by machines with flawed software. The Therac-25 radiation therapy machines, in rare situations, showed the equipment operator that the machine was properly configured for the prescribed radiation dose when it actually was configured to deliver a potentially fatal dose.

Although the manufacturer upgraded hardware components after the first two accidents, it neglected to consider software flaws as a source. The result was disastrous.

Continued equipment use, combined with manufacturer insistence that their equipment was incapable of producing the overdoses, led to four deaths before the flaws were uncovered. Clearly, the entrance of computer-controlled equipment into safety-critical systems revealed a need for more stringent coding practices. A 1993 investigation into these accidents, published in IEEE Computer, Vol. 26, No. 7,[1] stated:

The mistakes that were made are not unique to this manufacturer but are, unfortunately, fairly common in other safety-critical systems. As Frank Houston of the US Food and Drug Administration (FDA) said, “A significant amount of software for life-critical systems comes from small firms, especially in the medical device industry; firms that fit the profile of those resistant to or uninformed of the principles of either system safety or software engineering.”[2]

Furthermore, these problems are not limited to the medical industry. It is still a common belief that any good engineer can build software, regardless of whether he or she is trained in state-of-the-art software-engineering procedures. Many companies building safety-critical software are not using proper procedures from a software-engineering and safety-engineering perspective.

So, as software entered the realm of controlling safety-critical systems, its producers were slow to recognize the need for more strenuous efforts to ensure that then-new technology would not open the door to unexpected dangers. How similar the situation then was to now, when producers of cyber-physical systems are slow to recognize the need for more robust security as those systems become ever more cyber connected.

Integer overflow coding error destroys a rocket – malfunction

The 1996 maiden launch of the Ariane 5 rocket by the European Space Agency ended spectacularly when a malfunction caused it to explode 40 seconds after lift-off. It took the European Space Agency 10 years and $7 billion to produce its own heavy-lift launch vehicle capable of delivering a pair of three-ton satellites into orbit. It took a single coding error seconds to bring it crashing down to earth: a 64-bit floating-point velocity value was converted to a 16-bit signed integer, the value was too large to fit, and the resulting overflow shut down the guidance system.
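To illustrate the class of bug – this is a schematic reconstruction in Python, not the actual Ariane flight code – here is what an unchecked float-to-16-bit conversion does to an out-of-range value, next to the guard that was missing:

```python
# Schematic reconstruction of the bug class: converting a 64-bit float to a
# 16-bit signed integer without a range check silently wraps on overflow.
def to_int16_unchecked(value: float) -> int:
    """Emulates a raw int16 conversion: out-of-range values wrap into garbage."""
    return (int(value) + 32768) % 65536 - 32768

def to_int16_checked(value: float) -> int:
    """The missing guard: reject values outside the representable range."""
    if not -32768 <= value <= 32767:
        raise OverflowError(f"{value} does not fit in a 16-bit signed integer")
    return int(value)

print(to_int16_unchecked(40000.0))  # -25536: silent wraparound, corrupted data
try:
    to_int16_checked(40000.0)
except OverflowError as err:
    print(err)                      # fails loudly instead of corrupting guidance
```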

Dividing by zero leaves a warship dead in the water – malfunction

Another early malfunction was the accidental disabling of a U.S. warship in 1997 as it increased its reliance on computer controls. The U.S.S. Yorktown was touted as a great advance in Smart Ship technology. Computers closely integrated with shipboard systems enabled a 10% reduction in crew. Yet the software was not equipped to deal with bad data that a $2.95 hand calculator could handle with ease.[3]

The software crashed – disabling the propulsion system – when a crew member entered a zero into a database field and the software attempted to divide by it. The ship had to be towed into port and required two days of maintenance to get it running again.
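The general lesson is to validate operator input at the boundary, before it reaches the arithmetic. A minimal hypothetical sketch – not the Smart Ship code – of the guard that was missing:

```python
# Hypothetical sketch: reject the bad value at the boundary instead of letting
# a divide-by-zero crash propagate through the control network.
def average_speed(distance_nm: float, hours: float) -> float:
    if hours <= 0:
        raise ValueError("elapsed hours must be positive")
    return distance_nm / hours

print(average_speed(120.0, 8.0))  # 15.0 knots
try:
    average_speed(120.0, 0.0)     # the Yorktown-style bad record
except ValueError as err:
    print(err)                    # rejected cleanly; the system keeps running
```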

The software problem was ultimately corrected. No further incidents of this scope have been reported. You can be sure, however, that hackers of all types – especially the government-trained hacker armies mentioned in chapter 1 – have been inspired to explore ways to accomplish such disruption of military systems ever since.

Careless programming practices kill three people – malfunction

Lives can be put at risk when overconfidence of system developers and administrators leads to carelessness. That’s what led to the 1999 Bellingham, Washington, gas pipeline rupture that claimed three lives.[4]

Although the rupture was not solely the result of computer malfunction, poor network practices were a key contributor to the disaster. It is believed that an administrator was programming new reports into the system database without having first tested them offline.

This caused the system to become unresponsive during the pressure build-up that led to the rupture. The consequent fireball killed three people, injured eight others and released a quarter-million gallons of gasoline into the environment.

The role that administrator carelessness played in this will never be fully determined. The improper assignment of a single admin login password to all computer operators, and the deletion of key records from the system shortly after the incident, have prevented investigators from completing their inquiry into this aspect of the incident.

The careless practices that led to this disaster likely stemmed from overconfidence in the computer systems of that time to handle unexpected glitches. Our situation today is very similar. They pioneered computer-controlled systems. We are pioneering cyber-connected systems. We must be careful not to have the same overconfidence in systems’ ability to compensate for the unexpected. Widespread cyber-connectedness today makes us vulnerable to even more devastating consequences if we fail to anticipate and act to prevent them.

The rise of cyber-kinetic hacking

Not all incidents in those early years were simple malfunctions. As the internet provided growing access to the systems on which critical infrastructure depended, hacking surfaced as a threat.

These hacks, performed by young computer enthusiasts, usually were not intended to cause harm, but merely to demonstrate the hacker’s skills. They were far from harmless, though.

A young hacker disrupts a Massachusetts airport – untargeted attack

A teen hacker in 1997 cut off communication systems in and around the Worcester, Massachusetts, airport – including the control tower and crucial air-to-ground landing aids – for over six hours.[5] Not only airport functions, but also the fire department and all emergency services in the area were disrupted. The incident made him the first juvenile charged with federal computer crimes.

Authorities who charged him recognized that he did not anticipate that his hack would cause the harm that it did. The incident, however, was a clear indicator of the damage that a single hacker could inflict, and how vulnerable critical systems were to attack.

The first documented targeted cyber-kinetic attack

The 2000 Maroochy Shire wastewater attack mentioned previously, on the other hand, was an intentional cyber-kinetic attack, designed by a disgruntled engineer to get revenge on the township that chose not to hire him. It showed the damage someone with knowledge of a critical system could accomplish, as he released more than 264,000 liters of raw sewage around the township, killing marine life and risking residents’ health.

Slammer and the nuclear power plant – untargeted attack

A 2003 incident at the Davis-Besse nuclear power plant in Ohio demonstrated another threat to cyber systems: worms and viruses that, although intentionally created, attacked randomly as they spread. In this case, the Slammer worm entered the nuclear power plant’s system[6] through a T1 connection between a contractor’s network and the plant’s network.

This worm crashed several systems over the course of eight hours, including making core temperature monitoring systems unavailable for more than five hours. Fortunately, the plant was shut down at that time for maintenance.

The frightening thing, though, is the damage this worm could have inflicted on a fully operational nuclear plant. Incidents like this showed the vulnerability of critical systems to malware, a now-common threat, as we will see later.

Also frightening is the fact that most of the Davis-Besse IT staff were unaware of the T1 connection through which contractors were connecting to the plant. Despite their best efforts to secure the plant, more entry points into the system existed than they knew.

With the complex web of cyber connections in today’s world, the same potential exists for vulnerabilities to slip past IT professionals trained only in traditional security processes. The cyber-physical systems developing today have even more potential entry points to secure than the corporate networks at the time of Davis-Besse.

Trends in cyber-kinetic threats

With the exception of the Maroochy Shire wastewater attack, early incidents appear not to have specifically targeted their eventual victims. That changes as the 21st century unfolds. Now, a growing number of targeted cyber-kinetic attacks mix with accidental disruptions.

Oil and gas industry

Malfunctions in natural gas and oil pipelines

A 2010 San Bruno, California natural gas pipeline rupture[7] occurred when deficiencies in the operating system allowed pressure to rise too high for a pipeline that was overdue to be replaced. This pipeline, running through a residential area, exploded in a fireball that killed eight people, injured 60 others and destroyed 37 homes. Although this was not a targeted attack, the blast shows how vulnerable pipelines can be to erroneous data in the SCADA systems that control the flow.

In another malfunction exacerbated by faulty SCADA systems, a 2010 oil line malfunction in Marshall, Michigan[8] dumped 819,000 gallons of crude oil into the Kalamazoo River. SCADA deficiencies failed to trigger pressure alarms and delayed response to the spill, allowing the scope of the spill to far exceed what it would have been with a prompt response. Here again we see how SCADA systems not working as intended can cause massive damage.

Possible targeted pipeline cyber-kinetic attack

Malfunctions have not been the only cause of incidents in the oil and gas industry. A 2008 Turkish pipeline explosion[9] may have been a targeted attack by Russian operators to cut off oil supply to Georgia at a time when geopolitical tensions were moving toward armed conflict between the two. The exact cause was not conclusively determined, but multiple vulnerabilities appear to have been exploited around the time of the explosion. Lessons from this incident are important regardless of the actual cause of the explosion.

Confirmed targeted oil facilities cyber-kinetic attack

One incident known to have been a targeted attack was the 2008 attack on Pacific Energy Resources[10] SCADA systems that monitored and controlled offshore drilling platforms and dams. As with the Maroochy Shire wastewater attack, a former consultant for Pacific Energy Resources sought revenge when the company turned him down for a permanent position. Using multiple user accounts he created without the company’s knowledge, he transmitted programs, codes and commands to impair system functions. Fortunately, this attack did not cause major damage. The attacker’s intent was more to annoy than to destroy. Had he wished, however, he was positioned to cause massive damage.

Water Utilities

Attacks on water supplies go back to ancient times and have always been an attractive target for making a statement. That remains true today, and automatic systems that control both water supplies and wastewater treatment offer new targets.

Malfunction in dam system controls

The most significant SCADA malfunction in water utilities was the 2005 Taum Sauk Water Storage Dam catastrophic failure.[11] Discrepancies between pressure gauges at the dam, 100 miles south of Saint Louis, and at its remote monitoring facility led to the release of a billion gallons of water. This release destroyed 281 acres of a state park and caused $1 billion of damage. Fortunately, no fatalities occurred and only four people were injured. Although not a targeted attack, it demonstrates the vulnerability of SCADA systems to incorrect data.

Untargeted water filtration plant attack

A 2006 hack into a Harrisburg, Pennsylvania, water filtration plant[12] did not cause damage – the foreign hacker apparently intended to use the network merely as a tool to distribute spam emails and pirated software – but it raises concern because the hacker gained control of a system that controlled water treatment operations.[13] It is unknown whether the hacker realized the sensitivity of the hijacked system or what damage he could have inflicted if so inclined.

Targeted water distribution system attacks

A 2011 Springfield, Illinois, water distribution malfunction[14] that destroyed a pump responsible for piping water to thousands of homes was initially blamed on Russian hackers. Shortly after the Springfield malfunction was revealed, a hacker operating from an address in Romania posted screenshots as proof of his control over a similar water facility outside Houston, Texas.[15] The Romanian hacker claimed that the facility had such poor security that he was able to gain access to it in less than 10 minutes. He came forward to dispute U.S. authorities’ claims that the Springfield incident gave no reason for people to be concerned about the safety of critical systems. Although this hacker sought to prove a point rather than cause damage, the ease with which he penetrated critical systems is a cause for concern.

As troubling as these water facility incidents are, perhaps the most chilling is the 2016 attack on an unnamed water facility.[16] A hacktivist group with ties to Syria took control of hundreds of PLCs that controlled toxic chemicals used in water treatment. If they had more knowledge of the processes and chemicals they controlled, they could have inflicted mass casualties. Verizon Security Solutions, the company that investigated this breach, chose not to release the name or location of the facility because the many vulnerabilities Verizon found in the facility’s systems would attract more attacks if known to the hacker community.

Industry

Manufacturing facilities, too, have been hit with cyber-kinetic attacks. While earlier ones appear to have been untargeted, the most recent documented attack raises serious concerns over hacker capabilities and worker safety.

Untargeted industrial facilities attack

A 2005 Zotob worm attack on 13 of Daimler-Chrysler’s U.S. automobile manufacturing plants[17] left workers idle for almost an hour while infected Windows systems were patched. Estimated cost of the incident was $14 million.[18] The worm also affected systems at heavy equipment manufacturer Caterpillar, aircraft manufacturer Boeing and major U.S. news organizations.[19] As with other worm attacks discussed previously, the attacks were not specifically targeted against the affected companies, but demonstrate how malware introduced into computer systems can cripple companies – even if only briefly.

Targeted steel mill cyber-kinetic attack

Perhaps the most famous attack on an industrial facility was the 2014 targeted cyberattack on a German steel mill[20] that caused massive damage to equipment. In contrast to the hacktivist attack on the unnamed water facility, hackers in the German attack appear to have had extensive knowledge of industrial control systems.

Once they penetrated the corporate system, they worked their way into production management software and took control of most of the plant’s operations. They disabled sources of human intervention on the systems and targeted high value equipment, including a blast furnace whose shutdown controls they disabled, overheating it until it was ruined.

Neither the source of nor the motive for the attack was ever determined, and the plant, like the unnamed water facility, has not been identified. The attack, however, shows how much damage skilled and knowledgeable hackers can inflict on an industrial operation.

Demonstrations of industrial vulnerabilities

Although attacks of the scope of the German steel mill incident are not yet common, the vulnerability of industrial devices is frighteningly real. A 2017 test of industrial robots by security firm Trend Micro revealed a shocking array of vulnerabilities. The researchers reported:

In our comprehensive security analysis, we found that the software running on industrial robots is outdated; based on vulnerable OSs and libraries, sometimes relying on obsolete or broken cryptographic libraries; and have weak authentication systems with default, unchangeable credentials. Additionally, the Trend Micro FTR Team found tens of thousands of industrial devices residing on public IP addresses, which could include exposed industrial robots, further increasing risks that an attacker can access and compromise them.[21]

Trend Micro demonstrated multiple attack vectors open for hackers to exploit. These attack vectors could introduce defects into the products being manufactured, sabotage robots so they are ruined – or even cause injury or death to equipment operators.

My own experience with industrial robots confirms their conclusions. In fact, in certain industries and certain geographies, covert access to industrial control systems is almost routine. The long-standing practice of industrial espionage now commonly branches into industrial sabotage, as companies introduce barely noticeable failures into their competitors' processes to gain a competitive advantage.

Transportation

Transportation systems have also been attractive targets for cyber-kinetic attacks. Although the 1997 disruption at the Worcester airport was more collateral damage than targeted attack, it demonstrated the damage that such disruptions could cause. Similar incidents, whether accidental, targeted or the result of controlled testing, reinforce this fact.

Untargeted rail attack

A 2003 shutdown of the CSX Transportation system[22] brought commuter, passenger and freight rail service to a standstill for 12 hours because of a worm infecting the system. Dispatching and signaling systems were compromised along much of the U.S. East Coast, demonstrating the wide-ranging effect that a single attack can have.

Targeted city transportation system attacks

We saw previously how the 2008 Lodz, Poland, takeover of a city tram system[23] by a teen resulted in the first documented personal injuries. This, however, was not the first targeted takeover of a city transportation system.

Two years earlier, a 2006 hack of the Los Angeles city traffic system disrupted traffic lights[24] at four of the city's most heavily used intersections for days. Two city employees made the lights at those intersections malfunction in a way that snarled traffic, hoping to pressure the city into meeting union demands in a labor negotiation. Here, again, we see how disgruntled insiders are capable of inflicting massive disruptions through a targeted cyberattack.

Demonstrations of transportation system cyber-kinetic vulnerabilities

Lest anyone dismiss such ancient (in cyber terms) threats as having likely been eliminated, a 2014 test of the types of traffic systems used in many major cities showed how vulnerable those systems are not only to insiders but to outside hackers as well. Although the traffic lights themselves appear to be secure, data from the sensors and control systems that determine how and when traffic lights change were found to be easy to intercept and replace with false data. Cesar Cerrudo, the security expert who conducted the tests, stated:

[All] communication is performed in clear text without any encryption nor security mechanism. Sensor identification information (sensorid), commands, etc. could be observed being transmitted in clear text. Because of this, wireless communications to and from devices can be monitored and initiated by attackers, allowing them to send arbitrary commands, data and manipulating the devices.[25]

Cerrudo was easily able to intercept data and learn enough about the systems to reverse engineer them, allowing him to take control of the data flow from anywhere within 1,500 feet of the targeted sensors. By mounting his equipment on a drone, he could extend that reach to the range of the drone.
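
To see why cleartext telemetry is so dangerous, consider a minimal sketch. The packet layout below is entirely hypothetical – the real sensors use a proprietary protocol that is not reproduced here – but it illustrates Cerrudo's point: when readings carry no encryption or authentication, parsing a genuine packet and forging a fake one are the same trivial operation.

```python
# Hypothetical cleartext sensor packet: sensor_id, timestamp,
# vehicle_count, occupancy_pct. The field layout is invented for
# illustration only.
import struct

PACKET_FMT = "<H I B B"

def parse_packet(data: bytes) -> dict:
    """Anyone within radio range can decode this - no crypto, no integrity check."""
    sensor_id, timestamp, count, occupancy = struct.unpack(PACKET_FMT, data)
    return {"sensor_id": sensor_id, "timestamp": timestamp,
            "vehicle_count": count, "occupancy_pct": occupancy}

def forge_packet(sensor_id: int, timestamp: int, count: int, occupancy: int) -> bytes:
    """Equally, anyone can fabricate a 'valid' reading: the receiver has
    no way to distinguish a real packet from a forged one."""
    return struct.pack(PACKET_FMT, sensor_id, timestamp, count, occupancy)

# A forged report of an empty road at a congested intersection:
fake = forge_packet(sensor_id=0x0102, timestamp=1700000000, count=0, occupancy=0)
print(parse_packet(fake))
```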

Although newer versions of the sensors contain encryption, tens of thousands of vulnerable sensors remain embedded in intersections of cities around the globe, with no capability for upgrade other than ripping up the streets and replacing sensors with newer models.

Demonstrations of automobile cyber-kinetic vulnerabilities

Transportation vulnerabilities are not limited to transportation systems and traffic controls. As the vehicles we drive become increasingly able to recognize potential hazards and react to them, they also become more vulnerable to outside control.

Extensive research has gone into the vulnerabilities of vehicles whose advanced safety features enable them to make safety decisions autonomously, in place of drivers. At the forefront are researchers Charlie Miller and Chris Valasek, who have repeatedly demonstrated how automakers' efforts to make cars as fully connected to the internet as smartphones come with the threat of hackers taking control of everything from the windshield wipers and radio to the steering, brakes and even the ignition system.

Their 2015 remote hijacking of a Jeep Cherokee made a big splash in the media and led Jeep to recall vehicles so that, with Miller and Valasek's help, the vulnerabilities could be patched. Although those particular vulnerabilities have been fixed, Miller and Valasek warn that patching a single vulnerability is not enough to prevent determined hackers from taking control of onboard systems.

Miller and Valasek have since demonstrated even greater control that hackers could exert over a vehicle if they gained access to the onboard Controller Area Network (CAN) bus that coordinates communication between microcontrollers and devices in vehicles. Other researchers have suggested that a compromised smartphone or compromised plug-in monitor, like those provided by insurance companies, could facilitate remote access to the CAN bus for future vehicle hijackers.
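
For readers unfamiliar with the CAN bus, a short sketch may help show why access to it is tantamount to control. CAN is a broadcast medium with no sender authentication: any node can transmit any message ID, and receivers act purely on that ID. The fragment below is a minimal sketch using the python-can library against a virtual Linux test interface; the arbitration ID and payload are hypothetical, as real IDs are model-specific and undocumented.

```python
# Minimal CAN bus sketch using python-can over Linux SocketCAN.
# Assumes a virtual test interface 'vcan0' has been configured.
import can

bus = can.Bus(channel="vcan0", interface="socketcan")

# CAN frames carry no sender identity or authentication: any node on the
# bus may emit any arbitration ID, and receivers act on the ID alone.
msg = can.Message(
    arbitration_id=0x123,           # hypothetical message ID
    data=[0x01, 0x00, 0x00, 0x00],  # hypothetical payload
    is_extended_id=False,
)
bus.send(msg)

# Listening is just as easy - every frame is visible to every node.
frame = bus.recv(timeout=1.0)  # returns the next frame, or None
if frame is not None:
    print(frame)  # timestamp, arbitration ID and payload

bus.shutdown()
```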

In fact, such a CAN bus cyber-kinetic hack has already been accomplished. Chinese researchers in 2016 took control of some functions on a Tesla Model S when the vehicle's built-in web browser connected to a compromised Wi-Fi network.

And, finally, in a penetration testing engagement, my team was able to gain partial control of multiple cars at the same time – unlike the Jeep and Tesla research, which each targeted a single vehicle.

Demonstration of cyber-kinetic drone vulnerability

In another penetration testing engagement, my team overrode drones’ geofencing system (the system that enables government authorities to set up restricted airspaces to block drones from flying over airports, military bases, sensitive facilities and large public events). In a simulated test, we successfully bypassed geofencing protections and directed multiple drones straight toward previously restricted airspace. In other words, we demonstrated that safeguards to prevent drones from accessing restricted areas were not sufficient.

Demonstration of superyacht vulnerabilities

Another area of concern is the growing market for superyachts for the very wealthy. Such vessels have evolved from floating luxury hotels for leisure activities into floating business complexes from which owners direct their business operations. As a result, demand has skyrocketed for these vessels to carry cutting-edge communications technology.

In the rush to equip vessels with such technology, however, security has been almost wholly overlooked. A 2017 conference on superyachts featured a sobering demonstration of how easily those poorly secured systems can be compromised.[26]

Cybercrime expert Campbell Murray remotely took control of one superyacht, seizing control of navigation, satellite communication and the onboard Wi-Fi and telephone systems. He could do whatever he wanted with the ship and then wipe the system clean of any evidence of his hack.

His demonstration is not merely theoretical, either. Cybercriminals have already hacked superyacht communication systems to intercept owners' banking information and withdraw money from their bank accounts. They have also blackmailed owners with compromising photos or confidential information obtained from shipboard computers. There have even been incidents of superyacht owners paying ransom to regain control of their hijacked navigation systems.

Demonstration of aircraft vulnerabilities

The holy grail of cyber-physical systems security research is remotely hacking a passenger jet. In terms of harm to human lives as well as to the economy, a malicious cyber-kinetic hack of an aircraft could be among the most devastating. Uncovering potential aircraft vulnerabilities and closing them before they are abused is therefore an active area of research for many governments as well as the industry.

Unlike some of the other industries mentioned, aircraft manufacturers and airlines take the threat of cyber-kinetic attacks seriously and have been careful and deliberate in their adoption of digital technologies despite the multiple opportunities IoT offers to improve operational efficiency, increase personalisation to passengers and even introduce new business models.

That, however, didn't stop the researchers. In November 2017, an official from the U.S. Department of Homeland Security (DHS) announced that one year earlier a team of government, industry and academic researchers had succeeded in remotely hacking a passenger jet's controls in a non-laboratory setting while the aircraft was parked on a runway.[27]

Ransomware attack on a major transit system

The extortion of superyacht owners described above is an example of a disturbing new trend: ransomware attacks. Ransomware is a recent strategy in which hackers take control of computer systems and demand money before releasing them to their rightful owners.

A 2016 ransomware attack on the San Francisco municipal rail system (Muni)[28] resulted in free rides for commuters, but no physical damage and little other effect on the system. The criminals behind the ransomware demanded 100 bitcoin (approximately US $73,000 at the time) to provide the encryption key to unlock the compromised system. The attack apparently resulted from the criminals' random, automated scanning of the web, and Muni was fully operational within a day after the attack became known. Muni has released little information about the incident, but it appears that it recovered full use of its systems without paying the ransom.

Security experts found evidence that the cybercriminal behind the Muni attack had previously succeeded in extorting ransom from multiple manufacturing companies.[29] If this becomes an M.O. for cybercriminals, it won’t be long before they start causing kinetic impacts to get their victims’ attention and compliance.

Other ransomware attacks

Although Muni appears to have escaped its ransomware attack unscathed, others have not. Ransomware is becoming a booming business. The 10 ransomware programs known in January 2016 had grown to many hundreds by January 2017,[30] and both their number and their use keep growing.

On a city library system

A 2017 ransomware attack on the St. Louis library system[31] left all of its libraries unable to serve patrons. Books could not be checked out or returned. Like Muni, the library system refused to pay for the encryption key. Unlike Muni, however, it was not as well prepared to recover. It took nearly two weeks to wipe the entire computer system and rebuild it from backups.

On a resort

Not all victims have refused to pay. A 2017 ransomware attack on an Austrian ski resort compromised the resort's "smart lock" system.[32] Guests' key cards failed to unlock their rooms, and the resort was unable to create usable new key cards. The attack hit at the beginning of the ski season, when all rooms were booked. The resort paid the demanded 1,500 euros in bitcoin to let guests access their rooms again.

Foreseeing the future of ransomware attacks

Ransomware attacks to date do not appear to have been highly targeted. Most of them likely were similar to the Muni attack, where automated scanning software sought specific vulnerabilities it could exploit. As a result, targets so far have been random, either being well enough prepared to restore operations without paying the ransom, such as Muni and the St. Louis library system, or capable of providing only a modest payout, such as the Austrian resort.

Where the cybercriminals behind ransomware attacks have found the most profit is in manufacturing companies, such as the ones from which the Muni attacker was found to have successfully extorted ransom. Even with those, though, the data compromised were usually corporate data. The real threat from ransomware attacks lies in the eventual targeting of critical systems.

Researchers have already demonstrated possible attack vectors that ransomware criminals could use to seize control of ICS devices. One such attack vector was revealed at a 2017 ICS Cyber Security Conference:

[An] ICS security consultant, Alexandru Ariciu, demonstrated [that] ransomware attacks, which he called "Scythe," were able to target inconspicuous and less risky SCADA devices. The names of the targets are not revealed but he describes the affected devices as several types of I/O systems which stand between OPC servers and field devices. The devices run a web server and are powered by an embedded operating system. He says that a large number of these systems are unprotected and easily accessible online, which allows crooks to hijack them by replacing their firmware with a malicious code.[33]

Being able to demand ransom not just for corporate records, but for a company’s entire manufacturing capability will make ransomware a devastating threat for companies that fail to secure vulnerabilities in their SCADA systems and the IoT connections to them.

The massive May 2017 "WannaCry" ransomware attack on tens of thousands of hospitals, government agencies and businesses as large as FedEx and PetroChina across the globe[34] brought ransomware wide media attention. It is, however, only the tip of the iceberg of the threat that ransomware poses.

A growing ransomware threat against hospitals

Most chilling are the ransomware attacks targeting hospitals. At least 19 hospitals were compromised by ransomware attacks in 2016,[35] and the May 2017 "WannaCry" attack was documented to have struck 48 hospital trusts in the UK alone[36] – nearly 20% of UK hospital trusts – not to mention many others around the world. These ransomware attacks caused tens of thousands of cancelled appointments,[37] cancelled operations in some cases, and temporary hospital shutdowns in others.

Radiotherapy machines, oncology equipment, MRI scanners and other diagnostic equipment have been rendered useless when connected to hospital networks infected by ransomware. Critical care was delayed and the possibility of clinical mistakes multiplied. Some critical equipment took weeks to restore to proper functioning after an attack.

As hackers increasingly find monetizing their abilities appealing, fears grow that the demonstrated vulnerabilities of medical devices will also become an avenue through which hackers seek to obtain ransoms. We’ll look at that shortly.

Medical

Aside from the ransomware attacks on hospitals, most medical threats remain unrealized. We'll look at one incident in which people with a medical condition were specifically targeted over the internet, then look more closely at the growing threat of hackers targeting implanted medical devices.

An attack on epilepsy patients

In a 2008 hack of the nonprofit Epilepsy Foundation website, the hacktivist group Anonymous flooded the foundation's forum with brightly flashing graphics and redirects designed to trigger migraines or seizures in the epilepsy patients who frequented it.

No motivation has been found for this attack other than that it was intended as a sick and malicious joke. It shows, however, the damage that can be inflicted, under certain circumstances, purely through the internet.

Demonstrations of hacking implanted medical devices

We discussed in Chapter 1 the tests of pacemakers and defibrillators that led to then-U.S. Vice President Dick Cheney having the wireless functionality of his implanted defibrillator disabled in 2007 as a precaution against potential assassination attempts. Additional demonstrations of vulnerable medical devices since then have raised awareness further.

A 2011 demonstration by Jerome Radcliffe on insulin monitors and pumps[38] showed multiple ways to harm insulin-dependent patients. He showed that signals from insulin monitors could be intercepted and overridden so that they displayed inaccurate data, putting patients who rely on those readings at risk. Even more alarming, he found that insulin pumps themselves could be reprogrammed to respond to a hacker's remote commands – conceivably allowing the hacker to administer a fatal dose.

Not long after, another researcher, Barnaby Jack, demonstrated even more advanced insulin pump hacks.[39] While Radcliffe’s attack vector was protected somewhat by the need for the hacker to first obtain the targeted device’s serial number, Jack’s approach enabled him to locate and control any insulin pump within 300 feet without the serial number.

This opens the potential for a new and terrifying form of attack. Many critical medical devices are listed on Shodan, a specialty search engine of internet-connected devices, raising fears[40] that criminals could seize control of those systems and extort money to keep patients alive.

Although Shodan is not yet known to have been used for any such attacks, it has already been misused as a tool to discover a wide variety of unprotected or poorly protected control systems.[41] It is only a matter of time before critical medical devices come into play for hackers.
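
To make concrete how little effort such discovery takes, the sketch below uses Shodan's official Python library. The query string is illustrative only, and a registered API key is assumed; note that Shodan returns already-indexed banner data, so the searcher never probes the devices directly.

```python
# A minimal sketch of device enumeration with the official 'shodan'
# Python library. Requires an API key from shodan.io.
import shodan

API_KEY = "YOUR_API_KEY"  # placeholder - not a real key
api = shodan.Shodan(API_KEY)

# Search indexed service banners for an illustrative ICS-related query.
# (Port 1911 is associated with the Niagara Fox building-automation
# protocol; the query is an example, not a recipe.)
results = api.search("port:1911 product:Niagara")

print(f"Exposed devices indexed: {results['total']}")
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("org", "unknown org"))
```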

Other cyber-kinetic attacks on physical well-being

Attacking medical devices or targeting medical conditions is not the only way to affect people's well-being. The previously mentioned 2016 Lappeenranta, Finland, attack that shut down heating and hot water in two apartment buildings[42] could also be considered an attack on well-being.

The same could be said of the 2017 unauthorized activation of emergency alert sirens in Dallas, Texas.[43] All 156 sirens in the system were triggered at 11:45 p.m. local time through the system's wireless controls. Rendered unresponsive to shutoff codes, they sounded for 95 minutes until workers deactivated the entire system.

Other than the inconvenience to residents, no physical damage occurred, although the city's 911 system was flooded with more than 4,000 calls during the incident, disrupting genuine emergency calls. Under different conditions, an attack such as this could have caused mass panic.

Most likely, this attack was from young hackers challenging themselves to see how big of a “splash” they could make with their hacking skills. Whatever their motivation might have been, however, this incident raises concerns about another threat that cyber vulnerabilities present.

The threat of physical damage from hacks of poorly secured cyber-physical systems comes not just from hardened cybercriminals and highly trained cyberwarfare agents of other countries. It comes also from curious kids who might cause kinetic impacts inadvertently by trying things out.

Power Grids

The effect of the Lappeenranta, Finland, attack may have been small, but the potential for attacks that affect vast swaths of people is real. Power grids that provide essential services to large populations are vulnerable.

Demonstration of generator vulnerability

The cyber-kinetic threat to power grids was dramatically demonstrated in the 2007 Aurora Generator Test.[44] The test used a computer program to rapidly open and close a diesel generator's circuit breakers out of phase with the grid. The generator exploded in less than three minutes.

The generator used was typical of most generators used in power grids, operating on protocols that were designed without security in mind. Failure of such a generator in an actual power grid could have caused widespread outages, or even the type of cascade failure experienced in the 2003 US Northeast blackout.
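
A rough worked example shows why out-of-phase reclosing is so violent. If the generator drifts to phase angle δ relative to the grid and the breaker then closes, the machine momentarily sees a voltage difference of 2·V·sin(δ/2), and the resulting current surge and torque shock on the shaft scale with it. The sketch below tabulates illustrative per-unit figures; these are not data from the actual Aurora test.

```python
# Back-of-the-envelope: voltage difference across a breaker that closes
# with the generator at phase angle delta to the grid (equal voltage
# magnitudes assumed): |V at 0 deg - V at delta| = 2*V*sin(delta/2).
import math

V_NOMINAL = 1.0  # per-unit system voltage

for delta_deg in (0, 30, 60, 90, 120, 180):
    delta = math.radians(delta_deg)
    v_diff = 2 * V_NOMINAL * math.sin(delta / 2)
    print(f"phase error {delta_deg:3d} deg -> {v_diff:.2f} pu "
          f"({v_diff * 100:.0f}% of nominal) across the machine")

# At 120 degrees the machine momentarily sees ~1.73x nominal voltage;
# rapid repeated reclosing at such angles hammers the shaft and windings
# until something breaks.
```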

Targeted revenge attack

A less damaging – but still concerning – attack occurred two years later, in a 2009 hack of a Texas power company.[45] A fired employee used his not-yet-revoked access to cripple an energy use forecasting system, leaving the company unable to sell its excess capacity to other energy companies.

The unauthorized access targeted company profits rather than consumer access to energy. The situation, however, could have played out much worse. The fired employee had access to critical power generation systems and could have inflicted widespread damage on the power grid had he chosen to.

Targeted cyber-kinetic attacks on power grids in Ukraine

Real-life disruptions of power grids have occurred, with the 2015 BlackEnergy attacks on Ukrainian power grids[46] being the first. These attacks, believed to have been conducted by Russian government-sponsored hackers, left more than 80,000 people without power.

Former U.S. National Security Agency Director Michael Hayden said of the attack, "There is a darkening sky. This is another data point in an arc that we've long predicted."

Ever-present malware awaiting activation

In reality, the question is not so much whether foreign hackers have compromised power grids in their target countries, but how much of those grids they have compromised. Researchers have uncovered massive breaches that have given foreign hackers critical details of U.S. power grid infrastructure from New York to California.[47]

Authorities believe that the means to strike U.S. power grids at will are already in place. This accords with my team's own observations. Wherever there is a bit of geopolitical tension, we regularly see malware in critical infrastructure systems – not doing anything at present, just waiting for commands from its controllers.

Nuclear Power Plants

Perhaps people’s greatest nightmare is the possibility of cyberattacks compromising a nuclear power facility. Chillingly, such systems have proven vulnerable to damage, either from simple malfunction or direct attack.

ICS failures cause nuclear plant shutdowns

Failures in devices and software connected to plant ICS networks caused shutdowns at the Browns Ferry nuclear plant in 2006 and the Hatch nuclear plant in 2008.[48] At the Browns Ferry, Alabama, plant, excessive traffic on the control system network caused two circulation pumps to fail. At the Hatch plant near Baxley, Georgia, a glitch during an attempted software update produced erroneous water level readings that triggered an automatic shutdown.

In both cases, no widespread harm occurred. But both, again, demonstrate the vulnerability of critical industrial control systems to attacks that attempt to overload them or feed them false data.

Targeted cyber-kinetic attack on Iranian uranium enrichment facilities

The 2009 targeted Stuxnet attack on Iranian centrifuges used to enrich uranium[49] stands as the most damaging cyberattack known to date. The attack, believed to have been launched by the U.S. and Israeli governments to cripple Iranian nuclear weapons development, may have destroyed as many as 10% of the illegally obtained and operated uranium enrichment centrifuges at Iran's Natanz enrichment plant.

The plant, although isolated from the internet, was breached by targeting plant personnel with the Stuxnet worm, which then entered the plant’s network when personnel connected their infected computers to it. The Stuxnet worm appears to have been specifically tailored for the Iranian centrifuges.

The worm, however, by its very nature, could not achieve the surgical strike its creators hoped for. The need to spread widely in order to find access points through which it could attack led it beyond its target. Since then, it has been captured by hackers and modified to strike other targets. We'll look at that development shortly.

Unidentified targeted cyber-kinetic attack on a nuclear facility

Finally, as with power grids, at least one documented disruption of an unnamed nuclear power facility[50] occurred in the two to three years leading up to 2016, according to a high-ranking official at the International Atomic Energy Agency. The location of the plant was not revealed.

Precautionary measures were launched to mitigate the attack, and no plant shutdown was required. No further details were divulged.

Growing threats

To the public, Stuxnet may seem like ancient history in the quickly changing cyberworld. Unfortunately, it is not ancient history to security experts. Its legacy remains a looming threat.

In many ways, the creation of Stuxnet to hamper the threat of Iranian nuclear ambitions opened Pandora’s box. Stuxnet demonstrated the feasibility of cyber-kinetic attacks on vital systems through malware.

Furthermore, Stuxnet has not faded into history. It remains in the cyberworld as an ongoing threat to industrial control systems. Although it was tailored specifically to the Iranian centrifuges, it has served as a model from which malware developers have created software targeting other SCADA systems.[51]

Stuxnet, however, is only one part of the growing body of threats. 2017 testing demonstrated the growing threat of ransomware targeting critical infrastructure.

David Formby, a PhD student in the Georgia Tech School of Electrical and Computer Engineering, and his faculty advisor, Raheem Beyah, designed proof-of-concept ransomware that could take control of PLCs at water treatment plants. The ransomware locked out authorized users and would have enabled attackers to introduce poisonous levels of chlorine into the water supply.[52]
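
The protocol weakness that such proof-of-concept ransomware exploits is easy to illustrate. Modbus/TCP, still ubiquitous on plant floors, carries no authentication at all, so any host with network reach to a PLC can issue exactly the same commands an engineering workstation would. Below is a hedged sketch using the pymodbus library; the IP address, register address and values are hypothetical, and this is not the researchers' actual code.

```python
# Minimal illustration of Modbus/TCP's lack of authentication, using
# the pymodbus client library. All addresses and values are made up.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.10", port=502)  # documentation-range IP
client.connect()

# Reading a holding register requires no credentials of any kind...
rr = client.read_holding_registers(address=40, count=1)
print("current setpoint:", rr.registers[0])

# ...and neither does writing one. A legitimate engineering workstation
# and another host on the same network issue byte-identical requests.
client.write_register(address=40, value=123)

client.close()
```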

And the massive May 2017 "WannaCry" ransomware attack on targets around the world only emphasizes the vulnerabilities present in our cyber-connected world. Ralph Langner, founder of German security consultancy Langner, said in light of the "WannaCry" attack: "For a competent attacker it would be possible to use the encryption vector specifically against industrial targets and force a production halt. We haven't seen that on a large scale yet but I predict it's coming, with ransom demands in the six and seven digits."[53]

My own research confirms this degree of vulnerability. I have done a significant number of investigations of critical infrastructure providers, such as energy, gas and water, in parts of the world where geopolitical tensions were escalating. My team almost always found malware present in their infrastructure.

The malware was not doing anything malicious. It was simply there to provide the attacker a foothold in case they one day decide to disrupt the service.

Based on the location, access rights and capabilities of the malware, we confirmed that the attackers indeed could have damaged the critical infrastructure if they wanted to.

Although we could eliminate the risks we found for our clients, identifying the attacker behind the malware is a different matter. Attribution is always hard to establish with certainty; the best guess is usually an adversary nation-state.

Takeaways

If nothing else, the attacks described in this chapter demonstrate that the threat of cyberattacks on critical systems is not hypothetical. Such attacks occur with increasing frequency.

These cyber-kinetic malfunctions, attacks and research demonstrations are only a sampling of known incidents. Many more have been documented, and far more than those have gone unreported or have only been hinted at by those charged with combating them.

Many industries and organizations fear that releasing too much information about attacks will erode public trust in them. Many fear that admitting to having been successfully compromised will make them targets of further attacks.

Add those fears to the scattered patchwork of incident reporting channels and it becomes clear that it is almost impossible for security experts to see the true scope of the problem or to trace its underlying trends. Increased vigilance in protecting systems from cyberattack is essential to the safe use of the cyber-enabled systems on which we increasingly rely.

In coming chapters, we’ll look at some of the specific cyber-kinetic threats, beginning with Stuxnet, and how they have been able to compromise systems. Ultimately, we will look at strategies for defending against threats and securing cyber-physical systems for a safer future.

[1] http://courses.cs.vt.edu/~cs3604/lib/Therac_25/Therac_1.html

[2] F. Houston, “What Do the Simple Folk Do?: Software Safety in the Cottage Industry,” IEEE Computers in Medicine Conf., 1985.

[3] https://gcn.com/Articles/1998/07/13/Software-glitches-leave-Navy-Smart-Ship-dead-in-the-water.aspx

[4] https://pdfs.semanticscholar.org/3921/4bc7f02e067c6ac40cc3bf1eff1aaa4cc02d.pdf

[5] http://www.irational.org/APD/CCIPS/juvenilepld.htm

[6] http://www.securityfocus.com/news/6767

[7] National Transportation Safety Board (NTSB), Pacific Gas and Electric Company Natural Gas Transmission Pipeline Rupture and Fire, San Bruno, California, September 9, 2010, NTSB/PAR-11/01, August 30, 2011, pp. 5-12.

[8] National Transportation Safety Board (NTSB). Enbridge, Inc. Hazardous Liquid Pipeline Rupture, Board meeting summary, July 25, 2010.

[9] https://ics.sans.org/media/Media-report-of-the-BTC-pipeline-Cyber-Attack.pdf

[10] http://www.theregister.co.uk/2009/09/24/scada_tampering_guilty_plea/

[11] http://damfailures.org/case-study/taum-sauk-dam-missouri-2005/

[12] http://blogs.abcnews.com/theblotter/2006/10/hackers_penetra.html

[13] http://www.gao.gov/assets/270/268137.pdf, pp. 16-17.

[14] http://www.bbc.com/news/technology-15817335

[15] http://www.itworld.com/article/2734691/security/illinois–texas-hacks-show-it-s-easy-to-take-over-u-s–water-systems.html

[16] https://www.theregister.co.uk/2016/03/24/water_utility_hacked/

[17] http://www.eweek.com/security/zotob-pnp-worms-slam-13-daimlerchrysler-plants

[18] https://www.tofinosecurity.com/why/Case-Profile-Daimler-Chrysler

[19] http://www.gao.gov/assets/270/268137.pdf, pp. 16.

[20] https://www.sentryo.net/cyberattack-on-a-german-steel-mill/; https://www.wired.com/2015/01/german-steel-mill-hack-destruction/; http://www.bbc.com/news/technology-30575104

[21] https://www.trendmicro.com/vinfo/us/security/news/internet-of-things/rogue-robots-testing-industrial-robot-security

[22] CSX Transportation, "Computer virus strikes CSX transportation computers—Freight and commuter service affected (press release)," Aug 2003.

[23] Graeme Baker, Schoolboy Hacks into City’s Tram System, 2008, Available: http://www.telegraph.co.uk/news/worldnews/1575293/Schoolboy-hacks-into-citys-tram-system.html

[24] http://articles.latimes.com/2007/jan/09/local/me-trafficlights9

[25] https://www.wired.com/2014/04/traffic-lights-hacking

[26] https://www.theguardian.com/world/2017/may/05/cybercrime-billionaires-superyacht-owners-hacking

[27] http://www.aviationtoday.com/2017/11/08/boeing-757-testing-shows-airplanes-vulnerable-hacking-dhs-says/

[28] http://www.securityweek.com/ransomware-attack-disrupts-san-francisco-rail-system

[29] https://krebsonsecurity.com/2016/11/san-francisco-rail-system-hacker-hacked/

[30] http://www.dailymail.co.uk/health/article-4149266/Chilling-menace-HACKERS-holding-NHS-ransom.html

[31] https://www.theguardian.com/books/2017/jan/23/ransomware-attack-paralyses-st-louis-libraries-as-hackers-demand-bitcoins

[32] http://fortune.com/2017/01/29/hackers-hijack-hotels-smart-locks/

[33] https://cyware.com/news/vulnerabilities-in-scada-enable-ransomware-attacks-16356dfb

[34] http://money.cnn.com/2017/05/12/technology/ransomware-attack-nsa-microsoft/index.html

[35] http://www.dailymail.co.uk/news/article-3925240/Hackers-cripple-NHS-hospital-machines-demand-ransom-cash.html

[36] https://www.forbes.com/sites/thomasbrewster/2017/05/17/wannacry-ransomware-hit-real-medical-devices/#595a136d425c

[37] http://www.telegraph.co.uk/news/2017/05/13/nhs-cyber-chaos-hits-thousands-patients/

[38] http://www.cbsnews.com/news/black-hat-hacker-can-remotely-attack-insulin-pumps-and-kill-people/

[39] https://www.theregister.co.uk/2011/10/27/fatal_insulin_pump_attack

[40] http://www.dailymail.co.uk/health/article-4149266/Chilling-menace-HACKERS-holding-NHS-ransom.html

[41] http://money.cnn.com/2013/04/08/technology/security/shodan/index.html

[42] Mohit Kumar, DDoS Attack Takes Down Central Heating System Amidst Winter in Finland, 2016, http://thehackernews.com/2016/11/heating-system-hacked.html

[43] https://www.wired.com/2017/04/dallas-siren-hack-wasnt-novel-just-really-loud/

[44] https://en.wikipedia.org/wiki/Aurora_Generator_Test

[45] http://www.theregister.co.uk/2009/06/01/texas_power_plant_hack/

[46] http://www.ibtimes.com/us-confirms-blackenergy-malware-used-ukrainian-power-plant-hack-2263008

[47] http://www.cbc.ca/news/technology/hackers-infrastructure-1.3376342

[48] http://www.safetyinengineering.com/FileUploads/Nuclear%20cyber%20security%20incidents_1349551766_2.pdf

[49] Avag R. Did Stuxnet Take Out 1,000 Centrifuges at the Natanz Enrichment Plant? | Institute for Science and International Security. 2010; Available: http://isis-online.org/isis-reports/detail/did-stuxnet-take-out-1000-centrifuges-at-the-natanz-enrichment-plant

[50] http://www.reuters.com/article/us-nuclear-cyber-idUSKCN12A1OC

[51] Eric Byres, “Next Generation Cyber Attacks Target Oil and Gas SCADA,” Pipeline & Gas Journal, February 2012

[52] https://www.scmagazineuk.com/rsa-2017-researchers-create-ransomware-for-industrial-control-systems/article/638130/

[53] https://www.forbes.com/sites/thomasbrewster/2017/05/17/wannacry-ransomware-hit-real-medical-devices/#595a136d425c

Timeline of Key Cyber-Kinetic Attacks, Incidents & Research

Cyber-Kinetic Timeline

Below is a timeline of key historic cyber-kinetic attacks, system malfunctions and key researcher demos targeting cyber-physical systems (CPS), the Internet of Things (IoT) and Industrial Control Systems (ICS), resulting in kinetic impacts in the physical world. I tried to select only those that were first of their kind or that significantly increased general awareness of a particular type of attack or incident.

I know that the list is incomplete. That's where you come in. If you are aware of an incident or research that demonstrated something new regarding cyber-kinetic threats or significantly helped raise awareness, please contact me.

For a more readable version of the history of cyber-kinetic incidents and attacks check out this chapter from my book: https://cyberkinetic.com/cyber-kinetic/timeline-of-key-cyber-kinetic-attacks-incidents-and-research/. You can also download all these incidents listed in one PowerPoint slide from https://www.slideshare.net/secret/2nijwZSS9HZFru.
