ClearEnergy ransomware can destroy process automation logics in critical infrastructure, SCADA and industrial control systems.


Schneider Electric, Allen-Bradley, General Electric (GE) and more vendors are vulnerable to ClearEnergy ransomware.

Researchers at CRITIFENCE® Critical Infrastructure and SCADA/ICS Cyber Threats Research Group demonstrated this week a new proof-of-concept ransomware attack aimed at erasing (clearing) the ladder logic diagram in Programmable Logic Controllers (PLCs). The ransomware, dubbed ClearEnergy, affects a massive range of PLC models from the world's largest manufacturers of SCADA and industrial control systems, including Schneider Electric Unity series PLCs running Unity OS version 2.6 and later. PLC models from other leading vendors, including GE and Allen-Bradley (MicroLogix family), were also found to be vulnerable to the ransomware attack.

Ransomware is a type of malware that infects computers and encrypts their content with strong encryption algorithms, then demands a ransom to decrypt that data. “The ClearEnergy attack is based on one of the most comprehensive and dangerous vulnerabilities ever found in Critical Infrastructure, SCADA and ICS systems, and affects a wide range of vulnerable products from different manufacturers and vendors. These attacks target the most important assets and critical infrastructure, not just because they are easy to attack but also because they are hard to recover,” says Brig. Gen. (ret.) Rami Ben Efraim, CEO at CRITIFENCE.

In 2016 we saw a rise in ransomware whose victims were businesses or public organizations that, on one hand, had poor security and, on the other, faced a high cost from losing business continuity. Last year there were reports of targeted ransomware for PCs and other workstations within critical infrastructure, SCADA and industrial control systems. A month ago, scientists from the School of Electrical and Computer Engineering at the Georgia Institute of Technology simulated a limited-scope proof-of-concept ransomware attack (LogicLocker) designed to attack critical infrastructure, SCADA and industrial control systems.

ClearEnergy acts similarly to other malicious ransomware programs that infect computers, encrypt their content with strong encryption algorithms, and then demand a ransom to decrypt that data back to its original form, with one major difference: ClearEnergy is designed to target Critical Infrastructure and SCADA systems such as nuclear and power plant facilities, water and waste facilities, transportation infrastructure and more.

“Despite the codename ClearEnergy, the vulnerabilities behind the ClearEnergy ransomware take us to our worst nightmares, where cyber-attacks meet critical infrastructure. Attackers can now take down our electricity, our water supply and our oil and gas infrastructure by compromising power plants, water dams and nuclear plants. Critical infrastructure is the place where terrorists, activists, criminals and state actors can have the biggest effect. They have the motivation, and ClearEnergy shows that they also have the opportunity,” says Brig. Gen. (ret.) Rami Ben Efraim, CEO at CRITIFENCE.

Once ClearEnergy is executed on the victim machine, it searches for vulnerable PLCs, grabs the ladder logic diagram from each PLC and tries to upload it to a remote server. Finally, ClearEnergy starts a timer that triggers a process to wipe the logic diagram from all PLCs after one hour, unless the victim pays to cancel the timer and stop the attack.

SCADA and industrial control systems have proven weak in recent years against numerous types of attacks that result in damage in the form of loss of service, which translates to a power outage, or sabotage. The damage a ClearEnergy attack can cause to critical infrastructure is high, since it can cause a power failure and other damage to field equipment, making the recovery process slow in most cases, and might even bring a plant to a halt.

ClearEnergy, which is based on vulnerabilities CVE-2017-6032 (SVE-82003203) and CVE-2017-6034 (SVE-82003204) discovered by CRITIFENCE security researchers, disclosed profound security flaws in Schneider Electric's UMAS protocol. The UMAS protocol appears to suffer from critical vulnerabilities in the form of a badly designed protocol session key, which results in authentication bypass. “UMAS is a kernel-level protocol and an administrative control layer used in the Unity series PLCs and Unity OS from version 2.6. It relies on the Modicon Modbus protocol, a common protocol in Critical Infrastructure, SCADA and industrial control systems, and is used to access both unallocated and allocated memory from the PLC to the SCADA system. What worries our researchers is that it may not be entirely patched within the coming years, since it affects a wide range of hardware and vendors,” says Mr. Eran Goldstein, CTO and Founder of CRITIFENCE.

Following the disclosure, Schneider Electric confirmed that the Modicon family of PLC products is vulnerable to the findings presented by CRITIFENCE and released an Important Cybersecurity Notification (SEVD-2017-065-01). ICS-CERT, Department of Homeland Security (DHS), released an important advisory earlier this morning ([April 11, 2017] ICSA-17-101-01). The basic flaw, confirmed by Schneider Electric, allows an attacker to easily guess the weak (1-byte) session key (only 256 possibilities) or even to sniff it. Using the session key, the attacker can take full control of the controller, read the controller's program and rewrite it with malicious code.
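To make the scale of that flaw concrete, the sketch below enumerates the entire keyspace of a 1-byte session key. This is illustrative only, not UMAS protocol code; `try_session_key` and the chosen key value are hypothetical stand-ins for a single authentication attempt against a device.

```python
# Illustrative sketch: why a 1-byte session key is trivially guessable.
# This does NOT implement the UMAS protocol; try_session_key() is a
# hypothetical stand-in for one authentication attempt.

def try_session_key(key):
    """Hypothetical placeholder for one authentication attempt."""
    SECRET_KEY = 0xA7  # stand-in for whatever key the device chose
    return key == SECRET_KEY

def brute_force_1byte_key():
    # A 1-byte key has only 2**8 = 256 possible values, so exhaustive
    # search needs at most 256 attempts -- effectively instantaneous,
    # even over a slow industrial network link.
    for candidate in range(256):
        if try_session_key(candidate):
            return candidate
    return None

if __name__ == "__main__":
    print(hex(brute_force_1byte_key()))
```

By contrast, even a modest 8-byte random key would require on the order of 2^64 attempts, which is why a single byte offers no meaningful protection.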

“The recovery process from this type of cyber-attack can be very hard and slow in most cases, due to the lack of management resources in the field of SCADA and process automation. A slow recovery process, multiplied by the number of devices that need to be fixed, plus configuration restoration, makes the recovery process very painful,” says Mr. Alexey Baltacov, Critical Infrastructure Architect at CRITIFENCE.

“Recovering from such an attack would be a slow and tedious process, prone to many failures. Every plant using PLCs as part of a production line would have dozens of these devices all around the plant. Let's assume that each PLC is indeed backed up to its most recent configuration. It would still take a painstakingly long time to restore each and every one of them to its original status,” says Mr. Eyal Benderski, Head of the Critical Infrastructure and SCADA/ICS Cyber Threats Research Group at CRITIFENCE. “This restoration process would take a long time, during which the plant would be completely shut down. The cost of that shutdown could be substantial, and for critical processes the effect extends beyond the downtime itself, as is the case with energy plants. Consider a process that relies on keeping a constant temperature for a biological agent or chemical process. Breaking the process chain could require re-initialization that may take days or weeks. Furthermore, since dealing with the OT network is much more complicated for operational reasons, on many occasions plants don't even have up-to-date backups, which would require complete reconfiguration of the manufacturing process. Given these complications, plants would very much prefer paying the ransom to gambling on the chance that the backups will work as expected. Lastly, let's assume the backups went on-air as soon as possible: what would prevent the same attack from recurring, even after paying?”

About the author:

CRITIFENCE is a leading Critical Infrastructure, SCADA and Industrial Control Systems cyber security firm. The company developed and provides SCADAGate+, a unique passive cyber security technology designed for Critical Infrastructure, SCADA and Industrial Control Systems visibility and vulnerability assessment, which makes it possible to monitor, control and analyze OT network cyber security events and vulnerabilities easily and entirely passively. CRITIFENCE's development team and its Critical Infrastructure and SCADA/ICS Cyber Threats Research Group are composed of experienced SCADA and cyber security experts and researchers from the IDF's Technology & Intelligence Unit 8200 (Israel's NSA) and the Israeli Air Force (IAF).

For more information about CRITIFENCE refer to: http://www.critifence.com

References

Vulnerabilities

CVE-2017-6032 http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2017-6032

CVE-2017-6034 http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2017-6034

SVE-82003203 http://www.critifence.com/sve/sve.php?id=82003203

SVE-82003204 http://www.critifence.com/sve/sve.php?id=82003204

Source code

ClearEnergy | UMASploit – https://github.com/0xICF/ClearEnergy

Advisories

Schneider Electric – SEVD-2017-065-01

ICS-CERT, Department of Homeland Security (DHS) – ICSA-17-101-01

 

News

SecurityAffairs – http://securityaffairs.co/wordpress/57731/malware/clearenergy-ransomware-scada.html

0xICF – https://0xicf.wordpress.com/2017/04/09/clearenergy-ransomware-can-destroy-process-automation-logics-in-critical-infrastructure-scada-and-industrial-control-systems/

VirusGuides – http://virusguides.com/clearenergy-ransomware-targets-critical-infrastructure-scada-industrial-control-systems/

CRITIFENCE – http://critifence.com/blog/clear_energy/

Flaws in Samsung’s ‘Smart’ Home Let Hackers Unlock Doors and Set Off Fire Alarms


Credit: Andy Greenberg, Wired

Waze | Another way to track your moves

Millions of drivers use Waze, a Google-owned navigation app, to find the best, fastest route from point A to point B. And according to a new study, all of those people run the risk of having their movements tracked by hackers.

Researchers at the University of California-Santa Barbara recently discovered a Waze vulnerability that allowed them to create thousands of “ghost drivers” that can monitor the drivers around them—an exploit that could be used to track Waze users in real-time. They proved it to me by tracking my own movements around San Francisco and Las Vegas over a three-day period.

“It’s such a massive privacy problem,” said Ben Zhao, professor of computer science at UC-Santa Barbara, who led the research team.

Here’s how the exploit works. Waze’s servers communicate with phones using an SSL encrypted connection, a security precaution meant to ensure that Waze’s computers are really talking to a Waze app on someone’s smartphone. Zhao and his graduate students discovered they could intercept that communication by getting the phone to accept their own computer as a go-between in the connection. Once in between the phone and the Waze servers, they could reverse-engineer the Waze protocol, learning the language that the Waze app uses to talk to Waze’s back-end app servers. With that knowledge in hand, the team was able to write a program that issued commands directly to Waze servers, allowing the researchers to populate the Waze system with thousands of “ghost cars”—cars that could cause a fake traffic jam or, because Waze is a social app where drivers broadcast their locations, monitor all the drivers around them.

 

The attack is similar to one conducted by Israeli university students two years ago, who used emulators to send traffic bots into Waze and create the appearance of a traffic jam. But an emulator, which pretends to be a phone, can only create the appearance of a few vehicles in the Waze system. The UC-Santa Barbara team, on the other hand, could run scripts on a laptop that created thousands of virtual vehicles in the Waze system that can be sent into multiple grids on a map for complete surveillance of a given area.

In a test of the discovery, Zhao and his graduate students tried the hack on a member of their team (with his permission).

“He drove 20 to 30 miles and we were able to track his location almost the whole time,” Zhao told me. “He stopped at gas stations and a hotel.”

 

Last week, I tested the Waze vulnerability myself, to see how successfully the UC-Santa Barbara team could track me over a three-day period. I told them I’d be in Las Vegas and San Francisco, and where I was staying—the kind of information a snoopy stalker might know about someone he or she wanted to track. Then, their ghost army tried to keep tabs on where I went.

Users could be tracked right now and never know it.

– Ben Zhao, UC-Santa Barbara computer science professor
 

The researchers caught my movements on three occasions, including when I took a taxi to downtown Las Vegas for dinner:

And they caught me commuting to work on the bus in San Francisco. (Though they lost me when I went underground to take the subway.)

The security researchers were only able to track me while I was in a vehicle with Waze running in the foreground of my smartphone. Previously, they could track someone even if Waze was just running in the background of the phone. Waze, an Israeli start-up, was purchased by Google in 2013 for $1.1 billion. Zhao informed the security team at Google about the problem and made a version of the paper about their findings public last year. An update to the app in January of this year prevents it from broadcasting your location when the app is running in the background, an update that Waze described as an energy-saving feature. (So update your Waze app if you haven’t done so recently!)

“Waze constantly improves its mechanisms and tools to prevent abuse and misuse. To that end, Waze is regularly in contact with the security and privacy research community—we appreciate their help protecting our users,” said a Waze spokesperson in an emailed statement. “This group of researchers connected with us in 2014, and we have already addressed some of their claims, implementing safeguards in our system to protect the privacy of our users.”

The spokesperson said that “the concept of Waze is that we all work together to share information and impact the world around us” and that “users expect to offer certain information about their route in exchange for unparalleled navigation assistance.” Among the safeguards deployed by Waze is a “system of cloaking” so that a user’s location as displayed “from time to time within the Waze application does not represent such user’s actual, real time location.”


But those safeguards did not prevent real-time tracking in my case. The researchers sent me their tracking minutes after my trips, with accurate time stamps for each of my locations, meaning this cloaking system doesn’t seem to work very well.

“Anyone could be doing this [tracking of Waze users] right now,” said Zhao. “It’s really hard to detect.”

Part of what allowed the researchers to track me so closely is the social nature of Waze and the fact that the app is designed to share users’ geolocation information with each other. The app shows you other Waze drivers on the road around you, along with their usernames and how fast they’re going. (Users can opt out of this by going invisible.) When I was in Vegas, the researchers simply populated ghost cars around the hotel I was staying at that were programmed to follow me once I was spotted.

“You could scale up to real-time tracking of millions of users with just a handful of servers,” Zhao told me. “If I wanted to, I could easily crawl all of the U.S. in real time. I have 50-100 servers, and could get more from [Amazon Web Services] and then I could track all of the drivers.”

Theoretically, a hacker could use this technique to go into the Waze system and download the activity of all the drivers using it. If they made the data public like the Ashley Madison hackers did, the public would suddenly have the opportunity to follow the movements of the over 50 million people who use Waze. If you know where someone lives, you would have a good idea of where to start tracking them.

Like the Israeli researchers, Zhao’s team was also able to easily create fake traffic jams. They were wary of interfering with real Waze users so they ran their experiments from 2 a.m. to 5 a.m. every night for two weeks, creating the appearance of heavy traffic and an accident on a remote road outside of Baird, Texas.

“No real humans were harmed or even interacted with,” said Zhao. They aborted the experiment twice after spotting real world drivers within 10 miles of their ghost traffic jam.

 

While Zhao defended the team’s decision to run the experiment live on Waze’s system, he admitted they were “very nervous” about initially making their paper about their findings public. They had approval from their IRB, a university ethics board; took precautions not to interfere with any real users; and notified Google’s security team about their findings. They are presenting their paper at a conference called MobiSys, which focuses on mobile systems, at the end of June in Singapore.

 

“We needed to get this information out there,” said Zhao. “Sitting around and not telling the public and the users isn’t an option. They could be tracked right now and never know it.”

“This is bigger than Waze,” continued Zhao. The attack could work against any app, said Zhao, turning their servers into an open system that an attacker can mine and manipulate. With Waze, it’s a particularly sensitive attack because users’ location information is being broadcast and can be downloaded, but the attack on another app would allow hackers to download any information that users broadcast to other users or allow them to flood the app with fake traffic.

“With a [dating app], you could flood an area with your own profile or robot profiles and basically ruin it for your area,” said Zhao. “We looked at a bunch of different apps and nearly all of them had this near-catastrophic vulnerability.”

The scary part, said Zhao, is that “we don’t know how to stop this.” He said that servers that interact with apps in general are not as robust against attack as those that are web-facing.

“Not being able to separate a real device from a program is a larger problem,” said Zhao. “It’s not cheap and it’s not easy to solve. Even if Google wanted to do something, it’s not trivial for them to solve. But I want them to get this on the radar screen and help try to solve the problem. If they lead and they help, this collective problem will be solved much faster than if they don’t.”

“Waze is building their platform to be social so that you can track people around you. By definition this is going to be possible,” said Jonathan Zdziarski, a smartphone forensic scientist, who reviewed the paper at my request. “The crowd sourced tools that are being used in these types of services definitely have these types of data vulnerabilities.”

Zdziarski said there are ways to prevent this kind of abuse, by for example, rate-limiting data requests. Zhao told me his team has been running its experiments since the spring of 2014, and Waze hasn’t blocked them, even though they have created the appearance of thousands of Waze users in a short period of time coming from just a few IP addresses.
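Zdziarski's rate-limiting suggestion can be sketched as a per-client token bucket of the kind an app server might place in front of its API. This is an illustrative sketch under assumed parameters (a burst of 10 requests, refilling at 1 request per second), not Waze's actual infrastructure.

```python
# Illustrative token-bucket rate limiter of the sort Zdziarski suggests;
# not Waze's server code. Capacity and refill rate are assumptions.
import time

class TokenBucket:
    def __init__(self, capacity=10.0, refill_rate=1.0):
        self.capacity = capacity        # max burst of requests allowed
        self.refill_rate = refill_rate  # tokens (requests) added per second
        self.tokens = {}                # client_ip -> remaining tokens
        self.last = {}                  # client_ip -> last request timestamp

    def allow(self, client_ip, now=None):
        """Return True if this client's request should be served."""
        now = time.monotonic() if now is None else now
        if client_ip not in self.tokens:
            self.tokens[client_ip] = self.capacity
            self.last[client_ip] = now
        # Refill tokens in proportion to elapsed time, capped at capacity.
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        self.tokens[client_ip] = min(
            self.capacity, self.tokens[client_ip] + elapsed * self.refill_rate)
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False  # throttle: client exceeded its budget
```

A handful of IP addresses emulating thousands of virtual drivers, as in the researchers' experiment, would exhaust their buckets almost immediately under a scheme like this, which is exactly the anomaly the text says went unblocked.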

Waze’s spokesperson said the company is “examining the new issue raised by the researchers and will continue to take the necessary steps to protect the privacy of our users.”

In the meantime, if you need to use Waze to get around but are wary of being tracked, you do have one option: set your app to invisible mode. But beware, Waze turns off invisible mode every time you restart the app.

Full paper here.

Credit:

Physical Backdoor | Remote Root Vulnerability in HID Door Controllers

If you’ve ever been inside an airport, university campus, hospital, government complex, or office building, you’ve probably seen one of HID’s brand of card readers standing guard over a restricted area. HID is one of the world’s largest manufacturers of access control systems and has become a ubiquitous part of many large companies’ physical security posture. Each one of those card readers is attached to a door controller behind the scenes, which is a device that controls all the functions of the door including locking and unlocking, schedules, alarms, etc.

In recent years, these door controllers have been given network interfaces so that they can be managed remotely. It is very handy for pushing out card database updates and schedules, but as with everything else on the network, there is a risk of remotely exploitable vulnerabilities. And in the case of physical security systems, that risk is more tangible than usual.

HID’s two flagship lines of door controllers are their VertX and Edge platforms. In order for these controllers to be easily integrated into existing access control setups, they have a discoveryd service that responds to a particular UDP packet. A remote management system can broadcast a “discover” probe to port 4070, and all the door controllers on the network will respond with information such as their MAC address, device type, firmware version, and even a common name (like “North Exterior Door”). Discovery is the only purpose of this service as far as I can tell, but it is not its only function. For some reason, discoveryd also contains functionality for changing the blinking pattern of the status LED on the controller. This is accomplished by sending a “command_blink_on” packet to the discoveryd service with the number of times for the LED to blink. Discoveryd then builds up a path to /mnt/apps/bin/blink and calls system() to run the blink program with that number as an argument.

And you can probably guess what comes next.

A command injection vulnerability exists in this function due to a lack of any sanitization on the user-supplied input that is fed to the system() call. Instead of a number of times to blink the LED, if we send a Linux command wrapped in backticks, like `id`, it will get executed by the Linux shell on the device. To make matters worse, the discovery service runs as root, so whatever command we send it will also be run as root, effectively giving us complete control over the device. Since the device in this case is a door controller, having complete control includes all of the alarm and locking functionality. This means that with a few simple UDP packets and no authentication whatsoever, you can permanently unlock any door connected to the controller. And you can do this in a way that makes it impossible for a remote management system to relock it. On top of that, because the discoveryd service responds to broadcast UDP packets, you can do this to every single door on the network at the same time!
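The flaw boils down to a classic unsanitized `system()` call. The sketch below reconstructs the vulnerable pattern and a safe alternative in Python for clarity; it is not HID's actual firmware code, and the binary path and input handling are assumptions based on the description above.

```python
# Illustrative reconstruction of the unsafe pattern described above,
# plus the standard fix. NOT HID's actual code; the path and parsing
# are assumptions for demonstration.
import os
import subprocess

def blink_unsafe(count_from_packet):
    # Vulnerable: attacker-controlled input is interpolated into a shell
    # command, so a payload like "`reboot`" or "1; rm -rf /" is executed
    # by the shell instead of being treated as a number.
    os.system("/mnt/apps/bin/blink " + count_from_packet)

def blink_safe(count_from_packet):
    # Safe: validate that the input really is a number, then exec the
    # program without a shell, so the argument can never be interpreted
    # as shell syntax.
    if not count_from_packet.isdigit():
        raise ValueError("blink count must be a non-negative integer")
    subprocess.run(["/mnt/apps/bin/blink", count_from_packet], check=True)
```

The safe version applies two independent defenses (input validation and avoiding the shell entirely); either one alone would have blocked the backtick payload described above.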

Needless to say, this is a potentially devastating bug. The Zero Day Initiative team worked with HID to see that it got fixed, and a patch is reportedly available now through HID’s partner portal, but I have not been able to verify that fix personally. It also remains to be seen just how quickly that patch will trickle down into customer deployments. TippingPoint customers have been protected ahead of a patch for this vulnerability since September 22, 2015 with Digital Vaccine filter 20820.

 

 

Credit:  Ricky Lawshae

Kemuri Water Company (KWC) | Hackers change chemical settings at water treatment plant

The unnamed water district had asked Verizon to assess its networks for indications of a security breach. It said there was no evidence of unauthorized access, and the assessment was a proactive measure as part of ongoing efforts to keep its systems and networks healthy.

Verizon examined the company’s IT systems, which supported end users and corporate functions, as well as Operational Technology (OT) systems, which were behind the distribution, control and metering of the regional water supply.

The assessment found several high-risk vulnerabilities on the Internet-facing perimeter and said that the OT end relied heavily on antiquated computer systems running operating systems from 10 or more years ago.

Many critical IT and OT functions ran on a single IBM AS/400 system which the company described as its SCADA (Supervisory Control and Data Acquisition) platform. This system ran the water district’s valve and flow control application that was responsible for manipulating hundreds of programmable logic controllers (PLCs), and housed customer and billing information, as well as the company’s financials.

Interviews with the IT network team uncovered concerns surrounding recent suspicious cyber activity and it emerged that an unexplained pattern of valve and duct movements had occurred over the previous 60 days. These movements consisted of manipulating the PLCs that managed the amount of chemicals used to treat the water to make it safe to drink, as well as affecting the water flow rate, causing disruptions with water distribution, Verizon reported.

An analysis of the company’s internet traffic showed that some IP addresses previously linked to hacktivist attacks had connected to its online payment application.

Verizon said that it “found a high probability that any unauthorized access on the payment application would also expose sensitive information housed on the AS/400 system.” The investigation later showed that the hackers had exploited an easily identified vulnerability in the payment application, leading to the compromise of customer data. No evidence of fraudulent activity on the stolen accounts could be confirmed.

However, customer information was not the full extent of the breach. The investigation revealed that, using the same credentials found on the payment app webserver, the hackers were able to interface with the water district’s valve and flow control application, also running on the AS/400 system.

During these connections, they managed to manipulate the system to alter the amount of chemicals that went into the water supply and thus interfere with water treatment and production so that the recovery time to replenish water supplies increased. Thanks to alerts, the company was able to quickly identify and reverse the chemical and flow changes, largely minimizing the impact on customers. No clear motive for the attack was found, Verizon noted.

The company has since taken remediation measures to protect its systems.

In its concluding remarks on the incident, Verizon said: “Many issues like outdated systems and missing patches contributed to the data breach — the lack of isolation of critical assets, weak authentication mechanisms and unsafe practices of protecting passwords also enabled the threat actors to gain far more access than should have been possible.”

Acknowledging that the company’s alert functionality played a key role in detecting the chemical and flow changes, Verizon said that implementation of a “layered defense-in-depth strategy” could have detected the attack earlier, limiting its success or preventing it altogether.

 

About the attack [UPDATED]

A “hacktivist” group with ties to Syria compromised Kemuri Water Company’s computers after exploiting unpatched web vulnerabilities in its internet-facing customer payment portal, it is reported.

The hack – which involved SQL injection and phishing – exposed KWC’s ageing AS/400-based operational control system because login credentials for the AS/400 were stored on the front-end web server. This system, which was connected to the internet, managed the programmable logic controllers (PLCs) that regulated the valves and ducts controlling the flow of water and the chemicals used to treat it. Many critical IT and operational technology functions ran on a single AS/400 system, a team of computer forensic experts from Verizon subsequently concluded.
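The SQL injection vector reported in the payment portal is the classic string-concatenation mistake, and the standard fix is parameterized queries. The sketch below is illustrative only; the schema and queries are assumptions, not KWC's actual application.

```python
# Illustrative: the class of flaw (SQL injection) reported in the payment
# portal, and the standard fix. Schema and queries are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 42.0)")

def lookup_unsafe(user):
    # Vulnerable: input like "' OR '1'='1" rewrites the query itself,
    # dumping every row instead of one account.
    query = "SELECT balance FROM accounts WHERE user = '%s'" % user
    return conn.execute(query).fetchall()

def lookup_safe(user):
    # Parameterized: the driver passes `user` strictly as data, so it
    # can never change the meaning of the SQL statement.
    query = "SELECT balance FROM accounts WHERE user = ?"
    return conn.execute(query, (user,)).fetchall()
```

The same injected string that dumps the whole table through `lookup_unsafe` simply matches no rows through `lookup_safe`, which is why parameterization (rather than input filtering) is the accepted fix for this class of bug.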

Our endpoint forensic analysis revealed a linkage with the recent pattern of unauthorised crossover. Using the same credentials found on the payment app webserver, the threat actors were able to interface with the water district’s valve and flow control application, also running on the AS/400 system. We also discovered four separate connections over a 60-day period, leading right up to our assessment. During these connections, the threat actors modified application settings with little apparent knowledge of how the flow control system worked. In at least two instances, they managed to manipulate the system to alter the amount of chemicals that went into the water supply and thus handicap water treatment and production capabilities so that the recovery time to replenish water supplies increased. Fortunately, based on alert functionality, KWC was able to quickly identify and reverse the chemical and flow changes, largely minimising the impact on customers. No clear motive for the attack was found.

Verizon’s RISK Team uncovered evidence that the hacktivists had manipulated the valves controlling the flow of chemicals twice – though fortunately to no particular effect. It seems the activists lacked either the knowledge of SCADA systems or the intent to do any harm.

The same hack also resulted in the exposure of personal information of the utility’s 2.5 million customers. There’s no evidence that this has been monetized or used to commit fraud.

Nonetheless, the whole incident highlights the weaknesses in securing critical infrastructure systems, which often rely on ageing or hopelessly insecure setups.

 


 

More Information

Monzy Merza, Splunk’s director of cyber research and chief security evangelist, commented: “Dedicated and opportunistic attackers will continue to exploit low-hanging fruit present in outdated or unpatched systems. We continue to see infrastructure systems being targeted because they are generally under-resourced or believed to be out of band or not connected to the internet.”

“Beyond the clear need to invest in intrusion detection, prevention, patch management and analytics-driven security measures, this breach underscores the importance of actionable intelligence. Reports like Verizon’s are important sources of insight. Organisations must leverage this information to collectively raise the bar in security to better detect, prevent and respond to advanced attacks. Working collectively is our best route to getting ahead of attackers,” he added.

Reports that hackers have breached water treatment plants are rare but not unprecedented. For example, computer screenshots posted online back in November 2011 by hackers who claimed to have pwned its systems purported to show the user interface used to monitor and control equipment at the Water and Sewer Department for the City of South Houston, Texas. The claim followed attempts by the US Department of Homeland Security to dismiss a separate water utility hack claim days earlier.

More recently hackers caused “serious damage” after breaching a German steel mill and wrecking one of its blast furnaces, according to a German government agency. Hackers got into production systems after tricking victims with spear phishing emails, said the agency.

Spear phishing also seems to have played a role in attacks involving the BlackEnergy malware against power utilities in Ukraine and other targets last December. The malware was used to steal user credentials as part of a complex attack that resulted in power outages, ultimately leaving more than 200,000 people temporarily without power on 23 December.

 

Credit:  watertechonline, theregister

OnionDog APT targets Critical Infrastructures and Industrial Control Systems (ICS)

The Helios Team at 360 SkyEye Labs revealed that a group named OnionDog has been infiltrating and stealing information from the energy, transportation and other infrastructure industries of Korean-language countries through the Internet.


OnionDog’s first activity can be traced back to October 2013, and in the following two years it was only active between late July and early September. The Trojans’ self-set life cycle averages 15 days, and the attacks are distinctly organized and objective-oriented.

OnionDog malware spreads by exploiting vulnerabilities in Hangul, an office suite popular in Korean-language countries, and it reached network-isolated targets through a USB worm.

Targeting the infrastructure industry

OnionDog concentrated its efforts on infrastructure industries in Korean-language countries. In 2015 this organization mainly attacked harbors, VTS (vessel traffic services), subways, public transportation and other transportation systems. In 2014 it attacked many electric power and water resources corporations as well as other energy enterprises.

The Helios Team has found 96 groups of malicious code and 14 C&C domain names and IPs related to OnionDog. It first surfaced in October 2013 and was most active in the summers of the following years. Each Trojan set its own “active state” window, from a minimum of three days to a maximum of twenty-nine days between compilation and the end of activity. The average life cycle is 15 days, which makes it harder for victim enterprises to notice and respond than malware that stays active for longer periods.

OnionDog’s attacks are mainly carried out via spear-phishing emails. The early Trojans used icons and file numbers to pose as fake HWP files (Hangul’s file format). Later on, the Trojan exploited a vulnerability in an upgraded version of Hangul, embedding malicious code in a genuine HWP file; once the file is opened, the vulnerability is triggered to download and activate the Trojan.

Since most infrastructure industries, such as the energy industry, generally adopt intranet-isolation measures, OnionDog uses USB drives as a ferry to break the false sense of security that physical isolation provides. In the classic APT case of the Stuxnet virus, which broke into an Iranian nuclear facility, the virus used an employee’s USB disk to circumvent network isolation. OnionDog likewise generated USB worms to infiltrate the target’s internal network.

“OCD-type” intensive organization

OnionDog’s malicious-code activity follows strict rules:

First, the malicious code follows strict naming conventions, visible in the path of the generated PDB (symbol file). For example, the path for the USB worm is APT-USB, and the path for the spear-phishing mail file is APT-WebServer;

When the OnionDog Trojan is successfully deployed, it connects to a C&C (command-and-control) server, downloads other malware, saves it in the %temp% folder, and uniformly names the files “XXX_YYY.jpg”. These names have special meaning and usually point to the target.
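A defender hunting for this indicator could sweep the %temp% directory for the reported naming scheme. The sketch below is illustrative only: the “XXX_YYY.jpg” convention comes from the Helios Team report, but the exact regular expression is an assumption, and a filename match alone is not proof of compromise.

```python
import os
import re
import tempfile

# OnionDog payloads were reportedly dropped into %temp% with names of the
# form "XXX_YYY.jpg" (two tokens joined by an underscore, a .jpg suffix).
# This regex is an illustrative approximation of that convention.
PATTERN = re.compile(r"^[A-Za-z0-9]+_[A-Za-z0-9]+\.jpg$")

def suspicious_names(filenames):
    """Return the filenames that match the OnionDog-style naming scheme."""
    return [name for name in filenames if PATTERN.match(name)]

if __name__ == "__main__":
    temp_dir = tempfile.gettempdir()  # resolves %temp% on Windows
    for name in suspicious_names(os.listdir(temp_dir)):
        print("possible OnionDog artifact:", os.path.join(temp_dir, name))
```

Any hits would still need manual triage (hash lookup, C&C traffic correlation) before being treated as an infection.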

All signs show that OnionDog has strict organization and arrangement across its attack time, target, vulnerability exploration and utilization, and malicious code. At the same time, it is very cautious about covering up its tracks.

In 2014, OnionDog used many fixed IPs in South Korea as its C&C sites. Of course, this does not mean that the attacker is located in South Korea; these IPs could be puppets and jumping-off points. By 2015, OnionDog communications had been upgraded across the board to Onion City, so far one of the more advanced and covert methods of network communication seen in APT attacks.

Onion City is a deep-web search engine that uses Tor2web proxy technology to reach the anonymous Tor network without requiring the Tor Browser. OnionDog uses Onion City to hide its Trojan-controlling server inside the Tor network.

In recent years, APT attacks on infrastructure facilities and large-scale enterprises have emerged frequently. Some, such as Stuxnet and BlackEnergy, attack industrial control systems and can have devastating results. Others aim to steal information, such as the Lazarus hacker organization jointly revealed by Kaspersky, AlienVault Labs and Novetta, and OnionDog, recently exposed by the 360 Helios Team. These covert cybercrimes can cause similarly serious losses.

 

 

Credit:  helpnetsecurity

[CRITICAL] Nissan Leaf Can Be Hacked Via Web Browser From Anywhere In The World

How The Nissan Leaf Can Be Hacked Via Web Browser From Anywhere In The World

What if a car could be controlled from a computer halfway around the world? Computer security researcher and hacker Troy Hunt has managed to do just that, via a web browser and an Internet connection, with an unmodified Nissan Leaf in another country. While so far the control was limited to the HVAC system, it’s a revealing demonstration of what’s possible.

Hunt writes that his experiment started when an attendee at a developer security conference where Hunt was presenting realized that his car, a Nissan Leaf, could be accessed via the internet using Nissan’s phone app. Using the same methods as the app itself, any other Nissan Leaf could be controlled as well, from pretty much anywhere.

Hunt made contact with another security researcher and Leaf-owner, Scott Helme. Helme is based in the UK, and Hunt is based in Australia, so they arranged an experiment that would involve Hunt controlling Helme’s LEAF from halfway across the world. Here’s the video they produced of that experiment:

As you can see, Hunt was able to access the Leaf in the UK, which wasn’t even switched on, and gather extensive data from the car’s computer about recent trips, the distances of those trips (recorded, oddly, in yards), power usage information, charge state, and so on. He was also able to access the HVAC system to turn on the heater or A/C, and to turn on the heated seats.

It makes sense these functions would be the most readily available, because those are essentially the set of things possible via Nissan’s Leaf mobile app, which people use to heat up or cool their cars before they get to them, remotely check on the state of charge, and so on.

This app is the key to how the Leaf can be accessed via the web, since that’s exactly what the app does. The original (and anonymous) researcher found that by making his computer a proxy between the app and the internet, the requests made from the app to Nissan’s servers can be seen. Here’s what a request looks like:

GET https://[redacted].com/orchestration_1111/gdc/BatteryStatusRecordsRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris&TimeFrom=2014-09-27T09:15:21

If you look at that request, you can see that it includes a VIN parameter, the Vehicle Identification Number (obfuscated here) of the car. Changing this VIN is really all you need to do to access any particular Leaf. Remember, VINs are visible through the windshield of every car, by law.
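The finding can be restated in a few lines of code. The sketch below builds the battery-status URL the same way the app does (the host name is a placeholder, since the real server was redacted in Hunt’s write-up); note that the VIN is the only per-car parameter, with no token, cookie or signature anywhere.

```python
from urllib.parse import urlencode

# Placeholder host: the real server name was redacted in Hunt's write-up.
API_BASE = "https://REDACTED.example.com/orchestration_1111/gdc"

def battery_status_url(vin, region="NE", lang="no-NO", tz="Europe/Paris"):
    """Build the BatteryStatusRecordsRequest URL as the mobile app does.

    Note what is absent: no session ID, no bearer token, no API key.
    The VIN is the sole identifier -- and VINs are printed on windshields.
    """
    params = {"RegionCode": region, "lg": lang, "DCMID": "",
              "VIN": vin, "tz": tz}
    return f"{API_BASE}/BatteryStatusRecordsRequest.php?{urlencode(params)}"
```

Fetching such a URL with a plain anonymous GET was enough to return the car’s data, which is exactly what the researchers observed in Chrome.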

Hunt describes the process on his site, and notes some alarming details:

This is pretty self-explanatory if you read through the response; we’re seeing the battery status of his LEAF. But what got Jan’s attention is not that he could get the vehicle’s present status, but rather that the request his phone had issued didn’t appear to contain any identity data about his authenticated session.

In other words, he was accessing the API anonymously. It’s a GET request so there was nothing passed in the body nor was there anything like a bearer token in the request header. In fact, the only thing identifying his vehicle was the VIN which I’ve partially obfuscated in the URL above.

So, there’s no real security here to prevent accessing data on a LEAF, nor any attempt to verify the identity on either end of the connection.


And it gets worse. Here, quoting from Hunt’s site, he’s using the name “Jan” to refer to the anonymous Leaf-owning hacker who discovered this:

But then he tried turning it on and observed this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

That request returned this response:

{

status:200

message: “success”,

userId: “******”,

vin: “SJNFAAZE0U60****”,

resultKey: “***************************”

}

This time, personal information about Jan was returned, namely his user ID which was a variation of his actual name. The VIN passed in the request also came back in the response and a result key was returned.

He then turned the climate control off and watched as the app issued this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteOffRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

All of these requests were made without an auth token of any kind; they were issued anonymously. Jan checked them by loading them up in Chrome as well and sure enough, the response was returned just fine. By now, it was pretty clear the API had absolutely zero access controls but the potential for invoking it under the identity of other vehicles wasn’t yet clear.

Even if you don’t understand the code, here’s what all that means: we have the ability to get personal data and control functions of the car from pretty much anywhere with a web connection, as long as you know the target car’s VIN.

Hunt proved this was possible after some work, using a tool to generate Leaf VINs (only the last 5 or 6 digits are actually different) and sending a request for battery status to those VINs. Soon, they got the proper response back. Hunt explains the significance:

This wasn’t Jan’s car; it was someone else’s LEAF. Our suspicion that the VIN was the only identifier required was confirmed and it became clear that there was a complete lack of auth on the service.

Of course it’s not just an issue related to retrieving vehicle status; remember the other APIs that can turn the climate control on or off. Anyone could potentially enumerate VINs and control the physical function of any vehicles that responded. That was a very serious issue. I reported it to Nissan the day after we discovered this (I wanted Jan to provide me with more information first), yet as of today – 32 days later – the issue remains unresolved. You can read the disclosure timeline further down but certainly there were many messages and a phone call over a period of more than four weeks and it’s only now that I’m disclosing publicly…


(Now, just to be clear, this is not a how-to guide to mess with someone’s Leaf. You’ll note that the crucial server address has been redacted, so you can’t just type in those little segments of code and expect things to work.)

While at the moment, you can only control some HVAC functions and get access to the car’s charge state and driving history, that’s actually more worrying than you may initially think.

Not only is there the huge privacy issue of having your comings-and-goings logged and available, but if someone wanted to, they could crank the AC and drain the battery of a Leaf without too much trouble, stranding the owner somewhere.

There’s no provision for remote starting or unlocking at this point, but the Leaf is a fully drive-by-wire vehicle. It’s no coincidence that every fully autonomous car I’ve been in that’s made by Nissan has been on the LEAF platform; all of its major controls can be accessed electronically. For example, the steering wheel can be controlled (and was controlled, as I saw when visiting Nissan’s test facility) by the motors used for power steering assist, and it’s throttle (well, for electrons)-by-wire, and so on.

So, at this moment I don’t think anyone’s Leaf is in any danger other than having a drained battery and an interior like a refrigerator, but that’s not to say nothing else will be figured out. This is a huge security breach that Nissan needs to address as soon as possible. (I reached out to Nissan for comment on this story and will update as soon as I get one.)

So far, Nissan has not fixed this after at least 32 days, Hunt said. Here’s how he summarized his contact with Nissan:

I made multiple attempts over more than a month to get Nissan to resolve this and it was only after the Canadian email and French forum posts came to light that I eventually advised them I’d be publishing this post. Here’s the timeline (dates are Australian Eastern Standard time):

  • 23 Jan: Full details of the findings sent and acknowledged by Nissan Information Security Threat Intelligence in the U.S.A.
  • 30 Jan: Phone call with Nissan to fully explain how the risk was discovered and the potential ramifications followed up by an email with further details
  • 12 Feb: Sent an email to ask about progress and offer further support to which I was advised “We’re making progress toward a solution”
  • 20 Feb: Sent details as provided by the Canadian owner (including a link to the discussion of the risk in the public forum) and advised I’d be publishing this blog post “later next week”
  • 24 Feb: This blog published, 4 weeks and 4 days after first disclosure

All in all, I sent ten emails (there was some to-and-fro) and had one phone call. This morning I did hear back with a request to wait “a few weeks” before publishing, but given the extensive online discussions in public forums and the more than one-month lead time there’d already been, I advised I’d be publishing later that night and have not heard back since. I also invited Nissan to make any comments they’d like to include in this post when I contacted them on 20 Feb or provide any feedback on why they might not consider this a risk. However, there was nothing to that effect when I heard back from them earlier today, but I’ll gladly add an update later on if they’d like to contribute.

I do want to make it clear though that especially in the earlier discussions, Nissan handled this really well. It was easy to get in touch with the right people quickly and they made the time to talk and understand the issue. They were receptive and whilst I obviously would have liked to see this rectified quickly, compared to most ethical disclosure experiences security researchers have, Nissan was exemplary.

It’s great Nissan was “exemplary” but it would have been even better if they’d implemented at least some basic security before making their cars’ data and controls available over the internet.


Security via obscurity just isn’t going to cut it anymore, as Troy Hunt has proven through his careful and methodical work. I’m not sure why carmakers don’t seem to be taking this sort of security seriously, but it’s time for them to step up.

After all, doing so will save them from PR headaches like this, and the likely forthcoming stories your aunt will Facebook you about how the terrorists are going to make all the Leafs hunt us down like dogs.

Until they have to recharge, at least.

(Thanks, Matt and Brandon!)

 

 

Credit:  Jason Torchinsky

[CRITICAL] CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow

Have you ever been deep in the mines of debugging and suddenly realized that you were staring at something far more interesting than you were expecting? You are not alone! Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior and, after an intense investigation, we discovered the issue lay in glibc and not in SSH as we were expecting. Thanks to this engineer’s keen observation, we were able to determine that the issue could result in remote code execution. We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!

In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted to the issue via their bug tracker in July 2015 (bug). We couldn’t immediately tell whether a bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of Red Hat had also been studying the bug’s impact, albeit completely independently! Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”

This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams’ knowledge into a comprehensive patch and regression test to protect glibc users.

That patch is available here.

 

Issue Summary:

Our initial investigations showed that the issue affected all versions of glibc since 2.9. You should definitely update if you are on an older version though. If the vulnerability is detected, machine owners may wish to take steps to mitigate the risk of an attack. The glibc DNS client-side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack. Google has found some mitigations that may help prevent exploitation if you are not able to immediately patch your instance of glibc. The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. Our suggested mitigation is to limit the size of responses accepted by the local DNS resolver (e.g., via DNSMasq or similar programs), and to ensure that DNS queries are sent only to DNS servers that limit the response size for UDP responses with the truncation bit set.
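For a quick local triage, the installed glibc version can be read at runtime and compared against the affected range. This sketch is Linux-only and checks only the version number; a distribution may have backported the fix into an older version string, so treat a “vulnerable” result as a prompt to check your vendor’s advisory, not as confirmed exposure.

```python
import ctypes

def parse_version(ver):
    """Turn a glibc version string like '2.22' into a comparable tuple."""
    return tuple(int(part) for part in ver.split("."))

def is_vulnerable(ver):
    """CVE-2015-7547 affects glibc 2.9 through 2.22; 2.23 carries the fix."""
    return (2, 9) <= parse_version(ver) < (2, 23)

if __name__ == "__main__":
    try:
        libc = ctypes.CDLL("libc.so.6")  # Linux/glibc only
        libc.gnu_get_libc_version.restype = ctypes.c_char_p
        ver = libc.gnu_get_libc_version().decode()
        status = ("possibly VULNERABLE -- check vendor advisory"
                  if is_vulnerable(ver) else "outside the affected range")
        print(f"glibc {ver}: {status}")
    except OSError:
        print("libc.so.6 not found; this check only applies to glibc systems")
```

Google’s non-weaponized proof of concept (linked from the original post) remains the authoritative way to verify exposure.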

 

Technical information:

glibc reserves 2048 bytes in the stack through alloca() for the DNS answer at _nss_dns_gethostbyname4_r() for hosting responses to a DNS query. Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated. Under certain conditions a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow. The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.
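Stripped of glibc internals, the bookkeeping error can be shown with a toy model. This is not glibc’s actual code, just a sketch of the bug class: a heap buffer should take over once the response exceeds 2048 bytes, but when the pointer update is lost the oversized response lands in the stale 2048-byte stack buffer anyway.

```python
STACK_BUF_SIZE = 2048  # bytes reserved on the stack via alloca() in glibc

def bytes_past_stack_buffer(response_len, pointer_update_lost):
    """Toy model of the send_dg()/send_vc() buffer mismatch.

    When an oversized response arrives, a larger heap buffer is allocated
    and the buffer pointer/size should be updated. If that update is lost
    (the bug), the response is still written into the 2048-byte stack
    buffer, and everything beyond it smashes the stack.
    """
    if response_len <= STACK_BUF_SIZE:
        return 0  # fits in the stack buffer: no overflow
    if not pointer_update_lost:
        return 0  # heap buffer correctly used: safe
    return response_len - STACK_BUF_SIZE  # bytes written past the buffer
```

In the real bug, those overflowed bytes are attacker-controlled DNS response data, which is how the 0x42 pattern ends up on the stack in the debugging session shown later in this post.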

Exploitation:

Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post. With this Proof of Concept, you can verify if you are affected by this issue, and verify any mitigations you may wish to enact. As you can see in the below debugging session we are able to reliably control EIP/RIP.

(gdb) x/i $rip
=> 0x7fe156f0ccce <_nss_dns_gethostbyname4_r+398>: ret
(gdb) x/a $rsp
0x7fff56fd8a48: 0x4242424242424242 0x4242424242420042

When code crashes unexpectedly, it can be a sign of something much more significant than it appears; ignore crashes at your peril! Failed exploit attempts, due to ASLR, can leave indicators such as:

  • Crash on free(ptr) where ptr is controlled by the attacker.
  • Crash on free(ptr) where ptr is semi-controlled by the attacker since ptr has to be a valid readable address.
  • Crash reading from memory pointed by a local overwritten variable.
  • Crash writing to memory on an attacker-controlled pointer.

We would like to thank Neel Mehta, Thomas Garnier, Gynvael Coldwind, Michael Schaller, Tom Payne, Michael Haro, Damian Menscher, Matt Brown, Yunhong Gu, Florian Weimer, Carlos O’Donell and the rest of the glibc team for their help figuring out all details about this bug, exploitation, and patch development.

 

 

Credit:  Fermin J. Serna and Kevin Stadmeyer

Another Door to Windows | Hot Potato exploit

Microsoft Windows versions 7, 8, 10, Server 2008 and Server 2012 vulnerable to Hot Potato exploit which gives total control of PC/laptop to hackers

Security researchers from Foxglove Security have discovered that almost all recent versions of Microsoft’s Windows operating system are vulnerable to a privilege escalation exploit. By chaining together a series of known Windows security flaws, the Foxglove researchers found a way to break into PCs and laptops running Windows 7/8/8.1/10 and Windows Server 2008/2012.

The Foxglove researchers have named the exploit Hot Potato. It relies on three different types of attacks, some of which were discovered back in 2000, at the start of the new millennium. By chaining these together, hackers can remotely gain complete access to PCs and laptops running the above versions of Windows.

Surprisingly, some of the exploits were found way back in 2000 but have still not been patched by Microsoft, with the explanation that patching them would effectively break compatibility between the different versions of the operating system.

Hot Potato

Hot Potato combines three different security issues in the Windows operating system: a local NBNS (NetBIOS Name Service) spoofing technique that is 100% effective, fake WPAD (Web Proxy Auto-Discovery Protocol) proxy servers that attackers can set up, and an attack against the Windows NTLM (NT LAN Manager) authentication protocol.

Chaining these flaws allows hackers to elevate an application’s permissions from the lowest rank to system-level privileges, the Windows analog of a Linux/Android root user’s permissions.

The Foxglove researchers built their exploit on top of proof-of-concept code released by Google’s Project Zero team in 2014 and presented their findings at the ShmooCon security conference over the past weekend.

They have also posted proof-of-concept videos on YouTube in which the researchers break Windows versions such as 7, 8, 10, Server 2008 and Server 2012.

You can also access the proof of concept on Foxglove’s GitHub page here.

Mitigation

The researchers said that using SMB (Server Message Block) signing may theoretically block the attack. Another way to stop the NTLM relay attack is to enable “Extended Protection for Authentication” in Windows.

 

 

Credit:  Vijay Prabhu, techworm

Industrial Control Systems (ICS/SCADA) and Cyber Security

It’s a cyber war out there! Is your company ready for battle?

Industry is slowly waking up to the fact that its facilities are in the crosshairs, the targets of cyber attacks by bad actors trying to exploit vulnerabilities in industrial control systems (ICSs) to steal intellectual property or damage critical equipment.

Whether caused by sophisticated hacking teams assembled by nation states, cyber criminal organizations, potential competitors, disgruntled or careless employees, or just bored teenagers in their bedrooms, cyber intrusions into industrial facilities now number in the hundreds of thousands every year. Even unintentional cyber incidents can cause damage.

The result can be more dangerous than stolen credit card numbers or government personnel information, because it can cause real physical damage like a destroyed or damaged power plant, water system, chemical plant, or oil and gas facility. Attacks like these could bring a region or even an entire nation to its knees. But even smaller-scale events, such as hackers taking over control of cars on a busy interstate, or manipulating the recipe controls at a food processing plant, could wreak havoc.

In exploring this issue, one fact stands out: industrial control systems were never designed to be secure. Many have also been in place for 20 or 30 years, long before cybersecurity became an issue. It’s no wonder that retrofitting this massive installed base to overcome 21st century cyber vulnerabilities can seem like an insurmountable task.

Digital threats, physical dangers

“Everyone’s concerned about viruses and worms, but Stuxnet never killed anyone,” says Joe Weiss of Applied Control Solutions, who has amassed a database of more than 750 actual control system cyber incidents. “Compromised industrial control systems, on the other hand, have caused significant electrical outages, environmental and equipment damage, and even killed people.”

Weiss is managing director of the ISA99 committee, which helped develop the ISA/IEC 62443 series of standards on industrial automation and control systems security. “IT people are focused on vulnerabilities from information loss, but it’s the impact of ICS failures on equipment, people and the environment that matters to industrial control professionals,” he says. “Not every ICS cyber vulnerability is critical. We need to focus on what can affect control system operation so that end users can prioritize threats to system reliability and safety.”

Weiss sees his role as waking industry up to the real dangers it faces from compromised control systems. “Industry is a backwater when it comes to cybersecurity,” he says. “We don’t have the systems, the training or the technologies to address it because too many people still don’t believe it’s real.”

He takes a broader view of cybersecurity than many people, citing the emissions fraud at Volkswagen, where software was intentionally manipulated to falsify test results. “Industrial control lies more and more within the digital world,” he says. “Anything that changes the intent of the control system function, whether or not it’s with malicious intent, is a cyber issue.”

The enemy is often us

Companies may think they’re safe if their manufacturing systems are not connected to the Internet, but it turns out the biggest threat comes from their own employees.

“There’s no such thing as an air gap,” says Ben Orchard, applications engineer at Opto 22, referring to control systems that aren’t connected to the Internet. “Malicious software (malware) is chiefly introduced into control systems by employees, vendors or contractors who plug devices like an infected smartphone into a computer’s USB port to charge it or bring in a corrupted thumb drive.”

Since people are the biggest weakness in any security system, Orchard recommends disabling or even filling in non-essential USB ports with epoxy. Other basics include only executing software that’s been cryptographically signed by a trusted source, locking down the operating system so that no email or web browsing is allowed, and constant monitoring of control network traffic.
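Orchard’s advice to execute only cryptographically signed software boils down to checking a binary against a trusted manifest before it runs. A minimal sketch of the whitelisting idea follows; the file names and manifest format are invented for illustration, and real products enforce this at the operating-system level and sign the manifest itself.

```python
import hashlib

# Hypothetical whitelist: file name -> SHA-256 digest of the approved build.
# A real deployment would sign this manifest and protect the signing key.
APPROVED = {
    "pump_hmi.exe": hashlib.sha256(b"trusted build contents").hexdigest(),
}

def is_approved(name, contents):
    """Allow a binary to run only if its digest matches the whitelist."""
    return APPROVED.get(name) == hashlib.sha256(contents).hexdigest()
```

A thumb drive carrying a tampered copy of an approved tool would fail this check, which is exactly the infection path Orchard describes.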

If you’re looking for proven practices to improve the cybersecurity of your facilities or production systems, Orchard recommends the ones developed by the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) at the Department of Homeland Security. The guidelines address best practices like defense-in-depth, security zoning and encryption.

“They’ve done an astoundingly good job of assembling logical, practical, real-world advice,” he emphasizes. In particular, Orchard recommends downloading the first document in the series, “Improving Industrial Control Systems Cybersecurity with Defense-in-Depth Strategies” (see “More Sources for Security Guidance”).

Help is a call away

Automation suppliers are actively engaged in the cyber battle. Vendors are continually adding new levels of security to their products and providing a range of customer education and support services.

Honeywell Process Solutions has fielded a team of more than 85 experts who can provide vendor-agnostic risk analysis, perform forensics, and help customers establish security policies, says Mike Baldi, cybersecurity architect for the company. Honeywell has also created a lab in Georgia to demonstrate and validate security solutions for customers, as well as train its own engineers.

Honeywell’s industrial cybersecurity Risk Manager software provides real-time, continuous monitoring of threats, vulnerabilities and risks specific to a control system, with immediate notification of weaknesses in security.

“We’ll always be chasing evildoers,” Baldi says, “so advanced analytics are crucial, not just to analyze the latest attack but to identify patterns in these attacks that can guide us to the best solutions. It’s essential that companies analyze threat risk, test solutions at installation and maintain security over the life of a system.”

Turning control security on its head

Bedrock Automation is a company that’s making waves in the world of industrial control with a revolutionary platform that integrates an inside-out, bottom-up approach to cybersecurity.

“The cybersecurity issue was the catalyst for the creation of our new control platform. It’s software-defined automation,” says Albert Rooyakkers, CTO and engineering vice president for Bedrock.

Cybersecurity concerns were the catalyst for Bedrock Automation’s new control platform.

The company says the Bedrock system—created by a team of engineering experts from the automation and semiconductor industries—can function as a PLC, DCS, RTU or safety system. Bedrock is using independent system integrators and high-tech distributors as its channels to the market.

“This platform unification was the big breakthrough, something the major automation suppliers have never been able to achieve,” Rooyakkers says. “The incremental security improvements they’ve been doing for years are not likely to work because there are too many gaps to patch. It’s like taking a stick to a gunfight.”

According to research firm Frost & Sullivan, which cited the company’s work with its 2015 Best Practices Award for Industrial Control System Innovation, “Bedrock Automation has designed a control system with layered and embedded cybersecurity features, starting at the transistor level using secure microcontroller technology that includes secure memory, hardware accelerators and true random number generators.”

Rooyakkers adds, “ICS security in the cyber age will require a complete rethinking of control system design. Standards and best practices are being developed, but it will take a generation at the current pace of progress to achieve the level of security industry needs today.”

Hardening hardware and software

Given that most companies can’t afford a wholesale replacement of their existing control systems, many automation vendors are focused on making incremental improvements to harden their software and hardware.

Among the latest cybersecurity introductions by Emerson Process Management for its DeltaV control platform is a suite of cybersecurity software products from Intel Security’s McAfee Labs, including traditional antivirus, centralized whitelisting of applications allowed to run, security information and event management (SIEM) to perform analytics on events on everything from firewalls to operator stations, and network monitoring to identify unusual network communications.

“We’ve been hardening the control system with every DeltaV release,” says Neil Peterson, DeltaV product marketing director. “With the addition of SIEM, for example, you now have a tool that can manage the cybersecurity health of the control system as a whole, alerting you to unusual circumstances that need to be checked out. These include unauthorized communication attempts on your firewall and failed log-on attempts.”
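The kind of SIEM rule described here, alerting on repeated failed log-on attempts, reduces to a simple aggregation over security events. A minimal sketch (the event format and threshold are invented for illustration; commercial SIEMs add time windows, correlation and dashboards):

```python
from collections import Counter

THRESHOLD = 3  # alert once a user accumulates this many failed log-ons

def failed_logon_alerts(events):
    """events: iterable of (user, outcome) pairs from the security log.

    Returns the sorted list of users whose failed log-on count meets the
    threshold -- the 'unusual circumstance' a SIEM would flag for review.
    """
    failures = Counter(user for user, outcome in events if outcome == "FAIL")
    return sorted(user for user, count in failures.items()
                  if count >= THRESHOLD)

# Invented sample events for illustration:
events = [("operator1", "FAIL"), ("operator1", "FAIL"),
          ("operator1", "FAIL"), ("engineer2", "OK"),
          ("engineer2", "FAIL")]
```

The same pattern applies to the other example Peterson gives: counting unauthorized communication attempts per source address on a firewall.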

There’s no silver bullet for cybersecurity, says Rick Gorskie, manager of Emerson’s asset strategy and management program. “Security requires a multilayered approach that combines technology, practices and people,” he says. “That furnace meltdown at a German steel mill purportedly started when someone clicked on a phishing email infected with malware, which allowed hackers to make their way down the network to attack the blast furnace.”

Gorskie says incidents like that are why he’s received more customer questions about security in the past year than in the previous 20 years. “We’re getting more board-level interest than ever before, and they’re starting to fund some serious projects because they want to avoid shutdowns,” Peterson adds. “Security is very hard and it requires shared responsibility. We develop systems with locks, but it requires ongoing vigilance to keep them secure. You can’t set it and forget it.”

More than networks

“Security is more than a network issue,” says Clark Case, security platform leader for Rockwell Automation. “Content (intellectual property) protection, tamper detection, user authentication and access control are just as important.”

While user concerns vary by industry and customer, Case says, “machine builders who ship their equipment offshore are particularly concerned about IP protection, so we’re releasing software licensing technology so they can control access to their source code.”

Rockwell has introduced a number of products and services to help customers design, deploy and maintain more secure control systems, according to Case. The company’s FactoryTalk AssetCentre software, for example, lets users see who is making changes to the control system and what changes have been made, including which machine the changes were made on and who was logged on at the time.

“We’re also making our controllers more secure in the design and manufacturing process so that they’re resilient to standard attacks,” Case adds. In addition to fielding a security incident response team for its products, Rockwell works closely with security groups like ICS-CERT.

“Companies need to step back and take a broader look at system risks—what bad things can be done, and how best to address them,” Case says. “Companies doing the best at mitigating cybersecurity risks have people on staff who are responsible for control system security. Fortunately, there are a growing number of operations technology people with the required skillsets.”

Different worlds

Cybersecurity is that much more important in manufacturing. “When there’s a breach in the IT world, you can take the system down and fix it,” explains Jeff Caldwell, chief architect for cybersecurity at Belden. “But in the industrial world, you have to continue to operate when there’s a problem. You can’t have power plants shutting down or planes falling from the sky. Industry’s primary concerns are resilience, uptime and safety. Cybersecurity is just a segment of that.”

Belden promotes a safe networking architecture that includes every device connected to those networks. “Consequently, we’ve developed cybersecurity solutions for all seven layers of the ISO stack,” Caldwell says.

Key elements of this architecture include security zoning; system change management; intrusion detection and prevention to identify, halt and report invalid and anomalous traffic; security sentinels at every network juncture; layer 2 deep packet inspection in front of PLCs and RTUs as well as between security zones; authentication for user and administrator access; encryption of VPN traffic; and secure wireless.
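To make the deep-packet-inspection element concrete: an industrial-protocol DPI device in front of a PLC parses each frame down to the protocol's function code and applies policy there, not just at the IP layer. The sketch below does this for Modbus/TCP, whose MBAP header layout is fixed by the Modbus specification; the trusted-zone policy itself is a made-up example, not Belden's Tofino rule set.

```python
# Illustrative sketch of Modbus/TCP deep packet inspection in front of
# a PLC: parse the 7-byte MBAP header, read the function code, and
# allow writes only from trusted security zones. The frame layout
# follows the Modbus/TCP spec; the zone policy is a hypothetical example.
READ_FUNCTIONS = {1, 2, 3, 4}       # read coils / inputs / registers
WRITE_FUNCTIONS = {5, 6, 15, 16}    # write coil(s) / register(s)

def inspect_modbus_frame(frame: bytes, source_trusted: bool) -> str:
    if len(frame) < 8:
        return "drop: truncated frame"
    protocol_id = int.from_bytes(frame[2:4], "big")
    if protocol_id != 0:            # 0 identifies Modbus in the MBAP header
        return "drop: not Modbus"
    function_code = frame[7]        # first PDU byte after the 7-byte MBAP
    if function_code in READ_FUNCTIONS:
        return "allow"
    if function_code in WRITE_FUNCTIONS and source_trusted:
        return "allow"
    return "drop: write or unknown function from untrusted source"
```

A filter like this lets an HMI in a trusted zone write setpoints while blocking write commands, and thus logic tampering, from anywhere else on the network.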

Belden recently acquired Tripwire, a company that specializes in system change management, as part of a cybersecurity product portfolio that includes Tofino layer 2 firewalls with industrial protocol deep packet inspection.

Being knowledgeable about industrial protocols is essential to any control system security solution, Caldwell says. “IT uses gigantic signature files to identify patterns that indicate security problems, but that doesn’t work in the industrial world where communications often flow over serial cables that can’t carry large files,” he says. “Let’s face it: The IT world has failed at control system security. You just can’t jam IT solutions onto control systems and make them work. Only 20 percent of industrial cyber incidents are intentional, and disgruntled employees cause half of those. Just 10 percent come from hackers. That’s why it’s critical to protect against everything.”

This is not a test

“Most manufacturers have some degree of security preparedness in place, but it’s unknown whether these steps are enough to repel a full-scale targeted assault on a facility,” cautions Richard Clark, technical marketing specialist for SCADA cybersecurity at Schneider Electric Software, which includes InduSoft and Wonderware. “It seems more likely, as has been demonstrated in several modeling and public test sessions by ICS-CERT at Idaho National Lab, that such an attack would be successful because most engineers and IT personnel would not know how to react properly to such an event.”

Manufacturers need to ask themselves what damage could be caused at their facility if a targeted attack succeeded and their production system was shut down for weeks, Clark says. “These are the type of what-if scenarios that are frequently never explored. Few security managers or engineering teams have performed a single-point failure analysis of their facility, and even fewer have ever done a formal risk assessment using the results of the analysis,” he says. “This is especially irresponsible since there are excellent tools to help them find answers to these questions and determine if they are dedicating enough resources to safeguarding their facilities from a breach or control system malfunction.”

Clark says forward-looking security engineers and IT personnel have begun using automation to assist them in preventing attacks, combining security solutions to create what is known as defense-in-depth. These layers of disparate security measures can virtually surround critical assets and infrastructure.

“Once customers make the effort to begin to understand the nature of these threats to their facilities, products and employees, the safer and more operationally efficient they will become,” he says.


More Sources for Security Guidance

ICS-CERT Guidelines >>https://ics-cert.us-cert.gov/Recommended-Practices

Belden Security Blogs >>http://www.belden.com/blog/industrialsecurity/index.cfm

Schneider Electric/InduSoft Security eBooks >>https://www.smashwords.com/books/view/509999

NIST Cybersecurity Framework Gap Analysis Tool >>https://www.us-cert.gov/forms/csetiso

PBS Nova Program on Cybersecurity >>http://video.pbs.org/video/2365582515/

ISA 99/ISA/IEC 62443 Guidelines >>http://isa99.isa.org/ISA99%20Wiki/Home.aspx

Credit: Automation World