Kemuri Water Company (KWC) | Hackers change chemical settings at water treatment plant

The unnamed water district had asked Verizon to assess its networks for indications of a security breach. It said there was no evidence of unauthorized access, and the assessment was a proactive measure as part of ongoing efforts to keep its systems and networks healthy.

Verizon examined the company’s IT systems, which supported end users and corporate functions, as well as Operational Technology (OT) systems, which were behind the distribution, control and metering of the regional water supply.

The assessment found several high-risk vulnerabilities on the Internet-facing perimeter and said that the OT end relied heavily on antiquated computer systems running operating systems from 10 or more years ago.

Many critical IT and OT functions ran on a single IBM AS/400 system which the company described as its SCADA (Supervisory Control and Data Acquisition) platform. This system ran the water district’s valve and flow control application that was responsible for manipulating hundreds of programmable logic controllers (PLCs), and housed customer and billing information, as well as the company’s financials.

Interviews with the IT network team uncovered concerns surrounding recent suspicious cyber activity and it emerged that an unexplained pattern of valve and duct movements had occurred over the previous 60 days. These movements consisted of manipulating the PLCs that managed the amount of chemicals used to treat the water to make it safe to drink, as well as affecting the water flow rate, causing disruptions with water distribution, Verizon reported.

An analysis of the company’s internet traffic showed that some IP addresses previously linked to hacktivist attacks had connected to its online payment application.

Verizon said that it “found a high probability that any unauthorized access on the payment application would also expose sensitive information housed on the AS/400 system.” The investigation later showed that the hackers had exploited an easily identified vulnerability in the payment application, leading to the compromise of customer data. No evidence of fraudulent activity on the stolen accounts could be confirmed.

However, customer information was not the full extent of the breach. The investigation revealed that, using the same credentials found on the payment app webserver, the hackers were able to interface with the water district’s valve and flow control application, also running on the AS/400 system.

During these connections, they managed to manipulate the system to alter the amount of chemicals that went into the water supply and thus interfere with water treatment and production so that the recovery time to replenish water supplies increased. Thanks to alerts, the company was able to quickly identify and reverse the chemical and flow changes, largely minimizing the impact on customers. No clear motive for the attack was found, Verizon noted.

The company has since taken remediation measures to protect its systems.

In its concluding remarks on the incident, Verizon said: “Many issues like outdated systems and missing patches contributed to the data breach — the lack of isolation of critical assets, weak authentication mechanisms and unsafe practices of protecting passwords also enabled the threat actors to gain far more access than should have been possible.”

Acknowledging that the company’s alert functionality played a key role in detecting the chemical and flow changes, Verizon said that implementation of a “layered defense-in-depth strategy” could have detected the attack earlier, limiting its success or preventing it altogether.

 

About the attack [UPDATED]

A “hacktivist” group with ties to Syria compromised Kemuri Water Company’s computers after exploiting unpatched web vulnerabilities in its internet-facing customer payment portal, it is reported.

The hack – which involved SQL injection and phishing – exposed KWC’s ageing AS/400-based operational control system because login credentials for the AS/400 were stored on the front-end web server. This system, which was connected to the internet, managed the programmable logic controllers (PLCs) regulating the valves and ducts that controlled the flow of water through the system and of the chemicals used to treat it. Many critical IT and operational technology functions ran on this single AS/400 system, a team of computer forensic experts from Verizon subsequently concluded.

Verizon’s report stated: “Our endpoint forensic analysis revealed a linkage with the recent pattern of unauthorised crossover. Using the same credentials found on the payment app webserver, the threat actors were able to interface with the water district’s valve and flow control application, also running on the AS/400 system. We also discovered four separate connections over a 60-day period, leading right up to our assessment. During these connections, the threat actors modified application settings with little apparent knowledge of how the flow control system worked. In at least two instances, they managed to manipulate the system to alter the amount of chemicals that went into the water supply and thus handicap water treatment and production capabilities so that the recovery time to replenish water supplies increased. Fortunately, based on alert functionality, KWC was able to quickly identify and reverse the chemical and flow changes, largely minimising the impact on customers. No clear motive for the attack was found.”

Verizon’s RISK Team uncovered evidence that the hacktivists had manipulated the valves controlling the flow of chemicals twice – though fortunately to no particular effect. It seems the activists lacked either the knowledge of SCADA systems or the intent to do any harm.

The same hack also resulted in the exposure of personal information of the utility’s 2.5 million customers. There’s no evidence that this has been monetized or used to commit fraud.

Nonetheless, the whole incident highlights the weaknesses in securing critical infrastructure systems, which often rely on ageing or hopelessly insecure setups.

 


 

More Information

Monzy Merza, Splunk’s director of cyber research and chief security evangelist, commented: “Dedicated and opportunistic attackers will continue to exploit low-hanging fruit present in outdated or unpatched systems. We continue to see infrastructure systems being targeted because they are generally under-resourced or believed to be out of band or not connected to the internet.”

“Beyond the clear need to invest in intrusion detection, prevention, patch management and analytics-driven security measures, this breach underscores the importance of actionable intelligence. Reports like Verizon’s are important sources of insight. Organisations must leverage this information to collectively raise the bar in security to better detect, prevent and respond to advanced attacks. Working collectively is our best route to getting ahead of attackers,” he added.

Reports that hackers have breached water treatment plants are rare but not unprecedented. For example, computer screenshots posted online in November 2011 by hackers who claimed to have pwned its systems purported to show the user interface used to monitor and control equipment at the Water and Sewer Department for the City of South Houston, Texas. The claim followed attempts by the US Department of Homeland Security to dismiss a separate water utility hack claim days earlier.

More recently hackers caused “serious damage” after breaching a German steel mill and wrecking one of its blast furnaces, according to a German government agency. Hackers got into production systems after tricking victims with spear phishing emails, said the agency.

Spear phishing also seems to have played a role in attacks involving the BlackEnergy malware against power utilities in Ukraine and other targets last December. The malware was used to steal user credentials as part of a complex attack that resulted in power outages that ultimately left more than 200,000 people temporarily without power on 23 December.

 

Credit:  watertechonline, theregister

[CRITICAL] Nissan Leaf Can Be Hacked Via Web Browser From Anywhere In The World


What if a car could be controlled from a computer halfway around the world? Computer security researcher and hacker Troy Hunt has managed to do just that, via a web browser and an Internet connection, with an unmodified Nissan Leaf in another country. While so far the control was limited to the HVAC system, it’s a revealing demonstration of what’s possible.

Hunt writes that his experiment started when an attendee at a developer security conference where Hunt was presenting realized that his car, a Nissan Leaf, could be accessed via the internet using Nissan’s phone app. Using the same methods as the app itself, any other Nissan Leaf could be controlled as well, from pretty much anywhere.

Hunt made contact with another security researcher and Leaf-owner, Scott Helme. Helme is based in the UK, and Hunt is based in Australia, so they arranged an experiment that would involve Hunt controlling Helme’s LEAF from halfway across the world. Here’s the video they produced of that experiment:

As you can see, Hunt was able to access the Leaf in the UK, which wasn’t even switched on, and gather extensive data from the car’s computer about recent trips, the distances of those trips (recorded, oddly, in yards), power usage information, charge state, and so on. He was also able to access the HVAC system to turn on the heater or A/C, and to turn on the heated seats.

It makes sense these functions would be the most readily available, because those are essentially the set of things possible via Nissan’s Leaf mobile app, which people use to heat up or cool their cars before they get to them, remotely check on the state of charge, and so on.

This app is the key to how the Leaf can be accessed via the web, since that’s exactly what the app does. The original (and anonymous) researcher found that by making his computer a proxy between the app and the internet, the requests the app made to Nissan’s servers could be seen. Here’s what a request looks like:

GET https://[redacted].com/orchestration_1111/gdc/BatteryStatusRecordsRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris&TimeFrom=2014-09-27T09:15:21

If you look at that request, you can see that it includes a tag for VIN, which is the Vehicle Identification Number (obfuscated here) of the car. Changing this VIN is really all you need to do to access any particular Leaf. Remember, VINs are visible through the windshield of every car, by law.
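To make the anatomy of that request concrete, here is a minimal Python sketch (mine, not Hunt’s) that pulls the query string apart; the host name is a placeholder since the real server address was redacted. It simply shows that the VIN is the only vehicle-specific value in the request:

from urllib.parse import urlsplit, parse_qs

# Placeholder host -- the real server name was redacted in Hunt's write-up.
url = ("https://redacted.example.com/orchestration_1111/gdc/"
       "BatteryStatusRecordsRequest.php?RegionCode=NE&lg=no-NO&DCMID="
       "&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris&TimeFrom=2014-09-27T09:15:21")

params = parse_qs(urlsplit(url).query, keep_blank_values=True)
for name, values in params.items():
    print(f"{name} = {values[0]!r}")

# No session token, signature, or user identifier appears anywhere --
# the VIN is the only parameter tied to a specific car.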

Hunt describes the process on his site, and notes some alarming details:

This is pretty self-explanatory if you read through the response; we’re seeing the battery status of his LEAF. But what got Jan’s attention is not that he could get the vehicle’s present status, but rather that the request his phone had issued didn’t appear to contain any identity data about his authenticated session.

In other words, he was accessing the API anonymously. It’s a GET request so there was nothing passed in the body nor was there anything like a bearer token in the request header. In fact, the only thing identifying his vehicle was the VIN which I’ve partially obfuscated in the URL above.

So, there’s no real security here to prevent accessing data on a LEAF, nor any attempt to verify the identity on either end of the connection.
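For contrast, here is a rough Python sketch (my illustration, not Nissan’s actual API) of what a minimally authenticated version of the same call would look like, with a per-session bearer token so that knowing a VIN alone would prove nothing; the URL and token are placeholders:

import requests

# Hypothetical endpoint and token, shown only to illustrate the missing control:
# the server would have to verify that the token's session actually owns this VIN.
resp = requests.get(
    "https://api.example.com/vehicle/battery-status",
    headers={"Authorization": "Bearer <session-token>"},
    params={"vin": "SJNFAAZE0U60XXXXX"},
    timeout=10,
)
print(resp.status_code)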


And it gets worse. Here, quoting from Hunt’s site, he’s using the name “Jan” to refer to the anonymous Leaf-owning hacker who discovered this:

But then he tried turning it on and observed this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

That request returned this response:

{
  "status": 200,
  "message": "success",
  "userId": "******",
  "vin": "SJNFAAZE0U60****",
  "resultKey": "***************************"
}

This time, personal information about Jan was returned, namely his user ID which was a variation of his actual name. The VIN passed in the request also came back in the response and a result key was returned.

He then turned the climate control off and watched as the app issued this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteOffRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

All of these requests were made without an auth token of any kind; they were issued anonymously. Jan checked them by loading them up in Chrome as well and sure enough, the response was returned just fine. By now, it was pretty clear the API had absolutely zero access controls but the potential for invoking it under the identity of other vehicles wasn’t yet clear.

Even if you don’t understand the code, here’s what all that means: we have the ability to get personal data and control functions of the car from pretty much anywhere with a web connection, as long as you know the target car’s VIN.

Hunt proved this was possible after some work, using a tool to generate Leaf VINs (only the last 5 or 6 digits are actually different) and sending a request for battery status to those VINs. Soon, they got the proper response back. Hunt explains the significance:

This wasn’t Jan’s car; it was someone else’s LEAF. Our suspicion that the VIN was the only identifier required was confirmed and it became clear that there was a complete lack of auth on the service.

Of course it’s not just an issue related to retrieving vehicle status, remember the other APIs that can turn the climate control on or off. Anyone could potentially enumerate VINs and control the physical function of any vehicles that responded. That’s a very serious issue. I reported it to Nissan the day after we discovered this (I wanted Jan to provide me with more information first), yet as of today – 32 days later – the issue remains unresolved. You can read the disclosure timeline further down but certainly there were many messages and a phone call over a period of more than four weeks and it’s only now that I’m disclosing publicly…


(Now, just to be clear, this is not a how-to guide to mess with someone’s Leaf. You’ll note that the crucial server address has been redacted, so you can’t just type in those little segments of code and expect things to work.)

While at the moment, you can only control some HVAC functions and get access to the car’s charge state and driving history, that’s actually more worrying than you may initially think.

Not only is there the huge privacy issue of having your comings-and-goings logged and available, but if someone wanted to, they could crank the AC and drain the battery of a Leaf without too much trouble, stranding the owner somewhere.

There’s no provision for remote starting or unlocking at this point, but the Leaf is a fully drive-by-wire vehicle. It’s no coincidence that every fully autonomous car I’ve been in that’s made by Nissan has been on the LEAF platform; all of its major controls can be accessed electronically. For example, the steering wheel can be controlled (and was controlled, as I saw when visiting Nissan’s test facility) by the motors used for power steering assist, and it’s throttle (well, for electrons)-by-wire, and so on.

So, at this moment I don’t think anyone’s Leaf is in any danger other than having a drained battery and an interior like a refrigerator, but that’s not to say nothing else will be figured out. This is a huge security breach that Nissan needs to address as soon as possible. (I reached out to Nissan for comment on this story and will update as soon as I get one.)

So far, Nissan has not fixed this after at least 32 days, Hunt said. Here’s how he summarized his contact with Nissan:

I made multiple attempts over more than a month to get Nissan to resolve this and it was only after the Canadian email and French forum posts came to light that I eventually advised them I’d be publishing this post. Here’s the timeline (dates are Australian Eastern Standard time):

  • 23 Jan: Full details of the findings sent and acknowledged by Nissan Information Security Threat Intelligence in the U.S.A.
  • 30 Jan: Phone call with Nissan to fully explain how the risk was discovered and the potential ramifications followed up by an email with further details
  • 12 Feb: Sent an email to ask about progress and offer further support to which I was advised “We’re making progress toward a solution”
  • 20 Feb: Sent details as provided by the Canadian owner (including a link to the discussion of the risk in the public forum) and advised I’d be publishing this blog post “later next week”
  • 24 Feb: This blog published, 4 weeks and 4 days after first disclosure

All in all, I sent ten emails (there was some to-and-fro) and had one phone call. This morning I did hear back with a request to wait “a few weeks” before publishing, but given the extensive online discussions in public forums and the more than one-month lead time there’d already been, I advised I’d be publishing later that night and have not heard back since. I also invited Nissan to make any comments they’d like to include in this post when I contacted them on 20 Feb or provide any feedback on why they might not consider this a risk. However, there was nothing to that effect when I heard back from them earlier today, but I’ll gladly add an update later on if they’d like to contribute.

I do want to make it clear though that especially in the earlier discussions, Nissan handled this really well. It was easy to get in touch with the right people quickly and they made the time to talk and understand the issue. They were receptive and whilst I obviously would have liked to see this rectified quickly, compared to most ethical disclosure experiences security researchers have, Nissan was exemplary.

It’s great Nissan was “exemplary” but it would have been even better if they’d implemented at least some basic security before making their cars’ data and controls available over the internet.


Security via obscurity just isn’t going to cut it anymore, as Troy Hunt has proven through his careful and methodical work. I’m not sure why carmakers don’t seem to be taking this sort of security seriously, but it’s time for them to step up.

After all, doing so will save them from PR headaches like this, and the likely forthcoming stories your aunt will Facebook you about how the terrorists are going to make all the Leafs hunt us down like dogs.

Until they have to recharge, at least.

(Thanks, Matt and Brandon!)

 

 

Credit:  Jason Torchinsky

iBackDoor: High-Risk Code Hits iOS Apps

Introduction

FireEye mobile researchers recently discovered potentially “backdoored” versions of an ad library embedded in thousands of iOS apps originally published in the Apple App Store. The affected versions of this library embedded functionality in iOS apps that used the library to display ads, allowing for potential malicious access to sensitive user data and device functionality.

These potential backdoors could have been controlled remotely by loading JavaScript code from a remote server to perform the following actions on an iOS device:

  • Capture audio and screenshots
  • Monitor and upload device location
  • Read/delete/create/modify files in the app’s data container
  • Read/write/reset the app’s keychain (e.g., app password storage)
  • Post encrypted data to remote servers
  • Open URL schemes to identify and launch other apps installed on the device
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button

The offending ad library contained identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the potentially backdoored ad library: version codes 5.3.3 to 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2] – version 7.0.5 – the potential backdoors are not present. It is unclear whether the potentially backdoored versions of the ad library were released by adSage or if they were created and/or compromised by a malicious third party.

As of November 4, we have identified 2,846 iOS apps containing the potentially backdoored versions of the mobiSage SDK. Among these, we observed more than 900 attempts to contact an adSage ad server capable of delivering JavaScript code to control the backdoors. We notified Apple of the complete list of affected apps and technical details on October 21, 2015.

While we have not observed the ad server deliver any malicious commands intended to trigger the most sensitive capabilities such as recording audio or stealing sensitive data, affected apps periodically contact the server to check for new JavaScript code. In the wrong hands, malicious JavaScript code that triggers the potential backdoors could be posted to eventually be downloaded and executed by affected apps.

Technical Details

As shown in Figure 1, the affected mobiSage library included two key components, implemented separately in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the potential backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the potential backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.

Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.

Figure 2: Communication via URL loading in WebView
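As a rough illustration of that URL format (a sketch of the scheme described above, not the library’s actual Objective-C dispatcher), the command and its parameters can be recovered by splitting everything after the adsagejs:// prefix; the captureAudio command and its arguments are borrowed from the example later in this article:

from urllib.parse import unquote

def parse_bridge_url(url):
    # Illustrative parser for the adsagejs://cmd&parameter format; in the real
    # library, msageCore maps the command to an Objective-C class and method.
    prefix = "adsagejs://"
    if not url.startswith(prefix):
        return None
    command, *params = unquote(url[len(prefix):]).split("&")
    return command, params

print(parse_bridge_url("adsagejs://captureAudio&10&40"))
# -> ('captureAudio', ['10', '40'])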

To process a command in its queue, msageCore dispatches the command, along with its parameters, to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.

Figure 3: Command dispatch in msageCore

At-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

msageCore Class Name: Interfaces

MSageCoreUIManagerPlugin: captureAudio:, captureImage:, openMail:, openSMS:, openApp:, openInAppStore:, openCamera:, openImagePicker:, …

MSageCoreLocation: start:, stop:, setTimer:, returnLocationInfo:webViewId:, …

MSageCorePluginFileModule: createDir, deleteDir:, deleteFile:, createFile:, getFileContent:, …

MSageCoreKeyChain: writeKeyValue:, readValueByKey:, resetValueByKey:

MSageCorePluginNetWork: sendHttpGet:, sendHttpPost:, sendHttpUpload:, …

MSageCoreEncryptPlugin: MD5Encrypt:, SHA1Encrypt:, AESEncrypt:, AESDecrypt:, DESEncrypt:, DESDecrypt:, XOREncrypt:, XORDecrypt:, RC4Encrypt:, RC4Decrypt, …

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the potential backdoors in the library. They expose the potential ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.

Beyond the selected interfaces, the ad library potentially exposed users to additional risks by including logic to promote and install “enpublic” apps as shown in Figure 4. As we have highlighted in previous blogs [footnotes 3, 4, 5, 6, 7], enpublic apps can introduce additional security risks by using private APIs in certain versions of iOS. These private APIs potentially allow for background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations. Apple has addressed a number of issues related to enpublic apps that we have brought to their attention.

Figure 4: Installing “enpublic” apps to bypass Apple App Store review

We can see how this ad library functions by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.

Figure 5: Capturing audio with duration and threshold

Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.

Figure 6: Reading the keychain in the kSecClassGenericPassword class

Remote control in msageJS

msageJS contains JavaScript code for communicating with a remote server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.

Figure 7: The file layout of msageJS

The command execution interface is constructed as follows:

          adsage.exec(className, methodName, argsList, onSuccess, onFailure);

The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:

adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40], onSuccess, onFailure);

Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.

Figure 8: Base64 encoded JavaScript component in Zip format
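The packaging scheme described above is easy to picture with a short sketch: a Zip archive, Base64-encoded, that gets decoded and unpacked at launch. The function below is a hypothetical illustration of that flow, not FireEye’s tooling or the library’s own code:

import base64
import io
import zipfile

def unpack_embedded_js(b64_blob, out_dir):
    # Decode the Base64 string extracted from the binary's data section,
    # then unpack the Zip archive it contains (index.html, sdkjs.js, ...).
    raw = base64.b64decode(b64_blob)
    with zipfile.ZipFile(io.BytesIO(raw)) as archive:
        archive.extractall(out_dir)
        return archive.namelist()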

When msageJS is launched, it sends a POST request to hxxp://entry.adsage.com/d/ to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9.

Figure 9: Server response to msageJS update request via HTTP POST

Enterprise Protection

To ensure the protection of our customers, FireEye has deployed detection rules in its Network Security (NX) and Mobile Threat Prevention (MTP) products to identify the affected apps and their network activities.

For FireEye NX customers, alerts will be generated if an employee uses an infected app while their iOS device is connected to the corporate network. FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users will receive on-device notifications of the risky app and IT administrators receive email alerts.

Conclusion

In this blog, we described an ad library that affected thousands of iOS apps with potential backdoor functionality. We revealed the internals of backdoors which could be used to trigger audio recording, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these potential backdoors in ad libraries could be controlled remotely by JavaScript code should their ad servers fall under malicious actors’ control.

[1] http://www.adsage.com/mobisage
[2] http://www.adsage.cn/
[3] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html
[4] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html
[5] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html
[6] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html
[7] https://www.virusbtn.com/virusbulletin/archive/2014/11/vb201411-Apple-without-shell

Credit:  Zhaofeng Chen, Adrian Mettler, Peter Gilbert, Yong Kang | Mobile Threats, Threat Research

Pro-Palestinian Hackers Took over Radio Tel Aviv Website

A group of pro-Palestinian hackers took over the official website of Radio Tel Aviv (TLV) on Sunday and left a defacement page on the homepage displaying anti-Israeli messages.

The group, going by the handle of AnonCoders, hacked and defaced the official website of Radio Tel Aviv.

The hackers left a defacement page along with messages both in support of Palestine and against Israel and Zionism.

The Israeli newspaper YT News reports that the defacement page uploaded by the group remained on Radio Tel Aviv’s website for more than a day before it was removed by the site’s admin. The radio transmission itself, however, was unharmed.

The message on the website described why the site was targeted:

“Because we are the voice of Palestine and we will not remain silent. And our main target is Zionism and Israhell, if you are asking why your website got hacked by us, it’s basically because we want to share our message.”

The targeted website, along with a zone-h mirror preserving a full preview of the defacement page as proof of the hack:

http://102fm.co.il/
http://zone-h.org/mirror/id/24931127?zh=1

This is not the first time pro-Palestinian hackers have taken over an Israeli entertainment service. In the past, the cyber wing of Hamas hacked the official TV broadcasts of Israeli Channel 2 and Channel 10 for a short period of time.

In 2013, Israel’s major traffic tunnel was hit by a massive cyber-attack, causing huge financial damage.

In March 2014, Al-Qassam hackers from Palestine compromised the IsraelDefense magazine database and its website to launch an SMS attack on Israeli journalists.

 

 

Credit: 

Researchers Hack Car via Insurance Dongle

Small devices installed in many automobiles allow remote attackers to hack into a car’s systems and take control of various functions, researchers have demonstrated.

 

Researchers at the University of California, San Diego analyzed commercial telematics control units (TCUs) to determine if they are vulnerable to cyberattacks.

TCUs are embedded systems on board a vehicle that provide a wide range of functions. The products offered by carmakers, such as GM’s OnStar and Ford’s Sync, provide voice and data communications, navigation, and allow users to remotely control the infotainment systems and other features.

Aftermarket TCUs, which connect to the vehicle through the standard On-Board Diagnostics (OBD) port, can serve various purposes, including driving assistance, vehicle diagnostics, security, and fleet management. These devices are also used by insurance companies that offer safe driving and low mileage discounts, and pay-per-mile insurance.

Researchers have conducted tests on C4E dongles produced by France-based Mobile Devices. These TCUs, acquired by the experts from eBay, are used by San Francisco-based car insurance firm Metromile, which offers its per-mile insurance option to Uber.

Aftermarket TCUs are mostly used for data collection, but the OBD-II port they are connected to also provides access to the car’s internal networks, specifically the controller area network (CAN) buses that are used to connect individual systems and sensors.

“CAN is a multi-master bus and thus any device with a CAN transceiver is able to send messages as well as receive. This presents a key security problem since as we, and others, have shown, transmit access to the CAN bus is frequently sufficient to obtain arbitrary control over all key vehicular systems (including throttle and brakes),” researchers explained in their paper.
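That multi-master property is easy to demonstrate with python-can on its built-in virtual bus (a sketch for illustration only; the arbitration ID and payload are made up and nothing here talks to a real vehicle): any node can transmit, and every other node on the bus receives the frame with no notion of sender authentication.

import can

# Two nodes on the same in-process virtual bus.
sender = can.Bus(interface="virtual", channel="demo")
receiver = can.Bus(interface="virtual", channel="demo")

frame = can.Message(arbitration_id=0x1A0, data=[0x01, 0x00, 0xFF],
                    is_extended_id=False)
sender.send(frame)

# The receiver gets the frame as-is; CAN itself carries no sender identity.
print(receiver.recv(timeout=1.0))

sender.shutdown()
receiver.shutdown()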

The experts have identified several vulnerabilities in the Mobile Devices product, including the lack of authentication for remotely accessible debug services, the use of hard-coded cryptographic keys (CVE-2015-2906) and hard-coded credentials (CVE-2015-2907), the use of SMS messages for remotely updating the dongle, and the lack of firmware update validation (CVE-2015-2908).

In their experiments, researchers managed to gain local access to the system via the device’s USB port, and remote access via the cellular data interface that provides Internet connectivity and via an SMS interface.

In a real-world demonstration, the experts hacked a Corvette fitted with a vulnerable device simply by sending it specially crafted SMS messages. By starting a reverse shell on the system, they managed to control the windshield wipers, and apply and disable brakes while the car was in motion. The experts said they could have also accessed various other features.

Corvette hacked via insurance dongle

The remote attacks only work if the attacker knows the IP address of the device or the phone number associated with the SIM card used for receiving SMS messages. However, researchers determined that Internet-accessible TCUs can be identified by searching the web for strings of words unique to their web interface, or by searching for information related to the Telnet and SSH servers. Thousands of potential TCUs were uncovered by experts using this method.

As for the SIM phone numbers, researchers believe many of them are sequentially assigned, which means an attacker might be able to obtain the information by determining the phone number for one device.

Researchers have reported their findings to Mobile Devices, Metromile, and Uber. Wired reported that Mobile Devices developed a patch that has been distributed by Metromile and Uber to affected products.

Mobile Devices told the researchers and the CERT Coordination Center at Carnegie Mellon University that many of the vulnerabilities have been fixed in newer versions of the software, and claimed that the attack described by experts should only work on developer/debugging devices, not on production deployments.

However, researchers noted that they discovered the vulnerabilities on recent production devices and they had not found the newer versions of software that should patch the security holes.

This is not the first time someone has taken control of a car using insurance dongles. In January, a researcher demonstrated that a device from Progressive Insurance used in more than two million vehicles was plagued by vulnerabilities that could have been exploited to remotely unlock doors, start the car, and collect engine information.

White hat hackers demonstrated on several occasions this summer that connected cars can be hacked. Charlie Miller and Chris Valasek remotely hijacked a Jeep, ultimately forcing Fiat Chrysler to recall 1.4 million vehicles to update their software. Last week, researchers reported finding several vulnerabilities in Tesla Model S, but they applauded the carmaker for its security architecture.

In July, senators Ed Markey and Richard Blumenthal introduced new legislation, the Security and Privacy in Your Car (SPY Car) Act, in an effort to establish federal standards to secure cars and protect drivers’ privacy.

 

 

Credit:  Eduard Kovacs

Firefox Under Fire: Anatomy of latest 0-day attack

On August 6th, the Mozilla Foundation released a security update for the Firefox web browser that fixes the CVE-2015-4495 vulnerability in Firefox’s embedded PDF viewer, PDF.js. This vulnerability allows attackers to bypass the same-origin policy and remotely execute JavaScript that will be interpreted in the local file context. This, in turn, allows attackers to read and write files on the local machine as well as upload them to a remote server. The exploit for this vulnerability is being actively used in the wild, so Firefox users are advised to update to the latest version (39.0.3 at the time of writing) immediately.

In this blog we provide an analysis of two versions of the script and share details about the associated attacks against Windows, Linux and OS X systems.

According to ESET’s LiveGrid® telemetry, the server at the IP address 185.86.77.48, which was hosting the malicious script, has been up since July 27, 2015. We can also find corroboration on one of the compromised forums.


Operatives from the Department on Combating Cybercrime of the Ministry of Internal Affairs of Ukraine, who responded promptly to our notification, have also confirmed that the malicious exfiltration server, hosted in Ukraine, has been online since July 27, 2015.

According to our monitoring of the threat, the server became inactive on August 8, 2015.

 

The script

The script used is not obfuscated and easy to analyze. Nevertheless, the code shows that the attackers had good knowledge of Firefox internals.

The malicious script creates an IFRAME with an empty PDF blob. When Firefox is about to open the PDF blob with the internal PDF viewer (PDF.js), new code is injected into the IFRAME (Figure 2). When this code executes, a new sandboxContext property is created within wrappedJSObject. A JavaScript function is written to the sandboxContext property. This function will later be invoked by subsequent code. Together, these steps lead to the successful bypass of the same-origin policy.

Code that creates sandboxContext property

The exploit is very reliable and works smoothly. However, it may display a warning which can catch the attention of tech-savvy users.

The warning message shown on a compromised site

After successful exploitation of the bug, execution passes to the exfiltration part of the code. The script supports both the Linux and Windows platforms. On Windows, it searches for configuration files belonging to popular FTP clients (such as FileZilla, SmartFTP and others), an SVN client, instant messaging clients (Psi+ and Pidgin), and the Amazon S3 client.

The list of collected files on Windows at the first stage of attack

These configuration files may contain saved login and password details.

On the Linux systems, the script sends following files to the remote server:

  • /etc/passwd
  • /etc/hosts
  • /etc/hostname
  • /etc/issue

It also parses the /etc/passwd file in order to get the home directories (homedir) of users on the system. The script then searches for files by mask in the collected home directories, avoiding the home directories of standard system users (such as daemon, bin, sys, sync and so forth).
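In practice that parsing amounts to something like the following sketch, which uses Python’s pwd module rather than reading /etc/passwd by hand; the UID cutoff used to skip system accounts is my assumption, not the script’s actual filter:

import pwd

# Collect home directories of "real" users; system accounts such as
# daemon, bin, sys or sync typically sit below UID 1000 and are skipped.
home_dirs = [
    entry.pw_dir
    for entry in pwd.getpwall()
    if entry.pw_uid >= 1000 and entry.pw_name != "nobody"
]
print(home_dirs)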

The list of collected files on Linux at stage 1 of attack

It collects and uploads such files as:

  • history (bash, MySQL, PostgreSQL)
  • SSH related configuration files and authorization keys
  • Configuration files for remote access software – Remmina
  • FileZilla configuration files
  • PSI+ configuration
  • text files with possible credentials and shell scripts

As is evident here, the purpose of the first version of the malicious script was to gather data used mostly by webmasters and site administrators. This allowed attackers to move on to compromising more websites.

 

The second version

The day after Mozilla released the patch for Firefox the attackers decided to go “all-in”: they registered two new domains and improved their script.

The two new malicious domains were maxcdnn[.]com (93.115.38.136) and acintcdn[.]net (185.86.77.48). The second IP address is the same one as used in the first version. Attackers selected these names because the domains look as if they belong to a content delivery network (CDN).

The improved script on the Windows platform not only collects configuration files for applications; it also collects text files containing almost all combinations of words of possible value to attackers (such as password, accounts, bitcoins, credit cards, exploits, certificates, and so on):

List of files collected on Windows during the second attack stage

The attackers improved the Linux script by adding new files to collect and also developed code that works on the Mac OS X operating system:

List of files collected on Macs during the second stage of an attack

Some Russian-speaking commentators misattributed this code to the Duqu malware, because some variables in the code have the text “dq” in them.

 

A copycat attack

Since the bug is easy to exploit and a working copy of the script is available to cybercriminals, different attackers have started to use it. We have seen various groups quickly adopt the exploit and start serving it, mostly on adult sites, from google-user-cache[.]com (108.61.205.41).

This malicious script does all the same things as the original script, but it collects different files:

The list of collected files used in copycat attack

 

Conclusion

The recent Firefox attacks are an example of active in-the-wild exploitation of a serious software vulnerability. The exploit shows that the malware-writers had a deep knowledge of Firefox internals. It is also an interesting one, since in most cases, exploits are used as an infection vector for other data-stealing trojans. In this instance, however, that was not necessary, because the malicious script alone was able to steal sensitive files from victims’ systems.

Additionally, the exploit started to be reused by other malware operators shortly after its discovery. This is common practice in the malware world.

ESET detects the malicious scripts as JS/Exploit.CVE-2015-4495. We also urge Firefox users to update their browser to the patched version (39.0.3). The internal Firefox PDF reader can also be disabled by changing the pdfjs.disabled setting to true.
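For scripted or test environments, the same preference can also be applied programmatically; the snippet below is one hedged example using Selenium (assuming Selenium 4 and geckodriver are installed), while interactive users can simply flip pdfjs.disabled in about:config:

from selenium import webdriver

# Launch Firefox with the built-in PDF viewer disabled (pdfjs.disabled = true).
options = webdriver.FirefoxOptions()
options.set_preference("pdfjs.disabled", True)

driver = webdriver.Firefox(options=options)
driver.get("https://example.com")
driver.quit()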

 

Indicators of Compromise

A partial list of compromised servers:

hxxp://www.akipress.org/

hxxp://www.tazabek.kg/

hxxp://www.super.kg/

hxxp://www.rusmmg.ru/

hxxp://forum.cs-cart.com/

hxxp://www.searchengines.ru/

hxxp://forum.nag.ru/

Servers used in attack:

maxcdnn[.]com (93.115.38.136)

acintcdn[.]net (185.86.77.48)

google-user-cache[.]com (108.61.205.41)

Hashes (SHA-1):

0A19CC67A471A352D76ACDA6327BC179547A7A25

2B1A220D523E46335823E7274093B5D44F262049

19BA06ADF175E2798F17A57FD38A855C83AAE03B

3EC8733AB8EAAEBD01E5379936F7181BCE4886B3

 
 

Credit:  Anton Cherepanov

DynamoRIO | Runtime Code Manipulation System

About DynamoRIO

DynamoRIO is a runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO exports an interface for building dynamic tools for a wide variety of uses: program analysis and understanding, profiling, instrumentation, optimization, translation, etc. Unlike many dynamic tool systems, DynamoRIO is not limited to insertion of callouts/trampolines and allows arbitrary modifications to application instructions via a powerful IA-32/AMD64 instruction manipulation library. DynamoRIO provides efficient, transparent, and comprehensive manipulation of unmodified applications running on stock operating systems (Windows or Linux) and commodity IA-32 and AMD64 hardware.

DynamoRIO’s powerful API abstracts away the details of the underlying infrastructure and allows the tool builder to concentrate on analyzing or modifying the application’s runtime code stream. API documentation is included in the release package and can also be browsed online.

 

 

Downloading DynamoRIO

DynamoRIO is available free of charge as a binary package for both Windows and Linux. DynamoRIO’s source code is available under a BSD license.

 

 

DynamoRIO Website: http://www.dynamorio.org

 

 

 

Anticuckoo – A tool to detect and crash Cuckoo Sandbox.

Anticuckoo

A tool to detect and crash Cuckoo Sandbox.

Tested in Cuckoo Sandbox Official and Accuvant’s Cuckoo version.

Reddit / netsec discussion about anticuckoo.

Features

  • Detection:
    • Cuckoo hooks detection (all kinds of Cuckoo hooks).
    • Suspicious data in its own memory (without APIs, page-per-page scanning).
  • Crash (execute with arguments) (outside a sandbox these args don't crash the program):
    • -c1: Modify the RET N instruction of a hooked API with a higher value, so the next call to the API pushes more args onto the stack. If the hooked API is called from Cuckoo's HookHandler, the program crashes because the HookHandler only pushes the real API args and the modified RET N instruction then corrupts the HookHandler's stack.

The overkill methods can be useful. For example, using the overkill methods you get two features in one: detection/crash and “a kind of Sleep” (Cuckoomon bypasses long Sleep calls).


Cuckoo Detection

Submit Release/anticuckoo.exe for analysis in Cuckoo Sandbox and check the screenshots (console output). You can also check Accessed Files in the Summary:

ScreenShot

Accessed Files in the Summary (Django web interface):

ScreenShot

Cuckoo Crash

Specify the crash argument in the submit options, e.g. -c1 (via the Django web interface):

ScreenShot

And check the screenshots, or connect via RDP to watch what is happening, to verify the crash. E.g. -c1 via RDP:

Screenshot

TODO

  • Python process & agent.py detection – 70% DONE
  • Improve hook detection by checking for correct bytes in well-known places (e.g. native APIs always have the same signatures, etc.).
  • Cuckoo's TLS entry detection.

New ideas & PRs are welcome.

 

 

 

 

Understanding the VENOM Vulnerability

 

Jason Geffner, a security researcher at CrowdStrike, has released information about a new, unchecked buffer vulnerability called VENOM affecting the open source QEMU virtualization platform, which provides virtualization capabilities similar to VMware or Microsoft’s Hyper-V.

The initial reports indicate this is a serious vulnerability, and while the vulnerability itself is serious, the overall scope is limited. People should treat this as a serious situation, but not view it as a broad crisis like “Heartbleed,” for instance.

 

The Vulnerability 

The unchecked buffer vulnerability (CVE-2015-3456) occurs in the code for QEMU’s virtual floppy disk controller. A successful buffer overflow attack exploiting this vulnerability can enable an attacker to execute his or her code in the hypervisor’s security context and escape from the guest operating system to gain control over the entire host.

Because QEMU is an open source package, it’s nearly impossible to know all affected products or services. However, CrowdStrike has indicated that it does affect Xen, KVM and the native QEMU client.

We do know that neither VMware’s nor Microsoft’s virtualization products are vulnerable. Amazon has also stated that their AWS platform is not affected.

QEMU and XEN already have patches available. Other vendors are presumably working on patches, as well.

 

The Risks 

In terms of the vulnerability itself, a determined attacker could potentially compromise all virtual instances on the host. A compromised host could also be used to stage lateral movement attacks against the hosting environment, putting other hosts and virtual instances at risk. To do this, an attacker would need to have a virtual machine on a vulnerable host and be able to load and execute code of their choosing onto the host. The attacker would also need administrator privileges on the guest OS. At that point, the attacker could have control of the host and potentially leverage that compromised host to launch other attacks on the network.

For environments that have the vulnerable code on their systems, this is a very serious vulnerability that should be addressed as quickly as possible. Similar to other open source vulnerabilities, like Heartbleed and Shellshock, obtaining and deploying patches will be a challenge due to the fractured nature of the ecosystem. Administrators should be prepared for these difficulties and plan for contingencies to mitigate those risks.

 

The Ramifications 

While this isn’t a vulnerability that would appear to affect the industry as broadly as some others, it is a virtual machine escape vulnerability that is present in the default configuration: this is the worst type of vulnerability for virtual machine environments. Even if you’re not directly affected by this vulnerability, if you run virtual machines in your environment, you should use this new vulnerability as an indication that it is time to plan your response and mitigations for the day when a vulnerability just like this affects your environment.

 

 

 

Credit: Christopher Budd

New GPU-based Linux Rootkit and Keylogger | Proof-of-concept GPU rootkit hides in VRAM, snoops system activities

 

A team of coders have published a new “educational” rootkit, dubbed Jellyfish, that’s virtually undetectable by current software practices. Their work is designed to demonstrate that GPUs, which have become considerably more powerful and flexible over the past decade, are now capable of running keyloggers and rootkits.

The world of hacking has become more organized and reliable over recent years, and so have the techniques of hackers.
Nowadays, attackers use highly sophisticated tactics and often go to extraordinary lengths in order to mount an attack.
And there is something new on the list:
A team of developers has created not one, but two pieces of malware that run on an infected computer’s graphics processing unit (GPU) instead of its central processing unit (CPU), in order to enhance their stealthiness and computational efficiency.

The two pieces of malware:

The source code of both the Jellyfish rootkit and the Demon keylogger, which are described as proof-of-concept malware, has been published on GitHub.
Until now, security researchers had encountered nasty malware running on the CPU and merely exploiting GPU capabilities in an attempt to mine cryptocurrencies such as Bitcoin.
These two pieces of malware, however, can operate without exploiting or modifying processes in the operating system kernel, which is why they do not trigger any suspicion that a system is infected and can remain hidden.

 

JELLYFISH ROOTKIT

Jellyfish is capable of running on Nvidia, AMD, and Intel hardware (this last thanks to support from AMD’s APP SDK). The advantage of using a GPU to perform system snooping and keylogging is substantial. If you stop and think about it, there are a variety of methods to determine exactly what is running on your CPU. From the Windows Task Manager to applications like Process Explorer, there are built-in or free tools that will help you isolate exactly which processes are being called and what those processes are doing. Malware detection software is more complex, but it offers an even deeper window into process analysis.

Contrast that with GPUs. In terms of freeware utilities, you’ve got GPU-Z and a handful of other applications that provide a similar “GPU Load” monitoring function. Nvidia, AMD, and Intel all provide some basic profiling tools that can be used to analyze a GPU’s performance in a specific application, but these toolkits plug into existing software packages, like Visual Studio. They don’t take a snapshot of what’s running on the GPU in general — they allow you to monitor code that you’ve explicitly told to run on the GPU.


Hackers and researchers have been exploring more of what a GPU can be used for and come away with some interesting results, including a project last year that turned a graphics card into a keylogger. As they noted at the time, “By instructing the GPU to carefully monitor via DMA the physical page where the keyboard buffer resides, a GPU-based keylogger can record all user keystrokes and store them in the memory space of the GPU.”

For those of you wondering about using a simple GPU load monitor to catch this work, it’s not really feasible — the estimated CPU and GPU utilization was ~0.1%. The Jellyfish rootkit discussed above doesn’t just have the ability to transmit information back across a network — it can theoretically remain resident in between warm reboots of the target system.

 

New GPU-based Linux Rootkit and Keylogger with Excellent Stealth and Computing Power

 

The Jellyfish rootkit is proof-of-concept malware code designed to show that running malware on GPUs is practically possible, as dedicated graphics cards have their own processors and memory.
These types of rootkits can snoop on CPU host memory through DMA (direct memory access), which allows hardware components to read the main system memory without going through the CPU, making such actions harder to detect.

The pseudonymous developers describe their Jellyfish Rootkit as:

Jellyfish is a Linux based userland gpu rootkit proof of concept project utilizing the LD_PRELOAD technique from Jynx (CPU), as well as the OpenCL API developed by Khronos group (GPU). Code currently supports AMD and NVIDIA graphics cards. However, the AMDAPPSDK does support Intel as well.
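As a point of reference (my sketch, not part of Jellyfish), the same OpenCL API surface can be reached from Python with pyopencl, which makes the point that GPUs are fully addressable compute devices with their own on-board memory (this assumes the OpenCL drivers/ICDs from the requirements list below are installed):

import pyopencl as cl

# Enumerate OpenCL platforms and devices and report their on-board memory.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        mem_mib = device.global_mem_size // (1024 * 1024)
        print(f"{platform.name}: {device.name} ({mem_mib} MiB global memory)")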

Advantages of GPU stored memory:

  • No GPU malware analysis tools are available on the Internet
  • Can snoop on CPU host memory via DMA (direct memory access)
  • GPU can be used for fast/swift mathematical calculations like parsing or XORing
  • Stubs
  • Malicious memory is still inside GPU after device shutdown

Requirements for use:

  • Have OpenCL drivers/icds installed
  • Nvidia or AMD graphics card (Intel supports AMD’s SDK)
  • Change line 103 in rootkit/kit.c to server ip you want to monitor GPU client from

Stay tuned for more features:

  • client listener; let buffers stay stored in GPU until you send a magic packet from the server

The anonymous developers of the rootkit warned that Jellyfish is proof-of-concept malware and still a work in progress, so it may contain flaws. The code published on GitHub is intended to be used for educational purposes only.

 

DEMON KEYLOGGER

Moreover, the developers also built a separate, GPU-based keylogger, dubbed Demon, though they did not provide any technical details about the tool.
The Demon keylogger is also a proof of concept, inspired by the malware described in a 2013 academic research paper [PDF] titled “You Can Type, but You Can’t Hide: A Stealthy GPU-based Keylogger,” but the developers stressed that they were not working with those researchers.
“We are not associated with the creators of this paper,” the Demon developers said. “We only PoC’d what was described in it, plus a little more.”
As described in the research paper, a GPU-based keystroke logger consists of two main components:
  • A CPU-based component that is executed once, during the bootstrap phase, with the task of locating the address of the keyboard buffer in main memory.
  • A GPU-based component that monitors, via DMA, the keyboard buffer, and records all keystroke events.
However, users need not worry about cybercriminals or hackers using GPU-based malware just yet, though proof-of-concept malware such as the Jellyfish rootkit and Demon keylogger could inspire future developments.

 

How do we fix this?

It seems likely that malware detection methods will have to evolve to scan the GPU as well as the CPU, but it’s not clear how easy that’s going to be. The keylogger research team noted that Nvidia’s CUDA environment does offer the ability to attach to a running process and monitor its actions, but stated that this is currently Nvidia-specific and of only limited (though important) use.

Software detection methods are going to need to fundamentally step up their game. Malware researchers tend to use virtual machines (for all the reasons you’d imagine), but these applications are not designed to support GPU API virtualization. That’s going to need to change if we’re going to protect systems from code running on GPUs.

Given the fact that code running on the GPU is almost untraceable today, it wouldn’t surprise me in the slightest to discover that state governments had already exploited these detection weaknesses. White hats, start your engines. Jellyfish is a Linux utility for now, but nothing in the literature suggests that this issue is unique to that operating system.

 

 

 

Credit: extremetech