Flaws in Samsung’s ‘Smart’ Home Let Hackers Unlock Doors and Set Off Fire Alarms

Credit: Andy Greenberg, Wired

Waze | Another way to track your moves

Millions of drivers use Waze, a Google-owned navigation app, to find the best, fastest route from point A to point B. And according to a new study, all of those people run the risk of having their movements tracked by hackers.

Researchers at the University of California-Santa Barbara recently discovered a Waze vulnerability that allowed them to create thousands of “ghost drivers” that can monitor the drivers around them—an exploit that could be used to track Waze users in real-time. They proved it to me by tracking my own movements around San Francisco and Las Vegas over a three-day period.

“It’s such a massive privacy problem,” said Ben Zhao, professor of computer science at UC-Santa Barbara, who led the research team.

Here’s how the exploit works. Waze’s servers communicate with phones using an SSL encrypted connection, a security precaution meant to ensure that Waze’s computers are really talking to a Waze app on someone’s smartphone. Zhao and his graduate students discovered they could intercept that communication by getting the phone to accept their own computer as a go-between in the connection. Once in between the phone and the Waze servers, they could reverse-engineer the Waze protocol, learning the language that the Waze app uses to talk to Waze’s back-end app servers. With that knowledge in hand, the team was able to write a program that issued commands directly to Waze servers, allowing the researchers to populate the Waze system with thousands of “ghost cars”—cars that could cause a fake traffic jam or, because Waze is a social app where drivers broadcast their locations, monitor all the drivers around them.
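The interception step is exactly what certificate pinning is designed to defeat: instead of trusting any certificate a CA will vouch for, the app checks the server certificate's fingerprint against a value baked into the client, so a researcher's go-between machine is rejected. A minimal sketch of the idea (the function names are mine, and the pin would be a real fingerprint in practice, not something Waze publishes):

```python
import hashlib
import socket
import ssl

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def pin_matches(cert_der: bytes, pinned_hex: str) -> bool:
    """True only if the presented certificate matches the pinned hash."""
    return fingerprint(cert_der) == pinned_hex

def connect_pinned(host: str, pinned_hex: str, port: int = 443) -> bool:
    """Open a TLS connection and refuse to trust it unless the server's
    certificate matches the pin, even if a CA vouches for an imposter."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return pin_matches(tls.getpeercert(binary_form=True), pinned_hex)
```

With a check like this in the client, a proxy inserted into the connection would present a different certificate and the app would refuse to talk, closing off the protocol reverse-engineering route the researchers used.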

 

The attack is similar to one conducted by Israeli university students two years ago, who used emulators to send traffic bots into Waze and create the appearance of a traffic jam. But an emulator, which pretends to be a phone, can only create the appearance of a few vehicles in the Waze system. The UC-Santa Barbara team, on the other hand, could run scripts on a laptop that created thousands of virtual vehicles in the Waze system that can be sent into multiple grids on a map for complete surveillance of a given area.

In a test of the discovery, Zhao and his graduate students tried the hack on a member of their team (with his permission).

“He drove 20 to 30 miles and we were able to track his location almost the whole time,” Zhao told me. “He stopped at gas stations and a hotel.”

 

Last week, I tested the Waze vulnerability myself, to see how successfully the UC-Santa Barbara team could track me over a three-day period. I told them I’d be in Las Vegas and San Francisco, and where I was staying—the kind of information a snoopy stalker might know about someone he or she wanted to track. Then, their ghost army tried to keep tabs on where I went.

Users could be tracked right now and never know it.

– Ben Zhao, UC-Santa Barbara computer science professor
 

The researchers caught my movements on three occasions, including when I took a taxi to downtown Las Vegas for dinner.

And they caught me commuting to work on the bus in San Francisco. (Though they lost me when I went underground to take the subway.)

The security researchers were only able to track me while I was in a vehicle with Waze running in the foreground of my smartphone. Previously, they could track someone even if Waze was just running in the background of the phone. Waze, an Israeli start-up, was purchased by Google in 2013 for $1.1 billion. Zhao informed the security team at Google about the problem and made a version of the paper about their findings public last year. An update to the app in January of this year prevents it from broadcasting your location when the app is running in the background, an update that Waze described as an energy-saving feature. (So update your Waze app if you haven’t done so recently!)

“Waze constantly improves its mechanisms and tools to prevent abuse and misuse. To that end, Waze is regularly in contact with the security and privacy research community—we appreciate their help protecting our users,” said a Waze spokesperson in an emailed statement. “This group of researchers connected with us in 2014, and we have already addressed some of their claims, implementing safeguards in our system to protect the privacy of our users.”

The spokesperson said that “the concept of Waze is that we all work together to share information and impact the world around us” and that “users expect to offer certain information about their route in exchange for unparalleled navigation assistance.” Among the safeguards deployed by Waze is a “system of cloaking” so that a user’s location as displayed “from time to time within the Waze application does not represent such user’s actual, real time location.”


But those safeguards did not prevent real-time tracking in my case. The researchers sent me their tracking data minutes after my trips, with accurate time stamps for each of my locations, meaning this cloaking system doesn’t seem to work very well.

“Anyone could be doing this [tracking of Waze users] right now,” said Zhao. “It’s really hard to detect.”

Part of what allowed the researchers to track me so closely is the social nature of Waze and the fact that the app is designed to share users’ geolocation information with each other. The app shows you other Waze drivers on the road around you, along with their usernames and how fast they’re going. (Users can opt out of this by going invisible.) When I was in Vegas, the researchers simply populated ghost cars around the hotel I was staying at that were programmed to follow me once I was spotted.

“You could scale up to real-time tracking of millions of users with just a handful of servers,” Zhao told me. “If I wanted to, I could easily crawl all of the U.S. in real time. I have 50-100 servers, and could get more from [Amazon Web Services] and then I could track all of the drivers.”

Theoretically, a hacker could use this technique to go into the Waze system and download the activity of all the drivers using it. If they made the data public like the Ashley Madison hackers did, the public would suddenly have the opportunity to follow the movements of the over 50 million people who use Waze. If you know where someone lives, you would have a good idea of where to start tracking them.

Like the Israeli researchers, Zhao’s team was also able to easily create fake traffic jams. They were wary of interfering with real Waze users so they ran their experiments from 2 a.m. to 5 a.m. every night for two weeks, creating the appearance of heavy traffic and an accident on a remote road outside of Baird, Texas.

“No real humans were harmed or even interacted with,” said Zhao. They aborted the experiment twice after spotting real world drivers within 10 miles of their ghost traffic jam.

 

While Zhao defended the team’s decision to run the experiment live on Waze’s system, he admitted they were “very nervous” about initially making their paper about their findings public. They had approval from their IRB, a university ethics board; took precautions not to interfere with any real users; and notified Google’s security team about their findings. They are presenting their paper at a conference called MobiSys, which focuses on mobile systems, at the end of June in Singapore.

 

“We needed to get this information out there,” said Zhao. “Sitting around and not telling the public and the users isn’t an option. They could be tracked right now and never know it.”

“This is bigger than Waze,” continued Zhao. The attack could work against any app, he said, turning its servers into an open system that an attacker can mine and manipulate. With Waze, the attack is particularly sensitive because users’ location information is being broadcast and can be downloaded, but the same attack on another app would allow hackers to download any information that users broadcast to other users, or to flood the app with fake traffic.

“With a [dating app], you could flood an area with your own profile or robot profiles and basically ruin it for your area,” said Zhao. “We looked at a bunch of different apps and nearly all of them had this near-catastrophic vulnerability.”

The scary part, said Zhao, is that “we don’t know how to stop this.” He said that servers that interact with apps in general are not as robust against attack as those that are web-facing.

“Not being able to separate a real device from a program is a larger problem,” said Zhao. “It’s not cheap and it’s not easy to solve. Even if Google wanted to do something, it’s not trivial for them to solve. But I want them to get this on the radar screen and help try to solve the problem. If they lead and they help, this collective problem will be solved much faster than if they don’t.”

“Waze is building their platform to be social so that you can track people around you. By definition this is going to be possible,” said Jonathan Zdziarski, a smartphone forensic scientist, who reviewed the paper at my request. “The crowd sourced tools that are being used in these types of services definitely have these types of data vulnerabilities.”

Zdziarski said there are ways to prevent this kind of abuse, for example by rate-limiting data requests. Zhao told me his team has been running its experiments since the spring of 2014, and Waze hasn’t blocked them, even though they have created the appearance of thousands of Waze users in a short period of time coming from just a few IP addresses.
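Rate limiting of the kind Zdziarski suggests is straightforward to sketch. A per-IP token bucket like the one below (illustrative only; Waze's server internals are not public) would have throttled thousands of account creations arriving from a handful of addresses:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client rate limiter: each source IP may issue roughly `rate`
    requests per second, with short bursts up to `capacity`."""

    def __init__(self, rate: float = 5.0, capacity: float = 10.0):
        self.rate = rate
        self.capacity = capacity
        # Each new client starts with a full bucket.
        self.tokens = defaultdict(lambda: capacity)
        self.last = defaultdict(time.monotonic)

    def allow(self, client_ip: str) -> bool:
        """Spend one token if available; refuse the request otherwise."""
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill proportionally to the time since the last request.
        self.tokens[client_ip] = min(self.capacity,
                                     self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False
```

A server guarding its API this way would let normal phones through while starving a laptop trying to simulate thousands of ghost cars from one IP address.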

Waze’s spokesperson said the company is “examining the new issue raised by the researchers and will continue to take the necessary steps to protect the privacy of our users.”

In the meantime, if you need to use Waze to get around but are wary of being tracked, you do have one option: set your app to invisible mode. But beware, Waze turns off invisible mode every time you restart the app.

Full paper here.


[CRITICAL] Nissan Leaf Can Be Hacked Via Web Browser From Anywhere In The World


What if a car could be controlled from a computer halfway around the world? Computer security researcher and hacker Troy Hunt has managed to do just that, via a web browser and an Internet connection, with an unmodified Nissan Leaf in another country. While so far the control was limited to the HVAC system, it’s a revealing demonstration of what’s possible.

Hunt writes that his experiment started when an attendee at a developer security conference where Hunt was presenting realized that his car, a Nissan Leaf, could be accessed via the internet using Nissan’s phone app. Using the same methods as the app itself, any other Nissan Leaf could be controlled as well, from pretty much anywhere.

Hunt made contact with another security researcher and Leaf-owner, Scott Helme. Helme is based in the UK, and Hunt is based in Australia, so they arranged an experiment that would involve Hunt controlling Helme’s LEAF from halfway across the world. Here’s the video they produced of that experiment:

As you can see, Hunt was able to access the Leaf in the UK, which wasn’t even on, and gather extensive data from the car’s computer about recent trips, distances of those trips (recorded, oddly, in yards), power usage information, charge state, and so on. He was also able to access the HVAC system to turn on the heater or A/C, and to turn on the heated seats.

It makes sense these functions would be the most readily available, because those are essentially the set of things possible via Nissan’s Leaf mobile app, which people use to heat up or cool their cars before they get to them, remotely check on the state of charge, and so on.

This app is the key to how the Leaf can be accessed via the web, since that’s exactly what the app does. The original (and anonymous) researcher found that by making his computer a proxy between the app and the internet, the requests made from the app to Nissan’s servers can be seen. Here’s what a request looks like:

GET https://[redacted].com/orchestration_1111/gdc/BatteryStatusRecordsRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris&TimeFrom=2014-09-27T09:15:21

If you look at that request, you can see that part of it includes a tag for VIN, which is the Vehicle Identification Number (obfuscated here) of the car. Changing this VIN is really all you need to do to access any particular Leaf. Remember, VINs are visible through the windshield of every car, by law.

Hunt describes the process on his site, and notes some alarming details:

This is pretty self-explanatory if you read through the response; we’re seeing the battery status of his LEAF. But what got Jan’s attention is not that he could get the vehicle’s present status, but rather that the request his phone had issued didn’t appear to contain any identity data about his authenticated session.

In other words, he was accessing the API anonymously. It’s a GET request so there was nothing passed in the body nor was there anything like a bearer token in the request header. In fact, the only thing identifying his vehicle was the VIN which I’ve partially obfuscated in the URL above.

So, there’s no real security here to prevent accessing data on a LEAF, nor any attempt to verify the identity on either end of the connection.
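The missing check is small. Here is a hypothetical sketch of what the server side should have done: require a bearer token and verify that the token's owner actually owns the requested VIN. The function name and token store are invented for illustration; this is not Nissan's code:

```python
def authorize(headers: dict, vin: str, token_store: dict) -> bool:
    """Allow the request only if it carries a bearer token that is
    registered to the exact VIN being queried."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False  # anonymous requests are refused outright
    token = auth[len("Bearer "):]
    return token_store.get(token) == vin
```

With a check like this in front of every endpoint, knowing (or enumerating) a VIN alone would get an attacker nothing.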


And it gets worse. Here, quoting from Hunt’s site, he’s using the name “Jan” to refer to the anonymous Leaf-owning hacker who discovered this:

But then he tried turning it on and observed this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

That request returned this response:

{
  status: 200,
  message: “success”,
  userId: “******”,
  vin: “SJNFAAZE0U60****”,
  resultKey: “***************************”
}

This time, personal information about Jan was returned, namely his user ID which was a variation of his actual name. The VIN passed in the request also came back in the response and a result key was returned.

He then turned the climate control off and watched as the app issued this request:

GET https://[redacted].com/orchestration_1111/gdc/ACRemoteOffRequest.php?RegionCode=NE&lg=no-NO&DCMID=&VIN=SJNFAAZE0U60XXXXX&tz=Europe/Paris

All of these requests were made without an auth token of any kind; they were issued anonymously. Jan checked them by loading them up in Chrome as well and sure enough, the response was returned just fine. By now, it was pretty clear the API had absolutely zero access controls but the potential for invoking it under the identity of other vehicles wasn’t yet clear.

Even if you don’t understand the code, here’s what all that means: we have the ability to get personal data and control functions of the car from pretty much anywhere with a web connection, as long as you know the target car’s VIN.

Hunt proved this was possible after some work, using a tool to generate Leaf VINs (only the last 5 or 6 digits are actually different) and sending a request for battery status to those VINs. Soon, they got the proper response back. Hunt explains the significance:

This wasn’t Jan’s car; it was someone else’s LEAF. Our suspicion that the VIN was the only identifier required was confirmed and it became clear that there was a complete lack of auth on the service.

Of course it’s not just an issue related to retrieving vehicle status, remember the other APIs that can turn the climate control on or off. Anyone could potentially enumerate VINs and control the physical function of any vehicles that responded. That’s a very serious issue. I reported it to Nissan the day after we discovered this (I wanted Jan to provide me with more information first), yet as of today – 32 days later – the issue remains unresolved. You can read the disclosure timeline further down but certainly there were many messages and a phone call over a period of more than four weeks and it’s only now that I’m disclosing publicly…


(Now, just to be clear, this is not a how-to guide to mess with someone’s Leaf. You’ll note that the crucial server address has been redacted, so you can’t just type in those little segments of code and expect things to work.)

While at the moment, you can only control some HVAC functions and get access to the car’s charge state and driving history, that’s actually more worrying than you may initially think.

Not only is there the huge privacy issue of having your comings-and-goings logged and available, but if someone wanted to, they could crank the AC and drain the battery of a Leaf without too much trouble, stranding the owner somewhere.

There’s no provision for remote starting or unlocking at this point, but the Leaf is a fully drive-by-wire vehicle. It’s no coincidence that every fully autonomous car I’ve been in that’s made by Nissan has been on the LEAF platform; all of its major controls can be accessed electronically. For example, the steering wheel can be controlled (and was controlled, as I saw when visiting Nissan’s test facility) by the motors used for power steering assist, and it’s throttle (well, for electrons)-by-wire, and so on.

So, at this moment I don’t think anyone’s Leaf is in any danger other than having a drained battery and an interior like a refrigerator, but that’s not to say nothing else will be figured out. This is a huge security breach that Nissan needs to address as soon as possible. (I reached out to Nissan for comment on this story and will update as soon as I get one.)

So far, Nissan has not fixed this after at least 32 days, Hunt said. Here’s how he summarized his contact with Nissan:

I made multiple attempts over more than a month to get Nissan to resolve this and it was only after the Canadian email and French forum posts came to light that I eventually advised them I’d be publishing this post. Here’s the timeline (dates are Australian Eastern Standard time):

  • 23 Jan: Full details of the findings sent and acknowledged by Nissan Information Security Threat Intelligence in the U.S.A.
  • 30 Jan: Phone call with Nissan to fully explain how the risk was discovered and the potential ramifications followed up by an email with further details
  • 12 Feb: Sent an email to ask about progress and offer further support to which I was advised “We’re making progress toward a solution”
  • 20 Feb: Sent details as provided by the Canadian owner (including a link to the discussion of the risk in the public forum) and advised I’d be publishing this blog post “later next week”
  • 24 Feb: This blog published, 4 weeks and 4 days after first disclosure

All in all, I sent ten emails (there was some to-and-fro) and had one phone call. This morning I did hear back with a request to wait “a few weeks” before publishing, but given the extensive online discussions in public forums and the more than one-month lead time there’d already been, I advised I’d be publishing later that night and have not heard back since. I also invited Nissan to make any comments they’d like to include in this post when I contacted them on 20 Feb or provide any feedback on why they might not consider this a risk. However, there was nothing to that effect when I heard back from them earlier today, but I’ll gladly add an update later on if they’d like to contribute.

I do want to make it clear though that especially in the earlier discussions, Nissan handled this really well. It was easy to get in touch with the right people quickly and they made the time to talk and understand the issue. They were receptive and whilst I obviously would have liked to see this rectified quickly, compared to most ethical disclosure experiences security researchers have, Nissan was exemplary.

It’s great Nissan was “exemplary” but it would have been even better if they’d implemented at least some basic security before making their cars’ data and controls available over the internet.


Security via obscurity just isn’t going to cut it anymore, as Troy Hunt has proven through his careful and methodical work. I’m not sure why carmakers don’t seem to be taking this sort of security seriously, but it’s time for them to step up.

After all, doing so will save them from PR headaches like this, and the likely forthcoming stories your aunt will Facebook you about how the terrorists are going to make all the Leafs hunt us down like dogs.

Until they have to recharge, at least.

(Thanks, Matt and Brandon!)

 

 

Credit:  Jason Torchinsky

[CRITICAL] CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow

Have you ever been deep in the mines of debugging and suddenly realized that you were staring at something far more interesting than you were expecting? You are not alone! Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior and after an intense investigation we discovered the issue lay in glibc and not in SSH as we were expecting. Thanks to this engineer’s keen observation, we were able to determine that the issue could result in remote code execution. We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!

In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted of the issue via their bug tracker in July 2015 (bug). We couldn’t immediately tell whether the bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of Red Hat had also been studying the bug’s impact, albeit completely independently! Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”

This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams’ knowledge into a comprehensive patch and regression test to protect glibc users.

That patch is available here.

 

Issue Summary:

Our initial investigations showed that the issue affected all versions of glibc since 2.9. You should definitely update if you are on an older version, though. If the vulnerability is detected, machine owners may wish to take steps to mitigate the risk of an attack. The glibc DNS client-side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack. Google has found some mitigations that may help prevent exploitation if you are not able to immediately patch your instance of glibc. The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. Our suggested mitigation is to limit the response sizes accepted by the local DNS resolver (e.g., via DNSMasq or similar programs), as well as to ensure that DNS queries are sent only to DNS servers which limit the response size for UDP responses with the truncation bit set.
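As one concrete example of that mitigation, a local dnsmasq forwarder can cap the DNS payload sizes it accepts. The 1024-byte figure below is an illustrative choice that stays safely under the 2048-byte threshold, not a value prescribed by the advisory:

```shell
# /etc/dnsmasq.conf (illustrative fragment)
# Cap the EDNS0 UDP packet size advertised and accepted, so oversized
# (2048+ byte) answers that trigger the overflow are not passed along
# to vulnerable getaddrinfo() callers.
edns-packet-max=1024
```

This does not fix the bug; it only narrows the window until glibc itself can be patched.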

 

Technical information:

glibc reserves 2048 bytes in the stack through alloca() for the DNS answer at _nss_dns_gethostbyname4_r() for hosting responses to a DNS query. Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated. Under certain conditions a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow. The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.
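The mismatch is easier to see in miniature. Below is a deliberately simplified simulation of the pattern in Python (the names QueryCtx and grow_buggy are invented; the real code is C inside glibc's resolver), showing how updating the size bookkeeping without switching buffers leaves the writer aimed at the small stack buffer:

```python
class QueryCtx:
    """Stand-in for the resolver's query state."""
    def __init__(self, buf_name: str, bufsize: int):
        self.buf = buf_name      # which buffer the response writer targets
        self.bufsize = bufsize   # how many bytes the writer believes fit

def grow_buggy(ctx: QueryCtx, needed: int) -> str:
    """Buggy 'grow' step: a larger heap buffer is allocated and the size
    field is updated, but ctx.buf is left pointing at the old storage,
    mirroring the send_dg()/send_vc() mismatch described above."""
    heap_buf = "heap"            # stands in for the newly malloc'd buffer
    ctx.bufsize = needed         # size bookkeeping is updated...
    # ctx.buf = heap_buf         # ...but the buffer switch is skipped
    return heap_buf

ctx = QueryCtx("stack", 2048)    # the 2048-byte alloca() buffer
grow_buggy(ctx, 65535)           # response didn't fit, so we "grow"

# The writer now believes it has 65535 bytes of room while still
# targeting the 2048-byte stack buffer: the next oversized response
# overflows the stack.
overflow_possible = ctx.buf == "stack" and ctx.bufsize > 2048
```

In the real bug the consequence is memory corruption rather than a flag, but the invariant violation is the same: size and destination fall out of sync.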

Exploitation:

Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post. With this Proof of Concept, you can verify if you are affected by this issue, and verify any mitigations you may wish to enact. As you can see in the below debugging session we are able to reliably control EIP/RIP.

(gdb) x/i $rip
=> 0x7fe156f0ccce <_nss_dns_gethostbyname4_r+398>: req
(gdb) x/a $rsp
0x7fff56fd8a48: 0x4242424242424242 0x4242424242420042

When code crashes unexpectedly, it can be a sign of something much more significant than it appears; ignore crashes at your peril! Failed exploit indicators, due to ASLR, can include:

  • Crash on free(ptr) where ptr is controlled by the attacker.
  • Crash on free(ptr) where ptr is semi-controlled by the attacker since ptr has to be a valid readable address.
  • Crash reading from memory pointed by a local overwritten variable.
  • Crash writing to memory on an attacker-controlled pointer.

We would like to thank Neel Mehta, Thomas Garnier, Gynvael Coldwind, Michael Schaller, Tom Payne, Michael Haro, Damian Menscher, Matt Brown, Yunhong Gu, Florian Weimer, Carlos O’Donell and the rest of the glibc team for their help figuring out all details about this bug, exploitation, and patch development.

 

 

Credit:  Fermin J. Serna and Kevin Stadmeyer

Another Door to Windows | Hot Potato exploit

Microsoft Windows versions 7, 8, 10, Server 2008 and Server 2012 are vulnerable to the Hot Potato exploit, which gives hackers total control of a PC or laptop

Security researchers from Foxglove Security have discovered that almost all recent versions of Microsoft’s Windows operating system are vulnerable to a privilege escalation exploit. By chaining together a series of known Windows security flaws, they found a way to break into PCs, systems, and laptops running Windows 7/8/8.1/10 and Windows Server 2008/2012.

The Foxglove researchers have named the exploit Hot Potato. Hot Potato relies on three different types of attacks, some of which were discovered back at the start of the new millennium, in 2000. By chaining these together, hackers can gain complete control of PCs and laptops running the above versions of Windows.

Surprisingly, some of the exploits were found way back in 2000 but have still not been patched by Microsoft, with the explanation that by patching them, the company would effectively break compatibility between the different versions of their operating system.

Hot Potato

Hot Potato is a combination of three different security issues with the Windows operating system. One of the flaws lies in a local NBNS (NetBIOS Name Service) spoofing technique that’s 100% effective. Potential attackers can use this flaw to set up fake WPAD (Web Proxy Auto-Discovery Protocol) proxy servers and mount an attack against the Windows NTLM (NT LAN Manager) authentication protocol.

Chaining these flaws together allows an attacker to gain control of the PC/laptop by elevating an application’s permissions from the lowest rank to system-level privileges, the Windows analog of a Linux/Android root user’s permissions.

Foxglove researchers created their exploit on top of a proof-of-concept code released by Google’s Project Zero team in 2014 and have presented their findings at the ShmooCon security conference over the past weekend.

They have also posted proof-of-concept videos on YouTube in which the researchers break Windows versions such as 7, 8, 10, Server 2008 and Server 2012.

You can also access the proof of concept on Foxglove’s GitHub page here.

Mitigation

The researchers said that using SMB (Server Message Block) signing may theoretically block the attack. Another method to stop the NTLM relay attack is to enable “Extended Protection for Authentication” in Windows.
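For reference, SMB signing can be required through the documented LanmanWorkstation and LanmanServer registry values; the commands below are an illustrative sketch (run from an elevated prompt, then reboot), and "Extended Protection for Authentication" is enabled separately via Group Policy and the relevant security updates:

```shell
:: Require SMB signing on the client side.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v RequireSecuritySignature /t REG_DWORD /d 1 /f
:: Require SMB signing on the server side.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v RequireSecuritySignature /t REG_DWORD /d 1 /f
```

Note that requiring signing can break connectivity with legacy devices that cannot sign, which is part of why it is not on by default everywhere.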

 

 

Credit: Vijay Prabhu, Techworm

OmniRAT – the $25 way to hack into Windows, OS X and Android devices

 

Just last week, police forces across Europe arrested individuals who they believed had been using the notorious DroidJack malware to spy on Android users.

Now attention has been turned on to another piece of software that can spy on communications, secretly record conversations, snoop on browsing histories and take complete control of a remote device. But, unlike DroidJack, OmniRAT doesn’t limit itself to Android users – it can also hijack computers running Windows and Mac OS X too.

And that’s not the only difference between DroidJack and OmniRAT. Both are sold openly online, but OmniRAT retails for as little as $25 compared to DroidJack’s heftier $210.

Security researchers at the anti-virus company Avast describe OmniRAT as a “Remote Administration Tool.”

And it certainly can be used for entirely legitimate purposes, with the permission and consent of the owners of Android, Mac and Windows computers it tries to control.

But, in the wrong hands, it can also be considered a “Remote Access Trojan” – giving malicious hackers an opportunity to sneakily spy on and steal from unsuspecting users duped into installing the code.

OmniRAT

In his blog post, researcher Nikolaos Chrysaidos describes how he believes hackers have infected Androids with OmniRAT after sending an SMS.

Apparently, a German Android user explained on the Techboard-online forum how he had received an SMS telling him that an MMS had not been delivered directly to him due to the StageFright vulnerability.

In order to access the MMS, the user was told to follow a bit.ly link within three days, and enter a PIN code.

However, as Chrysaidos explains, visiting the URL would initiate the attempt to install OmniRAT onto the target’s Android device:

Once you enter your number and code, an APK, mms-einst8923, is downloaded onto the Android device. The mms-einst8923.apk, once installed, loads a message onto the phone saying that the MMS settings have been successfully modified and loads an icon, labeled “MMS Retrieve” onto the phone.

Once the icon is opened by the victim, mms-einst8923.apk extracts OmniRat, which is encoded within the mms-einst8923.apk. In the example described on Techboard-online, a customized version of OmniRat is extracted.

Android app icon

Perhaps the long list of permissions requested by the app would make you think twice, if it weren’t so common for so many popular apps in the Google Play store to make similar requests.

App permissions

The problem of course is that through its cunning social engineering, and the target’s keen attempt to view the MMS that they might have been sent, it may be all too likely that the user grants permission for the app to be installed without thinking of the possible consequences.

And, as the app is capable of sending its own SMS messages, it may be that your infected Android device could then send further messages with malicious intent to your friends, family and colleagues, in the hope of hijacking further devices. After all, users are more likely to be tricked into believing a message is legitimate, and letting their guard down, if they receive a message apparently coming from someone they know and trust.

Sadly, victims will probably have no clue that their devices are compromised, and even if they uninstall the MMS Retrieve icon, the customised version of OmniRAT remains installed on their Android smartphone, and will be sending data to a command and control (C&C) server seemingly based in Russia:

Russian domain

So, the question to ask is how should you protect yourself?

Well, clearly you should resist the urge to install apps onto your smartphone from anywhere other than the official app stores. Although malware has unfortunately snuck into the Google Play store in the past, you’re much more likely to encounter malicious code from unauthorised sources.

Furthermore, I would recommend running a security product on your Android device to detect malicious code and that – if possible – you keep your Android smartphone patched with the latest version of the operating system.

Finally, always think long and hard before clicking on links from untrusted sources. It could be that you’re just one click away from a hacker trying to take remote control of your Android phone.

 

 

Credit: 

Newly Discovered Exploit Makes Every iPhone Remotely Hackable

The government would love to get its hands on a foolproof way to break into the new highly encrypted iPhone. And it looks like some clever hackers just gave it to them.

Bug bounty startup Zerodium just announced that a team has figured out how to remotely jailbreak the latest iPhone operating system and will take home a million dollar prize. It’s unclear if Apple will get a peek at the zero-day exploit.

But wait, isn’t that what security researchers are supposed to do? Expose the exploit? Not when there’s this kind of cash on the line.

The hack itself seemed impossible. Zerodium required the exploit to work through Safari, Chrome, a text message, or a multimedia message. This meant that hackers had to find not just one vulnerability but a chain of them that would enable them to jailbreak an iPhone from afar. Once the phone is jailbroken, the hackers could ostensibly download apps to the phone or even upload malware. It could also be a killer surveillance tool for anyone from law enforcement to spy agencies, which is what makes the details of this situation even more unsettling.

Zerodium is no ordinary security company. As Motherboard’s Lorenzo Franceschi-Bicchierai explains:

[Founder Chaouki] Bekrar and Zerodium, as well as its predecessor VUPEN, have a different business model. They offer higher rewards than what tech companies usually pay out, and keep the vulnerabilities secret, revealing them only to certain government customers, such as the NSA.

Oh, that sounds bad. But it gets worse:

But there’s no doubt that for some, this exploit is extremely valuable. …This exploit would allow [law enforcement and spy agencies] to get around any security measures and get into the target’s iPhone to intercept calls, messages, and access data stored in the phone.

So unlike a lot of news that comes out of the security industry, this is a real threat. Zero-day vulnerabilities are often shared with the vendor before research is released so that they can have a patch ready. In this case, Zerodium and the winning team of now millionaire hackers will probably keep the bug a secret so that the proprietors of state secrets can take advantage of it. Again, Bekrar and his various ventures have been doing this for years.

There’s a chance Apple will figure out how to patch the vulnerability before the NSA takes off with it. After all, the Cupertino-based purveyor of very expensive gadgets is historically terrific at security. This is actually the first report of a method for jailbreaking an iPhone remotely since iOS 7. Hopefully, it will be the last.

 

 

Credit:  Adam Clark Estes – gizmodo

3D Imaging System in Driver-less Cars Can Be Hacked


The laser navigation systems and sensors of driverless cars can be exploited by hackers, who can trick a vehicle into paralysis by making it believe a collision with a person, car or other obstacle is imminent.

The lidar 3D imaging system is vulnerable to hack attacks. It is the system autonomous vehicles use to create an image of their surroundings and navigate the roads. However, research reveals that a cheap, low-power laser attack lets hackers trick the system into thinking something is blocking the vehicle’s way, forcing it to slow down, stop and/or take evasive action.

Driverless-Car-hack

Jonathan Petit, a former researcher with University College Cork’s Computer Security Group, identified this vulnerability in the widely used laser-powered navigation system while investigating the cyber vulnerabilities of autonomous vehicles.

Petit’s research will be presented at the Black Hat Europe security conference this November. He explained that a combination of a pulse generator and a low-power laser let him record the unencrypted, unencoded laser pulses emitted by a commercial lidar system.

These pulses can later be replayed with a laser to produce fake objects, easily tricking a driverless car into thinking there is an obstacle in front of it.
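The reason a replayed pulse implies a fake object is simple time-of-flight arithmetic: the lidar converts the round-trip delay of each echo into a distance, so an attacker who replays a recorded pulse after a chosen delay controls where the phantom object appears. A minimal sketch of that arithmetic (illustrative, not Petit's actual tooling):

```python
# Time-of-flight arithmetic behind a lidar spoofing attack (illustrative).
# A lidar reports distance = (c * delay) / 2, so an echo replayed after a
# chosen delay makes the sensor "see" an object at a chosen range.

C = 299_792_458.0  # speed of light in m/s

def echo_delay_for_distance(distance_m):
    """Round-trip delay (seconds) a spoofed echo must have to imply
    an object at distance_m."""
    return 2.0 * distance_m / C

def distance_from_delay(delay_s):
    """Distance the lidar would report for an echo arriving after delay_s."""
    return C * delay_s / 2.0
```

Faking a car 30 metres ahead requires a replay delay of roughly 200 nanoseconds, which is why modest off-the-shelf hardware with reasonably precise timing is all the attack needs.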

While speaking to IEEE Spectrum, Petit stated:

“I can take echoes of a fake car and put them at any location I want. And I can do the same with a pedestrian or a wall. I can spoof thousands of objects and basically carry out a denial-of-service attack on the tracking system so it’s not able to track real objects.”

He further added that the root of the vulnerability lies in the fact that some driverless cars have poor-quality input systems, meaning such cars can make wrong decisions if they are fed incorrect data about the road and surrounding environment.

“If a self-driving car has poor inputs, it will make poor driving decisions,” said Petit.

However, one might wonder: if lidar ranging is among the most expensive and technically advanced sensing technology currently on the market, how can it make such mistakes?

In response, Petit says that autonomous cars can be hacked easily and cheaply:

“You can easily do it with a Raspberry Pi or an Arduino. It’s really off the shelf.”

The research reveals that driverless cars are not fully reliable and have inherent security issues, even though the technology has been cleared for testing on UK roads.

The growing amount of connected technology being built into vehicles is making our cars increasingly exposed to risks and threats from hackers.

History of vulnerability in vehicles: 

At a Black Hat USA 2015 session, security researchers Charlie Miller and Chris Valasek presented their discoveries about a security vulnerability in the on-board infotainment system used in Fiat Chrysler Automobiles vehicles, leaving more than 470,000 vehicles vulnerable to similar remote hacking attempts.

Using this vulnerability, the two researchers managed to remotely take control of a vehicle, allowing them to manipulate its brakes, acceleration, entertainment system and more.

Another hacker demonstrated how attackers could locate, unlock and start GM cars with a hacked mobile app, and how to hack a Corvette with a text message.

At the same DefCon and Black Hat security conferences, researchers also showed how attackers could exploit vulnerabilities in the Megamos Crypto immobilizer system to start a vehicle without a key, an attack that could be carried out within 60 minutes!

 

 

 

Credit: 

Self-driving Cars Hacked Using a Simple Laser and a Raspberry Pi

Wake-up call for driverless-car makers to solve this glaring security problem. Self-driving cars are easy to hack with a modified laser pointer.

A security researcher has discovered that self-driving cars with laser-powered sensors that detect and avoid obstacles in their paths can easily be fooled by a line-of-sight attacker using a laser pointer to trick those sensors into detecting and avoiding obstacles that don’t actually exist.

Self-driving or driverless cars are widely predicted to be the next big innovation in automotive technology — indeed, it’s possible that today’s infants will come of age in a world where “driving your own car” is as obsolete as horse-and-buggy combos are now.

Google has already developed and tested a semi-driverless car (which still requires a licensed and alert human driver as a failsafe in case anything goes wrong). Various car manufacturers including Lexus, Mercedes and Audi are developing self-driving prototypes of their own. But, of course, driverless cars with wireless computer controls are as vulnerable to hacking as any other Internet-connected device – and have a few other vulnerabilities as well.


 

Lidar systems

Driverless cars use laser ranging systems, known as “lidar” (a riff off of “radar”), to detect obstacles and navigate around them. Radar, which was originally a semi-acronym for RAdio Detection And Ranging, “sees” things by sending out radio waves, then measuring whether and how many of those waves reflect back after bouncing off various objects. Lidar does the same thing with lasers, which are narrower and far more precise than the radio waves used in radar.

Jonathan Petit, a scientist at the software-security company Security Innovation, told IEEE Spectrum that he was able to fool the lidar systems of self-driving cars with a device he made out of only $60 worth of off-the-shelf technology.

“I can take echoes of a fake car and put them at any location I want. And I can do the same with a pedestrian or a wall.” Petit made his device using a low-powered laser and a pulse generator, although he said “you don’t need the pulse generator when you do the attack. You can easily do it with a Raspberry Pi or an Arduino. It’s really off the shelf.”

Once he made this device, Petit could use it to create from a lidar’s perspective the illusion of a car, wall or pedestrian while he was anywhere from 20 to 350 meters (roughly 65 to 1,500 feet) away from the lidar system. Perhaps even more disturbingly, Petit could carry out these attacks on a lidar-equipped car without the car’s passengers even being aware of it.

The good news is that, according to Petit, there is a way for car or lidar manufacturers to solve this problem. “A strong system that does misbehavior detection could cross-check with other data and filter out those that aren’t plausible,” he said. “But I don’t think carmakers have done it yet. This might be a good wake-up call for them.”
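The misbehavior-detection idea Petit describes can be sketched as a simple plausibility gate: keep only the lidar detections that an independent sensor corroborates. Everything below is a hypothetical illustration (the function names, the radar cross-check, and the 2-metre tolerance are assumptions, not part of any production autonomy stack):

```python
# A minimal sketch of cross-check filtering: lidar detections are kept
# only if another independent sensor (here, radar) reports something
# nearby. Spoofed lidar-only "ghost" objects fail the check and are
# dropped. Tolerances and sensor choice are illustrative assumptions.

def plausible_detections(lidar_objects, radar_objects, tolerance_m=2.0):
    """Return lidar detections lying within tolerance_m of some radar
    detection. Each detection is an (x, y) position in metres."""
    kept = []
    for lx, ly in lidar_objects:
        if any((lx - rx) ** 2 + (ly - ry) ** 2 <= tolerance_m ** 2
               for rx, ry in radar_objects):
            kept.append((lx, ly))
    return kept
```

A real system would fuse tracks over time rather than gate single frames, but the principle is the same: a phantom that only one sensor ever sees is not plausible enough to brake for.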

Petit plans to formally present his findings at the Black Hat Europe security conference this November.

 

 

Credit:  Jennifer Abel

Researchers Hack Car via Insurance Dongle

Small devices installed in many automobiles allow remote attackers to hack into a car’s systems and take control of various functions, researchers have demonstrated.

 

Researchers at the University of California, San Diego analyzed commercial telematic control units (TCUs) to determine if they are vulnerable to cyberattacks.

TCUs are embedded systems on board a vehicle that provide a wide range of functions. The products offered by carmakers, such as GM’s OnStar and Ford’s Sync, provide voice and data communications, navigation, and allow users to remotely control the infotainment systems and other features.

Aftermarket TCUs, which connect to the vehicle through the standard On-Board Diagnostics (OBD) port, can serve various purposes, including driving assistance, vehicle diagnostics, security, and fleet management. These devices are also used by insurance companies that offer safe driving and low mileage discounts, and pay-per-mile insurance.

Researchers have conducted tests on C4E dongles produced by France-based Mobile Devices. These TCUs, acquired by the experts from eBay, are used by San Francisco-based car insurance firm Metromile, which offers its per-mile insurance option to Uber.

Aftermarket TCUs are mostly used for data collection, but the OBD-II port they are connected to also provides access to the car’s internal networks, specifically the controller area network (CAN) buses that are used to connect individual systems and sensors.

“CAN is a multi-master bus and thus any device with a CAN transceiver is able to send messages as well as receive. This presents a key security problem since as we, and others, have shown, transmit access to the CAN bus is frequently sufficient to obtain arbitrary control over all key vehicular systems (including throttle and brakes),” researchers explained in their paper.
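The "any transceiver can transmit" point can be made concrete by looking at how small a classic CAN frame actually is. The sketch below packs the 16-byte representation Linux's SocketCAN interface uses for a frame; the arbitration ID and payload are made-up example values:

```python
import struct

# Layout of Linux's struct can_frame (16 bytes on the wire/socket):
#   u32 can_id | u8 dlc (data length) | 3 pad bytes | u8 data[8]
# There is no sender field: receivers act on the arbitration ID alone,
# which is why transmit access to the bus is sufficient to impersonate
# any ECU. ID and payload below are illustrative examples.

CAN_FRAME_FMT = "<IB3x8s"  # little-endian: id, length, padding, payload

def pack_can_frame(arbitration_id, data):
    """Pack an arbitration ID and up to 8 data bytes into the 16-byte
    SocketCAN frame representation."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, arbitration_id, len(data),
                       data.ljust(8, b"\x00"))
```

The absence of any authentication field in this structure is the heart of the problem the researchers describe: a compromised dongle on the OBD-II port speaks with exactly the same authority as the brake controller.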

The experts have identified several vulnerabilities in the Mobile Devices product, including the lack of authentication for remotely accessible debug services, the use of hard-coded cryptographic keys (CVE-2015-2906) and hard-coded credentials (CVE-2015-2907), the use of SMS messages for remotely updating the dongle, and the lack of firmware update validation (CVE-2015-2908).

In their experiments, researchers managed to gain local access to the system via the device’s USB port, and remote access via the cellular data interface that provides Internet connectivity and via an SMS interface.

In a real-world demonstration, the experts hacked a Corvette fitted with a vulnerable device simply by sending it specially crafted SMS messages. By starting a reverse shell on the system, they managed to control the windshield wipers, and apply and disable brakes while the car was in motion. The experts said they could have also accessed various other features.

Corvette hacked via insurance dongle

The remote attacks only work if the attacker knows the IP address of the device or the phone number associated with the SIM card used for receiving SMS messages. However, researchers determined that Internet-accessible TCUs can be identified by searching the web for strings of words unique to their web interface, or by searching for information related to the Telnet and SSH servers. Thousands of potential TCUs were uncovered by experts using this method.

As for the SIM phone numbers, researchers believe many of them are sequentially assigned, which means an attacker might be able to obtain the information by determining the phone number for one device.

Researchers have reported their findings to Mobile Devices, Metromile, and Uber. Wired reported that Mobile Devices developed a patch that has been distributed by Metromile and Uber to affected products.

Mobile Devices told the researchers and the CERT Coordination Center at Carnegie Mellon University that many of the vulnerabilities have been fixed in newer versions of the software, and claimed that the attack described by experts should only work on developer/debugging devices, not on production deployments.

However, researchers noted that they discovered the vulnerabilities on recent production devices and they had not found the newer versions of software that should patch the security holes.

This is not the first time someone has taken control of a car using insurance dongles. In January, a researcher demonstrated that a device from Progressive Insurance used in more than two million vehicles was plagued by vulnerabilities that could have been exploited to remotely unlock doors, start the car, and collect engine information.

White hat hackers demonstrated on several occasions this summer that connected cars can be hacked. Charlie Miller and Chris Valasek remotely hijacked a Jeep, ultimately forcing Fiat Chrysler to recall 1.4 million vehicles to update their software. Last week, researchers reported finding several vulnerabilities in Tesla Model S, but they applauded the carmaker for its security architecture.

In July, senators Ed Markey and Richard Blumenthal introduced new legislation, the Security and Privacy in Your Car (SPY Car) Act, in an effort to establish federal standards to secure cars and protect drivers’ privacy.

 

 

Credit:  Eduard Kovacs