Friday, May 24, 2024

Protecting the ‘Metaverse ecosystem’…: Openness is healthy

Meta’s Reality Labs has an opening for “Malware Reverse Engineer”. Not an uncommon role, but this particular one is a bit more specific when you dive deeper.

Reality Labs (formerly Oculus VR) makes those nifty Quest VR headsets with controllers you see around. They’re one of the main viewpoints/entrypoints to the Metaverse, specifically Meta’s Metaverse: a combination shared VR hub/world where one can work or play, something like “Ready Player One” (2018)’s Oasis. The ‘work’ is not replacing offices yet. The ‘play’ includes games and experiences within the virtual Metaverse environment, and individual VR games that you can purchase from the Meta Quest section of the Meta Store.

Oculus Quest 2
Credit: Maximilian Prandstätter CC 2.0

Part of the job includes a focus “on conducting security research in the Metaverse”. Presumably not operating within the Metaverse environment. No one has announced a Metaverse UI for IDA Pro or Ghidra.

Job Responsibilities

These give some hint to the malware specifics:

  • Identify vulnerabilities and potential attack vectors in the Metaverse ecosystem

“Metaverse ecosystem” is a vague term. Obviously it contains the VR environment, but also the content added and the hardware all of this runs on. Server side, that means the multiplayer back end for games and the interactions between users in the Metaverse. Client side, we’re looking primarily at the Quest hardware.

Since the Quest 2, all of the Quest hardware units are Android devices, mainly running Meta apps. But users can sideload any Android apps they desire, subject only to support by Reality Labs’ flavor of Android.

In the old PC/MS/etc-DOS days we had similar program compatibility, with exceptions for specific vendors’ versions of DOS. Similarly, on Android we have high compatibility across all current versions of Android, differing only in support for certain Google frameworks or other vendor-specific system libraries. The MS-DOS ecosystem was quite open, like Android, which led to a considerable number of computer viruses and other malware. It also led to antivirus/security software becoming a necessary safeguard.

  • Advise and consult investigative or product teams as a subject matter expert

Subject matter expert on Android malware, that is, not so much malware exclusive to the Metaverse ecosystem. There are more than a few of us around; I’ve been dealing professionally with Android malware for about 15 years now.

Working with product teams is where the fun in device security is located. Once a product hits the market, most of the security impact one can make is gone. At McAfee, we got to participate during the architecture stage of producing new mobile phones, so much so that our antivirus engine was included in the firmware of both Java phones and smartphones.

When you’re at design stage you can do threat modeling and actually fix gaps instead of placing bandages after shipping a million units.

Screenshot of Meta Store displaying titles of Meta VR games and apps.

Most users get their Quest apps from the Meta Store, but they can also sideload other, sometimes incompatible, Android apps.

  • Lead projects while effectively prioritizing time spent on reversing or malware analysis based on team priorities

This isn’t strictly incident response. The larger the organization, the more likely they have a separate IR division. The role sounds more like contributing to security for current and upcoming Quest hardware.

And it’s not a junior or staff position if you’re leading security projects. It would be useful for them to get someone who can interface with other teams within Meta.

  • Stay up-to-date with the latest security trends and threats in the industry

That’s part of the general anti-malware researcher role. Presumably this means a training budget/continuing education benefit. It is difficult to keep up-to-date with few or no resources. An O’Reilly account is nice, but insufficient. You really need to send your _team_ (surely they’re not hiring only one person) to advanced training, or at least to 1–2 cybersecurity conferences.

There should also be closer ties to other threat hunting/intel and IR teams in the organization. Sharing information and training material helps all. It really takes a village to ‘[s]tay up-to-date’.

Minimum Qualifications

The minimum qualifications give us a bit more detail:

  • Experience with operating systems (Android, Linux), ARM architecture

This is definitely an Android-specific malware reverse engineer role. Android sits on top of Linux, certain apps (e.g. high-performance games) are composed primarily of native libraries (ELF), and Android runs almost exclusively on ARM processors.

Android malware comes in many flavors these days: Java/Kotlin (Dalvik bytecode), native (C/C++, assembly), C# (Unity), and Flutter (Dart). One needs to be a bit specialized to stay on top of new malware.
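To make that concrete, here’s a minimal sketch of triaging an APK by the files it carries (an APK is just a ZIP archive). The path patterns are rough heuristics of mine, not a definitive fingerprint.

```python
import zipfile

def classify_apk(apk_path_or_file):
    """Guess the technology stacks inside an APK from its member paths.

    An APK is a ZIP archive; the patterns below are rough heuristics.
    """
    labels = set()
    with zipfile.ZipFile(apk_path_or_file) as apk:
        for name in apk.namelist():
            if name.endswith(".dex"):
                labels.add("java/kotlin")   # Dalvik/ART bytecode
            elif "flutter_assets" in name or name.endswith("libflutter.so"):
                labels.add("flutter")       # Dart/Flutter runtime
            elif name.endswith("Assembly-CSharp.dll"):
                labels.add("unity/c#")      # Mono-based Unity payload
            elif name.startswith("lib/") and name.endswith(".so"):
                labels.add("native")        # ELF shared objects
    return labels
```

A real triage tool would go further (manifest parsing, packer detection), but even this split tells you which specialist should look at the sample first.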

  • BA/BS in Computer Science or 5+ years relevant work experience within malware

There are few if any undergraduate CS programs that cover malware analysis, and few cybersecurity undergrad programs either. Most of the senior researchers learned on the job; the generations that followed them learned from advanced training courses (e.g. SANS).

Those qualifications make it clear: they need experienced malware analysts and researchers. They’re looking for mid-career to senior-level.

Preferred Qualifications

And these make it even clearer:

  • Experience to create their own tools to automate analysis or detection (Yara, Snort, etc)

Writing one’s own tools is different from writing malware and network signatures. This shows they don’t currently have those senior people on the Reality Labs team; the task of writing job descriptions usually falls to management or the most senior staff.

Malware researchers tend to write their own tools, initially out of curiosity and for learning, but then out of necessity. It’s common when we start to investigate a new platform and new malware: our old tools may not handle the file formats used, so we break out the trusty hex editor and as much reference material as we can scrounge from SDKs, forum posts, and the dark reaches of the Internet.
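As an illustration of that kind of throwaway tooling, a first-pass file identifier by magic bytes might look like the sketch below. The magic values here are well known; anything proprietary still means hex editor time.

```python
# First-pass file identification by magic bytes: the kind of throwaway
# tool an analyst writes when existing tools don't know a new format.
MAGICS = {
    b"\x7fELF": "ELF binary",
    b"dex\n": "Android DEX bytecode",
    b"PK\x03\x04": "ZIP/APK/JAR archive",
    b"MZ": "PE/DOS executable",
}

def identify(data: bytes) -> str:
    """Guess a format from leading magic bytes; 'unknown' means hex-editor time."""
    for magic, name in MAGICS.items():
        if data.startswith(magic):
            return name
    return "unknown"
```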

If they need new research, they definitely want senior level folks. Being part of the manufacturer’s team does make gathering that reference material considerably easier.

  • Familiarity with drafting scripts leveraging disassemblers like IDA or Ghidra

Scripting the major disassemblers definitely isn’t something beginning analysts do. This goes back to malware researchers writing their own tools, in Java, IDC, or IDAPython.

Conclusion

This Malware Reverse Engineer role looks like it could easily hold the interest of a senior-level malware researcher. The salary range also seems to cover a senior candidate. Meta is also one of the few companies that is OK with remote work. (I spent almost a decade at McAfee 100% remote. Most of my senior-level colleagues at other anti-malware firms work fully remote. It has been SOP for a couple of decades.)

If the hiring managers at Reality Labs recognize that they’re trying to hire senior-level staff, I’d recommend any of my colleagues apply.


Tuesday, August 13, 2019

Auto "Kill Switch", solving the wrong problem?

Consumer Watchdog, a consumer advocacy group, put out a report on the dangers of Internet connected cars. They received coverage on the nightly news. Their heart is in the right place, but we must question the accuracy of their conclusions.

Their report, "Kill Switch: Why Connected Cars Can Be Killing Machines and How to Turn Them Off", covers how Internet Connected automobiles are vulnerable to cyber attackers.

They suggest the following:

  • Automobile software/firmware that can be updated Over The Air (OTA)/via the Internet is "unfinished".
  • Cyber attackers have access and can control all connected cars, so shut off the cars' Internet access.
  • "White Hat" hackers and bug bounties encourage continual patching/improvement of software that was "never fundamentally secure".

These suffer from some misconceptions and flawed ideas. 

From the report:

p 21

"However, it also allows the automaker to control the public message, covering up an inadequate solution, and ensuring a positive spin on what should be a public embarrassment."

Manufacturers benefit by "deflecting the public shame of selling consumers an unsafe product". Matches can be dangerous, usually due to user error/misuse, yet we don't claim they're an unsafe product.

p 22

"So-called “responsible disclosure” is irresponsible when public safety is at stake."

Responsible disclosure involves working with manufacturers and vendors to fix discovered bugs. Immediate public disclosure is avoided so that cyber attackers can't release exploits before manufacturers can test and release a patch. Public safety is a fundamental consideration of Responsible Disclosure.

p 44

"This “air gap” method is time-tested and very effective, as no matter how buggy the software, a hacker cannot cross the air gap from the remotely-accessible components to the components that control the car’s motion. "

That used to work. Then folks dropped USB flash drives carrying Stuxnet near an Iranian nuclear facility. It turns out people can cross air gaps.

p 45

"Only in the last few years have we begun making cars remotely accessible via computer networks. It is therefore very unlikely that the features made possible by the “connected car” are things we cannot live without, at least until we can develop a safer way to implement them. "

Mechanics have had to use computerized diagnostics for quite some time now. We're not going back to a time of un-Connected cars. We must instead build more protections and safety as we do with our other computerized equipment.

"help restart the transportation infrastructure after a massive cyberattack"

If one's automobile is bricked, turning the Internet off and on again will never fix it.

"However, if cars were required to have the ability to disconnect from the Internet, we could restore our transportation infrastructure with the flip of a switch."

An Internet worm that can infect thousands or hundreds of thousands of automobiles in thousandths of a second cannot be stopped by people manually flipping switches over seconds. It seems the watchdog group saw the same Fast & Furious movie I did but came to vastly different conclusions.

p 46

"CEOs of auto manufacturers should be required to sign personal statements and accept personal legal liability for the cyber-security status of their cars."

Looking at the VW emissions scandal and the Boeing 737 MAX issues, executives rarely have to take responsibility for the security of their products.

p 47

"Automaker “bug bounty” programs have demonstrated that vulnerabilities can be bought for a few tens of thousands of dollars"

A $10,000 DoS bug is not the same as a $100,000 Remote Code Execution bug. Nor is a Volkswagen Beetle the same as a Tesla Model S. The higher the impact and usefulness of a bug, the greater the price. Bug Bounty programs do not greatly drive down the cost of high impact bugs.

"A clever hacker could even make it look like a third party was responsible."

Attributing cyber-attacks to specific attackers is a hard problem. Attributing attacks that arrive over the Internet at Connected Automobiles, even more so.

"cars have been provably immune to cyber-attack because they weren’t connected to the Internet"

Some of the automobile security research cited in the report included attacks against Non-Connected Automobiles. In one case, music files uploaded to a car's entertainment center were able to exploit safety-critical portions of the car. No Internet access is not the same as being provably immune.

p 48

"every software update you receive on your smartphone or other connected gadget means the previous version of the software wasn’t finished."

Very simple programs can be proven bug-free; any reasonably complex program or system cannot be. Software is never "finished". Software updates ensure that attackers can't exploit software, steal data, or damage your equipment. Updates also mean your software works better.

"Allowing automakers to update critical software frequently, easily, and away from public and regulatory attention"

Medical equipment is at risk due to the potentially long re-certification process required when operating systems or firmware are patched to fix bugs. There is a trade-off between safety for people and the security of computerized systems. Even with regulatory attention, computer security mitigations have been implemented to protect systems that may take a while to be patched. There is no need to sacrifice safety regulations entirely for equipment security.

We can fix problems with Connected Cars. We don't need to cut ourselves off from the Internet to do so.

Consumer Watchdog report: Kill Switch


Sunday, May 19, 2019

Brickerbot, 'Zombie' Cars, and IoT vulnerability reporting

Note: These are speaking notes from a presentation I gave at SparkleCon in 2018. Much of the information is in the notes, so Slideshare only displays about 10% (the slides).



Nixon is a great security researcher, and I agree wholeheartedly with the first half of this statement. Attackers are using botnets primarily for profit, with Distributed Denial of Service (DDoS) as a primary source of income.

I respectfully disagree with the idea that the only solution is to turn to law enforcement. While law enforcement has great powers of investigation and response after the fact, there are still a number of steps we can take to prevent attacks.

On Bots/Zombies


Attackers exploiting botnets to perform large-scale DDoSes has become common. Bots (robots, zombies, whatever you call them) are nodes in a network of malicious machines. Traditionally, attackers either infected machines with malware or, in a less automated fashion, convinced users to act like bots.

LOIC (Low Orbit Ion Cannon) is an example of software that allows users to participate in a DDoS. This is the simplest technique: each user is independent but collaborates with a multitude of like-minded users. Imagine a hundred thousand individuals, each with a single rifle, all aiming at the same target. Some will miss. Some rifles will misfire. Some shooters will never understand how to fire a bullet. Regardless, a majority will hit the target. Of course, the efficiency of such an attack is much less than one that eliminates human error.

Infecting numerous bots is sometimes only the first step. An attacker with hundreds of thousands or millions of bots needs to do something with them; they make no money lying idle. This is where another traditional technique comes in: DoSaaS (Denial of Service as a Service). I have a botnet, you pay me to take your target down, we all profit. Except the major site we’ve just knocked down.

Third party services increase as a market matures.


Attackers are looking to make a profit. The usual methods work, but they can be costly. There is a need to purchase or develop one’s own malware to build up one’s botnet.

IoT botnets help to reduce those costs. Several IoT botnets have had their source code released by their authors. 

Turns out security on IoT devices is severely lacking. No traffic control, no firewalls, no real authentication. Default credentials. Let me repeat that: default username/password combinations that users can’t easily change.

Mirai made the big splash, using a long list of default credentials to log in to embedded devices and then turning them into bots. 

Having a list of default creds is useful, but more benevolent trespassers on one’s Internet-enabled cameras and home routers can also use the same list to log in and patch your systems. Linux.Wifatch is famous for being a worm that connects to IoT devices and patches them, locking out the bad guys. Linux.Hajime does something similar, locking down ports to prevent other botnets from connecting.

As another show of good faith the authors of Linux.Wifatch released the source code to their patching worm.

Good or bad, none of these worms would gain as much traction amongst IoT devices if it were possible to modify logins. 

So, Brickerbot?


Right so, Brickerbot. Where Wifatch and Hajime do their part in securing devices by changing passwords or disabling outside access, Brickerbot goes about it in a slightly different manner. If we just brick all of these vulnerable IoT devices they can’t be turned against us. Great idea. 

Mudge, of L0pht and DARPA, has heard about this idea too, from reasonable folks in the Intelligence Community and the Department of Defense. These are the kind of folks who get to take direct action. Yet it seems even they couldn’t get away with bricking the devices of civilians in the US and all over the world.



The author of Brickerbot calls himself The Doctor. He has also posted on an underground forum as ‘Janitor’. Sometimes attribution is easy, like when an actor directly connects online identities or claims credit. Attribution is harder when all you have is the end result such as a binary or obfuscated script. 

A source code release, like the one the authors of Linux.Wifatch did, is a primary method of claiming authorship. Releasing an obfuscated script that doesn’t or cannot execute is like showing off portions of a wrecked fighter aircraft with any and all markings or identifying information removed: it makes for a great showpiece and allows one to take or (more often) give credit. Other researchers suggest that some of the attacks The Doctor claimed, such as one on a major mobile carrier’s network, were not performed by him, or at least not using any of the exploits contained in Brickerbot. Do we just take the word of whichever party we trust more? Absent a release of source code or of forensic results from the various attacks, it seems that’s the default position.

Where are we? 
  1. We may not be able to trust Brickerbot’s ‘author’ 
  2. The publicly available sample is essentially an obfuscated list of exploits and ‘bricking’ code 
  3. We need to examine the Brickerbot code a little closer to see what we have 
What's bricking?


Bricking is just turning a useful device into something as useful as a brick. This is a perfectly legal action that one can perform on devices one owns. When done to others’ devices, it is almost always illegal and occasionally an act of war.

To be clear, Brickerbot is intended to operate entirely on others’ devices. 

Brickerbot source code is everywhere...


Normally, coming from the malware analysis side, I’m loath to help spread malware. In this case, the cat is out of the bag, the horses have left the barn, and there are dozens of pastebin-like sites and one or two GitHub accounts with a copy of the released Brickerbot script.

So if you would like a copy of the script in order to play along at home/analyze just google for the following line: 

 “if 57 - 57: O0oo0OOOOO00 . Oo0 + IIiIii1iI * OOOoOooO . o0ooO * i1” 

Malware analysis, simple? Not quite.


A quick aside about the malware analysis process: assuming you’re not doing it as a hobby, it’s almost always performed under pressure.

One never has enough time to do a complete teardown, to check every nook and cranny of a given target. You may love puzzles and even enjoy completely solving them, but you never have that time at work. When you eventually do, it’s known as that nearly mythical time period ‘vacation’. 

An analyst always has to deal with multiple competing pressures from various interested parties. In no particular order:
  • Customers
  • Bosses/Upper Management
  • Competitors 
  • Press 
Depending on how one receives a sample, one or more of these parties will be aware of it, and the clock starts ticking. One won’t always satisfy all parties. Regardless, handling the various competing interests is the job.

Ok you don’t have unlimited time. A customer is under attack. The press is minutes from publishing. Higher ups are yelling at your boss for an update. What do you do?

Generally one searches for IOCs (Indicators of Compromise). It really does become: what is the least I need to see before I know my home/office/business is irrecoverable?

Time to look at the sample


What has The Doctor dropped in our collective laps? 

We’ve got a script that appears obfuscated, with no line listing the interpreter the shell should use. First steps for analyzing this malware:

 1) Let’s see if it runs 

     a) Make sure the VM has no network access

     b) Include possible runtimes (python2, python3)

Shocker! It doesn't run. Why? 

Running with Python 3 fails, mainly due to the print statement becoming a function in Python 3. Thus we know it’s Python 2.

Running under Python 2 it fails. Symbol not found. In this case due to additional whitespace turning a function call into an undefined symbol. Next, we need to remove extraneous whitespace. 

Re-run and it fails again. This time due to not a single library being imported. You have to be kidding me. The script as provided was never intended to run. 
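That parse-or-not triage is easy to automate. A minimal sketch, using the running Python 3 interpreter’s own compiler (a Python 2 print statement is a syntax error under Python 3):

```python
def parses_as_python3(source: str) -> bool:
    """Return True if the source at least compiles as Python 3 syntax.

    A Python 2 'print' statement raises SyntaxError under Python 3,
    so a False here is a strong hint the sample targets Python 2.
    """
    try:
        compile(source, "<sample>", "exec")
        return True
    except SyntaxError:
        return False
```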

 2) Maybe de-obfuscating the script would simplify the analysis 
  •  writing a custom de-obfuscator is a good solution, unfortunately it’s a vacation project. We’ve still got a job to do. 

    • It’s good to get acquainted with tokenizer.py for that eventual vacation 

  • We can still write a one-off script to quickly remove dead code. Dead code being things like If statements that are always false, or statements that make no changes to program state.

    •  Here’s where one gets to know PyLint on a first name basis. Statically checking for errors allows one to remove useless lines of code from the script. 

  • A final step is to pretty-print the script. Pretty-printing reformats source code so that it follows certain coding guidelines(e.g. Tabs not spaces, splitting multiple statements on one line over multiple lines, etc.) 
After all those steps, it still doesn’t run. Assuming malware analysis is part of your job, you stopped trying to deobfuscate after the step with the unknown symbols and probably went straight to the next stages.
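For the dead-code pass, a one-off script along these lines will fold away `if` statements whose condition is constant arithmetic that evaluates falsy, like the `if 57 - 57:` guards in the dump. This is my own illustration, not the exact script used at the time.

```python
import ast

def const_value(expr):
    """Evaluate an expression built only from literals and arithmetic ops."""
    for sub in ast.walk(expr):
        if not isinstance(sub, (ast.Constant, ast.BinOp, ast.UnaryOp,
                                ast.operator, ast.unaryop)):
            raise ValueError("not a constant expression")
    wrapped = ast.Expression(body=expr)
    ast.fix_missing_locations(wrapped)
    return eval(compile(wrapped, "<cond>", "eval"), {"__builtins__": {}})

class DeadIfRemover(ast.NodeTransformer):
    """Remove `if` statements whose condition is constant and falsy."""
    def visit_If(self, node):
        self.generic_visit(node)
        try:
            value = const_value(node.test)
        except ValueError:
            return node              # can't prove it constant; leave it alone
        if value:
            return node.body         # always true: inline the body
        return node.orelse or None   # always false: keep only the else branch
```

Run the transformer over `ast.parse(source)` and pretty-print the result with `ast.unparse()` (Python 3.9+).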

Initial steps


Now that you’ve figured out it’s non-functional, it’s time to find all the low-hanging fruit. In this case, the author provided the initial hint, suggesting that one could just check the unencrypted/unobfuscated strings.

One of the first steps in statically analyzing any malware is to extract all strings. Really. Some of the best clues for attribution come from identifiers left in the code, or shout-outs to colleagues or malware researchers. Egos have led to a number of malware authors getting convicted.
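The classic `strings` pass is only a few lines of Python. A minimal sketch that extracts printable-ASCII runs (real tooling would also scan UTF-16 and other encodings):

```python
import re

def extract_strings(blob: bytes, min_len: int = 4):
    """Return printable-ASCII runs of at least min_len characters."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(blob)]
```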

In the case of Brickerbot, the simple obfuscation used by the author removes all identifiers (i.e. variable names, messages, etc.). It’s more about keeping the code from being tied back to the author than about making it difficult to learn how the worm operates.

Another time-saver used in analysis is reading other analysts’ reports. This lets you see if you’ve missed anything important (e.g. malware emailing all your contacts). It also lets you redirect analysis to portions of the malware not yet analyzed, or to specific payloads.

With Brickerbot, since it won’t run and there’s much useless code, an analyst can look at it as a container for various IoT device exploits.

Cleaning up the code


Let’s look at the Brickerbot source code. 

Random, similar-looking variable names, more whitespace than necessary. This looks horrible.

The if statements where the conditional is equivalent to 0 will never run the code that follows. Dead code. 

There are spaces in function calls; these don’t even help readability. And since no modules are imported, the sleep call on line 7 isn’t actually a time.sleep() call: it’s just the undefined symbol time, a dot, and the undefined symbol sleep.

Much of this can be removed as described earlier with a custom script to delete all dead code. 

Some of this code is ... dead!


Ok, now things are looking a bit cleaner and we can see functions. The obfuscated names are lost, but one can eventually rename them according to their function, similar to what one does with functions in an unknown binary.

This is also after pretty-printing the script so that we get rid of the excessive/additional whitespace. 

As mentioned earlier, no libraries are imported, so the code still won’t run.

You should be getting a better picture of the roadblocks placed in the code. 

Back to searching for strings


Now we can go back to step 1 of the static malware analysis process: searching for strings.

This particular segment includes commands that overwrite storage on a particular device with random data. Routes and firewall rules are cleared from memory and deleted. Then the system is halted and rebooted. By then there should be no code left to run, so your home router is now useless.

So what do we know now?

  1. This code is annoying

  2. So is The Doctor

  3. All devices with the default username and password will get disabled permanently

  4. Vulnerable devices are fixed now; since they can’t work, they’re safe

So smaller IoT/embedded devices are quite vulnerable and badly secured. Do bigger embedded systems and Internet-connected devices face similar threats? Can a Brickerbot for my washing machine be far away? Would other, larger devices be more vulnerable? Say, my car?

"Zombie" cars


These days even cars are members of the Internet of Things. What’s the worst that can happen to my car? Can it become part of a botnet? Let’s see what an expert on automotive security thinks. 

Charlie Miller is formerly of the NSA, but has made a name for himself in the private sector as a capable security researcher. Sorta reminds me of Star Trek’s Captain Picard, clever and wise. 

He wrote the first public exploits for Android and iOS, on the Google G1 and original iPhone. Later he and Chris Valasek received a grant from DARPA’s Cyber Fast Track program(the same one founded by Mudge) to research automotive security. So he moved on from hacking PCs, to hacking phones, to hacking the very cars you and I drive. 

And people complained, ‘but those aren’t remote exploits. You’ve got to be in the car to hack them. Any crook can do that.’ 

And verily, Charlie and Chris developed remote exploits. 

So when Charlie says that
  1. it’s not simple to hack all the cars 
  2. there aren’t as many people hacking cars 
You’d do well to believe him.  

"Hacker Dreads"

Except for movie hackers…

If you’ve seen the seminal ‘Hackers’ (1995) you know that “real” hackers can be identified by their hairstyles. Dreadlocks, in fact, possibly being a necessary factor in successful cyber attacks, e.g. Matthew Lillard’s character Cereal Killer. As one can see, Ms. Theron’s character, Cipher, must be quite the criminal hacker.

After all, the main heroes in the 'Fate of the Furious'(Johnson, Diesel & Statham) lack both hair and computer skills. 

Realistically, one can get a better sense of the threat posed by Cipher by viewing her environment. She is obviously well-funded (either state-backed or via independent fortune) and employs a full team of experts, especially experts who can hack cars.

When an underling tells her there are a couple thousand cars in the vicinity of her target, she orders him to hack all of them. Yes, all of them. Never mind the brand, the telematics system, or any and all firmware (or lack thereof) within each of these thousands of automobiles. Those hacker dreads must confer some extreme hacking powers.

"Bricking cars" the Hard Way


We should just brick everybody's car in a 5-mile radius. It’s not like anyone needs to get to work, pick up their kids, drive a buddy to the hospital, or drive for Lyft/Uber. Maybe they didn’t really need their personal cars.

This goes back to the original idea of developers and manufacturers of IoT devices abdicating all responsibility to Law Enforcement. Or, in the case of The Doctor, to vigilantes.

It's a common observation that fixing bugs at the earliest point in development is many times cheaper and less dangerous than patching them after release. Though we can’t always fix things before customers take possession.

It’s possible to release firmware updates and patches afterwards, but there is not always an incentive to do so. The manufacturer of my car will occasionally release updates for the telematics system living in my dash, but the manufacturer of my new Internet-enabled toaster might say it’s no longer supported.

This is not usually a problem, until attackers not as benevolent as the Wifatch and Hajime authors, or Charlie & Chris, discover and exploit a vulnerability to turn your new Mustang into a torpedo. With Miller & Valasek, I know they’ll make an effort to reach out to the manufacturer and enable a patch or update to be created.

We can learn lessons from the way vulnerability reports are handled on other platforms. Embedded systems and IoT devices are not as dissimilar from desktop and server PCs as one would think. The same way security researchers reach out to Apple or Google for bugs in their phone OSes, they can reach out to various device manufacturers. 

In theory. 

Industry buy-in


It can be difficult to get industry members to agree on issues like vulnerability disclosure, regular patching, and working with outside security researchers in general. It is very much like the common description of ‘herding cats’. While many of the players may look similar and even share similar interests, it is difficult to get them all to come to the table.

Each company has no special reason to trust another. It usually takes commonly trusted individuals and backing from companies with greater resources just to begin the conversation.

Fortunately we’re seeing motion towards that goal in a small subset of the Internet of Things/Internet-connected embedded devices. 

Steps to get Industry buy-in



How do we get all these cats eating peacefully at the same bowl? 

We can start by selecting someone credible and widely trusted as a focal point for the multitude of players: a Pied Piper that all the cats can turn towards. In this case that would be Renderman (Brad Haines), a well-known and respected security researcher. He’s also a CISSP.

Renderman has a wide range of experience with penetration testing, wireless security, and computer security research. He is a published author and a speaker at numerous computer security (Black Hat USA) and hacker (DEF CON) conferences.

The interesting part of this is that Renderman is operating within the category of items that fall under the heading ‘Adult Sex Toys’ on Amazon.com. 

One might have expected that other personal IoT devices, such as fitness trackers, would be where more security research occurs. The recent information leak from Strava, which showed the running paths of US armed forces members, is one such area of research. Here a personal IoT device’s lack of default security and privacy controls ends up violating Operational Security (OPSEC) at US bases around the world. The main threat is not the bases’ locations (arguably, opposition forces and host countries already have that knowledge) but current intelligence confirming activity at the bases, and possibly the number/names of active personnel.

Renderman has accomplished something that those of us in other specialties (e.g. Antivirus/Anti-malware) haven’t: he’s managed to convince disparate manufacturers to trust him, both as a source of best security practices and as an interface to the wider community of vulnerability researchers and hackers.

The key is for security researchers to have a path, or some organized way, to provide vulnerability information to manufacturers. We’ve seen on the desktop that there is a chilling effect when vendors sue or attempt to prosecute security researchers for their discoveries. Coming up with a set of guidelines so that researchers can talk to manufacturers, and vice versa, will help make all of us much more secure.

One more aspect that may have contributed to Renderman’s success at organizing vendors is the use of branding. Others have found success in branding vulnerabilities(e.g. Shellshock, KRACK, Spectre, etc.). As with various pharmaceuticals, end users and experts alike are more likely to remember a brand name than a numbered CVE. In the case of a project about internet connected adult toys, it is the ‘Internet of Dongs’ and vulnerabilities are assigned DVEs(Dong Vulnerability and Exposures). Despite the jocular naming, the project has had success in getting reported vulnerabilities handled and has improved communication and cooperation with manufacturers. Other IoT vendors and the security research community can use it as a template for securing the millions of other Internet connected devices. Perhaps then we can see the end of the threat of IoT worms.

Questions 

Q: What sort of threat model are IoT vendors using? 


A: Good question. Default credentials implies there is no threat model. A basic threat model would take into account the simplest attacker, your common script kiddie: take a dictionary of default credentials and a list of target addresses and feed them to a scanner or ready-made tool. Even after the number of in-the-wild IoT worms we’ve seen, the script kiddie would still end up with control of a significant number of IoT devices. Unfortunately, security is currently at best an afterthought for a large number of vendors.
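That script-kiddie attack can be turned around and used defensively. A minimal sketch in Python; the device addresses, usernames and passwords below are hypothetical examples for illustration, not real product defaults:

```python
# Audit a device inventory against a dictionary of well-known factory
# defaults -- the same list a script kiddie would feed to a scanner.
# All names and credentials here are hypothetical examples.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "1234"),
}

def uses_default_credentials(username, password):
    """Return True if the credentials appear in the default dictionary."""
    return (username, password) in DEFAULT_CREDENTIALS

def audit_devices(devices):
    """Yield addresses of devices still running factory credentials."""
    for address, username, password in devices:
        if uses_default_credentials(username, password):
            yield address

# Hypothetical inventory: (address, username, password)
inventory = [
    ("192.0.2.10", "admin", "admin"),       # factory default -- flagged
    ("192.0.2.11", "camera", "x9!fQ2#bT"),  # user-changed -- passes
]

print(list(audit_devices(inventory)))  # ['192.0.2.10']
```

Simply letting users change the password removes a device from that flagged list, which is exactly why it keeps the script kiddie at bay.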

A proper threat model would need to take into account attackers of various skill levels and budgets.

Script kiddies might be kept at bay by simply allowing users to easily change passwords, or by using public-key signatures to ensure firmware updates come only from the manufacturer. Best practices in security will overall protect the bulk of users. If your threat model also includes Nation-State actors, then it’s likely your development budget is of a commensurate size.
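The firmware-signing idea can be illustrated with a deliberately toy example. The sketch below uses textbook RSA with tiny primes(p=61, q=53) so the arithmetic stays visible; a real device would use a vetted crypto library and a modern scheme(e.g. Ed25519), and the key values here are purely illustrative:

```python
# Toy firmware signing: only the holder of the private exponent D can
# produce a signature the device will accept. Textbook RSA with tiny
# primes -- for illustration only, never for production use.
import hashlib

N, E = 3233, 17  # public key, baked into the device (n = 61 * 53)
D = 2753         # private exponent, held only by the manufacturer

def digest(firmware):
    # Reduce a SHA-256 digest modulo n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(firmware).digest(), "big") % N

def sign(firmware):
    return pow(digest(firmware), D, N)               # manufacturer side

def verify(firmware, signature):
    return pow(signature, E, N) == digest(firmware)  # device side

update = b"firmware v1.1"
sig = sign(update)
print(verify(update, sig))            # True: genuine update accepted
print(verify(update, (sig + 1) % N))  # False: forged signature rejected
```

Because verification needs only the public half(N, E), the device never has to store a secret that an attacker could extract and reuse.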

It would also need to consider the attack surface for a given device. Does it connect to the Internet? Is the firmware protected from modification? Can an attacker cause the Li-ion batteries to overload/ignite? Is it possible to inject malicious traffic into the stream of commands sent to the device?

Regardless, these are factors that need to be considered at the beginning of the development cycle and not once the product is in the hands of consumers. Once there, it is considerably more expensive to mitigate(e.g. by patching, or recalling devices).

Q: Would legislation help to increase security in IoT/Internet-connected embedded devices? 

A: Legislation is insufficient. Mandating that IoT devices must be secure does not automatically make them so. There is no such legislation calling for Windows to be secure, yet the market, consumers, and the work of numerous vulnerability researchers, along with Microsoft, have over the years made Windows increasingly more secure.

What IoT and embedded developers can take from the example on the desktop is that similar mitigations can be applied to their products, and that adding security helps to reduce expenses and increases consumers’ trust in their products.

Wednesday, May 08, 2019

"Oh, the places you'll go!": A look back at reverse engineering on Mobile/Embedded Systems

A bunch of years ago(circa 2003), after a little while in the Antivirus industry, I'd been consulting on web development but still had an itch for reverse engineering and malware.

The day job was mostly maintenance and upgrade on a VBScript(not .Net) site. With an MS Access database backend. Eventually this was converted to the WAMP stack and things got a bit more stable. Unfortunately stable became boring.

So I picked up perl in the evenings. My interest was in picking up a new language and playing with newer platforms. I'd always wanted to get a Psion Series 5 or a Revo palmtop computer but could never justify the $400-$600 price. I got lucky: Tiger Direct ended up with a lot of Diamond Mako(re-branded Psion Revo) palmtops and was selling them for the impulse-buy price of $100. EPOC32(later renamed Symbian) was the OS running on the Series 5(and later many Smartphones).

Picture of A Diamond Mako palmtop computer. It is a re-branded Psion Revo device.
A Diamond Mako(Psion Revo)
Credit: Miha Ulanov Licensed under 
CC BY-SA 3.0



“The little [computers] have .exe’s too? How cute!”


I'd picked up a few habits in the Antivirus industry that served me well:
  1. Run executables only as a last resort.
  2. Make sure you have analysis tools, even if you have to build them yourself.
Tools that dump information on executables are a primary tool for malware analysts. If you ask 100 Windows malware analysts, likely all 100 of them have written their own PE dumper, for the same reason: the research to build these tools requires learning the file format in depth.

On Symbian I wanted to know what info I could pull from its various file formats(e.g. .exe, .sis). For the .exe and .dll dumper I found a lot of the information(e.g. internal structures, values, etc.) necessary to build the executable dumper in a header file posted to a Usenet group. (Today the post is gone, but the header file is available as Open Source.)
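The spirit of such a dumper is easy to sketch. Below is a minimal Python version for the Windows PE format, using offsets from the published PE/COFF specification; the Symbian executable dumper followed the same pattern with different structures. The input is a synthetic header built just to exercise the parser, not a real binary:

```python
# Minimal PE "dumper": just enough parsing to prove we know the format.
import struct

def dump_pe(data):
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    # e_lfanew at offset 0x3C points to the PE signature.
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    # COFF file header follows: Machine, NumberOfSections, TimeDateStamp.
    machine, nsections, timestamp = struct.unpack_from("<HHI", data, pe_offset + 4)
    return {"machine": hex(machine), "sections": nsections, "timestamp": timestamp}

# Synthetic image for testing the parser (not a real executable).
header = bytearray(0x40 + 24)
header[0:2] = b"MZ"
struct.pack_into("<I", header, 0x3C, 0x40)            # e_lfanew -> 0x40
header[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<HHI", header, 0x44, 0x014C, 3, 0)  # i386, 3 sections

print(dump_pe(bytes(header)))  # {'machine': '0x14c', 'sections': 3, 'timestamp': 0}
```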


“Stop opening things. They packed them for a reason.”


With the .exe dumper done, I moved on to .sis files(Symbian's installation file format). They're useful: unpack them and you get more files, especially the .exe/.app/.dll files. If you wanted to analyze a potentially malicious sample, you'd need an unpacking tool.

This is where the reversing fun happened. The first version of .sis files, used on earlier EPOC devices(Release 5), was documented publicly. The format was updated for EPOC Release 6(Symbian).

Public specifications, even slightly outdated, help reversing immensely. You also need samples of known-good files, a hex editor in which to view them, and some way to take notes.

As Perl had a Symbian port I was able to run the same code on both my laptop and the Mako(Revo). I could verify it was working and displaying the right information on the device.

With a spec in hand, I started hunting for sample SIS files. Success came from a number of places(e.g. old Symbian SDK CDs, sites on the Internet). Having an actual Symbian device gave me access to additional valid files.
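The first parsing step of such a tool is short enough to sketch here. Per the public spec, a .sis file starts with four 32-bit little-endian UIDs(the fourth being a checksum over the first three); everything beyond that is simplified away below. The 16-byte header is synthesized from the UID values in the DumpSis output shown later in this post:

```python
# Read the four UIDs at the start of a .sis file -- the first lines a
# DumpSis-style tool prints. Checksum validation is omitted for brevity.
import struct

def read_sis_uids(data):
    uids = struct.unpack_from("<4I", data, 0)
    return {f"Uid{i}": hex(u) for i, u in enumerate(uids, 1)}

# Synthetic header using the UID values from the Cabir sample's dump.
sample = struct.pack("<4I", 0x038B1A3D, 0x10003A12, 0x10000419, 0xAB80E0C4)
print(read_sis_uids(sample))
# {'Uid1': '0x38b1a3d', 'Uid2': '0x10003a12', 'Uid3': '0x10000419', 'Uid4': '0xab80e0c4'}
```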

Staring at hex dumps isn't the exciting part; figuring out what's in a spec and comparing it to reality is. I've had colleagues who can tell you what specific system call is encoded in those hex bytes. They can tell you where a particular virus opens an .exe, and where it copies its virus code to the end. It seems like magic at first; then you see it's just pattern recognition. More fun is climbing inside the head of the creator; with file formats, seeing what they did right and occasionally where they went wrong.

Viewing .sis file with Symbian worm SymbOs/Cabir.A in a hex editor.

I spent a few months building DumpSis(creative naming, no? Programmers tend to be lazy about naming; Symbian later named their own tool the same), running tests, fixing bugs and taking notes. Finally DumpSis worked, so it went on the shelf/in the toolbox, because this was still 2003-2004 and there was no Symbian malware. Yet.

Unicode Build
Uid1: 0x38b1a3d Uid2: 0x10003a12 Uid3: 0x10000419 Uid4: 0xab80e0c4 
SIS CRC: 0xe840 
Number of Languages: 1 
Number of Files: 3 
Number of Dependencies: 1 
Installed Language: 
Last Installed File: 0 
Installed Drive: ! 
Installer Version: 200 (0xc8) 

Type: App Version: 1.0.0 
Install Name: caribe 


Files ---------|
             1 |-!:\system\apps\caribe\caribe.rsc 
             2 |-!:\system\apps\caribe\flo.mdl 
             3 |-!:\system\apps\caribe\caribe.app

DumpSIS output for SymbOs/Cabir.A.
Things changed in mid-2004. I got contacted by Antivirus colleagues asking for help getting DumpSis running on Windows. I asked what they were using it on. The virus writing group 29A had released SymbOS/Cabir in their latest zine(29A #8).

Soon after, Symbian malware became a thing. I exchanged samples with other researchers and provided analysis. (Still more fun than the day job.)

About six months after that I got asked if I wanted to get back into Antivirus Research. It was the right place at the right time with the right tools.

Symbian malware led to other mobile phone malware, then to other embedded system threats. Times change, yet somehow to this day I'm still explaining that new platforms are vulnerable.

Tuesday, November 20, 2018

"Don't you build your own tools? If not, why not?"

In a recent issue of the Doctor Strange comic book, the good doctor is asked a question by the weapon forging Dwarf Eoffen. "Don't you build your own tools? If not, why not?". The question is one framed by an expert from one generation talking to another about their skills and experience. 

Doctor Strange, where are your tools?
A colleague brought up the point that some of these large toolsets and frameworks were written by experts("wizards"/"sorcerers") who invested so much of themselves in the tools they built and it's a sign of respect to associate the names. While related, this is not the issue(e.g. Fyodor will always be tied to nmap, Ilfak to IDA Pro).
(Doctor Strange (2018) #4)

Building your own Tools

At interviews I ask a form of the question the Weapon Master Eoffen asks Doctor Strange. That, and whether you like puzzles. Much of malware analysis and reverse engineering is puzzle solving. In CTF challenges or cracking software protection you're trying to figure out how your opponent is trying to fool you. Eventually you get good at solving the simple challenges and breaking simple ciphers. Then you start upping your game, doing more research, and building your own tools.

Why? Because your opponent knows what you do and what's in your toolset. After all you're both in the same business. We know the strengths and more importantly the weaknesses of our standard tools.


I like to say that every Windows reverser eventually writes their own PE dumping tool. The saying applies generally to any platform and its executable or other formats. Not only does building our own tools give us a deeper understanding of what we're studying, but it gives us alternatives when the standard tools fail or are disabled.

Training
I received some of my earliest training with professional tools, some of which are as outdated or outclassed as the spells used by Doctor Strange. Tools built and designed by others. Can we write the same tools? We may not all develop some of the larger scale tools or frameworks("Spell Books") but eventually we all make our own special purpose tools. Colleagues, even those who claim they don't "code" or program, still end up writing their own tools, using everything from full fledged programming languages to bare-bones shell scripts.

On my first foray into the antivirus industry my mentors trained me by throwing a bunch of malware at my desk and my machine. My only tools: a system-level debugger, some DOS floppies, a modified hex editor, a DOS & BIOS function/interrupt reference, and a notepad. It did teach me that those last two would be my best and first tools. As we entered the Windows era, something like that would be counterproductive. These days I would modify the initial toolkit, but it would still involve teaching my charges how to take notes and trust their intuition.

Of course we build our own tools
In the end Doctor Strange, former surgeon and the Sorcerer Supreme of Earth, arguably a professional user of and Subject Matter Expert on all things Magic, must utilize his knowledge and experience to craft his own mystical weapon/tool, the "Scalpel of Strange". Did it do everything he needed to do? No. It did allow him to complete the task at hand. Was it a reflection or implementation of his experience, attained knowledge and intuition? Absolutely.

It's ok to trust the standard tools. Don't leave everything to them. Do trust in yourself and your own tools.

Tuesday, March 17, 2015

Internet of Dolls: See you later, Barbie.

In a recent episode of CSI: Cyber, baby monitoring cameras had malware inserted into their firmware to allow criminals to spy on babies in their cribs. The crooks and kidnappers kept track of routines and schedules in order to find the best time to abduct a child.

On CSI:Cyber, television kidnappers hack baby camera firmware to spy on children.
While baby cameras are intended for the purpose of monitoring your child, that's not the case with a new Barbie doll from Mattel set to debut in the upcoming Christmas season. The Hello Barbie is capable of carrying out conversations with your child on a similar basis to Siri or Cortana on your phone. Where the phone AIs are there to follow your commands or search the web, Hello Barbie will speak with your child and learn from their responses. Like Siri, the wifi enabled doll sends the child's responses back to its creator's servers(SF-based ToyTalk) so that it can better answer the child.

The Hello Barbie, waiting to have a chat with your kids.

"Furbies are listening to everything!"
Sixteen years ago in 1999, the National Security Agency(NSA) banned Hasbro's Furbies from their premises. This was due to the little toys having the ability to listen and "learn" new phrases. The toys had a limited English vocabulary and a smaller vocabulary of words in their own language, Furbish. Instead of learning like a parrot, further English words were unlocked slowly until the Furby spoke mostly English with a few Furbish phrases. The NSA was being cautious as Furbies were brand new and produced in factories in China, where it's possible that foreign spies could insert radio chips into the toys.

An original 90's Furby. They (probably) weren't spying on your kids.
Credit: @blamethecrane http://www.flickr.com/people/66376272@N07/

These original Furbies were not network connected. Furbies have been reverse engineered to see how they function and how to repair them, but no special radio chips were found inside to allow criminals and spies to listen in on private conversations.

Today the same can't be definitively said of modern Furby Booms with their own iPhone and Android apps. One can feed them when they're hungry, play games with them on your iPad, give them "medical" check-ups when they get "ill". These additional functions just need a compatible mobile app.

An attacker looking to control a modern Furby has much of the hard work done. Like the original Furbies, the new ones have also been reverse engineered to see how they function and/or to modify their behavior. Researchers have even decompiled and analyzed the Android app to work out the communication API. Unlike Hello Barbie, even a modern Furby doesn't have the hardware to send anything children say over the Internet.

Shhhh... Hello Barbie is around
There has been talk about not inviting Hello Barbie into our homes; not allowing her to speak with our kids. The arguments have been that it's like bringing an open microphone into your children's bedrooms, or in some cases even worse, inviting marketers.

Hello Barbie has only been seen in demos so far and she won't be available for purchase for months. Is she secretly listening? Maybe not, it looks like she has an indicator light and plays a tone when she hears you speak.

Her creators say that she will learn from speaking with your child. She's already got an advantage on the Furby. Having a built-in microphone and the ability to send audio to a speech recognition backend lets her respond more like a real person.

A Hello Barbie is able to communicate over the Internet. Does it have its own account like the power meter on the side of your house? Or like late model cars? No, but it does talk back to its creators in a similar fashion to how the power meter communicates with the Utility.

Hello Barbie beats the Furbies by actually talking to a child, remembering and responding to a child. This makes for a very social toy, powered by Mattel's partner ToyTalk, who specialize in speech recognition.

The folks behind Hello Barbie's people skills
ToyTalk is a company founded by ex-Pixar people that specializes in creating apps for children that encourage communication. They create technology for speech recognition, specialized for children instead of adults. The company makes a line of mobile games and interactive stories. 

Some of the iOS games made by ToyTalk. Kids can play along and chat with game characters.

Their backend technology is driving the Hello Barbie's ability to learn and understand when talking to a child.

As the games ToyTalk produces are frontends that encourage children to speak with characters, there is some care to ensure that parental consent is acquired. If you let your kid play the games, you need to sign up for an account with your email and agree to let ToyTalk analyze your kid's conversations. Since you would then have an account, the company can give you access and control over your kid's recordings. If you don't sign up for an account, your kids can still play, but the conversation portions of the game are not active.

In the case of Hello Barbie, the doll will likely be inactive until parents activate their own accounts and enable conversation mode. That would still leave your child with a Barbie, albeit an expensive one.

Threats to our toys: are our children safe?

  • Should we be worrying that criminals will hijack our children's Barbies in order to convince them to run away or follow that stranger? 
    • No. Expect that to be the plot of a future episode of CSI:Cyber or Scorpion.
  • Will they attack our apps?
    • Almost certainly. 
  • Will they attack our children's apps?
    • Possibly. Criminals, especially computer criminals, tend to look for a profit. It's more likely they'll try to steal financial information(e.g. overheard credit card numbers) at the kitchen table rather than the name of your kid's best friend. 
  • Will criminals use modified firmware to create a botnet of Hello Barbies to steal the money from all of our Apple Pay accounts?
    • No. Also more likely a plot for CSI:Cyber.
We are all still safe until Hello Barbie is finally released. When that happens, mobile apps will be available for download by the world at large, including computer criminals, who will finally be able to reverse engineer them, looking for vulnerabilities to exploit. As with the Furby, more features will give more for children to play with, but they'll also give more to crooks.



Wednesday, January 14, 2015

Smart Luggage Locks: Are we ready for them?

When you travel by air a lot you tend to get efficient about packing your bags(like George Clooney in 'Up in the Air'). Due to efficiency(or other reasons), locking your bags tends to fall by the wayside. If I could simply wave my phone over my bag in order to unlock it, that would definitely save me time. eGeeTouch believes they have the solution: smart luggage locks. Like the folks behind the Noke padlock, eGeeTouch is providing a way to carry the keys to your bags everywhere.

Roughly the size of the average luggage lock the eGeeTouch
smart luggage lock means you'll never forget your key. 
Smart NFC-enabled luggage locks
The eGeeTouch locks are different from older locks in that they don't use keys(other than the TSA master keys) or combination wheels. I've forgotten my combinations and lost or misplaced a key before[1], so I definitely have an interest in these locks. eGeeTouch promises the ease of key management and easy unlocking through a mobile app. Even without a phone that supports NFC(Near Field Communication, like in touch-and-pay credit cards), eGeeTouch also provides separate programmable NFC tags.

Near Field Communications lets you pay for things by waving your
phone. Now you'll be able to unlock your bags too.
Credit: Steven Walling

The locks themselves will have a suggested cost between $20-30 apiece. Larger licensing deals may reduce the cost for the end user. At the moment they're about 3x the price of a non-smart lock. When they're eventually licensed by baggage manufacturers, the cost will be included in the price of your new luggage.

The most expensive Smartphones now include NFC support(for use with Google Wallet or Apple Pay), though the average Smartphone is not excluded. The programmable NFC tags can be registered as keys for your lock using the eGeeTouch Access Manager app.

Attacker's eye view of the eGeeTouch lock
The eGeeTouch locks do look quite interesting, but it's likely attackers will find new ways to access your property.

The Lock Manager app includes a number of functions:
managing password, making a backup, and managing tags.
Although some of that functionality is currently unimplemented.

There are a number of ways to attack a smart lock or access what it protects:

Physical
1) Cloning smart tags/"keys"
2) TSA approved keyway
3) Zipper tricks

Technological
1) Exploit lost key replacement protocol
2) Extract key from phone app

Physically cloning the NFC smart tags(i.e. the keys) with a tag reader/writer would be the "Hollywood" method. Technically a perfect copy, but requiring an attacker to expend more resources(people, hardware/software, time, money) than the value of whatever is stored in most people's luggage.

Attackers may go after the next most complex physical defense, the TSA bypass lock.  The seminal report on TSA compatible locks by security and lock expert Marc Weber Tobias covers the issues quite well. Although legitimate TSA master keys are inventory controlled, restricted, and secured at the end of shifts, it is still possible to create keys or decode combinations. Tobias' report shows how it's possible to pick or bypass luggage locks through the TSA approved keyway. The cost and preparation time may also be too much for the average attacker.

Zippers rather than locks seem to be the real weak point when looking at physical attacks. There are numerous videos on YouTube that show how one can easily open and re-seal the zipper on your bag with a common ballpoint pen. If an attacker is in a more destructive mood they could also simply slice into the bag with a knife.

Given the cost and relative difficulty of physical attacks, it can be easier to use the low hanging fruit of mobile apps. Currently the eGeeTouch Manager App is available on the app markets. Per the eGeeTouch FAQ, if one loses their phone, one can simply install the Manager App on their new phone and replace/reload a new code on their locks. An attacker would need to disassemble/decompile the app in order to figure out how the keys are managed and how to clone or insert their own.

A slightly easier method is to locate how/where the keys are stored on disk. The attacker would just need to gain access to the password file, decode the stored keys, and exfiltrate them to the attacker's server. This attack would be most successful on a rooted device, allowing access to the password file. A plausible attack would have an optional root exploit, knowledge of key storage(e.g. filepaths), and a method to exfiltrate the data.

Smart Luggage Locks: What can go wrong?
Smart Luggage Locks can be attacked. Does that make them insecure? Not necessarily. Attackers face a tradeoff between cost(money + risk) and acquired information or goods(revenue - cost). Since I'm not carrying the formula for Coca-Cola in my luggage it might not be worth the risk for attackers to take on the TSA or other law enforcement just to bypass my Smart locks. For the regular traveller who also doesn't carry state/trade secrets, high end electronics or fancy jewelry, the locks may be enough to discourage the casual pilferer.



[1] Once I managed to leave the key in my luggage as I closed the lock. This led to some fiddling with a butter knife and damage to the zippers on my bag. These are the dangers of forgetting where one placed the key.
