Security Vulnerability Disclosure Is Still A Minefield

There’s no such thing as perfect security in the digital world. A swathe of hardware and software bugs is floating around, compromising the security of the products we rely on. In recent years, major data leaks have shown us that even big technology companies are vulnerable to security failures. An army of security enthusiasts is tracking these bugs down, but tension can arise when they report vulnerabilities to technology vendors that may not want security flaws exposed to the public, at least not quickly.

There have been many clashes between researchers and vendors, some of which have resulted in legal action against bug hunters. Today, we look at an extremely grey area in IT security: how security vulnerabilities should be disclosed.

No technology vendor likes to find out that there are security holes in its products. But in an age where technology grows ever more complex, it’s nigh impossible to create an absolutely secure solution. This is especially true for software, where one wrong line of code can open up a critical vulnerability. Most recently, the Dirty COW bug (CVE-2016-5195), which allows privilege escalation via the Linux kernel, was found to have been hiding in the source code for nine years, even though the open source nature of the project meant more people were on the lookout for bugs.
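
Even working out whether you’re affected by a bug like this can be fiddly. As a rough illustration (ours, not from the article), here’s the kind of first-pass check a sysadmin might script after a disclosure like Dirty COW. The version thresholds below are the mainline stable releases commonly cited as carrying the fix; distributions backport patches without bumping version numbers, so treat this as a heuristic, not an audit.

```python
# A rough first-pass check for a Dirty COW-era kernel (CVE-2016-5195).
# Heuristic only: distributions backport fixes without bumping version
# numbers, so treat "may be vulnerable" as a prompt to check advisories.
import platform
import re

# Mainline stable releases commonly cited as carrying the fix.
FIXED_IN_SERIES = {(4, 8): (4, 8, 3), (4, 7): (4, 7, 9), (4, 4): (4, 4, 26)}
FIRST_FIXED = (4, 8, 3)  # anything from before the fixed releases gets flagged

def parse_kernel(release: str):
    """Pull (major, minor, patch) out of a string like '4.4.0-45-generic'."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    return tuple(int(x) for x in m.groups()) if m else None

def maybe_vulnerable(release: str) -> bool:
    version = parse_kernel(release)
    if version is None:
        return True  # can't parse it, so assume the worst
    fixed = FIXED_IN_SERIES.get(version[:2])
    if fixed is not None:
        return version < fixed
    # A series we don't track: flag anything older than the fix era.
    return version < FIRST_FIXED

if __name__ == "__main__":
    release = platform.release()
    verdict = "may be vulnerable" if maybe_vulnerable(release) else "looks patched"
    print(f"Kernel {release}: {verdict} -- confirm against your distro's advisory")
```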

We depend on security researchers, whether they work in-house for vendors or independently, to find the bugs lurking in technology products. Once they find vulnerabilities, it’s only logical to make them known so the problems can be fixed; if a bug can leak sensitive information, you’d want it patched swiftly. This is where things get complicated, because there is no agreed structure for disclosing vulnerabilities that satisfies all parties.

“This lack of structure has caused the eruption of a heated debate within the information security community. This debate has been going on for almost a decade. Yet to date there is no formal, accepted, and enforced standard of practice,” the SANS Institute for security professionals said in a report published in 2003. Over a decade has passed and yet vendors and security researchers are still clashing over the issue of disclosure.

Responsible Disclosure Isn’t A Panacea

True to the idiom “don’t air your dirty laundry in public”, some organisations prefer to keep their security failures behind closed doors. Security researchers have been known to make vulnerabilities public to force vendors’ hands into fixing them. These days, it is common practice for researchers and vendors to engage in responsible disclosure, also known as co-ordinated disclosure. As the name suggests, researchers give organisations a chance to remedy security issues by reporting them privately first and delaying the publication of their findings.

In theory, it’s a good idea, and as security expert Troy Hunt has observed, in some cases responsible disclosure works. Hunt runs the website Have I Been Pwned?, which tracks data breaches online. Last month, he helped co-ordinate the announcement that the Red Cross had suffered one of Australia’s worst data breaches.

“The situation last month with the Red Cross was sort of our gold standard to what we’d really like to see for responsible disclosure. The organisation was receptive, it took it very constructively and seriously. It responded quickly, ethically and transparently,” he told Lifehacker Australia.

But that’s not how all attempts at responsible disclosure go down.

“One of the things we worry about with ethical disclosure is, first of all, whether the company will react in a negative way,” he said. “I regularly communicate with people who say ‘Look, I’ve found a vulnerability’ or ‘I have data’ and I’ll say ‘You should disclose it ethically and quietly’. They tell me they’re scared of the ramifications. As such, a lot of stuff goes unknown and unpatched.”

These people have every reason to be scared. Organisations worldwide have taken legal action against security researchers on numerous occasions. Attrition.org, co-founded by Brian Martin (aka Jericho), president of the Open Security Foundation, has documented these cases dating back to 2000; the most recent is from June this year. What’s disturbing about the incidents listed on Attrition.org is that many of the researchers who tried to report vulnerabilities directly to the companies involved were hit with lawsuits and gag orders instead.

Hunt noted that the grounds on which vendors can take legal action against researchers can be ambiguous, but it often comes down to how the vulnerabilities were discovered in the first place.

“If we take something like the Red Cross scenario, that was pretty benign; it was just one guy looking for web servers that were publicly facing and already publishing data,” he said. “But what about the people that actually probe around and cause a system to misbehave? I appreciate much more how they would be very worried. Particularly in the US where they have the CFAA (Computer Fraud and Abuse Act), they go absolutely nuts with this stuff over there.”

The methods researchers use open up a whole new can of worms; if a researcher pokes around computer systems uninvited in search of security holes, that could be considered a cyber crime. But the disclosure side has legal implications as well.

Researchers who choose to publish details of vulnerabilities may land in legal hot water with vendors over copyright and intellectual property laws. Last year, FireEye obtained a court injunction preventing researchers at the security firm ERNW from disclosing vulnerabilities found in FireEye’s products. The reason? FireEye said that if ERNW were to detail the flaws, it would reveal the vendor’s source code and design trade secrets to the public. It would also put FireEye customers at greater risk of suffering a cyber attack, the vendor said.

Legal action is the worst-case scenario. More often there is just a general air of tension as vendors and security researchers butt heads over when and how a vulnerability should be published. We asked security vendor RSA for its thoughts on the best way to respond to vulnerability disclosures. RSA security evangelist for Asia-Pacific Michael Lee told Lifehacker Australia that while 30 days is generally accepted as a reasonable window before full disclosure, vulnerabilities that are already being exploited in the wild should be addressed sooner.

Timing is often the point of contention. That was the case with a recent zero-day bug in the Windows kernel: Google security researchers made the flaw public just seven days after notifying Microsoft, arguing that the bug was already being actively exploited (in line with Google’s own disclosure guidelines, which cut the window to seven days for bugs under active attack). Microsoft hit back, criticising Google for not giving it enough time to fix the problem.
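
As a rough illustration of how a deadline-driven policy like Google’s can work in practice, the sketch below (ours, not Google’s or Microsoft’s) computes a disclosure deadline from the report date and whether the bug is under active attack. The 90-day and seven-day windows follow Google’s published guidelines; everything else, including the field names and the example vendor, is invented.

```python
# A minimal sketch of a deadline-based disclosure policy, assuming the
# 90-day default and seven-day "actively exploited" window Google
# describes in its guidelines. Field names are our own invention.
from dataclasses import dataclass
from datetime import date, timedelta

STANDARD_WINDOW = timedelta(days=90)   # vendor notified, no known attacks
EXPLOITED_WINDOW = timedelta(days=7)   # bug already being used in the wild

@dataclass
class VulnerabilityReport:
    vendor: str
    summary: str
    reported_on: date
    actively_exploited: bool = False

    def disclosure_deadline(self) -> date:
        """Date on which details may be published if no fix has shipped."""
        window = EXPLOITED_WINDOW if self.actively_exploited else STANDARD_WINDOW
        return self.reported_on + window

# Example: a kernel bug reported today that is already being exploited
# would be eligible for publication a week from now.
report = VulnerabilityReport(
    vendor="ExampleSoft",                    # hypothetical vendor
    summary="kernel privilege escalation",   # illustrative only
    reported_on=date.today(),
    actively_exploited=True,
)
print("Publish no earlier than:", report.disclosure_deadline())
```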

“One clear issue is communication. Security researchers should be given the basic courtesy to know their report has been received within seven days,” Lee said. “Furthermore, while a security researcher is not necessarily entitled to the internal processes that a vendor may go through in remediating an issue, constant communication helps provide the context as to why a disclosure period may or may not be appropriate.

“… Most researchers understand that developing secure software can be a difficult process that has its own challenges. Keeping them informed of how their issue is being addressed only helps to come to an agreement on what is a suitable time for responsible disclosure.”

But sometimes, vendors just don’t want to talk. I’ve heard numerous stories about people who report vulnerabilities to vendors, only to receive no response until they make the details of the security flaws public.

“One of the things that disclosure does is that it does force the hand of the organisation, because now they are in the public spotlight and they can’t just sweep it under the rug,” Hunt said. “They have to do what is in the public’s best interest rather than their own interests.”

Bug Bounty Programs

It’s important that vendors and security researchers maintain a good relationship so there is a concerted effort to discover and patch bugs quickly and efficiently. If security researchers refrain from disclosing their findings for fear of retribution from vendors, it could be detrimental to the entire security industry.

“This is the sort of area where bug bounties are very important,” Hunt said. Bug bounty programs offer recognition and compensation to those who report bugs to vendors. “As soon as you have a bug bounty, there’s a published, verified route that people can report these things.

“Not only that, they might earn themselves some money, which is quite nice.”

Atlassian head of security Craig Davies is a proponent of bug bounty programs. The software company doesn’t run a paid bounty program, preferring to give away Atlassian swag and enter those who report bugs into its ‘Hall of Fame’. Even so, the company has received an influx of vulnerability reports.

“Now, it is not all joy and happiness, the quality of reports can sometimes seem low, people will raise issues without an understanding of your business process, or even they raise issues that do not exist,” Davies told Lifehacker Australia. “Hence partnering with a managed provider (call out to another great Australian company – Bugcrowd) can save you a lot of time and energy. Your internal process of handling reports needs to be able to respond in a reasonable time frame.”

“This is normal now for anyone who has a solution, either an app or a web presence, so you should start working out how you bring it into your organisation. People are looking anyway; you might as well partner with them.”

Davies has three tips for organisations interested in engaging researchers to help address security vulnerabilities within the business:

  • Start a program: Either run it yourself or with a trusted partner
  • Think about your processes: When someone reports a bug, how do you capture the feedback, respond and get it fixed? (One way to model this is sketched below)
  • Don’t have crazy rules, and be authentic: Researchers want to do great work too; respect that
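
To make the second tip concrete, here’s a minimal sketch (ours, not Atlassian’s) of the kind of intake record a team might keep for incoming reports. The states and field names are invented for illustration; the seven-day acknowledgement target echoes Lee’s comment earlier in this story.

```python
# A minimal sketch of an intake pipeline for vulnerability reports.
# States and field names are invented; the seven-day acknowledgement
# target echoes Michael Lee's comment quoted above.
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum

class Status(Enum):
    RECEIVED = "received"
    ACKNOWLEDGED = "acknowledged"   # researcher told we have the report
    TRIAGED = "triaged"             # severity assessed, owner assigned
    FIXED = "fixed"
    DISCLOSED = "disclosed"         # researcher free to publish

@dataclass
class IncomingReport:
    reporter: str
    summary: str
    received_on: date
    status: Status = Status.RECEIVED
    notes: list[str] = field(default_factory=list)

    def ack_due(self) -> date:
        """Acknowledge within seven days, per the courtesy Lee describes."""
        return self.received_on + timedelta(days=7)

    def advance(self, status: Status, note: str) -> None:
        """Move the report along and keep the researcher in the loop."""
        self.status = status
        self.notes.append(f"{date.today()}: {note}")

report = IncomingReport("jane@example.com", "IDOR on /invoices", date.today())
report.advance(Status.ACKNOWLEDGED, "Thanks, we can reproduce this.")
print(report.status, "- acknowledge by", report.ack_due())
```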

What are your thoughts on responsible disclosure? Let us know in the comments.

