If you are a researcher in the infosec industry, work in the field, or just have fun messing around in it, it's quite possible that you stumble upon a vulnerability in other people's software - a hitherto unknown bug. At that point, paths diverge for the person who found the bug, depending on their own moral compass.
The researcher can decide whether they want to make lots of money (probably illegally) by crafting the bug into a 0-day exploit, or whether to contact the affected company without making the bug public, in a responsible disclosure process. And of course there is the possibility of full disclosure as well: making the bug public without contacting the company in question beforehand. There are several reasons for that course of action, too.
I am not going to discuss the various ethical or technical issues with each of these variants in this blog post; maybe at a later date. Responsible disclosure - contacting the company and hoping for the best - seems like a clean method for the researcher in question, but it is far from easy. Few companies react in a thoughtful manner to people who found bugs in their software, if they react at all.
Years ago it was even more difficult to find the right contact at a company to discuss a possible 0-day. It was (and often still is) a rather demotivating and disappointing process. With enough patience it might have been possible to explain the bug to somebody, but more often than not the bug then either just got patched quietly or the researcher got threatened, sometimes sued. Only in a few cases was there some kind of recognition or compensation.
Bug bounty platforms like HackerOne, Bugcrowd or Zerocopter (there are others as well; this is an incomplete list) are relatively new on the scene. They act as brokers between the research community and participating companies, and by now big names from Facebook to Google offer bug bounty programs.
Advantage for participating companies
Regular pentests usually have a well-defined and normally very limited scope. As an example the scope might read: "We'd like to put our new web application xyz in production and have it tested beforehand."
For budget reasons, pentesters very often have only a short amount of time, and thus concentrate on the scope and goals of the pentest to get the best results.
Nothing wrong with that, but the integration with other components in production is rarely the focus of a pentest, especially because the software under test usually isn't live yet and is only accessible in a test environment. It is not rare for each component of a web application to be tested for bugs individually, but testing the whole chain - webservers, database, software, etc. - in production isn't done very often, for various reasons.
Interfaces between components, on the other hand, are natural breaking points and often very interesting to attackers.
Consider this picture. The rails have been tested thoroughly and approved. The hose has been tested and approved. The hose has been secured against any damage from vehicles and has been tested with bikes and cars, all according to process. High-vis jackets have been handed out, and each and every component functions as expected. Just not in combination.
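The same failure mode shows up in software all the time. As a minimal sketch (the function names and the scenario are illustrative, not taken from any real application), consider two components that each pass their own tests - an HTML sanitizer and a renderer that URL-decodes stored text - yet become exploitable when chained in the wrong order:

```python
import html
import urllib.parse

def sanitize(comment: str) -> str:
    """Component A: escapes HTML special characters. Tested, approved."""
    return html.escape(comment)

def render(stored: str) -> str:
    """Component B: URL-decodes stored text before display. Tested, approved."""
    return urllib.parse.unquote(stored)

# Each component behaves exactly as specified in isolation:
assert sanitize("<b>hi</b>") == "&lt;b&gt;hi&lt;/b&gt;"
assert render("hello%20world") == "hello world"

# But combined, the decode step re-introduces the very characters
# the sanitizer already neutralized:
payload = "%3Cscript%3Ealert(1)%3C%2Fscript%3E"
stored = sanitize(payload)  # sanitizer sees no '<' or '>', passes it through
print(render(stored))       # the decoded <script> tag reaches the browser
```

Neither component has a bug on its own; the vulnerability lives entirely in the interface between them - which is exactly where a classic, narrowly scoped pentest is least likely to look.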
When participating in bug bounties, companies can of course still define a scope, but very often that scope is less limited. Companies pay for bugs that are actually found - according to severity - not for the hours of work the researcher put in. A bug that lets an attacker completely take over a system is naturally worth more money than a bug in a web form that lets an attacker change an informational value like a name to "Smith" and nothing else.
Companies also have a choice when it comes to rewarding researchers: from "kudos" (no money, just some respect), to t-shirts and swag, and on to real money ($10,000 and more for severe bugs), everything is possible. If a company wants to attract top researchers to have a look at their website, though, stickers probably won't cut it. Researchers who earn their living finding bugs are not interested in kudos or stickers, at least as long as their landlord won't accept those as payment either.
Advantage for participating researchers
As always: I am not a lawyer. Nevertheless it should be a no-brainer that a company is less likely to sue researchers when company and researcher have come to an agreement via a broker who also makes sure both parties stick to the rules.
For the researcher this might include not disclosing a bug to a third party, but it also means not getting sued or threatened when they find bugs. Bug bounty platforms strive to establish exactly this kind of trust, which is usually scarce in the professionally paranoid infosec industry.
So why shouldn't you run a bug bounty program?
A company without a bug bounty program isn't more secure, of course - it just has less information about the bugs in its own software, and about who might sell them to the highest bidder.