I found this title confusing. For those who didn't make it to the end of the article: the leaked emails didn't cost them anything (except their time and ingenuity), and they received 10k as the bug bounty.
sedatk 18 hours ago [-]
I thought they meant providing the services to leak the email of any user for $10K, perhaps per user. :)
adrianmonk 9 hours ago [-]
I thought they meant it cost $10K of compute time to brute force some process that would reveal one email address.
raffraffraff 5 hours ago [-]
Yeah, probably getting an LLM to do it
cassepipe 5 hours ago [-]
I think this was the clickbaity intention behind the title
sim7c00 2 hours ago [-]
well, i guess it got me to read these comments and think, ok no thanks :D
raffraffraff 6 hours ago [-]
That's what I thought too because that's exactly how it's phrased! Terrible wording.
mikeyinternews 49 minutes ago [-]
The title should have been something like, "Revealing the email address"...
stevage 14 hours ago [-]
Me too.
I thought it meant they were offering this as a service for $10k.
willtemperley 12 hours ago [-]
I think this was the joke.
nickvec 10 hours ago [-]
Could be a clickbait sort of title.
DecentShoes 17 hours ago [-]
Yeah, I thought it was going to be about compute cost for brute forcing some hash or something
tomsmeding 16 hours ago [-]
The domain name kind of suggests this interpretation, too.
SZJX 9 hours ago [-]
From the D/M/Y date format at the end of the article, they may not be native English speakers (at least they aren’t American).
defrost 8 hours ago [-]
The USofA is the bizarre exception here:
The United States has a rather unique way of writing the date that is imitated in very few other countries (although Canada and Belize do also use the form). In America, the date is formally written in month/day/year form.
They don't use metric, still use First Past the Post voting, elect a mini monarch with effectively unchecked powers, ... it's an odd place.
OhMeadhbh 8 hours ago [-]
Don't oversell the place. It also has its downsides.
kinematicgps99 7 hours ago [-]
Mass shootings, out-of-control police, bankruptcies from for-profit healthcare and expensive medications, Bibles and Creationism in public schools, widespread ignorance about the world and just about everything, and millions (vastly undercounted) of homeless people.
But seriously, America is awesome for rich people if you don't mind living in a poor, third-world country that still believes it's a first-world, exceptional country.
nanna 6 hours ago [-]
Think you might have missed the parent poster's sarcasm.
skirge 4 hours ago [-]
Africa of North
b59831 4 hours ago [-]
[dead]
kinematicgps99 7 hours ago [-]
Also, the US military used/uses DDMMMYYYY format, i.e., 15JAN2025, where MMM is the month abbreviation, which is similar to one of the formats used in Romania. This has the benefits of unambiguous parsing and no need for component separators but lacks the lexicographical sortability of ISO 8601. A format like YYYYMMMDD might retain some of the advantages of ISO 8601 by at least keeping items of the same year and month together. (ISO 8601 is the most proper date format though. ;)
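A quick illustrative sketch (mine, not from the thread) of how those formats compare on sortability; the sample dates are made up:

```python
from datetime import date

# Illustrative only: sample dates are made up.
dates = [date(2025, 1, 15), date(2024, 12, 1), date(2025, 2, 3)]

# ISO 8601 strings sort chronologically as plain strings...
iso = sorted(d.strftime("%Y-%m-%d") for d in dates)
# ...but DDMMMYYYY sorts by day-of-month first, and month
# abbreviations don't sort in calendar order.
mil = sorted(d.strftime("%d%b%Y").upper() for d in dates)
# YYYYMMMDD at least groups items of the same year (and year+month) together.
ymd = sorted(d.strftime("%Y%b%d").upper() for d in dates)

print(iso)  # ['2024-12-01', '2025-01-15', '2025-02-03'] - chronological
print(mil)  # ['01DEC2024', '03FEB2025', '15JAN2025'] - not chronological
print(ymd)  # ['2024DEC01', '2025FEB03', '2025JAN15'] - years group, months don't
```

(Note that `%b` is locale-dependent; the outputs above assume the default English/C locale.)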
turbonaut 9 hours ago [-]
England is just one example of a country of native English speakers who use dd/mm/yyyy.
chrisweekly 8 hours ago [-]
yyyy-mm-dd is the iso standard, w/ the benefits of logical consistency (larger to smaller units left to right) and -- best of all -- sortability.
anilakar 7 hours ago [-]
In any case there are two sane ways to write dates, and the middle-endian format is not one of them.
Wowfunhappy 7 hours ago [-]
The only problem is that it puts the most important information last and the least important first (going by "what do I most likely need to see and cannot infer from context", ie I likely already know the year and possibly the month).
unification_fan 5 hours ago [-]
Who cares? How much time are you going to lose from that? Like, 0.15s every time you read a date. Big whoop.
Wowfunhappy 2 hours ago [-]
It has been a problem for me when the last part gets truncated (e.g. with an ellipsis).
> Here's a POC of the exploit in action: This video has been removed for violating YouTube's Terms of Service
That's hilarious.
SZJX 9 hours ago [-]
Weird that it shows that to me at first as well, but now when I opened the article again, the video seems to be available? Not sure if it was restored just now.
voytec 6 hours ago [-]
Maybe just not yet wiped from all CDN caches.
vetrom 19 hours ago [-]
I see a lot of noise made about responsible disclosure, its drivers, and its rewards. What I don't see is talk about how this is one more datapoint against centralized permanent identities.
Every time I see a service purporting that it works best only with a single link to your Real Identity™, I'm reminded that the vendors only abstractly care about actually protecting the user, and then only sometimes.
Imagine being able to immediately get three or four steps closer to doxing anyone interacting on YouTube. That's the actual impact of this bug IMO. It's good that this was fixed, but I don't think this class of bug goes away anytime soon. What do we need to do to get vendors and big companies to realize that this sort of design is a landmine waiting to happen?
vineyardmike 19 hours ago [-]
> Every time I see a service purporting that it works best only with a single link to your Real Identity™, I'm reminded that the vendors only abstractly care about actually protecting the user, and then only sometimes.
I abstractly agree with you. There is a level of obscurity and disposability that should be tolerated in these accounts. They’re just a row in a database somewhere anyways.
That said, many people transact with these businesses with real human money. For example, YouTube premium subscribers or content creators. From a practical perspective, that requires IRL identifiers to be stored somewhere with that otherwise disposable account. And due to fraud risks and other realities of banking, that requires giving these businesses actual identities and addresses which they store too.
While I don’t give random apps and websites my human-identifying information, anyone I do business with necessarily knows the real me, which is a theoretical point of data leaking.
patrick451 10 hours ago [-]
This is a fixable problem if we can get congress to roll back the insane KYC laws.
Garlef 9 hours ago [-]
It's also fixable in ways that don't require rolling back KYC laws.
autoexec 7 hours ago [-]
> While I don’t give random apps and websites my human-identifying information, anyone I do business with necessarily knows the real me, which is a theoretical point of data leaking.
Certainly not theoretical. You can be certain that nearly every company who knows your identity has leaked/sold it to others in one fashion or another.
chii 9 hours ago [-]
They don't care because there's no legal consequence for them.
Try and leak some medical data as a medical services provider. You will get your ass handed to you.
tptacek 1 days ago [-]
Since every 3rd message on this thread (at the time I wrote this) is about how Google underpaid for this bug, some quick basic things about vulnerability valuations:
* Valuations for server-side vulnerabilities are low, because vendors don't compete for them. There is effectively no grey market for a server-side vulnerability. It is difficult for a third party to put a price on a bug that Google can kill instantaneously, that has effectively no half-life once discovered, and whose exploitation will generate reliable telemetry from the target.
* Similarly, bugs like full-chain Android/Chrome go for hundreds of thousands of dollars because Google competes with a well-established grey market; a firm can take that bug and sell it to potentially 6 different agencies in a single European country.
* Even then, bounty vs. grey market is an apples-oranges comparison. Google will pay substantially less than the grey market, because Google doesn't need a reliable exploit (just proof that one can be written) and doesn't need to pay maintenance. The rest of the market will pay a total amount that is heavily tranched and subject to risk; Google can offer a lump-sum payment which is attractive even if discounted.
* Threat actors buy vulnerabilities that fit into existing business processes. They do not, as a general rule, speculate on all the cool things they might do with some new kind of vulnerability and all the ways they might make money with it. Collecting payment information? Racking up thousands of machines for a botnet? Existing business processes. Unmasking Google accounts? Could there be a business there? Sure, maybe. Is there one already? Presumably no.
A bounty payout is not generally a referendum on how clever or exciting a bug is. Here, it kind of is, though, because $10,000 feels extraordinarily high for a server-side web bug.
For people who make their nut finding these kinds of bugs, the business strategy is to get good at finding lots of them. It's not like iOS exploit development, where you might sink months into a single reliable exploit.
This is closer to the kind of vulnerability research I've done recently in my career than a lot of other vuln work, so I'm reasonably confident. But there are people on HN who actually full-time do this kind of bounty work, and I'd be thrilled to be corrected by any of them.
edanm 1 days ago [-]
I don't remember if I've ever thanked you for the dose of reality you bring to these discussions, but if not - thank you! Before I started reading your comments on bug bounty payouts I'd probably have made the typical thoughtless (in my case) remark that the bounties are tiny, without actually thinking through the realistic dollar value of bugs found.
Not to mention not really thinking through how obviously stupid it is to immediately compare a legal activity to a highly illegal one, as if they're real alternatives for most people.
A friend generated a tag cloud from all my comments here like 10 years ago and it was just the word "No" like a supermassive black hole ringed by dozens of tiny little words I was saying "no" about.
pvg 17 hours ago [-]
That's a great example of "doing it wrong makes it better", in this case not filtering stop words.
hedora 23 hours ago [-]
Most other fields of endeavor aren’t compensated based on the black market value of the thing that’s being produced.
If we apply your analysis to other things, we’ll find that the upper bound price for a new car stereo or bike is ~ $100, and the price of any copyrighted good is bounded by the cost of transferring it over the network.
I think it is more useful to divide the amount Google paid by the number of hours spent on this and any unsuccessful exploit attempts since the last bounty was paid.
I’d guess that the vast majority of people in this space are making less than US minimum wage for their efforts, with a six figure per year opportunity cost.
That tells you exactly how much Google values the security and preserving the privacy of its end users. The number is significantly lower than what they pay other engineers orders of magnitude more to steal personal information from the same group of people.
demosthanos 21 hours ago [-]
> Most other fields of endeavor aren’t compensated based on the black market value of the thing that’s being produced.
> If we apply your analysis to other things
This analysis doesn't work for a few reasons:
* For physical goods, used items always fetch a lower price than new items due to unrelated effects. And if we're only looking at the used price, we do find that the black market price is just about equal to the used item's value minus the risk associated with dealing with stolen goods (unless the buyer is unaware of the theft, in which case the black market value is the same as the used value).
* For both physical and digital goods, there are millions of potential customers for whom breaking the law isn't an option, creating a large market for the legal good that can serve to counter the effect of the black market price. This isn't true of exploits, where the legal market is tiny relative to the black market. We should expect to see the legal market prices track the black market prices more closely when the legal market is basically "the company who built the service and maybe a few other agencies".
mootothemax 6 hours ago [-]
> For physical goods, used items always fetch a lower price than new items
This is only true under certain circumstances. If there are supply chain issues, used prices can go up and over the list price. The most extreme (and obvious) example I've seen is home gym equipment during the Covid lockdowns, particularly for stuff like rowing machines.
The other potentially less obvious example is seen in countries that don't have a local presence or distributor for a given item, and the pain and slowness of importing leads to local used prices being above list price.
One other potentially interesting semi-related point: prices for used items can sometimes increase in unexpected ways (excluding obvious stuff like collectables, art, antiques etc). In the UK, the used price for a Nissan Leaf EV started increasing with age after the market realised that fears about their battery failing ~5 years into ownership were unfounded urban myths, and repriced accordingly.
akerl_ 37 minutes ago [-]
> If there are supply chain issues, used prices can go up and over the list price.
The comment you're replying to isn't referring to list price, they're referring to the price of a new item.
Supply chain issues, as we saw during COVID, affect the cost of new items by making it effectively infinite: if there are only 100 new rowing machines available and 1000 people want them, then for 900 people the list price of a new rowing machine is irrelevant because they can't actually buy one at that price.
UncleMeat 22 hours ago [-]
Bug bounty programs are not the only (or even primary) way that security researchers get paid. Google pays employees salaries to find vulns. Bounty programs are a pretty recent development and the idea that they should be scalable and stable well paying employment for a lot of people is a bit strange to me.
If security researchers want to have stable employment doing this sort of work, there's oodles of job applications they can send out.
mlyle 22 hours ago [-]
> Bounty programs are a pretty recent development and the idea that they should be scalable and stable well paying employment for a lot of people is a bit strange to me.
So, the value to the researcher of having a found bug has a floor of the black market value.
The value to Google is whatever the costs of exploitation are: reputational, cleanup, etc.
A sane value is somewhere between these two, depending on bargaining power, of course. Now, Google has all the bargaining power. On the other hand, at some point there's the point where you feel like you're being cheated and you'd rather just deal with the bad guys instead.
UncleMeat 21 hours ago [-]
That's not true, because for most people there is an economic cost to committing crimes. "Hey, you could make more money selling that on the black market" is not going to convince me to sell something on the black market.
Bounty programs are very much not trying to compete with crime.
makeitdouble 16 hours ago [-]
The reputation angle shouldn't be dismissed: Google paying so little for this bug is the whole reason this article stays on the top page and gets so much discussion.
I don't know how much it should be worth, but at least there's a PR effect and it's also a message towards the dev community.
I see it the same way ridiculously low penalty for massive data breaches taught us how much privacy is actually valued.
tptacek 15 hours ago [-]
If Google doesn't have the best reputation of any large tech company for security, it's in the top 3. This is not the nightmare scenario for Google that people think it is. It's a large payout for this bug class, so, if anything, what we're doing here is advertising for them.
makeitdouble 9 hours ago [-]
I'm in full agreement (genuinely thankful for the context you brought on the difference in market values for this category of bugs), which is also part of why it's sobering that privacy bugs have such a low valuation and that this counts as a high payout.
For security researchers it's apparently obvious, but from the outside it's another nail in the coffin of how we want to think about user data (especially creators, many being at the front line of abuse already). As you point out Google here is only the messenger, but we'll still remember the face that delivered the bitter pill for better and worse.
TheSpiceIsLife 7 hours ago [-]
Globally, how many people are there presently salivating at the thought of US$10,000 for a bug bounty?
How many young computer enthusiasts / aspiring security researchers are motivated to learn more because they see what are, to them, massive payouts?
You or I might not get out of bed for the hourly rate that translates to, fine by me - I have a job that pays the figure I negotiated.
Bug bounty programs pay the market clearing rate, always. One bug, two market participants, one price.
scarby2 21 hours ago [-]
It is a factor though. Most people will commit non-violent crime for a big enough payoff, especially one where the individuals affected are hard to identify.
If my bug bounty is $10,000 and I can sell it for $20,000, then most people will take the legitimate cash. If it's $10,000 and some black market trader will pay $10,000,000 (obviously exaggerating), then a whole mess of people are going to take the ten million.
Arainach 21 hours ago [-]
Except it's not "legitimate cash" and that's the point.
* Are you talking to someone legitimately interested in purchasing and paying you, or is this a sting?
* If you're meeting up with someone in person, what is the risk that the person won't bring payment, or will try to attack you?
* If you're meeting with someone in person, how do you use $20k in cash without attracting suspicion? How much time will that take?
* If it's digital, is the person paying you or are the funds being used to pay you clean or the subject of an active investigation? What records are there? If this person is busted soon will you be charged with a crime?
There are a lot of unknowns and a lot of risks, and most people would gladly take a clean $10k they can immediately put in the bank and spend anywhere over the hassle.
mlyle 18 hours ago [-]
It's not a crime to sell a bug. You can sell something like this to Crowdfense and receive money wired from the company (or cryptocurrency if you prefer anonymity).
tptacek 17 hours ago [-]
It is not intrinsically a crime to sell a bug, but if you sell a bug and it can be demonstrated you reasonably knew the buyer was going to use it to commit a crime, you will end up with accessory liability to that crime. Selling vulnerabilities is not risk-free.
This is another reason why the distinction between well-worn markets (like Chrome RCEs) and ad-hoc markets is so important; there's a huge amount of plausible deniability built into the existing markets. Most sellers aren't selling to the ultimate users of the vulnerabilities, but to brokers. There aren't brokers for these Youtube vulnerabilities.
mlyle 17 hours ago [-]
There's not a standard price in a list, but you can absolutely sell a platform exploit to a broker.
tptacek 16 hours ago [-]
Say more. What do you mean by "platform exploit", and which brokers are you talking about? I am immediately skeptical, but it should be easy to knock me down on this.
s1artibartfast 19 hours ago [-]
The "legitimate cash" option is the bug bounty without the risk. I think you are saying the same thing.
unsigner 18 hours ago [-]
You have discovered the one real practical application of crypto.
fooker 19 hours ago [-]
I wonder what your definition of crime is.
Legally, in most places of the world it isn't.
Morality differs among people too. Profiting off a trillion dollar company will not cross the line for a lot of people.
efitz 18 hours ago [-]
Most people have an intuitive sense to ask themselves questions like "If I do this, will someone be harmed, who, how much harm, what kind of harm, etc.", that factors into moral decisions.
Almost everyone, even people without a moral sense, has a self-preservation sense - "How likely is it that I will get caught? If I get caught, will I get punished? How bad will the punishment be?" - and these factor into a personal risk decision. Laws, among their other purposes, are a convenient way to inform people ahead of time of the risks, in hopes of deterring undesirable behavior.
But most people aren't sociopaths and while they might make fuzzy moral decisions about low-harm low-risk activities, they will shy away from high-harm or high-risk activities, either out of moral sense or self preservation sense or both.
"Stealing from rich companies" is just a cope. In the case of an exploit against a large company, real innocent people can be harmed, even severely. Exposing whistleblowers or dissidents has even resulted in death.
fooker 15 hours ago [-]
> Most people have an intuitive sense to ask themselves questions like "If I do this, will someone be harmed
How much time do you spend asking yourself whether your paycheck is coming from a source that causes harm? Or whether the code you have written will be used directly or indirectly to cause harm? Pretty much everyone in tech is responsible for great harm by this logic.
Great, would you be surprised that most of us don't?
Most will just take the 500k paycheck and work at whatever the next big tech thing is.
There's some chance that thing is autonomous drones or something like that...
owl57 14 hours ago [-]
That's definitely a factor at least some people consider when choosing their job.
> Pretty much everyone in tech is responsible for great harm by this logic.
We're also responsible for great good. The question which is greater is tricky, case-by-case and subjective.
golem14 11 hours ago [-]
It‘s a grey zone.
If Mr GRU asks, I probably say no.
If the CIA, Mossad or BND asks, maybe I say yes? It’s not clear for a person with a better moral compass than mine.
rkagerer 17 hours ago [-]
...has even resulted in death
I wish developers (and their companies, tooling, industry, etc.) creating such flaws in the first place would treat the craft with a higher degree of diligence. It bothers me that someone didn't maintain the segregation between display name / global identifier (in YouTube frontend*) or global identifier / email address (in the older product), or was in a position to maintain the code without understanding the importance of that intended barrier.
If users knew what a mess most software these days looks like under the hood (especially with regard to privacy) I think they'd be a lot less comfortable using it. I'm encouraged by some of the efforts that are making an impact (e.g. advances in memory safety).
(*Seems like it wouldn't have been as big a deal if the architecture at Google relied more heavily on product-encapsulated account identifiers instead of global ones)
mlyle 21 hours ago [-]
Selling a bug is not a crime.
> Bounty programs are very much not trying to compete with crime.
Nor did my post posit this.
Bounty programs should pay a substantial fraction of the downside saved by eliminating the bug, because A) this gives an appropriate incentive for effort and motivates the economically correct amount of outside research, and B) this will feel fair and make people more likely to do what you consider the right thing, which is less likely if people feel mistreated.
UncleMeat 21 hours ago [-]
Should this be true only for vulns, or all bugs? If I as a third party find a bug that is causing Google to undercharge on ads by a fraction, should Google be obligated to pay me a mountain of cash?
Is there any evidence that OP feels that this payout was unfair?
mlyle 20 hours ago [-]
> If I as a third party find a bug that is causing Google to undercharge on ads by a fraction, should Google be obligated to pay me a mountain of cash?
No, but Google should understand that if they give a token payment, people will be less likely to help in future situations like this. And might be inclined to just instead tell ad buyers about the loophole quietly.
Arainach 21 hours ago [-]
How do you propose to calculate "the downside saved by eliminating the bug" - ideally in general, but I'd be curious to see if you could do it even for the specific bug discussed in this article.
mlyle 20 hours ago [-]
Organizations price future, nebulous things all the time.
Imagine a possible downside or two, imagine a probable risk, multiply, discount.
Arainach 20 hours ago [-]
Sure, but give some specific values. What potential damages and potential risk multiply to more than $10k?
mlyle 20 hours ago [-]
Prominent youtuber doxxed and killed; terrible press drawn out over a long period by litigation. 1 in 5000, but very high cost.
Large scale data leak and need for data leak disclosure. 1 in 3, moderate cost.
Bug report saving engineering time by giving clear report of issue instead of having to dig through telemetry and figure out misuse and then identify what is going on, extents of past damage, etc. 3 in 4.
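Sketching out the expected-value arithmetic this implies (the probabilities come from the comment above; the dollar costs are invented placeholders, since none are given):

```python
# Back-of-the-envelope expected value: probability x cost per scenario.
# Probabilities are from the comment above; the dollar costs are
# invented placeholders, purely for illustration.
scenarios = [
    ("prominent youtuber doxxed, drawn-out litigation", 1 / 5000, 50_000_000),
    ("large-scale leak + disclosure costs", 1 / 3, 2_000_000),
    ("clear report saves engineering/forensics time", 3 / 4, 50_000),
]

expected_value = sum(p * cost for _, p, cost in scenarios)
print(f"${expected_value:,.0f}")  # well above a $10k bounty under these guesses
```

Of course, the whole argument turns on what numbers you plug in, which is Arainach's point below.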
Arainach 16 hours ago [-]
You think that being able to get someone's email address (most likely a business email but let's pretend it's a personal email) has a 1 in 5,000 chance of being turned into enough personal information to track down AND that someone would use it to kill someone?
Millions of usernames and emails are leaked every month; if this was the case you'd be seeing these murders in the news every week.
mlyle 10 hours ago [-]
> Millions of usernames and emails are leaked every month; if this was the case you'd be seeing these murders in the news every week.
Yes, because all possible scenarios kill the same fraction of people-- whether we're talking about getting a dump of a million email addresses or giving some nutjob a chance to unmask people he doesn't like online.
tanewishly 21 hours ago [-]
As mentioned by thread starter, you can also sell to some national security agency. That way, you're doing your patriotic duty and making a buck. So Google has an incentive to at least beat those offerings.
jonas21 21 hours ago [-]
Most other fields produce things that can be sold in the legal market - and so the value of those things can be determined by the market.
tptacek 19 hours ago [-]
I think the right comparison to make here is art. The compensation floor is zero, and, in fact, that's what most vuln research pays.
notpushkin 22 hours ago [-]
> and the price of any copyrighted good is bounded by the cost of transferring it over the network
It sure has worked out pretty much like this for music. The cost is not exactly zero, but pretty close to that.
hammock 19 hours ago [-]
>Most other fields of endeavor aren’t compensated based on the black market value of the thing that’s being produced.
What you’re saying can be seen as tautological. The reason a gray/black market exists is precisely because the field is undercompensating (aka in disequilibrium)
nitwit005 20 hours ago [-]
> Most other fields of endeavor aren’t compensated based on the black market value of the thing that’s being produced.
They're buying exclusive access to some information, which is a somewhat unusual thing to pay for.
News reporters do take spicy stories to tabloids, rather than the normal press, as the tabloids will pay more.
bee_rider 21 hours ago [-]
They mentioned the grey market a couple of times, although some of their examples did seem like applications that would be more useful for the black market.
Anyway, I’m not 100% sure what they meant by grey market. It looks like they were talking about maybe selling to “agencies” which, I guess, could include state intelligence agencies. If that’s what they meant, it wouldn’t be that surprising to find that the black market and grey market prices influence each other, right?
I mean we could ask our intelligence agencies why they are shopping in the same markets as criminals but I guess they will say something like “it is important that we <redacted> on the <redacted>, which will allow us to better serve the <redacted> and keep the <redacted> safe.”
Uptrenda 15 hours ago [-]
Yep, I came to the same conclusion. The payouts from bug bounties and the uncertainty of payment just aren't worth it. It's like taking a fixed-prize contract and adding in a gambling element to get paid. Fixed prizes, I learned, were bad enough if you want to make anything as a software engineer. This is even worse though.
I mean, the technical skills in the article here are basic. But the first finding involved significant good luck, and having the background to know to look to old Google services for the ID-to-email part was non-obvious. You would need a lot of high-quality guiding knowledge like that to make bug bounties work. Still, it seems like a very high starting cost.
kccqzy 23 hours ago [-]
I hate how this HN thread is mostly about discussing the amount of bounty, but I'm afraid it's only natural. Most commenters here are working in the software industry and they want to normalize extremely high bounties. It's an extra income source for them. They want higher bug bounties much like they want SWEs to be a highly compensated profession. It's only natural for workers to demand higher pay for their own profession. No amount of rationalization will change that instinct.
iinnPP 22 hours ago [-]
It isn't always about money, even when that is the stated problem.
The dollar value of a responsible report going up means more responsible disclosure overall and fewer leaks, exploits, etc.
I would be equally happy to see any solution where the end result is increased security and privacy for everyone, even at zero bounty.
The problem being overlooked is that the actual cost of these exploits and bugs is paid by the people who had no say whatsoever in any matter regarding the issue. Any time a company is being "cheap" at the expense of regular people is a bad time, from my perspective.
Google has the power to limit the exposure of the people who use their products (and this isn't always voluntary exposure, mind you) and is choosing to profit a teeny tiny bit more instead. At no immediately obvious cost to them, why not?
nightpool 22 hours ago [-]
> The dollar value of a responsible report going up means more responsibility overall and less problem leaks, exploits, etc.
Does it? I just had a bug bounty program denied for budget approval at my work because of the cost of the bounties and the sufficiency of our existing security program. On the margins, it's not clear to me that the dollar value of a report going up is incentivizing better reports vs pricing smaller companies out of the market.
iinnPP 22 hours ago [-]
This is a great point and I did not really think of this in the above statement.
It may work kind of how employment works, where Google can afford to pay more than a company that cannot afford a 10k bounty.
Google paying a 10k bounty is the equivalent of someone in the bottom 10% of US earners paying a sixth (napkin math) of a soon-to-be-discontinued penny.
Regardless, you are correct that the calculation is not obvious, unlike how I presented it. Preferably, things like multiple million character titles are handled correctly and no bounty is paid at all. I expect a smaller company to have an easier time here as well, lessening the financial burden.
demosthanos 21 hours ago [-]
> I expect a smaller company to have an easier time here as well, lessening the financial burden.
Why would you expect that? In a smaller company the ratio of developers to HTTP endpoints tends to be substantially lower (fewer devs per feature) than in a large company, so I'd expect the opposite.
mlyle 18 hours ago [-]
I'm not a SWE anymore and haven't been one for a long time.
I think it's in everyone's interest for bug bounties to be higher than harmful markets for the same bug, and a decent fraction of the harms they prevent. That's what is going to result in the economically efficient amount of bug hunting. And it's going to result in a safer world with less cybercrime.
tptacek 16 hours ago [-]
No, it's not. CNE is shockingly effective, both for organized crime and for the international IC. The productivity wins are so great there is enormous space for the market prices of tradable vulnerabilities to increase; maybe even multiple orders of magnitude. We're not going to disrupt that process with bug bounties.
I really think people just like to think about stories where someone like them finds a bug and gets a lottery jackpot as a result. I like that story too! It's fun.
Smart companies running bug bounties --- Google is probably the smartest --- are using them like engineering tools; both to direct attention on specific parts of their codebase, and, just as importantly, as an internal tool to prioritize work. This is part of why we keep having stories where we're shocked about people finding oddball security- and security-adjacent bugs that get zero payouts.
mlyle 14 hours ago [-]
> I really think people just like to think about stories where someone like them finds a bug and gets a lottery jackpot as a result. I like that story too! It's fun.
Increasing bounties by a small factor will be enough to reduce things on the grey market and to increase the ROI of people choosing to do freelance security research. The time between payoffs is enough that no one is going to get rich from $150k bounties.
Don't forget the extrinsic benefits: easier to brag about bounties on your resume than selling things into the grey market.
> Smart companies running bug bounties --- Google is probably the smartest --- are using them like engineering tools; both to direct attention on specific parts of their codebase, and, just as importantly, as an internal tool to prioritize work.
These "smart" companies should consider just how cheap even higher bounties are to prevent massive downsides. Of course, an underlying problem is how well these companies have insulated themselves from the consequences of writing and not fixing vulnerable software. A sane liability (and insurance) regime would go a long way towards aligning incentives properly.
mlyle 10 hours ago [-]
> I really think people just like to think about stories where someone like them finds a bug and gets a lottery jackpot as a result. I like that story too! It's fun.
P.S. A lot of the time your writing comes off as having a smug tone that rubs me the wrong way.
Actually, I already won a small lottery jackpot doing security stuff. Then a large one doing security stuff. Then a small one again doing other stuff. I could have retired a couple of decades ago, but now I'm a schoolteacher for the funsies. My days of scrunching over IDA Pro for pennies are over: I've got no personal direct interest in whether research gets paid more or less.
I just think that bug bounties are a good thing, but by being underfunded and with uneven quality of administration a lot of the potential benefit is left on the table.
aqueueaqueue 21 hours ago [-]
SWE comp is weird in that typically it is zero (see what's on GitHub!), often it is middle class, and sometimes it is small-scale CEO level (as in the actual job, not a founder).
I guess bounties fit into the framework somewhere between GitHub and the middle-class engineer.
I think it comes down to supply and demand. It also shows you what Google would pay employees if things were in their favour. In unrelated news, a tech billionaire is almost de facto VP of the US.
rectang 22 hours ago [-]
When bug bounties are priced low, it also irks those among us who care about security — for the sake of the organizations we work for, for the sake of our end users, and for the sake of the world at large.
reaperducer 22 hours ago [-]
[flagged]
seangrogg 22 hours ago [-]
You say greed but I would wager that most people in the thread are not financially independent. If someone can't retire from needing money in perpetuity, is it really greed to want to move that needle from "no" closer to "yes"?
fragmede 21 hours ago [-]
Or even just the next meal. We don't know their situation, and I've heard quite a few stories of the tech-adept being on the streets or behind bars. Some amount of greed is normal. It's when it goes way beyond that, into avarice, that it's a problem.
neilv 24 hours ago [-]
> Threat actors buy vulnerabilities that fit into existing business processes
Isn't there a market for this? For example, "Reveal who is behind this account that's criticizing our sketchy company/government, so we can neutralize them".
I'll also argue there are separate incentives beyond the market value to threat actors... Although a violent stalker of an online personality might not be a lucrative market for a zero-day exploit in this "threat actor" sense, the vulnerability is still a liability (and ethical) risk for the company that could negligently disclose the identity of the target to that stalker.
IMHO, if you're paying a gazillion Leetcode performance artists well to churn out massive amounts of code with imperfect attention to security, then you should also pay well the people who help you catch and fix their gazillion mistakes before bad things happen.
portaouflop 23 hours ago [-]
You are imagining a market that doesn’t exist.
First, there are only very few governments/companies sketchy enough to do this - and for those, a large number of non-anonymous people with huge reach have been harshly critical for years.
If such a market existed, they would go after all of those first - you don't need the email if you have the face, voice, and name. Since that is not happening, they just don't care that much about it.
wepple 23 hours ago [-]
There’s 100% an active market for this, and I think tptacek is simply wrong on this point (the others are valid)
The likes of Cambridge Analytica didn’t go away, they exist and absolutely go hunting for data like this.
The ability to map between different identifiers and pieces of content on the internet is central to so many things - why do you think adtech tries to join so many datapoints? Let alone things like influence campaigns for political purposes.
I’m not talking about assassination plots, but more mundane data mining. This is why so much effort in the EU has gone into preventing companies from joining data sources across products - that's embedded in the DMA.
ufmace 16 hours ago [-]
There's an easy way to put your money where your mouth is here. Just offer $11k for this or similar vulnerabilities out of your own pocket, and then resell them. If there really is a large and active market for this at higher dollar values, you'll make a killing!
Sure is funny there's nobody doing that despite so many people being so dead certain there's an active market.
0xDEAFBEAD 22 hours ago [-]
Sure, but do adtech companies buy vulnerabilities in web services to advance their mission? Wouldn't that risk running foul of e.g. the Computer Fraud and Abuse Act?
notpushkin 22 hours ago [-]
You don't need to sell the vulnerability to them, or even tell them the vulnerability is there. Just set up an API and bill them by the query.
fn-mote 21 hours ago [-]
This ignores tptacek's points in the top-level post.
> [...] a bug that Google can kill instantaneously, that has effectively no half-life once discovered, and whose exploitation will generate reliable telemetry from the target.
You can't set up unmask-as-a-service because it's going to take you longer to get clients than it will take Google to shut down your exploit.
notpushkin 21 hours ago [-]
Yes, but:
1. It can still take a while before Google finds out
2. You can log every mapping you got in the meanwhile, then keep selling the ones you already have
Edit: although probably most of your business will be over when word gets out that your data isn’t exactly legal (which your clients have understood from the start, of course; they could just plead ignorance)
sushid 20 hours ago [-]
People keep talking about this as if there's a 0% chance of being caught if you do this.
So let's suppose you did set up a service like this. Can you even make $10k? What are your odds of getting caught? How much do you value not being in prison and/or not having to hire a lawyer to get you out of there?
I'd take the 10k every time.
notpushkin 19 hours ago [-]
I’d take the 10k, too, but I think it’s possible to pull this off without getting caught.
It’s a lot more work, of course, but you can scrape some top youtubers first as it seems relatively easy. If you can pull this off you can then try and figure out how to legitimize your offering – I won’t go into details here, for obvious reasons, but now that you have something valuable on your hands it makes sense to spend some time/money on selling that.
wepple 18 hours ago [-]
You’re talking about this as if there aren’t other countries who actively infiltrate power infrastructure and for whom this is the most low risk mild attack (if you can call it that)
I’m not speaking theoretically, which I suspect most on this thread are.
sushid 15 hours ago [-]
Okay, which state actor is going to buy this for $100,000? How are you going to sell it to them? What's the risk of getting caught?
Even if someone on telegram was telling me that Russia would buy this information for $100,000, I think I would reach out to Google and "settle" for $10k.
bredren 21 hours ago [-]
I’ve seen a light version of this, where a “marketing data” company was scraping baby shower gift registry pages and selling the data to an infant formula company in the US.
The scraping was def in violation of the EULAs. Product data is one thing, but I believe this group was combining it with other sources and selling the identities and context as a bundle.
fragmede 21 hours ago [-]
An API is too much work. Grab the addresses for the top 100,000 YouTubers and sell that csv on the dark web.
0xDEAFBEAD 12 hours ago [-]
What happens when the first to buy the CSV starts selling it themselves?
fragmede 8 hours ago [-]
That’s not a new problem with selling info on dark web marketplaces. If you're interested in learning more, here are a couple of books you might enjoy:
"The Dark Net” – Jamie Bartlett
“We Are Anonymous” – Parmy Olson
“Future Crimes” – Marc Goodman
“Kingpin” – Kevin Poulsen
tptacek 19 hours ago [-]
I think you've missed my point. I know data brokers exist. Does there exist today a data broker that functions in whole or in significant part by acquiring vulnerabilities and exploiting them to collect data? Here's a more concise way to frame my argument: if you imagine yourself to be the first person to sell a particular kind of vulnerability, then your customer is imaginary.
wepple 1 minute ago [-]
Yeah, I think this is valid. “I’m confident I can find someone who will buy this” vs “I’ll message grugq”, roughly?
zemnmez 22 hours ago [-]
i think what's being conflated here is that there are reasonable buyers for this kind of vulnerability, but there's no market in the truest sense. I think a correctly connected individual could well sell this vuln to a state actor or a contractor to one; but the ecosystem of bug sales to these parties has no aggregate appetite for them, thus there is nothing driving the price up. People in the market for cyberweapons want point-and-shoot vulns that have broad usage beyond a specific server for a specific company (or parts for them), and ones that will last beyond a single corporation patching something. They are willing to pay such big $$$ for this that the whole market is optimized for it. The power players here would much rather buy a gun and shoot the lock off a door than a specialised set of picks that only work for that lock in that building.
tart-lemonade 17 hours ago [-]
The only real market (that I can see) is shady data aggregators. Governments just file subpoenas, and abusive megacorps can file lawsuits (all the anti-SLAPP statutes in the world can't prevent your Google account from being unmasked and you having to pay for a lawyer). There is a limited market in the form of internet addicts who want to harass people for kicks (since finding an email gives them another route to do that), but it's a small one. These people also tend to be entitled pricks, so they're not a very good customer base to have.
lolinder 24 hours ago [-]
> then you should also pay well the people who help you catch and fix their gazillion mistakes before bad things happen.
You missed their point about the business model of the security researchers here: their business model is finding a large number of small-value vulnerabilities. Those who are good at this are very, very good at this.
My company has a bug bounty program and some of the researchers participating in it make double or more my salary off of our program, but we never pay out more than this for a single report. And it's not like we're particularly vulnerable, we just get a steady stream of very small issues and we pay accordingly.
tptacek 24 hours ago [-]
They're right: I was talking about the business models at the buyers that these vulnerabilities have to slot into. The point I'm making is: there already has to be an operating business that's doing this for a vulnerability to be salable at all. If there isn't one, you're not selling a vulnerability, you're helping plan a heist.
lolinder 23 hours ago [-]
Right, I'm only responding to the last part, where they imply that these researchers are not well paid. I'm saying that on an hourly or monthly basis, $10k a vulnerability is actually quite a good payout when you have a surface area as large as Google's to explore and know what you're doing.
Their last paragraph shows that they didn't understand your paragraph here:
> For people who make their nut finding these kinds of bugs, the business strategy is to get good at finding lots of them. It's not like iOS exploit development, where you might sink months into a single reliable exploit.
neilv 23 hours ago [-]
> Their last paragraph shows that they didn't understand
I think I understood. The last paragraph of mine that you cite was speaking of the creator of the bugs, not the discoverer.
The liable party should be investing reasonably towards non-negligence. (Especially in the context of spending billions of dollars each year on oft-misaligned headcount that's creating many of these liabilities.)
I'm not talking about the company optimizing for the minimal amount they think they can get away with paying to try to cover their butt. Nor am I talking about how white/gray-hat researchers adapt viable small businesses to that reality.
pwillia7 24 hours ago [-]
Yeah, _should_ - but businesses exist to make money, and using the vulnerability in any way other than reporting it is illegal, so they get to set the price as the only buyer. They know this.
dadrian 21 hours ago [-]
I'd also add that the legality of law enforcement exploiting a server-side bug is much more of a gray area (or actually illegal), whereas there is a standard process for law enforcement or the intelligence community to get a court order that enables them to exploit devices that belong to a specific target (phone, laptop, etc).
tptacek 19 hours ago [-]
There's also the thing where like, as you go from iOS Safari to Windows Chrome to Acrobat Reader or whatever, grey market prices plummet. The top-dollar targets all have multilayered runtime protections and whole teams that do nothing but security refactoring. No serverside software is hardened that way (excepting the Linux kernel, maybe, but Linux kernel bugs are a standard component of clientside exploit chains). You could infer a pretty low price.
I will say: at Matasano, we were once asked by an established security company that turned out to be a broker to find PHPBB vulnerabilities.
Cpoll 23 hours ago [-]
> because $10,000 feels extraordinarily high for a server-side web bug.
Am I misunderstanding the bug? In my reading, this bug translates to "a list of the top 1,000 Youtube accounts' email addresses (or as many as you can get until Google detects it and shuts it down)." Why isn't that conceivably worth more than $10,000?
sbarre 22 hours ago [-]
Perhaps because email addresses are kinda/sorta PII (business emails are categorically not) but not quite comparable to home addresses, tax/payment information, etc..
Our emails get leaked all the time in data breaches, sometimes alongside much more important information such as home addresses etc..
This was certainly a bad leak that could be used to further dox people by connecting the email to other leaked info or other sources, but from Google's perspective, all they did was leak the email.
It was a privacy breach for sure.
But further doxxing based on the email would be "not their problem" I suspect they would say.
ldoughty 21 hours ago [-]
Oh darn, my youtube email was leaked... It certainly stinks that mybusinessname@gmail.com is now known to the world...
There's certainly bad things that CAN be done to a number of people with information when it's a personal email address that's used for numerous purposes... but the 3 people I talked to about having youtube (or any streaming) accounts all have mentioned it as being a separate account.
So the only threat I can see in most cases is just better phishing attempts, which is not necessarily an easy money maker... Unless they can steal the entire account? It is impossible to get support from Google, so it's quite possible you could change the bank info and get a month or two of payments before someone gets in the loop to stop it... and realistically, the more money someone is making on YouTube, the less likely they have troubles contacting someone at Google by some side channel... and the less likely it's a personal email address that reaches the actual star of the channel.. so the more popular the person, the less valuable the email address
Invictus0 21 hours ago [-]
Increasing the ease of phishing the top 1000 YouTube accounts seems like a pretty serious threat to me.
ldoughty 19 hours ago [-]
But as I tried to highlight, the more valuable the YouTube account, the more likely they actually have an account manager at Google. Additionally, they probably have staff, and it's not actually the "star" that you would be emailing... Once you gain access to their YouTube account, what could you actually do to harm them? Upload a video that encourages somebody to go to a website and do a thing? It would probably get reported fairly quickly.. and it probably wouldn't look like a normal video for that channel, so it might stand out... It's just a very weird attack vector that is more easily achieved without spending lots of money to unmask email addresses. The fake Elon Musk profiles/accounts pushing watches or telling people to buy crypto are infinitely cheaper and probably more effective.. you could just make an account that pretends to be the person you're trying to scam and make comments on their videos
fy20 11 hours ago [-]
The majority of the top 1000 YouTube accounts will actually have an email address publicly available, as they are a business and they want people to be able to reach out to them for sponsorships or brand collaborations.
For example, MrBeast has this in the video description:
> For any questions or inquiries regarding this video, please reach out to chucky@mrbeastbusiness.com
The vulnerability here is that you can find the exact email address tied to their YouTube account, which you can't really do anything with if they have strong passwords and use 2FA.
sushid 22 hours ago [-]
I think a simple way to think of it is: how much would an adversarial nation state buy this exploit for?
I just don't think Russia would be willing to pay $100,000 to get Mr. Beast's email address, even if that sounds tempting to you.
Cpoll 21 hours ago [-]
Why a nation state? My hypothetical is a phishing ring that sends an official-looking phishing email to 1000 non-public email accounts that typically only get emails from Youtube.
The exploit can be valued at: number of emails * probability that you'll phish them into letting you in * value of posting a "Free Robux" scam on a channel with 100M subscribers.
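That back-of-the-envelope formula is easy to put in code. Every figure below is an invented assumption, just to make the shape of the estimate concrete:

```python
# Expected-value sketch for the phishing scenario (all numbers invented).
n_emails = 1_000            # unmasked channel-owner addresses
p_compromise = 0.005        # assumed chance a given owner falls for the lure
value_per_channel = 5_000   # assumed USD value of one hijacked channel

expected_value = n_emails * p_compromise * value_per_channel
print(f"expected value: ${expected_value:,.0f}")  # → expected value: $25,000
```

Whether the exploit is worth more than $10k then hinges entirely on the assumed compromise probability and per-channel value, which is exactly the point under debate.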
sushid 20 hours ago [-]
Who are you advertising to? What is the risk of getting caught or getting scammed back while trying to receive your payment?
I feel like you are just taking into account the theoretical max value of a bad actor having these accounts, not the cost/risk of using this knowledge.
I could have the master key of a bank safe with $100MM worth of gold in the basement, but its value is going to be nowhere near that, even to bad actors.
jmholla 21 hours ago [-]
Yea. Especially with AI, easy access to identities of email users makes it so much easier to scam on a massive scale.
kube-system 19 hours ago [-]
Sure, they'd probably be more interested in political dissidents.
alt227 22 hours ago [-]
> Why isn't that conceivably worth more than $10,000?
If it exposed passwords as well then that would be worth a lot more, but a list of email addresses is not the most valuable of things on its own.
lxgr 22 hours ago [-]
Potentially deanonymizing pseudonymous Youtube accounts sounds pretty bad by itself.
g8oz 21 hours ago [-]
I can see that being worth a lot to a nation state
reaperducer 22 hours ago [-]
Why isn't that conceivably worth more than $10,000?
As explained by the parent comment, because there isn't a market for it. It's a novelty. Who are you going to sell that exploit to? At this time, nobody. Since Google doesn't have to compete against others for the bug, it pays low.
Cpoll 22 hours ago [-]
To clarify, I'm not suggesting selling the exploit. I'm suggesting selling MrBeast, PewDiePie, Blackpink, Sony Music, etc.'s Youtube email addresses. To phishing rings.
Those may be non-public email addresses (admin/billing emails), so the phishing potential is higher than emailing prteam@mrbeast.com (or whatever).
ipaddr 21 hours ago [-]
I'll suggest you want the bottom 1000 as they are most likely to fall for a scam.
zeroq 13 hours ago [-]
On top of that, I always felt that these bounties are generally aimed at hobbyists who may accidentally stumble on something - giving them additional incentive to finish the job and write an actual summary and repro - rather than Hollywood hackers.
Sure the gray market will pay more, but how do you contact criminals and make sure that you actually receive payment?
I know nothing about the market, but I think it's similar to buying drugs - we all know that drugs are everywhere and criminals are making a ton of money out of it, but if you haven't been introduced before how do you actually buy them? Go to a club and start asking random people?
(that last part might be different in US, but in EU we don't have people standing on every corner selling cookies)
harwoodr 22 hours ago [-]
It sounds like a standard threat-risk assessment applies.
How big of a threat is it/what impact will it have on business/reputation/etc.?
How likely is it to be exploited and how widely would it be considered useful to the market of threat actors?
kazinator 20 hours ago [-]
The discoverer had these choices:
- monetize the bug themselves; i.e. set up a site where you can submit a YouTube user id, pay some fee using your credit card and get an e-mail address.
- report that they have the ability to convert any YouTube id to an e-mail, with proof: then negotiate over compensation for the disclosure of the details
- just report the problem and be happy with whatever they get.
Ten grand doesn't look too bad for the most timid choice.
nightpool 17 hours ago [-]
Do any companies pay bounties for path #2? My understanding is that it's forbidden by most bounty programs since it could be seen as a form of extortion.
For #1, as tptacek says, it would be trivially easy for Google to shut a service like that down as soon as it was created, and prosecute the people running the service under the CFAA. Also, the amount of demand for that kind of data is pretty small given the number of email address databases already available online through legal means (e.g. Zoominfo, RocketReach, etc). It's a path filled with a lot of risk and not a ton of reward.
kazinator 15 hours ago [-]
In other words the more you think about it the better the $10k looks.
asah 1 day ago [-]
Also, Google can monitor the grey/black market and buy these exploits under false identities. For less urgent vulnerabilities (such as the YT email hack), this severely caps the bounty size.
dan-robertson 1 day ago [-]
My guess was that people selling vulnerabilities generally know who they’re selling to. Is there a big market for people selling exploits to unknown/anonymous customers?
tptacek 1 day ago [-]
People talk about "people selling vulnerabilities" as if there's an established pattern for selling arbitrary vulnerabilities. There is not. There's an established pattern for selling exploits for RCE vulnerabilities on a subset of popular client-side platforms. It's not an especially easy market to break into (as with consulting, people starting out here tend to end up subcontracting, and taking a huge income hit).
For any other kind of vulnerability, you're not so much "selling a product" as you are "helping plan a heist".
swiftcoder 1 day ago [-]
It's a pretty big part of most black markets that vendors don't ask too many questions about the buyer.
Do you really want to know what the FSB plans to do with your exploit?
mmsc 1 day ago [-]
>Unmasking Google accounts? Could there be a business there? Sure, maybe. Is there one already? Presumably no.
Absolutely, yes. Spam and targeted phishing attacks are in high demand.
My understanding is that it is possible to retrieve every public YouTube channel ID, if not also Google Maps/Play reviewers, quite easily. This exploit could have been used to build a massive, near-complete database of every Google account that has automatically had a YouTube account created.
lolinder 24 hours ago [-]
> This exploit could have been used to build a massive, near-complete database of every Google account that has automatically had a YouTube account created.
Massive email databases are extremely cheap, often free. For this vulnerability to be worth more than $10k there would have to be something about it being a near-complete library of Google accounts (rather than just another massive mailing list).
And that's assuming the prospective buyer believed that they could exploit this vulnerability in full before discovery. If I'm reading this exploit right, each email recovered requires two requests, one of which needs to make one of the fields 2.5 million characters long in order to error out the notification email sent to the victim. Presumably that email sending error would show up in a log somewhere, so the prospective attacker would have to send billions of requests fast enough that Google can't block them as suspicious or patch the vulnerability, all the while knowing full well that they're filling up an error log somewhere and leaving an extremely suspicious pattern of megabyte-sized request bodies on a route that normally doesn't even reach kilobytes.
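As a rough sanity check on that: here is the back-of-the-envelope scale, where the account count is my own assumption and the two-requests-per-email and ~2.5M-character figures come from the exploit description above:

```python
# Back-of-envelope traffic needed to enumerate "every" account via this bug.
accounts = 3e9                 # assumed number of Google accounts
requests_per_account = 2       # per the two-step exploit described above
oversized_body = 2.5e6         # bytes: the ~2.5M-character field

total_requests = accounts * requests_per_account
total_upload = accounts * oversized_body   # bytes from oversized bodies alone

print(f"requests: {total_requests:.0e}")
print(f"upload volume: {total_upload / 1e15:.1f} PB")  # → 7.5 PB
```

Petabytes of anomalous megabyte-sized POSTs against a route that normally sees kilobytes is not the kind of thing that goes unnoticed.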
I'm honestly not seeing how you could make an email list out of this that is anywhere near complete, and even if you could I'm not sure where the value to it would be.
mmsc 23 hours ago [-]
>Massive email databases are extremely cheap, often free
There are different qualities of email databases. "Known real email by Youtube account holders" would be a high value database. Definitely not free.
This type of vulnerability is extremely valuable for private investigators, too. "Who uploaded this video which my client is extremely interested in?"
0xDEAFBEAD 22 hours ago [-]
>This type of vulnerability is extremely valuable for private investigators, too. "Who uploaded this video which my client is extremely interested in?"
Would exploiting this vulnerability violate the Computer Fraud and Abuse Act? If so, would a private investigator really want to do that?
fragmede 20 hours ago [-]
The CFAA is so broad that it's really up to the prosecutor to decide you're an evil hacker and go after you, even if you didn't do anything bad - like use View Source in a web browser. A PI works around legally grey things anyway; what's the CFAA on top of that?
Sure but did you read the rest of the post you're replying to?
That database only exists in theory, based on extrapolation of this vulnerability to billions of individual exploits, and I think we can all agree that Google would detect this activity and shut it down.
Hence, that database might fetch a decent price if it existed, but it doesn't.
kasey_junk 1 day ago [-]
And then what?
Exploits need to plug into a business plan. Like any business plan there has to be somewhere that money gets extracted and that money needs to be more than the exploit cost & infrastructure costs & a risk premium.
If you can’t trivially say how the exploit explicitly gets turned into cash you probably are on the wrong track. Doubly so if it’s not a known standard and commoditized way that’s happened before.
chmod775 1 day ago [-]
There are often phishing campaigns targeting larger channels on YT, trying to trick someone with access into opening malicious e-mail attachments, with the end goal of taking over the channel. Usually the attackers then put up a livestream and push some crypto scam. It must make enough money, given that it keeps happening.
So then why do they need additional information about emails? They clearly already can email these youtubers.
chmod775 21 hours ago [-]
This will enable you to get the private e-mail of the google account that owns the channel, which is not necessarily the same one a channel may give away publicly.
So for some channels that provided no contact information, you now can acquire an email address, and for everyone else you may now get an additional one.
It also enables you to link multiple channels back to the same person.
Every bit of information you can get your hands on counts for social engineering attacks.
For very famous individuals this may also open them up to harassment. You can't find Elon Musk's private telephone number on the Tesla homepage for good reason. For that class of people, any time that sort of information leaks, they need to get a new private phone number/e-mail address.
UncleMeat 21 hours ago [-]
I think we can imagine reasons why this would be valuable. It's a vuln that's worth knowing about and fixing.
I'm not sure that there are terribly many black market opportunities for "every bit of information" such that this should be a six figure payout or whatever.
chmod775 20 hours ago [-]
Sure, but here's some examples that may be worth a lot of money to the right person, or can just cause a lot of harm:
- Regime critics with a channel on YT.
- Vulnerable individuals and others trying to keep their identity a secret. Putting yourself on YT means putting yourself in front of every deranged individual out there.
- Trump quite famously runs some of his own social media accounts personally, for better or for worse. And even where he doesn't, he probably retains ultimate control - in the case of YT it might be his personal google account that created the channel. He's probably not the only high value target to do so.
Also, if you happen to be in any data leak, being able to figure out your private e-mail address gives attackers another place to check whether you re-used a password.
kasey_junk 20 hours ago [-]
This is the “heist vs exploit sale” dichotomy that tptacek mentions.
For any vuln you can make up a hypothetical one off usage. But to find the right buyer for that is effectively building a team ala The Great Muppet Caper.
wswope 24 hours ago [-]
Say you’re a blackhat OSINTer trying to steal crypto. You have a first initial and a last name for a target (“J. Smith”) - plus you know this person is on github and discord.
You take out your handy email list and run a regex to find candidate accounts that match “J Smith”. You pipe matches into a recon script to check if github and discord accounts exist for each email. Suddenly, you’ve got a small pool of matches. You try more account-existence recon to find all the sites they’re signed up on. You look up all breached creds tied to the target emails, then run cred stuffing against any sensitive services they’ve signed up for.
Boom, you’ve gone from first initial + last name to compromising an account in thirty minutes.
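The candidate-filtering step in the scenario above can be sketched in a few lines. Everything here is invented for illustration (the names, the addresses, and the pattern itself); it only shows the matching idea, not any real recon tooling:

```python
import re

# Hypothetical leaked channel -> email list; all addresses are made up.
leaked_emails = [
    "jsmith83@example.com",
    "jane.smith@example.org",
    "bob.jones@example.net",
    "j_smith@example.com",
]

# Candidate filter for "J. Smith": local part starts with the initial
# and contains the last name somewhere after it.
pattern = re.compile(r"^j.*smith", re.IGNORECASE)

candidates = [e for e in leaked_emails if pattern.search(e.split("@")[0])]
print(candidates)
# ['jsmith83@example.com', 'jane.smith@example.org', 'j_smith@example.com']
```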
UncleMeat 1 minutes ago [-]
Surely the key part of this is "this person's email address and password have been published online together" rather than "I can identify this person's email address."
mmsc 23 hours ago [-]
It can get turned into cash by the EU when Google gets a massive fine for leaking private data.
grog454 24 hours ago [-]
> Exploits need to plug into a business plan
Or, you know, develop a new "business plan" around an exploit.
tptacek 23 hours ago [-]
Nobody does this. It would be an insane proposition. The vulnerability is going to die very shortly into your attempt to capitalize on it. Businesses have startup costs they have to pay off.
He reportedly made $32k and barely avoided jail time... which does not sound to me like the $10k payout is undervalued.
ineedasername 23 hours ago [-]
Wouldn't that require, if true, that new revenue streams around exploits aren't generally pursued? It seems like new scams, and variations on old ones around new methods, come about on a somewhat regular basis. And as with any business, there is going to be some speculative work around new "product offerings", so to speak. I'm with you on the idea that they are less valuable, as 'spec work, than something that enhances existing revenue streams in a more predictable way.
wepple 23 hours ago [-]
You could dump all the data over a matter of weeks, then you’re sitting on a treasure trove that will pay out over 5+ years.
You could sell it non-exclusively to every data broker
23 hours ago [-]
kasey_junk 23 hours ago [-]
Even if that did happen, it would drive down the price of the exploit and especially so for server side novel ones.
brookst 1 days ago [-]
But then what? Given the number of accounts Google has, odds are that nearly every alphanumeric combo less than 8 or 10 characters plus “@gmail.com” is a google account. This vulnerability gets you other domains, but still not seeing it. Massive databases of email addresses are a dime a dozen.
The only angle I can imagine is phishing for high profile creators, and at most this is a “makes it easier” and not a “creates the problem” bug.
ineedasername 23 hours ago [-]
You could target accounts of users likely to be younger & more susceptible to phishing for passwords-- kids subscribed to channels with younger content. Or other interest-based targeting. It's not quite spear phishing, but still more targeted.
refulgentis 1 days ago [-]
The back of an envelope can get you making silly claims quickly (ex. 26 ^ 8 is 208 billion)
brookst 27 minutes ago [-]
Not seeing the problem. Are you assuming that somehow there is at most one Gmail account per person on earth?
I have… I’m not sure. Ten maybe? And those are actual conveniences for different purposes. I’m sure plenty of people have hundreds, if not thousands. So what?
cirego 1 days ago [-]
I think you might be off by a factor of 10. Alphanumeric would be at least 36 symbols per position, which would imply 2.8 trillion combinations (36 ^ 8).
refulgentis 23 hours ago [-]
yeah, I was doing the charitable as possible version
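For what it's worth, both counts in this sub-thread check out; a quick back-of-envelope:

```python
# 8-character local parts: lowercase letters only vs. letters + digits.
lowercase_only = 26 ** 8
alphanumeric = 36 ** 8

print(f"{lowercase_only:,}")  # 208,827,064,576  (~209 billion)
print(f"{alphanumeric:,}")    # 2,821,109,907,456 (~2.8 trillion)
```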
jeffwask 1 days ago [-]
Honestly, that leaves straight-up harassment of YouTubers by other YouTubers and fans off the table, which by itself would motivate a few of them. Some of the same people who play in the black- and grey-hat worlds are the same people buying DDoS attacks and swatting streamers. They would have a party with their emails.
lolinder 24 hours ago [-]
> which by itself would motivate a few of them
Motivation in the abstract is not enough to counter GP's point—they have to have enough motivation that it's worth more than $10,000 to them and also have more than $10,000 to spend and also have the connections necessary to get in touch with someone who's able to sell a vulnerability like this and also be able to exploit it in a timely manner or at least think they can.
iinnPP 23 hours ago [-]
Or be a black hat. An incredibly common hat.
tptacek 1 days ago [-]
Draw up a straw-man business plan for this, with SWAG numbers.
jeffwask 1 days ago [-]
The motivation isn't financial but the impact to some of Google's biggest earners would be significant. Never mind the PR when Mr Beast and SSSniperwolf's personal details leak online.
jsnell 24 hours ago [-]
Major channels typically would be using a YouTube brand account, not a single normal Google account. (This is so that they can e.g. delegate parts of the channel management to multiple people without sharing a single login). The email address for a brand account is totally worthless.
> Threat actors buy vulnerabilities that fit into existing business processes
Selling crazy stories to the media is as old as time.
This vuln would give you a lookup table from email->YT
SELECT * FROM table WHERE email LIKE '%.gov'
tsunamifury 22 hours ago [-]
And? So what. You can spam them?
Come on.
wepple 21 hours ago [-]
You don’t think there are folks with content they’d very much not like to be directly associated with them? Comments, videos, likes, etc
nickelpro 21 hours ago [-]
There's no existing black market of criminals extorting politicians and celebrities over Youtube comments (also how you go from an email address to an identity is itself iffy).
You are imagining a potential market, the exploits are priced against markets that are real and pay out today. Security researchers aren't traveling salesmen going around to every shady character on the internet and pitching them on the potential of a new criminal enterprise.
UncleMeat 21 hours ago [-]
And so what's going to happen? Are there blackmailing rings that are in active need of ways of tying youtube comments to work accounts that are paying out the nose?
alt227 22 hours ago [-]
Or spear-phish, with a high degree of accuracy knowing the target.
EGreg 13 hours ago [-]
If you think Google had underpaid for this, imagine how much they got to underpay for this:
That guy is ridiculous! Could have made $50 million or more probably, if he had used a different registrar than Google itself.
He mentioned that Microsoft also let their domain lapse and that one was actually going to the open market... and what's more, they didn't even care when he contacted them! Oof:
What's the value of the mailing list with every YouTube user's email address?
idiotsecant 22 hours ago [-]
If the value of the bug payout is equal to the grey market payout, why would I ever sell it to Google? I could sell it on the grey market and not pay taxes on the sale, or worry about cumbersome reporting requirements. Google plays a dangerous game with this logic.
jovial_cavalier 23 hours ago [-]
The bounty is not a market. It's a subsidized incentive to subvert the market, and to give greyhat hackers a reason to be white-tinged instead of black-tinged. I would conservatively guess this guy could have found at least 30 people willing to pay $500 for details on this exploit, and he would've netted $5000 more than Google paid him to do the right thing.
Probably the risk of going to jail outweighs the extra $5k, but if a company is serious about the bug bounty program, they would offer a reward that's competitive with what you could extract from the black market, and I don't think that's hard to do.
KennyBlanken 16 hours ago [-]
> that has effectively no half-life once discovered
Google knew about this already, and hadn't done anything to fix it...and when it was reported, they didn't fully understand it and were dismissive, until the author came back at them again.
> Unmasking Google accounts? Could there be a business there? Sure, maybe
I'm pretty sure there are a _lot_ of youtube channels that private and public entities would love to uncover the identity of, and I would say that it's very unlikely these guys were the first to piece all this together.
The main takeaway for me is how incompetent Googlers seem to be, both in the basic "web application 101" mistakes made (not properly validating/restricting fields) and the clearly rushed evaluation of the security report. Such a report should trigger some folks going "oh, that's not good. I wonder what else is broken about this." Not "meh, not significant, quick patch, fixed."
Nobody at Google wants to work on stuff that isn't going to get them up a rung on the ladder.
fumplow 23 hours ago [-]
[flagged]
gxs 21 hours ago [-]
The fact that the amounts in Apple's bounty program can range from $5k to $500k for a single category tells me that the answer is: it depends.
It's most likely not just a comparison to black market prices or how many lines of code it'd take to patch.
There is kind of a market for server side vulnerabilities but I'm not sure if you would call it grey. I suspect ZDI will purchase commodity server side vulnerabilities (https://www.zerodayinitiative.com/). So stuff like apache, nginx, and maybe opensource webapps that have a narrower usage.
tptacek 19 hours ago [-]
ZDI claims they'll pay for bugs in serverside software, which is a different meaning of the term "serverside" than I'm using (admittedly, that definition is more precise). An nginx bug has a half-life once discovered. A Youtube bug does not.
I'm a little skeptical of published prices for serverside software, though. Do you know anyone who specializes in selling those bugs? I don't.
maxed 22 hours ago [-]
It does not make sense to value these kinds of (web) bugs by their potential price on the grey market. I think it's better to value these bugs by their potential impact, although that is hard to express in money.
In this case there were 4 billion email addresses at risk of being scraped; imagine if this had been exploited and the data leaked. The news would make headlines, which would definitely be bad for Google's reputation and stock price.
However, the impact of the leak is not that high as it only consists of a channel <> email address mapping, and therefore I think 10k is a fair price.
hnburnsy 11 hours ago [-]
From the article...
15/09/24 - Report sent to vendor
...
29/01/25 - Vendor requests extension for disclosure to 12/02/2025
09/02/25 - Confirm to vendor that both parts of the exploit have been fixed (T+147 days since disclosure)
12/02/25 - Report disclosed
So that is 136 days not fixed(?) and Google asks for extension.
Then 147 days to fix and 150 days to public disclosure.
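The day counts quoted above can be verified from the article's timeline (a quick check, assuming the D/M/Y dates are as listed):

```python
from datetime import date

reported = date(2024, 9, 15)             # 15/09/24 - report sent to vendor
extension_requested = date(2025, 1, 29)  # 29/01/25 - vendor requests extension
fix_confirmed = date(2025, 2, 9)         # 09/02/25 - both parts confirmed fixed
disclosed = date(2025, 2, 12)            # 12/02/25 - report disclosed

print((extension_requested - reported).days)  # 136
print((fix_confirmed - reported).days)        # 147
print((disclosed - reported).days)            # 150
```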
Compare this to Google Project Zero which gives other companies the following time to fix before disclosure...
>"This bug is subject to a 90 day disclosure deadline. If a fix for this issue is made available to users before the end of the 90-day deadline, this bug report will become public 30 days after the fix was made available. Otherwise, this bug report will become public at the deadline."
>If the patch is expected to arrive within 14 days of the deadline expiring, then Project Zero may offer an extension...Note however, that the 14-day grace period overlaps with the 30-day patch uptake window, such that any vulnerability fixed within the grace period will still be publicly disclosed on day 120 at the latest (30 days after the original 90-day deadline).
>If we don't think a fix will be ready within 14 days, then we will use the original 90-day deadline as the time of disclosure. That means we grant a 14-day grace extension only when there's a commitment by the developer to ship a fix within the 14-day grace period.
It's quite clever how the email notification was disabled (setting the email subject length to 2.5 million characters, so the email delivery itself would fail).
vachina 58 minutes ago [-]
Right? You'd think at Google's scale they'd sanitize every single user input, like truncating the string and suffixing it with "…" instead of the mail delivery throwing an exception.
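A minimal sketch of the defensive truncation being suggested; the 255-character limit is an arbitrary assumption for illustration, not a real Gmail or SMTP constraint:

```python
MAX_SUBJECT_LEN = 255  # assumed limit, chosen for illustration only

def safe_subject(subject: str, limit: int = MAX_SUBJECT_LEN) -> str:
    """Truncate an untrusted subject line instead of letting delivery fail."""
    if len(subject) <= limit:
        return subject
    return subject[: limit - 1] + "…"

# A 2.5-million-character subject comes back capped at 255 characters.
print(len(safe_subject("A" * 2_500_000)))  # 255
```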
AznHisoka 1 days ago [-]
“Applied 1 downgrade from the base amount due to complexity of attack chain required” <— is this common?
I’ve only participated in a few vulnerability programs, and most of them reward less if the security flaw is stupidly simple (but serious) such as revealing user emails in the page source.
tptacek 1 days ago [-]
I had the opposite impression, that it got dinged for being relatively complex for a web finding.
kevincox 23 hours ago [-]
Yeah, this seems backwards. It should be upgraded from the base amount because they effectively found 2 bugs!
swyx 17 hours ago [-]
made sense from the pov that if it's harder to exploit, it's a less damaging bug, so worth less
kevincox 17 hours ago [-]
But it's not really harder to exploit. It is an API call that any Google account can make. It's not like the second call has complex requirements or only probabilistically succeeds.
lolinder 12 hours ago [-]
It's harder to find, so it's less likely to be noticed and exploited by a bad actor than a glaring issue. My experience has been that this is typical of these programs: you're trying to reward researchers for finding things that are likely to be exploited, so the more arcane bugs are less valuable.
I'm not sure I'd apply that logic if I were Google, though. For smaller companies it makes sense, because the threat actors they are most likely to face are mostly script kiddies who give you at most a day before they get bored and try someone else. Google is another matter, since they're always a target for much more sophisticated attackers.
philipwhiuk 1 days ago [-]
> Some time ago, I was looking for a research target in Google and was digging through the Internal People API (Staging) discovery document
It's just an automatically translated schema file from their internal .proto definition. Google relies on real cryptography, not security through obscurity.
Furthermore the discovery endpoint is publicly documented[0] and specifically meant for external users. Nobody internal would read the discovery endpoint: they would just pull up the .proto file through code search.
Another observation: from my experience at Google it took multiple weeks of effort fighting against the bureaucracy to be able to expose an API to the public. It's not like an AWS S3 bucket that could just be accidentally public. The team knew this is public and had fought the bureaucracy to make it public.
Nice catch! And finding a vulnerability on such a high profile service will look very nice on the resume.
Congrats!
swyx 17 hours ago [-]
i'd really like a way to email a youtube channel owner though. even if it sits in a youtube inbox for a year. most of them don't have email contacts and it's hard to reach out for sponsorship or any other deals.
nullderef 1 days ago [-]
Breaking the email system so that it's not sent is the cherry on top. With companies as big as Google who have developed so many products, "security" feels fake. If every line of code is a possible vulnerability, with millions it's just inevitable. It feels like the only way is to keep things simple (e.g., deprecate the recorder site), but even then.
rpigab 1 days ago [-]
That's probably another reason why Google kills so many products that are successful, but not successful enough for Google's whole system to justify keeping them alive and secure.
goldfish3 22 hours ago [-]
There's a lot of truth to that. Older projects often get bogged down by new security & compliance horizontals, to the point where maintenance is just no longer worth it.
echelon 1 days ago [-]
100%. Every product not a part of the core mission is attack surface area, ongoing maintenance to ensure it works with the rest of Google services and infra, and drag on the rest of the team and velocity.
The part that sucks for consumers is that they often kill things that people like. I wish they had a better way of doing this.
Bravo to brutecat for this excellent discovery, productionization, and writeup.
zoklet-enjoyer 1 days ago [-]
They could spin these products off into separate companies and cut the integration with the rest of the Google ecosystem.
scarface_74 23 hours ago [-]
Where is the profit for the individual product? There are a lot of services at every BigTech company that would not make sense as an individual product. But they make the overall ecosystem better or make money only because they are a part of the larger company.
That’s part of the stupidity of the DOJ trying to force Google to sell Chrome. Who would want it? And how would they profit from it?
rvba 8 hours ago [-]
The Firefox admins are making millions of dollars per year even as they make one bad decision after another (mostly focusing on acquisitions and new projects instead of actual work on their core product: Firefox).
hypothesis 19 hours ago [-]
> That’s part of the stupidity of the DOJ trying to force Google to sell Chrome. Who would want it? And how would they profit from it?
All valid questions, but it might be that splitting the tool used to bludgeon everyone around is still worth it, even if pace of development slows down considerably.
scarface_74 19 hours ago [-]
There will be no development. Who is going to spend money to develop it and why would they? Microsoft even decided it wasn’t worth it to develop their own engine.
Unless you are using Chromebooks, every desktop user who uses Chrome made an affirmative choice to download it.
hypothesis 18 hours ago [-]
> There will be no development.
My point is that maybe it is okay? Runaway churn is at least partially responsible for the current situation, where most companies are simply unable to compete.
scarface_74 18 hours ago [-]
What companies are trying to “compete” in the browser space?
Apple has no reason to compete, it can just make more and more functionality for native apps as can Google if it doesn’t have to worry about Chrome anymore.
Microsoft doesn’t care about the browser anymore and just uses Chromium. Firefox’s revenue comes completely from Google. If Google doesn’t have to prop up Firefox for antitrust reasons anymore, why would they?
echelon 1 days ago [-]
Probably way too much effort. The apps aren't built for generic infra, but rather Google's internal weirdware. It wouldn't be possible to run it anywhere else without a rewrite.
throwaway2037 24 hours ago [-]
I agree, and I like this term "internal weirdware". Real question: Why don't we see more start-ups try to clone old terminated Google services with a freemium model?
aqueueaqueue 20 hours ago [-]
There probably are. The real question is why are they not successful. Maybe they need to solve distribution. I use Google Calendar tasks for todos because it is there, handy, for example.
As sister comments have said there is no money in it. They are stickiness plays or just bets for Google.
scarface_74 23 hours ago [-]
People don’t want to pay for things.
eitland 19 hours ago [-]
I want. And I do. Notable examples are Kagi.com and Raindrop.io. I've also been sponsoring a number of projects for a number of months, from journalism to social media startup.
But I am getting more hesitant as often when I (and others) do it seems companies think they can increase their prices wildly or do other stuff.
I have this exact feeling now with Logseq: I started paying for sync a while ago and it seems so did others and now they are rewriting the whole thing from plain text[1] to some kind of database based storage :-/
[1]: which could be synced over git, transferred effortlessly into another application and was one of the reasons I went with Logseq
scarface_74 18 hours ago [-]
How many of those companies are profitable? How many do you think you will see a blog post about in a year or two - “Our Amazing Journey” where they won’t either go out of business or get acquired and their product gets shut down”?
We are currently serving around 2.1M queries a month, costing us around $26,250 USD/month.
Between Kagi and Orion, we are currently generating around $26,500 USD in monthly recurring revenue, which incidentally about exactly covers our current API and infrastructure costs.
That means that salaries and all other operating costs (order of magnitude of $100K USD/month) remain a challenge and are still paid out of the founders’ pocket (Kagi remains completely bootstrapped).
So what do you know about kagi’s profitability :).
I’m honestly glad to hear that. Until I heard your interview with Gruber, I thought Kagi was the same company that use to provide a payment platform for Mac indie developers.
It’s always good to see a bootstrapped company become successful without enshittification. I put you up there with BackBlaze.
I see you have investors now (not saying that is a negative). Are the laws still the same about having to be a “qualified investor”? Can anyone invest - asking out of curiosity.
ragazzina 23 hours ago [-]
Maybe Apple should do the same and kill their many half-baked software products.
gallerdude 22 hours ago [-]
Which ones? In my experience, a lot of Apple's products have incredible longevity. Notes, Calendar, Pages all just get better and better.
ragazzina 22 hours ago [-]
Alarms, Photos, Siri, Books..
21 hours ago [-]
azinman2 21 hours ago [-]
How are any of these half baked? (Aside from obvious Siri deficiencies)
ragazzina 21 hours ago [-]
Alarms is unreliable for the basic functionality of waking you up.
Photos redesign makes it really hard to use.
Siri works half the time, maybe even less.
Books lacks basic functionality such as downloading and keeping books on device.
azinman2 21 hours ago [-]
How are alarms unreliable?
Photos redesign maybe something you don’t like, but you can hardly call it half baked. All of the functionality is there and there’s a new consistency in how it works that wasn’t there previously.
Books automatically downloads to device. There isn’t a way to read a book without it local.
> Books automatically downloads to device. There isn’t a way to read a book without it local.
Have you used Books extensively or just skimmed it? There's no way to keep books on device, make another Google search if you do not believe me.
> Photos redesign maybe something you don’t like, but you can hardly call it half baked.
Perfect, then keep Photos and kill only alarms, books and Siri.
azinman2 20 hours ago [-]
I’ve never had my alarm fail, but it looks like others have hit this bug. So by your definition if a product has a bug (even if rare but intrinsic to the functionality), then it’s half baked. Given (effectively) all software has bugs, then by your definition there are no fully baked software products. I think we have very different definitions of what makes a product half or fully baked.
I haven’t used Books extensively outside of audiobooks. So it sounds like there’s offloading of caching going on that’s iCloud wide; disabling iCloud sync would fix this. I can imagine that being frustrating if the book you want isn’t there when you’re on a flight (which should only happen if you haven’t recently accessed it). I agree there should be a way to prevent this. I wouldn’t call that half baked, but it’s a big enough problem I’d agree that’s not fully thought through (or more likely, they did think through it but came to a different conclusion).
lilyball 19 hours ago [-]
> Have you used Books extensively or just skimmed it? There's no way to keep books on device, make another Google search if you do not believe me.
Once you download a book to a device, it stays downloaded. There is a setting to automatically remove downloads once you're finished with the book, but that defaults to off (and I didn't even realize it was there until I went looking just now).
pixelesque 19 hours ago [-]
Probably a few times every quarter I have iOS alarms that didn't go off for some reason on my iPhone.
It's happened for years: it was pretty bad about 5/6 years ago, but Apple claimed they fixed it, but it's still happening a bit.
In fact, when I really need to wake up at a particular time (say for a flight), I set two alarms 1 minute apart.
SirMaster 16 hours ago [-]
Seems like there is more to this. I have been using iPhones since they came out and can't think of a time my alarm never worked, and I use them multiple times a day.
aqueueaqueue 20 hours ago [-]
They are not. I haven't seen an unreliable alarm since the digital age. 1980s LED alarm clocks were reliable.
superb_dev 20 hours ago [-]
What are you talking about? I’ve got the books app open now, I can see all of my downloaded books. In fact there’s a whole section in the library for my downloaded books!
Go to library > collections > downloaded.
I can see books I purchased and other PDFs that I uploaded.
I do agree on the Photos redesign. I feel like I constantly get stuck on certain pages.
ragazzina 20 hours ago [-]
> What are you talking about?
Your tone is aggressive and uncalled for. In fact, the fact that you have never found a very common bug says a lot about your inattention to detail.
There's no "keep forever on device" button, which to me seems like a basic functionality. If the app decides to delete them, it will.
Oh okay, but that’s not what you said? You implied that no books could be downloaded at all which is just not true.
iCloud offload is a pretty common feature on Apple devices and one that I find pretty handy. I understand why it doesn’t work for others though.
You can turn off iCloud sync in general for the device.
ragazzina 18 hours ago [-]
I don't think I have implied that. I have said that there's no functionality to download and keep files on device, which is true, because you cannot trust the device will not delete your files without your permission (or even without warning you first). But I'm not a native speaker so I could have been misunderstood.
20 hours ago [-]
robin_reala 1 days ago [-]
Unfortunately with the number of users Google has, any deprecation will be met with cries of pain / I-rely-on-the-spacebar-to-heat-up-my-computer. See https://killedbygoogle.com/.
aeonik 1 days ago [-]
They really aren't shy about massive breaking changes.
I remember being upset about Google Reader for a few months after its death… before moving to one of its many, fuller-featured competitors and carrying on using RSS feeds exactly as before.
What upsets me re RSS these days is how many people were apparently so reliant on one reader that they still publicly mourn every time it comes up, 12 years later. Who are these fair-weather feed followers who threw their hands in the air with the loss of exactly one product?
D13Fd 1 days ago [-]
There still is no replacement for Google Reader. The difference is that there was a community around Reader's social features. That only really works with wide adoption, and it's a lot easier for people to adopt a Google product than a random Company X one. Today, there are many replacements with the mechanical features of browsing & sync, but the community will never come back.
The other problem was that Google killing Reader was a signal to the broader web to move away from RSS. RSS has kind of limped along since then.
baud147258 21 hours ago [-]
> The difference is that there was a community around Reader’s social features.
Are social features the main selling point of RSS readers? I mean I just use mine to know when there's a new blog post/webcomic posted on a few sites I follow, without having to give my email or use another platform like social media to know about it. And I'd use the social features which are present on the blogs, under the control of the blog owner(s), if there's any. Or maybe my use case is not the most common one?
Though I agree about the signal Google sent by killing GR.
flir 1 days ago [-]
GR had some primitive social features that none of the competitors, as far as I know, could replicate. Side-effect of being the largest. Even an exact clone wouldn't behave the same. It was the core of the blogging ecosystem, and IMO its removal was the main cause of blogging falling apart.
All so they could clear the way for Google Plus. And look how that turned out.
So yeah, watershed moment, the point where the scales fell from my eyes, still justifiably pissed, fool me twice, etc etc etc
(Still using RSS daily, though I lapsed for a while).
ant6n 1 days ago [-]
Killing Google Reader killed blogs. You personally can replace Google Reader as a product, but since most people didn't and just sort of moved on to closed platforms, there was less content produced on blogs and less discussion activity on the blog posts that were created after.
scarface_74 23 hours ago [-]
John Gruber of DaringFireball has said his blog has still not recovered completely from the Google Reader shutdown and he has the most popular blog in the Apple ecosystem
ibaikov 1 days ago [-]
I realized I was reading too many websites and decided to switch to RSS, only to find out that Google had killed Reader a month earlier.
Years later, I came across Artifact, created by the founders of Instagram, and thought it was an interesting idea. The problem was I was reading its shutdown announcement.
Sometimes I think products are killed way too early. Look at Twitch: it boomed after years of stagnation.
robertlagrant 1 days ago [-]
Twitch has found some not-amazing niches to bulk up its revenue. A service needs to be profitable to work, and I don't think anyone wanted to pay for RSS. Or not enough.
ibaikov 22 hours ago [-]
Somewhat true back then, but I think now there are more people who would pay for it, and they could capitalize a lot on integrating LLMs into RSS apps.
robertlagrant 21 hours ago [-]
By comparison, Twitch's yearly revenue is consistently over $100m[0] from subscriptions and other in-app payments (e.g. taking a cut for buying their internal currency).
I didn't use Reader. What was so special about it? Iirc it was an RSS aggregator, which sounds pretty simple to replace. Nobody has an open source equivalent?
kobalsky 1 days ago [-]
There wasn't an equivalent when it was deprecated. It downloaded the contents and they were archived in your account.
Many of us are still upset about Reader. It definitely felt like a watershed moment between the old cool Google who sent pizzas to hackers and had clean fast web design and weren't evil.
I'd be so glad now to give up on Google and all its enshittified shit. I could give up things that are still super useful and I get value from every day: YouTube, Gmail, Play Services, Drive, Maps. But I don't think I could give them all up at once. I've been trying to migrate to Proton and OpenStreetMap and some kind of real Linux phone etc, I don't even mind if I have to fiddle around before everything works. The trouble is that the claws are in, but they're not in me.
Remember when Google proudly didn't advertise themselves? They got to critical mass through word of mouth, from having a compellingly better product. Now what they have is network effects and lock-in. They used to appeal to developers and techies, and that ended up making the services better for everyone. Now, like all the other tech giants, they have PHBs optimizing for the next millisecond of attention and microdollar of ad revenue from a lowest-common-denominator victim.
Google is so big that it's a significant part of life for a significant proportion of the world. When Google is shit it moves the needle on net human suffering. I think the UN should be focusing on preventing war and trying to salvage our environment, but if they aren't going to do that then it might be rational to just form a worldwide consumer group to take on megacorps.
meindnoch 1 days ago [-]
+1 for Google Reader
That marks my coming of age on the enshittified web. The killing of Google Reader was a watershed moment. It marks the moment in time when the tide turned from the open Web to closed social media gardens.
DonHopkins 1 days ago [-]
I'm still upset about the "I've Got A Bad Feeling About This" button.
I would challenge you to give me examples where security feels "real" and how does that help.
Most software products rely on very complex software stacks, and if you trust all the libraries and the OS you use 100%, I would say that's the wrong mindset. There were bugs even in the processor (Meltdown). Security is a continuous battle and you never know if you won, only (sometimes) if you lose.
tialaramex 1 days ago [-]
You can tell security is real the same way as lots of other things, reality doesn't give a fuck. Like how you can tell the difference between man's laws (e.g. "The Offside Rule" or "Constitutional Rights") and Mother Nature's laws (e.g. Thermodynamics). Try it, kick the ball even though the rule says you mustn't - if you get lucky the referee doesn't notice and play continues. But if you try to make a system more ordered without expending energy it does not work. Reality doesn't give a fuck.
When I breeze through your login process with the wrong credentials, that's because your security was fake. If it were real, that attempt would break because the system didn't know who I was; if some bug lets me past login, I don't somehow successfully log in as me, I'm logging in as nobody at all, which is clearly nonsense.
This is "Make Invalid States Unrepresentable" at scale, and it's difficult to do, but not impossible.
hkwerf 1 days ago [-]
You're essentially suggesting a Drake equation [1] equivalent for the number of security vulnerabilities based on NLoC. What other factors would be part of this equation?
Language or framework definitely plays a role (isn't that what the Rust people are so excited about?). Think of it like the materials/tools used.
There's definitely some measure of complexity. I still like simple cyclomatic but I know there are better ones out there that try to capture the cognitive load of understanding the code.
The attack surface of the system is definitely important. The more ways that more people have to interface with the code, the more likely it is that there will be a mistake.
Security practices need to be captured in some way (maybe a factor that gets applied). If you have vulnerability scanning enabled that's going to catch some percentage of bugs. So will static analysis, code reviews, etc.
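A toy version of such an equation, pulling together the factors listed above, might look like the sketch below. Every coefficient and the base rate are invented for illustration; nothing here is calibrated against real vulnerability data.

```python
# Toy Drake-style estimate of expected vulnerabilities in a codebase.
# All coefficients are hypothetical, chosen only to show the shape of
# the equation discussed above.

def expected_vulns(kloc: float,
                   language_factor: float,    # e.g. memory-safe ~0.5, C/C++ ~1.5
                   complexity_factor: float,  # scaled cyclomatic/cognitive score
                   surface_factor: float,     # exposed endpoints, input formats
                   practice_factor: float) -> float:  # reviews, scanning, fuzzing
    base_rate = 0.1  # hypothetical vulnerabilities per KLoC before adjustments
    return (kloc * base_rate * language_factor
            * complexity_factor * surface_factor * practice_factor)

# A 500 KLoC C++ service with a large API surface but a decent review culture:
estimate = expected_vulns(500, 1.5, 1.2, 1.3, 0.4)
print(round(estimate, 1))
```

The point of writing it as a product is that each factor multiplies the others: a huge attack surface can swamp good practices, and vice versa.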
maximus-decimus 1 days ago [-]
How close to the Ballmer peak the programmer was when he wrote the code.
bobnamob 1 days ago [-]
Correlated or inversely correlated?
zwnow 1 days ago [-]
The point is: security is fake. No app is truly secure. You can spend millions on app security and all it takes to breach that is one slip up of a human user.
TheDong 1 days ago [-]
I'd take away "security is complicated and multi-faceted", not "fake".
It's not a black and white of "an app is truly secure" or "an app is truly insecure", but rather a continuum from "secure enough in practice for this threat model and purpose" to "an insecure mess".
Like, plenty of websites and apps have launched, existed for years, and then shutdown without a single security incident. In those cases, surely the app was secure, right? At least secure enough? Signal so far has been "secure enough in practice" for most people, while iMessage has in practice been "secure enough if you're a normal person, but with serious security issues for anyone who might be subject to serious targeted attacks"
Say more about what you mean by "no app is truly secure"? Especially in the context of signal?
zwnow 1 days ago [-]
I'm just saying that all it takes is one employee clicking the wrong URL to breach your app's security. I am not talking about the app itself. You can have all the security the world has to offer implemented, and yet you can't get rid of human error.
TheDong 1 days ago [-]
I'm totally not understanding what you're saying then.
> I'm just saying that all it takes is one employee clicking the wrong URL to breach your app's security
Pretend I'm a signal employee. What link can I click that breaches the app's security?
They don't store unencrypted data, pushing source code changes requires review, releases are signed and a single employee can't compromise the release process, so I'm missing how one employee being compromised could lead to the signal app breaching signal's security.
Also, in practice, how often are apps compromised from a phishing attack? I don't even really see news reports on that, so I'm curious if you're operating off like a specific case or something.
zwnow 1 days ago [-]
Some malicious mail that grants remote access to the employee's device? It's not that hard to understand.
dboreham 24 hours ago [-]
Actually it is hard to understand because that employee's device isn't an attack vector.
zwnow 22 hours ago [-]
It absolutely is. Every connection to your app is also an attack vector.
robin_reala 1 days ago [-]
I’d misunderstood the title to refer to $10k of GPU compute or something like that. Unfortunately I suspect there’ll be tens or hundreds of occurrences of this bug given that they just picked one old Google product and immediately found a hole.
SXX 1 days ago [-]
> given that they just picked one old Google product and immediately found a hole.
This is just not how it works. Most likely the author spent weeks or months digging into different products until he found something worthwhile.
saretup 24 hours ago [-]
I misunderstood it to mean they are selling any YouTuber’s email address for $10k
jwpapi 15 hours ago [-]
One commenter already clarified how the bounty amount is related to black market value. Now a lot of others might say that Google (or other companies) doesn't value security enough.
But one has to understand that for security purposes they SHOULD pay as little as possible. If they pay out more, there is more incentive to find bugs, but unfortunately you'll also drive up black market prices.
So the GTO strat is to just cut off the black market with as little money as possible.
nashashmi 16 hours ago [-]
Does Google still do those security vulnerability reveals if the thing has not been fixed in 90 days? This was fixed in 147 days.
deviation 11 hours ago [-]
Cool attack chain.
Regarding the payout-- I'm curious if targeting (in the disclosure video) a CEO/C-Suite exec @ Google would have encouraged a higher amount from the panel.
hoerzu 1 days ago [-]
I haven't gotten access to my YouTube channel since it migrated to Google account. If anyone can set me in contact with anyone who can help recover my account, it will be rewarded with karma for life
ornornor 1 days ago [-]
Haha, a human at Google. Good luck. My Maps reviews have been blocked "because reasons" for years; I'm still trying to reach a human there.
shrx 17 hours ago [-]
Same here, I tried to fix the navigation by re-creating this bit of road multiple times but it's always rejected without a reason given. https://maps.app.goo.gl/YkjqBZSRPrjFLvsi8
vednig 12 hours ago [-]
A little update on the video: it seems to have been disabled per YouTube ToS.
opello 11 hours ago [-]
Looks like the Internet Archive has saved us once again:
After reading the article top to bottom I still had to come to the comments to find out what the "for $10,000" was about. It's the payout for a bug bounty.
ggregoire 22 hours ago [-]
Bottom of the article:
> Timeline
> 05/11/24 - Panel awards $3,133.
> 12/12/24 - Panel awards an additional $7,500.
binarysneaker 23 hours ago [-]
Same. Oddly worded title.
theogravity 19 hours ago [-]
What can one do with a Gaia ID? I don't think the article went into the impact of having it.
badosu 18 hours ago [-]
After reading it a bit further, they searched for a service that exposed email via Gaia ID, and found it via Pixel Recorder.
badosu 19 hours ago [-]
Use it on the block-user API and get an email address, from what I understood.
aqueueaqueue 13 hours ago [-]
But what else could definitely or potentially be done? It is an interesting question.
xeromal 11 hours ago [-]
I think it's mainly a way to discover who's behind a youtube comment or video. Getting their email address often leaks information about them
areyourllySorry 12 hours ago [-]
get google maps reviews and search web archive for google plus and profiles.google.com snapshots
fnordian_slip 1 days ago [-]
Very nice breakdown. But while 10,000 dollars seems like a decent sum, I expected more for a bug of this severity, if I'm being honest. Especially as they initially awarded only $3,133. But I'm not sure how much is usual for such cases. Almost 150 days also seems like a long time for fixing it, imho.
Frieren 1 days ago [-]
Bounties make sense for open source projects where the main reward is to contribute to the community.
For private corporations/closed code, it is a way to get a thousand engineers looking at their code and APIs while paying only a small amount to whoever is the first to find something. Everybody else gets nothing, even if they put in a lot of time and effort.
Underpaid is an understatement.
blagie 1 days ago [-]
$10k is not a decent sum. The compensation reflects roughly 0.25-3 weeks of SWE costs in payout.
Industry-wide SWE compensation is somewhere in the $100k-$200k range. Typical Google SWE compensation is $350k. Top Google SWE salary is north of $1M. Increase by 60-100% for overhead, or somewhat more for consulting overhead.
The amount of work doing something like this is orders of magnitude more than the compensation:
1) Most security vulnerabilities investigated lead nowhere, were previously discovered, etc. That's lost time.
2) Working out something like this is much more than 0.25-3 weeks.
More critically, the black market value of most vulnerabilities is much more than Google pays out. A rational economic actor would sell something like this grey market or black market, rather than reporting.
The problem is none of the big companies take security seriously. The reason is that there are no economic damages to even serious data leaks, so what incentive is there for them to take data security seriously?
Many companies (including big ones like T-Mobile) have major security compromises every few months (and in the case of T-Mobile, have had so for decades) and simply don't care. I don't mean to pick on T-Mobile -- I like them as a company -- but they're pretty representative.
tptacek 1 days ago [-]
It's an extraordinarily high sum for this kind of finding. Bounties are generally not a referendum on how clever the underlying work is. A full-chain iOS bug is worth hundreds of thousands of dollars because Apple competes with the grey market for it (and even then, it's an apples-oranges comparison and Apple pays substantially less than the rest of the market for structural reasons). Nobody competes for this bug; nobody is going to pay these people $10,001 for a bug that Google can end instantaneously the moment they figure out what's happening.
blagie 1 days ago [-]
My commentary was precisely about the state-of-the-practice.
That $10k is "an extraordinarily high sum for" what was likely weeks of work on this bug, and probably months of work poking in other places, reflects the very, very low focus on security industry-wide. This is why we need significant civil -- or possibly occasionally criminal -- liability. Civil if it's simple negligence, and criminal if it's gross negligence leading to harm.
If Google were to pay me $200 if it leaked my data, that would:
- Be worth much less than my privacy
- Amount to damages of $400B worldwide if there were a compromise impacting all 2B users (although, realistically, damages would be lower in middle- and low-income countries)
This would represent a 20% fall in Google's market cap, which feels about right.
At that point, I expect the bug bounties would be set many orders of magnitude higher. Security bugs should be rare. They're common. This is a problem, and one created by our market incentive structures.
You are correct that Apple is an exception, and seems to mind security.
edanm 1 days ago [-]
I think you're looking at this wrong.
Security is hard. Incredibly hard. Unlike most things in business which are positive-sum, security isn't - it's adversarial. If we make companies pay huge civil fines for things that are so hard to protect against, we're stifling a ton of innovation.
I usually analogize a large company to a bank. A bank is supposed to keep your money secure, and for sure you'd have a legitimate beef if a bank robber could waltz in and steal your money easily because it's not kept in a vault.
But what if it is kept in a vault? What if the bank isn't attacked by a random group of bank robbers, but rather by the armed forces of a hostile nation? We don't expect banks to protect against armies - that's what we have states for! They provide centralized protection against threats that are far too large for any individual entity to take on by themselves.
This is the same, albeit out of sight, situation with large companies. You can have thousands, tens of thousands of people around the world poking at everything your company does for years, looking for any vulnerability. No company can truly withstand that kind of scrutiny - and I don't think making civil penalties higher will change that. And on top of criminal or opportunistic actors, companies also have to be worried about state actors too.
The only way is for the state to take on an active role in security. I don't see any other way that gets real security for anyone.
tptacek 1 days ago [-]
Google pays a piece rate. They pay the rate the market will bear, unless you impress them, like these people did, and then they pay a bit more. They do not compensate you for your working hours.
Google is not going to pay you $200 if they leak your email address.
Google pays as much attention to security as Apple does.
If you want a world in which these kinds of security bugs create multimillion-dollar liabilities, you can advocate for the new statutes that will create that world; just be aware that only companies like Google will be able to afford to operate in that world.
blagie 1 days ago [-]
I am very much advocating for new statutes. That's precisely what my post was doing. Companies should not be allowed to externalize costs of bad security on users.
I disagree with the claim that "only companies like Google will be able to afford to operate in that world." That's not how markets work.
1) The impact would be that frameworks would develop with better security. This would result in a slowdown of software engineering. Perhaps it would start to look like any other engineering discipline, where things are analyzed for safety.
2) Every other industry shows that in situations like this, big players are disadvantaged.
The analysis here is pretty basic:
- If I'm running a small $10M startup making a little iPhone app for some obscure task, the risk of legal liability from this is among the smallest of my risks of going under, so I'm incentivized to ignore it. If I were faced with a $400B liability, I'd declare bankruptcy, so in effect it's a $10M liability. The expected cost is 5% × $10M = $500k, so it makes sense to spend up to $500k to mitigate a 5% risk.
- If Google has a team working on that same app, and doesn't manage security properly, the $400B liability stays a $400B liability. There is no ROI analysis where it makes sense to build a little app which has a 5% chance of leaking data. Do it right, or don't do it at all. The expected cost to Google here is 5% times $400B = $20B.
This is why, in virtually every other industry, big players are (1) more trusted (2) more expensive, and phrases like "small, fly-by-night operation" exist (and make business sense to run).
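The expected-cost asymmetry above can be written out explicitly. All the figures here come from the comment's own hypothetical ($200 per user, 2B users, a 5% breach probability) and are illustrative only:

```python
# Expected liability under the hypothetical per-user-damages statute
# discussed above. Every number is taken from the comment's scenario.

breach_probability = 0.05          # assumed 5% chance the product leaks data
per_user_damages = 200             # hypothetical statutory payout per user
users = 2_000_000_000              # ~2B affected users in the Google scenario

full_liability = per_user_damages * users  # $400B on paper

# A small startup can discharge the liability in bankruptcy, capping its
# real exposure at roughly the value of the firm.
startup_cap = 10_000_000
startup_expected_cost = breach_probability * min(full_liability, startup_cap)

# A giant can't walk away, so it bears the full expected cost.
google_expected_cost = breach_probability * full_liability

print(f"startup expected cost: ${startup_expected_cost:,.0f}")
print(f"google expected cost:  ${google_expected_cost:,.0f}")
```

The bankruptcy cap is what makes the same statute bite so differently at different company sizes, which is the argument the comment is making.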
tptacek 24 hours ago [-]
Well, as long as we agree that none of this is how the world works right now, I don't think we have to litigate this.
vineyardmike 19 hours ago [-]
> If Google were to pay me $200 if it leaked my data, that would: Be worth much less than my privacy
I think you need to do a lot to justify a non-zero value for this, frankly.
How is your “privacy” worth $200? What data is valuable and what data isn’t? Under what context?
If your privacy is worth $200 per leak (by some definition of leak), you surely take steps to anonymize your data already and wouldn’t use a service like Google (or name your untrustworthy party).
I’m not saying leaks are good but trying to price it in seems fraught.
paulddraper 18 hours ago [-]
I think you are vastly overestimating the damage of knowing an email address.
I used to get books dropped off at my door with the names, addresses, and phone numbers of thousands of people. The first two are often public record.
iinnPP 1 days ago [-]
Just because companies are paying X doesn't mean that X isn't a low sum.
Calling 10k an "extraordinarily high sum" is accurate to some and inaccurate to others.
I would bet the groups would differ by perceived personal cost more than the opinion of Google, Apple, and the like. These groups would also probably show distinction where people have been victimized by "identity theft."
The opinions of those bearing the cost are more important here, in my opinion.
ianhawes 1 days ago [-]
You’re significantly underestimating the value of dox-style exploits. The author could have partnered with a black-hat vendor who would offer (for example) $25 per lookup. Or they could’ve done bulk scraping of YouTube channels to get emails and sold the dataset.
It requires some legwork but they could’ve seen somewhere in the ballpark of 6 figures over 1 year if the exploit wasn’t patched.
Oh, and if they had no ethics.
tptacek 1 days ago [-]
Does that black-hat vendor already exist? Do they already sell the service of taking $25 to unmask Google users? What calculation does that vendor do about how many customers they'll get before Google notices? Does the exploit developer get a 50% cut? The black-hat vendor is taking all the risk; seems unlikely. Arranging this whole thing is work; finding the "black hat vendor" is work; not getting caught in the process is work; not getting screwed by your partner is work. You pencil out the numbers and this gets less and less plausible as a way to beat a $10,000 lump sum payment.
I think the reality though is just that there's literally no buyer for this.
You could sell the service yourself! I bet you could make a couple thousand bucks before you and your customers got indicted.
blagie 1 days ago [-]
> not getting caught in the process is work
Caught for what? If someone sells information about a vulnerability, what law are they breaking? In most jurisdictions, unless you're dumb enough to ask questions about whom you're selling to and have active knowledge you're assisting someone in breaking some law, selling to the black market is perfectly legal, at least so long as you pay your taxes.
If you're doing grey market, it's even more legal. If a dictatorship wants to unmask a critic for assassination, and one is selling this information to a government security agency, it's legal by definition.
tptacek 1 days ago [-]
If you sell information about a vulnerability to someone that you know specifically is going to use it to break the law, you are an accessory to that lawbreaking. Ask Stephen Watt how this plays out.
blagie 24 hours ago [-]
Please read my posts more carefully. Virtually every response is non-responsive to what I wrote:
I wrote: "unless you're dumb enough to ask questions about whom you're selling to and have active knowledge you're assisting someone in breaking some law, selling to the black market is perfectly legal"
You wrote: "If you sell information about a vulnerability to someone that you know specifically is going to use it to break the law, you are an accessory to that lawbreaking"
That's the exact same thing.
You, likewise, didn't notice I was advocating for new statutes in a post above.
scarface_74 23 hours ago [-]
Someone gives you two kilos of cocaine, doesn’t tell you what’s in the box, and tells you not to open it while you transport it across the border; when you get to the other side, someone will pay you $20,000.
You get caught by the DEA. Do you think it’s a valid defense “I didn’t ask what was in the box”?
Say the drug dealer you delivered it to got caught and then told authorities you delivered it to them, do you think you would have a valid defense?
aqueueaqueue 13 hours ago [-]
Is that the right analogy? This sounds more like a free speech and free speech exceptions type of issue.
(Commenters keep moving the goalposts making for a complex thread where each node in the tree litigates a very different hypothetical situation. Ah HN!)
Similar to publishing say... the Anarchists Cookbook.
scarface_74 13 hours ago [-]
> unless you're dumb enough to ask questions about whom you're selling to and have active knowledge you're assisting someone in breaking some law, selling to the black market is perfectly legal
This goes directly to the concept of “willful blindness”
The parent is replying to something different (a $25 a pop dox service), just FYI.
jorvi 23 hours ago [-]
Interestingly enough, there is already a brute-force way exploiters have been doxxing YouTubers: bots comment random name combinations on a channel, check if the comment is posted, then immediately delete it. If the comment didn't appear, one of those names is on the blocklist and is probably the YouTuber's name or related to it. Same goes for addresses.
sgjohnson 22 hours ago [-]
> Zero-click kernel code execution with persistence and kernel PAC bypass
This is what baffles me about Apple's bug bounty program.
> $1,000,000: Zero-click remote chain with full kernel execution and persistence, including kernel PAC bypass, on latest shipping hardware. As an example, you demonstrated a zero-click remote chain with full kernel execution and a PAC/PPL bypass with persistence on the latest iOS device.
This is easily worth significantly more. You don't even need to sell it to the black market, sell it to all the 3 letter agencies in the world.
kccqzy 1 days ago [-]
Bug bounty payouts are not effort based. It does not matter how much time it took the discoverer to find the vulnerability. So discussing the amount of work involved is irrelevant; it's not like the kindergarten level "oh you tried so there's a consolation prize for effort". Comparing it against the fixed rate salary of a SWE is even more wrong, except that your argument shows it is more profitable for a hypothetical person relying on bug bounty income to instead join Google as an internal red teamer.
The other comment has already addressed the market value question.
blagie 1 days ago [-]
Unless you can stumble on Google vulnerabilities casually, it's showing quite the opposite -- how unprofitable it is to work from bug bounties.
kccqzy 1 days ago [-]
It's not the opposite. We are in fact not disagreeing. It's unprofitable to work from bug bounties. It is better off for the person to become an internal red teamer.
ant6n 24 hours ago [-]
I wonder where I could get 10K for a week of work. That'd be a nice vacation supplement. ㋡
yieldcrv 1 days ago [-]
Salary and compensation are not synonyms; you used them interchangeably.
zombiwoof 13 hours ago [-]
Clickbait title for an otherwise interesting story.
arajnoha 1 days ago [-]
Haha, the title sounds like you are a blackhat offering emails for $10k.
tomjen3 3 hours ago [-]
It took 147 days to resolve per the timeline in the article.
That seems extremely long for an organisation as technically competent as google.
byearthithatius 19 hours ago [-]
He could have made WAY more money by not disclosing this, and that should scare any Google employee reading this.
zhobbs 17 hours ago [-]
Curious if anyone knows: what's the legality of using a technique like this? I assume it's illegal, even though it's just making publicly available API calls?
alfredosa 7 hours ago [-]
That's impressive
progforlyfe 24 hours ago [-]
Wow, until the very last paragraph for some reason I was thinking that it COST $10,000 to leak the email of any YouTube user, like either a black market cost or purchasing cloud resources =) -- Very nice exploit though!
ProllyInfamous 18 hours ago [-]
I discovered my favorite PI via a geographic subreddit, over a decade ago. The amount of information this retired peace officer procures is always incredibly useful (e.g. litigation, applicant screening).
$100.00 is enough to cause most to blush.
ForHackernews 1 days ago [-]
> That params is nothing more than just base64 encoded protobuf, which is a common encoding format used throughout Google.
Pour one out for the google dev in charge of b64 encoding their fancy binary message format so it can be jammed inside a JSON blob. If you want a vision of the future, imagine a boot with "worse is better" imprinted on the sole stomping on an engineer's face, forever.
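The wrapping being lamented here can be sketched with the stdlib alone. The bytes below just stand in for a serialized protobuf message (real payloads are field-tagged wire format); the point is the extra base64 layer JSON forces onto binary data:

```python
import base64
import json

# Stand-in for a serialized protobuf message (hypothetical bytes).
proto_bytes = b"\x08\x96\x01\x12\x04\x08\xac\x02"

# JSON has no raw-bytes type, so the blob gets base64'd into a string
# field -- roughly the shape of the `params` value in the article.
request_body = json.dumps(
    {"params": base64.b64encode(proto_bytes).decode("ascii")}
)

# The receiving side reverses the wrapping before handing the bytes to
# the actual protobuf deserializer.
recovered = base64.b64decode(json.loads(request_body)["params"])
assert recovered == proto_bytes
```

Base64 also inflates the payload by roughly a third, which is part of why people pour one out.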
caust1c 14 hours ago [-]
It's everywhere and it's the worst. I sometimes ponder whether or not the volume of protobuf bytes represented as b64 encoded protobuf in JSON exceeds that of actual protobuf bytes sent over the wires of the internet, and then I pour one out for myself.
ForHackernews 4 hours ago [-]
JSON's not even that bad if it's gzipped! It compresses relatively well.
If only there were some way to easily send this binary gzip data to this API that accepts JSON...
aleksiy123 1 days ago [-]
Internally, it would be a b64 protobuf in a protobuf field.
The json part is an automatic conversion.
jeffbee 22 hours ago [-]
Why would it be b64 encoded? There's nothing that prevents you from putting an encoded protobuf into a protobuf as `bytes` type. `bytes nestedMessage = 42;` Only delimited message formats like JSON or XML need to encapsulate messages before nesting.
aleksiy123 19 hours ago [-]
Because it is in JSON? Internally it probably is protobuf with bytes.
But the external API is JSON, so it needs to be converted at some point.
Internally, it is (maybe) a binary field of a protobuf.
Then when translating to JSON, it was converted to a string via base64 encoding.
paulddraper 18 hours ago [-]
JSON string of base64 encoded protocol buffer...you don't need to know what company did that to know what company did that.
augbog 18 hours ago [-]
Excellent find and breakdown of the process
ta988 21 hours ago [-]
147 days to fix that is just ridiculously long. This really shows the inefficiency of the whole Google engineering layer cake.
sebstefan 1 days ago [-]
“Applied 1 downgrade from the base amount due to complexity of attack chain required”
The attack chain isn't that complex...
It's very lame to be stingy with a bug bounty program.
renewiltord 21 hours ago [-]
Everyone on HN has this addiction to “vulnerability” like it’s some grand thing and then concocts complicated scenarios: “it could be worth millions; you then use it to find out their Swiss bank account number; and they watch the money go down as you drain it!”
It feels like most software sites are now populated by people who think software is like in the movies.
AutistiCoder 19 hours ago [-]
what if the target has their e-mail set to private?
zoklet-enjoyer 1 days ago [-]
Pixel Recorder is an "old forgotten product"? I have used it at least once a week for years. I used it a bunch yesterday. Very good app. I hope Google doesn't kill it.
einpoklum 18 hours ago [-]
It helps not to have a YouTube account. Less tracking - by Google, by its partners, and by its exploiters.
(Of course, they can still apply more computational work and possibly identify you without you logging in.)
suyash 1 days ago [-]
Question is: is this patched, or does the vulnerability still exist?
croisillon 1 days ago [-]
"09/02/25 - Confirm to vendor that both parts of the exploit have been fixed (T+147 days since disclosure)"
55555 1 days ago [-]
This is a puny payout IMO. If they poked around a bit more they may have found a better GAIA->Email vulnerability or perhaps could just use the one they found. A database of emails for every major youtube channel would be worth an awful lot.
aimazon 1 days ago [-]
Major YouTube channels are typically managed by multiple people through the channel management features and brand accounts. I don't think it's possible to even log in to the brand account (which has a generated email address like channel-000000000000000000000@pages.plusgoogle.com) instead it can only be accessed through an authorized user's account (which are distinct from the channel, i.e: it's not the email address that would be surfaced by this attack). Granted, things have changed over the years, so there may be old channels lingering with Google account linked email addresses, but from what I can tell, all channels were converted a while back.
edit: My hunch is that the channels the OP's attack was able to target are not actual channels but rather YouTube users (who have a "channel" because that's how YouTube represents users): so "YouTube User" is the correct description of this attack, which is distinct from what you're thinking of as a channel.
imdsm 1 days ago [-]
Think this is puny — I found the ability to reveal emails in npmjs.org but as it hadn't been included in the new GitHub/Microsoft bug bounty scope yet, I was given a t-shirt and $1000.
Talk about puny!
KomoD 1 days ago [-]
I think this is puny: I was able to take over accounts on a cybersecurity platform just by knowing their account email and was only paid $200
TheDong 1 days ago [-]
I think this is puny; I can take down almost any site on the internet just by knowing the DNS name, and in exchange all I get is threats of criminal prosecution under anti-DDoS laws
Xcelerate 1 days ago [-]
Do you mind sharing which platform?
davidmurdoch 1 days ago [-]
I was able to run JavaScript inside an email in the GMail app on Android (it required the user tap within the email body). I only got a Nexus 7 tablet.
tiborsaas 23 hours ago [-]
I've discovered I can run JavaScript in the browser and I've got a job :(
imdsm 4 hours ago [-]
I'm so sorry my friend!
croisillon 1 days ago [-]
In an old company of mine they started an intranet, but if you opened it as http instead of https you'd see raw code, including SQL passwords and everything; I reported it to them, to which they replied "yeah just open it with https like everyone else"
tptacek 1 days ago [-]
Serverside vulnerabilities have essentially no market outside of bug bounties. This is a hell of a payout for a web finding.
mosselman 1 days ago [-]
Unless you build a “get email for all of your viewers” service that streamers use to gather emails
tptacek 1 days ago [-]
And then Google notices, kills the bug, and comes after you. Meanwhile, each of those streamers is criminally liable. Sounds like a great business!
astrange 15 hours ago [-]
I once reported a way to see anyone's gift registry shipping address on Amazon.com and they paid me $0 because they don't have a bug bounty. (But they did fix it.)
croisillon 1 days ago [-]
the burden of being conscientious, i guess
ajross 1 days ago [-]
What would an appropriate payout be? I mean, the classification ("high exploit probability, abuse-related impact") seems about right to me. Are you saying that abuse bugs should be more valuable? That all bugs should pay more? That this is a rich company so they should pay more?
> If they poked around a bit more they may have found a better GAIA->Email vulnerability
They still can! Report more bugs, get more bounties. I don't see how this is related to how much they paid for this one.
> A database of emails for every major youtube channel would be worth an awful lot.
It's pretty clear from the article that you can't use this API to scrape at that kind of volume. This kind of thing was never in the offering. As the title says, you can leak "any" email, not "every" email.
xyst 1 days ago [-]
A database of every YT user then x-referencing them with public services (fb/ig/twitter). Build shadow profiles, sell db to highest bidder.
Or just plain ole pwning them. Most users still tend to use the same password across different services, not use 2FA, and involved in at least 1 high profile leak (I know I’m in at least a dozen so far per haveibeenpwned).
Occasionally you get the victim that uses that same password for their e-mail service and that can allow you to bypass e-mail 2FA if enabled. Even better if the account is used for social SSO (ie, Google, Facebook, Twitter). Then you have access to a treasure trove of services; or just delete them for lulz
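The password-reuse check described above can be done safely against Have I Been Pwned's public Pwned Passwords range API. This is a minimal sketch (the function names are my own, not an official client); it uses the API's k-anonymity scheme, so only the first five characters of the SHA-1 hash ever leave your machine:

```python
import hashlib
import urllib.request

def hash_split(password: str) -> tuple[str, str]:
    """SHA-1 the password and split the hex digest into the 5-char
    prefix sent to the API and the suffix checked locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_response(body: str, suffix: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines the range endpoint returns."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def times_pwned(password: str) -> int:
    """Number of times this password appears in known breaches."""
    prefix, suffix = hash_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return count_in_response(resp.read().decode(), suffix)
```

Only `times_pwned` touches the network; the hashing and response parsing are split out so they can be checked offline.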
yieldcrv 1 days ago [-]
On one hand, I don't really see the hack here. They didn't get access to anyone's email account, just a potential privacy leak.
On the other hand, a spearphishing campaign could be valuable. Or you could launch a memecoin on someone's account and make millions.
andrewstuart 1 days ago [-]
$10,000 ain’t much for that.
mschoch 1 days ago [-]
google insiders will leak for considerably less, no exploit needed
riffic 23 hours ago [-]
the date formats in the Timeline section trigger me for some reason.
michpoch 1 days ago [-]
Am I very naive expecting the payout to be significantly higher?
tgsovlerkhgsel 1 days ago [-]
Yes. Bug bounties aren't that high. For an issue that does so little (leaking an identity vs. e.g. giving access to an account or remote code execution), I'd actually consider that a surprisingly high amount, and I would expect that many companies wouldn't consider this class of bug a bounty-worthy issue at all - "thanks for the report", maybe fix it maybe not, but no bounty.
ec109685 1 days ago [-]
Interesting that the bounty amount went down due to how obscure the attack vector was.
cesarb 1 days ago [-]
To me, that payout felt quite high; it's bigger than the average monthly salary for a senior IT professional where I live. To put it another way, that bounty alone would be like being paid for several months of full-time employment.
22 hours ago [-]
SXX 1 days ago [-]
Unfortunately, high payouts are just not how the cybersecurity industry works. Instead of high payouts you get vanity and a better chance at a well-paid job.
immibis 1 days ago [-]
You can get a high payout if you're also willing to risk your life. Companies are relying on researchers doing the ethical thing instead of the profitable thing.
neilv 1 days ago [-]
$10k seems too small, for discovering a bad security mess-up by employees each getting paid 20 to 70 times that amount (or more).
ej1 20 hours ago [-]
[dead]
15 hours ago [-]
kensai 1 days ago [-]
I hear heads rolling...
billpg 1 days ago [-]
It's (channel-name)@gmail.com
I'll take a cheque.
doctorhandshake 1 days ago [-]
Is it me or are all the dates in this timeline in the future? Isn’t it Feb 2025 now? Do you smell toast?
I have no idea why America settled on MM/DD/YY, which seems like absolutely the least intuitive permutation of D, M, and Y. Except perhaps MYD.
voytec 1 days ago [-]
For a person born and raised with the metric system, MM/DD/YY is just as bonkers as if someone decided that MM:HH:SS makes sense for time.
ars 18 hours ago [-]
Year month day makes sense. Month day year makes sense because that's how people talk: I'll be there February 5, etc.
Day month year makes no sense because it's backwards, and no one talks that way. So why use that?
voytec 18 hours ago [-]
> Month day year makes sense because that's how people talk: I'll be there February 5, etc.
People also say "twelve past two" and yet you don't use 12:2:SS.
ars 18 hours ago [-]
People only say "twelve past two" when they want to be formal and awkward.
GuB-42 1 days ago [-]
It seems like it comes from a British convention of writing dates as "February 12, 2025" instead of the now more common "The 12th of February, 2025".
Like US customary units: imported from the British, but the UK modernized its system and the US did not.
natebc 1 days ago [-]
I've always thought it was because we say:
February Twelfth Two Thousand and Twenty Five
Feb 12 2025
02/12/2025
I know it's cool for Europeans ... and everyone else to hate on us for it but it does seem to make sense given the way we typically say the date.
pcthrowaway 1 days ago [-]
Canada uses this also, though we also use day-month-year and year-month-day.
Yes, this effectively makes dates nearly impossible to decipher here.
iinnPP 1 days ago [-]
I have always dated things with months spelled for this reason. Except where the format is clearly defined, which is fairly common and likely for the same reason.
doctorhandshake 24 hours ago [-]
Agreed - MMDDYY is truly unintuitive, but even DDMMYY is ambiguous if it's early enough in the month.
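The ambiguity is easy to demonstrate: any numeric date with a day of 12 or lower parses cleanly under both conventions and yields two different dates. A quick Python illustration (ISO 8601 output shown to disambiguate):

```python
from datetime import datetime

raw = "02/03/2025"  # February 3rd? March 2nd? Depends who wrote it.

as_mdy = datetime.strptime(raw, "%m/%d/%Y")  # US reading
as_dmy = datetime.strptime(raw, "%d/%m/%Y")  # most-of-the-world reading

print(as_mdy.strftime("%Y-%m-%d"))  # 2025-02-03
print(as_dmy.strftime("%Y-%m-%d"))  # 2025-03-02
```

Both parses succeed with no error, which is exactly why a bare dd/mm vs. mm/dd string can't be trusted without knowing its origin.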
SigmundA 1 days ago [-]
To me it sounds better and more correct to say:
February 12th, 2025
Rather than:
12 February 2025
And is easier to say than:
The 12th of February 2025
So it's always been natural to write the numeric form the same way, but I am American. I can appreciate year first being easier to sort by machines and having an agreed-upon international standard.
mdiesel 22 hours ago [-]
Just like the 4th of July, that most American of days
SigmundA 20 hours ago [-]
"The 4th of July" is more formal sounding, so it makes sense for the holiday, but many just say "July 4th" more informally when referring to the holiday.
Again, it's grammatically easier and shorter to say "month day" vs. "the day of month".
mattlondon 1 days ago [-]
Day-Month-Year is the standard everywhere in the world apart from the US.
Big-endian and little-endian dates are the only ways that make sense, I think. Doing it the US way, where the day inexplicably sits between month and year, just feels corrupted to my mind.
cesarb 1 days ago [-]
> Day-Month-Year is the standard everywhere in the world apart from the US.
IIRC, Japan uses Year-Month-Day, which is the other order which makes sense.
duohedron 1 days ago [-]
Not standard in China, Japan, Hungary, Mongolia, South Korea, Taiwan, and of course ISO 8601.
ant6n 24 hours ago [-]
> Day-Month-Year is the standard everywhere in the world apart from the US
Nope, the standards are day.month.year, year-month-day, or month/day/year. The problem happens when the delimiter doesn't match the ordering.
cesarb 23 hours ago [-]
Using / as the delimiter with day/month/year is also very common. Here in Brazil, dd/mm/yyyy (or sometimes dd/mm/yy, which used to be more common before year 2000) is the standard.
ant6n 20 hours ago [-]
And then problems happen!
hennell 1 days ago [-]
DMY is the most common format internationally. There's a growing move (and an ISO standard) for YMD, but it's a slow change; I think it's only North America that uses MDY.
Why is it always Americans who get caught being seemingly unaware of the rest of the world? Is it because we all speak their language? I've never seen it with Britons or Australians.
mattlondon 1 days ago [-]
Date format used everywhere apart from US.
eru 1 days ago [-]
Not everywhere. In Asia you find a lot of YYYY-MM-DD and similar.
But seriously, America is awesome for rich people if you don't mind living in a poor, third-world country that still believes it's a first-world, exceptional country.
https://en.m.wikipedia.org/wiki/Endianness
That's hilarious.
Every time I see a service purporting that it works best only with a single link to your Real Identity™, I'm reminded that the vendors only abstractly care about actually protecting the user, and then only sometimes.
Imagine being able to get immediately three or four steps closer to doxxing anyone interacting on YouTube. That's the actual impact of this bug, IMO. It's good that this was fixed, but I don't think this class of bug goes away anytime soon. What do we need to do to get vendors and big companies to realize that this sort of design is a landmine waiting to happen?
I abstractly agree with you. There is a level of obscurity and disposability that should be tolerated in these accounts. They’re just a row in a database somewhere anyways.
That said, many people transact with these businesses with real human money. For example, YouTube premium subscribers or content creators. From a practical perspective, that requires IRL identifiers to be stored somewhere with that otherwise disposable account. And due to fraud risks and other realities of banking, that requires giving these businesses actual identities and addresses which they store too.
While I don’t give random apps and websites my human-identifying information, anyone I do business with necessarily knows the real me, which is a theoretical point of data leaking.
Certainly not theoretical. You can be certain that nearly every company who knows your identity has leaked/sold it to others in one fashion or another.
Try and leak some medical data as a medical services provider. You will get your ass handed to you.
* Valuations for server-side vulnerabilities are low, because vendors don't compete for them. There is effectively no grey market for a server-side vulnerability. It is difficult for a third party to put a price on a bug that Google can kill instantaneously, that has effectively no half-life once discovered, and whose exploitation will generate reliable telemetry from the target.
* Similarly, bugs like full-chain Android/Chrome go for hundreds of thousands of dollars because Google competes with a well-established grey market; a firm can take that bug and sell it to potentially 6 different agencies in a single European country.
* Even then, bounty vs. grey market is an apples-oranges comparison. Google will pay substantially less than the grey market, because Google doesn't need a reliable exploit (just proof that one can be written) and doesn't need to pay maintenance. The rest of the market will pay a total amount that is heavily tranched and subject to risk; Google can offer a lump-sum payment which is attractive even if discounted.
* Threat actors buy vulnerabilities that fit into existing business processes. They do not, as a general rule, speculate on all the cool things they might do with some new kind of vulnerability and all the ways they might make money with it. Collecting payment information? Racking up thousands of machines for a botnet? Existing business processes. Unmasking Google accounts? Could there be a business there? Sure, maybe. Is there one already? Presumably no.
A bounty payout is not generally a referendum on how clever or exciting a bug is. Here, it kind of is, though, because $10,000 feels extraordinarily high for a server-side web bug.
For people who make their nut finding these kinds of bugs, the business strategy is to get good at finding lots of them. It's not like iOS exploit development, where you might sink months into a single reliable exploit.
This is closer to the kind of vulnerability research I've done recently in my career than a lot of other vuln work, so I'm reasonably confident. But there are people on HN who actually full-time do this kind of bounty work, and I'd be thrilled to be corrected by any of them.
Not to mention not really thinking through how obviously stupid it is to immediately compare a legal activity to a highly illegal one, as if they're real alternatives for most people.
If we apply your analysis to other things, we’ll find that the upper bound price for a new car stereo or bike is ~ $100, and the price of any copyrighted good is bounded by the cost of transferring it over the network.
I think it is more useful to divide the amount Google paid by the number of hours spent on this and any unsuccessful exploit attempts since the last bounty was paid.
I’d guess that the vast majority of people in this space are making less than US minimum wage for their efforts, with a six figure per year opportunity cost.
That tells you exactly how much Google values the security and privacy of its end users. The number is orders of magnitude lower than what they pay other engineers to steal personal information from that same group of people.
> If we apply your analysis to other things
This analysis doesn't work for a few reasons:
* For physical goods, used items always fetch a lower price than new items due to unrelated effects. And if we're only looking at the used price, we do find that the black market price is just about equal to the used item's value minus the risk associated with dealing with stolen goods (unless the buyer is unaware of the theft, in which case the black market value is the same as the used value).
* For both physical and digital goods, there are millions of potential customers for whom breaking the law isn't an option, creating a large market for the legal good that can serve to counter the effect of the black market price. This isn't true of exploits, where the legal market is tiny relative to the black market. We should expect to see the legal market prices track the black market prices more closely when the legal market is basically "the company who built the service and maybe a few other agencies".
This is only true under certain circumstances. If there are supply chain issues, used prices can go up and over the list price. The most extreme (and obvious) example I've seen is home gym equipment during the Covid lockdowns, particularly for stuff like rowing machines.
The other potentially less obvious example is seen in countries that don't have a local presence or distributor for a given item, and the pain and slowness of importing leads to local used prices being above list price.
One other potentially interesting semi-related point: prices for used items can sometimes increase in unexpected ways (excluding obvious stuff like collectables, art, antiques etc). In the UK, the used price for a Nissan Leaf EV started increasing with age after the market realised that fears about their battery failing ~5 years into ownership were unfounded urban myths, and repriced accordingly.
The comment you're replying to isn't referring to list price, they're referring to the price of a new item.
Supply chain issues, as we saw during COVID, affect the cost of new items by making them effectively infinite: if there are only 100 new rowing machines available and 1000 people want them, then for 900 people, the list price of a new rowing machine is irrelevant because they can't actually buy it at that price.
If security researchers want to have stable employment doing this sort of work, there's oodles of job applications they can send out.
So, the value to the researcher of having a found bug has a floor of the black market value.
The value to Google is whatever the costs of exploitation are: reputational, cleanup, etc.
A sane value is somewhere between these two, depending on bargaining power, of course, and Google has nearly all the bargaining power. On the other hand, there's a point where you feel like you're being cheated and you'd rather just deal with the bad guys instead.
Bounty programs are very much not trying to compete with crime.
I don't know how much it should be worth, but at least there's a PR effect and it's also a message towards the dev community.
I see it the same way the ridiculously low penalties for massive data breaches taught us how much privacy is actually valued.
For security researchers it's apparently obvious, but from the outside it's another nail in the coffin of how we want to think about user data (especially creators, many being at the front line of abuse already). As you point out Google here is only the messenger, but we'll still remember the face that delivered the bitter pill for better and worse.
How many young computer enthusiasts / aspiring security researchers are motivated to learn more because they see what to them are massive payouts?
You or I might not get out of bed for the hourly rate that translates to, fine by me - I have a job that pays the figure I negotiated.
Bug bounty programs pay the market clearing rate, always. One bug, two market participants, one price.
If my bug bounty is $10,000 and I can sell it for $20,000, then most people will take the legitimate cash. If it's $10,000 and some black market trader will pay $10,000,000 (obviously exaggerating), then a whole mess of people are going to take the ten million.
* Are you talking to someone legitimately interested in purchasing and paying you, or is this a sting?
* If you're meeting up with someone in person, what is the risk that they won't bring payment, or will try to attack you?
* If you're meeting with someone in person, how do you use $20k in cash without attracting suspicion? How much time will that take?
* If it's digital, is the person paying you or are the funds being used to pay you clean or the subject of an active investigation? What records are there? If this person is busted soon will you be charged with a crime?
There are a lot of unknowns and a lot of risks, and most people would gladly take a clean $10k they can immediately put in the bank and spend anywhere over the hassle.
This is another reason why the distinction between well-worn markets (like Chrome RCEs) and ad-hoc markets is so important; there's a huge amount of plausible deniability built into the existing markets. Most sellers aren't selling to the ultimate users of the vulnerabilities, but to brokers. There aren't brokers for these Youtube vulnerabilities.
Legally, in most places of the world it isn't.
Morality differs among people too. Profiting off a trillion dollar company will not cross the line for a lot of people.
Almost everyone, even people without a moral sense, have a self-preservation sense- "How likely is it that I will get caught? If I get caught, will I get punished? How bad will the punishment be?" and these factor into a personal risk decision. Laws, among having other purposes, are a convenient way to inform people ahead of time of the risks, in hopes of deterring undesirable behavior.
But most people aren't sociopaths and while they might make fuzzy moral decisions about low-harm low-risk activities, they will shy away from high-harm or high-risk activities, either out of moral sense or self preservation sense or both.
"Stealing from rich companies" is a just a cope. In the case of an exploit against a large company, real innocent people can be harmed, even severely. Exposing whistleblowers or dissidents has even resulted in death.
How much time do you spend asking yourself whether your paycheck is coming from a source that causes harm? Or whether the code you have written will be used directly or indirectly to cause harm? Pretty much everyone in tech is responsible for great harm by this logic.
https://news.ycombinator.com/item?id=42540862#42542151
Most will just take the 500k paycheck and work at whatever the next big tech thing is.
There's some chance that thing is autonomous drones or something like that...
> Pretty much everyone in tech is responsible for great harm by this logic.
We're also responsible for great good. The question which is greater is tricky, case-by-case and subjective.
If Mr GRU asks, I probably say no.
If the CIA, Mossad or BND asks, maybe I say yes? It’s not clear for a person with a better moral compass than mine.
I wish developers (and their companies, tooling, industry, etc.) creating such flaws in the first place would treat the craft with a higher degree of diligence. It bothers me that someone didn't maintain the segregation between display name / global identifier (in YouTube frontend*) or global identifier / email address (in the older product), or was in a position to maintain the code without understanding the importance of that intended barrier.
If users knew what a mess most software these days looks like under the hood (especially with regard to privacy) I think they'd be a lot less comfortable using it. I'm encouraged by some of the efforts that are making an impact (e.g. advances in memory safety).
(*Seems like it wouldn't have been as big a deal if the architecture at Google relied more heavily on product-encapsulated account identifiers instead of global ones)
> Bounty programs are very much not trying to compete with crime.
Nor did my post posit this.
Bounty programs should pay a substantial fraction of the downside saved by eliminating the bug, because A) this gives an appropriate incentive for effort and motivates the economically correct amount of outside research, and B) it will feel fair and make people more likely to do what you consider the right thing, which is less likely if people feel mistreated.
Is there any evidence that OP feels that this payout was unfair?
No, but Google should understand that if they give a token payment, people will be less likely to help in future situations like this. And might be inclined to just instead tell ad buyers about the loophole quietly.
Imagine a possible downside or two, imagine a probable risk, multiply, discount.
Large scale data leak and need for data leak disclosure. 1 in 3, moderate cost.
Bug report saving engineering time by giving clear report of issue instead of having to dig through telemetry and figure out misuse and then identify what is going on, extents of past damage, etc. 3 in 4.
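That back-of-envelope logic (probability of each scenario times its cost, then a haircut for uncertainty) can be written out directly. The probabilities below come from the comment; the dollar figures are purely hypothetical placeholders, not estimates of Google's actual exposure:

```python
# Each scenario: (probability it plays out, hypothetical cost/value averted).
scenarios = {
    "large-scale leak + disclosure costs": (1 / 3, 2_000_000),
    "engineering time saved by a clean report": (3 / 4, 50_000),
}

# Expected value averted by the bug report, before discounting.
expected_value = sum(p * cost for p, cost in scenarios.values())

discount = 0.5  # haircut for the uncertainty in all of the above
print(round(expected_value * discount))
```

Even with a heavy discount, numbers like these land far above a $10k payout, which is the point the comment is gesturing at.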
Millions of usernames and emails are leaked every month; if this were the case you'd be seeing these murders in the news every week.
Yes, because all possible scenarios kill the same fraction of people-- whether we're talking about getting a dump of a million email addresses or giving some nutjob a chance to unmask people he doesn't like online.
It sure has worked out pretty much like this for music. The cost is not exactly zero, but pretty close to that.
What you’re saying can be seen as tautological. The reason a gray/black market exists is precisely because the field is undercompensating (aka in disequilibrium)
They're buying exclusive access to some information, which is a somewhat unusual thing to pay for.
News reporters do take spicy stories to tabloids, rather than the normal press, as the tabloids will pay more.
Anyway, I’m not 100% sure what they meant by grey market. It looks like they were talking about maybe selling to “agencies” which, I guess, could include state intelligence agencies. If that’s what they meant, it wouldn’t be that surprising to find that the black market and grey market prices influence each other, right?
I mean we could ask our intelligence agencies why they are shopping in the same markets as criminals but I guess they will say something like “it is important that we <redacted> on the <redacted>, which will allow us to better serve the <redacted> and keep the <redacted> safe.”
I mean, the technical skills in the article here are basic. But the first finding was significantly good luck, and having the background to know to look towards old Google services for the ID to email part was non-obvious. You would need a lot of high-quality, guiding knowledge like that to make bug bounties work. Still, seems like a very high starting cost.
The dollar value of a responsible report going up means more responsible disclosure overall and fewer problem leaks, exploits, etc.
I would be equally happy to see any solution where the end result is increased security and privacy for everyone, even at zero bounty.
The problem being overlooked is that the actual cost of these exploits and bugs is paid by the people who had no say whatsoever in any matter regarding the issue. Any time a company is being "cheap" at the expense of regular people is a bad time, from my perspective.
Google has the power to limit the exposure of the people who use their products (and this isn't always voluntary exposure, mind you) and is choosing to profit a teeny tiny bit more instead. At no immediately obvious cost to them, so why not?
Does it? I just had a bug bounty program denied for budget approval at my work because of the cost of the bounties and the sufficiency of our existing security program. On the margins, it's not clear to me that the dollar value of a report going up is incentivizing better reports vs pricing smaller companies out of the market.
It may work kind of how employment works, where Google can afford to pay more than a company that cannot afford a 10k bounty.
Google paying a 10k bounty is the equivalent of the bottom 10% of earners in the US paying a sixth (napkin math) of a soon-to-be-discontinued penny.
Regardless, you are correct that the calculation is not obvious, unlike how I presented it. Preferably, things like multiple million character titles are handled correctly and no bounty is paid at all. I expect a smaller company to have an easier time here as well, lessening the financial burden.
Why would you expect that? In a smaller company the ratio of developers to HTTP endpoints tends to be substantially lower (fewer devs per feature) than in a large company, so I'd expect the opposite.
I think it's in everyone's interest for bug bounties to be higher than harmful markets for the same bug, and a decent fraction of the harms they prevent. That's what is going to result in the economically efficient amount of bug hunting. And it's going to result in a safer world with less cybercrime.
I really think people just like to think about stories where someone like them finds a bug and gets a lottery jackpot as a result. I like that story too! It's fun.
Smart companies running bug bounties --- Google is probably the smartest --- are using them like engineering tools; both to direct attention on specific parts of their codebase, and, just as importantly, as an internal tool to prioritize work. This is part of why we keep having stories where we're shocked about people finding oddball security- and security-adjacent bugs that get zero payouts.
Increasing bounties by a small factor will be enough to reduce things on the grey market and to increase the ROI of people choosing to do freelance security research. The time between payoffs is enough that no one is going to get rich from $150k bounties.
Don't forget the extrinsic benefits: easier to brag about bounties on your resume than selling things into the grey market.
> Smart companies running bug bounties --- Google is probably the smartest --- are using them like engineering tools; both to direct attention on specific parts of their codebase, and, just as importantly, as an internal tool to prioritize work.
These "smart" companies should consider just how cheap even higher bounties are to prevent massive downsides. Of course, an underlying problem is how well these companies have insulated themselves from the consequences of writing and not fixing vulnerable software. A sane liability (and insurance) regime would go a long way towards aligning incentives properly.
P.S. a lot of time your writing comes off as having a smug tone that rubs me the wrong way.
Actually, I already won a small lottery jackpot doing security stuff. Then a large one doing security stuff. Then a small one again doing other stuff. I could have retired a couple of decades ago, but now I'm a schoolteacher for the funsies. My days of scrunching over IDA Pro for pennies are over: I've got no personal direct interest in whether research gets paid more or less.
I just think that bug bounties are a good thing, but by being underfunded and with uneven quality of administration a lot of the potential benefit is left on the table.
I guess bounties fit into the framework somewhere between the Github and middle class engineer.
I think it comes down to supply and demand. It also shows what Google would pay employees if things were in their favour. In unrelated news, a tech billionaire is almost de facto VP of the US.
Isn't there a market for this? For example, "Reveal who is behind this account that's criticizing our sketchy company/government, so we can neutralize them".
I'll also argue there's separate incentives, than the market value to threat actors... Although a violent stalker of an online personality might not be a lucrative market for a zero-day exploit for this "threat actor" market, the vulnerability is still a liability (and ethical) risk for the company that could negligently disclose the identity of target to violent stalker.
IMHO, if you're paying well a gazillion Leetcode performance artists, to churn out massive amounts of code with imperfect attention to security, then you should also pay well the people who help you catch and fix their gazillion mistakes, before bad things happens.
First, there are only very few govs/companies sketchy enough to do this, and for those, a huge number of non-anonymous people with huge reach have been very critical of them for years. If such a market existed, they would assassinate all those first; you don't need the email if you have the face, voice, and name. Since that is not happening, they just don't care that much about it.
The likes of Cambridge Analytica didn’t go away, they exist and absolutely go hunting for data like this.
The ability to map between different identifiers and pieces of content on the internet is central to so many things - why do you think adtech tries to join so many datapoints? Let alone things like influence campaigns for political purposes.
I’m not talking about assasination plots, but more mundane data mining. This is why so much effort in the EU has gone into preventing companies from joining data sources across products - that’s embedded in DMA
Sure is funny there's nobody doing that despite so many people being so dead certain there's an active market.
> [...] a bug that Google can kill instantaneously, that has effectively no half-life once discovered, and whose exploitation will generate reliable telemetry from the target.
You can't set up unmask-as-a-service because it's going to take you longer to get clients than it will take Google to shut down your exploit.
1. It can still take a while before Google finds out
2. You can log every mapping you got in the meanwhile, then keep selling the ones you already have
Edit: although probably most of your business will be over when word gets out that your data isn’t exactly legal (which your clients have understood from the start, of course; they could just plead ignorance)
So let's suppose that you did set up the service like this. Can you even make $10k? What are your odds of getting caught? How much do you value not being in prison and/or not having to hire a lawyer to get you out of there?
I'd take the 10k every time.
It’s a lot more work, of course, but you can scrape some top youtubers first as it seems relatively easy. If you can pull this off you can then try and figure out how to legitimize your offering – I won’t go into details here, for obvious reasons, but now that you have something valuable on your hands it makes sense to spend some time/money on selling that.
I’m not speaking theoretically, which I suspect most on this thread are.
Even if someone on telegram was telling me that Russia would buy this information for $100,000, I think I would reach out to Google and "settle" for $10k.
The scraping was def in violation of the EULAs. Product data is one thing, but I believe this group was combining it with other sources and selling the identities and context as a bundle.
"The Dark Net" – Jamie Bartlett
"We Are Anonymous" – Parmy Olson
"Future Crimes" – Marc Goodman
"Kingpin" – Kevin Poulsen
You missed their point about the business model of the security researchers here: their business model is finding a large number of small value vulnerabilities. Those who are good at this are very very good at this.
My company has a bug bounty program and some of the researchers participating in it make double or more my salary off of our program, but we never pay out more than this for a single report. And it's not like we're particularly vulnerable, we just get a steady stream of very small issues and we pay accordingly.
Their last paragraph shows that they didn't understand your paragraph here:
> For people who make their nut finding these kinds of bugs, the business strategy is to get good at finding lots of them. It's not like iOS exploit development, where you might sink months into a single reliable exploit.
I think I understood. The last paragraph of mine that you cite was speaking of the creator of the bugs, not the discoverer.
The liable party should be investing reasonably towards non-negligence. (Especially in the context of spending billions of dollars each year on oft-misaligned headcount that's creating many of these liabilities.)
I'm not talking about the company optimizing for the minimal amount they think they can get away with paying to try to cover their butt. Nor am I talking about how white/gray-hat researchers adapt viable small businesses to that reality.
I will say: at Matasano, we were once asked by an established security company that turned out to be a broker to find PHPBB vulnerabilities.
Am I misunderstanding the bug? In my reading, this bug translates to "a list of the top 1,000 Youtube accounts' email addresses (or as many as you can get until Google detects it and shuts it down)." Why isn't that conceivably worth more than $10,000?
Our emails get leaked all the time in data breaches, sometimes alongside much more important information such as home addresses.
This was certainly a bad leak that could be used to further dox people by connecting the email to other leaked info or other sources, but from Google's perspective, all they did was leak the email.
It was a privacy breach for sure.
But further doxxing based on the email would be "not their problem" I suspect they would say.
There are certainly bad things that CAN be done to a number of people with this information when it's a personal email address used for numerous purposes... but the three people I talked to who have YouTube (or any streaming) accounts all mentioned using a separate account for it.
So the only threat I can see in most cases is better phishing attempts, which isn't necessarily an easy money maker... unless they can steal the entire account? It's nearly impossible to get support from Google, so it's quite possible you could change the bank info and collect a month or two of payments before someone gets in the loop to stop it. Realistically, though, the more money someone is making on YouTube, the less likely they are to have trouble contacting someone at Google through a side channel, and the less likely the leaked address is a personal one that reaches the actual star of the channel. So the more popular the person, the less valuable the email address.
For example, MrBeast has this in the video description:
> For any questions or inquiries regarding this video, please reach out to chucky@mrbeastbusiness.com
The vulnerability here is that you can find the exact email address tied to their YouTube account, which you can't really do anything with if they have strong passwords and use 2FA.
I just don't think Russia would be willing to pay $100,000 to get Mr. Beast's email address, even if that sounds tempting to you.
The exploit can be valued at: number of emails * probability that you'll phish them into letting you in * value of posting a "Free Robux" scam on a channel with 100M subscribers.
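As a back-of-envelope sketch of that valuation (every number below is an invented placeholder, not a figure from the thread):

```python
# Rough expected-value model for the exploit, with entirely made-up inputs.
num_targets = 1_000            # top channels you could plausibly enumerate (assumed)
phish_success_rate = 0.01      # fraction you could phish past 2FA (assumed)
payout_per_takeover = 5_000    # value of one scam run on a big channel (assumed)

expected_value = num_targets * phish_success_rate * payout_per_takeover
print(f"${expected_value:,.0f}")  # → $50,000
```

Under those (generous) assumptions the exploit is worth tens of thousands, not millions, which is roughly what the bounty paid.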
I feel like you are just taking into account the theoretical max value of a bad actor having these accounts, not the cost/risk of using this knowledge.
I could have the master key of a bank safe with 100MM worth of gold in the basement, but its value is going to be nowhere near that, even to bad actors.
If it exposed passwords as well then that would be worth a lot more, but a list of email addresses is not the most valuable of things on its own.
As explained by the parent comment, because there isn't a market for it. It's a novelty. Who are you going to sell that exploit to? At this time, nobody. Since Google doesn't have to compete against others for the bug, it pays low.
Those may be non-public email addresses (admin/billing emails), so the phishing potential is higher than emailing prteam@mrbeast.com (or whatever).
Sure the gray market will pay more, but how do you contact criminals and make sure that you actually receive payment?
I know nothing about the market, but I think it's similar to buying drugs - we all know that drugs are everywhere and criminals are making a ton of money out of it, but if you haven't been introduced before how do you actually buy them? Go to a club and start asking random people?
(that last part might be different in the US, but in the EU we don't have people standing on every corner selling cookies)
How big of a threat is it/what impact will it have on business/reputation/etc.?
How likely is it to be exploited and how widely would it be considered useful to the market of threat actors?
- monetize the bug themselves; i.e. set up a site where you can submit a YouTube user id, pay some fee using your credit card and get an e-mail address.
- report that they have the ability to convert any YouTube id to an e-mail, with proof: then negotiate over compensation for the disclosure of the details
- just report the problem and be happy with whatever they get.
Ten grand doesn't look too bad for the most timid choice.
For #1, as tptacek says, it would be trivially easy for Google to shut a service like that down as soon as it was created, and prosecute the people running the service under the CFAA. Also, the amount of demand for that kind of data is pretty small given the number of email address databases already available online through legal means (e.g. Zoominfo, RocketReach, etc). It's a path filled with a lot of risk and not a ton of reward.
For any other kind of vulnerability, you're not so much "selling a product" as you are "helping plan a heist".
Do you really want to know what the FSB plans to do with your exploit?
Absolutely, yes. Spam and targeted phishing attacks are in high demand.
My understanding is that it is possible to retrieve every public YouTube channel ID (if not also those of Google Maps/Play reviewers) quite easily. This exploit could have been used to create a massive, near-complete database of every Google account that has automatically had a YouTube account created for it.
Massive email databases are extremely cheap, often free. For this vulnerability to be worth more than $10k there would have to be something about it being a near-complete library of Google accounts (rather than just another massive mailing list).
And that's assuming the prospective buyer believed that they could exploit this vulnerability in full before discovery. If I'm reading this exploit right, each email recovered requires two requests, one of which needs to make one of the fields 2.5 million characters long in order to error out the notification email sent to the victim. Presumably that email sending error would show up in a log somewhere, so the prospective attacker would have to send billions of requests fast enough that Google can't block them as suspicious or patch the vulnerability, all the while knowing full well that they're filling up an error log somewhere and leaving an extremely suspicious pattern of megabyte-sized request bodies on a route that normally doesn't even reach kilobytes.
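To put rough numbers on how slow and conspicuous a full scrape would be (the account count, request rate, and 1-byte-per-character payload size are all my assumptions, not measured values):

```python
# Back-of-envelope: what a full enumeration would look like on the wire.
accounts = 3_000_000_000        # assumed number of accounts to enumerate
requests_per_account = 2        # per the write-up: two requests per email
req_rate = 10_000               # assumed sustained requests/second before being blocked

total_requests = accounts * requests_per_account
days = total_requests / req_rate / 86_400
payload_pb = accounts * 2.5e6 / 1e15   # one ~2.5 MB body per account, in petabytes

print(f"{days:.1f} days, ~{payload_pb:.1f} PB of request bodies")
```

Roughly a week of sustained 10k req/s and petabytes of anomalous megabyte-sized bodies on a route that normally sees kilobytes: hard to imagine that evading detection.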
I'm honestly not seeing how you could make an email list out of this that is anywhere near complete, and even if you could I'm not sure where the value to it would be.
There are different qualities of email databases. "Known real email by Youtube account holders" would be a high value database. Definitely not free.
This type of vulnerability is extremely valuable for private investigators, too. "Who uploaded this video which my client is extremely interested in?"
Would exploiting this vulnerability violate the Computer Fraud and Abuse Act? If so, would a private investigator really want to do that?
https://www.stltoday.com/news/local/government-politics/pars...
That database only exists in theory, based on extrapolation of this vulnerability to billions of individual exploits, and I think we can all agree that Google would detect this activity and shut it down.
Hence, that database might fetch a decent price if it existed, but it doesn't.
Exploits need to plug into a business plan. Like any business plan there has to be somewhere that money gets extracted and that money needs to be more than the exploit cost & infrastructure costs & a risk premium.
If you can’t trivially say how the exploit explicitly gets turned into cash you probably are on the wrong track. Doubly so if it’s not a known standard and commoditized way that’s happened before.
Most recent example I've seen: https://www.youtube.com/watch?v=EnVxWK6DfMQ
So for some channels that provided no contact information, you now can acquire an email address, and for everyone else you may now get an additional one.
It also enables you to link multiple channels back to the same person.
Every bit of information you can get your hands on counts for social engineering attacks.
For very famous individuals this may also open them up to harassment. You can't find Elon Musk's private telephone number on the Tesla homepage for good reason. For that class of people, any time that sort of information leaks, they need to get a new private phone number/e-mail address.
I'm not sure that there are terribly many black market opportunities for "every bit of information" such that this should be a six figure payout or whatever.
- Regime critics with a channel on YT.
- Vulnerable individuals and others trying to keep their identity a secret. Putting yourself on YT means putting yourself in front of every deranged individual out there.
- Trump quite famously runs some of his own social media accounts personally, for better or for worse. And even where he doesn't, he probably retains ultimate control - in the case of YT it might be his personal google account that created the channel. He's probably not the only high value target to do so.
Also, if you happen to be in any data leak, being able to figure out your private e-mail address gives attackers another place to check whether you re-used a password.
For any vuln you can make up a hypothetical one-off usage. But finding the right buyer for that is effectively building a team à la The Great Muppet Caper.
You take out your handy email list and run a regex to find candidate accounts that match “J Smith”. You pipe matches into a recon script to check if github and discord accounts exist for each email. Suddenly, you’ve got a small pool of matches. You try more account-existence recon to find all the sites they’re signed up on. You look up all breached creds tied to the target emails, then run cred stuffing against any sensitive services they’ve signed up for.
Boom, you’ve gone from first initial + last name to compromising an account in thirty minutes.
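The first step of that pipeline, regex-matching candidate addresses against a name, might look like this (the address list and pattern are invented purely for illustration):

```python
import re

# Hypothetical leaked channel -> email list (made-up data).
emails = [
    "jsmith88@example.com",
    "jane.smith@example.org",
    "totally.unrelated@example.net",
]

# Match "j" followed by optional letters/dots, then "smith", in the local part.
pattern = re.compile(r"^j[a-z.]*smith", re.IGNORECASE)
candidates = [e for e in emails if pattern.match(e.split("@")[0])]
print(candidates)  # → ['jsmith88@example.com', 'jane.smith@example.org']
```

The later stages (account-existence recon, breach lookups, credential stuffing) would each be separate tooling; the point is just how cheap the initial narrowing step is.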
Or, you know, develop a new "business plan" around an exploit.
Sure: https://www.abc.net.au/news/2016-07-01/league-of-legends-que...
You could sell it non-exclusively to every data broker
The only angle I can imagine is phishing for high profile creators, and at most this is a “makes it easier” and not a “creates the problem” bug.
I have… I’m not sure. Ten maybe? And those are actual conveniences for different purposes. I’m sure plenty of people have hundreds, if not thousands. So what?
Motivation in the abstract is not enough to counter GP's point—they have to have enough motivation that it's worth more than $10,000 to them and also have more than $10,000 to spend and also have the connections necessary to get in touch with someone who's able to sell a vulnerability like this and also be able to exploit it in a timely manner or at least think they can.
Selling crazy stories to the media is as old as time.
This vuln would give you a lookup table from email->YT
SELECT * FROM table WHERE email LIKE '%.gov'
Come on.
You are imagining a potential market, the exploits are priced against markets that are real and pay out today. Security researchers aren't traveling salesmen going around to every shady character on the internet and pitching them on the potential of a new criminal enterprise.
https://www.theverge.com/2016/1/29/10868404/google-reveals-h...
That guy is ridiculous! Could have made $50 million or more probably, if he had used a different registrar than Google itself.
He mentioned that Microsoft also let their domain lapse and that one was actually going to the open market... and what's more, they didn't even care when he contacted them! Oof:
https://www.theregister.com/2003/11/06/microsoft_forgets_to_...
Here are a few other doozies:
Apple forgot to renew their certificate for the entire Mac App Store, and didn't care much:
2014: https://www.macrumors.com/2014/05/25/apple-software-update-i...
if that wasn't bad enough... they did it again in 2015:
https://osxdaily.com/2015/11/12/fix-app-is-damaged-cant-be-o...
and almost in 2016:
https://apple.stackexchange.com/a/227787/75628
Probably the risk of going to jail outweighs the extra $5k, but if a company is serious about the bug bounty program, they would offer a reward that's competitive with what you could extract from the black market, and I don't think that's hard to do.
Google knew about this already, and hadn't done anything to fix it...and when it was reported, they didn't fully understand it and were dismissive, until the author came back at them again.
> Unmasking Google accounts? Could there be a business there? Sure, maybe
I'm pretty sure there are a _lot_ of youtube channels that private and public entities would love to uncover the identity of, and I would say that it's very unlikely these guys were the first to piece all this together.
The main takeaway for me is how incompetent Googlers seem to be, both in the basic "web application 101" mistakes made (not properly validating/restricting fields) and the clearly rushed evaluation of the security report. Such a report should trigger some folks going "oh, that's not good. I wonder what else is broken about this." Not "meh, not significant, quick patch, fixed."
Nobody at Google wants to work on stuff that isn't going to get them up a rung on the ladder.
It's most likely not just a comparison to black market prices or how many lines of code it'd take to patch.
https://security.apple.com/bounty/categories/
I'm a little skeptical of published prices for serverside software, though. Do you know anyone who specializes in selling those bugs? I don't.
In this case there were 4 billion email addresses on the line from being scraped; imagine if this was exploited and the data was leaked. The news would make headlines, which would definitely be bad for Google's reputation and stock price.
However, the impact of the leak is not that high as it only consists of a channel <> email address mapping, and therefore I think 10k is a fair price
Compare this to Google Project Zero which gives other companies the following time to fix before disclosure...
>"This bug is subject to a 90 day disclosure deadline. If a fix for this issue is made available to users before the end of the 90-day deadline, this bug report will become public 30 days after the fix was made available. Otherwise, this bug report will become public at the deadline."
>If the patch is expected to arrive within 14 days of the deadline expiring, then Project Zero may offer an extension...Note however, that the 14-day grace period overlaps with the 30-day patch uptake window, such that any vulnerability fixed within the grace period will still be publicly disclosed on day 120 at the latest (30 days after the original 90-day deadline).
>If we don't think a fix will be ready within 14 days, then we will use the original 90-day deadline as the time of disclosure. That means we grant a 14-day grace extension only when there's a commitment by the developer to ship a fix within the 14-day grace period.
https://googleprojectzero.blogspot.com/p/vulnerability-discl...
I’ve only participated in a few vulnerability programs, and most of them reward less if the security flaw is stupidly simple (but serious) such as revealing user emails in the page source.
I'm not sure I'd apply that logic if I were Google, though. Smaller companies it makes sense because the threat actors that they are most likely to face are mostly script kiddies who give you at most a day before they get bored and try someone else. Google is another matter, since they're always a target for much more sophisticated attackers.
Should... should this just be public: https://staging-people-pa.sandbox.googleapis.com/$discovery/...
Furthermore the discovery endpoint is publicly documented[0] and specifically meant for external users. Nobody internal would read the discovery endpoint: they would just pull up the .proto file through code search.
Another observation: from my experience at Google it took multiple weeks of effort fighting against the bureaucracy to be able to expose an API to the public. It's not like an AWS S3 bucket that could just be accidentally public. The team knew this is public and had fought the bureaucracy to make it public.
[0]: https://developers.google.com/discovery/v1/getting_started
Congrats!
The part that sucks for consumers is that they often kill things that people like. I wish they had a better way of doing this.
Bravo to brutecat for this excellent discovery, productionization, and writeup.
That’s part of the stupidity of the DOJ trying to force Google to sell Chrome. Who would want it? And how would they profit from it?
All valid questions, but it might be that splitting the tool used to bludgeon everyone around is still worth it, even if pace of development slows down considerably.
Unless you are using Chromebooks, every desktop user who uses Chrome made an affirmative choice to download it.
My point is that maybe it is okay? Runaway churn is at least partially responsible for the current situation, where most companies are simply unable to compete.
Apple has no reason to compete, it can just make more and more functionality for native apps as can Google if it doesn’t have to worry about Chrome anymore.
Microsoft doesn’t care about the browser anymore and just uses Chromium. Firefox’s revenue comes completely from Google. If Google doesn’t have to prop up Firefox for antitrust reasons anymore, why would they?
As sister comments have said there is no money in it. They are stickiness plays or just bets for Google.
But I am getting more hesitant, because often when I (and others) do, it seems companies think they can increase their prices wildly or do other stuff.
I have this exact feeling now with Logseq: I started paying for sync a while ago and it seems so did others and now they are rewriting the whole thing from plain text[1] to some kind of database based storage :-/
[1]: which could be synced over git, transferred effortlessly into another application and was one of the reasons I went with Logseq
From Kagi’s website
https://blog.kagi.com/status-update-first-three-months#:~:te...
We are currently serving around 2.1M queries a month, costing us around $26,250 USD/month.
Between Kagi and Orion, we are currently generating around $26,500 USD in monthly recurring revenue, which incidentally about exactly covers our current API and infrastructure costs.
That means that salaries and all other operating costs (order of magnitude of $100K USD/month) remain a challenge and are still paid out of the founders’ pocket (Kagi remains completely bootstrapped).
https://blog.kagi.com/what-is-next-for-kagi
I’m honestly glad to hear that. Until I heard your interview with Gruber, I thought Kagi was the same company that used to provide a payment platform for Mac indie developers.
It’s always good to see a bootstrapped company become successful without enshittification. I put you up there with BackBlaze.
I see you have investors now (not saying that is a negative). Are the laws still the same about having to be a “qualified investor”? Can anyone invest - asking out of curiosity.
Photos redesign makes it really hard to use.
Siri works half the time, maybe even less than that.
Books lacks basic functionality, such as downloading and keeping books on the device.
The Photos redesign may be something you don’t like, but you can hardly call it half baked. All of the functionality is there, and there’s a new consistency in how it works that wasn’t there previously.
Books automatically downloads to device. There isn’t a way to read a book without it local.
A simple google search will answer this question
https://www.theverge.com/2025/1/9/24340238/apple-iphone-alar...
Even an hn search is fine, if you do not trust the Verge (notice these are comments from the last 3 months so not an old issue):
https://news.ycombinator.com/item?id=42705217 https://news.ycombinator.com/item?id=41887505 https://news.ycombinator.com/item?id=41962418
> Books automatically downloads to device. There isn’t a way to read a book without it local.
Have you used Books extensively or just skimmed it? There's no way to keep books on device, make another Google search if you do not believe me.
> Photos redesign may be something you don’t like, but you can hardly call it half baked.
Perfect, then keep Photos and kill only alarms, books and Siri.
I haven’t used Books extensively outside of audiobooks. So it sounds like there’s offloading of cached downloads going on that’s iCloud-wide; disabling iCloud sync would fix this. I can imagine that being frustrating if the book you want isn’t there when you’re on a flight (which should only happen if you haven’t recently accessed it). I agree there should be a way to prevent this. I wouldn’t call that half baked, but it’s a big enough problem that I’d agree it’s not fully thought through (or more likely, they did think it through but came to a different conclusion).
Once you download a book to a device, it stays downloaded. There is a setting to automatically remove downloads once you're finished with the book, but that defaults to off (and I didn't even realize it was there until I went looking just now).
It's happened for years: it was pretty bad about 5-6 years ago, and although Apple claimed they fixed it, it's still happening a bit.
In fact, when I really need to wake up at a particular time (say, for a flight), I set two alarms one minute apart.
Go to library > collections > downloaded.
I can see books I purchased and other PDFs that I uploaded.
I do agree on the Photos redesign. I feel like I constantly get stuck on certain pages.
Your tone is aggressive and uncalled for. And the fact that you have never run into a very common bug says a lot about your inattention to detail.
There's no "keep forever on device" button, which to me seems like a basic functionality. If the app decides to delete them, it will.
https://old.reddit.com/r/ios/comments/1b04rzy/apple_what_wer...
https://news.ycombinator.com/item?id=23736536
https://apple.stackexchange.com/questions/344271/books-autom...
iCloud offload is a pretty common feature on Apple devices and one that I find pretty handy. I understand why it doesn’t work for others though.
You can turn off iCloud sync in general for the device.
I'm still upset about Google Reader.
https://killedbygoogle.com/
What upsets me re RSS these days is how many people were apparently so reliant on one reader that they still publicly mourn every time it comes up, 12 years later. Who are these fair-weather feed followers who threw their hands in the air with the loss of exactly one product?
The other problem was that Google killing Reader was a signal to the broader web to move away from RSS. RSS has kind of limped along since then.
Are social features the main selling point of RSS readers? I mean I just use mine to know when there's a new blog post/webcomic posted on a few sites I follow, without having to give my email or use another platform like social media to know about it. And I'd use the social features which are present on the blogs, under the control of the blog owner(s), if there's any. Or maybe my use case is not the most common one?
Though I agree about the signal Google sent by killing GR.
All so they could clear the way for Google Plus. And look how that turned out.
So yeah, watershed moment, the point where the scales fell from my eyes, still justifiably pissed, fool me twice, etc etc etc
(Still using RSS daily, though I lapsed for a while).
Years later, I came across Artifact, created by the founders of Instagram, and thought it was an interesting idea. The problem was I was reading its shutdown announcement.
Sometimes I think products are killed way too early. Look at Twitch: it boomed after years of stagnation.
[0] https://www.statista.com/statistics/517907/twitch-app-revenu...
https://news.ycombinator.com/item?id=5371982
I'd be so glad now to give up on Google and all its enshittified shit. I could give up things that are still super useful and I get value from every day: YouTube, Gmail, Play Services, Drive, Maps. But I don't think I could give them all up at once. I've been trying to migrate to Proton and OpenStreetMap and some kind of real Linux phone etc, I don't even mind if I have to fiddle around before everything works. The trouble is that the claws are in, but they're not in me.
Remember when Google proudly didn't advertise themselves? They got to critical mass through word of mouth, from having a compellingly better product. Now what they have is network effects and lock-in. They used to appeal to developers and techies, and that ended up making the services better for everyone. Now, like all the other tech giants, they have PHBs optimizing for the next millisecond of attention and microdollar of ad revenue from a lowest-common-denominator victim.
Google is so big that it's a significant part of life for a significant proportion of the world. When Google is shit it moves the needle on net human suffering. I think the UN should be focusing on preventing war and trying to salvage our environment, but if they aren't going to do that, then it might be rational to just form a worldwide consumer group to take on megacorps.
That marks my coming of age on the enshittified web. The killing of Google Reader was a watershed moment. It marks the moment in time when the tide turned from the open Web to closed social media gardens.
https://www.youtube.com/watch?v=4Z4RKRLaSug
Most software products rely on very complex software stacks, and if you trust all the libraries and the OS you use 100%, I would say that's the wrong mindset. There were bugs even in the processor (Meltdown). Security is a continuous battle and you never know if you won, only (sometimes) if you lose.
When I breeze through your login process with the wrong credentials, that's because your security was fake. If it were real, that attempt would break, because the system didn't know who I was. If some bug lets me past login, I don't somehow successfully log in as me; I'm logging in as nobody at all, which is clearly nonsense.
This is "Make Invalid States Unrepresentable" at scale, and it's difficult to do, but not impossible.
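A minimal sketch of that idea in Python (all names invented; Python can't fully enforce construction privacy, but the shape carries over to typed languages): login either returns a Session that provably carries a user, or nothing, so there is no "logged in as nobody" value to represent.

```python
from dataclasses import dataclass
from typing import Optional

# Toy credential store; a real system would verify a salted hash.
_USERS = {"alice": "hunter2"}

@dataclass(frozen=True)
class Session:
    """Only ever constructed by login(), and always carries a real user."""
    user: str

def login(user: str, password: str) -> Optional[Session]:
    # Either we know exactly who you are, or you get no session at all.
    if _USERS.get(user) == password:
        return Session(user=user)
    return None

assert login("alice", "wrong") is None          # no anonymous "nobody" session
assert login("alice", "hunter2") == Session("alice")
```

Every function downstream takes a `Session`, not a nullable user id, so the "past login but unauthenticated" state simply has no representation.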
[1] https://en.wikipedia.org/wiki/Drake_equation
There's definitely some measure of complexity. I still like simple cyclomatic but I know there are better ones out there that try to capture the cognitive load of understanding the code.
The attack surface of the system is definitely important. The more ways that more people have to interface with the code, the more likely it is that there will be a mistake.
Security practices need to be captured in some way (maybe a factor that gets applied). If you have vulnerability scanning enabled that's going to catch some percentage of bugs. So will static analysis, code reviews, etc.
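One way to combine those factors is a Drake-style product of terms; every coefficient below is an invented placeholder, not an industry figure:

```python
# Drake-equation-style estimate of residual security bugs (all inputs assumed).
kloc = 500                  # thousands of lines of code
defects_per_kloc = 0.5      # security-relevant defects introduced per KLOC
exposure = 0.3              # fraction of code on the external attack surface
review_catch = 0.6          # fraction caught by code review
scanning_catch = 0.5        # fraction of the remainder caught by scanners

residual = kloc * defects_per_kloc * exposure * (1 - review_catch) * (1 - scanning_catch)
print(f"~{residual:.0f} exploitable bugs expected")  # → ~15 exploitable bugs expected
```

As with the original Drake equation, the value isn't the number itself but seeing which terms (here, catch rates and attack surface) move it the most.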
It's not a black and white of "an app is truly secure" or "an app is truly insecure", but rather a continuum from "secure enough in practice for this threat model and purpose" to "an insecure mess".
Like, plenty of websites and apps have launched, existed for years, and then shut down without a single security incident. In those cases, surely the app was secure, right? At least secure enough? Signal so far has been "secure enough in practice" for most people, while iMessage has in practice been "secure enough if you're a normal person, but with serious security issues for anyone who might be subject to serious targeted attacks"
Say more about what you mean by "no app is truly secure"? Especially in the context of signal?
> Im just saying that all it takes is one employee to click onto the wrong URL to breach your apps security
Pretend I'm a signal employee. What link can I click that breaches the app's security?
They don't store unencrypted data, pushing source code changes requires review, releases are signed and a single employee can't compromise the release process, so I'm missing how one employee being compromised could lead to the signal app breaching signal's security.
Also, in practice, how often are apps compromised from a phishing attack? I don't even really see news reports on that, so I'm curious if you're operating off like a specific case or something.
This is just not how it works. Most likely the author spent weeks or months digging into different products until he found something worthwhile.
But one has to understand that for security purposes they SHOULD pay as little as possible. If they pay out more, there's more incentive to find bugs, and then unfortunately you'll also grow the black market.
So GTO strat is to just cut off black market with as little money as possible.
Regarding the payout-- I'm curious if targeting (in the disclosure video) a CEO/C-Suite exec @ Google would have encouraged a higher amount from the panel.
https://web.archive.org/web/0id_/http://wayback-fakeurl.arch...
> Timeline
> 05/11/24 - Panel awards $3,133.
> 12/12/24 - Panel awards an additional $7,500.
For private corporations/closed code, it is a way to get a thousand engineers looking at their code and APIs and only pay a small amount to whoever is the first one to find something. Everybody else gets nothing, even if they put in a lot of time and effort.
Underpaid is an understatement.
Industry-wide SWE compensation is somewhere in the $100k-$200k range. Typical Google SWE compensation is $350k. Top Google SWE salary is north of $1M. Increase by 60-100% for overhead, or somewhat more for consulting overhead.
The amount of work doing something like this is orders of magnitude more than the compensation:
1) Most security vulnerabilities investigated lead nowhere, were previously discovered, etc. That's lost time.
2) Working out something like this is much more than 0.25-3 weeks.
More critically, the black market value of most vulnerabilities is much more than Google pays out. A rational economic actor would sell something like this grey market or black market, rather than reporting.
The problem is none of the big companies take security seriously. The reason is that there are no economic damages to even serious data leaks, so what incentive is there for them to take data security seriously?
Many companies (including big ones like T-Mobile) have major security compromises every few months (and in the case of T-Mobile, have had so for decades) and simply don't care. I don't mean to pick on T-Mobile -- I like them as a company -- but they're pretty representative.
That $10k is "an extraordinarily high sum" for what was likely weeks of work on this bug, and probably months of work poking in other places, reflects the very, very low focus on security industry-wide. This is why we need significant civil -- or possibly occasionally criminal -- liability. Civil if it's simple negligence, and criminal if it's gross negligence leading to harm.
If Google were to pay me $200 if it leaked my data, that would:
- Be worth much less than my privacy
- Amount to damages of $400B worldwide if there were a compromise impacting all 2B users (although, realistically, damages would be lower in middle- and low-income countries)
This would represent a 20% fall in Google's market cap, which feels about right.
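The arithmetic behind those numbers (the $200 figure and user count are the commenter's hypotheticals; the market cap is approximate):

```python
# Hypothetical per-user liability from the comment above.
per_user = 200                    # USD per affected user (assumed)
users = 2_000_000_000             # rough worldwide user count from the comment
market_cap = 2_000_000_000_000    # ~$2T, approximate

damages = per_user * users
print(f"${damages/1e9:.0f}B, {damages/market_cap:.0%} of market cap")
```

So the claimed "20% fall" follows directly from those two assumed inputs.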
At that point, I expect the bug bounties would be set many orders of magnitude higher. Security bugs should be rare. They're common. This is a problem, and one created by our market incentive structures.
You are correct that Apple is an exception, and seems to mind security.
Security is hard. Incredibly hard. Unlike most things in business which are positive-sum, security isn't - it's adversarial. If we make companies pay huge civil fines for things that are so hard to protect against, we're stifling a ton of innovation.
I usually analogize a large company to a bank. A bank is supposed to keep your money secure, and for sure you'd have a legitimate beef if a bank robber could waltz in and steal your money easily because it's not kept in a vault.
But what if it is kept in a vault? What if the bank isn't attacked by a random group of bank robbers, but rather by the armed forces of a hostile nation? We don't expect banks to protect against armies - that's what we have states for! They provide centralized protection against threats that are far too large for any individual entity to take on by themselves.
This is the same, albeit out-of-sight, situation with large companies. You can have thousands, tens of thousands of people around the world poking at everything your company does for years, looking for any vulnerability. No company can truly withstand that kind of scrutiny - and I don't think making civil penalties higher will change that. And on top of criminal or opportunistic actors, companies have to worry about state actors too.
The only way is for the state to take on an active role in security. I don't see any other way that gets real security for anyone.
Google is not going to pay you $200 if they leak your email address.
Google pays as much attention to security as Apple does.
If you want a world in which these kinds of security bugs create multimillion-dollar liabilities, you can advocate for the new statutes that will create that world; just be aware that only companies like Google will be able to afford to operate in that world.
I disagree with the claim that "only companies like Google will be able to afford to operate in that world." That's not how markets work.
1) The impact would be that frameworks would develop with better security. This would result in a slowdown of software engineering. Perhaps it would start to look like any other engineering discipline, where things are analyzed for safety.
2) Every other industry shows that in situations like this, big players are disadvantaged.
The analysis here is pretty basic:
- If I'm running a small $10M startup making a little iPhone app for some obscure task, the risk of legal liability from this is among the smallest of my risks of going under, so I'm incentivized to ignore it. If I were faced with a $400B liability, I'd declare bankruptcy, so in effect, that's a $10M liability. The expected cost is 5% × $10M = $500k, so it makes sense to spend up to $500k to mitigate a 5% risk.
- If Google has a team working on that same app, and doesn't manage security properly, the $400B liability stays a $400B liability. There is no ROI analysis where it makes sense to build a little app which has a 5% chance of leaking data. Do it right, or don't do it at all. The expected cost to Google here is 5% times $400B = $20B.
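The asymmetry in those two bullets can be sketched numerically. A minimal illustration, using the comment's own figures (5% breach risk, $400B liability) and the idea that bankruptcy caps a firm's effective liability at its own value:

```python
# Expected-cost asymmetry between a small startup and a large firm.
# Figures are the illustrative ones from the comment, not real data.

def expected_cost(breach_probability: float, liability: float, firm_value: float) -> float:
    """Expected cost of a breach, capped at what the firm can actually lose
    before bankruptcy wipes out the rest of the liability."""
    effective_liability = min(liability, firm_value)
    return breach_probability * effective_liability

P_BREACH = 0.05     # assumed 5% chance of a leak
LIABILITY = 400e9   # $400B statutory liability from the example

startup = expected_cost(P_BREACH, LIABILITY, firm_value=10e6)   # capped at $10M
google = expected_cost(P_BREACH, LIABILITY, firm_value=2e12)    # bears the full $400B

print(f"Startup expected cost: ${startup:,.0f}")  # $500,000
print(f"Google expected cost:  ${google:,.0f}")   # $20,000,000,000
```

The bankruptcy cap is what turns a uniform statute into a much stronger incentive for the large firm than for the small one.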
This is why, in virtually every other industry, big players are (1) more trusted (2) more expensive, and phrases like "small, fly-by-night operation" exist (and make business sense to run).
I think you need to do a lot to justify a non-zero value for this, frankly.
How is your “privacy” worth $200? What data is valuable and what data isn’t? Under what context?
If your privacy is worth $200 per leak (by some definition of leak), you would surely already take steps to anonymize your data and wouldn't use a service like Google (or name your untrustworthy party).
I’m not saying leaks are good but trying to price it in seems fraught.
I used to get books dropped off at my door with the names, addresses, and phone numbers of thousands of people. The first two are often public record.
Calling 10k an "extraordinarily high sum" is accurate to some and inaccurate to others.
I would bet the groups would differ by perceived personal cost more than the opinion of Google, Apple, and the like. These groups would also probably show distinction where people have been victimized by "identity theft."
The opinions of those bearing the cost are more important here, in my opinion.
It requires some legwork, but they could've seen somewhere in the ballpark of six figures over a year if the exploit wasn't patched.
Oh, and if they had no ethics.
I think the reality though is just that there's literally no buyer for this.
You could sell the service yourself! I bet you could make a couple thousand bucks before you and your customers got indicted.
Caught for what? If someone sells information about a vulnerability, what law are they breaking? In most jurisdictions, unless you're dumb enough to ask questions about whom you're selling to and have active knowledge you're assisting someone in breaking some law, selling to the black market is perfectly legal, at least so long as you pay your taxes.
If you're doing grey market, it's even more legal. If a dictatorship wants to unmask a critic for assassination, and you're selling this information to a government security agency, it's legal by definition.
I wrote: "unless you're dumb enough to ask questions about whom you're selling to and have active knowledge you're assisting someone in breaking some law, selling to the black market is perfectly legal"
You wrote: "If you sell information about a vulnerability to someone that you know specifically is going to use it to break the law, you are an accessory to that lawbreaking"
That's the exact same thing.
You, likewise, didn't notice I was advocating for new statutes in a post above.
You get caught by the DEA. Do you think it’s a valid defense “I didn’t ask what was in the box”?
Say the drug dealer you delivered it to got caught and then told authorities you delivered it to them, do you think you would have a valid defense?
(Commenters keep moving the goalposts making for a complex thread where each node in the tree litigates a very different hypothetical situation. Ah HN!)
Similar to publishing, say... The Anarchist Cookbook.
This goes directly to the concept of “willful blindness”
https://www.mad.uscourts.gov/resources/pattern2003/html/patt...
This is what baffles me about Apple's bug bounty program.
> $1,000,000: Zero-click remote chain with full kernel execution and persistence, including kernel PAC bypass, on latest shipping hardware. As an example, you demonstrated a zero-click remote chain with full kernel execution and a PAC/PPL bypass with persistence on the latest iOS device.
This is easily worth significantly more. You don't even need to sell it on the black market; sell it to all the three-letter agencies in the world.
The other comment has already addressed the market value question.
That seems extremely long for an organisation as technically competent as Google.
$100.00 is enough to cause most to blush.
Pour one out for the google dev in charge of b64 encoding their fancy binary message format so it can be jammed inside a JSON blob. If you want a vision of the future, imagine a boot with "worse is better" imprinted on the sole stomping on an engineer's face, forever.
If only there were some way to easily send this binary gzip data to this API that accepts JSON...
The json part is an automatic conversion.
But the external API is Json and so it needs to be converted at some point.
https://stackoverflow.com/questions/49358526/protobuf-messag...
Internally, it is (maybe) a binary field of a protobuf.
Then when translating to JSON, it was converted to a string via base64 encoding.
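The bytes-to-base64 step described above is exactly what proto3's JSON mapping specifies for `bytes` fields. A minimal sketch of the round trip (field name and payload are hypothetical, not Google's actual schema):

```python
# Sketch: how a binary (here gzipped) payload survives a JSON-only API.
# proto3's JSON mapping renders `bytes` fields as base64 strings, so the
# binary blob gets wrapped as text on the way out and unwrapped on the way in.
import base64
import gzip
import json

binary_payload = gzip.compress(b"some internal binary message")

# Outbound: bytes -> base64 -> JSON string field.
envelope = json.dumps({"payload": base64.b64encode(binary_payload).decode("ascii")})

# Inbound: JSON string field -> base64 decode -> original bytes.
decoded = gzip.decompress(base64.b64decode(json.loads(envelope)["payload"]))
assert decoded == b"some internal binary message"
```

This is why the "fancy binary message format" ends up base64-jammed inside a JSON blob: the external API only speaks JSON, so the conversion has to happen somewhere.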
The attack chain isn't that complex...
It's very lame to be stingy with a bug bounty program.
It feels like most software sites are now populated by people who think software is like in the movies.
(Of course, they can still apply more computational work and possibly identify you without you logging in.)
https://support.google.com/youtube/answer/7001996?hl=en-GB
edit: My hunch is that the channels the OP's attack was able to target are not actual channels but rather YouTube users (who have a "channel" because that's how YouTube represents users): so "YouTube User" is the correct description of this attack, which is distinct from what you're thinking of as a channel.
Talk about puny!
> If they poked around a bit more they may have found a better GAIA->Email vulnerability
They still can! Report more bugs, get more bounties. I don't see how this is related to how much they paid for this one.
> A database of emails for every major youtube channel would be worth an awful lot.
It's pretty clear from the article that you can't use this API to scrape at that kind of volume. This kind of thing was never in the offering. As the title says, you can leak "any" email, not "every" email.
Or just plain ole pwning them. Most users still tend to use the same password across different services, don't use 2FA, and have been involved in at least one high-profile leak (I know I'm in at least a dozen so far per haveibeenpwned).
Occasionally you get the victim that uses that same password for their e-mail service, and that can allow you to bypass e-mail 2FA if enabled. Even better if the account is used for social SSO (e.g., Google, Facebook, Twitter). Then you have access to a treasure trove of services; or just delete them for lulz.
On the other hand, a spearphishing campaign could be valuable. Or launch a memecoin from some people's accounts to make millions.
I'll take a cheque.
EDIT: oh I see .. DD/MM/YY is a new one to me
https://en.wikipedia.org/wiki/List_of_date_formats_by_countr...
I have no idea why America settled on MM/DD/YY, which seems like absolutely the least intuitive permutation of D, M, and Y. Except perhaps MYD.
Month day year makes no sense because it's backwards, and no one talks that way. So why use that?
People also say "twelve past two" and yet you don't use 12:2:SS.
Like US customary units: imported from the British, but the UK modernized its system and the US didn't.
February Twelfth Two Thousand and Twenty Five
Feb 12 2025
02/12/2025
I know it's cool for Europeans ... and everyone else to hate on us for it, but it does seem to make sense given the way we typically say the date.
Yes, this effectively makes dates nearly impossible to decipher here.
February 12th, 2025
Rather than:
12 February 2025
And is easier to say than:
The 12th of February 2025
So it's always been natural to write the numeric form the same way, but I am American. I can appreciate day first being easier to sort by machines and having an agreed upon international standard.
Again grammatically is easier and shorter to say month day vs the day of month.
Big and little endian dates are the only way that makes sense I think. Doing it the US way where day is inexplicably between year and month just feels corrupted to my mind.
IIRC, Japan uses Year-Month-Day, which is the other order that makes sense.
Nope, the standards are day.month.year, year-month-day, or month/day/year. The problem happens when the delimiter doesn't match the ordering.
https://xkcd.com/1179/
https://en.wikipedia.org/wiki/List_of_date_formats_by_countr...
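The three conventional orderings discussed in this thread, with their usual delimiters, can be rendered with Python's strftime (a minimal illustration):

```python
# The three common date orderings, each with its conventional delimiter:
# little-endian (day.month.year), big-endian (year-month-day, ISO 8601),
# and the US middle-endian form (month/day/year).
from datetime import date

d = date(2025, 2, 12)

print(d.strftime("%d.%m.%Y"))  # 12.02.2025  day.month.year
print(d.isoformat())           # 2025-02-12  year-month-day (ISO 8601)
print(d.strftime("%m/%d/%Y"))  # 02/12/2025  month/day/year (US)
```

Only the ISO 8601 form sorts correctly as a plain string, which is one reason machines favor year-first regardless of what humans say aloud.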