This is critical infrastructure, and it gets compromised way too often. There are so many horror stories of NPM (and similar) packages getting filled with malware. You can't rely on people not falling for phishing 100% of the time.
People who publish software packages tend to be at least somewhat technical people. Can package publishing platforms PLEASE start SIGNING emails. Publish GPG keys (or whatever, I don't care about the technical implementation) and sign every god damned email you send to people who publish stuff on your platform.
Educate the publishers on this. Get them to distrust any unsigned email, no matter how convincing it looks.
And while we're at it, it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it, but it's clear that the actions in this example were suspicious: user logs in, changes 2FA settings, immediately adds a new API token, which immediately gets used to publish packages. Maybe there should be a 24-hour period where nothing can be published after changing any form of credentials. Accompanied by a bunch of signed notification emails. Of course that's all moot if the attacker also changes the email address.
We analyzed this DuckDB incident today. The attacker phished a maintainer on npmjs.help, proxied the real npm, reset 2FA, then immediately created a new API token and published four malicious versions. A short publish freeze after 2FA or token changes would have broken that chain. Signed emails help, but passkeys plus a publish freeze on auth changes is what would have stopped this specific attack.
There was a similar npm phishing attack back in July (https://socket.dev/blog/npm-phishing-email-targets-developer...). In that case, signed emails would not have helped. The phish used npmjs.org — a domain npm actually owns — but they never set DMARC there. DMARC is only set on npmjs.com, the domain they send email from. This is an example of the “lack of an affirmative indicator” problem. Humans are bad at noticing something missing. Browsers learned this years ago: instead of showing a lock icon to indicate safety, they flipped it to show warnings only when unsafe. Signed emails have the same issue — users often won’t notice the absence of the right signal. Passkeys and publish freezes solve this by removing the human from the decision point.
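You can check for this kind of gap yourself with a couple of DNS lookups (the output reflects whatever the records are on the day you run them):

    dig +short TXT _dmarc.npmjs.com    # the sending domain: expect a v=DMARC1 policy here
    dig +short TXT _dmarc.npmjs.org    # empty output means no DMARC record at all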
Moru 5 hours ago [-]
Some registrars make this easy. Think it was Cloudflare that has a button for "Do not allow email from this domain". Saw it last time I set up a domain that I didn't want to send email from. I'm guessing you get that question if there are no MX records for the domain when you move to Cloudflare.
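Under the hood, a "no email from this domain" setup is usually just three records. Roughly (a zone-file sketch with a placeholder domain; the exact UI varies by registrar):

    example.com.         IN MX  0 .                   ; null MX (RFC 7505): accepts no mail
    example.com.         IN TXT "v=spf1 -all"         ; SPF: no host may send as this domain
    _dmarc.example.com.  IN TXT "v=DMARC1; p=reject"  ; DMARC: tell receivers to reject failures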
SoftTalker 21 hours ago [-]
I think you just have to distrust email (or any other "pushed" messages), period. Just don't ever click on a link in an email or a message. Go to the site from your own previously bookmarked shortcut, or type in the URL.
I got a fraud alert email from my credit card the other day. It included links to view and confirm/deny the suspicious charge. It all looked OK, the email included my name and the last digits of my account number.
I logged in to the website instead. When I called to follow up I used the phone number printed on my card.
Turns out it was a legit email, but you can't really know. Most people don't understand public key signing well enough to rely on them only trusting signed emails.
Also, if you're sending emails like this to your users, stop including links. Instead, give them instructions on what to do on your website or app.
Moru 5 hours ago [-]
There are companies that send emails with invoices where you have to click a link. There is no way to log in on their site to get the invoice. It is an easy fix for them (we use the same invoicing company as they do, so I know). All they need to do is click "Allow sending bills directly to customers bank". Every month I get the email, I use the included chat function on the webpage to ask when they will enable this, and it's always not possible. Maybe some day.
I wish we could stop training people to click links in random messages just because we want to be able to track their movements online.
sroussey 19 hours ago [-]
I get Coinbase SMS all the time with a code not to share. But also… “call this phone number if you did not request the code”.
sgc 19 hours ago [-]
This does nothing for the case of receiving a fake coinbase sms with a fake contact phone number.
I have had people attempt fraud in my work with live calls as follow up to emails and texts. I only caught it because it didn't pass the smell test so I did quite a bit of research. Somebody else got caught in the exact same scam and I had to extricate them from it. They didn't believe me at first and I had to hit them over the head a bit with the truth before it sank in.
Moru 5 hours ago [-]
Yes, this is a classic scam vector. We really should stop training users to click links / call phone numbers in SMS and emails.
parliament32 19 hours ago [-]
> it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it
USE PASSKEYS. Passkeys are phishing-resistant MFA, which has been a US govt directive for agencies and suppliers for three years now[1]. There is no excuse for infrastructure as critical as NPM to still be allowing TOTP for MFA.
This is the way! Passkeys or FIDO2 (YubiKey) should be required for supply-chain-critical infrastructure like this.
neilv 5 hours ago [-]
> This is critical infrastructure, and it gets compromised way too often.
Most times that I go to use some JS, Python, or (sometimes) Rust framework, I get a sinking feeling, as I see a huge list of dependencies scroll by.
I know that it's a big pile of security vulnerabilities and supply-chain attack risk.
Web development documentation that doesn't start with `npm install` seems rare now.
Then there's the 'open source' mobile app frameworks that push you to use the framework on your workstation with some vendor's Web platform tightly in the loop, which all your code flows through.
Children, who don't know how things work, will push any button. But experienced software engineers should understand the technology, the business context, and the real-world threats context, and at least have an uneasy, disapproving feeling every time they work on code like this.
And in some cases -- maybe in all cases that aren't a fly-by-night, or an investment scam, or a hobby project on scratch equipment -- software engineers should consider pushing back against engaging in irresponsible practices that they know will probably result in compromise.
cjonas 4 hours ago [-]
What does having an "uneasy disapproving feeling" actually solve?
neilv 3 hours ago [-]
The next sentence is one of the conclusions it might lead to.
nikcub 21 hours ago [-]
* passkeys
* signed packages
enforce it for the top x thousand most popular packages to start
some basic hygiene about detecting unique new user login sessions would help as well
SAI_Peregrinus 19 hours ago [-]
Requiring signed packages isn't enough, you have to enforce that signing can only be done with the approval of a trusted person.
People will inevitably set up their CI system to sign packages, no human intervention needed. If they're smart and the CI system is capable of it, they'll set it up to only build when a tag signed by someone approved to make releases is pushed; but far too often they'll just build whenever a tag is pushed, without enforcing signature verification or even checking which contributors can make releases. Someone with access to an approved contributor's GitHub account can very often trigger the CI system to make a signed release, even without access to that contributor's commit signing key.
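A minimal sketch of the enforcement step as a CI shell snippet, assuming a curated keyring of release managers' public keys (the paths and the $CI_TAG variable are placeholders):

    # import only the keys of people allowed to cut releases
    gpg --import keys/release-managers/*.asc

    # refuse to build unless the pushed tag is signed by one of those keys
    git verify-tag "$CI_TAG" || { echo "tag $CI_TAG is unsigned or untrusted" >&2; exit 1; }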
jonplackett 3 hours ago [-]
One issue is that many institutions - banks, tech giants - still send ridiculously spammy looking emails asking you to click a link and go verify something.
All these actions are teaching people to be dumb and make it more likely they’ll fall for a scam because the pattern has been normal before.
evantbyrne 21 hours ago [-]
The email was sent from the 'npmjs dot help' domain. I'm not saying you're wrong, but basic due diligence would have prevented this. If not by email, the maintainer might still have been compromised over text or some other medium. And today, maintainers of larger projects can avoid these problems by not importing and auto-updating a bunch of tiny packages that look like they could have been lifted from Stack Overflow.
chrisweekly 20 hours ago [-]
Re: "npmjs dot help", way too many companies use random domains -- effectively training their users to fall for phishing attacks.
InsideOutSanta 19 hours ago [-]
This exactly. It's actually wild how much valid emails can look like phishing emails, and how confusing it is that companies use different domains for critical things.
One example that always annoys me is that the website listing all of Proton's apps isn't at an address you'd expect, like apps.proton.me. It's at protonapps.com. Just... why? Why would you train your users to download apps from domains other than your primary one?
It also annoys me when people see this happening and point out how the person who fell for the attack missed some obvious detail they would have noticed. That's completely irrelevant, because everyone is stupid sometimes. Everyone can be stressed out and make bad decisions. It's always a good idea to make it harder to make bad decisions.
OkayPhysicist 12 hours ago [-]
I can answer why this is at the company I work at right now:
It's a PITA to coordinate between teams, and my team doesn't control the main domain. If I wanted my team's application to run on the parent domain, I would have to negotiate with the crayon eaters in IT to make a subdomain, point it at whatever server, and then if I want any other changes to be made, I'd have to schedule a followup meeting, which will generate more meetings, etc.
If I want to make any changes to the mycompany.othertld domain, I can just do it, with no approval from anyone.
SoftTalker 10 hours ago [-]
Are you arguing that it’s a good idea for random developers to be able to set up new subdomains on the company domain without any oversight?
mdaniel 9 hours ago [-]
Do they work there or not? I deeply appreciate that everyone's threat model is different, but I'd bet anyone that wants to create a new DNS record also has access to credentials that would do a ton more actual damage to the company if they so chose
Alternatively, yup, SOC2 is a thing: optionally create a ticket tracking the why, then open a PR against the IaC repo citing that ticket, have it ack-ed by someone other than the submitter, audit trail complete, change managed, the end
0cf8612b2e1e 17 hours ago [-]
Too many services will send you 2FA codes from different numbers per request.
zokier 21 hours ago [-]
SPF/DKIM already authenticate the sender. But that doesn't help if the user doesn't check who the email is from, and in that case GPG would not help much either.
elric 21 hours ago [-]
SPF & DKIM are all but worthless in practice, because so many companies send emails from garbage domains, or add large scale marketing platforms (like mailchimp) to their SPF records.
Like Citroen sends software update notifications for their cars from mmy-customerportal.com. That URL looks and sounds like a phisher's paradise. But somehow, it's legit. How can we expect any user to make the right decision when we push this kind of garbage in their face?
JimDabell 21 hours ago [-]
The problem is there is no continuity. An email from an organisation that has emailed you a hundred times before looks the same as an email from somebody who has never emailed you before. Your inbox is a collection of legitimate email floating in a vast ocean of email of dubious provenance.
I think there’s a fairly straightforward way of fixing this: contact requests for email. The first email anybody sends you has an attachment that requests a token. Mail clients sort these into a “friend request” queue. When the request is accepted, the sender gets the token, and the mail gets delivered to the inbox. From that point on, the sender uses the token. Emails that use tokens can skip all the spam filters because they are known to be sent by authorised senders.
This has the effect of separating inbound email into two collections: the inbox, containing trustworthy email where you explicitly granted authorisation to the sender; and the contact request queue.
If a phisher sends you email, then it will end up in the new request queue, not your inbox. That should be a big glaring warning that it’s not a normal email from somebody you know. You would have to accept their contact request in order to even read the phishing email.
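As a sketch, the exchange might look like this on the wire (the header names are hypothetical; nothing like this is standardized today):

    # first contact: no token, so the mail lands in the "contact request" queue
    From: newsletter@example.com
    X-Contact-Request: token-please

    # after the recipient accepts: the issued token lets later mail skip
    # spam filtering and go straight to the inbox
    From: newsletter@example.com
    X-Contact-Token: 7f3a9c1e...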
I went into more detail about the benefits of this system and how it can be implemented in this comment:
You don't need complex token arrangements for this. You can just filter emails based on their from addresses.
JimDabell 18 hours ago [-]
Unfortunately, it’s not that simple. It’s extremely common for the same organisation to send emails from different addresses, different domains, and different servers, for many different reasons.
waynesonfire 11 hours ago [-]
> You can just filter emails based on their from addresses.
JimDabell 10 hours ago [-]
So if an organisation emails you from no-reply@notifications.example.com, mailing-list@examplemail.com, and bob.smith@examplecorp.com, and the phisher emails you from support@example.help, which filter based on their from addresses makes all the legitimate ones show up as the same sender while excluding the phishing email?
zahlman 5 hours ago [-]
> which filter based on their from addresses makes all the legitimate ones show up as the same sender while excluding the phishing email?
This is the wrong question.
The right question is: what should we do about the fact that the organization has such terrible security practice?
And the answer is: call them on the phone, and tell them that you will not do business with them until they fix their shit.
artemisart 10 hours ago [-]
Why should we expect companies to be able to reuse the correct token if they can't coordinate on using a single domain in the first place?
JimDabell 9 hours ago [-]
Your assumption that they use more than one domain by accident due to a lack of coördination is not correct. Separating, e.g. your product email from your mailing list email from your corporate email has a number of benefits.
Anyway, I already mentioned a solid incentive for them to use the correct token. Go back and read my earlier comment.
The same problem applies to gpg. If companies can not manage to use consistent from addresses then do you really expect them to do any better with gpg key management?
"All legitimate npm emails are signed with GPG key X" and "All legitimate npm emails come from @npmjs.com" are equally strong statements.
vel0city 19 hours ago [-]
There's little reason to think these emails didn't pass SPF/DKIM. They probably "legitimately" own their npmjs[.]help domain and whatever server they used to send the emails is probably approved by them to send for that domain.
But in the same vein the phishing email can easily be gpg signed too. The problem is to check if the gpg key used to sign the email is legitimate, but that is exactly the same problem as checking if the from address is legitimate.
thayne 16 hours ago [-]
> Of course that's all moot if the attacker also changes the email address.
Maybe don't allow changing the email address right after changing 2FA?
And if the email is changed, send an email to the original address allowing you to dispute the change.
progx 22 hours ago [-]
TRUE! A simple self-defined word in an email and you will see whether the mail is fake or not.
ignoramous 20 hours ago [-]
> Can package publishing platforms PLEASE start SIGNING emails
I am skeptical this solves phishing rather than adding to the woes (would you blindly click on links if the email was signed?), but if we are going to suggest public key cryptography, then: npm could let package publishers choose whether only signed packages may be released, and consumers decide if they will only depend on signed packages.
I guess, for attackers, that moves the target from compromising a publisher account to getting hold of the keys, but that's going to be impossible... as private keys never leave the SSM/HSM, right?
> Get them to distrust any unsigned email, no matter how convincing it looks.
I don't think signed email would solve phishing in general. But for a service by-and-for programmers, I think it at least stands a chance.
Signing the packages seems like low-hanging fruit as well, if that isn't already being done. But I'm skeptical that those keys are as safe as they should be; IIRC someone recently abused a bug in a GitHub pipeline to execute arbitrary code and managed to publish packages that way. Which seems like an insane vulnerability class to me, and probably an inevitable consequence of centralising so many things on GitHub.
egorfine 21 hours ago [-]
> You can't rely on people not falling for phishing 100% of the time
1. I genuinely don't understand why.
2. If it is true that people are the failing factor, then nothing is going to help. Hardware keys? No problem, a human will use the hardware key to sign a malicious action.
tgv 21 hours ago [-]
> 1. I genuinely don't understand why.
You never make a mistake? Never ever? It's a question of numbers. If the likelihood of making a mistake is 1 in 10,000 emails, send out links to 10,000 package maintainers, and you've got a 63% chance of someone making that mistake.
chrisweekly 20 hours ago [-]
Your point is completely valid.
Tangent: in your example, what calculation led to "63%"?
theanonymousone 20 hours ago [-]
1-(.9999)^10000
I trust the user did this calculation. I didn't.
tgv 19 hours ago [-]
That's indeed the formula. The .9999 is (1 - 1/10000), 1/10000 being the likelihood; for large n, (1 - 1/n)^n approaches 1/e, so the chance of at least one mistake approaches 1 - 1/e ≈ 63%. It would perhaps have been clearer if I had chosen two different numbers...
egorfine 20 hours ago [-]
Then hardware 2FA won't help.
smw 19 hours ago [-]
This seems to be a common misunderstanding.
The major difference between passkeys and hardware 2fa (FIDO2/yubikeys) and TOTP/SMS/Email solutions is that the passkey/yubikey _also_ securely validates the site it's communicating with before sending validation, making traditional phishing attacks all but impossible.
tuckerman 20 hours ago [-]
Hardware 2FA, with something like passkeys (or even passkeys with software tokens), _would_ prevent this as they are unique to the domain by construction so cannot be accidentally phished (unlike TOTP 2FA).
elric 21 hours ago [-]
> 1. I genuinely don't understand why.
It's a war of attrition. You can keep bombarding developers with new and clever ways of trying to obtain their credentials or get them to click on some link while signed in. It only has to succeed once. No one is 100% vigilant all the time. If you think you're the exception, you're probably deluding yourself.
There's something broken in a system where one moment of inattention by one person can result in oodles of people ending up with compromised software, and I don't think it's the person that's broken.
kentm 6 hours ago [-]
> where one moment of inattention by one person
I'll get a lot of pushback for this, but the main problem is ecosystems that encourage using packages published by one person. I call these "some person with a GitHub" packages, and I typically go through codebases to try to remove these dependencies specifically because of this threat vector.
Packages that are developed by a team with multiple code reviewers and a process are still at risk, don't get me wrong. But the risk is much lower if one person does not have the power to unilaterally merge a PR, and more so if it's backed by an organization that has multiple active devs and processes for reviews.
If you do need to depend on these one-person packages, I'd recommend forking and carefully merging in changes, or pinning versions and manually reviewing all commits before upgrading versions. That's probably intractable for a lot of projects, but that's honestly something that we as developers need to fix by raising the bar for what dependencies we include.
egorfine 20 hours ago [-]
Then see #2: there is no way to prevent humans from actually performing detrimental actions, hardware keys or not.
vel0city 19 hours ago [-]
This specific attack (and many others like it) would absolutely have been foiled by U2F or passkeys. These authors would have been incapable of giving the adversary any useful credential to impersonate them, by the very nature of how these systems work.
egorfine 19 hours ago [-]
Fair.
MitPitt 21 hours ago [-]
Removing humans will help
egorfine 21 hours ago [-]
I sense a startup opportunity here
InsideOutSanta 19 hours ago [-]
> If it is true that people are the failing factor, then nothing is going to help
Nothing will reduce incidents to 0, but many things can move us closer to 0.
chatmasta 8 hours ago [-]
DuckDB is not critical infrastructure, and I don't even think these billion-download packages are critical infrastructure. In software everything can be rolled back, and that's exactly what happened here. Yes, we were lucky that someone caught this rather sloppy exploit early, and that the attacker (you can verify via the wallet addresses) didn't make any money from it. And it could certainly have been worse.
But I think calling DuckDB “critical infrastructure” is just a bit conceited. As an industry we really overestimate the importance of our software that can be deleted when it’s broken. We take ourselves way too seriously. In any worst case scenario, a technical problem can be solved with a people solution.
If you want to talk about critical infrastructure then the xz backdoor was the closest we’ve caught to affecting it. And what came of that backdoor? Nothing significant… I suppose you could say there might be 100 xz-like backdoors lurking in our “critical infrastructure” today, but at least as long as they’re idle, it’s not actually a problem. Maybe one day China will invade Taiwan and we’ll see just how compromised our critical infrastructure has actually been this whole time…
diggan 23 hours ago [-]
So far, it seems to be a bog-standard phishing email, with not much novelty or sophistication, seems the people running the operation got very lucky with their victims though.
I'm starting to think we haven't even seen the full scope of it yet, two authors confirmed as compromised, must be 10+ out there we haven't heard of yet?
IshKebab 22 hours ago [-]
Probably the differentiating factor here is that the phishing message was very plausible. Normally they're full of spelling mistakes and unprofessional grammar. The domain was also plausible.
I think where they got lucky is
> In hindsight, the fact that his browser did not auto-complete the login should have been a red flag.
A huge red flag. I wonder if browsers should actually detect if you're putting login details for site A manually into site B, and give you a "are you sure this isn't phishing" warning or something?
I don't quite understand how the chalk author fell for it though. They said
> This was mobile, I don't use browser extensions for the password manager there.
So are there mobile password managers that don't even check the URL? I dunno how that works...
jasode 21 hours ago [-]
> In hindsight, the fact that his browser did not auto-complete the login should have been a red flag.
>A huge red flag.
It won't be a red flag for people who often see auto-complete not working for legitimate websites. The usual cause is legitimate websites not working instead of actual phishing attempts.
This unintended behavior of password managers changes the Bayesian probabilities in the mind such that username/password fields that remain unfilled becomes normal and expected. It inadvertently trains sophisticated people to lower their guard. I wrote more on how this happens to really smart technical people: https://news.ycombinator.com/item?id=45179643
>So are there mobile password managers that don't even check the URL? I dunno how that works...
The Strongbox pw manager on iOS doesn't autofill by default. You have to go into settings to specifically enable that feature. If you don't, it's copy & paste.
cosmic_cheese 21 hours ago [-]
Even standard autofill (as in that built into Safari, Firefox, Chrome etc) gets tripped up on 100% legit sites shockingly often. Usually the cause is the site being botched, with mislabeled fields or some unnecessarily convoluted form design that otherwise prevents autofill from doing its thing.
Please people, build your login forms correctly! It’s not rocket science.
diggan 21 hours ago [-]
> It won't be a red flag for people who often see auto-complete not working for legitimate websites. The usual cause is legitimate websites not working instead of actual phishing attempts.
Yeah, that's true, I hit this all the time with 1Password+Firefox+Linux (fun combo).
Just copying and pasting the username+password because it doesn't show up is the wrong approach. The failure gives you a chance to pause and reflect: since it isn't working, look up whether it's actually the right domain, and if it is, add it to the allowed domains so it works fine in the future.
Maybe best would be if password managers defaulted to not showing a "copy" thing at all for browser logins, and not letting users select the password, instead prompting them to rely on the autofill, and fix the domains if the autofill doesn't work.
Half the reason I use password manager in the first place is specifically for this issue, the other half is because I'm lazy and don't like typing. It's really weird to hear people using password managers yet do the old copy-paste dance anyways.
jonhohle 20 hours ago [-]
The reason to use a password manager should be because passwords now need to be unique per login. Domain binding is a close second.
Unfortunately, as bad as phishing is, service providers have leaked more plain text passwords than a phisherman could ever catch.
diggan 19 hours ago [-]
Well yeah, that too. But I was doing that manually before anyways, didn't really change when I started using a password manager, except the passwords of course got a lot stronger since there is no need to remember anything.
But the domain binding just isn't possible without technical means, hence I see that as my own top reason, I suppose :)
chrisweekly 20 hours ago [-]
> "It's really weird to hear people using password managers yet do the old copy-paste dance anyways."
Thankfully there are many reasons to use a password manager. Auto-fill is just one.
nightski 21 hours ago [-]
This hasn't been my experience at all. I regularly check the bitwarden icon for example to make sure I am not on the wrong site (b/c my login count badge is there). In fact autofill has saved me before because it did not recognize the domain and did not fill.
IshKebab 18 hours ago [-]
Yeah nor mine. Chrome's password manager / autofill is very reliable and very few sites don't work with it or have multiple domains with the same auth. The only one I can think of is maybe Synopsys Solvnet, but you're probably not using that...
hiccuphippo 22 hours ago [-]
My guess is their password manager is a separate app and they use the clipboard (or maybe it's a keyboard app) to paste the password. No way for the password manager to check the url in that case.
stanac 22 hours ago [-]
You are probably right. Still, browser vendors or even extension devs could create a system where hashes of the username and password are stored and checked on submit, to warn about phishing. Not sure if I would trust such an extension, except if it were an FF recommended and verified extension.
0cf8612b2e1e 19 hours ago [-]
I use a separate app like this because I do not fully trust browser security. The browser is such a tempting hacking target (hardened, for sure) that I want to know my vault lives in an offline-only area to reduce chance of leaks.
Is there some middle ground where I can get the browser to automatically confirm I am on a previously trusted domain? My initial thought is that I could use Firefox Workspaces for trusted domains. Limited to the chosen set of urls. Which I already do for some sites, but I guess I could expand it to everything with a login.
bobbylarrybobby 18 hours ago [-]
You could run two password managers, with a fake one that's a clone of the real one but with fake passwords. Only the fake one is connected to the browser. If the browser suggests a password from the fake pw manager, you go to the real one and copy it in.
Not actually suggesting this as it sounds like quite a big headache, but it is an option.
0cf8612b2e1e 18 hours ago [-]
Honestly, that’s not a terrible idea. There are only a half dozen accounts which actually matter, so there is not even that much initial configuration burden. If I get phished for my HN account, oh well.
Think my only blocker would be if the browser extension fights me if I try to register a site using a broken/missing password.
Does feel like a bit of a browser gap. “You have previously visited this site N times”. If that number is zero, extra caution warranted. Even just a bit of extra sophistication on bookmarks if the root domain has previously been registered. Thinking out loud, I guess I could just lean on the browser Saved Passwords list. I’ve never been comfortable with the security, but I could just always try to get it to save a sentinel username, “YOUHAVEBEENHEREBEFORE”.
sunaookami 15 hours ago [-]
Nothing is plausible about this phishing mail - writing "update your password now" would be understandable but "update your 2FA now"? Never EVER seen this on any real site and it doesn't make sense (rotating passwords doesn't make sense either but not everyone got the memo).
yawaramin 9 hours ago [-]
I literally, just a couple of days ago, got an email from Microsoft Azure asking me to update my 2FA. And I had already set up a passkey, so 2FA shouldn't even have been needed!
Macha 12 hours ago [-]
I wonder how well this correlates with people for whom 2FA adoption was not a choice they made in the first place, but a thing that "NPM insists we do". For them, this email is not all that different from the emails that required them to set up 2FA in the first place.
sunaookami 4 hours ago [-]
I hope this is not true for those that made packages which are downloaded a million times per week.
ecshafer 21 hours ago [-]
> Normally they're full of spelling mistakes and unprofessional grammar.
This is the case when you are doing mass phishing attacks trying to get the dumbest person you can. In those cases, they want the person who will jump through hoop after hoop and keep giving them money. With a more technical audience you wouldn't want that; you just need one smart person to make one mistake.
jve 22 hours ago [-]
> Normally they're full of spelling mistakes and unprofessional grammar. The domain was also plausible.
I don't get these arguments. Yeah, of course I was always surprised phishing emails give themselves away with mistakes (maybe non-native speakers create them without any spellcheck or whatever), and it was straightforward to improve that. But whatever the text, if I open a link from an email, the first thing I look at is the domain. Not how the site looks. The DOMAIN NAME! Am I on a trusted site? Well, a .help TLD would SURELY ring a bell and involve research into whether this domain is associated with npm in any way.
At some point my bank redirected me to some weird domain name... meh, that was annoying, I had to research whether that domain was really associated with them... it was. But they just put their users at risk if they want the domain name not to mean trust and feed whatever domains as acceptable. That is NOT acceptable.
jonhohle 20 hours ago [-]
Nearly every email link now goes through an analytics domain that looks like a jumble of random characters. In the best case they end up at the expected site, but a significant number go to B2B service provider of the week’s domain.
There are more than a few instances when I’ve created an account for a service I know I’ve never interacted with before, but my password manager offered to log me in because another business I’ve used in the past used the same service (medical providers, schools, etc.).
Even as a technically competent person, I received a legitimate email from Google regarding old shadow accounts they were reconciling from YouTube, and I spent several hours convinced it was a phishing scheme. It put me on edge for nearly a week that there was no way I could be sure critical accounts were safe, and, worse yet, no way someone like my parents or in-laws could be.
bluGill 22 hours ago [-]
Unicode means that domain names can be different yet look the same unless you really look closely. Even if you stick to ASCII, l (letter) and 1 (number) look so close that I would expect many people not to see the difference if it isn't pointed out. (Remember, you don't control the font in use; some fonts differentiate more than others.)
400thecat 21 hours ago [-]
I think Firefox allows you to display URLs without Unicode.
mdaniel 9 hours ago [-]
Given a test of https:// news.ycombınator.com [1] it seems that no, hovering over the URL shows it in its rendered form
    data:text/html,<meta charset="utf-8"><body><a href="https://news.ycomb%C4%B1nator.com/login">login to news.ycombinator.com</a></body>
and only by clicking it and getting an NXDOMAIN does one see the Punycode:
> We can’t connect to the server at news.xn--ycombnator-1ub.com.
1: Ironically HN actually mutated that link, I pasted the unicode version news.ycombınator.com (which it seems to leave intact so long as I don't qualify it with a protocol://)
400thecat 21 hours ago [-]
More alarming than the .help domain is that the domain was registered just a few weeks ago.
I got scammed just last week when paying with a credit card online, and only later, when investigating, discovered several identical e-shops on different .shop domains registered just months ago.
If a domain is less than a year old, it should raise red flags.
worble 21 hours ago [-]
> Normally they're full of spelling mistakes and unprofessional grammar.
Frankly I can't believe we've trained an entire generation of people that this is the key identifier for scam emails.
Because native English speakers never make a mistake, and all scammers are fundamentally unable to use proper grammar, right?
pixl97 21 hours ago [-]
I mean most of the time it's the companies themselves that teach people bad habits.
MyBank: "Don't click on emails from suspicious senders! Click here for more information" { somethingweirdmybank.com } -- Actual real email from my bank.
Like, wtf. Why are you using a totally different domain.
And the companies I've worked for do this kind of crap all the time. "Important company information" { learnaboutmycompany.com } -- Like, is this a random domain someone registered. Nope, actually belongs to the place I work for when we have a well known and trusted domain.
Oh, and it's the best when the legit sites have their own spelling mistakes.
IshKebab 18 hours ago [-]
I don't see why you're surprised. It is a key identifier for scam emails. Or at least it was until recently. I don't think anyone was under the impression that scammers could never possibly learn good English.
quitit 21 hours ago [-]
For regular computer users I recommend using a password manager to prevent these types of phishing scams. As the password manager won't autofill on anything but the correct login website, the user is given a figurative red flag whenever the autofill doesn't happen.
tom1337 22 hours ago [-]
At least 1Password on iOS checks the URLs, and if you use the extension to fill the password anyway, you get a prompt informing you that you are filling on a new URL which is not associated with the login item.
skeeter2020 23 hours ago [-]
>> So far, it seems to be a bog-standard phishing email
The fact this is NOT the standard phishing email shows how low the bar is:
1. the text of the email reads like one you'd get from npm in the tone, format and lack of obvious spelling & grammatical errors. It pushes you to move quicker than you might normally, without triggering the typical suspicions.
2. the landing domain and website copy seem really close to legit, no obfuscated massive subdomain, no uncanny login screen, etc.
All the talk of AI disrupting tech; this is an angle where generative AI can have a massive impact in democratizing the global phishing industry. I do agree with you that there's likely many more authors who have been tricked and we haven't seen the full fallout.
spoaceman7777 22 hours ago [-]
It's just a phishing email... there isn't anything novel going on here.
Also, I really don't see what this has to do with gen AI, or what "democratizing the global phishing industry" is supposed to mean even.
Is this comment AI generated?
ApolloFortyNine 20 hours ago [-]
If you're someone who barely speaks English in a third-world country running a phishing campaign, you can have ChatGPT write you a professional-sounding email in 10 seconds. If you convince it you're running a phishing test, you can probably even have a back and forth about the entire design and wording of the email and phishing site.
That's what I'm guessing OP meant.
diggan 23 hours ago [-]
Both of those points are fairly common in phishing emails, at least the ones I receive. Cloning the HTML/CSS for phishing has been done for as long as I've been able to receive emails, don't even need LLMs for that :)
r_lee 23 hours ago [-]
How does AI relate to this in any way? You can easily clone websites by just copying via devtools, like, seriously.
Same with just copying email HTML.
It's actually easier to make it look exactly the same than to make it different in some ways.
mvieira38 22 hours ago [-]
You can make your phishing bot write tailor-made messages and even respond
polynomial 21 hours ago [-]
The article says the victim used 2fa. How did the attacker know their 2fa in order to send them a fake 2fa request?
fastest963 10 hours ago [-]
They MITM'd the real sign-in on npm. So npm actually sent them a real 2FA code, but the user entered it on the phishing site. The attacker then relayed that to the real npm.
eviks 23 hours ago [-]
> This website contained a *pixel-perfect copy* of the npmjs.com website.
Not sure how this emphasis is of any importance: your brain doesn't have a pixel-perfect image of the website, so you wouldn't know whether it's a perfect replica or not.
Let the silicon dummies in the password manager do the matching, don't strain your brain with such games outside of entertainment
stanac 22 hours ago [-]
My password manager is a separate app, so I always have to manually copy/paste the credentials. That's because I believed that approach to be more secure; now I see it's replacing one attack vector with another.
behindsight 12 hours ago [-]
> I always have to manually copy/paste the credentials.
I really hope you clear your clipboard history entirely after doing your copy/paste method because your credentials would otherwise persist for any other application with clipboard perms to just exfiltrate (which has already been exploited in the wild before)
mtlynch 11 hours ago [-]
>I really hope you clear your clipboard history entirely after doing your copy/paste method because your credentials would otherwise persist for any other application with clipboard perms to just exfiltrate (which has already been exploited in the wild before)
How does that work?
If a malicious website reads the clipboard, what good is knowing an arbitrary password with no other information? If the user is using a password manager, presumably they don't reuse passwords, so the malicious website would have to guess the matching username + URL where the password applies.
If you're talking about a malicious desktop app running on the same system, it's game over anyway because it can read process memory, read keystrokes, etc.
Sidenote: Most password managers I've used automatically clear the clipboard 10-15s after you copy a credential.
behindsight 9 hours ago [-]
Interesting questions. I can later provide links to more in-depth security resources that go over similar points if you're interested, but I'm currently on my phone, so I will just jot down some quick surface-level points.
> If a malicious website reads the clipboard, what good is knowing an arbitrary password with no other information?
Even assuming unique username+URL pairings, clipboard history can store multiple items, including emails or usernames, which could be linked to any data breach and service (or just shotgunned at the most popular services).
It's not really a "no other information" scenario, and you drastically reduce the effort required for an attacker regardless.
> If you're talking about a malicious desktop app running on the same system, it's game over anyway because it can read process memory, read keystrokes, etc.
The app does not have to be overtly malicious; AccuWeather (among others) was caught exfiltrating users' clipboard data to an analytics company for over 4 years, and that company may or may not have gotten compromised. Even if the application you use directly is non-malicious, you are left hoping that wherever your data ends up isn't a giant treasure trove/honeypot waiting to be compromised by attackers.
The same reasoning can be applied to pretty much anything, really: why protect anything locally, since they could just keylog you or intercept requests you make?
In that case it would be safer for everyone to run Qubes OS and stringently check any application added to their system.
In the end it's a balancing act between convenience and security, where striving for absolute perfection ends up being the enemy of good.
> Sidenote: Most password managers I've used automatically clear the clipboard 10-15s after you copy a credential.
That is true, good password managers took these steps precisely to reduce the clipboard attack surface.
Firefox also took steps in 2021 to also limit leaking secrets via the clipboard.
zahlman 5 hours ago [-]
> Even if assuming unique username+url pairings, clipboard history can store multiple items including emails or usernames which could be linked to any data breach and service (or just shotgunned towards the most popular services). It's not really a "no other information" scenario and you drastically reduce the effort required for an attacker regardless.
I always manually type the emails and usernames for this reason.
(A keylogger is already game over, so.)
eviks 22 hours ago [-]
What's the most common example of an alternative attack with autofill?
karel-3d 5 hours ago [-]
Just recently there was a clickjacking attack that affected the most popular password manager extensions. It tricked the managers into filling passwords on random pages; it worked on almost all extensions and all pages.
This doesn't seem to be "passwords on random pages", only "Personal Data + Credit Card"; passwords are domain-specific unless the website is hacked itself.
> The attacker can only steal credentials for the vulnerable domain.
karel-3d 4 hours ago [-]
ok that's nice
kaoD 21 hours ago [-]
The password manager's autofill browser extension gets compromised.
eviks 4 hours ago [-]
Common? Which of the good pw managers' extensions have been compromised in the last year?
EE84M3i 8 hours ago [-]
This used to happen with some frequency but I haven't heard of it happening in some time now.
SAI_Peregrinus 18 hours ago [-]
The one I use (KeePassXC) is also a separate app, but there are browser extensions for the major browsers to support autofill. Of course plenty of sites don't actually work with autofill, even the browser builtin autofill, because they don't mark the form fields properly. So autofill not working is common enough that it's not a reliable red flag. Separate password managers have the advantage that they can store passwords for things other than websites, and secret data other than passwords (arbitrary files). KeePassXC's auto-type can work with any application, not just a browser.
eviks 3 hours ago [-]
> Of course plenty of sites don't actually work with autofill, even the browser builtin autofill, because they don't mark the form fields properly.
Can't KeePass use the autotype functionality but still filter by the website domain/host that it gets from the extension? So basically you'd still never have to copy & paste, and any site requiring it would be a reliable red flag?
welder 22 hours ago [-]
Please change that now! It's the muscle memory of never typing a password that prevents you from being victim to phishing.
udev4096 19 hours ago [-]
A MITM proxy can replicate the whole site; it's almost impossible to distinguish from the real one other than checking the domain.
Forget about phishing, it's a red herring. The actual solution to this is code signing and artifact signing.
You keep a private key on your local machine. You sign your code and artifacts with it. You push them. The packages are verified by the end-user with your public key. Even if your NPM account gets taken over, the attacker does not have your private key, so they cannot publish valid packages as you.
But because these platforms don't enforce code and artifact signing, and their tools aren't verifying those signatures, attackers just have to figure out a way to upload their own poison package (which can happen in multiple ways), and everyone is pwnd. There must be a validated chain of trust from the developer's desktop all the way to the end user. If the end user can't validate the code they were given was signed by the developer's private key, they can't trust it.
This is already implemented in many systems. You can go ahead and use GitHub and 1Password to sign all your commits today, and only authorize unsealing of your private key locally when it's needed (git commits, package creation, etc). Then your packages need to be signed too, public keys need to be distributed via multiple paths/mirrors, and tools need to verify signatures. Linux distributions do this, Mac packages do, etc. But it's not implemented/required in all package managers. We need Npm and other packaging tools to require it too.
After code signing is implemented, then the next thing you want is 1) sign-in heuristics that detect when unusual activity occurs and either notifies users or stops it entirely, 2) mandatory 2FA (with the option for things like passkeys with hardware tokens). This will help resist phishing, but it's no replacement for a secure software supply chain.
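A minimal sketch of that chain with plain GPG (the package name is hypothetical; npm itself doesn't verify any of this today, which is exactly the complaint):

    # developer, on their own machine, key unsealed only for this action
    npm pack                                    # produces mypkg-1.2.3.tgz
    gpg --armor --detach-sign mypkg-1.2.3.tgz   # produces mypkg-1.2.3.tgz.asc

    # end user, after fetching the developer's public key via multiple mirrors
    gpg --verify mypkg-1.2.3.tgz.asc mypkg-1.2.3.tgz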
Strongly agree on artifact signing, but it has to be real end-to-end. If the attacker can trigger your CI to sign with a hot key, you still lose. What helps: 1) require offline or HSM-backed keys with human approval for release signing, 2) enforce that published npm artifacts match a signed Git tag from approved maintainers, 3) block publishes after auth changes until a second maintainer re-authorizes keys. In today’s incident the account was phished and a new token was used to publish a browser-side wallet-drainer. Proper signing plus release approvals would have raised several hard gates.
smw 19 hours ago [-]
"2) mandatory 2FA (with the option for things like passkeys with hardware tokens)."
No, with the _requirement_ for passkeys or hardware tokens!
0xbadcafebee 14 hours ago [-]
They don't work everywhere, and when they do work they're not a panacea. It's like host-based security: if you get past this one barrier... what, everything is completely pwnd? You need defense in depth. That means the authentication factor(s) must not be the final word in security. So not using a passkey or hardware token shouldn't be a death knell.
hiccuphippo 22 hours ago [-]
Maybe email software should add an option to make links unclickable, or show a box with the clear link (and highlight the domain) before letting the user go through it.
They already make links go through redirects (to avoid referrer headers?) so it's halfway there. Just make the redirect page show the link and a go button instead of redirecting automatically. And it would fix the annoyance that is not being able to see the real domain when you hover the link.
elric 22 hours ago [-]
So many legit emails contain links that pass through some kind of URL shortener or tracker (like mailchimp does). People are being actively conditioned to ignore suspicious looking URLs.
ecshafer 21 hours ago [-]
I worked for a company where, as part of phishing training, we were told not to click on suspicious links. However, all links were put through a proxy link shortener, so www.google.com becomes just proxy.com/randomstring, like an internal link shortener/MITM. But this means I can no longer check the URL to see if it's legitimate.
vitonsky 20 hours ago [-]
Just for context: the DuckDB team consistently ignores security practices.
The only documented method to install DuckDB on a laptop is to run `curl https://install.duckdb.org | sh`.
Fundamentally, doesn't the security depend entirely on whether https is working properly? Even the standard package repos are relying on https right?
Like, I don't see how it's different than going to their website, copying their recommended command to install via a standard repo, then pasting that command into your shell. Either way, you are depending entirely on the legitimacy of their domain right?
dansmith1919 20 hours ago [-]
I assume OP's point is "you're running a random script directly into your shell!!"
You're about to install and run their software. If they wanted to do something malicious, they wouldn't hide it in their plaintext install script.
tomsmeding 19 hours ago [-]
It is sometimes possible to detect server-side whether the script is being piped straight into `sh` or not. The reason is that `sh` only reads from its input as far as it has gotten in the script, so it takes longer to reach the end than if you had curl print the result to the terminal directly (or pipe it to a file).
A server can use this to maliciously give you malware only if you're not looking at the code.
Though your point about trust is valid.
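The boring countermeasure is to split the download from the execution, so the server can't tell readers from runners (the URL is a placeholder):

    curl -fsSL https://example.org/install.sh -o install.sh
    less install.sh     # actually read it
    sh install.sh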
kevinrineer 18 hours ago [-]
`curl URL | sudo sh` doesn't have a means of verification of what the contents of the URL points to.
Sure, a binary can be swapped in other places too, but binaries can generally be verified with hashes and signatures. Also, a plaintext install script often has this problem at another layer of recursion (the script usually pulls from URLs that the runner of the script cannot verify with this method).
zahlman 5 hours ago [-]
> Like, I don't see how it's different than going to their website, copying their recommended command to install via a standard repo, then pasting that command into your shell.
Suppose the site got compromised. If you separately explicitly download the install script first, in principle you can review it before running it.
Same deal with installing Python source packages (sdists). Arbitrary code included in the package runs at installation time (with the legitimate purpose of orchestrating any needed build steps, especially for non-Python code, which could be arbitrarily complex). This is worse than importing the installed code and letting it run whatever top-level code, because the entire installation is normally automated and there's no point where you review the code before proceeding. We do generally accept this risk in the Python ecosystem, but demanding to install only from pre-built wheels is safer (it just isn't always possible).
(Pip has the problem that this still happens even if you use its "download" command — because it wants to verify that building the project would produce a package with a name and version that match what it says in the file name and/or other metadata, and because it wants to know what the dependencies are — and in the general case it's permitted to depend on the build process to tell you this, because the system for conditional-on-platform dependencies isn't powerful enough for everyone's use case. See also: https://zahlman.github.io/posts/2025/02/28/python-packaging-...)
0xbadcafebee 14 hours ago [-]
> Fundamentally, doesn't the security depend entirely on whether https is working properly? Even the standard package repos are relying on https right?
They should only need http. You don't need https at all if your package is signed. The package/installer/app/etc could come from anywhere, modified by anyone, at any level. But if it's not signed by the dev's private key (which only exists on their laptop [or hardware token], protected by a password/key manager), it's invalid. This avoids the hundred different exploits between the dev and the user.
What's actually crazy about this is: if you're already making the user do a copy and paste, it doesn't have to be one line. Compare that line above to something like:
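    # sketch: <EXPECTED_SHA256> stands in for a checksum published out-of-band
    curl -fsSL https://install.duckdb.org -o install.sh
    echo "<EXPECTED_SHA256>  install.sh" | sha256sum -c - && sh install.sh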
All you have to do is copy and paste that snippet, and the same thing will happen as the one-liner, except it will only work if the sha256sum is valid. Now this isn't perfect of course, we should be using artifacts signed by a private key. But it's better than just praying.
It is amazing that DuckDB could be worse than decade-old PHP at something like this.
mdaniel 9 hours ago [-]
curl -f
I'm super sad they didn't make --fail the default, and people that don't care could opt-out with --no-fail
Ekaros 16 hours ago [-]
Running code as privileged user is always a risk.
Running scripts even more so.
One day someone might decide simply to exploit whatever trust they have.
Actually, I wonder how much the black market would pay for the rights to change a reasonably popular script like that...
vitonsky 20 hours ago [-]
The current incident confirms that we can't trust the authors of DuckDB, because they can't withstand a trivial phishing attack.
Tomorrow they will fall for it again, and attackers will replace the binary files that users download with this random script. Or the script will steal crypto/etc.
To make the attack vector difficult for hackers, it's preferable to download any software as packages. On Linux it looks like `apt install python3`.
The benefits are:
1. Repositories are immutable, so an attacker can't replace the binary for a specific version even if they hack all of DuckDB's infrastructure. A remote script may be replaced at any time to run any code.
2. Some repositories have a strict review process, so there are external reviewers who require new versions to pass security checks before upload.
riku_iki 16 hours ago [-]
> On linux it looks like `apt install python3`.
For macOS they have it in brew, which you can also use on Linux; it is also available in nix.
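For example (package names as they exist in those repos at the time of writing):

    brew install duckdb     # macOS, or Homebrew on Linux
    nix-shell -p duckdb     # Nix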
I think the problem is that there are so many Linux distros with their own package repositories that it is a very nontrivial task to get a package included in most of them if the maintainers are not proactively interested.
speedgoose 20 hours ago [-]
I also don’t know why using a unix pipe instead of saving in the file system and executing the file is a significant security risk. Perhaps an antivirus could scan the file without the pipe.
themafia 11 hours ago [-]
> depend entirely on whether https
> depending entirely on the legitimacy of their domain
Just move the phishing attack down each step of your dependency chain.
artemisart 9 hours ago [-]
Do you know about other security issues? If it's only about curl | sh, it really isn't a problem: if the same website showed you a hash to check the file, the hash would be compromised at the same time as the file, and with a package manager you still end up executing code from the author, which is free to download and execute anything else. Most package managers don't add security.
0cf8612b2e1e 19 hours ago [-]
They also publish binaries on their GitHub if you prefer that.
weinzierl 22 hours ago [-]
Is this related to npm debug and chalk packages being compromised?
I've been critical of blockchain in the past because of the lack of use cases, but I've gotta say crypto functions pretty well as an underlying bug bounty system. This probably could have been a much more insidious and well hidden attack if there wasn't a quick payoff route to take.
tripplyons 19 hours ago [-]
That argument only really makes sense if you assume the attackers aren't rational actors. If there was a better, more destructive way to profit from this kind of compromise, they would either do it or sell their access to someone who knew how to do it.
What is funny, again, is how many "young developers" made fun of old-timers' package managers like Debian being so slow to release new versions of packages.
But never ever anyone was rooted because of malware that was snuck into an official .deb package.
That was the concept of "stable" in the good old times, when software was really an "engineering" field.
SahAssar 14 hours ago [-]
> But no one was ever rooted because of malware snuck into an official .deb package.
We got pretty close with the whole XZ thing. And people generated predictable keys due to a flaw in a debian patch to openssl.
This stuff is hard and I'm not saying that npm is doing well but seems like no large ecosystem is doing exceptionally well either.
cenamus 4 hours ago [-]
I'd say just about every major Linux distro is doing about 2 orders of magnitude better than npm
zahlman 5 hours ago [-]
> But no one was ever rooted because of malware snuck into an official .deb package.
Sure. The tradeoff is that when there's a zero-day, you have to wait for Debian to fix it, or to approve and integrate the dev's fix. Finding malware is one thing; finding unintentional vulns is another.
12 hours ago [-]
lima 2 hours ago [-]
Using Security Keys/FIDO2 instead of TOTP codes completely solves trivial phishing attacks like this one.
I guess it's hands off the npm jar for a week or three 'cause I am expecting a bunch more packages to be affected at this point.
theanonymousone 20 hours ago [-]
How do these things mostly happen for npm? Why not (much) PyPI or Maven? Or do they?
zahlman 5 hours ago [-]
Python has a heavy standard library, and the most popular third-party libraries tend to have simple dependency graphs because they can lean on that standard library so much. Many of them are also maintained under umbrellas such as the Python Software Foundation (for things like `requests`) or the Python Packaging Authority (for build tools etc.). So there are many eyes on everything all the time, those eyes mostly belong to security-conscious people, and they all get to talk to each other quite a bit.
There was still a known compromise recently: https://blog.pypi.org/posts/2025-07-31-incident-report-phish... (`num2words` gets millions of monthly downloads, but still for example two orders of magnitude less than NumPy). Speaking of the communication I mentioned in the first paragraph, one of the first people reporting seeing the phishing email was a CPython core developer.
Malware also still does get through regularly, in the form of people just uploading it. But there are automated measures against typo-squatting (you can't register a name that's too similar to existing names, or which is otherwise blacklisted) and for most random crap there's usually just no reason anyone would find out about it to install it.
johnisgood 20 hours ago [-]
Or Cargo. I compiled Zed in release mode and it pulled in 2000 dependencies. That does not fill me with confidence.
hu3 20 hours ago [-]
On a related note, the maintainer of the compromised npm packages debug and chalk, who got pwned, is creating an operating system in Rust.
Good to know! Hopefully others will be delighted to see it, too.
I wonder if it really is only npm that got compromised.
bakugo 24 hours ago [-]
> According to the npm statistics, nobody has downloaded these packages before they were deprecated
Is this actually accurate? Packages with weekly downloads in the hundreds of thousands, yet in the 4+ hours that the malicious versions were up for, not a single person updated any of them to the latest patch release?
hfmuehleisen 23 hours ago [-]
DuckDB maintainer here, thanks for flagging this. Indeed the npm stats are delayed. We will know in a day or so what the actual count was. In the meantime, I've removed that statement.
belgattitude 23 hours ago [-]
I think you should unpublish rather than deprecate... `npm unpublish package@version` ... It's possible within 72h. One reason is that the patched version contains -alpha... so tools like npm-check-updates would keep 1.3.3 as the latest release for those who installed it
hfmuehleisen 23 hours ago [-]
Yes we tried, but npm would not let us because of "dependencies". We've reached out to them and are waiting for a response. In the meantime, we re-published the packages with newer versions so people won't accidentally install the compromised version.
herpdyderp 22 hours ago [-]
At least one thing is clear from this week: npm is too slow to respond.
How long does Microsoft get to leave wide-open holes before the government cracks down on their willful ignorance? Unless people go to jail, literally nothing will happen.
zahlman 5 hours ago [-]
TIL that NPM is a subsidiary of GitHub, making this indeed Microsoft's responsibility.
npm stats lag. We observed installs while the malicious versions were live for hours before removal. Affected releases we saw: duckdb@1.3.3, @duckdb/duckdb-wasm@1.29.2, @duckdb/node-api@1.3.3, @duckdb/node-bindings@1.3.3. Same payload as yesterday’s Qix compromise. Recommend pinning and avoiding those versions, reviewing diffs, and considering a temporary policy not to auto-adopt fresh patch releases on critical packages until they age.
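A quick way to check whether one of these landed in your tree (package names from the list above):

    # Show which versions of the affected packages your lockfile resolved:
    npm ls duckdb @duckdb/duckdb-wasm @duckdb/node-api @duckdb/node-bindings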
diggan 23 hours ago [-]
I think that's pretty unlikely. I'm not even a high-profile npm author, and any npm package I publish ends up being accessed/downloaded within minutes of first publish, and on any update after that.
I also know of projects that read the update feeds and kick off CI jobs after any dependency is updated, to automatically test version upgrades; surely at least one dependent of DuckDB is doing something similar.
belgattitude 23 hours ago [-]
[dead]
karel-3d 5 hours ago [-]
npm actually does send these emails. They are about setting up 2FA though. And never have this sense of urgency.
"Hi, XXXX! It looks like you still do not have two-factor authentication (2FA) enabled on your npm account.
To enable 2FA, please follow the instructions found here."
koakuma-chan 23 hours ago [-]
Should enforce passkeys not 2FA
nodesocket 23 hours ago [-]
I think just supporting yubikeys is sufficient.
KevinMS 23 hours ago [-]
Yubikeys lock up my Firefox on both Windows and Mac, no thanks
nodesocket 22 hours ago [-]
Mine works flawlessly in Chrome on macOS. Maybe you got a defective one, or try factory-resetting it.
koakuma-chan 23 hours ago [-]
I have two yubikeys lying around, how do I use them? I don't even have the correct hole in my laptop or in my phone to insert them
egorfine 21 hours ago [-]
It goes into the square hole.
BenjiWiebe 22 hours ago [-]
You can use an adapter (usb-a to usb-c). Or are they NFC capable? Some models are.
nodesocket 23 hours ago [-]
This is a joke right? Can’t say I’ve ever heard of USB ports referred to as “holes”.
koakuma-chan 22 hours ago [-]
> Can’t say I’ve ever heard of USB ports referred to as “holes”.
I cannot be bothered to remember every hole name. They're all USB anyway; the difference is that some are A, C, or Lightning. I bought a new MacBook and it has that magnet hole, what is that called? I'm not following.
SOLAR_FIELDS 17 hours ago [-]
Are you not around hardware that much? This is stuff people who work in tech deal with every day, it's too hard to keep track of the names of the three different ports that you use ubiquitously? When someone asks you what charging port you need, do you just say "big square one" or "the iphone one"? Do you then have to clarify "the old iphone one, not the new one"?
koakuma-chan 17 hours ago [-]
> This is stuff people who work in tech deal with every day
The stuff I deal with every day is centering divs
> it's too hard to keep track of the names of the three different ports
it's more than three ports.
koakuma-chan 16 hours ago [-]
Also USB A is not even square, it's a rectangle
SOLAR_FIELDS 15 hours ago [-]
Point still stands. Maybe it’s 5 if we are being charitable. Do you also call skillets “flat pan thing I cook with”?
khc 11 hours ago [-]
not being a native English speaker, I actually do this all the time
koakuma-chan 14 hours ago [-]
You mean frying pan?
koakuma-chan 23 hours ago [-]
No I'm serious. I used to work on a PC and I had the correct hole, but I never figured out how to make yubikey useful and of course I couldn't use it with my phone. Maybe I'm missing something?
Semaphor 22 hours ago [-]
If it supports NFC, you can use that (mine do, I use them on my phone), otherwise you’d need an adapter, which is clunky but workable.
cr125rider 23 hours ago [-]
How is that different?
semiquaver 23 hours ago [-]
Passkeys are unphishable because there is nothing to type in. And they are locked to an origin by design, so you can’t accidentally use one on the wrong domain because the browser simply won’t do it.
jve 22 hours ago [-]
... and they are not transferrable, tied to BigCorp & Friends.
Semaphor 22 hours ago [-]
I use a hardware key as passkey where supported, nothing ties me to anything but those keys. Also there are OSS software managers that support them, like KeePass and friends.
wavemode 21 hours ago [-]
does your hardware key work on mobile? or do you now need to maintain two keys for every service?
Semaphor 4 hours ago [-]
> does your hardware key work on mobile?
Yes, they support NFC
> or do you now need to maintain two keys for every service?
I maintain 4 keys so I have backups. In most cases registering additional keys is no problem, and this is only needed when signing up.
vel0city 20 hours ago [-]
Yes, my hardware keys work on my mobile devices as well.
> do you now need to maintain two keys for every service?
I do maintain multiple keys for every service. I wouldn't say it's a lot of maintenance, any more than a far more secure "remember me" box is "maintenance".
When I register for a new service, I add my hardware token on my keychain as a passkey. I sign in on my laptop for the first time for a service I'll use there more than once, I make a passkey. I sign in on my desktop for the first time, I make a passkey, maybe make a spare in my password manager. Maybe if it's something I use on my phone, I'll make a passkey there as well when I sign in for the first time. When I get around to it, I'll add the spare hardware token I keep in a drawer. But it's not like "I just signed up for a new service, now I must go around to every device and make a new passkey immediately." As long as I've got a couple of passkeys at registration time, I'm probably fine.
Lose my laptop? It's ok, I've got other passkeys. Lose my keys? It's ok, I've got other passkeys. My laptop and keys get stolen at the same time? It's ok, I've got other passkeys.
It's really not that hard.
udev4096 19 hours ago [-]
It is on the majority of self-hosted password managers. Vaultwarden, the most popular, can transfer passkeys.
koakuma-chan 23 hours ago [-]
Passkey only works when you're on the correct website
diggan 23 hours ago [-]
Use a password manager (one that isn't too buggy and/or doesn't suck) and you get the same thing for both TOTP and passwords.
ApolloFortyNine 20 hours ago [-]
As mentioned elsewhere in this thread, the password manager failing to autofill is hardly unheard of.
diggan 19 hours ago [-]
As also mentioned elsewhere in this submission, it doesn't matter how often autofill breaks/works. There are two cases where it breaks: The accounts not showing up in the password manager modal, and the website autofill not working. The first is what prevents phishing, the second doesn't really matter to prevent phishing or not.
The idea is that if your password manager doesn't show the usual list of accounts (regardless if the actual autofill after clicking the account works or not), you double-check the domain.
yawaramin 9 hours ago [-]
Yes, the idea you are presenting is that the human being must manually check for mistakes. As should be clear by now, this idea does not work at scale. Passkeys will automate and enforce the check, removing human error from the equation.
diggan 3 hours ago [-]
> Yes, the idea you are presenting is that the human being must manually check for mistakes.
Not at all? The password manager handles that automatically, have you never used a password manager before?
> Passkeys will automate and enforce the check
What happens to the passkey when the origin changes, is it automatically recognising it as the new domain without any manual input? Curious to see what magic is responsible for that
koakuma-chan 23 hours ago [-]
Npm can't force people to use a password manager
diggan 22 hours ago [-]
Nor does TOTP+password lock you to one authentication provider indefinitely. Tradeoffs :)
maltee 22 hours ago [-]
You can always register a new passkey with the site if you want to switch authentication providers, can’t you?
diggan 20 hours ago [-]
Yeah, I guess that'd work if I had a couple of accounts, but since there are a bunch of them, I really need proper import/export to feel comfortable with moving to it. I just know I'd punt the task of migrating everything if I have to go account-by-account to migrate away.
Considering that it'd add work for me today, and future work, with no additional security benefit compared to my current approach, it just doesn't seem worth it.
vel0city 20 hours ago [-]
I've got passkeys from multiple "authentication providers" available on all of my devices. This isn't a tradeoff.
ljlolel 22 hours ago [-]
You can if you just force passwords longer than people can memorize or even want to write down (assigned 24+ characters)
koakuma-chan 22 hours ago [-]
It's just gonna be on a sticky note hanging on the screen or under the keyboard
hu3 21 hours ago [-]
careless people just copy paste those
243423443 23 hours ago [-]
Care to explain?
vladvasiliu 23 hours ago [-]
The actual URL in the browser is part of what the passkey signs. So if you go to totallynotascam.com which turns out to be some dude intercepting and passing the connection to npm, the signature would be refused by npm since it wouldn't be for the correct domain.
codedokode 22 hours ago [-]
Unlike humans.
operator-name 23 hours ago [-]
The browser ensures that a passkey can only be used on the correct site.
ritcgab 21 hours ago [-]
For critical infra projects like this, making a release should require at least three signatures from different maintainers. In fact, I am surprised that this is not a common practice.
A few concrete datapoints from our analysis of this incident that may help cut through the hand-waving:
1. This is the same campaign that hit Qix yesterday (https://socket.dev/blog/npm-author-qix-compromised-in-major-...). The injected payload is byte-for-byte behaviorally identical. It hooks fetch, XMLHttpRequest, and common wallet provider APIs and live-rewrites transaction payloads to attacker addresses across ETH, BTC, SOL, TRX, LTC, BCH. One tell: a bundle of very distinctive regexes for chain address formats, including multiple Solana and Litecoin variants.
2. Affected versions and timing (UTC) that we verified:
- duckdb@1.3.3 at 01:13
- @duckdb/duckdb-wasm@1.29.2 at 01:11
- @duckdb/node-api@1.3.3 at 01:12
- @duckdb/node-bindings@1.3.3 at 01:11
Plus low-reach test shots: prebid@10.9.1, 10.9.2 and @coveops/abi@2.0.1
3. Payout so far looks small. Tracked wallets sum to roughly $600 across chains. That suggests speed of discovery contained damage, not that the approach is harmless.
What would actually move the needle:
=== Registry controls ===
- Make passkeys or FIDO2 mandatory for high-impact publisher accounts. Kill TOTP for those tiers.
- Block publishing for 24 hours after 2FA reset or factor changes. Also block after adding a new automation token unless it is bound by OIDC provenance.
- Require signed provenance on upload for popular packages. Verify via Sigstore-style attestations. Reject if there is no matching VCS tag. (See the sketch after this list.)
- Quarantine new versions from being treated as “latest” for automation for N hours. Exact-version installs still work. This alone cuts the blast radius of a hijack.
=== Team controls ===
- Do not copy-paste secrets or 2FA. Use autofill and origin-bound WebAuthn.
- Require maker-checker on publish for org-owned high-reach packages. CI must only build from a signed tag by an allowed releaser.
- Pin and lock. Use `npm ci`. Consider an internal proxy that quarantines new upstream versions for review.
=== Detection ===
- Static heuristics catch this family fast. Wallet address regex clusters and network shims inside non-crypto packages are a huge tell. If your tooling sees that in a data engine or UI lib, fail the build.
Lastly, yes, training helps, but the durable fix is making the easy path the safe path.
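For what it's worth, here's the sketch for the provenance item above; the building blocks already exist in npm's own tooling, even if registries don't enforce them yet:

    # Publisher side: attach a Sigstore attestation tying the tarball to the
    # repo and commit that built it (requires a supported OIDC-capable CI):
    npm publish --provenance

    # Consumer side: verify registry signatures and provenance attestations
    # for everything in the dependency tree:
    npm audit signatures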
ptrl600 21 hours ago [-]
Is there a way to configure npm so that it only installs packages that are, like, a week old?
OK, a week for popular packages, anything else I'd manually review each update. It'd be a nice feature.
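Edit: npm's `before` config apparently gets close - it resolves every dependency as of a given timestamp. A sketch (GNU date syntax; exact behavior worth verifying against your npm version):

    # Resolve all versions as they existed one week ago:
    npm install --before="$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)"

    # Or make the cutoff sticky for a project via .npmrc:
    echo "before=2025-09-02T00:00:00Z" >> .npmrc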
HatchedLake721 20 hours ago [-]
Don’t auto install latest versions, pick a version up to a patch and use package-lock.json
mdaniel 20 hours ago [-]
That's only half the story, as I learned yesterday <https://news.ycombinator.com/item?id=45172213> since even with lock files one must change the verb given to npm/yarn to have them honor the lock file
So, regrettably, we're back to "train users" and all the pitfalls that entails
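For npm specifically, the lockfile-honoring verb is `npm ci` - plain `npm install` may not be:

    # May re-resolve semver ranges and rewrite package-lock.json:
    npm install

    # Installs exactly what package-lock.json records, and fails if it
    # disagrees with package.json instead of "fixing" it:
    npm ci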
3np 12 hours ago [-]
More importantly, avoid yarn[0] if you have a choice. They do not have a security posture fitting for 2025. There are way too many assumptions, like "helpful" "magic" guessing/inferring what the user "actually wants" to "make things just work". See also: corepack.
[0]: legacy 1.x projects aside
23 hours ago [-]
ebfe1 23 hours ago [-]
Is it just me who thinks this could have been prevented if npm admins put in some sort of cool-off period, so that new versions of packages can only be downloaded "x" hours after being published? That way the npm maintainer would get email notifications and could react immediately. And if it is an urgent fix, perhaps there could be a process for npm admins to approve bypassing the publication cool-off period.
Disclaimer: I don't know enough of npm/nodejs community so I might be completely off the mark here
herpdyderp 22 hours ago [-]
If I was forced to wait to download my own package updates I would simply stop using npm altogether and use something else.
kaelwd 21 hours ago [-]
It would be fine if you could still manually specify those versions eg. npm i duckdb@1.3.3 installs 1.3.3 but duckdb@latest or duckdb@^1.3 stays on 1.3.2 until 1.3.3 is ~a week old.
Except they'd have to have an override for when there's a zero day, at which point we're back where we started.
kaelwd 20 hours ago [-]
Versions with a serious vulnerability should be deprecated by the maintainer, which then warns you to use a newer version when installing. Yes, if an npm account is compromised the attacker could deprecate everything except their malicious version, but it would still significantly reduce the attack surface by requiring manual intervention vs the current npm install foo@latest -> you're fucked.
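The maintainer-side command for that already exists; something like (versions here just echo this incident):

    # Mark a known-bad version; npm shows the message as an install warning:
    npm deprecate duckdb@1.3.3 "compromised release, do not use"

    # Deprecating with an empty message un-deprecates:
    npm deprecate duckdb@1.3.3 ""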
herpdyderp 20 hours ago [-]
Brilliantly simple, that would work for me!
balder1991 22 hours ago [-]
It could be done like a rollout in % over time like app stores do.
kaelwd 20 hours ago [-]
NPM could also flag releases that don't have a corresponding github tag (for packages that are hosted on github), most of these attacks are publishing directly to NPM without any git changes.
mdaniel 9 hours ago [-]
I would love this for every dependency manager, and double extra bonus for "the tag NOW isn't the tag from when the dep was published"
They could definitely add a maker-checker process (similar to code review) for new versions and make it a requirement for public projects with x number of downloads per week.
hiccuphippo 22 hours ago [-]
They could force release candidates that the package managers don't automatically update to, but let researchers analyse the packages before the real release.
skylurk 23 hours ago [-]
I hate the janky password manager browser extensions but at least they make it hard to make this mistake.
smw 19 hours ago [-]
And passkeys or hardware tokens (FIDO/yubikeys) make it impossible
hoppp 12 hours ago [-]
Why the hell do we use npm?
Every dependency is a backdoor,
and making one malicious only takes a small slip-up.
xyst 9 hours ago [-]
> ... One of the maintainers read through this text and found it somewhat reasonable. He followed the link (now defunct) to a website hosted under the domain npmjs.help. This website contained a pixel-perfect copy of the npmjs.com website. He logged in using the duckdb_admin user and password, followed by 2FA. Again, the user profile, settings etc. were a perfect copy of the npmjs.com website including all user data. As requested by the email, he then re-set the 2FA setup.
This is absolutely wild that this did not raise _any_ red flags to this person.
red flag: random reset for 2FA ???
red flag: npmjs.help ???
red flag: user name and password not autofilled by browser ???
red flag: copy and pasting u/p combo into phishing site
If _developers_ can't even get this right, why do we expect dumb users to get this right? We are so cooked.
jeswin 20 hours ago [-]
Publishing could require clicking an email confirmation link, sent by npm.
petcat 20 hours ago [-]
It's all pointless theater because people want less friction to do what they want, not more. They'll just automate away the friction points like clicking an email confirmation link.
jeswin 18 hours ago [-]
If you're the author of ducklib, and you get an email asking "Did you just publish ducklib 2.4.1?" with a fair number of warnings in the mail text, will you click on the publish link?
I certainly wouldn't. And I don't see it as pointless theater. It requires deliberate action, and that's what's missing here.
udev4096 19 hours ago [-]
> This website contained a pixel-perfect copy of the npmjs.com website
This should not be considered high-effort or a sophisticated attack. The attacker probably used a MITM proxy, which can easily replicate every part of your site with very little initial configuration. Evilginx is the most popular one I can think of.
cefboud 22 hours ago [-]
> malicious code to interfere with cryptocoin transactions
Any idea what the interference was?
polynomial 21 hours ago [-]
Serious question, how did the attacking site (npmjs.help) know the victim's 2fa? ie. How did they know what phone number to send the 2fa request to?
feross 18 hours ago [-]
It was a relay. The fake site forwarded actions to the real npm, so the legit 2FA challenge was triggered by npm and the victim entered the code into the phishing page. The attacker captured it and completed the session, then added an API token and pushed malware. Passkeys or FIDO2 would have failed here because the credential is bound to the real domain and will not sign for npmjs.help.
yawaramin 9 hours ago [-]
And by 'fail' we mean that passkeys would have successfully prevented the attack.
feross 9 hours ago [-]
Correct!
xx_ns 21 hours ago [-]
It acted as a proxy for the real npm site, which was the one to send the request, intercepting the code when the user inserted it.
21 hours ago [-]
mediumsmart 24 hours ago [-]
Comes with the territory considering that npm is de facto the number one enshittification dependency by now. But no worries - this will scale beautifully.
downvotes appreciated but also happy to see one or two urls that would prove me wrong
eviks 23 hours ago [-]
In the spirit of a substantive discussion could you likewise share a couple that would prove you right?
mediumsmart 21 hours ago [-]
First of all I have a theory that nothing can be proven but I can't prove it.
Second - an example of a JavaScript-heavy, npm-utilizing, tracking-heavy / low-content site doesn't carry much weight in proving me right - my view is an assumption - but 2 examples of shitty tracking SEO AI garbage content blubber sites not using npm would substantially question my assumption... I am genuinely interested in the tech those sites would use instead.
eviks 21 hours ago [-]
If you have such a theory, how does it make sense to ask others to do the impossible and prove anything???
mediumsmart 21 hours ago [-]
that's a fortune cookie - please stay on topic :)
hiccuphippo 22 hours ago [-]
I think the downvotes are because enshittification is a different thing, intentionally done by the developers themselves.
mediumsmart 21 hours ago [-]
Granted, but the motivation is payment I think, and that originates elsewhere.
arewethereyeta 24 hours ago [-]
> An attacker published new versions of four of duckdb’s packages that included malicious code to interfere with cryptocoin transactions
How can anyone publish their packages?
OtherShrezzing 24 hours ago [-]
The attacker emailed a maintainer from a legitimate looking email address. The maintainer clicked the link and reset their credentials on a legitimate looking website. The attacker then signs into the legitimate duckdb account and publishes their new package.
This is the second high-profile instance of the technique this week.
arewethereyeta 23 hours ago [-]
2FA for such high profile packages should be enforced
jsheard 23 hours ago [-]
It is, if your packages are popular enough then npm will force you to enable 2FA. They started doing that a few years ago. It clearly doesn't stop everything though, the big attack yesterday went through 2FA by tricking the author into doing a "2FA reset".
diggan 22 hours ago [-]
> It is, if your packages are popular enough then npm will force you to enable 2FA.
Are they actively forcing it? I've received the "Remember to enable 2FA" email notifications from NPM since 2022 I think, but haven't bothered since I'm no longer publishing packages/updates.
Besides, the email conveniently mentions their "automation" tokens as well, which, when used for publishing updates, bypass 2FA fully.
Parent is exactly right! For critical infrastructure an un-phishable 2fa mechanism like passkeys or hardware token (FIDO2/yubikey) should be required! It would remove this category of attack completely.
frizlab 21 hours ago [-]
I take the downvote but I’d like to know why?
Passkeys are effectively and objectively a better security solution than password+2FA. Among other things, they are completely unphishable.
cesarb 19 hours ago [-]
> Among other things, they are completely unphishable.
From what I've heard, they're also unbackupable, and tied to the ecosystem used to create them (so if you started with an Apple desktop, you can't later migrate the passkeys to a Windows desktop, you have to go to every single site you've ever used and create new ones).
yawaramin 9 hours ago [-]
You can just create a new passkey on the new device after logging in. It's a non-issue.
3eb7988a1663 8 hours ago [-]
It is not a given that multiple services let you enroll multiple keys. How many years did it take before Amazon allowed multiple Yubikeys? Which means you are in a real pickle if you ever lose your one hardware device with keys (lost, stolen, bricked, whatever).
smw 19 hours ago [-]
You can't really backup hardware tokens, either? It's quite possible to use something like bitwarden/vaultwarden/1password as a password manager, and you can "backup" tokens quite easily without being tied to a particular mobile/desktop ecosystem.
frizlab 14 hours ago [-]
That’s not true anymore; you can migrate passkeys to another password manager now.
skeeter2020 23 hours ago [-]
for popular packages - and in this case - it is. These attacks (this one and yesterday's) are relay attacks, with the attacker in the middle between npm and the target.
koakuma-chan 23 hours ago [-]
He would have entered 2FA too
pneff 24 hours ago [-]
There is a detailed postmortem in the linked ticket explaining exactly how this happened.
masfuerte 24 hours ago [-]
This is the same phishing attack that hit junon yesterday.
I wish we could stop training people to click links in random messages just because we want to be able to track their movements online.
I have had people attempt fraud in my work with live calls as follow up to emails and texts. I only caught it because it didn't pass the smell test so I did quite a bit of research. Somebody else got caught in the exact same scam and I had to extricate them from it. They didn't believe me at first and I had to hit them over the head a bit with the truth before it sank in.
USE PASSKEYS. Passkeys are phishing-resistant MFA, which has been a US govt directive for agencies and suppliers for three years now[1]. There is no excuse for infrastructure as critical as NPM to still be allowing TOTP for MFA.
[1]https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-0...
Most times that I go to use some JS, Python, or (sometimes) Rust framework, I get a sinking feeling, as I see a huge list of dependencies scroll by.
I know that it's a big pile of security vulnerabilities and supply-chain attack risk.
Web development documentation that doesn't start with `npm install` seems rare now.
Then there's the 'open source' mobile app frameworks that push you to use the framework on your workstation with some vendor's Web platform tightly in the loop, which all your code flows through.
Children, who don't know how things work, will push any button. But experienced software engineers should understand the technology, the business context, and the real-world threats context, and at least have an uneasy, disapproving feeling every time they work on code like this.
And in some cases -- maybe in all cases that aren't a fly-by-night, or an investment scam, or a hobby project on scratch equipment -- software engineers should consider pushing back against engaging in irresponsible practices that they know will probably result in compromise.
* signed packages
enforce it for the top x thousand most popular packages to start
some basic hygiene about detecting unique new user login sessions would help as well
People will inevitably set up their CI system to sign packages, no human intervention needed. If they're smart & the CI system is capable of it they'll set it up to only build when a tag signed by someone approved to make releases is pushed, but far too often they'll just build if a tag is pushed without enforcing signature verification or even checking which contributors can make releases. Someone with access to an approved contributor's GitHub account can very often trigger the CI system to make a signed release, even without access to that contributor's commit signing key.
All these actions are teaching people to be dumb and make it more likely they’ll fall for a scam because the pattern has been normal before.
One example that always annoys me is that the website listing all of Proton's apps isn't at an address you'd expect, like apps.proton.me. It's at protonapps.com. Just... why? Why would you train your users to download apps from domains other than your primary one?
It also annoys me when people see this happening and point out how the person who fell for the attack missed some obvious detail they would have noticed. That's completely irrelevant, because everyone is stupid sometimes. Everyone can be stressed out and make bad decisions. It's always a good idea to make it harder to make bad decisions.
It's a PITA to coordinate between teams, and my team doesn't control the main domain. If I wanted my team's application to run on the parent domain, I would have to negotiate with the crayon eaters in IT to make a subdomain, point it at whatever server, and then if I want any other changes to be made, I'd have to schedule a followup meeting, which will generate more meetings, etc.
If I want to make any changes to the mycompany.othertld domain, I can just do it, with no approval from anyone.
Alternatively, yup, SOC2 is a thing: optionally create a ticket tracking the why, then open a PR against the IaC repo citing that ticket, have it ack-ed by someone other than the submitter, audit trail complete, change managed, the end
Like Citroen sends software update notifications for their cars from mmy-customerportal.com. That URL looks and sounds like a phisher's paradise. But somehow, it's legit. How can we expect any user to make the right decision when we push this kind of garbage in their face?
I think there’s a fairly straightforward way of fixing this: contact requests for email. The first email anybody sends you has an attachment that requests a token. Mail clients sort these into a “friend request” queue. When the request is accepted, the sender gets the token, and the mail gets delivered to the inbox. From that point on, the sender uses the token. Emails that use tokens can skip all the spam filters because they are known to be sent by authorised senders.
This has the effect of separating inbound email into two collections: the inbox, containing trustworthy email where you explicitly granted authorisation to the sender; and the contact request queue.
If a phisher sends you email, then it will end up in the new request queue, not your inbox. That should be a big glaring warning that it’s not a normal email from somebody you know. You would have to accept their contact request in order to even read the phishing email.
I went into more detail about the benefits of this system and how it can be implemented in this comment:
https://news.ycombinator.com/item?id=44969726
This is the wrong question.
The right question is: what should we do about the fact that the organization has such terrible security practice?
And the answer is: call them on the phone, and tell them that you will not do business with them until they fix their shit.
Anyway, I already mentioned a solid incentive for them to use the correct token. Go back and read my earlier comment.
"All legitimate npm emails are signed with GPG key X" and "All legitimate npm emails come from @npmjs.com" are equally strong statements.
Maybe don't allow changing the email address right after changing 2fa?
And if the email is changed, send an email to the original address allowing you to dispute the change.
I am skeptical this solves phishing rather than adding more woes (would you blindly click on links if the email was signed?), but if we are going to suggest public key cryptography, then: NPM could let package publishers choose whether only signed packages may be released, and consumers decide if they will only depend on signed packages.
I guess, for attackers, that moves the target from compromising a publisher account to getting hold of the keys, but that's going to be impossible... as private keys never leave the SSM/HSM, right?
> Get them to distrust any unsigned email, no matter how convincing it looks.
For shops of any important consequence, email security is table stakes, at this point: https://www.lse.ac.uk/research/research-for-the-world/societ...
Signing the packages seems like low-hanging fruit as well, if that isn't already being done. But I'm skeptical that those keys are as safe as they should be; IIRC someone recently abused a bug in a GitHub pipeline to execute arbitrary code and managed to publish packages that way. Which seems like an insane vulnerability class to me, and probably an inevitable consequence of centralising so many things on GitHub.
1. I genuinely don't understand why.
2. If it is true that people are the failing factor, then nothing is going to help. Hardware keys? No problem, a human will use the hardware key to sign a malicious action.
You never make a mistake? Never ever? It's a question of numbers. If the likelihood of making a mistake is 1 in 10,000 emails, send out links to 10,000 package maintainers, and you've got a 63% chance of someone making that mistake.
I trust the user did this calculation. I didn't.
The major difference between passkeys and hardware 2fa (FIDO2/yubikeys) and TOTP/SMS/Email solutions is that the passkey/yubikey _also_ securely validates the site it's communicating with before sending validation, making traditional phishing attacks all but impossible.
It's a war of attrition. You can keep bombarding developers with new and clever ways of trying to obtain their credentials or get them to click on some link while signed in. It only has to succeed once. No one is 100% vigilant all the time. If you think you're the exception, you're probably deluding yourself.
There's something broken in a system where one moment of inattention by one person can result in oodles of people ending up with compromised software, and I don't think it's the person that's broken.
I'll get a lot of pushback for this, but the main problem are ecosystems that encourage using packages published by one person. I call these "some person with a github" packages, and I typically go through codebases to try to remove these dependencies specifically because of this threat vector.
Packages that are developed by a team with multiple code reviewers and a process are still at risk, don't get me wrong. But the risk is much lower if one person does not have the power to unilaterally merge a PR, and more so if it's backed by an organization that has multiple active devs and processes for reviews.
If you do need to depend on these one-person packages, I'd recommend forking and carefully merging in changes, or pinning versions and manually reviewing all commits before upgrading versions. That's probably intractable for a lot of projects, but it's honestly something that we as developers need to fix by raising the bar for what dependencies we include.
Nothing will reduce incidents to 0, but many things can move us closer to 0.
But I think calling DuckDB “critical infrastructure” is just a bit conceited. As an industry we really overestimate the importance of our software that can be deleted when it’s broken. We take ourselves way too seriously. In any worst case scenario, a technical problem can be solved with a people solution.
If you want to talk about critical infrastructure then the xz backdoor was the closest we’ve caught to affecting it. And what came of that backdoor? Nothing significant… I suppose you could say there might be 100 xz-like backdoors lurking in our “critical infrastructure” today, but at least as long as they’re idle, it’s not actually a problem. Maybe one day China will invade Taiwan and we’ll see just how compromised our critical infrastructure has actually been this whole time…
I'm starting to think we haven't even seen the full scope of it yet, two authors confirmed as compromised, must be 10+ out there we haven't heard of yet?
I think where they got lucky is
> In hindsight, the fact that his browser did not auto-complete the login should have been a red flag.
A huge red flag. I wonder if browsers should actually detect if you're putting login details for site A manually into site B, and give you a "are you sure this isn't phishing" warning or something?
I don't quite understand how the chalk author fell for it though. They said
> This was mobile, I don't use browser extensions for the password manager there.
So are there mobile password managers that don't even check the URL? I dunno how that works...
>A huge red flag.
It won't be a red flag for people who often see auto-complete not working for legitimate websites. The usual cause is legitimate websites not working instead of actual phishing attempts.
This unintended behavior of password managers changes the Bayesian probabilities in the mind such that username/password fields that remain unfilled becomes normal and expected. It inadvertently trains sophisticated people to lower their guard. I wrote more on how this happens to really smart technical people: https://news.ycombinator.com/item?id=45179643
>So are there mobile password managers that don't even check the URL? I dunno how that works...
Strongbox pw manager on iOS by default doesn't autofill. You have to go settings to specifically enable that feature. If you don't, it's copy&paste.
Please people, build your login forms correctly! It’s not rocket science.
Yeah, that's true, I hit this all the time with 1Password+Firefox+Linux (fun combo).
Just copying-pasting the username+password because it doesn't show up is the wrong approach. The fact that it isn't working gives you a chance to pause and reflect: look up whether it's actually the right domain, and if it is, add it to the allowed domains so it works fine in the future.
Maybe best would be if password managers defaulted to not showing a "copy" thing at all for browser logins, and not letting users select the password, instead prompting them to rely on the autofill, and fix the domains if the autofill doesn't work.
Half the reason I use a password manager in the first place is specifically this issue; the other half is because I'm lazy and don't like typing. It's really weird to hear of people using password managers yet doing the old copy-paste dance anyway.
Unfortunately, as bad as phishing is, service providers have leaked more plain text passwords than a phisherman could ever catch.
But the domain binding just isn't possible without technical means, hence I see that as my own top reason, I suppose :)
Thankfully there are many reasons to use a password manager. Auto-fill is just one.
Is there some middle ground where I can get the browser to automatically confirm I am on a previously trusted domain? My initial thought is that I could use Firefox Workspaces for trusted domains. Limited to the chosen set of urls. Which I already do for some sites, but I guess I could expand it to everything with a login.
Not actually suggesting this as it sounds like quite a big headache, but it is an option.
Think my only blocker would be if the browser extension fights me if I try to register a site using a broken/missing password.
Does feel like a bit of a browser gap. “You have previously visited this site N times”. If that number is zero, extra caution warranted. Even just a bit of extra sophistication on bookmarks if the root domain has previously been registered. Thinking out loud, I guess I could just lean on the browser Saved Passwords list. I’ve never been comfortable with the security, but I could just always try to get it to save a sentinel username, “YOUHAVEBEENHEREBEFORE”.
This is the case when you are doing mass phishing attacks trying to hook the dumbest person you can. In those cases, they want the person who will jump through multiple hoops one after another and keep giving them money. With a more technical audience you wouldn't want that; you want one smart person to make one mistake.
I don't get these arguments. Yeah, of course I was always surprised that phishing emails give themselves away with mistakes - maybe non-native speakers write them without any spellcheck or whatever - and it was straightforward to improve that... but whatever the text, if I open a link from an email the first thing I look at is the domain. Not how the site looks. The DOMAIN NAME! Am I on a trusted site? Well, a .help TLD would SURELY ring a bell and prompt research into whether that domain is associated with npm in any way.
At some point my bank redirected me to some weird domain name... meh, that was annoying, I had to research whether that domain was really associated with them... it was. But they put their users at risk if they teach them that the domain name doesn't have to mean trust, and feed them whatever domains as acceptable. That is NOT acceptable.
There are more than a few instances when I’ve created an account for a service I know I’ve never interacted with before, but my password manager offered to log me in because another business I’ve used in the past used the same service (medical providers, schools, etc.).
Even as a technically competent person, I received a legitimate email from Google regarding old shadow accounts they were reconciling from YouTube, and I spent several hours convinced it was a phishing scheme. It put me on edge for nearly a week: there was no way I could be sure critical accounts were safe, and worse yet, no way someone like my parents or in-laws could be.
> We can’t connect to the server at news.xn--ycombnator-1ub.com.
1: Ironically HN actually mutated that link, I pasted the unicode version news.ycombınator.com (which it seems to leave intact so long as I don't qualify it with a protocol://)
Frankly I can't believe we've trained an entire generation of people that this is the key identifier for scam emails.
Because native English speakers never make a mistake, and all scammers are fundamentally unable to use proper grammar, right?
MyBank: "Don't click on emails from suspicious senders! Click here for more information" { somethingweirdmybank.com } -- Actual real email from my bank.
Like, wtf. Why are you using a totally different domain.
And the companies I've worked for do this kind of crap all the time. "Important company information" { learnaboutmycompany.com } -- Like, is this a random domain someone registered? Nope, it actually belongs to the place I work for, even though we have a well-known and trusted domain.
Oh, and it's the best when the legit sites have their own spelling mistakes.
The fact this is NOT the standard phishing email shows how low the bar is:
1. the text of the email reads like one you'd get from npm in the tone, format and lack of obvious spelling & grammatical errors. It pushes you to move quicker than you might normally, without triggering the typical suspicions.
2. the landing domain and website copy seem really close to legit, no obfuscated massive subdomain, no uncanny login screen, etc.
All the talk of AI disrupting tech; this is an angle where generative AI can have a massive impact in democratizing the global phishing industry. I do agree with you that there's likely many more authors who have been tricked and we haven't seen the full fallout.
Also, I really don't see what this has to do with gen AI, or what "democratizing the global phishing industry" is supposed to mean even.
Is this comment AI generated?
That's what I'm guessing OP meant.
same with just copying email HTML
it's actually easier to make it look exactly the same vs different in some ways
Not sure how this emphasis is of any importance; your brain doesn't have a pixel-perfect image of the website, so you wouldn't know whether it's a perfect replica or not.
Let the silicon dummies in the password manager do the matching, don't strain your brain with such games outside of entertainment
I really hope you clear your clipboard history entirely after doing your copy/paste method because your credentials would otherwise persist for any other application with clipboard perms to just exfiltrate (which has already been exploited in the wild before)
How does that work?
If a malicious website reads the clipboard, what good is knowing an arbitrary password with no other information? If the user is using a password manager, presumably they don't reuse passwords, so the malicious website would have to guess the matching username + URL where the password applies.
If you're talking about a malicious desktop app running on the same system, it's game over anyway because it can read process memory, read keystrokes, etc.
Sidenote: Most password managers I've used automatically clear the clipboard 10-15s after you copy a credential.
> If a malicious website reads the clipboard, what good is knowing an arbitrary password with no other information?
Even assuming unique username+URL pairings, clipboard history can store multiple items, including emails or usernames, which could be linked to any data breach and service (or just shotgunned towards the most popular services). It's not really a "no other information" scenario, and you drastically reduce the effort required for an attacker regardless.
> If you're talking about a malicious desktop app running on the same system, it's game over anyway because it can read process memory, read keystrokes, etc.
The app does not have to be overtly malicious, AccuWeather (among others) was caught exfiltrating users' clipboard data for over 4 years to an analytics company who may or may not have gotten compromised. Even if the direct application you are using is non-malicious, you are left hoping wherever your data ends up isn't a giant treasure trove/honeypot waiting to be compromised by attackers.
The same reasoning can be used for pretty much anything really, why protect anything locally since they could just keylog you or intercept requests you make.
In that case it would be safer for everyone to run Qubes OS and stringently check any application added to their system.
In the end it's a balancing act between convenience and security with which striving for absolute perfection ends up being an enemy of good.
> Sidenote: Most password managers I've used automatically clear the clipboard 10-15s after you copy a credential.
That is true, good password managers took these steps precisely to reduce the clipboard attack surface.
Firefox also took steps in 2021 to also limit leaking secrets via the clipboard.
I always manually type the emails and usernames for this reason.
(A keylogger is already game over, so.)
This doesn't seem to be "passwords on random pages", only "Personal Data + Credit Card"; passwords are domain-specific unless the website is hacked itself.
> The attacker can only steal credentials for the vulnerable domain.
Can't KeePass use the autotype functionality, but still filter it by website domain/host that it gets from the extension? So basically you'll still never have to copy&paste, and any site requiring this would be a reliable red flag?
Forget about phishing, it's a red herring. The actual solution to this is code signing and artifact signing.
You keep a private key on your local machine. You sign your code and artifacts with it. You push them. The packages are verified by the end-user with your public key. Even if your NPM account gets taken over, the attacker does not have your private key, so they cannot publish valid packages as you.
But because these platforms don't enforce code and artifact signing, and their tools aren't verifying those signatures, attackers just have to figure out a way to upload their own poison package (which can happen in multiple ways), and everyone is pwnd. There must be a validated chain of trust from the developer's desktop all the way to the end user. If the end user can't validate the code they were given was signed by the developer's private key, they can't trust it.
This is already implemented in many systems. You can go ahead and use GitHub and 1Password to sign all your commits today, and only authorize unsealing of your private key locally when it's needed (git commits, package creation, etc). Then your packages need to be signed too, public keys need to be distributed via multiple paths/mirrors, and tools need to verify signatures. Linux distributions do this, Mac packages do, etc. But it's not implemented/required in all package managers. We need Npm and other packaging tools to require it too.
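A minimal sketch of that commit-signing setup with SSH-format keys (key paths are placeholders; a manager like 1Password can hold the key and gate signing behind its unlock prompt):

    # Use an SSH key instead of GPG for signing:
    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519.pub

    # Sign every commit and tag by default:
    git config --global commit.gpgsign true
    git config --global tag.gpgsign true

    # Verifier side: map identities to allowed keys, then inspect:
    git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
    git log --show-signature -1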
After code signing is implemented, then the next thing you want is 1) sign-in heuristics that detect when unusual activity occurs and either notifies users or stops it entirely, 2) mandatory 2FA (with the option for things like passkeys with hardware tokens). This will help resist phishing, but it's no replacement for a secure software supply chain.
Strongly agree on artifact signing, but it has to be real end-to-end. If the attacker can trigger your CI to sign with a hot key, you still lose. What helps: 1) require offline or HSM-backed keys with human approval for release signing, 2) enforce that published npm artifacts match a signed Git tag from approved maintainers, 3) block publishes after auth changes until a second maintainer re-authorizes keys. In today’s incident the account was phished and a new token was used to publish a browser-side wallet-drainer. Proper signing plus release approvals would have raised several hard gates.
No, with the _requirement_ for passkeys or hardware tokens!
They already make links go through redirects (to avoid referrer headers?) so it's halfway there. Just make the redirect page show the link and a go button instead of redirecting automatically. And it would fix the annoyance that is not being able to see the real domain when you hover the link.
The one and only method to install DuckDB on a laptop is to run
`curl https://install.duckdb.org | sh`
I've requested to deliver CLI as standard package, they have ignored it. Here is the thread https://github.com/duckdb/duckdb/issues/17091
As you can see, it isn't a single slip due to the "human factor"; DuckDB management consistently puts users at risk.
Fundamentally, doesn't the security depend entirely on whether https is working properly? Even the standard package repos are relying on https right?
Like, I don't see how it's different than going to their website, copying their recommended command to install via a standard repo, then pasting that command into your shell. Either way, you are depending entirely on the legitimacy of their domain right?
You're about to install and run their software. If they wanted to do something malicious, they wouldn't hide it in their plaintext install script.
A server can use this to maliciously give you malware only if you're not looking at the code.
Though your point about trust is valid.
Sure a binary can be swapped in other places, but they generally can be verified with hashes and signatures. Also, a plaintext install script often has this problem in another layer of recursion (where the script usually pulls from URLs that the runner of the script cannot verify with this method)
Suppose the site got compromised. If you separately explicitly download the install script first, in principle you can review it before running it.
Same deal with installing Python source packages (sdists). Arbitrary code included in the package runs at installation time (with the legitimate purpose of orchestrating any needed build steps, especially for non-Python code, which could be arbitrarily complex). This is worse than importing the installed code and letting it run whatever top-level code, because the entire installation is normally automated and there's no point where you review the code before proceeding. We do generally accept this risk in the Python ecosystem, but demanding to install only from pre-built wheels is safer (it just isn't always possible).
(Pip has the problem that this still happens even if you use its "download" command — because it wants to verify that building the project would produce a package with a name and version that match what it says in the file name and/or other metadata, and because it wants to know what the dependencies are — and in the general case it's permitted to depend on the build process to tell you this, because the system for conditional-on-platform dependencies isn't powerful enough for everyone's use case. See also: https://zahlman.github.io/posts/2025/02/28/python-packaging-...)
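Enforcing the wheels-only stance is a single flag, at the cost of hard-failing on anything that publishes no wheel for your platform:

    # Refuse sdists entirely, so no build backend runs at install time:
    pip install --only-binary=:all: numpy

    # Or persist the policy (config key assumed from pip's docs):
    pip config set install.only-binary ':all:'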
They should only need http. You don't need https at all if your package is signed. The package/installer/app/etc could come from anywhere, modified by anyone, at any level. But if it's not signed by the dev's private key (which only exists on their laptop [or hardware token], protected by a password/key manager), it's invalid. This avoids the hundred different exploits between the dev and the user.
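A sketch of that flow with an offline key, using minisign as one concrete choice (file names are placeholders; the public key itself still has to reach users out of band, once):

```
# Developer, on the offline machine: generate a keypair, then sign the release.
minisign -G                      # writes the secret key and ./minisign.pub
minisign -Sm duckdb-cli.tar.gz   # produces duckdb-cli.tar.gz.minisig

# User, after fetching artifact + signature over plain http:
minisign -Vm duckdb-cli.tar.gz -p minisign.pub   # fails loudly on any tampering
```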
What's actually crazy about this is, if you're already making the user do a copy and paste, it doesn't have to be one line. Compare that line above, to:
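```
curl -fsSL https://install.duckdb.org -o install_duckdb.sh
# <expected-sha256> is a placeholder; publish the real hash alongside the snippet
echo "<expected-sha256>  install_duckdb.sh" | sha256sum -c - \
  && sh install_duckdb.sh
```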
All you have to do is copy and paste that snippet, and the same thing happens as with the one-liner, except it only works if the sha256sum is valid. This isn't perfect, of course; we should be using artifacts signed by a private key. But it's better than just praying. It's amazing that DuckDB could be worse than decade-old PHP at something like this.
Running scripts even more so.
One day someone might simply decide to exploit whatever trust they have.
Actually, I wonder how much the black market would pay for the rights to tamper with a reasonably popular script like that...
Tomorrow they will do it again, and attackers will replace the binary files that users download via this random script. Or the script itself will steal crypto, etc.
To make the attack vector harder, it's preferable to install software as packages. On Linux that looks like `apt install python3`.
The benefits are:
1. Repositories are immutable, so an attacker can't replace the binary for a specific version even if they hack all of DuckDB's infrastructure, whereas a remote script can be replaced at any time to run any code.
2. Some repositories have a strict review process, with external reviewers who require new versions to pass security checks before upload.
For macOS they have it in Homebrew, which you can also use on Linux, and it's available in Nix as well.
I think the problem is that there are so many Linux distros with their own package repositories that getting a package into most of them is a very nontrivial task if the maintainers aren't proactively interested.
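Concretely, something like the following (the Nix attribute name is an assumption):

```
brew install duckdb             # macOS, or Homebrew on Linux
nix-env -iA nixpkgs.duckdb      # Nix (attribute name assumed)
```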
> depending entirely on the legitimacy of their domain
Just move the phishing attack down each step of your dependency chain.
https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...
https://news.ycombinator.com/item?id=45169657
But no one has ever been rooted by malware snuck into an official .deb package.
That was the concept of "stable" in the good old days, when software was really an "engineering" field.
We got pretty close with the whole XZ thing. And people generated predictable keys due to a flaw in a Debian patch to OpenSSL.
This stuff is hard, and I'm not saying that npm is doing well, but it seems like no large ecosystem is doing exceptionally well either.
Sure. The tradeoff is that when there's a zero-day, you have to wait for Debian to fix it, or to approve and integrate the dev's fix. Finding malware is one thing; finding unintentional vulns is another.
We all dodged a bullet - https://news.ycombinator.com/item?id=45183029 - Sept 2025 (273 comments)
NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (719 comments)
PyPI also now requires 2FA for everyone and makes other proactive attempts to hunt down malware (https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2f...) in addition to responding to reports.
There was still a known compromise recently: https://blog.pypi.org/posts/2025-07-31-incident-report-phish... (`num2words` gets millions of monthly downloads, but still for example two orders of magnitude less than NumPy). Speaking of the communication I mentioned in the first paragraph, one of the first people reporting seeing the phishing email was a CPython core developer.
Malware also still gets through regularly, in the form of people just uploading it. But there are automated measures against typo-squatting (you can't register a name that's too similar to existing names, or that is otherwise blacklisted), and for most random crap there's usually just no reason anyone would come across it and install it.
https://github.com/oro-os
https://news.ycombinator.com/user?id=junon
I wonder if it really is only npm that got compromised.
Is this actually accurate? Packages with weekly downloads in the hundreds of thousands, yet in the 4+ hours that the malicious versions were up for, not a single person updated any of them to the latest patch release?
Microsoft has been bravely saying "Security is top priority" since 2002 (https://www.cnet.com/tech/tech-industry/gates-security-is-to...) and every now and then reminds us that they put "security above all else" (latest in 2024: https://blogs.microsoft.com/blog/2024/05/03/prioritizing-sec...), yet things like this persist.
How long does Microsoft need to leave holes wide open before the government cracks down on their wilful ignorance? Unless people go to jail, literally nothing will happen.
npm stats lag. We observed installs while the malicious versions were live for hours before removal. Affected releases we saw: duckdb@1.3.3, @duckdb/duckdb-wasm@1.29.2, @duckdb/node-api@1.3.3, @duckdb/node-bindings@1.3.3. Same payload as yesterday’s Qix compromise. Recommend pinning and avoiding those versions, reviewing diffs, and considering a temporary policy not to auto-adopt fresh patch releases on critical packages until they age.
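A minimal version of that pinning advice; the fallback version below is an assumption, so verify it is clean before pinning:

```
npm config set save-exact true          # stop writing ^ ranges into package.json
npm install --save-exact duckdb@1.3.2   # pin a pre-compromise release (version assumed)
# In CI, install strictly from the lockfile:
npm ci
```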
I also know of projects that read the update feeds and kick off CI jobs whenever any dependency is updated, to automatically test version upgrades; surely at least one dependent of DuckDB is doing something similar.
"Hi, XXXX! It looks like you still do not have two-factor authentication (2FA) enabled on your npm account.
To enable 2FA, please follow the instructions found here."
I can't be bothered to remember every hole's name. They're all USB anyway; the difference is that some are A, C, or Lightning. I bought a new MacBook and it has that magnet hole, what is that called? I'm not following.
The stuff I deal with every day is centering divs
> it's too hard to keep track of the names of the three different ports
it's more than three ports.
Yes, they support NFC
> or do you now need to maintain two keys for every service?
I maintain 4 keys so I have backups. In most cases registering additional keys is no problem, and this is only needed when signing up.
> do you now need to maintain two keys for every service?
I do maintain multiple keys for every service. I wouldn't say it's a lot of maintenance, any more than a far more secure "remember me" box is "maintenance".
When I register for a new service, I add my hardware token on my keychain as a passkey. When I sign in on my laptop for the first time for a service I'll use there more than once, I make a passkey. When I sign in on my desktop for the first time, I make a passkey, and maybe a spare in my password manager. If it's something I use on my phone, I'll make a passkey there as well when I sign in for the first time. When I get around to it, I'll add the spare hardware token I keep in a drawer. But it's not like "I just signed up for a new service, now I must go around to every device and make a new passkey immediately." As long as I've got a couple of passkeys at registration time, I'm probably fine.
Lose my laptop? It's OK, I've got other passkeys. Lose my keys? It's OK, I've got other passkeys. My laptop and keys get stolen at the same time? It's OK, I've got other passkeys.
It's really not that hard.
The idea is that if your password manager doesn't show the usual list of accounts (regardless of whether the actual autofill after clicking the account works), you double-check the domain.
Not at all? The password manager handles that automatically, have you never used a password manager before?
> Passkeys will automate and enforce the check
What happens to a passkey when the origin changes? Does it automatically get recognised on the new domain without any manual input? Curious what magic is responsible for that.
Considering that it'd add work for me today, plus ongoing work in the future, with no additional security benefit compared to my current approach, it just doesn't seem worth it.
A few concrete datapoints from our analysis of this incident that may help cut through the hand-waving:
1. This is the same campaign that hit Qix yesterday (https://socket.dev/blog/npm-author-qix-compromised-in-major-...). The injected payload is byte-for-byte behaviorally identical. It hooks fetch, XMLHttpRequest, and common wallet provider APIs and live-rewrites transaction payloads to attacker addresses across ETH, BTC, SOL, TRX, LTC, BCH. One tell: a bundle of very distinctive regexes for chain address formats, including multiple Solana and Litecoin variants.
2. Affected versions and timing (UTC) that we verified:
- duckdb@1.3.3 at 01:13
- @duckdb/duckdb-wasm@1.29.2 at 01:11
- @duckdb/node-api@1.3.3 at 01:12
- @duckdb/node-bindings@1.3.3 at 01:11
Plus low-reach test shots: prebid@10.9.1, 10.9.2 and @coveops/abi@2.0.1
3. Payout so far looks small. Tracked wallets sum to roughly $600 across chains. That suggests speed of discovery contained damage, not that the approach is harmless.
What would actually move the needle:
=== Registry controls ===
- Make passkeys or FIDO2 mandatory for high-impact publisher accounts. Kill TOTP for those tiers.
- Block publishing for 24 hours after 2FA reset or factor changes. Also block after adding a new automation token unless it is bound by OIDC provenance.
- Require signed provenance on upload for popular packages. Verify via Sigstore-style attestations. Reject if there is no matching VCS tag.
- Quarantine new versions from being treated as “latest” by automation for N hours (a consumer-side sketch follows at the end of this comment). Exact-version installs still work. This alone cuts the blast radius of a hijack.
=== Team controls ===
- Do not copy-paste secrets or 2FA. Use autofill and origin-bound WebAuthn.
- Require maker-checker on publish for org-owned high-reach packages. CI must only build from a signed tag by an allowed releaser.
- Pin and lock. Use `npm ci`. Consider an internal proxy that quarantines new upstream versions for review.
=== Detection ===
- Static heuristics catch this family fast. Wallet-address regex clusters and network shims inside non-crypto packages are a huge tell (a crude scan sketch follows below). If your tooling sees that in a data engine or UI lib, fail the build.
Lastly, yes, training helps, but the durable fix is making the easy path the safe path.
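To make the quarantine idea above concrete, here is a rough consumer-side version; a sketch that assumes GNU date, jq, and an arbitrary 72-hour threshold:

```
#!/bin/sh
# Refuse to adopt the "latest" dist-tag until the release has aged 72 hours.
pkg=duckdb
latest=$(npm view "$pkg" dist-tags.latest)
published=$(npm view "$pkg" time --json | jq -r --arg v "$latest" '.[$v]')
age_s=$(( $(date +%s) - $(date -d "$published" +%s) ))   # GNU date assumed
if [ "$age_s" -lt $((72 * 3600)) ]; then
  echo "$pkg@$latest published $((age_s / 3600))h ago; holding back" >&2
  exit 1
fi
```

And for the detection bullet, even a crude literal-string scan surfaces the wallet-regex tell; expect false positives on legitimate crypto libraries:

```
# Flag JS files that embed the source text of an ETH-address regex
# (fixed-string search; tune the pattern list for other chains).
grep -RnF --include='*.js' '0x[a-fA-F0-9]{40}' node_modules
```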
A week-long waiting period would not be enough. On average, npm malware lingers on the registry for 209 days before it's finally reported and removed.
Source: https://arxiv.org/abs/2005.09535
So, regrettably, we're back to "train users" and all the pitfalls that entails
[0]: legacy 1.x projects aside
Disclaimer: I don't know enough about the npm/Node.js community, so I might be completely off the mark here.
https://github.com/pnpm/pnpm/issues/9921
But this is coming from GitHub, who believe that sliding "v1" tags on random Action repos is acceptable, which is how one ends up with https://news.ycombinator.com/item?id=43367987
Every dependency is a backdoor. To make one malicious only takes a small slip-up.
It is absolutely wild that this did not raise _any_ red flags for this person.
Red flag: a random 2FA reset request.
Red flag: npmjs.help.
Red flag: username and password not autofilled by the browser.
Red flag: copy-pasting the username/password combo into the phishing site.
If _developers_ can't even get this right, why do we expect dumb users to get it right? We are so cooked.
I certainly wouldn't. And I don't see it as pointless theater. It requires deliberate action, and that's what's missing here.
This shouldn't be considered a high-effort or sophisticated attack. The attacker probably used a MITM proxy, which can easily replicate every part of your site with very little initial configuration. Evilginx is the most popular one I can think of.
Any idea what the interference was?
Downvotes appreciated, but I'd also be happy to see one or two URLs that prove me wrong.
Second, an example of a JavaScript-heavy, npm-using, tracking-heavy / low-content site doesn't carry much weight in proving me right; my view is an assumption. Two examples of shitty tracking/SEO/AI garbage-content sites not using npm would substantially undermine my assumption. I am genuinely interested in the tech those sites would use instead.
How can anyone publish their packages?
This is the second high-profile instance of the technique this week.
Are they actively forcing it? I've received the "Remember to enable 2FA" email notifications from npm since 2022, I think, but haven't bothered since I'm no longer publishing packages/updates.
Besides, the email conveniently mentions their "automation" tokens as well, which, when used for publishing updates, bypass 2FA entirely.
https://old.reddit.com/r/node/comments/xftu7i/comment/iooabn...
Passkeys are effectively and objectively a better security solution than password+2FA. Among other things, they are completely unphishable.
From what I've heard, they're also impossible to back up, and tied to the ecosystem used to create them (so if you started with an Apple desktop, you can't later migrate the passkeys to a Windows desktop; you have to go to every single site you've ever used and create new ones).
https://news.ycombinator.com/item?id=45169657