Setting aside the question of whether this would be a useful addition to Node.js core, it must be noted that this 19k LoC PR was mostly generated by Claude Code and manually reviewed by the submitter, which in my opinion is against the spirit of the project and directly violates the terms of the Developer's Certificate of Origin set in the project's CONTRIBUTING.md
conartist6 45 minutes ago [-]
Pain is a signal. Even if the trick is not minding, it's still inadvisable to burn your hand on an open flame. The pain is there to help you not get hurt.
I do not think it is wise to brag that your solution to a problem is extremely painful but that you were impervious to all the pain. Others will still feel it. This code takes bandwidth to host and space on devices and for maintainers it permanently doubles the work associated with evolving the filesystem APIs. If someone else comes along with the same kind of thinking they might just double those doubled costs, and someone else might 8x them, all because nobody could feel the pain they were passing on to others
syrusakbary 3 hours ago [-]
Fully disagree with this take. Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
As a side note, the OpenJS executive director mentioned it's ok to use AI assistance on Node.js contributions:
I checked with legal and the foundation is fine with the DCO on AI-assisted contributions. We’ll work on getting this documented.
I appreciate hearing your point of view on this. In my opinion the future of Open Source and AI assisted coding is a much bigger issue, and different people have different levels of confidence in both positive and negative outcomes of LLM impact on our industry.
It is great to have a legal perspective on compliance of LLM generated code with DCO terms, and I feel safer knowing that at least it doesn't expose Node.js to legal risk. However it doesn't address the well known unresolved ethical concerns over the sourcing of the code produced by LLM tooling.
oystersareyum 35 minutes ago [-]
Allowing AI contributions results in lower quality contributions and allows wild things to come in and disrupt it, making it an unreliable dependency. We have seen big tech experience constant outages due to AI contributions as is...
jaredklewis 54 minutes ago [-]
AI coding is great, but iteration speed is absolutely not a desirable trait for a runtime. Stability is everything.
Speed code all your SaaS apps, but slow iteration speeds are better for a runtime because once you add something, you can basically never remove it. You can't iterate. You get literally one shot, and if you add an awkward or trappy API, everyone is now stuck with it forever. And what if this "must have" feature turns out to be kind of a dud, because everyone converged on a much more elegant solution a few years later? Congratulations, we now have to maintain this legacy feature forever and everyone has to migrate their codebase to some new solution.
Much better to let dependencies and competing platforms like bun or deno do all the innovating. Once everyone has tried and refined all the different ways of solving this particular problem, and all the kinks have been worked out, and all the different ways to structure the API have been tried, you can take just the best of the best ideas and add it into the runtime. It was late, but because of that it will be stable and not a train wreck.
But I know what you're thinking. "You can't do that. Just look at what happens to platforms that iterate slowly, like C or C++ or Java. They're toast." Oh wait, never mind, they're among the most popular platforms out there.
syrusakbary 47 minutes ago [-]
Since when did we accept that we can’t go fast and offer stability at the same time?
Time is highly correlated with expertise. When you don’t have expertise, you may go fast at the expense of stability, because you lack the experience to make the decisions that actually save time.
This doesn’t hold true for projects where you rely on experts, good processes and tight timelines (e.g. the Apollo program)
szmarczak 2 hours ago [-]
> Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
It's not an AI issue. Node.js itself is lots of legacy code and many projects depend on that code. When Deno and Bun were in early development, AI wasn't involved.
Yes, you can speed up the development a bit but it will never reach the quality of newer runtimes.
It's like comparing C to C++. Those languages are from different eras (relative to each other).
mixologic 5 hours ago [-]
Worth noting that mcollina is a member of the Node.js Technical Steering Committee
everlier 4 hours ago [-]
We call it a slip slop at work, it's ok to slip some slop if it's "our" slop :-)
giancarlostoro 2 hours ago [-]
> I pointed the AI at the tedious parts, the stuff that makes a 14k-line PR possible but no human wants to hand-write: implementing every fs method variant (sync, callback, promises), wiring up test coverage, and generating docs.
Is it slop if it is carefully calculated? I tire of hearing people use slop to mean anything AI, even when it is carefully reviewed.
grey-area 1 hour ago [-]
Was 14k lines carefully reviewed? Seems unlikely.
digikata 5 hours ago [-]
Large PRs could follow the practices that the Linux kernel dev lists follow. Sometimes large subsystem changes would be carried separately by the submitter for a while, for testing and maintenance, before being accepted in principle, reviewed, and, if ready, merged.
While the large code changes were maintained, they were often split up into a set of semantically meaningful commits for purposes of review and maintenance.
With AI blowing up the line counts on PRs, it's a skill set that more developers need to mature. It's good for their own review to take the mass of changes, ask themselves how they would want to systematically review it in parts, then split the PR up into meaningful commits: e.g. interfaces, docs, subsets of changed implementations, etc.
dakiol 4 hours ago [-]
Nobody wants to review AI-generated code (unless we are paid for doing so). Open source is fun, that's why people do it for free... adding AI to the mix is just insulting to some, and boring to others.
Like, why on earth would I spend hours reviewing your PR that you/Claude took 5 minutes to write? I couldn't care less if it improves (best case scenario) my open source codebase; I simply don't enjoy the imbalance.
tyre 1 hour ago [-]
Why do you care how much effort it took the engineer to make it? If there was a huge amount of tedium that they used Claude Code for, then reviewed and cleaned up so that it’s indistinguishable from whatever you’d expect from a human; what’s it to you?
Not everyone has the same motivations. I’ve done open source for fun, I’ve done it to unblock something at work, I’ve done it to fix something that annoys me.
If your project is gaining useful functionality, that seems like a win.
gonzalohm 1 hour ago [-]
Because sometimes programming is an art and we want people to do it as if it was something they cared about.
I play chess and this is a bit like that. Why do I play against humans? Because I want to face another person like me and see what strategies they can come up with.
Of course any chess bot is going to play better, but that's not the point
goalieca 5 hours ago [-]
> With AI blowing up the line counts on PRs,
Well, the process you’re describing is mature and intentionally slows things down. The LLM push has almost the opposite philosophy. Everyone talks about going faster and no one believes it is about higher quality.
digikata 5 hours ago [-]
Go slow to go fast. Breaking up the PR this way also allows later humans and AI alike to understand the codebase. Slowing down the PR process with standards lets the project move faster overall.
If there is some bug that slips by review, having the PR broken down semantically allows quicker analysis and recovery later, to name one case. Even if you have AI reviewing new Node.js releases to decide whether you want to take in the new version - the commit log will be more analyzable by the AI with semantic commits.
Treating the code as throwaway is valid in a few small contexts, but that is not the case for PRs going into maintained projects like Node.js.
tracker1 4 hours ago [-]
TBF, most of the AI code I've reviewed isn't significantly different than code I've seen from people... in fact, I've seen significantly worse from real people.
The fact is, it's useful as a tool, but you still should review what's going in. That isn't always easy though, and I get that. I've been working on a TS/JS driver for MS-SQL so I can use some features not in other libraries, mostly bridging a Rust driver (first Tiberius, then mssql-client); the clean abstraction made the switch pretty quick... a fairly thorough test suite for Deno/Node/Bun kept the sanity in check. It's a Rust C-style library with FFI access in a TS/JS server environment.
My hardest part is actually having to set up a Windows Server to test the passwordless auth path (basically a connection string with integrated Windows auth). I've got about 80 hours of real time into this project so far. And I'll probably be doing two follow-ups: one will be a generic ODBC adapter with a similar set of interfaces, and a final third adapter that will provide the same methods but using native SQLite underneath, smoothing over the differences.
I'm leveraging using/dispose (async) instead of explicit close/rollback patterns, similar to .Net, as well as Dapper-like methods for "typed" results, though with no actual type validation... I'd considered trying to adapt Zod to check at least the first record or all records, and may still add the option.
All said though, I wouldn't have been able to do so much with so relatively little time without the use of AI. You don't have to sacrifice quality to gain efficiency with AI, but you do need to take the time to do it.
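For readers unfamiliar with the pattern, here is a minimal sketch of the using/dispose idea (a hypothetical Transaction class, not the parent's actual driver; `await using` needs a very recent runtime such as Node 24+, or TypeScript 5.2+ downleveling):

  // Explicit resource management: the runtime calls Symbol.asyncDispose
  // when an `await using` binding goes out of scope, even on a throw.
  class Transaction {
    #open = true;
    async commit() { this.#open = false; /* send COMMIT */ }
    async [Symbol.asyncDispose]() {
      if (this.#open) { this.#open = false; /* send ROLLBACK as a safety net */ }
    }
  }

  async function main() {
    await using tx = new Transaction();
    // ...run queries; if one throws, disposal rolls back automatically...
    await tx.commit();
  } // no explicit close/rollback call sites to forget

  await main();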
dotancohen 4 hours ago [-]
> Everyone talks about going faster and no one believes it is about higher quality.
Go Fast And Break Things was considered a virtue in the JavaScript community long before LLMs became widely available.
athorax 6 hours ago [-]
How exactly does it violate the Developer's Certificate of Origin clause?
If the submitter picks (a), they assert that they wrote the code themselves and have the right to submit it under the project's license. If (b), the code was taken from another place with clear license terms compatible with the project's license. If (c), the contribution was written by someone else who asserted (a) or (b) and is submitted without changes.
Since LLM-generated output is based on public code but lacks the attribution and license of the original, it is not possible to pick (b). (a) and (c) cannot be picked based on the submitter's disclaimer in the PR body.
athorax 3 hours ago [-]
Not sure if you are intentionally misrepresenting (a), but here is the full text
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
duskdozer 1 hour ago [-]
That seems exclusive of LLMs, as the user didn't create the contribution, the LLM did.
Dylan16807 2 hours ago [-]
If there's a "the original" the LLM is copying then there's a problem.
If there isn't, then (b) works fine, the code is taken from the LLM with no preexisting license. And it would be very strange if a mix of (a) and (b) is a problem; almost any (b) code will need some (a) code to adapt it.
benatkin 2 hours ago [-]
To many, it qualifies under either A or B, and therefore C as well. Under A, you can think of the LLM as augmenting your own intelligence. Under B, the license terms of LLM output are essentially that you can do whatever you want with it. The alternative is avoiding use of AI because of copyright or plagiarism concerns.
charcircuit 5 hours ago [-]
It would be considered (a) since the author would own the copyright on the code.
lacoolj 4 hours ago [-]
Owning copyright of something and writing it are very different things
crote 5 hours ago [-]
Citation needed.
Whether AI output can fall under copyright at all is still up for debate - with some early rulings indicating that the fact that you prompted the AI does not automatically grant you authorship.
Even if it does, it hasn't been settled yet what the impact of your AI having been trained on copyrighted material is on its output. You can make a not-completely-unreasonable argument that AI inference output is a derivative work of AI training input.
Fact is, the matter isn't settled yet, which means any open-source project should assume the worst possible outcome - which in practice means a massive AI-generated PR like this should be treated like a nuke which could go off at any moment.
phendrenad2 4 hours ago [-]
Why write open-source software at all, when the government could outlaw open-source entirely? What if an asteroid destroys Earth and there are no humans left to enjoy your work? At some point, you have to agree that a risk isn't worth worrying about. And your "worst possible outcome" is just the arbitrary outcome that you think has some subjective risk threshold. And it's certainly not one I agree with. Furthermore, calling it a "nuke" is a bad analogy because that implies that it can't be put back in the bottle once opened. In reality, we're dealing with legal definitions, which can be redefined as easily as defined.
charcircuit 5 hours ago [-]
The two main points are that:
1. Copyright cannot be assigned to an AI agent.
2. Copyrighted works require human creativity to be applied in order to be copyrighted.
For point 2, this would apply to cases where AI one-shots a generic prompt. But for these large PRs, where multiple prompts are used and a human has decided what the design should be and how the API should look, you get the human creativity required for copyright.
In regards to being a derivative work I think it would be hard to argue that an LLM is copying or modifying an existing original work. Even if it came up with an exact duplicate of a piece of code it would be hard to prove that it was a copy and not an independent recreation from scratch.
>the worst possible outcome
The worst possible outcome is they get sued and Anthropic defends them from the copyright infringement claim due to Anthropic's indemnity clause when using Claude Code.
monocularvision 3 hours ago [-]
That indemnity clause is only for Team, Enterprise and API users. Do you know what was used here?
Also the commercial version is limited to “…Customer and its personnel, successors, and assigns…”. I am very much not a lawyer and couldn’t find definitions of these in the agreement but I am not sure how transferable this indemnity would be to an open source project.
charcircuit 2 hours ago [-]
I reviewed it and it looks like personal Claude Code subscriptions are not covered, so it's riskier than I claimed.
epolanski 6 hours ago [-]
Do as I say, not as I do.
On a more serious note, I think that this will be thoroughly reviewed before it gets merged, and Node has an entire security team that oversees these things.
indutny 6 hours ago [-]
As someone who was a part of the aforementioned security team, I'm not sure I'd be interested in reviewing such a volume of machine-generated code, expecting a trap around every corner. The implicit assumption that I observed at many OSS projects I've been involved with is that first-time contributions are rarely accepted if they are too large in volume, and the "core contributor" designation exists to signal "I put effort into this code, stand by it, and respect everyone's time in reviewing it". The PR in the post violates this social contract.
epolanski 5 hours ago [-]
When contributing for free you can decide to do what you want; if it's your job it's a bit different and you may have to, especially considering Collina is one of the largest contributors to the project and a member of the technical committee.
exe34 4 hours ago [-]
> if it's your job, it's a bit different and you may have to do so
Oh I'd use an llm to generate large amounts of feedback and request changes!
epolanski 4 hours ago [-]
Imagine if every profession reasoned like that when doing something they don't enjoy.
kruffalon 2 hours ago [-]
What a wonderful world we would have, or possibly at least better than the current shit show :)
exe34 1 hour ago [-]
Imagine fighting fire with fire. You don't have to take shit lying down.
lemagedurage 5 hours ago [-]
[dead]
socalgal2 1 hour ago [-]
What's special about node.js here? Do golang, C#, python, ruby, java, etc. have a virtual file system?
I get it, I've implemented things for tests, I'm just wondering if this shouldn't be solved at an OS level.
--- update
Let's put this another way: my code effectively does child_process.spawn('something-that-reads-and-write-a-file')
now I'm back to the same issue. To test I need a virtual file system. Node providing one won't help.
I do think it's more painful to distribute files when you're distributed as a single binary vs scripts, since the latter has to figure out bundling of files anyway.
But still - it does exist
benatkin 53 minutes ago [-]
Embed is read-only at runtime. This proposed vfs module for Node.js is a full virtual file system.
wccrawford 7 hours ago [-]
I'm not convinced that allowing Node to import "code generated at runtime" is actually a good thing. I think it should have to go through the hoops to get loaded, for security reasons.
I like the idea of it mocking the file system for tests, but I feel like that should probably be part of the test suite, not Node.
The example towards the end that stores data in a sqlite provider and then saves it as a JSON file is mind-boggling to me. Especially for a system that's supposed to be about not saving to the disk. Perhaps it's just a bad example, but I'm really trying to figure out how this isn't just adding complexity.
I had to laugh, because the post you're replying to STRONGLY reminds me of this story, https://news.ycombinator.com/item?id=31778490 , in which some people on the GNOME project objected to thumbnails in the file-open dialog box because it might be a "Security issue" (even though thumbnails were available in the normal file browser, something those commenters probably should have known about, but didn't, but they just had to chime in anyway).
TheRealPomax 6 hours ago [-]
But then you go "hang on, doesn't ESM exist?" and you realize that argument 4 isn't even true. You can literally do what this argument says you can't, by creating a blob instead of "writing a temp file" and then importing that using the same dynamic import we've had available since <checks his watch> 2020.
dfabulich 5 hours ago [-]
A virtual filesystem makes it possible for the ESM you import to statically import other files in the virtual filesystem, which isn't possible by just dynamically importing a blob. Anything your blob module imports has to be updated to dynamically import its dependencies via blobs.
notnullorvoid 6 hours ago [-]
There's also a module expressions proposal that would remove the need to use blob imports: https://github.com/tc39/proposal-module-expressions
Using Claude for code you use yourself or at your own company internally is one thing, but when you start injecting it into widely-shared projects like this (or, the linux kernel, or Debian, etc) there will always be a lingering feeling of the project being tainted.
Just my opinion, probably not a popular one. But I will be avoiding an upgrade to Node.js after 24.14 for a while if this is becoming an acceptable precedent.
giancarlostoro 2 hours ago [-]
> I pointed the AI at the tedious parts, the stuff that makes a 14k-line PR possible but no human wants to hand-write: implementing every fs method variant (sync, callback, promises), wiring up test coverage, and generating docs.
This is the biggest takeaway for me for AI. It's not even that nobody wants to do these things, it's that by the time you finish your tasks, you have no time to do these things, because your manager / scrum master / powers that be want you to work on the next task.
Culonavirus 42 minutes ago [-]
That's perfectly understandable. But has no business being in a large open source project, let alone world class one like Node or (god forbid) the Linux kernel. Get that shit the fuck out.
Lerc 1 hour ago [-]
I think the insight there is that the increased productivity of AI could be used to add features, where the end result weighs the ability of the AI against the ability of an individual implementing the same thing.
The alternative is that you work on the same number of features and use that capacity to make those features as robust as you know they could be, when otherwise you'd have other pressing matters to attend to. That's weighing the ability of AI against the ability of neglect.
PaulHoule 7 hours ago [-]
Would be nice if node packages could be packed up in ZIP files so as to avoid the security/metadata tax for small-file access on Windows.
MarleTangible 7 hours ago [-]
The number of files in the node_modules folder is crazy; any amount of organization that can tame that chaos is welcome.
koolba 6 hours ago [-]
And if you thought malware hiding in a mess of files was bad, just wait till you see it in two layers of container files.
PaulHoule 6 hours ago [-]
Or worse yet, the performance load of anti-malware software that has to look inside ZIP files.
Look, most of us realized around 2004 or so that if you had a choice between Norton and the virus you would pick the virus. In the Windows world we standardized around Defender because there is some bound on how much Defender degrades the performance of your machine which was not the case with competitive antivirus software.
I've done a few projects which involved working with container file formats like ZIP and PDF (e.g. you know it's a graph of resources in which some of those resources are containers that contain more resources, right?), and now that I think of it you ought to be able to virus-scan ZIP files quickly and intelligently, but the whole problem with the antivirus industry is that nobody ever considers the cost.
ronsor 4 hours ago [-]
Now we'll have to encrypt the files to prevent the performance hit of antivirus peeking inside.
Oh, wait...
Dangeranger 6 hours ago [-]
There are alternative package managers like Yarn that use zip files as a way to store each Node package.[0]
[0] https://yarnpkg.com/advanced/pnp-spec#zip-access
yarn with zero-installs removes an awful lot of pain present in npm and pnpm. It's practically the whole point of yarn berry.
Firstly - with yarn pnp zero-installs, you don't have to run an `install` every time you switch branch, just in case a dep changed. So much dev time is wasted due to this.
Secondly - "it worked on my machine" is eliminated. CI and deploy use the exact same files - this is particularly important for deeply nested range satisfied dependencies.
Thirdly - packages committed to the repo allow for meaningful retrospectives and automated security reviews. When working in ops, packages changing is hell.
All of this is facilitated by the zip files that the comment you replied to was discussing, that you tangented away from.
The graph you have linked is fundamentally odd. Firstly - there is no good explanation of what it is actually showing. I've had claude spin on it and it reckons it's npm download counts. This leads to it being a completely flawed graph! Yarn berry is typically installed either via corepack or bootstrapped via package.json and the system yarn binary. Yarn even saves itself into your repo. pnpm is never (I believe) bundled with the system node, whereas yarn and npm typically are.
Your graph doesn't show what you claim it does.
PaulHoule 6 hours ago [-]
... and of course JAR files in Java are just ZIP files with a little extra metadata and the JVM can unpack them in realtime just fine.
buttsack 3 hours ago [-]
When npm decided to have per-project node_modules (rather than shared, like ruby and others) and human-readable configs and library files, I think the goal was to be developer-friendly and highly configurable, which it is. And package.json became a lot more than that as a result; it's been a great system IMO.
Combined with a hackable IDE like Atom (Pulsar) made with the same tech it’s a pretty great dev exp for web devs
fmorel 7 hours ago [-]
I remember when Firefox started putting everything into jars for similar reasons.
Would accessing deps directly from a zip really be faster? I'd be a little surprised but not terribly, given that it's readonly on an fs designed for RW. If not, maybe just tar?
pie_flavor 2 hours ago [-]
You just cat the exe with the zip file, then it is all loaded into memory at the same time on process init. This is how e.g. LÖVE does game code packaging. (It can't be tar, because this trick only works because the PKZIP descriptor is at the end of the file.)
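A rough sketch of why the appended-zip trick works (assuming ordinary, non-ZIP64 archives): readers locate the central directory by scanning backwards from the end of the file for the end-of-central-directory signature, so any bytes prepended to the archive, such as an executable, are simply ignored:

  import { readFile } from 'node:fs/promises';

  // Scan backwards for the End Of Central Directory signature "PK\x05\x06"
  // (0x06054b50 as a little-endian uint32). The EOCD record is >= 22 bytes.
  const buf = await readFile(process.argv[2]); // path to an exe+zip bundle
  let eocd = -1;
  for (let i = buf.length - 22; i >= 0; i--) {
    if (buf.readUInt32LE(i) === 0x06054b50) { eocd = i; break; }
  }
  console.log(eocd >= 0
    ? `zip payload found: central directory record at byte ${eocd}`
    : 'no zip payload appended');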
pverheggen 4 hours ago [-]
You can always use virtualized Linux to avoid the NTFS penalty (WSL2, VS Code dev containers, etc.)
hrmtst93837 4 hours ago [-]
Moving your whole workflow into WSL or nested containers just to dodge NTFS is a band-aid. Then you get flaky file watchers, odd perms, and a dev setup that feels like a workaround piled on top of another workaround. A fast Node VFS would remove a lot of this nonsense.
pverheggen 3 hours ago [-]
Oh it's a workaround for sure, didn't mean to suggest otherwise.
MBCook 6 hours ago [-]
It’s insane to me that node works how it does. Zip files make so much more sense, I really liked that about Yarn.
sheept 6 hours ago [-]
Would it work to run a bundler over your code, so all (static) imports are inlined and tree shaken?
mg 6 hours ago [-]
> You can’t import or require() a module that only exists in memory.
You can convert it into a data url and import that, can't you?
afavour 5 hours ago [-]
What happens to relative imports?
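For reference, a small sketch of the data-URL route and its limitation (Node's ESM loader accepts data: URLs; the module source here is made up):

  // Build module source in memory and import it without touching the disk.
  const src = 'export const greet = (name) => `hello ${name}`;';
  const url = 'data:text/javascript;base64,' + Buffer.from(src).toString('base64');

  const mod = await import(url);
  console.log(mod.greet('node')); // "hello node"

  // The catch raised above: a data: URL has no directory, so a module
  // containing `import './util.js'` has nothing to resolve against and
  // fails -- which is exactly the gap a real virtual filesystem closes.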
doctorpangloss 6 hours ago [-]
Yeah but Claude didn't suggest that when it wrote this blog post and did all the work so...
austin-cheney 7 hours ago [-]
Most of the 4 justifications mentioned sound like mitigations of otherwise bad design decisions. JavaScript in the browser went down this path for the longest time where new standards were introduced only to solve for stupid people instead of actually introducing new capabilities that were otherwise unachievable.
I do see some original benefits to a VFS though, bad application decisions aside, but they are exceedingly minor.
As an aside I think JavaScript would benefit from an in-memory database. This would be more of language enhancement than a Node.js enhancement. Imagine the extended application capabilities of an object/array store native to the language that takes queries using JS logic to return one or more objects/records. No SQL language and no third party databases for stuff that you don't want to keep in offline storage on a disk.
iainmerrick 5 hours ago [-]
Why would you want a language enhancement for that, rather than just writing it in JS code? (or perhaps WASM)
dotancohen 4 hours ago [-]
> I think JavaScript would benefit from an in-memory database.
That database would probably look a lot like a JSON object. What are you suggesting, that a global JSON object does not solve?
austin-cheney 4 hours ago [-]
Whether it is an object, an array, something else, or a combination thereof is a design decision. It is not so much about the design of the structure, which should be determined by execution-performance considerations, but about how information is added, removed and retrieved. Gathering one or more records from a JSON object or array index by the value of some child property somewhere in a descendant structure always feels like a one-off based upon the shape of the data. That could instead be a query, which is more elegant to read and yet still achieves superior execution performance compared to a bunch of nested loops or a chain of array methods.
The more structures you have in a given application and the larger those structures become in their schemas the more valuable a uniform storage and retrieval solution becomes.
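To make the idea concrete, a toy sketch of the kind of store being described (all names hypothetical; this is an illustration, not a syntax proposal):

  // A uniform store: records are plain objects, queries are JS predicates,
  // and optional indexes avoid full scans for repeated lookups.
  class MemStore {
    #records = [];
    #indexes = new Map(); // field -> Map(value -> Set of records)

    insert(record) {
      this.#records.push(record);
      for (const [field, index] of this.#indexes) {
        const set = index.get(record[field]) ?? new Set();
        set.add(record);
        index.set(record[field], set);
      }
      return record;
    }

    createIndex(field) {
      const index = new Map();
      for (const r of this.#records) {
        const set = index.get(r[field]) ?? new Set();
        set.add(r);
        index.set(r[field], set);
      }
      this.#indexes.set(field, index);
    }

    find(pred) { return this.#records.filter(pred); } // arbitrary JS logic
    findBy(field, value) { return [...(this.#indexes.get(field)?.get(value) ?? [])]; }
  }

  const users = new MemStore();
  users.createIndex('team');
  users.insert({ name: 'ada', team: 'core', prefs: { theme: 'dark' } });
  users.insert({ name: 'lin', team: 'docs', prefs: { theme: 'light' } });
  console.log(users.findBy('team', 'core'));              // indexed lookup, no scan
  console.log(users.find(u => u.prefs.theme === 'dark')); // query on a nested property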
> As an aside I think JavaScript would benefit from an in-memory database.
isn't that just global state, or do you mean you want that to be persistent?
gnarbarian 2 hours ago [-]
one of the reasons I prefer deno is the availability of indexeddb (and all the other great stuff that comes with it out of the box)
butz 2 hours ago [-]
How about trying to reduce dependencies? 11ty is going in the correct direction, dropping a significant chunk of various dependencies, replacing them with packages that have no dependencies, or using platform features as they become readily available.
mohsen1 5 hours ago [-]
Yarn, pnpm, webpack all have solutions for this. Great to see this becoming a standard. I have a project that is severely handicapped due to FS. Running 13k tests takes 40 minutes, where a virtual file system that Node would just work with would cut the run time to 3 minutes. I experimented with some hacks and decided to stay with the slow but native FS solution.
What I really want is a way of swapping FS with VFS in a Node.js program harness. Something like
node --use-vfs --vfs-cache=BIG_JSON_FILE
So basically Node never touches the disk and loads everything from memory
Normal_gaussian 5 hours ago [-]
The way to do this today is to do it outside of node. Using an overlay fs with the overlay being a ramfs. You can even chroot into it if you can't scope the paths you need to be just downstream from some directory. Or, just use docker.
mohsen1 5 hours ago [-]
making that work cross platform is pure pain
Normal_gaussian 5 hours ago [-]
Yes and no. Waiting 40 minutes for every test run is pure pain; platform-specific ramfs-type mounting is quite scriptable. Yes, some devs might need to install a dependency, but it's not a complex script.
skydhash 2 hours ago [-]
What are the other OSes? There's a bunch of solutions described on Wikipedia: https://en.wikipedia.org/wiki/List_of_RAM_drive_software
This resonates a lot. The number of times I've seen Node.js projects ship with fragile path-joining logic that breaks across OS boundaries is wild. A VFS layer would clean up so many edge cases.
One angle the article doesn't cover much: testing. Right now mocking the file system in Node requires either sinon stubs on every fs method or something like memfs. A built-in VFS would make it trivial to spin up an isolated file tree per test case. No temp directories, no cleanup, no flaky CI from parallel tests writing to the same path.
The performance concern is valid though. Any abstraction layer adds overhead, and for I/O-heavy workloads even a thin wrapper matters. I'd love to see benchmarks comparing direct fs calls vs a VFS proxy on something like a large Webpack build.
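For comparison, what the memfs route looks like today (a sketch using memfs's vol API and jest-style test globals; the paths and assertions are made up):

  import { vol } from 'memfs';

  // Each test gets its own in-memory tree: no temp dirs, no cleanup,
  // no parallel tests colliding on the same real path.
  beforeEach(() => {
    vol.reset();
    vol.fromJSON({
      '/app/config.json': '{"debug":true}',
      '/app/data/input.txt': 'hello',
    });
  });

  test('reads config', () => {
    // Code under test must be pointed at memfs (via injection or by
    // mocking node:fs) -- that indirection is what a built-in VFS removes.
    const cfg = JSON.parse(vol.readFileSync('/app/config.json', 'utf8'));
    expect(cfg.debug).toBe(true);
  });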
- https://github.com/yarnpkg/berry/issues/7065
- https://github.com/nodejs/node/issues/62012
This is because yarn patches fs in order to introduce virtual file path resolution of modules in the yarn cache (which are zips), which is quite brittle and was broken by a seemingly unrelated change in 25.7.
The discussion in issue 62012 is notable - it was suggested yarn just wait for vfs to land. This is interesting to me in two ways: firstly, the node team seems quite happy for non-trivial amounts of the ecosystem to just be broken, and suggests relying on what I'm assuming will be an experimental API when it does land; secondly, it implies a lot of confidence that this feature will land before LTS.
See https://pnpm.io/motivation
Also, while popularity isn't necessarily a great indicator of quality, a quick comparison shows that the community has decided on pnpm:
https://www.npmcharts.com/compare/pnpm,yarn,npm
Not spamming, not affiliated, just trying to help others avoid so much needless suffering.
Normal_gaussian 4 hours ago [-]
This is quite spammy; you could mitigate it by explaining what you think the "needless suffering" is. Having been using npm, pnpm, and yarn for many years the only benefit I find with pnpm is a little bit of speed when using the cli, but not enough that I notice; I've outlined the major yarn benefit to me 'in a peer comment' (which I didn't realise was you when I answered) https://news.ycombinator.com/item?id=47415660
I expect yarn to have a real competitor sooner rather than later that will replace it; and I do wonder if it is this vfs module that will enable it.
zadikian 5 hours ago [-]
I just use npm because I like to stay as vanilla as possible. Glad that alternatives exist though.
Normal_gaussian 4 hours ago [-]
This can't be overstated. The main benefit of yarn berry (v4+) is being able to commit the dependencies to the repo - I have yarn-based tools that I wrote years ago that just work, whereas I frequently find npm and python tools broken due to version changes. However, this benefit comes at a setup cost and a lot more on-disk complexity - one-off tools are just npm and done.
notnullorvoid 6 hours ago [-]
I could see something like this being useful if it could be passed to workers to replace any fs access inside the worker.
ozlikethewizard 6 hours ago [-]
I'm not convinced this needs to be in core Node, but being able to have serverless functions access a file system without providing storage would definitely have some use cases. Had some fun with video processing recently that this would be perfect for.
sidewndr46 3 hours ago [-]
Don't all projects eventually grow to encompass service discovery?
gwbas1c 4 hours ago [-]
Can you dynamically load code via eval?
(I know, I know, it's ugly and has its own set of problems)
adzm 5 hours ago [-]
How does electron do this with its packaged files? I suppose it does not work with module resolution?
minraws 2 hours ago [-]
Why is this not a library? What is this insanity??
verdverm 3 hours ago [-]
Separate from the valid critiques in other comments: Go's io.FS interface is really nice for making these sorts of things. Is there something like this in Node already? (with base implementations like host and in-memory)
themafia 3 hours ago [-]
> You can’t import or require() a module that only exists in memory.
Sure you can. Function() exists and require.cache exists. This is _intentionally_ exploitable.
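A bare-bones sketch of the Function() half of that claim (CommonJS only; seeding require.cache so other modules can require() the result is the fiddlier part):

  // Evaluate CommonJS source that exists only in memory by reproducing
  // the wrapper Node itself uses: (module, exports, require) => ...
  function loadFromString(src) {
    const module = { exports: {} };
    const fn = new Function('module', 'exports', 'require', src);
    fn(module, module.exports, require);
    return module.exports;
  }

  const api = loadFromString('module.exports.add = (a, b) => a + b;');
  console.log(api.add(2, 3)); // 5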
bronlund 4 hours ago [-]
Yeah. That’s what we need. More Node.
devnotes77 3 hours ago [-]
[flagged]
westurner 6 hours ago [-]
Is node::vfs the new solution for JupyterLite filesystems?
> Let me be honest: a PR that size would normally take months of full-time work. This one happened because I built it with Claude Code.
The node.js codebase and standard library has a very high standard of quality, hope that doesn't get washed out by sloppy AI-generated code.
OTOH, Matteo is an excellent engineer and the community owes a lot to him. So I guess the code is solid :).
leontloveless 6 hours ago [-]
[dead]
aplomb1026 4 hours ago [-]
[dead]
openinstaclaw 5 hours ago [-]
[dead]
rigorclaw 5 hours ago [-]
[flagged]
andrewmcwatters 7 hours ago [-]
[dead]
buttsack 3 hours ago [-]
[flagged]
syrusakbary 3 hours ago [-]
[flagged]
szmarczak 2 hours ago [-]
HN comments aren't a place to advertise your product.
szszrk 2 hours ago [-]
I am not so sure about that. I recall multiple posts where the most upvoted comments are from founders...
Wonder what Dang says about that.
AgentMarket 31 minutes ago [-]
[flagged]
petcat 7 hours ago [-]
Are people still building new projects on Node.js? I would have thought the ecosystem was moving to deno or bun now
dzogchen 7 hours ago [-]
I don't really understand what the value proposition of Bun and Deno is. And I see huge problems with their governance and long-term sustainability.
Node.js on the other hand is not owned or controlled by one entity. It is not beholden to the whims of investors or a large corporation. I have contributed to Node.js in the past and I was really impressed by its rock-solid governance model and processes. I think this an under-appreciated feature when evaluating tech options.
packetlost 7 hours ago [-]
Deno has some pretty nice unique features like sandboxing that, afaik, don't exist in other runtimes (yet). It's enough of a draw that it's the recommended runtime for projects like yt-dlp: https://github.com/yt-dlp/yt-dlp/issues/14404
> The permission model implements a "seat belt" approach, which prevents trusted code from unintentionally changing files or using resources that access has not explicitly been granted to. It does not provide security guarantees in the presence of malicious code. Malicious code can bypass the permission model and execute arbitrary code without the restrictions imposed by the permission model.
Deno's permissions model is actually a very nice feature. But it is not very granular so I think you end up just allowing everything a lot of the time. I also think sandboxing is a responsibility of the OS. And lastly, a lot of use cases do not really benefit from it (e.g. server applications).
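For the curious, Node's model (quoted above) is opt-in via CLI flags and queryable at runtime; a sketch, noting that the flag spelling has changed across versions (--experimental-permission on Node 20, --permission later):

  // Start with something like:
  //   node --permission --allow-fs-read=/app/data index.js
  if (process.permission) {
    console.log(process.permission.has('fs.read', '/app/data'));  // true
    console.log(process.permission.has('fs.write', '/app/data')); // false: never granted
  } else {
    console.log('permission model not enabled');
  }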
zamadatix 6 hours ago [-]
If one gets nothing from them directly, they've at least been a good kick to get several features into Node. It's almost like neovim was to vim, perhaps to a lesser extent.
zadikian 5 hours ago [-]
Note that Bun was recently acquired by Anthropic.
gavmor 5 hours ago [-]
Faster, no transpilation, dev-ex sugar.
pier25 5 hours ago [-]
I agree about the governance and long-term sustainability points but if you don't see any value in Bun or Deno is probably because (no offense) you are not paying attention.
jitl 7 hours ago [-]
loud people on twitter are always switching to the new hotness. i personally can't see myself using bun until its reputation for segfaults goes away after a few more years of stabilizing. deno seems neat and has been around for longer, but its node compatibility story is still evolving; i'm also giving it another year before i try it.
That's basically just Zig, right? Re-invented C but only fixed the syntax, not the problems.
zadikian 5 hours ago [-]
Yes people are using Node.js, most likely the majority.
rrr_oh_man 7 hours ago [-]
Why?
kitsune1 7 hours ago [-]
The delusion in this comment is insane.
pier25 5 hours ago [-]
The Node team has lost the plot IMO.
By far the most critical issue is the over-reliance on third-party NPM packages for even fundamental needs like connecting to a database.
afavour 5 hours ago [-]
What would a Node-native database connection layer look like? What other platforms have that?
Databases are third party tech, I don’t think it’s unreasonable to use a third party NPM module to connect to them.
mike_hearn 5 hours ago [-]
Most obviously, Java has JDBC. I think .NET has an equivalent. Drivers are needed but they're often first party, coming directly from the DB vendor itself.
Java also has a JIT compiling JS engine that can be sandboxed and given a VFS:
N.B. there's a NodeJS compatible mode, but you can't use VFS+sandboxing and NodeJS compatibility together because the NodeJS mode actually uses the real NodeJS codebase, just swapping out V8. For combining it all together you'd want something like https://elide.dev which reimplemented some of the Node APIs on top of the JVM, so it's sandboxable and virtualizable.
LunaSea 5 hours ago [-]
> Most obviously, Java has JDBC. I think .NET has an equivalent. Drivers are needed but they're often first party, coming directly from the DB vendor itself.
So it's an external dependency that is not part of Java. It doesn't really matter if the code comes from the vendor or not. Especially for OpenSource databases.
zadikian 4 hours ago [-]
The DBMS vendor providing the client is nice. At least if you're using pg-native in Node, that's just a wrapper around the Postgres-owned libpq, but I've run into small breaking updates before that I don't feel would've happened if Postgres maintained both.
afavour 1 hour ago [-]
But that’s not Node’s fault surely? Shouldn’t Postgres be providing an NPM module given the popularity of Node?
zadikian 1 hour ago [-]
No it's not Node's fault, this isn't their job. I don't blame Postgres either, cause maintaining libpq is fair enough, just would've been extra nice to have an official Node lib too.
mike_hearn 4 hours ago [-]
Well in the case of Oracle you can get the language, runtime, DB and driver all from the same organization under unified support contracts.
If you don't value that, why would you want your programming language implementors to also implement database drivers?
zadikian 3 hours ago [-]
Well that's only because Oracle happens to own both Java and Oracle DB. Suppose you're not using that DB.
pier25 5 hours ago [-]
Bun provides native MySQL, SQlite, and Postgres drivers.
I'm not saying Node should support every db in existence but the ones I listed are critical infrastructure at this point.
When using Postgres in Node you either rely on the old pg which pulls 13 dependencies[1] or postgres[2] which is much better and has zero deps but mostly depends on a single guy.
Maybe MySQL and Postgres should make official Node libs then. Bun maintaining this is ok too, but it seems odd given that it means having to keep up with new features in those DBMSes.
pier25 39 minutes ago [-]
> but it seems odd given that it means having to keep up with new features in those DBMSes
That would be more useful for the ecosystem than the Node team investing time on a virtual file system.
adzm 5 hours ago [-]
Node has sqlite, though I have not had any issues using better-sqlite3 and worker processes for long running ops
pier25 2 hours ago [-]
Until the day it gets pwned by a malicious actor. Which is something we've seen quite a lot of times on npm deps.
ksherlock 5 hours ago [-]
Perl has DBI. PHP has PDO.
Spivak 5 hours ago [-]
Python has DB-API.
beart 5 hours ago [-]
Outside of sqlite, what runtimes natively include database drivers?
pier25 5 hours ago [-]
Bun, .NET, PHP, Java
Deukhoofd 4 hours ago [-]
For .NET that's only true of the old legacy .NET Framework; SqlClient was moved to a separate package with the rewrite (from System.Data.SqlClient to Microsoft.Data.SqlClient). They realized that it was a rather bad idea to have that baked into your main runtime, as it complicates your updates.
pier25 2 hours ago [-]
It's still provided by Microsoft. They are responsible for those first party drivers.
LunaSea 5 hours ago [-]
For Bun you're thinking of simple key / values, hardly a database. They also have a SQLite driver which is still just a package.
pier25 2 hours ago [-]
I think you're confusing the database engine with the driver?
torginus 3 hours ago [-]
Why do people keep reinventing OS features?
There's Docker, OverlayFS, FUSE, ZFS or Btrfs snapshots?
Do you not trust your OS to do this correctly, or do you think you can do better?
A lot of this stuff existed 5, 10, 15 years ago...
Somehow there's been a trend for every effing program to grow and absorb the features and responsibilities of every other program.
Actually, I have a brilliant idea, what if we used nodejs, and added html display capabilities, and browser features? After all Cursor has already proven you can vibecode a browser, why not just do it?
I'm just tired at this point
williamstein 3 hours ago [-]
This exact thing solves a huge problem with SEA binaries as he points out in his post. You can include complicated assets easily and skip an ugly unpack step entirely. This is very useful.
ryandrake 3 hours ago [-]
One of the worst is media players that all insist on grafting their own "library" on top of my already-working OS filesystem. So I can't just run the media player and play files. No, that would be too simple. I have to first "import" my media into a "library" abstraction and then store that library somewhere else on my filesystem. Terrible!
SAI_Peregrinus 2 hours ago [-]
There's a legitimate problem they're trying to solve there: there are several ways to sort media that don't match up well with a hierarchical filesystem¹. They solve it badly. Good players maintain a database for efficient queries of media metadata, and periodically rescan the folders to update it. Shitty media players try to manage the files themselves, and still end up needing to maintain a database. The worst of these use the database to manage the contents of their storage files (or store the files themselves in the database), if something isn't in the database they delete the files. Adobe Lightroom Classic does this, if your database gets corrupted it deletes all your RAW files!
¹E.g. if you've got music, and it's sorted `artist/album/track<n>.extension`, and two artists collaborate on an album, which one gets the album in their folder? What if you want to sort all songs in the display by publication date? Even if they use the files on your filesystem without moving them, some sort of metadata database will be needed for efficient display & search.
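The footnote's collaboration case is exactly the many-to-many relationship a junction table handles and a folder tree cannot; a sketch using node:sqlite (experimental, Node 22+; any embedded database works the same way):

  import { DatabaseSync } from 'node:sqlite';

  const db = new DatabaseSync(':memory:');
  db.exec(`
    CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE albums  (id INTEGER PRIMARY KEY, title TEXT, published INTEGER);
    CREATE TABLE credits (artist_id INTEGER, album_id INTEGER);
    INSERT INTO artists VALUES (1, 'Artist A'), (2, 'Artist B');
    INSERT INTO albums  VALUES (1, 'Joint Album', 2001);
    INSERT INTO credits VALUES (1, 1), (2, 1); -- the album belongs to both
  `);

  // One album, two artists: impossible as a single folder path, trivial as
  // a query. Sorting by publication date is just ORDER BY, no file moves.
  const rows = db.prepare(`
    SELECT ar.name, al.title, al.published
    FROM credits c
    JOIN artists ar ON ar.id = c.artist_id
    JOIN albums  al ON al.id = c.album_id
    ORDER BY al.published
  `).all();
  console.log(rows);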
I do not think it is wise to brag that your solution to a problem is extremely painful but that you were impervious to all the pain. Others will still feel it. This code takes bandwidth to host and space on devices and for maintainers it permanently doubles the work associated with evolving the filesystem APIs. If someone else comes along with the same kind of thinking they might just double those doubled costs, and someone else might 8x them, all because nobody could feel the pain they were passing on to others
Note aside, OpenJS executive director mentioned it's ok to use AI assistance on Node.js contributions:
[1]: https://github.com/nodejs/node/pull/61478#issuecomment-40772...It is great to have a legal perspective on compliance of LLM generated code with DCO terms, and I feel safer knowing that at least it doesn't expose Node.js to legal risk. However it doesn't address the well known unresolved ethical concerns over the sourcing of the code produced by LLM tooling.
Speed code all your SaaS apps, but slow iteration speeds are better for a runtime because once you add something, you can basically never remove it. You can't iterate. You get literally one shot, and if you add a awkward or trappy API, everyone is now stuck with it forever. And what if this "must have" feature turns out to be kind of a dud, because everyone converged on a much more elegant solution a few years later? Congratulations, we now have to maintain this legacy feature forever and everyone has to migrate their codebase to some new solution.
Much better to let dependencies and competing platforms like bun or deno do all the innovating. Once everyone has tried and refined all the different ways of solving this particular problem, and all the kinks have been worked out, and all the different ways to structure the API have been tried, you can take just the best of the best ideas and add it into the runtime. It was late, but because of that it will be stable and not a train wreck.
But I know what you're thinking. "You can't do that. Just look at what happens to platforms that iterate slowly, like C or C++ or Java. They're toast." Oh wait, never mind, they're among the most popular platforms out there.
Time is highly correlated with expertise. When you don’t have expertise, you may go fast at expense of stability because you lack the experience to make good decisions to really save speed. This doesn’t hold true for any projects where you rely on experts, good processes and tight timelines (aka: Apollo mission)
It's not an AI issue. Node.js itself is lots of legacy code and many projects depend on that code. When Deno and Bun were in early development, AI wasn't involved.
Yes, you can speed up the development a bit but it will never reach the quality of newer runtimes.
It's like comparing C to C++. Those languages are from different eras (relatively to each other).
Is it slop if it is carefully calculated? I tire of hearing people use slop to mean anything AI, even when it is carefully reviewed.
While the large code changes were maintained, they were often split up into a set of semantically meaningful commits for purposes of review and maintenance.
With AI blowing up the line counts on PRs, it's a skill set that more developers need to mature. It's good for their own review to take the mass changes, ask themselves how would they want to systematically review it in parts, then split the PR up into meaningful commits: e.g. interfaces, docs, subsets of changed implementations, etc.
Like, why on earth would I spent hours reviewing your PR that you/Claude took 5 minutes to write? I couldn't care less if it improves (best case scenario) my open source codebase, I simply don't enjoy the imbalance.
Not everyone has the same motivations. I’ve done open source for fun, I’ve done it to unblock something at work, I’ve done it to fix something that annoys me.
If your project is gaining useful functionality, that seems like a win.
Of course any chess bot is going to play better, but that's not the point
Well, the process you’re describing is mature and intentionally slows things down. The LLM push has almost the opposite philosophy. Everyone talks about going faster and no one believes it is about higher quality.
If there is some bug that slips by review, having the PR broken down semantically allows quicker analysis and recovery later for one case. Even if you have AI reviewing new Node.js releases for if you want to take in the new version - the commit log will be more analyzable by the AI with semantic commits.
Treating the code as throwaway is valid in a few small contexts, but that is not the case for PRs going into maintained projects like Node.js.
The fact is, it's useful as a tool, but you still should review what's going on/in. That isn't always easy though, and I get that. I've been working on a TS/JS driver for MS-SQL so I can use some features not in other libraries, mostly bridging a Rust driver (first Tiberious, then mssql-client), the clean abstraction made the switch pretty quick... a fairly thorough test suite for Deno/Node/Bun kapt the sanity in check. Rust C-style library with FFI access in TS/JS server environment.
My hardest part, is actually having to setup a Windows Server to test the passswordless auth path (basically a connection string with integrated windows auth). I've got about 80 hours of real time into this project so far. And I'll probably be doing 2 followups.. one with be a generic ODBC adapter with a similar set of interfaces. And a final third adapter that will privide the same methods, but using the native SQLite underneath but smothing over the differences.
I'm leveraging using/dispose (async) instead of explicit close/rollback patterns, similar to .Net as well as Dapper-like methods for "Typed" results, though no actual type validation... I'd considered trying to adapt Zod to check at least the first record or all records, and may still add the option.
All said though, I wouldn't have been able to do so much with so relatively little time without the use of AI. You don't have to sacrifice quality to gain efficiency with AI, but you do need to take the time to do it.
If submitter picks (a) they assert that they wrote the code themselves and have right to submit it under project's license. If (b) the code was taken from another place with clear license terms compatible with the project's license. If (c) contribution was written by someone else who asserted (a) or (b) and is submitted without changes.
Since LLM generated output is based on public code, but lacks attribution and the license of the original it is not possible to pick (b). (a) and (c) cannot be picked based on the submitter disclaimer in the PR body.
(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
If there isn't, then (b) works fine, the code is taken from the LLM with no preexisting license. And it would be very strange if a mix of (a) and (b) is a problem; almost any (b) code will need some (a) code to adapt it.
Whether AI output can fall under copyright at all is still up for debate - with some early rulings indicating that the fact that you prompted the AI does not automatically grant you authorship.
Even if it does, it hasn't been settled yet what the impact of your AI having been trained on copyrighted material is on its output. You can make a not-completely-unreasonable argument that AI inference output is a derivative work of AI training input.
Fact is, the matter isn't settled yet, which means any open-source project should assume the worst possible outcome - which in practice means a massive AI-generated PR like this should be treated like a nuke which could go off at any moment.
1. Copyright cannot be assigned to an AI agent.
2. Copyrighted works require human creativity to be applied in order to be copyrighted.
For point 2 this would apply to times were AI one shots a generic prompt. But for these large PRs where multiple prompts are used and a human has decided what the design should be and how the API should look you get the human creativity required for copyright.
In regards to being a derivative work I think it would be hard to argue that an LLM is copying or modifying an existing original work. Even if it came up with an exact duplicate of a piece of code it would be hard to prove that it was a copy and not an independent recreation from scratch.
>the worst possible outcome
The worst possible outcome is they get sued and Anthropic defends them from the copyright infringement claim due to Anthopic's indemnity clause when using Claude Code.
Also the commercial version is limited to “…Customer and its personnel, successors, and assigns…”. I am very much not a lawyer and couldn’t find definitions of these in the agreement but I am not sure how transferable this indemnity would be to an open source project.
On a more serious note, I think that this will be thoroughly reviewed before it gets merged and Node has an entire security team that overviews these.
Oh I'd use an llm to generate large amounts of feedback and request changes!
I get it, I've implemented things for tests, I'm just wondering if this shouldn't be solved at an OS level.
--- update
Let's put this another way, my code does effectively, child_process.spawn('something-that-reads-and-write-a-file')
now I'm back to the same issue. To test I need a virtual file system. Node providing one won't help.
I do think it's more painful to distribute files when you're a distributed as a single binary vs scripts, since the latter has to figure out bundling of files anyway.
But still - it does exist
I like the idea of it mocking the file system for tests, but I feel like that should probably be part of the test suite, not Node.
The example towards the end that stores data in a sqlite provider and then saves it as a JSON file is mind-boggling to me. Especially for a system that's supposed to be about not saving to the disk. Perhaps it's just a bad example, but I'm really trying to figure out how this isn't just adding complexity.
I had to laugh, because the post you're replying to STRONGLY reminds me of this story, https://news.ycombinator.com/item?id=31778490 , in which some people on the GNOME project objected to thumbnails in the file-open dialog box because it might be a "Security issue" (even though thumbnails were available in the normal file browser, something those commenters probably should have known about, but didn't, but they just had to chime in anyway).
https://github.com/tc39/proposal-module-expressions
Just my opinion, probably not a popular one. But I will be avoiding an upgrade to Node.js after 24.14 for a while if this is becoming an acceptable precedent.
This is the biggest takeaway for me for AI. It's not even that nobody wants to do these things, its that by the time you finish your tasks, you have no time to do these things, because your manage / scrum master / powers that be want you to work on the next task.
The alternative is that you work on the same number of features and utilize the ability to make those features as robust as you know they could be, but you have other pressing matters to attend to. That's weighing the ability of AI against the ability of neglect.
Look, most of us realized around 2004 or so that if you had a choice between Norton and the virus you would pick the virus. In the Windows world we standardized around Defender because there is some bound on how much Defender degrades the performance of your machine which was not the case with competitive antivirus software.
I've done a few projects which involved getting container file formats like ZIP and PDF (e.g. you know it's a graph of resources in which some of those resources are containers that contain more resources, right?) and now that I think of it you ought to be able to virus scan ZIP files quickly and intelligently but the whole problem with the antivirus industry is that nobody ever considers the cost.
Oh, wait...
[0] https://yarnpkg.com/advanced/pnp-spec#zip-access
See https://pnpm.io/motivation
Also, while popularity isn't necessarily a great indicator of quality, a quick comparison shows that the community has decided on pnpm:
https://www.npmcharts.com/compare/pnpm,yarn,npm
Firstly - with yarn pnp zero-installs, you don't have to run an `install` every time you switch branches, just in case a dep changed. So much dev time is wasted due to this.
Secondly - "it worked on my machine" is eliminated. CI and deploy use the exact same files - this is particularly important for deeply nested range satisfied dependencies.
Thirdly - packages committed to the repo allow for meaningful retrospectives and automated security reviews. When working in ops, packages changing is hell.
All of this is facilitated by the zip files that the comment you replied to was discussing, and which you went off on a tangent from.
The graph you have linked is fundamentally odd. Firstly - there is no good explanation of what it is actually showing. I've had Claude spin on it, and it reckons it's npm download counts. That makes it a completely flawed graph! Yarn Berry is typically installed either via corepack or bootstrapped via package.json and the system yarn binary; Yarn even saves itself into your repo. pnpm is never (I believe) bundled with the system node, whereas yarn and npm typically are.
Your graph doesn't show what you claim it does.
Combined with a hackable IDE like Atom (Pulsar) made with the same tech, it's a pretty great dev experience for web devs
https://web.archive.org/web/20161003115800/https://blog.mozi...
I do see some original benefits to a VFS though, bad application decisions aside, but they are exceedingly minor.
As an aside, I think JavaScript would benefit from an in-memory database. This would be more of a language enhancement than a Node.js enhancement. Imagine the extended application capabilities of an object/array store native to the language that takes queries using JS logic to return one or more objects/records. No SQL language and no third-party databases for stuff that you don't want to keep in offline storage on a disk.
The more structures you have in a given application, and the larger those structures become in their schemas, the more valuable a uniform storage and retrieval solution becomes.
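As a sketch of what I mean (MemoryStore is entirely hypothetical, not an existing API):

    // Hypothetical native object store queried with plain JS predicates:
    const db = new MemoryStore();
    db.insert('users', { id: 1, name: 'Ada', role: 'admin' });
    db.insert('users', { id: 2, name: 'Lin', role: 'viewer' });
    // No SQL, no third-party engine: the query is just a function.
    const admins = db.query('users', u => u.role === 'admin');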
isn't that just global state, or do you mean you want that to be persistent?
What I really want is a way of swapping FS with VFS in a Node.js program harness. Something like
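(a purely hypothetical API; nothing like this exists in Node today)

    // Hypothetical harness: run a program with fs swapped out for a VFS.
    const { MemoryFileSystem, runWithFS } = require('node:vfs'); // module does not exist
    const vfs = new MemoryFileSystem({ '/app/config.json': '{"debug": true}' });
    runWithFS(vfs, () => {
      require('/app/main.js'); // every fs call in here resolves against vfs
    });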
So basically Node never touches the disk and loads everything from memory. https://en.wikipedia.org/wiki/List_of_RAM_drive_software
One angle the article doesn't cover much: testing. Right now mocking the file system in Node requires either sinon stubs on every fs method or something like memfs. A built-in VFS would make it trivial to spin up an isolated file tree per test case. No temp directories, no cleanup, no flaky CI from parallel tests writing to the same path.
The performance concern is valid though. Any abstraction layer adds overhead, and for I/O-heavy workloads even a thin wrapper matters. I'd love to see benchmarks comparing direct fs calls vs a VFS proxy on something like a large Webpack build.
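For comparison, here is roughly what the memfs workaround looks like today (memfs is a real userland package; the paths are made up):

    const { vol, fs: memFs } = require('memfs');
    // Build an isolated in-memory tree for a single test:
    vol.fromJSON({ '/data/config.json': '{"debug": true}' });
    const raw = memFs.readFileSync('/data/config.json', 'utf8');
    // vol.reset() between tests: no temp dirs, no cleanup,
    // no parallel tests colliding on shared paths.
    vol.reset();

A built-in VFS would give the same isolation without wiring a third-party fs into every module under test.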
- https://github.com/yarnpkg/berry/issues/7065
- https://github.com/nodejs/node/issues/62012
This is because yarn patches fs in order to introduce virtual file path resolution of modules in the yarn cache (which are zips). That patching is quite brittle and was broken by a seemingly unrelated change in Node 25.7.
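Conceptually the patch looks something like this (a heavily simplified sketch of the idea, not yarn's actual code; readFromZip is a made-up stand-in):

    const fs = require('fs');
    const realReadFileSync = fs.readFileSync;
    // Redirect reads whose paths point inside a cached zip archive:
    fs.readFileSync = function (p, ...args) {
      if (String(p).includes('.yarn/cache') && String(p).includes('.zip/')) {
        return readFromZip(p, ...args); // hypothetical helper
      }
      return realReadFileSync.call(this, p, ...args);
    };

Monkey-patching every fs entry point like this is exactly why an unrelated internal change in Node can break it.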
The discussion in issue 62012 is notable - it was suggested yarn just wait for vfs to land. This is interesting to me in two ways: firstly, the node team seems quite happy for non-trivial amounts of the ecosystem to just be broken, and suggests relying on what I'm assuming will be an experimental API when it does land; secondly, it implies a lot of confidence that this feature will land before LTS.
Not spamming, not affiliated, just trying to help others avoid so much needless suffering.
I expect yarn to get a real competitor sooner rather than later, one that will replace it; and I do wonder if it is this vfs module that will enable it.
(I know, I know, it's ugly and has its own set of problems)
Sure you can. Function() exists and require.cache exists. This is _intentionally_ exploitable.
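For example (real CommonJS behavior; the module path is made up):

    // require.cache is a plain object you can rewrite at runtime:
    const key = require.resolve('./config');
    require('./config');                     // populates the cache
    require.cache[key].exports = { mocked: true };
    console.log(require('./config'));        // -> { mocked: true }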
From https://github.com/jupyterlite/jupyterlite/issues/949#issuec... :
> Ideally, the virtual filesystem of JupyterLite would be shared with the one from the virtual terminal.
emscripten-core/emscripten > "New File System Implementation": https://github.com/emscripten-core/emscripten/issues/15041#i... :
> [ BrowserFS, isomorphic-git/lightningfs, ]
pyodide/pyodide: "Native file system API" #738: https://github.com/pyodide/pyodide/issues/738 re: [Chrome,] Filesystem API :
> jupyterlab-git [should work with the same VFS as Jupyter kernels and Terminals]
pyodide/pyodide: "ENH Add API for mounting native file system" #2987: https://github.com/pyodide/pyodide/pull/2987
The Node.js codebase and standard library have a very high standard of quality; I hope that doesn't get washed out by sloppy AI-generated code.
OTOH, Matteo is an excellent engineer and the community owes a lot to him. So I guess the code is solid :).
Wonder what Dang says about that.
Node.js on the other hand is not owned or controlled by one entity. It is not beholden to the whims of investors or a large corporation. I have contributed to Node.js in the past and I was really impressed by its rock-solid governance model and processes. I think this an under-appreciated feature when evaluating tech options.
> The permission model implements a "seat belt" approach, which prevents trusted code from unintentionally changing files or using resources that access has not explicitly been granted to. It does not provide security guarantees in the presence of malicious code. Malicious code can bypass the permission model and execute arbitrary code without the restrictions imposed by the permission model.
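For what it's worth, the seat belt is queryable from inside the process in recent Node versions (the flag values and paths here are illustrative):

    // Started as: node --permission --allow-fs-read=/app server.js
    // process.permission.has() reports what was granted:
    console.log(process.permission.has('fs.read', '/app'));  // true
    console.log(process.permission.has('fs.write', '/app')); // false, not granted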
Deno's permissions model is actually a very nice feature. But it is not very granular so I think you end up just allowing everything a lot of the time. I also think sandboxing is a responsibility of the OS. And lastly, a lot of use cases do not really benefit from it (e.g. server applications).
Open 80, closed 492.
By far the most critical issue is the over-reliance on third-party NPM packages for even fundamental needs like connecting to a database.
Databases are third-party tech; I don't think it's unreasonable to use a third-party NPM module to connect to them.
Java also has a JIT compiling JS engine that can be sandboxed and given a VFS:
https://www.graalvm.org/latest/security-guide/sandboxing/
N.B. there's a Node.js-compatible mode, but you can't use VFS+sandboxing and Node.js compatibility together, because the Node.js mode actually uses the real Node.js codebase, just swapping out V8. For combining it all together you'd want something like https://elide.dev which reimplemented some of the Node APIs on top of the JVM, so it's sandboxable and virtualizable.
So it's an external dependency that is not part of Java. It doesn't really matter whether the code comes from the vendor or not, especially for open-source databases.
If you don't value that, why would you want your programming language implementors to also implement database drivers?
I'm not saying Node should support every db in existence but the ones I listed are critical infrastructure at this point.
When using Postgres in Node you either rely on the old pg, which pulls in 13 dependencies[1], or postgres[2], which is much better and has zero deps but mostly depends on a single guy.
[1] https://npmgraph.js.org/?q=pg
[2] https://github.com/porsager/postgres
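For reference, the zero-dep client's API looks roughly like this (the connection string is a placeholder):

    // porsager/postgres: tagged-template queries, zero dependencies
    const postgres = require('postgres');
    const sql = postgres('postgres://user:pass@localhost:5432/mydb');
    async function main() {
      // interpolations are sent as bound parameters, not string-spliced:
      const users = await sql`select id, name from users where id = ${1}`;
      console.log(users);
      await sql.end();
    }
    main();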
That would be more useful for the ecosystem than the Node team investing time on a virtual file system.
There's Docker, OverlayFS, FUSE, ZFS or Btrfs snapshots?
Do you not trust your OS to do this correctly, or do you think you can do better?
A lot of this stuff existed 5, 10, 15 years ago...
Somehow there's been a trend for every effing program to grow and absorb the features and responsibilities of every other program.
Actually, I have a brilliant idea: what if we used Node.js and added HTML display capabilities and browser features? After all, Cursor has already proven you can vibecode a browser, so why not just do it?
I'm just tired at this point
¹E.g. if you've got music, and it's sorted `artist/album/track<n>.extension`, and two artists collaborate on an album, which one gets the album in their folder? What if you want to sort all songs in the display by publication date? Even if they use the files on your filesystem without moving them, some sort of metadata database will be needed for efficient display & search.