I miss Joe; he left us too early. He always had wild ideas like that. For a while he had this idea of combining git with BitTorrent, which he called GitTorrent, only to find out someone had already used the name. I think it was a bit of an extension of this universal-functions idea.
If you expand some of the comments below, he and other members of the community at the time have a nice discussion about hierarchical namespaces.
I particularly like his "flat beer and chips" comment:
> I'd like to know if there will be hierarchical modules in Erlang,
> because a tree of packages is a rather good idea:
No it's not - this has been the subject of long and heated discussion and is
why packages are NOT in Erlang - many people - myself included - dislike
the idea of hierarchical namespaces. The dot in the name has no semantics
it's just a separator. The name could equally well be encoders.mpg.erlyvideo
or mpg.applications.erlvideo.encoder - there is no logical way to organise the
package name and it does not scale -
erlyvideo.mpegts.encoder
erlyvideo.rtp.encoder
But plain module namespace is also ok. It would be impossible for me
to work with 30K LOC with plain function namespace.
The English language has a flat namespace.
I'd like a drink.alcoholic.beer with my food.unhealthy.hamburger and my food.unhealthy.national.french.fries
I have no problem with flat beer and chips.
/Joe
---
hinkley 7 hours ago [-]
Software development is continually emotionally stunted by a lack of people with expertise in multiple other fields.
English absolutely has namespaces. Every in-group has shibboleths and/or jargon, words that mark membership in the group that have connotations beyond the many dictionary definitions of that word (in fact I wonder how many words with more than three definitions started out as jargon/slang words that achieved general acceptance).
You cannot correctly parse a sentence without the context in which it was written. Some authors use this as a literary device: by letting the reader assume one interpretation of a prophetic sentence early on, the surprise the reader experiences when they discover a different interpretation at the end intensifies the effect.
mechanicalpulse 4 hours ago [-]
I'm reminded of a Final Jeopardy! clue from a few years back --
> As of 2013, this 3-letter verb common in sports, theater & politics has the largest entry in the online OED.
The correct response? What is "run"?
0perator 7 hours ago [-]
It's arguable that any group's dialect is actually a fork of English specialized for a specific culture, activity, or context. Occasionally, elements of the fork are pulled into upstream English as groups grow in popularity and jargon or shibboleths become more commonly used across dialects.
rdtsc 7 hours ago [-]
> Software development is continually emotionally stunted by a lack of people with expertise in multiple other fields.
I think Joe's point is about the perennial discussion of whether hierarchy is better than tags. It's as old as software, or as old as people categorizing things. Some early databases were hierarchical KV stores. Email clients and services go through this too: is it better to group messages by tags, or in a single hierarchy of folders?
> English absolutely has namespaces
Sure, we can pick apart the analogy; after all, we're not programming in English unless we write LLM prompts (or COBOL /s). But if English has namespaces, what would you pick: lager.flat.alcoholic, alcoholic.lager.flat, or lager.alcoholic.flat, etc.? Is there a top-level "lager" vs "ale" package, with flat vs carbonated as the next level?
d0mine 7 hours ago [-]
"Whether hierarchy is better than tags" sounds like asking whether a hammer is better than a screwdriver. Use the tool appropriate for the job.
Hierarchy seems more rigid and less general than tags, but when it works, it works.
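As a sketch of the trade-off, here is the same handful of items (borrowing Joe's beer-and-chips examples) organized both ways; the code and queries are illustrative, not from the thread:

```python
# The same items organized as a hierarchy (one place per item) and as
# tags (overlapping categories); items borrowed from Joe's examples.
hierarchy = {
    "drink": {"alcoholic": ["beer"]},
    "food": {"unhealthy": ["hamburger", "fries"]},
}

tags = {
    "beer": {"drink", "alcoholic"},
    "hamburger": {"food", "unhealthy"},
    "fries": {"food", "unhealthy", "national"},
}

def tagged_with(wanted):
    """Items whose tag set contains every wanted tag."""
    return sorted(item for item, ts in tags.items() if wanted <= ts)

# A tag query cuts across any single hierarchy:
print(tagged_with({"food", "unhealthy"}))  # ['fries', 'hamburger']
```

The hierarchy answers "where does this live?" quickly, but only along the one axis it was built on; the tag query composes axes freely.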
froh 4 hours ago [-]
yes. and math/logic-trained brains confuse hierarchical namespaces with trees. names in nested namespaces should be DAGs, maybe even arbitrary graphs, meaning a chair can be a sit-on thing in several contexts, but not in all (think of a meeting).
in many contemporary programming languages you can express this, too, by exporting some imported name.
twic 6 hours ago [-]
> The dot in the name has no semantics it's just a separator.
That's not true of all module systems. It's true in Java, but not in Rust, where the dot establishes a parent-child relationship, in which context [1]:
> If an item is private, it may be accessed by the current module and its descendants.
And privacy in Rust is load-bearing for encapsulating unsafe operations from safe code, so it's not just a nice-to-have; it's fundamental to the language.
auggierose 9 hours ago [-]
but we do have alcoholic beer, and non-alcoholic beer, and it is nice to be able to say which one you want. And yes, there is a separator here, too, it is called a space.
ludston 18 hours ago [-]
We need modules so that my search results aren't cluttered with contamination from code that is optimised to be found rather than designed to solve my specific problem.
We need them so that we can find all functions that are core to a given purpose, written with consideration of their performance and a unified purpose, rather than also finding a grab bag of everybody's crappy utilities that weren't designed to scale for my use case.
We need them so that people don't have to have 80 character long function names prefixed with Hungarian notation for every distinct domain that shares the same words with different meanings.
sweezyjeezy 16 hours ago [-]
I agree, but also agree with the author's statement "It's very difficult to decide which module to put an individual function in".
Quite often coders optimise for searchability: there will be a constants file, a dataclasses file, a "readers" file, a "writers" file, etc. This is great if you are trying to hunt down a single module or line of code quickly. But it can become absolute misery to actually read the 'flow' of the codebase, because every file has a million dependencies, and the logic jumps in and out of each file for a few lines at a time. I'm a big fan of the "proximity principle" [1] for this reason: don't divide code to optimise 'searchability'; put things together that actually depend on each other, as they will also need to be read and modified together.
> It's very difficult to decide which module to put an individual function in
It's difficult because it is a core part of software engineering; part of the fundamental value that software developers are being paid for. Just like a major part of a journalist's job is to first understand a story and then lay it out clearly in text for their readers, a major part of a software developer's job is to first understand their domain and then organize it clearly in code for other software developers (including themselves). So the act of deciding which modules different functions go in is the act of software development. Therefore, these people:
> Quite often coders optimise for searchability, so like there will be a constants file, a dataclasses file, a "reader"s file, a "writer"s file etc etc.
Those people are shirking their duty. I disdain those people. Some of us software developers actually take our jobs seriously.
hansvm 12 hours ago [-]
One thing I experimented with was writing a tag-based filesystem for that sort of thing. Imagine, e.g., using an entity component system and being able to choose a view that does a refactor across all entities or one that hones in on some cohesive slice of functionality.
In practice, it wound up not quite being worth it. The concept requires the same file to "exist" in multiple locations to work with all your other tools in a way that actually exploits tags, but then when you reference a given file (e.g., to import it), that reference needs to be some sort of canonical name in the TFS so that on `cd`-esque operations you can reference the "right" one -- doable, but not agnostic of the file format, which is the point where I saw this causing more problems than it was solving.
I still think there's something there though, especially if the editing environment, programming language, and/or representation of the programming language could be brought on board (e.g., for any concrete language with a good LSP, you can re-write important statements dynamically).
hansvm 4 hours ago [-]
Oops: important -> import
ludston 15 hours ago [-]
Indeed! The traditional name for the proximity principle is "cohesion"[1].
Not to pick on Rails, but sorting files into "models / views / controllers" seems to be our first instinct. My pantry is organized that way: baking stuff goes here, oils go there, etc.
A directory hierarchy feels more pleasant when it maps to features, instead. Less clutter.
Most programmers do not care about OO design, but "connascence" has some persuasive arguments.
> Knowing the various kinds of connascence gives us a metric for determining the characteristics and severity of the coupling in our systems. The idea is simple: The more remote the connection between two clusters of code, the weaker the connascence between them should be.
> Good design principles encourages us to move from tight coupling to looser coupling where possible. But connascence allows us to be much more specific about what kinds of problems we’re dealing with, which makes it easier to reason about the types of refactorings that can be used to weaken the connascence between components.
taeric 12 hours ago [-]
We could get that without a hierarchical categorization of code, though?
Makes me wonder what it would look like if you gave "topics" to code as you wrote it. Where would some topics live? And how much code would end up being part of several topics?
hombre_fatal 11 hours ago [-]
There is a similar question about message board systems.
Instead of posting a topic in a subforum, what if subforums were turned into tags and you just post your topic globally with those tags. Now you can have a unified UI that shows all topics, and people can filter by tag.
I experimented with this with a /topics page that implemented such a UI. What I found was that it becomes one big soup that lacks the visceral structure that I quickly found to be valuable once it was missing.
There is some value to "Okay, I clicked into the WebDesign subforum and I know the norms here and the people who regularly post here. If I post a topic, I know who is likely to reply. I've learned the kind of topics that people like to discuss here which is a little different than this other microclimate in the RubyOnRails subforum. I know the topics that already exist in this subforum and I have a feel for it because it's separate from the top-level firehose of discussion."
I think something similar happens with modules and grouping like-things into the same file. Microclimates and micronorms emerge that are often useful for wrapping your brain around a subsystem, contributing to it, and extending it. Even if the norms and character change between files and modules, it's useful that there are norms and character when it comes to understanding what the local objective is and how it's trying to solve it.
Like a subforum, you also get to break down the project management side of things into manageable chunks without everything always existing at a top organizational level.
efitz 11 hours ago [-]
I agree, but go farther:
Most things have multiple kinds of interesting properties. And in general, the more complex the thing, the more interesting properties it has. Ofc "interesting" is relative to the user/observer.
The problem with hierarchical taxonomies, and with taxonomies in general, is that they try to categorize things by a single property. Not only that, the selection of the property to classify against, is relevant to the person who made the selection, but it might not be relevant, or at least the most relevant, property for others who need to categorize the same set of things.
Sometimes people discover "new" properties of things, such as when a new tool or technique for examining the things, comes into existence. And new reasons for classifying come into existence all the time. So a hierarchical taxonomy begins to become less relevant, as soon as it is invented.
Sometimes one wants to invent a new thing and needs to integrate it into an existing taxonomy. But they have a new value for the property that the taxonomy uses for classification. Think back to SNMP and MIBs and OIDs. Now the original classifier is a gatekeeper and you're at their mercy to make space for your thing in the taxonomy.
In my experience, the best way to classify things, ESPECIALLY man-made things, is to allow them to be freely tagged with zero or more tags (or if you're a stickler, one or more tags). And don't exert control over the tags, or exert as little control as you can get away with. This allows multiple organic taxonomies to be applied to the same set of things, and adapts well to supporting new use cases or not-previously-considered use cases.
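The freely-tagged scheme the comment describes can be sketched as an inverted index; the item names, tags, and API here are invented for illustration:

```python
from collections import defaultdict

# Inverted index from tag to items; no control is exerted over the tag set.
index = defaultdict(set)

def tag(item, *ts):
    for t in ts:
        index[t].add(item)

# Two users apply independent taxonomies to the same things.
tag("report.pdf", "docs", "taxes")    # one user's view
tag("report.pdf", "2024", "work")     # another user's view
tag("photo.jpg", "2024", "personal")

# Each taxonomy remains queryable; no single hierarchy was imposed.
print(sorted(index["2024"]))          # ['photo.jpg', 'report.pdf']
print(index["docs"] & index["2024"])  # {'report.pdf'}
```

A newly discovered property just becomes a new tag on existing items; nothing needs to be re-filed, which is exactly the adaptability the comment argues for.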
taeric 11 hours ago [-]
Yeah, I suspect this is one where the general hierarchy does lift quite heavily. Such that it isn't that I would want to lose it, entirely. More that I think it is best seen as a view of the system. Not a defining fact of it.
Is a lot like genres for music and such. In broad strokes, they work really well. If taken as a requirement, though, they start to be too restrictive.
skydhash 11 hours ago [-]
Tags are great only when hierarchical structures become cumbersome. And even then, there's some limit to how many tags you can have before they become useless.
bob1029 17 hours ago [-]
I feel like you are arguing more for namespaces than modules.
Having a hierarchical naming system that spans everything makes it largely irrelevant how the functions themselves are physically organized. This also provides a pattern for disambiguating similar products by way of prefixing the real world FQDNs of each enterprise.
adrian_b 15 hours ago [-]
As another poster already said, providing namespaces is just one of the functions of modules; the other is encapsulation: the interface of a module typically exports only a small subset of the internal symbols, the rest being protected from external access.
While a function may have local variables that are protected from external accesses, a module can export not only multiple functions, but any other kinds of symbols, e.g. data types or templates, while also being able to keep private any kind of symbol.
In languages like C, which have separate compilation, but without modules, you can partition code in files, then choose for each symbol whether to be public or not, but with modules you can handle groups of related symbols simultaneously, in a simpler way, which also documents the structure of the program.
Moreover, with a well-implemented module system, compilation can be much faster than when using inefficient tricks for specifying the interfaces, like header file textual inclusion.
ludston 16 hours ago [-]
It is irrelevant until you have 4gb of binaries loaded from 50 repositories and then you are trying to find the definition of some cursed function that isn't defined in the same spot as everything it is related to, and now you have to download/search through all 50 repositories because any one of them could have it. (True story)
layer8 15 hours ago [-]
Modules don’t imply namespaces. You can run into the same problem with modules. For example, C libraries don’t implicitly have namespaces. And the problem can be easily solved by the repository maintaining a function index, without having to change anything about the modules.
norman784 17 hours ago [-]
Don't forget about encapsulation, there's most likely a lot of functions that aren't relevant outside the module.
AtlasBarfed 10 hours ago [-]
The article references the true granularity issue (actually the function names need a version number as well; I'm not sure from my scan of the article whether it was mentioned).
Modules being collections of types and functions obviously increases coarseness. I'm not a fan of most import mechanisms because they leave versioning and namespace versioning (if there are namespaces at all...) out, to be picked up poorly by build systems and dependency-graph resolvers and that crap.
poincaredisk 10 hours ago [-]
How do you imagine importing modules by version in the code? Something like "import requests version 2.0.3"? This sounds awful when you accidentally import the same module in two different versions and chaos ensues.
_factor 9 hours ago [-]
Import latest where signed by a trusted authority.
Gurkenglas 17 hours ago [-]
just deduce the domain from text similarity :o)
jonnycat 11 hours ago [-]
This is one of those things where I don’t agree with the argument, but know the person making it knows way more than I do on the subject and has given it way more thought. In these cases it’s usually best to sit back and listen a bit...
GrantMoyer 8 hours ago [-]
I think Hoogle[1] is proof this concept could work. Haskell has modules, of course, but even if it didn't, Hoogle would still keep it pretty usable.
The important piece here, which is mentioned but not much emphasized in TFA, is that Hoogle lets you search by metadata instead of just by name. If a function takes the type I have and transforms it to the type I want, and the docs say it does what I want, I don't really care what module or package it's from. In fact, that's often how I use Hoogle: finding the function I need across all Stack packages.
That said, while I think it could work, I'm not convinced it'd have any benefit over the status quo in practice.
Hoogle works because of how richly-typed Haskell is, but Erlang is dynamically-typed.
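A toy sketch of the signature-search idea (my assumption of the mechanism, not how Hoogle is actually implemented): register some functions, then query them by argument and return types rather than by name or module.

```python
from typing import get_type_hints

# A toy function "database" searched by signature, not by name or module.
def parse_int(s: str) -> int:
    return int(s)

def shout(s: str) -> str:
    return s.upper()

def double(n: int) -> int:
    return n * 2

REGISTRY = [parse_int, shout, double]

def search(arg_type, ret_type):
    """Names of registered functions with signature arg_type -> ret_type."""
    hits = []
    for f in REGISTRY:
        hints = get_type_hints(f)
        ret = hints.pop("return", None)
        if list(hints.values()) == [arg_type] and ret is ret_type:
            hits.append(f.__name__)
    return hits

print(search(str, int))  # ['parse_int']
```

Even in a dynamically-typed host language, this only needs declared signatures as searchable metadata, which is the part of the Hoogle idea the comment highlights.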
neongreen 17 hours ago [-]
> database of functions
This is exactly what Unison (https://www.unison-lang.org/) does. It’s kinda neat. Renaming identifiers is free. Uh… probably something else is neat (I haven’t used Unison irl)
brabel 11 hours ago [-]
A lot of things are neat because of this. Refactoring becomes trivial and safe. If you do not change the type of the refactored function, you can safely do a batch replace, and everywhere the old function was used, the new one will be used after that. If you do change the type, the compiler interface will guide you through an interactive flow where you have to handle the change everywhere the function was being used. You can stop in the middle and continue later... and once you're done you just commit and push, all the while the code continues to work. Even cooler, perhaps: no unit test is re-run if not affected. And since the compiler knows the full AST of everything, it knows exactly when a test must run again.
jweir 5 hours ago [-]
I tried it out. Fascinating language and a completely different paradigm. The language itself is familiar, but the structure of the program is different: no files – all functions and their history live in a database. I found the language a bit difficult to navigate, but that is probably because of my experience working with files, and having tools based on files.
anonzzzies 17 hours ago [-]
Makes me think of Unison [0]. I never used it but I found it interesting to read about.
They are nodes in a graph, where the other nodes are the input types, output types and other functions.
It makes sense to cluster closely associated nodes, hence modules.
Etheryte 17 hours ago [-]
As with other similar proposals, doesn't this simply move the complexity around without changing anything else? Now instead of looking for the right module or whatnot, you'll be sifting through billions of function definitions, trying to find the very specific one that does what you need, buried between countless almost but not quite similar functions.
codethief 7 hours ago [-]
> all functions go into a global (searchable) Key-value database
If there are no modules but a "flat" global namespace which requires every function name to be unique to avoid collisions... it means people in large, non-trivial codebases would inevitably re-invent pseudo/fake "modules" and hierarchy in metadata tags.
Consider a function name: log()
Is it a function to log an event for audit history?
Or is it a function to get the mathematical natural logarithm of a number?
The global namespace forces the functions to be named differently (maybe with an underscore '_'): "audit_log()" and "math_log()". With modules, the names would be isolated by colons "::" or a period '.': Audit.log() and Math.log(). Audit and Math are isolated namespaces. You still have potential global namespace collisions, but they happen at the higher level of module names instead of the leaf function names. Coordinating naming at the level of modules to avoid conflicts is much less frequent and more manageable.
The same issue arises in OS file systems when proposing no folders/directories and only a flat global namespace with metadata tags. The filenames themselves would get embedded substrings with underscores to recreate fake folder names. People would reinvent hierarchy in tag names with concatenated substrings like "tag:docs_taxes_archive" to recreate pseudo folders/directories of "/docs/taxes/archive". Yes, some users could deliberately avoid hierarchies and only use 1-level tags such as "docs", "taxes", "archive"... but that creates new organizational problems, because some have "work docs" vs "personal docs"... which gravitates towards a hierarchical organization again.
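The audit-vs-math collision can be made concrete in Python, where the stdlib `math` module and a hypothetical `audit` namespace (built inline here purely for illustration) both expose a `log`:

```python
import math
from types import SimpleNamespace

# Two functions both naturally named "log", kept apart by namespaces.
# The "audit" namespace is hypothetical, built inline for illustration.
events = []
audit = SimpleNamespace(log=lambda msg: events.append(msg))

audit.log("user signed in")   # appends to the audit history
x = math.log(math.e)          # natural logarithm, ~1.0

# Without namespaces these would have to be audit_log() and math_log(),
# encoding the hierarchy into the flat names themselves.
print(events, round(x, 6))
```

The collision is resolved once, at the namespace level, instead of being re-encoded into every leaf function name.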
john2x 17 hours ago [-]
This is what Emacs Lisp has, and it is indeed what happens with libraries.
_Wintermute 16 hours ago [-]
Happens with R as well, where everything gets dumped into a global namespace. It's a huge mess.
If you're lucky, all functions will have a common prefix like str_* or fct_*. If you're unlucky, you have to figure out which package has clobbered a standard library function, or the exact ordering of package import statements you need for your code to run.
bfung 16 hours ago [-]
Same with S3 object names.
There are no directories in S3, just object names.
Object names being hierarchical with "/" delimiters is just a habit, and easier to reason about for the average user.
skydhash 11 hours ago [-]
Same thing happens to me with Bear.app (note taking). It only has tags, and the first thing I believe everyone does is go with a hierarchical structure again, because you need some tag but also an additional specifier, which helps with grouping and location. (And Bear.app has support for that naming scheme and displays it as a tree.)
bionhoward 14 hours ago [-]
IMHO, aren’t modules necessary for big projects to limit the amount of complexity we have to deal with at any one time?
Our minds can (allegedly) only handle 7+/-2 concepts in working memory at once. Your whole codebase has way more than that, right? But one module could easily fit in that range.
bjourne 11 hours ago [-]
This is an all-time classic, but, sadly, most HN commenters just don't "get it". Perhaps because they have no experience with the Erlang VM, so they don't understand Joe's premises. The Erlang VM is best described as a dynamic process manager, and a "function" is just a callstack template. You want to fix a bug in your executing function without stopping it? Sure, no problem. Just reload the function and have the VM seamlessly upgrade the callstack to the new template. Since data is immutable, it mostly just works. Now, since functions form the basic unit of work in Erlang, modules are kind of irrelevant. Recompiling a module is the same as recompiling every function in the module. Hence, what use does the abstraction serve? The proliferation of "utils" or "misc" modules in not only Erlang but many other languages supports his point.
Btw, the more experienced I've gotten the more I've found that organizing code is mostly pointless. A 5000-line source file (e.g., module) isn't necessarily worse than five 1000-line files.
skydhash 10 hours ago [-]
It's all related to naming. You can refer to a symbol with auth/guard/token/authenticate or auth_guard_token_authenticate, and sometimes what matters is just the number of characters you type. Also, you get encapsulation with the first option.
Smalltalk has the same live experience, but does have modules, because it makes editing easier and encapsulation is nice for readability and clarity.
bjourne 7 hours ago [-]
No, neither Smalltalk nor any of the Lisp environments that purport to support hot code reloading have the same facilities the Erlang VM has.
skydhash 6 hours ago [-]
Concurrency and tasks supervision is orthogonal to modules/packages.
igouy 9 hours ago [-]
Is a 5,000-line function worse than 500 10-line functions?
Aside from grouping functions that work together -- for example, with data types/structures also defined in or by the module -- modules also serve to hide implementation-detail code ("private" functions) shared between those functions. Modules provide a form of information hiding.
Furthermore, modules are the unit of versioning. While one could version each individual function separately, that would make managing the dependency graph with version compatibility considerably more complex.
There is the adage “version together what changes together”. That carries over to modules: “group together in a module what changes together”. And typically things change together that are designed together.
Namespaces are an orthogonal issue. You can have modules without namespaces, and namespaces without modules.
friendzis 13 hours ago [-]
Global namespace clobbering has huge implications. With modules/namespaces you have a well defined and limited blast radius: a change is limited to a module and calling code.
Now, imagine your environment of choice supported dynamic runtime loading of code where the code is just dropped to the global namespace. This screams "insecure" and "how do I know if I call the code I want to call?".
Now imagine the only mitigating mechanism was `include_once`. It would make sense that software written in this environment requires its own CVE namespace, as new security vulns are discovered every second.
jghn 12 hours ago [-]
What he wound up arguing for was that everything would have a globally unique name.
gatinsama 14 hours ago [-]
"Namespaces are one honking great idea -- let's do more of those!"
Zen of Python
porkbrain 14 hours ago [-]
1. Have a global append-only function key-value store.
2. A key of a function is something like `keccak256(function's signature + docstring)`
3. A value is a list of the function's implementations (the index being the implementation's version), plus some other useful metadata such as the contributor's signature and preferred function name. (The compiler emits a warning, which must be explicitly silenced, if the preferred name is not used.)
4. IDE hints and the developer confirms to auto import the function from the global KV store.
5. The import hash can be prepended with a signer's name that's defined in some config file. This makes it obvious in git diffs if a function changes its author. Additionally, the compiler only accepts a short hash in import statements if it is used with a signer.
// use publisher and short hash
import "mojmir@51973ec9d4c1929b@1" as log_v1;
// or full hash
import "51973ec9d4c1929bdd5b149c064d46aee47e92a7e2bb5f7a20c7b9cfb0d13b39" as log_latest;
import "radislava@c81915ad12f36c33" as ln;
log_v1("Hello");
log_latest(ln(0));
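Steps 1-3 above can be sketched roughly like this (a sketch under assumptions: `hashlib.sha256` stands in for keccak256, and the metadata is reduced to a preferred name):

```python
import hashlib

# Append-only store: key = hash(signature + docstring),
# value = version list plus a preferred name.
STORE = {}

def publish(signature, docstring, implementation, preferred_name):
    key = hashlib.sha256((signature + docstring).encode()).hexdigest()
    entry = STORE.setdefault(key, {"name": preferred_name, "versions": []})
    entry["versions"].append(implementation)      # append-only
    return key, len(entry["versions"]) - 1        # (full hash, version index)

key, v0 = publish("log(msg)", "Log a message.", "v0 source", "log")
key, v1 = publish("log(msg)", "Log a message.", "v1 source", "log")

# "Importing" then resolves a (hash, version) pair, e.g. "<short-hash>@1".
print(key[:16], v1, STORE[key]["versions"][v1])
```

Because the key is derived from the signature and docstring, republishing the "same" function naturally appends a new version rather than creating a new entry.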
Philpax 12 hours ago [-]
You've just invented Unison :)
andrewcl 14 hours ago [-]
Hard to see a world without modules as a means of compartmentalization, for various reasons. But you do have to appreciate the exercise of imagining what a world without them looks like, and its implications.
wruza 18 hours ago [-]
That would be too useful. Imagine adding tags to functions and generally treat them as "items" which you can multi-categorize, search through, select, etc like with any dataset. Way too advanced.
jiggawatts 6 hours ago [-]
I had a vaguely similar notion of a global proof database. Picture something like a blockchain (actually a "blockgraph") of Lean theorems built up from other theorems and axioms also on the same distributed global data structure.
A use-case could be optimising compilers. These need to search for alternative (faster) series of statements that are provably equivalent to the original given some axioms about the behaviour of the underlying machine code and basic boolean algebra and integer mathematics.
This could be monetised: Theorems along the shortest path from a desired proof to the axioms are rewarded. New theorems can be added by anyone at any time, but would generate zero income unless they improve the state-of-the-art. Shortest-path searches through the data structure would remain efficient because of this incentive.
Client tools such as compilers could come with monthly subscriptions and/or some other mechanism for payments, possibly reusing some existing crypto coin. These tools advertise desired proofs -- just like how blockchain clients advertise transactions they like to complete along with a fee -- and then the community can find new theorems to reach those proofs, hoping not just for the one-time payment, but the ongoing reward if the theorems are general and reusable for other purposes.
Imagine you're a FAANG and there's some core algorithm that uses 1% of your global compute. You could advertise a desire to improve the algorithm or the assembly code to be twice as efficient for $1M. Almost certainly, this is worth it. If no proof turns up, there's no payment. If a proof does turn up, a smart contract debits the FAANG's crypto account and they receive the chain of theorems proving that there's a more efficient algorithm, which will save them millions of USD in infrastructure costs. Maths geeks, AI bots, and whomever else contributed to the proof get their share of the $1M prize.
It's like... Uber for Fields medals, used for industrial computing.
Fully automated gig work for computer scientists and mathematicians.
LegionMammal978 5 hours ago [-]
The Metamath Proof Explorer (AKA the set.mm database) works on a similar principle, of all theorems forming a tree of backreferences that ultimately lead to the axioms [0].
Though it wouldn't make sense to build something like that on top of such a fast-moving, complex, and bug-prone target like Lean.
Unison of course works this way, as has been mentioned.
I like Deno for a similar reason. It's a coarser level of granularity, and not explicitly content-addressed, but you can import specific versions of modules that are ostensibly immutable, and if you want, you could do single-function modules.
I like the idea so much that I'm now kind of put off by any language/runtime that requires users of my app/library to do a separate 'package install' step. Python being the most egregious, but even languages that I am otherwise interested in, like Racket, I avoid because "I want imports to be unambiguous and automatically downloaded."
Having a one-step way to run a program where all dependencies are completely unambiguous might be my #1 requirement for programming languages. I am weird.
One reason not to do things this way is if you want to be able to upgrade some library independently of other components that depend on it, but "that's what dependency injection is for". i.e. have your library take the other library as an argument, with the types/APIs being in a separate one. TypeScript's type system in particular makes this work very easily. I have done this in Deno projects to great effect. From what I've heard from Rich Hickey[1] the pattern should also work well in Clojure
[1] something something union types being superior to what you might call 'sum types'; can't find the link right now. I think this causes some trouble in functional languages where instead of something being A|B it has to be a C, where C = C A | C B. In the former case an A is a valid A|B, but a C A is not an A, so you can't expand the set of values a function takes without breaking the API. Basically what union types require is that every value in the language extends some universal tagged type; if you need to add a tag to your union then it won't work.
immibis 9 hours ago [-]
I think they already tried this "single flat key/value namespace of all functions" in the JavaScript ecosystem - it was called npm. It became a mess when a company claimed a package name based on a trademark, and the author unpublished his packages in retaliation - including the one for padding a string to a certain length with spaces on the left.
MrBuddyCasino 17 hours ago [-]
We need modules because they demarcate social units of collaboration.
greener_grass 17 hours ago [-]
This could be achieved with a hierarchical namespacing scheme for functions, no?
universe.mega_corp.finance_dept.team_alpha.foo
But to use `universe.mega_corp.finance_dept.team_alpha.foo` in your application, you don't import a module, just the function `foo`.
Who controls what goes into the namespace `universe.mega_corp.finance_dept.team_alpha`?
That would be Team Alpha in the Finance Department of Mega Corp.
I guess this is like tree-shaking by default.
sestep 12 hours ago [-]
I'm probably just missing something obvious, but in this scenario with really long names, doesn't that just mean all code will be extremely verbose? Or are you saying there'd be some way to have shorter bindings to those longer names within a specific context? But then what would that look like? Typically we use modules to denote contexts within which you can import longer fully-qualified names with shorter aliases.
greener_grass 11 hours ago [-]
You would do something like:
open universe.mega_corp.finance_dept.team_alpha
Then when you use `foo`, the compiler would know you mean `universe.mega_corp.finance_dept.team_alpha.foo`.
There will probably need to be some kind of lock-file or hash stored with the source-code so that we know precisely which version of `universe.mega_corp.finance_dept.team_alpha.foo` was resolved.
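The `open` mechanic above amounts to rebinding short names locally. A toy Python simulation (the deep namespace is fabricated here with `SimpleNamespace`, since no such package exists):

```python
from types import SimpleNamespace

# Fabricated stand-in for the global function namespace;
# the names come from the example above.
team_alpha = SimpleNamespace(foo=lambda x: x * 2)
universe = SimpleNamespace(
    mega_corp=SimpleNamespace(
        finance_dept=SimpleNamespace(team_alpha=team_alpha)))

# "open universe.mega_corp.finance_dept.team_alpha" is essentially
# binding the short name in the local scope:
foo = universe.mega_corp.finance_dept.team_alpha.foo
result = foo(21)
```

The fully-qualified name stays unambiguous; the alias is purely local sugar.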
ramses0 7 hours ago [-]
This is kind of how golang works by default: `import foo/bar/baz`, then "foo" and "bar" effectively don't exist; you only refer to "baz" in the end.
Every argument made quickly becomes invalid because in any sufficiently complex project, the function naming scheme will end up replicating a module/namespace system.
https://groups.google.com/g/erlang-programming/c/LKLesmrss2k
> As of 2013, this 3-letter verb common in sports, theater & politics has the largest entry in the online OED.
The correct response? What is "run"?
I think Joe's point is about the perennial discussion of whether hierarchy is better than tags. It's as old as software, or as old as people categorizing things at all. Some early databases were hierarchical KV stores. Email clients and services go through this too: is it better to group messages by tags, or in a single hierarchy of folders?
> English absolutely has namespaces
Sure, we can pick apart the analogy; after all, we're not programming in English unless we write LLM prompts (or COBOL /s). But if English has namespaces, what would you pick: lager.flat.alcoholic, alcoholic.lager.flat, or lager.alcoholic.flat? Is there a top-level "lager" vs "ale" package, with flat vs carbonated as the next level?
Hierarchy seems more rigid and less general than tags, but when it works, it works.
In many contemporary programming languages you can express this, too, by re-exporting an imported name.
That's not true of all module systems. It's true in Java, but not in Rust, where nesting establishes a parent-child relationship, in which context [1]:
> If an item is private, it may be accessed by the current module and its descendants.
[1] https://doc.rust-lang.org/reference/visibility-and-privacy.h...
We need them so that we can find all the functions that are core to a given purpose and that have been written with consideration of their performance and a unified purpose, rather than also finding a grab bag of everybody's crappy utilities that weren't designed to scale for my use case.
We need them so that people don't have to have 80 character long function names prefixed with Hungarian notation for every distinct domain that shares the same words with different meanings.
Quite often coders optimise for searchability, so like there will be a constants file, a dataclasses file, a "reader"s file, a "writer"s file etc etc. This is great if you are trying to hunt down a single module or line of code quickly. But it can become absolute misery to actually read the 'flow' of the codebase, because every file has a million dependencies, and the logic jumps in and out of each file for a few lines at a time. I'm a big fan of the "proximity principle" [1] for this reason - don't divide code to optimise 'searchability', put things together that actually depend on each other, as they will also need to be read / modified together.
[1] https://kula.blog/posts/proximity_principle/
It's difficult because it is a core part of software engineering; part of the fundamental value that software developers are being paid for. Just like a major part of a journalist's job is to first understand a story and then lay it out clearly in text for their readers, a major part of a software developer's job is to first understand their domain and then organize it clearly in code for other software developers (including themselves). So the act of deciding which modules different functions go in is the act of software development. Therefore, these people:
> Quite often coders optimise for searchability, so like there will be a constants file, a dataclasses file, a "reader"s file, a "writer"s file etc etc.
Those people are shirking their duty. I disdain those people. Some of us software developers actually take our jobs seriously.
In practice, it wound up not quite being worth it: the concept requires the same file to "exist" in multiple locations to work with all your other tools in a way that actually exploits tags, but then when you reference a given file (e.g., to import it), you need some sort of canonical name in the TFS so that on `cd`-esque operations you can reference the "right" one. Doable, but not agnostic of the file format, which is the point where I saw this causing more problems than it was solving.
I still think there's something there though, especially if the editing environment, programming language, and/or representation of the programming language could be brought on board (e.g., for any concrete language with a good LSP, you could rewrite import statements dynamically).
[1] https://en.wikipedia.org/wiki/Cohesion_(computer_science)
A directory hierarchy feels more pleasant when it maps to features, instead. Less clutter.
Most programmers do not care about OO design, but "connascence" has some persuasive arguments.
https://randycoulman.com/blog/2013/08/27/connascence/
https://practicingruby.com/articles/connascence
https://connascence.io/
> Knowing the various kinds of connascence gives us a metric for determining the characteristics and severity of the coupling in our systems. The idea is simple: The more remote the connection between two clusters of code, the weaker the connascence between them should be.
> Good design principles encourages us to move from tight coupling to looser coupling where possible. But connascence allows us to be much more specific about what kinds of problems we’re dealing with, which makes it easier to reason about the types of refactorings that can be used to weaken the connascence between components.
Makes me wonder what it would look like if you gave "topics" to code as you wrote it. Where would you put each topic? And how much code would end up being part of several topics at once?
Instead of posting a topic in a subforum, what if subforums were turned into tags and you just post your topic globally with those tags. Now you can have a unified UI that shows all topics, and people can filter by tag.
I experimented with this with a /topics page that implemented such a UI. What I found was that it becomes one big soup that lacks the visceral structure that I quickly found to be valuable once it was missing.
There is some value to "Okay, I clicked into the WebDesign subforum and I know the norms here and the people who regularly post here. If I post a topic, I know who is likely to reply. I've learned the kind of topics that people like to discuss here which is a little different than this other microclimate in the RubyOnRails subforum. I know the topics that already exist in this subforum and I have a feel for it because it's separate from the top-level firehose of discussion."
I think something similar happens with modules and grouping like-things into the same file. Microclimates and micronorms emerge that are often useful for wrapping your brain around a subsystem, contributing to it, and extending it. Even if the norms and character change between files and modules, it's useful that there are norms and character when it comes to understanding what the local objective is and how it's trying to solve it.
Like a subforum, you also get to break down the project management side of things into manageable chunks without everything always existing at a top organizational level.
Most things have multiple kinds of interesting properties. And in general, the more complex the thing, the more interesting properties it has. Ofc "interesting" is relative to the user/observer.
The problem with hierarchical taxonomies, and with taxonomies in general, is that they try to categorize things by a single property. Not only that, the selection of the property to classify against, is relevant to the person who made the selection, but it might not be relevant, or at least the most relevant, property for others who need to categorize the same set of things.
Sometimes people discover "new" properties of things, such as when a new tool or technique for examining the things, comes into existence. And new reasons for classifying come into existence all the time. So a hierarchical taxonomy begins to become less relevant, as soon as it is invented.
Sometimes one wants to invent a new thing and needs to integrate it into an existing taxonomy. But they have a new value for the property that the taxonomy uses for classification. Think back to SNMP and MIBs and OIDs. Now the original classifier is a gatekeeper and you're at their mercy to make space for your thing in the taxonomy.
In my experience, the best way to classify things, ESPECIALLY man-made things, is to allow them to be freely tagged with zero or more tags (or if you're a stickler, one or more tags). And don't exert control over the tags, or exert as little control as you can get away with. This allows multiple organic taxonomies to be applied to the same set of things, and adapts well to supporting new use cases or not-previously-considered use cases.
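A toy sketch of that free-tagging approach (file names and tags invented for illustration): multiple organic taxonomies coexist over the same set of things, and each query is just a tag filter.

```python
# Items carry free-form tags; nobody controls the vocabulary.
files = {
    "2023_return.pdf": {"docs", "taxes", "personal"},
    "q3_invoices.xlsx": {"docs", "taxes", "work"},
    "beach.jpg": {"photos", "personal"},
}

def tagged(required: set[str]) -> list[str]:
    """All items carrying every required tag."""
    return sorted(name for name, tags in files.items() if required <= tags)

personal_docs = tagged({"docs", "personal"})  # one "taxonomy"
tax_related = tagged({"taxes"})               # another, over the same items
```

No single hierarchy had to be chosen up front; "work docs" vs "personal docs" falls out of tag intersection rather than folder placement.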
It's a lot like genres for music and such. In broad strokes, they work really well. Taken as requirements, though, they start to be too restrictive.
Having a hierarchical naming system that spans everything makes it largely irrelevant how the functions themselves are physically organized. This also provides a pattern for disambiguating similar products by way of prefixing the real world FQDNs of each enterprise.
While a function may have local variables that are protected from external accesses, a module can export not only multiple functions, but any other kinds of symbols, e.g. data types or templates, while also being able to keep private any kind of symbol.
In languages like C, which have separate compilation, but without modules, you can partition code in files, then choose for each symbol whether to be public or not, but with modules you can handle groups of related symbols simultaneously, in a simpler way, which also documents the structure of the program.
Moreover, with a well-implemented module system, compilation can be much faster than when using inefficient tricks for specifying the interfaces, like header file textual inclusion.
Modules being collections of types and functions obviously increases coarseness. I'm not a fan of most import mechanisms because they leave versioning and namespace versioning (if there are namespaces at all...) to be picked up, poorly, by build systems and dependency-graph resolvers and that crap.
The important piece here, which is mentioned but not much emphasized in TFA, is that Hoogle lets you search by metadata instead of just by name. If a function takes the type I have, and transforms it to the type I want, and the docs say it does what I want, I don't really care what module or package it's from. In fact, that's often how I use Hoogle, finding the function I need across all Stack packages.
That said, while I think it could work, I'm not convinced it'd have any benefit over the status quo in practice.
[1]: https://hoogle.haskell.org/
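As a toy illustration of that type-directed search (a hypothetical registry with invented functions, not Hoogle's actual implementation), signature-based lookup might look like:

```python
from typing import get_type_hints

# A few candidate functions with type annotations.
def parse_int(s: str) -> int:
    return int(s)

def shout(s: str) -> str:
    return s.upper()

def to_hex(n: int) -> str:
    return hex(n)

REGISTRY = [parse_int, shout, to_hex]

def search(arg_type: type, ret_type: type) -> list[str]:
    """Hoogle-style lookup: find functions by signature, not name."""
    matches = []
    for fn in REGISTRY:
        hints = get_type_hints(fn)
        ret = hints.pop("return", None)
        if list(hints.values()) == [arg_type] and ret is ret_type:
            matches.append(fn.__name__)
    return matches

found = search(str, int)  # "what turns a str into an int?"
```

The caller never names a module; the signature is the query.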
This is exactly what Unison (https://www.unison-lang.org/) does. It’s kinda neat. Renaming identifiers is free. Uh… probably something else is neat (I haven’t used Unison irl)
[0] https://www.unison-lang.org
Variables aren't named; they are beta-reduced and referred to by abstraction level (i.e., de Bruijn indices).
https://text.marvinborner.de/2023-04-06-01.html
They are nodes in a graph, where the other nodes are the input types, output types, and other functions.
It makes sense to cluster closely associated nodes; hence modules.
Very much related: https://scrapscript.org/
Consider a function name: log()
Is it a function to log an event for audit history?
Or is it a function to get the mathematical natural logarithm of a number?
The global namespace forces the functions to be named differently, maybe with an underscore '_': "audit_log()" and "math_log()". With modules, the names would be isolated by double colons "::" or a period '.': Audit.log() and Math.log(). Audit and Math are isolated namespaces. You still have potential global namespace collisions, but they happen at the higher level of module names instead of the leaf function names. Coordinating naming at the level of modules to avoid conflicts is much less frequent and more manageable.
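The collision and its resolution can be shown directly in Python (the `Audit` namespace is invented for illustration; `math.log` is the real standard-library function):

```python
import math

# Hypothetical audit namespace: its `log` records events.
class Audit:
    entries: list[str] = []

    @classmethod
    def log(cls, event: str) -> None:
        cls.entries.append(event)

# The same leaf name resolves differently under each namespace:
Audit.log("user signed in")
value = math.log(math.e)  # natural logarithm
```

Neither call needed a mangled name like `audit_log`; the module prefix carries the disambiguation.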
Same issue in OS file systems with proposals for no folders/directories and only a flat global namespace with metadata tags. The filenames themselves would have embedded substrings with underscores to recreate fake folder names. People would reinvent hierarchy in tag names with concatenated substrings like "tag:docs_taxes_archive" to recreate pseudo folders/directories of "/docs/taxes/archive". Yes, some users could deliberately avoid hierarchies and only name tags as 1-level such as "docs", "taxes", "archive" ... but that creates new organizational problems because some have "work docs" vs "personal docs" ... which gravitates towards a hierarchical organization again.
If you're lucky all functions will have a common prefix str_* or fct_*. If you're unlucky then you have to figure out which package has clobbered a standard library function, or the exact ordering of your package import statements you need for your code to run.
There are no directories in S3, just object names.
Object names being hierarchical with "/" delimiters is out of habit, and easier to reason about for the avg user.
Our minds can (allegedly) only handle 7+/-2 concepts in working memory at once. Your whole codebase has way more than that, right? But one module could easily fit in that range.
Btw, the more experienced I've gotten the more I've found that organizing code is mostly pointless. A 5000-line source file (e.g., module) isn't necessarily worse than five 1000-line files.
Smalltalk has the same live experience, but does have modules, because it makes editing easier, and encapsulation is nice for readability and clarity.
(Locality of reference.)
[0] https://scrapscript.org
Furthermore, modules are the unit of versioning. While one could version each individual function separately, that would make managing the dependency graph with version compatibility considerably more complex.
There is the adage “version together what changes together”. That carries over to modules: “group together in a module what changes together”. And typically things change together that are designed together.
Namespaces are an orthogonal issue. You can have modules without namespaces, and namespaces without modules.
Now, imagine your environment of choice supported dynamic runtime loading of code where the code is just dropped into the global namespace. This screams "insecure" and "how do I know I'm calling the code I want to call?".
Now imagine the only mitigating mechanism was `include_once`. It would make sense that software written in this environment requires its own CVE namespace, with new security vulns discovered every second.
2. A key of a function is something like `keccak256(function's signature + docstring)`
3. A value is a list of the function's implementations (the index being the implementation's version) and some other useful metadata, such as the contributor's signature and preferred function name. (The compiler emits a warning, which must be explicitly silenced, if the preferred name is not used.)
4. IDE hints and the developer confirms to auto import the function from the global KV store.
5. Import hash can be prepended with a signers name that's defined in some config file. This makes it obvious in git diffs if a function changes its author. Additionally, the compiler only accepts a short hash in import statements if used with a signer.
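The scheme above can be sketched in a few lines of Python (sha256 stands in for the keccak256 the proposal names, since hashlib has no keccak256; all other names are hypothetical):

```python
import hashlib

# Key: hash of the function's signature + docstring.
def func_key(signature: str, docstring: str) -> str:
    return hashlib.sha256((signature + docstring).encode()).hexdigest()

STORE: dict[str, dict] = {}  # the global function KV store

def publish(signature: str, docstring: str, source: str,
            contributor: str, preferred_name: str) -> str:
    key = func_key(signature, docstring)
    entry = STORE.setdefault(key, {
        "implementations": [],  # list index == implementation version
        "contributor": contributor,
        "preferred_name": preferred_name,
    })
    entry["implementations"].append(source)
    return key

sig = "pad(s: str, n: int) -> str"
doc = "Left-pad s with spaces to length n."
k1 = publish(sig, doc, "def pad(s, n): return s.rjust(n)",
             contributor="alice", preferred_name="pad")
k2 = publish(sig, doc, "def pad(s, n): return s.rjust(n, ' ')",
             contributor="alice", preferred_name="pad")
```

Both publishes land under the same key because the signature and docstring are unchanged; the second simply becomes version 1 of the implementation list.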
package.toml
source.file

A use-case could be optimising compilers. These need to search for alternative (faster) series of statements that are provably equivalent to the original, given some axioms about the behaviour of the underlying machine code, basic boolean algebra, and integer mathematics.
This could be monetised: Theorems along the shortest path from a desired proof to the axioms are rewarded. New theorems can be added by anyone at any time, but would generate zero income unless they improve the state-of-the-art. Shortest-path searches through the data structure would remain efficient because of this incentive.
Client tools such as compilers could come with monthly subscriptions and/or some other mechanism for payments, possibly reusing an existing crypto coin. These tools advertise desired proofs - just like blockchain clients advertise transactions they would like completed, along with a fee - and then the community can find new theorems to reach those proofs, hoping not just for the one-time payment, but for the ongoing reward if the theorems are general and reusable for other purposes.
Imagine you're a FAANG and there's some core algorithm that uses 1% of your global compute. You could advertise a desire to improve the algorithm or the assembly code to be twice as efficient for $1M. Almost certainly, this is worth it. If no proof turns up, there's no payment. If a proof does turn up, a smart contract debits the FAANG's crypto account and they receive the chain of theorems proving that there's a more efficient algorithm, which will save them millions of USD in infrastructure costs. Maths geeks, AI bots, and whomever else contributed to the proof get their share of the $1M prize.
It's like... Uber for Fields medals, used for industrial computing.
Fully automated gig work for computer scientists and mathematicians.
[0] https://us.metamath.org/mpeuni/mmset.html
https://github.com/joearms/elib1/blob/master/src/elib1_misc....
`import github.com/blah/baz`, `megacorp.com/finance/baz`, ...
It all resolves to `baz.Something()`