
We need modules so that my search results aren't cluttered with contamination from code that is optimised to be found rather than designed to solve my specific problem.

We need them so that we can find all functions that are core to a given purpose, written with a unified design and real consideration of their performance, rather than a grab bag of everybody's crappy utilities that weren't designed to scale for my use case.

We need them so that people don't have to have 80 character long function names prefixed with Hungarian notation for every distinct domain that shares the same words with different meanings.
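A minimal sketch of that last point (all names hypothetical): without modules, every name has to carry its domain as a prefix; with modules, the qualifier moves into the import, so both domains can use the same short name.

```python
# Without modules: the domain is baked into every function name.
def billing_invoice_total(lines):
    # lines: (quantity, unit_price) pairs
    return sum(qty * price for qty, price in lines)

def shipping_invoice_total(lines):
    # lines: (weight, rate) pairs
    return sum(weight * rate for weight, rate in lines)

# With modules, the qualifier lives in the import instead:
#
#     from billing import invoice
#     from shipping import invoice as shipping_invoice
#
# and both domains get to call their function simply `total`.
```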



I agree, but also agree with the author's statement "It's very difficult to decide which module to put an individual function in".

Quite often coders optimise for searchability, so like there will be a constants file, a dataclasses file, a "reader"s file, a "writer"s file etc etc. This is great if you are trying to hunt down a single module or line of code quickly. But it can become absolute misery to actually read the 'flow' of the codebase, because every file has a million dependencies, and the logic jumps in and out of each file for a few lines at a time. I'm a big fan of the "proximity principle" [1] for this reason - don't divide code to optimise 'searchability', put things together that actually depend on each other, as they will also need to be read / modified together.

[1] https://kula.blog/posts/proximity_principle/


> It's very difficult to decide which module to put an individual function in

It's difficult because it is a core part of software engineering; part of the fundamental value that software developers are being paid for. Just like a major part of a journalist's job is to first understand a story and then lay it out clearly in text for their readers, a major part of a software developer's job is to first understand their domain and then organize it clearly in code for other software developers (including themselves). So the act of deciding which modules different functions go in is the act of software development. Therefore, these people:

> Quite often coders optimise for searchability, so like there will be a constants file, a dataclasses file, a "reader"s file, a "writer"s file etc etc.

Those people are shirking their duty. I disdain those people. Some of us software developers actually take our jobs seriously.


One thing I experimented with was writing a tag-based filesystem for that sort of thing. Imagine, e.g., using an entity component system and being able to choose a view that does a refactor across all entities or one that homes in on some cohesive slice of functionality.

In practice, it wound up not quite being worth it (the concept requires the same file to "exist" in multiple locations for that idea to work with all your other tools in a way that actually exploits tags, but then when you reference a given file (e.g., to import it) that needs to be some sort of canonical name in the TFS so that on `cd`-esque operations you can reference the "right" one -- doable, but not agnostic of the file format, which is the point where I saw this causing more problems than it was solving).

I still think there's something there though, especially if the editing environment, programming language, and/or representation of the programming language could be brought on board (e.g., for any concrete language with a good LSP, you can re-write important statements dynamically).


Oops: important -> import


Indeed! The traditional name for the proximity principle is "cohesion"[1].

[1] https://en.wikipedia.org/wiki/Cohesion_(computer_science)


Not to pick on Rails, but sorting files into "models / views / controllers" seems to be our first instinct. My pantry is organized that way: baking stuff goes here, oils go there, etc.

A directory hierarchy feels more pleasant when it maps to features, instead. Less clutter.

Most programmers do not care about OO design, but "connascence" has some persuasive arguments.

https://randycoulman.com/blog/2013/08/27/connascence/

https://practicingruby.com/articles/connascence

https://connascence.io/

> Knowing the various kinds of connascence gives us a metric for determining the characteristics and severity of the coupling in our systems. The idea is simple: The more remote the connection between two clusters of code, the weaker the connascence between them should be.

> Good design principles encourages us to move from tight coupling to looser coupling where possible. But connascence allows us to be much more specific about what kinds of problems we’re dealing with, which makes it easier to reason about the types of refactorings that can be used to weaken the connascence between components.


We could get that without a hierarchical categorization of code, though?

Makes me wonder what it would look like if you gave "topics" to code as you wrote it. Where would you put some topics? And how many would you have that are part of several topics?
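One way to picture "topics" on code, as a sketch (decorator and names are hypothetical): tag each function as you write it, and a "view" of the codebase becomes just a query over the tags. A function can belong to several topics at once, which a directory hierarchy can't express.

```python
from collections import defaultdict

# topic name -> set of function names carrying that topic
TOPICS = defaultdict(set)

def topic(*names):
    """Attach one or more topic tags to a function at definition time."""
    def mark(fn):
        for name in names:
            TOPICS[name].add(fn.__name__)
        return fn
    return mark

@topic("parsing", "csv")
def read_rows(text):
    return [line.split(",") for line in text.splitlines()]

@topic("reporting", "csv")
def row_count(rows):
    return len(rows)

# A "view" is a lookup: sorted(TOPICS["csv"]) lists everything
# touching CSV handling, across whatever files it lives in.
```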


There is a similar question about message board systems.

Instead of posting a topic in a subforum, what if subforums were turned into tags and you just post your topic globally with those tags. Now you can have a unified UI that shows all topics, and people can filter by tag.

I experimented with this with a /topics page that implemented such a UI. What I found was that it becomes one big soup that lacks the visceral structure that I quickly found to be valuable once it was missing.

There is some value to "Okay, I clicked into the WebDesign subforum and I know the norms here and the people who regularly post here. If I post a topic, I know who is likely to reply. I've learned the kind of topics that people like to discuss here which is a little different than this other microclimate in the RubyOnRails subforum. I know the topics that already exist in this subforum and I have a feel for it because it's separate from the top-level firehose of discussion."

I think something similar happens with modules and grouping like-things into the same file. Microclimates and micronorms emerge that are often useful for wrapping your brain around a subsystem, contributing to it, and extending it. Even if the norms and character change between files and modules, it's useful that there are norms and character when it comes to understanding what the local objective is and how it's trying to solve it.

Like a subforum, you also get to break down the project management side of things into manageable chunks without everything always existing at a top organizational level.


I agree, but go farther:

Most things have multiple kinds of interesting properties. And in general, the more complex the thing, the more interesting properties it has. Ofc "interesting" is relative to the user/observer.

The problem with hierarchical taxonomies, and with taxonomies in general, is that they try to categorize things by a single property. Not only that, the selection of the property to classify against, is relevant to the person who made the selection, but it might not be relevant, or at least the most relevant, property for others who need to categorize the same set of things.

Sometimes people discover "new" properties of things, such as when a new tool or technique for examining the things, comes into existence. And new reasons for classifying come into existence all the time. So a hierarchical taxonomy begins to become less relevant, as soon as it is invented.

Sometimes one wants to invent a new thing and needs to integrate it into an existing taxonomy. But they have a new value for the property that the taxonomy uses for classification. Think back to SNMP and MIBs and OIDs. Now the original classifier is a gatekeeper and you're at their mercy to make space for your thing in the taxonomy.

In my experience, the best way to classify things, ESPECIALLY man-made things, is to allow them to be freely tagged with zero or more tags (or if you're a stickler, one or more tags). And don't exert control over the tags, or exert as little control as you can get away with. This allows multiple organic taxonomies to be applied to the same set of things, and adapts well to supporting new use cases or not-previously-considered use cases.


Yeah, I suspect this is one where the general hierarchy does quite a lot of heavy lifting. Such that it isn't that I would want to lose it, entirely. More that I think it is best seen as a view of the system. Not a defining fact of it.

It's a lot like genres for music and such. In broad strokes, they work really well. If taken as a requirement, though, they start to be too restrictive.


Tags are great only when hierarchical structures become cumbersome. And even then, there's some limit to how many tags you can have before they become useless.


I feel like you are arguing more for namespaces than modules.

Having a hierarchical naming system that spans everything makes it largely irrelevant how the functions themselves are physically organized. This also provides a pattern for disambiguating similar products: prefix names with the real-world FQDN of each enterprise.
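A toy sketch of that idea (registry and dotted names are hypothetical): if every symbol is registered under an FQDN-prefixed path, two enterprises can ship the same short name without colliding, and the physical file layout stops mattering.

```python
# Flat registry keyed by globally unique, FQDN-prefixed dotted names.
registry = {}

def register(qualname):
    def add(fn):
        registry[qualname] = fn
        return fn
    return add

@register("com.example.billing.invoice.total")
def _total_example(lines):
    return sum(lines)

@register("org.acme.billing.invoice.total")
def _total_acme(lines):
    return max(lines)

# Same short name `total`, no collision: the enterprise prefix
# disambiguates, regardless of which file either lives in.
# registry["com.example.billing.invoice.total"]([1, 2, 3]) -> 6
```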


As another poster already said, providing namespaces is just one of the functions of modules, the other being encapsulation, i.e. the interface of a module typically exports only a small subset of the internal symbols, the rest being protected from external accesses.

While a function may have local variables that are protected from external accesses, a module can export not only multiple functions, but any other kinds of symbols, e.g. data types or templates, while also being able to keep private any kind of symbol.

In languages like C, which have separate compilation, but without modules, you can partition code in files, then choose for each symbol whether to be public or not, but with modules you can handle groups of related symbols simultaneously, in a simpler way, which also documents the structure of the program.

Moreover, with a well-implemented module system, compilation can be much faster than when using inefficient tricks for specifying the interfaces, like header file textual inclusion.


It is irrelevant until you have 4gb of binaries loaded from 50 repositories and then you are trying to find the definition of some cursed function that isn't defined in the same spot as everything it is related to, and now you have to download/search through all 50 repositories because any one of them could have it. (True story)


Modules don’t imply namespaces. You can run into the same problem with modules. For example, C libraries don’t implicitly have namespaces. And the problem can be easily solved by the repository maintaining a function index, without having to change anything about the modules.
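That function index can be tiny. A sketch (paths hypothetical) using Python's `ast` module: walk each source file once and record where every top-level function is defined, so a lookup replaces grepping all 50 repositories.

```python
import ast

def index_functions(sources):
    """sources: {path: source_text} -> {function_name: path}"""
    where = {}
    for path, text in sources.items():
        for node in ast.parse(text).body:
            if isinstance(node, ast.FunctionDef):
                where[node.name] = path
    return where

idx = index_functions({
    "repo_a/io.py": "def read(p):\n    pass\n",
    "repo_b/calc.py": "def total(xs):\n    return sum(xs)\n",
})
# idx["total"] -> "repo_b/calc.py": one dictionary lookup instead
# of searching every repository for the definition.
```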


The article touches on the real granularity issue (actually the function names would need a version number as well; I'm not sure from my scan of the article whether that was mentioned).

Modules being collections of types and functions obviously increases coarseness. I'm not a fan of most import mechanisms because it leaves versioning and namespace versioning (if it has namespaces at all...) out, to be picked up poorly by build systems and dependency graph resolvers and that crap.


How do you imagine importing modules by version in the code? Something like "import requests version 2.0.3"? This sounds awful when you accidentally import the same module in two different versions and chaos ensues.
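For what it's worth, a sketch of the closest a Python process can get today: check the installed distribution's version at import time via `importlib.metadata`. The `require` helper below is hypothetical; the point is that the runtime can only verify one installed copy, not load two versions side by side.

```python
from importlib import metadata

def require(package, version_prefix):
    """Raise if the installed distribution doesn't match the pin."""
    installed = metadata.version(package)  # raises if not installed
    if not installed.startswith(version_prefix):
        raise ImportError(
            f"{package} {installed} installed, {version_prefix}* required")

# require("requests", "2.")   # hypothetical pin; only one copy can win,
# import requests             # so two different pins in one process conflict.
```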


Import latest where signed by a trusted authority.


Don't forget about encapsulation, there's most likely a lot of functions that aren't relevant outside the module.


just deduce the domain from text similarity :o)



