
Have you ever wanted to pay a monthly subscription to give your location data and dashcam feed to a company for them to sell to other companies? Get Bee Mapping!


Shoutout to Mixxx, it's one of the examples of open source being able to match, and in many ways surpass, the enjoyability of its closed-source for-profit alternatives. I regularly give it as an example alongside things like Blender to show that a better, more free software world is possible outside of just developer tools.


I wish more people posted questions like this. Questioning yourself and your path (and importantly, the ethics of it all) is a vital survival skill to avoid losing yourself entirely to the capitalism game, where you start to listen more to your analytics software than your sensibilities and moral compass.

I don't know if your AI products are scummy or not, but the fact that you're asking this question makes me less worried about you than a lot of other folks out there furiously rushing to get their AI pre-crime hot dog detectors out the door so they can get on the NASDAQ.


Amazon doesn't pretend to care about its employees, Google does (pretend, that is). Working for Amazon right out of college, I honestly kind of appreciated the fact that they never tried to make it feel like your work was your "family", which always came off as incredibly creepy. Interviewing for Facebook and Google and seeing their offices and culture always gave me the heebie jeebies for that reason, and I'm glad I didn't choose them. I'm also glad I left Amazon quickly.

In the end, it's just two different types of cults. You're a fool to show loyalty to Amazon when it shows none back, and Google is in some ways creepier for pretending to care about you and sucking all facets of yourself into the corporation.

The straightforward solution is to work for neither, and show that we won't tolerate creepy corporate culture taking over the technology world that we love.


I feel like a lot of fantastic software is made by a small number of people whose explicit culture is a mix of abnormally strong opinionatedness plus the dedication to execute on that by developing the tools and flow that feel just right.

Much like a lot of other "eccentric" artists in other realms, that eccentricity is, at least in part, the bravery of knowing what one wants and making that a reality, usually with compromises that others might not be comfortable making (efficiency, time, social interaction with a larger group, etc.).


Totally agree.

It is allowing the human element that creates quality craft.

When you follow best practices, you remove that human element (hyperbole, I know).

When you force certain rules, Jiras, stand-ups, you increase predictability, but the cost is lower quality, lower happiness, and higher attrition.


SQLite's quality is due to the DO-178B compliance that has been achieved with "test harness 3" (TH3).

Dr. Hipp's efforts to perfect TH3 likely did lower his happiness, but all the Android users stopped reporting bugs.

"The 100% MCD tests, that’s called TH3. That’s proprietary. I had the idea that we would sell those tests to avionics manufacturers and make money that way. We’ve sold exactly zero copies of that so that didn’t really work out... We crashed Oracle, including commercial versions of Oracle. We crashed DB2. Anything we could get our hands on, we tried it and we managed to crash it... I was just getting so tired of this because with this sort of thing, it’s the old joke of, you get 95% of the functionality with the first 95% of your budget, and the last 5% on the second 95% of your budget. It’s kind of the same thing. It’s pretty easy to get up to 90 or 95% test coverage. Getting that last 5% is really, really hard and it took about a year for me to get there, but once we got to that point, we stopped getting bug reports from Android."

https://corecursive.com/066-sqlite-with-richard-hipp/


Even more interesting is right above that:

> he managed to segfault every single database engine he tried, including SQLite, except for Postgres. Postgres always ran and gave the correct answer. We were never able to find a fault in that. The Postgres people tell me that we just weren’t trying hard enough.


I will confess that it is easier to quote the rule than the exception.

That is a profound compliment for Postgres.


I've always felt like Postgres is like one of those big old Detroit Diesel V12s that power generators and mining trucks and things. It's slow and loud and hopelessly thirsty compared to the modern stuff you get nowadays, and it'll continue to be just as slow and loud and hopelessly thirsty for another 40 or 50 years without stopping even once if you don't fiddle with it.


And then you find out that the slowness was because it was placed in first gear and someone left a limiter on the throttle...


(I should say that it is not at all difficult to crash an Oracle dedicated server process. I've seen quite a few. This doesn't crash the database (usually).

I've never run an instance in MTS mode, so I've never seen a shared server crash, although I think it would be far from difficult.

I might be curious about the type of DB2 that crashed, UDB, mainframe, or OS/400, as they are very different.)


it's not that "best practices" or any of those things are what causes trouble; it's failing to recognize that they're just tools, and people will still be the ones doing the work. And people should never be treated as merely tools.

You can use all of those things to enable people to do things better and with less friction, but you also need to keep in mind that if a tool becomes more of a hindrance than a help, you should go looking for a new one.


> it's not that "best practices" or any of those things are what causes trouble; it's failing to recognize that they're just tools, and people will still be the ones doing the work. And people should never be treated as merely tools.

For me, the concept of best practices is pernicious because it is a delegation of authority to external consensus, which will inevitably lead to people being treated as tools as they are forced to contort themselves to said best practices. The moment something becomes best practice, it becomes dogma.


Imagine your doctor or pilot eschewing “best practices” and what your reaction would be. There’s a reason knowledge communities build consensus.

Best practice doesn’t mean you’re at the mercy of the consensus, it just means you have to justify why you should stray from it.


Doctors' “best practices” are handed down by the AMA (or local equivalent). Pilots' “best practices” are handed down by the FAA (or local equivalent).

Programmers' best practices are handed down by the Twitter accounts of consultants. It's not quite the same thing.


This comment perfectly encapsulates the point that I am making about best practices: the concept is used as a cudgel to silence debate and to confer a sense of superiority on the practitioner of "best practice." It is almost always an appeal to authority.

No one wants cowboy pilots ignoring ground control. Doctors though do not exactly have the best historical track record.

Knowledge communities should indeed work towards consensus and constantly be trying to improve themselves. Consensus though is not always desirable. Often consensus goes in very, very dark directions. Even if there is some universal best practice for some particular problem, my belief is that codifying certain things as "best practice" and policing the use of alternative strategies is more likely to get in the way of actually getting closer to that platonic ideal.


Perhaps a better example might be "covering indexes," or what Oracle would call an "index full scan."

It is an idea so efficient that to disregard it is inefficiency.

"I had never heard of, for example, a covering index. I was invited to fly to a conference, it was a PHP conference in Germany somewhere, because PHP had integrated SQLite into the project. They wanted me to talk there, so I went over and I was at the conference, but David Axmark was at that conference, as well. He’s one of the original MySQL people.

"David was giving a talk and he explained about how MySQL did covering indexes. I thought, “Wow, that’s a really clever idea.” A covering index is when you have an index and it has multiple columns in the index and you’re doing a query on just the first couple of columns in the index and the answer you want is in the remaining columns in the index. When that happens, the database engine can use just the index. It never has to refer to the original table, and that makes things go faster if it only has to look up one thing.

"Adam: It becomes like a key value store, but just on the index.

"Richard: Right, right, so, on the fly home, on a Delta Airlines flight, it was not a crowded flight. I had the whole row. I spread out. I opened my laptop and I implemented covering indexes for SQLite mid-Atlantic."

This is also related to Oracle's "skip scan" of indexes.

https://corecursive.com/066-sqlite-with-richard-hipp/
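
To make the covering-index idea concrete, here's a minimal sketch using Python's built-in sqlite3 module (the table and index names are made up for illustration); EXPLAIN QUERY PLAN confirms SQLite answers the query from the index alone:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (id INTEGER, country TEXT, email TEXT)")
    # Index on (country, email): a query filtering on country and selecting
    # only email can be answered entirely from the index.
    con.execute("CREATE INDEX idx_country_email ON users (country, email)")

    (plan,) = con.execute(
        "EXPLAIN QUERY PLAN SELECT email FROM users WHERE country = ?", ("DE",)
    ).fetchall()
    print(plan)
    # detail column reads something like:
    # SEARCH users USING COVERING INDEX idx_country_email (country=?)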


Most software “best practices” are a poorly structured replacement for a manual.

Aviation best practices were written from the outcome of minor and major disasters.


> And people should never be treated as merely tools.

Maybe on a tight-knit team people don't mind being treated like tools, because they understand what needs to get done next and see that it makes the most sense for them to do it; it's nothing personal.

At my freshman year "1st day" our university president gave us an inspirational speech in which he said "people say our program just trains machines... I want you to know we don't train machines. We educate them."


I'd say that if you have a tight-knit team, you are already doing the very opposite of treating people as tools. There's nothing wrong with having a shared understanding of a goal and then assuming a specific role in the effort to accomplish that goal; people are very good at that.

The problem is when you think of people the same way you think of a hammer when you use it to hit nails: The hammer doesn't matter, only that the nail goes in.


Best practices are subjective. What is best practice for C is not the same as for Python.

SQL DBs provide consistency guarantees around mutating linked lists. It’s not hard to do that in code and use any data storage format.

Imo software engineers have made software “too literal” and generated a bunch of “products” to pitch in meetings. This is all constraints on electron state given an application. A whole lot of books are sold about unit tests but I know from experience a whole lot of critical software systems have zero modern test coverage. A lot of myths about the necessity of this and that to software have been hyped to sell stock in software companies the last couple decades.


"Best practices" are just a summary of what someone (or a group of someones) thinks is something that is broadly applicable, allowing you to skip much of the research required to figure out what options there are even available.

Of course, dogmatic adherence to any principle is a problem (including this one). Tools can be misused, but that doesn't really affect how useful they can be; though I think better tools are generally the kind that people will naturally use correctly, that's not a requirement.


I don't think you need "abnormally strong opinionatedness" or anything else special: all you need is a certain (long-term) dedication to the project and willingness to just put in the work.

Almost every project is an exercise in trade-offs; including every possible feature is almost always impossible, and never mind that it's the (usually small) group of core devs who need to actually maintain all those features.


I interpreted "opinionatedness" as meaning they have a clear definition of what sqlite is and isn't, including the vision of where it's headed. That would result in a team with very strong opinions about which changes and implementations are a good or bad fit for sqlite.

Can a project consistently make the right trade-offs without having strong opinions like that?


I see this information especially in light of the theory of constraints when working on a platform: https://en.wikipedia.org/wiki/Theory_of_constraints

These devs provide a platform, and any change to a platform has a huge impact on its users. They have a plan they follow, and every project has layers. Constraints can be good when applied correctly, like in this case.


While not apples-to-apples and less polished, we're slowly building up https://github.com/tonarino/innernet as a fully open-source (and self-hosted) alternative to things like Tailscale. It controls vanilla WireGuard under the hood (kernel or userspace implementations) and is lower level (no graphical interfaces yet), but depending on your needs it might still fit :).


We'd love your help if you're interested in it! Supporting more platforms and making graphical frontends are high on the priority list.


> 1. Is it possible to use the same subnet on different innernets?

As moviuro mentioned, no, not unless you want to get fancy with independent network namespaces (https://man7.org/linux/man-pages/man8/ip-netns.8.html).

If you want to be more confident of not having an address space conflict, I recommend using a randomly generated private IPv6 block using the RFC 4193 specification: https://en.wikipedia.org/wiki/Private_network#Private_IPv6_a...
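
If you want a quick way to generate such a block, here's a minimal Python sketch (not an innernet feature, just the standard library) that builds an fd00::/8 unique local /48 with a random 40-bit Global ID, as RFC 4193 describes:

    import secrets
    import ipaddress

    # RFC 4193: the fd00::/8 prefix plus a randomly generated 40-bit
    # Global ID yields a /48 that's very unlikely to collide.
    global_id = secrets.randbits(40)
    prefix_int = ((0xFD << 40) | global_id) << 80
    network = ipaddress.IPv6Network((prefix_int, 48))
    print(network)  # e.g. fd3c:91a7:2e44::/48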

> 2. Could you please provide installation instructions for generic linux, as I am looking to host on almalinux and opensuse leap, neither of which use dpkg.

Our Arch PKGBUILD (https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=inner...) may be the simplest existing guide for making your own package for your distro. If it's not a lot of work to add, I'm happy to maintain other package formats, or help you be a maintainer.

Thanks! Looking forward to hearing how it goes for you.


I found interactive CIDR visualization tools like https://cidr.xyz/ to be very helpful in understanding the notation.

I also end up using https://gitlab.com/ipcalc/ipcalc a lot, and am definitely planning on similarly making it easier in the terminal to manage and visualize the CIDRs in innernet networks. I'm hoping innernet can become a fun way to learn networking in a safe (and cheap) virtual environment.
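
In the meantime, Python's standard library can already do the same arithmetic in the terminal; a small sketch (the addresses are just examples):

    import ipaddress

    net = ipaddress.ip_network("10.42.0.0/16")
    print(net.netmask)            # 255.255.0.0
    print(net.num_addresses)      # 65536
    print(net.broadcast_address)  # 10.42.255.255

    # Carve the /16 into /24 subnets, say one per group of peers.
    subnets = list(net.subnets(new_prefix=24))
    print(subnets[0], subnets[-1])  # 10.42.0.0/24 10.42.255.0/24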


Latency and bandwidth were a big issue for us - innernet uses the WireGuard kernel module on Linux when available, which is about as good as you can get (easily achieving saturated gigabit line speeds).

macOS is a different story, since there are only userspace implementations at the moment. Innernet currently looks for the official "wireguard-go" implementation, but you can swap out userspace implementations as you like. I'll add an environment variable check to make that easier without needing to recompile.

