Hacker News | bisby's comments

KISS is a complete paradigm shift from other phone launchers. It takes some getting used to. It has made me rethink, from time to time, how I use my phone, because I have it set to sort by recently used: it seems I only have a few apps I use regularly.

Not for everyone, but it's my preferred way to use a phone now.


I suppose it doesn't matter because there is probably a search or something, but I only use my banking app and my children's games every month or two. I like knowing where they are: two swipes away.

Also, doesn't this mean more attention to the screen? I can blindly pick apps without looking at my screen. Makes it useful when running + audiobook + taking notes.


Yes. Search works for finding things once every few months. Or, I've found that they tend to not really be that far down the list, because I only use a few apps per month anyway, so "1 month ago" is actually pretty recent in that regard.

But I also have specific apps pinned. Messaging, Browser, Camera all have fixed icons across the bottom of the screen, so I could blindly pick those as well as on any other launcher.

And in some cases, it means more attention, but more intent - which I find good. I'm far less likely to randomly open an app just because I see it on the screen. "Oh, I haven't played this game in a few months" never pops up (unless I scroll the complete app list, which it still has).

It's a trade-off. For me, it means faster (though not no-look - but tbh, I've never had that level of accuracy with any launcher) access to my most commonly used apps, and a slight slowdown for rarely used apps. So I save half a second ten times a day, and lose five seconds once a week. It's a trade-off I'm willing to make based on my particular usage patterns.
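As a rough sketch of what "sort by recently used" means here (app names are invented; Python purely for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of KISS-style "recently used" ordering: apps sorted by
# last launch time. With only a handful of apps in regular use, even an app
# last opened a month ago still sits only a few slots down the list.
now = datetime.now()
last_used = {
    "Messages": now - timedelta(hours=1),
    "Browser": now - timedelta(hours=3),
    "Banking": now - timedelta(days=35),
    "KidsGame": now - timedelta(days=60),
}
ordered = sorted(last_used, key=last_used.get, reverse=True)
print(ordered)  # most recent first: Messages, Browser, Banking, KidsGame
```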


Didn't know about KISS. I know Kvaesitso is also a search-focused launcher (it seems to have more features? I didn't download KISS):

https://kvaesitso.mm20.de/


Power outages here tend to last an hour or more. A UPS doesn't last forever, and depending on how much home compute you have, might not last long enough for anything more than a brief outage. A UPS doesn't magically solve things. Maybe you need a home generator to handle extended outages...

How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.

I host a lot of stuff, but nextcloud to me is photo sync, not business. I can wait til I'm home to turn the server back on. It's not a bottomless pit for me, but I don't really care if it has downtime.


Fairly frequently, 6kVA UPSs come up for sale locally to me, for dirt cheap (<$400). Yes, they're used, and yes, they'll need ~$500 worth of batteries immediately, but they will run a "normal" homelab for multiple hours. Mine will keep my 2.5kW rack running for at least 15 minutes - if your load is more like 250W (much more "normal" imo) that'll translate to around 2 hours of runtime.
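The runtime scaling is just a ratio; a quick back-of-envelope sketch with the same numbers (it ignores inverter efficiency varying with load, so treat it as an estimate):

```python
# If a 2.5 kW load runs for ~15 minutes, usable battery energy is roughly
# 2.5 kW * 0.25 h = 0.625 kWh. Runtime at any other load is that energy
# divided by the load.
usable_kwh = 2.5 * (15 / 60)

def runtime_hours(load_kw):
    return usable_kwh / load_kw

print(runtime_hours(0.25))  # 2.5 hours on paper for a 250 W load
```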

Is it perfect? No, but it's more than enough to cover most brief outages, and also more than enough to let you shut down everything you're running gracefully after you've used it for a couple of hours.

Major caveat, you'll need a 240V supply, and these guys are 6U, so not exactly tiny. If you're willing to spend a bit more money though, a smaller UPS with external battery packs is the easy plug-and-play option.

> How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.

At the end of the day, it's very hard to argue you need perfect uptime in an extended outage (and I say this as someone with a 10kW generator and said 6kVA UPS). I need power to run my sump pumps, but that's about it - if power's been out for 12-18 hours, you better believe I'm shutting down the rack, because it's costing me a crap ton of money to keep running on fossil fuels. And in the two instances of extended power outages I've dealt with, I haven't missed it - believe it or not, there are usually more important things to worry about than your Nextcloud uptime when your power's been out for 48 hours. Like "huh, that ice-covered tree limb is really starting to get close to my roof."


This is a great example of how the homelab bottomless pit becomes normalized.

Rewiring the house for 240V supply and spending $400+500 to refurbish a second-hand UPS to keep the 2500W rack running for 15 minutes?

And then there's the electricity costs of running a 2.5kW load, and then cooling costs associated with getting that much heat out of the house constantly. That's like a space heater and a half running constantly.


Late reply I know, but I wanted to clear up that I don't want to normalize a 2.5kW homelab. Usually when talking to people about it I refer to it as "insane." But having an absolutely insane amount of compute and RAM is fun (and I personally find it genuinely useful for learning, in particular in terms of engineering for massive concurrency) and I can afford the hydro, so whatever. To match the raw compute and RAM with current-gen hardware, you only need maybe 500W - you'll just be spending a shitload of money up front, instead of over time on hydro. (To match my current lab's utilized performance, I'd need at least 2 servers, one with a ~Threadripper 7955WX and 256GB of DDR5, and another with an Epyc 9475F and 1TB of DDR5. That would put me somewhere in the neighborhood of $35k? Ish?) Costs me about $115/month to run the rack right now (cheaper than my hot tub) and cooling is free in the winter (6-7 months of the year), so the break-even is loooooong term. And realistically, $100ish a month isn't crazy, considering I self-host basically everything - the only services I pay for are my VPS to run my mail server, and AWS S3 Glacier for backup-of-last-resort.

Again, not trying to normalize 2500W, most people don’t need that (and I don’t really either), but I do make good use of it.
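For what it's worth, the hydro math roughly checks out (the $/kWh rate is inferred from the figures above, not a quoted bill):

```python
# A continuous 2.5 kW load at $115/month implies a cheap electricity rate.
load_kw = 2.5
kwh_per_month = load_kw * 24 * 30   # 1800 kWh/month
implied_rate = 115 / kwh_per_month  # ~ $0.064/kWh, plausible cheap hydro
print(kwh_per_month, round(implied_rate, 3))
```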

As for “rewiring the house for 240V”, every house* in Canada and the US is delivered “split-phase” 240V (i.e. 240V with a centre tapped neutral, providing 120V between either end of the 240V phase and neutral or 240V from phase to phase), and many appliances are 240V (dryers, water heaters, stove/ranges/ovens, air conditioners). If you have a space free in your breaker panel, adding a 240V 30A circuit should cost less than $1k if you pay an electrician, and can be DIY’d for like $150 max unless you have an ancient panel that requires rare/specialty breakers or the run is very long. It’s far from the most expensive part of a homelab unless you’re running literally just a raspberry pi or something.

*barring an incredibly small exceptional percentage


I agree with you. My use case doesn't call for perfect uptime. Sounds like yours doesn't either (though you've got a pretty deep pit yourself, if 240v and generator weren't part of the sump plans and the rack just got to ride along (that's how it worked for me)).

But that doesn't mean it's for us to say that someone else's use case is wrong. Some people self-host a Nextcloud instance and offer access to it to friends and family. What if someone else is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.

My point was simply that different people have different use cases and different needs, and it definitely can become a bottomless pit if you let it.

For me: IPMI, PiKVM, TinyPilot - any sort of remote management interface that can power a device on/off and that powers itself back on automatically when power is available, so you can reasonably always access it. Having THAT on the UPS means you can power down the compute remotely, and also power it back up remotely. That means you never have to send someone to reboot your rack while you're out of town, and you don't shred your UPS battery in minutes by having the server auto-boot the moment power returns. Eliminates reliance on other people while you're not home :tada:

But again, not quite a bottomless pit, but there are constant layers of complexity if you want to get it right.


> though you've got a pretty deep pit yourself, if 240v and generator weren't part of the sump plans and the rack just got to ride along (that's how it worked for me)

Generator was a requirement for the sump pump. My house was basically built on a swamp, so an hour in spring without it means water in the basement. Now admittedly, I spent an extra couple hundred bucks to get a 240V generator with higher capacity than strictly necessary, but it was also roughly the minimum amount of money to spend to get one that can run on gasoline or propane, which was a requirement for me. 240V to the rack cost me $45, most of that cost being the breaker (rack is right next to the panel).

> What if someone else is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.

I host roughly a dozen services that have around 25 users at the moment, but I charge $0 for them. I make it very clear: I have a petabyte of storage and oodles of compute, feel free to use your slice, and I’ll do my best to keep everything up and available - for my own sake (and I’ve maintained over 3 nines for 8 years!). But you as a user get no guarantee of uptime or availability, ever, and while I try very hard to backup important data (onsite, offsite split to multiple locations, and AWS S3 glacier), if I lose your data, sucks to suck. So far most people are pretty happy with this arrangement.

I couldn’t possibly fathom worrying about other people’s access to my homelab during a power outage. If I wanted to care, I’d charge for access, and I’d have a standby generator, multiple WANs, a more resilient remote KVM setup, etc. But then I’d be running a business - just a really shitty one that takes tons of my time and makes me little money. And is very illegal (for some of the services I make available, at least), instead of only slightly illegal.


No? A filesystem is the format the data on the disk is stored in. If you mount an ext4 disk as NTFS, it won't load properly. It's not just the interface for loading the data; it's how the data is actually stored.


What I mean is that it should ignore permissions on external ext4 drives by default on desktops.


There's no concept of "external". What would it be, "USB" or anything mounted under /mnt or /media? What if it's the root OS drive of another computer you're trying to fix connected through a USB-SATA adapter? Should any program running with minimized privileges get to overwrite even root files in that OS drive?

I think that it's a pretty good heuristic that if permissions exist in the filesystem, they matter and shouldn't be ignored.


They shouldn't be ignored, but they can be ignored - that's the problem. File permissions are not encryption or security: if I can't read a file on this machine because I'm not root, I'll just move the drive to a different machine where I am root.

But I agree with you: they do have a use, they matter for some use cases, and we shouldn't arbitrarily decide to ignore them.
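A quick local illustration of "permissions are metadata, not security" (Python, POSIX assumed; no second machine involved, the point is just that the bits are plain data):

```python
import os
import stat
import tempfile

# Strip every permission bit from a file, then observe that the bits are
# still just numbers stored in the filesystem. Any root user - including
# root on whatever machine the disk gets attached to next - can read them
# back and flip them at will. Nothing cryptographic backs them up.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o000)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # the stored bits, readable regardless of what they "deny"

os.chmod(path, 0o644)  # "root on another machine" simply sets them back
os.remove(path)
```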


> Then release it under a copyleft license. Or if you absolutely must, release your proprietary bit under a non-open source license

An old mentor once said to me that a contract is just the start of a conversation. If you sign a contract, the other party violates it, and your business goes under... you may be able to get some compensation through courts, but also your business is gone. And getting that compensation and proving that the contract was violated and how much you are entitled to costs time and money.

Releasing something at all, even under a restrictive license, means nothing if you have no intention (or capability) of enforcing that license. Look at how often companies take GPL code, modify it, and then never publish their modifications... and then people have to sue to get things resolved.

So "We aren't ready to commit the legal resources to fighting and defending the licenses" makes a LOT of sense. IP protection is not just a matter of signing a piece of paper saying people can't do a thing, you have to actually prevent them from doing the thing.


You could argue whether or not it's a "feature", but one of the things Ghostty claims as an advantage is the out-of-the-box configuration.

With no config at all, ghostty looks nicer than my alacritty setup. The rendering is just real nice. I could probably get alacritty to look as nice or nicer, but ghostty just worked this way with no config needed.

So you could consider both aesthetics/rendering quality and simplicity of setup as features, which people may need/want (or not).


I wouldn't argue against that at all: OOBE is absolutely a feature.

Problem is, we don't all agree on what the OOBE should be. I, for example, always strip out menus, tabs, and other UI features. For me, the terminal that requires the fewest lines in the config file is probably going to be the winner (assuming no unfixable defects that affect me).


A space elevator doesn't just take you to the Kármán line (like on the site in the OP); to get to orbit, you'd need to get up to geostationary height. That's 22,000 miles.

What's the best way to pull yourself directly vertical along a cable for 22,000 miles?

What's the best way to descend 22,000 miles quickly, but also with a braking mechanism that isn't going to require a heat shield?

Some sort of slow cable car going at even 10mph is going to take 2,200 hours... at 1,000mph it still takes 22 hours. That's nearly a full day to orbit even going REALLY fast. And getting up to 1,000mph vertically, sustained for 22 hours... that's not an easy feat.

And if the goal is just to get up past the Kármán line and use the elevator as stage 1 for a rocket launch, with detaching from the elevator while suborbital being fine, then it's a one-way trip, and you still need to re-enter the old-fashioned way.

The scale of space makes all of the problems far more complicated (edit: not just the cable strength issue, but traversing the cable)
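The transit arithmetic above, spelled out (distance and speeds exactly as quoted in the comment):

```python
# Time to climb a tether to geostationary height at a given constant speed.
GEO_MILES = 22_000

def hours_at(mph):
    return GEO_MILES / mph

print(hours_at(10))    # 2200.0 hours at cable-car speed
print(hours_at(1000))  # 22.0 hours even at a sustained 1000 mph climb
```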


Unless we're using it for humans the transit time isn't that big a deal; "last mile" orbital transfer times are often measured in days anyway.

That "last mile" bit is going to entail independent propulsion anyway. Getting to the altitude of the ISS is a mere 10-hour trip at a sedate 40kph, which isn't unpleasant even for humans, but the ISS orbits at nearly 29000kph (as will you, if you let go of the space elevator at that altitude), and the velocities are only half as scary at the far end, so your rendezvous anywhere other than one specific point in geo is going to be complicated. But you've saved the fuel cost of escaping Earth's atmosphere, which is rather significantly more than the fuel costs of the other bits of your satellite mission, including reentry. (At least until the costs of building, maintaining, and protecting the elevator are factored in, but who knows what unobtanium costs?)


> as will you if you let go of the space elevator at that altitude

Doesn't the tether have a constant (24-hour) rotational period at every elevation? That is significantly slower than the ISS.


Fair point: you'd need to be orbiting at that speed to stay in that orbit, and you'd need propulsion to get the delta-v to get there after letting go of the tether - but a lot less than to launch from ground level through the atmosphere. Or you could figure out the point higher up the tether to release from where your orbital decay would intersect the ISS orbit, but given the precision involved in that rendezvous you'd still want propulsion. You'd want propulsion for the last-mile bit for pretty much anything other than building a station attached to the tether, which was kind of my whole point :)
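Rough numbers for the tether-vs-orbit speed gap (standard constants; a back-of-envelope sketch, not a rendezvous plan):

```python
import math

MU = 3.986e14            # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m
SIDEREAL_DAY = 86_164.0  # seconds per rotation

def tether_speed(alt_m):
    # A tether co-rotates with Earth: one revolution per sidereal day,
    # regardless of altitude.
    return 2 * math.pi * (R_EARTH + alt_m) / SIDEREAL_DAY

def orbital_speed(alt_m):
    # Circular-orbit speed at the same radius.
    return math.sqrt(MU / (R_EARTH + alt_m))

iss_alt = 420_000.0  # roughly ISS altitude, m
print(round(tether_speed(iss_alt)))   # ~495 m/s riding the tether
print(round(orbital_speed(iss_alt)))  # ~7660 m/s needed to actually orbit
# At geostationary altitude (~35,786 km) the two converge, which is exactly
# why GEO is the natural release point.
```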


You don’t need to get to geostationary to get to orbit. The reason elevators need to get that high is because that’s the lowest place you could “anchor” the top of the elevator to something fixed relative to the earth.


I think they aren't referring to "where does it go?" so much as to being forgetful.

If you have something that would be reasonable to open on any workspace because it's ephemeral (they used a tmp terminal as an example), and you open it, navigate away from it, switch workspaces a few times, then get pulled into a meeting or go to lunch, come back, switch workspaces a few more times...

"Where did I leave that terminal, I dont remember where I was when I opened it."

In i3wm/sway etc, you can cycle all your workspaces and eventually one of them will have it visible. On Niri, as you cycle through all your workspaces you may never see it because you don't see all the windows in a workspace, unless you scroll through the workspace panes as you cycle workspaces.

It's not a problem necessarily, but it is something to consider. It sounds like this doesn't affect your workflow, but it might affect others.


It has overview. You can see all windows and workspaces in a scaled out view of your preference.


Fair enough. "Overview" [0] presumably solves this, though.

[0] https://github.com/YaLTeR/niri/wiki/Overview


I always had a keybind to toggle gaps. Sometimes certain layouts just feel congested, and the gaps put space between the windows and help them feel like they're in their own space (even though it makes them even smaller). It's purely psychological and often doesn't make sense, but it's not just "show off the wallpaper and waste real estate" - it's for mental processing.
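For reference, a gaps toggle can be as simple as this (i3-gaps/sway-style config; the exact keybinds are my assumption of a typical setup, not anyone's actual dotfiles):

```
# toggle-able gaps: one key to collapse them, one to bring them back
gaps inner 10
bindsym $mod+g gaps inner current set 0
bindsym $mod+Shift+g gaps inner current set 10
```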

And the same goes for the icons. I've personally never gotten there. But also, I don't look at the icons; they could be hidden. I know that if I need to get to Slack or email, it's on workspace one. So whether the workspace badge says "1" or "1: Comms" or "" ... it doesn't really matter, because the keybind is muscle memory anyway. But on the flip side, because all of that is muscle memory... I might go "Where was my email at again? Workspace 1, or 2?" and having an envelope as the label makes it easier to find.

Different people have different workflows. And yes, some people are doing those things to sacrifice usability in the name of aesthetics, but some people may be GAINING usability by doing these things. People are vast and diverse.


https://store.ui.com/us/en/products/ai-key

Even this only reviews "Smart Detections" and I have smart detections turned off on my Unifi cameras, because it enables cloud AI. Having the ability to have an AI key to process detections locally would be great.

Also, having to buy extra hardware kinda stinks. Would love to be able to have a self hosted Unifi OS server that can do AI key abilities if the hardware supports it.


There is indeed a fine line between desktop environment and complete DE ecosystem.

Having spent a long time on i3wm, I learned a lot about how to build your own DE effectively. These days I'm on KDE but definitely don't just assume that I want to use the kTool for everything, I've brought a lot of things from my i3wm days with me.

