
Or popularity rather.

Yeah, so not exactly liberal democracy. It is a democracy, but it doesn't seem very liberal if the checks and balances don't work against popular policies.

I would argue that in that case, liberal democracy is an oxymoron.

Really popular policies have wide support among the population, which means that they will become law, or even an amendment to the constitution. (Most countries have something like a 3/5 supermajority requirement for changing the constitution, which is a lot more practical than the basically-as-of-now-impossible US procedure.)

At this moment, if you want to keep the "liberal" character of the country, your "checks and balances" institutions have to act in fairly authoritarian ways and invalidate laws that attracted supermajority support. What is then stopping such institutions from just ruling as they see fit? Even checks and balances need checks and balances.

Nevertheless, I would say that "liberal democracy" isn't one that can always prevent illiberal policies from being enacted. I would say that it is one that can later correct them.

Note that historically, most obvious executive encroachments of liberty (Guantanamo etc.) in the US were later overturned by new administrations.


> Really popular policies have wide support among the population, which means that they will become law, or even an amendment to the constitution

McCarthyism didn't have that much support from voters, so that isn't the issue; it never became law. The issue is that the elected representatives didn't do anything to stop it until it started drawing massive disapproval from voters.

Voters needing to massively disapprove of government abuse for the "checks and balances" to do their job means the democracy isn't working as it should: the government doesn't need to change the constitution, they just need to keep disapproval low enough to continue their illegal actions. In a true liberal democracy the checks and balances work; ministers who perform illegal acts are investigated and relieved of their duties without needing elected representatives to start that procedure.

I live in Sweden and I can't even find examples of a politician here who blatantly ignores laws and procedures and gets to stay for years. I think the two-party system is the biggest culprit, since you then need support from both parties to remove criminal politicians, but that is very difficult to get when people have to vote against their own. In a multi-party system each party is a minority, and allied parties are not friendly to each other; they gladly sink an ally to absorb their votes, since the issue was the party and not the alliance, and people won't move to the other bloc over such a thing.


Sweden supports Chat Control at the European level, even though the very principle of Chat Control is anathema to basic civic rights.

Is widespread surveillance of private communications popular with the Swedish electorate, or do people like Ylva Johansson support and even push such abominable things regardless of what actual Swedes think?

If the latter, it is not that different from what McCarthy once did, and our entire continent is in danger of this sort of paranoid dystopia being codified into law approximately forever. At least McCarthy's era was short.


Pardon me, I am looking forward to my future $0.03 cheque

But has the tradeoff of using lead solder at every joint

Lead solder hasn't been used in the US since 1986, when it was banned by the Safe Drinking Water Act.

“Lead free” isn’t zero lead.

> In 1986, Congress amended the Safe Drinking Water Act (SDWA), prohibiting the use of lead in pipes, and solder and flux on products used in public water systems that provide water for human consumption. Lead-free was defined as solder and flux with no more than 0.2% lead and pipes with no more than 8%.

> In 2011, Congress passed the RLDWA, which revised the definition of lead free and took effect in 2013. Lead free was now defined as the lead content of the wetted surfaces of plumbing products as a weighted average of no greater than 0.25% for products that contact water intended for consumption, and 0.2% for solder and flux.

https://www.workingpressuremag.com/epa-final-lead-free-rulin...

A lot of municipal water systems have made more recent (but by no means required) improvements to the water itself to "coat" the lead in supply lines, going beyond just pH control to additives like orthophosphate. Most of this just in the last decade or so.

For Chicago, it’s an active project

> Polyphosphate is being removed because recent studies have shown that it may negatively impact lead corrosion control.

> Polyphosphate was initially added with the orthophosphate to mask discoloration of the water from metals such as iron or manganese.

https://villageofalsip.org/Chicago%20Department%20of%20Water...


> Lead-free was defined as ... pipes with no more than 8%.

This thread has had a lot of twists and turns, but I wasn't expecting this one. Yikes.


A lot of brass fittings and fixtures have lead in them. It makes them easier to machine.

I wouldn’t be surprised if a lot of no-name Amazon and aliexpress plumbing fixtures still have a lot of lead in them. Keeps your cutting tool/machining costs down.


Even big box stores that are careful sell a lot of high lead plumbing parts - they are just marked not for potable water and sold for use with gas pipes.

Other country’s citizens leave to avoid military service, Americans leave to…avoid student loan repayments. That’s sad.


It's not sad, it's just a story. The number of Americans who actually emigrate to skip out on student debt is virtually zero.


Like, I'm sure it happens, but it's trivial in terms of overall impact.

immigrating to another country, even as an American with an in-demand degree, is not easy.


If people leave your country because it sucks, that’s a governance problem, not a citizen problem. People are mobile, if they have options and can do better, they should take them. Life is short and you only live once. Debt is just accounting, it is a shared delusion like a currency.

We set these citizens up to fail, and they’re the bad guys? Hardly. If you can escape the torment nexus, go, don’t look back. The torment nexus does not care about you. “The purpose of the system is what it does.”


If people enter your country because it's the greatest country in the world, that's a governance success. The USA has its share of problems and there are things we should do to reduce education costs. But overall the USA has a positive net migration rate with every other major country. People are coming here for a reason.


We've got a lot of governance problems. Our system allows college prices to get really high, mostly because 18-year-old kids can sign up for these guaranteed loans. They don't know what they are signing up for, and colleges just want students in seats. People don't know whether their major has good employment prospects or what the average salary is, and how that will affect their loan repayment.

America chose to do this, banks make big money from the loans. Colleges make big money from students.

We have a similar system with medical care. We have regulatory capture of our medical system by the drug sellers, medical groups, etc. We pay way, way more than other countries, with worse outcomes. And the reaction of half the country, the Republicans, is that we'll fix this by eliminating a lot of coverage for poor people. Democrats try to control costs, cover more poor people, and get on a better trajectory, and it's demonized as destroying democracy. Meanwhile, our recently passed BBB bill takes billions out of Medicaid, i.e. coverage for poor people, many of whom voted for Trump. This whole thing is disgusting. I'm angry because of the loss of potential here, just like for student debt.

My dad says colleges are corrupt because they "waste money on DEI things" and that's why they have high costs (sadly not making this up). I try to explain that college is not subsidized like it used to be when he went to college in the '60s. Similar thing with young people not being able to afford housing, and not having kids as much.


Oh boy, it actually gets so much worse than you mention. If you have W2 income, the cost of college is uniquely high to _you_. For very wealthy people who claim a business loss on their tax returns (through the massive spiderweb of itemized deductions), college, even the Ivy League, is free or almost free.


Bullshit. FAFSA covers assets as well as income so wealthy families with zero taxable income aren't getting need-based financial aid. Unless they lie on the application, which is criminal fraud.


Sorta. You exclude your primary residence, retirement funds, and any college savings accounts. I def know people with millions in the first two, retired, with their kids getting full rides and food stamps.


Not bullshit at all; there are loopholes riddled throughout the FAFSA system that allow assets to be tucked away out of scope. It's not fraud, all completely legal and purpose-built to support the types of families who can afford a financial advisor on retainer.


I mean yeah, nothing about my statement disagrees with your point.


Pretty pessimistic, frankly. Management at all levels is pushing to nearshore SWE labor; meanwhile, we're training AI as a long-term solution to fill the skill gap in that same nearshore labor pool. We were hired to be smart people, and it's frankly an insult to gaslight us into believing it's simply because it makes us more productive. Of course there's a push for it with the intent to replace us. Why else would it be forced down our throats?

I’m looking for a way out of tech because of it.


> Of course there’s a push for it with the intent to replace us. Why else would it be forced down our throats?

I still don’t see this, if only for the managerial instinct for ass-covering.

If something really matters and a prod showstopper emerges, can those non-technical supervisory managers be completely, absolutely, 100% sure the AI can fix the code and bring everything back up? If not, the buck would surely stop with them and they would be utterly helpless in that situation. The Board waiting on conference call while they stare at a pageful of code that may as well be written in ancient Sumerian.

I can see developers taking a higher level role and using these tools, but I can’t really see managers interfacing directly with AI code generation. Unless they are completely risk tolerant, and you don’t get far up the greasy pole with those tendencies.


Savvy management know how to insulate themselves from such accountability. I've never seen anyone held accountable for large-scale f-ups.


How, exactly? If a production showstopper needs to be worked on immediately.

If the development is between non-technical management and some AI tool they have been using, how do they insulate themselves from being accountable to their superiors? Who is responsible, and who gets to fix it?


Directors at my company (large mid-tier tech) are being _asked_ to write AI code. Below that level, it’s a mandate, and anyone who doesn’t ship regularly using AI will be PIP'd and fired. Don’t make me explain how it makes sense, but that’s what we’re dealing with.


The scenario I was responding to is where developers are no longer employed because the managers interface directly with AI. It sounds like your company still has developers around.

If your company removes all developers and lets the managers vibe code instead, I’ll get the popcorn in for the next outage.


Any EEs who can comment on at what point we just flip the architecture over so the GPU PCB is the motherboard and the CPU/memory lives on a PCIe slot? It seems like that would also have some power delivery advantages.


> at what point we just flip the architecture over so the GPU PCB is the motherboard and the CPU/memory

Actually, the Raspberry Pi (appeared 2012) was based on a SoC with a big, powerful GPU and a small, weak supporting CPU. The board booted the GPU first.


If you look at any of the Nvidia DGX boards, it's already pretty close.

PCIe is a standard/commodity so that multiple vendors can compete and customers can save money. But at 8.0 speeds I'm not sure how many vendors will really be supplying it; there are already only a few doing SerDes this fast...


There are companies that specialize in memory controller IP that everyone else uses, including large semi companies like Intel.

The IP companies are the first to support new standards and make their money selling to Intel etc., allowing Intel or whoever to take their time building higher-performance IP.


These days you can buy any standard as soft IP from Synopsys or Cadence. They take their previous SerDes and modify it to meet the new standard. They have thousands of employees across the globe just doing that.


Isn't it about latency as well with DGX boards, vs. PCIe? You can only fit so much RAM on a board that will realistically be plugged into a slot.


Most current DGX server assemblies are stacked and compression-fit, much higher density and more amenable to liquid cooling.

https://www.servethehome.com/micron-socamm-memory-powers-nex...


Has the DGX actually shipped anywhere yet?


Do you mean the new one? The older ones have been around for so long you can buy off-leases of them: https://www.etb-tech.com/nvidia-dgx-1-ai-gpu-server-2-x-e5-2...


Good to see I’m not the only person that’s been thinking about this. Wedging gargantuan GPUs onto boards and into cases, sometimes needing support struts even, and pumping hundreds of watts through a power cable makes little sense to me. The CPU, RAM, these should be modules or cards on the GPU. Imagine that! CPU cards might be back..


It's not like CPUs aren't getting higher wattage as well. Both AMD and Intel have roadmaps for 800W CPUs.

At 50-100W for IO, that only leaves about 11W per core on a 64-core CPU.


800 watt CPU with a 600 watt GPU, I mean at a certain point people are going to need different wiring for outlets right?


This is a legitimate problem in datacenters. They're getting to the point where a single 40(ish)OU/RU rack can pull a megawatt in some hyperdense cases. The talk of GPU/AI datacenters consuming inordinate amounts of energy isn't just because the DCs are yuge (although some are), but because the power draw per rack unit of space is going through the roof as well.

On the consumer side of things, where the CPUs are branded Ryzen or Core instead of Epyc or Xeon, a significant chunk of that power consumption is from the boosting behavior they implement to pseudo-artificially[0] inflate their performance numbers. You can save hugely (easily 10%, often closer to 30%, though it really depends on the exact build/generation) on energy by doing a very mild undervolt and limiting boosting behavior on these CPUs while keeping the same base clocks. Intel 11th through 14th gen CPUs are especially guilty of this, as are most Threadripper CPUs. You can often trade single-digit or even negligible performance losses (depends on what you're using it for and how much you undervolt/underclock/restrict boosting) for double-digit reductions in power usage. This phenomenon also holds for GPUs when compared across the enterprise/consumer divide, but not quite to the same extent in most cases.

Point being, yeah, it's a problem in data centers, but honestly there's a lot of headroom still even if you only have your common American 15A@120VAC outlets available before you need to call your electrician and upgrade your panel and/or install 240VAC outlets or what have you.

0: I say pseudo-artificial because the performance advantages are real, but unless you're doing some intensive/extreme cooling, they aren't sustainable or indicative of nominal performance, just a brief bit of extra headroom before your cooling solution heat-soaks and the CPU/GPU's throttle themselves back down. But it lets them put the "Bigger number means better" on the box for marketing.
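
For rough intuition on why a mild undervolt pays off so much: dynamic (switching) power scales roughly with C·V²·f, so voltage reductions count twice. A back-of-envelope sketch in Python; the 5% undervolt and 7% boost-cap figures are illustrative assumptions, not measurements of any particular chip:

    # Dynamic power ~ C * V^2 * f (capacitance C held constant).
    # The scaling factors below are illustrative assumptions, not measurements.
    def relative_power(v_scale, f_scale):
        """Power relative to stock voltage/clock settings."""
        return (v_scale ** 2) * f_scale

    print(relative_power(0.95, 1.00))  # ~0.90: a 5% undervolt alone saves ~10%
    print(relative_power(0.95, 0.93))  # ~0.84: add a ~7% boost cap, save ~16%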


It's not just about better numbers. Getting high clocks for a short period helps in a lot of use cases - say random things like a search. If I'm looking for some specific phrase in my codebase in vscode, everything spins up for the second or two it takes to process that.

Boosting from 4 to 5.5 GHz for that brief period shaves a fraction of a second; repeat that for any similar operation and it adds up.


Yes, I figured that much would be obvious to this crowd. Thus the "pseudo" part.

The point isn't that there isn't a benefit; it's that past a certain point you start to pay exponentially more energy per 0.1 GHz. Furthermore, AMD and Intel were exceptionally aggressive about it in the generations I outlined (for AMD, the 7000-series Ryzens specifically), leading to instability issues on both platforms, due either to the spec itself being too aggressive or to AIB partners improperly implementing that spec, as the headroom that typically exists from factory stock to push clocks/voltages further was no longer there in some silicon (some of it comes down to silicon lottery and manufacturing defects/mistakes, Intel's oxidation issues for example, but we're really getting into the weeds on this already).

And to clarify: I'm talking specifically about Intel Turbo Boost and AMD's PBO boosting technologies, where they boost well over base clocks, as distinct from the general dynamic clocking behavior where clocks drop well below base when not in (heavy) use.


> They're getting to the point where a single 40(ish)OU/RU rack can pull a megawatt in some hyperdense cases.

Switch is designing for 2MW racks now.


unless it’s an Apple data center, populated by the server version of the latest ultra chips…


What makes you think that?

They're small and efficient, which means large numbers of them can be packed into small spaces, resulting in a similarly large power draw per volume of equipment in the DC. This is especially true with Apple's "UltraFusion" tech, which they're developing as a quasi-analog to Nvidia's Grace (Hopper) superchips.


Because I worked on them, before retiring. Yes they’re packed in; no they still don’t draw the same levels of power.


Didn't say they draw the same; I openly acknowledge they're more efficient. I said power use per rack unit is trending up. This is true of Apple DCs as well, especially with their new larger/fused chip initiatives. It's a universal industry trend, especially with AI compute, and Apple is not immune.


Let me rephrase to: No, they (collectively) don’t draw the same levels of power. I know what amperage is drawn by each rack. It’s nowhere near as much as was drawn by the older intel-based racks.

And yes, they’re packed densely.


at that point, they're powered by a bicycle.


How safe is undervolting? Can it cause stability issues?


Far safer than overvolting.

Changing settings can lead to stability issues no matter which way you push it, frankly. If you don't know what you're doing or aren't comfortable with it, it's probably not worth it.


At least with U.S. wiring we have 15 amps at 120 volts. For continuous power draw you're supposed to stay at 80% of the breaker rating, so let's say you have 1440 watts of AC power you can safely draw continuously. Power supplies built on MOSFETs seem to peak at around 90% efficiency, but you could consider something like the Corsair AX1600i using gallium nitride transistors, which supposedly can handle up to 1600 watts at 94% efficiency.

Apparently we still have room, as long as you don't run anything else on the same circuit. :)
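
Spelling the arithmetic out (a minimal sketch; the 94% figure is the AX1600i efficiency quoted above):

    # Continuous draw budget on a common US 15 A / 120 V branch circuit.
    volts, amps = 120, 15
    continuous_ac = volts * amps * 0.80   # 80% rule for continuous loads
    print(continuous_ac)                  # 1440 W at the wall
    print(continuous_ac * 0.94)           # ~1354 W usable on the DC side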


You can always have an electrician install a larger breaker for a particular circuit. I did that with my "server" area in my study, which was overkill cuz I barely pull 100w on it. But it cost nearly zero extra since he was doing a bunch of other things around the house anyway.


> You can always have an electrician install ...

If you own the house, sure. Many people don't.


You need to increase the wire diameter as well if you go that route. Running larger breakers on 10A or 15A wiring is a recipe for bad stuff.


In older houses, made from brick and concrete, that can be tricky to do. The only reason I can have my computer on a separate circuit is because we could repurpose the old three phase wiring for a sauna we ripped out. If that had not been the case, getting the wires to the fuse board would have been tricky at best.

New homes are probably worse than old homes, though. The wires are just chucked in the space between the outer and inner walls; there's basically no chance of replacing them or pulling new ones. Old houses at least frequently have conduit in which the wires run.


Larger breaker and thicker wires!


I thought you only needed thicker wires for higher amps? Should go without saying, but I am not a certified electrician :-)

I only have a PhD from YouTube (Electroboom)


The voltage is always going to be the same because the voltage is determined by the transformers leading to your service panel. The breakers break when you hit a certain amperage for a certain amount of time, so by installing a bigger breaker, you allow more amperage.

If you actually had an electrician do it, I doubt they would've installed a breaker if they thought the wiring wasn't sufficient. Truth is that you can indeed get away with a 20A circuit on 14 AWG wire if the run is short enough, though 12 AWG is recommended. The reason for this is voltage drop; the thinner gauge wire has more resistance, which causes more heat and voltage drop across the wire over the length of it, which can cause a fire if it gets sufficiently hot. I'm not sure how much risk you would put yourself in if you were out-of-spec a bit, but I wouldn't chance it personally.
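
To put numbers on the voltage-drop point, here's a minimal sketch using commonly published resistance values for solid copper at roughly room temperature; the 50 ft run and 20 A load are assumed examples:

    # Voltage drop = round-trip resistance of the run times the current.
    OHMS_PER_1000FT = {14: 2.525, 12: 1.588}  # solid copper, ohms per 1000 ft

    def voltage_drop(awg, one_way_ft, amps):
        r = OHMS_PER_1000FT[awg] * (2 * one_way_ft) / 1000  # out and back
        return r * amps

    for awg in (14, 12):
        vd = voltage_drop(awg, one_way_ft=50, amps=20)
        print(f"{awg} AWG: {vd:.1f} V drop ({vd / 120:.1%} of 120 V)")

On a 50 ft run at 20 A, 14 AWG drops about 5 V (4.2%) versus about 3.2 V (2.6%) for 12 AWG, which is why the thicker gauge is recommended.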


Could you not just run a 240 volt outlet on existing wiring built for 110v? Just send L1 and L2 on the existing hot/neutral?


You can, 240V on normal 12/2 Romex is fine. The neutral needs to be "re-labeled" with tape at all junctions to signify that it's hot, and then this practice is (generally) even code compliant.

However! This strategy only works if the outlet was the only one on the circuit, and _that_ isn't particularly common.


Although this exists, as a layperson, I've rarely seen it. There is the NEMA 6-15R receptacle type, but I have literally none of those in my entire house, and I've really never seen them. Apparently they're sometimes used for air conditioners. Aside from the very common 5-15R, I see 5-20R (especially in businesses/hospitals), and 14-30R/14-50R for ranges and dryers. (I have one for my range, but here in the Midwest electric dryers and ranges aren't as common, so you don't always come across these; we have LNG run to most properties.) So basically, I just really don't see a whole lot of NEMA 6 receptacles. The NEMA 14 receptacles, though, require both hots and the neutral, so in a typical U.S. service panel they require a special breaker that takes up two slots, so definitely not as simple a retrofit.

(Another outlet type I've seen: I once saw a NEMA 7 277V receptacle. I think you get this from one phase of a 480V three-phase system, which I understand is run to many businesses.)


If you drive an electric car in a rural area you might want to carry around 6-30 and 6-50 adapters because most farms have welders plugged into those and that can give you a quick charge. And also TT-30 and 14-50 adapters to plug in at campgrounds.


NEMA 6 is limiting because there’s no neutral, so everything in the device has to run on 240V. Your oven and dryer want 120V to run lights and electronics, so they use a 14 (or 10 for older installs) which lets them get 120V between a hot and the neutral.

Oddly, 14-50 has become the most common receptacle for non-hardwired EV charging, which is rather wasteful since EV charging doesn’t need the neutral at all. 6-50 would make more sense there.


Reasons why it's nice to have a 14-50 plug in your garage rather than a 6-50:

1: when an uncle stops by for a visit with his RV he can plug in.

2: the other outlets in your garage are likely on a shared circuit. The 14-50 is dedicated, so with a 14-50 to 5-15 adapter you can more safely plug in a high wattage appliance, like a space heater.


1 is why we ended up with 14-50 as the standard, too. Before there was much charging infrastructure, RV parks were a good place to get a semi-fast charge, and that meant a charger with a 14-50 plug.

2 is something I never thought of, I’ll have to keep that in mind.


NEMA 6s are extremely common in barns and garages for welders. 6-50 is more common for bigger welders but I’ve also seen 6-20s on repurposed 12/2 Romex as the parent post was discussing used for cheap EV retrofits, compressors, and welders.


5-20R/6-20R is also somewhat commonly used by larger consumer UPS for your computer, router, etc.


Without upgrading the wiring to a thicker gauge? That's not code compliant and is likely to cause a fire.


Sorry just to specify, it was more like a 20 amp I think (I will verify), it wasn't like I was going way higher.

I don't remember whether he ran another wire though. It was 5 years ago. Maybe I should not be spreading this anecdote without complete info.

He was a legit electrician that I've worked with for years, specifically because he doesn't cut corners. So I'm sure he did The Right Thing™.


If this is north america we're talking about, then 14 gauge is the standard for 120V 15A household circuits. By code, 20A requires 12 gauge. You'll notice the difference right away, it's noticeably harder to bend. Normally a house or condo will only have 15A wires running to circuits in the room. It's definitely not a standard upgrade, the 12 gauge wire costs a lot more per foot, no builder will do it unless the owner forks over extra dough.

Unless you performed the upgrade yourself or know for a fact that the wiring was upgraded to 12 gauge, it's very risky to just upgrade the breaker. That's how house fires start. It's worth it to check. If you know which breaker it is, you can see the gauge coming out. It's usually written on the wire.


I was actually under the impression that it is allowed depending on the length of the conductor, but it seems you are right. NEC Table 310.15(B)(16) shows the maximum allowed ampacity of 14 AWG cable is 20 amperes, BUT... there is a footnote that states the following:

> * Unless otherwise specifically permitted elsewhere in this Code, the overcurrent protection for conductor types marked with an asterisk shall not exceed 15 amperes for No. 14 copper, 20 amperes for No. 12 copper, and 30 amperes for No. 10 copper, after any correction factors for ambient temperature and number of conductors have been applied.

I could've sworn there were actually some cases where it was allowed, but apparently not, or if there is, I'm not finding it. Seems like for 14 AWG cable the breaker can only be up to 15 amperes.
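
For reference, the footnote boils down to a small lookup; a minimal sketch with the limits taken straight from the quoted NEC text:

    # Maximum overcurrent protection per the NEC footnote above (copper).
    MAX_BREAKER_AMPS = {14: 15, 12: 20, 10: 30}  # AWG -> amps

    def breaker_ok(awg, breaker_amps):
        return breaker_amps <= MAX_BREAKER_AMPS[awg]

    print(breaker_ok(14, 20))  # False: a 20 A breaker on 14 AWG is out of spec
    print(breaker_ok(12, 20))  # True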


There is a chance he did not run new wires if he was able to ascertain that the wire gauge was sufficient to carry 20 amps over the length of the cable. This is a totally valid upgrade though it does obviously require you to be pretty sure you know the length of the entire circuit. If it was Southwire Romex, you can usually tell just by looking at the color of the sheathing on the cable (usually visible in the wallboxes.)


Where things get hairy are old houses with wiring that’s somewhere between shaky and a housefire waiting to happen, which are numerous.


As an old house owner, I can attest to that for sure. In fairness though, I suspect most of the atrocities occur in wall and work boxes, as long as your house is new enough to at least have NM sheathed wiring instead of ancient weird stuff like knob and tube. That's still bad but it's a solvable problem.

I've definitely seen my share of scary things. I have a lighting circuit that is incomprehensibly wired and seems to kill LED bulbs randomly during a power outage; I have zero clue what is going on with that one. Also, often times opening up wall boxes I will see backstabs that were not properly inserted or wire nuts that are just covering hand-twisted wires and not actually threaded at all (and not even the right size in some cases...) Needless to say, I should really get an electrician in here, but at least with a thermal camera you can look for signs of serious problems.


Yeah, but it ain't nothing that microwaves, space heaters, and hair dryers haven't already given a run for their money.


Hair dryers and microwaves only run for a few minutes, so even if you do have too much resistance this probably won't immediately reveal a problem. A space heater might, but most space heaters I've come across actually seem to draw not much over 1,000 watts.

And even then, even if you do run something 24/7 at max wattage, it's definitely not guaranteed to start a fire even if the wiring is bad. Like, as long as it's not egregiously bad, I'd expect that there's enough margin to cover up less severe issues in most cases. I'm guessing the most danger would come when it's particularly hot outside (especially since then you'll probably have a lot of heat exchangers running.)


That's still not much for wiring in most countries. A small IKEA consumer oven alone is 230V x 16A = 3680W. Those GPUs and CPUs only consume that much at max usage anyway. And those CPUs are uninteresting for consumers; you only need a few watts for a single good core, like a Mac Mini has.


> And those CPUs are uninteresting for consumers, you only need a few Watts for a single good core, like a Mac Mini has.

Speak for yourself. I’d love to have that much computer at my disposal. Not sure what I’d do with it. Probably open Slack and Teams at the same time.


> Probably open Slack and Teams at the same time.

Too bad it feels like both might as well be single threaded applications somehow


I could use KVM and open a bunch of instances of each.


So Europe ends up with an incidental/accidental advantage in the AI race?


All American households get mains power at 240v (I'm missing some nuance here about poles and phases, so the electrical people can correct my terminology).

It's often used for things like ACs, Clothes Dryers, Stoves, EV Chargers.

So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall.


To get technical -- US homes get two phases of 120v that are 180 degrees out of phase with each other, referenced to the neutral. Using either phase and the neutral gives you 120v. Using the two out-of-phase 120v phases together gives you a difference of 240v.

https://appliantology.org/uploads/monthly_2016_06/large.5758...
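
For the curious, the phasor arithmetic behind that diagram, as a minimal sketch (120 V is the nominal RMS leg voltage):

    import cmath

    # Two 120 V RMS legs, 180 degrees apart, both referenced to neutral.
    leg_a = cmath.rect(120, 0)
    leg_b = cmath.rect(120, cmath.pi)

    print(abs(leg_a))          # 120.0 -> leg to neutral
    print(abs(leg_a - leg_b))  # 240.0 -> leg to leg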


Even more technical: we don't have two phases, we have one phase that's split in half. I hate it because it makes things confusing.

Two-phase power is not the same as split phase (there are basically only a few weird older installations of 2-phase still in use).


Yeah that's right. The grid is three phases (as it is basically everywhere in the world), and the transformer at the pole splits one of those in half. Although, what are technically half-phases are usually just called "phases" when they're inside of a home.


Relevant video from Technology Connections:

"The US electrical system is not 120V" https://youtu.be/jMmUoZh3Hq4


That's such a great video, like most of his stuff.


> So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall.

It'd be an all-new wire run (120 is split at the panel; we aren't running 240v all over the house), and currently electricians are at a premium, so it'd likely end up costing a thousand-plus to run that if you're using an electrician, more if there's not clear access from an attic/basement/crawlspace.

Though I think it's unlikely we'll see an actual need for it at home; I imagine an 800W CPU is going to be server class and rare-ish to see in home environments.


> and currently electricians are at a premium so it'd likely end up costing a thousand+

I got a quote for over 2 thousand to run a 240v line literally 9 feet from my electrical panel across my garage to put in an EV charger.

Opening up an actual wall and running it to another room? I can only imagine the insane quotes that'd get.


I kinda suspect there’s a premium once you mention "EV", since you’re signalling that you’re affluent enough to afford an EV and have committed to spending the money required to get EV charging at home working, etc. (Kinda like getting a quote for anything wedding related.)

I’m getting some wiring run about the same distance (to my attic, fished up a wall, with moderately poor access) for non-EV purposes next week and the quote was a few hundred dollars.


The trick is to request a 240v outlet for a welder; it brings the price down to $400 or so.

Running to another room will usually be done (at least in the USA) through the attic or crawlspace. I got it done a few months ago to get a dedicated 20A circuit (for my rack) in my work room; the cost was around $300-400 as well.


Labor charges alone are going to be higher than that in Seattle. Just having someone come out on a call is going to be 150-200. An independent electrician who owns their own business might be 100-150/hr; if they're part of a larger company, I'd expect even more than that.

Honestly I wouldn't expect to pay less than $1000 for the job without any markups.


I live in the Bay Area. I have some doubts that Seattle is going to be more expensive.


Handyman prices around here are $65 to $100/hr, and there is a huge wait list for the good ones.

I've gotten multiple quotes on running the 240v line; the labor breakdown was always over $400 alone. Just having someone show up to do a job is going to be almost $200 before any work is done.

When I got quotes from unlicensed people, those came in around $1000 even.


In Bay Area subreddits there are multiple posts talking about EV charger vs. welder outlets and how that drops the price from 2000 to 500 or so (depending on complexity).

Another thing, which is good long term, is to find a local electrician (plumber, etc.) who doesn't charge for service calls and has reasonable pricing.

No idea about handyman pricing; never used any. For electrical/water/roofing I prefer somebody who is licensed/insured/bonded/etc.


I don't think many people would want some 2kW+ system sitting on their desk at home anyways. That's quite a space heater to sit next to.


I should look at the label (or check with a meter..), but when I run my SGI Octane with its additional XIO SCSI board in active use, the little "office" room gets very hot indeed.


Also the noise from the fans.


If we're counting all the phases then european homes get 400V 3-phase, not 240V split-phase. Not that typical residential connections matter to highend servers.
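
The 230/400 relationship is just the √3 line-to-line factor of a three-phase system; a quick sketch:

    import math

    phase_to_neutral = 230                         # V, one European phase
    line_to_line = phase_to_neutral * math.sqrt(3)
    print(round(line_to_line))                     # ~398 -> nominal "400 V"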


It depends on the country, in many places independent houses get a single 230V phase only.


Well yes, it's possible, but often $500-1000 to run a new 240v outlet, and that's to a garage for an EV charger. If you want an outlet in the house, I don't know how much wall people want to tear up, plus the extra time and cost.


Sure yeah, I was just clarifying that if the issue is 240v, etc, US houses have the feed coming in. Infrastructure-wise it's not an issue at all.


Consumers with desktop computers are not winning any AI race anywhere.


In residential power delivery? Yes.

In power cost? No.

In literally any other way? Also no.


In the Nordics we're on 10A for standard wall outlets so we're stuck on 2300W without rewiring (or verifying wiring) to 2.5mm2.

We rarely use 16A but it exists. All buildings are connected to three phases so we can get the real juice when needed (apartments are often single phase).

I'm confident personal computers won't reach 2300W anytime soon though


In Italy we also have 10A and 16A (single phase). In practice however almost all wires running in the walls are 2.5 mm^2, so that you can use them for either one 16A plug or two adjacent 10A plugs.


In the Nordics (I'm assuming you mean Nordic countries) 10A is _not_ standard. Used to be, some forty years ago. Since then 16A is standard. My house has a few 10A leftovers from when the house was built, and after the change to TN which happened a couple of decades ago, and with new "modern" breakers, a single microwave oven on a 10A circuit is enough to trip the breaker (when the microwave pulses). Had to get the breakers changed to slow ones, but even those can get tripped by a microwave oven if there's something else (say, a kettle) on the same circuit.

16A is fine, for most things. 10A used to be kind of ok, with the old IT net and old-style fuses. Nowadays anything under 16A is useless for actual appliances. For the rest it's either 25A and a different plug, or 400V.


Let me rephrase: 10A is the effective standard that's been in use for a long, long time; if you walk into a building you can assume it has 10A breakers.

On new installations you can choose 10A or 16A so if you're forward thinking you'd go 16 since it gives you another 1300 watts to play with.


There already are different outlets for these higher power draw beasts in data centers. The amount of energy used in a 4u "AI" box is what an entire rack used to draw. Data centers themselves are having to rework/rewire areas in order to support these higher power systems.


You can up the voltage to 240 and re-use the wiring (with some minor mods to the ends) for double the power. The insulation class should be sufficient. That makes good sense anyway. You may still have an issue if the power supply can't handle 240V/60Hz, but most of the ones I've used would have worked. Better check with the manufacturer to be sure though. It's a lot easier and faster than rewiring.


A simple countertop electric kettle is 2000W, so a 1500W PC sounds like no big deal.


Kettles in the US are usually 1500W, as the smallest branch circuits in US homes support 15A at 120V and the general rule for continuous loads is to be 80% of the maximum.


Ah, 16A at 230v (3680W) is a normal circuit here. Most appliances work with that; the common exceptions are electric cooking (using two circuits or 380v two-phase) and EV charging.


True but kettles rarely run for very long.


But computers do, which is why I included that context. You don't really want to build a consumer PC >1500W in the US, or you'd need to start changing the plug to patterns that require larger branch circuits.


Kettles and microwaves are usually 1100 watts and lower, but space heaters and car chargers can be 1500 watts and run for long periods of time.


Microwave ovens have a different issue, which I found when I upgraded my breaker board to a modern one in my house. The startup pulse gives a type of load which trips a standard A-type 10A breaker (230V). Had to get those changed to a "slow" type, but even that will trip every blue moon, and if there's something else significant on the same circuit the microwave oven will trip even so, every two weeks or so (for the record, I have several different types of microwave ovens around the house, and this happens everywhere there's a 10A circuit).

The newer circuits in the house are all 16A, but the old ones (very old) are 10A. A real pain, with new TN nets and modern breakers.


Microwave ovens top out around 1100-1250W output from a ~1500W input from the wall. Apparently there's a fair bit of energy lost in the power supply and magnetron that doesn't make it into the box where the food is.


You don't keep the kettle constantly running, unlike a PC.



Laughs in 230V (sorry).


ʰᵉₕₑheʰᵉₕₑhe in 400V


It is mostly an issue in countries with 120V mains (I know that in the US 240V outlets exist though). In France for example it is required that standard plugs must be able to deliver at least 16A on each outlet, at the 230V used here, we get 3600W of power, that’s more than enough.


Yes, and this is something I've been thinking about for a while.

A computer is becoming a home appliance in the sense that it will soon need 20A wiring and plugs, but it should move to 220/240v anyway (and change the jumper on your standard power supply).


But all of the most-ridiculous hyperscale deployments, where bandwidth + latency most matter, have multiple GPUs per CPU, with the CPU responsible for splitting/packing/scheduling models and inference workloads across its own direct-attached GPUs, providing the network the abstraction of a single GPU with more (NUMA) VRAM than is possible for any single physical GPU to have.

How do you do that, if each GPU expects to be its own backplane? One CPU daughterboard per GPU, and then the CPU daughterboards get SLIed together into one big CPU using NVLink? :P


GPU as motherboard really only makes sense for gaming PCs. Even there SXM might be easier.


No, for a gaming computer what we need is the motherboard and gpu to be side by side. That way the heat sinks for the CPU and GPU have similar amounts of space available.

For other use cases like GPU servers it is better to have many GPUs for every CPU, so plugging a CPU card into the GPU doesn’t make much sense there either.



It’s always going to be a back and forth on how you attach stuff.

Maybe the GPU becomes the motherboard and the CPU plugs into it.


And the memory should be an onboard module on the CPU card; Intel/AMD should replicate what Apple did with unified memory on the same ring bus. Lower latency, higher throughput.

It would push performance further, although companies like Intel would bleed the consumer dry with it: a certain i5-whatever CPU with 16 gigs of onboard memory could be insanely priced compared to what you'd pay for add-on memory.


That would pretty much make both Intel and AMD start market segmentation by CPU core + memory combination. I absolutely do not want that.


We're already there. That's what a lot of people are using DPUs for.

An example: this is storage instead of GPUs, but since the SSDs were PCIe NVMe, it's pretty nearly the same concept: https://www.servethehome.com/zfs-without-a-server-using-the-...


To continue the ServeTheHome links, https://www.servethehome.com/microchip-adaptec-smartraid-430...

PCI-e Networks and CXL are the future of many platforms... like ISA backplanes.


Yep, I have a lot of experience with CXL devices and networked PCIe/NVMe (over Eth/IB) fabrics, and with deploying "headless"/"micro-head" compute units which are essentially just a pair of DPUs on a PCIe multiplexer (basically just a bunch of PCIe slots tied to a PCIe switch or two).

That said, my experience in this field is more with storage than GPU compute, but I have done some limited hacking about in the GPGPU space with that tech as well. Really fascinating stuff (and often hard to keep up with, making sure every part in the chain supports the features you want to leverage, not to mention going down the PCIe root topology rabbit hole and dealing with latency/trace-length/SnR issues with retimers vs. muxers etc.).

It's still a nascent field that's very expensive to play in, but I agree it's the future of at least part of the data infrastructure field.

Really looking forward to finally getting my hands on CXL3.x stuff (outside of a demo environment.)


EE here. There's no reason not to deliver power directly to the GPU by using cables. I'm not sure it's solving anything.

But you are right, there's no hierarchy in these systems anymore. Why do we even call something a motherboard? It's just a bunch of interconnected chips.


Can I just have a backplane? Pretty please?


I've wondered why there hasn't been a desktop with a CPU+RAM card that slots into a PCIe x32 slot (if such a thing could exist), or maybe dual x16 slots, and the motherboard could be a dumb backplane that only connected the other slots and distributed power, and probably be much smaller.


Those exist; they are used for risers ("vertical mount GPU brackets, for dual GPU" equivalent for servers, where they make the cards flat again).


PCIe x32 actually exists, at least in the specification. I have never seen a picture of a part using it.


Retimers.


Sockets (and especially backplanes) are absolutely atrocious for signal integrity.


I guess if it's possible to have 30cm PCIe 5 riser cables, it should be possible to have a backplane with traces of similar length.


Cables are much better, sadly; so much so that they've started using cables to jump across the server mainboard in places.


VMEBus for the win! (now VPX...)


The hot stuff nowadays is µTCA: https://www.picmg.org/openstandards/microtca/


If I remember correctly the military / aerospace shy away from this spec because the connector with the pins is on the backplane, with the sockets on the cards.

So if you incorrectly insert a card and bend a pin you're in trouble.

VPX has the sockets on the backplane so avoids this issue, if you bend pins you just grab another card from spares.

This may have changed since I last looked at it.

The telecoms industry definitely seems to favour TCA though.


I don't know, I work in particle physics and here µTCA is all the rage nowadays.


Yes, for fuck's sake, this is the only way forward. It gives us the ultimate freedom to do whatever we want in the future. Just make everything a card on the bus and quit with all this hierarchy nonsense.


Wouldn't that mean a complete mobo replacement to upgrade the GPU? GPU upgrades seem much more rapid and substantial compared to CPU/RAM upgrades. Each upgrade would now mean taking out the CPU/RAM and other cards vs. just replacing the GPU.


GPUs completely dominate the cost of a server, so a GPU upgrade typically means new servers.


Agree - newer GPU likely will need faster PCIe speeds too.

Kinda like RAM - almost useless in terms of “upgrade” if one waits a few years. (Seems like DDR4 didn’t last long!)


> GPU upgrades seem much more rapid and substantial compared to CPU/RAM.

I feel like for the last five years I’ve been hearing about people selling five-to-ten-year-old GPUs for sometimes as many dollars as they bought them for, and about people choosing to stay on 10-series NVIDIA cards (2016) because the similar-cost RTX 30-, 40-, or 50-series was actually worse, since the effort and expense had gone into parts of the chips no one actually used. Dunno, I don’t dGPU.


Yes, I agree, let's bring back the SECC-style CPUs from the Pentium era. I've still got my Pentium II (with MMX technology).


And limit yourself to only one GPU?

Also CPUs are able to make use of more space for memory, both horizontally and vertically.

I don't really see the power delivery advantages, either way you're running a bunch of EPS12V or similar cables around.


Personally I hope this point comes after we realise we don't need 1kW GPUs doing a whole lot of not much useful


Figure out how much RAM, L1-3|4 cache, integer, vector, graphics, and AI horsepower is needed for a use-case ahead-of-time and cram them all into one huge socket with intensive power rails and cooling. The internal RAM bus doesn't have to be DDRn/X either. An integrated northbridge would deliver PCIe, etc.


I wonder how many additional layers would be required in the PCB to achieve this, and how it would dramatically affect the TDP; the GPUs aren't the only components with heat tolerance and capacitance concerns.


It is not an EE problem. It is an ecosystem problem. You need a whole catalog of compatible hardware for this.


The concept exists now. You can "reverse offload" work to the CPU.


Isn't that what has kinda sorta basically happened with Apple Silicon?


And AMD Strix Halo.


GPU + CPU on the same die, RAM on the same package.

A total computer all-in-one. Just no interface to the world without the motherboard.


One possible advantage of this approach that no one here has mentioned yet is that it would allow us to put RAM on the CPU die (allowing for us to take advantage of the greater memory bandwidth) while also allowing for upgradable RAM.


I think you'd want to go the other way.

GPU RAM is high speed and power hungry, so there tends to not be very much of it on the GPU card. Part of the reason we keep increasing the bandwidth is so the CPU can touch that GPU RAM at the highest speeds.

It makes me wonder though if a NUMA model for the GPU is a better idea. Add more lower-power, lower-speed RAM onto the GPU card. Then let the CPU preload as much data as possible onto the card. Then instead of transferring textures through the CPU onto the PCI bus and into the GPU, why not just send a DMA request to the GPU and ask it to move data from its low-speed memory to its high-speed memory?

It's a whole new architecture but it seems to get at the actual problems we have in the space.


Isn't what you described Direct Storage?


You're still running through the PCIe slot and its bandwidth limit. I'm suggesting you bypass even that and put more memory directly on the card.


So an additional layer, slower and larger than global GPU memory?

I believe that's kind of what Bolt Graphics is doing with the DIMM slots next to the soldered-on LPDDR5: https://bolt.graphics/how-it-works/


Couldn’t we do that today if we wanted to?

What’s keeping Intel/AMD from putting memory on package like Apple does other than cost and possibly consumer demand?


Supply + demand, the manufacturing-capacity rabbit hole.


Bring back the S100 bus and put literally everything on a card. Your motherboard is just a dumb bus backplane.


We were moving that way, sorta, with Slot 1 and Slot A.

Then that became unnecessary when L2 cache went on-die.


You have the unicorn health plan it appears


Actually, I mistyped and can't edit: I have co-pays and meant to type $0 co-insurance.


Better: Use browsers not owned by search companies


Yeah it is interesting though from a sociological perspective that there seems to be a worldwide pullback from globalism. Did Brexit or Trump kick this off?


For a lot of people, globalism = elitism. It provides a sort of camouflage behind which elites can organise things in the way that best suits them whilst at the same time proclaiming how virtuous they are.


Yet when the populists gain control that’s precisely what they do.


What's populism again? That's a democratic mandate you don't like, right?


Populism is clearly what you're in favor of, but you're too stupid to know that I guess.


If I disagree with you then by definition I must be stupid, of course.


It's my sense that globalism kicked off this trend of anti-globalism.

If it benefitted more people, or more accurately if it benefitted people in a more equitable way instead of concentrating the gains in the hands of the wealthy and powerful, globally, then maybe there would be less of a pullback.

I'm not arguing for or against globalism; it has many benefits and drawbacks, but the undercurrent of opposition existed long before Trump or Brexit, as seen for example in the various GX (G8, G20, etc.) protests that took place around the world in the 2000s and 2010s, preceding the Trumpers and Brexit.

I agree the sentiment has picked up in recent years, accelerated since Covid, and that politicians are doing what politicians do, trying to get elected.


It's that grifter, wannabe-fascist-dictator types have exploited the hate people have for others to get elected or gain votes?

On second thought, you seem like a rage bot with GPT generated comments on politically charged articles.


Robots completely replacing humans cannot occur in a capitalist system without complete economic collapse.

If robots are developed to be able to perform the most undesirable jobs, then they will also be developed to perform the most desirable jobs. If humans don't work, they have no money. Humans without money cannot buy things, and if humans can't buy things, companies can't exist.

