Yeah, so not exactly liberal democracy. It is a democracy, but it doesn't seem very liberal if the checks and balances don't work against popular policies.
I would argue that in that case, liberal democracy is an oxymoron.
Really popular policies have wide support among the population, which means that they will become law, or even an amendment to the constitution. (Most countries have something like a 3/5 supermajority requirement for changing the constitution, which is a lot more practical than the basically-as-of-now-impossible US procedure.)
At that moment, if you want to keep the "liberal" character of the country, your "checks and balances" institutions have to act in fairly authoritarian ways and invalidate laws that attracted supermajority support. What is then stopping such institutions from just ruling as they see fit? Even checks and balances need checks and balances.
Nevertheless, I would say that "liberal democracy" isn't one that can always prevent illiberal policies from being enacted. I would say that it is one that can later correct them.
Note that historically, the most obvious executive encroachments on liberty in the US (Guantanamo etc.) were later overturned by new administrations.
> Really popular policies have wide support among the population, which means that they will become law, or even an amendment to the constitution
McCarthyism didn't have that much support from voters, so that isn't the issue; it never became law. The issue is that the elected representatives didn't do anything to stop it until it started drawing massive disapproval from voters.
If voters need to massively disapprove of government abuse before the "checks and balances" do their job, the democracy isn't working as it should: the government doesn't need to change the constitution, it just needs to keep disapproval low enough to continue its illegal actions. In a true liberal democracy the checks and balances work on their own; ministers who perform illegal acts are investigated and relieved of their duties without elected representatives having to start that procedure.
I live in Sweden and I can't even find examples of a politician who blatantly ignores laws and procedures and gets to stay for years here. I think the two-party system is the biggest culprit: there you need support from both parties to remove criminal politicians, which is very difficult to get when people have to vote against their own. In a multi-party system each party is a minority, and allied parties are not friendly to each other; they gladly sink an ally to absorb their votes, since the issue was the party and not the alliance, and people won't move to the other bloc over such a thing.
Sweden supports Chat Control at the European level, even though the very principle of Chat Control is anathema to basic civic rights.
Is widespread surveillance of private communications popular with the Swedish electorate, or do people like Ylva Johansson support and even push such abominable things regardless of what actual Swedes think?
If the latter, it is not that different from what McCarthy once did, and our entire continent is in danger that this sort of paranoid dystopia gets codified into law approximately forever. At least McCarthy's era was short.
> In 1986, Congress amended the Safe Drinking Water Act (SDWA), prohibiting the use of lead in pipes, and solder and flux on products used in public water systems that provide water for human consumption. Lead-free was defined as solder and flux with no more than 0.2% lead and pipes with no more than 8%.
> In 2011, Congress passed the RLDWA, which revised the definition of lead free and took effect in 2013. Lead free was now defined as the lead content of the wetted surfaces of plumbing products as a weighted average of no greater than 0.25% for products that contact water intended for consumption, and 0.2% for solder and flux.
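For anyone who wants to see how that "weighted average of the wetted surfaces" definition actually computes, here's a minimal sketch; the part areas and lead percentages are made up for illustration, not taken from any real fixture:

```python
# Minimal sketch of the RLDWA "weighted average" lead-free test.
# Part areas and lead percentages below are made up for illustration.
def wetted_avg_lead(parts):
    """parts: list of (wetted_surface_area_cm2, lead_pct) tuples."""
    total_area = sum(area for area, _ in parts)
    return sum(area * pct for area, pct in parts) / total_area

hypothetical_faucet = [
    (50.0, 0.10),  # brass body: 50 cm^2 wetted surface, 0.10% lead
    (10.0, 1.50),  # small fitting: 10 cm^2 wetted surface, 1.50% lead
]
avg = wetted_avg_lead(hypothetical_faucet)
print(f"weighted average lead: {avg:.3f}%")  # 0.333% -- fails the 0.25% limit
```

Note how one small high-lead part can push the whole assembly over the limit even when the bulk of the wetted surface is nearly lead-free.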
A lot of municipal water systems have made more recent (but by no means required) improvements to the water itself to "coat" the lead in supply lines, going beyond just pH control to additives like orthophosphate. Most only in the last decade or so.
For Chicago, it’s an active project
> Polyphosphate is being removed because recent studies have shown that it may negatively impact lead corrosion control.
> Polyphosphate was initially added with the orthophosphate to mask discoloration of the water from metals such as iron or manganese.
A lot of brass fittings and fixtures have lead in them. It makes the brass easier to machine.
I wouldn’t be surprised if a lot of no-name Amazon and AliExpress plumbing fixtures still have a lot of lead in them. It keeps your cutting tool/machining costs down.
Even big box stores that are careful sell a lot of high-lead plumbing parts; they are just marked not for potable water and sold for use with gas pipes.
If people leave your country because it sucks, that's a governance problem, not a citizen problem. People are mobile; if they have options and can do better, they should take them. Life is short and you only live once. Debt is just accounting; it is a shared delusion, like a currency.
We set these citizens up to fail, and they’re the bad guys? Hardly. If you can escape the torment nexus, go, don’t look back. The torment nexus does not care about you. “The purpose of the system is what it does.”
If people enter your country because it's the greatest country in the world, that's a governance success. The USA has its share of problems, and there are things we should do to reduce education costs. But overall the USA has a positive net migration rate with every other major country. People are coming here for a reason.
We've got a lot of governance problems. Our system allows college prices to get really high, mostly because 18-year-old kids can sign up for these guaranteed loans. They don't know what they are signing up for, and colleges just want students in seats. People don't know whether their major has good employment prospects, what the average salary is, or how that will affect their loan repayment.
America chose to do this, banks make big money from the loans. Colleges make big money from students.
We have a similar system with medical care. We have regulatory capture of our medical system by the drug sellers, medical groups, etc. We pay way way way more than other countries with worse outcomes. And the reaction of half the country, the Republicans, is: we'll fix this by eliminating a lot of coverage for poor people. Democrats try to control costs, cover more poor people, and get on a better trajectory, and it's demonized as destroying democracy. Meanwhile, our recently passed BBB bill takes billions out of Medicaid, i.e. coverage for poor people, many of whom voted for Trump. This whole thing is disgusting. I'm angry because of the loss of potential here, just like for student debt.
My dad says colleges are corrupt because they "waste money on DEI things" and that's why they have high costs (sadly not making this up). I try to explain that college is not subsidized like it was when he went in the '60s. Similar thing with young people not affording housing and not having kids as much.
Oh boy, it actually gets so much worse than you mention. If you have W2 income, the cost of college is uniquely high to _you_. For very wealthy people who claim a business loss on their tax returns (through the massive spiderweb of itemized deductions), college, even the Ivy League, is free or almost free.
Bullshit. FAFSA covers assets as well as income so wealthy families with zero taxable income aren't getting need-based financial aid. Unless they lie on the application, which is criminal fraud.
Sorta. You exclude your primary residence, retirement funds, and any college savings accounts. I def know people with millions in the first two, retired, with their kids getting full rides and food stamps.
Not bullshit at all; there are loopholes riddled throughout the FAFSA system that allow assets to be tucked away out of scope. It's not fraud, all completely legal and purpose-built to support the types of families who can afford a financial advisor on retainer.
Pretty pessimistic, frankly. Management at all levels is pushing for nearshoring SWE labor; meanwhile we're training AI as a long-term solution to fill the skill gap in that same nearshore labor. We were hired to be smart people, and it's frankly an insult to gaslight us into believing it's simply because it makes us more productive. Of course there’s a push for it with the intent to replace us. Why else would it be forced down our throats?
> Of course there’s a push for it with the intent to replace us. Why else would it be forced down our throats?
I still don’t see this, if only because of the managerial instinct for ass-covering.
If something really matters and a prod showstopper emerges, can those non-technical supervisory managers be completely, absolutely, 100% sure the AI can fix the code and bring everything back up? If not, the buck would surely stop with them and they would be utterly helpless in that situation: the board waiting on a conference call while they stare at a page full of code that may as well be written in ancient Sumerian.
I can see developers taking a higher level role and using these tools, but I can’t really see managers interfacing directly with AI code generation. Unless they are completely risk tolerant, and you don’t get far up the greasy pole with those tendencies.
How, exactly?
If a production showstopper needs to be worked on immediately.
If the development is between non-technical management and some AI tool they have been using, how do they insulate themselves from being accountable to their superiors? Who is responsible, and who gets to fix it?
Directors at my company (large mid-tier tech) are being _asked_ to write AI code. Below that level, it's a mandate, and anyone who doesn't ship regularly using AI will be PIPed and fired. Don't make me explain how it makes sense, but that's what we're dealing with.
The scenario I was responding to is where developers are no longer employed because the managers interface directly with AI. It sounds like your company still has developers around.
If your company removes all developers and lets the managers vibe code instead, I’ll get the popcorn in for the next outage.
Any EEs that can comment on at what point we just flip the architecture over, so the GPU PCB is the motherboard and the CPU/memory lives on a PCIe slot? It seems like that would also have some power-delivery advantages.
If you look at any of the Nvidia DGX boards, it's already pretty close.
PCIe is a standard/commodity so that multiple vendors can compete and customers can save money. But at 8.0 speeds I'm not sure how many vendors will really be supplying; there are already only a few doing SerDes this fast...
There are companies that specialize in memory controller IP that everyone else uses, including large semi companies like Intel.
The IP companies are the first to support new standards and make their money selling to Intel etc., allowing Intel or whomever to take their time building higher-performance IP.
These days you can buy any standard as a soft IP from Synopsys or Cadence. They take their previous serdes and modify it to meet the new standard. They have thousands of employees across the globe just doing that.
Good to see I’m not the only person that’s been thinking about this. Wedging gargantuan GPUs onto boards and into cases, sometimes needing support struts even, and pumping hundreds of watts through a power cable makes little sense to me. The CPU, RAM, these should be modules or cards on the GPU. Imagine that! CPU cards might be back..
This is a legitimate problem in datacenters. They're getting to the point where a single 40(ish) OU/RU rack can pull a megawatt in some hyperdense cases. The talk of GPU/AI datacenters consuming inordinate amounts of energy isn't just because the DCs are yuge (although some are), but because the power draw per rack unit of space is going through the roof as well.
On the consumer side of things, where the CPUs are branded Ryzen or Core instead of Epyc or Xeon, a significant chunk of that power consumption is from the boosting behavior they implement to pseudo-artificially[0] inflate their performance numbers. You can save hugely on energy (easily 10%, often closer to 30%, but it really depends on the exact build/generation) by doing a very mild undervolt and limiting boosting behavior on these CPUs while keeping the same base clocks. Intel 11th- through 14th-gen CPUs are especially guilty of this, as are most Threadripper CPUs. You can often trade single-digit or even negligible performance losses (depending on what you're using it for and how much you undervolt/underclock/restrict boosting) for double-digit reductions in power usage. This phenomenon is also true for GPUs when compared across the enterprise/consumer divide, though not quite to the same extent in most cases.
Point being, yeah, it's a problem in data centers, but honestly there's a lot of headroom still even if you only have your common American 15A@120VAC outlets available before you need to call your electrician and upgrade your panel and/or install 240VAC outlets or what have you.
0: I say pseudo-artificial because the performance advantages are real, but unless you're doing some intensive/extreme cooling, they aren't sustainable or indicative of nominal performance, just a brief bit of extra headroom before your cooling solution heat-soaks and the CPU/GPU's throttle themselves back down. But it lets them put the "Bigger number means better" on the box for marketing.
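To put rough numbers on why a mild undervolt plus a boost cap saves so much: dynamic CMOS power scales roughly with V² × f, so voltage cuts pay off quadratically. A back-of-the-envelope sketch, with illustrative (not measured) figures:

```python
# Back-of-the-envelope: dynamic power scales ~ V^2 * f for CMOS logic.
# All voltages/clocks here are illustrative, not measurements of any real CPU.
def relative_power(v_new, v_old, f_new, f_old):
    return (v_new / v_old) ** 2 * (f_new / f_old)

# e.g. a ~5% undervolt plus capping a 5.5 GHz boost at 5.0 GHz:
p = relative_power(v_new=1.24, v_old=1.30, f_new=5.0, f_old=5.5)
print(f"~{(1 - p) * 100:.0f}% less power")                # ~17% less power
print(f"~{(1 - 5.0 / 5.5) * 100:.0f}% less peak clock")   # for ~9% less clock
```

That quadratic voltage term is why the last few hundred MHz of boost are so disproportionately expensive.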
It's not just about better numbers. Getting high clocks for a short period helps in a lot of use cases, for random things like search. If I'm looking for some specific phrase in my codebase in VS Code, everything spins up for the second or two it takes to process that.
Boosting from 4 to 5 or 5.5 GHz for that brief period shaves a fraction of a second; repeat that for any similar operation and it adds up.
Yes, I figured that much would be obvious to this crowd. Thus the "pseudo" part.
The point isn't that there isn't a benefit; it's that you start to pay exponentially more energy per 0.1 GHz at a certain point. Furthermore, AMD and Intel were exceptionally aggressive about it in the generations I outlined (for AMD that would be the 7000-series Ryzens specifically), leading to instability issues on both platforms due to the spec itself being too aggressive, or AIB partners improperly implementing that spec, as the headroom that typically exists from factory stock to push clocks/voltages further was no longer there in some silicon (some of it comes down to silicon lottery and manufacturing defects/mistakes, Intel's oxidation issues for example, but we're really getting into the weeds on this already).
And to clarify: I'm talking specifically about Intel Turbo Boost and AMD's PBO boosting technologies, where they boost well over base clocks, separate from the general dynamic clocking behavior where clocks drop well below base when not in (heavy) use.
They're small and efficient; that means they can pack large numbers of them into small spaces, resulting in a similarly large power draw per volume of equipment in the DC. This is especially true with Apple's "UltraFusion" tech, which they're developing as a quasi-analog to Nvidia's Grace Hopper superchips.
Didn't say they draw the same; I openly acknowledge they're more efficient. I said power use per rack unit is trending up. This is true of Apple DCs as well, especially with their new larger/fused chip initiatives. It's a universal industry trend, especially with AI compute, and Apple is not immune.
Let me rephrase to: No, they (collectively) don’t draw the same levels of power. I know what amperage is drawn by each rack. It’s nowhere near as much as was drawn by the older intel-based racks.
Changing settings can lead to stability issues no matter which way you push it, frankly. If you don't know what you're doing or aren't comfortable with it, it's probably not worth it.
At least with U.S. wiring we have 15 amps at 120 volts. For continuous power draw you want to stay at 80% of rated capacity, so let's say you have 1440 watts of AC power you can safely draw continuously. Power supplies built on MOSFETs seem to peak at around 90% efficiency, but you could consider something like the Corsair AX1600i using gallium nitride transistors, which supposedly can handle up to 1600 watts at 94% efficiency.
Apparently we still have room, as long as you don't run anything else on the same circuit. :)
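Worked through as a quick sanity check (same assumptions as above: a 15A/120V branch circuit, the 80% continuous-load rule, and those two PSU efficiency figures):

```python
# Quick sanity check: US 15 A / 120 V branch circuit, 80% continuous rule.
volts, amps = 120, 15
continuous_w = volts * amps * 0.8
print(f"{continuous_w:.0f} W continuous at the wall")  # 1440 W

for name, eff in [("typical MOSFET PSU", 0.90), ("AX1600i-class GaN PSU", 0.94)]:
    print(f"{name}: ~{continuous_w * eff:.0f} W usable DC")
# ~1296 W vs ~1354 W actually delivered to the components
```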
You can always have an electrician install a larger breaker for a particular circuit. I did that with my "server" area in my study, which was overkill cuz I barely pull 100w on it. But it cost nearly zero extra since he was doing a bunch of other things around the house anyway.
In older houses, made from brick and concrete, that can be tricky to do. The only reason I can have my computer on a separate circuit is because we could repurpose the old three phase wiring for a sauna we ripped out. If that had not been the case, getting the wires to the fuse board would have been tricky at best.
New homes are probably worse than old homes though. The wires are just chucked in the space between the outer and inner walls; there's basically no chance of replacing them or pulling new ones. Old houses at least frequently have piping in which the wires run.
The voltage is always going to be the same because the voltage is determined by the transformers leading to your service panel. The breakers break when you hit a certain amperage for a certain amount of time, so by installing a bigger breaker, you allow more amperage.
If you actually had an electrician do it, I doubt they would've installed a breaker if they thought the wiring wasn't sufficient. Truth is that you can indeed get away with a 20A circuit on 14 AWG wire if the run is short enough, though 12 AWG is recommended. The reason for this is voltage drop; the thinner gauge wire has more resistance, which causes more heat and voltage drop across the wire over the length of it, which can cause a fire if it gets sufficiently hot. I'm not sure how much risk you would put yourself in if you were out-of-spec a bit, but I wouldn't chance it personally.
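To make the voltage-drop point concrete, here's a rough sketch; the per-1000-ft resistance values are standard figures for copper, while the 50 ft run length is just a made-up example:

```python
# Rough voltage-drop check for a hypothetical 50 ft run at 20 A.
# Copper resistance: ~2.525 ohm/1000 ft for 14 AWG, ~1.588 for 12 AWG.
OHMS_PER_KFT = {"14 AWG": 2.525, "12 AWG": 1.588}

run_ft, amps, volts = 50, 20, 120
for gauge, r_kft in OHMS_PER_KFT.items():
    r = 2 * run_ft * r_kft / 1000   # x2: out on the hot, back on the neutral
    drop = amps * r                 # V = I * R across the wire
    heat = amps ** 2 * r            # I^2 * R dissipated along the run
    print(f"{gauge}: {drop:.1f} V drop ({drop / volts:.1%}), {heat:.0f} W as heat")
# 14 AWG: ~5.1 V (4.2%) and ~101 W heating the wall; 12 AWG: ~3.2 V and ~64 W
```

Those watts of I²R loss are exactly the heat the code's ampacity limits are trying to keep bounded.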
You can, 240V on normal 12/2 Romex is fine. The neutral needs to be "re-labeled" with tape at all junctions to signify that it's hot, and then this practice is (generally) even code compliant.
However! This strategy only works if the outlet was the only one on the circuit, and _that_ isn't particularly common.
Although this exists, as a layperson, I've rarely seen it. There is the NEMA 6-15R receptacle type, but I have literally none of those in my entire house, and I've really never seen them. Apparently they're sometimes used for air conditioners. Aside from the very common 5-15R, I see 5-20R (especially in businesses/hospitals), and 14-30R/14-50R for ranges and dryers. (I have one for my range, but here in the Midwest electric dryers and ranges aren't as common, so you don't always come across these. We have LNG run to most properties.) So basically, I just really don't see a whole lot of NEMA 6 receptacles. The NEMA 14 receptacles, though, require both hots and the neutral, so in a typical U.S. service panel it takes a special breaker and two slots, so it's definitely not as simple a retrofit.
(Another outlet type I've seen: I once saw a NEMA 7 277V receptacle. I think you get this from one phase of a 480V three-phase system, which I understand is run to many businesses.)
If you drive an electric car in a rural area you might want to carry around 6-30 and 6-50 adapters because most farms have welders plugged into those and that can give you a quick charge. And also TT-30 and 14-50 adapters to plug in at campgrounds.
NEMA 6 is limiting because there’s no neutral, so everything in the device has to run on 240V. Your oven and dryer want 120V to run lights and electronics, so they use a 14 (or 10 for older installs) which lets them get 120V between a hot and the neutral.
Oddly, 14-50 has become the most common receptacle for non-hardwired EV charging, which is rather wasteful since EV charging doesn’t need the neutral at all. 6-50 would make more sense there.
Reasons why it's nice to have a 14-50 plug in your garage rather than a 6-50:
1: when an uncle stops by for a visit with his RV he can plug in.
2: the other outlets in your garage are likely on a shared circuit. The 14-50 is dedicated, so with a 14-50 to 5-15 adapter you can more safely plug in a high wattage appliance, like a space heater.
1 is why we ended up with 14-50 as the standard, too. Before there was much charging infrastructure, RV parks were a good place to get a semi-fast charge, and that meant a charger with a 14-50 plug.
2 is something I never thought of, I’ll have to keep that in mind.
NEMA 6s are extremely common in barns and garages for welders. 6-50 is more common for bigger welders but I’ve also seen 6-20s on repurposed 12/2 Romex as the parent post was discussing used for cheap EV retrofits, compressors, and welders.
If this is North America we're talking about, then 14 gauge is the standard for 120V 15A household circuits. By code, 20A requires 12 gauge. You'll notice the difference right away; it's noticeably harder to bend. Normally a house or condo will only have 15A wires running to circuits in the room. It's definitely not a standard upgrade; the 12 gauge wire costs a lot more per foot, and no builder will do it unless the owner forks over extra dough.
Unless you performed the upgrade yourself or know for a fact that the wiring was upgraded to 12 gauge, it's very risky to just upgrade the breaker. That's how house fires start. It's worth it to check. If you know which breaker it is, you can see the gauge coming out. It's usually written on the wire.
I was actually under the impression that it is allowed depending on the length of the conductor, but it seems you are right. NEC Table 310.15(B)(16) shows the maximum allowed ampacity of 14 AWG cables is 20 amperes, BUT... there is a footnote that states the following:
> * Unless otherwise specifically permitted elsewhere in this Code, the overcurrent protection for conductor types marked with an asterisk shall not exceed 15 amperes for No. 14 copper, 20 amperes for No. 12 copper, and 30 amperes for No. 10 copper, after any correction factors for ambient temperature and number of conductors have been applied.
I could've sworn there were actually some cases where it was allowed, but apparently not, or if there is, I'm not finding it. Seems like for 14 AWG cable the breaker can only be up to 15 amperes.
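The asterisked footnote effectively reduces to a simple lookup. A tiny sketch of just that rule as quoted (ignoring the "unless otherwise specifically permitted" exceptions):

```python
# The asterisked NEC footnote, reduced to a lookup: maximum overcurrent
# protection (breaker size) per copper conductor gauge.
MAX_BREAKER_A = {14: 15, 12: 20, 10: 30}  # AWG -> amperes, copper

def breaker_ok(awg: int, breaker_amps: int) -> bool:
    return breaker_amps <= MAX_BREAKER_A[awg]

print(breaker_ok(14, 20))  # False: 20 A breaker on 14 AWG violates the footnote
print(breaker_ok(12, 20))  # True
```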
There is a chance he did not run new wires if he was able to ascertain that the wire gauge was sufficient to carry 20 amps over the length of the cable. This is a totally valid upgrade though it does obviously require you to be pretty sure you know the length of the entire circuit. If it was Southwire Romex, you can usually tell just by looking at the color of the sheathing on the cable (usually visible in the wallboxes.)
As an old house owner, I can attest to that for sure. In fairness though, I suspect most of the atrocities occur in wall and work boxes, as long as your house is new enough to at least have NM sheathed wiring instead of ancient weird stuff like knob and tube. That's still bad but it's a solvable problem.
I've definitely seen my share of scary things. I have a lighting circuit that is incomprehensibly wired and seems to kill LED bulbs randomly during a power outage; I have zero clue what is going on with that one. Also, oftentimes when opening up wall boxes I will see backstabs that were not properly inserted, or wire nuts that are just covering hand-twisted wires and not actually threaded at all (and not even the right size in some cases...). Needless to say, I should really get an electrician in here, but at least with a thermal camera you can look for signs of serious problems.
Hair dryers and microwaves only run for a few minutes, so even if you do have too much resistance this probably won't immediately reveal a problem. A space heater might, but most space heaters I've come across actually seem to draw not much over 1,000 watts.
And even then, even if you do run something 24/7 at max wattage, it's definitely not guaranteed to start a fire even if the wiring is bad. Like, as long as it's not egregiously bad, I'd expect that there's enough margin to cover up less severe issues in most cases. I'm guessing the most danger would come when it's particularly hot outside (especially since then you'll probably have a lot of heat exchangers running.)
That's still not much for wiring in most countries. A small IKEA consumer oven is 230V × 16A = 3680W. Those GPUs and CPUs only consume that much at max usage anyway. And those CPUs are uninteresting for consumers; you only need a few watts for a single good core, like a Mac Mini has.
All American households get mains power at 240v (I'm missing some nuance here about poles and phases, so the electrical people can correct my terminology).
It's often used for things like ACs, clothes dryers, stoves, and EV chargers.
So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall.
To get technical -- US homes get two 120v legs that are 180 degrees out of phase with each other, with a shared neutral. Using either leg and the neutral gives you 120v. Using the two out-of-phase legs together gives you a difference of 240v.
Yeah that's right. The grid is three phases (as it is basically everywhere in the world), and the transformer at the pole splits one of those in half. Although, what are technically half-phases are usually just called "phases" when they're inside of a home.
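A quick numerical check of that split-phase picture, assuming idealized 60 Hz sine waves:

```python
# Idealized split-phase: two 120 V RMS legs, 180 degrees apart, shared neutral.
import numpy as np

t = np.linspace(0, 1 / 60, 10_000)       # one 60 Hz cycle
peak = 120 * np.sqrt(2)                  # ~170 V peak for 120 V RMS
leg_a = peak * np.sin(2 * np.pi * 60 * t)
leg_b = peak * np.sin(2 * np.pi * 60 * t + np.pi)

def rms(v):
    return np.sqrt(np.mean(v ** 2))

print(f"leg A to neutral: {rms(leg_a):.0f} V")          # ~120 V
print(f"leg A to leg B:   {rms(leg_a - leg_b):.0f} V")  # ~240 V
```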
> So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall.
It'd be all new wire run (120 is split at the panel, we aren't running 240v all over the house) and currently electricians are at a premium so it'd likely end up costing a thousand+ to run that if you're using an electrician, more if there's not clear access from an attic/basement/crawlspace.
Though I think it's unlikely we'll see an actual need for it at home; I imagine an 800W CPU is going to be a server-class part and rare-ish to see in home environments.
I kinda suspect there’s a premium once you mention "EV", since you're signalling that you're affluent enough to afford an EV and have committed to spending the money required to get EV charging at home working, etc. (Kinda like getting a quote for anything wedding-related.)
I’m getting some wiring run about the same distance (to my attic, fished up a wall, with moderately poor access) for non-EV purposes next week and the quote was a few hundred dollars.
The trick is to request a 240V outlet for a welder. It brings the price down to $400 or so.
Running to another room will usually be done (at least in the USA) through the attic or crawlspace. I got it done a few months ago to have a dedicated 20A circuit (for my rack) in my work room; the cost was around $300-400 as well.
Labor charges alone are going to be higher than that in Seattle. Just having someone come out on a call is going to be $150-200. An independent electrician who owns their own business is maybe $100-150/hr; if they are part of a larger company, I'd expect even more than that.
Honestly I wouldn't expect to pay less than $1000 for the job w/o any markups.
Handy man prices around here are $65 to $100/hr, and there is a huge wait list for the good ones.
I've gotten multiple quotes on running the 240v line, the labor breakdown was always over $400 alone. Just having someone show up to do a job is going to be almost $200 before any work is done.
When I got quotes from unlicensed people, those came in around $1000 even.
In Bay Area subreddits there are multiple posts talking about EV charger vs. welder outlet and how it drops the price from $2000 to $500 or so (depending on complexity).
Another thing that's good long-term is to find a local electrician (plumber, etc.) who doesn't charge for service calls and has reasonable pricing.
No idea about handyman pricing; never used any. For electrical/water/roofing I prefer somebody who is licensed/insured/bonded/etc.
I should look at the label (or check with a meter..), but when I run my SGI Octane with its additional XIO SCSI board in active use, the little "office" room gets very hot indeed.
If we're counting all the phases, then European homes get 400V 3-phase, not 240V split-phase.
Not that typical residential connections matter to highend servers.
Well, yes, it's possible, but often $500-1000 to run a new 240V outlet, and that's to a garage for an EV charger. If you want an outlet in the house, I don't know how much wall people want to tear up, plus the extra time and cost.
In the Nordics we're on 10A for standard wall outlets so we're stuck on 2300W without rewiring (or verifying wiring) to 2.5mm2.
We rarely use 16A but it exists. All buildings are connected to three phases so we can get the real juice when needed (apartments are often single phase).
I'm confident personal computers won't reach 2300W anytime soon though
In Italy we also have 10A and 16A (single phase). In practice however almost all wires running in the walls are 2.5 mm^2, so that you can use them for either one 16A plug or two adjacent 10A plugs.
In the Nordics (I'm assuming you mean Nordic countries) 10A is _not_ standard. Used to be, some forty years ago. Since then 16A is standard. My house has a few 10A leftovers from when the house was built, and after the change to TN which happened a couple of decades ago, and with new "modern" breakers, a single microwave oven on a 10A circuit is enough to trip the breaker (when the microwave pulses). Had to get the breakers changed to slow ones, but even those can get tripped by a microwave oven if there's something else (say, a kettle) on the same circuit.
16A is fine, for most things. 10A used to be kind of ok, with the old IT net and old-style fuses. Nowadays anything under 16A is useless for actual appliances. For the rest it's either 25A and a different plug, or 400V.
There already are different outlets for these higher power draw beasts in data centers. The amount of energy used in a 4u "AI" box is what an entire rack used to draw. Data centers themselves are having to rework/rewire areas in order to support these higher power systems.
You can up the voltage to 240 and re-use the wiring (with some minor mods to the ends), for double the power. Insulation class should be sufficient. That makes good sense anyway. You may still have an issue if the powersupply can't handle 240/60 but for most of the ones that I've used that would have worked. Better check with the manufacturer to be sure though. It's a lot easier and faster than rewiring.
Kettles in the US are usually 1500W, as the smallest branch circuits in US homes support 15A at 120V and the general rule for continuous loads is to be 80% of the maximum.
Ah, 16A at 230v (3680W) is a normal circuit here. Most appliances work with that, the common exception is electric cooking (using two circuits or 380v two-phase) and EV charging.
But computers do, which was why I included that context. You don't really want to build consumer PC >1500W in the US or you'd need to start changing the plug to patterns that require larger branch circuits.
Microwave ovens have a different issue, which I found when I upgraded my breaker board to a modern one in my house. The startup pulse gives a type of load which trips a standard A-type 10A breaker (230V). Had to get those changed to a "slow" type, but even that will trip every blue moon, and if there's something else significant on the same circuit the microwave oven will trip even so, every two weeks or so (for the record, I have several different types of microwave ovens around the house, and this happens everywhere there's a 10A circuit).
The newer circuits in the house are all 16A, but the old ones (very old) are 10A. A real pain, with new TN nets and modern breakers.
Microwave ovens top out around 1100-1250W output from a ~1500W input from the wall. Apparently there's a fair bit of energy lost in the power supply and magnetron that doesn't make it into the box where the food is.
It is mostly an issue in countries with 120V mains (I know that in the US 240V outlets exist though).
In France for example it is required that standard plugs be able to deliver at least 16A on each outlet; at the 230V used here, that's 3680W of power, more than enough.
Yes and this is something I've been thinking about for awhile.
A computer is becoming a home appliance, in the sense that it will need 20A wiring and plugs soon, but it should move to 220/240V soon anyway (and you'd change the jumper on your standard power supply).
But all of the most-ridiculous hyperscale deployments, where bandwidth + latency most matter, have multiple GPUs per CPU, with the CPU responsible for splitting/packing/scheduling models and inference workloads across its own direct-attached GPUs, providing the network the abstraction of a single GPU with more (NUMA) VRAM than is possible for any single physical GPU to have.
How do you do that, if each GPU expects to be its own backplane? One CPU daughterboard per GPU, and then the CPU daughterboards get SLIed together into one big CPU using NVLink? :P
No, for a gaming computer what we need is the motherboard and gpu to be side by side. That way the heat sinks for the CPU and GPU have similar amounts of space available.
For other use cases like GPU servers it is better to have many GPUs for every CPU, so plugging a CPU card into the GPU doesn’t make much sense there either.
And the memory should be an onboard module on the CPU card. Intel/AMD should replicate what Apple did with unified, same-ring-bus memory. Lower latency, higher throughput.
It would push performance further, although companies like Intel would bleed the consumer dry: a certain i5-whatever CPU with 16 gigs of onboard memory could be insanely priced compared to what you'd pay for add-on memory.
Yep, I have a lot of experience with CXL devices and networked PCIe/NVMe (over Eth/IB) fabrics and deploying "headless"/"micro-head" compute units which are essentially just a pair of DPUs on a PCIe multiplexer (basically just a bunch of PCIe slots tied to a PCIe switch or two).
That said my experience in this field is more with storage than GPU compute, but I have done some limited hacking about in the GPGPU space with that tech as well. Really fascinating stuff (and often hard to keep up with and making sure every part in the chain supports the features you want to leverage, not to mention going down the PCIe root topology rabbit hole and dealing with latency/trace-length/SnR issues with retimers vs muxers vs etc etc etc).
It's still a nascent field that's very expensive to play in, but I agree it's the future of at least part of the data infrastructure field.
Really looking forward to finally getting my hands on CXL3.x stuff (outside of a demo environment.)
I've wondered why there hasn't been a desktop with a CPU+RAM card that slots into a PCIe x32 slot (if such a thing could exist), or maybe dual x16 slots, and the motherboard could be a dumb backplane that only connected the other slots and distributed power, and probably be much smaller.
If I remember correctly the military / aerospace shy away from this spec because the connector with the pins is on the backplane, with the sockets on the cards.
So if you incorrectly insert a card and bend a pin you're in trouble.
VPX has the sockets on the backplane so avoids this issue, if you bend pins you just grab another card from spares.
This may have changed since I last looked at it.
The telecoms industry definitely seems to favour ATCA, though.
Yes, for fucks sake, this is the only way forward. It gives us the ultimate freedom to do whatever we want in the future. Just make everything a card on the bus and quit with all this hierarchy nonsense.
Wouldn't that mean a complete mobo replacement to upgrade the GPU? GPU upgrades seem much more rapid and substantial compared to CPU/RAM. Each upgrade would now mean taking out the CPU/RAM and other cards vs. just replacing the GPU.
> GPU upgrades seem much more rapid and substantial compared to CPU/RAM.
I feel like for the last five years I’ve been hearing about people selling five-to-ten-year-old GPUs for sometimes as many dollars as they bought them for; and about people choosing to stay on 10-series NVIDIA cards (2016) because the similar-cost RTX 30-, 40-, or 50-series was actually worse, because the effort and expense had gone into parts of the chips no one actually used. Dunno, I don’t dGPU.
Figure out how much RAM, L1-3|4 cache, integer, vector, graphics, and AI horsepower is needed for a use-case ahead-of-time and cram them all into one huge socket with intensive power rails and cooling. The internal RAM bus doesn't have to be DDRn/X either. An integrated northbridge would deliver PCIe, etc.
I wonder how many additional layers are required in the PCB to achieve this, and how this will dramatically affect the TDP; the GPUs aren't the only components with heat tolerance and capacitance.
One possible advantage of this approach that no one here has mentioned yet is that it would allow us to put RAM on the CPU die (allowing for us to take advantage of the greater memory bandwidth) while also allowing for upgradable RAM.
GPU RAM is high-speed and power-hungry, so there tends to not be very much of it on the GPU card. Part of the reason we keep increasing the bandwidth is so the CPU can touch that GPU RAM at the highest speeds.
It makes me wonder, though, if a NUMA model for the GPU is a better idea. Add more lower-power, lower-speed RAM onto the GPU card. Then let the CPU preload as much data as possible onto the card. Then, instead of transferring textures through the CPU onto the PCIe bus and into the GPU, why not just send a DMA request to the GPU and ask it to move the data from its low-speed memory to its high-speed memory?
It's a whole new architecture but it seems to get at the actual problems we have in the space.
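Something like the following, as deliberately hypothetical pseudocode; none of these class or method names are a real driver API, they're invented just to pin the idea down:

```python
# Hypothetical sketch of the two-tier GPU memory idea above.
# None of these names are a real API; they're invented for illustration.
class HypotheticalGpu:
    def alloc_bulk(self, nbytes):   # big pool of slower, low-power RAM
        ...
    def alloc_fast(self, nbytes):   # small pool of fast GDDR/HBM
        ...
    def dma_copy(self, src, dst):   # on-card DMA engine; no PCIe traffic
        ...

gpu = HypotheticalGpu()
bulk = gpu.alloc_bulk(32 << 30)     # preload 32 GiB of assets once, over PCIe
fast = gpu.alloc_fast(256 << 20)

# Per frame: instead of streaming textures CPU -> PCIe -> GPU again, the CPU
# just asks the card to shuffle between its own memory tiers:
gpu.dma_copy(src=bulk, dst=fast)    # stays entirely on the card's local buses
```

The nearest existing analogue is probably CUDA managed memory with prefetch hints, but the point here is dedicated cheap bulk RAM physically on the card.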
Yeah it is interesting though from a sociological perspective that there seems to be a worldwide pullback from globalism. Did Brexit or Trump kick this off?
For a lot of people, globalism = elitism.
It provides a sort of camouflage behind which elites can organise things in a way which best suits them whilst at the same time proclaiming how virtuous they are.
It's my sense that globalism kicked off this trend of anti-globalism.
If it benefitted more people, or more accurately if it benefitted people in a more equitable way instead of concentrating the gains in the hand of the wealthy and powerful, globally, then maybe there would be less of a pullback.
I'm not arguing for or against globalism; it has many benefits and drawbacks. But the undercurrent of opposition existed long before Trump or Brexit, as seen for example in the various GX (G8, G20, etc.) protests that took place around the world in the 2000s and 2010s, preceding the Trumpers and Brexit.
I agree the sentiment has picked up in recent years, accelerated since Covid, and that politicians are doing what politicians do, trying to get elected.
Robots completely replacing humans cannot occur in a capitalist system without complete economic collapse.
If robots are developed to be able to perform the most undesirable jobs, then they will also be developed to perform the most desirable jobs. If humans don't work, they have no money. Humans without money cannot buy things. And if humans can't buy things, companies can't exist.