Hacker News | kowsheek's comments

I'm sorry you had to experience that, I know how you might feel :(

That data structures and algorithms book and course is the root of so much distasteful smugness in the tech world; it's really bizarre.

[Edit]: It's almost as if it was the first course we found very difficult, so we feel we should impose that imprint of challenge on everyone else.


The word I think you’re looking for is “hazing”


I think "caring" is probably the only one that separates the best from the rest. Reminds me of this: https://youtu.be/7EmboKQH8lM?t=2165

If we care as developers, we write better code, learn new things, and listen to our team and our customers.


Looks like Apple ripped off yet another web spec... App Clips, anyone?


Might look outlandish to everyday consumers, but it's not unreasonable in my line of work (VFX, 3D, real-time graphics) to require these kinds of specs.


Yeah, not sure of the point of this post. The GPU and RAM alone are >$20k.


My takeaway is that somebody, somewhere, had to get a $50k Mac replaced and Apple has one to refurbish. I suppose it's not that surprising given that used luxury cars are a thing, but a used workstation at this price point is rare.


Curious about the low (compared to the rest of the specs) amount of SSD storage, given that raw video tends to be very large. I guess they would be using networked storage for everything?


If you're really doing video, I suspect you'd use storage arrays (perhaps arrays of SSDs at the high end) instead of trying to cram it all into your workstation, not to mention that you'd want redundancy (and sharing) for production-type work.


A lot of editors will keep all of their footage on some form of networked storage until they need to work on a specific piece of it. Bring it to the machine, edit it, and send it back to the storage array.


I work in the same industry, and I'll say you don't need to max out every component on a PC for peak real-time graphics performance. Yeah, you might need to max out a few parts, but not every part.


I’m pretty sure you could escape for under $10k if you built this yourself as a PC.


Here's a $32,000 attempt at a PC build to compete with a loaded Mac Pro. The result was a mixed bag.

https://www.youtube.com/watch?v=l_IHSRPVqwQ


They threw a silly amount of RAM in it, so they ended up using one of the AMD server CPUs/mainboards rather than a Threadripper, since the Threadripper can only address 256G of RAM. $1200 a stick... woof. I doubt the workload they put on things used that much memory. It would have been interesting to match a cheaper 24-core CPU ($1300) and 256G ($1000) or less of memory with that same test.


If we're going for comparisons, the $43k+ refurb Mac Pro linked at the top of this page still has 50% more RAM than that PC does.

I think the video's conclusion is spot on -- blindly comparing specs isn't really helpful for judging real world usability of hardware that is this high end. You should get the specific resources matched to your workload. That might be a Mac Pro, it might be a server in a closet with a TB of RAM, or it might be a PC with a couple of Quadros in it. Or maybe it's just a netbook with a Celeron :)


> They threw a silly amount of RAM in it, so they ended up using one of the AMD server CPUs/mainboards rather than a Threadripper, since the Threadripper can only address 256G of RAM. $1200 a stick... woof.

Are you arguing that no one needs >256G of memory? Or are you arguing that no one needs ECC memory? Or are you arguing that server chips and motherboards shouldn't exist? Or is your complaint that Apple sells a workstation? Or maybe you think workstations should be illegal?

Should car makers only sell cars that you, personally, would like to buy? Should all manufacturers only sell things that you personally find useful? And if you don't find this computer useful how valuable is it for 100 people to chime in to say that? (IMHO opinion polls are better suited to that level of commentary.)

> I doubt the workload they put on things used that much memory

All anecdata to the contrary, some workloads do in fact benefit from obscene amounts of memory. And if you've ever looked into it, supporting >1TB of memory requires expensive, dense memory modules, because there are only so many memory buses on the chip and only so many sockets you can put on a bus.


> Are you arguing that no one needs >256G of memory?

For context, I'm working in cloud infrastructure. Due to customer demand (esp. from customers with large in-memory DBs), our newest set of ultra-large VM flavors goes up to 3 and 6 TiB RAM. I'm not directly involved with compute, but from what I gather, it's an interesting challenge esp. on the operations side. You can migrate a 4 GiB VM in a pinch if the hypervisor is looking bad. Migrating a 6 TiB VM, however, takes a substantial amount of time (22 minutes if you have a 40 Gbps link and can saturate it, which you usually can't).
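
For anyone checking that figure, here's a minimal back-of-the-envelope sketch in Python (my own arithmetic, assuming a perfectly saturated link and ignoring protocol overhead and dirty-page re-copies, which make real live migrations take longer):

    # Rough check of the 22-minute figure. Assumes the full 40 Gbps is
    # available for payload; real migrations re-copy dirtied pages and
    # so take longer.
    vm_memory_bytes = 6 * 2**40        # 6 TiB of guest RAM
    link_bytes_per_sec = 40e9 / 8      # 40 Gbps -> 5 GB/s
    seconds = vm_memory_bytes / link_bytes_per_sec
    print(f"{seconds:.0f} s ~= {seconds / 60:.1f} min")  # ~1319 s ~= 22.0 min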


I'm looking at what a workstation-class machine likely should be, I guess. All things being equal, I'd love there to be no (realistic) upper limit on memory. ECC should have been supported by Threadripper too, IMHO. I've only got 128G on my workstation and I'm already considering filling the extra four slots. 64-512G is likely a pretty normal memory range for a workstation-class machine.

My complaint is the testing methodology. Server vs. server is a different problem with different requirements. They compared a workstation and then built one with memory requirements more common to a server-class machine. I suspect a cheaper 'non-server' CPU with a memory footprint more typical of a workstation load would have held up. One of the likely reasons they gimped the Threadripper was to ensure it did not compete with the Epyc product line.

As for the questions: no, no, no, no, no, no, no, and no. :)


I was curious as to whether this would indeed be possible. But the CPU (W-3275M) itself starts at $6k and the Vegas are $8k each; I can't find any catalogue prices, but each stick of RAM comes in at around $600, which puts us at roughly $30k. It's not unthinkable that the remaining hardware adds another $5k to the bill, and then you're looking at a $35k machine.

You can probably replace some of these with consumer-grade parts, but I'll be impressed if you can create a machine with two Titans, 28 cores, and 1.5 TB of memory for under $10k.
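
For what it's worth, a quick tally of the figures above in Python. The split into twelve 128 GB DIMMs is my assumption about how you'd reach 1.5 TB; the per-part prices are just the estimates quoted above, not catalogue numbers:

    # Rough cost tally using the estimates from this comment.
    cpu = 6_000                    # Xeon W-3275M, "starts at $6k"
    gpus = 2 * 8_000               # two Vega cards at "$8k each"
    ram = 12 * 600                 # assumed 12 x 128 GB DIMMs at ~$600 apiece
    base = cpu + gpus + ram        # ~$29,200, i.e. roughly the $30k quoted
    print(base, base + 5_000)      # roughly $35k with the remaining hardware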


The two video cards alone cost $10k.


The memory alone is $15,000.


It seems like they're maybe Apple-exclusive, so it's possible they're drastically marked up. The Titan RTX is $2,500 apiece and seems somewhat comparable.


I think your issue may be with the exponential scaling of GPU cost at the highest end. As a consumer, you wish that a card with ~2x the Titan's TFLOPS only cost twice as much, but it doesn't work that way on either platform, and of course it isn't helped by the workstation card market being smaller by unit volume than the retail/OEM gaming card market.

That said I did not downvote you; it’s an understandable frustration.


The Titan RTX is a consumer GPU, not a workstation GPU. The cards are designed for different workloads, so they're not really comparable.


What parts of the chip do they burn off for the consumer version?


It has ECC memory, more memory overall, and a design that allows four of them to be stacked together, making it ideal for deep learning workflows.


What are you talking about? The 1.5 TB of RAM alone would cost you around $10k...


What RAM are you buying?


... but macOS?!?


Hackintosh!


Those days are over, what with the ARM transition and all. Enjoy it while you still can.


Eagerly awaiting the next-gen "ApplePi" with ARM macOS on a Raspberry Pi 5. :)


They "still have unrevealed Intel Macs" as per the keynote; given that the 2013 iMac is still supported for Big Sur, we almost certainly still have 7 years of support for both architectures in MacOS.


Jobs said the same thing (about new Macs) about PowerPC in 2006. By 2007, Macs were all Intel, and the Intel-only Snow Leopard debuted in 2009. Complete abandonment of Leopard in 2011.

I give it 5 years at the most.


Absolutely agree with you, if that. Support times seem to be getting worse as we have more overall system changes these days (or just bigger OSes?).


“I could clone Facebook in a weekend”


Exactly, I wasn't quite sure why there was the original post on it too.

As for my own post, I intend to tie it in with posts on git flow - coming soon(tm).


I had similar thoughts: https://blog.kowsheek.com/git-branch-naming-conventions-for-...

The purpose of feature branches is agility; they shouldn't be used to create more process.


I had to put a tissue in my nose while reading this: https://youtu.be/qbInsYok8x8


As a United fan, I relate and hope so too.


This is brilliant. I cannot wait for WASM to take over everything. It's going to (hopefully) bring about true cross-platform development.


All I can think is: haven't we heard all this before, multiple times? Is this just the next generation going through what sounds like a good idea but in practice isn't?


There's no need to play that game; it's far more rewarding to define and play your own game with a smaller audience. I had cut myself off from social media and blogging, but I got back into it recently because I felt there would be an audience I could benefit: https://blog.kowsheek.com/should-i-start-a-blog/

