Hacker News | fluxem's comments

I don’t. I have other hobbies that don’t involve staring at a screen all day


I do too but isn't that quite reductive?

My state of mind is quite different when I'm doing "work" work versus when I'm doing something else for fun. They're completely different experiences; the only things they have in common are that I'm staring at a screen and using my fingers on a keyboard.


I struggle to find a hobby like programming.

It’s

- intellectually challenging and interesting

- massive rabbit hole

- I’m good at it

- it’s a very creative outlet and the outputs are awesome

I’m relatively physically disabled (not noticeable to the naked eye though), otherwise I’d be out walking and running and bouldering all day


Carpentry feels surprisingly similar to programming, and there is a lot of depth to it (it's one of the oldest, if not the oldest, and most diversely expressed forms of human creativity).


E-bike (button-only),

private flying.


E-bike is a great idea, cheers


Got one picked out? =D


Maybe a paywall is there for a reason


Maybe paywalls aren't successful for a reason


maybe.. just maybe...


If they don't want to show the article, maybe don't put it on the site.


Just don’t read it then. You are not the intended audience for that book. It’s OK not to read a book.


As JL Borges said,

> [...] If a book bores you, leave it; don’t read it because it is famous, don’t read it because it is modern, don’t read a book because it is old. If a book is tedious to you, leave it, even if that book is 'Paradise Lost' — which is not tedious to me — or 'Don Quixote' — which also is not tedious to me. But if a book is tedious to you, don't read it; that book was not written for you. Reading should be a form of happiness, so I would advise all possible readers of my last will and testament — which I do not plan to write — I would advise them to read a lot, and not to get intimidated by writers' reputations, to continue to look for personal happiness, personal enjoyment. It is the only way to read.

Anyway, since science/academic books and papers are the point of discussion in this thread, I doubt one would always have the privilege of just leaving them.


Math and adjacent literature are there to rewire your brain, and they will always be a struggle to read, since rewiring your brain takes effort. The exception is if you already know the topic really, really well, so you don't have to rewire anything; you just slot it into places you have already created. But that is impossible for topics new to you.


That’s because no one wants to draw degeneracy. You read stories about what artists had to draw because “rent was due” and just shake your head.


It's because furries are very wealthy because they're all gay IT workers.


It’s great for all the people still living in 1992. The rest want good collaboration tools with a grammar checker


LibreOffice supports grammar checking with LanguageTool, which blows Word's grammar checker out of the water.


Fragmented sentence. Please rewrite your comment.

[Edit] I guess someone doesn't get the joke about how bad Word's grammar checker is. It used to flag so much stuff as "fragmented" and didn't suggest improvements.


Also, I cannot believe that some cars (looking at you, Subaru) don’t have a pause button, only mute. Not an issue when you’re playing music, but when you’re listening to an audiobook, it’s another story.


Add two more envelopes:

“Blame covid”

“Blame market conditions”


"Blame post-covid changes" "Blame pre-covid changes"


CUDA = C++ on GPUs. Compute shaders = a subset of C with weird quirks.


There are existing efforts to compile SYCL to Vulkan compute shaders. Plenty of "weird quirks" involved since they're based on different underlying varieties of SPIR-V ("kernels" vs. "shaders") and seem to have evolved independently in other ways (Vulkan does not have the amount of support for numerical computation that OpenCL/SYCL has) - but nothing too terrible or anything that couldn't be addressed by future Vulkan extensions.


A subset that lacks pointers, which makes compute shaders a toy language next to CUDA.


Vulkan 1.3 has pointers, thanks to buffer device address[1]. It took a while to get there, and earlier pointer support was flawed. I also don't know of any major applications that use this.

Modern Vulkan is looking pretty good now. Cooperative matrix multiplication has also landed (as a widely supported extension), and I think it's fair to say it's gone past OpenCL.

Whether we get significant adoption of all this I think is too early to say, but I think it's a plausible foundation for real stuff. It's no longer just a toy.

[1] https://community.arm.com/arm-community-blogs/b/graphics-gam...


Is IREE the main runtime doing Vulkan or are there others? Who should we be listening to (oh wise @raphlinus)?

It's been awesome seeing folks like Keras 3.0 rolling out broad intercompatibility across JAX, TF, and PyTorch, powered by flexible execution engines. Looking forward to seeing more Vulkan-based runs getting socialized, benchmarked & compared. https://news.ycombinator.com/item?id=38446353


The two I know of are IREE and Kompute[1]. I'm not sure how much momentum the latter has; I don't see it referenced much. There's also a growing body of work that uses Vulkan indirectly through WebGPU. This currently lags in performance due to the lack of subgroups and cooperative matrix multiplication, but I see that gap closing. There I think wonnx[2] has the most momentum, but I am aware of other efforts.

[1]: https://kompute.cc/

[2]: https://github.com/webonnx/wonnx


How feasible would it be to target Vulkan 1.3 or such from standard SYCL (as first seen in Sylkan, for earlier Vulkan Compute)? Is it still lacking the numerical properties for some math functions that OpenCL and SYCL seem to expect?


That's a really good question. I don't know enough about SYCL to be able to tell you the answer, but I've heard rumblings that it may be the thing to watch. I think there may be some other limitations, for example SYCL 2020 depends on unified shared memory, and that is definitely not something you can depend on in compute shader land (in some cases you can get some of it, for example with resizable BAR, but it depends).

In researching this answer, I came across a really interesting thread[1] on diagnosing performance problems with USM in SYCL (running on AMD HIP in this case). It's a good tour of why this is hard, and why for the vast majority of users it's far better to just use CUDA and not have to deal with any of this bullshit - things pretty much just work.

When targeting compute shaders, you pretty much have to manage buffers manually, and also do copying between host and device memory explicitly (when needed - on hardware such as Apple Silicon, you prefer to not copy). I personally don't have a problem with this, as I like things being explicit, but it is definitely one of the ergonomic advantages of modern CUDA, and one of the reasons why fully automated conversion to other runtimes is not going to work well.

[1]: https://stackoverflow.com/questions/76700305/4000-performanc...


Unified shared memory is an Intel-specific extension of OpenCL.

SYCL builds on top of OpenCL, so you need to know the history of OpenCL. OpenCL 2.0 introduced shared virtual memory, which is basically the most insane way of doing it. Even with coarse-grained shared virtual memory, memory pages can transparently migrate from host to device on access. This is difficult to implement in hardware; the only good implementations were on iGPUs, simply because the memory is already shared. No vendor, not even AMD, could implement this demanding feature. You would need full cache coherence from the processor to the GPU, something that is only possible with something like CXL, and that isn't ready even to this day.

So OpenCL 2.x was basically dead. It has unimplementable mandatory features so nobody wrote software for OpenCL 2.x.

Khronos then decided to make OpenCL 3.0, which gets rid of all these difficult to implement features so vendors can finally move on.

So, while building their Arc GPUs, Intel decided to create a variant of shared virtual memory that is actually implementable, called unified shared memory.

The idea is the following: all USM buffers are accessible by both CPU and GPU, but the location is defined by the developer. Host memory stays on the host, and the GPU must access it over PCIe. Device memory stays on the GPU, and the host must access it over PCIe. These two types of memory already cover the vast majority of use cases and can be implemented by anyone. Then finally, there is "shared" memory, which can migrate between CPU and GPU in a coarse-grained manner. This isn't page-level; the entire buffer gets moved, as far as I am aware. This allows you to do CPU work, then GPU work, then CPU work. What doesn't exist is a fully cache-coherent form of shared memory.

https://registry.khronos.org/OpenCL/extensions/intel/cl_inte...


https://enccs.github.io/sycl-workshop/unified-shared-memory/ seems to suggest that USM is still a hardware-specific feature in SYCL 2020, so compatibility with hardware that requires a buffer copying approach is still maintained. Is this incorrect?


Good call. So this doesn't look like a blocker to SYCL compatibility. I'm interested in learning more about this.


> Vulkan 1.3 has pointers, thanks to buffer device address[1].

> [1] https://community.arm.com/arm-community-blogs/b/graphics-gam...

"Using a pointer in a shader - In Vulkan GLSL, there is the GL_EXT_buffer_reference extension "

That extension is utter garbage. I tried it. It was the last thing I tried before giving up on GLSL/Vulkan and switching to CUDA. It was the nail in the coffin that made me go "okay, if that's the best Vulkan can do, then I need to switch to CUDA". It's incredibly cumbersome, confusing and verbose.

What's needed are regular, simple, C-like pointers.


It boggles my mind that GitHub search is case-insensitive and doesn't match whole words. For the search term getInfo it will return: GetInfoCell

Don't get me started about how it's harder than it needs to be to limit the search to your own repos.
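For an exact-identifier lookup like this, a case-sensitive whole-word regex does what GitHub's search won't; a quick Python sketch of the difference (the identifiers are just made-up examples):

```python
import re

corpus = ["GetInfoCell", "getInfo", "myGetInfo"]

# GitHub-style lookup: case-insensitive substring match
loose = [s for s in corpus if re.search("getinfo", s, re.IGNORECASE)]

# Case-sensitive whole-word match using \b word boundaries
strict = [s for s in corpus if re.search(r"\bgetInfo\b", s)]

print(loose)   # ['GetInfoCell', 'getInfo', 'myGetInfo']
print(strict)  # ['getInfo']
```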


Use Sourcegraph to search instead. It doesn't even require you to be logged in.


Here’s an idea. Instead of the rod being parallel with the wall, have multiple small rods perpendicular to the wall. They can easily accommodate full size coat hangers. Problem solved


Ha, good idea. Something like this, right?: https://www.richelieu.com/us/en/category/closet-and-storage/...

Not sure how much it costs, but it doesn't look difficult to make. There might be patents though.


> Here’s an idea. Instead of the rod being parallel with the wall, have multiple small rods perpendicular to the wall. They can easily accommodate full size coat hangers. Problem solved

That takes up more wall space, and makes it harder to see what everything is.

I get that this is not the product for you (or me either), but don't compare three years of on-again, off-again mulling over the design with your 20 seconds of thought on the problem space.


> with your 20 seconds of thought on the problem space

It's a solved problem, is what I imagine OP meant by the "idea". It's not theirs. Those products exist.

It's not harder to see what everything is because they come in variants where they are stacked vertically, can pop out, can slide out. I agree it might not be as space efficient, but at least you are not limited to thin items that don't crease. And it's just a rack and you can use $0.50 coat hangers instead of $6 ones.

I do see the beauty of this design, and it can be useful when you have limited width as well (as in the van pictured), so this isn't hate on it, just a note that it's not a revolution, merely a different take.


Someone linked the "parallel to the wall" hanger here https://www.amazon.ca/Retractable-Adjustable-Wardrobe-Clothi... and it costs a lot. If you want to put multiple ones next to each other, it becomes quite expensive.

Also access to items behind other items becomes harder and slower. And it requires a shelf above to secure to.

Horizontal pole and foldable hanger design still seems like a simpler and better approach. The design seems so good that I bet you will be able to order these from China in a couple of years for less than a dollar.


Ikea has them for pretty cheap, though they are made to fit inside their closets.


So I've thought about the problem some more, and have my own idea.

You would still have a rod mounted a short distance from the wall, like the clothes hinger design, but a little further out. The rod for the clothes hinger has grooves that are perpendicular to the long axis of the rod.

But what if you cut the grooves at an angle instead, maybe 60 degrees? This would take up about the same distance out from the wall as the clothes hinger design, but slightly more wall space. That would be a factor for very constrained walls, but for longer rods the overhead would be insignificant. And it would only project out from the wall the same amount as the clothes hinger design (that's trigonometry!).

You would use a smaller diameter rod, so that with the grooves cut at an angle, the diameter is still sufficiently small enough to use a regular clothes hanger.
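To put a rough number on "that's trigonometry!": a hanger of width w held at an angle θ to the wall projects w·sin(θ) out from the wall and takes up w·cos(θ) of wall length. A throwaway Python sketch (the 17-inch width and angles are illustrative guesses, not measurements from the actual design):

```python
import math

def hanger_footprint(width, angle_deg):
    """Return (depth, wall_length) for a hanger of the given width
    held at angle_deg from the wall: depth is how far it sticks out,
    wall_length is how much rod/wall space it occupies."""
    theta = math.radians(angle_deg)
    return width * math.sin(theta), width * math.cos(theta)

# Hypothetical 17-inch hanger in an angled groove
depth, wall_length = hanger_footprint(17, 60)
print(f"60 degrees: depth {depth:.1f} in, wall length {wall_length:.1f} in")

# At 45 degrees the two components are equal (width / sqrt(2))
d45, w45 = hanger_footprint(17, 45)
print(f"45 degrees: {d45:.1f} in each way")
```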


Twist the hook 45 degrees. The hanger then only sticks out width/√2 from the wall, saving roughly 30% of the depth. There's some wasted space in corners though.


I like this simple solution, but it makes it difficult to see all your clothes and which article you want.


It's common for them to be stacked like stairs or if there's more space to slide out, so that's not usually an issue. Sometimes it's kinda both (stairs that pop up).

