
This is fantastic - but I encountered something strange. I was searching `ghostty per window shader` and your site came up as the first hit. Excellent - however, this was the text under the link:

Fun with Ghostty Shaders 22 Feb 2025 — Ghostty doesn't directly support shaders, but a repo with shaders can be cloned to ~/.config/ghostty/shaders. Examples include 'drunkard+retro- ...

Now, nowhere in the text on the site does it say this - so did Google just wrongly summarize the page and present that as "website text"? To be clear, this isn't an AI overview - it's in the main list of links! Maybe this has been happening and I just missed it, but it's absurd! It doesn't even fit with the text! Thanks for the resource, again - had a lot of fun with that.


What the crap is going on with this? Is Google just blindly making stuff up these days? Why would it show some preview text that doesn’t exist on the page?


Maybe they noticed that everybody ignores/downvotes/hates/hides the AI overviews, so the next attempt to force people to see them is to replace descriptions and previews with generated summaries?


This looks incredible, and it's obvious that a lot of work has been done, but in exploring it I noticed a lot of things that make me hesitate to spend the money!

First, in the section "Expressions are flashcards on steroids", the flavor text on each element (Translations, Audio, etc) is identical.

Next, I look at the pricing and get one idea. Then when I create an account and go to upgrade, I see completely different pricing options. It's not that I care so much about the options, but it kind of worries me!

At one point I swear I saw the phrase "Say something about comprehensible input" instead of an explanation of CI, and the sentence itself was duplicated, but now I don't see it. Maybe you are updating this landing page live? It _is_ a nice landing page, to be sure.

Overall, I think it looks really cool and I'm interested in trying it out but just a little nervous at the moment.


What the heck? Thank you for bringing the flavor text issue to my attention. You have no idea how long I spent on the copy for each of those to make sure it was unique, fit all screen sizes, etc. I have no idea what happened and I’m tragically upset now XD

The “say something about comprehensible input” was indeed a funny copy issue I found a few weeks ago. edit: found and fixed! original: I thought I had fixed it, though; there must be a screen size that needs to be updated. I’ll look for it, but it’s a Framer website so I can’t grep. Let me know if you find it again!

Indeed I just launched the new page with the new pricing. I have two major tasks this week, the second of which is to update the pricing flow to match the new prices on the home page.

It’s a one man show and fully bootstrapped, so apologies about the disarray. Everything takes a month or two to migrate when you do all the design, marketing, engineering, support, and bug fixes yourself!

EDIT: Both the flavor text and the “say something about ci” have been fixed. The upgrade flow will take a few days. I am planning to grandfather everyone who signs up for the old plan ($10pm) into the new plan ($20pm) at the old price :)


Impressive turnaround in general - that certainly instills some confidence!

It does look great, so kudos!


Heh, I can’t promise much, but I can promise I’m working on it full-time 7 days a week and am moving as fast as I can! If you have any questions, please don’t hesitate to contact me via the support card on the dashboard (it all goes straight to me).


Note for Firefox users - view the page in Chrome to see more of what they are talking about. I was very confused by some of the images, and it made a world of difference when I tried again in Chrome. Things began to make a lot more sense - is there a flag I am missing in Firefox on the Mac?


Here's the tracking issue for HDR support: https://bugzilla.mozilla.org/show_bug.cgi?id=hdr


Tbh, I'm glad this isn't supported in Firefox as of right now


HDR support in Chrome (Android) still looks broken for me. For one, some of the images on the blog have a posterization effect, which is clearly wrong.

Second, the HDR effect seems to be implemented in a very crude way, which causes the whole Android UI (including the Android status bar at the top) to become brighter when HDR content is on screen. That's clearly not right. Though, of course, this might also be an issue in Android rather than Chrome, or perhaps in the Qualcomm graphics driver for my Adreno GPU, etc.


Yeah, the HDR videos on my Asus Zenfone 9 (on Android 14) look really terrible.


Which Android phone are you using?



Can confirm on Windows 11 with HDR enabled on my display - I see the photos in the article correctly in Chrome, and they're a grey mess in Firefox.


On macOS, even without HDR enabled on my display, there's a striking difference between Safari and Firefox due to better tone mapping.

If I enable HDR, the Firefox ones become a gray mess, vs the lights feeling like actual lights in Safari.


For what it's worth, your comment has me convinced I just "can't see" HDR properly because I have the same page side-by-side on Firefox and Chrome on my M4 MBP and honestly? Can't see the difference.

edit: Ah, never mind. It seems Firefox is doing some sort of post-processing (maybe bad tonemapping?) on the fly, as the pictures start out similar but degrade to washed out after some time. In particular, the "OVERTHROW BOXING CLUB" photo makes this quite apparent.

That's a damn shame, Firefox. C'mon, HDR support feels like table stakes at this point.

edit2: Apparently it's not table stakes.

> Browser support is halfway there. Google beat Apple to the punch with their own version of Adaptive HDR they call Ultra HDR, which Chrome 14 now supports. Safari has added HDR support into its developer preview, then it disabled it, due to bugs within iOS.

at which point I would just say to the `lux.camera` authors - why not put a big fat warning at the top for users on Firefox or Safari (stable)? With all the emphasis on supposedly simplifying a difficult standard, the article has fallen for one of its most famous pitfalls.

"It's not you. HDR confuses tons of people."

Yep, and you've made it even worse for a huge chunk of people. :shrug: Great article n' all, just saying.


I wasn't using Firefox, but I had the page open on an old monitor. I dragged the page to an HDR display and the images pop.


Shame about the "research use only" limitation. That really puts local use in range for all sorts of devices - and with (allegedly) great performance! The future is bright/terrifying.


Funnily enough, Example Page 1 is wrong, rendering du^n as du^*, and then nu^n-1 as nw^*-1.

It is impressive, but... it really feels like those are the details that really matter.


The second page is even worse. It ends in repeated \cdots and doesn’t finish parsing the page. It also read the number 73 as 3, I guess because the previous section number was 2.


The main issue with OCR of anything with math in it is always that it has to be not 99.99% but 100% correct. Which is probably not possible.


In some ways, absolutely not - precision is a huge challenge with an indirect method like fMRI - but this example is over a decade old now: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3130346/

Fig. 4 shows the letter M on the cortical surface, where the stimulus accounted for the effects of foveal magnification (foveal vision gets more cortical space). Keep in mind that we now, in theory, have stronger magnets, better head coils (the part that picks up the image information), and better sequences (the software that manipulates the magnets to produce the images), so we could do even better than that these days.


If the images are in DICOM format, which is common, then dcm2niix should be able to convert them: https://github.com/rordenlab/dcm2niix

I think it can handle a few other formats as well. Once they are .nii(.gz) files, mricrogl (https://www.nitrc.org/projects/mricrogl) should be able to render them - of course, for a brain scan, this would be your whole head. Brain extraction is performed by more specialized software, but that would get you started.
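If you want to script that pipeline, here's a minimal sketch assuming dcm2niix is on your PATH and the nibabel Python package is installed (the directory and file names are placeholders):

    import subprocess
    import nibabel as nib  # pip install nibabel

    # Convert a folder of DICOM files to compressed NIfTI (.nii.gz)
    subprocess.run(["dcm2niix", "-z", "y", "-o", "nifti_out", "dicom_in"], check=True)

    # Load the result and check the voxel grid before opening it in mricrogl
    img = nib.load("nifti_out/scan.nii.gz")  # actual filename depends on dcm2niix's -f pattern
    print(img.shape, img.header.get_zooms())  # matrix size and voxel dimensions in mm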


Clinical scanners often use 2mm isotropic voxels, or even 3mm. Clinical usage is almost a bad reference point! Research MRI at ultra-high field (7T) goes to 0.8mm isotropic and below (0.5 or 0.6mm is possible).
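To give a rough sense of what those numbers mean, voxel volume scales with the cube of the edge length (a quick back-of-the-envelope sketch of my own, not a claim about any particular scanner):

    # Voxel volume scales with the cube of the edge length
    for edge_mm in (3.0, 2.0, 0.8, 0.5):
        print(f"{edge_mm} mm isotropic -> {edge_mm ** 3:.3f} mm^3 per voxel")

    # e.g. a 2 mm voxel averages over (2 / 0.8) ** 3 ~= 15.6x more tissue than a 0.8 mm voxel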


7T is already regularly used for human research, and approval for human use has been granted for 10.5T and, I believe, 11.7T (though I'm not sure how many images they've gotten out of that yet).

Yes it is incredibly expensive, but it is in fact already done.


What's the bore diameter for a 7T or a 10T magnet? All of the ones I've seen wouldn't be wide enough for an adult human.


They are human-sized research (and now clinical, albeit limited so far) magnets, so big enough - think about 60cm. It's an elbow-rubbing environment, but sufficient for even somewhat large adults. The animal magnets are super tiny, of course.


I see a 7T model from Siemens with a standard 60cm bore. Interesting!


I am impressed by what's possible, thank you!


https://thedebrief.org/impossible-photonic-breakthrough-scie... :

> For decades, that [Abbe diffraction] limit has operated as a sort of roadblock to engineering materials, drugs, or other objects at scales smaller than the wavelength of light manipulating them. But now, the researchers from Southampton, together with scientists from the universities of Dortmund and Regensburg in Germany, have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.

> According to that research, the key to confining light below the previous impermeable Abbe diffraction limit was accomplished by “storing a part of the electromagnetic energy in the kinetic energy of electric charges.” This clever adaptation, the researchers wrote, “opened the door to a number of groundbreaking real-world applications, which has contributed to the great success of the field of nanophotonics.”

> “Looking to the future, in principle, it could lead to the manipulation of micro and nanometre-sized objects, including biological particles,” De Liberato says, “or perhaps the sizeable enhancement of the sensitivity resolution of microscopic sensors.”

"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33493885

Could such inexpensive coherent laser light sources reduce medical and neuroimaging costs?

"A simple technique to overcome self-focusing, filamentation, supercontinuum generation, aberrations, depth dependence and waveguide interface roughness using fs laser processing" https://scholar.google.com/scholar?start=10&hl=en&as_sdt=5,4... :

> Several detrimental effects limit the use of ultrafast lasers in multi-photon processing and the direct manufacture of integrated photonics devices, not least, dispersion, aberrations, depth dependence, undesirable ablation at a surface, limited depth of writing, nonlinear optical effects such as supercontinuum generation and filamentation due to Kerr self-focusing. We show that all these effects can be significantly reduced if not eliminated using two coherent, ultrafast laser-beams through a single lens - which we call the Dual-Beam technique. Simulations and experimental measurements at the focus are used to understand how the Dual-Beam technique can mitigate these problems. The high peak laser intensity is only formed at the aberration-free tightly localised focal spot, simultaneously, suppressing unwanted nonlinear side effects for any intensity or processing depth. Therefore, we believe this simple and innovative technique makes the fs laser capable of much more at even higher intensities than previously possible, allowing applications in multi-photon processing, bio-medical imaging, laser surgery of cells, tissue and in ophthalmology, along with laser writing of waveguides.

Transfer learning might be useful for training a model to predict e.g. [portable] low-field MRI from NIRS infrared and/or ultrasound? FWIU, "Mind2Mind" is one way to ~train a GAN from another already-trained GAN?

From https://twitter.com/westurner/status/1609498590367420416 :

> Idea: Do sensor fusion with all available sensors timecoded with landmarks, and then predict the expensive MRI/CT from low cost sensors

> Are there implied molecular structures that can be inferred from low-cost {NIRS, Light field, [...]} sensor data?

> Task: Learn a function f() such that f(lowcost_sensor_data) -> expensive_sensor_data
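As a toy illustration of that "learn f()" task, here is a minimal sketch on purely synthetic stand-in data; the feature counts, names, and the ridge-regression choice are all my own assumptions, not anything from the linked thread:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins: 500 paired acquisitions, 64 low-cost sensor
    # features (e.g. NIRS channels) and 256 "expensive" target values
    # (e.g. MRI voxel intensities) per sample.
    rng = np.random.default_rng(0)
    lowcost = rng.normal(size=(500, 64))
    expensive = lowcost @ rng.normal(size=(64, 256)) + 0.1 * rng.normal(size=(500, 256))

    X_tr, X_te, y_tr, y_te = train_test_split(lowcost, expensive, random_state=0)

    # f(lowcost_sensor_data) -> expensive_sensor_data
    f = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print("held-out R^2:", f.score(X_te, y_te))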

FWIU OpenWater has moved to NIRS + ultrasound for ~live, in-surgery, MRI-level imaging and now treatment?

FWIU certain Infrared light wavelengths cause neuronal growth; and Blue and Green inhibit neuronal growth.

What are the comparative advantages and disadvantages of these competing medical imaging and neuroimaging capabilities?


This is incorrect - all of the protons align along the static field (the strong 1.5, 3, 9.4, etc. Tesla field); some point one way and some the other, but they have all shifted so that they line up. The excite portion is a separate step, distinct from the static (B0) field. edit: distinct in some ways - the strength of the static field determines the RF frequency used to flip the protons out of alignment.


You are mixing up a few things here, but they all miss the point. At normal body temperatures, the thermal energy distribution prevents most protons in a human from aligning parallel or antiparallel with the static field, even at MRI field strengths of several Tesla. The excitation by the varying field, which only affects another one-in-a-million of those aligned protons, is indeed another step, meaning that even fewer protons actually get to experience the precession effect. So about one in a million protons gets aligned with the static field, and less than one in a trillion gets to produce a measurable signal. But since there are so many of them (~10^20 per mm^3 for water), you still get enough (about 1000 protons or so per voxel) to measure a signal at 2 Tesla. With higher field strengths you can get a bit more, and thus more resolution, but even at 10 Tesla you won't align all of your protons - not even close.
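For a rough check on the "one in a million" figure, here's a back-of-the-envelope Boltzmann-polarization estimate (my own sketch, using the standard high-temperature approximation P ~ dE / (2kT) with dE = h * gamma * B0):

    import math

    h = 6.626e-34             # Planck constant, J*s
    k = 1.381e-23             # Boltzmann constant, J/K
    gamma = 42.58e6           # proton gyromagnetic ratio, Hz/T
    T = 310.0                 # body temperature, K

    for B0 in (1.5, 2.0, 3.0, 7.0, 10.0):
        dE = h * gamma * B0                 # Zeeman energy gap between spin states
        P = math.tanh(dE / (2 * k * T))     # net spin excess (polarization) fraction
        print(f"{B0:>4} T: ~{P * 1e6:.0f} excess protons per million")

At 2-3 Tesla that works out to roughly 7-10 excess protons per million, which is the same order of magnitude as the "about one in a million" above, and even at 10 Tesla it only rises to a few tens per million.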

