mamp's comments | Hacker News

I haven't been writing Rust for that long (about two years), but every time I see .unwrap() I read it as 'panic in production'. Clippy needs stricter checks on unwrap.
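
For anyone who wants that stricter check today: Clippy already ships an allow-by-default lint for exactly this, clippy::unwrap_used. A minimal sketch of turning it into a hard error (the config.toml read is just a stand-in example):

    // Clippy's unwrap_used lint is allow-by-default, so it has to be opted
    // into; denying it makes every .unwrap() an error under `cargo clippy`.
    #![deny(clippy::unwrap_used)]

    use std::fs;

    fn main() {
        // fs::read_to_string("config.toml").unwrap(); // <- now rejected by Clippy
        match fs::read_to_string("config.toml") {
            Ok(contents) => println!("{contents}"),
            Err(e) => eprintln!("couldn't read config: {e}"), // handle, don't panic
        }
    }

Call sites where a panic really is intended can still opt out locally with #[allow(clippy::unwrap_used)].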

To be fair, the performance of rules, Bayesian networks, or statistical models wasn't the problem (that is, performance relative to existing practice). De Dombal showed in 1972 that a simple Bayes model was better than most ED physicians at triaging abdominal pain.
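
To give a sense of how simple "simple" can be, here is a toy sketch of that kind of naive Bayes scoring in Rust. The diagnoses, findings, and every probability below are invented for illustration; they are not de Dombal's actual tables:

    // Toy naive Bayes diagnostic scoring in the spirit of de Dombal's system:
    // P(dx | findings) is proportional to P(dx) * product of P(finding | dx).
    fn main() {
        // (diagnosis, prior, P(finding present | diagnosis) for two findings)
        let diagnoses = [
            ("appendicitis", 0.15_f64, [0.80, 0.70]),
            ("non-specific pain", 0.60, [0.30, 0.20]),
            ("cholecystitis", 0.25, [0.10, 0.40]),
        ];
        let findings = [true, false]; // e.g. RLQ tenderness present, vomiting absent

        let scores: Vec<f64> = diagnoses
            .iter()
            .map(|(_, prior, lik)| {
                prior
                    * lik
                        .iter()
                        .zip(&findings)
                        .map(|(p, &present)| if present { *p } else { 1.0 - *p })
                        .product::<f64>()
            })
            .collect();

        // Normalize so the scores are posterior probabilities over diagnoses.
        let total: f64 = scores.iter().sum();
        for ((name, _, _), score) in diagnoses.iter().zip(&scores) {
            println!("P({name} | findings) = {:.3}", score / total);
        }
    }

The conditional-independence assumption between findings is what keeps the probability tables small enough to estimate from modest datasets.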

The main barrier to scaling was workflow integration, due to the lack of electronic data and, where data did exist, interoperability (as it is today). The other barriers were problems with maintenance and performance monitoring, which are still issues today in healthcare and other industries.

I do agree the Fifth Generation project never made sense, but as you point out, they had developed hardware to accelerate Prolog, wanted to show it off, and overused the tech. Hmmm, sounds familiar...


Here are more expansive reflections on FGCS from Alan Kay and Markus Triska: https://www.quora.com/Why-did-Japan-s-Fifth-Generation-Compu...

The Ueda paper they cite is so lovely to read, full of marvelous ideas:

Ueda K. Logic/Constraint Programming and Concurrency: The hard-won lessons of the Fifth Generation Computer project. Science of Computer Programming. 2018;164:3-17. doi:10.1016/j.scico.2017.06.002. Open access: https://linkinghub.elsevier.com/retrieve/pii/S01676423173012...


Don't attribute to jealousy that which can be adequately explained by vanishing gradients.

BTW, the ad hoc treatment of uncertainty in Mycin (certainty factors) motivated the work on Bayesian networks.


Unfortunately, I think the context rot paper [1] found that the performance degradation as context grows still occurred in models using attention sinks.

1. https://research.trychroma.com/context-rot


I saw that paper but haven't had a chance to read it yet. Are there other techniques that help, then? I assume there are a few different ones in use.


I've been using Gemini 2.5 and Claude 3.7 for Rust development, and I have been very impressed with Claude, which wasn't the case for some architectural discussions, where Gemini impressed with its structure and scope. OpenAI 4.5 and o1 have been disappointing in both contexts.

Gemini doesn't seem as keen to agree with me, so I find it makes small improvements, whereas Claude and OpenAI will go along with my initial suggestions until specifically asked to make improvements.


I have noticed Gemini not accepting an instruction to "leave all other code the same but just modify this part" on code that used an alpha API with a different interface than what Gemini knows as the current API. No matter how I prompted 2.5 Pro, I couldn't get it to respect my use of the alpha API; it would just decide I must be wrong.

So I think patterns from the training data are still overriding some actual logic/intelligence in the model. Or the Google Assistant fine-tuning is messing it up.


I have been using Gemini daily for coding for the last week, and I swear they are pulling levers and A/B testing in the background. Which is a very Google thing to do. They did the same thing with Assistant, which I was a pretty heavy user of back in the day (I was driving a lot).


When do you turn it off? I have an M1 Mac Studio and I just let it sleep. If things get weird, I reboot. I think the last time I used the power button was about a year ago, after returning from a vacation I had shut it down for.


Right now I mount up to seven HDDs on the Mac via SMB, and have a Stream Deck, a pedal, and the necessary external SSDs for fast storage connected. I will see if the SMB mounts come back OK after sleep (my laptop acts as the server), but the Stream Deck and HDDs wake up randomly, so overall it's easier to switch everything on and off depending on usage.


Stop complaining about the Mini; you should complain to Stream Deck.


Like seriously, WTF, why are people turning it off? It's 3 watts at idle, lol. Most power supplies have that much phantom drain.


Everyone keeps citing idle power, which is when the device is on and awake but not doing anything in particular.

The standby power draw is 1W or less. I've used Mac Minis for years -- just replaced my M1 with an M4, though the M1 left me wanting for nothing -- and the number of times I've interacted with the power button is so negligible I imagine I've gone over a year without touching it. When I haven't touched it in a while it goes to standby, waking instantly when I engage it again.


Not everyone lives the same way. I am seriously considering a Mac Mini as my next upgrade, yet I live in an RV and move frequently. Are there ways I can keep the Mac Mini powered while traveling? Sure, but why would/should I?


Are you not turning off entire circuits to reduce power draw when mobile? I'm actually thinking about one of these for my truck camper, and its power draw seems fine, but the stumbling point for me is the additional power draw from the monitor it would require. I think I'm leaning toward an M4 MBP with the nano-texture screen for maximum power efficiency and the ability to work outside when it's nice, though I have not yet put much effort into researching efficient monitors.


My EU mind is blown by these claims. Let's take the lowest (1W) sleep figure. With a thousand Mac Minis in sleep mode, that is already 1kW! In my country, a single-person household's yearly electricity package comes to 1400kW (+100 depending on provider) per year.

Note: intentionally keeping it simple, please don’t nitpick.


No household uses 1400kW, and kW/year doesn't make sense. Do you mean 1400kWh/year? That seems pretty low (NZ is around 7000kWh/year), but if so, you're comparing power to energy, which doesn't mean much. 1W running 24/7 comes to less than 9kWh/year, which is pretty small.


Personal guess from a fellow European citizen: I think they meant to say 1700 kWh/year. According to most German power utilities, the average 2-person household consumes about 2400 kWh/year.


It’s not clear what your point is because you’ve made a strong argument for it being negligible.


But unfortunately it’s not premature. It’s been a problem for so long!


I suspect it's not so much the "expert" instruction as the list of subjects of expertise. These words generate embeddings that have a better chance of activating useful pathways and relationships within the LLM's generation path.

My understanding is that the goal is to prime the LLM with context it will use when generating answers, rather than hoping it will infer those connections in the feed-forward layers from a sparse prompt.
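
A minimal sketch of that idea (in Rust only because it's the language elsewhere in the thread): the same request with and without the subject vocabulary front-loaded. The prompts and domain terms are made up, and the actual model call is deliberately omitted:

    fn main() {
        // Sparse prompt: the model has to infer which domain is relevant.
        let sparse = "Review this schema for problems.";

        // Primed prompt: the domain terms themselves (normalization, indexing,
        // isolation levels) put the relevant concepts into the context window.
        let primed = "You are an expert in relational database design, \
                      normalization, indexing strategy, and transaction \
                      isolation levels. Review this schema for problems.";

        for (label, prompt) in [("sparse", sparse), ("primed", primed)] {
            println!("--- {label} ---\n{prompt}\n");
        }
    }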


Was it the "map of Tasmania"? Then I could understand why...

https://info.umkc.edu/womenc/2016/05/20/showin-off-her-map-o....


Apple’s first in-house designed chip was the A4 in 2010.

