atticora's comments | Hacker News

There's a nice discussion of this in Surely You're Joking, Mr. Feynman!

> I discovered a very strange phenomenon: I could ask a question, which the students would answer immediately. But the next time I would ask the question – the same subject, and the same question, as far as I could tell – they couldn’t answer it at all!

> Then I say, “The main purpose of my talk is to demonstrate to you that no science is being taught in Brazil!”

https://v.cx/2010/04/feynman-brazil-education


We had a student in class who was brilliant at memorizing stuff.

But each test had one or two questions where you had to put the knowledge together, not just regurgitate it, and that student consistently failed those questions on each and every test.

Yet the student got top scores on each and every test, because the accumulated number of points was enough to get them into the top bracket.

I was so annoyed by that and asked the teacher how they could get top scores while clearly demonstrating they didn't understand the subject matter. All in vain, of course.

edit: Great read BTW


Measuring student outcomes is hard!

For example, do you think we should encourage students to study for tests, or should we encourage them to just show up? After all, if you understand it intuitively, why would you need to study the night before?

Also, the act of testing changes the students being measured, as does the existence of a test in the future.


Hey, this is a good Stack Overflow answer, why can't I upvote it? Oh.


Learning to use a REPL. It's been at the core of my coding practice ever since. Using it as a debugger in an interpreted language, especially, was a game changer for me. It converts programming into a fast interactive game.
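For instance, here's a minimal Python sketch (the function and values are made up for illustration): the built-in breakpoint() pauses the program and drops you into an interactive pdb prompt with all the live state in scope.

  # A minimal sketch: using Python's built-in debugger as an interactive REPL.
  # Run the script; execution pauses at breakpoint() with live state in scope.
  def price_with_tax(price, rate=0.08):  # hypothetical example function
      total = price * (1 + rate)
      breakpoint()  # opens a pdb prompt: inspect or mutate price, rate, total
      return total

  if __name__ == "__main__":
      print(price_with_tax(100))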


Is the code for the weather report at all similar to the one for "help I am a prisoner in a vintage dress factory"?


It depends. I've been coding for a living for 41 years and have a feeling I'll get the hang of it real soon now.


Then there's C++. After ~30 years of it, I realized that I don't know how to initialize an object anymore. So I've given up.


I've been doing C++ since '93. What's changed that makes you feel this way?


Everything, man. Everything.

Think of it from the perspective of pulling some C++ programmer from 1993 into 2024 and dropping them into a large C++ codebase consistently written with the latest C++ idioms.

(And yes, this is humorous exaggeration. But the name C++ is apt: a C language dedicated to accumulating features.)


It's a post-increment operator. If it were called ++C, you'd have a point :)

I just don't understand how someone could have been working in C++ and not picked up the largest changes even just by osmosis. My codebase "upgraded" to C++11 about a year ago, but I can still read C++17 and am not intimidated by C++20 fragments. YMMV, I suppose.


++C = Every time you go to write new code, some new features have already been adopted that you need to get hip with.

C++ = Every time you look at old code, it is already out of date based on new features that were adopted after the code was written.


If only they would stop pulling the rug out from under ya, am I right? Just when you finish with one API, they go and release a new one.


Amazing answer


[flagged]


Right - because the entire point of learning to program is becoming Mr. Robot


Is it possible you are not cut out for sarcasm?

(Giving you the benefit of the doubt: you may have taken the comment literally. If so, I'd maybe apologize for insulting someone with 41 years of experience in a field that is still young, while having no knowledge of exactly who you are talking to.)


I aim higher.


The first-generation iPhone was announced in January 2007. As of November 2018, more than 2.2 billion iPhones had been sold. The child suicide rate topped out when the cell phone market became saturated. Hardly proof, but it would be interesting to see suicide statistics broken out by cell phone use.


  conscious, kŏn′shəs, adjective -- Characterized by or having an awareness of one's environment and one's own existence, sensations, and thoughts. synonym: aware.
Self-attention seems to be at least a proxy for "awareness of ... one's own existence." If that closed loop is the thing that converts sensibility into sentience, then maybe it's the source of LLMs' leverage too. Is this language-comprehension algorithm a sort of consciousness algorithm?


ML attention is nothing like human attention. I think it’s madness to attempt to map concepts from one field we barely understand to another field we also barely understand just because they use overlapping language.


Having done some research into human attention, I have to agree with Hommel et al.: no one knows what attention is [1].

In current ANNs, "attention" is quite well defined: how to weigh some variables based on other variables (a concrete sketch follows at the end of this comment). But anthropomorphizing such concepts indeed muddies things more than it clarifies. That includes calling interconnected summation units with non-linear transformations "neural networks".

But such (wrong) intuition-pumping terminology does attract, well, attention, so it gets adopted.

[1] https://link.springer.com/article/10.3758/s13414-019-01846-w
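To make that well-defined sense concrete, here's a minimal numpy sketch (shapes, names, and data are purely illustrative, not any particular model): the values are weighted based on the similarity between queries and keys.

  # A minimal sketch of ANN "attention": weigh some variables (values)
  # based on other variables (queries and keys). Purely illustrative.
  import numpy as np

  def attention(Q, K, V):
      # scaled dot-product similarity between queries and keys
      scores = Q @ K.T / np.sqrt(K.shape[-1])
      # softmax turns scores into weights that sum to 1 per query
      weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
      weights /= weights.sum(axis=-1, keepdims=True)
      # each output row is a weighted average of the value rows
      return weights @ V

  rng = np.random.default_rng(0)
  Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
  print(attention(Q, K, V).shape)  # (4, 8)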


No. Self-attention is more akin to kernel smoothing[0] on memorized training data, spitting out a weighted probability distribution (sketched below, after the footnotes). As for consciousness, LLMs are not particularly aware of their own strengths and limitations, at least not unless you fine-tune them to know what they are and aren't good at. They also don't have sensors, so awareness of any environment is not possible.

If you trained a neural network with an attention mechanism on data obtained from, say, robotics sensors, then it might be able to at least have environmental awareness. The problem is that current LLM training approaches rely on large amounts of training data, which is easy to obtain for text but nonexistent for sensor input. I suspect awareness of one's own existence, sensations, and thoughts would additionally require some kind of continuous weight update[1], but I have no proof of that yet.

[0] https://en.wikipedia.org/wiki/Kernel_smoother

[1] Neural network weights are almost always trained in one big run, occasionally updated with fine-tuning, and almost never modified during usage of the model. All of ChatGPT's ability to learn from prior input comes from in-context learning which does not modify weights. This is also why it tends to forget during long conversations.
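Here's the kernel-smoothing analogy as a minimal sketch (Gaussian kernel, hypothetical data): Nadaraya-Watson regression is a similarity-weighted average over stored points, structurally the same weights-times-values operation as softmax attention.

  # A minimal sketch of the kernel-smoothing view: Nadaraya-Watson
  # regression, a similarity-weighted average over memorized points.
  import numpy as np

  def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
      # Gaussian kernel: similarity of the query to every stored x
      k = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
      # weighted average of the stored y values (compare: weights @ V)
      return (k * y_train).sum() / k.sum()

  x_train = np.linspace(0, 10, 200)
  y_train = np.sin(x_train)
  print(nadaraya_watson(5.0, x_train, y_train))  # close to sin(5.0)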


It’s debatable to what degree ”attention” in LLMs relates to ”attention” in psychology. See Cosma Shalizi’s note on this http://bactra.org/notebooks/nn-attention-and-transformers.ht...


Careful, depending on who you ask, there are 40 different definitions of the term. Any given mind, natural or artificial, may well pass some of these without passing all of them.


No mention of how this can translate to smooth motion without VR sickness. Perhaps it's a revolution for the people who can tolerate it.


This is a very frustrating app for me. It is one of the best documentation sources out there, but it has become unusable because it cannot retain my selection of documentation. Almost every other time I visit, I have to start from scratch picking the stack I use. It's great, but not great enough to keep doing that over and over and over ...

I don't have an issue with dropping cookies or local storage elsewhere. I'm on an up-to-date Linux Chrome. Any ideas?


The enabled docs are stored in local storage. Are you frequently deleting your browser data? The enabled docs can also be exported and re-imported as JSON.


Use devdocs-desktop? There are a lot of webview wrappers that have a separate config folder and store docs offline.

https://github.com/hardpixel/devdocs-desktop


So should Hacker News. As is, a comment with 2 upvotes and 0 downvotes looks the same as one with 102 up and 100 down.


I prefer the suggestion to show a percentage instead of the raw count of upvotes and/or downvotes. It seems more meaningful and contextual, and less likely to contribute to mob-mentality-like behaviour (where people in an online community tend to be swayed by the majority opinion and sometimes don't post opposing views because they fear the downvotes).


I'm much less concerned with comments than with posts, which don't have downvotes. This means a great story with a narrow focus can end up with a small number of votes, looking just like a content-less one in an area of general appeal (aka noise).

The things I keep coming to HN for are those deep stories about topics I don't normally read about, written and discussed in a manner that keeps me reading and learning.


Hacker News used to show "points" on comments, but that was removed quite a while ago.


I have had that thought, being aware of some back and forth in votes. But in the end it might just tend towards "I had 100 upvotes; those downvoters are just idiots." Not sure what else it really adds.


But that's good. People should keep posting views even if the majority disagrees with them.

