
I found myself accidentally behind the secured area of this solar installation while driving in the Mojave National Preserve. It is truly bizarre to see up close: the glow you see around the towers in the article's photo is quite bright in person.

I wondered when I saw it - is that glow the air turning into plasma? Are otherwise-invisible dust particles reflecting the absurd amount of light hitting them? Is the heat enough for the air to start scattering light?

It's no surprise that it would incinerate birds, in any case.


“However, it is important to note that this does not impact our results,” [lead study author] Liu told National Post.

So an order-of-magnitude difference has no impact on the results? How can that be?


Maybe they're just saving face or don't want to take responsibility for people having unnecessarily thrown away their utensils.


Why include that capacitor at all if it doesn't matter whether it works?


If you look at the traces you can see the capacitor is right next to the power connector, on the -5V rail (which is not used for much, only for the RS-422 serial port). The capacitor will be there to smooth the power supply when the machine is first switched on, or when a sudden load causes the voltage to "dip" above -5V (i.e. toward 0 V). Basically it's like a tiny rechargeable battery which sits fully charged most of the time but can supplement the power on demand.
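To put a rough number on that "tiny rechargeable battery" role, here's a back-of-the-envelope Python sketch. Every component and load value below is an assumption picked for illustration, not a measurement from the actual board:

    # How far a rail sags when a capacitor alone supplies a brief load transient.
    # All values are assumed for illustration, not measured from the Mac's -5V rail.
    C = 47e-6    # assumed bulk capacitance, farads
    I = 0.02     # assumed transient load current, amps
    dt = 0.001   # assumed transient duration, seconds

    # If the regulator is slow to respond, the capacitor supplies the charge,
    # and the rail moves by delta_V = I * dt / C.
    delta_V = I * dt / C
    print(f"Sag during a {dt*1e3:.0f} ms, {I*1e3:.0f} mA transient: {delta_V:.2f} V")
    # ~0.43 V with these numbers: noticeable, but survivable on a serial-port rail.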

So you can see why it probably didn't matter that this capacitor didn't work: it's only needed on rare occasions. RS-422 is a differential form of RS-232 (https://en.wikipedia.org/wiki/RS-422), so being differential it's fairly robust against changes in load as long as they affect both wires equally. And the worst that can happen is you lose a few characters from your external modem.
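As a toy illustration of why a differential link shrugs off a disturbance that hits both wires equally (all numbers here are made up for the example):

    # Toy model of a differential receiver like RS-422: it reads the difference
    # between the two wires, so anything added equally to both wires (e.g. a
    # sagging supply) cancels out. Values are illustrative only.
    data = [1, -1, 1, 1, -1]    # logic levels driven differentially
    common_mode = -2            # same disturbance applied to both wires

    wire_a = [ d + common_mode for d in data]
    wire_b = [-d + common_mode for d in data]

    received = [a - b for a, b in zip(wire_a, wire_b)]
    print(received)             # [2, -2, 2, 2, -2] -- data polarity intact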

In addition, electrolytics can probably work when reversed like this, at least a little bit. It's not exactly optimal and they might catch fire(!).


> It's only needed for rare occasions.

The two RS-422 ports are actually used quite often on these old Macs for printers, modems and AppleTalk networking. They were the only communication ports, as there was no parallel port. They were backwards compatible with RS-232.

So it obviously worked well enough.

The backwards cap was measured to reduce the voltage to about -2.4 V.

I suspect that all it did was reduce the maximum range, which started at a massive 1,200 meters for RS-422 (and a good 10 m for RS-232).


Also known as the Madman Muntz theory of Engineering :-)

https://en.wikipedia.org/wiki/Muntzing


I never knew there was a name for this :)

When I was a demo coder, my artist friend would just haphazardly go through all my assembler code and snip random lines out to improve performance, until it stopped working.


This goes massively against the consensus of experts in this field. The modal AI researcher believes that "high-level machine intelligence", roughly AGI, will be achieved by 2047, per the survey below. Given the rapid pace of development in this field, it's likely that timelines would be shorter if this were asked today.

https://www.vox.com/future-perfect/2024/1/10/24032987/ai-imp...


I am in the field. The consensus is made up by a few loudmouths. No serious front line researcher I know believes we’re anywhere near AGI, or will be in the foreseeable future.


So the researchers at Deepmind, OpenAI, Anthropic, etc, are not "serious front line researchers"? Seems like a claim that is trivially falsified by just looking at what the staff at leading orgs believe.


Apparently not. Or maybe they are heavily incentivized by the hype cycle. I'll repeat one more time: none of the currently known approaches are going to get us to AGI. Some may end up being useful for it, but large chunks of what we think is needed (cognition, world model, ability to learn concepts from massive amounts of multimodal, primarily visual, and almost entirely unlabeled, input) are currently either nascent or missing entirely. Yann LeCun wrote a paper about this a couple of years ago, you should read it: https://openreview.net/pdf?id=BZ5a1r-kVsf. The state of the art has not changed since then.


I hope you have some advanced predictions about what capabilities the current paradigm would and would not successfully generate.

Separately, it's very clear that LLMs have "world models" in most useful senses of the term. Ex: https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-o...

I don't give much credit to the claim that it's impossible for current approaches to get us to any specific type or level of capabilities. We're doing program search over a very wide space of programs; what that can result in is an empirical question about both the space of possible programs and the training procedure (including the data distribution). Unfortunately it's one where we don't have a good way of making advance predictions, rather than "try it and find out".


It is in moments like these that I wish I wasn’t anonymous on here and could bet a six-figure sum on AGI not happening in the next 10 years, which is how I define “foreseeable future”.


You disagreed that 2047 was reasonable on the basis that researchers don't think it will happen in the foreseeable future, so for consistency's sake your definition must be at least 23 years.


I'd be OK with that, too, if we adjusted the bet for inflation. This is, in a way, similar to fusion. We're at a point where we managed to ignite plasma for a few milliseconds. Predictions of when we're going to be able to generate energy have become a running joke. The same will be the case with AGI.


LeCun has his own interests at heart, works for one of the most soulless corporations I know of, and devotes a significant amount of every paper he writes to citing himself.

He is far from the best person to follow on this.


Be that as it may, do you disagree with anything concrete from this paper?


Fair, ad hominems are indeed not very convincing. Though I do think everyone should read his papers through a lens of "having a very high h-index seems to be a driving force behind this man".

Moving on, my main issue is that it is mostly speculation, as all such papers will be. We do not understand how intelligence works in humans and animals, and most of this paper is an attempt to pretend otherwise. We certainly don't know where the exact divide between humans and animals is and what causes it, which I think is hugely important to developing AGI.

As a concrete example, in the first few paragraphs he makes a point about how a human can learn to drive in ~20 hours, but ML models can't drive at that level after countless hours of training. First you need to take that at face value, which I am not sure you should. From what I have seen, the latest versions of Tesla FSD are indeed better at driving than many people who have only driven for 20 hours.

Even if we give him that one though, LeCun then immediately postulates this is because humans and animals have "world models". And that's true. Humans and animals do have world models, as far as we can tell. But the example he just used is a task that only humans can do, right? So the distinguishing factor is not "having a world model", because I'm not going to let a monkey drive my car even after 10,000 hours of training.

Then he proceeds to talk about how perception in humans is very sophisticated and this in part is what gives rise to said world model. However he doesn't stop to think "hey, maybe this sophisticated perception is the difference, not the fundamental world model". e.g. maybe Tesla FSD would be pretty good if it had access to taste, touch, sight, sound, smell, incredibly high definition cameras, etc. Maybe the reason it takes FSD countless training hours is because all it has are shitty cameras (relative to human vision and all our other senses). Maybe linear improvements in perception leads to exponential improvement in learning rates.

Basically he puts forward his idea, which is hard to substantiate given we don't actually understand the source of human-level intelligence, and doesn't really want to genuinely explore (i.e. steelman) alternate ideas much.

Anyway that's how I feel about the first third of the paper, which is all I've read so far. Will read the rest on my lunch break. Hopefully he invalidates the points I just made in the latter 2/3rds.


51% odds of the ARC AGI Grand Prize being claimed by the end of next year, on Manifold Markets.

https://manifold.markets/JacobPfau/will-the-arcagi-grand-pri...


This could also just be an indication (and I think this is the case) that many Manifold bettors believe the ARC AGI Grand Prize is not a great test of AGI and can be solved by something less capable than AGI.


I don't understand how you got 2047. For the 2022 survey:

    - "How many years until you expect: - a 90% probability of HLMI existing?" 
    mode: 100 years
    median: 64 years

    - "How likely is it that HLMI exists: - in 40 years?"
    mode: 50%
    median: 45%
And from the summary of results: "The aggregate forecast time to a 50% chance of HLMI was 37 years, i.e. 2059"
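A quick arithmetic check of the figure quoted in that summary (the survey year is the 2022 stated above):

    # Sanity check on the quoted 2022 survey summary.
    survey_year = 2022
    years_to_50pct_hlmi = 37   # "aggregate forecast time to a 50% chance of HLMI"
    print(survey_year + years_to_50pct_hlmi)   # 2059, not 2047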


Reminds me of what they've always been saying about nuclear fusion.


In 2022, the median ML researcher surveyed thought that there is a 5% or 10% chance of AI leading to "human extinction or similarly permanent and severe disempowerment of the human species," depending on how the question was asked.

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#...


I claim in 2024 this has doubled or tripled.


In most companies, "operations" staff are among the lowest-paid: customer service, cashiers, bank tellers, factory workers, etc. They are used to constant adherence to strict metrics.

Medical care is strange in that the operational staff includes doctors, who will bristle at their expertise and years of schooling culminating in being judged on how many reviews (or appointments) they can churn through.

It's an interesting issue that I sense naturally causes this kind of conflict.


The reason this is so incredibly bad is that there is no competition in medicine.

Each special interest group has carved out a chunk of taxes and power, and now seeks to expand through more taxation or higher prices and lower quality.

I own a clinic, and our most misaligned incentive is that Medicaid patients get essentially free care, so they somehow end up with more visits than someone on private insurance who pays out of pocket until the deductible is met and is therefore much more cost-conscious.


For what it's worth, "play money" betting site Manifold is currently at a 69% chance of sale versus 31% of shutting down.

https://manifold.markets/mint/conditional-on-the-tiktok-ban-...


There's another market with more options where "ban will be rendered unenforceable by courts" is leading: https://manifold.markets/MichaelBlume/tiktok-endgame-which-w...


What about no sale yet they continue to operate because the ban is not enforceable either legally or technically?


For example, if they (or some party that's directly targeted, like Apple or Cloudflare) get a U.S. court to enjoin enforcement of some of the provisions.


Plug pulling is easy technically. I think plug pulling legally is also easy. ByteDance just doesn't have a strong defense.


The defense is the Bill of Rights.


Buzzkill! Everybody else on here was having fun opining that it's 100% certain that China pulls the plug, or conversely that a sale will happen, and you just had to point out that it's an empirical question where nobody can read the CCP's mind. Boo.


> Separately this is generally not a great idea economically. You do kill inflation in Argentina (replaced with American inflation but that is extremely low and stable by Argentine standards). But you have no control over your monetary policy or your exchange rates. If your economy is export driven to (e.g.) China, you now have to worry about the dollar-yuan rate, which is not under your control.

"generally not a great idea economically" is true, but this specific case is one of the few where it could be a fantastic idea.

(1) Control over one's own currency > (2) pegging to or using USD >>>> historic Argentinian monetary policy. If (1) is impossible, then (2) would represent a monumental improvement over the status quo.


Social status is positional and follows arms race dynamics. Everyone would have more money, and nobody would end up worse off in positional status, if everyone spent proportionately less.

Also, there is a deep body of economics and psychology literature describing how individual spending decisions are sometimes suboptimal in predictable ways, particularly that we under-value experiential purchases.

It doesn't contradict general "live and let live" principles to think that cultural emphasis on positional goods is bad, nor that individuals might be better off if they made different consumption choices.


Traffic is a function of how many cars get through a road or intersection per unit time; throughput goes up (and congestion goes down) if cars keep shorter following distances, which self-driving cars can do because of faster reaction times.

In the extreme, imagine if a light turns green and all cars in line accelerate at the same time. Intersections would let through a lot more cars.

Or think of freeway speeds where the length of a car is near-negligible. A one-second following distance means almost 60 cars pass a given point in a minute, versus almost 30 cars per minute for a two-second following distance.
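A minimal sketch of that throughput arithmetic, treating car length as negligible as in the freeway example above (the headway values are just the ones from the example):

    # Single-lane throughput past a point when every car keeps the same time
    # headway (following distance in seconds). Car length is ignored.
    def cars_per_minute(headway_seconds):
        return 60.0 / headway_seconds

    for headway in (1.0, 2.0):
        print(f"{headway:.0f} s headway -> {cars_per_minute(headway):.0f} cars/min")
    # 1 s headway -> 60 cars/min
    # 2 s headway -> 30 cars/min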

