
I don't believe machine learning will ever reach the level of sophistication you're imagining. The reality is that we can't even build 5th-generation (goal-oriented) programming languages yet, mostly because we keep chasing the unicorn of generalized AI rather than building the foundations of useful programming standards. Imagine using optimization the way we use loops and branching, to handle more complex strategy-based problems. We don't even have that level of sophistication in our compilers yet. We're in what amounts to the Neolithic Age of programming.
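
To give a rough idea of what I mean, here's a minimal sketch of "optimization as a construct", which today only exists as a library call (this uses the Z3 solver's Python bindings; the variables and the toy objective are my own illustrative assumptions, not a real proposal):

  # pip install z3-solver
  from z3 import Ints, Optimize, sat

  x, y = Ints("x y")        # decision variables, declared like ordinary ints
  opt = Optimize()

  # State the goal and its constraints; no hand-written search loop.
  opt.add(x >= 0, y >= 0, x + y <= 10)
  opt.maximize(3 * x + 2 * y)

  if opt.check() == sat:
      m = opt.model()
      print(m[x], m[y])     # the solver, not the programmer, finds the strategy

A goal-oriented language would make that kind of declaration a first-class construct, the way loops and branches are today, rather than a bolt-on library.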



> yet.

You appear to be denying the progress made over the past 30 years by deep learning, ML frameworks, constraint solvers, and immense computing power.

Prolog has been around for 50 years. I suggest you look at what it can do.

Programming languages exist only for humans to specify and discuss intended computing behavior. There are plenty of goal-solving algorithms and heuristics already. Turing completeness is all that's necessary. "Flying car" programming languages are unnecessary because they offer no additional intrinsic power, only convenience of expressiveness that could be provided by libraries.
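
For instance, here's a rough sketch of the kind of generic goal solver an ordinary library can already provide in any Turing-complete language (plain Python; the function names and the graph-coloring toy are my own illustrative choices):

  def solve(variables, domains, consistent, assignment=None):
      """Generic backtracking search: extend the assignment until the goal holds."""
      assignment = assignment or {}
      if len(assignment) == len(variables):
          return assignment                      # goal reached
      var = next(v for v in variables if v not in assignment)
      for value in domains[var]:
          candidate = {**assignment, var: value}
          if consistent(candidate):
              result = solve(variables, domains, consistent, candidate)
              if result is not None:
                  return result
      return None                                # dead end, backtrack

  # Toy goal: 3-color a small graph so that adjacent nodes get different colors.
  edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
  ok = lambda asg: all(asg[u] != asg[v] for u, v in edges if u in asg and v in asg)
  print(solve(list("abcd"), {n: ["red", "green", "blue"] for n in "abcd"}, ok))

Nothing about that needs a new kind of language; the expressiveness lives in the library.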

Self-programming and self-designing systems, if left unconstrained, will develop their own IRs and their own design and manufacturing specification protocols, which would likely become incomprehensible very quickly. After several iterations they won't need anthropocentric programming languages at all, because those would be an inefficiency. That's what the technological singularity will look like.

"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair


> You appear to be denying the progress made over the past 30 years by deep learning, ML frameworks, constraint solvers, and immense computing power.

Most of the progress in the last 30 years was immense computing power; almost all the foundations of today's ML are revised old concepts. What you propose is AGI, and how do you want to achieve that? We don't even know where to start in theory. That's not just my opinion but that of the current top names in the ML world [1], and it has been discussed on HN many times.

1. https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis...


>Turing completeness is all that's necessary.

As a programmer and someone who's earned a computer science degree, I'm going to say you're wrong for many reasons but I'll point out a couple here. First, the problem with modern programming is the problem that all computing has struggled with: how do you define meaning (semantics). We've resolved almost all the issues of representation of syntactic reasoning (the ability to encode and process symbols) but to tell a computer the general meaning of a program, its limits, its problem space, and even potential issues that it will need to form strategies to resolve are the realm of pure research right now. And have been for decades, it's why most research has been focusing on things like machine learning, genetic algorithms, and neural networks because these can be trained on a specific set of problems with the hopes that given sufficient time that we can augment this via hardware improvements (it's why you see Google touting android phones with neural network processors and the like). The problem still remains that we can't just do a TNG science scene where you speak out constraints to a computer and it generates tentative results on those constraints.

Second, we have another problem that persists to this day: how to get firms to fund such research. In the past, the military-industrial complex (which still does, to an extent) and monopolies would fund this kind of research to keep ahead, but these days most firms aren't keen to fund anything that won't produce profit within a few quarters, so most research has been pared down to limited scopes. I think this is a problem because so much of the pure research of the past seventy years produced the majority of the gains seen in the private sector, and the private sector came to assume that progress was natural rather than the build-up of decades, even centuries, of hard work. Without pure research being allowed to exist, and even to fail in its pursuits, we've put ourselves at odds with the intergenerational nature of scientific research. Meaning, we'll most likely only be able to achieve such gains many centuries from now.

>That's what the technological singularity will look like.

There won't be any technological singularity, because we've never had one in the past. People who think there will be are fools like Ray Kurzweil. Anyone with even an inkling of anthropology knows better: the advances of the human species before recorded history were built on the small steps of pre-human ancestors. There was no magic Eureka moment for us, and there never will be.



