Gary Marcus is unquestionably one of the most negative and consistently wrong voices in the AI community. I do not understand why he continues to be given credence, or an audience for anything he claims.
Do you often discount studies and science when it comes to ideas that don't align with your natural instincts?
Is addiction a disease or a moral failure? Is depression a chemical imbalance or is it your own fault, maybe just “exercise and smile more”?
Questions of that nature are fair, but simply dismissing a researcher or author because they hold a viewpoint different from your own is exactly what's wrong with so much of discourse today.
You're implying that Robert Sapolsky speaks a truth that contradicts my belief system. Maybe you're assuming I'm Catholic (I'm not); Catholics see free will as a gift from God that allows humans to freely choose to love and obey Him. Catholic doctrine holds that free will is granted to humans by God as part of being created in His image and likeness. Without free will, humans could not be held morally responsible for their actions. The ability to choose between good and evil is fundamental to the Catholic understanding of sin and redemption.
No, I think Robert Sapolsky's focus is too narrow and materialistic, ignoring almost everything that makes us human. In the story of the seven blind men and the elephant, Sapolsky is the blind scientist who feels the trunk and proclaims the elephant is a snake. His science is flawed. If human behavior can be explained by chemicals, then why do we need his colleagues in other departments at Stanford, such as the psychologists?
My point was that he was doing what he does to make money, Malcolm Gladwell-style. I thought there was nothing wrong with that, but it was a reason I didn't take him seriously.
There are studies that show quite plainly that people with kids are unhappier than those without. The difference comes much later in life, when the happiness "winner" switches to the group with children.
One theory, aside from the obvious, is that the brain makes sense of whatever happens to it over time. Like how you miss out on some big opportunity, but years later you say, "ah, that's ok because it led me to where I am today."
No idea if that was coherent, as I'm a bit tired, but I wanted to share.
I'll speak as someone who waited until almost 40 to have kids and who feels, in hindsight, that I was being completely selfish. I prioritized my career, my travel, my sleeping in, my social life, my leisure, etc. As soon as you become a parent, you realize you shorted yourself and your kid of shared time on earth together. I also just don't have the energy or physical ability to play like I would have at a younger age. It's not something you can really grapple with, or consider the magnitude of, as a childless person purposely avoiding/delaying it. I don't think it's necessarily intentionally selfish in the moment, as you're living it, but later you sometimes get to reflect back and see your decisions for what they really were, with the clarity of hindsight, and you'll understand what you really did. It's a weird thing that happens, and I think it's part of the whole "wisdom with age" thing. I think I was just being selfish. All the travel and me-stuff I did in my 20s-30s is not really very important; my identity as a parent to the humans I created is my identity now. It's the only part of my identity I really care very much about. Sure, I still have hobbies and travel and stuff, but it's on an entirely different plane of importance.
What if someone is not waiting on anything and just doesn't want to have kids out of any number of reasons? It's alright if you have regrets, but I don't see it as selfish if you never intended to have kids in the first place.
Well, this is kind of the camp I was in. I never wanted kids, or so I said and thought. I knew my wife was kind of on the fence, so I knew it was a possibility. But I was actually kind of expecting her to either A) just one day tell me she had been thinking and wanted a kid or, B) what I actually hoped at the time, have her clock stop ticking while we were busy doing us, so the decision would be made for us. We had long agreed that if we ever did do it, we'd try, but if fertility was ever an issue we wouldn't pursue IVF or adoption; we'd just consider it a sign from the universe and live our lives happily childfree. But then she was diagnosed with, and overcame, a huge medical issue, and it had us re-evaluate our thoughts on family/life/everything in the process.
Also noteworthy is that I don't project an opinion of selfishness onto others; it's what I feel about myself. I can also say, now having a lot of friends who also waited for whatever reason, that it's not uncommon in this cohort. It's also very common that people wait, then run into fertility issues, and the feeling hits harder for them. We've known a ton of people who struggle hard with that, more so if having kids was already on your must-do list and you just delayed it too long.
So to answer your question more directly, I don't think the person you described is selfish. They may, however, realize they were if they ever decide to actually have kids, and perhaps have some regret due to that.
And you depend on doctors and sewerage workers and programmers and garbage men and police officers and soldiers and physicists and so on. Is it selfish not to become all of these things yourself?
As long as you participate actively in society by working, you are doing your part and are not being selfish.
And my taxes go towards (in most countries) supporting that. Also, the children are not obligated to take care of me. They can choose it as a profession and I will gladly pay them for it, just as I gladly pay any professional in my life for cutting my hair or helping my children if they have trouble at school (even though they themselves do not have to have children and are helping them instead of the other way around). Are the people providing services also selfish for providing them?
Why? Anyone paying taxes to the state has fully earned their right to exist, especially if the state has a welfare system.
Not having children isn't even a new concept. People have done it since time immemorial. What if they had multiple children and then lost them all to the statistically high child mortality rates? Were they being selfish by not furthering their lineage due to external circumstances?
Off the top of my head: not being able to biologically, thinking that they are not able to provide worthy living conditions for the children, abuse in childhood and being afraid of exhibiting the same behaviour, having sick parents/siblings who need a lot of care leaving no time for children, not wanting to increase the burden on the planet... or just plain having the freedom of choice.
It's not a blanket rule and it's more about how you'll feel about yourself, realizing you've been selfish, than it is projecting outwards that anyone without kids must be selfish; that's certainly not the case.
These seem like very clear exceptions and reasons. Take it as a general sentiment, I'm not trying to footnote everything I write online to consider every possible circumstance any human could conceivably encounter. I think you should have known these are clearly not selfish acts. Biologically infertile being selfish? Come on dude.
OK, it sounds like your experience was selfish simply because later on you had children, and the time you spent alone was time you could've spent with your eventual children.
But what of someone who will not have children at all? How is it, literally (in the definition of the word), "selfish" to not have children? From whom are you robbing experience? What is being "taken away" and from whom?
If you want kids, it's selfish to bring the other party into existence. Life isn't all great, and you don't have a choice in whether or not you're born.
If you don't want kids, it's selfish because you are not supporting the society (the other party) that supports your existence. If we don't have kids above the replacement rate, society doesn't continue. If you're fine with that, great, but it's hypocritical if you plan to keep benefiting from society in your own life.
What of the people that have children but those children do not contribute in any meaningful way to society? Is the implication here that every child born must somehow be a net-positive to society?
I see what you're saying about the replacement rate, but I don't see how, on a smaller scale, someone choosing to not have children is a selfish act within the context of a society that is not yet below "replacement" level.
"I'm not giving anyone else a chance" -- there is no "anyone else". There is nothing being robbed, as there is no other party that literally exists in the universe.
What I hear from millennials is "life is suffering, our future is doomed (global warming, mass extinction, insane politics, increasing loneliness), why would I force some kid to have to grow up in this shit world just for my own fulfillment?"
Which is crazy. Because no one had a life harder than the previous generations.
The amount of technological progress in the past decade is staggering. So many infrastructure issues are now simply automated away that we don't even have to think about them.
It’s a great time to be alive. And a great time to give someone else that opportunity to experience it too.
Looking at long-term trends, the only thing I can be sure of is that the future won't look like the present.
I am currently experiencing a strong drive to start a family. Let's say I get very lucky and cause a pregnancy tomorrow; the kid comes out 9 months later, and 21 years after that (± whatever for when school starts) they graduate university and… it's not predictable in the slightest.

We might have single-atom transistors as standard, or the factories might be too expensive to mass-produce them. We might have solved all genetic conditions, or research might have stalled along with Moore's Law. AI might be good enough to make human labour uneconomical (which is a separate issue from whether we will arrange our economies to make this a utopia or a dystopia), or it might not. Even without AI, internet connectivity and robotics are enough to make Amazon Mechanical Turk more like its namesake, with the same impact on cost-cutting and outsourcing to whoever has an internet connection.

Genetics research, even in no-growth-in-computing scenarios, may also make it affordable for small and dumb terrorist groups to attempt DIY genocides, and even if they fail at their goal that may still cause megadeaths… but we may well also have defences against it, either biological or surveillance-based. Similar issues apply to drones and high-powered lasers. We've already got government-affordable ways to put literally every human on the planet under 24/7 surveillance, and that's likely to get cheap enough for organised crime to automate blackmail; but we also already have people doing that with generative AI, and the social responses (let alone the legislative ones) could be almost anything. 3D-printed houses and boats are both realities now; will we finally witness large-scale seasteading, or is that fundamentally untenable?
And that's all assuming no world-ending, or even just economy-ending, catastrophes of any kind — no paperclip optimisers, no nuclear wars, no peak-${insert-resource-here}, no environmental issues causing 1e9-scale migration.
Media has them mind-broken. The material conditions that climate change etc. will inflict on them are nothing compared to the material conditions our ancestors endured before the creation of modern technology.
"Boo hoo, the weather is shitty and some people are moving around because of it" he says, as he doesn't have to worry about Bubonic Plague or Genghis Khan.
I wish people didn't mind people moving around, but (a) the UK gets worried about mere tens of thousands, and (b) "some people" in the case of climate-change induced changes to farming output would, with current economics, be "about three times the total population of Earth when the Black Death started".
I agree with you here. For boilerplate stuff I can never remember it's fine.
With more advanced use cases I find the LLMs better at exploring a problem space and possible solutions rather than providing a solution full stop. Being able to refine a problem down into a digestible chunk the AI can take a bite of does require human level understanding that's hard to get if you haven't 'done the work' to have good fundamentals.
> With more advanced use cases I find the LLMs better at exploring a problem space and possible solutions
The number of times it has misled me makes me reluctant to rely on it again. I was wasting time chasing nonexistent paths.
I think the marketing around ChatGPT is more powerful than the tool itself. It looks impressive to junior devs or people without expertise in a domain, but that's about it. What bothers me is the amount of spam around this tool; they need to work on improving it and then resume promoting it, as thus far it's just annoying noise.
There is an intuition you can develop for when it might not be good at solving a problem and when it will be. I have 10+ years of coding experience, tons of side projects besides work, and I find it amazing.
I actually think it may be even better with more experience, as experience lets you understand what it excels at and be critical of its output, recognizing when it might be wrong.
I would usually be able to do whatever it does; it would just take more energy and time, and I am able to almost immediately recognize when it is wrong.
In essence an AGI is an intelligence capable of upgrading itself — in terms of qualitative intelligence — and gets faster at this with every iteration (hence, upgrade). That is why it is often associated with technological singularities, and that is why it is easy to inspire fear by invoking its name, even if you're not building anything even remotely capable of such a feat.
You might say that's a very strict definition as opposed to "human level intelligence", but if you think about it, we are (humanity as a whole) certainly capable of that, so it ought to be one and the same thing.
In theory, AI is not subject to the same limitations as we are (though not without limits entirely), so it should be able to do this faster than we can, hence the FUD.
How could an AGI upgrade itself if the hardware it's running on is fixed? For me personally, this definition is flawed by that fact alone. AGI doesn't imply, to me, that it keeps improving toward some sort of mythical technological singularity.
AGI, for me, is simply an AI that can reason, doubt itself, then keep thinking and absorbing information so it can correct itself. Also, it has to be capable of novel research, even if slowly: working on an unsolved physics problem over a year in the same way a human researcher might. However, my definition does not include this idea of "upgrading itself", which I'm not sure makes any sense at all.
Upgrading itself doesn't mean tweaking its own software. It means being able to understand its own hardware and software well enough to design an improved model. And then that improved model would be able to do the same, examine its own hardware and software and design something else that's even better.
One crucial difference between humans and computers is that we can't be turned off indefinitely and started up again. Nor can we make a one to one copy of our software in another device, much as we might try with our children. So for us, our own lives are intrinsically precious, and consciousness is part of how we protect our lives. But machines don't have precious lives in that sense, so they may never need to be conscious, even if they achieve AGI.
It's an illusion, but that doesn't negate that we must operate as though our will is free!