Germany has a 6-month probation period for new hires during which both sides can terminate the contract with 2 weeks' notice. After that, the employer's statutory notice period starts at four weeks, rising to one month after 2 years, two months after 5 years, and step by step up to 7 months after 20 years.
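For reference, here's the employer-side schedule from § 622 BGB sketched as a lookup. This is only an illustration; the function name is mine, and the months-based simplification glosses over the "to the end of a calendar month" rules:

```python
def statutory_notice_months(years_of_service: float) -> int:
    """Employer-side notice periods under German § 622 BGB, simplified.

    During the six-month probation period, notice is two weeks for both
    sides; after that, the base period is four weeks (roughly a month),
    extended with tenure as below (each running to the end of a month).
    """
    schedule = [(20, 7), (15, 6), (12, 5), (10, 4), (8, 3), (5, 2), (2, 1)]
    for min_years, months in schedule:
        if years_of_service >= min_years:
            return months
    return 1  # base case: four weeks

print(statutory_notice_months(6))   # -> 2
print(statutory_notice_months(25))  # -> 7
```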
The trick I see most often is hiring consultants. They're basically like employees, except it's not you who hires them, it's the consultancy company. So you can have them working for your startup on short contracts of a few months (which can be prolonged and, without much trouble, even terminated early). But these contracts normally also include clauses against trying to hire the consultants directly, so if they are really good and write a good chunk of your stuff, you might be left in a bit of trouble when they leave.
> Granted, you need to have the political structure in place that allows the growth to benefit everyone.
Which is the scary part of the AI revolution. Devaluing labor always leads to increased inequality in the short-to-mid term until a new equilibrium is reached. But what if we have machines that can do most jobs for 10-20k a year? Suddenly we have a hard ceiling for everyone below a certain "skill level", where skill includes things like owning capital, going to the right college, and having the right parents.
In the past, when inequality became too extreme, (the threat of) violent uprisings usually led to reform, but with autonomous weapon systems, drones and droids, manpower becomes less of a concern. The result might be a permanent underclass.
Really? The AI revolution is happening in the West, and mostly in the US. Just imagine it had happened in a Muslim country, or Russia, or China, or even India. Half of them would immediately use it to start a war. If you think labor is devalued here, it can be SO much worse.
Also I don't understand the entire argument. The thread is about stopping economic growth. You say you don't receive enough of the current economic growth ... so you want growth to reduce? That will make your life a lot worse, won't it? At 0 growth the only way to give you anything would be to take it away from someone else. In other words: you want an extra meal at 0% growth? That can only happen if someone else doesn't get one ...
> so you want growth to reduce? That will make your life a lot worse, won't it?
Personally, I don't want growth to reduce, exactly. I'd prefer tighter restrictions on the direction of growth, and more time spent finding creative ways to return to smaller communities, where effort goes less into pure money-making and more into people helping each other. And more time restoring nature. So growth, but not purely in an economic sense.
It only seems like a degrowth thing when you look at it from a purely fiscal angle.
There is nothing stopping you from moving to a smaller community, and in the West there are tons of them around. And if you're willing to take the (very low) wages that go with that, you can easily live there for the rest of your life.
Hell, I know people who've done this. Several, actually. Well, only one who's still alive (they retired there, wanted to grow old and die there ... and did), but still.
But ... why bother anyone about it? You want others to do this but would never accept doing so yourself?
> But ... why bother anyone about it? You want others to do this but would never accept doing so yourself?
Well, first, I already have, so I don't know where you got "never accept doing so yourself"; that's something you made up. Secondly, why bother: because the current world system is destroying nature, which in my opinion is on the same level as actively targeting people. So I do want that to stop.
If nature was not being destroyed and it was just people messing up their own little world, then that would be different.
> Secondly, why bother: because the current world system is destroying nature
Is it? Put that dramatically, it's bullshit. Nature will survive us, rather than the other way around, guaranteed. MAYBE we could kill off the large animals if we tried, but probably not even that (they'd just shrink and then grow large again; it wouldn't be the first, or second, or even the tenth time that happened).
Life on earth is being sustained by the sun and by nuclear reactions inside the earth. Nothing we do makes the tiniest of difference in the long run.
Increased temperature and increased CO2 and climate change essentially make more chemical and solar energy available in the environment. Life is chemical in nature and is limited by available energy. That means there would be more life, more green, if more energy was available. Life would have to be pretty damn badly designed if this damaged it, rather than what we actually see happening: life is spreading to much more of the planet than even 100 years ago.
So, first, you can rest assured: it is just people messing up their own little world.
Second: it would be seriously unnatural if we stopped. After all, competing for and using up all available resources is literally the sole goal of all life on earth. And if you compare humans to an average ocean-bound bacterial species, we're not even particularly good at it.
Okay, so we are not destroying all of nature, only enough nature that it will get seriously uncomfortable for us. Great! Your second argument is even stranger. It would be unnatural to stop polluting the environment? Where are you going with this?
> Except flight simulators. They're great as long as they have realistic physics.
I'm quite fascinated by the huge overlap of flight enthusiasts and computer nerds. Any discussion on HN even tangentially involving flight will have at least one thread discussing details of aviation. Why planes, and not cars or coffee machines or urban planning?
On a more serious note, there should almost certainly be regulation regarding open weights. Either AI companies are responsible for the output of their LLMs or they at least have to give customers the tools to deal with problems themselves.
"Behavioral" approaches are the only stop-gap solution available at the moment because most commercial LLMs are black boxes. Even if you have the weights, it is still a super hard problem, but at least then there's a chance.
> people are genuinely talking about them thinking and reasoning when they are doing nothing of that sort
With such strong wording, it should be rather easy to explain how our thinking differs from what LLMs do. The next step, showing that what LLMs do precludes any kind of sentience, is probably much harder.
Cantor talks about countable and uncountable infinities, but both computer chips and human brains are finite systems. The human brain has roughly 100 billion neurons; even if every pair of them were connected and each connection could individually light up, signalling different states of mind, isn't that just something like `2^(100b choose 2)` states? That's roughly as far away from infinity as 1.
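To make that bound concrete (assuming every connection is a simple on/off switch, which is of course a simplification): with $n = 10^{11}$ neurons there are at most $\binom{n}{2} \approx 5 \times 10^{21}$ pairwise connections, so the state count is bounded by

$$
2^{\binom{10^{11}}{2}} \approx 2^{5 \times 10^{21}},
$$

an absurdly large number, but still finite, and hence trivially countable.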
But this signalling (and connections) may be more complex than connected/unconnected and on/off, such that we cannot completely describe them [digitally/using a countable state space] as we would with silicon.
If you think it can't be done with a countable state space, then you must know some physics that the general establishment doesn't. I'm sure they would love to know what you do.
As far as physicists believe at the moment, there's no way to ever observe a difference below the Planck level. Energy/distance/time/whatever. They all have a lower boundary of measurability. That's not a practical issue, it's a theoretical one. According to the best models we currently have, there's literally no way to ever observe a difference below those levels.
If a difference smaller than that is relevant to brain function, then brains have a way to observe the difference. So I'm sure the field of physics eagerly awaits your explanation. They would love to see an experiment thoroughly disagree with a current model. That's the sort of thing scientists live for.
Had you done your reading outside of PopSci, you would know that the "general establishment" does not agree with your interpretation of Planck units. In fact, even a cursory look at the Wikipedia page on Planck units would show you that some of the scales obviously cannot be interpreted as limits of measurability.

A reasonable interpretation of the Planck length is that it gives the characteristic distance scale at which quantum corrections to gravity become relevant. Given that all we currently have is a completely classical theory of gravity and an "unrelated" quantum field theory, even this amounts to an educated guess.

No observations have ever been made that would suggest that the underlying spacetime is discrete in any way, shape or form. Please refrain from posting arrogant comments on topics in which you are out of your depth.
I, uh... What? Did you mean to respond to some other post there?
I can't see how anything you said is a response to anything I said. My statement was very simple: if two models predict the same result, you can use either of them. As far as we have worked out so far, continuous and discrete spacetime give the same results for every experiment we can run. If you have an experiment where they don't, physicists would really love to see it.
Firstly, my comment was overly antagonistic; sorry for that.
My problem is with the interpretation of Planck units; they really do not appear in current theories as signifying any theoretical lower limit to measurability, which is how I have to read your claim:
> As far as physicists believe at the moment, there's no way to ever observe a difference below the Planck level. Energy/distance/time/whatever. They all have a lower boundary of measurability. That's not a practical issue, it's a theoretical one. According to the best models we currently have, there's literally no way to ever observe a difference below those levels.
For example, the Planck energy is a nice macroscopic quantity of approximately 2 gigajoules. For the Planck quantities that are more extreme, the measurement is not hampered by the theory but by practical issues.
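For concreteness, the textbook definition and value (standard constants, nothing specific to this thread):

$$
E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.96 \times 10^{9}\ \mathrm{J} \approx 2\ \mathrm{GJ},
$$

roughly the chemical energy in a tank of gasoline, i.e. nothing exotic or unmeasurable.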
Sure, we don't expect our theories to hold at the Planck length, but this is not due to something baked into the Standard Model or general relativity.
That is such a narrow definition of scientific research that it excludes many major contributions to our base of knowledge. The primary difference between engineering and science is intent: scientists want to understand how things work by using the scientific method; engineers want to make stuff that works, though this still often includes iterating over designs using empirical data.

If a team of engineers finds a cool new algorithm that makes computer vision easier, we learn something new about the world in the process. On the flip side, you actually have plenty of research in fields you would consider science, e.g. physics, that does not use the scientific method at all but instead deduces possibilities from mathematical modelling.
Weirdly enough, both can be true. I was tangentially involved in EA in the early days, and have some friends who were more involved. Lots of interesting, really cool stuff going on, but there was always latent insecurity paired with overconfidence and elitism as is typical in young nerd circles.
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, and hiring exceptional talent, at a time when there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular, circlejerks lead to idiotic ideas even when paired with rationalist rhetoric, so they bought mansions for team building (how else are you getting exceptional talent?), praised crypto (because it was funding the best and brightest) and started caring a lot about shrimp welfare (no one else does).
I don't think this validates the criticism that "they don't really ever show a sense of[...] maybe I'm wrong".
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
> caring a lot about shrimp welfare (no one else does).
Ah. Working out ecology from first principles, I guess?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
>I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I don't believe any particular individual can do more about shrimp welfare than about the intractable problems we actually face.
> I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I think it's a result of the movement's complete denial and ignorance of politics. Rationalist and effective altruist movements make a whole lot more sense if you realize they are talking about deeply social and political issues with all the politics removed. It's technocratism, the poster child of the kind of "there is no alternative" neoliberalism that everyone in the Western world has been indoctrinated into since the 80s.

It's a fundamental contradiction: we don't need to talk about politics because we already know liberal democracies and free-market capitalism are the best we're ever going to achieve, even as we face numerous intractable problems that cannot possibly be related to liberal democracies and free-market capitalism.

The problem is: how do we talk about any issue the world is facing today without ever challenging, or even talking about, any of the many assumptions Western liberal democracies are based upon? In other words: the problems we face are structural/systemic, but we are not allowed to talk about the structures/systems. That's how you end up with space flight, shrimp welfare, and AGI/ASI catastrophizing taking up 99% of everything these people talk about. It's infantile, impotent liberal escapism more than anything else.
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
As far as I know it's never hosted an impress-the-oligarch fundraiser, which as you say would at least have a logic behind it[1] even if it might seem distasteful.
For a philosophy which started out from the point of view that much of mainstream aid was spent with little thought, it was a bit of an end of Animal Farm moment.
(to their credit, a lot of people who identified as EAs were unhappy. If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
[1]a pretty shaky one considering how easy it is to impress American billionaires with Oxford architecture without going to the expense of operating a nearby mansion as a venue, particularly if you happen to be a charitable movement with strong links to the university...
[2]obviously people are only objecting to it for PR purposes because they're not smart enough to realise that capital appreciates and that venues cost money, and definitely not because they've got a pretty good idea how expensive upkeep on little used medieval venues are and how many alternatives exist if you really care about the cost effectiveness of your retreat, especially to charitable movements affiliated with a university...
> If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
I’m a bit confused by this one.
Are you saying that no-one who identifies as rationalist sneered at the objections? Because I don’t think that’s true.
Nope, I'm implying the people sneering at the objections were the self-proclaimed rationalists. Other, less contrarian thinkers were more inclined to spot that a $15m heritage building might not be the epitome of cost-effective venues...
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
Because they were doing so many workshops that buying a building was cheaper than renting all the time.
You may argue that organizing workshops is wrong (and you might be right about that), but once you choose to do them, it makes sense to pick the cheaper option rather than the more expensive one. That's not rationalist rhetoric, that's just basic economics.
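The underlying claim is simple breakeven arithmetic. A sketch with invented numbers (only the $15m purchase price comes from upthread; upkeep, rent, and usage are placeholders, and a real comparison would also account for capital appreciation):

```python
# Hypothetical rent-vs-buy breakeven; all figures except the purchase
# price are invented placeholders, not EA's actual numbers.
purchase_price = 15_000_000    # the $15m venue mentioned upthread
annual_upkeep = 300_000        # assumed maintenance and staffing per year
rent_per_workshop = 50_000     # assumed cost of renting a comparable venue
workshops_per_year = 20        # assumed usage

annual_rent_avoided = rent_per_workshop * workshops_per_year  # 1,000,000
annual_saving = annual_rent_avoided - annual_upkeep           # 700,000
breakeven_years = purchase_price / annual_saving              # ~21

print(f"Buying breaks even after ~{breakeven_years:.0f} years at this usage")
```

Whether buying actually wins obviously hinges on those assumed inputs, which is exactly what the objectors upthread were disputing.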
Israeli and American intelligence agree that Iran was not aware of the October 7th attack; Hamas did that by themselves. In hindsight we also know that Israel had thoroughly infiltrated the Iranian forces, so if Iran had known, Israel would have known in advance as well.