On this board, there's an exceptionally good chance a response will come from someone who wrote code or published research fundamental to your daily life, or perhaps to your favorite sci-fi novel, etc. Not that such questions are ever a reasoned entrance into discussions you aren't burning down-- they're not. But there are still a few places on the internet where a bit of discussion can happen, and this is usually one where the noise doesn't fully drown that out.
Absolutely nothing. They don't, they didn't, it's a poorly stitched together attack job.
The 1.4T figure amounts to a broad, nearly decade-long capex plan, not liabilities.
The loans and backstops etc were a request, not for OpenAI, but on behalf of manufacturers of grid equipment, manufacturers that OpenAI would like the government to consider as eligible for money already carved out by the AMIC national investment in chips and AI, and also probably more money as well-- it's a separate group of tangential industries that weren't initially considered, so why not ask? Sure it would help keep the wheels moving at OpenAI and the broader AI and US semiconductor industry, but it's far away from an ask by Altman for a bailout of his company.
It looks like the actual crux was Altman's plea that the backstops be granted... Not to OpenAI? The document linked from the article that had the actual ask, rather than cleanup over deliberate misattributions by others, was to... "Extend eligibility", for the AMIC money already carved out, to companies that are producers of things such as "grid components such as transformers and specialized steel".
So: Altman did not ask for OpenAI loans to be guaranteed, nor did the CFO. It was on behalf of others drawn into the needs of the industry the AMIC grant was supposed to support. Self-interested by OpenAI? Sure! And also not about to make the top 10k leaderboard for "sleazy things companies do".
That would be the same grind to a halt you'd get on just about anyone's face when a random stranger tries to explain something obvious to them in a rude and condescending way. The inside voice goes something like: "Do I walk by? Is this person sane? Or maybe say something equally condescending like 'Hey buddy, with the bombs we have it will be called whatever we want.'"
Probably because most adults in the US grew up and were educated at a time when the EU was, compared to today, insignificant in number of countries, population, GDP, and general importance, and so very little talked about in either news or textbooks compared to Europe as an economic and political bloc. And since Europe was abbreviated 'Eur', well, it's easy to see how dropping the 'r' hasn't resulted in universal US intuition that it's not the same thing. In general, though, it does seem pretty understandable to think something calling itself "The European Union" comprises just about all of Europe. Especially back with the membership as expanded in '93, wasn't it a little presumptuous for only a small fraction of the continent to get together and call itself that? I do remember learning something about it in school at the time, under the EEC name.
Want to avoid confusion? Call it something like "United Nations", 'UN'. Confusion solved, Americans happy, call off the tariffs, peace, etc.
That's a lot of justification for what ultimately just amounts to ignorance of the outside world.
It's certainly not the case that "most adults in the US grew up and were educated at a time..." The EU exceeded $3 trillion in GDP by 1980. The original EU countries included Germany, France, and Italy, so were hardly insignificant.
You don't seem aware that the "EU", in 1980, didn't exist; nor did you do the sums on the ages of the population still in school by the time it did exist, to realize that yes, by typical textbook replacement timelines in schools, something like the existence of the EU is unlikely to have been in the textbooks during the school days of most people over the age of 30.
Well, there's the goal posts moved and a Scotsman denied. It's got an infrastructure in which it operates and "didn't show its work" so it takes an F in maths.
A random walk can do mathematics, with this kind of infrastructure.
Isabelle/HOL has a tool called Sledgehammer, which is the hackiest hack that ever hacked[0], basically amounting to "run a load of provers in parallel, with as much munging as it takes". (Plumbing them together is a serious research contribution, which I'm not at all belittling.) I've yet to see ChatGPT achieve anything like what it's capable of.
yeah but random walks can't improve upon the state of the art on many-dimensional numerical optimisation problems of the nature discussed here, on account of they're easy enough to implement to have been tried already and had their usefulness exhausted; this does present a meaningful improvement over them in its domain.
When I see announcements that say "we used a language model for X, and got novel results!", I play a little game where I identify the actual function of the language model in the system, and then replace it with something actually suited for that task. Here, the language model is used as the mutation / crossover component of a search through the space of computer programs.
What you really want here is to represent the programs using an information-dense scheme, endowed with a pseudoquasimetric such that semantically-similar programs are nearby (and vice versa); then explore the vicinity of successful candidates. Ordinary compression algorithms satisfy "information-dense", but the metrics they admit aren't that great. Something that does work pretty well is embedding the programs into the kind of high-dimensional vector space you get out of a predictive text model: there may be lots of non-programs in the space, but (for a high-quality model) those are mostly far away from the programs, so exploring the neighbourhood of programs won't encounter them often. Because I'm well aware of the flaws of such embeddings, I'd add some kind of token-level fuzzing to the output, biased to avoid obvious syntax errors: that usually won't move the embedding much, but will occasionally jump further (in vector space) than the system would otherwise search.
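To make that concrete, here's a minimal sketch of the "explore the vicinity, plus token-level fuzz" idea. Everything in it is my own illustration rather than anything from the paper: embed() is a toy hashed-trigram stand-in for a real learned embedding, candidates are assumed to be single-line expressions so ast.parse() can serve as the "avoid obvious syntax errors" bias, and the function names are made up.

    import ast
    import random
    import numpy as np

    def embed(source: str, dim: int = 256) -> np.ndarray:
        # Toy stand-in for a learned embedding: hashed character-trigram counts.
        # In the real thing this would be the vector from a predictive text model.
        v = np.zeros(dim)
        for i in range(len(source) - 2):
            v[hash(source[i:i + 3]) % dim] += 1.0
        return v

    def token_fuzz(source: str, attempts: int = 20) -> str:
        # Swap two tokens at random; keep the first variant that still parses.
        tokens = source.split()
        if len(tokens) < 2:
            return source
        for _ in range(attempts):
            i, j = random.sample(range(len(tokens)), 2)
            mutated = tokens[:]
            mutated[i], mutated[j] = mutated[j], mutated[i]
            candidate = " ".join(mutated)
            try:
                ast.parse(candidate)  # cheap "not obviously broken" check
                return candidate
            except SyntaxError:
                continue
        return source

    def explore_neighbourhood(pool: list[str], anchor: str, k: int = 5) -> list[str]:
        # Rank the pool by cosine distance to a successful candidate, then fuzz
        # the nearest few so the search occasionally jumps further than the
        # embedding neighbourhood alone would suggest.
        a = embed(anchor)
        def dist(s: str) -> float:
            v = embed(s)
            return 1.0 - float(a @ v / (np.linalg.norm(a) * np.linalg.norm(v) + 1e-12))
        nearest = sorted(pool, key=dist)[:k]
        return [token_fuzz(s) for s in nearest]

In a real system you'd swap the toy embedding for one derived from the model, and feed the returned candidates to whatever fitness function drives the outer evolutionary loop.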
So, an appropriate replacement for this generative language model would be some kind of… generative language model. Which is why I'm impressed by this paper.
There are enough other contributions in this paper that slotting a bog-standard genetic algorithm over program source in place of the language model could achieve comparable results; but I wouldn't expect it to be nearly as effective in each generation. If the language model is a particularly expensive part of the runtime (as the paper suggests might be the case), then I expect it's worth trying to replace it with a cruder-but-cheaper bias function; but otherwise, you'd need something more sophisticated to beat it.
(P.S.: props for trying to bring this back on-topic, but this subthread was merely about AI hype, not actually about the paper.)
Edit: Just read §3.2 of the paper. The empirical observations match the theory I've described here.
Prediction: there will be a huge demand for apps that have at least a baseline of functionality that is TUI enabled, preferably feature complete.
Why? Token-in and token-out inference cost. It's a heck of a lot cheaper to use an LLM than a vision model for something like automating apps, or for enabling your app to be agent-friendly at a low cost.
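As a rough, back-of-the-envelope illustration (the numbers below are my assumptions, not measured figures from any provider):

    # One 80x24 terminal screen of app state, at the common ~4 characters
    # per token rule of thumb (an assumption, not a provider-specific figure):
    tui_tokens = (80 * 24) / 4          # ~480 tokens per screen

    # A screenshot of the same app sent to a vision model typically lands in
    # the high hundreds to thousands of tokens per image (assumed ballpark),
    # and the model still has to locate elements before it can act.
    screenshot_tokens = 1000

    print(f"TUI: ~{tui_tokens:.0f} tokens/screen vs screenshot: ~{screenshot_tokens} tokens/image")

Multiply that gap by every step of an agent loop and the text-first interface wins on cost before you even factor in latency.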
I’m continually astounded that so many people, faced with a societal problem, reflexively turn to “Hmmm, perhaps if we monitored and read and listened to every single thing that every person does, all of the time…”
As though it would 1) be a practical possibility and 2) be effective.
Compounding the issue is that the more technology can solve #1, the more these people fixate on it as the solution without regards to the lack of #2.
I wish there were a way, once and for all, to prevent this ridiculous idea from taking hold over and over again. If I could get a hold of such people when these ideas were in their infancy… perhaps I should monitor everything everyone does and watch for people considering the same as a solution to their problem… ah well, no, still don’t see how that follows logically as a reasonable solution.
The issue is that there is a place where this model ~is working. It's in China and Russia. The GFW, its Russian equivalent, and the national security laws binding all of their tech companies and public discussion do exactly these things in a way that has allowed their leadership to go unchallenged for decades now.
The rest of the world isn't stupid or silly for suggesting these policies. They're following a proven effective model for the outcomes they are looking for.
We do ourselves a disservice by acting like there is some inherent flaw in it.
Are you seriously trying to suggest that monitoring of all private messages in Russia and China has stopped child abuse images from being shared?
That is preposterous.
We dismiss the suggestion of removing the right to privacy precisely because it doesn’t stop these crimes but it does support political repression.
The crimes go on, only criticism of the government for failing to address them is stopped.
EDIT: the more I reread your post the more I suspect this might be exactly the point you are making. Sorry, too subtle for me first thing in the morning. Need more coffee.
"They're following a proven effective model for the outcomes they are looking for."
That reads like just stating the government's perspective.
"We do ourselves a disservice by acting like there is some inherent flaw in it."
But this says something different to me. Because yes, I do see it as an inherent flaw if a government's focus is on things that are mainly good for the government. A government's job should be focusing on what is good for the people.
C'mon, we all know that the main reason for such laws is controlling dissent.
Allegedly, the Spanish police are great supporters of Chat Control, not because of CP, but because they want to spy on Catalan and Basque separatists more effectively.
Is Catalan/Basque separatism still a thing, in the sense of violent outcomes (a la ETA)? I have the impression that it's become a fairly civil process (more along the lines of the Scottish or Bavarian independence movements)
I don't know about China, but in Russia private conversations do not trigger an immediate response, and they do not control every possible means of communication. They simply do not have the capacity to investigate every violation - too many people talk negatively about the government and ongoing events - so they use a reactive approach. People may get in trouble while being searched at a border crossing or after being reported by someone, but that is hardly different from border searches in the USA. Things may change with their new messenger and the disruption of WhatsApp and Telegram there (Russia just started blocking SMS verification codes, making registration there difficult).
There's literally a white list of permitted sites now, supposedly only to be used when there's a 'drone threat'. Guess what, there are places in Russia where there's a constant 'drone threat' for at least half a year and vk.com is basically all they can use to communicate. Why would they start arresting people for private VK messages now, while their 'max' messenger is still struggling? It could wait until all other messengers are less than 10% market share, that way it won't impede adoption until it's the only option available.
They don’t need to monitor every conversation. Just enough that every conversation is a little risky. It’s the ability to read it all, if they want, that matters.
As far as I know, their biggest problem isn’t reading chats (if the device is seized and unlocked, not a problem at all regardless of service and encryption level), but listening to encrypted calls. This one really bothers them, and WhatsApp appears to be threat number one. I don’t know anything about the scale of CSAM distribution there, but I think they don’t need ChatControl-like technology for dealing with it. ChatControl was worse than the Russian surveillance state, maybe on par with Chinese tech.
> The GFW, its Russian equivalent, and the national security laws binding all of their tech companies and public discussion do exactly these things in a way that has allowed their leadership to go unchallenged for decades now.
Isn't this exactly the argument for never, ever doing it?
Yes, it's an argument for the general public to think of their interests, and the general public's interests say to never do it.
But they aren't thinking of our interests, they are thinking of theirs, which is what I think the parent comment wanted to share: their interests and ours are fundamentally in conflict, and so we must fight for our rights, I suppose.
It is also hard for me to understand this angle. While in Russia and China the "they" is pretty much constant at the moment, that is not the case in the EU. Why would something that can be used against them the moment the tide turns be in their interest?
> Why would something that can be used against them the moment the tide turns be in their interest?
They are doing this to prevent the tide from turning, and personally, I feel like if both/many political parties agree to something like chat control and agree to make it a bipartisan issue, then they can fundamentally do it and the "they" would be constant.
Also, the "they" here refers to lobbying efforts as well. The billionaires/millionaires/rich people might like these things solely because they increase the influence of government, and thus of the rich people too.
As an example, let me present the UK censorship act, which threatens any and every website with very large fines. That is scary enough that many people running niche projects, or who couldn't comply, have shut down their services/websites to the UK at large.
The internet, as we speak, would continue on to become more centralized. I feel like the idea here is to make the internet so centralized that you can control the flow of information itself (I mean, it already is, but there are still some spots left, like Hacker News as an example).
It's also one step towards authoritarianism. This could be a stepping stone to something even larger, with a more constant "they". I have already provided some reasoning as to why they do it: simply because they can, and chat control gives them a way to do mass surveillance, which to me increases the influence of both parties, and of the whole system, massively, in a way that feels very threatening to freedom/democracy, making it thus dystopian.
But in those countries the intended goal is not just to stop CSAM, but primarily to censor communications and suppress the opposition from voicing their opinion. If you still want to give our politicians the benefit of the doubt, then they don't, after all, want to actually censor communications in the same way and destroy democracy.
This is not because I support their mass surveillance proposal; I am strongly against it. I think the politicians are naive (maybe even to the point of warranting the label stupid) and ignore the huge risk that future governments will start using the mass surveillance platform, once it is in place, to do actual censorship. I am also extremely worried about the slow scope creep that will inevitably result from this; today it starts with CSAM and terrorism, next year it is about detecting the recruiting of gang members, and in a couple of years it is about detecting small-scale drug transactions.
That just sounds like advocating for these policies is inherently undemocratic, in a Western understanding of democracy. Which is even worse than the policies simply being ineffective at their stated goals. Leadership being challenged is an essential part of our (stated) system of government.
EDIT: I improved my comprehension, and it looks like I agree actually, not disagree.
I agree, it's a great, proven tool to do away with political enemies, and to selectively enforce the law, for whatever motivation.
I just don't understand what you mean by
>We do ourselves a disservice by acting like there is some inherent flaw in it.
We (as in, "the people") don't do ourselves any disservice by opposing such an effort. Specifically because we are also looking at what goes on in Russia and China, to name a few. Authoritarian regimes do "work", but we don't, generally, want that kind of "working" over here in Europe, for example.
I think they meant it's a disservice to act like these panopticons are inefficient/ineffective and thus not a real threat. Even current-gen AI plus mass surveillance would make it trivially easy to build dossiers and trawl communications for specific ideas.
Thanks for the clarification, it went over my head. Re-reading the comment chain multiple times it's now clear that OP was alluding to the ulterior motive, and the ulterior motive being effective, which I agree with. Again, thanks for taking the time to clarify.
You're responding to a completely different thing:
>many people, faced with a societal problem, reflexively turn to (total surveillance)
It's not about the malicious elites. These societal problems surveillance keeps being pushed for never get fixed in either China or Russia. Yet people (not just politicians) keep pushing for it or at the very least ignoring the push. A decade+ after the push, things like KYC/AML regulations are not even controversial anymore, and never even were for most people. Oh, these are banks! Of course they need the info on your entire life because how else would you stop money laundering, child molesters, or shudders those North Koreans? What, are you a criminal?
And of course you somehow manage to blame the usual bad guys for something that happens in your society, because of course they're inherently evil and are always the reason for your problems. Guess what, the same often happens there and they copy your practices. Don't you have your own agency?
The reality is that the majority in any place in the world doesn't see privacy, or most of their or others' rights for that matter, as worth fighting for. Having abundance and convenience is enough.
That last point alone is enough, as demonstrated by the Swiss people voting for the eID, democratically paving the way for future mass surveillance and total dependency on our locked-bootloader iOS and Android overlords. As stated further down, this all stems from education.
Yes, it had a great impact on political opposition. Such a weird coincidence that the politicians who want to keep their unlimited power indefinitely support a way of catching opposition early.
I guess this is what we need in the West too. Let's just cement the current ruling class in for decades.
Why do you think that the Chinese "value" having political opposition? They're the largest developing economy in the world.
What is your sales pitch? "Hey, you guys should try having a less stable government, in exchange you'll get some abstract platitudes about freedom and privacy."
Really? It’s working? No crime, no abuse? It has stopped, perfectly or nearly so compared to other countries, all of the things, like CSAM, that proponents want it to?
Your comment is precisely what I mean when I said people end up fixating on #1 to the exclusion of #2.
No, you have talked past my actual comment and inserted your own "control dissent, remain in power" purpose for this instead of what I actually said in my comment.
I didn't claim "there are no problems that can be solved or goals achieved by means of mass/total surveillance". My topic was societal problems. The political dilemma "how do I retain power and curtail disagreement?" isn't in this category.
No, you are talking past the point of the poster to whom you are replying.
"The issue is that there is a place where this model ~is working. It's in China and Russia. The GFW, its Russian equivalent, and the national security laws binding all of their tech companies and public discussion do exactly these things in a way that has allowed their leadership to go unchallenged for decades now.
The rest of the world isn't stupid or silly for suggesting these policies. They're following a proven effective model for the outcomes they are looking for.
We do ourselves a disservice by acting like there is some inherent flaw in it."
I stand by my comment that these technologies are doing exactly what they are intended for.
This is simply not true, this is Western paranoid fantasy. It's also the kind of fantasy that allows escalation of surveillance and censorship. You should look up the "missile gap."
Also, Russian and especially Chinese leadership doesn't go unchallenged. Chinese leadership has had many transitions. While Putin has squatted on the leadership of Russia for a very long time now, it isn't because he's not popular, and he's forced to do a lot of things he'd rather not do because of pressure on his leadership.
How do the neoliberal rulers in the West stay on top with extreme minorities of popular support, like in France or the UK? Why does popular opinion have no effect on the politics of the US*, and why are its politics completely run by two private clubs with the same billionaire financial supporters (that also finance politics all over the rest of the West)? How do they do it without massive surveillance, censorship and information control? Or a better question: how can we be given the evidence of massive surveillance efforts and huge operations dedicated to censorship and information control, over and over again, and still point to the East when we talk about the subject? Isn't that "whataboutism"?
Chinese Communist Party leadership had many transitions before Xi Jinping purged all rivals and alternative power centers, and personally took control of all key decision making. It will be "interesting" to see what happens when he finally dies.
In the 2000s a law was passed in Denmark that allowed for extensive logging of internet traffic.
But the ISPs couldn't implement it in a practical way and essentially refused until they were given something doable. That ended up, in some cases, being "register every 500th TCP packet" (or similar; it might've been DNS lookups).
At the same time, if the police wanted actual digital surveillance, they'd just contact the ISP and say "Hey, can we get ALL the traffic for this one person who is under suspicion?" and the ISPs would, in some cases I'm familiar with, comply without a court order. So there was a clear path of execution for actual surveillance while at the same time this political circus made no sense.
Imagine you're surveilling a place for criminal activity and you're recording one second of audio every 8 minutes. Surely gold nuggets are gonna leak out of that.
I think a lot of this is rooted in the basic world view people have. Those with a conservative mindset will think of humans as fundamentally flawed, misguided creatures that need to be contained and steered so they don’t veer off the path, which they are naturally inclined to; while those with a liberal mindset consider humans to be inherently kind and only misguided by circumstances and their environment.
Most people can pretty clearly relate to one of these perspectives over the other, and it’s pretty clear what actions follow from that.
I think that's a little simplistic, I have liberal (in the British English sense) views specifically because I think humanity is fundamentally flawed. If we are all flawed particularly when it comes to wielding power over others, it's self-evident in my opinion that governments should be limited and the total power any individual or institution can amass should have a hard ceiling. I see explicit anti-authoritarianism as a necessary counterweight to our flawed nature, every exercise of political power is potentially harmful but through the ideas developed in the Enlightenment it can at least be contained and controlled.
Humans are inherently flawed and they're inherently kind. We're evolutionarily primed for competition and cooperation. Antisocial behaviour can be both inherent and environmental. I feel you might be setting up a false dichotomy when the motivations for political beliefs are often pretty complex and varied.
That is at best simplistic, and at worst completely inaccurate.
It is common for "liberal" governments, as in the UK at the moment, to be inclined to pass censorship, surveillance and control (of people's lives) laws. It is also common for "conservative" governments to do the same.
What is very common is for people to think themselves and people like themselves to be naturally kind and people unlike themselves as fundamentally flawed.
You’re talking about parties, while I was referring to ideology. And in ideological terms, while a HN comment isn’t scientific, I think I represented the ideology of conservatism and liberalism correctly here, so call out the social sciences over that.
That's absurd. “Conservatives think people are bad, liberals think people are good” is primary-school-level reductionism.
Conservatives generally see people as capable of self-direction and argue for minimal interference because virtue needs room to act.
Progressives also tend to see people as capable of good, but assume outcomes depend on systems, so they push for more state involvement to improve those conditions.
Neither thinks humans are irredeemable. Both generally believe humans are inherently kind. The difference is whether you trust individuals or bureaucracies to manage human weakness.
Again, you are treating these terms as equal to their contemporary meaning in bipartisan US politics, when they are pretty well-defined terms for describing political ideology in general. Part of that is one of the pillars of conservatism, that humans are imperfect beings and thus need institutions to guide them. So I would say you’ve pretty much got it backwards. I’m not making this up, you can go and read up on this for yourself.
> Again, you are treating these terms as equal to their contemporary meaning in bipartisan US politics, when they are pretty well-defined terms for describing political ideology in general.
I’m neither American nor using US partisan definitions. I’m using the terms as they’re broadly understood in political theory and history.
> Part of that is one of the pillars of conservatism, that humans are imperfect beings and thus need institutions to guide them.
That’s a paternalist or technocratic premise, not a conservative one. Classical conservatism accepts human fallibility but trusts evolved social norms not bureaucracy to contain it. The belief that people must be centrally guided is the antithesis of that tradition.
> So i would say you’ve pretty much got it backwards. I’m not making this up, you can go and read up on this for yourself.
You might try the same. Hobbes wasn’t a conservative - he was an absolutist. Quoting him to define conservatism is like citing Marx to define capitalism.
> Part of that is one of the pillars of conservatism, that humans are imperfect beings and thus need institutions to guide them
Who made this definition, left wing scholars? I doubt many conservative persons would say this.
You could just as well say that this is the central pillar of communism, that people need to be controlled since they are too evil if they are free to form companies and structure themselves. Or that this is the central pillar of social democracy, that without big taxes and central institutions to spend peoples money on things that benefits them they will make bad choices and not get the things they need.
Every government is about taking control from the people, I don't get what you mean that conservatives would do this more than any other group.
I feel the entire philosophical distinction is tainted to the point where it should be retired and no longer discussed. It was useful as a thought experiment but folks in general have shown they are completely unable to understand this and instead treat it as some tribal dogma to which they must choose allegiance. It's become harmful.
I say it should be kept in the university library under lock and key, something philosophy professors can sit and debate in their spare time behind closed doors. /s
Parties are associated with ideologies and supported by people who share their ideologies.
> I think I represented the ideology of conservatism and liberalism correctly here, so call out the social sciences over that.
If you are saying that there is a correct definition within the social sciences, can you cite an authoritative source for that?
In any case, you were talking about "the world view people have" and I think your definition correlates very poorly with those of people one would normally describe as "liberal" or "conservative". I am not even sure which mindset you associate with the "monitor and control" mentality. I think you mean it's a conservative mindset, but a lot of the people I know who most strongly oppose it are conservative or Conservative (as in members of the party that has the word in its name).
This might be a US vs UK difference, of course. These are not words that are really used very consistently within societies, let alone between them.
> If you are saying that there is a correct definition within the social sciences, can you cite an authoritative source for that?
You can start at Wikipedia, for example, which quotes Thomas Hobbes:
> the state of nature for humans was "poor, nasty, brutish, and short", requiring centralized authority with royal sovereignty to guarantee law and order.
And further:
> Conservatism has been called a "philosophy of human imperfection" by political scientist Noël O'Sullivan, reflecting among its adherents a negative view of human nature and pessimism of the potential to improve it through 'utopian' schemes.
I don’t mean to insinuate "conservatives are evil and want to spy on citizens", but merely that they are generally more inclined to believe people are inherently incapable of behaving well, so they need to be nudged towards the right thing. Really believing this makes it far more likely to view government monitoring as a plausible solution to the problem they see. And again, I’m saying this without implying any judgement.
> will think of humans as fundamentally flawed, misguided creatures
I do.
> that need to be contained and steered so they don’t veer off the path
No. When people are misguided that will even more apply to those in power and with weapons. The fact that I view everyone as likely misguided means nobody should have any say over the other.
The modern trend to frame conservatism and authoritarianism as "the same thing" is very bad and leads to the normalization of authoritarianism.
>Those with a conservative mindset will think of humans as fundamentally flawed, misguided creatures that need to be contained and steered so they don’t veer off the path, which they are naturally inclined to; while those with a liberal mindset consider humans to be inherently kind and only misguided by circumstances and their environment.
It is also this mindset of wanting to micromanage things/people in the hope of better performance. Those are usually the ones pushing for scanning people's private messages. People like Stalin and Mao would love stuff like that. The urge to micromanage is a band-aid type of solution that doesn't deal with the complexity of the situation.
These are American boxes. Skewed by American culture. Simplistic to the absurd extent where it can mean the tail leads the dog i.e. people will adopt some viewpoint they're actually at odds with deep down. More tribalism than any fundamental ground truth.
> As though it would 1) be a practical possibility
Well that's kind of the thing. With AI it is. In theory, they can now monitor all of us at the same time on a scale never before thought possible. The time of "big brother has better things to do than monitor you specifically" is over.
They are able to record everything now, and they are able to add "AI" to it. The "AI" will tell them some result. They will prosecute based on that. The fact that "AI" isn't something that can reliably "monitor" people is something they won't care about.
I have always felt that what these measures would do is push people towards things like Matrix/Signal etc. Matrix, for example, is decentralized, so they can't really do chat control there. But my idea of chat control was always similar to the UK situation: they are going to scare a lot of the people hosting services that bypass it, intentionally or unintentionally, because bypassing it means hefty fines, and that possibility itself scares people, similar to what is happening in the UK right now.
VPNs are maybe a good model, except that once they get on the chopping block, they might break the internet even further, similar to Chinese censorship really. Maybe even fragment the internet, but it would definitely both scare and scar the internet for sure.
The real root cause of many societal problems is that a significant portion of the population everywhere across the world is either unable or unwilling to think more than one step ahead. This is why I believe most dumb decisions get voted for. One immigrant was bad? Let's ban all immigrants. One criminal slipped past the police? Let's allow spying on his chats to catch him. Etc.
Countries like Denmark are civilised and do spend literally billions on trying to protect and nourish children.
And it is a credit to these nations and the people who dwell in them.
But the reality is that there will always be a scumbag dad who decides to molest his daughter.
HN assumes evil but this is a "road paved with good intentions" kind of deal.
In this particular case it is obviously evil and misguided. There is no "iron wall" around EU phones or EU computers, a criminal can install literally anything on their PC and almost anything on their phones. So the real actual criminal won't be affected by the change at all, they will use software without backdoors for their shady stuff. This is obvious to everyone really, so this leads to a conclusion that the real reason for the enforced breaking of encryption is to spy on the political opposition, on the business competitors and in general promote a healthy fear and paranoia in society. Just what we need, in the age when EU enemies are acquiring more and more collaborators and tools inside EU, to disrupt and hopefully destroy the union.
> HN assumes evil but this is a "road paved with good intentions" kind of deal.
They are the same picture. Every evil person thinks they have good intentions. The Nazis literally thought and said they were resurrecting the German nation and leading Germany to the eternal, "thousand year long" empire.
"...talking 5 minutes with the average voter" and all that. Ironically, lots of these people are meanwhile fine with "AI glasses" being used everywhere. They just haven't thought it through. What if a pedophile wears them?
It's like surveillance logic eats itself in the end. What really gets me is how often these proposals skip over the part where we ask whether they actually work.
Funny thing is that people are all ok with reading chats but as soon as you touch their mail they go apeshit. (Note: it is officially illegal to open mail not addressed to you, even for law enforcement unless they have a very specific court order)
> I’m continually astounded that so many people, faced with a societal problem, reflexively turn to “Hmmm, perhaps if we monitored and read and listened to every single thing that every person does, all of the time…”
.. and stored! Which is the worst part, IMO, because once you have a record it's only a matter of time until it reaches the wrong hands.
The mass-surveillance proponents will always exist in small numbers, but the idea gets revived every other month because the number of ignorant politicians receptive to it is a function of their ignorance and malformed understanding of reality.
But that isn't their fault.
It's the magic tech companies are selling - and it's knowledgeable individuals who have to communicate effectively and explain the bullshit.
There is a different problem at play behind this.
There are strong lobbying efforts that want this tech/system, for their own reasons which are not being advocated to the public.
At the same time, the lobbying forces behind this are pushing a random assortment of "popular" reasons to implement it - "think of the children/vegetables/climate".
All this crap is not being driven by grassroots movements. To the extent those are involved, they are being manipulated and shepherded.
A lot of politics, if not all of it, is driven by real reasons separate from the methods used to push it through. See Brexit, MAGA, and a lot of other crap :-/.
As happens in other countries, I see the politicians in my own country implementing a lot of random policies that have no popular support, but are quite relevant to some inner circle and network of politicians and their friends.
I agree, the ghost of Stalin is right behind the curtain waiting for a 2nd, 3rd, 4th chance at making it work ("oh, no but this time we will REALLY get it right and eliminate only all the REAL enemies of the people")
>It's because European socialist heritage, Stasi, from East Germany, had 2% of its citizens as spies to "read and listen to everybody."
Have you ever heard of the Red Scare "McCarthyism" or the Patriot Act? The EU is the opposite of East Germany, nothing was inherited; they were robbed and left behind.
Just last year, France used a similar argument ("exporting illegal encryption") against Telegram to try to get the master keys to decrypt all end-to-end encrypted messages:
The other EU countries have not seen a similar proper purge like East Germany did. The legacy secret police are still going strong in countries like Spain, Greece, Hungary, and Poland, as we have seen from the cases where these governments are using Pegasus spyware against their own political opposition.
>The other EU countries have not seen a similar proper purge like East Germany did.
And what does that have to do with "socialist heritage"? If East Germany had a proper purge, then nothing was inherited into the EU, right?
>where these governments are using Pegasus spyware against their own political opposition.
Yes, and then there was Watergate... but again, what does Nixon have in common with socialism? And maybe have a look at the "Merkelphone" affair, when "friends spy on friends".
Hell, even the German police can "access" WhatsApp/Facebook.
>>German security forces can access personal Whatsapp messages of any user even without installing spyware, several German media institutions reported.
>>attains the information of suspects via "Whatsapp Web."
No, German police cannot access encrypted WhatsApp messages.
They specifically state in the introduction of the Chat Control act that the reason for banning end-to-end encrypted communications is that criminals use WhatsApp and the police cannot crack the messages, hence the Chat Control act. The police cannot crack these messages, and they want to read everyone's messages; that's the whole point of making encryption illegal.
If you are unfamiliar with the topic, you can read more about Chat Control and encryption here:
I don't want to be that guy, but the law is not about police having access to private messages. As far as I understood, they want to scan content for CSAM material by hashing, and if there's a positive match the police get involved. Or am I wrong? Maybe I misunderstood something.
I am unclear on the problem: are the apps, previously free, significantly more limited than their prior free versions unless/until you also purchase a subscription for the CanvaAI portions?
That aside, this isn’t a new thing for Canva; they aren’t chasing AI here. In this space, GenAI is chasing the use case that Canva has been filling for a while, and incorporating GenAI as part of that is just, you know, “hey, lots of people use this AI tool for design work now, so maybe we add one, because like it or not it’s how things are”. Design is the Canva space; it’s not like they did a pivot to crypto.
> are the apps, previously free, significantly more limited than their prior free versions unless/until you also purchase a subscription for the CanvaAI portions?
I was curious too. The FAQ says this:
> Yes, Affinity really is free. That doesn’t mean you’re getting a watered-down version of the app though. You can use every tool in the Pixel, Vector, and Layout studios, plus all of the customization and export features, as much as you want, with no restrictions or payment needed. The app will also receive free updates with new features and improvements added.
Not sure I see the problem then. Canva has a long enough track record that I don’t suspect they’re going to pull a bait and switch on the freemium model and start gatekeeping new features any time soon.
I've been using it and so far yeah, a lot of the existing functionality and new functionality is effectively free and it generally only nags you when you use the clearly separate "Canva AI" panel.
Checking the settings also tells me that Segmentation (used in Object Selection) is provided but Depth Estimation (used in Portrait Blur and Select Sampled Depth tools), Colorization (used in Colorize filter, apparently intended to colorize B&W photos) and Super Resolution (used in Super Resolve filter) are all paywalled.
Honestly, I think that's fine. While I'd wish these are all available (and they could be if you looked hard enough for models that can do this) for a flat price (for parts that are not handled server-side), this is still imho mostly fair.
“I am skeptical of the idea that this method of intentionally leaning into discomfort only works for some minuscule abnormal subset of people that are just wired differently”
That may be the aspect of this line of thinking that’s not clear then: it doesn’t work for anyone. At least, insofar as free will is illusory, it is a hallucination that such people have that they made such decisions, and stuck to them. It’s alien hand syndrome, the person hallucinating a rationale for its motion.
The free will question seems to be a red herring- philosophers and physicists can argue all they want about if something like free will is physically possible or not, but for all intents and purposes, even without free will, the path of someone overcoming, e.g. alcoholism after using these methods requires using them in a way that is challenging, and if you decide on the nihilistic stance that there is no free will so there is no point in ever trying to do anything, then you are guaranteed an undesirable outcome.
Perhaps beforehand it was somehow "pre-determined" which of these attitudes and paths you would take, but that is completely irrelevant for the individual just living life, they have no way to know that one way or another, or any reason to actually care, as they still need to act exactly like they have free will and made the right choice to actually play out a future as the type of person pre-determined to have a desirable outcome.
It doesn't actually feel any easier or less painful to accomplish something difficult, even if free will is some sort of illusion when looked at from the outside perspective. You still experience, e.g. trying and failing over and over and never giving up until you succeed.
I can buy that, for example perhaps there is something outside our control that decides if you are capable of never giving up, but you still cannot know until you decide to never give up and try it- so it literally does not matter except as a philosophical curiosity.
I think a more interesting biological (and philosophical) question is why and how exactly do these GLP-1 drugs work, and why exactly are they so shockingly effective? Maybe they do somehow act on the brain to offer exactly the same psychological benefits as the stoic approach I am talking about, by the same or related underlying mechanism, and they're essentially interchangeable but work more often?