Anytype. Local-first, p2p sync with e2ee. It uses the Notion model (everything is an object, and you can create "databases" from queries over objects). I believe it doesn't store markdown directly, but it can export to markdown perfectly.
I recently moved to Anytype from Logseq, as it hit all the right notes for me for personal stuff, shared family stuff and work stuff.
I wanted something like Notion, but faster and, most importantly, private.
For what it's worth, I find the Finno-Ugric trinity (Finnish, Estonian and Hungarian) a step down towards hell compared to German, Russian and many other languages traditionally considered hard.
But bash isn't a key ingredient in any of these. The exact same payload could easily be inserted in the project's source code, with the added benefit of being persistent. Using a bash shell to do it might be the most obvious way, sure, but shutting down bash access is such a poor defense that it isn't worth doing.
Are those new "concepts"?
"Uber for your house", it seems. Effectively every single business in the gig economy has been rehashing the same concept, trying to commodify every aspect of the human experience by inserting a platform as an alternative to a regulated market (or no market at all).
Yeah definitely. There was no way you could rent out your house before Airbnb.
Uber for X is a new concept if nobody has done it before. Sure you can say it's obvious but most business ideas are, especially in hindsight. The idea is the easy part.
The point is that this is not a punishment against "them", it's a loss for everyone (humankind), which mostly affects innocent people, both in Iran and all over the world.
I don't see a world where this is "ideal", even if you agree with the block.
We don't know where the people who will make scientific breakthroughs will be. Imagine losing the cure for cancer, or a form of clean energy (or anything else that could change the world for everyone), because of this.
That’s nice, but wishful thinking. This is just the sad truth regarding sanctions. If your country cannot at least partially participate on the global stage, then the people suffer. No different from North Korea. It sucks, but life is unfair and sucks.
Don't give them "your" email. Give them a mail alias, which I am sure you, as a privacy-conscious person, already have, and you are good. They themselves recommend doing so.
However, the most important argument here is not the fact that they are legally bound to those privacy commitments (they are), but that their business incentives are fundamentally incompatible with tracking users.
For a very niche business with an extremely narrow and homogeneous user base, if they were caught doing so, it would be game over.
The Privacy Pass feature is available if you don't trust them, and you can verify it since everything relevant happens client-side.
There is virtually no difference from a private entity, which can be compelled by the government to do the same, and which also has its own profit motive that could create an incentive to do it.
There must be non-repudiation and integrity checks to verify transactions (e.g., in Estonia I digitally sign all my transactions), so the latter problem is easier to mitigate.
There is a whole industry that has been pushing for a couple of years now to tell us that they work, that they replace humans, that they work for search, etc. Saying "we don't expect them to tell the truth" is a little bit too easy.
If nobody expected them to tell the truth, or even just to be accurate, they wouldn't have been designed as programs that speak with such authority, and they probably wouldn't be the target of massive investments.
So yeah, in principle I may agree with you, but in the socio-technical context in which LLMs are being developed, the argument simply does not work in my opinion.
>There is a whole industry that has been pushing for a couple of years now to tell us that they work, that they replace humans, that they work for search, etc.
Who are you referring to? Did someone tell you that chatgpt "works for search" without clicking the "search" box?
Also, are you sure that AI designers intend for their LLMs to adopt an authoritative tone? Isn't that just how humans normally type in the corpus?
Also, you seem to be arguing that, because the general tone you've been hearing about AI is that "they work for search", OpenAI should therefore be liable for generated content. However, the general tone of discussion you've been hearing doesn't really match 1:1 with any company's claims about how its product works.
Just as an example, read https://openai.com/index/introducing-chatgpt-search/ and see how many mentions there are of "better information", "relevant", "high quality". Then see how many mentions there are of "we don't expect it to be real stuff".
> Also, are you sure that AI designers intend for their LLMs to adopt an authoritative tone? Isn't that just how humans normally type in the corpus?
If designers wanted it any other way, they would have changed their software. If those who develop the software are not responsible for its behavior, who is? Technology is not neutral. The way AI communicates (e.g., all the humanizing language like "sorry", "you are right", etc.) is their responsibility.
In general, it is painfully obvious that none of the companies publishing LLMs paints a picture of their tools as "dream machines". That narrative is the complete opposite of what is needed to gather immense funding, because nobody would spend hundreds of billions on a dream machine. The point is creating hype around LLMs being able to do humans' jobs, and that means them being right - and maybe making "some" mistakes every now and then.
All you need to do is go to OpenAI's website and read around. See https://openai.com/index/my-dog-the-math-tutor/ or https://openai.com/chatgpt/education/ just as a start.
Who would want a "research assistant" that is a "dream machine"? Which engineering department would use something "not expected to say real stuff" to assist in designing?
Policies are often used to create markets or - as is proposed in this case - to create that demand. The article makes the argument that the free market failed to deliver in this sector in Europe.
One of the explanations is that once you lag behind competitors, if policies don't force buyers to value specific parameters that can be fulfilled only by other competitors, no new competitors will join the market because nobody is going to choose them. So the author argues, for example, for imposing requirements on public tenders such as "must be subject only to EU law". This creates demand which no current offer in the market matches, and creates market incentives for new players to compete.
So regulations can absolutely work where "free" market fails (quotes because even the big 3 are/were pumped full of money by government/defense contracts).
I had a quick look: that "deploy" user can run any sudo command without a password? It's basically root at that point. I think that forcing a password (maybe with a generous timeout if you don't want to enter it so often) is a much better option.
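Just as an illustration (the user name, drop-in path and timeout below are assumptions on my part, not taken from your setup), a sudoers drop-in that still asks for a password but caches it for a while could look something like:

    # hypothetical /etc/sudoers.d/deploy - full sudo, but with a password,
    # cached for 30 minutes after each successful prompt
    Defaults:deploy timestamp_timeout=30
    deploy ALL=(ALL:ALL) ALL

That way a stolen SSH key alone doesn't immediately translate into root.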
Correct me if I am wrong, but I also see that there are secrets in the file (e.g., Gmail SMTP creds). Make sure the file is read-protected at a minimum. If those are your Gmail app credentials, they are pretty serious and obtainable by simply reading the file (same goes for the postfix config).
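If those live in a plain config file, something along these lines (the path is just a placeholder) keeps it readable only by its owner:

    # hypothetical path - make sure the owner is whichever user actually needs to read it
    chmod 600 /path/to/app-config
    ls -l /path/to/app-config   # should show -rw------- and the expected owner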
I’ve had this argument so many times over the years, and usually the response comes down to security by obscurity, because people won’t know the non-root username.
That, I guess, is relevant in the context of brute-force logins, which, given you only use key-based auth, is not really something I would stress over. However, depending on what that user does, there might be vulnerable services running with its privileges, or there might be supply-chain vectors in tools that user runs.
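If you want to gauge that exposure, a couple of quick checks (the user name is again just an example):

    # what actually runs under that account, and what sudo lets it do
    ps -u deploy -o pid,comm,args
    sudo -l -U deploy    # needs root (or suitable sudo rights) to inspect another user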