Configuring `innodb_file_per_table` was pretty common years ago when I last worked with MySQL. I thought the entire file gets deleted if you nuke the table. This would be for the default config with a single InnoDB file, right? To remove some tombstones.
Yeah, this tool was written to help recover from accidentally running `rm -rf` on the entire MySQL data folder. You can feed `page_extractor` the whole block device and it will extract every InnoDB page whose checksum matches.
I blindly clicked this expecting an article about the matter. Was kinda surprised to be given a live demonstration. I can reach nhs.uk fine.
Edit: searching Google, there are recent reports that make me feel like this is their response to being too incompetent to process cybercrime reports. I'm genuinely curious whether someone unilaterally decided that people not physically in the UK don't matter.
Doesn't work from Australia either, so apparently the Commonwealth can get fucked. I imagine some middle-ranking civil servants at the FCO will, in a not-too-distant-future, be expressing a measure of dismay to their Home Office counterparts. Before which time, hopefully, the gov.uk folks will figure out how to selectively block access to live reporting channels, not the entire police service, in response to vexatious abuse.
Prank emergency calls from children have been a hassle since time immemorial, and I see these circumstances as entirely analogous, merely at scale; I'd even be surprised if this is a first for online British police channels.
>> You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.
This is almost certainly an IT admin not bothering to read Cloudflare's documentation and leaving something on the wrong setting.
Number of customers who read documentation << number of customers, although it seems especially endemic in infosec.
I'm not saying it makes complete sense, but I'm not sure I understand why it bothers you that much? I'm not sure how many times I've visited police.uk in my (British) life, probably a few, but I'm pretty sure it's zero for police.au or whatever it is.
I've had reason to correspond with British police forces on several occasions. There are many transnational matters of diplomacy, law enforcement, citizenship, sport, tourism, hiring, migration, security, and trade for which police clearance and input may be (sometimes must be) sought, or notification given, and this disrupts all of them: websites are a starting point for interaction, documentation, and communication, and it's a massive burden if you're forced back to calling up folks at your UK offices (or worse, the high commission) just to start a conversation each time, let alone work through a process.
There's a good chance that instead of "fuck you Aussies", what we have is a firewall that has been battered to the point of banning most of the world.
The UK is under some form of attack at the moment (probably mostly internally generated - no need to invoke bogeymen). We have seen quite a lot of anti-racist activity recently, and it is likely the nasty lot have been contained (hopefully).
The Commonwealth is always welcome here - that's why our mutual Queen (EIIR) created it. She got to grips with us proles rather well from around 1995 onwards when the Royal Family realised they have to engage with us lot.
Personally speaking, I've always thought of AU and NZ (SA etc) as mates. We might disagree about a few things and rugby and cricket and that. However we have more in common than not.
I have no doubt that the mood in AU and NZ is largely stirred with anti-British sentiment (we see it here from Scotland, even though "modern Britain" was invented by a Scottish king - James VI and I).
NZ have often moved towards a silver fern on black flag. I say: why not - if that is your idea of your identity then crack on (it does look rather cool). Also, perhaps AU should rethink their flag - you go all in on gold and green. Ditch the blue thingie with the southern cross and the union flag in the top left - it doesn't suit you. ... or keep it because you can and proudly do whatever floats your boat. AU needs to sit down and take a really hard look at its flag and do what is best for AU.
The world is a messy place but let's keep it civil (and real).
I think the EU is affected by this as well. If there were any foreign region that needs access to the UK's police services, I would bet the neighbouring EU countries would be at the top of the list.
At the very real risk of "talk is cheap": my understanding is that this is part of why Jepsen publishes the test suites (e.g. https://github.com/jepsen-io/etcd ), so it's not "take my word for it" but rather "lein run test-all" and watch the fireworks. A sufficiently motivated actor - say, one of the deep-pocketed stewards of the Kubernetes project - could run the tests themselves.
Between my indescribable hatred for etcd and my long-held lust for a pluggable KV backend (https://github.com/kubernetes/kubernetes/issues/1957 et al) it'd be awesome if any provable KV safety violations were finally the evidence required for them to take that request seriously
Having looked at the test suite already, I know enough to know that I don't understand it well enough to be the one to do this. For that reason, I'm personally going to pull out the popcorn and see what happens over the next few weeks.
I'm currently working on a Rust v3 client and have been reading the Go v3 source code. The code is definitely hard to follow, so I would be unsurprised if there were issues lurking.
I was curious and dug into the Go client code. You linked to the definition of KV – the easiest way to create one is with NewKV [1], which internally creates a RetryKV [2] wrapper around the Client you give it.
RetryKV implements the KV methods by delegating to the underlying client. But before it delegates an immutable request (e.g., range), it sets the request retry policy to repeatable [3].
Retries are implemented with a gRPC interceptor, which checks the retry policy when deciding whether a request should be retried [4].
The Jepsen writeup says a client can retry a request when “the client can prove the first request could never execute, or that the request is idempotent”. In my (cold) read of the code, the Go client stays within those bounds.
For non-idempotent requests, the Go client only retries when it knows the request was never sent in the first place [5]. For idempotent requests, any response with gRPC status unavailable will be retried [6].
Unlike jetcd, the Go client’s retry behavior is safe.
The watch is a simple processing loop that receives and sends on a bi-directional gRPC stream. Leases have a similar loop for keep-alive messages; everything else is quite literally delegated.
I get that it's difficult to translate this 1:1 into Rust without channels and select primitives, but saying it's complex is wild. Try the server-side code for leases ;)
Your posts are something I have in my bookmarks and reference regularly as I continue to build my own distributed data system. Thanks for continuing to test and report on these issues. These posts have clarified a lot of details about the consistency guarantees of these systems that I really couldn’t discern from their own documentation. The knowledge is invaluable with how developers lean towards just trusting the system they consume to be correct.
It depends on what you're using these tools for. If you want a lock manager and some metadata storage to help your distributed system maintain state, etcd is better for that job than rqlite. It's a better ZooKeeper. With etcd you can hold a lock and have unlocking taken care of if the connection is disrupted, because locks are tied to leases. rqlite is not a good option for this.
Agreed, in the sense that while rqlite has a lot in common with etcd (and Consul too -- Consul and rqlite share the same Raft implementation[1]) rqlite's primary use case is not about making it easy to build other distributed systems on top of it.
Every time I've looked at rqlite, it just falls short features-wise in what I would want to do with it. A single Raft group does not scale horizontally, so to me rqlite is a toy rather than a tool worth using (because someone might mistake the toy for production-grade software).
That's clearly a mistaken attitude because both Consul and etcd also use a single "Raft group" and they are production-grade software.
Ruling out a piece of software simply because it doesn't "scale horizontally" (and only writes don't scale horizontally in practice) is a naive attitude.
The qualifier here is for /my/ use cases. However, I couldn't recommend rqlite over better options even at the scale it can serve.
One of the problems, if you're working with developers, is that the replicated log contents are the SQL statements themselves, instead of the SQLite WAL frames as in dqlite. I know this is a workaround to integrate mattn/sqlite3, but developers are going to think "oh, I can do sqlite stuff!". This is a footgun that someone will inevitably trigger at some point if rqlite is in their infrastructure for anything substantial. In enterprise, it's plainly untenable.
Another issue is that if I want to architect a system around rqlite, it won't be "consistent" with rqlite alone. The client must drive the transaction and get feedback from the system, which you cannot do with an HTTP API the way you've implemented it. There was a post today where you can observe that with the jetcd library against etcd. Furthermore, you can't even design a consistent system around rqlite alone because you can't use it as a locking service. If I want locks, I end up deploying etcd, Consul, or ZooKeeper anyway.
If I had to choose a distributed database with schema support right now for a small scale operation, it would probably be yugabyte or cockroachdb. They're simply better at doing what rqlite is trying to do.
At the end of the day, the type of people needing to do data replication also need to distribute their data. They need a more robust design and better safety guarantees than rqlite can offer today. This is literally the reason one of my own projects has been in the prototyping stage for nearly 10 years now. If building a reliable database was as easy as integrating sqlite with a raft library, I would have shipped nearly 10 years ago. Unfortunately, I'm still testing non-conventional implementations to guarantee safety before I go sharing something that people are going to put their valuable data into.
To say I'm simply "ruling out a piece of software because it doesn't scale horizontally" is incorrect. The software lacks the design and features required by the audience you probably want to reach.
Hopefully you find my thoughts helpful in understanding where I'm coming from with the context I've shared.
>One of the problems is if you're working with developers, the log replication contents is the queries, instead of the sqlite WAL like in dqlite.
I think you mean rqlite does "statement-based replication"? Yes, that is correct; it has its drawbacks, and they are clearly called out in the docs[1].
>Another issue is if I want to architect a system around rqlite, it wont be "consistent" with rqlite alone. The client must operate the transaction and get feedback from the system, which you can not do with an HTTP API the way you've implemented it.
I don't understand this statement. The rqlite docs are quite clear about the types of transactions it supports. It doesn't support traditional transactions because of the nature of the HTTP API (though that could be addressed).
>Furthermore to this point, you can't even design a consistent system around rqlite alone because you can't use it as a locking service. If I want locks, I end up deploying etcd, consul, or zookeeper anyways.
rqlite is not about letting developers build consistent systems on top of it. That's not its use case. It's a highly-available, fault-tolerant store that aims for ease of use and ease of operation -- and aims to do what it does do very well.
>If I had to choose a distributed database with schema support right now for a small scale operation, it would probably be yugabyte or cockroachdb. They're simply better at doing what rqlite is trying to do.
Of course, you should always pick the database that meets your needs.
>If building a reliable database was as easy as integrating sqlite with a raft library, I would have shipped nearly 10 years ago.
Who said it was easy? It's taken almost 10 years of programming to get to the level of maturity it's at today.
>They need a more robust design and better safety guarantees than rqlite can offer today.
That is an assertion without any evidence. What are the safety issues with rqlite within the context of its design goals and scope? I would very much like to know so I can address them. Quality is very important to me.
This seems like a lack-of-knowledge issue. The problems with rqlite are inherent in its design, as I've already articulated. You can literally start reading Jepsen analyses right now and understand it if you don't already: https://jepsen.io/analyses
"Evidence Dump Fallacy." This fallacy occurs when a person claims that a certain proposition is true but, instead of providing clear and specific evidence to support the claim, directs the questioner to a large amount of information, asserting that the evidence is contained within.
rqlite -- to the best of my knowledge and as a result of extensive testing -- offers strict linearizability due to its use of the Raft protocol. Each write request to rqlite is atomic because it's encapsulated in a single Raft log entry -- this is distinct from the other form of transactions offered by rqlite[1], but that second form of transaction functionality has zero effect on the guarantees offered by Raft and rqlite (they are completely different things, operating at different levels in the design). If you know otherwise I'd very much like to know precisely why and how.
I won't be following up further; I've shared all I have to share on this topic. On a personal level, I'm actually disappointed in how you respond to critical feedback about your product, and you don't seem interested in understanding the problem domain you're developing for.
This is pretty much a repeat of Richard Stallman's experiences that led him to found what would become GNU. Also, open source really isn't just being able to view the code, but the ability to modify, use, and redistribute freely.
One of the main things that they have discussed in interviews is trying to create a third pole of licensing structures. There is Free Software, OSS, and then they want to create a 3rd option with non-commercial clauses that everyone can rally behind for people who want to use those sorts of clauses. I think this is a pretty interesting idea and it will be neat to see how far they take it.
From what I've seen, the problems that have cropped up with open source software don't seem to be corporate use, but rather companies that operate at scale without doing much to improve the software they're making bank on. AGPLv3 bridges that gap, imo. What I've noticed is that companies that want to use software without the imposed requirements will pay for custom licenses so they can make changes without having to share back. The money from commercial licenses ends up benefiting the projects using the license. Grafana and MinIO seem to be doing great in this model.
Many software projects have had licenses with no-commercial-use clauses over the last 30 years or more. If such a license were a good idea, why can't I think of a single example of a project that kept such a license and succeeded or became popular?
The article is paywalled. This seems like it would be the fault of the airlines, though. There's a reason to be distributed across different geographic areas.
But if the Azure outage is due to Windows machines crashing because of the currently ongoing CrowdStrike crash/reboot loop issue, then such servers might end up being down in all regions. Looks like there might be some advanced lessons to be learned about blast radius here...
Maybe because Windows Defender Advanced Threat Protection is an enormous resource hog that scans every byte of memory and storage accessed by the Hypervisor and performs a quadratic time computation on the data? I am just guessing because my “fastest” Windows laptop CPU money could buy feels like a hot smelting furnace and a sloth at the same time when I use VMWare Workstation. What the &$@* is it scanning the VMWare guests for?
More likely crash looping of so many VMs overloading some system with insufficient back pressure, possibly combined with unfortunate cluster management scheduler behavior at this scale of crash looping (e.g. too eager to retry scheduling instances, maybe even on new hosts which causes more infrastructure load).
VM storage is probably on Windows Server, plus AD. I'd bet out of band management is all in the impact zone too. Might be back to someone pushing physical switches and hooking up a KVM.
Not sure if anyone else would have an interest in this, but personally I find dealing with SQL in Go worse than just dealing with HTTP APIs. This is the first step in an ecosystem I'm putting together to rapidly prototype products to run in Kubernetes clusters.