It depends on their test dataset. If the test set was written 80% by AI and 20% by humans, a tool that labels every essay as AI-written would have a reported accuracy of 80%. That's why other metrics such as specificity and sensitivity (among many others) are commonly reported as well.
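To make that concrete, here's a toy illustration with made-up numbers: a 100-essay test set that is 80% AI-written, scored by a "detector" that flags everything as AI.

```python
# Toy illustration with made-up numbers: a test set that is 80% AI-written,
# scored by a "detector" that labels every essay as AI-written.
ai_essays, human_essays = 80, 20

true_pos = ai_essays        # AI essays correctly flagged as AI
false_pos = human_essays    # human essays wrongly flagged as AI
true_neg = 0                # human essays correctly passed
false_neg = 0               # AI essays missed

accuracy = (true_pos + true_neg) / (ai_essays + human_essays)  # 0.80
sensitivity = true_pos / (true_pos + false_neg)                # 1.00: catches every AI essay
specificity = true_neg / (true_neg + false_pos)                # 0.00: flags every human essay
print(accuracy, sensitivity, specificity)
```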
Just speaking in general here -- I don't know what specific phrasing TurnItIn uses.
I don’t know the “correct” answer, but here’s my answer as someone whose TOTP secrets are split across a YubiKey and Bitwarden: I store TOTP in Bitwarden when 2FA is required and I just want it to shut up. My Vault is already secured with a passphrase and a YubiKey, both of which are required in sequence, and actually using a cred once the Vault is authenticated requires a PIN code (assuming the Vault has been unlocked during this run of the browser; otherwise it requires the master password again).
At that point, frankly, I am gaining nearly nothing from external TOTP for most services. If you have access to my Vault and can fill my password from it, I am already so far beyond pwned that it’s not even worth thinking about. My primary goal is now to get the website to stop moaning at me about how badly I need to configure TOTP (and maybe refusing to let me use the service until I do). If it’s truly so critical that I MUST have another level of auth after my Vault, it needs to be a physical security key anyway.
I was begging every site ever to let me use TOTP a decade ago, and it was still rare. Oh the irony that I now mostly want sites to stop bugging me for multiple factors again.
My Bitwarden account is protected with YubiKey as the 2FA. I then store every other TOTP in Bitwarden right next to the password.
I get amazing convenience with this setup, and it’s still technically two-factor. To get into my Bitwarden account you need to both know my Bitwarden password and have my YubiKey. If you can get into my Bitwarden, then I am owned. But for most of us who are not, say, being specifically targeted by state agents, this setup provides good protection with a very good user experience.
2FA most commonly thwarts server-side compromised passwords. An API can leak credentials and an attacker still can’t access the account without the 2FA app, regardless of which app that is. The threat vectors it does open you up to are a) a compromised device or b) someone with access to your master password, secret key, and email account. Those are both much harder to pull off, and you’re probably screwed in either case unless you use a YubiKey or similar device.
How is it possible to have a compromised password but not a compromised second factor? I don't understand the theory of leaking only some of the factors. What is stopping webmasters from using 100FA?
> How is it possible to have a compromised password but not a compromised second factor?
Server-side (assuming weak password storage or weak in-transit encryption) or phishing (more advanced phishers may get the codes too, but only a single instance of the code, not the base key).
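To make the "single code vs. base key" distinction concrete, here's a minimal RFC 6238 sketch (the base32 secret is just an example value, not a real account key): the 6-digit code is an HMAC of a long-lived secret and the current 30-second window, so phishing one code reveals neither the secret nor future codes.

```python
import base64, hashlib, hmac, struct, time

def totp(base32_secret, t=None, step=30, digits=6):
    # RFC 6238: derive a short-lived code from a long-lived base key
    # plus the current 30-second time window.
    key = base64.b32decode(base32_secret)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real account key
```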
> What is stopping webmasters from using 100FA?
The users would hunt them down and beat them mercilessly?
Mostly for the sites that insist on MFA and that I need to use daily. Using two separate stores would be too annoying, and the increase in security is minimal - I consider Bitwarden to be secure enough (password + YubiKey), and the main scenario where somebody could get to my account would be on the server side, or phishing. For that, MFA helps somewhat, but storing the MFA code in a separate app doesn't do much.
Only finitely many values of BB can be mathematically determined. Once your Turing Machines become expressive enough to encode your (presumably consistent) proof system, they can begin encoding nonsense of the form "I will halt only after I manage to derive a proof that I won't ever halt", which means that their halting status (and the corresponding Busy Beaver value) fundamentally cannot be proven.
Yes, but as far as I know, nobody has shown that the Collatz conjecture is anything other than a really hard problem. It isn't terribly difficult to mathematically imagine that perhaps the Collatz problem space considered generally encodes Turing complete computations in some mathematically meaningful way (even when we don't explicitly construct them to be "computational"), but as far as I know that is complete conjecture. I have to imagine some non-trivial mathematical time has been spent on that conjecture, too, so that is itself a hard problem.
But there is also definitely a place where your axiom systems become self-referential in the Busy Beaver and that is a qualitative change on its own. Aaronson and some of his students have put an upper bound on it, but the only question is exactly how loose it is, rather than whether or not it is loose. The upper bound is in the hundreds, but at [1] in the 2nd-to-last paragraph Scott Aaronson expresses his opinion that the true boundary could be as low as 7, 8, or 9, rather than hundreds.
That's a misinterpretation of what the article says. There is no actual bound in principle to what can be computed. There is a fairly practical bound which is likely BB(10) for all intents and purposes, but in principle there is no finite value of n for which BB(n) is somehow mathematically unknowable.
ZFC is not some God given axiomatic system, it just happens to be one that mathematicians in a very niche domain have settled on because almost all problems under investigation can be captured by it. Most working mathematicians don't really concern themselves with it one way or another, almost no mathematical proofs actually reference ZFC, and with respect to busy beavers, it's not at all uncommon to extend ZFC with even more powerful axioms such as large cardinality axioms in order to investigate them.
Anyhow, I just want to dispel a common misconception that comes up: that there is somehow a limit in principle to the largest BB(n) that can be computed. There are practical limits for sure, but there is no limit in principle.
You can compute a number that is equal to BB(n), but you can't prove that it is the right number you are looking for. For any fixed set of axioms, you'll eventually run into a BB(n) large enough that its value is independent of those axioms.
>You can compute a number that is equal to BB(n), but you can't prove that it is the right number you are looking for.
You can't categorically declare that something is unprovable. You can simply state that within some formal theory a proposition is independent, but you can't state that a proposition is independent of all possible formal theories.
They didn't claim that. They claimed that any (sound and consistent) finitely axiomatizable theory (basically, any recursively enumerable set of theorems) can only prove finitely many theorems of the form BB(n) = N.
Only if your goalpost of what "mathematics" is keeps endlessly shifting. To prove values of BB(50000) you're probably going to need some pretty wacky axioms in your system. For BB of any large number, it's going to be infeasible to justify that the system isn't tailored specifically to prove that fact, stopping just short of adding the axiom "BB(x) = y".
I don't understand the point of this article, as it doesn't define an objective function. It just states a strategy that is only practically implementable for small board sizes (given the cited NP-completeness result) and then calls it good sans theorem or even conjecture.
I believe it is provably not the optimal algorithm for solving the problem under the minimax objective, and I have a hunch that (due to rounding issues) it is also not optimal for minimizing the expected number of guesses under even a uniform prior. So what does this actually accomplish?
I agree with you. I agree with OP in the following sentences:
>We have now landed on our final strategy: start by figuring out the number of possible secret codes n. For each guess, calculate the number n_i' of codes that will still be viable if the Code Master gives response i in return. Do this for all possible responses.
But then I don't agree with:
>Finally, calculate the entropy of each guess; pick the one with the highest.
Why wouldn't we just pick argmin_{guess} sum_{i in possible responses} Pr[i] * n'_i = argmin_{guess} sum_i (n'_i/n) * n'_i = argmin_{guess} sum_i n'_i^2? (The factor 1/n is the same for every guess, so dropping it doesn't change the argmin.) This is the guess that minimizes the expected size of the resulting solution space.
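As a sketch of what I mean (my own toy code, not the article's; assumes standard Mastermind feedback and restricts guesses to still-viable codes):

```python
from collections import Counter
from itertools import product

def feedback(guess, secret):
    # Standard Mastermind response: (exact matches, colour-only matches).
    exact = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in set(guess))
    return exact, common - exact

def best_guess(candidates):
    # Choose the viable code minimizing sum over responses i of n'_i^2,
    # i.e. the expected number of codes left afterwards (up to the
    # constant factor 1/n, which is the same for every guess).
    def score(guess):
        counts = Counter(feedback(guess, secret) for secret in candidates)
        return sum(k * k for k in counts.values())
    return min(candidates, key=score)

# Example: 4 pegs, 6 colours; brute force, so it takes a few seconds.
candidates = list(product(range(6), repeat=4))
print(best_guess(candidates))
```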
Looks like she gave up her US citizenship when she moved, as did Boris Johnson who was also mentioned. So I haven't seen anyone who retained citizenship and was a recognized head of state.
The temperature of an object is a macroscopic property basically depending on the kinetic energy of the matter within it, which in a typical cup of water varies substantially from one molecule to the next. If before you could guess a little bit about the kinetic energy of a given water molecule based on whether it is part of the ice or not, after melting and sufficient time to equilibrate the location of a particular molecule gives you no additional information for estimating its velocity.
Not specific to this article, but it's tragic that computer science curricula, and discussions of these algorithms, virtually never highlight the tight connection between binary search and long division. Long division done in binary is exactly binary search for the correct quotient, with the number written above the line (plus some implicit zeros) being the best lower bound proven thus far. Similarly, division done in decimal is just "ten-ary" search.
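For anyone who hasn't seen the correspondence spelled out, here's a minimal sketch (my own illustration, not from the article): the quotient is built bit by bit, and at every step the partial quotient is the tightest lower bound proven so far.

```python
def divide(dividend, divisor):
    # Long division in binary, viewed as a binary search for the quotient:
    # try each bit from high to low, and keep it only if the enlarged
    # quotient still doesn't overshoot the dividend.
    assert dividend >= 0 and divisor > 0
    quotient = 0
    for b in range(dividend.bit_length(), -1, -1):
        candidate = quotient | (1 << b)
        if candidate * divisor <= dividend:  # lower bound still holds
            quotient = candidate
    return quotient, dividend - quotient * divisor

print(divide(100, 7))  # (14, 2)
```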
Neat. Division being iterative makes me feel better about last week, when I couldn't think up a good closed-form solution for something and didn't feel like computing gradients, so I just made a coordinate descent loop. It'll be funny if it's still in there when it ships.
What counts as a standard formatter? Python one-liners are Turing complete even without semicolons, evals, execs, and with finite stack depth. E.g. some formatters keep this at one line, while others introduce line breaks https://yonatan.us/misc/bf.html .
> Since P is a subset of NP, everything in P can be also be turned into an instance of SAT.
This statement is kind of trivial. The same is true for any target language (other than the empty language and the language containing all strings). The reduction is: (1) hardcode one string, y, that is in the target language and another string, z, that is not; (2) decide the original problem on the given input x in polynomial time poly(|x|); (3) output y if x is to be accepted and z otherwise.
The total running time is at most poly(|x|) + O(|y| + |z|), which is still polynomial in |x| since y and z are hardcoded constants.
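A sketch of that trivial "reduction" in code (the parity example and the formula strings are just made-up placeholders):

```python
def trivial_reduction(decide, yes_instance, no_instance):
    # Given a polynomial-time decider for a language L, plus one hardcoded
    # string in the target language and one not in it, map any input x
    # to an equivalent instance of the target language.
    def reduce_to_target(x):
        return yes_instance if decide(x) else no_instance
    return reduce_to_target

# Hypothetical example: "reduce" even-parity bit strings to SAT, using two
# made-up formula strings as the hardcoded yes/no instances.
y = "(a | ~a)"  # trivially satisfiable
z = "(a & ~a)"  # trivially unsatisfiable
to_sat = trivial_reduction(lambda x: x.count("1") % 2 == 0, y, z)
print(to_sat("1011"))  # -> "(a & ~a)", since 1011 has odd parity
```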