Actually that is very "easy" to do: first you short sell some stock (a naked short) for $1M, then the stock's value goes up by $8M. You are asked to cover your short, and even after your $1M in proceeds you are still left holding $7M in liabilities.
This is incorrect. You can't talk about an absolute $8M gain without talking about the total market cap (total value) of the stock.
What would have to happen is that the company's value increases not by $8M but to 8 times its current value, which is more plausible for a small-cap stock.
Yes, agreed with your example. The total loss in your example is only 16%. In the example provided above he starts with $1M and takes a loss of $7M. That's a 700% loss.
Umm, you know how short sales work, right? You're borrowing shares to sell them at current prices, and you have to return the shares later. If the price rises, you are out the difference between what you sold at and what you repurchase at when the call to cover comes in.
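For anyone who wants the arithmetic spelled out, here's a back-of-the-envelope sketch in Python (my own figures, following the example upthread under the "price rises 8x" reading discussed above):

    # P&L of the short position from the example (assumed 8x price rise).
    proceeds = 1_000_000              # cash received selling the borrowed shares
    cover_cost = proceeds * 8        # cost of buying the same shares back
    loss = cover_cost - proceeds
    print(f"loss: ${loss:,} ({loss / proceeds:.0%} of the initial position)")
    # -> loss: $7,000,000 (700% of the initial position)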
From Wikipedia: "Naked short selling, or naked shorting, is the practice of short-selling a tradable asset of any kind without first borrowing the security or ensuring that the security can be borrowed, as is conventionally done in a short sale." It is really easy to lose your shirt with a naked short.
Naked shorting doesn't do much to exacerbate the downside of shorting a security. Even if you have borrowed the security, you still have an unlimited downside (i.e. the stock price _can_ go to infinity).
Naked shorting is more of a systemic problem because you can put downward pressure on a stock even when the market is not willing to sell at the price you are pretending to sell at. It also makes it hard to keep track of voting shares and you can see many more votes than actual shares.
Look at the agreement you sign when opening a brokerage account. You give the brokerage firm the right to loan out any shares of stock you own. I believe this only happens in accounts that have margin (you do have to have a margin account to short), but I am not entirely certain.
This scenario might be called a "short squeeze". If you can't actually borrow the stock which you've located before shorting - then you get "bought in" - and your broker will just buy shares at whatever price is necessary to cover.
Maybe he could launch a GoFundMe like that guy that lost $100K on E*trade shorting stocks. Although either he or GoFundMe at least had the good conscience to delete that campaign...
This may be a silly question, but what's the point of creating more efficient encryption methods now when 20-30 years down the road we'll have things like quantum computing with vastly greater processing power?
Wouldn't that then essentially make modern encryption methods obsolete?
I'd like to hear a more educated viewpoint on this, because most of the sources I've read gloss over everything and make it seem like magic and this seems like a good thread to ask.
Edit: Thanks for the responses, I think I get it a bit better now :D
To address just one of your points, quantum computation is not a silver bullet. It does not work the way one might think it works. Not many researchers are saying that they will replace classical computers, and for good reason. For many, or even most, computational tasks there is no way to get a large quantum speedup. (And classical computers have had a lot more R&D put into them.) Although easy integer factoring will break quite a few popular crypto schemes, like RSA, the risks of quantum computers are heavily overplayed. For example, I don't believe there is any known quantum attack against AES, which is a good start in a post-quantum world. (I can't think of an asymmetric encryption scheme that I'm sure is resistant to quantum attacks off the top of my head, since I'm no expert, but I'm positive some do exist.)
The (provably) best "general purpose" quantum algorithm is Grover's algorithm. The problem it solves is as follows:
Given an arbitrary function f(x), and a desired output k, find the unique input z such that f(z)=k.
We can see that using classical computers, this problem is O(n), where n is the size of the domain of f. However, Grover's algorithm can solve this problem in O(sqrt(n)). It has been shown that O(sqrt(n)) is the best possible solution to this problem on a quantum computer.
This means that quantum computers effectively halve the key-size for a brute force attack (and probably most other types), but doing any better than this would require exploiting some structure of the cryptosystem you are trying to break. To my knowledge, no such structure has been demonstrated for any major symmetric encryption algorithms.
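To make the O(sqrt(n)) query count concrete, here is a toy state-vector simulation of Grover's algorithm (a sketch of my own, with arbitrary parameters; a real quantum computer would not store all 2^n amplitudes explicitly):

    import numpy as np

    def grover_search(n_bits, target):
        # search N = 2**n_bits items in ~pi/4 * sqrt(N) oracle calls
        N = 2 ** n_bits
        state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
        iterations = int(np.pi / 4 * np.sqrt(N))
        for _ in range(iterations):
            state[target] *= -1                  # oracle: phase-flip the marked item
            state = 2 * state.mean() - state     # diffusion: inversion about the mean
        return int(np.argmax(state ** 2)), iterations

    print(grover_search(10, 397))  # -> (397, 25): ~25 oracle calls vs ~512 classically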
That's really interesting! I have only a passing education in quantum computers, which is why my comment is so devoid of detail... I really should learn more.
AES (and other symmetric ciphers) are vulnerable to Grover's algorithm (https://en.wikipedia.org/wiki/Grover%27s_algorithm), which effectively cuts key sizes in half. AES-128 would be reduced to 64-bit security. This isn't a big problem in practice, since we can just switch to 256-bit ciphers like AES-256 and ChaCha20.
Public-key schemes based on factoring and discrete logarithms are undone by Shor's algorithm (https://en.wikipedia.org/wiki/Shor%27s_algorithm), but there are asymmetric systems not known to be vulnerable to quantum algorithms. They are less mature, but researchers are working on it.
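For the curious: the only quantum part of Shor's algorithm is order-finding; the rest is classical number theory. A hedged sketch (order-finding brute-forced here, with a toy modulus of my own choosing):

    from math import gcd

    def find_order(a, N):
        # smallest r > 0 with a**r % N == 1 -- the step a quantum computer speeds up
        r, x = 1, a % N
        while x != 1:
            x, r = (x * a) % N, r + 1
        return r

    def shor_factor(N, a=2):
        assert gcd(a, N) == 1
        r = find_order(a, N)
        assert r % 2 == 0, "odd order; retry with another a"
        f = gcd(pow(a, r // 2, N) - 1, N)
        assert f not in (1, N), "trivial factor; retry with another a"
        return f, N // f

    print(shor_factor(15))  # -> (3, 5)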
Another way to explain this is that quantum computing would be the perfect answer to some extremely-naive kinds of cryptography, reducing the decryption time-complexity quite harshly (though not all the way to O(N) as one might expect)—but modern, practical cryptography does things fairly differently than spherical-cow cryptography.
For example, if cryptography basically consisted of taking a set of plaintext "blocks" and a key, permuting the key separately and deterministically for each block in O(1) time, expanding the key into a one-time pad again in O(1) time, and XORing each block with its respective pad—then you could use a quantum computer to quickly search the keyspace for a key that decrypts the blocks into something sensible according to some heuristic. You can make a quantum algorithm that "searches" a static keyspace, given that you can map a particular ciphertext block to particular plaintext block in O(1) time—this mapping effectively becomes a "projection" of the keyspace, and the quantum computer searches that.
The problem with this approach is that, in reality, subkeys in a "stream" aren't generated independently per-block (as happens in the much-derided "ECB mode" of block ciphers), but rather are generated serially by feeding in the previous subkey in a chained operation; and that each cipherblock is then created by "expanding" the subkey through a CSPRNG, which itself uses iterated hashing.
You can't make a quantum algorithm that searches a keyspace, given an encryption algorithm that is defined recursively or "statefully" on its input stream. There's no such thing as a "quantum for-loop" that magically makes an O(N)-step process into a single linear projection of the keyspace—and without this, there's nothing to usefully search through.
Yeah, we have some public key schemes that are conjectured to be resistant to quantum attacks. This one in particular is fun because of the connection to machine learning. (https://en.wikipedia.org/wiki/Learning_with_errors)
Basically, it uses the result that it should be 'hard' to learn a linear function with sufficient noise rate to create a cryptosystem.
Also note that most homomorphic encryption schemes proposed so far are based on LWE and are therefore also conjectured to be resistant to quantum attacks.
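To give a feel for what an LWE instance looks like, here's a toy example (parameters invented for illustration and far too small to be secure):

    import numpy as np

    rng = np.random.default_rng(0)
    q, n, m = 97, 8, 16                  # modulus, secret dimension, sample count
    s = rng.integers(0, q, size=n)       # secret vector
    A = rng.integers(0, q, size=(m, n))  # public uniformly random matrix
    e = rng.integers(-2, 3, size=m)      # small noise; without it, plain Gaussian
                                         # elimination would recover s trivially
    b = (A @ s + e) % q                  # public data is (A, b); finding s is LWE
    print(A[0], b[0])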
Also, homomorphic encryption isn't about making encryption more efficient; it's about being able to compute any function over an encrypted dataset and get the same result as if it were computed over the unencrypted version. This is pretty cool if it works, because you could, say, send over an encrypted version of a program to let someone else test it without giving away any details of how it works.
from the paper:
"The purpose of homomorphic encryption is to allow computation on encrypted data. Thus data can remain confidential while it is processed, enabling useful tasks to be accomplished with data residing in untrusted environments. In a world of distributed computation and heterogeneous networking this is a hugely valuable capability."
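A concrete, if very limited, taste of "computation on encrypted data": textbook RSA happens to be multiplicatively homomorphic. A toy sketch with deliberately insecure parameters of my own choosing (fully homomorphic schemes like the LWE-based ones mentioned above aim to support arbitrary functions, not just multiplication):

    # Multiplying RSA ciphertexts multiplies the underlying plaintexts.
    p, q = 61, 53                        # toy primes, wildly insecure
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    c1, c2 = enc(7), enc(6)
    print(dec((c1 * c2) % n))            # -> 42 == 7 * 6, computed on ciphertexts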
For one, quantum computing may not happen? Like, we're pretty sure it will work, but we aren't actually sure we can make it happen. The closest we've gotten is D-wave and it's still controversial.
The best way to articulate that criticism is to quote me (or at least paraphrase Daniel Lidar) and say:
"We have quantum information processing, but no computers as of yet."
And what we already have is so darned expensive that it is less useful than classical computation unless you are building some exotic kind of sensor.
NTRU is another quantum-secure (i.e. thought to be quantum-secure) cryptosystem. It can do most things we demand from public-key cryptosystems. More generally, there is no known quantum attack that significantly breaks lattice-based cryptosystems.
While true, LLL basis reduction is quite effective against it. To counter it, your key must be much larger. Further, it multiplies the length of the ciphertext, IIRC (or something gets bigger...).
my simple answer is, encryption works by creating a function which is vastly cheaper to compute in one direction (encryption) than the other (cracking).
for any computing power available, if you apply x seconds of computation to encrypt with a complex key, it will take a multiple of x years to crack it.
as long as that multiple remains, which cryptology seeks to improve, just applying more computing power will never obsolete modern encryption methods.
It is worth noting that modern cryptography generally considers a system secure only if it takes exponentially longer to crack than to use. That is to say, the cryptosystem defines some security parameter y, and legitimate operations can be done in time polynomial in y, while attacks require O(b^y) time for some b > 1.
Even quantum computers require superlinear time to run Shor's algorithm.
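A quick numeric illustration of that exponential gap (numbers mine): each extra bit of security parameter doubles a generic attacker's work while barely changing the defender's:

    # defender's work grows polynomially in y; a brute-force attack grows like 2**y
    for y in (64, 128, 256):
        print(f"y = {y:>3}: attack cost ~ 2^{y} = {2**y:.3e} operations")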
I think a distinct problem with the most recent flow of migrants into Europe is the idea of 'relatability' and how it affects feelings towards a migrant population.
In many previous migration movements in both Europe and the United States there was some relatable factor between the native population and the migrants. For example, people coming to the US from Cuba were seen as escaping communism for democracy, and so were viewed favorably. Similarly with Vietnamese immigrants, for example. In Europe the same thing happened after 1991.
Conversely, now, beyond the fact that the migrants are of a different skin color and religious makeup than the majority of Europeans, society does not have the same sympathetic view it had in the past. There were no Cuban or Vietnamese terrorist strikes on American or European soil, and so the new migrant wave is seen as a risk.
Also, the fact that we are still at 'war' with the Middle East doesn't help. It is easier for a society to forgive a nation/ethnicity it has been fighting after said nation has won than while they're still in conflict.
Overall I think the situation is very complex, and there's no easy way to make Europe accept this wave of migration the way it accepted similar waves in the past.
I agree and it scares the crap out of me to think about what might happen in the coming years.
Mostly I'm trying to think of the best course of action for me as an individual. Not doing anything seems like the worst option. But when it comes to doing something, should I:
1. actively fight right-wing sentiment, with probably little effect (mostly preaching to the choir, as most of my friends are highly-educated left-wing liberals).
2. volunteer in a refugee camp, which, while personally more satisfying, might crowd out more 'valuable' things I could do.
3. become politically active in some way, but this would require a significant up-front effort in trying to untangle the situation.
4. try to find longer-term sustainable solutions for refugees who are likely to be here for a long time no matter what happens, faced with few good options (they can't work, can't study, and are often isolated).
5. something else?
I'm currently a bit paralyzed by the possible options, yet for various reasons feel a strong desire to help with the problem, perceived or otherwise, of the influx of refugees. I'd greatly appreciate some concrete suggestions.
0. Stop the dogmatic multi-culti cult, accept human nature (= we prefer to live with our kind) and strive for a world in which every race and religious group is able to live peacefully and prosper, in its OWN LAND, with its own way of life.
Don't worry about the 'Murrica. I'm sure he's from one of those 'Urop startups we're hearing so much about....
:^)
On topic: I'm pretty sure NASA uses metric units, after they had that one launch in the 90's that got messed up due to a conversion error between metric and imperial.
NASA used metric units before and after, and they were right there in the spec. The problem was their contractors at Lockheed using some other units instead.
He puts himself on quite a pedestal, comparing himself to Fonda like he's the righteous movie star, but I'm no psychologist.
It might be hard, but try to understand the minority perspective in viewing the system: you might not trust any part of it, and you'll see things like bias and privilege even when they don't exist.
Ignore that guy. He's a new account that specialises in trolling. I've seen him in other threads where his comments are so downvoted that they're barely readable. I think they're either deleted or removed at this point.
Your quote would seem to support the idea you object to. "I'm no Henry Fonda" explicitly draws a comparison between yourself and Henry Fonda, and this kind of rhetorical device is often used to suggest things that the bounds of propriety forbid you from saying outright.
Imagine two political rivals campaigning against each other, Mr. Lowbrow and Mr. Classy. Mr. Lowbrow releases campaign literature to the effect that Mr. Classy is a pussy. Mr. Classy responds "a less courteous person might observe that my esteemed opponent washed out of seventh grade... but I won't. This race is about the issues."
The particular rhetoric Mr. Classy is using there doesn't mean it makes sense to defend him against charges that he called Mr. Lowbrow stupid. (Hey, he specifically said he wasn't doing that!)
In this case, the author is clearly comparing another juror ("Henry") to Henry Fonda, not himself. Which is definitely not comparing himself to Henry Fonda.
Wait so this article is basically him saying that the system worked?
That's the impression that I got. Even disregarding his early leanings towards high-school-level leftist protest and mistrust of the government, doesn't his careful consideration of the case show the reasons why we use a jury system? Even if the 'mob', i.e. the other jurors, decides that a person is guilty, one or two reasonable arguments can decide otherwise.
It seems to me that everything worked out as it should. I wouldn't feel bad if I was the author. (Oh, and he'll be back in court; serving on a case in most states only gives you a 3-5 year reprieve from jury duty.)
If you admit you were ever on a hung jury, you are much more likely to be struck from the list, so while the author will have to serve jury duty, there’s a good chance the author won’t be on another jury any time soon, especially in another serious case.
I wish that had worked for me. I served 3 times, first time was a hung jury, next two were burglary and battery. Terrible experiences all of them. Wish I could have been excused.
I really enjoyed this article, but it made me realize that I don't have a good understanding of the goals of the voir dire process. It reminded me of job interviews, where certain questions are off-limits in order to hopefully limit the effects of some of the personal biases of the interviewer.
From a game design point of view, it's fascinating. The selection process attempts to give both opponents an equal chance of eliminating undesired pieces from the board. But from the system's point of view, balancing that power against the delivery of justice as innocent-until-proven-guilty/burden-of-proof/reasonable-doubt seems kinda suspect.
I'm totally going down the rabbithole reading about this over the weekend :)
Yes. They asked if I had served on a jury before and if the jury had reached a verdict. The battery case was the second time I served. I answered yes that I had served and no the jury did not reach a verdict. They then asked what the prior case was about - maybe since it wasn't related they didn't boot me. The third time I served it was a burglary case so I answered that yes I had served twice before, once was a hung jury and once we arrived at a verdict. No further questions.
I believe his point is that although it worked in this instance it seems likely that it DOESN'T work in many other cases.
Then again, 10 people would argue the system doesn't work because 2 stubborn knuckleheads refused to issue a guilty verdict when that obviously should have been the ruling.