The weakest link I can identify here is a MITM attack while you're sending the link.
Imagine:
Alice generates a link.
Sends the link to Bob over an unencrypted/unauthenticated channel.
Mallory intercepts the link, generates his own link, and sends that one to Bob.
Bob enters the confidential information on Mallory's link.
Mallory sees the confidential information, and then sends it to Alice's original link.
The only way to prevent this type of attack is sending the link over a secure channel. But if you already have a secure channel - what's the use case?
The channel does not need to be secure, only authenticated. So, e.g., you could send them a Slack message and then have them call you to confirm that you just sent that link, and not someone who hacked your Slack account.
It's fine to have someone snoop and see the link, as long as they can't change the link in transit.
>But if you already have a secure channel - what's the use case?
You can send the link over Whatsapp, or some other easy-to-use E2E-encrypted service, and then transmit the sensitive data over encrypted email, which is more convenient for long-form text.
You are right on this one. The only protection implemented against this situation (right now) is that the email address the data will be sent to, and the fingerprint of the key that will be used to encrypt it, are shown to the user of the link so they can verify them. This way Mallory's email address would show up on the page, and the user could see they would not be sending to the right person.
This might not be enough for every use case, but we are working on more solutions so the user can be sure it is going to the right person.
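For illustration, here's roughly how that verification info could be computed, as a Python sketch using the python-gnupg package (the helper and field names are hypothetical, not the app's actual code):

    import gnupg  # python-gnupg, assumed available

    def verification_info(armored_key, recipient_email):
        """What the link page could display so the user can spot Mallory:
        the destination email plus the fingerprint of the encryption key."""
        gpg = gnupg.GPG()
        result = gpg.import_keys(armored_key)
        return {"sending_to": recipient_email,
                "key_fingerprints": result.fingerprints}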
Regarding the last statement: let's assume the "secure" channel they have is a chat app, like Slack, for example. It will store the content indefinitely, and it will sit there in clear text. Not only can Slack see it, but if a smartphone or computer is lost, stolen, or accessed by someone else, they will be able to see all the history and content sent through it.
I wouldn't go so far as to say that Trump and Hitler are one and the same, but you can draw similarities between the rise of nationalism in both campaigns and their impact on the electorate.
Even if Hitler isn't used as an example (it is a rather extreme one), say David Duke ran for president and Peter Thiel donated to his campaign. Would YC cut ties with Peter then?
I'm just curious to know where YC thinks the line is, and if there is even one.
It's not lazy. It's showing the extreme (clearly unacceptable) end of the spectrum to ask someone how they decide where to draw the line. What would Trump have to do before supporting him is unacceptable to YC?
But if you really don't want to use Hitler, try Rodrigo Duterte or Silvio Berlusconi instead. They actually share a lot of similarities with Trump.
"It's showing the extreme (clearly unacceptable) end of the spectrum to ask someone how they decide where to draw the line."
That's actually very lazy. It's shifting the burden of "drawing the line" on the other side.
It's the "well, why don't we just kill toddlers as well? Where do you draw the line?" in an abortion argument.
It's the "why don't we just give 12 year old's automatic assault weapons" or "why don't we disarm the police and military as well" in a gun control debate.
It's the "So why don't we just tax corporations 100% of their income" in a corporate tax debate.
Trump has started heavily implying that the election will be rigged, Russia-style. He repeats Info Wars propaganda silliness. He's come out to say that the election will be stolen from true Americans by illegal immigrants. He's called for beating protesters.
Some of his positions are more a question of form than function compared to the political mainstream. His comments on Muslims immigrating into the country are not far from the Republican Party mainstream, unfortunately.
But beyond his policies, he has been openly hostile to the concept of functioning democracy. There's a lazy comparison to be made to Hitler, but that's because Hitler also subverted the system through a cult of personality.
I know some people oppose him for his policies, but that feels internally inconsistent (you'd have to boycott most Republican supporters). But surely using strategies to undermine the legitimacy of the election results (which in a different country would be an obvious lead-up to a coup d'état) is something we can all get behind as a bad thing.
What's even better is that breakpoints haven't really changed over that period of time. They just work.
While a lot of tech is rapidly moving and constantly changing - this is the type of fundamental knowledge that will probably prove valuable for the rest of your career.
Usually they either run the code being debugged in the interpreter or recompile the JIT code in a special instrumentation mode. (Not having to monkey-patch the code at runtime—being able to do a "proper" recompilation—is one of the advantages of having a JIT!)
With an interpreter you can probably just ask the interpreter to stop at a certain instruction. For the purposes of your debugger, the interpreter is the CPU in that respect, except that you don't necessarily need to rewrite memory (although that approach probably exists too, where the IL has a special breakpoint opcode).
With a JIT compiler you could do the same as in the article, but it complicates things, because the code you want to debug may not have been jitted yet, or, with some JIT compilers, may never be jitted at all (e.g. a small method that runs exactly once). You could also do the same as above, with a breakpoint opcode, or by asking the runtime to break at a particular statement; both cases require the JIT to play along and do the right thing. For code that isn't jitted you'd have to fall back to the interpreter anyway, though, so in some cases JIT compilation is simply disabled in the debugger (e.g. Java, if I remember correctly). That has the unfortunate side effect that not only do you lose optimizations, which is normal for debug code, but you also take a hefty performance hit because you're now running interpreted.
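To make the interpreter case concrete, here's a minimal Python sketch (a toy stack VM, not any real runtime) where the breakpoint is just another opcode and the dispatch loop plays the role of the CPU:

    # Toy IL: five opcodes, including a dedicated BREAK.
    PUSH, ADD, PRINT, BREAK, HALT = range(5)

    def run(program, debugger=None):
        stack, pc = [], 0
        while True:
            op = program[pc]
            if op == PUSH:
                stack.append(program[pc + 1]); pc += 2
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b); pc += 1
            elif op == PRINT:
                print(stack[-1]); pc += 1
            elif op == BREAK:
                if debugger:             # no memory rewriting needed:
                    debugger(pc, stack)  # the loop just hands over control
                pc += 1
            elif op == HALT:
                return

    run([PUSH, 2, PUSH, 3, ADD, BREAK, PRINT, HALT],
        debugger=lambda pc, stack: print("break at pc=%d, stack=%r" % (pc, stack)))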
The reason INT 3 is used is that it's the only interrupt that has a single-byte opcode (0xCC). Other interrupts require two bytes: CD <interrupt number>.
This makes setting a breakpoint really easy, as all you have to do is replace a single byte (and restore a single byte) where you want to place your breakpoint.
INT 3 being only one byte also matters when you're setting a breakpoint on top of another single-byte instruction: your newly set breakpoint won't overwrite the following instruction, which might be a jump target somewhere else in the code.
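In Python-flavored pseudocode, the whole set/clear cycle is just this (simulating code memory with a bytearray; a real debugger would poke the target process via something like ptrace or WriteProcessMemory):

    INT3 = 0xCC  # the single-byte INT 3 opcode

    def set_breakpoint(code, addr):
        saved = code[addr]   # remember the original byte
        code[addr] = INT3    # patch in the trap
        return saved

    def clear_breakpoint(code, addr, saved):
        code[addr] = saved   # restore it so the instruction can run

    code = bytearray([0x40, 0x43, 0x41, 0xC3])  # INC EAX; INC EBX; INC ECX; RET
    saved = set_breakpoint(code, 0)             # one byte in...
    clear_breakpoint(code, 0, saved)            # ...one byte out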
> The reason INT 3 is used is that it's the only interrupt that has a single-byte opcode (0xCC). Other interrupts require two bytes: CD <interrupt number>.
It's kind of the other way around. The reason it has a single-byte opcode is that Intel wanted INT 3 to be for breakpoints, so they designated 0xCC for it. In fact, 0xCD 0x03 works, but just isn't used.
Because x86 instructions can cross 4/8/16-byte alignment boundaries, you can't safely set a multi-byte breakpoint in all cases. The CPU might execute the instruction (bytes [0xCD, x] -> INT x) before the patched parameter byte becomes visible, triggering some other interrupt with whatever x happened to be at that address before.
I agree with the first part; it's kind of a self-reinforcing decision.
Intel wanted INT 3 to be for breakpoints, so they gave it a single-byte instruction, and because INT 3 is a single-byte instruction, it's the only one that makes sense for debug breakpoints.
Let's say you have a run of single-byte opcodes:
    40 INC EAX
    43 INC EBX
    41 INC ECX
    C3 RET
And you want to set a breakpoint on INC EAX.
If you replace "40" with "CD03" - you'll overwrite INC EBX as well.
That can cause your program to crash if there are control flows that end up jumping to INC EBX without going through INC EAX first.
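A toy illustration of the problem in Python (just byte manipulation, not real execution):

    code = bytearray([0x40, 0x43, 0x41, 0xC3])  # INC EAX; INC EBX; INC ECX; RET
    jump_target = 1                             # some other code jumps to INC EBX

    code[0:2] = bytes([0xCD, 0x03])   # two-byte "INT 3" (CD 03) patch at offset 0
    assert code[jump_target] == 0x03  # INC EBX is clobbered; jumping to offset 1
                                      # now executes garbage

    code = bytearray([0x40, 0x43, 0x41, 0xC3])
    code[0] = 0xCC                    # single-byte INT 3 patch
    assert code[jump_target] == 0x43  # INC EBX survives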
One-byte opcodes won't save you when the code is jumping into the middle of instructions. The instruction you want to breakpoint might be in the middle of some other instruction that will run.
The most important thing to remember about developer compensation, especially when comparing to more "traditional" roles, is scale.
A programmer working at a large internet company may be impacting millions upon millions of people.
Building a feature that lets you collect (or save) a penny from a user every month is worth $240K a year when spread out over 2 million users.
$120M when spread out over 1 billion users.
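For concreteness, the arithmetic behind those figures:

    penny = 0.01  # dollars per user per month

    print(2_000_000 * penny * 12)      # 240000.0    -> $240K/year
    print(1_000_000_000 * penny * 12)  # 120000000.0 -> $120M/year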
That's really my answer to the common question at the end of the post:
"I don’t understand this at all and would love to hear a compelling theory for why programming “should” pay more than other similar fields, or why it should pay as much as fields that have much higher barriers to entry."
Scale is not the most important factor. It really is about supply and demand. If employers could get away with paying their developers 30K a year, they would (... and they do whenever possible).
The actual scale of the work certainly informs the compensation, as it might for other jobs (journalists, for example), but that is only part of the story. If you need developers, you simply gotta pay them salaries that are competitive with what others pay for that particular role.
Scale is part of the demand side of the supply and demand equation.
Using dkopi's example: at a company with 2M users, it's worth hiring a developer for each "penny per user per month" problem. At a company with 1B users, it's worth hiring a developer for each "0.002 pennies per user per month" problem.
Or similarly, if you look at developers who work on internal tools that save other developers time. At a startup with 40 employees, each new tools developer must save everyone else two days per year. At a large company with 20 000 developers, each new tools developer must be able to save everyone else 6 minutes per year.
As the scale gets bigger, smaller and smaller problems become worth hiring people for.
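A rough Python sketch of that break-even math (the absolute figures depend on what you assume a developer-year costs; 250 working days of 8 hours is used here as an illustrative figure):

    def breakeven_saving_minutes(team_size, dev_year_days=250):
        """Time a tools developer must save each colleague, per year,
        to pay for themselves."""
        days_each = dev_year_days / (team_size - 1)
        return days_each * 8 * 60  # minutes per colleague per year

    print(breakeven_saving_minutes(40))      # ~3077 min (several days) each
    print(breakeven_saving_minutes(20_000))  # ~6 minutes each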
Scale IS the most important factor, because it controls the "demand" side of "supply and demand". So long as hiring someone makes the company more than they cost, the company will be willing to pay that price. Because software is so well leveraged in creating profits, it means there are nearly infinite spots where companies could make a profit by hiring a developer at current rates. The rates will continue to rise, and the supply will continue to increase, until it has hit equilibrium.
That's hardly the most important thing about developer compensation. The value a worker provides is a ceiling on their compensation, not a floor. As long as the "large internet companies" are receiving applications from more than one developer, they are unlikely to be paying $240K / year for developers who produce $240K / year of benefits. If the same people can handle building that feature for 2 million users and for 6 million users, the developers who get hired to build it for 6 million users will make 1x as much as the ones who build for 2 million users, not 3x.
How much impact does a laborer repairing a Dutch dike have? How much of that impact do they earn?
Construction workers are traditionally treated as resources; they can be quickly replaced. With software this becomes tricky: you need to spend considerable time, money, and energy to find the right replacement who can drive things with the same efficiency. The two reasons why companies like Google, Facebook, etc. pay so much are:
1) They want to attract the top talent - the top 1%, maybe 5%.
2) They want to keep people happy so that the key people working on most important features don't leave the company.
And this is where the supply side of the equation comes into play. Many people can build the road and there shouldn't be any differentiation in how it is built (assuming they have common engineering standards). Thus, the supply is almost perfectly competitive causing low wages.
However, there are infinite ways a developer can make something. One developer can design a system that saves/makes their company millions where most other developers would miss it. The supply side of developers is more like a monopolistically competitive market. Thus, they can extract some of those savings/profits in wages.
I was thinking along that line too. In traditional, more "human" fields, a good lawyer or doctor can only do so much, regardless of how good they are.
It would be interesting to compare revenue or profit per employee for tech companies against law firms and the like.
Another thing is that in software, we believe (i.e., The Mythical Man-Month) there is a limit of programmers per task before you just can't throw any more bodies at it. Is this true (or still true), and does it apply to the other fields being compared?
Immigration could be a bigger factor than we think. How long has developer compensation been skyrocketing? The immigration situation in the US has mostly been getting worse over the past 5-7 years (and although it wasn't all flowers and pleasantness before that, if you wanted to do a startup or work in the US with a job offer, it was several times easier back then). And seeing that it takes 5-7 years for a startup to mature, and even longer for the immigration "market" to catch up with reality (it takes a long time for people to decide to immigrate, or to immigrate in general), it wouldn't be a surprise if compensation is still lopsided in the US.
> there is a limit of programmers per task before you just can't throw any more body into it anymore
This is important. The effect is that to complete the task you have to scale the team vertically (more effective people); you can't scale horizontally (more people), which increases demand for top talent.
It means that if a programmer is (theoretically) 10x more effective than an average programmer, they may get much, much more than 10x the pay.
Port mirroring means you can only be a passive eavesdropper. Attacks like SSL MITM wouldn't work, because you'd actually have to intercept and modify the traffic.
SSL MITM still won't work unless you want it to be very noticeable or you have very substantial resources.
Port mirroring is enough to capture SSL traffic, and to decrypt it if you can break weak SSL keys or if you have compromised the key of the destination service (with some caveats, like no forward secrecy, etc.).
And it doesn't prevent you from executing MITM attacks from upstream, or from carrying out targeted MITM attacks from within the Tor exit node later on.
But overall there is nothing you can do to ensure that your Tor exit node, your VPN gateway, or even your ISP isn't reading your traffic, other than to use encrypted tunnels everywhere, and even then you are, for the most part, only moving the problem upstream.
I think that it's fairer to say that it's built by enterprise technical people for enterprise technical people. Things like the fine-grained control over networking and resource permissions are hallmarks of enterprise tech.
That's not to say that these things don't solve real problems, but they are the problems of big organizations. Smaller teams building pure-cloud products don't have the same problems, and may not even have people with Big Corp experience.
Any OSS project leads here that need a front-end developer? ;)
Also:
"Can I apply with a project that already exists?
Yes you can. However, your proposal ought to be clearly defined and have its own degree of novelty, e.g. you plan to expand or enhance your preexisting project with a new module. In any case, you need to make clear what you will be working on during the 6-month project term."
Some ideas for useful addons to existing projects, keeping in mind I am not sure what your focus as a front-end developer is:
Gnu Privacy Assistant https://www.gnupg.org/related_software/gpa/index.html is a GnuPG frontend written in GTK. The software works well and does nice things like file encryption, taking a piece of text and outputting it encrypted.
The GUI badly needs improvement. I'd be happy to point out several problems.
I also don't believe GPA is available on platforms like Windows and macOS.
Kleopatra is a GnuPG and S/MIME frontend written in Qt (KDE libraries). It is maintained and available on Windows (through GPG4Win) but is still "computer power user"-centric and could also use user interaction love.
On a side note: one method that Enigmail uses to import PGP certificates is the ability to fetch a valid PGP certificate from a URL. Adding that functionality to GPA and Kleopatra would be a nice win.
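A sketch of what that could look like in Python, assuming the requests and python-gnupg packages (the URL is just a placeholder):

    import requests  # assumed available, as is python-gnupg below
    import gnupg

    def import_key_from_url(url):
        """Fetch an armored PGP public key over HTTPS and import it."""
        armored = requests.get(url, timeout=10).text
        gpg = gnupg.GPG()
        result = gpg.import_keys(armored)
        return result.fingerprints  # show these so the user can verify

    print(import_key_from_url("https://example.org/alice.asc"))  # placeholder URL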
All three frontends (Enigmail, Kleopatra, and GPA) would do well to have a graphical, on-screen export of public keys. Something like a QR-code export of a minimal PGP public key would be nice for importing via smartphones.
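A minimal sketch of that QR export, assuming python-gnupg and the qrcode package (the fingerprint is a placeholder). Note that a full armored key can easily exceed what a single QR code holds (roughly 3 KB), which is why exporting a minimal key matters:

    import gnupg   # python-gnupg, assumed available
    import qrcode  # assumed available

    def export_key_as_qr(fingerprint, out_path="pubkey_qr.png"):
        gpg = gnupg.GPG()
        armored = gpg.export_keys(fingerprint)  # armored export by default
        qrcode.make(armored).save(out_path)     # raises if the key is too
                                                # large for a single QR code

    export_key_as_qr("0123456789ABCDEF0123456789ABCDEF01234567")  # placeholder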
While the privacy concerns are more than valid, reverse engineering is common practice for anyone trying to copy your product.
Reverse engineering isn't inherently good or bad, it's just a tool. That tool can be used for both good and bad.
I always recommend certificate pinning in order to prevent MITM attacks. I also recommend it if your backend API gives away a lot of information about your product's "secret ingredient".
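For the idea: mobile apps would normally pin inside their HTTP stack (e.g. OkHttp's CertificatePinner); this Python sketch just shows the concept, with a placeholder hash and hostname:

    import hashlib
    import ssl

    # Placeholder: the SHA-256 of the server certificate you expect.
    PINNED_SHA256 = "replace-with-the-sha256-of-your-cert"

    def cert_matches_pin(host, port=443):
        """Fetch the server's certificate and compare its hash to the pin."""
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest() == PINNED_SHA256

    if not cert_matches_pin("api.example.com"):  # placeholder host
        raise RuntimeError("certificate pin mismatch - possible MITM")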
Reverse Engineering to copy is even considered a legal tool in the EU.
The argument of the European Court of Justice was that car manufacturers also buy cars from competitors, take them apart, and use the knowledge gained in their own products. The same happens in every industry — but in software engineering it should suddenly be forbidden? That can't be right.
If you want to protect yourself from that, publish your secret sauce and patent it.