Hacker News — AnaniasAnanas's comments

OCaml is very popular in academia though, especially in the fields of theoretical computer science and formal verification. Coq, Frama-C, Flow, CompCert, etc. are all written in OCaml. Heck, if you are running a graphical GNU distribution, chances are that you have installed FFTW, whose code generator (genfft) is written in OCaml. The "industry" is not the only thing that matters when considering the adoption of a language.


Also MirageOS (a unikernel framework), the BAP and BinCAT binary analysis frameworks, Facebook's Infer source-level static analyzer, etc.


Reason (Facebook's alternative syntax and toolchain for OCaml, often used for frontend work) is OCaml.


Have you used Reason for anything serious? How was it?


Particularly if you're talking about a quasi-academic PGP community.


No offense, I am genuinely curious, why would anyone use any closed source software for anything related to security after the Snowden revelations?


Why does everyone seem to hate delay slots? I understand that it makes writing assembly more annoying but most people use a compiler anyway.


They make writing assembly more annoying. They make writing compilers more annoying.

But the big reason is that, except for simple, short-pipeline designs like the early MIPS parts, they make designing CPUs annoying too.

The second you introduce parallel decode or a branch predictor with more than a cycle of latency, these things hurt and don't help.


It makes the hardware implementation more complicated. The delay slot was perfect for the original five-stage pipeline design. Once you try to push this to superscalar, out-of-order execution (executing more than one instruction per cycle), the delay slot just doesn't make any sense.


That's my understanding as well. Software-wise, I, for one, have not had issues with reading or writing code with branch delay slots -- automatic nops, at worst. I guess it all depends on how early in one's development they were introduced to the concept of delay slots.
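For readers who have never seen one, here is a minimal MIPS32 sketch (hypothetical code, purely for illustration) of what a filled delay slot and an "automatic nop" look like:

```asm
# The instruction placed immediately after a branch (the delay slot)
# executes regardless of whether the branch is taken.

loop:
        addiu   $t0, $t0, -1      # decrement loop counter
        bne     $t0, $zero, loop  # branch back if counter != 0
        addu    $t1, $t1, $t2     # delay slot: runs on every iteration,
                                  # including the final, not-taken one

        # When nothing useful can be hoisted into the slot, the
        # assembler (or programmer) just pads it:
        # bne  $t0, $zero, done
        # nop                     # the "automatic nop" mentioned above
```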


There was one nifty thing that fell out from having delay slots - you could write a threading library without having to burn a register on the ABI. When you changed context to a different thread, you'd load in all the registers for the new thread except for one which held the jump address to the new thread's IP. Then, in that jump's delay slot, you load in the thread's value for that register and, presto, zero overhead threading!
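A sketch of that trick in hypothetical MIPS assembly (register choices, offsets, and labels are invented for illustration): restore every register of the incoming thread except one, use that one to hold the resume address, and let the jump's delay slot restore its real value.

```asm
# $a0 points at the saved-register block of the thread being resumed.

switch_in:
        lw      $s0,  0($a0)      # restore callee-saved registers...
        lw      $s1,  4($a0)
        # ... restore the remaining registers similarly ...
        lw      $t9, 60($a0)      # $t9 = new thread's resume address
        jr      $t9               # jump to the new thread
        lw      $t9, 64($a0)      # delay slot: jr has already read the
                                  # target, so we can now restore $t9's
                                  # real value -- no register is
                                  # permanently burned for the ABI
```

This works because `jr` reads its target register before the delay-slot instruction's write takes effect, so overwriting `$t9` in the slot does not change the jump destination.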


In addition to the complexities they add to every layer of the stack that ajross and alain94040 brought up, they're not all that useful in practice. I seem to recall that they'd rarely be over 50% utilized, and the majority of instructions in the delay slot were nops.


A better question would be: why were Coinbase employees allowed to use any browser with javascript enabled and outside of a VM? Qubes OS has been a thing for quite a while.


> why were Coinbase employees allowed to use any browser with javascript enabled

I don't know, maybe because they need to get work done...? Even traditional banks allow JS.


I've worked at a large traditional bank (market cap and enterprise value are both around 100b); they also allowed Firefox as well as JS, at least for developers (I don't know what it looked like for non-developers).


Of course, there generally are legal processes to leverage if money is stolen from a bank. The cryptosphere isn't as forgiving.


Google and Stack Overflow work just fine without JavaScript enabled. Trustworthy sites can be whitelisted if absolutely necessary.


It's this sort of attitude that makes sysadmins so incredibly popular among the masses.

Hint: if your environment feels like a concentration camp, users will find ways to work outside of it most of the time - which will be even more disastrous.


That's a fair point when literally hundreds of millions of dollars aren't on the line. It's not hard to properly secure your system from all manner of internet threats. There's no excuse for crypto exchanges not to implement such measures.


If hundreds of millions of dollars are one JS exploit away, the defense model is flawed. That sort of movement should require approvals from multiple people and even dedicated terminals that are not used for everyday browsing.

Security is a tradeoff; nuking browsers for everyone is just a bad tradeoff in 2019.


There are a million hypothetical security issues you could worry about. How would you weigh the risks of Javascript against the loss of basically all online productivity apps?


A standard VM doesn't protect from attacks of this level of sophistication.


I imagine it does, unless the attacker has an additional Xen zero-day to pile on.


In the thread about this attack yesterday someone linked a paper about another attack against cryptocurrency researchers which did use a VM escape exploit [1], so if a cryptocurrency researcher is worth such an exploit, I'd say a company handling the kind of money Coinbase does is probably worthy as well.

[1]: https://news.ycombinator.com/item?id=20221279


I mentioned the possibility of an untrustworthy person gaining access to bugzilla yesterday but it seems that most people disagreed with it: https://news.ycombinator.com/item?id=20221397


Shouldn't the one responsible personally have to pay for it rather than the city and its taxpayers?


Do you have to pay the damages for every mistake you make at your job?


How on earth do you attribute responsibility here? Usually these things are an underfunded disaster waiting to happen, because the city can't find the money to upgrade from XP or whatever.


You mean the one who wrote the attacking code? Or the one who wrote the vulnerable code? Why do we even assume there is a "one" here?


Whoever made the decision not to take backups, for example. Otherwise the ones who pay for their mistakes will be the taxpayers.


This is a public service, aren't the voters responsible? They could have voted in competent leaders.


This is it exactly. The voters are the ones who are ultimately responsible, and they'll be the ones to ultimately pay, just as it should be. They should be voting for competent leaders, and for sufficient taxes to pay decent salaries to attract good IT talent, but they don't, so this is what they get.

Every nation gets the government it deserves. - Joseph de Maistre


The voters are not one person. Sadly democracy ends up being the fascism of the many.


It sounds like you don't understand what "fascism" is, because this statement is plainly wrong.

The common statement is that "democracy is tyranny of the majority", which is basically true IMO. Tyranny is not synonymous with fascism, though fascism can certainly be a form of tyranny.

Anyway, it doesn't matter if the voters aren't one person; they're a collective, and collectively they generally approve whatever government they're living under, or else they wouldn't have elected it, or allowed it to continue to rule them. If they elected it, they're getting what they voted for and what they deserve. If they didn't elect it, but allow it to rule them anyway, they're still getting what they deserve (though I'd make an exception for a small country being forcibly occupied by a much larger and more powerful country).


NDAs and non-compete agreements should not ever be considered as valid contracts by the government.


Some NDAs can be too broad, but this is a bad take. It needs to be possible to hire people whom you trust not to disclose all your secrets, and your customers' secrets. This is what privacy regulations are all about. (At Facebook in particular, disclosing stuff about users is pretty bad; see lots of news stories over the last few years.)

The balance between protecting privacy and making abuses public is pretty nuanced and doesn't lend itself to one-bit thinking.


> needs to be

Nothing needs to be anything, though the world order would certainly look different and reflect the interests of different classes of people than it does today.


No. But, in the absence of NDAs and other agreements for both employees and external partners, you'd see a great deal more limits on sharing information both within and without companies to a strictly need to know basis. Certainly those limits exist today to a degree because NDAs basically just allow for consequences. But if you can't keep someone from turning around and sharing anything you tell them other than through some sort of mutual trust, you'll be less inclined to share it.


Seems like looking at this from a class perspective only complicates things further? Poor people often have secrets and can be pretty vulnerable to attack if they're disclosed.


> It needs to be possible to hire people that you trust not to disclose all your secrets, and your customer's secrets

I disagree, it needs to be possible for whistle-blowers to operate freely. It should also be possible to disclose to the whole world new and superior techniques and technologies that a company tries to hide.

> This is what privacy regulations are all about

I am pretty sure that this is a separate thing from NDAs. Nevertheless, I believe that the solution should be technical rather than legal, with things like end-to-end encryption and public-key cryptography.


That's just wishing the problem away with techno-optimism. When you call someone at a company on the phone to get help, they often need to access your account. If they don't have access to anything, they're mostly useless and you get no help.

We're a long way away from making everything self-service and companies not needing to hire anyone to do support. Until all the support people get laid off, they need to be trusted at least to some extent. (Internal controls can be helpful.)


>It should also be possible to disclose to the whole world new and superior techniques and technologies that a company tries to hide.

Whether or not a company really benefits from this in a particular case, the consequence of prohibiting any legal protections against the broad sharing of company information would be a lot more secrecy and compartmentalization of information.


Which privacy regulations protect corporations more than they protect actual meat people?


I think you're painting with a very broad brush there.

NDAs and non-competes have their uses. It's when they become part of the default boilerplate that everyone signs to get a job that the problems start.


> NDAs and non-competes have their uses

I have yet to see a valid use that does not hinder whistle-blowing, the advancement of technology, or does not abuse the employees. I am sure that you will find a few valid use-cases if you try hard enough, however in the vast majority of cases they are used in order to repress the rights of others.


>I have yet to see a valid use that does not hinder whistle-blowing

The legal system isn't static. If your company is breaking the law and you report it to authorities, your NDA will be unenforceable.


Do any companies use something more like a blackmailer's NDA, which works without a legal system?

I'm not sure exactly how it would work, and perhaps it wouldn't work in practice, but I imagine it might involve paying the (former) employee a certain sum every month for their continued cooperation, and the employer would reserve the right to unilaterally cancel the arrangement: it would be "discretionary" or whatever. So the employee has a motive to cooperate (unless they're terminally ill ...) but there's nothing to "enforce".


If there were no NDAs then companies would exploit patents, trademarks, and copyright even more than they already do. If they kept NDAs limited to trade secrets then there wouldn't be a problem.

I think non-competes should be limited to while you're actually working there.


Companies seem to try to abuse patents, trademarks, and copyrights as much as they can anyway. The best, of course, would be if NDAs, patents, and copyrights all disappeared overnight. Trademarks are generally fine, but they can be abused.


NDAs need to be heavily restricted, but it's a difficult distinction to draw between "trade secrets of the job" (which arguably should be protected) and "abusive working conditions" (which should not)


Honestly, why the hell are trade secrets protected, except for a misguided sense of intellectual property that forgets the concept was a contract and not a natural right?

They don't even have the benefit of disclosure which patents were meant to give -- to prevent said knowledge being lost. Trade secrets are why we had to investigate Damascus steel reproduction thoroughly and still speculate.

They give the useful arts and sciences nothing and yet they get free resources for enforcement.

I believe the proper legal response from the state for breach of trade secrets should be "Wow, sucks to be you!" We really shouldn't be promoting that artificial scarcity and restriction of knowledge.


Consider trying the Debian package until it is updated in your system.



https://bugzilla.mozilla.org/show_bug.cgi?id=1544386

I find it really gross that they do not allow others to access it. This behavior damages the forks.


The source code for the fix is public. Presumably the bug report includes working exploit code. I don't see how this is "damaging" for forks.


It is important to also understand what causes the issue, how it was exploited, etc. Plus I am pretty sure that they had the bug report before the fix was released.


Are there any forks that modify Firefox so thoroughly that one needs extra context to patch SpiderMonkey?


Mozilla can still give access to the developers of forks without opening it to the public before they (and the forks!) have managed to roll out a full update.


Anyone can run a fork, though; I might be running my own personal fork right now. This is part of the point of free software.

Plus, you assume that the select few developers that are given the exploit information are trustworthy. The exploit being public from the first day is better than if even a single developer is untrustworthy or compromised.


I don't understand this logic. It's better to have everyone see it and to guarantee it is seen by a malicious actor, instead of only a small few seeing it and there being some small potential for it to be seen by a malicious actor?


It will be seen by a malicious actor anyway after the fix is released. The difference is that there will be more time for a malicious actor to act against a fork if an embargo is applied.


Mozilla used to open up the security bugs after the fix is out for a while.

I say used to because I notice that the security issues fixed in Firefox 66.0 (released in March according to the release notes) still appear to be private. I suspect the internal people that cared about it have left, and their process is now broken. Somebody might read this thread and poke people to open access, but it would have to be done as an exceptional step (given that this isn't the first time I've noticed it happening).


The same people who were in charge of opening up security bugs are still around and still in charge of it.

Security bugs are opened up once in-the-wild usage of affected versions is low enough, if I recall correctly. This usually takes a while after the fix is shipped. At no point were bugs opened up immediately after the Firefox release with the fix shipped. It's usually a year or so between the fix being shipped and the bug getting opened up, in my experience.


Ah, okay, thanks! My (very unreliable) memory thought it was sooner; that was why I picked 66 (released in March) rather than 67 (May).

The security issues in 60.0.2 (June 6, 2018) are now public.


Unless things have changed dramatically since I left Mozilla, forks that are willing to be active in the Mozilla security community are able to get access to security bugs.


Would you mind elaborating? What does it have that uBlock Origin doesn't?

