smarnach's comments

I wasn't able to reproduce the back button hijack. It never asks me for an email address, regardless of what I try.

How did you come to the conclusion that 750 people is a lot to build a web browser? The Chrome-adjacent teams at Google are about 4,000 people, and that doesn't even include all the people at Google providing infrastructure (e.g. servers, workplace, HR, legal, etc.).

Comparing Firefox to Chromium-based browsers doesn't make much sense since these browsers don't develop their own web engine.


How did you come to the conclusion that it's not? Google being bloated is not a good justification for Mozilla being bloated too. Someone in a comment below suggested that Ladybird was built by about 10 people. Call me naive, but I don't think you need 75x the number of people to work on a browser that's already been established for over two decades.

Take Ladybird as a reference.

In a couple of years they built the engine from scratch. It's soon going to enter alpha. How many people at Ladybird built that engine? About 10?

All while everyone has said that the modern web makes this task impossible.


> it's going to soon enter Alpha

Perhaps other browser makers want to move faster than Ladybird.


That's fine.

The point is that Mozilla is wasting money, and the 4,000 people working on Chrome may not be the correct benchmark.


Wait, why is that fine? The whole point was that Ladybird has yet to enter alpha, which is the very reason it's not the correct benchmark. And you said the Chrome comparison isn't the correct one, but... didn't follow it up with an actual reason.

They are not entirely separate from Mozilla. The MZLA Technologies Corporation is a for-profit subsidiary of the Mozilla Foundation. They have access to some of Mozilla's common infrastructure, but are otherwise entirely funded by donations. Donations to MZLA only fund Thunderbird and no other products.

Seems fine if you can donate to Thunderbird development. Compare that to Firefox, where I don't think it's possible to donate to development at all (only to Mozilla's activism side).

You can buy their products. AFAIK, if you buy e.g. Firefox Relay, the revenue does not go to the foundation.

Edit: I just checked the invoice; payment indeed goes to the Mozilla Corporation, not the foundation.


Mozilla also runs hiring and HR for MZLA. They control who gets hired and fired.

It's more like money laundering than an independent entity.


> we should default to the calculation of 2-4x the rate.

No, we should not. We should accept that we don't have any statistically meaningful number at all, since we only have a single incident.

Let's assume we roll a standard die once and it shows a six. Statistically, we only expect a six in one sixth of the cases. But we already got one on a single roll! Concluding Waymo vehicles hit 2 to 4 times as many children as human drivers is like concluding the die in the example is six times as likely to show a six as a fair die.
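To see how little a single observation tells you, here's a quick simulation (a minimal sketch in Python; the number of trials is arbitrary):

    import random

    # Estimate P(six) from a single roll of a fair die: every individual
    # estimate is either 0.0 or 1.0, never anywhere near the true 1/6.
    random.seed(0)
    rolls = [random.randint(1, 6) for _ in range(10_000)]
    estimates = [1.0 if r == 6 else 0.0 for r in rolls]
    print(sum(estimates) / len(estimates))  # unbiased on average, ~0.167
    print(estimates.count(1.0))  # ~1667 trials "conclude" the die always shows six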


More data would certainly be better, but it's not as bad as you suggest -- the large number of miles driven until the first incident does tell us something statistically meaningful about the incident rate per mile driven. If we view the data as a large sample of miles driven, each with some observed number of incidents, then what we have is "merely" an extremely skewed distribution. I can confidently say that, if you pick any sane family of distributions to model this, then after fitting just this "single" data point, the model will report that P(MTTF < one hundredth of the observed number of miles driven so far) is negligible. This would hold even if there were zero incidents so far.
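To put a number on "negligible", here's a minimal sketch in Python (assuming a Poisson model, with the factor of 100 taken from the claim above):

    from scipy.stats import poisson

    # If the true MTTF were observed_miles / 100, we'd expect ~100
    # incidents over the miles driven so far.
    expected = 100
    print(poisson.cdf(1, expected))  # P(at most one incident) ~ 4e-42
    print(poisson.cdf(0, expected))  # P(no incidents at all)  ~ 4e-44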


We get a statistically meaningful result about an upper bound of the incident rate. We get no statistically meaningful lower bound.


Uh, the miles driven is like rolling the die, not hitting kids.


Sure, but we shouldn't stretch the analogy too far. Die rolls are discrete events, while miles driven are continuous. We expect the number of sixes we get to follow a binomial distribution, while we expect the number of accidents to follow a Poisson distribution. Either way, trying to guess the mean value of the distribution after a single incident of the event will never give you a statistically meaningful lower bound, only an upper bound.
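To make that concrete, here's a rough sketch in Python using the exact Poisson confidence interval (the 95% level is my choice):

    from scipy.stats import chi2

    # Exact (Garwood) 95% interval for a Poisson count with one observed event.
    k = 1
    lower = chi2.ppf(0.025, 2 * k) / 2      # ~0.025 expected events
    upper = chi2.ppf(0.975, 2 * k + 2) / 2  # ~5.57 expected events
    print(lower, upper)  # upper bound ~5.6x the observation; lower bound ~zero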


The binomial distribution is well approximated by the Poisson distribution when n is high and p is low, which is exactly the case here. Despite the high variance in the sample mean, we can still make high-confidence statements about what range of incident rates is likely -- basically, dramatically higher rates are extremely unlikely. (Not sure, but I think it will turn out that confidence in statements about the true incident rate being lower than observed will be much lower.)
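For intuition, a quick numerical check (a sketch in Python; n and p are made-up stand-ins for miles driven and per-mile incident probability):

    from scipy.stats import binom, poisson

    # With n large and p small, binomial(n, p) is almost exactly Poisson(n*p).
    n, p = 10_000_000, 2e-7
    for k in range(4):
        print(k, binom.pmf(k, n, p), poisson.pmf(k, n * p))  # pmfs nearly coincide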


The distance from Earth to Mars is about 3 to 22 light minutes, not 20 to 90. That doesn't change anything about your point, except the capacity is lower.


Twice as fast at executing JavaScript? There's absolutely zero chance this is true. A JavaScript engine that's twice as fast as V8 in general doesn't exist. There may be 5 or 10 percent difference, but nothing really meaningful.


You might want to revise what you consider to be "absolutely zero chance". Bun has an insanely fast startup time, so it definitely can be true for small workloads. A classic example of this was on Bun's website for a while[1] - it was "Running 266 React SSR tests faster than Jest can print its version number".

[1]: https://x.com/jarredsumner/status/1542824445810642946


I only claimed there is absolutely zero chance that Bun is twice as fast at executing general JavaScript as Deno. The example doesn't give any insight into the relative speeds of Bun and Deno, as far as I can tell.


    johnfn@mac ~ % time  deno eval 'console.log("hello world")'
    hello world
    deno eval 'console.log("hello world")'  0.04s user 0.02s system 87% cpu 0.074 total
    johnfn@mac ~ % time   bun -e 'console.log("hello world")'
    hello world
    bun -e 'console.log("hello world")'  0.01s user 0.00s system 84% cpu 0.013 total
That's about 5.7x faster. Yes, it's a microbenchmark. But you said "absolutely zero chance", not "a very small chance".


That's about as far from a general JS execution speed benchmark as it could be. It essentially just times the startup speed.
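One way to separate startup cost from execution speed is to time a CPU-bound script instead. A rough sketch in Python (it assumes `deno` and `bun` are on your PATH; the loop size is arbitrary):

    import subprocess
    import time

    # A loop large enough that per-process startup time stops dominating.
    SCRIPT = "let s = 0; for (let i = 0; i < 1e8; i++) s += i; console.log(s);"

    def wall_time(cmd):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        return time.perf_counter() - start

    for name, cmd in [("deno", ["deno", "eval", SCRIPT]),
                      ("bun", ["bun", "-e", SCRIPT])]:
        print(name, min(wall_time(cmd) for _ in range(5)))  # best of five runs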


Keep in mind that it's not just a matter of comparing the JS engine. The runtime that is built around the engine can have a far greater impact on performance than the choice of V8 vs. JSC vs. anything else. In microbenchmarks, Bun routinely outperforms Node.js and Deno by a wide margin.


The claim I responded to is that Bun is "at least twice as fast" as Deno. This sounds a lot more general than Bun being twice as fast in cherry-picked microbenchmarks. I wasn't able to find any benchmark that found meaningful differences between the two runtimes for real-world workloads. (Example: https://hackernoon.com/myth-vs-reality-real-world-runtime-pe...)


Real-world benchmarks include database queries and HTTP requests? Those would quickly drown out any differences between the runtimes.

Lol, yeah, this person is running a performance test on Postgres and attributing the times to JS frameworks.


We are in systems engineering territory here, and performance may have more to do with how the runtime is designed and how it interfaces with native code than with compiler optimizations. You have to measure syscalls, memory access, CPU cache locality, and a bunch of design decisions that together contribute a lot to the running time. So depending on the decisions taken, it can easily happen.


It depends on the workload. Bun has some major optimisations; you'll have to read into them if you don't believe me. The graphs don't come from nowhere.


Are you using the Ubuntu Snap to run Firefox? If so, you can switch to the native Debian packages released directly by Mozilla. They don't do that sandboxing stuff, and they are a lot faster. I don't notice any speed difference between Chromium and Firefox even on a Raspberry Pi.


He also links a Wikipedia article stating that more than 60 percent of Londoners were born in Britain to prove his point that only a third of Londoners are "native Brits". That doesn't leave much room for interpretation.


Not even that. There's no rule in the GDPR requiring you to disclose the use of cookies. The regulation doesn't actually mention cookies at all, except maybe in an example. Instead, any data collection that's obviously required to do what the user requests (including session and shopping-cart cookies) doesn't require any explicit consent. Only additional data collection, whether performed via cookies or by any other means, requires consent.

That's why there are websites without cookie banners, like GitHub. It's not even hard to do that; it's just that most companies don't bother, because they know the EU will be blamed anyway.


Similarly, the Rust query will include "trust", "antitrust", "frustration", and a bunch of other words.
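That's the usual substring-vs-whole-word issue; a word-boundary match avoids it (a minimal sketch in Python; the sample text is made up):

    import re

    text = "Antitrust frustration erodes trust, but Rust thrives."
    # A naive substring search matches inside other words...
    print(re.findall(r"rust", text, re.IGNORECASE))      # four matches
    # ...while word boundaries restrict matches to the word itself.
    print(re.findall(r"\brust\b", text, re.IGNORECASE))  # ['Rust']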


A guerrilla marketing plan for a new language is to name it after a common one-syllable word, so that it appears much more prominent than it really is in badly-done popularity contests.

Call it "Go", for example.

(Necessary disclaimer for the irony-impaired: this is a joke and an attempt at being witty.)


Let’s make a language called “A” in that case. (I mean C was fine, so why not one letter?)


Or call it the name of a popular song to appeal to the youngins.

I present to you "Gangnam C".


You could also hijack an overloaded acronym to boost your mental presence among gamers. LOL


This reminded me of the Scunthorpe problem: https://en.wikipedia.org/wiki/Scunthorpe_problem


Amusingly, the chart shows Rust's popularity starting from before its release. The Rust hype crowd is so exuberant, they began before the language even existed!


Now if only we could disambiguate words based on context. But you'd need a good language model for that, and we don't have... wait.

