Hacker News: codefined's comments

Nah, something is definitely wrong with the randomness occasionally. See this demo[0] of me reliably getting 30 satire results in a row >75% of the time. The chances of that happening are astronomically low.

[0] https://imgur.com/a/XTQsyNH


Well that's pretty conclusive. Thanks for the video, I'll have to take a look and see what's wrong.


Piggybacking to confirm I saw similar behavior, and someone upthread had 22 satire titles in a row.


The thing that surprises me the most is that one tiny almond takes about 4 kg of water to grow. That means at most about 0.04% of the water ends up in the crop (almond = 1.5 g, water = 4,000 g, 1.5 / 4000 * 100). Avocados, for comparison, are at about 0.05% (1 / 2000 * 100). Apples are at about 0.2% (0.25 / 125 * 100).

I've since found out there's a unified measure, called 'WUE' (Water Use Efficiency), measured in kg of crop per m^3 of water. Some examples: almonds, 0.4; legumes, 0.42; cereals, 2.4; rice, 0.73.
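For illustration, here's how a per-item water footprint converts into WUE (a quick sketch; the masses are the rough figures from above, not authoritative data):

```python
def wue(crop_mass_g: float, water_mass_g: float) -> float:
    """Water Use Efficiency in kg of crop per m^3 (1,000,000 g) of water."""
    # (crop_mass_g / 1000) kg of crop divided by (water_mass_g / 1e6) m^3 of water
    return crop_mass_g * 1000 / water_mass_g

# One 1.5 g almond kernel needing ~4 kg of water:
print(wue(1.5, 4000))  # 0.375, close to the quoted WUE of 0.4 for almonds
```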


Mmmm, you are ignoring the biomass of the tree


Not just the tree, but the fruit itself.

Almond fruits are essentially peaches. If you crack open a peach or nectarine seed, you'll see an almond-like kernel inside.

Almond trees have been bred to improve the kernel at the expense of the rest of the fruit, but that rest of the fruit is a substantial portion of biomass which requires water.


Mattermost v7 was released on Thursday and includes video conferencing functionality! [0]

> Channels: Voice calls and screen share (beta)

[0] https://mattermost.com/blog/mattermost-v7-0-is-now-available...


> Mattermost v7 was released on Thursday and includes video conferencing functionality

Have you tried running it on a Mattermost Team (the OSS edition) instance?

I also use their cloud instance to stay updated on new features, but voice is on a paid-only plan, so I couldn't check it out.


Thanks for bringing this to my attention; I will try it despite ReadOnlyFriday :)


Not seeing it in the article, but how are you handling `index` routing in the `app` folder? If you are putting it in the top directory (e.g. `/app/index.js`), is it not able to get a custom `layout.js` file (as that position is used for the global variant)?

If it's in an `index` folder (e.g. `/app/index/page.js`), how would one create a route for `example.com/index`?


The current working idea is that you wouldn't use index.js. There are Layout Components and Page Components, explicitly defined by `layout.js` and `page.js` respectively. We are open to other opinions here, though. Others have shared a similar sentiment: https://github.com/vercel/next.js/discussions/37136
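Concretely, under that idea the structure might look like this (a sketch based on the proposal; the `index` folder name is just an example):

```
app/
  layout.js     # root layout, wraps every route
  page.js       # renders at `/`, no index.js involved
  index/
    page.js     # renders at `/index`
    layout.js   # optional layout scoped to /index
```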


> Clarity is built by Microsoft, one of the largest technology companies in the world. Microsoft processes a massive amount of anonymous data around user behaviour to gain insights and improve machine learning models that power many of our products and services. Clarity is one of the ways Microsoft gathers this important data – and why we've made it available for free.

Looks like this is a data gathering effort by Microsoft.


    let { value, weight } =
        if values.empty() {
            { value: 0, weight: 1.0 }
        } else {
            { value: values[0], weight: 1 / values.len() }
        }
Using JS-style object expansion instead of tuple expansion.


Looking at the Dutch site on my phone (Samsung S10), I noticed it took a little while to load compared to the nigh-instant loading of the UK Gov variant. Page Insights [0] [1] paints a similar picture: desktop time-to-interactive of 0.4s vs 3.5s, and mobile time-to-interactive of 4.5s vs 13.1s.

The Dutch website seems to spend a lot of that time running Next.js framework code, which the Gov.uk variant does not. It might work quickly on fast computers, but even on modern phones it visibly pauses.


On my iPhone XR, which is older than the S10, it loads very fast. Performance also depends on which page you benchmark. The landing page is faster for the UK, but the cases page is about twice as fast on the NL dashboard (time-to-interactive 2.4s for NL vs 4.4s for UK). First meaningful paint is faster too (0.5s vs 0.8s). This shows you can get decent performance without an overly bloated, costly architecture.


Interestingly, doing a similar analysis where we bucketed each word into 243 (3^5) buckets based on the possible results, we found "RAISE" to be the best word. Source[0]

[0] https://gist.github.com/popey456963/a654e98d0180566b897b70ee...
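The bucketing idea can be sketched like this (an illustrative reimplementation, not the gist's actual code; the word list is up to you):

```python
from collections import Counter

def feedback(guess: str, answer: str) -> int:
    """Encode Wordle feedback as a base-3 number: 0 = grey, 1 = yellow,
    2 = green per letter, giving 3^5 = 243 possible patterns."""
    pattern = [0] * 5
    leftover = Counter()
    # First pass: mark greens and count unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            pattern[i] = 2
        else:
            leftover[a] += 1
    # Second pass: mark yellows, consuming unmatched answer letters.
    for i, g in enumerate(guess):
        if pattern[i] == 0 and leftover[g] > 0:
            pattern[i] = 1
            leftover[g] -= 1
    return sum(d * 3**i for i, d in enumerate(pattern))

def bucket_sizes(guess: str, candidates: list[str]) -> Counter:
    """Group candidate answers by the pattern this guess would produce."""
    return Counter(feedback(guess, a) for a in candidates)
```

A guess that spreads the candidates evenly across many buckets narrows the pool fastest, which is what the per-word scoring measures.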


RAISE also sounds pretty plausible.

How is the score calculated? What is the trade-off between expected greens and expected yellows?

Intuitively I expect that greens are significantly more "valuable" than yellows, since they reduce the search space much more...

Maybe this is all the wrong approach and the right metric is how many legal answers remain after the first guess?


I would say that the distribution of yellow letters across the remaining word list determines the value. If you have a yellow "X" or "Q", or one yellow vowel after testing all 5 vowels, that is a lot more meaningful than a yellow "T", which could really go anywhere. A yellow "G" but grey "I" tells me (at least intuitively, not quantitatively) a bit more than a green "I" and grey "G".

A recent word had a single vowel. That made it reasonably easy to guess where that vowel could go, and to choose a consonant-heavy next word that would really tear apart the search space if even one consonant was valid.

I think I'll switch to using RAISE or SLATE as a first choice. It was ADIEU before, but there's almost never an A in slot 1, U in slot 5, or D in slot 2, and I don't feel like any of these letters has a significantly more common position given just the info from those 5 letters. The yellow/grey letters are helpful after ADIEU, but I still need to explore at least 4 more letters (including O, and usually N or C) to even get a good idea of where any matches might go. When letter 4 is "E" while guessing ADIEU on round 1, that actually doesn't help me much with round 2. In fact, I'm then faced with the decision to explore unguessed letters in general, words with two "E"s, and/or words with "E" in 4th position (same with "I" in 3rd).


I've implemented your last suggestion of scoring words by how much they reduce the pool of potential candidates (12k+). Testing it against the 1,000 most frequent words (targets), the word LORES seems to work best on average.
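One way to score this (a sketch, assuming the candidates have already been grouped into feedback buckets): if a guess splits N candidates into buckets of sizes s_i, a uniformly random target leaves sum(s_i^2) / N candidates in expectation, so lower is better.

```python
def expected_remaining(sizes: list[int]) -> float:
    """Expected number of candidates left after a guess that splits
    the candidate pool into feedback buckets of the given sizes."""
    n = sum(sizes)
    # A random target lands in bucket i with probability sizes[i] / n,
    # leaving sizes[i] candidates, hence sum(s_i^2) / n on average.
    return sum(s * s for s in sizes) / n

# An even split narrows the pool far more than a lopsided one:
print(expected_remaining([25, 25, 25, 25]))  # 25.0
print(expected_remaining([70, 10, 10, 10]))  # 52.0
```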


I did the same and ended up with ARISE.

My word list was alphabetical and the system picked up the first one :P


Not a Slack user here. Is the app version _slower_ than the website? I assumed that since Electron is little more than a Chromium wrapper, they would have identical performance.


It's not just the app being slower than the website, it's what it does to the rest of your machine.

Contained within a tab, Slack can only use the amount of resources the browser allots it.

Installing the app, on the other hand, lets it go whole hog.


One Chromium runtime managing five tabs is more efficient than five Chromium runtimes running one site each (because those instances aren't able to share anything with one another).


Long-time Slack app user here. I don't know how you organize your Slack, but performance has never been an issue for our organization. I don't think it would run any better than it currently does as an OS-native app. Of course, my machine isn't a toaster.


The most important thing for me is the latency of the streaming solution. Discord seems to do really well here, with <1 second latency. Have you done any measurements with your solution to suggest how much latency there is between the server and the client?


If you're connecting to the RTMP server directly, you can get about 1-2s latency. However, due to the additional overhead of the conversion process to HLS, I'm getting about 6-10 seconds of latency.

I have considered looking into WebRTC, but it seems a bit more complicated to work with than just running an RTMP server and calling it a day.

Both Apple and Twitch have their own implementations of low-latency HLS, so it could be worth looking into how feasible it would be to set up something similar.

https://developer.apple.com/documentation/http_live_streamin...


Lowering hls_fragment[1] to 1s should help a little with the latency. This is what I've done in the past with a project similar to yours, Open Streaming Platform[2], with good results.
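For reference, the relevant nginx-rtmp knobs look roughly like this (a sketch; the application name and paths are placeholders):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            hls on;
            hls_path /tmp/hls;        # where the .m3u8 playlist and segments go
            hls_fragment 1s;          # shorter fragments lower latency
            hls_playlist_length 4s;   # keep the playlist short too
        }
    }
}
```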

Since you mentioned WebRTC for streaming to provide even lower latency, this is what I believe Project Lightspeed has achieved[3]. Might be something you'd be interested in, if you hadn't already seen it.

[1] https://wiki.openstreamingplatform.com/Usage/Streaming#osp-n...

[2] https://wiki.openstreamingplatform.com/Usage/Streaming#osp-n...

[3] https://github.com/GRVYDEV/Project-Lightspeed


I've experimented a bit with lowering the hls_fragment value and the playlist length, and found that fragments shorter than 2 seconds caused too much buffering on the client for playback to be stable - which resulted in higher latency than before.

I'm looking into the latency issues and I've started drafting up possible solutions. Low latency MPEG-DASH might be something to experiment with. It won't reach sub-second latency, but it will definitely be better than HLS.

