
They're still going, are nice and litigious, and actively defend their trademarks. I'm kind of surprised they haven't sent a cease and desist already.


But it was never a Sun product? Java was a Sun product; giving JavaScript a name with "Java" in it was the mistake that created this whole mess.


Rename it GoScript this time around.


It was LiveScript originally.


Mocha before that. No going back.


For what it's worth, I do hate people pasting their AI summaries to the comments. Not only are they adding nothing, they are actively detracting from the conversation; they have just pasted a wall of text without fact-checking it. And in fact, this "summary" misrepresents the article; it completely ignores the humor and presents it as a serious scientific endeavour.

But judging from the rest of the comments, it seems like most people barely managed to finish reading the title, so perhaps there's no need to worry about them reading this AI slop...


I read the article and fact-checked the summary before posting. The original article is quite long, so the summary may be useful for those who are intrigued by the title and just want to know the outcome.

I don’t see any need to mention the "humor" aspect here. Many seemingly laughable hypotheses have turned out to be true when rigorously tested. The author did a good job investigating this one.


I just wanted to second this recommendation - while the video is nearly an hour long, it feels much shorter and is well worth watching: https://www.youtube.com/watch?v=kya_LXa_y1E


Are you sure it's really gone? I can see it on en-GB:

https://addons.mozilla.org/en-GB/firefox/addon/enhancer-for-...

Edit: Nevermind, it now seems to be gone from the en-GB site too!


Are you thinking of the "Indie Wiki Buddy" extension? They have already added support for the new minecraft wiki: https://getindie.wiki/listings/

It's pretty incredible that Fandom/Fextralife have driven people to developing browser extensions just to avoid everything they touch...


No, there's a Path of Exile wiki specific extension also.

It's slightly counterproductive from an SEO perspective as it redirects Fandom clicks to the new wiki, which gives Fandom all the SEO benefits of receiving those clicks.


Indeed, which I suspect is why Fandom still ranked so highly above poewiki for many searches for far too long; initially, even appending "poewiki" would often return no results, or just the Fandom result.

Thankfully there's been a notable shift in the past 3-6 months.

What has finally helped, I think, is the sheer amount of stale or missing data on Fandom. As a really basic current example, it doesn't even mention tattoos, a fairly critical aspect of the current league.


The biggest reason was simply that we were being penalised for having duplicate content, which Google are able to detect.

We definitely over index on content added post-fork, which makes sense given that we're not competing with Fandom for those searches.

According to Google Search console, in the last 3 months we've had 32.9M impressions and 4.39M clicks, which is nothing to sniff at.


Are there any headers added by the extension so you can detect natural vs redirected traffic?


I don't believe so. We never implemented client- or server-side tracking on the wiki, though, so it's not like we'd look at it anyway.
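There's no such header today, but for what it's worth, an extension could make redirected visits distinguishable to the destination site, e.g. by tagging the redirect with a query parameter. A hypothetical Chrome declarativeNetRequest rule sketch (domain names and the `ref` parameter are illustrative, not from the actual extension; a real rule would also need to rewrite the differing wiki path structure, e.g. via `regexSubstitution`):

```json
{
  "id": 1,
  "priority": 1,
  "condition": {
    "urlFilter": "||pathofexile.fandom.com",
    "resourceTypes": ["main_frame"]
  },
  "action": {
    "type": "redirect",
    "redirect": {
      "transform": {
        "host": "www.poewiki.net",
        "queryTransform": {
          "addOrReplaceParams": [
            { "key": "ref", "value": "fandom-redirect" }
          ]
        }
      }
    }
  }
}
```

The destination server could then count requests carrying `ref=fandom-redirect` to separate redirected traffic from organic visits without any client-side tracking.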


The context is that they work for Tailscale. (I'm guessing you assumed they were FAANG?)


I made no such assumptions. Doing this in any workplace, tech or no, is toxic to people who are justifiably upset with their workplace. The vibe I was getting was that rather than admit there could be toxic elements at play, the author chooses to blame those complaining instead.

It's a tale as old as the workplace.


Why do you assume that?


You can export your savegame via Google Takeout, but often these savegames will only work on the Stadia builds of games, and can't be loaded back into Stadia. (Source: https://support.cdprojektred.com/en/cyberpunk/stadia/sp-tech...)


Isn't this incredibly dangerous? I know everyone likes to pretend they have perfect code coverage, but just ripping stuff out that wasn't called during 'probing' feels like the perfect way to make rare code paths even more dangerous.


kkrieger (https://www.pouet.net/prod.php?which=12036) is an impressive 3D shooter in only 96 kilobytes. As one of their optimization techniques, they recorded all executed code paths and discarded the unused parts. At least in the first version, this was why you could only use CursorDown in the menus and CursorUp did not work.
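The technique can be sketched in a few lines: run the program under a tracer, record every function that actually executes during the probe run, and strip the rest. Anything only reachable on an unexercised path (like kkrieger's CursorUp handler) silently disappears. A minimal, hypothetical Python sketch (function names are illustrative):

```python
import sys

def handle_cursor_down():   # exercised during the recording run
    return "down"

def handle_cursor_up():     # never called while recording -> gets stripped
    return "up"

called = set()

def tracer(frame, event, arg):
    # Record the name of every Python function that gets called
    if event == "call":
        called.add(frame.f_code.co_name)
    return tracer

sys.settrace(tracer)
handle_cursor_down()        # the only path the "probe" run exercises
sys.settrace(None)

everything = {"handle_cursor_down", "handle_cursor_up"}
stripped = everything - called
print(sorted(stripped))     # -> ['handle_cursor_up']
```

The stripper has no way to distinguish "never needed" from "never probed", which is exactly the failure mode being discussed.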


I can't tell if this is praise or condemnation by demonstration ;P.


My guess is: praise for kkrieger for its tiny size, and condemnation by demonstration because kkrieger was known to have bugs because of the usage of that technique.


kkrieger was my first thought too - I think there was some other important piece of functionality that got stripped as well (I want to say something to do with hitboxes or collision detection).


I guess it only seems dangerous to me if you blindly follow its recommendations. It feels like it could generate a list of "things you may want to consider" that you'd then be able to use to take a look at your container.


It sometimes doesn't work, sure, but that's why we have tests. I minify all my containers nowadays, and in most cases it works. For the cases where it doesn't, I figured out the pattern of when and why for my apps, and I use include flags to ensure things remain inside.


Even prior to docker-slim there were tools like Quay.io that "did the right thing" by squashing images down to just the contents of the final image layer.

The best thing you can do is use minimal base images and multi-stage builds. This should help you immensely to reduce your attack surface and produce a standard software bill of materials, too.
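As a sketch of that approach (image names and paths are illustrative, not from the thread): a multi-stage build compiles in a full-featured image, then copies only the artifacts you explicitly name into a minimal runtime image, so everything included is an explicit decision rather than a blind subtraction.

```dockerfile
# Build stage: full toolchain available
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Runtime stage: start from (almost) nothing and add only what you name
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

This is the inverse of docker-slim's model: instead of removing what a probe didn't see, you add only what you know is needed.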


The quay.io squashing optimization is a lot safer though, right, as it doesn't remove anything that should be visible to the container?

I agree that the multi-stage builds are the best option, but it can be hard to know if you've included everything that is required or if you've accidentally excluded something that is important in rare cases.


docker-slim is incredibly dangerous and should never be used for a production app.


I guess the question is: dangerous in which way? It might lead to a crash, sure, but is that crash controlled? If it is, then it's just a crash. Stability vs. minimal attack surface.

But I agree, this is just a band-aid for lazy bois. Better to use Bazel etc. for distroless builds.


This is dangerous in that it strips assets, resources, and files from your app without understanding how they are used.

If you forget a critical code path when you build using Docker-Slim, and a resource file is not used, that resource will be stripped. The feature which depends on it will be broken in production.


I would disagree; I use it in production apps. I configured it and it works. If you do it blindly, things sometimes break, but if you configure it, it will work.


There is no guarantee that a blind code shaker will leave in everything important while stripping out everything that isn't. How could it possibly know?

If Docker-Slim is working for you in production apps, you are either getting lucky or your app is trivial enough to lack unseen code paths.


Maybe even unforeseen files outside of your app, right?

Like, maybe some log forwarder utility that only gets called for "CRIT" messages that didn't happen to get triggered by testing.


Sandstorm does something similar in its packaging process. In practice it works pretty well; besides integration tests exercising functionality, the packaging generates a manifest list people typically check into source control, making it easy to see a diff between versions. I’m sure I’ve had a bad build at some point, but on the whole it works fine. Most dependencies tend to get referenced up-front anyway and any issues that arise are usually from shelling out to binaries in rare code paths.


If you have good integration tests is it still a problem?


It depends on how comprehensive they are, and how important it is that your container operates correctly.

For example, even the best integration tests (for small/mid-size companies) don't always include tests that exercise weird paths around dates/times - leap years, leap seconds, daylight savings time, etc. We often trust that our datetime library or code will handle these for us, but what if the configuration is stored in a file that isn't accessed during your integration tests?

Best case scenario is you hit the error-path soon in production and your code either crashes or does something correct-enough with a fallback path, but a worse scenario is you start losing critical information and don't realize it/fix it until it's gone on for a while.
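The failure mode described above is easy to reproduce: a file that is only read on a rare path is invisible to any run that never takes that path. A contrived Python sketch (filenames and config keys are hypothetical):

```python
import json
import os
import tempfile

# A config file consulted only on a rare path. A probing run that never
# hits the leap-day branch never opens it, so a usage-based stripper
# would happily delete it from the container.
cfg_path = os.path.join(tempfile.mkdtemp(), "leap_rules.json")
with open(cfg_path, "w") as f:
    json.dump({"leap_day_fallback": "defer"}, f)

def handle_date(month, day):
    if (month, day) == (2, 29):          # rare path: Feb 29 only
        with open(cfg_path) as f:        # file access a probe never sees
            return json.load(f)["leap_day_fallback"]
    return "normal"

print(handle_date(3, 1))     # the path every test exercises
print(handle_date(2, 29))    # the path that breaks if cfg_path was stripped
```

If `leap_rules.json` were stripped, the first call would still succeed while the second would raise `FileNotFoundError` — potentially years after the image was built.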


In a non-trivial app, you can never guarantee that your "good integration tests" cover every edge case. If you could, we wouldn't have outages in production.


Not really. There are a few common problems doing this with Go and, for example, Alpine as a base image. You'll have to manually grab something like sqlite.c or some OpenSSL stuff (it's been a while), but after that it works just fine.

It's actually a good way to find dependencies you didn't know were there. As long as you're diligent, this isn't something you should be shaking in your boots over.


The dependencies are part of the build process. It seems like something apko would be better suited for.


If you have a good pipeline to prod, should be okay. You should hopefully have plenty of automated tests to ensure it doesn't get to prod if there are errors.


I think a "good pipeline to prod" with sufficient automated tests to ensure nothing is broken is the exception not the rule. Even in places that think/say they have a "good pipeline to prod". It's something that takes a shocking amount of engineering effort to do well, and tons of discipline to maintain.


Hire a test engineer to manage all of that - it’s a full time job but an important one!


Totally agree. I've joined companies where there was literally zero test coverage. All tests were done manually. But then if you have bad test coverage, then you shouldn't be using something like this.


"Should be okay" is definitely not enough for me to ship things to production.

And while I do have automated tests, they sometimes stub system calls, as I'm mostly testing my own code and want to keep tests stable and fast.

I'd rather explicitly declare my dependencies and use the same container for development, test and production to feel much more confident that it includes actually everything that's needed.


With a good pipeline and knowledge of your app, you should be able to ensure it works without much of a problem.


That's why you should test: if there's stuff that needs to be included but isn't, and you know it won't work without it, fail the test and add --include-path to your docker-slim command to ensure it gets added.
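As a sketch of that workflow (image name and paths are hypothetical, not from the thread), the flags are passed at build time so the named paths survive even if the dynamic probe never touches them:

```shell
# Explicitly keep paths docker-slim's dynamic probe wouldn't otherwise see
docker-slim build \
  --include-path /app/config \
  --include-path /usr/share/zoneinfo \
  my-app:latest
```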


RE:Twitter, I suspect that's because Germans get a different report form than other visitors - Germany has some specific laws (e.g. NetzDG) that apply to visitors from Germany, rather than visitors who speak German.

