Hacker News | mjfisher's comments

Is there a way of getting them to store a dozen or so totp secrets? And if so, how do you select which one you want to use?


For that use case, get an OnlyKey rather than a YubiKey.


And in case it helps further in the context of the article: traditional rendering pipelines for games don't render fuzzy Gaussian points, but triangles instead.

Having the model trained on how to construct triangles (rather than blobby points) means that we're closer to a "take photos of a scene, process them automatically, and walk around them in a game engine" style pipeline.


Any insights into why game engines prefer triangles rather than Gaussians for fast rendering?

Are triangles cheaper for the rasterizer, antialiasing, or something similar?


Cheaper for everything, ultimately.

A triangle by definition is guaranteed to be co-planar; three vertices must describe a single flat plane. This means every triangle has a single normal vector across it, which is useful for calculating angles to lighting or the camera.
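
To make that concrete, here's a toy sketch of my own (not from the comment): the face normal of a triangle is just the normalised cross product of two of its edges.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def triangle_normal(v0, v1, v2):
    # One cross product per triangle gives the single normal for the
    # whole (flat) face.
    n = cross(sub(v1, v0), sub(v2, v0))
    length = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
    return (n[0]/length, n[1]/length, n[2]/length)

# A triangle lying in the XY plane has normal (0, 0, 1):
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))
```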

It's also very easy to interpolate points on the surface of a triangle, which is good for texture mapping (and many other things).
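
For example (my own illustration, not from the comment), interpolating a per-vertex attribute like a UV coordinate is just a weighted average using barycentric weights:

```python
def bary_interp(a0, a1, a2, w0, w1, w2):
    # Weighted average of three per-vertex attributes (e.g. UVs or colours);
    # the barycentric weights satisfy w0 + w1 + w2 == 1.
    return tuple(w0*x + w1*y + w2*z for x, y, z in zip(a0, a1, a2))

# The midpoint of an edge gets the average of the two endpoint UVs:
print(bary_interp((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), 0.5, 0.5, 0.0))
```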

It's also easy to work out if a line or volume intersects a triangle or not.
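
As a sketch of that (my own example, using the well-known Möller–Trumbore algorithm rather than anything from the thread), a ray/triangle test boils down to a handful of dot and cross products:

```python
EPS = 1e-9

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, v0, v1, v2):
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < EPS:                  # ray parallel to the triangle's plane
        return False
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv_det             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = dot(direction, q) * inv_det     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv_det > EPS   # hit must be in front of the origin

# Ray pointing straight down at a unit triangle in the XY plane:
print(ray_hits_triangle((0.25, 0.25, 1.0), (0, 0, -1),
                        (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # True
```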

Because they're the simplest possible representation of a surface in 3D, the individual calculations per triangle are small (and more parallelisable as a result).


Triangles are the simplest polygons, and simple is good for speed and correctness.

Older GPUs natively supported quadrilaterals (four-sided polygons), but these have fundamental problems because they're typically specified using the vertices at the four corners... but these may not be co-planar! Similarly, interpolating texture coordinates smoothly across a quad is more complicated than with triangles.

Similarly, older GPUs had good support for "double-sided" polygons where both sides were rendered. It turned out that 99% of the time you only want one side, because you can only see the outside of a solid object. Rendering the inside back-face is a pointless waste of computing power. Dropping double-sided support actually simplified rendering algorithms by removing some conditionals in the mathematics.
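
The back-face test itself is tiny (my own sketch, not from the comment): a single dot product between the face normal and the view direction decides whether the face can be skipped.

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_back_face(normal, view_dir):
    # A face whose normal points the same way as the view ray faces away
    # from the viewer and can be culled before any shading happens.
    return dot(normal, view_dir) > 0

# Camera looking down -Z:
print(is_back_face((0, 0, 1), (0, 0, -1)))   # front face -> False
print(is_back_face((0, 0, -1), (0, 0, -1)))  # back face  -> True
```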

Eventually, support for anything but single-sided triangles was in practice emulated with a bunch of triangles anyway, so these days we just stopped pretending and use only triangles.


As an aside, a few early 90s games did experiment with spheroid sprites to approximate 3D rendering, including the DOS game Ecstatica [1] and the (unfortunately named) SNES/Genesis game Ballz 3D [2]

[1] https://www.youtube.com/watch?v=nVNxnlgYOyk

[2] https://www.youtube.com/watch?v=JfhiGHM0AoE


> triangles cheaper for the rasterizer

Yes, using triangles simplifies a lot of math, and GPUs were created to be really good at doing the math related to triangle rasterization (affine transformations).
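
To make "affine transformation" concrete, here's a toy sketch of my own (not from the comment): an affine transform is a linear part plus a translation, applied to every vertex, which maps straight edges to straight edges.

```python
def apply_affine(m, v):
    # m is a 3x4 matrix: a 3x3 linear part plus a translation column.
    x, y, z = v
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3] for i in range(3))

# Pure translation by (1, 2, 3):
m = [[1, 0, 0, 1],
     [0, 1, 0, 2],
     [0, 0, 1, 3]]
print(apply_affine(m, (0.0, 0.0, 0.0)))  # -> (1.0, 2.0, 3.0)
```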


Yes, cheaper. Quads are subject to becoming non-planar, leading to shading artifacts.

In fact, I believe that under the hood all 3D models are triangulated.


Yes. Triangles are cheap. Ridiculously cheap. For everything.


I'm on mobile; I scrolled to the bottom and clicked the image of the painting and could zoom in to my heart's content - did it ask you for an account?


You can zoom in a lot on the 2490 × 1328 pixels offered. When you hit the download button for the full version, you get nagged.

Edit: you can zoom in, and then it will offer up the painting in slices at a higher resolution. So in theory you could download those and stitch them together if you manage to hit an unscaled version.


Fascinating reading:

> The majority of developers are unacquainted with features such as processing instructions and entity expansions that XML inherited from SGML. At best they know about <!DOCTYPE> from experience with HTML but they are not aware that a document type definition (DTD) can generate an HTTP request or load a file from the file system.

I was one of them!


Developers are even less aware that SGML has (and always had) quantities in the SGML declaration, which among other things allow restricting the nesting/expansion level of entities (and hence countering entity-expansion attacks without resorting to heuristics).
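
The exponential blow-up those quantities guard against is easy to see with a little arithmetic. The numbers below are the shape of the classic "billion laughs" payload (my own illustration, not from the comment): each entity level expands to several copies of the level below it.

```python
# Hypothetical payload: 10 entity levels, each expanding to 10 copies of
# the previous entity, bottoming out in the 3-byte string "lol".
levels, fanout, leaf = 10, 10, len("lol")

# Total expanded size is leaf * fanout**levels bytes.
expanded_bytes = leaf * fanout ** levels
print(expanded_bytes)  # 30_000_000_000: ~30 GB from a payload under 1 KB
```

Capping the nesting depth (as SGML quantities can) bounds `levels`, which turns the exponential into something harmless.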

Regarding DOCTYPE and DTDs, browsers at best made use of those to switch into or out of "quirks mode", on seeing special hardcoded public identifiers but ignored any declarations. WHATWG's cargo cult "<!DOCTYPE html>" is just telling an SGML parser that the "internal and external subset is empty", meaning there are no markup declarations necessary to parse HTML which is of course bogus when HTML makes abundant use of empty elements (aka void/self-closing elements in HTML parlance), tag omission, attribute shortforms, and other features that need per-element declarations for parsing. Btw that's what defines the XML subset of SGML: that XML can always be parsed without a DTD, unlike HTML or other vocabularies making use of above stated features.

Keep in mind SGML is a markup language for text authoring, and it would be pretty lame for a markup language to not have text macros (entities). In fact, the lack of such a basic feature is frequently complained about in browsers. The problems came when people misused XML for service payloads or other generic data exchange. Note SOAP did forbid DTDs, and stacks checked for the presence of DTDs in payloads. That said, XML and XML Schema with extensive types for money/decimals, dates, hashes, etc. are heavily used in e.g. ISO 20022 payments and other financial messages, and to this day no single competitor with the same coverage and scope has evolved (with the potential exception of ASN.1, which is even older and certainly more baroque).


> Regarding DOCTYPE and DTDs, browsers at best made use of those to switch into or out of "quirks mode", on seeing special hardcoded public identifiers but ignored any declarations.

Not when processing XML MIME types. In modern browsers that mostly means SVG files, but I think XHTML is still possible.

(Modern) HTML is neither SGML nor XML, so it doesn't follow the rules of either.


"Modern" WHATWG HTML is still following SGML rules to the letter in its dealings with tag inference and attribute shortforms ([1]). Which isn't surprising when it's supposed to hold up backward compat. To say that "HTML is not SGML" is a mere political statement so as not to be held accountable to SGML specs. But (the loose group of Chrome devs and other individuals financed by Google to write unversioned HTML spec prose that changes all the time, and that you're calling "modern HTML" even though it doesn't refer to a single markup language) WHATWG would actually have been better off using SGML DTDs or other formal methods, since their loose grammar presentation and its inconsistent, redundant procedural specification in the same doc is precisely where they dropped the ball with respect to the explicitly enumerated elements on which to infer start- and end-element tags. This was already the case with what became W3C HTML 5.1 shortly after Ian Hickson's initial HTML 5 spec (which captured SGML very precisely) ([1]). But despite WHATWG's ignorance, even as recently as two or three years ago, backward compatibility was violated [2]. Interestingly, this controversy (the hgroup content model) showed up in a discussion about HTML syntax checkers/language servers just the other day ([3]).

Where HTML does violate SGML is in how CSS and JS were introduced, to prevent legacy browsers displaying inline CSS or JS as content. The original sin was placing these into content rather than into attributes or strictly into external resources in the first place.

Regarding SVG and XHTML, note browsers basically ignore most DTD declarations in those.

[1]: XML Prague 2017 proceedings pp. 101 ff. available at <https://archive.xmlprague.cz/2017/files/xmlprague-2017-proce...>

[2]: <https://sgmljs.net/blog.html>

[3]: <https://lobste.rs/s/o9khjn/first_html_lsp_reports_syntax_err...>


> "Modern" WHATWG HTML is still following SGML rules to the letter...To say that "HTML is not SGML" is a mere political statement so as not be held accountable to SGML specs.

That is self-contradictory and makes no sense. If it's following SGML to the letter, then there is no violation of the SGML spec to be held accountable for, and hence no need to hide behind "political statements".

You can't have this both ways.

> Regarding SVG and XHTML, note browsers basically ignore most DTD declarations in those.

They listen to DTDs for entity references and default attribute values. I'd hardly call that ignoring.


Most of these exploits are so famous that common xml processors have disabled the underlying features.

So in practice you probably don't have to worry too much, as long as you don't enable optional features in your XML library. (There are probably exceptions.)


> I was one of them!

I'm still one of them!


Nope, it's not a coincidence - it's an interesting exploration of the history of the definition of a metre. Read the article.

As it says, at some point there was an attempt to standardise the length of a metre in terms of a pendulum's length; which related it directly to g through Pi.
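
The arithmetic behind that is neat enough to spell out (my own worked example, not from the comment). A "seconds pendulum" has a half-period of one second, so T = 2 s, and the small-angle period formula T = 2π√(L/g) gives L = g/π², which is strikingly close to one metre:

```python
import math

g = 9.80665            # standard gravity, m/s^2
T = 2.0                # seconds pendulum: full period of 2 s

# T = 2*pi*sqrt(L/g)  =>  L = g * (T / (2*pi))**2  =  g / pi**2
L = g * (T / (2 * math.pi)) ** 2
print(round(L, 4))     # ~0.9936 m
```

So if the metre had been defined by the seconds pendulum, g would be exactly π² m/s² by construction, which is why g ≈ 9.87 ≈ π² is no coincidence.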


Perhaps not all that offtopic - Hatetris is what happens when you subvert the normal rules and make the game play against you. Antimemetics stories are what happens when you subvert the rules of ideas and make them play against you.

I can imagine a common space of inspiration there.


Most recently, Elixir.

I never really lost my love for programming, but twenty years in the n-th commercial project in the more common languages (plus a front end based in whatever combination of JS frameworks is the new flavour) really ground a lot of the original creative joy out of it for me. The interesting bits got too easy and the hard bits got more uninteresting.

Elixir is a breath of fresh air; it's functional so it requires thinking a bit differently, but it's accessible enough to start easily and pretty enough that it's not a soup of parentheses (looking at you, Lisps). It's practical and well supported enough to build a wide variety of useful things, and very good at concurrency.

It's what I really wanted Ruby to feel like.


The author mentions those at the bottom of the article, but two problems highlighted still remain:

* There's another intermediary concept (kernel density estimation) between the audience and the data

* They're still likely to misrepresent tight groupings and discontinuities, which will be smoothed out


Histograms and box plots are just clunky kernel density estimates too.
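
For the curious, a kernel density estimate is just an average of little bumps centred on the data points; a histogram is the same idea with boxcar bumps snapped to fixed bins. A minimal pure-Python sketch of a Gaussian KDE (my own illustration, not from the comment):

```python
import math

def kde(samples, x, bandwidth=1.0):
    # Density estimate at x: average of Gaussian bumps, one per sample,
    # each with standard deviation `bandwidth`.
    def k(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - s) / bandwidth) for s in samples) / (len(samples) * bandwidth)

data = [1.0, 2.0, 2.5, 3.0, 7.0]
print(round(kde(data, 2.0, bandwidth=0.5), 3))  # ~0.3, where the data clusters
```

The bandwidth plays the role of the histogram's bin width, and picking it badly smooths away exactly the tight groupings and discontinuities mentioned above.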


There's a difference between asking someone to invert a binary tree, and asking someone to sum up a list of numbers. The latter finds the people who can't code much more quickly!


That's often not true. Testing more, earlier, tends to surface problems when they require less work to fix.

Context switching is easier, there's less added complexity piled up between introducing the issue and fixing it, etc.

