fxfan's comments | Hacker News

Runaway success of OSS Linux on the desktop? I'm a full-time user, but I can't tell you how much I hate the constant hardware issues. It's 2019 and I cannot hibernate or sleep my laptop reliably. The backlight doesn't work without fiddling with kernel params.


> It's 2019 and I cannot hibernate or sleep my laptop reliably.

Oh, no worries, it doesn't even work properly on Windows on half of the hardware.


I've never really had Windows hibernation fail on a laptop or desktop in nearly a decade now. Conversely, Linux doesn't fail at hibernation (it's usually rock solid there) but has other problems, like the battery not being detected, or nonexistent power saving (probably because it thinks it's a desktop).

Source: I like installing Linux on obscure things like Chromebooks and Windows 2-in-1s.


I manage a crap ton of Windows systems... The first thing I do is disable hibernate on all of them. It does not work very well on Windows either, so that is not a Linux thing. Hibernate should simply not be a power state.

Nor should sleep, IMO, but I do keep that on in Windows. On and Off should be the two power states. People just putting their Windows computers to sleep for months on end causes all kinds of weird shit to happen in Windows.

> but I can't tell you how much I hate the constant hardware issues

And you believe Windows works perfectly on all hardware? I select my personal computers based on Linux hardware support and I have zero issues. Windows has all kinds of weird driver issues, random shit that happens with some hardware.

Every issue you can come up with that Linux has a problem with, I can find the same problem in Windows. Welcome to General Computing.


I will suggest that Windows has far fewer problems... and while my next desktop (months away) will likely be Linux, it is definitely not nearly as friendly as either Windows or Mac for most users.

Of course, if you don't manage your own computer and mostly run out of a browser, it doesn't matter nearly as much. It's significantly better today than it was a decade, or even two years, ago. About every 2-3 years I've run Linux as my main OS for anywhere from a month to, I think, four months at the longest. It never sticks, because invariably I hit a sticking point that just gets me to say "fuck it." Running a Hackintosh, for all its faults, is generally easier than running Linux as a desktop OS, which is a really sad statement.

It's not that Windows doesn't have problems, it's that it has them a lot less frequently. I say this as someone who prefers to develop against Linux (mostly web UI and server backends, etc.). I absolutely hate the frustrating Windows-isms in Git Bash. I find that WSL 1.x isn't sufficient, and the lack of properly performing volume mounts in Docker for Windows is very frustrating, to say the least.

I, frankly, welcome a more transparent/fast/effective/complete option for Linux software development on Windows. What has come so far has allowed me to push it as an option going forward that wouldn't otherwise be there.


> People just putting their Windows computers to sleep for months on end causes all kinds of weird shit to happen in Windows

Not just Windows; Linux/BSDs have problems too, because just like Windows they tend to clean up temporary files only on reboot/relogin. Though Windows in particular has had a number of issues where it would fill up the entire system drive, no matter how big, if you let it run long enough (related to not cleaning up extremely verbose log files and a couple of problems with Windows Update).

> Windows has all kinds of weird driver issues, random shit that happens with some hardware.

The last 1-2 years of Nvidia drivers seem to generally lack stability, with many people reporting frequent driver resets and outright game crashes, especially on Windows 7 and 8. The Linux drivers aren't in a particularly convincing state either: issues with disconnecting/reconnecting displays, suspend/resume problems, 3D application hangs, hardware video decoding crashing the drivers (an issue on both OSes), etc.


I clearly didn't say Linux. I said Unix. OSX.


A lot of OSX is not OSS, so I wouldn’t be surprised by the confusion.


When does CoreRT ship?


Fossil, from the creator of SQLite.


Reminds me of the time there was a comment wondering what the OS on the Microsoft watch was; someone responded with a speculation: an Erlang VM with JS for the GUI.


Some of us Bernie supporters were also called comrades, and Bernie was maligned for having had his honeymoon in the Soviet Union. I hate Trump based on what I've seen him say first hand, but of late I'm beginning to form the opinion that the media blows things completely out of proportion and that Russia collusion is Iraq 2.0.


[flagged]


Not even Mueller claims Trump collaborated. You are basing your claims on emotion, not evidence. You want Trump to be guilty, so he is, despite the facts.


You haven't worked for large companies, have you? Intel ME is NOT a backdoor. It may have vulnerabilities, sure, but none explicitly put in there.

It was designed for a specific purpose: troubleshooting enterprise computers. And it does that job amazingly well. No more IT guy guiding me through the steps when he can just do all the clicks himself.


It doesn't matter if it's a deliberate backdoor or not. It's a door, and I want to be able to close that door if I'm not using it, and Intel won't let me. Reducing attack surface is a security best practice exactly because any software can have bugs.

An allegory: imagine if an OS ran an SSH server and there was no way to turn it off or to control the keys it accepts. Maybe it has no bugs (you can't see the source code). Maybe it has no malicious intent or backdoors. As a security conscious computer owner, I still view its existence as a negative. I would like to be able to provably turn it off or control the keys it accepts.


And that's exactly what matters and why I among many others call it a backdoor.

Telnet, on the other hand, is a service that I can switch off or block with far less work involved in normal circumstances.

To get rid of Intel ME I'd need to use coreboot/Libreboot and install it in a ritual that, for a novice, has something of a "black magic rite" about it.


Forcing it upon users is wrong, but calling it a backdoor, as someone who sounds reasonably intelligent speaking to other reasonably intelligent people, is misleading and wrong too.


I think this is one of those cases where you need to take into account the intent of Intel ME when deciding whether or not you can consider it a backdoor. Surely it's a useful tool in corporate environments, but to any other average individual it's definitely a backdoor. It's a "feature" of nearly every modern x86 CPU that undoubtedly has the capabilities of a backdoor and that cannot be turned off or disabled by regular means. If I wanted to be able to remotely manage my machines out of band, I would have asked for it; instead I foolishly bought myself into a very easy way for vendors to maintain control over me and my data.


A backdoor is access to a computer which the legitimate owner cannot control. Intel ME fits this very well. Let me switch it off (verifiably) and we can talk.

If it were for troubleshooting enterprise computers, it would be opt-in. At this point I assume bad faith.


If one mistake tarnished a reputation forever, then after the Iraq war none of CNN, Fox, NYT, MSNBC, or WaPo should be in business, especially since all of these were accomplices in, and not victims of, the falsehoods.


You aren't allowed to objectively criticize the Iraq war anymore, as it's been endorsed by both major political parties at this point.

It's sad that the major news outlets are complete propaganda machines at this point. Nobody is taking the government to task for its continued wars in the Middle East. The only angle I ever see is how some 'atrocity' has taken place and how we need to bomb them even more.


I don't know about the LEO numbers, but a quick back-of-the-envelope calculation assuming 1000 km puts this latency at a little over 3 ms one way.
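For concreteness, here is that back-of-the-envelope arithmetic spelled out (assuming a 1000 km straight-line path at the speed of light in vacuum, and ignoring processing and queuing delay):

    # One-way latency for a 1000 km link at light speed (back-of-the-envelope).
    # Assumptions: straight-line path in vacuum; no switching or processing delay.
    c_km_per_s = 299_792        # speed of light, km/s
    distance_km = 1_000         # assumed slant range to the satellite
    one_way_ms = distance_km / c_km_per_s * 1_000
    print(f"one way: {one_way_ms:.2f} ms, round trip: {2 * one_way_ms:.2f} ms")
    # prints roughly: one way: 3.34 ms, round trip: 6.67 ms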


EDIT: Release notes: https://tug.org/texlive/doc/texlive-en/texlive-en.html#

One technology that has stood the test of time: the amazingly well-designed TeX by Donald Knuth and LaTeX by Leslie Lamport.

Plain TeX can be a bit beginner-unfriendly, because most beginners use LaTeX and experienced people have their own 'coding style', but it's amazingly powerful.

I only wish somebody would add the minimum required primitives to plain TeX to render reflowable text in browsers and ebook readers.

TeX was designed for "beautiful" typesetting, but it gives you much more control than that, IMHO. I'd urge everyone to try it out once. I use it offline, but I think overleaf.com allows plain TeX too. (XeTeX may work best for Unicode; it's plain TeX plus minimal additions for Unicode.)

On that note, I am not fond of PDFs because of their awfully poor Unicode search support. Does anybody knowledgeable know of a good target format I should use (and the appropriate drivers)?


You might like this LaTeX to HTML converter I'm working on: https://github.com/arxiv-vanity/engrafo

What primitives do you think need adding to TeX? LaTeXML, which powers Engrafo, does a pretty good job of converting plain TeX, as well as LaTeX.


Ironically, Computer Modern is terrible to read on computer displays. It was designed for ink and toner bleed, and looks great when printed on 1970s-era Xerox printers [1], but it's far too thin for digital rendering.

I might suggest Bitstream Charter for a well-designed and readable analogue to CMR for digital use.

[1] https://www.typografie.info/3/topic/22238-ist-die-computer-m... (German language)


That is the Type 1 realization of it; the original Metafont outputs bitmaps tailored for the target device. Unfortunately no widely used font rendering library supports Metafont directly, so the fonts are often converted to Type 1 or OpenType and lose that capability.


A very good font family for screens that's openly licensed is Inter[2]. It's well designed and also quite versatile.

[2] https://rsms.me/inter/


Out of interest, what are the advantages of LaTeXML over Pandoc's LaTeX to XML/HTML conversion? Why did you choose LaTeXML?


Last time I checked, pandoc’s LaTeX parser did not support much of the TeX syntax. It basically only works for some subset of LaTeX.


This is super cool, thank you for sharing!


The problem is that browsers choose not to implement high-quality justification algorithms like the Knuth-Plass algorithm that TeX uses, because it is computationally intensive. That’s why justified text looks like garbage on the web.

There are some experimental JavaScript implementations, but without browser support, reflowing high-quality justified text is a non-starter on the web.
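For anyone curious what the Knuth-Plass approach actually does differently: it picks break points for the whole paragraph at once, minimizing a total badness score (roughly, the squared leftover space on each line, plus penalties), instead of greedily filling one line at a time. Here is a stripped-down sketch of that dynamic program in Python; it's a toy illustration that measures widths in characters and ignores hyphenation, stretchable glue, and penalties, so it is not TeX's actual algorithm:

    import math

    # Toy version of paragraph-at-once ("optimal") line breaking:
    # minimize the sum of squared leftover space over all lines,
    # with the last line of the paragraph allowed to be short for free.
    def break_lines(word_widths, line_width, space=1):
        n = len(word_widths)
        best = [math.inf] * (n + 1)   # best[i]: minimal badness for words[i:]
        best[n] = 0.0
        choice = [n] * (n + 1)        # choice[i]: first word of the next line
        for i in range(n - 1, -1, -1):
            width = 0
            for j in range(i, n):     # try ending the current line after word j
                width += word_widths[j] + (space if j > i else 0)
                if width > line_width:
                    break
                slack = line_width - width
                badness = (0 if j == n - 1 else slack ** 2) + best[j + 1]
                if badness < best[i]:
                    best[i], choice[i] = badness, j + 1
        lines, i = [], 0
        while i < n:
            lines.append((i, choice[i]))   # words[i:choice[i]] form one line
            i = choice[i]
        return lines

    words = "in olden times when wishing still helped one there lived a king".split()
    print(break_lines([len(w) for w in words], line_width=16))

A greedy breaker only looks at the current line, so one awkward early break can force very loose lines later; the dynamic program accepts a slightly worse early line to get a much better paragraph overall, which is the whole point of TeX-style justification.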


TeX’s line-breaking algorithm is certainly not computationally intensive. On my 7-year-old MacBook Pro, it takes 0.34 seconds to run the entire 495-page TeXbook on a single thread. That includes parsing, macro expansion, page breaking, lots of (slower) math layout, and DVI output, which means that the line breaking takes at most a few hundred microseconds per page.

Remember, TeX was written to be usable on a 1-megabyte, 10-megahertz machine, where it ran at about a page a second. One of my contributions at the time was to modify the Pascal compiler on the SAIL PDP-10 to count cycles for every machine instruction executed by TeX, over all users, over a number of months, and Knuth fine-tuned the inner loops of TeX here and there based on the results (the code that automatically inserts kerns and ligatures got the most attention, IIRC).


My comment was based on what I've read about this from multiple supposed authorities on web development. Intuitively it made sense that you wouldn't want to ship a computationally intensive algorithm in browsers on mobile devices, or where the content area changes size frequently, as on web pages. It's fascinating to have those assumptions overturned by someone so deeply involved in TeX.


Well, if you're going to be nice about it, here's some more info: The Mozilla discussion claims that TeX's line-breaking algorithm is "quadratic," which seems a bit far-fetched. So, I just pulled the raw text of Moby Dick off the web, removed the blank lines so it's all one paragraph, and ran it. TeX produces 112 pages (hmmm, it was just "Volume 1") in 2.1 seconds. So, 30x slower than "normal-size" paragraphs, but hardly quadratic, as the single-paragraph Moby Dick is 1000x as large as the average paragraph in The TeXbook. Of course, as pointed out elsewhere, with a little effort, one could make minor changes that would remove even this speed penalty.

I'm much more sympathetic to the point that, while TeX's line-breaking algorithm can easily handle paragraphs with different line lengths for each line, it needs to know at the start what the different line lengths are. It's not clear how to generalize it to be able to handle layouts where the length of the nth line of a paragraph depends on the earlier (or later!) line breaks. Think tall floating figures which impinge on the text area of the paragraph they're in. I'm guessing that was the real impediment in using it in Web-land.


My assumption was also that performance is the reason we aren't getting more esthetically pleasing line breaking. Until I read a comment[1] by Philip Walton, who works on WebRender at Mozilla, that is.

[1]: https://news.ycombinator.com/item?id=19473277


> ... it's not possible in the general case, at least not with the specs as they are today.

That's a fair response, but how about changing the (CSS) specs to allow better line breaking? Surely that would take less time than WebUSB, and Google or Mozilla could quickly push it through the IETF.


See also the 8-year-old Mozilla bug for a more detailed discussion: https://bugzilla.mozilla.org/show_bug.cgi?id=630181


I'm not a web developer, so take this with a huge pinch of salt, but, if floats are the problem, does that imply that with layouts that use CSS Grid or Flexbox, we could have a decent justification algorithm?


Oh, thanks for this. Very interesting.


He's Patrick, not Philip.


Whoops. Thanks for the correction. I think it's too late to edit my original post. :/


It's not (just) about the computational power; it's also incompatible with the standard. The only compatible algorithm is a naive greedy one.


I haven't tried this myself, but I think ConTeXt can output PDFs with embedded XML markup, and even EPUB files. You need to use \startsection and \stopsection instead of just \section, and maybe there are other limitations, but it's a small price to pay, isn't it?

https://wiki.contextgarden.net/Epub

https://wiki.contextgarden.net/Export


TeX is fundamentally not compatible with reflowable text because, at the lowest level, it's about putting glyphs in positions.


I fail to see any fundamental incompatibility. Every text renderer of any kind is about putting glyphs in positions. The layout would declare a few things not movable, and the algorithm needs to decide how to fit text around those constraints. The algorithms TeX uses are computationally expensive, but I don't see why, with faster computers, you couldn't reflow the text. About 15 years ago it used to take a dozen seconds to compile my typical PDF; now overleaf.com does it in just a second, in the browser.


If you are outputting a layout for a reflowable medium like HTML or EPUB, you are not putting glyphs in positions. You are constructing graphical objects and defining their relationships (and how those relationships change based on form factor), and you are permitting the output device to render glyphs and put them in positions.

This is why we don't use PDF for web pages.


The web browser also "puts glyphs into positions." Neither HTML nor TeX source specifies positioning in the source format, though. The same TeX document can be re-rendered at different page sizes, font sizes, etc. I'm really not sure what fundamental difference you are seeing.


It's still way too slow. Lots of the documents I've worked on recently have taken over ten seconds to compile on my 2017 MacBook Pro (Touch Bar). That's three orders of magnitude too slow for reflowing at 60 Hz when you resize a window. I doubt a laptop will ever be three orders of magnitude faster (in sequential execution) than the one I have now, radical post-semiconductor computers notwithstanding.


Compilation seems to take place on the server.


Doesn't Xe(La)TeX/Lua(La)TeX produce PDFs with decent Unicode search support? What issues do you have?


XeTeX in particular does, but my problem is with the uncertainty around the whole process and the frustration when a search fails on a 300-page document.


? PDF works fine for searching.

It depends on the document; a PDF page could be:

1) Text

2) An image of text

3) Lines/Curves that happen to be in the shape of letters/text

If it's text, it's perfectly searchable. If it's one of the others, the creator has to also OCR it and add the text behind the image, or in front but invisible (no stroke or fill).


It could also be mostly text, but mixed with ligatures having their own codepoints (sometimes non-Unicode).


If you set the \XeTeXgenerateactualtext=1 option in a Xe(La)TeX document, the resulting PDF will include ActualText annotations to support searching even in complex scripts with ligatures, character reordering, etc.


Unicode search in PDF works fine, there are just a few issues that can cripple it for specific files, or specific files using specific PDF implementations. The following is advice for anyone working with the format, it should not be construed as making excuses for how PDF does things. It's a ginormous format with a ginormous spec that's been around for a very long time and has accumulated a lot of baroque qualities over the years. So I don't mean to condemn it, but I'm not exonerating it either. It is what it is, and if you have to work with it this might be useful. Anyhoo:

Regardless of what one thinks of their other qualities, if a PDF can be searched in Adobe Reader or Acrobat, then the file is probably OK but the PDF reader that you were trying to search it with has a bug on its end (or more likely, some unimplemented dark corner of the PDF spec).

On the other hand, if the file isn't searchable via Reader/Acrobat, then the problem is most likely with the authoring of the file itself. The most common thing that breaks searching is when instead of embedding all fonts used, a PDF refers to fonts by name from the local system. This can cause unpredictable issues when reading the PDF from another OS that can't resolve those font names.

Another common breaker of search, which seems to be much more common with TeX, is workflows that somehow produce PDFs with embedded Type 3 fonts. Type 3 fonts represent glyphs using PDF drawing instructions. It's less a file format than something that only exists as an embedded font within a PDF, and I've only seen them in the wild when authored by pdflatex or similar. TrueType and OpenType seem to be the most reliable formats to embed across the most commonly used PDF implementations, but that's an educated guess. Type 3 font support is spotty in non-Adobe implementations.

Finally, font subsetting might screw up search. Most software that knows how to produce PDFs can produce PDFs with embedded but subsetted fonts. This means the software that created the file embedded TrueType fonts (for example), but created a special version of the font for embedding that only contained the glyphs used in the PDF. Depending on the quality of the software used to do the font subsetting, the output PDF might not retain the mapping of glyphs back to Unicode characters. If that happens, text using that font in the PDF becomes unsearchable, but the PDF remains renderable.

None of this is meant to excuse the shortcomings of the format, it's clearly overcomplicated and fragile. But when I've seen problems with searchability of PDFs, it's most often been with the software that was used to create the files, or how that software was configured by the author. And when it is a problem with the authored file itself, it's almost always because of fonts. Either the fonts aren't embedded, or they're embedded in a slightly oddball format that's in spec for PDF but not perfectly, universally supported by all of the various non-Adobe PDF implementations.
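If you want to sanity-check a suspect file yourself, here is a rough sketch of the kind of inspection described above, using the pikepdf library (my choice purely for illustration; any library that exposes the page /Resources /Font dictionaries would do). It flags the three issues above: fonts that aren't embedded, Type 3 fonts, and fonts missing a /ToUnicode map (the glyph-to-Unicode table that text extraction and search rely on):

    # Rough sketch: list fonts in a PDF that commonly break text search.
    # Assumptions: pikepdf is installed; only page-level /Resources fonts are
    # inspected (fonts inherited from the page tree or used inside XObjects
    # are not covered); "example.pdf" is a placeholder file name.
    import pikepdf

    def check_fonts(path):
        with pikepdf.open(path) as pdf:
            for page_no, page in enumerate(pdf.pages, start=1):
                resources = page.obj.get("/Resources")
                if resources is None or "/Font" not in resources:
                    continue
                for name, font in resources["/Font"].items():
                    subtype = str(font.get("/Subtype", ""))
                    # Composite (Type0) fonts keep the descriptor in the descendant font.
                    if subtype == "/Type0" and "/DescendantFonts" in font:
                        descriptor = font["/DescendantFonts"][0].get("/FontDescriptor")
                    else:
                        descriptor = font.get("/FontDescriptor")
                    embedded = descriptor is not None and any(
                        key in descriptor
                        for key in ("/FontFile", "/FontFile2", "/FontFile3")
                    )
                    problems = []
                    if subtype == "/Type3":
                        problems.append("Type 3 font")
                    elif not embedded:
                        problems.append("not embedded")
                    if "/ToUnicode" not in font:
                        problems.append("no /ToUnicode map")
                    if problems:
                        print(f"page {page_no}, font {name}: {', '.join(problems)}")

    check_fonts("example.pdf")

(The 14 standard base fonts can legitimately be non-embedded and still search fine, so treat the output as hints rather than verdicts.)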


I hit the comment limit earlier, but thank you for writing in detail about what's actually going on behind the scenes, so I can take care not to let the problem affect me much.

That said, and despite your having yourself mentioned that there is no excuse for the shortcomings, I feel like this decision in particular is just plain inexcusable, because if you're going to define a portable document format, at least use some form of UTF encoding rather than indexing into glyphs (or on top of it!)


Kind of.

I was deep into LaTeX during the '90s while at university; the state of my copy of Lamport's book reflects how much I used to refer back to it.

Nowadays I rather prefer the convenience of something like FrameMaker.


Thought about updating, but the actual changelog is pretty underwhelming for a 5+GB download... https://tug.org/texlive/doc/texlive-en/texlive-en.html#news


I never thought I'd oppose the decision to transfer ICANN control outside of the USA; it seemed increasingly and inevitably for the greater good. Now I'm no longer so sure why it was done.


So they could rob everyone blind and not be accountable to anyone.

