Hacker News | YSFEJ4SWJUVU6's comments

The engine actually only made 150hp, but luckily there were 2 of them.

Yes, they were certainly less efficient than modern engines. However, ICEs have always involved trade-offs between power output per size, mechanical efficiency, and reliability, and for this vehicle the last was certainly the most important.


That fwrite writes the contents of output to stdout.

output probably contains all the output that doesn't depend on input (which this program doesn't have).


>since UTF-8 doesn’t have a “magic number” to identify itself, the convention is to use the BOM codepoint

Neither does any other of the hundreds of existing text encodings.

It's debatable how much of a magic number it's supposed to be anyway: few people have insisted on magic numbers in text files, and you get the BOM at the beginning simply by naively converting a UCS-2/UTF-16 file codepoint by codepoint (and, vice versa, it ends up required if you ever convert the other way, because of course your conversion couldn't include that extra logic).


The nice thing about the BOM is you can't get it accidentally in an ASCII file - all the bytes have the upper bit set but all ASCII characters have that bit as zero. It makes an excellent magic number for that reason. It's probably just as unlikely to come up in other encodings that use the upper bit.


BOM is only a problem with strict syntaxes, which robots.txt is not an example of. If the "consumer" simply ignores invalid or meaningless lines, you can avoid issues from invisible characters by not having anything meaningful on the first line of your file.


>A motorbike will generally [...] brake faster than a car in a straight line (mostly because it's light)

That's generally not true. At best you'd roughly match an average car when balancing your weight perfectly; but typically you can only expect worse results when comparing the two.


The classical equations of friction give a constant relationship between weight and the maximum acceleration available from friction. And that applies fairly well to most pairs of materials.

But it applies less well to rubber. Rubber has significant natural adhesive qualities, so a normal force of zero newtons still gives a nonzero acceleration. This adhesion dominates acceleration at low weights, which is why performance cars tend to have larger tires; see dragsters for the logical conclusion. Without that adhesive quality, performance cars would prefer smaller tires for reduced air resistance.

The classical coefficient of friction model is one of the "spherical frictionless cows in a vacuum" white lies we tell students to keep them from being overwhelmed by the hairiest differential equations imaginable.


Yes and no. Technically that results in UB for many inputs on platforms where char is signed.


>Higher accuracy isn't the only reason, you'll get chronic wrist pain within a couple of months if the mouse movement comes from your wrist.

Or you could just move the mouse with your fingers. Other positives include requiring much less space and not looking silly flailing around. (My preference is maybe 1–2 cm corner-to-corner, with no wrist related issues a couple decades in; and obviously, mouse acceleration is a great idea.)


> to avoid things exploding because a zero length variable might exist is a stupid and ugly hack.

That has less to do with POSIX compliance than with working around quirks of less-compliant historical implementations. Also, I think you'd have to dig pretty deep to find a test implementation that doesn't support zero-length arguments; you'd be safe as long as you quote your variables.

The actual problem of ambiguity arises from variables that are also syntax, like '(' or '!', when using compound expressions (which you should avoid anyway).

  # this is bad (also obsolete)
  [ "$var" != foo -a "$var" != bar ]
  # this is good (and fully POSIX compliant)
  [ "$var" != foo ] && [ "$var" != bar ]
  # you probably don't need to care for systems where this was necessary
  [ "X$var" != Xfoo ] && [ "X$var" != Xbar ]


>10 years later, copy-paste is still a horrible user experience. Different apps use different clipboards :scream: Even for copying these commands from Firefox to the terminal with keyboard shortcuts (CTRL INSERT, SHIFT INSERT, as CTRL C has a different meaning in terminals), you'll need copy-paste fixed already.

IMO this is way overblown. Which applications these days don't use the primary and clipboard selections in a consistent way? (FTR, I very much like them separate; I would hate to have my clipboard contents change simply by selecting some text.) Middle click is not a shortcut for Ctrl+V!

Ctrl-C is indeed used to send SIGINT to the foreground process group in terminals, and thus in terminal emulators, so the established alternative there is to replace Ctrl+C/V copy/paste with Ctrl+Shift+C/V (I think that's how even cmd.exe works these days). Copying and sending a SIGINT would hardly be a useful default.


Highlight, then paste with the middle mouse button (on most mice you can click the scroll wheel to get the middle button). It's quite fast.

When I switch to a Mac at work I miss this functionality (though middle-button paste works in the Mac terminal).

And honestly, copy/paste between JetBrains software and X Windows isn't working great on all Mac apps either.


In fairness to the author, if you're used to cmd+c and are switching to Linux (or Windows for that matter), the ctrl+c / ctrl+shift+c could be a little jarring.

I say this because I have ctrl(+shift)+c committed into muscle memory and find it jarring to use cmd+c


Is there a way to configure a <Super> or <Meta>-C/-V on linux to be copy/paste in a halfway compatible way? I often struggle too with my <Cmd>-C/-V muscle memory when using linux.
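One way (assuming X11) is to bind the Super key combinations to synthesized Ctrl shortcuts via xbindkeys and xdotool; a sketch of a ~/.xbindkeysrc, with the caveat that it only helps in apps that use Ctrl+C/V (terminals would want ctrl+shift+c/v instead):

```shell
# ~/.xbindkeysrc -- remap Super+C/V to Ctrl+C/V (X11 only; sketch)
"xdotool key --clearmodifiers ctrl+c"
  Mod4 + c
"xdotool key --clearmodifiers ctrl+v"
  Mod4 + v
```

Mod4 is the Super/"Windows" key in X terminology; run xbindkeys at session start for the bindings to take effect.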



>yes some compilers don't give warnings by default for implicit functions

Broken or ancient compilers. That's as much C's fault as it's Intel's fault that Windows 10 doesn't run on a Pentium MMX. Implicit function declarations are not part of C; they were removed back in 1999!


I agree to some extent (I'd also add that undisciplined and apathetic programmers are an even bigger problem).

I have to disagree about implicit functions no longer being part of C. I've yet to see one generate an error; errors are optional here, and frequently not emitted. That includes brand-new, state-of-the-art compilers, even gcc at its defaults. Of course, not even emitting a warning is bananas.

I confirmed this with the following snippet:

  // fail.c
  int fail = 0;

  // main.c
  int main(void)
  {
      fail();
      return 0;
  }

  $ gcc --std=c99 -g main.c fail.c -o main.exe && gdb main.exe -ex run
  # ... cmd line junk; a warning was emitted for implicit function "fail"
  Program received signal SIGSEGV, Segmentation fault.
  0x00407020 in fail ()


>I have to disagree about implicit functions not being a part of C anymore. I've yet to see one generate an error;

This is not really a debatable matter. There's an international standard for what is, and what is not, part of the C language.

Errors as you seem to understand them are optional for everything except the #error preprocessor directive. For any other invalid C program, only a diagnostic (like the warning you got) is required, and a conforming implementation is free to finish translating the invalid translation unit. I don't see why that would be an issue, as it's very easy to turn those warnings into errors if wanted.

