Hacker News

Too many problems with unsigned. Stick with signed.

https://google-styleguide.googlecode.com/svn/trunk/cppguide....




The "Google says so" argument is addressed by the author in [1]: "Overall their argument for avoiding unsigned integers and sample code seems surprisingly weak for a company of the caliber of Google. The most important reason to use unsigned instead of signed is not self-documentation, it is to avoid undefined behaviour."

[1] http://blog.robertelder.org/signed-or-unsigned/


The book "Expert C Programming" also says to stick with signed and reserve unsigned for bitfields or binary masks. This post says unsigned for bitwise or modulo operations, so basically: use unsigned when you expect the value to be bits in a machine rather than a number.


That's funny, since signed over/underflow is undefined behaviour, while unsigned wraparound is well defined.


Just because it's defined doesn't mean it's expected. Unsigned overflow can happen with seemingly innocent subtraction of small values.


>You should assume that an int is at least 32 bits, but don't assume that it has more than 32 bits.

I thought the spec[0][1] only guarantees int to be at least 16 bits? Am I missing something here?

[0] http://www.cplusplus.com/doc/tutorial/variables/

[1] http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf


True, but in practice about the only place you'd find a 16-bit int today is Arduino, and only because it is based on an archaic architecture.

All ARM and Intel chips have 32-bit ints.


Or rather, too many problems with C/C++'s treatment of unsigned.



