Consider the idiomatic way of iterating backward through an array:
for (i = n; i-- > 0;)
{ /* operate on a[i] */ }
Converting the i-- into a statement at the start of the block makes it less clear that it's part of the iteration idiom rather than an ad hoc adjustment specific to this particular logic. There are other examples, but they're either more involved or the statementification is less obviously wrong.
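For contrast, the statement form would be something like:

for (i = n; i > 0;)
{ i--; /* operate on a[i] */ }

Same iterations, but the decrement now reads like part of this loop's particular logic rather than the idiom.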
It seems sensible to always just use signed values for indices. Indices are difference types, which should include negative values so that you can subtract two indices and get a sane delta. The range of signed values seems 'big enough.'
Umm, no? Indices are ordinals[0], forming the canonical/nominal well-ordering of a collection such as an array.
> an ordinal number, or ordinal, is one generalization of the concept of a natural number that is used to describe a way to arrange a (possibly infinite) collection of objects in order, one after another. [...] Ordinal numbers are thus the "labels" needed to arrange collections of objects in order.
In C an index is a difference that you add to a pointer to get a pointer. `a[i]` is `*(a + i)`. Given two indices `i` and `j`, you want `i - j` to be such that `a[j + (i - j)]` is `a[i]`, and it then makes sense to me that `i - j` is signed. The expression works out whether they are signed or unsigned, but just in terms of their interpretation on the part of a user (e.g. "oh, this is 2 elements before because it says -2") or so that comparisons like `i < j` are equivalent to `i - j < 0` and so on. That's why it's always made sense to me to use `ptrdiff_t` (or just `int`) for an index, vs. using `size_t`.
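A tiny sketch of that interpretation (values arbitrary, nothing beyond standard C):

#include <stddef.h>
#include <stdio.h>

int main(void) {
    int a[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    ptrdiff_t i = 2, j = 4;
    ptrdiff_t d = i - j;             /* -2: "two elements before" */
    printf("%d\n", a[j + d]);        /* same as a[i]: prints 12 */
    printf("%td\n", &a[i] - &a[j]);  /* pointer subtraction is already signed: -2 */
    return 0;
}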
ptrdiff_t exists for subtraction between pointers that produce negative values. But how many times have you ever needed to subtract p and q where p represents an array element at a higher index than q? For that matter, how many times have you ever needed to add a negative integer to a pointer?
In C an object can be larger than PTRDIFF_MAX, a real possibility in modern 32-bit environments. (Some libcs have been modified to fail malloc requests that large, but mmap can suffice.) Because pointer subtraction produces a ptrdiff_t, the expression &a[n] - a can have undefined behavior where n > PTRDIFF_MAX. But a + n is well defined for all positive n (signed or unsigned) as long as the size of a is >= n.
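A sketch of that asymmetry. This assumes a 32-bit environment where the oversized mmap actually succeeds; the point is exactly that the last line is the one the standard doesn't define:

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

int main(void) {
    size_t n = (size_t)PTRDIFF_MAX + 2;   /* an object size > PTRDIFF_MAX */
    char *a = mmap(NULL, n, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (a == MAP_FAILED) return 1;
    char *end = a + n;        /* well defined: stays within (one past) the object */
    ptrdiff_t d = end - a;    /* undefined: the result can't fit in ptrdiff_t */
    (void)d;
    return 0;
}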
There's an asymmetry between pointer-pointer arithmetic and pointer-integer arithmetic; they behave differently and have different semantics. Pointers are a powerful concept, but like most powerful concepts the abstraction can leak and produce aberrations. I realize opinions vary on whether to prefer signed vs unsigned indices and object sizes (IME, the camps tend to split into C vs C++ programmers), but the choice shouldn't be predicated on the semantics of C pointers, because those semantics alone don't favor one over the other.
Negative offsets are often used to access fields in a parent struct when all you have is a pointer to one of its fields, for example to implement garbage collection or a string type.
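The usual shape of the trick is a hand-rolled container_of (offsetof is standard C; the struct here is a made-up example):

#include <stddef.h>
#include <stdio.h>

/* recover a pointer to the parent struct from a pointer to one of its fields */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct gc_string {
    unsigned refcount;   /* header the collector cares about */
    char data[16];       /* the field user code actually holds */
};

int main(void) {
    struct gc_string s = { 1, "hello" };
    char *field = s.data;   /* all we have is the field pointer... */
    struct gc_string *parent = container_of(field, struct gc_string, data);
    printf("%u %s\n", parent->refcount, parent->data);   /* ...and we're back */
    return 0;
}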
But p - 2 (subtracting a constant) is not the same as p + i where i happens to be negative, and it's not clear in your example whether the former suffices or the latter is required. I can definitely imagine examples where the latter is required--certainly C clearly accommodates this usage--but IME it's nonetheless a rare scenario and not something that could, alone, justify always using signed offsets. Pointers are intrinsically neither signed nor unsigned; it's how you use them that matters.
Nope. But I do know of at least one implementation where it's not present at all—msvcrt. ssize_t isn't specified in the C standard, it's part of POSIX. ptrdiff_t is standard.
The two-liner is actually the one which is simpler and more direct, as it requires less knowledge of operator precedence rules. The one-liner and two-liner compile to the same number of instructions, so I don't see how either "avoids inconsistent state".
Many expert-level C programmers tend towards one-liners. Here's an example from the original "Red book":
> The one-liner and two-liner compile to the same number of instructions, so I don't see how either "avoids inconsistent state".
It's not about performance, or thread safety, or anything like that; it's about having a coherent mental model of the code. A statement should, if possible, represent a single, complete operation. Invariants should not be violated by a statement, with respect to its environment. (This is more true for 'push' than 'pop'.) One way of solving that is to bundle the 'push' and 'pop' operations up into functions; someone else in this thread did that. But why bother with the mental overhead of a function call when you could just represent the operation directly? To be sure, there are cases where the abstraction is warranted, but a two- or three-line stack operation isn't abstraction, it's just indirection.
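For concreteness, the two shapes side by side; a minimal sketch, with the stack-pointer convention and the push/pop names assumed from the thread rather than quoted from anyone's actual code:

#include <stdio.h>

int data[64];
int *stack = data;   /* convention: points at the current top element */

/* the wrapped version someone posted upthread (names assumed here) */
void push(int v) { *++stack = v; }
int  pop(void)   { return *stack--; }

int main(void) {
    *++stack = 42;              /* push as one statement: no intermediate state */
    printf("%d\n", *stack--);   /* pop as one statement */
    push(7);
    printf("%d\n", pop());      /* same semantics, one call deep */
    return 0;
}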
> For someone who doesn't have the operator precedence rules memorized, it isn't clear whether the above code means [snipped] or [snipped]
> The two-liner [...] requires less knowledge of operator precedence rules
It's not operator precedence—that's a separate issue; despite having implemented C operator precedence, I don't know all of it by heart—but simply the behaviour of pre- and post-increment/decrement operations. It's even mnemonic—when the increment symbol goes before the thing being incremented, the increment happens first; else after—but even if not, it's a fairly basic language feature.
Even beyond that, though, it's an idiom. Code is not written in a vacuum. Patterns of pre- and post-increment fall into common use over time and become part of an established lexicon which is not specified anywhere. Natural language works the same way. Nothing wrong with that.
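The mnemonic in runnable form:

#include <stdio.h>

int main(void) {
    int i = 5;
    int a = i++;   /* post: a gets the old value, 5; i becomes 6 afterwards */
    int b = ++i;   /* pre: i is incremented to 7 first, so b gets 7 */
    printf("%d %d %d\n", a, b, i);   /* 5 7 7 */
    return 0;
}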
> It's not operator precedence—that's a separate issue
> It's even mnemonic—when the increment symbol goes before the thing being incremented, the increment happens first; else after—but even if not, it's a fairly basic language feature.
I think you missed the issue.
This is 100% about operator precedence, and has nothing to do with the decrement operator being in front of or behind the variable.
Right, yes. I got confused by your example, because the example is definitely about pre- vs post-increment. My point about idioms still stands, though.
> (* stack)-- evaluates to 22, while * (stack--) evaluates to 52.
Actually, (* stack)-- evaluates to 23, but changes *stack to 22 :)
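For anyone reading without the original snippet, here's a state consistent with those numbers (the array contents are reconstructed, not quoted):

#include <stdio.h>

int main(void) {
    int data[2] = {23, 52};
    int *stack = &data[1];       /* top of a tiny stack: *stack == 52 */

    printf("%d\n", *stack--);    /* postfix -- binds tighter than unary *:
                                    prints 52, then moves the pointer down */
    printf("%d\n", (*stack)--);  /* parens force the other grouping: prints 23,
                                    then decrements the element itself to 22 */
    printf("%d\n", *stack);      /* 22; the pointer stayed put this time */
    return 0;
}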
Saving characters on spacing is a terrible thing to do. In fact, that jumble is missing a zero on the equality, which is made less evident because the characters are not spaced in a way that would make the mistake obvious.
For one, it’s three and two lines for what are two logical operations. I assume the “inconsistent state” is the time between the lines where the stack is not truly in the right state; many people prefer to preserve their invariants as much as possible.
The use of that construct is mainly a stylistic choice. On any compiler from this millennium there should be no difference in the code that it produces.
Yep, so if we're going with style, I'm very happy with the functions dashed off there. Nobody will confuse those even when very, very tired (which has a similar effect on the brain to being drunk). There is zero difference in the generated output.
Calling those functions tells you exactly what they are and what they do. Vertical space is not an issue at all with 3 line functions.
Relying on post-increment? Make sure it's a one line block that is totally unbraced with only single letter variable names if you do it because otherwise it's just faux-macho C and that's /weak/.
> Make sure it's a one line block that is totally unbraced with only single letter variable names if you do it because otherwise it's just faux-macho C and that's /weak/.
I think you're projecting. The point being made was that when you're writing a simple stack (as you often might do in C, since the standard library and the language itself conspire against providing you one) and you don't have the overhead to write multiple functions to wrap it up (vertical space is an issue when you make more than one of these–trust me, I used to write Java and everything about it was just a papercut in verbosity), the post- and pre-increment versions are concise, idiomatic, and–to be honest–more clear simply because they use the operators in the way that they are meant to be used. I can glance at them and see: OK, this one gives me whatever the stack is pointing to and then makes it point to the next element; this one first moves the pointer to the next element (which is free) and sets it. All in one line. There's nothing to show off here, this is just how you write C; those operators exist for exactly this purpose (and IMO single-letter variable names are generally only a good idea in the smallest of scopes, and I personally use braces even when optional).
Sorry, no. That's not for you in particular; that's just a general comment on macho C, which I think we've all seen.
int abc(int a, int b, int c)
{
}
I can do post-increment. I learned C the macho way. We all still have to read that crap. Now I know better when I'm writing it. I strongly disagree that
a = *stack--;
*++stack = b;
is better in any way beyond "I'm a macho C guy" than the equivalent pop()/push() function calls.
If we're being serious about a stack you really /need/ to access it through functions so you can switch instrumentation on and off, e.g. bounds checks & backtrace on failure, poisoning, etc.
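A minimal sketch of what that buys you (the assert-based checks and the poison value are illustrative choices, not a prescription):

#include <assert.h>   /* these checks vanish under -DNDEBUG */
#include <stdio.h>

#define STACK_CAP 64
#define POISON    0xDEADBEEFu

static unsigned stack_data[STACK_CAP];
static unsigned *sp = stack_data;   /* points at the next free slot */

static void push(unsigned v) {
    assert(sp < stack_data + STACK_CAP && "stack overflow");
    *sp++ = v;
}

static unsigned pop(void) {
    assert(sp > stack_data && "stack underflow");
    unsigned v = *--sp;
    *sp = POISON;   /* poison freed slots so stale reads stand out */
    return v;
}

int main(void) {
    push(1);
    push(2);
    printf("%u\n", pop());   /* 2 */
    printf("%u\n", pop());   /* 1 */
    return 0;
}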
But this is as much beside the point under discussion as the global pointers you raise.
Post-increment is an artefact from PDP-11 assembler and maps to a single instruction there. That's where it came from quite directly. It's completely unnecessary. Most modern languages find it useless enough that they remove it. Python gets along fine without it, relying on +=, for example. (Although some do repeat C's mistakes when basing their syntax on C, e.g. the unbraced single-line block, which serves only to add a non-zero probability of introducing a future bug, with the benefit of precisely nothing... Hi Walter! Larry Wall cops flak for Perl syntax but he did not copy that.)
Post-increment is hardly the end of the world; it just isn't useful. It doesn't help readability. It can harm it. As a question of taste I find it lacking.
But hey, everyone else uses it, and Duff's device is fun to read, so go with them, knock yourself out.
I love music (and am a musician), but I disagree that good music is scarce. Part of the issue here (among many others) is that music is fairly "evergreen", and certainly much more so than apps. Zillions of people are still listening to the Beatles and Bob Dylan (two examples of "good music") more than half a century later. I worry that the ever increasing catalog of recorded music is making it ever harder to gain mindshare as a musician.
I frequently think just before committing/pushing, "Time to add some comments!" That doesn't mean the code is fully finished, but it does provide a clear point in time to add them (when needed).
I have a ritual of watching through my queue of YouTube videos (on many topics) while I prepare and eat breakfast each morning. It works well. I've been able to churn through so much useful content over the years that I wouldn't normally have watched.
I have thought about this phenomenon with regard to the internet a lot. Online it's easy to be exposed to videos and other content produced by people who are literally among the best in the world at anything. This means we start to measure ourselves against the most unforgiving yardstick imaginable, which makes being a noob (or even "normal") even more painful.
No matter what you do, for anything you put effort into, you are probably above average at it, when you include not just everyone who tries it but also everyone who hasn't even tried.
You can be the worst person at your job and still do a job worth getting paid for.
I have started doing this on days I go to the gym (3x per week) and anecdotally it has positive effects on how I feel mentally and physically. (And these effects have lasted after about 6 months of this practice.)
I actually ran into a bug recently while implementing my first raytracer, where the point calculated from the sphere-intersect test would just occasionally end up inside the sphere due to floating point imprecision, so the diffuse sample rays would have their origins completely in the dark, leading to randomly black pixels. Solved it by bumping every intersection out by 0.01 in the direction of its normal.
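In sketch form, the fix looked something like this (vec3 helpers simplified, names hypothetical):

#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static vec3 add(vec3 a, vec3 b)     { return (vec3){a.x + b.x, a.y + b.y, a.z + b.z}; }
static vec3 scale(vec3 v, double s) { return (vec3){v.x * s, v.y * s, v.z * s}; }

#define BUMP 0.01   /* nudge past the floating-point noise at the surface */

/* offset a hit point along the outward normal so secondary rays
   start just outside the sphere instead of just inside it */
static vec3 bump_origin(vec3 hit, vec3 normal) {
    return add(hit, scale(normal, BUMP));
}

int main(void) {
    vec3 hit = {0.0, 0.0, 1.0};   /* e.g. a point on a unit sphere at the origin */
    vec3 n   = {0.0, 0.0, 1.0};   /* outward normal there */
    vec3 o   = bump_origin(hit, n);
    printf("%f %f %f\n", o.x, o.y, o.z);   /* 0.000000 0.000000 1.010000 */
    return 0;
}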
And then of course there have been several other "x.abs() < 0.01" cases for various purposes. So I could definitely see that being an interesting experiment.
That's really interesting - hadn't thought of that before. To fix that, would you be able to compare the squared magnitude against the squared radius and just bump the borderline cases, or is it more efficient without the extra branching?
I just did it across the board; since the error is in the floating-point noise I don't know if I'd even trust a comparison on that. Plus, the discrepancy between "bumped" and "unbumped" samples might cause some visible artifacts.
The same controller pose can correspond to different arm positions, so VR games/apps that do IK are making a guess that is generally going to be somewhat inaccurate. I do like the implementation in Lone Echo, though.
It's also worth noting that it's possible to "just want" something for reasons that one isn't consciously aware of (e.g., feeling physically attracted to markers of health and fertility).
There's nothing special about a job! If anything, I'd believe that someone who had the drive to do challenging activities on their own (and engage socially with others) is doing more for their mind and body than someone clocking in and out at a boring job.