Surely this changes depending on the individual giving/receiving? Regardless of intent you have people who would prize both.
I've never worked in a place that had them, but I do know an individual who views it as a source of pride to collect as many peer bonuses as possible. According to him it's not about the money: the bonuses are quantifiable, which helps ease his impostor syndrome, and that quantifiability makes them feel like the only "real" feedback he gets, rendering any soft feedback from his manager worthless in comparison.
Not saying that's common, just that it's one of what must be many, many interpretations of the incentive structure.
Does anyone have an example of a codebase that uses Design By Contract to an extreme extent? You really only get a sense of the power of patterns like this when you see that power abused. I've greatly enjoyed writing 2 very small codebases in that style...but that doesn't mean I did it well or that it'd hold up after years of maintenance from a revolving door of developers.
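To give a flavor of what I mean by the style, here's DbC in miniature, a toy sketch of my own (the `isqrt` example is hypothetical, not from either of those codebases), using plain assertions in Rust as contracts at function boundaries; full contract systems like Eiffel's add class invariants and contract inheritance on top of this:

```rust
// Toy Design-by-Contract sketch (my own example, not from any real codebase):
// contracts expressed as assertions at function boundaries.

/// Integer square root.
/// Precondition: none needed here (the unsigned type already rules out n < 0).
/// Postcondition: r*r <= n < (r+1)*(r+1).
fn isqrt(n: u64) -> u64 {
    let r = (n as f64).sqrt() as u64;
    // Postcondition checks: callers can rely on these holding,
    // and any internal bug trips an assertion instead of corrupting state.
    assert!(r * r <= n, "postcondition violated: r*r <= n");
    assert!(n < (r + 1) * (r + 1), "postcondition violated: n < (r+1)^2");
    r
}

fn main() {
    assert_eq!(isqrt(0), 0);
    assert_eq!(isqrt(10), 3);
    assert_eq!(isqrt(16), 4);
    println!("contracts held");
}
```

The appeal, at least in my small experiments, is that the contract documents the function and polices it at the same time.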
Since this thread is very general, I'd like to specifically request anything that'd help one dive into Model Theory. Everything I've found labeled introductory is very dense, and I feel like I'm missing some prereqs it assumes. It's not easy to know what those are though.
Are you familiar with a general treatment of first order logic, including the completeness theorem? If not, I suggest starting there, for which I'd recommend:
- Chiswell & Hodges, "Mathematical Logic" - This is very clear and careful, and motivates each introduced concept well, but it moves somewhat slowly (introducing FOL in three stages: first propositional logic, then quantifier-free FOL, then full FOL).
- Leary & Kristiansen, "A Friendly Introduction to Mathematical Logic" (available online for free) - This moves somewhat faster, skips "lower" logics such as propositional logic, and uses a different proof system. If you read up to Löwenheim-Skolem, you'll already have a little bit of model theory (the rest of the book is more about computability and logic).
If you've already done FOL, I would recommend:
- Kirby, "An Invitation to Model Theory". This doesn't presuppose much more than FOL (and even recapitulates it briefly) and some basic familiarity with undergraduate math concepts (e.g. groups, fields, vector spaces) and explains everything very carefully. I also think it has great exercises.
Introductory model theory has no particular prerequisites beyond elementary set theory and logic, but it does demand a fair amount of mathematical maturity. If this is your first exposure to higher math, you're probably better off starting with another topic. Real analysis or group theory would be the traditional choices.
I guess one question I'd have is why you think your brain is "operating properly". It's definitely perceiving and constructing a narrative out of those perceptions, but that happens differently for every brain and it really doesn't seem like there's a perfect version of it that's "correct".
If you're willing to accept that you can't really rank or judge modes of mental experience on some kind of non-relative scale (which would require access to a "reality" outside yourself), then it shouldn't be too much of a leap to say that altering how your brain functions, within tolerable bounds, could be an optimization of your experience, or simply equal but different.
You're still not seeing the contradiction between making truth claims and having beliefs and the skeptical position you're entertaining. By your own standards, I could ask how you know there's something called a brain, that it has something to do with perception, etc. Maybe you hallucinated the brain? Maybe it's just some weird belief you have? Maybe hallucinations aren't a thing? How do you know what other brains do or don't do differently? And why can't some of them be wrong?
Attaching doubt to things "just because" isn't rational and cannot be resolved rationally precisely because such doubts are not rationally motivated. If I say to you "I doubt that you are here", for no reason other than some arbitrary skepticism about my perceptual faculties, then there is no way that that doubt can rationally be resolved. The very idea of hallucination presumes a normative perception. That we can know that we can misperceive or be subject to illusions itself presumes that we can tell the difference. Otherwise, we are just positing idle and detached possibilities while tacitly, and paradoxically, drawing on various convictions about the real.
There's no truth claims. The point is that under the influence of psychedelics, you realize you can't know the truth.
I understand you're looking at it from a scientific point of view, but the discussion is not scientific. Consciousness is probably the hardest body of knowledge to integrate with science. Nobody knows where consciousness comes from. Nobody knows how to measure it.
I can tell you "I'm conscious and aware", but there's no way I can prove to you I'm not a philosophical zombie (someone that acts like it's conscious, but isn't). Currently, there's no way to measure consciousness.
>By your own standards, I could ask how you know there's something called a brain, that it has something to do with perception, etc. Maybe you hallucinated the brain? Maybe it's just some weird belief you have? Maybe hallucinations aren't a thing? How do you know what other brains do or don't do differently? And why can't some of them be wrong?
I think you nailed it. I don't know whether there's a brain, or if there is something else, and this something else is hallucinating this reality in which there is a brain, or maybe something else entirely. Sure, we can make scientific claims about things within the bounds of our perception, but if you try to go beyond that, you're on your own.
>I can tell you: "I'm conscious and aware", but there's no way I can prove to you I'm not a philosophical zombie (someone that acts like it's conscious, but isn't).
You can't prove that to yourself either, because a zombie has the same thoughts as you. You can't differentiate even subjectively whether you're a zombie or not.
Interesting idea, but I'm not sure I follow completely. Philosophical zombies may have the same "thoughts", but they do not have qualia; they act like they do, but they don't.
As for me, I'm pretty sure I have qualia. I've been watching this movie called "my life" ever since I was born. If I were to be a philosophical zombie, it all would have passed in the dark.
I think the phrase "I think, therefore I am" is the essence here. A philosophical zombie does not "think". It looks like it does, but that's just the deterministic result of neurochemical events going on in a brain that lives in the dark; something like a neural network, for instance. They can exhibit thought-like behavior without experiencing anything (as far as we know).
If a zombie could detect that it doesn't have thoughts, it would report it. This can be done with reflection. If a zombie can't do reflection, that would be an observable functional difference from a human, which is not allowed by definition. Therefore a zombie knows it has thoughts the same way a human does. A zombie only lacks qualia on the assumption that it's possible to think without qualia.
Did you reply to the right comment? I mean, you can't prove you're not a zombie; thoughts don't help with this, because a zombie has all the same thoughts.
I'm a single person who isn't looking to start a family, so I don't really use most of the money I make. It'd give me time to work on my more esoteric/mathy/experimental programming projects that I enjoy. At this point in my career getting job offers is pretty easy, so I wouldn't take a pay cut for no reason...but trading back time sounds super fair.
Of course, I'd have to talk to the company to make sure we agree on what the boundaries and expectations are. Just like with the current 5-day setup, ambiguity can easily be taken advantage of, or lead to unexpected outcomes for both parties. That's not a reason to be hesitant; it's just a reason to be explicit and make sure everyone discusses it maturely and upfront from the outset.
Like others, I've worked at a company that had Fridays off in the summer, and it really didn't hinder productivity at all. We just planned accordingly.
Does it work if you run `cargo clean` and then `cargo clippy` again? Clippy runs its lints in an early pass of the compiler/checker. Many IDEs/editors will automatically run `cargo check` under the hood to grab errors. Then when you run `cargo clippy`, that part of the compilation is already cached, so clippy doesn't give you any output :(
To my knowledge (it's been a while since I looked) fixing this behavior is blocked on cargo stabilizing something and has been for literal years.
That point of frustration aside, it's worth it...Clippy is an absolutely amazing piece of software. Both for pedagogy and normal development.
EDIT: Just dug up the issue. If you're on nightly you can use `cargo clippy -Z unstable-options` to avoid the clean/rebuild. Hopefully stuff gets stabilized soon. Here's the issue for reference: https://github.com/rust-lang/rust-clippy/issues/4612
Instead of `cargo clean`, `touch src/main.rs` or `touch src/lib.rs` (or touching any source file, which bumps its last-modified date) will have the same effect. That's what I've been using.
I guess one question is whether you can tell the difference. Without reading the book from the youtuber, how would you know you wouldn't like it, and that it's not just prejudice getting in the way of giving it a chance? If we can't tell, then I'm not sure whether the site being able to makes any difference.
I think it would make a difference, in that if the user is somewhat convinced that the site is able to handle that issue then they will be more willing to take a leap of faith.
My point being, I'm perfectly capable of finding books I'd never read by myself - I just have to enter a library or bookshop and grab the first title that makes me frown.
There's only value in giving me books outside my comfort zone if there's an implication that I might like them or get something out of them with better-than-chance likelihood, isn't there?