cballard's comments | Hacker News

Maybe this is not really an issue for homosexual couples, but I would never have considered Tinder because, unlike OkCupid, there's no way to filter out Republican-hates-the-gays-hates-abortions types. The question/answer filtering and profile text were the most important part of online dating to me.


It's funny you say this, because I am a Republican pro-life lesbian. I have used both OkCupid and Tinder in the past, and I 1. didn't find OkCupid to be terribly good for filtering out those with worldviews I disagreed with, and 2. liked Tinder more because it forced that initial conversation with someone, which I would rather have, as I am interested in their mind just as much as their looks. Best of both worlds. YMMV.


You need to figure out the signaling, then. It isn't that most people would never settle with a Republican pro-lifer; it's just that being a Republican/pro-lifer is usually a good indicator of other incompatibilities.

Take yourself, for instance: you're better off filtering out all the Republicans/pro-lifers because of the baggage they'll bring with them.

Usually the signaling has evolved to be different. You don't say Republican, you say Libertarian (considering gay rights is one of the topics of difference).


I'm registered as a Republican, and in theory, would rather date a Republican than a Libertarian. In unsurprising news, every woman I have ever dated has been a Democrat, and my wife is a Democrat from a social-democratic country. She will be joining me in voting for Gary Johnson in the upcoming presidential election, but is voting for Hillary in her party's primary.

It's funny, I used to look at the types of signaling you reference when I was dating (do we like the same things, have the same outlook, etc). I have found my greatest happiness with my wife, and the only two signals there I used were her work ethic and intellectual capacity. Well. And she's the hottest woman I've ever seen. All three, and I couldn't wait to put a ring on it.

No, she doesn't know my HN username.


> Republican ... lesbian

As someone who doesn't like to be used as a political boogeyman or have their rights dangled in front of them when it's convenient, I will never understand this.


I go back to the old-school, small government intellectual Republicanism (this may sound crazy to many of you, but I assure you, at one point it existed). I realize that this is currently not where the party is (see: Trump, Cruz, Jindal, Carson, Palin, et al), but I believe in trying to change it from the inside.

As it stands, I vote libertarian a lot.


Current and future quality of life for myself and others trumps paying lip service to ideals that are never realized.

Given the past ~30 years, saying that is the equivalent of saying you're a Nazi Jew because the Marxist in you likes some principles of National Socialism. I still don't get it.


If she didn't qualify her comment with the statement about often voting libertarian, you'd have a point. But the political spectrum she fits into is currently covered by the Libertarian (big-L) and Republican parties (specifically, a minority libertarian portion of the Republican Party). A pro-life position would push someone back towards the Republican party, along with the fact that the Libertarian Party isn't going to win many major elections any time soon (maybe if Trump gets the nomination and the Republican party finally fractures?).


Bingo. My academic studies of political systems, US politics, game theory and economics lead me to believe that a true free market economy is the causa sine qua non for a more tolerant and free society.

I also believe that one cannot legislate social mores, and that the government does not, can not, and should not grant or even 'uphold' rights. Further, I believe that free markets CAN and HAVE caused (on net) greater social change than any legislated social change, and that on an individual level, free markets are the best channel for allowing minorities/oppressed individuals to champion their cause while simultaneously avoiding and stamping out oppression.

Ergo, I dedicate myself (and my vote) to the goal of an entirely free market. In essence, I swipe right for the Austrian School. (Go ahead and groan).


"Current and future quality of life for myself and others trumps paying lip service to ideals that are never realized."

Agreed. However, I bet we differ on the means by which one should go about achieving/obtaining (I separate those words very deliberately here) a high quality of life for oneself, and the means that will help others. If you're truly interested in a political conversation, and open to learning, my email is in my profile. While the HN community may benefit, we are going on quite the tangent from the main topic. Cheers.


Maybe because the Democrats do the exact same thing, using social issues as political footballs when convenient.


The best way to do that is to actually read their little bio, assuming they put one in there, and look for typical indicators for that kind of alignment: talking about guns/shooting ranges, pictures of them at the shooting range, pictures of them with a dead deer, etc. Finally, if that fails, try just asking them when you have a match and start chatting.

I like the idea of being able to filter people too, and I find I can tell a lot about people from their OKC profile. The problem, however, is that OKC is basically a big waste of time because everyone desirable has moved to Tinder, and the few decent women on OKC don't respond because they get too many messages, and I end up spending way too much time writing messages with zero return. With Tinder, I don't have to waste my precious time 1) reading through a profile to see if this is someone I should spend time writing, and 2) writing a long, thoughtful message, only to get no response. If a woman matches me on Tinder, there's a decent chance she's actually going to respond; probably something like a 33% chance. On OKC, it's probably less than 1%. If that means I have to spend a little more time text-chatting to learn about them because the Tinder profiles are so sparse, that's still a giant time savings for me.


Don't you have to talk to people before you set up a date?


OK Cupid also has questions covering:

- Evolution.

- Whether or not dinosaurs were a thing.

- If the Moon landing happened.

- The relative size of the Earth and the Sun.

- If astrology is scientifically accurate.

... among others, all of which I had answered, with the correct answer marked as "mandatory". If someone answered enough of these with the wrong answer, you wouldn't even see them (and if you did, they'd have a high "enemy" percentage), so you wouldn't waste your time talking to them.


The moon landing thing would be important to me.


I have been on a date with a conspiracy nut, and can confirm that this is something I'd want to filter out if I was still dating and not in a long-term relationship.


They actually instrument the process of setting up your own personal information 'bubble'? 1984 was 30 years ago, so it's probably overdue.


I wouldn't use the term "information bubble" to describe this sort of filtering. It's not as though a person is unaware of those who believe differently regarding topics like evolution or the moon landing, or will never hear those arguments. It's just not necessarily a good idea to try to form a romantic partnership with someone who you don't respect because they believe a position you consider to be "crazy".


"Decent" is being very kind to any language that has a type system where every reference type is implicitly a Maybe (okay, Scala exists).


Go and Scala also allow every pointer to be null, no? Scala has an option type, but so does Java 8. Defining one in C# is easy.

But there is Ceylon, Kotlin, Clojure, whatever. Kotlin uses ? suffixes to define if something is optional. And at least the code interops nicely. You can inherit a huge Java codebase and slowly convert the code over. No such luck with Go. Unless you're converting into C!


C# and Java optionals aren't very useful, since they would be reference types themselves, and thus could be null!

I would submit Rust and Swift as "decent" type systems, if Haskell is the standard for "good". They do nullability correctly, but lack HKTs.


C# has non-nullable value types via the `struct` keyword. And C# in fact already has a defined optional type to handle this; it's called `Nullable`. [0]

[0] https://msdn.microsoft.com/en-us/library/b3h38hb0%28v=vs.110...


In Scala, Option[Whatever] may also be null. The difference is that it's considered "some Java compatibility leftover" and never exploited by any sane piece of code.


GitX is my favorite, despite being quite buggy, because the others I've seen are all way too complicated.

I really just want visual log and visual add -p and GitX nails that.


The author of NSHipster works for Apple now, and typically Apple employees don't blog about work - possibly a policy?


There are so many cabs, and most of them take Apple Pay now, that I've never really seen the point of this or the other one (is there another one? I think there is?).

If I was somewhere where there aren't a lot of cabs I'd probably just use Uber, it's cheaper and cleaner anyways.


In New York, taxis are required to have functioning credit card machines.

http://www.nyc.gov/html/tlc/html/passenger/passenger_creditc...


The issue of taxi drivers lying and saying their machine is broken when it isn't is so rampant they even cover it in the FAQ you linked to!

>What if a driver says the system is not working? The passenger should note the medallion number and go to 311 Online. Drivers are permitted to work with a broken system for up to 48 hours as long as they have reported the problem and are awaiting repair. Almost all (90%) system repairs must be completed within six hours.

And there isn't much you can do about it except threaten to complain and hope the taxi driver relents and lets you use it.


You can also just get out of the cab. I usually throw 'em any cash if I have it on me, but it's not my fault you just drove me to the airport and forgot to mention your card machine doesn't work.


If you call 311, TLC will slap a $300 fine on them, so taking out your phone and doing that will instantly fix the machine.


They are required in lots of places, but people don't always follow the law. It's not surprising to hear the old "the machine is broken, need cash" excuse.


Not sure about the law in other places, but in Boston that's too bad for them. You can just get out of the cab without paying if their machine doesn't work.


I never really understood that. A cab driver told me that once, so I told him that I was leaving without paying. Even if he wasn't lying, I accept no responsibility for their broken equipment.


It's really not an issue in New York. 311/TLC doesn't mess around with this.


This is a bad idea masquerading as a good idea. Before making a pull request (or doing any sort of merge), you should rebase against upstream master (or whatever you're going to push to). However, keeping distinct atomic commits that change one and only one small thing, when possible, is much preferable if bisect or blame is used. If you have broken or poorly written commits, use fixup, reword, squash, etc. in rebase -i.

Using fast-forward (and possibly only allowing fast-forward) is a good idea. Squashing entire pull requests that may change multiple things into a single commit is a very bad idea.
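
In case rebase -i is unfamiliar, here's a minimal sketch of that cleanup (the commit hashes and messages are made up):

  $ git fetch origin
  $ git rebase -i origin/master
  # git then opens a todo list; edit the verbs, save, and quit.
  # 'fixup' melts a commit into the one above it, 'reword' keeps the
  # change but lets you rewrite the message, 'squash' combines commits
  # while keeping both messages:
  pick   1a2b3c4 Add frobnicator interface
  fixup  5d6e7f8 fix typo in interface docs
  reword 9a0b1c2 wip
  squash 3d4e5f6 more frobnicating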


If someone prepares a pull request with a well-structured series of commits, making a logical series of changes, where the project builds and passes tests after each commit, then those commits shouldn't get squashed.

However, I frequently see people adding more commits on top of a pull request to fix typos, or do incremental development, where only the final result builds and passes, but not the intermediate stages, and where the changes are scattered among the commits with no logical grouping. In that case, I'd rather see them squashed and merged than merged in their existing form, and having a button to do that makes it more likely to happen.


The trick is to not squash everything into one giant commit, but to use rebase -i liberally to squash/fixup those typo fix commits where they belong.
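
One low-effort way to do exactly that is git's fixup/autosquash machinery; a sketch, with the target hash made up:

  $ git commit --fixup=1a2b3c4   # mark this commit as a fix for 1a2b3c4
  $ git rebase -i --autosquash origin/master
  # the todo list opens with the fixup! commit already moved under its
  # target and marked 'fixup'; just save and quit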


That's what the author of the pull request should do. But this provides a potentially useful alternative when that doesn't happen.


Plain squashing commits, while still a valid option in very few cases, will likely lead to gigantic commits that are hard to reason about.

I've seen projects where maintainers clean up poor commits before merging them: rebase/squash/reword only what's appropriate.


It's also the case that you lose the code review if you force push to a PR's branch after adding in a typo fix and squashing locally, right?

That's a pretty good reason not to squash till the review is done.


> It's also the case that you lose the code review if you force push to a PR's branch after adding in a typo fix and squashing locally, right?

Not as far as I can tell; I've force-pushed pull request branches many times, and the code reviews seem to stick around. (Perhaps they wouldn't if the code changed more drastically, like files disappearing; I haven't tried that.)


I used to feel the same way (and still do to some degree). However I think the issue is more nuanced. I agree that rebasing beforehand is a good idea. But I can see the value in keeping commits on the master branch corresponding to specific features or bug fixes (which presumably map to PRs).

I think the argument can be made that if you don't feel comfortable performing a squashed merge of a PR, then that PR contains too much work and should be split up. However, I don't think there's an easy rule to decide in either case.


Small PRs are an issue because getting them merged depends on other people, and a PR can't be made dependent on a prior PR.

Let's say we're adding an interface/typeclass/protocol and a concrete implementation. I'd say these should be two separate commits, as they're adding two different things. An interface doesn't require a provided implementation to work. But, if we were to create those as two separate pull requests, it would be more work for the project maintainers, and the initiator wouldn't be able to create the PR for the concrete implementation until the interface PR was merged - the concrete PR can't be added as a dependent PR of the interface one, or something to that effect.

Since you can "compare" almost anything on Github, small commits aren't really an issue, just view a larger-scope comparison to get an idea of the whole PR.

Another way to put this might be that commits are for individual code changes that build up to a pull request, which is a conceptual change?


> and can't be dependent on a prior PR.

This pinpoints the major problem exactly. Without dependencies between PRs there's really no sane way (with this feature enabled) to submit a series of commits while expecting those commits to remain separate.

Oh, and I object to the general sentiment in the responses to your post that seems to value drive-by/inexperienced contributors over the "experts". Yes, we definitely should make things as easy/simple as possible for new contributors, but NOT at the expense of adding a gotcha for expert contributors. The experts are what keep a project going over many years instead of just releasing version upon version of trivial spelling fixes.

(And, btw, the default "merge" option for GitHub PRs also sucks. It should be possible to simply disallow non-FF merges and to force all merges to be FF. EDIT: Interestingly, this seems to be about the only workflow explicitly forbidden by the new rules... unless, of course, you're willing to merge everything manually using your local copy of the repo and pushing from that to GH.)
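
For reference, that manual workflow looks something like this (the PR number is made up); GitHub exposes each pull request's head as a read-only ref:

  $ git fetch origin pull/123/head:pr-123
  $ git checkout master
  $ git merge --ff-only pr-123  # fails unless pr-123 was rebased onto master
  $ git push origin master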


Yeah, that's also the obvious thing missing in the review stage of a pull request: viewing all the content and all the diffs separately but on one page, in a serial order that corresponds to how they will actually show up when you do git log.


This is something that Gerrit supports natively: you can have a Gerrit CL that depends on another CL. It's unfortunate that Github doesn't support any equivalent.


Yup. Happy user of Gerrit here. :)


How does not squashing your commits help the protocol/implementation scenario you described?


You can merge the interface PR into the concrete implementation's PR. You don't have to work off of just one remote branch.
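
A sketch of what that looks like locally (branch names made up):

  $ git checkout concrete-impl
  $ git merge interface  # the impl PR now contains the interface commits too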


> Before making a pull request (or doing any sort of merge), you should rebase against upstream master (or whatever you're going to push to)

See, and maybe this is because I'm just dumb or something, but I have never gotten rebasing to work for me. Ever. Every single time I do it I read at least 3 articles about it so I don't screw something up, I attempt to do it, and ultimately I lose a bunch of work.

I just don't get it. I can write web, mobile and desktop apps and I like to think I'm pretty good at it. But I'm one of those people who constantly has merge commits in their history because for whatever reason I just can't get my head around making rebasing work correctly.

Am I the only one? Sorry for the derail but it's bothering me that I've never gotten this to work correctly and I feel otherwise normally smart. ¯\_(ツ)_/¯


A few tips!

1. Always use the "upstream" branch as your rebase target - "git rebase -i master", or "git rebase -i origin/master". This is almost always what you want, and picking the wrong base is the most common error I've seen when teaching people rebase -i.

2. Use autosquash! https://robots.thoughtbot.com/autosquashing-git-commits. If you have trouble with the text-editor interface you get when you run rebase -i, this will both handle its usage, and in the long run give you some visual examples of how the interface is supposed to be used. If you're really into this, set the config option "rebase.autoSquash true" to avoid the extra command-line flag.

3. If you mess up and realize in the middle, git rebase --abort.

4. Use the reflog after the fact for both finding and undoing mistakes: git diff branchname branchname@{1} to check for unintended code differences, and git reset --hard branchname@{1} to undo the rebase. (See the sketch below for a full session.)
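
Putting tips 3 and 4 together, a sample recovery session (the branch name is made up):

  $ git checkout my-feature
  $ git rebase -i origin/master
  # ...things go sideways mid-rebase...
  $ git rebase --abort                  # back to where you started
  # or, if you only notice after the rebase finished:
  $ git diff my-feature@{1} my-feature  # any unintended code differences?
  $ git reset --hard my-feature@{1}     # undo the whole rebase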


Thanks! I get the feeling I should give up on using a GUI for most of my git usage as doing many of these seems awkward or impossible with the GUI. That's probably part of my problem.


For what it's worth, I pretty much exclusively use git through Eclipse's EGit UI. I do a lot of rebasing, as we use Atlassian's "centralized" workflow. [0] Its interactive rebase interface is pretty good. If you haven't used it, maybe try loading a repo into it and see how you do.

[0] https://www.atlassian.com/git/tutorials/comparing-workflows/...


Interactive rebase works fine for me in Eclipse (EGit). At least on Windows, that's preferable to me over the command line editor.

However, I am squashing very rarely, mainly for commits which correct typos.


Yeah. I'd recommend that. Personally, I found that I didn't really understand git until I did it all from the command line. YMMV, but that's what made it all click for me.


I find SourceTree to be surprisingly effective. Also, try setting Sublime Text to be your core.editor in git.


  $ git checkout master
  $ git pull
  $ git checkout branch-name
  $ git rebase master
If there are merge conflicts, open the affected file(s) and resolve them. Then:

  $ git add filename.ext
  $ git rebase --continue
Finally:

  $ git push origin branch-name
If you've already pushed the branch, use -f. Make sure to always specify the branch name when using that flag!
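
A gentler variant, if your git is new enough, is --force-with-lease, which refuses to overwrite commits on the remote that you haven't fetched yet:

  $ git push --force-with-lease origin branch-name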


For those cases where you have created a fork of a project and are preparing a pull request, would that be something like:

    $ git checkout master
    $ git fetch upstream  # https://help.github.com/articles/syncing-a-fork/
    $ # git merge upstream/master  # <- leaves merge commits in your fork
    $ git checkout branch-name
    $ git rebase upstream/master  # Use rebase instead of merge?
    $ git push -f origin branch-name


I think the advice to rebase runs up against the common practice of pushing your branch as soon as you create it (git-flow and a lot of Jira/Stash integrations work like this). Also, some teams want to see evidence of your commits as you make them, which means pushing as you commit.

If you have a branch and it's already pushed, rebasing just feels kind of funny and can sometimes cause a lot of problems if anyone else has checked it out.

If you have a branch and it's local only, then merging from mainline into your branch and selecting rebase instead of merge is relatively painless.


> ultimately I lose a bunch of work.

One trick that's worked ok for me in a private repo is, before starting to edit the fix-spline-reticulation branch (which has a handful of separate logical changes, fixes discovered midway through a later change that really belong in an earlier change, and temporary debug code that was never meant to go into the product) for publication, to do

    git branch fix-spline-reticulation.0
(or .the-next-sequential-number). Then no matter how badly the "rebase -i master" goes, there's a branch tag pointing at the original state, and

    git branch -D fix-spline-reticulation
    git checkout fix-spline-reticulation.0
    git branch fix-spline-reticulation
will destroy the failed attempt and restore the branch to its earlier state. (Note that if you decide in the middle of the rebase that you're losing, "git rebase --abort" will undo anything you've done so far; you need the backup only if you regret the rebase after you're finished). It also makes it easy to "git diff my-feature.0..my-feature" and confirm that all the changes in the edited history add up to the same as the real history.

Sometimes I do this in the middle of development to move all the changes intended for the product ahead of the temp debug stuff in case I suspect the debug code is causing problems. Keeping the debug code in the dev branch even after the cleanup rebase makes the diff to check the rebase easier (then, of course, the merge should take the commit just before the debug).

Best never to let anything but the cleaned-up branch hit a shared repo.


> See, and maybe this is because I'm just dumb or something, but I have never gotten rebasing to work for me. Ever. Every single time I do it I read at east 3 articles about it so I don't screw something up, I attempt to do it and ultimately I lose a bunch of work.

Rebase takes a little bit of practice, but everyone who's using git owes it to themselves to learn it by heart. It's almost like having superpowers compared to any VCS which doesn't have rebase.

My advice[1] would be to simply create some dummy repository (perhaps just copy an existing repository with some real code) and go through various scenarios described in the git-rebase man page (using some trivial changes). If something blows up, don't worry, you can always just start from scratch.

The key to making rebase work for you is: 1) understanding the underlying model of git[2], and 2) practice, practice, practice. With enough practice you'll get a good feeling for which "type" of rebase works best in a given situation.

[1] In addition to the excellent advice given by others in this thread.

[2] It may look like it's really all based on snapshots of files, but the workflows are definitely mostly centered around patch-based thinking.


I know I'm probably swearing in church, since git is the current de facto standard for version control, but this shows that git's usability is way too low. Why do I have to invest so much time to understand the inner workings of a tool that should just help me collaborate with my coworkers? I've given up on understanding git and use gitflow and the built-in tools in IntelliJ IDEA for all my branching/merging/committing needs.


While I like the git flow model, I find the git flow plugin really useless.

You lose so much of the power that makes git such an awesome tool.

Same with every single front end to git I've ever tried, free or purchased. I always come back to the CLI because it's so much more powerful.

The IntelliJ merge tool is pretty nice though.


Because this is a quite advanced feature, and creating an interface for it is inherently hard. IIRC, the IDEA interface for interactive rebase ends up looking exactly the same as the text-mode one.


Version control is hard, simple as that. Git actually does a great job at keeping simple stuff simple, but if you need some of the more complex stuff... well, I guess you could always go back to pen, paper and a secretary (aka: make someone else do it for you).


  ultimately I lose a bunch of work.
Take a copy of the entire repository before attempting anything potentially destructive.


This is completely unnecessary. Everything git does, git can undo as long as your working tree is clean when you start the rebase. git reflog and git reset are your friends, if you want to "get back to where I was before I started this awful merge".


Or just write down the latest commit (or use git reflog to find it post facto) and if you mess up, do "git reset --hard <commit>" to get back to it.


In tricky situations, I always commit work done. Then I attempt to do potentially harmful work. Note that afaik you can't lose commits in your history (they may be hidden, but reflog to the rescue). If I am very unsure whether something will work as intended, I place a dummy branch (a tag will do as well) onto that safety commit which will make it easier to find it back (in that case you don't need to resort to reflog). I never lost work once committed even when I painted myself into a corner. Note as well that rebase -i will always create a new commit rebased onto the entry commit. Going back to where you started is always possible.
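
A sketch of that safety-bookmark idea (names made up):

  $ git commit -am "WIP before risky surgery"
  $ git branch safety        # bookmark the current commit
  $ git rebase -i master     # do the scary thing
  $ git reset --hard safety  # if it went wrong, jump back
  $ git branch -D safety     # once you're happy, drop the bookmark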


Use `rebase --interactive` so you can have a better idea of what is going on.


I don't blame you. Git has terrible usability.


I think that GitHub's pull-request-based model is fundamentally broken. Gerrit's model, where every commit is quasi-independent (and hence must pass tests) and you can easily edit without force-pushing anything or losing review history, is superior (though not perfect) in almost all cases. (Exception: merging a long-running feature branch where all the commits in the branch have already been reviewed.)

This is GitHub's attempt to solve the problem without really changing anything. It won't really change anything. Since pull requests routinely contain a mixture of both changes that should be squashed (fixups) and changes that should not be squashed (independent changes), this just means that you get to pick your poison.


I used to strongly believe what you do until my company started using Phabricator, which forces the squash workflow on you. It makes your history more useful, not less. The pull request is the appropriate unit of change for software. Make small commits as you develop, then squash them down into a single meaningful change to the behavior of your software.


As a git novice I wonder, doesn't a proper workflow do the same thing? When I submit a feature branch it might have a lot of ugly commits. However, once I merge it to an integration branch there is one nice commit explaining what I did.

When coworkers create pull requests I don't go through all of their commits and changes along the way. I just look at the diff, so I don't see the need for them to squash it first.


Once merged, the history of your feature branch becomes part of the history of the integration branch.

Sounds like you're using GitHub (Enterprise) or something similar where the pull request view shows you all of the changes in a "squashed" fashion.


Yes, Bitbucket, so maybe that explains it.


Sometimes yes, sometimes no. Merge commits are nasty imo and I'm glad we can now forbid them outright, but a full squash isn't always the solution either.

Take a look at this PR for example:

https://github.com/HearthSim/python-unitypack/pull/4

Lots of back & forth. All the commits are related, and the PR is there to land all of those commits at once. I could land some of them right now (as they're safe to land), but keeping them in the PR keeps everything related in the same place (and none of them are required until that last commit lands).

A PR mirrors a "patchset" on mailing lists. You don't always want to squash all of it.

What you do want to avoid is a situation like this:

https://github.com/jleclanche/django-push-notifications/pull...

Where the original author creates their original commit and doesn't know how to --amend + push --force to the PR, and you end up with a ton of commits which you don't want to land all at once.


To clarify, pull requests shouldn't be squashed before review. They should be squashed when they get merged in, like this GitHub feature does. The granular commit history is useful during code review. It is not useful as a future developer trying to evaluate how the behavior of the program changed over time.


Right, sometimes. Pull requests are not necessarily a "unit of change" like you mentioned, though. For example, the first link I gave should not be squashed. But I don't want it to create a merge commit either.

I'm a little underwhelmed with the feature, it looks like it's either "squash everything" or "make a merge commit". There's no option to rebase & merge and/or selectively squash.


Exactly. In order for always-squash to work, then every PR must be the smallest possible atomic change. But often times certain features don't degrade to nice small atomic changes. It's sometimes useful to see that as a development process smell and consider using feature flags and other things to incrementalize the development, but sometimes the cleanest thing to do is just have a series of commits. I don't see a good reason to take this option off the table since source code history management is truly a craft in its own right.

FWIW, I prefer a carefully curated rebase and then merge --no-ff so that you can still see the branch and get at the branch diff trivially, but the history is still linear for practical purposes so bisecting is clean, etc.


Interesting. It's the first time I hear a fairly solid argument in favour of --no-ff. My main argument against it is keeping a clean commit log on github itself (and the default git log), as it's an entry point for new contributors.

Example:

https://github.com/lxde/pcmanfm-qt/commits/master

vs

https://github.com/facebook/react/commits/master


I think your first link should be squashed once the code has been reviewed. When looking back on history, commits are most useful when they're a list of behavior changes in the software. That pull request "exports meshes as OBJ." That's what's useful for future developers, not "add Transform fields." Leaving those in your history makes it harder to work with, not easier. If someone cares about the back and forth that happened within that behavior change, the pull request is always available.


"Add transform fields" may not be a very descriptive commit message, but those fields are fully independent from the OBJ exporter. They are the implementation of unity's Transform class. This implementation was stubbed before, it happens to be needed for an OBJ exporter to go through, but has nothing to do with the exporter itself.


It seems like it'd be nice to have two levels of granularity exposed in views of a source control system's history, basically corresponding to pull requests and commits. So you could drill-down to individual commits as needed, but would normally be able to work at the PR level.


Does "git log --merges" get us there?


I suppose it does, though the tool my org uses doesn't write good descriptions on the merge commits that are created upon merging a PR — they're just like "Merge pull request 190 from …" when I'd prefer them to be named for the changes that are actually in that PR. Good to know that exists; it could be useful. Thanks.


That's the `diff` tab on the PR


But the PR isn't retained (afaik) in the repo's history as a distinct entity (I'm talking about git proper here, not extra tools like GitHub, etc). In the end, git just has commits. [Edit: as prodigal_erik points out, perhaps merge commits are really what I need to be looking at.]


Nope, it does this by default, but doesn't force it.

  $ cat ~/git/ATLAS/.arcconfig
  {
    "project_id" : "ATLAS",
    "repository.callsign" : "ATLAS",
    "conduit_uri" : "https://phabricator.$MYCOMPANY/",
    "arc.land.onto.default" : "develop",
    "immutable_history" : true
  }


Sure, it's a terrible feature to always use. And it's likely to be of little use to contributors who know how to use Git well. But in large open-source projects, often new contributors make a small change that needs a few minor corrections. Eliminating that final back-and-forth ("squash please") is a huge win for maintainers.


Not only maintainers - anyone tracing back through the history to find out what broke their use case.


There's no guarantee that every individual commit of a feature branch is meaningful, or even builds. It also makes the history of the master branch a lot harder to read when it has tons of commits representing the minutiae of the feature's development.


It really depends on each individual's workflow. I tend to use lots of "in progress" commits (each time things are "green"), and as I go, I regularly squash the commits, so the final pull request typically has several commits (where without squashing it would have been a dozen). If I do a feature and a refactor, they are always separate commits; it's easier to review them and bisect if something turns out to go wrong.

Some people might do similar things but they might not assure each commit is green, and they never squash anything (so you end up with non-meaningful commits).

As @3JPLW said, I can see it being useful for open-source maintainers to have the option to squash someone's commits when the change is small but there are many commits (due to review ping-pong, etc.).


There's no guarantee, but there are many benefits to striving for this ("git bisect run", CI test results).
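
Per-commit buildability is what makes unattended bisection practical; a sketch, with the tag and test command made up:

  $ git bisect start HEAD v1.0  # bad revision, then last known good revision
  $ git bisect run make test    # exit code 0 = good, non-zero = bad
  $ git bisect reset            # done; go back to where you started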


If it isn't meaningful, then that is something the review stage should catch.


I use this workflow:

  1. branch off master
  2. work, commit, push, test (on CI server)
  3. decide it's time to ship
  4. rebase -i, push, test (again)
  5. git checkout master && git merge --no-ff feature_branch 
(make the merge commit message a summary of the feature)

master ends up being a list of feature branch commits, bookended by the merge commit which introduced the feature. Getting the squash commit diff is as easy as 'git diff feature_branch_merge^..feature_branch_merge'.


Squashing entire pull requests that change multiple things into a single commit is a bad idea, yes. But uploading and asking for review on such wide-reaching pull requests is a bad idea in the first place.

Using fast-forward without squash is also a bad idea in many cases: the string of commits may contain multiple points that don't actually build or pass tests, even if the final commit in the chain fixes all that. There's no point in landing those broken commits, and doing so will confuse bisection tools.

Fast-forward with squash, and enforcing reasonably sized code reviews as a matter of culture, is the best of all worlds in my opinion.


Why would you want to rewrite your whole work history and change the actual state of the repository at each of your commits? Why don't just merge?


Rebasing seems to clutter the Github PR's commit history and diff with all the commits to master that were made between the time the branch was cut and the time the rebase happens. But it doesn't do that if you merge in master. I never understood this.


It's a bad idea because it's a bad implementation. If it allowed you to select what to squash, defaulting to the behaviour of git rebase -i --autosquash master then it would be a clearly good feature.


> Squashing entire pull requests that may change multiple things into a single commit is a very bad idea.

If changes are too large/complex/disjoint to fix in a single commit then why have them in one PR?


I wonder why they did not add `--ff-only` as an option, like GitLab has.
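
For reference, plain git can enforce this locally:

  $ git merge --ff-only some-branch  # refuse to create a merge commit
  $ git config merge.ff only         # make fast-forward-only the default here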


I'd shorten that to:

> Contrary to what this VC believes, it is NOT okay to expect someone to "work 60 hours a week at Facebook"

Let's get one of these already https://en.wikipedia.org/wiki/Working_Time_Directive


I suspect that tech workers already have that ability for themselves. Don't want to work 60 hours a week at Facebook? Great; don't.

What they don't have is the ability to prevent others from doing it. Personally, I like that and hated working in Germany where I was forced to stop working because of an arbitrary hours limit per week.


While that might apply to some BigCo, there are large parts of tech that would really benefit from a WTD (GameDev, I'm looking at you).


> What they don't have is the ability to prevent others from doing it

Yes they do. They have a law.


Citation needed [that tech workers have the ability to prevent others from working 60 hours per week at Facebook].


Vote for politicians who support such a law. Join/form a union that will campaign for that sort of law. Talk to your friends and family to tell them that such a law should be brought in.


That's the thing. I oppose such a law and prefer the status quo in the US. But others who do believe in it should feel free to lobby for such a law.


To be clear - this is a bike lane on a road with onramps? Uh, what? There aren't even bollards (in the part without an onramp, obviously) or any semblance of protection.


It's the surface street side of the ramp there in the photo. Here's the approx location:

http://www.openstreetmap.org/?mlat=37.51639&mlon=-122.25424#...

The photo is taken looking towards the freeway, showing the offramp.

A typical bike lane in the US is made out of paint, the lack of protection isn't noteworthy.


The road is a surface street -- a 30 or 35mph divided arterial which goes over a freeway. The ramp is an offramp from a freeway. But yes, there is no protection, which is normal.


Non-"blatant spam" email advertising really isn't better. Purchasing a product from almost any non-Amazon company will instantly sign you up for an almost daily torrent of email.

Even worse is when you're automatically signed up for dead trees being sent to your house, because those don't have unsubscribe links.

It's crazy. I already bought the thing! I don't need more things, or else I would have bought them when I bought the original thing. Yet, companies are treating their customers - people who have actually already bought things - in this terrible manner. Maybe we need legislation mandating opt-in?

