I fly SFO <-> JFK very often. VX34 there and VX27 or VX29 back are my rides of choice, since I hate layovers and prefer redeyes. The wifi is passable too (don't try to SSH over it, though).


> terminal with antialiased Source Code Pro Light font

Just have an ssh-able desktop/VM somewhere with emacs, install https://chrome.google.com/webstore/detail/secure-shell/pnhec..., and set the font to Source Code Pro (font instructions are in http://git.chromium.org/gitweb/?p=chromiumos/platform/assets...)


Is your programming done via SSH to a Linux desktop or VM? You can SSH from a Chromebook no problem: https://chrome.google.com/webstore/detail/secure-shell/pnhec...


Most of mine is done offline, which is so far the dealbreaker for the Chromebook for me. I need to be able to run vim on a local filesystem on a plane. I hear it's possible to root the Chromebook though?


It's pretty straightforward, and as long as you don't modify the hardware it's completely reversible:

http://www.chromium.org/chromium-os/developer-information-fo...


Operators and Things, a (supposed) first-person account of a schizophrenic who recovered from the condition and wrote about her experience. The second half of the book is where it really shines, since the author attempts to analyze her experience as a window into the inner workings of her cognition: how it broke down, what she experienced when it did, how it recovered itself, and what led to it. Since the author is anonymous, and talking about one's mind is very introspective, it's hard to take away real science from the book, but I found it fascinating nonetheless. While I really dislike pseudoscientific explanations of brain functioning, after reading this I took up the idea that the conscious mind is more of a time-slice scheduler and message-passer than the place where the actual computation is done. So concentration is about controlling your unconscious indirectly, like training a puppy to play fetch: you give it suggestions of what to do, and ignore it when it doesn't do that :).

I'm linking to the Amazon page, but IIRC the book is old enough to be in the public domain and there is a free text version somewhere.

http://www.amazon.com/Operators-Things-Inner-Life-Schizophre...


A lot of people here are commenting on GitHub being 'overpriced' or 'greedy.' TPW did an interview a while ago that has insight into why their pricing structure is the way it is. It's a pretty interesting read:

http://mixergy.com/tom-preston-werner-github-interview/

(Search for 'which metrics' to skip to the pricing part.)

Money quote: "That’s like buying a car based on how much it weighs. It’s irrelevant."

I may be biased since GitHub does a lot to foster the developer community in my area (I nabbed a sweet contracting gig at one of their drinkups), but I'm perfectly happy with their pricing.


> Money quote: "That’s like buying a car based on how much it weighs. It’s irrelevant."

Car manufacturers are constantly going on about how the latest model weighs X% less than last year's. Lower weight usually means better handling and a more fun driving experience...


Owners of SUVs and off-road cars might disagree.


The BSD networking stack.


I suspect it is less widely deployed than SQLite. SQLite gets into things even without network capabilities (e.g. digital TVs used it for channel data and EPG information even before they were internet-connected).

I don't know to what extent the BSD stack's code remains in Windows, but even if a raw build of Windows doesn't have SQLite embedded somewhere within it (which it might well do), you can bet there are multiple software installations with it embedded somewhere.

Would you actually describe the BSD networking code as being deployed in Linux even? I know it is derived from the BSD stack, but is it really still the same software?

Zlib and libjpeg, as someone else suggested, are good candidates and at least in the race with SQLite.


> However, I could use rebase to start combining loosely related commits, trading the time resolution for clarity in the commit history.

In general, your commits should be the smallest atomic operation that makes sense. When people talk about 'clean history,' they're talking about working in the awesome workflow git provides:

1. Write half-written broken code.

2. Fix that code up.

3. Add some more onto that.

4. Fix a typo!

5. Forgot to update the README.

Now, you could push that to master, but then master's history is littered with commit messages like 'oops' and 'typo.' Instead, you can rebase commits 1-5 onto the latest master, squash them together, and have one 'nice' commit that only has the cleaned-up final changes.

This is one of the most powerful things about git: in a private repo, you can commit all kinds of garbage and half-written stuff without caring. When you want to make your stuff public, rebase and squash, then send it out. Be careful though! Only rebase your own private branches, or you're gonna have a bad time™.
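
A minimal sketch of that squash step (the branch and remote names are just for illustration):

  # on your private feature branch, with master as the upstream
  git fetch origin
  git rebase -i origin/master
  # in the todo list, keep the first commit as 'pick' and change
  # the other four to 'squash' (or 'fixup' to discard their messages),
  # then write one clean commit message when the editor reopens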


Okay, that is basically in keeping with my current understanding (though I'm not sure how much I live up to the "only have working history in the public repo" rule).

There is the other issue I raised, however: is there a good way to group a series of commits that all work towards a single distinct goal? Using branches is a clear step in that direction, but it seems like a nightmare to perform a rebase like you described if the commits are mixed and I want the end result to group them via branches. That is confusing; hopefully this will clear it up:

1. Bugfix in function1.

2. Bugfix in function2.

3. New feature in function2.

4. Bugfix in function1.

5. Bugfix in function2.

...and we want in the end:

      /-- 1 ---- 4 ---\
  ---<                 >--HEAD
      \- 2 -- 3 -- 5 -/

Can rebase do this easily? Is this a good idea (it seems like it is to me)? The programmer would have to confirm that the code works at every state.


So I'm not sure if I understand correctly, but let me put it this way: with a little more git craziness, you can crack apart a commit and separate it into two. This is good if you made two unrelated changes to a file, committed them, and realized later that you wanted two separate commits.

The basic process is:

1. git rebase -i, and change the commit you want to split to 'edit'.

2. git reset HEAD^. This 'undoes' the commit and leaves the changes in your directory as if you had written the code but hadn't committed it yet.

3. git status to see what you're working with.

4. git add <filename> -p. This lets you stage your changes a chunk at a time: stage everything that belongs in commit one, and skip the parts you want for commit two.

5. git commit (do not do git commit -a here) and write the message for your first commit.

6. Now your working directory holds only the changes for commit two. git commit -a if you want all of them.

7. git rebase --continue.
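
Concretely, a session might look like this (the filename and messages are made up):

  git rebase -i HEAD~3        # mark the commit to split as 'edit'
  git reset HEAD^             # undo it, leaving the changes unstaged
  git add -p server.c         # stage only the hunks for the first commit
  git commit -m 'first logical change'
  git commit -am 'second logical change'
  git rebase --continue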

This page[1] has a more concise answer, but leaves out the git add -p part.

Note that if you mess up in rebase-land, you can always git rebase --abort. If you come out of the rebase and everything looks lost ('oh god I lost my data!'), use git reflog and pull up the hash of where you were before. Your data is still there.
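
In other words (the reflog entry here is hypothetical; pick whichever one predates the rebase):

  git rebase --abort          # bail out of a rebase gone wrong
  git reflog                  # every recent position of HEAD, newest first
  git reset --hard HEAD@{2}   # jump back to the pre-rebase state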

Another note: if your commits are already separate, you can use rebase to selectively squash and reorder them. Read the manual on git rebase -i; if you rearrange commits and only squash some, I think you'll get what I'm talking about.
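
For example, a todo list like this (hashes and messages made up) reorders the commits and squashes only some of them:

  pick   a11111 Bugfix in function1
  squash a44444 Bugfix in function1, take two
  pick   b22222 Bugfix in function2
  pick   b33333 New feature in function2
  squash b55555 Bugfix in function2, take two

That yields three commits: the two function1 fixes folded together, the function2 fix on its own, and the feature with its follow-up fix folded in.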

[1] http://stackoverflow.com/questions/6217156/how-to-break-a-pr...


Switching branches is cheap; I'd say the "right" way to get a tree like you want is to have two or even five branches going the whole time you're working. But I suspect you could make two branches and cherry-pick different sets of commits onto them to get the result you're after. To my mind it wouldn't be worth the effort, though: how often do you really care whether the code worked with only 1 and 4 applied?
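
If you did want to try it, the cherry-pick route would look something like this (the SHAs and branch names are placeholders):

  git checkout -b bugfixes master
  git cherry-pick <sha1> <sha4>        # commits 1 and 4
  git checkout -b feature master
  git cherry-pick <sha2> <sha3> <sha5> # commits 2, 3, and 5
  git checkout master
  git merge bugfixes feature           # octopus merge joining both lines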


Right, I would say that it isn't worth the effort. Also, I probably never care about the code with only 1 and 4 applied. So perhaps branches aren't the right way to do what I am describing.

I always saw VC as a systematic way to keep a log of my development so that I could figure out where I may have broken my code. For this purpose, having some sort of meta-data where commits can be grouped would be nice. It would also work to do something like always end my commit messages with some kind of meta-data tag that I could grep the log for. I was just wondering if there was a prescribed/built-in way for Git to handle this.
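
For instance, if I always ended messages with a hypothetical [feature-x] tag, I could pull the group up later with:

  git log --oneline --grep='\[feature-x\]'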


git-bisect is the standard tool for figuring out where you broke something. I don't know what it does with branching histories though, I tend to effectively linearise my history by rebasing each branch on the trunk head before merging it.
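
A typical bisect session looks something like this (the known-good tag is hypothetical):

  git bisect start
  git bisect bad HEAD           # current tree is broken
  git bisect good v1.0          # last release known to work
  # git checks out a midpoint; test it, then mark it:
  git bisect good               # or 'git bisect bad', repeat until done
  git bisect reset              # return to where you started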


This is great for people who are that organized. I'm not, so I like the 'just merge everything into master' mentality. See http://scottchacon.com/2011/08/31/github-flow.html


My main issue with the described github-flow is that they push development branches to the server, and encourage that to be done very often.

And my issue with that is that once you push something, any kind of archaeology on that history is off-limits. And that's not a "principle" thing: if you push your branch, do some rebasing, and push again, you are in a world of hurt.

The operation will very likely fail, and recovery is a serious pain in the butt.
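
A sketch of the failure mode (branch name made up):

  git push origin feature       # after a rebase: rejected, non-fast-forward
  git push -f origin feature    # 'succeeds' by rewriting the remote branch,
                                # and everyone tracking it now has to recover

(A newer option, --force-with-lease, at least refuses to clobber remote commits you haven't seen, but the people tracking your branch are still stuck with the cleanup.)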

If you don't ever do any sort of archaeology, then that's great and it will work for you. I have had numerous occasions where I've tried some git merge or something and screwed things up. I've fixed it by putting my Indiana Jones hat on and digging in.

Being able to tamper with the history has gotten me out of trouble many times. The only time it has gotten me into trouble is rewriting history that has been published.


And yet if you _don't_ push it to the server, then nobody else can see it. And don't you want other people to see development branches, to give feedback and even to collaborate on writing them?

In practice, what everyone does is they DO rewrite history on those pushed dev branches, and they TRY to avoid the world of hurt by some convention for keeping track of what branches are 'development branches', and knowing that their history can change, and thus not _pulling_ from these branches into anything except a branch that does nothing but track the dev branch. And then using 'rebase' in just the right way on your local copy of that dev branch, when you need to. And then winding up in that world of hurt when something goes wrong.
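
The "just the right way" usually amounts to something like this on your local copy (branch names made up):

  git fetch origin
  git checkout my-dev-work
  git rebase origin/dev-branch  # replay local commits on the rewritten
                                # branch; identical patches get skipped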

Contrary to all the git apologists in this thread, I think it is one of the biggest usability problems with git. I'm not familiar enough with the other DVCSes to know if they manage to do this better. For all that, I do know branching/merging is still a hell of a lot better than it was with svn.

What I myself tend to do is avoid ever rewriting history, sacrificing 'cleanness' for reliability and safety. Except when I'm working on a dev branch for an open source project where they insist upon it, and then I worry, and mess up a lot, and spend lots of time recovering from my mistakes.


Which workflow are you referring to?


> The event-driven concurrency model makes it easier to write servers without worrying about race conditions and thread locks

Well, no, you're just writing the locks yourself in an ad-hoc way. Every time you have a callback calling another callback, you have a lock, with all the race-condition/deadlock issues associated with that. Of course, writing an application that doesn't have complicated synchronization requirements (e.g. a streaming fileserver) can often require less boilerplate in an evented system. However, you run into a catch-22 here: by definition it's an application with fewer synchronization requirements, so you'd have to use fewer complicated locks in a 'heavy thread' implementation as well :).

Ultimately it's an engineering tradeoff problem, and you have to weigh lightweight node-style cooperative multitasking against the ability of a traditional thread system to better handle highly complicated scenarios.

Or you can be Russ Cox and argue that this is a false dichotomy[1] and that we should all be using CSP. I'm in that camp.

Andrew Birrell: threads.

John Ousterhout: events.

John DeTreville: no.

Rob Pike: yes.

[1] http://swtch.com/~rsc/talks/threads07/


There's an interesting paper that came out of Stanford's WebBase project that might be helpful: http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf

