Hacker News | stream_fusion's comments

I have one of the affected drives mentioned in the article in my development laptop - the Samsung SSD 850 PRO 512GB.

As it's one of the most expensive SSDs available on the market, it was disconcerting to find dmesg -T showing trim errors when the drive was mounted with the discard option. Research on the mailing lists indicated that the driver devs believe it's a Samsung firmware issue.

Disabling trim in fstab stopped the error messages. However, it's difficult to get good information about whether drive performance or longevity may be impacted without trim support.
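For reference, the change is just dropping discard from the options field; the device and options below are illustrative, not my actual fstab:

```
# /etc/fstab -- before: online trim via the discard mount option
# /dev/sda2  /  ext4  defaults,discard  0  1

# after: discard removed, so no trim commands are issued on delete
/dev/sda2  /  ext4  defaults  0  1
```

A possible middle ground is leaving discard off and running fstrim periodically (e.g. from cron), so trim still happens in batches rather than on every delete - though whether batched trim avoids the firmware bug is something you'd want to verify first.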


Trim is really only a helpful hint when the drive is near full, so the GC can preemptively erase blocks and retain good write speed. Without trim, the firmware must wait until it gets a write for a particular block before it knows the old contents can be erased.

If your drive has a reasonable amount of unprovisioned space, it can simply work around the missing trim commands - this is theory, however; I do not know if the firmware actually does this. It's exactly this that makes some drives better than others when working without trim.


Thanks. I'll probably end up creating an unprovisioned partition. It's frustrating precisely because of the uncertainty re future performance, especially given the price premium for pro/enterprise-level hardware.


You can research whether the firmware understands MBR and GPT - if it only understands one, then you have to use that. Alternatively, use Samsung's own software (I think it's called Magician, can't remember exactly); it will make sure you have the unprovisioned space set up correctly.


An argument for not increasing the block size is that miners can price-ration the inclusion of transactions in blocks.

During periods of high network contention, an ordinary user can include a higher fee to prioritize their transactions.

Correspondingly, the cost of attacking the network with spurious transactions will increase.


> In the real world there is always a discord between requirements and reality, skillsets and the problem space, change management and the need for rapid change, scope creep, and on and on.

We follow Agile. At the beginning of iteration planning, the dev team has to task out the items, score the complexity, and then decide the cut-off point for what can and cannot be achieved within the two-week iteration.

It's taken us several years, but this ruthless cycle of feedback and responsibility has led us to a point where we scope it right about 75% of the time.

Having also been involved in projects that overran deliverable dates by years, I wouldn't have believed it possible for a software-management process to work so well.

Of course it helps that there is genuine philosophical buy-in, and that our revenue is derived from our software (i.e. we're not a cost centre).


Reverting a commit is trivial. There is zero chance of destroying the repo. From my notes:

# revert a commit

git revert dd61ab32


Well, that does look simple, but first of all, it doesn't really revert a commit: it makes a new commit to cancel out the old one, or something. Not intuitive, and it gives unexpected results if you don't know what you're doing.

But the situation I got into was when I did something stupid and just wanted to undo it. Let me go back to an hour ago before I screwed up my project. That's not revert, it's reset, but reset only reverts the commit history, you really want reset --hard. And of course you need to be pointing to the right place for this to work, and there are other ways for it to go wrong. It might be hard to imagine if you're at all competent with Git, but trust me when I say you can get yourself into some very frustrating scenarios if you just Google "undo push" and blindly follow the instructions.
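For what it's worth, the incantation for the "take me back to an hour ago" case is reset --hard, usually against a reflog entry like HEAD@{5}. A throwaway demo (repo and file names are made up):

```shell
# Sketch: undoing a bad commit with reset --hard, in a scratch repo.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
echo good > file.txt; git add file.txt
git -c user.email=a@b -c user.name=demo commit -qm "good state"
echo oops > file.txt; git add file.txt
git -c user.email=a@b -c user.name=demo commit -qm "mistake"
git reset --hard -q HEAD~1    # move the branch AND the working tree back
cat file.txt                  # prints "good"
```

In real life you'd find the target with git reflog first - and note that plain reset (without --hard) only moves the branch pointer, leaving the working tree alone, which is exactly the trap described above.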

Here's another simple scenario: I'm working on a bug, my friend fixes it first and pushes his version. I just want to pull his changes and throw away whatever I was working on. Ok, so I find this: http://stackoverflow.com/questions/1125968/force-git-to-over...

Look how many different methods there are. Look how many warnings of "THIS WILL DELETE ALL UNTRACKED FILES" there are. Why is this necessary? I could rant for a while longer, but this stuff is so incredibly unfriendly and frustrating to newcomers. I don't want to think about this stuff, I just want to go back to the code.
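For the record, the recipe that scenario boils down to is fetch plus reset --hard against the remote ref. A sketch in a pair of throwaway repos (paths and names are illustrative):

```shell
# Sketch: "throw away my local work and take the remote's version".
set -e
base=$(mktemp -d)
git init -q "$base/remote"; cd "$base/remote"
echo theirs > fix.txt; git add fix.txt
git -c user.email=a@b -c user.name=demo commit -qm "their fix"
git clone -q "$base/remote" "$base/local"; cd "$base/local"
echo mine > fix.txt                 # my abandoned local attempt
git fetch -q origin
git reset --hard -q origin/HEAD     # make my branch match the remote exactly
cat fix.txt                         # prints "theirs"
```

Untracked files survive reset --hard, which is why the answers also mention git clean -fd; that's the command behind all the capital-letter warnings.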


And yet every time I looked at the docs for git in the two or three years I was using it heavily I happened upon these dire warnings about destroying and losing work. I understand why git is so powerful, and to manage something as complex as the Linux kernel that power matters.

For what I do for a living it isn't, and Mercurial is a nice alternative that for whatever reason doesn't have any scary warnings in the docs about how I can destroy everything if I revert or rebase inappropriately, or whatever. Mercurial also seems a lot more aggressive about checking if I'm doing something stupid or dangerous and warning me about it, whereas git takes more of the traditional Unix "You Asked For It You Got It" philosophy.

Again: I understand why this is the case, but it's not the optimal tradeoff for me.


Not knowing the technical or legal arrangements of SWIFT doesn't prevent one from making a traditional international wire transfer.


Almost all languages have an escape-hatch to do dangerous stuff.

The key insight about Rust is the set of programming abstractions (linear types to manage resources) that let you get systems-like programming tasks done without needing to fall back on very low-level coding techniques.
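A tiny sketch of what I mean (illustrative, not from any particular codebase): ownership handles the resource automatically, and the low-level escape hatch is explicitly fenced off behind unsafe.

```rust
fn main() {
    // Ownership: `data` owns its heap allocation and is freed when it
    // goes out of scope -- no manual free, no GC.
    let data = vec![1, 2, 3];
    let sum: i32 = data.iter().sum();
    println!("{}", sum); // prints 6

    // The escape hatch: raw pointers can only be dereferenced inside an
    // explicit `unsafe` block, so the dangerous parts are clearly marked.
    let x = 42i32;
    let p: *const i32 = &x;
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```

The point is that the safe path is the default and the dangerous path is opt-in and greppable, rather than the other way around.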


My experience playing around with KD trees is that they are super effective for multidimensional indexing (including spatial) and range search so long as the data is relatively static. The difficulty comes when trying to update and re-balance them dynamically, which is where other structures perform better.


Oh Dear,

I was remembering my previous research wrong.

It's quad-trees and the related Z-order-curve structures that give log(n) search and inserts.

With those, "everything" becomes O(log n).

- Given that, my previous argument concerning log time for triangle/polygon search should stand.

http://en.wikipedia.org/wiki/Quadtree http://en.wikipedia.org/wiki/Z-order_curve
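To illustrate the Z-order idea (my own toy sketch, not from those articles): interleaving the coordinate bits gives a 1-D key that mostly preserves spatial locality, which is what lets sorted or tree-based structures answer range queries in roughly log time.

```python
# Sketch: a 2-D Morton (Z-order) code by bit interleaving.
def morton2(x: int, y: int) -> int:
    """Interleave the bits of x and y (x in the even bit positions)."""
    code = 0
    for i in range(32):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# Nearby points tend to get nearby codes, so a plain sorted list of
# codes already acts as a crude spatial index.
assert morton2(0, 0) == 0
assert morton2(1, 1) == 3
assert morton2(2, 3) == 0b1110   # x=10 and y=11 interleaved
```

Quadtree cells correspond exactly to contiguous ranges of these codes, which is why the two structures show up together.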


Everyone who needs 'upsert' is forced to go and read those discussions (and probably a few more articles as well), and then implement their own version of the same thing many times over.

The fact that it's complicated is precisely the reason this ought to be solved for the general case.

Other than that, PostgreSQL is still an awesome product.


Nah, you do not need to read those discussions unless you want to help out with the current patch, which, if enough people pitch in, might land in PostgreSQL 9.5.


npm can use server-side resources.

But where do you store the 25GB blockchain in a client-side full-node implementation?

Isn't this just an implementation for Node?


I don't think anyone is going to run a full node in a browser any time soon, but if they do, IndexedDB.


My experience is that Haskell syntax is a dream compared to OCaml, with 'where' clauses, beautifully simple lambda syntax, do-notation support, . and $ composition, and typeclass operator overloading.

OCaml is more practical in a hard-to-explain way. It's much more predictable in terms of its memory and CPU use with non-lazy default evaluation, and I found it performs much better on general code.


Funny, I have the opposite experience. Where clauses force you to look down and then up again (bonus if you managed to use both let and where in the same function!). The lambda syntax is a matter of debate, writing "fun" instead of "\" doesn't bother me and is, IMHO, clearer. OCaml has an equivalent of $ with @@, but I haven't seen it used very much, since chaining functions with |> is so convenient.

OCaml does not have generic ways of doing monadic stuff (do notation or generic >>=), which means in practice that monadic code looks like "fun1 x >>= (fun r -> )" which is kind of awkward. It also doesn't have typeclasses (yet).
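For example (my own sketch), the option monad's bind has to be hand-rolled per type rather than coming from a typeclass:

```ocaml
(* Sketch: without typeclasses, >>= is defined per monad; here for option. *)
let ( >>= ) opt f = match opt with
  | Some x -> f x
  | None -> None

let safe_div a b = if b = 0 then None else Some (a / b)

let () =
  (* the awkward "e >>= (fun r -> ...)" chaining style *)
  match safe_div 10 2 >>= (fun r -> safe_div 100 r) with
  | Some v -> Printf.printf "%d\n" v   (* prints 20 *)
  | None -> print_endline "division by zero"
```

Each monadic type (option, result, Lwt, ...) ends up shipping its own >>=, where Haskell's do notation would work uniformly across all of them.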

However, one thing you're unlikely to find in OCaml, and which is unfortunately not uncommon in Haskell, is functions with a long list of positional parameters (since you have named arguments) or different versions of the same function with a "_" suffix depending on various defaults (since you have optional arguments). It makes a tremendous difference to readability when dealing with complex code.
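A toy example of what labelled arguments buy you (my own sketch, not from any real API):

```ocaml
(* Sketch: labelled arguments make call sites self-documenting, so you
   don't need long positional lists or foo_ variants for defaults. *)
let substring ~pos ~len s = String.sub s pos len

let () =
  (* labels can be given in any order at the call site *)
  print_endline (substring ~len:5 ~pos:6 "hello world")  (* prints "world" *)
```

Compare a positional call like substring 6 5 "hello world", where you have to remember which int is which.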

Also, interface files (.mli), while a bit of a pain to maintain at times, give a very clear idea of what interface a given module exposes.

All in all, I find OCaml code much easier to read, not to mention less "clever" than equivalent Haskell code.


> However, one thing you're unlikely to find in OCaml, and which is unfortunately not uncommon in Haskell, is functions with a long list of positional parameters (since you have named arguments) or different versions of the same function with a "_" suffix depending on various defaults (since you have optional arguments). It makes a tremendous difference to readability when dealing with complex code.

Can you give an example? I'm interested in seeing what you mean but am having trouble figuring out what one would look like.


