Hacker News | wellpast's comments

This is one of those discourses that disappoints me about our industry.

Estimation can be done. It's a skillset issue. Yet the broad consensus seems to be that it can't be done, that it's somehow inherently impossible.

Here are the fallacies I think underpin this consensus:

1. "Software projects spend most of their time grappling with unknown problems." False.

The majority of industry projects—and the time spent on them—are not novel for developers with significant experience. Whether it's building a low-latency transactional system, a frontend/UX, or a data processing platform, there is extensive precedent. The subsystems that deliver business value are well understood, and experienced devs have built versions of them before.

For example, if you're an experienced frontend dev who's worked in React and earlier MVC frameworks, moving to Svelte is not an "unknown problem." Building a user flow in Svelte should take roughly the same time as building it in React. Experience transfers.

2. "You can't estimate tasks until you know the specifics involved." Also false.

Even tasks like "learn Svelte" or "design an Apache Beam job" (which may include learning Beam) are estimable based on history. The time it took you to learn one framework is almost always an upper bound for learning another similar one.

In practice, I've had repeatable success estimating properly scoped sub-deliverables as three basic items: (1) design, (2) implement, (3) test.

3. Estimation is divorced from execution.

When people talk about estimation, there's almost always an implicit model: (1) estimate the work, (2) "wait" for execution, (3) miss the estimate, and (4) conclude that estimation doesn't work.

Of course this fails. Estimates must be married to execution beat by beat. You should know after the first day whether you've missed your first target and by how much—and adjust immediately.
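A minimal sketch of what that beat-by-beat tracking can look like in practice (the phase names and day counts are hypothetical, purely for illustration):

```python
# Hypothetical tracker: after each completed sub-deliverable, compare
# actual days burned against the estimate so drift is visible immediately,
# while there is still time to unblock, add help, or cut scope.

def drift(estimates, actuals):
    """Cumulative slip (in days) after each completed item."""
    slip = []
    total = 0.0
    for est, act in zip(estimates, actuals):
        total += act - est
        slip.append(total)
    return slip

# (1) design, (2) implement, (3) test -- estimated vs. actual days
estimates = [2.0, 5.0, 2.0]
actuals = [2.5, 6.0]  # the test phase hasn't started yet

for item, s in zip(["design", "implement"], drift(estimates, actuals)):
    print(f"{item}: {s:+.1f} days vs. plan")
```

The point is not the arithmetic; it's that the comparison happens after every item, not at the end.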

Some argue this is what padding is for (say, 20%). Well-meaning, but that's still a "wait and hope" mindset.

Padding time doesn't work. Padding scope does. Scope padding gives you real execution-time choices to actively manage delivery day by day.

At execution time, you have levers: unblock velocity, bring in temporary help, or remove scope. The key is that you're actively aiming at the delivery date. You will never hit estimates if you're not actively invested in hitting them, and you'll never improve at estimating if you don't operate this way. Which brings me to:

4. "Estimation is not a skillset."

This fallacy is woven into much of the discourse. Estimation is often treated as a naïve exercise—list tasks, guess durations, watch it fail. But estimation is a practicable skill that improves with repetition.

It's hard to practice in teams because everyone has to believe estimation can work, and often most of the room doesn't. That makes alignment difficult, and early failures get interpreted as proof of impossibility rather than part of skill development.

Any skill fails the first N times. Unfortunately, stakeholders are rarely tolerant of failure, even though failure is necessary for improvement. I was lucky early in my career to be on a team that repeatedly practiced active estimation and execution, and we got meaningfully better at it over time.


Someone might run:

curl -s https://www.cs.cmu.edu/~biglou/resources/bad-words.txt \
  | tr -d '\r' \
  | while read -r w; do
      curl -s -X POST https://subth.ink/api/thoughts \
        -H 'Content-Type: application/json' \
        -d "{\"contents\":\"$w\"}"
    done


95 other users*

Love this. Is there a scrape-able list of these?


Thanks for the love. No need to scrape; just use this JSON, which contains all the data used in making the site:

https://storage.googleapis.com/globalhnbucket/normalized_boo...


https://xelly.games/

Vine but for user-submitted microgames

Docs: https://xelly-games.github.io/docs/intro


Have you checked out rooms.xyz? It's a similar concept.


Just d/l’d. Thx. Exploring… Looks interesting!


https://xelly.games/

Users post small games to social feeds.

Scroll like a social network, jump into and play any game by tapping on it.

Games are served into fully locked-down, sandboxed iframes for security.
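For illustration only, a locked-down embed along these lines (the attribute choices here are an assumption, not xelly's actual markup) keeps a game's scripts running while denying same-origin access, popups, top navigation, and form submission:

```html
<!-- Hypothetical embed: "allow-scripts" without "allow-same-origin"
     puts the game in an opaque origin, so it cannot read the host
     page's cookies or storage. -->
<iframe src="https://games.example.com/game-123/index.html"
        sandbox="allow-scripts"
        referrerpolicy="no-referrer"></iframe>
```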



https://xelly.games/

Twitter but for games instead of tweets.


https://xelly.games/

Social media network where users post microgames!


It’s a least-common-denominator effect.

I.e., most people don’t care.

Local-first is optimal for creative and productivity apps. (Conversely, non-local-first apps are terrible for these.)

But most people are neither creative nor optimally productive (or care to be).


> most people don’t care.

It's not that they "don't care"; it's that they don't know this is an issue that needs to be cared about. Like privacy: they don't think they need it until they do, and by then it's too late.


Wouldn’t it depend on use case?

If the app confirms to me that my crypto transaction has been reliably queued, I probably don't want to later hear that it was unqueued because a node using SQLite in the cluster died at an inconvenient moment.


If you had a power failure between when the transaction was queued and when the SQLite transaction was committed, no amount of fsync will save you.

If that is the threat you want to defend against, this is not the right setting. Maybe it would reduce the window a little, but power failures are basically a nonexistent threat anyway. Does a solution that mildly reduces, but doesn't eliminate, the risk really matter when the risk is negligible?


> but power failures are basically a nonexistent threat anyway

Not in the contexts where SQLite is often used. Remember, this is an embedded database, not a fat MySQL server sitting in a comfy datacenter with redundant power backups, RAID 6, and AC regulated to the millidegree. More like embedded systems with unreliable or no power backup. Like curl, you can find it in unexpected places.


I think in that context, durability is even less expected.


A better example is probably:

1. I generate a keypair and commit it.

2. I send the public key to someone.

I *really* want to be sure that 1 is persisted, because if they, for example, send me $1M worth of crypto, it will really suck if I don't have the key anymore. There are definitely cases where it is critical to know that data has been persisted.

This also assumes that what you are syncing to is more than one local disk; ideally you are running the fsync on multiple geographically distant disks. But there are also cryptography-related applications where you must never reuse state, or very bad things happen. This can apply even to a single local disk (like a laptop). In that case, if you did something like (1) encrypt some data, (2) commit that this nonce, key, OTP, or whatever has been used, and (3) send that data somewhere, then you want to be sure that either the data was committed or the disk was permanently destroyed (or at least somehow won't be used accidentally to encrypt more data).
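A sketch of that commit-before-send ordering, using Python's sqlite3; the table name, key bytes, and helper function are made up for illustration:

```python
import os
import sqlite3

def store_key_then_share(db_path, key_id, private_key, public_key):
    """Persist the keypair durably *before* the public key leaves the box.

    If the commit (and its fsync) hasn't happened, we must not hand out
    the public key: funds sent to it would be unrecoverable.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA synchronous=FULL")  # fsync on every commit
    conn.execute(
        "CREATE TABLE IF NOT EXISTS keys (id TEXT PRIMARY KEY, priv BLOB, pub BLOB)"
    )
    with conn:  # commits on success; nothing is "sent" before this returns
        conn.execute(
            "INSERT OR REPLACE INTO keys VALUES (?, ?, ?)",
            (key_id, private_key, public_key),
        )
    conn.close()
    return public_key  # only now is it safe to send this to anyone

# Hypothetical usage with dummy key bytes:
pub = store_key_then_share("keys.db", "k1", os.urandom(32), os.urandom(32))
```

The ordering, not the library, is the point: the send step is sequenced strictly after the durable commit.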


Of course it will, because sane programmers don't ack their customers until their (distributed, replicated) DB says ack.


I believe in the comment they're referring to the "crypto transaction", not the SQLite transaction.


If you are doing crypto, you really ought to have a different way of checking that your tx has gone through, one that is the actual source of truth: like, for example, the blockchain.


I knew I shouldn’t have said crypto, but it is why I said queued. I knew a pedant was going to nitpick; I was probably subconsciously inviting it. I think my point still stands.

