Hacker News | azhenley's comments

Discussed heavily 25 days ago: https://news.ycombinator.com/item?id=43957010


It's also not nearly as good as Eric Gilliam's "How did places like Bell Labs know how to ask the right questions?" https://www.freaktakes.com/p/how-did-places-like-bell-labs-k... , posted here a few months ago https://news.ycombinator.com/item?id=43295865 . That could almost have been written as a rebuttal to TFA's storytelling about unlimited researcher freedom at Bell Labs, though in fact it predates TFA by a couple of years.


Good thinking. I discuss population density, cities near borders, and narrow borders in the last section.


Another possible suggestion: maybe choose random points within a set radius of points along the borders? First pick a random selection of points on the border, then choose random points within a circle (or simply a square with a set delta in lat/long) that are "nearby the border", and measure your error rates for those points at various boundary-simplification tolerances. That would remove the "middle of the state" random points where the border tolerance inevitably makes no difference.
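The sampling idea above can be sketched in a few lines. This is a hypothetical illustration (the function name, the delta value, and the sample border vertices are all my own assumptions, not from the original benchmark):

```python
import random

def sample_near_border(border_points, n_samples, delta=0.05, seed=0):
    """Pick random border vertices, then jitter each within +/- delta
    degrees of lat/long, so every test point lands near the boundary."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        lat, lon = rng.choice(border_points)
        samples.append((lat + rng.uniform(-delta, delta),
                        lon + rng.uniform(-delta, delta)))
    return samples

# Illustrative vertices roughly along the straight Illinois-Indiana border
border = [(41.76, -87.52), (41.0, -87.52), (40.0, -87.53)]
pts = sample_near_border(border, 100, delta=0.05)
```

Measuring classification error only on points like these would concentrate the benchmark where simplification tolerance actually matters.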


As a native Philadelphian, I immediately see why you need a good resolution here - at 0.1 degrees resolution you very well could have assigned my birthplace to New Jersey. If I'm not mistaken New York and Philadelphia are the largest cities where you might have a problem. Chicago's on a state line but the Illinois-Indiana border is straight.


I wonder if it’s actually straight though? In the chart on the page, Colorado is described as having 7000-something vertices, where I would have expected it to have … 4.


There's the congressionally approved boundary. Then there's the surveyed boundary, wherein a team of people goes out and hammers survey marks and tags into the earth, or creates man-made monuments where that isn't possible.


> The post got linked by Hackernews and Reddit. As is usual the majority of comments did not talk about the actual content but instead were focused on two tangential things.

Too true, and this is too good. Start with part 1 (and the comments) if you haven’t.


I wonder what it is about the HN audience or the HN voting system that seems to always produce this. You see it in so many stories posted here: out of a 1000-word article, someone nitpicks a single phrase, and 75% of the comments rathole on that discussion rather than talking about the article.


This is:

1. hilarious, because it is itself an example of this behavior: we aren't discussing the article, but a tangent from a few of its words

2. an instance of Parkinson's law of triviality: it's just easier to respond to a comment than to read an entire article. Plus, many people read the comments first to decide whether an article is worth reading. So you end up with engagement in spinoff discussions, especially when the original is harder for a general audience to read or understand.


I actually thought about #1 as I typed it. I admit it's definitely hilarious and hypocritical.


It's quite simple: these sites are frequented by humans who have their own interests and sometimes agendas, rather than by automatons who only discuss what you personally want them to discuss. Hope that clears it up.


If you want to read the referenced comments, here they are (probably -- I don't have a crystal ball, only access to a search engine):

- https://news.ycombinator.com/item?id=43468976

- https://www.reddit.com/r/programming/comments/1jjluxe/writin...


You should try Mirror, the LLM-powered programming-by-example language I made:

signature is_even(x: number) -> bool

example is_even(0) -> true

example is_even(1) -> false

example is_even(222) -> true

example is_even(-99) -> false

It will take your examples and "compile" them to a callable function. You can read more or try it out: https://austinhenley.com/blog/mirrorlang.html
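For a rough sense of what "compiling" examples means here, the generated function for is_even might look something like the following. This is only a hypothetical sketch in Python; the actual code Mirror's LLM backend emits will vary:

```python
# Hypothetical output: one plausible function an LLM could generate
# from the is_even signature and the four examples above.
def is_even(x: int) -> bool:
    return x % 2 == 0

# The examples double as checks on the generated function.
assert is_even(0) is True
assert is_even(1) is False
assert is_even(222) is True
assert is_even(-99) is False
```

The examples effectively serve as both the specification and a test suite for whatever the model produces.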


So many agent tools now. What is the special sauce of each?


Gemini has a 1-million-token context window, which usually works better for coding.

And when it gets priced, it's usually cheaper (for the same capability).


The whole "industry" right now is hacked together crap shoved out the door with zero thinking involved.

Wait a year or two, evaluating this stuff at the peak of the hype cycle is pointless.


Spoiler alert: there isn't one


Context window and pricing absolutely matter.


But many "agentic" tools are model-agnostic. The question is about what the tool itself is doing.


Looks like their GitHub Copilot Workspace.

https://githubnext.com/projects/copilot-workspace


This is beautiful. I think I'll add a similar page to my website. Side projects are what I look forward to!

Right now, I just have my blog + github as a messy portfolio of personal projects, but I like this much better.


Go for it! I love creating list sites, so I listed my side projects too. xD


If you had a blog or YouTube channel where you just went around to open source projects optimizing them down, I’d be very interested.


Objectively the best podcast. https://www.acquired.fm/


Previous discussion from 2023: https://news.ycombinator.com/item?id=38262251

Recent discussion on the follow-up, "The Fifth Kind of Optimisation": https://news.ycombinator.com/item?id=43555311

