Hacker News | new | past | comments | ask | show | jobs | submit | louisstow's comments

I hadn't seen this approach before, despite researching ways to implement a "Trending" query. It gives us very fast queries over an arbitrary time window, at the cost of a slower but highly cacheable threshold-calculation query.
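The comment doesn't describe the actual schema or metric, so here is a minimal sketch of one way that split could look, using a hypothetical `events` table in SQLite: a slow aggregate over a long window produces a single cached threshold number, and the fast query then filters any arbitrary recent window against it.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (item_id TEXT, ts REAL)")

now = time.time()
# Hypothetical data: item "a" is busy, item "b" is quiet.
rows = [("a", now - i * 60) for i in range(50)] + \
       [("b", now - i * 60) for i in range(5)]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

# Slow but cacheable step: derive an activity threshold from a long
# window (here: mean events-per-item over the last 24 hours).
# Recompute this rarely and cache the single number.
(threshold,) = conn.execute("""
    SELECT AVG(n) FROM (
        SELECT COUNT(*) AS n FROM events
        WHERE ts > ? GROUP BY item_id
    )
""", (now - 86400,)).fetchone()

# Fast step: any arbitrary time window (here: last hour), filtered by
# the cached threshold instead of re-aggregating the full history.
trending = [item for (item, n) in conn.execute("""
    SELECT item_id, COUNT(*) AS n FROM events
    WHERE ts > ? GROUP BY item_id HAVING n > ?
""", (now - 3600, threshold))]
# trending == ["a"]
```

The threshold metric, table, and window sizes are invented for illustration; the point is only that the expensive aggregation happens once and the per-request query stays cheap.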


I wanted to share some of my techniques for managing and scaling 100+ unique web crawlers. I've tried a few approaches but these techniques stand out as having the most impact in terms of quick iteration and being resilient to breaking changes. Interested in hearing different approaches.


I built an early HTML5 game engine in 2010 called CraftyJS when Facebook games were starting to become big. The project itself got me the job at a gaming startup and an offer at Zynga.


Nice! I used CraftyJS for my pet project!


https://app.secalerts.co

Our newest vulnerability alert service. We aggregate vulnerability data from over 50 sources into one consistent data schema, and let users write detailed queries and set up alerting. We've even introduced a pretty good local scanner to keep your SBOM up to date.


I built SecAlerts https://secalerts.co a few years ago. It's a fairly straightforward SaaS product that sends email alerts on new vulnerabilities matched to a customer's software.


This was originally a startup in Brisbane that was acquired, not a Microsoft name.


HN title needs editing.


Not necessarily. It does belong to MS now.


Yes, I saw a sister comment with a link that says that, even though the main link doesn't.


Main link does, see footer.



I remember this crazy game from when I was a teenager, and followed its development on and off for many years -- the combat was just so ahead of its time. Glad that they found a narrative to back the action, and an appreciative audience with Overgrowth.


https://signals.to

I'm building a cybersecurity database that aggregates vulnerability information from about 100 different sources into one cohesive, structured dataset that can be easily queried. It'll be much faster than CVEs and include way more (remediation information, news, social media, forum posts, advisories).


Can someone explain how it's lossless if there's a chance of hash collisions in the index?


The encoder can always generate the code for a literal pixel if the required colour is not found in the index.


TL;DR: hash collisions are an explicit part of the format. The encoder and decoder are both specified to use the same hash function, so the collisions cancel out.
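A minimal sketch of why that works. The hash below is the one from the QOI specification (`(3r + 5g + 7b + 11a) mod 64`); the surrounding demo code is my own illustration. Because the encoder and decoder each run the exact same index-update step on the exact same pixel sequence, a collision simply overwrites the same slot on both sides, and the two indexes can never diverge:

```python
def qoi_hash(r, g, b, a):
    # Index position from the QOI spec; collisions are expected and fine.
    return (r * 3 + g * 5 + b * 7 + a * 11) % 64

def update_index(index, pixel):
    # Both encoder and decoder perform this identical step per pixel,
    # so a colliding colour overwrites the same slot on both sides.
    index[qoi_hash(*pixel)] = pixel

pixels = [(255, 0, 0, 255), (0, 255, 0, 255), (10, 0, 0, 255)]

enc_index = [(0, 0, 0, 0)] * 64  # encoder's running index
dec_index = [(0, 0, 0, 0)] * 64  # decoder's running index
for p in pixels:
    update_index(enc_index, p)
    update_index(dec_index, p)

assert enc_index == dec_index  # indexes stay in lockstep
```

If the encoder looks up a slot and finds a different colour there (a collision), it just doesn't emit an INDEX chunk for that pixel and falls back to another chunk type, so nothing is ever lost.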


Nice! I've been running a similar ASCII based collab tool called https://walloftext.co

