Hacker News | kevinslin's comments

hi kgeist - i work on the team that manages the github app. are you able to share a conversation where the github connector did not work? feel free to message me at https://x.com/kevins8 (dm's open)


I think I understand what went wrong. I was confused by the instructions and ChatGPT's UI.

I asked the GitHub app to review my repository, and it told me to click the GitHub icon and select the repository from the menu to grant it access. I did just that and then resent the existing message (which seems like natural user behavior). After testing a bit more, from what I understand, the updated setting is applied only to new messages, not to existing ones. The instructions didn't mention that I needed to repeat my question as a new message.


as someone who likes the npm package ecosystem but is not fond of malware, i ended up building a CLI wrapper around npm that only installs packages older than a configurable number of days.

in case folks find it helpful: https://github.com/kevinslin/safe-npm
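The core check such a tool needs is straightforward: the npm registry exposes a `time` object mapping each version to its publish timestamp, so you can filter out anything too recent. A minimal sketch of that logic (this is not safe-npm's actual implementation, and `sample_time` below is made-up data rather than a real registry response):

```python
from datetime import datetime, timedelta, timezone

def latest_safe_version(time_field: dict, min_age_days: int, now=None):
    """Return the newest version published at least `min_age_days` ago.

    `time_field` mirrors the `time` object from the npm registry
    (https://registry.npmjs.org/<pkg>), mapping versions to ISO timestamps.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=min_age_days)
    candidates = []
    for version, published in time_field.items():
        if version in ("created", "modified"):  # registry metadata, not versions
            continue
        ts = datetime.fromisoformat(published.replace("Z", "+00:00"))
        if ts <= cutoff:
            candidates.append((ts, version))
    return max(candidates)[1] if candidates else None

# Made-up registry data for illustration
sample_time = {
    "created": "2020-01-01T00:00:00.000Z",
    "1.0.0": "2020-01-01T00:00:00.000Z",
    "1.1.0": "2024-06-01T00:00:00.000Z",
    "modified": "2024-06-01T00:00:00.000Z",
}
print(latest_safe_version(sample_time, 30))
```

A wrapper can then pin the resolved version explicitly (`npm install pkg@<version>`) instead of letting npm pick the latest.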


the snarky part of me wants to say that "X, not Y" is a trope that has been overused by people peddling their own thing, which is often just a thin veneer over what was there before

but it’s easy to be a pessimist and pull down other work

i agree that goals are hard and in some sense, set people up to fail by making the target some ephemeral thing in the distance

maybe reframing it as a quest can help: it's something that can be hard, it can change you, and the outcome might be different from what you intended

by all means. do quests. not goals. will go back to nike on this one. just do it


> You can't reference your contracted volume rates when building monitors out and the units for the metrics you need to watch don't match the units you contract with them on the SKU.

Are you referring to the `datadog.estimated_usage.logs.ingested_events` metric? It includes excluded events by default, but you can get your indexed volume by filtering out excluded logs: `sum:datadog.estimated_usage.logs.ingested_events{datadog_index:*,datadog_is_excluded:false}.as_count()`


For datadog, unfortunately there's no obvious alternative despite many companies trying to take market share. That is to say, datadog has both second-to-none DX and a wide breadth of services.

Grafana Labs comes closest in terms of breadth but their DX is abysmal (I say this as a heavy grafana/prometheus user). The same goes for new relic, though they have better DX than grafana. Chronosphere has some nice DX around prometheus-based metrics but lacks the full product suite. I could go on, but essentially all vendors lack breadth, DX, or both.


the way I think of datadog is that it provides second-to-none DX combined with a wide suite of product offerings that is good enough for most companies most of the time. does it have opaque pricing that can be 100x more expensive than alternatives? absolutely! will people continue to use it? yes!

something to keep in mind is that most companies are not like the folks in this thread. they might not have the expertise, time, or bandwidth to invest in building out observability.

the vast majority of companies just want something that basically works and doesn’t take a lot of training to use. I think of Datadog as the Apple of observability vendors - it doesn’t offer everything and there are real limitations (and price tags) for more precise use cases but in the general case, it just works (especially if you stay within its ecosystem)


In terms of Datadog, the per-host pricing for infrastructure in a k8s/microservices world is perhaps the most egregious pricing model across all datadog services. Triply true if you use spot instances for short-lived workloads.

For folks running k8s at any sort of scale, I generally recommend aggregating metrics BEFORE sending them to datadog, either at a per-deployment or per-cluster level. Individual host metrics also tend to matter less once you have a large fleet.

You can use opensource tools like veneur (https://github.com/stripe/veneur) to do this. And if you don't want to set this up yourself, third party services like Nimbus (https://nimbus.dev/) can do this for you automatically (note that this is currently a preview feature). Disclaimer also that I'm the founder of Nimbus (we help companies cut datadog costs by over 60%) and have a dog in this fight.
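The pre-aggregation idea can be sketched in a few lines: drop the per-host tags and sum counters over the remaining tag set before shipping anything upstream. A minimal illustration (the metric names, tags, and values are made up, and real aggregators like veneur also handle gauges, histograms, and flush intervals):

```python
from collections import defaultdict

def aggregate(points, drop_tags=("host",)):
    """Sum counter points after dropping per-host tags, so the backend
    sees one series per deployment instead of one per host."""
    totals = defaultdict(float)
    for metric, tags, value in points:
        kept = tuple(sorted((k, v) for k, v in tags.items() if k not in drop_tags))
        totals[(metric, kept)] += value
    return dict(totals)

# Made-up per-host counter points
points = [
    ("requests.count", {"host": "ip-10-0-0-1", "deployment": "api"}, 12),
    ("requests.count", {"host": "ip-10-0-0-2", "deployment": "api"}, 8),
    ("requests.count", {"host": "ip-10-0-0-3", "deployment": "worker"}, 5),
]
print(aggregate(points))
```

Three per-host series collapse into two per-deployment series; at fleet scale that reduction is what drives the custom-metric and host-count savings.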


Author here. Spent far too much time staring at spreadsheets in my living room calculating observability costs across different vendors and decided to write a post about it.

This is my attempt to create a common model for thinking about usage-based pricing across all the vendors and to make a best-effort attempt at normalizing and comparing the actual cost of usage.
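As a toy illustration of the normalization idea: vendors quote logs in different units (per GB, per million events), so converting every quote into $/GB ingested makes them comparable. The prices and average event size below are placeholders, not real vendor quotes:

```python
def cost_per_gb(price, unit, avg_event_bytes=500):
    """Normalize a log-pricing quote to $/GB ingested.

    `unit` is either "gb" (price is $/GB) or "million_events"
    (price is $/1M events); avg_event_bytes is an assumed event size.
    """
    if unit == "gb":
        return price
    if unit == "million_events":
        events_per_gb = 1_000_000_000 / avg_event_bytes  # 2M events/GB at 500B
        return price * events_per_gb / 1_000_000
    raise ValueError(f"unknown unit: {unit}")

# Hypothetical quotes for illustration only
print(cost_per_gb(0.10, "gb"))                                    # already $/GB
print(cost_per_gb(0.05, "million_events", avg_event_bytes=500))   # converted
```

The assumed event size dominates the conversion, which is exactly why per-event pricing is hard to compare against per-GB pricing without measuring your own payloads.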

If you have any questions or comments, would love to hear them.


author of the post here - was inspired to write this after working with OTEL for a few months and realizing that OTEL has a ridiculously large surface area that most people (myself included) might not be aware of

I see a lot of comments about how overly complex OTEL is. I don't disagree with this. in some sense, OTEL is very much the k8s of observability (good and bad)

The good is that it is a standard that can support every conceivable use case and has wide industry adoption. The bad is that there is inherent complexity in needing to support such a wide array of use cases.
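For a taste of that surface area: even the smallest useful OpenTelemetry Collector setup already spans receivers, processors, exporters, and pipelines. A minimal sketch (the endpoint is a placeholder, and real deployments typically layer on memory limiters, sampling, and per-signal pipelines):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

exporters:
  otlphttp:
    endpoint: https://collector.example.com  # placeholder backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```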


I realize that a fair answer is "no - you shouldn't be doing that" but, like .env files, I find that it's a widespread practice. Curious whether others have found a way of dealing with it besides hoping someone in the room has tribal knowledge of which metrics have become system dependencies

