Does this work well when set up to use my local machine and my personal credentials?


Can you give me more details? Do you want to run kubetail locally but pull logs from a remote cluster? Yes, this is possible.


Not the original commenter but I manage a similar system:

1. Wait timings for jobs.

2. Run timings for jobs.

3. Timeout occurrences and the stdout/stderr logs of those runs.

4. Retry metrics, and if there is a retry limit, then metrics on jobs that were abandoned.

One thing that is easy to overlook is giving users the ability to define a specific “urgency” for their jobs, which would allow for different alerting thresholds on things like run time or wait time.
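
To make that concrete, here's a minimal sketch of per-urgency alert thresholds on wait/run timings (all names and threshold values below are invented for illustration):

    // Hypothetical sketch: per-urgency alert thresholds for job
    // wait/run timings. Names and values are made up.
    type Urgency = "low" | "normal" | "critical";

    interface JobMetrics {
      jobId: string;
      urgency: Urgency;
      waitMs: number;    // time spent queued before starting
      runMs: number;     // time spent executing
      timedOut: boolean;
    }

    // Max acceptable wait/run time per urgency level (illustrative).
    const thresholds: Record<Urgency, { waitMs: number; runMs: number }> = {
      low:      { waitMs: 600_000, runMs: 3_600_000 },
      normal:   { waitMs: 60_000,  runMs: 600_000 },
      critical: { waitMs: 5_000,   runMs: 60_000 },
    };

    function shouldAlert(m: JobMetrics): boolean {
      const t = thresholds[m.urgency];
      return m.timedOut || m.waitMs > t.waitMs || m.runMs > t.runMs;
    }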


This is great - we do capture all logs for each run, including any retries, so you can see both errors and successes. We have all of these other metrics internally, but we need to expose them to our users!

Observability is especially important for background work because it's not always tied to a specific user action - you need a trail to understand issues.

> One thing that is easy to overlook is giving users the ability to define a specific “urgency” for their jobs, which would allow for different alerting thresholds on things like run time or wait time.

We are adding prioritization for functions soon, so this is helpful for thinking about telemetry for jobs with different priority/urgency levels.

re: timeouts - managing timeouts usually means managing dead-letter queues. Our goal is to remove the need to think about DLQs at all by building metrics and smarter retry/replay logic right into the Inngest platform.


Sorry, but DLQs make it easier to set up the kind of alerts where a human needs to look at something ASAP. I'm not sure they can be gotten rid of, but maybe you'd call them something else.


Inngest engineer here!

Agreed that alerting is important! We alert on job failures, plus we integrate with observability tools like Sentry.

For DLQs, you're right that they have value. We aren't killing DLQs but rather rethinking them with better ergonomics. Instead of having a dumping ground for unacked messages, we're developing a "replay" feature that lets you retry failed jobs over a period of time. Our planned replay feature will run failures in a separate queue, which can be cancelled at any time. The replay itself can be retried as well if there's still a problem.
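
In rough TypeScript, the general idea looks something like this (a sketch only, not our actual implementation - every name below is invented):

    // Sketch of the replay idea only - not the real implementation.
    interface FailedJob { id: string; payload: unknown }

    class ReplayRun {
      private cancelled = false;
      constructor(private failures: FailedJob[]) {}

      cancel() { this.cancelled = true; }

      // Re-run failures on a separate queue; stop early if cancelled.
      async run(handler: (job: FailedJob) => Promise<void>) {
        const stillFailing: FailedJob[] = [];
        for (const job of this.failures) {
          if (this.cancelled) break;
          try {
            await handler(job);
          } catch {
            stillFailing.push(job); // a replay can itself be replayed
          }
        }
        return stillFailing;
      }
    }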


I was affected by this, and so far, Windscribe is a far superior service.

The transition was seamless.


Just signed up and this is almost exactly what I've been looking for.

The GitHub repo mentions a REST API that can be used to push arbitrary data into the system. Are there any docs around this yet?


Thanks for having a look, and glad to hear it. No API docs yet, but I've just created an issue for this (https://github.com/onejgordon/flow-dashboard/issues/24).

If you're familiar with Python web apps, you can see api.py:TrackingAPI, which takes a date and a simple JSON data param.
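
Roughly, a call would look something like this (a sketch only - the URL, route, and field names below are illustrative, so check api.py for the real signature):

    // Sketch only: the URL, route, and auth here are assumptions.
    fetch("https://your-flow-instance/api/tracking", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        date: "2017-01-15",                // the date param
        data: { pushups: 20, coffee: 2 },  // arbitrary JSON data
      }),
    }).then((res) => console.log(res.status));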

Let me know if that looks like it fits your need, and if not, I'd be curious to hear more about how you'd like such an API to work.


There is no reason to feel alone. Rare, perhaps, but not alone - there are others who share your feelings (including myself).

These people are often labelled "pragmatists". They also lean towards a more general scope of knowledge and thus often lack a "speciality".

There are also those that simply wish to solve the problem in the "best" way possible, and will use whichever stack gets them closer to that ideal solution.

disclaimer: purely based on my experience/opinion/perspective.


Turning off JavaScript also removes most paywalls, even the ones that have started protecting against direct links from search results.


That works great for the NY Times paywall, but not WSJ.

For WSJ I open a clean browser (empty cache, no cookies, no nothing) then hit the "web" link.


I believe this is symptomatic of an industry moving away from job-board-based recruiting. There has been a significant shift in the recruiting sphere, and many talent teams are starting to focus more on sourcing and pursuing passive candidates.


This is the exact situation I'm in and I'm on my 3rd TN. Get a good lawyer and this shouldn't be a problem.

If you do get this visa though, be very careful what you say to the border guards. You need to ensure that when speaking about your work, you always frame it as "work in support of scientific research/innovation".


The benchmarks for node.js are terribly misleading. The node.js implementations only ever spawn a single process, and thus node is only running on a single core and uses only a single thread.

Specifically, the HTTP server example [1] doesn't even bother using the standard library's Cluster module [2], which is designed for distributing server workloads across multiple cores.
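
For reference, the basic Cluster pattern is only a few lines. This is a minimal sketch, not the benchmark's actual server:

    // Minimal Cluster sketch: fork one worker per core; the workers
    // share the same listening socket via the master process.
    import cluster from "cluster";
    import { cpus } from "os";
    import http from "http";

    if (cluster.isMaster) {
      for (let i = 0; i < cpus().length; i++) cluster.fork();
    } else {
      http.createServer((_req, res) => res.end("hello\n")).listen(8080);
    }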

All node.js services/applications I've worked on in the past 3 years (that are concerned with scale) utilize a multi-process node architecture.

The current benchmark can only claim that a single Python process that spawns multiple threads is 2x faster than a single node.js process that spawns only one thread.

This fact may be interesting to some, but it is irrelevant to real-world performance.

[1]: https://github.com/MagicStack/vmbench/blob/master/servers/no...

[2]: https://nodejs.org/api/cluster.html


There is nothing misleading about the benchmarks. It is explicitly stated that ALL frameworks were benchmarked in single-process, single-threaded modes.

Yes, in production you should run your nodejs app in cluster mode, your Python apps in a multiprocess configuration, and never use GOMAXPROCS=1 for your Go apps!

Running all benchmarks in multiprocess configuration wouldn't add anything new to the results.


The main premise of my comment is that the benchmarks do not resemble real-world performance and are therefore misleading.

The comment above (https://news.ycombinator.com/item?id=11626762) further expands on why these kinds of benchmarks, although interesting, have no real value.

Each implementation does something wildly different and responds to different inputs with completely different outputs.

To put it metaphorically, if you put a car engine in two completely different chassis and then race them on a track, you aren't gaining any real insight into relative performance of the engine in the two vehicles.

Also, just to be clear, my qualms are with the benchmarks alone, I think the library is great! Thanks for all the hard work :)


I guess I look at the benchmarks in a bit of a different light.

These benchmarks are primarily comparing event loops and their performance. The TCP benchmark is very fair; HTTP, maybe not so much. The point is to show that you can write super fast servers in Python too, as long as you have a fast protocol parser.

As for the HTTP benchmarks, I plan to add more stuff to httptools and implement a complete HTTP protocol in it. I'll rerun the benchmarks, but I don't expect more than a 20% performance drop.


Since this is a benchmark of event-loop-based frameworks, it makes sense to only spawn a single event loop and test against that. I looked through the code for the Python servers and they are all configured for a single event loop, making this a comparison on equal footing.

Yes, it's true you normally run multiple node processes in production, but you likewise run multiple asyncio/tornado/twisted processes. I don't see it as a big deal, or misleading, to compare them in this sense.


It says that all the benchmarks are single-threaded and even mentions at the end that you could push performance further with multicore machines.

It doesn't matter anyway; with one thread per core it would be pretty straightforward to scale on beefier machines.


Do the other benchmarked servers spawn multiple processes?


No, even Go is explicitly configured to only use one scheduler:

> We use Python 3.5, and all servers are single-threaded. Additionally, we use GOMAXPROCS=1 for Go code, nodejs does not use cluster, and all Python servers are single-process.


One thing I find interesting is that everyone loves to hate CoffeeScript, but its individual features/syntax are consistently lauded in conversations about other languages.

(Not to mention half of ES6 existed in CoffeeScript first, but that's a gripe for another day)


The composition of features is more important than just the individual features - CoffeeScript is a great example of when feature composition goes wrong.

