Hacker News | diurnalist's comments

> Also, thoughts on Vector vs otel agent?

IMO, with the current tech, it entirely depends on what data you're talking about.

For metrics and traces, I would personally use the OTel Collector. You will have much more flexibility, and it's pretty easy to write custom processors in Go. Support for traces is quite mature and metrics isn't far behind. We've been running collectors for production-scale metric and trace ingest for the past couple of years, on the order of 1M events/sec (metric datapoints or spans). You mentioned low volume, so that's less important, but I wanted to mention it in case others find this comment.

Logs are a bit different. We looked into this in the past year. Vector has emerging support for OTLP, but it's pretty early. Still, I bet it's pretty straightforward if your backend can ingest via OTLP. Our main concern with running the OTel Collector as the log ingest agent was throughput/performance. Vector is battle-tested; OTel is still a bit early in this space. I imagine the gap will close over time, but I would probably still reach for Vector for this use case at higher scale. That said, YMMV, and as with any technical decision, empirical data and benchmarking on your workloads will be the best way to determine the tradeoffs.

For your scale you could probably get away with an OTel collector daemonset and maybe a deployment with the Target Allocator (to allocate Prometheus scrapes) and call it a day :)
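To make that concrete, a minimal collector config for that kind of setup might look something like the sketch below. The exporter endpoint is a placeholder, and this is just an illustration of the pipeline shape, not a recommendation:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

processors:
  batch: {}

exporters:
  otlphttp:
    # Placeholder: point this at whatever backend you actually use.
    endpoint: https://backend.example.com

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```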


The article imo misuses the term BFF a bit, but perhaps its meaning has evolved over time. I was at SoundCloud when the BFF was being introduced as an important piece of the microservice architecture; this post explains the purpose well[0]. BFFs enable you to build more general-purpose, domain-specific services that make few assumptions about their callers and how they are used. The BFF then provides a composition layer you can use to, e.g., call one service to get a list of tracks, then call an authorization service with the list of track IDs to get geo-specific distribution rules for them, and compose that together into one materialized presentation view for the clients of the BFF.
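That composition step might look something like this Python sketch. To be clear, the service clients, field names, and the playability rule here are all invented for illustration, not SoundCloud's actual code:

```python
import asyncio


async def fetch_tracks(playlist_id):
    # Stand-in for a call to a general-purpose tracks service.
    return [{"id": "t1", "title": "First"}, {"id": "t2", "title": "Second"}]


async def fetch_geo_rules(track_ids, region):
    # Stand-in for the authorization service: track id -> playable?
    return {tid: region == "US" for tid in track_ids}


async def playlist_view(playlist_id, region):
    """BFF handler: compose two upstream calls into one client-shaped view."""
    tracks = await fetch_tracks(playlist_id)
    rules = await fetch_geo_rules([t["id"] for t in tracks], region)
    return {
        "playlist_id": playlist_id,
        "tracks": [{**t, "playable": rules.get(t["id"], False)} for t in tracks],
    }
```

The client gets one response shaped for its UI, while the general-purpose services stay ignorant of that shape.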

Eventually I think there was some work to move to GraphQL, which can solve some of the same problems. But GraphQL is a technology, and BFF is more of a pattern. There is a later reflection on that blog that makes this distinction, which I only read today.[1] It makes another observation that I kind of forgot about, because it was hiding right in front of my face as a worker there:

"The defining characteristic of a BFF is that the API used by a client application is part of said application, owned by the same team that owns it, and it is not meant to be used by any other applications or clients."

The ownership model is indeed a big deal. In practice, it helped in many ways to have a sort of intermediate layer between the client applications and the rest of the architecture. For example, in the SoundCloud web application, only the first 5 tracks of a playlist are visible to the end-user when you load a page of playlists. So the web BFF had special logic to load only partial track metadata past track 5, which had a significant impact on scalability and latency, especially when rendering a lot of playlists with lots of tracks!
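As a sketch of that idea (the cutoff, names, and shapes are hypothetical, not the actual implementation):

```python
VISIBLE_TRACKS = 5  # hypothetical cutoff: how many tracks the UI shows up front


def playlist_page(track_ids, fetch_metadata):
    """Hydrate only the tracks the client will render immediately;
    everything past the fold stays as a bare id, loaded on demand."""
    head = fetch_metadata(track_ids[:VISIBLE_TRACKS])
    return {"tracks": head, "remaining_ids": track_ids[VISIBLE_TRACKS:]}
```

Because the BFF is owned by the client team, a client-specific optimization like this has an obvious home; it would be an awkward fit inside a general-purpose tracks service.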

[0]: https://philcalcado.com/2015/09/18/the_back_end_for_front_en... [1]: https://philcalcado.com/2019/07/12/some_thoughts_graphql_bff...


If you have lots of backend services, building a gateway API (aka BFF) using GraphQL is really nice. Your GraphQL layer handles composing all the resources into exactly what the client needs, without having to define a REST endpoint for every single use case.

On the frontend you compose the schemas that each component needs (fragments) and can pull exactly the data needed in one request to the gateway, which will use the minimum required calls to the upstream services and execute them in the most efficient order.
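To illustrate what that composition looks like on the wire (hypothetical schema; all fragment and field names are invented):

```graphql
# Each UI component declares the fields it needs as a fragment.
fragment TrackRow on Track {
  id
  title
  artworkUrl
}

fragment PlayerControls on Track {
  streamUrl
  durationMs
}

# The page composes the fragments into one request to the gateway.
query PlaylistPage($id: ID!) {
  playlist(id: $id) {
    title
    tracks {
      ...TrackRow
      ...PlayerControls
    }
  }
}
```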


> gateway API (aka BFF)

A gateway API is an API that gates other APIs. A BFF is a gateway API used for a particular purpose, clearly identified by the name ("backend for frontend").

Thus, they are not the same: one is a wider term, another is a focused implementation of that concept.


And you can have multiple Backends for Frontends for different types of clients (e.g. browser, mobile)


This is very true! In practice I have seen that it is exceedingly difficult to write the GraphQL "sinks" (I don't recall the exact term) that can intelligently handle things like batching and pre-filtering, where one service in the composed call could and should be called first to limit the result set. YMMV; in my experience it can be simpler to be explicit about these things, especially since in a "true" BFF the client team is also responsible for their immediate backend, which gives them that flexibility, perhaps at the cost of more boilerplate.
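Being explicit about the call order can be as simple as the sketch below (the clients and method names are invented): the cheap authorization check runs first so the expensive metadata fetch only covers what survives the filter, rather than hoping a generic query planner figures that out.

```python
def playable_track_metadata(track_ids, region, auth_client, tracks_client):
    # Pre-filter with the cheap authorization call first...
    allowed = auth_client.playable_ids(track_ids, region)
    # ...so the expensive metadata fetch covers only the surviving ids.
    return tracks_client.get_metadata(allowed)
```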


This is a really interesting reply, thanks for taking the time to compose it. As a SoundCloud user, I actually noticed little things, like how tricky the playlist loading problem might be, and wondered precisely how the client was getting updates as I scrolled. Interesting to know a bit more about the backend design.

I suppose I’m still wondering, in your example, why geo-restrictions couldn’t have just been an API feature of the list service, vs. both pieces of data needing to be fetched and reduced by a third service (with that logic duplicated across clients). I also work at a large company, though, so I can see how “backends owned directly by client teams” is a path one might prefer when dealing with other teams with, e.g., other priorities.


just want to say, this is the most comprehensive, eloquent, and succinct description of the Staff role I have seen put to page.


I don't believe this is a solved problem; it has been around since the OpenTracing days[0]. I do not think that span links, as they are currently defined, would be the best place to do this, but maybe they will be extended to support it in the future. Right now span links are mostly used to correlate spans causally _across different traces_, whereas, as you point out, there are cases where you want correlation _within a trace_.

[0]: https://github.com/opentracing/specification/issues/142


Keycloak solves a complex problem.

It is built on a plugin architecture, so plugins are certainly a viable option and this is documented in more detail here[0]. In general I have found the Keycloak docs thorough and well-written. When I operated Keycloak I built a few plugins to solve specific needs/assumptions we had around IdP when migrating to Keycloak from a bespoke solution.

Re: your second point, the docs also describe this in detail[1]. Having the realm data exist in a simple form that can be exported/imported was very useful. However, I would have liked it if they had thought more about how to do live backup/restore; perhaps that is easier now than it was at the time.
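For reference, on current (Quarkus-based) Keycloak the export/import flow looks roughly like this; the paths and realm name are placeholders, and it's worth checking the linked docs for your version:

```shell
# Export a realm, including users, to a directory of JSON files
bin/kc.sh export --dir /tmp/keycloak-export --realm myrealm --users realm_file

# Re-import it on another instance (e.g. for provisioning)
bin/kc.sh import --dir /tmp/keycloak-export
```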

[0]: https://www.keycloak.org/docs/latest/server_development/inde... [1]: https://www.keycloak.org/server/importExport#_importing_a_re...


> Keycloak solves a complex problem.

A lot of problems, actually, and most people don't have many of them. If you just want an OIDC server in front of your self-hosted apps you can solve that with a much simpler and faster tool.


The docs can say whatever they want; there were large parts of our configuration that weren't included in an export, so we couldn't automate provisioning.


Good find :)

I find it interesting how this ties into the "productivity paradox."[0] The idea the author seems to be getting at--that software accelerates the creation of ever more elaborate solutions (often to problems created by prior iterations of software), and in the process leaves a wake of complexity that frustrates and baffles the society it was supposed to serve--is something I'd like to read more about.

[0]: https://en.wikipedia.org/wiki/Productivity_paradox


David Graeber addresses precisely this in several pieces that ultimately became the book _Bullshit Jobs_. Here is a snippet from a relevant interview[1].

"What happened? Well, I think part of it is a hypertrophy of this drive to validate work as a thing in itself. It used to be that Americans mostly subscribed to a rough-and-ready version of the labor theory of value. Everything we see around us that we consider beautiful, useful, or important was made that way by people who sank their physical and mental efforts into creating and maintaining it. Work is valuable insofar as it creates these things that people like and need. Since the beginning of the 20th century, there has been an enormous effort on the part of the people running this country to turn that around: to convince everyone that value really comes from the minds and visions of entrepreneurs, and that ordinary working people are just mindless robots who bring those visions to reality.

But at the same time, they’ve had to validate work on some level, so they’ve simultaneously been telling us: work is a value in itself. It creates discipline, maturity, or some such, and anyone who doesn’t work most of the time at something they don’t enjoy is a bad person, lazy, dangerous, parasitical. So work is valuable whether or not it produces anything of value."

[1]: https://theanarchistlibrary.org/library/david-graeber-and-th...


This sort of Calvinistic ideal of work as an inherent virtue is what was behind the “Arbeit macht frei” slogans on archways at concentration camps.


Do you think it's virtuous for an able adult to demand that other people supply his survival and happiness?


Survival, yes, and happiness to a certain extent. This is probably one of the most important realisations of the 20th Century, that by providing for the basic needs of everyone (and thereby essentially amortizing those costs over the entire population), we can produce overall better results for the country (measured in efficiency, production, happiness, etc).

So by setting up a national health service in the UK, you ensure a base level of health across the whole country. By providing free education (and free higher education), you ensure that the workforce is more skilled, and able to make a wider range of choices. By providing basic housing, food, etc to those who need it, you prevent people having to opt out of society in order to survive.

This sort of safety net is cheaper in the long run (it's worth watching Unlearning Economics' video "Free Stuff is Good Actually", which goes into detail on this point), but it's also fundamentally about freedom. If you have access to free education, you can make choices about your career and life that you just couldn't before. If you have free healthcare and financial support if you get sick, you can continue working for longer, but you'll also have more time to make bad choices and weird choices - the sorts of choices that are fundamental to healthy entrepreneurship. If you have a financial safety net, you can take more interesting risks, because the consequences are less severe.

So yeah, I think if you believe freedom is a virtue, then I think you also need to see it as a virtue for people to be supplied with the tools that can give them freedom.


You didn't answer the question. Taking for granted that insurance and social welfare safety nets are good for civilized society, it's also useful to recognize that they're not free; someone has to work to pay for them. Do you see any virtue in wanting to work so that others don't have to carry as much of your weight? Or do you only value the virtue in other people being willing to pay your way for you?


I don’t see how that’s relevant to the idea that work (regardless of how useful or enabling of survival) is an inherent virtue.


For many of us, work is partly motivated by the desire to avoid making other people carry our weight. Do you not see anything virtuous about that?


You’re still talking about something different than work for work’s sake.


No, I think Nazism was behind those words.


It sounds like you might be suggesting I was conflating or blaming Protestant ideals for evil Nazi actions. Instead, I am pointing out an earlier cultural influence that persisted. In a way, there’s something far more sinister about the idea that a Nazi believed there was some sort of genuine freedom to be had, as opposed to it being outright malicious.

“He seems not to have intended it as a mockery, nor even to have intended it literally, as a false promise that those who worked to exhaustion would eventually be released, but rather as a kind of mystical declaration that self-sacrifice in the form of endless labor does in itself bring a kind of spiritual freedom”

- historian Otto Friedrich


You will hear the same thing under communists, something like "only labor is glorious" (but of course this only applies to the plebs, not their leaders)


"The US Research Software Engineer Association (US-RSE) is a community-driven effort that brings together people who write and contribute to research software within the US."

I looked at the "who's hiring?" post and there were not many positions posted in science/research. I wanted to share this relatively new resource, which includes a job board advertising open positions in research software. You do not need a PhD for most (or any) of these positions. I worked as a research software engineer for several years; it was very rewarding, and a very different kind of work than I was doing in industry. I don't think this opportunity is well known yet, and I suspect others might be interested in making the move from industry to research, even if just to try it out.


Sr. cloud computing researcher | Chicago, IL | ONSITE (indep. research group based out of University of Chicago)

Disclosure: I left my position on this research team this summer after a pleasant 4 years. It's quite a different environment than one normally might find on these listings.

The Nimbus team[1] is funded by multiple grants from the National Science Foundation and concentrates on building accessible, powerful cloud infrastructure for the comp sci research community. Our users are folks who are doing work that cannot be done on commercial clouds for a variety of reasons (lack of low-level access, performance variability too high, cost $$, etc.) We work with stuff like OpenStack, Kubernetes, Liqid, and are always exploring new things, like how to build a distributed edge/IoT testbed, and how to enable practical reproducibility of CS experiments.

I think more development talent should consider working in science/research. There are lots of opportunities and I don't think they are often advertised well.

The job is in Chicago but we work largely remotely. Still, in-person has been preferred, especially since it's sometimes helpful to work with collaborators at Argonne National Laboratory west of Chicago. Happy to answer more via PM.

[1]: https://www.nimbusproject.org/jobs


Hi HN! I previously submitted this project 7 months ago (https://news.ycombinator.com/item?id=26026972) and am trying it again; I hope that is OK.

I spent time writing this utility after being surprised that there didn't seem to be many resources in this space. I wanted to write a weekly report that would send mail to my boss (and boss' boss) about the state of some key metrics. The main design goals were to (1) support a simple on-premises deployment use-case and (2) leverage the existing Python ecosystem around visualizations. I hope others find it useful and am interested in feedback/suggestions. It is open source but is released under the Prosperity license, so if it's used commercially you should purchase an annual license after 30 days of trial.

