Anyone else surprised that the download links are plain HTTP without SSL? I know it's a page that in the past I would have typically not worried about securing - but nowadays it's SSL everything or else your browser yells at you.
Yeah, this is bad. The page almost seems like someone’s pet project that didn’t have any explicit funding and they got bored or left Netflix in 2020. I’m not sure how that would explain the lack of SSL cert except for just general lack of thoroughness.
I think you're looking at OTel from a strictly infrastructure perspective - which CloudWatch does effectively solve without any added effort. But OTel really begins to shine when you instrument your backends. Some ecosystems (Node.js, for example) have a whole slew of auto-instrumentation, giving you rich traces with spans detailing each step of the HTTP request, every SQL query, and even usage of AWS services. Making those traces even more valuable is that they're linked across services.
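To make that concrete, here's roughly what the Node.js bootstrap looks like - a minimal sketch, assuming the `@opentelemetry/sdk-node` and `@opentelemetry/auto-instrumentations-node` packages and a local collector on the default OTLP/HTTP port; the service name and URL are placeholders:

```typescript
// tracing.ts - make sure this runs before the rest of the app is loaded
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  serviceName: "checkout-api", // placeholder service name
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces", // assumes a local OTel Collector
  }),
  // Registers instrumentation for http, express, pg, the AWS SDK, and friends -
  // that's where the span-per-HTTP-request / SQL query / AWS call traces come from.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```

Once every service boots something like this, the automatic context propagation is what links the traces together across services.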
We've frequently seen a slowdown or error at the top of our stack, and the teams are able to immediately pinpoint the problem as a downstream service. Not only that, they can see the specific issue in the downstream service almost immediately!
Once you get to that level of detail, having your infrastructure metrics pulled into your OTel provider does start to make some sense. If you observe a slowdown in a service, being able to see that the DB CPU is pegged at the same time is meaningful, etc.
Agree with you on this.
OTel agents allow exporting all host/k8s metrics correlated with your logs and traces. Exporting AWS service-specific metrics with OTel is not easy, though. To solve this, SigNoz has 1-Click AWS Integrations: https://signoz.io/blog/native-aws-integrations-with-autodisc...
Also SigNoz has native correlation between different signals out of the box.
Not confusing anything. Yes you can meter your own applications, generate your own metrics, but most organizations start their observability journey with the hardware and latency metrics.
OTel provides a means to sugar any metric with labels and attributes, which is great (until you have high cardinality), but there are still things at the infrastructure level that only CloudWatch knows about (on AWS). If you're running K8s on your own hardware, OTel would be my first choice.
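As a rough illustration of that attribute sugaring (and the cardinality trap) with the OTel JS metrics API - meter and metric names here are made up, and it assumes an SDK MeterProvider is registered elsewhere:

```typescript
import { metrics } from "@opentelemetry/api";

const meter = metrics.getMeter("orders-service"); // placeholder meter name
const counter = meter.createCounter("orders_processed_total");

// Low-cardinality attributes like region or status are cheap to slice on.
counter.add(1, { region: "us-east-1", status: "ok" });

// A raw user or request ID creates a new series per value and will eventually
// blow up your metrics backend - keep identifiers in traces, not metric labels.
// counter.add(1, { user_id: "8f3c0a..." });
```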
> And I'm actually quite happy that most Deno projects don't have a custom testing and linting setup.
I feel similarly. The standard configurations (e.g. tsconfig, linting, formatting) and bolts-included tooling (test, lint, fmt, etc.) are what make Deno so great for developers.
I've started using Deno in my spare time for various projects - and it just _feels_ more productive. I go from idea to testing TypeScript in minutes - which never happened in Node land.
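As a tiny example of that zero-setup flow (file and function names made up): one file, and `deno test`, `deno fmt`, and `deno lint` all work against it with no config at all.

```typescript
// add_test.ts - run with `deno test`; no tsconfig, package.json, or test framework to install
import { assertEquals } from "jsr:@std/assert";

function add(a: number, b: number): number {
  return a + b;
}

Deno.test("add sums two numbers", () => {
  assertEquals(add(2, 3), 5);
});
```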
> The standard configurations (e.g. tsconfig, linting, formatting) and bolts-included tooling (test, lint, fmt, etc.) are what make Deno so great for developers.
And that's great for greenfield projects - although there's competition with Biome and Vite / Vitest for a lot of those - but the vast majority of Node use today is existing projects, and at least at one point Deno (and Bun, maybe others) were marketed (I think?) as a drop-in replacement for NodeJS. But maybe I'm misremembering.
> Server authors working on large systems likely already have an OAuth 2.0 API.
I think this biases towards sufficiently large engineering organizations where OAuth 2.0 was identified as necessary for some part of their requirements. In most organizations, they're still using `x-<orgname>-token` headers and the like to do auth.
I'm not sure that there's a better / easier way to do auth for this use case, but it does present a significant hurdle to adoption for those who have an API (even one ready for JSON-RPC!) that is practically ready to be exposed via MCP.
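For contrast, the shared-secret header auth those existing APIs typically have looks roughly like this - a sketch, with `x-acme-token` and the env var as made-up placeholders - trivial to stand up, but a long way from an OAuth 2.0 flow:

```typescript
import { createServer } from "node:http";

// Placeholder shared secret; in practice usually issued per client or per org.
const EXPECTED_TOKEN = process.env.ACME_API_TOKEN;

const server = createServer((req, res) => {
  // The `x-<orgname>-token` pattern from above: one static header, no token
  // issuance, refresh, or scopes - which is exactly what OAuth 2.0 would add.
  if (!EXPECTED_TOKEN || req.headers["x-acme-token"] !== EXPECTED_TOKEN) {
    res.writeHead(401, { "content-type": "application/json" });
    res.end(JSON.stringify({ error: "invalid or missing token" }));
    return;
  }

  // ...hand the request off to the JSON-RPC / MCP handler here...
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});

server.listen(8080);
```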
> I think this biases towards sufficiently large engineering organizations where OAuth 2.0 was identified as necessary for some part of their requirements. In most organizations, they're still using `x-<orgname>-token` headers and the like to do auth.
I don't think that's it. Auth is a critical system in any organization, and larger organizations actually present more resistance to change, particularly in business-critical areas. If anything, smaller orgs have an easier time migrating critical systems such as authentication.
Everyone I know who uses Teams does not like it. Is that selection bias based on my personal network of technically savvy peers and professionals, or is it founded in a broader experience?
More importantly, does Microsoft really believe this is a winning product? Are they that out of touch with their customer base?
I’ve never met anyone who chose to use Teams. My organization forces me to use it. I don’t like it, but to its credit, it is better than “Skype for business.”
It’s strange to me how companies take these huge messaging brands and then just burn them down (Skype, MSN Messenger, Yahoo Messenger, Google’s chatbominations, even AIM). As a user, they don’t fizzle out due to user disinterest, it’s always the owner just sort of doing away with it for something new.
Is there just not a financial model here? Cheap to run and lots of users but no money. Skype seemed to be profitable back when I would use it for international calling.
The issue is that they have tools which work well in one era, but fail to adapt when things change.
AIM, Yahoo, MSN etc. were mostly plain-text systems, built around presence and a single client. Then came mobile phones with unreliable connections, and messengers that supported sending pictures etc., offered easy sign-up (iMessage via the Apple ID every iPhone user already has; WhatsApp via the phone number, which directly linked your contacts), and worked without a battery-draining persistent connection.
Skype originally worked by making random clients "super nodes" which coordinated the network, without needing a big data center managing it all. Making phones super nodes wasn't an option so they had to change their protocol in big ways.
So adapting was a cost and changed user experience, while newcomers grew.
In Google's case there was the added problem of missing strategic leadership: each team built its own new messenger, but nobody maintained any of the old ones.
Accidental Tech Podcast episode 581 [1] had a great conversation about the reason why Teams is winning: Microsoft was using their dominant position in office applications to win market share.
Specifically, they would offer Teams for free in a bundle with Office (which basically every company buys anyway). Every manager could strike Slack from their expenses, replace it with Teams and claim great success.
Microsoft has since been forced to change their tactics [2-3], but the damage is done.
This was obvious, at least to me. MSFT has always offered Teams as a "free" add-on to O365 licenses. Google does the same with Meet and Workspace licenses.
It's a real shame, given that Zoom is leagues better than both solutions. But "free" is free :(
Zoom? Better?
Maybe for a video call... but add a couple hundred users to a call and you start hitting limits.
Granted, those are not very common, but I think the killer feature of Slack and Teams is discoverable channels. Don't get me wrong, Teams UX and performance are terrible on macOS - the whole experience is like taking a school bus onto a freeway - but feature-wise it's very complete IMHO, and it can handle 500+ user video calls with no problem.
I think this is exactly why it's getting all the hate. People being forced to migrate from Slack to Teams. Not only do they lose years of archives, but the UX and features are a huge downgrade for such an essential tool.
If it was Zoom -> Teams, I don't think anyone would care so much.
Most basic communication needs, if your basic needs don't include messages being communicated reliably.
Some people use these products to go through the motions of appearing to do work, and don't seem to be aware how ineffective they're being.
Whether MS office suites slot right in because the org is already dysfunctional, or whether the org is dysfunctional because MS office suites have made them that way over the years, I don't know.
I haven't used Zoom extensively, so I cannot comment on its performance. I just had a meeting this morning on Teams using video, and my MBP battery went from 95% to 49% in about 25 minutes (it normally lasts for about 5-6 hours under normal workflow). That was after I had to restart Teams the first time I picked up the call, because it hung and didn't display the conference window.
I find Teams performance atrocious by modern standards. It's not great on W10, but on my Mac it's horrible. The conversation buffer is abysmal; you have to refresh a million times if you need to scroll up in the conversation history.
Most of my peers use windows, and have constant crashes, incorrectly identified input devices like mics and cameras, sluggish performance when sharing screens, etc...
As of right now, search appears to be completely broken. I can see that the keyword I'm looking for is in the messages. But it won't show up via search (either from the top bar, or from the helpful "press ctrl-f to search within this channel" right-side-bar.)
The UI is aggressively debouncing, so it won't search again unless I change the query.
The UI message is "We couldn't find any results in this channel. Check for spelling, try another search keyword, or search in another channel."
Looking into the network response, I see some fun JSON:
It is not the best, but it offers just enough and is good enough that you don't want to spend on a separate dedicated product.
Especially as Teams comes as part of a bundle and is integrated with it.
I hate Skype's interface (the UI is just so bad, even screen sharing), and I also hate the SharePoint integration, but the reality is that if you look at costs, you will use it.
A separate issue is that the Windows 11 taskbar now only has space for 11 open windows... wow, this is bad for any office work.
My guess is that for the VPs and PMs on Teams, it is easier to play politics and ensure your product is tightly integrated and coupled with other MS products than to build an actual quality product.
It speaks to the culture at MS, really, but it's amazing how a revenue- and cash-rich company can keep a dogshit product going.
I despise teams, all my colleagues despise teams, all my friends despise teams. It's a product that all users hate with amazing uniformity. But MS bundles it with Office, which makes demure and uninspired IT departments use it because it's less work.
Caddy is great! But am I the only one who gets frustrated seeing breaking changes in a minor version release? I know semver isn't "the one true way", but it sure does solve a problem for me. I've had too many applications FTBFS or fail to start on minor (or even patch!) version bumps because they included breaking changes.
Semver is great for libraries. Caddy is a project which has so many dimensions of "surface area" that it's hard to boil down the implications of changes into a single decimal-separated number. Presumably this is why a lot of larger projects use year-month versioning or just bump the major version every release. I'm actually considering doing the latter, except ... Go modules are very opinionated about major version bumps because the module system was designed with libraries, not commands (main funcs), in mind... and bumping the major version each release would also be a major inconvenience to dozens of plugin authors.
Caddy has a Go API, a configuration surface (two built-in: JSON and Caddyfile); a config API; HTTP, TLS, TCP, and UDP behavior; a command-line interface; etc... the list goes on and on in ways that can break a deployment. Which of those does the one and only semver version apply to? Go has opinionated tooling around semver for the Go package stuff, so we kind of have to cater to that, but end users don't really care about that. We could split the project into multiple smaller sub-projects, each with their own versions, but then it gets confusing and tedious to build and maintain. It's also inconvenient for people to contribute to that. And we'd have to ship Caddy with several versions, one for each "surface area dimension."
We've settled on mediocrity with our current versioning scheme, which I admit is not my favorite, but I haven't really found anything better yet. Year-month versions are nice except that it implies either a regular release cadence (which we don't have) or that a larger span between two releases is more significant than more frequent releases (maybe true, but maybe not); it doesn't really tell you anything about the build... just approximately when it was made, but not even exactly when it was made (a month is a big window!) - I guess if you do multiple releases per month you just tack another number on the end? Maybe it should just be a timestamp. Or we could invent an N-dimension decimal number or some sort of string that has to be split and parsed...
Anyway. We do try to be gentle with breaking changes. Most of these have been documented as deprecated for years as well as printing warnings in logs. But we try to minimize the number of these, for sure.
Your versioning system honestly doesn't sound that bad. If you took the major version out of the picture, you could consider the minor and patch versions to line up with the major and minor versioning scheme of Compatible Versioning (ComVer) [1]. I think if you were to explain your versioning system around ComVer, including the inconveniences which currently come from major version updates of Go modules, it would be quite clear. If Go were to improve upon the major version inconveniences down the road, you could then migrate your major / minor ComVer versions from the minor / patch SemVer versions to the major / minor SemVer versions. Great work, by the way.
Interesting timing for me since I was just reading the semver.org site just last week. I know that their proposal is to only include breaking changes in the major version, but in my experience lots of products use major versions for marketing purposes, and minor versions for functional changes including both breaking and non-breaking changes.
The important thing is to be clear in your docs about your versioning strategy. And from a quick search of the Caddy website, I couldn't find anything that explains this. Their install guide doesn't really mention versions at all, giving people no clues that a minor version change could break their sites.
We could improve that for sure. I am hoping to redo the website docs later this year... and will try to include some information about our development and release process.
Basically we try to put the bigger changes in the "minor" version bump (because bumping the major version introduces a lot of friction in the Go ecosystem) and encourage people to read release notes. We are open to suggestions though!
> in my experience lots of products use major versions for marketing purposes
This is what people are desperately hoping to change. The industry is tired of updating from 4.7.1 to 4.7.2 and having everything break. Please bring sanity to versioning, so people can have a reasonable expectation that x.y to x.z isn’t going to require days of rewriting stuff.
> I know that their proposal is to only include breaking changes in the major version, but in my experience lots of products use major versions for marketing purposes, and minor versions for functional changes including both breaking and non-breaking changes.
The solution is one more number: Marketing.Breaking.Feature.Bugfix
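To spell the idea out (a hedged sketch, not any real tooling): compatibility is decided entirely by the second component, and the first is free for marketing to bump at will.

```typescript
// Comparing versions under a hypothetical Marketing.Breaking.Feature.Bugfix scheme.
type Version = { marketing: number; breaking: number; feature: number; bugfix: number };

function parse(v: string): Version {
  const [marketing, breaking, feature, bugfix] = v.split(".").map(Number);
  return { marketing, breaking, feature, bugfix };
}

// An upgrade is expected to be safe iff the "breaking" component is unchanged.
function safeUpgrade(from: string, to: string): boolean {
  return parse(from).breaking === parse(to).breaking;
}

console.log(safeUpgrade("4.7.1.0", "5.7.3.2")); // true: marketing bump, same breaking component
console.log(safeUpgrade("4.7.1.0", "4.8.0.0")); // false: breaking component changed
```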
It's not clear to me that trying to assign semver semantics to a command line interface is going to work well; trying to define the "interface" for command line tools seems too challenging. Perhaps one might suggest that caddy should have used a different versioning pattern, but I have to admit that the x.y.z pattern is so prevalent these days, I have a hard time faulting someone for using it.