jalaziz's comments | Hacker News

GitHub Actions started off great while GitHub was quickly iterating, but it very much seems that GitHub has taken its eye off the ball and improvements have all but halted.

It's really upsetting how little attention Actions is getting these days (<https://github.com/orgs/community/discussions/categories/act...> tells the story -- the most popular issues have gone completely unanswered).

Sad to see Earthly halting development and Dagger jumping on the AI train :(. Hopefully we'll get a proper alternative.

On a related note, if you're considering https://www.blacksmith.sh/, you really should consider https://depot.dev/. We evaluated both but went with Depot because the team is insanely smart and they've solved some pretty neat challenges. One of the cooler features is that their caching works with the default actions/cache action. There's absolutely no need to switch out popular third party actions in favor of patched ones.


> Sad to see Earthly halting development and Dagger jumping on the AI train :(. Hopefully we'll get a proper alternative.

Hi, Dagger CEO here. We're advertising a new use case for Dagger (running AI agents) while continuing to support the original use case (running complex builds and tests). Dagger has always been a general purpose engine, and our community has always used it for more than just CI. It's still the exact same engine, CLI, SDKs and observability stack. It's not like we're discontinuing a product, to the contrary: we're getting more workloads on the platform, which benefits all our users.


Great to know. I think the fear is that so many companies are prioritizing AI workloads for the valuation bump rather than delivering actual meaningful value.


I completely understand that fear. I see lots of other tech companies making that mistake, throwing away a perfectly good product and market out of pure "FOMO". I really, really don't want us to be one of those companies.

I think what we're doing is different: we built a product that was always meant to be general purpose; encouraged our community to experiment with alternative use cases; and are now doubling down on a new use case, for the same product. We are still worried about the perception of a FOMO-driven AI pivot (and the reactions on this thread confirm that we still have work to do there); but we're confident that the product really is capable of supporting both.

Thank you for the thoughtful comments, I appreciate it.


A lot of GitHub Actions teams were impacted by the layoffs in November.

Example:

https://github.com/actions/runner/pull/2477#issuecomment-244...


Presumably the issue is that GitHub underpriced Actions, so it's not worth improving because driving more usage won't drive revenue; and that in turn forced prices down for everyone else, because everyone anchored on the Actions pricing.


I might have missed the news, but I couldn't find anything about Earthly stopping development.

What happened there?


I missed it too, but then found this: https://github.com/earthly/earthly/issues/4313


Sigh, this is awful. Earthly is/was not perfect, but it's basically the most capable build tool I've ever used. Fingers crossed there's enough enthusiasm in the community to fork it (I'd be organizing it myself if I had any experience with Go at all).


We switched to Depot last week. Our Rust builds went down from 20+ minutes to 4-8 minutes. The easy setup and their Docker builds with fast caching are really good.


This sounds promising. What made your Rust builds that much faster? Any repo you could point us to?


Check out this Dockerfile template if you're building Rust in Docker: https://depot.dev/docs/container-builds/how-to-guides/optima...

What makes Depot so fast is that they use NVMe drives for local caching and they guarantee that the cache will always be available for the same builders. So you don't suffer from the cold-start problem or have to load your cache from slow object storage.


Thanks! We already use self-hosted runners on physical machines with NVMe drives that we assembled ourselves. I was wondering if there's something else you're doing for the caching.


Founder of Depot here. For image builds, we've done quite a bit of optimization to BuildKit for our image builders to speed up certain aspects of builds, like image load, cache invalidation, etc.

We also do native multi-platform builds behind one build command. So you can call depot build --platform linux/amd64,linux/arm64 and we will build on native Intel and ARM CPUs and skip all the emulation stuff. All of that adds up to really fast image builds.

Hopefully that’s helpful!


If you're building Rust containers, we have the world's fastest remote container builders with automated caching.

You wouldn't really have to change anything in your Dockerfile to leverage this and see a significant speed-up.

The docs are here: https://docs.warpbuild.com/docker-builders#usage


We faced this at my last company and this is actually a super mild case. In our case, we were dealing with call toll fraud. We ended up with tens of thousands of dollars in charges in less than 24 hours.

In our case, Twilio reached out to us to tell us they were detecting toll fraud. Before that, we actually had no idea what toll fraud was.

We quickly tried to address it with distributed rate limiting and that worked, for all of a couple of hours. The fraudsters quickly figured out the rate limit and worked around it by spacing out the calls and using more IPs.

Eventually, we had to disable a set of countries known for toll fraud and change our product to not connect calls in a variety of scenarios.


This letter is clearly not written by someone who interacts with customers. As someone who has tried to work with Elastic, their pricing is atrocious even for the most basic of features.

You can't get alerting without paying a king's ransom for features you'll likely never use.


I think this is a terrible idea, but time will tell. As a former Hulu employee, I can say that the media rights business is a mess. It took us (and Netflix) a very long time to convince media companies that the future is streaming. Now that they understand that, it's natural that they want to control all the revenue streams for their content.

What they fail to understand is that online streaming is not easy. Netflix and Hulu have invested a lot of resources into their tech stacks. Also, have you tried building an app that works on all the different streaming devices out there? It's annoyingly difficult. You have to do all that, keep up with the latest technologies (e.g. 4K), and still provide unique features and value. Storing the content is the easiest part.

Point is, media companies should stick to what they do best and let the likes of Netflix, Hulu, and Amazon do what they do best.


They are pushing forward with this by purchasing a controlling stake in BAMTech, which is the streaming tech company behind HBO Now, MLB and others.


I saw that and I can't speak for HBO Now, but my experience with WatchESPN (which apparently also streams through BAMTech) has been abysmal. This may be incredibly naive of me, but I'd much rather have an independent tech powerhouse like Netflix focused on content delivery.

In my experience, traditional media companies stifle innovation instead of encouraging it. It was never easy to work with them, and they held us back. The whole industry is a mess. I can't imagine Disney buying a controlling stake in BAMTech will make things better.

That being said, Disney is better with technology than most other media companies. Of course, I could be completely off-base here. It's just my opinion.


Full disclosure: I work for Highfive.

My coworkers and I are here and ready to answer any questions!


I've been wondering the same thing. At the very least you should be able to argue that they've changed the contract and you should no longer be held liable for early termination.


I agree completely. My bet is this is Comcast's way of fighting back against net neutrality. With 4K streaming coming fast, it seems like they want to further pad those profit margins under the guise of fairness.


That's not what's going on. You're not paying a flat price; if you want more bandwidth, you pay for it. The currency for an ISP is bandwidth, and the amount of data transferred does not matter. Data caps are basically pure profit for Comcast, built on an artificial excuse that last-mile ISPs have effectively made up.


But it does matter. ISPs simply don't have the capacity for every user to max out their connection simultaneously. That is not, and never has been, how Internet infrastructure works. That'd be like building a 100-lane highway because it gets congested on Christmas.

You're paying for peak performance, not 24/7 performance.
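
To put rough numbers on that (these figures are made up purely for illustration, not any particular ISP's actuals):

    # Back-of-the-envelope contention example (illustrative numbers only).
    subscribers = 1000            # homes sharing one neighborhood node (assumed)
    plan_mbps = 250               # advertised per-subscriber rate (assumed)
    backhaul_mbps = 10_000        # shared uplink capacity, 10 Gbps (assumed)

    sold_mbps = subscribers * plan_mbps               # 250,000 Mbps "sold"
    contention_ratio = sold_mbps / backhaul_mbps      # 25:1 oversubscription
    print(f"{contention_ratio:.0f}:1 contention")     # fine as long as average
                                                      # use stays far below peak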


You're paying for a rate, not an absolute amount, and Internet infrastructure is built around that rate (bandwidth). You don't buy a 10 GB switch or router; you buy a 1 Gbps switch or router. ISPs peer based on bandwidth.

I do understand, though, that by having a data cap they're encouraging users to use less of that bandwidth. However, again, they're just passing on costs to the consumer instead of paying a fixed cost to upgrade their peering arrangements.


> That is not, and never has been how Internet infrastructure works.

Isn't a "link of speed X" exactly how it works everywhere outside of the overselling done by last-mile monopolists?


There is nothing reasonable about this. You pay Comcast for bandwidth, so it shouldn't matter how much data you use as long as you're not exceeding that bandwidth. If I pay for 250 Mbps, I should be able to download 24/7 (assuming I did the math right, that's about 83.7 TB over a 31-day month). Comcast isn't storing the bits, they're just moving them. This is simply their way of avoiding upgrading their peering arrangements (which are already abysmal). Level3 has a great blog post explaining a similar issue (http://blog.level3.com/open-internet/observations-internet-m...).
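
For anyone checking that number, here's the rough back-of-the-envelope (decimal units, and assuming a 31-day billing month):

    # Maximum transfer at a sustained 250 Mbps over a 31-day month.
    mbps = 250
    seconds = 31 * 24 * 3600               # 2,678,400 s
    terabytes = mbps * 1e6 * seconds / 8 / 1e12
    print(f"{terabytes:.1f} TB")           # ~83.7 TB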


Please Google "contention ratios".


Well aware of contention ratios. That's the gamble the ISP makes, and now they're trying to pass the risk onto the consumer. That shouldn't be the consumer's problem. It also doesn't really relate to a data cap: contention ratios are entirely dependent on bandwidth. If everyone in my neighborhood had a 250 Mbps connection and we all downloaded at full blast at the exact same time, sure, we'd face an issue with contention ratios. The amount of data transferred doesn't matter though.


That has nothing to do with how much total data you use in a month.

It doesn't matter if I download 1 TB or 100 TB; it only matters if I'm trying to use all my bandwidth at the same time as everyone else.

Did you even look it up yourself?


How can you possibly not see these concepts as fundamentally related?

Yes, you're paying for a 250 Mbps max connection speed. If your limit is 1000 GB, then you can only max out your connection for about 9 hours a month, which means other users get a chance to use the backhaul bandwidth, not just you!

Applying quota limits means less usage and allows for higher contention ratios throughout the network, which is absolutely the only way ISPs can afford to deliver you an Internet connection without charging thousands of dollars a month.
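
The arithmetic behind that 9-hour figure, for reference (decimal units, same 250 Mbps plan and 1000 GB cap as above):

    # Hours of full-rate downloading before a 1000 GB cap is exhausted.
    cap_gb = 1000
    rate_mbps = 250
    gb_per_hour = rate_mbps / 8 / 1000 * 3600    # 112.5 GB/h
    print(f"{cap_gb / gb_per_hour:.1f} hours")   # ~8.9 hours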


That's not true.

Infrastructure costs money, absolutely. But infrastructure is largely a fixed cost (plus some maintenance cost). After that, it's all peering arrangements. ISPs here in the US can have some pretty ridiculous profit margins (probably exaggerated, but http://www.huffingtonpost.com/bruce-kushnick/time-warner-cab...).

With gigabit internet slowly becoming standard, ISPs do have to invest quite a bit in upgrading their infrastructure. However, that investment is easily amortized over time.

All that being said, the amount of data is never in play. Only the rate at which you can push data through that infrastructure matters.

Yes, data quotas help ISPs run higher contention ratios, but that's simply cost-cutting and it hurts the consumer. Effectively, the same money you're paying now is buying you less.

Last-mile ISPs have a very nasty way of trying to lie to consumers about how the internet works. I highly recommend you read the Level3 blog posts from around the time of the net neutrality discussions; they give great insight into how the internet actually works. Last-mile ISPs can be and are effectively bullies because they own that last mile.

