Hacker News | zackify’s comments

Agree with each of these points so much!

That’s why I really like the Copilot agent and Codex right now.

Even more parallel work, and from my phone when I’m just thinking of ideas.


It’s because MCP return types are so basic. It’s text, or image, or one other type in the protocol I forget.

It’s not well thought out. I’ve been building one with the new auth spec, and their official code and tooling are really lacking.

It could have been so much simpler and more straightforward by now.

Instead you have three different server transport types, and one (SSE) is already deprecated. It’s almost funny.
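For concreteness, a sketch of roughly how narrow the tool-result content union is in the MCP spec (type and field names here are from memory, and the third member is, if I recall correctly, an embedded resource; check the spec before relying on any of this):

```typescript
// Illustrative sketch of the MCP tool-result content union, not an
// authoritative copy of the spec's schema.
type TextContent = { type: "text"; text: string };
type ImageContent = { type: "image"; data: string; mimeType: string };
type EmbeddedResource = {
  type: "resource";
  resource: { uri: string; mimeType?: string; text?: string };
};
type ToolResultContent = TextContent | ImageContent | EmbeddedResource;

// Anything richer a tool wants to return has to be squeezed into this
// union, e.g. structured data serialized into a text block:
const result: ToolResultContent[] = [
  { type: "text", text: JSON.stringify({ rows: 42 }) },
];
```

That flattening of structured results into strings is a big part of why the return types feel so limiting.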


M4 Max, 128GB RAM.

LM Studio with MLX at the full 128k context.

It works well, but the initial prompt processing takes about a minute.

I wouldn’t buy a laptop for this; I’d wait for the new 32GB AMD GPU coming out.

If you do want a laptop, I consider even my M4 Max too slow to use for more than the occasional task.

It melts if you run this, and the battery drains fast. You really have to use it docked for full speed.


Yep, I have an M4 Max Mac Studio with 128GB of RAM; even the Q8 GGUF fits in memory with 131k context. Memory pressure at 45%, lol.

How many tokens per second are you both getting?

Ollama breaks for me: if I manually set the context higher, the next API call from Cline resets it back.

And Ollama keeps unloading the model from memory every 4 minutes.

LM Studio with MLX on the Mac performs perfectly, and I can keep the model in RAM indefinitely.

Ollama’s keep-alive is broken too, since a new REST API call resets it afterward. I’m surprised it’s this glitchy with longer-running calls and a custom context length.
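To illustrate the workaround, here is a sketch of the fields an Ollama `/api/generate` request would need on every single call to hold a 128k context and avoid the periodic auto-unload (`num_ctx` and `keep_alive` are the documented option names; the reset-on-omission behavior is what the comment above describes, not something I can vouch for on every version):

```typescript
// Build an Ollama /api/generate payload. If any later request omits
// these fields, the server falls back to its defaults, which is the
// reset behavior complained about above.
function buildGeneratePayload(model: string, prompt: string) {
  return {
    model,
    prompt,
    options: { num_ctx: 131072 }, // custom context length, must be repeated per call
    keep_alive: -1,               // negative value asks Ollama to keep the model loaded
  };
}

const payload = buildGeneratePayload("devstral", "hello");
```

You would POST this JSON to `http://localhost:11434/api/generate`; the point is that the options ride along on every request rather than being set once.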


I used Devstral today with Cline and OpenHands. It worked great in both.

About 1 minute of initial prompt processing time on an M4 Max.

I’m using LM Studio because the Ollama API breaks if you set the context to 128k.


How is it great that it takes 1 minute for initial prompt processing?

That time is just for the very first prompt; it is basically the startup time for the model. Once it is loaded, it responds to your queries much faster, depending on your hardware of course.

Have you tried using MLX or Simon Willison’s llm?

https://llm.datasette.io/en/stable/

https://simonwillison.net/tags/llm/


I thought this was another Vercel shill cheating its way up the Hacker News ranks, lol.

But I tried it today and it’s pretty nice. A few bugs with user creation and custom fields in the beta OAuth2 plugin, but overall it’s a very solid abstraction that will save lots of time.

Google sign-in was a breeze too.

The migrations don’t pick up nullable being true for custom fields, though, and I see someone else has already reported this.

Direct OAuth registration works; almost everything I need is here!


Been fiending to set up a side project that uses this for auth, InstantDB for the backend, and HTMX/web components on the frontend.

Crazy. I thought Hacker News didn’t care, because this happened yesterday and I never saw anything!

We’re updating our app in a couple of days; this will save a LOT of money.

We will kick users out to the web, passing a short-lived JWT in the URL to log them in there, then prompt for Apple Pay or a credit card, and then link back into our app via its deep link.
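The handoff step can be sketched as minting a short-lived HS256 JWT on the app side and verifying it on the web checkout page. This is a hypothetical illustration with made-up names and lifetimes, not the commenter's actual code; in production you would reach for a vetted JWT library rather than hand-rolling the signing:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const b64url = (buf: Buffer) => buf.toString("base64url");

// Mint a JWT that expires quickly (default 60s), so a leaked URL is
// only briefly useful. userId/secret/ttlSeconds are illustrative names.
function mintHandoffJwt(userId: string, secret: string, ttlSeconds = 60): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const now = Math.floor(Date.now() / 1000);
  const payload = b64url(
    Buffer.from(JSON.stringify({ sub: userId, iat: now, exp: now + ttlSeconds }))
  );
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${payload}`).digest());
  return `${header}.${payload}.${sig}`;
}

// Web side: recompute the signature, compare in constant time, and
// reject expired tokens. Returns the claims on success, null otherwise.
function verifyHandoffJwt(token: string, secret: string): { sub: string } | null {
  const [header, payload, sig] = token.split(".");
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${payload}`).digest());
  if (sig.length !== expected.length ||
      !timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp < Math.floor(Date.now() / 1000)) return null; // expired
  return claims;
}
```

The app would append the token as a query parameter on the checkout URL, and the web page would exchange it for a session before showing the payment form.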


So many questions:

- Why not just handle all of this in the app? Do you think Apple won't allow it?

- Are you geofencing this functionality? It seems like per other comments this is US only.

- How are you handling existing subscribers (not sure if applicable)? Will you "encourage" them to migrate?


It seems the rules right now are clearer that mentioning or linking out is OK. So that’s why.

We should geofence it to the US, yeah.

We’re thinking of sending a push notification with a discount for paying on the web and cancelling the in-app subscription.


It'll be a nightmare to get an in-app change like this through their approval process. OP's solution of offloading the entire thing to the web is a great stop-gap measure: when it is inevitably rejected, they can appeal through their App Store rep, point to this external-payment-URL decision, and have some small chance of getting it approved in the near term.


If you pay for something in the app, Apple gets 30% of that money. If you send users to your website to pay, you now get to keep it all, since the 27% external-payment fee was found illegal.
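Rough math on a hypothetical $10/month subscription (the ~3% card-processing fee on the web is my assumption for a typical payment processor, not a figure from the comment above):

```typescript
// Net revenue per $10 subscription under each path.
const price = 10;
const netViaApple = price * (1 - 0.30); // after Apple's 30% in-app cut
const netViaWeb = price * (1 - 0.03);   // after an assumed ~3% processor fee
```

So the web path keeps roughly $9.70 of every $10 instead of $7.00, which is where the "save a LOT of money" upthread comes from.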


Because customers can’t trust payments in the app. Unless I’m bounced out to somewhere I can see the URL and an SSL certificate, I’m not paying.

What you’re suggesting is a dangerous anti-pattern.


I don't know about iOS, but on Android you can just pay through a Google Pay pop-up, you don't need to input any kind of payment information to the app itself. Does iOS not have such a mechanism?


iOS does, and of course you can open an in-app Safari popup to any payment processor or website (with limitations on JavaScript speed) if you don't want to pop users into a browser and then back into the app.


> I thought hackernews didn’t care because this happened yesterday and I never saw anything!

This is a bit of an "egg-on-face" moment for the community that has relentlessly defended Apple's righteousness.


Would be interested to hear the magnitude estimate of savings.


Presumably somewhere between 30% and 0%. Let's call it...20%?


Noiiice, tx for feeding my mind.


Completely agree. Surprised it has so many upvotes when it ends with a “fix” telling you to create useState values that rely on localStorage when you’re doing SSR.
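To spell out why that "fix" breaks under SSR: `localStorage` only exists in the browser, so any read has to be guarded on the server render path. A framework-agnostic sketch (the helper name and keys are made up for illustration):

```typescript
// Read a value from localStorage, falling back safely when running
// server-side (where no localStorage global exists) or when storage
// access throws (e.g. disabled by the browser).
function readLocalStorage(key: string, fallback: string): string {
  const ls = (globalThis as any).localStorage; // undefined during SSR / in Node
  if (!ls) return fallback;
  try {
    return ls.getItem(key) ?? fallback;
  } catch {
    return fallback;
  }
}
```

In React specifically, you would initialize `useState` with the fallback and read `localStorage` inside a `useEffect` after mount, so the server-rendered markup matches the first client render and hydration doesn't mismatch.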


I guess what makes this different from EG4’s all-in-one inverter plus a 5kWh server-rack battery is the integration of software and hardware.

I think you have a great shot at being successful, because with the cheaper options the software isn’t great or fully integrated with the hardware.

Those systems are a fraction of the cost of a Powerwall, though.

I learned a lot about the DIY side of this from Will Prowse on YouTube. It’s amazing how much markup Tesla has on top of LFP batteries.


Correct, integration is key for a seamless consumer experience.


Cline over everything for me

