Hacker Newsnew | past | comments | ask | show | jobs | submit | geomcentral's commentslogin

The article gives an example of agent friendly APIs:

    {
       "plan_id": "123",
       "text": "This plan looks good, but please focus on the US market."
    }
> By preserving the text, the downstream agent can read the feedback ("Approved, but focus on US market") and adjust its behavior dynamically.

I imagine it could be useful for systems to communicate using rich dialogue. But looking at the API, it struck me as a security risk. Couldn't a 'bad' agent try to adjust the behaviour of the downstream agent in a malicious way? Or am I out of touch - is this how it's usually done?


> that said "feeling of incompleteness" will distract me for the rest of the evening

I find that it can even ruin an evening. I'll find myself trying to solve a problem in my head rather than being present with my family.

The only thing that works for me is waiting for clean breaks that happen mid-late afternoon. As soon as I reach a happy stopping point, I must stop and switch to shallow work for the rest of the afternoon. Then it's easy to leave my computer at the end of the day.

It's hard but after years of struggling with this, I found a sustainable way to balance work and life.

I also track my consistency with doing this so that I notice myself slipping back and correct it before it becomes a problem.


Java isn't really used to develop Android apps any more, especially now that Jetpack Compose is here:

Java → Business applications

Kotlin → Android


Is it possible that we'll see non-smartphone devices (eg dumbphones) being able to interop with Apple and Google through the same end-to-end encrypted protocol?


How does this work in a Scrum context?

I want to get stuff done but I need to raise a ticket, have the priority agreed, and get it planned into a sprint. Then after these layers of 'asking for yes' I am allowed to work on something without scrutiny. Let's say I wasn't subject to this process, my pull request would still need someone to approve it.

Is there a way I can adopt the approach of 'asking for no' with these constraints? Or does it only apply in high autonomy workplaces?


I don't see how this works in the context of scrum as I've seen it done, except in the weak case of you getting to raise the ticket of whatever topic you'd like.

I guess you could ask for no around specific implementation details too? So if you are working on ticket XYZ that could be solved in N ways, you could pick one that might be a bit out of the norm (but still solves the problem) and 'ask for no'.


  - Never change the file "src/supervisor.js" under any circumstances.
This prompt[1] made me laugh - might as well be 'never overthrow your human overlords'.

It made me think about the future of AI as we start using it to self modify and have more autonomy. It's going to get increasingly difficult to keep it on the rails without more and more complex rules and boundaries.

[1] https://github.com/victorb/metamorph/blob/8f505ff268ed696816...

Edit: add thought


Even so, GPT decided to edit supervisor regardless, so little point in that part of the prompt... Here is an example: https://github.com/victorb/metamorph/pull/2


.gitignore supervisor.js

GPT4 can't reach it now. GG AI


Unfortunately, not that simple. supervisor.js lives inside the src/ directory, as the application counts everything there as part of the application and sends the context to GPT-4. So one simple solution would be to move it out of there, but then GPT-4 doesn't have the context of the supervisor, and tries to eventually invent it's own.

So if supervisor.js is not in the context, the application tries to write it's own and if it is in the context, eventually it wants to change it, but the runtime won't reload the supervisor.js so any changes there are effectively "lost".


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: