kpen11's comments | Hacker News

The standard is AGENTS.md, mentioned in the compatibility section. See https://agents.md/

This is really cool! I've been using Obsidian more and more as a second brain and getting data in has consistently been the point of failure, so I've been wanting something just like this. Specifically something that runs locally and offline.

Is the future goal of Hyprnote specifically meeting notes and leaning into features around meeting notes, or more general note taking and recall features?


At least for the near future, we'll be focusing on the meeting-notepad side of things.

We actually have "export to Obsidian". I think you can pair Hyprnote nicely with Obsidian.

Screenshot: https://github.com/user-attachments/assets/5149b68d-486c-4bd...

You need this plugin installed in Obsidian first: https://github.com/coddingtonbear/obsidian-local-rest-api

Obsidian export code 1:

https://github.com/fastrepl/hyprnote/blob/d0cb0122556da5f517...

Obsidian export code 2:

https://github.com/fastrepl/hyprnote/tree/main/plugins/obsid...
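
If you're curious what the export boils down to: the plugin exposes a local REST API, and writing a note is roughly a PUT of markdown to a vault path. A rough sketch (the port, endpoint, and auth header here are from memory of the plugin's defaults, so double-check against its docs):

    import requests

    API_KEY = "your-plugin-api-key"    # shown in the plugin's settings pane
    BASE = "https://127.0.0.1:27124"   # the plugin's default HTTPS port (self-signed cert)

    note = "# Meeting notes\n\n- try pairing Hyprnote with Obsidian\n"

    resp = requests.put(
        f"{BASE}/vault/Meetings/2024-01-15.md",   # creates/overwrites this file in the vault
        data=note.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "text/markdown",
        },
        verify=False,  # the self-signed cert won't validate out of the box
    )
    resp.raise_for_status()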


Thanks for the reply! I will try it out :)


Whether or not there was a claim that code _was_ the bottleneck, this raises some points that I've been talking over with people for a while now.

Introducing a lever to suddenly produce more code faster creates an imbalance in the SDLC. If our review process was already a bottleneck, now that problem is even worse! If the review bottleneck was something we could tolerate or ignore before, that's no longer the case; we need to solve for it. No, that doesn't mean let some LLM review the code and ship it. CI/CD needs to get better and smarter. As a reviewer, I don't want to be on the lookout for obscure edge cases. I want to make sure my peer solved the problem in a way that makes sense for our team. CI/CD should take care of making sure the code style aligns with our policies, that new/updated tests provide enough coverage for the new/changed functionality, and that the feature actually works.
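
To make that concrete, the gate I'm imagining is nothing fancy; the tools here (ruff, pytest with pytest-cov) are just stand-ins for whatever your team already uses:

    # ci_gate.py -- minimal pre-review gate: style check, tests, coverage floor
    import subprocess
    import sys

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode

    failed = 0
    failed += run(["ruff", "check", "."])                        # code style matches policy
    failed += run(["pytest", "--cov=.", "--cov-fail-under=80"])  # tests pass, coverage floor holds

    sys.exit(1 if failed else 0)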

The code expertise / shared context issue is another tough problem that needs solving, only highlighted by introducing a random graph of numbers generating the code. Leaning on that one engineer who has been on the team for 30 years and knows where all the deep dark secrets are was not a sustainable path even before coding agents. Having a markdown file that just says "component foo is under /foo. Run make foo to test it" was not documentation. The imbalance in the SDLC will light the fire under our collective asses to provide proper developer documentation and tooling for our codebases. I don't know what that looks like yet. Some teams are trying to have *good* markdown files that actually document where all the deep dark secrets are. These are doubly beneficial because coding agents can use them as well as your humans. But better markdown is probably a small step towards the real fix, which we won't be able to live without in the near future.

Anyway, great points brought up in the article. Coding agents aren't going away, so we need to solve this imbalance in the SDLC. Fight fire with fire!


I'll be digging deeper into these ideas in a webinar on the 15th, if this topic interests you! https://dagger.io/webinar/agentic-ci


I tried it with container-use and it's pretty nice (while the APIs cooperated)! One thing that stood out to me compared to other agent products was how intuitive the interface was to use. `/help` is something that not everybody has, which is wild. https://www.youtube.com/watch?v=hmh30wuXg08


I think you're missing step 3! A key part of building agents is seeing where they're struggling and improving performance in either the prompting or the environment.

There are a lot of great posts out there about how to structure an effective prompt. One thing they all agree on is to break down the reasoning steps the agent should follow for your problem area. I think this is relevant to what you said about brute-forcing a solution rather than studying the problem.
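
For example, even something as simple as numbering the steps in the system prompt helps (a made-up prompt, just to show the shape):

    SYSTEM_PROMPT = """
    You are a code-review assistant for our Python services.
    For every pull request, work through these steps in order:
    1. Summarize what the change is trying to do.
    2. Check that new behavior has corresponding tests.
    3. Flag anything touching auth or billing for a human reviewer.
    4. Only then suggest style or naming improvements.
    """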

In the agent's environment there's a fine balance to strike between having enough tools and information to solve any appropriate task, and having so many that the agent frequently gets lost down the wrong path and fails to come up with a solution. This is also something you'll iteratively improve by observing the agent's behavior and adapting.
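
Concretely, I'd start with a handful of tools with very deliberate descriptions and only grow the list when the agent actually gets stuck. A made-up example of what I mean by "small and well described":

    TOOLS = [
        {
            "name": "read_file",
            "description": "Read one file from the repo. Use this before proposing any edit.",
        },
        {
            "name": "run_tests",
            "description": "Run the tests for a single package, e.g. run_tests('billing').",
        },
        # resist the urge to also hand it a shell, a browser, and the prod database
    ]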


Nice post! I enjoyed reading about how many teams were involved in the process. CI has to be a collaborative effort to be really successful.

It's often underestimated how much benefit you'll get from taking a good look at your cache usage. It all worked great the day your platform team set up the build system, but 100 new CI jobs later there will be tons of room for improvement. It's a similar story with consolidating CI jobs in general: if we keep just tacking things on, eventually we have to step back and optimize.
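
One of the most common finds is a dependency cache keyed on something that changes every commit, so it never hits. Keying it on a hash of the lockfile is usually an easy win (generic sketch, not tied to any particular CI system):

    import hashlib
    from pathlib import Path

    # key the dependency cache on the lockfile contents, so it only
    # invalidates when dependencies actually change
    lockfile = Path("requirements.lock").read_bytes()
    cache_key = "pip-deps-" + hashlib.sha256(lockfile).hexdigest()[:16]
    print(cache_key)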


Here's the demo video linked in the blog post: https://www.youtube.com/watch?v=c0bLWmi2B-4

It goes step by step through the getting-started guide from the Dagger Python SDK docs.
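
If I remember the guide right, the core of it is only a few lines like this (paraphrasing from memory, so check the docs for the current API):

    import sys
    import anyio
    import dagger

    async def main():
        # connect to the Dagger Engine (the config just routes engine logs to stderr)
        async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client:
            version = await (
                client.container()
                .from_("python:3.11-slim")           # pull a base image
                .with_exec(["python", "--version"])  # run a command in it
                .stdout()                            # grab its output
            )
        print(f"Hello from Dagger and {version}")

    anyio.run(main)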


I worked for 3 years on a TN visa with no degree of any kind. My job title was Software Engineer, and my TN was classified as Scientific Technician. This was 2016-2019.


Interesting! Thanks for sharing that. What sort of supporting documents did you end up needing to prove that you were supporting "a professional in the scientific field"?


It was a stressful process with a lot of uncertainty. We had to show language indicating that my job was a "supporting role", which is fairly easy to spin if you're not a Senior or Principal. We also needed to provide my direct supervisor's qualifications. I believe they had to have a degree in a CS-related field.


To be clear, because it has no bachelor's degree requirement, the Scientific Technician/Technologist category is a red flag, and applications in this category are oftentimes denied.


Yes, I wouldn't recommend this path if other options are available

