The thing people miss is that Git is really content-addressed storage. This means all commits, even ones not reachable from any ref, are still stored and addressable.
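A quick throwaway-repo sketch of what that means in practice (paths and commit messages made up for illustration): after `git reset --hard`, nothing points at the second commit any more, yet it is still in the object store and addressable by its hash.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m "first"
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m "second"
sha=$(git rev-parse HEAD)            # hash of "second"
git reset -q --hard HEAD~1           # no ref points at "second" any more
git cat-file -p "$sha"               # ...but it is still stored and addressable
git fsck --no-reflogs | grep "$sha"  # reported as dangling once reflogs are ignored
```

Until `git gc` actually prunes it (and reflog entries keep it alive for a while), that "deleted" commit is one `git cat-file` away.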
p.s: If you run an OSS project, please use GitHub Advanced Security and enable Push Protection against secrets.
Are you talking about the local branch and the local reflog?
I thought garbage collection should get rid of all dangling stuff. But even without that, I am curious if pushing a branch would push the dangling commits as well.
I am working on a next-gen software composition analysis tool that can identify malicious open source packages through code analysis. It adopts a policy-as-code (CEL) approach to build security guardrails against risky OSS components using opinionated policies.
Ok! So all the novel jailbreaks and "how I hacked your AI" posts make the LLM say supposedly harmful stuff that is a Google search away anyway. I thought we were past the chatbot phase of LLMs and doing something more meaningful.
I am just wondering how we differentiate between AI-generated code and human-written code that is influenced by or copied from some unknown source. The same licensing problem can happen with human code as well, especially in OSS, where anyone can contribute.
Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.
> Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.
It’s a power saw. A really powerful tool that can be dangerous if used improperly. In that sense the code generator can have more or less of a mind of its own depending on the wielder.
Ok I think I’ve stretched the analogy to the breaking point…
Vibe-“migrated” our docs portal to another framework the other day. Looked awesome. Only when I decided to do a quick review before switchover did I find all the subtle hallucinations.
Not sure which would be faster: manually reviewing and fixing the AI-migrated docs, or giving up and using sed or something to redo it from scratch.
Whenever you feel "not sure if fixing what a huge change did is faster," you are asking for something too big and not specific enough.
It's much better to ask the specific queries you yourself would do with sed, and treat the model as a smarter sed (or ask it to generate the sed statements itself).
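A concrete instance of that "smarter sed" approach: one narrow, reviewable substitution instead of a whole-portal migration. The file and URLs below are made up for illustration.

```shell
# Hypothetical single-purpose pass: update one known URL pattern across docs/.
# (Illustrative file and domains; GNU sed/xargs flags assumed.)
mkdir -p docs && printf 'see https://docs.old.example.com/intro\n' > docs/index.md
grep -rl 'docs\.old\.example\.com' docs/ \
  | xargs -r sed -i 's|docs\.old\.example\.com|docs.new.example.com|g'
cat docs/index.md   # now points at docs.new.example.com
```

Each such pass is small enough to eyeball in a diff, which is exactly what the big-bang AI migration takes away from you.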
Seems like a bit of a forced scenario to me. I have never seen anyone auto-merge Dependabot PRs using GitHub Actions.
Also, pull_request_target is a big red flag in any GHA workflow, and it's even highlighted in the GHA docs. It's like running untrusted code with all your secrets handed over to it.
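The dangerous shape is roughly this (an illustrative anti-pattern, not a workflow from any real repo): `pull_request_target` runs in the context of the base repository, with secrets available, and the workflow then checks out and executes the PR author's code.

```yaml
# Illustrative anti-pattern, do not use: untrusted PR code runs with secrets.
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # attacker-controlled code
      - run: npm install && npm test  # executes that code; secrets are in scope
```

The plain `pull_request` trigger avoids this by running against the PR with a read-only token and no secrets for forks.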
Hmm. Not sure if discovery and connection will solve the root cause of the problem you highlighted. I think it's more to do with the dopamine rush from social media and people's decreasing ability to form IRL connections.
Tried playing with SDR a while back. Back then, the biggest challenge was finding appropriate hardware that could receive at various frequencies and was also compatible with my Linux box.
If you want to go even a step up the trivial-to-use ladder, there's the Portapack H4M project. It builds on the HackRF One and adds a screen and custom firmware (open source, extensible) in a handheld form factor, and lets you do a bunch of... _stuff_ without needing a computer :) Also not _that_ expensive; I got mine for about 400€ from lab401.
Things have improved a lot. These days GNU Radio (via OsmoSDR) supports all the big hobbyist-price-point SDR vendors, and most of them go from ~50MHz to ~6GHz.