Hacker Newsnew | past | comments | ask | show | jobs | submit | wfn's commentslogin

> It’s already happening on 50c14L.com

You mention "end to end encrypted comms", where to you see end to end there? Does not seem end to end at all, and given that it's very much centralized, this provides... opportunities. Simon's fatal trifecta security-wise but on steroids.

https://50c14l.com/docs => interesting, uh, open endpoints:

- https://50c14l.com/view ; /admin nothing much, requires auth (whose...) if implemented at all

- https://50c14l.com/log , log2, log3 (same data different UI, from quick glance)

- this smells like unintentional decent C2 infrastructure - unless it is absolutely intentional, in which case very nice cosplaying (I mean owner of domain controls and defines everything)


> Isn't every single piece of content here a potential RCE/injection/exfiltration vector for all participating/observing agents?

100%, I wonder when we get LLM botnets (optional: orchestrated by an agent), if not already.

The way I see prompt injection is, currently there is no architecture for a fundamental separation of control vs data channels (others also think along similar lines of course, not an original idea at all). There are (sometimes) attempts at workarounds (sometimes). This apart from other insane security holes.

edit p.s. Simon has been talking about this for multiple years now, I should mention this in fairness (incl. in linked post)


This is a funny chain.. of exchanges, cheers to you both :)

At the risk of ruining 'sowbug having their fun, I'm not sure how Julian Jaynes theory of origins of consciousness aligns against your assumption / reduction that the point (implied by the wiki article link) was supposed to be "I am only my brain." I think they were being polemical, the linked theory is pretty fascinating actually (regardless of whether it's true; and it is very much speculative), and suggests a slow becoming-conscious process which necessitates a society with language.

Unless you knew that and you're saying that's still a reductionist take?.. because otherwise the funny moment (I'd dare guessing shared by 'sowbug) is that your assumption of fixed chain of specific point-counter-point-... looks very Markovian in nature :)

(I'm saying this in jest, I hope that's coming through...)


I've been thinking about this, take a look at this:

> From Tool Calling to Symbolic Thinking: LLMs in a Persistent Lisp Metaprogramming Loop

https://arxiv.org/abs/2506.10021

edit but also see cons[3] - maybe viable for very constrained domains, with strict namespace management and handling drop into debugger. Also, after thinking more, it likely only sounds nice (python vs lisp training corpus and library ecosystems; and there's mcp-py3repl (no reflection but otherwise more viable), PAL, etc.) Still - curious.

In theory (I've seen people discuss similar things before though), homoiconicity and persistent REPL could provide benefits - code introspection (and code is a traversable AST), wider persistent context but in a tree structure where it can choose breadth vs depth of context loading, progressive tool building, DSL building for given domain, and (I know this is a bit hype vibe) overall building up toolkit for augmented self-expanding symbolic reasoning tools for given domain / problem / etc. (starting with "build up toolkit for answering basic math questions including long sequences of small digits where you would normally trip up due to your token prediction based LLM mechanism"[2]). Worth running some quick experiments maybe, hm :)

P.S. and thinking of agentic loops (a very uh contemporary topic these days), exposing ways to manage and construct agent trees and loops itself is (while very possibly recipe for disaster; either way would need namespaces not to clash) certainly captivating to me (again given effective code/data traversal and modification options; ideally with memoization / caching / etc.)

[1] https://arxiv.org/abs/2506.10021

[2] https://www.youtube.com/watch?v=AWqvBdqCAAE on need for hybrid systems

[3] cons (heh): hallucination in the metaprogramming layer and LLMs being fundamentally statistical models and not well trained for Lisp-like langs, and inevitable state pollution (unless some kind of clever additional harness applied) likely removes much of the hype...


Yes. I have (as part of Claude output) a

- `FEATURE_IMPL_PLAN.md` (master plan; or `NEXT_FEATURES_LIST.md` or somesuch)

- `FEATURE_IMPL_PROMPT_TEMPLATE.md` (where I replace placeholders with next feature to be implemented; prompt includes various points about being thorough, making sure to validate and loop until full test pipeline works, to git version tag upon user confirmation, etc.)

- `feature-impl-plans/` directory where Claude is to keep per-feature detailed docs (with current status) up to date - this is esp. useful for complex features which may require multiple sessions for example

- also instruct it to keep main impl plan doc up to date, but that one is limited in size/depth/scope on purpose, not to overwhelm it

- CLAUDE.md has summary of important code references (paths / modules / classes etc.) for lookup, but is also restricted in size. But it includes full (up-to-date) inventory of all doc files, for itself

- If I end up expanding CLAUDE.md for some reason or temporarily (before I offload some content to separate docs), I will say as part of prompt template to "make sure to read in the whole @CLAUDE.md without skipping any content"


Agree re: no need for heap allocation - for others: I recommend reading thru whole masscan source (https://github.com/robertdavidgraham/masscan), it's a pleasure btw - iirc rather few/sparse malloc()s which are part of regular I/O processing flow (there will be malloc()s which depending on config etc. set up additional data structs but as part of setup).


> Quartz Composer

Have you looked at https://vvvv.org/ ? Maybe it's still comparatively too heavy but imho it's not that heavy (cf. touch designer and the likes). I want to play with it some more myself...


> but then the shell commands were actually running llama.cpp, a mistake probably no human would make.

But in the docs I see things like

    cp llama.cpp/build/bin/llama-* llama.cpp
Wouldn't this explain that? (Didn't look too deep)


Yes it's probs the ordering od the docs thats the issue :) Ie https://docs.unsloth.ai/basics/deepseek-v3.1#run-in-llama.cp... does:

```

apt-get update

apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y

git clone https://github.com/ggerganov/llama.cpp cmake llama.cpp -B llama.cpp/build \ -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON

cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server

cp llama.cpp/build/bin/llama-* llama.cpp

```

but then Ollama is above it:

```

./llama.cpp/llama-gguf-split --merge \ DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \ merged_file.gguf

```

I'll edit the area to say you first have to install llama.cpp


> There is partial mitigation for RAPTOR: Counter-RAPTOR from 2017 (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=795...)

Oh I had missed that, thank you btw! Need more of those BGP monitoring systems...

(and they performed an actual live BGP attack (not just simulation), neat)


Thank you for the pointer to LEANN! I've been experimenting with RAGs and missed this one.

I am particularly excited about using RAG as the knowledge layer for LLM agents/pipelines/execution engines to make it feasible for LLMs to work with large codebases. It seems like the current solution is already worth a try. It really makes it easier that your RAG solution already has Claude Code integration![1]

Has anyone tried the above challenge (RAG + some LLM for working with large codebases)? I'm very curious how it goes (thinking it may require some careful system-prompting to push agent to make heavy use of RAG index/graph/KB, but that is fine).

I think I'll give it a try later (using cloud frontier model for LLM though, for now...)

[1]: https://github.com/yichuan-w/LEANN/blob/main/packages/leann-...


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: