Hacker News | new | past | comments | ask | show | jobs | submit | kyuksel's comments

Google DeepMind’s AlphaEvolve made a key insight clear: #AgenticAI can act as a team of evolutionary scientists, proposing meaningful algorithm changes inside an evaluation loop. AlphaEvolve and similar methods also share a fundamental limitation: each mutation overwrites the structure, earlier variants become inert, partial improvements cannot be recombined, and credit assignment is global and coarse. Over long horizons, evolution becomes fragile.

I introduce EvoLattice, which removes this limitation by changing the unit of evolution itself. Instead of evolving a single program, EvoLattice evolves an internal population encoded inside one structure. A program (or agent) is represented as a DAG where each node contains multiple persistent alternatives. Every valid path through the graph is executable. Evolution becomes additive, non-destructive, and combinatorial rather than overwrite-based.

We evaluate EvoLattice on NAS-Bench-Suite-Zero under identical compute and evaluation settings. EvoLattice outperforms AlphaEvolve, achieves higher rank correlation, exhibits lower variance and faster stabilization, and improves monotonically without regression. We further validate generality on training-free optimizer update rule discovery, where EvoLattice autonomously discovers a nonlinear sign–curvature optimizer that significantly outperforms SGD, SignSGD, Lion, and tuned hybrids, using the same primitives and no training.
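A minimal sketch of the lattice idea, assuming a toy linear DAG with numeric operators (all node names and operators here are hypothetical, not the paper's primitives). Each node keeps several persistent alternatives, every combination of choices is one executable path, and adding an alternative grows the path space without destroying anything:

```python
import itertools

# Toy "lattice": each node holds multiple persistent alternative operators.
lattice = [
    {"name": "scale", "alts": [lambda x: x * 2, lambda x: x * 3]},
    {"name": "shift", "alts": [lambda x: x + 1, lambda x: x - 1]},
]

def enumerate_paths(lattice):
    """Yield every executable path as a tuple of per-node alternative indices."""
    return itertools.product(*(range(len(n["alts"])) for n in lattice))

def run_path(lattice, choice, x):
    """Execute one path by applying the chosen alternative at each node."""
    for node, idx in zip(lattice, choice):
        x = node["alts"][idx](x)
    return x

# Evolution is additive: appending an alternative enlarges the path space
# combinatorially while every earlier variant stays executable.
lattice[0]["alts"].append(lambda x: -x)
paths = list(enumerate_paths(lattice))
print(len(paths))  # 3 alternatives * 2 alternatives = 6 paths
```

The non-destructive property is the point: the original two paths through `scale` survive the mutation unchanged.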

Why this matters:
• Persistent internal diversity: AlphaEvolve preserves diversity across generations; EvoLattice preserves it inside the program. Strong components never disappear unless explicitly pruned.
• Fine-grained credit assignment: each micro-operator is evaluated across all contexts in which it appears, producing statistics (mean, variance, best case). AlphaEvolve only sees a single scalar score per program.
• Quality–Diversity (QD) without archives: EvoLattice naturally exhibits MAP-Elites-style dynamics: monotonic improvement of elites, a widening gap between best and average, and bounded variance, all without external archives or novelty objectives.
• Structural robustness: AlphaEvolve relies on the #LLM to preserve graph correctness; EvoLattice applies deterministic self-repair after every mutation, removing structural fragility from the loop.
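The fine-grained credit assignment can be sketched in a few lines (a hypothetical illustration, not the paper's code; path names and scores are made up): each path's scalar score is attributed to every operator appearing on that path, then summarized per operator.

```python
import statistics
from collections import defaultdict

# Hypothetical path scores from some evaluator; a path is a tuple of operators.
path_scores = {
    ("mom_fast", "vol_scale"): 0.61,
    ("mom_fast", "raw"):       0.42,
    ("mom_slow", "vol_scale"): 0.55,
    ("mom_slow", "raw"):       0.38,
}

# Attribute each path's score to every operator on that path.
per_op = defaultdict(list)
for path, score in path_scores.items():
    for op in path:
        per_op[op].append(score)

# Per-operator statistics: mean, variance, best case across contexts.
report = {
    op: {"mean": statistics.mean(s),
         "var": statistics.pvariance(s),
         "best": max(s)}
    for op, s in per_op.items()
}
```

This is the contrast with a single scalar per program: `vol_scale` is credited in every context it appears in, so a strong operator is visible even when some paths containing it are weak.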

AlphaEvolve shows how #LLMs can mutate programs. EvoLattice shows what they should evolve: the internal computational fabric, not entire programs. This turns LLM-guided evolution from a fragile rewrite process into a stable, cumulative, QD-driven discovery system. The same framework applies to prompt and agentic-workflow evolution. As agent systems grow deeper and more interconnected, overwrite-based evolution breaks down; EvoLattice's internal population and self-repair make long-horizon agentic evolution feasible and interpretable.


From a market microstructure perspective, the repeated emergence of similar indicator components was the most interesting outcome.

Across markets with very different behavior (crypto: jump-prone; FX: mean-reverting; equity indices: regime-switching), the evolved indicators tended to modulate position sizing based on:
• local entropy (as a proxy for noise/chaos)
• short- and medium-horizon trend consistency
• volatility bursts
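The entropy leg of this pattern can be illustrated with a toy position-sizing rule (a hypothetical sketch, not one of the evolved indicators; bin count and the linear shrinkage are arbitrary choices): estimate Shannon entropy over binned returns and shrink exposure as it approaches its maximum.

```python
import math

def local_entropy(returns, bins=5):
    """Shannon entropy of a binned return window, a crude noise/chaos proxy."""
    lo, hi = min(returns), max(returns)
    width = (hi - lo) / bins or 1.0  # degenerate window -> one bin
    counts = [0] * bins
    for r in returns:
        counts[min(int((r - lo) / width), bins - 1)] += 1
    n = len(returns)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def position_size(returns, base=1.0, max_entropy=math.log(5)):
    """Shrink exposure linearly as local entropy approaches its maximum."""
    h = local_entropy(returns)
    return base * max(0.0, 1.0 - h / max_entropy)
```

A perfectly calm window keeps full size; a window whose returns spread evenly across all bins (entropy near log 5) drives the size toward zero, matching the observation that signals are penalized most during entropy spikes.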

This is consistent with the idea that markets penalize signals most during entropy spikes, regardless of asset class.

Would be interested in perspectives from people studying microstructure-aware signal generation.


Adding a bit of detail: this work tries to replace the standard numerical pipeline (expected returns → covariance → optimizer) with structured reasoning steps.

Two components:
• A correlation tree is repurposed as a tournament bracket. At each node, the LLM allocates “selection slots” across branches and performs eliminations inside correlation regimes.
• A qualitative evolution loop compares portfolio variants using a rubric (business quality, durability, diversification, resilience) and accepts improvements iteratively, without any explicit optimization objective.
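The bracket mechanics can be sketched with a stub judge standing in for the LLM (the tree, the tickers, and the even-split rule are all hypothetical): slots flow down the correlation tree, and eliminations happen inside each leaf's correlation regime.

```python
# Toy binary correlation tree: ("node", left, right) or ("leaf", tickers).
tree = ("node",
        ("node", ("leaf", ["AAPL", "MSFT"]), ("leaf", ["XOM", "CVX"])),
        ("leaf", ["GLD", "TLT"]))

def judge_split(slots, left, right):
    """Stand-in for the LLM: split slots evenly, leftover to the left branch."""
    return (slots + 1) // 2, slots // 2

def select(node, slots):
    """Push selection slots down the tree; eliminate within each regime."""
    if slots <= 0:
        return []
    if node[0] == "leaf":
        return node[1][:slots]  # elimination inside one correlation regime
    left_slots, right_slots = judge_split(slots, node[1], node[2])
    return select(node[1], left_slots) + select(node[2], right_slots)

picks = select(tree, 3)
```

Every split and elimination is a discrete, loggable decision, which is what makes the real LLM-driven version text-auditable: replacing `judge_split` with a model call gives one recorded rationale per node.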

The interesting aspect is not the performance but the explainability: every elimination and mutation step is text-auditable.

Curious whether others have experimented with LLM-based reasoning loops as substitutes for classical optimization in areas outside finance.


This paper reports an unexpected pattern that appeared repeatedly during evolutionary search over technical indicator architectures. Across unrelated markets (crypto, FX, index futures, equities), the search process converged toward similar structural motifs.

These motifs included multi-scale momentum, entropy-based filtering, volatility-adaptive scaling, and regime gating. They reappeared independently across runs, suggesting that certain indicator structures may function as “market invariants” under diverse microstructures.
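A hypothetical composition of these motifs in a few lines (not one of the evolved indicators; the parameters and the agreement gate are illustrative): multi-scale momentum, a regime gate requiring scale agreement, and volatility-adaptive scaling.

```python
def sma(xs, n):
    """Simple moving average of the last n values."""
    return sum(xs[-n:]) / n

def signal(prices, fast=5, slow=20, vol_cap=0.02):
    """Toy motif stack: multi-scale momentum, regime gate, vol scaling."""
    mom_fast = prices[-1] / sma(prices, fast) - 1
    mom_slow = prices[-1] / sma(prices, slow) - 1
    rets = [b / a - 1 for a, b in zip(prices[-slow:], prices[-slow + 1:])]
    vol = (sum(r * r for r in rets) / len(rets)) ** 0.5
    gate = 1.0 if mom_fast * mom_slow > 0 else 0.0  # regimes must agree
    return gate * (mom_fast + mom_slow) * min(1.0, vol_cap / max(vol, 1e-9))
```

The gate zeroes the signal when fast and slow momentum disagree, and the final factor shrinks it during volatility bursts, mirroring the recurring structure described above.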

The system does not use prior knowledge of finance; it only evaluates candidate architectures by out-of-sample performance and stability. The repeated convergence raises questions about whether modern markets impose structural constraints that shape successful technical signals.


This work explores whether large language models can replicate the qualitative reasoning processes used by investment committees, instead of relying on numerical optimizers.

The first component is a correlation-aware selection method that repurposes a hierarchical clustering dendrogram as a tournament bracket. At each internal node, the LLM allocates selection slots between clusters and performs structured eliminations within correlation regimes.

The second component is a portfolio evolution loop that contains no objective function, expected returns, covariance matrices, or solvers. Instead, the model compares variants using a qualitative rubric (business quality, durability, thematic alignment, drawdown resilience, diversification) and accepts improvements through iterative reasoning.

Both mechanisms are fully text-explainable: every elimination, selection, and mutation is auditable.
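The second component, the objective-free evolution loop, can be sketched as an accept-only hill climb with a plain Python callable standing in for the LLM grader (all names here are hypothetical; the real system grades via model reasoning, not a function):

```python
import random

RUBRIC = ["business_quality", "durability", "thematic_alignment",
          "drawdown_resilience", "diversification"]

def score(portfolio, grade):
    """Sum rubric grades; `grade` stands in for the LLM's per-criterion call."""
    return sum(grade(portfolio, criterion) for criterion in RUBRIC)

def evolve(portfolio, mutate, grade, steps=50, seed=0):
    """Accept-only loop: keep a mutation iff the rubric score improves."""
    rng = random.Random(seed)
    best, best_score, audit_log = portfolio, score(portfolio, grade), []
    for _ in range(steps):
        candidate = mutate(best, rng)
        s = score(candidate, grade)
        if s > best_score:
            audit_log.append((best, candidate, s))  # text-auditable step
            best, best_score = candidate, s
    return best, audit_log
```

Note there is no objective function over returns anywhere; the loop only needs an ordering on rubric scores, and the audit log records every accepted mutation.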


While widely used by the industry for over half a century, the Sharpe ratio falls short in out-of-sample robustness: portfolio judgments based on it often fail to generalize to the future. Despite many attempts to improve it, such as the Probabilistic Sharpe Ratio, the problem persists. With their implicit domain knowledge and code-generation capabilities, LLMs are proving to be powerful tools for evolving algorithms and formulas, from enhancing matrix algorithms to making scientific discoveries. It turns out that LLMs can discover new risk-adjusted metrics with over 3x the rank correlation to future Sharpe ratios compared to the Sharpe ratio itself. When these metrics are used to select the top 25% of assets, they double the risk-return performance of the Sharpe portfolio. The paper dives deeper into this result, and the discovered metrics are available in the repository for full reproducibility. This research could change how we evaluate backtests, select assets, and optimize portfolios!
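The evaluation criterion here, rank correlation between an in-sample metric and future Sharpe ratios, is just a Spearman coefficient; a stdlib sketch (using the closed form 1 − 6Σd²/(n(n²−1)), which assumes no tied values):

```python
def ranks(xs):
    """Return the 0-based rank of each element (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rank correlation via the no-ties closed form."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Usage: spearman(in_sample_metric_per_asset, forward_sharpe_per_asset)
# measures how well today's metric ranks tomorrow's risk-adjusted returns.
```

A value of 1.0 means the metric ranks assets exactly as their future Sharpe ratios will; the "3x" claim above compares this quantity for the discovered metrics against the Sharpe ratio itself.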


Have you ever wanted to invest in a US ETF or mutual fund, but found that many index trackers were expensive or out of reach due to regulations? I have recently developed a solution that lets small investors build their own sparse stock portfolios to track an index: a novel population-based, large-scale, non-convex optimization method based on a Deep Generative Model that learns to sample good portfolios.

I've compared this approach to a state-of-the-art evolutionary strategy (Fast CMA-ES) and found that it is more efficient at finding optimal index-tracking portfolios. The PyTorch implementations of both methods and the dataset are available on my GitHub repository for reproducibility and further improvement. Check out the repository to learn more about this new meta-learning approach to evolutionary optimization, or run your own small index fund at home!
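As a rough stand-in for the deep generative sampler, a cross-entropy-method-style loop over per-asset inclusion probabilities captures the population-based flavor (a sketch under simplified assumptions, not the repository's PyTorch implementation; the tracking-error callback and all parameters are hypothetical):

```python
import random

def cem_sparse_portfolio(track_error, n_assets, k, iters=30, pop=64,
                         elite=8, seed=0):
    """Sample k-sparse portfolios, refit inclusion probabilities on elites.

    track_error(indices) -> float: lower is better (index-tracking error).
    """
    rng = random.Random(seed)
    probs = [k / n_assets] * n_assets  # start with uniform inclusion odds
    best, best_err = None, float("inf")
    for _ in range(iters):
        population = []
        for _ in range(pop):
            # Weighted k-subset draw: higher-probability assets sort first.
            idx = sorted(range(n_assets),
                         key=lambda i: rng.random() / max(probs[i], 1e-9))[:k]
            err = track_error(idx)
            population.append((err, idx))
            if err < best_err:
                best_err, best = err, idx
        population.sort()
        elites = [set(idx) for _, idx in population[:elite]]
        # Smoothly move inclusion probabilities toward the elite frequency.
        probs = [0.9 * probs[i] + 0.1 * sum(i in e for e in elites) / elite
                 for i in range(n_assets)]
    return best, best_err
```

The deep generative model in the actual work replaces the independent inclusion probabilities with a learned sampler, but the loop structure (sample a population, score, refit on the best) is the shared idea.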



