o3 pro seems to be good at meta-prompting, i.e. when you ask it to create a prompt for you. In particular, it seems to be more concise than o3 when doing this.
Literally just asking it to write a prompt to ask itself (or maybe another LLM) to do something.
Example: You want to improve a piece of code that interprets bytecode and translates it into native 8051 assembly. You have the code, but you don't really have any ideas on how to improve it. You want the LLM to come up with ideas.
Without meta-prompting, your prompt would be like this:
"Here's a piece of code that interprets bytecode and translates it into native 8051 assembly. Think of ways to improve it."
With meta-prompting, your initial prompt would be like this:
"Here's a piece of code that interprets bytecode and translates it into native 8051 assembly. My goal is to improve it. Don't think about the improvements, though.
Instead, write a large language model prompt that will guide the LLM on ways of thinking about possible improvements. Incorporate as many methods as you can think of."
o3 would then come back with a prompt like this:
----------
Here's a piece of code that interprets bytecode and translates it into native 8051 assembly. I'd like you to think of some ways to improve it. Here are some guidelines about what you should and shouldn't do:
- Describe different models of optimization (e.g., peephole, mid-level IR transformation, architecture-aware instruction packing) that might apply to this code.
- Provide brief explanations behind each improvement idea, optionally citing known compiler or assembler optimization strategies.
- Use critical code analysis strategies such as abstraction refactoring, loop unrolling evaluation, and inline expansion assessment.
- Suggest external tools, static analysis techniques, or benchmarks that would support an empirical review of the code’s performance.
- Draft a structured list of investigative questions I should ask to uncover hidden inefficiencies or architecture mismatches.
- Consider ways to minimize register swaps and optimize instruction selection for the 8051 architecture.
- Try breaking your ideas into perspectives—a performance-focused one, a maintenance-focused one, and a correctness-focused one.
- Emphasize reasoning paths: “Consider factors like X, contrast options A and B…”
- Do not directly suggest code changes—instead, construct a high-quality LLM prompt I can run that will guide another model into doing this with rich and varied insights.
- Step into the shoes of different roles: a compiler designer, a security auditor, and a systems programmer. Think from each of their perspectives.
- Don’t just list ideas—group them by impact level and feasibility, and reflect on trade-offs.
- Don’t rewrite the entire architecture. Focus on micro-optimizations and structural improvements that retain the current functional design.
- Evaluate whether parts of the bytecode can be cached or memoized to avoid reprocessing.
- Analyze for common performance bottlenecks typical in interpreters, such as inefficient branching or redundant memory access.
- Avoid superficial ideas like "add comments" or "modernize syntax" unless critically justified.
----------
As you can see, the meta-prompt is essentially an instruction to "compile" a short prompt into something much larger and more detailed. The expansion can of course contain mistakes, but if the meta-prompt mostly encodes well-known methodology (like the list above), it's hard for it to go far wrong.
(Yes, I did use an LLM to generate the list above - I checked each bullet point by hand just to make sure they weren't nonsensical.)
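If you want to script this flow, here's a minimal sketch using the OpenAI Python SDK. Everything specific in it (the model id, the file name, the exact wording) is my own placeholder, not something from the workflow above:

```python
# Two-step meta-prompting sketch. Assumes the OpenAI Python SDK;
# the model id and file name are placeholders.
from openai import OpenAI

client = OpenAI()
code = open("interpreter.c").read()  # hypothetical bytecode translator source

# Step 1: ask the model to write a prompt, not to solve the problem.
meta = client.chat.completions.create(
    model="o3-pro",  # placeholder model id
    messages=[{
        "role": "user",
        "content": (
            "Here's code that interprets bytecode and translates it into "
            "native 8051 assembly. My goal is to improve it. Don't think "
            "about the improvements; instead, write an LLM prompt that will "
            "guide a model through ways of thinking about possible "
            "improvements.\n\n" + code
        ),
    }],
)
generated_prompt = meta.choices[0].message.content

# Step 2: run the "compiled" prompt (plus the code) as a fresh request.
answer = client.chat.completions.create(
    model="o3-pro",
    messages=[{"role": "user", "content": generated_prompt + "\n\n" + code}],
)
print(answer.choices[0].message.content)
```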
Despite the confusing copy here and on the landing page, the tool appears to be a commit-message linter, not a generator. Which seems useful. In fact, I was thinking of building something like that for internal use. Thank you for sharing.
I think semantic versioning actually predates distributions; it just wasn't called "semantic versioning." It was called Unix shared library versioning: libfoo.so.1.2.3, where a bump in the leading number signals an incompatible ABI change.
Imagine a world where every library and every package has their release date as their version. You'd instantly know which software lacks maintenance or updates (bitrot).
To me it seems more attractive than how Nix does it, but I guess they considered it, saw conflicts, and went with hashes instead.
How do you know if 12345 is a patch for 432 released years ago or a major upgrade from 12344 released a moment ago? Pure time versioning doesn't work with multiple release streams.
Recently in the Python ecosystem, the `uv` package manager lets you install a package as it was on a certain date. Additionally, you can "freeze" your dependencies to a certain date in pyproject.toml, so when someone clones the project and installs dependencies, they are resolved as of the date you chose.
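If memory serves, the relevant uv setting is `exclude-newer` (worth double-checking against the uv docs); a pyproject.toml sketch:

```toml
# Ignore any package versions published after this timestamp,
# so resolution behaves as it did on that date.
[tool.uv]
exclude-newer = "2024-03-25T00:00:00Z"
```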
Personally I love this method much more than versioning.
I think versioning is mostly useful for just talking about software, and maybe for really major additions/changes, e.g. io_uring shipping with mainline Linux.
On the opposite side of the world in Gentoo, we compile updates to libraries and applications together on a rolling basis all the time, and it generally works out while letting us have everything as bleeding edge as we want it.
Yes, but if statically linked, this excludes all software that relies on security-relevant libraries (e.g. cryptography) or receives data from the network. I struggle to think of a lot of software that would qualify beyond coreutils and friends.
My mother-in-law is the President of the Central Florida Fruit Society, and is in charge of sourcing mangoes for their annual party. She sends us all the excess mangoes :)
As I understand it, this year's mangoes mostly came from Merritt Island, as there was some not-so-great weather in southern Florida.
The devops group is good, but the main one (Orlando Devs, aka odevs) is what I was thinking of. COVID messed things up quite a bit, and individual groups are still getting going again.
Did you wait for the text to turn blue and then black? And were the twos still wrong then? The real-time text is non-final tokens and has many more errors than what is ultimately committed to the document (but committing is slower than we'd like at the moment).
I wouldn't be dismissive of people telling you that the product description can be improved. My opinion is that the description of the product in this thread will outperform your site 10 to 1.
I'll try to explain, not in the spirit of being argumentative, but with the hope of being useful.
The comment you replied to was not questioning the value of integrated cabling. It was pointing out that the product description on the site does not make sense.
"Cloud computer" sounds like a server you rent from AWS. It's kind of like calling Rust "cloud compiler."
If you choose to use words that your audience doesn't understand, or even worse understands to mean the opposite of what you want them to mean, it's a good idea to explain these words immediately using conventional words with conventional meaning. The comments by throw0101a did that.
The product seems really cool, but there is no way I would've understood what it was from the website.
I understand that's what you're saying, and I understand what the parent is saying. I chose to explain what that alluded to, in case anyone in this conversation is also finding it hard to understand what is meant by that specific copy. That doesn't mean I don't understand the broader point, or that I think the website copy is perfect.
Perhaps if you don't understand what the copy means, then that is a sign that you are not the target audience, rather than that the copy is bad? From what I've gathered from reading other comments in this thread, that copy will make perfect sense to Oxide's target audience, as it uses words in a way that will be very familiar and make perfect sense to the kind of person who might make a purchasing decision for a system like this.
And for what it's worth, I don't think you need to explain what's happening to Steve, it seems to me that he understands perfectly well. To me you come across as being rather condescending and in my opinion Steve is being commendably polite in response.
What's up with the dramatically different shape of the "Ratio of male-to-female death rates across ages for the U.S., other countries" graph between, say, Chile/US and South Korea?
If this is for an application you're building for browsers, use Leaflet (easy, but not as flexible) or OpenLayers (more flexible, more complicated). There should be database libraries, though I've typically interacted with my database through an API I developed rather than directly from the browser. Cesium exists as well, but it's a resource hog.
If you're playing around on a local machine, R and Python have SQL interfaces that let you load the data for whatever local processing you want.
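As a sketch of the local route, here's Python with the standard-library sqlite3 module; the database file, table, and column names are invented for the example:

```python
# Pull point data from a local SQLite database for ad-hoc map work.
# (File, table, and column names are hypothetical.)
import sqlite3

conn = sqlite3.connect("places.db")
rows = conn.execute(
    "SELECT name, latitude, longitude FROM places WHERE country = ?",
    ("US",),
).fetchall()
conn.close()

for name, lat, lon in rows:
    print(f"{name}: ({lat:.4f}, {lon:.4f})")
```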
There's a basic map display in SSMS, if I recall correctly. Otherwise you can generate a map with something like Tableau, which has less of a learning curve than R.
Has anybody else noticed this?