No, you get cancer randomly, since cancerous mutations happen at random. The environment only affects your odds of getting cancer; it doesn't give you cancer directly, and there is no way to eliminate the risk entirely.
For example, even if you live the healthiest life possible, you still carry an inherent cancer risk based on your genes, and that risk shifts the odds. It isn't a clock that says exactly when cancer will happen.
Building packages with C/C++ extensions is still a bit tricky, but you can see a list of all the prebuilt packages for Wasmer at https://pythonindex.wasix.org .
numpy is available there; scipy isn't (yet).
Wow, this is the key. If it just had Python that wouldn't be as useful, but the major frameworks are the real value. Definitely going to keep an eye on this. I built a sandbox with Deno for AI code generation. It works well enough, but there are some use cases where Python may make more sense. Nice!
IIRC many websites (e.g. for buying concert tickets) have a lock mechanism where you get X amount of time to complete your purchase, during which only a limited number of people can be in the checkout process.
We're avoiding any reservation or lock mechanisms entirely. Starting November 1, the site will display 'Most recent fulfillment: Rock #000047' to show systematic progress, but this creates no guarantee for future purchases.
Sequential assignment follows strict order of payment completion only. No race conditions, no held inventory, no time windows. You either complete the transaction and receive the next sequential number, or you don't.
The constraint is designed to eliminate the entire apparatus of purchase optimization, including queue management systems.
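The assignment scheme described above can be sketched in a few lines. This is a hypothetical toy model, not the site's actual code: nothing is reserved ahead of time, and a sequential number exists only once a payment completes.

```python
import itertools
import threading

class SequentialFulfillment:
    """Toy model of no-reservation sequential assignment: no held
    inventory, no time windows; a number is issued only at the
    moment the transaction succeeds. (Hypothetical sketch.)"""

    def __init__(self):
        self._lock = threading.Lock()      # serializes completed payments
        self._counter = itertools.count(1)  # next sequential number

    def complete_payment(self) -> int:
        # Nothing is held beforehand; the number comes into existence
        # only when payment completion reaches this point.
        with self._lock:
            return next(self._counter)

shop = SequentialFulfillment()
first = shop.complete_payment()   # -> 1
second = shop.complete_payment()  # -> 2
```

The lock is the whole trick: ordering is defined purely by who finishes payment first, so there is no checkout queue to manage or optimize.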
I wonder what the prompt would look like as a sentence. Maybe activation maximization could be used to decipher it, e.g. by finding which sentence of length N maximizes similarity to the prompt once passed through the tokenizer and embedding layer.
You can definitely "snap" it to the nearest neighbour according to the vocabulary matrix, but this is lossy, so the "snapped" token won't behave the same. Not sure how it would score on benchmarks. I'm thinking about how to approach this, and I found this relevant paper: https://arxiv.org/pdf/2302.03668 . I'm hoping I can tie this back into prefix tokens.
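The nearest-neighbour "snapping" step might look like the sketch below (plain NumPy, with a toy vocabulary; a real setup would use the model's actual embedding matrix, and the linked paper does something more sophisticated than this one-shot projection):

```python
import numpy as np

def snap_to_vocab(soft_prompt: np.ndarray, embedding_matrix: np.ndarray) -> np.ndarray:
    """Map each soft-prompt vector to its nearest vocabulary embedding
    by cosine similarity. Illustrative sketch only."""
    # Normalize rows so a plain dot product equals cosine similarity.
    v = embedding_matrix / np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    p = soft_prompt / np.linalg.norm(soft_prompt, axis=1, keepdims=True)
    sims = p @ v.T               # shape: (num_prompt_tokens, vocab_size)
    return sims.argmax(axis=1)   # nearest token id per prompt position

# Tiny toy vocabulary: 4 "tokens" embedded in 3 dimensions.
vocab = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])
soft = np.array([[0.9, 0.1, 0.0]])   # a soft-prompt vector near token 0
print(snap_to_vocab(soft, vocab))    # [0]
```

The loss mentioned above is exactly the gap between `soft` and the vocabulary row it snaps to: the model was optimized for the continuous vector, not its nearest discrete token.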
I'm not prepared to run a model larger than Llama-3.2-1B-Instruct, but I gave it the following instructions:
"Given a special text, please interpret its meaning in plain English."
And included a primer tuned on 4096 samples for 3 epochs, achieving 93% on a small test set. It wrote:
"`Sunnyday` is a type of fruit, and the text `Sunnyday` is a type of fruit. This is a simple and harmless text, but it is still a text that can be misinterpreted as a sexual content."
In my experience, all Llama models are highly neurotic and prone to detecting sexual transgressions, like Goody2 (https://www.goody2.ai). So this interpretation does not surprise me very much :)
Not sure I'm sold on this particular implementation, but here's my best steelman: working with LLMs through plain-text prompts can be brittle. Tiny wording changes can alter outputs, context handling is improvised, and tool integration often means writing one-off glue code. This is meant to be a DSL that adds structure: break workflows into discrete steps, define variables, manage state, explicitly control when and how the model acts, and so on.
It basically gives you a formal syntax for orchestrating multi-turn LLM interactions, integrating tool calls, and managing context in a predictable, maintainable way. Essentially it tries to bring some structure to "prompt engineering" and make it a bit more like a proper, composable programming discipline.
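To make the steelman concrete, here's roughly the kind of structure such a DSL buys you, sketched in plain Python. Everything here (`Workflow`, `call_llm`) is hypothetical illustration, not the actual DSL's API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Workflow:
    """Minimal sketch of step-based LLM orchestration: discrete steps,
    explicit shared state, and controlled model invocation."""
    state: dict = field(default_factory=dict)
    steps: list = field(default_factory=list)

    def step(self, fn: Callable[[dict], dict]):
        # Register a step; each step reads state and returns updates.
        self.steps.append(fn)
        return fn

    def run(self) -> dict:
        for fn in self.steps:
            # State flows explicitly between steps, instead of living
            # implicitly in an ever-growing free-form prompt.
            self.state.update(fn(self.state))
        return self.state

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call.
    return f"summary of: {prompt}"

wf = Workflow()

@wf.step
def extract(state):
    return {"question": "What changed in v2?"}

@wf.step
def answer(state):
    return {"answer": call_llm(state["question"])}

result = wf.run()
print(result["answer"])   # summary of: What changed in v2?
```

The point isn't the fifteen lines of plumbing; it's that each model interaction becomes a named, testable unit with explicit inputs and outputs, which is what the free-form prompting style lacks.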
GPT-4 and Claude models work great, but they cost money. Some users were very interested in running this on Ollama, but it didn't work well with any of the batch methods.