So the reason we should not use Rust (a memory-safe language that gets rid of whole categories of bugs) is that some new AI tools are not ready?
Having used AI daily for over a year now, I find most AI tools do not struggle with Rust; they struggle to provide value. If I do not instruct them to keep the output to the bare minimum, I usually end up with 10x the output of what would be a simple solution to my problem.
The syntax feels complicated. Maybe I just don't have enough patience for learning a typesetting syntax (I've never worked with LaTeX before).
On top of that, there is no easy way to create a template. For example, I want an invoice template which I can reuse with different data. Theoretically, I can create a .typ file for the template, define the invoice as a function, and then call it with, say, JSON data. That seems great for a web service, but not for a library I can use from, say, Rust.
And the type system is a bit confusing. I can define basic types like numbers or strings, but when it comes to structs, there doesn't seem to be support for them.
I find it easier to create a handlebars template and feed the HTML to a headless Chrome printing service, which outputs a PDF for me. It's not scalable for high volume, but good enough for my needs (it takes about 2-3 seconds to generate a PDF).
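A minimal sketch of that pipeline, using Python's stdlib string.Template as a stand-in for handlebars (the `chromium` binary name and flags are assumptions about the local setup):

```python
import subprocess
import tempfile
from string import Template

# Stand-in for a handlebars template: render invoice data into HTML.
INVOICE_TEMPLATE = Template("""
<html><body>
  <h1>Invoice $number</h1>
  <p>Customer: $customer</p>
  <p>Total: $total</p>
</body></html>
""")

def render_invoice(data: dict) -> str:
    """Fill the HTML template with invoice data."""
    return INVOICE_TEMPLATE.substitute(data)

def html_to_pdf(html: str, pdf_path: str) -> None:
    """Print the HTML to PDF with headless Chrome (assumes chromium is on PATH)."""
    with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
        f.write(html)
    subprocess.run(
        ["chromium", "--headless", "--disable-gpu",
         f"--print-to-pdf={pdf_path}", f.name],
        check=True,
    )
```

A printing service would just wrap html_to_pdf behind an HTTP endpoint; most of the 2-3 second latency comes from Chrome startup and layout, not templating.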
> On top of that, there is no easy way to create a template
Templates are just functions [0].
I think much of the frustration comes from typesetting being a harder problem than it seems at first. In general a typesetting system tries to abstract away how layout is recomputed depending on content.
Supporting contextual content -- cases where content depends on other content, e.g. numbered lists, numbered figures, references, etc. -- involves iterative rendering. This is evidently a complexity sinkhole, and having a Turing-complete scripting language will bite you back when dealing with it. I recommend reading their documentation about it [1], where they explain how they propose solving this problem.
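The iterative part can be sketched as a fixpoint loop (a toy model, not Typst's actual algorithm): render once with the reference values guessed from the previous pass, record what the numbered elements actually resolved to, and re-render until nothing changes.

```python
def render(doc, labels):
    """One rendering pass: number the figures we see and resolve references
    against the label table produced by the previous pass."""
    new_labels = {}
    out = []
    counter = 0
    for kind, value in doc:
        if kind == "figure":
            counter += 1
            new_labels[value] = counter          # value is the figure's label
            out.append(f"Figure {counter}")
        elif kind == "ref":
            out.append(f"see Figure {labels.get(value, '?')}")
        else:
            out.append(value)
    return out, new_labels

def typeset(doc, max_passes=10):
    """Re-render until the label table stops changing (a fixpoint)."""
    labels = {}
    for _ in range(max_passes):
        out, new_labels = render(doc, labels)
        if new_labels == labels:
            return out
        labels = new_labels
    raise RuntimeError("layout did not converge")

doc = [("ref", "plot"), ("text", "Some prose."), ("figure", "plot")]
print(typeset(doc))  # ['see Figure 1', 'Some prose.', 'Figure 1']
```

Even this toy shows why a forward reference forces a second pass: the first pass cannot know the figure's number yet.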
I agree if your intention is to only make desktop applications. But if you have in mind a hybrid app or a team with web experience only who are ok with making compromises on performance and/or UI, it's a tradeoff that people are right to make, imo. It's of course not optimal, but that's engineering for you.
Factually incorrect. Many religions are explicitly against other religions and degrade non-believers, just like Hitler did with Jews to make the general public do horrid things.
I might not know enough about this subject, but I think the main idea is to make the initial search retrieval much smarter and more comprehensive, so the results are already good enough, lessening or removing the need for a second, often costly, re-ranking step.
They achieve this in a few different ways:
- Unified Multimodal Vectors (Mixing Data Types from the Start)
Instead of just creating a vector from the text description, Superlinked creates a single, richer vector for each item (e.g., a pair of headphones) right when it's indexed. This "multimodal vector" already encodes not just the text's meaning, but also its numerical attributes (like price, rating, battery life) and categorical attributes (like "electronics," "on-ear").
- Dynamic Query-Time Weighting (Telling the Search What Matters Now)
When you make a query, you can tell Superlinked how important each of those "baked-in" aspects of the multimodal vector is for that specific search. For example: "Find affordable wireless headphones under $200 with high ratings" – you can weight the "price" aspect heavily (to favor lower prices), the "rating" aspect heavily, and the "text similarity" to "wireless headphones" also significantly, all within the initial query to the unified vector.
- Hard Filtering Before Vector Search (Cutting Out Irrelevant Items Early)
You apply these hard filters (like price <= 200 or category == "electronics") before the vector similarity search even happens on the remaining items.
If these are implemented well, Superlinked could improve the quality of initial retrieval to a point where a separate re-ranking stage becomes less necessary.
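A toy illustration of the idea (plain Python, not Superlinked's actual API): concatenate a text embedding with normalized numeric attributes into one unified vector at index time, re-weight the segments at query time, and hard-filter before scoring.

```python
import math

def unit(v):
    """Normalize a vector to unit length (leave zero vectors alone)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def index_item(text_emb, price, rating, max_price=500, max_rating=5):
    """Build one unified vector: [text segment | price slot | rating slot].
    Numeric attributes are normalized to [0, 1]; price is inverted so that
    cheaper items score closer to an 'affordable' ideal."""
    return unit(text_emb) + [1 - price / max_price] + [rating / max_rating]

def score(query_vec, item_vec, weights):
    """Dot product with per-segment weights (w_text, w_price, w_rating).
    The text segment is everything except the last two slots."""
    w_text, w_price, w_rating = weights
    d = len(item_vec) - 2
    s = w_text * sum(q * i for q, i in zip(query_vec[:d], item_vec[:d]))
    s += w_price * query_vec[d] * item_vec[d]
    s += w_rating * query_vec[d + 1] * item_vec[d + 1]
    return s

items = [
    {"name": "budget buds", "emb": [0.9, 0.1], "price": 50, "rating": 4.0},
    {"name": "audiophile cans", "emb": [0.8, 0.3], "price": 400, "rating": 4.8},
]
# Query = the "ideal" item: matches the text, is cheap, is highly rated.
query = index_item([0.9, 0.2], price=0, rating=5)
# Hard-filter first, then rank the survivors by the weighted unified score.
candidates = [it for it in items if it["price"] <= 200]
ranked = sorted(
    candidates,
    key=lambda it: score(query,
                         index_item(it["emb"], it["price"], it["rating"]),
                         weights=(1.0, 2.0, 1.0)),
    reverse=True,
)
```

With the price weight at 2.0, cheaper items within the filtered set rise even when their text match is slightly weaker, which is the "query-time weighting" point above.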
"In 1988, Sassenrath left Silicon Valley for the mountains of Ukiah valley, 2 hours north of San Francisco. From there he founded multimedia technology companies such as Pantaray, American Multimedia, and VideoStream. He also implemented the Logo programming language for the Amiga, managed the software OS development for CDTV, one of the first CD-ROM TV set-top boxes, and wrote the OS for Viscorp Ed, one of the first Internet TV set-top boxes."
Right? And I think that's what keeps bringing me back to REBOL, and thus Red. They don't appeal to me on the face of them. Like, the code examples look interesting but in a "magical" kind of way that strikes a little bit of fear into my engineering heart. But with that kind of pedigree, I can't dismiss the ideas. If Sassenrath came up with it, I bet there's a kernel of awesomeness inside.
I suspect a lot of the magic will fall away once you realize the block data structure (the square brackets) is pretty close to a Lisp list. And just like in Lisp, blocks are used for both code and data. One big difference is that words are evaluated by default, not just the first word in a list, so there's nowhere near as much nesting, and whenever an expression ends the next one can begin with no delimiter (though you'd use newlines for legibility).
I think LLMs are great for generating the structure of your code. The only issue is that these models are amazing when you are navigating very well documented, much-discussed subjects, and start to go off the rails when you are using somewhat esoteric things. The speed boost I get by generating Terraform and HTML, CSS, and JS with Deepseek and Grok (sometimes asking one to review the other's code) is pretty significant. My biggest disappointment is Claude. I have purchased a subscription from them but I need to cancel it. The usual output from 3.7 is horrendous. I am not even sure why. The same prompt works very well with Deepseek and fails miserably with Claude.
You're right and I will still use it, but only for more limited scopes.
I've also cancelled my Claude subscription recently, but for different reasons - it doesn't have the "memory" feature that makes ChatGPT so much more worth it at the moment.
paid for a year of claude all at once and deeply regret it the last month or so. seems like it just tries to do so much extra lately. feel like half of my prompts lately are just reiterating that i dont want it to try to do more than i ask...
if i dont do that it always seems to throw out 3 fresh files ill need to add to make their crazy implementation work.
ive pretty much swapped to using it just for asking for minor syntax stuff i forget. ill take my slower progress in favor of fully grasping everything ive made.
i have one utility that was largely helped by claude in my current project. it drives me nuts, it works but im so terrified of it and its so daunting to change now.
It's been abandoned for some years; the author was working on a new engine for it, and in the last 5 days they started working again on sled proper. However, it's pretty good the way it is (the 0.34.7 release from 2021, https://crates.io/crates/sled), despite the beta warnings.
It already uses a key/value store for on-disk storage, but you’ll have to write the server API and client yourself, along with a Raft state machine layer. It’s not a big lift though, and could make a fun weekend project.