The short answer is that we tried, back in 2020, while working on a central bank payment switch funded by the Gates Foundation. We found we were hitting the limits of Amdahl's Law, given Postgres's concurrency control, with row locks held across the network as well as across internal I/O. This led to the design of TigerBeetle: specialized not for general-purpose workloads, but only for transaction processing.
On the one hand, yes, you could use a general-purpose string database to count and move integers, up to a certain scale. But a specialized integer database like TigerBeetle can take you further. It's for the same reason that, yes, you could use Postgres as object storage or as a queue, or you could use S3 and Kafka and get separation of concerns in your architecture.
I did a talk diving into all this recently, looking at the power law, OLTP contention, and how this interacts with Amdahl's Law and Postgres and TigerBeetle: https://www.youtube.com/watch?v=yKgfk8lTQuE
I am not an expert on the limitation you claim to have encountered in PostgreSQL, but perhaps someone with more PostgreSQL expertise can chime in on this comment and give some insight.
For updating a single resource where the order of updates matters, the best throughput one can hope for is the inverse of the lock hold duration. Typical Postgres-using applications follow a pattern where a transaction involves multiple round trips between the application and the database, to make decisions in code running on the application server.
But this pattern is not required by PostgreSQL: it's possible to run arbitrarily complex transactions entirely server-side, using more complex query patterns and/or stored procedures. In that case the lock hold time will be determined mainly by time-to-durability, which, depending on infrastructure specifics, might be one or two orders of magnitude shorter. (With fast networks and slow disks, it might not have a huge effect.)
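A rough back-of-the-envelope sketch of the bound described above (plain Python; the latency numbers are illustrative assumptions, not measurements):

```python
# Model: for a single contended row, updates serialize on the row lock,
# so throughput on that row is bounded by 1 / lock_hold_time.

def max_contended_tps(lock_hold_seconds: float) -> float:
    """Upper bound on serialized updates/sec to one hot row."""
    return 1.0 / lock_hold_seconds

# Pattern 1: lock held across several app<->DB round trips
# (say 3 round trips at 1 ms each -- assumed numbers).
round_trip = 0.001
interactive = max_contended_tps(3 * round_trip)   # ~333 updates/sec

# Pattern 2: whole transaction runs server-side; lock hold time is
# dominated by time-to-durability (say a 0.1 ms fsync -- assumed).
fsync = 0.0001
server_side = max_contended_tps(fsync)            # ~10,000 updates/sec

print(f"interactive: {interactive:.0f}/s, server-side: {server_side:.0f}/s")
```

With these made-up numbers the server-side pattern comes out roughly 30x faster on the hot row, which is the "one or two orders of magnitude" effect the comment describes.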
One can also use batching in PostgreSQL to update the resource multiple times per durability cycle. This requires some extra care from the application writer to avoid getting totally bogged down by deadlocks and serializability conflicts.
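A minimal sketch of the batching idea (plain Python, no real database; the function name and values are illustrative): amortize one durability cycle over many updates to the hot row.

```python
# Sketch: fold a batch of updates to a single hot row into one durable write.
# A real implementation would issue one Postgres transaction per batch
# (e.g. a single UPDATE applying the summed delta), so the whole batch
# costs one lock acquisition and one commit/fsync instead of one per update.

from typing import List

def apply_batch(balance: int, deltas: List[int]) -> int:
    """Apply all pending increments in one transaction-sized step."""
    return balance + sum(deltas)

balance = 100
pending = [5, -3, 10, 7]          # updates queued during one durability cycle
balance = apply_batch(balance, pending)
print(balance)                    # 119
```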
What will absolutely kill you on PostgreSQL is high contention under repeatable read or higher isolation levels. PostgreSQL handles update conflicts there with optimistic concurrency control, and high contention totally invalidates all of that optimism. So you need to be clever enough to achieve the necessary correctness guarantees with read committed and the funky semantics it has for update visibility, or use some external locking to get rid of the contention in the database. An option for pessimistic locking would be very helpful for these workloads.
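To illustrate why contention invalidates the optimism, here is a toy validate-then-commit loop in Python. This is not Postgres internals, just the general shape of optimistic concurrency control; all names are made up for illustration:

```python
# Toy optimistic-concurrency loop: read a version, compute a new value,
# then commit only if the version is unchanged; otherwise retry.
# Under high contention, most attempts fail validation and become wasted
# work -- roughly what happens to a transaction that loses the race at
# repeatable read or serializable isolation.

class Row:
    def __init__(self, value: int):
        self.value = value
        self.version = 0

def occ_update(row: Row, delta: int, interfere=None) -> int:
    """Commit one update optimistically; return how many attempts it took."""
    attempts = 0
    while True:
        attempts += 1
        seen_version = row.version
        new_value = row.value + delta   # "compute" outside any lock
        if interfere:
            interfere()                 # a concurrent writer commits first...
            interfere = None            # ...but only on our first attempt
        if row.version == seen_version: # validate: did anyone else commit?
            row.value, row.version = new_value, row.version + 1
            return attempts

row = Row(100)

def rival():
    row.value += 1                      # concurrent committed write
    row.version += 1

assert occ_update(row, 10, interfere=rival) == 2  # first try conflicts, retry wins
print(row.value)  # 111: rival's +1 plus our +10
```

With one rival the cost is a single retry; with many writers hammering the same row, almost every attempt pays the compute cost and then fails validation, which is the collapse the comment describes.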
What would also help is a different kind of optimism: one that removes the durability requirement from the lock hold time, with readers then having to wait for durability instead. Postgres can do tens of thousands of contended updates per second with this model. See the Eventual Durability paper for details.
I was planning to do this for every single post ever made in this category, with traffic tracking, LinkedIn profiles of the people who work there, how much funding they've raised (or how much ARR/MRR for bootstrapped projects), and how many are open source on GitHub, but you beat me to it lol
Indian dude here, sorry to hear that. The way it gets talked about in local circles is that most people don't like the concept of a minimum wage when they're the ones who have to pay someone. The thing about most Indians is that they don't have a clear distinction between personal time and professional time: even if a boss asks them to make a presentation at 10 pm, they'll do it. They feel that people of other nationalities, mostly in developed countries, would not put up with such workplace behavior. Unfortunately, it is a race to the bottom on that one.
This is a very poor article. What I understood is that they take one particular benchmark that tests grade-school-level math, a benchmark that apparently claims to test the ability to reason through math problems.
They agree that the benchmarks show that the LLMs can solve such questions and models are getting better. But their main point is that this does not prove that the model is reasoning.
But so what??? It may not reason the way humans do, but it is pretty damn close. The mechanics are the same: recursively generate a prompt that terminates in an answer-generating prompt.
They don’t like that this implies the model “reasons through” the problem. But it’s just semantics at this point. For me, and for most others, getting the final answer is what matters, and it largely accomplishes that task.
I don’t buy that the model couldn’t reason through it: have you ever asked a model for its explanation? It does genuinely explain how it got the solution. At this point, who the hell cares what “reasoning” means if it gets you the right answer?
We care whether it's reasoning or not because the alternative is that it's guessing, rather than reasoning, and when guessing is measured on benchmarks that are supposed to measure reasoning, the results are likely to be misleading.
Why do we care if the benchmark results are misleading? The reason we have benchmarks in machine learning is that we can use the results on the benchmarks to predict the performance of a system in uncontrolled conditions, i.e. "in the real world". If the benchmarks don't measure what we think they measure then they can't be used to make that kind of prediction. If that's the case then we really have no idea how good or bad a system really is. Seen another way, if a benchmark is not measuring what we think it measures, all we learn from a system passing the benchmark is that the system passes the benchmark.
Still, what do you care if it gets you the right answer? The question is, exactly, how do you know it's really getting you the right answer? Maybe you can tell when you know the answer, but what about answers you genuinely don't know? And how often does it get you the wrong answer but you don't realise? You can't realistically test an AI system by interacting with it as thoroughly and as rigorously as you can with... a benchmark.
That's why we care about having accurate benchmarks that measure what they're supposed to be measuring.
P.S. Another issue of course is that guessing is limited while reasoning is... less limited. We care about reasoning because we ideally want to have systems that are better than the best guessing machine.
What about Splinter Cell: Conviction? 15 years on, and nobody has figured out its .unr map file format, which uses a custom Unreal Engine 2.x. It even has a tool that lets you unpack its UMD files: https://github.com/wcolding/UMDModTemplate The library on GitHub requires this unumd tool: https://www.gildor.org/smf/index.php/topic,458.msg15196.html... The same tool also works for Blacklist. I would like to change the type of enemy spawned in the map, but I cannot find any assistance on it. UE Explorer doesn't work because it is some kind of custom map file.
Does anyone have the slightest idea how much OpenAI is currently earning in revenue per quarter vs. how much money they are actually burning per quarter? What is their user base? 1 billion? What is the upside, basically, is my question.
The way I like to think of this is along the lines of mathematics as usual. Because everything we observe in this universe so far adheres to mathematics, except the inside of a black hole, what happens after death also remains one of many infinite possibilities. One possibility is that nothing happens and you are just gone, gone. Another possibility is that you get reincarnated based on your karma. Another is that you go to heaven or hell. Another is that something entirely different happens as soon as you step outside this spacetime continuum, because death takes you outside of it for sure. And there could be another infinite list of possibilities that none of the religions and none of us humans have accounted for.