Was gonna say, this is the most popular HN thread about Slicer ever. I'll admit it has its shortcomings (as someone who built a tool that was theoretically an almost perfect fit for Slicer, but didn't quite work with it), and it has some other limitations for potential large clients. But I think it gets more mindshare within Google than outside it (especially today).
Ok, maybe it hasn't taken HN by storm, but all three of the large public companies in my post-Google experience had re-implemented Slicer. What they all lacked were other, simpler, and more generally useful RPC directors that are much more common but don't have papers to go with them.
According to Agnieszka Grabska-Barwinska, a member of the team, the graph neural network learned to encode a pattern that physicists call correlation length. That is, as DeepMind’s graph neural network restructured itself to reflect the training data, it came to exhibit the following tendency: When predicting propensities at higher temperatures (where molecular movement looks more liquid-like than solid), for each node’s prediction the network depended on information from neighboring nodes two or three connections away in the graph. But at lower temperatures closer to the glass transition, that number — the correlation length — increased to five.
“We see that the network extracts, as we lower the temperature, information from larger and larger neighborhoods” of particles, said Thomas Keck, a physicist on the DeepMind team. “At these different temperatures, the glass looks, to the naked eye, just identical. But the network sees something different as we go down.”
Increased correlation length is a hallmark of phase transitions, in which particles transition from a disordered to an ordered arrangement or vice versa. It happens, for instance, when atoms in a block of iron collectively align so that the block becomes magnetized. As the block approaches this transition, each atom influences atoms farther and farther away in the block.
They used molecular dynamics (MD) simulation to train the model - couldn't the correlation length be calculated from the MD simulations directly without the graph network to gain the same insight?
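For reference, a simplistic direct estimate from an MD snapshot might look like the sketch below: bin the propensity-propensity covariance by pair distance and see where it decays. This is only an illustration, not anything from the paper; the `positions` and `propensity` arrays are assumed inputs, and periodic boundary conditions and normalization details are ignored for brevity.

```ts
// Rough sketch: estimate how particle "propensity" correlations decay with
// distance in a single MD snapshot, with no neural network involved.
// All names here are illustrative assumptions, not from the paper.
type Vec3 = [number, number, number];

function correlationByDistance(
  positions: Vec3[],
  propensity: number[],
  binWidth: number,
  maxDist: number
): number[] {
  const nBins = Math.ceil(maxDist / binWidth);
  const sum = new Array<number>(nBins).fill(0);
  const count = new Array<number>(nBins).fill(0);

  // Center the propensities so each bin holds a covariance, not a raw product.
  const mean = propensity.reduce((a, b) => a + b, 0) / propensity.length;
  const dev = propensity.map((p) => p - mean);

  // Brute-force O(N^2) pair loop; fine for a sketch, not for production.
  for (let i = 0; i < positions.length; i++) {
    for (let j = i + 1; j < positions.length; j++) {
      const dx = positions[i][0] - positions[j][0];
      const dy = positions[i][1] - positions[j][1];
      const dz = positions[i][2] - positions[j][2];
      const r = Math.sqrt(dx * dx + dy * dy + dz * dz);
      if (r >= maxDist) continue;
      const bin = Math.floor(r / binWidth);
      sum[bin] += dev[i] * dev[j];
      count[bin] += 1;
    }
  }

  // The distance at which this curve decays toward zero is one (simplistic)
  // notion of a correlation length.
  return sum.map((s, k) => (count[k] > 0 ? s / count[k] : 0));
}
```

Running this on snapshots at several temperatures and comparing where the curves decay would be the hand-rolled analogue of the growing neighborhood the network is described as using; whether it captures the same structure the network found is, of course, the interesting part of the question.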
They appear to have changed something in the past few minutes. When I first opened this HN thread it showed me Google's homepage. Now I'm also seeing that redirect.
It was 2015 when I first encountered this while working with Andrew and James. I couldn't find any online info on the origin, and the few Firebasers I asked couldn't tell me. I feel like I have closure now that it really wasn't a thing. Or, I guess it was and it really is now.
I've been around a company that was 180+ years old (so rebranding was non-trivial ;)). They switched to an acronym based on the original name, so there was 1) a direct link between old and new, 2) old branding that was missed wasn't a big deal for new customers, and 3) everyone called it by the acronym going forward.
A similar approach might work for you and allow an incremental phase-out of the old brand over time. Instead of HxHxHx, it's now HITD (HxHxHx IT Desktop).
I once did a write up about Aflac's rebranding because I worked there for over five years and couldn't find a write up of it anywhere. I think that blog is no longer available to the public, so we may be back to "It doesn't exist anywhere online that I can tell."
One example would be where you have conditional logic that requires a database call. Without workers, everything would go back to the backend system, perhaps across the ocean. With workers in front, you could short-circuit that for all calls that don't actually need the DB hit. You could also handle routing logic at that layer, letting it pick closer DB instances, etc.
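Purely as an illustration of that split (nothing from the parent comment), here is a Cloudflare-Workers-style handler in TypeScript; the paths, hostnames, and the region check are all made-up assumptions, and the `Request`/`fetch` typings assume the Workers runtime types.

```ts
// Illustrative edge handler: answer DB-free requests locally and route the
// rest to a nearby backend. Paths and hostnames are hypothetical.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Short-circuit requests that never need the database, e.g. a health
    // check or a small config blob that can live at the edge.
    if (url.pathname === "/healthz") {
      return new Response("ok", { status: 200 });
    }
    if (url.pathname === "/config.json") {
      return new Response(JSON.stringify({ featureX: true }), {
        headers: { "content-type": "application/json" },
      });
    }

    // Routing logic at the edge: pick a closer backend/DB region.
    // `request.cf` is Cloudflare-specific; treat this field as an assumption.
    const continent = (request as { cf?: { continent?: string } }).cf?.continent;
    const origin =
      continent === "EU" ? "https://eu.api.example.com" : "https://us.api.example.com";

    // Everything else still crosses to a backend, but to the nearest one.
    // In the Workers runtime, new Request(url, request) copies method/headers/body.
    return fetch(new Request(origin + url.pathname + url.search, request));
  },
};
```

The specifics don't matter much; the point is that whether a request ever needs the database can often be decided from the URL and headers alone, and that decision is exactly the work that can stay at the edge.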
[Disclaimer: Product Manager on Cloud Firestore who thought this was an interesting use-case]
1. Mainly because it's both a different search problem (a general DB vs. something specific to web search) and hard engineering-wise given our model; we implement not only the cloud database, but embedded versions for iOS, Android, and Web - not to mention real-time functionality, tailoring it to how our index engine works, etc. While we have a lot of customers and use cases that don't need full-text search, we totally agree it's important and have done explorations on how we'd deliver something along these lines.
2. Agreed. During the beta program we delivered the managed export and import service for backups, added array-contains capabilities to queries, and got close enough to delivering Collection Group queries to mention them as part of GA. For documentation, our tech writing team has done a lot of updates, new pages, and fixes - we know there is always more to do. Cloud Firestore is definitely used in production and at scale by our customers, and with nearly 1 million databases created, the range of use cases and traffic/load patterns has been vast. Our beta program involved working with a lot of them to improve things like hardening and scalability to ensure we can meet our 5-nines availability SLA.
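For anyone who hasn't used them, here's roughly what the two query features mentioned in point 2 look like with the Firebase JS SDK's modular (v9+) API; the collection and field names are made-up examples, not anything from this thread.

```ts
// Sketch of array-contains and collection group queries in Cloud Firestore.
// "posts", "tags", "comments", and "flagged" are illustrative names only.
import { initializeApp } from "firebase/app";
import {
  getFirestore,
  collection,
  collectionGroup,
  query,
  where,
  getDocs,
} from "firebase/firestore";

const app = initializeApp({ projectId: "demo-project" }); // config is illustrative
const db = getFirestore(app);

// array-contains: match documents whose "tags" array includes a value.
const tagged = query(collection(db, "posts"), where("tags", "array-contains", "news"));

// Collection group query: query every subcollection named "comments",
// regardless of which parent document it hangs off.
const allComments = query(collectionGroup(db, "comments"), where("flagged", "==", true));

async function run() {
  const taggedSnap = await getDocs(tagged);
  taggedSnap.forEach((doc) => console.log(doc.id, doc.data()));

  const commentSnap = await getDocs(allComments);
  console.log(`${commentSnap.size} flagged comments across all parents`);
}

run().catch(console.error);
```

Note that collection group queries with a filter generally require enabling a collection-group index on the queried field.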
"Isn't half a solution better than no solution?" -> In a lot of cases, absolutely not. A half solution that falls over when you tip a certain point of scale can result in extended downtimes, since the solution often ends up being "we need to completely rearchitect this", which isn't easy or quick when your business is out of commission.
"from the perspective of a customer and outside observer, a number of things smell quite off." -> Sorry to hear this, I can only hope the continued hard work from the team will turn you around.
Thank you for the answer. I have to admit it doesn't quite sway me, for reasons such as those below, but thank you.
E.g., yes, I realise they are different search problems, but I'd presume that Google is nonetheless well-equipped to handle the document DB one. The only apps I can imagine that couldn't benefit from a search box are games - anything content-focused or ecommerce-focused needs one, and the majority of utilities benefit too (yes, I can do chat without search, but it certainly benefits from being able to search through chats) - any examples? Yes, I realize having to do Backend/iOS/Android/Web is hard (as it is for everybody else), but in the on-device cases at least the DB is smaller. I'm sure you do have big users, and I didn't mean to imply otherwise, but with my admittedly very limited knowledge I'd still wager that a majority don't see uptime and scalability as the most urgent improvements, but rather the things we're discussing. In our case, give us just 2 nines of uptime, and give us the above queries and searches even if they're 2x as slow and expensive as you'd like them to be, and limited to a DB the size of an average relational DB, and that would beat the extra 3 nines of uptime and the super scalability any day. Not least because - I don't mean to be rude here, just candid - if we were ever to reach a point where we needed that massive scale and uptime, I'm not sure I'd be keen to trust Google with user data.
To be clear - I like several things about Firebase/Firestore, which is why we use it and why I'm insisting on badgering you here. I just wish I could be completely comfortable with my choice rather than wondering every day if I shouldn't just use something else.
If Cloud Spanner is too big, then you'll almost certainly be well served by Cloud SQL (fully managed MySQL and PostgreSQL): http://cloud.google.com/sql/