As someone who purchased their first M-series Mac this year (an M4 Pro), I've been thrilled to discover how well it handles local genAI tasks for producing text, code, and images. For example, openai/gpt-oss-20b runs quite well locally in 24 GB of memory. Had I known beforehand how performant the Mac would be for these kinds of tasks, I probably would have bought more RAM so I could load larger models. GenAI performance is a function of the GPU (number of GPU cores) and memory bandwidth; I think the biggest gains come from stepping up from a base chip to a pro/max/ultra version, with more GPU cores and greater bandwidth.
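To see why memory bandwidth dominates, here is a rough back-of-the-envelope sketch (my own rule of thumb, not a benchmark): during token-by-token decoding, a memory-bound LLM must stream essentially all of its weights per generated token, so bandwidth divided by model size gives an upper bound on tokens/sec. The bandwidth and quantization figures below are assumptions for illustration.

```python
# Rule-of-thumb estimate (assumption, not a measurement): for
# memory-bandwidth-bound LLM decoding, every generated token requires
# reading all model weights once, so:
#     tokens/sec <= memory bandwidth / model size in bytes

def est_tokens_per_sec(bandwidth_gb_s: float,
                       params_billions: float,
                       bytes_per_param: float) -> float:
    """Upper bound on decode speed for a memory-bound model."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Example: a ~20B-parameter model quantized to ~4 bits (0.5 bytes/param)
# on a chip with ~273 GB/s of memory bandwidth (assumed M4 Pro figure).
print(round(est_tokens_per_sec(273, 20, 0.5), 1))  # -> 27.3
```

Real throughput lands below this bound (compute, KV-cache reads, and overhead all cost something), but it explains why a pro/max/ultra chip's extra bandwidth matters more than raw clock speed for this workload.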
That's pretty much how all laser particle counters work... except the good ones use a fan and a chamber. Guess we'll have to wait and see how this compares to the reference sensors.
I think there is at least some plausible interpretation of this that points to more than marketing fluff.
You want to count particles per volume of air, so conventional sensors use a fan to maintain a constant volumetric flow and then count particles per second to infer particles per volume.
The way I interpret the above marketing language is that they use the optical sensor not only to count particles but also to measure the particle movement and infer airflow. So as long as there is some natural movement in the air, they can measure both particle count and volumetric flow, and thus infer particles per volume.
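That interpretation can be sketched in a few lines (this is my reading of the marketing language, not Bosch's actual algorithm; the sensing cross-section and velocity numbers are made up for illustration): if the optics can estimate air velocity from particle motion, then with a known sensing cross-section the volumetric flow follows, and concentration is just count rate divided by flow.

```python
# Sketch of the fan-less inference described above (an assumed model,
# not Bosch's implementation): the optical sensor yields both a particle
# count rate and an estimated air velocity; with a known sensing
# cross-section, concentration = count rate / volumetric flow.

def concentration_per_cm3(counts_per_s: float,
                          velocity_cm_s: float,
                          cross_section_cm2: float) -> float:
    """Particles per cm^3 from count rate and optically inferred flow."""
    flow_cm3_s = velocity_cm_s * cross_section_cm2  # volumetric flow rate
    return counts_per_s / flow_cm3_s

# e.g. 50 particles/s through a 0.1 cm^2 sensing window at 10 cm/s air speed
print(concentration_per_cm3(50, 10, 0.1))  # -> 50.0 particles/cm^3
```

Note the dependence on "some natural movement in the air": as the velocity estimate approaches zero, the flow term vanishes and the division blows up, which is presumably why still air is the hard case for a fan-less design.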
This is Bosch, not some random startup. It's surely a substantial feat of integration and miniaturization, and, coming from Bosch, certainly built to enterprise and clinical-grade standards.
A website in the US doesn't deliver anything to the UK, it hands off some packets to a router in the US. Why is the website responsible for what all the interconnecting routers do? If a person from the UK were to visit an adult bookstore in the US, the bookstore owner isn't at fault if the customer decides to move certain material across national boundaries.