The biggest issue with Windows 11 for me is the noticeable performance lag in basic apps like Notepad and File Explorer. Simple tasks, like opening files or navigating folders, feel sluggish, and I can actually watch windows render in slow motion. I've heard this might be because Windows 11's new UI elements are drawn as a layer on top of the older Win32 UI. I'm considering switching to Linux as my daily driver.
Last I checked, DuckDB spatial didn't handle projections: it couldn't load the CRS from a .prj file. This makes it useless for serious geospatial work.
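If you already know the source CRS you can still reproject by hand with ST_Transform; a minimal sketch, assuming a hypothetical parcels.shp that you know is in EPSG:26918:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL spatial;")
con.execute("LOAD spatial;")

# ST_Read ignores the .prj sidecar, so the source CRS below is something
# you have to know out-of-band and supply yourself.
rows = con.sql("""
    SELECT ST_Transform(geom, 'EPSG:26918', 'EPSG:4326') AS geom_wgs84
    FROM ST_Read('parcels.shp')
""").fetchall()
```

Workable, but you're back to tracking CRS metadata by hand, which is exactly what the .prj file was supposed to do for you.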
You can get prescription inserts from Vuzix, but they're pretty bad. If you need a prescription and want to run AugmentOS, your best bet is to buy the Even Realities G1 instead.
LLMs make it really trivial to work with MermaidJS. Just yesterday I used one to sketch out some business logic as a flowchart. Seeing the whole thing laid out like that helped me catch some corner cases.
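For a sense of what that looks like, here's a toy flowchart of the sort an LLM will happily generate from a one-paragraph prose description (hypothetical order flow, not my actual logic):

```mermaid
flowchart TD
    A[Order received] --> B{Payment ok?}
    B -- yes --> C[Reserve stock]
    B -- no --> D[Reject order]
    C --> E{In stock?}
    E -- yes --> F[Ship order]
    E -- no --> G[Create backorder]
```

Once it's on screen like this, the branches you forgot to handle (what happens to a rejected order? what clears a backorder?) tend to jump out.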
Which cheap LLMs are good at Mermaid, and at sketching graphs from code? I had to switch between the cheap ones and "tutor" them a bit to get them to edit Mermaid inside a Markdown file; Claude, on the other hand, seems perfect at it.
Prefect is amazing. I built out an ETL pipeline system with it at my last job and would love to get it incorporated at the current one, but unfortunately we have a lot of legacy stuff in Airflow. Being able to debug flows locally was amazing, and the K8s integration was super clean.
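For anyone who hasn't tried it, the local-debugging point comes from the fact that a Prefect flow is just a decorated Python function; a minimal sketch with hypothetical extract/transform/load steps:

```python
from prefect import flow, task

@task(retries=2)
def extract() -> list[dict]:
    # stand-in for pulling from a real source system
    return [{"id": 1, "value": "raw"}]

@task
def transform(records: list[dict]) -> list[dict]:
    return [{**r, "value": r["value"].upper()} for r in records]

@task
def load(records: list[dict]) -> None:
    print(f"loaded {len(records)} records")

@flow(log_prints=True)
def etl():
    load(transform(extract()))

if __name__ == "__main__":
    etl()  # a plain `python etl.py` runs it locally, breakpoints and all
```

The same flow can then be deployed to run on Kubernetes without rewriting it, which is where the clean K8s story comes in.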
The other commenter said it right. These tools work and are fine, but you lose the legacy ecosystem. If you know your limits and where the eventual system will end up, it's great and probably the better choice.
If you are building an expandable, long-term system and you want all the goodies baked in, choose Airflow.
It's pretty much the same as any architecture choice: ugly/hard often means control and features, pretty/easy means less of both.
On the surface the differences are not very noticeable beyond the learning curve of getting started.
I think the use cases are slightly different between the two. The Playwright MCP depends on the MCP client (like Claude Desktop or Cursor) to provide the intelligence, while browser-use can "think" by itself. Plus it seems that unless you use the vision mode, you're restricted to the accessibility tree, which may not be present or well populated depending on the website. This also means it won't work as well with tools like Cursor/Windsurf, since they don't really process images from MCPs right now.
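To make the "client provides the intelligence" point concrete: the Playwright MCP server is just a tool endpoint you register with the client, e.g. in Claude Desktop's claude_desktop_config.json (a sketch based on the playwright-mcp README; check it for current flags like vision mode):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

All the planning and reasoning then happens in whatever model the client runs, whereas browser-use ships its own agent loop.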
I'm more in the camp of using Claude computer use / OpenAI CUA. I think they work better for most things, especially if you don't need to interact with hidden/obscured elements.
If you're interested in comparing these different services, you can try HyperPilot by Hyperbrowser at https://pilot.hyperbrowser.ai.
Disclaimer: I worked on HyperPilot, so I might be a bit biased.
It's a 100% client-side React app; there are no server requests except the first one, which creates the anonymous user. The local DB is a SQLite implementation that supports offline mode and cloud sync (not enabled in this version).
There isn't good support for nested RDP connections; I didn't even know some people used something like this.
But there is very good support for nested shell connections and for tunneling RDP over SSH. If your target system isn't reachable directly and requires something like a bastion host connection first, you can still connect via RDP by using SSH tunnels over multiple hops in XPipe.
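For reference, it's the same idea as OpenSSH's ProxyJump plus a local port forward; a sketch with hypothetical host names (XPipe just sets this up for you):

```
# hop through two bastions, then forward local port 3389 to the target's RDP port
ssh -J user@bastion1,user@bastion2 -L 3389:localhost:3389 user@target
# then point the RDP client at localhost:3389
```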