A KVM system is not a BMC. If you're running a cluster (even a fairly small one), you probably want power control (e.g. with powerman), serial console access (e.g. with conman), metrics (e.g. via freeipmi and some monitoring solution), and probably logs for alerts. Control and monitoring need to be out of band.
(Powerman, conman, and freeipmi come from Livermore for use with serious HPC systems.)
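For a concrete flavor of the out-of-band metrics piece, here is a rough Python sketch that shells out to freeipmi's ipmi-sensors; the BMC hostnames and credentials are placeholders, and parsing/alerting is left to whatever monitoring stack you already run.

    # Minimal sketch: poll BMC sensor data out of band via freeipmi's ipmi-sensors.
    # Hostnames and credentials below are placeholders.
    import subprocess

    NODES = ["node01-bmc", "node02-bmc"]   # hypothetical BMC addresses
    USER, PASSWORD = "admin", "secret"     # placeholder credentials

    def read_sensors(bmc_host):
        """Run ipmi-sensors against one BMC and return its raw output."""
        result = subprocess.run(
            ["ipmi-sensors", "-h", bmc_host, "-u", USER, "-p", PASSWORD],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        for node in NODES:
            print("===", node, "===")
            print(read_sensors(node))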
wxWidgets is a wrapper over the native GUI toolkit. It lets you write OS-independent GUI code, and because it is a wrapper you get most of the benefits of the native toolkit, like dark mode.
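To make that concrete, here is a minimal wxPython sketch (the Python bindings for wxWidgets); the same few lines produce a native-looking window on each OS because wx delegates to the platform toolkit underneath.

    # Minimal wxPython sketch: one code path, native widgets on every platform.
    import wx

    class HelloFrame(wx.Frame):
        def __init__(self):
            super().__init__(parent=None, title="Hello, wx")
            panel = wx.Panel(self)
            sizer = wx.BoxSizer(wx.VERTICAL)
            sizer.Add(wx.StaticText(panel, label="Rendered with native widgets"),
                      flag=wx.ALL, border=10)
            panel.SetSizer(sizer)

    if __name__ == "__main__":
        app = wx.App()
        HelloFrame().Show()
        app.MainLoop()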
> most of the benefits of the native toolkit, like dark mode.
On which OS does dark mode work with wx? Certainly not on Windows... (which, as VZ points out, isn't wx's fault; it's just that no official APIs exist for this - explorer.exe has dark mode, but it uses completely private, undocumented APIs).
The fact that you have to change your SSID to opt out of third parties using it is... shady at best. What happens when two competing third parties have conflicting naming requirements for you to opt out?
There’s a rumor that, due to the large number of people taking ML courses, there will be far more people with ML skills than ML jobs.
I hope this is true. There are so many areas where ML skills could be useful. The sad part would be that some industries would be changed forever. The animation industry, for example, might not even exist in its current form.
I don't think that's the case - there will be more desired ML projects than people capable of implementing them. ML is like electricity 100 years ago or programming 40 years ago; we haven't yet applied it to most of society's problems.
The problem is it's not as useful as many people seem to think. I often hear my colleagues suggest ML for anything remotely complicated, even something like "measuring body fat percentage using electricity" which in reality only needs a physical equation.
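To illustrate what "only needs a physical equation" means here, this is a bioelectrical-impedance-style calculation; the regression coefficients are invented for the sketch (real devices use population-specific formulas), but the point is that it's a closed-form expression, not a model you have to train.

    # Illustrative only: body fat from a BIA-style regression.
    # Coefficients a, b, c are made up for the sketch.
    def body_fat_percent(weight_kg, height_cm, resistance_ohm,
                         a=0.6, b=0.18, c=5.0):
        # Fat-free mass scales roughly with height^2 / resistance,
        # plus a weight term and an offset.
        fat_free_mass = a * (height_cm ** 2) / resistance_ohm + b * weight_kg + c
        fat_mass = weight_kg - fat_free_mass
        return 100.0 * fat_mass / weight_kg

    print(round(body_fat_percent(80.0, 180.0, 500.0), 1))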
I've even heard people suggest it for web scraping, which seems absolutely crazy to me.
It can make a lot of sense for web scraping. If you have lots of target sites, you can either build strict extraction rules and update them constantly, hand-build something generic (often very hard), or train some classifiers for the content you want.
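As a toy sketch of the "train some classifiers" option (the training snippets below are invented), you label a handful of text blocks per site as target content or boilerplate and let a generic model cover the sites you never wrote rules for:

    # Toy content-vs-boilerplate classifier for scraped text blocks.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    blocks = [
        "Posted by admin on 2019-04-02 | 3 comments",
        "Share this article on Twitter and Facebook",
        "The new firmware reduces boot time by roughly 40 percent.",
        "Researchers describe a cheaper process for recycling lithium cells.",
    ]
    labels = ["boilerplate", "boilerplate", "content", "content"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(blocks, labels)

    # On an unseen page, keep only what the model tags as content.
    new_blocks = ["Subscribe to our newsletter", "The update also fixes two crashes."]
    print(list(zip(new_blocks, model.predict(new_blocks))))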
I could actually see a use case for web scraping. If you're after particular pieces of content that aren't accessible in a structured way, on a site that rate-limits you to the point of being restrictive, a bit of NLP could help you rank which links to click.
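Something like this rough sketch, for example: score candidate links against what you're actually after before spending your rate-limited requests (the anchor texts and query are made up).

    # Rank links by relevance to a target topic before fetching them.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    query = "quarterly financial results and revenue guidance"
    anchors = [
        "Careers at Example Corp",
        "Q3 earnings call transcript",
        "Press release: revenue guidance raised",
        "Photo gallery from the company picnic",
    ]

    vec = TfidfVectorizer().fit(anchors + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(anchors))[0]

    # Visit the most promising links first.
    for score, text in sorted(zip(scores, anchors), reverse=True):
        print("%.2f  %s" % (score, text))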
I tried using OCR to scrape Facebook profiles by simulating web browsing behavior. It helps a lot with avoiding account blocks, but it's still too slow to be practical.
I'm really just curious about this approach and want to test it, since most older scraping methods fail on Facebook data. My take is that it's possible with enough resources, since it's actually pretty hard to distinguish this from real usage.
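For reference, the screenshot-then-OCR part of this is only a few lines; the URL below is a placeholder, and the anti-blocking side (human-like timing, scrolling, and so on) is exactly the part this sketch leaves out.

    # Bare-bones screenshot-then-OCR: render a page in a real browser,
    # capture it as an image, and pull the text back out with Tesseract.
    import time
    from selenium import webdriver
    from PIL import Image
    import pytesseract

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/some-profile")  # placeholder URL
        time.sleep(3)                   # crude wait for the page to render
        driver.save_screenshot("page.png")
    finally:
        driver.quit()

    print(pytesseract.image_to_string(Image.open("page.png")))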
The rear view camera system almost always runs independently of the main Head Unit (HU) or In-Vehicle Infotainment (IVI) system. In most cases the rear view camera view is a single application written specifically for the target microprocessor (SuperH, for example) and is the only thing running on that microprocessor. The HU and the rear view camera share the display. While you are in R, the HU is still booting Linux or QNX; when you shift to D, the screen switches over to the HU. The rear view camera application keeps running uninterrupted.
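Purely as a toy model of that display arbitration (none of these names or states come from any real vendor's design): the camera application never stops, only the source driving the shared screen changes.

    # Hypothetical display arbitration between the camera MCU and the head unit.
    def display_source(gear, hu_booted):
        if gear == "R":
            return "rear_camera"    # camera MCU output, available immediately
        if hu_booted:
            return "head_unit"      # Linux/QNX IVI once it has finished booting
        return "splash_screen"      # HU still booting, nothing else to show

    print(display_source("R", hu_booted=False))   # rear_camera
    print(display_source("D", hu_booted=False))   # splash_screen
    print(display_source("D", hu_booted=True))    # head_unit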
What you said is (or used to be) right, but GP is also probably right: hardware companies love to migrate distributed, reliable systems into overcomplicated but integrated Linux contraptions.