Having gone through a round of interviews, I completely agree that live coding doesn’t measure anything of substance.
There was one company whose process I really appreciated:
- A take-home test, more of a puzzle-style challenge.
- The interviewer does a quick review of your submission.
- If they like what they see, you're invited to a follow-up session where you explain your code and make a few simple changes to it.
This approach proves that the work is genuinely yours, shows your thought process, and still gives the opportunity to demonstrate live coding on code you're already familiar with.
I think I saw a video from their CEO showcasing some of the things in their pipeline; they're upgrading the webcams significantly. The best thing about a Framework is that you can just buy the new webcam and swap it in.
Might upgrade in a year or two if I feel I need more brightness.
The fact that you can upgrade it slowly, piece by piece, is fantastic. It spreads the cost of upgrading the machine over the years instead of forcing you to shell out all the cash at once. No need to reinstall everything every time, either.
You can upgrade the battery when you need to. Then maybe the RAM three years later, once Electron apps with embedded AI models take over the world and a hello world eats 4 GB.
Perhaps, but unless there's a way of processing wool I've never seen, it also lets a lot of wind through and doesn't insulate like a bit of dry duck down (nothing does). And you have to keep the down dry.
These devices should be apps on your phone. Google and Apple won't open their devices enough for these assistants to be useful because they want to roll their own, so the only hope these companies have is to create a standalone device. But yeah, they're dead in the water.
Another point from Dave2D: they're rushing to launch these devices now because they might become obsolete after Google I/O and Apple's WWDC, when/if on-device AI assistants are introduced.
The goal is to iteratively create training data and add it to the model's own training set. The LLM acts as its own judge and scores its own responses to decide whether the data should be added. It's expensive to keep a human in the loop labeling preferences, so the folks at Meta showed you can use a clever prompt and fine-tune the model to judge its own responses.
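For anyone who wants the gist in code, here is a minimal, hypothetical sketch of that loop, not Meta's actual implementation. `generate` and `judge_score` are placeholder stubs (a real setup would call an LLM API and parse the judge's rating out of its reply), and the fine-tuning step the paper uses (DPO on the resulting pairs) is only indicated in a comment.

```python
# Sketch of a self-rewarding loop: the model generates, judges its own
# outputs, and the resulting preference pairs feed the next fine-tuning round.
import random

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call (hypothetical)."""
    return f"response to: {prompt}"

def judge_score(prompt: str, response: str) -> int:
    """LLM-as-a-judge: ask the same model to rate its own response 1-5.

    The real version uses a carefully written rubric prompt and parses the
    score from the model's reply; here we fake it so the sketch runs standalone.
    """
    rubric = (
        "Review the response below and score it from 1 to 5.\n"
        f"Prompt: {prompt}\nResponse: {response}\nScore:"
    )
    _ = generate(rubric)           # real code would parse the returned rating
    return random.randint(1, 5)    # stand-in score

def build_preference_pairs(prompts, samples_per_prompt=4):
    """Create (chosen, rejected) pairs from the model's own scored outputs."""
    pairs = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(samples_per_prompt)]
        scored = sorted((judge_score(prompt, r), r) for r in candidates)
        worst, best = scored[0], scored[-1]
        if best[0] > worst[0]:       # only keep pairs with a clear winner
            pairs.append({"prompt": prompt,
                          "chosen": best[1],
                          "rejected": worst[1]})
    return pairs

# Each iteration: generate -> self-judge -> collect pairs -> fine-tune on the
# pairs (DPO in the paper) -> repeat with the updated model.
print(build_preference_pairs(["Explain preference tuning in one sentence."]))
```

The interesting design choice is that the same model plays both roles, so each round of fine-tuning can improve both its answers and its judging.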