
Having gone through a round of interviews, I completely agree that live coding doesn’t measure anything of substance.

There was one company whose process I really appreciated:

- A take-home test, more of a puzzle-style challenge.

- The interviewer does a quick review of your submission.

- If they like what they see, you're invited to a follow-up session where you explain your code and make a few simple changes to it.

This approach proves that the work is genuinely yours, shows your thought process, and still gives the opportunity to demonstrate live coding on code you're already familiar with.


> make a few simple changes to it.

You're ultimately back to live coding then, so clearly they think it is necessary.

All you're describing is a very involved pre-screen.


If they have 3 cloud providers and, say, 5 regions, that's potentially 15 clusters. My assumption is that they're running in even more regions.


I think I saw a video from their CEO showcasing some of the things in their pipeline; they're upgrading the webcams significantly. The best thing about a Framework is that you can just get the new webcam and upgrade it.


Yep.

Also, the new screen looks nice.

Might upgrade in a year or two if I feel I need more brightness.

The fact you can change it slowly, piece by piece, is fantastic. It spreads the cost of upgrading the machine over the years instead of having to put down all the cash at once. No need to reinstall everything every time.

You can upgrade the battery when you need to. Then maybe the RAM 3 years later, once Electron apps with embedded AI models take over the world and a hello world eats 4 GB.

This is, to me, the killer feature.

Young me would have hated it.

I wanted a new pony regularly.


Also reading the 3rd book and enjoying it!


I'm on Windows and I find the native Docker Desktop app pretty good. Does podman offer benefits over it?


Not desperate for a business plan and revenue like Docker.


For now. Docker started out the same way.


Should probably be under the Apache umbrella/stewardship to be honest.


If it's damp, wool is one of the best things you can wear.


Perhaps, but unless there's a way of processing wool I've never seen, it also lets lots of wind through and doesn't insulate like a bit of dry duck down (nothing does). And you have to keep the down dry.


I think Dave2D zeroes in on the why in this video: https://www.youtube.com/watch?v=ZMqhE9r5JuI&ab_channel=Dave2...

These devices should be apps on your phone. Google and Apple won't open their devices enough for these assistants to be useful because they want to roll their own, so their only hope is to create a standalone device. But yeah, they're dead in the water.


Another point from Dave2D: they rushed to launch the devices now because they might become obsolete after Google I/O and Apple's WWDC, when/if those companies introduce on-device AI assistants.


Would be my suggestion too.


What's the goal of a self-rewarding LLM?


The goal is to iteratively create training data and add it to its own training set. The LLM acts as its own judge and scores its own responses to decide whether it should add the data. It's expensive to have a human in the loop labeling preferences, so the folks at Meta showed you can use a clever prompt and fine-tune the model to judge its own responses.
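
Roughly, one round of that loop might look like the sketch below. This is a minimal Python illustration, not Meta's exact recipe: the llm(prompt) callable, the judging prompt, and the score parsing are all hypothetical placeholders.

    import random
    import re

    # Illustrative judging prompt (the real one is more elaborate).
    JUDGE_PROMPT = (
        "Review the response below and rate it from 1 to 5 on helpfulness.\n"
        "Prompt: {prompt}\nResponse: {response}\nScore:"
    )

    def self_reward_round(llm, prompts, samples_per_prompt=4):
        """One iteration: sample responses, let the model score them,
        and keep (chosen, rejected) pairs as new preference data."""
        preference_pairs = []
        for prompt in prompts:
            candidates = [llm(prompt) for _ in range(samples_per_prompt)]
            scored = []
            for response in candidates:
                judgement = llm(JUDGE_PROMPT.format(prompt=prompt, response=response))
                match = re.search(r"[1-5]", judgement)
                score = int(match.group()) if match else 1
                scored.append((score, response))
            scored.sort(key=lambda s: s[0], reverse=True)
            best, worst = scored[0], scored[-1]
            if best[0] > worst[0]:  # only keep pairs with a clear preference
                preference_pairs.append(
                    {"prompt": prompt, "chosen": best[1], "rejected": worst[1]}
                )
        return preference_pairs

    # Toy stand-in for the model so the sketch runs end to end.
    def toy_llm(prompt):
        if prompt.startswith("Review the response"):
            return f"Score: {random.randint(1, 5)}"
        return f"Answer #{random.randint(1, 100)}"

    if __name__ == "__main__":
        print(self_reward_round(toy_llm, ["Explain recursion briefly."]))

The kept pairs would then feed a preference fine-tuning step (e.g. DPO), and you run the loop again with the updated model.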


Funny you say that. I recently installed the latest VS Community edition, and initial startup and general performance have been very snappy.

I'm running very few extensions, but with hot reload enabled I'm blazing through work.


It gets pretty sluggish when you're working with really really big solutions (in my experience).

