Hacker News

I've tried Experiments. It's great at the easy part of the ML workflow: optimising a working model. But it doesn't really help with the hard part, which is the debugging at the interface between the model and the data.

Say you are building a car detector or something. Building the CNN is ML101, and SageMaker Experiments helps with optimising the training parameters to get the best out of the model.

But that's not really the hard thing. The hard part is working out that your model is failing on cars with reflections of people in the windscreen, or that your dataset's co-ordinate space is "negative = up" so your in-memory data augmentations are making the model learn upside-down cars, or something like that.
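For what it's worth, the co-ordinate-convention bug looks something like this in practice. A toy sketch, all names and numbers made up, with boxes as (x, y, w, h):

```python
IMG_H = 100  # image height in pixels (hypothetical)

# Dataset labelled with "negative = up": y is the negated distance
# from the top of the image, so y = -30 means 30 px down from the top.
dataset_box = (10, -30, 20, 10)

x, y, w, h = dataset_box

# A vertical-flip augmentation written for the standard image
# convention (y increases downward) applied blindly to this label:
wrong = (x, IMG_H - y - h, w, h)
# y = -30 gives a flipped y of 100 - (-30) - 10 = 120: outside the
# image entirely, and every label is now silently garbage.

# Correct: convert the convention first, then flip.
y_img = -y                              # "negative = up" -> "positive = down"
right = (x, IMG_H - y_img - h, w, h)    # 100 - 30 - 10 = 60
```

The model trains without errors either way; you only find out when you plot the augmented boxes on top of the augmented images, which is exactly the kind of check the tooling doesn't do for you.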

I don't know what Debugger gives me over a notebook, but I've only read the blog post.

I haven't tried Model Monitor but I do think that could be useful.


