Some require a lot more understanding than others (for instance, I'm not sure I'd be comfortable implementing a kernelized SVM from scratch, even though I understand intuitively how it works), but basic neural networks (a simple perceptron, a simple feedforward network, a simple recurrent network) are quite easy to grasp, and backpropagation is very intuitive. You can even use finite difference approximation [1] to bypass the derivatives when you're starting out (at the cost of some efficiency), as in the sketch below, and figure out the rest as you go.
[1] https://en.wikipedia.org/wiki/Finite_difference
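Here's a minimal sketch of what that looks like: a tiny feedforward network trained with central finite differences instead of analytic backprop gradients. The specifics (XOR toy data, a 2-4-1 architecture, the learning rate, epsilon, and step count) are all illustrative assumptions, not a reference implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: learn XOR with a 2-4-1 feedforward network (illustrative choice).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Flat parameter vector holding W1, b1, W2, b2.
    sizes = [(2, 4), (1, 4), (4, 1), (1, 1)]
    params = rng.normal(0, 0.5, sum(int(np.prod(s)) for s in sizes))

    def unpack(p):
        # Split the flat vector back into the four weight/bias arrays.
        out, i = [], 0
        for shape in sizes:
            n = int(np.prod(shape))
            out.append(p[i:i + n].reshape(shape))
            i += n
        return out

    def loss(p):
        # Forward pass only: tanh hidden layer, sigmoid output, mean squared error.
        W1, b1, W2, b2 = unpack(p)
        h = np.tanh(X @ W1 + b1)
        pred = 1 / (1 + np.exp(-(h @ W2 + b2)))
        return np.mean((pred - y) ** 2)

    eps, lr = 1e-4, 1.0  # hypothetical hyperparameters
    for step in range(5000):
        # Central finite difference: perturb each parameter one at a time
        # and approximate dL/dp_j as (L(p + eps) - L(p - eps)) / (2 * eps).
        grad = np.zeros_like(params)
        for j in range(len(params)):
            d = np.zeros_like(params)
            d[j] = eps
            grad[j] = (loss(params + d) - loss(params - d)) / (2 * eps)
        params -= lr * grad

    print("final loss:", loss(params))

The inefficiency mentioned above is visible here: every step costs two full forward passes per parameter, which is exactly what backpropagation avoids by reusing intermediate results. But it only needs the loss function, so it's a handy way to get a network learning (or to sanity-check your backprop gradients) before you've worked out the calculus.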