There's a lot written in the credit-scoring space that I think other industries could learn from, especially when it comes to model calibration. It doesn't matter if the prediction is weak as long as it is consistent across time periods; banks rely on that consistency to make sure they are provisioning properly for losses.
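To make the calibration point concrete, here's a minimal sketch (my own illustration, not any particular bank's methodology) of the usual check: bucket accounts into score bands and compare the mean predicted probability against the observed bad rate in each band. A weak model can still pass this check if the two columns track each other period after period.

```python
def calibration_table(scores, outcomes, band_edges):
    """Group accounts into score bands and compare the mean predicted
    probability with the observed bad rate in each band."""
    table = []
    for lo, hi in zip(band_edges, band_edges[1:]):
        in_band = [(s, y) for s, y in zip(scores, outcomes) if lo <= s < hi]
        if not in_band:
            continue  # skip empty bands
        preds = [s for s, _ in in_band]
        obs = [y for _, y in in_band]
        table.append({
            "band": (lo, hi),
            "n": len(in_band),
            "predicted": sum(preds) / len(preds),
            "observed": sum(obs) / len(obs),
        })
    return table

# Toy data, made up for illustration: predicted PDs and 0/1 outcomes.
scores = [0.05, 0.07, 0.12, 0.15, 0.22, 0.28, 0.31, 0.40]
outcomes = [0, 0, 0, 1, 0, 0, 1, 0]
for row in calibration_table(scores, outcomes, [0.0, 0.1, 0.2, 0.3, 0.5]):
    print(row)
```

Running the same table on each vintage and watching whether predicted and observed stay in line is the "consistency over time" that provisioning leans on.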
Most of our people came from a stats background using R or Python (more focused on SciPy/NumPy than any other package).
A few of us had prior experience with Lua from game programming, so we took it on ourselves to teach the others Scheme (almost like an intro programming class) and then show how Lua worked in comparison.
I guess we lucked out, because they were able to pick it up quickly once they had that grounding in functional programming.
I'm curious about the real-world implementation risk, and whether anyone has a methodology to proactively deal with external factors affecting model performance. For example, if a feature is highly predictive of a certain outcome, is there a framework to measure its volatility based on information outside the dataset (e.g. product changes, marketing campaigns, etc.)?
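Not a full answer to the external-factors question, but the standard monitoring tool in credit scoring for catching this after the fact is the Population Stability Index: compare a feature's binned distribution at development time against the current period, and a spike flags that something outside the data (a campaign, a product change) has shifted the incoming population. A minimal sketch, with made-up bin proportions:

```python
import math

def psi(expected_props, actual_props, eps=1e-6):
    """Population Stability Index between two binned distributions.
    Commonly quoted rule of thumb: < 0.1 stable, 0.1-0.25 worth
    watching, > 0.25 a significant shift."""
    total = 0.0
    for e, a in zip(expected_props, actual_props):
        e = max(e, eps)  # guard against empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Toy example: a feature's bin proportions at development time vs. after
# a hypothetical marketing campaign pulls in a different population.
dev = [0.25, 0.25, 0.25, 0.25]
now = [0.10, 0.20, 0.30, 0.40]
print(round(psi(dev, now), 4))
```

It won't tell you *why* the distribution moved, which is why I'd still want the kind of forward-looking framework you're describing, but tracking PSI per feature at least gives you an alarm tied to each release or campaign date.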