It is not a buzzword, as it describes a very specific concept: learning based on a neural network with multiple (say >= 3) hidden layers.
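For concreteness, here is a minimal sketch of that definition (in Python with numpy; the layer sizes, ReLU nonlinearity, and random weights are my own illustrative assumptions, not taken from any particular library):

    import numpy as np

    def relu(x):
        return np.maximum(0, x)

    rng = np.random.default_rng(0)
    # input, three hidden layers (the "deep" part), output
    sizes = [8, 16, 16, 16, 4]
    weights = [rng.standard_normal((m, n)) * 0.1
               for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(x):
        # each hidden layer: a learned linear map followed by a nonlinearity
        for W in weights[:-1]:
            x = relu(x @ W)
        return x @ weights[-1]  # linear output layer

    y = forward(rng.standard_normal(8))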
It is not a new idea, but it has only become viable in the last few years, thanks both to advances in computational power (huge clusters, GPUs, big data) and to algorithmic breakthroughs (sampling algorithms, stochastic optimization, contrastive divergence, ...).
Yes, in research, when there is a hot topic like this, everybody tries to jump on the bandwagon, and the term is now so widespread that attaching "deep learning" to anything makes it sound cool.
What I meant is that this term at least has some specificity, in contrast with terms like "web scale", "big data", "machine learning", and "2.0", which are so broad that they can be attached to anything.
It has content and communicates an approach to machine learning distinct from other approaches. It isn't like "big data", which is truly meaningless. However, deep learning is also not a single method or algorithm.
I would have described the library in question as a "GPU-Accelerated Neural Network library" since that is more descriptive.