The 1968 NATO Software Engineering conference is also the earliest reference I know of discussing software reuse (page 79 of the PDF, "Mass Produced Software Components"). It's interesting to see how some of the problems then are still problems today, and how many others have quietly disappeared. Some things that were hard or impossible then, and which we now mostly take for granted:
1. Interactive editing
2. Interactive compilation (these two are probably the biggest reasons programmers are more productive now)
3. Having enough storage for most problems
4. Having enough computation to solve most problems interactively
5. Handling numbers larger than one machine word (items 5-9 are sketched in code after this list)
6. Handling floating point numbers
7. Handling precise decimal numbers (except for the lucky few using mainframes / COBOL)
8. Handling text using characters which wouldn't have worked in a telegram
9. Exchanging text with other systems (you couldn't even assume ASCII, since other encodings such as EBCDIC were common)
10. Tracking changes to source code
11. Distributing shared code and tracking new releases
12. Communicating with other systems
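To make the contrast concrete, here's a minimal Python 3 sketch of items 5-9 (my own illustration, nothing from the conference report); each of these was a research problem or a mainframe luxury in 1968 and is a standard-library one-liner today:

```python
from decimal import Decimal

# 5. Numbers larger than one machine word: ints are arbitrary precision.
print(2 ** 200)

# 6. Floating point: IEEE 754 doubles, built into the language.
print(0.1 + 0.2)                           # 0.30000000000000004, as binary floats go

# 7. Precise decimal arithmetic, no mainframe or COBOL required.
print(Decimal("0.10") + Decimal("0.20"))   # exactly 0.30

# 8. Text far beyond the telegram character set.
greeting = "Grüße, 世界 👋"
print(greeting)

# 9. Exchanging text with other systems: explicit encodings, EBCDIC included.
ebcdic_bytes = "HELLO".encode("cp500")     # cp500 is one of the EBCDIC code pages shipped with CPython
print(ebcdic_bytes.decode("cp500"))
```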
We've gotten a lot better at the various technical details. The parts which remain difficult are generally different aspects of the limits of human reasoning about complex systems.
I think the success of open source libraries shows that software reuse can work rather well. However, when designing a new library it is still (and may always be) difficult to "parameterize" it (to use McIlroy's term) in the way that is most useful for your users.
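To make the "parameterize" point concrete, here's a toy sketch (my example, not McIlroy's; as I recall, his running example was a family of sine routines parameterized by precision, robustness, and time-space trade-offs). Even a trivial reusable component exposes a surprising number of design axes, and the author has to guess which ones callers will actually want:

```python
from typing import Callable, Iterable, List, TypeVar

T = TypeVar("T")

def top_n(
    items: Iterable[T],
    n: int,
    *,
    key: Callable[[T], object] = lambda x: x,   # what to rank by
    reverse: bool = True,                       # largest-first or smallest-first?
    unique: bool = False,                       # collapse duplicates first?
    strict: bool = False,                       # error if fewer than n items?
) -> List[T]:
    """Return the n 'best' items. Every keyword here is a guess about what
    callers will need; real users will still ask for stability guarantees,
    laziness, tie-breaking rules, and so on."""
    pool = list(dict.fromkeys(items)) if unique else list(items)
    if strict and len(pool) < n:
        raise ValueError(f"needed {n} items, got {len(pool)}")
    return sorted(pool, key=key, reverse=reverse)[:n]

print(top_n([3, 1, 4, 1, 5, 9, 2, 6], 3))                    # [9, 6, 5]
print(top_n(["aa", "b", "ccc"], 2, key=len, reverse=False))  # ['b', 'aa']
```

Whichever axes you leave out will be the ones somebody needs, and whichever you include will make the interface feel heavier for everyone else.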