A third option for the staging database is to do a dump and then scrub the data for security compliance. You may be able to use that database through several development cycles.
A company I am familiar with did that. Down that path lies madness. See the 33 Bits of Entropy blog: if data gets out, you are almost certainly screwed. (Trivial example: imagine you're my university and you release the medical records and student registration tables of your students for research purposes. You anonymize names and randomize ID numbers. Want to find my records? Look for the only person in the university who ever took AI, Japanese, and Women's Studies in the same semester. My medical records are pretty boring. They don't include, for example, an abortion. Let your imagination run wild on what happens if your company leaks the identity of someone whose records do. Something similar-with-the-serial-numbers-filed-off has happened before.)
For the purpose of a private staging server, particularly one used by people who have access to production data anyway, you don't need such "hard" anonymisation.
The main purpose of anonymisation, in this case, is to make sure you don't send testing emails to clients. So really, the only scrubbing you need to do is to make sure every email address, phone number, Twitter handle, or other outward-facing piece of contact data is replaced by a test email/etc.
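To make that concrete, a scrub along those lines can be very small. This is a minimal sketch, assuming a Postgres staging copy reachable via psycopg2; the table and column names are entirely made up and would need to be mapped onto your real schema:

```python
import psycopg2  # pip install psycopg2-binary

# Hypothetical outward-facing fields; adjust to whatever your schema actually has.
CONTACT_COLUMNS = {
    "users": ["email", "phone", "twitter_handle"],
    "invoices": ["billing_email"],
}

def scrub_contact_fields(dsn, test_domain="staging.example.com"):
    """Overwrite every outward-communicating field so staging can never reach a real customer."""
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:
            for table, columns in CONTACT_COLUMNS.items():
                for col in columns:
                    if "email" in col:
                        # 'user<id>@staging...' keeps rows distinguishable but undeliverable.
                        cur.execute(
                            f"UPDATE {table} SET {col} = 'user' || id || '@{test_domain}'"
                        )
                    else:
                        cur.execute(f"UPDATE {table} SET {col} = NULL")
    finally:
        conn.close()
```

Rewriting emails to per-row addresses on a domain you control keeps the rows distinguishable for testing while guaranteeing nothing can actually reach a customer.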
The hardcore anonymisation that banks use is only necessary because there is an actual security and reputation risk if the data is leaked by some random developer in India (or some angry developer in London). In the case of Swiss banks, they are also legally obliged to scrub that data when using it internally in locations outside of Switzerland.
However, for a startup with 1-30 people, most of whom have access to production anyway, there is no sense in doing that kind of anonymisation. The only risk you're protecting yourself against is sending hundreds of testing emails to your customers.
If your access controls on the staging server are ironclad, you're right. But they stop being ironclad the moment you make allowances for the staging server to connect to external APIs. Most people who think they have ironclad controls on who can access the staging server don't.
Or, in a distressingly common failure mode in Japan, when the staging server is initialized by a developer from a SQL dump and the developer does not realize that he has left a copy of if-this-gets-out-oh-god-the-company-is-finished.tar.gz on his hard drive until the day after losing it.
I don't see why access control (i.e. unix/db users) should be any more lax for a staging server than for a production server... After all, it has your whole application on it. If you're running a Rails app, that means it has your whole source code.
The solution there is to have robust access control to all of your servers.
Much less screwed than if you fail to catch a bug and the live production database is compromised, particularly if you store credit card numbers. This does mean that the staging environment must have all the same security controls as the production environment. If you can't achieve that, then you probably shouldn't use a database with PII (even if it's indirect, like your course listing).
Incidentally, the nice thing about having the infrastructure to deploy a replica of your production environment is that it's probably not much harder to deploy multiple scaled-down versions cheaply, so you can do two stages of QA: do all possible testing in an environment with a fake database, then use the scrubbed production version for the real staging test.
This is even more important for bigger sites: on a staging site with limited data, some performance issues won't be visible until bigger data is thrown at the code.
I agree, generally, but I much prefer writing a one-time script (or using one of the tools available) to populate the database with random-ish data instead of using production data.
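If you go that route, a tiny generator on top of something like the Faker library gets you plausible data quickly. A sketch, with the column names invented for the example:

```python
from faker import Faker  # pip install faker

fake = Faker()

def fake_users(n=1000):
    """Yield random-ish user rows to load into a dev/staging database."""
    for i in range(1, n + 1):
        yield {
            "id": i,
            "name": fake.name(),
            # Deliberately on a domain you control, so even "real-looking" rows can't email anyone.
            "email": f"user{i}@dev.example.com",
            "phone": fake.phone_number(),
            "signed_up_at": fake.date_time_between(start_date="-3y"),
        }

if __name__ == "__main__":
    for row in fake_users(5):
        print(row)
```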
It may be that I've been working with government data too much, but people's dev environments are generally far less secure than their prod environments, and I personally don't want to be the guy who declared that the scrubbed data contained nothing sensitive and turned out to be wrong, whether or not that data ever gets out into the wild.
For one client, we're going to do exactly that, and put in place an automated process that will produce the staging data and anonymize the sensitive fields (with checks).
This will also be useful for developers who want to test changes against the real data volume.
EDIT: after reading patio11's comment, I can only emphasize the importance of using a "white list" approach here, i.e. carefully picking the fields you will keep, and adding screens to your ETL. You have been warned :)
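In case it helps, here is roughly what such a screen can look like. This is a minimal sketch with invented table and column names, where anything not explicitly whitelisted is rejected rather than silently copied:

```python
# Columns allowed to survive into the staging dump; everything else is refused.
# Table/column names here are purely illustrative.
WHITELIST = {
    "courses": {"id", "title", "department", "credits"},
    "enrollments": {"id", "course_id", "student_id", "semester"},
}

def screen_row(table, row):
    """Pass a row through the whitelist; fail loudly on anything unexpected."""
    allowed = WHITELIST.get(table)
    if allowed is None:
        raise ValueError(f"table {table!r} is not whitelisted for staging")
    unexpected = set(row) - allowed
    if unexpected:
        # A new production column shows up here as a hard error, so a human has to
        # classify it before it can ever leak into staging.
        raise ValueError(f"unexpected columns in {table}: {sorted(unexpected)}")
    return {col: row[col] for col in allowed if col in row}

# Example: screen_row("courses", {"id": 1, "title": "AI", "department": "CS", "credits": 3})
```

The point of failing hard instead of dropping unknown columns is that new production fields don't quietly leak into staging just because nobody remembered to update the scrub script.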