No, I agree with the post I replied to that migrations should be human-versioned rather than auto-versioned via a VCS. Again, I was being generous to the author, trying to imagine a scenario where this would not create a dumpster fire.
This isn't quite the same as being solely git-based, but there are numerous examples in the industry of successful auto-versioned declarative schema management. By this I mean you have a repo of CREATE statements, and a schema change request is simply a new commit that adds, modifies, or removes those CREATE statements. No one actually writes a "migration" in this approach.
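To make the workflow concrete, here is a minimal sketch (in Python, with hypothetical table names) of what a declarative tool does under the hood: it diffs the repo's desired schema against the live database's schema and derives the DDL to run, so no one hand-writes a migration.

```python
# Sketch of declarative schema management: the desired state lives in the repo
# as CREATE statements; a tool diffs it against the live schema to derive DDL.
# Schemas here are simplified to {table_name: create_statement} dicts; real
# tools parse the statements and compute column-level ALTERs.

def diff_schemas(desired: dict[str, str], live: dict[str, str]) -> list[str]:
    """Compare desired vs live schema and emit the DDL needed to converge."""
    ddl = []
    for table in desired.keys() - live.keys():
        ddl.append(desired[table])                    # table added in repo: run its CREATE
    for table in live.keys() - desired.keys():
        ddl.append(f"DROP TABLE {table}")             # table removed from repo: drop it
    for table in desired.keys() & live.keys():
        if desired[table] != live[table]:
            ddl.append(f"-- ALTER TABLE {table} ...") # changed: a real tool diffs columns
    return ddl

live = {"users": "CREATE TABLE users (id BIGINT)"}
desired = {"users": "CREATE TABLE users (id BIGINT)",
           "posts": "CREATE TABLE posts (id BIGINT)"}
print(diff_schemas(desired, live))  # → ['CREATE TABLE posts (id BIGINT)']
```

The commit that adds the `posts` CREATE statement *is* the schema change request; the tool derives and applies the DDL.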
Facebook, who operate one of the largest relational database fleets in the world, have used declarative schema management company-wide for nearly a decade. If engineered well, you end up with a system that allows developers to change schemas easily without needing any DBA intervention.
This is interesting, but I am curious about some of the implications. I am under the impression that migrations encompass more than schema changes: they may require data transformations, index rebuilds, or updates to related tables. I would agree that these are preferably avoided, but in practice they seem to happen frequently.
I read the author as targeting smaller dev shops that don't have a dedicated DBA/expert, such as groups that might use an ORM's migration framework. In that case, removing application code from the architecture appears to sacrifice significant migration flexibility.
When operating at scale, it becomes necessary to have separate processes/systems for schema changes (DDL) vs row data migrations (DML) anyway.
For example, say you need to populate a new column based on data in other columns, across an entire table. If the table has a billion rows, you can't safely run this as a single DML query: it will take too long and disrupt applications given the locking impact; it will cause excessive journaling due to the huge transaction; and there can be MVCC old-row-version-history pileups as well.
Typically companies build custom systems that handle this by splitting up DML into many smaller queries. It's then no longer atomic, which means such a system needs knowledge about application-level consistency assumptions.
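The splitting itself is straightforward; here is a hypothetical sketch (table `t`, column `new_col`, and the `execute` callback are placeholders, not any specific company's system) of walking the table in primary-key ranges so each statement is its own short transaction:

```python
# Sketch of a batched backfill: instead of one giant UPDATE over a billion
# rows, walk the table in small primary-key ranges. Each statement commits
# on its own, limiting lock duration, journal volume, and MVCC version
# buildup -- at the cost of atomicity across the whole table.

BATCH = 10_000  # rows per chunk; real systems tune this and throttle dynamically

def backfill_in_batches(execute, max_id: int) -> int:
    """Run the backfill UPDATE in BATCH-sized id ranges; return batch count."""
    batches = 0
    start = 0
    while start <= max_id:
        end = start + BATCH - 1
        # Placeholder SQL: populate new_col from other columns in this range only.
        execute(
            "UPDATE t SET new_col = f(a, b) "
            f"WHERE id BETWEEN {start} AND {end}"
        )
        batches += 1
        start = end + 1
    return batches

# With a stubbed-out execute(), a billion ids means 100,000 separate statements:
print(backfill_in_batches(lambda sql: None, max_id=999_999_999))  # → 100000
```

Because the batches commit independently, a reader can observe the table half-migrated, which is exactly why such a system needs to know what application-level consistency assumptions it is allowed to break.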
Hopefully I have cleared that up.