The 25% increase in size is true, but it's 8 extra bytes per row - a small, predictable, linear increase. Compared to the rest of the data in the row, it's not much to worry about.
The bigger issue is insert rate. With UUIDs, your insert rate ends up limited by the amount of available RAM. That's not the case for auto-incrementing integers! Integers are correlated with time while UUIDv4s are random - so the two have fundamentally different performance characteristics at scale.
The author cites 25%, but I'd caution every reader to take that with a giant grain of salt. Early on, with small tables (under a few million rows), the insert penalty is almost negligible. If you only benchmarked at that scale, you might conclude there's no practical difference.
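If you want to see the divergence yourself, here's a rough benchmark sketch. It assumes a local Postgres database named "bench" reachable via psycopg2; the table names, batch size, and round count are made up, and executemany is a slow way to bulk-load, but it's enough to watch the two insert rates drift apart as the tables grow.

```python
# Rough sketch: compare insert throughput for bigserial vs random-UUID primary keys.
# Assumes a local Postgres database "bench"; adjust the connection string as needed.
import time
import uuid
import psycopg2

conn = psycopg2.connect("dbname=bench")
conn.autocommit = True
cur = conn.cursor()

cur.execute("DROP TABLE IF EXISTS t_serial")
cur.execute("DROP TABLE IF EXISTS t_uuid")
cur.execute("CREATE TABLE t_serial (id bigserial PRIMARY KEY, payload text)")
cur.execute("CREATE TABLE t_uuid (id uuid PRIMARY KEY, payload text)")

BATCH = 10_000
ROUNDS = 100  # raise this until the uuid index outgrows available memory

for rnd in range(ROUNDS):
    # Sequential keys: Postgres generates them; inserts keep landing on the
    # rightmost index pages, which stay hot in memory.
    start = time.perf_counter()
    cur.executemany(
        "INSERT INTO t_serial (payload) VALUES (%s)",
        [("x" * 100,) for _ in range(BATCH)],
    )
    serial_s = time.perf_counter() - start

    # Random keys: every insert lands on an effectively random index page.
    start = time.perf_counter()
    cur.executemany(
        "INSERT INTO t_uuid (id, payload) VALUES (%s, %s)",
        [(str(uuid.uuid4()), "x" * 100) for _ in range(BATCH)],
    )
    uuid_s = time.perf_counter() - start

    print(f"round {rnd}: serial {BATCH/serial_s:,.0f} rows/s, "
          f"uuid {BATCH/uuid_s:,.0f} rows/s")
```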
As your table grows - specifically, as the size of the btree index approaches the limits of available memory - Postgres can no longer keep the UUID btree entirely in memory and has to start reading and writing index pages from disk. An auto-incrementing key won't have this problem: rows inserted close together in time land on the same index pages, so under the same load inserts don't need to hit disk at all.
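A quick way to see when you're approaching that point is to compare the index sizes against your memory settings. This is only a sketch: the index names below follow Postgres's default "&lt;table&gt;_pkey" naming for the tables above, and comparing against shared_buffers is a rough first-order proxy (the OS page cache matters too).

```python
# Check how large each primary-key index has grown relative to shared_buffers.
import psycopg2

conn = psycopg2.connect("dbname=bench")
cur = conn.cursor()

# Default primary-key index names are "<table>_pkey"; adjust if yours differ.
for idx in ("t_serial_pkey", "t_uuid_pkey"):
    cur.execute("SELECT pg_size_pretty(pg_relation_size(%s))", (idx,))
    print(idx, cur.fetchone()[0])

# Rough proxy for "does the index still fit in memory?"
cur.execute("SHOW shared_buffers")
print("shared_buffers:", cur.fetchone()[0])
```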
Once you reach this scale, the difference in speed is orders of magnitude. It's NOT a steady 25% performance penalty - it's a 25x performance cliff. And the only solution (aside from a schema migration) is to buy more RAM.