No, GB is perfectly correct when referring to memory. Just because some people standardized on gibibyte and redefined gigabyte doesn't invalidate what memory makers have been doing forever. JEDEC still uses GB, as they should.
I was unaware they had a monopoly on language usage. A byte is not an SI unit, and base 2 is vastly more defensible and natural than base 10. The real issue is that everyone in networking likes round base-10 numbers divided over some arbitrary cesium fluctuations. This leads to 1GB / 1Gbps not being 8 seconds, which is confusing. But in JEDEC's and others' defense: "why should I have to change, he's the one that sucks."
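A quick back-of-the-envelope sketch of that mismatch (my numbers, assuming "1GB" is read as 2^30 bytes while the link is the usual 10^9 bits per second):

    # Transfer time for "1 GB" over a 1 Gbps link, under two readings of GB
    GIBIBYTE = 2**30   # base-2 reading: 1,073,741,824 bytes
    GIGABYTE = 10**9   # base-10 (SI) reading: 1,000,000,000 bytes
    LINK_BPS = 10**9   # 1 Gbps is always decimal: 10^9 bits per second

    print(GIGABYTE * 8 / LINK_BPS)  # 8.0 seconds, the round answer
    print(GIBIBYTE * 8 / LINK_BPS)  # ~8.59 seconds, the confusing answer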
Trick question: How many bytes are in a 1.44MB floppy? In a 700MB CD? A 4GB USB stick? A 480GB SSD? And how many bytes/s of bandwidth does 10Gb Ethernet give you?
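For what it's worth, here's how I'd answer, assuming each product's conventional reading (which is exactly the problem, since they don't agree):

    floppy = int(1.44 * 1000 * 1024)  # 1,474,560 bytes: "MB" here means 1000 * 1024 (!)
    cd     = 700 * 2**20              # 734,003,200 bytes: "MB" means MiB (base 2)
    usb    = 4 * 10**9                # 4,000,000,000 bytes: "GB" means SI base 10
    ssd    = 480 * 10**9              # 480,000,000,000 bytes: same, base 10
    eth    = 10 * 10**9 // 8          # 1,250,000,000 bytes/s: networking is base 10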
I can't understand how people can be so delusional as to think that randomly redefining prefixes is a good idea. It was never, ever used consistently, and JEDEC should just get rid of this idiocy.
Trick question in return: I'm selling a 256GB SSD. How many bytes am I legally required to deliver?
This is why there is a need for the two different prefixes. If I am selling a 256GiB SSD and you are selling a 256GB SSD, those are now differentiable on their face.
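To put numbers on that difference (assuming the SI and IEC definitions):

    GB, GiB = 10**9, 2**30
    print(256 * GB)          # 256,000,000,000 bytes
    print(256 * GiB)         # 274,877,906,944 bytes
    print(256 * (GiB - GB))  # 18,877,906,944 bytes, roughly 7.4% more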
The IEEE and/or SI are in charge of language? The same IEEE that sends car insurance offers to its members, just so we're clear. JEDEC is also a standards group, and they disagree. What now, a Wikipedia edit war to determine the victor?
Standards bodies aren't anything magical, and I don't understand the slavish following they seem to attract. So an RFC says something, or another group mandates something. BFD. Unless you're expecting interop to work, use standards as you see fit. They aren't an end unto themselves.
In this case "GB", when referring to RAM is unambiguous. Only disingenuous cloud providers or petty editors would use a base 10 interpretation.