If you're talking about general decimal numbers, this is provably false. Pi, for example, is irrational: it cannot be written as a ratio of two integers.
If you're talking about floating-point numbers on a piece of finite hardware, they can all be written as fractions: every finite binary float is exactly an integer times a power of two. Going the other way, and depending on whether you use multiple-precision math, you may be able to write out all the fractions your computer works with as exact decimal numbers as well.
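As a small illustration, here is a sketch in Python (assuming the standard double-precision `float` and the stdlib `fractions` and `decimal` modules), showing a float written both as an exact fraction and as an exact decimal:

```python
from decimal import Decimal
from fractions import Fraction

x = 0.1  # the double-precision float nearest to 1/10

# Every finite binary float is exactly (integer) / (power of two):
print(Fraction(x))           # 3602879701896397/36028797018963968  (denominator is 2**55)
print(x.as_integer_ratio())  # the same pair, as a tuple of ints

# Because 2 divides 10, that fraction also has an exact, finite decimal expansion:
print(Decimal(x))
# 0.1000000000000000055511151231257827021181583404541015625

# A fraction whose denominator has any other prime factor does not terminate:
# 1/3 = 0.333... has no finite decimal form, no matter how much precision you use.
```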
When you use 'decimal' as a very informal synonym for 'real number', or as shorthand for 'infinite decimal'. (This is even more understandable when you realize every decimal is an infinite decimal: 0.5 is really 0.5000..., and our convention of truncating that infinite string of zeroes has little mathematical reality.)