This requires padding out an O(n)-digit number to memory size O(n log n), with addition operations of numbers with log(n) bits deemed to take O(1) time.
If this is allowed by your model of computation, then you could use this magical O(1) addition to implement multiplication in O(n).
For example, starting from decimal representation, you can calculate the product of two five-digit numbers ("abcde" * "fghij") with the following, where the capitalized variables X and S are of the magic O(1)-addition integer type:
let X = to_magic_integer("abcde")
let y = "fghij".reverse()             # process digits least-significant first
let S = to_magic_integer("0")         # running total
for d = 0 to 4
    for i = 1 to y[d]                 # repeat (digit value of y[d]) times
        S += X
    end
    X = X+X+X+X+X + X+X+X+X+X         # X *= 10, using only ten additions
end
return S
(You can implement to_magic_integer with basically the same algorithm in O(n).)
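Here is a minimal runnable sketch of the same idea in Python (the wrapper name magic_multiply is mine, to_magic_integer is filled in with one way of doing the step the parenthetical above alludes to, and ordinary Python int addition stands in for the magic O(1) addition; + is the only arithmetic ever applied to the big values):

def to_magic_integer(s: str) -> int:
    # Build the value using only the magic addition: O(1) additions per digit.
    v = 0
    for ch in s:
        v = v + v + v + v + v + v + v + v + v + v    # v *= 10 via ten additions
        for _ in range(int(ch)):
            v += 1                                   # add the digit's value
    return v

def magic_multiply(a: str, b: str) -> int:
    X = to_magic_integer(a)
    S = to_magic_integer("0")
    for ch in reversed(b):                           # least-significant digit first
        for _ in range(int(ch)):                     # add X once per unit of this digit
            S += X
        X = X + X + X + X + X + X + X + X + X + X    # X *= 10 for the next digit
    return S

assert magic_multiply("12345", "67890") == 12345 * 67890

Each digit of the second factor costs at most 9 additions in the inner loop plus 10 more for the times-ten step, so multiplying two n-digit inputs takes O(n) of the magic additions in total.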
> addition operations of numbers with an arbitrary number of bits deemed to take O(1) time
?
I was picturing a more restrictive model of computation in which your claim is along the lines of
> There's an O(n)-time algorithm, which, given two numbers with n digits and a magic black box that can add log(n)-digit numbers together in O(1) time, will output the product of those two numbers.
Fun fact, you can construct and store a lookup table for sums of all possible pairs of O(log(n))-digit numbers in O(n) space. (And O(n) time, probably?)
So if your model of computation lets you look things up in a table like that in O(1) time, you can kind of get the situation I'm describing above. Maybe your model of computation shouldn't let you do this, but some do. Often if an algorithm takes up O(foo) space, we don't worry about the fact that lookups into that space should take Theta(log(foo)) time (to even read the address) rather than O(1) time, because things are assumed to "fit into memory".
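To make that bookkeeping concrete, here is a sketch in Python (build_add_table and add_with_table are names I'm making up; a dict lookup stands in for the O(1) table lookup, and b is chosen around (1/4)*log2(n) so the table has at most roughly sqrt(n) entries):

import math

def build_add_table(n: int):
    # Precompute x + y for every pair of b-bit numbers, with b small enough
    # that the table has at most roughly sqrt(n) entries, i.e. O(n) space.
    b = max(1, math.floor(math.log2(n) / 4))
    table = {(x, y): x + y for x in range(1 << b) for y in range(1 << b)}
    return table, b

def add_with_table(a: int, c: int, table: dict, b: int) -> int:
    # Add two arbitrary non-negative integers b bits at a time, doing every
    # per-chunk sum by table lookup (one lookup for the chunk sum, one more
    # to fold in the carry bit).
    mask = (1 << b) - 1
    result, shift, carry = 0, 0, 0
    while a or c or carry:
        s = table[(a & mask, c & mask)]    # chunk sum, at most b+1 bits
        lo = table[(s & mask, carry)]      # fold in the incoming carry
        result |= (lo & mask) << shift
        carry = (s >> b) | (lo >> b)       # at most one of these is ever 1
        a >>= b
        c >>= b
        shift += b
    return result

table, b = build_add_table(10**6)
assert add_with_table(12345, 67890, table, b) == 12345 + 67890

Up to the bookkeeping of walking the chunks, adding two n-bit numbers this way costs O(n / log n) table lookups, which is roughly the black-box situation from the quoted claim above, granted the O(1)-lookup assumption.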
Another fun way to be fussy along these lines is that the usual algorithms for sorting a list of n distinct items only give bounds of O(n (log n)^2) time. Each item takes around log(n) bits to even store, so comparing two of those can't be done in time O(1) in the worst case (in a model that forbids the kind of "cheating" I'm describing above). The best uniform bound you can hope for is time O(log n) per comparison. (Though in some sense the extra log(n) factor might "drop back out", because it takes O(n log(n)) space to even write down an input to this algorithm.)
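As a tiny illustration of that per-comparison cost (compare_bits is my own name, and the bit-string representation is just for the sake of the example):

def compare_bits(a: str, b: str) -> int:
    # Compare two equal-length bit-strings, most significant bit first.
    # When the keys are ~log2(n) bits long, the worst case (e.g. a == b)
    # reads all of them, so a per-bit cost model charges Theta(log n), not O(1).
    for x, y in zip(a, b):
        if x != y:
            return -1 if x < y else 1
    return 0

assert compare_bits("1010", "1010") == 0
assert compare_bits("1001", "1010") == -1

Feeding that into, say, merge sort's O(n log n) comparisons is where the O(n (log n)^2) bound comes from.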
I was meaning arbitrary numbers of digits, yes. In the sense that {ceil(log2(n)) : n >= 1} = {m : m >= 0}, allowing O(1) addition of log(n)-bit numbers for every n amounts to allowing it for any number of bits; and I'm generally taking for granted that the model doesn't change based on the value of n.