Came across this after some related Quora discussion:
http://www.maa.org/external_archive/devlin/devlin_06_08.html
I have big issues with this rant.... The basic claim is that multiplication is not defined through addition.
Someone call up all the computer engineers out there and tell them they've been building circuit chips wrong all this time.
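(For what it's worth, even a simple hardware-style multiplier isn't literal repeated addition: it adds one shifted partial product per set bit of the multiplier, not b copies of a. Here's a minimal sketch in Python, for non-negative integers only; the function name is mine.)

```python
def shift_and_add(a: int, b: int) -> int:
    """Multiply non-negative integers the way a basic binary
    multiplier circuit does: sum shifted partial products,
    one addition per set bit of b, rather than b repeated
    additions of a."""
    result = 0
    while b:
        if b & 1:          # low bit of b set: add the current partial product
            result += a
        a <<= 1            # shift partial product left
        b >>= 1            # move to the next bit of b
    return result
```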
I think that's a slight mischaracterisation of Devlin's position. He's saying that, however it's defined, repeated addition is not what multiplication is. This is an incredibly common pattern in mathematics. You can define the natural numbers as {}, {{}}, {{},{{}}}, ...; the ordered pair (a,b) as {{a},{a,b}}; an integer as an equivalence class of ordered pairs of naturals; a rational as an equivalence class of ordered pairs of integers; and a real as an equivalence class of Cauchy sequences (whatever they are) of rationals ... but that's not what a real number is.
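(You can even carry out that scaffolding mechanically. A toy sketch, using Python frozensets for the von Neumann naturals and Kuratowski pairs; the names are mine.)

```python
def succ(n: frozenset) -> frozenset:
    # von Neumann successor: S(n) = n ∪ {n}
    return n | frozenset([n])

ZERO = frozenset()    # 0 = {}
ONE = succ(ZERO)      # 1 = {{}}
TWO = succ(ONE)       # 2 = {{}, {{}}}

def pair(a, b) -> frozenset:
    # Kuratowski ordered pair: (a, b) = {{a}, {a, b}}
    return frozenset([frozenset([a]), frozenset([a, b])])
```

It works — TWO really is the two-element set {0, 1}, and pair(a, b) ≠ pair(b, a) when a ≠ b — but nobody thinks of 2 as {{}, {{}}} once the construction is done.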
A function from X to Y "is" a set of ordered pairs such that every x in X appears in exactly one pair (x,y) with y in Y. But that's not how you think about a function, or how you interpret an occurrence of f(x). A graph "is" a pair (V,E) where V is a set of vertices and E a set of 2-subsets of V, but you're not going to think in those terms if I ask you to draw a K_4 in the plane. Mathematical objects have their own existence, and when we think about them we think about them in terms of the properties they have. The concrete definitions can convince somebody that a structure exists, but once you get to work with it you can throw away the scaffolding.

For multiplication the rules are that 0.x = 0, 1.x = x, x.y = y.x and (x+y).z = x.z + y.z. These contain the inductive definition of multiplication on the naturals, but once you've checked that everything makes sense, those particular cases are no longer all that special.
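(To see how the inductive definition sits inside those rules, here's a quick sketch: define multiplication on the naturals by 0.x = 0 and (n+1).x = n.x + x, then spot-check commutativity and distributivity on small cases. The function name is mine.)

```python
def mul(n: int, x: int) -> int:
    """Multiplication on the naturals by the inductive definition:
    0.x = 0, (n+1).x = n.x + x."""
    return 0 if n == 0 else mul(n - 1, x) + x

# spot-check the characterising rules on small cases
for x in range(6):
    for y in range(6):
        assert mul(x, y) == mul(y, x)                      # x.y = y.x
        for z in range(6):
            assert mul(x + y, z) == mul(x, z) + mul(y, z)  # (x+y).z = x.z + y.z
```

Once checks like these go through, the recursion that defined mul is just one entry point among many; the algebraic rules carry the structure.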