> I think you're overthinking it.

No, this is a standard fallacy that is covered in most introductory mathematical logic courses (in connection with Tarski's theorem on the undefinability of truth).
> Define a "number definition system" to be any (maybe partial) mapping from finite-length strings on a finite alphabet to numbers.
At this level of generality with no restrictions on "mapping", you can define a mapping from finite-length strings to all real numbers.
In particular there is the Löwenheim-Skolem theorem, one of whose corollaries is that if you have access to powerful enough maps, the real numbers become countable. (The theorem implies that ZFC, if it has a model at all, has a countable model; more generally, if a first-order theory has a single infinite model, then it has models of every infinite cardinality.)
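For reference, here is the standard statement being invoked (for a countable first-order language, which covers ZFC); nothing in it is specific to this discussion:

```latex
% Löwenheim-Skolem, standard statement for a countable first-order language.
\textbf{Theorem.} Let $T$ be a first-order theory in a countable language.
If $T$ has an infinite model, then for every infinite cardinal $\kappa$
there is a model $\mathcal{M} \models T$ with $|\mathcal{M}| = \kappa$.
% In particular, since every model of ZFC is infinite, if ZFC has a model
% at all then it has a countable one; a model whose "real numbers" are,
% viewed from the outside, a countable set.
```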
Normally you don't have to be careful about defining maps in an introductory analysis course because it's usually difficult to accidentally create maps that are beyond the ability of ZFC to define. However, you have to be careful in your definition of maps when dealing with things that have the possibility of being self-referential because that can easily cross that barrier.
Here's an easy example showing why "definable real number" is not well-defined (or, more directly, that its complement "non-definable real number" is not well-defined). By the axiom of choice, ZFC proves that there is a well-ordering of the real numbers. Fix such a well-ordering. The set of all undefinable real numbers is a subset of the real numbers and is therefore well-ordered; if there are any undefinable reals at all, it is nonempty, so take its least element. We have now uniquely identified, which is to say defined, a "non-definable" real number: a contradiction, which tells us that "definable" was never a legitimate predicate inside the theory in the first place. (Variations of this technique can be used to uniquely identify ever larger swathes of "non-definable" real numbers, and you don't need choice for it; it's just more involved to explain without choice, and besides, without choice cardinality gets weird.)
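In symbols, the argument is just this (a sketch; $\prec$ is the fixed well-ordering and $D$ stands for the purported set of definable reals):

```latex
% Sketch. \prec is the fixed well-ordering of \mathbb{R}; D is the purported
% set of "definable" reals, with \mathbb{R} \setminus D assumed nonempty.
x_0 \;:=\; \min_{\prec}\,\bigl\{\, x \in \mathbb{R} \;:\; x \notin D \,\bigr\}
% This formula uniquely specifies x_0, i.e. it defines x_0, even though
% x_0 \notin D by construction. The only way out is to concede that
% "x is definable" was never a legitimate predicate of the theory.
```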
Again, as soon as you start talking about concepts that have the potential to be self-referential such as "definability," you have to be very careful about what kinds of arguments you're making, especially with regards to cardinality.
Cardinality is a "relative" concept. The common intuition (arising from the property that set cardinality forms a total ordering under ZFC) is that all sets have an intrinsic "size" and cardinality is that "size." But this intuition occasionally falls apart, especially when we start playing with the ability to "inject" more maps into our mathematical system.
Another way to think about cardinality is as a generalization of computability that measures how "scrambled" a set is.
We can think of indexing by the natural numbers as "unscrambling" a set back to the natural numbers.
We begin with complexity theory where we have different computable ways of "unscrambling" a set back to the natural numbers that take more and more time.
Then we go to computability theory, where we end up at non-computably-enumerable sets, that is, sets that are so scrambled that there is no way to unscramble them back to the natural numbers via a Turing machine. But we can still theoretically unscramble them back to the natural numbers if we drop the computability requirement. At this point we're at definability in our chosen mathematical theory, and hence at cardinality: we can define some function that does the unscrambling even if the unscrambling itself is not computable. But there are some sets that are so scrambled that even definability in our theory is not strong enough to unscramble them. This doesn't necessarily mean that they're actually any "bigger" than the natural numbers! It just means they're so scrambled that we don't know how to map them back to the natural numbers within our current theory.
This intuition lets us nicely resolve why there aren't "more" rational numbers than natural numbers but there are "more" real numbers than natural numbers. In either case it's not really about "more" or "less"; it's that the rational numbers are less scrambled than the real numbers. The former are orderly enough that we can unscramble them back to the natural numbers with a highly inefficient, but nonetheless computable, process. The latter are so scrambled that we have no way in ZFC to unscramble them back (but if you gave us access to even more powerful maps, then we could unscramble the real numbers back to the natural numbers; hence Löwenheim-Skolem).
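To make the rationals half of that concrete, here is a minimal sketch (plain Python; the function name is mine) of such a computable unscrambling: an enumeration that walks the grid of numerator/denominator pairs and eventually reaches every rational exactly once, wasteful but entirely mechanical.

```python
from fractions import Fraction
from itertools import islice
from math import gcd

def enumerate_rationals():
    """Yield every rational exactly once by walking the diagonals of the
    (numerator, denominator) grid. Slow and wasteful, but computable: the
    position at which a rational appears is its index back in the naturals."""
    yield Fraction(0)
    n = 2
    while True:
        # Diagonal n: all p/q with p + q == n, p, q >= 1, in lowest terms only.
        for q in range(1, n):
            p = n - q
            if gcd(p, q) == 1:            # skip duplicates like 2/4 == 1/2
                yield Fraction(p, q)
                yield Fraction(-p, q)
        n += 1

# Usage: the start of the induced map N -> Q.
print(list(islice(enumerate_rationals(), 11)))
# Prints (as Fraction objects): 0, 1, -1, 2, -2, 1/2, -1/2, 3, -3, 1/3, -1/3
```

No analogous listing, computable or even merely ZFC-definable, exists for the reals; in this framing, that is all the uncountability result says.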
It doesn't mean that in some deep Platonic sense this map doesn't exist. Maybe it does! Our theory might just be too weak to be able to recognize the map. Indeed, there are logicians who believe that in some deep sense, all sets are countable! It's just the limitations of theories that prevent us from seeing this. (See for example the sketch laid out here: https://plato.stanford.edu/entries/paradox-skolem/#3.2). Note that this is a philosophical belief and not a theorem (since we are moving away from formal definitions of "countability" and more towards philosophical notions of "what is 'countability' really?"). But it does serve to show how it might be philosophically plausible for all real numbers, and indeed all mathematical objects, to be definable.
I'll repeat Hamkins' lines from the Math Overflow post because they nicely summarize the situation.
> In these pointwise definable models, every object is uniquely specified as the unique object satisfying a certain property. Although this is true, the models also believe that the reals are uncountable and so on, since they satisfy ZFC and this theory proves that. The models are simply not able to assemble the definability function that maps each definition to the object it defines.
> And therefore neither are you able to do this in general. The claims made both in your question and in the Wikipedia page [no longer on the Wikipedia page] on the existence of non-definable numbers and objects, are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property.