This post attempts an intuitive demonstration that excluding the axiom of choice from the foundations of mathematics yields a consistent theory which is not only equivalent to the prevalent theory for computational purposes, but may also prove more suitable for application to theoretical physics.
The customary inclusion of the axiom of choice in the foundation of mathematics leads to the possibility of sets of infinite cardinality, e.g. the set of integers. The consequent appearance of "infinities" in physical theories has proven to be an annoyance.
If the axiom of choice is denied, then we may not suppose that any set is defined beyond what some person or machine has explicitly defined. In particular, the set of integers must have an upper bound.
Practically speaking, this limitation poses no problem for computation, because all computing devices, including human brains, have limits on the size of numbers which have been, or ever could be, involved in any computation. If the limitation does become a problem, as when a computer's word size is too small, the solution is simply to use a machine with a big enough word size. Nothing prevents us from designing machines with arbitrarily large word sizes, i.e. defining an arbitrarily large integer, without being required to accept the axiom of choice and the infinities that come along with it.
Mathematically speaking, the Practical Number System, which is what I will call the proposed set of bounded numbers, is fundamentally different from familiar finite number systems such as modular arithmetic. In those systems, the operations of addition and multiplication are defined for every pair of numbers. That cannot be the case for Practical Numbers. For example, if the largest integer is Q, then the product Q x Q is not defined, nor is the sum Q + 1. This situation is identical to the conditions which cause overflows in computers.
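As a sketch of this idea, the partial operations can be modeled in a few lines of code. The particular value of Q below is an illustrative assumption of mine, not part of the proposal; the point is only that a result exceeding Q is simply undefined, mirroring hardware overflow.

```python
Q = 2**16 - 1  # a hypothetical choice of largest integer, for illustration only

def p_add(a, b):
    """Return a + b, or raise if the result would exceed Q (i.e. is undefined)."""
    s = a + b
    if s > Q:
        raise OverflowError(f"{a} + {b} is undefined: it would exceed Q = {Q}")
    return s

def p_mul(a, b):
    """Return a * b, or raise if the result would exceed Q (i.e. is undefined)."""
    p = a * b
    if p > Q:
        raise OverflowError(f"{a} * {b} is undefined: it would exceed Q = {Q}")
    return p
```

With this sketch, p_add(2, 3) behaves normally, while p_add(Q, 1) and p_mul(Q, Q) raise, just as the text describes.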
In order to visualize Practical Numbers in either an Argand plane or a Cartesian plane, the very familiar array of pixels on a computer monitor may be used. If the resolution of the monitor is fine enough, the granularity is not detectable to the eye for practical purposes. Thus, we may display continuous functions on the screen and they "look" to be as continuous as those we imagine using customary infinite sets of numbers.
It is reasonable to suspect that all the important theorems of Analysis can be proved in the Practical Number System. The typical epsilon-delta definitions of limits and continuity would, of course, be different. In traditional mathematics, use is made of a stylized protagonist/antagonist confrontation in these definitions. The protagonist makes a claim, saying something like: "I defy you to define a tolerance, or margin of error, for some function for which I can't guarantee the function will stay within that tolerance as long as the variable is restricted to some range." Then the antagonist produces his/her tolerance, typically denoted by epsilon. The protagonist then produces the number delta, a function of epsilon, which defines the restriction on the range of the variable. Finally it is demonstrated mathematically that an inequality holds showing that the function must indeed be restricted by plus or minus epsilon.
The idea of this argument is that no matter how small a number you, the antagonist, can pick, I, the protagonist, can always pick a smaller one. The trick, however, is that you must choose your epsilon first, and after I know what you have chosen, I can then pick my delta.
In the Practical Number System, there is a lower limit on the size of positive, non-zero numbers; there is a smallest interval. This means that we cannot use the traditional epsilon-delta type of definition. We need something else.
Let's return and examine some of the direct consequences of assuming the existence of a largest integer, Q. If we plot all ordered pairs of numbers on a Cartesian plane, they would appear as something like the shadow of a sphere on the plane; that is, there would be an umbra and a penumbra. The outer radius of the penumbra would be Q, and the radius of the umbra would be P, the square root of Q. This is because the sum and product of each pair of numbers in the umbra are defined. For pairs in the penumbra, some sums and products are defined and others are not, depending on whether the "sum" or "product" would lead to a "number" outside of the penumbra. Of course, such "numbers" are not defined, so we cannot actually talk about those "sums" or "products". We can only say that for pairs of numbers in the penumbra, some sums and products, all of which must be less than or equal to Q, are defined while others are not.
This means that as long as we stay within the umbra defined by the radius P, we can calculate in the normal way. These are the Practical Numbers. The numbers in the penumbra are defined, but they are impractical in the sense that not all operations are defined for them; if we try to use them in a computation, we may get an "overflow" as a result. This is a limitation we are up against in real life anyway, so this number scheme would not exacerbate the problem. And it is much easier to mathematically define a larger Q than it is to design a computer with a larger word size.
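The umbra/penumbra picture can be sketched directly. Again the value of Q is an illustrative assumption; the classification below simply compares a pair's distance from the origin against P and Q. Note that for any pair within radius P, both the sum (at most 2P) and the product (at most P squared, which equals Q) stay defined.

```python
import math

Q = 2**16 - 1        # hypothetical largest integer, for illustration
P = math.isqrt(Q)    # umbra radius: the square root of Q

def region(a, b):
    """Classify the ordered pair (a, b) by its distance from the origin."""
    r = math.hypot(a, b)
    if r <= P:
        return "umbra"      # all sums and products of such pairs are defined
    if r <= Q:
        return "penumbra"   # some sums and products are defined, others are not
    return "outside"        # not numbers at all in this system
```

For example, region(10, 10) lands in the umbra, while region(P, P) already falls into the penumbra.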
Returning to the problem of defining limits and continuity, we see that the umbra/penumbra suggests a method of replacing epsilons and deltas. It seems reasonable to suppose that the same type of protagonist/antagonist interplay could be used where the antagonist is limited to specifying the epsilon using only numbers from the umbra, but the protagonist is free to use numbers from the penumbra in order to produce his/her delta. Intuitively, this would be equivalent to saying that a curve plotted on a computer monitor is defined to be continuous iff there is no pixel-size gap in the curve.
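The pixel criterion for continuity can be sketched as code. The representation is an assumption of mine for illustration: a curve is given as one integer pixel height per screen column, and "continuous" means no vertical gap larger than one pixel between adjacent columns.

```python
def pixel_continuous(ys):
    """A curve, given as one integer pixel height per column, is 'continuous'
    iff no pair of adjacent columns leaves a pixel-sized vertical gap."""
    return all(abs(y1 - y0) <= 1 for y0, y1 in zip(ys, ys[1:]))
```

Under this sketch, [0, 1, 1, 2] counts as continuous, while [0, 2, 3] does not, since the jump from 0 to 2 skips a pixel.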
In the Practical Number System, there would be no such thing as an "infinite" series. All series would be finite. That would pose no problem, however, since we could define convergence to mean producing a sum less than Q and divergence to mean that the sum is undefined, i.e. would produce a "number" greater than Q. It is not obvious to me at this point whether or not the number P need be involved in the definition of convergence.
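This notion of convergence can be sketched directly: a finite series converges when its running sum stays defined, i.e. never exceeds Q. As before, the particular Q is an illustrative assumption.

```python
Q = 2**16 - 1  # hypothetical largest integer, for illustration

def converges(terms):
    """Return (True, total) if every partial sum of the finite series stays
    at or below Q; return (False, None) if some partial sum is undefined."""
    total = 0
    for t in terms:
        total += t
        if total > Q:
            return False, None  # the "sum" would exceed Q: divergence
    return True, total
```

So a series of a hundred ones converges to 100, while a series whose partial sums would pass Q diverges in this sense.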
With new definitions such as these for limits, continuity, and convergence, it would seem that all the important theorems of analysis could be proved in the Practical Number System in a way that would be computationally equivalent to the theorems in conventional mathematics.
The implications of this system for such topics as topology or measure theory are not as obvious to me. It does seem that it would simplify things though, since all sets would be closed and there would be no such thing as an open set. I will leave this for others to ponder.
Turning our attention to physics, it appears that the Dirac Delta Function has a natural and rigorous definition in the Practical Number System. Intuitively, we again think of a computer screen containing the graph of the function. The x-axis will contain a row of pixels representing a y value of zero for all except a single pixel at some particular value of x. At this value of x, the function is defined to be P. (P is a valid Practical Number, but it is the "closest" practical number to "infinity" that we can get.) The integral of the function over any interval containing this value of x will be the area of the triangle defined by the two pixels on either side of the missing one at x, and the point (x,P). The area of this triangle is 1/2 the base times the height. Since the pixel spacing is 1/P, the length of the base is 2/P. The height of the triangle is P, so the area is 1. This is exactly the required value for the definition of the Dirac Delta Function, with no appeal to limits or infinities.
This definition also has the property that for intervals for which x is one of the end-points, the integral has the value 1/2. This seems to be a nice intuitive compromise between what might be thought of as a half-open interval and a closed interval.
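The triangle arithmetic above can be checked exactly with rational numbers. The choice of Q is again an illustrative assumption; the point is that for any Q, a spike of height P over a base of two pixel widths (2/P) encloses area exactly 1, with no limits involved.

```python
from fractions import Fraction
import math

Q = 2**16 - 1             # hypothetical largest integer, for illustration
P = math.isqrt(Q)         # height of the spike; the pixel spacing is 1/P

base = Fraction(2, P)     # one pixel width on either side of the spike
height = Fraction(P)
area = Fraction(1, 2) * base * height   # 1/2 * (2/P) * P
```

The computed area is exactly 1, as the definition of the Dirac Delta Function requires.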
In the hundred or so years since Georg Cantor introduced the rigorous consequences of assuming the axiom of choice, it now seems appropriate to explore the alternative consequences of denying it. It seems intuitively evident to me that the consequences could be beneficial to physics while leaving computation unharmed.