I’ve had this idea for a long time, and I suspect others have too: that what we call "types" often seem to emerge less from intrinsic properties and more from which operations happen to work together. For example, if you want subtraction to always make sense, you are effectively forced to move from the natural numbers to the integers. Likewise, insisting that division be total pushes you from the integers to the rational numbers. In each case, the domain is enlarged not for its own sake, but to preserve the coherence of the operations we care about.
And why not insist on totality? After all, mathematical tools are typically valued for the questions they can answer, and the frequency with which an operation fails to apply places a real constraint on how useful it is. This is part of the motivation behind extending the real numbers to the complex numbers: allowing operations like roots and exponentiation to remain defined even when their real-valued interpretations break down.
Division by zero, however, is usually treated as a different kind of failure altogether, and one that many frameworks explicitly refuse to accommodate. Various attempts have been made to soften this boundary. Wheel theory, for example, defines a reciprocal for zero and allows computation to proceed further than usual, at the cost of introducing a distinguished "bottom" element that represents a terminal collapse of information rather than a conventional value.
But zero misbehaves even before we try division. In algebra, we usually expect that performing the same operation on both sides of an equation preserves equality. Multiplying both sides by zero technically respects this rule, yet it destroys all the information we had, reducing the equation to the useless identity 0 = 0 - an information-destroying reset. In short, while division typically fails to be total, multiplication fails to be reversible.
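As a one-line illustration in ordinary arithmetic (nothing COTT-specific is assumed here):

```python
# Multiplying by zero is total (always defined) but destroys information:
# distinct inputs collapse to the same output, so no function can
# recover the original operand from the result.
print(3 * 0 == 5 * 0)  # True: "3" versus "5" is no longer recoverable
```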
This train of thought eventually led me to ask: What if every arithmetic operation were total, and every operation had a reversible counterpart? Imagine mathematics with no dead ends, no unrecoverable errors, and no branch ambiguities. Could such a system exist? Could it be practical?
First, let's observe that "total" and "reversible" apply not to numbers themselves, but to the operations that act on them. And even the word "number" is used liberally here - we don't care what qualifies as a number, as long as all the primitive operations are total and reversible for every member of the type. We make no prior categorical commitments; any such structure must emerge from the operational constraints themselves, guided only by the requirements of totality and reversibility.
By totality, I mean that every primitive operation is defined on all elements of the type. When an operation would normally be undefined or indeterminate, the response is not to forbid it, but to represent the unresolved structure explicitly. Rather than collapsing expressions at the point of failure, operations are deferred in a way that preserves the information needed to resolve them later.
Reversibility is more subtle. In broad terms, it means that every operation admits a way of being undone, though not necessarily by a simple inverse function. The reverse of an operation may involve multiple steps, auxiliary structure, or a re-expression of the original term. In this sense, reversibility does not imply bijectivity, injectivity, or surjectivity in the usual sense, but rather the absence of irreversible collapse.
Operational deferral implies that some values exist even when no immediate numeric representation is available. In such cases, a value is better understood not as a static number, but as the result of a history of operations that has not yet been collapsed.
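As a toy illustration of deferral, an undefined case can return an explicit unresolved node instead of raising an error. The `Deferred` class and `divide` helper are hypothetical names invented for this sketch, not part of any COTT implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deferred:
    """An unresolved operation kept as explicit structure."""
    op: str
    left: object
    right: object

def divide(a, b):
    # Instead of collapsing at the point of failure, defer: the operands
    # and the operation are preserved so they can be resolved later.
    if b == 0:
        return Deferred("div", a, b)
    return a / b

print(divide(6, 3))   # 2.0
print(divide(6, 0))   # Deferred(op='div', left=6, right=0)
```

A deferred value is exactly "the result of a history of operations that has not yet been collapsed": the history is the data.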
From this perspective, a type is not characterized by intrinsic properties of its elements, but by the operations that can act on them without loss of information. In particular, a type is defined by the operations it admits and remains stable under.
This motivates the following definition:
A type is defined by the total, reversible operations it admits.
To make this concrete, consider the differential of a function f(x). In the COTT framework, we begin by multiplying the expression by zero:

0·f(x)
At first glance, this looks trivial. But in COTT, it has a precise structural meaning: it computes a zero-magnitude displacement of the function’s input,

0·f(x) = f(x+0x) - f(x)
Note: 0x is a symbolic object that will be described in more detail in the next section.
Here, multiplying by zero does not annihilate information. Instead, it produces a structural differential that preserves the form of the expression, while maintaining equality, totality, and reversibility. Any subsequent cancellation arises from the additive identity, not from zero itself.
As a concrete example, consider the function f(x)=x^2+c. Applying the COTT differential gives:

f(x+0x) - f(x) = (x+0x)^2 + c - (x^2+c) = 2x·0x + (0x)^2
Notably, no appeal to limits or infinitesimals is required; the differential arises entirely from algebraic expansion under a zero-magnitude displacement.
Continuing this example, we divide by the displacement 0x:

(2x·0x + (0x)^2)/0x = 2x + 0x
At this stage, division by 0x is permitted because 0x denotes a structured zero-displacement, not a scalar zero, and cannot be freely reassociated or canceled.
The quadratic term remains present as a higher-order residue, explicitly visible rather than silently discarded. The derivative is obtained by projecting onto the component independent of the zero displacement:
2x + 0x ≈ 2x

Formally, this is a linear projection in the sense that the derivative is the component of the differential independent of the zero displacement. The term 0x remains present and can be used in further algebraic manipulations. The ≈ symbol denotes selection of the linear component, not destruction of information.
Conceptually, this is reminiscent of dual numbers, which encode derivatives via a nilpotent displacement. Unlike traditional dual numbers, however, COTT retains higher-order structure and enforces reversibility and totality throughout.
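The worked example can be mechanized. Below is a minimal sketch, assuming the zero-displacement behaves like a formal indeterminate whose powers are all retained (no nilpotent truncation, unlike classical dual numbers). The class name `Disp` and its methods are illustrative inventions, not an official COTT API:

```python
class Disp:
    """A value plus a formal zero-displacement, kept to every order.

    Stored as {order: coefficient}: order 0 is the plain value and
    order k is the coefficient of the k-th power of the displacement.
    Nothing is truncated, mirroring COTT's retention of higher-order
    residue.
    """

    def __init__(self, coeffs):
        self.coeffs = {k: v for k, v in coeffs.items() if v != 0}

    @staticmethod
    def _lift(other):
        return other if isinstance(other, Disp) else Disp({0: other})

    def __add__(self, other):
        other = Disp._lift(other)
        keys = set(self.coeffs) | set(other.coeffs)
        return Disp({k: self.coeffs.get(k, 0) + other.coeffs.get(k, 0)
                     for k in keys})

    __radd__ = __add__

    def __sub__(self, other):
        other = Disp._lift(other)
        keys = set(self.coeffs) | set(other.coeffs)
        return Disp({k: self.coeffs.get(k, 0) - other.coeffs.get(k, 0)
                     for k in keys})

    def __mul__(self, other):
        other = Disp._lift(other)
        out = {}
        for i, a in self.coeffs.items():
            for j, b in other.coeffs.items():
                out[i + j] = out.get(i + j, 0) + a * b
        return Disp(out)

    __rmul__ = __mul__

    def div_disp(self):
        """Divide by the displacement itself: every order shifts down."""
        return Disp({k - 1: v for k, v in self.coeffs.items()})

    def linear_projection(self):
        """Select the component independent of the displacement (the ≈ step)."""
        return self.coeffs.get(0, 0)


def f(v):                 # the running example f(x) = x^2 + c, with c = 5
    return v * v + 5

x0 = 3
d = f(Disp({0: x0, 1: 1})) - f(Disp({0: x0}))  # f(x + 0x) - f(x)
q = d.div_disp()                               # divide by 0x
print(d.coeffs)                # {1: 6, 2: 1}: the structure 2x·0x + (0x)^2
print(q.linear_projection())   # 6, i.e. 2x at x = 3
```

Note that the quadratic residue (order 2) is carried through `d` and `q` untouched; only the final projection selects the linear component.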
Historically, arithmetic systems that lacked an explicit zero treated cancellation as erasure rather than structure. Roman numerals, for example, never introduced a symbol for zero; absence simply removed terms. In COTT, annihilation is not a primitive act but an operational outcome of invariance.
Zero erases only when the result is invariant under the surrounding operation. We refer to this principle as the invariance condition for discharge. When invariance is not achieved, annihilation cannot discharge and zero remains bound to its context, where it functions as a boundary marker. Invariance is always evaluated relative to the operation under which discharge is being considered.
We call zero a boundary marker in a loose, operational sense: it marks the transition point of an arithmetic act when annihilation fails to discharge, without introducing new values or erasing structure.
The limit formalism can be understood as a reconciliation ritual: algebraic expressions are manipulated beyond their sanctioned domain, then forgiven by appeal to a limiting process that retroactively justifies the result. COTT refuses to accept the premise of reconciliation. Instead of appealing to limits or other formal rituals, it preserves all displacements and operations within a unified syntax, ensuring that structure and information are never discarded.
First, a note on syntax:
When zero binds to a constant, no structural dependency is introduced. The result is therefore observationally zero, but not erased. Any residual is carried as bookkeeping structure, preserving reversibility even when no dependency is introduced.
When zero binds to a variable, it produces a tagged zero-magnitude displacement:

0·x → 0x
No assumption of nilpotency or idempotency is made. Truncation of higher-order structure is treated as a domain-specific permission, not a default algebraic rule. Once bound, a zero-displacement may not be canceled, absorbed, or freely reassociated, unless a domain-specific rule explicitly permits discharge. The zero does not vanish; it remains attached to its operand.
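The binding rules above can be sketched as a tiny term rewriter. The tagged-tuple representation and the function names (`bind_zero`, `discharge_add`) are hypothetical, chosen only to illustrate the distinction between discharge and binding:

```python
def bind_zero(term):
    """Multiply a term by zero without erasing structure."""
    kind, val = term
    if kind == "const":
        # No structural dependency: observationally zero, but the
        # operand is kept as bookkeeping rather than erased.
        return ("zero_const", val)
    if kind == "var":
        # A tagged zero-magnitude displacement, written 0x in the text.
        return ("zero_disp", val)
    raise ValueError(f"unknown term kind: {kind}")

def discharge_add(term, zero_bound):
    # x + 0 leaves x invariant, so zero may fully discharge under
    # addition (the invariance condition for discharge is met).
    return term

print(bind_zero(("const", 7)))   # ('zero_const', 7)
print(bind_zero(("var", "x")))   # ('zero_disp', 'x')
print(discharge_add(("var", "x"), ("zero_const", 7)))  # ('var', 'x')
```

Under multiplication there is no discharge rule at all: `bind_zero` always returns a bound structure, matching the rule that a bound zero may not be canceled or reassociated without explicit permission.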
This is the simplest nontrivial manifestation of zero as a boundary marker.
When zero binds to an expression, it structurally lifts, producing a differential:

0·f(x) → f(x+0x) - f(x)
Here, zero does not erase the expression. It generates a structured offset that preserves form and enables algebraic rewrite without appeal to limits.
If annihilation is only allowed to erase under invariance, then division by zero cannot be forbidden. It must either be represented structurally, or the system is incomplete.
In calculus, this prohibition is routinely circumvented by replacing the zero denominator with a nonzero placeholder, such as dx, and then appealing to a limiting argument. The algebraic manipulation occurs outside its strictly valid domain and is later reconciled by a limit.
COTT takes a different approach: division is defined as multiplicative discharge (i.e., cancellation under multiplication), and the arithmetic is extended with explicit structure to preserve totality.
We introduce a symbol ω, defined by

ω = 1/0
ω is a distinguished symbol introduced to totalize division. Like zero-bound structure, it does not permit unrestricted reassociation or cancellation outside invariant contexts.
By multiplicativity, for any x,

x/0 = x·(1/0) = x·ω
To satisfy the invariance condition for discharge under multiplication, ω is defined such that

0·ω = 1
Operationally, ω represents the unresolved scale factor required to complete multiplicative discharge when annihilation alone is insufficient.
As a result, cancellation becomes total rather than conditional.
Since discharge must be invariant, the identity x/x=1 holds without exception. Applied to zero, this yields

0/0 = 1
This equality is not an assertion about classical arithmetic; it arises naturally as a structural consequence of extending the multiplicative system so that zero participates symmetrically rather than exceptionally.
The purpose of introducing ω is not to collapse distinctions, but to remove the need for exceptional cases. Expressions that traditionally require domain exclusions or limiting arguments remain algebraic and total, with all cancellations made explicit.
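One way to make totalized discharge concrete is a toy model in which every value is a nonzero scalar times an integer power of the boundary marker, so that ω corresponds to 0^(-1). The class `BVal` and this representation are assumptions of the sketch, not definitions from the text:

```python
from fractions import Fraction

class BVal:
    """A value c·0^k: nonzero scalar c, integer power k of the marker."""

    def __init__(self, c, k=0):
        self.c = Fraction(c)   # nonzero scalar part
        self.k = k             # power of the boundary marker 0

    def __mul__(self, other):
        return BVal(self.c * other.c, self.k + other.k)

    def __truediv__(self, other):
        # Multiplicative discharge: exponents subtract, never a dead end.
        return BVal(self.c / other.c, self.k - other.k)

    def __eq__(self, other):
        return self.c == other.c and self.k == other.k

    def __repr__(self):
        return f"{self.c}·0^{self.k}"

ZERO  = BVal(1, 1)    # 0  represented as 1·0^1
OMEGA = BVal(1, -1)   # ω  represented as 1·0^-1
ONE   = BVal(1, 0)

print(ZERO * OMEGA == ONE)   # True: 0·ω = 1
print(ZERO / ZERO == ONE)    # True: 0/0 = 1
print(BVal(7) / ZERO)        # 7·0^-1, i.e. 7ω
```

In this model cancellation is total by construction: division always succeeds, and every quotient records exactly how many factors of the marker remain undischarged.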
Our goal in this section is to extend exponentiation to a zero or ω base while preserving internal consistency and totality. Rather than introducing new axioms ad hoc, all values will be derived from identities that already govern exponentiation and inversion elsewhere in the system.
We begin with identities that are universally accepted for exponentiation over arbitrary bases.
For any base x,

x^0 = 1

Applying this identity uniformly yields

0^0 = 1 and ω^0 = 1
Similarly, for any base x,

x^1 = x

And therefore:

0^1 = 0 and ω^1 = ω
For higher positive powers, no further simplification is imposed. These expressions remain well-formed and tractable:

0^2 = 0·0, 0^3 = 0·0·0,

and so on.
Negative exponents are defined via multiplicative inversion. Since division is totalized in COTT, zero admits a reciprocal, denoted ω:

0^(-1) = ω

and in general

0^(-n) = ω^n
We follow the same approach in defining exponentiation for ω:

ω^(-1) = 0 and ω^(-n) = 0^n
Only two values remain unspecified: 0^ω and ω^ω. Requiring closure and reversibility uniquely determines both:

0^ω = -1 and ω^ω = -1
Exponentiation over a zero or ω base is defined for all integer exponents without contradiction.
Zero-base exponentiation is iterated: values may appear at higher exponential depth.
This iteration generates the integers structurally, without introducing sign or magnitude as primitives.
Having established exponentiation over boundary bases, we now examine its inverse.
Rather than defining logarithms from scratch, we proceed exactly as before: by pairing each exponential identity with its logarithmic counterpart and requiring that both remain valid for boundary bases.
The logarithm log_0 is defined as the inverse of zero-base exponentiation on its image, including values arising at higher exponential depth.
| 0^a=b | log_0b=a |
|---|---|
| 0^0=1 | log_0(1)=0 |
| 0^1=0 | log_0(0)=1 |
| 0^n=0·0·... | log_0(0^n)=n |
| 0^(0^n)=n | log_0(n)=0^n |
| 0^(-n)=ω^n | log_0(ω^n)=-n |
| 0^(ω^n)=-n | log_0(-n)=ω^n |
We write n without restriction to emphasize that no structural obstruction appears at the integer level; whether these identities extend beyond it is a matter of analysis, not exception.
Observe that log_0 inverts zero-base exponentiation exactly on its image, preserving the correspondence at higher exponential depths.
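The correspondence table can be checked mechanically. The sketch below encodes 0^n and ω^n as tagged pairs (a hypothetical representation invented for this check) and verifies that `log0` inverts `exp0` on a sample of its image:

```python
# Values in the image of zero-base exponentiation are either plain
# integers or tagged powers: ("0pow", n) for 0^n, ("wpow", n) for ω^n.

def exp0(a):
    """Zero-base exponentiation, following the table row by row."""
    if isinstance(a, tuple):
        tag, n = a
        return n if tag == "0pow" else -n   # 0^(0^n) = n, 0^(ω^n) = -n
    if a == 0:
        return 1                            # 0^0 = 1
    if a > 0:
        return ("0pow", a)                  # 0^n stays a structured product
    return ("wpow", -a)                     # 0^(-n) = ω^n

def log0(b):
    """Inverse of exp0 on its image."""
    if isinstance(b, tuple):
        tag, n = b
        return n if tag == "0pow" else -n   # log_0(0^n) = n, log_0(ω^n) = -n
    if b == 1:
        return 0                            # log_0(1) = 0
    if b > 0:
        return ("0pow", b)                  # log_0(n) = 0^n
    return ("wpow", -b)                     # log_0(-n) = ω^n

for a in [0, 1, 3, -2, ("0pow", 2), ("wpow", 1)]:
    assert log0(exp0(a)) == a
print("log0 inverts exp0 on the sampled image")
```

Notice that `exp0` and `log0` are the same case analysis read in opposite directions, which is exactly the "reuse of the existing exponential structure at a different depth" described below.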
Crucially, we now observe that log_0 is not a one-way or independently chosen operation. Instead, it exists as the structurally inseparable counterpart to exponentiation. On its domain, it satisfies the identity:

log_0(x) = 0^x
This identification is not definitional but structural. Both operations produce identical outputs for identical inputs across their shared image.
Likewise, zero-base exponentiation is self-resolving at higher depth:

0^(0^x) = x
Together, these identities show that logarithms with boundary base do not introduce a new operator. Instead, they reveal a reuse of the existing exponential structure at a different depth. Thus, logarithm and exponentiation are not opposing operations here, but the same operation viewed at different levels of the boundary hierarchy.
Before moving on, let's briefly revisit 0^ω=-1 by showing how this identity is consistent under logarithm:
Using the Quotient Rule:

log_0(ω) = log_0(1/0) = log_0(1) - log_0(0) = 0 - 1 = -1

and since log_0 coincides with zero-base exponentiation on the shared image,

0^ω = log_0(ω) = -1
Thus confirming: 0^ω=-1.
We begin with a negative base written explicitly.
Rewriting exponentiation through zero-base exponentiation and its inverse gives
And by the reciprocal rule,
When x=1, the expression may be rewritten as

(-1)^n = 0^(nω)
And the dyadic basis i may be derived as

i = (-1)^(1/2) = 0^(ω/2)
Powers of 0^(ω/2) produce the expected periodic behavior, just like i.
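Using the ordinary complex unit as a stand-in for 0^(ω/2), the claimed 4-cycle is easy to check. This verifies only the periodicity of i itself, not the COTT identification:

```python
# i, i^2, i^3, i^4 cycle through i, -1, -i, 1 with period 4.
powers = [1j ** n for n in range(1, 5)]
print(powers[1] == -1, powers[3] == 1)                      # True True
print(all(1j ** (n + 4) == 1j ** n for n in range(1, 5)))   # True
```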
All powers of 0 or ω are phase residuals and translate directly to complex rotation primitives. But determining how they translate is nontrivial.
For example:
The question is: what is the value of 0^2?
Solve for x:
Now double check:
We can do the same thing with 0^3. We know:
So we solve:
Bivectors, like complex numbers, exhibit a 4-cycle symmetry; however, this symmetry acts simultaneously along two orthogonal axes.
A bivector can be represented as: