I’ve had this idea for a long time, and I suspect others have too: that what we call "types" often seem to emerge less from intrinsic properties and more from which operations happen to work together. For example, if you want subtraction to always make sense, you are effectively forced to move from the natural numbers to the integers. Likewise, insisting that division be total pushes you from the integers to the rational numbers. In each case, the domain is enlarged not for its own sake, but to preserve the coherence of the operations we care about.
And why not insist on totality? After all, mathematical tools are typically valued for the questions they can answer, and the frequency with which an operation fails to apply places a real constraint on how useful it is. This is part of the motivation behind extending the real numbers to the complex numbers: allowing operations like roots and exponentiation to remain defined even when their real-valued interpretations break down.
Division by zero, however, is usually treated as a different kind of failure altogether, and one that many frameworks explicitly refuse to accommodate. Various attempts have been made to soften this boundary. Wheel theory, for example, defines a reciprocal for zero and allows computation to proceed further than usual, at the cost of introducing a distinguished "bottom" element that represents a terminal collapse of information rather than a conventional value.
But zero misbehaves even before we try division. In algebra, we usually expect that performing the same operation on both sides of an equation preserves equality. Multiplying both sides by zero technically respects this rule, yet it immediately destroys the information we had, leaving the equation effectively unusable - functionally equivalent to an information-destroying reset. In short, while division typically fails to be total, multiplication fails to be generally reversible.
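To see the asymmetry concretely, here is a short check (illustrative only): multiplying both sides of x + 3 = 5 by a nonzero constant preserves the solution set, while multiplying by zero admits every candidate.

```python
# Multiplying both sides of x + 3 == 5 by zero still "respects equality",
# but the solution set explodes from a single value to all candidates:
# the information identifying x has been destroyed.

candidates = range(-10, 11)
before = [x for x in candidates if x + 3 == 5]
after = [x for x in candidates if 0 * (x + 3) == 0 * 5]

print(before)      # [2]
print(len(after))  # 21: every candidate now satisfies the equation
```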
This train of thought eventually led me to ask: What if every arithmetic operation were total, and every operation had a reversible counterpart? Imagine mathematics with no dead ends, no unrecoverable errors, and no branch ambiguities. Could such a system exist? Could it be practical?
First, let's observe that "total" and "reversible" apply not to numbers themselves, but to the operations that act on them. And even the word "number" is used liberally here - we don't care what qualifies as a number, as long as all the primitive operations are total and reversible for every member of the type. We make no prior categorical commitments; any such structure must emerge from the operational constraints themselves, guided only by the requirements of totality and reversibility.
By totality, I mean that every primitive operation is defined on all elements of the type. When an operation would normally be undefined or indeterminate, the response is not to forbid it, but to represent the unresolved structure explicitly. Rather than collapsing expressions at the point of failure, operations are deferred in a way that preserves the information needed to resolve them later.
Reversibility is more subtle. In broad terms, it means that every operation admits a way of being undone, though not necessarily by a simple inverse function. The reverse of an operation may involve multiple steps, auxiliary structure, or a re-expression of the original term. In this sense, reversibility does not imply bijectivity, injectivity, or surjectivity in the usual sense, but rather the absence of irreversible collapse.
Operational deferral implies that some values exist even when no immediate numeric representation is available. In such cases, a value is better understood not as a static number, but as the result of a history of operations that has not yet been collapsed.
From this perspective, a type is not characterized by intrinsic properties of its elements, but by the operations that can act on them without loss of information. In particular, a type is defined by the operations it admits and remains stable under.
This motivates the following definition:
A type is defined by the total, reversible operations it admits.
To make this concrete, consider the differential of a function f(x). In the COTT framework, we begin by multiplying the expression by zero: 0·f(x).
At first glance, this looks trivial. But in COTT, it has a precise structural meaning: it computes a zero-magnitude displacement of the function’s input, 0·f(x) = f(x + 0x) - f(x).
Note: 0x is a symbolic object that will be described in more detail in the next section.
Here, multiplying by zero does not annihilate information. Instead, it produces a structural differential that preserves the form of the expression, while maintaining equality, totality, and reversibility. Any subsequent cancellation arises from the additive identity, not from zero itself.
As a concrete example, consider the function f(x) = x^2 + c. Applying the COTT differential gives:
0·f(x) = (x + 0x)^2 + c - (x^2 + c) = 2x·0x + (0x)^2
Notably, no appeal to limits or infinitesimals is required; the differential arises entirely from algebraic expansion under a zero-magnitude displacement.
Continuing this example, we divide both sides by 0x:
(2x·0x + (0x)^2) / 0x = 2x + 0x
At this stage, division by 0x is permitted because 0x denotes a structured zero-displacement, not a scalar zero, and cannot be freely reassociated or canceled.
The quadratic term remains present as a higher-order residue, explicitly visible rather than silently discarded. The derivative is obtained by projecting onto the component independent of the zero displacement:
2x + 0x ≈ 2x
Formally, this is a linear projection in the sense that the derivative is the component of the differential independent of the zero displacement. The term 0x remains present and can be used in further algebraic manipulations. The ≈ symbol denotes selection of the linear component, not destruction of information.
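For readers who prefer executable intuition, here is a small sketch of the calculation above. It is my own toy encoding, not part of COTT itself: a `Disp` value tracks the coefficients of powers of the displacement 0x, nothing is truncated, and the derivative falls out by dividing by 0x and projecting.

```python
class Disp:
    """A polynomial in the zero-displacement 0x with numeric coefficients.

    coeffs[k] is the coefficient of (0x)**k. Higher-order residues are
    kept, not discarded, so every step remains visible.
    """
    def __init__(self, coeffs):
        self.coeffs = {k: v for k, v in coeffs.items() if v != 0}

    def __add__(self, other):
        c = dict(self.coeffs)
        for k, v in other.coeffs.items():
            c[k] = c.get(k, 0) + v
        return Disp(c)

    def __sub__(self, other):
        return self + Disp({k: -v for k, v in other.coeffs.items()})

    def __mul__(self, other):
        c = {}
        for i, a in self.coeffs.items():
            for j, b in other.coeffs.items():
                c[i + j] = c.get(i + j, 0) + a * b
        return Disp(c)

    def divide_by_0x(self):
        # permitted only when every term carries a factor of 0x
        assert all(k >= 1 for k in self.coeffs)
        return Disp({k - 1: v for k, v in self.coeffs.items()})

    def project(self):
        # select the component independent of the zero displacement
        return self.coeffs.get(0, 0)


def cott_derivative(f, x):
    displaced = Disp({0: x, 1: 1})                 # x + 0x
    differential = f(displaced) - f(Disp({0: x}))  # models 0 * f(x)
    return differential.divide_by_0x().project()


print(cott_derivative(lambda t: t * t, 3.0))  # 6.0, i.e. 2x at x = 3
```

The quadratic residue (0x)^2 is visible inside `differential` before projection, mirroring the higher-order term retained in the text.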
Conceptually, this is reminiscent of dual numbers, which encode derivatives via a nilpotent displacement. Unlike traditional dual numbers, however, COTT retains higher-order structure and enforces reversibility and totality throughout.
Historically, arithmetic systems that lacked an explicit zero treated cancellation as erasure rather than structure. Roman numerals, for example, never introduced a symbol for zero; absence simply removed terms. In COTT, annihilation is not a primitive act but an operational outcome of invariance.
Zero erases only when the result is invariant under the surrounding operation. We refer to this principle as the invariance condition for discharge. When invariance is not achieved, annihilation cannot discharge and zero remains bound to its context, where it functions as a boundary marker. Invariance is always evaluated relative to the operation under which discharge is being considered.
We call zero a boundary marker in a loose, operational sense: it marks the transition point of an arithmetic act when annihilation fails to discharge, without introducing new values or erasing structure.
The limit formalism can be understood as a reconciliation ritual: algebraic expressions are manipulated beyond their sanctioned domain, then forgiven by appeal to a limiting process that retroactively justifies the result. COTT refuses to accept the premise of reconciliation. Instead of appealing to limits or other formal rituals, it preserves all displacements and operations within a unified syntax, ensuring that structure and information are never discarded.
First, a note on syntax:
When zero binds to a constant, no structural dependency is introduced. The result is therefore observationally zero, but not erased. Any residual is carried as bookkeeping structure, preserving reversibility even when no dependency is introduced.
When zero binds to a variable, it produces a tagged zero-magnitude displacement: 0·x = 0x.
No assumption of nilpotency or idempotency is made. Truncation of higher-order structure is treated as a domain-specific permission, not a default algebraic rule. Once bound, a zero-displacement may not be canceled, absorbed, or freely reassociated, unless a domain-specific rule explicitly permits discharge. The zero does not vanish; it remains attached to its operand.
This is the simplest nontrivial manifestation of zero as a boundary marker.
When zero binds to an expression, it structurally lifts, producing a differential: 0·f(x) = f(x + 0x) - f(x).
Here, zero does not erase the expression. It generates a structured offset that preserves form and enables algebraic rewrite without appeal to limits.
If annihilation is only allowed to erase under invariance, then division by zero cannot be forbidden. It must either be represented structurally, or the system is incomplete.
In calculus, this prohibition is routinely circumvented by replacing the zero denominator with a nonzero placeholder, such as dx, and then appealing to a limiting argument. The algebraic manipulation occurs outside its strictly valid domain and is later reconciled by a limit.
COTT takes a different approach: division is defined as multiplicative discharge (i.e., cancellation under multiplication), and the arithmetic is extended with explicit structure to preserve totality.
We introduce a symbol ω, defined as the reciprocal of zero: ω = 1/0.
ω is a distinguished symbol introduced to totalize division. Like zero-bound structure, it does not permit unrestricted reassociation or cancellation outside invariant contexts.
By multiplicativity, for any x, x/0 = x·ω.
To satisfy the invariance condition for discharge under multiplication, ω is defined such that 0·ω = 1.
Operationally, ω represents the unresolved scale factor required to complete multiplicative discharge when annihilation alone is insufficient.
As a result, cancellation becomes total rather than conditional.
Since discharge must be invariant, the identity x/x = 1 holds without exception. Applied to zero, this yields 0/0 = 1.
This equality is not an assertion about classical arithmetic; it arises naturally as a structural consequence of extending the multiplicative system so that zero participates symmetrically rather than exceptionally.
The purpose of introducing ω is not to collapse distinctions, but to remove the need for exceptional cases. Expressions that traditionally require domain exclusions or limiting arguments remain algebraic and total, with all cancellations made explicit.
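As a minimal illustration (my own sketch, not the system's full construction), one can model values as c·0^k, where the integer k records zero-depth and ω sits at depth −1. Multiplicative discharge is then pure bookkeeping on depths, and x/x = 1 holds for every value, including 0 and ω.

```python
from fractions import Fraction

class Boundary:
    """A value c * 0**k with nonzero coefficient c and integer zero-depth k.

    omega is depth -1, so 0 * omega discharges to depth 0, i.e. to 1.
    """
    def __init__(self, c, k=0):
        self.c, self.k = Fraction(c), k

    def __mul__(self, other):
        return Boundary(self.c * other.c, self.k + other.k)

    def __truediv__(self, other):
        # division as multiplicative discharge: subtract zero-depths
        return Boundary(self.c / other.c, self.k - other.k)

    def __eq__(self, other):
        return self.c == other.c and self.k == other.k

    def __repr__(self):
        return f"{self.c}*0^{self.k}" if self.k else f"{self.c}"

zero = Boundary(1, 1)    # structured zero: 1 * 0^1
omega = Boundary(1, -1)  # its reciprocal: 1 * 0^-1

print(zero * omega)  # 1: discharge under multiplication (0 * omega = 1)
print(zero / zero)   # 1: x/x holds without exception
```

This toy deliberately ignores addition, which is where the real difficulty lives; it only demonstrates that the multiplicative fragment can be made total.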
Our goal in this section is to extend exponentiation to a zero or ω base while preserving internal consistency and totality. Rather than introducing new axioms ad hoc, all values will be derived from identities that already govern exponentiation and inversion elsewhere in the system.
We begin with identities that are universally accepted for exponentiation over arbitrary bases.
For any base x, x^0 = 1.
Applying this identity uniformly yields 0^0 = 1 and ω^0 = 1.
Similarly, for any base x, x^1 = x.
And therefore: 0^1 = 0 and ω^1 = ω.
For higher positive powers, no further simplification is imposed. These expressions remain well-formed and tractable: 0^2 = 0·0, 0^3 = 0·0·0,
and so on.
Negative exponents are defined via multiplicative inversion. Since division is totalized in COTT, zero admits a reciprocal, denoted ω: 0^(-1) = ω,
and in general 0^(-n) = ω^n.
We follow the same approach in defining exponentiation for ω: ω^(-1) = 0, and in general ω^(-n) = 0^n.
Only two values remain unspecified: 0^ω and ω^ω. Requiring closure and reversibility uniquely determines both: 0^ω = -1, and hence ω^ω = (0^ω)^(-1) = -1.
Exponentiation over a zero or ω base is defined for all integer exponents without contradiction.
Zero-base exponentiation is iterated: values may appear at higher exponential depth, as in 0^(0^n) = n.
This iteration generates the integers structurally, without introducing sign or magnitude as primitives.
Having established exponentiation over boundary bases, we now examine its inverse.
Rather than defining logarithms from scratch, we proceed exactly as before: by pairing each exponential identity with its logarithmic counterpart and requiring that both remain valid for boundary bases.
The logarithm log_0 is defined as the inverse of zero-base exponentiation on its image, including values arising at higher exponential depth.
| 0^a = b | log_0(b) = a |
|---|---|
| 0^0 = 1 | log_0(1) = 0 |
| 0^1 = 0 | log_0(0) = 1 |
| 0^n = 0·0·... | log_0(0^n) = n |
| 0^(0^n) = n | log_0(n) = 0^n |
| 0^(-n) = ω^n | log_0(ω^n) = -n |
| 0^(ω^n) = -n | log_0(-n) = ω^n |
We write n without restriction to emphasize that no structural obstruction appears at the integer level; whether these identities extend beyond it is a matter of analysis, not exception.
Observe that log_0 inverts zero-base exponentiation exactly on its image, preserving the correspondence at higher exponential depths.
Crucially, we now observe that log_0 is not a separate primitive operation. On its domain, it coincides exactly with exponentiation by zero: log_0(b) = 0^b.
This identification is not definitional but structural. Both operations produce identical outputs for identical inputs across their shared image.
Likewise, zero-base exponentiation is self-resolving at higher depth: 0^(0^a) = a.
Together, these identities show that logarithms with boundary base do not introduce a new operator. Instead, they reveal a reuse of the existing exponential structure at a different depth. Thus, logarithm and exponentiation are not opposing operations here, but the same operation viewed at different levels of the boundary hierarchy.
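The involution can be checked mechanically. Below is a sketch using my own tagged encoding (not canonical COTT notation): boundary values are tuples, zero-base exponentiation follows the table above, `log0` is literally the same function, and applying it twice returns the input.

```python
# Tagged values at the boundary:
#   ("num", n) -- an ordinary integer
#   ("0^", n)  -- the structural power 0**n (n >= 2), kept unexpanded
#   ("w^", n)  -- the structural power omega**n (n >= 1)

def exp0(a):
    """Zero-base exponentiation 0**a over tagged values."""
    tag, n = a
    if tag == "num":
        if n == 0:
            return ("num", 1)   # 0^0 = 1
        if n == 1:
            return ("num", 0)   # 0^1 = 0
        if n > 1:
            return ("0^", n)    # 0^n stays structural
        return ("w^", -n)       # 0^(-n) = omega^n
    if tag == "0^":
        return ("num", n)       # 0^(0^n) = n
    return ("num", -n)          # 0^(omega^n) = -n

# log_0 is not a separate primitive: on this domain it *is* exp0.
log0 = exp0

# exp0 is an involution: applying it twice returns the input.
for a in [("num", 0), ("num", 1), ("num", 3), ("num", -2), ("0^", 4), ("w^", 5)]:
    assert log0(exp0(a)) == a
print("involution verified")
```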
Before moving on, let's briefly revisit 0^ω = -1 by showing how this identity is consistent under logarithm. Applying log_0 to both sides gives log_0(0^ω) = ω by inversion, while the table gives log_0(-1) = ω^1 = ω. The two sides agree, thus confirming: 0^ω = -1.
We freely use logarithmic identities inherited from exponentiation, since logarithms in COTT are defined solely as inverses of existing exponential structure.
We now examine the square root of −1.
This step is not required for correctness; it exposes the internal mechanics of the expression by making the boundary operators explicit.
Using the standard logarithmic identity b^p = 0^(p·log_0(b)), we obtain:
(-1)^(1/2) = 0^((1/2)·log_0(-1))
From the results of the previous section, the logarithm of −1 with respect to the zero base is already determined: log_0(-1) = ω.
Substituting, we find: (-1)^(1/2) = 0^(ω/2).
Thus, i = 0^(ω/2).
The derivation of i is structurally reminiscent of Euler's identity: i = 0^(ω/2) ≈ e^(iπ/2).
Here “≈” denotes phase correspondence rather than algebraic equality: both expressions generate the same phase outcomes under iteration, though they arise from different constructions.
In this system, the default "zero" orientation is zero rather than one: phase begins from annihilation rather than identity.
(Recall: 0·ω = 1, 0^1 = 0)
From here, integer phase advances multiplicatively.
Fractional phase interpolates this oscillation.
The phase 1/3 is our first example of a non-dyadic phase. It can be represented in terms of i, but not generated by it alone.
We now derive a closed expression for rational powers of negative numbers, using only structures already introduced.
The key observation is that, in COTT, logarithms are not primitive. For the zero base we have the identity log_0(b) = 0^b.
We begin with a negative base written explicitly.
Rewriting exponentiation through zero-base exponentiation and its inverse gives
From earlier results, the logarithm of a negative value separates cleanly into magnitude and boundary contribution:
Substituting, we obtain the compact form
This expression defines all rational powers of negative numbers without branch cuts or external conventions.
As an example, consider a non-dyadic case:
We begin with the simplest dyadic phase:
This tells us we have a terminal construction with a coefficient of −1 and a residual boundary contribution on the second-order axis.
In COTT, rational phase is not an angle but a discrete axis structure whose dimensionality is determined by the denominator and whose coupling depends on its factorization.
Taking a non-dyadic example:
The denominator does not indicate magnitude but the dimensionality of the phase interaction; coprime factors generate independent signed axes.
Even irrational phases can be analyzed, although determining the terminal construction is a bit more complicated.
Here the quantity is not itself a phase, but the projection of a higher-order construction onto the level-0 reference frame.
We begin by applying the involution to bases of exponentiation. This involution lifts the quantity into a construction space where phase can be resolved before projection. The initial reference frame is level 0.
Apply the power rule to the lifted base:
Split the exponent: multiplication becomes addition.
Split the exponent: division becomes subtraction.
We rewrite 1 as 0^0.
Apply a double involution to the 0^2 term.
Now all terms have the same base, so we resolve the redundant involutions, not by cancelling them out, but by shifting the reference frame up two levels, preserving construction history instead of erasing it. We are now on level 2.
This terminal construction represents the projection of a three-axis interaction onto the level-0 frame, with signed residual structure retained rather than normalized away.
We can now compute phase without angles, rotation without normalization, and dimensionality without basis vectors.