
What specific bit-level comparison rule does **C++20** apply to determine equivalence between floating-point values used as non-type template arguments, and why do **-0.0** and **+0.0** create distinct template instantiations despite comparing equal in runtime expressions?


Answer to the question

C++20 introduced floating-point types as non-type template parameters (NTTPs) by admitting them as structural types. Under the standard's template-argument-equivalence rules ([temp.type]), two non-type template arguments of floating-point type match only if their values are identical. For floating-point values this amounts to bitwise identity of the representation rather than value equality: two floating-point constants denote the same template argument only if every bit matches.

Consequently, +0.0 and -0.0, which differ only in their sign bit under IEEE 754 representation, instantiate distinct templates. Similarly, different NaN payloads create distinct types. This contrasts sharply with runtime behavior where +0.0 == -0.0 evaluates to true, because the equality operator implements mathematical equivalence while the template mechanism requires physical identity.

A real-world example

We encountered this while building a compile-time dimensional analysis library for a physics simulation engine. We used double NTTPs to represent physical constants (such as the gravitational constant) and wanted to specialize solvers for the theoretical case of zero mass (represented as 0.0). However, some constexpr center-of-mass calculations produced -0.0 through specific arithmetic operations (e.g., -1.0 * 0.0).

When users passed the result of these calculations as a template argument, the compiler selected the generic implementation instead of our ZeroMass specialization, causing a 40% performance regression because the generic version performed full matrix inversions instead of returning identity matrices.

We considered three solutions. First, we could explicitly specialize for both +0.0 and -0.0. This approach guaranteed correct behavior but doubled our maintenance burden and still failed to handle various NaN representations or values that were effectively zero but had different bit patterns due to rounding errors.

Second, we considered normalizing all inputs using a constexpr helper function that forced the sign bit to zero (e.g., value == 0.0 ? 0.0 : value). This solution was robust for zeros but required wrapper macros around every template instantiation, polluting the API and confusing users who expected direct parameter passing.

Third, we implemented a type normalization layer using if constexpr and std::bit_cast to canonicalize values at the entry point of our meta-functions, effectively treating all zeros as positive and collapsing quiet NaNs to a canonical payload. We chose this solution because it provided transparency to library users while ensuring internal consistency.

After implementation, we documented that the library treated all floating-point NTTPs by their bit representation. This resolved the performance issues, though it required developers to be aware that -0.0 and +0.0 were distinct configuration states in the type system.

What candidates often miss

Why does std::is_same_v<decltype(func<+0.0>()), decltype(func<-0.0>())> evaluate to false (for a func whose return type depends on its template parameter) when +0.0 == -0.0 is true?

Template instantiation relies on exact template-argument equivalence. When the compiler encounters func<+0.0>(), it compares the bit pattern of the floating-point argument against existing instantiations. Since IEEE 754 gives -0.0 a set sign bit while +0.0's is clear, the compiler sees two different constant values and generates two distinct function instantiations, each with its own dependent return type. The equality operator at runtime implements the IEEE 754 rule that signed zeros compare equal, but the template machinery operates at the level of object representation, before runtime semantics apply. Candidates often assume that because the values are mathematically equivalent they should produce the same type, confusing runtime value semantics with compile-time type identity.

Why does template<float F> struct S{}; S<1.0> fail to compile despite 1.0 being implicitly convertible to float in normal expressions?

For non-type template parameters of floating-point type, the template argument must be a converted constant expression of the parameter's type, and the implicit conversions permitted in a converted constant expression ([expr.const]) do not include floating-point conversions. The literal 1.0 has type double, not float, so it cannot bind to float F; you must use the float suffix: S<1.0f>. This restriction keeps template argument identity unambiguous, with no conversion or precision loss involved. Beginners often miss this because an ordinary function call would perform the double-to-float conversion, but template arguments are matched exactly, without it.

How do different quiet NaN (qNaN) payloads affect template instantiation when they all represent "not a number"?

IEEE 754 allows NaN values to carry payload bits (diagnostic information). Since C++20 template equivalence uses bitwise comparison, two NaNs with different payloads (e.g., std::numeric_limits<double>::quiet_NaN() versus the result of 0.0/0.0 on different hardware) are distinct template arguments. This can lead to code bloat if code paths instantiate templates for multiple NaN bit patterns, or to subtle ODR violations if different translation units observe different NaN representations for what the programmer assumed was a single specialization. Candidates frequently assume NaN is a singular value like nullptr, but it actually represents a range of bit patterns, each distinct in the template system.