Can you explain why this happens?
    static void Main()
    {
        const float xScaleStart = 0.5f;
        const float xScaleStop = 4.0f;
        const float xScaleInterval = 0.1f;
        const float xScaleAmplitude = xScaleStop - xScaleStart;
        const float xScaleSizeC = xScaleAmplitude / xScaleInterval;
        float xScaleSize = xScaleAmplitude / xScaleInterval;

        Console.WriteLine(">const float {0},(int){1}", xScaleSizeC, (int)xScaleSizeC);
        Console.WriteLine("> float {0},(int){1}", xScaleSize, (int)xScaleSize);
        Console.ReadLine();
    }
Output:
    >const float 35,(int)34
    > float 35,(int)35
I know that the binary representation of 0.1 is actually 0.09999990463256835937, but why does this happen when using 'const float' rather than 'float'? Is this considered a compiler bug?
For the record, the code compiles to:
    private static void Main(string[] args)
    {
        float xScaleSize = 35f;
        Console.WriteLine(">const float {0},(int){1}", 35f, 34);
        Console.WriteLine("> float {0},(int){1}", xScaleSize, (int)xScaleSize);
        Console.ReadLine();
    }
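For reference, a small illustrative sketch (separate from the program above) that widens the constant to double and prints it with the round-trip "R" format, to show the values actually being worked with:

    using System;

    class InspectValues
    {
        static void Main()
        {
            // Widening 0.1f to double exposes the value the float actually stores,
            // and shows where the division lands before any rounding back to float.
            Console.WriteLine(((double)0.1f).ToString("R"));
            Console.WriteLine((3.5 / (double)0.1f).ToString("R"));
        }
    }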
Answer
The "why" of this basically boils down to the fact that, when working with floating-point data, an internal representation can be used that has more precision than is specified for float or double. This is spelled out explicitly in the Virtual Execution System (VES) spec (Partition I, section 12):

"...floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either float32 or float64, but its value can be represented internally with additional range and/or precision."
And then we have:
"The use of an internal representation that is wider than float32 or float64 can cause differences in computational results when a developer makes seemingly unrelated modifications to their code, the result of which can be that a value is spilled from the internal representation (e.g., in a register) to a location on the stack."
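To make the quoted rules concrete, here is a minimal sketch (my own illustration, not code from the answer) of what happens when the division is carried out at the wider double precision and only afterwards rounded to float: the wide intermediate sits just below 35, so truncating it yields 34, while the value rounded to float is exactly 35.

    using System;

    class ExtendedPrecisionDemo
    {
        static void Main()
        {
            // 0.1 is not exactly representable; as a float it is slightly above 0.1,
            // so dividing 3.5 by it at double precision lands just below 35.
            double wide = 3.5 / (double)0.1f;   // roughly 34.9999995
            float narrow = (float)wide;         // rounds to exactly 35

            Console.WriteLine("double intermediate: {0:R} -> (int) {1}", wide, (int)wide);     // (int) 34
            Console.WriteLine("rounded to float:    {0:R} -> (int) {1}", narrow, (int)narrow); // (int) 35
        }
    }

Whether such a wide intermediate is kept around or rounded away is exactly the freedom the VES text grants, which is why the const and non-const paths can legitimately disagree.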
Now, according to the C# language specification:
"The compile-time evaluation of constant expressions uses the same rules as run-time evaluation of non-constant expressions, except that where run-time evaluation would have thrown an exception, compile-time evaluation causes a compile-time error to occur."
But as we saw above, the rules actually allow a higher precision to be used at times, and when that extra precision is applied is not really under our direct control.
And obviously, in a different situation the result could have been exactly the opposite of what you observed: the compiler could have dropped to the lower precision while the runtime kept the higher precision.
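If the goal is simply to get 35 regardless of which precision happens to be in play, one common workaround (my suggestion, not something this answer prescribes) is to round to the nearest integer instead of truncating:

    using System;

    class RoundingWorkaround
    {
        static void Main()
        {
            const float xScaleStart = 0.5f;
            const float xScaleStop = 4.0f;
            const float xScaleInterval = 0.1f;

            // Round instead of truncate, so a result that is "almost 35"
            // at any precision still ends up as 35.
            float xScaleSize = (xScaleStop - xScaleStart) / xScaleInterval;
            int steps = (int)Math.Round(xScaleSize);

            Console.WriteLine(steps);   // 35
        }
    }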