Interval Arithmetic in CUDA
Abstract
Every now and then, headlines question the confidence we can place in results obtained from computers. In the past, failures to deliver an accurate result were caused by exceptions (division by zero, for example [6]) or by situations now characterized as catastrophic cancellation [1], ill-posed, or ill-conditioned problems [7]. The peak performance of GPUs is typically counted in teraflops, and they are used in peta-scale systems. Most applications running on such systems can be considered ill-conditioned, and no catastrophic or exceptional event can be blamed when computed results fall too far from their target. One solution is to spend some of the impressive computing power offered by GPUs on computing a validated range for the results instead of the uncontrolled estimate delivered by regular floating-point arithmetic. Interval arithmetic (IA) accounts for uncertainties in data at the operator level and returns reliable bounds that include the correct result of an expression [8]. The basic principle of IA is to replace approximate floating-point numbers with intervals that enclose the exact results. Expressions and programs are then evaluated on intervals instead of points.
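To make this principle concrete, the following sketch shows how intervals can be represented and evaluated in a CUDA kernel using the directed-rounding intrinsics (__fadd_rd, __fadd_ru, __fmul_rd, __fmul_ru). The interval struct, the add and mul helpers, and the example kernel are assumptions introduced for illustration only, not the implementation described in this chapter.

// Illustrative sketch: a value x is represented by [lo, hi] with lo <= x <= hi,
// and each bound is rounded outward so the enclosure is never lost.
struct interval {
    float lo;  // lower bound, rounded toward -infinity
    float hi;  // upper bound, rounded toward +infinity
};

// Addition: round the lower bound down and the upper bound up so the result
// encloses every possible sum of points taken from the operands.
__device__ interval add(interval a, interval b) {
    interval r;
    r.lo = __fadd_rd(a.lo, b.lo);  // round toward -infinity
    r.hi = __fadd_ru(a.hi, b.hi);  // round toward +infinity
    return r;
}

// Multiplication: with unknown signs, take the min/max of the four
// directed-rounded products of the endpoints.
__device__ interval mul(interval a, interval b) {
    interval r;
    r.lo = fminf(fminf(__fmul_rd(a.lo, b.lo), __fmul_rd(a.lo, b.hi)),
                 fminf(__fmul_rd(a.hi, b.lo), __fmul_rd(a.hi, b.hi)));
    r.hi = fmaxf(fmaxf(__fmul_ru(a.lo, b.lo), __fmul_ru(a.lo, b.hi)),
                 fmaxf(__fmul_ru(a.hi, b.lo), __fmul_ru(a.hi, b.hi)));
    return r;
}

// Evaluating an expression on intervals instead of points:
// each thread computes a guaranteed enclosure of x*x + y.
__global__ void eval(const interval* x, const interval* y, interval* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = add(mul(x[i], x[i]), y[i]);
    }
}

Each floating-point operation in the kernel is replaced by its interval counterpart, so the result [lo, hi] is a validated range rather than a single uncontrolled estimate.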