Abstract
In this article, we introduce a new technique for precision tuning. This problem consists of finding the smallest data types for the numerical values of a program such that the result of the computation satisfies some accuracy requirement. State-of-the-art techniques for precision tuning use a trial-and-error approach: they change the data types of some variables of the program, evaluate the accuracy of the result and, depending on what is obtained, change more or fewer data types and repeat the process. Our technique is radically different. Based on semantic equations, we generate an Integer Linear Programming (ILP) problem from the program source code. Basically, this is done by reasoning on the most significant bit and the number of significant bits of the values, both of which are integer quantities. The integer solution to this problem, computed in polynomial time by a classical linear programming solver, gives the optimal data types at the bit level. We also propose a finer set of semantic equations which does not reduce directly to an ILP problem, so we use policy iteration to find the solution. Both techniques have been implemented, and we show that our results encompass those of state-of-the-art tools.
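To give the flavor of the ILP formulation, here is a toy sketch in Python for a single addition \(z=x+y\). The msb values and the operand constraints below are hypothetical simplifications for illustration, not the paper's actual semantic equations; the exhaustive search merely stands in for the ILP solver.

```python
# Toy bit-level tuning instance for a single addition z = x + y.
# msb values are hypothetical outputs of a prior range analysis, and the
# constraints are a simplified stand-in for the paper's semantic equations.
from itertools import product

msb = {"x": 3, "y": 2, "z": 4}   # assumed most significant bit positions
p = 16                           # accuracy requirement: nsb(z) >= 16 bits

def feasible(nsb):
    # Each operand must carry enough significant bits for the result,
    # shifted by the msb difference, plus one bit absorbing the carry.
    return (nsb["z"] >= p
            and nsb["x"] >= nsb["z"] + msb["x"] - msb["z"] + 1
            and nsb["y"] >= nsb["z"] + msb["y"] - msb["z"] + 1)

# An ILP solver would minimise nsb(x)+nsb(y)+nsb(z) under these integer
# constraints; the instance is tiny, so exhaustive search suffices here.
candidates = (dict(zip("xyz", v)) for v in product(range(33), repeat=3))
best = min((n for n in candidates if feasible(n)),
           key=lambda n: n["x"] + n["y"] + n["z"])
```

On this made-up instance the minimum assigns 16, 15 and 16 significant bits to x, y and z respectively: each value gets exactly the fewest bits meeting the requirement under the assumed constraints.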
A Appendix
We need a lemma stating that certain algebraic operations are stable on the set of functions that can be written as the min-max of a finite family of affine functions. These functions are defined on \(\mathbb R^d\).
Lemma 1
The following statements hold:
- The sum of two min-maxes of finite families of affine functions is again the min-max of a finite family of affine functions.
- The maximum of two min-maxes of finite families of affine functions is again the min-max of a finite family of affine functions.
Proof
Let g and h be two min-maxes of finite families of affine functions and let \(f=g+h\). We have \(g=\min _i\max _j g^{ij}\) and \(h=\min _k\max _l h^{kl}\). Let \(x\in \mathbb R^d\). There exist i, k such that \(f(x)=\max _j g^{ij}(x)+\max _l h^{kl}(x)=\max _{j,l}\,(g^{ij}(x)+h^{kl}(x))\). Moreover, for all i, k, \(f(x)\le \max _j g^{ij}(x)+\max _l h^{kl}(x)=\max _{j,l}\,(g^{ij}(x)+h^{kl}(x))\). We conclude that \(f(x)=\min _{i,k}\max _{j,l}\,(g^{ij}(x)+h^{kl}(x))\) for all x. The same argument applies to the maximum. \(\Box \)
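The combined family in this proof can be checked numerically. The sketch below (with two arbitrary illustrative families on \(\mathbb R\), not taken from the paper) verifies that \(g+h\) equals the min-max built from the pairwise sums of affine pieces.

```python
# Check of Lemma 1 on R: the sum of two min-max-affine functions is the
# min-max of the pairwise sums of their affine pieces. The two families
# G and H below are arbitrary illustrative choices.
import random

G = [[(1, 0), (-1, 2)], [(2, -1)]]   # g = min_i max_j of pieces a*x + b
H = [[(0, 1)], [(3, -2), (-2, 4)]]   # h = min_k max_l of pieces

def aff(p, x):
    return p[0] * x + p[1]

def minmax(fam, x):
    return min(max(aff(p, x) for p in row) for row in fam)

# Combined family: one row per pair (i, k), pieces g_ij + h_kl over (j, l)
F = [[(gp[0] + hp[0], gp[1] + hp[1]) for gp in rg for hp in rh]
     for rg in G for rh in H]

random.seed(0)
for _ in range(100):
    x = random.uniform(-10.0, 10.0)
    assert abs(minmax(G, x) + minmax(H, x) - minmax(F, x)) < 1e-9
```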
Proposition 2
The following results hold:
1. Let \(\xi \) be the constant function equal to 1. The system \(S_\xi ^1\) can be rewritten as \(\{\mathsf {nsb}\in \mathbb N^{Lab}\mid F(\mathsf {nsb})\le \mathsf {nsb}\}\) where F maps \(\mathbb R^{Lab}\) to itself, satisfies \(F(\mathbb N^{Lab})\subseteq \mathbb N^{Lab}\) and has coordinates which are the maximum of a finite family of order-preserving affine functions.
2. Let \(\xi \) be the function such that \(\xi (\ell )\) equals the function defined in Fig. 5. The system \(S_\xi \) can be rewritten as \(\{(\mathsf {nsb},\mathsf {nsb_e})\in \mathbb N^{Lab}\times \mathbb N^{Lab}\mid F(\mathsf {nsb},\mathsf {nsb_e})\le (\mathsf {nsb},\mathsf {nsb_e}) \}\) where F maps \(\mathbb R^{Lab}\times \mathbb R^{Lab}\) to itself, satisfies \(F(\mathbb N^{Lab}\times \mathbb N^{Lab})\subseteq \mathbb N^{Lab}\times \mathbb N^{Lab}\) and all its coordinates are the min-max of a finite family of affine functions.
Proof
We only give details for the system \(S_\xi ^1\) (Fig. 4). The proof proceeds by induction on the rules. We write \(L=\{\ell \in Lab\mid F_\ell \text { is already constructed}\}\); this set is used throughout the proof to construct F inductively.
For the rule (CONST), there is nothing to do. For the rule (ID), if the label \(\ell '=\rho (x)\) belongs to L, then we redefine \(F_{\ell '}(\mathsf {nsb})=\max (F_{\ell '}(\mathsf {nsb}),\mathsf {nsb}(\ell ))\); otherwise we set \(F_{\ell '}(\mathsf {nsb})=\mathsf {nsb}(\ell )\). As \(\mathsf {nsb}\mapsto \mathsf {nsb}(\ell )\) is an order-preserving affine function and \(\max \) preserves order-preservation, \(F_{\ell '}\) is the maximum of a finite family of order-preserving affine functions.
For the rules (Add), (Sub), (Mult), (Div), (Math) and (Assign), by induction, it suffices to focus on the new set of inequalities. If \(\ell _1\in L\), we define \(F_{\ell _1}\) as the maximum of its old definition and \(RHS(\mathsf {nsb})\), i.e., \(F_{\ell _1}(\mathsf {nsb})=\max (RHS(\mathsf {nsb}),F_{\ell _1}(\mathsf {nsb}))\), where \(RHS(\mathsf {nsb})\) is the right-hand side of the new inequality. If \(\ell _1\notin L\), we define \(F_{\ell _1}(\mathsf {nsb})=RHS(\mathsf {nsb})\). In all these rules, \(RHS(\mathsf {nsb})\) is an order-preserving affine function. It follows that \(F_{\ell _1}\) is the maximum of a finite family of order-preserving affine functions.
The result follows by induction for the rule (SEQ).
The rules (Cond) and (While) are treated as the rules (Add), (Sub), (Mult), (Div), (Math) and (Assign), by induction and the consideration of the new set of inequalities.
The last rule (Req) constructs \(F_{\rho (x)}\) either as the constant function equal to p at label \(\rho (x)\) or, if \(\rho (x)\in L\), as the maximum of the old definition of \(F_{\rho (x)}\) and p. The proof for the system \(S_\xi \) uses the same arguments together with Lemma 1. \(\Box \)
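The inductive construction of F can be mirrored concretely: each inequality contributes one order-preserving affine piece to the coordinate of its target label, and the coordinate is the max of its pieces. A small sketch with invented labels and constraints:

```python
# Mirror of the construction in the proof of Proposition 2: every
# inequality nsb(l') >= sum_m a_m * nsb(m) + b adds one order-preserving
# affine piece to the coordinate F_{l'}, which is the max of its pieces.
# The labels and constraints below are invented for illustration.
constraints = [                 # (target label, {label: coeff}, constant)
    ("l1", {"l0": 1}, 1),       # nsb(l1) >= nsb(l0) + 1
    ("l1", {"l2": 1}, 0),       # nsb(l1) >= nsb(l2)
    ("l0", {}, 11),             # requirement (Req): nsb(l0) >= 11
]
labels = ["l0", "l1", "l2"]

def F(nsb):
    out = dict(nsb)             # labels without pieces keep their value
    pieces = {}
    for tgt, coeffs, b in constraints:
        rhs = sum(a * nsb[m] for m, a in coeffs.items()) + b
        pieces.setdefault(tgt, []).append(rhs)
    for tgt, vals in pieces.items():
        out[tgt] = max(vals)    # coordinate = max of its affine pieces
    return out

# A prefixed point F(nsb) <= nsb is exactly a feasible solution.
sol = {"l0": 11, "l1": 12, "l2": 0}
assert all(F(sol)[l] <= sol[l] for l in labels)
```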
Proposition 3 (Algorithm correctness)
The sequence \((\sum _{\ell \in Lab} \mathsf {nsb}^k(\ell ))_{0\le k\le K}\) generated by Algorithm 1 satisfies the following properties:
1. \(K<+\infty \), i.e. the sequence is of finite length;
2. each term of the sequence furnishes a feasible solution for Problem (11);
3. \(\sum _{\ell \in Lab} \mathsf {nsb}^{k+1}(\ell )<\sum _{\ell \in Lab} \mathsf {nsb}^k(\ell )\) for all \(k<K-1\), and \(\sum _{\ell \in Lab} \mathsf {nsb}^{K}(\ell )=\sum _{\ell \in Lab} \mathsf {nsb}^{K-1}(\ell )\);
4. the number K of iterations is smaller than the number of policies.
Proof
Let \(\sum _{\ell \in Lab} \mathsf {nsb}^k(\ell )\) be a term of the sequence and let \((\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) be the optimal solution of \(\text {Min}\{\sum _{\ell \in Lab} \mathsf {nsb}(\ell )\mid f^{\pi ^k}(\mathsf {nsb},\mathsf {nsb_e})\le (\mathsf {nsb},\mathsf {nsb_e}),\ \mathsf {nsb}\in \mathbb N^{Lab},\ \mathsf {nsb_e}\in \mathbb N^{Lab}\}\). Then \(F(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\le f^{\pi ^k}(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\le (\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) by definition of F, which proves the second statement. Moreover, \(F(\mathsf {nsb}^k,\mathsf {nsb_e}^k)=f^{\pi ^{k+1}}(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\), so \(f^{\pi ^{k+1}}(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\le (\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) and \((\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) is feasible for the minimization problem of which \((\mathsf {nsb}^{k+1},\mathsf {nsb_e}^{k+1})\) is an optimal solution. We conclude that \(\sum _{\ell \in Lab} \mathsf {nsb}^{k+1}(\ell )\le \sum _{\ell \in Lab} \mathsf {nsb}^k(\ell )\): the algorithm terminates if equality holds and continues otherwise, as the criterion strictly decreases. Finally, by this strict decrease, a policy cannot be selected twice without the algorithm terminating. In conclusion, the number of iterations is smaller than the number of policies.\(\Box \)
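The scheme of Algorithm 1 can be sketched on a one-variable min-max system (the policies and constants below are made up, not taken from the paper): fix the argument of the min, solve the resulting subproblem exactly, re-select the policy at the new point, and stop when the objective no longer decreases.

```python
# Policy-iteration sketch in the spirit of Algorithm 1, on a made-up
# one-variable system F(x) = min(10, max(3, x - 2)): fix a policy (one
# argument of the min), solve the max-affine subproblem, re-select.
policies = [
    lambda x: 10,                 # policy A: constant map
    lambda x: max(3, x - 2),      # policy B: max of affine maps
]

def solve_policy(f):
    # smallest natural x with f(x) <= x: the subproblem an (I)LP solves
    return next(x for x in range(1000) if f(x) <= x)

x = solve_policy(policies[0])     # start from an arbitrary policy
trace = [x]
while True:
    # improvement step: select the policy attaining the min at x
    best = min(policies, key=lambda f: f(x))
    nxt = solve_policy(best)
    if nxt >= x:
        break                     # objective stopped decreasing: done
    x = nxt
    trace.append(x)
```

Here the objective strictly decreases (10, then 3) and each policy is used at most once before termination, as in the proof above.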
Proposition 4
Let \(G:[0,+\infty )^d\rightarrow [0,+\infty )^d\) be an order-preserving function such that \(G(\mathbb N^d)\subseteq \mathbb N^d\), and suppose that the set \(\{y\in \mathbb N^d\mid G(y)\le y\}\) is non-empty. Let \(\varphi :\mathbb R^d\rightarrow \mathbb R\) be a strictly monotone function such that \(\varphi (\mathbb N^d)\subseteq \mathbb N\). Then the minimization problem below has a unique optimal solution, which is integral.
Proof
Let \(L:=\{x\in [0,+\infty )^d\mid G(x)\le x\}\) and \(u=\inf L\). It suffices to prove that \(u\in \mathbb N^d\). Indeed, as \(\varphi \) is strictly monotone, \(\varphi (u)<\varphi (x)\) for all \(x\in [0,+\infty )^d\) such that \(G(x)\le x\) and \(x\ne u\); the optimal solution is thus u. If \(u=0\), the result holds. Now suppose that \(0<u\); note that \(0\le G(0)\). Let \(M:=\{y\in \mathbb N^d\mid y\le G(y),\ y\le u\}\). Then \(0\in M\) and we write \(v:=\sup M\). As M is a complete lattice such that \(G(M)\subseteq M\), Tarski's theorem shows that v satisfies \(G(v)=v\) and \(v\le u\); moreover \(v\in \mathbb N^d\). Again from Tarski's theorem, u is the smallest fixpoint of G, so \(u\le v\) and u coincides with v. Then \(u\in \mathbb N^d\).\(\Box \)
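The integrality phenomenon of Proposition 4 can be observed by computing the least fixpoint of a monotone, integer-preserving map by Kleene iteration from 0. The map below is a made-up example, not one of the paper's constraint systems.

```python
# Least fixpoint of an order-preserving G with G(N^d) contained in N^d,
# computed by Kleene iteration from the bottom element 0; the least
# solution of G(u) <= u comes out integral. G is a made-up monotone map.
def G(v):
    x, y = v
    return (min(7, y + 2), min(9, x + 1))

u = (0, 0)                  # bottom of [0, +inf)^2
while G(u) != u:            # the Kleene sequence 0, G(0), G(G(0)), ...
    u = G(u)                # is increasing because G is order-preserving
```

The iteration stops at an integral point, which is both the least fixpoint and the optimal solution of the minimization problem in the proposition.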
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Adjé, A., Ben Khalifa, D., Martel, M. (2021). Fast and Efficient Bit-Level Precision Tuning. In: Drăgoi, C., Mukherjee, S., Namjoshi, K. (eds) Static Analysis. SAS 2021. Lecture Notes in Computer Science(), vol 12913. Springer, Cham. https://doi.org/10.1007/978-3-030-88806-0_1
Print ISBN: 978-3-030-88805-3
Online ISBN: 978-3-030-88806-0