
Fast and Efficient Bit-Level Precision Tuning

  • Conference paper

Static Analysis (SAS 2021)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 12913)

Abstract

In this article, we introduce a new technique for precision tuning. The problem consists of finding the smallest data types for the numerical values of a program such that the result of the computation satisfies some accuracy requirement. State-of-the-art techniques for precision tuning use a trial-and-error approach: they change the data types of some variables of the program, evaluate the accuracy of the result and, depending on what is obtained, change more or fewer data types and repeat the process. Our technique is radically different. Based on semantic equations, we generate an Integer Linear Programming (ILP) problem from the program source code. Basically, this is done by reasoning on the most significant bit and the number of significant bits of the values, which are integer quantities. The integer solution to this problem, computed in polynomial time by a classical linear programming solver, gives the optimal data types at the bit level. A finer set of semantic equations is also proposed, which does not reduce directly to an ILP problem; we therefore use policy iteration to find the solution. Both techniques have been implemented, and we show that our results encompass the results of state-of-the-art tools.
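The core idea, encoding accuracy requirements as integer constraints on numbers of significant bits (nsb) and computing their least solution, can be illustrated with a small sketch. Everything below is an illustrative assumption, not the paper's actual semantic equations: the constraint shape `nsb[a] >= nsb[b] + c`, the "+1" carry bit, and the function `least_solution` are all toy choices.

```python
# Toy precision-tuning constraints (illustrative, not the paper's system):
# (a, b, c) means nsb[a] >= nsb[b] + c, i.e. "operand a needs at least c
# more significant bits than result b". Because the right-hand sides are
# order-preserving, the least solution exists and is reached by iterating
# to a fixpoint from the accuracy requirements (assuming the constraint
# graph has no positive cycle, so the iteration terminates).

def least_solution(num_vars, constraints, lower_bounds):
    nsb = [0] * num_vars
    for v, p in lower_bounds.items():      # accuracy requirements nsb(x) >= p
        nsb[v] = p
    changed = True
    while changed:
        changed = False
        for a, b, c in constraints:
            need = nsb[b] + c
            if nsb[a] < need:
                nsb[a] = need
                changed = True
    return nsb

# z = x + y required at 16 significant bits; suppose the carry analysis
# demands one extra bit on each operand (the "+1" is an assumed rule).
constraints = [(0, 2, 1), (1, 2, 1)]       # x, y each need nsb(z) + 1
result = least_solution(3, constraints, {2: 16})
print(result)  # [17, 17, 16]
```

The least solution minimizes the sum of all nsb values simultaneously, which is why, for constraint systems of this monotone shape, the ILP optimum can be found in polynomial time.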



Author information

Correspondence to Dorra Ben Khalifa.

A Appendix

We need a lemma stating that certain algebraic operations are stable on the set of functions expressible as the min-max of a finite family of affine functions. The functions are defined on \(\mathbb R^d\).

Lemma 1

The following statements hold:

  • The sum of two min-maxes of finite families of affine functions is a min-max of a finite family of affine functions.

  • The maximum of two min-maxes of finite families of affine functions is a min-max of a finite family of affine functions.

Proof

Let g and h be two min-maxes of finite families of affine functions and \(f=g+h\). Write \(g=\min _i\max _j g^{ij}\) and \(h=\min _k\max _l h^{kl}\). Let \(x\in \mathbb R^d\). There exist i, k such that \(f(x)\ge \max _j g^{ij}(x)+\max _l h^{kl}(x)=\max _{j,l}\, g^{ij}(x)+h^{kl}(x)\). Conversely, for all i, k, \(f(x)\le \max _j g^{ij}(x)+\max _l h^{kl}(x)=\max _{j,l}\, g^{ij}(x)+h^{kl}(x)\). We conclude that \(f(x)=\min _{i,k}\max _{j,l}\, g^{ij}(x)+h^{kl}(x)\) for all x. The same argument applies to the maximum. \(\Box \)
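The proof is constructive: the combined index sets \((i,k)\) and \((j,l)\) give explicit representations of the sum and maximum. A one-dimensional numerical check can make this concrete (the functions g and h below are arbitrary examples of mine, not taken from the paper):

```python
# A "min-max" function in one variable is a list of groups; each group is a
# list of affine maps x -> a*x + b; the value is the min over groups of the
# max within a group. Sum and max stay in the class, with index sets
# combined exactly as in the proof of Lemma 1.

def evaluate(mm, x):
    return min(max(a * x + b for a, b in group) for group in mm)

def mm_sum(g, h):
    # (g + h) = min_{i,k} max_{j,l} (g_ij + h_kl)
    return [[(a1 + a2, b1 + b2) for a1, b1 in gi for a2, b2 in hk]
            for gi in g for hk in h]

def mm_max(g, h):
    # max(g, h) = min_{i,k} max over the union {g_ij} U {h_kl}
    return [gi + hk for gi in g for hk in h]

g = [[(1, 0), (-1, 2)], [(0, 3)]]   # min( max(x, 2 - x), 3 )
h = [[(2, -1)], [(0, 1)]]           # min( 2x - 1, 1 )

# Integer sample points keep the arithmetic exact.
for x in [-2, -1, 0, 1, 2, 4]:
    assert evaluate(mm_sum(g, h), x) == evaluate(g, x) + evaluate(h, x)
    assert evaluate(mm_max(g, h), x) == max(evaluate(g, x), evaluate(h, x))
print("Lemma 1 holds on the sample points")
```

The quadratic blow-up of the index sets is visible here: combining families of sizes \(m\) and \(n\) yields \(mn\) groups, which is why the class is closed but representations grow under repeated combination.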

Proposition 2

The following results hold:

  1. Let \(\xi \) be the constant function equal to 1. The system \(S_\xi ^1\) can be rewritten as \(\{\mathsf {nsb}\in \mathbb N^{Lab}\mid F(\mathsf {nsb})\le \mathsf {nsb}\}\) where F maps \(\mathbb R^{Lab}\) to itself, \(F(\mathbb N^{Lab})\subseteq \mathbb N^{Lab}\), and its coordinates are maxima of finite families of affine order-preserving functions.

  2. Let \(\xi \) be the function such that \(\xi (\ell )\) equals the function defined in Fig. 5. The system \(S_\xi \) can be rewritten as \(\{(\mathsf {nsb},\mathsf {nsb_e})\in \mathbb N^{Lab}\times \mathbb N^{Lab}\mid F(\mathsf {nsb},\mathsf {nsb_e})\le (\mathsf {nsb},\mathsf {nsb_e}) \}\) where F maps \(\mathbb R^{Lab}\times \mathbb R^{Lab}\) to itself, \(F(\mathbb N^{Lab}\times \mathbb N^{Lab})\subseteq \mathbb N^{Lab}\times \mathbb N^{Lab}\), and all its coordinates are min-maxes of finite families of affine functions.

Proof

We only give details for the system \(S_\xi ^1\) (Fig. 4). The proof is by induction on the rules. We write \(L=\{\ell \in Lab\mid F_\ell \text { is already constructed}\}\); this set is used throughout the proof to construct F inductively.

For the rule (CONST), there is nothing to do. For the rule (ID), if the label \(\ell '=\rho (x)\in L\) then we define \(F_{\ell '}(\mathsf {nsb})=\max (F_{\ell '}(\mathsf {nsb}),\mathsf {nsb}(\ell ))\). Otherwise \(F_{\ell '}(\mathsf {nsb})=\mathsf {nsb}(\ell )\). As \(\mathsf {nsb}\mapsto \mathsf {nsb}(\ell )\) is order-preserving and the maximum of one affine function, \(F_{\ell '}\) is the maximum of a finite family of order-preserving affine functions since \(\max \) preserves order-preservation.

For the rules (Add), (Sub), (Mult), (Div), (Math) and (Assign), by induction, it suffices to focus on the new set of inequalities. If \(\ell _1\in L\), we define \(F_{\ell _1}\) as the maximum of its old definition and \(RHS(\mathsf {nsb})\), i.e. \(F_{\ell _1}(\mathsf {nsb})=\max (RHS(\mathsf {nsb}),F_{\ell _1}(\mathsf {nsb}))\), where \(RHS(\mathsf {nsb})\) is the right-hand side of the new inequality. If \(\ell _1\notin L\), we define \(F_{\ell _1}(\mathsf {nsb})=RHS(\mathsf {nsb})\). In these rules, the functions \(RHS(\mathsf {nsb})\) are order-preserving and affine. It follows that \(F_{\ell _1}\) is the maximum of a finite family of order-preserving affine functions.

The result follows by induction for the rule (SEQ).

The rules (Cond) and (While) are treated like the rules (Add), (Sub), (Mult), (Div), (Math) and (Assign), by induction and by considering the new set of inequalities.

The last rule (Req) constructs \(F_{\rho (x)}\) either as the constant function equal to p at label \(\rho (x)\) or the maximum of the old definition of \(F_{\rho (x)}\) and p if \(\rho (x)\in L\). The proof for the system \(S_\xi \) uses the same arguments and Lemma 1. \(\Box \)

Proposition 3 (Algorithm correctness)

The sequence \((\sum _{\ell \in Lab} \mathsf {nsb}^k(\ell ))_{0\le k\le K}\) generated by Algorithm 1 satisfies the following properties:

  1. \(K<+\infty \), i.e. the sequence is of finite length;

  2. each term of the sequence furnishes a feasible solution for Problem (11);

  3. \(\sum _{\ell \in Lab} \mathsf {nsb}^{k+1}(\ell )<\sum _{\ell \in Lab} \mathsf {nsb}^k(\ell )\) if \(k<K-1\), and \(\sum _{\ell \in Lab} \mathsf {nsb}^{K}(\ell )=\sum _{\ell \in Lab} \mathsf {nsb}^{K-1}(\ell )\);

  4. K is smaller than the number of policies.

Proof

Let \(\sum _{\ell \in Lab} \mathsf {nsb}^k(\ell )\) be a term of the sequence and let \((\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) be the optimal solution of \(\text {Min}\{\sum _{\ell \in Lab} \mathsf {nsb}(\ell )\mid f^{\pi ^k}(\mathsf {nsb},\mathsf {nsb_e})\le (\mathsf {nsb},\mathsf {nsb_e}),\ \mathsf {nsb}\in \mathbb N^{Lab},\ \mathsf {nsb_e}\in \mathbb N^{Lab}\}\). Then \(F(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\le f^{\pi ^k}(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) by definition of F. Moreover, \(F(\mathsf {nsb}^k,\mathsf {nsb_e}^k)=f^{\pi ^{k+1}}(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) and \(f^{\pi ^k}(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\le (\mathsf {nsb}^k,\mathsf {nsb_e}^k)\). This proves the second statement. Furthermore, it follows that \(f^{\pi ^{k+1}}(\mathsf {nsb}^k,\mathsf {nsb_e}^k)\le (\mathsf {nsb}^k,\mathsf {nsb_e}^k)\), so \((\mathsf {nsb}^k,\mathsf {nsb_e}^k)\) is feasible for the minimisation problem of which \((\mathsf {nsb}^{k+1},\mathsf {nsb_e}^{k+1})\) is an optimal solution. We conclude that \(\sum _{\ell \in Lab} \mathsf {nsb}^{k+1}(\ell )\le \sum _{\ell \in Lab} \mathsf {nsb}^k(\ell )\): the algorithm terminates if equality holds and continues otherwise, since the criterion strictly decreases. Finally, by the strict decrease, a policy cannot be selected twice without the algorithm terminating. In conclusion, the number of iterations is smaller than the number of policies. \(\Box \)
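The iteration scheme of Algorithm 1 can be sketched on a toy instance. The representation below (affine maps with nonnegative integer coefficients, one max-group per coordinate selected by a policy, a Kleene iteration solving each policy) is my own illustrative assumption, not the paper's actual implementation:

```python
# Policy iteration over a min-max system F(y) <= y (toy sketch). Each
# coordinate of F is a min over "policies" of a max of affine maps with
# nonnegative coefficients; fixing one max-group per coordinate gives an
# order-preserving f^pi whose least post-fixpoint is reached by Kleene
# iteration from zero (assumed to terminate for this toy instance).

def affine(coefs_b, y):
    *coefs, b = coefs_b
    return sum(a * yi for a, yi in zip(coefs, y)) + b

def eval_policy(F, policy, y):
    return tuple(max(affine(m, y) for m in F[i][policy[i]])
                 for i in range(len(F)))

def eval_full(F, y):
    return tuple(min(max(affine(m, y) for m in grp) for grp in F[i])
                 for i in range(len(F)))

def kleene_least(F, policy):
    y = (0,) * len(F)
    while True:
        z = eval_policy(F, policy, y)
        if z == y:              # least y with f^pi(y) <= y
            return y
        y = tuple(max(a, b) for a, b in zip(y, z))

def policy_iteration(F):
    policy = [0] * len(F)       # arbitrary initial policy
    while True:
        y = kleene_least(F, policy)
        if eval_full(F, y) == eval_policy(F, policy, y):
            return y            # F(y) = f^pi(y) <= y: no policy improves
        # improvement: per coordinate, pick a group achieving the min at y
        policy = [min(range(len(F[i])),
                      key=lambda g: max(affine(m, y) for m in F[i][g]))
                  for i in range(len(F))]

# Toy system: nsb0 >= min(max(6, nsb1 + 1), 9) and nsb1 >= 2.
# Affine maps are (a0, a1, b), meaning a0*y0 + a1*y1 + b.
F = [
    [[(0, 0, 9)], [(0, 0, 6), (0, 1, 1)]],   # coordinate 0: two policies
    [[(0, 0, 2)]],                            # coordinate 1: one policy
]
print(policy_iteration(F))  # (6, 2)
```

On this instance the first policy yields the feasible point (9, 2); evaluating F there reveals the cheaper group for coordinate 0, and the improved policy yields (6, 2), where F agrees with the policy and the iteration stops, as the strict-decrease argument of the proof predicts.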

Proposition 4

Let \(G:[0,+\infty )^d\rightarrow [0,+\infty )^d\) be an order-preserving function such that \(G(\mathbb N^d)\subseteq \mathbb N^d\). Suppose that the set \(\{y\in \mathbb N^d\mid G(y)\le y\}\) is non-empty. Let \(\varphi :\mathbb R^d\rightarrow \mathbb R\) be a strictly monotone function such that \(\varphi (\mathbb N^d)\subseteq \mathbb N\). Then the minimization problem below has a unique optimal solution, which is integral.

$$ \displaystyle {\mathop {\text {Min}}_{y\in [0,+\infty )^d}}\ \varphi (y)\ \text { s. t. }G(y)\le y $$

Proof

Let \(L:=\{x\in [0,+\infty )^d\mid G(x)\le x\}\) and \(u=\inf L\). It suffices to prove that \(u\in \mathbb N^d\). Indeed, as \(\varphi \) is strictly monotone, \(\varphi (u)<\varphi (x)\) for all \(x\in [0,+\infty )^d\) such that \(G(x)\le x\) and \(x\ne u\); the optimal solution is thus u. If \(u=0\), the result holds. Now suppose that \(0<u\); then \(0\le G(0)\). Let \(M:=\{y\in \mathbb N^d\mid y\le G(y),\ y\le u\}\). Then \(0\in M\) and we write \(v:=\sup M\). As M is a complete lattice such that \(G(M)\subseteq M\), Tarski's theorem gives \(G(v)=v\) and \(v\le u\). Moreover, \(v\in \mathbb N^d\). Again by Tarski's theorem, u is the smallest fixpoint of G, so it coincides with v. Hence \(u\in \mathbb N^d\). \(\Box \)
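The integrality argument can be exercised numerically. The function G below is an assumed toy example (order-preserving on \(\mathbb N^2\)), not taken from the paper; the sketch checks that the least fixpoint, reached by Kleene iteration from zero, is exactly the sum-minimizing feasible point:

```python
# Proposition 4 on a toy instance: for an order-preserving G mapping N^2 to
# itself, inf{y : G(y) <= y} is the least fixpoint of G, hence integral, and
# any strictly monotone objective (here y0 + y1) attains its minimum there.

def G(y):
    # assumed example: both coordinates are order-preserving in y
    return (max(3, y[1]), (y[0] + 1) // 2)

def least_fixpoint(G):
    y = (0, 0)
    while G(y) != y:    # Kleene iteration from the bottom element
        y = G(y)
    return y

def le(a, b):
    return all(x <= z for x, z in zip(a, b))

u = least_fixpoint(G)

# Brute-force check over a box of integer points: u is feasible and
# minimizes the strictly monotone objective sum(y).
feasible = [(a, b) for a in range(12) for b in range(12)
            if le(G((a, b)), (a, b))]
best = min(feasible, key=sum)
print(u, best)  # (3, 2) (3, 2)
```

The iterates here are (0, 0), (3, 0), (3, 2): each step stays integral because \(G(\mathbb N^d)\subseteq \mathbb N^d\), which is the mechanism behind the proposition's integrality claim.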


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Adjé, A., Ben Khalifa, D., Martel, M. (2021). Fast and Efficient Bit-Level Precision Tuning. In: Drăgoi, C., Mukherjee, S., Namjoshi, K. (eds) Static Analysis. SAS 2021. Lecture Notes in Computer Science, vol 12913. Springer, Cham. https://doi.org/10.1007/978-3-030-88806-0_1

  • DOI: https://doi.org/10.1007/978-3-030-88806-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88805-3

  • Online ISBN: 978-3-030-88806-0
