Abstract
We study several theoretical aspects of both the 2D and the 3D multi-view geometry of calibrated cameras in the setting where all they can reliably recognize is each other. Starting with minimal reconstructable configurations, we propose a method for obtaining the position–orientation structure of such camera ensembles, up to a global similarity. In the 3D setting we base our analysis on Rodrigues’ vector techniques familiar from mechanics and robotics. We also examine the average number of visible cameras and discuss some kinematic aspects of the problem.
Notes
The focal distance may be set to one.
with the exception of the case \(\mathbf{u}=-\mathbf{v}\), in which \(\mathbf{c}\) is infinite in magnitude and oriented arbitrarily in the plane \(\mathbf{u}^\perp \).
References
Arun, K.S., Huang, T.S., Blostein, S.D.: Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 9(5), 698–700 (1987)
Aspnes, J., Eren, T., Goldenberg, D.K., Morse, A.S., Whiteley, W., Yang, Y.R., Anderson, B.D.O., Belhumeur, P.N.: A theory of network localization. IEEE Trans. Mob. Comput. 5(12), 1663–1678 (2006)
Brezov, D.S.: Projective bivector parametrization of isometries in low dimensions. In: Proceedings of the Nineteenth International Conference on Geometry, Integrability and Quantization. pp. 91–104. Avangard Prima, Sofia, Bulgaria (2018). https://doi.org/10.7546/giq-19-2018-91-104
Brezov, D.S., Mladenova, C.D., Mladenov, I.M.: From the kinematics of precession motion to generalized Rabi cycles. Adv. Math. Phys. 2018 (2018)
Cao, M.W., Jia, W., Zhao, Y., Li, S.J., Liu, X.P.: Fast and robust absolute camera pose estimation with known focal length. Neural Comput. Appl. 29(5), 1383–1398 (2018). https://doi.org/10.1007/s00521-017-3032-6
En, S., Lechervy, A., Jurie, F.: RPNet: an end-to-end network for relative camera pose estimation. In: Proceedings of the European Conference on Computer Vision (ECCV) (2018)
Eren, T., Goldenberg, O., Whiteley, W., Yang, Y.R., Morse, A.S., Anderson, B.D., Belhumeur, P.N.: Rigidity, computation, and randomization in network localization. In: IEEE INFOCOM 2004. vol. 4, pp. 2673–2684. IEEE (2004)
Faugeras, O.D., Hebert, M.: The representation, recognition, and locating of 3-D objects. Int. J. Rob. Res. 5(3), 27–52 (1986). https://doi.org/10.1177/027836498600500302
Halperin, T., Werman, M.: An epipolar line from a single pixel. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 983–991 (2018)
Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York (2003)
Kasten, Y., Werman, M.: Two view constraints on the epipoles from few correspondences. In: 2018 25th IEEE International Conference on Image Processing (ICIP). pp. 888–892 (Oct 2018). https://doi.org/10.1109/ICIP.2018.8451727
Kenmogne, I.F., Drevelle, V., Marchand, E.: Cooperative localization of drones by using interval methods. Acta Cybernetica 24(3), 557–572 (Mar 2020). https://doi.org/10.14232/actacyb.24.3.2020.15. https://cyber.bibl.u-szeged.hu/index.php/actcybern/article/view/4059
Levi, N., Werman, M.: The viewing graph. In: 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), vol. 1 (2003)
Li, H.C.: Average length of chords drawn from a point to a circle. Pi Mu Epsil. J. 8(3), 146–150 (1985)
Pasquetti, M., Michieletto, G., Zhao, S., Zelazo, D., Cenedese, A.: A unified dissertation on bearing rigidity theory. arXiv preprint arXiv:1902.03101 (2019)
Piña, E.: A new parametrization of the rotation matrix. Am. J. Phys. 51(4), 375–379 (1983). https://doi.org/10.1119/1.13253
Sato, J.: Recovering epipolar geometry from mutual projections of multiple cameras. Int. J. Comput. Vis. 66(2), 123–140 (2006)
Smale, S.: Mathematical problems for the next century. Math. Intell. 20(2), 7–15 (1998)
Taylor, C.J., Spletzer, J.: A bounded uncertainty approach to cooperative localization using relative bearing constraints. In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 2500–2506. IEEE (2007)
Zelazo, D., Zhao, S.: Formation control and rigidity theory 17, 1–16 (2019)
Zhang, F., Kumar, V., Pereira, G.A.: Necessary and sufficient conditions for localization of multiple robot platforms. In: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, vol. 46954, pp. 13–21 (2004)
Zhao, S., Zelazo, D.: Bearing rigidity theory and its applications for control and estimation of network systems: life beyond distance rigidity. IEEE Control Syst. Mag. 39(2), 66–83 (2019)
This article is part of the Topical Collection on Proceedings ICCA 12, Hefei, 2020, edited by Guangbin Ren, Uwe Kähler, Rafal Ablamowicz, Fabrizio Colombo, Pierre Dechant, Jacques Helmstetter, G. Stacey Staples, Wei Wang.
This research was supported by the DFG.
Appendix: How Many Cameras does Each Camera See on Average?
Here we treat the number of sightings of other cameras within a sphere as a function of the distance to the center, the orientation and the field of view. In the two-dimensional case we consider a point \(z_0\) in a circle of radius \(r\) at distance \(d\in [0,r]\) from its center. It is convenient to introduce polar coordinates \(\rho \in [0,r]\), \(\theta \in [0,2\pi )\) choosing \(z_0\) as the origin. The circle’s boundary is then viewed from the perspective of \(z_0\) as a curve with polar equation (see [14] for a derivation)
and it is straightforward to obtain the area of the disc segment sliced by the viewing angle \(\theta \in [a,b]\) at \(z_0\) as
Choosing a polar orientation \(\phi \in [0,2\pi )\) for the camera at \(z_0\) and denoting the width of the field of view by \(\mathrm{FOV} = 2\delta \), one obtains the above integral as a function of the parameters \(\phi \) and \(d\), or more conveniently \(\epsilon = d/r\) (keeping \(r\) and \(\delta \) fixed), namely
The first two terms are trivial, while for the third one we use integration by parts, finally arriving at
e.g., on the boundary of the unit circle \(\epsilon = r = 1\) one has
while for an arbitrarily placed camera pointing towards the center (note that we always assume \(\delta \in [0,\pi ]\))
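As a quick numerical sanity check on the formulas above (the displayed equations are omitted here, so we assume the standard polar equation \(\rho (\theta ) = \sqrt{r^2 - d^2\sin ^2\theta } - d\cos \theta \) for the boundary seen from \(z_0\), with \(\theta \) measured from the outward radial direction; this convention is an assumption), integrating \(\tfrac{1}{2}\rho ^2\) over the full angle must recover the disc area \(\pi r^2\) for every \(\epsilon \), and a camera at the center must see exactly the fraction \(\delta /\pi \) of the disc:

```python
import math

def rho(theta, eps, r=1.0):
    # Distance from z0 (at distance d = eps*r from the center) to the
    # circle's boundary in direction theta, with theta = 0 pointing
    # radially outward from the center (assumed convention).
    d = eps * r
    return math.sqrt(r * r - (d * math.sin(theta)) ** 2) - d * math.cos(theta)

def segment_area(a, b, eps, r=1.0, n=20_000):
    # Polar area formula A = (1/2) * int_a^b rho(theta)^2 dtheta,
    # evaluated with a midpoint Riemann sum.
    h = (b - a) / n
    return 0.5 * h * sum(rho(a + (k + 0.5) * h, eps, r) ** 2 for k in range(n))

# Full viewing angle: the whole disc is swept, so A = pi * r^2 for any eps.
for eps in (0.0, 0.3, 0.7, 0.99):
    print(eps, segment_area(0.0, 2 * math.pi, eps))  # all ≈ pi

# Camera at the center with FOV = 2*delta: the visible fraction is delta/pi.
delta = math.pi / 4
print(segment_area(-delta, delta, 0.0) / math.pi)  # ≈ delta/pi = 0.25
```

The midpoint rule suffices here because the integrand is smooth and periodic; the check at the center is exact since \(\rho \equiv r\) there.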
Using formula (38) one obtains an estimate for the geometric probability
that in the case of the uniform distribution corresponds to the relative number of agents seen by the camera at \(z_0\). By rotational symmetry we may restrict the parameters to the ranges chosen above, \(\epsilon \in [0, 1]\) and \(\phi \in [0,2\pi )\); integrating \(P(\phi ,\epsilon )\) over \(\phi \) and dividing by \(2\pi \) then yields the geometric probability and thus the average number of cameras seen by an arbitrary camera
where we use the periodicity in \(\phi \) of the trigonometric terms in (38) and obtain the same result as in (21). Note that since the nonzero contribution to (39) does not depend explicitly on \(\epsilon \), the above relation holds for any point in the circle. This result may also be derived from the symmetries of circles and spheres, which is preferable especially in the 3D case, where explicit calculations are nontrivial except for a camera positioned at the center: there the observed domain becomes a spherical sector whose volume is one third of the radius times the area cut out on the sphere by the solid viewing angle
For an arbitrary position and orientation, however, averaging factors out the complexity, just as in the planar case (or in the physical setting of a charged spherical shell), and formula (27) still holds as if the camera were at the center.
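Both averaging claims can be checked by simulation. The sketch below (a plain-Python Monte Carlo; the conical field of view of half-angle \(\delta \) in 3D is our assumption, since the text does not fix the aperture shape) verifies that the \(\phi \)-averaged visible fraction of the disc is \(\delta /\pi \) independently of \(\epsilon \), and that a camera at the center of a unit ball sees the fraction \(\Omega /4\pi \) with \(\Omega = 2\pi (1-\cos \delta )\):

```python
import math
import random

def disc_fraction(eps, phi, delta, samples=40_000, seed=0):
    # Fraction of a unit disc visible to a camera at distance eps from
    # the center, oriented at polar angle phi, with half-FOV delta.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        r = math.sqrt(rng.random())            # uniform point in the disc
        t = rng.uniform(0.0, 2 * math.pi)
        bearing = math.atan2(r * math.sin(t), r * math.cos(t) - eps)
        diff = (bearing - phi + math.pi) % (2 * math.pi) - math.pi
        hits += abs(diff) <= delta
    return hits / samples

def ball_center_fraction(delta, samples=100_000, seed=1):
    # Fraction of a unit ball visible from its center through a cone
    # of half-angle delta about the +z axis.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        while True:                            # uniform point in the ball
            x, y, z = (rng.uniform(-1, 1) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                break
        hits += z >= math.cos(delta) * math.sqrt(x * x + y * y + z * z)
    return hits / samples

delta = math.pi / 4
for eps in (0.0, 0.5, 0.9):
    # Averaging over orientations washes out the eps-dependence: ≈ delta/pi.
    avg = sum(disc_fraction(eps, 2 * math.pi * j / 16, delta, seed=j)
              for j in range(16)) / 16
    print(eps, avg)                            # all ≈ 0.25

omega = 2 * math.pi * (1 - math.cos(delta))    # solid angle of the cone
print(ball_center_fraction(delta), omega / (4 * math.pi))
```

The 2D check mirrors the symmetry argument directly: each disc point is visible for a set of orientations \(\phi \) of measure \(2\delta \), so the orientation average of the visible area is \(\delta /\pi \) of the disc, whatever \(\epsilon \) is.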
Brezov, D., Werman, M. Cameras Seeing Cameras Geometry. Adv. Appl. Clifford Algebras 32, 30 (2022). https://doi.org/10.1007/s00006-022-01211-5