Underwater natural gas pipelines constitute critical infrastructure for energy transportation. Any damage or leakage in these pipelines poses serious security risks, directly threatening marine and lake ecosystems and potentially causing operational disruptions and economic losses in the energy supply chain. Because deterioration is difficult to detect over time and regular inspection of submerged pipelines by divers is impractical, unmanned underwater vehicles (UUVs) are crucial in this field. In this study, an underwater pipeline tracking experiment was carried out by adding autonomous features to a remotely operated unmanned underwater vehicle. While tracking the underwater pipeline, damages were identified and their locations were determined. The vehicle's navigation information was obtained from the following integrated sensors: orientation about the x, y, and z axes (roll, pitch, yaw) from a gyroscope combined with a magnetic compass; speed and position along the three axes from an accelerometer; and distance to the water surface from a pressure sensor. Pre-tests determined the pulse width modulation (PWM) values required for the vehicle's thrusters, and supplying these values to the thruster motors enabled autonomous operation. Three-dimensional motion was achieved by activating the vertical thruster to maintain a specific depth, applying equal force to the right and left thrusters for forward movement, and applying differential force to induce deviation angles. In pool experiments, the unmanned underwater vehicle autonomously tracked the pipeline as intended and identified damages on the pipeline from images captured by its camera. The images were processed for damage assessment using a convolutional neural network (CNN), a deep learning method.
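The depth-hold and differential-thrust scheme described above can be sketched as follows. This is a minimal illustration only: the PWM values, gains, and ESC input range are hypothetical placeholders, not the values determined in the study's pre-tests.

```python
# Hypothetical thruster command sketch for the control scheme described
# above: equal left/right PWM drives the vehicle straight, a differential
# term induces a deviation angle, and the vertical thruster holds depth.

PWM_NEUTRAL = 1500.0  # assumed neutral pulse width (microseconds)
PWM_FORWARD = 1600.0  # assumed pulse width for steady forward thrust


def thruster_commands(depth_error, yaw_error, k_depth=50.0, k_yaw=40.0):
    """Return (left, right, vertical) PWM commands.

    depth_error and yaw_error are the differences between the desired
    and measured depth (m) and heading (rad); the gains are illustrative.
    """
    differential = k_yaw * yaw_error
    left = PWM_FORWARD + differential
    right = PWM_FORWARD - differential
    vertical = PWM_NEUTRAL + k_depth * depth_error

    # Clamp to a typical ESC input range (assumed 1100-1900 us).
    def clamp(v):
        return max(1100.0, min(1900.0, v))

    return clamp(left), clamp(right), clamp(vertical)
```

With zero errors, both horizontal thrusters receive the same forward PWM and the vertical thruster stays neutral; a nonzero yaw error shifts thrust from one side to the other, producing the deviation angle.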
The position of the damage relative to the vehicle was estimated from the pixel dimensions of the identified damage. The location of the damage relative to the starting point was then obtained by combining this estimate with the positional information from the vehicle's navigation system. The entire study was carried out in the Python environment.
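The pixel-based localization step can be sketched with a pinhole camera model: an object of known real-world width appears smaller in pixels the farther it is from the camera. The focal length, the assumed damage width, and the detected pixel width below are hypothetical values, not the study's calibration; the combination with the navigation position is likewise simplified to a planar offset along the heading.

```python
# Sketch of estimating the damage distance from its pixel width
# (pinhole camera model) and combining it with the vehicle's
# navigation position. All numeric parameters are illustrative.

import math


def estimate_distance(pixel_width, real_width_m=0.10, focal_px=800.0):
    """Distance (m) to an object of assumed real width, from its pixel width."""
    return focal_px * real_width_m / pixel_width


def damage_global_position(vehicle_xy, heading_rad, pixel_width):
    """Offset the vehicle's navigation position by the camera-based range
    along the current heading to get the damage position."""
    d = estimate_distance(pixel_width)
    return (vehicle_xy[0] + d * math.cos(heading_rad),
            vehicle_xy[1] + d * math.sin(heading_rad))
```

For example, with the assumed 800 px focal length and a 0.10 m damage width, a detection 80 px wide corresponds to a range of 1.0 m ahead of the vehicle.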