One of the main purposes of this project has been to introduce a positioning system as a step towards the ultimate goal of a fully autonomous system. This goal has been achieved with the position control, which enables the operator to specify a position to which the ROV then navigates.
The ROV uses multiple sensors to become aware of its position and orientation: an IMU, a camera and two pressure sensors. The video feed from the camera is used to detect QR-like codes in the vicinity, which serve as measurements of the position and attitude of the ROV. All sensor data is processed by the sensor fusion module to reduce noise and improve the system state estimates for the controller.
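One concrete sensor-to-state conversion in this setup is obtaining depth from the pressure sensors via hydrostatics, depth = (p - p_atm) / (rho * g). The sketch below assumes fresh water and simple averaging of the two sensors; the density, the averaging strategy and the function names are illustrative assumptions, not the project's implementation.

```python
RHO = 998.0      # water density in kg/m^3 (fresh water; assumption)
G = 9.81         # gravitational acceleration in m/s^2
P_ATM = 101_325  # atmospheric pressure at the surface in Pa

def depth_from_pressure(pressure_pa: float) -> float:
    """Convert an absolute pressure reading (Pa) to depth below the surface (m)."""
    return (pressure_pa - P_ATM) / (RHO * G)

def fused_depth(p1: float, p2: float) -> float:
    """Average the two pressure sensors before converting, reducing noise."""
    return depth_from_pressure((p1 + p2) / 2.0)
```

In practice the fused depth would be one of the measurements fed to the sensor fusion module rather than used directly.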
Two controllers are implemented in this project: one which controls the angular and linear velocities, and one which controls the attitude and position of the ROV. The velocity controller forms the inner loop of a cascade structure, with the position and attitude controller as the outer loop. This means that the outer loop takes a position and Euler angle reference as input and produces a reference signal for the inner velocity controller as output. Furthermore, a decentralized structure with PI controllers is used in order to control the various inputs independently.
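The cascade structure above can be sketched for a single degree of freedom: an outer PI loop on position produces a velocity reference, which an inner PI loop tracks by commanding thrust. The gains, the toy plant with linear damping and the time step are illustrative assumptions, not the project's tuned values.

```python
class PI:
    """Discrete-time PI controller with forward-Euler integration."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, ref, meas):
        err = ref - meas
        self.integral += err * self.dt
        return self.kp * err + self.ki * self.integral

def simulate(pos_ref=1.0, t_end=150.0, dt=0.01):
    """Simulate one DOF under cascade PI control; returns the final position."""
    outer = PI(kp=0.3, ki=0.02, dt=dt)  # position loop -> velocity reference
    inner = PI(kp=5.0, ki=2.0, dt=dt)   # velocity loop -> thrust command
    pos, vel = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        vel_ref = outer.step(pos_ref, pos)  # outer loop output
        thrust = inner.step(vel_ref, vel)   # inner loop output
        acc = thrust - 0.5 * vel            # toy plant with linear damping
        vel += acc * dt
        pos += vel * dt
    return pos
```

Note the timescale separation: the inner loop is tuned faster than the outer loop, which is what makes the cascade behave predictably.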
Modelling and Simulation
A model describing the system dynamics has been implemented in the sensor fusion module to improve the state estimates in all six degrees of freedom: 3D position (x, y, z), roll angle, pitch angle and yaw angle. The model also includes the derivatives of these states.
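A minimal sketch of the estimation idea is a one-dimensional Kalman filter with a constant-velocity motion model, fusing noisy depth measurements as the pressure sensors would provide them. The noise levels, time step and scalar state are illustrative assumptions; the project's filter covers all six degrees of freedom and their derivatives.

```python
def kalman_depth(measurements, dt=0.1, q=0.001, r=0.25):
    """Estimate depth from noisy readings with a constant-velocity model."""
    z, w = 0.0, 0.0                        # state: depth and depth rate
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # covariance matrix P (2x2)
    estimates = []
    for meas in measurements:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        z += w * dt
        p00 += dt * (p10 + p01) + dt * dt * p11 + q
        p01 += dt * p11
        p10 += dt * p11
        p11 += q
        # Update with the depth measurement, H = [1, 0]
        s = p00 + r
        k0, k1 = p00 / s, p10 / s          # Kalman gain
        innov = meas - z
        z += k0 * innov
        w += k1 * innov
        n00, n01 = (1 - k0) * p00, (1 - k0) * p01
        n10, n11 = p10 - k1 * p00, p11 - k1 * p01
        p00, p01, p10, p11 = n00, n01, n10, n11
        estimates.append(z)
    return estimates
```

The same predict/update pattern generalizes to the full state vector, with the dynamics model supplying the prediction step.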
However, underwater dynamics are highly non-linear, since the water dampens movement and the water flux around the vessel disturbs the system. These effects are modelled in a non-linear grey-box model which, after reduction and adaptation to our system, contains 20 unknown parameters to be estimated.
Several system identification tests were performed, and the resulting data will be used to estimate these parameters.
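The grey-box estimation can be sketched for one degree of freedom: model the damping force as d1*v + d2*v*|v| (linear plus quadratic drag) and fit the unknown coefficients to identification data by least squares. The mass, the true coefficients, the excitation signal and the synthetic data are all illustrative assumptions; the project's model has 20 parameters across six degrees of freedom.

```python
def fit_damping(vel, thrust, acc, mass):
    """Estimate (d1, d2) from recorded velocity, thrust and acceleration.

    Regression: thrust - mass*acc = d1*v + d2*v*|v|, solved via the
    2x2 normal equations.
    """
    s11 = s12 = s22 = b1 = b2 = 0.0
    for v, u, a in zip(vel, thrust, acc):
        x1, x2 = v, v * abs(v)        # regressors
        y = u - mass * a              # measured damping force
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        b1 += x1 * y;   b2 += x2 * y
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

def make_data(d1=2.0, d2=5.0, mass=30.0, dt=0.05, n=400):
    """Generate synthetic identification data from a known damping model."""
    vel, thrust, acc = [], [], []
    v = 0.0
    for i in range(n):
        u = 40.0 if (i // 100) % 2 == 0 else -20.0  # step excitation
        a = (u - d1 * v - d2 * v * abs(v)) / mass
        vel.append(v); thrust.append(u); acc.append(a)
        v += a * dt
    return vel, thrust, acc
```

On real identification data the fit would of course not be exact, and the full 20-parameter problem would call for a non-linear estimation routine rather than a closed-form solve.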