Volume 13 | Issue 4
Autonomous 3D mobile robot mapping for environment monitoring has gained prominence because it can eliminate the need for human intervention. However, existing techniques often perform poorly in complex environments. To overcome this limitation, this paper introduces a novel framework, 3DMRM-MP (3-Dimensional Mobile Robot Mapping and Motion Planning), built around a Deep Q-Learning-based Markov Decision Model Deep Neural Network (DQMD-DNN). The framework relies on the robot's primary sensors for navigation. Point clouds are pre-processed and similar pixels are grouped to extract features, with an additional enhancement step performed by the Gazelle Optimization Algorithm (GOA). The robot's current pose is estimated using Transformation-Matrix-applied Singular-Value-Decomposition Linear N-Point Camera Pose Estimation (TMSVDLCPE). Based on this estimated pose, the framework determines the desired view, captures images, and converts them into 3D form. The robot's 3D images, speed, and current position then serve as inputs to the DQMD-DNN, which plans the robot's next optimal move. Experimental results show that the proposed technique achieves significantly higher decision accuracy than existing approaches.
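The SVD-based pose-estimation step can be illustrated with a standard rigid-alignment (Kabsch-style) sketch: given matched 3D points from the previous and current views, the rotation and translation of the camera are recovered from the singular value decomposition of the cross-covariance matrix. This is a minimal generic sketch, not the paper's exact TMSVDLCPE pipeline; the function name and point-set shapes are assumptions for illustration.

```python
import numpy as np

def estimate_pose_svd(src, dst):
    """Estimate the rigid transform (R, t) aligning src -> dst point sets
    via SVD (Kabsch algorithm). src, dst: (N, 3) arrays of matched 3D points.
    Illustrative sketch only; the paper's TMSVDLCPE method may differ."""
    c_src = src.mean(axis=0)                  # centroids of both point sets
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    t = c_dst - R @ c_src                     # translation after rotation
    return R, t
```

Given noise-free correspondences, the recovered (R, t) reproduces the transform that generated the second point set exactly.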
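The decision step can likewise be sketched generically: a Deep Q-Network maps the robot's state (here assumed to be a feature vector built from the 3D images, speed, and position) to one Q-value per discrete motion primitive, and the next move is the action with the highest Q-value (epsilon-greedy during training). The action set, network size, and state encoding below are assumptions for illustration, not the paper's DQMD-DNN architecture.

```python
import numpy as np

# Hypothetical discrete motion primitives for the mobile robot.
ACTIONS = ["forward", "backward", "left", "right", "up", "down", "stop"]

def q_values(state, W1, b1, W2, b2):
    """One forward pass of a small two-layer Q-network (numpy sketch).
    Returns one Q-value per discrete action."""
    h = np.maximum(0.0, state @ W1 + b1)      # ReLU hidden layer
    return h @ W2 + b2

def next_move(state, weights, epsilon=0.1, rng=None):
    """Epsilon-greedy action selection, as used during DQN training:
    explore with probability epsilon, otherwise pick the greedy action."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q_values(state, *weights)))
```

With epsilon set to 0 the policy is deterministic, always returning the action whose Q-value is largest for the given state.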