Autonomous Soft-Tissue Needle Steering Using Reinforcement Learning Guided By Human Input

Soft-tissue needle steering, in which a deformable needle is inserted into tissue to guide its tip to a desired position, is a common minimally invasive surgery (MIS) procedure. The diversity of needle types and the complexity of tissue dynamics limit existing approaches that rely on models of the needle and tissue to automate the task. In this work, we take a data-driven approach, using deep reinforcement learning (DRL) to achieve autonomous needle steering by formulating it as a multi-goal reinforcement learning problem. Human interventions are incorporated during training to accelerate learning and reduce catastrophic failures. Generative adversarial imitation learning (GAIL) is combined with standard DRL through a hindsight relabeling scheme applied to human interventions, which encourages the agent to imitate human behavior. To emulate the sim-to-real process, the agent is first trained in a simplified simulation environment for needle steering and then transferred, with fine-tuning, to a more sophisticated environment treated as the real world (sim-to-sim). Experimental results show that, with human interventions, the proposed method outperforms the compared DRL approaches and reaches good performance after only 2,000 training steps in the complex simulation environment, achieving an average return comparable to that of an agent trained from scratch for 55,000 steps.
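The core mechanism of the abstract, relabeling human-intervention trajectories in hindsight so they can serve as successful demonstrations for GAIL, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the HER-style "final" relabeling strategy, and the sparse goal-conditioned reward are all assumptions introduced here for clarity.

```python
import numpy as np

def hindsight_relabel(trajectory, reward_fn):
    """Relabel a human-intervention trajectory with the goal it actually
    achieved (an HER-style 'final' strategy), so its transitions can be
    reused as successful demonstrations for GAIL-style imitation.

    trajectory: list of (state, action, achieved_goal) tuples.
    reward_fn:  goal-conditioned reward, reward_fn(achieved_goal, goal).
    Returns a list of (state, action, goal, reward) demonstration tuples.
    """
    # Pretend the goal was whatever the human actually reached at the end.
    final_goal = trajectory[-1][2]
    return [
        (state, action, final_goal, reward_fn(achieved, final_goal))
        for state, action, achieved in trajectory
    ]

def sparse_reward(achieved, goal, tol=0.05):
    # +1 when the needle tip is within tolerance of the goal, else 0
    # (a common sparse reward choice in multi-goal RL; tol is illustrative).
    return float(np.linalg.norm(np.asarray(achieved) - np.asarray(goal)) < tol)

# Toy 2D needle-tip trajectory: (state, action, achieved_goal) per step.
traj = [
    ((0.0, 0.0), (0.1, 0.0), (0.1, 0.0)),
    ((0.1, 0.0), (0.1, 0.1), (0.2, 0.1)),
]
demos = hindsight_relabel(traj, sparse_reward)
```

Under this relabeling, every intervention trajectory ends in "success" with respect to its relabeled goal, so the final transition always carries a positive sparse reward; the resulting tuples can then populate the demonstration buffer that a GAIL discriminator is trained against.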