A brain-computer interface-based robotic arm self-assisting system and method. The system comprises a sensing layer, a decision-making layer, and an execution layer. The sensing layer comprises an electroencephalogram (EEG) acquisition and detection module and a visual identification and localization module; it analyzes and identifies user intent and, on the basis of that intent, identifies and locates the corresponding cup and the location of the user's mouth. The execution layer comprises a robotic arm control module that performs trajectory planning and controls the robotic arm on the basis of execution commands received from the decision-making module. The decision-making layer comprises the decision-making module, which connects the EEG acquisition and detection module, the visual identification and localization module, and the robotic arm control module; it handles the acquisition and transmission of EEG signal data, localization data, and robotic arm status data, and sends execution commands to the robotic arm. By combining visual identification and localization technology, a brain-computer interface, and a robotic arm, the system enables a paralyzed patient to drink water unassisted, improving the patient's quality of life.
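The three-layer data flow described above (sensing detects intent and locates targets, the decision-making module routes data and issues commands, the execution layer plans and moves the arm) can be sketched in Python. This is a minimal illustrative sketch, not the patent's implementation: all class names, method names, and the stubbed sensing/control logic are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Illustrative sketch only; names and stubbed logic are hypothetical,
# not taken from the patent.

@dataclass
class Pose:
    """A 3-D target position in the robot's workspace (meters)."""
    x: float
    y: float
    z: float

class SensingLayer:
    """EEG intent detection plus visual identification/localization (stubbed)."""
    def detect_intent(self) -> str:
        # Stub: a real system would classify EEG signals into intents.
        return "drink"

    def locate_targets(self) -> dict:
        # Stub: a real system would detect the cup with a camera and
        # locate the user's mouth via face landmarks.
        return {"cup": Pose(0.4, 0.1, 0.2), "mouth": Pose(0.3, 0.0, 0.5)}

class ExecutionLayer:
    """Trajectory planning and robotic-arm control (stubbed)."""
    def __init__(self):
        self.trajectory = []

    def execute(self, waypoints) -> str:
        # Stub: a real controller would interpolate the path and send
        # joint commands, reporting arm status back to the decision layer.
        self.trajectory = list(waypoints)
        return "done"

class DecisionLayer:
    """Connects sensing and execution; routes data and sends commands."""
    def __init__(self, sensing: SensingLayer, execution: ExecutionLayer):
        self.sensing = sensing
        self.execution = execution

    def run_cycle(self) -> str:
        intent = self.sensing.detect_intent()
        if intent != "drink":
            return "idle"
        targets = self.sensing.locate_targets()
        # Command: reach the cup, then bring it to the mouth.
        return self.execution.execute([targets["cup"], targets["mouth"]])

if __name__ == "__main__":
    decision = DecisionLayer(SensingLayer(), ExecutionLayer())
    print(decision.run_cycle())  # prints "done"
```

The decision-making layer is the only component that talks to both sensing and execution, matching the abstract's description of it as the connecting module that transmits EEG, localization, and arm-status data and issues the execution command.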