How viable is the method of using LLMs to control robots?
Research Question/Goal: Build a framework that incorporates an LLM into a robotic system as the central reasoning node, then evaluate how LLMs perform in the real-time, resource-constrained environments characteristic of home automation and personalized robotics.
Three main components that define the structure of the system:
- Action (Physical, Observable Responses)
- Sensing (Integrated Sensors)
- Intelligence (LLM)
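The three components above can be sketched as a minimal sense-think-act loop. Everything below is a stub standing in for the real system: the sensor reader, the reasoning step, and the actuator would be backed by ROS 2 topics and a loaded LLM in the actual framework.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    ir_intensities: list   # 7 front IR readings
    bumper_pressed: bool

def sense() -> Observation:
    # Stub: the real system would read the Create 3's sensor topics over ROS 2.
    return Observation(ir_intensities=[0] * 7, bumper_pressed=False)

def think(obs: Observation) -> str:
    # Stub reasoning node: a real deployment would prompt the LLM with the
    # observation and parse its reply into an action name.
    if obs.bumper_pressed:
        return "back_up"
    if any(v > 100 for v in obs.ir_intensities):
        return "turn"
    return "drive_forward"

def act(action: str) -> None:
    # Stub: would publish velocity commands or action goals to the robot.
    print(f"executing: {action}")

def step() -> str:
    obs = sense()
    action = think(obs)
    act(action)
    return action
```

The loop structure is the point here, not the stub logic: swapping `think` for an LLM call is what the rest of these notes set up.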
iRobot Create 3 Robot
- 7 IR Obstacle Sensors in the front
- Can be used to detect obstacles
- Three Buttons on top
- Their functions can be overridden by a ROS 2 application
- Power Button features ring of 6 RGB LEDs for indication
- Multi-Zone Bumper
- Docking Sensor
- Adapter Board below Faceplate
- Main purpose: interface with external computers, either over Bluetooth or via USB-C
- Unregulated Battery Port (~14 V at 2 A max)
- USB-C Connector: USB 2.0 Host connection into robot with 5.13 V at 3.0 A provided to power downstream connections. Power is disabled on this port unless a proper USB-C device is connected.
- USB/BLE Toggle routes the robot's single USB Host connection to either the USB-C port or to the on-board Bluetooth Low Energy module.
- Faceplate + Cargo Bay
- Regular hole pattern for attaching payloads
- 4 Cliff sensors
- Detect ledges and keep the robot from driving off edges (e.g., stairs)
- Optical Odometry Sensor
- IMU
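As an illustration of how the seven front IR readings could feed the reasoning node, the sketch below classifies where an obstacle sits. The threshold value and the left-to-right sensor ordering are assumptions for illustration; real readings would come from the robot's IR intensity topic over ROS 2.

```python
def obstacle_direction(readings, threshold=150):
    """Classify obstacle position from 7 front IR intensity readings
    (assumed index 0 = leftmost sensor, 6 = rightmost). Higher intensity
    means a closer obstacle; the threshold is an illustrative value."""
    if len(readings) != 7:
        raise ValueError("expected 7 IR readings")
    hot = [i for i, v in enumerate(readings) if v >= threshold]
    if not hot:
        return "clear"
    center = sum(hot) / len(hot)   # mean index of triggered sensors
    if center < 3:
        return "left"
    if center > 3:
        return "right"
    return "ahead"
```

A summary like `"obstacle ahead"` is far cheaper to hand to an LLM than seven raw intensity values, which matters on resource-constrained hardware.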
Nvidia Jetson AGX Xavier Development Kit
- CPU structure: 8-core NVIDIA Carmel Armv8.2 64-bit CPU
- GPU structure: 512-core NVIDIA Volta GPU with 64 Tensor Cores
Desktop Machine
- CPU structure:
- GPU structure:
Raspberry Pi (Optional)
Intel RealSense LiDAR Camera L515
- Depth camera
- RGB camera
- IMU (gyroscope and accelerometer)
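A depth frame from the L515 can feed a simple nearest-obstacle check. This sketch assumes the frame has already been converted to a NumPy array of distances in meters (e.g., via the pyrealsense2 bindings), with zeros marking invalid pixels.

```python
import numpy as np

def nearest_obstacle_m(depth_m: np.ndarray, roi_frac: float = 0.3):
    """Return the nearest valid depth (meters) inside a centered region
    of interest covering roi_frac of each dimension, or None if no
    valid pixel is in view."""
    h, w = depth_m.shape
    dh, dw = int(h * roi_frac / 2), int(w * roi_frac / 2)
    roi = depth_m[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    valid = roi[roi > 0]          # zeros are invalid depth returns
    return float(valid.min()) if valid.size else None
```

Restricting the check to a central region of interest keeps the result focused on what is directly ahead of the robot rather than the floor or ceiling at the frame edges.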
This section describes all the required setup steps for the devices, as well as the installation of the necessary software such as the LLMs.
- Install JetPack 5.1.1 using SDK Manager (the host machine performing the installation must run Ubuntu 18.04)
- Install to NVMe to utilize SSD as main path
- Once installation is done, verify that the Ubuntu version on the Jetson is 20.04
- AGX Developer Manual
- Configure the installation so that everything (including all packages) goes onto the SSD.
- Install Whisper
- ROS 2 Installation (using Binary Packages)
- Configure ROS 2 Environment
- Create 3 ROS 2 Setup
- Nvidia Jetson Setup with Create 3
- Test Run
- Make sure to run `source /opt/ros/galactic/setup.bash` and `export ROS_DOMAIN_ID=0`
- Run Docking and Undocking Commands
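A possible shape for the test run, assuming the `irobot_create_msgs` interfaces are installed. The dock action name and type vary with Create 3 firmware, so verify them first with `ros2 action list -t`.

```shell
# Source ROS 2 Galactic and match the robot's domain ID
source /opt/ros/galactic/setup.bash
export ROS_DOMAIN_ID=0

# Undock, then dock again. Older Create 3 firmware exposes the dock
# action as DockServo rather than Dock; confirm with: ros2 action list -t
ros2 action send_goal /undock irobot_create_msgs/action/Undock "{}"
ros2 action send_goal /dock irobot_create_msgs/action/Dock "{}"
```

If the goals are accepted, the robot should drive off the dock and back on, confirming that the Jetson and robot share a working ROS 2 connection.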
- Install Pip
- Install CUDA
- Install PyTorch
- Install NumPy
- pip install numpy
- Install transformers
- pip install transformers
- Install LLM
- Launch `python3`
- Enter the commands listed under “Load Model Directly” on the chosen model's page (e.g., its Hugging Face model card)
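The “Load Model Directly” step reduces to the standard transformers loading pattern sketched below. The model ID is a placeholder, the import is deferred so the helpers can be defined without transformers installed, and `parse_action` is a hypothetical helper showing how free-form LLM text might be mapped onto a discrete robot action.

```python
# "undock" is listed before "dock" so substring matching never
# mistakes one for the other.
KNOWN_ACTIONS = ("drive_forward", "turn_left", "turn_right",
                 "undock", "dock", "stop")

def load_llm(model_id):
    """Load a causal LM following the 'Load Model Directly' recipe.
    model_id is a placeholder for a Hugging Face repo name."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model

def parse_action(reply: str) -> str:
    """Map free-form LLM output onto one of the robot's discrete
    actions; fall back to 'stop' when nothing known is mentioned."""
    text = reply.lower()
    for action in KNOWN_ACTIONS:
        if action in text or action.replace("_", " ") in text:
            return action
    return "stop"
```

Defaulting to `"stop"` on unrecognized output is a deliberately conservative choice for a physical robot: an ignored instruction is safer than a misread one.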
- Setup the Raspberry Pi
- Install 64-bit Raspbian OS
- Download the Whisper files from the GitHub repository: openai/whisper (Robust Speech Recognition via Large-Scale Weak Supervision)
- Test the Python example code in the Thonny IDE
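A minimal sketch of calling Whisper from Python, assuming the `openai-whisper` package from the repository above is installed. The import is deferred so the helpers load without Whisper present, and `to_command` is a hypothetical helper normalizing the transcript before it is handed to the LLM.

```python
def transcribe(audio_path: str, model_name: str = "base") -> str:
    """Transcribe an audio file with Whisper. The import is inside the
    function so this sketch can be defined where Whisper isn't installed."""
    import whisper  # pip install openai-whisper
    model = whisper.load_model(model_name)
    return model.transcribe(audio_path)["text"]

def to_command(text: str) -> str:
    """Normalize a raw transcript into a lowercase, single-spaced
    command string for the reasoning node."""
    return " ".join(text.lower().split())
```

On the Pi, a smaller model such as `tiny` or `base` is the realistic choice given memory and latency constraints.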
- Connect the Create 3 Robot to Wi-Fi using these steps
- Go to Terminal
- Type `source /opt/ros/galactic/setup.bash`
- Type `export ROS_DOMAIN_ID=<your_domain_id>` (in our case, 0)
- To open