Experimental AI-Controlled Robotic Face — RoboFace: Holliday
RoboFace is an experimental project that explores the intersection of artificial intelligence, robotics, and human expression. The robotic face, named "Holliday", serves as our test subject. At its core, this project investigates: How does an AI language model behave when given control over a human-like face and made to understand that it is controlling a human face?
Core Research Question: Can an LLM develop meaningful, context-aware behaviors when controlling facial expressions, understanding the logical constraints of human facial anatomy (e.g., if eyelids are closed, moving pupils makes no sense)? Can Holliday, under AI control, exhibit human-like expression patterns, or will it reveal the limitations of AI reasoning?
This project consists of two main components: an Arduino-based control system that manages Holliday's physical servomotors, and a Python script that interfaces with a local LLM to make autonomous decisions about facial expressions and movements.
Data flow:
Local LLM (LlamaCPP) → Python Controller Script → Arduino Uno R4 WiFi → PCA9685 Servo Driver → Holliday (Robotic Face)
The Arduino Uno R4 WiFi serves as the control layer, bridging AI decisions and the physical servomotors. The firmware exposes a REST API for both manual and AI-driven operation.
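As a rough sketch of how the Python controller might talk to the firmware's REST API: the route (`/servo`), payload shape, and board IP below are assumptions for illustration, not the documented interface.

```python
import json

ARDUINO_HOST = "http://192.168.1.50"  # assumed IP of the Uno R4 WiFi on the LAN

def build_servo_request(channel: int, angle: int) -> tuple[str, str]:
    """Build a (url, json_body) pair for a single servo move."""
    if not 0 <= channel <= 15:
        raise ValueError(f"PCA9685 channel out of range: {channel}")
    url = f"{ARDUINO_HOST}/servo"  # assumed route exposed by the firmware
    body = json.dumps({"channel": channel, "angle": angle})
    return url, body

# Example: center the head (channel 0) at 114 degrees.
url, body = build_servo_request(0, 114)
# A real controller would then POST it, e.g. requests.post(url, data=body).
```

Keeping request construction separate from the actual HTTP call makes the payload easy to validate and log before anything physically moves.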
Channel 0: Head (Rotation: 47°-180°, Center: 114°)
Channels 1-4: Eyelids (Rest: 10°, Closed: 90°)
Channel 5: Horizontal Pupils (60°-110°, Center: 85°)
Channel 6: Vertical Pupils (70°-120°, Center: 95°)
Channel 7: Mouth (Open: 47°, Closed: 80°)
Channels 8-15: Reserved
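The channel map above can be encoded as a small limits table on the Python side, so every commanded angle is clamped into Holliday's safe range before it reaches the servos. This is a minimal sketch; the real controller's data structures may differ.

```python
# (min, max, rest/center) per channel, taken from the servo map above.
LIMITS = {
    0: (47, 180, 114),  # head rotation
    5: (60, 110, 85),   # horizontal pupils
    6: (70, 120, 95),   # vertical pupils
    7: (47, 80, 80),    # mouth (open: 47, closed: 80)
}
for ch in (1, 2, 3, 4):  # eyelids: rest 10, closed 90
    LIMITS[ch] = (10, 90, 10)

def clamp_angle(channel: int, angle: int) -> int:
    """Limit a commanded angle to the channel's mechanical range."""
    lo, hi, _rest = LIMITS[channel]
    return max(lo, min(hi, angle))

print(clamp_angle(0, 200))  # a head request past 180 is limited to 180
```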
The Python script is the brain of the system. It queries a local LLM (LlamaCPP) in a continuous loop (about once per second), deciding the next facial expression or movement from context and executing up to three simultaneous actions when they are logically consistent.
Key Innovation: The LLM is instructed that it is controlling a human face and must respect logical constraints (e.g., if eyelids are closed, moving pupils is pointless).
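The eyelid/pupil constraint described above can be enforced as a simple consistency check. This sketch assumes the controller keeps the last commanded angle per channel in a dict; that state representation is an assumption for illustration.

```python
EYELID_CHANNELS = (1, 2, 3, 4)
PUPIL_CHANNELS = (5, 6)
EYELID_CLOSED = 90  # from the servo map: eyelids closed at 90 degrees

def is_consistent(action_channel: int, state: dict[int, int]) -> bool:
    """Reject a pupil movement while all eyelids are closed."""
    if action_channel in PUPIL_CHANNELS:
        if all(state.get(ch, 10) >= EYELID_CLOSED for ch in EYELID_CHANNELS):
            return False  # eyes shut: moving pupils makes no sense
    return True

state = {1: 90, 2: 90, 3: 90, 4: 90}  # all eyelids closed
assert not is_consistent(5, state)     # pupil move rejected
state[1] = 10                          # one eyelid opens again
assert is_consistent(5, state)         # pupil move allowed
```

Checks like this act as a safety net: even if the LLM ignores its instructions, physically incoherent actions never reach the hardware.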
1. The script sends available actions, limits, context, and constraints to the LLM.
2. The LLM returns one or more actions (test or servo).
3. The script validates the actions, enforces limits, executes them via the REST API, and updates the history.
4. The process repeats every second.
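The steps above can be sketched as a minimal decision loop. The LLM query and REST call are stubbed here; a real implementation would use llama-cpp-python and an HTTP client against the firmware's actual endpoints.

```python
import time

def query_llm(history: list[dict]) -> list[dict]:
    """Stub standing in for the LlamaCPP query; returns proposed actions."""
    return [{"channel": 0, "angle": 120}]  # e.g. a slight head turn

def execute(action: dict) -> None:
    """Stub standing in for the REST call to the Arduino."""
    pass

def control_loop(iterations: int = 3, max_actions: int = 3) -> list[dict]:
    history: list[dict] = []
    for _ in range(iterations):
        # Step 2: accept at most 3 simultaneous actions from the LLM.
        actions = query_llm(history)[:max_actions]
        for action in actions:
            # Step 3: execute and record (validation omitted in this sketch).
            execute(action)
            history.append(action)
        # Step 4: repeat roughly once per second (shortened for the demo).
        time.sleep(0.01)
    return history

history = control_loop()
```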
This project is experimental. Holliday serves as a research tool for observation.
Research Goal: Contribute to understanding how AI behaves when given autonomous control over physical systems with logical constraints, especially when those systems are anthropomorphized. Through Holliday, we observe whether an LLM can develop an intuitive sense of human facial expression logic.
Primary Finding: LLMs can follow logical constraints and produce contextually appropriate expressions, but they do not demonstrate true understanding of human facial anatomy. They rely on pattern matching and instruction following. With clear constraints and context, they can still produce surprisingly human-like sequences that appear intentional to an observer.
If you need additional information about the Holliday Robot / RoboFace project or want to discuss it, please send an email.