Cognitive science · VR · Robotics · Human–AI collaboration

Designing and studying how people work with intelligent systems.

I’m a cognitive scientist and research developer working with VR environments, robots, and AI agents to explore human–AI teaming, decision making, and training.

VR simulation & experimental design
Embodied AI & social robots
Human–AI teams & training
Rapid prototyping & tooling
Profile

About

I have a PhD in Cognitive Science and a background that blends experimental psychology, human–computer interaction, and hands-on development. Over the past several years I’ve built and supported research testbeds involving VR simulations, mobile and humanoid robots, and intelligent agents that interact with people in real time.

My work typically lives at the boundary between research and implementation: turning abstract research questions into concrete experiments, building the technical infrastructure to run them, and helping teams reason about what the results mean for real-world training and operations.

What I’m good at

  • Designing and implementing human–AI and human–robot experiments.
  • Building interactive VR environments for training and simulation.
  • Integrating robots, sensors, and AI models into cohesive systems.
  • Explaining complex systems clearly to interdisciplinary teams.

Tools & platforms

  • Unity (VR / AR)
  • Python
  • Robotics SDKs (Spot, Ghost, Furhat)
  • LLM-based agents & APIs
  • Experimental control & data pipelines
  • fNIRS / physiological data streams

Focus areas

Current work & interests

Recently, my work has focused on testbeds where people interact with intelligent systems under pressure: immersive crisis simulations, command-and-control style interfaces, and structured team tasks that reveal how humans calibrate trust and coordination.

A recurring theme is exploring how an AI or robot “teammate” should behave: when it should take initiative, when it should defer to humans, and how to make its capabilities and limitations legible to the people working with it.

  • VR environments where participants work with an AI or robotic teammate.
  • Robots that respond to human cognitive state (e.g., workload / effort signals).
  • Interfaces that let non-programmers orchestrate robot/agent behavior.
  • Infrastructure for multi-participant studies with streaming data.
Selected work

Projects & testbeds

A sample of systems I’ve helped design and build. Some demos and technical details are not publicly posted; I’m happy to share examples or walk through specifics in a brief call.

Immersive crisis simulation testbed
Selected visuals available on request
VR Space Station Scenario
VR · Human–AI Teams
A VR simulation where participants respond to a developing “space station disaster” while coordinating with an AI assistant. Work included Unity environment development, interaction flow, and experimental logic around guidance vs. autonomy.
Unity · VR · AI teammate design · Experimental pipelines
Robot teleoperation + perception in VR
Video clips available on request
Embodied AI with Legged Robots
Robotics
Integrating quadruped robots (e.g., Spot, Ghost) with VR views and LLM-based control. Implemented networked video streaming, control interfaces, and early conversational command pipelines.
Python · SDKs · Live video streaming · LLM command interface
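The conversational command pipeline mentioned above can be illustrated with a small validation layer. This is a hypothetical sketch, not the project's actual code: the action names, the JSON schema, and the speed limit are assumptions. The idea is that the language model is prompted to emit a constrained JSON action, and nothing reaches the robot SDK until it has been checked against an allow-list and clamped to safe ranges.

```python
import json

# Illustrative sketch only: action names, schema, and limits are assumptions.
ALLOWED_ACTIONS = {"walk", "turn", "sit", "stand"}
MAX_SPEED = 1.0  # m/s; continuous parameters are clamped to [-MAX_SPEED, MAX_SPEED]

def parse_llm_command(llm_reply: str) -> dict:
    """Validate a JSON action emitted by the language model.

    The model is prompted to reply with e.g.
    {"action": "walk", "vx": 0.5, "yaw": 0.0}; anything else is rejected,
    so free-form text can never drive the robot directly.
    """
    cmd = json.loads(llm_reply)
    if cmd.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {cmd.get('action')!r}")
    # Clamp continuous parameters into a safe range before they reach the SDK.
    for key in ("vx", "vy", "yaw"):
        if key in cmd:
            cmd[key] = max(-MAX_SPEED, min(MAX_SPEED, float(cmd[key])))
    return cmd
```

Keeping the model's output in a narrow, machine-checkable format is what makes this kind of interface safe to demo live: a hallucinated or malformed reply fails validation instead of moving the robot.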
Social robot as teammate/advisor
Demo scripts available on request
Social Robots in Team Decision Tasks
Social Robotics
Wizard-of-Oz and semi-autonomous setups where a social robot participates in group decision-making tasks. Built orchestration tools for speech, gaze, and expressions, enabling real-time experimenter control.
Furhat · Wizard-of-Oz tools · Dialogue sequencing
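The orchestration tools described above can be sketched as a simple cue sequencer. This is a minimal illustration, not the study's actual tooling: the channel names and the `print`-style handlers stand in for calls into the robot's API. The experimenter enqueues speech, gaze, and expression cues from a console; a robot-side loop drains them in order and logs each dispatch for the session record.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: channels and handlers are illustrative; real handlers
# would call the social robot's API rather than append to a list.

@dataclass
class Cue:
    channel: str   # e.g. "speech", "gaze", "expression"
    payload: str

@dataclass
class WizardConsole:
    """Minimal sequencer: the experimenter enqueues cues, the robot loop drains them."""
    handlers: dict[str, Callable[[str], None]]
    queue: list[Cue] = field(default_factory=list)

    def enqueue(self, channel: str, payload: str) -> None:
        if channel not in self.handlers:
            raise KeyError(f"no handler for channel {channel!r}")
        self.queue.append(Cue(channel, payload))

    def drain(self) -> list[str]:
        """Dispatch queued cues in order; returns a log line per cue."""
        log = []
        while self.queue:
            cue = self.queue.pop(0)
            self.handlers[cue.channel](cue.payload)
            log.append(f"{cue.channel}:{cue.payload}")
        return log
```

Separating "what the robot should do next" from "how the robot does it" is what lets the same console drive both Wizard-of-Oz and semi-autonomous sessions: only the handler table changes.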
3D perception + physiological sensing
Technical write-up available on request
3D Perception & Cognitive State Integration
Perception & Physiology
Tools for visualizing depth and point cloud data in VR, and experiments that connect physiological measures (e.g., fNIRS effort signals) to adaptive robot/agent behavior, such as modulating system proactivity when users are overloaded.
Unity · Point clouds · fNIRS / LSL · Adaptive behavior
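The proactivity modulation described above can be sketched as a hysteresis gate. This is an illustrative sketch under stated assumptions, not the experiment's actual logic: the thresholds and the normalized "effort" signal are placeholders, not calibrated fNIRS values. A single threshold would make the agent flip modes on every noisy sample, so the gate enters a "reactive" mode above one level and only returns to "proactive" once effort falls well below it.

```python
# Illustrative sketch: thresholds and the normalized effort signal are
# assumptions, not calibrated physiological values.

class ProactivityGate:
    """Hysteresis switch: suppress proactive behavior when effort is high,
    restore it only once effort falls well below the trigger level."""

    def __init__(self, high: float = 0.7, low: float = 0.4):
        self.high = high          # enter "overloaded" above this
        self.low = low            # leave "overloaded" below this
        self.overloaded = False

    def update(self, effort: float) -> str:
        """Feed one normalized effort sample; return the current agent mode."""
        if self.overloaded:
            if effort < self.low:
                self.overloaded = False
        elif effort > self.high:
            self.overloaded = True
        # Overloaded users get terse, reactive support; otherwise the agent
        # may volunteer suggestions.
        return "reactive" if self.overloaded else "proactive"
```

The gap between the two thresholds is the design choice that matters: it keeps the agent's behavior stable against sample-to-sample noise in the physiological stream.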
Outputs

Publications & talks

A brief selection below; a complete and up-to-date list is available in my CV. I’m happy to share copies or additional details on request.

Investigating human-robot overtrust during crises
MULTITTRUST Workshop, Hybrid Human Artificial Intelligence Conference · 2023
Robot guided emergency evacuation from a simulated space station
AIAA SciTech Forum · 2023
Using virtual reality to simulate human-robot emergency evacuation scenarios
AAAI AI-HRI Symposium · 2022
The Impact of Anthropomorphism on Trust in Robotic Systems
Conference / workshop publication · 2021
Connectionist models of bilingual word reading
Book chapter in Methods in Bilingual Reading Comprehension Research · 2016
Spectral convergence in tapping and physiological fluctuations
Frontiers in Human Neuroscience · 2014
Mentorship

Teaching, mentoring & collaboration

I’ve mentored students and early-career researchers working on projects in VR, robotics, and human–AI interaction—often helping translate a broad idea into a concrete experimental design and technical plan.

I’ve also taught undergraduate courses in logic, ethics, and philosophy of cognitive science.

Teaching (university-level)
Logic, Bioethics, Professional Ethics, Philosophy of Cognitive Science
Project supervision
VR simulations, robot control interfaces, and human–AI teaming experiments
Workshops & internal training
Helping interdisciplinary teams use VR, robots, and AI agents in studies
Next steps

Contact & links

I’m open to conversations about:

  • Research or development roles involving VR, robotics, or human–AI teaming.
  • Collaborations on experimental testbeds or training environments.
  • Ways to bring intelligent agents or robots into existing research programs.
Available for full-time roles
Remote or hybrid
Research & development