As the Lead Robotics Foundation Model Engineer, you will design, train, and deploy large-scale multimodal models that integrate vision, language, and action components for real-world robotic applications. Leveraging data from our teleoperation systems, you will create generalizable policies for our robots to perform complex tasks autonomously and reliably—beyond lab-scale or proof-of-concept demos. You will guide the end-to-end pipeline, from data processing and model design to on-robot deployment and performance optimization.
Key Responsibilities
Model Architecture & Implementation
Design Vision-Language-Action Models: Develop and refine network architectures (transformers, multimodal encoders) that integrate vision data, language instructions, and robot control signals to output intelligent action policies (a minimal architecture sketch follows this list)
Scalable Training Pipelines: Set up robust machine learning pipelines (distributed training, large-batch processing) to handle extensive teleoperation datasets
Real-Time Control Integration: Work closely with our robotics control team to ensure model outputs align with real-time actuation requirements, bridging deep learning inference with embedded controllers
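As a rough illustration of the Vision-Language-Action design work described above, the following sketch shows a minimal policy that projects vision and language tokens into a shared space, fuses them with a small transformer, and regresses a continuous action. The encoder dimensions, the learned readout token, and the 7-DoF action head are illustrative assumptions, not a description of our production architecture.

```python
# Minimal Vision-Language-Action policy sketch (illustrative assumptions only:
# encoder choices, hidden sizes, and the 7-DoF action head are placeholders).
import torch
import torch.nn as nn


class VLAPolicy(nn.Module):
    def __init__(self, vision_dim=512, text_dim=512, hidden=512, action_dim=7):
        super().__init__()
        # Project per-modality features into a shared token space.
        self.vision_proj = nn.Linear(vision_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        # A small transformer encoder fuses vision and language tokens.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=4)
        # Learned readout token whose output is decoded into an action.
        self.readout = nn.Parameter(torch.zeros(1, 1, hidden))
        self.action_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, action_dim)
        )

    def forward(self, vision_tokens, text_tokens):
        # vision_tokens: (B, Nv, vision_dim), text_tokens: (B, Nt, text_dim)
        b = vision_tokens.shape[0]
        tokens = torch.cat(
            [
                self.readout.expand(b, -1, -1),
                self.vision_proj(vision_tokens),
                self.text_proj(text_tokens),
            ],
            dim=1,
        )
        fused = self.fusion(tokens)
        # Decode the readout token into a continuous action
        # (e.g., end-effector delta plus gripper command).
        return self.action_head(fused[:, 0])


if __name__ == "__main__":
    policy = VLAPolicy()
    actions = policy(torch.randn(2, 49, 512), torch.randn(2, 16, 512))
    print(actions.shape)  # torch.Size([2, 7])
```

In practice the vision and text features would come from pretrained encoders, and the action space would match the specific robot's control interface.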
Teleoperation & Data Utilization
Data Collection & Curation: Collaborate with the teleoperation software team to design data-collection strategies, ensuring we capture high-quality vision and operator-action sequences for model training
Multimodal Annotation & Preprocessing: Implement processes for labeling or inferring language-based instructions, sensor metadata, and contextual cues from unstructured teleoperation logs (see the preprocessing sketch after this list)
Domain Adaptation & Continuous Learning: Guide methods to adapt VLA models as new teleoperation data is collected, ensuring models remain robust across varying tasks, operators, and environments
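To make the annotation and preprocessing bullet above more concrete, here is a minimal sketch that converts a JSON-lines teleoperation log into (image path, instruction, action) training samples. The record layout and field names (frame, instruction, joint_command) are hypothetical; real logs and their schemas will differ.

```python
# Hypothetical teleoperation-log preprocessing sketch.
# Assumes a JSONL log where each record holds an image path, an operator
# instruction (possibly empty), and a joint command; field names are invented.
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Iterator, List


@dataclass
class Sample:
    image_path: str       # path to the stored camera frame
    instruction: str      # language instruction (labeled or inferred)
    action: List[float]   # operator's joint/end-effector command


def load_teleop_log(log_path: Path, default_instruction: str = "") -> Iterator[Sample]:
    """Yield training samples, skipping malformed or incomplete records."""
    with log_path.open() as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # drop corrupted lines rather than failing the whole run
            if "frame" not in record or "joint_command" not in record:
                continue
            yield Sample(
                image_path=record["frame"],
                instruction=record.get("instruction", default_instruction),
                action=[float(x) for x in record["joint_command"]],
            )


if __name__ == "__main__":
    samples = list(load_teleop_log(Path("teleop_session_0001.jsonl")))
    print(f"loaded {len(samples)} samples")
```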
Real-World Robot Deployment
On-Robot Inference & Optimization: Package and deploy trained policies onto embedded compute platforms (NVIDIA Jetson or similar), ensuring low-latency inference and reliable control signals (see the inference-loop sketch after this list)
Performance Evaluation & Safety Checks: Establish rigorous evaluation protocols (safety, accuracy, and autonomy metrics) to validate VLA models in real industrial or field environments, not just in simulation
Continuous Field Optimization: Work hand-in-hand with hardware teams and site operators to diagnose issues, refine model hyperparameters, and optimize inference for new or unexpected scenarios
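As a sketch of the on-robot inference bullet above, the loop below exports a trained policy with TorchScript and checks each control cycle against a fixed latency budget. The 50 Hz budget, the read_sensors/send_command callbacks, and the export path are assumptions for illustration; an actual Jetson deployment would typically add runtime optimization (e.g., TensorRT) and proper fault handling.

```python
# Illustrative low-latency control loop; the 50 Hz budget and the
# read_sensors/send_command stubs are assumptions, not our actual interfaces.
import time
import torch

CONTROL_PERIOD_S = 0.02  # 50 Hz control cycle (assumed budget)


def export_policy(policy: torch.nn.Module, example_inputs, path: str = "policy_ts.pt") -> None:
    # TorchScript trace so the policy can run without the Python training code.
    traced = torch.jit.trace(policy.eval(), example_inputs)
    traced.save(path)


def control_loop(policy_path: str, read_sensors, send_command, steps: int = 1000) -> None:
    policy = torch.jit.load(policy_path)
    with torch.inference_mode():
        for _ in range(steps):
            start = time.perf_counter()
            vision_tokens, text_tokens = read_sensors()   # latest camera + instruction features
            action = policy(vision_tokens, text_tokens)   # single forward pass
            send_command(action.squeeze(0).tolist())      # hand off to the low-level controller
            elapsed = time.perf_counter() - start
            if elapsed > CONTROL_PERIOD_S:
                # Missing the budget should be logged and handled, not silently ignored.
                print(f"warning: control step took {elapsed * 1000:.1f} ms")
            else:
                time.sleep(CONTROL_PERIOD_S - elapsed)
```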
Collaboration & Stakeholder Management
Cross-Functional Collaboration: Liaise with internal and external robotics researchers, control engineers, and teleoperation specialists to align on objectives, share findings, and integrate best practices
External Partnerships: Represent the VLA team in collaborations with external research institutes or technology partners, advocating for our approach to building robust production models
Continuous Optimization & Innovation
Metrics & Model Health: Define key performance indicators (accuracy, success rate, real-time efficiency) for model-driven robot autonomy and continuously track improvements
Research & Knowledge Sharing: Stay up-to-date with advancements in multimodal deep learning, large-scale model optimization, and robotic control research; share breakthroughs internally
Qualifications
Technical Skills
Deep Learning Expertise: Demonstrated track record of building and training large-scale multimodal or transformer-based models (e.g., vision-language transformers, reinforcement learning pipelines)
Robotics Integration: Experience deploying AI/ML solutions onto physical robots with real-time constraints; proficiency using robotics middleware (e.g., ROS1/2) and embedded edge hardware (e.g., Jetson)
Data Engineering for ML: Proficiency in constructing data-processing pipelines (Python, C++, or similar) and training on high-performance GPUs for large, complex datasets (images, video, text, sensor logs)
Distributed Systems: Familiarity with distributed training paradigms (PyTorch Distributed or similar) for large-scale model development (see the training sketch after this list)
Control & Actuation: Solid understanding of control theory and how high-level AI actions map to low-level motors, actuators, and physical robot systems
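For reference on the distributed-training qualification above, here is a minimal single-node DistributedDataParallel sketch in PyTorch, launched with torchrun. The toy linear model and random data stand in for a real policy and dataset; they are placeholders, not our training stack.

```python
# Minimal DistributedDataParallel sketch (launch with:
#   torchrun --nproc_per_node=NUM_GPUS train_ddp.py
# ). The toy model and random data are placeholders for illustration.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 7).cuda(local_rank)      # stand-in for a VLA policy
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    dataset = TensorDataset(torch.randn(10_000, 512), torch.randn(10_000, 7))
    sampler = DistributedSampler(dataset)                 # shards data across ranks
    loader = DataLoader(dataset, batch_size=256, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)                          # reshuffle per epoch
        for features, targets in loader:
            features = features.cuda(local_rank, non_blocking=True)
            targets = targets.cuda(local_rank, non_blocking=True)
            loss = torch.nn.functional.mse_loss(model(features), targets)
            optimizer.zero_grad()
            loss.backward()                               # gradients all-reduced by DDP
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```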
Professional Experience
Robust Deployment Track Record: Proven success in taking advanced ML/AI or robotics projects from initial research to stable, real-world operation (beyond simulation and PoC)
Team Leadership: Prior experience leading or mentoring a team; capable of managing project timelines, delegating tasks, and aligning stakeholders toward common goals
Industry & Research Contributions: Strong portfolio or publication record in AI or robotics; comfortable presenting at conferences or leading technical discussions
Soft Skills & Culture Fit
Ownership mentality: Takes responsibility for outcomes and problem-solves proactively
User-Centric Mindset: Understands how diverse stakeholders (end users, partners, and internal teams) will interact with the product; able to envision optimal workflows, communicate them clearly to both technical and non-technical audiences, and translate them into actionable technical requirements
Comfortable in a performance-driven environment (high rewards for results, potential demotion for underperformance)
Communication skills in English; Japanese proficiency is a plus
[IMPORTANT] Application & Evaluation
To help us evaluate your candidacy effectively, please provide detailed examples of your past work that demonstrate your ability to deliver robust, AI-driven robotics solutions. Submitting only generic or unrelated experience (e.g., “I have done some machine learning; trust me”) will not suffice. Instead, please explicitly detail:
Project Portfolios & Videos: Links to notable projects or demonstrations showcasing successful real-world or near-real-time robot deployments
Technical Explanations: Summaries of your role in designing data pipelines, training large-scale models, integrating AI with physical robot systems, and managing any real-time constraints
Relevant Publications: If applicable, include research papers or intellectual property that illustrates your expertise in multimodal AI or robotics
Applications consisting solely of a standard resume without addressing these points will not proceed in our selection process. We look forward to reviewing your concrete evidence of expertise in building and deploying advanced robotics foundation models.