Aurelius Systems

Perception Engineer

San Francisco, CA, US

14 days ago

Summary

US citizens and permanent residents only - we do not sponsor visas and cannot hire applicants who are neither citizens nor permanent residents.


We work six days a week in our downtown San Francisco office; remote work is not available.


What is Aurelius Systems?
Aurelius Systems is a directed-energy company developing edge-deployed laser systems for counter-UAS (cUAS) applications. In short: we shoot down drones with lasers.
Why?
The US and our allies have prepared for a world of fighting ever larger and more sophisticated systems (planes, aircraft carriers, etc.), but that world never came. The future of warfare will involve large numbers of small, mass-manufacturable drones, and we are not prepared to defend our country from these threats abroad or at home. Aurelius Systems has set out to address this problem by equipping the US and our allies with the systems they need to fight current and future conflicts in a scalable, cost-effective manner.


Job Description

We are seeking a skilled Perception Engineer to join our software team and own the end‑to‑end perception and sensor‑fusion stack. You will develop, train, and deploy vision- and sensor-based models; manage data pipelines; and enable real‑time detection and tracking for our laser‑based defense system. This role bridges data science, software engineering, and robotics to deliver reliable, high‑throughput perception performance on edge hardware.

Key Responsibilities
  • Model Development & Training: Design, train, validate, and fine-tune machine‑learning and deep‑learning models (e.g., YOLO, RT-DETR, CNNs) for object detection, classification, and segmentation.
  • Sensor Fusion: Integrate and fuse data from multi‑modal sensors (RGB, thermal, LiDAR/ToF, IMU, encoders) to produce robust, real‑time Regions of Interest (ROIs).
  • Advanced CV Algorithms: Research, implement, and, as needed, develop high- and low-level image-processing techniques such as deconvolution, low-SNR detection, and motion isolation.
  • Hardware Integration & Firmware Development: Collaborate with hardware teams to integrate and troubleshoot sensors (global‑shutter and rolling‑shutter cameras) over GigE Vision, USB3 Vision, CAN, SPI, and I²C protocols; develop and debug embedded firmware in C/C++ (or Rust) for microcontrollers (STM32, NXP, TI) and FPGAs using VHDL/Verilog within RTOS environments (FreeRTOS, Zephyr).
  • Data Handling & Pipelines: Build scalable data ingestion, labeling, augmentation, and storage pipelines (simulated and field data), ensuring accuracy across large volumes of labeled frames.
  • Real‑Time Detection: Optimize inference frameworks for edge deployment (GPU/FPGA).
  • Diagnostics & Monitoring: Develop dashboards and telemetry for drift analysis, hardware health monitoring, performance metrics, and automated retraining triggers.
  • Documentation & Mentorship: Author clear technical docs; mentor junior engineers on best practices in vision, sensor‑fusion, and embedded firmware engineering.
  • Qualitative Goal Determination: Determine development needs by directly analyzing the technical and physical constraints of our goals.
Qualifications
  • Hyper Engineer - your brain only releases dopamine when you're building
  • Experience: 3–6 years in computer vision, sensor fusion, or robotics perception roles.
  • Technical Skills:
      • Proficient in C++ and Python
      • Hands‑on with ML frameworks (TensorFlow, PyTorch) and real‑time inference engines (TensorRT, OpenVINO)
      • Familiarity with Docker and CI/CD for ML pipelines
      • Experience with multi‑sensor calibration and data synchronization
  • Education: BS/MS in Computer Science, Robotics, Electrical Engineering, Mathematics, or related (PhD a plus).
  • Soft Skills: Strong problem‑solving, communication, and cross‑functional collaboration ability.
Nice‑to‑Haves
  • Edge‑AI optimization (quantization, pruning)
  • Experience with FPGA or embedded GPU platforms
  • Background in defense or safety‑critical systems
  • Familiarity with cybersecurity guidelines and secure coding practices
Hardware & Firmware Requirements
  • Hardware Interfaces: Proven experience integrating and troubleshooting multi‑modal sensors (global‑shutter and rolling‑shutter cameras, thermal imagers, LiDAR/ToF modules, IMUs), including proficiency with GigE Vision, USB3 Vision, MIPI, CAN, SPI, and I²C protocols.
  • Embedded Firmware: Skilled in developing and debugging firmware in C/C++ (or Rust) for microcontrollers (STM32, NXP, TI) and FPGAs using VHDL/Verilog; familiarity with RTOS environments (FreeRTOS, Zephyr) and hardware abstraction layers for low‑latency data acquisition and sensor calibration.


!!!

Send all applications with a resume, evidence of excellent previous work, a portfolio, a GitHub/website link, and a description of the most impressive thing you've done to [email protected]

!!!
