Software Engineer, Perception (Autonomy)


2018-11-09T19:49:16.674649Z

At Lyft, community is what we are and it’s what we do. It’s what makes us different. To create the best ride for all, we start in our own community by creating an open, inclusive, and diverse organization where all team members are recognized for what they bring.

We care deeply about delivering the best transportation experience; this means the best experience for the passenger and the best experience for the driver. We believe this quality of service can only be achieved with a deep understanding of our world, our cities, our streets… how they evolve, how they breathe. We embrace the powerful positive impact autonomous transportation will bring to our everyday lives, and with our ambition, we will become a leader in the development and operation of such vehicles. Thanks to our network, with hundreds of millions of rides every year, we have the means to make autonomy a safe reality. As a member of Level 5, you will have the opportunity to develop and deploy tomorrow’s hardware & software solutions and thereby revolutionize transportation.

As part of the Autonomy Team, you will interact daily with other software engineers to tackle highly advanced AI challenges. Eventually we expect all Autonomy Team members to work on a variety of problems across the autonomy space; however, with a focus on perception, your work will initially involve turning our constant flow of sensor data into a model of the world. For this position, we are looking for a software engineer with a general understanding of autonomous vehicles and strong expertise in computer vision and/or machine learning.

Responsibilities:


  • Work on core perception algorithms such as object detection, tracking, segmentation, and state estimation

  • Build segmentation and classification algorithms on LiDAR point cloud data

  • Implement state-of-the-art detection and tracking algorithms for vision

  • Develop sensor fusion algorithms for radar, LiDAR, and vision modalities

  • Implement real-time algorithms (< 10 milliseconds) on CPU/GPU in C++

  • Build tools and infrastructure to evaluate the performance of the perception stack and track it over time


Experience & Skills:

  • Ability to produce production-quality C++

  • Strong background in mathematics, linear algebra, geometry, and probability

  • Ability to build machine learning applications using a broad range of tools such as decision trees, Hidden Markov Models, deep neural networks, etc.

  • Bachelor's degree or higher in Computer Science, Electrical Engineering, or related field

  • Ability to work in a fast-paced environment and collaborate across teams and disciplines

  • Openness to new / different ideas. Ability to evaluate multiple approaches and choose the best one based on first principles


Nice to Have:

  • 2+ years of experience working in a related role

  • 5+ years developing in C++ / Python

  • Hands-on experience applying deep learning to computer vision or other sensor data

  • Experience with GPU programming in CUDA

  • Experience with classical computer vision techniques such as structure from motion, RANSAC, Hough transforms, camera calibration, pinhole projection models, etc.


Lyft is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Lyft does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender-identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Lyft also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. Pursuant to the San Francisco Fair Chance Ordinance and other similar state laws and local ordinances, and its internal policy, Lyft will also consider for employment qualified applicants with arrest and conviction records.