Top Related Projects
- openpilot - an operating system for robotics. Currently, it upgrades the driver assistance system in 275+ supported cars.
- Autoware - the world's leading open-source software project for autonomous driving
- CARLA - an open-source simulator for autonomous driving research
Quick Overview
The udacity/self-driving-car repository is an open-source project that contains code and data from Udacity's Self-Driving Car Nanodegree program. It serves as a resource for learning about autonomous vehicle technology and includes various projects, datasets, and algorithms related to self-driving cars.
Pros
- Provides a comprehensive collection of self-driving car projects and algorithms
- Offers real-world datasets for testing and development
- Encourages community collaboration and contributions
- Serves as an educational resource for those interested in autonomous vehicle technology
Cons
- Some projects may be outdated or no longer maintained
- Limited documentation for certain components
- May require significant computational resources for some projects
- Not a production-ready solution for autonomous driving
Code Examples
- Lane detection using OpenCV:

```python
import cv2
import numpy as np

def detect_lanes(image):
    # Edge detection on the grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Probabilistic Hough transform; returns None when no lines are found
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                            minLineLength=100, maxLineGap=50)
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line[0]
            cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return image
```
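In practice, Hough-based lane pipelines like the one above usually mask out everything above the horizon first, so that edges from the sky, trees, and buildings never reach the line detector. A minimal sketch of such a region-of-interest mask (the function name and the `top_fraction` value are illustrative assumptions, not code from the repository):

```python
import numpy as np

def mask_region_of_interest(edges, top_fraction=0.6):
    """Zero out edge pixels above an assumed horizon line.

    top_fraction is the fraction of the image height, measured from
    the top, that is discarded before line fitting.
    """
    masked = np.zeros_like(edges)
    horizon = int(edges.shape[0] * top_fraction)
    # Keep only the lower part of the edge map, where the road lies
    masked[horizon:, :] = edges[horizon:, :]
    return masked
```

The masked edge map would then be passed to `cv2.HoughLinesP` in place of the raw Canny output.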
- Simple steering angle prediction using a CNN (the classic NVIDIA PilotNet layout):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

def create_model():
    # Five convolutional layers followed by fully connected layers,
    # mapping a 66x200x3 image to a single steering angle
    model = Sequential([
        Conv2D(24, (5, 5), strides=(2, 2), activation='relu', input_shape=(66, 200, 3)),
        Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Flatten(),
        Dense(100, activation='relu'),
        Dense(50, activation='relu'),
        Dense(10, activation='relu'),
        Dense(1)  # regression output: steering angle
    ])
    return model
```
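A model like this expects its 66x200 input images to be normalized before training and inference. The exact scheme varies between student models, but a common choice is to center pixel values around zero; a minimal sketch (the function name and scaling are assumptions for illustration):

```python
import numpy as np

def normalize_image(image):
    """Scale uint8 pixels from [0, 255] into [-0.5, 0.5].

    Centering inputs around zero generally helps convolutional
    layers train faster; the exact scheme varies between models.
    """
    return image.astype(np.float32) / 255.0 - 0.5
```

The same normalization must be applied at inference time that was used during training, or predicted steering angles will be meaningless.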
- Basic obstacle detection using LIDAR data:

```python
import numpy as np

def detect_obstacles(lidar_data, distance_threshold):
    # Keep every point whose Euclidean distance from the
    # sensor origin falls below the threshold
    obstacles = []
    for point in lidar_data:
        x, y, z = point
        distance = np.sqrt(x**2 + y**2 + z**2)
        if distance < distance_threshold:
            obstacles.append(point)
    return obstacles
```
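For real point clouds with hundreds of thousands of points, a per-point Python loop like the one above is slow. The same filter can be expressed as a single vectorized NumPy operation; a sketch assuming `lidar_data` can be viewed as an (N, 3) array of x, y, z coordinates:

```python
import numpy as np

def detect_obstacles_vectorized(lidar_data, distance_threshold):
    """Return all points closer to the sensor than distance_threshold.

    lidar_data: (N, 3) array-like of x, y, z coordinates.
    """
    points = np.asarray(lidar_data, dtype=np.float64)
    # Euclidean distance of every point from the sensor origin, at once
    distances = np.linalg.norm(points, axis=1)
    # Boolean-mask indexing selects the near points without a Python loop
    return points[distances < distance_threshold]
```

The result is the same set of points, but the distance computation and filtering happen in compiled NumPy code rather than the interpreter.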
Getting Started
To get started with the udacity/self-driving-car repository:

1. Clone the repository:

```bash
git clone https://github.com/udacity/self-driving-car.git
```

2. Navigate to a specific project directory:

```bash
cd self-driving-car/project_name
```

3. Install the required dependencies (if a requirements file is provided):

```bash
pip install -r requirements.txt
```

4. Run the project-specific scripts or notebooks as instructed in the project's README file.
Note: Each project may have different requirements and setup instructions. Always refer to the project-specific documentation for detailed guidance.
Competitor Comparisons
openpilot is an operating system for robotics. Currently, it upgrades the driver assistance system in 275+ supported cars.
Pros of openpilot
- Active development with frequent updates and improvements
- Supports a wider range of vehicle models
- More advanced features, including lane keeping and adaptive cruise control
Cons of openpilot
- Steeper learning curve for beginners
- Requires specific hardware (e.g., EON, comma two) for full functionality
- More complex installation process
Code Comparison
openpilot:

```python
def update(self):
    if self.sm.updated['carState']:
        self.v_cruise_kph = self.sm['carState'].cruiseState.speed * CV.MS_TO_KPH
        self.v_cruise_cluster_kph = self.v_cruise_kph
```

self-driving-car:

```python
def process_image(image):
    image = cv2.resize(image, (200, 66))
    image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    return image
```
Summary
openpilot is a more advanced and actively maintained project, offering support for a wider range of vehicles and more sophisticated features. However, it comes with a steeper learning curve and specific hardware requirements. self-driving-car, while less feature-rich, may be more accessible for beginners and provides a solid foundation for learning autonomous driving concepts. The code comparison shows openpilot's focus on real-time vehicle control, while self-driving-car emphasizes image processing for perception tasks.
Autoware - the world's leading open-source software project for autonomous driving
Pros of Autoware
- More comprehensive and production-ready framework for autonomous driving
- Active development with frequent updates and a larger community
- Supports ROS2, providing better performance and real-time capabilities
Cons of Autoware
- Steeper learning curve due to its complexity and extensive features
- Requires more computational resources to run effectively
- Less beginner-friendly compared to the Udacity project
Code Comparison
Self-driving-car (Python):

```python
def process_image(image):
    yuv = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)
```

Autoware (C++):

```cpp
void LaneDetector::detectLanes(const cv::Mat& image, std::vector<Lane>& lanes) {
    cv::Mat gray, edges;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);
    // Further processing...
}
```
The Self-driving-car code focuses on basic image processing, while Autoware's example demonstrates a more complex lane detection algorithm. Autoware's codebase is generally more extensive and modular, reflecting its production-ready nature.
CARLA - open-source simulator for autonomous driving research.
Pros of CARLA
- More advanced and realistic simulation environment
- Actively maintained with regular updates
- Extensive documentation and tutorials
Cons of CARLA
- Steeper learning curve
- Higher system requirements
Code Comparison
CARLA example (Python):

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()
blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.find('vehicle.tesla.model3')
spawn_point = carla.Transform(carla.Location(x=230, y=195, z=40))
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
```

Self-Driving Car example (Python):

```python
from vehicle import Vehicle

car = Vehicle()
car.start_engine()
car.accelerate(0.5)
car.steer(0.2)
car.brake(0.1)
```
The CARLA example demonstrates a more complex setup with a realistic simulation environment, while the Self-Driving Car example shows a simpler, more abstract approach to vehicle control.
CARLA offers a more comprehensive and realistic simulation platform, making it suitable for advanced research and development. However, it may require more resources and time to set up and use effectively. The Self-Driving Car repository provides a simpler starting point for learning basic concepts but may lack the depth and realism needed for more advanced applications.
README
This repository is deprecated. Currently enrolled learners, if any, can utilize the https://knowledge.udacity.com/ forum for help on specific issues.
We're Building an Open Source Self-Driving Car
And we want your help!
At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing and useful subject matter. When we decided to build the Self-Driving Car Nanodegree program, to teach the world to build autonomous vehicles, we instantly knew we had to tackle our own self-driving car too.
Together with Google Self-Driving Car founder and Udacity President Sebastian Thrun, we formed our core Self-Driving Car Team. One of the first decisions we made? Open source code, written by hundreds of students from across the globe!
You can read more about our plans for this project.
Contributions
Here's a list of the projects we've open sourced:
- Deep Learning Steering Models: many different neural networks trained to predict steering angles of the car. More information here.
- Camera Mount by @spartanhaden: a mount to support a lens and camera body that can be mounted using standard GoPro hardware
- Annotated Driving Datasets: many hours of labelled driving data
- Driving Datasets: over 10 hours of driving data (LIDAR, camera frames and more)
- ROS Steering Node: useful to enable the deep learning models to interact with ROS
How to Contribute
Like any open source project, this code base will require a certain amount of thoughtfulness. However, when you add a 2-ton vehicle into the equation, we also need to make safety our absolute top priority, and pull requests just don't cut it. To really optimize for safety, we're breaking down the problem of making the car autonomous into Udacity Challenges.
Challenges
Each challenge will contain awesome prizes (cash and others) for the most effective contributions, but more importantly, the challenge format enables us to benchmark the safety of the code before we ever think of running it in the car. We believe challenges to be the best medium for us to build a Level-4 autonomous vehicle, while at the same time offering our contributors a valuable and exciting learning experience.
You can find a current list of challenges, with lots of information, on the Udacity self-driving car page. This is the primary way to contribute to this open source self-driving car project.
Core Contributors
@ericrgon
@macjshiggins
@olivercameron
Open Source Base Software Support