CARLA¶
CARLA is an open-source simulator for autonomous driving research. As such, CARLA data is synthetic and can be generated under varying sensor and environmental conditions. The following documentation is largely incomplete and only describes the provided demo data.
Note
Data from the CARLA simulator can be collected using the LEAD framework, which provides a state-of-the-art expert driver in CARLA.
Quick Links¶
| | |
|---|---|
| Paper | |
| Website | |
| Code | |
| License | MIT License |
| Available splits | n/a |
Available Modalities¶
| Name | Available | Description |
|---|---|---|
| Ego Vehicle | ✓ | Depending on the collected dataset. For further information, see |
| Map | ✓ | We included a conversion method for OpenDRIVE maps. For further information, see |
| Bounding Boxes | ✓ | Depending on the collected dataset. For further information, see |
| Traffic Lights | ✗ | n/a |
| Cameras | ✓ | Depending on the collected dataset. For further information, see |
| Lidars | ✓ | Depending on the collected dataset. For further information, see |
Download¶
n/a
Installation¶
n/a
Dataset Specific¶
Box Detection Labels
- class py123d.parser.registry.DefaultBoxDetectionLabel[source]
Default box detection labels used in 123D. Common labels across datasets.
- EGO = 0
- VEHICLE = 1
- TRAIN = 2
- BICYCLE = 3
- PERSON = 4
- ANIMAL = 5
- TRAFFIC_SIGN = 6
- TRAFFIC_CONE = 7
- TRAFFIC_LIGHT = 8
- BARRIER = 9
- GENERIC_OBJECT = 10
- to_default()[source]
  Inherited, see superclass.
  Return type: DefaultBoxDetectionLabel
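For reference, the label set above can be mirrored as a plain Python `IntEnum`. This is a minimal sketch reconstructed from the listed values, not the actual `py123d.parser.registry` implementation, which may carry additional methods such as `to_default()`:

```python
from enum import IntEnum

# Illustrative replica of py123d.parser.registry.DefaultBoxDetectionLabel,
# built only from the values documented above.
class DefaultBoxDetectionLabel(IntEnum):
    EGO = 0
    VEHICLE = 1
    TRAIN = 2
    BICYCLE = 3
    PERSON = 4
    ANIMAL = 5
    TRAFFIC_SIGN = 6
    TRAFFIC_CONE = 7
    TRAFFIC_LIGHT = 8
    BARRIER = 9
    GENERIC_OBJECT = 10

# Example: map a raw integer class id from a detection record to its label name.
label = DefaultBoxDetectionLabel(1)
print(label.name)  # VEHICLE
```

Because the labels are integer-valued, they can be stored compactly in annotation files and recovered by constructing the enum from the raw id, as shown above.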
Dataset Issues¶
n/a
Citation¶
If you use CARLA in your research, please cite:
@inproceedings{Dosovitskiy2017CORL,
title = {{CARLA}: {An} Open Urban Driving Simulator},
author = {Alexey Dosovitskiy and German Ros and Felipe Codevilla and Antonio Lopez and Vladlen Koltun},
booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
year = {2017}
}