2026 FAQs
General
1. We would like to participate. What do we need to do?
For this edition, the data access information is provided on each track’s description page under the CHALLENGE tab. Please follow the instructions there.
2. I am interested only in submitting a paper but not in the Challenge. Can I do that?
Yes. Please make sure to submit your paper by the submission deadline.
3. How large can a team be?
There are no restrictions on team size.
4. Are we allowed to use other external data/pre-trained models?
External datasets or pre-trained models are allowed only if they are public. Teams that wish to be listed on the public leaderboard and win the challenge awards are NOT allowed to use any private data or private pre-trained models for either training or validation. The winning teams and runners-up are required to submit their training and testing code for verification after the challenge submission deadline, in order to ensure that no private data or private pre-trained models were used for training and that the tasks were performed by algorithms and not humans.
5. What are the prizes?
This information is shared in the Awards section.
6. Will we need to submit our code?
Teams need to make their code publicly accessible to be considered for awards, including a complete, reproducible pipeline for model training/creation. This ensures that no private data was used for training, that the tasks were performed by algorithms and not humans, and that the work contributes back to the community.
7. How will the submissions be evaluated?
The submission formats for each track are detailed on each track’s description page under CHALLENGE tab.
8. Are we allowed to use validation sets in training?
Yes. The validation sets are allowed to be used in training.
9. Are we allowed to use test sets in training?
Additional manual annotations on our testing data are strictly prohibited. We also discourage the use of testing data in any way during training, with or without labels, because the task is meant to be fairly evaluated as in real life, where we have no access to testing data at all. Although it is permitted to apply algorithms such as clustering to automatically generate pseudo labels on the testing data, we will choose a winning method that does not use such techniques when multiple teams have similar performance (within ~1%). Finally, please keep in mind that, as in all previous editions of the AI City Challenge, all winning methods and runners-up will be requested to submit their code for verification purposes. Their performance needs to be reproducible using the training/validation/synthetic data only.
10. Are we allowed to use data/pre-trained models from the previous edition(s) of the AI City Challenge?
Data from previous edition(s) of the AI City Challenge are allowed to be used.
11. Do the winning teams and runners-up need to submit papers and present at the workshop?
Track 1 – Multi-Camera 3D Perception (Sim2Real)
1. Is calibration available for each camera?
The comprehensive camera calibration information is available for each camera, including 3-by-4 camera matrix, intrinsic parameters, extrinsic parameters, etc.
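As a minimal sketch of how these calibration quantities fit together, the snippet below builds the 3-by-4 camera matrix from intrinsics K and extrinsics (R, t) and projects a world point into pixel coordinates. The numeric values here are illustrative placeholders, not values from the released calibration files.

```python
import numpy as np

# Hypothetical calibration values for illustration only; the released files
# provide the actual K (intrinsics), R, t (extrinsics), and 3x4 matrix.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                        # rotation: world -> camera
t = np.array([[0.0], [0.0], [5.0]])  # translation: world -> camera

P = K @ np.hstack([R, t])            # 3x4 camera (projection) matrix

def project(point_world):
    """Project a 3D world point to pixel coordinates via the 3x4 matrix."""
    X = np.append(point_world, 1.0)  # homogeneous coordinates
    u, v, w = P @ X
    return float(u / w), float(v / w)

print(project(np.array([0.0, 0.0, 0.0])))  # -> (960.0, 540.0)
```

With this identity rotation, the world origin sits 5 units in front of the camera and projects to the principal point (960, 540).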
2. What is the standard for labeling visible 2D bounding boxes?
The annotations of the test set are generated based on the same standards as the training and validation set.
- For occluded objects (objects that are blocked by another object within the camera frame), the object must satisfy BOTH the height and width visibility requirements.
- For truncated objects (objects that are cut off by the camera frame), the object must satisfy EITHER the height visibility requirement OR the width visibility requirement.
- Here are the definitions for visibility in height and width:
- Visibility for height
- If the head is visible, then label the object if 20% of the height is visible.
- If the head is not visible, then label the object if 60% of the height is visible.
- Visibility for width
- Label the object if more than 60% of the body width is visible.
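The rules above can be summarized as a small decision function. This is an unofficial sketch for clarity only; the function name and fractional inputs are assumptions, not part of the annotation tooling.

```python
def should_label(head_visible, height_frac, width_frac, truncated):
    """Unofficial sketch of the visibility rules for labeling 2D boxes.

    Height rule: label if the head is visible and >=20% of the height is
    visible, or the head is not visible and >=60% of the height is visible.
    Width rule: label if more than 60% of the body width is visible.
    Truncated objects need EITHER rule; occluded objects need BOTH.
    """
    height_ok = (head_visible and height_frac >= 0.20) or \
                (not head_visible and height_frac >= 0.60)
    width_ok = width_frac > 0.60
    return (height_ok or width_ok) if truncated else (height_ok and width_ok)
```

For example, an occluded person whose head is visible with 25% height but only 50% width visible would not be labeled, while the same person truncated by the frame edge would be.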
3. How are the object IDs used for evaluation? Do the submitted IDs need to be consistent with the ground truths?
We use the HOTA metric for evaluation. The IDs in the submitted results do not need to match the exact IDs in the ground truths. We will use bipartite matching for their comparison, which will be based on IoU of 3D bounding boxes in the global coordinate system.
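To illustrate the matching step, here is a minimal sketch of bipartite matching between predicted and ground-truth boxes using the Hungarian algorithm on a 1 − IoU cost matrix. It uses axis-aligned 3D boxes as a simplification; the official evaluation may use oriented boxes, and the threshold below is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes (xmin, ymin, zmin, xmax, ymax, zmax).

    Simplification for illustration; oriented-box IoU is more involved.
    """
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol = lambda box: np.prod(box[3:] - box[:3])
    return inter / (vol(a) + vol(b) - inter)

def match(preds, gts, thresh=0.5):
    """Optimal bipartite matching of predictions to ground truths by 3D IoU."""
    cost = np.array([[1.0 - iou_3d(p, g) for g in gts] for p in preds])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= thresh]
```

Because matching is done on geometry alone, the numeric IDs you submit only need to be internally consistent across frames, not identical to the ground-truth IDs.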
Track 2 – Transportation Safety Understanding and Captioning (Sim2Real)
1. Can we use any pre-trained models from earlier AI City Challenge versions of this track?
No. Models pre-trained or fine-tuned on AI City Challenge data from previous years or data from other tracks from this year’s challenge are not allowed for this track.
2. Can other pre-trained models be used for this challenge track?
Yes. Teams may fine-tune general pre-trained models with open weights such as Qwen 3.6 or Gemma 4. As solutions to this challenge should be reproducible, teams should refrain from using paid API-based models such as Google Gemini or OpenAI GPT.
3. Can we modify the synthetic videos to increase their realism before we train our models?
Yes. Teams may use generative models that enhance the realism of the scenes, such as the NVIDIA Cosmos line of models, to better align the synthetic and real data distributions.
Track 3 – Anomalous Events in Transportation
[We will add frequently asked questions with answers here for this track]
Track 4 – Text-Based Person Re-Identification (Sim2Real)
[We will add frequently asked questions with answers here for this track]
Track 5 – Generative Traffic Video Forecasting
[We will add frequently asked questions with answers here for this track]
Track 6 – Cross-City Object Detection (Milestone Systems Hafnia)
1. Do participants get direct access to the full training and test datasets?
No. Only a small sample dataset will be downloadable for local development and pipeline adaptation. The full training data and the test data remain hidden and are accessed only through the Milestone Systems Hafnia platform.
2. Can I train my model locally?
You may train locally only on the small downloadable sample dataset for debugging and adapting your pipeline. Training on the full dataset must be done through Hafnia.
3. Are pretrained models or external data allowed?
Pretrained models are allowed. External data may also be allowed, but subject to constraints that will be detailed in the final challenge rules and platform documentation.
4. How do I submit results to the challenge leaderboard?
Participants will upload their inference code, model, and Docker environment to Hafnia. The platform will run inference on the hidden test data and generate predictions in the format required by the AI City Challenge submission system.
5. When will Track 6 and the associated platforms be available?
Track 6 and the associated platforms will be released on May 18, 2026. This includes the track page, registration information, and the sample dataset, while some platform services may become available slightly later.
