2023 FAQs
General
1. We would like to participate. What do we need to do?
Please fill out the participation intent form to list your institution, your team and the tracks you will participate in. You just need to follow the instructions and submit the form.
2. I am interested only in submitting a paper but not in the Challenge. Can I do that?
Yes. Please make sure to submit your paper by the submission deadline.
3. How large can a team be?
There are no restrictions on team size.
4. What are the rules for downloading the data set?
A participation agreement is provided before the data are shared. You need to accept the agreement and submit your response in order to get access to the data set.
5. Can I use any available data set to train models in this Challenge?
Teams that want to be listed on the public leaderboard and compete for the challenge awards are NOT allowed to use any external data for either training or validation. The winning teams and runners-up are required to submit their training and testing code for verification after the challenge submission deadline. In addition, teams are NOT allowed to use data and trained models across different challenge tracks. The only exception is that pre-trained re-identification models may be used for other tracking-related tasks.
6. What are the prizes?
This information is shared in the Awards section.
7. Will we need to submit our code?
Teams need to make their code publicly accessible to be considered for an award, including a complete, reproducible pipeline for model training/creation. This is to ensure that no external data were used for training and that the tasks were performed by algorithms rather than humans, and it also contributes back to the community.
8. How will the submissions be evaluated?
The submission formats for each track are detailed on the Data and Evaluation page.
9. Are we allowed to use validation sets in training?
Yes. The validation sets are allowed to be used in training.
10. Are we allowed to use test sets in training?
Additional manual annotations on our testing data are strictly prohibited. We also do not encourage the use of testing data in any way during training, with or without labels, because the task is supposed to be fairly evaluated in real life where we don’t have access to testing data at all. Although it is permitted to perform algorithms like clustering to automatically generate pseudo labels on the testing data, we will choose a winning method without using such techniques when multiple teams have similar performance (~1%). Finally, please keep in mind that, like all the previous editions of the AI City Challenge, all the winning methods and runners-up will be requested to submit their code for verification purposes. Their performance needs to be reproducible using the training/validation/synthetic data only.
11. Are we allowed to use data/pre-trained models from the previous edition(s) of the AI City Challenge?
Data from previous edition(s) of the AI City Challenge are considered external data. These data and pre-trained models should NOT be used in submissions to the public leaderboards. However, it is allowed to re-train these models on the new training sets from the latest edition.
12. Are we allowed to use other external data/pre-trained models?
The use of any real external data is prohibited. There is NO restriction on synthetic data. Pre-trained models trained on ImageNet / MS COCO that are not built directly for the challenge tasks, such as classification models (pre-trained ResNet, DenseNet, etc.) and detection models (pre-trained YOLO, Mask R-CNN, etc.), can be applied. Please check with us if you have any questions about the data/models you are using.
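For illustration only, here is a minimal sketch (assuming PyTorch with a recent torchvision, version 0.13 or later) of loading the kind of generic ImageNet / MS COCO pre-trained models mentioned above; the specific backbones are just examples, not recommendations.

```python
# Illustrative sketch only (assumes torchvision >= 0.13). Generic pre-trained
# weights such as these are allowed because they are not trained for the
# challenge tasks themselves.
import torchvision

# ImageNet-pretrained classification backbone (feature extractor).
backbone = torchvision.models.resnet50(
    weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V2
)

# MS COCO-pretrained detector, to be fine-tuned on the challenge training data only.
detector = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=torchvision.models.detection.MaskRCNN_ResNet50_FPN_Weights.COCO_V1
)
```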
Track 1
1. Is calibration available for each camera?
Calibration is not provided with the dataset, but a top-view map is available for each subset. Teams can choose correspondences between the frame images and the top-view map to compute homographies for projecting objects into the world coordinate system.
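As a rough illustration of that workflow, the sketch below (assuming OpenCV and NumPy; the point correspondences are made-up placeholders) estimates a homography from hand-picked frame/map correspondences and projects a bounding-box bottom center onto the top-view map.

```python
# Hypothetical sketch: the correspondences below are placeholders that a team
# would pick by hand from a frame image and the matching top-view map.
import cv2
import numpy as np

frame_pts = np.array([[812, 640], [1334, 598], [402, 911], [1716, 880]], dtype=np.float32)
map_pts = np.array([[120, 310], [260, 305], [118, 455], [270, 452]], dtype=np.float32)

# Homography mapping frame pixels to top-view (world) coordinates.
H, _ = cv2.findHomography(frame_pts, map_pts)

def project_to_map(x, y):
    """Project a frame point (e.g., a bounding-box bottom center) onto the top-view map."""
    pt = np.array([[[x, y]]], dtype=np.float32)
    return cv2.perspectiveTransform(pt, H)[0, 0]  # (map_x, map_y)
```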
2. What is the standard of labeling?
A bounding box is annotated if at least 60% of the body is visible, or if the head and shoulders are visible.
3. How are the object IDs used for evaluation? Do the submitted IDs need to be consistent with the ground truths?
We use the same metrics as MOTChallenge for the evaluation. The IDs in the submitted results do not need to match the exact IDs in the ground truths. We will use bipartite matching for their comparison, which will be based on IOU of bounding boxes.
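For intuition only (this is not the official evaluation code), here is a minimal sketch of IoU-based bipartite matching between predicted and ground-truth boxes, assuming NumPy and SciPy.

```python
# Illustrative only; the official evaluation follows the MOTChallenge protocol.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def match_boxes(preds, gts, iou_thresh=0.5):
    """Return (pred_index, gt_index) pairs whose IoU clears the threshold."""
    if not preds or not gts:
        return []
    cost = np.array([[1.0 - iou(p, g) for g in gts] for p in preds])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
```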
Track 2
1. What is UUID?
2. Can I use pre-trained natural language models?
Yes. Any NLP models that are not trained specifically for the CityFlow Benchmark are allowed for Track 2.
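For example, a minimal sketch (assuming the Hugging Face transformers library; the model name and query text are placeholders) of embedding a natural-language query with an off-the-shelf pre-trained model:

```python
# Illustrative sketch: a generic pre-trained language model, not one trained on
# the CityFlow benchmark, used to embed a retrieval query.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

query = "A red sedan turns left at the intersection."  # placeholder query text
with torch.no_grad():
    inputs = tokenizer(query, return_tensors="pt")
    query_embedding = model(**inputs).last_hidden_state[:, 0]  # [CLS] token embedding
```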
3. Where can I find the baseline model?
The repository is available on GitHub: https://github.com/fredfung007/cityflow-nl
The arXiv paper of the CityFlow-NL dataset and the baseline model is available here.
Track 3
1. Can we use dataset A2 for supervised or semi-supervised training on our algorithms?
As with the other tracks, for Track 3 the data set A2 (provided to teams without labels) should be used for validation only, which means any kind of training (manual labeling, semi-supervised labeling) using data set A2 is prohibited.
2. Why are the three camera views sometimes out of sync?
The videos are synced at the start. However, because some video concatenation happened during the creation process, they may be slightly out of sync in later parts; the offset should not be more than one second.
3. Does the activity id start at 0 or 1?
There was a discrepancy between the track description and the label file in the dataset. To confirm, the activity id starts at 0 and the info on the track description page has been updated accordingly.
Track 4
1. What does an automated checkout process look like?
The following figure illustrates a snapshot of our testing videos mimicking an automated checkout process.
2. Can the test set A be used for training models with methods such as pseudo-label, semi-supervision, or self-supervision?
No, the test set cannot be used in this way. We neither limit nor recommend how participants should detect the tray, but teams should keep in mind that the camera view might change slightly within a recorded test video.
Track 5
1. What is the standard of labeling?
An object is annotated if at least 40% of the object is visible. The minimum height and width of the bounding boxes are 40 pixels; objects smaller than 40 pixels are not taken into consideration and will not influence the test accuracy results.
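Accordingly, one optional post-processing step (a sketch only; the box format below is hypothetical) is to drop detections below the 40-pixel minimum before creating a submission, since such boxes do not affect the evaluation:

```python
# Hypothetical helper: boxes are (x, y, w, h) in pixels; detections smaller than
# the 40-pixel minimum described above are ignored by the evaluation anyway.
MIN_SIZE = 40

def large_enough(box):
    return box[2] >= MIN_SIZE and box[3] >= MIN_SIZE

detections = [(10, 20, 38, 90), (300, 120, 55, 72)]    # placeholder detections
filtered = [b for b in detections if large_enough(b)]  # keeps only the second box
```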
2. In the dataset, objects behind the redacted areas are not annotated. Why?
Objects that overlap with the redacted (blurred) areas are not taken into consideration, because blurring can suppress important features of the objects to be detected. In the test dataset, any objects in the submission file that overlap with redacted areas will be ignored and will not influence test accuracy.
3. Is the data type of the width and height in the submitted results for Track 5 Int or Float?
The data type of width and height for the submitted result is Integer.
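For instance, a minimal formatting sketch (the field order below is only illustrative; the exact submission format is defined on the Data and Evaluation page) that casts the box coordinates and dimensions to integers before writing a row:

```python
# Hypothetical row writer: the field order here is illustrative, not the official format.
def format_row(video_id, frame_id, x, y, w, h, cls, conf):
    return (f"{video_id},{frame_id},"
            f"{int(round(x))},{int(round(y))},{int(round(w))},{int(round(h))},"
            f"{cls},{conf:.4f}")

print(format_row(1, 42, 100.4, 57.8, 63.2, 80.9, 2, 0.8731))
# -> 1,42,100,58,63,81,2,0.8731
```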