2026 Challenge Track Description

Track 6: Cross-City Object Detection (Milestone Systems Hafnia)

Challenge Track 6 focuses on fine-grained object detection in real-world traffic imagery under geographic domain shift. Participants train models on data from one city and evaluate them on a distinct target city with different visual characteristics, scene layouts, and environmental conditions. The track is designed to study cross-city generalization, a setting that remains underexplored due to the difficulty of obtaining large-scale, compliant, real-world data from multiple locations. To support this effort, the track is powered by Milestone Systems Hafnia, which provides access to compliant and ethically sourced video data together with managed training infrastructure. The data used in this track are irreversibly anonymized, and the full training and test datasets are kept hidden from participants, enabling privacy-conscious and compliance-aware experimentation on real-world camera data. Track 6 and the associated platforms will be released on May 18, 2026.

  • Data 

This track is based on a subset of the large-scale real-world dataset curated through Milestone Systems Hafnia. The benchmark contains over 20k annotated images for training and a similarly sized hidden test set, with over 100k annotated object instances in the training split and a comparable number in the test split. The data are extracted from real-world traffic video streams and include diverse viewpoints, roadway types, and imaging conditions. Images are primarily provided at 1080p resolution, with some data available at 4K.

The benchmark includes 14 object classes:

Class ID    Class Name
--------    ------------------
1           Car
2           Van
3           Pickup Truck
4           Single Truck
5           Combo Truck
6           Heavy Duty Vehicle
7           Trailer
8           Emergency Vehicle
9           Motorcycle
10          Bicycle
11          Tricycle
12          Bus
13          RV
14          People

Annotations are provided as axis-aligned 2D bounding boxes. Objects are annotated as long as they are at least slightly visible, including partially occluded or truncated objects. The minimum box size considered in the benchmark is 10 × 10 pixels. To reduce ambiguity caused by tiny distant objects that are not part of the benchmark, some ignored regions are blurred. Frames are sampled from video with a minimum spacing of 2 seconds, increasing visual diversity and reducing redundancy.
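As a local sanity check, the 10 × 10 pixel minimum above might be enforced when preparing or validating annotation files. This is a minimal sketch: the `(x, y, width, height)` box fields and the list-of-dicts structure are assumptions for illustration, not the official Hafnia annotation schema.

```python
# Hypothetical sanity check for local annotation files: drop boxes smaller
# than the 10 x 10 pixel minimum used by the benchmark. The box fields
# (x, y, width, height) are an assumed layout, not the official schema.

MIN_BOX_SIZE = 10  # pixels, per the benchmark specification

def filter_small_boxes(annotations):
    """Keep only boxes whose width and height both meet the minimum size."""
    return [
        ann for ann in annotations
        if ann["width"] >= MIN_BOX_SIZE and ann["height"] >= MIN_BOX_SIZE
    ]

example = [
    {"class_id": 1, "x": 40, "y": 60, "width": 120, "height": 80},  # kept
    {"class_id": 14, "x": 5, "y": 5, "width": 8, "height": 20},     # too narrow
]
print(filter_small_boxes(example))
```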

A hidden validation subset will be available within Hafnia as part of the training data split. Detailed dataset statistics, metadata, and annotation information will be available on the dataset page inside the Hafnia platform. 

  • Task

Given an input image, participants must detect all target objects and assign each detection to one of the 14 predefined fine-grained classes.

The task is formulated as a single-image detection problem. No temporal cues are allowed at inference time. The primary challenge is not only achieving strong detection accuracy, but doing so under cross-city domain shift, where the model must generalize from the source-city training distribution to a distinct target-city test distribution.

Participants may use pretrained models. External data may also be allowed, subject to challenge constraints that will be detailed in the final rules (May 18). Ensembles are permitted as long as they can be executed as a single inference pipeline within the platform constraints. Due to model size and platform limitations, very large foundation-scale models may not be supported in this track; the final platform documentation will specify the applicable limits.

  • Submission Format

Participants will run inference through the Milestone Systems Hafnia platform. Instead of manually preparing challenge submissions from raw test data, teams will upload:

      • Trained model weights
      • Inference source code
      • Dockerfile describing the inference environment
      • Any required configuration or runtime parameters

The platform will execute inference on the hidden test data and produce prediction files in the format required by the official AICity evaluation system. Participants will then be able to download the generated results and submit them to the AICity challenge page. Automatic transfer from Hafnia to the AICity submission system may be supported in a later update.
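The exact prediction-file format will be defined by the official AICity evaluation system. Purely as an illustration of how an inference pipeline might serialize its outputs, the sketch below assumes a hypothetical line-based layout (`image_id class_id x y w h score`); the real format may differ.

```python
# Illustrative only: the layout "image_id class_id x y w h score" is a
# hypothetical serialization, not the official AICity prediction format.

def write_predictions(detections, path):
    """detections: list of (image_id, class_id, x, y, w, h, score) tuples."""
    with open(path, "w") as f:
        for image_id, class_id, x, y, w, h, score in detections:
            f.write(f"{image_id} {class_id} {x} {y} {w} {h} {score:.4f}\n")

dets = [
    ("img_0001", 1, 100, 150, 80, 60, 0.92),   # a Car detection
    ("img_0001", 14, 300, 200, 20, 50, 0.71),  # a People detection
]
write_predictions(dets, "predictions.txt")
```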

The training and inference package formats are designed to be similar, so that participants can reuse most of their code structure. Starter templates for training jobs and dataset configurations will be provided through the platform documentation.

  • Evaluation

The primary evaluation metric is mean Average Precision (mAP), with the exact evaluation details to be released in the final rules update.

The ranking will be based mainly on performance on the hidden target-city test set, while also considering performance on data from the training city. A public leaderboard will be available during the challenge, showing partial results on the hidden target-city test data. For final ranking, inference time and model size may be used as tie-breaking criteria.

Further metric details, including IoU thresholds and averaging conventions, will follow commonly used state-of-the-art object detection practice and will be published in the official evaluation protocol.
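To make the metric concrete, here is a minimal single-class sketch of Average Precision at IoU 0.5, following common detection practice (greedy matching of score-sorted predictions, non-interpolated precision/recall integration). The official protocol may differ in IoU thresholds, interpolation, and class averaging; this is a simplified illustration, not the evaluation code.

```python
# Simplified single-image, single-class AP at a fixed IoU threshold.
# Boxes are (x1, y1, x2, y2); this is illustrative, not the official metric.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (box, score); gts: list of ground-truth boxes."""
    preds = sorted(preds, key=lambda p: -p[1])  # highest confidence first
    matched = set()
    tp = []
    for box, _ in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            o = iou(box, g)
            if o >= best_iou:  # greedily match the best unmatched GT
                best, best_iou = i, o
        if best is not None:
            matched.add(best)
            tp.append(1)
        else:
            tp.append(0)  # false positive
    # integrate precision over recall (non-interpolated)
    ap, cum_tp, prev_recall = 0.0, 0, 0.0
    for k, t in enumerate(tp, start=1):
        cum_tp += t
        recall = cum_tp / len(gts)
        precision = cum_tp / k
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

mAP would then average this quantity over classes (and typically over several IoU thresholds, as in COCO-style evaluation).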

  • Data Access

This track is hosted through Milestone Systems Hafnia Training as a Service. Participants will not directly access the full-resolution hidden training or test data. Instead, the challenge is organized around a managed workflow that supports compliant experimentation on real-world data.

The expected high-level participation flow is:

      1. Register on the Hafnia platform using the ECCV / AICity challenge access code (to be shared here in the May 18 update).
      2. Wait for account approval and challenge access to be granted by the organizers.
      3. Explore the Hafnia platform and the documentation.
      4. Download the sample dataset and starter materials.
      5. Adapt your training pipeline to the Hafnia dataset format.
      6. Build a Dockerfile for the training environment.
      7. Upload the training job, including code, model definition, and Docker environment.
      8. Monitor your training jobs in Hafnia's experiment tracker.
      9. Retrieve the trained model artifacts after training finishes.
      10. Upload the inference job with the trained model, inference code, and Docker environment.
      11. Run inference on the platform.
      12. Download the generated results and submit them to the AICity evaluation page.

A public registration page for Hafnia is available here:

https://hafnia.milestonesys.com/joinwaitlist

The track page update, sample data, and Hafnia access will be released on May 18, 2026. The inference environment and some service components may become available slightly later, and the remaining timeline will follow the official AICity challenge schedule.

  • Platform Resources and Constraints

Participants will be able to use the downloadable sample dataset locally for:

      • Understanding the dataset structure
      • Validating data loading pipelines
      • Testing training and inference code
      • Checking Docker compatibility

The full training data will only be accessible inside Hafnia. The platform will provide:

      • Sample data
      • Dataset documentation and statistics
      • Starter templates for training and dataset configurations
      • Training logs and experiment monitoring
      • Export of trained model weights.

Resource usage will be constrained through platform limits, including:

      • Model size limitations
      • Limits on the number of experiments
      • Restricted compute tiers.

These constraints are intended to ensure fair access to shared infrastructure and will be described in more detail in the final documentation.

  • Privacy and Compliance

A key motivation behind Milestone Systems Hafnia is the creation of a large-scale, compliant, and legally sourced computer vision data library for static real-world cameras. This track showcases that capability in the form of a cross-city benchmark built on hidden real-world data.

The data used in this challenge are irreversibly anonymized. Hafnia uses a privacy-preserving data pipeline, including DNAT (Deep Natural Anonymization) technology from brighter AI, to protect identities while preserving the utility of the visual data for computer vision development. Participants do not access the raw full-resolution hidden training or test corpus directly. This hidden-data training protocol supports privacy-conscious benchmarking and compliance-aware experimentation on real-world traffic imagery.

  • Important dates

      • May 18, 2026: Track 6 page released
      • May 18, 2026: Hafnia access and registration code released
      • May 18, 2026: Sample dataset released
      • A few weeks later: test service and additional platform components become available
      • Further dates: leaderboard opening, submission deadlines, and final challenge milestones will follow the official AICity challenge calendar (to be updated)
  • Organizers

This track is organized by:

      • Milestone Systems – Hafnia
      • Universidad Autónoma de Madrid
      • NVIDIA

Track organizers: 

      • Fulgencio Navarro – Milestone Systems
      • Rafael Martin – Milestone Systems
      • Peter Christiansen – Milestone Systems
      • Juan Carlos SanMiguel – Universidad Autónoma de Madrid
      • Alvaro García-Martín – Universidad Autónoma de Madrid
  • References

[1] Milestone Systems, “Project Hafnia: A Game-Changer in AI Model Training”, 2025.

[2] brighter AI, “Privacy v Progress: How DNAT Protects Privacy in the Age of Machine Learning,” 2022. 

[3] Milestone Systems Hafnia Python SDK / CLI documentation – GitHub

  • Contact

For track-related questions, please contact:

      • info.hafnia@milestone.dk