Challenge Winners

  • Teams need to submit a workshop paper to be eligible for awards
  • Teams need to open-source their full code for result reproduction to be eligible for awards

Track 1:

Winner: Team28 matcher

Box-Grained Reranking Matching for Multi-Camera Multi-Target Tracking

Runner-up: Team59 BOE

Multi-Camera Vehicle Tracking System for AI City Challenge 2022

The following table shows the top teams (that submitted papers) from the public leaderboard of Track 1 as of the challenge submission deadline.

Rank | Team ID | Team Name       | Score
1    | 28      | matcher         | 0.8486
2    | 59      | BOE             | 0.8437
3    | 37      | TAG             | 0.8371
4    | 50      | Fraunhofer IOSB | 0.8348
10   | 94      | SKKU            | 0.8129
18   | 4       | HCMIU           | 0.7255

Track 2:

Joint Winner: Team183 MegVideo

Symmetric Network with Spatial Relationship Modeling for Natural Language-based Vehicle Retrieval

Joint Winner: Team176 Must Win (Baidu-SYSU)

A Multi-Granularity Retrieval System for Natural Language-Based Vehicle Retrieval

Runner-up: Team91 HCMUS

Text Query based Traffic Video Event Retrieval with Global-Local Fusion Embedding

The following table shows the top teams from the public leaderboard of Track 2 as of the challenge submission deadline.

  • Team4 did not open-source their code for result reproduction; thus, per the challenge rules, we had to disqualify Team4 from winning an award.
  • Team176 used Track 1 data for training (which is prohibited), which gave them a significant advantage by workshop time. Afterwards, Team176 removed the Track 1 data from training and demonstrated that their performance was still far ahead. Considering all of the above, we declared Team176 and Team183 joint winners.

Rank | Team ID | Team          | Score (MRR)
1    | 176     | Must Win      | 0.6606
3    | 4       | HCMIU-CVIP    | 0.4773
4    | 183     | MegVideo      | 0.4392
5    | 91      | HCMUS         | 0.3611
7    | 10      | Terminus-AI   | 0.3320
9    | 24      | BUPT_MCPRL_T2 | 0.3012
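
The Track 2 scores above are Mean Reciprocal Rank (MRR) values: the average, over all queries, of one divided by the rank at which the correct vehicle track is retrieved. The sketch below is illustrative only and is not the official evaluation code; the function name `mean_reciprocal_rank` and its inputs are hypothetical.

```python
def mean_reciprocal_rank(ranked_results, ground_truth):
    """Illustrative MRR computation (not the official evaluation script).

    ranked_results: list of ranked candidate-ID lists, one per query
    ground_truth:   list of the correct candidate ID for each query
    """
    reciprocal_ranks = []
    for candidates, target in zip(ranked_results, ground_truth):
        if target in candidates:
            # Rank is the 1-based position of the correct item in the ranking.
            rank = candidates.index(target) + 1
            reciprocal_ranks.append(1.0 / rank)
        else:
            # A target that is never retrieved contributes 0.
            reciprocal_ranks.append(0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)


# Example: two queries, correct items ranked 1st and 4th -> MRR = (1 + 0.25) / 2 = 0.625
print(mean_reciprocal_rank([["a", "b"], ["c", "d", "e", "f"]], ["a", "f"]))
```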

Track 3:

Winner: Team72 VTCC-UTVM

An Effective Temporal Localization Method with Multi-View 3D Action Recognition for Untrimmed Naturalistic Driving Videos

Runner-up: Team95 Tahakom

Temporal Driver Action Recognition Using Action Classification Method

The following table shows the performance of the submitted code from the top teams on test set B:

Rank | Team ID | Team          | F-1 Score
1    | 72      | VTCC-UTVM     | 0.4025
2    | 95      | Tahakom       | 0.3261
3    | 43      | Stargazer     | 0.3152
4    | 1       | SCU_Anastasiu | 0.2381
5    | 16      | BUPT-MCPRL2   | 0.2143
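
The F-1 scores above are the harmonic mean of precision and recall over each team's activity predictions. Below is a minimal, generic sketch of the F-1 computation from true-positive/false-positive/false-negative counts; it is illustrative only and does not reproduce the official evaluation's rules for matching predicted activities to ground truth in time. The function name `f1_score` is hypothetical.

```python
def f1_score(true_positives, false_positives, false_negatives):
    """Generic F-1 from TP/FP/FN counts (illustrative, not the official script)."""
    if true_positives == 0:
        return 0.0
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    # F-1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)


# Example: 40 correct detections, 50 spurious, 70 missed -> F-1 = 0.40
print(f1_score(40, 50, 70))
```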

Track 4:

Winner: Team9 CyberCore

Improving Domain Generalization by Learning without Forgetting: Application in Retail Checkout

Runner-up: Team117 GRAPH@FIT

Image Inpainting for Automated Checkout Solution

  • Team55 did not submit a paper describing their work; thus, per the challenge rules, we had to disqualify Team55 from winning the award.

The following chart shows the performance of the submitted code from the top teams on Dataset B. All submitted code was tested on the same machine with the following specs:

  • GPU: 4× NVIDIA TITAN RTX, 24 GB RAM each
  • CPU: 12-core Intel(R) Core(TM) i9-7920X @ 2.90 GHz
  • Memory: 128 GB DDR4 RAM
  • Drive: 2 NVMe drives in RAID