ACMMM MEGC2022: Facial Micro-Expression Grand Challenge

MICRO- AND MACRO-EXPRESSION SPOTTING TASK



[update] Notification of acceptance: 16 July 2022 (extended from 7 July 2022). The decision is based on the merit of the method and the spotting performance (i.e., the ranking).
[update] Submission platform: https://openreview.net/group?id=acmmm.org/ACMMM/2022/Track/Grand_Challenges. Supplementary materials must also be submitted.
[update] The submission deadline has been extended to 25th June 2022 (23:59 AoE).
[note] Papers and results should be submitted together, both by the deadline of 25th June 2022 (23:59 AoE).




UNSEEN TEST DATASET

  • The unseen test set (MEGC2022-testSet) contains 10 long videos: 5 long videos from SAMM (the SAMM Challenge dataset) and 5 clips cropped from different videos in CAS(ME)3. The frame rate of the SAMM Challenge dataset is 200 fps, and the frame rate of CAS(ME)3 is 30 fps. Participants should test on this unseen dataset.
  • To download the MEGC2022-testSet, download and fill in the license agreement form of the SAMM Challenge dataset and the license agreement form of CAS(ME)3_clip, then upload the files through this link: https://www.wjx.top/vj/wMAN302.aspx.
    • For requests from a bank or company, participants need to ask their director or CEO to sign the form.
    • Reference:
      1. Li, J., Dong, Z., Lu, S., Wang, S.J., Yan, W.J., Ma, Y., Liu, Y., Huang, C. and Fu, X. (2022). CAS(ME)3: A Third Generation Facial Spontaneous Micro-Expression Database with Depth Information and High Ecological Validity. IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2022.3174895.
      2. Davison, A. K., Lansley, C., Costen, N., Tan, K., & Yap, M. H. (2016). SAMM: A spontaneous micro-facial movement dataset. IEEE Transactions on Affective Computing, 9(1), 116-129.



EVALUATION PROTOCOL

  • Baseline Method:
    Please cite:
    Yap, C.H., Yap, M.H., Davison, A.K., Cunningham, R. (2021), 3D-CNN for Facial Micro-and Macro-expression Spotting on Long Video Sequences using Temporal Oriented Reference Frame, arXiv:2105.06340 [cs.CV], https://arxiv.org/abs/2105.06340.
  • Participants should test their proposed algorithm on the unseen dataset and upload the results to the Leaderboard for evaluation.



  • Submission stage: 23rd May - 25th June 2022 (extended from 18th June)
    • Participants can upload their results, and the Leaderboard will calculate the metrics.
    • To receive your own evaluation result, please contact lijt@psych.ac.cn with the mail subject: [Evaluation Result Request] MEGC2022 - Spotting task - [user name] - [submission time].
    • The evaluation results of other participants and the ranking will not be provided during this stage. You can compare your results with the provided baseline result.

  • Live Leaderboard stage: from 26th June 2022 (originally 19th June 2022)
    • Results uploaded after 25th June will not be considered by ACM MEGC2022 for the final ranking of the competition.
    • However, any research team interested in the spotting task can upload results to validate the performance of their method.
    • The leaderboard will calculate and display the uploaded results and real-time ranking.



Important dates:

  • Submission Deadline: 25th June 2022 (23:59 AoE) (extended from 18 June 2022)
  • Notification: 16 July 2022 (extended from 7 July 2022)
  • Camera-ready: 23 July 2022 (extended from 20 July 2022)
  • Submission guidelines:
    • Submitted papers (.pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template), as used by regular ACMMM submissions. Please use the template in the traditional double-column format to prepare your submission. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template.
    • Grand challenge papers will go through a single-blind review process. Each grand challenge paper submission is limited to 4 pages, with 1-2 extra pages for references only.
    • Please combine all files other than the paper into a single zip file and upload it to the submission system as supplementary material.

For more information, please visit https://megc2022.github.io/challenge.html.