Getting started

  • /submission: Submit your model output
  • /study: Study screen for human evaluation
  • /user: User participation
  • /result: Leaderboard

Summary

System naming convention

  • natural mocap data (ground truth): NA
  • submitted systems: SA, SB, ..., SZ
  • baseline systems: BA, BB, BC, ...

We provide the inputs and the expected output format to the participants, and you generate a gesture .npy file for each input (see the sketch below).
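
For illustration, here is a minimal sketch of saving one gesture clip as .npy with NumPy. The array shape, dtype, and clip length are assumptions for the example, not the official submission spec:

```python
# Minimal sketch: save one gesture clip as .npy.
# ASSUMPTION: the (n_frames, n_dims) shape and float32 dtype are
# illustrative only; follow the challenge's actual output spec.
import numpy as np

n_frames, n_dims = 600, 56                                  # hypothetical clip size
gestures = np.zeros((n_frames, n_dims), dtype=np.float32)   # your model output goes here
np.save("234083450345.npy", gestures)                       # file name must match the input_code
```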

Then we recruit participants on Prolific to run user studies. The generated gestures are evaluated by humans through pairwise comparison.

Evaluation Process

Download the submission_inputs.csv file

input_code, video_file
234083450345, 234083450345.npy
346643424234, 346643424234.npy
443646423423, 443646423423.npy
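
A minimal sketch of reading this file with Python's standard csv module (column names follow the sample above):

```python
# Minimal sketch: iterate over submission_inputs.csv.
import csv

with open("submission_inputs.csv", newline="") as f:
    # skipinitialspace tolerates the space after the comma in the header row
    for row in csv.DictReader(f, skipinitialspace=True):
        print(row["input_code"], row["video_file"])  # e.g. 234083450345 234083450345.npy
```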

Run your model inference

Run your model inference to produce all of the output .npy files.

234083450345.npy
346643424234.npy
443646423423.npy
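
Putting the two steps together, a minimal sketch of the inference loop; `run_model` here is a hypothetical stand-in for your own model's inference function:

```python
# Minimal sketch: generate one .npy output per row of submission_inputs.csv.
import csv
import numpy as np

def run_model(input_code: str) -> np.ndarray:
    # HYPOTHETICAL placeholder: replace with your model's actual inference
    return np.zeros((600, 56), dtype=np.float32)

with open("submission_inputs.csv", newline="") as f:
    for row in csv.DictReader(f, skipinitialspace=True):
        np.save(row["video_file"], run_model(row["input_code"]))
```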

Submit your model's .npy output

Go to /submission and upload all of your model's .npy files.

Convert & Split to Video

We will convert each .npy file to a video and split it into multiple video segments, as sketched below.
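
As an illustration of the splitting step, a minimal sketch using ffmpeg's segment muxer. The rendered file name and the 10-second segment length are assumptions; the actual pipeline may differ:

```python
# Minimal sketch: split a rendered video into fixed-length segments.
# ASSUMPTIONS: the input .mp4 was already rendered from the .npy file,
# and the 10 s segment length is illustrative.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "234083450345.mp4",
    "-c", "copy",                            # no re-encoding; cuts land on keyframes
    "-f", "segment", "-segment_time", "10",
    "234083450345_%03d.mp4",                 # 234083450345_000.mp4, _001.mp4, ...
], check=True)
```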

Generate all pairwise comparison study screens

Sample study screen

(screenshot: study screen)
  • You can visit /study to get information about each study screen.
  • You can visit /user to follow all user participation in our study.

Recruit participants on Prolific

Each Prolific participant will take part in the study:

(screenshot: Prolific)

While participants complete the study, we record every action and the final result (see the sketch below).
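
For illustration, a sketch of the kind of records this produces. All field names here are hypothetical, not the platform's exact schema:

```python
# HYPOTHETICAL record shapes, for illustration only.
action_record = {
    "user_id": "prolific_5f3a9b",        # hypothetical Prolific participant id
    "screen_id": "study_042",
    "action": "select_left",             # e.g. play, pause, select_left, select_right
    "timestamp": "2024-01-01T12:00:00Z",
}

selection_result = {
    "screen_id": "study_042",
    "left_system": "SA",                 # system codes from the naming convention above
    "right_system": "BA",
    "selected": "SA",                    # the system the participant preferred
}
```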

Selection Result

(screenshot: selection result)

Action Records

(screenshot: action records)

Final evaluation result

Sample evaluation result

(screenshot: evaluation result)
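
To show how pairwise selections can become a leaderboard, a minimal sketch computing per-system win rates. This is not the platform's exact scoring code, and the sample selections are made up:

```python
# Minimal sketch: aggregate pairwise selections into win rates.
from collections import Counter

# HYPOTHETICAL data: (left_system, right_system, winner) per study screen
selections = [("SA", "BA", "SA"), ("SA", "NA", "NA"), ("BA", "NA", "NA")]

wins, appearances = Counter(), Counter()
for left, right, winner in selections:
    appearances[left] += 1
    appearances[right] += 1
    wins[winner] += 1

# Rank systems by fraction of comparisons won
for system in sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True):
    print(f"{system}: {wins[system]}/{appearances[system]} wins")
```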