Important remarks

root/sudo commands can only be used while building the Docker image, not when the container is run. You therefore need to create the folders your algorithm writes to, with the appropriate permissions, at build time, so that your inference script can write to them later.

For example, you can add in your Dockerfile:

  ## Add folder for intermediate results, logs etc
  RUN mkdir /myhome/
  RUN chmod 777 /myhome
  ENV HOME=/myhome/

And then write all your logs, temporary files, intermediate outputs, etc. in /myhome.
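As a minimal illustration of this pattern, an inference script can resolve its working directory from the HOME variable set in the Dockerfile; the fallback to the system temporary directory below is a hypothetical default for illustration:

```python
import os
import tempfile
from pathlib import Path

# Resolve a writable working directory from HOME (set to /myhome in the
# Dockerfile above); falling back to the temp dir is only an assumption.
workdir = Path(os.environ.get("HOME") or tempfile.gettempdir())
log_path = workdir / "inference.log"

# Append logs, temporary files, intermediate outputs, etc. here rather
# than to a root-owned location.
with open(log_path, "a") as f:
    f.write("inference started\n")
```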

Evaluation Platform

The submitted Docker containers will be evaluated on an Ubuntu 20.04 desktop. Detailed information is listed as follows:

  • CPU: Intel® Xeon(R) CPU E5-1650 v3 @ 3.50GHz x 12 threads
  • GPU: NVIDIA Titan X (12 GB of available memory)
  • RAM: 62 GB
  • Driver Version: 470.57.02
  • CUDA Version: 11.4
  • Docker version: 20.10.14

Instructions For Submission

To have access to the evaluation phase and be eligible for the challenge prizes, participants are required to:

  1. Submit a short paper describing their method (deadline: 15th August 2022)
  2. Containerise their algorithm using Docker (deadline: 15th August 2022)

Participants will also have the opportunity to submit a long paper with the results on the validation and testing sets (deadline: 1st October 2022) published as part of the BrainLes workshop proceedings distributed by LNCS.

1. Short Paper Instructions

Participants will have to evaluate their methods on the validation set and submit their short paper (4-5 LNCS pages) to the BrainLes CMT submission system, choosing crossMoDA as the "Track". This unified scheme should allow for appropriate preliminary comparisons.

If you are participating in both tasks, you can submit a single paper covering both.

The short paper must describe the segmentation algorithm and present the validation results. Specifically, submissions must include:

  1. a clear description of the mathematical setting, algorithm, and/or model, for reproducibility purposes.
  2. the appropriate citations mentioned at the bottom of the Data page.
  3. the range of hyper-parameters considered, the method used to select the best hyper-parameter configuration, and the specification of all hyper-parameters used to generate results.
  4. a description of results obtained on the validation leaderboard for the two structures (VS and Cochlea) with mean and standard deviation for the two metrics (Dice score and ASSD).

Paper submissions should use the LNCS template, available both in LaTeX and in MS Word format, directly from Springer (link here).

After receiving the reviewers' feedback, participants will be allowed to post their methods on open-access platforms (e.g., arXiv).

Later, participants will have the opportunity to submit longer papers to the MICCAI 2022 BrainLes Workshop post-proceedings. crossMoDA papers will be part of the BrainLes workshop proceedings distributed by LNCS.

2. Docker Container Instructions

Introduction

The test set will not be released to the challenge participants. For this reason, participants must containerise their methods with Docker and submit their Docker container for evaluation. Your code will not be shared and will only be used internally by the crossMoDA organisers.

Docker allows for running an algorithm in an isolated environment called a container. In particular, this container will locally replicate your pipeline requirements and execute your inference script.

Design your inference script

The inference will be performed automatically using Docker. More specifically, a command will be executed when your Docker container is run (example: `python3 run_inference.py`).

Task 1

The command must run the inference on the test set, i.e. predict the segmentation for each test hrT2 scan. The test set will be mounted into /input and the predictions must be written in /output. The folder /input will contain all the test hrT2 scans with the format crossmoda_XXX_hrT2.nii.gz. The participant script must write each prediction using the format crossmoda_XXX_Label.nii.gz in the /output folder.

For example, the prediction for the file /input/crossmoda_290_hrT2.nii.gz must be located at /output/crossmoda_290_Label.nii.gz.
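A minimal sketch of this naming convention is given below. The model inference and NIfTI writing (e.g. with nibabel) are not shown; `predict_and_save` is a hypothetical placeholder for your own code:

```python
from pathlib import Path

def label_name(scan_name: str) -> str:
    # crossmoda_XXX_hrT2.nii.gz -> crossmoda_XXX_Label.nii.gz
    return scan_name.replace("_hrT2.nii.gz", "_Label.nii.gz")

def run_inference(input_dir: str = "/input", output_dir: str = "/output") -> None:
    # Iterate over all test hrT2 scans mounted into /input and write one
    # prediction per scan into /output with the required naming scheme.
    for scan in sorted(Path(input_dir).glob("*_hrT2.nii.gz")):
        out_path = Path(output_dir) / label_name(scan.name)
        # predict_and_save is a placeholder for your model's inference and
        # NIfTI writing (e.g. with nibabel); it is not defined here.
        predict_and_save(scan, out_path)
```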

We provide a script example here.

Task 2

The command must run the inference on the test set, i.e. predict the Koos grade for each test hrT2 scan. The test set will be mounted into /input and a CSV file must be written at /output/predictions.csv.

  "case","class"
  "crossmoda2021_ldn_211",1
  "crossmoda2021_ldn_212",2
  "crossmoda2021_ldn_213",3
  "crossmoda2021_ldn_214",4
  ...
  "crossmoda2022_etz_210",1
  "crossmoda2022_etz_211",2
  "crossmoda2022_etz_212",3
  "crossmoda2022_etz_213",4

An example submission file can be found here.

We provide a script example here.
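A sketch of writing the CSV in this format with Python's standard csv module, assuming your predictions are available as (case ID, Koos grade) pairs. QUOTE_NONNUMERIC quotes the string fields while leaving the integer grades unquoted, matching the format above:

```python
import csv

def write_predictions(rows, csv_path):
    """Write (case_id, koos_grade) pairs in the required format:
    quoted "case" strings, unquoted integer "class" values."""
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_NONNUMERIC)
        writer.writerow(["case", "class"])
        for case_id, grade in rows:
            writer.writerow([case_id, int(grade)])
```

On the evaluation platform, the target path would be /output/predictions.csv.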

Create your Docker Container

Docker is commonly used to encapsulate algorithms and their dependencies. In this section, we list the four steps you will have to follow in order to create your Docker image so that it is ready for submission.

Firstly, you will need to install Docker. The NVIDIA Container Toolkit is also required to use CUDA within docker containers.

Secondly, you will need to create your own image. Docker can build images by reading the instructions from a Dockerfile. Detailed explanations are provided here.

Important: NVIDIA base images must be used for Tensorflow or PyTorch models.

Please look at the crossMoDA Docker container example on Github.

In a nutshell, a Dockerfile allows for:

  1. Pulling a pre-existing image with an operating system and, if needed, CUDA (FROM instruction).
  2. Installing additional dependencies (RUN instructions).
  3. Transferring local files into your Docker image (COPY instructions).
  4. Executing your algorithm (CMD and ENTRYPOINT instructions).

Dockerfile example:


  ## Pull from existing image
  FROM nvcr.io/nvidia/pytorch:21.05-py3

  ## Copy requirements
  COPY ./requirements.txt .

  ## Install Python packages in Docker image
  RUN pip3 install -r requirements.txt

  ## Copy all files (here "./src/run_inference.py")
  COPY ./ ./

  ## Execute the inference command 
  CMD ["./src/run_inference.py"]
  ENTRYPOINT ["python3"]

Thirdly, you can build your Docker image:

  docker build -f Dockerfile -t [your image name] .

Fourthly, you will upload your image to Docker Hub. Instructions can be found here:

  docker push [your image name]

Docker commands

Your container will be run with the following command:

  docker run --rm -v [input directory]:/input/:ro -v [output directory]:/output -it [your image name]

[input directory] will be the absolute path of our directory containing the test set, [output directory] will be the absolute path of the prediction directory and [your image name] is the name of your Docker image.


Test your Docker container

To test your docker container, you will have to run your Docker container and perform inference using a subset of the validation set.

Firstly, download a subset of the validation set (zip) here.

Secondly, unzip the set into [unzip validation set] and run:

  docker run --rm -v [unzip validation set]:/input/:ro -v [output directory]:/output -it [your image name]

Thirdly, check that the predictions are correct.
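For Task 1, a quick sanity check is to compare the scans in the input folder with the predictions your container wrote. This helper is an illustration, not part of the official tooling:

```python
from pathlib import Path

def missing_predictions(input_dir, output_dir):
    """Return the prediction filenames expected for Task 1 naming
    (crossmoda_XXX_Label.nii.gz) that are absent from output_dir."""
    missing = []
    for scan in sorted(Path(input_dir).glob("*_hrT2.nii.gz")):
        expected = scan.name.replace("_hrT2.nii.gz", "_Label.nii.gz")
        if not (Path(output_dir) / expected).exists():
            missing.append(expected)
    return missing
```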

Fourthly, please zip [output directory] and include the .zip in your CMT submission.
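Zipping the prediction directory can be done from the command line or from Python; a sketch with the standard library (the archive name here is arbitrary):

```python
import shutil

def zip_predictions(output_dir, archive_name="predictions"):
    """Create <archive_name>.zip from the contents of output_dir and
    return the path of the created archive."""
    return shutil.make_archive(archive_name, "zip", output_dir)
```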

Submission instructions

All the information regarding your Docker container (Docker Hub image name and validation predictions in a .zip file) will have to be included in your submission via the BrainLes CMT system (crossMoDA track).

We will also ask you to specify the requirements for running your algorithm: number of CPUs, amount of RAM, amount of GPU memory (12GB max) and estimated computation time per subject.

FAQ

Q: Should DockerHub Repository visibility be set to Public?
A: The easiest option is to set the visibility to Public for at least the three weeks following the deadline (Friday 13 August).

Q: I need to add "--runtime=nvidia" to use a GPU in the Docker container. Is that ok?
A: Yes, this is absolutely fine.

Q: Which GPU should I use to run the inference in the Docker container?
A: Please use "CUDA_VISIBLE_DEVICES=0"

3. Long Paper Instructions

(coming)