African Wildlife Classification

Get ready to rumble with machine learning in the Wildest Contest of the Semester! Your mission: build a model that can distinguish between the fabulous four of African wildlife: Buffalo, Elephant, Rhino, and Zebra. This Machine Learning Safari meets Creature Classification Challenge runs weekly, with submissions evaluated every Friday at 5pm. So don't be intimidated by whoever is currently king (or queen) of the AI jungle on the leaderboard; the only way to win this mane event is to git your models in gear and submit!
Ranking Procedure
  • Model Evaluation: Each submitted model will be evaluated on the held-out test dataset, and its Top-1 accuracy will be calculated.
  • Ranking: Participants will be ranked in descending order based on their model's Top-1 accuracy. The model with the highest Top-1 accuracy will be ranked first.
  • Tie-breaking: If two or more models achieve the same Top-1 accuracy, the following tie-breaking criteria will be applied, in order of priority (a small ranking sketch follows this list):
    • Validation Accuracy: The model's accuracy on the provided validation set.
    • Number of Parameters: The model with fewer parameters will be ranked higher (to favor more efficient models).
    • Training Time: The model with the shorter training time will be ranked higher.
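
Expressed as code, the full ordering amounts to a single sort key. The sketch below is purely illustrative; the field names (test_acc, val_acc, and so on) are assumptions, not part of the official evaluation script:

```python
# Hypothetical leaderboard entries; field names are illustrative assumptions.
entries = [
    {"name": "model_a", "test_acc": 0.9207, "val_acc": 0.95, "num_params": 3_500_000, "train_time_s": 1200},
    {"name": "model_b", "test_acc": 0.9207, "val_acc": 0.93, "num_params": 2_200_000, "train_time_s": 900},
]

# Descending by test Top-1 accuracy, then validation accuracy,
# then ascending by parameter count and training time.
entries.sort(key=lambda e: (-e["test_acc"], -e["val_acc"], e["num_params"], e["train_time_s"]))
```
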
Legend:
A (90%-100%)
B (80%-89%)
C (70%-79%)
D (60%-69%)
E (50%-59%)
F (<50%)

Leaderboard

Rank | Name | Department | Top-1 Accuracy
1 | Reeng Kuol Reeng | Information Technology | 94.27%
2 | MOSES LUWALLA WANI NYIGILO | Information Technology | 93.83%
3 | Peter Akok Ngor | Computer Science | 93.39%
4 | Kuot Chol Majok | Information Technology | 92.51%
5 | Marko Ngor Wek Wek | Information Technology | 92.07%
6 | Rhok Longar Akuei | Computer Science | 92.07%
7 | Yai Thon Nyok | Computer Science | 92.07%
8 | AJACK GUET KUOL | Computer Science | 91.63%
9 | Suzan Adut marial | Computer Science | 91.63%
10 | Chris Khamis Bobono | Computer Science | 91.63%
11 | Deng Kuur Nhial | Computer Science | 91.19%
12 | Michael Atem Chol | Computer Science | 91.19%
13 | Agar Marial Riak Atuongtok | Computer Science | 91.19%
14 | AYATH AGANY AYATH | Computer Science | 90.75%
15 | Abraham Dit Manyang | Information Technology | 90.75%
16 | Emmanuel Deng Mei | Computer Science | 90.75%
17 | MOU MOU BAK | Computer Science | 90.75%
18 | MAWIEN GUET AYII | Computer Science | 90.31%
19 | Deng Kuol Ajak Deng | Computer Science | 90.31%
20 | Athou Rebecca Ajak | Computer Science | 90.31%
21 | mary ojinio lako tombe | Computer Science | 90.31%
22 | Jacob Dau Deng | Computer Science | 90.31%
23 | Awut Deng Aguer | Computer Science | 90.31%
24 | DUOT DENG AJANG | Computer Science | 89.87%
25 | Malish Ben Kenyi | Computer Science | 89.43%
26 | Mawien Tito Ariik Toby | Information Technology | 89.43%
27 | KUOT JOOL ALUEL DENG | Computer Science | 89.43%
28 | Andrew Akuei Atem Manyuon | Computer Science | 88.99%
29 | Christina Adhar Monyjiith | Computer Science | 88.99%
30 | Maxim Edwin Zozimo Ogo | Computer Science | 88.99%
31 | Nesnea khadi silvano | Computer Science | 88.99%
32 | Daniel Clement Leju | Computer Science | 88.55%
33 | Monica Ayen Bol | Computer Science | 88.55%
34 | Mangar makur Machiek | Information Technology | 88.11%
35 | David Akech Ayor Ngong | Computer Science | 88.11%
36 | Peter Arol Awan | Computer Science | 88.11%
37 | Yai simon chol | Computer Science | 88.11%
38 | Alek Garang Tor | Computer Science | 87.67%
39 | Jenty jore Theophilu | Computer Science | 87.67%
40 | Deng Dut Mayen | Computer Science | 86.78%
41 | Emmanuel Khamis Victor Loya | Computer Science | 86.34%
42 | Garang Yai Garang | Computer Science | 86.34%
43 | James Dut Mathok | Information Technology | 83.70%
44 | Samuel Jada Tombe | Computer Science | 83.26%
45 | Samuel thongbor maketh | Computer Science | 67.40%
46 | Edina Yeno James | Computer Science | 66.96%
47 | Edmond Anthony Mesaga | Information Technology | 65.64%
48 | Marko Agany Kuic | Computer Science | 58.15%
49 | Atem Khor Deng | Computer Science | 40.97%
50 | Adam Juma Haruun | Information Technology | 40.09%
51 | Abraham Ariik Maker | Computer Science | 35.24%
52 | Betty Juru Patrick Wunyi | Computer Science | 0.00%
53 | James machar makur | Computer Science | 0.00%
54 | Winny poni Eresto | Computer Science | 0.00%
55 | Lomude Charles James | Computer Science | 0.00%
56 | Lual dot Wieu | Computer Science | 0.00%
57 | John Boush Mayiek | Computer Science | 0.00%
58 | Emilio Albert Apai | Computer Science | 0.00%
59 | BIET PUORIC MATUONG | Computer Science | 0.00%
60 | YOUSIF JOHN MICHAEL | Computer Science | 0.00%
61 | Malong Nuoi Malong Abei | Computer Science | 0.00%
62 | Deng Zakaria Mach | Computer Science | 0.00%
63 | Bol Monica Ayuen | Computer Science | 0.00%
64 | Abraham Madit Kur | Computer Science | 0.00%
65 | Samuel Maker Mangar | Computer Science | 0.00%
66 | Mapath Samuel Ajith | Computer Science | 0.00%
67 | Panom Chot Jal | Computer Science | 0.00%
68 | Alfred Malek Mabor | Computer Science | 0.00%
69 | Bakhita Malek Tong Dut | Computer Science | 0.00%
70 | Daniel parach malek dit | Computer Science | 0.00%
71 | Deng Deng Madut | Computer Science | 0.00%
72 | Dhel Malith Chol | Computer Science | 0.00%
73 | Dominic Paulino Omer | Computer Science | 0.00%
74 | Franco Komma James Ogawi | Computer Science | 0.00%
75 | George Morbe Mike | Computer Science | 0.00%
76 | Godfrey Lino Arkangelo | Computer Science | 0.00%
77 | Joseph chol Magai | Computer Science | 0.00%
78 | Loi Emmanuel Tong | Computer Science | 0.00%
79 | Magisto Ohisa Luka | Computer Science | 0.00%
80 | Mary Adut Achiek Aru | Computer Science | 0.00%
81 | Nelson Makim Ater | Computer Science | 0.00%
82 | Simon Mading Ayol | Computer Science | 0.00%
83 | Stephan Jansuk Jolius Yengi | Computer Science | 0.00%
84 | Thomas TODOKO Samuel | Computer Science | 0.00%

Challenge Description

As a participant, you are tasked with developing a machine learning model that can accurately classify images of African wildlife into one of four categories:

  • Buffalo
  • Elephant
  • Rhino
  • Zebra

Dataset

The contest provides a dataset split into three parts:

  • Training set: 1,049 labeled images (at least 254 per class)
  • Validation set: 1,000 labeled images (at least 51 per class)
  • Test set: Held out by the instructor for evaluating submitted models

All images are 128x128x3 pixels in JPEG format. The dataset includes various lighting conditions, angles, and backgrounds to challenge participants' models. Samples from the dataset are shown below.

Starter Code

Participants can use the starter code and data provided with the contest materials to begin their projects.
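
For reference, a minimal Colab-ready baseline in TensorFlow/Keras might look like the sketch below. This is an illustrative stand-in, not the official starter code; the directory paths ("data/train", "data/val"), hyperparameters, and architecture are all assumptions:

```python
import tensorflow as tf

IMG_SIZE = (128, 128)  # images are 128x128x3 JPEGs
NUM_CLASSES = 4        # Buffalo, Elephant, Rhino, Zebra

# Assumes one sub-folder per class, e.g. data/train/buffalo/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

# Small from-scratch CNN baseline; swap in a pre-trained backbone later.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

A from-scratch model like this sets a floor; most leaderboard gains typically come from switching to one of the pre-trained backbones described below.
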

Rules

  • No teams allowed; individual effort only.
  • Any machine learning approach is permitted (CNNs, Transformers, etc.).
  • No external datasets are allowed.
  • Pre-trained models are allowed but must be declared.

CNN Architectures for Image Classification

Below are pre-trained CNN architectures, ordered by how easily they train on Colab with minimal resources (most Colab-friendly first); a transfer-learning sketch using one of these backbones follows the list:

MobileNet (MobileNetV2/V3):
  • Extremely lightweight and designed for mobile devices.
  • Trains very quickly and requires minimal resources.
  • Popular for applications where efficiency is crucial.
  • Good baseline model.
EfficientNet (EfficientNetB0, B1):
  • Balances accuracy and efficiency.
  • EfficientNetB0 and B1 are relatively small and can be trained on Colab without excessive resource usage.
  • Very popular, with state-of-the-art results.
InceptionV3:
  • More complex than MobileNet, but still reasonably efficient.
  • Uses inception modules to capture features at different scales.
  • Good balance of accuracy and resource usage.
ResNet (ResNet50):
  • A good compromise between accuracy and resource usage.
  • ResNet50 is a common choice for transfer learning.
  • Very popular, with good results.
Xception:
  • Generally more resource-intensive than the previous models, but still usable on Colab.
  • Performs well on many complex image classification tasks.
VGG16:
  • More resource-intensive due to its depth and fully connected layers.
  • Can be challenging to train on Colab with limited resources.
  • Its architecture is very simple to understand.
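
As a rough illustration of how any of these backbones might be plugged in, the sketch below freezes a pre-trained MobileNetV2 and trains only a new 4-class head; swapping in EfficientNetB0, ResNet50, or another backbone is a one-line change. Variable names and hyperparameters are assumptions:

```python
import tensorflow as tf

# Pre-trained backbone (remember: pre-trained models must be declared).
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False  # freeze the ImageNet weights; optionally fine-tune later

inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)  # scale pixels to [-1, 1]
x = base(x, training=False)
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)  # Buffalo/Elephant/Rhino/Zebra
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets from the starter sketch
```
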

Evaluation Metric

The primary evaluation metric for this contest is Top-1 Accuracy. Top-1 accuracy measures the percentage of test images for which the model's top prediction matches the ground truth label. In simpler terms, it's how often the model correctly predicts the single most likely class for each image. This is a standard and intuitive metric for evaluating the performance of multi-class classification models in computer vision.
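
Concretely, Top-1 accuracy is just the fraction of images whose argmax prediction equals the ground-truth label, as in this small NumPy sketch (array names are illustrative):

```python
import numpy as np

def top1_accuracy(y_true, y_prob):
    # y_true: (N,) integer labels; y_prob: (N, num_classes) predicted probabilities
    y_pred = np.argmax(y_prob, axis=1)       # single most likely class per image
    return float(np.mean(y_pred == y_true))  # fraction of exact matches

# 3 of 4 top predictions match the ground truth -> 0.75 (75% Top-1 accuracy)
y_true = np.array([0, 1, 2, 3])
y_prob = np.array([[0.90, 0.05, 0.03, 0.02],
                   [0.10, 0.70, 0.10, 0.10],
                   [0.20, 0.50, 0.20, 0.10],   # wrong: argmax is 1, truth is 2
                   [0.10, 0.10, 0.10, 0.70]])
print(top1_accuracy(y_true, y_prob))  # 0.75
```
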

Submission Instructions

To submit your model for evaluation:

  1. Prepare your model file (save your model during training; a short saving/loading sketch follows these steps)
  2. Create a README file with:
    • Your name, index, and department
    • Brief description of your approach
    • Any special instructions for running your code
  3. Create a Google Drive folder and make sure it is shared with uojdeeplearning@gmail.com
  4. Put all your project files (code, saved model, README) in the Google Drive folder you shared above
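
For step 1, continuing from the training sketches above, a full Keras model can be saved and later restored for evaluation without your training code (the filename is an assumption):

```python
import tensorflow as tf

# After training (see the sketches above), save architecture + weights in one file.
model.save("wildlife_model.keras")

# The grader can then restore and evaluate it directly.
restored = tf.keras.models.load_model("wildlife_model.keras")
```
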

Evaluation Schedule

Submissions will be evaluated weekly. Results will be posted on the leaderboard by Monday at noon.