This page outlines the weekly schedule for lectures, labs, assignments, and examinations. The schedule will be updated regularly to align with the University of Juba's academic calendar and holidays. Reading materials, lecture slides, and lab materials will be available through this schedule, with download links posted before each lecture or lab session. If you run into any difficulties or have questions, please contact the lead Teaching Fellow, Thiong Abraham.
This lecture will review course material in preparation for the exams.
This lecture will cover advanced topics.
Welcome to our lecture on Convolutional Neural Networks (CNNs)! Today, we'll delve into the fundamental building blocks that make CNNs so powerful for image recognition and computer vision tasks. We'll start by understanding the concept of convolution, where a filter (or kernel) slides over an input image, performing element-wise multiplications and summations with the underlying pixels. This process extracts local features like edges and textures. Next, we'll explore pooling layers, which reduce the spatial dimensions of feature maps while preserving essential information. Techniques like max pooling and average pooling are commonly used to downsample the input and make the network more efficient. Finally, we'll discuss the role of fully connected layers, which take the output of the convolutional and pooling layers and map them to class probabilities. By combining these building blocks, CNNs can learn hierarchical representations of visual data, enabling them to accurately classify images, detect objects, and even generate realistic images.
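To make these building blocks concrete, here is a minimal sketch in Keras (the framework used throughout the course, per Week 1). The input shape, filter counts, and layer sizes are illustrative assumptions, not values from the lecture.

```python
# A minimal CNN illustrating the three building blocks discussed above:
# convolution, pooling, and fully connected layers.
# Shapes and sizes are placeholders (e.g., 28x28 grayscale, 10 classes).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolution: 32 filters of size 3x3 slide over the image,
    # extracting local features such as edges and textures.
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    # Max pooling: downsample each feature map by taking the maximum
    # over 2x2 windows, halving the spatial dimensions.
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    # Fully connected layers: flatten the feature maps and map them
    # to class probabilities with a softmax output.
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```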
This lecture delves into the core concepts of training and optimizing neural networks. We will explore the fundamental building blocks of neural networks, including neurons, layers, and activation functions. You'll learn about the backpropagation algorithm, a crucial technique for calculating gradients and updating weights to minimize loss. We'll discuss various optimization algorithms like gradient descent, stochastic gradient descent, and advanced techniques like Adam and RMSprop. The lecture will also cover regularization techniques, such as L1 and L2 regularization, dropout, and early stopping, to prevent overfitting and improve generalization. By the end of this lecture, you'll have a solid understanding of the principles behind training and optimizing neural networks, enabling you to build and fine-tune your own models effectively.
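As a rough illustration of these ideas, the sketch below compiles a small Keras model with the Adam optimizer, L2 weight regularization, dropout, and an early-stopping callback. The architecture, hyperparameters, and the randomly generated training data are placeholders, not the lecture's actual examples.

```python
# Sketch of training and optimization concepts from this lecture:
# Adam optimizer, L2 regularization, dropout, and early stopping.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers, callbacks

model = models.Sequential([
    # L2 regularization penalizes large weights to reduce overfitting.
    layers.Dense(128, activation="relu", input_shape=(784,),
                 kernel_regularizer=regularizers.l2(1e-4)),
    # Dropout randomly zeroes half the activations during training.
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

# Adam adapts per-parameter learning rates; plain SGD could be swapped
# in with tf.keras.optimizers.SGD(learning_rate=0.01).
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True)

# Random placeholder data, just so the example runs end to end.
x_train = np.random.rand(512, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(512,))
model.fit(x_train, y_train, validation_split=0.2, epochs=20,
          batch_size=32, callbacks=[early_stop], verbose=0)
```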
This lecture will introduce students to the fascinating world of artificial neural networks (ANNs), drawing inspiration from biological neural networks. We will explore the fundamental components and functions of biological neurons and how these concepts have been adapted to create artificial counterparts. Students will learn how to construct ANNs with different layers, including input, hidden, and output layers, and understand the role of activation functions in shaping the network's behavior. The lecture will culminate in a discussion of deep neural networks, which leverage multiple hidden layers to tackle complex tasks such as image recognition, natural language processing, and more. By the end of this session, students will have a solid foundation in the theory and practical aspects of building ANNs.
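As a small illustration of the biological analogy, the snippet below computes the output of a single artificial neuron: a weighted sum of inputs plus a bias, passed through a sigmoid activation. All weights and inputs are invented for the example.

```python
# A single artificial neuron, the ANN analogue of a biological neuron:
# weighted inputs are summed with a bias and passed through a
# nonlinear activation function.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # incoming signals (like dendrites)
w = np.array([0.8, 0.1, -0.4])   # synaptic weights
b = 0.2                          # bias term

output = sigmoid(np.dot(w, x) + b)  # how strongly the neuron "fires"
print(output)
```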
In the lecture portion of Week 1, we will provide a comprehensive overview of deep learning, including its fundamental concepts, applications, and the role of neural networks. We will also discuss the TensorFlow/Keras framework, which will be the primary tool used throughout the course. In the lab session, students will set up their development environment using Google Colab, experiment with a neural network, and familiarize themselves with the TensorFlow/Keras API.
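As a sanity check for the lab setup, something like the following can confirm that TensorFlow/Keras is available in Colab; the one-layer model is a placeholder, not the lab's actual exercise.

```python
# Week 1 lab setup check: verify the TensorFlow/Keras install in Colab
# and build a trivial one-layer model via the Keras API.
import tensorflow as tf

print(tf.__version__)  # Colab ships with TensorFlow preinstalled

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")
model.summary()
```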