COMP0090: Introduction to Deep Learning
Assessed Coursework 1 2021-22
Available on 26th November 2021
Submission before 16:00 (UK time), 16th December 2021, on Moodle
Introduction
This is the first of two assessed courseworks. This coursework accounts for 50% of the module and consists of three
independent tasks; for each task, a task script needs to be submitted together with the other supporting files and
data. No separate written report is required.
There are hyperlinks in the document for further reference. Throughout this document, various parts of
the text are highlighted, for example:
Class names are highlighted for those mandatory classes that should be found in your submitted code.
Function names are highlighted for those mandatory functions that should be found in your submitted code.
Printed messages on terminal when running the task scripts.
Visualisation saved into PNG files with task scripts.
“filepath.ext”: quotation marks indicate the names of files or folders.
commands: commands run on bash or Python terminals, given context.
The aim of the coursework is to develop and assess your ability a) to understand the technical and
scientific concepts behind deep learning theory and applications, b) to research the relevant methodology
and implementation details of the topic, and c) to develop the numerical algorithms in Python and one of
the deep learning libraries, TensorFlow or PyTorch. Although the assessment does not place emphasis
on coding skills and advanced software development techniques, basic programming knowledge will be
taken into account, such as the correct use of NumPy arrays and tensors (as opposed to, for example,
unnecessary for-loops), sufficient commenting and a consistent code format. Up to [20%] of the relevant
marks may be deducted for not following good programming practice.
Do NOT use this document for any other purposes or share it with others. The coursework remains UCL
property as teaching material. You may risk breaching intellectual property regulations and/or committing
academic misconduct if you publish the details of the coursework or distribute it further.
Conda environment and other Python packages
No external code (open-source or not) should be used for the purpose of this coursework. No other
packages should be used, unless specified and installed within the conda environment below. Individual
tasks may have specific requirements, e.g. only TensorFlow or PyTorch, as opposed to NumPy for example,
can be used for certain function implementations. Up to [100%] of the relevant marks may be deducted
for using external code. This will be assessed by running the submitted code, on the markers’ computers,
within a conda environment created as follows:
conda create -n comp0090-cw1 python=3.9 tensorflow=2.4 pytorch=1.7 torchvision=0.8
conda activate comp0090-cw1
conda list
Use the above command to see the available libraries for this coursework after activating comp0090-cw1.
Make sure your OS is up-to-date to minimise potential compatibility issues.
TensorFlow or PyTorch
You can choose to use either TensorFlow or PyTorch, but not both, in this coursework; the tasks are
designed to be of balanced difficulty with either library.
Working directory and task script
Each task should have a task folder, named “task1”, “task2” and “task3”. A Python task script should
be a file named “task.py”, such that the script can be executed on the bash terminal when the task
folder is used as the current/working directory, within the conda environment described above:
python task.py
It is the individual’s responsibility to make sure the submitted task scripts can be run in the above-
specified conda environment. Even for the data/code available in module tutorials, copies or otherwise
automated links need to be provided to ensure standalone executability of the submitted code. Care
needs to be taken with the correct use of relative paths, as this has been a common issue. Jupyter Notebook files
are NOT allowed. Up to [100%] of the relevant marks may be deducted if no runnable task script is found.
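As a minimal sketch (the file name “data_copy.pt” is hypothetical), paths inside “task.py” can be resolved relative to the script location so that they remain valid regardless of where the script is launched from:

import os

# resolve the folder containing this script, so file paths do not depend on
# where the script happens to be launched from
TASK_DIR = os.path.dirname(os.path.abspath(__file__))

# hypothetical supporting file copied into the task folder
data_path = os.path.join(TASK_DIR, "data_copy.pt")
print(data_path)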
Printing and visualisation
Summarising and communicating your implementation and quantitative results is assessed as part
of the module learning outcomes. Each task specifies relevant information and messages to be printed on
the terminal, which may contain a description, a quantitative summary and brief remarks. The printed messages
are expected to be concise, accurate and clear.
When the task requires visualising results (usually in the form of images), the code should save the results
into a PNG file in the respective working directory. These PNG files should be submitted with the code.
This is for compatibility with environments that do not support graphics, such as WSL or remote
setups. Please see examples in the module repository using Pillow. Please note that matplotlib cannot be
used in the task scripts. Up to [50%] of the relevant marks may be deducted if this is not followed.
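As a minimal sketch, assuming PyTorch and a hypothetical single-channel image tensor (the TensorFlow equivalent differs only in the tensor-to-array conversion), results can be saved to a PNG file with Pillow:

import torch
from PIL import Image

# hypothetical example: a single-channel 32x32 image with values in [0, 1]
image = torch.rand(32, 32)

# scale to 8-bit greyscale and save into the current working directory
array = (image * 255).to(torch.uint8).numpy()
Image.fromarray(array, mode="L").save("result.png")
print("Saved visualisation to result.png")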
Design your code
The functions/classes/files/messages highlighted (see Introduction) are expected to be found in your
submitted code, along with the task scripts. Unless specifically required otherwise, you have freedom in designing
your own code, for example, data types, variables, functions, scripts, modules, classes and/or extra results
for discussion. These will be assessed for correctness but not for design aspects.
The checklist
This is a list of things to check before submission.
The coursework will be submitted as a single “cw1” folder, compressed as a single zip file.
Under your “cw1” folder, you should have three subfolders, “task1”, “task2” and “task3”.
The task scripts run without needing any additional files, data or customised paths.
All the classes and functions colour-coded in this document can be found with the exact names.
Check that all the functions/classes have docstrings describing the data type, size and meaning of the input
arguments and outputs, and a brief description of their purpose.
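For example, a hypothetical function with the expected level of docstring detail (the name and shapes are illustrative only):

def normalise(images):
    """Scale image intensities to the range [0, 1].

    Arguments:
        images: torch.Tensor of shape (N, C, H, W), dtype float32, a minibatch of N images.

    Returns:
        torch.Tensor of the same shape and dtype, rescaled to [0, 1].
    """
    return (images - images.min()) / (images.max() - images.min())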
Task 1 Stochastic Gradient Descent for Linear Models
This task needs to be implemented entirely using TensorFlow/PyTorch, without using NumPy.
Implement a polynomial function polynomial_fun, that takes two input arguments, a weight vector w
of size M + 1 and an input scalar variable x, and returns the function value y, where
y = w_0 + w_1 x + … + w_M x^M.
Using the linear algebra modules in TensorFlow/PyTorch, implement a least-squares solver for fitting
the polynomial functions, fit_polynomial_ls, which takes N pairs of x and target values t as input, with
an additional input argument to specify the polynomial degree M, and returns the optimum weight
vector ŵ in the least-squares sense, i.e. ‖t − y‖² is minimised. [5]
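A minimal sketch of the two functions, assuming PyTorch is the chosen library (TensorFlow users would use the equivalent modules); the pseudo-inverse is only one of several valid least-squares solvers:

import torch

def polynomial_fun(w, x):
    """Evaluate y = w_0 + w_1*x + ... + w_M*x^M for a weight vector w of size M+1 and a scalar x."""
    w = w.flatten()
    exponents = torch.arange(w.numel(), dtype=w.dtype)
    return torch.sum(w * torch.as_tensor(x, dtype=w.dtype) ** exponents)

def fit_polynomial_ls(x, t, M):
    """Least-squares fit of a degree-M polynomial to pairs (x, t); returns a weight vector of size M+1."""
    X = torch.stack([x ** m for m in range(M + 1)], dim=1)  # N x (M+1) design matrix
    w_hat = torch.pinverse(X) @ t.reshape(-1, 1)            # least-squares solution
    return w_hat.flatten()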
Using relevant functions/modules in TensorFlow/PyTorch, implement a stochastic minibatch gradient
descent algorithm for fitting the polynomial functions, fit_polynomial_sgd, which has the same input
arguments as fit_polynomial_ls does, with two additional input arguments, the learning rate and the
minibatch size. This function also returns the optimum weight vector ŵ. During training, the function
should report the loss periodically using printed messages. [5]
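A sketch of the SGD-based fitting, again assuming PyTorch; the number of epochs, the zero initialisation and the mean-squared-error loss are illustrative choices, and with inputs spanning [−20, 20] some form of input scaling or careful learning-rate tuning will be needed in practice:

import torch

def fit_polynomial_sgd(x, t, M, learning_rate, minibatch_size):
    """Fit a degree-M polynomial to pairs (x, t) with minibatch SGD; returns a weight vector of size M+1."""
    X = torch.stack([x ** m for m in range(M + 1)], dim=1)   # N x (M+1) design matrix
    w = torch.zeros(M + 1, dtype=x.dtype, requires_grad=True)
    optimiser = torch.optim.SGD([w], lr=learning_rate)
    for epoch in range(100):                                 # illustrative number of epochs
        permutation = torch.randperm(x.numel())
        for start in range(0, x.numel(), minibatch_size):
            idx = permutation[start:start + minibatch_size]
            loss = torch.mean((X[idx] @ w - t[idx]) ** 2)    # mean-squared-error loss
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
        if epoch % 10 == 0:
            print(f"Epoch {epoch}: minibatch loss = {loss.item():.4f}")  # periodic loss report
    return w.detach()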
Implement a task script “task.py”, under folder “task1”, performing the following (a partial sketch appears after this list):
o Use polynomial_fun (M = 3, w = [1, 2, 3, 4]^T) to generate a training set and a test set, in the
form of respectively sampled 100 and 50 pairs of x ∈ [−20, 20] and observed t. The observed
t values are obtained by adding Gaussian noise (standard deviation being 0.2) to y. [3]
o Use fit_polynomial_ls (M = 4) to compute the optimum weight vector ŵ using the training
set. In turn, compute the predicted target values ŷ for all x in both the training and test sets.
[2]
o Report, using printed messages, the mean (and standard deviation) in difference a) between
the observed training data and the underlying “true” polynomial curve; and b) between the
“LS-predicted” values and the underlying “true” polynomial curve. [3]
o Use fit_polynomial_sgd (M = 4) to optimise the weight vector ŵ using the training set. In
turn, compute the predicted target values ŷ for all x in both the training and test sets. [2]
o Report, using printed messages, the mean (and standard deviation) in difference between the
“SGD-predicted” values and the underlying “true” polynomial curve. [2]
o Compare the accuracy of your implementation using the two methods with the ground-truth on the
test set and report the root-mean-square-errors (RMSEs) in both w and y using printed
messages. [3]
o Compare the speed of the two methods and report time spent in fitting/training (in seconds)
using printed messages. [2]
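A partial sketch of the script, assuming PyTorch and the functions sketched above; only the data generation and one example of a timed fit with an RMSE report are shown, and the remaining steps follow the same pattern:

import time
import torch

# generate training data: M = 3, w = [1, 2, 3, 4]^T, x sampled from [-20, 20],
# Gaussian noise with standard deviation 0.2 added to the true values y
w_true = torch.tensor([1., 2., 3., 4.])
x_train = torch.rand(100) * 40 - 20
y_train = torch.stack([polynomial_fun(w_true, x) for x in x_train])
t_train = y_train + 0.2 * torch.randn(100)

# example of timing the least-squares fit and reporting an RMSE with printed messages
start = time.time()
w_hat = fit_polynomial_ls(x_train, t_train, 4)
print(f"LS fitting time: {time.time() - start:.4f} s")

y_pred = torch.stack([polynomial_fun(w_hat, x) for x in x_train])
rmse_y = torch.sqrt(torch.mean((y_pred - y_train) ** 2))
print(f"RMSE in y (training set): {rmse_y.item():.4f}")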
Task 2 A Regularised DenseNet
For the purpose of this task, the dataset is simply split into two, training and test sets, as in the tutorial.
Adapt the Image Classification tutorial to implement a new network DenseNet3, with the following (a sketch of one possible dense block appears after this list):
o Contain a member function dense_block, implementing a specific form of the DenseNet
architecture, in which each dense block contains 4 convolutional layers. [3]
o Design and implement the new network architecture to use 3 of these dense blocks. [4]
o Summarise and print your network architecture, e.g. using a built-in summary function. [1]
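A minimal sketch of one possible dense_block, assuming PyTorch; the growth rate, the BN/ReLU/Conv ordering and the helper class are illustrative assumptions, and the full DenseNet3 (three dense blocks, any transition layers, the classifier and the summary) is left to your own design:

import torch
import torch.nn as nn

class _DenseBlock(nn.Module):
    """Four convolutional layers; each layer takes the concatenation of all preceding feature maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(4):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class DenseNet3(nn.Module):
    def dense_block(self, in_channels, growth_rate=12):
        """Member function required by the task: return one dense block of 4 convolutional layers."""
        return _DenseBlock(in_channels, growth_rate)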
Implement a data augmentation function cutout, using the Cutout algorithm.
o Use square masks with variable size and location. [2]
o Add an additional parameter s, such that the mask size can be uniformly sampled from [0, s]