About the backend: SOFT3202 analysis


SOFT3202 / COMP9202
Testing Assignment
Background
You have been employed to develop a testing suite for Reynholm Industries, a sizable company that among other things performs consulting for clients. This consulting is done on a project basis, and requires the tracking of billable hours. This is managed by an ERP system, the relevant components of which you will be asked to write a test suite for.
Note: The development and testing processes you will be using here are for assessment purposes – they do not resemble normal industry practice. The API design has also been modified from best practices in order to support assessment, and has been pared down to just the API you need to test.
Your assignment will be automatically marked by a script. This places strict requirements on your classes and filenames, as well as on what you are able to assume in your code. You will be provided with a package structure that you must follow along with the API. Your marks will be drawn entirely from the marking script – failure to follow these instructions will lead to automatic loss of marks (up to 100% depending on script output).
For this submission, you must use the information contained in the TAdocs API (TAdocs.zip (https://canvas.sydney.edu.au/courses/40450/files/22667310?wrap=1) (https://canvas.sydney.edu.au/courses/40450/files/22667310/download?download_frd=1)) to test the following components:
BILLINGSYSTEM MODULE
You must write an independent unit test suite for the BSFacadeImpl class. This class requires the injection of relevant modules – you have been given the interface these modules must implement, but not the concrete classes. You must use mocks to handle these requirements appropriately.
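For orientation, a hedged sketch of the shape such a test can take follows. It assumes JUnit 5 and Mockito are available via the provided build.gradle; the injected interface name (AuthModule) and the injection and facade methods shown are placeholders only – substitute the real interfaces and signatures from the TAdocs API.

    import static org.junit.jupiter.api.Assertions.*;
    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    public class BSFacadeImplTest {
        private BSFacadeImpl facade;
        private AuthModule authModule; // placeholder name for one of the injected interfaces in the API

        @BeforeEach
        public void setup() {
            facade = new BSFacadeImpl();
            authModule = mock(AuthModule.class);            // mock the interface, not a concrete class
            when(authModule.authenticate("user", "pw"))     // stub only what the test needs (placeholder method)
                    .thenReturn(true);
            facade.injectAuth(authModule);                  // placeholder injection method
        }

        @Test
        public void loginDelegatesToTheInjectedAuthModule() {
            facade.login("user", "pw");                     // placeholder facade method
            verify(authModule).authenticate("user", "pw");  // verify the interaction with the mock
        }
    }

Since the concrete module classes are never provided, every interaction with them has to go through mocks (or the cheat module described below).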
CHEAT MODULE
You will notice au.edu.sydney.soft3202.reynholm.erp.cheatmodule.ERPCheatFactory in the API. This is a class you can use if you want to obtain most of the marks for the least amount of time spent (you will still need to perform testing and mocking, but the depth to which you need to mock is reduced).
In order to use this module, you should instantiate the ERPCheatFactory class (it is concrete, not an interface), and use it to get implementations of the necessary classes you will need for the authentication and authorisation steps in BSFacadeImpl.
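As a rough illustration, the setup inside your test class might look something like the fragment below. ERPCheatFactory is concrete, so it can be constructed directly, but the accessor names and the injection call are assumptions – take the real method names and return types from the TAdocs javadoc.

    import au.edu.sydney.soft3202.reynholm.erp.cheatmodule.ERPCheatFactory;

    // inside the @BeforeEach setup, once the facade under test exists
    ERPCheatFactory cheatFactory = new ERPCheatFactory();
    facade.injectAuth(cheatFactory.getAuthenticationModule(),   // placeholder accessor
                      cheatFactory.getAuthorisationModule());   // placeholder accessor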
API Clarification
The API should be considered volatile for the early stage of this assessment. If you would like to clarify parts of the API and have the API modified to make things less ambiguous, make a request on Edstem strictly in the active API Clarification thread. The final version of the API will be updated here once this process is complete.
Submission Requirements
You will be submitting your code as a GitHub Repository. This GitHub Repository must be created under your unikey account on the https://github.sydney.edu.au platform, with the repository name exactly matching:
SCD2_2022
You must create this repository as a PRIVATE repository. Make sure it is private, otherwise you could get in trouble for academic dishonesty! (This should be the same repository as for Task 1)
You must add the following unikeys as ‘collaborators’ so both the marking script and the teaching team can access your work. Do not add any other collaborators, and make sure you get the
spelling/numbers correct to avoid releasing your code to somebody else:
jbur2821
aest9988
agha0431
efis3423
hzen5475
phao5814
Your repository should match the following structure (keep the package and directory structure exactly as it is written in the API). You can use other files if you like to help you while you work (for example, an implementation of BSFacadeImpl), but you should take care to keep them out of the relevant directories:

task1 (this is not part of this assessment, but should help in understanding the necessary repository structure)
    ShoppingBasketTest.java
testingassignment
    BSFacadeImplTest.java
anyOtherFolderName
    Marking script doesn't care. For example, you can store the full gradle project here if you want
Testing Assignment
Due: Tue Mar 22, 2022 23:59
15 Possible Points
Academic honesty
While the University is aware that the vast majority of students and staff act ethically and honestly, it is opposed to and will not tolerate academic dishonesty or plagiarism and will treat all allegations of
dishonesty seriously.
Marking Mechanism
The marking for this assignment is done by a script. This script will run each night at any time after midnight, based on your last pushed commit to your repository’s master/main branch. Below is a simplified description of the process the marking script will follow so you can better understand the feedback it gives you. Feedback will be available through this Canvas assignment. Note that some feedback will be hidden until the due date!
First, it checks to see if it has access to a correctly named repository for your unikey. If it does not, it terminates.
If it has access to a repository, it will clone the repository, and retrieve the latest pushed commit you have made to the master/main branch (most likely HEAD). Don’t do anything like deleting or renaming the master/main branch, but working on other branches is perfectly fine. The script will only look at master/main though.
Once it has the latest commit, it will parse the directory tree to see if it looks like it should (i.e. it will look for the assessable file in the directory it should be in). If it is not there, it terminates.
If it has found the file, it will move it into the test harness. Your other folders are ignored (your assessable code cannot rely on them!). This test harness includes multiple environments for testing your test cases (i.e. your tests will be run on my code).
One version in each category will have no bugs. If you reject this version as being bugged, the script will terminate. You MUST pass the working version in order to gain any marks at all.
Various numbers of versions in each category will have one bug each. You gain marks based on the number of bugged versions you reject as bugged. Most bugs will be ‘hidden’ until after the
due date.
Each of these will be checked with the gradle command ‘gradle test’ – using the same build.gradle file you have been provided.
Your code may fail to compile. If this is the case the script will terminate. Your code MUST compile in order to gain any marks at all. This can occur separately depending on which files are being tested
(that is, your implementation might compile and run, but a test file might fail).
Once all of the above completes successfully a mark will be calculated and the script will terminate.
Your feedback will include some of the following, depending on how far the script got:
If the script terminated prematurely, you will be given a message indicating when it terminated. Any errors generated (such as compile-time errors) will be included.
If this is a ‘before the due date’ marking run, and the script completed, you will receive the following:
A message indicating your code structure appears to be ok and your code compiled successfully
A message indicating how many tests your implemented code passed vs failed, including the JUnit report
If this is the ‘after the due date’ marking run, and the script completed, you will receive the following:
A message indicating your code structure appears to be ok and your code compiled successfully
A message indicating how many tests your implemented code passed vs failed, including the JUnit report
A message indicating how many bugs you have caught vs missed, including what those bugs were
A mark derived from the above based on the marking guide.
Assessment Notes:
Your final submission will be assessed using a variety of automated tests. These tests are complex as you have been asked to write a sophisticated test suite with some very specific requirements. Ensure you read and follow these instructions carefully as automated testing is not a forgiving system!
Some important notes:
Ensure you stick to the folder structure, package structure, and filenames required for this assignment – the marking script will not know the difference between a typo in the filename and a syntax error and will fail you either way! In particular do not reference methods not declared in the public API documents – the code that is swapped in will not implement any other methods and this will cause a compilation failure.
Pay attention to what classes you are supposed to test – you do not need to test any of the given interfaces, you will be testing concrete implementations of those interfaces based on the requirements the interface javadocs specify.
You will be testing the defensive programming elements of these modules as well as their actual operations – that is, do they correctly identify and reject input that breaches their preconditions in the way their API says they will.
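A precondition test of this kind drives a method with input the javadoc forbids and asserts the documented rejection. A minimal sketch (assuming JUnit 5) follows; in your submission such a test would live inside BSFacadeImplTest, and the addProject parameters and exception type here are assumptions – assert exactly what the interface javadoc promises, nothing more.

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    public class PreconditionSketchTest {
        @Test
        public void addProjectRejectsANullName() {
            BSFacadeImpl facade = new BSFacadeImpl(); // any required injection/login omitted for brevity
            // the javadoc documents the rejection behaviour; the test pins it down
            assertThrows(IllegalArgumentException.class,
                    () -> facade.addProject(null, "ClientCo", 10.0, 20.0)); // hypothetical signature
        }
    }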
Something to make things easier: You may assume that the implementations you are given are entirely deterministic – there is no use of any pseudorandom functions, the system does not react to the system clock anywhere, it does not query the network, and it does not look at the current hardware. This is obviously NOT something you can assume when doing real testing! (be careful though – some Java in-built classes do not offer guarantees you might assume – for instance, the order of certain collections)
All methods should be considered to have an implicit ‘and no unrelated externally observable effects’ requirement. That is, for example, BSFacade.addProject does not explicitly say that this operation should not modify any other project, but this and all similar cases should be assumed. You do not need to test for breaches of this requirement (none of the bugs you need to catch are like this).
Unless otherwise specified, this API does not make any guarantees of concrete implementing class – that is, where List is specified, ArrayList or LinkedList or a custom List would all be valid. Do not make more detailed assumptions of behaviour when testing.
Do not rely on the copied information in the concrete class javadocs (not all of the documentation gets copied to implementing classes) – refer to the interface specification.
You may find ‘best practice’ information that says you should not test simple methods like basic getters and setters – this is correct, it’s usually a waste of time. For the scope of this assignment however you should be testing everything, even the simplest methods.
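Such a test can be very small indeed – a sketch assuming JUnit 5, with the addProject signature and the Project.getName accessor as placeholders:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    public class SimpleAccessorSketchTest {
        @Test
        public void projectReportsTheNameItWasCreatedWith() {
            BSFacadeImpl facade = new BSFacadeImpl(); // any required injection/login omitted for brevity
            Project project = facade.addProject("Skynet", "ClientCo", 10.0, 20.0); // hypothetical signature
            assertEquals("Skynet", project.getName()); // even trivial accessors get a test here
        }
    }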
Sanity note: If your tests passing means you can say with certainty that the API is adhered to, then you are 100% guaranteed to pick up all of the marking bugs. Each of them directly breaches something said in the API – however, a detailed and correct comprehension of this API will be required!
To give you an idea of just how much easier this makes things (and how hard real world testing is), there is no bug that only occurs if a Product object has an ID of 1337.
That also means you don’t need to test for some things that you normally should – such as integer over/underflows. Stick to the API.
Conversely, DON’T add things not required by the API – either in your implementation, or your test suite. e.g. if the API doesn’t say to throw an exception, then don’t (and don’t have your tests expect the implementation you are given to throw one either). There is at least 1 deliberate gotcha here where a well designed and consistent system would act differently – but we’re just here for verification, not validation.
Lists that do not guarantee order in the API should not be tested with a required order.
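For example, a membership-style assertion avoids baking an order into the test. The sketch below assumes JUnit 5; the getAllProjects accessor and the addProject signature are placeholders, and any equality semantics you rely on should themselves come from the API javadoc.

    import static org.junit.jupiter.api.Assertions.*;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    public class OrderIndependenceSketchTest {
        @Test
        public void allAddedProjectsAreReportedRegardlessOfOrder() {
            BSFacadeImpl facade = new BSFacadeImpl();                         // setup/injection omitted
            Project first = facade.addProject("A", "ClientCo", 10.0, 20.0);   // hypothetical signature
            Project second = facade.addProject("B", "ClientCo", 10.0, 20.0);

            List<Project> projects = facade.getAllProjects();                 // hypothetical accessor
            assertEquals(2, projects.size());
            assertTrue(projects.contains(first));   // membership checks, not index positions
            assertTrue(projects.contains(second));
        }
    }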
Marking Guide
Note: For each bug section, marks are only available IF your test suite accepts the given working example. If your test suite marks the working example as bugged, the total score for that section will be 0%.
0% Does the test suite compile, and is the working version passed? (must achieve this for any marks in the relevant sections)
11% BSFacadeImpl ‘standard’ testing (20 bugs to detect, and 2 alternative versions that are not bugged)
4% BSFacadeImpl ‘advanced’ testing that does not use the cheat factory (8 bugs to detect)
Note that marking is not linear – catching 50% of the available bugs will yield more than 50% of the marks. This is not an automatic process, but will be based on a selected number of handmarked submissions that will generate a mapping of bugs caught <-> marks based on the quality of the testing that caught that number.
