Example Piazza Posts¶
Below you will find three example Piazza posts in which the student gave us enough information to provide assistance right away, with few (if any) follow-up questions:
Example Post #1: In this post, the student provided the full output of running the tests, and confirmed that their code had been pushed to Git.
Example Post #2: Even better, this post also included the exact command they ran, and specified that they were running on a VM (Note: this was an option in pre-2020 offerings of CS 121). On top of that, the output was formatted using Piazza’s “code” formatting, which makes it much easier to read.
Example Post #3: In this post, the student also included the print statements they were using to debug the problem, and explained what exactly seemed “off” with the output. This is very helpful because, if the issue is a common mistake, we can sometimes provide a quick answer without even looking at your code.
Example Post #1¶
Task 2 run simulation bug
My partner and I have been to 3 office hours today, and since there are only group office hours left for the times we are available, any help will be greatly appreciated. This is regarding part 2 (running the simulation), since our is_satisfied works and passed all the tests. We are getting an error message on test 2 saying the expected value "B" is different from the actual "O". We added some print statements to see what was going on, and we are getting extra square brackets after each line. Our code is pushed on my git. This is the full error message:
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.3.1, py-1.5.2, pluggy-0.6.0 -- /usr/bin/python3
cachedir: .cache
metadata: {'Python': '3.5.2', 'Packages': {'pytest': '3.3.1', 'py': '1.5.2', 'pluggy': '0.6.0'}, 'Plugins': {'html': '1.16.0', 'json': '0.4.0', 'metadata': '1.5.1'}, 'Platform': 'Linux-4.4.0-135-generic-x86_64-with-Ubuntu-16.04-xenial'}
rootdir: /home/student/cmsc12100-aut-18-student/pa2, inifile: pytest.ini
plugins: metadata-1.5.1, json-0.4.0, html-1.16.0
collected 14 items

test_do_simulation.py::test_0 PASSED                                     [  7%]
test_do_simulation.py::test_1 FAILED                                     [ 15%]

generated json report: /home/student/cmsc12100-aut-18-student/pa2/tests.json
=================================== FAILURES ===================================
____________________________________ test_1 ____________________________________

    def test_1():
        '''
        Check stopping condition #2
        '''
        input_fn = "tests/a18-sample-grid.txt"
        output_fn = "tests/a18-sample-grid-1-33-33-3-final.txt"
>       helper(input_fn, output_fn, 1, 0.33, 0.33, 3, 2)

test_do_simulation.py:113:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input_filename = '/home/student/cmsc12100-aut-18-student/pa2/tests/a18-sample-grid.txt'
expected_filename = '/home/student/cmsc12100-aut-18-student/pa2/tests/a18-sample-grid-1-33-33-3-final.txt'
R = 1, M_threshold = 0.33, B_threshold = 0.33, max_num_steps = 3
expected_num_relocations = 2

    def helper(input_filename, expected_filename, R, M_threshold,
               B_threshold, max_num_steps, expected_num_relocations):
        '''
        Do one simulation with the specified parameters (R, threshold,
        max_num_steps) starting from the specified input file.  Match
        actual grid generated with the expected grid and match expected
        steps and actual steps.

        Inputs:
            input_filename: (string) name of the input grid file
            expected_filename: (string) name of the expected grid file.
            R: (int) radius for the neighborhood
            M_threshold: lower bound for similarity score for maroon homeowners
            B_threshold: lower bound for similarity score for blue homeowners
            max_steps: (int) maximum number of steps to do
            expected_num_relocations: (int) expected number of relocations
                performed during the simulation
        '''

        input_filename = os.path.join(BASE_DIR, input_filename)
        actual_grid = utility.read_grid(input_filename)
        expected_num_homeowners = count_homeowners(actual_grid)
        opens = utility.find_opens(actual_grid)

        actual_num_steps = do_simulation(actual_grid, R, M_threshold,
                                         B_threshold, max_num_steps, opens)
        actual_num_homeowners = count_homeowners(actual_grid)

        expected_filename = os.path.join(BASE_DIR, expected_filename)
        expected_grid = utility.read_grid(expected_filename)

        if actual_num_steps != expected_num_relocations:
            s = ("actual and expected values number of steps do not match\n"
                 "  got {:d}, expected {:d}")
            s = s.format(actual_num_steps, expected_num_relocations)
            pytest.fail(s)

        if actual_num_homeowners != expected_num_homeowners:
            if actual_num_homeowners <= expected_num_homeowners:
                s = "Homeowners are fleeing the city!\n"
            else:
                s = ("The city is gaining homeowners.\n")
            s += ("  Actual number of homeowners: {:d}\n"
                  "  Expected number of homeowners: {:d}\n")
            s = s.format(actual_num_homeowners, expected_num_homeowners)
            pytest.fail(s)

        mismatch = utility.find_mismatch(actual_grid, expected_grid)
        if mismatch:
            (i, j) = mismatch
            s = ("actual and expected grid values do not match "
                 "at location ({:d}, {:d})\n")
            s = s.format(i, j)
            s = s + "  got {}, expected {}".format(actual_grid[i][j],
                                                   expected_grid[i][j])
>           pytest.fail(s)
E           Failed: actual and expected grid values do not match at location (0, 1)
E             got O, expected B

test_do_simulation.py:100: Failed
----------------------------- Captured stdout call -----------------------------
[['O', 'B', 'O', 'B', 'M'], ['M', 'M', 'M', 'M', 'O'], ['M', 'B', 'O', 'M', 'B'], ['M', 'M', 'O', 'O', 'O'], ['M', 'M', 'O', 'M', 'O']]
H []
H [(0, 0), (0, 2), (1, 4), (3, 2), (3, 3), (3, 4), (4, 4)]
1
[['O', 'O', 'B', 'B', 'M'], ['M', 'M', 'M', 'M', 'O'], ['M', 'B', 'O', 'M', 'B'], ['M', 'M', 'O', 'O', 'O'], ['M', 'M', 'O', 'M', 'O']]
H [(0, 0), (1, 4), (3, 3), (3, 4), (4, 4), (0, 1)]
2
[['O', 'O', 'B', 'B', 'M'], ['M', 'M', 'M', 'M', 'O'], ['M', 'O', 'O', 'M', 'B'], ['M', 'M', 'O', 'B', 'O'], ['M', 'M', 'O', 'M', 'O']]
H []
H []
[['O', 'O', 'B', 'B', 'M'], ['M', 'M', 'M', 'M', 'O'], ['M', 'O', 'O', 'M', 'B'], ['M', 'M', 'O', 'B', 'O'], ['M', 'M', 'O', 'M', 'O']]
H []
H []
H []
H []
H []
============================== 1 tests deselected ==============================
=============== 1 failed, 1 passed, 1 deselected in 0.15 seconds ===============
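An aside on the "extra square brackets" this student mentions: printing a Python list directly always includes the brackets (and quotes) of the list's repr, which is usually where stray brackets in debug output come from. A minimal illustration (not the student's code):

```python
# Printing a list prints its repr, brackets and all; joining the
# elements into a string prints them plainly.
row = ['O', 'B', 'O', 'B', 'M']

print(row)            # ['O', 'B', 'O', 'B', 'M']
print(' '.join(row))  # O B O B M
```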
Example Post #2¶
Error running final test
Hi, my partner’s and my code has passed the tests for is_satisfied and for do_simulation, but when I try to run the entire py.test, it does not appear to be reading the test code correctly. Any idea why this might be? This is on a VM.
student@cs-vm:~/cmsc12100-aut-18-student/pa2$ py.test ../common/grader.py
================================== test session starts ===================================
platform linux -- Python 3.5.2, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
metadata: {'Packages': {'py': '1.5.2', 'pytest': '3.3.1', 'pluggy': '0.6.0'}, 'Python': '3.5.2', 'Platform': 'Linux-4.4.0-135-generic-x86_64-with-Ubuntu-16.04-xenial', 'Plugins': {'html': '1.16.0', 'metadata': '1.5.1', 'json': '0.4.0'}}
rootdir: /home/student/cmsc12100-aut-18-student, inifile:
plugins: metadata-1.5.1, json-0.4.0, html-1.16.0
collected 0 items / 1 errors
========================================= ERRORS =========================================
___________________________ ERROR collecting common/grader.py ____________________________
../common/grader.py:14: in <module>
args = parser.parse_args()
/usr/lib/python3.5/argparse.py:1738: in parse_args
self.error(msg % ' '.join(argv))
/usr/lib/python3.5/argparse.py:2394: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
/usr/lib/python3.5/argparse.py:2381: in exit
_sys.exit(status)
E SystemExit: 2
------------------------------------ Captured stderr -------------------------------------
usage: py.test [-h] [--json-file JSON_FILE] [--rubric-file RUBRIC_FILE]
[--csv]
py.test: error: unrecognized arguments: ../common/grader.py
!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!
================================ 1 error in 0.92 seconds =================================
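For context on what is happening in this traceback (a sketch, not the actual grader.py): the file calls parser.parse_args() at module level, so when pytest imports it during collection, argparse sees arguments it does not recognize (here, the path the student passed) and exits with status 2. The --json-file flag below comes from the usage message above; the rest is an assumed minimal reproduction:

```python
# Sketch of why collecting a script that parses sys.argv at import
# time fails under pytest: parse_args() runs against an argv that the
# parser was never meant to see.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--json-file')

try:
    # Simulate the unrecognized argument pytest's argv contributes.
    parser.parse_args(['../common/grader.py'])
except SystemExit as e:
    print('SystemExit:', e.code)  # SystemExit: 2

# The usual fix is to guard the module-level parse:
# if __name__ == '__main__':
#     args = parser.parse_args()
```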
Example Post #3¶
Task 5: Strange error in Task 4 causing it to break, but only in the program, not in IPython
Ok so I figured out that something is going on with my task 4 output.
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.3.1, py-1.5.2, pluggy-0.6.0 -- /usr/bin/python3
cachedir: .cache
metadata: {'Python': '3.5.2', 'Plugins': {'metadata': '1.5.1', 'html': '1.16.0', 'json': '0.4.0'}, 'Platform': 'Linux-4.4.0-135-generic-x86_64-with-Ubuntu-16.04-xenial', 'Packages': {'pluggy': '0.6.0', 'py': '1.5.2', 'pytest': '3.3.1'}}
rootdir: /home/student/cmsc12100-aut-18-student/pa1, inifile: pytest.ini
plugins: metadata-1.5.1, json-0.4.0, html-1.16.0
collected 47 items
test_sir.py::test_run_simulation_1 FAILED [ 16%]
generated json report: /home/student/cmsc12100-aut-18-student/pa1/tests.json
=================================== FAILURES ===================================
____________________________ test_run_simulation_1 _____________________________
def test_run_simulation_1():
'''
Purpose: tests basic functionality.
'''
test_file = TEST_DATA_DIR + '/3.json'
helper_run_simulation(test_file,
> ['S', 'R', 'R'], 3)
test_sir.py:603:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
filename = '/home/student/cmsc12100-aut-18-student/pa1/configs/3.json'
expected_state = ['S', 'R', 'R'], expected_num_days = 3
def helper_run_simulation(filename, expected_state, expected_num_days):
'''
Purpose: helper function for task 5
Inputs:
filename: (str) json file to open
expected_state: (list) expected value
expected_num_days: (int) expected value
'''
starting_state, random_seed, d, r, _ = \
util.get_config(filename)
# helper function for testing
(actual_state, actual_num_days) = sir.run_simulation(
starting_state, random_seed, d, r)
if actual_state != expected_state:
s = "Actual ({:}) and expected ({:}) final states do not match"
> pytest.fail(s.format(actual_state, expected_state))
E Failed: Actual (['S', 'S']) and expected (['S', 'R', 'R']) final states do not match
test_sir.py:591: Failed
----------------------------- Captured stdout call -----------------------------
20170217
This is start state:
['S', 'S', 'I1']
This is citystate:
['S', 'S', 'I1']
0.4
Before Simulation citystate is:
['S', 'S', 'I1']
The Item Being Tested is
S
Current State of City Sim is:
['S']
The Item Being Tested is
S
Current State of City Sim is:
['S', 'S']
The Item Being Tested is
I1
Current State of City Sim is:
['S', 'S']
After Simulation citystate is:
['S', 'S']
This is the entirety of the output. For some reason, when it gets to the third item in the list, it refuses to recognize it as being equal to 'I1' and instead does nothing. I'm not sure why it isn't appending a new value. I passed all the test cases and they all worked. I have pushed my most recent version.
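One common cause of exactly this symptom, shown purely as an illustration (we have not seen this student's code; step_city and its branches are hypothetical): an if/elif chain with no branch matching 'I1' silently skips that element, so nothing is appended for it.

```python
# Hypothetical sketch: the loop visits 'I1' (the debug prints above show
# "The Item Being Tested is I1") but no branch matches it, so the result
# list stays two elements long.
def step_city(start_state):
    new_state = []
    for person in start_state:
        if person == 'S':
            new_state.append('S')
        elif person == 'R':
            new_state.append('R')
        # no branch handles 'I1': Python falls through without error
    return new_state

print(step_city(['S', 'S', 'I1']))  # ['S', 'S'] -- the 'I1' entry vanished
```

Adding a final else branch that raises (or at least prints) for unexpected states makes this kind of silent fall-through visible immediately.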