diff --git a/lab-p3/README.md b/lab-p3/README.md new file mode 100644 index 0000000000000000000000000000000000000000..80bd54eeae1c8b79efda228bed572fda51f66ab7 --- /dev/null +++ b/lab-p3/README.md @@ -0,0 +1,200 @@ +# Lab-P3: Learning an API and some functions + +### Corrections/Clarifications + +- None yet. + +**Find any issues?** Report to us: + +- Ashwin Maran <amaran@wisc.edu> + +------------------------------ +## Learning Objectives + +In this lab, you will practice... +* Writing functions with return statements +* Importing a module and using its functions +* Using parameters' default values when calling functions +* Avoiding hardcoding by using a `get_id` function +* Working with the index of a row of data + +------------------------------ +## Note on Academic Misconduct + +You may do these lab exercises only with your project partner; you are not allowed to start working on Lab-P3 with one person, then do the project with a different partner. Now may be a good time to review [our course policies](https://cs220.cs.wisc.edu/f23/syllabus.html). + +------------------------------ + +## Project partner + +We strongly recommend students find a project partner. Pair programming is a great way to learn from a fellow student. Project difficulty increases exponentially in this course. Finding a project partner early on during the semester is a good idea. + +If you are still looking for a project partner, take a moment now to ask around the room if anyone would like to partner with you on this project. Then you can work with them on this lab as well as the project. + +------------------------------ +## Description + +For many projects this semester, we'll provide you with a *module* (a collection of functions) named `project`, in a file named `project.py`. This module will provide functions that will help you complete the project. In the lab, we will introduce the module `project.py` which you will need to use in `P3`. 
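If the idea of a *module* is new to you, here is a minimal sketch of what "a collection of functions in a file" means. The module name `tiny_module` and its functions are hypothetical, purely for illustration; `project.py` is written for you and is used the same way:

```python
# Imagine the two functions below saved in a file called tiny_module.py:
def double(x):
    return 2 * x

def triple(x):
    return 3 * x

# Any other program in the same folder could then run:
#     import tiny_module
#     tiny_module.double(5)
# and use the functions without copying them.
print(double(5))  # prints 10
```

You never need to read or edit `project.py` itself; you only need to know how to call the functions it provides.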
+ +When using an unfamiliar module, the first thing you should do is study the module's *API*. API stands for "Application Programming Interface". +The API describes everything a programmer needs to know about each piece of the module in order to use it. Understanding the API involves learning what each function does, what arguments it takes, and which functions might need to be called before you can use other functions. + +There are two ways you can learn about an API. First, the person who created the API may have provided written directions, called *documentation*. Second, there are ways you can write code to learn about a collection of functions; this approach is called *inspection*. + +------------------------------ +## Segment 1: Setup + +Create a `lab-p3` directory and download the following files into the `lab-p3` directory: + +* `lab_budget.csv` +* `project.py` +* `lab-p3.ipynb` +* `public_tests.py` + +Once you have downloaded the files, open a terminal and navigate to your `lab-p3` directory. Run `ls` to make sure the above files are available. + +**Note:** If you accidentally downloaded the file as a `.txt` instead of `.csv` (say `lab.txt`), you can execute `mv lab.txt lab_budget.csv` on a Terminal/PowerShell window. Recall that the `mv` (move) command lets you rename a source file (first argument, example: `lab.txt`) to the destination file (second argument, example: `lab_budget.csv`). + +------------------------------ + +## Segment 2: Learning the `project.py` API + +The file `project.py` contains certain *functions* that will be useful for you when you are solving `P3`. It is not necessary to understand *how* these functions work (although you will learn how they work within a few weeks), but to use this module, you need to know *what* these functions do. +When dealing with an unfamiliar module, the best way to learn what its functions are, and how to use them, is to study the module's API. In this segment, we will be learning how to do exactly that. 
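Both approaches can be previewed with a module you already have: Python's built-in `math`. This is only a quick sketch to show the two tools in action; the `project` module itself is explored step by step below:

```python
import math  # a module that ships with Python, so everyone has it

# Inspection: ask Python which names the module defines, skipping the
# special names that begin and end with two underscores
public_names = [name for name in dir(math) if not name.startswith("__")]
print("sqrt" in public_names)  # prints True

# Documentation: read what the module's author wrote about a function
help(math.sqrt)
```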
+ +First, open a new Terminal/PowerShell window, and navigate to the `lab-p3` folder which contains `project.py`. From here, type `python` (or `python3` if that is what worked for you in [Lab-P2](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/lab-p2)) to enter the Interactive Mode. It is also recommended that you review [Lab-P2](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/lab-p2#task-16-exit-interactive-mode) on how to exit Interactive mode. + +### Task 2.1: Using `dir` + +From the Interactive mode, type the following command: + +```python +>>> import project +``` +This line *imports* the `project` module, so you can use the functions inside it. If you want to use any of the functions inside any module, you will have to import it first. But before we can use the functions inside this module, we need to find out *what* functions are inside this module. To do that, type the following command in Interactive mode: + +```python +>>> dir(project) +``` + +You should see the following output: + +``` +['__agency_to_id', '__builtins__', '__cached__', '__csv__', '__data', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '__years', 'dump', 'get_budget', 'get_id', 'init'] +``` + +The functions inside this module that will be relevant to us are the ones that do **not** begin and end with two underscores. + +### Task 2.2: Inspecting `project.py` + +Now that we know the functions inside the module that we can call, we need to figure out *what* these functions do. One way to do that is to just try and call them. 
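When you experiment like this, errors are useful clues rather than dead ends: the message often tells you exactly what the function expected. A tiny sketch with a built-in function (nothing here is specific to `project`):

```python
# Calling a function incorrectly on purpose is a quick way to learn about it.
try:
    int("twenty")         # int cannot parse English words
except ValueError as error:
    print(error)          # prints: invalid literal for int() with base 10: 'twenty'
```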
Try the following in Interactive mode: + +```python +>>> project.dump() +``` + +You will likely see the following error message: + +``` +Traceback (most recent call last): + File "<stdin>", line 1, in <module> + File "C:\Users\myname\Documents\cs220\lab-p3\project.py", line 49, in dump + raise Exception("you did not call init first") +Exception: you did not call init first +``` + +This tells us that before we can call `project.dump`, we will need to call the `project.init` function. + +### Task 2.3: Using `help` + +We can continue to try and inspect the other functions in the `project` module to figure out what they do. However, most modules come with *documentation* explaining what each of the functions does, and it can save us time if we read this documentation. Try the following in Interactive mode: + +```python +>>> help(project.init) +``` + +You should see the following output: + +``` +Help on function init in module project: + +init(path) + init(path) must be called to load data before other calls will work. You should call it like this: init("madison_budget.csv") or init("lab_budget.csv") +``` + +**Note:** If you are using Mac OS, you **may** enter **Python help mode** when you type `help(project.init)`. You can recognize that you are in help mode if you no longer see the prompt `>>>` appearing on your screen. You will not be able to execute other Python commands from this mode. In order to *exit* the help mode, you need to type `q` and hit the `RETURN` key. If you do not enter Python help mode, this is unnecessary. + +The documentation here tells us that the function `init` takes in a `path` such as `lab_budget.csv`, or `madison_budget.csv` (which you will work with in `P3`) as its argument, and loads the data from the file into the Python program. Can you now understand what the Traceback was telling us when we called `project.dump` earlier? + +Let us now load the data from `lab_budget.csv`. Execute the following command in Interactive mode. 
+ +```python +>>> project.init('lab_budget.csv') +``` + +**Note:** If you load the file `lab_budget.csv` correctly, you will see the following warning message: +``` +WARNING! Opening a path other than madison_budget.csv. That's fine for testing your code yourself, but madison_budget.csv will be the only file around when we test your code for grading. +``` +That is **to be expected**. It is warning you that for the project `P3`, you will not be working with the data in `lab_budget.csv`, but instead, the data in `madison_budget.csv`, and that you should be careful not to load in the wrong file when working on `P3`. + +Now that we have loaded in our data, let us see what `project.dump` does. Execute the following command in Interactive mode. + +```python +>>> help(project.dump) +``` + +You should see the following output: + +``` +Help on function dump in module project: + +dump() + prints all the data to the screen +``` + +Can you figure out what this function does, and how to call it? Call the function yourself. You should see the following output: + +``` +Building Inspection [ID 14] + 2021: $5.015456 MILLION + 2022: $4.935448 MILLION + 2023: $5.116290 MILLION + +Information Technology [ID 7] + 2021: $7.864314 MILLION + 2022: $9.438775 MILLION + 2023: $9.569373 MILLION + +Mayor [ID 10] + 2021: $1.123505 MILLION + 2022: $1.142239 MILLION + 2023: $1.259001 MILLION + +Streets [ID 28] + 2021: $27.812921 MILLION + 2022: $26.703376 MILLION + 2023: $26.734510 MILLION +``` + +This is data on the **budget** of a select few government **agencies** within the City of Madison, across the last few **years**. If you manually open `lab_budget.csv` using Microsoft Excel or some other Spreadsheet software, you will find this data stored there. + +We now need to figure out how to use the other functions in the module. 
Read the *documentation* using `help` to figure out what the following functions do: + +- `project.get_id` +- `project.get_budget` + +------------------------------ + +## Segment 3: Solving `lab-p3.ipynb` + +You will be finishing the rest of your lab on `lab-p3.ipynb`. Exit Python Interactive mode on your Terminal/PowerShell (using the `exit` function, as in [Lab-P2](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/lab-p2#task-16-exit-interactive-mode)), and run the command `jupyter notebook`. Remember not to close this terminal window while Jupyter is running, and open a new Terminal window if necessary. + +**Note:** For `P3`, you will be working on `p3.ipynb` which is very similar to `lab-p3.ipynb`. It is strongly recommended that you finish working on this notebook during the lab, so you can ask your TA/PM any questions about the notebook that may arise. + +**Note:** Unlike `p3.ipynb`, you do **not** have to submit `lab-p3.ipynb`. This notebook is solely for your practice. + +------------------------------ +## Project 3 + +Great, now you're ready to start [P3](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/p3)! Remember to only work with your partner from this lab on P3 from this point on. Have fun! 
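As a final recap of how the `project` functions fit together (and of avoiding hardcoding by using `get_id`), here is a sketch. The two stand-in functions below only mimic the documented API, with values copied from the `dump` output above; the real module reads them from the CSV file, so the point here is the lookup pattern, not the implementation:

```python
# Stand-in data, copied from the dump() output shown earlier
_budgets = {
    "Mayor":   {"id": 10, 2021: 1.123505, 2022: 1.142239, 2023: 1.259001},
    "Streets": {"id": 28, 2021: 27.812921, 2022: 26.703376, 2023: 26.734510},
}

def get_id(agency):
    """Mimics project.get_id: look up an agency's id by its name."""
    return _budgets[agency]["id"]

def get_budget(agency_id, year):
    """Mimics project.get_budget: look up a budget by agency id and year."""
    for record in _budgets.values():
        if record["id"] == agency_id:
            return record[year]
    raise ValueError("unknown agency id: " + str(agency_id))

# The pattern to remember: fetch the id with get_id, never type it in by hand.
mayor_budget_2023 = get_budget(get_id("Mayor"), 2023)
print(mayor_budget_2023)  # prints 1.259001
```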
diff --git a/lab-p3/lab-p3.ipynb b/lab-p3/lab-p3.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..cf646092157a2c94f6795d822ebabfadefa2983e --- /dev/null +++ b/lab-p3/lab-p3.ipynb @@ -0,0 +1,1642 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "d57febfc", + "metadata": { + "cell_type": "code", + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "# import and initialize otter\n", + "import otter\n", + "grader = otter.Notebook(\"lab-p3.ipynb\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "82da096e", + "metadata": { + "editable": false, + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.003643Z", + "iopub.status.busy": "2023-08-25T19:02:34.002643Z", + "iopub.status.idle": "2023-08-25T19:02:34.230981Z", + "shell.execute_reply": "2023-08-25T19:02:34.230981Z" + } + }, + "outputs": [], + "source": [ + "import public_tests" + ] + }, + { + "cell_type": "markdown", + "id": "1b134e5d", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "# Lab-P3: Learning an API and some functions\n", + "\n", + "**WARNING:** Please go through Segments 1 and 2 of [Lab-P3](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/lab-p3) **before** you start to solve this notebook." + ] + }, + { + "cell_type": "markdown", + "id": "377f59ff", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Task 3.1: Calling functions in the `project` module\n", + "\n", + "You have already learned how to learn the *API* of the `project.py` module. You will now demonstrate your ability to call the functions in this module." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9640d0ac", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.236011Z", + "iopub.status.busy": "2023-08-25T19:02:34.236011Z", + "iopub.status.idle": "2023-08-25T19:02:34.248174Z", + "shell.execute_reply": "2023-08-25T19:02:34.247145Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# include the relevant import statements in this cell\n", + "\n", + "import project # we have imported the project module for you here; you will have to add the import statement in p3" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f64cf36c", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.252162Z", + "iopub.status.busy": "2023-08-25T19:02:34.252162Z", + "iopub.status.idle": "2023-08-25T19:02:34.262513Z", + "shell.execute_reply": "2023-08-25T19:02:34.261483Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "project.init(\"lab_budget.csv\") # we have also loaded in 'lab_budget.csv' for you here; you will have to load the data yourself in p3\n", + "\n", + "# you may call the dump function here to test if you have loaded the dataset correctly." + ] + }, + { + "cell_type": "markdown", + "id": "93e2ad60", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 1:** What is the `id` of the agency *Streets*?" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "92a16f81", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.266500Z", + "iopub.status.busy": "2023-08-25T19:02:34.266500Z", + "iopub.status.idle": "2023-08-25T19:02:34.276847Z", + "shell.execute_reply": "2023-08-25T19:02:34.275818Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# we have done this one for you\n", + "streets_id = project.get_id(\"Streets\")\n", + "\n", + "streets_id" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ff02757c", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q1\")" + ] + }, + { + "cell_type": "markdown", + "id": "8fc00ba1", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 2:** What is the `id` of the agency *Information Technology*?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5c3b785a", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.290100Z", + "iopub.status.busy": "2023-08-25T19:02:34.290100Z", + "iopub.status.idle": "2023-08-25T19:02:34.297339Z", + "shell.execute_reply": "2023-08-25T19:02:34.296323Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# you will have to do this yourself; replace the ... 
with your code\n", + "# NOTE: assigning `info_tech_id = 7` => is considered hardcoding\n", + "# if you do this in p3, the Gradescope autograder will **deduct**\n", + "# points for hardcoding\n", + "info_tech_id = ...\n", + "\n", + "info_tech_id" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "af45d8f3", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q2\")" + ] + }, + { + "cell_type": "markdown", + "id": "56d50576", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 3:** What is the `id` of the agency *Building Inspection*?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c8e63f9e", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.311536Z", + "iopub.status.busy": "2023-08-25T19:02:34.310549Z", + "iopub.status.idle": "2023-08-25T19:02:34.316889Z", + "shell.execute_reply": "2023-08-25T19:02:34.316889Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'building_inspection_id'\n", + "\n", + "# display the variable 'building_inspection_id' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bb33116b", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q3\")" + ] + }, + { + "cell_type": "markdown", + "id": "86658a0a", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Task 3.1.1: Avoiding index hardcoding\n", + "\n", + "You should now have a good sense of how the `project.get_id` function works. You will now move on to the `project.get_budget` function. Read the documentation of this function, if you haven't already. Let us now call this function and see how it operates:\n", + "\n", + "**Note:** Recall that you can use `help(function_name)` to read the documentation of a function. 
Try reading the documentation for the `get_id` function using `help(project.get_id)`. If you want, you can create a **new** Cell by clicking on the `+` button at the top of the notebook, and use that cell to read the documentation." + ] + }, + { + "cell_type": "markdown", + "id": "799089c6", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 4:** What was the budget of the agency *Mayor* in *2023* (in millions of dollars)?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5aa9b664", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.330853Z", + "iopub.status.busy": "2023-08-25T19:02:34.330853Z", + "iopub.status.idle": "2023-08-25T19:02:34.338012Z", + "shell.execute_reply": "2023-08-25T19:02:34.336994Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# run this cell for now\n", + "mayor_budget_2023 = project.get_budget(10, 2023) # 10 is the `id` of the agency *Mayor*\n", + "\n", + "mayor_budget_2023" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "97c38576", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q4\")" + ] + }, + { + "cell_type": "markdown", + "id": "6d4a6ada", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## NOTE: Even though you passed the above test, this is considered hardcoding!\n", + "\n", + "You were asked what the budget of the agency *Mayor* in *2023* was. The `id` *10* was **not** provided to you as part of the question. You cannot assume that the dataset will never change in the future, or that the ids will never be updated. 
Furthermore, by **hardcoding** the `id` into your answer, you are limiting its ability to be used for other datasets, where *Mayor* might have a **different** `id`.\n", + "\n", + "**Warning:** Since you used some information that was **not** explicitly provided to you as part of the question, this is considered **hardcoding**, and even if you pass the public tests, the Gradescope autograder will deduct points.\n", + "\n", + "Let us now see the **correct** way to answer the previous question." + ] + }, + { + "cell_type": "markdown", + "id": "cd1b7a7a", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 5:** What was the budget of the agency *Mayor* in *2023*?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ebd8cc9f", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.356848Z", + "iopub.status.busy": "2023-08-25T19:02:34.355850Z", + "iopub.status.idle": "2023-08-25T19:02:34.366478Z", + "shell.execute_reply": "2023-08-25T19:02:34.365467Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# this is the *correct* way to solve this question without hardcoding any indices\n", + "mayor_id = project.get_id('Mayor')\n", + "mayor_budget_2023 = project.get_budget(mayor_id, 2023)\n", + "\n", + "mayor_budget_2023" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "042b86cb", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q5\")" + ] + }, + { + "cell_type": "markdown", + "id": "5623e763", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 6:** What was the budget of the agency *Information Technology* in *2022*?\n", + "\n", + "You **must not** hardcode the `id` of the agency" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "90aa0d82", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.381047Z", + 
"iopub.status.busy": "2023-08-25T19:02:34.381047Z", + "iopub.status.idle": "2023-08-25T19:02:34.388629Z", + "shell.execute_reply": "2023-08-25T19:02:34.387612Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# replace the ... with your code\n", + "\n", + "info_tech_id = ...\n", + "info_tech_id_budget_2022 = ...\n", + "\n", + "info_tech_id_budget_2022" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4721c48c", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q6\")" + ] + }, + { + "cell_type": "markdown", + "id": "70f7df87", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 7:** What was the budget of the agency *Streets* in *2021*?\n", + "\n", + "You **must not** hardcode the `id` of the agency" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5cb0b073", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.405353Z", + "iopub.status.busy": "2023-08-25T19:02:34.405353Z", + "iopub.status.idle": "2023-08-25T19:02:34.413882Z", + "shell.execute_reply": "2023-08-25T19:02:34.412852Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'streets_budget_2021'\n", + "\n", + "# display the variable 'streets_budget_2021' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1f7c668c", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q7\")" + ] + }, + { + "cell_type": "markdown", + "id": "ce5bd6ac", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Task 3.2: Calling and defining functions\n", + "\n", + "You will first demonstrate how to call some *built-in* functions (`max` and `min`) that will be useful for you in p3." 
+ ] + }, + { + "cell_type": "markdown", + "id": "9750d50d", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 8:** What is the *minimum* of the three numbers: *220*, *319*, and *320*?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bdc761e6", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.428375Z", + "iopub.status.busy": "2023-08-25T19:02:34.427374Z", + "iopub.status.idle": "2023-08-25T19:02:34.433495Z", + "shell.execute_reply": "2023-08-25T19:02:34.433495Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# we have done this one for you\n", + "min_three_numbers = min(220, 319, 320)\n", + "\n", + "min_three_numbers" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5ca3b356", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q8\")" + ] + }, + { + "cell_type": "markdown", + "id": "a6a8da8d", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 9:** What is the *minimum* of the two numbers: *220* and *200*?" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1121526c", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.447356Z", + "iopub.status.busy": "2023-08-25T19:02:34.446372Z", + "iopub.status.idle": "2023-08-25T19:02:34.452704Z", + "shell.execute_reply": "2023-08-25T19:02:34.452704Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'min_two_numbers'.\n", + "\n", + "# display the variable 'min_two_numbers' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "155eb64d", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q9\")" + ] + }, + { + "cell_type": "markdown", + "id": "4e43984f", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 10:** What is the *maximum* of the three numbers: *200*, *300*, and *400*?\n", + "\n", + "**Hint:** Just like the `min` function, there is a `max` function. You can either inspect or read its documentation to figure out how to use it." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cee64390", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.466306Z", + "iopub.status.busy": "2023-08-25T19:02:34.465323Z", + "iopub.status.idle": "2023-08-25T19:02:34.471662Z", + "shell.execute_reply": "2023-08-25T19:02:34.471662Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'max_three_numbers'.\n", + "\n", + "# display the variable 'max_three_numbers' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "04ef5ccc", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q10\")" + ] + }, + { + "cell_type": "markdown", + "id": "a70326ff", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Task 3.2.1: Defining your own functions\n", + "\n", + "You will now demonstrate how to define your own functions and how to call them." + ] + }, + { + "cell_type": "markdown", + "id": "825fcf8d", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 1: `get_avg_drop_lowest(score1, score2, score3)`\n", + "\n", + "- This function will have three parameters, `score1`, `score2`, and `score3`.\n", + "- It should add up the 3 scores, subtract out the smallest of these scores, and then determine the average of the remaining two.\n", + "\n", + "For example, given the three scores `2, 4, 7`, your function should sum all the scores together `2 + 4 + 7` and then subtract `2`. Finally, it should return the average of the two remaining scores i.e it should be `(2 + 4 + 7 - 2)/2 = 5.5`.\n", + "\n", + "You will be provided with some code snippets to start with, but you will have to fill out the rest of the function. If you are not sure how to write this function, ask your TA/PM for help." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "85fa2a94", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.485889Z", + "iopub.status.busy": "2023-08-25T19:02:34.485889Z", + "iopub.status.idle": "2023-08-25T19:02:34.492031Z", + "shell.execute_reply": "2023-08-25T19:02:34.491001Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "def get_avg_drop_lowest(score1, score2, score3):\n", + " # replace the ... with your code\n", + " smallest = ...\n", + " sum_of_larger_two = ...\n", + " \n", + " avg = ...\n", + " return avg" + ] + }, + { + "cell_type": "markdown", + "id": "7da789d0", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 11:** What is the output of `get_avg_drop_lowest(18, 20, 17)`?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "18d15943", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.496033Z", + "iopub.status.busy": "2023-08-25T19:02:34.496033Z", + "iopub.status.idle": "2023-08-25T19:02:34.502503Z", + "shell.execute_reply": "2023-08-25T19:02:34.502503Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# we have done this one for you\n", + "avg_drop_lowest_18_20_17 = get_avg_drop_lowest(18, 20, 17)\n", + "\n", + "avg_drop_lowest_18_20_17" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ba09d43d", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q11\")" + ] + }, + { + "cell_type": "markdown", + "id": "36e3e7fb", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 12:** What is the output of `get_avg_drop_lowest(40, 45, 35)`?" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "24387b0b", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.517046Z", + "iopub.status.busy": "2023-08-25T19:02:34.517046Z", + "iopub.status.idle": "2023-08-25T19:02:34.524256Z", + "shell.execute_reply": "2023-08-25T19:02:34.523226Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'avg_drop_lowest_40_45_35'\n", + "\n", + "# display the variable 'avg_drop_lowest_40_45_35' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "64e8a377", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q12\")" + ] + }, + { + "cell_type": "markdown", + "id": "9fdc55b4", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 2: `get_range(num1, num2, num3, num4)`\n", + "\n", + "- This function will have four parameters, `num1`, `num2`, `num3`, and `num4`.\n", + "- It should find the maximum of the four numbers, and the minimum of the four numbers, and subtract the minimum from the maximum.\n", + "\n", + "For example, given the four numbers `1, 2, 3, 4`, your function should subtract the minimum `1` from the maximum `4`, and return `4 - 1 = 3`.\n", + "\n", + "You will be provided with some code snippets to start with, but you will have to fill out the rest of the function. If you are not sure how to write this function, ask your TA/PM for help." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "526f41d1", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.537762Z", + "iopub.status.busy": "2023-08-25T19:02:34.537762Z", + "iopub.status.idle": "2023-08-25T19:02:34.543872Z", + "shell.execute_reply": "2023-08-25T19:02:34.542842Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "def get_range(num1, num2, num3, num4):\n", + " # replace the ... 
with your code\n", + " maximum = ...\n", + " # define a variable called `minimum` and store the minimum of the four numbers as its value\n", + " \n", + " range_four_nums = ...\n", + " return range_four_nums" + ] + }, + { + "cell_type": "markdown", + "id": "26040ada", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 13:** What is the output of `get_range(10, 20, 40, 60)`?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d4defd5c", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.547859Z", + "iopub.status.busy": "2023-08-25T19:02:34.547859Z", + "iopub.status.idle": "2023-08-25T19:02:34.554608Z", + "shell.execute_reply": "2023-08-25T19:02:34.553578Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'range_10_20_40_60'\n", + "\n", + "# display the variable 'range_10_20_40_60' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bea54771", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q13\")" + ] + }, + { + "cell_type": "markdown", + "id": "79db1688", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 14:** What is the output of `get_range(40, 20, 10, 30)`?" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f2f46cf8", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.567545Z", + "iopub.status.busy": "2023-08-25T19:02:34.566546Z", + "iopub.status.idle": "2023-08-25T19:02:34.573957Z", + "shell.execute_reply": "2023-08-25T19:02:34.572940Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'range_40_20_10_30'\n", + "\n", + "# display the variable 'range_40_20_10_30' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cfcd72f3", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q14\")" + ] + }, + { + "cell_type": "markdown", + "id": "8c5cb960", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 3: `change_in_value_per_year(year1, value1, year2, value2)`\n", + "\n", + "- This function will have four parameters, `year1`, `value1`, `year2`, and `value2`.\n", + "- `year1` and `year2` will be two different years represented as `int` and `value1` and `value2` will be the *value* of `year1` and `year2` respectively.\n", + "- The default value of `year2` is set to *2023* and the default value of `value2` is set to *6* (refer to the function in the cell below).\n", + "- This function should find the average yearly change in *value* between `year1` and `year2`. It is not mandatory that `year2` > `year1`.\n", + "\n", + "**Hint:** This function basically computes the slope of the line that passes through the points (`year1`, `value1`) and (`year2`, `value2`).\n", + "\n", + "For example, if we are given the inputs `year1 = 2021`, `value1 = 4`, `year2 = 2023`, and `value2 = 0`, then the output of the function should be `(0 - 4)/(2023 - 2021) = -2.0`.\n", + "\n", + "You will be provided with some code snippets to start with, but you will have to fill out the rest of the function. 
Note that you will have to define a very similar function in `p3`, so this will be good practice for you. If you are not sure how to write this function, ask your TA/PM for help." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9713d0bc", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.587272Z", + "iopub.status.busy": "2023-08-25T19:02:34.586285Z", + "iopub.status.idle": "2023-08-25T19:02:34.593584Z", + "shell.execute_reply": "2023-08-25T19:02:34.592573Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "def change_in_value_per_year(year1, value1, year2=2023, value2=6): # DO NOT EDIT THIS LINE\n", + " pass # this statement tells Python to do nothing.\n", + " # since this function has no code inside, we have added the pass statement inside so the code does not crash.\n", + " # once you have added code to this function, you can (and should) remove the pass statement as it does nothing.\n", + "\n", + " # TODO: find the change in *value* between `year1` and `year2` and store it in a variable `change_in_value`\n", + " # TODO: find the number of years between `year1` and `year2` and store it in a variable `num_years`\n", + " # TODO: compute the average change in value between `year1` and `year2` and store it in a variable `change_per_year`\n", + " # TODO: you *should* use the variables `change_in_value` and `num_years` to define this variable `change_per_year`\n", + " # TODO: return the variable `change_per_year` " + ] + }, + { + "cell_type": "markdown", + "id": "976983db", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 15:** What is the output of `change_in_value_per_year(2019, 12, 2020, 4)`?" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "be011d28", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.598604Z", + "iopub.status.busy": "2023-08-25T19:02:34.597589Z", + "iopub.status.idle": "2023-08-25T19:02:34.605262Z", + "shell.execute_reply": "2023-08-25T19:02:34.605262Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'change_per_year_2019_12_2020_4'\n", + "\n", + "# display the variable 'change_per_year_2019_12_2020_4' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c466422f", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q15\")" + ] + }, + { + "cell_type": "markdown", + "id": "33f82a2b", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 16:** What is the output of `change_in_value_per_year(2015, 8, 2018, 11)`?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6232f555", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.619925Z", + "iopub.status.busy": "2023-08-25T19:02:34.619925Z", + "iopub.status.idle": "2023-08-25T19:02:34.627049Z", + "shell.execute_reply": "2023-08-25T19:02:34.626020Z" + }, + "scrolled": true, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'change_per_year_2015_8_2018_11'\n", + "\n", + "# display the variable 'change_per_year_2015_8_2018_11' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a92cb5e7", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q16\")" + ] + }, + { + "cell_type": "markdown", + "id": "162fb256", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Task 3.2.2: Using default arguments\n", + "\n", + "You will now demonstrate how to use default arguments 
in functions.\n", + "\n", + "If you look back at the definition of `change_in_value_per_year`, you notice that the parameters `year2` and `value2` were given the default arguments `2023` and `6` respectively. We will now use these default arguments in our function calls." + ] + }, + { + "cell_type": "markdown", + "id": "5bc18913", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 17:** Find the change in value per year when `year1` is *2020*, `value1` is *4*, `year2` is *2013*, and `value2` is *6*" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dbf101a4", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.640161Z", + "iopub.status.busy": "2023-08-25T19:02:34.640161Z", + "iopub.status.idle": "2023-08-25T19:02:34.645986Z", + "shell.execute_reply": "2023-08-25T19:02:34.645986Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# we have done this one for you\n", + "change_per_year_2020_4_2013_6 = change_in_value_per_year(2020, 4, 2013) # note that the default argument for `value2` is used\n", + "\n", + "change_per_year_2020_4_2013_6" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e9ccc682", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q17\")" + ] + }, + { + "cell_type": "markdown", + "id": "ad404454", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 18:** Find the change in value per year when `year1` is *2018*, `value1` is *11*, `year2` is *2023*, and `value2` is *6*\n", + "\n", + "You **must** use the default arguments (your call to `change_in_value_per_year` should **not** pass any more arguments than is absolutely necessary). Ask a TA/PM to review your code if you are unsure of your answer." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9e464ad2", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.659266Z", + "iopub.status.busy": "2023-08-25T19:02:34.659266Z", + "iopub.status.idle": "2023-08-25T19:02:34.666663Z", + "shell.execute_reply": "2023-08-25T19:02:34.665345Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'change_per_year_2018_11_2023_6'\n", + "\n", + "# display the variable 'change_per_year_2018_11_2023_6' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fd40153f", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q18\")" + ] + }, + { + "cell_type": "markdown", + "id": "d4712c82", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 19:** Find the change in value per year when `year1` is *2013*, `value1` is *3*, `year2` is *2023*, and `value2` is *2*\n", + "\n", + "**Note:** Only pass those arguments whose values are not already set as defaults in the function definition. This will be important when working with `p3`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a7282184", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.680143Z", + "iopub.status.busy": "2023-08-25T19:02:34.679156Z", + "iopub.status.idle": "2023-08-25T19:02:34.686140Z", + "shell.execute_reply": "2023-08-25T19:02:34.686140Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# we have done this one for you\n", + "change_per_year_2013_3_2023_2 = change_in_value_per_year(2013, 3, value2=2) # note the use of a keyword argument here\n", + "\n", + "change_per_year_2013_3_2023_2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5d85fcbd", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q19\")" + ] + }, + { + "cell_type": "markdown", + "id": "d3c5f62a", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 20:** Find the change in value per year when `year1` is *2016*, `value1` is *4*, `year2` is *2023*, and `value2` is *0*\n", + "\n", + "**Note:** Only pass those arguments whose values are not already set as defaults in the function definition. This will be important when working with `p3`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bde1225b", + "metadata": { + "execution": { + "iopub.execute_input": "2023-08-25T19:02:34.699843Z", + "iopub.status.busy": "2023-08-25T19:02:34.699843Z", + "iopub.status.idle": "2023-08-25T19:02:34.706998Z", + "shell.execute_reply": "2023-08-25T19:02:34.705986Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'change_per_year_2016_4_2023_0'\n", + "\n", + "# display the variable 'change_per_year_2016_4_2023_0' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d3abfca8", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q20\")" + ] + }, + { + "cell_type": "markdown", + "id": "88797e8a", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## That's it! You are done with Lab-P3 and may start P3. You do not have to submit this practice notebook to Gradescope. Good luck!\n", + "\n", + "## It is good practice to save your notebook from time to time while working on projects, as notebooks can lose their network connection. So, save your notebook often!" 
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.3" + }, + "otter": { + "OK_FORMAT": true, + "tests": { + "q1": { + "name": "q1", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q1', streets_id)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q10": { + "name": "q10", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q10', max_three_numbers)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q11": { + "name": "q11", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q11', avg_drop_lowest_18_20_17)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q12": { + "name": "q12", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q12', avg_drop_lowest_40_45_35)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q13": { + "name": "q13", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q13', range_10_20_40_60)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q14": { + "name": "q14", + "points": 5, + "suites": [ + { + "cases": [ 
+ { + "code": ">>> public_tests.check('q14', range_40_20_10_30)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q15": { + "name": "q15", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q15', change_per_year_2019_12_2020_4)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q16": { + "name": "q16", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q16', change_per_year_2015_8_2018_11)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q17": { + "name": "q17", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q17', change_per_year_2020_4_2013_6)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q18": { + "name": "q18", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q18', change_per_year_2018_11_2023_6)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q19": { + "name": "q19", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q19', change_per_year_2013_3_2023_2)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q2": { + "name": "q2", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q2', info_tech_id)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": 
"", + "teardown": "", + "type": "doctest" + } + ] + }, + "q20": { + "name": "q20", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q20', change_per_year_2016_4_2023_0)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q3": { + "name": "q3", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q3', building_inspection_id)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q4": { + "name": "q4", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q4', mayor_budget_2023)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q5": { + "name": "q5", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q5', mayor_budget_2023)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q6": { + "name": "q6", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q6', info_tech_id_budget_2022)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q7": { + "name": "q7", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q7', streets_budget_2021)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q8": { + "name": "q8", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q8', min_three_numbers)\nAll test cases 
passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q9": { + "name": "q9", + "points": 5, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q9', min_two_numbers)\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + } + } + }, + "vscode": { + "interpreter": { + "hash": "f08154012ddadd8e950e6e9e035c7a7b32c136e7647e9b7c77e02eb723a8bedb" + } + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/lab-p3/lab_budget.csv b/lab-p3/lab_budget.csv new file mode 100644 index 0000000000000000000000000000000000000000..610dbe62ff86cb716419956912dcae7c41ced37f --- /dev/null +++ b/lab-p3/lab_budget.csv @@ -0,0 +1,5 @@ +id,agency,2021,2022,2023 +7,Information Technology,7.864314,9.438775,9.569373 +10,Mayor,1.123505,1.142239,1.259001 +14,Building Inspection,5.015456,4.935448,5.116290 +28,Streets,27.812921,26.703376,26.734510 diff --git a/lab-p3/project.py b/lab-p3/project.py new file mode 100644 index 0000000000000000000000000000000000000000..ac8faab3fb4c477fce60c157f71d027bb4510244 --- /dev/null +++ b/lab-p3/project.py @@ -0,0 +1,73 @@ +import csv as __csv__ + +# years in dataset +__years = None + +# key: (agency_id, year), val: spending in millions +__data = None + +# key: agency name, val: agency ID +__agency_to_id = None + + +def init(path): + """init(path) must be called to load data before other calls will work. You should call it like this: init("madison_budget.csv") or init("lab_budget.csv")""" + + global __years, __data, __agency_to_id + + if path != 'madison_budget.csv': + print("WARNING! Opening a path other than madison_budget.csv. 
" + + "That's fine for testing your code yourself, but madison_budget.csv " + + "will be the only file around when we test your code " + + "for grading.") + + __years = [] + __data = {} + __agency_to_id = {} + + f = open(path, encoding='utf-8') + data = list(__csv__.reader(f)) + f.close() + + for agency_idx in range(1, len(data)): + agency = data[agency_idx][1] + agency_id = int(data[agency_idx][0]) + __agency_to_id[agency] = agency_id + for year_idx in range(2, len(data[0])): + year = int(data[0][year_idx]) + if year not in __years: + __years.append(year) + agency_budget = float(data[agency_idx][year_idx]) + __data[(agency_id, year)] = agency_budget + +def dump(): + """prints all the data to the screen""" + if __agency_to_id == None: + raise Exception("you did not call init first") + + for agency in sorted(__agency_to_id.keys()): + agency_id = __agency_to_id[agency] + print("%-7s [ID %d]" % (agency, agency_id)) + for year in __years: + print(" %d: $%f MILLION" % (year, __data[(agency_id, year)])) + print() + + +def get_id(agency): + """get_id(agency) returns the ID of the specified agency.""" + if __agency_to_id == None: + raise Exception("you did not call init first") + if not agency in __agency_to_id: + raise Exception("No agency '%s', only these: %s" % + (str(agency), ','.join(list(__agency_to_id.keys())))) + return __agency_to_id[agency] + + +def get_budget(agency_id, year=2023): + """get_budget(agency_id, year) returns the dollars (in millions) allotted to the specified agency in specified year.""" + if __data == None: + raise Exception("you did not call init first") + if not (agency_id, year) in __data: + raise Exception("No data for agency %s, in year %s" % + (str(agency_id), str(year))) + return __data[(agency_id, year)] diff --git a/lab-p3/public_tests.py b/lab-p3/public_tests.py new file mode 100644 index 0000000000000000000000000000000000000000..9d6b1e1e97a4172ce8aefa7f35258dba5e2f0a9d --- /dev/null +++ b/lab-p3/public_tests.py @@ -0,0 +1,808 @@ 
+#!/usr/bin/python +# + +import os, json, math, copy +from collections import namedtuple +from bs4 import BeautifulSoup + +HIDDEN_FILE = os.path.join("hidden", "hidden_tests.py") +if os.path.exists(HIDDEN_FILE): + import hidden.hidden_tests as hidn +# - + +MAX_FILE_SIZE = 750 # units - KB +REL_TOL = 6e-04 # relative tolerance for floats +ABS_TOL = 15e-03 # absolute tolerance for floats +TOTAL_SCORE = 100 # total score for the project + +DF_FILE = 'expected_dfs.html' +PLOT_FILE = 'expected_plots.json' + +PASS = "All test cases passed!" + +TEXT_FORMAT = "TEXT_FORMAT" # question type when expected answer is a type, str, int, float, or bool +TEXT_FORMAT_UNORDERED_LIST = "TEXT_FORMAT_UNORDERED_LIST" # question type when the expected answer is a list or a set where the order does *not* matter +TEXT_FORMAT_ORDERED_LIST = "TEXT_FORMAT_ORDERED_LIST" # question type when the expected answer is a list or tuple where the order does matter +TEXT_FORMAT_DICT = "TEXT_FORMAT_DICT" # question type when the expected answer is a dictionary +TEXT_FORMAT_SPECIAL_ORDERED_LIST = "TEXT_FORMAT_SPECIAL_ORDERED_LIST" # question type when the expected answer is a list where order does matter, but with possible ties. 
Elements are ordered according to values in special_ordered_json (with ties allowed) +TEXT_FORMAT_NAMEDTUPLE = "TEXT_FORMAT_NAMEDTUPLE" # question type when expected answer is a namedtuple +PNG_FORMAT_SCATTER = "PNG_FORMAT_SCATTER" # question type when the expected answer is a scatter plot +HTML_FORMAT = "HTML_FORMAT" # question type when the expected answer is a DataFrame +FILE_JSON_FORMAT = "FILE_JSON_FORMAT" # question type when the expected answer is a JSON file +SLASHES = " SLASHES" # question SUFFIX when expected answer contains paths with slashes + +def get_expected_format(): + """get_expected_format() returns a dict mapping each question to the format + of the expected answer.""" + expected_format = {'q1': 'TEXT_FORMAT', + 'q2': 'TEXT_FORMAT', + 'q3': 'TEXT_FORMAT', + 'q4': 'TEXT_FORMAT', + 'q5': 'TEXT_FORMAT', + 'q6': 'TEXT_FORMAT', + 'q7': 'TEXT_FORMAT', + 'q8': 'TEXT_FORMAT', + 'q9': 'TEXT_FORMAT', + 'q10': 'TEXT_FORMAT', + 'q11': 'TEXT_FORMAT', + 'q12': 'TEXT_FORMAT', + 'q13': 'TEXT_FORMAT', + 'q14': 'TEXT_FORMAT', + 'q15': 'TEXT_FORMAT', + 'q16': 'TEXT_FORMAT', + 'q17': 'TEXT_FORMAT', + 'q18': 'TEXT_FORMAT', + 'q19': 'TEXT_FORMAT', + 'q20': 'TEXT_FORMAT'} + return expected_format + + +def get_expected_json(): + """get_expected_json() returns a dict mapping each question to the expected + answer (if the format permits it).""" + expected_json = {'q1': 28, + 'q2': 7, + 'q3': 14, + 'q4': 1.259001, + 'q5': 1.259001, + 'q6': 9.438775, + 'q7': 27.812921, + 'q8': 220, + 'q9': 200, + 'q10': 400, + 'q11': 19.0, + 'q12': 42.5, + 'q13': 50, + 'q14': 30, + 'q15': -8.0, + 'q16': 1.0, + 'q17': -0.2857142857142857, + 'q18': -1.0, + 'q19': -0.1, + 'q20': -0.5714285714285714} + return expected_json + + +def get_special_json(): + """get_special_json() returns a dict mapping each question to the expected + answer stored in a special format as a list of tuples. Each tuple contains + the element expected in the list, and its corresponding value. 
Any two + elements with the same value can appear in any order in the actual list, + but if two elements have different values, then they must appear in the + same order as in the expected list of tuples.""" + special_json = {} + return special_json + + +def compare(expected, actual, q_format=TEXT_FORMAT): + """compare(expected, actual) is used to compare when the format of + the expected answer is known for certain.""" + try: + if q_format == TEXT_FORMAT: + return simple_compare(expected, actual) + elif q_format == TEXT_FORMAT_UNORDERED_LIST: + return list_compare_unordered(expected, actual) + elif q_format == TEXT_FORMAT_ORDERED_LIST: + return list_compare_ordered(expected, actual) + elif q_format == TEXT_FORMAT_DICT: + return dict_compare(expected, actual) + elif q_format == TEXT_FORMAT_SPECIAL_ORDERED_LIST: + return list_compare_special(expected, actual) + elif q_format == TEXT_FORMAT_NAMEDTUPLE: + return namedtuple_compare(expected, actual) + elif q_format == PNG_FORMAT_SCATTER: + return compare_flip_dicts(expected, actual) + elif q_format == HTML_FORMAT: + return compare_cell_html(expected, actual) + elif q_format == FILE_JSON_FORMAT: + return compare_json(expected, actual) + else: + if expected != actual: + return "expected %s but found %s " % (repr(expected), repr(actual)) + except: + if expected != actual: + return "expected %s" % (repr(expected)) + return PASS + + +def print_message(expected, actual, complete_msg=True): + """print_message(expected, actual) displays a simple error message.""" + msg = "expected %s" % (repr(expected)) + if complete_msg: + msg = msg + " but found %s" % (repr(actual)) + return msg + + +def simple_compare(expected, actual, complete_msg=True): + """simple_compare(expected, actual) is used to compare when the expected answer + is a type/Nones/str/int/float/bool. When the expected answer is a float, + the actual answer is allowed to be within the tolerance limit. 
Otherwise, + the values must match exactly, or a very simple error message is displayed.""" + msg = PASS + if 'numpy' in repr(type((actual))): + actual = actual.item() + if isinstance(expected, type): + if expected != actual: + if isinstance(actual, type): + msg = "expected %s but found %s" % (expected.__name__, actual.__name__) + else: + msg = "expected %s but found %s" % (expected.__name__, repr(actual)) + elif not isinstance(actual, type(expected)) and not (isinstance(expected, (float, int)) and isinstance(actual, (float, int))): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + elif isinstance(expected, float): + if not math.isclose(actual, expected, rel_tol=REL_TOL, abs_tol=ABS_TOL): + msg = print_message(expected, actual, complete_msg) + elif isinstance(expected, (list, tuple)) or is_namedtuple(expected): + new_msg = print_message(expected, actual, complete_msg) + if len(expected) != len(actual): + return new_msg + for i in range(len(expected)): + val = simple_compare(expected[i], actual[i]) + if val != PASS: + return new_msg + elif isinstance(expected, dict): + new_msg = print_message(expected, actual, complete_msg) + if len(expected) != len(actual): + return new_msg + val = simple_compare(list(expected.keys()), list(actual.keys())) + if val != PASS: + return new_msg + for key in expected: + val = simple_compare(expected[key], actual[key]) + if val != PASS: + return new_msg + else: + if expected != actual: + msg = print_message(expected, actual, complete_msg) + return msg + + +def intelligent_compare(expected, actual, obj=None): + """intelligent_compare(expected, actual) is used to compare when the + data type of the expected answer is not known for certain, and default + assumptions need to be made.""" + if obj == None: + obj = type(expected).__name__ + if is_namedtuple(expected): + msg = namedtuple_compare(expected, actual) + elif isinstance(expected, (list, tuple)): + msg = 
list_compare_ordered(expected, actual, obj) + elif isinstance(expected, set): + msg = list_compare_unordered(expected, actual, obj) + elif isinstance(expected, (dict)): + msg = dict_compare(expected, actual) + else: + msg = simple_compare(expected, actual) + msg = msg.replace("CompDict", "dict").replace("CompSet", "set").replace("NewNone", "None") + return msg + + +def is_namedtuple(obj, init_check=True): + """is_namedtuple(obj) returns True if `obj` is a namedtuple object + defined in the test file.""" + bases = type(obj).__bases__ + if len(bases) != 1 or bases[0] != tuple: + return False + fields = getattr(type(obj), '_fields', None) + if not isinstance(fields, tuple): + return False + if init_check and not type(obj).__name__ in [nt.__name__ for nt in _expected_namedtuples]: + return False + return True + + +def list_compare_ordered(expected, actual, obj=None): + """list_compare_ordered(expected, actual) is used to compare when the + expected answer is a list/tuple, where the order of the elements matters.""" + msg = PASS + if not isinstance(actual, type(expected)): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + return msg + if obj == None: + obj = type(expected).__name__ + for i in range(len(expected)): + if i >= len(actual): + msg = "at index %d of the %s, expected missing %s" % (i, obj, repr(expected[i])) + break + val = intelligent_compare(expected[i], actual[i], "sub" + obj) + if val != PASS: + msg = "at index %d of the %s, " % (i, obj) + val + break + if len(actual) > len(expected) and msg == PASS: + msg = "at index %d of the %s, found unexpected %s" % (len(expected), obj, repr(actual[len(expected)])) + if len(expected) != len(actual): + msg = msg + " (found %d entries in %s, but expected %d)" % (len(actual), obj, len(expected)) + + if len(expected) > 0: + try: + if msg != PASS and list_compare_unordered(expected, actual, obj) == PASS: + msg = msg + " (%s may not be ordered as required)" % (obj) + 
except: + pass + return msg + + +def list_compare_helper(larger, smaller): + """list_compare_helper(larger, smaller) is a helper function which takes in + two lists of possibly unequal sizes and finds the item that is not present + in the smaller list, if there is such an element.""" + msg = PASS + j = 0 + for i in range(len(larger)): + if i == len(smaller): + msg = "expected %s" % (repr(larger[i])) + break + found = False + while not found: + if j == len(smaller): + val = simple_compare(larger[i], smaller[j - 1], complete_msg=False) + break + val = simple_compare(larger[i], smaller[j], complete_msg=False) + j += 1 + if val == PASS: + found = True + break + if not found: + msg = val + break + return msg + +class NewNone(): + """alternate class in place of None, which allows for comparison with + all other data types.""" + def __str__(self): + return 'None' + def __repr__(self): + return 'None' + def __lt__(self, other): + return True + def __le__(self, other): + return True + def __gt__(self, other): + return False + def __ge__(self, other): + return other == None + def __eq__(self, other): + return other == None + def __ne__(self, other): + return other != None + +class CompDict(dict): + """subclass of dict, which allows for comparison with other dicts.""" + def __init__(self, vals): + super(self.__class__, self).__init__(vals) + if type(vals) == CompDict: + self.val = vals.val + elif isinstance(vals, dict): + self.val = self.get_equiv(vals) + else: + raise TypeError("'%s' object cannot be type casted to CompDict class" % type(vals).__name__) + + def get_equiv(self, vals): + val = [] + for key in sorted(list(vals.keys())): + val.append((key, vals[key])) + return val + + def __str__(self): + return str(dict(self.val)) + def __repr__(self): + return repr(dict(self.val)) + def __lt__(self, other): + return self.val < CompDict(other).val + def __le__(self, other): + return self.val <= CompDict(other).val + def __gt__(self, other): + return self.val > 
CompDict(other).val + def __ge__(self, other): + return self.val >= CompDict(other).val + def __eq__(self, other): + return self.val == CompDict(other).val + def __ne__(self, other): + return self.val != CompDict(other).val + +class CompSet(set): + """subclass of set, which allows for comparison with other sets.""" + def __init__(self, vals): + super(self.__class__, self).__init__(vals) + if type(vals) == CompSet: + self.val = vals.val + elif isinstance(vals, set): + self.val = self.get_equiv(vals) + else: + raise TypeError("'%s' object cannot be type casted to CompSet class" % type(vals).__name__) + + def get_equiv(self, vals): + return sorted(list(vals)) + + def __str__(self): + return str(set(self.val)) + def __repr__(self): + return repr(set(self.val)) + def __getitem__(self, index): + return self.val[index] + def __lt__(self, other): + return self.val < CompSet(other).val + def __le__(self, other): + return self.val <= CompSet(other).val + def __gt__(self, other): + return self.val > CompSet(other).val + def __ge__(self, other): + return self.val >= CompSet(other).val + def __eq__(self, other): + return self.val == CompSet(other).val + def __ne__(self, other): + return self.val != CompSet(other).val + +def make_sortable(item): + """make_sortable(item) replaces all Nones in `item` with an alternate + class that allows for comparison with str/int/float/bool/list/set/tuple/dict. 
+ It also replaces all dicts (and sets) with a subclass that allows for + comparison with other dicts (and sets).""" + if item == None: + return NewNone() + elif isinstance(item, (type, str, int, float, bool)): + return item + elif isinstance(item, (list, set, tuple)): + new_item = [] + for subitem in item: + new_item.append(make_sortable(subitem)) + if is_namedtuple(item): + return type(item)(*new_item) + elif isinstance(item, set): + return CompSet(new_item) + else: + return type(item)(new_item) + elif isinstance(item, dict): + new_item = {} + for key in item: + new_item[key] = make_sortable(item[key]) + return CompDict(new_item) + return item + +def list_compare_unordered(expected, actual, obj=None): + """list_compare_unordered(expected, actual) is used to compare when the + expected answer is a list/set where the order of the elements does not matter.""" + msg = PASS + if not isinstance(actual, type(expected)): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + return msg + if obj == None: + obj = type(expected).__name__ + + try: + sort_expected = sorted(make_sortable(expected)) + sort_actual = sorted(make_sortable(actual)) + except: + return "unexpected datatype found in %s; expected entries of type %s" % (obj, type(expected[0]).__name__) + + if len(actual) == 0 and len(expected) > 0: + msg = "in the %s, missing " % (obj) + repr(sort_expected[0]) + elif len(actual) > 0 and len(expected) > 0: + val = intelligent_compare(sort_expected[0], sort_actual[0]) + if val.startswith("expected to find type"): + msg = "in the %s, " % (obj) + simple_compare(sort_expected[0], sort_actual[0]) + else: + if len(expected) > len(actual): + msg = "in the %s, missing " % (obj) + list_compare_helper(sort_expected, sort_actual) + elif len(expected) < len(actual): + msg = "in the %s, found un" % (obj) + list_compare_helper(sort_actual, sort_expected) + if len(expected) != len(actual): + msg = msg + " (found %d entries in %s, but 
expected %d)" % (len(actual), obj, len(expected)) + return msg + else: + val = list_compare_helper(sort_expected, sort_actual) + if val != PASS: + msg = "in the %s, missing " % (obj) + val + ", but found un" + list_compare_helper(sort_actual, + sort_expected) + return msg + + +def namedtuple_compare(expected, actual): + """namedtuple_compare(expected, actual) is used to compare when the + expected answer is a namedtuple defined in the test file.""" + msg = PASS + if not is_namedtuple(actual, False): + msg = "expected namedtuple but found %s" % (type(actual).__name__) + return msg + if type(expected).__name__ != type(actual).__name__: + return "expected namedtuple %s but found namedtuple %s" % (type(expected).__name__, type(actual).__name__) + expected_fields = expected._fields + actual_fields = actual._fields + msg = list_compare_ordered(list(expected_fields), list(actual_fields), "namedtuple attributes") + if msg != PASS: + return msg + for field in expected_fields: + val = intelligent_compare(getattr(expected, field), getattr(actual, field)) + if val != PASS: + msg = "at attribute %s of namedtuple %s, " % (field, type(expected).__name__) + val + return msg + return msg + + +def clean_slashes(item): + """clean_slashes(item) recursively replaces all slashes in the strings inside + `item` with the path separator used by the operating system.""" + if isinstance(item, str): + return item.replace("\\", "/").replace("/", os.path.sep) + elif item == None or isinstance(item, (type, int, float, bool)): + return item + elif isinstance(item, (list, tuple, set)) or is_namedtuple(item): + new_item = [] + for subitem in item: + new_item.append(clean_slashes(subitem)) + if is_namedtuple(item): + return type(item)(*new_item) + else: + return type(item)(new_item) + elif isinstance(item, dict): + new_item = {} + for key in item: + new_item[clean_slashes(key)] = clean_slashes(item[key]) + return new_item + + +def list_compare_special_initialize(special_expected): + """list_compare_special_initialize(special_expected) takes in the special + ordering stored as a sorted list of items, and returns a list of lists 
+ where the ordering among the inner lists does not matter.""" + latest_val = None + clean_special = [] + for row in special_expected: + if latest_val == None or row[1] != latest_val: + clean_special.append([]) + latest_val = row[1] + clean_special[-1].append(row[0]) + return clean_special + + +def list_compare_special(special_expected, actual): + """list_compare_special(special_expected, actual) is used to compare when the + expected answer is a list with special ordering defined in `special_expected`.""" + msg = PASS + expected_list = [] + special_order = list_compare_special_initialize(special_expected) + for expected_item in special_order: + expected_list.extend(expected_item) + val = list_compare_unordered(expected_list, actual) + if val != PASS: + return val + i = 0 + for expected_item in special_order: + j = len(expected_item) + actual_item = actual[i: i + j] + val = list_compare_unordered(expected_item, actual_item) + if val != PASS: + if j == 1: + msg = "at index %d " % (i) + val + else: + msg = "between indices %d and %d " % (i, i + j - 1) + val + msg = msg + " (list may not be ordered as required)" + break + i += j + return msg + + +def dict_compare(expected, actual, obj=None): + """dict_compare(expected, actual) is used to compare when the expected answer + is a dict.""" + msg = PASS + if not isinstance(actual, type(expected)): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + return msg + if obj == None: + obj = type(expected).__name__ + + expected_keys = list(expected.keys()) + actual_keys = list(actual.keys()) + val = list_compare_unordered(expected_keys, actual_keys, obj) + + if val != PASS: + msg = "bad keys in %s: " % (obj) + val + if msg == PASS: + for key in expected: + new_obj = None + if isinstance(expected[key], (list, tuple, set)): + new_obj = 'value' + elif isinstance(expected[key], dict): + new_obj = 'sub' + obj + val = intelligent_compare(expected[key], actual[key], new_obj) + if val 
!= PASS: + msg = "incorrect value for key %s in %s: " % (repr(key), obj) + val + return msg + + +def is_flippable(item): + """is_flippable(item) determines if the given dict of lists has lists of the + same length and is therefore flippable.""" + item_lens = set([str(len(item[key])) for key in item]) + if len(item_lens) == 1: + return PASS + else: + return "found lists of lengths %s" % (", ".join(list(item_lens))) + +def flip_dict_of_lists(item): + """flip_dict_of_lists(item) flips a dict of lists into a list of dicts if the + lists are of the same length.""" + new_item = [] + length = len(list(item.values())[0]) + for i in range(length): + new_dict = {} + for key in item: + new_dict[key] = item[key][i] + new_item.append(new_dict) + return new_item + +def compare_flip_dicts(expected, actual, obj="lists"): + """compare_flip_dicts(expected, actual) flips a dict of lists (or dicts) into + a list of dicts (or dict of dicts) and then compares the list ignoring order.""" + msg = PASS + example_item = list(expected.values())[0] + if isinstance(example_item, (list, tuple)): + val = is_flippable(actual) + if val != PASS: + msg = "expected to find lists of length %d, but " % (len(example_item)) + val + return msg + msg = list_compare_unordered(flip_dict_of_lists(expected), flip_dict_of_lists(actual), "lists") + elif isinstance(example_item, dict): + expected_keys = list(example_item.keys()) + for key in actual: + val = list_compare_unordered(expected_keys, list(actual[key].keys()), "dictionary %s" % key) + if val != PASS: + return val + for cat_key in expected_keys: + expected_category = {} + actual_category = {} + for key in expected: + expected_category[key] = expected[key][cat_key] + actual_category[key] = actual[key][cat_key] + val = list_compare_unordered(flip_dict_of_lists(expected_category), flip_dict_of_lists(actual_category), "category " + repr(cat_key)) + if val != PASS: + return val + return msg + + +def get_expected_tables(): + """get_expected_tables() reads the html file with the 
expected DataFrames + and returns a dict mapping each question to a html table.""" + if not os.path.exists(DF_FILE): + return None + + expected_tables = {} + f = open(DF_FILE, encoding='utf-8') + soup = BeautifulSoup(f.read(), 'html.parser') + f.close() + + tables = soup.find_all('table') + for table in tables: + expected_tables[table.get("data-question")] = table + + return expected_tables + +def parse_df_html_table(table): + """parse_df_html_table(table) takes in a table as a html string and returns + a dict mapping each row and column index to the value at that position.""" + rows = [] + for tr in table.find_all('tr'): + rows.append([]) + for cell in tr.find_all(['td', 'th']): + rows[-1].append(cell.get_text().strip("\n ")) + + cells = {} + for r in range(1, len(rows)): + for c in range(1, len(rows[0])): + rname = rows[r][0] + cname = rows[0][c] + cells[(rname,cname)] = rows[r][c] + return cells + + +def get_expected_namedtuples(): + """get_expected_namedtuples() defines the required namedtuple objects + globally. 
It also returns a tuple of the classes.""" + expected_namedtuples = [] + + return tuple(expected_namedtuples) + +_expected_namedtuples = get_expected_namedtuples() + + +def compare_cell_html(expected, actual): + """compare_cell_html(expected, actual) is used to compare when the + expected answer is a DataFrame stored in the `expected_dfs` html file.""" + expected_cells = parse_df_html_table(expected) + try: + actual_cells = parse_df_html_table(BeautifulSoup(actual, 'html.parser').find('table')) + except Exception as e: + return "expected to find type DataFrame but found type %s instead" % type(actual).__name__ + + expected_cols = list(set(["column %s" % (loc[1]) for loc in expected_cells])) + actual_cols = list(set(["column %s" % (loc[1]) for loc in actual_cells])) + msg = list_compare_unordered(expected_cols, actual_cols, "DataFrame") + if msg != PASS: + return msg + + expected_rows = list(set(["row index %s" % (loc[0]) for loc in expected_cells])) + actual_rows = list(set(["row index %s" % (loc[0]) for loc in actual_cells])) + msg = list_compare_unordered(expected_rows, actual_rows, "DataFrame") + if msg != PASS: + return msg + + for location, expected in expected_cells.items(): + location_name = "column {} at index {}".format(location[1], location[0]) + actual = actual_cells.get(location, None) + if actual == None: + return "in %s, expected to find %s" % (location_name, repr(expected)) + try: + actual_ans = float(actual) + expected_ans = float(expected) + if math.isnan(actual_ans) and math.isnan(expected_ans): + continue + except Exception as e: + actual_ans, expected_ans = actual, expected + msg = simple_compare(expected_ans, actual_ans) + if msg != PASS: + return "in %s, " % location_name + msg + return PASS + + +def get_expected_plots(): + """get_expected_plots() reads the json file with the expected plot data + and returns a dict mapping each question to a dictionary with the plots data.""" + if not os.path.exists(PLOT_FILE): + return None + + f = 
open(PLOT_FILE, encoding='utf-8') + expected_plots = json.load(f) + f.close() + return expected_plots + + +def compare_file_json(expected, actual): + """compare_file_json(expected, actual) is used to compare when the + expected answer is a JSON file.""" + msg = PASS + if not os.path.isfile(expected): + return "file %s not found; make sure it is downloaded and stored in the correct directory" % (expected) + elif not os.path.isfile(actual): + return "file %s not found; make sure that you have created the file with the correct name" % (actual) + try: + e = open(expected, encoding='utf-8') + expected_data = json.load(e) + e.close() + except json.JSONDecodeError: + return "file %s is broken and cannot be parsed; please delete and redownload the file correctly" % (expected) + try: + a = open(actual, encoding='utf-8') + actual_data = json.load(a) + a.close() + except json.JSONDecodeError: + return "file %s is broken and cannot be parsed" % (actual) + if type(expected_data) == list: + msg = list_compare_ordered(expected_data, actual_data, 'file ' + actual) + elif type(expected_data) == dict: + msg = dict_compare(expected_data, actual_data) + return msg + + +_expected_json = get_expected_json() +_special_json = get_special_json() +_expected_plots = get_expected_plots() +_expected_tables = get_expected_tables() +_expected_format = get_expected_format() + +def check(qnum, actual): + """check(qnum, actual) is used to check if the answer in the notebook is + the correct answer, and provide useful feedback if the answer is incorrect.""" + msg = PASS + error_msg = "<b style='color: red;'>ERROR:</b> " + q_format = _expected_format[qnum] + + if q_format == TEXT_FORMAT_SPECIAL_ORDERED_LIST: + expected = _special_json[qnum] + elif q_format == PNG_FORMAT_SCATTER: + if _expected_plots == None: + msg = error_msg + "file %s not parsed; make sure it is downloaded and stored in the correct directory" % (PLOT_FILE) + else: + expected = _expected_plots[qnum] + elif q_format == HTML_FORMAT: + 
if _expected_tables == None: + msg = error_msg + "file %s not parsed; make sure it is downloaded and stored in the correct directory" % (DF_FILE) + else: + expected = _expected_tables[qnum] + else: + expected = _expected_json[qnum] + + if SLASHES in q_format: + q_format = q_format.replace(SLASHES, "") + expected = clean_slashes(expected) + actual = clean_slashes(actual) + + if msg != PASS: + print(msg) + else: + msg = compare(expected, actual, q_format) + if msg != PASS: + msg = error_msg + msg + print(msg) + + +def check_file_size(path): + """check_file_size(path) throws an error if the file is too big to display + on Gradescope.""" + size = os.path.getsize(path) + assert size < MAX_FILE_SIZE * 10**3, "Your file is too big to be displayed by Gradescope; please delete unnecessary output cells so your file size is < %s KB" % MAX_FILE_SIZE + + +def reset_hidden_tests(): + """reset_hidden_tests() resets all hidden tests on the Gradescope autograder where the hidden test file exists""" + if not os.path.exists(HIDDEN_FILE): + return + hidn.reset_hidden_tests() + +def rubric_check(rubric_point, ignore_past_errors=True): + """rubric_check(rubric_point) uses the hidden test file on the Gradescope autograder to grade the `rubric_point`""" + if not os.path.exists(HIDDEN_FILE): + print(PASS) + return + error_msg_1 = "ERROR: " + error_msg_2 = "TEST DETAILS: " + try: + msg = hidn.rubric_check(rubric_point, ignore_past_errors) + except: + msg = "hidden tests crashed before execution" + if msg != PASS: + hidn.make_deductions(rubric_point) + if msg == "public tests failed": + comment = "The public tests have failed, so you will not receive any points for this question." + comment += "\nPlease confirm that the public tests pass locally before submitting." + elif msg == "answer is hardcoded": + comment = "In the datasets for testing hardcoding, all numbers are replaced with random values." 
+ comment += "\nIf the answer is the same as in the original dataset for all these datasets" + comment += "\ndespite this, that implies that the answer in the notebook is hardcoded." + comment += "\nYou will not receive any points for this question." + else: + comment = hidn.get_comment(rubric_point) + msg = error_msg_1 + msg + if comment != "": + msg = msg + "\n" + error_msg_2 + comment + print(msg) + +def get_summary(): + """get_summary() returns the summary of the notebook using the hidden test file on the Gradescope autograder""" + if not os.path.exists(HIDDEN_FILE): + print("Total Score: %d/%d" % (TOTAL_SCORE, TOTAL_SCORE)) + return + score = min(TOTAL_SCORE, hidn.get_score(TOTAL_SCORE)) + display_msg = "Total Score: %d/%d" % (score, TOTAL_SCORE) + if score != TOTAL_SCORE: + display_msg += "\n" + hidn.get_deduction_string() + print(display_msg) + +def get_score_digit(digit): + """get_score_digit(digit) returns the `digit` of the score using the hidden test file on the Gradescope autograder""" + if not os.path.exists(HIDDEN_FILE): + score = TOTAL_SCORE + else: + score = hidn.get_score(TOTAL_SCORE) + digits = bin(score)[2:] + digits = "0"*(7 - len(digits)) + digits + return int(digits[6 - digit]) diff --git a/p3/README.md b/p3/README.md new file mode 100644 index 0000000000000000000000000000000000000000..1edef2a63a748256c46dfe9a39a7e4e17d004168 --- /dev/null +++ b/p3/README.md @@ -0,0 +1,48 @@ +# Project 3 (P3): City of Madison Budget + +## Clarifications/Corrections: + +* None yet. + +**Find any issues?** Report to us: + +- Rithik Jain <rjain55@wisc.edu> +- Muhammad Musa <mmusa2@wisc.edu> + + +## Note on Academic Misconduct: +You are **allowed** to work with a partner on your projects. While it is not required that you work with a partner, it is **recommended** that you find a project partner as soon as possible as the projects will get progressively harder. Be careful **not** to work with more than one partner. 
If you worked with a partner on Lab-P3, you are **not** allowed to finish your project with a different partner. You may either continue to work with the same partner, or work on P3 alone. Now may be a good time to review our [course policies](https://cs220.cs.wisc.edu/f23/syllabus.html). + +## Instructions: + +In this project, we will focus on function calls, function definitions, default arguments, and simple arithmetic operations. To start, create a `p3` directory and download `p3.ipynb`, `project.py`, `public_tests.py`, and `madison_budget.csv`. + +**Note:** Please go through [Lab-P3](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/lab-p3) before you start the project. The lab contains some very important information that will be necessary for you to finish the project. + +You will work on `p3.ipynb` and hand it in. You should follow the provided directions for each question. Questions have **specific** directions on what **to do** and what **not to do**. + +After you've downloaded the files to your `p3` directory, open a terminal window and use `cd` to navigate to that directory. To make sure you're in the correct directory in the terminal, type `pwd`. To make sure you've downloaded all the required files, type `ls` to ensure that `p3.ipynb`, `project.py`, `public_tests.py`, and `madison_budget.csv` are listed. Then run the command `jupyter notebook` to start Jupyter, and get started on the project! Make sure to run the initial cells in the notebook before proceeding. + +**IMPORTANT**: You should **NOT** terminate/close the session where you run the `jupyter notebook` command. If you need to run any other Terminal/PowerShell commands, open a new window instead. Save your notebook file frequently, either by clicking the "Save and Checkpoint" button (floppy disk) or using the appropriate keyboard shortcut. 
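The terminal workflow described above can be sketched as follows. This is only an illustrative sequence; `~/cs220/p3` is a hypothetical location, so substitute the path of your own `p3` directory:

```shell
cd ~/cs220/p3        # navigate to your project directory (adjust the path)
pwd                  # confirm you are in the right directory
ls                   # should list p3.ipynb, project.py, public_tests.py, madison_budget.csv
jupyter notebook     # start Jupyter; keep this terminal window open while you work
```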
+ +------------------------------ + +## IMPORTANT Submission instructions: +- Review the [Grading Rubric](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/p3/rubric.md), to ensure that you don't lose points during code review. +- Login to [Gradescope](https://www.gradescope.com/) and upload the zip file into the P3 assignment. +- If you completed the project with a **partner**, make sure to **add their name** by clicking "Add Group Member" +in Gradescope when uploading the P3 zip file. + + <img src="images/add_group_member.png" width="400"> + + **Warning:** You will have to add your partner on Gradescope even if you have filled out this information in your `p3.ipynb` notebook. + +- It is **your responsibility** to make sure that your project clears auto-grader tests on the Gradescope test system. Otter test results should be available within forty minutes after your submission (usually within ten minutes). **Ignore** the `-/100.00` that is displayed to the right. You should be able to see both PASS / FAIL results for the 20 test cases, which is accessible via Gradescope Dashboard (as in the image below): + + <img src="images/gradescope.png" width="400"> + +- You can view your **final score** at the **end of the page**. If you pass all tests, then you will receive **full points** for the project. Otherwise, you can see your final score in the **summary** section of the test results (as in the image below): + + <img src="images/summary.png" width="400"> + + If you want more details on why you lost points on a particular test, you can scroll up to find more details about the test. diff --git a/p3/images/README.md b/p3/images/README.md new file mode 100644 index 0000000000000000000000000000000000000000..58de295751482954ef960c6197de0cafc3fe20fb --- /dev/null +++ b/p3/images/README.md @@ -0,0 +1,3 @@ +# Images + +Images from p3 are stored here. 
diff --git a/p3/images/add_group_member.png b/p3/images/add_group_member.png new file mode 100644 index 0000000000000000000000000000000000000000..402e5962e3e54ce8349f60ccfe4ce2b60840dd3b Binary files /dev/null and b/p3/images/add_group_member.png differ diff --git a/p3/images/gradescope.png b/p3/images/gradescope.png new file mode 100644 index 0000000000000000000000000000000000000000..7441faae41d8eb98bfceeb78855b67896b1ff911 Binary files /dev/null and b/p3/images/gradescope.png differ diff --git a/p3/images/summary.png b/p3/images/summary.png new file mode 100644 index 0000000000000000000000000000000000000000..4a63e32ff1a29903584746aa4873373855558e7b Binary files /dev/null and b/p3/images/summary.png differ diff --git a/p3/madison_budget.csv b/p3/madison_budget.csv new file mode 100644 index 0000000000000000000000000000000000000000..83e201187853cbf659fc72ba9d8f09a87532d618 --- /dev/null +++ b/p3/madison_budget.csv @@ -0,0 +1,8 @@ +id,agency,2019,2020,2021,2022,2023 +5,Finance,4.160221,4.175833,3.744979,4.159134,4.645472 +19,Library,17.703565,19.163603,18.849564,19.066904,19.770825 +20,Fire,52.853057,57.020341,61.180396,63.742785,68.098376 +21,Police,76.748435,81.830699,82.794221,83.995148,86.917117 +23,Public Health,5.384683,6.233474,6.937629,7.489070,9.656299 +25,Parks,14.236916,14.736923,15.585153,15.535002,16.007257 +27,Metro Transit,14.211148,8.552649,8.511315,9.126564,2.009664 diff --git a/p3/p3.ipynb b/p3/p3.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..3d4bf8b8db5fe95df28cc2c763c32517a3cb6265 --- /dev/null +++ b/p3/p3.ipynb @@ -0,0 +1,2504 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "6937a4ae", + "metadata": { + "cell_type": "code", + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "# import and initialize otter\n", + "import otter\n", + "grader = otter.Notebook(\"p3.ipynb\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9f5a2e1c", + 
"metadata": { + "editable": false, + "execution": { + "iopub.execute_input": "2023-09-19T23:21:48.754891Z", + "iopub.status.busy": "2023-09-19T23:21:48.754891Z", + "iopub.status.idle": "2023-09-19T23:21:50.291991Z", + "shell.execute_reply": "2023-09-19T23:21:50.290978Z" + } + }, + "outputs": [], + "source": [ + "import public_tests" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "883de3ed", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.296989Z", + "iopub.status.busy": "2023-09-19T23:21:50.295988Z", + "iopub.status.idle": "2023-09-19T23:21:50.301534Z", + "shell.execute_reply": "2023-09-19T23:21:50.300523Z" + } + }, + "outputs": [], + "source": [ + "# PLEASE FILL IN THE DETAILS\n", + "# enter none if you don't have a project partner\n", + "# you will have to add your partner as a group member on Gradescope even after you fill this\n", + "\n", + "# project: p3\n", + "# submitter: NETID1\n", + "# partner: NETID2\n", + "# hours: ????" + ] + }, + { + "cell_type": "markdown", + "id": "c349e754", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "# Project 3: City of Madison Budget" + ] + }, + { + "cell_type": "markdown", + "id": "5c801c63", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Learning Objectives:\n", + "\n", + "In this project you will demonstrate your ability to:\n", + "- import a module and use its functions,\n", + "- write functions,\n", + "- use default arguments when calling functions,\n", + "- use positional and keyword arguments when calling functions,\n", + "- avoid hardcoding, and\n", + "- work with the index of a row of data." + ] + }, + { + "cell_type": "markdown", + "id": "0574c7c0", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Testing your code:\n", + "\n", + "Along with this notebook, you must have downloaded the file `public_tests.py`. 
If you are curious about how we test your code, you can explore this file, and specifically the function `get_expected_json`, to understand the expected answers to the questions. You can have a look at [P2](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/p2) if you have forgotten how to read the outputs of the `grader.check(...)` function calls." + ] + }, + { + "cell_type": "markdown", + "id": "56cb80c6", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Please go through [Lab-P3](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/lab-p3) before starting this project.** The lab introduces some useful techniques necessary for this project." + ] + }, + { + "cell_type": "markdown", + "id": "3b069cec", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Project Description:\n", + "\n", + "In this project, you'll analyze the yearly budgets of seven different government agencies under the control of the City of Madison. The dataset we will analyze is obtained from [the City of Madison](https://www.cityofmadison.com/finance/budget), and is published by its Finance Department. In this project, we will be analyzing the **Adopted Budget** of a select few agencies between the years **2019** and **2023** (both years included). You'll get practice calling functions from the `project` module, which we've provided, and practice writing your own functions.\n", + "\n", + "If you haven't already downloaded `project.py`, `public_tests.py`, and `madison_budget.csv` (you can verify by running `ls` in a new terminal tab from your `p3` project directory), please terminate the current `jupyter notebook` session, download all the required files, launch a `jupyter notebook` session again, and click on *Kernel* > *Restart and Clear Output*. 
Start by executing all the cells (including the ones containing `import` statements).\n", + "\n", + "We won't explain how to use the `project` module here (i.e., the code in the `project.py` file). Refer to [Lab-P3](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/tree/main/lab-p3) to understand how the inspection process works and use the `help` function to learn about the various functions inside `project.py`. Feel free to take a look at the `project.py` code, if you are curious about how it works.\n", + "\n", + "This project consists of writing code to answer 20 questions." + ] + }, + { + "cell_type": "markdown", + "id": "80fc3ed7", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Dataset:\n", + "\n", + "The dataset you will be working with for this project is reproduced here:\n", + "\n", + "|id|agency|2019|2020|2021|2022|2023|\n", + "|--|------|----|----|----|----|----|\n", + "|5|Finance|4.160221|4.175833|3.744979|4.159134|4.645472|\n", + "|19|Library|17.703565|19.163603|18.849564|19.066904|19.770825|\n", + "|20|Fire|52.853057|57.020341|61.180396|63.742785|68.098376|\n", + "|21|Police|76.748435|81.830699|82.794221|83.995148|86.917117|\n", + "|23|Public Health|5.384683|6.233474|6.937629|7.489070|9.656299|\n", + "|25|Parks|14.236916|14.736923|15.585153|15.535002|16.007257|\n", + "|27|Metro Transit|14.211148|8.552649|8.511315|9.126564|2.009664|\n", + "\n", + "\n", + "This table lists seven different government agencies, and the budgets allotted to each of these agencies (in units of millions of dollars) between the years 2019 and 2023 (inclusive of both years).\n", + "\n", + "The dataset is in the `madison_budget.csv` file which you downloaded. Alternatively, you can open the `madison_budget.csv` file, to look at the same data and verify answers to simple questions." 
+ ] + }, + { + "cell_type": "markdown", + "id": "c86b6d44", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Project Requirements:\n", + "\n", + "You **may not** hardcode indices in your code. For example, if we ask what the budget of the *Fire* department was in *2019*, you **must** obtain the answer with this code: `get_budget(get_id(\"Fire\"), 2019)`. If you **do not** use `get_id` and instead use `get_budget(20, 2019)`, the Gradescope autograder will **deduct** points.\n", + "\n", + "For some of the questions, we'll ask you to write (then use) a function to compute the answer. If you compute the answer **without** creating the function we ask you to write, or answer these questions without using the function, the Gradescope autograder will **deduct** points, even if your answer is correct.\n", + "\n", + "Students are only allowed to use Python commands and concepts that have been taught in the course before the release of P3. In particular, you are **NOT** allowed to use conditionals or iteration on this project. The Gradescope autograder will **deduct** points if you use these concepts.\n", + "\n", + "For more details on what will cause you to lose points during code review, please take a look at the [Grading rubric](https://git.doit.wisc.edu/cdis/cs/courses/cs220/cs220-f23-projects/-/blob/main/p3/rubric.md)." + ] + }, + { + "cell_type": "markdown", + "id": "46c04791", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Incremental Coding and Testing:\n", + "\n", + "You should always strive to do incremental coding. Incremental coding helps you avoid hard-to-track-down bugs. Always write a few lines of code, then test them before writing more. You can call the `print` function to test intermediate step outputs. 
**Store your final answer for each question in the variable recommended for each question.** This step is important because Otter grades your work by comparing the value of this variable against the correct answer. So, if you store your answer in a different variable, you will not get points for it.\n", + "\n", + "We also recommend you do incremental testing: make sure to run the local tests as soon as you are done with a question. This will ensure that you haven't made a big mistake that might potentially impact the rest of your project solution. Please refrain from making multiple submissions on Gradescope to test individual questions' answers. Instead, use the local tests to check your solution on your own machine.\n", + "\n", + "That said, it is very **important** that you check the *Gradescope* test results as soon as you submit your project on Gradescope. Test results on *Gradescope* are typically available between 10 and 15 minutes after submission." + ] + }, + { + "cell_type": "markdown", + "id": "a337056b", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Project Questions and Functions:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eb2c3b6a", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.306533Z", + "iopub.status.busy": "2023-09-19T23:21:50.306533Z", + "iopub.status.idle": "2023-09-19T23:21:50.313181Z", + "shell.execute_reply": "2023-09-19T23:21:50.312173Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# include the relevant import statements in this cell\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1e765e00", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.317182Z", + "iopub.status.busy": "2023-09-19T23:21:50.316181Z", + "iopub.status.idle": "2023-09-19T23:21:50.326333Z", + "shell.execute_reply": "2023-09-19T23:21:50.325318Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ 
+ "# call the init function to load the dataset\n", + "\n", + "# you may call the dump function here to test if you have loaded the dataset correctly." + ] + }, + { + "cell_type": "markdown", + "id": "5afca07f", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 1:** What is the `id` of the agency *Public Health*?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4751a35d", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.331332Z", + "iopub.status.busy": "2023-09-19T23:21:50.330330Z", + "iopub.status.idle": "2023-09-19T23:21:50.341359Z", + "shell.execute_reply": "2023-09-19T23:21:50.340322Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# replace the ... with your code\n", + "# INCORRECT METHOD public_health_id = 23 => this is considered hardcoding\n", + "public_health_id = ...\n", + "\n", + "public_health_id" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ac119f9c", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q1\")" + ] + }, + { + "cell_type": "markdown", + "id": "b6e8b1af", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "Instead of repeatedly calling the `project.get_id` function for each question, you could make these calls once at the beginning of your notebook and save the results in variables. Recall that calling the same function multiple times with the same argument(s) is a waste of computation. Complete the code in the cell below and make sure to use the relevant ID variables for the rest of the project questions." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "08b46678", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.365769Z", + "iopub.status.busy": "2023-09-19T23:21:50.365769Z", + "iopub.status.idle": "2023-09-19T23:21:50.371254Z", + "shell.execute_reply": "2023-09-19T23:21:50.371254Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "finance_id = project.get_id('Finance') # we have done this for you\n", + "\n", + "# replace the ... in the line below with code to get the id of 'Library'\n", + "library_id = ...\n", + "\n", + "# invoke get_id for the other agencies and store the results in similar variable names\n", + "\n", + "# considering that you already invoked get_id for Public Health, you need to \n", + "# make 4 more function calls to store the IDs for the rest of the agencies\n" + ] + }, + { + "cell_type": "markdown", + "id": "e6847d06", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 2:** What was the budget of the agency *Finance* in *2019*?\n", + "\n", + "Your answer should just be a number (without any units at the end) that represents the budget of the agency in millions of dollars.\n", + "\n", + "You **must not** hardcode the ID of the agency. You **must** use the variable that you used to store the ID of *Finance* (assuming you already invoked `get_id` for all the agencies in the cell right below Question 1)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5c9d59fe", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.376265Z", + "iopub.status.busy": "2023-09-19T23:21:50.376265Z", + "iopub.status.idle": "2023-09-19T23:21:50.383302Z", + "shell.execute_reply": "2023-09-19T23:21:50.382293Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# replace the ... 
with your code\n", + "finance_budget_2019 = ...\n", + "\n", + "finance_budget_2019" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "635ce868", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q2\")" + ] + }, + { + "cell_type": "markdown", + "id": "a617b437", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 1: `year_max(year)`\n", + "\n", + "This function will compute the **maximum** budget for any one agency in a given `year`.\n", + "\n", + "It has already been written for you, so you do not have to modify it. You can directly call this function to answer the following questions. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "136f63c9", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.416507Z", + "iopub.status.busy": "2023-09-19T23:21:50.416507Z", + "iopub.status.idle": "2023-09-19T23:21:50.424000Z", + "shell.execute_reply": "2023-09-19T23:21:50.422970Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "def year_max(year):\n", + " \"\"\"\n", + " year_max(year) computes the maximum budget\n", + " for any agency in the given year\n", + " \"\"\"\n", + " # get the budget of each agency in the given year\n", + " finance_budget = project.get_budget(project.get_id('Finance'), year)\n", + " library_budget = project.get_budget(project.get_id('Library'), year)\n", + " fire_budget = project.get_budget(project.get_id('Fire'), year)\n", + " police_budget = project.get_budget(project.get_id('Police'), year)\n", + " public_health_budget = project.get_budget(project.get_id('Public Health'), year)\n", + " parks_budget = project.get_budget(project.get_id('Parks'), year)\n", + " metro_transit_budget = project.get_budget(project.get_id('Metro Transit'), year)\n", + "\n", + " # use the built-in max function to get the maximum of the seven values\n", + " return max(finance_budget, 
library_budget, fire_budget, police_budget, public_health_budget, parks_budget, metro_transit_budget)" + ] + }, + { + "cell_type": "markdown", + "id": "4828e0f3", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 3:** What was the highest budget for *any* agency in the year *2023*?\n", + "\n", + "You **must** call the `year_max` function to answer this question." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "38fc1920", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.428988Z", + "iopub.status.busy": "2023-09-19T23:21:50.428988Z", + "iopub.status.idle": "2023-09-19T23:21:50.434737Z", + "shell.execute_reply": "2023-09-19T23:21:50.434737Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# replace the ... with your code\n", + "max_budget_2023 = ...\n", + "\n", + "max_budget_2023" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b78ea3bc", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q3\")" + ] + }, + { + "cell_type": "markdown", + "id": "6115deb6", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 4:** What was the highest budget for *any* agency in a single year in the period *2020-2022* (both years included)?\n", + "\n", + "Recall that we can use the `max` function to compute the maximum of some values. Look at the examples in Lab-P3 where you used the `max` function or the `year_max` function definition. To be clear, the answer to this question is a single floating point number whose value is the highest budget allotted to an agency in a single year during these three years. \n", + "\n", + "You **must** invoke the `year_max` function in your answer to this question." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "62acbd89", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.459342Z", + "iopub.status.busy": "2023-09-19T23:21:50.458341Z", + "iopub.status.idle": "2023-09-19T23:21:50.466633Z", + "shell.execute_reply": "2023-09-19T23:21:50.465613Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# replace the ... with your code\n", + "max_budget_2020 = ...\n", + "max_budget_2021 = ...\n", + "max_budget_2022 = ...\n", + "\n", + "max_budget_2020_to_2022 = ...\n", + "\n", + "max_budget_2020_to_2022" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3747e0bf", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q4\")" + ] + }, + { + "cell_type": "markdown", + "id": "c2a07def", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 2: `agency_min(agency)`\n", + "\n", + "This function **must** compute the **lowest** budget allotted to the given `agency` during any year in the dataset (*2019-2023*).\n", + "\n", + "We'll help you get started with this function, but you need to fill in the rest of the function yourself." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2c883ddd", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.508763Z", + "iopub.status.busy": "2023-09-19T23:21:50.508763Z", + "iopub.status.idle": "2023-09-19T23:21:50.515685Z", + "shell.execute_reply": "2023-09-19T23:21:50.514675Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "def agency_min(agency):\n", + " \"\"\"\n", + " agency_min(agency) computes the lowest budget allotted\n", + " to the given `agency` in any year\n", + " \"\"\"\n", + " agency_id = project.get_id(agency) \n", + " budget_2019 = project.get_budget(agency_id, 2019)\n", + " budget_2020 = project.get_budget(agency_id, 2020)\n", + " # get the budgets from other years\n", + " \n", + " # use the built-in min function (similar to the max function) to get the minimum across the \n", + " # five years and return that value\n", + " \n", + " min_budget_2019_to_2023 = ...\n", + " return min_budget_2019_to_2023" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9023305e", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"agency_min\")" + ] + }, + { + "cell_type": "markdown", + "id": "149e571c", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 5:** What was the lowest budget allotted to the agency *Library* in a *single* year?\n", + "\n", + "You **must** call the `agency_min` function to answer this question." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bf3f077a", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.542257Z", + "iopub.status.busy": "2023-09-19T23:21:50.541256Z", + "iopub.status.idle": "2023-09-19T23:21:50.549476Z", + "shell.execute_reply": "2023-09-19T23:21:50.548458Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# replace the ... 
with your code\n", + "min_budget_library = ...\n", + "\n", + "min_budget_library" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "674a40d9", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q5\")" + ] + }, + { + "cell_type": "markdown", + "id": "50a835d8", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 6:** What was the lowest budget allotted in any *single* year to either of the agencies *Fire* and *Police*?\n", + "\n", + "Recall that we can use the `min` function to compute the minimum of some values. To be clear, the answer to this question is a single floating point number whose value is the lowest budget allotted in a single year during the period *2019-2023* to either of the two agencies mentioned.\n", + "\n", + "You **must** invoke the `agency_min` function in your answer to this question." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2dfd9c6a", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.574872Z", + "iopub.status.busy": "2023-09-19T23:21:50.574872Z", + "iopub.status.idle": "2023-09-19T23:21:50.582495Z", + "shell.execute_reply": "2023-09-19T23:21:50.581463Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'min_budget_fire_police'\n", + "\n", + "# display the variable 'min_budget_fire_police' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "97cedf73", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q6\")" + ] + }, + { + "cell_type": "markdown", + "id": "b87c7625", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 3: `agency_avg(agency)`\n", + "\n", + "This function must compute the **average** budget for the given `agency` across the five years in the dataset (i.e. 
*2019 - 2023*).\n", + "\n", + "**Hint:** start by copy/pasting the `agency_min` function definition, and renaming your copy to `agency_avg` (this is **not necessary**, but it will save you time). \n", + "Instead of returning the minimum of `budget_2019`, `budget_2020`, etc., return the **average** of these by adding them together, then dividing by five. \n", + "**You may hardcode the number 5 for this computation**.\n", + "\n", + "The type of the *return value* **must** be `float`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b6e19d26", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.624369Z", + "iopub.status.busy": "2023-09-19T23:21:50.624369Z", + "iopub.status.idle": "2023-09-19T23:21:50.631778Z", + "shell.execute_reply": "2023-09-19T23:21:50.630746Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# define the function 'agency_avg' here\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "95874d71", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"agency_avg\")" + ] + }, + { + "cell_type": "markdown", + "id": "1cfb229d", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 7:** What was the average budget of the agency *Parks* between *2019* and *2023*?\n", + "\n", + "You **must** call the `agency_avg` function to answer this question." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "531cf48f", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.654843Z", + "iopub.status.busy": "2023-09-19T23:21:50.654843Z", + "iopub.status.idle": "2023-09-19T23:21:50.662623Z", + "shell.execute_reply": "2023-09-19T23:21:50.661612Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'parks_avg_budget'\n", + "\n", + "# display the variable 'parks_avg_budget' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a2fcf0b9", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q7\")" + ] + }, + { + "cell_type": "markdown", + "id": "2d8b63b1", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 8:** What was the average budget of the agency *Public Health* between *2019* and *2023*?\n", + "\n", + "You **must** call the `agency_avg` function to answer this question." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5f165638", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.685283Z", + "iopub.status.busy": "2023-09-19T23:21:50.685283Z", + "iopub.status.idle": "2023-09-19T23:21:50.692508Z", + "shell.execute_reply": "2023-09-19T23:21:50.691496Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'public_health_avg_budget'\n", + "\n", + "# display the variable 'public_health_avg_budget' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ebc71ec0", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q8\")" + ] + }, + { + "cell_type": "markdown", + "id": "ce1d7d60", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 9:** Relative to its **average**, how much **higher** or **lower** was the budget of the agency *Public Health* in *2023*?\n", + "\n", + "**Hint:** Compute the difference between the budget in *2023* and the **average** budget of *Public Health*. Your answer must be a **positive** number if the budget was **higher** in *2023* than on average. Your answer must be a **negative** number if the budget was **lower** in *2023* than on average." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c10b44dd", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.716931Z", + "iopub.status.busy": "2023-09-19T23:21:50.715950Z", + "iopub.status.idle": "2023-09-19T23:21:50.725196Z", + "shell.execute_reply": "2023-09-19T23:21:50.724185Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'diff_public_health_2023_to_average'.\n", + "# it is recommended that you create more intermediary variables to make your code easier to write and read.\n", + "# some useful intermediary variables you could use/create are: 'public_health_id', 'public_health_avg_budget', and\n", + "# 'public_health_budget_2023'.\n", + "\n", + "\n", + "# display the variable 'diff_public_health_2023_to_average' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "84f7cc34", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q9\")" + ] + }, + { + "cell_type": "markdown", + "id": "1839010a", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 4: `year_budget(year)`\n", + "\n", + "This function must compute the **total** budget across all agencies for the given `year`.\n", + "\n", + "You can start from the following code snippet:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d79b2804", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.768165Z", + "iopub.status.busy": "2023-09-19T23:21:50.768165Z", + "iopub.status.idle": "2023-09-19T23:21:50.776208Z", + "shell.execute_reply": "2023-09-19T23:21:50.775185Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "def year_budget(year=2023): # DO NOT EDIT THIS LINE\n", + " \"\"\"\n", + " year_budget(year) computes the total budget\n", + " across all agencies for the given year\n", + " \"\"\"\n", + " pass # this statement tells 
Python to do nothing.\n", + " # since this function has no code inside, we have added the pass statement \n", + " # inside so the code does not crash.\n", + " # once you have added code to this function, you can (and should) \n", + " # remove the pass statement as it does nothing.\n", + " \n", + " # finish this function definition and return the total budget\n", + " # across all agencies for the given `year`\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "573146c6", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"year_budget\")" + ] + }, + { + "cell_type": "markdown", + "id": "812eff4b", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 10:** What was the **total** budget across all seven agencies in *2023*?\n", + "\n", + "You **must** call the `year_budget` function to answer this question. Use the default argument (your call to the `year_budget` function **must not** pass any arguments)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6653c8eb", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.809566Z", + "iopub.status.busy": "2023-09-19T23:21:50.809566Z", + "iopub.status.idle": "2023-09-19T23:21:50.817166Z", + "shell.execute_reply": "2023-09-19T23:21:50.816159Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'total_budget_2023'\n", + "\n", + "# display the variable 'total_budget_2023' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b8bb0bf6", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q10\")" + ] + }, + { + "cell_type": "markdown", + "id": "6d544c4c", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 11:** What was the **total** budget across all seven agencies across the years *2019-2021* (both years included)?\n", + "\n", + "You **must** invoke the `year_budget` function in your answer to this question. To be clear, the answer to this question is a single floating point number whose value is the total budget across all seven agencies during these three years." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ae09ba46", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.853807Z", + "iopub.status.busy": "2023-09-19T23:21:50.853807Z", + "iopub.status.idle": "2023-09-19T23:21:50.862542Z", + "shell.execute_reply": "2023-09-19T23:21:50.861525Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'total_budget_2019_to_2021'\n", + "\n", + "# display the variable 'total_budget_2019_to_2021' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "397a5698", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q11\")" + ] + }, + { + "cell_type": "markdown", + "id": "10dceff6", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 5: `change_per_year(agency, start_year, end_year)`\n", + "\n", + "This function should return the **average increase or decrease** in budget (must be **positive** if there's an **increase**, and **negative** if there’s a **decrease**) over the period from `start_year` to `end_year` for the given `agency`.\n", + "\n", + "The type of the *return value* **must** be `float`.\n", + "\n", + "We're not asking you to do anything complicated here; you just need to compute the **difference** in budget between the end year and the start year, then **divide** by the number of elapsed years. Recall that you created a similar function in the lab. 
You can start with the following code snippet (with the default arguments):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ebd6bf18", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.908070Z", + "iopub.status.busy": "2023-09-19T23:21:50.908070Z", + "iopub.status.idle": "2023-09-19T23:21:50.915535Z", + "shell.execute_reply": "2023-09-19T23:21:50.914504Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "def change_per_year(agency, start_year=2019, end_year=2023): # DO NOT EDIT THIS LINE\n", + " \"\"\"\n", + " change_per_year(agency, start_year, end_year) computes the average increase \n", + " or decrease in budget over the period from `start_year` to `end_year` for the \n", + " given `agency`\n", + " \"\"\"\n", + " pass # as before, you should delete this statement after finishing your function.\n", + " \n", + " # TODO: compute and return the change per year in the budget of the agency between start_year and end_year\n", + " # TODO: it is recommended that you create intermediary variables to make your code easier to write and read.\n", + " # TODO: some useful intermediary variables you could create are: \n", + " # 'budget_start_year', 'budget_end_year', 'budget_difference'.\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5f196020", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"change_per_year\")" + ] + }, + { + "cell_type": "markdown", + "id": "2d315faf", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 12:** How much has the budget of the agency *Police* changed per year (on average) from *2019* to *2023*?\n", + "\n", + "You **must** call the `change_per_year` function to answer this question. Use the default arguments (your call to `change_per_year` function **must not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7237ce6f", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.938940Z", + "iopub.status.busy": "2023-09-19T23:21:50.938940Z", + "iopub.status.idle": "2023-09-19T23:21:50.944876Z", + "shell.execute_reply": "2023-09-19T23:21:50.944876Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'police_average_change'\n", + "\n", + "# display the variable 'police_average_change' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c19399ef", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q12\")" + ] + }, + { + "cell_type": "markdown", + "id": "8b3d297f", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 13:** How much has the budget of the agency *Fire* changed per year (on average) from *2020* to *2023*?\n", + "\n", + "You **must** call the `change_per_year` function to answer this question. Use the default arguments (your call to `change_per_year` function **should not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8ae586cf", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:50.978710Z", + "iopub.status.busy": "2023-09-19T23:21:50.978710Z", + "iopub.status.idle": "2023-09-19T23:21:50.985480Z", + "shell.execute_reply": "2023-09-19T23:21:50.984449Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'fire_average_change'\n", + "\n", + "# display the variable 'fire_average_change' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2c2d2c64", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q13\")" + ] + }, + { + "cell_type": "markdown", + "id": "50410eda", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 14:** How much has the budget of the agency *Finance* changed per year (on average) from *2019* to *2021*?\n", + "\n", + "You **must** call the `change_per_year` function to answer this question. Use the default arguments (your call to `change_per_year` function **should not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "79efde72", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.019206Z", + "iopub.status.busy": "2023-09-19T23:21:51.018216Z", + "iopub.status.idle": "2023-09-19T23:21:51.025777Z", + "shell.execute_reply": "2023-09-19T23:21:51.024761Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'finance_average_change'\n", + "\n", + "# display the variable 'finance_average_change' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "072ef37a", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q14\")" + ] + }, + { + "cell_type": "markdown", + "id": "f0ec8724", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 15:** How much has the budget of the agency *Metro Transit* changed per year (on average) from *2020* to *2022*?\n", + "\n", + "You **must** call the `change_per_year` function to answer this question. Use the default arguments (your call to `change_per_year` function **should not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e6a59fc4", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.058613Z", + "iopub.status.busy": "2023-09-19T23:21:51.058613Z", + "iopub.status.idle": "2023-09-19T23:21:51.065263Z", + "shell.execute_reply": "2023-09-19T23:21:51.065263Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'metro_transit_average_change'\n", + "\n", + "# display the variable 'metro_transit_average_change' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8da5b4b2", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q15\")" + ] + }, + { + "cell_type": "markdown", + "id": "0e741c19", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "### Function 6: `extrapolate(agency, target_year, start_year, end_year)`\n", + "\n", + "This function **must** compute the **average** change per year from the data from `start_year` to `end_year` for `agency`. It **must** then return the **predicted budget** in `target_year`, assuming budget continues increasing (or decreasing) by that same **constant** amount each year.\n", + "\n", + "The type of the *return value* **must** be `float`.\n", + "\n", + "You **must** define `extrapolate` so that the parameter `start_year` has the **default argument** `2019` and `end_year` has the **default argument** `2023`.\n", + "\n", + "You **must** call the `change_per_year` function in the definition of `extrapolate`. **Do not** manually recompute the average change in budget." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "64b3a31f", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.099463Z", + "iopub.status.busy": "2023-09-19T23:21:51.099463Z", + "iopub.status.idle": "2023-09-19T23:21:51.105076Z", + "shell.execute_reply": "2023-09-19T23:21:51.105076Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# define the function extrapolate(agency, target_year, start_year, end_year) here.\n", + "# it should return the estimated budget of the `agency` in `target_year` based on the \n", + "# average change in the budget between `start_year` and `end_year`.\n", + "# it is recommended that you create intermediary variables to make your code easier to write and read.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "38f5c145", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"extrapolate\")" + ] + }, + { + "cell_type": "markdown", + "id": "0b9aef22", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 16:** What is the **estimated** budget for the agency *Library* in *2025* based on the **average change** in budget per year for it between *2019* and *2023*?\n", + "\n", + "You **must** call the `extrapolate` function to answer this question. Use the default arguments if possible (your call to `extrapolate` function **must not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ce41449b", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.140608Z", + "iopub.status.busy": "2023-09-19T23:21:51.139606Z", + "iopub.status.idle": "2023-09-19T23:21:51.146826Z", + "shell.execute_reply": "2023-09-19T23:21:51.145816Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'library_budget_2025'\n", + "\n", + "# display the variable 'library_budget_2025' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a7949545", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q16\")" + ] + }, + { + "cell_type": "markdown", + "id": "53cd9e27", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 17:** What is the **estimated budget** for the agency *Parks* in *2030* based on the **average change** in budget per year for it between *2021* and *2023*?\n", + "\n", + "You **must** call the `extrapolate` function to answer this question. Use the default arguments if possible (your call to `extrapolate` function **must not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "520c9927", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.179129Z", + "iopub.status.busy": "2023-09-19T23:21:51.179129Z", + "iopub.status.idle": "2023-09-19T23:21:51.186078Z", + "shell.execute_reply": "2023-09-19T23:21:51.185068Z" + }, + "scrolled": true, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'parks_budget_2030'\n", + "\n", + "# display the variable 'parks_budget_2030' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2aee9682", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q17\")" + ] + }, + { + "cell_type": "markdown", + "id": "97027b73", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 18:** What is the **difference** between the **estimated budget** for the agency *Police* in *2023* based on the **average change** per year between *2019* and *2022* and the **actual** budget in *2023*?\n", + "\n", + "You **must** invoke the `extrapolate` function in your answer to this question. A **positive** answer implies that the actual budget in *2023* is **higher**, while a negative answer implies that it is lower. Use the default arguments if possible (your call to `extrapolate` function **must not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c40e7778", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.219761Z", + "iopub.status.busy": "2023-09-19T23:21:51.219761Z", + "iopub.status.idle": "2023-09-19T23:21:51.226807Z", + "shell.execute_reply": "2023-09-19T23:21:51.225770Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'police_diff_estimate_budget'\n", + "\n", + "# display the variable 'police_diff_estimate_budget' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d0d97cac", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q18\")" + ] + }, + { + "cell_type": "markdown", + "id": "86d37b62", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 19:** What is the **difference** between the **estimated budget** for the agency *Metro Transit* in *2023* based on the **average change** per year between *2019* and *2022* and the **actual** budget in *2023*?\n", + "\n", + "You **must** invoke the `extrapolate` function in your answer to this question. A **positive** answer implies that the actual budget in *2023* is **higher**, while a negative answer implies that it is lower. Use the default arguments if possible (your call to `extrapolate` function **must not** pass any more arguments than is absolutely necessary)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d1d2a714", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.270008Z", + "iopub.status.busy": "2023-09-19T23:21:51.270008Z", + "iopub.status.idle": "2023-09-19T23:21:51.277412Z", + "shell.execute_reply": "2023-09-19T23:21:51.276382Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'metro_transit_diff_estimate_budget'\n", + "\n", + "# display the variable 'metro_transit_diff_estimate_budget' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f07bf5f2", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q19\")" + ] + }, + { + "cell_type": "markdown", + "id": "6c660b95", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "**Question 20:** What is the **standard deviation** of the budget allotted to the agency *Metro Transit* over the five years?\n", + "\n", + "**Hint:** You **must** compute the *population standard deviation* as in this [example](https://en.wikipedia.org/wiki/Standard_deviation#Population_standard_deviation_of_grades_of_eight_students). **You may hardcode the number 5 for this computation**.\n", + "\n", + "**Hint:** You can find the square root of any number by raising it to the exponent `0.5`. In other words, the square root of `2` can be computed as `2**(0.5)`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f8b10fcc", + "metadata": { + "execution": { + "iopub.execute_input": "2023-09-19T23:21:51.320921Z", + "iopub.status.busy": "2023-09-19T23:21:51.320921Z", + "iopub.status.idle": "2023-09-19T23:21:51.332220Z", + "shell.execute_reply": "2023-09-19T23:21:51.331190Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# compute and store the answer in the variable 'metro_transit_budget_std_dev'\n", + "\n", + "# display the variable 'metro_transit_budget_std_dev' here" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "30748c9d", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"q20\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "41cb5e90", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"general_deductions\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "33f3e280", + "metadata": { + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "grader.check(\"summary\")" + ] + }, + { + "cell_type": "markdown", + "id": "b0a39a68", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + "## Submission\n", + "It is recommended that at this stage, you Restart and Run all Cells in your notebook.\n", + "That will automatically save your work and generate a zip file for you to submit.\n", + "\n", + "**SUBMISSION INSTRUCTIONS**:\n", + "1. **Upload** the zipfile to Gradescope.\n", + "2. If you completed the project with a **partner**, make sure to **add their name** by clicking \"Add Group Member\"\n", + "in Gradescope when uploading the zip file.\n", + "3. Check **Gradescope** results as soon as the auto-grader execution gets completed.\n", + "4. Your **final score** for this project is the score that you see on **Gradescope**.\n", + "5. 
You are **allowed** to resubmit on Gradescope as many times as you want to.\n", + "6. **Contact** a TA/PM if you lose any points on Gradescope for any **unclear reasons**." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "105996b1", + "metadata": { + "cell_type": "code", + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "# running this cell will create a new save checkpoint for your notebook\n", + "from IPython.display import display, Javascript\n", + "display(Javascript('IPython.notebook.save_checkpoint();'))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c153cce7", + "metadata": { + "cell_type": "code", + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "!jupytext --to py p3.ipynb" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a2836cec", + "metadata": { + "cell_type": "code", + "deletable": false, + "editable": false + }, + "outputs": [], + "source": [ + "public_tests.check_file_size(\"p3.ipynb\")\n", + "grader.export(pdf=False, run_tests=False, files=[\"p3.py\"])" + ] + }, + { + "cell_type": "markdown", + "id": "8ae2f2fd", + "metadata": { + "deletable": false, + "editable": false + }, + "source": [ + " " + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.4" + }, + "otter": { + "OK_FORMAT": true, + "tests": { + "agency_avg": { + "name": "agency_avg", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> \n>>> public_tests.rubric_check('agency_avg: function logic is incorrect')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope 
autograder will deduct points if your code fails the following rubric point - 'function logic is incorrect (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('agency_avg: `get_budget` and `get_id` functions are not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`get_budget` and `get_id` functions are not used (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "agency_min": { + "name": "agency_min", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> \n>>> public_tests.rubric_check('agency_min: function logic is incorrect')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'function logic is incorrect (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('agency_min: the built-in function `min` is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'the built-in function `min` is not used (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "change_per_year": { + "name": "change_per_year", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> \n>>> public_tests.rubric_check('change_per_year: function logic is incorrect')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'function logic is incorrect (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('change_per_year: default values of `start_year=2019` and `end_year=2023` are changed in `change_per_year`')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'default values of `start_year=2019` and `end_year=2023` are changed in `change_per_year` (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "extrapolate": { + "name": "extrapolate", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> \n>>> public_tests.rubric_check('extrapolate: function logic is incorrect')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'function logic is incorrect (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> \n>>> public_tests.rubric_check('extrapolate: `change_per_year` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`change_per_year` function is not used (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('extrapolate: default value are not specified in `extrapolate`')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'default value are not specified in `extrapolate` (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "general_deductions": { + "name": "general_deductions", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> \n>>> public_tests.rubric_check('general_deductions: Did not save the notebook file prior to running the cell containing \"export\". We cannot see your output if you do not save before generating the zip file. This deduction will become stricter for future projects.')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'Did not save the notebook file prior to running the cell containing \"export\". We cannot see your output if you do not save before generating the zip file. This deduction will become stricter for future projects. (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> \n>>> public_tests.rubric_check('general_deductions: Used conditionals/loops or other material not covered in class yet.')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'Used conditionals/loops or other material not covered in class yet. (-20)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q1": { + "name": "q1", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q1', public_health_id)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> public_tests.rubric_check('q1: public tests')\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q10": { + "name": "q10", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q10', total_budget_2023)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q10: `year_budget` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`year_budget` function is not used (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> \n>>> public_tests.rubric_check('q10: passed more arguments than necessary to `year_budget` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `year_budget` function (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q11": { + "name": "q11", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q11', total_budget_2019_to_2021)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q11: `year_budget` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`year_budget` function is not used (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q11: passed more arguments than necessary to `year_budget` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `year_budget` function (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> public_tests.rubric_check('q11: public tests')\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q12": { + "name": "q12", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q12', police_average_change)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q12: `change_per_year` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`change_per_year` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q12: passed more arguments than necessary to `change_per_year` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `change_per_year` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q13": { + "name": "q13", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q13', fire_average_change)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q13: `change_per_year` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`change_per_year` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q13: passed more arguments than necessary to `change_per_year` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `change_per_year` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q14": { + "name": "q14", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q14', finance_average_change)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q14: `change_per_year` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`change_per_year` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. 
Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q14: passed more arguments than necessary to `change_per_year` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `change_per_year` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q15": { + "name": "q15", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q15', metro_transit_average_change)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q15: `change_per_year` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`change_per_year` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q15: passed more arguments than necessary to `change_per_year` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `change_per_year` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q16": { + "name": "q16", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q16', library_budget_2025)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q16: `extrapolate` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`extrapolate` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q16: passed more arguments than necessary to `extrapolate` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `extrapolate` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q17": { + "name": "q17", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q17', parks_budget_2030)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q17: `extrapolate` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`extrapolate` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> \n>>> public_tests.rubric_check('q17: passed more arguments than necessary to `extrapolate` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `extrapolate` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q18": { + "name": "q18", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q18', police_diff_estimate_budget)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q18: `extrapolate` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`extrapolate` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q18: agency id is hardcoded')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'agency id is hardcoded (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> \n>>> public_tests.rubric_check('q18: passed more arguments than necessary to `extrapolate` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `extrapolate` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q19": { + "name": "q19", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q19', metro_transit_diff_estimate_budget)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q19: `extrapolate` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`extrapolate` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q19: agency id is hardcoded')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'agency id is hardcoded (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> \n>>> public_tests.rubric_check('q19: passed more arguments than necessary to `extrapolate` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'passed more arguments than necessary to `extrapolate` function (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q2": { + "name": "q2", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q2', finance_budget_2019)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q2: agency id is hardcoded')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'agency id is hardcoded (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q2: variable `finance_id` is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'variable `finance_id` is not used (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q20": { + "name": "q20", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q20', metro_transit_budget_std_dev)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q20: agency id is hardcoded')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'agency id is hardcoded (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q20: `agency_avg` function is not used to compute the average budget', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`agency_avg` function is not used to compute the average budget (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> public_tests.rubric_check('q20: public tests')\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q3": { + "name": "q3", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q3', max_budget_2023)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q3: `year_max` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`year_max` function is not used (-3)'. 
The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q4": { + "name": "q4", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q4', max_budget_2020_to_2022)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q4: `year_max` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`year_max` function is not used (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q4: the built-in function `max` is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'the built-in function `max` is not used (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> public_tests.rubric_check('q4: public tests')\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q5": { + "name": "q5", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q5', min_budget_library)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q5: `agency_min` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`agency_min` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q6": { + "name": "q6", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q6', min_budget_fire_police)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q6: `agency_min` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`agency_min` function is not used (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q6: the built-in function `min` is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'the built-in function `min` is not used (-2)'. 
The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> public_tests.rubric_check('q6: public tests')\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q7": { + "name": "q7", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q7', parks_avg_budget)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q7: `agency_avg` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`agency_avg` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q8": { + "name": "q8", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q8', public_health_avg_budget)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q8: `agency_avg` function is not used', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`agency_avg` function is not used (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "q9": { + "name": "q9", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.check('q9', diff_public_health_2023_to_average)\nAll test cases passed!\n", + "hidden": false, + "locked": false + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q9: average budget is computed without using the `agency_avg` function')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'average budget is computed without using the `agency_avg` function (-3)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('q9: `get_budget` function is not used for getting Public Health budget', False)\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - '`get_budget` function is not used for getting Public Health budget (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ }, + { + "code": ">>> public_tests.rubric_check('q9: public tests')\nAll test cases passed!\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "summary": { + "name": "summary", + "points": 127, + "suites": [ + { + "cases": [ + { + "code": ">>> public_tests.get_summary()\nTotal Score: 100/100\n", + "hidden": false, + "locked": false + } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + }, + "year_budget": { + "name": "year_budget", + "points": 0, + "suites": [ + { + "cases": [ + { + "code": ">>> \n>>> public_tests.rubric_check('year_budget: function logic is incorrect')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'function logic is incorrect (-2)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('year_budget: agency ids are hardcoded')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'agency ids are hardcoded (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." + }, + { + "code": ">>> \n>>> public_tests.rubric_check('year_budget: default value of `year=2023` is changed in `year_budget`')\nAll test cases passed!\n", + "hidden": false, + "locked": false, + "success_message": "Note that the Gradescope autograder will deduct points if your code fails the following rubric point - 'default value of `year=2023` is changed in `year_budget` (-1)'. The public tests cannot determine if your code satisfies these requirements. Verify your code manually." 
+ } + ], + "scored": true, + "setup": "", + "teardown": "", + "type": "doctest" + } + ] + } + } + }, + "vscode": { + "interpreter": { + "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49" + } + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/p3/project.py b/p3/project.py new file mode 100644 index 0000000000000000000000000000000000000000..ac8faab3fb4c477fce60c157f71d027bb4510244 --- /dev/null +++ b/p3/project.py @@ -0,0 +1,73 @@ +import csv as __csv__ + +# years in dataset +__years = None + +# key: (agency_id, year), val: spending in millions +__data = None + +# key: agency name, val: agency ID +__agency_to_id = None + + +def init(path): + """init(path) must be called to load data before other calls will work. You should call it like this: init("madison_budget.csv") or init("lab_budget.csv")""" + + global __years, __data, __agency_to_id + + if path != 'madison_budget.csv': + print("WARNING! Opening a path other than madison_budget.csv. " + + "That's fine for testing your code yourself, but madison_budget.csv " + + "will be the only file around when we test your code " + + "for grading.") + + __years = [] + __data = {} + __agency_to_id = {} + + f = open(path, encoding='utf-8') + data = list(__csv__.reader(f)) + f.close() + + for agency_idx in range(1, len(data)): + agency = data[agency_idx][1] + agency_id = int(data[agency_idx][0]) + __agency_to_id[agency] = agency_id + for year_idx in range(2, len(data[0])): + year = int(data[0][year_idx]) + if year not in __years: + __years.append(year) + agency_budget = float(data[agency_idx][year_idx]) + __data[(agency_id, year)] = agency_budget + +def dump(): + """prints all the data to the screen""" + if __agency_to_id == None: + raise Exception("you did not call init first") + + for agency in sorted(__agency_to_id.keys()): + agency_id = __agency_to_id[agency] + print("%-7s [ID %d]" % (agency, agency_id)) + for year in __years: + print(" %d: $%f MILLION" % (year, __data[(agency_id, year)])) 
+ print() + + +def get_id(agency): + """get_id(agency) returns the ID of the specified agency.""" + if __agency_to_id == None: + raise Exception("you did not call init first") + if not agency in __agency_to_id: + raise Exception("No agency '%s', only these: %s" % + (str(agency), ','.join(list(__agency_to_id.keys())))) + return __agency_to_id[agency] + + +def get_budget(agency_id, year=2023): + """get_budget(agency_id, year) returns the dollars (in millions) allotted to the specified agency in specified year.""" + if __data == None: + raise Exception("you did not call init first") + if not (agency_id, year) in __data: + raise Exception("No data for agency %s, in year %s" % + (str(agency_id), str(year))) + return __data[(agency_id, year)] diff --git a/p3/public_tests.py b/p3/public_tests.py new file mode 100644 index 0000000000000000000000000000000000000000..b2e4a65a9468444630f4c64c1e5ad9fe96f321ba --- /dev/null +++ b/p3/public_tests.py @@ -0,0 +1,808 @@ +#!/usr/bin/python +# + +import os, json, math, copy +from collections import namedtuple +from bs4 import BeautifulSoup + +HIDDEN_FILE = os.path.join("hidden", "hidden_tests.py") +if os.path.exists(HIDDEN_FILE): + import hidden.hidden_tests as hidn +# - + +MAX_FILE_SIZE = 750 # units - KB +REL_TOL = 6e-04 # relative tolerance for floats +ABS_TOL = 15e-03 # absolute tolerance for floats +TOTAL_SCORE = 100 # total score for the project + +DF_FILE = 'expected_dfs.html' +PLOT_FILE = 'expected_plots.json' + +PASS = "All test cases passed!" 
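As an aside, the `project` module shown above exposes `init`, `dump`, `get_id`, and `get_budget(agency_id, year=2023)`. The following is a self-contained sketch of how that API behaves, with hypothetical agency names and made-up budget numbers standing in for the real data that `init("madison_budget.csv")` would load:

```python
# Miniature stand-in for the structures project.py builds in init():
# a dict keyed by (agency_id, year), and a name -> id mapping.
# All agencies and numbers below are invented for illustration.
data = {
    (1, 2022): 10.5, (1, 2023): 11.25,   # hypothetical "Fire" agency
    (2, 2022): 4.0,  (2, 2023): 4.5,     # hypothetical "Library" agency
}
agency_to_id = {"Fire": 1, "Library": 2}

def get_id(agency):
    # mirrors project.get_id: look the id up instead of hardcoding it
    return agency_to_id[agency]

def get_budget(agency_id, year=2023):
    # mirrors project.get_budget: `year` defaults to 2023 when omitted
    return data[(agency_id, year)]

print(get_budget(get_id("Fire")))           # default year=2023 -> 11.25
print(get_budget(get_id("Library"), 2022))  # explicit year     -> 4.0
```

Calling `get_budget(get_id("Fire"))` rather than `get_budget(1)` is exactly the pattern the P3 rubric rewards: the id is never hardcoded, so the code keeps working if the dataset's ids change.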
+ +TEXT_FORMAT = "TEXT_FORMAT" # question type when expected answer is a type, str, int, float, or bool +TEXT_FORMAT_UNORDERED_LIST = "TEXT_FORMAT_UNORDERED_LIST" # question type when the expected answer is a list or a set where the order does *not* matter +TEXT_FORMAT_ORDERED_LIST = "TEXT_FORMAT_ORDERED_LIST" # question type when the expected answer is a list or tuple where the order does matter +TEXT_FORMAT_DICT = "TEXT_FORMAT_DICT" # question type when the expected answer is a dictionary +TEXT_FORMAT_SPECIAL_ORDERED_LIST = "TEXT_FORMAT_SPECIAL_ORDERED_LIST" # question type when the expected answer is a list where order does matter, but with possible ties. Elements are ordered according to values in special_ordered_json (with ties allowed) +TEXT_FORMAT_NAMEDTUPLE = "TEXT_FORMAT_NAMEDTUPLE" # question type when expected answer is a namedtuple +PNG_FORMAT_SCATTER = "PNG_FORMAT_SCATTER" # question type when the expected answer is a scatter plot +HTML_FORMAT = "HTML_FORMAT" # question type when the expected answer is a DataFrame +FILE_JSON_FORMAT = "FILE_JSON_FORMAT" # question type when the expected answer is a JSON file +SLASHES = " SLASHES" # question SUFFIX when expected answer contains paths with slashes + +def get_expected_format(): + """get_expected_format() returns a dict mapping each question to the format + of the expected answer.""" + expected_format = {'q1': 'TEXT_FORMAT', + 'q2': 'TEXT_FORMAT', + 'q3': 'TEXT_FORMAT', + 'q4': 'TEXT_FORMAT', + 'q5': 'TEXT_FORMAT', + 'q6': 'TEXT_FORMAT', + 'q7': 'TEXT_FORMAT', + 'q8': 'TEXT_FORMAT', + 'q9': 'TEXT_FORMAT', + 'q10': 'TEXT_FORMAT', + 'q11': 'TEXT_FORMAT', + 'q12': 'TEXT_FORMAT', + 'q13': 'TEXT_FORMAT', + 'q14': 'TEXT_FORMAT', + 'q15': 'TEXT_FORMAT', + 'q16': 'TEXT_FORMAT', + 'q17': 'TEXT_FORMAT', + 'q18': 'TEXT_FORMAT', + 'q19': 'TEXT_FORMAT', + 'q20': 'TEXT_FORMAT'} + return expected_format + + +def get_expected_json(): + """get_expected_json() returns a dict mapping each question to the expected + answer (if 
the format permits it).""" + expected_json = {'q1': 23, + 'q2': 4.160221, + 'q3': 86.917117, + 'q4': 83.995148, + 'q5': 17.703565, + 'q6': 52.853057, + 'q7': 15.2202502, + 'q8': 7.140231, + 'q9': 2.5160680000000006, + 'q10': 207.10501, + 'q11': 574.614804, + 'q12': 2.542170500000001, + 'q13': 3.6926783333333333, + 'q14': -0.20762100000000006, + 'q15': 0.28695749999999975, + 'q16': 20.804454999999997, + 'q17': 17.484620999999997, + 'q18': 0.5063980000000043, + 'q19': -5.422038666666667, + 'q20': 3.876482507227448} + return expected_json + + +def get_special_json(): + """get_special_json() returns a dict mapping each question to the expected + answer stored in a special format as a list of tuples. Each tuple contains + the element expected in the list, and its corresponding value. Any two + elements with the same value can appear in any order in the actual list, + but if two elements have different values, then they must appear in the + same order as in the expected list of tuples.""" + special_json = {} + return special_json + + +def compare(expected, actual, q_format=TEXT_FORMAT): + """compare(expected, actual) is used to compare when the format of + the expected answer is known for certain.""" + try: + if q_format == TEXT_FORMAT: + return simple_compare(expected, actual) + elif q_format == TEXT_FORMAT_UNORDERED_LIST: + return list_compare_unordered(expected, actual) + elif q_format == TEXT_FORMAT_ORDERED_LIST: + return list_compare_ordered(expected, actual) + elif q_format == TEXT_FORMAT_DICT: + return dict_compare(expected, actual) + elif q_format == TEXT_FORMAT_SPECIAL_ORDERED_LIST: + return list_compare_special(expected, actual) + elif q_format == TEXT_FORMAT_NAMEDTUPLE: + return namedtuple_compare(expected, actual) + elif q_format == PNG_FORMAT_SCATTER: + return compare_flip_dicts(expected, actual) + elif q_format == HTML_FORMAT: + return compare_cell_html(expected, actual) + elif q_format == FILE_JSON_FORMAT: + return compare_file_json(expected, actual) + else: + 
if expected != actual: + return "expected %s but found %s " % (repr(expected), repr(actual)) + except: + if expected != actual: + return "expected %s" % (repr(expected)) + return PASS + + +def print_message(expected, actual, complete_msg=True): + """print_message(expected, actual) displays a simple error message.""" + msg = "expected %s" % (repr(expected)) + if complete_msg: + msg = msg + " but found %s" % (repr(actual)) + return msg + + +def simple_compare(expected, actual, complete_msg=True): + """simple_compare(expected, actual) is used to compare when the expected answer + is a type/Nones/str/int/float/bool. When the expected answer is a float, + the actual answer is allowed to be within the tolerance limit. Otherwise, + the values must match exactly, or a very simple error message is displayed.""" + msg = PASS + if 'numpy' in repr(type((actual))): + actual = actual.item() + if isinstance(expected, type): + if expected != actual: + if isinstance(actual, type): + msg = "expected %s but found %s" % (expected.__name__, actual.__name__) + else: + msg = "expected %s but found %s" % (expected.__name__, repr(actual)) + elif not isinstance(actual, type(expected)) and not (isinstance(expected, (float, int)) and isinstance(actual, (float, int))): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + elif isinstance(expected, float): + if not math.isclose(actual, expected, rel_tol=REL_TOL, abs_tol=ABS_TOL): + msg = print_message(expected, actual, complete_msg) + elif isinstance(expected, (list, tuple)) or is_namedtuple(expected): + new_msg = print_message(expected, actual, complete_msg) + if len(expected) != len(actual): + return new_msg + for i in range(len(expected)): + val = simple_compare(expected[i], actual[i]) + if val != PASS: + return new_msg + elif isinstance(expected, dict): + new_msg = print_message(expected, actual, complete_msg) + if len(expected) != len(actual): + return new_msg + val = 
simple_compare(list(expected.keys()), list(actual.keys())) + if val != PASS: + return new_msg + for key in expected: + val = simple_compare(expected[key], actual[key]) + if val != PASS: + return new_msg + else: + if expected != actual: + msg = print_message(expected, actual, complete_msg) + return msg + + +def intelligent_compare(expected, actual, obj=None): + """intelligent_compare(expected, actual) is used to compare when the + data type of the expected answer is not known for certain, and default + assumptions need to be made.""" + if obj == None: + obj = type(expected).__name__ + if is_namedtuple(expected): + msg = namedtuple_compare(expected, actual) + elif isinstance(expected, (list, tuple)): + msg = list_compare_ordered(expected, actual, obj) + elif isinstance(expected, set): + msg = list_compare_unordered(expected, actual, obj) + elif isinstance(expected, (dict)): + msg = dict_compare(expected, actual) + else: + msg = simple_compare(expected, actual) + msg = msg.replace("CompDict", "dict").replace("CompSet", "set").replace("NewNone", "None") + return msg + + +def is_namedtuple(obj, init_check=True): + """is_namedtuple(obj) returns True if `obj` is a namedtuple object + defined in the test file.""" + bases = type(obj).__bases__ + if len(bases) != 1 or bases[0] != tuple: + return False + fields = getattr(type(obj), '_fields', None) + if not isinstance(fields, tuple): + return False + if init_check and not type(obj).__name__ in [nt.__name__ for nt in _expected_namedtuples]: + return False + return True + + +def list_compare_ordered(expected, actual, obj=None): + """list_compare_ordered(expected, actual) is used to compare when the + expected answer is a list/tuple, where the order of the elements matters.""" + msg = PASS + if not isinstance(actual, type(expected)): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + return msg + if obj == None: + obj = type(expected).__name__ + for i in 
range(len(expected)): + if i >= len(actual): + msg = "at index %d of the %s, expected missing %s" % (i, obj, repr(expected[i])) + break + val = intelligent_compare(expected[i], actual[i], "sub" + obj) + if val != PASS: + msg = "at index %d of the %s, " % (i, obj) + val + break + if len(actual) > len(expected) and msg == PASS: + msg = "at index %d of the %s, found unexpected %s" % (len(expected), obj, repr(actual[len(expected)])) + if len(expected) != len(actual): + msg = msg + " (found %d entries in %s, but expected %d)" % (len(actual), obj, len(expected)) + + if len(expected) > 0: + try: + if msg != PASS and list_compare_unordered(expected, actual, obj) == PASS: + msg = msg + " (%s may not be ordered as required)" % (obj) + except: + pass + return msg + + +def list_compare_helper(larger, smaller): + """list_compare_helper(larger, smaller) is a helper function which takes in + two lists of possibly unequal sizes and finds the item that is not present + in the smaller list, if there is such an element.""" + msg = PASS + j = 0 + for i in range(len(larger)): + if i == len(smaller): + msg = "expected %s" % (repr(larger[i])) + break + found = False + while not found: + if j == len(smaller): + val = simple_compare(larger[i], smaller[j - 1], complete_msg=False) + break + val = simple_compare(larger[i], smaller[j], complete_msg=False) + j += 1 + if val == PASS: + found = True + break + if not found: + msg = val + break + return msg + +class NewNone(): + """alternate class in place of None, which allows for comparison with + all other data types.""" + def __str__(self): + return 'None' + def __repr__(self): + return 'None' + def __lt__(self, other): + return True + def __le__(self, other): + return True + def __gt__(self, other): + return False + def __ge__(self, other): + return other == None + def __eq__(self, other): + return other == None + def __ne__(self, other): + return other != None + +class CompDict(dict): + """subclass of dict, which allows for comparison with 
other dicts.""" + def __init__(self, vals): + super(self.__class__, self).__init__(vals) + if type(vals) == CompDict: + self.val = vals.val + elif isinstance(vals, dict): + self.val = self.get_equiv(vals) + else: + raise TypeError("'%s' object cannot be type casted to CompDict class" % type(vals).__name__) + + def get_equiv(self, vals): + val = [] + for key in sorted(list(vals.keys())): + val.append((key, vals[key])) + return val + + def __str__(self): + return str(dict(self.val)) + def __repr__(self): + return repr(dict(self.val)) + def __lt__(self, other): + return self.val < CompDict(other).val + def __le__(self, other): + return self.val <= CompDict(other).val + def __gt__(self, other): + return self.val > CompDict(other).val + def __ge__(self, other): + return self.val >= CompDict(other).val + def __eq__(self, other): + return self.val == CompDict(other).val + def __ne__(self, other): + return self.val != CompDict(other).val + +class CompSet(set): + """subclass of set, which allows for comparison with other sets.""" + def __init__(self, vals): + super(self.__class__, self).__init__(vals) + if type(vals) == CompSet: + self.val = vals.val + elif isinstance(vals, set): + self.val = self.get_equiv(vals) + else: + raise TypeError("'%s' object cannot be type casted to CompSet class" % type(vals).__name__) + + def get_equiv(self, vals): + return sorted(list(vals)) + + def __str__(self): + return str(set(self.val)) + def __repr__(self): + return repr(set(self.val)) + def __getitem__(self, index): + return self.val[index] + def __lt__(self, other): + return self.val < CompSet(other).val + def __le__(self, other): + return self.val <= CompSet(other).val + def __gt__(self, other): + return self.val > CompSet(other).val + def __ge__(self, other): + return self.val >= CompSet(other).val + def __eq__(self, other): + return self.val == CompSet(other).val + def __ne__(self, other): + return self.val != CompSet(other).val + +def make_sortable(item): + """make_sortable(item) 
replaces all Nones in `item` with an alternate + class that allows for comparison with str/int/float/bool/list/set/tuple/dict. + It also replaces all dicts (and sets) with a subclass that allows for + comparison with other dicts (and sets).""" + if item == None: + return NewNone() + elif isinstance(item, (type, str, int, float, bool)): + return item + elif isinstance(item, (list, set, tuple)): + new_item = [] + for subitem in item: + new_item.append(make_sortable(subitem)) + if is_namedtuple(item): + return type(item)(*new_item) + elif isinstance(item, set): + return CompSet(new_item) + else: + return type(item)(new_item) + elif isinstance(item, dict): + new_item = {} + for key in item: + new_item[key] = make_sortable(item[key]) + return CompDict(new_item) + return item + +def list_compare_unordered(expected, actual, obj=None): + """list_compare_unordered(expected, actual) is used to compare when the + expected answer is a list/set where the order of the elements does not matter.""" + msg = PASS + if not isinstance(actual, type(expected)): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + return msg + if obj == None: + obj = type(expected).__name__ + + try: + sort_expected = sorted(make_sortable(expected)) + sort_actual = sorted(make_sortable(actual)) + except: + return "unexpected datatype found in %s; expected entries of type %s" % (obj, type(expected[0]).__name__) + + if len(actual) == 0 and len(expected) > 0: + msg = "in the %s, missing " % (obj) + repr(sort_expected[0]) + elif len(actual) > 0 and len(expected) > 0: + val = intelligent_compare(sort_expected[0], sort_actual[0]) + if val.startswith("expected to find type"): + msg = "in the %s, " % (obj) + simple_compare(sort_expected[0], sort_actual[0]) + else: + if len(expected) > len(actual): + msg = "in the %s, missing " % (obj) + list_compare_helper(sort_expected, sort_actual) + elif len(expected) < len(actual): + msg = "in the %s, found un" % (obj) + 
list_compare_helper(sort_actual, sort_expected) + if len(expected) != len(actual): + msg = msg + " (found %d entries in %s, but expected %d)" % (len(actual), obj, len(expected)) + return msg + else: + val = list_compare_helper(sort_expected, sort_actual) + if val != PASS: + msg = "in the %s, missing " % (obj) + val + ", but found un" + list_compare_helper(sort_actual, + sort_expected) + return msg + + +def namedtuple_compare(expected, actual): + """namedtuple_compare(expected, actual) is used to compare when the + expected answer is a namedtuple defined in the test file.""" + msg = PASS + if not is_namedtuple(actual, False): + msg = "expected namedtuple but found %s" % (type(actual).__name__) + return msg + if type(expected).__name__ != type(actual).__name__: + return "expected namedtuple %s but found namedtuple %s" % (type(expected).__name__, type(actual).__name__) + expected_fields = expected._fields + actual_fields = actual._fields + msg = list_compare_ordered(list(expected_fields), list(actual_fields), "namedtuple attributes") + if msg != PASS: + return msg + for field in expected_fields: + val = intelligent_compare(getattr(expected, field), getattr(actual, field)) + if val != PASS: + msg = "at attribute %s of namedtuple %s, " % (field, type(expected).__name__) + val + return msg + return msg + + +def clean_slashes(item): + """clean_slashes(item) replaces all slashes in strings inside `item` with the operating system's path separator.""" + if isinstance(item, str): + return item.replace("\\", "/").replace("/", os.path.sep) + elif item == None or isinstance(item, (type, int, float, bool)): + return item + elif isinstance(item, (list, tuple, set)) or is_namedtuple(item): + new_item = [] + for subitem in item: + new_item.append(clean_slashes(subitem)) + if is_namedtuple(item): + return type(item)(*new_item) + else: + return type(item)(new_item) + elif isinstance(item, dict): + new_item = {} + for key in item: + new_item[clean_slashes(key)] = clean_slashes(item[key]) + return new_item + + +def list_compare_special_initialize(special_expected): + 
"""list_compare_special_initialize(special_expected) takes in the special + ordering stored as a sorted list of items, and returns a list of lists + where the ordering among the inner lists does not matter.""" + latest_val = None + clean_special = [] + for row in special_expected: + if latest_val == None or row[1] != latest_val: + clean_special.append([]) + latest_val = row[1] + clean_special[-1].append(row[0]) + return clean_special + + +def list_compare_special(special_expected, actual): + """list_compare_special(special_expected, actual) is used to compare when the + expected answer is a list with special ordering defined in `special_expected`.""" + msg = PASS + expected_list = [] + special_order = list_compare_special_initialize(special_expected) + for expected_item in special_order: + expected_list.extend(expected_item) + val = list_compare_unordered(expected_list, actual) + if val != PASS: + return val + i = 0 + for expected_item in special_order: + j = len(expected_item) + actual_item = actual[i: i + j] + val = list_compare_unordered(expected_item, actual_item) + if val != PASS: + if j == 1: + msg = "at index %d " % (i) + val + else: + msg = "between indices %d and %d " % (i, i + j - 1) + val + msg = msg + " (list may not be ordered as required)" + break + i += j + return msg + + +def dict_compare(expected, actual, obj=None): + """dict_compare(expected, actual) is used to compare when the expected answer + is a dict.""" + msg = PASS + if not isinstance(actual, type(expected)): + msg = "expected to find type %s but found type %s" % (type(expected).__name__, type(actual).__name__) + return msg + if obj == None: + obj = type(expected).__name__ + + expected_keys = list(expected.keys()) + actual_keys = list(actual.keys()) + val = list_compare_unordered(expected_keys, actual_keys, obj) + + if val != PASS: + msg = "bad keys in %s: " % (obj) + val + if msg == PASS: + for key in expected: + new_obj = None + if isinstance(expected[key], (list, tuple, set)): + new_obj 
= 'value' + elif isinstance(expected[key], dict): + new_obj = 'sub' + obj + val = intelligent_compare(expected[key], actual[key], new_obj) + if val != PASS: + msg = "incorrect value for key %s in %s: " % (repr(key), obj) + val + return msg + + +def is_flippable(item): + """is_flippable(item) determines if the given dict of lists has lists of the + same length and is therefore flippable.""" + item_lens = set([str(len(item[key])) for key in item]) + if len(item_lens) == 1: + return PASS + else: + return "found lists of lengths %s" % (", ".join(list(item_lens))) + +def flip_dict_of_lists(item): + """flip_dict_of_lists(item) flips a dict of lists into a list of dicts if the + lists are of same length.""" + new_item = [] + length = len(list(item.values())[0]) + for i in range(length): + new_dict = {} + for key in item: + new_dict[key] = item[key][i] + new_item.append(new_dict) + return new_item + +def compare_flip_dicts(expected, actual, obj="lists"): + """compare_flip_dicts(expected, actual) flips a dict of lists (or dicts) into + a list of dicts (or dict of dicts) and then compares the list ignoring order.""" + msg = PASS + example_item = list(expected.values())[0] + if isinstance(example_item, (list, tuple)): + val = is_flippable(actual) + if val != PASS: + msg = "expected to find lists of length %d, but " % (len(example_item)) + val + return msg + msg = list_compare_unordered(flip_dict_of_lists(expected), flip_dict_of_lists(actual), "lists") + elif isinstance(example_item, dict): + expected_keys = list(example_item.keys()) + for key in actual: + val = list_compare_unordered(expected_keys, list(actual[key].keys()), "dictionary %s" % key) + if val != PASS: + return val + for cat_key in expected_keys: + expected_category = {} + actual_category = {} + for key in expected: + expected_category[key] = expected[key][cat_key] + actual_category[key] = actual[key][cat_key] + val = list_compare_unordered(flip_dict_of_lists(expected_category), flip_dict_of_lists(actual_category), "category " + 
repr(cat_key)) + if val != PASS: + return val + return msg + + +def get_expected_tables(): + """get_expected_tables() reads the html file with the expected DataFrames + and returns a dict mapping each question to a html table.""" + if not os.path.exists(DF_FILE): + return None + + expected_tables = {} + f = open(DF_FILE, encoding='utf-8') + soup = BeautifulSoup(f.read(), 'html.parser') + f.close() + + tables = soup.find_all('table') + for table in tables: + expected_tables[table.get("data-question")] = table + + return expected_tables + +def parse_df_html_table(table): + """parse_df_html_table(table) takes in a table as a html string and returns + a dict mapping each row and column index to the value at that position.""" + rows = [] + for tr in table.find_all('tr'): + rows.append([]) + for cell in tr.find_all(['td', 'th']): + rows[-1].append(cell.get_text().strip("\n ")) + + cells = {} + for r in range(1, len(rows)): + for c in range(1, len(rows[0])): + rname = rows[r][0] + cname = rows[0][c] + cells[(rname,cname)] = rows[r][c] + return cells + + +def get_expected_namedtuples(): + """get_expected_namedtuples() defines the required namedtuple objects + globally. 
It also returns a tuple of the classes.""" + expected_namedtuples = [] + + return tuple(expected_namedtuples) + +_expected_namedtuples = get_expected_namedtuples() + + +def compare_cell_html(expected, actual): + """compare_cell_html(expected, actual) is used to compare when the + expected answer is a DataFrame stored in the `expected_dfs` html file.""" + expected_cells = parse_df_html_table(expected) + try: + actual_cells = parse_df_html_table(BeautifulSoup(actual, 'html.parser').find('table')) + except Exception as e: + return "expected to find type DataFrame but found type %s instead" % type(actual).__name__ + + expected_cols = list(set(["column %s" % (loc[1]) for loc in expected_cells])) + actual_cols = list(set(["column %s" % (loc[1]) for loc in actual_cells])) + msg = list_compare_unordered(expected_cols, actual_cols, "DataFrame") + if msg != PASS: + return msg + + expected_rows = list(set(["row index %s" % (loc[0]) for loc in expected_cells])) + actual_rows = list(set(["row index %s" % (loc[0]) for loc in actual_cells])) + msg = list_compare_unordered(expected_rows, actual_rows, "DataFrame") + if msg != PASS: + return msg + + for location, expected in expected_cells.items(): + location_name = "column {} at index {}".format(location[1], location[0]) + actual = actual_cells.get(location, None) + if actual == None: + return "in %s, expected to find %s" % (location_name, repr(expected)) + try: + actual_ans = float(actual) + expected_ans = float(expected) + if math.isnan(actual_ans) and math.isnan(expected_ans): + continue + except Exception as e: + actual_ans, expected_ans = actual, expected + msg = simple_compare(expected_ans, actual_ans) + if msg != PASS: + return "in %s, " % location_name + msg + return PASS + + +def get_expected_plots(): + """get_expected_plots() reads the json file with the expected plot data + and returns a dict mapping each question to a dictionary with the plots data.""" + if not os.path.exists(PLOT_FILE): + return None + + f = 
open(PLOT_FILE, encoding='utf-8') + expected_plots = json.load(f) + f.close() + return expected_plots + + +def compare_file_json(expected, actual): + """compare_file_json(expected, actual) is used to compare when the + expected answer is a JSON file.""" + msg = PASS + if not os.path.isfile(expected): + return "file %s not found; make sure it is downloaded and stored in the correct directory" % (expected) + elif not os.path.isfile(actual): + return "file %s not found; make sure that you have created the file with the correct name" % (actual) + try: + e = open(expected, encoding='utf-8') + expected_data = json.load(e) + e.close() + except json.JSONDecodeError: + return "file %s is broken and cannot be parsed; please delete and redownload the file correctly" % (expected) + try: + a = open(actual, encoding='utf-8') + actual_data = json.load(a) + a.close() + except json.JSONDecodeError: + return "file %s is broken and cannot be parsed" % (actual) + if type(expected_data) == list: + msg = list_compare_ordered(expected_data, actual_data, 'file ' + actual) + elif type(expected_data) == dict: + msg = dict_compare(expected_data, actual_data) + return msg + + +_expected_json = get_expected_json() +_special_json = get_special_json() +_expected_plots = get_expected_plots() +_expected_tables = get_expected_tables() +_expected_format = get_expected_format() + +def check(qnum, actual): + """check(qnum, actual) is used to check if the answer in the notebook is + the correct answer, and provide useful feedback if the answer is incorrect.""" + msg = PASS + error_msg = "<b style='color: red;'>ERROR:</b> " + q_format = _expected_format[qnum] + + if q_format == TEXT_FORMAT_SPECIAL_ORDERED_LIST: + expected = _special_json[qnum] + elif q_format == PNG_FORMAT_SCATTER: + if _expected_plots == None: + msg = error_msg + "file %s not parsed; make sure it is downloaded and stored in the correct directory" % (PLOT_FILE) + else: + expected = _expected_plots[qnum] + elif q_format == HTML_FORMAT: + 
if _expected_tables == None: + msg = error_msg + "file %s not parsed; make sure it is downloaded and stored in the correct directory" % (DF_FILE) + else: + expected = _expected_tables[qnum] + else: + expected = _expected_json[qnum] + + if SLASHES in q_format: + q_format = q_format.replace(SLASHES, "") + expected = clean_slashes(expected) + actual = clean_slashes(actual) + + if msg != PASS: + print(msg) + else: + msg = compare(expected, actual, q_format) + if msg != PASS: + msg = error_msg + msg + print(msg) + + +def check_file_size(path): + """check_file_size(path) throws an error if the file is too big to display + on Gradescope.""" + size = os.path.getsize(path) + assert size < MAX_FILE_SIZE * 10**3, "Your file is too big to be displayed by Gradescope; please delete unnecessary output cells so your file size is < %s KB" % MAX_FILE_SIZE + + +def reset_hidden_tests(): + """reset_hidden_tests() resets all hidden tests on the Gradescope autograder where the hidden test file exists""" + if not os.path.exists(HIDDEN_FILE): + return + hidn.reset_hidden_tests() + +def rubric_check(rubric_point, ignore_past_errors=True): + """rubric_check(rubric_point) uses the hidden test file on the Gradescope autograder to grade the `rubric_point`""" + if not os.path.exists(HIDDEN_FILE): + print(PASS) + return + error_msg_1 = "ERROR: " + error_msg_2 = "TEST DETAILS: " + try: + msg = hidn.rubric_check(rubric_point, ignore_past_errors) + except: + msg = "hidden tests crashed before execution" + if msg != PASS: + hidn.make_deductions(rubric_point) + if msg == "public tests failed": + comment = "The public tests have failed, so you will not receive any points for this question." + comment += "\nPlease confirm that the public tests pass locally before submitting." + elif msg == "answer is hardcoded": + comment = "In the datasets for testing hardcoding, all numbers are replaced with random values." 
+            comment += "\nIf the answer is the same as in the original dataset for all these datasets"
+            comment += "\ndespite this, that implies that the answer in the notebook is hardcoded."
+            comment += "\nYou will not receive any points for this question."
+        else:
+            comment = hidn.get_comment(rubric_point)
+        msg = error_msg_1 + msg
+        if comment != "":
+            msg = msg + "\n" + error_msg_2 + comment
+    print(msg)
+
+def get_summary():
+    """get_summary() returns the summary of the notebook using the hidden test file on the Gradescope autograder"""
+    if not os.path.exists(HIDDEN_FILE):
+        print("Total Score: %d/%d" % (TOTAL_SCORE, TOTAL_SCORE))
+        return
+    score = min(TOTAL_SCORE, hidn.get_score(TOTAL_SCORE))
+    display_msg = "Total Score: %d/%d" % (score, TOTAL_SCORE)
+    if score != TOTAL_SCORE:
+        display_msg += "\n" + hidn.get_deduction_string()
+    print(display_msg)
+
+def get_score_digit(digit):
+    """get_score_digit(digit) returns the `digit` of the score using the hidden test file on the Gradescope autograder"""
+    if not os.path.exists(HIDDEN_FILE):
+        score = TOTAL_SCORE
+    else:
+        score = hidn.get_score(TOTAL_SCORE)
+    digits = bin(score)[2:]
+    digits = "0"*(7 - len(digits)) + digits
+    return int(digits[6 - digit])
diff --git a/p3/rubric.md b/p3/rubric.md
new file mode 100644
index 0000000000000000000000000000000000000000..eda98c712e68f2d56db8d8086ae65bcc153a57dd
--- /dev/null
+++ b/p3/rubric.md
@@ -0,0 +1,114 @@
+# Project 3 (P3) grading rubric
+
+## Code reviews
+
+- The Gradescope autograder will make deductions based on the rubric provided below.
+- To ensure that you don't lose any points, you must review the rubric and make sure that you have followed the instructions provided in the project correctly.
+
+## Rubric
+
+### General guidelines:
+
+- Did not save the notebook file prior to running the cell containing "export". We cannot see your output if you do not save before generating the zip file. This deduction will become stricter for future projects. (-1)
+- Used conditionals/loops or other material not covered in class yet. (-20)
+- Hardcoded answers. (all points allotted for that question)
+
+### Question specific guidelines:
+
+- q1 (2)
+
+- q2 (3)
+    - agency id is hardcoded (-2)
+    - variable `finance_id` is not used (-1)
+
+- q3 (3)
+    - `year_max` function is not used (-3)
+
+- q4 (5)
+    - `year_max` function is not used (-2)
+    - the built-in function `max` is not used (-2)
+
+- `agency_min` (3)
+    - function logic is incorrect (-2)
+    - the built-in function `min` is not used (-1)
+
+- q5 (3)
+    - `agency_min` function is not used (-3)
+
+- q6 (5)
+    - `agency_min` function is not used (-2)
+    - the built-in function `min` is not used (-2)
+
+- `agency_avg` (3)
+    - function logic is incorrect (-2)
+    - `get_budget` and `get_id` functions are not used (-1)
+
+- q7 (3)
+    - `agency_avg` function is not used (-3)
+
+- q8 (3)
+    - `agency_avg` function is not used (-3)
+
+- q9 (5)
+    - average budget is computed without using the `agency_avg` function (-3)
+    - `get_budget` function is not used for getting Public Health budget (-1)
+
+- `year_budget` (4)
+    - function logic is incorrect (-2)
+    - agency ids are hardcoded (-1)
+    - default value of `year=2023` is changed in `year_budget` (-1)
+
+- q10 (4)
+    - `year_budget` function is not used (-2)
+    - passed more arguments than necessary to `year_budget` function (-2)
+
+- q11 (5)
+    - `year_budget` function is not used (-2)
+    - passed more arguments than necessary to `year_budget` function (-2)
+
+- `change_per_year` (5)
+    - function logic is incorrect (-3)
+    - default values of `start_year=2019` and `end_year=2023` are changed in `change_per_year` (-2)
+
+- q12 (4)
+    - `change_per_year` function is not used (-3)
+    - passed more arguments than necessary to `change_per_year` function (-1)
+
+- q13 (4)
+    - `change_per_year` function is not used (-3)
+    - passed more arguments than necessary to `change_per_year` function (-1)
+
+- q14 (4)
+    - `change_per_year` function is not used (-3)
+    - passed more arguments than necessary to `change_per_year` function (-1)
+
+- q15 (4)
+    - `change_per_year` function is not used (-3)
+    - passed more arguments than necessary to `change_per_year` function (-1)
+
+- `extrapolate` (5)
+    - function logic is incorrect (-2)
+    - `change_per_year` function is not used (-2)
+    - default values are not specified in `extrapolate` (-1)
+
+- q16 (4)
+    - `extrapolate` function is not used (-3)
+    - passed more arguments than necessary to `extrapolate` function (-1)
+
+- q17 (4)
+    - `extrapolate` function is not used (-3)
+    - passed more arguments than necessary to `extrapolate` function (-1)
+
+- q18 (5)
+    - `extrapolate` function is not used (-3)
+    - agency id is hardcoded (-1)
+    - passed more arguments than necessary to `extrapolate` function (-1)
+
+- q19 (5)
+    - `extrapolate` function is not used (-3)
+    - agency id is hardcoded (-1)
+    - passed more arguments than necessary to `extrapolate` function (-1)
+
+- q20 (5)
+    - agency id is hardcoded (-2)
+    - `agency_avg` function is not used to compute the average budget (-1)
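
The "agency id is hardcoded" deductions that recur above can be pictured with a minimal sketch. This is purely illustrative: `get_id` and the agency names here are toy stand-ins, not the real `project.py` API.

```python
# Toy lookup table standing in for the dataset behind the project's get_id.
_agency_ids = {"Agriculture": 0, "Commerce": 1, "Fire": 2}

def get_id(agency):
    """Return the column id of the given agency (stand-in for project.get_id)."""
    return _agency_ids[agency]

fire_id_hardcoded = 2        # hardcoded: silently wrong if the columns ever move
fire_id = get_id("Fire")     # robust: looked up from the dataset each time
```

As the autograder's hardcoding message explains, the hidden tests rerun the notebook on datasets where the numbers are replaced with random values, so only the looked-up id keeps producing the right answer.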