# DRAFT: DO NOT START
## Overview
In this project, you'll build a server that accepts uploaded CSV files, stores their contents, and performs query operations on the data. The server maintains **only ONE** logical table. You should think of each uploaded CSV as containing a portion of that larger table, which grows with each upload.
The server will write two files for each uploaded CSV file: one in CSV format and another in Parquet (i.e., they are two copies of the table in different formats). Clients that we provide will communicate with your server via RPC calls.
### Workflow and Format Walkthrough
In P3, our `client.py` takes in a batch of operation commands stored in file `workload.txt` and executes them line by line. There are two types of commands you can put into `workload.txt` to control the client behavior. First, each *upload* command:
```
u file.csv
```
will instruct the client to read a CSV data file named `file.csv` as binary bytes and use the corresponding RPC call to upload it to the server. Next, you can use a subsequent *sum* command to perform a summation over one specified column of the table. For example:
```
s p x
```
asks the client to send an RPC request instructing the server to return the total sum of column `x`. As there are two copies of the same table, in CSV and Parquet format, the `p` in the command asks the server to read column data only from the Parquet files. Below is a minimal example. Assume that the client has uploaded two files, `file1.csv` and `file2.csv`, which contain these records respectively:
```
x,y,z
1,2,3
4,5,6
```
And:
```
x,y
5,10
0,20
10,15
```
You can assume columns contain only integers. You should be able to upload the files and do sums with the following `workload.txt` description:
```
u file1.csv
u file2.csv
s p x
s p z
s c w
```
The expected output would be:
```
20
9
0
```
(Column `x` appears in both files, so its sum is 1+4+5+0+10 = 20; `z` appears only in `file1.csv`, giving 3+6 = 9; and `w` appears in neither file, so its sum is 0.)
Inspect both the `workload.txt` file content and the client code (i.e., `read_workload_file()`) to understand how each text command leads to one gRPC call. A separate `purge.txt` workload file is provided and *should not be modified*. The client can use an RPC call `Purge()` to reset the server and remove all files stored by the remote peer.
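To make the mapping concrete, here is a rough sketch (not the provided client) of how one workload line could translate into one gRPC call. The module, stub, message, and field names (`table_pb2`, `TableStub`, `UploadReq`, `ColSumReq`) are purely illustrative assumptions; the real names are dictated by the provided client and your `.proto` file.
```python
import grpc
import table_pb2, table_pb2_grpc      # hypothetical names for the generated modules

channel = grpc.insecure_channel("localhost:5440")   # 5440 matches the port mapping used later
stub = table_pb2_grpc.TableStub(channel)            # stub class name is an assumption

def run_command(line):
    """Map one workload line to one gRPC call."""
    parts = line.split()
    if parts[0] == "u":                              # "u file1.csv" -> one Upload RPC
        with open(parts[1], "rb") as f:
            stub.Upload(table_pb2.UploadReq(csv_data=f.read()))
    elif parts[0] == "s":                            # "s p x" -> one ColSum RPC
        fmt = "parquet" if parts[1] == "p" else "csv"
        resp = stub.ColSum(table_pb2.ColSumReq(format=fmt, column=parts[2]))
        print(resp.total)
```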
In summary, this project asks you to:
* Implement logic for uploading and processing CSV and Parquet files.
* Perform computations such as summing values from specific columns.
* Manage concurrency with locking in a multi-threaded server/client.
* Benchmark the server/client system and visualize the results.
Before starting, please review the [general project directions](../projects.md).
## Clarifications/Corrections
## Part 1: Communication (gRPC)
In this project, the client program `client.py` will communicate with a server, `server.py`, via gRPC. We provide starter code for the client program. Your job is to write a `.proto` file to generate a gRPC stub (used by our client) and a servicer class that you will inherit from in `server.py`.
Take a moment to look at the client code and answer the following questions:
* what are the names of the imported gRPC modules? This will determine what you name your `.proto` file.
* what methods are called on the stubs? This will determine the RPC definitions in your `.proto` file.
* what arguments are passed to the methods, and what values are extracted from the return values? This will determine the fields in the messages in your `.proto` file.
* what port number does the client use? This will determine the port that the gRPC server should expose.
Write a `.proto` file based on your above observations and run the `grpc_tools.protoc` compiler to generate stub code for our client and servicer code for your server. All field types will be strings, except `total` and `csv_data`, which should be `int64` and `bytes`, respectively.
Now build the .proto on your VM. Install the tools like this:
```bash
python3 -m venv venv
source venv/bin/activate
pip3 install grpcio==1.70.0 grpcio-tools==1.70.0 protobuf==5.29.3
```
Then use `grpc_tools.protoc` to build your `.proto` file.
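For example, assuming your file is named `table.proto` (a hypothetical name; in practice it must match the modules the provided client imports), the compiler can be invoked from the shell as `python3 -m grpc_tools.protoc ...` or programmatically, roughly like this:
```python
from grpc_tools import protoc

# Generates table_pb2.py (messages) and table_pb2_grpc.py (stub + servicer)
# in the current directory; "table.proto" is a hypothetical file name.
protoc.main([
    "protoc",              # placeholder argv[0]
    "-I.",
    "--python_out=.",
    "--grpc_python_out=.",
    "table.proto",
])
```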
In your server, override the *three* RPC methods of the generated servicer. For now, the methods should do nothing but return messages with the error field set to "TODO", leaving any other fields unspecified.
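A stubbed-out servicer might look roughly like the sketch below. All generated names (`table_pb2`, `table_pb2_grpc`, `TableServicer`, the `*Resp` messages) are assumptions that must match whatever your `.proto` actually generates, and the port must match the one used by the provided client.
```python
import grpc
from concurrent import futures
import table_pb2, table_pb2_grpc   # hypothetical generated modules

class Server(table_pb2_grpc.TableServicer):
    def Upload(self, request, context):
        return table_pb2.UploadResp(error="TODO")

    def ColSum(self, request, context):
        return table_pb2.ColSumResp(error="TODO")

    def Purge(self, request, context):
        return table_pb2.PurgeResp(error="TODO")

if __name__ == "__main__":
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=8),
                         options=[("grpc.so_reuseport", 0)])
    table_pb2_grpc.add_TableServicer_to_server(Server(), server)
    server.add_insecure_port("0.0.0.0:5440")   # assumed port
    server.start()
    server.wait_for_termination()
```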
If communication is working correctly so far, you should be able to start a server and use a client to get back a "TODO" error message via gRPC:
```bash
python3 -u server.py &> log.txt &
```
Create a `Dockerfile.server` to build an image that will also let you run your server in a container. It should be possible to build and run your server like this:
```bash
docker build . -f Dockerfile.server -t ${PROJECT}-server
docker run -d -m 512m -p 127.0.0.1:5440:5440 ${PROJECT}-server
```
Like P2, the compose file assumes a "PROJECT" environment variable. You can set it to `p3` in your environment as follows (the autograder may use a different prefix for testing):
```bash
export PROJECT=p3
```
The client program should then be able to communicate with the dockerized server the same way it communicated with the server outside of a container. Once your client program successfully interacts with the dockerized server, you should similarly write a `Dockerfile.client` to build a container for `client.py`. Finally, test your setup with `docker compose`:
```bash
docker compose up -d
docker ps
# should see:
CONTAINER ID   IMAGE       COMMAND                  CREATED         ...
fa8de65e0e7c   p3-client   "python3 -u /client.…"   2 seconds ago   ...
4c899de6e43f   p3-server   "python3 -u /server.…"   2 seconds ago   ...
```
**HINT:** consider writing a `.sh` script that helps you redeploy code changes. Every time you modify the source code (`client.py`, `server.py`, `benchmark.py`), you may want to rebuild the images, bring down the previous Docker cluster, and re-instantiate a new one.
## Part 2: Server Implementation
You will need to implement three RPC methods on the server side: `Upload`, `ColSum`, and `Purge`.
### Upload
This method should:
1. Recover the uploaded CSV table from the *binary* bytes carried by the RPC request message.
2. Write the table to a CSV file, and write the same table to another file in Parquet format.
**Requirement:** Write two files to disk per upload. We will test your server with a 512MB memory limit. Do *NOT* keep the table data in memory.
**HINT 1:** You are free to decide the names and locations of the stored files. However, the server must keep these records to process future queries (for instance, you can add paths to a data structure like a list or dictionary).
**HINT 2:** Both `pandas` and `pyarrow` provide interfaces to write a table to file.
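Putting the two hints together, an Upload handler could look roughly like this sketch. The file naming scheme and the `uploads` bookkeeping list are illustrative choices, not requirements (locking is discussed in Part 3):
```python
import io
import uuid
import pandas as pd   # pyarrow must also be installed for to_parquet()

uploads = []           # server state: (csv_path, parquet_path) per uploaded file

def handle_upload(csv_bytes):
    df = pd.read_csv(io.BytesIO(csv_bytes))     # recover the table from raw bytes
    name = uuid.uuid4().hex                     # arbitrary unique file name
    csv_path, parquet_path = f"{name}.csv", f"{name}.parquet"
    df.to_csv(csv_path, index=False)            # copy 1: CSV on disk
    df.to_parquet(parquet_path)                 # copy 2: Parquet on disk
    uploads.append((csv_path, parquet_path))    # remember paths, not the data
```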
### ColSum
Whenever your server receives a column summation request, it should loop over all the data files that have been uploaded, compute a local sum for each such file, and finally return a total sum for the whole table.
The table does not have a fixed schema (i.e., a given column is not guaranteed to appear in every uploaded file). You should skip a file if it lacks the target column (e.g., `z` and `w` in the above example). The server should sum over either Parquet or CSV files according to the input `format` (not both). For a given column, the query result for `format="parquet"` should be the same as for `format="csv"`, though performance may differ.
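A sum routine following those rules might look like the sketch below; it reuses the hypothetical `uploads` list from the Upload sketch, and reading only the requested Parquet column would be a further optimization:
```python
import pandas as pd

def col_sum(column, fmt, uploads):
    total = 0
    for csv_path, parquet_path in uploads:
        # Read from only ONE of the two copies, depending on the requested format.
        if fmt == "parquet":
            df = pd.read_parquet(parquet_path)
        else:
            df = pd.read_csv(csv_path)
        if column in df.columns:                 # skip files that lack the column
            total += int(df[column].sum())
    return total
```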
### Purge
This method facilitates testing and subsequent benchmarking. The method should:
1. Remove all local files previously uploaded via the `Upload()` method.
2. Reset all associated server state (e.g., names, paths, etc.).
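Continuing the same hypothetical bookkeeping, Purge could be sketched as:
```python
import os

def purge(uploads):
    for csv_path, parquet_path in uploads:
        for path in (csv_path, parquet_path):
            if os.path.exists(path):
                os.remove(path)                 # delete both copies on disk
    uploads.clear()                             # reset the server-side state
```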
## Part 3: Multi-threading Client/Server
Because of the Global Interpreter Lock (GIL), the commonly used CPython interpreter does not execute threads in parallel. However, multi-threading can still boost the performance of our small system (why?). In Part 3, you are required to add threading support to `client.py`, then `server.py`.
More specifically, you will need to manually create *N* threads for `client.py` (using the thread-management primitives that come with the `threading` module) to concurrently process the provided `workload.txt`. For example, each worker thread may repeatedly fetch one command line from `workload.txt` and process it. You can load all command strings into a list, then provide thread-safe access to it from all launched threads (how?). A sketch of one such arrangement follows.
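This sketch (one possible design, not the required one) shares the command list plus an index behind a lock and reuses the hypothetical `run_command()` dispatcher sketched earlier:
```python
import threading

commands = []                 # filled by read_workload_file("workload.txt")
lock = threading.Lock()
pos = 0                       # index of the next command to execute

def worker():
    global pos
    while True:
        with lock:            # thread-safe access to the shared list/index
            if pos >= len(commands):
                return
            line = commands[pos]
            pos += 1
        run_command(line)     # issue the actual gRPC call outside the lock

N = 8                          # number of worker threads (varied later by benchmark.py)
threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```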
**HINT:** Before moving on to the `server`, test your multi-threaded client by running it with a single thread.
Now, with concurrent requests sent from `client.py`, you must correspondingly protect your server from data races with `threading.Lock()`. Make sure only one thread at a time can modify the server state (e.g., lists of names or paths). Note that you don't need to explicitly create threads for `server.py`, as gRPC can do that for you. The following example creates a gRPC server backed by a thread pool with 8 threads:
```python
server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=8),
    options=[("grpc.so_reuseport", 0)],
)
```
**Requirement 1:** The server should properly acquire and then release the lock. A single global lock is sufficient. The lock must be released even if an exception is raised.
**Requirement 2:** The server *MUST NOT* hold the lock when reading or writing files. A thread should release the lock as soon as it is done accessing the shared data structure. How could this behavior affect performance?
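Putting both requirements together, the server-side pattern looks roughly like this sketch; the helper functions are hypothetical placeholders for your Upload/ColSum logic:
```python
import threading

lock = threading.Lock()
uploads = []                   # shared state: (csv_path, parquet_path) pairs

def upload(csv_bytes):
    # Slow work (parsing, writing the CSV and Parquet copies) happens WITHOUT the lock.
    csv_path, parquet_path = write_both_files(csv_bytes)    # hypothetical helper
    # The lock is held only while touching shared state; `with` guarantees it is
    # released even if an exception is raised inside the block.
    with lock:
        uploads.append((csv_path, parquet_path))

def col_sum(column, fmt):
    with lock:
        snapshot = list(uploads)       # copy the paths under the lock...
    # ...then read files and compute the sum with the lock released.
    return sum_over_files(snapshot, column, fmt)             # hypothetical helper
```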
## Part 4: Benchmarking the System
Congratulations, you have implemented a minimal multi-threaded data system! Let's write a small script to benchmark it at different scales (i.e., numbers of worker threads). Overall, the script is expected to perform the following tasks:
1. Run `client.py` multiple times with different threading parameters and record their execution times.
2. Plot the data to visualize the performance trend.
`benchmark.py` should collect 4 data points by running `client.py` with 1, 2, 4, and 8 thread(s). Wrap each `client.py` execution with a pair of timestamps, then calculate the elapsed time. Make sure you always reset the server before sending the `workload.txt`, by issuing a `Purge()` command through `client.py`:
```bash
python3 client.py purge
# leave some time for the reset to complete, e.g., time.sleep(3) in benchmark.py
# test follows...
```
**HINT 1:** You may also want `benchmark.py` to wait a few seconds for the `server` to be ready to accept client RPC requests.
**HINT 2:** There are multiple tools to launch a Python program from within another. Examples are [`os.system()`](https://docs.python.org/3/library/os.html#os.system) and [`subprocess.run`](https://docs.python.org/3/library/subprocess.html#subprocess.run).
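A minimal sketch of the benchmark loop using `subprocess.run` is shown below; it assumes `client.py` accepts the workload file and a thread count as command-line arguments, which is a design choice left to you:
```python
import subprocess
import time

results = {}
for n in (1, 2, 4, 8):
    subprocess.run(["python3", "client.py", "purge"], check=True)   # reset the server
    time.sleep(3)                                                   # let the reset settle
    start = time.time()
    subprocess.run(["python3", "client.py", "workload.txt", str(n)], check=True)
    results[n] = time.time() - start                                # seconds per run
print(results)    # e.g. {1: ..., 2: ..., 4: ..., 8: ...}
```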
Plot a simple line graph with the execution time acquired by the previous step. Save the figure to a file called `plot.png`. Your figure must include at least 4 data points as mentioned above. You will find an example plot `plot.png` in the project repo.
**HINT:** Feel free to use any tool to plot the figure. Below is a minimal example that plots a dictionary with 2 data points:
```python
import pandas as pd
data = {1: 100, 2: 200}
series = pd.Series(data)
ax = series.plot.line()
ax.get_figure().savefig("plot.png")
```
1. `Dockerfile.client` must launch `benchmark.py` **(NOT `client.py`)**. To achieve this, you need to copy both `client.py` and the driver `benchmark.py` into the image, as well as `workload.txt`, `purge.txt`, and the two sample input CSV files `file1.csv` and `file2.csv`.
**Requirement:** Do **NOT** submit the `venv` directory (e.g., use `.gitignore`).
## Grading