diff --git a/p4/README.md b/p4/README.md
index 5505a3b34c9ada9172a48594255d868ead753dc3..a454283f339083d477b638e1cfb140574debd17d 100644
--- a/p4/README.md
+++ b/p4/README.md
@@ -20,6 +20,15 @@ Before starting, please review the [general project directions](../projects.md).
 
 - Mar 6: Released `autobadger` for `p4` (`0.1.6`)
 
+- Mar 7:
+  - Minor updates to the p4 `README.md`.
+  - Updated `autobadger` to version `0.1.7`:
+    - Fixed exception handling, so Autobadger now prints error messages correctly.
+    - Expanded the expected file size range in test4 `test_Hdfs_size`.
+    - Made the error messages clearer.
+
 ## Introduction
 
 You'll need to deploy a system including 6 docker containers like this:
@@ -32,7 +41,7 @@ The data flow roughly follows this:
 
 We have provided the other components; all you need to do is complete the work within the gRPC server and its Dockerfile.
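+
+If you haven't written a gRPC server before, the sketch below shows the general shape of what your server container runs. Treat every name in it as an assumption for illustration: the real service name, method signatures, response message types, and port come from the provided `lender.proto` and `docker-compose.yml`.
+
+```python
+# server.py -- minimal skeleton of the gRPC server (illustrative only).
+from concurrent import futures
+import grpc
+
+# Modules generated from lender.proto by grpc_tools.protoc:
+import lender_pb2
+import lender_pb2_grpc
+
+class LenderServer(lender_pb2_grpc.LenderServicer):  # "Lender" is a hypothetical service name
+    def DbToHdfs(self, request, context):
+        # TODO (Part 1): read from the SQL Server, join + filter, upload Parquet to HDFS,
+        # then return the response message type defined in lender.proto.
+        ...
+
+def serve():
+    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
+    lender_pb2_grpc.add_LenderServicer_to_server(LenderServer(), server)
+    server.add_insecure_port("[::]:5000")  # port is an assumption; match your proto/compose setup
+    server.start()
+    server.wait_for_termination()
+
+if __name__ == "__main__":
+    serve()
+```
+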
 ### Client
-This project will use `docker exec -it` to run the client on the gRPC server's container. Usage of `client.py` is as follows:
+This project will use `docker exec` to run the client on the gRPC server's container. Usage of `client.py` is as follows:
 ```
 #Inside the server container
 python3 client.py DbToHdfs
@@ -71,7 +80,7 @@ export PROJECT=p4
 
 **Hint 2:** Think about whether there is any .sh script that will help you quickly test code changes.  For example, you may want it to rebuild your Dockerfiles, cleanup an old Compose cluster, and deploy a new cluster.
 
-**Hint 3:** If you're low on disk space, consider running `docker system prune -a --volumes -f`
+**Hint 3:** If you're low on disk space, consider running `docker system prune --volumes -f`
 
 ## Part 1: `DbToHdfs` gRPC Call
 
@@ -99,15 +108,17 @@ In this part, your task is to implement the `DbToHdfs` gRPC call (you can find t
 3. Filter all rows where `loan_amount` is **greater than 30,000** and **less than 800,000**. After filtering, this table should have only **426716** rows.
 4. Upload the generated table to `/hdma-wi-2021.parquet` in the HDFS, with **2x** replication and a **1-MB** block size, using PyArrow (https://arrow.apache.org/docs/python/generated/pyarrow.fs.HadoopFileSystem.html).
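+
+Here is a rough sketch of how the upload in step 4 might look with PyArrow. It assumes the joined and filtered rows are already in a pandas DataFrame named `df` (that name and the surrounding setup are assumptions); the `nn:9000` address matches the expected output below.
+
+```python
+# Sketch: write the table to HDFS with 2x replication and a 1-MB block size.
+import pyarrow as pa
+import pyarrow.parquet as pq
+from pyarrow import fs
+
+# Replication and block size are set when the HadoopFileSystem is created.
+hdfs = fs.HadoopFileSystem("nn", 9000, replication=2, default_block_size=1024*1024)
+
+table = pa.Table.from_pandas(df)  # df: the joined/filtered result (assumption)
+with hdfs.open_output_stream("/hdma-wi-2021.parquet") as f:
+    pq.write_table(table, f)
+```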
 
-To check whether the upload was correct, you can use `docker exec -it` to enter the gRPC server's container and use HDFS command `hdfs dfs -du -h <path>`to see the file size. The expected result is:
+To check whether the upload was correct, you can use `docker exec -it <container_name> bash` to enter the gRPC server's container and use the HDFS command `hdfs dfs -du -h <path>` to see the file size. The expected result should look like:
 
 ```
 14.4 M   28.9 M  hdfs://nn:9000/hdma-wi-2021.parquet
 ```
+Note: Your file size might differ slightly from this.
+>That's because when we join two tables, rows from one table get matched with rows in the other, but the order of the output rows is not guaranteed. If we have the same rows in a different order, the compressibility of snappy (used by Parquet by default) will vary, because it is based on compression windows, and there may be more or less redundancy in a window depending on row ordering.
 
 **Hint 1:** We used similar tables in lecture: https://git.doit.wisc.edu/cdis/cs/courses/cs544/s25/main/-/tree/main/lec/15-sql
 
-**Hint 2:**  To get more familiar with these tables, you can use SQL queries to print the table schema or retrieve sample data. A convenient way to do this is to use `docker exec -it` to enter the SQL Server, then run mysql client `mysql -p CS544` to access the SQL Server and then perform queries.
+**Hint 2:** To get more familiar with these tables, you can use SQL queries to print the table schema or retrieve sample data. A convenient way to do this is to use `docker exec -it <container name> bash` to enter the SQL Server container, then run the mysql client (`mysql -p CS544`) to connect and perform queries.
 
 **Hint 3:** After `docker compose up`, the SQL Server needs some time to load the data before it is ready. Therefore, you need to wait for a while, or preferably, add a retry mechanism for the SQL connection.
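+
+For that retry mechanism, something like the following is one option, a minimal sketch assuming the `mysql-connector-python` package and placeholder connection details (the actual host, user, and password come from the provided `docker-compose.yml`):
+
+```python
+# Sketch: retry the SQL connection until the server finishes loading the data.
+import time
+import mysql.connector  # assumption: mysql-connector-python is installed in your image
+
+def connect_with_retry(retries=30, delay=2):
+    for _ in range(retries):
+        try:
+            return mysql.connector.connect(
+                host="mysql",      # placeholder: use the service name from docker-compose.yml
+                user="root",       # placeholder credentials
+                password="abc",    # placeholder credentials
+                database="CS544",
+            )
+        except mysql.connector.Error:
+            time.sleep(delay)      # SQL Server not ready yet; wait and try again
+    raise Exception("could not connect to the SQL Server")
+```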
 
@@ -197,9 +208,9 @@ docker compose up -d
 Then run the client like this:
 
 ```
-docker exec -it p4-server-1 python3 /client.py DbToHdfs
-docker exec -it p4-server-1 python3 /client.py BlockLocations -f /hdma-wi-2021.parquet
-docker exec -it p4-server-1 python3 /client.py CalcAvgLoan -c 55001
+docker exec p4-server-1 python3 /client.py DbToHdfs
+docker exec p4-server-1 python3 /client.py BlockLocations -f /hdma-wi-2021.parquet
+docker exec p4-server-1 python3 /client.py CalcAvgLoan -c 55001
 ```
 
 Note that we will copy in the provided files (docker-compose.yml, client.py, lender.proto, hdma-wi-2021.sql.gz, etc.), overwriting anything you might have changed. Please do NOT push hdma-wi-2021.sql.gz to your repo because it is large, and we want to keep the repos small.
@@ -208,4 +219,11 @@ Please make sure you have `client.py` copied into the p4-server image. We will r
 
 ## Tester
 
-Not released yet.
+Please make sure your installed `autobadger` is version `0.1.7`. You can print the version with:
+
+```bash
+autobadger --info
+```
+
+See [projects.md](https://git.doit.wisc.edu/cdis/cs/courses/cs544/s25/main/-/blob/main/projects.md#testing) for more information.
+