hptp
High-level approach
For this assignment we developed a new protocol designed to transfer a file efficiently over an unreliable connection. The protocol divides the input file into fixed-length "segments", numbers them sequentially, and transmits them to the receiver, repeatedly retransmitting any segment it is not confident the receiver has obtained. The receiver periodically sends batched acknowledgements that list the segments it has received. Segments are compressed before transmission to reduce transfer time, and the sending and acknowledgement rates are continuously tuned to the observed network conditions.
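To make the segment scheme concrete, here is a minimal sketch of one possible wire encoding: a 4-byte big-endian sequence number, a 2-byte payload length, then the payload. The function names and the exact layout are hypothetical and may differ from hptp's actual format.

```rust
// Hypothetical layout: [seq: u32 BE][len: u16 BE][payload: len bytes].
// This is an illustrative sketch, not hptp's real wire format.

fn encode_segment(seq: u32, payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(6 + payload.len());
    buf.extend_from_slice(&seq.to_be_bytes());
    buf.extend_from_slice(&(payload.len() as u16).to_be_bytes());
    buf.extend_from_slice(payload);
    buf
}

// Returns None if the buffer is truncated or malformed.
fn decode_segment(buf: &[u8]) -> Option<(u32, &[u8])> {
    if buf.len() < 6 {
        return None;
    }
    let seq = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]);
    let len = u16::from_be_bytes([buf[4], buf[5]]) as usize;
    if buf.len() < 6 + len {
        return None;
    }
    Some((seq, &buf[6..6 + len]))
}

fn main() {
    let wire = encode_segment(7, b"hello");
    let (seq, payload) = decode_segment(&wire).expect("round-trip decode");
    assert_eq!(seq, 7);
    assert_eq!(payload, b"hello");
    // Truncated input is rejected rather than mis-parsed.
    assert!(decode_segment(&wire[..3]).is_none());
}
```

A length prefix like this lets the receiver validate a datagram before trusting its contents, which matters on a lossy link.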
We faced many challenges implementing this protocol, such as the following:
- Learning how to use cutting-edge event-loop libraries in Rust
- Selecting a retransmission algorithm that would be resistant to drops and duplicates
- Choosing constants so that the transmission rate adapts automatically to network latency and bandwidth
- Ensuring that both connections would close once the file had been successfully transferred
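The drop/duplicate-resistance challenge above can be sketched with a sender that tracks the set of unacknowledged sequence numbers. The names here are hypothetical, not hptp's actual API: removal from the set is idempotent, so duplicate acks are harmless, and a dropped ack only delays retransmission rather than breaking it.

```rust
use std::collections::BTreeSet;

// Illustrative sender-side state (hypothetical names): every segment
// starts out unacknowledged; batched acks shrink the set.
struct SenderState {
    unacked: BTreeSet<u32>,
}

impl SenderState {
    fn new(total_segments: u32) -> Self {
        SenderState { unacked: (0..total_segments).collect() }
    }

    // Apply one batched acknowledgement from the receiver.
    // Removing an already-acked segment is a no-op, so duplicate
    // and reordered acks cannot corrupt the state.
    fn apply_ack(&mut self, acked: &[u32]) {
        for seq in acked {
            self.unacked.remove(seq);
        }
    }

    // Segments still eligible for retransmission, in order.
    fn pending(&self) -> Vec<u32> {
        self.unacked.iter().copied().collect()
    }

    fn done(&self) -> bool {
        self.unacked.is_empty()
    }
}

fn main() {
    let mut s = SenderState::new(4);
    s.apply_ack(&[0, 2]);
    s.apply_ack(&[2, 3]); // duplicate ack for segment 2 is a no-op
    assert_eq!(s.pending(), vec![1]);
    assert!(!s.done());
    s.apply_ack(&[1]);
    assert!(s.done()); // a natural trigger for closing both connections
}
```

The `done()` check also illustrates the shutdown condition from the last bullet: both sides can tear down once every segment has been acknowledged.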
Various internal algorithms and data structures were tested using simple unit tests. For instance, we have unit tests to check that our packet encoding/decoding schemes work correctly.
The end-to-end behavior of the protocol, however, was tested by running the scripts provided to us, which automatically vary network conditions and transfer large files.