A mono-repo for a session type API code generation toolchain for modern web programming.
This project was originally built for the author's undergraduate Master's thesis at Imperial College London.
The following steps assume a Unix environment with Docker properly installed. Users on other platforms supported by Docker should be able to import the Docker image in a similar way.
$ git clone --recursive \
https://github.com/ansonmiu0214/TypeScript-Multiparty-Sessions.git
$ cd TypeScript-Multiparty-Sessions
$ docker-compose run --service-ports dev
This command gives you a terminal inside the container. To run the toolchain (e.g. to show the help text):
dev@dev:~$ codegen --help
- scribble-java: contains the Scribble toolchain for handling multiparty protocol descriptions; a dependency of our toolchain.
- codegen: contains the source code of our code generator, written in Python, which generates TypeScript code for implementing the provided multiparty protocol.
- protocols: contains various Scribble protocol descriptions, including those used in the case studies.
- case-studies: contains 3 case studies of implementing interactive web applications with our toolchain, namely Noughts and Crosses, Travel Agency, and Battleships.
- perf-benchmarks: contains the code to generate performance benchmarks, including an iPython notebook to visualise the benchmarks collected from an experiment run.
- scripts: contains various convenience scripts to run the toolchain and build the case studies.
- setup: contains scripts to set up the Docker container.
- web-sandbox: contains configuration files for web development, e.g. TypeScript configurations and NPM package.json files.
Refer to the helptext for detailed information:
$ codegen --help
We illustrate how to use our toolchain to generate TypeScript APIs.

$ codegen ~/protocols/TravelAgency.scr TravelAgency S \
node -o ~/case-studies/TravelAgency/src

This command reads as follows:

- Generate APIs for role S of the TravelAgency protocol specified in ~/protocols/TravelAgency.scr;
- Role S is implemented as a node (server-side) endpoint;
- Output the generated APIs under the path ~/case-studies/TravelAgency/src.
$ codegen ~/protocols/TravelAgency.scr TravelAgency A \
browser -s S -o ~/case-studies/TravelAgency/client/src

This command reads as follows:

- Generate APIs for role A of the TravelAgency protocol specified in ~/protocols/TravelAgency.scr;
- Role A is implemented as a browser endpoint, and assumes role S to be the server;
- Output the generated APIs under the path ~/case-studies/TravelAgency/client/src.
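To give a flavour of the generated APIs, below is a minimal hand-written TypeScript sketch of the session-typed style they follow, in which each protocol state is a distinct type so that the compiler rejects out-of-order communication. All names here (Query, Quote, AwaitingQuery, SendingQuote, Done) are illustrative assumptions for this README, not the actual generated identifiers; consult the output under ~/case-studies/TravelAgency/src for the real interface.

// Illustrative sketch only: the names and structure below are assumptions,
// hand-written for this README, not the actual generated code.
type Query = { destination: string };
type Quote = { price: number };

class AwaitingQuery {
  // Receiving a Query advances the session to the quoting state.
  receive(msg: Query): SendingQuote {
    console.log(`Received query for ${msg.destination}`);
    return new SendingQuote();
  }
}

class SendingQuote {
  // Sending a Quote completes this fragment of the protocol.
  send(msg: Quote): Done {
    console.log(`Sent quote for ${msg.price}`);
    return new Done();
  }
}

class Done {}

// The types enforce the protocol order:
const session: Done = new AwaitingQuery()
  .receive({ destination: 'Tokyo' })
  .send({ price: 100 });
// By contrast, new AwaitingQuery().send(...) would not type-check.

In the actual generated code, the states and message types are derived from the Scribble protocol, and the accompanying runtime manages the communication between browser and server endpoints.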
To run the end-to-end tests:
# Run from any directory
$ run_tests
The end-to-end tests verify that:
- the toolchain correctly parses the Scribble protocol specification files;
- the toolchain correctly generates TypeScript APIs; and
- the generated APIs can be successfully type-checked by the TypeScript Compiler.
The protocol specification files describing the multiparty communication are located in ~/codegen/tests/system/examples. The generated APIs are saved under ~/web-sandbox (a sandbox environment set up for the TypeScript Compiler) and are deleted when the tests finish.
Run the following to install dependencies for the pre-existing case studies:
$ setup_case-studies
We include three case studies of realistic web applications implemented using the generated APIs.
For example, to generate the APIs for the case study NoughtsAndCrosses:

# Run from any directory
$ build_noughts-and-crosses

Note that the identifier used in the build_ command converts the CamelCase name of the case study into a lower-case hyphenated string (for instance, TravelAgency becomes travel-agency).
To run the case study NoughtsAndCrosses:

$ cd ~/case-studies/NoughtsAndCrosses
$ npm start

and visit http://localhost:8080.
Other case studies currently available include:
- TravelAgency
- Battleships
Run the following to install dependencies for the performance benchmarks:
$ setup_benchmarks
We include a script to run the performance benchmarks on web applications built using the generated APIs, against a baseline implementation.
To run the performance benchmarks:
$ cd ~/perf-benchmarks
$ ./run_benchmark.sh
Note: If the terminal log gets stuck at "Loaded client page", open a web browser and access http://localhost:5000.
Customisation: You can customise the number of messages exchanged and the number of runs for each experiment. These parameters are represented in the run_benchmark.sh script by the -m and -r flags respectively.
For example, to set up two configurations -- running the benchmark with 100 round trips and 1000 round trips -- and run each configuration 100 times:
$ cd ~/perf-benchmarks
$ ./run_benchmark.sh -m 100 1000 -r 100
Running ./run_benchmark.sh will clear any existing logs.
To visualise the performance benchmarks, run:
$ cd ~/perf-benchmarks
$ jupyter notebook --ip=0.0.0.0
/* ...snip... */
To access the notebook, open this file in a browser:
/* ...snip... */
Or copy and paste one of these URLs:
http://dev:8888/?token=<token>
or http://127.0.0.1:8888/?token=<token>
Use a web browser to open the URL from the terminal output beginning with http://127.0.0.1:8888.
Open the Benchmark Visualisation.ipynb notebook.
Click on Kernel -> Restart & Run All from the top menu bar.
Note: If you change the message configuration (i.e. the -m flag), update the NUM_MSGS tuple located in the first cell of the notebook, as shown below:
# Update these variables if you wish to
# visualise other benchmarks.
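# NUM_MSGS should mirror the -m values passed to run_benchmark.sh,
# e.g. "-m 100 1000" corresponds to NUM_MSGS = (100, 1000).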
VARIANTS = ('bare', 'mpst')
NUM_MSGS = (100, 1000)
Consult the wiki for more documentation.