Commit 6f33276 (1 parent: 39b9392)

Port to spago from bower/pulp (#8)

* Convert tests from bower to spago
* Port to spago from bower/pulp
* Upgrade to PureScript 0.13.8
* Introduce spago and get it working in a sandboxed environment
* Remove bower/pulp
* Documentation
* Removed colorized output (will reintroduce later)
* Refactor Dockerfile and bin/* scripts
* Slim down docker build context: prevent sending unrelated files to the
  docker daemon as part of the build context.

43 files changed (+1749, -4405 lines)

.dockerignore (+10)

@@ -1,3 +1,13 @@
+.dockerignore
 .git/
 .github/
+Dockerfile
+examples/
+tap2json/
 tests/
+# The following pre-compiled sub-directories are created during the
+# install/build step inside the container (see Dockerfile). We don't want any
+# leftovers from the local file-system here.
+pre-compiled/.spago
+pre-compiled/node_modules
+pre-compiled/output

.gitignore (-6)

@@ -1,9 +1,3 @@
 *~
 .*.swp
-*.cabal
-stack.yaml
-tests/*/node_modules
-tests/*/bower_components
-tests/*/.pulp-cache/
-tests/*/output/
 tests/*/results.json

Dockerfile (+17, -19)

@@ -1,23 +1,21 @@
 FROM node:16-buster-slim
 
-RUN apt-get update && \
-    apt-get install -y git jq libncurses5 && \
-    apt-get purge --auto-remove -y && \
-    apt-get clean && \
-    rm -rf /var/lib/apt/lists/*
+RUN apt-get update \
+  && apt-get install -y --no-install-recommends \
+    ca-certificates=20200601~deb10u2 \
+    git=1:2.20.1-2+deb10u3 \
+    jq=1.5+dfsg-2+b1 \
+    libncurses5=6.1+20181013-2+deb10u2 \
+  && apt-get purge --auto-remove -y \
+  && apt-get clean \
+  && rm -rf /var/lib/apt/lists/*
 
-WORKDIR /opt/test-runner
+# Pre-compile exercise dependencies
+WORKDIR /opt/test-runner/pre-compiled
+COPY pre-compiled .
+RUN npm install && npx spago install && npx spago build --deps-only
 
-ENV PATH="/opt/test-runner/node_modules/.bin:$PATH"
-
-COPY pre-compiled/package.json pre-compiled/package-lock.json ./
-RUN npm install
-
-COPY pre-compiled/bower.json .
-RUN bower install --allow-root
-
-COPY pre-compiled/ .
-RUN pulp build
-
-COPY . .
-ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
+# Setup bin directory
+WORKDIR /opt/test-runner/bin
+COPY bin/run.sh bin/run-tests.sh ./
+ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
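The new Dockerfile pins every apt package to an exact Debian version (e.g. `git=1:2.20.1-2+deb10u3`), so upstream updates can't silently change the image. A minimal sketch of how such pins could be checked against installed versions, using illustrative package names and a hypothetical `check_pin` helper (not part of this repository):

```shell
#!/usr/bin/env bash
# Sketch: compare installed Debian package versions against pinned ones, in
# the spirit of the `name=version` pairs the Dockerfile passes to apt-get.
# The versions below are copied from the diff; the helper is hypothetical.
set -u

# pinned[name]=version, mirroring the Dockerfile's pins
declare -A pinned=(
  [git]="1:2.20.1-2+deb10u3"
  [jq]="1.5+dfsg-2+b1"
)

check_pin() {
  # Succeed only when the installed version matches the pin exactly.
  local name=$1 installed=$2
  [ "${installed}" = "${pinned[$name]}" ]
}

check_pin git "1:2.20.1-2+deb10u3" && echo "git: ok"
check_pin jq "1.6" || echo "jq: version drift"
```

In a real image one would feed `check_pin` from `dpkg-query --showformat`, but exact pins plus `--no-install-recommends` already keep the build reproducible and small.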

README.md (+20, -1)

@@ -3,7 +3,7 @@
 The Docker image for automatically run tests on PureScript solutions submitted
 to [exercism][web-exercism].
 
-This repository contains the Java test runner, which implements the
+This repository contains the PureScript test runner, which implements the
 [test runner interface][test-runner-interface].
 
 
@@ -15,3 +15,22 @@ To run a solution's test in the Docker container, do the following:
 
 [test-runner-interface]: https://github.com/exercism/automated-tests/blob/master/docs/interface.md
 [web-exercism]: https://exercism.io/
+
+
+## Design Goal and Implementation
+
+Due to the sandboxed environment we need to prepare everything we need in
+advance. All the PureScript packages that may be used for a solution are
+downloaded and pre-compiled. To make this happen we've set up a basic spago
+project under `./pre-compiled`. Note that the package-set in
+`packages.dhall` must correspond with the one used in the exercises
+repository (exercism/purescript). This directory is copied into the Docker
+image and from there all dependencies are installed and compiled. All the
+necessary bits are then available to be used by `bin/run.sh` to set up a spago
+project to build the submitted solution.
+
+The `bin/run.sh` script pieces together a spago project to build and test
+the submitted solution. The project is built under `/tmp/build`, which is
+mounted as a `tmpfs` to provide write access. A `tmpfs` is also
+speedier than reading from or writing to a `bind` mount. See `docs/spago.md`
+for more details on running spago in a sandbox.
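A key detail of the pre-compilation scheme described above (and in `bin/run.sh`) is that the pre-compiled `output/` directory must be copied with timestamps preserved, or `purs` invalidates its cache. A minimal sketch of the difference between `cp -R -p` and a plain `cp -R`, using throwaway temp directories rather than the real `/opt/test-runner` layout:

```shell
#!/usr/bin/env bash
# Sketch: `cp -R -p` keeps modification times, so the compiler's cache check
# still sees the original build timestamps; a plain `cp -R` resets them to
# "now", which would force a full rebuild. Paths here are temp directories.
set -u

src=$(mktemp -d)
dst=$(mktemp -d)

# Give the "pre-compiled" artefact an old modification time (BSD fallback).
touch -d '2020-01-01 00:00:00' "${src}/Module.js" 2>/dev/null \
  || touch -t 202001010000 "${src}/Module.js"

cp -R -p "${src}/." "${dst}/preserved"   # timestamps preserved
cp -R "${src}/." "${dst}/fresh"          # timestamps reset to "now"

[ ! "${dst}/preserved/Module.js" -nt "${src}/Module.js" ] \
  && echo "preserved copy: cache would stay valid"
[ "${dst}/fresh/Module.js" -nt "${src}/Module.js" ] \
  && echo "fresh copy: cache would be invalidated"
```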

bin/run-in-docker.sh (+14, -10)

@@ -1,4 +1,4 @@
-#!/usr/bin/env sh
+#!/usr/bin/env bash
 
 # Synopsis:
 # Run the test runner on a solution using the test runner Docker image.
@@ -11,20 +11,24 @@
 
 # Output:
 # Writes the test results to a results.json file in the passed-in output directory.
-# The test results are formatted according to the specifications at https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md
+# The test results are formatted according to the specifications at
+# https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md
 
 # Example:
 # ./bin/run-in-docker.sh two-fer /absolute/path/to/two-fer/solution/folder/ /absolute/path/to/output/directory/
 
+set -o pipefail
+set -u
+
 # If any required arguments is missing, print the usage and exit
-if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
+if [ $# != 3 ]; then
   echo "usage: ./bin/run-in-docker.sh exercise-slug /absolute/path/to/solution/folder/ /absolute/path/to/output/directory/"
   exit 1
 fi
 
-slug="$1"
-input_dir="${2%/}"
-output_dir="${3%/}"
+slug=${1}
+input_dir=${2}
+output_dir=${3}
 
 # Create the output directory if it doesn't exist
 mkdir -p "${output_dir}"
@@ -36,7 +40,7 @@ docker build --rm -t exercism/test-runner .
 docker run \
   --read-only \
   --network none \
-  --mount type=bind,src="${input_dir}",dst=/solution \
-  --mount type=bind,src="${output_dir}",dst=/output \
-  --mount type=tmpfs,dst=/tmp \
-  exercism/test-runner "${slug}" /solution /output
+  --mount type=bind,source="${input_dir}",destination=/solution \
+  --mount type=bind,source="${output_dir}",destination=/output \
+  --mount type=tmpfs,destination=/tmp \
+  exercism/test-runner "${slug}" /solution /output
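The diff above replaces the per-argument `-z` tests with a single `$#` count check. The difference is that counting arguments also rejects surplus arguments, which the old test silently accepted. A sketch with a hypothetical `require_three` helper (not a function in this repository):

```shell
#!/usr/bin/env bash
# Sketch: argument validation via `$#`, as in the updated scripts.
# `require_three` is an illustrative stand-in for the scripts' inline check.
set -u

require_three() {
  if [ $# != 3 ]; then
    echo "usage: require_three slug /input/dir /output/dir" >&2
    return 1
  fi
  echo "ok: $1"
}

require_three two-fer /in /out                      # → ok: two-fer
require_three two-fer /in || echo "rejected: too few arguments"
require_three two-fer /in /out extra || echo "rejected: too many arguments"
```

Together with `set -u`, this also avoids the old behaviour where referencing `$3` with fewer than three arguments expanded to an empty string under plain `sh`.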

bin/run-solutions-in-docker.sh (+36, new file)

@@ -0,0 +1,36 @@
+#!/usr/bin/env bash
+
+# Synopsis:
+# Test the test runner Docker image by running it against a predefined set of
+# solutions with an expected output.
+# The test runner Docker image is built automatically.
+
+# Output:
+# Outputs the diff of the expected test results against the actual test results
+# generated by the test runner Docker image.
+
+# Example:
+# ./bin/run-solutions-in-docker.sh /path/to/exercises
+
+set -o pipefail
+set -u
+
+if [ $# != 1 ]; then
+  echo "Usage ${BASH_SOURCE[0]} /path/to/exercises"
+  exit 1
+fi
+
+base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)
+exercises_dir="${1%/}"
+
+# Build the Docker image
+docker build --rm -t exercism/test-runner "${base_dir}"
+
+for config in "${exercises_dir}"/*/*/.solution.dhall; do
+  exercise_dir=$(dirname "${config}")
+  slug=$(basename "${exercise_dir}")
+
+  echo "Working in ${exercise_dir}..."
+
+  "${base_dir}/bin/run-in-docker.sh" "${slug}" "${exercise_dir}" /tmp
+done
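The new script discovers exercises by globbing for a marker file (`.solution.dhall`) and deriving the slug from its parent directory via `dirname`/`basename`. A sketch of that discovery step against a throwaway directory tree standing in for a real exercises checkout:

```shell
#!/usr/bin/env bash
# Sketch: derive exercise slugs from marker-file paths, as the loop in
# bin/run-solutions-in-docker.sh does. The directory tree is fabricated.
set -u

exercises_dir=$(mktemp -d)
mkdir -p "${exercises_dir}/practice/two-fer"
touch "${exercises_dir}/practice/two-fer/.solution.dhall"

for config in "${exercises_dir}"/*/*/.solution.dhall; do
  exercise_dir=$(dirname "${config}")   # .../practice/two-fer
  slug=$(basename "${exercise_dir}")    # two-fer
  echo "slug: ${slug}"                  # → slug: two-fer
done
```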

bin/run-tests-in-docker.sh (+8, -3)

@@ -1,4 +1,4 @@
-#!/usr/bin/env sh
+#!/usr/bin/env bash
 
 # Synopsis:
 # Test the test runner Docker image by running it against a predefined set of
@@ -12,15 +12,20 @@
 
 # Example:
 # ./bin/run-tests-in-docker.sh
 
+set -o pipefail
+set -u
+
+base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)
+
 # Build the Docker image
 docker build --rm -t exercism/test-runner .
 
 # Run the Docker image using the settings mimicking the production environment
 docker run \
   --network none \
   --read-only \
-  --mount type=bind,src="${PWD}/tests",dst=/opt/test-runner/tests \
-  --mount type=tmpfs,dst=/tmp \
+  --mount type=bind,source="${base_dir}/tests",destination=/opt/test-runner/tests \
+  --mount type=tmpfs,destination=/tmp \
   --workdir /opt/test-runner \
   --entrypoint /opt/test-runner/bin/run-tests.sh \
   exercism/test-runner
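Several of these scripts now open with the same `base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)` idiom, replacing the earlier reliance on `${PWD}`: the repository root is resolved relative to the script's own location, so the scripts work from any working directory. A sketch against a throwaway directory standing in for the checkout:

```shell
#!/usr/bin/env bash
# Sketch: resolve a repository root relative to a script's own path.
# "fake-script.sh" and the temp tree are fabricated for illustration.
set -u

repo=$(mktemp -d)
mkdir -p "${repo}/bin" "${repo}/tests"
script="${repo}/bin/fake-script.sh"
touch "${script}"

# ${script%/*} strips the file name, leaving the script's directory;
# `cd ..; pwd` then yields the absolute repository root.
base_dir=$(builtin cd "${script%/*}/.." || exit; pwd)

echo "resolved: ${base_dir}"
```

In the real scripts `${BASH_SOURCE}` plays the role of `${script}`, which is why the shebang change from `sh` to `bash` matters: `BASH_SOURCE` is a bash-only variable.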

bin/run-tests.sh (+14, -17)

@@ -1,4 +1,4 @@
-#!/usr/bin/env sh
+#!/usr/bin/env bash
 
 # Synopsis:
 # Test the test runner by running it against a predefined set of solutions
@@ -11,28 +11,25 @@
 
 # Example:
 # ./bin/run-tests.sh
 
+set -o pipefail
+set -u
+
 exit_code=0
 
-# Iterate over all test directories
-for test_dir in tests/*; do
-  test_dir_name=$(basename "${test_dir}")
-  test_dir_path=$(realpath "${test_dir}")
-  results_file_path="${test_dir_path}/results.json"
-  expected_results_file_path="${test_dir_path}/expected_results.json"
+base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)
 
-  bin/run.sh "${test_dir_name}" "${test_dir_path}" "${test_dir_path}"
+# Iterate over all test Spago projects
+for config in "${base_dir}"/tests/*/spago.dhall; do
+  exercise_dir=$(dirname "${config}")
+  slug=$(basename "${exercise_dir}")
+  expected_results_file="${exercise_dir}/expected_results.json"
+  actual_results_file="${exercise_dir}/results.json"
 
-  # Normalize the results file
-  sed -i -E \
-    -e 's/Time:.*[0-9]+\.[0-9]+s//g' \
-    -e 's/ *\([0-9]+ms\)//g' \
-    -e "s~${test_dir_path}~/solution~g" \
-    "${results_file_path}"
+  bin/run.sh "${slug}" "${exercise_dir}" "${exercise_dir}"
 
-  echo "${test_dir_name}: comparing results.json to expected_results.json"
-  diff "${results_file_path}" "${expected_results_file_path}"
+  echo "${slug}: comparing results.json to expected_results.json"
 
-  if [ $? -ne 0 ]; then
+  if ! diff -u "${actual_results_file}" "${expected_results_file}"; then
     exit_code=1
   fi
 done
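The comparison above now tests `diff` directly with `if ! diff -u ...` instead of running `diff` and inspecting `$?` on a later line, a more robust and idiomatic pattern since nothing can slip in between the command and the check. A sketch with fabricated result files:

```shell
#!/usr/bin/env bash
# Sketch: accumulate a failure exit code from a diff of expected vs actual
# results, as bin/run-tests.sh now does. File contents are fabricated.
set -u

expected=$(mktemp)
actual=$(mktemp)
echo '{"status": "pass"}' > "${expected}"
echo '{"status": "fail"}' > "${actual}"

exit_code=0
if ! diff -u "${actual}" "${expected}" > /dev/null; then
  exit_code=1
fi
echo "exit_code=${exit_code}"   # → exit_code=1
```

Recording the failure in `exit_code` rather than exiting immediately lets the loop report on every test project before the script finally fails.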

bin/run.sh (+67, -31)

@@ -10,60 +10,96 @@
 
 # Output:
 # Writes the test results to a results.json file in the passed-in output directory.
-# The test results are formatted according to the specifications at https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md
+# The test results are formatted according to the specifications at
+# https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md
 
 # Example:
 # ./bin/run.sh two-fer /absolute/path/to/two-fer/solution/folder/ /absolute/path/to/output/directory/
 
-# If any required arguments is missing, print the usage and exit
-if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
+set -o pipefail
+set -u
+
+# If required arguments are missing, print the usage and exit
+if [ $# != 3 ]; then
   echo "usage: ./bin/run.sh exercise-slug /absolute/path/to/two-fer/solution/folder/ /absolute/path/to/output/directory/"
   exit 1
 fi
 
-slug="$1"
-input_dir="${2%/}"
-output_dir="${3%/}"
-root_dir=$(realpath $(dirname "$0")/..)
+# Establish the base directory so we can build fully-qualified directories.
+base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)
+
+slug=${1}
+input_dir=${2}
+output_dir=${3}
 results_file="${output_dir}/results.json"
 
+# Under Docker the build directory is mounted as a read-write tmpfs so that:
+# - We can work with a write-able file-system
+# - We avoid copying files between the docker host and client, giving a nice speedup.
+build_dir=/tmp/build
+cache_dir=${build_dir}/cache
+
+if [ ! -d "${input_dir}" ]; then
+  echo "No such directory: ${input_dir}"
+  exit 1
+fi
+
 # Create the output directory if it doesn't exist
 mkdir -p "${output_dir}"
 
-echo "${slug}: testing..."
+# Prepare build directory
+if [ -d "${build_dir}" ]; then
+  rm -rf ${build_dir}
+fi
 
-pushd "${input_dir}" > /dev/null
+mkdir -p ${build_dir}
+pushd "${build_dir}" > /dev/null || exit
 
-ln -s "${root_dir}/node_modules"
-ln -s "${root_dir}/bower_components"
-cp -r "${root_dir}/output" . # We can't symlink this as pulp needs to write to it
+# Put the basic spago project in place
+cp "${input_dir}"/*.dhall .
+ln -s "${input_dir}"/src .
+ln -s "${input_dir}"/test .
+
+# Setup cache directory. We require a writable dhall cache because dhall will
+# attempt to fetch the upstream package-set definition.
+mkdir ${cache_dir}
+cp -R "${HOME}"/.cache/dhall ${cache_dir}
+cp -R "${HOME}"/.cache/dhall-haskell ${cache_dir}
+
+# Setup our prepared node setup.
+ln -s "${base_dir}/pre-compiled/node_modules" .
+
+# The timestamps of the `output/` directory must be preserved or else the
+# PureScript compiler (`purs`) will invalidate the cache and force a rebuild,
+# defeating pre-compiling altogether (hence the usage of the `cp` `-p` flag).
+cp -R -p "${base_dir}/pre-compiled/output" .
+cp -R "${base_dir}/pre-compiled/.spago" .
+
+echo "Build and test ${slug} in ${build_dir}..."
 
 # Run the tests for the provided implementation file and redirect stdout and
-# stderr to capture it
-test_output=$(pulp test 2>&1)
+# stderr to capture it. We do our best to minimize the output, emitting only
+# compiler errors or unit test output, as this is scrubbed and presented to the
+# student. In addition spago will try to write to ~/cache/.spago and will fail
+# on a read-only mount; thus we skip the global cache and request to not
+# install packages.
+export XDG_CACHE_HOME=${cache_dir}
+spago_output=$(npx spago --global-cache skip --no-psa test --no-install 2>&1)
 exit_code=$?
 
-popd > /dev/null
+popd > /dev/null || exit
 
-# Write the results.json file based on the exit code of the command that was
-# just executed that tested the implementation file
+# Write the results.json file based on the exit code of the command that was
+# just executed that tested the implementation file.
 if [ $exit_code -eq 0 ]; then
-  jq -n '{version: 1, status: "pass"}' > ${results_file}
+  jq -n '{version: 1, status: "pass"}' > "${results_file}"
 else
-  # Sanitize the output
-  sanitized_test_output=$(echo "${test_output}" | sed -E \
-    -e '/^\* Building project/d' \
     -e '/^Compiling/d' \
-    -e '/at .*(node:internal|.*\/opt\/test-runner\/.*\.js)/d')
-
-  # Manually add colors to the output to help scanning the output for errors
-  colorized_test_output=$(echo "${sanitized_test_output}" | \
-    GREP_COLOR='01;31' grep --color=always -E -e '(Error found:|Error:|\* ERROR:|.*Failed:).*$|$' | \
-    GREP_COLOR='01;32' grep --color=always -E -e '.*Passed:.*$|$')
-
-  printf "${colorized_test_output}"
+    -e '/at.*:[[:digit:]]+:[[:digit:]]+\)?/d')
 
-  jq -n --arg output "${colorized_test_output}" '{version: 1, status: "fail", message: $output}' > ${results_file}
+  jq --null-input --arg output "${sanitized_spago_output}" '{version: 1, status: "fail", message: $output}' > "${results_file}"
 fi
 
-echo "${slug}: done"
+echo "Done"
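On failure, `bin/run.sh` scrubs the captured spago output with `sed` before writing it into `results.json`: compiler progress lines (`Compiling ...`) and stack-trace locations (`at ... file:line:col`) are dropped so the student sees only the meaningful error. A sketch of that sanitization step, with an invented sample of spago output:

```shell
#!/usr/bin/env bash
# Sketch: the sed scrubbing used by bin/run.sh before results.json is
# written. The sample output below is fabricated for illustration.
set -u

spago_output='Compiling Test.Main
Error found: could not match types
at Test.Main (/tmp/build/test/Main.purs:10:5)'

sanitized=$(echo "${spago_output}" | sed -E \
  -e '/^Compiling/d' \
  -e '/at.*:[[:digit:]]+:[[:digit:]]+\)?/d')

echo "${sanitized}"   # → Error found: could not match types
```

The scrubbed text is then embedded as the `message` field via `jq --arg`, which handles the JSON string escaping.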
