Port to spago from bower/pulp #8

Merged: 3 commits, May 28, 2021
10 changes: 10 additions & 0 deletions .dockerignore
@@ -1,3 +1,13 @@
.dockerignore
.git/
.github/
Dockerfile
examples/
tap2json/
tests/
# The following pre-compiled sub-directories are created during the
# install/build step inside the container (see Dockerfile). We don't want any
# leftovers from the local file-system here.
pre-compiled/.spago
pre-compiled/node_modules
pre-compiled/output
6 changes: 0 additions & 6 deletions .gitignore
@@ -1,9 +1,3 @@
*~
.*.swp
*.cabal
stack.yaml
tests/*/node_modules
tests/*/bower_components
tests/*/.pulp-cache/
tests/*/output/
tests/*/results.json
36 changes: 17 additions & 19 deletions Dockerfile
@@ -1,23 +1,21 @@
FROM node:16-buster-slim

RUN apt-get update && \
apt-get install -y git jq libncurses5 && \
apt-get purge --auto-remove -y && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates=20200601~deb10u2 \
git=1:2.20.1-2+deb10u3 \
jq=1.5+dfsg-2+b1 \
libncurses5=6.1+20181013-2+deb10u2 \
&& apt-get purge --auto-remove -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

WORKDIR /opt/test-runner
# Pre-compile exercise dependencies
WORKDIR /opt/test-runner/pre-compiled
COPY pre-compiled .
RUN npm install && npx spago install && npx spago build --deps-only

ENV PATH="/opt/test-runner/node_modules/.bin:$PATH"

COPY pre-compiled/package.json pre-compiled/package-lock.json ./
RUN npm install

COPY pre-compiled/bower.json .
RUN bower install --allow-root

COPY pre-compiled/ .
RUN pulp build

COPY . .
ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
# Setup bin directory
WORKDIR /opt/test-runner/bin
COPY bin/run.sh bin/run-tests.sh ./
ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
21 changes: 20 additions & 1 deletion README.md
@@ -3,7 +3,7 @@
The Docker image to automatically run tests on PureScript solutions submitted
to [exercism][web-exercism].

This repository contains the Java test runner, which implements the
This repository contains the PureScript test runner, which implements the
Review comment (Member): Whoops!

[test runner interface][test-runner-interface].


@@ -15,3 +15,22 @@ To run a solution's test in the Docker container, do the following:

[test-runner-interface]: https://github.com/exercism/automated-tests/blob/master/docs/interface.md
[web-exercism]: https://exercism.io/


## Design Goal and Implementation
Review comment (Member): Great to have this explanation!


Due to the sandboxed environment we need to prepare everything in advance. All
the PureScript packages that may be used by a solution are downloaded and
pre-compiled. To make this happen we've set up a basic spago project under
`./pre-compiled`. Note that the package-set in `packages.dhall` must correspond
to the one used in the exercises repository (exercism/purescript). This
directory is copied into the Docker image, and from there all dependencies are
installed and compiled. All the necessary bits are then available for
`bin/run.sh` to set up a spago project that builds the submitted solution.
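As a rough sketch, the pre-compile step performed inside `./pre-compiled` boils
down to the commands below (they mirror the `RUN` line in this PR's Dockerfile;
treating `spago` and `purs` as npm-managed binaries is an assumption of this
sketch):

```bash
cd pre-compiled
npm install                    # JavaScript deps, assumed to include the spago/purs binaries
npx spago install              # download every package listed in spago.dhall / packages.dhall
npx spago build --deps-only    # pre-compile the dependencies into ./output
```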

The `bin/run.sh` script pieces together a spago project to build and test the
submitted solution. The project is built under `/tmp/build`, which is mounted
as a `tmpfs` to provide write access (the container's root file-system is
read-only). A `tmpfs` is also faster than reading from or writing to a `bind`
mount. See `docs/spago.md` for more details on running spago in a sandbox.
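For illustration, a minimal sketch of such a sandboxed run (the image tag,
paths and slug are hypothetical; the authoritative flags live in
`bin/run-in-docker.sh` below):

```bash
docker run \
  --read-only \
  --network none \
  --mount type=bind,source="$PWD/two-fer",destination=/solution \
  --mount type=bind,source="$PWD/out",destination=/output \
  --mount type=tmpfs,destination=/tmp \
  exercism/test-runner two-fer /solution /output
```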
24 changes: 14 additions & 10 deletions bin/run-in-docker.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env sh
#!/usr/bin/env bash

# Synopsis:
# Run the test runner on a solution using the test runner Docker image.
@@ -11,20 +11,24 @@

# Output:
# Writes the test results to a results.json file in the passed-in output directory.
# The test results are formatted according to the specifications at https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md
# The test results are formatted according to the specifications at
# https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md

# Example:
# ./bin/run-in-docker.sh two-fer /absolute/path/to/two-fer/solution/folder/ /absolute/path/to/output/directory/

set -o pipefail
set -u

# If any required argument is missing, print the usage and exit
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
if [ $# != 3 ]; then
echo "usage: ./bin/run-in-docker.sh exercise-slug /absolute/path/to/solution/folder/ /absolute/path/to/output/directory/"
exit 1
fi

slug="$1"
input_dir="${2%/}"
output_dir="${3%/}"
slug=${1}
input_dir=${2}
output_dir=${3}

# Create the output directory if it doesn't exist
mkdir -p "${output_dir}"
@@ -36,7 +40,7 @@ docker build --rm -t exercism/test-runner .
docker run \
--read-only \
--network none \
--mount type=bind,src="${input_dir}",dst=/solution \
--mount type=bind,src="${output_dir}",dst=/output \
--mount type=tmpfs,dst=/tmp \
exercism/test-runner "${slug}" /solution /output
--mount type=bind,source="${input_dir}",destination=/solution \
--mount type=bind,source="${output_dir}",destination=/output \
--mount type=tmpfs,destination=/tmp \
Review comment on lines +43 to +45 (Member): I like the added explicitness!

exercism/test-runner "${slug}" /solution /output
36 changes: 36 additions & 0 deletions bin/run-solutions-in-docker.sh
@@ -0,0 +1,36 @@
#!/usr/bin/env bash

# Synopsis:
# Test the test runner Docker image by running it against a predefined set of
# solutions with an expected output.
# The test runner Docker image is built automatically.

# Output:
# Outputs the diff of the expected test results against the actual test results
# generated by the test runner Docker image.

# Example:
# ./bin/run-solutions-in-docker.sh /path/to/exercises

set -o pipefail
set -u

if [ $# != 1 ]; then
echo "Usage ${BASH_SOURCE[0]} /path/to/exercises"
exit 1
fi

base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)
exercises_dir="${1%/}"

# Build the Docker image
docker build --rm -t exercism/test-runner "${base_dir}"

for config in "${exercises_dir}"/*/*/.solution.dhall; do
exercise_dir=$(dirname "${config}")
slug=$(basename "${exercise_dir}")

echo "Working in ${exercise_dir}..."

"${base_dir}/bin/run-in-docker.sh" "${slug}" "${exercise_dir}" /tmp
done
11 changes: 8 additions & 3 deletions bin/run-tests-in-docker.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env sh
#!/usr/bin/env bash

# Synopsis:
# Test the test runner Docker image by running it against a predefined set of
@@ -12,15 +12,20 @@
# Example:
# ./bin/run-tests-in-docker.sh

set -o pipefail
set -u

base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)

# Build the Docker image
docker build --rm -t exercism/test-runner .

# Run the Docker image using the settings mimicking the production environment
docker run \
--network none \
--read-only \
--mount type=bind,src="${PWD}/tests",dst=/opt/test-runner/tests \
--mount type=tmpfs,dst=/tmp \
--mount type=bind,source="${base_dir}/tests",destination=/opt/test-runner/tests \
--mount type=tmpfs,destination=/tmp \
--workdir /opt/test-runner \
--entrypoint /opt/test-runner/bin/run-tests.sh \
exercism/test-runner
31 changes: 14 additions & 17 deletions bin/run-tests.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env sh
#!/usr/bin/env bash

# Synopsis:
# Test the test runner by running it against a predefined set of solutions
@@ -11,28 +11,25 @@
# Example:
# ./bin/run-tests.sh

set -o pipefail
set -u

exit_code=0

# Iterate over all test directories
for test_dir in tests/*; do
test_dir_name=$(basename "${test_dir}")
test_dir_path=$(realpath "${test_dir}")
results_file_path="${test_dir_path}/results.json"
expected_results_file_path="${test_dir_path}/expected_results.json"
base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)

bin/run.sh "${test_dir_name}" "${test_dir_path}" "${test_dir_path}"
# Iterate over all test Spago projects
for config in "${base_dir}"/tests/*/spago.dhall; do
exercise_dir=$(dirname "${config}")
slug=$(basename "${exercise_dir}")
expected_results_file="${exercise_dir}/expected_results.json"
actual_results_file="${exercise_dir}/results.json"

# Normalize the results file
sed -i -E \
-e 's/Time:.*[0-9]+\.[0-9]+s//g' \
-e 's/ *\([0-9]+ms\)//g' \
-e "s~${test_dir_path}~/solution~g" \
"${results_file_path}"
bin/run.sh "${slug}" "${exercise_dir}" "${exercise_dir}"

echo "${test_dir_name}: comparing results.json to expected_results.json"
diff "${results_file_path}" "${expected_results_file_path}"
echo "${slug}: comparing results.json to expected_results.json"

if [ $? -ne 0 ]; then
if ! diff -u "${actual_results_file}" "${expected_results_file}"; then
exit_code=1
fi
done
98 changes: 67 additions & 31 deletions bin/run.sh
@@ -10,60 +10,96 @@

# Output:
# Writes the test results to a results.json file in the passed-in output directory.
# The test results are formatted according to the specifications at https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md
# The test results are formatted according to the specifications at
# https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md

# Example:
# ./bin/run.sh two-fer /absolute/path/to/two-fer/solution/folder/ /absolute/path/to/output/directory/

# If any required arguments is missing, print the usage and exit
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
set -o pipefail
set -u

# If required arguments are missing, print the usage and exit
if [ $# != 3 ]; then
echo "usage: ./bin/run.sh exercise-slug /absolute/path/to/two-fer/solution/folder/ /absolute/path/to/output/directory/"
exit 1
fi

slug="$1"
input_dir="${2%/}"
output_dir="${3%/}"
root_dir=$(realpath $(dirname "$0")/..)
# Establish the base directory so we can build fully-qualified directories.
base_dir=$(builtin cd "${BASH_SOURCE%/*}/.." || exit; pwd)

slug=${1}
input_dir=${2}
output_dir=${3}
results_file="${output_dir}/results.json"

# Under Docker the build directory is mounted as a read-write tmpfs so that:
# - We can work with a writable file-system
# - We avoid copying files between the Docker host and the container, giving a nice speedup.
build_dir=/tmp/build
cache_dir=${build_dir}/cache

if [ ! -d "${input_dir}" ]; then
echo "No such directory: ${input_dir}"
exit 1
fi

# Create the output directory if it doesn't exist
mkdir -p "${output_dir}"

echo "${slug}: testing..."
# Prepare build directory
if [ -d "${build_dir}" ]; then
rm -rf ${build_dir}
fi

pushd "${input_dir}" > /dev/null
mkdir -p ${build_dir}
pushd "${build_dir}" > /dev/null || exit

ln -s "${root_dir}/node_modules"
ln -s "${root_dir}/bower_components"
cp -r "${root_dir}/output" . # We can't symlink this as pulp needs to write to it
# Put the basic spago project in place
cp "${input_dir}"/*.dhall .
ln -s "${input_dir}"/src .
ln -s "${input_dir}"/test .

# Set up the cache directory. We require a writable dhall cache because dhall
# will attempt to fetch the upstream package-set definition.
mkdir ${cache_dir}
cp -R "${HOME}"/.cache/dhall ${cache_dir}
cp -R "${HOME}"/.cache/dhall-haskell ${cache_dir}

# Set up our prepared node_modules.
ln -s "${base_dir}/pre-compiled/node_modules" .

# The timestamps of the `output/` directory must be preserved, or else the
# PureScript compiler (`purs`) will invalidate the cache and force a rebuild,
# defeating the pre-compilation altogether (hence the `-p` flag to `cp`).
cp -R -p "${base_dir}/pre-compiled/output" .
cp -R "${base_dir}/pre-compiled/.spago" .

echo "Build and test ${slug} in ${build_dir}..."

# Run the tests for the provided implementation file and redirect stdout and
# stderr to capture it
test_output=$(pulp test 2>&1)
# stderr to capture it. We do our best to minimize the output, emitting only
# compiler errors or unit test output, as this is scrubbed and presented to the
# student. In addition, spago will try to write to its global cache under
# ~/.cache and will fail on a read-only mount, so we skip the global cache and
# ask it not to install packages.
export XDG_CACHE_HOME=${cache_dir}
spago_output=$(npx spago --global-cache skip --no-psa test --no-install 2>&1)
exit_code=$?

popd > /dev/null
popd > /dev/null || exit

# Write the results.json file based on the exit code of the command that was
# just executed that tested the implementation file
# Write the results.json file based on the exit code of the command that was
# just executed that tested the implementation file.
if [ $exit_code -eq 0 ]; then
jq -n '{version: 1, status: "pass"}' > ${results_file}
jq -n '{version: 1, status: "pass"}' > "${results_file}"
else
# Sanitize the output
sanitized_test_output=$(echo "${test_output}" | sed -E \
-e '/^\* Building project/d' \
sanitized_spago_output=$(echo "${spago_output}" | sed -E \
-e '/^Compiling/d' \
-e '/at .*(node:internal|.*\/opt\/test-runner\/.*\.js)/d')

# Manually add colors to the output to help scanning the output for errors
colorized_test_output=$(echo "${sanitized_test_output}" | \
GREP_COLOR='01;31' grep --color=always -E -e '(Error found:|Error:|\* ERROR:|.*Failed:).*$|$' | \
GREP_COLOR='01;32' grep --color=always -E -e '.*Passed:.*$|$')

printf "${colorized_test_output}"
-e '/at.*:[[:digit:]]+:[[:digit:]]+\)?/d')

jq -n --arg output "${colorized_test_output}" '{version: 1, status: "fail", message: $output}' > ${results_file}
jq --null-input --arg output "${sanitized_spago_output}" '{version: 1, status: "fail", message: $output}' > "${results_file}"
fi

echo "${slug}: done"
echo "Done"