BLD: Refactor configure #808

Merged: 4 commits, Dec 31, 2019
43 changes: 21 additions & 22 deletions .travis.yml
@@ -40,29 +40,28 @@ jobs:
- python -m pip install -q -U twine --ignore-installed six
- twine upload -u $PYPI_USER -p $PYPI_PW wheelhouse/*.whl

# Windows nightlies for TensorFlow are not available
# Windows Builds
# - stage: build
# name: "Build on Windows for Python 2.7 3.5 3.6 3.7"
# os: windows
# language: shell
# before_install:
# - choco install python --version 2.7.11
# - choco install python --version 3.5.4
# - choco install python --version 3.6.8
# - choco install python --version 3.7.5
# script:
# - export PATH=/c/tools/python:/c/tools/python/Scripts:$PATH
# - bash -x -e tools/ci_build/builds/release_windows.sh
# - export PATH=/c/Python35:/c/Python35/Scripts:$PATH
# - bash -x -e tools/ci_build/builds/release_windows.sh
# - export PATH=/c/Python36:/c/Python36/Scripts:$PATH
# - bash -x -e tools/ci_build/builds/release_windows.sh
# - export PATH=/c/Python37:/c/Python37/Scripts:$PATH
# - bash -x -e tools/ci_build/builds/release_windows.sh
# after_success:
# - python -m pip install -q -U twine --ignore-installed six
# - twine upload -u $PYPI_USER -p $PYPI_PW artifacts/*.whl
- stage: build
  name: "Build on Windows for Python 2.7 3.5 3.6 3.7"
  os: windows
  language: shell
  before_install:
    - choco install python --version 2.7.11
    - choco install python --version 3.5.4
    - choco install python --version 3.6.8
    - choco install python --version 3.7.5
  script:
    - export PATH=/c/tools/python:/c/tools/python/Scripts:$PATH
    - bash -x -e tools/ci_build/builds/release_windows.sh
    - export PATH=/c/Python35:/c/Python35/Scripts:$PATH
    - bash -x -e tools/ci_build/builds/release_windows.sh
    - export PATH=/c/Python36:/c/Python36/Scripts:$PATH
    - bash -x -e tools/ci_build/builds/release_windows.sh
    - export PATH=/c/Python37:/c/Python37/Scripts:$PATH
    - bash -x -e tools/ci_build/builds/release_windows.sh
  after_success:
    - python -m pip install -q -U twine --ignore-installed six
    - twine upload -u $PYPI_USER -p $PYPI_PW artifacts/*.whl

notifications:
email:
41 changes: 12 additions & 29 deletions CONTRIBUTING.md
@@ -67,56 +67,39 @@ Please see our [Style Guide](STYLE_GUIDE.md) for more details.
Nightly CI tests are run and the results can be found on the central README. To
subscribe to alerts, please join the [addons-testing mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/addons-testing).

### Locally Testing CPU
Run all tests in docker:
### Locally Testing

#### CPU Testing Script
```bash
bash tools/run_docker.sh -c 'make unit-test'
```

or run manually:

#### GPU Testing Script
```bash
docker run --rm -it -v ${PWD}:/addons -w /addons gcr.io/tensorflow-testing/nosla-ubuntu16.04-manylinux2010 /bin/bash
./configure.sh # Links project with TensorFlow dependency
bash tools/run_docker.sh -d gpu -c 'make gpu-unit-test'
```

Run selected tests:
#### Run Manually

```bash
bazel test -c opt -k \
--test_timeout 300,450,1200,3600 \
--test_output=all \
//tensorflow_addons/<test_selection>
```
It is recommended that tests are run within Docker images, but they should still work on the host.

`<test_selection>` can be `...` for all tests or `<package>:<py_test_name>` for individual tests.
`<package>` can be any package name like `metrics` for example.
`<py_test_name>` can be any test name given by the `BUILD` file or `*` for all tests of the given package.
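For example, a sketch of both forms (the single test target name below is hypothetical; check the package's `BUILD` file for the real target names):

```bash
# All tests in the repository
bazel test -c opt -k --test_timeout 300,450,1200,3600 --test_output=all //tensorflow_addons/...

# A single test target in the metrics package (hypothetical name; see metrics/BUILD)
bazel test -c opt -k --test_timeout 300,450,1200,3600 --test_output=all //tensorflow_addons/metrics:metrics_test
```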
CPU Docker: `docker run --rm -it -v ${PWD}:/addons -w /addons gcr.io/tensorflow-testing/nosla-ubuntu16.04-manylinux2010 /bin/bash`

### Locally Testing GPU
Run all tests in docker:
GPU Docker: `docker run --runtime=nvidia --rm -it -v ${PWD}:/addons -w /addons gcr.io/tensorflow-testing/nosla-cuda10.1-cudnn7-ubuntu16.04-manylinux2010 /bin/bash`

```bash
bash tools/run_docker.sh -d gpu -c 'make gpu-unit-test'
Configure:
```
# Temporary until we remove py2 support
ln -sf /usr/bin/python3.6 /usr/bin/python && rm /usr/bin/python2

or run manually:

```bash
docker run --runtime=nvidia --rm -it -v ${PWD}:/addons -w /addons gcr.io/tensorflow-testing/nosla-cuda10.1-cudnn7-ubuntu16.04-manylinux2010 /bin/bash
export TF_NEED_CUDA=1
./configure.sh # Links project with TensorFlow dependency
```

Run selected tests:

```bash
bazel test -c opt -k \
--test_timeout 300,450,1200,3600 \
--crosstool_top=//build_deps/toolchains/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
--test_output=all \
--jobs=1 \
--run_under=$(readlink -f tools/ci_testing/parallel_gpu_execute.sh) \
//tensorflow_addons/<test_selection>
```

7 changes: 1 addition & 6 deletions README.md
@@ -80,16 +80,11 @@ https://bazel.build/) build system (version >= 1.0.0).
git clone https://github.com/tensorflow/addons.git
cd addons

# If building GPU Ops (Requires CUDA 10.1 and CuDNN 7)
export TF_NEED_CUDA=1
export CUDA_HOME="/path/to/cuda10.1" (default: /usr/local/cuda)
export CUDNN_INSTALL_PATH="/path/to/cudnn" (default: /usr/lib/x86_64-linux-gnu)

# This script links project with TensorFlow dependency
./configure.sh

bazel build --enable_runfiles build_pip_pkg
bazel-bin/build_pip_pkg artifacts --nightly
bazel-bin/build_pip_pkg artifacts

pip install artifacts/tensorflow_addons-*.whl
```
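The GPU environment variables that used to live in this snippet are now collected by the refactored `configure.sh` prompts; each prompt is skipped when the corresponding variable is already set, so a sketch of a prompt-free GPU configure looks like the following (variable names are taken from the new `configure.sh` in this PR, and the values shown are its defaults):

```bash
# Pre-answer the GPU prompts of the refactored configure.sh
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.1
export CUDA_TOOLKIT_PATH=/usr/local/cuda
export TF_CUDNN_VERSION=7
export CUDNN_INSTALL_PATH=/usr/lib/x86_64-linux-gnu

./configure.sh
```

Note that `configure.sh` may still ask before installing the pinned TensorFlow package if `build_deps/requirements.txt` is not already satisfied.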
35 changes: 35 additions & 0 deletions build_deps/check_deps.py
@@ -0,0 +1,35 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
import pkg_resources
from pip._internal.req import parse_requirements
from pkg_resources import DistributionNotFound, VersionConflict


def check_dependencies(requirement_file_name):
    """Checks to see if the python dependencies are fulfilled.

    If the check passes, return 0. Otherwise print the error and return 1.
    """
    dependencies = []
    for req in parse_requirements(requirement_file_name, session=False):
        dependencies.append(str(req.req))
    try:
        pkg_resources.working_set.require(dependencies)
    except VersionConflict as e:
        try:
            print("{} was found on your system, "
                  "but {} is required for this build.\n".format(e.dist, e.req))
            sys.exit(1)
        except AttributeError:
            sys.exit(1)
    except DistributionNotFound as e:
        print(e)
        sys.exit(1)
    sys.exit(0)


if __name__ == "__main__":
    check_dependencies('build_deps/requirements.txt')
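The script communicates through its exit status, so a caller can branch on it; a minimal sketch that mirrors how the refactored `configure.sh` below uses it:

```bash
# Install the pinned requirements only when check_deps.py reports a problem
if python build_deps/check_deps.py; then
    echo "> Using pre-installed TensorFlow package"
else
    python -m pip install --upgrade -r build_deps/requirements.txt
fi
```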
1 change: 0 additions & 1 deletion build_deps/requirements.txt
@@ -1,2 +1 @@
# TensorFlow greater than this date is manylinux2010 compliant
tf-nightly>=2.1.0.dev20191004
10 changes: 5 additions & 5 deletions build_deps/toolchains/gpu/find_cuda_config.py
@@ -320,19 +320,19 @@ def get_header_version(path):
header_path, cublas_version = _find_header(
base_paths, "cublas_api.h", required_version, get_header_version)

cublas_major_verison = cublas_version.split(".")[0]
if not _matches_version(cuda_version, cublas_major_verison):
cublas_major_version = cublas_version.split(".")[0]
if not _matches_version(cuda_version, cublas_major_version):
raise ConfigError(
"cuBLAS version %s does not match CUDA version %s" %
(cublas_major_verison, cuda_version))
(cublas_major_version, cuda_version))

else:
# There is no version info available before CUDA 10.1, just find the file.
header_path = _find_file(base_paths, _header_paths(), "cublas_api.h")
# cuBLAS version is the same as CUDA version (x.y).
cublas_version = required_version
cublas_major_version = required_version

library_path = _find_library(base_paths, "cublas", cublas_major_verison)
library_path = _find_library(base_paths, "cublas", cublas_major_version)

return {
"cublas_include_dir": os.path.dirname(header_path),
127 changes: 100 additions & 27 deletions configure.sh
@@ -20,7 +20,10 @@


PLATFORM="$(uname -s | tr 'A-Z' 'a-z')"

DEFAULT_CUDA_VERSION="10.1"
DEFAULT_CUDA_PATH="/usr/local/cuda"
DEFAULT_CUDNN_VERSION="7"
DEFAULT_CUDNN_PATH="/usr/lib/x86_64-linux-gnu"

# Writes variables to bazelrc file
function write_to_bazelrc() {
@@ -40,7 +43,6 @@ function is_macos() {
}

function is_windows() {
# On windows, the shell script is actually running in msys
[[ "${PLATFORM}" =~ msys_nt*|mingw*|cygwin*|uwin* ]]
}

@@ -51,76 +53,147 @@ function is_ppc64le() {
# Converts the linkflag namespec to the full shared library name
function generate_shared_lib_name() {
if is_macos; then
# MacOS
local namespec="$1"
echo "lib"${namespec:2}".dylib"
elif is_windows; then
# Windows
echo "_pywrap_tensorflow_internal.lib"
else
# Linux
local namespec="$1"
echo ${namespec:3}
fi
}
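# Illustration only (not part of the script): the namespec comes from
# tf.sysconfig.get_link_flags(), and typical TF 2.x values are assumed here:
#   Linux:   "-l:libtensorflow_framework.so.2"  -> "libtensorflow_framework.so.2"    (strip "-l:")
#   macOS:   "-ltensorflow_framework.2"         -> "libtensorflow_framework.2.dylib" (strip "-l", add "lib"/".dylib")
#   Windows: the namespec is ignored and "_pywrap_tensorflow_internal.lib" is always returned.
# generate_shared_lib_name "-l:libtensorflow_framework.so.2"   # -> libtensorflow_framework.so.2 on Linux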

echo ""
echo "Configuring TensorFlow Addons to be built from source..."

PIP_INSTALL_OPTS="--upgrade"
if [[ $1 == "--quiet" ]]; then
PIP_INSTALL_OPTS="$PIP_INSTALL_OPTS --quiet"
elif [[ ! -z "$1" ]]; then
elif [[ -n "$1" ]]; then
echo "Found unsupported args: $@"
exit 1
fi

# Install python dependencies
read -r -p "Tensorflow 2.0 will be installed if it is not already. Are You Sure? [y/n] " reply
case $reply in
[yY]*) echo "Installing...";;
* ) echo "Goodbye!"; exit;;
esac
BRANCH=$(git rev-parse --abbrev-ref HEAD)
PYTHON_PATH=$(which python)
REQUIRED_PKG=$(cat build_deps/requirements.txt)

BUILD_DEPS_DIR=build_deps
REQUIREMENTS_TXT=$BUILD_DEPS_DIR/requirements.txt
if [[ ${BRANCH} == "master" ]]; then
echo "WARN: You're building from master branch, please ensure that you want to build \
against tf-nightly. Otherwise please checkout a recent stable release branch."
fi

${PYTHON_VERSION:=python} -m pip install $PIP_INSTALL_OPTS -r $REQUIREMENTS_TXT
echo ""
echo "> TensorFlow Addons will link to the framework in a pre-installed TF pacakge..."
echo "> Checking installed packages in ${PYTHON_PATH}"
python build_deps/check_deps.py

if [[ $? == 1 ]]; then
read -r -p "Package ${REQUIRED_PKG} will be installed. Are You Sure? [y/n] " reply
case $reply in
[yY]*) echo "> Installing..."
python -m pip install $PIP_INSTALL_OPTS -r build_deps/requirements.txt;;
* ) echo "> Exiting..."; exit;;
esac
else
echo "> Using pre-installed ${REQUIRED_PKG}..."
fi

[[ -f .bazelrc ]] && rm .bazelrc

TF_CFLAGS=( $(${PYTHON_VERSION} -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(${PYTHON_VERSION} -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )
TF_CXX11_ABI_FLAG=( $(${PYTHON_VERSION} -c 'import tensorflow as tf; print(tf.sysconfig.CXX11_ABI_FLAG)') )

if is_windows; then
# Use pywrap_tensorflow instead of tensorflow_framework on Windows
TF_SHARED_LIBRARY_DIR=${TF_CFLAGS:2:-7}"python"
else
TF_SHARED_LIBRARY_DIR=${TF_LFLAGS[0]:2}
fi
TF_CFLAGS=($(python -c 'import logging; logging.disable(logging.WARNING);import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))'))
TF_LFLAGS=($(python -c 'import logging; logging.disable(logging.WARNING);import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))'))
TF_CXX11_ABI_FLAG=($(python -c 'import logging; logging.disable(logging.WARNING);import tensorflow as tf; print(tf.sysconfig.CXX11_ABI_FLAG)'))

TF_SHARED_LIBRARY_NAME=$(generate_shared_lib_name ${TF_LFLAGS[1]})
TF_HEADER_DIR=${TF_CFLAGS:2}

# OS Specific parsing
if is_windows; then
TF_SHARED_LIBRARY_DIR=${TF_CFLAGS:2:-7}"python"
TF_SHARED_LIBRARY_DIR=${TF_SHARED_LIBRARY_DIR//\\//}

TF_SHARED_LIBRARY_NAME=${TF_SHARED_LIBRARY_NAME//\\//}
TF_HEADER_DIR=${TF_HEADER_DIR//\\//}
else
TF_SHARED_LIBRARY_DIR=${TF_LFLAGS[0]:2}
fi
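# Illustration only (not part of the script): assuming the first compile flag on
# Windows is something like "-IC:\Python37\lib\site-packages\tensorflow_core\include",
# ${TF_CFLAGS:2:-7} drops the leading "-I" and the trailing "include", and appending
# "python" points at ".../site-packages/tensorflow_core/python", where
# _pywrap_tensorflow_internal.lib is expected to live; the //\\// substitutions then
# flip backslashes to forward slashes so Bazel can consume the paths.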

write_action_env_to_bazelrc "TF_HEADER_DIR" ${TF_HEADER_DIR}
write_action_env_to_bazelrc "TF_SHARED_LIBRARY_DIR" ${TF_SHARED_LIBRARY_DIR}
write_action_env_to_bazelrc "TF_SHARED_LIBRARY_NAME" ${TF_SHARED_LIBRARY_NAME}
write_action_env_to_bazelrc "TF_CXX11_ABI_FLAG" ${TF_CXX11_ABI_FLAG}

write_to_bazelrc "build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true"
write_to_bazelrc "build --spawn_strategy=standalone"
write_to_bazelrc "build --strategy=Genrule=standalone"
write_to_bazelrc "build -c opt"

while [[ "$TF_NEED_CUDA" == "" ]]; do
echo ""
read -p "Do you want to build GPU ops? [y/N] " INPUT
case $INPUT in
[Yy]* ) echo "> Building GPU & CPU ops"; TF_NEED_CUDA=1;;
[Nn]* ) echo "> Building only CPU ops"; TF_NEED_CUDA=0;;
"" ) echo "> Building only CPU ops"; TF_NEED_CUDA=0;;
* ) echo "Invalid selection: " $INPUT;;
esac
done

if [[ "$TF_NEED_CUDA" == "1" ]]; then
echo ""
echo "Configuring GPU setup..."

while [[ "$TF_CUDA_VERSION" == "" ]]; do
read -p "Please specify the CUDA version [Default is $DEFAULT_CUDA_VERISON]: " INPUT
case $INPUT in
"" ) echo "> Using CUDA version: 10.1"; TF_CUDA_VERSION=$DEFAULT_CUDA_VERISON;;
* ) echo "> Using CUDA version:" $INPUT; TF_CUDA_VERSION=$INPUT;;
esac
echo ""
done

while [[ "$CUDA_TOOLKIT_PATH" == "" ]]; do
read -p "Please specify the location of CUDA. [Default is $DEFAULT_CUDA_PATH]: " INPUT
case $INPUT in
"" ) echo "> CUDA installation path: /usr/local/cuda"; CUDA_TOOLKIT_PATH=$DEFAULT_CUDA_PATH;;
* ) echo "> CUDA installation path:" $INPUT; CUDA_TOOLKIT_PATH=$INPUT;;
esac
echo ""
done

while [[ "$TF_CUDNN_VERSION" == "" ]]; do
read -p "Please specify the cuDNN major version [Default is $DEFAULT_CUDNN_VERSION]: " INPUT
case $INPUT in
"" ) echo "> Using cuDNN version: 7"; TF_CUDNN_VERSION=$DEFAULT_CUDNN_VERSION;;
* ) echo "> Using cuDNN version:" $INPUT; TF_CUDNN_VERSION=$INPUT;;
esac
echo ""
done

while [[ "$CUDNN_INSTALL_PATH" == "" ]]; do
read -p "Please specify the location of cuDNN installation. [Default is $DEFAULT_CUDNN_PATH]: " INPUT
case $INPUT in
"" ) echo "> cuDNN installation path: /usr/lib/x86_64-linux-gnu"; CUDNN_INSTALL_PATH=$DEFAULT_CUDNN_PATH;;
* ) echo "> cuDNN installation path:" $INPUT; CUDNN_INSTALL_PATH=$INPUT;;
esac
echo ""
done

write_action_env_to_bazelrc "TF_NEED_CUDA" ${TF_NEED_CUDA}
write_action_env_to_bazelrc "CUDNN_INSTALL_PATH" "${CUDNN_INSTALL_PATH:=/usr/lib/x86_64-linux-gnu}"
write_action_env_to_bazelrc "TF_CUDA_VERSION" "10.1"
write_action_env_to_bazelrc "TF_CUDNN_VERSION" "7"
write_action_env_to_bazelrc "CUDA_TOOLKIT_PATH" "${CUDA_HOME:=/usr/local/cuda}"
write_action_env_to_bazelrc "CUDA_TOOLKIT_PATH" "${CUDA_TOOLKIT_PATH}"
write_action_env_to_bazelrc "CUDNN_INSTALL_PATH" "${CUDNN_INSTALL_PATH}"
write_action_env_to_bazelrc "TF_CUDA_VERSION" "${TF_CUDA_VERSION}"
write_action_env_to_bazelrc "TF_CUDNN_VERSION" "${TF_CUDNN_VERSION}"

write_to_bazelrc "test --config=cuda"
write_to_bazelrc "build --config=cuda"
write_to_bazelrc "build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true"
write_to_bazelrc "build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain"
fi

echo ""
echo "Build configurations successfully written to .bazelrc"
echo ""