Steps to Compile TensorFlow 2.0 from Source
Author: Phillweston. Do not reproduce without permission.
Limitations of installing TensorFlow with pip
1. CPUs without AVX crash on newer TensorFlow versions
On servers whose CPUs do not support the AVX instruction set, running import tensorflow as tf in Python with any TensorFlow version above 1.5.0 fails as follows:
Ubuntu: Illegal instruction (core dumped)
Windows 10: ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
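A quick way to check whether a Linux CPU supports AVX before installing (assuming /proc/cpuinfo is available) is to grep the CPU flags; no output means AVX is missing:
$ grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u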
2. TensorFlow releases lag behind the CUDA release
At the time of writing, the newest TensorFlow release (2.1) only supports CUDA 10.0, while NVIDIA's latest Ubuntu driver (440.44) ships with CUDA 10.2, which no existing TensorFlow release is compatible with.
This article therefore walks through building both the Python and C++ versions of TensorFlow from source.
Optional: install TF2 through conda
1. Switch the default conda mirror (the USTC mirror is used here as an example)
$ conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/main/
$ conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/free/
$ conda config --set show_channel_urls yes
2. Create the environment
$ conda create -n tf2 python=3.6
Note: a certain protobuf release has a bug that can break the TF source build under Python 3.7, so Python 3.7 is not recommended; the Python 3 that ships with your Linux distribution is fine.
3. Activate the environment
$ source activate tf2
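A quick sanity check (not part of the original steps) that the environment resolves to the expected interpreter:
$ which python
$ python --version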
After git clone, remember to pass --recurse-submodules so that the submodules are cloned recursively (this pulls in the protobuf sources; if Google protobuf is already installed on your machine, you can skip the submodules). A clone example is sketched below.
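A minimal sketch of the clone, assuming the official repository and the v2.0.0 release tag (adjust the tag to whichever release you intend to build):
$ git clone --recurse-submodules https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout v2.0.0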
The prerequisites that must be installed beforehand are:
build-essential git protobuf bazel eigen
Notes:
1. The bazel install is version-constrained: TensorFlow v2 and above require bazel 1.2.1 or higher; alternatively, the bazel version restriction can be modified (see the section below).
2. Building TensorFlow from source depends on Keras-related libraries, which need to be installed with pip3:
$ sudo pip3 install keras
3. The eigen library has to be built and installed from source; search GitHub for details. A sketch of installing bazel and eigen follows these notes.
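A sketch of getting bazel and eigen in place. The bazel release number and the installer-script approach are examples rather than requirements; pick the bazel version that matches the TensorFlow branch you are building, and note that the eigen sources are hosted on GitLab:
$ wget https://github.com/bazelbuild/bazel/releases/download/0.29.1/bazel-0.29.1-installer-linux-x86_64.sh
$ chmod +x bazel-0.29.1-installer-linux-x86_64.sh
$ ./bazel-0.29.1-installer-linux-x86_64.sh --user
$ git clone https://gitlab.com/libeigen/eigen.git
$ cd eigen && mkdir build && cd build
$ cmake .. && sudo make install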
Modifying the bazel version restriction
The three files that need to be modified, all located in the source root directory, are:
configure.py
WORKSPACE
.bazelrc
1. configure.py: change the following two lines
_TF_MIN_BAZEL_VERSION = '0.29.0'
_TF_MAX_BAZEL_VERSION = '0.29.1'
The minimum version must be less than or equal to the bazel version you have installed, and the maximum version must be greater than or equal to it.
2. WORKSPACE: change the following line
check_bazel_version_at_least("0.29.0")
The version in parentheses must be less than or equal to the bazel version you have installed.
3. .bazelrc changes
Comment out this line: build --enable_platform_specific_config
If you are building on Windows, add this line: build --config=linux
One way to apply the .bazelrc edit from the command line is sketched below.
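A non-interactive way to comment out that .bazelrc line (a sketch; the configure.py and WORKSPACE version strings are easiest to adjust by hand in an editor):
$ sed -i 's/^build --enable_platform_specific_config/# build --enable_platform_specific_config/' .bazelrc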
Once the prerequisite libraries are installed, run ./configure from the source root to set up the bazel build options for TensorFlow.
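For example, from the repository root cloned earlier:
$ cd tensorflow
$ ./configure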
Known build issues
1. TensorFlow 1.x cannot be built with a bazel version above 1.0.0 (a version below 0.29.1 is recommended), whereas TensorFlow 2.1 and above need bazel 1.1.0 or higher; even then, a very recent bazel may fail, so modifying the bazel version restriction as described above is recommended.
2. TensorFlow 2.0 and above fail to build against TensorRT 7.0, with errors like the following:
ImportError: /home/phillweston/.cache/bazel/_bazel_phillweston/116338b0ad1de73f45727b0ef63c0bc9/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/python/keras/api/create_tensorflow.python_api_keras_python_api_gen.runfiles/org_tensorflow/tensorflow/compiler/tf2tensorrt/_wrap_py_utils.so: undefined symbol: _ZN15stream_executor14StreamExecutor18EnablePeerAccessToEPS0_
Target //tensorflow/tools/pip_package:build_pip_package failed to build
ERROR: /home/phillweston/git-repository/tensorflow/tensorflow/python/tools/BUILD:141:1 Executing genrule //tensorflow/python/keras/api:keras_python_api_gen_compat_v2 failed (Exit 1)
The cause is unknown. One workaround is to use an older version of bazel, following the 'Modifying the bazel version restriction' steps above.
Configuring the build options with ./configure
WARNING:--batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel installed.
Please specify the location of python. [Default is /usr/bin/python]:
# Check the Python environment here; Python 3 is recommended
Found possible Python library paths:
/opt/intel/openvino_2019.3.376/python/python3.6
/usr/lib/python3/dist-packages
/usr/lib/python3.6/dist-packages
/opt/intel/openvino_2019.3.376/python/python3
/opt/ros/melodic/lib/python2.7/dist-packages
/usr/local/lib/python3.6/dist-packages
/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer
/opt/intel/openvino_2019.3.376/deployment_tools/open_model_zoo/tools/accuracy_checker
Please input the desired Python library path to use.  Default is [/usr/lib/python3.6/dist-packages]
/usr/local/lib/python3.6/dist-packages
# Double-check the Python environment here: it must match the python location given above and also match pip's default install location
# This option asks whether to enable XLA JIT support. XLA (Accelerated Linear Algebra) is still an experimental TensorFlow project; it uses JIT (Just-in-Time) compilation to analyze the TensorFlow graphs created at runtime and specialize them for the actual runtime shapes and types. The technology is not yet mature, so adventurous readers may answer "y"; everyone else should accept the default "N".
Do you wish to build TensorFlow with XLA JIT support?[Y/n]: y
XLA JIT support will be enabled for TensorFlow.
# OpenCL SYCL is a high-level heterogeneous programming model; NVIDIA GPUs do not support it, while Intel and AMD GPUs do
Do you wish to build TensorFlow with OpenCL SYCL support?[y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.
# ROCm is AMD's GPU acceleration stack, serving a similar purpose to CUDA
# The keras library must already be installed, otherwise the build will fail
Do you wish to build TensorFlow with ROCm support?[y/N]: n
No ROCm support will be enabled for TensorFlow.
# This option asks whether to use CUDA. CUDA is NVIDIA's general-purpose parallel computing architecture, which lets the GPU solve complex computational problems. Answer "y" if you have an NVIDIA GPU; press Enter to accept the default "N" if you only need the CPU build of TensorFlow.
Do you wish to build TensorFlow with CUDA support?[y/N]: y
CUDA support will be enabled for TensorFlow.
Do you wish to build TensorFlow with TensorRT support?[y/N]: y
TensorRT support will be enabled for TensorFlow.
Found CUDA 10.2 in:
/usr/local/cuda/lib64
/usr/local/cuda/include
Found cuDNN 7 in:
/usr/local/cuda/lib64
/usr/local/cuda/include
Found TensorRT 7 in:
/usr/lib/x86_64-linux-gnu
/usr/include/x86_64-linux-gnu
# This sets the compute capability of your NVIDIA GPU; use the value listed on NVIDIA's website:
# https://developer.nvidia.com/cuda-gpus
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 6.1]:
Do you want to use clang as CUDA compiler?[y/N]: y
Clang will be used as CUDA compiler.
Do you wish to download a fresh release of clang?(Experimental)[y/N]: y
Clang will be downloaded and used to compile tensorflow.
# This option sets the CPU optimization flags used during compilation. The default is "-march=native": "m" stands for machine and "arch" for architecture, so "-march=native" targets the local CPU, and on a reasonably recent CPU it enables SSE4.2, AVX, and similar extensions. The default is recommended.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:
# This part builds TensorFlow for the Android NDK and SDK; answer N if you do not need it
Would you like to interactively configure ./WORKSPACE for Android builds?[y/N]: y
Searching for NDK and SDK installations.
Please specify the home path of the Android NDK to use.[Default is /home/phillweston/Android/Sdk/ndk-bundle]:
WARNING: The NDK version in /home/phillweston/Android/Sdk/ndk-bundle is 21, which is not supported by Bazel (officially supported versions: [10, 11, 12, 13, 14, 15, 16, 17, 18]). Please use another version. Compiling Android targets may result in confusing errors.
# Choosing a high NDK API level here may prevent devices running lower API levels from using the TF framework; the lowest available level is recommended
Please specify the (min) Android NDK API level to use. [Available levels: ['16','17','18','19','21','22','23','24','26','27','28','29']] [Default is 21]: 16
Please specify the home path of the Android SDK to use. [Default is /home/phillweston/Android/Sdk]:
# Android SDK installation steps
Please specify the Android SDK API level to use. [Available levels: ['29']] [Default is 29]:
Please specify an Android build tools version to use.[Available versions:['29.0.2']][Default is 29.0.2]:
# The Android section ends here
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl          # Build with MKL support.
--config=monolithic  # Config for mostly static monolithic build.
--config=ngraph      # Build with Intel nGraph support.
--config=numa        # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2          # Build TensorFlow 2.x
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws        # Disable AWS S3 filesystem support.
--config=nogcp        # Disable GCP support.
--config=nohdfs      # Disable HDFS support.
--config=nonccl      # Disable NVIDIA NCCL support.
Configuration finished
Note: if the bazel build runs into problems, the output will suggest "Use --verbose_failures to see the command lines of failed build steps." Adding the --verbose_failures flag makes bazel print the failing commands and error details in the terminal.
Command 1: build the C++ version of TensorFlow
$ bazel build --config=opt --config=cuda --config=v2 //tensorflow:libtensorflow_cc.so --verbose_failures --local_resources=8192,6,10
Command 2: build the Python (pip package) version of TensorFlow
$ bazel build --config=opt --config=cuda --config=v2 //tensorflow/tools/pip_package:build_pip_package --verbose_failures --local_resources=8192,6,10
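Once the pip_package target builds successfully, the wheel can be generated and installed as below (the output directory /tmp/tensorflow_pkg is just an example, and the exact wheel filename depends on the version and platform):
$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip3 install /tmp/tensorflow_pkg/tensorflow-*.whl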
Note: the double dashes here are not comment markers, and the double slash // is simply how a bazel target name begins; it is a completely different concept from the / used in directory paths.
Tips:
Building TensorFlow from source can consume a large amount of memory. If system memory is limited, restrict bazel's resource usage with the following flags:
1. Limit memory usage
--local_ram_resources=2048
2. Limit overall system resource usage (adding this option is recommended, otherwise the build may freeze the machine)
--local_resources=8192,6,10 caps the build at 8192 MB of RAM, 6 CPU cores, and 10 I/O threads
3. Limit the maximum number of parallel jobs (i.e. how many CPUs are used at once)
--jobs X (where X is the number of jobs)
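After the wheel is installed, a quick check that the build imports and sees the GPU (a sketch; tf.config.experimental.list_physical_devices is available in both TF 2.0 and 2.1):
$ python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.config.experimental.list_physical_devices('GPU'))"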
