
[libtorch] create a new port #17199

Merged — 90 commits, merged Jan 18, 2023 (changes shown from 68 commits).
8ee205b
[onnx] feature: foxi
luncliff Sep 11, 2021
d42ca2f
[onnx] force onnx/onnx_proto static in Windows
luncliff Sep 16, 2021
5114b48
[onnx] support windows static triplets
luncliff Sep 16, 2021
28cec17
[onnx] fix wrong LICENSE install
luncliff Sep 16, 2021
f8ec3f5
[onnx] remove feature 'foxi'
luncliff Sep 18, 2021
18d4965
Merge branch 'master' of https://github.com/microsoft/vcpkg into port…
Sep 23, 2021
2a77308
[libtorch] rework patch files
luncliff Sep 11, 2021
1c19e75
[libtorch] config fixup ATen, Torch
luncliff Sep 11, 2021
287b524
[libtorch] make shared only
luncliff Sep 11, 2021
335282e
[libtorch] remove headers after install
luncliff Sep 12, 2021
44ae8f7
Merge branch 'port/onnx' into port/libtorch-1
luncliff Sep 28, 2021
62cd69e
[libtorch] rewrite patches and feature options
luncliff Sep 28, 2021
b1807fb
[libtorch] use eigen3 always
luncliff Sep 28, 2021
5d00ceb
[libtorch] error if BLAS feature collision
luncliff Sep 28, 2021
ecbc08f
[libtorch] remove !static
luncliff Sep 28, 2021
34ec43f
[libtorch] replace vcpkg_find_acquire_program
luncliff Sep 29, 2021
8fd7788
Merge branch 'master' of https://github.com/microsoft/vcpkg into port…
Oct 20, 2021
d3b2cd9
Dependency python3
Oct 20, 2021
835dc6b
Merge branch 'master' into port/libtorch
luncliff Nov 13, 2021
94560c3
[libtorch] migrate works from luncliff/vcpkg-registry
luncliff Nov 13, 2021
06f0a02
[libtorch] misc fix, update version, baseline
luncliff Nov 13, 2021
45944fc
fix merge confict for 'onnx'
luncliff Nov 13, 2021
2c18c57
[libtorch] install pip packages
luncliff Nov 14, 2021
31595af
[libtorch] turn off Metal options
luncliff Nov 14, 2021
929dfc4
[onnx] revert 'onnx' changes
luncliff Nov 14, 2021
080d3e5
Merge branch 'master' into port/libtorch
luncliff Nov 18, 2021
1fb5799
[libtorch] refine patches
luncliff Nov 22, 2021
d44c15e
Merge branch 'master' into port/libtorch
luncliff Dec 5, 2021
d7b3957
[libtorch] link with foxi_loader
luncliff Dec 5, 2021
0ab2fe8
[libtorch] update git-tree
luncliff Dec 5, 2021
b9c4a07
[libtorch] reduce patch size
luncliff Dec 10, 2021
16b5612
Merge branch 'master' into port/libtorch
luncliff Dec 10, 2021
86290b8
[libtorch] find numa and activate USE_NUMA
luncliff Jan 18, 2022
ac8687c
Update ports/libtorch/portfile.cmake
luncliff Jan 19, 2022
a447925
Update ports/libtorch/portfile.cmake
luncliff Jan 19, 2022
7954279
Update ports/libtorch/portfile.cmake
luncliff Jan 19, 2022
65c5545
[libtorch] fix mistype and update version JSON
luncliff Jan 19, 2022
9703ca3
Merge branch 'master' of https://github.com/microsoft/vcpkg into port…
Mar 1, 2022
a04ecfd
Add double quotes
Mar 1, 2022
7156a29
version
Mar 1, 2022
a9a2b2c
Fix support expression
Mar 1, 2022
4424ca4
version
Mar 1, 2022
f9a4d3d
Merge remote-tracking branch 'origin/master' into HEAD
BillyONeal May 18, 2022
478af24
[libtorch] update cpuinfo usage
luncliff May 19, 2022
f11119d
[tensorpipe] fix linux install
luncliff May 27, 2022
b1763bf
[tensorpipe] update versions JSON
luncliff May 27, 2022
bc7c98d
[libtorch] fix feature failures
luncliff May 27, 2022
4afebf9
[libtorch] remove CUDA feature
luncliff May 28, 2022
619e481
[libtorch] giveup 'fbgemm' feature
luncliff Jun 2, 2022
c9d9bcf
[libtorch] use mpi, openmpi in Linux
luncliff Jun 7, 2022
93b2d4f
Merge branch 'master' of https://github.com/microsoft/vcpkg into port…
Jul 5, 2022
dd41633
[libtorch] fix glog link error
luncliff Jul 9, 2022
e198064
Merge branch 'master' into port/libtorch
luncliff Jul 10, 2022
9686405
[tensorpipe] bump port version
luncliff Jul 10, 2022
64d400f
[libtorch] fix patch list
luncliff Jul 10, 2022
c66f133
[libtorch] use official libuv config
luncliff Jul 11, 2022
c6ed667
Update ports/libtorch/portfile.cmake
luncliff Jul 11, 2022
0c9cc66
Update ports/libtorch/portfile.cmake
luncliff Jul 11, 2022
aa94e7e
update versions JSON
luncliff Jul 11, 2022
d8a19cf
revert unnecessary 'nnpack' changes
luncliff Jul 11, 2022
df88c0d
Update ports/libtorch/portfile.cmake
luncliff Jul 17, 2022
dd7c22d
[libtorch] use vcpkg-get-python-packages
luncliff Jul 18, 2022
1339a1b
[libtorch] provide path of python3
luncliff Sep 19, 2022
64495c3
Merge branch 'master' into port/libtorch
luncliff Sep 19, 2022
3317aed
Merge remote-tracking branch 'origin/master' into HEAD
BillyONeal Sep 26, 2022
707d441
Update ports/libtorch/portfile.cmake
luncliff Oct 26, 2022
4947704
Merge remote-tracking branch 'origin/master' into HEAD
BillyONeal Oct 26, 2022
04aeefc
Fix version database.
BillyONeal Oct 26, 2022
4c50475
[libtorch] use openmpi in linux/osx
luncliff Oct 27, 2022
b0e0d81
[libtorch] update to v1.12.1
luncliff Nov 14, 2022
260d530
[libtorch] find_program(python3, python)
luncliff Nov 14, 2022
08637fc
[libtorch] provide PYTHON_EXECUTABLE directly
luncliff Nov 14, 2022
b97c1c5
[xnnpack] update to 2022-02-17
luncliff Nov 14, 2022
26dda6a
[xnnpack] use C11, C++11
luncliff Nov 15, 2022
4085d19
[libtorch] more patches, DISABLE_PARALLEL_CONFIGURE
luncliff Nov 21, 2022
5fd4ef0
[libtorch] allow static torch_cpu build
luncliff Nov 21, 2022
a9dbd78
Revert "[libtorch] allow static torch_cpu build"
luncliff Nov 22, 2022
94bd147
[libtorch] find_package(BLAS)
luncliff Nov 23, 2022
0e63c13
[libtorch] simplify Python3, NumPy option use
luncliff Nov 23, 2022
b8c212e
[libtorch] fix install in Windows
luncliff Nov 23, 2022
981d24d
[libtorch] exclude torch_global_deps in Windows
luncliff Nov 23, 2022
3f43bba
[libtorch] platform of nnpack feature
luncliff Nov 24, 2022
17cd326
[libtorch] fix MPI option in Windows
luncliff Dec 4, 2022
88bb895
[libtorch] fixing LNK1161
luncliff Dec 5, 2022
9598f03
[libtorch] fix some mistypes
luncliff Dec 5, 2022
46e381d
[libtorch] define NOMINMAX for c10
luncliff Dec 10, 2022
62c5081
[libtorch] disable vulkan feature in Windows
luncliff Dec 11, 2022
6566732
ci.baseline.txt: allow libtorch failure
luncliff Dec 18, 2022
027b927
Enable testing port on Windows
vicroms Jan 17, 2023
a3e9e13
[caffe2] redirect to libtorch
luncliff Jan 18, 2023
95 changes: 95 additions & 0 deletions ports/libtorch/fix-cmake.patch
Original file line number Diff line number Diff line change
@@ -0,0 +1,95 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 0c11507..b47ebae 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -619,7 +619,7 @@ if(NOT CMAKE_BUILD_TYPE)
endif()

# The below means we are cross compiling for arm64 or x86_64 on MacOSX
-if(NOT IOS AND CMAKE_SYSTEM_NAME STREQUAL "Darwin" AND CMAKE_OSX_ARCHITECTURES MATCHES "^(x86_64|arm64)$")
+if(false)
set(CROSS_COMPILING_MACOSX TRUE)
# We need to compile a universal protoc to not fail protobuf build
# We set CMAKE_TRY_COMPILE_TARGET_TYPE to STATIC_LIBRARY (vs executable) to succeed the cmake compiler check for cross-compiling
@@ -637,6 +637,13 @@ if(NOT IOS AND CMAKE_SYSTEM_NAME STREQUAL "Darwin" AND CMAKE_OSX_ARCHITECTURES M
set(PROTOBUF_PROTOC_EXECUTABLE "${PROJECT_SOURCE_DIR}/build_host_protoc/bin/protoc")
set(CAFFE2_CUSTOM_PROTOC_EXECUTABLE "${PROJECT_SOURCE_DIR}/build_host_protoc/bin/protoc")
endif()
+find_package(protobuf CONFIG REQUIRED)
+find_program(PROTOBUF_PROTOC_EXECUTABLE
+ NAMES protoc
+ PATHS ${_VCPKG_INSTALLED_DIR}/${VCPKG_HOST_TRIPLET}/tools
+)
+set(CAFFE2_CUSTOM_PROTOC_EXECUTABLE ${PROTOBUF_PROTOC_EXECUTABLE})
+include(cmake/ProtoBuf.cmake)

# ---[ Misc checks to cope with various compiler modes
include(cmake/MiscCheck.cmake)
@@ -650,7 +657,7 @@ if(USE_FBGEMM AND ((CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64" AND CMAKE_SIZEOF_VO
set(USE_FBGEMM OFF)
endif()

-include(cmake/Dependencies.cmake)
+include(cmake/vcpkg-dependencies.cmake) # we will import vcpkg ports instead of CAFFE2_THIRD_PARTY_ROOT

if(USE_FBGEMM)
string(APPEND CMAKE_CXX_FLAGS " -DUSE_FBGEMM")
diff --git a/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt b/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt
index 3901f73..4954c3e 100644
--- a/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt
+++ b/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt
@@ -380,6 +380,7 @@ else()
target_link_libraries(pytorch_qnnpack PUBLIC pthreadpool)
endif()

+if(false) # use packages of vcpkg. see L433
# ---[ Configure FXdiv
if(NOT TARGET fxdiv AND NOT USE_SYSTEM_FXDIV)
set(FXDIV_BUILD_TESTS OFF CACHE BOOL "")
@@ -428,6 +429,14 @@ elseif(NOT TARGET fp16 AND USE_SYSTEM_FP16)
set_target_properties(fp16 PROPERTIES LINKER_LANGUAGE C)
endif()
target_link_libraries(pytorch_qnnpack PRIVATE fp16)
+endif()
+target_link_libraries(pytorch_qnnpack PRIVATE
+ cpuinfo::clog cpuinfo::cpuinfo
+ unofficial::pthreadpool
+)
+target_include_directories(pytorch_qnnpack PRIVATE
+ ${FP16_INCLUDE_DIRS} ${PSIMD_INCLUDE_DIRS} ${FXDIV_INCLUDE_DIRS}
+)

install(TARGETS pytorch_qnnpack
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
diff --git a/caffe2/CMakeLists.txt b/caffe2/CMakeLists.txt
index 26210cb..5f4618e 100644
--- a/caffe2/CMakeLists.txt
+++ b/caffe2/CMakeLists.txt
@@ -106,7 +106,7 @@ endif()
# Note: the folders that are being commented out have not been properly
# addressed yet.

-if(NOT MSVC AND USE_XNNPACK)
+if(false)
if(NOT TARGET fxdiv)
set(FXDIV_BUILD_TESTS OFF CACHE BOOL "")
set(FXDIV_BUILD_BENCHMARKS OFF CACHE BOOL "")
@@ -798,6 +798,9 @@ if(HAVE_SOVERSION)
endif()
torch_compile_options(torch_cpu) # see cmake/public/utils.cmake

+find_package(Eigen3 CONFIG REQUIRED)
+target_link_libraries(torch_cpu PRIVATE Eigen3::Eigen) # for caffe2 operators
+
if(USE_PRECOMPILED_HEADERS)
target_precompile_headers(torch_cpu PRIVATE
"$<$<COMPILE_LANGUAGE:CXX>:ATen/ATen.h>")
@@ -990,7 +993,7 @@ if(USE_CUDA OR USE_ROCM)
target_include_directories(${TORCHLIB_FLAVOR} PRIVATE "${CMAKE_BINARY_DIR}/include")
endif()

-if(NOT MSVC AND USE_XNNPACK)
+if(false)
TARGET_LINK_LIBRARIES(torch_cpu PRIVATE fxdiv)
endif()

15 changes: 15 additions & 0 deletions ports/libtorch/fix-sources.patch
@@ -0,0 +1,15 @@
diff --git a/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp b/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp
index 614e274..b59a4d4 100644
--- a/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp
+++ b/aten/src/ATen/native/quantized/cpu/qembeddingbag_prepack.cpp
@@ -8,7 +8,9 @@
#include <torch/library.h>

#include <c10/util/irange.h>
-
+#if defined(USE_FBGEMM)
+#include <fbgemm/QuantUtils.h>
+#endif
torch::class_<EmbeddingPackedParamsBase> register_embedding_params();

/*
179 changes: 179 additions & 0 deletions ports/libtorch/portfile.cmake
@@ -0,0 +1,179 @@
vcpkg_check_linkage(ONLY_DYNAMIC_LIBRARY)

vcpkg_from_github(
OUT_SOURCE_PATH SOURCE_PATH
REPO pytorch/pytorch
REF v1.10.0
SHA512 92b70e6170a7f173c4a9cb29f6cec6dfa598587aa9cf6a620ec861b95da6ea555cbc7285914c0dab6cfc8af320fad4999be4a788acc1f15140664a67ad9dc35d
HEAD_REF master
PATCHES
fix-cmake.patch
fix-sources.patch
use-glog-header.patch
)
file(REMOVE_RECURSE "${SOURCE_PATH}/caffe2/core/macros.h") # We must use generated header files

vcpkg_find_acquire_program(PYTHON3)

x_vcpkg_get_python_packages(
PYTHON_VERSION 3
PYTHON_EXECUTABLE "${PYTHON3}"
PACKAGES typing-extensions pyyaml
OUT_PYTHON_VAR PYTHON3
)
# Make the configure step use same Python executable
get_filename_component(PYTHON_DIR "${PYTHON3}" PATH)
vcpkg_add_to_path(PREPEND "${PYTHON_DIR}")

# Editing ${SOURCE_PATH}/cmake/Dependencies.cmake makes HORRIBLE readability...
file(COPY "${CMAKE_CURRENT_LIST_DIR}/vcpkg-dependencies.cmake" DESTINATION "${SOURCE_PATH}/cmake")

vcpkg_check_features(OUT_FEATURE_OPTIONS FEATURE_OPTIONS
FEATURES
dist USE_DISTRIBUTED # MPI, Gloo, TensorPipe
zstd USE_ZSTD
fftw3 USE_FFTW
fftw3 AT_FFTW_ENABLED
fbgemm USE_FBGEMM
opencv USE_OPENCV
tbb USE_TBB
leveldb USE_LEVELDB
opencl USE_OPENCL
cuda USE_CUDA
cuda USE_CUDNN
cuda USE_NCCL
cuda USE_SYSTEM_NCCL
cuda USE_NVRTC
cuda AT_CUDA_ENABLED
cuda AT_CUDNN_ENABLED
vulkan USE_VULKAN
vulkan USE_VULKAN_WRAPPER
vulkan USE_VULKAN_SHADERC_RUNTIME
vulkan USE_VULKAN_RELAXED_PRECISION
nnpack USE_NNPACK # todo: check use of `DISABLE_NNPACK_AND_FAMILY`
nnpack AT_NNPACK_ENABLED
xnnpack USE_XNNPACK
xnnpack USE_SYSTEM_XNNPACK
qnnpack USE_QNNPACK # todo: check use of `USE_PYTORCH_QNNPACK`
)

if(CMAKE_CXX_COMPILER_ID MATCHES GNU)
list(APPEND FEATURE_OPTIONS -DUSE_NATIVE_ARCH=ON)
endif()
if("dist" IN_LIST FEATURES)
if(VCPKG_TARGET_IS_LINUX OR VCPKG_TARGET_IS_OSX)
list(APPEND FEATURE_OPTIONS -DUSE_TENSORPIPE=ON)
endif()
if(VCPKG_TARGET_IS_WINDOWS OR VCPKG_TARGET_IS_OSX)
list(APPEND FEATURE_OPTIONS -DUSE_LIBUV=ON)
endif()
endif()

if(VCPKG_TARGET_IS_LINUX)
# Linux package `libnuma-dev`
find_library(Numa_LIBPATH NAMES numa PATHS "/usr/lib" "/usr/lib/x86_64-linux-gnu")
Member:
@Neumann-A suggests in Discord to hook this up to the port numactl.

if(Numa_LIBPATH)
message(STATUS "Detected numa: ${Numa_LIBPATH}")
list(APPEND FEATURE_OPTIONS -DUSE_NUMA=ON)
else()
message(FATAL_ERROR "To enable USE_NUMA build option, install 'libnuma-dev' package")
endif()
else()
list(APPEND FEATURE_OPTIONS -DUSE_NUMA=OFF)
endif()
Contributor:
I don't get why vcpkg has the port numactl but isn't using it as a dependency for numa. In #27279 I added a port libnuma, only to find out that numactl is essentially the same.
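Hooking this up to a vcpkg port, as suggested above, could look roughly like the following (a hypothetical sketch — it assumes a `numactl` port that installs libnuma into the triplet tree and a matching `"numactl"` dependency in `vcpkg.json`; neither is part of this PR):

```cmake
# Hypothetical portfile fragment: take libnuma from an installed vcpkg
# port instead of probing system paths such as /usr/lib.
if(VCPKG_TARGET_IS_LINUX)
    # With a declared dependency, the library must exist in the installed
    # tree; a failed lookup is a packaging bug, not a missing OS package.
    find_library(NUMA_LIBPATH
        NAMES numa
        PATHS "${CURRENT_INSTALLED_DIR}/lib"
        NO_DEFAULT_PATH
        REQUIRED
    )
    list(APPEND FEATURE_OPTIONS -DUSE_NUMA=ON)
else()
    list(APPEND FEATURE_OPTIONS -DUSE_NUMA=OFF)
endif()
```

Using `NO_DEFAULT_PATH` keeps the build independent of whatever happens to be installed on the build machine.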


if(VCPKG_TARGET_IS_OSX)
list(APPEND FEATURE_OPTIONS -DBLAS=Accelerate) # Accelerate.framework will be used for Apple platforms
Member:
I need to ask folks about BLAS.

Contributor Author:
BLAS (Basic Linear Algebra Subprograms): I heard most projects use Eigen3 (port eigen3), but this project also supports Accelerate.framework, which ships with Apple's SDK.

Member:
The problem, in my experience, is that everyone has different ideas about what the defaults should be: we get PRs changing the default to whatever people want, we don't understand ourselves why one would pick one over another, and we don't have a clearly documented policy on which one we use.

FindBLAS.cmake that comes with CMake is similarly confused.

I left this comment as a note to myself to follow up with the team and figure out where we left the BLAS situation

Contributor:
The BLAS vendor should be forced via the blas metaport (see #24327).

Member:
@ras0219-msft @dan-shaw @JavierMatosD @markle11m @AugP @valeriaconde and I discussed this today and have the following notes on BLAS:

If possible, this should gain a dependency on the port "blas", and use the vendor selected by FindBLAS.cmake (via find_package(BLAS REQUIRED)).

We observe that there is special handling for some implementations like Eigen which may make that impossible, since Eigen isn't actually a BLAS implementation; we should explicitly document the relationship here and how/why the "normal way" to get BLAS isn't used.

We also observe that portfile.cmake does:

-DCAFFE2_USE_EIGEN_FOR_BLAS=ON

which suggests maybe BLAS isn't used at all?

else()
list(APPEND FEATURE_OPTIONS -DBLAS=Eigen)
endif()
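For comparison, the approach the reviewers describe — depending on the `blas` metaport and letting CMake's `FindBLAS` choose the vendor — might be sketched as follows (hypothetical; this PR instead selects `-DBLAS=Accelerate`/`-DBLAS=Eigen` directly, and whether libtorch's build would honor `BLAS_LIBRARIES` passed this way is an assumption):

```cmake
# Hypothetical alternative: defer BLAS vendor selection to vcpkg's
# "blas" metaport, which installs some concrete implementation.
find_package(BLAS REQUIRED)  # CMake's FindBLAS module resolves the vendor
message(STATUS "Using BLAS: ${BLAS_LIBRARIES}")
list(APPEND FEATURE_OPTIONS
    -DUSE_BLAS=ON
    "-DBLAS_LIBRARIES=${BLAS_LIBRARIES}"
)
```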

if("tbb" IN_LIST FEATURES)
list(APPEND FEATURE_OPTIONS
-DMKLDNN_CPU_RUNTIME=TBB
)
endif()

if(VCPKG_TARGET_IS_ANDROID)
list(APPEND FEATURE_OPTIONS
-DINTERN_BUILD_MOBILE=ON
-DBUILD_JNI=ON
-DUSE_NNAPI=ON
)
else()
list(APPEND FEATURE_OPTIONS -DINTERN_BUILD_MOBILE=OFF)
endif()

string(COMPARE EQUAL "${VCPKG_CRT_LINKAGE}" "static" USE_STATIC_RUNTIME)

vcpkg_cmake_configure(
SOURCE_PATH "${SOURCE_PATH}"
OPTIONS
${FEATURE_OPTIONS}
-DPython3_EXECUTABLE="${PYTHON3}"
-DCAFFE2_USE_MSVC_STATIC_RUNTIME=${USE_STATIC_RUNTIME}
-DBUILD_CUSTOM_PROTOBUF=OFF -DUSE_LITE_PROTO=OFF
-DBUILD_TEST=OFF -DATEN_NO_TEST=ON
-DUSE_SYSTEM_LIBS=ON
-DBUILD_PYTHON=OFF
-DUSE_GLOO=${VCPKG_TARGET_IS_LINUX}
-DUSE_MPI=${VCPKG_TARGET_IS_LINUX} # Linux package `libopenmpi-dev`
Member:
Do we need to have this installed in our VMs? Need to confirm what happens if this package is missing. (If missing it needs to fail to build not silently do nothing to preserve build-path-independence)

Member:
It looks like vcpkg-dependencies.cmake says REQUIRED.

Contributor Author:
The internal log is too complicated. I think we can turn it OFF if the VM needs too many changes.

Contributor Author:
Let me check whether it's REQUIRED once more.

Contributor:
What about the mpi port? Also, this should probably be a feature instead.

-DUSE_METAL=OFF
-DUSE_PYTORCH_METAL=OFF
-DUSE_PYTORCH_METAL_EXPORT=OFF
-DUSE_BLAS=ON # Eigen, MKL, or Accelerate
-DUSE_GFLAGS=ON
-DUSE_GLOG=ON
-DUSE_LMDB=ON
-DUSE_ROCKSDB=OFF
-DUSE_OPENMP=OFF
-DUSE_OBSERVERS=OFF
-DUSE_PYTORCH_QNNPACK=OFF
-DUSE_KINETO=OFF
-DUSE_ROCM=OFF
-DUSE_DEPLOY=OFF
-DUSE_BREAKPAD=OFF
-DUSE_FFTW=OFF
-DCAFFE2_USE_EIGEN_FOR_BLAS=ON
# BLAS=MKL not supported
-DUSE_MKLDNN=OFF
-DUSE_MKLDNN_CBLAS=OFF
-DCAFFE2_USE_MKL=OFF
-DCAFFE2_USE_MKLDNN=OFF
-DAT_MKL_ENABLED=OFF
-DAT_MKLDNN_ENABLED=OFF
OPTIONS_RELEASE
-DBUILD_LIBTORCH_CPU_WITH_DEBUG=ON
MAYBE_UNUSED_VARIABLES
USE_NUMA
USE_SYSTEM_BIND11
USE_VULKAN_WRAPPER
MKLDNN_CPU_RUNTIME
)
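If `USE_MPI` were made a feature, as the contributor comment above suggests, the portfile change might be as small as this (hypothetical sketch — the feature name `mpi` and a corresponding `"mpi"` entry in `vcpkg.json` are assumptions, not part of this PR):

```cmake
# Hypothetical: expose MPI as an opt-in feature instead of forcing it
# on for Linux. vcpkg_check_features maps the feature to the CMake flag,
# so USE_MPI stays OFF unless the user installs libtorch[mpi].
vcpkg_check_features(OUT_FEATURE_OPTIONS MPI_OPTIONS
    FEATURES
        mpi USE_MPI
)
list(APPEND FEATURE_OPTIONS ${MPI_OPTIONS})
```

In the actual portfile this fragment would have to sit before the `vcpkg_cmake_configure` call so the option reaches the configure step.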
vcpkg_cmake_build(TARGET __aten_op_header_gen) # explicit codegen is required
vcpkg_cmake_install()
vcpkg_copy_pdbs()

file(REMOVE_RECURSE "${CURRENT_PACKAGES_DIR}/debug/include"
"${CURRENT_PACKAGES_DIR}/debug/share"
"${CURRENT_PACKAGES_DIR}/share"
"${CURRENT_PACKAGES_DIR}/include/c10/test/core/impl"
"${CURRENT_PACKAGES_DIR}/include/c10/hip"
"${CURRENT_PACKAGES_DIR}/include/c10/benchmark"
"${CURRENT_PACKAGES_DIR}/include/c10/test"
"${CURRENT_PACKAGES_DIR}/include/c10/cuda"
"${CURRENT_PACKAGES_DIR}/include/c10d/quantization"
"${CURRENT_PACKAGES_DIR}/include/caffe2/ideep/operators/quantization"
"${CURRENT_PACKAGES_DIR}/include/caffe2/python"
"${CURRENT_PACKAGES_DIR}/include/caffe2/share/contrib/depthwise"
"${CURRENT_PACKAGES_DIR}/include/caffe2/share/contrib/nnpack"
"${CURRENT_PACKAGES_DIR}/include/caffe2/mobile"
"${CURRENT_PACKAGES_DIR}/include/caffe2/experiments/python"
"${CURRENT_PACKAGES_DIR}/include/caffe2/test"
Member:
Does this mean it got vendored and we need to fix it to use the port?

Member:
OK, it isn't quite "vendored"; the projects are one and the same. According to upstream:

## Caffe2 notes

In 2018, we merged Caffe2 into the PyTorch source repository. While the
steady state aspiration is that Caffe2 and PyTorch share code freely,
in the meantime there will be some separation.

This either needs to (1) use the caffe2 from that port, or (2) caffe2 needs to be marked "deprecated" in favor of this one?

"${CURRENT_PACKAGES_DIR}/include/caffe2/utils/hip"
"${CURRENT_PACKAGES_DIR}/include/caffe2/opt/nql/tests"
"${CURRENT_PACKAGES_DIR}/include/caffe2/contrib"
"${CURRENT_PACKAGES_DIR}/include/caffe2/core/nomnigraph/Representations"
"${CURRENT_PACKAGES_DIR}/include/torch/csrc"
JackBoosY marked this conversation as resolved.
)
file(INSTALL "${SOURCE_PATH}/LICENSE" DESTINATION "${CURRENT_PACKAGES_DIR}/share/${PORT}" RENAME "copyright")
37 changes: 37 additions & 0 deletions ports/libtorch/use-glog-header.patch
@@ -0,0 +1,37 @@
diff --git a/c10/util/Logging.cpp b/c10/util/Logging.cpp
index bf7bd98..2cedc9a 100644
--- a/c10/util/Logging.cpp
+++ b/c10/util/Logging.cpp
@@ -4,6 +4,7 @@
#ifdef FBCODE_CAFFE2
#include <folly/synchronization/SanitizeThread.h>
#endif
+#include <glog/logging.h>

#include <algorithm>
#include <cstdlib>
@@ -187,23 +188,13 @@ C10_DEFINE_int(
google::GLOG_WARNING,
"The minimum log level that caffe2 will output.");

-// Google glog's api does not have an external function that allows one to check
-// if glog is initialized or not. It does have an internal function - so we are
-// declaring it here. This is a hack but has been used by a bunch of others too
-// (e.g. Torch).
-namespace google {
-namespace glog_internal_namespace_ {
-bool IsGoogleLoggingInitialized();
-} // namespace glog_internal_namespace_
-} // namespace google
-
namespace c10 {
bool InitCaffeLogging(int* argc, char** argv) {
if (*argc == 0)
return true;
#if !defined(_MSC_VER)
// This trick can only be used on UNIX platforms
- if (!::google::glog_internal_namespace_::IsGoogleLoggingInitialized())
+ if (!::google::IsGoogleLoggingInitialized())
#endif
{
::google::InitGoogleLogging(argv[0]);