Document how to do cross-compilation #276
Comments
Hey @arlyon, I am trying this myself. Did you manage to compile Rust to be used in Docker in the end?
Unfortunately, no. My main motivation is to use this with skaffold, and I have resorted to maintaining my own Dockerfiles instead. I may take another crack at it when I get the time, so if you make any headway, feel free to include it here.
#770 also seems related to this.
It seems https://github.com/GoogleContainerTools/distroless/pull/462/files is a nice example of doing what you are asking about (even though it uses quite an old version of rules_rust). Is there anything more to do here? One thing I think we could do is document how to cross-compile Rust locally (without Docker, just locally, so the docs are not distracted by Docker).
Volunteers welcomed :)
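For anyone landing here from search, the Docker-free core of local cross-compilation is: (1) a Rust toolchain registered with an extra target triple, (2) a `platform` describing the target OS/CPU, and (3) `--platforms` at build time. A minimal sketch follows; the triple, labels, and load path are illustrative and follow current rules_rust docs, so older releases (e.g. `@io_bazel_rules_rust`) may differ:

```starlark
# WORKSPACE (sketch): register host toolchains that can also target Linux.
load("@rules_rust//rust:repositories.bzl", "rules_rust_dependencies", "rust_register_toolchains")

rules_rust_dependencies()

rust_register_toolchains(
    edition = "2021",
    # In addition to the host triple, also provide a toolchain for this target triple.
    extra_target_triples = ["x86_64-unknown-linux-gnu"],
)
```

Building is then `bazel build --platforms=//platforms:linux_x86_64 //my:target`, where `//platforms:linux_x86_64` is a `platform` target carrying `@platforms//os:linux` and `@platforms//cpu:x86_64` constraints; linking additionally needs a C++ toolchain that targets Linux.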
I have an example here of doing cross-compilation on a macOS host to a Linux musl target. It leverages the platforms infrastructure. Not sure if this is the correct approach, but it seems to be working. The example omits any dependencies, which complicate things a bit. In particular, if we want to compile the target for multiple platforms (e.g., to run unit tests on the host), it may be necessary to sprinkle. The biggest missing piece, I think, is the lack of support for
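As a generic illustration of the per-platform handling mentioned above (not taken from that example; the dependency label is made up), platform-specific dependencies can be gated with `select()` on constraint values:

```starlark
load("@rules_rust//rust:defs.bzl", "rust_library")

rust_library(
    name = "mylib",
    srcs = ["lib.rs"],
    deps = select({
        # Only pull in the Linux-specific dependency when targeting Linux.
        "@platforms//os:linux": ["//third_party:linux_only_dep"],
        "//conditions:default": [],
    }),
)
```

This keeps the same target buildable both for the host (e.g. for unit tests) and for the cross-compilation platform.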
I'm trying to cross-compile from macOS intel -> macOS arm64. I had to set up this
And then set this up in the root
This appeared to work when called with the right arguments. Is there something about deps that still needs to be solved before cross-compilation will work?
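For the macOS Intel -> macOS arm64 case, the general shape of such a setup (a sketch, not the poster's exact configuration; the load path, repository name, and version are illustrative) is a repository set whose exec triple is the Intel host and whose extra target triple is the arm64 target:

```starlark
# WORKSPACE (sketch)
load("@rules_rust//rust:repositories.bzl", "rust_repository_set")

rust_repository_set(
    name = "rust_darwin_x86_64_to_arm64",
    edition = "2021",
    exec_triple = "x86_64-apple-darwin",
    extra_target_triples = ["aarch64-apple-darwin"],
    version = "1.61.0",
)
```

combined with a `platform` carrying `@platforms//os:macos` and `@platforms//cpu:aarch64` constraints that is selected via `--platforms`.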
🤦 This error is caused because I needed to specify
I found the example from @duarten very useful and got it working for basic crates when cross-compiling from MacOS to Linux. However, when a crate contains custom Rust macros, it throws an error like this:
More info about the issue here: duarten/rust-bazel-cross#2
FWIW I started down this path, learning about transitions and whatnot, and then realized, oh, I don't think I need that. I ended up writing the following two entries in my WORKSPACE:

```starlark
rust_repository_set(
    name = "rust_macos_arm64_linux_tuple",
    edition = "2021",
    exec_triple = "aarch64-apple-darwin",
    extra_target_triples = ["x86_64-unknown-linux-gnu"],
    version = "1.61.0",
)

rust_repository_set(
    name = "rust_macos_x86_64_linux_tuple",
    edition = "2021",
    exec_triple = "x86_64-apple-darwin",
    extra_target_triples = ["x86_64-unknown-linux-gnu"],
    version = "1.61.0",
)
```

...and everything just started working. (I put both in because we have folks on M1 Macs and folks on Intel Macs.) I suspect that this was simple for us because we had already done the work to configure a clang-based toolchain that could target Linux. But the problems seem to be separable, and this was not obvious to me from the discussion above. Step 1: Configure a cc toolchain that runs on macOS and targets Linux.
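With both of those pieces in place (the Rust repository sets above plus a Linux-targeting cc toolchain), the Linux output can be selected per invocation with a matching target platform, e.g. `bazel build --platforms=//platforms:linux_x86_64 //my:binary` (the platform label here is illustrative).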
@DeCarabas any config/code you can share on this exact cc toolchain setup?
* Use a local repo to debug - rules_rust needs generator_urls provided. Use the GitHub release page - needs to add the edition attr. to the rust_binary/_image method for some reason.
* Toolchain not found for cpp (or rust).
* Can build and run rust_binary on its own, but with rust_image it fails with a "toolchain not found" error.
* The image_transition target seems to be missing the os/cpu target.
* Disabling image_transition avoids this issue, but instead causes a Linux "exec format" error. Need to choose the amd64 platform target? - rust_binary can run without selecting the amd64 platform, but rust_image cannot.
* It seems the Rust rules do not provide a platform definition for our case. Manually adding a custom platform for the linux/x86_64 target and setting it via --target_platform resolves the execution platform for Rust. However, we still have a resolution error with the cpp toolchain, because it seems there is no linux/x86_64 toolchain?
* The execution (also host?) platform is aarch64/osx per local_config.
* Rust's toolchain doesn't have exec_compatible_with aarch64/osx and target_compatible_with amd64/linux.
* The only reasonable Rust toolchain which can run in x86 Docker is one with an x86 target for both exec_/target_compatible_with. But then we still miss a cpp toolchain for that combination.
* cpp toolchains are auto-generated by cc_configure methods which use local_config information coupled with the M1 Mac, so they do not contain x86/linux patterns.
* Another solution is to replace the Docker base image with an arm64 image. The image base can be overridden via the 'base' option in rust_image.
* Even using an aarch64 Docker image and an arm64 binary, it gives an "Exec format error". Why? - Turned out the binary has Mach-O format (which is only for M1?), not ELF format.
* cc_toolchain does not have an x86- or aarch64-Linux target-compatible toolchain, only x86 or aarch64 with osx.
  - cc_toolchain does not have a non-osx exec_compatible toolchain, because Bazel auto-detects the osx local env and makes osx-only toolchains.
  - Check cc_configure.bzl.
* Rust only has x86,linux -> x86,linux or aarch64,linux -> aarch64,linux exec_/target_compatible combinations.
* A normal build outputs Mach-O format, which cannot run on Docker images, which are mostly Linux-based and expect ELF format.
* image_transition generates new settings with proper cpu/os parameters but actually sets nothing. Not sure why.
* The only way to solve this is to write a custom toolchain for cpp, or generate one from the official implementation, for osx/arm64 -> linux/amd64 or arm64 cross-compilation.
* Need to write an osx/aarch64 -> linux/amd64 cross-compile toolchain for both Rust and cpp.
* ref. bazelbuild/rules_rust#276
This is all really unclear. I can kind of see that you use
(Though the docs for That will presumably make sure there's a Rust toolchain that can compile to that There's no parameter to How can I have a workspace that compiles different things for different targets?
OK, it seems like Bazel's model is that there is only one target platform per compilation. Unfortunate. I guess that's all Google really needs internally. Apparently you need to use custom transitions to solve it, but that looks very complicated.
It's worth noting that, i.e. you can do:

FWIW there was quite a bit of discussion at BazelCon this week about folks, including Google, wanting it to be easier to have different targets specify in their BUILD files what platforms they should build for, and having that properly respected, but work needs to be done. I think bazelbuild/bazel#14669 is a reasonable tracking issue here, which links to https://groups.google.com/g/bazel-dev/c/QK7CI__ReDM, which concluded: "Needs more design work".
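In other words, a single invocation can be pointed at a different target platform without editing any BUILD files, with something like `bazel build --platforms=//platforms:linux_arm64 //my:target` (the platform label is illustrative; it should be whatever `platform` you have defined for the desired OS/CPU).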
I agree that this is more complicated than it needs to be. FWIW I currently have roughly the following code to do this. In a `.bzl` file:

```starlark
load("@bazel_skylib//lib:paths.bzl", "paths")
def _transition_to_impl(ctx):
    # We need to forward the DefaultInfo provider from the underlying rule.
    # Unfortunately, we can't do this directly, because Bazel requires that the executable to run
    # is actually generated by this rule, so we need to symlink to it, and generate a synthetic
    # forwarding DefaultInfo.
    result = []
    binary = ctx.attr.binary[0]
    default_info = binary[DefaultInfo]
    new_executable = None
    files = default_info.files
    original_executable = default_info.files_to_run.executable
    data_runfiles = default_info.data_runfiles
    default_runfiles = default_info.default_runfiles

    if original_executable:
        new_executable_name = ctx.attr.basename if ctx.attr.basename else original_executable.basename

        # In order for the symlink to have the same basename as the original
        # executable (important in the case of proto plugins), put it in a
        # subdirectory named after the label to prevent collisions.
        new_executable = ctx.actions.declare_file(paths.join(ctx.label.name, new_executable_name))
        ctx.actions.symlink(
            output = new_executable,
            target_file = original_executable,
            is_executable = True,
        )
        files = depset(direct = [new_executable])
        data_runfiles = data_runfiles.merge(ctx.runfiles([new_executable]))
        default_runfiles = default_runfiles.merge(ctx.runfiles([new_executable]))

    result.append(
        DefaultInfo(
            files = files,
            data_runfiles = data_runfiles,
            default_runfiles = default_runfiles,
            executable = new_executable,
        ),
    )
    return result

def _transition_to_linux_arm64_transition_impl(settings, attr):
    return {"//command_line_option:platforms": [
        Label("//some:linux_arm64"),
    ]}

_transition_to_linux_arm64_transition = transition(
    implementation = _transition_to_linux_arm64_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:platforms"],
)
linux_arm64_binary = rule(
    implementation = _transition_to_impl,
    attrs = {
        "basename": attr.string(),
        "binary": attr.label(allow_files = True, cfg = _transition_to_linux_arm64_transition),
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
    executable = True,
)
```

and then in a BUILD file you can write:

```starlark
rust_binary(
    name = "platform_generic_binary",
    ...
)

linux_arm64_binary(
    name = "my_binary_for_linux_arm64",
    binary = ":platform_generic_binary",
)
```
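The design point of the wrapper rule above is that the `cfg` transition on the `binary` attribute rebuilds `:platform_generic_binary` in the Linux arm64 configuration whenever it is consumed through `:my_binary_for_linux_arm64`, so the binary's own BUILD file stays platform-agnostic. It does assume that a `platform` named `//some:linux_arm64` exists and that Rust and C++ toolchains matching it are registered.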
re: #276 (comment) Doesn't this require you to define custom
Yes, you would need appropriate toolchains set up for both Rust and C++.
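For reference, the `//some:linux_arm64` label used in the transition above can point at an ordinary `platform` target; a minimal sketch:

```starlark
# some/BUILD.bazel — the target platform the transition switches to.
platform(
    name = "linux_arm64",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:aarch64",
    ],
)
```

Toolchain resolution then needs a Rust toolchain and a C++ toolchain whose target constraints match this platform.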
I ran across this thread while trying to get Rust cross-compiling from macOS x86_64 to Linux x86_64. With the help of musl-toolchain (thanks @illicitonion!), I've gotten a pretty clean bzlmod-only configuration for Rust cross-compilation working. It feels like this tooling is really coming together (which I quite appreciate). I figured I'd comment here so others running across the thread (from search, etc.) have another pointer to help them out in their cross-compilation journey ;). Example of musl-toolchain w/ WORKSPACE already in rules_rust (so there's a link from here to there).
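For readers on bzlmod, the rough shape of such a configuration is a MODULE.bazel along these lines (a sketch only: the module versions, the musl toolchain module, and some attribute names are assumptions — check the current rules_rust docs and the Bazel Central Registry):

```starlark
# MODULE.bazel (sketch; versions are placeholders)
bazel_dep(name = "rules_rust", version = "0.49.3")
# A hermetic musl-based C/C++ toolchain for linking the Linux target (assumed module name).
bazel_dep(name = "toolchains_musl", version = "0.1.16", dev_dependency = True)

rust = use_extension("@rules_rust//rust:extensions.bzl", "rust")
rust.toolchain(
    edition = "2021",
    versions = ["1.76.0"],
    # Also provide a toolchain that can target Linux with musl.
    extra_target_triples = ["x86_64-unknown-linux-musl"],
)
use_repo(rust, "rust_toolchains")

register_toolchains("@rust_toolchains//:all")
```

The Linux binary is then selected with `--platforms` pointing at a platform whose constraints match the musl triple.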
This PR provides documentation of Bazelmod and several code examples that address a number of issues related to Bazelmod.

Preview of the documentation: https://github.com/marvin-hansen/rules_rust/blob/main/docs/crate_universe_bzlmod.md

First and foremost, it paves the way for a meaningful update to the Bazelmod documentation that references these and existing code examples. This touches at least the following issues:

* #2670
* #2181

The compile_opt example addresses or resolves:

* #515
* #2701

The musl_cross_compiling example addresses or resolves:

* #390
* #276

The oci_container example does not relate to any open issue, although the tokio example in it gives a nice end-to-end example, so this definitely helps those looking for something non-trivial.

The proto example addresses or resolves:

* #2668
* #302
* #2534
* Possibly a few more if I were to search longer

Formalities:

* I've signed the CLA
* I've signed all commits

Signed-off-by: Marvin Hansen <[email protected]>
Co-authored-by: Daniel Wagner-Hall <[email protected]>
The README mentions how to cross-compile for wasm, but doesn't explain cross-compilation for any other platforms.
I am using rules_rust in conjunction with rules_docker and would like to compile with macOS as the host, targeting Linux. Currently, without modifications, the Docker image is built for macOS and fails on launch:
My current attempt adds an entry to the .bazelrc file which specifies the platform for the image target.
The platform is defined as such:
The error on build is
Edit
I have loaded a custom toolchain in the WORKSPACE which seems like a step in the right direction but I am still hitting the same error.
And updated .bazelrc, having tried both
@io_bazel_rules_rust//rust/platform:linux
and
//server:docker_platform
for the platforms argument.
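For readers reconstructing the setup described above: a platform of the kind referenced by //server:docker_platform would typically look like the following (a sketch; the constraints assume a linux/amd64 base image):

```starlark
# server/BUILD.bazel (sketch)
platform(
    name = "docker_platform",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
)
```

paired with a .bazelrc line along the lines of `build:docker --platforms=//server:docker_platform` and builds invoked with `--config=docker`. As the rest of the thread shows, a Rust toolchain registered with the matching extra target triple and a Linux-targeting C++ toolchain are also needed for toolchain resolution to succeed.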