
Windows support #4045

Closed
wants to merge 7 commits into from

Conversation

mantaionut

@mantaionut mantaionut commented May 31, 2024

Implemented support for Windows based on #2738, using MSVC, so that torch.compile works on GPU on Windows (pytorch/pytorch#122094).
LLVM was built from source directly, following the documentation.
Tested by running the unit tests. The failures also happen on my Linux machine, most of them due to insufficient GPU memory.

=== 32 failed, 9811 passed, 1530 skipped, 91 warnings in 1192.69s (0:19:52) ===

Running the unit test from https://triton-lang.org/main/getting-started/tutorials/03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py I got:

[Screenshot 2024-06-07 165223]

  • PR Description is written in clear, idiomatic English and follows the
    rules for a good PR description.

    (The LLM of your choice can help copyedit your PR description. You can even
    give it your whole patch to analyze.)

  • Pre-commit checks pass.

    pre-commit install
    pre-commit run --all-files
  • Tests have been added and/or updated.

    • For changes to the backend: /test/ (for lit), /unittest/ (for
      gtest), or occasionally end-to-end tests like in
      /python/test/unit/language/test_core.py.
    • For changes to the frontend: /python/test/
  • Documentation

    • The code contains comments where appropriate, written in clear,
      idiomatic English. Again, an LLM can help.
    • If appropriate, the Triton documentation has been updated.

@ptillet
Collaborator

ptillet commented May 31, 2024

Hello! Thank you for the PR but we don't have the bandwidth to commit to supporting Windows at this time. Would you mind maintaining a fork?

 * based on triton-lang#2465
 * manually applied, rebased, fix lint errors
 * use set_target_properties(), cleanup for windows
 * remove '/A' platform option to use windows ninja
 * remove unknown option '/m'
 * use sysconfig.get_config_var() to get the path of python*.lib
 * clang fix for windows
 * remove '-fPIC' for windows clang
 * fix download_and_copy() to support windows
 * add "exe" extension for windows
 * use "pyd" extension for windows to make importlib work
 * rework for latest triton (2024/01/14)

Original-author-by: Andrei Gheorghe <[email protected]>
Signed-off-by: Won-Kyu Park <[email protected]>
 * based on Windows support PR triton-lang#2456 by @andreigh
 * WIN32 fix using LoadLibrary
 * win32 fix _path_to_binary()
 * add library_dir, include_dir for win32
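The sysconfig-based lookup mentioned in the commit list might look roughly like this (a hypothetical sketch, not the PR's actual code; the helper name is illustrative):

```python
import os
import sysconfig


def python_lib_dir():
    """Locate the directory holding the Python link library.

    On Windows, the import library (python3xx.lib) lives under
    <prefix>\\libs rather than in LIBDIR, which may be unset there,
    so fall back to the 'libs' folder under the installation prefix.
    """
    libdir = sysconfig.get_config_var("LIBDIR")
    if libdir and os.path.isdir(libdir):
        return libdir
    return os.path.join(sysconfig.get_paths()["data"], "libs")


print(python_lib_dir())
```

On Linux this simply returns LIBDIR; the fallback branch only matters for Windows builds.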
@mantaionut mantaionut marked this pull request as ready for review June 7, 2024 14:50
@ptillet
Collaborator

ptillet commented Jun 7, 2024

(closing the PR following the above comment.)

@parlance-zz

Very disappointing to see this closed without merging the changes.

@FurkanGozukara

It is shameless that this is getting closed. Shame on you, OpenAI. It is your duty to support Windows, and you are closing this one!

Aren't you the ones getting billions from Microsoft, while not supporting Microsoft's biggest product?

Triton is now the only library left that doesn't support Windows!

@FurkanGozukara

Are there any wheels that can be installed on Windows for Python 3.10? Please reply; thank you so much.

@FurkanGozukara

@mantaionut @wkpark do you have instructions anywhere on how to compile your fork, or any precompiled wheels, e.g. for Python 3.10?

I would appreciate it very much.

@ptillet
Collaborator

ptillet commented Jul 26, 2024

There is a big difference between having a commit that compiles and runs tests with MSVC and committing to actually supporting Windows. Merging this PR wouldn't have addressed the root cause behind our decision: the core Triton team doesn't have the bandwidth to fix future Windows issues. This means that we can't have Windows CI -- and that Triton would keep breaking on the main branch -- which wouldn't actually do a service to the community. I think that Windows users will be best served by teaming up to maintain a fork that is guaranteed to work on every commit.

I should have said that more clearly when I closed the PR. My intention was not to dismiss the work that was done by the author of this PR.

@FurkanGozukara

@ptillet currently a major open source app, CogVLM 2, is unusable because OpenAI doesn't try to help people. OpenAI has billions of dollars. My words are for OpenAI. They are also getting billions from Microsoft. I find this situation unacceptable.

xFormers, DeepSpeed, BitsAndBytes and all the others are starting to fully support Windows; Triton is the exception.

@Systemcluster

@ptillet I think there is value in supporting Windows on a "no-effort" basis and letting the community do the rest. You (the Triton team, or external reviewers) only have to review PRs sporadically when something needs to be changed. Make the Windows CI and wheels optional, and in case a release doesn't compile on Windows, downstream users or libraries only need to pin a lower version.

There is a level between "providing support" and "denying contributions" that I think would satisfy both sides.

@FurkanGozukara

@Systemcluster well said

@stellaraccident

stellaraccident commented Aug 8, 2024

There is a big difference between having a commit that compiles and runs tests with MSVC and committing to actually supporting Windows. Merging this PR wouldn't have addressed the root cause behind our decision: the core Triton team doesn't have the bandwidth to fix future Windows issues. This means that we can't have Windows CI -- and that Triton would keep breaking on the main branch -- which wouldn't actually do a service to the community. I think that Windows users will be best served by teaming up to maintain a fork that is guaranteed to work on every commit.

I should have said that more clearly when I closed the PR. My intention was not to dismiss the work that was done by the author of this PR.

One potential other middle ground that we do on adjacent projects is to maintain a post-submit Windows bot (i.e. just for visibility, doesn't block anything). We do that for a variety of less than fully supported platforms. But it lets the community fast follow and plan sane work for making releases. My experience over time is that sometimes eventually (which can be years), such platforms can be promoted to more of a default-on case and maintained without much/any effort.

Native Windows users/communities are used to this kind of dichotomy. It helps them immensely to be able to land basic patches in the main project and have some health signal.

(I don't have a horse in this race -- just sharing an approach that has worked for me on other projects when navigating the issue of Windows and misc config support)
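A non-blocking post-submit job of the kind described above could be sketched as follows (a hypothetical GitHub Actions fragment; the job name and build command are illustrative assumptions, not Triton's actual CI):

```yaml
# Hypothetical post-submit Windows bot: runs after merges to main,
# reports a health signal, and never blocks anything.
name: windows-postsubmit
on:
  push:
    branches: [main]
jobs:
  build-windows:
    runs-on: windows-latest
    continue-on-error: true   # visibility only; failures do not gate main
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -e python   # assumed build entry point
```

The key design choice is `continue-on-error: true`: the community gets a per-commit signal for Windows without imposing any gating burden on the core team.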

@iperov

iperov commented Aug 19, 2024

forget triton, forget pytorch
Jax is the future.

@umarbutler

umarbutler commented Sep 25, 2024

For anyone else wondering whether it's worth your time to install Triton on WSL2, given its overhead, the answer is yes. It shaved 2 days off an ongoing training run, dropping its ETA from 9.5 days on native Windows 11 to 7.5 days.

Just make sure you store your data and scripts inside your WSL2 instance: microsoft/WSL#4197

I've spent far too many hours trying to build Triton 3 myself, including using the repos of @wkpark, @mantaionut and @eaplatanios, but no dice.

@wkpark used to have wheels available, however, they are all expired (unavailable from GitHub and there is no mirror on the Wayback Machine), and they were for Python 3.10 and 3.11, not 3.12.

@ACGNnsj has Python 3.11 and 3.12 wheels available here, however, I have not tested them myself and so cannot vouch for them either way.

Based on the most recent comments made by the maintainers of Triton on the question of its compatibility with Windows, it does not appear that there is any real motivation to invest any degree of effort, however small, into supporting Windows.

Seeing as @xuhancn is hard at work on getting PyTorch Inductor to work on CPU on Windows (pytorch/pytorch#124245), and that it is now possible to have DeepSpeed, bitsandbytes, flash attention and xformers all installed on Windows, I am hopeful that we will get some sort of alternative to Triton, at least for compiling PyTorch models (my primary use for it), that is just as platform-agnostic as the rest of PyTorch.

See also pytorch/pytorch#122094

@FurkanGozukara

It is just a total shame that OpenAI takes tens of billions from Microsoft and does not support Windows, while all the others fully support Windows.

I wish researchers would completely abandon Triton in their research and apps.

@FurkanGozukara

Thank you, OpenAI, for taking tens of billions from Microsoft.
