Popular repositories
- Waking-Up (forked from wolverinn/Waking-Up): A comprehensive collection of computer-science fundamentals interview questions (computer networks, operating systems, databases, Git, ...), with detailed follow-up questions and answers. Everything uses a "question + follow-up + answer" format, ready to use as-is and aimed directly at big-tech interviews 🚀; useful for mock interviews, pre-interview review, and rapid short-term interview preparation...
- Prompt (forked from purduedb/Prompt): Dynamic data partitioning for distributed micro-batch stream-processing systems. Scala.
- BigDL (forked from intel/ipex-llm): Building large-scale AI applications for distributed big data. Python.
- occlum (forked from occlum/occlum): Occlum is a memory-safe, multi-process library OS for Intel SGX. Rust.
- BigDL-PPML-Azure-Occlum-Example (forked from intel/BigDL-PPML-Azure-Occlum-Example). Shell.
225 contributions in the last year
Activity overview
Contribution activity
April 2025
Created 1 commit in 1 repository
Opened 5 pull requests in 2 repositories

analytics-zoo/vllm (3 merged, 1 closed):
- Enable more models on 083 (Apr 27)
- optimize qwen2.5-vl next_token (Apr 25)
- Enable more models on 083 (Apr 17)
- Update 083 (Apr 10)

intel/ipex-llm (1 merged):
- enable glm4v and gemma-3 on vllm 083 (Apr 27)
Reviewed 3 pull requests in 1 repository

intel/ipex-llm (3 pull requests):
- Update README.md (Apr 27)
- Update vLLM patch (Apr 10)
- Update vLLM patch (Apr 10)
Created an issue in vllm-project/vllm that received 2 comments:
- [Bug]: GLM-4-32B-0414-FP8 output !!!!! error (tensor is nan)
  "Your current environment image: vllm/vllm-openai:v0.8.4 Update glm4.py use this pr. docker rm -f test-cuda sudo docker run -itd \ --gpus 2 \ --name…"