
Hi! I'm Brian Qu (officially Bowen Qu)! 👋

🙋‍♂️ About Me:

  • 👨‍🦰 I'm currently a Master of Science candidate at Peking University (PKU).
  • 👦 Before that, I received an Honours Bachelor's degree from Huazhong University of Science and Technology (HUST).
  • ❤️‍🔥 Now, I am interested in multi-modal learning, especially Multimodal Large Language Models (MLLMs).

😋 Projects:

  • 💥 In the summer of 2023, I took part in the OSPP (Open Source Promotion Plan) Summer Camp, with the honor of contributing to MMPretrain to build a prompt-based classifier.
    • The implementation of the zero-shot CLIP classifier has been merged into the main branch (see the usage sketch after this list). Codebase
    • The implementation of RAM (Recognize Anything Model) has been merged into the dev branch. Welcome to try it with the Gradio WebUI on MMPretrain! Codebase
  • 💥 2023.11 - 2024.5: MPP-Qwen-Next is released! All training is conducted on 3090/4090 GPUs. To keep poverty (24GB of VRAM) from limiting imagination, I implemented an MLLM based on DeepSpeed pipeline parallelism (a minimal sketch of the idea follows this list). The repo supports {video/image/multi-image} {single/multi-turn} conversations. Give it a try!
  • 💥 2024.9: We release ChartMoE, a multimodal large language model with a Mixture-of-Experts connector, for advanced chart 1) understanding, 2) replotting, 3) editing, 4) highlighting, and 5) transformation.
  • 💥💥💥 2024.10: I am really fortunate to be involved in the development of Aria. Aria is a native multimodal MoE model, with best-in-class performance across multimodal, language, and coding tasks!
  • 🎉🎉🎉 2025.1: ChartMoE is accepted by ICLR 2025!
  • 🎉🎉🎉 2025.2: ChartMoE is selected as an ICLR 2025 Oral (top 1.8%)!
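
For a quick feel of the zero-shot CLIP classifier merged into MMPretrain, here is a minimal usage sketch with MMPretrain's high-level inference API. The model name below is only an illustrative assumption; use `list_models` to find the exact names shipped in your MMPretrain version.

```python
from mmpretrain import ImageClassificationInferencer, list_models

# Discover the CLIP-related models available in this MMPretrain install;
# the zero-shot classification configs should appear among them.
print(list_models(pattern='*clip*'))

# NOTE: this model name is an illustrative assumption and may differ
# between MMPretrain versions; pick a real one from list_models() above.
inferencer = ImageClassificationInferencer(model='clip-vit-base-p16_zeroshot-cls_cifar100')

# Classify a local image; the result holds the predicted class and score.
result = inferencer('demo/bird.JPEG')[0]
print(result['pred_class'], result['pred_score'])
```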
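
And here is a rough sketch of what "DeepSpeed pipeline parallelism" means in practice, to make the MPP-Qwen-Next bullet above concrete. This is not the actual MPP-Qwen-Next code; the layer stack, stage count, and DeepSpeed config values are placeholder assumptions, and only the `PipelineModule` / `deepspeed.initialize` pattern is the point.

```python
# Illustrative only: the general shape of DeepSpeed pipeline parallelism,
# not the actual MPP-Qwen-Next implementation.
# Launch with a distributed launcher, e.g.: deepspeed --num_gpus 2 pp_sketch.py
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule, LayerSpec

deepspeed.init_distributed()

# A toy stack of layers expressed as LayerSpecs, so DeepSpeed can partition
# them across pipeline stages (GPUs) instead of replicating the whole model.
layers = [LayerSpec(nn.Linear, 1024, 1024) for _ in range(8)]

model = PipelineModule(
    layers=layers,
    num_stages=2,                   # split the layer stack across 2 GPUs
    loss_fn=nn.CrossEntropyLoss(),  # loss is computed on the last stage
)

# deepspeed.initialize wraps the module in a pipeline engine; training then
# calls engine.train_batch(data_iter) instead of a manual forward/backward loop.
engine, _, _, _ = deepspeed.initialize(
    model=model,
    config={
        "train_micro_batch_size_per_gpu": 1,
        "gradient_accumulation_steps": 4,  # micro-batches pipelined per step
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    },
)
```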


Coobiw's GitHub stats

Pinned Repositories

  1. rhymes-ai/Aria: Codebase for Aria - an Open Multimodal Native MoE (Jupyter Notebook, 1k stars, 86 forks)
  2. IDEA-FinAI/ChartMoE: [ICLR2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding (Jupyter Notebook, 77 stars, 2 forks)
  3. MPP-LLaVA: Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Support [video/image/multi-image] {sft/conversations}. Don't let the poverty limit your imagination! Tr… (Jupyter Notebook, 442 stars, 23 forks)
  4. IP-IQA: [ICME2024, Official Code] for paper "Bringing Textual Prompt to AI-Generated Image Quality Assessment" (Python, 19 stars)
  5. TriVQA: [CVPRW2024, Official Code] for paper "Exploring AIGC Video Quality: A Focus on Visual Harmony, Video-Text Consistency and Domain Distribution Gap" (12 stars)
  6. open-mmlab/mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark (Python, 3.6k stars, 1.1k forks)
