# Contributing to RubyLLM

## Did you find a bug?

* **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/crmne/ruby_llm/issues).

* If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/crmne/ruby_llm/issues/new). Include a **title and clear description**, relevant information, and a **code sample** demonstrating the issue.

* **Verify it's a RubyLLM bug**, not your application code, before opening an issue.

## Did you write a patch that fixes a bug?

* Open a new GitHub pull request with the patch.

* Ensure the PR description clearly describes the problem and solution. Include the relevant issue number if applicable.

* Run `overcommit --install` before committing - it handles code style and tests automatically.

## Do you intend to add a new feature or change an existing one?

* **First check if this belongs in RubyLLM or your application:**
  - ✅ Core LLM communication (provider integrations, streaming, cost tracking)
  - ❌ Application architecture (RAG, agents, prompt templates, testing helpers)

* Features we'll reject:
  - Multi-agent orchestration
  - RAG pipelines
  - Prompt management systems
  - Vector database integrations
  - Testing frameworks
  - Anything you can implement in 5-10 lines of application code

* Start by opening an issue to discuss the feature and its design. We want to keep RubyLLM simple and focused.

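To make the "implement it in application code" point concrete: a prompt "template system" is a few lines of plain Ruby using stdlib ERB. The helper and template names below are hypothetical, not part of RubyLLM:

```ruby
require "erb"

# Hypothetical app-side helper - not part of RubyLLM.
# A "prompt template system" is just ERB from the standard library.
SUMMARIZE = ERB.new("Summarize the following <%= kind %> in <%= limit %> words:\n\n<%= text %>")

def render_prompt(template, **vars)
  template.result_with_hash(vars)
end

prompt = render_prompt(SUMMARIZE, kind: "article", limit: 50,
                       text: "Ruby is a dynamic language...")
puts prompt
```

This is the kind of thing that lives in your app, not in the gem.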
## Quick Start

```bash
gh repo fork crmne/ruby_llm --clone && cd ruby_llm
bundle install
overcommit --install  # Required - sets up git hooks
gh issue develop 123 --checkout  # or create your own branch
# make changes, add tests
gh pr create --web
```

## Testing

```bash
overcommit --run

# Re-recording VCR cassettes (requires API keys):
rake vcr:record[openai,anthropic]  # Specific providers
rake vcr:record[all]               # Everything
```

Always check cassettes for leaked API keys before committing.

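One way to scan for leaks is a quick grep over the cassette YAML. The key patterns and paths below are assumptions - this sketch uses a temp directory so it is self-contained; in the repo you would point it at your actual cassette directory:

```shell
# Sketch: grep cassettes for strings that look like provider API keys.
CASSETTES=$(mktemp -d)
cat > "$CASSETTES/example.yml" <<'YAML'
http_interactions:
- request:
    headers:
      Authorization:
      - Bearer <OPENAI_API_KEY>
YAML

# Match OpenAI-style (sk-...) and Anthropic-style (sk-ant-...) key shapes.
if grep -rEn 'sk-[A-Za-z0-9]{20,}|sk-ant-' "$CASSETTES"; then
  echo "possible leaked key - do not commit"
else
  echo "cassettes look clean"
fi
```

A properly filtered cassette stores a placeholder like `<OPENAI_API_KEY>` instead of the real value, so the grep above finds nothing.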
## Important Notes

* **Never edit `models.json`, `aliases.json`, or `available-models.md`** - they're auto-generated by `rake models`
* **Write tests** for any new functionality
* **Keep it simple** - if it needs extensive documentation, reconsider the approach
* Model data comes from [Parsera](https://parsera.org). First, go thank them for their free service to the LLM dev community - they scrape LLM documentation and publish it as JSON for everyone. Second, [file model data issues with them](https://github.com/parsera-labs/api-llm-specs/issues).

## Response Times

This is my gift to the Ruby community.

Gifts don't come with SLAs. I respond when I can.

## Support

If RubyLLM helps you, consider [sponsoring](https://github.com/sponsors/crmne).

Sponsorship is just a way to say thanks - it doesn't buy priority support or feature requests.

Go ship AI apps!

— Carmine