
Commit

Install and use ruff format instead of black for code formatting. (langchain-ai#12585)

Best to review one commit at a time, since two of the commits are 100%
autogenerated changes from running `ruff format`:
- Install and use `ruff format` instead of black for code formatting.
- Output of `ruff format .` in the `langchain` package.
- Use `ruff format` in experimental package.
- Format changes in experimental package by `ruff format`.
- Manual formatting fixes to make `ruff .` pass.
obi1kenobi authored Oct 31, 2023
1 parent bfd719f commit f94e24d
Showing 61 changed files with 246 additions and 399 deletions.
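In practice, the Makefile changes below swap `poetry run black <paths> --check` for `poetry run ruff format <paths> --diff`, and `poetry run black <paths>` for `poetry run ruff format <paths>`; import sorting stays with `poetry run ruff --select I --fix <paths>`. Most of the autogenerated source edits follow a single pattern; here is a minimal before/after sketch (hypothetical function, invented for illustration):

```python
from typing import Any


# Before: the black-formatted code in this repo had no trailing comma
# after the last parameter of a multi-line signature.
def build_agent_before(
    name: str,
    verbose: bool = False,
    **kwargs: Any
) -> None:
    ...


# After `ruff format`: a trailing comma is added to the exploded
# parameter list, which accounts for most one-line edits in this commit.
def build_agent_after(
    name: str,
    verbose: bool = False,
    **kwargs: Any,
) -> None:
    ...
```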
8 changes: 4 additions & 4 deletions .github/CONTRIBUTING.md
@@ -134,7 +134,7 @@ Run these locally before submitting a PR; the CI system will check also.

#### Code Formatting

-Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [ruff](https://docs.astral.sh/ruff/rules/).
+Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).

To run formatting for docs, cookbook and templates:

@@ -159,7 +159,7 @@ This is especially useful when you have made changes to a subset of the project

#### Linting

-Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [ruff](https://docs.astral.sh/ruff/rules/), and [mypy](http://mypy-lang.org/).
+Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](http://mypy-lang.org/).

To run linting for docs, cookbook and templates:

@@ -302,8 +302,8 @@ make api_docs_linkcheck

### Verify Documentation changes

-After pushing documentation changes to the repository, you can preview and verify that the changes are
-what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
+After pushing documentation changes to the repository, you can preview and verify that the changes are
+what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
This will take you to a preview of the documentation changes.
This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).

4 changes: 2 additions & 2 deletions Makefile
@@ -43,10 +43,10 @@ spell_fix:

lint:
poetry run ruff docs templates cookbook
-poetry run black docs templates cookbook --check
+poetry run ruff format docs templates cookbook --diff

format format_diff:
-poetry run black docs templates cookbook
+poetry run ruff format docs templates cookbook
poetry run ruff --select I --fix docs templates cookbook

######################
6 changes: 3 additions & 3 deletions libs/experimental/Makefile
@@ -9,7 +9,7 @@ TEST_FILE ?= tests/unit_tests/
test:
poetry run pytest $(TEST_FILE)

-tests:
+tests:
poetry run pytest $(TEST_FILE)

test_watch:
@@ -33,11 +33,11 @@ lint_diff format_diff: PYTHON_FILES=$(shell git diff --relative=libs/experimenta

lint lint_diff:
poetry run mypy $(PYTHON_FILES)
-poetry run black $(PYTHON_FILES) --check
+poetry run ruff format $(PYTHON_FILES) --diff
poetry run ruff .

format format_diff:
-poetry run black $(PYTHON_FILES)
+poetry run ruff format $(PYTHON_FILES)
poetry run ruff --select I --fix $(PYTHON_FILES)

spell_check:
(file name not captured)
@@ -16,7 +16,7 @@ def create_openai_data_generator(
llm: ChatOpenAI,
prompt: BasePromptTemplate,
output_parser: Optional[BaseLLMOutputParser] = None,
-**kwargs: Any
+**kwargs: Any,
) -> SyntheticDataGenerator:
"""
Create an instance of SyntheticDataGenerator tailored for OpenAI models.
(file name not captured)
@@ -29,7 +29,7 @@ def next_thought(
self,
problem_description: str,
thoughts_path: Tuple[str, ...] = (),
-**kwargs: Any
+**kwargs: Any,
) -> str:
"""
Generate the next thought given the problem description and the thoughts
@@ -52,7 +52,7 @@ def next_thought(
self,
problem_description: str,
thoughts_path: Tuple[str, ...] = (),
-**kwargs: Any
+**kwargs: Any,
) -> str:
response_text = self.predict_and_parse(
problem_description=problem_description, thoughts=thoughts_path, **kwargs
@@ -76,14 +76,14 @@ def next_thought(
self,
problem_description: str,
thoughts_path: Tuple[str, ...] = (),
-**kwargs: Any
+**kwargs: Any,
) -> str:
if thoughts_path not in self.tot_memory or not self.tot_memory[thoughts_path]:
new_thoughts = self.predict_and_parse(
problem_description=problem_description,
thoughts=thoughts_path,
n=self.c,
-**kwargs
+**kwargs,
)
if not new_thoughts:
return ""
122 changes: 22 additions & 100 deletions libs/experimental/poetry.lock

Large diffs are not rendered by default.

3 changes: 1 addition & 2 deletions libs/experimental/pyproject.toml
@@ -18,8 +18,7 @@ vowpal-wabbit-next = {version = "0.6.0", optional = true}
sentence-transformers = {version = "^2", optional = true}

[tool.poetry.group.lint.dependencies]
-ruff = "^0.1"
-black = "^23.10.0"
+ruff = "^0.1.3"

[tool.poetry.group.typing.dependencies]
mypy = "^0.991"
(file name not captured)
@@ -95,7 +95,8 @@ def test_update_with_delayed_score_with_auto_validator_throws() -> None:
assert selection_metadata.selected.score == 3.0 # type: ignore
with pytest.raises(RuntimeError):
chain.update_with_delayed_score(
-chain_response=response, score=100 # type: ignore
+chain_response=response,
+score=100, # type: ignore
)


@@ -121,7 +122,9 @@ def test_update_with_delayed_score_force() -> None:
selection_metadata = response["selection_metadata"] # type: ignore
assert selection_metadata.selected.score == 3.0 # type: ignore
chain.update_with_delayed_score(
-chain_response=response, score=100, force_score=True # type: ignore
+chain_response=response,
+score=100,
+force_score=True, # type: ignore
)
assert selection_metadata.selected.score == 100.0 # type: ignore

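A detail worth noting in these test changes: a trailing `# type: ignore` applies only to the physical line it ends, so collapsing or exploding a call changes what it suppresses. A minimal sketch (stub function, invented for illustration):

```python
from typing import Any


def update_score(*, chain_response: Any, score: int, force_score: bool = False) -> None:
    # Stub standing in for the real chain method.
    ...


response: Any = {}

# On one line, the single comment covered the whole call.
update_score(chain_response=response, score=100, force_score=True)  # type: ignore

# Exploded by `ruff format`, the comment must sit on the specific
# argument line whose error it is meant to silence.
update_score(
    chain_response=response,
    score=100,
    force_score=True,  # type: ignore
)
```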
4 changes: 2 additions & 2 deletions libs/langchain/Makefile
@@ -50,11 +50,11 @@ lint lint_diff:
./scripts/check_pydantic.sh .
./scripts/check_imports.sh
poetry run ruff .
-[ "$(PYTHON_FILES)" = "" ] || poetry run black $(PYTHON_FILES) --check
+[ "$(PYTHON_FILES)" = "" ] || poetry run ruff format $(PYTHON_FILES) --diff
[ "$(PYTHON_FILES)" = "" ] || poetry run mypy $(PYTHON_FILES)

format format_diff:
-[ "$(PYTHON_FILES)" = "" ] || poetry run black $(PYTHON_FILES)
+[ "$(PYTHON_FILES)" = "" ] || poetry run ruff format $(PYTHON_FILES)
[ "$(PYTHON_FILES)" = "" ] || poetry run ruff --select I --fix $(PYTHON_FILES)

spell_check:
2 changes: 1 addition & 1 deletion libs/langchain/langchain/_api/path.py
@@ -22,7 +22,7 @@ def as_import_path(
file: Union[Path, str],
*,
suffix: Optional[str] = None,
-relative_to: Path = PACKAGE_DIR
+relative_to: Path = PACKAGE_DIR,
) -> str:
"""Path of the file as a LangChain import exclude langchain top namespace."""
if isinstance(file, str):
(file name not captured)
@@ -32,7 +32,7 @@ def create_conversational_retrieval_agent(
system_message: Optional[SystemMessage] = None,
verbose: bool = False,
max_token_limit: int = 2000,
-**kwargs: Any
+**kwargs: Any,
) -> AgentExecutor:
"""A convenience method for creating a conversational retrieval agent.
@@ -83,5 +83,5 @@ def create_conversational_retrieval_agent(
memory=memory,
verbose=verbose,
return_intermediate_steps=remember_intermediate_steps,
-**kwargs
+**kwargs,
)
3 changes: 2 additions & 1 deletion libs/langchain/langchain/callbacks/argilla_callback.py
@@ -284,7 +284,8 @@ def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
},
}
for prompt, output in zip(
-prompts, chain_output_val # type: ignore
+prompts, # type: ignore
+chain_output_val,
)
]
)
4 changes: 3 additions & 1 deletion libs/langchain/langchain/callbacks/flyte_callback.py
@@ -195,7 +195,9 @@ def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
)
)

-complexity_metrics: Dict[str, float] = generation_resp.pop("text_complexity_metrics") # type: ignore # noqa: E501
+complexity_metrics: Dict[str, float] = generation_resp.pop(
+"text_complexity_metrics"
+) # type: ignore # noqa: E501
self.deck.append(
self.markdown_renderer().to_html("#### Text Complexity Metrics")
)
12 changes: 3 additions & 9 deletions libs/langchain/langchain/callbacks/manager.py
@@ -64,20 +64,14 @@
openai_callback_var: ContextVar[Optional[OpenAICallbackHandler]] = ContextVar(
"openai_callback", default=None
)
-tracing_callback_var: ContextVar[
-Optional[LangChainTracerV1]
-] = ContextVar( # noqa: E501
+tracing_callback_var: ContextVar[Optional[LangChainTracerV1]] = ContextVar( # noqa: E501
"tracing_callback", default=None
)
-wandb_tracing_callback_var: ContextVar[
-Optional[WandbTracer]
-] = ContextVar( # noqa: E501
+wandb_tracing_callback_var: ContextVar[Optional[WandbTracer]] = ContextVar( # noqa: E501
"tracing_wandb_callback", default=None
)

-tracing_v2_callback_var: ContextVar[
-Optional[LangChainTracer]
-] = ContextVar( # noqa: E501
+tracing_v2_callback_var: ContextVar[Optional[LangChainTracer]] = ContextVar( # noqa: E501
"tracing_callback_v2", default=None
)
run_collector_var: ContextVar[
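These `ContextVar` declarations are a spot where the two formatters disagree: black had split the subscripted annotation to chase the line limit, while `ruff format` keeps the annotated assignment on one line, leaving the existing `# noqa: E501` to cover any overflow. A simplified sketch (shortened type names, for illustration):

```python
from contextvars import ContextVar
from typing import Optional

# black's layout: the annotation was broken across lines to fit.
old_style_var: ContextVar[
    Optional[str]
] = ContextVar(
    "old_style", default=None
)

# ruff format's layout: the annotated assignment stays on one line.
new_style_var: ContextVar[Optional[str]] = ContextVar(
    "new_style", default=None
)
```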
4 changes: 3 additions & 1 deletion libs/langchain/langchain/callbacks/mlflow_callback.py
@@ -371,7 +371,9 @@ def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
nlp=self.nlp,
)
)
-complexity_metrics: Dict[str, float] = generation_resp.pop("text_complexity_metrics") # type: ignore # noqa: E501
+complexity_metrics: Dict[str, float] = generation_resp.pop(
+"text_complexity_metrics"
+) # type: ignore # noqa: E501
self.mlflg.metrics(
complexity_metrics,
step=self.metrics["step"],
2 changes: 1 addition & 1 deletion libs/langchain/langchain/callbacks/streaming_stdout.py
@@ -19,7 +19,7 @@ def on_chat_model_start(
self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
-**kwargs: Any
+**kwargs: Any,
) -> None:
"""Run when LLM starts running."""

(file name not captured)
@@ -32,7 +32,7 @@ def __init__(
*,
answer_prefix_tokens: Optional[List[str]] = None,
strip_tokens: bool = True,
-stream_prefix: bool = False
+stream_prefix: bool = False,
) -> None:
"""Instantiate FinalStreamingStdOutCallbackHandler.
(file name not captured)
@@ -11,7 +11,7 @@ def __init__(
*,
on_start: Optional[Callable[[Run], None]],
on_end: Optional[Callable[[Run], None]],
-on_error: Optional[Callable[[Run], None]]
+on_error: Optional[Callable[[Run], None]],
) -> None:
super().__init__()

4 changes: 2 additions & 2 deletions libs/langchain/langchain/chains/api/openapi/chain.py
@@ -171,7 +171,7 @@ def from_url_and_method(
llm: BaseLanguageModel,
requests: Optional[Requests] = None,
return_intermediate_steps: bool = False,
-**kwargs: Any
+**kwargs: Any,
# TODO: Handle async
) -> "OpenAPIEndpointChain":
"""Create an OpenAPIEndpoint from a spec at the specified url."""
@@ -194,7 +194,7 @@ def from_api_operation(
return_intermediate_steps: bool = False,
raw_response: bool = False,
callbacks: Callbacks = None,
-**kwargs: Any
+**kwargs: Any,
# TODO: Handle async
) -> "OpenAPIEndpointChain":
"""Create an OpenAPIEndpointChain from an operation and a spec."""
4 changes: 2 additions & 2 deletions libs/langchain/langchain/chains/openai_functions/tagging.py
@@ -32,7 +32,7 @@ def create_tagging_chain(
schema: dict,
llm: BaseLanguageModel,
prompt: Optional[ChatPromptTemplate] = None,
-**kwargs: Any
+**kwargs: Any,
) -> Chain:
"""Creates a chain that extracts information from a passage
based on a schema.
@@ -62,7 +62,7 @@ def create_tagging_chain_pydantic(
pydantic_schema: Any,
llm: BaseLanguageModel,
prompt: Optional[ChatPromptTemplate] = None,
-**kwargs: Any
+**kwargs: Any,
) -> Chain:
"""Creates a chain that extracts information from a passage
based on a pydantic schema.
4 changes: 2 additions & 2 deletions libs/langchain/langchain/chat_models/promptlayer_openai.py
@@ -44,7 +44,7 @@ def _generate(
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
stream: Optional[bool] = None,
-**kwargs: Any
+**kwargs: Any,
) -> ChatResult:
"""Call ChatOpenAI generate and then call PromptLayer API to log the request."""
from promptlayer.utils import get_api_key, promptlayer_api_request
@@ -86,7 +86,7 @@ async def _agenerate(
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
stream: Optional[bool] = None,
-**kwargs: Any
+**kwargs: Any,
) -> ChatResult:
"""Call ChatOpenAI agenerate and then call PromptLayer to log."""
from promptlayer.utils import get_api_key, promptlayer_api_request_async
(file name not captured)
@@ -36,7 +36,9 @@ def load(self) -> List[Document]:
blob_list = container.list_blobs(name_starts_with=self.prefix)
for blob in blob_list:
loader = AzureBlobStorageFileLoader(
-self.conn_str, self.container, blob.name # type: ignore
+self.conn_str,
+self.container,
+blob.name, # type: ignore
)
docs.extend(loader.load())
return docs
4 changes: 3 additions & 1 deletion libs/langchain/langchain/document_loaders/blackboard.py
@@ -211,7 +211,9 @@ def _load_documents(self) -> List[Document]:
"""
# Create the document loader
loader = DirectoryLoader(
-path=self.folder_path, glob="*.pdf", loader_cls=PyPDFLoader # type: ignore
+path=self.folder_path,
+glob="*.pdf",
+loader_cls=PyPDFLoader, # type: ignore
)
# Load the documents
documents = loader.load()
2 changes: 1 addition & 1 deletion libs/langchain/langchain/embeddings/johnsnowlabs.py
@@ -25,7 +25,7 @@ def __init__(
self,
model: Any = "embed_sentence.bert",
hardware_target: str = "cpu",
-**kwargs: Any
+**kwargs: Any,
):
"""Initialize the johnsnowlabs model."""
super().__init__(**kwargs)
4 changes: 1 addition & 3 deletions libs/langchain/langchain/embeddings/localai.py
@@ -269,9 +269,7 @@ def _embedding_func(self, text: str, *, engine: str) -> List[float]:
self,
input=[text],
**self._invocation_params,
-)["data"][
-0
-]["embedding"]
+)["data"][0]["embedding"]

async def _aembedding_func(self, text: str, *, engine: str) -> List[float]:
"""Call out to LocalAI's embedding endpoint."""
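This diff and the `openai.py` embeddings diff below show the same reflow: black had broken the chained subscripts after a call, and `ruff format` joins them back onto the closing line. A self-contained sketch (dummy payload, invented for illustration):

```python
# Shape loosely mirroring an embeddings API response.
payload = {"data": [{"embedding": [0.1, 0.2]}]}

# black's layout: chained lookups split across lines.
embedding_old = (
    payload
)["data"][
    0
]["embedding"]

# ruff format's layout: the subscript chain stays on one line.
embedding_new = (payload)["data"][0]["embedding"]

assert embedding_old == embedding_new
```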
2 changes: 1 addition & 1 deletion libs/langchain/langchain/embeddings/nlpcloud.py
@@ -26,7 +26,7 @@ def __init__(
self,
model_name: str = "paraphrase-multilingual-mpnet-base-v2",
gpu: bool = False,
-**kwargs: Any
+**kwargs: Any,
) -> None:
super().__init__(model_name=model_name, gpu=gpu, **kwargs)

4 changes: 1 addition & 3 deletions libs/langchain/langchain/embeddings/openai.py
@@ -393,9 +393,7 @@ def _get_len_safe_embeddings(
self,
input="",
**self._invocation_params,
-)[
-"data"
-][0]["embedding"]
+)["data"][0]["embedding"]
else:
average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
embeddings[i] = (average / np.linalg.norm(average)).tolist()
4 changes: 2 additions & 2 deletions libs/langchain/langchain/indexes/vectorstore.py
@@ -34,7 +34,7 @@ def query(
question: str,
llm: Optional[BaseLanguageModel] = None,
retriever_kwargs: Optional[Dict[str, Any]] = None,
-**kwargs: Any
+**kwargs: Any,
) -> str:
"""Query the vectorstore."""
llm = llm or OpenAI(temperature=0)
@@ -49,7 +49,7 @@ def query_with_sources(
question: str,
llm: Optional[BaseLanguageModel] = None,
retriever_kwargs: Optional[Dict[str, Any]] = None,
-**kwargs: Any
+**kwargs: Any,
) -> dict:
"""Query the vectorstore and get back sources."""
llm = llm or OpenAI(temperature=0)
(remaining file diffs not rendered)
