Stabilize parenthesize_lambda_bodies #22744

Merged
dylwil3 merged 3 commits into 2026-style from stabilize-parenthesize_lambda_bodies on Jan 26, 2026

Stabilize parenthesize_lambda_bodies#22744
dylwil3 merged 3 commits into2026-stylefrom
stabilize-parenthesize_lambda_bodies

Conversation

dylwil3 (Collaborator) commented on Jan 19, 2026

No description provided.

@dylwil3 dylwil3 requested a review from MichaReiser as a code owner January 19, 2026 20:50
@dylwil3 dylwil3 added breaking Breaking API change formatter Related to the formatter labels Jan 19, 2026
astral-sh-bot (bot) commented on Jan 19, 2026

ruff-ecosystem results

Formatter (stable)

ℹ️ ecosystem check detected format changes. (+387 -326 lines in 47 files in 15 projects; 40 projects unchanged)
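For readers skimming the diffs below: the stabilized style wraps a lambda body in its own parentheses when it would otherwise break across lines, instead of letting the body hang off the `:` with operator-led continuation. A minimal sketch of the new layout (names are illustrative, mirroring the first Rasa hunk; both forms are semantically identical):

```python
# New layout: the multi-line lambda body is enclosed in parentheses,
# so the conditional expression reads as one grouped unit.
pos2 = lambda pos_tag: (
    pos_tag[:2] if pos_tag is not None else None
)

# Behavior is unchanged; only the formatting differs.
print(pos2("NNP"))  # -> NN
print(pos2(None))   # -> None
```

The parentheses exist purely for layout: Python evaluates the grouped expression exactly as it would the unparenthesized body.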

RasaHQ/rasa (+6 -6 lines across 2 files)

rasa/nlu/extractors/crf_entity_extractor.py~L101

         CRFEntityExtractorOptions.SUFFIX1: lambda crf_token: crf_token.text[-1:],
         CRFEntityExtractorOptions.BIAS: lambda _: "bias",
         CRFEntityExtractorOptions.POS: lambda crf_token: crf_token.pos_tag,
-        CRFEntityExtractorOptions.POS2: lambda crf_token: crf_token.pos_tag[:2]
-        if crf_token.pos_tag is not None
-        else None,
+        CRFEntityExtractorOptions.POS2: lambda crf_token: (
+            crf_token.pos_tag[:2] if crf_token.pos_tag is not None else None
+        ),
         CRFEntityExtractorOptions.UPPER: lambda crf_token: crf_token.text.isupper(),
         CRFEntityExtractorOptions.DIGIT: lambda crf_token: crf_token.text.isdigit(),
         CRFEntityExtractorOptions.PATTERN: lambda crf_token: crf_token.pattern,

rasa/nlu/featurizers/sparse_featurizer/lexical_syntactic_featurizer.py~L86

         "suffix2": lambda token: token.text[-2:],
         "suffix1": lambda token: token.text[-1:],
         "pos": lambda token: token.data.get(POS_TAG_KEY, None),
-        "pos2": lambda token: token.data.get(POS_TAG_KEY, [])[:2]
-        if POS_TAG_KEY in token.data
-        else None,
+        "pos2": lambda token: (
+            token.data.get(POS_TAG_KEY, [])[:2] if POS_TAG_KEY in token.data else None
+        ),
         "upper": lambda token: token.text.isupper(),
         "digit": lambda token: token.text.isdigit(),
     }

apache/superset (+9 -7 lines across 1 file)

tests/integration_tests/datasource_tests.py~L217

     def test_external_metadata_by_name_for_virtual_table_uses_mutator(self):
         self.login(ADMIN_USERNAME)
         with create_and_cleanup_table() as tbl:
-            current_app.config["SQL_QUERY_MUTATOR"] = (
-                lambda sql, **kwargs: "SELECT 456 as intcol, 'def' as mutated_strcol"
+            current_app.config["SQL_QUERY_MUTATOR"] = lambda sql, **kwargs: (
+                "SELECT 456 as intcol, 'def' as mutated_strcol"
             )
 
             params = prison.dumps(

tests/integration_tests/datasource_tests.py~L353

 
         pytest.raises(
             SupersetGenericDBErrorException,
-            lambda: db.session
-            .query(SqlaTable)
-            .filter_by(id=tbl.id)
-            .one_or_none()
-            .external_metadata(),
+            lambda: (
+                db.session
+                .query(SqlaTable)
+                .filter_by(id=tbl.id)
+                .one_or_none()
+                .external_metadata()
+            ),
         )
 
         resp = self.client.get(url)

aws/aws-sam-cli (+15 -12 lines across 1 file)

samcli/lib/cli_validation/image_repository_validation.py~L72

 
             validators = [
                 Validator(
-                    validation_function=lambda: bool(image_repository)
-                    + bool(image_repositories)
-                    + bool(resolve_image_repos)
-                    > 1,
+                    validation_function=lambda: (
+                        bool(image_repository) + bool(image_repositories) + bool(resolve_image_repos) > 1
+                    ),
                     exception=click.BadOptionUsage(
                         option_name="--image-repositories",
                         ctx=ctx,

samcli/lib/cli_validation/image_repository_validation.py~L84

                     ),
                 ),
                 Validator(
-                    validation_function=lambda: not guided
-                    and not (image_repository or image_repositories or resolve_image_repos)
-                    and required,
+                    validation_function=lambda: (
+                        not guided and not (image_repository or image_repositories or resolve_image_repos) and required
+                    ),
                     exception=click.BadOptionUsage(
                         option_name="--image-repositories",
                         ctx=ctx,

samcli/lib/cli_validation/image_repository_validation.py~L94

                     ),
                 ),
                 Validator(
-                    validation_function=lambda: not guided
-                    and (
-                        image_repositories
-                        and not resolve_image_repos
-                        and not _is_all_image_funcs_provided(template_file, image_repositories, parameters_overrides)
+                    validation_function=lambda: (
+                        not guided
+                        and (
+                            image_repositories
+                            and not resolve_image_repos
+                            and not _is_all_image_funcs_provided(
+                                template_file, image_repositories, parameters_overrides
+                            )
+                        )
                     ),
                     exception=click.BadOptionUsage(
                         option_name="--image-repositories", ctx=ctx, message=image_repos_error_msg

binary-husky/gpt_academic (+30 -26 lines across 3 files)

crazy_functions/agent_fns/general.py~L83

             }
             kwargs.update(agent_kwargs)
             agent_handle = agent_cls(**kwargs)
-            agent_handle._print_received_message = (
-                lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
+            agent_handle._print_received_message = lambda a, b: (
+                self.gpt_academic_print_override(agent_kwargs, a, b)
             )
             for d in agent_handle._reply_func_list:
                 if (

crazy_functions/agent_fns/general.py~L93

                 ):
                     d["reply_func"] = gpt_academic_generate_oai_reply
             if agent_kwargs["name"] == "user_proxy":
-                agent_handle.get_human_input = (
-                    lambda a: self.gpt_academic_get_human_input(user_proxy, a)
+                agent_handle.get_human_input = lambda a: (
+                    self.gpt_academic_get_human_input(user_proxy, a)
                 )
                 user_proxy = agent_handle
             if agent_kwargs["name"] == "assistant":

crazy_functions/agent_fns/general.py~L134

                 kwargs = {"code_execution_config": code_execution_config}
                 kwargs.update(agent_kwargs)
                 agent_handle = agent_cls(**kwargs)
-                agent_handle._print_received_message = (
-                    lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
+                agent_handle._print_received_message = lambda a, b: (
+                    self.gpt_academic_print_override(agent_kwargs, a, b)
                 )
                 agents_instances.append(agent_handle)
                 if agent_kwargs["name"] == "user_proxy":
                     user_proxy = agent_handle
-                    user_proxy.get_human_input = (
-                        lambda a: self.gpt_academic_get_human_input(user_proxy, a)
+                    user_proxy.get_human_input = lambda a: (
+                        self.gpt_academic_get_human_input(user_proxy, a)
                     )
             try:
                 groupchat = autogen.GroupChat(

crazy_functions/agent_fns/general.py~L150

                 manager = autogen.GroupChatManager(
                     groupchat=groupchat, **self.define_group_chat_manager_config()
                 )
-                manager._print_received_message = (
-                    lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
+                manager._print_received_message = lambda a, b: (
+                    self.gpt_academic_print_override(agent_kwargs, a, b)
                 )
                 manager.get_human_input = lambda a: self.gpt_academic_get_human_input(
                     manager, a

crazy_functions/crazy_utils.py~L299

         retry_op = retry_times_at_unknown_error
         exceeded_cnt = 0
         mutable[index][2] = "执行中"
-        detect_timeout = (
-            lambda: len(mutable[index]) >= 2
+        detect_timeout = lambda: (
+            len(mutable[index]) >= 2
             and (time.time() - mutable[index][1]) > watch_dog_patience
         )
         while True:

crazy_functions/review_fns/paper_processor/paper_llm_ranker.py~L143

                     )
                 elif search_criteria.query_type == "review":
                     papers.sort(
-                        key=lambda x: 1
-                        if any(
-                            keyword in (getattr(x, "title", "") or "").lower()
-                            or keyword in (getattr(x, "abstract", "") or "").lower()
-                            for keyword in ["review", "survey", "overview"]
-                        )
-                        else 0,
+                        key=lambda x: (
+                            1
+                            if any(
+                                keyword in (getattr(x, "title", "") or "").lower()
+                                or keyword in (getattr(x, "abstract", "") or "").lower()
+                                for keyword in ["review", "survey", "overview"]
+                            )
+                            else 0
+                        ),
                         reverse=True,
                     )
             return papers[:top_k]

crazy_functions/review_fns/paper_processor/paper_llm_ranker.py~L164

         if search_criteria and search_criteria.query_type == "review":
             papers = sorted(
                 papers,
-                key=lambda x: 1
-                if any(
-                    keyword in (getattr(x, "title", "") or "").lower()
-                    or keyword in (getattr(x, "abstract", "") or "").lower()
-                    for keyword in ["review", "survey", "overview"]
-                )
-                else 0,
+                key=lambda x: (
+                    1
+                    if any(
+                        keyword in (getattr(x, "title", "") or "").lower()
+                        or keyword in (getattr(x, "abstract", "") or "").lower()
+                        for keyword in ["review", "survey", "overview"]
+                    )
+                    else 0
+                ),
                 reverse=True,
             )
 

ibis-project/ibis (+162 -140 lines across 10 files)

docs/_renderer.py~L24

         quartodoc_skip_doctest = "quartodoc: +SKIP"
 
         chunker = lambda line: line.startswith((prompt, continuation))
-        should_skip = (
-            lambda line: quartodoc_skip_doctest in line or skip_doctest in line
+        should_skip = lambda line: (
+            quartodoc_skip_doctest in line or skip_doctest in line
         )
 
         for chunk in toolz.partitionby(chunker, lines):

ibis/backends/datafusion/__init__.py~L243

 
         for name, func in inspect.getmembers(
             udfs,
-            predicate=lambda m: callable(m)
-            and not m.__name__.startswith("_")
-            and m.__module__ == udfs.__name__,
+            predicate=lambda m: (
+                callable(m)
+                and not m.__name__.startswith("_")
+                and m.__module__ == udfs.__name__
+            ),
         ):
             annotations = typing.get_type_hints(func)
             argnames = list(inspect.signature(func).parameters.keys())

ibis/backends/tests/sql/test_sql.py~L557

     )
 
     products = products.mutate(
-        product_level_name=lambda t: ibis
-        .literal("-")
-        .lpad(((t.ancestor_level_number - 1) * 7), "-")
-        .concat(t.ancestor_level_name)
+        product_level_name=lambda t: (
+            ibis
+            .literal("-")
+            .lpad(((t.ancestor_level_number - 1) * 7), "-")
+            .concat(t.ancestor_level_name)
+        )
     )
 
     predicate = facts.product_id == products.descendant_node_natural_key

ibis/backends/tests/test_aggregation.py~L1318

         )
         .groupby("bigint_col")
         .string_col.agg(
-            lambda s: (np.nan if pd.isna(s).all() else pandas_sep.join(s.values))
+            lambda s: np.nan if pd.isna(s).all() else pandas_sep.join(s.values)
         )
         .rename("tmp")
         .sort_index()

ibis/backends/tests/test_temporal.py~L675

             ],
         ),
         param(
-            lambda t, _: t.timestamp_col
-            + (ibis.interval(days=4) - ibis.interval(days=2)),
-            lambda t, _: t.timestamp_col
-            + (pd.Timedelta(days=4) - pd.Timedelta(days=2)),
+            lambda t, _: (
+                t.timestamp_col + (ibis.interval(days=4) - ibis.interval(days=2))
+            ),
+            lambda t, _: (
+                t.timestamp_col + (pd.Timedelta(days=4) - pd.Timedelta(days=2))
+            ),
             id="timestamp-add-interval-binop",
             marks=[
                 pytest.mark.notimpl(

ibis/backends/tests/test_temporal.py~L697

             ],
         ),
         param(
-            lambda t, _: t.timestamp_col
-            + (ibis.interval(days=4) + ibis.interval(hours=2)),
-            lambda t, _: t.timestamp_col
-            + (pd.Timedelta(days=4) + pd.Timedelta(hours=2)),
+            lambda t, _: (
+                t.timestamp_col + (ibis.interval(days=4) + ibis.interval(hours=2))
+            ),
+            lambda t, _: (
+                t.timestamp_col + (pd.Timedelta(days=4) + pd.Timedelta(hours=2))
+            ),
             id="timestamp-add-interval-binop-different-units",
             marks=[
                 pytest.mark.notimpl(

ibis/backends/tests/test_window.py~L222

         ),
         param(
             lambda t, win: t.double_col.cummean().over(win),
-            lambda t: (t.double_col.expanding().mean().reset_index(drop=True, level=0)),
+            lambda t: t.double_col.expanding().mean().reset_index(drop=True, level=0),
             id="cummean",
             marks=pytest.mark.notimpl(["druid"], raises=PyDruidProgrammingError),
         ),

ibis/backends/tests/test_window.py~L291

         ),
         param(
             lambda t, win: t.double_col.mean().over(win),
-            lambda gb: (
-                gb.double_col.expanding().mean().reset_index(drop=True, level=0)
-            ),
+            lambda gb: gb.double_col.expanding().mean().reset_index(drop=True, level=0),
             id="mean",
             marks=pytest.mark.notimpl(["druid"], raises=PyDruidProgrammingError),
         ),

ibis/backends/tests/test_window.py~L346

     [
         param(
             lambda t, win: t.double_col.mean().over(win),
-            lambda df: (df.double_col.expanding().mean()),
+            lambda df: df.double_col.expanding().mean(),
             id="mean",
             marks=[
                 pytest.mark.notimpl(

ibis/backends/tests/test_window.py~L361

             # Disabled on PySpark and Spark backends because in pyspark<3.0.0,
             # Pandas UDFs are only supported on unbounded windows
             lambda t, win: mean_udf(t.double_col).over(win),
-            lambda df: (df.double_col.expanding().mean()),
+            lambda df: df.double_col.expanding().mean(),
             id="mean_udf",
             marks=[
                 pytest.mark.notimpl(

ibis/backends/tests/test_window.py~L549

     [
         param(
             lambda t, win: t.double_col.mean().over(win),
-            lambda gb: (gb.double_col.transform("mean")),
+            lambda gb: gb.double_col.transform("mean"),
             id="mean",
             marks=pytest.mark.notimpl(["druid"], raises=PyDruidProgrammingError),
         ),
         param(
             lambda t, win: mean_udf(t.double_col).over(win),
-            lambda gb: (gb.double_col.transform("mean")),
+            lambda gb: gb.double_col.transform("mean"),
             id="mean_udf",
             marks=[
                 pytest.mark.notimpl(

ibis/backends/tests/test_window.py~L1205

         df
         .sort_values("int_col")
         .groupby(df["int_col"].notnull())
-        .apply(lambda df: (df.int_col.rank(method="min").sub(1).div(len(df) - 1)))
+        .apply(lambda df: df.int_col.rank(method="min").sub(1).div(len(df) - 1))
         .T.reset_index(drop=True)
         .iloc[:, 0]
         .rename(expr.get_name())

ibis/backends/tests/tpc/ds/test_queries.py~L28

         .join(store.filter(_.s_state == "TN"), [("ctr_store_sk", "s_store_sk")])
         .join(customer, _.ctr_customer_sk == customer.c_customer_sk)
         .filter(
-            lambda t: t.ctr_total_return
-            > ctr2
-            .filter(t.ctr_store_sk == ctr2.ctr_store_sk)
-            .ctr_total_return.mean()
-            .as_scalar()
-            * 1.2
+            lambda t: (
+                t.ctr_total_return
+                > ctr2
+                .filter(t.ctr_store_sk == ctr2.ctr_store_sk)
+                .ctr_total_return.mean()
+                .as_scalar()
+                * 1.2
+            )
         )
         .select(_.c_customer_id)
         .order_by(_.c_customer_id)

ibis/backends/tests/tpc/ds/test_queries.py~L785

                 > 0
             ),
             lambda t: (
-                web_sales
-                .join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
-                .filter(
-                    t.c_customer_sk == web_sales.ws_bill_customer_sk,
-                    _.d_year == 2002,
-                    _.d_moy.between(1, 1 + 3),
+                (
+                    web_sales
+                    .join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
+                    .filter(
+                        t.c_customer_sk == web_sales.ws_bill_customer_sk,
+                        _.d_year == 2002,
+                        _.d_moy.between(1, 1 + 3),
+                    )
+                    .count()
+                    > 0
                 )
-                .count()
-                > 0
-            )
-            | (
-                catalog_sales
-                .join(date_dim, [("cs_sold_date_sk", "d_date_sk")])
-                .filter(
-                    t.c_customer_sk == catalog_sales.cs_ship_customer_sk,
-                    _.d_year == 2002,
-                    _.d_moy.between(1, 1 + 3),
+                | (
+                    catalog_sales
+                    .join(date_dim, [("cs_sold_date_sk", "d_date_sk")])
+                    .filter(
+                        t.c_customer_sk == catalog_sales.cs_ship_customer_sk,
+                        _.d_year == 2002,
+                        _.d_moy.between(1, 1 + 3),
+                    )
+                    .count()
+                    > 0
                 )
-                .count()
-                > 0
             ),
         )
         .group_by(

ibis/backends/tests/tpc/ds/test_queries.py~L1049

             _.d_date.between(date("2002-02-01"), date("2002-04-02")),
             _.ca_state == "GA",
             _.cc_county == "Williamson County",
-            lambda t: catalog_sales.filter(
-                t.cs_order_number == _.cs_order_number,
-                t.cs_warehouse_sk != _.cs_warehouse_sk,
-            ).count()
-            > 0,
-            lambda t: catalog_returns.filter(
-                t.cs_order_number == _.cr_order_number
-            ).count()
-            == 0,
+            lambda t: (
+                catalog_sales.filter(
+                    t.cs_order_number == _.cs_order_number,
+                    t.cs_warehouse_sk != _.cs_warehouse_sk,
+                ).count()
+                > 0
+            ),
+            lambda t: (
+                catalog_returns.filter(t.cs_order_number == _.cr_order_number).count()
+                == 0
+            ),
         )
         .agg(
             **{

ibis/backends/tests/tpc/ds/test_queries.py~L2105

         .view()
         .filter(
             _.i_manufact_id.between(738, 738 + 40),
-            lambda i1: item.filter(
-                lambda s: (
-                    (i1.i_manufact == s.i_manufact)
-                    & (
-                        (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("powder", "khaki"))
-                            & s.i_units.isin(("Ounce", "Oz"))
-                            & s.i_size.isin(("medium", "extra large"))
-                        )
-                        | (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("brown", "honeydew"))
-                            & s.i_units.isin(("Bunch", "Ton"))
-                            & s.i_size.isin(("N/A", "small"))
-                        )
-                        | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("floral", "deep"))
-                            & s.i_units.isin(("N/A", "Dozen"))
-                            & s.i_size.isin(("petite", "petite"))
-                        )
-                        | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("light", "cornflower"))
-                            & s.i_units.isin(("Box", "Pound"))
-                            & s.i_size.isin(("medium", "extra large"))
-                        )
-                    )
-                )
-                | (
-                    (i1.i_manufact == s.i_manufact)
-                    & (
+            lambda i1: (
+                item.filter(
+                    lambda s: (
                         (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("midnight", "snow"))
-                            & s.i_units.isin(("Pallet", "Gross"))
-                            & s.i_size.isin(("medium", "extra large"))
-                        )
-                        | (
-                            (s.i_category == "Women")
-                            & s.i_color.isin(("cyan", "papaya"))
-                            & s.i_units.isin(("Cup", "Dram"))
-                            & s.i_size.isin(("N/A", "small"))
-                        )
-                        | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("orange", "frosted"))
-                            & s.i_units.isin(("Each", "Tbl"))
-                            & s.i_size.isin(("petite", "petite"))
+                            (i1.i_manufact == s.i_manufact)
+                            & (
+                                (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("powder", "khaki"))
+                                    & s.i_units.isin(("Ounce", "Oz"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                                | (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("brown", "honeydew"))
+                                    & s.i_units.isin(("Bunch", "Ton"))
+                                    & s.i_size.isin(("N/A", "small"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("floral", "deep"))
+                                    & s.i_units.isin(("N/A", "Dozen"))
+                                    & s.i_size.isin(("petite", "petite"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("light", "cornflower"))
+                                    & s.i_units.isin(("Box", "Pound"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                            )
                         )
                         | (
-                            (s.i_category == "Men")
-                            & s.i_color.isin(("forest", "ghost"))
-                            & s.i_units.isin(("Lb", "Bundle"))
-                            & s.i_size.isin(("medium", "extra large"))
+                            (i1.i_manufact == s.i_manufact)
+                            & (
+                                (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("midnight", "snow"))
+                                    & s.i_units.isin(("Pallet", "Gross"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                                | (
+                                    (s.i_category == "Women")
+                                    & s.i_color.isin(("cyan", "papaya"))
+                                    & s.i_units.isin(("Cup", "Dram"))
+                                    & s.i_size.isin(("N/A", "small"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("orange", "frosted"))
+                                    & s.i_units.isin(("Each", "Tbl"))
+                                    & s.i_size.isin(("petite", "petite"))
+                                )
+                                | (
+                                    (s.i_category == "Men")
+                                    & s.i_color.isin(("forest", "ghost"))
+                                    & s.i_units.isin(("Lb", "Bundle"))
+                                    & s.i_size.isin(("medium", "extra large"))
+                                )
+                            )
                         )
                     )
-                )
-            ).count()
-            > 0,
+                ).count()
+                > 0
+            ),
         )
         .select(_.i_product_name)
         .distinct()

ibis/backends/tests/tpc/ds/test_queries.py~L4643

         .join(customer, [("ctr_customer_sk", "c_customer_sk")])
         .join(customer_address, [("c_current_addr_sk", "ca_address_sk")])
         .filter(
-            lambda ctr1: ctr1.ctr_total_return
-            > (
-                ctr2.filter(ctr1.ctr_state == _.ctr_state).ctr_total_return.mean() * 1.2
-            ).as_scalar(),
+            lambda ctr1: (
+                ctr1.ctr_total_return
+                > (
+                    ctr2.filter(ctr1.ctr_state == _.ctr_state).ctr_total_return.mean()
+                    * 1.2
+                ).as_scalar()
+            ),
             _.ca_state == "GA",
         )
         .select(

ibis/backends/tests/tpc/ds/test_queries.py~L5071

         .filter(
             _.i_manufact_id == 350,
             _.d_date.between(date("2000-01-07"), date("2000-04-26")),
-            lambda t: t.ws_ext_discount_amt
-            > (
-                web_sales
-                .join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
-                .filter(
-                    t.i_item_sk == _.ws_item_sk,
-                    _.d_date.between(date("2000-01-07"), date("2000-04-26")),
+            lambda t: (
+                t.ws_ext_discount_amt
+                > (
+                    web_sales
+                    .join(date_dim, [("ws_sold_date_sk", "d_date_sk")])
+                    .filter(
+                        t.i_item_sk == _.ws_item_sk,
+                        _.d_date.between(date("2000-01-07"), date("2000-04-26")),
+                    )
+                    .ws_ext_discount_amt.mean()
+                    .as_scalar()
+                    * 1.3
                 )
-                .ws_ext_discount_amt.mean()
-                .as_scalar()
-                * 1.3
             ),
         )
         .select(_.ws_ext_discount_amt.sum().name("Excess Discount Amount"))

ibis/expr/tests/test_visualize.py~L31

         lambda t: t.a + t.b,
         lambda t: t.a + t.b > 3**t.a,
         lambda t: t.filter((t.a + t.b * 2 * t.b / t.b**3 > 4) & (t.b > 5)),
-        lambda t: t
-        .filter((t.a + t.b * 2 * t.b / t.b**3 > 4) & (t.b > 5))
-        .group_by("c")
-        .aggregate(amean=lambda f: f.a.mean(), bsum=lambda f: f.b.sum()),
+        lambda t: (
+            t
+            .filter((t.a + t.b * 2 * t.b / t.b**3 > 4) & (t.b > 5))
+            .group_by("c")
+            .aggregate(amean=lambda f: f.a.mean(), bsum=lambda f: f.b.sum())
+        ),
     ],
 )
 def test_exprs(alltypes, expr_func):

ibis/tests/benchmarks/test_benchmarks.py~L703

     N = 20_000_000
 
     path = str(tmp_path_factory.mktemp("duckdb") / "data.ddb")
-    sql = (
-        lambda var, table, n=N: f"""
+    sql = lambda var, table, n=N: (
+        f"""
         CREATE TABLE {table} AS
         SELECT ROW_NUMBER() OVER () AS id, {var}
         FROM (

ibis/tests/expr/test_value_exprs.py~L928

         operator.gt,
         operator.ge,
         lambda left, right: ibis.timestamp("2017-04-01 00:02:34").between(left, right),
-        lambda left, right: ibis
-        .timestamp("2017-04-01")
-        .cast(dt.date)
-        .between(left, right),
+        lambda left, right: (
+            ibis.timestamp("2017-04-01").cast(dt.date).between(left, right)
+        ),
     ],
 )
 def test_string_temporal_compare(op, left, right):

pandas-dev/pandas (+22 -18 lines across 3 files)

pandas/tests/copy_view/test_indexing.py~L478

         lambda s: s["a":"c"]["a":"b"],  # type: ignore[misc]
         lambda s: s.iloc[0:3].iloc[0:2],
         lambda s: s.loc["a":"c"].loc["a":"b"],  # type: ignore[misc]
-        lambda s: s
-        .loc["a":"c"]  # type: ignore[misc]
-        .iloc[0:3]
-        .iloc[0:2]
-        .loc["a":"b"]  # type: ignore[misc]
-        .iloc[0:1],
+        lambda s: (
+            s
+            .loc["a":"c"]  # type: ignore[misc]
+            .iloc[0:3]
+            .iloc[0:2]
+            .loc["a":"b"]  # type: ignore[misc]
+            .iloc[0:1]
+        ),
     ],
     ids=["getitem", "iloc", "loc", "long-chain"],
 )

pandas/tests/frame/methods/test_shift.py~L437

         # Explicit cast to float to avoid implicit cast when setting nan.
         # Column names aren't unique, so directly calling `expected.astype` won't work.
         expected = expected.pipe(
-            lambda df: df
-            .set_axis(range(df.shape[1]), axis=1)
-            .astype({0: "float", 1: "float"})
-            .set_axis(df.columns, axis=1)
+            lambda df: (
+                df
+                .set_axis(range(df.shape[1]), axis=1)
+                .astype({0: "float", 1: "float"})
+                .set_axis(df.columns, axis=1)
+            )
         )
         expected.iloc[:, :2] = np.nan
         expected.columns = df3.columns

pandas/tests/frame/methods/test_shift.py~L457

         # Explicit cast to float to avoid implicit cast when setting nan.
         # Column names aren't unique, so directly calling `expected.astype` won't work.
         expected = expected.pipe(
-            lambda df: df
-            .set_axis(range(df.shape[1]), axis=1)
-            .astype({3: "float", 4: "float"})
-            .set_axis(df.columns, axis=1)
+            lambda df: (
+                df
+                .set_axis(range(df.shape[1]), axis=1)
+                .astype({3: "float", 4: "float"})
+                .set_axis(df.columns, axis=1)
+            )
         )
         expected.iloc[:, -2:] = np.nan
         expected.columns = df3.columns

pandas/tests/reshape/merge/test_merge_asof.py~L1902

         tm.assert_frame_equal(result, expected)
 
     def test_basic_no_by(self, trades, asof, quotes):
-        f = (
-            lambda x: x[x.ticker == "MSFT"]
-            .drop("ticker", axis=1)
-            .reset_index(drop=True)
+        f = lambda x: (
+            x[x.ticker == "MSFT"].drop("ticker", axis=1).reset_index(drop=True)
         )
 
         # just use a single ticker

prefecthq/prefect (+9 -3 lines across 2 files)

src/prefect/server/services/cancellation_cleanup.py~L102

 
 # Perpetual monitor for cancelled flow runs with child tasks (find and flood pattern)
 @perpetual_service(
-    enabled_getter=lambda: get_current_settings().server.services.cancellation_cleanup.enabled,
+    enabled_getter=lambda: (
+        get_current_settings().server.services.cancellation_cleanup.enabled
+    ),
 )
 async def monitor_cancelled_flow_runs(
     docket: Docket = CurrentDocket(),

src/prefect/server/services/cancellation_cleanup.py~L139

 
 # Perpetual monitor for subflow runs that need cancellation (find and flood pattern)
 @perpetual_service(
-    enabled_getter=lambda: get_current_settings().server.services.cancellation_cleanup.enabled,
+    enabled_getter=lambda: (
+        get_current_settings().server.services.cancellation_cleanup.enabled
+    ),
 )
 async def monitor_subflow_runs(
     docket: Docket = CurrentDocket(),

src/prefect/server/services/pause_expirations.py~L47

 
 
 @perpetual_service(
-    enabled_getter=lambda: get_current_settings().server.services.pause_expirations.enabled,
+    enabled_getter=lambda: (
+        get_current_settings().server.services.pause_expirations.enabled
+    ),
 )
 async def monitor_expired_pauses(
     docket: Docket = CurrentDocket(),

qdrant/qdrant-client (+14 -12 lines across 2 files)

tests/congruence_tests/test_common.py~L331

 
     if isinstance(res1, list):
         if is_context_search is True:
-            sorted_1 = sorted(res1, key=lambda x: (x.id))
-            sorted_2 = sorted(res2, key=lambda x: (x.id))
+            sorted_1 = sorted(res1, key=lambda x: x.id)
+            sorted_2 = sorted(res2, key=lambda x: x.id)
             compare_records(sorted_1, sorted_2, abs_tol=1e-5)
         else:
             compare_records(res1, res2)

tests/congruence_tests/test_common.py~L340

         res2, models.QueryResponse
     ):
         if is_context_search is True:
-            sorted_1 = sorted(res1.points, key=lambda x: (x.id))
-            sorted_2 = sorted(res2.points, key=lambda x: (x.id))
+            sorted_1 = sorted(res1.points, key=lambda x: x.id)
+            sorted_2 = sorted(res2.points, key=lambda x: x.id)
             compare_records(sorted_1, sorted_2, abs_tol=1e-5)
         else:
             compare_records(res1.points, res2.points)

tests/congruence_tests/test_delete_points.py~L70

     compare_client_results(
         local_client,
         remote_client,
-        lambda c: c.query_points(
-            COLLECTION_NAME,
-            query=vector,
-            using="sparse-image",
-        ).points,
+        lambda c: (
+            c.query_points(
+                COLLECTION_NAME,
+                query=vector,
+                using="sparse-image",
+            ).points
+        ),
     )
 
     found_ids = [

tests/congruence_tests/test_delete_points.py~L92

     compare_client_results(
         local_client,
         remote_client,
-        lambda c: c.query_points(
-            COLLECTION_NAME, query=vector, using="sparse-image"
-        ).points,
+        lambda c: (
+            c.query_points(COLLECTION_NAME, query=vector, using="sparse-image").points
+        ),
     )

reflex-dev/reflex (+1 -7 lines across 1 file)

reflex/reflex.py~L803

         app_name=app_name,
         app_id=app_id,
         export_fn=(
-            lambda zip_dest_dir,
-            api_url,
-            deploy_url,
-            frontend,
-            backend,
-            upload_db,
-            zipping: (
+            lambda zip_dest_dir, api_url, deploy_url, frontend, backend, upload_db, zipping: (
                 export_utils.export(
                     zip_dest_dir=zip_dest_dir,
                     api_url=api_url,

rotki/rotki (+50 -51 lines across 9 files)

rotkehlchen/chain/evm/decoding/aave/v3/decoder.py~L460

                 ordered_events=ordered_events,
                 interest_event_lookup=interest_event_lookup,
                 used_interest_event_ids=used_interest_event_ids,
-                match_fn=lambda primary,
-                secondary: (  # use symbols due to Monerium and its different versions  # noqa: E501
+                match_fn=lambda primary, secondary: (  # use symbols due to Monerium and its different versions  # noqa: E501
                     (
                         underlying_token := get_single_underlying_token(
                             primary.asset.resolve_to_evm_token()

rotkehlchen/chain/evm/decoding/balancer/decoder.py~L51

             self,
             evm_inquirer=evm_inquirer,
             cache_type_to_check_for_freshness=BALANCER_CACHE_TYPE_MAPPING[counterparty],
-            query_data_method=lambda inquirer,
-            cache_type,
-            msg_aggregator,
-            reload_all: query_balancer_data(  # noqa: E501
-                inquirer=inquirer,
-                cache_type=cache_type,
-                protocol=counterparty,
-                msg_aggregator=msg_aggregator,
-                version=BALANCER_VERSION_MAPPING[counterparty],
-                reload_all=reload_all,
+            query_data_method=lambda inquirer, cache_type, msg_aggregator, reload_all: (
+                query_balancer_data(  # noqa: E501
+                    inquirer=inquirer,
+                    cache_type=cache_type,
+                    protocol=counterparty,
+                    msg_aggregator=msg_aggregator,
+                    version=BALANCER_VERSION_MAPPING[counterparty],
+                    reload_all=reload_all,
+                )
             ),
             read_data_from_cache_method=read_fn,
             chain_id=evm_inquirer.chain_id,

rotkehlchen/chain/evm/node_inquirer.py~L984

                     end_block = min(start_block + blocks_step, until_block)
                     try:
                         new_events = self._try_indexers(
-                            func=lambda indexer,
-                            start=start_block,
-                            end=end_block: indexer.get_logs(  # type: ignore[misc]  # noqa: E501
-                                chain_id=self.chain_id,
-                                contract_address=contract_address,
-                                topics=filter_args.get("topics", []),
-                                from_block=start,
-                                to_block=end,
-                                existing_events=events,
+                            func=lambda indexer, start=start_block, end=end_block: (
+                                indexer.get_logs(  # type: ignore[misc]  # noqa: E501
+                                    chain_id=self.chain_id,
+                                    contract_address=contract_address,
+                                    topics=filter_args.get("topics", []),
+                                    from_block=start,
+                                    to_block=end,
+                                    existing_events=events,
+                                )
                             )
                         )
                     except RemoteError as e:

rotkehlchen/chain/solana/node_inquirer.py~L376

         signatures = []
         while True:
             response: GetSignaturesForAddressResp = self.query(
-                method=lambda client,
-                _before=before,
-                _until=until: client.get_signatures_for_address(  # type: ignore[misc]  # noqa: E501
-                    account=Pubkey.from_string(address),
-                    limit=SIGNATURES_PAGE_SIZE,
-                    before=_before,
-                    until=_until,
+                method=lambda client, _before=before, _until=until: (
+                    client.get_signatures_for_address(  # type: ignore[misc]  # noqa: E501
+                        account=Pubkey.from_string(address),
+                        limit=SIGNATURES_PAGE_SIZE,
+                        before=_before,
+                        until=_until,
+                    )
                 ),
                 only_archive_nodes=True,
             )

rotkehlchen/data_import/importers/binance.py~L331

 
         for rows_group in rows_grouped_by_fee.values():
             rows_group.sort(
-                key=lambda x: x["Change"]
-                if same_assets
-                else x["Change"] * price_at_timestamp[x["Coin"]],
+                key=lambda x: (
+                    x["Change"] if same_assets else x["Change"] * price_at_timestamp[x["Coin"]]
+                ),
                 reverse=True,
             )  # noqa: E501
 

rotkehlchen/globaldb/handler.py~L1192

                 entry.protocol,
         

... (truncated 388 lines) ...

@dylwil3
Copy link
Collaborator Author

dylwil3 commented Jan 19, 2026

@ntBre - we don't run the black compatibility tests against our preview rules, so it looks like some syntax errors slipped past the original implementation.
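For context, the ecosystem diffs above all show the same shape change: when a lambda body would otherwise split across lines, the stabilized style wraps the body in parentheses rather than breaking it bare. A minimal sketch of the resulting code shape (the data and names here are hypothetical, not taken from the projects above):

```python
# Hypothetical example of the post-change formatting: a multi-line
# conditional expression as a lambda body, wrapped in parentheses
# (the 2026 style) instead of splitting without them.
records = [{"name": "a", "score": None}, {"name": "b", "score": 3}]

key = lambda record: (
    record["score"] if record["score"] is not None else 0
)

ranked = sorted(records, key=key, reverse=True)
names = [r["name"] for r in ranked]  # highest score first
```

The behavior of the code is unchanged; only the layout of the lambda body differs between the stable and 2026 styles.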

ntBre added a commit that referenced this pull request Jan 19, 2026
Summary
--

This PR fixes the issues revealed in #22744 by adding an additional branch to
the lambda body formatting that checks if the body `needs_parentheses` before
falling back on the `Parentheses::Never` case. I also updated the
`ExprNamed::needs_parentheses` implementation to match the one from #8465.

Test Plan
--

New test based on the failing cases in #22744. I also checked out #22744 and
checked that the tests pass after applying the changes from this PR.
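To illustrate the kind of syntax error the fix guards against (a hypothetical minimal case, not taken from the failing ecosystem diffs): a named expression used as a lambda body is only legal when parenthesized, so formatting it under the `Parentheses::Never` fallback without first checking `needs_parentheses` would emit invalid syntax.

```python
# A named expression (walrus) as a lambda body must stay parenthesized:
f = lambda x: (y := x + 1)   # valid
# f = lambda x: y := x + 1   # SyntaxError if the parentheses are dropped
result = f(41)
```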
ntBre added a commit that referenced this pull request Jan 20, 2026
@dylwil3 dylwil3 force-pushed the stabilize-parenthesize_lambda_bodies branch from e862cb5 to 76c1939 Compare January 23, 2026 14:29
@dylwil3 dylwil3 force-pushed the stabilize-parenthesize_lambda_bodies branch from 76c1939 to bf077c5 Compare January 26, 2026 14:48
@dylwil3 dylwil3 merged commit f3cc386 into 2026-style Jan 26, 2026
41 checks passed
@dylwil3 dylwil3 deleted the stabilize-parenthesize_lambda_bodies branch January 26, 2026 16:55
dylwil3 added a commit that referenced this pull request Jan 26, 2026
dylwil3 added a commit that referenced this pull request Jan 26, 2026
dylwil3 added a commit that referenced this pull request Jan 29, 2026
@dylwil3 dylwil3 mentioned this pull request Jan 29, 2026
3 tasks
dylwil3 added a commit that referenced this pull request Feb 2, 2026
ntBre added a commit that referenced this pull request Feb 3, 2026
Styles stabilized:

- [`avoid_parens_for_long_as_captures`](#22743)
- [`remove_parens_around_except_types`](#22741)
- [`allow_newline_after_block_open`](#22742)
- [`no_chaperone_for_escaped_quote_in_triple_quoted_docstring`](#22739)
- [`blank_line_before_decorated_class_in_stub`](#22740)
- [`parenthesize_lambda_bodies`](#22744)

To-do:

- [x] Change target branch to 0.15 release branch
- [x] Update documentation
- [x] Remove empty commit

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
ntBre added a commit that referenced this pull request Feb 3, 2026
ntBre added a commit that referenced this pull request Feb 3, 2026