
can set a patchData w. an all_primal centering #805

Closed
rochSmets wants to merge 10 commits into PHAREHUB:master from rochSmets:oper

Conversation

Contributor

@rochSmets rochSmets commented Jan 14, 2024

This is just the beginning of the work... I will continue once we agree on the way to do it.
My notebook to test it is here :
https://github.com/rochSmets/farJunk/blob/main/nb/ohm.ipynb

Summary by CodeRabbit

  • New Features

    • Introduced advanced mathematical operations for vector and scalar fields, including dot product, cross product, square root, gradient, and modulus calculations.
    • Added new methods and functions for handling hierarchical data structures, including renaming, arithmetic operations, and domain slicing.
  • Enhancements

    • Updated PatchHierarchy class with new methods for mapping quantities to spatial directions and retrieving patch data names.
    • Improved field computation methods to handle additional parameters and new arithmetic operations.


coderabbitai bot commented Jan 14, 2024

Warning

.coderabbit.yaml is using a deprecated configuration schema

The configuration schema (version 1) is deprecated and will be removed soon. Please migrate to the latest schema version.

New configuration YAML
early_access: false
enable_free_tier: true
language: en
tone_instructions: ''
knowledge_base:
  learnings:
    scope: auto
  issues:
    scope: auto
  jira:
    project_keys: []
  linear:
    team_keys: []
chat:
  auto_reply: true
reviews:
  profile: chill
  request_changes_workflow: false
  high_level_summary: true
  high_level_summary_placeholder: '@coderabbitai summary'
  poem: false
  review_status: true
  collapse_walkthrough: true
  path_filters: []
  path_instructions:
    - path: '**.hpp'
      instructions: >-
        Review the C++ code, point out issues relative to principles of clean
        code, expressiveness, and performance.
  tools:
    shellcheck:
      enabled: true
    ruff:
      enabled: true
    markdownlint:
      enabled: true
    github-checks:
      enabled: true
      timeout_ms: 90000
    languagetool:
      enabled: true
      disabled_rules:
        - EN_UNPAIRED_BRACKETS
        - EN_UNPAIRED_QUOTES
      disabled_categories:
        - TYPOS
        - TYPOGRAPHY
        - CASING
      enabled_only: false
      level: default
    biome:
      enabled: true
    hadolint:
      enabled: true
    swiftlint:
      enabled: true
    phpstan:
      enabled: true
      level: '1'
    golangci-lint:
      enabled: true
  auto_review:
    enabled: true
    auto_incremental_review: true
    ignore_title_keywords:
      - WIP
    labels: []
    drafts: true
    base_branches:
      - master
  auto_title_placeholder: '@coderabbitai title'
Configuration instructions
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
Walkthrough

The recent updates introduce a comprehensive set of mathematical and field operations across hierarchical data structures in the pyphare library. These enhancements include new functions for computing dot products, cross products, gradients, and more for vector and scalar fields. Additionally, various classes and methods have been updated to support these operations, handle different field centerings, and manage patch data more effectively.

Changes

File Path Change Summary
pyphare/core/operators.py Introduced functions for computing dot products, cross products, square roots, gradients, and modulus of fields in a hierarchy.
pyphare/pharesee/hierarchy.py Added flat_name method, updated global_min and global_max methods, introduced ScalarField and VectorField classes with arithmetic operations.
pyphare/pharesee/run.py Added imports and functions for domain slicing and field computations. Modified existing functions to handle new parameters and logic for all-primal data.
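The operators listed above all share one pattern: walk every time level and every patch of a hierarchy and apply an element-wise operation to each dataset. A minimal sketch of that pattern, assuming a toy `{time: {patch_name: array}}` layout (the real pyphare `PatchHierarchy` is more elaborate, and `apply_to_hierarchy` is a hypothetical name, not the library's API):

```python
import numpy as np

def apply_to_hierarchy(hier, op):
    """Apply an element-wise operation to every patch at every time level."""
    return {
        time: {name: op(dataset) for name, dataset in patches.items()}
        for time, patches in hier.items()
    }

# two patches at a single time level
hier = {0.0: {"p0": np.array([1.0, 4.0]), "p1": np.array([9.0, 16.0])}}
result = apply_to_hierarchy(hier, np.sqrt)
print(result[0.0]["p1"])  # element-wise sqrt of patch p1
```

Derived operators (modulus, dot product, and so on) can then be composed from such element-wise primitives.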

Sequence Diagram(s) (Beta)

sequenceDiagram
    participant User
    participant PatchHierarchy
    participant ScalarField
    participant VectorField
    participant Operators

    User->>PatchHierarchy: Initialize hierarchy
    PatchHierarchy-->>User: Return hierarchy instance

    User->>ScalarField: Perform arithmetic operation
    ScalarField-->>User: Return result

    User->>VectorField: Perform arithmetic operation
    VectorField-->>User: Return result

    User->>Operators: Compute dot product
    Operators-->>User: Return dot product

    User->>Operators: Compute cross product
    Operators-->>User: Return cross product

    User->>Operators: Compute gradient
    Operators-->>User: Return gradient

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 4

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 618c8f6 and a955eb2.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (2 hunks)
  • pyphare/pyphare/pharesee/run.py (4 hunks)
Additional comments: 13
pyphare/pyphare/core/operators.py (3)
  • 9-35: The sqrt function is designed to compute the square root of a ScalarField. The function iterates over time levels and patches to apply the square root operation to the dataset. The implementation seems correct, but the TODO comment on line 35 raises a question about handling multiple times within the hierarchy. This needs clarification or implementation before proceeding.

  • 40-45: The modulus function calculates the modulus of a VectorField by squaring the hierarchy and then taking the square root. This is a neat use of operator overloading and function composition. Ensure that the __mul__ operator is properly defined for VectorField objects to support this operation.

  • 51-105: The sqrt_old function is designed to handle ScalarField, VectorField, and TensorField hierarchies. The function uses assertions and conditional logic to determine the type of field and applies the square root operation accordingly. The function seems to be well-implemented, but it's marked as "old". It's important to ensure that any references to this function have been updated to use the new sqrt function if that is the intention.
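The modulus-as-composition idea reviewed above (modulus(v) = sqrt(v * v), relying on an overridden `__mul__`) can be sketched with a hypothetical stand-in class — the real `VectorField` lives in pyphare and its `__mul__` is discussed later in this review:

```python
import numpy as np

class ToyVectorField:
    """Hypothetical stand-in for pyphare's VectorField."""

    def __init__(self, x, y, z):
        self.components = (np.asarray(x), np.asarray(y), np.asarray(z))

    def __mul__(self, other):
        # element-wise dot product: x*x + y*y + z*z
        return sum(a * b for a, b in zip(self.components, other.components))

def modulus(field):
    # composition of operator overloading and an element-wise sqrt
    return np.sqrt(field * field)

v = ToyVectorField([3.0], [4.0], [0.0])
print(modulus(v))  # -> [5.]
```

The composition only works if `__mul__` of a field with itself yields the component-wise dot product, which is exactly what the reviewer asks to verify.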

pyphare/pyphare/pharesee/run.py (8)
  • 135-154: The function _compute_Bx_to_primal is designed to compute the Bx component of a magnetic field on a primal grid. The implementation seems correct, but the use of np.nan to fill the dataset may not be the best choice for all use cases. It's important to ensure that this is the desired behavior and that any downstream functions can handle np.nan values appropriately.

  • 157-176: The function _compute_By_to_primal is similar to _compute_Bx_to_primal but for the By component. The same concerns apply regarding the use of np.nan and ensuring that downstream functions can handle these values.

  • 179-200: The function _compute_Bz_to_primal is similar to _compute_Bx_to_primal and _compute_By_to_primal but for the Bz component. The same concerns apply regarding the use of np.nan and ensuring that downstream functions can handle these values.

  • 203-222: The function _compute_Ex_to_primal is designed to compute the Ex component of an electric field on a primal grid. The implementation seems correct, but the same concerns about np.nan apply.

  • 225-244: The function _compute_Ey_to_primal is similar to _compute_Ex_to_primal but for the Ey component. The same concerns apply regarding the use of np.nan and ensuring that downstream functions can handle these values.

  • 247-265: The function _compute_Ez_to_primal is similar to _compute_Ex_to_primal and _compute_Ey_to_primal but for the Ez component. The same concerns apply regarding the use of np.nan and ensuring that downstream functions can handle these values.

  • 507-516: The GetB function has been modified to support the computation of magnetic fields on primal grids. The use of the compute_hier_from function and the creation of a VectorField object seem appropriate. However, the TODO comment on line 509 suggests that there is still work to be done when all_primal is False. This needs to be resolved.

  • 567-570: The GetJ function has been updated to support the computation of current densities on primal grids. The implementation seems correct, but it's important to ensure that the compute_hier_from function is properly handling the conversion to primal grids.
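The `np.nan` concern raised in the comments above is easy to illustrate: if a primal-grid dataset is pre-filled with `np.nan` (for nodes that were never computed), plain reductions are poisoned while the nan-aware variants skip the fill values. A small self-contained example:

```python
import numpy as np

data = np.full(5, np.nan)        # dataset pre-filled with NaN
data[1:4] = [2.0, 7.0, 3.0]      # only interior nodes were computed

print(np.min(data))              # nan: plain min is poisoned by fill values
print(np.nanmin(data))           # 2.0: nan-aware reduction ignores them
print(np.nanmax(data))           # 7.0
```

This is why downstream code (plotting, global min/max) must consistently use the `np.nan*` family when this fill strategy is kept.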

pyphare/pyphare/pharesee/hierarchy.py (2)
  • 566-576: The flat_name method in the PatchHierarchy class is a simple mapping from a given quantity to a string representation. It's a straightforward implementation and seems correct.

  • 1103-1110: The VectorField class is another new class inheriting from PatchHierarchy. The __init__ method is similar to ScalarField and correctly sets up the class. The names attribute is initialized with a list of strings representing the vector components, which is appropriate for the class's purpose.
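The `flat_name` mapping described above (quantity name to 'scalar'/'x'/'y'/'z', with a `ValueError` for anything else) can be sketched as follows; the quantity names are illustrative, taken from the commented-out variant quoted later in this review, and this is not the library's actual implementation:

```python
def flat_name(qty):
    """Map a quantity name to its flat component label."""
    mapping = {
        "rho": "scalar", "tags": "scalar",
        "Bx": "x", "By": "y", "Bz": "z",
    }
    try:
        return mapping[qty]
    except KeyError:
        raise ValueError("{} is not a valid quantity".format(qty))

print(flat_name("Bx"))  # -> x
```

A dict lookup keeps the mapping declarative, which is also why the reviewer calls the method clean and maintainable.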

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between a955eb2 and 2a94080.
Files selected for processing (2)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (2 hunks)
Files skipped from review as they are similar to previous changes (2)
  • pyphare/pyphare/core/operators.py
  • pyphare/pyphare/pharesee/hierarchy.py

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 5

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 2a94080 and 764f31c.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (2 hunks)
  • pyphare/pyphare/pharesee/run.py (4 hunks)
Files skipped from review as they are similar to previous changes (1)
  • pyphare/pyphare/pharesee/run.py
Additional comments: 4
pyphare/pyphare/core/operators.py (1)
  • 71-76: The modulus function correctly asserts that the input hier is an instance of VectorField. However, the operation hier*hier on line 75 implies that the VectorField class has an overridden __mul__ method. Ensure that this method is implemented correctly and handles the multiplication operation as intended for vector fields.
Verification successful

The search with rg has found a __mul__ method within the VectorField class in the file pyphare/pyphare/pharesee/hierarchy.py. This confirms that the VectorField class does indeed have an overridden __mul__ method, which is necessary for the modulus function in operators.py to work correctly as it performs the operation hier*hier.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the VectorField class has an overridden __mul__ method.
ast-grep --lang python --pattern $'class VectorField {
  $$$
  def __mul__($$$) {
    $$$
  }
  $$$
}'

Length of output: 448



Script:

#!/bin/bash
# Refine the search pattern for the __mul__ method in the VectorField class.
ast-grep --lang python --pattern 'class VectorField { def __mul__($$$) { $$$ } }'

Length of output: 428



Script:

#!/bin/bash
# Use ripgrep to search for the __mul__ method in the VectorField class.
rg "def __mul__\(self" -A 3 -t py

Length of output: 575

pyphare/pyphare/pharesee/hierarchy.py (3)
  • 566-576: The method flat_name in the PatchHierarchy class is straightforward and correctly raises a ValueError for invalid quantities.
  • 1103-1109: The VectorField class is correctly initialized, but it's not clear why the names attribute is a list of strings. This could be a design decision, but it's worth verifying that this is intentional and properly documented.
Verification successful

The search results indicate that the names attribute is indeed used within the VectorField class and other parts of the hierarchy.py file. Specifically, the attribute is set to ['x', 'y', 'z'], which likely corresponds to the components of the vector field in a three-dimensional space. The usage of names in the context of for loops and function parameters suggests that it is an integral part of the data structure and is used to iterate over the components of vector fields and rename patch data.

Given this context, it appears that the names attribute is intentionally a list of strings and is used to represent the different dimensions or components of the vector fields within the pyphare library. This design decision seems to be consistent with the handling of vector data in computational physics, where components are often labeled or accessed using identifiers like 'x', 'y', and 'z'.

Based on the information gathered, the review comment questioning the intention and documentation of the names attribute as a list of strings is addressed. The attribute is used as expected for vector field components in the codebase.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the `names` attribute in VectorField is used as intended throughout the codebase.
rg --type python "names" "pyphare/pyphare/pharesee/hierarchy.py"

Length of output: 93



Script:

#!/bin/bash
# Correcting the file type and re-running the search for the `names` attribute in VectorField within the `hierarchy.py` file.
rg --type py "names" "pyphare/pyphare/pharesee/hierarchy.py"

Length of output: 748

  • 1213-1214: The rename function is a utility that allows renaming patch data in the hierarchy. Ensure that the names argument is always provided and properly validated before use.

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 764f31c and 23a374b.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (3 hunks)
  • pyphare/pyphare/pharesee/run.py (4 hunks)
Files skipped from review as they are similar to previous changes (3)
  • pyphare/pyphare/core/operators.py
  • pyphare/pyphare/pharesee/hierarchy.py
  • pyphare/pyphare/pharesee/run.py

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 23a374b and 529e584.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (3 hunks)
  • pyphare/pyphare/pharesee/run.py (5 hunks)
Files skipped from review as they are similar to previous changes (3)
  • pyphare/pyphare/core/operators.py
  • pyphare/pyphare/pharesee/hierarchy.py
  • pyphare/pyphare/pharesee/run.py

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 1

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 618c8f6 and a2c803d.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (5 hunks)
  • pyphare/pyphare/pharesee/run.py (5 hunks)
Files skipped from review as they are similar to previous changes (2)
  • pyphare/pyphare/core/operators.py
  • pyphare/pyphare/pharesee/run.py
Additional comments: 13
pyphare/pyphare/pharesee/hierarchy.py (13)
  • 566-576: The method flat_name in the PatchHierarchy class correctly categorizes quantities into 'scalar', 'x', 'y', 'z', or raises a ValueError for invalid quantities. This method is well-implemented for its intended purpose.
  • 749-752: The use of np.nanmin and np.nanmax in the global_min and global_max methods ensures that NaN values are handled appropriately when calculating the global minimum and maximum values across the dataset. This is a good practice for data containing NaN values.
  • 1087-1096: The __neg__ method in the PatchHierarchy class lacks direct manipulation or usage of its own instance attributes and instead relies on external functions like compute_hier_from and _compute_neg. Ensure that the external functions properly handle the negation operation for the hierarchy's data. It's also important to verify that these operations maintain the integrity and structure of the hierarchy.
  • 1110-1117: The __mul__ method in the ScalarField class properly asserts that the other parameter is an integer or float, ensuring type safety for the multiplication operation. This is a good practice for preventing unexpected behavior or errors.
  • 1134-1142: Similar to the ScalarField class, the VectorField class's __mul__ method correctly asserts the type of the other parameter. This consistency across classes for arithmetic operations is good for maintainability and understanding of the codebase.
  • 1147-1168: The __add__ method in the VectorField class attempts to handle addition operations between vector fields. However, the method's implementation involves renaming and external computation functions, which could be complex and error-prone. Ensure that these operations are correctly implemented and that they preserve the hierarchy's structure and data integrity.
  • 1193-1205: The __truediv__ method in the VectorField class correctly checks if the other parameter is an instance of ScalarField before proceeding with the division operation. This type checking is crucial for ensuring that the operation is meaningful and prevents runtime errors.
  • 1210-1216: The _compute_mul function correctly multiplies the dataset by the other parameter and retains the centering information. This function is essential for performing scalar multiplication on the dataset while preserving its spatial properties.
  • 1223-1233: The _compute_add function handles the addition of datasets from different vector components ('x', 'y', 'z'). It ensures that the result maintains the correct centering information. This approach is appropriate for vector field operations within the hierarchy.
  • 1238-1248: The _compute_sub function, similar to _compute_add, correctly performs subtraction between datasets of vector components. It maintains the centering information, which is crucial for the integrity of the spatial data representation.
  • 1255-1258: The _compute_neg function properly negates the dataset values while preserving the centering information. This function is essential for applying unary negation operations to the hierarchy's data, ensuring the spatial properties remain consistent.
  • 1267-1270: The _compute_truediv function correctly divides the dataset by another dataset, specifically for dividing vector fields by scalar fields. This operation is crucial for operations that require normalization or scaling of vector fields by scalar quantities.
  • 1287-1289: The rename function provides a mechanism for renaming the datasets within the hierarchy. This function is important for operations that require temporary or permanent renaming of datasets to avoid naming conflicts or to clarify the dataset's purpose.
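A recurring point in the comments above is that each `_compute_*` helper transforms the dataset while carrying the centering metadata through unchanged, so the spatial staggering of the result stays consistent. A hedged sketch of that pattern (the dict layout and function name are hypothetical, not pyphare's exact signature):

```python
import numpy as np

def compute_mul(patch_data, factor):
    """Scale a patch dataset by a scalar, preserving its centering."""
    return {
        "data": patch_data["data"] * factor,
        "centering": patch_data["centering"],  # carried through, not recomputed
    }

pd = {"data": np.array([1.0, 2.0]), "centering": "primal"}
out = compute_mul(pd, 3.0)
print(out["data"], out["centering"])
```

Preserving (rather than recomputing) the centering is what keeps `_compute_add`, `_compute_sub`, `_compute_neg`, and `_compute_truediv` consistent with each other.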

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 18

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 6ff0df7 and 2751076.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (5 hunks)
  • pyphare/pyphare/pharesee/run.py (5 hunks)

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 21

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 69f3027 and 01a0e1e.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (5 hunks)
  • pyphare/pyphare/pharesee/run.py (4 hunks)
Additional comments: 2
pyphare/pyphare/core/operators.py (2)
  • 19-25: The function _compute_sqrt correctly computes the square root of a scalar field. It follows a similar pattern to _compute_dot_product, including the return structure. Ensure that the key "scalar" is always present in patch_datas to avoid potential KeyError.

Consider adding a key presence check similar to the one suggested for _compute_dot_product.

  • 16-16: Regarding PhilipDeegan's comment about line 16 looking like it returns a tuple of size one, this pattern is consistent across similar functions (_compute_sqrt, _compute_cross_product, etc.). It seems to be a deliberate design choice to return a tuple containing a dictionary for each computed field. Clarification from the author would be helpful to understand if this is indeed required.

Request clarification from the author on the necessity of returning a tuple of size one.
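Both points above (the key-presence check and the tuple-of-size-one return) can be sketched together; the dict keys and function name here are hypothetical stand-ins for pyphare's `_compute_sqrt`, but the shape of the return value mirrors the pattern under discussion — returning `(result,)` keeps a uniform interface with operators that produce several output fields:

```python
import numpy as np

def compute_sqrt(patch_datas):
    if "scalar" not in patch_datas:  # key-presence check suggested above
        raise KeyError("expected 'scalar' in patch_datas")
    ds = patch_datas["scalar"]
    # note the trailing comma: a 1-tuple, not a bare dict
    return ({"name": "scalar", "data": np.sqrt(ds)},)

(out,) = compute_sqrt({"scalar": np.array([4.0, 9.0])})
print(out["data"])
```

Callers that iterate over the returned fields then work identically whether an operator yields one output or three.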

@rochSmets
Contributor Author

I still have 2 commits to keep a record of what I did, in case I need to restore anything. But if this is fine for you, I will squash into a single commit.

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 6

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 01a0e1e and 4fcd068.
Files selected for processing (2)
  • pyphare/pyphare/pharesee/hierarchy.py (5 hunks)
  • pyphare/pyphare/pharesee/run.py (4 hunks)
Additional comments: 5
pyphare/pyphare/pharesee/run.py (2)
  • 10-16: The imports from pyphare.pharesee.hierarchy are correctly added, including ScalarField and VectorField which are essential for the new functionalities introduced in this PR.
  • 506-547: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [509-594]

The modifications to the GetB, GetE, GetNi, GetPe, and GetJ functions to handle the all_primal flag are correctly implemented. These changes enable the functions to compute electromagnetic field components on primal grids, enhancing the library's computational capabilities. However, it's important to ensure that these modifications are thoroughly tested, especially since they introduce conditional logic that affects the core functionality of the library.

Ensure comprehensive testing of the modified functions (GetB, GetE, GetNi, GetPe, GetJ) to verify correct behavior with the all_primal flag, both when it's set to True and False.

pyphare/pyphare/pharesee/hierarchy.py (3)
  • 566-587: The method flat_name in the PatchHierarchy class provides a clean and efficient way to map quantity names to their respective types. This implementation is both readable and maintainable.
  • 790-793: Ensure that the variable m is initialized before the loop in the global_min method to avoid potential NameError if the first flag is not set to True before entering the loop. This issue was previously identified and a fix was suggested. Verify that this correction has been applied to prevent runtime errors.
  • 804-807: Similar to the global_min method, the global_max method should initialize the variable m before the loop to prevent a NameError. This correction was previously suggested. Confirm that this initialization is present to ensure the method's robustness.
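The initialization concern flagged for `global_min`/`global_max` above is the classic first-flag pitfall: with a `first` flag, the accumulator is only bound inside the loop, so an empty iteration raises `NameError`. A minimal sketch of the safer variant (toy signature, not pyphare's actual method):

```python
import numpy as np

def global_min(datasets):
    m = np.inf  # seeded up front: defined even if the loop body never runs
    for ds in datasets:
        m = min(m, np.nanmin(ds))  # nan-aware, per the review above
    return m

print(global_min([np.array([3.0, np.nan]), np.array([1.0, 5.0])]))  # -> 1.0
```

`global_max` is symmetric with `-np.inf` and `np.nanmax`.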

Comment on lines +10 to +16
from pyphare.pharesee.hierarchy import (
compute_hier_from,
flat_finest_field,
hierarchy_from,
ScalarField,
VectorField,
)

Check notice

Code scanning / CodeQL

Cyclic import

Import of module pyphare.pharesee.hierarchy begins an import cycle.
from pyphare.pharesee.hierarchy import compute_hier_from

from .hierarchy import flat_finest_field, hierarchy_from
import pyphare.pharein as ph

Check notice

Code scanning / CodeQL

Unused import

Import of 'ph' is not used.
Comment on lines +589 to +598
# if qty in ['rho', 'tags']:
# return 'scalar'
# elif qty in ['Bx', 'Ex', 'Fx', 'Vx']:
# return 'x'
# elif qty in ['By', 'Ey', 'Fy', 'Vy']:
# return 'y'
# elif qty in ['Bz', 'Ez', 'Fz', 'Vz']:
# return 'z'
# else:
# raise ValueError("{} is not a valid quantity".format(qty))

Check notice

Code scanning / CodeQL

Commented-out code

This comment appears to contain commented-out code.
@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 1

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between 4fcd068 and e619bc1.
Files selected for processing (3)
  • src/hdf5/phare_hdf5.hpp (1 hunks)
  • src/python3/cpp_etc.cpp (1 hunks)
  • tests/simulator/test_diagnostics.py (2 hunks)
Additional comments: 2
src/python3/cpp_etc.cpp (1)
  • 43-43: The change from PHARE_TO_STR(HIGHFIVE_VERSION) to HIGHFIVE_VERSION_STRING in the phare_deps function improves consistency and clarity in version reporting. This aligns well with the Highfive version check added in phare_hdf5.hpp.
tests/simulator/test_diagnostics.py (1)
  • 203-213: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [206-219]

The replacement of assert statements with self.assertTrue() assertions for checking conditions related to version splitting, digit validation, and asserting the value of Te aligns with best practices for writing unit tests in Python. This change enhances the integration with the unittest framework and provides more informative error messages.
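A toy illustration of the change described above (the version string and class name are invented for the example): `self.assertTrue` integrates with the unittest runner and can attach an informative message, unlike a bare `assert`, which is also stripped under `python -O`.

```python
import unittest

class VersionCheck(unittest.TestCase):
    def test_version_parts_are_digits(self):
        for part in "2.10.1".split("."):
            # message shows up in the failure report, unlike a bare assert
            self.assertTrue(part.isdigit(), f"non-numeric version part: {part}")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(VersionCheck)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```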

@PhilipDeegan
Member

I still have 2 commits to keep a record of what I did, in case I need to restore anything. But if this is fine for you, I will squash into a single commit.

It can all be squashed on merge, so you don't need to bother.

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 4

Configuration used: .coderabbit.yaml

Commits: files that changed from the base of the PR, between ea1cdd9 and 12a67b0.
Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (5 hunks)
  • pyphare/pyphare/pharesee/run.py (4 hunks)
Files skipped from review as they are similar to previous changes (1)
  • pyphare/pyphare/core/operators.py
Additional comments: 7
pyphare/pyphare/pharesee/run.py (1)
  • 10-16: The imports from pyphare.pharesee.hierarchy have been reorganized to include ScalarField and VectorField. This change is necessary for the new functionality introduced in this PR and follows Python's import best practices.
pyphare/pyphare/pharesee/hierarchy.py (6)
  • 566-587: The method flat_name in the PatchHierarchy class provides a clean and efficient way to map quantity names to their respective types. However, the commented-out code from lines 589 to 598 seems redundant and should be removed to maintain code cleanliness.
-        # if qty in ['rho', 'tags']:
-        #     return 'scalar'
-        # elif qty in ['Bx', 'Ex', 'Fx', 'Vx']:
-        #     return 'x'
-        # elif qty in ['By', 'Ey', 'Fy', 'Vy']:
-        #     return 'y'
-        # elif qty in ['Bz', 'Ez', 'Fz', 'Vz']:
-        #     return 'z'
-        # else:
-        #     raise ValueError("{} is not a valid quantity".format(qty))
  • 790-793: In the global_min method, the initialization of m inside the loop based on the first flag is a valid approach. However, it's important to ensure that the first flag is correctly managed to avoid any potential logic errors. The method correctly uses np.nanmin to handle NaN values, which is a good practice for numerical computations.
  • 804-807: The global_max method follows a similar pattern to global_min, using a first flag for initialization and np.nanmax for handling NaN values. This consistency in handling global minimum and maximum calculations is good for maintainability and readability of the code.
  • 846-847: The get_names method assumes that all patches across all levels contain the same data names by retrieving names from the first patch of the first level. While this might be true in many cases, it's a strong assumption that could lead to issues in heterogeneous hierarchies. Consider adding a check or documentation note to clarify this assumption.
  • 1139-1158: The ScalarField class provides a specialized representation for scalar fields within the hierarchy, including support for multiplication operations. This class enhances the modularity of the code by separating scalar field-specific logic. Ensure that the class is used appropriately wherever scalar fields are manipulated within the hierarchy.
  • 1171-1173: The VectorField class initializes the names attribute but does not use it within the class. If this attribute is not required for the class's functionality, consider removing it to avoid confusion and maintain code cleanliness.
-        self.names = ['x', 'y', 'z']
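To make the ScalarField multiplication discussed above concrete, here is a minimal standalone sketch. It is an illustration only: the PR's actual ScalarField wraps a full PatchHierarchy rather than a bare array, and `MiniScalarField` is a hypothetical name.

```python
import numpy as np

class MiniScalarField:
    """Toy stand-in for the PR's ScalarField: wraps one dataset
    and supports multiplication by a number (e.g. Te * rho)."""

    def __init__(self, dataset):
        self.dataset = np.asarray(dataset, dtype=float)

    def __mul__(self, other):
        # field * scalar returns a new field with a scaled dataset
        return MiniScalarField(self.dataset * other)

    # scalar * field delegates to the same logic
    __rmul__ = __mul__

rho = MiniScalarField([1.0, 2.0, 4.0])
pe = 0.5 * rho  # mimics the Te * ScalarField(h) pattern in GetPe
print(pe.dataset.tolist())  # [0.5, 1.0, 2.0]
```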

(h_self, h_other),
)

self = rename(h_self, names_self_kept)

Check notice (Code scanning / CodeQL): Unused local variable; variable `self` is not used.

)

self = rename(h_self, names_self_kept)
other = rename(h_other, names_other_kept)

Check notice (Code scanning / CodeQL): Unused local variable; variable `other` is not used.
(h_self, h_other),
)

self = rename(h_self, names_self_kept)

Check notice (Code scanning / CodeQL): Unused local variable; variable `self` is not used.

)

self = rename(h_self, names_self_kept)
other = rename(h_other, names_other_kept)

Check notice (Code scanning / CodeQL): Unused local variable; variable `other` is not used.
try:
    fn(*args, **kwargs)
except:
    ...  # file not found

Check notice (Code scanning / CodeQL): Statement has no effect.

def _try(fn, *args, **kwargs):
    try:
        fn(*args, **kwargs)
    except:

Check notice (Code scanning / CodeQL): Except block directly handles BaseException.
@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 9

Configuration used: .coderabbit.yaml

Commits

Files that changed from the base of the PR and between 97f91ae and 9711eea.
Files selected for processing (5)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (5 hunks)
  • pyphare/pyphare/pharesee/run.py (5 hunks)
  • tests/core/data/gridlayout/test_laplacian.py (1 hunks)
  • tests/simulator/test_restarts.py (2 hunks)
Files skipped from review as they are similar to previous changes (1)
  • pyphare/pyphare/core/operators.py
Additional comments: 8
tests/simulator/test_restarts.py (2)
  • 169-170: The addition of the all_primal argument with a default value of False in the GetAllAvailableQties method call is consistent with the PR objectives to enhance data retrieval with an "all_primal" centering approach. However, it's important to ensure that all calls to this method throughout the codebase are updated to include this new argument if necessary.
  • 211-213: The addition of the all_primal argument with a default value of False to the GetB method call aligns with the PR's goal of implementing an "all_primal" centering approach. Similar to the previous comment, ensure that all instances of GetB method calls are updated accordingly.
Verification successful

Given the execution of the script and its results, it appears that there are no instances of the GetB method being called without the all_primal argument in the codebase. This aligns with the PR's goal of implementing an "all_primal" centering approach and updating all instances of GetB method calls accordingly. Therefore, the changes seem to be correctly implemented as per the PR's intention.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Search for calls to GetB without the all_primal argument.
ast-grep --lang python --pattern $'GetB($_)'

Length of output: 45

tests/core/data/gridlayout/test_laplacian.py (1)
  • 213-213: The modification in the slicing operation for Jx_x calculation appears to be correct and aligns with the intended computation adjustments. However, it's crucial to ensure that the indices used in slicing (psi_d_X, pei_d_X) accurately reflect the desired range for the dual centering in the x-direction. This change could potentially impact the accuracy of the laplacian calculation, so it's recommended to verify the correctness of the indices and the overall computation through additional testing or validation.

Ensure that the modified slicing operation for Jx_x accurately captures the intended range and that the computation aligns with the theoretical expectations for the laplacian calculation. Consider adding unit tests or validation checks to confirm the correctness of this change.

pyphare/pyphare/pharesee/run.py (3)
  • 10-16: The imports from pyphare.pharesee.hierarchy are correctly consolidated into a single import statement, improving code readability and maintainability. However, ensure that the cyclic import issue flagged by the security bot has been resolved.
  • 219-231: The function _pdd_to_ppp_domain_slicing provides domain slicing for converting dual to primal centering. It currently raises a RuntimeError for dimensions other than 2. If the framework plans to support 3D in the future, it would be beneficial to start planning and implementing the necessary logic for 3D domain slicing now to ensure scalability.
  • 585-607: The GetPe function has been updated to use the all_primal flag and to retrieve Te from the hierarchy's simulation attributes. This is a significant improvement in leveraging the framework's capabilities for more accurate calculations. However, ensure that the Te attribute is always available and correctly initialized in the hierarchy to prevent runtime errors.
pyphare/pyphare/pharesee/hierarchy.py (2)
  • 846-847: The get_names method provides a straightforward way to retrieve the names of patch data from the first patch of the first level. This method assumes that all patches across all levels contain the same types of patch data, which might not always be the case in more complex scenarios. It's recommended to document this assumption or consider a more robust approach if the assumption does not hold.

Consider verifying the assumption that all patches across all levels contain the same types of patch data. If this assumption might not hold in all cases, consider implementing a more comprehensive method to gather patch data names from the entire hierarchy.

  • 1128-1139: The __neg__ method in the PatchHierarchy class demonstrates an attempt to apply a negation operation across the hierarchy. However, the method's implementation is not shown, and it's unclear how the compute_hier_from and _compute_neg functions are defined or used. Ensure these functions correctly apply the negation operation to the appropriate data within the hierarchy.

Verify the implementation of compute_hier_from and _compute_neg functions to ensure they correctly apply the negation operation to the hierarchy's data.

Comment on lines +566 to +587
def flat_name(self, qty):
name_dict = {
"rho": "scalar",
"tags": "scalar",
"Bx": "x",
"Ex": "x",
"Fx": "x",
"Vx": "x",
"By": "y",
"Ey": "y",
"Fy": "y",
"Vy": "y",
"Bz": "z",
"Ez": "z",
"Fz": "z",
"Vz": "z",
}

if qty in name_dict.keys():
return name_dict[qty]
else:
raise ValueError("{} is not a valid quantity".format(qty))

The implementation of the flat_name method in the PatchHierarchy class is concise and correctly uses a dictionary to map quantity names to their respective types. However, the commented-out code from lines 589 to 598 is redundant and should be removed to keep the codebase clean and maintainable.

-        # if qty in ['rho', 'tags']:
-        #     return 'scalar'
-        # elif qty in ['Bx', 'Ex', 'Fx', 'Vx']:
-        #     return 'x'
-        # elif qty in ['By', 'Ey', 'Fy', 'Vy']:
-        #     return 'y'
-        # elif qty in ['Bz', 'Ez', 'Fz', 'Vz']:
-        #     return 'z'
-        # else:
-        #     raise ValueError("{} is not a valid quantity".format(qty))
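As a side note, the same mapping can be written slightly more idiomatically with `dict.get`, which avoids the explicit `in name_dict.keys()` membership test. A small sketch under that assumption (quantity names copied from the diff above):

```python
# Mapping from quantity name to its flattened type/direction,
# as in the flat_name method under review.
NAME_DICT = {
    "rho": "scalar", "tags": "scalar",
    "Bx": "x", "Ex": "x", "Fx": "x", "Vx": "x",
    "By": "y", "Ey": "y", "Fy": "y", "Vy": "y",
    "Bz": "z", "Ez": "z", "Fz": "z", "Vz": "z",
}

def flat_name(qty):
    # dict.get returns None on a miss, which we turn into the same ValueError
    name = NAME_DICT.get(qty)
    if name is None:
        raise ValueError(f"{qty} is not a valid quantity")
    return name

print(flat_name("By"))   # y
print(flat_name("rho"))  # scalar
```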


Comment on lines +790 to +793
m = np.nanmin(pd.dataset[:])
first = False
else:
m = min(m, pd.dataset[:].min())
m = min(m, np.nanmin(pd.dataset[:]))

In the global_min method, the use of np.nanmin to find the minimum value while ignoring NaNs is appropriate for numerical datasets that may contain missing values. This approach ensures robustness in calculations. However, the initialization of m and the conditional check for first could be simplified by initializing m to np.inf outside the loop, eliminating the need for the first flag.

-        first = True
+        m = np.inf
         for ilvl, lvl in self.levels(time).items():
             for patch in lvl.patches:
                 pd = patch.patch_datas[qty]
-                if first:
-                    m = np.nanmin(pd.dataset[:])
-                    first = False
-                else:
-                    m = min(m, np.nanmin(pd.dataset[:]))
+                m = min(m, np.nanmin(pd.dataset[:]))
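A runnable illustration of the suggested pattern, using plain numpy arrays in place of patch datasets (the data here is hypothetical; the real method iterates over hierarchy levels and patches):

```python
import numpy as np

# stand-ins for pd.dataset[:] on three patches, one containing a NaN
datasets = [
    np.array([3.0, 1.5, 2.0]),
    np.array([np.nan, 0.5]),
    np.array([4.0, 2.5]),
]

m = np.inf                     # starting at +inf removes the 'first' flag
for ds in datasets:
    m = min(m, np.nanmin(ds))  # nanmin ignores the NaN entry

print(m)  # 0.5
```

The symmetric global_max version starts from `-np.inf` and uses `max` with `np.nanmax`.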


Comment on lines +804 to +807
m = np.nanmax(pd.dataset[:])
first = False
else:
m = max(m, pd.dataset[:].max())
m = max(m, np.nanmax(pd.dataset[:]))

Similar to the global_min method, the global_max method correctly uses np.nanmax for handling NaN values. The same refinement suggested for global_min applies here as well, where initializing m to -np.inf outside the loop can simplify the code by removing the need for the first flag.

-        first = True
+        m = -np.inf
         for ilvl, lvl in self.levels(time).items():
             for patch in lvl.patches:
                 pd = patch.patch_datas[qty]
-                if first:
-                    m = np.nanmax(pd.dataset[:])
-                    first = False
-                else:
-                    m = max(m, np.nanmax(pd.dataset[:]))
+                m = max(m, np.nanmax(pd.dataset[:]))


Comment on lines +1242 to +1243
self = rename(h_self, names_self_kept)
other = rename(h_other, names_other_kept)

Similar to the __add__ method, variables self and other are assigned but not used in the __sub__ method of VectorField. This could indicate a misunderstanding of the method's functionality or an incomplete implementation. Clarify the intended use of these variables or remove them if unnecessary.

As with the __add__ method, ensure that the variables self and other are used appropriately in the __sub__ method or remove them if they are unnecessary.



def dot(hier_left, hier_right, **kwargs):
    names_left_kept = hier_left.get_names()

Check notice (Code scanning / CodeQL): Unused local variable; variable `names_left_kept` is not used.

def dot(hier_left, hier_right, **kwargs):
    names_left_kept = hier_left.get_names()
    names_right_kept = hier_right.get_names()

Check notice (Code scanning / CodeQL): Unused local variable; variable `names_right_kept` is not used.

def cross(hier_left, hier_right, **kwargs):
    names_left_kept = hier_left.get_names()

Check notice (Code scanning / CodeQL): Unused local variable; variable `names_left_kept` is not used.

def cross(hier_left, hier_right, **kwargs):
    names_left_kept = hier_left.get_names()
    names_right_kept = hier_right.get_names()

Check notice (Code scanning / CodeQL): Unused local variable; variable `names_right_kept` is not used.
@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 4

Configuration used: .coderabbit.yaml

Commits

Files that changed from the base of the PR and between 9711eea and a1c16c2.
Files selected for processing (2)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/run.py (5 hunks)
Files skipped from review as they are similar to previous changes (1)
  • pyphare/pyphare/core/operators.py
Additional comments: 3
pyphare/pyphare/pharesee/run.py (3)
  • 10-16: The imports from pyphare.pharesee.hierarchy have been reorganized to include ScalarField and VectorField. This change is consistent with the PR's objective to enhance patch data management with an "all_primal" centering approach.
  • 507-520: The modification to the GetB function to handle the all_primal flag is consistent with the PR's objectives. The use of compute_hier_from with _compute_to_primal for all_primal flag handling is a good approach. However, ensure that the compute_hier_from function and the _compute_to_primal function are thoroughly tested, especially for edge cases and performance implications.
Verification successful

The modification to the GetB function and similar modifications in other functions to handle the all_primal flag are confirmed by the script execution results. The usage of compute_hier_from with _compute_to_primal across various parts of the run.py file for different physical quantities supports the PR's objectives. While the search did not reveal markdown documentation, the presence of these functions in the codebase for handling all_primal is verified. Ensure that these implementations are covered by comprehensive tests for reliability and performance.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify the correct behavior of compute_hier_from with _compute_to_primal in various scenarios
# This script is a placeholder for the concept and should be replaced with actual test commands
echo "Ensure comprehensive tests for compute_hier_from with _compute_to_primal"

Length of output: 154



Script:

#!/bin/bash
# Search for test cases or documentation related to compute_hier_from and _compute_to_primal
rg "compute_hier_from" -A 3 | grep "_compute_to_primal"
rg "_compute_to_primal" -A 3 | grep "compute_hier_from"
# Additionally, look for any markdown files that might contain documentation about these functions
fd "\.md$" --exec rg "compute_hier_from|_compute_to_primal"

Length of output: 1249

  • 595-599: The update to the `GetPe` function to use `hier.sim.electrons.closure.Te` for obtaining `Te` is a good improvement and aligns with the latest project developments. This change ensures that the code remains consistent and utilizes the most accurate data source for `Te`.

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: .coderabbit.yaml

Commits

Files that changed from the base of the PR and between a1c16c2 and 68e506a.
Files selected for processing (1)
  • pyphare/pyphare/pharesee/run.py (5 hunks)
Files skipped from review as they are similar to previous changes (1)
  • pyphare/pyphare/pharesee/run.py


if ndim == 2:
    inner_all = tuple([inner] * 2)
    return inner_all, ((inner, inner_shift_left), (inner, inner_shift_right))
Member

there's a return only for 2D, not for 1D?

Contributor Author

yes; will do it very soon... before merging!
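As a hedged sketch of what the missing 1D branch could compute (not the PR's actual code): on a Yee-type grid, a dual-centered quantity is brought to primal centering by averaging neighboring dual values, which is what the left/right shifted slices express. The helper name below is hypothetical, and real PHARE slicing also has to account for ghost cells.

```python
import numpy as np

def dual_to_primal_1d(field_dual):
    """Average adjacent dual-centered values onto primal nodes.
    Illustrative only; ghost-cell handling is omitted."""
    left = field_dual[:-1]   # analogous to (inner, inner_shift_left)
    right = field_dual[1:]   # analogous to (inner, inner_shift_right)
    return 0.5 * (left + right)

bx_dual = np.array([1.0, 3.0, 5.0, 7.0])
print(dual_to_primal_1d(bx_dual))  # [2. 4. 6.]
```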

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

Review Details

Configuration used: .coderabbit.yaml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 68e506a and d650785.
Files selected for processing (1)
  • pyphare/pyphare/pharesee/run.py (5 hunks)
Additional Context Used
Ruff (5)
pyphare/pyphare/pharesee/run.py (5)

4-4: pyphare.pharein imported but unused


638-638: f-string without any placeholders


640-640: f-string without any placeholders


767-767: Avoid equality comparisons to True; use if self.single_hier_for_all_quantities: for truth checks


772-772: Do not use bare except

def _try(fn, *args, **kwargs):
    try:
        fn(*args, **kwargs)
    except:

Replace the bare except with a more specific exception handler.

- except:
+ except Exception:
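Why the narrower handler matters: a bare `except:` also swallows SystemExit and KeyboardInterrupt, whereas `except Exception` lets those propagate. A minimal demonstration (the `try_call` helper is hypothetical, standing in for the PR's `_try`):

```python
def try_call(fn, *args, **kwargs):
    """Run fn, ignoring ordinary errors but not interpreter exits."""
    try:
        return fn(*args, **kwargs)
    except Exception:  # does NOT catch SystemExit / KeyboardInterrupt
        return None

def boom():
    raise ValueError("file not found")

def quit_now():
    raise SystemExit

print(try_call(boom))  # None: ValueError is swallowed
try:
    try_call(quit_now)
    propagated = False
except SystemExit:
    propagated = True
print(propagated)      # True: SystemExit escapes the handler
```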


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

Outside diff range and nitpick comments (4)
pyphare/pyphare/core/operators.py (1)

180-180: Clarify or resolve the TODO comment regarding gradient plotting.

The TODO comment at line 180 mentions an issue with gradient plotting. It would be beneficial to either resolve this issue or provide more details in the comment about what needs to be investigated or changed.

pyphare/pyphare/pharesee/run.py (2)

14-14: Consider adding a TODO for future improvement regarding the hard-coded Yee layout.


14-14: Consider adding a TODO for future improvement regarding the hard-coded Yee layout.

pyphare/pyphare/pharesee/hierarchy.py (1)

Line range hint 701-701: Comparison to None should be cond is None.

- if time == None:
+ if time is None:

Use is None for comparisons to None to follow Python best practices and ensure correct behavior.
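Beyond style, `is None` matters in numpy-heavy code like this: `==` on an ndarray broadcasts elementwise and does not yield a plain bool, so a `time == None` check can misbehave if `time` ever holds an array. A quick demonstration:

```python
import numpy as np

time = None
print(time is None)  # True: identity check is unambiguous

arr = np.array([1.0, 2.0])
cmp = arr == None    # elementwise comparison, not a single bool
print(type(cmp))     # <class 'numpy.ndarray'>
print(arr is None)   # False: 'is' always returns a plain bool
```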

Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between d650785 and 22a7b28.

Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (9 hunks)
  • pyphare/pyphare/pharesee/run.py (5 hunks)
Additional context used
Ruff
pyphare/pyphare/core/operators.py

91-91: Local variable names_left_kept is assigned to but never used


92-92: Local variable names_right_kept is assigned to but never used


123-123: Local variable names_left_kept is assigned to but never used


124-124: Local variable names_right_kept is assigned to but never used

pyphare/pyphare/pharesee/run.py

4-4: pyphare.pharein imported but unused


646-646: f-string without any placeholders


648-648: f-string without any placeholders


772-772: Avoid equality comparisons to True; use if self.single_hier_for_all_quantities: for truth checks


777-777: Do not use bare except

pyphare/pyphare/pharesee/hierarchy.py

701-701: Comparison to None should be cond is None

Additional comments not posted (9)
pyphare/pyphare/pharesee/run.py (5)

14-14: Proper handling of different dimensions and appropriate error management.


14-14: Proper handling of different dimensions and appropriate error management.


283-285: Straightforward implementation using list comprehension.


288-289: Straightforward implementation that delegates to another function based on a key.


349-349: Proper handling of MPI ranks based on patch data.

pyphare/pyphare/pharesee/hierarchy.py (4)

566-587: The implementation of the flat_name method in the PatchHierarchy class is concise and correctly uses a dictionary to map quantity names to their respective types. However, the commented-out code from lines 589 to 598 is redundant and should be removed to keep the codebase clean and maintainable.


589-598: Remove commented-out code to maintain code clarity and cleanliness.


800-803: In the global_min method, the use of np.nanmin to find the minimum value while ignoring NaNs is appropriate for numerical datasets that may contain missing values. This approach ensures robustness in calculations. However, the initialization of m and the conditional check for first could be simplified by initializing m to np.inf outside the loop, eliminating the need for the first flag.


814-817: Similar to the global_min method, the global_max method correctly uses np.nanmax for handling NaN values. The same refinement suggested for global_min applies here as well, where initializing m to -np.inf outside the loop can simplify the code by removing the need for the first flag.


def dot(hier_left, hier_right, **kwargs):
    names_left_kept = hier_left.get_names()
    names_right_kept = hier_right.get_names()

Unused local variables names_right_kept and names_left_kept.

- names_left_kept = hier_left.get_names()
- names_right_kept = hier_right.get_names()

Similar to the previous comment, these variables are assigned but not utilized. Removing these lines can help in reducing the clutter and improving the readability of the code.

Also applies to: 124-124



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Outside diff range and nitpick comments (1)
pyphare/pyphare/pharesee/hierarchy.py (1)

Line range hint 690-690: Comparison to None should be cond is None.

- if qty == None:
+ if qty is None:
Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 22a7b28 and 8232584.

Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (9 hunks)
  • pyphare/pyphare/pharesee/run.py (6 hunks)
Additional context used
Ruff
pyphare/pyphare/core/operators.py

91-91: Local variable names_left_kept is assigned to but never used


92-92: Local variable names_right_kept is assigned to but never used


123-123: Local variable names_left_kept is assigned to but never used


124-124: Local variable names_right_kept is assigned to but never used

pyphare/pyphare/pharesee/run.py

4-4: pyphare.pharein imported but unused


593-593: f-string without any placeholders


595-595: f-string without any placeholders


703-703: Avoid equality comparisons to True; use if self.single_hier_for_all_quantities: for truth checks


708-708: Do not use bare except

pyphare/pyphare/pharesee/hierarchy.py

690-690: Comparison to None should be cond is None

Additional comments not posted (10)
pyphare/pyphare/core/operators.py (2)

154-160: LGTM! The implementation of the sqrt function is straightforward and correct.


176-182: Consider adding support for 3D or improving error handling for unsupported dimensions.

+ # Add support for 3D or improve error handling for unsupported dimensions

Likely invalid or redundant comment.

pyphare/pyphare/pharesee/run.py (5)

544-550: Ensure the all_primal flag is handled correctly in the GetB method.

Verification successful

The all_primal flag is handled correctly in the GetB method and its various calls throughout the codebase.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the `all_primal` flag is used correctly in the `GetB` method.
grep "GetB" pyphare/pyphare/pharesee/run.py

Length of output: 291



Script:

#!/bin/bash
# Extract lines around each `GetB` call to understand the context and verify the correct handling of the `all_primal` flag.
grep -C 3 "GetB" pyphare/pyphare/pharesee/run.py

Length of output: 1524


610-616: Ensure the all_primal flag is handled correctly in the GetJ method.

Verification successful

The GetJ method is called in three places within the codebase, but none of these calls explicitly set the all_primal flag, meaning it defaults to True. This indicates that the all_primal flag is not being actively managed in these instances.

  • tests/functional/tdtagged/td1dtagged.py:

    • JwT = rwT.GetJ(plot_time, merged=True, interp="linear")
    • JNoRef = rNoRef.GetJ(plot_time, merged=True, interp="linear")
  • pyphare/pyphare/pharesee/plotting.py:

    • interpolator, finest_coords = r.GetJ(time, merged=True, interp=interp)[qty]

Since the all_primal flag defaults to True, the current implementation will always compute the hierarchical structure to primal grids unless explicitly set otherwise. This behavior seems consistent with the method's design, but it is important to ensure that this default behavior is intended and documented.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the `all_primal` flag is used correctly in the `GetJ` method.
grep "GetJ" pyphare/pyphare/pharesee/run.py

Length of output: 118



Script:

#!/bin/bash
# Find all occurrences of the `GetJ` method in the codebase to verify the handling of the `all_primal` flag.
rg '\.GetJ\('

Length of output: 342



Script:

#!/bin/bash
# Extract relevant lines from the identified files to review the usage of the `all_primal` flag in the `GetJ` method calls.
rg -A 5 'GetJ\(' tests/functional/tdtagged/td1dtagged.py pyphare/pyphare/pharesee/plotting.py

Length of output: 1175


701-715: Ensure the all_primal flag is handled correctly in the GetAllAvailableQties method.

Verification successful

The all_primal flag is handled correctly in the GetAllAvailableQties method and the methods it calls (GetB, GetE, GetNi). The flag is passed as an argument and used appropriately within these methods to determine the behavior of the data retrieval process.

  • In GetAllAvailableQties, the flag is passed to GetB, GetE, and GetNi.
  • In GetB and GetE, the flag is checked to decide whether to return the primal data or not.
  • In GetNi, the flag is not used, which is consistent with its implementation.

The implementation ensures that the all_primal flag is used correctly to control the retrieval of primal data.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the `all_primal` flag is used correctly in the `GetAllAvailableQties` method.
grep "GetAllAvailableQties" pyphare/pyphare/pharesee/run.py

Length of output: 124



Script:

#!/bin/bash
# Extract the full implementation of the GetAllAvailableQties method and the methods it calls.
ast-grep --lang python --pattern $'def GetAllAvailableQties($_, $_, $_):\n    $$$' pyphare/pyphare/pharesee/run.py
ast-grep --lang python --pattern $'def GetB($_, $_):\n    $$$' pyphare/pyphare/pharesee/run.py
ast-grep --lang python --pattern $'def GetE($_, $_):\n    $$$' pyphare/pyphare/pharesee/run.py
ast-grep --lang python --pattern $'def GetNi($_, $_):\n    $$$' pyphare/pyphare/pharesee/run.py

Length of output: 400



Script:

#!/bin/bash
# Extract the full implementation of the GetAllAvailableQties method and the methods it calls.
rg -A 20 "def GetAllAvailableQties" pyphare/pyphare/pharesee/run.py
rg -A 20 "def GetB" pyphare/pyphare/pharesee/run.py
rg -A 20 "def GetE" pyphare/pyphare/pharesee/run.py
rg -A 20 "def GetNi" pyphare/pyphare/pharesee/run.py

Length of output: 3879


599-608: Ensure the all_primal flag is handled correctly in the GetPe method.

Verification successful

The GetPe method and other methods like GetB, GetE, and GetJ use the all_primal flag to determine different computation paths. This suggests a consistent pattern in handling this flag across the codebase. The all_primal flag appears to be used correctly within the GetPe method, as it follows the same pattern observed in other methods.

  • GetPe method implementation:

    def GetPe(self, time, merged=False, interp="nearest", all_primal=True):
        hier = self._get_hierarchy(time, "ions_density.h5")
        Te = hier.sim.electrons.closure.Te
        if not all_primal:
            return Te * self._get(hier, time, merged, interp)
        h = compute_hier_from(_compute_to_primal, hier, scalar="rho")
        return ScalarField(h) * Te
  • Other references to all_primal:

    • GetB, GetE, GetJ, and GetAllAvailableQties methods also use the all_primal flag in a similar manner.

Based on the retrieved information, the all_primal flag is handled consistently and correctly within the GetPe method and other related methods.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the `all_primal` flag is used correctly in the `GetPe` method.
grep "GetPe" pyphare/pyphare/pharesee/run.py

Length of output: 120



Script:

#!/bin/bash
# Retrieve the full implementation of the `GetPe` method.
ast-grep --lang python --pattern $'def GetPe(self, time, merged=False, interp="nearest", all_primal=True):\n    $$$\n' pyphare/pyphare/pharesee/run.py

# Check for any other references to `all_primal` within the codebase.
grep -r "all_primal" .

Length of output: 2976


552-558: Ensure the all_primal flag is handled correctly in the GetE method.

Verification successful

The extracted definitions provide the necessary context to verify the handling of the all_primal flag in the GetE method. Here are the key points:

  • _try function attempts to call the provided function and catches any exceptions.
  • _get_hierarchy retrieves the hierarchy based on the provided time and filename.
  • _get retrieves data from the hierarchy, with an option to merge and interpolate.
  • compute_hier_from creates a new hierarchy using a callback function on all patch data.
  • VectorField is a class derived from PatchHierarchy.

The all_primal flag is used to determine whether to compute the hierarchy from primal fields (_compute_to_primal) and return a VectorField. If all_primal is False, it directly returns the result of _get.

The handling of the all_primal flag in the GetE method appears to be correct based on the provided context.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the `all_primal` flag is used correctly in the `GetE` method.
grep "GetE" pyphare/pyphare/pharesee/run.py

Length of output: 171



Script:

#!/bin/bash
# Verify the definition and usage of `_try`, `_get_hierarchy`, `_get`, `compute_hier_from`, and `VectorField`.

# Search for the definition of `_try`
ast-grep --lang python --pattern $'def _try($_, $_, $$$):' 

# Search for the definition of `_get_hierarchy`
ast-grep --lang python --pattern $'def _get_hierarchy($_, $_):' 

# Search for the definition of `_get`
ast-grep --lang python --pattern $'def _get($_, $_, $_, $_):' 

# Search for the definition of `compute_hier_from`
ast-grep --lang python --pattern $'def compute_hier_from($_, $_, $$$):' 

# Search for the definition of `VectorField`
ast-grep --lang python --pattern $'class VectorField($_):' 

Length of output: 320



Script:

#!/bin/bash
# Search the entire codebase for the definitions of `_try`, `_get_hierarchy`, `_get`, `compute_hier_from`, and `VectorField`.

# Search for the definition of `_try`
rg --pcre2 --glob '!*.{pyc,pyo,pyd}' 'def _try\('

# Search for the definition of `_get_hierarchy`
rg --pcre2 --glob '!*.{pyc,pyo,pyd}' 'def _get_hierarchy\('

# Search for the definition of `_get`
rg --pcre2 --glob '!*.{pyc,pyo,pyd}' 'def _get\('

# Search for the definition of `compute_hier_from`
rg --pcre2 --glob '!*.{pyc,pyo,pyd}' 'def compute_hier_from\('

# Search for the definition of `VectorField`
rg --pcre2 --glob '!*.{pyc,pyo,pyd}' 'class VectorField\('

Length of output: 757



Script:

#!/bin/bash
# Extract the definitions of `_try`, `_get_hierarchy`, `_get`, `compute_hier_from`, and `VectorField` from the respective files.

# Extract the definition of `_try`
sed -n '/def _try/,/^$/p' pyphare/pyphare/pharesee/run.py

# Extract the definition of `_get_hierarchy`
sed -n '/def _get_hierarchy/,/^$/p' pyphare/pyphare/pharesee/run.py

# Extract the definition of `_get`
sed -n '/def _get/,/^$/p' pyphare/pyphare/pharesee/run.py

# Extract the definition of `compute_hier_from`
sed -n '/def compute_hier_from/,/^$/p' pyphare/pyphare/pharesee/hierarchy.py

# Extract the definition of `VectorField`
sed -n '/class VectorField/,/^$/p' pyphare/pyphare/pharesee/hierarchy.py

Length of output: 1903

pyphare/pyphare/pharesee/hierarchy.py (3)

566-587: The implementation of the flat_name method in the PatchHierarchy class is concise and correctly uses a dictionary to map quantity names to their respective types. However, the commented-out code from lines 589 to 598 is redundant and should be removed to keep the codebase clean and maintainable.

-        # if qty in ['rho', 'tags']:
-        #     return 'scalar'
-        # elif qty in ['Bx', 'Ex', 'Fx', 'Vx']:
-        #     return 'x'
-        # elif qty in ['By', 'Ey', 'Fy', 'Vy']:
-        #     return 'y'
-        # elif qty in ['Bz', 'Ez', 'Fz', 'Vz']:
-        #     return 'z'
-        # else:
-        #     raise ValueError("{} is not a valid quantity".format(qty))

789-792: In the global_min method, the use of np.nanmin to find the minimum value while ignoring NaNs is appropriate for numerical datasets that may contain missing values. This approach ensures robustness in calculations. However, the initialization of m and the conditional check for first could be simplified by initializing m to np.inf outside the loop, eliminating the need for the first flag.

-        first = True
+        m = np.inf
         for ilvl, lvl in self.levels(time).items():
             for patch in lvl.patches:
                 pd = patch.patch_datas[qty]
-                if first:
-                    m = np.nanmin(pd.dataset[:])
-                    first = False
-                else:
-                    m = min(m, np.nanmin(pd.dataset[:]))
+                m = min(m, np.nanmin(pd.dataset[:]))

803-806: Similar to the global_min method, the global_max method correctly uses np.nanmax for handling NaN values. The same refinement suggested for global_min applies here as well, where initializing m to -np.inf outside the loop can simplify the code by removing the need for the first flag.

-        first = True
+        m = -np.inf
         for ilvl, lvl in self.levels(time).items():
             for patch in lvl.patches:
                 pd = patch.patch_datas[qty]
-                if first:
-                    m = np.nanmax(pd.dataset[:])
-                    first = False
-                else:
-                    m = max(m, np.nanmax(pd.dataset[:]))
+                m = max(m, np.nanmax(pd.dataset[:]))
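Both refactors above rely on the same identity-element trick: seeding the accumulator with `+inf` (or `-inf`) makes the first comparison a no-op, so no `first` flag is needed. A standalone sketch of the idea (function names and the list-of-arrays input are illustrative, not the `PatchHierarchy` API):

```python
import numpy as np

def global_min(datasets):
    # min(inf, x) == x for any finite x, so +inf is a safe seed
    m = np.inf
    for ds in datasets:
        m = min(m, np.nanmin(ds))
    return m

def global_max(datasets):
    m = -np.inf
    for ds in datasets:
        m = max(m, np.nanmax(ds))
    return m

data = [np.array([3.0, np.nan, 5.0]), np.array([1.0, 2.0])]
print(global_min(data), global_max(data))  # 1.0 5.0
```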

Comment on lines +56 to +87
def _compute_grad(patch_data, **kwargs):
    ndim = patch_data["value"].box.ndim
    nb_ghosts = kwargs["nb_ghosts"]
    ds = patch_data["value"].dataset

    ds_shape = list(ds.shape)

    ds_x = np.full(ds_shape, np.nan)
    ds_y = np.full(ds_shape, np.nan)
    ds_z = np.full(ds_shape, np.nan)

    grad_ds = np.gradient(ds)

    if ndim == 2:
        ds_x[nb_ghosts:-nb_ghosts, nb_ghosts:-nb_ghosts] = np.asarray(
            grad_ds[0][nb_ghosts:-nb_ghosts, nb_ghosts:-nb_ghosts]
        )
        ds_y[nb_ghosts:-nb_ghosts, nb_ghosts:-nb_ghosts] = np.asarray(
            grad_ds[1][nb_ghosts:-nb_ghosts, nb_ghosts:-nb_ghosts]
        )
        ds_z[nb_ghosts:-nb_ghosts, nb_ghosts:-nb_ghosts].fill(
            0.0
        )  # TODO at 2D, gradient is null in z dir

    else:
        raise RuntimeError("dimension not yet implemented")

    return (
        {"name": "x", "data": ds_x, "centering": patch_data["value"].centerings},
        {"name": "y", "data": ds_y, "centering": patch_data["value"].centerings},
        {"name": "z", "data": ds_z, "centering": patch_data["value"].centerings},
    )
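One point worth flagging alongside the 3D remark: `np.gradient` defaults to unit spacing, so the call above returns per-cell differences. If the gradient is meant per unit length, the mesh size has to be passed in. A standalone sketch (the grid, spacing, and `nb_ghosts` value here are invented for illustration):

```python
import numpy as np

nb_ghosts = 2
x = np.linspace(0.0, 1.0, 11)          # dx = 0.1
y = np.linspace(0.0, 1.0, 11)          # dy = 0.1
X, Y = np.meshgrid(x, y, indexing="ij")
f = X**2 + 3.0 * Y                     # df/dx = 2x, df/dy = 3

# passing the spacings makes the result a per-unit-length derivative
dfdx, dfdy = np.gradient(f, 0.1, 0.1)

# trim the ghost region, as _compute_grad does
inner = (slice(nb_ghosts, -nb_ghosts),) * 2
assert np.allclose(dfdx[inner], 2 * X[inner])
assert np.allclose(dfdy[inner], 3.0)
```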

Consider adding support for 3D or improving error handling for unsupported dimensions.

+ # Add support for 3D or improve error handling for unsupported dimensions

Committable suggestion was skipped due to low confidence.

Comment on lines +23 to +30
def _compute_sqrt(patch_datas, **kwargs):
    ref_name = next(iter(patch_datas.keys()))

    dset = np.sqrt(patch_datas["value"].dataset[:])

    return (
        {"name": "value", "data": dset, "centering": patch_datas[ref_name].centerings},
    )

Ensure key presence before accessing dictionary values to avoid potential KeyError.

+ if "value" not in patch_datas:
+     raise KeyError("'value' key is missing in patch_datas.")

Committable suggestion was skipped due to low confidence.

Comment on lines +9 to +20
def _compute_dot_product(patch_datas, **kwargs):
    ref_name = next(iter(patch_datas.keys()))

    dset = (
        patch_datas["left_x"].dataset[:] * patch_datas["right_x"].dataset[:]
        + patch_datas["left_y"].dataset[:] * patch_datas["right_y"].dataset[:]
        + patch_datas["left_z"].dataset[:] * patch_datas["right_z"].dataset[:]
    )

    return (
        {"name": "value", "data": dset, "centering": patch_datas[ref_name].centerings},
    )

Ensure key presence before accessing dictionary values to avoid potential KeyError.

+ if not all(key in patch_datas for key in ["left_x", "left_y", "left_z", "right_x", "right_y", "right_z"]):
+     raise KeyError("One or more required keys are missing in patch_datas.")
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change

def _compute_dot_product(patch_datas, **kwargs):
    if not all(key in patch_datas for key in ["left_x", "left_y", "left_z", "right_x", "right_y", "right_z"]):
        raise KeyError("One or more required keys are missing in patch_datas.")

    ref_name = next(iter(patch_datas.keys()))

    dset = (
        patch_datas["left_x"].dataset[:] * patch_datas["right_x"].dataset[:]
        + patch_datas["left_y"].dataset[:] * patch_datas["right_y"].dataset[:]
        + patch_datas["left_z"].dataset[:] * patch_datas["right_z"].dataset[:]
    )

    return (
        {"name": "value", "data": dset, "centering": patch_datas[ref_name].centerings},
    )
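The guarded pattern the suggestion proposes can be exercised in isolation. In this sketch the patch-data dict is mimicked with plain NumPy arrays, and collecting the missing keys gives a more informative error than a blanket message (all names here are illustrative):

```python
import numpy as np

def dot_product(patch_datas):
    required = ["left_x", "left_y", "left_z", "right_x", "right_y", "right_z"]
    missing = [k for k in required if k not in patch_datas]
    if missing:
        raise KeyError(f"missing keys in patch_datas: {missing}")
    return (
        patch_datas["left_x"] * patch_datas["right_x"]
        + patch_datas["left_y"] * patch_datas["right_y"]
        + patch_datas["left_z"] * patch_datas["right_z"]
    )

pd = {f"{side}_{c}": np.ones(4) for side in ("left", "right") for c in "xyz"}
print(dot_product(pd))  # [3. 3. 3. 3.]

try:
    dot_product({"left_x": np.ones(4)})
except KeyError as e:
    print("caught:", e)
```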

Comment on lines +33 to +53
def _compute_cross_product(patch_datas, **kwargs):
    ref_name = next(iter(patch_datas.keys()))

    dset_x = (
        patch_datas["left_y"].dataset[:] * patch_datas["right_z"].dataset[:]
        - patch_datas["left_z"].dataset[:] * patch_datas["right_y"].dataset[:]
    )
    dset_y = (
        patch_datas["left_z"].dataset[:] * patch_datas["right_x"].dataset[:]
        - patch_datas["left_x"].dataset[:] * patch_datas["right_z"].dataset[:]
    )
    dset_z = (
        patch_datas["left_x"].dataset[:] * patch_datas["right_y"].dataset[:]
        - patch_datas["left_y"].dataset[:] * patch_datas["right_x"].dataset[:]
    )

    return (
        {"name": "x", "data": dset_x, "centering": patch_datas[ref_name].centerings},
        {"name": "y", "data": dset_y, "centering": patch_datas[ref_name].centerings},
        {"name": "z", "data": dset_z, "centering": patch_datas[ref_name].centerings},
    )

Ensure key presence before accessing dictionary values to avoid potential KeyError.

+ if not all(key in patch_datas for key in ["left_x", "left_y", "left_z", "right_x", "right_y", "right_z"]):
+     raise KeyError("One or more required keys are missing in patch_datas.")

Suggested change

def _compute_cross_product(patch_datas, **kwargs):
    if not all(key in patch_datas for key in ["left_x", "left_y", "left_z", "right_x", "right_y", "right_z"]):
        raise KeyError("One or more required keys are missing in patch_datas.")

    ref_name = next(iter(patch_datas.keys()))

    dset_x = (
        patch_datas["left_y"].dataset[:] * patch_datas["right_z"].dataset[:]
        - patch_datas["left_z"].dataset[:] * patch_datas["right_y"].dataset[:]
    )
    dset_y = (
        patch_datas["left_z"].dataset[:] * patch_datas["right_x"].dataset[:]
        - patch_datas["left_x"].dataset[:] * patch_datas["right_z"].dataset[:]
    )
    dset_z = (
        patch_datas["left_x"].dataset[:] * patch_datas["right_y"].dataset[:]
        - patch_datas["left_y"].dataset[:] * patch_datas["right_x"].dataset[:]
    )

    return (
        {"name": "x", "data": dset_x, "centering": patch_datas[ref_name].centerings},
        {"name": "y", "data": dset_y, "centering": patch_datas[ref_name].centerings},
        {"name": "z", "data": dset_z, "centering": patch_datas[ref_name].centerings},
    )
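The component-wise formulas in `_compute_cross_product` are the standard cross product; they can be checked against `np.cross` in isolation (the helper and test data below are illustrative, not project code):

```python
import numpy as np

def cross_components(lx, ly, lz, rx, ry, rz):
    # same component-wise formulas as _compute_cross_product
    return (ly * rz - lz * ry, lz * rx - lx * rz, lx * ry - ly * rx)

rng = np.random.default_rng(0)
left = rng.standard_normal((3, 5))   # rows are x, y, z components
right = rng.standard_normal((3, 5))

cx, cy, cz = cross_components(*left, *right)

# np.cross operates on the trailing axis, hence the transposes
expected = np.cross(left.T, right.T).T
assert np.allclose(np.stack([cx, cy, cz]), expected)

# sanity check: the result is orthogonal to the left operand
assert np.allclose(cx * left[0] + cy * left[1] + cz * left[2], 0.0)
```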

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 8232584 and 6c9f1f2.

Files selected for processing (1)
  • pyphare/pyphare/pharesee/hierarchy.py (9 hunks)
Additional context used
Ruff
pyphare/pyphare/pharesee/hierarchy.py

690-690: Comparison to None should be cond is None

Additional comments not posted (3)
pyphare/pyphare/pharesee/hierarchy.py (3)

566-587: The implementation of the flat_name method in the PatchHierarchy class is concise and correctly uses a dictionary to map quantity names to their respective types. However, consider removing the commented-out code from lines 589 to 598 to keep the codebase clean and maintainable.


789-792: The use of np.nanmin and np.nanmax to find the minimum and maximum values while ignoring NaNs is appropriate for numerical datasets that may contain missing values. This approach ensures robustness in calculations. However, the initialization of m and the conditional check for first could be simplified by initializing m to np.inf or -np.inf outside the loop, eliminating the need for the first flag.

Also applies to: 803-806


Line range hint 690-690: Comparison to None should be cond is None.

Comment on lines +1127 to +1131
    def __neg__(self):
        names_self = self.get_names()
        h = compute_hier_from(_compute_neg, self, names=names_self)
        return VectorField(h)


The __neg__ method in the PatchHierarchy class seems to be incorrectly returning a VectorField instead of a ScalarField. This could potentially lead to type mismatches or unexpected behavior in the application.

- return VectorField(h)
+ return ScalarField(h)

Suggested change

    def __neg__(self):
        names_self = self.get_names()
        h = compute_hier_from(_compute_neg, self, names=names_self)
        return ScalarField(h)
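The type-consistency concern behind this comment can be shown with a toy wrapper: if `__neg__` returns the wrong class, anything dispatching on the result's type breaks. Returning `type(self)(...)` sidesteps the hard-coded class entirely (the classes below are invented for illustration, not PHARE's `ScalarField`):

```python
import numpy as np

class ScalarField:
    def __init__(self, data):
        self.data = np.asarray(data)

    def __neg__(self):
        # type(self) keeps the wrapper class correct even in subclasses
        return type(self)(-self.data)

s = ScalarField([1.0, -2.0])
n = -s
assert isinstance(n, ScalarField)
assert np.allclose(n.data, [-1.0, 2.0])
```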

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

Outside diff range and nitpick comments (4)
pyphare/pyphare/pharesee/run.py (3)

615-615: Consider the necessity of the all_primal argument for electron pressure.

The all_primal argument might not be necessary for the electron pressure as it is by construction all primal already. Reviewing the usage of this argument could simplify the method.


620-620: Clarify the condition that is always true.

The condition on line 620 is noted to always be true. If this is the case, consider removing the condition to simplify the code and improve readability.


635-635: Same comment as for J regarding the setting of all_primal.

The same reasoning for setting all_primal to False in the GetJ method applies here as well. Ensuring consistency in the comments will help maintain clarity.

pyphare/pyphare/pharesee/hierarchy.py (1)

Line range hint 690-690: Comparison to None should be cond is None.

- if cond == None
+ if cond is None
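The E711 fix is not purely stylistic in a NumPy codebase: `== None` broadcasts elementwise over an array, and taking the truth value of the resulting array raises, whereas `is None` always yields a plain bool. A minimal illustration (names invented):

```python
import numpy as np

def check(cond):
    if cond is None:  # identity test: always a plain bool
        return "no condition"
    return "have condition"

arr = np.arange(3)
print(check(arr))   # have condition
print(check(None))  # no condition

# the `== None` spelling breaks for array-valued cond
try:
    if arr == None:  # noqa: E711 - elementwise comparison, ambiguous truth value
        pass
except ValueError as e:
    print("caught:", type(e).__name__)
```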
Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 6c9f1f2 and fbe506b.

Files selected for processing (3)
  • pyphare/pyphare/core/operators.py (1 hunks)
  • pyphare/pyphare/pharesee/hierarchy.py (9 hunks)
  • pyphare/pyphare/pharesee/run.py (6 hunks)
Files skipped from review as they are similar to previous changes (1)
  • pyphare/pyphare/core/operators.py
Additional context used
Ruff
pyphare/pyphare/pharesee/run.py

609-609: f-string without any placeholders (F541)


611-611: f-string without any placeholders (F541)


719-719: Avoid equality comparisons to True; use if self.single_hier_for_all_quantities: for truth checks (E712)


724-724: Do not use bare except (E722)

pyphare/pyphare/pharesee/hierarchy.py

690-690: Comparison to None should be cond is None (E711)

Additional comments not posted (4)
pyphare/pyphare/pharesee/run.py (1)

616-616: Update the method for obtaining Te in GetPe to reflect the latest project developments.

-        Te = 1.0
+        hier = self._get_hierarchy(time, "ions_density.h5")
+        Te = hier.sim.electrons.closure.Te

This change aligns the code with the latest project updates, ensuring consistency and leveraging the new data structures.

pyphare/pyphare/pharesee/hierarchy.py (3)

566-587: The implementation of the flat_name method in the PatchHierarchy class is concise and correctly uses a dictionary to map quantity names to their respective types. However, the commented-out code from lines 589 to 598 is redundant and should be removed to keep the codebase clean and maintainable.


789-792: In the global_min method, the use of np.nanmin to find the minimum value while ignoring NaNs is appropriate for numerical datasets that may contain missing values. This approach ensures robustness in calculations. However, the initialization of m and the conditional check for first could be simplified by initializing m to np.inf outside the loop, eliminating the need for the first flag.


803-806: Similar to the global_min method, the global_max method correctly uses np.nanmax for handling NaN values. The same refinement suggested for global_min applies here as well, where initializing m to -np.inf outside the loop can simplify the code by removing the need for the first flag.

Comment on lines +4 to +10
from pyphare.pharesee.hierarchy import (
    compute_hier_from,
    flat_finest_field,
    hierarchy_from,
    ScalarField,
    VectorField,
)

Consider resolving the cyclic import issue to enhance maintainability.

The cyclic import involving pyphare.pharesee.hierarchy could lead to problems with initialization order and unexpected behaviors. Consider restructuring your imports or using local imports within functions to resolve this.

        return ScalarField(h)*Te

    def GetJ(self, time, merged=False, interp="nearest", all_primal=True):
        B = self.GetB(time, all_primal=False)

Add a comment explaining why all_primal is set to False here.

+ # compute_hier_from() for J assumes B is on its Yee locations, so we need `all_primal=False`

Adding a comment here will clarify the reasoning behind the decision to set all_primal to False, aiding future maintainability.


Suggested change

        # compute_hier_from() for J assumes B is on its Yee locations, so we need `all_primal=False`
        B = self.GetB(time, all_primal=False)


-    def GetAllAvailableQties(self, time, pops):
+    def GetAllAvailableQties(self, time, pops, all_primal=True):
+        assert not all_primal

The assertion assert not all_primal seems to contradict the later use of the all_primal flag in the same function.

- assert not all_primal

Consider removing or modifying this assertion to allow for the intended functionality of handling both primal and non-primal centering based on the all_primal flag.


Suggested change

    def GetAllAvailableQties(self, time, pops, all_primal=True):
assert self.single_hier_for_all_quantities == True # can't work otherwise

Replace equality comparison to True with a direct truth check.

- assert self.single_hier_for_all_quantities == True
+ assert self.single_hier_for_all_quantities

Direct truth checks are more Pythonic and improve the readability of the code.


Suggested change

        assert self.single_hier_for_all_quantities  # can't work otherwise
Tools
Ruff

719-719: Avoid equality comparisons to True; use if self.single_hier_for_all_quantities: for truth checks (E712)

        if not all_primal:
            return self._get(hier, time, merged, interp)

        h = compute_hier_from(_compute_to_primal, hier, x="Bx", y="By", z="Bz")

Ensure unreachable code is addressed to maintain code quality.

The statement on line 565 is flagged as unreachable. This needs to be investigated to ensure that the code flow is correct and that there are no logical errors leading to unreachable code.

@nicolasaunai
Copy link
Member

Closing because this is in #859


3 participants