
(fix) Make the weighted average fit for all kinds of systems #4593

Draft
Wants to merge 18 commits into base: devel

Conversation

@SumGuo-88 (Collaborator) commented Feb 10, 2025

Summary by CodeRabbit

  • Refactor

    • Enhanced error reporting and logging to selectively display key performance metrics based on model capabilities.
    • Streamlined output handling by removing obsolete metric processing, improving overall clarity.
  • Tests

    • Introduced comprehensive test cases for verifying weighted average error computations under varied conditions, ensuring better accuracy and reliability.
    • Added new test suite for evaluating the functionality of weighted average calculations in system error metrics.

@coderabbitai bot (Contributor) commented Feb 10, 2025

📝 Walkthrough

This pull request modifies the testing functionalities within the DeePMD framework. In deepmd/entrypoints/test.py, the error handling and reporting in the test functions are improved by adding new flags for energy, force, and virial calculations, implementing conditional logging, and restructuring the error dictionary aggregation. Function signatures are updated to return typed tuples, and references to Hessian calculations are removed. Additionally, a new test suite is introduced in source/tests/pt/test_weighted_avg.py to validate weighted average error calculations across various input scenarios.

Changes

File: deepmd/entrypoints/test.py
Change: Enhanced error handling in test functions by introducing find_energy, find_force, and find_virial flags; added conditional logging and modular error aggregation with err_part; updated function signatures; removed Hessian code.

File: source/tests/pt/test_weighted_avg.py
Change: Added a new test suite with functions fake_test and fake_test_ori and the TestWeightedAverage class to validate weighted average error calculations across scenarios (energy only, energy and force, and full metrics).
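The weighted average at the heart of this PR combines each system's error with its sample count, so large systems dominate the aggregate. A minimal standalone sketch of that computation (the function name and dict layout are illustrative, not the repository's actual API):

```python
def weighted_average(err_coll):
    """Combine per-system {metric: (error, sample_size)} dicts into weighted means."""
    sums, counts = {}, {}
    for err in err_coll:
        for key, (val, size) in err.items():
            sums[key] = sums.get(key, 0.0) + val * size
            counts[key] = counts.get(key, 0) + size
    # Weighted mean per metric; skip any metric with zero total samples
    return {k: sums[k] / counts[k] for k in sums if counts[k] > 0}


systems = [
    {"mae_f": (2.0, 3)},  # system 1: force MAE 2.0 over 3 samples
    {"mae_f": (1.0, 3)},  # system 2: force MAE 1.0 over 3 samples
]
print(weighted_average(systems))  # {'mae_f': 1.5}
```

The PR's fix is about which metrics enter `err_coll` in the first place: a system without virial data should not contribute a virial entry to the average.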

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant test_ener
    participant Logger
    participant DetailFile

    Caller->>test_ener: Invoke test_ener(test_data, flags, ...)
    test_ener->>test_ener: Initialize error dictionary (err_part)
    alt Energy flag set
        test_ener->>test_ener: Compute energy error metric
    end
    alt Force flag set
        test_ener->>Logger: Log force metric information
    end
    alt Virial flag set
        test_ener->>Logger: Log virial metric information
    end
    test_ener->>DetailFile: Write headers and detailed output if applicable
    test_ener->>Caller: Return (error arrays, error codes)
sequenceDiagram
    participant TestCase
    participant WeightedAvgFunc
    participant Assertion

    TestCase->>WeightedAvgFunc: Call fake_test() / fake_test_ori() with system data
    WeightedAvgFunc->>WeightedAvgFunc: Compute aggregated error metrics
    WeightedAvgFunc->>TestCase: Return computed results
    TestCase->>Assertion: Assert computed values match expected outcomes

Possibly related PRs

Suggested reviewers

  • njzjz

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 984a78e and 0f669b8.

📒 Files selected for processing (1)
  • deepmd/entrypoints/test.py (5 hunks)
🔇 Additional comments (5)
deepmd/entrypoints/test.py (5)

137-138: LGTM! Return value change aligns with function signature.

The updated return value unpacking matches the function signature change and provides necessary flags for error filtering.


146-167: LGTM! Error filtering logic is well-structured.

The new error filtering logic correctly handles the selective inclusion of metrics based on flags, addressing the inconsistency issue from past reviews.


333-335: LGTM! Unused flags issue resolved.

The flags are now properly utilized in error filtering and return values, resolving the "unused flags" issue from past reviews.


477-477: LGTM! Conditional logging aligns with error filtering.

The added find_virial check ensures consistency between error filtering and logging behavior.


588-624: LGTM! Return values are consistently structured.

The return values are well-organized and maintain consistency between spin and non-spin cases, with proper inclusion of flags.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
source/tests/pt/test_weighted_avg.py (1)

70-101: Well-structured test cases with good coverage!

The test cases effectively validate different component combinations. Consider making the variable names more descriptive for better readability.

Consider renaming variables to be more descriptive:

-    expected_mae_f = (2*3 +1*3 )/(3+3)
+    expected_force_mae = (2*3 + 1*3)/(3+3)
-    expected_mae_v = (3*5 +1*5 )/(5+5)
+    expected_virial_mae = (3*5 + 1*5)/(5+5)
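As a sanity check, the expected values in the diff above are plain weighted means of (error, sample count) pairs; the arithmetic, standalone and independent of the repository code:

```python
# Force MAE: one system with MAE 2 over 3 samples, another with MAE 1 over 3 samples
expected_force_mae = (2 * 3 + 1 * 3) / (3 + 3)
assert expected_force_mae == 1.5

# Virial MAE: MAE 3 over 5 samples and MAE 1 over 5 samples
expected_virial_mae = (3 * 5 + 1 * 5) / (5 + 5)
assert expected_virial_mae == 2.0
```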
deepmd/entrypoints/test.py (3)

331-333: Good addition of component flags!

Consider using more descriptive variable names for better clarity.

-    find_energy = test_data.get('find_energy')
-    find_force = test_data.get('find_force')
-    find_virial = test_data.get('find_virial')
+    has_energy_component = test_data.get('find_energy')
+    has_force_component = test_data.get('find_force')
+    has_virial_component = test_data.get('find_virial')

146-167: Good selective error collection logic!

Consider adding error handling for missing components.

Add error handling for missing components:

     err_part = {}
+    if test_data.get('find_energy') is None:
+        log.warning("Energy component flag not found in test data")
+    if test_data.get('find_force') is None:
+        log.warning("Force component flag not found in test data")
+    if test_data.get('find_virial') is None:
+        log.warning("Virial component flag not found in test data")

     if find_energy == 1:
         err_part['mae_e'] = err['mae_e']

459-470: Good conditional logging implementation!

Consider adding debug logging for better troubleshooting.

Add debug logging:

+    log.debug(f"Processing system with energy={find_energy}, force={find_force}, virial={find_virial}")
     if find_force == 1:
         if not out_put_spin:
             log.info(f"Force  MAE         : {mae_f:e} eV/A")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65facbb and b3a925d.

📒 Files selected for processing (2)
  • deepmd/entrypoints/test.py (12 hunks)
  • source/tests/pt/test_weighted_avg.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (19)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Analyze (python)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Build C++ (cuda, cuda)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Analyze (c-cpp)
🔇 Additional comments (3)
source/tests/pt/test_weighted_avg.py (3)

7-31: Well-structured implementation for handling different error metrics!

The function effectively handles different combinations of energy, force, and virial metrics with clean conditional logic and proper error collection.


33-39: Clean baseline implementation!

The function provides a good reference point for comparing weighted averages with and without filtering.


43-67: Comprehensive test coverage for energy-only metrics!

The test case effectively validates:

  • Correct weighted average calculations
  • Proper handling of energy-only systems
  • Edge cases with force and virial metrics

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
source/tests/pt/test_weighted_avg.py (2)

11-36: Consider adding type hints and docstring.

The function lacks type hints and documentation which would improve code maintainability and help users understand the expected input/output format.

Apply this diff to add type hints and docstring:

-def test(all_sys):
+def test(all_sys: list[tuple[dict, int, int, int]]) -> dict[str, tuple[float, int]]:
+    """Calculate weighted average of errors with selective inclusion of metrics.
+    
+    Args:
+        all_sys: List of tuples containing (error_dict, find_energy, find_force, find_virial)
+                where error_dict contains the error metrics
+    
+    Returns:
+        Dictionary mapping error names to tuples of (error_value, sample_size)
+    """
     err_coll = []

38-44: Add type hints and docstring to test_ori function.

Similar to the test function, this function would benefit from type hints and documentation.

Apply this diff:

-def test_ori(all_sys):
+def test_ori(all_sys: list[tuple[dict, int, int, int]]) -> dict[str, tuple[float, int]]:
+    """Calculate weighted average of all errors without selective inclusion.
+    
+    Args:
+        all_sys: List of tuples containing (error_dict, find_energy, find_force, find_virial)
+                where error_dict contains the error metrics
+    
+    Returns:
+        Dictionary mapping error names to tuples of (error_value, sample_size)
+    """
     err_coll = []
deepmd/entrypoints/test.py (2)

331-334: Consider using dictionary get() with default values.

The code uses get() without default values which could return None. Consider providing default values for safety.

Apply this diff:

-    find_energy = test_data.get("find_energy")
-    find_force = test_data.get("find_force")
-    find_virial = test_data.get("find_virial")
+    find_energy = test_data.get("find_energy", 0)
+    find_force = test_data.get("find_force", 0)
+    find_virial = test_data.get("find_virial", 0)
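The hazard this suggestion guards against: without a default, a missing key yields None, and a later `== 1` check silently evaluates to False instead of raising, so the corresponding metric is dropped without warning. A minimal illustration (the test_data contents here are hypothetical):

```python
test_data = {"find_energy": 1}  # "find_force" key absent

find_force = test_data.get("find_force")  # returns None, no error raised
assert find_force is None
assert (find_force == 1) is False  # silently skips the force branch

find_force = test_data.get("find_force", 0)  # explicit default makes intent clear
assert find_force == 0
```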

744-747: Use f-strings instead of % operator for string formatting.

The code uses the older % operator for string formatting. Consider using f-strings for better readability and maintainability.

Apply this diff:

-                detail_path.with_suffix(".dos.out.%.d" % ii),
+                detail_path.with_suffix(f".dos.out.{ii:d}"),
                 frame_output,
-                header="%s - %.d: data_dos pred_dos" % (system, ii),
+                header=f"{system} - {ii:d}: data_dos pred_dos",
-                    detail_path.with_suffix(".ados.out.%.d" % ii),
+                    detail_path.with_suffix(f".ados.out.{ii:d}"),
                     frame_output,
-                    header="%s - %.d: data_ados pred_ados" % (system, ii),
+                    header=f"{system} - {ii:d}: data_ados pred_ados",

Also applies to: 758-761

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b3a925d and 08e4a55.

📒 Files selected for processing (2)
  • deepmd/entrypoints/test.py (10 hunks)
  • source/tests/pt/test_weighted_avg.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (19)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test C++ (true)
🔇 Additional comments (6)
source/tests/pt/test_weighted_avg.py (3)

47-113: LGTM! Well-structured test case for energy-only scenario.

The test case thoroughly validates the weighted average calculation for energy metrics, including proper assertions for both the main function and original implementation.


114-171: LGTM! Comprehensive test case for energy and force metrics.

The test case effectively validates the combined energy and force calculations, with appropriate assertions to verify the differences between the two implementations.


172-229: LGTM! Complete test coverage for all components.

The test case provides thorough validation of all metrics (energy, force, virial) with appropriate assertions.

deepmd/entrypoints/test.py (3)

137-168: LGTM! Improved error handling with selective metric inclusion.

The changes enhance error handling by:

  1. Using flags to determine which metrics to include
  2. Organizing errors into a separate dictionary
  3. Handling different force metric cases (regular vs spin)

459-468: LGTM! Improved conditional logging for force metrics.

The changes enhance logging by:

  1. Only logging when force data is present
  2. Handling different force metric types (regular vs spin)

469-469: LGTM! Added condition for virial logging.

The change ensures virial metrics are only logged when appropriate (PBC enabled and virial data present).

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
source/tests/pt/test_weighted_avg.py (3)

11-36: Add type hints and docstring for better maintainability.

The function needs documentation and type hints to improve maintainability. Also, consider using an enum or constants for flag values.

-def fake_test(all_sys):
+from typing import List, Dict, Tuple, Union
+
+def fake_test(all_sys: List[Tuple[Dict[str, Tuple[float, float]], int, int, int]]) -> Dict[str, float]:
+    """Calculate weighted average of error metrics based on system components.
+    
+    Args:
+        all_sys: List of tuples containing (error_dict, has_energy, has_force, has_virial)
+               where error_dict contains metrics like mae_e, rmse_e etc.
+    
+    Returns:
+        Dictionary containing weighted averages of error metrics.
+    """

Also, add error handling for missing force metrics:

         if find_force == 1:
+            if not any(key in err for key in ['rmse_f', 'rmse_fr']):
+                raise KeyError("No force metrics found in error dictionary")
             if "rmse_f" in err:

38-44: Add type hints and docstring for consistency.

For consistency with fake_test, add type hints and documentation.

-def fake_test_ori(all_sys):
+def fake_test_ori(all_sys: List[Tuple[Dict[str, Tuple[float, float]], int, int, int]]) -> Dict[str, float]:
+    """Calculate weighted average of error metrics ignoring component flags.
+    
+    Used as a baseline for comparison with fake_test.
+    
+    Args:
+        all_sys: List of tuples containing (error_dict, has_energy, has_force, has_virial)
+    
+    Returns:
+        Dictionary containing weighted averages of all error metrics.
+    """

47-229: Enhance test maintainability and coverage.

While the test cases are comprehensive, consider these improvements:

  1. Move test data to class-level setup
  2. Add docstrings to test methods
  3. Add negative test cases

Example refactor:

 class TestWeightedAverage(unittest.TestCase):
+    def setUp(self):
+        """Set up test data."""
+        # Define common test data structure
+        self.base_system = {
+            "mae_e": (2, 2),
+            "mae_ea": (4, 2),
+            "rmse_e": (3, 2),
+            "rmse_ea": (5, 2),
+            "mae_f": (2, 3),
+            "rmse_f": (1, 3),
+            "mae_v": (3, 5),
+            "rmse_v": (3, 3),
+        }
+
     def test_case1_energy_only(self):
+        """Test weighted average calculation with energy metrics only."""

Add negative test cases:

def test_invalid_input(self):
    """Test handling of invalid input data."""
    # Test with empty system list
    with self.assertRaises(ValueError):
        fake_test([])
    
    # Test with invalid flag values
    invalid_sys = [(self.base_system, 2, 0, 0)]  # Invalid energy flag
    with self.assertRaises(ValueError):
        fake_test(invalid_sys)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 08e4a55 and db637f9.

📒 Files selected for processing (1)
  • source/tests/pt/test_weighted_avg.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (20)
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test C++ (true)
🔇 Additional comments (2)
source/tests/pt/test_weighted_avg.py (2)

1-8: LGTM! Imports are well-organized.

The license is properly specified and imports follow Python conventions.


231-232: LGTM! Standard unittest main block.

Follows Python conventions for unittest execution.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🔭 Outside diff range comments (2)
deepmd/entrypoints/test.py (2)

562-574: ⚠️ Potential issue

Fix incorrect type hints.

The function's return type hint tuple[list[np.ndarray], list[int]] doesn't match the actual return type, which is a dictionary of tuples containing error metrics and their sizes.

Update the type hint to match the actual return type:

-) -> tuple[list[np.ndarray], list[int]]:
+) -> dict[str, tuple[float, int]]:

Also, remove the commented-out flags from the return statements:

-        }  # find_energy,find_force,find_virial,
+        }

Also applies to: 575-588


137-169: 💡 Verification agent

🧩 Analysis chain

Verify the impact of removing error filtering.

The commented-out code previously filtered errors based on flags (find_energy, find_force, find_virial). Removing this filtering could lead to inconsistencies in error reporting, especially when certain metrics are not available or relevant.

Run the following script to verify error handling in test cases:


🏁 Script executed:

#!/bin/bash
# Description: Check test cases for error handling patterns.

# Search for test cases that rely on filtered errors
rg -A 5 'test_ener.*find_' 'source/tests'

# Search for error dictionary usage in test cases
rg -A 5 'err\[.*(mae|rmse)_(e|f|v)' 'source/tests'

Length of output: 4540


Inconsistent error reporting: Unfiltered error dictionary vs. filtered expectations in tests

The removal of the error filtering (i.e., the commented-out aggregation logic that created a filtered err_part) in the test_ener function means that the function now returns a complete error dictionary. However, test cases in source/tests/pt/test_weighted_avg.py still expect the filtered error metrics (e.g., keys such as "mae_e", "rmse_e", etc., populated conditionally via find_energy, find_force, and find_virial). This mismatch can lead to discrepancies in the averaged error computation or even test failures if additional keys or unexpected values are present.

  • Action Item: Either update the test case expectations to work with the full error dictionary or reintroduce/adjust the filtering logic to ensure that the returned error dictionary strictly contains the expected keys and values.
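A minimal sketch of the filtering the review asks to restore, using the flag and metric-key names that appear in the diffs elsewhere in this PR (illustrative, not the repository's exact code):

```python
def filter_errors(err, find_energy, find_force, find_virial):
    """Keep only the metrics whose component flag is set for this system."""
    err_part = {}
    if find_energy == 1:
        for key in ("mae_e", "mae_ea", "rmse_e", "rmse_ea"):
            if key in err:
                err_part[key] = err[key]
    if find_force == 1:
        # regular force metrics, or spin variants when present
        for key in ("mae_f", "rmse_f", "mae_fr", "rmse_fr"):
            if key in err:
                err_part[key] = err[key]
    if find_virial == 1:
        for key in ("mae_v", "rmse_v"):
            if key in err:
                err_part[key] = err[key]
    return err_part


err = {"mae_e": (2, 2), "mae_f": (2, 3), "mae_v": (3, 5)}
print(filter_errors(err, 1, 1, 0))  # {'mae_e': (2, 2), 'mae_f': (2, 3)}
```

With this filtering in place, a system that lacks virial labels contributes no virial entry, which is exactly the behavior the tests in source/tests/pt/test_weighted_avg.py expect.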
🧹 Nitpick comments (1)
deepmd/entrypoints/test.py (1)

735-738: Use f-strings for better readability.

The code uses the % operator for string formatting, which is outdated in Python. Using f-strings would improve readability and maintainability.

Apply this diff to update the string formatting:

-                detail_path.with_suffix(".dos.out.%.d" % ii),
-                header="%s - %.d: data_dos pred_dos" % (system, ii),
+                detail_path.with_suffix(f".dos.out.{ii}"),
+                header=f"{system} - {ii}: data_dos pred_dos",

-                    detail_path.with_suffix(".ados.out.%.d" % ii),
-                    header="%s - %.d: data_ados pred_ados" % (system, ii),
+                    detail_path.with_suffix(f".ados.out.{ii}"),
+                    header=f"{system} - {ii}: data_ados pred_ados",

-                detail_path.with_suffix(".property.out.%.d" % ii),
-                header="%s - %.d: data_property pred_property" % (system, ii),
+                detail_path.with_suffix(f".property.out.{ii}"),
+                header=f"{system} - {ii}: data_property pred_property",

-                    detail_path.with_suffix(".aproperty.out.%.d" % ii),
-                    header="%s - %.d: data_aproperty pred_aproperty" % (system, ii),
+                    detail_path.with_suffix(f".aproperty.out.{ii}"),
+                    header=f"{system} - {ii}: data_aproperty pred_aproperty",

Also applies to: 749-752, 891-894, 905-908

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between db637f9 and d2c9c4b.

📒 Files selected for processing (1)
  • deepmd/entrypoints/test.py (12 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
deepmd/entrypoints/test.py

332-332: Local variable find_energy is assigned to but never used

Remove assignment to unused variable find_energy

(F841)

⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test C++ (true)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (c-cpp)
🔇 Additional comments (1)
deepmd/entrypoints/test.py (1)

810-812: LGTM! Improved code consistency.

The changes to the property test function improve readability by:

  • Using consistent variable names (property instead of mixed usage)
  • Using task_dim consistently for dimensionality

Also applies to: 863-864, 868-869

Comment on lines 460 to 469
if find_force == 1:
    if not out_put_spin:
        log.info(f"Force  MAE         : {mae_f:e} eV/A")
        log.info(f"Force  RMSE        : {rmse_f:e} eV/A")
    else:
        log.info(f"Force atom MAE     : {mae_fr:e} eV/A")
        log.info(f"Force atom RMSE    : {rmse_fr:e} eV/A")
        log.info(f"Force spin MAE     : {mae_fm:e} eV/uB")
        log.info(f"Force spin RMSE    : {rmse_fm:e} eV/uB")

Contributor comment:

🛠️ Refactor suggestion

Resolve inconsistency in flag usage.

The flags find_force and find_virial are used in conditional logging but were marked for removal earlier. We need to maintain consistency in error handling and logging.

Consider one of these approaches:

  1. Keep the flags and restore the error filtering code
  2. Remove conditional logging and always log all available metrics

If keeping the flags, restore the error filtering code:

+    err_part = {}
+    if find_energy == 1:
+        err_part["mae_e"] = err["mae_e"]
+        err_part["mae_ea"] = err["mae_ea"]
+        err_part["rmse_e"] = err["rmse_e"]
+        err_part["rmse_ea"] = err["rmse_ea"]

Also applies to: 470-470

Comment on lines +332 to +334
    find_energy = test_data.get("find_energy")
    find_force = test_data.get("find_force")
    find_virial = test_data.get("find_virial")
Contributor comment:

⚠️ Potential issue

Remove unused flags.

The flags find_energy, find_force, and find_virial are obtained from test_data but are not used in error filtering since the filtering code is commented out. This creates dead code.

Apply this diff to remove the unused assignments:

-    find_energy = test_data.get("find_energy")
-    find_force = test_data.get("find_force")
-    find_virial = test_data.get("find_virial")
🧰 Tools
🪛 Ruff (0.8.2)

332-332: Local variable find_energy is assigned to but never used

Remove assignment to unused variable find_energy

(F841)

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
deepmd/entrypoints/test.py (2)

734-750: Use f-strings for string formatting.

For better readability and consistency with modern Python practices, consider using f-strings instead of the % formatting style.

Apply this diff to update the string formatting:

-                detail_path.with_suffix(".dos.out.%.d" % ii),
+                detail_path.with_suffix(f".dos.out.{ii:d}"),
                 frame_output,
-                header="%s - %.d: data_dos pred_dos" % (system, ii),
+                header=f"{system} - {ii:d}: data_dos pred_dos",
                 append=append_detail,
             )

             save_txt_file(
-                    detail_path.with_suffix(".ados.out.%.d" % ii),
+                    detail_path.with_suffix(f".ados.out.{ii:d}"),
                     frame_output,
-                    header="%s - %.d: data_ados pred_ados" % (system, ii),
+                    header=f"{system} - {ii:d}: data_ados pred_ados",
                     append=append_detail,
                 )

890-906: Use f-strings for string formatting.

For better readability and consistency with modern Python practices, consider using f-strings instead of the % formatting style.

Apply this diff to update the string formatting:

-                detail_path.with_suffix(".property.out.%.d" % ii),
+                detail_path.with_suffix(f".property.out.{ii:d}"),
                 frame_output,
-                header="%s - %.d: data_property pred_property" % (system, ii),
+                header=f"{system} - {ii:d}: data_property pred_property",
                 append=append_detail,
             )

             save_txt_file(
-                    detail_path.with_suffix(".aproperty.out.%.d" % ii),
+                    detail_path.with_suffix(f".aproperty.out.{ii:d}"),
                     frame_output,
-                    header="%s - %.d: data_aproperty pred_aproperty" % (system, ii),
+                    header=f"{system} - {ii:d}: data_aproperty pred_aproperty",
                     append=append_detail,
                 )
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2c9c4b and 99d6942.

📒 Files selected for processing (1)
  • deepmd/entrypoints/test.py (12 hunks)
🔇 Additional comments (4)
deepmd/entrypoints/test.py (4)

137-145: LGTM! Function signature update improves type safety.

The updated function signature now correctly returns the error dictionary along with the flags, making the return type explicit and improving type safety.
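The pattern the review describes can be sketched generically. The following is a minimal, hypothetical illustration (function and key names are assumed for the example, not the actual deepmd signature): compute an error only for labels present in the data, and return the flags alongside the error dictionary so callers can filter what they report.

```python
from typing import Dict, Tuple


def collect_errors(
    data: Dict[str, float], pred: Dict[str, float]
) -> Tuple[Dict[str, float], bool, bool, bool]:
    # Detect which labels the system actually provides.
    find_energy = "energy" in data
    find_force = "force" in data
    find_virial = "virial" in data
    err: Dict[str, float] = {}
    # Only compute metrics for labels that were found.
    if find_energy:
        err["mae_e"] = abs(data["energy"] - pred["energy"])
    if find_force:
        err["mae_f"] = abs(data["force"] - pred["force"])
    if find_virial:
        err["mae_v"] = abs(data["virial"] - pred["virial"])
    return err, find_energy, find_force, find_virial


err, fe, ff, fv = collect_errors(
    {"energy": 1.0, "force": 0.5}, {"energy": 0.9, "force": 0.7}
)
print(sorted(err))  # virial was absent, so no 'mae_v' entry
```

Returning the flags with the errors lets downstream reporting code skip metrics that were never computed instead of printing placeholder values.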


146-167: LGTM! Error filtering logic is now consistent.

The error filtering logic has been improved to selectively include metrics based on the flags, addressing the inconsistency mentioned in past review comments.


809-811: LGTM! Using task_dim improves flexibility.

Using dp.task_dim for property dimensions makes the code more flexible and maintainable by relying on the model's configuration.


331-333: Static analysis warning can be ignored.

The flags find_energy, find_force, and find_virial are now properly used in error filtering (lines 148-165) and returned from the function. The static analysis warning is outdated.

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Nitpick comments (1)
deepmd/entrypoints/test.py (1)

744-747: Modernize string formatting.

Replace old-style % string formatting with f-strings for better readability and maintainability.

-                detail_path.with_suffix(".dos.out.%.d" % ii),
+                detail_path.with_suffix(f".dos.out.{ii:d}"),
-                header="%s - %.d: data_dos pred_dos" % (system, ii),
+                header=f"{system} - {ii:d}: data_dos pred_dos",
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 99d6942 and 984a78e.

📒 Files selected for processing (1)
  • deepmd/entrypoints/test.py (10 hunks)
🔇 Additional comments (4)
deepmd/entrypoints/test.py (4)

137-167: LGTM! Improved error handling and type safety.

The changes improve error handling by making the function return type explicit and filtering error metrics based on what was actually calculated. The structured error dictionary now only includes relevant metrics.
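Since the PR aims to make the weighted average fit all kinds of systems, the core idea can be illustrated with a generic frame-count-weighted average that lets systems contributing no frames drop out cleanly (an assumed formula for illustration, not the actual deepmd implementation):

```python
from typing import List, Optional, Tuple


def weighted_avg(per_system: List[Tuple[float, int]]) -> Optional[float]:
    # per_system: (mean_error, n_frames) for each tested system.
    # Each system's mean error is weighted by its frame count, so a
    # system with zero frames contributes nothing to the average.
    total = sum(e * n for e, n in per_system)
    count = sum(n for _, n in per_system)
    return total / count if count else None


print(weighted_avg([(0.1, 10), (0.3, 30)]))  # 0.25
print(weighted_avg([]))  # None
```

Guarding on the total frame count avoids a division by zero when no system provides the label in question, which is what allows the same reporting path to serve systems with and without a given metric.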


331-333: LGTM! Resolved unused flag issues.

The previously unused flags are now properly utilized for error filtering and conditional logging, addressing past review comments and static analysis warnings.


459-470: LGTM! Consistent flag usage in logging.

The changes implement consistent conditional logging based on the presence of force and virial calculations, with proper handling of spin calculations.
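The conditional-logging pattern described here can be sketched as follows (function and key names are illustrative, not the actual deepmd API): the report is assembled only from metrics whose flags are set.

```python
from typing import Dict, List


def report_lines(
    err: Dict[str, float], find_force: bool, find_virial: bool
) -> List[str]:
    # Energy is always reported; force and virial lines appear only
    # when the corresponding labels were present in the data.
    lines = [f"Energy MAE: {err['mae_e']:.4f}"]
    if find_force:
        lines.append(f"Force MAE: {err['mae_f']:.4f}")
    if find_virial:
        lines.append(f"Virial MAE: {err['mae_v']:.4f}")
    return lines


for line in report_lines({"mae_e": 0.01, "mae_f": 0.05}, True, False):
    print(line)
```

Building the lines list before logging keeps the flag checks in one place, so a missing metric never raises a KeyError in the log formatting.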


819-821: LGTM! Consistent property testing implementation.

The property testing implementation follows the same patterns as other test functions, with consistent dimension handling and error calculation.

Also applies to: 872-878


codecov bot commented Feb 12, 2025

Codecov Report

Attention: Patch coverage is 92.00000% with 2 lines in your changes missing coverage. Please review.

Project coverage is 84.57%. Comparing base (176c746) to head (0f669b8).

Files with missing lines Patch % Lines
deepmd/entrypoints/test.py 92.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4593      +/-   ##
==========================================
- Coverage   84.58%   84.57%   -0.01%     
==========================================
  Files         680      680              
  Lines       64509    64527      +18     
  Branches     3540     3540              
==========================================
+ Hits        54562    54575      +13     
- Misses       8807     8812       +5     
  Partials     1140     1140              

☔ View full report in Codecov by Sentry.

@iProzd iProzd marked this pull request as draft February 13, 2025 01:49