New test #276
Conversation
Walkthrough

The modifications introduce a new Backup and Restore API for JeMPI. They encompass the addition of various configuration files, scripts, and Java classes. Key areas include Docker support, PostgreSQL integration, Akka actor systems, Kafka messaging, and Dgraph interactions. The changes aim to streamline backup and restore operations while enhancing system configurability and logging.
Sequence Diagram(s)

sequenceDiagram
participant User
participant HTTPServer
participant BackupRestoreAPI
participant BackEnd
participant PsqlClient
participant Dgraph
User->>HTTPServer: Sends backup request
HTTPServer->>BackupRestoreAPI: Forward request
BackupRestoreAPI->>BackEnd: Process backup request
BackEnd->>PsqlClient: Fetch data
PsqlClient-->>BackEnd: Return data
BackEnd->>Dgraph: Store data
Dgraph-->>BackEnd: Acknowledge storage
BackEnd-->>BackupRestoreAPI: Confirm backup
BackupRestoreAPI-->>HTTPServer: Respond to request
HTTPServer-->>User: Backup successful
Actionable comments posted: 30
Outside diff range, codebase verification and nitpick comments (12)
devops/linux/docker/conf/haproxy/haproxy.cfg (1)
Line range hint 43-72: Review of HAProxy configuration for backup-restore API

The new frontend and backend configurations for the backup-restore API are set up correctly with appropriate bindings and conditions. However, consider the following:
- Load Balancing: The roundrobin method is used (line 70), which is suitable for equal distribution, but consider using other methods if different loads are expected on different servers.
- Security: Ensure that all communications with the backup-restore API are secured, possibly by using SSL/TLS configurations.
- Monitoring: Adding more detailed monitoring and logging for the backup-restore API traffic could help in maintaining the service effectively.
The configuration is functional but enhancements in security, load balancing, and monitoring could improve service reliability and security.
devops/linux/docker/conf/images/conf-app-images.sh (1)
51-54: Inconsistent usage of BACKUP_RESTORE_API_HUB_IMAGE

The variable BACKUP_RESTORE_API_HUB_IMAGE is defined but not used in any other scripts or configuration files. This could indicate an inconsistency or an unused variable that should be addressed.
- File: devops/linux/docker/conf/images/conf-app-images.sh
- Line: 51
Ensure that BACKUP_RESTORE_API_HUB_IMAGE is either utilized in the necessary scripts or removed if it is not required.

Analysis chain
Review of Docker image configurations for backup-restore API
The new environment variables for the backup-restore API are set correctly. However, consider the following:
- Consistency: Ensure that these new variables are consistently used across all scripts and configuration files that deal with Docker deployments.
- Security: Verify that the Docker images and tags do not expose any sensitive information and are pulled from secure registries.
The configuration is adequate, but ensuring consistency and security across deployments is crucial.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of BACKUP_RESTORE_API_* variables across the deployment scripts.
# Test: Search for the variable usage. Expect: Consistent usage across all related scripts.
rg --type sh "BACKUP_RESTORE_API_"

Length of output: 1199
devops/linux/docker/backup_restore/postgres-backup.sh (1)
24-24: Logging improvements suggested.

The script logs the backup process to a file. Consider adding more detailed logging for each step of the process to aid in troubleshooting.
+ echo "$(date) - Checking for remote server details..." >> "${LOG_FILE}"
devops/linux/docker/conf/env/conf-env-high-1-pc.template (1)
16-18: Review and ensure proper usage of new environment variables

The BACKUP_RESTORE_API_HTTP_PORT variable is correctly integrated in various Docker stack YAML files and environment setup scripts. However, the BACKUP_RESTORE_API_IP variable is not found in any related configurations or scripts.
Properly used: BACKUP_RESTORE_API_HTTP_PORT in:
devops/linux/docker/conf/stack/docker-stack-low-0.yml
devops/linux/docker/conf/stack/docker-stack-low-1.yml
devops/linux/docker/conf/env/create-env-wsl-low-1.sh
devops/linux/docker/conf/env/create-env-linux-low-1.sh
devops/linux/docker/conf/stack/docker-stack-high-0.yml
devops/linux/docker/conf/env/create-env-linux-high-1.sh
devops/linux/docker/conf/stack/docker-stack-high-1.yml
Not found: BACKUP_RESTORE_API_IP
Please review and ensure that BACKUP_RESTORE_API_IP is correctly integrated where necessary.

Analysis chain

New environment variables for backup and restore API

The addition of BACKUP_RESTORE_API_IP and BACKUP_RESTORE_API_HTTP_PORT is consistent with the system's expansion to include backup and restore functionalities.

Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of new environment variables in Docker configurations and scripts.
# Test: Search for the new variables in Docker-related scripts and configurations. Expect: Proper usage in Docker compose files and scripts.
rg --type shell --type yaml "BACKUP_RESTORE_API_IP|BACKUP_RESTORE_API_HTTP_PORT"

Length of output: 112
Script:
#!/bin/bash
# Description: Verify the usage of new environment variables in Docker configurations and scripts.
# Test: Search for the new variables in Docker-related scripts and configurations. Expect: Proper usage in Docker compose files and scripts.
rg --type sh --type yaml "BACKUP_RESTORE_API_IP|BACKUP_RESTORE_API_HTTP_PORT"

Length of output: 859
JeMPI_Apps/JeMPI_BackupRestoreAPI/pom.xml (1)
1-244: Missing Property Definition

The property version.org.springframework.boot.spring-boot-maven-plugin is not defined in the POM files. This could lead to build issues.
- Ensure that the property version.org.springframework.boot.spring-boot-maven-plugin is defined in the properties section of the relevant POM file or inherited correctly from a parent POM.

Analysis chain

Comprehensive Maven Configuration

The POM file for JeMPI_BackupRestoreAPI is well-configured with appropriate dependencies and plugins. Dependencies are clearly defined with scopes where necessary, and plugins are set up for build management and style checking.

Verify Dependency Versions

Ensure that the versions of dependencies, especially those with ${scala.tools.version}, are correctly defined in the properties section to avoid build issues.

Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the consistency of dependency versions across the project.
# Test: Search for version definitions in Maven property files. Expect: Consistent version numbers across different POM files.
rg --type xml "<version>"

Length of output: 9250
Script:
#!/bin/bash
# Description: Verify the definitions of version properties across the project.
# Test: Search for property definitions in Maven POM files. Expect: Properties like akka.version, jackson.version, log4j.version, and version.org.springframework.boot.spring-boot-maven-plugin to be defined.
rg --type xml "<akka.version>|<jackson.version>|<log4j.version>|<version.org.springframework.boot.spring-boot-maven-plugin>"

Length of output: 410
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/LibDgraph.java (1)
82-85: Lack of Error Handling in restoreGoldenRecord Method

The restoreGoldenRecord method in the DgraphMutations class and its related methods (restoreSourceIds, restoreInteraction, createGoldenRecord, updateGoldenRecord) do not include explicit error handling mechanisms such as try-catch blocks. This could result in unhandled exceptions and make debugging difficult.
- File:
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/DgraphMutations.java
- Methods:
restoreGoldenRecord
restoreSourceIds
restoreInteraction
createGoldenRecord
updateGoldenRecord
Please ensure that these methods handle potential errors appropriately and log them for easier debugging and maintenance.
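As a rough illustration of the kind of error handling this comment asks for, the sketch below wraps a restore step in a try-catch and logs failures with Log4j2. The class and method are hypothetical stand-ins, not the actual JeMPI implementations.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

final class RestoreErrorHandlingSketch {
    private static final Logger LOGGER = LogManager.getLogger(RestoreErrorHandlingSketch.class);

    // Hypothetical wrapper: run one restore step and report failure instead of
    // letting a runtime exception escape unhandled.
    boolean runRestoreStep(final String stepName, final Runnable step) {
        try {
            step.run();
            return true;
        } catch (RuntimeException e) {
            // Log with enough context to identify which restore step failed.
            LOGGER.error("Restore step '{}' failed: {}", stepName, e.getMessage(), e);
            return false;
        }
    }
}
```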
Analysis chain
Review of the new restoreGoldenRecord method

The method correctly delegates the restoration of a golden record to the DgraphMutations class, which is a good use of encapsulation. However, it's important to ensure that the DgraphMutations.restoreGoldenRecord method handles all possible errors and logs them appropriately.

Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify error handling in the `restoreGoldenRecord` method in `DgraphMutations`. # Test: Search for error handling in the method. Expect: Catch blocks or equivalent error handling mechanisms. ast-grep --lang java --pattern $'class DgraphMutations { $$$ restoreGoldenRecord($_) { $$$ } $$$ }'Length of output: 114
Script:
#!/bin/bash # Locate the implementation of the `restoreGoldenRecord` method within the `DgraphMutations` class and display its content. rg -A 20 'class DgraphMutations' --files-with-matches | xargs rg -A 20 'restoreGoldenRecord'Length of output: 1191
Script:
#!/bin/bash # Search for error handling mechanisms within the `restoreGoldenRecord` method and the methods it calls in the `DgraphMutations` class. # Locate the implementation of the `restoreGoldenRecord` method within the `DgraphMutations` class and display its content. rg -A 20 'class DgraphMutations' --files-with-matches | xargs rg -A 50 'restoreGoldenRecord' # Locate and display the content of the methods called within `restoreGoldenRecord`: restoreSourceIds, restoreInteraction, createGoldenRecord, updateGoldenRecord. rg -A 20 'restoreSourceIds' --files-with-matches | xargs rg -A 20 'restoreSourceIds' rg -A 20 'restoreInteraction' --files-with-matches | xargs rg -A 20 'restoreInteraction' rg -A 20 'createGoldenRecord' --files-with-matches | xargs rg -A 20 'createGoldenRecord' rg -A 20 'updateGoldenRecord' --files-with-matches | xargs rg -A 20 'updateGoldenRecord'Length of output: 96728
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/LibMPI.java (1)
166-169: Lack of Error Handling in postGoldenRecord Method Call Chain

The restoreGoldenRecord method in DgraphMutations, as well as the methods it calls (restoreSourceIds and restoreInteraction), do not have explicit error handling or logging mechanisms. This confirms the concern about the absence of error handling in the postGoldenRecord method's call chain. It is recommended to add appropriate error handling and logging to ensure robustness and maintainability.
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/DgraphMutations.java
Analysis chain
Review of the postGoldenRecord method

The method postGoldenRecord correctly delegates the task to the LibDgraph client. It's important to verify that the LibDgraph.restoreGoldenRecord method, which is being called here, handles errors appropriately and logs them. This is crucial for maintaining data integrity and providing clear error messages to the users.

Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify error handling in the `restoreGoldenRecord` method in `LibDgraph`. # Test: Search for error handling in the method. Expect: Catch blocks or equivalent error handling mechanisms. ast-grep --lang java --pattern $'class LibDgraph { $$$ restoreGoldenRecord($_) { $$$ } $$$ }'Length of output: 108
Script:
#!/bin/bash # Description: Verify error handling in the `restoreGoldenRecord` method in `LibDgraph`. # Find the `LibDgraph` class definition file libdgraph_file=$(fd LibDgraph.java) # Search for the `restoreGoldenRecord` method within the `LibDgraph` class and include context lines to check for error handling rg -A 10 'restoreGoldenRecord' $libdgraph_fileLength of output: 642
Script:
#!/bin/bash # Description: Verify error handling in the `restoreGoldenRecord` method in `dgraphMutations`. # Find the `dgraphMutations` class definition file dgraph_mutations_file=$(fd dgraphMutations.java) # Search for the `restoreGoldenRecord` method within the `dgraphMutations` class and include context lines to check for error handling rg -A 10 'restoreGoldenRecord' $dgraph_mutations_fileLength of output: 8104
Script:
#!/bin/bash # Description: Verify error handling in the `restoreSourceIds` and `restoreInteraction` methods in `DgraphMutations`. # Find the `DgraphMutations` class definition file dgraph_mutations_file=$(fd DgraphMutations.java) # Search for the `restoreSourceIds` and `restoreInteraction` methods within the `DgraphMutations` class and include context lines to check for error handling rg -A 10 'restoreSourceIds' $dgraph_mutations_file rg -A 10 'restoreInteraction' $dgraph_mutations_fileLength of output: 3952
devops/linux/docker/conf/stack/docker-stack-high-0.yml (1)
560-563: Missing data-config directory

The data-config directory, which is crucial for the operation of multiple components (em-scala, linker, api-kc, and bootstrapper), is missing. This directory needs to be created and populated with the necessary configuration files.
- Ensure the data-config directory exists at the expected location.
- Verify that it contains up-to-date and secure configurations.
- Check for consistency in how this directory is used across different services to prevent configuration drift.
Analysis chain
Binding system configuration directory across services
Multiple services (em-scala, linker, api-kc, and bootstrapper) bind the $DATA_SYSTEM_CONFIG_DIR for configuration. Ensure this directory contains all necessary configuration files and is secured, as it is crucial for the operation of multiple components.
- Verify the contents and permissions of $DATA_SYSTEM_CONFIG_DIR to ensure it contains up-to-date and secure configurations.
- Check for consistency in how this directory is used across different services to prevent configuration drift.
Also applies to: 610-612, 722-725, 787-789
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the contents and permissions of the system configuration directory. # Test: List files and check permissions in the system configuration directory. ls -l $DATA_SYSTEM_CONFIG_DIRLength of output: 604
Script:
#!/bin/bash # Description: Find the definition or usage of $DATA_SYSTEM_CONFIG_DIR in the docker-stack-high-0.yml file. # Search for the definition or usage of $DATA_SYSTEM_CONFIG_DIR grep -A 5 -B 5 'DATA_SYSTEM_CONFIG_DIR' devops/linux/docker/conf/stack/docker-stack-high-0.ymlLength of output: 1703
Script:
#!/bin/bash # Description: Search for the definition of DATA_SYSTEM_CONFIG_DIR in the repository. # Search for the definition of DATA_SYSTEM_CONFIG_DIR grep -r 'DATA_SYSTEM_CONFIG_DIR' .Length of output: 3195
Script:
#!/bin/bash # Description: Search for the definition of PROJECT_DIR in the repository. # Search for the definition of PROJECT_DIR grep -r 'PROJECT_DIR' .Length of output: 7617
Script:
#!/bin/bash # Description: Verify the contents and permissions of the data-config directory. # List files and check permissions in the data-config directory. ls -l ../../data-configLength of output: 90
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/DgraphMutations.java (3)
152-162: Review of new method restoreSourceIdQuery

The use of Java's text blocks in this method enhances readability and maintainability. The method constructs a Dgraph query for restoring source IDs based on provided SourceId objects.
Consider adding more detailed comments to explain the purpose and usage of each part of the query to aid future maintainability.
+ // This query formats a Dgraph mutation to restore source IDs with facility and patient details.
102-124: Potential Issue: Lack of Input Validation for createLinkedGoldenRecordTripleWithGoldenUID

The method createLinkedGoldenRecordTripleWithGoldenUID does not appear to have validation logic for its inputs (goldenUID, auxGoldenRecordData, demographicData). This could lead to potential security risks.
Please review and ensure that appropriate validation or sanitization is applied to these inputs to prevent security vulnerabilities.
- File:
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/DgraphMutations.java
- Lines: 102-124
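For illustration only, a minimal validation helper of the sort this comment suggests is sketched below. The uid format check is an assumption (standard Dgraph uid or blank-node syntax), and the helper is not part of the JeMPI codebase.

```java
import java.util.Objects;

final class MutationInputValidation {
    private MutationInputValidation() {
    }

    // Reject null, blank, or obviously malformed uid values before they are
    // interpolated into a Dgraph mutation string.
    static String requireValidGoldenUid(final String goldenUID) {
        Objects.requireNonNull(goldenUID, "goldenUID must not be null");
        if (goldenUID.isBlank() || !goldenUID.matches("0x[0-9a-fA-F]+|_:[A-Za-z0-9_.-]+")) {
            throw new IllegalArgumentException("goldenUID is not a valid uid reference: " + goldenUID);
        }
        return goldenUID;
    }
}
```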
Analysis chain
Review of new method createLinkedGoldenRecordTripleWithGoldenUID

This method is similar to createInteractionTripleWithUid, focusing on creating linked golden record triples. It uses a consistent pattern for constructing the mutation string, ensuring maintainability.
As with other dynamic query constructions, ensure that the inputs, especially goldenUID, are validated or sanitized to prevent potential security issues.

Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify data validation for createLinkedGoldenRecordTripleWithGoldenUID method inputs. # Test: Search for validation logic related to `goldenUID`, `auxGoldenRecordData`, and `demographicData`. rg --type java -A 5 $'createLinkedGoldenRecordTripleWithGoldenUID'Length of output: 1939
Script:
#!/bin/bash # Description: Verify data validation for createLinkedGoldenRecordTripleWithGoldenUID method inputs by searching for validation in related classes and methods. # Search for validation logic for `goldenUID` rg --type java 'validate.*goldenUID' -A 5 # Search for validation logic for `auxGoldenRecordData` rg --type java 'validate.*auxGoldenRecordData' -A 5 # Search for validation logic for `demographicData` rg --type java 'validate.*demographicData' -A 5Length of output: 5802
421-438: Add validation and error handling logic

The cloneGoldenRecordFromInteractionWithGoldenUid method and its dependencies (createLinkedGoldenRecordTripleWithGoldenUID and doMutateTransaction) lack explicit validation and error handling for input parameters. This could lead to potential issues if the input data is incomplete or invalid.
- Ensure that all input parameters are validated for completeness and correctness before proceeding with the mutation.
- Add appropriate error handling to manage cases where the input data does not meet the required criteria.
Analysis chain
Review of new method cloneGoldenRecordFromInteractionWithGoldenUid

This method extends the functionality of golden record cloning by allowing a specific golden UID to be used. It follows a similar pattern to other mutation construction methods in this class, ensuring consistency.
Verify that the cloning logic correctly handles all edge cases, particularly in scenarios where the source data might be incomplete or invalid.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify cloning logic in cloneGoldenRecordFromInteractionWithGoldenUid method. # Test: Search for edge case handling in the cloning logic. rg --type java -A 5 $'cloneGoldenRecordFromInteractionWithGoldenUid'Length of output: 1608
Script:
#!/bin/bash # Description: Verify edge case handling in the cloning logic of cloneGoldenRecordFromInteractionWithGoldenUid method. # Search for the implementation details of createLinkedGoldenRecordTripleWithGoldenUID method. rg --type java -A 10 $'createLinkedGoldenRecordTripleWithGoldenUID' # Search for error handling or validation logic within cloneGoldenRecordFromInteractionWithGoldenUid method. rg --type java -A 10 $'cloneGoldenRecordFromInteractionWithGoldenUid'Length of output: 6691
Script:
#!/bin/bash # Description: Verify validation and error handling logic for the cloneGoldenRecordFromInteractionWithGoldenUid method and its dependencies. # Search for validation or error handling logic in the cloneGoldenRecordFromInteractionWithGoldenUid method. rg --type java -A 10 'cloneGoldenRecordFromInteractionWithGoldenUid' # Search for validation or error handling logic in the createLinkedGoldenRecordTripleWithGoldenUID method. rg --type java -A 10 'createLinkedGoldenRecordTripleWithGoldenUID' # Search for validation or error handling logic in the DgraphClient class, particularly in the doMutateTransaction method. rg --type java -A 10 'doMutateTransaction'Length of output: 29548
devops/linux/docker/backup_restore/dgraph-backup.py (1)
55-61: Ensure proper file handling in create_backup_json.

The function correctly handles file creation and JSON dumping. However, it's always good practice to explicitly handle potential I/O errors when working with file operations.
Consider wrapping the file operations in a try-except block to catch and log potential I/O errors.
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (54)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/.gitignore (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/build.sh (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/checkstyle/suppression.xml (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/docker/.gitignore (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/docker/Dockerfile (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/pom.xml (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/push.sh (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/AppConfig.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/Ask.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/BackEnd.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/JsonFieldsConfig.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/MapError.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/PsqlClient.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/PsqlNotifications.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/Routes.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/api/BackupRestoreAPI.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/api/HttpServer.java (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/resources/application.conf (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/resources/log4j.properties (1 hunks)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/resources/log4j2.properties (1 hunks)
- JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/LibMPI.java (1 hunks)
- JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/LibMPIClientInterface.java (1 hunks)
- JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/DgraphMutations.java (7 hunks)
- JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/LibDgraph.java (1 hunks)
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/config/DGraphConfig.java (2 hunks)
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/config/dgraph/MutationCreateInteractionTripleWithUID.java (1 hunks)
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/config/dgraph/MutationCreateLinkedGoldenRecordTripleWithUID.java (1 hunks)
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/models/ApiModels.java (1 hunks)
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/models/GlobalConstants.java (1 hunks)
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/models/RestoreGoldenRecords.java (1 hunks)
- JeMPI_Apps/build-all-java.sh (3 hunks)
- JeMPI_Apps/pom.xml (1 hunks)
- devops/linux/docker/backup_restore/dgraph-backup-api.sh (1 hunks)
- devops/linux/docker/backup_restore/dgraph-backup.py (1 hunks)
- devops/linux/docker/backup_restore/dgraph-restore-api.py (1 hunks)
- devops/linux/docker/backup_restore/dgraph-restore-api.sh (1 hunks)
- devops/linux/docker/backup_restore/postgres-backup.sh (4 hunks)
- devops/linux/docker/conf/env/conf-env-high-1-pc.template (3 hunks)
- devops/linux/docker/conf/env/conf-env-low-1-pc.template (1 hunks)
- devops/linux/docker/conf/env/create-env-linux-high-1.sh (3 hunks)
- devops/linux/docker/conf/env/create-env-linux-low-1.sh (1 hunks)
- devops/linux/docker/conf/env/create-env-wsl-low-1.sh (1 hunks)
- devops/linux/docker/conf/haproxy/haproxy.cfg (3 hunks)
- devops/linux/docker/conf/images/conf-app-images.sh (1 hunks)
- devops/linux/docker/conf/stack/docker-stack-high-0.yml (9 hunks)
- devops/linux/docker/conf/stack/docker-stack-high-1.yml (15 hunks)
- devops/linux/docker/conf/stack/docker-stack-low-0.yml (2 hunks)
- devops/linux/docker/conf/stack/docker-stack-low-1.yml (2 hunks)
- devops/linux/docker/deployment/build_and_reboot/d-stack-2-build-java.sh (1 hunks)
- devops/linux/docker/deployment/deploy-local.sh (7 hunks)
- devops/linux/docker/deployment/down/d-stack-stop-backup-restore-api-services.sh (1 hunks)
- devops/linux/docker/deployment/down/d-stack-stop-services.sh (1 hunks)
- devops/linux/docker/deployment/reboot/d-stack-start-backup-restore-api-services.sh (1 hunks)
- devops/linux/docker/deployment/reboot/d-stack-start-services.sh (1 hunks)
Files skipped from review due to trivial changes (13)
- JeMPI_Apps/JeMPI_BackupRestoreAPI/.gitignore
- JeMPI_Apps/JeMPI_BackupRestoreAPI/checkstyle/suppression.xml
- JeMPI_Apps/JeMPI_BackupRestoreAPI/docker/.gitignore
- JeMPI_Apps/JeMPI_BackupRestoreAPI/push.sh
- JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/resources/log4j.properties
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/config/DGraphConfig.java
- JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/models/RestoreGoldenRecords.java
- JeMPI_Apps/build-all-java.sh
- JeMPI_Apps/pom.xml
- devops/linux/docker/conf/env/conf-env-low-1-pc.template
- devops/linux/docker/conf/env/create-env-linux-low-1.sh
- devops/linux/docker/conf/env/create-env-wsl-low-1.sh
- devops/linux/docker/deployment/build_and_reboot/d-stack-2-build-java.sh
Additional context used
Shellcheck
JeMPI_Apps/JeMPI_BackupRestoreAPI/build.sh
[warning] 9-9: JAR_FILE appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 10-10: APP_IMAGE appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 11-11: APP appears unused. Verify use (or export if used externally).
(SC2034)
devops/linux/docker/backup_restore/postgres-backup.sh
[warning] 11-11: DB_NAME appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 37-37: backup_file appears unused. Verify use (or export if used externally).
(SC2034)
devops/linux/docker/backup_restore/dgraph-restore-api.sh
[warning] 5-5: BACKUP_DATE_TIME appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 7-7: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
[warning] 18-18: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 21-21: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
[warning] 25-25: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 28-28: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
[warning] 32-32: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 35-35: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
[warning] 40-40: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 43-43: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
[warning] 50-50: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 53-53: Don't use ls | grep. Use a glob or a for loop with a condition to allow non-alphanumeric filenames.
(SC2010)
[warning] 55-55: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
[warning] 70-70: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 80-80: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
devops/linux/docker/deployment/deploy-local.sh
[warning] 87-87: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[warning] 142-142: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 220-220: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 223-223: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
Hadolint
JeMPI_Apps/JeMPI_BackupRestoreAPI/docker/Dockerfile
[error] 5-5: Use COPY instead of ADD for files and folders
(DL3020)
Ruff
devops/linux/docker/backup_restore/dgraph-restore-api.py
7-7: uuid imported but unused
Remove unused import: uuid
(F401)
yamllint
devops/linux/docker/conf/stack/docker-stack-low-0.yml
[error] 614-614: trailing spaces
(trailing-spaces)
devops/linux/docker/conf/stack/docker-stack-low-1.yml
[error] 614-614: trailing spaces
(trailing-spaces)
devops/linux/docker/conf/stack/docker-stack-high-0.yml
[warning] 786-786: wrong indentation: expected 4 but found 6
(indentation)
[error] 806-806: trailing spaces
(trailing-spaces)
[error] 810-810: trailing spaces
(trailing-spaces)
devops/linux/docker/conf/stack/docker-stack-high-1.yml
[warning] 132-132: wrong indentation: expected 8 but found 10
(indentation)
[warning] 172-172: wrong indentation: expected 8 but found 10
(indentation)
[warning] 212-212: wrong indentation: expected 8 but found 10
(indentation)
[warning] 828-828: wrong indentation: expected 4 but found 6
(indentation)
[warning] 830-830: wrong indentation: expected 4 but found 6
(indentation)
[warning] 849-849: wrong indentation: expected 8 but found 10
(indentation)
Additional comments not posted (38)
JeMPI_Apps/JeMPI_BackupRestoreAPI/build.sh (1)
9-11: Verify the usage of declared variables.

Variables JAR_FILE, APP_IMAGE, and APP are declared but flagged as potentially unused. If these variables are used in the sourced scripts or externally, consider explicitly exporting them to avoid confusion and ensure their availability in other scripts.

Tools
Shellcheck
[warning] 9-9: JAR_FILE appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 10-10: APP_IMAGE appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 11-11: APP appears unused. Verify use (or export if used externally).
(SC2034)
JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/resources/log4j2.properties (1)
1-14: Review logging configuration for appropriateness.

The logging configuration sets a debug level for the org.jembi.jempi logger and an info level for the root logger. Ensure these levels are appropriate for your production environment as debug logs can be verbose and potentially expose sensitive information.

JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/config/dgraph/MutationCreateInteractionTripleWithUID.java (1)

13-32: Ensure the correctness and efficiency of Dgraph mutation string construction.

The method constructs Dgraph mutation strings using complex string operations and stream processing. Verify the correctness of the string formatting, especially the use of the formatted method and the dynamic generation of mutation strings based on configuration fields.

devops/linux/docker/backup_restore/postgres-backup.sh (1)
4-10: Ensure robust error handling for date-time input.

The script uses the first command line argument to set the backup date and time. Consider adding error handling to ensure the provided argument is in the correct format.
Verification successful
Ensure robust error handling for date-time input.
The verification confirms that the script successfully distinguishes between valid and invalid date-time formats. The provided date-time format handling is robust and correctly identifies invalid inputs.
- The valid date-time format 2023-07-16 12:34:56 was recognized correctly.
- The invalid date-time format invalid_datetime was flagged as invalid.
This indicates that the script can handle date-time arguments appropriately.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the date-time format of the first argument. # Test: Check if the date-time format is valid. Expect: Valid date-time format. if ! date -d "$1" > /dev/null 2>&1; then echo "Invalid date-time format." fiLength of output: 12
Script:
#!/bin/bash # Description: Verify the date-time format of the first argument. # Test: Check if the date-time format is valid when an argument is provided. if [ -n "$1" ]; then if ! date -d "$1" > /dev/null 2>&1; then echo "Invalid date-time format." else echo "Valid date-time format." fi else echo "No date-time argument provided." fiLength of output: 88
Script:
#!/bin/bash # Description: Verify the date-time format of the first argument with both valid and invalid inputs. # Test with a valid date-time format valid_datetime="20230716_123456" if [ -n "$valid_datetime" ]; then if ! date -d "$valid_datetime" > /dev/null 2>&1; then echo "Invalid date-time format for valid input." else echo "Valid date-time format for valid input." fi else echo "No date-time argument provided for valid input." fi # Test with an invalid date-time format invalid_datetime="invalid_datetime" if [ -n "$invalid_datetime" ]; then if ! date -d "$invalid_datetime" > /dev/null 2>&1; then echo "Invalid date-time format for invalid input." else echo "Valid date-time format for invalid input." fi else echo "No date-time argument provided for invalid input." fiLength of output: 370
Script:
#!/bin/bash # Description: Verify the date-time format of the first argument with both valid and invalid inputs. # Test with a valid date-time format valid_datetime="2023-07-16 12:34:56" if [ -n "$valid_datetime" ]; then if ! date -d "$valid_datetime" > /dev/null 2>&1; then echo "Invalid date-time format for valid input." else echo "Valid date-time format for valid input." fi else echo "No date-time argument provided for valid input." fi # Test with an invalid date-time format invalid_datetime="invalid_datetime" if [ -n "$invalid_datetime" ]; then if ! date -d "$invalid_datetime" > /dev/null 2>&1; then echo "Invalid date-time format for invalid input." else echo "Valid date-time format for invalid input." fi else echo "No date-time argument provided for invalid input." fiLength of output: 384
JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/resources/application.conf (3)
3-7: Review session and server timeout configurations.

The session is enabled, and the server timeouts are set to 10 seconds for idle and 5 seconds for request and linger. Ensure these values align with the expected load and performance requirements.

17-23: Review CORS credentials settings.

The allow-credentials setting is enabled, which allows user credentials in CORS requests. This can have security implications, so ensure it is necessary for your application's functionality.

66-73: Review thread pool configuration.

The my-blocking-dispatcher is configured with a fixed pool size of 512 and a throughput of 1. Verify that these settings are appropriate for the expected concurrency and load.

devops/linux/docker/conf/env/create-env-linux-high-1.sh (2)
14-16: Review new placement node environment variables.

New environment variables for node placements (PLACEMENT_NODE1, PLACEMENT_NODE2, PLACEMENT_NODE3) are added. Ensure these are correctly configured in your deployment topology.

46-46: Review new API port configuration.

The BACKUP_RESTORE_API_HTTP_PORT is set to 50000. Ensure this port is open and does not conflict with other services.

JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/api/BackupRestoreAPI.java (1)

36-56: Review actor system and HTTP server setup.

The method create sets up the actor system and HTTP server. Ensure that all configurations and parameters (e.g., ports, database credentials) are securely managed and validated.

JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/PsqlClient.java (2)
88-96: Review: Disconnect operation in disconnect method.

The method ensures the connection is closed and handles potential SQL exceptions well. The null check before attempting to close the connection is correctly implemented.

99-105: Review: Statement preparation methods.

The methods createStatement and prepareStatement are straightforward and correctly throw SQLException for the caller to handle. This is a good practice as it allows the caller to decide on the error handling strategy.

JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/api/HttpServer.java (2)

42-56: Review: Server setup and closure in open and close methods.

The open method correctly initializes the server and logs its status. The close method properly handles the unbinding and termination of the actor system. Both methods are well-implemented with appropriate logging.

58-85: Review: CORS and route handling in createCorsRoutes.

The method sets up CORS settings and routes for the HTTP server. It includes comprehensive error and rejection handling. However, consider adding more specific logging for different types of exceptions and rejections to improve maintainability and debugging.

+ LOGGER.warn("Rejection occurred with message: {}", message);
+ LOGGER.error("Exception in route: {}", x.getMessage());

Likely invalid or redundant comment.
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/LibMPIClientInterface.java (1)
66-67: Review the new method restoreGoldenRecord.

The addition of restoreGoldenRecord in the interface is a significant enhancement. It's essential to ensure that this method is implemented consistently across all classes that implement this interface.

JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/models/GlobalConstants.java (1)

66-66: New constant for REST endpoint.

The addition of SEGMENT_POST_GOLDEN_RECORD_RESTORE is appropriate for supporting new functionalities related to golden record restoration. Ensure that this constant is used in the corresponding HTTP route configurations.

JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/Routes.java (2)

71-92: Review the route getExpandedGoldenRecord.

This route handles fetching expanded golden records. Ensure that error handling is robust and that the route correctly marshals the response. The use of mapError for error handling seems appropriate.

48-69: Review the route postGoldenRecord.

This route is responsible for posting golden records. It is crucial to ensure that the payload is properly validated before processing and that errors are handled gracefully. The conditional check for payload != null is a good practice.

JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/AppConfig.java (1)

55-61: Review the methods for DGraph configuration.

The methods getDGraphHosts and getDGraphPorts are crucial for connecting to the DGraph database. Ensure that these methods are used consistently throughout the application and that the configuration is secure and efficient.

devops/linux/docker/conf/env/conf-env-high-1-pc.template (2)
57-57: Addition of DATA_SYSTEM_CONFIG_DIR and other directories

The new variable DATA_SYSTEM_CONFIG_DIR and the expansion of directories for various services are likely to support the enhanced data management needs of the backup and restore functionalities.

105-107: Introduction of placement variables for additional nodes

The addition of PLACEMENT_NODE1, PLACEMENT_NODE2, and PLACEMENT_NODE3 suggests a scaling or distribution strategy that might be related to the deployment or management of services across multiple nodes.

Verification successful

Introduction of placement variables for additional nodes verified

The variables PLACEMENT_NODE1, PLACEMENT_NODE2, and PLACEMENT_NODE3 are correctly used and defined in the deployment scripts and configurations, confirming their role in node placement for scaling or distribution strategies.
- docker-stack-high-1.yml: Used for node placement constraints.
- create-env-linux-high-1.sh: Defined with specific node values.

Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the usage of new node placement variables in deployment scripts or configurations. # Test: Search for the new placement variables in deployment scripts. Expect: Proper usage and definition in relevant scripts. rg --type shell "PLACEMENT_NODE1|PLACEMENT_NODE2|PLACEMENT_NODE3"Length of output: 97
Script:
#!/bin/bash # Description: Verify the usage of new node placement variables in deployment scripts or configurations. # Test: Search for the new placement variables in all files. Expect: Proper usage and definition in relevant scripts. rg "PLACEMENT_NODE1|PLACEMENT_NODE2|PLACEMENT_NODE3" -A 3 -B 3Length of output: 9542
devops/linux/docker/deployment/deploy-local.sh (1)
218-224: Enhanced Backup Procedure with Timestamps

The modification to include a timestamp in the backup process (BACKUP_DATE_TIME) is a good practice for maintaining clear records of backup operations.
Shellcheck
[warning] 220-220: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.
(SC2164)
[warning] 223-223: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.
(SC2164)
JeMPI_Apps/JeMPI_LibShared/src/main/java/org/jembi/jempi/shared/models/ApiModels.java (1)
107-125: Review of New Model Records for Restoring Data

The new records RestoreGoldenRecord, RestoreInteraction, and RestoreInteractionRecord are well-defined with appropriate JSON property annotations. Ensure that these models are used consistently across the application and that they are properly documented.

devops/linux/docker/conf/stack/docker-stack-low-0.yml (2)
611-654: New service configuration for backup-restore-api

The backup-restore-api service configuration has been added. This includes environment variables for database and Kafka connections, as well as deployment settings. Ensure all environment variables are correctly set and that the service is correctly placed according to its node label constraints.
- Check if PLACEMENT_API is properly defined in the node labels.
- Validate memory limits and other resource allocations to ensure they are appropriate for the service's expected load.

Verification successful

Node label PLACEMENT_API is properly defined

The PLACEMENT_API variable is correctly defined in the environment setup scripts, ensuring that the node label constraints will be properly applied.
- NODE1 is defined using $(hostname) or a hardcoded value in various environment setup scripts.
- PLACEMENT_API is set to the value of NODE1.

Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the node label `PLACEMENT_API` is defined. # Test: Search for the node label definition in node configuration files. rg --type yaml "PLACEMENT_API" devops/linux/docker/conf/nodes/Length of output: 134
Script:
#!/bin/bash # Description: Verify the node label `PLACEMENT_API` is defined. # Test: Search for the node label definition across the entire repository. rg --type yaml "PLACEMENT_API"Length of output: 1262
Script:
#!/bin/bash # Description: Verify if `PLACEMENT_API` is defined as an environment variable or in a configuration file. # Test: Search for the definition of `PLACEMENT_API` across the entire repository. rg "PLACEMENT_API"Length of output: 2006
Script:
#!/bin/bash # Description: Verify if `NODE1` is defined as an environment variable or in a configuration file. # Test: Search for the definition of `NODE1` across the entire repository. rg "NODE1"Length of output: 18565
Tools
yamllint
[error] 614-614: trailing spaces
(trailing-spaces)
74-77: Addition of new port configuration for HAProxy

The configuration for port 50010 is added to the HAProxy service. Ensure that this port is intended for use and is properly secured if exposed publicly.

devops/linux/docker/conf/stack/docker-stack-low-1.yml (2)

74-77: Consistent port addition across deployment files

The addition of port 50010 for HAProxy is consistent with the previous file. Ensure this consistency is intentional and that the port configuration aligns with network and security policies.

Verification successful

Consistent port addition across deployment files

The port 50010 configuration is consistent across docker-stack-low-0.yml and docker-stack-low-1.yml. This alignment ensures that the deployment configurations are uniform.
devops/linux/docker/conf/stack/docker-stack-low-0.yml
devops/linux/docker/conf/stack/docker-stack-low-1.yml
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check consistency of port configuration across deployment files. # Test: Compare port configurations across deployment files. diff <(rg --type yaml "50010" devops/linux/docker/conf/stack/docker-stack-low-0.yml) <(rg --type yaml "50010" devops/linux/docker/conf/stack/docker-stack-low-1.yml)Length of output: 272
Script:
#!/bin/bash # Description: Check consistency of port configuration across deployment files. # Capture the output of rg into temporary files rg --type yaml "50010" devops/linux/docker/conf/stack/docker-stack-low-0.yml > /tmp/stack_low_0_ports.txt rg --type yaml "50010" devops/linux/docker/conf/stack/docker-stack-low-1.yml > /tmp/stack_low_1_ports.txt # Compare the port configurations across deployment files diff /tmp/stack_low_0_ports.txt /tmp/stack_low_1_ports.txtLength of output: 214
611-654: Review of backup-restore-api service in a different environment

This configuration mirrors the setup seen in docker-stack-low-0.yml. Confirm that all environment-specific variables (like PLACEMENT_API, memory limits) are correctly adjusted for this deployment scenario.
- Verify that the service's resource allocations are suitable for this environment's expected traffic and load.
- Check for any discrepancies in environment variable settings between different deployment files.
Tools
yamllint
[error] 614-614: trailing spaces
(trailing-spaces)
devops/linux/docker/conf/stack/docker-stack-high-0.yml (1)
343-343: Node placement constraints for DGraph Alphas

The placement constraints for alpha-01, alpha-02, and alpha-03 are specified to ensure they are distributed across different nodes (PLACEMENT_NODE1, PLACEMENT_NODE2, PLACEMENT_NODE3). Confirm that these nodes are configured to handle the load and that the placement aligns with the overall high-availability strategy.
- Check if these nodes have the necessary resources and are configured for high availability.
- Ensure that the node labels are correctly defined and match the intended physical or virtual machines.
Also applies to: 376-376, 409-409
devops/linux/docker/conf/stack/docker-stack-high-1.yml (2)
807-850: Review the configuration of the backup-restore-api service.

The backup-restore-api service configuration includes several environment variables and deployment settings that need to be carefully managed to ensure proper functionality and security.
Ensure that the environment variables such as POSTGRESQL_IP, DGRAPH_HOSTS, KAFKA_BOOTSTRAP_SERVERS, etc., are correctly set and that the network and volume configurations do not expose sensitive data or create bottlenecks.

Tools
yamllint
[warning] 828-828: wrong indentation: expected 4 but found 6
(indentation)
[warning] 830-830: wrong indentation: expected 4 but found 6
(indentation)
[warning] 849-849: wrong indentation: expected 8 but found 10
(indentation)
560-563: Verify the use of $DATA_SYSTEM_CONFIG_DIR across various services.

The binding of $DATA_SYSTEM_CONFIG_DIR to /app/conf_system in multiple services suggests centralized configuration management. Ensure that this directory's permissions and contents are correctly managed to prevent configuration conflicts or security issues.

Also applies to: 609-612, 660-663, 722-725, 787-789
JeMPI_Apps/JeMPI_LibMPI/src/main/java/org/jembi/jempi/libmpi/dgraph/DgraphMutations.java (3)
31-32: Approved ObjectMapper configuration

The configuration of the ObjectMapper instance is appropriate for handling date-time values correctly in JSON. This is crucial for ensuring that date-time fields are serialized in a human-readable and universally accepted ISO8601 format.
164-190: Review of new method restoreGoldenRecord

This method is crucial for the restoration of golden records. It processes multiple interactions and manages updates to the Dgraph database efficiently. The method uses loops and conditional checks to handle various scenarios, which is appropriate for its complexity.
Ensure that error handling is robust, especially in scenarios where Dgraph transactions might fail, to prevent data inconsistency.
59-76: Review of new method createInteractionTripleWithUid

This method efficiently constructs a mutation string for Dgraph interactions using a dynamic list of parameters. It is well-structured and leverages Java's List and IntStream for building the query parameters dynamically.
Ensure that the input data to this method, particularly sourceUID and values from auxInteractionData and demographicData, are validated or sanitized elsewhere in the application to prevent injection vulnerabilities.

devops/linux/docker/backup_restore/dgraph-backup.py (1)
18-21: Good implementation of folder creation utility.

The create_folder_if_not_exists function is well-implemented with proper checks and uses os.makedirs appropriately.

JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/Ask.java (3)
21-30: Review of getGidsAll method in Ask.java

This method uses the Akka AskPattern to interact with actors asynchronously. The method signature and implementation are correct, and it properly handles timeouts using GlobalConstants.TIMEOUT_DGRAPH_QUERY_SECS. The method returns a CompletionStage, which is a good practice for asynchronous handling in Java.
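The shape of that ask-based call can be sketched as below; the request and response records are invented placeholders for the actual JeMPI messages, and the 30-second timeout merely stands in for GlobalConstants.TIMEOUT_DGRAPH_QUERY_SECS.

```java
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletionStage;

import akka.actor.typed.ActorRef;
import akka.actor.typed.ActorSystem;
import akka.actor.typed.javadsl.AskPattern;

final class AskSketch {
    // Hypothetical protocol: the backend actor receives a request carrying a replyTo ref.
    record GidsAllResponse(List<String> gids) {
    }

    record GetGidsAllRequest(ActorRef<GidsAllResponse> replyTo) {
    }

    static CompletionStage<GidsAllResponse> getGidsAll(
            final ActorSystem<Void> actorSystem,
            final ActorRef<GetGidsAllRequest> backEnd) {
        return AskPattern.ask(
                backEnd,
                GetGidsAllRequest::new,      // wraps the temporary replyTo actor into the request
                Duration.ofSeconds(30),      // placeholder for TIMEOUT_DGRAPH_QUERY_SECS
                actorSystem.scheduler());
    }
}
```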
33-43: Review of getExpandedGoldenRecord method in Ask.java

This method also uses the Akka AskPattern. It properly constructs a GetExpandedGoldenRecordRequest with a replyTo actor and a payload. The timeout and scheduler are correctly used. The method implementation is clean and adheres to best practices.

45-55: Review of postGoldenRecord method in Ask.java

Similar to the other methods, this one correctly uses the AskPattern for asynchronous actor interaction. The method takes a RestoreGoldenRecords payload and constructs a PostGoldenRecordRequest. It handles timeouts and scheduling appropriately.

JeMPI_Apps/JeMPI_BackupRestoreAPI/src/main/java/org/jembi/jempi/backuprestoreapi/MapError.java (1)
21-40: Review of mapError method in MapError.java

The method uses a switch statement to map various types of MpiServiceError to appropriate HTTP status codes. It correctly uses the Jackson.marshaller for JSON serialization of error messages. Each case in the switch statement is handled appropriately, and the use of specific HTTP status codes for different error types is a good practice. This method enhances the API's error handling capabilities by providing clear, actionable HTTP responses based on the type of error encountered.
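A hypothetical reduction of that pattern is shown below: a switch over an invented error category returning an Akka HTTP route with the matching status code and a Jackson-marshalled body. It is a sketch of the approach, not the actual MapError code, and assumes Akka HTTP's javadsl and its Jackson marshaller are on the classpath.

```java
import static akka.http.javadsl.server.Directives.complete;

import akka.http.javadsl.marshallers.jackson.Jackson;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;

final class MapErrorSketch {
    // Invented error categories standing in for the MpiServiceError variants.
    enum ErrorKind { NOT_FOUND, INVALID_INPUT, INTERNAL }

    record ErrorBody(String message) {
    }

    static Route mapError(final ErrorKind kind, final String message) {
        final ErrorBody body = new ErrorBody(message);
        return switch (kind) {
            case NOT_FOUND -> complete(StatusCodes.NOT_FOUND, body, Jackson.marshaller());
            case INVALID_INPUT -> complete(StatusCodes.BAD_REQUEST, body, Jackson.marshaller());
            case INTERNAL -> complete(StatusCodes.INTERNAL_SERVER_ERROR, body, Jackson.marshaller());
        };
    }
}
```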
DB_NAME="${POSTGRESQL_DATABASE}"
DB_USER="${POSTGRESQL_USERNAME}"
DB_PASSWORD="${POSTGRESQL_PASSWORD}"
DB_HOST="${POSTGRES_HOST:-localhost}"
BACKUP_PATH="${POSTGRES_BACKUP_DIRECTORY}/$(date +%Y%m%d_%H%M%S)"
BACKUP_PATH="${POSTGRES_BACKUP_DIRECTORY}/$BACKUP_DATE_TIME"
Consider using a more secure method for handling passwords.
The script uses environment variables for database credentials, which can be insecure if not handled properly.
Consider using a more secure storage method for sensitive information, such as encrypted secrets management solutions.
@@ -42,7 +48,7 @@ copy_to_remote() {
if [ -n "${REMOTE_SERVER}" ] && [ -n "${REMOTE_PATH}" ]; then
for db in "${databases[@]}"; do
echo "$(date) - Starting remote transfer" >> "${LOG_FILE}"
scp "${BACKUP_PATH}/${db}_$(date +%Y%m%d_%H%M%S).sql" ${REMOTE_SERVER}:${REMOTE_PATH}
scp "${BACKUP_PATH}/${db}_$BACKUP_DATE_TIME.sql" ${REMOTE_SERVER}:${REMOTE_PATH}
Enhance security for remote file transfers.
The script uses scp
for transferring backup files. Consider using more secure methods or ensuring that SSH keys are properly managed.
Consider implementing SSH key rotation and strict host key checking.
@@ -129,7 +129,7 @@ services:
condition: on-failure
placement:
constraints:
- node.labels.name == $PLACEMENT_KAFKA_01
- node.labels.name == $PLACEMENT_NODE1
Correct indentation issues.
Several lines have incorrect indentation which could cause issues with YAML parsing and service deployment.
- - node.labels.name == $PLACEMENT_NODE1
+ - node.labels.name == $PLACEMENT_NODE1
- - node.labels.name == $PLACEMENT_NODE2
+ - node.labels.name == $PLACEMENT_NODE2
- - node.labels.name == $PLACEMENT_NODE3
+ - node.labels.name == $PLACEMENT_NODE3
- - backend
+ - backend
- - type: bind
+ - type: bind
- - node.labels.name == ${PLACEMENT_API}
+ - node.labels.name == ${PLACEMENT_API}
Also applies to: 172-172, 212-212, 828-828, 830-830, 849-849
Tools
yamllint
[warning] 132-132: wrong indentation: expected 8 but found 10
(indentation)
backup_file="${BACKUP_PATH}/${db}--$BACKUP_DATE_TIME.sql"
echo "$(date) - Starting backup for database: ${db}" >> "${LOG_FILE}"
PGPASSWORD=$DB_PASSWORD pg_dump -h $DB_HOST -U $DB_USER -d $db -F c -f "${BACKUP_PATH}/${db}--$(date +%Y%m%d_%H%M%S).sql"
PGPASSWORD=$DB_PASSWORD pg_dump -h $DB_HOST -U $DB_USER -d $db -F c -f "${BACKUP_PATH}/${db}--$BACKUP_DATE_TIME.sql"
Unused variable backup_file.

The variable backup_file is declared but not used in any meaningful way. Consider removing it or using it appropriately.
- backup_file="${BACKUP_PATH}/${db}--$BACKUP_DATE_TIME.sql"
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
backup_file="${BACKUP_PATH}/${db}--$BACKUP_DATE_TIME.sql"
echo "$(date) - Starting backup for database: ${db}" >> "${LOG_FILE}"
PGPASSWORD=$DB_PASSWORD pg_dump -h $DB_HOST -U $DB_USER -d $db -F c -f "${BACKUP_PATH}/${db}--$(date +%Y%m%d_%H%M%S).sql"
PGPASSWORD=$DB_PASSWORD pg_dump -h $DB_HOST -U $DB_USER -d $db -F c -f "${BACKUP_PATH}/${db}--$BACKUP_DATE_TIME.sql"
echo "$(date) - Starting backup for database: ${db}" >> "${LOG_FILE}"
PGPASSWORD=$DB_PASSWORD pg_dump -h $DB_HOST -U $DB_USER -d $db -F c -f "${BACKUP_PATH}/${db}--$BACKUP_DATE_TIME.sql"
Tools
Shellcheck
[warning] 37-37: backup_file appears unused. Verify use (or export if used externally).
(SC2034)
docker service scale jempi_api=1
docker service scale jempi_async-receiver=1
Consider parameterizing service names for flexibility.
The script hardcodes service names (jempi_api, jempi_async-receiver). Consider using a parameter or environment variable to allow flexibility in specifying which services to scale.
- docker service scale jempi_api=1
- docker service scale jempi_async-receiver=1
+ docker service scale ${SERVICE_NAME}=1
Committable suggestion was skipped due to low confidence.
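One possible shape for this, assuming a SERVICES environment variable that does not exist in the script today and defaulting to the two services currently scaled:

# Scale every service named in $SERVICES; fall back to the current hard-coded pair
SERVICES="${SERVICES:-jempi_api jempi_async-receiver}"
for svc in $SERVICES; do
    docker service scale "${svc}=1"
done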
private BackEnd(
      final Level debugLevel,
      final ActorContext<Event> context,
      final String[] dgraphHosts,
      final int[] dgraphPorts,
      final String sqlIP,
      final int sqlPort,
      final String sqlUser,
      final String sqlPassword,
      final String sqlNotificationsDb,
      final String sqlAuditDb,
      final String kafkaBootstrapServers,
      final String kafkaClientId) {
   super(context);
   try {
      this.libMPI = null;
      this.dgraphHosts = dgraphHosts;
      this.dgraphPorts = dgraphPorts;
      this.pgIP = sqlIP;
      this.pgPort = sqlPort;
      this.pgUser = sqlUser;
      this.pgPassword = sqlPassword;
      this.pgNotificationsDb = sqlNotificationsDb;
      this.pgAuditDb = sqlAuditDb;
      psqlNotifications = new PsqlNotifications(sqlIP, sqlPort, sqlNotificationsDb, sqlUser, sqlPassword);
      openMPI(kafkaBootstrapServers, kafkaClientId, debugLevel);
   } catch (Exception e) {
      LOGGER.error(e.getMessage(), e);
      throw e;
   }
Review of Constructor and Error Handling in BackEnd
The constructor initializes various services and logs the initialization steps. However, it catches a generic Exception, which may not provide detailed insights into what might go wrong during initialization. It would be beneficial to catch more specific exceptions and handle them appropriately to provide more detailed error information.
- catch (Exception e) {
+ catch (SpecificExceptionType e) { // Replace with more specific exception types as applicable
LOGGER.error(e.getMessage(), e);
throw e;
}
Committable suggestion was skipped due to low confidence.
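One way this could look, assuming the Postgres and Kafka setup fail with unchecked exceptions; the split between the two catch blocks is illustrative, and the exception types should be adjusted to whatever PsqlNotifications and openMPI actually throw:

try {
   psqlNotifications = new PsqlNotifications(sqlIP, sqlPort, sqlNotificationsDb, sqlUser, sqlPassword);
   openMPI(kafkaBootstrapServers, kafkaClientId, debugLevel);
} catch (IllegalArgumentException e) {
   // Bad configuration values (hosts, ports, credentials)
   LOGGER.error("Invalid BackEnd configuration: {}", e.getMessage(), e);
   throw e;
} catch (RuntimeException e) {
   // Connection or client initialisation failures
   LOGGER.error("Failed to initialise BackEnd dependencies: {}", e.getMessage(), e);
   throw e;
}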
public JsonFieldsConfig(final String resourceFilename) {
   try {
      load(resourceFilename);
   } catch (Exception e) {
      LOGGER.debug(e);
   }
}

private JSONArray buildFieldsResponsePayload(
      final JSONArray systemFields,
      final JSONArray customFields) {
   JSONArray result = new JSONArray();
   // Process system fields
   for (Object systemField : systemFields) {
      JSONObject field = (JSONObject) systemField;
      // Mark field as readonly
      field.put("readOnly", true);
      // Merge array values
      result.add(field);
   }
   // Process custom fields
   for (Object customField : customFields) {
      // Convert field names from snake case to camel case
      JSONObject field = (JSONObject) customField;
      String fieldName = (String) field.get("fieldName");
      field.put("fieldName", AppUtils.snakeToCamelCase(fieldName));
      // Remove extra attributes
      field.remove("indexGoldenRecord");
      field.remove("indexPatient");
      field.remove("m");
      field.remove("u");
      // Mark field as editable
      field.put("readOnly", false);
      // Merge array values
      result.add(field);
   }
   return result;
}

private InputStream getFileStreamFromResource(final String resourceFilename) {
   ClassLoader classLoader = getClass().getClassLoader();
   return classLoader.getResourceAsStream(resourceFilename);
}

public void load(final String filename) {
   final var separator = FileSystems.getDefault().getSeparator();
   final var filePath = "%sapp%sconf_system%s%s".formatted(separator, separator, separator, filename);
   final var file = new File(filePath);
   try (Reader reader = new InputStreamReader(new FileInputStream(file))) {
      JSONParser jsonParser = new JSONParser();
      Object obj = jsonParser.parse(reader);
      JSONObject config = (JSONObject) obj;
      // System fields are fields that exists regardless of the implementation
      JSONArray systemFields = (JSONArray) config.get("systemFields");
      // Custom fields depend on the needs of the implementation
      JSONArray customFields = (JSONArray) config.get("fields");
      jsonFields = buildFieldsResponsePayload(systemFields, customFields).toJSONString();
   } catch (IOException | ParseException e) {
      throw new RuntimeException(e);
   }
Review: Configuration loading and response payload building in JsonFieldsConfig.
The load method correctly handles file reading and JSON parsing, and it builds the response payload based on system and custom fields. The error handling is robust, throwing a runtime exception when necessary. Consider adding more detailed logging at each step to aid in debugging and maintenance.
+ LOGGER.debug("Loading configuration from file: {}", filename);
+ LOGGER.debug("Building response payload with system and custom fields.");
def fetch_data_for_id(gid):
    get_expanded_golden_record = f'http://{host}:{port}/JeMPI/expandedGoldenRecord'
    payload = json.dumps({"gid": gid})
    headers = {'Content-Type': 'application/json'}
    response = requests.post(get_expanded_golden_record, headers=headers, data=payload)
    return response.json() if response.status_code == 200 else None
|
Add error handling for HTTP requests.
The function fetch_data_for_id lacks error handling beyond checking for a 200 status code. Consider adding more robust error handling to manage different HTTP statuses and potential exceptions from the requests.post call.
try:
    response = requests.post(get_expanded_golden_record, headers=headers, data=payload)
    if response.status_code == 200:
        return response.json()
    else:
        # Handle other statuses appropriately
        return None
except requests.exceptions.RequestException as e:
    print(f"Failed to fetch data for ID {gid}: {str(e)}")
    return None
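A variant of the same idea that also guards against a hung connection; the 30-second timeout is an arbitrary illustrative value, not something the script currently configures:

try:
    response = requests.post(get_expanded_golden_record, headers=headers, data=payload, timeout=30)
    response.raise_for_status()  # Treat 4xx/5xx responses as errors
    return response.json()
except requests.exceptions.RequestException as e:  # Covers HTTP errors, timeouts, connection failures
    print(f"Failed to fetch data for ID {gid}: {str(e)}")
    return None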
if len(sys.argv) >= 1:
    current_datetime = sys.argv[1]
else:
    current_datetime = datetime.now().strftime('%Y%m%d_%H%M%S')
|
Consider enhancing the argument handling logic.
Currently the script checks len(sys.argv) >= 1, which is always true because sys.argv[0] holds the script name itself. The else branch can therefore never run, and invoking the script without a timestamp argument raises an IndexError instead of falling back to the current time.
- if len(sys.argv) >= 1:
+ if len(sys.argv) > 1:
This change uses the supplied timestamp when one is given and falls back to the current time otherwise.
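If the script later grows more options, argparse expresses the same optional-timestamp behaviour more explicitly — a sketch, not part of the current script:

import argparse
from datetime import datetime

parser = argparse.ArgumentParser(description="Dgraph backup")
parser.add_argument("timestamp", nargs="?",
                    default=datetime.now().strftime('%Y%m%d_%H%M%S'))
current_datetime = parser.parse_args().timestamp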
# Function to backup Dgraph data
def backup_dgraph_data():
    get_gid_url = f'http://{host}:{port}/JeMPI/gidsAll'
    print("Fetching list of all golden record id's")
    response = requests.get(get_gid_url)
    if response.status_code == 200:
        new_golden_records = response.json()
        backup_data = []
        for gid in new_golden_records.get("records"):
            golden_record_data = fetch_data_for_id(gid)
            if golden_record_data:
                backup_data.append(golden_record_data)

        file_name = f'dgraph_backup_{current_datetime}.json'
        print(f'Total {str(len(backup_data))} Golden records backed up.')

        backup_path_folder = create_backup_json(backup_data, file_name)

        print(f'All data saved to {backup_path_folder + "/" + file_name}')
    else:
        print('Failed to retrieve list of IDs from the API')
|
Refactor to improve error handling and data fetching logic.
The backup_dgraph_data function can be optimized by separating concerns and improving the error handling when fetching IDs. Also, consider handling potential exceptions from the requests.get call.

def backup_dgraph_data():
    get_gid_url = f'http://{host}:{port}/JeMPI/gidsAll'
    print("Fetching list of all golden record id's")
    try:
        response = requests.get(get_gid_url)
        if response.status_code == 200:
            new_golden_records = response.json()
            backup_data = [data for gid in new_golden_records.get("records", []) if (data := fetch_data_for_id(gid))]
            file_name = f'dgraph_backup_{current_datetime}.json'
            print(f'Total {len(backup_data)} Golden records backed up.')
            backup_path_folder = create_backup_json(backup_data, file_name)
            print(f'All data saved to {backup_path_folder + "/" + file_name}')
        else:
            print('Failed to retrieve list of IDs from the API')
    except requests.exceptions.RequestException as e:
        print(f"Failed to retrieve list of IDs: {str(e)}")

The assignment expression in the comprehension keeps each fetched record while filtering out empty responses, so fetch_data_for_id is called only once per golden record.
Summary by CodeRabbit

New Features
Configurations
Documentation
  - .gitignore files for Java, Maven, and Docker directories.
Bug Fixes
Refactor
Style
Tests
Chores
Revert