Please note that this project is released as part of **HELLO-FOSS**. By participating in this project, you agree to abide by its terms.
If you would like to contribute to the project, please follow these guidelines:
1. Fork the original WnCC repository to your personal account.
2. Clone the forked repository locally.
3. Create a new branch for your feature or bug fix.
4. Make the necessary changes and commit them.
5. Push your changes to your forked repository.
6. Submit a pull request from your branch to the main repository, explaining the changes you made and adding any other information that might be helpful for review.
**src/README.md**

A list of functions that have been implemented can be found here:

### 1) LU Factorization
>This C++ code implements LU factorization using OpenMP for parallel execution of matrix updates. It optimizes the decomposition by distributing computations for the lower (L) and upper (U) triangular matrices across multiple threads.
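
As a rough illustration of that pattern (an illustrative sketch only, not the repository's actual source, which is not shown here), a Doolittle-style decomposition can parallelise the independent updates of each row of U and each column of L:

```cpp
// Illustrative sketch only: Doolittle LU decomposition without pivoting,
// with the row-of-U and column-of-L updates shared among OpenMP threads.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 4;
    std::vector<std::vector<double>> A = {{4, 3, 2, 1},
                                          {3, 4, 3, 2},
                                          {2, 3, 4, 3},
                                          {1, 2, 3, 4}};
    std::vector<std::vector<double>> L(n, std::vector<double>(n, 0.0));
    std::vector<std::vector<double>> U(n, std::vector<double>(n, 0.0));

    for (int k = 0; k < n; ++k) {
        // Entries U[k][j] for j >= k depend only on earlier rows, so they are independent.
        #pragma omp parallel for
        for (int j = k; j < n; ++j) {
            double s = 0.0;
            for (int p = 0; p < k; ++p) s += L[k][p] * U[p][j];
            U[k][j] = A[k][j] - s;
        }
        // Entries L[i][k] for i > k are likewise independent once U[k][k] is known.
        #pragma omp parallel for
        for (int i = k; i < n; ++i) {
            if (i == k) { L[k][k] = 1.0; continue; }
            double s = 0.0;
            for (int p = 0; p < k; ++p) s += L[i][p] * U[p][k];
            L[i][k] = (A[i][k] - s) / U[k][k];
        }
    }
    printf("U[0][0] = %.2f, L[1][0] = %.2f\n", U[0][0], L[1][0]);
    return 0;
}
```
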
### 2) Maximum element search
>The code for this function can be found in [max.cpp](src/max.cpp), and the input it uses can be found in input.cpp
The code uses OpenMP for parallel programming to find the maximum element in an array. The search is distributed across multiple threads, improving performance by dividing the workload.
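
A minimal sketch of that idea (not necessarily identical to max.cpp, and using a hard-coded array instead of input.cpp):

```cpp
// Illustrative sketch: parallel maximum search with an OpenMP max reduction,
// which keeps a private maximum per thread and merges them at the end.
// Note: reduction(max:...) requires OpenMP 3.1 or newer.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> a(1000000);
    for (std::size_t i = 0; i < a.size(); ++i) a[i] = static_cast<int>(i % 9973);

    int best = a[0];
    #pragma omp parallel for reduction(max : best)
    for (long i = 0; i < static_cast<long>(a.size()); ++i)
        if (a[i] > best) best = a[i];

    printf("maximum = %d\n", best);
    return 0;
}
```
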
### 3) Matrix-Matrix Multiplication
>The code for the following function can be found in [mm.cpp](src/mm.cpp)<br>
This code performs matrix-matrix multiplication using OpenMP to parallelize the computation across multiple threads. It optimizes the multiplication process for large matrices, reducing execution time by distributing the workload across available CPU cores.
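
For reference, a hedged sketch of the same technique (the sizes and loop order here are assumptions, not necessarily those used in mm.cpp):

```cpp
// Illustrative sketch: C = A * B with the outer row loop split across threads.
// Each iteration of i writes a distinct row of C, so no synchronisation is needed.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 512;
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k)       // i-k-j order keeps B and C accesses contiguous
            for (int j = 0; j < n; ++j)
                C[i * n + j] += A[i * n + k] * B[k * n + j];

    printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0 * n);
    return 0;
}
```
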
### 4) Monte Carlo Method
>The code for the following function can be found in [montecarlo.cpp](src/montecarlo.cpp)<br>
The code estimates the value of Pi using the Monte Carlo method with OpenMP for parallel processing. It simulates random points within a unit square and counts how many fall within the unit circle, then uses multiple threads to improve performance and speed up the estimation process.
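
A small sketch of the approach (the per-thread random number generators and the reduction on the hit count are assumptions, not necessarily what montecarlo.cpp does):

```cpp
// Illustrative sketch: Monte Carlo estimate of Pi. Each thread draws its own
// random points; the counts of points inside the circle are combined by a reduction.
#include <omp.h>
#include <cstdio>
#include <random>

int main() {
    const long samples = 10000000;
    long inside = 0;

    #pragma omp parallel reduction(+ : inside)
    {
        // Seed each thread differently so the random streams are not identical.
        std::mt19937 gen(12345u + omp_get_thread_num());
        std::uniform_real_distribution<double> dist(0.0, 1.0);

        #pragma omp for
        for (long i = 0; i < samples; ++i) {
            double x = dist(gen), y = dist(gen);
            if (x * x + y * y <= 1.0) ++inside;
        }
    }
    printf("pi ~ %f\n", 4.0 * inside / samples);
    return 0;
}
```
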
### 5) Matrix-Vector Multiplication
>The code for the following function can be found in [mv.cpp](src/mv.cpp)<br>
The code performs matrix-vector multiplication using OpenMP for parallel processing. The dynamic scheduling with a chunk size of 16 distributes the computation of each row of the matrix across multiple threads, optimizing the execution for large-scale data by balancing the load dynamically.
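
A sketch of that scheme, using the schedule(dynamic, 16) clause from the description (the matrix contents are made up for illustration):

```cpp
// Illustrative sketch: y = A * x, one row per iteration, with rows handed out
// dynamically in chunks of 16 so faster threads pick up more work.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 2048;
    std::vector<double> A(n * n, 1.0), x(n, 2.0), y(n, 0.0);

    #pragma omp parallel for schedule(dynamic, 16)
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int j = 0; j < n; ++j) sum += A[i * n + j] * x[j];
        y[i] = sum;
    }
    printf("y[0] = %.1f (expected %.1f)\n", y[0], 2.0 * n);
    return 0;
}
```
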
### 6) Product of elements of an array
>The code for the following function can be found in [prod.cpp](src/prod.cpp)<br>
This C++ code calculates the product of elements in an array using OpenMP to parallelize the computation. It optimizes large product calculations by summing the logarithms of array elements in parallel and exponentiating the result to obtain the final product, reducing potential overflow risks.
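
A sketch of the log-sum trick described above (it assumes strictly positive elements; prod.cpp itself may handle edge cases differently):

```cpp
// Illustrative sketch: the product of positive elements is computed as
// exp(sum of logs), with the log-sum accumulated in parallel by a reduction.
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> a(1000, 1.01);   // true product is 1.01^1000

    double log_sum = 0.0;
    #pragma omp parallel for reduction(+ : log_sum)
    for (long i = 0; i < static_cast<long>(a.size()); ++i)
        log_sum += std::log(a[i]);       // assumes a[i] > 0

    printf("product ~ %f\n", std::exp(log_sum));
    return 0;
}
```
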
### 7) Pi reduction
>The code for the following function can be found in [pi-reduction.cpp](src/pi-reduction.cpp)<br>
This C++ code estimates the value of Pi using numerical integration with the OpenMP library for parallelization. It divides the computation of the integral into multiple threads, summing partial results in parallel using a reduction clause to optimize the performance and accuracy when calculating Pi across a large number of steps.
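
This is the standard midpoint-rule integration of 4/(1+x^2) over [0,1]; a compact sketch (the step count is an assumption) looks like:

```cpp
// Illustrative sketch: Pi from the integral of 4/(1+x^2) on [0,1], with the
// partial sums of all threads merged by the reduction clause.
#include <omp.h>
#include <cstdio>

int main() {
    const long steps = 100000000;
    const double dx = 1.0 / steps;
    double sum = 0.0;

    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < steps; ++i) {
        double x = (i + 0.5) * dx;       // midpoint of the i-th strip
        sum += 4.0 / (1.0 + x * x);
    }
    printf("pi ~ %.10f\n", sum * dx);
    return 0;
}
```
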
### 8) Calculation of Standard Deviation
>The code for the following function can be found in [standard_dev.cpp](src/standard_dev.cpp)<br>
This C++ code calculates the standard deviation of a dataset using OpenMP for parallel processing. It first computes the mean in parallel, then calculates the variance by summing the squared differences from the mean, distributing both tasks across multiple threads to improve performance with large datasets.
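
A two-pass sketch of that approach (population standard deviation over synthetic data; standard_dev.cpp may differ in detail):

```cpp
// Illustrative sketch: parallel mean, then parallel sum of squared deviations,
// both via reductions; finally the (population) standard deviation.
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const long n = 1000000;
    std::vector<double> data(n);
    for (long i = 0; i < n; ++i) data[i] = static_cast<double>(i % 10);

    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < n; ++i) sum += data[i];
    const double mean = sum / n;

    double sq = 0.0;
    #pragma omp parallel for reduction(+ : sq)
    for (long i = 0; i < n; ++i) sq += (data[i] - mean) * (data[i] - mean);

    printf("stddev ~ %f\n", std::sqrt(sq / n));
    return 0;
}
```
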
### 9) Sum of elements of an array
>The code for the following function can be found in [sum2.cpp](src/sum2.cpp) <br>
This C++ code computes the sum of a large array (with 10 million elements) in parallel using OpenMP. It divides the workload among multiple threads based on the total number of threads, each thread calculates a partial sum, and the results are combined in a critical section to avoid race conditions. The execution time for the sum computation is also measured and displayed.
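
A sketch of the pattern described: manual index splitting by thread id, per-thread partial sums, a critical section for the final merge, and omp_get_wtime() for timing (details such as the element type are assumptions):

```cpp
// Illustrative sketch: each thread sums its own contiguous slice of the array,
// then adds its partial result to the shared total inside a critical section.
#include <omp.h>
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const long n = 10000000;
    std::vector<int> a(n, 1);
    long long total = 0;

    double t0 = omp_get_wtime();
    #pragma omp parallel
    {
        const int tid = omp_get_thread_num();
        const int nthreads = omp_get_num_threads();
        const long chunk = (n + nthreads - 1) / nthreads;
        const long begin = tid * chunk;
        const long end = std::min(n, begin + chunk);

        long long partial = 0;
        for (long i = begin; i < end; ++i) partial += a[i];

        #pragma omp critical   // serialises only the final accumulation
        total += partial;
    }
    double t1 = omp_get_wtime();
    printf("sum = %lld, time = %f s\n", total, t1 - t0);
    return 0;
}
```
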
### 10) Vector-Vector Dot product calculation
>The code for the following function can be found in [vvd.cpp](src/vvd.cpp) <br>
This C++ code calculates the dot product of two arrays using OpenMP for parallelization. It initializes two arrays, A and B, each containing 1000 elements set to 1. The dot product is computed in parallel using a dynamic scheduling strategy, with a chunk size of 100, and the results are combined using a reduction operation. The final result is printed to the console.
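
A sketch matching the description (1000 elements set to 1, dynamic scheduling with a chunk size of 100, and a + reduction on the accumulator):

```cpp
// Illustrative sketch: dot product of two arrays of ones, computed in parallel
// with dynamic scheduling (chunk 100) and a reduction on the accumulator.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000;
    std::vector<double> A(n, 1.0), B(n, 1.0);

    double dot = 0.0;
    #pragma omp parallel for schedule(dynamic, 100) reduction(+ : dot)
    for (int i = 0; i < n; ++i) dot += A[i] * B[i];

    printf("dot = %.1f\n", dot);   // expected 1000.0
    return 0;
}
```
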
### 11) Sum calculation (incorrect, as the barrier pragma is not applied)
>The code for the following function can be found in [wrong_sum.cpp](src/wrong.cpp)<br>
This C++ code computes the sum of an array using OpenMP with task-based parallelism. It initializes an array of size 600 with all elements set to 1. The code divides the summation task into segments of size 100, allowing multiple threads to process these segments concurrently. The results from each task are accumulated into a shared variable sum using a critical section to prevent data races.
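
A corrected sketch of that task-based pattern (a taskwait is included here; the original is described as lacking the barrier, which is what makes its result unreliable):

```cpp
// Illustrative sketch: the 600-element array is summed by one task per
// 100-element segment; partial sums are merged in a critical section.
#include <omp.h>
#include <cstdio>

int main() {
    const int n = 600, seg = 100;
    int a[n];
    for (int i = 0; i < n; ++i) a[i] = 1;

    int sum = 0;
    #pragma omp parallel
    #pragma omp single            // one thread spawns the tasks; any thread may run them
    {
        for (int start = 0; start < n; start += seg) {
            #pragma omp task firstprivate(start) shared(sum)
            {
                int partial = 0;
                for (int i = start; i < start + seg; ++i) partial += a[i];
                #pragma omp critical
                sum += partial;
            }
        }
        #pragma omp taskwait      // wait for all tasks; the description above says
                                  // the original omits a barrier like this
    }
    printf("sum = %d\n", sum);    // expected 600
    return 0;
}
```
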