
[Wait For #2854][FSU] Modify SimpleFC Application for FSU-CPU Test @open sesame 02/10 14:56 #2906

Merged: 6 commits merged into nnstreamer:main on Feb 11, 2025

Conversation

@DonghakPark DonghakPark (Member) commented Feb 3, 2025

Modify SimpleFC Application for FSU-CPU side test.

Commit summary:

commit 1 [Application] Update FSU SimpleFC Application: update SimpleFC Application for easier testing and a more realistic environment
commit 2 [FSU] Update layer weight load logic in FSU: update LayerNode so it does not load weights in load()
commit 3 [FSU] Modify Application for CPU-side test: update Application

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK [email protected]

@jijoongmoon jijoongmoon (Collaborator) left a comment

LGTM

@DonghakPark DonghakPark (Member Author) commented

After merging this PR, I will close #2854 and #2846.

@DonghakPark DonghakPark changed the title [Wait For #2854][FSU] Modify SimpleFC Application for FSU-CPU Test [Wait For #2854][FSU] Modify SimpleFC Application for FSU-CPU Test @open sesame 02/10 14:56 Feb 10, 2025
@jijoongmoon jijoongmoon (Collaborator) left a comment

LGTM

@baek2sm baek2sm (Contributor) left a comment

LGTM. Nice work!

@DonghakPark DonghakPark self-assigned this Feb 11, 2025
@dkjung dkjung (Collaborator) left a comment

Minor comments


  for (unsigned int j = 0; j < feature_size; ++j)
-   input[j] = j;
+   input[j] = (j / feature_size);

This is just the same as

input[j] = 0;

because this is integer division and every j is smaller than feature_size.

@DonghakPark DonghakPark (Member Author) commented Feb 11, 2025

Oh, I see. I will update it. Thank you for the review!
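
For illustration, a minimal standalone sketch of one possible fix, assuming the intent is to fill the input with values scaled into [0, 1); the buffer type and the feature_size value below are assumptions for this sketch, not taken from the PR:

    #include <vector>

    int main() {
      const unsigned int feature_size = 320; // hypothetical value for illustration
      std::vector<float> input(feature_size);
      for (unsigned int j = 0; j < feature_size; ++j)
        // promote j to float so the division is not truncated to 0
        input[j] = static_cast<float>(j) / feature_size;
      return 0;
    }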

Comment on lines +394 to +396
if (!((model_graph.getNumLoadedWeightPoolTensors() + 1) / 2 <
lookahead + 1)) {
model_graph.checkUnloadComplete(f - 1);

Just for checking:

This layer waits for the previous layer only if

!((model_graph.getNumLoadedWeightPoolTensors() + 1) / 2 < lookahead + 1)

It doesn't need to wait if

(model_graph.getNumLoadedWeightPoolTensors() + 1) / 2 < lookahead + 1

Am I correct?

@DonghakPark DonghakPark (Member Author) commented Feb 11, 2025

Yes. It waits for the tensor unload so it can confirm that the memory has been deallocated.


🫶
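
For reference, a minimal standalone sketch of the arithmetic discussed above (the helper function below is hypothetical, not part of this PR): a later commit notes that each loadTensors call loads two weight-pool tensors, so (n + 1) / 2 approximates the number of loaded layers, and the layer only waits for an unload once that count no longer fits inside the lookahead window.

    #include <iostream>

    // Hypothetical helper mirroring the condition in the snippet above.
    bool must_wait_for_unload(unsigned int num_loaded, unsigned int lookahead) {
      // (num_loaded + 1) / 2 ~= number of loaded layers (two tensors per load)
      return !((num_loaded + 1) / 2 < lookahead + 1);
    }

    int main() {
      std::cout << must_wait_for_unload(3, 2) << '\n'; // (3+1)/2 = 2 < 3, so no wait (prints 0)
      std::cout << must_wait_for_unload(5, 2) << '\n'; // (5+1)/2 = 3, not < 3, so wait (prints 1)
      return 0;
    }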

@DonghakPark DonghakPark force-pushed the fsu_cpu_Application branch 2 times, most recently from ddbf5a0 to 22aa3ea on February 11, 2025 02:15
SeoHyungjun and others added 6 commits February 11, 2025 17:47
The number of currently loaded tensors can be queried through the
getNumLoadedTensors function. When loadTensors is executed once,
getNumLoadedTensors increases by two.

Added the leave_lookahead argument to loadTensors.
Calculate leave_lookahead through getNumLoadedTensors and then execute
loadTensors.

Signed-off-by: SeoHyungjun <[email protected]>
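
A rough sketch of how the pieces described in the commit above might fit together; this is an assumption about the intended usage, and the helper name below is illustrative, not the actual nntrainer API:

    #include <algorithm>
    #include <iostream>

    // Hypothetical helper: two tensors are counted per loadTensors() call, so
    // loaded layers ~= num_loaded_tensors / 2; leave that many lookahead slots
    // untouched on the next load, capped at the configured lookahead.
    unsigned int compute_leave_lookahead(unsigned int num_loaded_tensors,
                                         unsigned int lookahead) {
      unsigned int loaded_layers = num_loaded_tensors / 2;
      return std::min(loaded_layers, lookahead);
    }

    int main() {
      // with 4 tensors already loaded and lookahead = 3, leave 2 slots alone
      std::cout << compute_leave_lookahead(4, 3) << '\n'; // prints 2
      return 0;
    }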
Update FSU SimpleFC Application for real case

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <[email protected]>
In the FSU case, the layer's load() is not needed:
- add a swap parameter
- when swap is enabled, load() does not load weights
- update the SimpleFC Application

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <[email protected]>
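
A minimal sketch of the guard described in the commit above; the class and member names here are placeholders, not the actual nntrainer LayerNode API:

    #include <iostream>

    // Placeholder type illustrating the behavior: when swap/FSU is enabled,
    // load() skips reading weights because they are loaded on demand later.
    struct DemoLayerNode {
      bool swap = false;

      void load() {
        if (swap) {
          std::cout << "swap enabled: skip weight load in load()\n";
          return;
        }
        std::cout << "swap disabled: load weights eagerly\n";
      }
    };

    int main() {
      DemoLayerNode node;
      node.swap = true;
      node.load(); // takes the skip path
      return 0;
    }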
Modify Application for CPU side Test

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <[email protected]>
1. Fix uint -> unsigned int
2. Remove some debug cout statements for accurate performance recording

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <[email protected]>
Add doxygen comment on getNumLoadedTensors

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <[email protected]>
@jijoongmoon jijoongmoon merged commit bc06bd8 into nnstreamer:main Feb 11, 2025
12 of 17 checks passed
@DonghakPark DonghakPark deleted the fsu_cpu_Application branch February 12, 2025 04:18