
[BUG] Irrelevant tests should be completely skipped #804

Closed
plbossart opened this issue Nov 9, 2021 · 11 comments
Labels
area:logs Log and results collection, storage, etc.

Comments

@plbossart
Member

Describe the bug

The test results of PR #3136 show the capture cases failing on UP2_HDA. There is no capture on this platform, therefore the FAIL status is simply wrong.

To Reproduce

test thesofproject/linux#3136 on UP2_HDA

Expected behavior

Skip irrelevant tests and don't try to look at the logger output or etrace.

@marc-hb
Collaborator

marc-hb commented Nov 9, 2021

Copied from: thesofproject/linux#3136

1. The real question here is: how should logging behave in unusual DSP power-cycling situations? Consider yourself lucky that the current sof-test caught this undefined behavior by chance, even though it was not specifically designed for this.
2. Define this formally (in a better and more visible place than scattered in the comments above) and then the test can be adjusted accordingly.

Until the undefined behavior is defined, this issue is blocked.

My guess is that in some cases we still have the DSP powered on when the capture test case starts, and we capture some of it.

Maybe the logs should be empty, maybe they should not be, or maybe either is acceptable, but invoking the logger should never fail.

@plbossart
Member Author

I don't understand the question, @marc-hb: checking the logger should happen after checking whether the test is relevant. There is no logger behavior to define if there is no test to run!

@marc-hb
Collaborator

marc-hb commented Nov 9, 2021

We have (at least) two different ways to skip. One is from outside the test suite; then the test does not run at all. The other is from the test code itself: a "partial skip", in which case the shared SetUp and TearDown routines do run (see the sketch below). This just proved very useful for detecting undefined behaviors like "what should happen when using the logger while the DSP is turned on and off again". These behaviors must not be left undefined. Once they are clearly defined, the test code can be adjusted.
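A minimal sketch of the distinction, using Python's `unittest` purely for illustration (sof-test itself is shell-based, and `platform_has_capture` is a hypothetical probe, not a real sof-test helper):

```python
import unittest

class CaptureTest(unittest.TestCase):
    def setUp(self):
        # Shared SetUp: runs even when the test body skips itself below.
        self.logs_available = True

    def tearDown(self):
        # Shared TearDown: also runs after a "partial skip", so it still
        # looks at the logs and can expose undefined logging behavior.
        assert self.logs_available

    def test_capture(self):
        platform_has_capture = False  # hypothetical capability probe
        if not platform_has_capture:
            # "Partial skip": the body stops here, but setUp already ran
            # and tearDown will still run.
            self.skipTest("no capture device on this platform")
        # ... the actual capture test would go here ...

if __name__ == "__main__":
    # The other mechanism, an external skip, deselects the test before it
    # starts (e.g. by passing an explicit test list that omits it), so
    # neither setUp nor tearDown executes.
    unittest.main()
```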

thesofproject/linux#3136 is apparently changing (some of) these behaviors. How so?

@plbossart
Member Author

@marc-hb even within the test suite, you can detect that e.g. a capture test cannot possibly run because there is no capture device. That's already reported by the topology parsing.
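As an illustration of such a capability check, a hedged sketch (not sof-test code, which derives this from topology parsing; `arecord -l` is just one independent way to ask ALSA about capture PCMs):

```python
import subprocess

def platform_has_capture() -> bool:
    """Illustrative probe: does ALSA report any capture PCM?

    Hypothetical helper, not part of sof-test; the suite gets the same
    information from topology parsing as described above.
    """
    result = subprocess.run(["arecord", "-l"],
                            capture_output=True, text=True, check=False)
    # "arecord -l" prints one "card N: ..." line per capture device and
    # nothing useful (plus an error message) when there are none.
    return result.returncode == 0 and "card" in result.stdout
```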

@marc-hb
Collaborator

marc-hb commented Nov 9, 2021

> @marc-hb even within the test suite, you can detect that e.g. a capture test cannot possibly run because there is no capture device. That's already reported by the topology parsing.

Yes, and this is exactly what's happening. Then the shared TearDown routine fails because thesofproject/linux#3136 unexpectedly changes the logging behavior. So what is the formally defined, expected logging behavior in that case? Not just a guess.

@plbossart
Member Author

"Ignore logging if no test was run" seems like a well-defined behavior, no?

@marc-hb
Collaborator

marc-hb commented Nov 9, 2021

I'm not asking about the test behavior, I'm asking about the logging behavior. For instance, is crashing the whole system acceptable when trying to get logs while the DSP is powered off? I guess not. So we might as well always try it and make no exception.

@plbossart
Member Author

That'd be a different test.
We have to distinguish between "when do we need to check the logger results" and "how do we check the logger results".

@marc-hb
Collaborator

marc-hb commented Nov 9, 2021

> That'd be a different test.

Merely looking at the logs should be part of every test; you asked for that in #297 and I heartily agree. The expected logs may differ depending on what the test does, and they may even be empty, but merely looking at the logs should not be optional. Logging is not "just another feature": it is where we go to find what went wrong.

Now, we also already have a dedicated sof-logger test, and that particular test could/should exercise power management more than it does right now. But the most basic feature of merely looking at the logs (empty or not) should always work, in every test; see the sketch below.
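A minimal sketch of that invariant, assuming a hypothetical `check_logs` helper wrapping the sof-logger binary (not sof-test code): whatever content a given test expects, the invocation itself must never crash or hang.

```python
import subprocess

def check_logs(logger_cmd=("sof-logger",), timeout_s=10):
    """Invoke the logger and return its output for per-test checks.

    Hypothetical helper: each test decides what output it expects
    (possibly none), but the invocation itself must never fail.
    """
    try:
        result = subprocess.run(list(logger_cmd), capture_output=True,
                                text=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        raise AssertionError("logger hung: undefined behavior?")
    if result.returncode != 0:
        raise AssertionError(f"logger failed: {result.stderr.strip()}")
    return result.stdout  # may legitimately be empty
```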

But all this test discussion is really a digression that avoids the crucial question of what is supposed to happen with logging when the DSP power goes on and off. You're basically saying "let's not worry about that yet, because right now I prefer not to know what's supposed to happen".

@kv2019i
Contributor

kv2019i commented Nov 10, 2021

Documenting the current behaviour of the logger (sof-docs PR also submitted):
#802 (comment)

@marc-hb added the area:logs (Log and results collection, storage, etc.) label on Jan 28, 2022
@plbossart
Member Author

Closing, this doesn't seem relevant any longer.
