[BUG] Irrelevant tests should be completely skipped #804
Copied from: thesofproject/linux#3136
Until the undefined behavior is defined, this issue is blocked.
Maybe the logs should be empty, maybe they should not be empty, or maybe they can be either, but invoking the logger should never fail.
I don't understand the question @marc-hb; checking the logger should happen after checking whether the test is relevant or not. There is no logger behavior to define if there is no test to run!
We have (at least) two different ways to skip. One is from outside the test suite; then the test does not run at all. The other way is from the test code itself. It's a "partial skip", and in that case the shared SetUp and TearDown routines do run. This has proved very useful for detecting undefined behaviors like "what should happen when using the logger and the DSP is turned on and off again". These behaviors must not be left undefined. Once they are clearly defined, the test code can be adjusted. thesofproject/linux#3136 is apparently changing (some of) these behaviors. How so?
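The two skip modes described above can be sketched as follows. This is a minimal illustration, not actual sof-test code; all function names (`setup`, `teardown`, `check_logs`, etc.) are hypothetical:

```shell
#!/bin/sh
# Hypothetical sketch of the two skip modes.
# None of these names are real sof-test helpers.

setup()      { echo "setup ran"; }
check_logs() { echo "logs checked"; }               # merely looking at the logs
teardown()   { echo "teardown ran"; check_logs; }   # shared TearDown includes the log check

# Full skip, decided outside the test suite: the test never starts,
# so neither SetUp nor TearDown runs and the logger is never touched.
full_skip() {
    echo "SKIP: test not run at all"
}

# Partial skip, decided inside the test body: the shared SetUp and
# TearDown still run, which is exactly what surfaces undefined
# logging behavior (e.g. reading logs after DSP power transitions).
partial_skip() {
    setup
    echo "SKIP: test body skipped"
    teardown
}

full_skip
partial_skip
```

In the partial-skip path the log check runs even though the test body was skipped, so any undefined logging behavior still gets exercised.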
@marc-hb even within the test suite, you can detect that e.g. a capture test cannot possibly run because there is no capture device. That's already something that's reported by the topology parsing.
Yes, and this is exactly what's happening. Then the shared TearDown routine fails because thesofproject/linux#3136 unexpectedly changes the logging behavior. So, what is the formally defined, expected logging behavior in that case? Not just a guess.
"Ignore logging if no test was run" seems like a well-defined behavior, no?
I'm not asking about the test behavior, I'm asking about the logging behavior. For instance, is crashing the whole system acceptable when trying to get logs while the DSP is powered off? I guess not. So we might as well try it and not make any exception.
That'd be a different test.
Merely looking at the logs should be part of every test; you asked for that in #297 and I heartily agree. The expected logs may differ depending on what the test does, and they may even be empty, but merely looking at the logs should not be optional. Logging is not "just another feature"; it is where we go to find out what went wrong.

We also already have a dedicated sof-logger test, and that particular test could/should exercise power management more than it does right now, but the most basic feature of merely looking at the logs (empty or not) should always work in every test.

All this test discussion is really a digression that avoids the crucial question: what is supposed to happen with logging when the DSP power goes on and off? You're basically saying "let's not worry about that yet, because right now I prefer not to know what's supposed to happen".
Documenting the current behavior of the logger (a sof-docs PR has also been submitted):
Closing; this doesn't seem relevant any longer.
Describe the bug
The test results of PR #3136 show failures in capture cases on UP2_HDA. There is no capture on this platform, so the FAIL status is simply wrong.
To Reproduce
test thesofproject/linux#3136 on UP2_HDA
Expected behavior
Skip irrelevant tests and don't try to look at the logger output or etrace.
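The expected behavior could be sketched like this: detect irrelevance up front and skip before any logger or etrace access. This is a hypothetical illustration; `has_capture` and `check_etrace` are made-up names, not real sof-test functions:

```shell
#!/bin/sh
# Hypothetical sketch: skip an irrelevant test before touching
# the logger or etrace. Names are illustrative only.

has_capture()  { false; }                   # e.g. UP2_HDA: no capture device
check_etrace() { echo "etrace checked"; }   # never reached when skipped

run_capture_test() {
    if ! has_capture; then
        echo "SKIP: no capture device on this platform"
        return 0                            # report SKIP, not FAIL
    fi
    echo "running capture test"
    check_etrace
}

run_capture_test
```

With this ordering, a platform without capture reports SKIP and the logger/etrace checks are never invoked, so they cannot produce a spurious FAIL.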