Description
I would love to leverage some of our existing functional Nightwatch test scripts to generate load, and have the results compiled into a load test report. Artillery.io already does something similar with modified Playwright scripts, as an alternative way of generating load to creating batches of API requests.
Driving a performance test with a UI script would put a more realistic load on the system under test and would let us report on the actual user experience of navigating it. It would also save a tremendous amount of time compared with decomposing the page requests into JMeter or Artillery requests, which can also drift from reality over time as the site changes.
Suggested solution
I feel that Nightwatch already has the architecture to support this, in that it can run test scripts in parallel. What I propose we add is the ability to:
- Specify which scripts to run
- Specify how long to continuously run the scripts
- Specify how many users/threads should run the scripts over that time period
- Optionally, define phases to ramp user load up and down and run different scripts in different phases
- Output a performance test report averaging the normal step timings, perhaps combined with the browser metrics from the .getPerformanceMetrics() method
Alternatives / Workarounds
Copy and paste existing tests multiple times and run them in parallel, with a custom reporter that provides average timings across the runs of the same tests.
Additional Information
No response