
Measuring Page Transitions with Scripts

Capturing page load performance for a specific URL is a good start for detecting opportunities to improve performance, but some performance tests require user interaction, such as measuring the load of a page that follows a successful login or a form submission. The Sauce Labs custom WebDriver command lets you trigger performance capturing at a precise point of interaction in an automation script, so you can isolate any issues related to the current app state.

note

Using automation to test performance after targeted interactions with your app does not mean you should integrate performance testing into your existing functional test suite. Testing function and performance in the same test is likely to compromise the results of both objectives and can obscure failure troubleshooting.

What You'll Learn

  • How to enable performance in your automation script
  • How to measure performance as the automation runs
  • How to detect regressions
  • How to write the performance results to a log
  • How to view your results

What You'll Need

  • A Sauce Labs account (Log in or sign up for a free trial license).
  • Your Sauce Labs Username and Access Key.
    • Have your SAUCE_USERNAME and SAUCE_ACCESS_KEY defined for your environment.
  • Google Chrome (no older than 3 versions from latest) as the test browser.
  • An automation script that performs the interaction with your app during which you want to measure performance.

Setting Performance Capabilities

Before you configure your script to capture performance metrics as it executes, you must update your capabilities configuration to enable performance actions. To do this, set the extendedDebugging and capturePerformance attributes in sauce:options to true. The following Python (pytest) excerpt shows a sauce:options configuration.

def driver(request):
    sauceOptions = {
        "screenResolution": "1280x768",
        "platformName": "Windows 10",
        "browserVersion": "61.0",
        "seleniumVersion": "3.11.0",
        "name": "Pytest Chrome W3C Sample",
        # Enable performance actions for this session:
        "extendedDebugging": True,
        "capturePerformance": True
    }
    chromeOpts = {
        "platformName": "Windows 10",
        "browserName": "chrome",
        "browserVersion": "61.0",
        "goog:chromeOptions": {"w3c": True},
        "sauce:options": sauceOptions
    }
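
For context, here is a minimal, self-contained sketch of a pytest fixture that passes such capabilities to a remote session. The US West 1 endpoint URL is an assumption; substitute the endpoint for your account's data center. The Remote call uses the Selenium 3 style signature, matching the seleniumVersion capability above:

import os

import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    chromeOpts = {
        "platformName": "Windows 10",
        "browserName": "chrome",
        "browserVersion": "61.0",
        "goog:chromeOptions": {"w3c": True},
        # Enable performance actions for this session:
        "sauce:options": {"extendedDebugging": True, "capturePerformance": True}
    }
    # Credentials come from the SAUCE_USERNAME and SAUCE_ACCESS_KEY
    # environment variables listed under "What You'll Need".
    sauce_url = "https://{}:{}@ondemand.us-west-1.saucelabs.com/wd/hub".format(
        os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"]
    )
    driver = webdriver.Remote(command_executor=sauce_url, desired_capabilities=chromeOpts)
    yield driver
    driver.quit()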

Implementing the Performance Command Assertion

The custom sauce:performance command measures the performance output against a baseline of previously accepted performance values. If no baseline has been set, the Performance test will create one by measuring performance output 10 times to get an aggregate baseline. The command returns pass when the current results are within the baseline allowances or fail when the results fall outside the baseline. A fail result gives you the option to handle regressions.

caution

Enabling performance capturing can add up to 60 seconds per URL change in a test. We therefore advise separating your performance tests from your functional tests. See our Performance Requirements and Recommendations for more advice on optimizing your performance test results.

Command

sauce:performance

Arguments

  • name (Required): The name of the test as it would appear on Sauce Labs.
  • metrics (Optional): Specifies one or more specific metrics you want to assert. If not specified, the test defaults to score, which automatically tests all metrics that currently make up a Lighthouse Performance Score. See Metric Values for the list of supported metric values.

Script Examples

The following sample is an excerpt that shows the sauce:performance assertion used in a Selenium test script.

See the complete Python performance demo.
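As a minimal illustration (a sketch, not the demo file itself), a pytest-style test asserting the load metric against the baseline might look like the following; the URL and test name are placeholders:

def test_load_regression(self, driver):
    # Navigate to the page whose performance you want to assert.
    driver.get("https://saucelabs.com/")

    # Run the custom command; it compares the captured metrics
    # against the established baseline.
    result = driver.execute_script("sauce:performance", {
        "name": "load regression demo",  # test name as it appears on Sauce Labs
        "metrics": ["load"]              # omit to assert the default Lighthouse score
    })

    # "pass" means all asserted metrics fall within the baseline allowances.
    assert result["result"] == "pass", result["reason"]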

Target Specific URLs in a Script

You can use the sauce:performanceDisable and sauce:performanceEnable commands to limit the collection of performance metrics to specific URL pages. Implementing these pauses in metrics collection allows you to bypass navigational pages such as login so your tests are faster and more efficient, and your performance results are more relevant.

Example

JS Performance Pause Sample
browser.execute('sauce:performanceDisable')
browser.url('https://www.json.org/json-en.html')
browser.execute('sauce:performanceEnable')
browser.url('https://example.com')

In the preceding example, performance metrics will only be collected for https://example.com.
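
If your test runs in Python rather than WebdriverIO, a sketch of the equivalent calls, assuming the same custom commands are issued through Selenium's execute_script, would be:

# Pause metrics collection while loading pages you don't want measured.
driver.execute_script("sauce:performanceDisable")
driver.get("https://www.json.org/json-en.html")

# Resume collection before loading the page you do want to measure.
driver.execute_script("sauce:performanceEnable")
driver.get("https://example.com")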

Defining a Performance Budget

Rather than letting a baseline determine the acceptable metric values for your pages, you can define your own metric value limits for individual pages in your app and then assert against those limits to ensure your performance results always stay within the range you deem optimal.

First, create a separate file in which you define your target metric limits, as in the following example.

budget.json
{
    "https://saucelabs.com/": {
        "speedIndex": 2300,
        "lastVisualChange": 2200,
        "load": 4200
    },
    "https://saucelabs.com/platform/analytics-performance/sauce-performance": {
        "score": 0.78
    }
}

Then, import your budget file in your test script and assert your performance call against the values in your budget, as shown in the following sample.

const assert = require('assert')
const budgets = require('./budget.json')

for (const [url, budget] of Object.entries(budgets)) {
    await browser.url(url)
    const performanceLogs = await browser.execute('sauce:log', {
        type: 'sauce:performance'
    })

    for (const [metric, value] of Object.entries(budget)) {
        assert.ok(
            performanceLogs[metric] < value,
            `metric ${metric} is over the performance budget`
        )
    }
}

Handling Regressions

When one or more metric evaluations fail because the result falls outside the established baseline, it is considered a regression. You then have the option to either troubleshoot and resolve the source of the regression to bring the test back into the baseline range, or update the baseline with the new performance values. If you accept new baselines, the command measures performance against those values until another regression is detected, at which point you again choose whether to troubleshoot or update the baselines.

Since the command can be called throughout the test script, create tests that check for performance regressions across core business flows and screens. For example, evaluate pages that load following a successful login event or require multiple steps to trigger.

note

sauce:performance is only aware of the performance metrics of the get URL command that was called before it; it is not able to capture metrics for views that were navigated to via WebDriver actions (e.g., button clicks). In the following example, the custom command returns performance metrics for the /inventory.html URL.

describe('Performance Demo Test', function () {
    const { title } = this;

    it('Asserts for regressions on the inventory page', () => {
        // log into the app
        browser.url('https://www.saucedemo.com');
        const username = $('[data-test="username"]');
        username.setValue('standard_user');
        const password = $('[data-test="password"]');
        password.setValue('secret_sauce');
        const loginButton = $('.login-button');
        loginButton.click();
        browser.url('/inventory.html');
        // assert performance of pageLoad has not regressed.
        const performanceOutput = browser.execute('sauce:performance', {
            name: title,
            metrics: ['load']
        });
        assert.equal(performanceOutput.result, 'pass', performanceOutput.reason);
    });
});

Sample Response

The following response is returned when the Page Load metric is above the expected baseline.

"reason": "Metrics [load] are significantly out of the baseline",
"result": "fail",
"details": {
"load": {
"actual": 12178,
"lowerLimit": 813.6839391929948,
"upperLimit": 2910.759098781689
}
}
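
A script can branch on this structure to decide what to do next. For example, here is a brief Python sketch (the test name is a placeholder):

result = driver.execute_script("sauce:performance", {
    "name": "inventory page performance",  # placeholder name
    "metrics": ["load"]
})

if result["result"] == "fail":
    # "details" lists each out-of-baseline metric with its actual value and
    # the baseline limits, which helps you decide whether to troubleshoot
    # the regression or accept a new baseline.
    for metric, detail in result["details"].items():
        print(metric, detail["actual"], detail["lowerLimit"], detail["upperLimit"])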

Logging Performance Results

You can also send your performance results to the log that is viewable from the Sauce Labs test result page.


To enable this, call the sauce:log command with the type sauce:performance. Set the fullReport option to true in the configuration to capture extended details about the performance configuration in addition to the metrics output.

See the complete Python performance demo.
def test_speed_index(self, driver):
    self.setUpClass(driver)
    metrics = ["load", "speedIndex", "pageWeight", "pageWeightEncoded", "timeToFirstByte",
               "timeToFirstInteractive", "firstContentfulPaint", "perceptualSpeedIndex",
               "domContentLoaded"]
    performance = driver.execute_script("sauce:log", {"type": "sauce:performance",
                                                      "options": {"fullReport": True}})
    for metric in metrics:
        assert performance[metric] < 1000

In addition to reviewing the log in Sauce Labs, you can retrieve the log locally by calling:

driver.execute('sauce:log', {type: 'sauce:performance'});
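
The Python equivalent, as used in the sample above, is:

performance = driver.execute_script("sauce:log", {"type": "sauce:performance"})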

note

See Using Extended Debugging for additional network and logging options enabled with extended debugging.