
The execution time of tests (durationMs) varies across repeated runs; sometimes, at random, it goes beyond 2 minutes #2206

Closed
czdiaz opened this issue Mar 8, 2018 · 4 comments
Labels
STATE: Need clarification An issue lacks information for further research.

Comments


czdiaz commented Mar 8, 2018

Are you requesting a feature or reporting a bug?

Bug

What is the current behavior?

I am developing a prototype in which a remote computer executes tests against a web page at fixed times and reports the resulting execution times to another server (via the JSON reporter).
The resulting times are saved in a database on that server. The remote computer runs Ubuntu 16.04 and its sole task is to execute the tests; there are no other jobs running on it.
I disabled automatic upgrades and any other tasks that might alter the execution times. The problem is that the resulting times vary widely, and sometimes an execution takes more than two minutes.

To rule out problems in the web page itself, I tested with the basic example tests shipped with the TestCafe distribution. I only changed the test names, because I use the test name to associate the duration with the test in the database, and I inlined the page-model code in the same file so that the remote executes the tests as a single module (the test code is included below).

The idea is that the server containing the database raises an alert when the execution time exceeds the expected value. As it is now, however, the times do not reflect the response times of the page; it seems that for some reason the test execution gets stuck, and the resulting times do not reflect the real behaviour of the page.

I attach a graph with the results of several hours of testing. The times are given in ms. Notice that at random times the duration goes beyond 120000 ms (2 minutes).
The variable names in the graph correspond to the test names (dialog = Handle native confirmation dialog).

I timed the process running on the remote, and the total time matches the sum of the reported durationMs values, so it is not a problem with the reported value: the test process itself gets stuck sometimes.
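
For reference, the check on the server side is essentially the following (a minimal sketch, not my actual server code; it assumes the JSON reporter output has a fixtures[].tests[] structure with name and durationMs fields, and the 10-second threshold is only an example):

const fs = require('fs');

// Example threshold; the real alert limit lives in the database.
const THRESHOLD_MS = 10000;

// Assumed report shape: { fixtures: [ { name, tests: [ { name, durationMs, ... } ] } ] }
const report = JSON.parse(fs.readFileSync('report.json', 'utf8'));

for (const fixture of report.fixtures) {
    for (const test of fixture.tests) {
        if (test.durationMs > THRESHOLD_MS)
            console.warn(`Slow test: ${test.name} took ${test.durationMs} ms`);
    }
}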

What is the expected behavior?

I would expect that across repeated executions, durationMs would vary by at most a couple of seconds, reflecting the response time of the page.

How would you reproduce the current behavior (if this is a bug)?

See the attached test code below.

Provide the test code and the tested page URL (if applicable)

Tested page URL: https://devexpress.github.io/testcafe/example/

Test code

import { Selector } from 'testcafe';

const label = Selector('label');

class Feature {
    constructor (text) {
        this.label    = label.withText(text);
        this.checkbox = this.label.find('input[type=checkbox]');
    }
}

export default class Page {
    constructor () {
        this.nameInput             = Selector('#developer-name');
        this.triedTestCafeCheckbox = Selector('#tried-test-cafe');
        this.populateButton        = Selector('#populate');
        this.submitButton          = Selector('#submit-button');
        this.results               = Selector('.result-content');
        this.macOSRadioButton      = Selector('input[type=radio][value=MacOS]');
        this.commentsTextArea      = Selector('#comments');

        this.featureList = [
            new Feature('Support for testing on remote devices'),
            new Feature('Re-using existing JavaScript code for testing'),
            new Feature('Easy embedding into a Continuous integration system')
        ];

        this.slider = {
            handle: Selector('.ui-slider-handle'),
            tick:   Selector('.slider-value')
        };

        this.interfaceSelect       = Selector('#preferred-interface');
        this.interfaceSelectOption = this.interfaceSelect.find('option');
    }
}
// The remote must receive the tests as a single file, so the page model is inlined here instead of imported
// import Page from './page-model';

fixture `A set of examples that illustrate how to use TestCafe API`
    .page `https://devexpress.github.io/testcafe/example/`;

// Page model
const page = new Page();

// Tests
// Test names are shortened because the name is used as a field name in the database
// test('Text typing basics', async t => {
test('textbasics', async t => {
    await t
        .typeText(page.nameInput, 'Peter') // Type name
        .typeText(page.nameInput, 'Paker', { replace: true }) // Replace with last name
        .typeText(page.nameInput, 'r', { caretPos: 2 }) // Correct last name
        .expect(page.nameInput.value).eql('Parker'); // Check result
});


// test('Click an array of labels and then check their states', async t => {
test('clickarray', async t => {
    for (const feature of page.featureList) {
        await t
            .click(feature.label)
            .expect(feature.checkbox.checked).ok();
    }
});


// test('Dealing with text using keyboard', async t => {
test('textkey', async t => {
    await t
        .typeText(page.nameInput, 'Peter Parker') // Type name
        .click(page.nameInput, { caretPos: 5 }) // Move caret position
        .pressKey('backspace') // Erase a character
        .expect(page.nameInput.value).eql('Pete Parker') // Check result
        .pressKey('home right . delete delete delete') // Pick even shorter form for name
        .expect(page.nameInput.value).eql('P. Parker'); // Check result
});


// test('Moving the slider', async t => {
test('slider', async t => {
    const initialOffset = await page.slider.handle.offsetLeft;

    await t
        .click(page.triedTestCafeCheckbox)
        .dragToElement(page.slider.handle, page.slider.tick.withText('9'))
        .expect(page.slider.handle.offsetLeft).gt(initialOffset);
});


// test('Dealing with text using selection', async t => {
test('textsel', async t => {
    await t
        .typeText(page.nameInput, 'Test Cafe')
        .selectText(page.nameInput, 7, 1)
        .pressKey('delete')
        .expect(page.nameInput.value).eql('Tfe'); // Check result
});


// test('Handle native confirmation dialog', async t => {
test('dialog', async t => {
    await t
        .setNativeDialogHandler(() => true)
        .click(page.populateButton);

    const dialogHistory = await t.getNativeDialogHistory();

    await t.expect(dialogHistory[0].text).eql('Reset information before proceeding?');

    await t
        .click(page.submitButton)
        .expect(page.results.innerText).contains('Peter Parker');
});


// test('Pick option from select', async t => {
test('select', async t => {
    await t
        .click(page.interfaceSelect)
        .click(page.interfaceSelectOption.withText('Both'))
        .expect(page.interfaceSelect.value).eql('Both');
});


// test('Filling a form', async t => {
test('form', async t => {
    // Fill some basic fields
    await t
        .typeText(page.nameInput, 'Bruce Wayne')
        .click(page.macOSRadioButton)
        .click(page.triedTestCafeCheckbox);

    // Let's leave a comment...
    await t
        .typeText(page.commentsTextArea, "It's...")
        .wait(500)
        .typeText(page.commentsTextArea, '\ngood');

    // I guess, I've changed my mind
    await t
        .wait(500)
        .selectTextAreaContent(page.commentsTextArea, 1, 0)
        .pressKey('delete')
        .typeText(page.commentsTextArea, 'awesome!!!');

    // Let's submit our form
    await t
        .wait(500)
        .click(page.submitButton)
        .expect(page.results.innerText).contains('Bruce Wayne');
});
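
For completeness, the remote launches the tests roughly like this (a sketch using TestCafe's programmatic API; the browser, file name, and report path are placeholders, not my exact setup):

const createTestCafe = require('testcafe');
const fs             = require('fs');

createTestCafe('localhost', 1337, 1338)
    .then(testcafe => {
        const runner = testcafe.createRunner();

        return runner
            .src('tests.js')                                        // the file above
            .browsers('chrome')                                     // placeholder browser
            .reporter('json', fs.createWriteStream('report.json'))  // JSON reporter output
            .run()
            .then(() => testcafe.close());
    });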

Specify your

  • operating system: Ubuntu 16.04
  • testcafe version: 0.19.0
  • node.js version: 9.6.1
Attached graphs: compare1, compare2 (durationMs over several hours of test runs)

czdiaz commented Mar 8, 2018

I also attach the result of one execution from the JSON reporter:

example_result.log

AndreyBelym (Contributor) commented:

Hello @czdiaz! Thank you for the research you have done. However, I need some time to run similar tests on my side and check this in a more controllable environment.

Also, when the web server and the testing machine are different computers and they are not connected within a dedicated LAN segment, the network can significantly affect test timings. I would suggest measuring the test execution time with both the web server and TestCafe on the same machine, and perhaps also trying a simpler scenario where only a basic HTTP request to the web page is measured.
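As a rough sketch, something like this Node.js snippet, run on the testing machine, would measure just the raw HTTP request time with no browser or TestCafe involved:

const https = require('https');

const start = Date.now();

https.get('https://devexpress.github.io/testcafe/example/', res => {
    res.resume(); // drain the body, we only care about timing
    res.on('end', () => console.log(`HTTP ${res.statusCode}: ${Date.now() - start} ms`));
}).on('error', err => console.error(err));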

Also, I'm implementing video capture of tests running in Chrome & Firefox (#2151); it should be a very insightful feature for your case.


czdiaz commented Mar 12, 2018

Hello Andrey! Thank you for your answer.
I attach a graph showing the response times for just loading a web page; that is why I started looking into this problem. In the graph, the normal response time is 4 seconds (times in ms), but sometimes it jumps to 20 seconds, and then to 2 minutes!
I would expect network delays and web server load to produce changes in the response time of a couple of seconds, not more.
I also attach a second graph, where I measured the response time of a web page using the Selenium driver instead of TestCafe. You can see that the time starts to increase when the web server load rises during office hours, but the difference stays below 2 seconds (times in seconds).
Attached graphs: view_page (TestCafe page-load times), selenium_test (Selenium page-load times)

@miherlosev added the STATE: Need clarification (an issue lacks information for further research) label on May 11, 2018
miherlosev (Collaborator) commented:

Hi @czdiaz

There are many reasons why test execution times can vary in a complex system like yours:
for example, database and web server performance, network delays, browser updates, etc.

I will close the issue.
Try the latest stable TestCafe version (0.19.2).
If the problem persists, feel free to reopen the issue and provide us with a simple example showing it in action.
