Hanoh Haim edited this page May 26, 2019 · 20 revisions

TRex wiki

Release Notes

How to build TRex

$cd linux_dpdk
$./b configure  (only once)
$./b build

The build output will be in the "scripts" folder

More options:

$./b configure --sanitized
$./b configure --gcc6
$./b configure --no-mlx
$./b configure --with-ntacc
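A typical from-scratch build might look like the following sketch (the repository URL is the public TRex repo; the `--no-mlx` flag is just one example choice — pick the configure options that match your environment):

```shell
# Clone the TRex repository (adjust the URL to your fork if needed)
git clone https://github.com/cisco-system-traffic-generator/trex-core.git
cd trex-core/linux_dpdk

# Configure once (here: without Mellanox support), then build
./b configure --no-mlx
./b build

# The resulting binaries land in the top-level "scripts" folder
ls ../scripts
```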

How to debug with gdb TRex

from "scripts" folder

$./t-rex-64-debug-gdb [args]

This script sets the path to the debug shared objects (.so files) and runs gdb
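For example, from the "scripts" folder, any arguments you pass are forwarded to the TRex binary under gdb (the argument values below are illustrative, not required):

```shell
cd scripts
# Run the debug binary under gdb; everything after the script name is
# passed through as TRex arguments (example: a stateful profile, 1 core,
# 10 seconds duration)
./t-rex-64-debug-gdb -f cap2/dns.yaml -c 1 -d 10
```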

How to build TRex Simulator

$cd linux
$./b configure  (only once)
$./b build

How to build doc

$cd doc
$./b configure  (only once)
$./b build

The build output will be in the "scripts" folder

Run simulation unit-test

$cd scripts
$./bp-sim-64 --ut
$./bp-sim-64-debug --ut
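Beyond the unit tests, the simulator can replay a traffic profile offline and write the generated packets to a pcap file instead of a NIC, so no hardware or root privileges are needed (flags and profile path as described in the TRex manual; the output file name is arbitrary):

```shell
cd scripts
# Simulate a stateful profile and capture the generated traffic to a pcap
./bp-sim-64 -f cap2/dns.yaml -o dns_sim.pcap
```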

Run simulation functional

$cd scripts
$ ./run_regression --func

Documentation

Presentations

Video of DPDK 2015 summit: https://www.youtube.com/watch?v=U0gRalB7DOs

Manual

Download

TRex on your laptop (using VirtualBox OVA)

TRex Sandbox

Python API

Running regression

If you have made changes in the code and/or want to check a new NIC, you can run our regression:

  1. Create TRex config file: sudo ./dpdk_setup_ports.py -i

  2. Run TRex daemon: sudo ./trex_daemon_server start

  3. Make a copy of directory with setup parameters: automation/regression/setups/trex07

  4. Update yaml files in that directory if needed

  5. Run full regression:
    ./run_regression --cfg ./automation/regression/setups/<new dir>

Note
  • Running specific test:
    ./run_regression --cfg ./automation/regression/setups/<new dir> -t <part of test name>

  • Running only stateless tests can be done without waiting for TRex server to be brought up each time:

    • Run in one shell the interactive server: sudo ./t-rex-64 -i

    • Run in another shell the regression with flags: --stl --no-daemon

  • Instead of specifying the setup directory each time, you can set it once via the environment variable SETUP_DIR, for example:
    export SETUP_DIR=trex07
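The numbered steps above can be condensed into one shell sketch (the setup name my_setup is a placeholder; the copied trex07 directory is the example from step 3):

```shell
cd scripts

# 1. Create the TRex config file interactively
sudo ./dpdk_setup_ports.py -i

# 2. Start the TRex daemon
sudo ./trex_daemon_server start

# 3+4. Copy an existing setup directory and edit its yaml files as needed
cp -r automation/regression/setups/trex07 automation/regression/setups/my_setup
$EDITOR automation/regression/setups/my_setup/*.yaml

# 5. Run the full regression against the new setup
./run_regression --cfg ./automation/regression/setups/my_setup
```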

How to contribute

  • For a small fix, just create a PR and make sure it solves the problem you are facing.

  • For a big feature, do the following:

    • Open an issue describing the feature: why it is needed, the high-level design, etc.

    • Try to deliver the feature in small pieces. If it is a full-stack feature (Python/CP/DP), you can commit the new feature in small chunks, untested and not working, as long as it does not break the current code base and regression (see below)

    • Using small chunks, you will get early code review and design review. If you try to commit everything in one big commit, you might get rejected at the end

    • Each commit needs to pass the following criteria in this order:

      • Pass current functional tests (see above)

      • Code review (without passing functional the code will not be inspected)

      • Regression on at least one physical bare-metal setup with an XL710 NIC (see the trex-08/trex-09 setups below)

      • Add documentation in asciidoc format (see $root/doc/..)

      • Add Python API documentation inside the code

      • Add new gtests (Google tests) in the code

      • Add new regression tests that test your specific features — test it on your XL710 setup

Regression/setups matrix (partial)

Our regression runs on multiple setups (see below). We planned to provide the ability to run any GitHub remote branch against our setups, but we stopped this activity due to lack of time and lack of contributors.

The regression

  • System tests on real setups — python/nose

  • Performance tests

  • Save the results to ES/Grafana/Kibana

  • Runs 24/7 in a loop (about 1 hour for all tests on all setups)

Setups

see Setups

Travis CI

  1. Each PR will be tested against all the setups

  2. A full regression on all the setups takes about 1.5 hours; if an internal run is in progress it can take longer (3 hours in the worst case)

  3. report