
Added ghdl/vunit:* images for CI environments #11

Closed
eine opened this issue Aug 21, 2019 · 18 comments

Comments

@eine
Collaborator

eine commented Aug 21, 2019

Coming from ghdl/ghdl#883

I've been reworking the images in ghdl/docker (see the README and the repository list at dockerhub). Here are some notes which are still to be properly documented, and which are related to the conversation in ghdl/ghdl#883:

  • All the ghdl/run:*gcc images now include lcov. As a result, all ghdl/ghdl:*gcc* images should be ready to use for coverage analysis.
  • A new repository has been added, ghdl/vunit:*, which includes six images: mcode, mcode-master, llvm, llvm-master, gcc and gcc-master. These are built on top of ghdl/ghdl:buster-mcode, ghdl/ghdl:buster-llvm-7 and ghdl/ghdl:buster-gcc-8.3.0, respectively. *-master variants include latest VUnit (master branch), while others include the latest stable release (installed through pip).
  • In case GtkWave is needed, it is available in images ghdl/ext:gtkwave, ghdl/ext:broadway or ghdl/ext:latest. All of them include VUnit too. The first two of them are based on ghdl/vunit:llvm-master, and the last one is based on ghdl/ext:ls-debian (which includes GHDL with LLVM backend too).

@sjaeckel, these changes should allow you to rewrite your Dockerfile as:

FROM ghdl/vunit:gcc

RUN apt-get update -qq \
 && apt-get -y install git vim \
 && apt-get autoclean -y && apt-get clean -y && apt-get autoremove -y

RUN mkdir /work && chmod 777 /work

Which makes me wonder: do you really need git and vim inside the container? You might have a good reason to do so. I'm just asking so I can help you rethink your workflow to get rid of those dependencies, should you want to do so.

[@sjaeckel]
and btw. I'd happily also test it!

I'd really appreciate if you could test this, since I have never used the coverage feature. There are five ghdl/ghdl:*gcc* tags and six tags in ghdl/vunit. I'm not going to ask you to test all of them! Should you want to try a few, I think the priority for your use case is:

  • ghdl/vunit:gcc
  • ghdl/vunit:gcc-master
  • ghdl/ghdl:buster-gcc-8.3.0 (if either of the previous two works, this will work for sure)
  • ghdl/ghdl:sid-gcc-9.1.0 (might still fail, that's why we didn't close this issue yet)
  • ...

Overall, please do not hesitate to request changes/features, such as including lcov in images with GCC (this was, honestly, so stupid of me) or providing images with VUnit and coverage support (i.e. GCC backend).

@sjaeckel

sjaeckel commented Jan 7, 2020

Would you mind providing an image with lcov, gcov and VUnit? Then I can try it out these days

@sjaeckel

sjaeckel commented Jan 7, 2020

Sorry btw. for the long silence!

@eine
Collaborator Author

eine commented Jan 7, 2020

Would you mind providing an image with lcov, gcov and VUnit? Then I can try it out these days

The images mentioned above do already have all those tools. ghdl/vunit:gcc (latest stable VUnit) or ghdl/vunit:gcc-master (latest VUnit master) are the ones you want to try.

Sorry btw. for the long silence!

It's absolutely ok!

@sjaeckel

Thanks a lot! I've rebuilt it as described and it works, except that gcovr is missing.

...
All passed!
Traceback (most recent call last):
  File "PeIMC6/run.py", line 27, in <module>
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])
  File "/usr/lib/python3.7/subprocess.py", line 323, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/usr/lib/python3.7/subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.7/subprocess.py", line 1522, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gcovr': 'gcovr'

Do you want to install it in the base image, or should I install it in my custom image?

I think it'd make sense to have it installed by default, as it's the coverage tool that is "supported" by GitLab more or less out of the box.

@sjaeckel

FYI, the information that gcovr was installed got lost when I created the minimal example... in the original GHDL issue.

Thanks again for your work btw!

@eine
Collaborator Author

eine commented Jan 18, 2020

Yes, I think it's a fair request. Images ghdl/vunit:gcc and ghdl/vunit:gcc-master do include gcovr now. I didn't include it in the ghdl/ghdl:*gcc* images, since those don't include Python. You should be able to test it by just pulling the image again.

Thank you for testing! Do you have any repo (here or at GitLab) where you are actually using these images in a CI workflow/pipeline?

@sjaeckel

sjaeckel commented Jan 20, 2020

by just pulling the image again.

Yep, that worked and I rebuilt my image, but I'm not sure what is wrong now... it seems like gcov never finishes.

It stays stuck at

$ ps ax | grep gcov
32630 pts/0    R+    16:56 gcov /work/sync_pkg-body.gcda --branch-counts --branch-probabilities --preserve-paths --object-directory /work

and after I killed the process I get

Traceback (most recent call last):
  File "/usr/local/bin/gcovr", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/gcovr/__main__.py", line 248, in main
    collect_coverage_from_gcov(covdata, options, logger)
  File "/usr/local/lib/python3.7/dist-packages/gcovr/__main__.py", line 301, in collect_coverage_from_gcov
    contexts = pool.wait()
  File "/usr/local/lib/python3.7/dist-packages/gcovr/workers.py", line 146, in wait
    w.join(timeout=1)
  File "/usr/lib/python3.7/threading.py", line 1036, in join
    self._wait_for_tstate_lock(timeout=max(timeout, 0))
  File "/usr/lib/python3.7/threading.py", line 1048, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt
Traceback (most recent call last):
  File "PeIMC6/run.py", line 27, in <module>
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])
  File "/usr/lib/python3.7/subprocess.py", line 325, in call
    return p.wait(timeout=timeout)
  File "/usr/lib/python3.7/subprocess.py", line 990, in wait
    return self._wait(timeout=timeout)
  File "/usr/lib/python3.7/subprocess.py", line 1624, in _wait
    (pid, sts) = self._try_wait(0)
  File "/usr/lib/python3.7/subprocess.py", line 1582, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt

The relevant parts of run.py are as follows:

# Imports implied by the snippet; the VUnit object `ui` is created earlier in run.py
import os
import subprocess
from os.path import join, dirname

# Instrument compilation for coverage and link against libgcov
ui.set_compile_option("ghdl.flags", ["-g", "-O2", "-Wc,-fprofile-arcs", "-Wc,-ftest-coverage"])
ui.set_sim_option("ghdl.elab_flags", ["-Wl,-lgcov", "-Wl,--coverage"])

# Run vunit function; main() exits via SystemExit, so capture its exit code
try:
    ui.main()
except SystemExit as exc:
    retval = exc.code

# Only collect coverage when all tests passed
if retval == 0:
    srcs = join(dirname(__file__), "src/rtl/*").lstrip("/")
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])

exit(retval)

Do you have any repo (here or at GitLab) where you are actually using these images in a CI workflow/pipeline?

sorry, only in a private GitLab instance.

@eine
Collaborator Author

eine commented Jan 20, 2020

Are you using exactly the same repo/commit as before? Did it work before? I ask because all I did in this regard was add a Python package through pip; it should be unrelated to gcov/lcov.

You might try using ghdl/ghdl:buster-gcc* as a base instead. This does not include Python or VUnit, but includes GHDL, gcov and lcov.

Nonetheless, please provide a MWE. It is very hard to guess what's going on otherwise. This might be an issue/bug in VUnit.

About the run.py, you should not need to use an exception. There is support for passing a post_run function to main: VUnit/vunit#578.
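
For reference, here is a minimal sketch of how the post_run approach could look in a run.py like yours, assuming main() accepts a post_run callback that receives a single results argument (as discussed in VUnit/vunit#578); the src/rtl path is just the one from your snippet, so adapt as needed.

import os
import subprocess
from os.path import join, dirname

from vunit import VUnit

ui = VUnit.from_argv()
# ... add libraries, source files and the coverage flags here ...

def post_run(results):
    # Called by VUnit after the tests have run; collect the gcov data with gcovr
    srcs = join(dirname(__file__), "src/rtl/*").lstrip("/")
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])

# main() takes care of the exit code itself
ui.main(post_run=post_run)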

@sjaeckel

sjaeckel commented Jan 20, 2020

Are you using exactly the same repo/commit as before? Did it work before?

Of ghdl/vunit, or my own repo? If you meant ghdl/vunit, I used ghdl/vunit:gcc as the base.

Now I re-tried based on the latest ghdl/ghdl:buster-gcc-7.4.0 + python3 etc., and that one works as expected! I also have the impression that the tests based on ghdl/vunit:gcc took a lot longer to compile & execute than when running them based on ghdl/ghdl:buster-gcc-7.4.0 ...

Nonetheless, please provide a MWE

I'll see if I find the time to create one

About the run.py, you should not need to use an exception. There is support for passing a post_run function to main: VUnit/vunit#578.

I'll change that as soon as it's working again ;-)

@eine
Collaborator Author

eine commented Jan 20, 2020

Are you using exactly the same repo/commit as before? Did it work before?

Of ghdl/vunit, or my own repo? If you meant ghdl/vunit, I used ghdl/vunit:gcc as the base.

I meant of your own repo. Anyway, if some of the images that we generate here do work and others don't, the issue is clearly unrelated to your repo. Let's focus on hunting the differences on our side.

Now I re-tried based on the latest ghdl/ghdl:buster-gcc-7.4.0 + python3 etc., and that one works as expected! I also have the impression that the tests based on ghdl/vunit:gcc took a lot longer to compile & execute than when running them based on ghdl/ghdl:buster-gcc-7.4.0 ...

This is interesting. That image is 5 months old. The latest is ghdl/ghdl:buster-gcc-8.3.0. I'd be grateful if you could do the same test with this one. Performance-wise, there is no reason for ghdl/vunit:gcc to be slower than ghdl/ghdl:buster-gcc-8.3.0 + Python added by you. However, either GCC, GHDL or gcov may have become slower in the last five months.

This is just for you to understand the context: the ghdl/vunit images are built directly on top of the corresponding ghdl/ghdl image. Hence, we have:

ghdl/ghdl:buster-gcc-8.3.0 -> ghdl/vunit:gcc
ghdl/ghdl:buster-gcc-8.3.0 -> ghdl/vunit:gcc-master

I'll see if I find the time to create one

As long as you can consistently test which images work and which do not, it's OK if you just run the tests.

@sjaeckel

I tried now with ghdl/ghdl:buster-gcc-8.3.0 and ghdl/vunit:gcc-master as bases, and both show the same behavior: gcov hangs ...

@eine
Collaborator Author

eine commented Jan 20, 2020

It might be because of the GCC version. Unfortunately, I cannot reproduce it. I modified the coverage example from VUnit according to your comment above: https://github.com/eine/vunit/blob/test-coverage/examples/vhdl/coverage/run.py. I can successfully run it on ghdl/vunit:gcc. A bunch of *.gcda and *.gcno files are created, and the following output is produced:

All passed!
------------------------------------------------------------------------------
                           GCC Code Coverage Report
Directory: /src/examples/vhdl/coverage
------------------------------------------------------------------------------
File                                       Lines    Exec  Cover   Missing
------------------------------------------------------------------------------
tb_coverage.vhd                                7       7   100%   
------------------------------------------------------------------------------
TOTAL                                          7       7   100%
------------------------------------------------------------------------------

May I know whether you set the sim option enable_coverage to True in your run.py file? Without it, only *.gcda files are created, not *.gcno.
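
For context, a minimal sketch of what setting that option could look like on a library object (the library name and source path are hypothetical):

lib = ui.add_library("lib")
lib.add_source_files("src/rtl/*.vhd")  # hypothetical path
lib.set_sim_option("enable_coverage", True)  # the sim option referred to above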

@sjaeckel

I've stripped it down to a MWE

https://gist.github.com/sjaeckel/e6ace0cb5df3eac87b95cf9f587b1bf0

@sjaeckel

FYI it's not limited to gcovr, lcov hangs as well

@eine
Collaborator Author

eine commented Jan 21, 2020

I added the MWE, with a few modifications to run.py and Makefile, as a commit: eine/vunit@b2ba5f3. As you can see, test.sh executes the example in ghdl/vunit:gcc and ghdl/vunit:gcc-master. In both cases, execution seems successful and coverage results are reported by gcovr. See the log of a CI run: https://github.com/eine/vunit/runs/399915453#step:3:412

@sjaeckel

I've rebuilt the docker image with the updated base from ghdl/vunit:gcc and the MWE now succeeds!

After running my real code gcov still hangs ... let's see if I can find the time to create another MWE ... :-\

@eine
Collaborator Author

eine commented Jan 23, 2020

I've rebuilt the docker image with the updated base from ghdl/vunit:gcc and the MWE now succeeds!

Good! Unfortunately, we don't have a completely automated sequence to update all the images in order. Hence, when we push some breaking change, a couple of days might be needed for everything to settle. Nevertheless, it works! I'm closing this issue, because its purpose is fulfilled (we have images with GCC, VUnit and/or coverage available now). But let's continue hunting your issue!

After running my real code gcov still hangs ... let's see if I can find the time to create another MWE ... :-\

A probably meaningful difference between your gist/example and the MWE is that in the MWE the coverage-related sim/compile options are added to a LIBRARY object (lib), NOT to the VUnit object (vu, ui). Did you spot that?
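
To make the difference concrete, here is a sketch of the two variants, reusing the flags from the snippet earlier in this thread (whether this matters for the hang is exactly the open question):

# Options added to the whole project, as in the gist
ui.set_compile_option("ghdl.flags", ["-g", "-O2", "-Wc,-fprofile-arcs", "-Wc,-ftest-coverage"])
ui.set_sim_option("ghdl.elab_flags", ["-Wl,-lgcov", "-Wl,--coverage"])

# Options added to a single library only, as in the MWE
lib = ui.add_library("lib")  # hypothetical library
lib.set_compile_option("ghdl.flags", ["-g", "-O2", "-Wc,-fprofile-arcs", "-Wc,-ftest-coverage"])
lib.set_sim_option("ghdl.elab_flags", ["-Wl,-lgcov", "-Wl,--coverage"])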

@eine eine closed this as completed Jan 23, 2020
@sjaeckel

A probably meaningful difference between your gist/example and the MWE is that in the MWE the coverage-related sim/compile options are added to a LIBRARY object (lib), NOT to the VUnit object (vu, ui). Did you spot that?

Yep, I've seen that and I've tried both but the result is the same.

I'll open a fresh issue as soon as I've had the time to create a MWE
