
8.16 or Commits Fail to Compile on Linux #92

Open
joesixpack opened this issue Dec 17, 2017 · 40 comments

@joesixpack

8.15 is fine. What a pity as I wanted the neoscrypt fix.

[screenshot of the compiler error output]

@joesixpack changed the title from "8.16 Fails to Compile on Linux" to "8.16 or Commits Fail to Compile on Linux" Dec 17, 2017
@natewalck

It compiled on CentOS 7 with Cuda 9.1 for me. Instead of a picture, would it be possible to capture the entire output and put it into a gist?

@joesixpack
Author

joesixpack commented Dec 17, 2017

Is this what you wanted?

https://gist.github.com/joesixpack/83be52a48f464b02168a15d79c53ca98

In the meantime, I was able to put the neoscrypt hack into 8.15. No idea yet whether it works or not.

@natewalck

Which cuda version, driver version and distro?

@ksze

ksze commented Dec 18, 2017

I also got "multiple definition of `ROTL64(unsigned long, unsigned char)'"

Ubuntu 16.04.3 64-bit Server Edition with all updates.

NVIDIA graphics driver and CUDA 9.1.85-1 from NVIDIA's deb repo (deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64 /):

$ dpkg -l '*nvidia*' | grep '^ii'
ii  nvidia-387                       387.26-0ubuntu1 amd64        NVIDIA binary driver - version 387.26
ii  nvidia-387-dev                   387.26-0ubuntu1 amd64        NVIDIA binary Xorg driver development files
ii  nvidia-modprobe                  387.26-0ubuntu1 amd64        Load the NVIDIA kernel driver and create device files
ii  nvidia-opencl-icd-387            387.26-0ubuntu1 amd64        NVIDIA OpenCL ICD
ii  nvidia-prime                     0.8.2           amd64        Tools to enable NVIDIA's Prime
ii  nvidia-settings                  387.26-0ubuntu1 amd64        Tool for configuring the NVIDIA graphics driver
$ dpkg -l cuda*9*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                                         Version                     Architecture                Description
+++-============================================-===========================-===========================-==============================================================================================
ii  cuda-9-1                                     9.1.85-1                    amd64                       CUDA 9.1 meta-package
ii  cuda-command-line-tools-9-1                  9.1.85-1                    amd64                       CUDA command-line tools
ii  cuda-compiler-9-1                            9.1.85-1                    amd64                       CUDA compiler
ii  cuda-cublas-9-1                              9.1.85-1                    amd64                       CUBLAS native runtime libraries
ii  cuda-cublas-dev-9-1                          9.1.85-1                    amd64                       CUBLAS native dev links, headers
ii  cuda-cudart-9-1                              9.1.85-1                    amd64                       CUDA Runtime native Libraries
ii  cuda-cudart-dev-9-1                          9.1.85-1                    amd64                       CUDA Runtime native dev links, headers
ii  cuda-cufft-9-1                               9.1.85-1                    amd64                       CUFFT native runtime libraries
ii  cuda-cufft-dev-9-1                           9.1.85-1                    amd64                       CUFFT native dev links, headers
ii  cuda-cuobjdump-9-1                           9.1.85-1                    amd64                       CUDA cuobjdump
ii  cuda-cupti-9-1                               9.1.85-1                    amd64                       CUDA profiling tools interface.
ii  cuda-curand-9-1                              9.1.85-1                    amd64                       CURAND native runtime libraries
ii  cuda-curand-dev-9-1                          9.1.85-1                    amd64                       CURAND native dev links, headers
ii  cuda-cusolver-9-1                            9.1.85-1                    amd64                       CUDA solver native runtime libraries
ii  cuda-cusolver-dev-9-1                        9.1.85-1                    amd64                       CUDA solver native dev links, headers
ii  cuda-cusparse-9-1                            9.1.85-1                    amd64                       CUSPARSE native runtime libraries
ii  cuda-cusparse-dev-9-1                        9.1.85-1                    amd64                       CUSPARSE native dev links, headers
ii  cuda-demo-suite-9-1                          9.1.85-1                    amd64                       Demo suite for CUDA
ii  cuda-documentation-9-1                       9.1.85-1                    amd64                       CUDA documentation
ii  cuda-driver-dev-9-1                          9.1.85-1                    amd64                       CUDA Driver native dev stub library
ii  cuda-gdb-9-1                                 9.1.85-1                    amd64                       CUDA-GDB
ii  cuda-gpu-library-advisor-9-1                 9.1.85-1                    amd64                       CUDA GPU Library Advisor.
ii  cuda-libraries-9-1                           9.1.85-1                    amd64                       CUDA Libraries 9.1 meta-package
ii  cuda-libraries-dev-9-1                       9.1.85-1                    amd64                       CUDA Libraries 9.1 development meta-package
ii  cuda-license-9-1                             9.1.85-1                    amd64                       CUDA licenses
ii  cuda-memcheck-9-1                            9.1.85-1                    amd64                       CUDA-MEMCHECK
ii  cuda-misc-headers-9-1                        9.1.85-1                    amd64                       CUDA miscellaneous headers
ii  cuda-npp-9-1                                 9.1.85-1                    amd64                       NPP native runtime libraries
ii  cuda-npp-dev-9-1                             9.1.85-1                    amd64                       NPP native dev links, headers
ii  cuda-nsight-9-1                              9.1.85-1                    amd64                       CUDA nsight
ii  cuda-nvcc-9-1                                9.1.85-1                    amd64                       CUDA nvcc
ii  cuda-nvdisasm-9-1                            9.1.85-1                    amd64                       CUDA disassembler
ii  cuda-nvgraph-9-1                             9.1.85-1                    amd64                       NVGRAPH native runtime libraries
ii  cuda-nvgraph-dev-9-1                         9.1.85-1                    amd64                       NVGRAPH native dev links, headers
ii  cuda-nvml-dev-9-1                            9.1.85-1                    amd64                       NVML native dev links, headers
ii  cuda-nvprof-9-1                              9.1.85-1                    amd64                       CUDA Profiler tools
ii  cuda-nvprune-9-1                             9.1.85-1                    amd64                       CUDA nvprune
ii  cuda-nvrtc-9-1                               9.1.85-1                    amd64                       NVRTC native runtime libraries
ii  cuda-nvrtc-dev-9-1                           9.1.85-1                    amd64                       NVRTC native dev links, headers
ii  cuda-nvtx-9-1                                9.1.85-1                    amd64                       NVIDIA Tools Extension
ii  cuda-nvvp-9-1                                9.1.85-1                    amd64                       CUDA nvvp
ii  cuda-runtime-9-1                             9.1.85-1                    amd64                       CUDA Runtime 9.1 meta-package
ii  cuda-samples-9-1                             9.1.85-1                    amd64                       CUDA example applications
ii  cuda-toolkit-9-1                             9.1.85-1                    amd64                       CUDA Toolkit 9.1 meta-package
ii  cuda-tools-9-1                               9.1.85-1                    amd64                       CUDA Tools meta-package
ii  cuda-visual-tools-9-1                        9.1.85-1                    amd64                       CUDA visual tools

@KlausT
Owner

KlausT commented Dec 18, 2017

There are no multiple definitions of ROTL64.
I didn't touch the ROTL64 stuff since, like, forever.
It's a compiler issue, or you are mixing files from different versions.

@drpoom

drpoom commented Dec 18, 2017

I got similar errors to the above (collect2, and multiple ROTL64 definitions). Compiled on Ubuntu 16.04, 64-bit desktop. Neither CUDA 9.1 nor 9.0 succeeded; it's probably some other dependency.

@KlausT
Owner

KlausT commented Dec 18, 2017

Let me guess. You are all not using the build.sh script.

By the way, I fixed a small typo in configure.sh

@drpoom

drpoom commented Dec 18, 2017

You are dead right! I used ./autogen.sh, then ./configure.sh, then make. facepalm
After using ./build.sh the problem is solved.

@ksze

ksze commented Dec 19, 2017

I did use the build.sh script and I got that error.

@joesixpack
Author

joesixpack commented Dec 19, 2017

No problems compiling now. Guess it was that small typo.

@AlexShpak

AlexShpak commented Dec 19, 2017

Same here. I'm using build.sh and still getting multiple definition of `ROTL64(unsigned long, unsigned char)'
CentOS 7, CUDA 9.1 Driver 384.98

Update: Sorry, the windows branch compiled OK; all of the above concerns the cuda9 branch. And yeah, I know that I should install a newer driver :)

@joesixpack
Author

Spoke too soon. Compiles on Ubuntu 16.04 LTS but not 17.10.

@KlausT
Owner

KlausT commented Dec 19, 2017

Weird stuff is happening here...

[screenshots of the CUDA 8 and CUDA 9.1 system requirements tables]

@eruditej

eruditej commented Dec 25, 2017

I am testing on Ubuntu 16 (history has taught me to never use the latest).
Things I have learned:

  1. run autogen.sh
  2. install gcc-4.9 and g++-4.9
  3. CC=gcc-4.9 CPP=cpp-4.9 CXX=g++-4.9 ./configure.sh

The GCC 5.4 in current Ubuntu 16.04 doesn't work with CUDA 9.1. Might be a hard-coded version check in the headers (NVIDIA loves to do this, even on Windows).
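For reference, the kind of hard-coded check being alluded to is the host-compiler version gate in CUDA's host_config.h header. The lines below are only a rough, from-memory sketch of its general shape (CUDA 9.1 refuses host compilers newer than GCC 6), not an exact quote from any CUDA release:

// Rough paraphrase of the GCC version gate in CUDA's host_config.h;
// the exact wording and version bound vary between CUDA releases.
#if defined(__GNUC__) && (__GNUC__ > 6)
#error -- unsupported GNU version! gcc versions later than 6 are not supported!
#endif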

It builds; it doesn't work for me, but it builds. I am trying to add some unified memory support code for Linux so it is much faster than on Windows, where unified memory can't be used and the code spends all its time running memcpy.

@eruditej

./ccminer -i21 -a neoscrypt --url=stratum+tcp://neoscrypt.usa.nicehash.com:3341 -u 1PVw8bu17HWPhLoqQShu5cS5qaoTQRDwFA.titans -p x
ccminer 8.17-KlausT (64bit) for nVidia GPUs
Compiled with GCC 4.9 using Nvidia CUDA Toolkit 9.1

Based on pooler cpuminer 2.3.2 and the tpruvot@github fork
CUDA support by Christian Buchner, Christian H. and DJM34
Includes optimizations implemented by sp-hash, klaust, tpruvot and tsiv.

[2017-12-25 10:31:02] NVML GPU monitoring enabled.
[2017-12-25 10:31:02] Intensity set to 21, 2097152 cuda threads
[2017-12-25 10:31:02] 2 miner threads started, using 'neoscrypt' algorithm.
0
1
[2017-12-25 10:31:02] Starting Stratum on stratum+tcp://neoscrypt.usa.nicehash.com:3341
[2017-12-25 10:31:02] Stratum difficulty set to 1024
[2017-12-25 10:31:02] neoscrypt.usa.nicehash.com:3341 neoscrypt block 2560
Cuda error in func 'neoscrypt_cpu_init_2stream' at line 1432 : out of memory.
Cuda error in func 'scanhash_neoscrypt' at line 41 : driver shutting down.
Segmentation fault (core dumped)

So it will build on 16.04, but neither lyra2v2 nor neoscrypt works. tpruvot's 2.2.3-branch cuda9 does work currently (and has to be built the same way).

I am not familiar enough with the code to say what is going on, but this block is screaming for UMA support:

void neoscrypt_cpu_init_2stream(int thr_id, uint32_t threads)
{
        uint32_t *hash1;
        uint32_t *hash2; // 2 streams
        uint32_t *Trans1;
        uint32_t *Trans2; // 2 streams
        uint32_t *Trans3; // 2 streams
        uint32_t *Bhash;

        CUDA_SAFE_CALL(cudaStreamCreate(&stream[0]));
        CUDA_SAFE_CALL(cudaStreamCreate(&stream[1]));

        CUDA_SAFE_CALL(cudaMalloc(&d_NNonce[thr_id], 2 * sizeof(uint32_t)));
        CUDA_SAFE_CALL(cudaMalloc(&hash1, 32ULL * 128 * sizeof(uint64_t) * threads));
        CUDA_SAFE_CALL(cudaMalloc(&hash2, 32ULL * 128 * sizeof(uint64_t) * threads));
        CUDA_SAFE_CALL(cudaMalloc(&Trans1, 32ULL * sizeof(uint64_t) * threads));
        CUDA_SAFE_CALL(cudaMalloc(&Trans2, 32ULL * sizeof(uint64_t) * threads));
        CUDA_SAFE_CALL(cudaMalloc(&Trans3, 32ULL * sizeof(uint64_t) * threads));
        CUDA_SAFE_CALL(cudaMalloc(&Bhash, 128ULL * sizeof(uint32_t) * threads));

        CUDA_SAFE_CALL(cudaMemcpyToSymbolAsync(B2, &Bhash, sizeof(uint28*), 0, cudaMemcpyHostToDevice, stream[0]));
        CUDA_SAFE_CALL(cudaMemcpyToSymbolAsync(W, &hash1, sizeof(uint28*), 0, cudaMemcpyHostToDevice, stream[0]));
        CUDA_SAFE_CALL(cudaMemcpyToSymbolAsync(W2, &hash2, sizeof(uint28*), 0, cudaMemcpyHostToDevice, stream[0]));
        CUDA_SAFE_CALL(cudaMemcpyToSymbolAsync(Tr, &Trans1, sizeof(uint28*), 0, cudaMemcpyHostToDevice, stream[0]));
        CUDA_SAFE_CALL(cudaMemcpyToSymbolAsync(Tr2, &Trans2, sizeof(uint28*), 0, cudaMemcpyHostToDevice, stream[0]));
        CUDA_SAFE_CALL(cudaMemcpyToSymbolAsync(Input, &Trans3, sizeof(uint28*), 0, cudaMemcpyHostToDevice, stream[0]));
        if(opt_debug)
                CUDA_SAFE_CALL(cudaDeviceSynchronize());
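
To make the UMA idea concrete, here is a minimal sketch, not actual ccminer code, of what switching those buffers to managed (unified) memory could look like. cudaMallocManaged is the standard CUDA runtime call; everything else is illustrative:

// Illustrative sketch only: allocate the scratch buffers as managed memory
// instead of plain device memory, so the host could touch them without an
// explicit cudaMemcpy. Not the actual ccminer code.
uint32_t *hash1 = nullptr;
CUDA_SAFE_CALL(cudaMallocManaged(&hash1, 32ULL * 128 * sizeof(uint64_t) * threads));
// ...same pattern for hash2, Trans1..Trans3 and Bhash...
// The cudaMemcpyToSymbolAsync calls that publish the pointers would stay the
// same, since they copy only the pointer values, not the buffers themselves.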

Apparently GCC 5.4+ and CUDA have known warts: there are differences in compiler macros between Ubuntu 16.04's GCC 5 and Ubuntu 17.10's GCC 5.

ref: https://aur.archlinux.org/cgit/aur.git/commit/?h=ccminer-git&id=0dc0253c4cee236eca1289aafae8fdce7d81fac5

ref2: https://github.com/caffe2/caffe2/issues/1633


TL;DR: it's not KlausT (or tpruvot). If you want neoscrypt now, though, use tpruvot until the bug causing the OOM crash is fixed.

@eruditej

https://bugs.launchpad.net/ubuntu/+source/gcc-5/+bug/1725848

They went from 5.3 to 5.4, broke the build, wontfix. #justmaintainerthings

@KlausT
Owner

KlausT commented Dec 25, 2017

You are still using this weird source code?

[2017-12-25 10:31:02] 2 miner threads started, using 'neoscrypt' algorithm.
0
1
[2017-12-25 10:31:02] Starting Stratum on stratum+tcp://neoscrypt.usa.nicehash.com:3341

Please try the latest commit from the cuda9 branch.

Also, -i 21? That's too high for neoscrypt.
What cards do you have?

@eruditej

Dual Titan V :-)

I was just testing neoscrypt for the guy who created the issue.

To sum up:
CUDA 9.1 works on Ubuntu 16 only with gcc-4.9 currently (configuring with gcc-4.9 as the compiler)
CUDA 9.1 works on Ubuntu 17.04 with gcc-5
CUDA 9.1 works on Ubuntu 17.10, maybe... using gcc-6

I think this person's issue can be closed, although they will not like the answer. It isn't the code, it's the toolchain.

@eruditej

Updated my rig to Ubuntu 17.04 (which might be the current sweet spot for CUDA 9.1).

my configure.sh
extracflags="-march=native -D_REENTRANT -falign-functions=16 -falign-jumps=16 -falign-labels=16"

CUDA_CFLAGS='-lineno --shared --compiler-options "-Wall -fPIC"'
./configure CXXFLAGS="-O3 $extracflags" --with-cuda=/usr/local/cuda --with-nvml=libnvidia-ml.so

Ultimately, people are going to have to learn to use autoconf themselves rather than expecting a guide or wrapper that works for all the toolchain combos.

Result:

./ccminer -a neoscrypt --url=stratum+tcp://neoscrypt.usa.nicehash.com:3341 -u .titans -p x
ccminer 8.17-KlausT (64bit) for nVidia GPUs
Compiled with GCC 6.3 using Nvidia CUDA Toolkit 9.1

Based on pooler cpuminer 2.3.2 and the tpruvot@github fork
CUDA support by Christian Buchner, Christian H. and DJM34
Includes optimizations implemented by sp-hash, klaust, tpruvot and tsiv.

[2017-12-25 13:10:18] NVML GPU monitoring enabled.
0
1
[2017-12-25 13:10:18] Starting Stratum on stratum+tcp://neoscrypt.usa.nicehash.com:3341
[2017-12-25 13:10:18] 2 miner threads started, using 'neoscrypt' algorithm.
[2017-12-25 13:10:18] Stratum difficulty set to 256
[2017-12-25 13:10:18] neoscrypt.usa.nicehash.com:3341 neoscrypt block 2637
[2017-12-25 13:10:22] accepted: 1/1 (100.00%), 998.13 kH/s yay!!!
[2017-12-25 13:10:33] GPU #1: Graphics Device, 1398.44 kH/s
[2017-12-25 13:10:33] accepted: 2/2 (100.00%), 2596.48 kH/s yay!!!
[2017-12-25 13:10:33] GPU #1: Graphics Device, 1413.00 kH/s
[2017-12-25 13:10:34] accepted: 3/3 (100.00%), 2611.04 kH/s yay!!!
[2017-12-25 13:10:44] GPU #0: Graphics Device, 1440.68 kH/s
[2017-12-25 13:10:48] GPU #0: Graphics Device, 1463.33 kH/s
[2017-12-25 13:10:48] accepted: 4/4 (100.00%), 2865.01 kH/s yay!!!
[2017-12-25 13:10:49] neoscrypt.usa.nicehash.com:3341 neoscrypt block 2637
[2017-12-25 13:10:49] GPU #0: Graphics Device, 1477.40 kH/s
[2017-12-25 13:10:49] GPU #1: Graphics Device, 1426.44 kH/s
[2017-12-25 13:10:51] GPU #0: Graphics Device, 1418.96 kH/s
[2017-12-25 13:10:51] accepted: 5/5 (100.00%), 2869.81 kH/s yay!!!
[2017-12-25 13:10:52] GPU #0: Graphics Device, 1443.67 kH/s
[2017-12-25 13:10:52] accepted: 6/6 (100.00%), 2868.53 kH/s yay!!!

unfortunately:
./ccminer -a lyra2v2 --benchmark
ccminer 8.17-KlausT (64bit) for nVidia GPUs
Compiled with GCC 6.3 using Nvidia CUDA Toolkit 9.1

Based on pooler cpuminer 2.3.2 and the tpruvot@github fork
CUDA support by Christian Buchner, Christian H. and DJM34
Includes optimizations implemented by sp-hash, klaust, tpruvot and tsiv.

[2017-12-25 13:17:14] NVML GPU monitoring enabled.
0
[2017-12-25 13:17:14] 2 miner threads started, using 'lyra2v2' algorithm.
1
[2017-12-25 13:17:15] GPU #1: result does not validate on CPU!
[2017-12-25 13:17:15] GPU #1: result does not validate on CPU!
[2017-12-25 13:17:15] GPU #0: result does not validate on CPU!

@KlausT
Owner

KlausT commented Dec 25, 2017

I think that you have old and new files mixed together.

@eruditej

I'm using current cuda9 branch. Git status shows I am up to date except for my own modified configure.sh, and I've run make clean.

@KlausT
Owner

KlausT commented Dec 25, 2017

"0" and "1" doesn't appear in the current version.
[screenshot]
It's probably a leftover from an old version where I had added that for debugging purposes.

@eruditej

applog(LOG_WARNING, "GPU #%d: result does not validate on CPU!", thr_id);

cuda9 branch

@KlausT
Owner

KlausT commented Dec 25, 2017

Yeah, it should be device_map[thr_id]
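For clarity, the corrected call would presumably look something like this (a sketch, not a quote from the repo):

applog(LOG_WARNING, "GPU #%d: result does not validate on CPU!", device_map[thr_id]);
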
But this doesn't explain the "0" and "1"

[2017-12-25 13:10:18] NVML GPU monitoring enabled.
0
1
[2017-12-25 13:10:18] Starting Stratum on stratum+tcp://neoscrypt.usa.nicehash.com:3341

@eruditej

jeramy@miner1:~/repos/KlausT$ nvidia-smi
Mon Dec 25 14:09:34 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 387.34                 Driver Version: 387.34                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Graphics Device     Off  | 00000000:01:00.0 Off |                  N/A |
| 54%   73C    P2   141W / 250W |   3579MiB / 12055MiB |     97%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Graphics Device     Off  | 00000000:03:00.0 Off |                  N/A |
| 49%   68C    P2   145W / 250W |   3579MiB / 12058MiB |     97%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      9076      C   ./ccminer                                   3568MiB |
|    1      9076      C   ./ccminer                                   3568MiB |
+-----------------------------------------------------------------------------+

nvidia linux-ism?

@eruditej

Never mind, the upgrade killed my driver. Hence it didn't show up as a Titan; fixing locally.

@KlausT
Owner

KlausT commented Dec 25, 2017

Would you like to do a little experiment?
You might be surprised.
Download this:
https://github.com/KlausT/ccminer/archive/cuda9.zip
then run build.sh and try to mine lyra2v2.

@eruditej

ref: https://devtalk.nvidia.com/default/topic/1027645/linux/nvidia-smi-not-recognizing-titan-v-/post/5228400/#5228400

"Graphics Device" showing up for Volta Titans on Linux is a known issue.

I added a few lines to my local lyra2REv2 to set a default intensity when the card is "too new and awesome". Testing your build now.

fyi:
nvidia-smi -L
GPU 0: Graphics Device (UUID: GPU-e3970f09-7b4b-59f7-7281-05cd4be92e74)
GPU 1: Graphics Device (UUID: GPU-48c1bc72-b872-29a3-cbd5-dcf835f74a14)

That's with the latest drivers/CUDA on Linux, fun!

@KlausT
Owner

KlausT commented Dec 25, 2017

Maybe I should check for the string "Graphics Device" and assume it's a Titan V :-)

@eruditej

Stratum still fails with both my local change and your test.

This doesn't fix anything, but it makes it clear it's working as intended:

diff --git a/lyra2/lyra2REv2.cu b/lyra2/lyra2REv2.cu
index 7663a29..d507fcb 100644
--- a/lyra2/lyra2REv2.cu
+++ b/lyra2/lyra2REv2.cu
@@ -83,7 +83,15 @@ int scanhash_lyra2v2(int thr_id, uint32_t *pdata,

        cudaDeviceProp props;
        cudaGetDeviceProperties(&props, device_map[thr_id]);
-       if(strstr(props.name, "Titan"))
+       // if its too new and awesome, linux will give it a generic name
+       if(strstr(props.name, "Graphics Device"))
+       {
+               intensity = 256 * 256 * 15;
+#ifdef _WIN64
+               intensity = 256 * 256 * 22;
+#endif
+       }
+       else if(strstr(props.name, "Titan"))
        {
                intensity = 256 * 256 * 15;
 #ifdef _WIN64

@KlausT
Owner

KlausT commented Dec 25, 2017

Oh, and I have to change the ifdef
After all this time I still find silly bugs.

@eruditej

I am getting connected to the stratum, but it's still not validating. tpruvot, rebuilt on the same toolchain, is working.

./ccminer -i21 -a lyra2v2 --url=stratum+tcp://lyra2rev2.usa.nicehash.com:3347 -u <redacted>.titans -p x
ccminer 8.17-KlausT (64bit) for nVidia GPUs
Compiled with GCC 6.3 using Nvidia CUDA Toolkit 9.1

Based on pooler cpuminer 2.3.2 and the tpruvot@github fork
CUDA support by Christian Buchner, Christian H. and DJM34
Includes optimizations implemented by sp-hash, klaust, tpruvot and tsiv.

[2017-12-25 15:36:25] NVML GPU monitoring enabled.
[2017-12-25 15:36:25] Intensity set to 21, 2097152 cuda threads
0
1
[2017-12-25 15:36:25] 2 miner threads started, using 'lyra2v2' algorithm.
[2017-12-25 15:36:25] Starting Stratum on stratum+tcp://lyra2rev2.usa.nicehash.com:3347
[2017-12-25 15:36:25] Stratum difficulty set to 32
[2017-12-25 15:36:25] lyra2rev2.usa.nicehash.com:3347 lyra2v2 block 1749227
[2017-12-25 15:36:30] GPU #0: result does not validate on CPU!
[2017-12-25 15:36:30] GPU #0: result does not validate on CPU!

@KlausT
Owner

KlausT commented Dec 25, 2017

Since it works on Windows, it could be unexpected compiler behaviour or wrong compiler options.
Unfortunately I can't help you there.

@KlausT
Owner

KlausT commented Dec 25, 2017

Try the option --no-cpu-verify; then we can see where the problem is: the CUDA code or the CPU code.

@eruditej

lol, smoking

../KlausT/ccminer -i21 -a lyra2v2 --url=stratum+tcp://lyra2rev2.usa.nicehash.com:3347 -u <redacted>.titans -p x --no-cpu-verify
ccminer 8.17-KlausT (64bit) for nVidia GPUs
Compiled with GCC 6.3 using Nvidia CUDA Toolkit 9.1

Based on pooler cpuminer 2.3.2 and the tpruvot@github fork
CUDA support by Christian Buchner, Christian H. and DJM34
Includes optimizations implemented by sp-hash, klaust, tpruvot and tsiv.

[2017-12-25 16:30:29] NVML GPU monitoring enabled.
[2017-12-25 16:30:29] Intensity set to 21, 2097152 cuda threads
0
1
[2017-12-25 16:30:29] Starting Stratum on stratum+tcp://lyra2rev2.usa.nicehash.com:3347
[2017-12-25 16:30:29] 2 miner threads started, using 'lyra2v2' algorithm.
[2017-12-25 16:30:29] Stratum difficulty set to 32
[2017-12-25 16:30:29] lyra2rev2.usa.nicehash.com:3347 lyra2v2 block 1749316
[2017-12-25 16:30:32] accepted: 1/1 (100.00%), 92.15 MH/s yay!!!
[2017-12-25 16:30:33] accepted: 2/2 (100.00%), 92.15 MH/s yay!!!
[2017-12-25 16:30:33] lyra2rev2.usa.nicehash.com:3347 lyra2v2 block 1749317
[2017-12-25 16:30:33] GPU #1: Graphics Device, 97.87 MH/s
[2017-12-25 16:30:33] GPU #0: Graphics Device, 98.48 MH/s
[2017-12-25 16:30:37] GPU #1: Graphics Device, 99.22 MH/s
[2017-12-25 16:30:37] accepted: 3/3 (100.00%), 197.70 MH/s yay!!!
[2017-12-25 16:30:37] GPU #1: Graphics Device, 99.25 MH/s
[2017-12-25 16:30:37] accepted: 4/4 (100.00%), 197.71 MH/s yay!!!
[2017-12-25 16:30:37] GPU #1: Graphics Device, 99.31 MH/s
[2017-12-25 16:30:37] accepted: 5/5 (100.00%), 197.74 MH/s yay!!!
[2017-12-25 16:30:39] GPU #1: Graphics Device, 99.54 MH/s
[2017-12-25 16:30:39] accepted: 6/6 (100.00%), 197.81 MH/s yay!!!

@cgarnier

Is it possible to provide a Docker image for the different working versions?
I'm not used to building C++ stuff and I'm fighting with it :D
I'm trying to adapt this one https://github.com/kahiroka/ccminer-in-docker/blob/master/Dockerfile with your fork, but for now it's not working.

@KlausT
Owner

KlausT commented Jan 22, 2018

Would you please open a separate issue for the docker stuff?
Then other people will actually find it.
Thanks

By the way, I'm not a Linux user, so I will have to rely on other people who know more about Linux than me.

@sukoshi1507

sukoshi1507 commented Jan 22, 2018 via email

@eruditej

eruditej commented Jan 22, 2018 via email

@cgarnier

Could you publish your Dockerfiles?
