I've trained a small model using the --boosting N option, for example --loss_function logistic -b 18 --l1 0.1 --l2 0.0001 --nn 50 --boosting 5, from Python or the command line.
Then, when I try to get prediction scores out of it, I get "1.0" every time, both from Python and with -p on the command line. When I use the -r command-line parameter with the vw CLI executable, I get the list of all features with their scores and a final meaningful score at the end.
It seems impossible to get these values from within the Python classes, no matter what I try (including playing with the various PredictionType values).
This looks like either a documentation bug or a more serious issue with the boosting reduction.
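For reference, a minimal sketch of the workflow being described, using the vowpalwabbit Python bindings. The training options are the ones quoted in the report; the example lines, the feature names, and the demo.model filename (plus the added -f and --quiet flags) are placeholders for illustration only:

from vowpalwabbit import pyvw

# Options quoted in the report; -f / --quiet added here only for the sketch
vw = pyvw.Workspace('--loss_function logistic -b 18 --l1 0.1 --l2 0.0001 '
                    '--nn 50 --boosting 5 -f demo.model --quiet')

# Placeholder training data in VW text format: label | features
for label, feats in [(1, 'a:1 b:0.5'), (-1, 'a:0.2 b:1.3')]:
    vw.learn(f'{label} | {feats}')

# Per the report, predict() comes back as 1.0 for every example here,
# while the command-line -r (raw predictions) output still looks sensible.
print(vw.predict('| a:0.7 b:0.1'))
vw.finish()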
This looks like a bug in the boosting reduction (the nn/loss-function options are not relevant here).
Simpler repro below (it can also be reproduced with the binary and -p, so the problem is not Python-specific):
from vowpalwabbit import pyvw
import numpy as np

# 1-bit weight table (-b 1) keeps only two slots: the feature and the constant
vw = pyvw.Workspace('--boosting 1 -b 1')

# Target function: y = 2x + 3
yhat = lambda x: 2 * x + 3

for _ in range(10000):
    x = np.random.rand()
    vw.learn(f'{yhat(x)} | 1:{x}')

print(f'w_x = {vw.get_weight(1, 0)}')
print(f'constant = {vw.get_weight(0, 0)}')

x = 2
print(f'y({x}) = {vw.predict(f"| 1:{x}")}')  # f-string added so x is actually substituted
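For comparison, a control sketch (my addition, not part of the original comment) that simply drops --boosting. Under that assumption a plain linear model should learn a weight near 2 and a constant near 3, so the prediction at x = 2 should come out roughly at yhat(2) = 7, whereas with --boosting the prediction is degenerate as described above:

from vowpalwabbit import pyvw
import numpy as np

# Control: identical setup minus --boosting
vw = pyvw.Workspace('-b 1 --quiet')
yhat = lambda x: 2 * x + 3

for _ in range(10000):
    x = np.random.rand()
    vw.learn(f'{yhat(x)} | 1:{x}')

x = 2
print(f'y({x}) = {vw.predict(f"| 1:{x}")}')  # expected to land roughly near yhat(2) == 7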
How to reproduce
Train with: --loss_function logistic -b 18 --l1 0.1 --l2 0.0001 --nn 50 --boosting 5
Test with: vw -i model.vw -t test.txt -r /dev/stdout
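A sketch of the Python-side equivalent of the test command above, assuming the trained model was saved as model.vw; the example string is a placeholder standing in for a line of test.txt:

from vowpalwabbit import pyvw

# Load the saved model in test-only mode, mirroring `vw -i model.vw -t`
vw = pyvw.Workspace('-i model.vw -t --quiet')

# Placeholder example in VW text format; per the report this prints 1.0
print(vw.predict('| a:0.7 b:0.1'))
vw.finish()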
Version
9.2.0
OS
Linux
Language
Python
Additional context
No response