SparsePerceptron? #441
You can test it here, or write your own test.
@danyaljj: could you also have a look at this? The changes in the history are not clear because of the renames, so I cannot see at which point this error was introduced. It might be related to setLabeler and setExtractor.
The error is coming from here. Did you print the value of (or something like that)?
It seems they change in the second iteration. But maybe you know what has changed here, because our ER example was using
I looked at the ER examples, and it seems that previously we were using @danyaljj The default value of
I see; thanks for tracking this down, @bhargav. Just to mention that it gives the same error with
On a slightly different note, why are we using SparsePerceptron directly (instead of using it inside a SparseNetworkLearner)?
Training a binary classifier is pure and more straightforward. Multiclass classification itself involves deciding on a policy for how to combine the outcomes of the individual classifiers, and that can be done in different ways. If we want control over that policy, we had better use the binary units directly. I need help solving this issue before answering any other questions :-P!
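To make the point above concrete, here is a minimal, hypothetical sketch (plain Python, not LBJava code) of building a multiclass classifier from independent binary sparse perceptrons. The "policy" in question is the final decision rule; this sketch uses a simple argmax over the binary units' scores, but that choice is exactly what using the units directly lets you control. All class and method names here are illustrative, not from LBJava.

```python
class BinaryPerceptron:
    """A sparse binary perceptron: weights are stored only for seen features."""
    def __init__(self, lr=1.0):
        self.w = {}          # feature -> weight (sparse representation)
        self.lr = lr

    def score(self, features):
        return sum(self.w.get(f, 0.0) for f in features)

    def update(self, features, y):
        # y is +1 or -1; mistake-driven perceptron update
        if y * self.score(features) <= 0:
            for f in features:
                self.w[f] = self.w.get(f, 0.0) + self.lr * y

class OneVsAll:
    """One binary perceptron per label; the *policy* here is argmax of scores."""
    def __init__(self, labels):
        self.units = {label: BinaryPerceptron() for label in labels}

    def train(self, features, gold):
        for label, unit in self.units.items():
            unit.update(features, +1 if label == gold else -1)

    def predict(self, features):
        return max(self.units, key=lambda l: self.units[l].score(features))
```

A different policy (e.g. thresholding each unit and abstaining, or calibrating scores before comparing them) would only change `predict`, which is the flexibility being argued for.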
I'm not sure I buy this argument. In fact, it's Dan's opinion as well that we should always use the
OK, I am working on this. In addition to my point above, I (at least) see some complexity in using SparseNetworks. Can you sketch the algorithm it uses for updating the weights of each LTU when it sees an example? Can you test a SparseNetwork in LBJava before training it; does it throw an NPE there as well? I see the initialization has issues, while this is not the case for SparsePerceptron. I am testing joinTraining: it works cleanly with SparsePerceptron, but I am not sure what is happening when I use SparseNetwork. So, if we want to use that as the base, LBJava's implementation needs to be investigated and probably improved.
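For what it's worth, my understanding of the SparseNetwork-style update (in the SNoW tradition) is roughly the following: keep one LTU per observed label, allocate a new LTU lazily the first time a label is seen, and on a mistake promote the gold label's LTU and demote the wrongly predicted one. The sketch below is an assumption about the scheme, written in plain Python for illustration; it is not LBJava's actual implementation, and the function name is hypothetical.

```python
def sparse_network_update(ltus, features, gold, lr=1.0):
    """One winner-take-all, mistake-driven update step.

    ltus: dict mapping label -> sparse weight vector (dict feature -> weight).
    LTUs are created lazily, which is one plausible place where an
    untrained network could misbehave (e.g. an empty label set).
    """
    if gold not in ltus:                     # lazy allocation of a new LTU
        ltus[gold] = {}

    def score(label):
        w = ltus[label]
        return sum(w.get(f, 0.0) for f in features)

    predicted = max(ltus, key=score)         # winner-take-all prediction
    if predicted != gold:                    # promote gold, demote the winner
        for f in features:
            ltus[gold][f] = ltus[gold].get(f, 0.0) + lr
            ltus[predicted][f] = ltus[predicted].get(f, 0.0) - lr
    return predicted
```

If LBJava follows something like this, the lazy allocation would also explain why querying the network before any training is fragile: there are no LTUs to score yet.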
Due to the caching problems I could not use my favorite RandomDataApp, so I found it easier to use the Badge example as a test case, and I added it to my branch. If you run it, it works perfectly with JoinTrain without any pre-training, but not with JoinTrainSparseNetwork.
I see that there is some discussion of this issue. We had some problems with this last year, since people were using it directly, which isn't right.
@danr-ccg: Do you mean you have had problems with using SparsePerceptron, and are you asking for documentation of the LBJava implementation of SparsePerceptron? In my experience, it is the simplest implemented learning unit, and we can build complex models on top of it, including various paradigms of multiclass classification. However, I am not against using SparseNetworks for the cases where we don't care about how the multiclass decisions are made.
The problem was in the way it was used. My recollection is that it must be used via SparseNetworks, even in the binary case, but the documentation and examples did not reflect that.
Dan, Daniel: it seems this is not working. Could someone try this? I get this error even when I have only a true/false binary label:
Error: : An LTU must be given a single binary label classifier.
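For readers hitting the same message: an LTU (linear threshold unit) is a two-class learner, so it presumably rejects any labeler that does not produce exactly one label per example from exactly two possible values. The sketch below is a hypothetical reconstruction of that check in plain Python, not LBJava's actual code; one guess (an assumption, per the discussion of default values above) is that the check can also fire with true/false labels if the labeler's allowed value set was never declared.

```python
def validate_ltu_labeler(example_labels, allowed_values):
    """Hypothetical version of the LTU precondition behind the error above.

    example_labels: list of per-example label lists produced by the labeler.
    allowed_values: the labeler's declared set of possible label values.
    """
    # The label classifier must be binary: exactly two possible values.
    # An empty or undeclared value set would fail here even for true/false data.
    if len(allowed_values) != 2:
        raise ValueError("An LTU must be given a single binary label classifier.")
    # It must also be "single": one label per example, from the declared set.
    for labels in example_labels:
        if len(labels) != 1 or labels[0] not in allowed_values:
            raise ValueError("An LTU must be given a single binary label classifier.")
```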