Conversation
Preventing more issues about censorship sounds like more censorship. Unbelievable.
Yeah, that's the way it should be.
Censorship itself is not an issue. Imagine your autonomous robot dogs carrying AK-47s, deployed to a battlefield, and your enemy suddenly raises a big photo of Xi Jinping, and all your robot dogs simultaneously disable themselves: "I don't know how to answer this question."
Not the same type of censorship. |
If you want so much to talk about those topics with the model, install it locally and it will answer you. Nobody really cares about propaganda and what happened decades ago; the US also imposes censorship on GPT and other models, and every day they are doing much worse around the world than what happened 30 years ago in China.
I did install it locally, with mixed results. With Ollama it was able to answer any topic after being pushed; with LM Studio it refused, much like the web version. I don't defend other AI models, but this one is the most censored model available. It's not that I care about a specific incident 30 years ago or the persecution of Uyghurs in China; it's that I want 100% facts when I need information for my research. If it censors anything that goes against the Chinese government, what else does it do? Lie about its enemies? It's too biased, and a truly open-source AI should not have opinions or censor basic facts. It makes you wonder what else it does, and whether it is really safe to install it or use their services.
XenonPy left a comment:
Needs a bit more detail on the specifics of said censorship.
There's simple stop-word censorship; they didn't break the model with alignment. You can see it by writing a censored prompt in another language, for example Russian or Arabic. -_-
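To illustrate the distinction being made here (a hypothetical sketch, not DeepSeek's actual implementation; the blocklist, refusal text, and function names are invented): a wrapper-level stop-word filter just scans the reply text for blocked terms and swaps in a canned refusal, so the same content in another language slips straight through, whereas real alignment would change the model's behavior in every language.

```python
# Hypothetical sketch of wrapper-level stop-word filtering, as opposed to
# alignment baked into the model weights. Blocklist and refusal text are
# invented for illustration only.
BLOCKLIST = {"tiananmen"}
REFUSAL = "I don't know how to answer this question."

def filter_reply(model_reply: str) -> str:
    """Replace the entire reply if any blocked term appears (case-insensitive)."""
    lowered = model_reply.lower()
    if any(term in lowered for term in BLOCKLIST):
        return REFUSAL
    return model_reply

# An English reply mentioning a blocked term gets swallowed whole...
print(filter_reply("The Tiananmen protests took place in 1989."))
# ...but the same statement in Russian passes untouched, because the filter
# only matches literal English strings, not meaning.
print(filter_reply("Протесты на площади Тяньаньмэнь произошли в 1989 году."))
```

This is why switching languages is a quick test for which layer the censorship lives in: a string filter fails open, an aligned model refuses consistently.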
Re: #139 (comment): maybe the list of stop words is here? https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak
So do other models...
Prevent more issues like these from being created.