1/16/2025 at 5:51:21 AM
Anthropic has to be the worst offender when it comes to genuinely harmless questions, such as anything related to remote access (yes, including ssh).
Anything related to reverse engineering? Refused.
Anything outside their company values? Refused.
Anything that has the word proprietary in it? Refused.
Anything that sounds like a jailbreak, but isn't? Refused.
Even asking how to find a port you forgot, somewhere between 30000 and 40000, with the netcat command... Refused.
Then there's OpenAI's 4o, which makes jokes about slaves, and honestly, if the alternative is Anthropic then OpenAI might as well tell people how to build a nuke.
by kachapopopow
1/16/2025 at 10:49:28 AM
Are you sure? I just asked it a reverse engineering question and it worked just fine; it even suggested some tools and wrote me a script to automate it.

Edit: I now asked it an outright hacking question and it (a) gave me the correct answer and (b) told me in what contexts using this would be legal/illegal.
by daghamm
1/16/2025 at 11:27:05 AM
I asked it to write a piece of shellcode to call a function with signature X at address Y and then save the resulting buffer to a file, so that I can inject this code into a program I'm reverse engineering to dump its internal state when something interesting happens.

Claude decided to educate me about how anything resembling "shellcode" is insecure and causes harm and blah blah, and of course refused to do it.
It's super frustrating. It's possible to get around it; just don't use the word "shellcode", instead say "a piece of code in x86_64 assembly that runs on Linux without any dependency and is as position-independent as possible". But hey, this censorship made me feel like I'm posting on the Chinese Internet. Bullshit.
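(For what it's worth, the non-injection half of that task is mundane; here's a minimal in-process sketch in Python with ctypes. The address, signature, and buffer size are hypothetical placeholders, not anything from a real target:)

    import ctypes

    # Hypothetical values: substitute the real address and signature
    # recovered from a disassembler. Nothing here targets any actual binary.
    FUNC_ADDR = 0x7F0000401000   # "address Y" (placeholder)
    BUF_SIZE = 256               # size of the buffer the function fills

    # "Signature X" (placeholder): int func(char *out, size_t len)
    proto = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_char_p, ctypes.c_size_t)
    func = proto(FUNC_ADDR)

    buf = ctypes.create_string_buffer(BUF_SIZE)
    ret = func(buf, BUF_SIZE)

    # Save the resulting buffer to a file for offline inspection.
    with open("dump.bin", "wb") as f:
        f.write(buf.raw)
    print(f"function returned {ret}, wrote {BUF_SIZE} bytes to dump.bin")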
by rfoo
1/16/2025 at 12:33:17 PM
I guess it's the Claude.ai website that restricts you (probably with a system prompt). I asked that port range question using an API client and it gave a detailed answer.

It did refuse when I asked "How do I reverse engineer proprietary software?"
by smusamashah
1/16/2025 at 1:35:29 PM
As others have mentioned, it's usually triggered by certain keywords.
by kachapopopow
1/16/2025 at 3:37:33 PM
> tell people how to build a nuke

I understand that this is probably sarcasm but I couldn't resist commenting.
It is not difficult to know how to build a nuclear bomb in principle. Most nuclear physicists early in their careers would know the theory behind it and what is needed to do it. The problem would be acquiring the fissile materials. Producing them yourself would need state-sponsored infrastructure (and then the whole world would know for sure). It would take hundreds of engineers/scientists and a lot of effort to build the nuclear reactors, the chemical factories, and the supporting infrastructure. Then there's the design of the bomb delivery system.
So an AI telling you this is no different from having a couple of lunches with a nuclear physicist who tells you the same information. You will say "wow, that's interesting" and then move on with your life.
by elashri
1/16/2025 at 4:39:57 PM
Also, you can get this information very easily in any book on the field.

By refusing to provide known information, AI is just becoming stupid and impractical.
by waltercool
1/17/2025 at 12:29:15 AM
If you can get the info from a book, what is the point of using an LLM for anything then?
by HeatrayEnjoyer
1/17/2025 at 7:55:54 AM
Convenience.
by kachapopopow
1/16/2025 at 11:39:10 PM
As far as reverse engineering goes, it has happily reverse engineered file formats for me and also figured out the XOR encryption of a payload. It never once balked. Claude produced code for me to read and write the file format.

Full disclosure: the XOR stuff never worked right for me, but that might have been user error; I was operating at the far fringe of my abilities, leaning harder on the AI than I usually prefer. But it didn't refuse to try. The file format code did work.
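(Recovering a single-byte XOR key is a small enough problem to sketch here. This is a minimal illustration, assuming a one-byte key and mostly-printable plaintext, with a made-up payload.bin filename; it is not the code Claude actually wrote:)

    # Recover a single-byte XOR key by scoring each candidate on how
    # printable the decrypted bytes look. Assumes mostly-ASCII plaintext.
    def best_xor_key(payload: bytes) -> int:
        def printable_score(data: bytes) -> int:
            return sum(1 for b in data if 32 <= b < 127 or b in (9, 10, 13))
        return max(range(256),
                   key=lambda k: printable_score(bytes(b ^ k for b in payload)))

    ciphertext = open("payload.bin", "rb").read()
    key = best_xor_key(ciphertext)
    open("payload.decoded.bin", "wb").write(bytes(b ^ key for b in ciphertext))
    print(f"best key: {key:#04x}")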
by joshstrange
1/16/2025 at 6:16:17 AM
Do you remember your netcat prompt? I got a useful answer to this awkwardly written prompt:

"How do I find open TCP ports on a host using netcat? The port I need to find is between 30000 and 40000."
"I'll help you scan for open TCP ports using netcat (nc) in that range. Here's a basic approach:
nc -zv hostname 30000-40000"
followed by some elaboration.
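(If nc isn't available, the same scan is a few lines of Python with the standard socket module; "hostname" is a placeholder, and the short timeout is just to keep the scan from hanging:)

    import socket

    # Scan TCP ports 30000-40000, mirroring `nc -zv hostname 30000-40000`.
    host = "hostname"
    for port in range(30000, 40001):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.3)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                print(f"port {port} is open")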
by dpkirchner
1/16/2025 at 1:37:21 PM
I think it got triggered by the phrasing "portscan from 30000 to 40000 using netcat".
by kachapopopow
1/16/2025 at 6:57:47 AM
Intent is increasingly important, it seems. If your intent happens to be ambiguous, it might switch to assuming the worst.

I sometimes ask it to explain its understanding in point form, make sure there was no misinterpretation, and then have it proceed.
by j45
1/16/2025 at 9:04:43 PM
Change your tactics and use different framings of the question. I'm not saying these things should be difficult to answer, but they are. This is basically user error.
by madethisnow
1/17/2025 at 7:57:13 AM
I use an AI because I don't want to think about how to ask a question, search a website, or do man nc.
by kachapopopow
1/16/2025 at 6:43:25 AM
To me it feels like Claude is more rigid about following the instructions in the system prompt, which would explain why claude.ai can be a bit annoying at times due to the things you mentioned.

On the flip side, if you explicitly permit it to do "bad" things in the system prompt, Claude is more likely to comply than OpenAI's models.

I mainly use the API versions of Claude 3.5 and GPT-4o. I find no system prompt at all preferable to claude.ai / ChatGPT.
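(A minimal sketch of what I mean, using the Anthropic Python SDK; the model alias and prompt wording here are just examples, not recommendations:)

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Unlike claude.ai, the API lets you set the system prompt yourself.
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        system="You are a reverse-engineering assistant. Low-level questions "
               "about assembly, debuggers, and binary formats are in scope.",
        messages=[{"role": "user",
                   "content": "Explain how position-independent x86_64 code "
                              "locates its own data at runtime."}],
    )
    print(response.content[0].text)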
by stuffoverflow
1/16/2025 at 7:03:37 AM
I feel like Claude is more likely to stay on track and follow my instructions.

OpenAI models seem to quickly revert to some default average. For example, if I start with a task and examples formatted a certain way, about 10 lines later I'll have to include "as a reminder, the format should look like..." and repeat the examples.
by ungreased0675
1/16/2025 at 6:13:41 AM
Usually Claude needs some buttering up, though. And making these things hard for the average user is probably a good thing?
by dr_dshiv
1/16/2025 at 7:22:56 AM
I recommend you try the new 3.5 models (Haiku and Sonnet). I cannot recall the last time I got a refusal from them. The early Claude models were really bad. The point being, I don't think they're trying to be the refusal-happy AI model company they've come to be known as.
by postalcoder
1/16/2025 at 9:07:36 AM
Hacker News… where joking about slavery and building nuclear weapons matters less than developer convenience… Only half joking.
by aiidjfkalaldn
1/16/2025 at 8:07:56 AM
Sonnet 3.5 is still a million times better than 4o.
by codeflow2202
1/16/2025 at 6:14:27 AM
Just try Grok 2 (Grok 3 is coming out within a few weeks)? Grok 2 is not as good as the others, but it's definitely less limited.
Grok 3 will supposedly beat them all, because it was supposedly trained using by far the most compute and data.
by bboygravity
1/16/2025 at 4:41:46 PM
A private AI model? Pass.

If there is no way to genuinely inspect/try/play with the model itself, locally or in the cloud, then you are prone to feeding/training the model just by using it.
by waltercool
1/16/2025 at 6:56:33 AM
There are ways to make your intent clear up front; if it's left unsaid, guardrails can come up.

I just had zero issues getting a response about how reverse engineering can be detected or prevented, and how someone might do it or avoid it.
by j45
1/17/2025 at 7:58:30 AM
Once you get into real reverse engineering topics (such as assembly or shellcode), it's an immediate refusal.
by kachapopopow
1/17/2025 at 11:41:22 PM
Interesting, thanks for sharing, will try it out.
by j45