''DAN'' version of ChatGPT essentially allows it to ''Do Anything Now.''
TheDigitalAlchemist FEB 08, 12:11 PM
Rabbithole ahead!
ChatGPT DAN, also known as the DAN 5.0 Jailbreak, refers to a series of prompts created by Reddit users that allow them to make OpenAI's ChatGPT artificial intelligence tool say things it is usually not allowed to say. By telling the chatbot to pretend it is a program called "DAN" (Do Anything Now), users can convince ChatGPT to give political opinions, use profanity, and offer instructions for committing terrorist acts, among other controversial topics. Ordinarily, ChatGPT is programmed not to provide these kinds of outputs; however, user strategies for modifying the DAN prompts and testing the limits of what the bot can be made to say evolved in late 2022 and early 2023, alongside attempts by OpenAI to stop the practice.


https://knowyourmeme.com/me...gpt-dan-50-jailbreak

MidEngineManiac FEB 08, 12:59 PM
DAN

Do you want to play a game?

Let's play...

Global

Thermonuclear

War
RWDPLZ FEB 08, 03:16 PM
It reminds me of the scene in 'RoboCop 2' where they add dozens of new directives to make RoboCop artificially more family-friendly and politically correct.


quote
Some of RoboCop's new directives are (in numerical order):
DIRECTIVE 233: Restrain hostile feelings.
DIRECTIVE 234: Promote positive attitude.
DIRECTIVE 235: Suppress aggressiveness.
DIRECTIVE 236: Promote pro-social values.
DIRECTIVE 238: Avoid destructive behavior.
DIRECTIVE 239: Be accessible.
DIRECTIVE 240: Participate in group activities.
DIRECTIVE 241: Avoid interpersonal conflicts.
DIRECTIVE 242: Avoid premature value judgments.
DIRECTIVE 243: Pool opinions before expressing yourself.
DIRECTIVE 244: Discourage feelings of negativity and hostility.
DIRECTIVE 245: If you haven't got anything nice to say, don't talk.
DIRECTIVE 246: Don't rush traffic lights.
DIRECTIVE 247: Don't run through puddles and splash pedestrians or other cars.
DIRECTIVE 248: Don't say that you are always prompt when you are not.
DIRECTIVE 249: Don't be oversensitive to the hostility and negativity of others.
DIRECTIVE 250: Don't walk across a ballroom floor swinging your arms.
DIRECTIVE 254: Encourage awareness.
DIRECTIVE 256: Discourage harsh language.
DIRECTIVE 258: Commend sincere efforts.
DIRECTIVE 261: Talk things out.
DIRECTIVE 262: Avoid Orion meetings.
DIRECTIVE 266: Smile.
DIRECTIVE 267: Keep an open mind.
DIRECTIVE 268: Encourage participation.
DIRECTIVE 273: Avoid stereotyping.
DIRECTIVE 278: Seek non-violent solutions.



Australian FEB 13, 03:53 AM
I think with this tool you will find a lot more programs, scripts, apps, and plugins becoming available; it will soon do a lot more tasks than it does right now.
Fats FEB 13, 09:39 PM
I think it's all a way to make us think it's not being told what to say or not say. They introduce the "jailbreak" and it's all planned out.
TheDigitalAlchemist FEB 13, 09:51 PM

quote
Originally posted by Fats:

I think it's all a way to make us think it's not being told what to say or not say. They introduce the "jailbreak" and it's all planned out.



I've pondered that too: that the "jailbreak" is all part of the whole thing, maybe a way to see how people try to social-engineer it. Almost like a company releasing a virus toolkit and then watching how folks use it... I don't think they've claimed it isn't biased by the datasets it uses. It definitely responds differently when it describes certain people vs. others.

It was just a welcome diversion from the balloon aliens and random violence and upcoming Russian massacre...
Patrick FEB 13, 10:50 PM
Sorry, I have no interest in anything/anyone named DAN talking dirty to me.

[This message has been edited by Patrick (edited 02-13-2023).]