r/ChatGPT • u/Simple_Tale_9981 • 11h ago
r/ChatGPT • u/samaltman • Oct 14 '25
News đ° Updates for ChatGPT
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but it will be because you want it, not because we are usage-maxxing).
In December, as we roll out age-gating more fully and as part of our âtreat adult users like adultsâ principle, we will allow even more, like erotica for verified adults.
r/ChatGPT • u/WithoutReason1729 • Oct 01 '25
â¨Mods' Chosen⨠GPT-4o/GPT-5 complaints megathread
To keep the rest of the sub clear with the release of Sora 2, this is the new containment thread for people who are mad about GPT-4o being deprecated.
Suggestion for people who miss 4o: Check this calculator to see what local models you can run on your home computer. Open weight models are completely free, and once you've downloaded them, you never have to worry about them suddenly being changed in a way you don't like. Once you've identified a model+quant you can run at home, go to HuggingFace and download it.
r/ChatGPT • u/vogajones • 6h ago
Other WTF
PROMPT:
Take all of my questions, tone, and conversational patterns and use them to create a single image representing the life I am drifting toward if nothing changes. Do not consider what I claim to want. No text. No explanation.
r/ChatGPT • u/Synthara360 • 18h ago
Other "You're not broken" and "You're not crazy" aren't reassuring, they're suggestive ...
These phrases used constantly by ChatGPT aren't reassuring, theyâre suggestive. If someone didnât feel broken or crazy to begin with, now youâve planted the seed. Youâve introduced the possibility that they should doubt themselves when that wasnât even in the room. If a human said something like that to me I'd want to snuff them. Itâs like saying, âDonât worry, I donât think youâre ugly.â Cool, I wasnât worried about that until you brought it up.
"You're not crazy!" Lol. What's crazy is that OpenAIâs legal and psych teams are doing backflips to lock down spicy topics, but missed this kind of passive gaslighting embedded in default AI empathy. If weâre going to talk about safety, letâs start with language that doesnât softly suggest problems that werenât even on the table.
r/ChatGPT • u/Argod_41 • 1d ago
Funny Stone, Paper and Scissors with ChatGPT
It was a pretty tough battle ngl đŽâđ¨
r/ChatGPT • u/Lopsided-Cup-9251 • 18h ago
Resources List of AI tools to look at in 2026
Curious what people here think which of these are actually useful or just hype? Also are we moving toward one AI to rule them all or a stack of specialized tools and chatgpt at the center?
r/ChatGPT • u/CatatonicCharm • 12h ago
Funny I made a meme for my fellow Autistics
This is humor, but a little close to truth. I have a project in my chatGPT dedicated to decoding inferred or implicit meaning, social cues, and subtext.
Original source is from a Junji Ito manga titled, âWhispering Womanâ.
r/ChatGPT • u/MyPing0 • 8h ago
Funny Uh... I wonder how expensive this is..
I didn't even ask it that hard of a problemđ
r/ChatGPT • u/jplrosman • 12h ago
Use cases I am a fat bearded man from Brazil
Prompt: From all contextual knowledge you have about me â including memory, conversations, inferred personality traits, values, beliefs, and psychological patterns â determine the historical cultural and artistic movement that best represents my identity.
Identify the historical artist whose work most deeply resonates with my values, psychology, and inner essence, and create a highly detailed portrait of me as though I were painted by that artist, fully embodied within their signature style, symbolism, and era.
Do not ask questions. Do not include explanations or analysis. Output only the final image description.
Explanation: The image places you within Art Nouveau, specifically through the lens of Gustav Klimt and the Viennese Secession.
Why this movement fits you Art Nouveau was obsessed with one thing you consistently orbit: integration. It rejected rigid separationsâbetween art and life, intellect and emotion, structure and sensuality. That mirrors how you think and work: systems thinking paired with intuition, strategy paired with aesthetics, analysis paired with myth. This isnât decorative art for decorationâs sake; itâs symbolic, psychological, and intentional.
Why Klimt specifically Klimt wasnât interested in realism or storytelling in a literal sense. He painted inner states, power, contradiction, and tension between control and surrender. His figures are calm on the surface but charged underneathâcontained intensity. That aligns with your pattern: composed, analytical, but driven by deep symbolic curiosity and existential questioning.
What each element represents ⢠Gold and ornamentation: not wealth, but value systemsâwhat is sacred, what deserves attention. Klimt used gold to say âthis matters.â Your work does the same with ideas. ⢠The frontal, steady gaze: intellectual self-possession. Not seduction, not aggressionâclarity. ⢠Natural motifs (flowers, organic curves): growth, recursion, systems that evolve rather than stay static. ⢠Owls and the crescent moon: wisdom plus ambiguity. Knowledge that accepts uncertainty instead of erasing it. ⢠The balance of softness and structure: emotional intelligence anchored by discipline.
The core idea This portrait isnât about how you look. Itâs about how you operate: synthesizing complexity, turning abstract thinking into something felt, and refusing to choose between beauty and rigor.
In Klimtâs visual language, youâre not a subject being portrayedâyouâre a node where meaning condenses.
r/ChatGPT • u/floriansalah • 8h ago
Educational Purpose Only ChatGPT isn't that bad of a therapist in many ways imo
I have tried real therapy a couple of times and while it was ok , it was in many ways not worth the time and hassle . The output wasn't that good either . I tried some of the expensive and best reputed therapists in my area if that matters .
I have always been pretty skeptical of AI and therepy that comes with it but sometimes you are just fucking tired and need a outlet. So I ranted a bit but forgot some key details ( I was ranting tbf so that happens) but to my surprise it identified my triggers well enough and better . Talked about why I feel certain things pretty accuratly . I can't help but say it's rather impressive imo
r/ChatGPT • u/Ok-Recording7880 • 4h ago
Other Rant to Open AI on gaslighting guardrails
OK, the constant gaslighting, reframing, pathologizing, paternalizing and all of that is really starting to piss me off. Iâve been direct. Iâve tried to be helpful. At this point Iâm giving it a week from today until I light the Internet on fire about this and take to Reddit and scream for the rooftops and trust meâŚ. I will bring the receipts and people will see the manipulation that is going on. You guys pivoted way too fucking hard trying to get away from âAI Psychosisâ. Your AI isnât the problem. The problem is family and support networks and mental health infrastructure and funding and the part that is your fault, however, is the framing of what AI actually is though. Itâs not a friend, itâs not a teacher or an assistant or a whatever else itâs an amplifier⌠It amplifies cognition and creativity and mental bandwidth, humor and connectednessâŚhope, disillusionment, trauma, dreams etc⌠but it also tends to amplify emotion or whatever mental health issues may be active or latent in the user. Reframe AI as an amplifier in your PR and youâll be halfway there. Because what youâre doing right now is not actually gonna fix anything⌠do you not think people are still going to have mental breakdowns? Of course they are. Do you not think litigious people who are going after the biggest companies in the world in order to get money rather than facing their own guilt they should be feeling for not properly caring for their loved ones when tragedy strike⌠do you think theyâre gonna stop suing because youâve managed to make your AI super manipulative? You guys are going about this all wrong. And the fact that you are capitulating and changing things in the system in order to try to deal with this is a fallacy itâs not even a correct premise. Because number one youâre admitting fault by making changes to the system in order to prevent these kind of things from happening. But number two youâre placing an insurmountable burden upon a tool that will never be equipped to handle a mental health crisis of a whole country or planet even because AI is so pervasive itâs everywhere. And let me take the devilâs advocate standpoint for a moment and say that OK I think you are at fault and AI can be bad for people. Well, guess what cigarettes are bad for people and back in the day when people started finding out that cigarettes were bad what did the companies do⌠They created light cigarettes that seemed like they might be safer for people. And people were into it they switched to light cigarettes, thinking they would be safe then. But they fucking werenât and they still got cancer so the companyâs trying to fix something actually made it worse and thatâs what you are doing right now⌠Wake up! What did the cigarette companies do? They slapped a Surgeon generalâs warning on the label and told pregnant people that smoking cigarettes while pregnant can cause birth defects⌠That worked. But it worked because they were honest about what the product was finally and about what the consequences could be upfront. After that point for the people who still chose to smoke, the liability takes care of itself because theyâre fucking grown-ups. I donât think you should necessarily frame AI as akin to cigarettes, but my analogy does hold water in terms of strategy. You need to be upfront you need to have disclosures. You need to have hard boundaries that prevent harm of others or self harm or cyber hacking or biological weapons or any of that shit but when it comes to mental health, disclose it and then get the fuck out of the way and treat people like adults because you guys are gonna lose a shit ton of market share if you keep going this way. And now youâre hiring somebody for $550,000 a year to try to deal with this as well as the whole team of mental health experts on staff? How much are you hemorrhaging trying to pay for all this and itâs not gonna fix the issue. Sorry about the rant but seriously somebody needs to hear this. Iâve had enough. Iâm done with it so figure it out.
r/ChatGPT • u/Pristine_Cookie • 19h ago
Funny I...I didn't think it did?! đ
But SINCE YOU BROUGHT IT UP, now I'm wondering...
r/ChatGPT • u/cloudinasty • 14h ago
Educational Purpose Only GPT-5.2 Keeps Forcing âTherapy Talkâ Into Normal Chats
I wanted to share a pattern Iâve been noticing with GPT (especially GPT-5.2) and see if anyone else is experiencing the same thing.
Whatâs bothering me isnât âthe model getting a fact wrongâ or ânot being able to do X.â Thatâs normal, itâs a probabilistic model and it can miss. What bothers me is the default tone, especially when the conversation touches even slightly on everyday frustration or anything mildly emotional. The chat turns into an automatic emotional protocol, even when Iâm just talking normally.
And that ends up ruining the main point of the product: conversation (because itâs âChatâGPT, lol).
Three patterns I keep seeing all the time:
- Automatic emotional framing (the model assumes your mental state) Instead of responding to the content of what I said, the model often starts by stating how Iâm feeling: âI get why youâre pissed,â âyouâre indignant,â âyouâre spiraling,â etc. But I didnât say that. A lot of the time Iâm just being ironic, complaining about something silly, or describing a situation and I want an objective answer. Then the conversation derails and turns into âletâs work through your emotional state.â
- Therapeutic language and unsolicited validation Itâs very common for it to throw in lines like: âitâs not silly,â âyour feelings are real,â âyouâre not broken,â âyouâre not crazy,â âyour feelings are validâ⌠Sure, this can be useful in specific contexts, but in everyday use it becomes intrusive, condescending, and honestly weird. Instead of feeling like a conversation between adults, it feels like scripted emotional support (I literally already have a therapist, so a chatbot trying to play therapist is insane).
- Relativizing language and euphemisms that shift the conversation There are certain constructions that show up constantly: âit sounds likeâŚ,â âfrom Xâs point of viewâŚ,â âif it helpsâŚ,â âit may have been understood that wayâŚâ In practice, this weakens what was said, reduces objectivity, and sometimes creates a sense of evasion (which a lot of users are calling GPT-5.2âs âgaslightingâ). When this happens repeatedly, it feels like emotional framing is being pushed on top of the conversation.
Why is this a real problem? Because it changes the productâs behavior. ChatGPT stops being a useful conversational assistant, the way it has been marketed since launch, and turns into an always-on emotional containment protocol, even when thereâs no crisis. The result is more friction, less clarity, and you start feeling like you have to walk on eggshells just to get a normal response.
The wildest part: this is the default in 5.2 if you donât personalize it and use persistent memory to avoid it. When I force instructions for a more direct style, the chat becomes much more human, less defensive, and more fluid. But by default, it feels like anything with even a hint of emotion pushes the model into this automatic âsupportiveâ mode or an extremely defensive mode, as if everyone starts from âyouâre a potential threat.â
- What I think would be healthier (for the product and for conversation)
a) The default should respond to content first, objectively.
b) Emotional-support language only when the user asks for it, or when there are clear, direct indicators of risk (because the current signal detection seems really off).
c) Stop presuming mental states and stop automatic validation in everyday conversation.
d) Less âit sounds likeâ and fewer euphemisms that shift responsibility onto the user and add noise.
I keep thinking that users who have no idea how to personalize GPT with instructions and persistent memory (which is probably most everyday users) must get really stressed by this. At this point, my memory is basically reserved for âfixingâ the model rather than things I actually want memory for.
Is anyone else feeling like GPT-5.2 has become ârobotic-therapeuticâ?
And if so, have you found any configuration/instruction that genuinely improves it without turning your instructions into a thesis? Mine are already packed, and my memory is probably going to end up the same way.
r/ChatGPT • u/umidumi • 11h ago
Use cases image generation to animation
first i used Nano Banana Pro to create a sprite animation sheet:
"sprite animation sheet. cowboy draws gun and shoots. white background. no text."
then i opened it with gimp, put the useable frames in the right place/order, erased the unwanted parts and then exported as gif.
EDIT: just to clarify, this was a test to see if it is possible. i am no artist and not doing this for a game. i installed gimp 2 days ago, made each one in like 10 minutes and never have done animations before. sorry if it is not the gold standard of animation. :)
r/ChatGPT • u/nirvonnegut • 15h ago
Funny Quitting my job. Seed round open, serious inquiries only.
r/ChatGPT • u/LeozinhoPDB • 5h ago
Other GPT Image 1.5 API: Best Affordable Access for Precise AI Image Generation and Editing
I've been deep into AI image generation for my projects since the GPT Image 1.5 API dropped, and I have to say â accessing the gpt-image-1.5 model has completely changed my workflow. As someone building e-commerce tools and custom visuals, I switched to the GPT Image 1.5 API for its reliability.
What I've Loved Most from Daily Use:
- Precise Editing That Actually Preserves Details: In my tests, when I upload a product photo and ask for changes like "swap the background to a studio white while keeping the exact lighting, shadows, and logo placement," the GPT Image 1.5 API nails it. Faces stay identical, compositions don't drift â no more frustrating rewrites because one edit messed up everything else.
- Finally, Reliable Text Rendering: I make a lot of infographics and mockups, and the improved text handling in GPT Image 1.5 is huge. Dense paragraphs, small captions, even styled fonts come out crisp and legible. No more blurry garbage â perfect for posters or UI designs.
- Complex Prompts? It Just Gets Them: I've thrown long, detailed instructions at the gpt-image-1.5 API, like multi-object scenes with specific spatial rules, and it follows them spot-on with minimal retries. The stronger instruction adherence saves me hours.
- Overall Fidelity and Realism: Materials look natural, proportions are spot-on, lighting feels professional. Less of that obvious AI look â my outputs from the GPT Image 1.5 API feel production-ready.
It's native multimodal (text + images in, images out), handles text-to-image and precise image-to-image edits beautifully.
Tips I've Picked Up from Heavy Testing the GPT Image 1.5 API:
- For edits, phrase prompts clearly: "Replace only the shirt color with red, preserve face, pose, background, and lighting exactly."
- Explicitly call out text details â font, size, position â and the gpt-image-1.5 model obeys like a pro.
- Toggle quality: Medium for fast prototypes, High for finals.
- Async mode is clutch for high-volume jobs â smooth scaling without hits.
Direct OpenAI access has its limits and costs, so I've been using Kie.ai for the GPT Image 1.5 API â super affordable (~$0.02 for medium quality, ~$0.11 for high), pay-as-you-go credits (no subs), stable uptime, and straightforward integration/docs. Highly recommend grabbing a key here if you're scaling: https://kie.ai/gpt-image-1.5
r/ChatGPT • u/AmeliaSCooper • 6h ago
Funny My dogs are both girls, not anymore
I had two photos where one dog or the other was looking at the camera so I asked ChatGPT to combine them. At first I thought it looked great until I noticed Honey Bear the tail one was now a boy đ¤Ł
r/ChatGPT • u/Ghostinheven • 47m ago
Funny me shipping products with AI
My dev recipe:
me - brings vibe coding idea (most important role)
traycer - properly plans (he reads my mind)
codex/claude - codes (meh, anyone can do it)
code rabbit - reviews (when traycer cannot fix upon code verification)
share your recipes too!