<div class="bbWrapper"><blockquote data-attributes="member: 5905" data-quote="MTF" data-source="post: 1072619"
class="bbCodeBlock bbCodeBlock--expandable bbCodeBlock--quote js-expandWatch">
<div class="bbCodeBlock-title">
<a href="/community/goto/post?id=1072619"
class="bbCodeBlock-sourceJump"
rel="nofollow"
data-xf-click="attribution"
data-content-selector="#post-1072619">MTF said:</a>
</div>
<div class="bbCodeBlock-content">
<div class="bbCodeBlock-expandContent js-expandContent ">
> Not sure when it happened, but ChatGPT has definitely been dumbed down recently.
>
> I used it relatively often some time ago. These days, whatever I ask it, it comes back with terrible answers, makes ridiculous mistakes, or doesn't even respond correctly to my prompt.
>
> I assume they lobotomized it for safety/wokeness/regulators. Either way, it's definitely no Google killer so far, as I have close to zero trust in what it says now.
Interesting. I personally can't remember a time when it produced reliable responses, but maybe there are areas where it did.

Another explanation, besides wokeness, might be compression. They might compress their large models and see what they can get away with. If they can use less powerful GPUs for inference (as opposed to training), they could increase their margins significantly.
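To be clear, this is pure speculation about the mechanism, but compression is cheap to apply after training. A minimal sketch of one common technique, post-training dynamic quantization in PyTorch (the toy model and sizes are my own illustration, not anything OpenAI has disclosed):

```python
import os
import torch
import torch.nn as nn

# Stand-in model: one MLP block, the same shape as a transformer's
# feed-forward layer (purely illustrative -- not OpenAI's actual setup).
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
)

# Post-training dynamic quantization: weights are stored as int8 and
# dequantized on the fly, with no retraining. Roughly 4x less memory
# for the linear layers, which is what lets inference run on smaller,
# cheaper GPUs (at some cost in output quality).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

for name, m in (("fp32", model), ("int8", quantized)):
    torch.save(m.state_dict(), f"{name}.pt")
    print(name, os.path.getsize(f"{name}.pt") // 2**20, "MB")
# fp32 -> ~32 MB, int8 -> ~8 MB
```

The catch is that quantization trades quality for cost, which from the outside would look exactly like the model getting dumber while the provider's GPU bill shrinks.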
<blockquote data-attributes="member: 9009" data-quote="theag" data-source="post: 1072630"
class="bbCodeBlock bbCodeBlock--expandable bbCodeBlock--quote js-expandWatch">
<div class="bbCodeBlock-title">
<a href="/community/goto/post?id=1072630"
class="bbCodeBlock-sourceJump"
rel="nofollow"
data-xf-click="attribution"
data-content-selector="#post-1072630">theag said:</a>
</div>
<div class="bbCodeBlock-content">
<div class="bbCodeBlock-expandContent js-expandContent ">
> I don't use ChatGPT much, but I do use GitHub Copilot for programming, which is essentially based on the same tech from OpenAI. Quality has definitely gone down, to the point of it getting in the way with bad auto-complete recommendations that are hard to cancel. The "hallucinations" of the current LLMs become much more apparent when it recommends class methods that don't exist.
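For anyone who hasn't run into this: the hallucinated method usually looks entirely plausible, it just isn't in the library. A hypothetical pandas-flavored example (the suggested call is made up, which is the point):

```python
import pandas as pd

df = pd.DataFrame({"user": ["a", "a", "b"], "score": [1, 1, 2]})

# The kind of completion a Copilot-style model might suggest:
# df = df.remove_duplicates()   # AttributeError -- no such method
# The method that actually exists in pandas:
df = df.drop_duplicates()
print(df)
```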
A colleague came across an article which said that, in experiments, code completion got worse as the models got "better": they learned to pick up on coding issues the programmer had made and ran with them in the generated code. That sounds like an alignment issue, but it suggests there might also be explanations other than them deliberately dumbing things down or going backwards.
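If that's right, the failure mode would look something like this contrived example (mine, not the article's): the model treats the surrounding code as style to imitate, so an existing bug gets continued instead of fixed.

```python
# Human-written code already in the file, with an off-by-one bug:
def first_n(items, n):
    return items[1:n + 1]    # bug: skips items[0]

# What a style-matching completion might generate next -- it picks up
# the off-by-one from context and runs with it instead of fixing it:
def last_n(items, n):
    return items[-n - 1:-1]  # repeats the bug: drops the last element
                             # (correct would be items[-n:])
```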