I want to take a moment to talk a little bit about what GPT-like technology is, and why I started this blog.
Every week or so, I see a sensationalist article in the press about some large language model saying something that someone considers bad.
We all need to realize two things:
(1) language models don't offend people, it's the prompters that offend people
(2) the AI companies are in a Chinese finger-trap style "damned if you do, damned if you don't" scenario.
When they had near-zero content moderation in place, the press skewered them for the kinds of things one can get the AI models to spit out.
When they put some form of decency boundaries in place, it amplifies the existing cultural decency movements (whether appropriate, misdirected, or extremist), and the press skewers them for the kinds of things one can still get the AI to spit out.
This pattern repeats every week about a new topic. Sexuality, Gender, Politics, Violence, Accuracy -- on and on.
ChatGPT, GPT-3, Google Bard, and other large-language-model AI technology is not sentient, it is not racist, and it is not violent...
...it is a natural language constraint satisfaction programmable auto-completion engine.
Let's see what ChatGPT can do to simplify that:
restate "natural language constraint satisfaction programmable auto-completion engine" in fewer simpler words
A program that helps you type faster by suggesting words and phrases, based on the rules of language.
Nice! That is more plain to understand.
GPT technology takes a natural language document and completes it, one word at a time, using a remarkably sophisticated application of mathematics and statistics, along with a bit of randomness.
That's it. Nothing more, nothing less.
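The "one word at a time" process can be sketched with a toy example. Everything below is invented for illustration: the tiny vocabulary and the hand-made probability table stand in for a real model's billions of learned parameters, and a real model conditions on the entire document rather than just the previous word. But the loop itself (pick the next word from a probability distribution, append it, repeat) is the essential shape of the thing:

```python
import random

# Toy illustration only: a hand-made table of next-word probabilities.
# A real language model learns these distributions from vast amounts
# of text and conditions on the whole preceding document.
next_word_probs = {
    "the": [("cat", 0.5), ("dog", 0.3), ("model", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("sat", 0.3)],
    "model": [("predicts", 1.0)],
}

def complete(prompt_word, steps=3, seed=None):
    """Append words one at a time, each drawn from a probability
    distribution over possible next words (statistics plus a bit
    of randomness)."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(steps):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break  # no known continuation for this word
        tokens, weights = zip(*choices)
        words.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(complete("the", seed=42))
```

Note that the same prompt can yield different completions on different runs; that "bit of randomness" is why asking the same question twice can produce different answers.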
The only way to make the technology never do anything the press can sensationalize with ignorant stupidity is to make the technology not do anything at all.
That's not to say we shouldn't improve objective measures, like the accuracy or politeness of its single-prompt responses. However, keep in mind that content moderation that blocks certain responses also censors the knowledge and ideas the language models can reason about, and as a policy that's not necessarily a good thing (unless we're only talking about front-page search results, and then, moderate away).
As we all explore and learn about this new technology, remember that when we receive an objectionable result, the technology is not at fault. It is merely combining the knowledge and bias embedded in our collective works of language with the constraints imposed in the context prompt. Just like any one of us could choose to "act out" an objectionable rant whether we believed it or not, so too can the technology act out an objectionable rant, especially when prompted to do so by click-bait-motivated journalists.
It's time to stop looking for the worst thing you can get the technology to do -- and start to look for the best thing you can get the technology to do, such as in my first post:
Poetry writing with GPT-3 and ChatGPT
Welcome to my blog. Happy prompting.