It’s gonna be so fucking rich that the staggering mass of stupidity online prevents us from improving an AI beyond our intelligence level.
Thank the shitposter in your life.
You can’t really blame the amount of stupidity online.
The problem is that ChatGPT (and other LLMs) produce content of the average quality of their input data. And AI is not limited to LLMs.
For chess we were able to build AI that vastly outperforms even the best human grandmasters. Imagine if we were to release a chess AI that is just as good as the average human…
We call them chess AI, but they’re not actually real AI. Chess bots work off opening books (predetermined best practices), then analyze each position and its potential offshoots with an evaluation function.
They then brute-force positions until they find a line that is beneficial.
While it may sound very similar, it works very differently from an AI. However, it turned out that AI software became better than humans at writing these evaluation functions.
So in a sense, chess computers are not AI; they’re created by AI. At least Stockfish 12 has these “AI-inspired” evaluations. (Currently they’re on Stockfish 15, I believe.)
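The classic engine loop described above, a hand-written evaluation function plus brute-force search, can be sketched in a toy form. This is purely illustrative, not a real engine: positions here are just strings of piece letters, and move generation is a stand-in supplied by the caller.

```python
# Toy sketch of a classic chess engine's two ingredients (NOT a real engine):
# a hand-written evaluation function and a brute-force minimax search.
# A "position" is a string of piece letters; uppercase is White, lowercase Black.

def evaluate(position):
    """Hand-crafted heuristic: material balance, positive favours White."""
    values = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9,
              "p": -1, "n": -3, "b": -3, "r": -5, "q": -9}
    return sum(values.get(piece, 0) for piece in position)

def minimax(position, depth, maximizing, moves_fn):
    """Brute-force every line of play down to `depth`; moves_fn(position)
    returns the child positions, and leaves are scored with evaluate()."""
    children = moves_fn(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = [minimax(child, depth - 1, not maximizing, moves_fn)
              for child in children]
    return max(scores) if maximizing else min(scores)
```

The “AI-inspired” shift in Stockfish 12 (NNUE) essentially replaced the hand-written `evaluate()` with a small neural network trained on engine evaluations, while the brute-force search loop stayed.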
And yes, we also made “chess AI” that is as bad as the average player. We even made some that are worse, because we figured it would be nice if people could play a chess computer on the same skill level as themselves, rather than just being destroyed every time.
The definition of “AI” is fuzzy and keeps changing. Basically, once an AI use case becomes solved and widespread, it stops being seen as AI.
Face recognition, OCR, speech recognition, all those used to be considered AI but now they’re just an app on your phone.
I’m sure in a few years we’ll stop thinking about text generation as AI, but just one more tool we can leverage.
There is no clear definition of “real AI”.
Those are all still AI. Scientists still have a functional definition that includes these plus more scripted AI like in video games.
Essentially, any algorithm that learns and acts on information that has not been explicitly programmed is considered AI.
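Under that functional definition even a tiny program qualifies. Here is a minimal sketch (a one-dimensional nearest-centroid classifier, purely illustrative): the decision rule is derived from labelled example data rather than explicitly programmed.

```python
# Purely illustrative: a one-dimensional nearest-centroid classifier.
# The programmer never writes the decision rule; it is "learned" from data.

def train(examples):
    """examples: list of (number, label) pairs. Returns the mean value per label."""
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Classify x by whichever learned centroid it lies closest to."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))
```

Feed it different examples and you get a different classifier, with no code changes, which is the “acts on information that has not been explicitly programmed” part.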
Shitposting saves jobs
Shitposters on the Internet are the new clogs in the machine
I’m not too surprised, they’re probably downgrading the publicly available version of ChatGPT because of how expensive it is to run. Math was never its strong suit, but it could do it with enough resources. Without those resources, it’s essentially guessing random numbers.
Must be because of all the censoring. The more they try to prevent DAN jailbreaking and controversial replies, the worse it got.
This is my experience in general. ChatGPT went from amazingly good to overall terrible. I was asking it for snippets of JavaScript and explanations of technical terms, and it was shockingly good. Now I’m lucky if even half of what it outputs is even remotely based in reality.
They probably laid off the guy behind the curtain.
I’ve never been able to get a solution that was even remotely correct. Granted, most of the time I ask ChatGPT, it’s because I’m having a hard time solving the problem myself.
If OpenAI is being roadblocked by all these social platforms why doesn’t it decentralize and use the fediverse to learn?
I mean, who’s to say they aren’t? But also, the fediverse is worthless compared to the big players. The entirety of the fediverse’s content to date is like a day’s worth of Twitter or Reddit content.
Stop making a language model do math? We already have calculators.
Do you think maybe it’s a simple and interesting way of discussing changes in the inner workings of the model, and that maybe people know that we already have calculators?
I think it’s a lazy way of doing it. OpenAI has clearly stated that math isn’t something they are even trying to make it good at. It’s like testing how fast Usain Bolt is by having him bake a cake.
If ChatGPT is getting worse at math, it might just be a side effect of them making it better at reading comprehension or something else they actually want it to be good at; there is no way to know.
Measure something it is supposed to be good at.
All the things it’s supposed to be good at are judged completely subjectively.
That’s why, unless you have a panel of experts in your back pocket, you need something with a yes-or-no answer to have an interesting discussion.
If people were discussing ChatGPT’s code-writing ability, you’d complain that it wasn’t designed to do that either. The problem is that it was designed to transform inputs to relatively believable outputs representative of its training set. Great. That’s not super useful. Its actual utility comes from its emergent behaviours.
Lemme know when you make a post detailing the opinions of some university “transform inputs to outputs” professors. Until then, we’ll continue to discuss its behaviour in observable, verifiable and useful areas.
We have people who assign numerical values to people’s ability to read and write every day. They’re called English teachers. They test all kinds of things, like vocabulary, reading comprehension and grammar, and in the end they assign grades to those skills. I don’t even need tiny professors in my pocket; they’re just out there being teachers to children of all ages.
One of the tasks I gave ChatGPT was to name and describe 10 dwarven characters. Their names had to be adjectives, like Grumpy, but the description could not be based on the name: a dwarf named Grumpy has to be something other than grumpy.
ChatGPT wrote 5 dwarves that followed the instructions and then defaulted to describing each remaining dwarf based on his name: Sneezy was sickly, Yawny was lazy, and so on. That gives it a score of 5/10 on the task.
There is a whole tapestry of clever language-focused tests like this that probe a natural language model’s abilities without involving a bunch of numbers.
OK, you go get a panel of high school English teachers together and see how useful their opinions are. Lemme know when your post is up; I’ll be interested then.
Sorry, I thought we were having a discussion when we were supposed to just be smug cunts. I will correct my behaviour in the future.