Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
I’ll gladly take the karmic hit on your behalf and wish it on Kissinger twice. Once going out, then again going back in.
Albertan here. A couple of years back my brother and my dad both died of cancer (an unrelated coincidence) and I had the same experience - there was never a moment of stress about money. It also never felt like there were any untoward delays; when a situation was urgent we were able to jump straight to the surgery/MRI/whatever. There were a few times when we had to wait a few weeks for an appointment, but those were always the low-priority or follow-up things.
I know a lot of people think of Alberta as “North Texas” and imagine it’s an American-style hellscape, but even if it’s a little below general Canadian standards on some things, it’s nowhere near that. It’s important to be aware of the baselines that things are measured against.
Yet another case of the Russian military apparatus looking impressive but turning out to be made of papier-mâché and corruption when put to the actual test.
I assume that some of those bunkers had nice mixtures of explosives and incendiaries in them, and when they went off they fountained themselves all over their neighbors.
Also, what do you mean by synthetic data? If it’s made by AI, that’s how collapse happens.
But that’s exactly my point. Synthetic data is made by AI, but it doesn’t cause collapse. The people who keep repeating the “AI fed on AI inevitably dies!” headline are ignorant of the way this actually works, of the details that actually matter when it comes to what causes model collapse.
If people want to oppose AI and wish for its downfall, fine, that’s their opinion. But they should do so based on actual data, not an imaginary story they pass around among themselves. Model collapse isn’t a real threat to the continuing development of AI. At worst, it’s just another checkbox that AI trainers need to tick on their “am I ready to start this training run?” checklist, alongside “have I paid my electricity bill?”
The problem with curated data is that you have to, well, curate it, and that’s hard to do at scale.
It was, before we had AI. It turns out that’s another aspect of synthetic data creation that can be greatly assisted by automation.
For example, the Nemotron-4 AI family that NVIDIA released a few months back is specifically intended for creating synthetic data for LLM training. It consists of two LLMs, Nemotron-4 Instruct (which generates the training data) and Nemotron-4 Reward (which curates it). It’s not a fully automated process yet but the requirement for human labor is drastically reduced.
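As a rough sketch of the shape of that pipeline (nothing here is NVIDIA’s actual API; `generate` and `score` are hypothetical stand-ins for calls to the Instruct and Reward models):

```python
# Hypothetical sketch of a generate-then-curate synthetic data pipeline,
# in the spirit of the Nemotron-4 Instruct/Reward split. The generate()
# and score() functions are placeholders, not real model calls.

def generate(prompt: str) -> str:
    """Stand-in for the instruct model producing a candidate sample."""
    return f"A synthetic answer to: {prompt}"

def score(sample: str) -> float:
    """Stand-in for the reward model rating sample quality (0.0 to 1.0)."""
    return 0.9 if len(sample.split()) > 4 else 0.1

QUALITY_THRESHOLD = 0.8  # made-up cutoff; a real pipeline would tune this

def build_dataset(prompts: list[str]) -> list[str]:
    dataset = []
    for prompt in prompts:
        candidate = generate(prompt)
        # Only keep candidates the reward model rates highly. The human
        # labor shifts from writing the data to spot-checking the filter.
        if score(candidate) >= QUALITY_THRESHOLD:
            dataset.append(candidate)
    return dataset

print(build_dataset(["Explain model collapse in one sentence."]))
```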
the only way to guarantee training data isn’t from its own model is to make it yourself
But that guarantee isn’t needed. AI-generated data isn’t a magical poison pill that kills anything that tries to train on it. Bad data is bad, of course, but that’s true whether it’s AI-generated or not. The same process of filtering good training data from bad training data can work on either.
It’s not wrong for either to draw inspiration from the other. It’s the hypocrisy that’s wrong.
I’ve made similar points in the past in discussions about robot soldiers going to war. There’s an upside to these things that people insist on overlooking: they follow their programming. If you program a robot soldier to never shoot at an ambulance, then it will never shoot at an ambulance, even if it’s having a really bad day. Same here: if the security robot has been programmed never to leave the public sidewalk, then it’ll never leave the public sidewalk.
It’s always possible for these sorts of things to be programmed to do the wrong things, of course. But at least now we have the ability to audit that sort of thing.
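To illustrate, the sidewalk rule can literally be a few lines of code sitting between the planner and the motors, with the audit trail built in. This is a toy sketch with made-up coordinates and names, not anyone’s real control stack:

```python
# Toy sketch of an auditable hard constraint: movement commands are
# rejected before execution if they'd take the robot off the permitted
# zone. All coordinates and names here are invented for illustration.

# Corners of the permitted sidewalk area, as an axis-aligned rectangle.
X_MIN, X_MAX = 0.0, 100.0
Y_MIN, Y_MAX = 0.0, 3.0

def inside_zone(x: float, y: float) -> bool:
    """True if (x, y) lies within the permitted rectangle."""
    return X_MIN <= x <= X_MAX and Y_MIN <= y <= Y_MAX

def execute_move(x: float, y: float, audit_log: list[str]) -> bool:
    """Check the constraint and record the decision either way."""
    if not inside_zone(x, y):
        audit_log.append(f"REJECTED move to ({x}, {y}): outside permitted zone")
        return False
    audit_log.append(f"ACCEPTED move to ({x}, {y})")
    return True

log: list[str] = []
execute_move(50.0, 1.5, log)   # stays on the sidewalk: accepted
execute_move(50.0, 10.0, log)  # off the sidewalk: refused, no bad day required
print("\n".join(log))
```

The point isn’t that a rule like this is impossible to get wrong; it’s that the rule and its log are right there to be inspected.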
Are you suggesting that the same amount of crime is happening but they’re deciding not to report it because there’s a robot there? That’s the measure they’re touting, the reduction in crime reports.
You joke, but presumably that’s when it recharges.
It’s a common pattern. Something actually bad exists, and a word is invented to describe that bad thing. People want to call the things they don’t like by that bad word, even if it’s not quite right, so the definition starts to widen a bit. It’s a very bad thing, so it’s good to call things you don’t like by that word; it makes everyone else hate them too! The word stretches and stretches, and eventually everything vaguely bad is called that word. It loses its meaning.
A new word is invented to describe some specific actually bad thing. Repeat.
Things change. There was a period before this information was easily available; this repository only goes back to 2013. Now there’s a period after this information, too. Things start and eventually they end.
Here’s hoping that some neat new things start up in its place.
They’re not both true, though. It’s actually perfectly fine for a new dataset to contain AI-generated content, especially when it’s mixed in with non-AI-generated content. It can even be better in some circumstances; that’s what “synthetic data” is all about.
The various experiments demonstrating model collapse have to go out of their way to make it happen, by deliberately recycling model outputs over and over without using any of the methods that real-world AI trainers use to ensure it doesn’t happen. As I said, real-world AI trainers are actually quite knowledgeable about this stuff; model collapse isn’t some surprising new development that they’re helpless in the face of. It’s just another factor to include in the criteria for curating training data sets. It’s already a “solved” problem.
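The mitigations aren’t exotic, either. Here’s a minimal sketch of what “not blindly recycling outputs” looks like; the ratio and the filter are invented for illustration:

```python
# Toy sketch: assemble a training mix anchored in real data, admitting
# only a capped, filtered slice of synthetic data, rather than feeding
# a model its own raw output back in a loop. All numbers are illustrative.

import random

def passes_quality_filter(sample: str) -> bool:
    """Placeholder for whatever curation step a real trainer uses."""
    return len(sample.split()) >= 3

def build_training_mix(real_data: list[str], synthetic_data: list[str],
                       synthetic_fraction: float = 0.3) -> list[str]:
    curated = [s for s in synthetic_data if passes_quality_filter(s)]
    # Cap synthetic content relative to the real data so the mix stays
    # anchored in fresh, non-model-generated material.
    n_synth = min(int(len(real_data) * synthetic_fraction), len(curated))
    return real_data + random.sample(curated, n_synth)

real = [f"human-written sample {i}" for i in range(10)]
synth = [f"model-written sample {i}" for i in range(10)] + ["too short"]
mix = build_training_mix(real, synth)
print(len(mix))  # 13: ten real samples plus a capped, filtered synthetic slice
```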
The reason these articles keep coming around is that there are a lot of people who don’t want it to be a solved problem, and who love clicking on headlines that say it isn’t. I guess if it makes them feel better they can go ahead and keep doing that, but supposedly this is a technology community, and I would expect there to be some interest in the underlying truth of the matter.
No, researchers in the field knew about this potential problem ages ago. It’s easy enough to work around and prevent.
People who are just on the lookout for the latest “aha, AI bad!” headline, on the other hand, discover this every couple of months.
AI already long ago stopped being trained on any old random stuff that came along off the web. Training data is carefully curated and processed these days. Much of it is synthetic, in fact.
These breathless articles about model collapse dooming AI are like discovering that the sun sets at night and declaring solar power to be doomed. The people working on this stuff know about it already and long ago worked around it.
This is “technology news and articles”?
Seems like this place is increasingly just people yelling at AI-generated clouds.
Sometimes headshots develop spontaneously. It’s a rare condition, but convenient. Some claim John F. Kennedy suffered from this condition.
I recall seeing a list of the most dangerous jobs in America and “President of the United States” topped it due to the high percentage of people with that job who’ve been shot.
But at least that crappy bug-riddled code has soul!
In Tyreek’s post-arrest press conference he asked rhetorically, “What would have happened if I hadn’t been famous?”
Well, now we see. Wrist-slaps with no actual long-term impact.
The Fediverse seems a lot “bubblier” than Reddit, with people quicker to hit the downvote button on views that intrude on the bubble. I’ve lost a lot of drive to engage here; I find myself often dropping a comment into a discussion and then never looking back at it. Unfortunate, but I suppose not too surprising when communities are smaller.
I’m Canadian. I would say that I don’t think much about it in terms of current events; I haven’t heard much about it in the news in recent years. My assumption is that that’s probably a good sign. There used to be a steady stream of bad news, and “no news” lies along the path between “bad news” and “good news.”
I did see a video recently about Iraq’s plans for a giant new port facility on the little tidbit of Persian Gulf shoreline it has, with a road/rail link from it up through Turkey and thence onward into Europe. It sounded like a very promising development if it can be seen through to fruition, opening an alternative trade corridor to the Suez Canal. Anything that diversifies a country’s economy is a good thing, and anything that removes single points of failure from global shipping networks is also a good thing. I can’t imagine the Houthi obstruction of the Red Sea will still be a problem by the time that route opens up, but at least it’ll be an option if something like it happens again.