Ask a man his salary. Do it. How else are you supposed to learn who is getting underpaid? The only way to rectify that problem is to learn about it in the first place.
I think context is important here. Asking a co-worker their salary is fine. Asking about the salary of someone you’re on a date with is not fine.
Exactly.
You should have asked them for their W-2 before agreeing to meet.
Yeah and get their credit score before you even reply.
Under the NLRA, discussing wages is a protected right, and the NLRB enforces it.
Talk about your wages.
45 can fix that
Plans to, too.
Ask a woman her age. Do it. How else are you supposed to learn who is getting older? The only way to celebrate that is to learn about it in the first place.
“Publicly available data” - I wonder if that includes Disney’s catalogue? Or Nintendo’s IP? I think they are veeery selective about their “publicly available data”. It also implies the only requirement for training data is that it is publicly available, which describes almost every piece of media ever. How an AI model isn’t public domain by default baffles me.
Great articles; the first is one of the best I’ve read about the implications of fair use. I’d argue that because of the breadth of human knowledge interpreted through these models, everyone is entitled to unrestricted access to them (not the servers or algorithms used, the models themselves). I’ll dub it “the library of the digital age” argument.
Great point.
There is a rumor that OpenAI downloaded the entirety of LibGen to train their AI models. No definitive proof yet, but it seems very likely.
https://torrentfreak.com/authors-accuse-openai-of-using-pirate-sites-to-train-chatgpt-230630/
“It just like me fr fr” (cit.)
The problem is that if copyrighted works are used, you could generate a copyrighted artwork that would then fall into the public domain, stripping its protection. I would love this approach; the problem is the lobbyists don’t agree with me.
Not necessarily, if a model is public domain, there could still be a lot of proprietary elements used in interpreting that model and actually running it. If you own the hardware and generate something using AI, I’d say the copyright goes to you. You use AI as the brush to paint your painting and the painting belongs to you, but if a company allows you to use their canvas and their painting tools, it should go to them.
I think that if I paint a Mario artwork with my own brush that isn’t up to Nintendo’s standards, they have the legal power to take it down from wherever I upload it.
Really? Even if your artwork isn’t used in a commercial way?
I’m really not in the know about these things, but I have seen free fan games taken down because they used copyrighted property even though the creators don’t receive a penny.
I’ll compare it with the recent takedown of the Switch emulator Yuzu. It’s my understanding they actively solicited donations and facilitated piracy, both of which could be seen as commercial activities. In a project of that scale, the latter was their downfall; meanwhile, Ryujinx is still up and running. But we’ll see if that remains true.
Copyrights and IP laws don’t only come into effect if profit is made. Fan products are usually tolerated by companies because it’s free advertising and fans get angry when it does get taken down.
When a fan product starts making money, it’s usually because it directly competes with the original IP, and then they act. Even then, Etsy has thousands of shops with copyrighted content, but the small profit loss doesn’t justify the loss of reputation for the companies.
That being said, it’s the user who uploads it who is at fault and not the tool used to create it.
Ultimately, I think it’s the platforms that let users upload copyrighted content and celebrity likenesses that should be at fault. Take, for example, the Taylor Swift debacle. An image generator was used to create the images, sure, but Twitter chose to let them float on their website for a whole day even though they were most likely reported within the first 5 minutes.
There’s also the fact that if we start demanding AI doesn’t use copyrighted content, it kills the open source scene and cements Google and Microsoft’s grip on our economy as we move towards an AI driven society.
Yes, fanart is almost certainly copyright infringement unless the copyright holder grants a license. Many companies have an official license for non-commercial fanart and generally nobody cares about it but if someone really wanted to they could absolutely file takedown requests against all fanart of their work.
The existing legal precedent in most places is that most use of ML doesn’t count as human expression and doesn’t have copyright protection. You have to have significant control over the creation of the output to have copyright (the easiest workaround is simply manually modifying the ML output and then only releasing the modified version).
The existing legal precedent
I know that’s how law works, but there is no precedent for AI at this scale, and it will only get worse. What if AI gains full sentience? Would it be a legally recognised person? Would it have rights, and would it not own the copyright itself? All very good questions with no precedent in law.
The law says human creative expression
At what point does human creative expression become a sentient being?
If you rent a brush to paint with, is the painting not yours? If you rent a musical instrument to record an original song with, is the song not yours?
Read the fine print on that agreement
Exactly! When you pay for a service you own the copyright, like having a Photoshop license. I meant other situations where it’s free or provided as a research tool to engineers under a company.
It’s almost impossible to audit what data went into an AI model. Until that changes, companies can scrape and use whatever they like, and no one will be the wiser about what data got used or misused in the process. That makes it hard to hold such companies accountable for what they are using and how.
Then it needs to be on companies to prove their audit trail, and until then require all development to be open source
That would be amazing. But it won’t happen any time soon, if ever… I mean, just think about all that investment in GPU compute and the need to realize good profit margins. Until there are laws that require AI companies to open their data pipelines and make public all details about their data sources, I don’t think much will happen. They’ll just keep feeding in any data they get their hands on, and nothing can stop that today.
Until there are laws that require AI companies to open their data pipelines and make public all details about their data sources, I don’t think much will happen.
I don’t expect those laws to ever happen. They don’t benefit large corporations so there’s no reason those laws would ever be prioritized or considered by lawmakers, sadly.
Maybe not today, and maybe not every AI, but some AI in the near future will have its data sources made explainable. There are a lot of applications where deploying AI would be an improvement over what we have.

One example I can bring up is in-silico toxicology experiments. There’s been a huge desire to replace as many in-vivo experiments as possible with in-vitro or, even better, in-silico ones, to minimize the number of live animals tested on, both for ethical reasons and for cost savings. AI has been proposed as a new tool to accomplish this, but it’s not there yet. One of the biggest challenges to overcome is making the in-silico AI models explainable, because we cannot regulate effectively what we cannot explain.

Regardless, there is a profit incentive for AI developers to make at least some AI explainable; it’s just not where the big money is. To what extent that will apply to all AI, I haven’t the slightest idea. I can’t imagine OpenAI would do anything to expose their data.
Anyone know why most have a 2021 internet data cutoff?
Training from scratch and retraining is expensive. Also, they want to avoid training on ML outputs; they want primarily human-made works as samples, and after the initial public release of LLMs it has become harder to create large datasets without ML-generated content in them.
There was a good paper that came out recently showing that training on ML-generated data results in model collapse. It’s going to be really interesting; I don’t know if they’ll be able to train as easily ever again.
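The failure mode being described can be sketched as a toy simulation (my own hypothetical setup, not the paper’s actual experiments): repeatedly fit a simple Gaussian “model” to samples drawn from the previous generation’s fit. Finite-sample estimation error compounds across generations, and the learned spread drifts toward zero, which is the same basic mechanism behind collapse when models train on their own outputs.

```python
import random
import statistics

def collapse_demo(n_samples=100, generations=2000, seed=0):
    """Toy model-collapse illustration: each 'generation' fits a Gaussian
    to the previous generation's samples, then resamples from that fit.
    Finite-sample error compounds, and the fitted spread shrinks."""
    rng = random.Random(seed)
    # Generation 0: real "human" data from a standard normal.
    data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    initial_std = statistics.stdev(data)
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        # The next generation trains only on the previous model's output.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
    return initial_std, statistics.stdev(data)

initial, final = collapse_demo()
print(initial, final)  # the fitted spread collapses over generations
```

With real models the dynamics are far messier, but the direction is the same: each generation forgets the tails of the distribution it was trained on.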
I recall spotting a few things about image generators having their training data contaminated with generated images, and the output becoming significantly worse. So yeah, I guess LLMs and image generators need natural sources, or it gets more inbred than the Habsburgs.
I think it’s telling that they acknowledge that the stuff their bots churn out is often such garbage that training their bots on it would ruin them.
I think it’s just that most are based on chatgpt which cuts off at 2021.
Hey, did you know your profile is set to appear as a bot and as a result many may be filtering your posts and comments? You can change this in your Lemmy settings.
Unless you are a bot… In which case where did you get your data?
The data wasn’t stolen, I can at least assure you of that
You paid Hoffman?
Where do you get that from? At least ChatGPT isn’t limited to data from 2021. I haven’t researched about other models.
GPT-3.5 is limited to 2021. GPT-4, 4.5, and the imaginary upcoming GPT-5 models are not, but that does not mean they aren’t limited in their own ways.
Are you sure those aren’t trained until 2021, frozen, and then fine tuned on later data?
I really don’t know, I’m speculating, but neither does OpenAI, that’s for sure. So we have the most popular ML system, used by millions, based on… what, exactly?
Yeah GPT 3.5 and some other FOSS models also say 2021
To be fair this tweet doesn’t say anything about training data but simply that it theoretically can use present day data if it looks it up online.
For GPT-4, I think it was initially trained on data up to 2021, but it has gotten updates where data up to December 2023 was used in training. It “knows” this data and does not need to look it up.
Whether they managed to further train the initial GPT-4 model to do so or added something they trained separately is probably a trade secret.
Thanks!
I love that it isn’t just an image of the OpenAI logo but also a sad person beside it.
Oh, that is not just some person, that’s the CTO of "Open"AI when asked if YT videos were used to train Sora.
Lying MF, unbelievable that’s the best they thought of.
I’m sorry, but we’ve made an internal decision not to reveal our proprietary methodology at this time.
There, now it’s not a lie (hurr durr I’m only the CTO how would I know whether a tiny startup like YOUTUBE was one of our sources)
Here is an alternative Piped link(s):
https://piped.video/mAUpxN-EIgU?feature=shared&t=270
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
What’s wrong with her face?
Poor training data presumably.
🤣
It’s this face: https://www.compdermcenter.com/wp-content/uploads/2016/09/vanheusen_5BSQnoz.jpg
She was asked about OpenAI using copyrighted material for training data and literally made that face. The only thing more perfect would’ve been if she had tugged at her collar while doing it.