I do feel deeply suspicious about this supposedly miraculous AI, to be fair. It just seems too amazing to be true.
You can run it yourself, so that rules out it secretly being humans doing the work, the way Amazon’s no-checkout stores turned out to be staffed by workers in India.
Other than that, yeah, be suspicious, but OpenAI’s models have way more weirdness around them than this company’s.
I suspect that OpenAI and the rest just weren’t doing research into lowering costs because it makes no financial sense for them. As in, it’s not a better model, it’s just cheaper to run, which makes it easier for others to catch up.
Mostly, I’m suspicious about how honest the company is being about the cost to train the model; that’s one thing that is very difficult to verify.
They explained how you can train the model in order to create a similar AI in their white paper: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf
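The core of what the paper describes is an RL algorithm called GRPO: sample a group of answers per prompt, score each one, and normalize the rewards within the group instead of training a separate value/critic model. A tiny sketch of that advantage computation (my own illustration of the formula in the paper, not their code):

```
import numpy as np

# Group-relative advantage, as described in the R1 paper: rewards for a
# group of sampled completions to one prompt are normalized against the
# group's own mean and standard deviation.
def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Four sampled answers to one prompt; only the last two judged correct:
print(group_relative_advantages(np.array([0.0, 0.0, 1.0, 1.0])))
# -> correct answers get positive advantage, wrong ones negative.
```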
Does it matter, though? It’s not like you will train it yourself, and US companies are also still in the dumping stage, selling access below cost.
It does, because the US stock market just lost around a trillion dollars in value on the news that this company can supposedly train an AI for cents on the dollar compared to what a US company spends.
It seems to me that understating the cost and complexity of training would cause a lot of problems for the States.
It’s open source and people are literally self-hosting it for fun right now. The current consensus appears to be that it’s not as good as ChatGPT for many things. I haven’t personally tried it yet. But either way there’s little to be “suspicious” about, since it’s self-hostable and you don’t have to give it internet access at all, so it can’t call home.
https://www.reddit.com/r/selfhosted/comments/1ic8zil/yes_you_can_run_deepseekr1_locally_on_your_device/
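If you want to try it without it ever touching the network, this is roughly what that looks like with one of the smaller distilled checkpoints via Hugging Face transformers (a sketch: it assumes you already downloaded the model beforehand, and the model ID/VRAM requirements are things to double-check for your hardware):

```
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # refuse all network access at runtime

from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the published distilled R1 checkpoints, small enough for a
# single consumer GPU; assumed to be in the local cache already.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Why is the sky blue?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```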
Is there any way to verify the compute cost of training the model, though? That’s the most shocking claim they’ve made, and I’m not sure how you could verify it.
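For what it’s worth, the headline figure traces back to the DeepSeek-V3 technical report (R1 is built on V3-Base), which claims roughly 2.788M H800 GPU-hours at an assumed $2 per GPU-hour rental rate, and explicitly excludes research, ablations, and data costs. The arithmetic is trivial to check; it’s the inputs you have to take on trust:

```
# Back-of-envelope check of the headline training-cost claim.
# Both numbers are the DeepSeek-V3 report's own figures, not independently
# verifiable from the outside.
gpu_hours = 2_788_000      # claimed total H800 GPU-hours
price_per_gpu_hour = 2.0   # assumed rental price in USD
print(f"${gpu_hours * price_per_gpu_hour:,.0f}")  # -> $5,576,000
```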
Open source means it can be publicly audited to help soothe suspicion, right? I imagine that would take time, though, if it’s incredibly complex.
Open source is a very loose term when it comes to GenAI. With Llama, for example, the weights are available with few restrictions, but importantly, how it was trained is still secret. Not being reproducible doesn’t seem very open to me.
True, but in this case I believe they also open-sourced the training data and the training process.
Their paper outlines the training process but doesn’t supply the actual data or training code. There is a project on Hugging Face that is attempting a fully open recreation based on what is public: https://huggingface.co/blog/open-r1
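The most recoverable part is the reward design: per the paper, R1-Zero was trained with simple rule-based rewards, one for answer accuracy and one for keeping the reasoning inside <think> tags, rather than a learned reward model. A toy sketch of what such rewards could look like (my guess at the shape, not DeepSeek’s or open-r1’s actual code):

```
import re

def format_reward(completion: str) -> float:
    # 1.0 if the model wrapped its reasoning in <think>...</think>
    # and produced something after it, else 0.0.
    return 1.0 if re.match(r"(?s)<think>.+?</think>\s*\S", completion) else 0.0

def accuracy_reward(completion: str, reference: str) -> float:
    # 1.0 if the text after the reasoning block matches the known answer.
    # Real verifiers normalize math expressions or run test cases instead
    # of doing an exact string comparison.
    answer = completion.split("</think>")[-1].strip()
    return 1.0 if answer == reference else 0.0

out = "<think>7 times 6 is 42.</think>42"
print(format_reward(out), accuracy_reward(out, "42"))  # 1.0 1.0
```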