Ahh, I see lol
I’m not sure I understand at all?
It’s fully open source and can run/connect to any number of fully local models, as well as the big-name models if a user chooses to use them.
Can you expand on what you mean?
Thanks!
Unfortunately there isn’t a true RAG implementation at the moment, largely because this site/app is fully self-contained with no additional servers, database, etc., which RAG typically requires.
For now file uploads are stored in the browser’s own local database and the content can be extracted and added to the current conversation context easily.
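If anyone’s curious, the idea is just IndexedDB plus prepending the extracted text to the message list. A hand-wavy sketch of that — the store/key names are made up and this isn’t the app’s actual code:

```ts
// Rough sketch only: DB_NAME/STORE are made-up names for illustration.
const DB_NAME = 'chat-files';
const STORE = 'uploads';

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () => { req.result.createObjectStore(STORE, { keyPath: 'name' }); };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Store an uploaded file's extracted text in the browser's own database.
async function saveUpload(file: File): Promise<void> {
  const text = await file.text();
  const db = await openDb();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction(STORE, 'readwrite');
    tx.objectStore(STORE).put({ name: file.name, text });
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

// Pull the stored text back out and drop it into the current conversation context.
async function addToContext(
  fileName: string,
  messages: { role: string; content: string }[],
): Promise<{ role: string; content: string }[]> {
  const db = await openDb();
  const record = await new Promise<{ name: string; text: string } | undefined>((resolve, reject) => {
    const req = db.transaction(STORE, 'readonly').objectStore(STORE).get(fileName);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
  if (record) {
    messages.unshift({ role: 'user', content: `Contents of "${fileName}":\n${record.text}` });
  }
  return messages;
}
```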
I definitely want to add a fuller RAG system, but it’s a process to say the least, and if I implement it I want it to be quite effective. My experience with RAG has generally left me unimpressed, with a few quite decent implementations being the exception.
Web search is definitely something I want to add; I haven’t quite figured out the route I want to take implementing it just yet though.
Hopefully I can get it added sooner rather than later!
This project is entirely web based, built with Vue 3. It doesn’t use LangChain, and honestly I haven’t looked into it before, but I do see they offer a JS library I could utilize. I’ll definitely be looking into that!
As a result there is no LLM function calling currently, and from what I remember apps like LM Studio don’t support function calling when hosting models locally. It’s definitely on my list to add the ability to retrieve outside data, like searching the web and generating a response with the results, etc.
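To illustrate what that would involve: in the OpenAI format, function calling is basically a tools array in the request plus handling tool_calls in the response. This is purely a hypothetical sketch of a web-search tool, not anything in the app today; it only works if the backend actually supports the tools field, and the search helper, endpoint, and model name are made up:

```ts
// Hypothetical sketch of OpenAI-format tool calling for a web-search step.
// Nothing here is the app's real code; searchTheWeb() is a stand-in stub.
async function searchTheWeb(query: string): Promise<string[]> {
  return [`(stub) top result for "${query}"`]; // a real version would hit a search API
}

const tools = [{
  type: 'function',
  function: {
    name: 'search_web',
    description: 'Search the web and return a short list of results',
    parameters: {
      type: 'object',
      properties: { query: { type: 'string' } },
      required: ['query'],
    },
  },
}];

async function chatWithTools(baseUrl: string, model: string, messages: any[]): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, tools }),
  });
  const msg = (await res.json()).choices[0].message;
  const call = msg.tool_calls?.[0];
  if (!call) return msg.content; // model answered directly, no tool needed
  // Model asked for a search: run it, feed the result back as a tool message, then retry.
  const { query } = JSON.parse(call.function.arguments);
  const results = await searchTheWeb(query);
  messages.push(msg, { role: 'tool', tool_call_id: call.id, content: JSON.stringify(results) });
  return chatWithTools(baseUrl, model, messages);
}
```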
I haven’t personally tried it yet with Ollama, but it should work since Ollama can expose an OpenAI-compatible API: https://github.com/ollama/ollama/blob/main/docs/openai.md
I might give it a go here in a bit to test and confirm.
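For reference, the request is just the standard OpenAI shape pointed at Ollama’s compatibility endpoint. Rough sketch — the model name is a placeholder for whatever you’ve pulled, and LM Studio’s local server at http://localhost:1234/v1 accepts the same shape:

```ts
// Sketch of hitting Ollama's OpenAI-compatible endpoint (default port 11434).
// 'llama3' is a placeholder; use whatever model you've pulled locally.
async function chatWithLocalModel(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // same response shape as the OpenAI API
}
```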
Local models are indeed already supported! In fact any API (local or otherwise) that uses the OpenAI response format (which has become the de facto standard) will work.
So you can use something like LM Studio to host a model locally and connect to it via the local API it spins up.
If you want to get crazy…fully local in-browser models are also supported in Chrome and Edge currently. The app downloads the selected model in full, loads it onto your GPU through WebGPU, and lets you chat with it right in the browser. It’s more experimental and takes actual hardware power since you’re fully hosting a model in your browser itself.
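If you’re curious how that in-browser piece works in general, libraries like WebLLM do roughly the following. This is a rough sketch assuming @mlc-ai/web-llm, not the app’s actual code, and the model ID is just an example from WebLLM’s prebuilt list:

```ts
// Rough sketch using the WebLLM library (@mlc-ai/web-llm), not the app's actual code.
import { CreateMLCEngine } from '@mlc-ai/web-llm';

async function chatInBrowser(prompt: string): Promise<string> {
  // Downloads the model weights and runs them on your GPU via WebGPU.
  const engine = await CreateMLCEngine('Llama-3.1-8B-Instruct-q4f32_1-MLC', {
    initProgressCallback: (p) => console.log(p.text), // download/compile progress
  });
  const reply = await engine.chat.completions.create({
    messages: [{ role: 'user', content: prompt }],
  });
  return reply.choices[0].message.content ?? '';
}
```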
Gotcha, you don’t like discussions. Noted.
Seems like a friendly enough response was given to your comment and you automatically assumed they were only interested in saying you’re wrong.
Having a discussion is not “proving everyone wrong”
I disliked Signal app-wise, and the Matrix app was a buggy mess for me and the four other people who tried to use it as well.
SimpleX was easy to setup and has been for the most part stable for all of us.
Basically to answer your question, people like different things.
SimpleX isn’t perfect by any means but it seems to be developed at a somewhat decent pace with noticeable improvements being made.
As a dev it’s nice to check all the official guideline boxes; as a user I’d much rather actually have features.
They’re just going to source the allowed parts from Red Bull, basically the same way they used to with Toro Rosso.
To think that will equate to an RB19 is a bit insane in my opinion. They will likely improve, but still be a mid-midfield team like they used to be with Toro Rosso.
You’ve never actually used them properly then.
That seems like a pretty naive and biased approach to software to me honestly.
Ease of use, community support, feature set, CI/CD, etc. should all come into play when deciding what to use.
Freedom at all costs is great until you cut community development and your potential user base by 90% by using a completely open repo service that 5% of the population uses, or some small Discord alternative.
So then the option is to host on multiple platforms/communities, and the management and time investment go up keeping them all in sync and active.
As with most things in life, it’s best to look at things with nuance rather than a hard stance imo.
I may stand it up on another service at some point, but also anyone else is totally free to do that as well. There are no restrictions.