I was looking back at some old Lemmy posts and came across GPT4All. Didn’t get much sleep last night as it’s awesome, even on my old (10-year-old) laptop with a CUDA Compute 5.0 NVIDIA card.
Still, I’m after more. I’d like image generation that I can view in the conversation, and if the model generates Python code, the ability to run it (I’m on Debian with a default Python env set up). Local file analysis would also be useful. I need CUDA Compute 5.0 / Vulkan compatibility, plus the option to use some of the smaller models (1–3B, for example). A local API would also be nice for my own Python experiments.
Is there anything that ticks all those boxes, even if I have to hop between models for some of the features? I’d prefer a desktop client application over a Docker container running in the background.
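For the local-API part: as a sketch, many local LLM desktop apps (GPT4All included) can expose an OpenAI-compatible HTTP endpoint, so a stdlib-only Python client like the one below should work against whichever you pick. The URL, port, and model name here are assumptions — check your client’s settings for the real values.

```python
import json
import urllib.request

# Assumed endpoint/model -- substitute whatever your local server reports.
API_URL = "http://localhost:4891/v1/chat/completions"
DEFAULT_MODEL = "Llama 3.2 1B Instruct"

def build_chat_request(prompt, model=DEFAULT_MODEL):
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
        "temperature": 0.7,
    }

def ask(prompt):
    """POST the prompt to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Name one Debian package manager."))
```

Using only `urllib` keeps it dependency-free; swapping in the `openai` client library pointed at the same base URL would work the same way.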
It took ages to produce an answer, only worked once on one model, and has crashed ever since.
Try the beta on the GitHub repo, and use a smaller model!