AnythingLLM for self-hosted enterprise AI

Just came across this app: AnythingLLM.

It works as a frontend for different AI inference servers and services, adding the ability to manage sets of documents for retrieval-augmented generation (RAG), with an integrated embedding model. I've been playing around with the desktop app on my laptop, running Llama3-8B via locally hosted Ollama, and it seems really promising, especially on individual documents. You can also hook it up to external data sources, such as YouTube videos.
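For anyone unfamiliar with what the embedding model is doing here, RAG retrieval is roughly: embed each document chunk, embed the query, then rank chunks by similarity and feed the top hits to the LLM. A toy sketch of that ranking step (the bag-of-words "embedding" below is just a stand-in for illustration, not AnythingLLM's actual embedder, and the sample documents are made up):

```python
from collections import Counter
import math

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    # A real embedder returns a dense vector from a neural network.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical document chunks, as if split from ingested files.
docs = [
    "Ollama serves local models like Llama3",
    "YouTube videos can be ingested as transcripts",
]
query = "local Llama3 models"

# Rank chunks by similarity to the query; the top chunk would be
# stuffed into the LLM's context window.
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])  # → "Ollama serves local models like Llama3"
```

The quality of the embedder is what decides which chunks make it into context, which is why swapping it out can matter more than the chat model itself for retrieval-heavy workloads.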

I’m thinking of testing this out on a beefier Mac Studio to see how much more effective it is with Llama3-70B, as well as potentially trying out some other embedding models (the integrated one seems just okay). Has anyone else played with this?