To run a local LLM, you have LM Studio, but it doesn't support ingesting local documents. There is GPT4All, but I find it much heavier to use, and PrivateGPT has a command-line interface that is not suitable for average users. Then comes AnythingLLM, a slick graphical application that lets you feed documents to an LLM locally and chat with your files, even on consumer-grade computers. I have used it extensively and found AnythingLLM much better than the other solutions. Here is how you can use it.
Note: AnythingLLM runs on budget computers as well, leveraging both the CPU and GPU. I tested it on a 10th-gen Intel Core i3 processor with a low-end Nvidia GT 730 GPU. That said, token generation will be slow on such hardware; a more powerful computer will generate output much faster.
Download and Set Up AnythingLLM
So this is how you can ingest your documents and files locally and chat with an LLM securely, with no need to upload your private documents to cloud servers that have sketchy privacy policies. Nvidia has launched a similar program called Chat with RTX, but it only works with high-end Nvidia GPUs. AnythingLLM brings local inferencing even to consumer-grade computers, taking advantage of both the CPU and GPU on any silicon.