Back with a new video highlighting a super cool feature we just added to AnythingLLM: you can create a fully fine-tuned model from your chats and documents and run it **locally** with Ollama and LMStudio – no privacy concerns and no vendor lock-in. At AnythingLLM, we believe that models trained on your data are your models. We do not gate this technology or limit you to a single provider.
**This is not a local process.** If you want to fine-tune a model locally, you can export your data directly from AnythingLLM and do it yourself. A video on no-code local training is coming soon – subscribe to stay informed!
Download AnythingLLM: https://anythingllm.com/download
Star on Github: https://github.com/Mintplex-Labs/anything-llm
Send me an email: [email protected]
Chapters
0:00 Introduction to no-code fine-tuning with AnythingLLM
0:25 Disclaimers and notes
0:47 Downloading AnythingLLM
1:27 Overview of no-code fine-tuning
2:00 Who is this video for?
2:37 Collecting good training data with RAG in AnythingLLM
3:44 We look at the dataset we will train on
5:28 Fine-tuning versus RAG for the average person
7:00 Ordering a fine-tune
8:04 Privacy and data processing
8:44 Selecting a base model
9:18 Uploading our dataset
10:32 What happens once you start
10:54 What you get when fine-tuning is done
11:49 Loading our custom fine-tune into Ollama
15:38 Testing our new fine-tune!
16:43 How to delete a custom fine-tune from Ollama
17:30 Loading our custom fine-tune into LMStudio
20:57 Conclusion and thanks!
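For the Ollama chapters above (11:49 and 16:43), the usual workflow is to point a Modelfile at the exported model weights and register them with `ollama create`. A minimal sketch, assuming your fine-tune was exported as a GGUF file named `my-finetune.gguf` (the filename and model name here are hypothetical):

```
# Modelfile — tells Ollama where the exported fine-tune lives
FROM ./my-finetune.gguf
```

Then, from the directory containing both files:

```shell
ollama create my-finetune -f Modelfile   # register the model
ollama run my-finetune                   # chat with it
ollama rm my-finetune                    # delete it (chapter 16:43)
```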
If you find this video useful, please like it and share it with your friends and family!