Gemma 2 - Local RAG with Ollama and LangChain

Published 2024-06-28

All Comments (21)
  • @relaniumz
    Thank you for the video. Vote for the next video - Fully Local Multimodal RAG
  • @toadlguy
    Just wanted to give a big thumbs up to this, although I haven't yet watched the whole thing 😀. There are so many interesting things you can do with local RAG, and LangChain is very straightforward. I did something similar with Ollama's Llama 3 model. Very interested in trying the new Llama models that should be available soon.
  • @mrnakomoto7241
    Useful videos. Keep on uploading and make Aussies proud of you.
  • @aa-xn5hc
    Memory, LangChain agents, and a streaming UI next, please. Thanks for the very useful video!
  • @5h3r10k
    Amazing tutorial, exactly what I was looking for! Running it with a few text documents, the results are great. Do you have any recommendations for making the QA faster? A different model or libraries?
  • Great video again. A question, though: I can see you are more in favour of LangChain, but what are your thoughts on AutoGen and Teachable Agents for doing something similar? And, in general, your thoughts on AutoGen and its agentic model?
  • @henkhbit5748
    Thanks for showing Gemma 2 and Ollama. Would be nice to see it with Mesop, maybe in combination with LangSmith for debugging?
  • @flat-line
    LangChain and LlamaIndex are really just boilerplate IMHO; they create more problems than they solve with their over-abstraction. Can you show a vanilla example of how to do RAG? (see the sketch after the comments)
  • @themax2go
    How does this compare to Microsoft's recently open-sourced GraphRAG? BTW, there are GraphRAG-with-Ollama implementation tutorials (two different ways to do it: one is a "hack" that requires changing the GraphRAG Python library to make it work with Ollama, the other requires LM Studio)... with two types of querying: "global", which always works fine, and "local", which often/usually fails (with various error messages, for various reasons).
  • @SwapperTheFirst
    I see that it is working quite fast on a Mac Mini, but what are the RAM requirements for the model and Chroma? Does it require a GPU for acceptable performance? You mentioned that the choice of embedder is important. As I understand it, the same vector dimensionality is not required, since the embeddings are only used during indexing and vector search. But what about "semantic" compatibility between the embedder and the LLM? I can imagine an embedder mapping semantic meaning in its vector space differently from Gemma or Llama. Is it even possible to compare embedders to ensure you use the best one for a given model? (there is a comparison sketch after the comments)
  • @yazanrisheh5127
    Hey Sam, can you explain why your prompt template always seems to have a different structure? By that I mean, in this case you wrote user\n at the start, then towards the end you wrote . Does each LLM have its own way of writing its prompt template? If so, what and where do you refer to when you want to do prompt engineering for the LLM you're using? (see the template sketch after the comments)
  • @supercurioTube
    Hey Sam! For now, Gemma 2 is still broken in Ollama, which doesn't yet include the required llama.cpp fixes. It's about the tokenizer: and are interpreted as text instead of special tokens, and of course things don't really work as expected as a result. I believe it'll be fixed in the next Ollama update, though, very soon. But it's too early for Gemma 2 evaluations using Ollama at the moment, like the ones many are running on their own or publishing in videos.
  • @PestOnYT
    I find Chroma is not very suitable for local RAG. It sends telemetry data back to its developers; one needs to set anonymized_telemetry=False to keep it quiet (see the settings sketch after the comments). Also, running Ollama with some of the tools mentioned behind a firewall/proxy can be a challenge.
  • @nsfnd
    Jeez, with the white background.
  • @ShravanKumar147
    What are the system requirements? Do we need a GPU with a certain amount of VRAM?
  • @HmzaY
    It doesn't work very well, but it is informative.
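
On @flat-line's request for a plain RAG example: below is a minimal sketch of the same kind of pipeline without LangChain, assuming the `ollama` and `chromadb` Python packages and locally pulled `gemma2` and `nomic-embed-text` models. The model choices and the sample documents are illustrative, not the exact setup from the video.

```python
# Minimal RAG without LangChain: embed -> store -> retrieve -> generate.
# Assumes `pip install ollama chromadb` and that `ollama pull gemma2`
# and `ollama pull nomic-embed-text` have been run (illustrative model choices).
import ollama
import chromadb

docs = [
    "Gemma 2 is an open-weight model family released by Google in 2024.",
    "Ollama serves local models over a simple HTTP API.",
]

# Embed each document with a local embedding model and store it in Chroma.
client = chromadb.Client()
collection = client.create_collection(name="demo")
for i, doc in enumerate(docs):
    emb = ollama.embeddings(model="nomic-embed-text", prompt=doc)["embedding"]
    collection.add(ids=[str(i)], embeddings=[emb], documents=[doc])

# Retrieve the most relevant chunk for a question.
question = "What is Gemma 2?"
q_emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
context = collection.query(query_embeddings=[q_emb], n_results=1)["documents"][0][0]

# Generate an answer grounded in the retrieved context.
response = ollama.chat(
    model="gemma2",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
)
print(response["message"]["content"])
```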
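On @SwapperTheFirst's question about embedder choice: the embedder and the LLM don't have to share a vector space, because the embeddings are only used to retrieve text, which is then handed to the LLM as plain context. Retrieval quality still differs between embedders, though, and a rough way to compare candidates is to check, for a small set of test questions, whether each one retrieves the passage you expect. A minimal sketch follows; the model names, passages, and questions are illustrative.

```python
# Rough embedder comparison: for each candidate embedding model, check how
# often the expected passage is the nearest neighbour of a test question.
# Model names, passages, and questions are illustrative.
import ollama
import numpy as np

passages = [
    "Chroma stores vectors and supports similarity search.",
    "Gemma 2 comes in 9B and 27B parameter sizes.",
]
# (question, index of the passage that should be retrieved)
tests = [("Which sizes does Gemma 2 come in?", 1),
         ("What does a vector store do?", 0)]

def embed(model: str, text: str) -> np.ndarray:
    return np.array(ollama.embeddings(model=model, prompt=text)["embedding"])

for model in ["nomic-embed-text", "mxbai-embed-large"]:
    doc_vecs = [embed(model, p) for p in passages]
    hits = 0
    for question, expected in tests:
        q = embed(model, question)
        # Cosine similarity against every passage; pick the closest one.
        sims = [q @ d / (np.linalg.norm(q) * np.linalg.norm(d)) for d in doc_vecs]
        if int(np.argmax(sims)) == expected:
            hits += 1
    print(f"{model}: {hits}/{len(tests)} questions retrieved the right passage")
```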
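On @yazanrisheh5127's question (the special tokens in that comment, and in @supercurioTube's, appear to have been stripped by the page, presumably Gemma's turn markers): yes, each model family defines its own chat template, usually documented on its model card and in the TEMPLATE section of the corresponding Ollama Modelfile. Gemma wraps every turn in <start_of_turn>/<end_of_turn> markers; here is a minimal sketch of building that prompt by hand, with an illustrative message.

```python
# Gemma-style chat prompt built by hand: each turn is wrapped in
# <start_of_turn>/<end_of_turn> markers, and the trailing "<start_of_turn>model"
# cue asks the model to begin its reply. The example message is illustrative.
def gemma_prompt(user_message: str) -> str:
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_prompt("Summarise the retrieved context in one sentence."))
```

Other model families use different markers (Llama 3 has its own header tokens, for example), which is why the template looks different from model to model and video to video.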
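On @PestOnYT's telemetry point: Chroma exposes this through its client settings, and a minimal sketch of turning it off looks like the following (the collection name is illustrative).

```python
# Create a Chroma client with anonymised telemetry turned off,
# so nothing is sent home during local RAG experiments.
import chromadb
from chromadb.config import Settings

client = chromadb.Client(Settings(anonymized_telemetry=False))
collection = client.get_or_create_collection(name="local_rag")
print(collection.count())  # 0 for a fresh, empty collection
```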