How To Connect Llama3 to CrewAI [Groq + Ollama]

Published 2024-04-25
🤖 Download the Source Code Here:
brandonhancock.io/llama3-crewai

Don't forget to Like and Subscribe if you're a fan of free source code 😉

📆 Need help with CrewAI? Join our free Skool Community:
skool.com/ai-developer-accelerator/about

This video is perfect for anyone eager to run Llama3 locally on their computer and in the cloud using Groq. We cover what Llama3 is and how it compares to other LLMs. Additionally, we explore how to connect Llama3 to CrewAI. The majority of the video is spent building an Instagram posting crew that uses Llama3 to generate image descriptions and post text for Instagram. By the end of this tutorial, you'll know how to set up, customize, and use Llama3 to automate tasks and enhance your project's capabilities. Get ready to upgrade your tech skills and make your work with AI more productive and innovative. Start now and see how simple it is to bring the power of Llama3 into CrewAI.
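As a rough sketch of the wiring shown in the video: CrewAI can talk to any OpenAI-compatible endpoint, so switching between a local Ollama server and Groq's cloud is mostly a matter of environment variables. The variable names follow CrewAI's LLM-connections docs (linked in the resources); the model tags and the Groq URL are assumptions that may have changed since publication.

```python
import os

# Option A: local Llama3 via Ollama's OpenAI-compatible endpoint.
# Assumes Ollama is running on its default port and `ollama pull llama3` was done.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "llama3"  # Ollama model tag
os.environ["OPENAI_API_KEY"] = "NA"         # Ollama ignores the key, but one must be set

# Option B: hosted Llama3 via Groq's OpenAI-compatible endpoint.
# Uncomment and supply a real key; the model tag here is an assumption.
# os.environ["OPENAI_API_BASE"] = "https://api.groq.com/openai/v1"
# os.environ["OPENAI_MODEL_NAME"] = "llama3-70b-8192"
# os.environ["OPENAI_API_KEY"] = "<your-groq-api-key>"
```

With these set, CrewAI's default client picks up the endpoint without any code changes to your agents.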

📰 Stay updated with my latest projects and insights:
LinkedIn: www.linkedin.com/in/brandon-hancock-ai/
Twitter: twitter.com/bhancock_ai

Resources:
- CrewAI Crash Course - CrewAI Tutorial: Complete Crash Cours...
- Updated CrewAI Tutorial - CrewAI Tutorial for Beginners: Learn ...
- How To Connect Local LLMs to CrewAI [Ollama, Llama2, Mistral] - How To Connect Local LLMs to CrewAI [...
- Ollama: ollama.com/
- Llama 3: ai.meta.com/blog/meta-llama-3/
- Configure LLMs for CrewAI - docs.crewai.com/how-to/LLM-Connections/
- Instagram crew example: github.com/joaomdmoura/crewAI-examples/tree/main/i…

Timestamps:
00:00 Introduction
00:12 Video Overview
02:44 Llama 3 Overview, Comparison, & Testing
07:06 Setup Llama3 Locally with Ollama
12:05 Crew Overview
13:20 Run CrewAI & Llama 3 Locally with Ollama & Crew Deep Dive
22:18 Run CrewAI & Llama 3 with Groq
27:39 Fix Rate Limiting with Groq
29:27 Results
31:01 Outro
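On the rate-limiting fix at 27:39: CrewAI exposes a max_rpm setting that pauses the crew until the next minute once the request budget is spent (hence the "Max RPM reached, waiting for next minute to start" log). A minimal stand-alone sketch of that throttling idea, not CrewAI's actual implementation:

```python
import time

class RpmLimiter:
    """Toy requests-per-minute throttle, similar in spirit to CrewAI's
    max_rpm option (a hypothetical sketch, not CrewAI's real code)."""

    def __init__(self, max_rpm, clock=time.monotonic, sleep=time.sleep):
        self.max_rpm = max_rpm
        self.clock = clock          # injectable for testing
        self.sleep = sleep
        self.window_start = clock() # start of the current one-minute window
        self.calls = 0

    def acquire(self):
        """Block until a request slot is free in the current minute."""
        now = self.clock()
        if now - self.window_start >= 60:
            self.window_start, self.calls = now, 0   # fresh window
        if self.calls >= self.max_rpm:
            # Budget spent: wait out the rest of the minute, then reset.
            self.sleep(60 - (now - self.window_start))
            self.window_start, self.calls = self.clock(), 0
        self.calls += 1
```

Calling `limiter.acquire()` before each LLM request keeps you under the provider's per-minute cap, at the cost of the stalls mentioned in the comments.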

All Comments (21)
  • Man, I do not know how to create and write code but you have made a video and I think I can do this! Bless you my friend!
  • @GregPeters1
    Hey Brandon, welcome back after your vacay!
  • @CodeSnap01
Refreshed after a short vacation... hope to see you frequently!
  • @tapos999
Thanks! Your CrewAI tutorials are top-of-the-shelf stuff. Do you have any CrewAI project with Streamlit connected to show output on the UI? Thanks
  • @am0x01
Appreciate your support (with this content); the only drawback was the need to subscribe to get access to a project that isn't yours. 😞
  • @reidelliot1972
    Great content as always! Do you know if it's sustainable to use a single groqcloud API key to host LLM access for a multi-user app? Or would a service like AWS Sagemaker be better for simultaneous users? Cheers!
  • @d.d.z.
Friendly comment: You look better with glasses, more professional. Great content.
  • @nathankasa6220
    Thanks! Is Claude 3 opus still not supported though? How come?
  • @Omobilo
Great stuff. Maybe a silly question, but when it was fetching data from the remote website (the analysis part), does it read it remotely, OR does it capture screenshots and download text to feed into its prompt and then clear the cached data, or does that local cached data need to be cleaned up eventually? Hope it simply reads remotely without too much data saved locally, as I plan to use this approach to analyze many websites without flooding my local storage.
  • @protovici1476
Excellent video! Would be interesting to see these frameworks within LightningAI Studios. Also, I saw CrewAI will be taking a more gold-standard approach to their code structuring in the near future.
  • @shuntera
With both the Groq 8b and 70b models, with crew max_rpm set at either 1 or 2, I do get it halting for a while with: [INFO]: Max RPM reached, waiting for next minute to start.
  • @bennie_pie
Thank you for this and for the code. How does Llama 3 compare to Dolphin-Mistral 2.8 running locally as the more junior agents, do you know? Dolphin-Mistral, with its extra conversation/coding training and bigger 32k context window, appeals! I've had agents go round in circles creating nonsense with other frameworks because they don't remember what they are supposed to do! A big context window definitely could bring some benefits! I try to avoid using GPT-3.5 or 4 for coding for this reason. I'd then like to use Claude 3 Opus, with its 200k context window and extra capability, for the heavy lifting and oversight!
  • @ag36015
    What would you say are the minimum hardware requirements to make it run smoothly?
  • @markdkberry
Perfect run with Groq. I get great speeds on my PC with local Llama3, but nothing I can do stops it throwing errors. I've found before with other projects that a lot of local LLMs have odd issues that change from run to run: from failing to run functions because they change the name, to just saying they can't pass tool info. Must be something in the local limitations, either with Ollama or CrewAI.
  • @shuntera
That is using a very old version of CrewAI; if you run it with the current version of CrewAI, it fails because of the missing expected_output parameter in the Tasks.