What is an LLM Router?

Shared on 2024-07-03
In this video I take a look at a new open-source framework, and the accompanying paper from LMSys, that help you automate LLM selection based on the input query.

Blog : lmsys.org/blog/2024-07-01-routellm/
Github: github.com/lm-sys/RouteLLM
Paper : arxiv.org/pdf/2406.18665
Models and datasets: huggingface.co/routellm
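
For anyone who wants to try it, here is a minimal usage sketch along the lines of the RouteLLM README: a Controller sends each query either to a strong or a weak model depending on the router's prediction. The model names, API keys, and the cost threshold encoded in the model string are illustrative assumptions to adapt to your own setup.

```python
# Minimal sketch (not verbatim from the repo): route each query between a
# strong and a weak model using RouteLLM's matrix factorization ("mf") router.
import os
from routellm.controller import Controller

os.environ["OPENAI_API_KEY"] = "sk-..."         # provider for the strong model
os.environ["ANYSCALE_API_KEY"] = "esecret_..."  # example provider for the weak model

client = Controller(
    routers=["mf"],
    strong_model="gpt-4-1106-preview",
    weak_model="anyscale/mistralai/Mixtral-8x7B-Instruct-v0.1",
)

# "router-mf-0.11593" = use the mf router with a cost threshold of 0.11593;
# queries judged easy go to the weak model, the rest to the strong one.
response = client.chat.completions.create(
    model="router-mf-0.11593",
    messages=[{"role": "user", "content": "What is an LLM router?"}],
)
print(response.choices[0].message.content)
```

The repo also ships a threshold-calibration script and an OpenAI-compatible server, so you can target a given fraction of calls going to the strong model or drop the router in front of existing clients.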


🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes

👨‍💻Github:
github.com/samwit/langchain-tutorials (updated)
github.com/samwit/llm-tutorials

⏱️Time Stamps:
00:00 Intro
01:15 LMSys RouteLLM Blog
03:24 RouteLLM Paper
03:46 RouteLLM Github
08:31 RouteLLM Hugging Face

Comments (21)
  • @JohnBoen
    Haha. Last night I was chatting with someone about how to come up with data to solve this problem. I have a story-writing engine that can blow through $10 of tokens in minutes. It is getting really expensive just to develop it. This morning I was going to look around to see if anybody had something like this, and the solution to my quest is in the first video I watched this morning. I hope the rest of my day is this awesome.
  • @toadlguy
    Wow, this is great that it was released with the entire framework open source, as I believe that this (or something like it) will be part of the interface we will all be using soon. The other component is determining what data is required to respond. For instance, does the query require proprietary or personal data? This would first create a context (through RAG) for that data but also determine which LLMs would be available to that context based on the required security (do you even want to send the proprietary data to a commercial LLM?). Also, with Llama 3 8B, this could be done locally (at almost no cost). BTW, this is part of the framework that Apple will be implementing, but it can be tailored for many other applications now using this framework and LangChain (for instance).
  • Great one Sam. So, to make this all about me :-), I've been using GPT4x as the router/manager under the theory that it is the smartest (this is a Mixture of Agents). Then the agents are cheaper. I can see this is much better. Thanks!
  • @jeffg4686
    Would be interesting to see how this and MoA (mixture of agents) could be used together. Perhaps the route could go to a target that combines several smaller agents (models), several medium agents, or several larger agents, and/or mixes larger agents with smaller ones.
  • @themax2go
    this might actually work really well in test scenarios, i.e. finding which LLM provides the best accuracy vs speed compromise, for example in RAG / knowledge-graph systems
  • Claude 3.5 Haiku with this framework is gonna be insane. Nice video as always!
  • This is actually very interesting. Concretely, when you use LangChain and have statically linked LLMs on some custom tools, how could we redirect this directly from LangChain so that the routing is made afterwards? (One possible approach is sketched after the comments.)
  • Worth comparing how well it performs vs the semantic router lib, which is also free to use.
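
On the LangChain question above, one possible answer, purely as a hypothetical sketch and not something covered in the video: run RouteLLM's OpenAI-compatible server and point LangChain's ChatOpenAI at it, so the per-query routing happens behind the existing tool wiring. The port and model string below are assumptions to adjust for your own deployment.

```python
# Hypothetical sketch: LangChain calling a locally running RouteLLM
# OpenAI-compatible server, so each request is routed per query.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:6060/v1",  # wherever you start the RouteLLM server
    api_key="not-needed-locally",         # the server forwards to the real providers
    model="router-mf-0.11593",            # router name + cost threshold (illustrative)
)

print(llm.invoke("Summarise the RouteLLM paper in one sentence.").content)
```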