Why Agent Frameworks Will Fail (and what to use instead)

Published 2024-06-27
Want to get started with freelancing? Let me help: www.datalumina.com/data-freelancer
Need help with a project? Work with me: www.datalumina.com/consulting

You probably don't need an agent framework to solve your automation problem. In this video, I'll cover my approach.

👋🏻 About Me
Hi there! I'm Dave, an AI Engineer and the founder of Datalumina. On this channel, I share practical coding tutorials to help you become better at building intelligent systems. If you're interested in that, consider subscribing!

All Comments (21)
  • @michaelirey
    While your critique of agent frameworks is spot on and compelling, it seems there's a misconception about their potential. Your custom system resembles langchain+langgraph, highlighting a need for deeper understanding before dismissing existing frameworks.
  • @Fezz016
    Basically what this video is saying is this, "I do not understand the Agentic Framework Flow yet, so I will just critique it in the meantime because I do not understand it"
  • I mostly agree with everything. But there are two kinds of pipelines. The first is when you have a finite set of transformations, and the second is when you don't know all the transformations in advance and need to delegate decision-making (in this case, you need an agentic approach). However, every pipeline can be represented as a finite set of transformations once you know it, for example classification, and that is the key. So, if your pipeline is research-like, you can't know it in advance; in other cases, you can.
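
The comment above distinguishes two pipeline kinds: a finite, known-in-advance sequence of transformations versus one where the next step is delegated to a decision-maker. A minimal stdlib-Python sketch of the contrast, with hypothetical step names (extract, classify, summarize) and a stubbed decision function standing in for an LLM:

```python
# Hypothetical transformations; in a real pipeline, classify would be an LLM call.
def extract(text: str) -> str:
    # Deterministic step: pull out the body we care about.
    return text.strip()

def classify(text: str) -> str:
    # Stand-in for an LLM classification call.
    return "invoice" if "invoice" in text.lower() else "other"

def summarize(text: str) -> str:
    # Stand-in for an LLM summarization call.
    return text[:40]

# Kind 1: a finite, known-in-advance sequence of transformations.
def fixed_pipeline(raw: str) -> dict:
    body = extract(raw)
    return {"label": classify(body), "summary": summarize(body)}

# Kind 2: the next step is delegated to a decision function
# (in a research-like pipeline, an LLM would choose; here it is injected).
STEPS = {"extract": extract, "summarize": summarize}

def delegated_pipeline(raw: str, decide) -> str:
    data = raw
    while (choice := decide(data)) is not None:
        data = STEPS[choice](data)
    return data
```

Note that once `decide` always returns the same sequence, the second form collapses into the first, which is the commenter's point about research-like versus known pipelines.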
  • @MrEnriqueag
    I agree with your core message. But I don't think you've used LangChain at the level it was designed for, or maybe you don't know about LangGraph? It's not opinionated, and you can (and should) orchestrate the flow however you want: you can make it linear or acyclic (every LangGraph example), and you can decide the flow however you want, deterministically, defined by the LLM output, etc. None of my agents are driven by any default LangChain agents; I have my own prompts, output formats, tools, etc. The framework is there to: 1. Standardize the way you interact with the models. 2. Have a trackable, verifiable, analyzable way to build those graphs.
  • @EmilioGagliardi
    Interesting. I'm working on a CrewAI project at the moment, and I found I was using a DAG approach to tasks because of my experience with Kedro: one task, one transformation, one output, and keep working sequentially. In a nutshell, you're describing Kedro's approach and philosophy. It's just not fine-tuned for generative AI use cases yet. What I've found with multi-agent apps is that I end up building tools that do all the heavy lifting, and the agent is used to generate a piece of data (like a query string) used in subsequent processing. The challenge is building guardrails to prevent an agent from going off the rails when something doesn't work. If you give an agent access to a tool as simple as a search tool and it gets stuck, it could end up calling the tool in a loop, and there go your credits. So we're still having to treat agents like toddlers... It would be interesting to see your take on Kedro.
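
The runaway-tool-loop problem the comment above describes (a stuck agent burning credits on repeated search calls) can be contained with a simple call budget around each tool. A minimal sketch, with hypothetical names (`search`, `ToolBudgetExceeded`) and the search API stubbed out:

```python
class ToolBudgetExceeded(RuntimeError):
    """Raised when an agent calls a tool more times than its budget allows."""

def with_budget(tool, max_calls: int):
    # Wrap a tool function so the (max_calls + 1)-th invocation raises
    # instead of silently spending more credits.
    state = {"calls": 0}
    def guarded(*args, **kwargs):
        state["calls"] += 1
        if state["calls"] > max_calls:
            raise ToolBudgetExceeded(
                f"{tool.__name__} exceeded {max_calls} calls")
        return tool(*args, **kwargs)
    return guarded

def search(query: str) -> str:
    # Stand-in for a real (metered) search API call.
    return f"results for {query!r}"

guarded_search = with_budget(search, max_calls=3)
```

The agent loop catches `ToolBudgetExceeded` and either falls back to a deterministic path or surfaces the failure, rather than looping until the credits run out.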
  • LangGraph + function calling + LangSmith = production. "LangGraph is a way to create these state machines by specifying them as graphs." (c) LangChain
  • @hailrider8188
    Cyclical/recursive algorithms are needed for many problems, which, in part, is what agentic frameworks attempt to solve. Your sequential-processing-only paradigm is applicable only to certain problems.
  • @ContextFound
    You have a solid point about agentic frameworks usually not being the right tool for tangible business applications. It's about automating the repetitive.
  • @brianhauk8136
    You said this is a work in progress, and I'm wondering if you've compared the results of traditional Mixture of Agents responses with your pipeline approach for various common use cases.
  • @vidfan1967
    Agents are good for one-off activities, where you want the agent system to find a sequence of activities that gets the job done. Nice for non-coders or not-knowers. However, for a repetitive process where you need to rely on the quality of the output, you need to control every step and KNOW that it will deliver a result you can handle in future steps. The issue with LLMs is the uncertainty they introduce, e.g. unwanted bias, wrong facts, broken reasoning. Use the LLM only where it shines (understanding and generating text); you would not rely on the LLM to be good at math, and would use other functions instead. The same principle applies to many steps if you decompose the job into tasks. But you need to understand coding to do it properly (or use an AI to do it for you once and then write the task sequence for you with minimal LLM use). And this is not even considering the high costs agent systems produce compared to restricting LLM use to where it is beneficial, or the understandability of how the result was achieved…
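
The decomposition the comment above argues for (plain code for math and structure, the LLM confined to the one step that needs text understanding) can be sketched in a few lines. All names here are hypothetical, and the LLM call is stubbed with a regex so the sketch runs standalone:

```python
import re

def llm_extract_quantity(sentence: str) -> str:
    # Stand-in for the ONLY LLM step: turn free text into a quantity-as-text.
    # A real implementation would call a model with structured output.
    match = re.search(r"\d+", sentence)
    return match.group() if match else "0"

def compute_total(quantity: int, unit_price: float) -> float:
    # Plain code does the arithmetic; never ask the LLM to multiply.
    return quantity * unit_price

def parse_order(sentence: str, unit_price: float) -> float:
    # Decomposed task: LLM step for text, deterministic step for math.
    qty = int(llm_extract_quantity(sentence))
    return compute_total(qty, unit_price)
```

Because every step except the text-understanding one is deterministic, each run is cheap, auditable, and reliable in exactly the way the comment demands.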
  • @noduslabs
    Good take. All those frameworks are good for getting familiar with the principles, but if you want to make a unique, specialized product, you need to code everything on your own. You probably won't even need agents for some tasks.
  • @liron92
    Thank you, very informative. Which pipeline registry tool do you use?
  • @user-bt6pp1dt4w
    Hey, looks very reasonable. Have you looked at Prefect and its new ControlFlow library? It helps to manage this data pipeline pattern for LLMs.
  • @Crates-Media
    Here's something you can help me understand, as an intermediate-level coder learning all of the nuances of AI/ML and their applications. You're extolling the value of the directed acyclic graph approach to data processing pipelines, to avoid sending data to earlier stages. As a fan of idempotency and functional programming, I think I somewhat understand where you're coming from in your premise. But in my studies of models, I'm also seeing a lot of buzz around the differentiation between KANs vs. MLPs. My question is this: wouldn't there be some value in using information uncovered later in the pipeline to refine what you're doing earlier on? For instance, let's say you're entertaining guests and planning to serve appetizers. A very early step might be purchasing ingredients. Later on, you realize that not all of the guests show up. If we just keep moving forward, we make more appetizers than are needed. The alternative: when fewer guests show up or RSVP, instead of making as many apps as your ingredients/plans dictate, you make fewer. Now you have fewer appetizers, and you store or freeze the ingredients you didn't use. You could make them all and freeze the unused portions, but by sending the information collected later back to an earlier step, you instead have the raw ingredients to use in other recipes. This is a really lousy and forced metaphor, but it's all I could come up with off the top of my head. It just seems like there's value in the concept. On a different level, isn't this just sort of a form of backpropagation? The ability to reinform earlier calculations with the results of later ones?
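
One common answer to the question above: keep the graph acyclic, and model "feedback" as new information flowing forward into later stages (or as a fresh run of the pipeline with updated inputs), not as an edge pointing backwards. A sketch using the commenter's own metaphor; the function names are illustrative only:

```python
def plan_purchase(expected_guests: int, per_guest: int = 2) -> int:
    # Early stage: buy ingredients based on the current best estimate.
    return expected_guests * per_guest

def make_appetizers(ingredients: int, actual_guests: int,
                    per_guest: int = 2) -> dict:
    # Later stage: the updated guest count arrives HERE as an input,
    # so the earlier purchasing stage never needs a backward edge.
    needed = actual_guests * per_guest
    made = min(ingredients, needed)
    return {"made": made, "frozen": ingredients - made}

# First stage runs with the original estimate of 10 guests.
ingredients = plan_purchase(expected_guests=10)
# Only 7 showed up; the correction flows forward into the later stage.
outcome = make_appetizers(ingredients, actual_guests=7)
```

The DAG stays acyclic and each stage stays idempotent; "refining earlier work" becomes either a forward-flowing input (as here) or a re-run of the whole pipeline with better inputs, which is cheaper to reason about than in-graph cycles.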
  • Great work. Please publish the next tutorial. Is there a GitHub repo for the code?
  • @teprox7690
    Everything is great. I have built a few tools myself with Instructor. To really automate business processes, however, I see a problem with data protection. In the EU, I can't just put a complete e-mail into an LLM. How do you solve this? It would be great if you could shed more light on the subject of data protection! Thank you very much for your excellent content!
  • Agreed. Take a simple Airtable: an input cell connected to an LLM, and an output cell for its response. There you have the first step of an agent. The hours I lost on learning LangChain, Flowise, you name it…