Quickstart

Get started in just 5 lines of code

Pre-requisites

To get started with Flo, you should be comfortable with Python 3.8 or higher, including its core concepts like async/await programming, object-oriented programming, and type hints.

You'll need a basic foundation in LangChain, particularly in working with chains, composing prompts, and handling LLM interactions. Additionally, experience with LangChain tools is a plus.

What is Agentic AI?

Agentic AI refers to AI systems that operate autonomously, with the ability to perform tasks, make decisions, and interact with the environment or other systems as independent agents. Instead of just executing pre-defined commands, these AI agents take actions based on goals, adapt to changes, and iteratively refine their approach using various tools and resources.

Examples of Agentic AI

  1. Task Automation Agents:

    • Systems that can automate end-to-end processes like email summarization, scheduling, or data analysis without manual intervention.

  2. Interactive Chatbots:

    • Chatbots that dynamically respond to user queries and can decide when to search the web, access a database, or perform an action based on the conversation context.

  3. Workflow Orchestration Agents:

    • In frameworks like FloAI, the focus is on building composable workflows where agents can perform specific tasks, route tasks to sub-agents, and manage complex, multi-step processes.
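The common thread in these examples is a loop of observing, deciding, and acting toward a goal. Here is a minimal, dependency-free sketch of that idea; all of the names below are illustrative stand-ins, not part of FloAI:

```python
# Illustrative only: a toy goal-driven agent, not FloAI's implementation.

def search_tool(query: str) -> str:
    # Stand-in for a real web-search tool
    return f"results for '{query}'"

def summarize_tool(text: str) -> str:
    # Stand-in for an LLM summarization call
    return f"summary of {text}"

def run_agent(goal: str) -> str:
    """Decide which tool to use based on the goal, then act on the result."""
    if "research" in goal:
        observation = search_tool(goal)   # act: gather information
        return summarize_tool(observation)  # refine: condense what was found
    return summarize_tool(goal)

print(run_agent("research agentic AI"))
```

A real agent would let an LLM make the tool-selection decision instead of the hard-coded `if`, but the goal → decide → act → refine shape is the same.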

Installing Dependencies

pip install flo-ai==0.0.3

Create your first agent: Step by Step Guide

FloAI allows you to compose and execute complex workflows by configuring agents, tools, and their interactions using YAML definitions. We are going to create an AI agent that can research the internet and write a blog on the given topic.

FloAI follows an agent team architecture, where agents are the basic building blocks, and teams can have multiple agents and teams themselves can be part of bigger teams.

Building a working agent or team involves 3 steps:

  1. Create a session using FloSession, and register your tools and models

  2. Define your agent/team/team of teams using YAML or code

  3. Build and run using Flo
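Conceptually, the session in step 1 acts as a registry that the build step later resolves tools and models from by name. A rough, dependency-free sketch of that idea (the class and method names here are hypothetical; FloSession's real internals differ):

```python
# Hypothetical sketch of the session-as-registry idea; not FloAI source.

class ToySession:
    def __init__(self, llm):
        self.llm = llm
        self.tools = {}

    def register_tool(self, name, tool):
        self.tools[name] = tool
        return self  # returning self allows chaining, as FloSession does

    def resolve(self, name):
        return self.tools[name]

session = ToySession(llm="gpt-4o")
session.register_tool("TavilySearchResults", tool=lambda q: f"search: {q}")
print(session.resolve("TavilySearchResults")("agentic AI"))
```

This is why the YAML only needs a tool's name: the build step looks the actual tool object up in the session.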

Step 1: Create an LLM Object

The first step in building a workflow with FloAI is to define the Large Language Model (LLM) that the agents will use for their tasks. FloAI integrates with various LLMs; here, we'll use gpt-4o from OpenAI. The LLM object here is a LangChain object.

Note: FloAI will soon remove the LangChain dependency and introduce its own LLM class. For now, you can use LangChain packages such as langchain_openai.

You can install it with `pip install langchain_openai`.

from langchain_openai import ChatOpenAI

# Create the LLM object
llm = ChatOpenAI(temperature=0, model_name='gpt-4o')

Step 2: Define the YAML for your team

Next, define your workflow in YAML. This YAML structure describes the agents, their roles, jobs, and the tools they will use.

agent_yaml = """
apiVersion: flo/alpha-v1
kind: FloAgent
name: blogging-agent-flo
agent:
    name: Blogger
    job: >
      You can research the internet and create a blog about the topic given by the user
    tools:
      - name: TavilySearchResults
"""

This YAML defines an agent with the tool `TavilySearchResults`. The tool will be registered in the next step. To learn more, check the YAML configurations documentation.
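For intuition, the YAML above parses into a nested structure like the following, and a loader could check its required fields. This validation is purely illustrative, not FloAI's actual parser:

```python
# Illustrative: the parsed shape of the agent YAML and a minimal check.
config = {
    "apiVersion": "flo/alpha-v1",
    "kind": "FloAgent",
    "name": "blogging-agent-flo",
    "agent": {
        "name": "Blogger",
        "job": "You can research the internet and create a blog...",
        "tools": [{"name": "TavilySearchResults"}],
    },
}

def validate(cfg):
    """Return a list of missing required fields (empty if the config is complete)."""
    missing = [k for k in ("apiVersion", "kind", "name", "agent") if k not in cfg]
    agent = cfg.get("agent", {})
    missing += [f"agent.{k}" for k in ("name", "job") if k not in agent]
    return missing

print(validate(config))
```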

If you prefer to use code, you can directly use the following:

from flo_ai import Flo, FloSession, FloAgent
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search.tool import TavilySearchResults

session = FloSession(llm)

blogger_agent = FloAgent.create(
    session=session,
    name="Blogger",
    job="You can research the internet and create a blog about the topic given by the user",
    tools=[TavilySearchResults()]
)

Step 3: Register Tools and Build Flo

To enable the agent to perform its job, you need to register any tools it uses. In this case, the agent uses the TavilySearchResults tool, which performs searches on the internet. After registering the tools, you can build the workflow (referred to as a "Flo") from the session and the YAML definition.

from flo_ai import Flo, FloSession
from langchain_community.tools.tavily_search.tool import TavilySearchResults

# Register all tools and set up the session
session = FloSession(llm).register_tool(
    name="TavilySearchResults",
    tool=TavilySearchResults()
)

# Build the Flo from the YAML definition
flo: Flo = Flo.build(session, yaml=agent_yaml)

If you are using code, you can build using the following line:

agent_flo: Flo = Flo.create(session, blogger_agent)

Step 4: Execute the Flo

Once the Flo is built, you can execute it by streaming or invoking it directly. For this example, let’s stream the output to observe how the agents handle the task step by step.

# Define the input prompt
input_prompt = """
Question: Write me an interesting blog about latest advancements in agentic AI
"""

# Execute the Flo and stream the result
for response in flo.stream(input_prompt):
    if "__end__" not in response:
        print(response)
        print("----")

Here, Flo receives an input prompt requesting a blog on the latest advancements in agentic AI. The agents collaborate to perform the necessary research and write the blog, with the progress being streamed in real time.
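The `"__end__"` check in the loop above filters out the terminal sentinel that marks the end of the stream. A toy generator shows the pattern (the real stream yields richer objects; this fake is only a stand-in):

```python
# Toy stand-in for flo.stream(): yields step outputs, then a sentinel.
def fake_stream(prompt):
    yield {"Researcher": f"found sources for: {prompt}"}
    yield {"Blogger": "drafted the blog"}
    yield {"__end__": True}  # terminal marker, filtered out by the consumer

steps = [r for r in fake_stream("agentic AI") if "__end__" not in r]
for step in steps:
    print(step)
```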

In code, this would be:

result = agent_flo.invoke("Write me an interesting blog about the latest advancements in agentic AI")

Recap of the Steps

  1. Create an LLM Object: Initialize the language model that the agents will use.

  2. Set Up the YAML: Define the agents, their roles, and tools in a YAML file.

  3. Build the Flo: Register the tools in the session and build the workflow using the YAML configuration.

  4. Execute the Flo: Stream or invoke the Flo to see the agents in action.

By following these steps, you can quickly build and execute complex workflows with agents collaborating to complete tasks in FloAI.

Create your first team

In the above example, we created a single agent with a single tool. In this example, let's create a team consisting of multiple agents.

A team consists of the following components:

  1. Multiple agents or teams which are part of the team

  2. A router, which decides how the members of the team work together.

from flo_ai import Flo
from flo_ai import FloSession
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search.tool import TavilySearchResults

# Initialize the LLM
llm = ChatOpenAI(temperature=0, model_name='gpt-4o-mini')

# setup the agent yaml
agent_yaml = """
apiVersion: flo/alpha-v1
kind: FloRoutedTeam
name: blogging-flo
team:
    name: BloggingTeam
    router:
        name: TeamLead
        kind: supervisor
    agents:
      - name: Researcher
        role: Internet Researcher
        job: Do research on the internet and find articles relevant to the topic asked by the user, always try to find the latest information on the same
        tools:
          - name: TavilySearchResults
      - name: Blogger
        role: Thought Leader
        kind: llm
        job: From the documents provided by the researcher write a blog of 300 words that can be readily published, make it engaging, and add reference links to original blogs
"""

# setup the flo session
session = FloSession(llm).register_tool(
    name="TavilySearchResults", 
    tool=TavilySearchResults()
)

# Build the final flow
flo: Flo = Flo.build(session, yaml=agent_yaml)

# call invoke or stream
flo.invoke("Topic: Climate change")

The code is pretty similar to the previous agent example except for the YAML. Check the YAML definition documentation to understand more about how YAMLs can be built.

Here we created a team of two agents, named Researcher and Blogger. These agents are connected by a router named TeamLead. The router type is denoted by `kind`; here it is `supervisor`, and these routers work like a team manager. There are multiple other routers available; check the routers documentation to learn more.
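A supervisor router can be pictured as a manager that hands work from one member to the next. A dependency-free caricature of that flow (these classes and functions are illustrative, not FloAI's API):

```python
# Caricature of a supervisor-routed team; not FloAI's implementation.

def researcher(topic: str) -> str:
    # Stand-in for the Researcher agent
    return f"notes on {topic}"

def blogger(notes: str) -> str:
    # Stand-in for the Blogger agent
    return f"blog based on {notes}"

class ToySupervisor:
    """Runs team members in sequence, passing each output forward."""
    def __init__(self, members):
        self.members = members

    def invoke(self, task: str) -> str:
        result = task
        for member in self.members:
            result = member(result)
        return result

team = ToySupervisor([researcher, blogger])
print(team.invoke("climate change"))
```

A real supervisor uses the LLM to decide which member acts next and when the task is done, rather than a fixed sequence.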

Here is a similar team created using code:

from flo_ai import Flo, FloSupervisor, FloAgent, FloSession, FloTeam
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search.tool import TavilySearchResults

llm = ChatOpenAI(temperature=0, model_name='gpt-4o')
session = FloSession(llm).register_tool(
    name="TavilySearchResults",
    tool=TavilySearchResults()
)

researcher = FloAgent.create(
    session,
    name="Researcher", 
    role="Internet Researcher", # optional
    job="Do research on the internet and find articles relevant to the topic asked by the user",
    tools=[TavilySearchResults()]
)

blogger = FloAgent.create(
    session, 
    name="BlogWriter", 
    role="Thought Leader", # optional
    job="Able to write a blog using information provided", 
    tools=[TavilySearchResults()]
)

marketing_team = FloTeam.create(session, "Marketing", [researcher, blogger])
head_of_marketing = FloSupervisor.create(session, "Head-of-Marketing", marketing_team)
marketing_flo = Flo.create(session, routed_team=head_of_marketing)

Compose Agents and Teams

Flo simplifies the creation of advanced AI systems through its composable architecture. Our framework offers a rich collection of specialized agents, routers, and team components that can be combined to build sophisticated AI solutions. Whether you need task-specific agents or complex multi-agent systems, Flo's modular design adapts to your requirements.
