# Contributing

The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project.

Possible contributions include but are not limited to:

- Pushing patches.
- Code review of pull requests.
- Documentation, examples, and test cases.
- Readability improvements, e.g., improvements to docstrings and comments.
- Community participation in [issues](https://github.com/microsoft/autogen/issues), [discussions](https://github.com/microsoft/autogen/discussions), and [Twitter](https://twitter.com/pyautogen).
- Tutorials, blog posts, and talks that promote the project.
- Sharing application scenarios and/or related research.

Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>.

If you are new to GitHub, [here](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/) is a detailed guide to getting involved with development on GitHub.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
## Roadmaps

To see what we are working on and what we plan to work on, please check our [Roadmap Issues](https://aka.ms/autogen-roadmap).
## Becoming a Reviewer

There is currently no formal reviewer solicitation process. Current reviewers identify new reviewers from among active contributors.
# AutoGen: Responsible AI FAQs
## What is AutoGen?

AutoGen is a framework for simplifying the orchestration, optimization, and automation of LLM workflows. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, such as GPT-4, while addressing their limitations by integrating with humans and tools and by enabling conversations between multiple agents via automated chat.
## What can AutoGen do?

AutoGen is an experimental framework for building complex multi-agent conversation systems by:

- Defining a set of agents with specialized capabilities and roles.
- Defining the interaction behavior between agents, i.e., what an agent should reply when it receives messages from another agent.

This agent conversation-centric design has numerous benefits, including that it:

- Naturally handles ambiguity, feedback, progress, and collaboration.
- Enables effective coding-related tasks, like tool use with back-and-forth troubleshooting.
- Allows users to seamlessly opt in or opt out via an agent in the chat.
- Achieves a collective goal with the cooperation of multiple specialists.
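To make the two defining steps above concrete, here is a minimal, hedged sketch using the AutoGen 0.2-style API (`pip install autogen-agentchat~=0.2`, as described elsewhere in this repo); the model name, API key, and configuration values are illustrative assumptions, not a fixed recipe.

```python
# A minimal two-agent sketch using the AutoGen 0.2-style API.
# The model name and API key below are placeholders.
import autogen

config_list = [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]

# Step 1: define agents with specialized capabilities and roles.
assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You are a helpful coding assistant.",
    llm_config={"config_list": config_list},
)

# Step 2: define interaction behavior. The user proxy replies to the
# assistant by executing any code blocks it receives.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # set to "ALWAYS" to keep a human in the loop
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The automated chat between the two agents drives the task to completion.
user_proxy.initiate_chat(assistant, message="Plot a sine wave and save it to sine.png.")
```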
## What is/are AutoGen’s intended use(s)?

Please note that AutoGen is an open-source library under active development and intended for research purposes. It should not be used in any downstream applications without additional detailed evaluation of robustness and safety issues, and assessment of any potential harm or bias in the proposed application. AutoGen is a generic infrastructure that can be used in multiple scenarios. The system’s intended uses include:

- Building LLM workflows that solve more complex tasks: Users can create agents that interleave the reasoning and tool-use capabilities of the latest LLMs such as GPT-4. To solve complex tasks, multiple agents can converse to work together (e.g., by partitioning a complex problem into simpler steps or by providing different viewpoints or perspectives).
- Application-specific agent topologies: Users can create application-specific agent topologies and patterns for agents to interact. The exact topology may depend on the domain’s complexity and the semantic capabilities of the LLM available.
- Code generation and execution: Users can implement agents that assume the role of writing code and other agents that execute code. Agents can do this with varying levels of human involvement. Users can add more agents and program the conversations to enforce constraints on code and output.
- Question answering: Users can create agents that help answer questions using retrieval-augmented generation.
- End-user and multi-agent chat and debate: Users can build chat applications where they converse with multiple agents at the same time.

While AutoGen automates LLM workflows, decisions about how to use specific LLM outputs should always have a human in the loop. For example, you should not use AutoGen to automatically post LLM-generated content to social media.
## How was AutoGen evaluated? What metrics are used to measure performance?

- We performed testing for Responsible AI harms, e.g., cross-domain prompt injection, and all tests returned the expected results with no signs of jailbreak.
- AutoGen was evaluated on six applications to illustrate its potential in simplifying the development of high-performance multi-agent applications. These applications were selected based on their real-world relevance, problem difficulty, the problem-solving capabilities enabled by AutoGen, and innovative potential. They involve using AutoGen to solve math problems, answer questions, make decisions in text-world environments, optimize supply chains, etc. For each of these domains, AutoGen was evaluated on various success-based metrics (i.e., how often the AutoGen-based implementation solved the task). In some cases, the AutoGen-based approach was also evaluated on implementation efficiency (e.g., to track reductions in the developer effort needed to build). More details can be found at <https://aka.ms/autogen-pdf>.
- We evaluated [a team of AutoGen agents](https://github.com/microsoft/autogen/tree/gaia_multiagent_v01_march_1st/samples/tools/autogenbench/scenarios/GAIA/Templates/Orchestrator) on the [GAIA benchmark](https://arxiv.org/abs/2311.12983) and achieved [SOTA results](https://huggingface.co/spaces/gaia-benchmark/leaderboard) as of March 1, 2024.
## What are the limitations of AutoGen? How can users minimize the impact of AutoGen’s limitations when using the system?

AutoGen relies on existing LLMs, so experimenting with AutoGen retains the common limitations of large language models, including:

- Data Biases: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that could be potentially biased or unfair.
- Lack of Contextual Understanding: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses.
- Lack of Transparency: Due to their complexity and size, large language models can act as "black boxes," making it difficult to comprehend the rationale behind specific outputs or decisions.
- Content Harms: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models, and to take actions to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions.
- Inaccurate or ungrounded content: It is important to be aware and cautious not to rely entirely on a given language model for critical decisions or information that might have a deep impact, as it is not obvious how to prevent these models from fabricating content without high-authority input sources.
- Potential for Misuse: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content.

Additionally, AutoGen’s multi-agent framework may amplify or introduce additional risks, such as:

- Privacy and Data Protection: The framework allows for human participation in conversations between agents. It is important to ensure that user data and conversations are protected and that developers use appropriate measures to safeguard privacy.
- Accountability and Transparency: Because the framework involves multiple agents conversing and collaborating, it is important to establish clear accountability and transparency mechanisms. Users should be able to understand and trace the decision-making process of the agents involved in order to ensure accountability and address any potential issues or biases.
- Trust and reliance: The framework leverages human understanding and intelligence while providing automation through conversations between agents. It is important to consider the impact of this interaction on user experience, trust, and reliance on AI systems. Clear communication and user education about the capabilities and limitations of the system will be essential.
- Security & unintended consequences: The use of multi-agent conversations and automation in complex tasks may have unintended consequences. In particular, allowing LLM agents to make changes in external environments through code execution or function calls, such as installing packages, could pose significant risks. Developers should carefully consider the potential risks and ensure that appropriate safeguards are in place to prevent harm or negative outcomes, including keeping a human in the loop for decision making.
## What operational factors and settings allow for effective and responsible use of AutoGen?

- Code execution: AutoGen recommends using Docker containers so that code execution can happen in a safer manner (see the sketch after this list). Users can use function calls instead of free-form code to execute only pre-defined functions, which helps increase reliability and safety. Users can also customize the code execution environment to tailor it to their requirements.
- Human involvement: AutoGen prioritizes human involvement in multi-agent conversation. Overseers can step in to give feedback to agents and steer them in the correct direction. In all examples, users confirm code before it is executed.
- Agent modularity: Modularity allows agents to have different levels of information access. Additional agents can assume roles that help keep other agents in check. For example, one can easily add a dedicated agent to play the role of a safeguard.
- LLMs: Users can choose the LLM that is optimized for responsible use. The default LLM in all examples is GPT-4o, which inherits the existing RAI mechanisms and filters from the LLM provider. We encourage developers to review [OpenAI’s Usage policies](https://openai.com/policies/usage-policies) and [Azure OpenAI’s Code of Conduct](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/code-of-conduct) when using GPT-4o. We encourage developers experimenting with agents to add content moderation and/or use safety metaprompts, as they would when using LLMs directly.
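To illustrate the code-execution point above, here is a hedged sketch of container-based execution using the AutoGen 0.2-style `DockerCommandLineCodeExecutor`; the image, timeout, and directory values are illustrative assumptions, and Docker must be running.

```python
# A sketch of sandboxed code execution with the AutoGen 0.2-style API.
# Parameter values are illustrative; Docker must be running.
from autogen import ConversableAgent
from autogen.coding import DockerCommandLineCodeExecutor

# Run generated code inside a container rather than on the host.
executor = DockerCommandLineCodeExecutor(
    image="python:3-slim",  # assumed base image; choose one that fits your needs
    timeout=60,             # stop runaway code after 60 seconds
    work_dir="coding",      # host directory mounted into the container
)

code_executor_agent = ConversableAgent(
    name="code_executor",
    llm_config=False,  # this agent only executes code; it never calls an LLM
    code_execution_config={"executor": executor},
    human_input_mode="ALWAYS",  # a human confirms before each execution
)
```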
<a name="readme-top"></a>

<div align="center">
<img src="https://microsoft.github.io/autogen/0.2/img/ag.svg" alt="AutoGen Logo" width="100">

[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen)

</div>

# AutoGen

> [!IMPORTANT]
> - (11/14/24) ⚠️ In response to a number of asks to clarify and distinguish between the official AutoGen and forks that created confusion, we issued a [clarification statement](https://github.com/microsoft/autogen/discussions/4217).
> - (10/13/24) Interested in the standard AutoGen as a prior user? Find it at the actively maintained *AutoGen* [0.2 branch](https://github.com/microsoft/autogen/tree/0.2) and the `autogen-agentchat~=0.2` PyPi package.
> - (10/02/24) [AutoGen 0.4](https://microsoft.github.io/autogen/dev) is a from-the-ground-up rewrite of AutoGen. Learn more about the history, goals, and future at [this blog post](https://microsoft.github.io/autogen/blog). We’re excited to work with the community to gather feedback, refine, and improve the project before we officially release 0.4. This is a big change, so AutoGen 0.2 is still available, maintained, and developed in the [0.2 branch](https://github.com/microsoft/autogen/tree/0.2).

AutoGen is an open-source framework for building AI agent systems. It simplifies the creation of event-driven, distributed, scalable, and resilient agentic applications. It allows you to quickly build systems where AI agents collaborate and perform tasks autonomously or with human oversight.

- [Key Features](#key-features)
- [API Layering](#api-layering)
- [Quickstart](#quickstart)
- [Roadmap](#roadmap)
- [FAQs](#faqs)

AutoGen streamlines AI development and research, enabling the use of multiple large language models (LLMs), integrated tools, and advanced multi-agent design patterns. You can develop and test your agent systems locally, then deploy to a distributed cloud environment as your needs grow.
# Key Features

AutoGen offers the following key features:

- **Asynchronous Messaging**: Agents communicate via asynchronous messages, supporting both event-driven and request/response interaction patterns.
- **Full Type Support**: Types are used in all interfaces and type checks are enforced on build, with a focus on quality and cohesiveness.
- **Scalable & Distributed**: Design complex, distributed agent networks that can operate across organizational boundaries.
- **Modular & Extensible**: Customize your system with pluggable components: custom agents, tools, memory, and models.
- **Cross-Language Support**: Interoperate agents across different programming languages. Currently supports Python and .NET, with more languages coming soon.
- **Observability & Debugging**: Built-in features and tools for tracking, tracing, and debugging agent interactions and workflows, including support for industry-standard observability with OpenTelemetry.

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
  <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
    ↑ Back to Top ↑
  </a>
</p>

# API Layering

AutoGen has several packages and is built upon a layered architecture. Currently, there are three main APIs your application can target:

- [Core](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/index.html)
- [AgentChat](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/index.html)
- [Extensions](https://microsoft.github.io/autogen/dev/reference/python/autogen_ext/autogen_ext.html)
## Core

- [Installation](https://microsoft.github.io/autogen/dev/packages/index.html#pkg-info-autogen-core)
- [Quickstart](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/quickstart.html)

The core API of AutoGen, `autogen-core`, follows the [actor model](https://en.wikipedia.org/wiki/Actor_model). It supports asynchronous message passing between agents and event-based workflows. Agents in the core layer handle and produce typed messages, using either direct messaging, which functions like RPC, or broadcasting to topics, which is pub-sub. Agents can be distributed and implemented in different programming languages while still communicating with one another.

**Start here if you are building scalable, event-driven agentic systems.**
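To make the actor model concrete, here is a hedged sketch of a single core agent answering a direct (RPC-style) message. It assumes the 0.4 line of `autogen-core`; import paths and signatures have moved between dev releases, so treat this as illustrative rather than definitive.

```python
# A minimal sketch of a core agent with a typed message handler.
# Import paths follow the 0.4 line and may differ in dev releases.
import asyncio
from dataclasses import dataclass

from autogen_core import (
    AgentId,
    MessageContext,
    RoutedAgent,
    SingleThreadedAgentRuntime,
    message_handler,
)


@dataclass
class Greeting:
    content: str


class Greeter(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("A greeter agent")

    # Typed handler: the runtime routes Greeting messages here.
    @message_handler
    async def on_greeting(self, message: Greeting, ctx: MessageContext) -> Greeting:
        return Greeting(content=f"Hello, {message.content}!")


async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    await Greeter.register(runtime, "greeter", lambda: Greeter())
    runtime.start()
    # Direct messaging (RPC-style): send to a specific agent and await the reply.
    reply = await runtime.send_message(Greeting("world"), AgentId("greeter", "default"))
    print(reply.content)
    await runtime.stop()


asyncio.run(main())
```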
## AgentChat

- [Installation](https://microsoft.github.io/autogen/dev/packages/index.html#pkg-info-autogen-agentchat)
- [Quickstart](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/quickstart.html)

The AgentChat API, `autogen-agentchat`, is task-driven and high-level, like AutoGen 0.2. It allows you to define conversational agents, compose them into teams, and then use them to solve tasks. AgentChat itself is built on the core layer, but it abstracts away much of its low-level system concepts. If your workflows don't fit into the AgentChat API, target core instead.

**Start here if you just want to quickly get started with multi-agent workflows.**
## Extensions

The extension package `autogen-ext` contains implementations of the core interfaces using third-party systems, such as the OpenAI model client and Azure code executors. Besides the built-in extensions, the package accommodates community-contributed extensions through namespace sub-packages. We look forward to your contributions!

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
  <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
    ↑ Back to Top ↑
  </a>
</p>
# Quickstart

### Python (AgentChat)

First install the packages:

```bash
pip install 'autogen-agentchat==0.4.0.dev7' 'autogen-ext[openai]==0.4.0.dev7'
```

The following code uses OpenAI's GPT-4o model, and you need to provide your API key to run it. To use Azure OpenAI models, follow the instructions [here](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/cookbook/azure-openai-with-aad-auth.html).

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.task import Console, TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models import OpenAIChatCompletionClient


# Define a tool
async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."


async def main() -> None:
    # Define an agent
    weather_agent = AssistantAgent(
        name="weather_agent",
        model_client=OpenAIChatCompletionClient(
            model="gpt-4o-2024-08-06",
            # api_key="YOUR_API_KEY",
        ),
        tools=[get_weather],
    )

    # Define termination condition
    termination = TextMentionTermination("TERMINATE")

    # Define a team
    agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)

    # Run the team and stream messages to the console
    stream = agent_team.run_stream(task="What is the weather in New York?")
    await Console(stream)


asyncio.run(main())
```

### C\#

The .NET SDK does not yet support all of the interfaces that the Python SDK offers, but we are working on bringing them to parity. To use the .NET SDK, you need to add a package reference to the source in your project. We will release NuGet packages soon and will update these instructions when that happens.

```
git clone https://github.com/microsoft/autogen.git
cd autogen
# Switch to the branch that has this code
git switch staging-dev
# Build the project
cd dotnet && dotnet build AutoGen.sln
# In your source code, add AutoGen to your project
dotnet add <your.csproj> reference <path to your checkout of autogen>/dotnet/src/Microsoft.AutoGen/Agents/Microsoft.AutoGen.Agents.csproj
```

Then, define and run your first agent:

```csharp
using Microsoft.AutoGen.Abstractions;
using Microsoft.AutoGen.Agents;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// send a message to the agent
var app = await App.PublishMessageAsync("HelloAgents", new NewMessageReceived
{
    Message = "World"
}, local: true);

await App.RuntimeApp!.WaitForShutdownAsync();
await app.WaitForShutdownAsync();

[TopicSubscription("HelloAgents")]
public class HelloAgent(
    IAgentContext context,
    [FromKeyedServices("EventTypes")] EventTypes typeRegistry) : ConsoleAgent(
        context,
        typeRegistry),
        ISayHello,
        IHandle<NewMessageReceived>,
        IHandle<ConversationClosed>
{
    public async Task Handle(NewMessageReceived item)
    {
        var response = await SayHello(item.Message).ConfigureAwait(false);
        var evt = new Output { Message = response }.ToCloudEvent(this.AgentId.Key);
        await PublishEventAsync(evt).ConfigureAwait(false);
        var goodbye = new ConversationClosed
        {
            UserId = this.AgentId.Key,
            UserMessage = "Goodbye"
        }.ToCloudEvent(this.AgentId.Key);
        await PublishEventAsync(goodbye).ConfigureAwait(false);
    }

    public async Task Handle(ConversationClosed item)
    {
        var goodbye = $"********************* {item.UserId} said {item.UserMessage} ************************";
        var evt = new Output { Message = goodbye }.ToCloudEvent(this.AgentId.Key);
        await PublishEventAsync(evt).ConfigureAwait(false);
        await Task.Delay(60000);
        await App.ShutdownAsync();
    }

    public async Task<string> SayHello(string ask)
    {
        var response = $"\n\n\n\n***************Hello {ask}**********************\n\n\n\n";
        return response;
    }
}

public interface ISayHello
{
    public Task<string> SayHello(string ask);
}
```

```bash
dotnet run
```

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
  <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
    ↑ Back to Top ↑
  </a>
</p>
# Roadmap

- AutoGen 0.2 - This is the current stable release of AutoGen. We will continue to accept bug fixes and minor enhancements to this version.
- AutoGen 0.4 - This is the first release of the new architecture. This release is still in *preview*. We will be focusing on the stability of the interfaces, documentation, tutorials, samples, and a collection of built-in agents which you can use.

We are excited to work with our community to define the future of AutoGen, and we are looking for feedback and contributions to help shape this project. Here are some major planned items:

- More programming languages (e.g., TypeScript)
- More built-in agents and multi-agent workflows
- Deployment of distributed agents
- Re-implementation/migration of AutoGen Studio
- Integration with other agent frameworks and data sources
- Advanced RAG techniques and memory services

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
  <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
    ↑ Back to Top ↑
  </a>
</p>
# FAQs

### What is AutoGen 0.4?

AutoGen v0.4 is a rewrite of AutoGen from the ground up to create a more robust, scalable, easier-to-use, cross-language library for building AI agents. Some key features include asynchronous messaging, support for scalable distributed agents, a modular extensible design (bring your own agents, implement behaviors however you like), cross-language support, improved observability, and full typing integration. It is a breaking change.

### Why these changes?

We listened to our AutoGen users, learned from what was working, and adapted to fix what wasn't. We brought together wide-ranging teams working on many different types of AI agents and collaborated to design an improved framework with a more flexible programming model and better scalability.

### Is this project still maintained?

We want to reaffirm our commitment to supporting both the original version of AutoGen (0.2) and the redesign (0.4). AutoGen 0.4 is still a work in progress, and we shared the code now to build with the community. There are no plans to deprecate the original AutoGen anytime soon, and both versions will be actively maintained.

### Who should use 0.4?

This code is still experimental, so expect changes and bugs while we work towards a stable 0.4 release. We encourage early adopters to try it out, give us feedback, and contribute. For those looking for a stable version, we recommend continuing to use 0.2.

### I'm using AutoGen 0.2, should I upgrade?

If you consider yourself an early adopter, are comfortable making some changes to your code, and are willing to try it out, then yes.

### How do I still use AutoGen 0.2?

AutoGen 0.2 can be installed with:

```sh
pip install autogen-agentchat~=0.2
```

### Will AutoGen Studio be supported in 0.4?

Yes, this is on the [roadmap](#roadmap). Our current plan is to enable an implementation of AutoGen Studio on the AgentChat high-level API, which implements a set of agent functionalities (agents, teams, etc.).

### How do I migrate?

For users familiar with AutoGen, the AgentChat library in 0.4 provides similar concepts. We are working on a migration guide.

### Is 0.4 done?

We are still actively developing AutoGen 0.4. One exciting new feature is the emergence of new SDKs for .NET. The Python SDKs are further ahead at this time, but our goal is to achieve parity. We aim to add additional languages in future releases.

### What is happening next? When will this release be ready?

We are still working on improving the documentation and samples and enhancing the code. We are hoping to release before the end of the year when things are ready.

### What is the history of this project?

The rearchitecture of the framework started with multiple Microsoft teams coming together to address the gaps and learnings from AutoGen 0.2, merging ideas from several predecessor projects. The team worked on this internally for some time to ensure alignment before moving work back into the open in October 2024.

### What is the official channel for support?

Use GitHub [Issues](https://github.com/microsoft/autogen/issues) for bug reports and feature requests. Use GitHub [Discussions](https://github.com/microsoft/autogen/discussions) for general questions and discussions.

### Do you use Discord for communications?

We are unable to use Discord for project discussions. Therefore, we request that all discussions take place on <https://github.com/microsoft/autogen/discussions/> going forward.

### What about forks?

<https://github.com/microsoft/autogen/> remains the only official repo for development and support of AutoGen. We are aware that there are thousands of forks of AutoGen, including many for personal development and startups building with or on top of the library. We are not involved with any of these forks and are not aware of any plans related to them.

### What is the status of the license and open source?

Our project remains fully open-source and accessible to everyone. We understand that some forks use different licenses to align with different interests. We will continue to use the most permissive license (MIT) for the project.

### Can you clarify the current state of the packages?

Currently, we are unable to make releases to the `pyautogen` package via PyPi due to a change in package ownership that was made without our involvement. Additionally, we are moving to using multiple packages to align with the new design. Please see the details [here](https://microsoft.github.io/autogen/dev/packages/index.html).

### Can I still be involved?

We are grateful to all the contributors to AutoGen 0.2, and we look forward to continuing to collaborate with everyone in the AutoGen community.

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
  <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
    ↑ Back to Top ↑
  </a>
</p>
# Legal Notices

Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode) (see the [LICENSE](LICENSE) file), and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT) (see the [LICENSE-CODE](LICENSE-CODE) file).

Microsoft, Windows, Microsoft Azure, and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at <http://go.microsoft.com/fwlink/?LinkID=254653>.

Privacy information can be found at <https://go.microsoft.com/fwlink/?LinkId=521839>.

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel, or otherwise.

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
  <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
    ↑ Back to Top ↑
  </a>
</p>
# Support
## How to file issues and get help

This project uses [GitHub Issues](https://github.com/microsoft/autogen/issues) to track bugs and feature requests. Please search the existing issues before filing new ones to avoid duplicates. For new issues, file your bug or feature request as a new issue.

For help and questions about using this project, please use [GitHub Discussions](https://github.com/microsoft/autogen/discussions). Follow the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) when participating in the forum.
## Microsoft Support Policy

Support for this project is limited to the resources listed above.
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->
## Security

Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, including [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).

If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
## Reporting Security Issues

**Please do not report security vulnerabilities through public GitHub issues.**

Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).

If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).

You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).

Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:

* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue

This information will help us triage your report more quickly.

If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
## Preferred Languages

We prefer all communications to be in English.
## Policy

Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).

<!-- END MICROSOFT SECURITY.MD BLOCK -->
# Microsoft Open Source Code of Conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).

Resources:

- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [[email protected]](mailto:[email protected]) with questions or concerns
# AutoGen Python packages

See the [`autogen-core`](./packages/autogen-core/) package for the main functionality.
## Development

**TL;DR**, run all checks with:

```sh
uv sync --all-extras
source .venv/bin/activate
poe check
```

### Setup

`uv` is a package manager that assists in creating the necessary environment and installing packages to run AutoGen.

- [Install `uv`](https://docs.astral.sh/uv/getting-started/installation/).

### Virtual Environment

During development, you may need to test changes made to any of the packages. To do so, create a virtual environment where the AutoGen packages are installed based on the current state of the directory. Run the following commands at the root level of the Python directory:

```sh
uv sync --all-extras
source .venv/bin/activate
```

- `uv sync --all-extras` will create a `.venv` directory at the current level and install packages from the current directory along with any other dependencies. The `all-extras` flag adds optional dependencies.
- `source .venv/bin/activate` activates the virtual environment.

### Common Tasks

To create a pull request (PR), ensure the following checks are met. You can run each check individually:

- Format: `poe format`
- Lint: `poe lint`
- Test: `poe test`
- Mypy: `poe mypy`
- Pyright: `poe pyright`
- Build docs: `poe --directory ./packages/autogen-core/ docs-build`
- Auto rebuild+serve docs: `poe --directory ./packages/autogen-core/ docs-serve`

Alternatively, you can run all the checks with:

- `poe check`

> [!NOTE]
> These need to be run in the virtual environment.

### Creating a New Package

To create a new package, similar to `autogen-core` or `autogen-chat`, use the following:

```sh
uv sync
source .venv/bin/activate
cookiecutter ./templates/new-package/
```
# AutoGen AgentChat

- [Documentation](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/index.html)
## Package structure

- `agents` contains the building blocks for creating agents, as well as built-in agents.
- `teams` contains the building blocks for creating teams of agents, as well as built-in teams such as group chats.
- `logging` contains logging utilities.
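As a minimal illustration of this layout (names taken from the quickstart in the top-level README; they may shift between 0.4 dev releases):

```python
# Agent building blocks live in `agents`, team building blocks in `teams`.
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
```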
# autogen-ext
# Magentic-One

> [!CAUTION]
> Using Magentic-One involves interacting with a digital world designed for humans, which carries inherent risks. To minimize these risks, consider the following precautions:
>
> 1. **Use Containers**: Run all tasks in Docker containers to isolate the agents and prevent direct system attacks.
> 2. **Virtual Environment**: Use a virtual environment to run the agents and prevent them from accessing sensitive data.
> 3. **Monitor Logs**: Closely monitor logs during and after execution to detect and mitigate risky behavior.
> 4. **Human Oversight**: Run the examples with a human in the loop to supervise the agents and prevent unintended consequences.
> 5. **Limit Access**: Restrict the agents' access to the internet and other resources to prevent unauthorized actions.
> 6. **Safeguard Data**: Ensure that the agents do not have access to sensitive data or resources that could be compromised. Do not share sensitive information with the agents.
>
> Be aware that agents may occasionally attempt risky actions, such as recruiting humans for help or accepting cookie agreements without human involvement. Always ensure agents are monitored and operate within a controlled environment to prevent unintended consequences. Moreover, be cautious that Magentic-One may be susceptible to prompt injection attacks from webpages.

> [!NOTE]
> This code is currently being ported to AutoGen AgentChat. If you want to build on top of Magentic-One, we recommend waiting for the port to be completed. In the meantime, you can use this codebase to experiment with Magentic-One.

We are introducing Magentic-One, our new generalist multi-agent system for solving open-ended web- and file-based tasks across a variety of domains. Magentic-One represents a significant step towards developing agents that can complete tasks that people encounter in their work and personal lives. Find additional information about Magentic-One in our [blog post](https://aka.ms/magentic-one-blog) and [technical report](https://arxiv.org/abs/2411.04468).

![](./imgs/autogen-magentic-one-example.png)

> _Example_: The figure above illustrates a Magentic-One multi-agent team completing a complex task from the GAIA benchmark. Magentic-One's Orchestrator agent creates a plan, delegates tasks to other agents, and tracks progress towards the goal, dynamically revising the plan as needed. The Orchestrator can delegate tasks to a FileSurfer agent to read and handle files, a WebSurfer agent to operate a web browser, or a Coder or Computer Terminal agent to write or execute code, respectively.
## Architecture

![](./imgs/autogen-magentic-one-agents.png)

Magentic-One is based on a multi-agent architecture where a lead Orchestrator agent is responsible for high-level planning, directing other agents, and tracking task progress. The Orchestrator begins by creating a plan to tackle the task, gathering needed facts and educated guesses in a Task Ledger that is maintained throughout. At each step of its plan, the Orchestrator creates a Progress Ledger where it self-reflects on task progress and checks whether the task is completed. If the task is not yet completed, it assigns one of Magentic-One's other agents a subtask to complete. After the assigned agent completes its subtask, the Orchestrator updates the Progress Ledger and continues in this way until the task is complete. If the Orchestrator finds that progress is not being made for enough steps, it can update the Task Ledger and create a new plan. The Orchestrator's work is thus divided into an outer loop, where it updates the Task Ledger, and an inner loop, where it updates the Progress Ledger, as illustrated in the figure above.

Overall, Magentic-One consists of the following agents:

- Orchestrator: The lead agent responsible for task decomposition and planning, directing other agents in executing subtasks, tracking overall progress, and taking corrective actions as needed.
- WebSurfer: An LLM-based agent that is proficient in commanding and managing the state of a Chromium-based web browser. With each incoming request, the WebSurfer performs an action on the browser and then reports on the new state of the web page. The action space of the WebSurfer includes navigation (e.g., visiting a URL or performing a web search), web page actions (e.g., clicking and typing), and reading actions (e.g., summarizing or answering questions). The WebSurfer relies on the browser's accessibility tree and on set-of-marks prompting to perform its actions.
- FileSurfer: An LLM-based agent that commands a markdown-based file preview application to read local files of most types. The FileSurfer can also perform common navigation tasks such as listing the contents of directories and navigating a folder structure.
- Coder: An LLM-based agent specialized through its system prompt for writing code, analyzing information collected from the other agents, or creating new artifacts.
- ComputerTerminal: Provides the team with access to a console shell where the Coder's programs can be executed and where new programming libraries can be installed.

Together, Magentic-One's agents provide the Orchestrator with the tools and capabilities it needs to solve a broad variety of open-ended problems, as well as the ability to autonomously adapt to, and act in, dynamic and ever-changing web and file-system environments.

While the default multimodal LLM we use for all agents is GPT-4o, Magentic-One is model-agnostic and can incorporate heterogeneous models to support different capabilities or meet different cost requirements when getting tasks done. For example, it can use different LLMs and SLMs, and their specialized versions, to power different agents. We recommend a strong reasoning model for the Orchestrator agent, such as GPT-4o. In a different configuration of Magentic-One, we also experiment with using OpenAI o1-preview for the outer loop of the Orchestrator and for the Coder, while other agents continue to use GPT-4o.
### Logging in Team One Agents

Team One agents can emit several log events that can be consumed by a log handler (see the example log handler in [utils.py](src/autogen_magentic_one/utils.py)). A list of currently emitted events:

- OrchestrationEvent: emitted by an [Orchestrator](src/autogen_magentic_one/agents/base_orchestrator.py) agent.
- WebSurferEvent: emitted by a [WebSurfer](src/autogen_magentic_one/agents/multimodal_web_surfer/multimodal_web_surfer.py) agent.

In addition, developers can also handle and process logs generated from the AutoGen core library (e.g., LLMCallEvent, etc.). See the example log handler in [utils.py](src/autogen_magentic_one/utils.py) for how this can be implemented. By default, the logs are written to a file named `log.jsonl`, which can be configured as a parameter to the defined log handler. These logs can be parsed to retrieve data on agent actions; a small parsing sketch follows the setup steps below.

# Setup and Usage

You can install the Magentic-One package and then run the example code to see how the agents work together to accomplish a task.

1. Clone the code and install the package:

   The easiest way to install is with the [uv package installer](https://docs.astral.sh/uv/getting-started/installation/), which you need to install separately; however, this is not strictly necessary. Clone the repo, then use uv to set up and activate a virtual environment:

   ```bash
   git clone https://github.com/microsoft/autogen.git
   cd autogen/python
   uv sync --all-extras
   source .venv/bin/activate
   ```

   On Windows, run `.venv\Scripts\activate` to activate the environment.

2. Install magentic-one from source:

   ```bash
   cd packages/autogen-magentic-one
   pip install -e .
   ```

   The following instructions are for running the example code:

3. Configure the environment variables for the chat completion client. See the instructions below in [Environment Configuration for Chat Completion Client](#environment-configuration-for-chat-completion-client).

4. Magentic-One uses code execution, so you need to have [Docker installed](https://docs.docker.com/engine/install/) to run any examples.

5. Magentic-One uses Playwright to interact with web pages. You need to install the Playwright dependencies:

   ```bash
   playwright install --with-deps chromium
   ```

6. Now you can run the example code to see how the agents work together to accomplish a task.

   > [!CAUTION]
   > The example code may download files from the internet, execute code, and interact with web pages. Ensure you are in a safe environment before running the example code.

   > [!NOTE]
   > You will need to ensure Docker is running prior to running the example.

   ```bash
   # Specify logs directory
   python examples/example.py --logs_dir ./logs

   # Enable human-in-the-loop mode
   python examples/example.py --logs_dir ./logs --hil_mode

   # Save screenshots of browser
   python examples/example.py --logs_dir ./logs --save_screenshots
   ```

   Arguments:

   - logs_dir: (Required) Directory for logs, downloads, and screenshots of the browser (default: current directory)
   - hil_mode: (Optional) Enable human-in-the-loop mode (default: disabled)
   - save_screenshots: (Optional) Save screenshots of browser (default: disabled)

7. [Preview] We have a preview API for Magentic-One. You can use the `MagenticOneHelper` class to interact with the system and stream logs. See the [interface README](interface/README.md) for more details.
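As mentioned in the logging section above, here is a small sketch of parsing the `log.jsonl` file; the `"type"` field and event name below are assumptions, so inspect your own log file for the exact schema.

```python
# Parse the JSON-lines log produced by the example log handler.
# The "type" field and the "WebSurferEvent" value are assumptions;
# check your own log file for the actual schema.
import json

with open("./logs/log.jsonl") as f:
    events = [json.loads(line) for line in f if line.strip()]

web_surfer_events = [e for e in events if e.get("type") == "WebSurferEvent"]
print(f"{len(events)} events total, {len(web_surfer_events)} from the WebSurfer")
```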
## Environment Configuration for Chat Completion Client

This guide outlines how to configure your environment to use the `create_completion_client_from_env` function, which reads environment variables to return an appropriate `ChatCompletionClient`. Currently, Magentic-One only supports OpenAI's GPT-4o as the underlying LLM.

### Azure OpenAI service

To configure for the Azure OpenAI service, set the following environment variables:

- `CHAT_COMPLETION_PROVIDER='azure'`
- `CHAT_COMPLETION_KWARGS_JSON` with the following JSON structure:

```json
{
  "api_version": "2024-02-15-preview",
  "azure_endpoint": "REPLACE_WITH_YOUR_ENDPOINT",
  "model_capabilities": {
    "function_calling": true,
    "json_output": true,
    "vision": true
  },
  "azure_ad_token_provider": "DEFAULT",
  "model": "gpt-4o-2024-05-13"
}
```

This project uses the Azure OpenAI service with [Entra ID authentication by default](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity). If you run the examples on a local device, you can use the Azure CLI cached credentials for testing: log in to Azure using `az login`, and then run the examples. The account used must have [RBAC permissions](https://learn.microsoft.com/azure/ai-services/openai/how-to/role-based-access-control) like `Azure Cognitive Services OpenAI User` for the OpenAI service; otherwise, you will receive the error "Principal does not have access to API/Operation." Note that even if you are the owner of the subscription, you still need to grant the necessary Azure Cognitive Services OpenAI permissions to call the API.

### With OpenAI

To configure for OpenAI, set the following environment variables:

- `CHAT_COMPLETION_PROVIDER='openai'`
- `CHAT_COMPLETION_KWARGS_JSON` with the following JSON structure:

```json
{
  "api_key": "REPLACE_WITH_YOUR_API",
  "model": "gpt-4o-2024-05-13"
}
```

Feel free to replace the model with newer versions of gpt-4o if needed.

### Other Keys (Optional)

Some functionality, such as using web search, requires an API key for Bing. You can set it using:

```bash
export BING_API_KEY=xxxxxxx
```
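If you prefer to configure these from Python rather than the shell, a minimal sketch follows; the variable names come from this README, while reading the key from `OPENAI_API_KEY` is an assumption about how you store your secret.

```python
# Configure the OpenAI variant from Python. Assumes the OpenAI API key is
# already exported as OPENAI_API_KEY; adjust to your own secret handling.
import json
import os

os.environ["CHAT_COMPLETION_PROVIDER"] = "openai"
os.environ["CHAT_COMPLETION_KWARGS_JSON"] = json.dumps(
    {
        "api_key": os.environ["OPENAI_API_KEY"],
        "model": "gpt-4o-2024-05-13",
    }
)
```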
## Citation

```
@misc{fourney2024magenticonegeneralistmultiagentsolving,
      title={Magentic-One: A Generalist Multi-Agent System for Solving Complex Tasks},
      author={Adam Fourney and Gagan Bansal and Hussein Mozannar and Cheng Tan and Eduardo Salinas and Erkang Zhu and Friederike Niedtner and Grace Proebsting and Griffin Bassman and Jack Gerrits and Jacob Alber and Peter Chang and Ricky Loynd and Robert West and Victor Dibia and Ahmed Awadallah and Ece Kamar and Rafah Hosn and Saleema Amershi},
      year={2024},
      eprint={2411.04468},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2411.04468},
}
```
# MagenticOne Interface

This folder contains a preview interface for interacting with the MagenticOne system. It includes helper classes and example usage.
## Usage

### MagenticOneHelper

The MagenticOneHelper class provides an interface to interact with the MagenticOne system. It saves logs to a user-specified directory and provides methods to run tasks, stream logs, and retrieve the final answer. The class provides the following methods:

- `async initialize(self) -> None`: Initializes the MagenticOne system, setting up agents and runtime.
- `async run_task(self, task: str) -> None`: Runs a specific task through the MagenticOne system.
- `get_final_answer(self) -> Optional[str]`: Retrieves the final answer from the Orchestrator.
- `async stream_logs(self) -> AsyncGenerator[Dict[str, Any], None]`: Streams logs from the system as they are generated.
- `get_all_logs(self) -> List[Dict[str, Any]]`: Retrieves all logs that have been collected so far.

We show an example of how to use the MagenticOneHelper class in [example_magentic_one_helper.py](example_magentic_one_helper.py).

```python
from magentic_one_helper import MagenticOneHelper
import asyncio
import json

async def magentic_one_example():
    # Create and initialize MagenticOne
    magnetic_one = MagenticOneHelper(logs_dir="./logs")
    await magnetic_one.initialize()
    print("MagenticOne initialized.")

    # Start a task and stream logs
    task = "How many members are in the MSR HAX Team"
    task_future = asyncio.create_task(magnetic_one.run_task(task))

    # Stream and process logs
    async for log_entry in magnetic_one.stream_logs():
        print(json.dumps(log_entry, indent=2))

    # Wait for task to complete
    await task_future

    # Get the final answer
    final_answer = magnetic_one.get_final_answer()
    if final_answer is not None:
        print(f"Final answer: {final_answer}")
    else:
        print("No final answer found in logs.")
```
# Examples of Magentic-One

**Note**: The examples in this folder are run at your own risk. They involve agents navigating the web, executing code, and browsing local files. Please supervise the execution of the agents to reduce any risks. We also recommend running the examples in a virtual machine or a sandboxed environment.

We include various examples for using Magentic-One and its agents:

- [example.py](example.py): A human-in-the-loop Magentic-One session trying to solve a task specified by user input.

  ```bash
  # Specify logs directory
  python examples/example.py --logs_dir ./my_logs

  # Enable human-in-the-loop mode
  python examples/example.py --logs_dir ./my_logs --hil_mode

  # Save screenshots of browser
  python examples/example.py --logs_dir ./my_logs --save_screenshots
  ```

  Arguments:

  - logs_dir: (Required) Directory for logs, downloads, and screenshots of the browser (default: current directory)
  - hil_mode: (Optional) Enable human-in-the-loop mode (default: disabled)
  - save_screenshots: (Optional) Save screenshots of browser (default: disabled)

The following examples are for individual agents in Magentic-One:

- [example_coder.py](example_coder.py): An example of the Coder + Execution agents in Magentic-One, without the Magentic-One orchestrator. In a loop, specified by using the RoundRobinOrchestrator, the coder writes code based on user input, the executor runs the code, and then the user is asked for input again.
- [example_file_surfer.py](example_file_surfer.py): An example of the FileSurfer agent individually. In a loop, specified by using the RoundRobinOrchestrator, the file surfer responds to user input and then the user is asked for input again.
- [example_userproxy.py](example_userproxy.py): An example of the Coder agent in Magentic-One. Compared to [example_coder.py](example_coder.py), this example is just meant to show how to interact with the Coder agent, which serves as a general-purpose assistant without tools. In a loop, specified by using the RoundRobinOrchestrator, the coder responds to user input and then the user is asked for input again.
- [example_websurfer.py](example_websurfer.py): An example of the MultimodalWebSurfer agent in Magentic-One, without the orchestrator. To view the browser the agent uses, pass the argument `headless=False` to `actual_surfer.init`. In a loop, specified by using the RoundRobinOrchestrator, the web surfer performs a single action on the browser in response to user input and then the user is asked for input again.
# AutoGen Studio

[![PyPI version](https://badge.fury.io/py/autogenstudio.svg)](https://badge.fury.io/py/autogenstudio)
[![Downloads](https://static.pepy.tech/badge/autogenstudio/week)](https://pepy.tech/project/autogenstudio)

![ARA](./docs/ags_screen.png)

AutoGen Studio is an AutoGen-powered AI app (user interface) to help you rapidly prototype AI agents, enhance them with skills, compose them into workflows, and interact with them to accomplish tasks. It is built on top of the [AutoGen](https://microsoft.github.io/autogen) framework, which is a toolkit for building AI agents. Code for AutoGen Studio is on GitHub at [microsoft/autogen](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-studio).

> **Note**: AutoGen Studio is meant to help you rapidly prototype multi-agent workflows and demonstrate an example of end-user interfaces built with AutoGen. It is not meant to be a production-ready app.

> [!WARNING]
> AutoGen Studio is currently under active development and we are iterating quickly. Kindly consider that we may introduce breaking changes in the releases during the upcoming weeks, and also that the `README` might be outdated. Please see the AutoGen Studio [docs](https://microsoft.github.io/autogen/docs/autogen-studio/getting-started) page for the most up-to-date information.

**Updates**

> Nov 14: AutoGen Studio is being rewritten to use the updated AutoGen 0.4.0 AgentChat API.

> April 17: The AutoGen Studio database layer is now rewritten to use [SQLModel](https://sqlmodel.tiangolo.com/) (Pydantic + SQLAlchemy). This provides entity linking (skills, models, agents, and workflows are linked via association tables) and supports the multiple [database backend dialects](https://docs.sqlalchemy.org/en/20/dialects/) supported in SQLAlchemy (SQLite, PostgreSQL, MySQL, Oracle, Microsoft SQL Server). The backend database can be specified with a `--database-uri` argument when running the application. For example, `autogenstudio ui --database-uri sqlite:///database.sqlite` for SQLite and `autogenstudio ui --database-uri postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL.

> March 12: The default directory for AutoGen Studio is now /home/<user>/.autogenstudio. You can also specify this directory using the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database and other files in the specified directory, e.g. `/path/to/folder/database.sqlite`. `.env` files in that directory will be used to set environment variables for the app.

Project Structure:

- _autogenstudio/_ code for the backend classes and web api (FastAPI)
- _frontend/_ code for the webui, built with Gatsby and TailwindCSS

### Installation

There are two ways to install AutoGen Studio: from PyPi or from source. We **recommend installing from PyPi** unless you plan to modify the source code.

1. **Install from PyPi**

   We recommend using a virtual environment (e.g., conda) to avoid conflicts with existing Python packages. With Python 3.10 or newer active in your virtual environment, use pip to install AutoGen Studio:

   ```bash
   pip install autogenstudio
   ```

2. **Install from Source**

   > Note: This approach requires some familiarity with building interfaces in React.

   If you prefer to install from source, ensure you have Python 3.10+ and Node.js (version above 14.15.0) installed. Here's how you get started:

   - Clone the AutoGen Studio repository and install its Python dependencies:

     ```bash
     pip install -e .
     ```

   - Navigate to the `python/packages/autogen-studio/frontend` directory, install dependencies, and build the UI:

     ```bash
     npm install -g gatsby-cli
     npm install --global yarn
     cd frontend
     yarn install
     yarn build
     ```

   For Windows users, you may need alternative commands to build the frontend:

   ```bash
   gatsby clean && rmdir /s /q ..\\autogenstudio\\web\\ui 2>nul & (set \"PREFIX_PATH_VALUE=\" || ver>nul) && gatsby build --prefix-paths && xcopy /E /I /Y public ..\\autogenstudio\\web\\ui
   ```

### Running the Application

Once installed, run the web UI by entering the following in your terminal:

```bash
autogenstudio ui --port 8081
```

This will start the application on the specified port. Open your web browser and go to `http://localhost:8081/` to begin using AutoGen Studio.

AutoGen Studio also takes several parameters to customize the application:

- `--host <host>` argument to specify the host address. By default, it is set to `localhost`.
- `--appdir <appdir>` argument to specify the directory where the app files (e.g., database and generated user files) are stored. By default, it is set to a `.autogenstudio` directory in the user's home directory.
- `--port <port>` argument to specify the port number. By default, it is set to `8080`.
- `--reload` argument to enable auto-reloading of the server when changes are made to the code. By default, it is set to `False`.
- `--database-uri` argument to specify the database URI. Example values include `sqlite:///database.sqlite` for SQLite and `postgresql+psycopg://user:password@localhost/dbname` for PostgreSQL. If this is not specified, the database URI defaults to a `database.sqlite` file in the `--appdir` directory.
- `--upgrade-database` argument to upgrade the database schema to the latest version. By default, it is set to `False`.

Now that you have AutoGen Studio installed and running, you are ready to explore its capabilities, including defining and modifying agent workflows, interacting with agents and sessions, and expanding agent skills.

#### If running from source

When running from source, you need to separately bring up the frontend server.

1. Open a separate terminal and change directory to the frontend:

   ```bash
   cd frontend
   ```

2. Create a `.env.development` file:

   ```bash
   cp .env.default .env.development
   ```

3. Launch the frontend server:

   ```bash
   npm run start
   ```
GitHub
autogen
autogen/python/packages/autogen-studio/README.md
autogen
Contribution Guide

We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project:

- Review the overall AutoGen project [contribution guide](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing).
- Review the AutoGen Studio [roadmap](https://github.com/microsoft/autogen/issues/4006) to get a sense of the current priorities for the project. Help is especially appreciated with Studio issues tagged with `help-wanted`.
- Initiate a discussion on the roadmap issue or a new issue to discuss your proposed contribution.
- Submit a pull request with your contribution!
- If you are modifying AutoGen Studio, note that it has its own devcontainer; see the instructions in `.devcontainer/README.md` to use it.
- Use the tag `proj-studio` for any issues, questions, and PRs related to Studio.
GitHub
autogen
autogen/python/packages/autogen-studio/README.md
autogen
FAQ Please refer to the AutoGen Studio [FAQs](https://microsoft.github.io/autogen/docs/autogen-studio/faqs) page for more information.
GitHub
autogen
autogen/python/packages/autogen-studio/README.md
autogen
Acknowledgements

AutoGen Studio is based on the [AutoGen](https://microsoft.github.io/autogen) project. It was adapted from a research prototype built in October 2023 (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang).
GitHub
autogen
autogen/python/packages/autogen-studio/frontend/README.md
autogen
## 🚀 Running UI in Dev Mode

Run the UI in dev mode (make changes and see them reflected in the browser with hot reloading):

- yarn install
- yarn start

This should start the server on port 8000.
GitHub
autogen
autogen/python/packages/autogen-studio/frontend/README.md
autogen
Design Elements

- **Gatsby**: The app is created in Gatsby. A guide on bootstrapping a Gatsby app can be found here - https://www.gatsbyjs.com/docs/quick-start/. This provides an overview of the project file structure, including the functionality of files like `gatsby-config.js`, `gatsby-node.js`, `gatsby-browser.js` and `gatsby-ssr.js`.
- **TailwindCSS**: The app uses TailwindCSS for styling. A guide on using TailwindCSS with Gatsby can be found here - https://tailwindcss.com/docs/guides/gatsby. This will explain the functionality in `tailwind.config.js` and `postcss.config.js`.
GitHub
autogen
autogen/python/packages/autogen-studio/frontend/README.md
autogen
Modifying the UI, Adding Pages

The core of the app can be found in the `src` folder. To add pages, add a new folder in `src/pages` and add an `index.tsx` file; this will be the entry point for the page. For example, to add a route in the app like `/about`, add a folder `about` in `src/pages` and add an `index.tsx` file. You can follow the content style in `src/pages/index.tsx` to add content to the page.

Core logic for each component should be written in the `src/components` folder and then imported in pages as needed.
GitHub
autogen
autogen/python/packages/autogen-studio/frontend/README.md
autogen
connecting to front end

The front end makes requests to the backend API and expects it at `/api` on localhost, port 8081.
GitHub
autogen
autogen/python/packages/autogen-studio/frontend/README.md
autogen
setting env variables for the UI

- Please look at `.env.default`.
- Make a copy of this file and name it `.env.development`.
- Set the values for the variables in this file.
- The main variable here is `GATSBY_API_URL`, which should be set to `http://localhost:8081/api` for local development. This tells the UI where to make requests to the backend.
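Assuming `GATSBY_API_URL` is the only variable you need to override for local development (copy any others you need from `.env.default`), the resulting file can be as small as:

```bash
# .env.development (minimal sketch; only the documented variable is set)
GATSBY_API_URL=http://localhost:8081/api
```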
GitHub
autogen
autogen/python/packages/autogen-core/README.md
autogen
# AutoGen Core - [Documentation](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/index.html)
GitHub
autogen
autogen/python/packages/autogen-core/README.md
autogen
Package layering

- `base` are the foundational generic interfaces upon which all else is built. This module must not depend on any other module.
- `application` are implementations of core components that are used to compose an application.
- `components` are the building blocks for creating agents.
GitHub
autogen
autogen/python/packages/autogen-core/samples/README.md
autogen
# Samples

This directory contains sample apps that use the AutoGen Core API. See the [core user guide](../docs/src/user-guide/core-user-guide/) for notebook examples. See [Running the examples](#running-the-examples) for instructions on how to run the examples.

- [`chess_game.py`](chess_game.py): an example with two chess player agents, each of which executes its own tools, to demonstrate tool use and reflection on tool use.
- [`slow_human_in_loop.py`](slow_human_in_loop.py): an example showing human-in-the-loop which waits for human input before making the tool call.
GitHub
autogen
autogen/python/packages/autogen-core/samples/README.md
autogen
Running the examples

### Prerequisites

First, you need a shell with AutoGen Core and the required dependencies installed.

### Using Azure OpenAI API

For the Azure OpenAI API, you need to set the following environment variables:

```bash
export OPENAI_API_TYPE=azure
export AZURE_OPENAI_API_ENDPOINT=your_azure_openai_endpoint
export AZURE_OPENAI_API_VERSION=your_azure_openai_api_version
```

By default, we use Azure Active Directory (AAD) for authentication. You need to run `az login` first to authenticate with Azure. AAD authentication also requires the `azure-identity` package (a client-construction sketch follows at the end of this section):

```bash
pip install azure-identity
```

Alternatively, you can use API key authentication by setting the following environment variable:

```bash
export AZURE_OPENAI_API_KEY=your_azure_openai_api_key
```

### Using OpenAI API

For the OpenAI API, you need to set the following environment variables:

```bash
export OPENAI_API_TYPE=openai
export OPENAI_API_KEY=your_openai_api_key
```
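To make the AAD path above concrete, here is a minimal sketch of how a model client could be constructed from these variables. The `AzureOpenAIChatCompletionClient` import path and parameter names are assumptions based on the dev-release `autogen-ext` API, and the model name is a placeholder; the samples may wire this up differently.

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider

from autogen_ext.models import AzureOpenAIChatCompletionClient  # assumed import path

# AAD token provider; requires a prior `az login` and the azure-identity package.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAIChatCompletionClient(
    model="gpt-4o",  # placeholder model/deployment name
    azure_endpoint=os.environ["AZURE_OPENAI_API_ENDPOINT"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_ad_token_provider=token_provider,
)
```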
GitHub
autogen
autogen/python/packages/autogen-core/samples/semantic_router/README.md
autogen
# Multi Agent Orchestration, Distributed Agent Runtime Example This repository is an example of how to run a distributed agent runtime. The system is composed of three main components: 1. The agent host runtime, which is responsible for managing the eventing engine, and the pub/sub message system. 2. The worker runtime, which is responsible for the lifecycle of the distributed agents, including the "semantic router". 3. The user proxy, which is responsible for managing the user interface and the user interactions with the agents.
GitHub
autogen
autogen/python/packages/autogen-core/samples/semantic_router/README.md
autogen
Example Scenario

In this example, we have a simple scenario with a set of distributed agents (an "HR" agent and a "Finance" agent) which an enterprise may use to manage their HR and Finance operations. Each of these agents is independent and can be running on a different machine. While many multi-agent systems are built to have the agents collaborate to solve a difficult task, the goal of this example is to show how an enterprise may manage a large set of agents that are suited to individual tasks, and how to route a user to the most relevant agent for the task at hand.

The way this system is designed, when a user initiates a session, the semantic router agent identifies the intent of the user (currently using the overly simple method of string matching), identifies the most relevant agent, and then routes the user to that agent. The agent then manages the conversation with the user, and the user can interact with the agent in a conversational manner. While the logic of the agents is simple in this example, the goal is to show how the distributed runtime capabilities of AutoGen support this scenario independently of the capabilities of the agents themselves.
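As an illustration of that string-matching step, here is a minimal sketch; the keyword lists mirror the ones listed under "To run" below, while the function and agent names are hypothetical, not the sample's actual code:

```python
# Minimal sketch of keyword-based intent routing, as described above.
INTENT_KEYWORDS = {
    "finance_agent": ["finance", "money", "budget"],
    "hr_agent": ["hr", "human resources", "employee"],
}


def route_intent(user_message: str) -> str:
    """Return the agent type whose keywords match the message, if any."""
    text = user_message.lower()
    for agent_type, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return agent_type
    return "default_agent"  # fall back when no keyword matches


print(route_intent("What is our travel budget this quarter?"))  # -> finance_agent
```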
GitHub
autogen
autogen/python/packages/autogen-core/samples/semantic_router/README.md
autogen
Getting Started 1. Install `autogen-core` and its dependencies
GitHub
autogen
autogen/python/packages/autogen-core/samples/semantic_router/README.md
autogen
To run

Since this example is meant to demonstrate a distributed runtime, the components of this example are meant to run in different processes, i.e., different terminals.

In 2 separate terminals, run:

```bash
# Terminal 1, to run the Agent Host Runtime
python run_host.py
```

```bash
# Terminal 2, to run the Worker Runtime
python run_semantic_router.py
```

The first terminal should log a series of events where the various agents are registered against the runtime. In the second terminal, you may enter a request related to finance or HR scenarios. In our simple example here, this means using one of the following keywords in your request:

- For the finance agent: "finance", "money", "budget"
- For the hr agent: "hr", "human resources", "employee"

You will then see the host and worker runtimes send messages back and forth, routing to the correct agent, before the final response is printed. The conversation can then continue with the selected agent until the user sends a message containing "END", at which point the agent will be disconnected from the user and a new conversation can start.
GitHub
autogen
autogen/python/packages/autogen-core/samples/semantic_router/README.md
autogen
Message Flow Using the "Topic" feature of the agent host runtime, the message flow of the system is as follows: ```mermaid sequenceDiagram participant User participant Closure_Agent participant User_Proxy_Agent participant Semantic_Router participant Worker_Agent User->>User_Proxy_Agent: Send initial message Semantic_Router->>Worker_Agent: Route message to appropriate agent Worker_Agent->>User_Proxy_Agent: Respond to user message User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent Closure_Agent->>User: Expose the response to the User User->>Worker_Agent: Directly send follow up message Worker_Agent->>User_Proxy_Agent: Respond to user message User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent Closure_Agent->>User: Return response User->>Worker_Agent: Send "END" message Worker_Agent->>User_Proxy_Agent: Confirm session end User_Proxy_Agent->>Closure_Agent: Confirm session end Closure_Agent->>User: Display session end message ``` ### Contributors - Diana Iftimie (@diftimieMSFT) - Oscar Fimbres (@ofimbres) - Taylor Rockey (@tarockey)
GitHub
autogen
autogen/python/packages/autogen-core/samples/distributed-group-chat/README.md
autogen
# Distributed Group Chat

This example runs a gRPC server using [WorkerAgentRuntimeHost](../../src/autogen_core/application/_worker_runtime_host.py) and instantiates three distributed runtimes using [WorkerAgentRuntime](../../src/autogen_core/application/_worker_runtime.py). These runtimes connect to the gRPC server host and facilitate a round-robin distributed group chat. This example leverages the [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service) to implement writer and editor LLM agents. Agents are instructed to provide concise answers, as the primary goal of this example is to showcase the distributed runtime rather than the quality of agent responses.
GitHub
autogen
autogen/python/packages/autogen-core/samples/distributed-group-chat/README.md
autogen
Setup ### Setup Python Environment 1. Create a virtual environment as instructed in [README](../../../../../../../../README.md). 2. Run `uv pip install chainlit` in the same virtual environment ### General Configuration In the `config.yaml` file, you can configure the `client_config` section to connect the code to the Azure OpenAI Service. ### Authentication The recommended method for authentication is through Azure Active Directory (AAD), as explained in [Model Clients - Azure AI](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/framework/model-clients.html#azure-openai). This example works with both the AAD approach (recommended) and by providing the `api_key` in the `config.yaml` file.
GitHub
autogen
autogen/python/packages/autogen-core/samples/distributed-group-chat/README.md
autogen
Run

### Run Through Scripts

The [run.sh](./run.sh) file provides commands to run the host and agents using [tmux](https://github.com/tmux/tmux/wiki). The steps for this approach are:

1. Install tmux.
2. Activate the Python environment: `source .venv/bin/activate`.
3. Run the bash script: `./run.sh`.

Here is a screen recording of the execution:

[![Distributed Group Chat Demo with Simple UI Integration](https://img.youtube.com/vi/503QJ1onV8I/0.jpg)](https://youtu.be/503QJ1onV8I?feature=shared)

**Note**: Some `asyncio.sleep` commands have been added to the example code to make the `./run.sh` execution look sequential and visually easy to follow. In practice, these lines are not necessary.

### Run Individual Files

If you prefer to run Python files individually, follow these steps. Note that each step must be run in a different terminal process, and the virtual environment should be activated using `source .venv/bin/activate`.

1. `python run_host.py`: Starts the host and listens for agent connections.
2. `chainlit run run_ui.py --port 8001`: Starts the Chainlit app and UI agent, and listens on the UI topic to display messages. We use port 8001 because the default port 8000 is used to run the host (assuming the same machine runs all of the agents).
3. `python run_editor.py`: Starts the <img src="./public/avatars/editor.png" width="20" height="20" style="vertical-align:middle"> editor agent and connects it to the host.
4. `python run_writer.py`: Starts the <img src="./public/avatars/writer.png" width="20" height="20" style="vertical-align:middle"> writer agent and connects it to the host.
5. `python run_group_chat_manager.py`: Runs the Chainlit app, which starts the <img src="./public/avatars/group_chat_manager.png" width="20" height="20" style="vertical-align:middle"> group chat manager agent and sends the initial message to start the conversation.
GitHub
autogen
autogen/python/packages/autogen-core/samples/distributed-group-chat/README.md
autogen
What's Going On?

The general flow of this example is as follows:

0. The UI Agent starts the UI app, listens for the stream of messages in the UI topic, and displays them in the UI.
1. The <img src="./public/avatars/group_chat_manager.png" width="20" height="20" style="vertical-align:middle"> Group Chat Manager, on behalf of <img src="./public/avatars/user.png" width="20" height="20" style="vertical-align:middle"> `User`, sends a `RequestToSpeak` request to the <img src="./public/avatars/writer.png" width="20" height="20" style="vertical-align:middle"> `writer_agent`.
2. The <img src="./public/avatars/writer.png" width="20" height="20" style="vertical-align:middle"> `writer_agent` writes a short sentence into the group chat topic.
3. The <img src="./public/avatars/editor.png" width="20" height="20" style="vertical-align:middle"> `editor_agent` receives the message in the group chat topic and updates its memory.
4. The <img src="./public/avatars/group_chat_manager.png" width="20" height="20" style="vertical-align:middle"> Group Chat Manager simultaneously receives the message sent by the writer into the group chat and sends the next participant, the <img src="./public/avatars/editor.png" width="20" height="20" style="vertical-align:middle"> `editor_agent`, a `RequestToSpeak` message.
5. The <img src="./public/avatars/editor.png" width="20" height="20" style="vertical-align:middle"> `editor_agent` sends its feedback to the group chat topic.
6. The <img src="./public/avatars/writer.png" width="20" height="20" style="vertical-align:middle"> `writer_agent` receives the feedback and updates its memory.
7. The <img src="./public/avatars/group_chat_manager.png" width="20" height="20" style="vertical-align:middle"> Group Chat Manager simultaneously receives the message and repeats the loop from step 1.

Here is an illustration of the system developed in this example:

```mermaid
graph TD;
    subgraph Host
        A1[GRPC Server]
        wt[Writer Topic]
        et[Editor Topic]
        ut[UI Topic]
        gct[Group Chat Topic]
    end
    all_agents[All Agents - Simplified Arrows!] --> A1
    subgraph Distributed Writer Runtime
        wt -.->|2 - Subscription| writer_agent
        gct -.->|4 - Subscription| writer_agent
        writer_agent -.->|3.1 - Publish: UI Message| ut
        writer_agent -.->|3.2 - Publish: Group Chat Message| gct
    end
    subgraph Distributed Editor Runtime
        et -.->|6 - Subscription| editor_agent
        gct -.->|4 - Subscription| editor_agent
        editor_agent -.->|7.1 - Publish: UI Message| ut
        editor_agent -.->|7.2 - Publish: Group Chat Message| gct
    end
    subgraph Distributed Group Chat Manager Runtime
        gct -.->|4 - Subscription| group_chat_manager
        group_chat_manager -.->|1 - Request To Speak| wt
        group_chat_manager -.->|5 - Request To Speak| et
        group_chat_manager -.->|* - Publish Some Messages to UI| ut
    end
    subgraph Distributed UI Runtime
        ut -.->|* - Subscription| ui_agent
    end

    style wt fill:#beb2c3,color:#000
    style et fill:#beb2c3,color:#000
    style gct fill:#beb2c3,color:#000
    style ut fill:#beb2c3,color:#000
    style writer_agent fill:#b7c4d7,color:#000
    style editor_agent fill:#b7c4d7,color:#000
    style group_chat_manager fill:#b7c4d7,color:#000
    style ui_agent fill:#b7c4d7,color:#000
```

### Contributors

- Diana Iftimie (@diftimieMSFT)
- Oscar Fimbres (@ofimbres)
- Taylor Rockey (@tarockey)
GitHub
autogen
autogen/python/packages/autogen-core/samples/distributed-group-chat/README.md
autogen
TODO: - [ ] Properly handle chat restarts. It complains about group chat manager being already registered - [ ] Add streaming to the UI like [this example](https://docs.chainlit.io/advanced-features/streaming) when [this bug](https://github.com/microsoft/autogen/issues/4213) is resolved
GitHub
autogen
autogen/python/packages/autogen-core/docs/README.md
autogen
## Building the AutoGen Documentation

AutoGen documentation is based on the Sphinx documentation system and uses myst-parser to render markdown files. It uses the [pydata-sphinx-theme](https://pydata-sphinx-theme.readthedocs.io/en/latest/) to style the documentation.

### Prerequisites

Ensure you have all of the dev dependencies for the `autogen-core` package installed. You can install them by running the following command from the root of the python repository:

```bash
uv sync
source .venv/bin/activate
```
GitHub
autogen
autogen/python/packages/autogen-core/docs/README.md
autogen
Building Docs

To build the documentation, run the following command from the root of the python repository:

```bash
poe --directory ./packages/autogen-core/ docs-build
```

To serve the documentation locally, run the following command from the root of the python repository:

```bash
poe --directory ./packages/autogen-core/ docs-serve
```

> [!NOTE]
> Sphinx will only rebuild files that have changed since the last build. If you want to force a full rebuild, you can delete the `./packages/autogen-core/docs/build` directory before running the `docs-build` command.
GitHub
autogen
autogen/python/packages/autogen-core/docs/README.md
autogen
Versioning the Documentation The current theme - [pydata-sphinx-theme](https://pydata-sphinx-theme.readthedocs.io/en/latest/) - supports [switching between versions](https://pydata-sphinx-theme.readthedocs.io/en/stable/user_guide/version-dropdown.html) of the documentation. To version the documentation, you need to create a new version of the documentation by copying the existing documentation to a new directory with the version number. For example, to create a new version of the documentation for version `0.1.0`, you would run the following command: How are various versions built? - TBD.
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/index.md
autogen
--- myst: html_meta: "description lang=en": | Top-level documentation for AutoGen, a framework for developing applications using AI agents html_theme.sidebar_secondary.remove: false sd_hide_title: true --- <style> .hero-title { font-size: 60px; font-weight: bold; margin: 2rem auto 0; } .wip-card { border: 1px solid var(--pst-color-success); background-color: var(--pst-color-success-bg); border-radius: .25rem; padding: 0.3rem; display: flex; justify-content: center; align-items: center; margin-bottom: 1rem; } </style> # AutoGen <div class="container"> <div class="row text-center"> <div class="col-sm-12"> <h1 class="hero-title"> AutoGen </h1> <h3> A framework for building AI agents and multi-agent applications </h3> </div> </div> </div> <div style="margin-top: 2rem;"> ::::{grid} 1 1 2 2 :::{grid-item-card} :shadow: none :margin: 2 0 0 0 <div class="wip-card"> {fas}`triangle-exclamation` Work in progress </div> <div class="sd-card-title sd-font-weight-bold docutils"> {fas}`people-group;pst-color-primary` AgentChat </div> High-level API that includes preset agents and teams for building multi-agent systems. ```sh pip install 'autogen-agentchat==0.4.0.dev7' ``` 💡 *Start here if you are looking for an API similar to AutoGen 0.2* +++ ```{button-ref} user-guide/agentchat-user-guide/quickstart :color: secondary Get Started ``` ::: :::{grid-item-card} {fas}`cube;pst-color-primary` Core :shadow: none :margin: 2 0 0 0 Provides building blocks for creating asynchronous, event driven multi-agent systems. ```sh pip install 'autogen-core==0.4.0.dev7' ``` +++ ```{button-ref} user-guide/core-user-guide/quickstart :color: secondary Get Started ``` ::: :::: </div> ```{toctree} :maxdepth: 3 :hidden: user-guide/index packages/index reference/index ```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/packages/index.md
autogen
--- myst: html_meta: "description lang=en": | AutoGen packages provide a set of functionality for building multi-agent applications with AI agents. --- <style> .card-title { font-size: 1.2rem; font-weight: bold; } .card-title svg { font-size: 2rem; vertical-align: bottom; margin-right: 5px; } </style> # Packages
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/packages/index.md
autogen
0.4 (pkg-info-autogen-agentchat)= :::{card} {fas}`people-group;pst-color-primary` AutoGen AgentChat :class-title: card-title :shadow: none Library that is at a similar level of abstraction as AutoGen 0.2, including default agents and group chat. ```sh pip install 'autogen-agentchat==0.4.0.dev7' ``` [{fas}`circle-info;pst-color-primary` User Guide](/user-guide/agentchat-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_agentchat/autogen_agentchat.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-agentchat/0.4.0.dev7/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-agentchat) ::: (pkg-info-autogen-core)= :::{card} {fas}`cube;pst-color-primary` AutoGen Core :class-title: card-title :shadow: none Implements the core functionality of the AutoGen framework, providing basic building blocks for creating multi-agent systems. ```sh pip install 'autogen-core==0.4.0.dev7' ``` [{fas}`circle-info;pst-color-primary` User Guide](/user-guide/core-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_core/autogen_core.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-core/0.4.0.dev7/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core) ::: (pkg-info-autogen-ext)= :::{card} {fas}`puzzle-piece;pst-color-primary` AutoGen Extensions :class-title: card-title :shadow: none Implementations of core components that interface with external services, or use extra dependencies. For example, Docker based code execution. ```sh pip install 'autogen-ext==0.4.0.dev7' ``` Extras: - `langchain` needed for {py:class}`~autogen_ext.tools.LangChainToolAdapter` - `azure` needed for {py:class}`~autogen_ext.code_executors.ACADynamicSessionsCodeExecutor` - `docker` needed for {py:class}`~autogen_ext.code_executors.DockerCommandLineCodeExecutor` - `openai` needed for {py:class}`~autogen_ext.models.OpenAIChatCompletionClient` [{fas}`circle-info;pst-color-primary` User Guide](/user-guide/extensions-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_ext/autogen_ext.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-ext/0.4.0.dev7/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-ext) ::: (pkg-info-autogen-magentic-one)= :::{card} {fas}`users;pst-color-primary` Magentic One :class-title: card-title :shadow: none A generalist multi-agent softbot utilizing five agents to tackle intricate tasks involving multi-step planning and real-world actions. ```{note} Not yet available on PyPI. ``` [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one) :::
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/packages/index.md
autogen
0.2 (pkg-info-autogen-02)= :::{card} {fas}`robot;pst-color-primary` AutoGen :class-title: card-title :shadow: none Existing AutoGen library that provides a high-level abstraction for building multi-agent systems. ```sh pip install 'autogen-agentchat~=0.2' ``` [{fas}`circle-info;pst-color-primary` Documentation](https://microsoft.github.io/autogen/0.2/) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-agentchat/0.2.38/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/0.2/) :::
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/packages/index.md
autogen
Other (pkg-info-autogenbench)= :::{card} {fas}`chart-bar;pst-color-primary` AutoGen Bench :class-title: card-title :shadow: none AutoGenBench is a tool for repeatedly running pre-defined AutoGen tasks in tightly-controlled initial conditions. ```sh pip install autogenbench ``` [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogenbench/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/agbench) :::
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/reference/index.md
autogen
--- myst: html_meta: "description lang=en": | AutoGen is a community-driven project. Learn how to get involved, contribute, and connect with the community. --- # API Reference ```{toctree} :hidden: :caption: AutoGen AgentChat python/autogen_agentchat/autogen_agentchat ``` ```{toctree} :hidden: :caption: AutoGen Core python/autogen_core/autogen_core ``` ```{toctree} :hidden: :caption: AutoGen Extensions python/autogen_ext/autogen_ext ``` ::::{grid} 1 2 2 3 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {fas}`people-group;pst-color-primary` <br> AutoGen AgentChat :link: python/autogen_agentchat/autogen_agentchat :link-type: doc :class-item: api-card ::: :::{grid-item-card} {fas}`cube;pst-color-primary` <br> AutoGen Core :link: python/autogen_core/autogen_core :link-type: doc :class-item: api-card ::: :::{grid-item-card} {fas}`puzzle-piece;pst-color-primary` <br> AutoGen Extensions :link: python/autogen_ext/autogen_ext :link-type: doc :class-item: api-card ::: ::::
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/index.md
autogen
# User Guide ```{toctree} :maxdepth: 3 :hidden: agentchat-user-guide/index core-user-guide/index extensions-user-guide/index ``` ::::{grid} 1 2 2 3 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {fas}`people-group;pst-color-primary` <br> AutoGen AgentChat :link: agentchat-user-guide/index :link-type: doc :class-item: api-card ::: :::{grid-item-card} {fas}`cube;pst-color-primary` <br> AutoGen Core :link: core-user-guide/index :link-type: doc :class-item: api-card ::: :::: <script type="text/javascript"> setTimeout(function() { window.location.href = "agentchat-user-guide/quickstart.html"; }, 0); </script>
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/index.md
autogen
--- myst: html_meta: "description lang=en": | User Guide for AutoGen Extensions, a framework for building multi-agent applications with AI agents. --- # Extensions ```{toctree} :maxdepth: 3 :hidden: azure-container-code-executor ```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/index.md
autogen
Discover community projects: ::::{grid} 1 2 2 2 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {fas}`globe;pst-color-primary` <br> Ecosystem :link: https://github.com/topics/autogen :class-item: api-card :columns: 12 Find samples, services and other things that work with AutoGen ::: :::{grid-item-card} {fas}`puzzle-piece;pst-color-primary` <br> Community Extensions :link: https://github.com/topics/autogen-extension :class-item: api-card Find AutoGen extensions for 3rd party tools, components and services ::: :::{grid-item-card} {fas}`vial;pst-color-primary` <br> Community Samples :link: https://github.com/topics/autogen-sample :class-item: api-card Find community samples and examples of how to use AutoGen ::: :::: ### List of community projects | Name | Package | Description | |---|---|---| | [autogen-watsonx-client](https://github.com/tsinggggg/autogen-watsonx-client) | [PyPi](https://pypi.org/project/autogen-watsonx-client/) | Model client for [IBM watsonx.ai](https://www.ibm.com/products/watsonx-ai) | <!-- Example --> <!-- | [My Model Client](https://github.com/example) | [PyPi](https://pypi.org/project/example) | Model client for my custom model service | --> <!-- - Name should link to the project page or repo - Package should link to the PyPi page - Description should be a brief description of the project. 1 short sentence is ideal. -->
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/index.md
autogen
Built-in extensions

Read the docs for built-in extensions:

```{note}
WIP
```

<!-- ::::{grid} 1 2 3 3 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} LangChain Tools :link: python/autogen_agentchat/autogen_agentchat :link-type: doc ::: :::{grid-item-card} ACA Dynamic Sessions Code Executor :link: python/autogen_agentchat/autogen_agentchat :link-type: doc ::: :::: -->
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/index.md
autogen
Creating your own community extension

With the new package structure in 0.4, it is easier than ever to create and publish your own extension to the AutoGen ecosystem. This page details some best practices so that your extension package integrates well with the AutoGen ecosystem.

### Best practices

#### Naming

There is no requirement about naming. But prefixing the package name with `autogen-` makes it easier to find.

#### Common interfaces

Whenever possible, extensions should implement the provided interfaces from the `autogen_core` package. This will allow for a more consistent experience for users.

##### Dependency on AutoGen

To ensure that the extension works with the version of AutoGen that it was designed for, it is recommended to specify the version of AutoGen in the dependencies section of the `pyproject.toml` with adequate constraints.

```toml
[project]
# ...
dependencies = [
    "autogen-core>=0.4,<0.5"
]
```

#### Usage of typing

AutoGen embraces the use of type hints to provide a better development experience. Extensions should use type hints whenever possible.

### Discovery

To make it easier for users to find your extension, sample, service or package, you can [add the topic](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/classifying-your-repository-with-topics) [`autogen`](https://github.com/topics/autogen) to the GitHub repo.

More specific topics are also available:

- [`autogen-extension`](https://github.com/topics/autogen-extension) for extensions
- [`autogen-sample`](https://github.com/topics/autogen-sample) for samples

### Changes from 0.2

In AutoGen 0.2 it was common to merge 3rd party extensions and examples into the main repo. We are super appreciative of all of the users who have contributed to the ecosystem of notebooks, modules and pages in 0.2. However, in general we are moving away from this model to allow for more flexibility and to reduce maintenance burden.

There is the `autogen-ext` package for 1st party supported extensions, but we want to be selective to manage maintenance load. If you would like to see if your extension makes sense to add into `autogen-ext`, please open an issue and let's discuss. Otherwise, we encourage you to publish your extension as a separate package and follow the guidance under [discovery](#discovery) to make it easy for users to find.
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/index.md
autogen
---
myst:
  html_meta:
    "description lang=en": |
      User Guide for AgentChat, a high-level API for AutoGen
---

# AgentChat

AgentChat is a high-level API for building multi-agent applications. It is built on top of the [`autogen-core`](../core-user-guide/index.md) package. For beginner users, AgentChat is the recommended starting point. For advanced users, [`autogen-core`](../core-user-guide/index.md)'s event-driven programming model provides more flexibility and control over the underlying components.

AgentChat aims to provide intuitive defaults, such as **Agents** with preset behaviors and **Teams** with predefined [multi-agent design patterns](../core-user-guide/design-patterns/index.md), to simplify building multi-agent applications.

```{include} warning.md
```

```{tip}
If you are interested in implementing complex agent interaction behaviours, defining custom messaging protocols, or orchestration mechanisms, consider using the [`autogen-core`](../core-user-guide/index.md) package.
```

::::{grid} 2 2 2 2
:gutter: 3

:::{grid-item-card} {fas}`download;pst-color-primary` Installation
:link: ./installation.html

How to install AgentChat
:::

:::{grid-item-card} {fas}`rocket;pst-color-primary` Quickstart
:link: ./quickstart.html

Build your first agent
:::

:::{grid-item-card} {fas}`graduation-cap;pst-color-primary` Tutorial
:link: ./tutorial/index.html

Step-by-step guide to using AgentChat
:::

:::{grid-item-card} {fas}`code;pst-color-primary` Examples
:link: ./examples/index.html

Sample code and use cases
:::
::::

```{toctree}
:maxdepth: 1
:hidden:

installation
quickstart
tutorial/index
examples/index
```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/installation.md
autogen
--- myst: html_meta: "description lang=en": | Installing AutoGen AgentChat --- # Installation
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/installation.md
autogen
Create a virtual environment (optional) When installing AgentChat locally, we recommend using a virtual environment for the installation. This will ensure that the dependencies for AgentChat are isolated from the rest of your system. ``````{tab-set} `````{tab-item} venv Create and activate: ```bash python3 -m venv .venv source .venv/bin/activate ``` To deactivate later, run: ```bash deactivate ``` ````` `````{tab-item} conda [Install Conda](https://docs.conda.io/projects/conda/en/stable/user-guide/install/index.html) if you have not already. Create and activate: ```bash conda create -n autogen python=3.10 conda activate autogen ``` To deactivate later, run: ```bash conda deactivate ``` ````` ``````
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/installation.md
autogen
Install the AgentChat package using pip

Install the `autogen-agentchat` package using pip:

```bash
pip install 'autogen-agentchat==0.4.0.dev7'
```

```{note}
Python 3.10 or later is required.
```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/installation.md
autogen
Install OpenAI for Model Client

To use the OpenAI and Azure OpenAI models, you need to install the following extension:

```bash
pip install 'autogen-ext[openai]==0.4.0.dev7'
```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/installation.md
autogen
Install Docker for Code Execution We recommend using Docker for code execution. To install Docker, follow the instructions for your operating system on the [Docker website](https://docs.docker.com/get-docker/). A simple example of how to use Docker for code execution is shown below: <!-- ```{include} stocksnippet.md ``` --> To learn more about agents that execute code, see the [agents tutorial](./tutorial/agents.ipynb).
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/warning.md
autogen
```{warning} AgentChat is Work in Progress. APIs may change in future releases. ```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tutorial/index.md
autogen
--- myst: html_meta: "description lang=en": | Tutorial for AutoGen AgentChat, a framework for building multi-agent applications with AI agents. --- # Tutorial Tutorial to get started with AgentChat. ```{include} ../warning.md ``` ::::{grid} 2 2 2 3 :gutter: 3 :::{grid-item-card} {fas}`book-open;pst-color-primary` Models :link: ./models.html Setting up model clients for agents and teams. ::: :::{grid-item-card} {fas}`users;pst-color-primary` Agents :link: ./agents.html Building agents that use models, tools, and code executors. ::: :::{grid-item-card} {fas}`users;pst-color-primary` Teams Intro :link: ./teams.html Introduction to teams and task termination. ::: :::{grid-item-card} {fas}`users;pst-color-primary` Selector Group Chat :link: ./selector-group-chat.html A smart team that uses a model-based strategy and custom selector. ::: :::{grid-item-card} {fas}`users;pst-color-primary` Swarm :link: ./swarm.html A dynamic team that uses handoffs to pass tasks between agents. ::: :::{grid-item-card} {fas}`users;pst-color-primary` Custom Agents :link: ./custom-agents.html How to build custom agents. ::: :::: ```{toctree} :maxdepth: 1 :hidden: models agents teams selector-group-chat swarm termination custom-agents ```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/examples/index.md
autogen
--- myst: html_meta: "description lang=en": | Examples built using AgentChat, a high-level api for AutoGen --- # Examples A list of examples to help you get started with AgentChat. :::::{grid} 2 2 2 3 ::::{grid-item-card} Travel Planning :img-top: ../../../images/code.svg :img-alt: travel planning example :link: ./travel-planning.html ^^^ Generating a travel plan using multiple agents. :::: ::::{grid-item-card} Company Research :img-top: ../../../images/code.svg :img-alt: company research example :link: ./company-research.html ^^^ Generating a company research report using multiple agents with tools. :::: ::::{grid-item-card} Literature Review :img-top: ../../../images/code.svg :img-alt: literature review example :link: ./literature-review.html ^^^ Generating a literature review using agents with tools. :::: ::::: ```{toctree} :maxdepth: 1 :hidden: travel-planning company-research literature-review ```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/faqs.md
autogen
# FAQs
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/faqs.md
autogen
How do I get the underlying agent instance?

Agents might be distributed across multiple machines, so accessing the underlying agent instance is intentionally discouraged. If the agent is definitely running on the same machine, you can access the agent instance by calling {py:meth}`autogen_core.base.AgentRuntime.try_get_underlying_agent_instance` on the `AgentRuntime`. If the agent is not available, this will throw an exception.
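If you are on the same machine and really do need the instance (e.g., for local debugging), the call might look like the following sketch; the `runtime` variable, the agent type name, and the `MyAgent` class are hypothetical, and the keyword usage assumes the dev-release signature:

```python
from autogen_core.base import AgentId

# Assumes `runtime` is an AgentRuntime running in this process and that an
# agent class `MyAgent` was registered under the type "my_agent".
agent_id = AgentId("my_agent", "default")
agent = await runtime.try_get_underlying_agent_instance(agent_id, type=MyAgent)
```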
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/faqs.md
autogen
How do I call a function on an agent?

Since the instance itself is not accessible, you can't call a function on an agent directly. Instead, you should create a type to represent the function call and its arguments, and then send that message to the agent. Then in the agent, create a handler for that message type and implement the required logic. This also supports returning a response to the caller.

This allows your agent to work in a distributed environment as well as a local one.
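As a minimal sketch of this pattern (the names are illustrative, and the handler style assumes the core `RoutedAgent`/`message_handler` components):

```python
from dataclasses import dataclass

from autogen_core.base import MessageContext
from autogen_core.components import RoutedAgent, message_handler


@dataclass
class AddRequest:
    """Message type standing in for the 'function call' and its arguments."""

    a: int
    b: int


@dataclass
class AddResponse:
    """Message type standing in for the function's return value."""

    result: int


class CalculatorAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("An agent that adds numbers on request.")

    @message_handler
    async def handle_add(self, message: AddRequest, ctx: MessageContext) -> AddResponse:
        # The handler body plays the role of the function; returning a value
        # sends a response back to the caller.
        return AddResponse(result=message.a + message.b)
```

The caller then sends an `AddRequest` to the agent's `AgentId` through the runtime (e.g., via `send_message`) and awaits the `AddResponse`, which works the same way locally and across machines.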
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/faqs.md
autogen
Why do I need to use a factory to register an agent? An {py:class}`autogen_core.base.AgentId` is composed of a `type` and a `key`. The type corresponds to the factory that created the agent, and the key is a runtime, data dependent key for this instance. The key can correspond to a user id, a session id, or could just be "default" if you don't need to differentiate between instances. Each unique key will create a new instance of the agent, based on the factory provided. This allows the system to automatically scale to different instances of the same agent, and to manage the lifecycle of each instance independently based on how you choose to handle keys in your application.
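For example, here is a minimal sketch (the type name is illustrative): two ids that share a type but differ in key resolve to two independent instances created by the same factory.

```python
from autogen_core.base import AgentId

# Same agent type (same factory), two data-dependent keys:
# each key gets its own independently managed instance.
id_for_user_a = AgentId("session_agent", "user-123")
id_for_user_b = AgentId("session_agent", "user-456")
```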
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/faqs.md
autogen
How do I increase the GRPC message size?

If you need to provide custom gRPC options, such as overriding the `max_send_message_length` and `max_receive_message_length`, you can define an `extra_grpc_config` variable and pass it to both the `WorkerAgentRuntimeHost` and `WorkerAgentRuntime` instances.

```python
# Example max size; pick a value appropriate for your payloads.
new_max_size = 64 * 1024 * 1024  # 64 MiB

# Define custom gRPC options
extra_grpc_config = [
    ("grpc.max_send_message_length", new_max_size),
    ("grpc.max_receive_message_length", new_max_size),
]

# Create instances of WorkerAgentRuntimeHost and WorkerAgentRuntime with the custom gRPC options
host = WorkerAgentRuntimeHost(address=host_address, extra_grpc_config=extra_grpc_config)
worker1 = WorkerAgentRuntime(host_address=host_address, extra_grpc_config=extra_grpc_config)
```

**Note**: When `WorkerAgentRuntime` creates a host connection for the clients, it uses `DEFAULT_GRPC_CONFIG` from the `HostConnection` class as the default set of values, which can be overridden if you pass parameters with the same name using `extra_grpc_config`.
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/index.md
autogen
--- myst: html_meta: "description lang=en": | User Guide for AutoGen Core, a framework for building multi-agent applications with AI agents. --- # Core ```{toctree} :maxdepth: 1 :hidden: quickstart core-concepts/index framework/index design-patterns/index cookbook/index faqs ``` ```{warning} This project and documentation is a work in progress. If you have any questions or need help, please reach out to us on GitHub. ``` AutoGen core offers an easy way to quickly build event-driven, distributed, scalable, resilient AI agent systems. Agents are developed by using the [Actor model](https://en.wikipedia.org/wiki/Actor_model). You can build and run your agent system locally and easily move to a distributed system in the cloud when you are ready. Key features of AutoGen core include: ```{gallery-grid} :grid-columns: 1 2 2 3 - header: "{fas}`network-wired;pst-color-primary` Asynchronous Messaging" content: "Agents communicate through asynchronous messages, enabling event-driven and request/response communication models." - header: "{fas}`cube;pst-color-primary` Scalable & Distributed" content: "Enable complex scenarios with networks of agents across organizational boundaries." - header: "{fas}`code;pst-color-primary` Multi-Language Support" content: "Python & Dotnet interoperating agents today, with more languages coming soon." - header: "{fas}`globe;pst-color-primary` Modular & Extensible" content: "Highly customizable with features like custom agents, memory as a service, tools registry, and model library." - header: "{fas}`puzzle-piece;pst-color-primary` Observable & Debuggable" content: "Easily trace and debug your agent systems." - header: "{fas}`project-diagram;pst-color-primary` Event-Driven Architecture" content: "Build event-driven, distributed, scalable, and resilient AI agent systems." ```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/telemetry.md
autogen
# Open Telemetry AutoGen has native support for [open telemetry](https://opentelemetry.io/). This allows you to collect telemetry data from your application and send it to a telemetry backend of your choosing. These are the components that are currently instrumented: - Runtime (Single Threaded Agent Runtime, Worker Agent Runtime)
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/telemetry.md
autogen
Instrumenting your application

To instrument your application, you will need an SDK and an exporter. You may already have these if your application is already instrumented with OpenTelemetry.
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/telemetry.md
autogen
Clean instrumentation

If you do not have OpenTelemetry set up in your application, you can follow these steps to instrument your application.

```bash
pip install opentelemetry-sdk
```

Depending on your OpenTelemetry collector, you can use gRPC or HTTP to export your telemetry.

```bash
# Pick one of the following
pip install opentelemetry-exporter-otlp-proto-http
pip install opentelemetry-exporter-otlp-proto-grpc
```

Next, we need to get a tracer provider:

```python
from typing import Optional

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def configure_otlp_tracing(endpoint: Optional[str] = None) -> trace.TracerProvider:
    # Configure Tracing
    tracer_provider = TracerProvider(resource=Resource({"service.name": "my-service"}))
    # When endpoint is None, the exporter falls back to its default endpoint.
    processor = BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint))
    tracer_provider.add_span_processor(processor)
    trace.set_tracer_provider(tracer_provider)

    return tracer_provider
```

Now you can pass the tracer provider when creating your runtime:

```python
from autogen_core.application import SingleThreadedAgentRuntime, WorkerAgentRuntime

# for single threaded runtime
single_threaded_runtime = SingleThreadedAgentRuntime(tracer_provider=tracer_provider)
# or for worker runtime
worker_runtime = WorkerAgentRuntime(tracer_provider=tracer_provider)
```

And that's it! Your application is now instrumented with OpenTelemetry. You can now view your telemetry data in your telemetry backend.

### Existing instrumentation

If you have OpenTelemetry already set up in your application, you can pass the tracer provider to the runtime when creating it:

```python
from opentelemetry import trace

from autogen_core.application import SingleThreadedAgentRuntime, WorkerAgentRuntime

# Get the tracer provider from your application
tracer_provider = trace.get_tracer_provider()

# for single threaded runtime
single_threaded_runtime = SingleThreadedAgentRuntime(tracer_provider=tracer_provider)
# or for worker runtime
worker_runtime = WorkerAgentRuntime(tracer_provider=tracer_provider)
```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/index.md
autogen
# Framework Guide

The following sections guide you through the usage of the Core API. At minimum, read [Agent and Agent Runtime](agent-and-agent-runtime.ipynb) and [Message and Communication](message-and-communication.ipynb) to get a basic understanding.

```{note}
The Core API is designed to be unopinionated and flexible, so at times you may find it challenging. Continue if you are building an interactive, scalable and distributed multi-agent system and want full control of all workflows. If you just want to get something running quickly, you may take a look at the [AgentChat API](../../agentchat-user-guide/index.md).
```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/index.md
autogen
List of content ```{toctree} :maxdepth: 1 agent-and-agent-runtime message-and-communication model-clients tools logging telemetry command-line-code-executors distributed-agent-runtime ```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/logging.md
autogen
# Logging

AutoGen uses Python's built-in [`logging`](https://docs.python.org/3/library/logging.html) module. There are two kinds of logging:

- **Trace logging**: This is used for debugging and consists of human-readable messages that indicate what is going on. It is intended for a developer to understand what is happening in the code. The content and format of these logs should not be depended on by other systems.
  - Name: {py:attr}`~autogen_core.application.logging.TRACE_LOGGER_NAME`.
- **Structured logging**: This logger emits structured events that can be consumed by other systems. The content and format of these logs can be depended on by other systems.
  - Name: {py:attr}`~autogen_core.application.logging.EVENT_LOGGER_NAME`.
  - See the module {py:mod}`autogen_core.application.logging.events` to see the available events.
- {py:attr}`~autogen_core.application.logging.ROOT_LOGGER` can be used to enable or disable all logs.
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/logging.md
autogen
Enabling logging output

To enable trace logging, you can use the following code:

```python
import logging

from autogen_core.application.logging import TRACE_LOGGER_NAME

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(TRACE_LOGGER_NAME)
logger.setLevel(logging.DEBUG)
```

### Structured logging

Structured logging allows you to write handling logic that deals with the actual events, including all fields, rather than just a formatted string. For example, suppose you had defined the custom event below and were emitting it. Then you could write the following handler to receive it.

```python
import logging
from dataclasses import dataclass


@dataclass
class MyEvent:
    timestamp: str
    message: str


class MyHandler(logging.Handler):
    def __init__(self) -> None:
        super().__init__()

    def emit(self, record: logging.LogRecord) -> None:
        try:
            # Handle the event only if the message is an instance of MyEvent
            if isinstance(record.msg, MyEvent):
                print(f"Timestamp: {record.msg.timestamp}, Message: {record.msg.message}")
        except Exception:
            self.handleError(record)
```

And this is how you could use it:

```python
import logging

from autogen_core.application.logging import EVENT_LOGGER_NAME

logger = logging.getLogger(EVENT_LOGGER_NAME)
logger.setLevel(logging.INFO)
my_handler = MyHandler()
logger.handlers = [my_handler]
```
GitHub
autogen
autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/logging.md
autogen
Emitting logs

These two names are the root loggers for these types. Code that emits logs should use a child logger of these loggers. For example, if you are writing a module `my_module` and you want to emit trace logs, you should use the logger named:

```python
import logging

from autogen_core.application.logging import TRACE_LOGGER_NAME

logger = logging.getLogger(f"{TRACE_LOGGER_NAME}.my_module")
```

### Emitting structured logs

If your event looks like:

```python
from dataclasses import dataclass


@dataclass
class MyEvent:
    timestamp: str
    message: str
```

Then it could be emitted in code like this:

```python
import logging

from autogen_core.application.logging import EVENT_LOGGER_NAME

logger = logging.getLogger(EVENT_LOGGER_NAME + ".my_module")
logger.info(MyEvent("timestamp", "message"))
```