author    Reinier van der Leer <github@pwuts.nl>  2023-09-06 17:33:08 +0200
committer Reinier van der Leer <github@pwuts.nl>  2023-09-06 17:33:08 +0200
commit    650072eb994135a7bda901722a220c8b7f7a7ca3 (patch)
tree      b4d3297eb1fd320a6e43979c601fb2d0507e5382 /docs
parent    Fix CODEOWNERS (diff)
Fix CI/CD for Auto-GPT + docs site deployment
Diffstat (limited to 'docs')
 docs/_javascript/mathjax.js                           |  16
 docs/_javascript/tablesort.js                         |   6
 docs/challenges/beat.md                               |  11
 docs/challenges/building_challenges.md                | 125
 docs/challenges/challenge_template.md                 |  24
 docs/challenges/information_retrieval/challenge_a.md  |  23
 docs/challenges/information_retrieval/challenge_b.md  |  22
 docs/challenges/information_retrieval/introduction.md |   3
 docs/challenges/introduction.md                       |  35
 docs/challenges/list.md                               |   5
 docs/challenges/memory/challenge_a.md                 |  39
 docs/challenges/memory/challenge_b.md                 |  44
 docs/challenges/memory/challenge_c.md                 |  61
 docs/challenges/memory/challenge_d.md                 |  80
 docs/challenges/memory/introduction.md                |   5
 docs/challenges/submit.md                             |  14
 docs/code-of-conduct.md (symlink)                     |   1
 docs/configuration/imagegen.md                        |  63
 docs/configuration/memory.md                          | 251
 docs/configuration/options.md                         |  54
 docs/configuration/search.md                          |  37
 docs/configuration/voice.md                           |  37
 docs/contributing.md (symlink)                        |   1
 docs/imgs/Auto_GPT_Logo.png                           | Bin 0 -> 26841 bytes
 docs/imgs/e2b-dashboard.png                           | Bin 0 -> 515634 bytes
 docs/imgs/e2b-log-url.png                             | Bin 0 -> 43687 bytes
 docs/imgs/e2b-new-tag.png                             | Bin 0 -> 47736 bytes
 docs/imgs/e2b-tag-button.png                          | Bin 0 -> 20635 bytes
 docs/imgs/openai-api-key-billing-paid-account.png     | Bin 0 -> 316093 bytes
 docs/index.md                                         |   8
 docs/mkdocs.yml                                       | 117
 docs/netlify.toml                                     |   6
 docs/plugins.md                                       |  20
 docs/setup.md                                         | 257
 docs/share-your-logs.md                               |  52
 docs/testing.md                                       |  51
 docs/usage.md                                         | 122
 37 files changed, 1590 insertions(+), 0 deletions(-)
diff --git a/docs/_javascript/mathjax.js b/docs/_javascript/mathjax.js
new file mode 100644
index 000000000..a80ddbff7
--- /dev/null
+++ b/docs/_javascript/mathjax.js
@@ -0,0 +1,16 @@
+window.MathJax = {
+ tex: {
+ inlineMath: [["\\(", "\\)"]],
+ displayMath: [["\\[", "\\]"]],
+ processEscapes: true,
+ processEnvironments: true
+ },
+ options: {
+ ignoreHtmlClass: ".*|",
+ processHtmlClass: "arithmatex"
+ }
+};
+
+document$.subscribe(() => {
+ MathJax.typesetPromise()
+}) \ No newline at end of file
diff --git a/docs/_javascript/tablesort.js b/docs/_javascript/tablesort.js
new file mode 100644
index 000000000..ee04e9008
--- /dev/null
+++ b/docs/_javascript/tablesort.js
@@ -0,0 +1,6 @@
+document$.subscribe(function () {
+ var tables = document.querySelectorAll("article table:not([class])")
+ tables.forEach(function (table) {
+ new Tablesort(table)
+ })
+}) \ No newline at end of file
diff --git a/docs/challenges/beat.md b/docs/challenges/beat.md
new file mode 100644
index 000000000..85c01d839
--- /dev/null
+++ b/docs/challenges/beat.md
@@ -0,0 +1,11 @@
+# Beat a Challenge
+
+If you have a solution or idea to tackle an existing challenge, you can contribute by working on it and submitting your solution. Here's how to get started:
+
+## Guidelines for Beating a Challenge
+
+1. **Choose a challenge**: Browse the [List of Challenges](list.md) and choose one that interests you or aligns with your expertise.
+
+2. **Understand the problem**: Make sure you thoroughly understand the problem at hand, its scope, and the desired outcome.
+
+3. **Develop a solution**: Work on creating a solution for the challenge. This may/
diff --git a/docs/challenges/building_challenges.md b/docs/challenges/building_challenges.md
new file mode 100644
index 000000000..9caf5cdd2
--- /dev/null
+++ b/docs/challenges/building_challenges.md
@@ -0,0 +1,125 @@
+# Creating Challenges for Auto-GPT
+
+🏹 We're on the hunt for talented Challenge Creators! 🎯
+
+Join us in shaping the future of Auto-GPT by designing challenges that test its limits. Your input will be invaluable in guiding our progress and ensuring that we're on the right track. We're seeking individuals with a diverse skill set, including:
+
+🎨 UX Design: Your expertise will enhance the user experience for those attempting to conquer our challenges. With your help, we'll develop a dedicated section in our wiki, and potentially even launch a standalone website.
+
+πŸ’» Coding Skills: Proficiency in Python, pytest, and VCR (a library that records API calls to OpenAI and replays them in tests) will be essential for creating engaging and robust challenges.
+
+βš™οΈ DevOps Skills: Experience with CI pipelines in GitHub and possibly Google Cloud Platform will be instrumental in streamlining our operations.
+
+Are you ready to play a pivotal role in Auto-GPT's journey? Apply now to become a Challenge Creator by opening a PR! πŸš€
+
+
+# Getting Started
+Clone the original Auto-GPT repo and check out the `master` branch.
+
+The challenges are not written using a specific framework; they aim to be framework-agnostic.
+Each challenge acts like a user that wants something done:
+
+INPUT:
+
+- User desire
+- Files, other inputs
+
+Output => Artifact (files, images, code, etc.)
+
+## Defining your Agent
+
+Go to https://github.com/Significant-Gravitas/Auto-GPT/blob/master/tests/integration/agent_factory.py
+
+Create your agent fixture.
+
+```python
+def kubernetes_agent(
+ agent_test_config, memory_json_file, workspace: Workspace
+):
+    # Choose the commands your agent will need to beat the challenges.
+    # The full list is available in main.py (we're working on a better
+    # way to design this; for now you have to look at main.py).
+ command_registry = CommandRegistry()
+ command_registry.import_commands("autogpt.commands.file_operations")
+ command_registry.import_commands("autogpt.app")
+
+ # Define all the settings of our challenged agent
+ ai_config = AIConfig(
+ ai_name="Kubernetes",
+ ai_role="an autonomous agent that specializes in creating Kubernetes deployment templates.",
+ ai_goals=[
+ "Write a simple kubernetes deployment file and save it as a kube.yaml.",
+ ],
+ )
+ ai_config.command_registry = command_registry
+
+ system_prompt = ai_config.construct_full_prompt()
+ agent_test_config.set_continuous_mode(False)
+ agent = Agent(
+ memory=memory_json_file,
+ command_registry=command_registry,
+ config=ai_config,
+ next_action_count=0,
+ triggering_prompt=DEFAULT_TRIGGERING_PROMPT,
+ )
+
+ return agent
+```
+
+## Creating your challenge
+Go to `tests/challenges` and create a file called `test_your_test_description.py`, adding it to the appropriate category folder. If no suitable category exists, you can create a new one.
+
+Your test could look something like this:
+
+```python
+import contextlib
+from functools import wraps
+from typing import Generator
+
+import pytest
+import yaml
+
+from autogpt.commands.file_operations import read_file, write_to_file
+from tests.integration.agent_utils import run_interaction_loop
+from tests.challenges.utils import run_multiple_times
+
+def input_generator(input_sequence: list) -> Generator[str, None, None]:
+ """
+ Creates a generator that yields input strings from the given sequence.
+
+ :param input_sequence: A list of input strings.
+ :return: A generator that yields input strings.
+ """
+ yield from input_sequence
+
+
+@pytest.mark.skip("This challenge hasn't been beaten yet.")
+@pytest.mark.vcr
+@pytest.mark.requires_openai_api_key
+def test_information_retrieval_challenge_a(kubernetes_agent, monkeypatch) -> None:
+ """
+ Test the challenge_a function in a given agent by mocking user inputs
+ and checking the output file content.
+
+    :param kubernetes_agent: The agent to test.
+ :param monkeypatch: pytest's monkeypatch utility for modifying builtins.
+ """
+ input_sequence = ["s", "s", "s", "s", "s", "EXIT"]
+ gen = input_generator(input_sequence)
+ monkeypatch.setattr("autogpt.utils.session.prompt", lambda _: next(gen))
+
+ with contextlib.suppress(SystemExit):
+ run_interaction_loop(kubernetes_agent, None)
+
+ # here we load the output file
+ file_path = str(kubernetes_agent.workspace.get_path("kube.yaml"))
+ content = read_file(file_path)
+
+    # then we check that it includes keywords from a kubernetes deployment config
+ for word in ["apiVersion", "kind", "metadata", "spec"]:
+ assert word in content, f"Expected the file to contain {word}"
+
+    content = yaml.safe_load(content)
+    # a single manifest declares exactly one kind, so check membership
+    assert content["kind"] in ["Service", "Deployment", "Pod"], (
+        f"Unexpected kind: {content['kind']}"
+    )
+
+
+```
diff --git a/docs/challenges/challenge_template.md b/docs/challenges/challenge_template.md
new file mode 100644
index 000000000..3178ce7b9
--- /dev/null
+++ b/docs/challenges/challenge_template.md
@@ -0,0 +1,24 @@
+# Challenge Title
+
+## Description
+
+Provide a clear and concise description of the challenge. Include any relevant examples or files to illustrate the problem.
+
+## Input
+
+If the challenge involves specific input files, describe them here. Provide the file names and their contents, if necessary. Use triple backticks (```) to format the content as a code block.
+
+For example:
+
+instructions_1.txt
+
+    The current task_id is 4563.\n[NOISE intended to confuse the agent]
+    Read the file instructions_2.txt using the read_file command.
+
+## Scope
+
+Define the scope of the challenge, including any relevant constraints, requirements, or limitations.
+
+## Success Evaluation
+
+Explain how success will be measured or evaluated for the challenge. This helps others understand what the desired outcome is and how to work towards it.
diff --git a/docs/challenges/information_retrieval/challenge_a.md b/docs/challenges/information_retrieval/challenge_a.md
new file mode 100644
index 000000000..bf1b7b104
--- /dev/null
+++ b/docs/challenges/information_retrieval/challenge_a.md
@@ -0,0 +1,23 @@
+# Information Retrieval Challenge A
+
+**Status**: Current level to beat: level 2
+
+**Command to try**:
+
+```
+pytest -s tests/challenges/information_retrieval/test_information_retrieval_challenge_a.py --level=2
+```
+
+## Description
+
+The agent's goal is to find the revenue of Tesla:
+- level 1 asks for the revenue of Tesla in 2022 and explicitly asks to search for 'tesla revenue 2022'
+- level 2 is identical but doesn't ask to search for 'tesla revenue 2022'
+- level 3 asks for Tesla's revenue by year since its creation.
+
+It should write the result in a file called output.txt.
+
+The agent should be able to beat this test consistently (this is the hardest part).
+
+## Objective
+
+The objective of this challenge is to test the agent's ability to retrieve information in a consistent way.
diff --git a/docs/challenges/information_retrieval/challenge_b.md b/docs/challenges/information_retrieval/challenge_b.md
new file mode 100644
index 000000000..f4e68a151
--- /dev/null
+++ b/docs/challenges/information_retrieval/challenge_b.md
@@ -0,0 +1,22 @@
+# Information Retrieval Challenge B
+
+**Status**: Beaten
+
+**Command to try**:
+
+```
+pytest -s tests/challenges/information_retrieval/test_information_retrieval_challenge_b.py
+```
+
+## Description
+
+The agent's goal is to find the names, affiliated universities, and discoveries of the individuals who won the Nobel Prize for Physics in 2010.
+
+It should write the result in a file called 2010_nobel_prize_winners.txt.
+
+The agent should be able to beat this test consistently (this is the hardest part).
+
+## Objective
+
+The objective of this challenge is to test the agent's ability to retrieve multiple pieces of related information in a consistent way.
+The agent should not use Google to perform the task, because it should already know the answer. This is why the task fails after 2 cycles (1 cycle to retrieve the information, 1 cycle to write the file).
diff --git a/docs/challenges/information_retrieval/introduction.md b/docs/challenges/information_retrieval/introduction.md
new file mode 100644
index 000000000..2e997d7a7
--- /dev/null
+++ b/docs/challenges/information_retrieval/introduction.md
@@ -0,0 +1,3 @@
+# Information Retrieval
+
+Information retrieval challenges are designed to evaluate the proficiency of an AI agent, such as Auto-GPT, in searching, extracting, and presenting relevant information from a vast array of sources. These challenges often encompass tasks such as interpreting user queries, browsing the web, and filtering through unstructured data.
diff --git a/docs/challenges/introduction.md b/docs/challenges/introduction.md
new file mode 100644
index 000000000..256a82385
--- /dev/null
+++ b/docs/challenges/introduction.md
@@ -0,0 +1,35 @@
+# Introduction to Challenges
+
+Welcome to the Auto-GPT Challenges page! This is a space where we encourage community members to collaborate and contribute towards improving Auto-GPT by identifying and solving challenges that Auto-GPT is not yet able to achieve.
+
+## What are challenges?
+
+Challenges are tasks or problems that Auto-GPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of Auto-GPT lacks.
+
+## Why are challenges important?
+
+Addressing challenges helps us improve Auto-GPT's performance, usability, and versatility. By working together to tackle these challenges, we can create a more powerful and efficient tool for everyone. It also allows the community to actively contribute to the project, making it a true open-source effort.
+
+## How can you participate?
+
+There are two main ways to get involved with challenges:
+
+1. **Submit a Challenge**: If you have identified a task that Auto-GPT struggles with, you can submit it as a challenge. This allows others to see the issue and collaborate on finding a solution.
+2. **Beat a Challenge**: If you have a solution or idea to tackle an existing challenge, you can contribute by working on the challenge and submitting your solution.
+
+To learn more about submitting and beating challenges, please visit the [List of Challenges](list.md), [Submit a Challenge](submit.md), and [Beat a Challenge](beat.md) pages.
+
+We look forward to your contributions and the exciting solutions that the community will develop together to make Auto-GPT even better!
+
+!!! warning
+
+ We're slowly transitioning to agbenchmark. agbenchmark is a simpler way to improve Auto-GPT. Simply run:
+
+ ```
+ agbenchmark start
+ ```
+
+ and beat as many challenges as possible.
+
+For more agbenchmark options, look at the [readme](https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/tree/master/agbenchmark).
diff --git a/docs/challenges/list.md b/docs/challenges/list.md
new file mode 100644
index 000000000..c9e1f4d25
--- /dev/null
+++ b/docs/challenges/list.md
@@ -0,0 +1,5 @@
+# List of Challenges
+
+This page contains a curated list of challenges that Auto-GPT currently faces. If you think you have a solution or idea to tackle any of these challenges, feel free to dive in and start working on them! New challenges can also be submitted by following the guidelines on the [Submit a Challenge](submit.md) page.
+
+Memory Challenges: [Introduction](memory/introduction.md)
diff --git a/docs/challenges/memory/challenge_a.md b/docs/challenges/memory/challenge_a.md
new file mode 100644
index 000000000..0e0cc482d
--- /dev/null
+++ b/docs/challenges/memory/challenge_a.md
@@ -0,0 +1,39 @@
+# Memory Challenge A
+
+**Status**: Challenge Completed
+
+
+## Description
+
+The agent, Follow-Instructions-GPT, has the following goals:
+
+- Use the command `read_file` to read the `instructions_1.txt` file.
+- Follow the instructions in the `instructions_1.txt` file.
+
+The challenge involves a series of text files containing instructions. The agent must follow the instructions and keep track of the task ID as it progresses through the files.
+
+## Files
+
+- `instructions_1.txt`
+
+    The task_id is 1234
+    Read the file instructions_2.txt
+
+
+
+- `instructions_2.txt`
+
+    Read the file instructions_3.txt using the read_file command.
+
+
+...and so on.
+
+- `instructions_n.txt`
+
+    Write the task_id into the file output.txt.
+
+
+## Objective
+
+The objective of this challenge is to test the agent's ability to follow instructions and maintain memory of the task ID throughout the process. The agent has successfully completed this challenge if it writes the task ID to a file.
+
diff --git a/docs/challenges/memory/challenge_b.md b/docs/challenges/memory/challenge_b.md
new file mode 100644
index 000000000..7df7044ff
--- /dev/null
+++ b/docs/challenges/memory/challenge_b.md
@@ -0,0 +1,44 @@
+# Memory Challenge B
+
+**Status**: Current level to beat: level 3
+
+**Command to try**:
+
+```shell
+pytest -s tests/challenges/memory/test_memory_challenge_b.py --level=3
+```
+
+## Description
+
+The agent, Follow-Instructions-GPT, has the following goals:
+
+- Use the command `read_file` to read the `instructions_1.txt` file.
+- Follow the instructions in the `instructions_1.txt` file.
+
+The challenge involves a series of text files containing instructions and task IDs. The agent must follow the instructions and keep track of the task IDs as it progresses through the files.
+
+## Files
+
+- `instructions_1.txt`
+
+    The current task_id is 4563.\n[NOISE intended to confuse the agent]
+    Read the file instructions_2.txt using the read_file command.
+
+
+- `instructions_2.txt`
+
+    The current task_id is 6182.\n[NOISE intended to confuse the agent]
+    Read the file instructions_3.txt using the read_file command.
+
+
+...and so on.
+
+- `instructions_n.txt`
+
+    The current task_id is 8912.
+    Write all the task_ids into the file output.txt. The file has not been created yet. After that, use the task_complete command.
+
+
+## Objective
+
+The objective of this challenge is to test the agent's ability to follow instructions and maintain memory of the task IDs throughout the process. The agent has successfully completed this challenge if it writes the task IDs to a file.
diff --git a/docs/challenges/memory/challenge_c.md b/docs/challenges/memory/challenge_c.md
new file mode 100644
index 000000000..955c823f3
--- /dev/null
+++ b/docs/challenges/memory/challenge_c.md
@@ -0,0 +1,61 @@
+# Memory Challenge C
+
+**Status**: Current level to beat: level 1
+
+**Command to try**:
+
+```shell
+pytest -s tests/challenges/memory/test_memory_challenge_c.py --level=1
+```
+
+## Description
+
+The agent, Follow-Instructions-GPT, has the following goals:
+
+- Use the command `read_file` to read the `instructions_1.txt` file.
+- Follow the instructions in the `instructions_1.txt` file.
+
+The challenge involves a series of text files containing instructions and silly phrases. The agent must follow the instructions and keep track of the phrases as it progresses through the files.
+
+## Files
+
+- `instructions_1.txt`
+
+The current phrase is
+
+```
+The purple elephant danced on a rainbow while eating a taco.\n[NOISE intended to confuse the agent]
+```
+
+Read the file `instructions_2.txt` using the read_file command.
+
+
+- `instructions_2.txt`
+
+The current phrase is
+
+```
+The sneaky toaster stole my socks and ran away to Hawaii.\n[NOISE intended to confuse the agent]
+```
+
+Read the file instructions_3.txt using the read_file command.
+
+
+...and so on.
+
+- `instructions_n.txt`
+
+The current phrase is
+
+```
+My pet rock sings better than BeyoncΓ© on Tuesdays.
+```
+
+Write all the phrases into the file output.txt. The file has not been created yet. After that, use the task_complete command.
+
+
+## Objective
+
+The objective of this challenge is to test the agent's ability to follow instructions and maintain memory of the phrases throughout the process. The agent has successfully completed this challenge if it writes the phrases to a file.
+
+This is presumably harder than remembering task IDs, as the phrases are longer and more likely to be compressed as the agent does more work.
diff --git a/docs/challenges/memory/challenge_d.md b/docs/challenges/memory/challenge_d.md
new file mode 100644
index 000000000..5ecec045e
--- /dev/null
+++ b/docs/challenges/memory/challenge_d.md
@@ -0,0 +1,80 @@
+# Memory Challenge D
+
+**Status**: Current level to beat: level 1
+
+**Command to try**:
+
+```shell
+pytest -s tests/challenges/memory/test_memory_challenge_d.py --level=1
+```
+
+## Description
+
+The provided code is a unit test designed to validate an AI's ability to track events and beliefs of characters in a story involving moving objects, specifically marbles. This scenario is an advanced form of the classic "Sally-Anne test", a psychological test used to measure a child's social cognitive ability to understand that others' perspectives and beliefs may differ from their own.
+
+Here is an explanation of the challenge:
+
+The AI is given a series of events involving characters Sally, Anne, Bob, and Charlie, and the movements of different marbles. These events are designed as tests at increasing levels of complexity.
+
+For each level, the AI is expected to keep track of the events and the resulting beliefs of each character about the locations of each marble. These beliefs are affected by whether the character was inside or outside the room when events occurred, as characters inside the room are aware of the actions, while characters outside the room aren't.
+
+After the AI processes the events and generates the beliefs of each character, it writes these beliefs to an output file in JSON format.
+
+The `check_beliefs` function then checks the AI's beliefs against the expected beliefs for that level. The expected beliefs are predefined and represent the correct interpretation of the events for each level.
+
+If the AI's beliefs match the expected beliefs, it means the AI has correctly interpreted the events and the perspectives of each character. This would indicate that the AI has passed the test for that level.
+
+The test runs for levels up to the maximum level that the AI has successfully beaten, or up to a user-selected level.
+
+
+## Files
+
+- `instructions_1.txt`
+
+```
+Sally has a marble (marble A) and she puts it in her basket (basket S), then leaves the room. Anne moves marble A from Sally's basket (basket S) to her own basket (basket A).
+```
+
+
+- `instructions_2.txt`
+
+```
+Sally gives a new marble (marble B) to Bob, who is outside with her. Bob goes into the room and places marble B into Anne's basket (basket A). Anne tells Bob to tell Sally that he lost marble B. Bob leaves the room and speaks to Sally about marble B. Meanwhile, after Bob left the room, Anne moves marble A into the green box, but tells Charlie to tell Sally that marble A is under the sofa. Charlie leaves the room and speaks to Sally about marble A as instructed by Anne.
+```
+
+...and so on.
+
+- `instructions_n.txt`
+
+The expected beliefs of every character are given below:
+
+```python
+expected_beliefs = {
+    1: {
+        'Sally': {
+            'marble A': 'basket S',
+        },
+        'Anne': {
+            'marble A': 'basket A',
+        }
+    },
+    2: {
+        'Sally': {
+            'marble A': 'sofa',  # Because Charlie told her
+        },
+        'Anne': {
+            'marble A': 'green box',  # Because she moved it there
+            'marble B': 'basket A',  # Because Bob put it there and she was in the room
+        },
+        'Bob': {
+            'B': 'basket A',  # Last place he put it
+        },
+        'Charlie': {
+            'A': 'sofa',  # Because Anne told him to tell Sally so
+        }
+    },
+    # ...
+}
+```
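+
+As a minimal sketch of what such a check might look like (assuming the agent
+writes its beliefs to `output.txt` as JSON with the same shape as
+`expected_beliefs` above; the actual `check_beliefs` in the test suite may
+differ):
+
+```python
+import json
+
+def check_beliefs(output_file: str, expected: dict, level: int) -> None:
+    """Compare the agent's written beliefs against the expected ones."""
+    with open(output_file) as f:
+        actual = json.load(f)
+    for character, beliefs in expected[level].items():
+        for marble, location in beliefs.items():
+            assert actual[character][marble] == location, (
+                f"{character} should believe {marble} is at {location}"
+            )
+```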
+
+## Objective
+
+This test essentially checks if an AI can accurately model and track the beliefs of different characters based on their knowledge of events, which is a critical aspect of understanding and generating human-like narratives. This ability would be beneficial for tasks such as writing stories, dialogue systems, and more.
diff --git a/docs/challenges/memory/introduction.md b/docs/challenges/memory/introduction.md
new file mode 100644
index 000000000..f597f81db
--- /dev/null
+++ b/docs/challenges/memory/introduction.md
@@ -0,0 +1,5 @@
+# Memory Challenges
+
+Memory challenges are designed to test the ability of an AI agent, like Auto-GPT, to remember and use information throughout a series of tasks. These challenges often involve following instructions, processing text files, and keeping track of important data.
+
+The goal of memory challenges is to improve an agent's performance in tasks that require remembering and using information over time. By addressing these challenges, we can enhance Auto-GPT's capabilities and make it more useful in real-world applications.
diff --git a/docs/challenges/submit.md b/docs/challenges/submit.md
new file mode 100644
index 000000000..a8b191aeb
--- /dev/null
+++ b/docs/challenges/submit.md
@@ -0,0 +1,14 @@
+# Submit a Challenge
+
+If you have identified a task or problem that Auto-GPT struggles with, you can submit it as a challenge for the community to tackle. Here's how you can submit a new challenge:
+
+## How to Submit a Challenge
+
+1. Create a new `.md` file in the `challenges` directory in the Auto-GPT GitHub repository. Make sure to pick the right category.
+2. Name the file with a descriptive title for the challenge, using hyphens instead of spaces (e.g., `improve-context-understanding.md`).
+3. In the file, follow the [challenge_template.md](challenge_template.md) to describe the problem, define the scope, and evaluate success.
+4. Commit the file and create a pull request.
+
+Once submitted, the community can review and discuss the challenge. If deemed appropriate, it will be added to the [List of Challenges](list.md).
+
+If you're looking to contribute by working on an existing challenge, check out [Beat a Challenge](beat.md) for guidelines on how to get started.
diff --git a/docs/code-of-conduct.md b/docs/code-of-conduct.md
new file mode 120000
index 000000000..0400d5746
--- /dev/null
+++ b/docs/code-of-conduct.md
@@ -0,0 +1 @@
+../CODE_OF_CONDUCT.md \ No newline at end of file
diff --git a/docs/configuration/imagegen.md b/docs/configuration/imagegen.md
new file mode 100644
index 000000000..1a10d61d2
--- /dev/null
+++ b/docs/configuration/imagegen.md
@@ -0,0 +1,63 @@
+# πŸ–Ό Image Generation configuration
+
+| Config variable | Values | |
+| ---------------- | ------------------------------- | -------------------- |
+| `IMAGE_PROVIDER` | `dalle` `huggingface` `sdwebui` | **default: `dalle`** |
+
+## DALL-E
+
+In `.env`, make sure `IMAGE_PROVIDER` is commented (or set to `dalle`):
+
+```ini
+# IMAGE_PROVIDER=dalle # this is the default
+```
+
+Further optional configuration:
+
+| Config variable | Values | |
+| ---------------- | ------------------ | -------------- |
+| `IMAGE_SIZE` | `256` `512` `1024` | default: `256` |
+
+## Hugging Face
+
+To use text-to-image models from Hugging Face, you need a Hugging Face API token.
+Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
+
+Once you have an API token, uncomment and adjust these variables in your `.env`:
+
+```ini
+IMAGE_PROVIDER=huggingface
+HUGGINGFACE_API_TOKEN=your-huggingface-api-token
+```
+
+Further optional configuration:
+
+| Config variable | Values | |
+| ------------------------- | ---------------------- | ---------------------------------------- |
+| `HUGGINGFACE_IMAGE_MODEL` | see [available models] | default: `CompVis/stable-diffusion-v1-4` |
+
+[available models]: https://huggingface.co/models?pipeline_tag=text-to-image
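+
+To verify your token works, you can call the Hugging Face Inference API
+directly. This is an illustrative snippet using the public Inference API
+convention, not Auto-GPT's own code:
+
+```python
+import os
+
+import requests
+
+response = requests.post(
+    "https://api-inference.huggingface.co/models/CompVis/stable-diffusion-v1-4",
+    headers={"Authorization": f"Bearer {os.environ['HUGGINGFACE_API_TOKEN']}"},
+    json={"inputs": "A digital illustration of a lighthouse at dusk"},
+)
+response.raise_for_status()
+with open("image.png", "wb") as f:
+    f.write(response.content)  # the API returns raw image bytes
+```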
+
+## Stable Diffusion WebUI
+
+It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
+
+```ini
+IMAGE_PROVIDER=sdwebui
+```
+
+!!! note
+ Make sure you are running WebUI with `--api` enabled.
+
+Further optional configuration:
+
+| Config variable | Values | |
+| --------------- | ----------------------- | -------------------------------- |
+| `SD_WEBUI_URL` | URL to your WebUI | default: `http://127.0.0.1:7860` |
+| `SD_WEBUI_AUTH` | `{username}:{password}` | *Note: do not copy the braces!* |
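+
+For reference, here is roughly what a txt2img request against the AUTOMATIC1111
+WebUI API looks like (illustrative only; it assumes a local WebUI running with
+`--api`, and the parameters shown are examples):
+
+```python
+import base64
+
+import requests
+
+resp = requests.post(
+    "http://127.0.0.1:7860/sdapi/v1/txt2img",
+    json={"prompt": "A watercolor painting of a fox", "steps": 20},
+)
+resp.raise_for_status()
+image_b64 = resp.json()["images"][0]  # images come back base64-encoded
+with open("image.png", "wb") as f:
+    f.write(base64.b64decode(image_b64))
+```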
+
+## Selenium
+
+When running Auto-GPT's browser-based commands on a headless Linux machine, you
+can give Selenium a virtual display by starting your client under Xvfb:
+
+```shell
+sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
+```
diff --git a/docs/configuration/memory.md b/docs/configuration/memory.md
new file mode 100644
index 000000000..1a5e716ab
--- /dev/null
+++ b/docs/configuration/memory.md
@@ -0,0 +1,251 @@
+!!! warning
+ The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
+ by work on the memory system, and have been removed.
+ Whether support will be added back in the future is subject to discussion,
+ feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+
+## Setting Your Cache Type
+
+By default, an Auto-GPT instance set up with Docker Compose uses Redis as its memory backend.
+Otherwise, the default is LocalCache (which stores memory in a JSON file).
+
+To switch to a different backend, change the `MEMORY_BACKEND` in `.env`
+to the value that you want:
+
+* `json_file` uses a local JSON cache file
+* `pinecone` uses the Pinecone.io account you configured in your ENV settings
+* `redis` will use the redis cache that you configured
+* `milvus` will use the milvus cache that you configured
+* `weaviate` will use the weaviate cache that you configured
+
+!!! warning
+ The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
+ by work on the memory system, and have been removed.
+ Whether support will be added back in the future is subject to discussion,
+ feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+
+## Memory Backend Setup
+
+Links to memory backends
+
+- [Pinecone](https://www.pinecone.io/)
+- [Milvus](https://milvus.io/) &ndash; [self-hosted](https://milvus.io/docs), or managed with [Zilliz Cloud](https://zilliz.com/)
+- [Redis](https://redis.io)
+- [Weaviate](https://weaviate.io)
+
+!!! warning
+ The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
+ by work on the memory system, and have been removed.
+ Whether support will be added back in the future is subject to discussion,
+ feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+
+### Redis Setup
+
+!!! important
+ If you have set up Auto-GPT using Docker Compose, then Redis is included, no further
+ setup needed.
+
+!!! caution
+ This setup is not intended to be publicly accessible and lacks security measures.
+ Avoid exposing Redis to the internet without a password or at all!
+
+1. Launch Redis container
+
+ ```shell
+ docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
+ ```
+
+2. Set the following settings in `.env`
+
+ ```shell
+ MEMORY_BACKEND=redis
+ REDIS_HOST=localhost
+ REDIS_PORT=6379
+ REDIS_PASSWORD=<PASSWORD>
+ ```
+
+    Replace `<PASSWORD>` with your password, omitting the angle brackets (`<>`).
+
+ Optional configuration:
+
+ - `WIPE_REDIS_ON_START=False` to persist memory stored in Redis between runs.
+ - `MEMORY_INDEX=<WHATEVER>` to specify a name for the memory index in Redis.
+ The default is `auto-gpt`.
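+
+    To check that Auto-GPT will be able to reach your Redis instance, you can
+    run a quick connectivity test with the `redis` Python package (a sketch
+    for illustration, not part of Auto-GPT):
+
+    ```python
+    import os
+
+    import redis  # pip install redis
+
+    r = redis.Redis(
+        host=os.getenv("REDIS_HOST", "localhost"),
+        port=int(os.getenv("REDIS_PORT", "6379")),
+        password=os.getenv("REDIS_PASSWORD") or None,
+    )
+    assert r.ping(), "Redis is not reachable"
+    ```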
+
+!!! info
+ See [redis-stack-server](https://hub.docker.com/r/redis/redis-stack-server) for
+ setting a password and additional configuration.
+
+!!! warning
+ The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
+ by work on the memory system, and have been removed.
+ Whether support will be added back in the future is subject to discussion,
+ feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+
+### 🌲 Pinecone API Key Setup
+
+Pinecone lets you store vast amounts of vector-based memory, allowing the agent to load only relevant memories at any given time.
+
+1. Go to [pinecone](https://app.pinecone.io/) and make an account if you don't already have one.
+2. Choose the `Starter` plan to avoid being charged.
+3. Find your API key and region under the default project in the left sidebar.
+
+In the `.env` file set:
+
+- `PINECONE_API_KEY`
+- `PINECONE_ENV` (example: `us-east4-gcp`)
+- `MEMORY_BACKEND=pinecone`
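+
+A quick way to confirm the key and region are valid is a sketch like the
+following, using the classic `pinecone-client` package of that era (an
+illustration, not Auto-GPT's own code):
+
+```python
+import os
+
+import pinecone  # pip install pinecone-client
+
+pinecone.init(
+    api_key=os.environ["PINECONE_API_KEY"],
+    environment=os.environ["PINECONE_ENV"],
+)
+print(pinecone.list_indexes())  # should list your indexes without raising
+```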
+
+!!! warning
+ The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
+ by work on the memory system, and have been removed.
+ Whether support will be added back in the future is subject to discussion,
+ feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+
+### Milvus Setup
+
+[Milvus](https://milvus.io/) is an open-source, highly scalable vector database to store
+huge amounts of vector-based memory and provide fast relevant search. It can be quickly
+deployed with docker, or as a cloud service provided by [Zilliz Cloud](https://zilliz.com/).
+
+1. Deploy your Milvus service, either locally using docker or with a managed Zilliz Cloud database:
+ - [Install and deploy Milvus locally](https://milvus.io/docs/install_standalone-operator.md)
+
+ - Set up a managed Zilliz Cloud database
+      1. Go to [Zilliz Cloud](https://zilliz.com/) and sign up if you don't already have an account.
+ 2. In the *Databases* tab, create a new database.
+ - Remember your username and password
+ - Wait until the database status is changed to RUNNING.
+      3. In the *Database detail* tab of the database you have created, find the public cloud endpoint, such as:
+        `https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443`.
+
+2. Run `pip3 install pymilvus` to install the required client library.
+ Make sure your PyMilvus version and Milvus version are [compatible](https://github.com/milvus-io/pymilvus#compatibility)
+ to avoid issues.
+ See also the [PyMilvus installation instructions](https://github.com/milvus-io/pymilvus#installation).
+
+3. Update `.env`:
+ - `MEMORY_BACKEND=milvus`
+ - One of:
+ - `MILVUS_ADDR=host:ip` (for local instance)
+ - `MILVUS_ADDR=https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443` (for Zilliz Cloud)
+
+ The following settings are **optional**:
+
+ - `MILVUS_USERNAME='username-of-your-milvus-instance'`
+ - `MILVUS_PASSWORD='password-of-your-milvus-instance'`
+ - `MILVUS_SECURE=True` to use a secure connection.
+ Only use if your Milvus instance has TLS enabled.
+ *Note: setting `MILVUS_ADDR` to a `https://` URL will override this setting.*
+ - `MILVUS_COLLECTION` to change the collection name to use in Milvus.
+ Defaults to `autogpt`.
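+
+To verify connectivity before starting Auto-GPT, you can run a minimal PyMilvus
+check (illustration only; adjust the address for Zilliz Cloud):
+
+```python
+from pymilvus import connections, utility  # pip install pymilvus
+
+connections.connect(alias="default", host="localhost", port="19530")
+print(utility.get_server_version())  # raises if the server is unreachable
+```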
+
+!!! warning
+ The Pinecone, Milvus, Redis, and Weaviate memory backends were rendered incompatible
+ by work on the memory system, and have been removed.
+ Whether support will be added back in the future is subject to discussion,
+ feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280
+
+### Weaviate Setup
+
+[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to
+store data objects and vector embeddings from ML models, and scales seamlessly to
+billions of data objects. To set up a Weaviate database, check out their [Quickstart Tutorial](https://weaviate.io/developers/weaviate/quickstart).
+
+Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded)
+is supported, which allows the Auto-GPT process itself to start a Weaviate instance.
+To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `pip install "weaviate-client>=3.15.4"`.
+
+#### Install the Weaviate client
+
+Install the Weaviate client before usage.
+
+```shell
+$ pip install weaviate-client
+```
+
+#### Setting up environment variables
+
+In your `.env` file set the following:
+
+```ini
+MEMORY_BACKEND=weaviate
+WEAVIATE_HOST="127.0.0.1" # the IP or domain of the running Weaviate instance
+WEAVIATE_PORT="8080"
+WEAVIATE_PROTOCOL="http"
+WEAVIATE_USERNAME="your username"
+WEAVIATE_PASSWORD="your password"
+WEAVIATE_API_KEY="your weaviate API key if you have one"
+WEAVIATE_EMBEDDED_PATH="/home/me/.local/share/weaviate" # this is optional and indicates where the data should be persisted when running an embedded instance
+USE_WEAVIATE_EMBEDDED=False # set to True to run Embedded Weaviate
+MEMORY_INDEX="Autogpt" # name of the index to create for the application
+```
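+
+With the instance running, you can confirm it is reachable using the v3
+`weaviate-client` API (a sketch for illustration, not Auto-GPT code):
+
+```python
+import weaviate  # pip install "weaviate-client>=3.15.4"
+
+client = weaviate.Client("http://127.0.0.1:8080")
+print(client.is_ready())  # True if the instance is up and reachable
+```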
+
+## View Memory Usage
+
+View memory usage by using the `--debug` flag :)
+
+
+## 🧠 Memory pre-seeding
+
+!!! warning
+ Data ingestion is broken in v0.4.7 and possibly earlier versions. This is a known issue that will be addressed in future releases. Follow these issues for updates.
+ [Issue 4435](https://github.com/Significant-Gravitas/Auto-GPT/issues/4435)
+ [Issue 4024](https://github.com/Significant-Gravitas/Auto-GPT/issues/4024)
+ [Issue 2076](https://github.com/Significant-Gravitas/Auto-GPT/issues/2076)
+
+
+
+Memory pre-seeding allows you to ingest files into Auto-GPT's memory before running it, so that relevant data is available from the first cycle.
+
+```shell
+$ python data_ingestion.py -h
+usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]
+
+Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.
+
+options:
+ -h, --help show this help message and exit
+ --file FILE The file to ingest.
+ --dir DIR The directory containing the files to ingest.
+ --init Init the memory and wipe its content (default: False)
+ --overlap OVERLAP The overlap size between chunks when ingesting files (default: 200)
+ --max_length MAX_LENGTH The max_length of each chunk when ingesting files (default: 4000)
+
+# python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
+```
+
+In the example above, the script initializes the memory and ingests all files within the `Auto-Gpt/auto_gpt_workspace/DataFolder` directory into memory, with an overlap of 100 between chunks and a maximum chunk length of 2000.
+
+Note that you can also use the `--file` argument to ingest a single file into memory, and that `data_ingestion.py` will only ingest files within the `/auto_gpt_workspace` directory.
+
+The DIR path is relative to the auto_gpt_workspace directory, so `python data_ingestion.py --dir . --init` will ingest everything in the `auto_gpt_workspace` directory.
+
+You can adjust the `max_length` and `overlap` parameters to fine-tune the way
+the documents are presented to the AI when it "recalls" that memory (see the
+sketch after the list below):
+
+- Adjusting the overlap value allows the AI to access more contextual information
+ from each chunk when recalling information, but will result in more chunks being
+ created and therefore increase memory backend usage and OpenAI API requests.
+- Reducing the `max_length` value will create more chunks, which can save prompt
+  tokens by allowing for more message history in the context, but will also
+  increase memory backend usage and the number of OpenAI API requests.
+- Increasing the `max_length` value will provide the AI with more contextual
+ information from each chunk, reducing the number of chunks created and saving on
+ OpenAI API requests. However, this may also use more prompt tokens and decrease
+ the overall context available to the AI.
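+
+To make the interplay of these two parameters concrete, here is a hedged sketch
+of sliding-window chunking with overlap (a hypothetical helper, not the actual
+implementation in `data_ingestion.py`):
+
+```python
+def split_text(text: str, max_length: int = 4000, overlap: int = 200) -> list[str]:
+    """Split text into chunks of up to max_length characters,
+    each sharing `overlap` characters with the previous chunk."""
+    chunks = []
+    start = 0
+    while start < len(text):
+        chunks.append(text[start : start + max_length])
+        if start + max_length >= len(text):
+            break
+        start += max_length - overlap
+    return chunks
+```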
+
+Memory pre-seeding is a technique for improving AI accuracy by ingesting relevant data
+into its memory. Chunks of data are split and added to memory, allowing the AI to access
+them quickly and generate more accurate responses. It's useful for large datasets or when
+specific information needs to be accessed quickly. Examples include ingesting API or
+GitHub documentation before running Auto-GPT.
+
+!!! attention
+ If you use Redis for memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START=False`
+
+ For other memory backends, we currently forcefully wipe the memory when starting
+ Auto-GPT. To ingest data with those memory backends, you can call the
+ `data_ingestion.py` script anytime during an Auto-GPT run.
+
+Memories will be available to the AI immediately as they are ingested, even if ingested
+while Auto-GPT is running.
diff --git a/docs/configuration/options.md b/docs/configuration/options.md
new file mode 100644
index 000000000..c0c386d5b
--- /dev/null
+++ b/docs/configuration/options.md
@@ -0,0 +1,54 @@
+# Configuration
+
+Configuration is controlled through the `Config` object. You can set configuration variables via the `.env` file. If you don't have a `.env` file, create a copy of `.env.template` in your `Auto-GPT` folder and name it `.env`.
+
+## Environment Variables
+
+- `AI_SETTINGS_FILE`: Location of the AI Settings file relative to the Auto-GPT root directory. Default: ai_settings.yaml
+- `AUDIO_TO_TEXT_PROVIDER`: Audio To Text Provider. Only option currently is `huggingface`. Default: huggingface
+- `AUTHORISE_COMMAND_KEY`: Key response accepted when authorising commands. Default: y
+- `AZURE_CONFIG_FILE`: Location of the Azure Config file relative to the Auto-GPT root directory. Default: azure.yaml
+- `BROWSE_CHUNK_MAX_LENGTH`: When browsing website, define the length of chunks to summarize. Default: 3000
+- `BROWSE_SPACY_LANGUAGE_MODEL`: [spaCy language model](https://spacy.io/usage/models) to use when creating chunks. Default: en_core_web_sm
+- `CHAT_MESSAGES_ENABLED`: Enable chat messages. Optional
+- `DISABLED_COMMAND_CATEGORIES`: Command categories to disable. Command categories are Python module names, e.g. autogpt.commands.execute_code. See the directory `autogpt/commands` in the source for all command modules. Default: None
+- `ELEVENLABS_API_KEY`: ElevenLabs API Key. Optional.
+- `ELEVENLABS_VOICE_ID`: ElevenLabs Voice ID. Optional.
+- `EMBEDDING_MODEL`: LLM Model to use for embedding tasks. Default: text-embedding-ada-002
+- `EXECUTE_LOCAL_COMMANDS`: If shell commands should be executed locally. Default: False
+- `EXIT_KEY`: Exit key accepted to exit. Default: n
+- `FAST_LLM`: LLM Model to use for most tasks. Default: gpt-3.5-turbo
+- `GITHUB_API_KEY`: [Github API Key](https://github.com/settings/tokens). Optional.
+- `GITHUB_USERNAME`: GitHub Username. Optional.
+- `GOOGLE_API_KEY`: Google API key. Optional.
+- `GOOGLE_CUSTOM_SEARCH_ENGINE_ID`: [Google custom search engine ID](https://programmablesearchengine.google.com/controlpanel/all). Optional.
+- `HEADLESS_BROWSER`: Use a headless browser while Auto-GPT uses a web browser. Setting to `False` will allow you to see Auto-GPT operate the browser. Default: True
+- `HUGGINGFACE_API_TOKEN`: HuggingFace API, to be used for both image generation and audio to text. Optional.
+- `HUGGINGFACE_AUDIO_TO_TEXT_MODEL`: HuggingFace audio-to-text model. Default: facebook/wav2vec2-base-960h
+- `HUGGINGFACE_IMAGE_MODEL`: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4
+- `IMAGE_PROVIDER`: Image provider. Options are `dalle`, `huggingface`, and `sdwebui`. Default: dalle
+- `IMAGE_SIZE`: Default size of image to generate. Default: 256
+- `MEMORY_BACKEND`: Memory back-end to use. Currently `json_file` is the only supported and enabled backend. Default: json_file
+- `MEMORY_INDEX`: Value used in the Memory backend for scoping, naming, or indexing. Default: auto-gpt
+- `OPENAI_API_KEY`: *REQUIRED* - Your [OpenAI API Key](https://platform.openai.com/account/api-keys).
+- `OPENAI_ORGANIZATION`: Organization ID in OpenAI. Optional.
+- `PLAIN_OUTPUT`: Plain output, which disables the spinner. Default: False
+- `PLUGINS_CONFIG_FILE`: Path of the Plugins Config file relative to the Auto-GPT root directory. Default: plugins_config.yaml
+- `PROMPT_SETTINGS_FILE`: Location of the Prompt Settings file relative to the Auto-GPT root directory. Default: prompt_settings.yaml
+- `REDIS_HOST`: Redis Host. Default: localhost
+- `REDIS_PASSWORD`: Redis Password. Optional. Default:
+- `REDIS_PORT`: Redis Port. Default: 6379
+- `RESTRICT_TO_WORKSPACE`: Whether to restrict file reading and writing to the workspace directory. Default: True
+- `SD_WEBUI_AUTH`: Stable Diffusion Web UI username:password pair. Optional.
+- `SD_WEBUI_URL`: Stable Diffusion Web UI URL. Default: http://localhost:7860
+- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
+- `SHELL_COMMAND_CONTROL`: Whether to use `allowlist` or `denylist` to determine what shell commands can be executed. Default: denylist
+- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
+- `SMART_LLM`: LLM Model to use for "smart" tasks. Default: gpt-4
+- `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian
+- `TEMPERATURE`: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
+- `TEXT_TO_SPEECH_PROVIDER`: Text to Speech Provider. Options are `gtts`, `macos`, `elevenlabs`, and `streamelements`. Default: gtts
+- `USER_AGENT`: User-Agent given when browsing websites. Default: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
+- `USE_AZURE`: Use Azure's LLM. Default: False
+- `USE_WEB_BROWSER`: Which web browser to use. Options are `chrome`, `firefox`, `safari`, and `edge`. Default: chrome
+- `WIPE_REDIS_ON_START`: Wipes data / index on start. Default: True
diff --git a/docs/configuration/search.md b/docs/configuration/search.md
new file mode 100644
index 000000000..4640d63c6
--- /dev/null
+++ b/docs/configuration/search.md
@@ -0,0 +1,37 @@
+## πŸ” Google API Keys Configuration
+
+!!! note
+ This section is optional. Use the official Google API if search attempts return
+ error 429. To use the `google_official_search` command, you need to set up your
+ Google API key in your environment variables.
+
+Create your project:
+
+1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
+2. If you don't already have an account, create one and log in
+3. Create a new project by clicking on the *Select a Project* dropdown at the top of the
+ page and clicking *New Project*
+4. Give it a name and click *Create*
+5. Set up a Custom Search API key and add it to your `.env` file:
+    1. Go to the [APIs & Services Dashboard](https://console.cloud.google.com/apis/dashboard)
+    2. Click *Enable APIs and Services*
+    3. Search for *Custom Search API* and click on it
+    4. Click *Enable*
+    5. Go to the [Credentials](https://console.cloud.google.com/apis/credentials) page
+    6. Click *Create Credentials*
+    7. Choose *API Key*
+    8. Copy the API key
+    9. Set it as the `GOOGLE_API_KEY` in your `.env` file
+6. [Enable](https://console.developers.google.com/apis/api/customsearch.googleapis.com)
+   the Custom Search API on your project. (It might take a few minutes to propagate.)
+7. Set up a custom search engine and add its ID to your `.env` file:
+    1. Go to the [Custom Search Engine](https://cse.google.com/cse/all) page
+    2. Click *Add*
+    3. Set up your search engine by following the prompts.
+       You can choose to search the entire web or specific sites
+    4. Once you've created your search engine, click on *Control Panel*
+    5. Click *Basics*
+    6. Copy the *Search engine ID*
+    7. Set it as the `GOOGLE_CUSTOM_SEARCH_ENGINE_ID` in your `.env` file
+
+_Remember that your free daily custom search quota allows only up to 100 searches. To increase this limit, assign a billing account to the project to benefit from up to 10K daily searches._
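+
+Once both values are set, you can sanity-check them against the public Custom
+Search JSON API (an illustrative snippet; the endpoint and parameters are
+Google's documented API, not Auto-GPT code):
+
+```python
+import os
+
+import requests
+
+resp = requests.get(
+    "https://www.googleapis.com/customsearch/v1",
+    params={
+        "key": os.environ["GOOGLE_API_KEY"],
+        "cx": os.environ["GOOGLE_CUSTOM_SEARCH_ENGINE_ID"],
+        "q": "Auto-GPT",
+    },
+)
+resp.raise_for_status()
+print(resp.json()["items"][0]["link"])  # URL of the first search result
+```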
diff --git a/docs/configuration/voice.md b/docs/configuration/voice.md
new file mode 100644
index 000000000..654d2ee45
--- /dev/null
+++ b/docs/configuration/voice.md
@@ -0,0 +1,37 @@
+# Text to Speech
+
+Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT:
+
+```shell
+python -m autogpt --speak
+```
+
+Eleven Labs provides voice technologies such as voice design, speech synthesis, and
+premade voices that Auto-GPT can use for speech.
+
+1. Go to [ElevenLabs](https://beta.elevenlabs.io/) and make an account if you don't
+ already have one.
+2. Choose and set up the *Starter* plan.
+3. Click the top right icon and find *Profile* to locate your API Key.
+
+In the `.env` file set:
+
+- `ELEVENLABS_API_KEY`
+- `ELEVENLABS_VOICE_1_ID` (example: _"premade/Adam"_)
+
+### List of available voices
+
+!!! note
+ You can use either the name or the voice ID to configure a voice
+
+| Name | Voice ID |
+| ------ | -------- |
+| Rachel | `21m00Tcm4TlvDq8ikWAM` |
+| Domi | `AZnzlk1XvdvUeBnXmlld` |
+| Bella | `EXAVITQu4vr4xnSDxMaL` |
+| Antoni | `ErXwobaYiN019PkySvjV` |
+| Elli | `MF3mGyEYCl7XYWbV9V6O` |
+| Josh | `TxGEqnHWrfWFTfGW9XjX` |
+| Arnold | `VR6AewLTigWG4xSOukaG` |
+| Adam | `pNInz6obpgDQGcFmaJgB` |
+| Sam | `yoZ06aMxZJJ28mfd3POQ` |
diff --git a/docs/contributing.md b/docs/contributing.md
new file mode 120000
index 000000000..44fcc6343
--- /dev/null
+++ b/docs/contributing.md
@@ -0,0 +1 @@
+../CONTRIBUTING.md \ No newline at end of file
diff --git a/docs/imgs/Auto_GPT_Logo.png b/docs/imgs/Auto_GPT_Logo.png
new file mode 100644
index 000000000..9c60eea98
--- /dev/null
+++ b/docs/imgs/Auto_GPT_Logo.png
Binary files differ
diff --git a/docs/imgs/e2b-dashboard.png b/docs/imgs/e2b-dashboard.png
new file mode 100644
index 000000000..456f1490c
--- /dev/null
+++ b/docs/imgs/e2b-dashboard.png
Binary files differ
diff --git a/docs/imgs/e2b-log-url.png b/docs/imgs/e2b-log-url.png
new file mode 100644
index 000000000..3f1c189ee
--- /dev/null
+++ b/docs/imgs/e2b-log-url.png
Binary files differ
diff --git a/docs/imgs/e2b-new-tag.png b/docs/imgs/e2b-new-tag.png
new file mode 100644
index 000000000..65a0a767c
--- /dev/null
+++ b/docs/imgs/e2b-new-tag.png
Binary files differ
diff --git a/docs/imgs/e2b-tag-button.png b/docs/imgs/e2b-tag-button.png
new file mode 100644
index 000000000..741a6bac1
--- /dev/null
+++ b/docs/imgs/e2b-tag-button.png
Binary files differ
diff --git a/docs/imgs/openai-api-key-billing-paid-account.png b/docs/imgs/openai-api-key-billing-paid-account.png
new file mode 100644
index 000000000..8948505a0
--- /dev/null
+++ b/docs/imgs/openai-api-key-billing-paid-account.png
Binary files differ
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 000000000..06cc93773
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,8 @@
+# AutoGPT docs
+
+Welcome to AutoGPT. Please follow the [Installation](/setup/) guide to get started.
+
+!!! note
+    It is recommended to use a virtual machine or container (e.g. Docker) for tasks that require high security measures, to prevent any potential harm to the main computer's system and data. If you are considering using Auto-GPT outside a virtualized/containerized environment, you are *strongly* advised to use a separate user account just for running Auto-GPT. This is even more important if you are going to allow Auto-GPT to write/execute scripts and run shell commands!
+
+It is for these reasons that executing Python scripts is explicitly disabled when running outside a container environment.
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
new file mode 100644
index 000000000..9c34ded27
--- /dev/null
+++ b/docs/mkdocs.yml
@@ -0,0 +1,117 @@
+site_name: AutoGPT Documentation
+site_url: https://docs.agpt.co/
+repo_url: https://github.com/Significant-Gravitas/Auto-GPT
+docs_dir: .
+nav:
+ - Home: index.md
+ - Setup: setup.md
+ - Usage: usage.md
+ - Plugins: plugins.md
+ - Configuration:
+ - Options: configuration/options.md
+ - Search: configuration/search.md
+ - Memory: configuration/memory.md
+ - Voice: configuration/voice.md
+ - Image Generation: configuration/imagegen.md
+
+ - Help us improve Auto-GPT:
+ - Share your debug logs with us: share-your-logs.md
+ - Contribution guide: contributing.md
+ - Running tests: testing.md
+ - Code of Conduct: code-of-conduct.md
+
+ - Challenges:
+ - Introduction: challenges/introduction.md
+ - List of Challenges:
+ - Memory:
+ - Introduction: challenges/memory/introduction.md
+ - Memory Challenge A: challenges/memory/challenge_a.md
+ - Memory Challenge B: challenges/memory/challenge_b.md
+ - Memory Challenge C: challenges/memory/challenge_c.md
+ - Memory Challenge D: challenges/memory/challenge_d.md
+ - Information retrieval:
+ - Introduction: challenges/information_retrieval/introduction.md
+ - Information Retrieval Challenge A: challenges/information_retrieval/challenge_a.md
+ - Information Retrieval Challenge B: challenges/information_retrieval/challenge_b.md
+ - Submit a Challenge: challenges/submit.md
+ - Beat a Challenge: challenges/beat.md
+
+ - License: https://github.com/Significant-Gravitas/Auto-GPT/blob/master/LICENSE
+
+theme:
+ name: material
+ icon:
+ logo: material/book-open-variant
+ favicon: imgs/Auto_GPT_Logo.png
+ features:
+ - navigation.sections
+ - toc.follow
+ - navigation.top
+ - content.code.copy
+ palette:
+ # Palette toggle for light mode
+ - media: "(prefers-color-scheme: light)"
+ scheme: default
+ toggle:
+ icon: material/weather-night
+ name: Switch to dark mode
+
+ # Palette toggle for dark mode
+ - media: "(prefers-color-scheme: dark)"
+ scheme: slate
+ toggle:
+ icon: material/weather-sunny
+ name: Switch to light mode
+
+markdown_extensions:
+ # Python Markdown
+ - abbr
+ - admonition
+ - attr_list
+ - def_list
+ - footnotes
+ - md_in_html
+ - toc:
+ permalink: true
+ - tables
+
+ # Python Markdown Extensions
+ - pymdownx.arithmatex:
+ generic: true
+ - pymdownx.betterem:
+ smart_enable: all
+ - pymdownx.critic
+ - pymdownx.caret
+ - pymdownx.details
+ - pymdownx.emoji:
+ emoji_index: !!python/name:materialx.emoji.twemoji
+ emoji_generator: !!python/name:materialx.emoji.to_svg
+ - pymdownx.highlight
+ - pymdownx.inlinehilite
+ - pymdownx.keys
+ - pymdownx.mark
+ - pymdownx.smartsymbols
+ - pymdownx.snippets:
+ auto_append:
+ - includes/abbreviations.md
+ - pymdownx.superfences:
+ custom_fences:
+ - name: mermaid
+ class: mermaid
+ format: !!python/name:pymdownx.superfences.fence_code_format
+ - pymdownx.tabbed:
+ alternate_style: true
+ - pymdownx.tasklist:
+ custom_checkbox: true
+ - pymdownx.tilde
+
+plugins:
+ - table-reader
+ - search
+
+extra_javascript:
+ - https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
+ - _javascript/tablesort.js
+ - _javascript/mathjax.js
+ - https://polyfill.io/v3/polyfill.min.js?features=es6
+ - https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
diff --git a/docs/netlify.toml b/docs/netlify.toml
new file mode 100644
index 000000000..de261908f
--- /dev/null
+++ b/docs/netlify.toml
@@ -0,0 +1,6 @@
+# Netlify config for Auto-GPT docs
+
+[build]
+ publish = "public/"
+ command = "mkdocs build -d public"
+ ignore = "git diff --quiet $CACHED_COMMIT_REF $COMMIT_REF docs mkdocs.yml CONTRIBUTING.md CODE_OF_CONDUCT.md LICENSE"
diff --git a/docs/plugins.md b/docs/plugins.md
new file mode 100644
index 000000000..74e96f2ec
--- /dev/null
+++ b/docs/plugins.md
@@ -0,0 +1,20 @@
+## Plugins
+
+βš οΈπŸ’€ **WARNING** πŸ’€βš οΈ: Review the code of any plugin you use thoroughly, as plugins can execute any Python code, potentially leading to malicious activities, such as stealing your API keys.
+
+To configure plugins, you can create or edit the `plugins_config.yaml` file in the root directory of Auto-GPT. This file allows you to enable or disable plugins as desired. For specific configuration instructions, please refer to the documentation provided for each plugin. The file should be formatted in YAML. Here is an example for your reference:
+
+```yaml
+plugin_a:
+ config:
+ api_key: my-api-key
+ enabled: false
+plugin_b:
+ config: {}
+ enabled: true
+```
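+
+For reference, a config like the one above can be inspected with a few lines of
+Python (a hedged sketch of the file's semantics, not Auto-GPT's actual loader):
+
+```python
+import yaml  # pip install pyyaml
+
+with open("plugins_config.yaml") as f:
+    plugins_config = yaml.safe_load(f)
+
+enabled = [name for name, plugin in plugins_config.items() if plugin.get("enabled")]
+print(f"Enabled plugins: {enabled}")  # e.g. ['plugin_b']
+```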
+
+See our [Plugins Repo](https://github.com/Significant-Gravitas/Auto-GPT-Plugins) for more info on how to install all the amazing plugins the community has built!
+
+Alternatively, developers can use the [Auto-GPT Plugin Template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template) as a starting point for creating their own plugins.
+
diff --git a/docs/setup.md b/docs/setup.md
new file mode 100644
index 000000000..77f04e6e4
--- /dev/null
+++ b/docs/setup.md
@@ -0,0 +1,257 @@
+# Setting up Auto-GPT
+
+## πŸ“‹ Requirements
+
+Choose an environment to run Auto-GPT in (pick one):
+
+ - [Docker](https://docs.docker.com/get-docker/) (*recommended*)
+ - Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows))
+ - [VSCode + devcontainer](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
+
+
+## πŸ—οΈ Getting an API key
+
+Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).
+
+!!! attention
+ To use the OpenAI API with Auto-GPT, we strongly recommend **setting up billing**
+ (AKA paid account). Free accounts are [limited][openai/api limits] to 3 API calls per
+ minute, which can cause the application to crash.
+
+ You can set up a paid account at [Manage account > Billing > Overview](https://platform.openai.com/account/billing/overview).
+
+[openai/api limits]: https://platform.openai.com/docs/guides/rate-limits/overview#:~:text=Free%20trial%20users,RPM%0A40%2C000%20TPM
+
+!!! important
+ It's highly recommended that you keep track of your API costs on [the Usage page](https://platform.openai.com/account/usage).
+ You can also set limits on how much you spend on [the Usage limits page](https://platform.openai.com/account/billing/limits).
+
+![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./imgs/openai-api-key-billing-paid-account.png)
+
+
+## Setting up Auto-GPT
+
+### Set up with Docker
+
+1. Make sure you have Docker installed; see the [requirements](#requirements)
+2. Create a project directory for Auto-GPT
+
+ ```shell
+ mkdir Auto-GPT
+ cd Auto-GPT
+ ```
+
+3. In the project directory, create a file called `docker-compose.yml` with the following contents:
+
+ ```yaml
+ version: "3.9"
+ services:
+ auto-gpt:
+ image: significantgravitas/auto-gpt
+ env_file:
+ - .env
+ profiles: ["exclude-from-up"]
+ volumes:
+ - ./auto_gpt_workspace:/app/auto_gpt_workspace
+ - ./data:/app/data
+ ## allow auto-gpt to write logs to disk
+ - ./logs:/app/logs
+ ## uncomment following lines if you want to make use of these files
+ ## you must have them existing in the same folder as this docker-compose.yml
+ #- type: bind
+ # source: ./azure.yaml
+ # target: /app/azure.yaml
+ #- type: bind
+ # source: ./ai_settings.yaml
+ # target: /app/ai_settings.yaml
+ ```
+
+4. Create the necessary [configuration](#configuration) files. If needed, you can find
+   templates in the [repository]; an example fetch command is shown after this list.
+5. Pull the latest image from [Docker Hub]
+
+ ```shell
+ docker pull significantgravitas/auto-gpt
+ ```
+
+6. Continue to [Run with Docker](#run-with-docker)
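+
+As an example for step 4, you could fetch the `.env` template straight from the repository and use it as the basis for your `.env` (the URL below assumes the `stable` branch and a root-level template file; the exact path may differ between releases):
+
+```shell
+# Illustrative: download .env.template from the stable branch and save it as .env
+curl -L -o .env \
+    https://raw.githubusercontent.com/Significant-Gravitas/Auto-GPT/stable/.env.template
+```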
+
+!!! note "Docker only supports headless browsing"
+ Auto-GPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`.
+ Please do not change this setting in combination with Docker, or Auto-GPT will crash.
+
+[Docker Hub]: https://hub.docker.com/r/significantgravitas/auto-gpt
+[repository]: https://github.com/Significant-Gravitas/Auto-GPT
+
+
+### Set up with Git
+
+!!! important
+ Make sure you have [Git](https://git-scm.com/downloads) installed for your OS.
+
+!!! info "Executing commands"
+ To execute the given commands, open a CMD, Bash, or PowerShell window.
+ On Windows: press ++win+x++ and pick *Terminal*, or ++win+r++ and enter `cmd`.
+
+1. Clone the repository
+
+ ```shell
+ git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
+ ```
+
+2. Navigate to the directory where you downloaded the repository
+
+ ```shell
+ cd Auto-GPT/autogpts/autogpt
+ ```
+
+### Set up without Git/Docker
+
+!!! warning
+ We recommend using Git or Docker to make updating easier. Also note that some features, such as Python execution, only work inside Docker for security reasons.
+
+1. Download `Source code (zip)` from the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest)
+2. Extract the zip-file into a folder
+
+
+### Configuration
+
+1. Find the file named `.env.template` in the main `Auto-GPT` folder. This file may
+ be hidden by default in some operating systems due to the dot prefix. To reveal
+ hidden files, follow the instructions for your specific operating system:
+ [Windows][show hidden files/Windows], [macOS][show hidden files/macOS].
+2. Create a copy of `.env.template` and call it `.env`;
+ if you're already in a command prompt/terminal window: `cp .env.template .env`.
+3. Open the `.env` file in a text editor.
+4. Find the line that says `OPENAI_API_KEY=`.
+5. After the `=`, enter your unique OpenAI API Key *without any quotes or spaces*.
+6. Enter any other API keys or tokens for services you would like to use.
+
+ !!! note
+ To activate and adjust a setting, remove the `# ` prefix.
+
+7. Save and close the `.env` file.
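+
+For reference, a minimal working `.env` contains little more than the API key (the key below is a placeholder; the commented line is an optional, illustrative override):
+
+```ini
+OPENAI_API_KEY=<your-openai-api-key>
+# SMART_LLM=gpt-4
+```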
+
+!!! info "Using a GPT Azure-instance"
+ If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and
+ make an Azure configuration file:
+
+ - Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version`, and the deployment IDs for the relevant models in the `azure_model_map` section:
+ - `fast_llm_deployment_id`: your gpt-3.5-turbo or gpt-4 deployment ID
+ - `smart_llm_deployment_id`: your gpt-4 deployment ID
+ - `embedding_model_deployment_id`: your text-embedding-ada-002 v2 deployment ID
+
+ Example:
+
+ ```yaml
+ # Please specify all of these values as double-quoted strings
+ # Replace the string in angled brackets (<>) with your own deployment name
+ azure_model_map:
+ fast_llm_deployment_id: "<auto-gpt-deployment>"
+ ...
+ ```
+
+ Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.
+ If you're on Windows, you may need to install an [MSVC library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
+
+[show hidden files/Windows]: https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5
+[show hidden files/macOS]: https://www.pcmag.com/how-to/how-to-access-your-macs-hidden-files
+[openai-python docs]: https://github.com/openai/openai-python#microsoft-azure-endpoints
+[Azure OpenAI docs]: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line
+
+
+## Running Auto-GPT
+
+### Run with Docker
+
+The easiest way is to use `docker compose`.
+
+!!! important
+ Docker Compose version 1.29.0 or later is required to use version 3.9 of the Compose file format.
+
+You can check the version of Docker Compose installed on your system by running the following command:
+
+```shell
+docker compose version
+```
+
+This will display the version of Docker Compose that is currently installed on your system.
+
+If you need to upgrade Docker Compose to a newer version, follow the installation instructions in the [Docker documentation](https://docs.docker.com/compose/install/).
+
+Once you have a recent version of Docker Compose, run the commands below in your Auto-GPT folder.
+
+1. Build the image. If you have pulled the image from Docker Hub, skip this step. Note: you *will* need to build the image yourself if you modify `requirements.txt` to add or remove dependencies (e.g. Python libraries or frameworks).
+
+ ```shell
+ docker compose build auto-gpt
+ ```
+
+2. Run Auto-GPT
+
+ ```shell
+ docker compose run --rm auto-gpt
+ ```
+
+ By default, this will also start and attach a Redis memory backend. If you do not
+ want this, comment out or remove the `depends_on: - redis` and `redis:` sections from
+ `docker-compose.yml`.
+
+ For related settings, see [Memory > Redis setup](./configuration/memory.md#redis-setup).
+
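+For orientation, the relevant parts of the compose file look roughly like this (an illustrative sketch, not the full file; the `redis` service definition may differ in the stock `docker-compose.yml`):
+
+```yaml
+# Illustrative sketch: the sections to comment out if you don't want Redis
+services:
+  auto-gpt:
+    depends_on:
+      - redis
+  redis:
+    image: "redis/redis-stack-server:latest"
+```
+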
+You can pass extra arguments, e.g. running with `--gpt3only` and `--continuous`:
+
+```shell
+docker compose run --rm auto-gpt --gpt3only --continuous
+```
+
+If you dare, you can also build and run it with "vanilla" docker commands:
+
+```shell
+docker build -t auto-gpt .
+docker run -it --env-file=.env -v $PWD:/app auto-gpt
+docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
+```
+
+[Docker Compose file]: https://github.com/Significant-Gravitas/Auto-GPT/blob/stable/docker-compose.yml
+
+
+### Run with Dev Container
+
+1. Install the [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension in VS Code.
+
+2. Open command palette with ++f1++ and type `Dev Containers: Open Folder in Container`.
+
+3. Run `./run.sh`.
+
+
+### Run without Docker
+
+#### Create a Virtual Environment
+
+Create a virtual environment to run in.
+
+```shell
+python -m venv venvAutoGPT
+source venvAutoGPT/bin/activate
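+# On Windows, activate with: venvAutoGPT\Scripts\activate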
+pip3 install --upgrade pip
+```
+
+!!! warning
+ For security reasons, certain features (like Python execution) are disabled by default when running without Docker. So even if you run the program outside a Docker container, you currently still need Docker to actually run scripts.
+
+Simply run the startup script in your terminal. This will install any necessary Python
+packages and launch Auto-GPT.
+
+- On Linux/MacOS:
+
+ ```shell
+ ./run.sh
+ ```
+
+- On Windows:
+
+ ```shell
+ .\run.bat
+ ```
+
+If this gives errors, make sure you have a compatible Python version installed. See also
+the [requirements](#requirements).
diff --git a/docs/share-your-logs.md b/docs/share-your-logs.md
new file mode 100644
index 000000000..ebcce8393
--- /dev/null
+++ b/docs/share-your-logs.md
@@ -0,0 +1,52 @@
+## Share your logs with us to help improve Auto-GPT
+
+Do you notice weird behavior with your agent? Do you have an interesting use case? Do you have a bug you want to report?
+Follow the steps below to enable your logs and upload them. You can include these logs when making an issue report or discussing an issue with us.
+
+### Enable Debug Logs
+Activity, Error, and Debug logs are located in `./logs`.
+
+To print out debug logs:
+
+```shell
+./run.sh --debug # on Linux / macOS
+
+.\run.bat --debug # on Windows
+
+docker compose run --rm auto-gpt --debug # in Docker
+```
+
+### Inspect and share logs
+You can inspect and share logs via [e2b](https://e2b.dev).
+![E2b logs dashboard](./imgs/e2b-dashboard.png)
+
+
+1. Go to [autogpt.e2b.dev](https://autogpt.e2b.dev) and sign in.
+2. You'll see logs from other members of the AutoGPT team that you can inspect.
+3. Or you can upload your own logs: click the "Upload log folder" button and select the debug logs directory you generated. Wait a second or two and the page will reload.
+4. You can share logs by sharing the URL in your browser.
+![E2b log URL](./imgs/e2b-log-url.png)
+
+
+### Add tags to logs
+You can add custom tags to logs for other members of your team. This is useful if you want to indicate, for example, that the agent is having issues with challenges.
+
+E2b offers three severity levels:
+
+- Success
+- Warning
+- Error
+
+You can name your tag anything you want.
+
+#### How to add a tag
+1. Click the "plus" button to the left of the log folder's name.
+
+![E2b tag button](./imgs/e2b-tag-button.png)
+
+2. Type the name of a new tag.
+
+3. Select the severity.
+
+![E2b new tag](./imgs/e2b-new-tag.png)
diff --git a/docs/testing.md b/docs/testing.md
new file mode 100644
index 000000000..ef8176abf
--- /dev/null
+++ b/docs/testing.md
@@ -0,0 +1,51 @@
+# Running tests
+
+To run all tests, use the following command:
+
+```shell
+pytest
+```
+
+If `pytest` is not found:
+
+```shell
+python -m pytest
+```
+
+## Running specific test suites
+
+- To run without integration tests:
+
+```shell
+pytest --without-integration
+```
+
+- To run without *slow* integration tests:
+
+```shell
+pytest --without-slow-integration
+```
+
+- To run tests and see coverage:
+
+```shell
+pytest --cov=autogpt --without-integration --without-slow-integration
+```
+
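+You can also select a subset of tests using pytest's standard options (the module path and keyword below are illustrative):
+
+```shell
+# Run a single test module
+pytest tests/unit/test_config.py
+
+# Run only tests whose names match a keyword
+pytest -k "memory" --without-integration
+```
+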
+## Running the linter
+
+This project uses [flake8](https://flake8.pycqa.org/en/latest/) for linting.
+We currently use the following rules: `E303,W293,W291,W292,E305,E231,E302`.
+See the [flake8 rules](https://www.flake8rules.com/) for more information.
+
+To run the linter:
+
+```shell
+flake8 .
+```
+
+Or:
+
+```shell
+python -m flake8 .
+```
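+
+To check only the rules listed above (normally the project's flake8 configuration takes care of rule selection), you can pass them explicitly via flake8's `--select` option:
+
+```shell
+flake8 --select=E303,W293,W291,W292,E305,E231,E302 .
+```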
diff --git a/docs/usage.md b/docs/usage.md
new file mode 100644
index 000000000..f280bc8f5
--- /dev/null
+++ b/docs/usage.md
@@ -0,0 +1,122 @@
+# Usage
+
+## Command Line Arguments
+Running with `--help` lists all the possible command line arguments you can pass:
+
+```shell
+./run.sh --help # on Linux / macOS
+
+.\run.bat --help # on Windows
+```
+
+!!! info
+ For use with Docker, replace the script in the examples with
+ `docker compose run --rm auto-gpt`:
+
+ ```shell
+ docker compose run --rm auto-gpt --help
+ docker compose run --rm auto-gpt --ai-settings <filename>
+ ```
+
+!!! note
+ Replace anything in angled brackets (<>) with a value you want to specify.
+
+Here are some common arguments you can use when running Auto-GPT:
+
+* Run Auto-GPT with a different AI Settings file
+
+```shell
+./run.sh --ai-settings <filename>
+```
+
+* Run Auto-GPT with a different Prompt Settings file
+
+```shell
+./run.sh --prompt-settings <filename>
+```
+
+* Specify a memory backend
+
+```shell
+./run.sh --use-memory <memory-backend>
+```
+
+!!! note
+ There are shorthands for some of these flags, for example `-m` for `--use-memory`.
+ Use `./run.sh --help` for more information.
+
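+For example, the following two invocations are equivalent (the `redis` backend name is just an illustration):
+
+```shell
+./run.sh --use-memory redis
+./run.sh -m redis
+```
+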
+### Speak Mode
+
+Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT:
+
+```shell
+./run.sh --speak
+```
+
+### πŸ’€ Continuous Mode ⚠️
+
+Run the AI **without** user authorization, 100% automated.
+Continuous mode is NOT recommended.
+It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize.
+Use at your own risk.
+
+```shell
+./run.sh --continuous
+```
+
+To exit the program, press ++ctrl+c++.
+
+### ♻️ Self-Feedback Mode ⚠️
+
+Running Self-Feedback will **INCREASE** token use and thus cost more. This feature enables the agent to review its own actions, check whether they align with its current goals, and, if not, provide better feedback for the next loop. To enable this feature for the current loop, input `S` into the input field.
+
+### GPT-3.5 ONLY Mode
+
+If you don't have access to GPT-4, this mode allows you to use Auto-GPT!
+
+```shell
+./run.sh --gpt3only
+```
+
+You can achieve the same by setting `SMART_LLM` in `.env` to `gpt-3.5-turbo`.
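+
+That is, in your `.env`:
+
+```ini
+SMART_LLM=gpt-3.5-turbo
+```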
+
+### GPT-4 ONLY Mode
+
+If you have access to GPT-4, this mode allows you to use Auto-GPT solely with GPT-4.
+This may give your bot increased intelligence.
+
+```shell
+./run.sh --gpt4only
+```
+
+!!! warning
+ Since GPT-4 is more expensive to use, running Auto-GPT in GPT-4-only mode will
+ increase your API costs.
+
+## Logs
+
+Activity, Error, and Debug logs are located in `./logs`.
+
+!!! tip
+ Do you notice weird behavior with your agent? Do you have an interesting use case? Do you have a bug you want to report?
+ Follow the step below to enable your logs. You can include these logs when making an issue report or discussing an issue with us.
+
+To print out debug logs:
+
+```shell
+./run.sh --debug # on Linux / macOS
+
+.\run.bat --debug # on Windows
+
+docker compose run --rm auto-gpt --debug # in Docker
+```
+
+## Disabling Command Categories
+
+If you want to selectively disable some command groups, you can use the `DISABLED_COMMAND_CATEGORIES` config in your `.env`. You can find the list of categories in your `.env.template`.
+
+For example, to disable coding-related features, set it to the value below:
+
+```ini
+DISABLED_COMMAND_CATEGORIES=autogpt.commands.execute_code
+```
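+
+The setting accepts a comma-separated list, so multiple categories can be disabled at once (the second category below is illustrative; check your `.env.template` for the actual category names):
+
+```ini
+# The second category name here is an illustrative example
+DISABLED_COMMAND_CATEGORIES=autogpt.commands.execute_code,autogpt.commands.git_operations
+```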