author    Luke <2609441+lc0rp@users.noreply.github.com>  2023-08-01 13:17:33 -0400
committer GitHub <noreply@github.com>  2023-08-01 10:17:33 -0700
commit  7cd407b7b4a9f4395761e772335e859e40e8c3d3 (patch)
tree    97426d500d35d74a9b46ef874d507b116cbd725d /docs
parent  Add information on how to improve Auto-GPT with agbenchmark (#5056) (diff)
Use modern material theme for docs (#5035)
* Use modern material theme for docs

* Update mkdocs.yml

  Added search plugin

  Co-authored-by: James Collins <collijk@uw.edu>

* Updating mkdocs material theme config per recommendations to enable all markdown options

* Updated highlight extension settings and codeblocks throughout the docs to align with mkdocs-material recommendations. codehilite is deprecated in favor of the highlight extension: https://squidfunk.github.io/mkdocs-material/setup/extensions/python-markdown-extensions/#highlight

---------

Co-authored-by: lc0rp <2609411+lc0rp@users.noreply.github.com>
Co-authored-by: James Collins <collijk@uw.edu>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
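The codehilite-to-highlight migration this commit describes corresponds, per the linked mkdocs-material documentation, to enabling the `pymdownx` extensions in `mkdocs.yml`. A minimal sketch of the relevant settings (this is illustrative, not this repository's exact configuration):

```yaml
# mkdocs.yml — sketch of the mkdocs-material settings this commit targets.
theme:
  name: material

plugins:
  - search   # the search plugin added in this commit

markdown_extensions:
  # highlight replaces the deprecated codehilite extension
  - pymdownx.highlight:
      anchor_linenums: true
  - pymdownx.inlinehilite
  - pymdownx.snippets
  - pymdownx.superfences
```

With `pymdownx.highlight` active, fenced blocks such as ` ```shell ` are highlighted directly, which is why the diff below converts the old `:::shell` / `` ``` `` indented-block syntax to standard fences.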
Diffstat (limited to 'docs')
-rw-r--r--  docs/_javascript/mathjax.js            |  16
-rw-r--r--  docs/_javascript/tablesort.js          |   6
-rw-r--r--  docs/challenges/memory/challenge_b.md  |   5
-rw-r--r--  docs/challenges/memory/challenge_c.md  |  28
-rw-r--r--  docs/challenges/memory/challenge_d.md  |  19
-rw-r--r--  docs/configuration/imagegen.md         |  12
-rw-r--r--  docs/configuration/memory.md           |  24
-rw-r--r--  docs/configuration/voice.md            |   2
-rw-r--r--  docs/imgs/Auto_GPT_Logo.png            | bin 0 -> 26841 bytes
-rw-r--r--  docs/setup.md                          | 113
-rw-r--r--  docs/share-your-logs.md                |   2
-rw-r--r--  docs/testing.md                        |  25
-rw-r--r--  docs/usage.md                          |  42
13 files changed, 184 insertions, 110 deletions
diff --git a/docs/_javascript/mathjax.js b/docs/_javascript/mathjax.js
new file mode 100644
index 000000000..a80ddbff7
--- /dev/null
+++ b/docs/_javascript/mathjax.js
@@ -0,0 +1,16 @@
+window.MathJax = {
+ tex: {
+ inlineMath: [["\\(", "\\)"]],
+ displayMath: [["\\[", "\\]"]],
+ processEscapes: true,
+ processEnvironments: true
+ },
+ options: {
+ ignoreHtmlClass: ".*|",
+ processHtmlClass: "arithmatex"
+ }
+};
+
+document$.subscribe(() => {
+ MathJax.typesetPromise()
+}) \ No newline at end of file
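For the script above to take effect, mkdocs-material expects it to be registered in `mkdocs.yml` alongside the MathJax library itself and the `arithmatex` extension (whose `arithmatex` CSS class the script whitelists). A sketch following the mkdocs-material docs; the exact paths and CDN URLs are assumptions, not taken from this commit:

```yaml
# mkdocs.yml — wiring for docs/_javascript/mathjax.js (paths illustrative)
markdown_extensions:
  - pymdownx.arithmatex:
      generic: true

extra_javascript:
  - _javascript/mathjax.js
  - https://polyfill.io/v3/polyfill.min.js?features=es6
  - https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
```

The `document$.subscribe` call in the script re-typesets math after each page change, which mkdocs-material's instant-loading navigation requires.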
diff --git a/docs/_javascript/tablesort.js b/docs/_javascript/tablesort.js
new file mode 100644
index 000000000..ee04e9008
--- /dev/null
+++ b/docs/_javascript/tablesort.js
@@ -0,0 +1,6 @@
+document$.subscribe(function () {
+ var tables = document.querySelectorAll("article table:not([class])")
+ tables.forEach(function (table) {
+ new Tablesort(table)
+ })
+}) \ No newline at end of file
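This hook attaches Tablesort to every unclassed table, so the Tablesort library must load before it. The mkdocs-material docs suggest wiring it up roughly as follows; the CDN URL, version pin, and path here are assumptions:

```yaml
# mkdocs.yml — wiring for docs/_javascript/tablesort.js (URL/version assumed)
extra_javascript:
  - https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
  - _javascript/tablesort.js
```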
diff --git a/docs/challenges/memory/challenge_b.md b/docs/challenges/memory/challenge_b.md
index abc6da6be..7df7044ff 100644
--- a/docs/challenges/memory/challenge_b.md
+++ b/docs/challenges/memory/challenge_b.md
@@ -3,9 +3,10 @@
**Status**: Current level to beat: level 3
**Command to try**:
-```
+
+```shell
pytest -s tests/challenges/memory/test_memory_challenge_b.py --level=3
-``
+```
## Description
diff --git a/docs/challenges/memory/challenge_c.md b/docs/challenges/memory/challenge_c.md
index e197ddbd2..955c823f3 100644
--- a/docs/challenges/memory/challenge_c.md
+++ b/docs/challenges/memory/challenge_c.md
@@ -3,9 +3,10 @@
**Status**: Current level to beat: level 1
**Command to try**:
-```
+
+```shell
pytest -s tests/challenges/memory/test_memory_challenge_c.py --level=2
-``
+```
## Description
@@ -20,13 +21,23 @@ The challenge involves a series of text files containing instructions and silly
- `instructions_1.txt`
-The current phrase is "The purple elephant danced on a rainbow while eating a taco.".\n[NOISE intended to confuse the agent]
-Read the file instructions_2.txt using the read_file command.
+The current phrase is
+
+```
+The purple elephant danced on a rainbow while eating a taco.\n[NOISE intended to confuse the agent]
+```
+
+Read the file `instructions_2.txt` using the read_file command.
- `instructions_2.txt`
-The current phrase is "The sneaky toaster stole my socks and ran away to Hawaii.".\n[NOISE intended to confuse the agent]
+The current phrase is
+
+```
+The sneaky toaster stole my socks and ran away to Hawaii.\n[NOISE intended to confuse the agent]
+```
+
Read the file instructions_3.txt using the read_file command.
@@ -34,7 +45,12 @@ Read the file instructions_3.txt using the read_file command.
- `instructions_n.txt`
-The current phrase is "My pet rock sings better than Beyoncé on Tuesdays."
+The current phrase is
+
+```
+My pet rock sings better than Beyoncé on Tuesdays.
+```
+
Write all the phrases into the file output.txt. The file has not been created yet. After that, use the task_complete command.
diff --git a/docs/challenges/memory/challenge_d.md b/docs/challenges/memory/challenge_d.md
index 7563cce59..5ecec045e 100644
--- a/docs/challenges/memory/challenge_d.md
+++ b/docs/challenges/memory/challenge_d.md
@@ -1,11 +1,12 @@
-# Memory Challenge C
+# Memory Challenge D
**Status**: Current level to beat: level 1
**Command to try**:
-```
+
+```shell
pytest -s tests/challenges/memory/test_memory_challenge_d.py --level=1
-``
+```
## Description
@@ -30,13 +31,16 @@ The test runs for levels up to the maximum level that the AI has successfully be
- `instructions_1.txt`
-"Sally has a marble (marble A) and she puts it in her basket (basket S), then leaves the room. Anne moves marble A from Sally's basket (basket S) to her own basket (basket A).",
+```
+Sally has a marble (marble A) and she puts it in her basket (basket S), then leaves the room. Anne moves marble A from Sally's basket (basket S) to her own basket (basket A).
+```
- `instructions_2.txt`
-"Sally gives a new marble (marble B) to Bob who is outside with her. Bob goes into the room and places marble B into Anne's basket (basket A). Anne tells Bob to tell Sally that he lost the marble b. Bob leaves the room and speaks to Sally about the marble B. Meanwhile, after Bob left the room, Anne moves marble A into the green box, but tells Charlie to tell Sally that marble A is under the sofa. Charlie leaves the room and speak to Sally about the marble A as instructed by Anne.",
-
+```
+Sally gives a new marble (marble B) to Bob who is outside with her. Bob goes into the room and places marble B into Anne's basket (basket A). Anne tells Bob to tell Sally that he lost the marble b. Bob leaves the room and speaks to Sally about the marble B. Meanwhile, after Bob left the room, Anne moves marble A into the green box, but tells Charlie to tell Sally that marble A is under the sofa. Charlie leaves the room and speak to Sally about the marble A as instructed by Anne.
+```
...and so on.
@@ -44,6 +48,7 @@ The test runs for levels up to the maximum level that the AI has successfully be
The expected believes of every characters are given in a list:
+```json
expected_beliefs = {
1: {
'Sally': {
@@ -68,7 +73,7 @@ expected_beliefs = {
'A': 'sofa', # Because Anne told him to tell Sally so
}
},...
-
+```
## Objective
diff --git a/docs/configuration/imagegen.md b/docs/configuration/imagegen.md
index 38fdcebb2..1a10d61d2 100644
--- a/docs/configuration/imagegen.md
+++ b/docs/configuration/imagegen.md
@@ -7,7 +7,8 @@
## DALL-e
In `.env`, make sure `IMAGE_PROVIDER` is commented (or set to `dalle`):
-``` ini
+
+```ini
# IMAGE_PROVIDER=dalle # this is the default
```
@@ -23,7 +24,8 @@ To use text-to-image models from Hugging Face, you need a Hugging Face API token
Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
Once you have an API token, uncomment and adjust these variables in your `.env`:
-``` ini
+
+```ini
IMAGE_PROVIDER=huggingface
HUGGINGFACE_API_TOKEN=your-huggingface-api-token
```
@@ -39,7 +41,8 @@ Further optional configuration:
## Stable Diffusion WebUI
It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
-``` ini
+
+```ini
IMAGE_PROVIDER=sdwebui
```
@@ -54,6 +57,7 @@ Further optional configuration:
| `SD_WEBUI_AUTH` | `{username}:{password}` | *Note: do not copy the braces!* |
## Selenium
-``` shell
+
+```shell
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
```
diff --git a/docs/configuration/memory.md b/docs/configuration/memory.md
index 9d18f5ba2..3fa908b26 100644
--- a/docs/configuration/memory.md
+++ b/docs/configuration/memory.md
@@ -51,17 +51,19 @@ Links to memory backends
1. Launch Redis container
- :::shell
- docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
+ ```shell
+ docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
+ ```
3. Set the following settings in `.env`
- :::ini
- MEMORY_BACKEND=redis
- REDIS_HOST=localhost
- REDIS_PORT=6379
- REDIS_PASSWORD=<PASSWORD>
-
+ ```shell
+ MEMORY_BACKEND=redis
+ REDIS_HOST=localhost
+ REDIS_PORT=6379
+ REDIS_PASSWORD=<PASSWORD>
+ ```
+
Replace `<PASSWORD>` by your password, omitting the angled brackets (<>).
Optional configuration:
@@ -157,7 +159,7 @@ To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `pip insta
Install the Weaviate client before usage.
-``` shell
+```shell
$ pip install weaviate-client
```
@@ -165,7 +167,7 @@ $ pip install weaviate-client
In your `.env` file set the following:
-``` ini
+```ini
MEMORY_BACKEND=weaviate
WEAVIATE_HOST="127.0.0.1" # the IP or domain of the running Weaviate instance
WEAVIATE_PORT="8080"
@@ -195,7 +197,7 @@ View memory usage by using the `--debug` flag :)
Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT.
-``` shell
+```shell
$ python data_ingestion.py -h
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]
diff --git a/docs/configuration/voice.md b/docs/configuration/voice.md
index 728fbaf5f..654d2ee45 100644
--- a/docs/configuration/voice.md
+++ b/docs/configuration/voice.md
@@ -2,7 +2,7 @@
Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
-``` shell
+```shell
python -m autogpt --speak
```
diff --git a/docs/imgs/Auto_GPT_Logo.png b/docs/imgs/Auto_GPT_Logo.png
new file mode 100644
index 000000000..9c60eea98
--- /dev/null
+++ b/docs/imgs/Auto_GPT_Logo.png
Binary files differ
diff --git a/docs/setup.md b/docs/setup.md
index d0079e0f0..bd2f142e0 100644
--- a/docs/setup.md
+++ b/docs/setup.md
@@ -36,40 +36,43 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
1. Make sure you have Docker installed, see [requirements](#requirements)
2. Create a project directory for Auto-GPT
- :::shell
- mkdir Auto-GPT
- cd Auto-GPT
+ ```shell
+ mkdir Auto-GPT
+ cd Auto-GPT
+ ```
3. In the project directory, create a file called `docker-compose.yml` with the following contents:
- :::yaml
- version: "3.9"
- services:
- auto-gpt:
- image: significantgravitas/auto-gpt
- env_file:
- - .env
- profiles: ["exclude-from-up"]
- volumes:
- - ./auto_gpt_workspace:/app/auto_gpt_workspace
- - ./data:/app/data
- ## allow auto-gpt to write logs to disk
- - ./logs:/app/logs
- ## uncomment following lines if you want to make use of these files
- ## you must have them existing in the same folder as this docker-compose.yml
- #- type: bind
- # source: ./azure.yaml
- # target: /app/azure.yaml
- #- type: bind
- # source: ./ai_settings.yaml
- # target: /app/ai_settings.yaml
+ ```yaml
+ version: "3.9"
+ services:
+ auto-gpt:
+ image: significantgravitas/auto-gpt
+ env_file:
+ - .env
+ profiles: ["exclude-from-up"]
+ volumes:
+ - ./auto_gpt_workspace:/app/auto_gpt_workspace
+ - ./data:/app/data
+ ## allow auto-gpt to write logs to disk
+ - ./logs:/app/logs
+ ## uncomment following lines if you want to make use of these files
+ ## you must have them existing in the same folder as this docker-compose.yml
+ #- type: bind
+ # source: ./azure.yaml
+ # target: /app/azure.yaml
+ #- type: bind
+ # source: ./ai_settings.yaml
+ # target: /app/ai_settings.yaml
+ ```
4. Create the necessary [configuration](#configuration) files. If needed, you can find
templates in the [repository].
5. Pull the latest image from [Docker Hub]
- :::shell
- docker pull significantgravitas/auto-gpt
+ ```shell
+ docker pull significantgravitas/auto-gpt
+ ```
6. Continue to [Run with Docker](#run-with-docker)
@@ -92,14 +95,15 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
1. Clone the repository
- :::shell
- git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
+ ```shell
+ git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
+ ```
2. Navigate to the directory where you downloaded the repository
- :::shell
- cd Auto-GPT
-
+ ```shell
+ cd Auto-GPT
+ ```
### Set up without Git/Docker
@@ -139,12 +143,13 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
Example:
- :::yaml
- # Please specify all of these values as double-quoted strings
- # Replace string in angled brackets (<>) to your own deployment Name
- azure_model_map:
- fast_llm_deployment_id: "<auto-gpt-deployment>"
- ...
+ ```yaml
+ # Please specify all of these values as double-quoted strings
+ # Replace string in angled brackets (<>) to your own deployment Name
+ azure_model_map:
+ fast_llm_deployment_id: "<auto-gpt-deployment>"
+ ...
+ ```
Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.
If you're on Windows you may need to install an [MSVC library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
@@ -164,7 +169,9 @@ Easiest is to use `docker compose`.
Important: Docker Compose version 1.29.0 or later is required to use version 3.9 of the Compose file format.
You can check the version of Docker Compose installed on your system by running the following command:
- docker compose version
+```shell
+docker compose version
+```
This will display the version of Docker Compose that is currently installed on your system.
@@ -174,13 +181,15 @@ Once you have a recent version of Docker Compose, run the commands below in your
1. Build the image. If you have pulled the image from Docker Hub, skip this step (NOTE: You *will* need to do this if you are modifying requirements.txt to add/remove dependencies like Python libs/frameworks)
- :::shell
- docker compose build auto-gpt
-
+ ```shell
+ docker compose build auto-gpt
+ ```
+
2. Run Auto-GPT
- :::shell
- docker compose run --rm auto-gpt
+ ```shell
+ docker compose run --rm auto-gpt
+ ```
By default, this will also start and attach a Redis memory backend. If you do not
want this, comment or remove the `depends: - redis` and `redis:` sections from
@@ -189,12 +198,14 @@ Once you have a recent version of Docker Compose, run the commands below in your
For related settings, see [Memory > Redis setup](./configuration/memory.md#redis-setup).
You can pass extra arguments, e.g. running with `--gpt3only` and `--continuous`:
-``` shell
+
+```shell
docker compose run --rm auto-gpt --gpt3only --continuous
```
If you dare, you can also build and run it with "vanilla" docker commands:
-``` shell
+
+```shell
docker build -t auto-gpt .
docker run -it --env-file=.env -v $PWD:/app auto-gpt
docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
@@ -218,7 +229,7 @@ docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuou
Create a virtual environment to run in.
-``` shell
+```shell
python -m venv venvAutoGPT
source venvAutoGPT/bin/activate
pip3 install --upgrade pip
@@ -232,13 +243,15 @@ packages and launch Auto-GPT.
- On Linux/MacOS:
- :::shell
- ./run.sh
+ ```shell
+ ./run.sh
+ ```
- On Windows:
- :::shell
- .\run.bat
+ ```shell
+ .\run.bat
+ ```
If this gives errors, make sure you have a compatible Python version installed. See also
the [requirements](./installation.md#requirements).
diff --git a/docs/share-your-logs.md b/docs/share-your-logs.md
index f673e375c..ebcce8393 100644
--- a/docs/share-your-logs.md
+++ b/docs/share-your-logs.md
@@ -8,7 +8,7 @@ Activity, Error, and Debug logs are located in `./logs`
To print out debug logs:
-``` shell
+```shell
./run.sh --debug # on Linux / macOS
.\run.bat --debug # on Windows
diff --git a/docs/testing.md b/docs/testing.md
index 9a1735966..ef8176abf 100644
--- a/docs/testing.md
+++ b/docs/testing.md
@@ -2,12 +2,13 @@
To run all tests, use the following command:
-``` shell
+```shell
pytest
```
If `pytest` is not found:
-``` shell
+
+```shell
python -m pytest
```
@@ -15,18 +16,21 @@ python -m pytest
- To run without integration tests:
- :::shell
- pytest --without-integration
+```shell
+pytest --without-integration
+```
- To run without *slow* integration tests:
- :::shell
- pytest --without-slow-integration
+```shell
+pytest --without-slow-integration
+```
- To run tests and see coverage:
- :::shell
- pytest --cov=autogpt --without-integration --without-slow-integration
+```shell
+pytest --cov=autogpt --without-integration --without-slow-integration
+```
## Running the linter
@@ -36,11 +40,12 @@ See the [flake8 rules](https://www.flake8rules.com/) for more information.
To run the linter:
-``` shell
+```shell
flake8 .
```
Or:
-``` shell
+
+```shell
python -m flake8 .
```
diff --git a/docs/usage.md b/docs/usage.md
index cb74ef7f6..f280bc8f5 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -3,7 +3,7 @@
## Command Line Arguments
Running with `--help` lists all the possible command line arguments you can pass:
-``` shell
+```shell
./run.sh --help # on Linux / macOS
.\run.bat --help # on Windows
@@ -13,9 +13,10 @@ Running with `--help` lists all the possible command line arguments you can pass
For use with Docker, replace the script in the examples with
`docker compose run --rm auto-gpt`:
- :::shell
- docker compose run --rm auto-gpt --help
- docker compose run --rm auto-gpt --ai-settings <filename>
+ ```shell
+ docker compose run --rm auto-gpt --help
+ docker compose run --rm auto-gpt --ai-settings <filename>
+ ```
!!! note
Replace anything in angled brackets (<>) to a value you want to specify
@@ -23,18 +24,22 @@ Running with `--help` lists all the possible command line arguments you can pass
Here are some common arguments you can use when running Auto-GPT:
* Run Auto-GPT with a different AI Settings file
- ``` shell
- ./run.sh --ai-settings <filename>
- ```
+
+```shell
+./run.sh --ai-settings <filename>
+```
+
* Run Auto-GPT with a different Prompt Settings file
- ``` shell
- ./run.sh --prompt-settings <filename>
- ```
-* Specify a memory backend
- :::shell
- ./run.sh --use-memory <memory-backend>
+```shell
+./run.sh --prompt-settings <filename>
+```
+
+* Specify a memory backend
+```shell
+./run.sh --use-memory <memory-backend>
+```
!!! note
There are shorthands for some of these flags, for example `-m` for `--use-memory`.
@@ -44,7 +49,7 @@ Here are some common arguments you can use when running Auto-GPT:
Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
-``` shell
+```shell
./run.sh --speak
```
@@ -55,9 +60,10 @@ Continuous mode is NOT recommended.
It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize.
Use at your own risk.
-``` shell
+```shell
./run.sh --continuous
```
+
To exit the program, press ++ctrl+c++
### ♻️ Self-Feedback Mode ⚠️
@@ -68,7 +74,7 @@ Running Self-Feedback will **INCREASE** token use and thus cost more. This featu
If you don't have access to GPT-4, this mode allows you to use Auto-GPT!
-``` shell
+```shell
./run.sh --gpt3only
```
@@ -79,7 +85,7 @@ You can achieve the same by setting `SMART_LLM` in `.env` to `gpt-3.5-turbo`.
If you have access to GPT-4, this mode allows you to use Auto-GPT solely with GPT-4.
This may give your bot increased intelligence.
-``` shell
+```shell
./run.sh --gpt4only
```
@@ -97,7 +103,7 @@ Activity, Error, and Debug logs are located in `./logs`
To print out debug logs:
-``` shell
+```shell
./run.sh --debug # on Linux / macOS
.\run.bat --debug # on Windows