- I saw this as well; he uses prompt-engineering techniques similar to mine. Loved that video.
- If you use multiple GPT instances, or let one instance iterate over its own response, you effectively get a larger context window / working memory / token budget / processing time. With separate instances, the context memories of multiple agents complement each other: instead of one 8k model you get two 8k models, nominally 16k, minus maybe ~4k of overlap, so roughly 12k of effective context versus 8k. Smart, yes. (A rough sketch of the idea follows below.)
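For anyone curious what the multi-instance idea above might look like in practice, here is a minimal Python sketch. It assumes a hypothetical `call_model` helper (not part of Auto-GPT or any particular API); each instance sees only one slice of the input, and a final instance sees only the other instances' compressed notes.

```python
# Hypothetical sketch of the "two 8k contexts" idea from the comment above:
# split a long input across separate model instances (separate conversations),
# let each summarize its slice, then merge the summaries in a final call.
# `call_model` is a placeholder, not a real Auto-GPT or OpenAI function;
# swap in whatever chat-completion client you actually use.

from typing import List


def call_model(prompt: str) -> str:
    """Placeholder for a single chat-completion call to one model instance."""
    raise NotImplementedError("Wire this up to your LLM client of choice.")


def split_into_chunks(text: str, max_chars: int = 8000) -> List[str]:
    """Naive character-based split; real token counting would be more precise."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def summarize_with_multiple_instances(long_text: str, task: str) -> str:
    """Each chunk goes to a fresh instance, then a final instance merges the notes."""
    partial_notes = []
    for i, chunk in enumerate(split_into_chunks(long_text)):
        notes = call_model(
            f"You are instance {i}. Read this slice of a larger document and "
            f"write concise notes relevant to the task '{task}':\n\n{chunk}"
        )
        partial_notes.append(notes)

    # Final pass: one instance sees only the compressed notes, not the raw text,
    # so the combined material fits inside a single context window.
    combined = "\n\n".join(partial_notes)
    return call_model(
        f"Combine these notes from other instances and complete the task "
        f"'{task}':\n\n{combined}"
    )
```

The overlap the comment mentions corresponds to the final merge call here: that instance has to fit all the partial notes in its own window, which is where the "16k minus some overlap" intuition comes from.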
- This guy made GPT-4 dramatically smarter, and the approach is easily applicable to Auto-GPT: https://www.youtube.com/watch?v=wVzuvf9D9BU