Improved Reply Prompt: Managing the last reply and the full thread differently. #150
I think it could be possible, after implementing #146, to do something like this:
where the part addressing the last question could read: "Regarding your last question, I've never tried something like that. It really depends on the tool you're using, but I think you could find a tutorial or an AI expert forum online to ask that question."
In the pre-release version 2.3.0pre2 you'll find the necessary features to test your prompt. You can use a prompt like this:
To use this, you need to select the text of the initial email; ThunderAI will then also send the full HTML body. Alternatively, you can use the placeholder. Testing this prompt was challenging because I often reached the maximum limit of 30,000 characters.
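As a rough sketch of how that 30,000-character limit could be handled before sending the body to the model (this is a hypothetical helper, not part of ThunderAI; the constant name and truncation marker are assumptions):

```python
MAX_PROMPT_CHARS = 30_000  # limit mentioned above (assumed constant name)

def truncate_body(body: str, limit: int = MAX_PROMPT_CHARS) -> str:
    """Keep the beginning of the body, where the latest reply usually is,
    and drop the oldest quoted text when the limit would be exceeded."""
    if len(body) <= limit:
        return body
    marker = "\n[... older messages truncated ...]"
    # Reserve room for the marker so the result never exceeds the limit.
    return body[: limit - len(marker)] + marker
```

Truncating from the end rather than the start matters here: in most reply styles the newest text sits at the top, so cutting the tail sacrifices the oldest quoted material first.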
Released in version 2.3.0.
We really love this tool!
We would have liked the tool to take into account which part of a long email thread ChatGPT should respond to (the latest reply) and which part of the thread (earlier, already-answered messages) it may treat only as background.
It would be good if the prompt sent to ChatGPT delimited which paragraph is the latest and which is previously answered mail. As it is now, the entire email thread is sent over in one piece, which means that ChatGPT answers already-answered questions, sums up previous things, etc.
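A heuristic split along those lines could look like the sketch below (hypothetical code, not part of ThunderAI; real threads use many quoting styles, so the regexes are assumptions that cover only the common "On ... wrote:" header and ">"-prefixed quotes):

```python
import re

def split_thread(body: str) -> tuple[str, str]:
    """Split an email body into (latest reply, quoted history).

    Heuristic only: looks for a reply header like "On <date>, <name> wrote:"
    or, failing that, the first line starting with ">".
    """
    # A typical header line that introduces the quoted part of the thread.
    m = re.search(r"^On .+ wrote:\s*$", body, flags=re.MULTILINE)
    if m is None:
        # Fall back to the first ">"-quoted line.
        m = re.search(r"^>", body, flags=re.MULTILINE)
    if m is None:
        return body.strip(), ""  # no quoting found: everything is the reply
    return body[: m.start()].strip(), body[m.start():].strip()

latest, history = split_thread(
    "Thanks, that worked!\n\nOn Mon, Jan 1, Bob wrote:\n> Try restarting."
)
```

With a split like this, the prompt could label the two parts separately, e.g. "Reply only to the following:" for `latest` and "Use this only as background:" for `history`.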
Then we wonder if there is a manual (one that a non-technical person can follow to connect it and set up the database) on how to connect the tool to a database or a web page from which ChatGPT can retrieve information and answers to the questions in the email.
Best regards Axel