
Leverage natural language static analysis tools for enhanced prompt output #30

Open
d33bs opened this issue Jul 28, 2023 · 1 comment


d33bs commented Jul 28, 2023

This issue highlights the possibility of using natural language static analysis metrics and tooling to enhance the output this project delivers. One programmatic tool that aggregates many options in this area is Vale. Vale could also be configured to define a loose "manubot-approved" writing style specification, extending the project's audience reach and capabilities.
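As a rough illustration of how Vale might slot into such a loop, below is a minimal Python sketch that runs Vale on a piece of generated text and collects its alerts so they could be appended to a follow-up LLM prompt. This assumes the `vale` CLI is installed and a `.vale.ini` configuration exists; the exact JSON field names may vary by Vale version.

```python
import json
import subprocess
import tempfile


def vale_feedback(text: str) -> list[str]:
    """Run Vale on generated text and return its alert messages.

    Assumes the `vale` CLI is on PATH and a `.vale.ini` config is
    present; `--output=JSON` asks Vale for machine-readable results.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as handle:
        handle.write(text)
        path = handle.name

    result = subprocess.run(
        ["vale", "--output=JSON", path],
        capture_output=True,
        text=True,
    )
    alerts = json.loads(result.stdout or "{}")
    # Vale groups alerts by file; flatten them into human-readable
    # messages suitable for inclusion in a revision prompt.
    return [
        f"line {alert.get('Line')}: {alert.get('Message')}"
        for file_alerts in alerts.values()
        for alert in file_alerts
    ]
```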

Two specific areas where these tools could be used:

  • Write-good can be used as part of Vale and could provide iterative, automated feedback within LLM dialogues (as in the sketch above). For example, the LLM could be prompted up front to abide by these rules, or it could be informed by the static analysis feedback across multiple prompts.
  • Readability measures like Flesch-Kincaid could be used as part of prompts to provide a level of quality assurance for responses. For example, a prompt might include: "Ensure Flesch-Kincaid readability scores are kept to XX.X in your output." A programmatic check of the same target is sketched below.
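As a loose sketch of the second idea, readability could also be verified after generation rather than only requested in the prompt. The example below uses the `textstat` package's `flesch_kincaid_grade` function; `generate_response` is a hypothetical stand-in for whatever LLM call the project uses, and the 12.0 threshold is an arbitrary placeholder.

```python
import textstat


def generate_response(prompt: str) -> str:
    """Hypothetical placeholder for the project's actual LLM call."""
    raise NotImplementedError


def revise_until_readable(prompt: str, max_grade: float = 12.0, max_tries: int = 3) -> str:
    """Regenerate a response until its Flesch-Kincaid grade meets a target.

    The threshold is treated as loose guidance: after `max_tries`
    attempts, the last response is returned as-is.
    """
    response = generate_response(prompt)
    for _ in range(max_tries):
        grade = textstat.flesch_kincaid_grade(response)
        if grade <= max_grade:
            break
        # Feed the measured score back to the model and ask for a revision.
        response = generate_response(
            f"{prompt}\n\nYour previous answer scored {grade:.1f} on the "
            f"Flesch-Kincaid grade scale; revise it to score at or below "
            f"{max_grade:.1f} while keeping the content intact."
        )
    return response
```

Capping the retries keeps the check advisory rather than strict, which matches the "loose guidance" framing discussed below.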

Funneling many of these options through Vale could allow for standardized configuration and integration, though I could also see them being better used or configured individually. To preserve a certain level of author flexibility, it may be best to treat these checks as loose guidance for both the LLM and the audience receiving the advice. For example, there are likely times when greater readability complexity is required to fully realize a written topic.

d33bs moved this to Paused in SET Projects on Jul 28, 2023

miltondp commented Aug 1, 2023

@d33bs, remember to bring up these ideas in our next meetings when you think it's relevant. This seems related to prompt engineering and how to evaluate prompts.
