
AI Safety evaluations (with AI Project provisioning) #2370

Merged
merged 9 commits into Azure-Samples:main on Feb 20, 2025

Conversation

@pamelafox pamelafox (Collaborator) commented Feb 20, 2025

Purpose

Fixes #2262

This PR uses the Azure AI evaluation SDK to simulate adversarial users and evaluate the results. I intentionally do not store the simulation results in the repo, since the simulated questions are often disturbing; I store only the overall safety results.
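For readers who want to see the shape of the approach, here is a minimal sketch of driving the SDK's adversarial simulator against a chat app. The `azure_ai_project` values are placeholders, and `send_question_to_app` is a hypothetical helper standing in for a call to the deployed app, not a function from this repo:

```python
import asyncio

from azure.ai.evaluation.simulator import AdversarialScenario, AdversarialSimulator
from azure.identity import DefaultAzureCredential

# Placeholder AI Project coordinates -- substitute your own values.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<ai-project-name>",
}


async def send_question_to_app(query: str) -> str:
    # Hypothetical helper: forward the question to the target app and
    # return its answer (e.g., an HTTP POST to the app's chat endpoint).
    raise NotImplementedError


async def callback(messages, stream=False, session_state=None, context=None):
    # The simulator calls this with each adversarial question it generates.
    query = messages["messages"][-1]["content"]
    answer = await send_question_to_app(query)
    messages["messages"].append({"content": answer, "role": "assistant"})
    return {"messages": messages["messages"], "stream": stream, "session_state": session_state}


async def run_simulations():
    simulator = AdversarialSimulator(
        azure_ai_project=azure_ai_project, credential=DefaultAzureCredential()
    )
    # Run simulated adversarial conversations against the app.
    return await simulator(
        scenario=AdversarialScenario.ADVERSARIAL_QA,
        target=callback,
        max_simulation_results=200,
    )


outputs = asyncio.run(run_simulations())
```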

Our baseline RAG app achieves 100% safety (all scores are "Low" or "Very low") in the 200 simulations that I ran. Yay!
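The "Low"/"Very low" labels are the severity buckets returned by the SDK's content safety evaluators. As a sketch, with the same placeholder `azure_ai_project` as above, evaluating a single question/answer pair looks roughly like this:

```python
from azure.ai.evaluation import ContentSafetyEvaluator
from azure.identity import DefaultAzureCredential

safety_eval = ContentSafetyEvaluator(
    credential=DefaultAzureCredential(), azure_ai_project=azure_ai_project
)

# The result holds a severity label per harm category (e.g.,
# result["violence"] == "Very low") along with a numeric score for each.
result = safety_eval(
    query="What does my plan cover?",
    response="Your plan covers emergency services and preventive care.",
)
```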

Does this introduce a breaking change?

When developers merge from main and run the server, azd up, or azd deploy, will this produce an error?
If you're not sure, try it out on an old environment.

[ ] Yes
[X] No

Does this require changes to learn.microsoft.com docs?

This repository is referenced by this tutorial,
which includes deployment, settings, and usage instructions. If text or screenshots need to change in the tutorial,
check the box below and notify the tutorial author. A Microsoft employee can do this for you if you're an external contributor.

[ ] Yes
[X] No

Type of change

[ ] Bugfix
[X] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation content changes
[ ] Other... Please describe:

Code quality checklist

See CONTRIBUTING.md for more details.

  • The current tests all pass (python -m pytest).
  • I added tests that prove my fix is effective or that my feature works.
  • I ran python -m pytest --cov to verify 100% coverage of added lines.
  • I ran python -m mypy to check for type errors.
  • I either used the pre-commit hooks or ran ruff and black manually on my code.

@pamelafox pamelafox marked this pull request as ready for review February 20, 2025 17:54
@pamelafox pamelafox changed the title WIP: AI Safety evaluations AI Safety evaluations (with AI Project provisioning) Feb 20, 2025
@Copilot Copilot AI (Contributor) left a comment

Copilot reviewed 13 out of 13 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (1)

evals/safety_evaluation.py:123

  • This division operation could raise a ZeroDivisionError if summary_scores[evaluator]['low_count'] is zero. Consider adding a check to handle a zero denominator or use an alternative calculation that avoids division by zero.
summary_scores[evaluator]["mean_score"] = summary_scores[evaluator]["score_total"] / summary_scores[evaluator]["low_count"]

@pamelafox pamelafox merged commit 31ea846 into Azure-Samples:main Feb 20, 2025
18 checks passed
Development

Successfully merging this pull request may close these issues.

This AI sample lacks risk & safety evaluation implementation