Welcome to the AI-Augmented Pair Programming Hackathon Judging and Evaluation Guide. This guide sets out the criteria and guidelines judges will use to evaluate hackathon projects. The evaluation covers several key aspects of each project to ensure a fair and comprehensive assessment.
Criterion 1: Project Plan and User Stories
Evaluation Points:
- Original Content: The project plan and user stories must be the team's own work.
- Specificity and Detail: User stories should be specific to the project scenario and detailed enough to guide implementation.
Examples:
- High Quality: "As a user, I want a homepage with a welcome message, an overview of services, and a featured wellness tip."
- Low Quality: "Create homepage" without any details or acceptance criteria.
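For illustration, a user story written at the level of detail judges should look for might read as follows; the acceptance criteria here are hypothetical:

```
As a user, I want a homepage with a welcome message, an overview of
services, and a featured wellness tip, so that I can quickly understand
what the site offers.

Acceptance criteria (hypothetical):
- The welcome message is visible without scrolling on a laptop screen.
- The services overview lists at least three services with short blurbs.
- A featured wellness tip appears on every visit to the homepage.
```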
Criterion 2: Wireframes and UX Design
Evaluation Points:
- Original Designs: Wireframes and UX design documentation should be original and customized to reflect the unique project requirements.
- Accessibility Considerations: The design documentation should explain how accessibility was taken into account.
Examples:
- High Quality: Detailed wireframes for each page with notes on layout, navigation, and accessibility.
- Low Quality: Simple, generic wireframes without detailed notes or accessibility considerations.
Criterion 3: Version Control
Evaluation Points:
- Consistent Use: Regular, descriptive commits demonstrate ongoing work and understanding.
- Documentation of Strategies: While branching is not required, any strategies used should be documented.
Examples:
- High Quality: Frequent commits with clear messages detailing the changes made.
- Low Quality: Few commits with vague messages.
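As a hypothetical illustration, commit messages at the two quality levels might look like:

```
High quality (frequent, descriptive):
  Add homepage hero section with welcome message
  Style services overview cards for small screens
  Fix keyboard focus order in the main navigation

Low quality (few, vague):
  stuff
  update
  final version
```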
Criterion 4: AI Tool Usage Plan
Evaluation Points:
- Clear Usage Plan: A documented plan specifying which parts of the project will use AI-generated code and how the team will ensure the quality and originality of that code.
Examples:
- High Quality: Detailed plan for using GitHub Copilot for generating HTML/CSS code and DALL-E for images.
- Low Quality: No clear plan or documentation of AI tool usage.
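A usage plan does not need to be long to be clear; a hypothetical entry, based on the tools named above, might look like:

```
Tool: GitHub Copilot
Scope: Boilerplate HTML structure and CSS layout for all pages
Review: Every suggestion is read, tested, and edited before commit

Tool: DALL-E
Scope: Hero and banner imagery on the homepage
Review: Images checked for relevance and usage rights before inclusion
```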
Criterion 5: Code Quality and Standards
Evaluation Points:
- Adherence to Standards: Code should meet HTML5/CSS3 standards and follow best practices for readability and maintainability.
- Review and Optimize: AI-generated code should be reviewed and optimized, with any modifications clearly documented.
Examples:
- High Quality: Well-structured code with consistent indentation and meaningful naming conventions.
- Low Quality: Inconsistent code with poor readability.
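As a minimal, hypothetical sketch of what judges should look for, well-structured HTML5 might resemble the following (element names and classes are illustrative):

```html
<!-- Semantic structure, consistent indentation, meaningful names
     (hypothetical example) -->
<header class="site-header">
  <h1>Wellness Hub</h1>
  <nav class="main-nav" aria-label="Main navigation">
    <ul>
      <li><a href="index.html">Home</a></li>
      <li><a href="services.html">Services</a></li>
    </ul>
  </nav>
</header>
<main>
  <section class="featured-tip">
    <h2>Tip of the Day</h2>
    <p>Take a five-minute stretch break every hour.</p>
  </section>
</main>
```

Note how the semantic elements and the aria-label also support the accessibility considerations evaluated earlier.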
Criterion 6: AI Code Integration and Documentation
Evaluation Points:
- Documentation of AI Use: Clear documentation of where AI-generated code is used, with in-code comments highlighting these sections.
- Critical Assessment: AI-generated code should be critically assessed for quality and relevance, and necessary adjustments should be made.
Examples:
- High Quality: Seamlessly integrated AI-generated code with detailed comments and critical assessment.
- Low Quality: Poorly integrated AI-generated code without documentation.
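One lightweight way to satisfy both points is an in-code comment that records the tool used and the adjustments made during review; a hypothetical example:

```html
<!-- AI-generated with GitHub Copilot; reviewed by the team.
     Adjustments: replaced generic <div> wrappers with <figure> and
     <figcaption>, and added descriptive alt text. -->
<figure class="service-card">
  <img src="images/yoga.jpg" alt="Instructor leading a group yoga class">
  <figcaption>Weekly yoga sessions</figcaption>
</figure>
<!-- End of AI-generated section -->
```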
Criterion 7: Functionality and Implementation
Evaluation Points:
- Meet Acceptance Criteria: The implementation should satisfy the acceptance criteria of every user story.
- Original Implementation: The solution should be unique to the project scenario.
Examples:
- High Quality: Fully functional application meeting all acceptance criteria.
- Low Quality: Incomplete or non-functional application.
Criterion 8: Final Submission
Evaluation Points:
- Complete Submission: The final project submission should include a fully functional web application, source code repository, and deployment link.
- Proper Attribution: Any external resources or libraries used should be properly attributed.
Examples:
- High Quality: Fully functional and well-documented web application with proper attribution.
- Low Quality: Incomplete submission with missing documentation.
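Attribution can be as simple as a credits comment in the source or a section in the README; a hypothetical example:

```html
<!-- Third-party resources (hypothetical list):
     Icons: Feather Icons, MIT License, https://feathericons.com
     Font: Inter by Rasmus Andersson, SIL Open Font License 1.1 -->
```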
Criterion 9: Retrospective Report
Evaluation Points:
- Honest Reflection: The retrospective report should honestly reflect the development process, challenges faced, and lessons learned.
Examples:
- High Quality: Detailed and honest reflection with proposed improvements for future projects.
- Low Quality: Superficial report without meaningful reflection.
Judging Process:
- Initial Review: Judges will review the project submissions, including code, documentation, and the retrospective report.
- Functional Testing: Judges will test the functionality of the web application to ensure it meets the user stories' acceptance criteria.
- AI Tool Usage Assessment: Judges will evaluate how effectively AI tools were used and integrated into the project.
- Final Presentation: Teams will present their projects to the judges, highlighting key aspects and answering questions.
Judging Principles:
- Consistency: Apply the same criteria to all projects.
- Transparency: Provide clear feedback on each evaluation point.
- Impartiality: Avoid any bias or favoritism in judging.
Feedback Guidelines:
- Constructive: Provide constructive feedback that can help teams improve.
- Specific: Be specific about what was done well and what could be improved.
- Encouraging: Encourage teams to reflect on their learning experience and propose future improvements.
By following this Judging and Evaluation Guide, judges can ensure a fair, comprehensive, and transparent evaluation process that recognizes the efforts and achievements of all participating teams.