# Module Rubric
This rubric is used to assess each module at graded checkpoints (after modules 1, 3, 5, and 7).
## Criteria
### 1. Functionality (8 points)
Does the module work as specified?
| Points | Description |
|---|---|
| 8 | All features work correctly. Handles edge cases gracefully. No crashes or unexpected behavior. |
| 6 | Core features work correctly. Minor edge cases may be unhandled. Rare minor bugs. |
| 4 | Main functionality works but with notable bugs or missing features. |
| 2 | Partially functional. Major features broken or incomplete. |
| 0 | Non-functional or not submitted. |
### 2. Code Elegance and Quality (8 points)
Is the code clean, readable, and well-structured? See the Code Elegance Rubric for detailed criteria.
| Points | Description |
|---|---|
| 8 | Exemplary code quality. Clear structure, excellent naming, appropriate abstraction. |
| 6 | Good code quality. Readable and organized with minor issues. |
| 4 | Acceptable code quality. Functional but messy, inconsistent, or poorly organized. |
| 2 | Poor code quality. Difficult to read, understand, or maintain. |
| 0 | Unacceptable. Incomprehensible or no meaningful code submitted. |
### 3. Testing (8 points)
Are unit tests and integration tests present, meaningful, and passing?
| Points | Description |
|---|---|
| 8 | Comprehensive test coverage. Tests are well-designed, test meaningful behavior, and all pass. Edge cases covered. |
| 6 | Good test coverage. Most important functionality tested. Tests pass. Minor gaps in coverage. |
| 4 | Basic test coverage. Some tests present but incomplete or superficial. Tests may not cover key functionality. |
| 2 | Minimal testing. Few tests, poorly written, or significant failures. |
| 0 | No tests or tests completely non-functional. |
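To illustrate the "tests meaningful behavior" standard above, a minimal sketch follows; the `slugify` function and its tests are invented for this example and are not part of any assigned module.

```python
# Hypothetical example of meaningful unit tests: they check observable
# behavior, including edge cases, rather than implementation details.
# The slugify function exists only to give the tests something to exercise.

def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    words = "".join(c if c.isalnum() else " " for c in title).split()
    return "-".join(word.lower() for word in words)

def test_slugify_basic():
    assert slugify("Module Rubric") == "module-rubric"

def test_slugify_edge_cases():
    assert slugify("") == ""                       # empty input
    assert slugify("  A  B  ") == "a-b"            # surplus whitespace
    assert slugify("C++ & Python!") == "c-python"  # punctuation stripped

test_slugify_basic()
test_slugify_edge_cases()
```

Tests like these would survive a rewrite of `slugify`'s internals, which is what distinguishes "meaningful" coverage from superficial coverage.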
### 4. Individual Participation (6 points)
Do commit histories show meaningful contribution from all team members?
| Points | Description |
|---|---|
| 6 | All team members show substantial, balanced contributions. Commits reflect genuine work, not artificial splitting. |
| 4 | All team members contributed. Some imbalance but all participated meaningfully. |
| 2 | Participation imbalance is notable. One member dominates or one member's contributions are minimal. |
| 1 | Severe imbalance. One or more members show little evidence of contribution. |
| 0 | No evidence of team collaboration. Single contributor or no commits. |
### 5. Documentation (5 points)
Is the code documented according to standard Python practices?
| Points | Description |
|---|---|
| 5 | Excellent documentation. All public functions have docstrings with parameter and return descriptions. Type hints used consistently. Complex logic has inline comments. README explains module usage. |
| 4 | Good documentation. Most functions documented. Type hints present. Minor gaps. |
| 3 | Basic documentation. Some docstrings present but inconsistent or incomplete. |
| 1 | Minimal documentation. Few or no docstrings. Code is difficult to understand without reading the implementation. |
| 0 | No documentation. |
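As a sketch of what the 5-point standard looks like in practice (the function below is invented for illustration), a fully documented public function might read:

```python
# Hypothetical example of documentation meeting the top rubric band:
# a docstring with parameter and return descriptions, consistent type
# hints, and an inline comment on the non-obvious step.

def normalize_scores(scores: list[float], max_points: int) -> list[float]:
    """Scale raw scores to percentages of the maximum.

    Args:
        scores: Raw scores, each between 0 and max_points.
        max_points: The maximum possible score; must be positive.

    Returns:
        The scores expressed as percentages in the range [0, 100].

    Raises:
        ValueError: If max_points is not positive.
    """
    if max_points <= 0:
        raise ValueError("max_points must be positive")
    # Convert each raw score to a percentage of the maximum.
    return [100 * s / max_points for s in scores]
```

This style follows common Python conventions (PEP 257 docstrings, PEP 484 type hints); any documented house style that covers the same information would satisfy the criterion equally well.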
### 6. I/O Clarity (5 points)
Are inputs and outputs clearly defined and easily assessable?
| Points | Description |
|---|---|
| 5 | Inputs and outputs are unambiguous. Correctness is easy to verify. For ML modules, metrics are well-reported and interpretable. |
| 4 | Inputs and outputs are clear with minor ambiguity. Assessment is straightforward. |
| 3 | Inputs and outputs defined but require effort to interpret or assess. |
| 1 | Inputs and outputs unclear. Difficult to determine what the module does or whether it works. |
| 0 | No clear I/O specification. Cannot assess functionality. |
### 7. Topic Engagement (6 points)
Does the module genuinely engage with the AI concept(s) it claims to cover?
| Points | Description |
|---|---|
| 6 | Deep engagement with the topic. Demonstrates clear understanding. Implementation reflects core concepts accurately and meaningfully. |
| 4 | Solid engagement. Topic is addressed appropriately with minor superficiality. |
| 2 | Surface-level engagement. Topic is referenced but implementation does not demonstrate deep understanding. |
| 1 | Weak engagement. Topic is named but barely addressed in implementation. |
| 0 | No meaningful engagement with the stated topic. |
### 8. GitHub Practices (4 points)
Does the repository demonstrate professional development practices?
| Points | Description |
|---|---|
| 4 | Excellent practices. Meaningful commit messages, appropriate use of pull requests, issues tracked, merge conflicts resolved thoughtfully. |
| 3 | Good practices. Most commits have meaningful messages. PRs used. Minor lapses. |
| 2 | Basic practices. Commits present but messages often vague. PRs or issues underutilized. |
| 1 | Poor practices. Commit messages uninformative. No PRs or issues. Repository is disorganized. |
| 0 | No meaningful use of GitHub practices. |
## Scoring
Total points possible: 50