
Understanding Your AI Feedback

Learn how to interpret the AI feedback scores and use them to improve your legal analysis skills.

After submitting an IRAC analysis, you'll receive detailed AI-generated feedback. This guide explains how to interpret and use that feedback effectively.

Overall Grade

Your submission receives an overall letter grade:

| Grade | Meaning              | Typical Exam Equivalent |
|-------|----------------------|-------------------------|
| A+    | Exceptional analysis | 85-100% (Distinction)   |
| A     | Strong analysis      | 75-84% (Distinction)    |
| B+    | Good analysis        | 70-74% (High Merit)     |
| B     | Competent analysis   | 65-69% (Merit)          |
| C+    | Adequate analysis    | 60-64% (Pass)           |
| C     | Weak but passing     | 50-59% (Pass)           |
| D     | Needs improvement    | 40-49% (Fail)           |
| F     | Significant gaps     | Below 40% (Fail)        |
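
If it helps to see the bands as a rule, here is a small illustrative sketch of the percentage-to-grade mapping in the table above (the function name and code are our own; band boundaries are taken from the table):

```python
def letter_grade(percent: float) -> str:
    """Map an exam percentage to the letter-grade bands above (illustrative)."""
    bands = [
        (85, "A+"), (75, "A"), (70, "B+"), (65, "B"),
        (60, "C+"), (50, "C"), (40, "D"),
    ]
    for floor, grade in bands:
        if percent >= floor:
            return grade
    return "F"  # below 40%

print(letter_grade(78))  # A
```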

What grade should you aim for?

  • Beginners: C+ or above shows you're on the right track
  • Intermediate: B+ consistently means you're ready for exams
  • Advanced: A grades indicate distinction-level analysis

Component Scores

Each IRAC component is scored out of 100:

Issue Identification (0-100)
Measures how well you framed the legal question. High scores mean:

  • Clear, precise legal issue
  • Relevant to the facts presented
  • Properly scoped (not too broad or narrow)

Rule Formulation (0-100)
Evaluates your statement of applicable law. High scores mean:

  • Correct legal principle identified
  • Accurate case citations
  • Clear articulation of legal tests or elements

Application Depth (0-100)
Assesses how well you connected law to facts. High scores mean:

  • Detailed fact-to-law mapping
  • Addresses all elements of the legal test
  • Considers counter-arguments

Conclusion Clarity (0-100)
Judges your final reasoning. High scores mean:

  • Directly answers the issue
  • Logically follows from your application
  • Stated with appropriate confidence

Feedback Categories

The AI provides feedback in several categories:

Strengths
What you did well. These are skills to maintain and apply consistently.

Areas for Improvement
Specific weaknesses in your analysis. Focus your next practice session here.

Suggestions
Concrete actions to improve. These might include:

  • "Add citations to support your rule statement"
  • "Address the counter-argument that..."
  • "Elaborate on how [fact] satisfies [element]"

Missing Elements
If the AI detects you skipped part of the analysis, it will flag this explicitly.

How to Use Feedback Effectively

1. Read feedback immediately
Review it while the analysis is fresh in your mind. Note patterns across submissions.

2. Compare to model answer
Click View Model Answer to see a distinction-level response. Notice:

  • How issues are framed
  • Depth of rule explanation
  • Fact-to-law connections in application
  • Structure and flow

3. Identify your recurring weaknesses
Track component scores over time (see Analytics > Progress). If Application is consistently low, focus practice there.
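Finding your weakest component amounts to averaging each component's scores across submissions and taking the lowest, as in this hypothetical sketch (the scores and field names are illustrative, not real platform data):

```python
from statistics import mean

# Hypothetical component scores from three past submissions; keys mirror
# the four IRAC components described above.
submissions = [
    {"issue": 72, "rule": 68, "application": 55, "conclusion": 70},
    {"issue": 75, "rule": 71, "application": 58, "conclusion": 74},
    {"issue": 78, "rule": 70, "application": 52, "conclusion": 73},
]

# Average each component, then pick the lowest average as the focus area.
averages = {c: mean(s[c] for s in submissions) for c in submissions[0]}
weakest = min(averages, key=averages.get)
print(weakest)  # application
```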

4. Act on specific suggestions
Don't just read feedback—implement it. If the AI says "cite authority for this proposition," look up relevant cases and add citations next time.

5. Practice deliberately
Choose your next question based on weak areas. If Constitutional Law application is weak, do 3 more Constitutional Law problems focusing on application depth.

Common Feedback Messages

"Issue is too broad"
You framed multiple questions or a general topic rather than the specific legal dispute. Narrow your focus.

"Missing authority for rule"
You stated a legal principle but didn't cite the case or statute. Always back rules with authority.

"Application lacks depth"
You mentioned facts but didn't explain how they satisfy legal elements. Add more "because..." reasoning.

"Conclusion doesn't follow"
Your conclusion contradicts your application, or you didn't tie it back to the issue. Ensure logical flow.

"Counter-argument not addressed"
Strong analyses anticipate opposing views and explain why your position should prevail despite them.

Feedback Accuracy

The AI achieves 92% agreement with human legal educators. However:

  • Novel legal questions may receive less reliable feedback
  • Very short submissions (under 200 words) are harder to evaluate accurately
  • Complex multi-issue questions may have partial blind spots

If feedback seems off, ask a lecturer or use the Report Feedback button to flag it for human review.

Next Steps

Feedback is a tool for growth—use it actively, not passively!