Use Case 1: Competency-Based Application Review

Challenge

Selection committees face mounting pressure to thoroughly evaluate professional competencies, especially nontechnical ones such as empathy and ethical responsibility, across thousands of applications. Manual review of personal statements, experiences, and letters to glean this type of information becomes increasingly difficult to accomplish in a standardized and fair manner as applicant volume grows.

Solution

An AI system trained on expert evaluations provides consistent, scalable competency assessment by:

  • Predicting competency ratings based on application materials.
  • Highlighting relevant evidence for reviewer validation.
  • Maintaining standardized evaluation criteria across all applications.

How it Works

  • Define key competencies. Engage stakeholders (faculty, program directors, trainees) to identify critical competencies that align with institutional goals.
  • Develop evaluation rubrics. Analyze past applications to clarify what “strong” versus “weak” competence looks like in real-world examples.
  • Create example library. Seasoned reviewers independently score sample applications using the new competency rubrics, then meet to discuss any scoring differences. Their consensus ratings form a library of real-world examples — clear benchmarks of excellent, average, and weak performance for each competency. This library helps ensure the AI’s assessments align with your institution’s standards (the first sketch after this list shows a simple agreement check for this step).
  • Model development. Feed the library of curated examples and rubrics into the AI system, teaching it to replicate expert judgments.
  • Calibration. Compare AI predictions to expert ratings, adjusting parameters until the model aligns most consistently with reviewer consensus (the second sketch after this list walks through model development and calibration together).
  • Competency predictions. The AI automatically scores new applications (e.g., Empathy: 4/5) and highlights text passages that justify its rating.
  • Reviewer validation. Selection committees quickly confirm or adjust the AI’s findings, rather than hunting for evidence entirely from scratch.
  • Streamlined workflow. Because pre-scored competencies and key excerpts are presented up front, reviewers can focus on higher-level judgments.
  • Data-driven discussions. Committee members discuss the AI’s highlighted evidence, clarifying strengths or weaknesses.
  • Example output. Application analysis for a sample applicant, Riley Jordan:
    • Empathy: 4/5 — Strong reflections on two years working with a refugee program.
    • Ethical Responsibility: 5/5 — Led an ethics committee, demonstrated respect for confidentiality.
    • Teamwork: 1/5 — Personal “hero story” overshadowed team contributions, suggesting limited collaborative mindset.
  • Evidence highlights. The AI highlights relevant essay sections and activity descriptions for each score, letting reviewers see exactly why each rating was assigned (the third sketch after this list shows one simple way to surface such passages).
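
To ground the example-library step, the first sketch below shows a minimal agreement check in Python, using scikit-learn’s quadratic-weighted kappa. The two raters, their scores, and the one-point disagreement threshold are all hypothetical choices for illustration, not a prescribed protocol.

    # Sketch: building a consensus-rated example library from independent expert
    # scores. All raters and ratings below are hypothetical.
    from statistics import median
    from sklearn.metrics import cohen_kappa_score

    # Each application's "Empathy" score (1-5) from two independent expert raters.
    rater_a = [4, 2, 5, 3, 4, 1, 5, 3]
    rater_b = [4, 4, 5, 3, 5, 2, 4, 3]

    # Quadratic-weighted kappa penalizes large disagreements more than near
    # misses, which suits ordinal 1-5 rubric scores.
    kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
    print(f"Inter-rater agreement (quadratic kappa): {kappa:.2f}")

    # Items where raters disagree by more than one point go to a consensus
    # meeting; the rest take the median of the two scores as the library label.
    library_labels = []
    for i, (a, b) in enumerate(zip(rater_a, rater_b)):
        if abs(a - b) > 1:
            print(f"Application {i}: scores {a} vs {b}, flag for consensus discussion")
        else:
            library_labels.append((i, round(median([a, b]))))
    print("Consensus labels:", library_labels)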
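
The second sketch covers model development and calibration together. It trains a deliberately simple TF-IDF plus logistic-regression baseline on hypothetical library texts, then sweeps a regularization setting and keeps whichever value agrees most consistently with the expert consensus under cross-validation. A production system would more likely fine-tune a language model and calibrate on held-out applications from a full cycle; everything below is illustrative.

    # Sketch: model development and calibration against expert consensus ratings.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import cohen_kappa_score, make_scorer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Hypothetical example-library excerpts and their consensus Empathy ratings.
    texts = [
        "Spent two years with a refugee health program, reflecting on patients' fears.",
        "Sat with a grieving family after a failed resuscitation while shadowing.",
        "Tutored underclassmen weekly and adapted lessons to each student's needs.",
        "Checked in with teammates after setbacks and adjusted plans to support them.",
        "Completed research on protein folding and presented results at a conference.",
        "Worked as a scribe and learned the electronic health record system.",
        "Listed hospital volunteering among my extracurricular activities.",
        "Shadowed a surgeon for one week during the summer.",
    ]
    empathy = [5, 5, 4, 4, 2, 2, 1, 1]

    # Quadratic-weighted kappa rewards close agreement on an ordinal 1-5 scale.
    kappa = make_scorer(cohen_kappa_score, weights="quadratic")

    # Calibration: sweep a regularization setting and keep whichever value agrees
    # most consistently with the expert consensus under 2-fold cross-validation.
    best = None
    for c in (0.1, 1.0, 10.0):
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(C=c, max_iter=1000))
        score = cross_val_score(model, texts, empathy, scoring=kappa, cv=2).mean()
        print(f"C={c}: mean weighted kappa = {score:.2f}")
        if best is None or score > best[1]:
            best = (c, score)
    print(f"Selected C={best[0]} (kappa {best[1]:.2f})")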
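
The third sketch illustrates evidence highlighting with a naive heuristic: each sentence of an essay is scored on its own, and the sentence most indicative of the predicted rating is surfaced for the reviewer. Real systems would use proper attribution methods such as SHAP or integrated gradients; the training texts, scores, and essay here are all hypothetical.

    # Sketch: surfacing the passage that most supports a predicted competency score.
    import re

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hypothetical training set: texts with consensus Empathy ratings.
    train_texts = [
        "Sat with refugee patients and learned to hear the fear behind their words.",
        "Adapted my tutoring to each student's struggles and followed up afterward.",
        "Maintained the lab's citation database and ordered supplies.",
        "Listed hospital volunteering among my activities.",
    ]
    train_scores = [5, 4, 2, 1]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_scores)

    essay = (
        "I coordinated lab schedules for our research team. "
        "For two years I sat with refugee patients, learning to hear the fear "
        "behind their words. "
        "I also maintained our group's citation database."
    )
    predicted = model.predict([essay])[0]
    print(f"Predicted Empathy score: {predicted}/5")

    # Naive evidence ranking: score each sentence alone and surface the one most
    # indicative of the predicted rating.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", essay) if s.strip()]
    class_idx = list(model.classes_).index(predicted)
    best = max(sentences, key=lambda s: model.predict_proba([s])[0][class_idx])
    print("Highlighted evidence:", best)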

Key Takeaways

Core Benefits

  • Standardized evaluation. Consistent competency scoring across all applications.
  • Evidence-based. Direct quotes support each rating.
  • Expert knowledge at scale. Replicates expert judgment through trained models.
  • Workflow efficiency. Pre-scored applications with highlighted evidence.
  • Increased transparency. Communicates competency criteria to build applicant trust and understanding.

Resource Requirements

  • Technical: AI model infrastructure, secure data handling.
  • Personnel: Domain experts for rubric development, technical team for implementation.
  • Effort: High initial investment in rubric development and model training.

Challenges, Solutions, and Information Triangulation

Table 1 provides a non-exhaustive list of key challenges and potential solutions when implementing competency-based application review.

Table 1. Competency-Based Application Review: Challenges, Solutions, and Triangulation.
  • Example Library
    • Challenge: Costly expert labeling; sensitive data.
    • Solutions: Unified, secure platform with clear rubrics; thorough rater training.
    • Information triangulation: Compare rubrics (i.e., competency indicators) across application documents.
  • Expert Consensus
    • Challenge: Conflicting ratings between experts.
    • Solutions: Consensus-based rating with documented reasoning; quality monitoring.
    • Information triangulation: Compare ratings to concrete examples using rubrics or standard guides.
  • Model Accuracy
    • Challenge: Slow updates causing model drift.
    • Solutions: Regular model updates aligned with admission cycles; monitor performance metrics (see the drift-check sketch after this table).
    • Information triangulation: Compare model drift across application documents.
  • AI Recommendations
    • Challenge: Unclear if human-AI disagreement reflects insight or model error.
    • Solutions: Document reasoning for disagreements; use edge cases in future training.
    • Information triangulation: Compare edge cases and reasoning across application documents.
  • Workflow Integration
    • Challenge: Disruption to existing processes.
    • Solutions: Unified secure platform; side-by-side review.
    • Information triangulation: Enable simultaneous document review for each applicant.
  • Resource Costs
    • Challenge: High expert and technical staff costs.
    • Solutions: AI-assisted review tools; streamlined monitoring; open-source options.

Note: We use “expert review” to refer to processes called labeling, rating, or annotation.
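
To illustrate the drift monitoring mentioned in the Model Accuracy row, the sketch below compares the distribution of predicted Empathy scores across two admission cycles using a population stability index (PSI). The cycle data is invented, and the 0.2 alert threshold is a common rule of thumb rather than a prescribed policy.

    # Sketch: a simple between-cycle drift check on predicted competency scores.
    from collections import Counter
    from math import log

    def score_distribution(scores, classes=(1, 2, 3, 4, 5)):
        counts = Counter(scores)
        n = len(scores)
        # A small floor avoids division by zero for score values never predicted.
        return {c: max(counts.get(c, 0) / n, 1e-4) for c in classes}

    def psi(expected, actual):
        # Population stability index; values above ~0.2 are commonly read as
        # meaningful drift.
        return sum((actual[c] - expected[c]) * log(actual[c] / expected[c])
                   for c in expected)

    # Hypothetical model predictions from the prior and current cycles.
    last_cycle = [3, 4, 4, 3, 5, 2, 4, 3, 3, 4] * 10
    this_cycle = [4, 5, 4, 4, 5, 3, 5, 4, 4, 5] * 10

    drift = psi(score_distribution(last_cycle), score_distribution(this_cycle))
    print(f"PSI for Empathy predictions: {drift:.3f}")
    if drift > 0.2:
        print("Distribution shift detected: retrain or recalibrate before scoring.")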

Best Suited For

  • Large programs needing consistent competency evaluation.
  • Institutions with established evaluation criteria.
  • Programs with access to technical expertise.
  • Teams willing to invest in initial setup (e.g., annotation platform).

Bottom Line

Competency-based application review delivers consistent, expert-level competency scoring across large applicant pools. The system replicates expert judgment through trained models, allowing standardized evaluation that maintains quality at scale. This approach requires significant upfront investment to develop comprehensive rubrics and example libraries. It is particularly well-suited for large programs that have already established clear competency frameworks and can access the necessary technical resources for implementation.