My Evaluator Competencies: Strengths, Growth Areas, and a Path Forward
LDT 506 | March 24, 2025
Introduction
Evaluation, when done thoughtfully, goes beyond measurement and supports growth, equity, and accountability. As I reflect on the evaluator competencies defined by Stevahn, King, Ghere, and Minnema (2005), along with those outlined by the American Evaluation Association (AEA, 2018) and the International Board of Standards for Training, Performance, and Instruction (IBSTPI, 2012), I recognize that evaluation is a task grounded in ethics, values, and technical precision. It requires accurate data analysis as well as the ability to navigate interpersonal dynamics, which makes meaningful communication and collaboration possible. My self-assessment in this class offers a snapshot of where I currently stand as an evaluator and clarifies the areas I need to develop further.
Where I Stand as an Evaluator
Based on my self-assessment results, I consider myself at an intermediate level, approximately a 3 on a 6-point scale. While I bring experience in instructional design and stakeholder engagement from both military and civilian contexts, I recognize gaps in formal evaluation methodology. For example, I scored a 4/6 in "Reflective Practice," which reflects my use of After-Action Reviews (AARs) and my commitment to improving training strategies based on learner and stakeholder feedback. However, I scored in the 1–2 range for competencies such as stakeholder engagement and integrating social justice considerations. This deficiency was surprising, particularly given the emphasis Stevahn et al. (2005) place on inclusive practice, interpersonal competence, and culturally responsive engagement as essential to ethical and effective evaluation.
Strengths Grounded in Experience
My strengths are rooted in years of military leadership and instructional training experience. Within the "Professional Practice" domain, I scored well in demonstrating ethical behavior and respect for cultural diversity. Serving in multinational environments reinforced the importance of honoring diverse perspectives and promoting culturally responsive design, aligning with Stevahn et al.’s (2005) emphasis on evaluator integrity and inclusivity. I also rated myself highly in advocating for the value of evaluation. As a leader of a training development team, I frequently support the need for data-driven decision-making and emphasize the importance of continuous improvement in learning programs (AEA, 2018).
I also demonstrated moderate strength in using systematic evidence to support evaluative judgments. This aligns with my background in data analysis from my MBA coursework and past roles, where I regularly compiled and reviewed metrics related to course completion rates and learner performance. However, my self-assessment revealed that applying this evidence within a formalized evaluation framework remains a key developmental need. While I can analyze and interpret data effectively, connecting that analysis to specific standards and competencies, such as those outlined by IBSTPI (2012), is an area I plan to strengthen.
Areas for Growth and Why They Matter
One of the most significant areas for growth I identified is my limited engagement with foundational evaluation documents. I scored a 1/6 in this domain, which highlighted the need to become more familiar with guiding principles and their application in real-world evaluation contexts. It is not enough to know these documents exist; I must be able to understand and apply foundational principles such as utility, feasibility, propriety, and accuracy within my evaluation practice (Stevahn et al., 2005).
Even though I have studied statistics and research methods in school, I realize that evaluation is not the same discipline. It is more specialized, and I need to learn how to choose the right evaluation approach for each situation, especially since I work in settings focused on training, learning, and organizational performance. Once I strengthen these skills, I will be able to select the tools, models, and types of data best suited to the specific needs of each evaluation project.
The self-assessment also made me realize that I don’t have much hands-on experience involving stakeholders in my evaluation work. Even though I know it’s important, I haven’t consistently included different perspectives when planning or reviewing evaluations. Russ-Eft and Preskill (2009) point out that involving stakeholders helps make evaluations more useful and trustworthy, especially when the goal is to improve learning and performance within an organization.
Real-World Application
In my current role, some training materials are reviewed informally with verbal feedback, though I now recognize that using a formalized template would provide greater structure and consistency. For example, I recently reviewed an audio narration project and provided informal feedback on clarity, tone, pacing, and pronunciation. While this was a formative evaluation and directly applicable to improving the course production, it lacked a structured framework, detailed evaluation criteria, and stakeholder involvement. Viewed through the lens of the IBSTPI competencies (2012), I see that applying formal standards would have improved the process by ensuring consistency, objectivity, and overall effectiveness.
Another example involves our team's transition away from Adobe Captivate, which has complicated our ability to modify SCORM-based training content authored in Captivate. I led a redesign project in which I manually modified SCORM packages and demonstrated procedures for editing their JavaScript to update the external URLs on which the training depends. Although this demonstrated project management and technical problem-solving skills, I did not involve end users or gather stakeholder feedback during development. Time constraints contributed to this, but I have since taken additional steps to communicate risks and mitigation strategies clearly and in plain language to stakeholders. This effort supports the ethical evaluation practices described by Stevahn et al. (2005).
My Plan for Growth
To build on my strengths and close identified gaps, I plan to take the following steps:
1. Revisit Foundational Documents. I will review the AEA’s Guiding Principles and IBSTPI Evaluator Competencies to better integrate them into evaluation planning and decision-making (AEA, 2018; IBSTPI, 2012).
2. Deepen My Knowledge of Evaluation Models. I will revisit content in Evaluation in Organizations to reinforce my understanding of evaluation types, logic, and models suited to organizational settings (Russ-Eft & Preskill, 2009, Ch. 3).
3. Improve My Methodological Rigor. I will enhance a mixed-methods evaluation plan for an upcoming project, using both quantitative and qualitative data collection, with clear alignment to evaluation goals (Stevahn et al., 2005).
4. Engage Stakeholders Intentionally. I plan to map stakeholder roles and collect structured feedback from learners and SMEs in our next vendor review cycle to promote ownership and relevance (Russ-Eft & Preskill, 2009).
5. Promote Ethical and Stakeholder-Responsive Practice. I will examine how my evaluation methods can better reflect the perspectives and needs of key stakeholder groups, ensuring that findings are fair, relevant, and transparently communicated (Stevahn et al., 2005).
Conclusion
This reflection helped me identify both the strengths I bring as an evaluator and the areas I need to develop further. I have come to understand that evaluation is not simply measuring impact; it is a deliberate, values-driven process that requires technical, ethical, and interpersonal competency. By broadening my understanding of foundational documents, engaging stakeholders more intentionally, and integrating formal frameworks into my practice, I can become a more effective and equitable evaluator.
References
American Evaluation Association. (2018). Evaluator competencies.
International Board of Standards for Training, Performance, and Instruction. (2012). Evaluator's competency set.
Russ-Eft, D., & Preskill, H. (2009). Evaluation in organizations: A systematic approach to enhancing learning, performance, and change (2nd ed.). Perseus Books.
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59.