LDT 506 M7: Self-Assessment and Reflection


LDT 506 | May 4, 2025


Introduction

Evaluation is more than a technical task; it is an applied discipline that ties together inquiry, ethics, and collaboration to inform and improve practice. As emphasized by Russ-Eft and Preskill (2009), the most impactful evaluations don’t just measure outcomes; they help organizations learn, adapt, and thrive. Throughout the past six weeks in LDT 506, I’ve gained a deeper understanding of evaluation as a thoughtful and human-centered process. It is grounded in asking meaningful questions and guided by established principles that ensure the evaluation is ethical, relevant, and purposeful. Through the lens of Stevahn, King, Ghere, and Minnema’s (2005) taxonomy, the AEA Evaluator Competencies (2018), and IBSTPI’s (2012) standards, I’ve been able to reflect meaningfully on where I currently stand and where I need to grow as an evaluator working at the intersection of learning design, federal training systems, and vendor oversight.


Where I Stand as an Evaluator

Based on my self-assessment, I would rate myself a 4 out of 6 in overall evaluator competency. I have developed good foundations in areas such as Project Management, Professional Practice, and Systematic Inquiry, due in part to my leadership role in overseeing federal training projects. However, I rated myself lower in areas like Stakeholder Engagement and Situational Analysis, where I still tend to focus more on compliance and workflow efficiency than on collaborative, inclusive evaluation practices.


My previous experience designing performance support tools and evaluating LMS analytics gave me confidence with data interpretation. But LDT 506 reminded me that evaluation isn’t just about what the data says; it’s about how and why the data is gathered, and how findings are interpreted within a broader organizational or social context (Russ-Eft & Preskill, 2009).


Strengths Grounded in Experience

In the "Professional Practice" domain, I bring strong competencies related to ethics, cultural awareness, and the importance of evaluation as a learning tool. My 24-year military background and current federal role have required me to work across diverse teams and adhere to formal ethical frameworks, which aligns naturally with Stevahn et al.’s (2005) emphasis on professional integrity and responsiveness.


Another strength lies in my ability to conduct systematic inquiry. I have often led after-action reviews, created training dashboards, and reviewed contractor performance using measurable criteria. These align with Russ-Eft and Preskill’s (2009) definition of evaluation as an iterative, data-driven process. Additionally, my ongoing education in instructional design has exposed me to evidence-based approaches that are transferable to evaluation. But I now realize that aligning my analysis with formal evaluation frameworks such as IBSTPI’s (2012) requires methodological discipline, rather than relying solely on intuition.


Areas for Growth and Why They Matter

The course helped me recognize that I am still developing the ability to reflect critically on my evaluation work and to collaborate with others involved in the process. I’ve often defaulted to technical precision and timelines, an approach Russ-Eft and Preskill (2009) might describe as an “instrumental orientation,” without fully embracing the participatory nature of meaningful evaluation. For example, I scored myself low in engaging stakeholders early in the evaluation process. Even though I consult with SMEs often, I rarely co-develop evaluation tools or actively solicit their interpretation of findings. This matters because involving stakeholders is not just good practice; it increases utility, trust, and shared ownership (Russ-Eft & Preskill, 2009).


A related gap emerged in my ability to critically analyze qualitative data. Before this course, I tended to treat qualitative inputs as informed opinions with limited weight rather than as equally valuable sources of data. However, working with Braun and Clarke’s (2006) six-phase approach to thematic analysis helped me see the depth and meaning that can emerge from open-ended feedback. In one assignment, coding learner interview transcripts showed me how patterns like peer support or practical relevance surfaced across responses. These would have been invisible using only survey data.
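To make the coding step concrete, here is a toy sketch of what the tallying looks like after manually coding transcript segments (Braun and Clarke's early phases). The respondents, codes, and counts below are invented for illustration, not drawn from the actual assignment data:

```javascript
// Hypothetical coded segments from manual thematic coding of interview
// transcripts. Each entry records which respondent a code was applied to.
const codedSegments = [
  { respondent: "A", code: "peer support" },
  { respondent: "B", code: "practical relevance" },
  { respondent: "B", code: "peer support" },
  { respondent: "C", code: "peer support" },
];

// Count how many distinct respondents each candidate theme appears for,
// which shows whether a pattern recurs across people or is a one-off.
function codeSpread(segments) {
  const byCode = {};
  for (const { respondent, code } of segments) {
    (byCode[code] ??= new Set()).add(respondent);
  }
  return Object.fromEntries(
    Object.entries(byCode).map(([code, who]) => [code, who.size])
  );
}

console.log(codeSpread(codedSegments));
// { 'peer support': 3, 'practical relevance': 1 }
```

A spread like this is only a supplement to interpretation, but it makes cross-respondent patterns visible in a way a single transcript read-through does not.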


Real-World Application

Two recent projects exemplify how I’ve begun applying these insights. The first involved a vendor-delivered training course where I evaluated narrator audio files for pacing, clarity, and pronunciation. In the past, I would have simply flagged errors and passed along corrections. More recently, I developed a rubric and asked for a second opinion and concurrence from my team, which is a step toward systematic and collaborative evaluation.
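As a small illustration of that rubric-plus-concurrence step, the sketch below scores an audio file against the three criteria named above and averages two reviewers' ratings. The 1–5 scale and the scores are invented placeholders, not the actual instrument:

```javascript
// Criteria taken from the narrator audio review described above;
// the 1 (unacceptable) to 5 (excellent) scale is an assumed example.
const RUBRIC = ["pacing", "clarity", "pronunciation"];

// Average each criterion across all reviewers' score sheets.
function scoreFile(reviews) {
  const totals = {};
  for (const criterion of RUBRIC) {
    const scores = reviews.map((r) => r[criterion]);
    totals[criterion] = scores.reduce((a, b) => a + b, 0) / scores.length;
  }
  return totals;
}

// Two score sheets: my own rating plus a teammate's concurrence check.
const result = scoreFile([
  { pacing: 4, clarity: 5, pronunciation: 3 },
  { pacing: 4, clarity: 4, pronunciation: 3 },
]);
console.log(result); // { pacing: 4, clarity: 4.5, pronunciation: 3 }
```

Even a simple structure like this turns "flag errors and pass along corrections" into a record that a second reviewer can independently score and discuss.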


The second case involved modifying some JavaScript within SCORM files to update external URLs ahead of our ancillary server's decommissioning. I developed an alternative workflow that resolved the technical challenge, but I initially overlooked the importance of including stakeholders in the decision-making process, mainly due to the pressure of a very tight timeline. In hindsight, involving stakeholders in the initial planning phase would have improved adoption and alignment. This experience has led me to start incorporating review checkpoints and formal stakeholder alignment plans into future updates, a practice I have recently begun to emphasize.


My Plan for Growth

To mature as an evaluator, I’ve developed a growth plan grounded in the competencies emphasized throughout LDT 506. First, I plan to periodically revisit foundational resources—specifically the AEA Guiding Principles and the IBSTPI Competency Set—and use them to create practical checklists that guide evaluation planning and reporting (AEA, 2018; IBSTPI, 2012). Drawing from Russ-Eft and Preskill’s (2009) emphasis on learning partnerships, I also aim to more fully engage stakeholders by co-developing evaluation criteria in upcoming instructional design projects. I plan to apply Braun and Clarke’s (2006) six-phase method of thematic analysis when possible, particularly in interventions intended to influence behavior or mindset. To improve consistency and objectivity, I will convert my informal evaluation checklists into formalized rubrics aligned with the evaluator competencies explored in this course. Lastly, I will strengthen my mixed-methods approach by combining quantitative data, such as LMS analytics and assessment results, with qualitative feedback gathered from open-ended survey responses. This will support more balanced and actionable evaluation outcomes.


LDT 506 has significantly advanced my mindset as an evaluator. I’ve moved beyond simply checking the box on performance metrics and now better understand that evaluation is a way to support learning, improvement, and inclusion. With a clearer alignment to the standards of practice outlined by Stevahn et al. (2005), the AEA (2018), and Russ-Eft and Preskill (2009), I feel well-prepared to refine my approach and grow into a more strategic and equitable evaluator.


References

American Evaluation Association. (2018). Evaluator competencies. https://www.eval.org/About/Guiding-Principles


Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. 


International Board of Standards for Training, Performance, and Instruction. (2012). Evaluator competency set. https://ibstpi.org/wp-content/uploads/2012/09/Evaluator_competency_2012.pdf


Russ-Eft, D. F., & Preskill, H. (2009). Evaluation in organizations: A systematic approach to enhancing learning, performance, and change (2nd ed.). Perseus Books.


Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59.
