Education Consultant
Transcript: "Education Consultant" Level B Assessment Job Application Presentation "I always use RIOT when determining who and when to test." Reliability refers to the consistency of measure. It is important to remember that there can be a "bad" side to assessment. These are; labeling, loss of individuality, inaccuracy, not enough cultural sensitivity and sometimes can be resource intensive....sometimes, Resource Teachers can spend too much time assessing and not enough time working with students. Validity By Caroline Murray Sand Verbenas grow in the dry, sandy prairies of South Texas. "Good" assessment is purposeful, systematic, multiple sources of information, a process and contextual. "Good" assessment helps identify those at risk, determines strength and needs, allows access for resources, provides tracking and monitoring and generally improves educational outcome. Review Interview Observe Test ...reliability and measurement error, validity, measurement scales on which tests are based, type of score interpretation (e.g. norm-referenced, content representativeness, criterion characteristics such as sufficiency and relevance) Follow up with parents and teachers!! (send an e-mail or telephone) Offer to be available for any questions or concerns parents, teachers or the student may have in regards to the report and programming Assist in programming for the student -take a collaborative approach Monitor student's progress (check to see if there needs to be some changes with the student's program) Be an active member of the student's educational journey Percentile Ranks allow us to determine a student's relative position within a normed group What are the test taker variables? e.g. age, gender, culture, race, disability... Reliability Does the test match the needs of the current examinee? Interpretation of Scores Major types of derived scores used in norm-referenced measurement are; standard scores percentile rank stanines age (AE) and grade (GE) equivalents (use AE and GE with caution if at all!) Validity refers to whether a test measures what it intends to measure. A test must be reliable to be considered valid. What to do once the assessment has been completed, reported and debriefed? How do I choose the right assessment? Factors affecting reliability; test length homogenity of items test-retest interval guessing variation in the test situation sample size improper administration judgemental scoring Here are some considerations when debriefing different audiences including students, parents, teachers, administrators, psychologists, professionals and paraprofessionals. Understanding the Considerations of Assessment -The "Good" and the "Bad" Sand Verbenas live in the dry, sandy prairies of South Texas. Ethics are a system of principles that guide behavior. The core principles of an ethical code are; respect for the dignity of others, professional competence, integrity, honesty and responsibility to others. What skills and considerations are required to effectively administer a variety of Level B assessment instruments? 
Communicating Results

Written Reports must:
- be professional, comprehensive, and practical
- include background information, behavioral observations, and any information from interviews with teachers, parents, etc.
- include a description of the assessment instruments, a brief description of the subtests, and the student's strengths and areas of need
- include recommendations that are useful and practical

Here are some considerations when debriefing different audiences, including students, parents, teachers, administrators, psychologists, professionals, and paraprofessionals.

When conducting debriefing conferences:
- invite all members of the student's educational team
- begin with a personal anecdote about the student
- take a collaborative approach to planning
- take the time to explain assessment results and answer any questions
- talk about both strengths and areas of need
- respect the parents as "experts" on their children
- remember: "I am there to provide support for this student's educational journey."

What to do once the assessment has been completed, reported and debriefed?

- Follow up with parents and teachers! (send an e-mail or telephone)
- Offer to be available for any questions or concerns parents, teachers, or the student may have regarding the report and programming.
- Assist in programming for the student; take a collaborative approach.
- Monitor the student's progress and check whether changes to the student's program are needed.
- Be an active member of the student's educational journey.