This week we will use objectives you developed in the first part of Assignment Three to begin writing assessment items. In instructional design, we write assessment items after objectives to ensure we are testing what we plan to deliver instructionally to learners.
We have all been subjected to testing in various formats since the beginning of grade school. The majority of these tests were probably criterion-referenced, that is, tests designed to determine how well we learned a particular subject. Most of us have also had experience with norm-referenced tests, such as the SAT or ACT, which are designed to rank or compare us with the other test-takers. While it is important that you understand the difference between the two, for the purposes of this course, we will focus on criterion-referenced assessment instruments.
Think back on a test you took in school where it seemed as though the material on the test was not covered in class or in the textbook. Sound familiar? Very often this occurs when the test items are not congruent with the learning objectives. Constructing assessment instruments immediately after you have written the learning objectives helps to ensure that congruency. In fact, if you are unable to design an appropriate assessment instrument, you may want to go back and revisit your learning objectives. As instructional designers, you should become comfortable moving forward and backward between the various phases of the design process, recognizing that it is an iterative, not a linear, process.
This week's activity involves generating assessment items for each of your performance objectives. These assessment items constitute the second part of Assignment 3.1.
One of the first decisions in this phase of the instructional design process is to choose which type of test, or combination of tests, you will create: entry behavior tests, pretests, practice tests, and/or posttests. See the table on pages 148-149 for a concise summary of each. In addition, you must choose between an objective test format, for verbal and intellectual skills objectives, and an instrument that evaluates intellectual skills that result in a product or performance.
Creating assessment instruments is cognitively challenging and much more difficult than it might appear. Here is an example of a procedure for constructing test items. Include this analysis as part of Assignment 3.1, as it will help your peer reviewer (as well as me) follow your rationale for constructing the test item. Then, when you have constructed the test item, include it in the appropriate location in your version of the table that lists your objectives.
Example
Objective 4.0 - Given a research topic and a list of ten Google search results, select the three sites most appropriate to the research topic.
- What will they need to do? The learners should be able to select web sites from a list of search results.
- What conditions will need to be provided? The learners will need to be given a predetermined research topic and a list of actual Google search results related to that topic.
- Domain - Intellectual Skills: Rules. Students have to apply a set of criteria in order to make a decision.
- This objective requires an objective-style fill-in-the-blank test item, as the students will have to write down the three most appropriate sites based on certain criteria.
Test Item: Take a look at the following Google search results: (show screen capture of search results). Which three web sites are likely to have specific and relevant information dealing with the subject of life on Mars?
Source: Virginia Tech
If you are having trouble deciding what type of test to use, Table 7.1 on p. 154 has a nice matrix correlating the types of test items with the type of behavior stated in the objective.
Note that rubrics are becoming increasingly popular as assessment instruments because, if constructed correctly, they can provide a descriptive, holistic characterization of the quality of a student's work. Portfolios are also gaining popularity as a means of "authentic testing," that is, assessing performance in realistic contexts.
We will meet Monday night from 8:00 to 9:30 p.m. Eastern time via Webex.
Agenda:
Please email me with any questions.