AIOU Solved Assignments 1 & 2 Code 8628 Autumn & Spring 2024


AIOU solved assignments 1 and 2, code 8628 (B.Ed), for the course Assessment in Science Education (8628), Spring 2024.

Assessment in Science Education (8628)
B.Ed (1.5 Years)
Spring & Autumn, 2024


Q.1 What are the characteristics of a good assessment? Also describe the limitations of assessment.

Different modes of assessment

Formative assessment

Formative assessment is an integral part of teaching and learning. It does not contribute to the final mark given for the module; instead it contributes to learning through providing feedback. It should indicate what is good about a piece of work and why; it should also indicate what is not so good and how the work could be improved. Effective formative feedback will affect what the student and the teacher do next.

Summative assessment

Summative assessment demonstrates the extent of a learner’s success in meeting the assessment criteria used to gauge the intended learning outcomes of a module or programme, and contributes to the final mark given for the module. It is normally, though not always, used at the end of a unit of teaching. Summative assessment is used to quantify achievement, to reward achievement, and to provide data for selection (to the next stage in education or to employment). For all these reasons the validity and reliability of summative assessment are of the greatest importance. Summative assessment can also provide information that has formative/diagnostic value.


‘Authentic’ or work-integrated assessment

‘Authentic’ or work-integrated assessment is an assessment where the tasks and conditions are more closely aligned to what you would experience within employment. This form of assessment is designed to develop students’ skills and competencies alongside academic development. The Collaborate project at Exeter developed a set of tools to support academic staff in the design of authentic assessments, including a dimensions model, iTest and associated Tech Trumps. There is also an online Assessment Designer available which will allow you to design an assessment using a PC or tablet device.

Diagnostic assessment

Like formative assessment, diagnostic assessment is intended to improve the learner’s experience and their level of achievement. However, diagnostic assessment looks backwards rather than forwards. It assesses what the learner already knows and/or the nature of difficulties that the learner might have, which, if undiagnosed, might limit their engagement in new learning. It is often used before teaching or when a problem arises.

Dynamic assessment

Dynamic assessment measures what the student achieves when given some teaching in an unfamiliar topic or field.  An example might be assessment of how much Swedish is learnt in a short block of teaching to students who have no prior knowledge of the language. It can be useful to assess potential for specific learning in the absence of relevant prior attainment, or to assess general learning potential for students who have a particularly disadvantaged background. It is often used in advance of the main body of teaching.

Synoptic assessment

Synoptic assessment encourages students to combine elements of their learning from different parts of a programme and to show their accumulated knowledge and understanding of a topic or subject area. A synoptic assessment normally enables students to show their ability to integrate and apply their skills, knowledge and understanding with breadth and depth in the subject. It can help to test a student’s capability of applying the knowledge and understanding gained in one part of a programme to increase their understanding in other parts of the programme, or across the programme as a whole. Synoptic assessment can be part of other forms of assessment.

Criterion referenced assessment

Each student’s achievement is judged against specific criteria. In principle no account is taken of how other students have performed. In practice, normative thinking can affect judgements of whether or not a specific criterion has been met. Reliability and validity should be assured through processes such as moderation, trial marking, and the collation of exemplars.

Ipsative assessment

This is assessment against the student’s own previous standards. It can measure how well a particular task has been undertaken against the student’s average attainment, against their best work, or against their most recent piece of work. Ipsative assessment tends to correlate with effort, to promote effort-based attributions of success, and to enhance motivation to learn.

Used for the assessment of science learning:

The assessment standards provide criteria to judge progress toward the science education vision of scientific literacy for all. The standards describe the quality of assessment practices used by teachers and state and federal agencies to measure student achievement and the opportunity provided students to learn science. By identifying essential characteristics of exemplary assessment practices, the standards serve as guides for developing assessment tasks, practices, and policies. These standards can be applied equally to the assessment of students, teachers, and programs; to summative and formative assessment practices; and to classroom assessments as well as large-scale, external assessments.


The Standards present two sample assessment tasks: one to probe students’ understanding of the natural world and another to probe their ability to inquire.

In the vision described by the National Science Education Standards, assessment is a primary feedback mechanism in the science education system. For example, assessment data provide students with feedback on how well they are meeting the expectations of their teachers and parents, teachers with feedback on how well their students are learning, districts with feedback on the effectiveness of their teachers and programs, and policy makers with feedback on how well policies are working. Feedback leads to changes in the science education system by stimulating changes in policy, guiding teacher professional development, and encouraging students to improve their understanding of science.

The assessment process is an effective tool for communicating the expectations of the science education system to all concerned with science education. Assessment practices and policies provide operational definitions of what is important. For example, the use of an extended inquiry for an assessment task signals what students are to learn, how teachers are to teach, and where resources are to be allocated.

AIOU Solved Assignments 1 Code 8628 Spring 2024

Q.2 Differentiate between instructional objectives and assessment objectives with suitable examples.

Answer:

The following “GENERAL” rules should prove useful in writing instructional objectives

  • 1. Be Concise: At most, objectives should be one or two sentences in length.
  • 2. Be Singular: An objective should focus on one and only one aspect of behavior.
  • 3. Describe Expected Behaviors: An objective should indicate the desired end product, not merely a direction of change or a teacher activity.
  • 4. Be Realistic: An objective should focus on observable behavior, not on teacher illusions or undefinable traits.
  • 5. Use Definite Terms (VERBS!): Terms such as “write, define, list, and compare” have definite meanings, whereas terms such as “know, understand, and apply” have a multitude of meanings.

In Bloom’s Taxonomy of Educational Objectives he defines three broad categories in which objectives can be written. Keep in mind that the term “taxonomy” refers to the principles of classification which he describes in his book. Bloom defined three broad categories, and within each of them there is further differentiation: just as we can categorize life forms into broad categories such as dogs, birds, etc., each of these categories can be further differentiated into beagles, hounds, and terriers, or cardinals, blue jays, and eagles. The three broad categories are:

  • Cognitive Objectives (usually associated with specific domains of knowledge)
  • Affective Objectives (Usually associated with feelings and emotions.)
  • Psychomotor Objectives (Usually associated with body movement.)

The next few pages elaborate on each of these three areas of instructional objectives.

COGNITIVE OBJECTIVES

1. KNOWLEDGE. The simplest cognitive behavior, knowledge, involves the recall of information. Objectives concerned with the individual’s knowledge of terms and facts, knowledge of methods and criteria for handling terms and facts, and knowledge of the abstractions of a field are properly classified in this category. Most achievement test items measure objectives at this level.

2. COMPREHENSION. Objectives classified as “comprehension” require the ability to reorganize, restate, and interpret the facts, the methods and criteria for handling facts, and the generalizations and abstractions of a field.

3. APPLICATION. When instructional objectives are directed toward the utilization of knowledge in a new and different situation, they may be classified as “application” objectives. Test items in this category require examinees to make decisions in new situations, which must first be translated into situations identical or parallel to those presented in the course content.

4. ANALYSIS. The analysis category contains objectives which require the individual to determine the elements of some problem or theory under consideration, the relationships among the elements, and the relationship of the elements to the whole. This level can be characterized as taking the “whole” of a problem and breaking it down into its various parts to extract meaning from the situation. Test items of this type require examinees to isolate specifics in an overall problem situation and use the inter-relationships among the specifics to solve the given problem. Items of this type are difficult to construct, particularly in the multiple-choice, selection-type format.

5. SYNTHESIS. Objectives classified as synthesis include behaviors like the development of a plan or a set of abstract relations. This level can be characterized as taking the various parts of a problem and putting them together to derive meaning from the situation. Test items of this type require examinees to organize specifics into an overall problem statement and, from this statement, draw conclusions or generalizations. Most items of this type are in the “Essay” format.

6. EVALUATION. Objectives requiring the evaluation or judging of theory or products according to internal evidence or external criteria are properly classified as evaluation objectives. Measurement at this level requires utilization of the lower-level mental skills (knowledge, comprehension). The student is required to decide between right and wrong, good and bad, relevant and irrelevant. These decisions require knowledge and the ability to analyze and synthesize data in forming sound, logical judgements. Items of this type are often quite difficult to construct because of the necessity of being able to defend one alternative as a better response than all other possible alternatives.
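The verb guidance above (rule 5: use definite terms) can be captured as a simple lookup table. The sketch below is illustrative only: the verb lists are common examples associated with each cognitive level, not an official list from Bloom’s taxonomy.

```python
# Illustrative mapping of Bloom's six cognitive levels to sample assessment
# verbs. The verb lists are common examples, assembled for this sketch.

BLOOM_VERBS = {
    "knowledge":     ["define", "list", "recall", "name"],
    "comprehension": ["restate", "interpret", "summarize"],
    "application":   ["apply", "demonstrate", "solve"],
    "analysis":      ["differentiate", "break down", "relate"],
    "synthesis":     ["design", "compose", "develop a plan"],
    "evaluation":    ["judge", "justify", "critique"],
}

def suggest_verbs(level):
    """Return sample verbs for writing an objective at the given level."""
    return BLOOM_VERBS.get(level.lower(), [])

print(suggest_verbs("Synthesis"))  # ['design', 'compose', 'develop a plan']
```

A table like this is handy when drafting objectives: pick the intended level first, then choose a definite verb from it rather than a vague term like “know” or “understand”.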

AIOU Solved Assignments 2 Code 8628 Autumn & Spring 2024

Q.3 What types of questions are asked for knowledge of principles and generalization? Elaborate your answers by giving examples from science text book (S.S.C).

To ask good questions, we first need to clarify what types and levels of information we are trying to draw out by the questions (Bloom, 1956). There are four levels of knowledge: factual, conceptual, procedural, and metacognitive. And there are nine types of knowledge: terminology, specific facts, conventions, trends and sequences, classifications and categories, criteria, methodology, principles and generalizations, and theories and structures. The types of knowledge are straightforward, so I will explain a bit more about the levels of knowledge.

Recall prerequisite learning and connect to new material. All new learning is hooked in some way into previous learning (2, 3). Comprehension involves bringing to mind previously learned knowledge related to the new learning. In this case it is likely that the student has encountered an explanation of Newton’s first and second laws. So they are familiar with the concepts of inertia, mass, force, acceleration. If during instruction these laws are tied together such that an understanding of one can be used to support understanding of the next, the chances are good that the students will learn the similarities and differences among them, and will be able to differentiate the examples that represent each of the theories or principles. 

Theories of how concepts like these are learned suggest that, after reminding students of where they might have encountered this concept before (either personally or in a previous class), the instructor would give a good, clear definition of the concept followed by what is called a “paradigmatic example,” which is simply the example that most people would think of if you asked for an example of the concept. For example, in the case of Newton’s laws, the example of rolling a ball along a surface is the simplest example that would come to mind for most people. The instructor could even use bowling or soccer as a more concrete example that most students would recognize. (This example later serves as a benchmark against which to check every other example they think of, so it pays to think it through thoroughly.) Then the instructor or the students generate other examples of the principle. Seeing or even categorizing positive and negative instances (non-examples) of the concept helps the students to clarify their understanding. The instructor can illustrate different relationships or characteristics of the concept by moving on to more complex or related examples, for example, using the example of how different strengths of the bowler would cause the ball to roll faster or slower. In fact, the instructor could even invite the students to suggest other scenarios and what they might say about the concept.

Use the three modes of understanding (translation, interpretation, and extrapolation) in the examples given during instruction. The use of these three modes of understanding would represent learning guidance in the form of elaboration with a variety of examples of the concepts or principles being learned. Translation can be accomplished by having the students state the principles in their own terms; there could even be a contest to see who comes up with the best alternative statement of the principle or theory. For interpretation, the students could be asked to demonstrate the principle or draw a graph of it. For extrapolation, the teacher might demonstrate the interaction of two moving objects and ask the students what they think will happen if some variable changes. The teacher might explore the related concepts and principles at the same time, so the students might see how they relate to each other. 

Incorporate practice and feedback. One important component of learning at this level is practice and feedback. The principle just learned should become the foundation for learning future principles. Furthermore, the more the principle is used in future activities, the better and stronger the neural connections (4), and the easier it will be to recall and use. Unfortunately, research in the area of transfer has shown that many students fail to recognize that previously learned skills can be transferred to a new task situation unless they are prompted to do so (5).  However, the more often this type of spaced practice occurs, the higher the probability that learners will develop an orientation for transfer (6).

The students would get practice in the elaboration activity suggested above, and the results could be used by the teacher to reinforce correct understanding and remediate misunderstanding. Practice and feedback can be accomplished in many different ways, from collaborative activity to computerized tutorials and quizzes. Especially helpful are engaging activities where the students can practice putting things into their own words, giving examples of the principles or theories, illustrating with graphics or models, and/or, given a set of conditions, setting up a demonstration. This practice allows students to get feedback on their understanding.  

The importance of feedback can’t be overstated. Students value feedback, as it confirms their understanding or exposes their misunderstanding while learning is still taking place. It’s easier to learn things the right way the first time than to try to unlearn and relearn them later.

Model intellectual skills. Consider employing the “cognitive apprenticeship” model. In this model the instructor acts as a master model to illustrate the intellectual skill being learned and then coaches the students as they practice solving real problems using those illustrated strategies (7).

Assessment Issues

Assessment of comprehension tasks follows the same pattern as the behaviors practiced in instruction. The student can be asked to identify relevant theories or principles when given a scenario, or be asked to translate, interpret or extrapolate a particular principle within a range of conditions. However, assessment of comprehension should stay within the parameters described in the statement of instructional outcomes. That is, if learning is at the comprehension level, assessment should not test application or evaluation of the principles or concepts.

Finally, instruction should include opportunities for lots of practice spaced out across the learning. Spaced practice is periodic use of the principles in dialog and other learning activities. Knowledge that is not practiced or used to support new knowledge quickly decays, and becomes inert knowledge. Reminding students in successive class periods of what they learned before and having them do something with that information will keep it fresh and eventually more solidly stored in long term memory. This is the principle behind a spiral curriculum, in which the instruction returns to earlier principles but in more complex situations. An example would be moving from comprehension to application of a principle in a subsequent class period. 

Comprehension of fundamental principles, generalizations, and theories is generally taught as a prerequisite for application level learning, where students are expected to demonstrate understanding by applying the knowledge they just learned to new situations they haven’t encountered before. Instruction that teaches comprehension level learning should be followed as soon as possible with application level activities.  Application level learning strengthens the students’ ability to recall the previously learned knowledge. Applications are potentially more meaningful and motivating to students, especially if they have a manipulative and or emotional component, because they reinforce the conceptual understanding associated with comprehension. Comprehension of fundamental principles, generalizations and theories can be an exciting and motivating part of learning, and it facilitates the students’ future application of knowledge. Because of this, it is worth the time and effort to teach it.

AIOU Solved Assignments 1 & 2 Code 8628 Autumn & Spring 2024

 Q.4 i) What is extrapolation? Write examples from science textbook (S.S.C).

In mathematics, extrapolation is a type of estimation of the value of a variable beyond the original observation range, on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. It may also apply to human experience: to project, extend, or expand known experience into an area not known or previously experienced so as to arrive at a (usually conjectural) knowledge of the unknown [1] (e.g., a driver extrapolates road conditions beyond his sight while driving). The extrapolation method can also be applied in the interior reconstruction problem.
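The distinction between interpolation and extrapolation can be shown with a minimal sketch. The data points and the straight-line relationship below are invented for illustration, not taken from any textbook:

```python
# Interpolation vs. extrapolation with a least-squares straight-line fit.
# The observations are exactly linear here (y = 2x + 1) to keep the
# arithmetic transparent.

def fit_line(xs, ys):
    """Least-squares fit y = a*x + b through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Observations of a variable y at x = 1..5.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
a, b = fit_line(xs, ys)

inside = a * 2.5 + b    # interpolation: between known observations
outside = a * 10 + b    # extrapolation: beyond the observed range

print(inside, outside)  # 6.0 21.0
```

Both estimates use the same fitted line; the difference is only whether the query point lies inside or outside the observed range, which is why the extrapolated value carries the greater uncertainty.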

Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth function will be poorly extrapolated.

In terms of complex time series, some experts have found that extrapolation is more accurate when performed through the decomposition of causal forces.

Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data from near x = 0, we may estimate that the function behaves as sin(x) ≈ x. In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0, however, the extrapolation moves arbitrarily far from the x-axis while sin(x) remains in the interval [−1, 1]; i.e., the error increases without bound.

Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will produce extrapolations that eventually diverge away from the x-axis even faster than the linear approximation.

This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally, due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behaviors.
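The sin(x) example above can be checked numerically. The sketch below truncates the power series of sin around 0 after a chosen number of terms and compares it with the true function near and far from x = 0:

```python
import math

# Truncated power series of sin around 0: x - x^3/3! + x^5/5! - ...
# Near x = 0 the one-term model sin(x) ~ x is excellent; extrapolated far
# from 0 its error grows without bound while sin(x) stays inside [-1, 1].

def taylor_sin(x, terms):
    """Evaluate the first `terms` terms of the Maclaurin series of sin."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

near = abs(taylor_sin(0.1, 1) - math.sin(0.1))   # linear model, near 0
far = abs(taylor_sin(10.0, 1) - math.sin(10.0))  # same model, far from 0

print(f"error near 0: {near:.2e}, error at x=10: {far:.2f}")
```

Adding more terms improves agreement over a wider interval, as the passage above notes, but any fixed truncation still diverges from the x-axis eventually.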

 ii) Write down statements of comprehensive objectives.

Read the instructions for the written statement carefully and follow them:

  • 1. Do not exceed the two-page limit, and do not shrink the font size to fit it. Eleven- or twelve-point font and two pages is sufficient for meeting the requirements of the essay.
  • 2. Specify the degree program and track of research interest from the offerings of the department. We offer the PhD, ScD, and DrPH doctoral degrees, the MHS and ScM masters’ degrees, certificates, and non-degree post-doctoral fellowship training. The research tracks are Epidemiology of Aging, Cancer Epidemiology, Cardiovascular Disease and Clinical Epidemiology, Clinical Trials and Evidence Synthesis, Environmental Epidemiology, General Epidemiology and Methods, Genetic Epidemiology, and Infectious Disease Epidemiology. Certificates in Healthcare Epidemiology, Pharmacoepidemiology, and Epidemiology for Public Health Professionals are also accepting applications.
  • 3. Avoid extraneous personal or biographical information that does not inform the committee about future career plans. A high percentage of applicants begin their essay with an anecdote about a personal health event, a trip abroad, or an account of illness in the family. While this does give the committee some idea of the applicant’s motivation, it should not be the whole essay; keep the anecdote to one third or less of the full essay. One paragraph is usually sufficient to communicate one’s motivation.
  • 4. Emphasize what you will do, not what you have done. Most of the relevant information about what an applicant has accomplished is easily and rapidly accessible to the committee through a review of the CV and other application materials, yet many applicants use 90% of the space in the statement to restate and embellish those items, giving only passing attention in the final paragraph to the main objective of the statement: to articulate a clear and concise plan for progressing toward a career in a given field. As a general guideline, more than half of the essay should be spent explaining what the applicant intends to do during and after graduate study.
  • 5. Provide evidence that you as an applicant are well matched to the interests of the department. Some applicants engage in “name dropping” after a review of catalogues and web sites; however, faculty members do change schools or areas of research. A well-written statement explains how particular faculty, research programs, or course work are particularly well suited to meeting the training objectives of the applicant. Additionally, if the only faculty member doing the research you discuss leaves the program, the department cannot in good conscience admit you to the program.
  • 6. Be as concrete and specific as possible about your interests and proposed course of study. An applicant’s failure to articulate a clear and detailed training plan leaves the committee with the impression that the applicant has not thought through the nature and meaning of graduate training, and may not be ready for admission. These are the questions to address (in Epidemiology, for instance): How will the applicant help rid the world of disease? To what end will skills and knowledge be directed? What specific aspect of a broad domain of work holds the applicant’s interest?

Finally, the statement of objectives is not a binding document. Students, once they matriculate, often shift and refine their focus of study, and no one is obligated to remain faithful to the plan they articulate. However, the statement of objectives is designed to provide the department a strong understanding of the applicant’s motivation and commitment to the field and a clear indication of the applicant’s writing ability.

AIOU Solved Assignments Code 8628 Autumn & Spring 2024

Q.5 i) What is analysis? What are the requirements for analysis test items?

One of the tools used in the evaluation process is an item analysis. It is used to “test the test”: it ensures that testing instruments measure the behaviors learners need in order to perform a task to standard. When evaluating tests we need to ask: do the scores on the test provide information that is really useful and accurate in evaluating student performance? The item analysis provides information about the reliability and validity of test items and learner performance. Item analysis has two purposes: first, to identify defective test items; and second, to pinpoint the learning materials (content) the learners have and have not mastered, particularly what skills they lack and what material still causes them difficulty (Brown & Frederick, 1971).

Item analysis is performed by comparing the proportion of learners who pass a test item in contrasting criterion groups. That is, for each question on a test, how many learners with the highest test scores (U) answered the question correctly or incorrectly compared with the learners who had the lowest test scores (L)?

The upper (U) and lower (L) criterion groups are selected from the extremes of the distribution. The use of very extreme groups, say the upper and lower 10 percent, would result in a sharper differentiation, but it would reduce the reliability of the results because of the small number of cases utilized. In a normal distribution, the optimum point at which these two conditions balance out is 27 percent (Kelly, 1939).

NOTE: With the large and normally distributed samples used in the development of standardized tests, it is customary to work with the upper and lower 27 percent of the criterion distribution. Many of the tables used for the computation of item validity indices are based on the assumption that the “27 percent rule” has been followed. Also, if the total sample contains 370 cases, the U and L groups will each include exactly 100 cases, thus preventing the necessity of computing percentages. For this reason it is desirable in a large test item analysis to use a sample of 370 persons.

Because item analysis is often done with small classroom-size groups, a simple procedure will be used here. It uses a 33 percent cutoff to divide the class into three groups: Upper (U), Middle (M), and Lower (L). An example will be used for this discussion: in a class of 30 students we have chosen the 10 students (33 percent) with the highest scores and the 10 students (33 percent) with the lowest scores. We now have three groups: U, M, and L. The test has 10 items in it.

Next, we tally the correct responses to each item given by the students in the three groups. This can easily be done by listing the item numbers in one column and prepare three other columns, named U, M, L. As we go through each student’s paper, we place a tally mark next to each item that was answered correctly. This is done for each of the ten test papers in the U group, then each of the ten test papers in the M group, and finally for each of the ten papers in the L group. The tallies are then counted and recorded for each group as shown in the table below.
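The grouping-and-tallying procedure above can be sketched in code. The helper names and the data layout (a list of per-student score/answer records) are assumptions made for this illustration:

```python
# Split a class into Upper, Middle, and Lower thirds by total score, then
# tally correct responses per item within each group. The data layout is
# a list of (total_score, per_item_correct) records, one per student.

def split_groups(results):
    """Return (upper, middle, lower) thirds of the class, ranked by score."""
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    third = len(ranked) // 3
    return ranked[:third], ranked[third:-third], ranked[-third:]

def tally(group, n_items):
    """Count, per item, how many students in the group answered correctly."""
    counts = [0] * n_items
    for _, answers in group:
        for i, correct in enumerate(answers):
            if correct:
                counts[i] += 1
    return counts
```

Running `tally` once per group reproduces the U, M, and L columns described above, one count per test item.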

Item Analysis Table

A measure of item difficulty is obtained by adding the number passing each item in all three criterion groups (U + M + L), as shown in the fifth column. A rough index of the validity or discriminative value of each item is found by subtracting the number of persons answering it correctly in the L group from the number answering it correctly in the U group (U – L), as shown in the sixth column.
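The two indices just defined can be computed directly from the per-group correct counts. The tally numbers below are invented for illustration (10 students per group, three items):

```python
# Item difficulty = U + M + L correct counts; discrimination = U - L.

def item_indices(u, m, l):
    """Compute (difficulty, discrimination) lists from per-group tallies."""
    difficulty = [a + b + c for a, b, c in zip(u, m, l)]
    discrimination = [a - c for a, c in zip(u, l)]
    return difficulty, discrimination

U = [10, 9, 4]   # correct counts per item in the upper group
M = [10, 7, 3]   # ... in the middle group
L = [9, 3, 4]    # ... in the lower group

diff, disc = item_indices(U, M, L)
print(diff)  # [29, 19, 11]
print(disc)  # [1, 6, 0]
```

In this made-up data, the first item is passed by 29 of 30 learners (very easy), the second discriminates well (U − L = 6), and the third has zero discriminative value — the same patterns flagged in the discussion of the table.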

Reviewing the table reveals five test items (marked with an *) that require closer examination.

  • Item 2 shows a low difficulty level. It might be too easy, having been passed by 29 out of 30 learners. If the test item is measuring a valid performance standard, then it could still be an excellent test item.
  • Item 4 shows a negative value. Apparently, something about the question or one of the distracters confused the U group, since a greater number of them marked it wrong than the L group. Some of the elements to look for are: wording of the question, double negatives, incorrect terms, distracters that could be considered correct, or text that differs from the instructional material.
  • Item 5 shows a zero discriminative value. A test item of this nature with a good difficulty rating might still be a valid test item, but other factors should be checked, e.g., was a large number of the U group missing from training when this point was taught? Was the L group given additional training that could also benefit the U group?
  • Item 7 shows a high difficulty level. The training program should be checked to see if this point was sufficiently covered by the trainers or if a different type of learning presentation should be developed.
  • Item 9 shows a negative value. The high magnitude of the negative number probably indicates a test item that was incorrectly keyed.

As you can see, the item analysis identifies deficiencies either in the test or in the instruction. Discussing questionable items with the class is often sufficient to diagnose the problem. In narrowing down the source of difficulty, it is often helpful to carry out further analysis of each test item. The table below shows the number of learners in the three groups who chose each option in answering the particular items.

ii) Define evaluation. Write statements of evaluation objectives in terms of external criteria.

Essential Principles

The information in this section has been gathered from numerous sources and aligned around three significant concepts: (1) formative assessment is student focused, (2) formative assessment is instructionally informative, and (3) formative assessment is outcomes based.

In an effort not to duplicate information available in other resources, I have condensed the elements and their definitions quite a bit. If you would like to read more about the fundamentals of formative assessment, I recommend “Working Inside the Black Box” (Black, Harrison, Lee, Marshall, & Wiliam, 2004); Classroom Assessment for Student Learning: Doing It Right— Using It Well (Stiggins, Arter, Chappuis, & Chappuis, 2004); and Classroom Assessment and Grading That Work (Marzano, 2006).

Formative Assessment Is Student Focused

Formative assessment is purposefully directed toward the student. It does not emphasize how teachers deliver information but, rather, how students receive that information, how well they understand it, and how they can apply it. With formative assessment, teachers gather information about their students’ progress and learning needs and use this information to make instructional adjustments. They also show students how to accurately and honestly use self-assessments to improve their own learning. Instructional flexibility and student-focused feedback work together to build confident and motivated learners.

In brief: Formative assessment helps teachers

  • Consider each student’s learning needs and styles and adapt instruction accordingly
  • Track individual student achievement
  • Provide appropriately challenging and motivational instructional activities
  • Design intentional and objective student self-assessments
  • Offer all students opportunities for improvement

In practice: Students in Mrs. Chavez’s English class are studying character development. They have read about Scout in To Kill a Mockingbird and Holden Caulfield in The Catcher in the Rye.

Early in the unit, Mrs. Chavez asks her students to define a character trait and give an example of someone in literature or in real life who demonstrates that trait. She gathers their examples in a list, which she posts in the classroom. This is valuable information about the starting point for the unit: in this case, it helps the teacher determine whether she needs to clarify the concept of character traits or can move on with the application of character traits to literature.

Based on the data her students provide, Mrs. Chavez decides to move forward. She arranges the class into random groups and asks each group to write all the character traits of Scout that they can think of on individual yellow sticky notes—one trait per note—and then do the same for Holden Caulfield, this time using blue sticky notes. Then each group posts their responses on the original list of traits, alongside each character trait. Areas of agreement and disagreement are discussed. Mrs. Chavez uses a questioning strategy to elicit information and to clarify any lingering gaps in understanding or accuracy. Following this, students work on their own to create a T chart for each character, using the left side of the T to list life experiences and challenges and the right side to list how these factors have influenced traits and behaviors. Note that Mrs. Chavez has done very little lecturing or whole-class teaching to this point, making for a very student-focused lesson.

Formative Assessment Is Instructionally Informative

During instruction, teachers assess student understanding and progress toward standards mastery in order to evaluate the effectiveness of their instructional design. Both teachers and students, individually and together, review and reflect on assessment outcomes. As teachers gather information from formative assessment, they adjust their instruction to further student learning.

In brief: Formative assessment

  • Provides a way to align standards, content, and assessment
  • Allows for the purposeful selection of strategies
  • Embeds assessment in instruction
  • Guides instructional decisions

In practice: During a high school social studies unit on the development of American nationalism after the War of 1812, Mr. Sandusky uses a series of assessments to monitor his students’ developing understanding of the presented material. Mr. Sandusky begins with a pre-assessment focused on content similar to what students will encounter in the final selected-response test. After reviewing the pre-assessment data, he concludes that his students either remember little of their prior learning about the material or haven’t been exposed to these topics before. He had intended to begin the unit with a discussion of how the popularity of “The Star-Spangled Banner” fueled nationalistic spirit but decides to alter those plans somewhat by having students read articles about the War of 1812, grouping them by readiness and assigning purposefully selected readings. One group reads about the reasons the United States and Britain went to war, another reads about specific events that occurred during the war, and a third reads about Francis Scott Key. Each group reports out, sharing information with the rest of the class.

As the unit progresses, students keep track of their learning and assignments on a work-along, turning it in to Mr. Sandusky every day for a quick check. For example, they describe causes of the war, answer a question about Key’s motivation to write “The Star-Spangled Banner,” and note the location of the battle he observed (Baltimore’s Fort McHenry). This is followed by a Corners activity where students pick different lines of the song to analyze and respond to in terms of relevance to current events. Later, after a discussion of the diverse opinions on the War of 1812, the teacher asks students to report one pro and one con viewpoint. To probe students’ understanding of the significant outcomes of the war, he asks the class to describe three specific changes in the power of the U.S. government that resulted from the war. In these activities, Mr. Sandusky works to align his formative assessment questions with the lesson’s specific objectives, incorporate the questions into instruction, and use the information to guide future instruction.
