Free AIOU Solved Assignment Code 697 Spring 2024

Download the AIOU solved assignment for course code 697 (Autumn/Spring 2024) free of charge, along with updated solved assignments. Free AIOU assignments for all levels are available from aiousolvedassignment.

Course: Assessment in Science Education (697)
Semester: Spring, 2024
ASSIGNMENT No. 1

Q.1   Define the nature of assessment and discuss its various types with reference to their purpose.

Diagnostic assessment

Before creating instruction, it is necessary to know what kind of students you are creating it for. The goal is to establish your students' strengths and weaknesses and the skills and knowledge they possess before they receive the instruction. Based on the data you have collected, you can then design your instruction.

Formative assessment

Formative assessment is used during the first attempt at developing instruction. Its goal is to monitor student learning and provide feedback. It helps identify the first gaps in your instruction, and this feedback tells you what to focus on as you develop the instruction further.

Summative assessment

Summative assessment is aimed at assessing the extent to which the most important outcomes have been reached by the end of the instruction. It also measures more: the effectiveness of learning, reactions to the instruction, and the benefits on a long-term basis. The long-term benefits can be determined by following up with students who attended your course or took your test, to see whether and how they use the knowledge, skills and attitudes they learned.

Confirmative assessment

Even after your instruction has been implemented in your classroom, it is still necessary to assess it. Your goal with confirmative assessment is to find out whether the instruction is still a success after, for example, a year, and whether the way you are teaching is still on point. You could say that a confirmative assessment is an extensive form of summative assessment.

Norm-referenced assessment

Norm-referenced assessment compares a student's performance against an average norm. This could be the average national norm for the subject of history, for example. Another example is when a teacher compares the average grade of his or her students against the average grade of the entire school.

Criterion-referenced assessment

Criterion-referenced assessment measures a student's performance against a fixed set of predetermined criteria or learning standards. It checks what students are expected to know and be able to do at a specific stage of their education. Criterion-referenced tests are used to evaluate a specific body of knowledge or skill set; in effect, they test the curriculum taught in a course.

Ipsative assessment

Ipsative assessment measures a student's performance against that student's previous performances. With this method, students try to improve by comparing their current results with their earlier ones rather than with other students' results, which can be better for their self-confidence.
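To make the contrast between norm-referenced, criterion-referenced, and ipsative interpretations concrete, here is a minimal Python sketch; the scores, class results, and cut-off mark are invented purely for illustration.

```python
from statistics import mean

# Hypothetical data for one student on a science test (illustrative only)
student_score = 72
previous_scores = [58, 63, 67]        # the same student's earlier attempts
class_scores = [55, 61, 64, 70, 72, 75, 80, 83]
criterion_cutoff = 70                 # fixed standard set in advance by the curriculum

# Norm-referenced: compare the student against the group average
print("Above class average:", student_score > mean(class_scores))

# Criterion-referenced: compare the student against the predetermined standard
print("Meets the criterion:", student_score >= criterion_cutoff)

# Ipsative: compare the student against his or her own previous best
print("Improvement over personal best:", student_score - max(previous_scores))
```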

Effective assessment is objective and focused on student performance in science education. It should not reflect the personal opinions, likes, dislikes, or biases of the instructor. Instructors must not permit their judgment of student performance to be influenced by their personal views of the student, favorable or unfavorable. Sympathy or over-identification with a student, to such a degree that it influences objectivity, is known as “halo error.” A conflict of personalities can also distort an opinion. If an assessment is to be objective, it must be honest; it must be based on the performance as it was, not as it could have been. The substantial characteristics of effective assessment in science education are as follows:

Flexible

The instructor must evaluate the entire performance of a student in the context in which it is accomplished. Sometimes a good student turns in a poor performance, and a poor student turns in a good one. A friendly student may suddenly become hostile, or a hostile student may suddenly become friendly and cooperative. The instructor must fit the tone, technique, and content of the assessment to the occasion, as well as to the student. An assessment should be designed and executed so that the instructor can allow for variables. The ongoing challenge for the instructor is deciding what to say, what to omit, what to stress, and what to minimize at the proper moment.

Acceptable

The student must accept the instructor in order to accept his or her assessment willingly. Students must have confidence in the instructor’s qualifications, teaching ability, sincerity, competence, and authority. Usually, instructors have the opportunity to establish themselves with students before the formal assessment arises. If not, however, the instructor’s manner, attitude, and familiarity with the subject at hand must serve this purpose. Assessments must be presented fairly, with authority, conviction, sincerity, and from a position of recognizable competence. Instructors must never rely on their position to make an assessment more acceptable to students.

Comprehensive

A comprehensive assessment is not necessarily a long one, nor must it treat every aspect of the performance in detail. The instructor must decide whether the greater benefit comes from a discussion of a few major points or a number of minor points. The instructor might assess what most needs improvement, or only what the student can reasonably be expected to improve. An effective assessment covers strengths as well as weaknesses. The instructor’s task is to determine how to balance the two.

Constructive

An assessment is pointless unless the student benefits from it. Praise for its own sake is of no value, but praise can be very effective in reinforcing and capitalizing on things that are done well, in order to inspire the student to improve in areas of lesser accomplishment. When identifying a mistake or weakness, the instructor must give positive guidance for correction. Negative comments that do not point toward improvement or a higher level of performance should be omitted from an assessment altogether.

Organized

An assessment must be organized. Almost any pattern is acceptable, as long as it is logical and makes sense to the student. An effective organizational pattern might be the sequence of the performance itself. Sometimes an assessment can profitably begin at the point at which a demonstration failed, and work backward through the steps that led to the failure. A success can be analyzed in similar fashion. Alternatively, a glaring deficiency can serve as the core of an assessment. Breaking the whole into parts, or building the parts into a whole, is another possible organizational approach.

Thoughtful

An effective assessment reflects the instructor’s thoughtfulness toward the student’s need for self-esteem, recognition, and approval. The instructor must not minimize the inherent dignity and importance of the individual. Ridicule, anger, or fun at the expense of the student never has a place in assessment. While being straightforward and honest, the instructor should always respect the student’s personal feelings. For example, the instructor should try to deliver criticism in private.

Specific

The instructor’s comments and recommendations should be specific. Students cannot act on recommendations unless they know specifically what the recommendations are. A statement such as, “Your second weld wasn’t as good as your first,” has little constructive value. Instead, the instructor should say why it was not as good, and offer suggestions on how to improve the weld. If the instructor has a clear, well-founded, and supportable idea in mind, it should be expressed with firmness and authority, and in terms that cannot be misunderstood. At the conclusion of an assessment, students should have no doubt about what they did well and what they did poorly and, most importantly, specifically how they can improve.       


Q.2   What are the major implications of aims, goals and objectives for teaching science? Discuss.    

Initially developed between 1956 and 1972, the domains of learning have received considerable contributions from researchers and experts in the field of education. Studies by Benjamin Bloom (on cognitive domain), David Krathwohl (affective domain) and Anita Harrow (Psychomotor domain) have been encompassed into the three domains of learning (Sousa, 2016).

A holistic lesson developed by a teacher requires the inclusion of all three domains when constructing learning tasks for students. The diversity of such learning tasks helps create a comparatively well-rounded learning experience that meets a number of learning styles and learning modalities. An increased level of diversity in the delivery of lessons helps engage students and creates more neural networks and pathways, which aids the recollection of information and events.

Learning helps develop an individual's attitude as well as encouraging the acquisition of new skills. The cognitive domain aims to develop the mental skills and the acquisition of knowledge of the individual. It encompasses six categories: knowledge, comprehension, application, analysis, synthesis, and evaluation. Knowledge includes the ability of the learner to recall data or information. This is followed by comprehension, which assesses the ability of the learner to understand the meaning of what is known; this is the case where a student is able to explain an existing theory in his or her own words (Anderson et al., 2011). Next comes application, which shows the ability of the student to use abstract knowledge in a new situation. A typical case is when an economics student is able to apply the theory of demand and supply to the changing market trend of clothing during a particular season. The analysis category aims to differentiate facts from opinions. The synthesis category shows the ability to integrate different elements or concepts in order to form a sound pattern or structure and so establish a new meaning. The evaluation category shows the ability to come up with judgments about the importance of concepts. A typical scenario is when a manager is able to identify and implement the most cost-effective methods of production in a bid to increase profits while sustaining a high level of competitive advantage.

The affective domain includes the feelings, emotions and attitudes of the individual. Its categories are receiving phenomena, responding to phenomena, valuing, organization, and characterization (Anderson et al., 2011). The sub-domain of receiving phenomena creates awareness of feelings and emotions as well as the ability to use selective attention; this can include listening attentively to lessons in class. The next sub-domain, responding to phenomena, involves the active participation of the learner in class or during group discussion (Cannon and Feinstein, 2005). Valuing involves the ability to see the worth of something and express it, such as the ability of a learner to share their views and ideas about various issues raised in class. The ability of the student to prioritize one value over another and create a unique value system is known as organization; this can be assessed, for example, by how a learner weighs academic work against social relationships. The sub-domain of characterization explains the ability to internalize values and let them control the individual's behavior. For instance, a student treats academic work as highly important because it plays a decisive role in the career path they choose, rather than settling for whatever may be available.

The psychomotor domain involves motor skills and the ability to coordinate them. Its sub-domains are perception, set, guided response, mechanism, complex overt response, adaptation, and origination. Perception involves the ability to apply sensory information to motor activity; for instance, a student practices a series of exercises in a textbook with the aim of scoring higher marks in exams. Set, as a sub-domain, involves the readiness to act upon a series of challenges in order to overcome them. Guided response includes the ability to imitate a displayed behavior or to use trial and error to resolve a situation (Sousa, 2016). The sub-domain of mechanism includes the ability to convert learned responses into habitual actions performed with proficiency and confidence; students are able to solve exam questions after they have confidently answered a number of past questions. Complex overt response describes the ability to skillfully perform complex patterns of actions, such as a student's increased typing speed when using a computer. Adaptation is an integral part of the domain and exhibits the ability to modify learned skills to meet special situations; an instance is when a student who has learned various underlying theories is able to invent or build a working model using everyday materials. Origination involves creating new movement patterns for a specific situation (Sincero, 2011).

Learning is an integral part of every individual's life. It is key to growth and development and hence requires both students and teachers to be committed to the process. It is further necessary to ensure that the delivery of learning combines the distinct facets that have been identified as the domains of learning.

With the continually increasing need to ensure that students are taught with varying strategies and techniques, it is important for teachers to adopt a teaching strategy that combines various domains of learning to enable teaching and learning to be considered as effective.

At London School of Management of Education (LSME) we are proud to inform our cherished students and stakeholders that we actively ensure that all our facilitators apply the best and suitable delivery techniques that would impact positively on the Cognitive, Affective and Psychomotor Domains of the students.

All our lecturers are well trained and experienced in pedagogy, and they excel based on the feedback from the results produced by our students in all external exams and standardizations. All our graduated students are in gainful employment in the UK, USA, Canada, UAE, India, Pakistan, Saudi Arabia, Qatar, Bahrain, Germany, Spain and most countries in the EU. We are proud of our enviable record in delivering the best training to our students, our partners!

The learning process must go beyond reading and memorizing facts and information to the ability to critically evaluate the information, explain to others as well as design things out for everyday use… and that is what we do best at LSME.

This section considers the dominant cognitive processes that contribute to learning—that is, those processes that can be understood at the level of the individual and relate to content knowledge and reasoning. Because the charge of this study is specific to science learning, wherever possible the committee elects to discuss how these learning processes happen in the context of the domain of science. It is critical to note that these processes are not unique to science learning. Indeed, much of the general scholarship on learning has emerged in relationship to other academic disciplines, each with their own scholarly research traditions.

The Role of Memory in Learning

Learning depends fundamentally on memory. Well over a century of research has delved into the properties of human memory in action, detailing the remarkable role memory plays in both developing and sustaining learning over time. From this research, there are several themes that are helpful to keep in mind.

Durable, long-term learning is best accomplished by repeated experience with the material one seeks to remember. Many researchers of memory and learning would caution against relying on a training program that involves a one-time introduction and immediate assessment of proficiency, which tends to result in short-term performance that predictably deteriorates over time, rather than long-term learning. Further, learning episodes are most efficient when they are spread out over multiple sessions rather than crammed together—a phenomenon known as the spacing effect. That is, the same amount of time invested in studying material one wants to remember will generally result in longer-lasting learning if it is distributed over time rather than performed all at once.

Learning can be enhanced by strategies that promote cognitive engagement with and elaboration of the material one is attempting to learn. Knowledge and skills that are densely interconnected to other information have better storage strength in long-term memory and also have links to more potential retrieval cues. Examples of beneficial strategies include such activities as concept mapping, note-taking, self-explanation, and representing material in multiple formats (e.g., text and graphics). Researchers of learning have proposed a framework that differentiates cognitive engagement during learning into four modes: interactive, constructive, active, and passive (presented in decreasing order of the intensity of engagement), with interactive and constructive modes having the greatest impact on learning and conceptual development. Constructive engagement is defined as activities where learners generate some kind of additional externalized product beyond the information they were originally provided with, such as generating inferences and explanations or constructing a new representational format (e.g., a diagram). Interactive engagement goes one step further and occurs when two or more partners (peers, teacher and learner, or intelligent computer agent and learner) together contribute to a mutual dialogue in a constructive mode.

Learning is improved when people are asked to actively apply or construct material from long-term memory, as opposed to passively restudying or being re-told the content, a phenomenon known as the “testing effect”. Providing regular opportunities to generate active responses, such as through informal assessments or practice in the field, helps learners reinforce their learning while at the same time providing information about current states of proficiency. As these examples suggest, corrective feedback is another tool that can help to promote accurate learning and reinforce retention over time.

Activity systems are characterized by rules and conventions, which evolve historically and culturally, as well as divisions of labor and participation structures, which may include social strata or a hierarchical structure to the activity, with different actors taking on distinctive roles. A key insight of activity theory is that “tools,” which may be culturally created artifacts or concepts (e.g., machines, software interfaces, information systems, protocols, etc.) that evolve over time, mediate behavior in the system, including learning and transmitting knowledge.

Individuals may participate in multiple activity systems, and more recent work on activity theory has brought out the importance of considering interactions among multiple activity systems, which raises issues of individual and cultural identity, power, motivation, and difference and also points back to the need to consider citizen science learning in the context of a larger ecosystem of learning experiences. Activity systems are often used as a way of modeling practice in various contexts, including educational practice, in such a way that systems-level relations and dynamics are highlighted. In the context of citizen science, activity theory offers ways to think about the complex set of roles, objectives, values, and activities that can emerge when volunteer participants are simultaneously members of other communities, such as master naturalists and conservationists, community activists, hobbyists, students or teachers in formal or informal education, or workers engaged in related economic activity (e.g., fishing or harvesting). Actors may come from distinctly different groups, each with its own set of objectives, tools, customs, discourse patterns, role structures, and ways of doing things. Activity theory suggests that participants and organizers may advance collaborative goals by paying deliberate attention to recognizing or designing appropriate role structures, shared tools, and systems of communication to take advantage of the resources that different activity systems can potentially contribute while promoting common action and understanding.

One way of understanding how people develop expertise in content areas—specifically in the domain of science—explores the evolution of foundational ideas from the perspective of conceptual development over time. Theorists of conceptual development have noted repeatedly that mature concepts are often qualitatively different from concepts held by children or by uninstructed adults. Acquiring sophisticated understanding of concepts is not merely a matter of accumulating more factual knowledge.

A common idea in theories of conceptual development is that concept learning varies in the degree to which knowledge must be restructured to move from naïve to more expert understanding. Some early understandings can be readily nurtured in thoughtful learning settings. On the other hand, strong restructuring is required when novice and expert conceptual structures are fundamentally incompatible or incommensurate. In this case, rather than refining individual concepts or adding new concepts to existing ones, the nature of the concepts themselves and the explanatory structures in which they are embedded undergo change. Chi and her colleagues argue that some science learning is particularly difficult because learners’ initial conceptions belong to a different ontological category than corresponding scientific conceptions. For example, many novices think of heat, gravity, and force as types of material substances, or properties of matter, rather than interactive processes. This can lead learners to misconstrue instruction, as happens when a learner who thinks of electrical current as similar to flowing water draws on matter-based conceptions, like volume or mass, to try to understand electrical phenomena.

The degree to which scientific concepts displace naïve knowledge during the process of strong restructuring is a subject of much debate. Research shows how conceptual change can occur when a learner becomes sufficiently dissatisfied with a prior conception (e.g., by being confronted with anomalous information) and comes to see a new alternative conception as intelligible, plausible, and fruitful in its ability to explain and understand other problems. However, a number of studies indicate that intuitive ideas are persistent and that learners may ignore, reject or distort anomalous information. Even experts do this, as the history of science illustrates. Further, intuitive beliefs and alternative frameworks can continue to be activated in particular contexts even after an individual shows evidence of understanding and using a scientific concept.

Importantly, people can hold multiple conceptions about phenomena as they engage in rapid reorganization of knowledge and respond to the demands of a particular context. Even experts will shift their reasoning and understanding about a phenomenon depending upon the context. When confronted with novel activities or practices, learners may need to create their own alternative pathways to reconcile conflicting cultural, ethnic, and academic identities.

Learning environments that treat learners' alternative conceptions only as wrong can produce conflicts between learners' cultural, ethnic, and academic identities, and this approach can also narrow the possibilities for generative engagement between community ways of knowing and scientific ways of knowing. Instead, research shows that many phenomena of interest in scientific study are intimately related to people's everyday experiences, and that the knowledge systems of cultural communities historically underrepresented in science can, and should, be regarded as assets for learning. Educators can do this in a variety of ways. The use of culturally relevant examples, analogies, artifacts, and community resources that are familiar to learners can make science more relevant and understandable, and integrated approaches that rely on community member participation (e.g., input from elders, use of traditional language, and respect for cultural values) help learners navigate between Western modern scientific thinking and other ways of knowing. Researchers also point out that science inquiry demands patience, skepticism, and a willingness to embrace uncertainty and ambiguity, which in turn demands trust between teachers and students. Accordingly, the development of trusting and caring relationships between teachers and students may be necessary in order to develop deep understandings of science content and practices. In short, research demonstrates that conceptual learning is advanced in contexts and with instructors that recognize that learners are simultaneously developing expertise in multiple knowledge systems.


Q.3   Enlist the six major divisions of cognitive domain in ascending order of complexity. Provide at least one example of objectives each from Physics, Chemistry, Biology and Mathematics.

One of the most widely used ways of organizing levels of expertise is according to Bloom’s Taxonomy of Educational Objectives. Bloom’s Taxonomy uses a multi-tiered scale to express the level of expertise required to achieve each measurable student outcome. Organizing measurable student outcomes in this way will allow us to select appropriate classroom assessment techniques for the course.

There are three taxonomies. Which of the three to use for a given measurable student outcome depends upon the original goal to which the measurable student outcome is connected. There are knowledge-based goals, skills-based goals, and affective goals (affective: values, attitudes, and interests); accordingly, there is a taxonomy for each. Within each taxonomy, levels of expertise are listed in order of increasing complexity. Measurable student outcomes that require the higher levels of expertise will require more sophisticated classroom assessment techniques.

Consider, for example, the goals for a course on dental hygiene. A knowledge-based goal might be “student knows proper dental hygiene techniques.” It is knowledge-based because it requires that the student learn certain facts and concepts. An example of a skills-based goal for this course might be “student flosses teeth properly.” This is a skills-based goal because it requires that the student learn how to do something. Finally, an affective goal for this course might be “student cares about proper oral hygiene.” This is an affective goal because it requires that the student's values, attitudes, or interests be affected by the course.

Table 1: Bloom’s Taxonomy of Educational Objectives for Knowledge-Based Goals

Level of Expertise | Description of Level | Example of Measurable Student Outcome
1. Knowledge | Recall, or recognition of terms, ideas, procedures, theories, etc. | When is the first day of Spring?
2. Comprehension | Translate, interpret, extrapolate, but not see full implications or transfer to other situations; closer to literal translation. | What does the summer solstice represent?
3. Application | Apply abstractions, general principles, or methods to specific concrete situations. | What would Earth’s seasons be like in specific regions with a different axis tilt?
4. Analysis | Separation of a complex idea into its constituent parts and an understanding of organization and relationship between the parts. Includes realizing the distinction between hypothesis and fact as well as between relevant and extraneous variables. | Why are seasons reversed in the southern hemisphere?
5. Synthesis | Creative, mental construction of ideas and concepts from multiple sources to form complex ideas into a new, integrated, and meaningful pattern subject to given constraints. | If the longest day of the year is in June, why is the northern hemisphere hottest in August?
6. Evaluation | To make a judgment of ideas or methods using external evidence or self-selected criteria substantiated by observations or informed rationalizations. | What would be the important variables for predicting seasons on a newly discovered planet?

 

Table 2: Bloom’s Taxonomy of Educational Objectives for Skills-Based Goals

Level of Expertise | Description of Level | Example of Measurable Student Outcome
Perception | Uses sensory cues to guide actions | Some of the colored samples you see will need dilution before you take their spectra. Using only observation, how will you decide which solutions might need to be diluted?
Set | Demonstrates a readiness to take action to perform the task or objective | Describe how you would go about taking the absorbance spectra of a sample of pigments.
Guided Response | Knows steps required to complete the task or objective | Determine the density of a group of sample metals with regular and irregular shapes.
Mechanism | Performs task or objective in a somewhat confident, proficient, and habitual manner | Using the procedure described below, determine the quantity of copper in your unknown ore. Report its mean value and standard deviation.
Complex Overt Response | Performs task or objective in a confident, proficient, and habitual manner | Use titration to determine the Ka for an unknown weak acid.
Adaptation | Performs task or objective as above, but can also modify actions to account for new or problematic situations | You are performing titrations on a series of unknown acids and find a variety of problems with the resulting curves, e.g., only 3.0 mL of base is required for one acid while 75.0 mL is required for another. What can you do to get valid data for all the unknown acids?
Origination | Creates new tasks or objectives incorporating learned ones | Recall your plating and etching experiences with an aluminum substrate. Choose a different metal substrate and design a process to plate, mask, and etch so that a pattern of 4 different metals is created.

 

Table 3: Bloom’s Taxonomy of Educational Objectives for Affective Goals

Level of Expertise | Description of Level | Example of Measurable Student Outcome
Receiving | Demonstrates a willingness to participate in the activity | When I’m in class I am attentive to the instructor, take notes, etc. I do not read the newspaper instead.
Responding | Shows interest in the objects, phenomena, or activity by seeking it out or pursuing it for pleasure | I complete my homework and participate in class discussions.
Valuing | Internalizes an appreciation for (values) the objectives, phenomena, or activity | I seek out information in popular media related to my class.
Organization | Begins to compare different values, and resolves conflicts between them to form an internally consistent system of values | Some of the ideas I’ve learned in my class differ from my previous beliefs. How do I resolve this?
Characterization by a Value or Value Complex | Adopts a long-term value system that is “pervasive, consistent, and predictable” | I’ve decided to take my family on a vacation to visit some of the places I learned about in my class.

To determine the level of expertise required for each measurable student outcome, first decide which of these three broad categories (knowledge-based, skills-based, and affective) the corresponding course goal belongs to. Then, using the appropriate Bloom’s Taxonomy, look over the descriptions of the various levels of expertise. Determine which description most closely matches that measurable student outcome. As can be seen from the examples given in the three Tables, there are different ways of representing measurable student outcomes.

Bloom’s Taxonomy is a convenient way to describe the degree to which we want our students to understand and use concepts, to demonstrate particular skills, and to have their values, attitudes, and interests affected. It is critical that we determine the levels of student expertise that we are expecting our students to achieve because this will determine which classroom assessment techniques are most appropriate for the course.

For example, multiple-choice tests rarely provide information about the achievement of skills-based goals. Similarly, traditional course evaluations, a technique commonly used for affective assessment, do not generally provide useful information about changes in student values, attitudes, and interests.

Thus, commonly used assessment techniques, while perhaps providing a means for assigning grades, often do not provide us (or our students) with useful feedback for determining whether students are attaining our course goals. Usually, this is due to a combination of not having formalized goals to begin with, not having translated those goals into outcomes that are measurable, and not using assessment techniques capable of measuring expected student outcomes given the levels of expertise required to achieve them. Using the CIA (curriculum, instruction, assessment) model of course development, we can ensure that our curriculum, instructional methods, and classroom assessment techniques are properly aligned with course goals.

Note that Bloom’s Taxonomy need not be applied exclusively after course goals have been defined. Indeed, Bloom’s Taxonomy and the words associated with its different categories can help in the goals-defining process itself. Thus, Bloom’s Taxonomy can be used in an iterative fashion to first state and then refine course goals. Bloom’s Taxonomy can finally be used to identify which classroom assessment techniques are most appropriate for measuring these goals.  


Q.4   How can the application category of objectives be assessed? Discuss.       

Learning objectives ideally describe a direction for the student acquiring new knowledge, skills, and attitudes. Every decision you make about your lecture or small group session should depend on what you hope your students will be able to do as a result of your session.

Why are learning objectives important? As an expert in your field, you probably already have a good idea of what you want your students to learn during your time with them.  Taking a few minutes before you finalize your session content and activities to capture those objectives is a worthwhile investment – in the development of successful learning experiences for your students and in your own development as an educator.

More specifically, learning objectives

  • Force you to look again.  The exercise of writing or rewriting objectives prompts you to examine content you may have been teaching in much the same way for years, but with a new perspective.
  • Help you trim the fat.  Allowing your learning objectives to drive your content or activity can result in discovering extraneous content that may be trimmed or an activity that doesn’t quite hit the target and needs tweaking.  You may simply be inspired to reorganize a meandering PowerPoint with your learning objectives as an outline.
  • Can make your session “fall in line.”  Once written, learning objectives can confirm a solid alignment or organization of learning activities and assessments or suggest that a fresh pass at your design of the learning experience is needed.  For example, they are invaluable in helping you create your quiz questions – indeed, a quiz should measure whether your objectives have been met.
  • Can provide opportunities to present a richer and more challenging learning experience for your students. Your learning objectives will illuminate the level, whether higher or lower, at which you are asking your students to think, process, and learn during your session.
  • Be a guide for your students.  When displayed to students, learning objectives set student expectations, guide their learning processes, and help them focus their study time for the upcoming exam(s).

How do I write good learning objectives?

Every learning opportunity can have its own objectives, from a multi-session unit to a single lecture or assignment.

  • Good learning objectives are clear, concise, and specific statements describing a student’s behaviors. Only a few short bullet points per activity should be necessary.
  • Learning objective template:  “At the end of this (session, lecture, activity, etc.), students will be able to ____ (insert an action verb).”
  • Good learning objectives are specific and measurable statements of what students will specifically be able to do differently after your instructional activity (i.e., not statements of what the instructor will do).

Objectives should not contain such vague outcomes as “Students will understand…” or “Students will know…” This is not to say that students should never simply acquire knowledge, but you will be more likely to measure this knowledge when students “Describe,” “List,” or “Identify” that knowledge.   For example:

After the lecture on “bioenergetics,” students should be able to

Weak Use of Verbs

Understand the uncoupling of the electron transport chain from ATP synthesis and the physiological consequences

Improved

Describe the uncoupling of the electron transport chain from ATP synthesis and the physiological consequences

Note: Understanding, per se, cannot be measured. Words such as know, understand, and learn are open to many interpretations and thus not truly measurable.

  • Good learning objectives are actual outcomes and not simply activities students will complete or things you will do as an instructor. For example, “Students will write a research paper…” is the start of an assignment, not an objective for learning.

Benjamin Bloom’s taxonomy (1956) is very helpful in writing learning objectives for the cognitive (knowing), psychomotor (doing: skill), and affective (attitude) domains. Much of the medical school curriculum focuses on the cognitive domain, which Bloom categorized into six levels, starting from simple recall or recognition of facts (knowledge), through increasingly more complex and abstract mental levels, to the highest order (evaluation).

Here are some examples of action verbs that represent each of the six cognitive levels, from lowest to highest, which you should consider using:

Knowledge:  define, list, name, order, recognize, recall, label

Comprehension: classify, describe, discuss, explain, identify, locate, report, review

Application:  apply, choose, demonstrate, illustrate, practice, solve, use

Analysis:  analyze, appraise, calculate, compare/contrast, differentiate, diagram

Synthesis: arrange, assemble, construct, design, formulate, prepare, write

Evaluation: assess, argue, judge, predict, rate, evaluate, score, choose
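As a rough illustration of how these verb lists map onto Bloom’s cognitive levels, here is a minimal Python sketch; the verb sets mirror the lists above, while the function name and the first-word heuristic are illustrative assumptions rather than any standard tool.

```python
# Minimal sketch: suggest the Bloom level implied by the action verb that
# opens a learning objective. Some verbs (e.g., "choose") appear at more
# than one level, so the first match is only a rough guide.

BLOOM_VERBS = {
    "Knowledge": {"define", "list", "name", "order", "recognize", "recall", "label"},
    "Comprehension": {"classify", "describe", "discuss", "explain", "identify",
                      "locate", "report", "review"},
    "Application": {"apply", "choose", "demonstrate", "illustrate", "practice",
                    "solve", "use"},
    "Analysis": {"analyze", "appraise", "calculate", "compare", "contrast",
                 "differentiate", "diagram"},
    "Synthesis": {"arrange", "assemble", "construct", "design", "formulate",
                  "prepare", "write"},
    "Evaluation": {"assess", "argue", "judge", "predict", "rate", "evaluate",
                   "score", "choose"},
}


def suggest_bloom_level(objective: str) -> str:
    """Return the Bloom level whose verb list contains the objective's first word."""
    first_word = objective.lower().split()[0].strip(",.;:")
    for level, verbs in BLOOM_VERBS.items():
        if first_word in verbs:
            return level
    return "Unclassified (consider rewording with a measurable action verb)"


print(suggest_bloom_level("Describe the uncoupling of the electron transport chain"))
# -> Comprehension
print(suggest_bloom_level("Understand the physiological consequences"))
# -> Unclassified (consider rewording with a measurable action verb)
```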

  • Example of a learning objective at various cognitive levels of Bloom’s Taxonomy:

After the lecture on dizziness, students will be able to

Lower level

Name the five causes of dizziness (lower level of cognition; simple recall)

Higher Level

Given a patient case description, determine the three most likely causes of dizziness (higher level of cognition).

A learning objective states specifically what a student should be able to do.

Here are some examples of good learning objectives:
Students will be able to:

  • Identify different levels of data in new scenarios.
  • Explain in context a confidence interval.
  • Determine which probability distribution (binomial, Poisson or normal) is most appropriate as a model in an unfamiliar situation.
  • Compare two time series models of the same data and evaluate which is more appropriate in a given context.

Learning objectives need to be specific and measurable.

Here are some things that people might think are learning objectives, but are not:

  • Students will understand the central limit theorem. (The term “understand” is not measurable)
  • Students will learn about probability trees (“learn” is not measurable, and does not specify the level. Do students need to be able to interpret or create probability trees?)

There are vast numbers of resources on learning objectives online.

Here is one I liked, with Bloom’s taxonomy of levels of learning. These are higher and lower levels of learning objectives, ranging from being able to state principles, through to synthesis and evaluation.
http://teachonline.asu.edu/2012/07/writing-measurable-learning-objectives/
And here are some useful verbs to use when writing learning objectives:
http://www.schreyerinstitute.psu.edu/pdf/SampleVerbs_for_LearningObjectives.pdf
It is not difficult to find material on developing learning objectives.

Not just learning objectives

A course is more than the set of its learning objectives. The learning objectives specify the skills, but there are also attitudes and knowledge to be considered. The starting point for course design is the attitudes. What do we want the students to feel about the topics? What changes do we wish them to contemplate in their thinking? Then the skills and knowledge are specified, often starting at a quite general level, then working down to specifics.
For example, we might wish to teach about confidence intervals. We need to determine whether students need to be able to calculate them, interpret them, estimate or derive them.  We need to decide which confidence intervals we are interested in – for means alone, or proportions and slopes as well? Sometimes I find there are concepts I wish to include in the learning objectives, but they don’t really work as objectives. These I put as “important concepts and principles”.
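As a small illustration of the “calculate” level for this topic, here is a minimal Python sketch of a 95% confidence interval for a mean; the scores are invented for the example and the t critical value (9 degrees of freedom) is taken from standard tables.

```python
import math
from statistics import mean, stdev

# Hypothetical sample of exam scores (illustrative data only)
scores = [62, 71, 68, 75, 66, 70, 73, 69, 64, 72]

n = len(scores)
x_bar = mean(scores)
s = stdev(scores)      # sample standard deviation (n - 1 denominator)
t_crit = 2.262         # t critical value for a 95% CI with 9 degrees of freedom

margin = t_crit * s / math.sqrt(n)
print(f"95% CI for the mean: ({x_bar - margin:.1f}, {x_bar + margin:.1f})")
```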
I have put an example of learning objectives and concepts and principles at the end of this post.

Learning objectives tell students what is important

Without learning objectives it is difficult for students to know what they are supposed to be learning. In a lecture, a teacher can talk extensively about a case, but unless she states it explicitly, it can be difficult for the students to know where to direct their attention. Do they need to know the details of that specific case, or are there principles they are supposed to glean from the example? Or was it just a “war story” to entertain the troops? Students can waste a great deal of time studying things that are not necessary, to the detriment of their learning as a whole. The uncertainty also causes unnecessary anxiety.

Learning objectives enable good assessment development

Each year as we wrote our assessments we would go through the learning objectives and make sure they were assessed. This way the assessment was fair and aligned with the course. If we found it difficult to write a question to assess a learning objective, we would think again about the learning objective and what it is we really want the students to be able to do. This made it easier to write fair, comprehensive assessments.

Learning objectives encourage reflection and good course design and development

As instructors write and review the learning objectives in a course, they can identify the level of learning that is specified in each. At an entry-level course, it is acceptable to have a number of lower level learning objectives. However, there needs to be some serious thinking done if a post-graduate course is not mainly made up of higher level learning objectives. I have seen tests in stage 2 and 3 papers that tested mainly recall and common-sense. It was evident that the instructor had not thought clearly about the level of learning that was expected.
Sometimes we find we are assessing things we have not specifically taught the students. The use of learning objectives, linked with assessment design, helps us to identify the background knowledge that we assume students have. One colleague was frustrated that the students did not seem able to apply the statistical results to a managerial context. However, nowhere had she specified that students would be required to do so, and nowhere had she actually taught students how to do this. She had also assumed a level of understanding of business that was probably not appropriate to expect of undergraduate students.

Like it or not, assessment drives learning

I spoke recently to a maths advisor who informed me that teachers should be teaching to the curriculum not to the assessments. I felt he was idealistic, and told him so. My experience is that university students will learn what is assessed, and nothing else. I don’t know at what age this begins, but I suspect National Testing, the bane of good education, has lowered the age considerably. How wonderful it would be if our students learned for the sheer joy of learning! Where there are assessments looming, I fear this is unlikely.
When we write exams we are also writing learning materials for future students. One of the most common ways to prepare for an assessment is to do exercises from previous assessments. So when we feel that students are not really coming to grips with a concept, we include questions about it in the assessment, which can then be used by future students for review.

Information promotes equity and reduces unnecessary stress

The use of learning objectives can help reduce the “gaming” aspects that can proliferate in the absence of clear information. This is apparent at present in the world of Year 13 Statistics in New Zealand. The information regarding the external standards for 2013 is still sketchy (as of 1 July 2013). The exams are written by external examiners and will take place in November of this year. However, there is still only vague and sometimes incorrect information as to exactly what may or may not be included in the exams. Because of this, teachers are trying to work out, from what is or isn’t in the formula sheet and the (not totally correct) exemplars, what might be in the finals and what to include in the school practice exams. I suspect that some teachers or areas have more information than others. The way to make this fairer is to specify, as learning objectives, what material may be included. Let us hope that some clarity comes soon, for the sake of the teachers and the students.


4 Discuss comprehension in the context of its three levels. Support your answer with relevant objectives on any topic of science.

Imagine a boy named Billy. He is sitting alone in a corner and building a tower out of blocks. He places one block on top of another as his tower becomes higher and higher. The more blocks he adds, the more intricate his design becomes.

We can compare Billy’s intricate tower design to the process of reading comprehension. Reading comprehension is the ability to process information that we have read and to understand its meaning. This is a complex process where skills are built upon one another like the blocks used to make Billy’s tower. There are three levels of understanding in reading comprehension: literal meaning, inferential meaning, and evaluative meaning.

Let’s take a closer look at each of these different meanings.

Literal Meaning

Literal meaning is simply what the text says. It is what actually happens in the story. This is a very important level of understanding because it provides the foundation for more advanced comprehension. Without understanding the material on this level, you could not go any farther.

Let’s use our story about Billy to provide an example. The literal meaning of the story was that Billy built a tower out of blocks. The answers to questions based on literal meaning will always be found in the text. For example: Who was building the tower? The answer is Billy.

Here are examples of the type of information that could be identified as literal meaning:

  • The main idea
  • Stated facts
  • The sequence of events
  • Characters in the story

Inferential Meaning

Inferential meaning involves determining what the text means. You start with the stated information. This information is then used to determine deeper meaning that is not explicitly stated. Determining inferential meaning requires you to think about the text and draw a conclusion.

Getting back to Billy again, what inferential meaning could we get from our story? We could infer that Billy is good at building towers! A question about inferential meaning will typically make you provide examples from the text that back up your thinking. For example: Why could you assume that Billy is good at building towers? You assume this is true because the story says that Billy’s tower got higher and higher, and the design became more intricate with each block.

Examples of the type of information that could be identified as inferential meaning include:

  • Generalizations
  • Cause and effect relationships
  • Future predictions
  • An unstated main idea

Evaluative Meaning

Evaluative meaning is what the text is telling us about the world outside the story. Readers must analyze what they have read. Then, they must form an opinion based on the information.

When readers read or view a text they can understand it on different levels. Deep comprehension occurs when all levels have been considered.

Literal comprehension occurs at the surface level when a reader/viewer acknowledges what they can see and hear. The details are stated and clear for anyone to identify. Literal comprehension is often referred to as ‘on the page’ or ‘right there’ comprehension. This is the simplest form of comprehension.

Inferential comprehension requires the reader/viewer to draw on their prior knowledge of a topic and identify relevant text clues (words, images, sounds) to make an inference. Inferential comprehension is often referred to as ‘between the lines’ or ‘think and search’ comprehension. This level of comprehension requires more skill but can be achieved by young children.

Evaluative comprehension requires the reader to move beyond the text to consider what they think and believe in relation to the message in the text. It is at this point that readers/viewers are required to justify their opinions, argue for a particular viewpoint, critically analyse the content and determine the position of the author. Evaluative comprehension is often referred to as ‘beyond the text’ and includes ‘big picture’ comprehension. Often there is no right or wrong answer, but rather justification for thinking in a particular way.

Levels of Comprehension

The three levels of comprehension, or sophistication of thinking, are presented in the following hierarchy from the least to the most sophisticated level of reading.

  • Least = surface, simple reading
  • Most = in-depth, complex reading

Level One

LITERAL – what is actually stated.

  • Facts and details
  • Rote learning and memorization
  • Surface understanding only

Tests in this category are objective tests dealing with true / false, multiple choice and fill-in-the-blank questions.

Common questions used to elicit this type of thinking are who, what, when, and where questions.

Level Two

INTERPRETIVE – what is implied or meant, rather than what is actually stated.

  • Drawing inferences
  • Tapping into prior knowledge / experience
  • Attaching new learning to old information
  • Making logical leaps and educated guesses
  • Reading between the lines to determine what is meant by what is stated.

Tests in this category are subjective, and the types of questions asked are open-ended, thought-provoking questions like why, what if, and how.

Level Three

APPLIED – taking what was said (literal) and what was meant by what was said (interpretive) and then extending (applying) the concepts or ideas beyond the situation.

  • Analyzing
  • Synthesizing
  • Applying

In this level we are analyzing or synthesizing information and applying it to other information.


Q.5   What are the evaluation techniques for analysis objectives? Discuss in detail.     

An evaluation is a systematic assessment of how well a project or program is meeting established goals and objectives. Evaluations involve collecting and analyzing data to inform specific evaluation questions related to project impacts and performance. This performance information enables project managers to:

  • Report progress and make improvements, as necessary, to ensure the achievement of longer-term impacts
  • Assess and communicate the effectiveness of new technologies

Evaluations can be used at different points in the project lifecycle. For example, some evaluations are conducted during implementation to assess whether a technology is operating as planned, while others are conducted post-implementation to assess the outcomes and impacts of a technology. During the pre-implementation phase, as the project design is underway, evaluation planning must also be conducted; the remainder of this section describes these key evaluation planning activities. During the implementation phase, as the technology is being tested and fully implemented, the data collection methods should also be tested and any baseline data collection should be completed (baseline data may also have been collected during pre-implementation). Once the technology has been implemented, post-deployment data are collected for the duration of the evaluation period. Grantees should report interim as well as final evaluation and performance measurement findings in their annual reports.

The first step in conducting a project evaluation is assembling an evaluation team. Evaluations can be conducted using an internal evaluation team, independent evaluators, or a mix of both. Evaluators should be brought on board as early as possible so that the design of the evaluation can occur as the deployment is being planned and the project generates sufficient data to support the evaluation. Given the reporting requirements in the FAST Act, it is recommended that an independent evaluator be used to design and manage ATCMTD evaluations.

Due to the complex nature of ATCMTD systems and technologies, evaluators should work closely with the ATCMTD project team. Evaluators should have regular access to the project team members who are implementing the technology and collecting the data. The project team should set up regular opportunities for the evaluators to work with data providers during and after the data collection period. Data issues are common, and it’s best to troubleshoot these issues collaboratively.

Developing an evaluation plan puts grantees in the best position to identify and collect the data needed to assess the impacts of their ATCMTD technology deployments. This plan is a blueprint for the evaluation; it includes the specifics of the evaluation design and execution, as well as a description of the project and its stakeholders. The activities involved in evaluation planning and execution are discussed below, and templates can help grantees structure and document their evaluation and performance measurement plans.

While identifying the evaluation questions and performance measures, grantees should also be developing an appropriate evaluation design that describes how, within the constraints of time and cost, they will collect data that addresses the evaluation questions. This process entails identifying the experimental design, the sources of information or methods used for collecting the data, and the resulting data elements.
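As a rough sketch of how these design elements fit together, the following Python fragment records them as a simple data structure; the field names and example values are illustrative assumptions, not an official ATCMTD schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the elements an evaluation design ties together.
@dataclass
class EvaluationDesign:
    evaluation_question: str
    performance_measures: list[str]
    experimental_design: str          # e.g., "before-after", "quasi-experimental"
    data_sources: list[str]
    data_elements: list[str] = field(default_factory=list)

design = EvaluationDesign(
    evaluation_question="Did the deployment reduce corridor travel times?",
    performance_measures=["average travel time", "travel time reliability"],
    experimental_design="before-after with a comparison corridor",
    data_sources=["probe vehicle data", "roadside detectors"],
    data_elements=["segment travel time", "timestamp", "corridor ID"],
)
print(design.experimental_design)
```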

The experimental design frames the logic for how the data will be collected. Evaluations of technology deployments often utilize a before-after design, whereby pre-deployment data (i.e., baseline data) are compared with data collected following the deployment of the technology. For certain evaluation questions, however, it may be appropriate to collect data only during the “after” period. For example, for measures related to user satisfaction with a technology, the design could include surveys only in the post-deployment period.

More robust designs, such as randomized experimental and quasi-experimental designs, utilize a control group that does not receive the “treatment” of a program’s activities, in order to account for potential confounding factors. The same data collection procedures are used for both the treatment and control groups, but the expectation is that the hypothesized outcome (improved safety, mobility, etc.) occurs only within the treatment group and not the control group.
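To make the before-after-with-control logic concrete, here is a minimal Python sketch of a difference-in-differences style comparison; the corridor travel times are invented, and the simple comparison of means stands in for the fuller statistical testing a real evaluation would apply.

```python
from statistics import mean

# Hypothetical average travel times (minutes) on two corridors.
# "treatment" received the technology deployment; "control" did not.
before = {"treatment": [22.4, 21.8, 23.1, 22.9], "control": [19.6, 20.1, 19.9, 20.4]}
after  = {"treatment": [19.2, 18.7, 19.5, 19.0], "control": [19.8, 20.0, 19.7, 20.3]}

def change(group: str) -> float:
    """Average after-minus-before change for one group."""
    return mean(after[group]) - mean(before[group])

# Difference-in-differences: change in the treatment group beyond
# whatever change also occurred in the control group.
did = change("treatment") - change("control")
print(f"Treatment change: {change('treatment'):+.2f} min")
print(f"Control change:   {change('control'):+.2f} min")
print(f"Estimated effect of the deployment: {did:+.2f} min")
```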

Evaluation designs are applied to the different methods or information sources that are utilized in the evaluation.
