
Types of Important Observations to Support Project Assessment – A Case Study

Svetlana V. Drachova, Joseph E. Hollingsworth, and Murali Sitaraman

Technical Report RSRG-13-07
School of Computing
100 McAdams
Clemson University
Clemson, SC 29634-0974 USA

September 2013

Copyright © 2013 by the authors. All rights reserved.

Types of Important Observations to Support Project Assessment – A Case Study

Svetlana V. Drachova
Limestone College
Computer Science
Gaffney, SC 29634
1-864-656-3444
[email protected]

Joseph E. Hollingsworth
Indiana University Southeast
Computer Science
New Albany, IN
1-812-941-2425
[email protected]

Murali Sitaraman
Clemson University
School of Computing
Clemson, SC 29634
1-864-656-3444
[email protected]

ABSTRACT
Evaluation is a critical component of all successful CS educational projects. One key benefit of evaluation is that it facilitates continuous improvement by helping pinpoint areas where there is room for improvement. Such pinpointing is difficult even in the case of projects where evaluation assumes a central role, because it is often hard to factor out the impact of various elements that confound the results. This paper is a multi-year case study of our own experiences in evaluation. It identifies a variety of important observations that we have made in the context of our project that we believe will be useful in guiding other educational project assessment efforts. Clarifying the impact of various factors on project results also makes it easier for others interested in project results to replicate them elsewhere.

Categories and Subject Descriptors
K.3.2 [Computers and Education]: Computer and Information Science Education – computer science education

General Terms
Management, Measurement

Keywords
Assessment, attitudes, concept inventory, performance-based learning outcomes, project evaluation

1. INTRODUCTION
The focus of this paper is the assessment process of a project that has as one of its goals that undergraduate computing students will learn specific concepts or skills. Often these concepts or skills are novel to the undergraduate curriculum, either because they are relatively new to the computing field or because they have heretofore been relegated solely to graduate-level education. Assessment of such a project is paramount for a number of reasons. High among these reasons is being able to use the assessment results as feedback into the project for continuous improvement over time. Furthermore, if a project has received funding, its principal investigators must be able to provide quantitative evidence that the funding provided to the project has produced positive results [1]. The main purpose of this paper is to identify various types of data-supported observations that can be used by a project director, funded or not, for driving improvement and for justifying the project's benefits. We will list a number of such general observations and illustrate them with specific examples.

As a topic in itself, assessment in education has received considerable attention, including journals devoted to presenting techniques for evaluation, their effectiveness, and the impact of various confounding factors. Whereas the goal of these efforts is general applicability, assessment results to show the benefits of specific techniques and tools in CS education are often the focus of presentations at SIGCSE. This paper makes CS educational assessment itself its topic, and it is based on our own experience over a 5-year period.
It contains a variety of useful observations for CS educators who are infusing novel ideas in their classrooms. Section 2 provides foundational material for one approach to systematically collecting data to support the observations. Section 3 discusses data collection details. Section 4 lists the general kinds of observations that a project director may wish to make, along with specific examples to illustrate these observations. Section 5 concludes with some discussion.

2. FOUNDATIONS
In general, assessment needs to be data driven. That includes data for measuring student learning as well as for measuring students' attitudes. With respect to collecting data for a particular subject area, one way to support the systematic collection of direct evidence of student learning is to base this data collection on a reasonably comprehensive inventory that lists the concepts and skills required of anyone working in that subject area. The examples provided in this paper are based on an inventory called the reasoning concept inventory (or RCI), which captures much of what needs to be taught to permit students to mathematically reason about the correctness of a piece of software so as to support the development of high-quality software [2]. Technical results of the project are reported elsewhere [3,4,5,6,7] and are not directly relevant here.

Building on this approach with the inventory at the foundation, next come performance-based learning outcomes. The learning outcomes refine the more general ideas captured in the inventory and more precisely identify the concepts and skills expected to be exhibited by the learner after instruction. The "performance-based" part of a performance-based learning outcome has to do with choosing action verbs that describe the "performance" expected of a learner when asked to demonstrate that the concept or skill has been learned. These action verbs are key to helping instructors write quality assessments because they frequently form the backbone of a particular assessment question.

Furthermore, to support collecting data at the appropriate level of difficulty, Bloom's taxonomy is employed. Within the cognitive domain, the difficulty of a particular skill is considered to fall within one of the following six levels, listed from the lowest level of difficulty to the highest: knowledge, comprehension, application, analysis, synthesis, and evaluation. We found that having three levels satisfied our assessment needs and also reduced some of the complexity surrounding assessment, so our learning outcomes appear at one of three levels: Knowledge-Comprehension (or KC, which combines the two lowest levels), Application-Analysis (or AA), and Synthesis-Evaluation (or SE). Educational researchers have developed comprehensive lists of action verbs that correspond to these cognitive levels, and these verbs can then be employed in writing the performance-based learning outcomes. For example, by choosing a verb from the AA level, one can construct a learning outcome that expects the learner to demonstrate the ability to apply a particular concept or skill, or to perform an analysis of a given situation or problem.
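To make the mapping between the collapsed Bloom levels and action verbs concrete, the sketch below (in Python, which this paper does not prescribe) shows one way an instructor might tag a draft learning outcome by its leading verb. The verb lists are small, illustrative samples of the kind found in published "Bloom's verbs" tables; they are not the project's actual lists.

```python
# Illustrative sketch only: small sample verb lists keyed by the paper's
# collapsed Bloom levels (KC, AA, SE). Real "Bloom's verbs" tables are far
# more comprehensive.
BLOOM_LEVELS = {
    "KC": {"define", "identify", "list", "summarize", "explain"},     # Knowledge-Comprehension
    "AA": {"apply", "trace", "analyze", "compute", "differentiate"},  # Application-Analysis
    "SE": {"write", "construct", "defend", "evaluate", "prove"},      # Synthesis-Evaluation
}

def classify_outcome(outcome: str) -> str:
    """Guess the Bloom level of a performance-based learning outcome
    from its leading action verb."""
    verb = outcome.split()[0].lower()
    for level, verbs in BLOOM_LEVELS.items():
        if verb in verbs:
            return level
    return "unknown"

if __name__ == "__main__":
    # Example outcome taken from Section 4.1 of this paper.
    outcome = ("Write an ensures clause that precisely captures "
               "the behavior of an operation.")
    print(classify_outcome(outcome))  # -> SE
```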
In Section 4, we introduce a number of observations about student learning that are supported by the analysis of the project data that was collected. This collected data came from assessment instruments written from performance-based learning outcomes, where those outcomes utilized action verbs at various levels of difficulty. All of the outcomes were based on our concept inventory.

3. DATA COLLECTION
This section explains how the Reasoning Concept Inventory, along with the learning outcomes and methods for their instruction, forms the basis for experimentation, data collection, evaluation, and improvement. Complete details may be found elsewhere [2]. The experimentation involves multiple undergraduate courses at 11 universities. The required IRB procedures were in place at two of the 11 universities but not at the other nine. However, first-hand reports from the adopting instructors at these nine universities indicate that students had a positive outcome learning the reasoning topics.

At Clemson, where the appropriate IRB procedures had been executed, CPSC215 and CPSC372 were the targeted courses for adoption. CPSC215 is a sophomore-level Software Development Foundations course, and CPSC372 is a follow-on junior-level course titled Introduction to Software Engineering. In both classes, in the semesters prior to full-blown adoption of a portion of the RCI reasoning principles, small pilots were run to gain some initial experience. In CPSC215, adoption encompassed four weeks of the semester, and data was collected over six semesters from eight different sections. In CPSC372, approximately one third of the semester covered the reasoning principles, and data was collected over four semesters in four different sections. In the remaining two-thirds of CPSC372, traditional software engineering topics were taught, which also factored into the data analysis (see Section 4.4). At Alabama, the IRB procedures were also executed, and CS315 (a junior-level software engineering course) was targeted. In CS315, three class periods per semester covered the reasoning principles, and data was collected over a three-semester period.

While the collected data consists of student midterm and final examinations, and select assignments and quizzes that incorporated the RCI (reasoning) principles, all the data used for illustration in this paper come from final examination questions. Also note that the data of interest was gathered from specific questions based on specific learning outcomes, which in turn were based on specific RCI reasoning principles; it was not gathered from an entire test or quiz score.

4. TYPES OF OBSERVATIONS TO SUPPORT PROJECT ASSESSMENT
The data analysis provides evidence for various types of observations, ranging from ones that show the ability of students to learn new (reasoning) principles to ones that show positive attitudes toward the new material. The observations are grouped into seven sections according to their relevance, and each is followed by a brief discussion. Before proceeding to the discussion of the observations, it needs to be emphasized that in our case the concept inventory and the learning outcomes are paramount to drawing conclusions about experimental data. Because the inventory of (reasoning) principles is divided into five areas, each of which is further subdivided into several levels, and because learning outcomes are written at the appropriate level of difficulty, learning of each principle can be assessed with a high degree of precision. Being able to exactly pinpoint the deficiencies in particular areas of student learning guides the development of an effective intervention for the area in need. Table 1 provides a quick overview of the observations discussed in detail in Subsections 4.1 through 4.7.

In the table below we identify a number of types of observations that a project manager may wish or need to make as part of a comprehensive assessment process.
Each of these types of observations is illustrated by a specific observation that we were able to make about our project.

Table 1. An Overview of Observation Types

1. Observations related to students' learning of the principles
2. Observations related to pinpointing and conducting interventions
3. Observations related to difficulty of assessment questions
4. Observations comparing learning of new vs. traditional principles
5. Observations related to instructors teaching reasoning principles
6. Attitudinal observations to supplement direct evidence
7. Focus group observations to supplement direct evidence

4.1 Observation Type #1: Students Are Capable of Learning What Is Being Taught
Central to most undergraduate education projects is the idea that, by some means, undergraduate students can learn material that has either never been systematically taught at the undergraduate level, or, if it has been taught, that the students can learn the material more efficiently or at a deeper level than before. Consequently, it is imperative that such a project demonstrates that students are capable of learning what is being taught. One method for demonstrating this is through the development of assessment instruments in the manner described in Section 2.

Illustrative example:
One of the learning goals of our project is to demonstrate that undergraduate CS students are capable of learning the mathematics required to formally reason about the correctness of a piece of software, and of applying what they have learned to reason about software artifacts at the appropriate instructional level. We provided instruction on these topics in CPSC372. The data was collected over four semesters, Fall 2010 through Spring 2012. Four principles from our RCI inventory were assessed at the AA and SE levels of difficulty (from Bloom's taxonomy). Table 2 contains the collected data.

Table 2. Data Supporting Observation Type #1

                Difficulty Level    Fa10    Sp11    Fa11    Sp12
RCI #3.4.3      AA                  94%     78%     89%     84%
RCI #4.1.1.2    SE                  93%     79%     86%     84%
RCI #5.2.2      SE                  61%     73%     76%     71%
RCI #5.3        SE                  88%     88%     46%     86%

Discussion of Data:
The percentages in Table 2 represent the class average for a particular assessment question in a given semester. The project team members must identify a cutoff percentage for their project (or department/institution) with respect to making the claim that the students have learned the material. Seventy percent might serve as the cutoff if, for example, a department requires for passing that a student earn a 'C' or better (where a 'C' cutoff is at the 70% level). In Table 2, our project failed to meet the 70% cutoff for RCI #5.2.2 (row 3) in Fall 2010 and for RCI #5.3 (row 4) in Fall 2011. Our project utilized this data to pinpoint what was believed to be causing problems and, based on that analysis, to subsequently develop interventions for process improvement (see Section 4.2).

A complementary metric that can also be used, illustrated in row 2 of Table 3 (in a subsequent section), is to look at the percentage of students who score at a certain level or better. For example, a project might aim for 80% of the students scoring at the 70% level or better on each of the assessment questions. This metric is an approach for tracking that a project-chosen percentage of the students (80% in this example) can demonstrate that they have learned the material at a reasonable level.
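As a concrete illustration of the two metrics just described, the short sketch below computes a class average and the percentage of students at or above a cutoff from per-student scores on a single question. The scores and thresholds are made up for illustration; they are not project data.

```python
# Minimal sketch of the two metrics discussed above, applied to per-student
# scores (0-100) on a single assessment question tied to one learning outcome.

def class_average(scores):
    """Class average on the question, as a percentage."""
    return sum(scores) / len(scores)

def percent_at_or_above(scores, cutoff=70):
    """Percentage of students scoring at or above the cutoff."""
    return 100 * sum(1 for s in scores if s >= cutoff) / len(scores)

if __name__ == "__main__":
    scores = [95, 80, 72, 65, 55, 88, 70, 40, 78, 90]  # hypothetical section
    print(f"class average:   {class_average(scores):.0f}%")
    print(f"students >= 70%: {percent_at_or_above(scores, 70):.0f}%")
    # A project might, for example, require the second number to reach 80%.
```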
Finally, the RCI items that appear in Table 2 are:

• RCI #3.4.3 – applying operation pre/post-conditions in the reasoning process
• RCI #4.1.1.2 – evaluating code by tracing/inspection utilizing pre/post-conditions
• RCI #5.2.2 – evaluating code for correctness by utilizing assumptions and obligations
• RCI #5.3 – synthesizing verification conditions (VCs) and applying proof techniques that utilize VCs

An example of a specific learning outcome related to RCI #3.4.3 at the SE (Synthesis-Evaluation) level of Bloom's taxonomy might be: Write an ensures clause that precisely captures the behavior of an operation. The verb "write" in this learning outcome describes the performance expected of the student.
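For readers unfamiliar with requires/ensures clauses, the sketch below conveys the general idea of an operation contract using plain Python assertions on a hypothetical operation. It does not reproduce the specification notation actually used in the project's courses; the "#stack" convention for an operation's incoming value is likewise only illustrative.

```python
# Generic sketch of a pre-condition (requires) / post-condition (ensures) pair;
# this is not the project's specification notation.

def remove_last(stack: list):
    """Remove and return the last entry of a non-empty stack.

    requires: len(stack) > 0
    ensures:  result is the last entry of the incoming stack (#stack), and
              stack equals #stack with that entry removed
    """
    assert len(stack) > 0, "requires violated: stack must be non-empty"
    old_stack = list(stack)          # snapshot of the incoming value (#stack)
    result = stack.pop()
    assert result == old_stack[-1]   # ensures, checked at runtime
    assert stack == old_stack[:-1]
    return result
```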
4.2 Observation Type #2: Assessments Must Aid in Pinpointing Difficulties
To support continuous process improvement, project assessment must provide feedback to the PI as to where improvement of instruction can be made. Two aspects where assessment can support improving instruction include helping to pinpoint where improvements need to be made and, less directly, what type of intervention might be suitable.

Illustrative example – Where to make an intervention?
In CPSC215 we provided instruction on how to utilize pre/post-conditions of called operations to create a reasoning table of assumptions and obligations in the client operation. Table 3 contains data (based on assessment instruments developed in the manner described in Section 2) from two back-to-back semesters of the same course; the first semester's data helped to pinpoint where an intervention was needed, and the second semester's data helped to verify that the intervention made a positive impact.

It is important to note that all the data pertaining to the CPSC215 course used in this paper come from (underperforming) students who were not exempt from the final. About a third of the students, those with 'A' grades just prior to the final, were exempt.

Table 3. Data Supporting Observation Type #2

                            Metric                     Difficulty Level    Fa11    Sp12
RCI #4.1.1.2, RCI #5.2.2    Class average              AA                  64%     79%
RCI #4.1.1.2, RCI #5.2.2    % of students w/ ≥ 70%     AA                  50%     71%

Discussion of Data:
The first row of the table uses the class average metric, while the second row shows the percentage of students that scored 70% or better on the particular assessment (either multiple questions and/or partial credit were given). Since during Fall 2011 both of these metrics indicated that student performance was not as high as desired, we decided to develop and apply an intervention. Sometimes more longitudinal data is desired prior to taking action, especially when only one of the metrics is below standard, or when a metric is only slightly below the desired success rate.

The intervention taken was the development of three short educational videos (approximately five to eight minutes each) that provide a step-by-step guide to the construction of a reasoning table for a simple code example [8]. In Spring 2012, these videos provided supplemental instruction outside of class, in addition to the instruction provided during class. We recognize that, due to the many variables that cannot be controlled from one semester to another (e.g., the student makeup of each section), there is by no means a guarantee that our intervention was solely responsible for an improvement in student performance. However, achieving random assignment and controlling all variables other than the treatment condition is quite difficult in an educational setting, so we must often resort to a quasi-experimental design.

Discussion on determining the type of intervention:
Collecting data on student performance and analyzing it is only the beginning of the process of making an intervention. The next step requires considering many of the non-treatment variables that cannot usually be held constant and that confound the analysis. These variables relate to the following aspects: types of assessments, class periods, instructors, materials covered, and students. We developed a number of diagnostic questions to help determine if there might be a problem with the actual assessment, such as "Is this the first time the question is used?" or "Does the question correspond to the level of difficulty at which the concept was taught?", among others; Table 13 in [1] lists each of these variables and corresponding questions to aid in determining the type of intervention. Without a reasonable assessment process in place, at best it will be difficult to determine where an intervention is needed, and at worst it will not even be known that an intervention is required.

4.3 Observation Type #3: Assessments Must Be at the Appropriate Level of Difficulty
Just having an assessment process in place is not sufficient. The members of the project must continue to monitor the assessments to ensure they are at the appropriate level of difficulty. Inappropriate assessments are not good for students or for meaningful evaluation. If an assessment is too simple, it can lead to overconfidence or simply be a waste of time for the student. Assessments that are too difficult can discourage students or cause resentment. Furthermore, data collected and analyzed by the project based on inappropriate assessments can lead the team to take unnecessary or unwarranted interventions, or possibly to make inaccurate claims.

Performance-based learning outcomes based on Bloom's taxonomy (discussed in Section 2) then become a tool the project can use, first to raise questions concerning the level at which we expect students to perform, and then to provide a forum for discussing the level of difficulty of an assessment. If it is determined that the performance-based learning outcome and its assessment are both at acceptable levels of difficulty, but the performance measured is not, then the assessment is appropriately leading us toward making an intervention, as was discussed in Section 4.2.

Illustrative example
In CPSC215 we asked two questions on the final exam at the KC level of difficulty (the lowest level). The questions asked students to identify the correct definitions of "contract programming" and "loop invariant". Table 4 shows the result. There is nothing wrong with students scoring 100% on a particular question; however, when discussing this assessment data we realized that at the KC level we could ask the students to perform other tasks that would possibly better reinforce these concepts. For example, we could ask a student to defend the claim that a particular client has been engineered using contract programming principles, or to summarize the basic ideas surrounding loop invariants. These two verbs, "defend" and "summarize", were selected from the many that appear in tables of "Bloom's verbs".
Table 4. Data Supporting Observation Type #3

                Difficulty Level    Sp12
RCI #4.2        KC                  100%
RCI #4.3.2.1    KC                  100%

At the other end of the spectrum, in the same course in Fall 2011, CPSC215 students were given a final examination which contained a question about using mathematical models for conceptualizing objects. The question dealt with RCI #3.3.1 (mathematical modeling for conceptualizing objects) and inadvertently involved an idea beyond the knowledge of the students. The material was taught at the KC level of Bloom's taxonomy, but the assessment question was asked at the SE level. Almost none of the students got the answer right (see Table 5). (A teaching assistant who was teaching the course for the first time set this particular question.) The instruction was improved to the AA level, and the question was changed from the SE level to the AA level. This adjustment helped, and the next semester the difficulty level of the question was more appropriate, with 43% of students scoring 70% or higher. Though this reflects only the performance of non-exempt students, it is still not an ideal situation and requires more investigation into additional interventions.

Table 5. Data Supporting Observation Type #3 (RCI #3.4.3)

                Difficulty Level    Class Average    % students with 70% or higher
Fall 2011       SE                  4%               0%
Spring 2012     AA                  43%              43%

4.4 Observation Type #4: Assessments Must Show New Can Be Learned As Well As Old
In the Computer Science Curricula 2013 Ironman Draft Version 1 [9], it is stated, "… in several places we expect many curricula to integrate material from multiple Knowledge Areas", with examples given for introductory, systems, and parallel programming courses. If a project is attempting to integrate material into an existing course's curriculum, then assessments that can measure how well students learn the new material with respect to the already existing, or traditional, material will be of value.

An analysis of assessment data that shows that students learn the new material on par with the traditional material would suggest that the new material is appropriate for the targeted audience. On the other hand, if the initial analysis indicates that the new material is not being learned as well, that might lead to a number of additional questions to be investigated as part of the current project or a future project. One example question is: Would instruction of the new material be received better if it were in its own course rather than being integrated?

Illustrative example
In CPSC372 we integrated instruction of reasoning concepts into a software engineering course, with two thirds of the course devoted to traditional topics such as requirements analysis and design. In our final exam we assessed the important parts of both of these areas, the reasoning concepts and the traditional software engineering concepts. Table 6 shows that for two separate semesters the class averages in these two different areas almost mirror each other. Without such data, one could reach the conclusion that a new approach is working better or worse, when in fact the result might be affected by the student population.

Table 6. Data Supporting Observation Type #4

                                              Fa10    Sp11
Class average on reasoning concepts           85%     79%
Class average on traditional SE concepts      85%     78%

4.5 Observation Type #5: Assessments Must Account for Instructor-Related Variations
Dissemination and adoption are often important goals of CS education projects.
So if new teaching techniques, or techniques for teaching new conceptual material, are developed by the project, then dissemination through adoption by other instructors of the new conceptual material, as well as of the teaching techniques, will almost certainly be a project goal. In this situation, results from assessment will add value if they can demonstrate how well students perform when learning from an instructor who has little prior experience with the new techniques or with the new material.

To support gathering data in this situation, the project needs to recruit a second group of instructors who are less familiar with the new concepts and/or new teaching techniques and then arrange for this group of instructors to teach the new concepts/techniques. At a minimum, the project also needs to provide this group of instructors with supporting instructional material, already written assessments, possibly some face-to-face training with the material, and then some support during the semester, e.g., answering instructor questions. The data gathered by the project-provided assessments can then be used to analyze how well the students were able to learn from this group of instructors. Finally, if the group is permitted to teach the material over multiple terms, then longitudinal data can be gathered and utilized.

Illustrative example
Table 7 presents the assessment data from CPSC215 collected in the Fall 2012 semester. Instructor 1 had been teaching RCI reasoning principles since 2008. Instructor 2 was an experienced computer science instructor who had taught undergraduate CS courses for a number of years; Instructor 2 taught the RCI reasoning principles for the first time in Fall 2012. The six reasoning principles listed were assessed through various questions at the end of the semester in both sections. Students in the section taught by Instructor 2 scored comparably to those in the section taught by Instructor 1, and in some instances scored higher. We believe that this success is due at least in part to the fact that experienced educators are already familiar with good foundational teaching methods, and that, when provided with good instructional and assessment materials, they too can achieve satisfactory results with new material and techniques.

Table 7. Data Supporting Observation Type #5 (Fall '12 class averages)

                Difficulty Level    Instructor 1    Instructor 2
RCI #3.4.3.2    AA                  44%             60%
RCI #4.1.1.3    KC                  71%             75%
RCI #4.2.2.1    KC                  95%             88%
RCI #5.2.2.1    KC                  81%             100%
RCI #5.3.2      AA                  48%             63%
RCI #5.2.2      AA                  81%             85%

If, however, this type of assessment seems to point in the other direction, i.e., toward students having a harder time learning from instructors who are unfamiliar with the material, then this might help the project to find ways to modify existing instructional materials or to develop additional supplemental instructional materials, either as a direct aid to the students or to the instructors themselves. Furthermore, if longitudinal data can be collected, then that data can be analyzed to determine whether learning improves as the unfamiliar instructor becomes more familiar with the material.

4.6 Observation Type #6: Attitudinal Assessments Are a Must
Attitude measurement is important because it is well known in social psychology that attitudes not only affect behavior (i.e., they are predictive of future behavior), but that behavior can also affect attitudes.
The attitudes of students will be affected by the project's instruction and interventions, so it is important to assess the attitudes of students along with how well they have learned the material.

Illustrative example
We conducted attitudinal surveys in both the sophomore-level and junior-level courses mentioned in this paper. A questionnaire was administered to students in each section at the beginning and end of the semester for these courses. This summative survey data and the full version of the survey, along with the consent form that students receive prior to their participation, can be found in [2].

The questionnaire assesses student attitudes on software engineering topics. Statistical tests were used to compare students' attitudes before and after taking the class in which the new topics were taught. The results from CPSC215 showed a significant positive change in students' conception of how to build high-quality software after taking the class. Results from CPSC372 showed a significant positive change in students' view of precise mathematical descriptions for developing correct software.

The results of the attitudinal surveys indicate that the attitudinal changes occurred exactly in the areas emphasized in each course. The sophomore-level CPSC215 course taught basic concepts of software design, and the junior-level follow-on course CPSC372 taught more advanced software engineering skills, including specifications, contracts, etc. Such significant attitude changes cannot be taken for granted; as noted earlier, students may "learn" a topic to achieve better grades without necessarily changing their attitudes towards the importance of those topics.
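The specific statistical tests used are not named here (full details are in [2]). As a hedged illustration, the sketch below shows one common way such a matched pre/post comparison could be run: a paired t-test, with a Wilcoxon signed-rank test as a non-parametric alternative. The Likert-scale responses are made up and are not the project's survey data.

```python
# Illustrative pre/post comparison on matched Likert responses (1-5),
# one pair per student; the numbers are invented for this sketch.
from scipy import stats

pre  = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]  # hypothetical start-of-semester responses
post = [4, 3, 4, 4, 3, 4, 5, 3, 4, 4]  # same students at the end of the semester

t_stat, t_p = stats.ttest_rel(post, pre)
print(f"paired t-test:        t = {t_stat:.2f}, p = {t_p:.4f}")

w_stat, w_p = stats.wilcoxon(post, pre)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```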
4.7 Observation Type #7: Focus Group Studies Are a Must
While quantitative evidence is central for assessment, ideally it must be supported with qualitative evidence as well. Student and instructor focus groups can both be useful, depending on the size and scope of the project. Here, we report on results from a focus group meeting that was conducted with the graduate teaching assistants who taught CPSC215, following suitable IRB procedures. The goal of this meeting was to understand what new principles were actually introduced in the course, and to take inventory of the successes and challenges.

The instructors provided useful feedback, and the most relevant items are discussed below. The full transcription of the meeting is available in [2]. It was discovered that a large number of RCI principles were covered in the course. Each instructor spent about three weeks of the course teaching the new principles. Though all the instructors covered an almost identical set of new principles, each instructor introduced the new topics where they thought they logically fit within the traditional material. When asked to place a mark in an inventory table of principles next to the items that they had taught and tested, or taught but not tested, the marked sets from each instructor were almost identical. Some of the principles were covered at the KC level of Bloom's taxonomy, and others at more advanced levels.

Though teaching the topics was a new endeavor for these instructors, they experienced only minor difficulties teaching them. They found the guidelines, instructional materials, and assessment questions that were provided to them useful. We can conclude with confidence that even a novice instructor can be successful in teaching these new topics. With minimal guidance, instructors can tailor assessment questions to meet their individual teaching styles and the level of material coverage.

The difficulties of incorporating new topics into the existing course material were also discussed. Though one of the instructors indicated that the introduction of reasoning topics initially seemed like a "hard left turn", the others did not see a challenge in incorporating the topics into the existing curriculum.

Another important conclusion was that the students were able to learn the new topics and performed on them comparably to the traditional topics. This qualitative evidence collected at the group meeting correlates with the research data discussed in the earlier sections. For example, both indicate that students are capable of learning reasoning topics just as well as traditional ones. Some challenges were noted in the focus group meeting as well, such as a lack of prerequisite knowledge of mathematics for some students.

5. DISCUSSION
While there is much discussion in the general education literature on aspects of assessment (e.g., [10]), few detailed studies on the topic itself are available in a computer science context. Moskal, Lurie, and Cooper [11], for example, note the improvement in performance and attitudes for a curriculum designed to benefit "at risk" majors. Dorn and Elliott [12] have found, using a pre/post survey, that student attitudes can change throughout the course of a semester and that the impact of pedagogy on student attitudes can be measured. A panel discussion has raised various issues in assessing performance at different colleges [13].

The goal of this paper is to help educators confirm through assessments the benefits of teaching new principles or using new approaches, and in the process to consider a host of factors that affect the results of such assessments. We have provided a detailed case study with actual examples and supporting data, collected over a period of five years, as a guide for other CS educational efforts. Ultimately, no matter how good the new approaches might be, without proper validation through critical evaluation, they are unlikely to be replicated elsewhere.

6. ACKNOWLEDGMENTS
We thank members of our research groups for their contributions to the contents of this paper. This research is funded in part by NSF grants CCF-1161916, DUE-1022191, and DUE-1022941.

7. REFERENCES
[1] Stevens, F., Lawrenz, F., and Sharp, L. 1993. User-Friendly Handbook for Project Evaluation – Science, Mathematics, Engineering and Technology Education, Ed. J. Frechtling. NSF 93-152. Available at: http://www.nsf.gov/pubs/2002/nsf02057/start.htm
[2] Drachova, S.V. 2013. Teaching and Assessment of Mathematical Principles for Software Correctness Using a Reasoning Concept Inventory. Ph.D. Dissertation, Clemson University.
[3] Cook, C.T., Drachova, S.V., Hallstrom, J.O., Hollingsworth, J.E., Jacobs, D.P., Krone, J., and Sitaraman, M. 2012. A systematic approach to teaching abstraction and mathematical modeling. In Proceedings of the Seventeenth ACM Annual Conference on Innovation and Technology in Computer Science Education, Haifa, Israel. ACM, 357-362. DOI=http://dx.doi.org/10.1145/2325296.2325378
[4] Cook, C.T., Drachova, S.V., Sun, Y-S., Sitaraman, M., Carver, J., and Hollingsworth, J.E. 2013. Specification and Reasoning in SE Projects Using a Web-IDE. In Proc. 26th Conference on Software Engineering Education and Training. IEEE.
[5] Krone, J., Baldwin, D., Carver, J.C., Hollingsworth, J.E., Kumar, A., and Sitaraman, M. 2012. Teaching Mathematical Reasoning Across the Curriculum. In Proc. 43rd ACM Technical Symposium on Computer Science Education. ACM, 241-242.
[6] Leonard, D.P., Hallstrom, J.O., and Sitaraman, M. 2009. Injecting rapid feedback and collaborative reasoning in teaching specifications. In Proceedings of the 40th ACM Technical Symposium on Computer Science Education. ACM, New York, NY, USA, 524-528. DOI=10.1145/1508865.1509046
[7] Sitaraman, M., Hallstrom, J.O., White, J., Drachova-Strang, S.V., Harton, H.K., Leonard, D., Krone, J., and Pak, R. 2009. Engaging students in specification and reasoning: "hands-on" experimentation and evaluation. In Proceedings of the 14th Annual ACM SIGCSE Conference on Innovation and Technology in Computer Science Education. ACM, New York, NY, USA, 50-54. DOI=10.1145/1562877.1562899
[8] Hollingsworth, J.E. 2012. SIGCSE Workshop 2012 Instructional Video Series. http://www.cs.clemson.edu/group/resolve/teaching/ed_ws/sigcse2012/index.html
[9] IEEE/ACM Computer Science Curricula 2013, Ironman Draft V1, 2013. http://ai.stanford.edu/users/sahami/CS2013/ironmandraft/cs2013-ironman-v1.0.pdf
[10] Dunn, K.E. and Mulvenon, S.W. 2009. A Critical Review of Research on Formative Assessment: The Limited Scientific Evidence of the Impact of Formative Assessment in Education. Practical Assessment, Research, and Evaluation, Volume 14, Number 7, 1-11. http://pareonline.net/pdf/v14n7.pdf
[11] Moskal, B., Lurie, D., and Cooper, S. 2004. Evaluating the effectiveness of a new instructional approach. In Proceedings of the 35th ACM Technical Symposium on Computer Science Education. ACM, New York, NY, USA, 75-79.
[12] Dorn, B. and Elliott, A.T. 2013. Becoming experts: measuring attitude development in introductory computer science. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education. ACM, New York, NY, USA, 183-188. DOI=10.1145/2445196.2445252
[13] Sazawal, V., Schwarm, S., Goldner, B., Gellenbeck, E., and Zander, C. 2003. Assessment of Student Learning in Computer Science Education. The Journal of Computing Sciences in Colleges, Volume 19, Number 2, 39-42.
