Description
Geospatial reasoning is the ability to recognize the temporal and spatial interaction of properties, locations, and rates of change of Earth's material reservoirs (solid earth, oceans, and atmosphere) in the generation of observed earth processes and phenomena. To teach system-scale thinking more effectively, one must first be able to assess geospatial reasoning as an outcome of instruction. The goals of this study are to evaluate the effectiveness of current web-based interfaces designed to increase geospatial reasoning, the quality of design features in such interfaces, and the effectiveness of exercises used in conjunction with these interfaces. Several steps were taken to evaluate these components: use of an existing interface (The GLOBE Program Earth system poster); creation and trial use of a new, inquiry-based classroom exercise to guide use of the interface; in-class observations; collection of completed exercises and test questions; construction of a key and rubric; and analysis of results. Our study design used the quality and detail of observations that resulted from student use of the interface, together with the depth and sophistication of student-generated explanations of those observations, as a measure of learning gains from the exercises. "Observational" responses tended to be of average to low quality, and "explanation" responses were mostly of low quality. "Explanation"-style questions embedded in an exam yielded better answers than the same style of questions in the exercise. Misinterpretations of questions were common, and the quality of answers to similar questions dropped as students progressed through the exercise. Students also tended to seek out and use physical globes to augment the computer-based interface during the exercise, even though the use of physical models was not an intended part of the exercise design. Low-quality answers to "explanation" questions suggest the interface and exercise only modestly improved geospatial learning.
The increase in scores from the exercise to the test shows that some improvement in subject knowledge and understanding did occur, but it is not immediately clear whether these gains resulted from further reflection on the exercise, additional study, or reliance on other outside sources. Average to poor responses to observational questions, together with students' attachment to the physical globes, indicate that the interface is inadequate, lacking sufficiently high resolution in the presentation of data and sufficient interactivity in the interface. Common misinterpretations and drops in scores between similar questions as the exercise progressed suggest disengagement and faults in the current structure of the in-class exercise. Future interfaces should integrate higher-quality data and allow a greater degree of student control, and further development and evaluation of classroom exercises is needed to address the observed disengagement.