We encounter them in the Sunday funnies, or even on restaurant paper placemats: brainteasers that ask us to compare adjacent circles and decide which is bigger, or drawings of cubes we are supposed to rotate mentally and decide which side ends up where.
These tasks involve visual perception and spatial visualization, says Dr. Dale Klopfer, chair of the psychology department. For some people, the exercise is a pleasant challenge, but for others it is frustrating, he says.
[Image: A topographic map of Big Cove Tannery, Pa.]
More important, the ability to manipulate images mentally, or the lack of it, can affect one’s success in certain careers.
Klopfer and his co-principal investigator Dr. Laura Leventhal, computer science, along with Dr. Charles Onasch, chair of the geology department, and Dr. Guy Zimmerman, computer science, have received a $121,375 grant from the National Science Foundation (NSF) to look at how students learn to think in three dimensions and how they can overcome the problems associated with 3D spatial visualization.
Titled “Empowering Student Learning in the Geologic Sciences with Three-Dimensional Interactive Animation and Low-Cost Virtual Reality,” the project takes a multidisciplinary approach to addressing a skill that could ultimately affect people’s decisions on whether to pursue science or math-based careers.
“Geology requires a high degree of spatial thinking and visualization,” Onasch said. “Everything we do is in 3D, whether it’s interpreting topographic maps as landforms or the crystal structure of a mineral as a cube filled with atoms bonded together at right angles. You must be able to visualize the 3D geometry.
“This is a major problem faced by people in many disciplines,” he added.
In addition to geology, “spatial visualization ability is correlated with the fields of chemistry, mechanical reasoning and mathematics, among others,” said Leventhal.
An alternative approach to learning
Klopfer, who does research in applied visual perception, and Leventhal, whose research focuses on human-computer interaction, asked, “Are there ways we can help people improve their performance on problem-solving tasks that require processing of spatial information and perhaps improve their ability as well?”
“We wondered what the role of a computerized tool could be in teaching people to use visual information,” Leventhal said. “Maybe we can provide them another way to learn.”
The project has three phases. The first, a study of “raw” spatial ability, is not funded by the NSF but lays the foundation for the next two. The researchers will carry out this study using existing laboratory space and equipment already provided by the University, according to Leventhal.
As an alternative to the old paper-and-pencil “Cube Comparison Task,” which measures spatial ability, Zimmerman has created a computer model of a cube with designs on each side. Though displayed on a flat screen, it appears three-dimensional.
“The goal was to make the cube so realistic that you almost feel you could hold it in your hand,” Zimmerman said. “Using the software, you can rotate it and look at it from all sides.”
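The core bookkeeping behind such a tool can be sketched simply: track each face of the cube by its outward-pointing normal, and let 90-degree turns about the axes permute those normals. The face designs and function names below are hypothetical, not drawn from the actual software.

```python
# Track a cube's six faces by their outward unit normals; 90-degree
# turns about the x, y and z axes permute them, so after any sequence
# of rotations we can ask which design faces the viewer (+z).
# The designs ("circle", "star", ...) are made-up labels.

def rotate_x(v):
    # 90-degree rotation about the x axis: (x, y, z) -> (x, -z, y)
    x, y, z = v
    return (x, -z, y)

def rotate_y(v):
    # 90-degree rotation about the y axis: (x, y, z) -> (z, y, -x)
    x, y, z = v
    return (z, y, -x)

def rotate_z(v):
    # 90-degree rotation about the z axis: (x, y, z) -> (-y, x, z)
    x, y, z = v
    return (-y, x, z)

class Cube:
    def __init__(self):
        # Map each outward face normal to the design drawn on that face.
        self.faces = {
            (0, 0, 1): "circle",   (0, 0, -1): "square",
            (0, 1, 0): "star",     (0, -1, 0): "cross",
            (1, 0, 0): "triangle", (-1, 0, 0): "diamond",
        }

    def rotate(self, axis):
        turn = {"x": rotate_x, "y": rotate_y, "z": rotate_z}[axis]
        self.faces = {turn(n): d for n, d in self.faces.items()}

    def front(self):
        # The design on the face whose normal points toward the viewer.
        return self.faces[(0, 0, 1)]
```

A subject’s sequence of moves then becomes a list of `rotate` calls, and the question “which side ends up in front?” is a single lookup rather than a feat of working memory.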
In this lab-study phase, the researchers will be examining subjects’ ability to manipulate the cube to bring various sides to the front. As subjects try different moves, the researchers will ask them to reflect on why they made the decisions they did, in an effort to study their mental processes, Klopfer said.
In this “context-free” environment, the researchers will quantify the subjects’ spatial ability, Zimmerman said.
Initial findings show that with the paper-and-pencil version, “the working memories of the people with low spatial ability are just overloaded with information,” Klopfer said. “There is a big gap in performance between low- and high-ability people. But when we present the same task on the 3D interactive interface, the gap between low- and high-spatial people is eliminated.”
The grant funding kicks in for the second phase. Zimmerman and the two graduate students on the project—one from psychology and the other from computer science—are developing a prototype computer model using information and features derived from the first tool and incorporating topographic maps. Subjects can again manipulate the 3D model to examine it from different angles. Onasch is consulting with the team to develop profiles of topographic maps of varying difficulty and to provide his insights as someone who routinely carries out visualization tasks at a very high level.
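One computation a topographic-map tool of this kind might perform is sampling an elevation profile along a straight transect across a grid of elevations. The sketch below uses bilinear interpolation between grid cells; the elevation values are invented for illustration, not taken from the project.

```python
# Sample an elevation profile along a straight transect across a
# grid of elevations, using bilinear interpolation between cells.
# The grid values below are made up for illustration.

def bilinear(grid, x, y):
    """Interpolate elevation at fractional grid coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def profile(grid, start, end, samples=5):
    """Elevations at evenly spaced points from start to end."""
    (x0, y0), (x1, y1) = start, end
    return [
        bilinear(grid,
                 x0 + (x1 - x0) * t / (samples - 1),
                 y0 + (y1 - y0) * t / (samples - 1))
        for t in range(samples)
    ]

elevations = [  # meters; a small invented hill
    [100, 110, 120],
    [110, 140, 130],
    [100, 120, 110],
]
```

Reading such a profile off contour lines is exactly the kind of mental translation the lab asks students to make; the tool can either train that skill or, as Zimmerman notes of mental rotation, take it over entirely.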
“We’re hoping that working with 3D models will impact people’s ability to construct mental models,” Zimmerman said. “But if not, there’s also the possibility that people with low ability can just offload to the computer the work of doing the mental rotations.”
Meanwhile, Klopfer is observing Onasch’s introductory geology lab classes to get an idea of how students approach the material and what the stumbling blocks might be.
After testing with human subjects in phase two, the project will move on to the third phase, in which the materials will be integrated into geology lab classes.
“Our hope is to develop some new teaching techniques to improve students’ understanding of topographic maps,” Onasch said. “When students don’t get it intuitively, they tend to use a mechanical, ‘cookbook’ method of memorizing one step after another. We’d really like to improve their native ability and give them a deeper understanding than the superficial one they get that way.”
Though they might seem like disparate fields at first glance, psychology and computer science actually have areas of shared focus in the study of perception and information processing, say Klopfer, Leventhal and Zimmerman.
Psychologists have been studying how people learn and process information for over 100 years, Klopfer said, so studying how instructional material can be delivered effectively through computers fits nicely with the domain of cognitive psychology.
Human-computer interaction is a growing field that looks at people’s perception, Leventhal said, with applications to everything from computer screens, mice and keyboards to automobile dashboards to airplane instrument panels. “It involves the engineering of user interfaces,” she explained. “We do research on how people use computers and focus on making their experience better.”