Innovative Insights for Informal Educators

The xMacroscope: An open-source tool for informal science institutions
Apr 24
By Katy Börner, Ph.D., Indiana University; Laura Weiss, COSI Center for Research and Evaluation; Elizabeth G. Record, Indiana University; and Bruce W. Herr II, Indiana University
The low level of data visualization literacy in the U.S. is well documented. For example, writing in the Harvard Business Review, Sabar (2021) reports that 90% of business leaders consider data literacy key to their company’s success, but only 25% of workers feel confident in their data skills. The Data Foundation’s March 2022 report (Hart et al., 2022) notes that data literacy is increasingly recognized as a core workforce competency and suggests governmental agencies need to prepare for changes in how they educate their workforces and the public. The question that emerges, then, is: What is the role of the informal science learning community in helping improve data and data visualization literacy? Although there are calls for the informal sector to play a role (e.g., Hart et al., 2022), that role has not been well defined.
On the other side of the data visualization literacy equation is the data itself, plus existing tools that can readily convert data into actionable insights. Data analysis and visualization tools aim to empower general audiences to read a dataset (e.g., downloaded from an online source or shared as a file), clean and analyze this data (e.g., use US ZIP codes to geolocate, that is, assign a latitude and longitude), and render the data visually (e.g., as a sorted list, a scatter graph, a geospatial map, or a network) as a ‘basemap’. Additional data can be ‘overlaid’ on this basemap, e.g., population data or flight traffic networks on a US map. Graphic variables, such as size or color, can be used to encode additional data variables (e.g., number of inhabitants or age group) (Börner et al., 2016). Understanding and managing the structure and dynamics of complex systems requires keeping track of continuously evolving datasets, which necessitates a new kind of data analysis and visualization tool we call a macroscope (de Rosnay, 1979), reflecting the Greek makros, or great, and skopein, to observe (Börner et al., 2011). A macroscope makes it possible to create a wide range of visuals that display data in ways a viewer can make meaning of without having to deal with the large amount or complexity of the data itself. The xMacroscope presented in this paper goes one step further: it takes visitor input data and visitor performance data and then, in real time, presents visualizations of that visitor’s data compared with data from visitors who previously participated in the activity (see Peppler et al., 2021).
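The macroscope-style pipeline described above can be sketched in a few lines. This is an illustrative sketch only, not xMacroscope code; the ZIP lookup table and record fields are invented for the example.

```python
# Illustrative sketch (not xMacroscope code) of the macroscope-style
# pipeline described above: read records, clean and geolocate them via
# ZIP codes, and encode a data variable onto a graphic variable (size).
# The ZIP lookup table and record fields are invented for this example.

ZIP_COORDS = {                   # toy lookup table, not real geodata
    "55102": (44.94, -93.10),    # approx. St. Paul, MN
    "43215": (39.96, -83.00),    # approx. Columbus, OH
}

def geolocate(records):
    """Attach (lat, lon) to each record via its ZIP code; drop the rest."""
    located = []
    for rec in records:
        coords = ZIP_COORDS.get(rec["zip"])
        if coords:               # data cleaning: skip unknown ZIPs
            located.append({**rec, "lat": coords[0], "lon": coords[1]})
    return located

def encode_size(records, attr):
    """Map the data variable `attr` onto the 'size' graphic variable."""
    values = [r[attr] for r in records]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1        # avoid division by zero
    # linear scaling into a point-size range of 4..20
    return [{**r, "size": 4 + 16 * (r[attr] - lo) / span} for r in records]

visitors = [
    {"zip": "55102", "speed_mph": 3.1},
    {"zip": "43215", "speed_mph": 4.7},
    {"zip": "00000", "speed_mph": 2.0},   # unknown ZIP, cleaned out
]
overlay = encode_size(geolocate(visitors), "speed_mph")
# the sized points are now ready to be drawn over a US basemap
```

Each stage mirrors one step of the description: reading, cleaning/geolocating, and encoding an additional variable for overlay on a basemap.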
To accomplish such active engagement around a visualization in a museum setting, a visitor needs a suite of competencies for varying levels of mastery of both data visualization creation and interpretation. But what if the data visualization is centered on the individual visitor? Would this help increase visitor understanding of the visualization? This was one of the driving questions behind a series of projects and studies funded under two NSF grants (#1223698; #1713567).
The xMacroscope
Applying lessons learned from a technology development NSF project, four aspects of how people interact with data drove the development of the xMacroscope (Peppler et al., 2021). First, people like to look at data to see something about themselves, such as how well they performed (Horr et al., 2019). Second, people tend to recall actions they perform better than those they observe (Maxwell & Evans, 2002). Third, people like to compare performance data on their own progress against self, within group, and with others (Meyer et al., 2023). Lastly, it is possible to integrate physical experiences to generate data (Han et al., 2022).
Many science museums and centers have exhibits that involve the visitor in entering and/or generating data through actions they take. This provides an opportunity to consider a couple of questions: If we extract data from movement and turn that into data visualizations, how effective are we at contributing to the ability of visitors to better interpret data visualizations? What happens if we add intentional data visualizations to build on the physical activity?
Science museums offer a unique educational setting for engaging with data visualization literacy, as these spaces can make it possible for visitors to contribute personal data, including physical data obtained through sensors, to a publicly emerging dataset. This makes it possible for a larger, contextual, and personally meaningful dataset to be produced over longer periods of time and across multiple visitor groups. So how might we do this? Building on work studying the potential of a macroscope for science policy decision making (NSF #0738111) and on lessons learned from a pathways grant exploring data visualization literacy (NSF #1223698), the concept emerged for the xMacroscope: an open-source tool for capturing data input by a visitor via a keyboard and via a sensor that records the individual’s movement.
The exhibit “Run” at the Science Museum of Minnesota (SMM) served as the grounding for the implementation of the xMacroscope. Part of the Sportsology exhibition, Run allows an individual to compare their running speed against another visitor or one of several full-sized videos of professional athletes and animals (including a T. rex). The speed of each racer is captured and displayed at the finish line. For the project, the experience was adapted to become “Run (Walk)”; walking was added because many museums do not allow or encourage running for safety reasons. The addition of the xMacroscope enabled visitors to build and interpret visualizations based on data they and other visitors had generated via keyboard input and a finish line sensor.

Using the xMacroscope platform (Picture 1), visitors would enter personal data and then hit the start button, which activated the countdown. Then, the individual would walk the lane as fast or slow as they chose. At the end of the lane, they would pass a finish line sensor that recorded their walking or running time. After the activity, the current participant could explore their information in the context of the 50 most recent prior participants; they could select a bar chart or a geospatial map and customize which data attributes are shown on which axis and encoded using which graphic variables (e.g., size or color). Visitors were able to customize the visualizations to answer specific questions. The integration of data gathered via motion sensors, combined with an easy-to-use interface, provided a novel platform for building data visualization literacy into the visitor experience.
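The interaction pattern described above, a rolling window of recent participants plus a visitor-selectable chart, can be sketched as follows. This is a minimal illustration, not the xMacroscope implementation; the field names and the text-based rendering are assumptions made for the example.

```python
# Minimal illustration (field names and text rendering are assumptions,
# not the xMacroscope implementation): keep the 50 most recent
# participants and let the current visitor pick which attribute a
# bar chart shows.
from collections import deque

recent = deque(maxlen=50)   # rolling window of the 50 most recent runs

def record_run(name, age, time_s):
    """Store one participant's keyboard input plus sensor-timed result."""
    recent.append({"name": name, "age": age, "time_s": time_s})

def bar_chart(attr, width=30):
    """Render a horizontal text bar chart of `attr`, lowest value first."""
    hi = max(r[attr] for r in recent)
    lines = []
    for r in sorted(recent, key=lambda r: r[attr]):
        bar = "#" * max(1, round(width * r[attr] / hi))
        lines.append(f"{r['name']:>6} {bar} {r[attr]}")
    return "\n".join(lines)

record_run("Ada", 12, 6.2)
record_run("Ben", 34, 4.8)
record_run("Cy", 9, 7.5)
print(bar_chart("time_s"))
```

With time on the length axis, the shortest bar marks the fastest participant, the very reading that confused some younger visitors during testing.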

The research component of the project revealed clear data visualization literacy outcomes (Wojton et al., 2018; Peppler et al., 2021). Visitors across ages looked at and manipulated the data, discussed the visualizations, and often repeated the activity for the purpose of comparison. Age had a strong correlation with depth of understanding of the visualization, and during development, changes were continually considered and addressed. One example of such a finding was that while the shorter line reflected the faster time, some visitors, especially youth, saw the longer line in the visual as being “the winner.” Such challenges led to information added within the visual and additional signage around it.
Evaluation data (Heimlich et al., 2022) showed that visitors were engaged in the data visualization and engaged in meaning-making around the experience and the visualizations. As the project was nearing its end, we asked ourselves another question: What would it take to transfer the xMacroscope to other full-body, physical experiences in science centers and museums? The team then conducted an exploratory study looking at two other exhibit experiences that generated data through the interaction. The Heart Rate Challenge at COSI and the Motion Lab at SMM were selected for this study as each: 1) uses a different physical activity for generating the data, 2) has different means of engaging with the data, and 3) has different structures for comparing the individual’s activity against or with others.
Comparing physical engagement activities for the xMacroscope
To better position the xMacroscope for adaptation to other museum experiences using its open-source code, the project team examined the various interactives at the two museums engaged in the evaluation study to consider what would differ from the Run/Walk experience but would conceptually work for the xMacroscope. The two exhibits identified as comparisons for transfer were:
The Heart Rate Challenge at COSI instructs visitors to take their resting heart rate, compete in an activity, and then take their active heart rate. The activity consists of a long table with a row of buttons along each side; visitors compete to press as many buttons as possible as they light up over the course of 30 seconds. The researcher approached visitors who engaged with the data portion of the exhibit (i.e., taking their heart rate) to participate in the intercept interview.
At the Motion Lab at SMM, visitors can record a video of themselves doing a purposeful sports movement (e.g., kicking a ball, doing a cartwheel, or dancing). Suggestions are provided that directly align with the categories of video comparisons. Visitors can then watch the video of themselves and analyze their movements. They are also able to compare themselves to other visitors or professional athletes doing the same activity.
For the comparison study, a data collector identified an individual or small group as they approached the activity and observed how they interacted with it. After the observed group finished interacting with the exhibit, the data collector approached and asked them to participate in a short interview. Visitors who agreed were first asked to rate four items about their enjoyment of the exhibit and their experience interacting with the data. The data collector then asked follow-up prompts for each item. If a visitor did not agree to participate in the intercept interview, their observation data were not included in the final data set.
During the Heart Rate Challenge interview, we asked visitors to share how they saw themselves in the data. Both adults and youth were most likely to reference their heart rates, with many noting the difference between their resting and active heart rates. Some adult visitors mentioned their fitness or the fitness of others in their group (e.g., speed, stamina), though it is unclear if these reflections on fitness were prompted from the data or from just doing the activity. We also asked visitors what they saw comparing themselves to others. The data-based comparisons visitors made were about their heart rates (e.g., whose rate in the group was higher) and how many buttons they pressed (e.g., who “won”). A few of the visitors who made these types of comparisons shared reasons for higher heart rates (e.g., a youth having to move more to press the buttons) or scores (e.g., a taller group member being able to press the buttons more easily). Visitors also made comparisons without referencing the data, such as comparing fitness levels or physicality of group members.
Visitors’ open-ended responses and responses to the rating scale showed that, overall, visitors at Heart Rate Challenge were generally able to see themselves in the data (x̄=5.94 on a 7-point scale) and make comparisons (x̄=5.99 on a 7-point scale). The comparisons visitors made between their physicality and fitness and their heart rates and scores show some level of meaning-making between the activity and the data. While the data portion of the exhibit was not frequently referenced when we asked visitors what they liked about the exhibit, the things they did mention often (e.g., being active, having fun, competing) could be explored, supported, or reinforced with a stronger data component.
For the Motion Lab, visitors were selected as they entered the queue for the activity. The majority of the adults and children who interacted with the screens appeared to read the signage or instructions at the exhibit (78.3% and 62.5%, respectively). Just over a third of adults (n=8, 34.8%) who interacted with the screens did the activity once (i.e., recording their movement), but over half (n=13, 56.5%) did not do the activity at all. Over half of the children who interacted with the screens (n=17, 53.1%) did the activity once, and two in five (n=12, 37.5%) did it multiple times. Most adults used the screens only once (n=15, 65%), while children were about as likely to use the screens once (n=17, 53%) as multiple times (n=15, 47%). The protocol adapted from the Run/Walk protocol for the Motion Lab did not have the data collector observe visitors while they looked at their videos, so there was no capture of those who compared themselves to others beyond their own group or the professional in the comparison video.
Comparing the experiences for transferability of the xMacroscope
Like Run, the first comparison experience, Heart Rate Challenge, provided visitors with one temporal comparison point of their physical activity, with three possible comparisons in the data (an individual’s resting heart rate compared to their active heart rate, an individual’s heart rate compared to other players’ heart rates, and an individual’s score compared to other players’ scores). In strong contrast, Motion Lab provides video of the individual performing some activity, offering continual or multiple points of possible comparison. The experience offers an opportunity to visually compare their performance to others, but with no specific data extracted for comparison.
In the observations, the Heart Rate Challenge revealed a more direct means of transferring the xMacroscope into the experience. The numerical visitor data (i.e., heart rates, activity scores) could be used directly as comparison points for data visualizations. The xMacroscope would need to generate a comparison score between players and a change score, rather than the single-incident score in Run (Walk). For the xMacroscope to work with the Motion Lab, the video itself would need to be analyzed for critical measurement points for comparison, such as an individual’s off/on axis, the angle or arc of a movement, and other standard measurements of physical activity used in sports and dance, which would require additional programming. For both the Heart Rate Challenge and the original Run activities, the number of individuals being compared would need to be increased so that hypotheses generated by visitors in response to the initial visualization could be better formed and examined.
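As a sketch of what adapting the tool to Heart Rate Challenge might involve, the two additional scores named above, a change score and a between-player comparison, reduce to simple arithmetic; the field names here are illustrative assumptions, not the exhibit’s actual data model.

```python
# Illustrative arithmetic (field names are assumptions, not the
# exhibit's data model) for the two scores named above: a change score
# per player and a between-player comparison on any one measure.

def change_score(player):
    """Rise in heart rate from resting to active."""
    return player["active_hr"] - player["resting_hr"]

def rank(players, measure):
    """Order players on one measure, highest first."""
    return sorted(players, key=measure, reverse=True)

players = [
    {"name": "P1", "resting_hr": 72, "active_hr": 118, "buttons": 41},
    {"name": "P2", "resting_hr": 65, "active_hr": 131, "buttons": 37},
]
by_change = rank(players, change_score)             # P2 rose the most
by_buttons = rank(players, lambda p: p["buttons"])  # P1 pressed the most
```

Either ranking could feed the same bar-chart or map views used in Run (Walk), letting visitors compare a change over time rather than a single incident score.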
In both cases, we noted that the intuitive activity offered by each experience (i.e., pressing buttons and recording a video) benefited use (Luff et al., 2013) but also allowed for a lot of use that was not necessarily generative toward the idealized outcomes of the experiences as they relate to the data component. For example, in Heart Rate Challenge some visitors would simply hit buttons over and over; others formed teams to get more points without having to move as far or fast. With Motion Lab, some visitors watched to see themselves in the video without the critical physical exploration offered in the experience. As in any free-choice learning setting, there is no ‘right or wrong’ way to engage, but there are intended outcomes that are met by engaging as the design intends. The xMacroscope does seem to help shape more visitor engagements toward the intended outcome without forcing the outcome of looking at data critically.
What’s next
Data literacy is an important component of science literacy in the 21st century. Science centers and museums have the potential to play an important role in both using and facilitating learning around data visualizations and data literacy. One way to do this is to intentionally incorporate data visualizations into existing experiences in ways that engage visitors in interacting with data generated by their own actions. The xMacroscope provides a tool for doing this. Further, the findings go beyond simply transferring the technology and suggest a solid foundation for thinking about any data visualization and data literacy effort in a museum.
In critically looking at two different experiences and comparing them against what we learned from the development and testing of the xMacroscope, we see three major components of what an institution would need to do to incorporate the xMacroscope into an exhibit with a full-body experience:
(1) Can the motion capture of the physical movement be transferred into useable data? These activities need to fit the criteria that the production of an action is embedded in the features of the exhibit and its context, and that the engagement activates the intended activity and interaction (Luff et al., 2013). Once these criteria are met, these components shape what we see as the “next steps” for creating an experience that would support data visualization and data literacy.
(2) How connected is the experience to the intended learning outcome? Does seeing and knowing the data emerging from the physical activity naturally lead to asking questions that relate to the science of the experience? The goal of the xMacroscope is to generate data that prompts questions requiring examination of data and visualizations in different ways. This leads logically to looking critically at the experience to provide appropriate opportunities for visitors to hypothesize and then test their hypotheses.
(3) Does the activity have a natural flow that moves from initial engagement through physical activity and then immediately into data visualization? And does the experience offer options for repeating the experience and going deeper into the data? This comparison suggests it is important that an exhibit clearly prepare the visitor for an action that includes a data visualization element. Then, by ensuring that the individual’s action is revealed in an understandable way, the data become central to conversation and to making meaning of the experience. Finally, repeating the experience can lead to an improved understanding of what data are generated through the experience and how these data can be explored in the data visualization to answer questions and test hypotheses. It is in this final component that the science of the action and the data visualization literacy of the element meet.
The open-source xMacroscope is available via the ACM Digital Library (Communications of the ACM).
Works cited:
Börner, Katy. (2011). Plug-and-play macroscopes. Communications of the ACM, 54(3), 60-69.
Börner, Katy, Adam Maltese, Russell Nelson Balliet, & Joe E. Heimlich. (2016). Investigating aspects of data visualization literacy using 20 information visualizations and 273 science museum visitors. Information Visualization, 15(3), 198-213.
de Rosnay, Joël. (1979). The Macroscope: A New World Scientific System. Translation of Le macroscope: vers une vision globale, Éditions du Seuil, Paris, 1975.
Han, A., Keune, A., Huang, J., & Peppler, Kylie. (2022). Visualizing family engagement in museum settings. In Proceedings of the 16th International Conference of the Learning Sciences (ICLS 2022) (pp. 1904-1905). International Society of the Learning Sciences.
Hart, Nick, Adita Karkera, & Valerie Logan. (2022). Data literacy for the public sector: Lessons from early pioneers in the US. https://www.datafoundation.org/data-literacy-report-2022
Heimlich, Joe E., Laura Weiss, E. Elaine T. Horr, & Rebecca F. Kemper. (2022). Final Evaluation Report: Understanding and Improving Data Visualization Literacy. Columbus, Ohio: COSI Center for Research and Evaluation. https://www.informalscience.org/final-compiled-evaluation-report-understanding-and-improving-data-visualization-literacy
Horr, E. Elaine T., Joe E. Heimlich, & Justin R. Meyer. (2019). Run Exhibit Testing: Phase 1. Columbus, Ohio: COSI Center for Research and Evaluation.
Luff, Paul, Marina Jirotka, Naomi Yamashita, Hideaki Kuzuoka, Christian Heath, & Grace Eden. (2013). Embedded interaction: The accomplishment of actions in everyday and video-mediated environments. ACM Transactions on Computer-Human Interaction (TOCHI), 20(1), 1-22.
Maxwell, Lorraine E., & Gary W. Evans. (2002). Museums as learning settings: The importance of the physical environment. Journal of Museum Education, 27(1), 3-7.
Meyer, Justin Reeves, Joe E. Heimlich, E. Elaine T. Horr, Rebecca F. Kemper, & Katy Börner. (2023). Museum visitor comfort when sharing personal information for evaluation. Journal of Museum Education, 48(2), 136-152. https://doi.org/10.1080/10598650.2022.2135353
Odum, Howard T. (1971). Environment, Power, and Society. New York: Wiley.
Peppler, Kylie, A. Keune, & A. Han. (2021). Cultivating data visualization literacy in museums. Information and Learning Sciences, 122(1/2), 1-16.
Sabar, Rasheed. (2021, August 27). How data literate is your company? Harvard Business Review. https://hbr.org/2021/08/how-data-literate-is-your-company
Wojton, Mary Ann, Donelley Hayde, Joe E. Heimlich, & Katy Börner. (2018). Begin at the beginning: A constructionist model for interpreting data visualizations. Curator: The Museum Journal, 61(4), 559-574.
No. 181 Summer 2024