ViCon - Real-Time Visualization of Team Performance in Video Conferencing

Development of an interactive, real-time, data-based visual feedback system to promote team efficiency in creative problem-solving tasks

Recognitions

Featured at ACM Creativity and Cognition 2023
Featured at Human-Computer Interaction International 2022

The Project

After spending countless hours in video conferences during the COVID-19 pandemic, I began to wonder whether interactive, voice-controlled visual feedback systems could help people hold more productive and balanced conversations. The underlying idea is to give users direct feedback on their speaking time through effective information design, indicating whether they are speaking too much or too little. The primary challenge is achieving high visualization efficiency, since online meetings are already cognitively demanding even without an additional visualization layer.
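As a rough illustration of the underlying logic (not the actual ViCon implementation; the class name, method names, and tolerance threshold below are assumptions), speaking time could be accumulated per participant and compared against an equal share of the conversation:

```python
# Illustrative sketch only: accumulate per-participant speaking time (e.g. from
# voice-activity detection) and compare each share against an equal split.
from collections import defaultdict

class SpeakingTimeTracker:  # hypothetical helper, not ViCon code
    def __init__(self, participants):
        self.seconds = defaultdict(float, {p: 0.0 for p in participants})

    def add_speech(self, participant, duration_s):
        """Add a detected stretch of speech for one participant."""
        self.seconds[participant] += duration_s

    def feedback(self, tolerance=0.10):
        """Label each participant relative to an equal share of total speaking time."""
        total = sum(self.seconds.values()) or 1.0
        fair_share = 1.0 / len(self.seconds)
        labels = {}
        for person, secs in self.seconds.items():
            share = secs / total
            if share > fair_share + tolerance:
                labels[person] = "speaking more than an equal share"
            elif share < fair_share - tolerance:
                labels[person] = "speaking less than an equal share"
            else:
                labels[person] = "balanced"
        return labels
```

A visualization layer could then map such shares onto the on-screen feedback described above.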

Different design concepts were developed and compared using subjective and objective performance measures as well as eye-tracking studies. The left version was designed on the basis of previous studies in this field; concept studies, however, surprisingly showed low efficiency, as users either perceived it incorrectly or did not notice it at all, which led to the development of an alternative approach (right version).

In a second step, the best-performing visualization was evaluated in a user study in which participants (n=72) worked in groups of four on a collaborative problem-solving task. Users perceived the tool as helpful, with positive ratings for perceived usefulness (4.8), ease of understanding (5.6), and perceived accuracy (5.1) on a seven-point Likert scale. These findings are a promising first step towards a real-time visual support system for video conferencing software.

Related Publications

ViCon – Towards understanding visual support systems in collaborative video conferencing
K Schroeder, S Kohl
Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Health, Operations Management, and Design: 13th International Conference, DHM 2022, Held as Part of the 24th HCI International Conference, HCII 2022
Using Speech Contribution Visualization to Improve Team Performance of Divergent Thinking Tasks
S Kohl, A Calero-Valdez, K Schroeder
ACM C&C ’23: Creativity and Cognition 2023


Panoptes - Immersive Financial Asset Management

Panoptes
Immersive Financial Asset Management

Recognitions

Featured at CHI 2020 (Honolulu)
Featured at Dutch CHI 2020
Featured at German CHI 2020
Featured in a special issue of the Observant
Featured at the Exhibition "Touching Abstraction" at BSSC
Pension Fund Achievement of the Year 2019 - APG GroeiFabriek

The Project

With the advent of mixed reality devices such as the Microsoft HoloLens, developers have faced the challenge of utilizing the third dimension in information visualization effectively. Research on stereoscopic devices has shown that three-dimensional representation can improve accuracy in specific tasks (e.g., network visualization), yet the field has so far remained largely silent on the underlying mechanism. Our study systematically investigates the differences in user perception between a regular monitor and a mixed reality device. In a real-life, within-subject field experiment with twenty-eight investment bankers, we assessed subjective and objective task performance with two- and three-dimensional systems, respectively. We tested accuracy with regard to position, size, and color using single and combined tasks.
Our results do not show a significant difference in accuracy between mixed-reality and standard 2D monitor visualizations.

Related Publications

Evaluation of a Financial Portfolio Visualization using Computer Displays and Mixed Reality Devices with Domain Experts
K Schroeder, B Ajdadilish, AP Henkel, A Calero Valdez
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems


Visualizing Climate Data - Mapping Future Pathways


This site is currently under construction and will be updated soon.


Carbon Explore - Towards Sustainable Investments

Tackling Climate Change: Visualizing Sustainability Aspects of Financial Portfolios.

Recognitions

Featured at CHI 2020 – FP: Evaluation of Financial Portfolio Visualizations
Featured at Dutch CHI 2020 & German CHI 2020
Featured at Smart Climate Day 2019
Funded as Techruption Innovation Project
Pension Fund Achievement of the Year 2019 - APG GroeiFabriek

The Project

Development of a visualization algorithm for multivariate financial data and emissions indicators to assist portfolio managers in aligning their investment decisions with climate risks and global sustainability goals.

Related Publications

Evaluation of a Financial Portfolio Visualization using Computer Displays and Mixed Reality Devices with Domain Experts
Kay Schroeder, Batoul Ajdadilish, Alexander P. Henkel, and André Calero Valdez
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–9. https://doi.org/10.1145/3313831.3376556


The Economic Perspective on CO2 - Understanding Climate Change

The Economic Perspective on CO2

Visualizing the Relationship of Economics and Carbon Emissions

Recognitions

Featured on Spiegel Online
Featured in IPCC AR5 (the Fifth Assessment Report)
Featured in IPCC SRREN (Special Report on Renewable Energy)

The Project

Over the last 15 years, I have systematically examined the design space, exploring how information visualization and algorithms can bridge complex societal issues, decision-makers, and society as a whole. For the Intergovernmental Panel on Climate Change (IPCC), I developed the style guides for the world climate reports, as well as the associated guidelines for visualizing climate data, complex concepts, and data types. The particular challenges of this task lay both in the sheer volume of illustrations and technical data and in the political constraints that arise within an international body like the IPCC. The visualization concepts not only had to meet all technical requirements meticulously but also needed to be politically neutral, free of unintended bias, and leave as little room for interpretation as possible.

For the IPCC, I led the visualization work on two reports and developed over 1,000 visualizations, published by Cambridge University Press and in Spiegel Online, Nature Climate Change, Environmental Science and Policy, and other relevant contexts.

Related Publications (selection)

Climate Change 2014: Mitigation of Climate Change. Vol. 3.
Edenhofer, Ottmar et al.
Cambridge University Press, 2015.


Certain Uncertainty - A Geospatial Data Physicalization of Water Stress

Certain Uncertainty - Mapping Global Water Stress
A Physical Data Experience

Recognitions

Featured at Re:publica 2023
Nominated for the Information is Beautiful Awards
Selected for the 2023 Museum of Wild and Newfangled Art Biennial
Featured at CHI 2023 – WS 10: Data as a Material for Design
Featured at the Data Visualization Society's Lightning Talks series
Featured at the Exhibition 'Losing Earth' at Kunsthalle Düsseldorf

The Project

Within the Certain Uncertainty project, we developed a novel approach to visualizing water stress in a geospatial context in relation to population density. Water stress or scarcity always needs to be considered in context: while some parts of the world are only sparsely populated, the impact and mitigation of water stress in densely populated areas are potentially critical. While most water stress mappings focus on communicating water stress within a spatio-temporal context, this project maps the water stress of selected capitals against global population density, enabling the viewer to explore the interaction between both dimensions in a meaningful way. By focusing strongly on the data and removing all cartographic borderlines, an abstract space of mountain ranges remains, showing the world as a spatial accumulation of humans confronted with increasingly changing environmental conditions. Below, we describe the stepwise development process of the artifact, starting with the data processing and how the individual physical layers were created.

Related Publications

Certain Uncertainty – A Geospatial Data Physicalization of Water Stress and Population Density
K Schroeder, S Jules
CHI’23 WS 4: Data as a Material for Design: Alternative Narratives, Divergent Pathways, and Future Directions, Hamburg 2023

Background

To realize the installation, the population density mountains first had to be created. This requires data from which this part of the model can be designed; a render based on NASA data was used, showing world population density as a color-coded heat map. This image served as the data input for generating the 3D model in Houdini. The figure of the data tree in Houdini shows how the image is turned into the mountain landscape; the tree can be divided into four segments. In the first part, the visualization is converted into points that carry the color of the underlying pixels, stored as a vector.
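As a simplified sketch of this first segment (not the actual Houdini network; the file name is an assumption), each pixel of the heat map can be turned into a grid point that carries its color as an RGB vector:

```python
# Illustrative sketch only: load the population-density render and convert each
# pixel into a point with its colour attached as a vector.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("population_density_render.png").convert("RGB"),
                 dtype=float) / 255.0              # shape (height, width, 3), values 0..1

h, w, _ = img.shape
xs, ys = np.meshgrid(np.arange(w), np.arange(h))   # flat grid of point positions
points = np.column_stack([xs.ravel(), ys.ravel()]) # (N, 2) point positions
colors = img.reshape(-1, 3)                        # (N, 3) colour vector per point
```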

Based on the length of this color vector, the points are then moved in the normal direction, creating a mountain landscape. The mesh of this mountain range is rearranged so that the vertices are distributed more evenly, which is necessary for a clean result. The mountain range is then sliced to reveal the layers. In the final stage, the flat layers (cuts) only need to be extruded according to the thickness of the plates, after which the model is ready to be produced. Before the model can be cut, it has to be reorganized so that the CNC cutter knows how to cut each part: every layer is cut from a separate plate, which was prepared in the Fusion 360 software.
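A simplified sketch of the displacement and slicing logic (the height scale and layer thickness are assumed values; the actual model was built in Houdini and prepared for cutting in Fusion 360) could look like this:

```python
# Illustrative sketch only: displace points by the length of their colour vector
# and quantise the resulting height field into plates of equal thickness.
import numpy as np

def displace(colors: np.ndarray, scale: float = 40.0) -> np.ndarray:
    """Height per point: length of the colour vector times an assumed scale."""
    return np.linalg.norm(colors, axis=1) * scale

def slice_into_layers(height: np.ndarray, layer_thickness: float) -> list[np.ndarray]:
    """Return one boolean mask per layer; True where that plate is present."""
    n_layers = int(np.ceil(height.max() / layer_thickness))
    return [height >= (i + 1) * layer_thickness for i in range(n_layers)]

# Example, reusing `colors`, `h`, and `w` from the previous sketch:
# z = displace(colors).reshape(h, w)
# layers = slice_into_layers(z, layer_thickness=4.0)  # each mask corresponds to one plate
```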

This software generated the G-code that allowed the cutter to cut out the model. After all the parts were cut out, they could be assembled into the landscape. We used an overhead projection of the geodata mapping to accurately position the individual layers of the population density map. In a final step, all individual parts were glued together with wood glue.