This year, I was lucky enough to attend the Tapestry conference.
This conference has been on my wish list for a while now. I can now officially say, everyone was right. This is the best dataviz-related conference out there (at least judging from my admittedly limited sample size).
A big part of that is of course being able to meet and have actual conversations with your data heroes. But what really stood out to me was the quality of the talks. To me, the best learning experiences are ones where I walk out with more questions than I had when I went in. So, in appreciation of Tapestry conference, I wanted to share with you the topics that most resonated with me, and the questions that they’ve inspired me to ask.
#1 Data isn’t truth
In her brilliant opening keynote, Mona Chalabi highlighted this issue as one she’s seen throughout her career. People tend to overestimate the accuracy of data. Because it’s data. Data visualizations, as visual representations of data, carry the same connotation of objectivity and truth.
I’m not sure if this was planned or it just worked out this way – but it was fascinating to see how other presenters expanded on this topic. Ken Field gave us the cartography perspective. As a data visualization practitioner with only a dabbler’s knowledge of GIS concepts, I was only vaguely aware of the distortions caused by different map projections. But the talk was a sobering reminder of the impact of those distortions, especially in relation to the mental models people have of maps as accurate pictures of our world.
Matthew Kay kicked off day 2 with a talk that was all about the ways we can visualize uncertainty. Like so many of the other Tapestry talks, it was this awesome combination of inspiring examples and challenging questions.
Charts, and the data used to create them, are perceived as objective – as evidence of the truth, if not truth itself. As a maker of charts, I definitely feel like it’s part of my responsibility to challenge myself to be more aware of this truthiness assumption, and whether I’m making any design choices that contribute to it.
Questions I’ll be focusing on:
How trusting (or skeptical) is my audience?
Am I making any design choices that might be unintentionally communicating more certainty or accuracy than they should?
Can I do anything to help my audience better understand the level of uncertainty in the data? Are there visual, textual, or interactive elements I can add to increase uncertainty awareness for this viz?
#2 The power of conventions and shared language
This was another thread that seemed to come up throughout the conference. Mona pointed out an unusual aspect of this topic: how do we communicate with people who don’t share our language? She shared some thought-provoking experiments she’s done with making data viz that is accessible to visually impaired audiences (you can see one of these examples here).
Conventions (affectionately known as best practices) are of course the shared language of data. One of the things I appreciated about this conference is the different perspectives on our shared data viz language. Aritra Dasgupta talked about some interesting examples from the scientific community, where new conventions have to be introduced because the existing, widely accepted conventions can lead to problems.
That struggle is all too real – whether we work with scientists or in the corporate world.
But Ken Field made some good counterarguments. He pointed out that conventions work because they are conventions – people already know what they mean. No, of course Ken wasn’t advocating for rainbow palettes (at least I don’t think he was!). But he rightly pointed out the advantages of using our existing shared language when it works.
This leads us to the ever-raging debate on novel chart forms. I really enjoyed Elijah Meeks’ take on it – which should come as zero surprise to anyone who’s read anything else on this blog. He talked about the importance of introducing new chart forms when those charts are better, or when they can show information that wouldn’t be possible to see in a more common chart. But it’s not just about sharing that magical new chart. It requires a period of trust building with your user community.
That resonated with me. A lot. A big part of my role (at least as I see it) is to help people find new ways of looking at their data, so that they can find things they weren’t able to see before.
Questions I’ll be focusing on:
What existing conventions is my audience used to? This could be related to chart forms, analysis choices, or even word choice in titles and labels.
And closely related to that, are there subgroups in my audience who are used to different conventions?
What could we be missing (or misinterpreting) by sticking to the accepted conventions?
#3 The power of sequencing
This was yet another topic from Mona’s opening keynote that stood out. She challenged us to use sequencing when presenting complex information.
Instead of making something people can consume quickly, force them to slow down by sequencing.
Could this be the elusive answer to the complexity vs simplicity wars?
Pacing is such a key storytelling technique. It makes so much sense to take advantage of it in data communication as well. And it doesn’t have to require animation either. I could imagine implementing sequencing in a longform piece, or even in charts that are being presented in a slide deck.
Nadja Popovich made another interesting point on this topic. She talked about the recent project she worked on for the New York Times, “How Much Hotter Is Your Hometown Than When You Were Born?” What really stood out to me was her explanation of how she used near and far views, but in an unusual way. We’re used to starting with the big picture, and then zooming in to more detail. But this piece actually starts with the zoomed-in view – you can see the story of your own hometown. This is a powerful way to draw viewers in by letting them connect the data to their own experience. It’s also a great reminder that not all stories need to be told the same way.
Questions I’ll be focusing on:
Am I oversimplifying this data in the interest of speed to consume?
Can I incorporate sequencing to make the experience less overwhelming?
Can I create a near view that would make it easier for people to personally connect with this information?
The meta question: uncertainty and choosing the ‘best’ chart
My last question is to you, my #datafam (which also includes any non-Tableau dataviz people!).
This question is inspired mostly by Matthew’s talk on uncertainty, but also by Elijah’s point on our existing preoccupation with optimizing individual charts rather than a more holistic approach to optimizing the experience.
So here’s my question: What’s the role of uncertainty, and our risk aversion preferences, in how we view best practices? Does it impact how important it is for us to feel like we’ve chosen the ‘best’ chart?
Of course, this is based on a more qualitative definition of uncertainty. But maybe the problem with a chart-chooser type of approach – and, more generally, with the effort to optimize at the chart level – is that it only enables our own risk aversion. Are we inaccurately presenting these best practices as rules, and does that lead data vizzers (especially beginners) to overestimate the universality and applicability of these rules?
Hey, did you know that most of these talks are now available to watch? Check them out here.