\begin{longtable}{>{\raggedright\arraybackslash}p{3cm}p{14cm}}
\caption{An expanded collection of examples of errors resulting in mirages along different stages of our analytics pipeline. As we highlight in the table in the main paper, this list is not exhaustive. Instead, it presents examples of how decision-making at various stages of analysis can damage the credibility or reliability of the messages in charts.}\label{table:mirage-table}
\\\hbox{\normalsize{\textbf{CURATING ERRORS}}}&\\ \\
\normalsize{Error} & \normalsize{Mirage}\\ \hline
\rowcolor{colora}Forgotten Population or Missing Dataset & We expect that datasets fully cover or describe phenomena of interest. However, structural, political, and societal biases can result in over- or under-sampling of populations or problems of importance. This mismatch in coverage can hide crucial concerns about the possible scope of our analyses. \cite{missingdatasets, dignazio2019draft}\\
\rowcolor{colora-opaque}Geopolitical Boundaries in Question & Shifting borders and inconsistent standards of ownership can cause geospatial visualizations to disagree with one another. For instance, statistical measures for the United States change significantly depending on whether protectorates and territories are included, or if overseas departments are excluded when calculating measures for France. These issues become more complex when nation-states disagree on the borders and extent of their territory, which can cause maps to display significantly different data depending on who is viewing the data, with what software, and from what location. \cite{missingdatasets,soeller2016mapwatch}\\
\\\hbox{\normalsize{\textbf{CURATING + WRANGLING ERRORS}}}&\\ \\
\normalsize{Error} & \normalsize{Mirage}\\ \hline
\rowcolor{colora}Missing or Repeated Records & We often assume that we have one and only one entry for each datum. However, errors in data entry or integration can result in missing or repeated values that may result in inaccurate aggregates or groupings (see \figref{fig:wrangling}). \cite{kim2003taxonomy} \\
\rowcolor{colora-opaque}Outliers & Many forms of analysis assume data have similar magnitudes and were generated by similar processes. Outliers, whether in the form of erroneous or unexpectedly extreme values, can greatly impact aggregation and discredit the assumptions behind many statistical tests and summaries. \cite{kim2003taxonomy} \\
\rowcolor{colora}Spelling Mistakes & Columns of strings are often interpreted as categorical data for the purposes of aggregation. If interpreted in this way, typos or inconsistent spelling and capitalization can create spurious categories, or remove important data from aggregate queries. \cite{wang2019uni}\\
\rowcolor{colora-opaque}Higher Noise than Effect Size & We often have access to only a sample of the data, or to noisy estimates of an unknown true value. How the uncertainty in these estimates is communicated, and whether or not the viewer is made aware of the relative robustness of the effect in the context of this noise, can affect the confidence viewers place in a particular effect. \cite{hofmann2012graphical,hullman2017imagining}\\
\rowcolor{colora}Sampling Rate Errors & Perceived trends in distributions are often subject to the sampling rate at which the underlying data have been curated. This can be problematic, as an apparent trend may be an artifact of the sampling rate rather than of the data (as is the case for visualizations that do not follow the sampling rates suggested by the Nyquist frequency; see the sketch following this table). \cite{kindlmann2014algebraic}\\
\\\hbox{\normalsize{\textbf{WRANGLING ERRORS}}}&\\ \\
\normalsize{Error} & \normalsize{Mirage}\\ \hline
\rowcolor{colorb}Differing Number of Records by Group & Certain summary statistics, including aggregates, are sensitive to sample size. However, the number of records aggregated into a single mark can vary dramatically. This mismatch can mask that sensitivity and problematize per-mark comparisons; when combined with differing levels of aggregation, it can produce counter-intuitive results such as Simpson's Paradox (see the sketch following this table). \cite{guo2017you}\\
\rowcolor{colorb-opaque}Analyst Degrees of Freedom & Analysts have tremendous flexibility in how they analyze data. These ``researcher degrees of freedom''~\cite{gelman2013garden} can create conclusions that are highly idiosyncratic to the choices made by the analyst or, in a malicious sense, promote ``p-hacking'' where the analyst searches through the parameter space in order to find the best support for a pre-ordained conclusion. A related issue is the ``multiple comparisons problem'' where the analyst makes \emph{so many} choices that at least one, just by happenstance, is likely to appear significant, even if there is no strong signal in the data. \cite{gelman2013garden,pu2018garden,zgraggen2018investigating}\\
\rowcolor{colorb}Confusing Imputation & There are many strategies for dealing with missing or incomplete data, including the imputation of new values. How values are imputed, and then how these imputed values are visualized in the context of the rest of the data, can impact how the data are perceived, in the worst case creating spurious trends or group differences that are merely artifacts of how missing values are handled prior to visualization. \cite{song2018s}\\
\rowcolor{colorb-opaque}Inappropriate/Missing Aggregation & The size of the dataset is often far larger than what can fit in a particular chart. Aggregation at a particular level of detail is a common technique for reducing the size of the data. However, the choice of aggregation function can lead to differing conclusions depending on the underlying distribution of the data. Furthermore, these statistical summaries may fail to capture important features of the distribution, such as second-order statistics. Conversely, when a designer fails to apply an aggregation function (or applies one at too low a level of detail), the resulting overplotting, excess visual complexity, or reduced discoverability can likewise hide important patterns in the data. \cite{anscombe1973graphs,few2019loom,matejka2017same,salimi2018bias,wall2017warning}\\
\\\hbox{\normalsize{\textbf{VISUALIZING + WRANGLING ERRORS}}}&\\ \\
\normalsize{Error} & \normalsize{Mirage}\\ \hline
\rowcolor{colorc}Outliers Dominate Scale Bounds & Numeric and color scales are often automatically bound to the extent of the data. If there are a few extreme values, this can result in a renormalization in which much of the data is compressed into a narrow output range, destroying the visual signal of potential trends and variability (see the sketch following this table). \cite{correll2016surprise,kindlmann2014algebraic}\\
\rowcolor{colorc-opaque}Latent Variables Missing & When communicating information about the relationship between two variables, we assume that we have all relevant data. However, in many cases a latent variable has been excluded from the chart, suggesting a spurious or non-causal relationship (for instance, drowning deaths and ice cream sales are tightly correlated, but only because both are related to a latent variable: outdoor temperature). Even if this variable is present, if the relevant functional dependency is unidentified, the appropriate causal linkage between variables may not be visible in the chart. Similarly, datasets can contain subgroups or subpopulations that, if not properly separated or identified, can lead viewers to apply overall trends to inappropriate subgroups. \cite{anand2015automatic,wang2019uni}\\
\rowcolor{colorc}Base Rate Masquerading as Data & Visualizations comparing rates are often assumed to show the relative rate rather than the absolute rate. Yet many displays give prominence to these absolute or base rates (such as population in choropleth maps) rather than to the encoded variable, causing the reader to mistake the base rate for the data of interest. \cite{correll2016surprise}\\
\rowcolor{colorc-opaque}Concealed Uncertainty & Charts that fail to indicate that they contain uncertainty risk giving a false impression of precision, as well as inviting extreme mistrust of the data if the reader later realizes that this uncertainty was not presented clearly. There is also a tendency to incorrectly assume that data are high quality or complete, even without evidence of this veracity. \cite{song2018s, few2019loom, mayrTrust2019, sacha2015role}\\
\\\hbox{\normalsize{\textbf{VISUALIZING ERRORS}}}&\\ \\
\normalsize{Error} & \normalsize{Mirage}\\ \hline
\rowcolor{colorc}Non-sequitur Visualizations & Readers expect graphics that appear to be charts to be a mapping between data and image. Visualizations being used as decoration (in which the marks are not related to data) present non-information that might be mistaken for real information. Even if the data are accurate, additional unjustified annotations could produce misleading impressions, such as decorating uncorrelated data with a spurious line of best fit. \cite{correll2017black}\\
\rowcolor{colorc-opaque}Misunderstand Area as Quantity & The use of area-encoded marks assumes that readers will be able to visually compare those areas. However, area-encoded marks are often misread as encoding length, which can cause ambiguity about the magnitudes being conveyed. \cite{pandey2015deceptive, correll2017black}\\
\rowcolor{colorc}Non-discriminable Colors & The use of color as a data-encoding channel presumes the perceptual discriminability of colors. Poorly chosen color palettes, especially when marks are small or cluttered, can result in ambiguity about which marks belong to which color classes. \cite{szafir2017modeling}\\
\rowcolor{colorc-opaque}Unconventional Scale Directions & Viewers have certain prior expectations about the direction of scales. For instance, in languages with left-to-right reading order, time is likewise assumed to move from left to right in graphs. Depending on context, dark or opaque colors are perceived as having higher-magnitude values than brighter or more transparent colors. Violating these assumptions can cause slower reading times or even the reversal of perceived trends. \cite{correll2017black,pandey2015deceptive,tversky1991cross,schloss2018mapping}\\
\rowcolor{colorc}Overplotting & We expect to be able to clearly identify individual marks, and expect that one visual mark corresponds to a single value or aggregated value. Yet overlapping marks can hide internal structures in the distribution or disguise potential data quality issues, as in \figref{fig:opacity-permute}. \cite{correll2018looks,mayorga2013splatterplots,micallef2017towards}\\
\rowcolor{colorc-opaque}Singularities & In chart types such as line charts or parallel coordinates plots, many data series can converge to a single point in visual space. Without intervention, viewers can have trouble discriminating which series takes which path after such a singularity. \cite{kindlmann2014algebraic}\\
\rowcolor{colorc}Inappropriate Semantic Color Scale & Colors have different effects and semantic associations depending on context (for instance the cultural context of green being associated with money in the United States). Color encodings in charts that violate these assumptions can result in viewers misinterpreting the data: for instance, a viewer might be confused by a map in which the oceans are colored green, and the land colored blue. \cite{lin2013selecting}\\
\rowcolor{colorc-opaque}Within-the-Bar-Bias & The filled-in area of a bar in a bar chart does not communicate any information about likelihood. However, viewers often erroneously presume that values inside the visual area of the bar are more probable than values outside of this region, leading to erroneous or biased conclusions about uncertainty. \cite{correll2014error,newman2012bar}\\
\rowcolor{colorc}Clipped Outliers & Charts are often assumed to show the full extent of their input data. A chosen domain might exclude meaningful outliers, causing some trends in the data to be invisible to the reader. \\
\rowcolor{colorc-opaque}Continuous Marks for Nominal Quantities & Conventionally readers assume lines indicate continuous quantities and bars indicate discrete quantities. Breaking from this convention, for instance using lines for nominal measures, may cause readers to hallucinate non-existent trends based on ordering. \cite{mcnuttlinting, zacks1999bars}\\
\rowcolor{colorc}Modifiable Areal Unit Problem & Spatial aggregates are often assumed to present their data without bias, yet they are highly dependent on the shapes of the bins defining those aggregates. This can cause readers to misunderstand the trends present in the data. \cite{fotheringham1991modifiable, kindlmann2014algebraic}\\
\rowcolor{colorc-opaque}Manipulation of Scales & The axes and scales of a chart are presumed to straightforwardly represent quantitative information. However, manipulation of these scales (for instance, by flipping them from their commonly assumed directions, truncating or expanding them with respect to the range of the data~\cite{pandey2015deceptive, correll2017black, cleveland1982variables, ritchie2019lie, correll2019truncating}, using non-linear transforms, or employing dual axes~\cite{KindlmannAlgebraicVisPedagogyPDV2016, cairo2015graphics}) can cause viewers to misinterpret the data in a chart, for instance by exaggerating correlation~\cite{cleveland1982variables}, exaggerating effect size~\cite{correll2019truncating,pandey2015deceptive}, or misinterpreting the direction of effects~\cite{pandey2015deceptive}. \cite{cairo2015graphics,correll2017black,correll2019truncating,cleveland1982variables,KindlmannAlgebraicVisPedagogyPDV2016,pandey2015deceptive,ritchie2019lie}\\
\rowcolor{colorc}Trends in Dual Y-Axis Charts are Arbitrary & Multiple line series appearing on a common axis are often read as being related through an objective scaling. Yet when y-axes are superimposed, the relative scaling of the two axes is arbitrary, which can cause readers to misunderstand the magnitudes of relative trends. \cite{KindlmannAlgebraicVisPedagogyPDV2016, cairo2015graphics}\\
\rowcolor{colorc-opaque}Nominal Choropleth Conflates Color Area with Classed Statistic & Choropleth maps color spatial regions according to a theme of interest. However, the size of these spatial regions may not correspond well with the actual trend in the data. For instance, U.S. Presidential election maps colored by county can communicate an incorrect impression of which candidate won the popular vote, as many counties with large area have small populations, and vice versa. \cite{gastner2005maps,nusrat2016state}\\
\rowcolor{colorc}Overwhelming Visual Complexity & We assume that there is a benefit to presenting all of the data in all of its complexity. However, visualizations with too much visual complexity can overwhelm or confuse the viewer and hide important trends, as with graph visualization ``hairballs.'' \cite{hofmann2012graphical, greadability}\\
\\\hbox{\normalsize{\textbf{READING ERRORS}}}&\\ \\
\normalsize{Error} & \normalsize{Mirage}\\ \hline
\rowcolor{colord}Reification & It can be easier to interpret a chart or map as a literal view of the real world than to understand it as an abstraction at the end of a causal chain of decision-making; that is, to confuse the \emph{map} with the \emph{territory}. This misunderstanding can lead to falsely placed confidence in measures containing flaws or uncertainty: Drucker~\cite{drucker2012humanistic} claims that reification caused by information visualization results in a situation ``as if all critical thought had been precipitously and completely jettisoned.'' \cite{drucker2012humanistic}\\
\rowcolor{colord-opaque}Assumptions of Causality & We assume that highly correlated data plotted in the same graph have some important linkage. However, through visual design or arbitrary juxtaposition, viewers can come away with erroneous impressions of relation or causation between unrelated or non-causally linked variables. \cite{xiong2019illusion, few2019loom}\\
\rowcolor{colord}Base Rate Bias & Readers assume unexpected values in a visualization are emblematic of reliable differences. However, readers may be unaware of relevant base rates: either the relative likelihood of what is seen as a surprising value or the false discovery rate of the entire analytic process. \cite{correll2016surprise,pu2018garden, zgraggen2018investigating}\\
\rowcolor{colord-opaque}Inaccessible Charts & As chart makers, we often assume that our readers are a homogeneous group. Yet the way that people read charts is heterogeneous and dependent on underlying perceptual abilities and cognitive backgrounds that can be overlooked by the designer. Insufficient mindfulness of these differences can result in miscommunication. For instance, a viewer with a color vision deficiency may interpret two colors as identical when the designer intended them to be distinct. \cite{lundgard2019Sociotechnical, plaisant2005information}\\
\rowcolor{colord}Default Effect & While default settings in visualization systems are often selected to guide users towards best practices, these defaults can have an outsized impact on the resulting design. This influence can result in mirages: for instance, default color palettes can artificially associate unrelated variables; or default histogram settings can hide important data quality issues. \cite{correll2018looks,few2019loom, hullman2011visualization,shah2006policy}\\
\rowcolor{colord-opaque}Anchoring Effect & Initial framings of information tend to guide subsequent judgements. This can cause readers to place undue rhetorical weight on early observations, which may cause them to undervalue or distrust later observations. \cite{ritchie2019lie, hullman2011visualization}\\
\rowcolor{colord}Biases in Interpretation & Each viewer arrives at a visualization with their own preconceptions, biases, and epistemic frameworks. If these are not carefully considered, cognitive biases such as the backfire effect or confirmation bias can cause viewers to anchor on only the data (or the reading of the data) that supports their preconceived notions, to reject data that does not accord with their views, and generally to ignore a more holistic picture of the strength of the evidence. \cite{dignazio2019draft, d2016feminist, few2019loom,wall2017warning,valdez2017framework}\\
\\\hbox{\normalsize{\textbf{READING + WRANGLING ERRORS}}}&\\ \\
\normalsize{Error} & \normalsize{Mirage}\\ \hline
\rowcolor{colord}Drill-down Bias & We assume that the order in which we investigate our data should not impact our conclusions. However, filtering on less explanatory or relevant variables first can hide the full scope of the impact of later variables. This results in insights that are attributed to small subsets of the data when they may in fact hold for the larger whole. \cite{lee2019avoiding}\\
\rowcolor{colord-opaque}Cherry Picking & Filtering and subsetting are meant to be tools to remove irrelevant data, or allow the analyst to focus on a particular area of interest. However, if this filtering is too aggressive, or if the analyst focuses on individual examples rather than the general trend, this cherry-picking can promote erroneous conclusions or biased views of the relationships between variables. Failing to keep the broader dataset in context can also result in the Texas Sharpshooter Fallacy or other forms of HARKing~\cite{cockburn2018hark}. \cite{few2019loom}\\
\rowcolor{colord}Availability Heuristic & Examples that are easier to recall are perceived as more typical than they actually are. In a visual analytics context, this could be reflected in analysts recalling outlying instances more easily than values that match the trend, or assuming that the data patterns they encounter most frequently (for instance, in the default or home view of their tool) are more common than they really are in the dataset as a whole. \cite{dimara2016accounting,dimara2018task,few2019loom}\\
\end{longtable}
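
As a concrete illustration of the ``Differing Number of Records by Group'' and ``Inappropriate/Missing Aggregation'' entries above, the following minimal sketch shows how differing group sizes, combined with the choice of aggregation level, can reverse an apparent effect (an instance of Simpson's Paradox). The data are hypothetical and the use of the \texttt{pandas} library is purely an implementation choice for this example.

\begin{verbatim}
# A minimal sketch with hypothetical data: the level of aggregation,
# combined with differing numbers of records per group, reverses the
# apparent comparison between groups (Simpson's Paradox).
import pandas as pd

records = pd.DataFrame({
    "group":   ["A"] * 10 + ["B"] * 10,
    "stratum": ["easy"] * 3 + ["hard"] * 7 + ["easy"] * 7 + ["hard"] * 3,
    "success": [1, 1, 1,                  # A / easy: 3 of 3
                1, 0, 0, 0, 0, 0, 0,      # A / hard: 1 of 7
                1, 1, 1, 1, 1, 1, 0,      # B / easy: 6 of 7
                0, 0, 0],                 # B / hard: 0 of 3
})

# Pooled aggregation: group B appears to perform better (0.6 vs. 0.4)...
print(records.groupby("group")["success"].mean())

# ...but aggregating at a finer level of detail shows group A performing
# better within every stratum (1.00 vs. 0.86 on "easy", 0.14 vs. 0.00 on "hard").
print(records.groupby(["group", "stratum"])["success"].mean())
\end{verbatim}

A chart built from the pooled aggregate and a chart built from the per-stratum aggregates would therefore support opposite conclusions, even though both faithfully depict the same underlying records.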
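
The ``Outliers Dominate Scale Bounds'' entry can similarly be made concrete with a short sketch using purely hypothetical values: when a scale is bound automatically to the extent of the data, a single extreme value compresses every other value into a sliver of the output range.

\begin{verbatim}
# A minimal sketch with hypothetical values: normalizing against the full
# data extent (as automatic axis or color scales typically do) lets one
# extreme value flatten the visible variation among the remaining values.
values = [3.0, 4.0, 5.0, 4.5, 3.5, 500.0]   # one erroneous/extreme entry

lo, hi = min(values), max(values)
normalized = [(v - lo) / (hi - lo) for v in values]  # e.g., position on a color ramp

# The first five values now occupy less than 1% of the scale, so any trend
# among them is visually indistinguishable.
print([round(n, 4) for n in normalized])
\end{verbatim}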
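
The ``Sampling Rate Errors'' entry can likewise be illustrated with a sketch built on hypothetical numbers: sampling a 9\,Hz signal at 10 samples per second (below its Nyquist rate of 18 samples per second) yields points indistinguishable from a 1\,Hz signal, so a chart of the samples would show a slow trend that is purely an artifact of the sampling rate.

\begin{verbatim}
# A minimal sketch with hypothetical numbers: sampling a 9 Hz sine wave at
# 10 samples/second (below its Nyquist rate) yields points identical to a
# (phase-reversed) 1 Hz sine wave sampled at the same instants.
import math

sample_rate = 10.0   # samples per second -- too low for a 9 Hz signal
times = [n / sample_rate for n in range(10)]

observed = [math.sin(2 * math.pi * 9.0 * t) for t in times]   # true 9 Hz signal
alias    = [-math.sin(2 * math.pi * 1.0 * t) for t in times]  # 1 Hz alias

# The two lists match to floating-point precision: the plotted "trend"
# reflects the sampling rate, not the underlying signal.
print(all(abs(o - a) < 1e-9 for o, a in zip(observed, alias)))
\end{verbatim}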