Paper: Visual Embedding, A Model for Visualization

By: Demiralp et al.

Quick Eval: well written, out-of-the-box thinking, would recommend.

Problem Tackled: Automating the design of effective visualizations

Philosophy: “visualization as a perceptual painting of structure in data” and structure is mostly represented by distance


This problem has been tackled many times before (e.g., Jock Mackinlay's APT), and the authors extend the simple notion of structure-as-distance to "visual product spaces," which combine different visual attributes, such as color with shape. Given this model, they can cast the visualization function (the mapping from data to visual elements) as an optimization problem. This is where the graphical model comes in: the visualization becomes a joint probability distribution over random variables (the data points), and the edges can encode distances in the embedding space. This model is powerful because it expresses the fact that there may be multiple good visualizations, and that the system could learn to improve over time (via conditioning and inference).
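To make the optimization view concrete, here is a minimal toy sketch (my own illustration, not code from the paper): pick an assignment of visual values to data points so that pairwise visual distances best preserve pairwise data distances, i.e., minimize a stress-like objective over candidate embeddings.

```python
# Hypothetical sketch of "visualization as an optimization problem":
# choose a mapping from data points to visual values (e.g., lightness
# levels of a color ramp) that best preserves the data's pairwise distances.
import itertools

# Toy 1-D data whose pairwise distances define the "structure".
data = [0.0, 1.0, 1.2, 3.0]

# Candidate visual values (e.g., lightness levels in [0, 1]).
visual = [0.0, 0.2, 0.5, 1.0]

def stress(assignment):
    """Sum of squared mismatches between data and visual pairwise distances."""
    s = 0.0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            d_data = abs(data[i] - data[j]) / 3.0  # normalize to [0, 1]
            d_vis = abs(assignment[i] - assignment[j])
            s += (d_data - d_vis) ** 2
    return s

# Brute-force the best assignment of visual values to data points.
best = min(itertools.permutations(visual), key=stress)
```

Here the order-preserving mapping wins, matching the intuition that a good encoding is one whose perceptual distances mirror the data's distances; the paper's framing generalizes this to perceptually measured distances in product spaces of visual attributes.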

Another neat idea is that although people don't have precise specifications for questions like the perception of distance, we could estimate it via crowdsourcing (and if there are multiple perceptual dimensions, we could model each similarly).
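A tiny sketch of what that might look like (the stimuli, rating scale, and aggregation here are my own illustrative assumptions, not the paper's actual protocol): collect pairwise dissimilarity ratings from workers and average them into an estimated perceptual distance per pair.

```python
# Hypothetical crowdsourcing sketch: average workers' pairwise
# dissimilarity ratings into a perceptual distance estimate.
from collections import defaultdict

# Each entry: (stimulus_a, stimulus_b, rating on a 0-10 dissimilarity scale).
ratings = [
    ("red", "orange", 2), ("red", "orange", 3),
    ("red", "blue", 9), ("red", "blue", 8),
    ("orange", "blue", 7), ("orange", "blue", 8),
]

sums = defaultdict(float)
counts = defaultdict(int)
for a, b, r in ratings:
    key = tuple(sorted((a, b)))  # order of a pair shouldn't matter
    sums[key] += r / 10.0        # normalize ratings to [0, 1]
    counts[key] += 1

# Estimated perceptual distance = mean normalized rating per pair.
perceptual_distance = {k: sums[k] / counts[k] for k in sums}
```

These estimated distances could then feed the embedding optimization as the target structure to preserve.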

Questions and comments

Modeling visualization is hard because we don't have a good notion of "knowledge" or information, nor a good notion of how the brain perceives visual elements; but as computer scientists we want to map the problem into a space where we can apply math.

I was really confused by the application section, however, and did not understand the examples. The first example, on neural tracts, felt like "here are two cool-looking pictures": no explanation was given for why these pictures were useful or what handcrafted visualizations would look like by comparison. I felt there was a jump from the theory to the application.