---
title: Stop the Truthiness and Just Be Wrong
author: Oliver Kennedy
---
**Note**: This was originally an abstract submitted to CIDR. It's based on numerous discussions with lots of people, including but not limited to: Ying Yang, Niccolò Meneghetti, Poonam Kumari, Will Spoth, Aaron Huber, Arindam Nandi,
Boris Glavic, Vinayak Karuppasamy, Dieter Gawlick, Zhen Hua-Liu, Beda Hammerschmidt, Ronny Fehling, and Lisa Lu.
Since their earliest days, databases have held themselves to a strict invariant: Never give the user a wrong answer.
So ingrained is this invariant in the psyche of the database community that those who violate it really want you to be aware that they're committing sacrilege against Codd. Some examples include adding features to SQL to support continuous data (e.g., [MauveDB](https://pdfs.semanticscholar.org/7e20/9751ae6f0e861a7763d3d22533b39aabd7eb.pdf)), adding features to SQL to query Bayesian models (e.g., [BayesStore](https://pdfs.semanticscholar.org/10bf/c12bc444bd0299fa9907ce061b96210eeb6b.pdf)), adding features to SQL to tell the database how accurate you want your results to be (e.g., [DBO](https://pdfs.semanticscholar.org/e42a/429f475d08a7719889b2b2c88e403606984c.pdf)), or adding features to SQL to explicitly ask for specific types of summaries (e.g., [MayBMS](http://infoscience.epfl.ch/record/167070/files/maybms.pdf)).
Sadly, by trying to enforce perfection in the database itself, database systems fail to acknowledge that the data being stored is rarely precise, correct, valid, or unambiguous. This emphasis on certain, deterministic data forces the use of complex, hard-to-manage extract-transform-load pipelines that emit deceptively certain, “truthy” data rather than acknowledging ambiguity or error. The resulting data is often (incorrectly) interpreted as fact by naive users who have no reason to believe otherwise. The problem is getting worse: as more decisions are automated, even small truthiness errors can drastically impact people's lives. [Data errors in credit reports](http://money.cnn.com/2016/04/11/pf/john-oliver-credit-reports/index.html) can cause perfectly honest people to be denied access to credit. Similarly, name-matching errors combined with rigid protocols have led to an [8-year-old being identified as a terrorist](http://www.nytimes.com/2010/01/14/nyregion/14watchlist.html?_r=0).
System designers must either present erroneous data as truthful or risk discarding useful information, and many choose the former. The database community has already begun treating uncertainty as [a first-class primitive in databases](https://smile.amazon.com/Probabilistic-Databases-Synthesis-Lectures-Management/dp/1608456803/ref=sr_1_1?ie=UTF8&qid=1481658544&sr=8-1&keywords=probabilistic+databases). Unfortunately, uncertainty also requires us to rethink how humans interact with data.
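To make "uncertainty as a first-class primitive" concrete, here is a minimal Python sketch of possible-worlds semantics. The relation, names, and probabilities are invented for illustration, and the exhaustive enumeration of worlds is deliberately naive; the point is simply that each tuple carries a probability of existing, and a query returns marginal probabilities instead of a single truthy answer.

```python
from itertools import product

# A toy "probabilistic relation": each row is (tuple, probability that the row exists).
# The names and numbers are made up for illustration.
callers = [
    (("555-0100", "Alice"), 0.9),   # the extractor is fairly confident about this mapping
    (("555-0100", "Bob"),   0.1),   # ...but it might be Bob instead
    (("555-0199", "Carol"), 1.0),   # this row is certain
]

def marginal(relation, predicate):
    """Probability that at least one row satisfying `predicate` exists.
    Enumerates every possible world -- exponential, but fine for a toy example."""
    total = 0.0
    for choices in product([True, False], repeat=len(relation)):
        world_prob = 1.0
        world = []
        for (row, p), present in zip(relation, choices):
            world_prob *= p if present else (1.0 - p)
            if present:
                world.append(row)
        if any(predicate(row) for row in world):
            total += world_prob
    return total

# "Who is calling from 555-0100?" -- the answer comes back with probabilities
# attached, rather than as a single deceptively certain fact.
for name in ("Alice", "Bob"):
    p = marginal(callers, lambda row, n=name: row == ("555-0100", n))
    print(f"P(caller is {name}) = {p:.2f}")
```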
Here, industry has done significantly better than the database research community. Personal information managers like Apple Calendar and the iOS Phone App increasingly use facts data-mined from email to automatically populate their contacts and calendar databases. For example, the OS X Calendar app finds events in your email and schedules them.
![OS X Calendar App](graphics/2016-calendar-explain.png)
Similarly, the iOS Phone App makes use of phone numbers it finds in your email to predict who's calling you.
![iOS Phone App](graphics/2016-maybe-screen.png)
Both examples illustrate a number of good design elements:
1. The interface keeps uncertain facts distinct or clearly marks them as being guesses.
2. The interface includes intuitive provenance mechanisms that help to put the extracted information in context.
3. The interface includes overt feedback options to help the user correct or confirm uncertain data.
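As a rough sketch of how these three elements might be carried through a data layer (the `UncertainFact` type and its fields are hypothetical, not Apple's actual design): guessed values stay marked as guesses, carry provenance, and expose explicit confirm/correct hooks.

```python
from dataclasses import dataclass

@dataclass
class UncertainFact:
    """A value the system guessed rather than was told; kept distinct from
    confirmed data (element 1)."""
    value: str
    source: str           # provenance: where the guess came from (element 2)
    confirmed: bool = False

    def display(self) -> str:
        # Clearly mark guesses instead of presenting them as fact.
        return self.value if self.confirmed else f"Maybe: {self.value}?"

    # Overt feedback hooks that let the user confirm or correct the guess (element 3).
    def confirm(self) -> None:
        self.confirmed = True

    def correct(self, new_value: str) -> None:
        self.value = new_value
        self.confirmed = True


# A caller name guessed from an email signature.
caller = UncertainFact(value="Alice", source="email from alice@example.com, 2016-12-01")
print(caller.display())                 # -> Maybe: Alice?
print(f"(based on {caller.source})")    # provenance shown in context
caller.confirm()
print(caller.display())                 # -> Alice
```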
We as a database community need to start adapting these techniques to more general data management settings. The presentation layer isn't the only problem: identifying sources of uncertainty requires developers to invest substantial upfront effort rethinking how they write code, and we need to make it worth their while. For example, we might provide infrastructure support to help developers [draw generalizations from ambiguous choices](http://odin.cse.buffalo.edu/papers/2015/HotMobile-maybe-final.pdf). We might streamline [imperative language support for uncertainty](https://books.google.com/books?hl=en&lr=&id=17riBQAAQBAJ&oi=fnd&pg=PP1&dq=Probabilistic+programming&ots=7QUU6HLw0F&sig=zJCPZLhJLZhI6w2ELo-CyGnNKFU). Or, we might define [higher-order](https://pdfs.semanticscholar.org/bbf9/946752cc6456a333e16413583e2e98ef8554.pdf) [data transformation primitives](http://odin.cse.buffalo.edu/papers/2015/VLDB-lenses-final.pdf).
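As one hypothetical illustration of what imperative language support for uncertainty could look like (a sketch only, not the API of any of the systems linked above): an uncertain value carries samples from a distribution, ordinary arithmetic propagates the uncertainty, and comparisons return probabilities rather than booleans.

```python
import random

class Uncertain:
    """A toy uncertain value represented by Monte Carlo samples.
    Illustrative only; not the interface of any of the systems cited above."""
    def __init__(self, samples):
        self.samples = list(samples)

    @classmethod
    def normal(cls, mean, stddev, n=1000):
        return cls(random.gauss(mean, stddev) for _ in range(n))

    def __add__(self, other):
        # Ordinary arithmetic silently propagates the uncertainty.
        return Uncertain(a + b for a, b in zip(self.samples, other.samples))

    def prob_greater_than(self, threshold):
        # A comparison yields a probability, not a deceptively certain boolean.
        return sum(s > threshold for s in self.samples) / len(self.samples)

# Two legs of a trip, each measured with noise (numbers are made up).
leg_1 = Uncertain.normal(mean=95.0, stddev=10.0)
leg_2 = Uncertain.normal(mean=40.0, stddev=5.0)
total = leg_1 + leg_2

print(f"P(total distance > 150m) = {total.prob_greater_than(150.0):.2f}")
```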
In summary, the illusion of accuracy in database query results can no longer be maintained. Database systems must learn how to acknowledge errors in source data, and how to use this information to effectively communicate ambiguity to users. Moreover, this needs to happen without overwhelming users, without breaking the decades-old abstractions that people understand and use day-to-day in their workflows, and without requiring a statistics background from all users.
