This post is part of our Bookshelf series organized by the Data Science R&D department at Civis Analytics. In this series, Civis data scientists share links to interesting software tools, blog posts, scientific articles, and other things that they have read about recently, along with a little commentary about why these things are worth checking out. Are you reading anything interesting? We’d love to hear from you on Twitter.
Researchers at Google’s DeepMind recently released another iteration of their AlphaGo program, AlphaGo Zero. The new algorithm teaches itself to play Go by starting from random play and competing against itself, rather than learning from a dataset of human-played games. It uses the structure of Go’s rules to evaluate which strategies are successful. While this is a remarkable achievement, it doesn’t signal the rise of the robots. The most important problems (and the exponentially many possible strategies for solving them) have a far more complicated structure than Go and require human attention to make progress, so people will continue to cure diseases, make art, and do even basic tasks like cook dinner without A.I.
We use deep learning at Civis to solve a variety of problems. While these systems can be powerful, they can also be incredibly painful to debug. This blog post details a basic approach to testing TensorFlow models, using concrete code examples. I’ve been thinking about adding more testing to my machine learning workflows, and this offered some meaningful structure.
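One testing pattern in that spirit is the smoke test: after a single training step, the loss should go down and the trainable parameters should change. Here is a minimal, framework-free sketch of that idea using NumPy and a tiny linear model (the model and function names are hypothetical, chosen just to illustrate the pattern, not taken from the linked post):

```python
import numpy as np


def train_step(w, b, x, y, lr=0.1):
    """One gradient-descent step on mean squared error for y ~ w*x + b."""
    pred = w * x + b
    grad_w = np.mean(2 * (pred - y) * x)
    grad_b = np.mean(2 * (pred - y))
    return w - lr * grad_w, b - lr * grad_b


def mse(w, b, x, y):
    """Mean squared error of the linear model on (x, y)."""
    return float(np.mean((w * x + b - y) ** 2))


def test_loss_decreases():
    # Smoke test: one training step should reduce the loss.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 3.0 * x + 1.0
    w, b = 0.0, 0.0
    before = mse(w, b, x, y)
    w, b = train_step(w, b, x, y)
    after = mse(w, b, x, y)
    assert after < before, "training step should reduce the loss"


def test_parameters_update():
    # Smoke test: trainable parameters should actually change.
    x = np.array([1.0, 2.0])
    y = np.array([2.0, 4.0])
    w2, b2 = train_step(0.0, 0.0, x, y)
    assert (w2, b2) != (0.0, 0.0), "trainable parameters should change"
```

The same two assertions translate directly to a TensorFlow model: run one step of the optimizer, then compare the loss and the variables before and after.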
Earlier this year, Julia Angwin and ProPublica released a fascinating report on the pricing of car insurance in primarily minority neighborhoods in the U.S. Recently, however, she was asked to censor slides based on the report for a conference, a request she refused. You can check out her explanation on Twitter here. The piece is an interesting showcase for data in news articles and brings up issues of fairness in an approachable style.