Here are five important AI stories from the week.
In their soon-to-be-released book, Competing in the Age of AI, Harvard University professors Marco Iansiti and Karim Lakhani discuss the four most critical ingredients for building an AI-first company:
1. Invest in a well-functioning data pipeline to clean and integrate your data.
2. Develop or modify algorithms for your tasks, drawing on supervised, unsupervised, and/or reinforcement learning as needed.
3. Build an experimentation platform so you can quickly test new machine learning algorithms and approaches on new data.
4. Deliver data and predictions via APIs, focusing on ease of use for your end consumers. Build this infrastructure on the cloud with standard off-the-shelf components so consumers can pick up your outputs and use them in their workflows easily and at scale.
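The four ingredients above can be sketched end to end as a toy pipeline. Everything here is illustrative: the data, the "algorithms," and the function names are invented for the example, and real systems would use proper ML libraries and a real API server rather than plain functions.

```python
import json
import statistics

# Ingredient 1: data pipeline -- integrate two sources and drop bad records.
def clean_and_integrate(source_a, source_b):
    merged = {**source_a, **source_b}
    return {k: v for k, v in merged.items() if v is not None}

# Ingredient 2: a trivial stand-in "algorithm" -- predict the historical mean.
def mean_predictor(history):
    return statistics.mean(history)

# Ingredient 3: experimentation -- score candidate algorithms against a
# known outcome and keep whichever has the lowest absolute error.
def run_experiment(history, actual, candidates):
    scores = {name: abs(fn(history) - actual) for name, fn in candidates.items()}
    return min(scores, key=scores.get)

# Ingredient 4: deliver the prediction as JSON, as an API response body would.
def predict_endpoint(history, predictor):
    return json.dumps({"prediction": predictor(history)})

history = [10, 12, 11, 13]
best = run_experiment(history, actual=12, candidates={
    "mean": mean_predictor,
    "last": lambda h: h[-1],
})
print(best)                                      # "mean" wins: error 0.5 vs 1.0
print(predict_endpoint(history, mean_predictor)) # {"prediction": 11.5}
```

The point of the sketch is the separation of concerns the authors describe: the data pipeline, the algorithm, the experiment harness, and the delivery layer are independent pieces that can each be swapped out.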
The agriculture giant John Deere now considers itself a software company. Its equipment serves as a fleet of IoT devices, collecting millions of data points per second; in fact, the company holds more data than Twitter does. This data feeds machine learning models that help farmers achieve better outcomes: more and better crops with less fuel, seed, and fertilizer. John Deere is also betting on autonomous vehicles for its farming machinery. For a company not traditionally thought of as a software business, John Deere employs as many software engineers as it does mechanical and electrical engineers.
Spotify logs one terabyte of user data per day and applies machine learning to it to deliver a personalized listening experience. Personalization shapes what you see on your home screen in the app, your curated playlists such as Discover Weekly, and which song plays next in an auto-play playlist. To do this, Spotify captures which songs you listen to, which you skip, when you listen and to what type of music, your location, and what listeners with similar tastes play. This read does a great job laying out just how much data Spotify collects on its users and how it turns that data into a better, more personal listening experience.
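One of the signals mentioned above, "what listeners with similar tastes play," can be illustrated with a minimal collaborative-filtering-style sketch. This is not Spotify's actual system: the listening data, similarity metric (Jaccard overlap), and function names are all invented for the example.

```python
def jaccard(a, b):
    """Overlap between two listening histories, from 0 (none) to 1 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def recommend(target, others, history):
    """Suggest songs the most similar listener played that the target has not."""
    most_similar = max(others, key=lambda u: jaccard(history[target], history[u]))
    return sorted(set(history[most_similar]) - set(history[target]))

history = {
    "you":   ["song_a", "song_b", "song_c"],
    "user1": ["song_a", "song_b", "song_d"],   # shares 2 of your 3 songs
    "user2": ["song_x", "song_y"],             # nothing in common
}
print(recommend("you", ["user1", "user2"], history))  # ['song_d']
```

Production recommenders combine many more signals (skips, time of day, location) and use learned embeddings rather than raw set overlap, but the core idea is the same: find listeners like you and surface what they play.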
Using the latest NLP techniques, Microsoft's bot, known as DeepCom (short for "deep commenter"), automatically reads an article, picks out its most important points, and then generates comments based on those points. The comments are intended to draw human readers into debate in the comment section, bringing the article more and more views. The bot's intentions are benign, but this is exactly the kind of technology many people worry will make the problem of fake news on the internet much worse.
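The first stage of that flow, picking out an article's most important points, can be sketched with a classic non-neural heuristic: score each sentence by the frequency of its words in the article. DeepCom itself uses neural reading and generation models; this word-frequency sketch is only a stand-in for the "find the key points" step, and the sample article is invented.

```python
import re
from collections import Counter

def key_sentence(article):
    """Return the sentence whose words are most frequent article-wide."""
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(re.findall(r"\w+", article.lower()))
    # A sentence full of the article's recurring words is likely a key point.
    return max(sentences,
               key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())))

article = ("Farmers adopt AI tools. "
           "AI tools help farmers plant AI-optimized crops. "
           "Weather varies.")
print(key_sentence(article))  # AI tools help farmers plant AI-optimized crops.
```

A comment generator would then condition on the extracted points rather than the full article, which is what lets a system like DeepCom produce on-topic remarks.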
Deep learning has led to an explosion of generated content in text, images, video, speech, music, art, and more. Some of it is genuinely beneficial, such as auto-completed text in emails and text messages, auto-captioned videos, and AI-created music and art. But the same technology is being used for nefarious purposes, including propagating fake news stories and videos, duping humans, and spreading misinformation widely. This article by Wired does a great job explaining just how difficult it has become to tell the fakes from the real content.
More Stories Worth Reading and Watching…