🌎 Community-curated list of tech conference talks, videos, slides and the like — from all around the world

🎤

"OK Google, ask Alexa to check if Siri can recommend Cortana a movie to watch with Bixby." Voice assistants are one of the biggest emerging technologies in 2018, and every company wants in. At NPR, the largest public radio producer in the United States, our interest in voice-based interfaces is obvious; they're a natural fit for our content, which has always taken an audio-first approach. But given that it's still such a new field, the development process is anything but straightforward: how do you even prototype a screenless interface? How does the Alexa platform differ from, say, Google Assistant, and can you develop one app for both? What's a Lambda, and do you have to use it? In this talk, we'll run through these confusing, high-level questions, and then go over some real-world code samples for a Node.js API that powers a voice-based UI. For demo purposes, we'll use Amazon Alexa, but we'll also discuss strategies we've used to develop an infrastructure that can support other voice assistants once they are further along. Finally, we'll discuss the mistakes we made, the things we wish we'd done differently, and the things we wish we'd known up front as we set out on our journey to build a next-generation voice UI framework in-house at NPR.
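To give a flavor of what the talk covers, here is a minimal sketch of the request/response shape a Lambda-backed Alexa skill works with: Alexa sends a JSON event describing the request type, and the handler returns a JSON response containing the speech to play. The handler name, wording, and logic below are illustrative assumptions, not NPR's actual code.

```javascript
// Hypothetical minimal handler for an Alexa skill, in the style of an
// AWS Lambda entry point. Alexa POSTs a JSON event; we branch on the
// request type and return a response object with plain-text speech.
function handleRequest(event) {
  const type = event.request.type;

  // Greet on launch; say goodbye for anything else (a real skill would
  // also handle IntentRequest and route to intent-specific logic).
  const speech = type === 'LaunchRequest'
    ? 'Welcome. What would you like to hear?'
    : 'Goodbye.';

  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: speech },
      // Keep the session open only when we just asked the user a question.
      shouldEndSession: type !== 'LaunchRequest'
    }
  };
}
```

Google Assistant expects a different JSON envelope, which is one reason a shared in-house framework, as described in the talk, typically normalizes requests into a common internal format before business logic runs.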
This page was generated from this YAML file. Found a typo, want to add some data? Just edit it on GitHub.