Microsoft's speech recognition research system recently achieved a milestone: it matches professional human transcribers in how accurately it transcribes natural conversations, as measured on standard government benchmark tasks. In this talk we discuss the significance of the result, give a high-level overview of the deep learning and other machine learning techniques used, and detail the software techniques behind them. A key enabling factor was CNTK, the Microsoft Cognitive Toolkit, which allowed us to train hundreds of acoustic models during development on a farm of GPU servers, with training parallelized across GPU hosts using the 1-bit distributed stochastic gradient descent algorithm. LSTM acoustic and language model training takes advantage of CNTK's optimizations for recurrent models, such as operation fusion, dynamic unrolling, and automatic packing and padding of variable-length sequences. We also give an overview of CNTK's functional API.
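The core idea of 1-bit SGD is that each worker transmits only the sign of each gradient element, while the quantization error is remembered locally and folded into the next step's gradient, so nothing is lost over time. The sketch below illustrates that error-feedback mechanism with NumPy; it is a simplified per-tensor version (the function name, the single shared scale, and the toy values are illustrative, not CNTK's actual implementation, which quantizes per column with separately estimated reconstruction values).

```python
import numpy as np

def one_bit_sgd_step(gradient, residual):
    """Quantize a gradient to one bit per element with error feedback.

    The worker sends only the sign of (gradient + residual), scaled by a
    single reconstruction value; the quantization error is carried over
    in `residual` so it is reapplied on the next step rather than lost.
    """
    corrected = gradient + residual            # fold in carried-over error
    scale = np.mean(np.abs(corrected))         # one shared scale per tensor (simplification)
    quantized = np.where(corrected >= 0, scale, -scale)
    new_residual = corrected - quantized       # error to feed back next step
    return quantized, new_residual

# Toy usage: over many steps the residual ensures the quantized
# updates sum to the true gradient signal.
g = np.array([0.5, -0.2, 0.1, -0.8])
q, r = one_bit_sgd_step(g, np.zeros_like(g))
# By construction q + r == g, so no gradient information is discarded.
```

Because only signs (plus one scale per tensor) cross the network, the communication cost per gradient exchange drops by roughly 32x versus full-precision floats, which is what makes data-parallel training across many GPU hosts pay off.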