ALEXANDRIA, Va., Oct. 21 -- United States Patent no. 12,443,848, issued on Oct. 14, was assigned to Microsoft Technology Licensing LLC (Redmond, Wash.).
"Neural network activation compression with narrow block floating-point" was invented by Daniel Lo (Bothell, Wash.), Amar Phanishayee (Seattle), Eric S. Chung (Woodinville, Wash.), Yiren Zhao (Cambridge, Great Britain) and Ritchie Zhao (Ithaca, N.Y.).
According to the abstract released by the U.S. Patent & Trademark Office: "Apparatus and methods for training a neural network accelerator using quantized precision data formats are disclosed, and in particular for storing activation values from a neural network in a compressed format for use during forward and backward propagation training...
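For readers unfamiliar with the terminology, the sketch below illustrates the general idea of block floating-point compression of activations: values are grouped into blocks that share a single exponent, and each value keeps only a narrow integer mantissa, reducing the memory needed to hold activations between the forward and backward passes. This is a minimal, hypothetical illustration of the general technique, not the patented method; the block size, mantissa width, and function names are assumptions made for this example.

```python
# Minimal sketch of block floating-point (BFP) compression of activations.
# Hypothetical illustration only; not the method claimed in the patent.
import numpy as np

def bfp_compress(activations, block_size=16, mantissa_bits=4):
    """Quantize a 1-D array of activations into a shared-exponent block format."""
    x = np.asarray(activations, dtype=np.float32)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    # One shared exponent per block, taken from the largest magnitude in the block.
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    exponents = np.floor(
        np.log2(np.maximum(max_mag, np.finfo(np.float32).tiny))
    ).astype(np.int32)
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    # Narrow signed-integer mantissas stored alongside the shared exponents.
    mantissas = np.clip(
        np.round(blocks / scale),
        -(2 ** (mantissa_bits - 1)),
        2 ** (mantissa_bits - 1) - 1,
    ).astype(np.int8)
    return mantissas, exponents, len(x)

def bfp_decompress(mantissas, exponents, length, mantissa_bits=4):
    """Reconstruct approximate activations from the compressed block format."""
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    return (mantissas.astype(np.float32) * scale).reshape(-1)[:length]

if __name__ == "__main__":
    acts = np.random.randn(100).astype(np.float32)
    m, e, n = bfp_compress(acts)
    approx = bfp_decompress(m, e, n)
    print("max abs reconstruction error:", np.max(np.abs(acts - approx)))
```

In this kind of scheme, the compressed activations occupy a few bits per value plus one exponent per block, rather than 32 bits per value, at the cost of some quantization error during the backward pass.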