News 

Flex Logix unveils neural inferencing engine for AI in data centers and on the edge

“Chip maker Flex Logix today introduced its new NMAX general purpose neural inferencing engine designed for AI deployment in a number of environments with popular machine learning frameworks like TensorFlow or Caffe,” writes Khari Johnson for venturebeat.com. Flex Logix raised a $5 million funding round in May 2017 to explore ways to build more flexible chips. The company was founded in 2014 by CEO Geoff Tate and cofounder Cheng Wang and is based in Mountain View, California. NMAX uses interconnect technology like that found in FPGA chips, but it is a general purpose neural inferencing engine programmed with TensorFlow and designed to run any kind of neural network.
 
Source: venturebeat.com
