
Welcome to Tartan AI

Optimized Digital System Technologies for Deep Learning


Our Story

We’re a talented group of engineers with groundbreaking IP that we hope will contribute towards a better tomorrow. 

Our work began as research at the University of Toronto and led to technologies that are now implemented in virtually all modern general-purpose processors, including memory dependence prediction and snoop filtering.

At Tartan AI, we work with semiconductor companies to provide them with building blocks that reduce data communication, footprint, and computation for hardware acceleration of deep learning.


Our Innovative Technology

Developing Deep Learning networks is hard enough. That is why, at Tartan AI, we help you boost performance and energy efficiency by reducing the work, storage, and communication needed to execute Machine Learning models in silicon.

Our technologies capitalize on the expected behavior of Deep Learning applications. These applications do not behave randomly but rather tend to exhibit specific idiosyncratic behaviors, particularly in the value access streams.

We target optimizations at the middleware and silicon hardware levels, with pre-built models that require no intervention from Machine Learning experts.
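One such idiosyncrasy is value sparsity: after a ReLU-style layer, a large fraction of activations are exactly zero or negligibly small. A minimal, illustrative sketch of profiling this (the function name and thresholds are our own, not part of any Tartan AI tool):

```python
# Hypothetical sketch: measuring how sparse a layer's activation stream is.
# The data here is synthetic; real profiling would tap actual model tensors.
import random

def sparsity(values, threshold=0.0):
    """Fraction of values whose magnitude is at or below the threshold."""
    return sum(1 for v in values if abs(v) <= threshold) / len(values)

# ReLU clamps negatives to zero, so roughly half of these values vanish.
raw = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
activations = [max(0.0, v) for v in raw]

print(f"exact zeros: {sparsity(activations):.1%}")
print(f"near-zero (<= 0.01): {sparsity(activations, threshold=0.01):.1%}")
```

Every zero or near-zero value found this way is a multiply-accumulate that hardware need not perform.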



Our IP portfolio covers technologies that reduce the cost of data storage and transfers, in addition to computations when performing multiply-accumulate operations.

Beyond sparsity and memory compression, additional benefits are enabled by:

  • Profiling tools for enhancing sparsity: identifying near-zero values that can be ignored

  • Profiling/training methods for reducing data width

  • Processing elements that support variable-width data types across the full spectrum of bit resolutions, from 1b to 16b in 1b or 2b steps

  • Bit-Skipping processing elements that skip all computations with zero bits

  • Specialized solutions for computational imaging taking advantage of spatial value locality
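The bit-skipping idea above can be sketched in software as a shift-and-add multiplier that performs work only for the nonzero bits of one operand, so effort scales with the number of set bits rather than the full bit width. This is an illustrative model of the principle, not Tartan AI's hardware design; the function name and step counter are our own:

```python
def bit_skipping_multiply(weight: int, activation: int) -> tuple[int, int]:
    """Shift-and-add multiply that skips every zero bit of `weight`.

    A dense shift-and-add multiplier takes one step per bit position.
    Here, positions holding a zero bit cost nothing, so the returned
    step count equals the number of 1 bits in `weight`.
    """
    product, steps, bit = 0, 0, 0
    w = weight
    while w:
        if w & 1:                      # effectual bit: add a shifted copy
            product += activation << bit
            steps += 1
        w >>= 1                        # zero bits fall through for free
        bit += 1
    return product, steps

# 9 = 0b1001 has two set bits, so only 2 steps instead of 4.
print(bit_skipping_multiply(9, 5))  # (45, 2)
```

For typical quantized weights, most bit positions are zero, which is why skipping them can save a large share of the serial multiplication work.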


Low-cost High-Performance Through Sparsity

TensorDash is a deep learning accelerator component designed to deliver state-of-the-art performance through clever software-hardware co-design. It achieves this by exploiting sparsity during both training and inference on a simple hardware design, using an innovative scheduler.
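The core principle being exploited, stripped of the scheduler and hardware details, is that a multiply-accumulate contributes nothing when either operand is zero. A minimal sketch of that principle (our own illustration, not TensorDash's actual mechanism):

```python
def sparse_mac(weights, activations):
    """Dot product that performs only effectual multiply-accumulates.

    Pairs where either operand is zero are skipped; returns the result
    and the number of MACs actually done, versus the len(weights) steps
    a dense engine would spend.
    """
    total, macs = 0.0, 0
    for w, a in zip(weights, activations):
        if w != 0 and a != 0:          # skip ineffectual pairs
            total += w * a
            macs += 1
    return total, macs

w = [0.5, 0.0, -1.0, 2.0]
a = [4.0, 3.0, 0.0, 1.0]
print(sparse_mac(w, a))  # (4.0, 2): half the MACs eliminated
```

In hardware, realizing this saving is the hard part: a scheduler must pack the surviving effectual pairs onto the multipliers without idle lanes, which is the role the co-designed software scheduler plays.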

Media & Press


Building the computing engines that will power the machine learning revolution

JULY 19, 2018

As machine learning algorithms — such as those that enable Siri and Alexa to recognize voice commands — grow more sophisticated, so must the hardware required to run them. Professor Andreas Moshovos (ECE) heads a national research network that aims to create the next generation of computing engines optimized for artificial intelligence.


Exploiting Ineffectual Computations in Convolutional Neural Networks

OCTOBER 10, 2018

Eliminating computations where either the activation or the weight is zero reduces the amount of computation and energy needed to execute Deep Learning networks. This functionality has thus far been delegated mostly to costly hardware solutions. Dr. Moshovos shares details of how it can instead be shared between a software scheduler and a lightweight hardware mechanism, with results that outperform state-of-the-art sparse network accelerators by more than 3x.

Vector Institute’s Friday Seminars Series Presents Andreas Moshovos

DECEMBER 3, 2018

Andreas Moshovos, Tartan AI co-founder, University of Toronto Professor and Vector Institute Faculty Affiliate, discusses value-based deep learning hardware acceleration.


"The best way to predict the future is to create it"

Abraham Lincoln


Get in Touch

