T4 delivers breakthrough multi-precision performance, spanning FP32 and FP16 for deep learning training and INT8, INT4, and binary precisions for inference. With 130 teraOPS (TOPS) of INT8 and 260 TOPS of INT4, T4 offers the world’s highest inference efficiency: up to 40X higher performance than CPUs at just 60 percent of the power consumption. Drawing only 75 watts (W), it’s the ideal solution for scale-out servers at the edge.
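
As a rough illustration of these efficiency figures, the sketch below simply divides the quoted peak TOPS ratings by the 75 W board power to get TOPS per watt; the numbers are taken from the text above, and the helper function name is ours, not part of any NVIDIA tool.

```python
# Illustrative arithmetic only: TOPS-per-watt derived from the figures quoted above.
T4_BOARD_POWER_W = 75   # T4 board power in watts, as stated above
INT8_TOPS = 130         # peak INT8 throughput (TOPS)
INT4_TOPS = 260         # peak INT4 throughput (TOPS)

def tops_per_watt(tops: float, watts: float) -> float:
    """Peak inference efficiency in TOPS/W (hypothetical helper for this sketch)."""
    return tops / watts

print(f"INT8: {tops_per_watt(INT8_TOPS, T4_BOARD_POWER_W):.2f} TOPS/W")  # ~1.73
print(f"INT4: {tops_per_watt(INT4_TOPS, T4_BOARD_POWER_W):.2f} TOPS/W")  # ~3.47
```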