Description
This talk will cover two new ways IBM has optimized generative AI inference with PyTorch: speculative decoding and Triton kernel development. In speculative decoding, a small draft model proposes several tokens ahead, which the full model then verifies in a single forward pass, reducing latency without sacrificing accuracy; IBM Research developed new speculative architectures and open sourced speculators for Llama3 models (a simplified sketch of the idea appears below). The talk will also discuss several Triton kernels that accelerate inference, one of which was contributed to vLLM for accelerating MoE models. Finally, it will share a glimpse of IBM's AI hardware work, including how the IBM Artificial Intelligence Unit (AIU) could integrate into the PyTorch stack.
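As a rough illustration of the draft-and-verify loop behind speculative decoding, the sketch below uses toy stand-in models with a greedy acceptance rule; `ToyLM`, the vocabulary size, and the draft length `K` are illustrative assumptions, not IBM's speculator architecture.

```python
# Minimal sketch of greedy speculative decoding with toy stand-in models.
# The models and sizes here are illustrative, not IBM's Llama3 speculators.
import torch
import torch.nn as nn

VOCAB, K = 100, 4  # toy vocabulary size; K = draft tokens proposed per step


class ToyLM(nn.Module):
    """Stand-in language model: embeds tokens and predicts next-token logits."""

    def __init__(self, dim):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, dim)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, ids):  # ids: (seq,) -> logits: (seq, VOCAB)
        return self.head(self.emb(ids))


@torch.no_grad()
def speculative_step(target, draft, ids):
    """Propose K tokens with the cheap draft model, then verify them all
    with a single forward pass of the expensive target model."""
    proposal = ids
    for _ in range(K):  # draft model runs K small autoregressive steps
        nxt = draft(proposal)[-1].argmax()
        proposal = torch.cat([proposal, nxt.view(1)])

    # One target pass scores every proposed position at once.
    tgt_ids = target(proposal)[:-1].argmax(-1)
    n = len(ids)
    drafted, verified = proposal[n:], tgt_ids[n - 1:]

    # Keep the longest prefix where draft and target agree (greedy acceptance),
    # then append the target's own token at the first disagreement.
    accept = 0
    while accept < K and drafted[accept] == verified[accept]:
        accept += 1
    keep = drafted[:accept]
    if accept < K:
        keep = torch.cat([keep, verified[accept].view(1)])
    return torch.cat([ids, keep])


torch.manual_seed(0)
target, draft = ToyLM(64), ToyLM(16)  # big target, cheap draft
ids = torch.tensor([1, 2, 3])
for _ in range(5):
    ids = speculative_step(target, draft, ids)
print(ids)
```

When the draft model agrees with the target often, each expensive target pass advances the sequence by several tokens instead of one, which is the source of the latency reduction.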
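To give a flavor of Triton kernel development, here is a minimal fused element-wise kernel in the SiLU-gating pattern common to SwiGLU-style MLP and MoE blocks. It is a self-contained sketch requiring a CUDA GPU, not the kernel contributed to vLLM; the `silu_mul` wrapper and block size are illustrative choices.

```python
# Hedged sketch of a fused Triton kernel: out = SiLU(x) * y in one pass,
# avoiding the extra memory round-trip of two separate element-wise ops.
import torch
import triton
import triton.language as tl


@triton.jit
def silu_mul_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    # Each program instance handles one BLOCK-sized slice of the tensors.
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    # Fused SiLU(x) * y, the gating pattern in SwiGLU-style feed-forward blocks.
    out = x * tl.sigmoid(x) * y
    tl.store(out_ptr + offs, out, mask=mask)


def silu_mul(x, y, block=1024):
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, block),)
    silu_mul_kernel[grid](x, y, out, n, BLOCK=block)
    return out


x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
assert torch.allclose(silu_mul(x, y), torch.nn.functional.silu(x) * y, atol=1e-5)
```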