
Visit AMD and Nexa AI at CES 2025: Transforming On-Device AI with Multimodal Capabilities

Dec 17, 2024

At CES 2025, AMD and Nexa AI will showcase a groundbreaking collaboration: the first-ever multimodal AI capabilities running locally on PCs with AMD processors. The demonstration will feature Nexa AI's vision-language and audio-language models running at unprecedented speeds, fully optimized for and accelerated by AMD's cutting-edge NPU, GPU, and CPU technologies.

The collaboration marks a significant advancement in on-device AI, delivering fast and efficient experiences with cutting-edge security features directly to PCs — redefining AI PC user experiences beyond text-based interactions.

Fast and Efficient Vision AI, Running Locally with AMD Technology

Processing visual data locally on PCs has traditionally meant compromising between speed and usability, limiting the potential of AI applications that require real-time visual understanding. Nexa AI solves this challenge with its lightweight vision-language models, optimized specifically for AMD hardware to deliver unprecedented performance on PCs.

The AMD NPU accelerates visual input processing, while its GPU handles complex language model computations—delivering breakthrough performance with cutting-edge data privacy features through local processing. From interpreting screen content and processing images to powering new creative workflows, users can now experience AI capabilities that were previously available only through cloud services.

Discover these breakthrough capabilities at the AMD CES booth and witness how real-time visual AI is transforming PC experiences.

Breaking Barriers: AMD and Nexa AI Make On-Device Voice AI a Reality

Voice and audio AI capabilities are becoming increasingly central to the future of computing. However, deploying these sophisticated AI models locally has remained beyond the reach of current on-device processing solutions.

Nexa AI's innovative local inference framework makes this possible for the first time. With AMD hardware acceleration, it enables voice and audio models to run entirely on device, delivering real-time performance without compromising on capabilities.

Nexa also introduces OmniAudio, built from the ground up with this local inference framework to deliver the industry's fastest on-device audio language model. Users can now experience instant voice-to-text conversion and responsive audio commands with complete data privacy—all processed locally on their PCs with AMD processors.

Visit the AMD CES booth to experience OmniAudio's capabilities firsthand and see how on-device voice AI is transforming user interaction.

Leading Innovation in On-Device Multimodal AI

Together, AMD and Nexa AI are bringing three industry-first capabilities to CES:

Local Multimodal AI, Now a Reality: For the first time, see sophisticated vision and audio AI models running directly on PCs with AMD processors—delivering the performance previously possible only through cloud services.

Powered by the Latest AMD Hardware: Our AI models are optimized for AMD's new NPU and GPU architectures, achieving breakthrough speeds without sacrificing capabilities.

Ready for Production: Move from development to deployment faster with our comprehensive toolkit—including model training, compression, and efficient local inference.

For device companies, this means bringing sophisticated AI features to market quickly while maintaining full data privacy and real-time performance.

Experience the Present and Future of On-Device Multimodal AI at CES 2025

Visit the AMD CES booth to witness multimodal AI running entirely on device. See our vision and audio models in action — delivering real-time performance with cutting-edge privacy features. Experience firsthand how AMD and Nexa AI are transforming what's possible on AI PCs.

AMD and Nexa AI—Accelerating the Future of On-Device AI.

Find Us at CES 2025

AMD Booth: Venetian Hotel, Titian Rooms 2302-2305

Nexa AI Booth: LVCC North Hall, Booth 9177
