Multiverse Computing is rolling out its CompactifAI platform to the global market, aiming to address rising costs and infrastructure demands associated with deploying large-scale artificial intelligence systems.
The company said CompactifAI is designed to compress large AI models significantly while maintaining performance, enabling faster processing and lower operational costs. The technology targets enterprises seeking to deploy advanced AI tools without the heavy computing requirements typically associated with large language models.
Chief Executive Officer Enrique Lizaso Olmos said the company’s focus is on making artificial intelligence more accessible and efficient. “We want to transform the way organisations deploy AI by reducing the resources required without sacrificing performance,” he said.
Multiverse added that compressed models can deliver faster inference speeds and reduced energy consumption, making them suitable for deployment across cloud, on-premise and edge environments.
The global rollout comes as companies across industries face mounting pressure to manage the cost and energy intensity of AI systems. Analysts say the launch reflects a broader shift in the sector toward optimisation technologies that improve efficiency rather than simply increasing model size.
Industry analysts note that while model compression is gaining traction, wider adoption will depend on independent validation and compatibility with existing proprietary systems.
Founded in 2019 and headquartered in Spain, Multiverse Computing has attracted significant investor backing as it seeks to expand internationally and position itself within the growing market for AI infrastructure and optimisation tools.
The company said CompactifAI will be made available to enterprise clients across multiple regions, with further partnerships expected as adoption scales.