Gcore Enhances Everywhere Inference With Flexible Deployment Options, Including Cloud, On-Premise, and Hybrid
16.1.2025 10:01:00 CET | Business Wire | Press release
Gcore, the global edge AI, cloud, network, and security solutions provider, today announced a major update to Everywhere Inference, formerly known as Inference at the Edge. The update gives businesses greater flexibility in AI inference deployments while delivering ultra-low-latency experiences for AI applications. Everywhere Inference now supports multiple deployment options, including on-premise, Gcore's cloud, public clouds, and hybrid combinations of these environments.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20250116487759/en/

Everywhere Inference leverages Gcore’s extensive global network of over 180 points of presence, enabling real-time processing, instant deployment, and seamless performance across the globe (Graphic: Business Wire)
Gcore developed this update to its inference solution to address changing customer needs. With AI inference workloads growing rapidly, Gcore aims to give businesses flexible deployment options tailored to their individual requirements. Everywhere Inference leverages Gcore’s extensive global network of over 180 points of presence, enabling real-time processing, instant deployment, and seamless performance across the globe. Businesses can now deploy AI inference workloads across diverse environments while ensuring ultra-low latency by processing workloads closer to end users. The solution also improves cost management and simplifies regulatory compliance across regions, offering a comprehensive and adaptable approach to modern AI challenges.
Seva Vayner, Product Director of Edge Cloud and Edge AI at Gcore, commented: “The update to Everywhere Inference marks a significant milestone in our commitment to enhancing the AI inference experience and addressing evolving customer needs. The flexibility and scalability of Everywhere Inference make it an ideal solution for businesses of all sizes, from startups to large enterprises.”
The new update enhances deployment flexibility by introducing smart routing, which automatically directs workloads to the nearest available compute resource. Additionally, Everywhere Inference now offers multi-tenancy for AI workloads, leveraging Gcore’s unique multi-tenancy capabilities to run multiple inference tasks simultaneously on existing infrastructure. This approach optimizes resource utilization for greater efficiency.
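Conceptually, latency-aware routing of this kind can be sketched in a few lines. The example below is a hypothetical illustration only, not Gcore's implementation or API: it selects the lowest-latency region that still has free capacity, with an optional allow-list standing in for data-residency constraints.

```python
# Illustrative sketch of latency-based smart routing (hypothetical;
# region names, latencies, and capacities are invented for the example).
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    latency_ms: float   # measured round-trip time from the client
    free_slots: int     # available inference capacity

def route(regions, allowed=None):
    """Pick the lowest-latency region with free capacity.

    `allowed` optionally restricts routing to specific regions,
    e.g. to satisfy data-residency rules.
    """
    candidates = [
        r for r in regions
        if r.free_slots > 0 and (allowed is None or r.name in allowed)
    ]
    if not candidates:
        raise RuntimeError("no region with free capacity")
    return min(candidates, key=lambda r: r.latency_ms)

regions = [
    Region("eu-west", 12.0, 4),
    Region("us-east", 85.0, 10),
    Region("ap-south", 140.0, 0),   # full: skipped by the router
]

print(route(regions).name)                       # lowest latency with capacity
print(route(regions, allowed={"us-east"}).name)  # residency-constrained pick
```

In a real deployment the latency and capacity figures would come from live telemetry rather than static values, but the selection logic is the same idea the press release describes.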
These new features address common challenges businesses face when deploying AI inference. Balancing multiple cloud providers and on-premises systems for operations and compliance can be complex. Smart routing lets users direct workloads to their preferred region, helping them stay compliant with local data regulations and industry standards. Data security is another key concern; with Gcore’s new flexible deployment options, businesses can isolate sensitive information on-premise, enhancing data protection.
Learn more at https://gcore.com/everywhere-inference.
About Gcore
Gcore is a global edge AI, cloud, network, and security solutions provider. Headquartered in Luxembourg, with a team of 600 operating from ten offices worldwide, Gcore provides solutions to global leaders in numerous industries. Gcore manages its global IT infrastructure across six continents, delivering some of the best network performance in Europe, Africa, and Latin America thanks to an average response time of 30 ms worldwide. Gcore’s network consists of 180 points of presence worldwide in reliable Tier IV and Tier III data centers, with a total network capacity exceeding 200 Tbps. Learn more at gcore.com and follow them on LinkedIn, Twitter, and Facebook.