KAYTUS
14.5.2024 09:01:36 CEST | Business Wire | Press release
KAYTUS, a leading IT infrastructure provider, has unveiled MotusAI, an AI development platform now available for trial worldwide. MotusAI is tailored for deep learning and AI development, integrating GPU and data resources alongside AI development environments to streamline computing resource allocation, task orchestration, and centralized management. It accelerates access to training data and manages AI model development workflows end to end. The platform sharply reduces resource investment, boosts development efficiency, raises cluster computing power utilization to over 70%, and significantly enhances large-scale training task scheduling performance.
Streamline AI Development for Cost-Effectiveness and Efficiency
The rapid expansion of enterprise AI business and AI model development brings forth challenges including low computing efficiency, complexity in model development, varied requirements for task orchestration across different scenarios, and unstable computing resources. Ensuring efficient, flexible, and stable operation of AI business is critical for enterprises to consistently derive business insights, generate revenue, and maintain competitiveness.
Optimize Resource Management for Maximum Computing Power
MotusAI efficiently allocates resources and workloads by implementing intelligent and flexible GPU scheduling. It caters to diverse AI workload demands for computing power by dynamically allocating GPU resources based on demand. With multi-dimensional and dynamic GPU resource allocation, including fine-grained GPU scheduling and support for Multi-Instance GPU (MIG), MotusAI effectively meets computing power requirements across various scenarios such as model development, debugging, and training.
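MotusAI's scheduling APIs are not public, but the idea behind fine-grained, MIG-style allocation can be sketched in a few lines. The snippet below (all names hypothetical, not the MotusAI API) models first-fit allocation of fractional GPU memory slices, so that a small debugging job does not occupy an entire card:

```python
from dataclasses import dataclass

@dataclass
class GPU:
    """A GPU whose memory can be carved into MIG-style slices."""
    name: str
    free_gb: int

def allocate(gpus, need_gb):
    """First-fit fractional allocation: place the task on the first GPU
    with enough free memory. Hypothetical sketch, not the MotusAI API."""
    for gpu in gpus:
        if gpu.free_gb >= need_gb:
            gpu.free_gb -= need_gb
            return gpu.name
    return None  # no capacity: caller may queue the task or scale out

pool = [GPU("gpu-0", 80), GPU("gpu-1", 80)]
print(allocate(pool, 10))  # gpu-0 (70 GB left on gpu-0)
print(allocate(pool, 75))  # gpu-1 (gpu-0 no longer fits 75 GB)
print(allocate(pool, 80))  # None: nothing has 80 GB free
```

A real MIG scheduler works with fixed hardware slice profiles rather than arbitrary gigabyte amounts, but the placement logic is analogous.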
Streamline Task Orchestration for Versatile Support of Various Scenarios
MotusAI has revolutionized cloud-native scheduling systems. Its scheduler surpasses the community version, dramatically improving scheduling performance for large-scale Pod-based tasks. MotusAI achieves rapid startup and environment readiness for hundreds of Pods, delivering a fivefold increase in throughput and a fivefold reduction in latency compared with the community scheduler. This ensures efficient scheduling and utilization of computing resources for large-scale training. Moreover, MotusAI enables dynamic scaling of AI workloads for both training and inference services, supporting burst tasks and fulfilling diverse scheduling needs across various scenarios.
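One reason large-scale training jobs stress a scheduler is that every Pod of a distributed job must start together to be useful. The hypothetical sketch below illustrates that gang-scheduling idea, admitting a job only when all of its Pods fit at once; it is an illustration of the concept, not MotusAI's scheduler:

```python
def gang_schedule(jobs, free_slots):
    """Admit a training job only when ALL of its pods fit at once,
    so a half-started job never wastes GPUs (hypothetical sketch)."""
    started, queued = [], []
    for job, pods_needed in jobs:
        if pods_needed <= free_slots:
            free_slots -= pods_needed  # start every pod of the job
            started.append(job)
        else:
            queued.append(job)         # wait for capacity; never start partially
    return started, queued, free_slots

started, queued, left = gang_schedule(
    [("llm-pretrain", 64), ("finetune", 8), ("eval", 4)], free_slots=70)
print(started, queued, left)  # ['llm-pretrain', 'eval'] ['finetune'] 2
```

Without the all-or-nothing rule, the 64-Pod job could grab 64 slots while a competing job holds the rest, deadlocking both.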
MotusAI empowers users to maximize computing resources, spanning from fine-grained division of single-card multiple instances to large-scale parallel computing across multiple machines and cards. By integrating features like computing power pooling, dynamic scaling, and GPU single-card reuse, MotusAI significantly enhances computing power utilization, achieving an average utilization rate of over 70%. Furthermore, it enhances computing efficiency by leveraging cluster topology awareness and optimizing network communication.
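Topology awareness can be illustrated with a simple placement rule: span as few nodes as possible, because intra-node interconnects are faster than the cluster network. The sketch below (hypothetical, not MotusAI code) picks nodes for a k-GPU job by trying the smallest node sets first:

```python
from itertools import combinations

def topology_aware_pick(gpus_by_node, k):
    """Pick nodes for a k-GPU job while spanning as few nodes as possible,
    since intra-node links are faster than the network. Hypothetical sketch."""
    nodes = list(gpus_by_node)
    # Try node sets of increasing size; the first set with enough GPUs wins.
    for n in range(1, len(nodes) + 1):
        for combo in combinations(nodes, n):
            if sum(gpus_by_node[node] for node in combo) >= k:
                return list(combo)
    return None  # the cluster cannot host the job at all

cluster = {"node-a": 4, "node-b": 8, "node-c": 4}
print(topology_aware_pick(cluster, 8))   # ['node-b']: fits on one node
print(topology_aware_pick(cluster, 12))  # ['node-a', 'node-b']
```

Production schedulers refine this with link bandwidth and switch-level topology, but minimizing the node span is the core of communication-aware placement.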
Data Transfer Acceleration for up to Threefold Efficiency Improvement
MotusAI excels at data transfer acceleration through innovative features such as local loading and computation of remote data, which eliminates delays caused by network I/O during computation. Using strategies such as "zero-copy" data transfer, multi-threaded retrieval, incremental data updates, and affinity scheduling, MotusAI significantly shortens data caching cycles. These enhancements greatly improve AI development and training efficiency, yielding a 2-3x boost in model efficiency during data training.
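The effect of multi-threaded retrieval is easy to demonstrate: when each chunk of remote data carries network latency, overlapping the fetches shrinks the total wait. The sketch below simulates that with a sleep standing in for network I/O (hypothetical, not MotusAI's implementation):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_chunk(chunk_id):
    """Stand-in for pulling one chunk of a remote dataset;
    the sleep models network latency (hypothetical sketch)."""
    time.sleep(0.05)
    return f"data-{chunk_id}"

chunks = range(8)

# Serial retrieval: latencies add up chunk by chunk.
t0 = time.perf_counter()
serial = [fetch_chunk(c) for c in chunks]
serial_s = time.perf_counter() - t0

# Multi-threaded retrieval: latencies overlap, so total time shrinks.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as executor:
    parallel = list(executor.map(fetch_chunk, chunks))
parallel_s = time.perf_counter() - t0

print(f"serial {serial_s:.2f}s, parallel {parallel_s:.2f}s")
```

Threads suffice here because the work is I/O-bound; zero-copy and incremental updates attack the remaining cost of moving and re-moving bytes once they arrive.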
Reliable and Automatically Fault-Tolerant Platform
MotusAI supports performance monitoring and alerts for computing resources, providing real-time status updates for core platform services. It employs sandbox isolation mechanisms for data with higher security requirements. In case of resource failures or abnormalities, MotusAI automatically initiates fault tolerance processes to ensure the quickest possible recovery of interrupted training tasks. This approach reduces fault handling time by over 90% on average.
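Fault-tolerant recovery of training tasks typically rests on periodic checkpointing: after a failure, the job resumes from its last checkpoint instead of step zero. The sketch below shows that pattern with hypothetical names; MotusAI's actual mechanism is not public:

```python
import json
import pathlib
import tempfile

CKPT = pathlib.Path(tempfile.gettempdir()) / "motusai_demo_ckpt.json"

def save_checkpoint(step, state):
    CKPT.write_text(json.dumps({"step": step, "state": state}))

def load_checkpoint():
    if CKPT.exists():
        return json.loads(CKPT.read_text())
    return {"step": 0, "state": 0}

def train(total_steps, fail_at=None):
    """Resume from the last checkpoint, so a failure costs at most
    one checkpoint interval of work (hypothetical sketch)."""
    ckpt = load_checkpoint()
    step, state = ckpt["step"], ckpt["state"]
    while step < total_steps:
        if fail_at is not None and step == fail_at:
            raise RuntimeError("simulated GPU failure")
        state += 1         # one unit of "training work"
        step += 1
        if step % 5 == 0:  # periodic checkpointing
            save_checkpoint(step, state)
    return state

CKPT.unlink(missing_ok=True)
try:
    train(20, fail_at=12)  # crashes at step 12; last checkpoint was step 10
except RuntimeError:
    pass
final = train(20)          # resumes from step 10, not from scratch
print(final)               # 20
```

The checkpoint interval is the knob: shorter intervals lose less work on failure but spend more time writing state.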
Comprehensive Management of AI Model Development in One Integrated Solution
MotusAI accelerates AI development and supports every stage of large model development. From managing data samples and software stacks to designing model architectures, debugging code, training models, tuning parameters, and conducting evaluation testing, MotusAI offers a complete platform. It integrates popular development frameworks like PyTorch and TensorFlow, along with distributed training frameworks such as Megatron and DeepSpeed.
Moreover, MotusAI enables comprehensive lifecycle management of AI inferencing services, including offline and online testing, A/B testing, rolling release, service orchestration, and service decommissioning. These features collectively enhance the business value of AI technology, fostering continuous business growth.
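A/B testing and rolling releases both reduce to weighted traffic splitting between model versions. The sketch below (hypothetical names, not the MotusAI API) routes requests by weight; a rolling release is then just a gradual shift of those weights:

```python
import random

def make_ab_router(weights, seed=None):
    """Route each request to a model version by traffic weight,
    the core of A/B testing (hypothetical sketch).
    weights={"model-v1": 90, "model-v2": 10} means a 90/10 split."""
    versions = list(weights)
    rng = random.Random(seed)  # seeded for a reproducible demo
    def route(_request):
        return rng.choices(versions, weights=[weights[v] for v in versions])[0]
    return route

route = make_ab_router({"model-v1": 90, "model-v2": 10}, seed=42)
hits = {"model-v1": 0, "model-v2": 0}
for i in range(1000):
    hits[route(f"req-{i}")] += 1
print(hits)  # roughly 900 vs. 100
```

In production the split is usually sticky per user rather than per request, so each user sees a consistent model version during the experiment.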
Additionally, MotusAI provides an integrated visual management and operation interface that covers computing, networking, storage, and application resources. Operational staff can comprehensively manage, monitor, and evaluate the overall platform operation status through a single interface.
Free Trial Available
MotusAI is now available worldwide for a trial period, offering free remote access for one month, along with testing, training, and support. Users can also opt for local deployment on their own devices and environment, with local deployment testing support from KAYTUS. For more information and to register for the trial, please visit KAYTUS.com.
About KAYTUS
KAYTUS is a premier provider of IT infrastructure products and solutions, delivering a suite of cutting-edge, open, and environmentally friendly infrastructure solutions for cloud, AI, edge computing, and other emerging technologies. With a customer-centric approach, KAYTUS adapts flexibly to user needs through its agile business model. Learn more at KAYTUS.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20240513665403/en/