ACCESS Newswire

Graid Technology Announces SupremeRAID SE Support for the NVIDIA GeForce RTX 5000 Series, Expanding High-Performance RAID for Workstations

13.2.2025 11:00:00 CET | ACCESS Newswire | Press release


With Support for the NVIDIA® GeForce RTX™ 5090, 5080, 5070 Ti, 5070, and 5060, SupremeRAID™ SE Continues to Push the Boundaries of High-Performance Workstation Storage

SANTA CLARA, CALIFORNIA / ACCESS Newswire / February 13, 2025 / Graid Technology, the leader in GPU-based RAID solutions, today announced that SupremeRAID™ SE now supports the entire NVIDIA® GeForce RTX™ 5000 series, delivering unparalleled RAID performance to high-end workstation users. With support for the GeForce RTX 5090, 5080, 5070 Ti, 5070, and 5060, professionals can leverage NVIDIA's most powerful GPUs to maximize NVMe SSD performance with exceptional efficiency and resilience.

SupremeRAID™ SE is Graid Technology's bring-your-own-GPU RAID solution, designed for workstation users who demand high-speed data access, reliability, and flexibility. With support for the GeForce RTX 5000 series, users can take advantage of NVIDIA's cutting-edge Blackwell architecture and ultra-fast GDDR7 memory to eliminate RAID bottlenecks and unlock the full potential of NVMe SSD storage.

"With SupremeRAID SE, we're continuing to redefine what's possible for workstation RAID," said Leander Yu, President and CEO at Graid Technology. "By expanding support to the entire NVIDIA GeForce RTX 5000 series, we're giving professionals even greater flexibility to harness their GPU's power for extreme data performance - whether for AI, content creation, or other demanding workloads."

SupremeRAID™ SE enables users to configure RAID across 4 to 8 NVMe SSDs, ensuring maximum throughput and resilience while freeing up CPU resources for other intensive tasks.
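SupremeRAID configuration is typically driven from the command line. The sketch below is illustrative only: the `graidctl` utility is Graid Technology's documented management tool, but the exact subcommands, device paths, and supported RAID levels shown here are assumptions that may differ by SupremeRAID SE version, so consult the product documentation before use.

```shell
# Illustrative sketch only -- subcommands, device paths, and RAID levels
# are assumptions and may differ in your SupremeRAID SE release.

# Register four NVMe SSDs as SupremeRAID physical drives.
sudo graidctl create physical_drive /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Group the registered drives (physical drive IDs 0-3) into a RAID 5 set.
sudo graidctl create drive_group raid5 0 1 2 3

# Expose the drive group as a virtual drive the OS sees as a block device.
sudo graidctl create virtual_drive 0

# Verify the array is online.
sudo graidctl list virtual_drive
```

Because the RAID calculations run on the GPU, the resulting virtual drive can be formatted and mounted like any other block device while the CPU remains free for application workloads.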

Key benefits of SupremeRAID™ SE with the NVIDIA GeForce RTX 5000 series include:

  • Unmatched AI and Compute Power - Leverages NVIDIA's Blackwell architecture for high-speed RAID processing.

  • Support for the Latest GPUs - Now compatible with the GeForce RTX 5090, 5080, 5070 Ti, 5070, and 5060.

  • Bring-Your-Own-GPU Flexibility - Users can maximize their investment by utilizing existing GPU hardware for RAID acceleration.

  • Optimized for Extreme Workloads - Ideal for AI, machine learning, 3D rendering, video editing, and gaming development.

  • Simplified, Software-Defined RAID - Easy configuration without the complexity of traditional hardware RAID solutions.

Join the SupremeRAID™ SE Beta Program

Graid Technology continues to pioneer next-generation RAID solutions, reinforcing its position as a market leader. Users interested in experiencing SupremeRAID™ SE with the NVIDIA GeForce RTX 5000 series can sign up for the beta program at beta.graidtech.com.

About Graid Technology

Graid Technology is transforming what's possible for high-performance computing. As the creator of SupremeRAID™, the world's first and only GPU-based RAID, the company is committed to helping enterprises unlock the full potential of their NVMe SSDs. Through global partnerships with industry leaders like NVIDIA, Supermicro, and Dell Technologies, Graid Technology delivers cutting-edge RAID solutions for AI, machine learning, media & entertainment, and high-performance computing. For more information, visit www.graidtech.com.

Contact Information

Andrea Eaken
Senior Director of Marketing, Americas & EMEA
andrea.eaken@graidtech.com
949-742-9928

SOURCE: Graid Technology Inc.


