As AI transforms enterprise operations across industries, significant challenges continue to surface around data storage: no matter how advanced the model, its performance hinges on the ability to access vast amounts of data quickly, securely, and reliably. Without the right data storage infrastructure, even the most powerful AI systems can be brought to a crawl by slow, fragmented, or inefficient data pipelines.
This issue took center stage on Day One of VB Transform, in a session focused on medical imaging AI innovations spearheaded by PEAK:AIO and Solidigm. Together, alongside the Medical Open Network for AI (MONAI) project, an open-source framework for developing and deploying medical imaging AI, they are redefining how data infrastructure supports real-time inference and training in hospitals, from improving diagnostics to powering advanced research and operational use cases.
Innovating storage at the edge of medical AI
Moderated by Michael Stewart, managing partner at M12 (Microsoft's venture fund), the session featured insights from Roger Cummings, CEO of PEAK:AIO, and Greg Matson, head of products and marketing at Solidigm. The conversation explored how next-generation, high-capacity storage architectures are opening new doors for medical AI by delivering the speed, security, and scalability needed to handle massive datasets in clinical environments.
Crucially, both companies have been deeply involved with MONAI since its early days. Developed in collaboration with King's College London and others, MONAI is purpose-built for developing and deploying AI models in medical imaging. The open-source framework's toolset, tailored to the unique demands of healthcare, includes libraries and tools for DICOM support, 3D image processing, and model pre-training, enabling researchers and clinicians to build high-performance models for tasks like tumor segmentation and organ classification.
A central design goal of MONAI was to support on-premises deployment, allowing hospitals to maintain full control over sensitive patient data while leveraging standard GPU servers for training and inference. This ties the framework's performance closely to the data infrastructure beneath it, requiring fast, scalable storage systems to fully support the demands of real-time medical AI. That is where Solidigm and PEAK:AIO come into play: Solidigm brings high-density flash storage to the table, while PEAK:AIO specializes in storage systems purpose-built for AI workloads.
"We were very fortunate to be working early on with King's College London and Professor Sebastien Orslund to develop MONAI," Cummings explained. "Working with Orslund, we developed the underlying infrastructure that allows researchers, doctors, and biologists in the life sciences to build on top of this framework very quickly."
Meeting dual storage demands in healthcare AI
Matson pointed out that he is seeing a clear bifurcation in storage hardware, with different solutions optimized for specific stages of the AI data pipeline. For use cases like MONAI and comparable edge AI deployments, as well as scenarios involving feeding training clusters, ultra-high-capacity solid-state storage plays a critical role, as these environments are often space- and power-constrained yet require local access to massive datasets.
For instance, MONAI was able to store more than two million full-body CT scans on a single node within a hospital's existing IT infrastructure. "Very space-constrained, power-constrained, and very high-capacity storage enabled some pretty remarkable outcomes," Matson said. This kind of efficiency is a game-changer for edge AI in healthcare, allowing institutions to run advanced AI models on-premises without compromising performance, scalability, or data security.
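To get a feel for what a single-node archive of that size implies, here is a back-of-envelope capacity sketch. The per-scan size is an illustrative assumption (real DICOM series vary widely with slice count and compression), not a figure from the session:

```python
# Back-of-envelope sizing for the two-million-scan single-node archive.
SCANS = 2_000_000        # scan count cited in the session
GB_PER_SCAN = 0.5        # assumed average compressed size per scan (hypothetical)

total_gb = SCANS * GB_PER_SCAN
total_pb = total_gb / 1_000_000   # 1 PB = 1,000,000 GB (decimal units)
print(f"~{total_pb:.1f} PB on one node")
```

Even with a conservative per-scan estimate, the dataset lands in the petabyte range, which is why ultra-high-capacity SSDs matter in a space- and power-constrained hospital rack.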
In contrast, workloads involving real-time inference and active model training place very different demands on the system. These tasks require storage solutions that can deliver exceptionally high input/output operations per second (IOPS) to keep up with the data throughput demanded by high-bandwidth memory (HBM) and ensure GPUs remain fully utilized. PEAK:AIO's software-defined storage layer, combined with Solidigm's high-performance solid-state drives (SSDs), addresses both ends of this spectrum, delivering the capacity, efficiency, and speed required across the entire AI pipeline.
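A rough estimate shows how quickly the IOPS requirement adds up when storage has to keep a node's GPUs fed. All figures below are illustrative assumptions, not vendor specifications:

```python
# Aggregate storage throughput needed to keep a training node's GPUs busy.
GPUS = 8                 # GPUs per node (assumed)
GB_S_PER_GPU = 2.0       # sustained read rate each GPU consumes (assumed)
READ_SIZE_KB = 512       # average read size for large image volumes (assumed)

total_gb_s = GPUS * GB_S_PER_GPU
# Convert GB/s to KB/s, then divide by the KB moved per operation.
iops = total_gb_s * 1_000_000 / READ_SIZE_KB
print(f"~{total_gb_s:.0f} GB/s sustained, ~{iops:,.0f} IOPS")
```

Smaller average reads push the IOPS figure up sharply for the same bandwidth, which is why inference-heavy, random-access workloads stress storage differently than sequential bulk loads.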
A software-defined layer for medical AI workloads at the edge
Cummings explained that PEAK:AIO's software-defined AI storage technology, when paired with Solidigm's high-performance SSDs, allows MONAI to read, write, and archive massive datasets at the speed medical AI demands. The combination accelerates model training and improves accuracy in medical imaging while operating within an open-source framework tailored to healthcare environments.
"We provide a software-defined layer that can be deployed on any commodity server, transforming it into a high-performance system for AI or HPC workloads," Cummings said. "In edge environments, we take that same capability and scale it down to a single node, bringing inference closer to where the data lives."
A key capability is how PEAK:AIO helps eliminate traditional memory bottlenecks by integrating memory more directly into the AI infrastructure. "We treat memory as part of the infrastructure itself, something that's often overlooked. Our solution scales not just storage, but also the memory workspace and the metadata associated with it," Cummings said. This makes a significant difference for customers who cannot afford, in either space or cost, to re-run large models repeatedly. By keeping memory-resident tokens alive and accessible, PEAK:AIO enables efficient, localized inference without the need for constant recomputation.
Bringing intelligence closer to the data
Cummings emphasized that enterprises will need to take a more strategic approach to managing AI workloads. "You can't be just a destination. You have to understand the workloads. We do some incredible technology with Solidigm and their infrastructure to be smarter about how that data is processed, starting with how to get performance out of a single node," Cummings explained. "So with inference being such a large push, we're seeing generalists becoming more specialized. And we're now taking work that we've done on a single node and pushing it closer to the data to be more efficient. We want more intelligent data, right? The only way to do that is to get closer to that data."
Some clear trends are emerging from large-scale AI deployments, particularly in newly built greenfield data centers. These facilities are designed with highly specialized hardware architectures that bring data as close as possible to the GPUs. To achieve this, they rely heavily on all-solid-state storage, especially ultra-high-capacity SSDs, designed to deliver petabyte-scale capacity with the speed and accessibility needed to keep GPUs continuously fed with data at high throughput.
"Now that same technology is essentially happening at a microcosm, at the edge, in the enterprise," Cummings explained. "So it's becoming essential for buyers of AI systems to learn how to select your hardware and system vendor, even to make sure that if you want to get the most performance out of your system, you're running on all solid-state. This allows you to bring massive amounts of data, like the MONAI example, 15,000,000-plus images, into a single system. This enables incredible processing power, right there in a small system at the edge."