

The FabreX™ AI Memory Fabric Platform by GigaIO is a next-generation, memory-centric fabric designed to revolutionize data center architectures in response to the exponential growth of data and the rapid adoption of advanced analytics and Artificial Intelligence (AI). By disaggregating traditional server components and enabling dynamic composition of resources, FabreX addresses the challenges posed by modern compute and storage clusters, offering unparalleled flexibility, performance, and efficiency.

Key Features and Functionality:
- Memory-Centric Fabric: FabreX connects memory, storage, and a wide array of accelerators—including GPUs, FPGAs, and custom ASICs—either directly or via configurations like NVMe-oF, delivering industry-leading low latency and high bandwidth.
- High Performance: With server-to-server system memory latency of less than 200 nanoseconds and bandwidth scaling up to 512Gb/s in its Gen4 implementation, FabreX ensures true PCIe performance across entire clusters.
- Unmatched Flexibility: The platform enables the composition of diverse resources, such as GPUs, DPUs, TPUs, FPGAs, SoCs, NVMe storage, and other I/O devices, across multiple servers and racks. It supports device-to-node, node-to-node, and device-to-device communication within the same high-performance PCIe memory fabric.
- Open Standards Compliance: FabreX is 100% PCI-SIG compliant, ensuring seamless integration of heterogeneous computing, storage, and accelerator components into a unified system-area cluster fabric.

Primary Value and User Solutions: FabreX addresses the critical need for scalable, flexible, and efficient data center architectures capable of handling the demands of AI, Machine Learning (ML), and Deep Learning (DL) applications. By disaggregating server components and enabling dynamic resource composition, it eliminates the bottlenecks and configuration challenges inherent in traditional interconnect systems. This approach not only enhances performance but also optimizes resource utilization, reducing total cost of ownership and allowing data centers to scale both up and out seamlessly.

The GigaIO™ Accelerator Pooling Appliance – MI300X is a high-performance PCIe accelerator appliance designed to enhance AI/ML training, high-performance computing (HPC), and data analytics applications. It fully supports PCIe Gen5, offering up to 2.048Tb/s total bandwidth for host server connections. Equipped with eight AMD Instinct MI300X 192GB 750W OAM GPUs, it provides a total of 1.54TB of high-bandwidth memory (HBM), enabling efficient processing of complex workloads.

Key Features and Functionality:
- High Capacity: Accommodates 8x AMD Instinct MI300X 750W accelerators, delivering substantial computational power.
- Exceptional Performance: Offers ultra-low latency with 512Gb/s uplinks, ensuring rapid data transfer and processing.
- Ample Memory: Provides a total of 1.54TB HBM (8x 192GB), facilitating efficient handling of large datasets.
- Simplified Deployment: Features RESTful APIs and a WebGUI for straightforward integration and management.

Primary Value and User Solutions: The GigaIO Accelerator Pooling Appliance – MI300X addresses the need for scalable and efficient computational resources in demanding environments. By enabling dynamic provisioning and scaling of PCIe devices, it allows users to allocate GPU resources as needed, optimizing utilization and reducing idle hardware. Its centralized management and continuous monitoring capabilities enhance reliability and facilitate rapid problem resolution, making it an ideal solution for AI/ML training, HPC, and data analytics acceleration.

The GigaIO SuperNODE™ is a groundbreaking single-node supercomputer designed to meet the demands of next-generation AI and accelerated computing workloads. By integrating up to 32 GPUs into a single server, SuperNODE eliminates the complexities associated with multi-server configurations, offering a streamlined and efficient solution for intensive computational tasks.

Key Features and Functionality:
- High-Density GPU Integration: Supports up to 32 AMD Instinct™ MI210 GPUs or 24 NVIDIA A100 GPUs within a single node, providing exceptional computational power.
- FabreX™ Memory Fabric: Utilizes GigaIO's FabreX, a high-performance PCIe memory fabric, to seamlessly connect all accelerators, ensuring low-latency and high-bandwidth communication.
- Energy Efficiency: Operates at approximately 7 kilowatts per 32-GPU deployment, reducing power consumption compared to traditional multi-server setups.
- Space Optimization: Achieves a 30% reduction in rack space requirements, allowing for higher computational density within existing data center infrastructures.
- Software Compatibility: Compatible with popular AI frameworks like PyTorch and TensorFlow, enabling users to run existing applications without modification.

Primary Value and Problem Solved: SuperNODE addresses the challenges of deploying and managing large-scale AI and high-performance computing infrastructures by consolidating extensive GPU resources into a single, efficient node. This consolidation reduces network overhead, minimizes latency, and simplifies system administration. By eliminating the need for complex multi-server configurations and associated networking equipment, SuperNODE offers a cost-effective, energy-efficient, and high-performance solution for organizations aiming to accelerate their AI and computational workloads.

The GigaIO™ Fabric Card is a high-performance network adapter designed to facilitate non-blocking, low-latency composable fabric computing at rack scale. It enables users in AI/ML, HPC, and data analytics to construct tailored systems that optimize performance while reducing total cost of ownership. By supporting a high-speed, cabled interface to cluster subsystems across GigaIO's AI fabric network, the Fabric Card allows for the creation of shared pools of vendor-agnostic PCIe devices, including GPUs, FPGAs, storage, and memory. This flexibility ensures seamless integration and management of disaggregated resource pools.

Key Features and Functionality:
- High Performance: Delivers up to 512Gb/s speed and 128GB/s bandwidth, ensuring rapid data transfer and processing capabilities.
- Low Latency: Achieves latency of less than 10 nanoseconds, facilitating real-time data access and communication.
- Versatile Connectivity: Equipped with dual QSFP-DD connections, supporting both copper and optical cabling options for flexible deployment.
- Compact Design: Features a low-profile form factor compatible with both full-height and half-height PCIe slots, allowing for easy integration into various server configurations.
- Dual Operational Modes: Offers Host Mode for installation into host or head-node servers and Target Mode for integration into Accelerator Pooling Appliances or resource boxes, enhancing adaptability across different system architectures.

Primary Value and User Solutions: The GigaIO Fabric Card addresses the growing need for scalable and flexible computing infrastructures by enabling the dynamic composition of hardware resources. It allows organizations to disaggregate and recompose their computing resources on demand, leading to improved resource utilization, enhanced system performance, and reduced operational costs. By supporting a wide range of PCIe-compliant devices, the Fabric Card empowers users to build customized, high-performance computing environments tailored to their specific workload requirements.

The GigaIO Fabric Switch is a high-performance networking solution designed to enable unified, software-driven composable infrastructure. It serves as the foundational component of GigaIO's AI fabric, facilitating true Software Defined Infrastructure (SDI) by dynamically assigning resources to meet the demands of data-intensive applications and varying workloads.

Key Features and Functionality:
- Ultra-High Performance: Delivers a switch capacity of 6.1Tb/s with industry-leading sub-130ns latency, ensuring rapid data transmission and minimal delay.
- Ultimate Flexibility: Supports seamless integration and on-demand composition of various accelerators, including GPUs, TPUs, FPGAs, and SoCs, allowing for adaptable and scalable system configurations.
- Unprecedented Scalability: Enables scaling up to dozens of accelerators, accommodating the growth of computing resources without compromising performance.
- Simplified Deployment: Utilizes DMTF open-source Redfish® RESTful APIs and a Command Line Interface (CLI) for straightforward configuration and management of computing clusters.

Primary Value and User Solutions: The GigaIO Fabric Switch addresses the challenges of modern data centers by providing a unified, low-latency network fabric that connects compute, storage, and accelerator resources using industry-standard PCI Express protocols. This architecture eliminates the need for traditional interconnects like InfiniBand or Ethernet within the rack, reducing complexity and latency. By enabling direct memory access across servers, it supports the industry's first in-memory network, facilitating efficient resource utilization and dynamic workload management. This solution is particularly beneficial for AI/ML training and inferencing clusters, high-performance computing environments, data analytics acceleration, composable infrastructure deployments, and scale-up computing architectures.
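
The Redfish® management interface mentioned above follows the DMTF pattern of a fixed service root (`/redfish/v1/`), with everything beneath it discovered from the JSON documents it returns. The sketch below shows how a client might build a composition request in the style of the standard Redfish Composition Service, where a system is composed by linking ResourceBlocks; the switch hostname and resource ids are hypothetical, and GigaIO's actual schema may differ.

```python
import json
from urllib.parse import urljoin

# DMTF Redfish defines a fixed, well-known service root; collections
# below it (Systems, CompositionService, ...) are discovered by
# following links rather than hard-coding paths.
SERVICE_ROOT = "/redfish/v1/"

def redfish_url(base, *segments):
    """Join a management base URL with Redfish path segments."""
    return urljoin(base, SERVICE_ROOT + "/".join(segments))

def compose_system_payload(block_ids):
    """Build a composition request in the style of the Redfish
    Composition Service: a new system is requested by linking the
    ResourceBlocks (e.g. pooled GPUs) it should be built from.
    The block ids here are illustrative, not real identifiers."""
    return {
        "Links": {
            "ResourceBlocks": [
                {"@odata.id": SERVICE_ROOT
                 + "CompositionService/ResourceBlocks/" + b}
                for b in block_ids
            ]
        }
    }

base = "https://fabric-switch.example"  # hypothetical switch address
target = redfish_url(base, "Systems")   # POST here to compose a system
body = json.dumps(compose_system_payload(["GPU-0", "GPU-1"]), indent=2)
print(target)
```

A real client would POST `body` to `target` over HTTPS with the switch's credentials; the CLI mentioned above reaches the same underlying resources.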

The GigaIO Accelerator Pooling Appliance is a high-performance, fully managed PCIe Gen5 expansion chassis designed to disaggregate and pool accelerator devices such as GPUs, FPGAs, IPUs, DPUs, and specialty AI chips. By enabling dynamic provisioning and scaling of these resources, it transforms static resource silos into elastic, shareable pools, enhancing data center agility and performance while reducing total cost of ownership.

Key Features and Functionality:
- Capacity: Supports up to 8 double-wide PCIe Gen5 full-height, full-length accelerator cards, each delivering up to 675W, accommodating even the most power-intensive devices.
- High Performance: Offers ultra-low latency with 512Gb/s uplinks and a total bandwidth of up to 2.048Tb/s dedicated to host server connections, ensuring rapid data transfer and processing.
- Simplified Deployment: Features RESTful APIs and a WebGUI for intuitive management, allowing administrators to provision, monitor, and reconfigure resources seamlessly.
- Enterprise-Grade Design: Equipped with redundant power supplies and fans, independent card power control, and continuous monitoring for faults, ensuring high availability and reliability in data center environments.

Primary Value and Problem Solved: The GigaIO Accelerator Pooling Appliance addresses the inefficiencies of static, server-bound accelerator resources by enabling a composable, disaggregated infrastructure. This approach allows data centers to dynamically allocate and scale accelerator resources based on workload demands, leading to improved resource utilization, enhanced performance, and significant cost savings. By breaking the constraints of traditional server architectures, it provides cloud-like flexibility and agility within on-premises environments.
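
The appliance's continuous fault monitoring is surfaced through its RESTful APIs. The appliance's actual schema is not documented here, so the JSON shape and field names below are assumptions purely for illustration; the sketch shows how a monitoring script might parse a per-card status report and flag cards that need operator attention.

```python
import json

# Hypothetical per-card status document, shaped like what a RESTful
# monitoring endpoint on the appliance might return. Field names are
# illustrative, not GigaIO's actual schema.
SAMPLE_STATUS = json.loads("""
{
  "Cards": [
    {"Slot": 1, "Model": "GPU",  "PowerState": "On",  "Health": "OK"},
    {"Slot": 2, "Model": "GPU",  "PowerState": "On",  "Health": "Warning"},
    {"Slot": 3, "Model": "FPGA", "PowerState": "Off", "Health": "OK"}
  ]
}
""")

def unhealthy_slots(status):
    """Return slot numbers of powered-on cards whose health is
    anything other than OK, i.e. the cases an operator should act on.
    Powered-off cards are skipped: independent card power control
    means an off card is not necessarily a fault."""
    return [c["Slot"] for c in status["Cards"]
            if c["PowerState"] == "On" and c["Health"] != "OK"]

print(unhealthy_slots(SAMPLE_STATUS))  # slot 2 needs attention
```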

FabreX™ Software by GigaIO is a Linux-based, resource-efficient solution designed to enhance dynamic composability in enterprise data centers and high-performance computing environments. Serving as the software engine for GigaIO's Software-Defined Hardware™ (SDH), FabreX enables seamless memory and device composition, allowing for flexible and efficient resource management.

Key Features and Functionality:
- Hybrid and Multi-Cloud Compatibility: FabreX operates effectively across hybrid and multi-cloud environments, providing consistent performance and integration.
- Software-Defined Hardware Flexibility: It brings the agility of software-defined hardware to on-premises infrastructure, enabling rapid adaptation to changing workload demands.
- Resource Optimization: By facilitating dynamic scaling of server resources, FabreX optimizes on-premises resource utilization, reducing underused hardware and associated costs.
- Seamless Scaling: The software supports both on-premises scaling and cloud bursting, ensuring smooth expansion and contraction of resources as needed.
- Accelerator Integration: FabreX allows for the creation of unique server configurations by composing bare metal devices such as GPUs, FPGAs, NVMe storage, and DRAM, even enabling combinations not typically available in cloud environments.
- Enhanced Communication: Utilizing GigaIO's PCIe switching infrastructure, FabreX enables native protocol communications between servers and devices, including server-to-server, server-to-device, and device-to-device interactions.
- Open Ecosystem Integration: The software integrates with existing management tools through DMTF open-source Redfish® APIs, facilitating fabric automation and orchestration without the need for additional management interfaces.
Primary Value and User Solutions: FabreX Software addresses the limitations of traditional server architectures by enabling dynamic composition of computing resources, thereby eliminating the constraints imposed by physical server configurations. This flexibility allows organizations to tailor their infrastructure to specific workload requirements, enhancing performance and efficiency. By democratizing access to specialized compute resources, FabreX reduces time-to-insight for data-intensive applications, making it an invaluable tool for enterprises seeking to optimize their data center operations and adapt swiftly to evolving computational demands.
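
At its core, dynamic composition is bookkeeping over a shared device pool: servers check devices out for a workload and return them afterwards. The following is a purely conceptual sketch of that allocate/release cycle, not FabreX's actual software interface; all names and device ids are invented for illustration.

```python
# Conceptual model of a composable device pool: devices move between
# a free pool and per-server assignments on demand. This illustrates
# the idea only; it is not GigaIO's actual API.
class DevicePool:
    def __init__(self, devices):
        self.free = set(devices)   # devices not bound to any server
        self.assigned = {}         # device -> server it is composed into

    def compose(self, server, kind, count):
        """Bind `count` free devices whose name starts with `kind`
        (e.g. "gpu") to the given server."""
        picked = [d for d in sorted(self.free)
                  if d.startswith(kind)][:count]
        if len(picked) < count:
            raise RuntimeError(f"only {len(picked)} free {kind} devices")
        for d in picked:
            self.free.remove(d)
            self.assigned[d] = server
        return picked

    def release(self, server):
        """Return all of a server's devices to the free pool."""
        for d, s in list(self.assigned.items()):
            if s == server:
                del self.assigned[d]
                self.free.add(d)

pool = DevicePool(["gpu-0", "gpu-1", "gpu-2", "nvme-0"])
pool.compose("node-a", "gpu", 2)   # node-a borrows two GPUs
pool.release("node-a")             # ...and hands them back when done
```

The point of the sketch is the lifecycle: because devices are only bound for the duration of a workload, the same physical accelerators can serve many servers over time instead of sitting idle in one chassis.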

Gryf is a portable AI supercomputer, co-designed by GigaIO and SourceCode, that delivers datacenter-class computing power directly to edge operations. Housed in a TSA-friendly, suitcase-sized form factor, Gryf enables real-time data processing and analytics in field environments, eliminating the need to transfer data to centralized datacenters. This innovation allows organizations to transform vast amounts of sensor data collected at the edge into actionable insights on-site.

Key Features and Functionality:
- Modular and Composable Design: Gryf offers a fully configurable solution through software or by interchanging compute, accelerator, storage, or network sleds, allowing dynamic reconfiguration to meet diverse mission requirements.
- Scalability: Up to five Gryf units can be seamlessly interconnected using GigaIO's FabreX™ AI memory fabric, enabling processing of petabyte-sized datasets and sharing of resources across connected units.
- High Compute Density: Each Gryf chassis can accommodate a mix of six compute, accelerator, storage, or network sleds, supporting high-performance GPUs and substantial storage capacity (up to a petabyte) to execute complex AI tasks directly at the operational site.
- Portability: Designed for true mobility, Gryf features a rugged, roll-on, TSA-friendly form factor that fits into an overhead bin, facilitating deployment at any location.

Primary Value and Problem Solved: Gryf addresses the challenge of processing and analyzing large volumes of data collected in field environments by providing a portable, high-performance computing solution. By enabling real-time analytics at the edge, Gryf eliminates delays associated with data transfer to centralized datacenters, enhances operational responsiveness, and supports critical applications in defense, sports analytics, media production, and energy sectors. Its modular design and scalability ensure adaptability to diverse and evolving mission requirements, offering a cost-effective and efficient solution for on-site data processing needs.

GigaIO's Enterprise-Class Software suite empowers organizations to fully leverage composable disaggregated infrastructure, enabling dynamic reconfiguration of data center resources to meet specific workload demands. This suite integrates seamlessly with existing enterprise tools, providing robust security features, user and resource access controls, and streamlined provisioning processes.

Key Features and Functionality:
- NVIDIA Bright Cluster Manager Integration: Natively integrates with NVIDIA Bright Cluster Manager, allowing users to disaggregate and reconfigure resources like GPUs directly within the management interface.
- DevOps Tool Compatibility: Supports integration with existing DevOps tools, facilitating resource management and automation within familiar environments.
- SuperCloud Composer Integration: Integrates with SuperCloud Composer, providing a unified dashboard for administering software-defined data centers and enabling seamless assignment of GPUs and high-performance storage.
- KVM Virtualization Support: Enables composable infrastructure in virtualized environments with KVM hosts and Linux virtual machines, enhancing flexibility and resource utilization.
- Slurm Job Scheduling Integration: Integrates with Slurm, the leading open-source job scheduler for Linux, allowing dynamic allocation of composable storage and GPUs to servers based on workflow demands.
- CloudShell Integration: Accelerates infrastructure provisioning by enabling teams to create self-service, on-demand replicas of full-stack environments for on-premises and hybrid cloud configurations.

Primary Value and Problem Solved: GigaIO's Enterprise-Class Software addresses the challenge of underutilized and inflexible data center resources by enabling organizations to dynamically compose and reconfigure their infrastructure. This flexibility leads to optimized resource utilization, reduced operational costs, and the agility to adapt to evolving workload requirements. By integrating with existing enterprise tools and providing robust security and management features, GigaIO ensures a seamless transition to a composable infrastructure model, empowering organizations to maximize the efficiency and performance of their data centers.

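
From a user's perspective, the Slurm integration means composable GPUs are requested like any other generic resource. As a hedged sketch, a small helper that renders a conventional GPU batch script might look like the following; `--gres=gpu:N` is standard Slurm syntax, while the partition name is a placeholder for whatever queue a site maps to composable resources.

```python
def gpu_batch_script(job_name, gpus, command, partition="composable"):
    """Render a minimal Slurm batch script requesting `gpus` GPUs via
    the standard --gres directive. The default partition name is a
    hypothetical stand-in, not a GigaIO-defined queue."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --gres=gpu:{gpus}",   # generic-resource GPU request
        command,
    ]
    return "\n".join(lines) + "\n"

script = gpu_batch_script("train-llm", 4, "srun python train.py")
print(script)
```

With a composability-aware scheduler, the GPUs backing such a request can be attached to the allocated server when the job starts and returned to the pool when it ends.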
GigaIO is a technology company specializing in high-performance computing solutions. The company focuses on providing innovative hardware and software that enable efficient data processing and management in data centers. GigaIO's flagship product, the FabreX fabric, is designed to enhance connectivity and scalability for demanding workloads, particularly in artificial intelligence and machine learning applications. The company aims to optimize resource utilization and improve performance for organizations requiring advanced computing capabilities.