NVIDIA PARTNER

As an NVIDIA Partner, XIILAB (씨이랩) delivers packaged hardware and software services.

Product Inquiries

Data Center Acceleration with GPUs

NVIDIA H100 Tensor Core GPU
Unprecedented performance, scalability, and security for every data center

SPECIFICATIONS
H100 SXM | H100 PCIe | H100 NVL¹
FP64 | 34 teraFLOPS | 26 teraFLOPS | 68 teraFLOPS
FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS
FP32 | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS
TF32 Tensor Core | 989 teraFLOPS² | 756 teraFLOPS² | 1,979 teraFLOPS²
BFLOAT16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS²
FP16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS²
FP8 Tensor Core | 3,958 teraFLOPS² | 3,026 teraFLOPS² | 7,916 teraFLOPS²
INT8 Tensor Core | 3,958 TOPS² | 3,026 TOPS² | 7,916 TOPS²
GPU memory | 80GB | 80GB | 188GB
GPU memory bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s³
Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG
Max thermal design power (TDP) | Up to 700W (configurable) | 300-350W (configurable) | 2x 350-400W (configurable)
Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each
Form factor | SXM | PCIe, dual-slot air-cooled | 2x PCIe, dual-slot air-cooled
Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s
Server options | NVIDIA HGX H100 Partner and NVIDIA-Certified Systems with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1-8 GPUs | Partner and NVIDIA-Certified Systems with 2-4 pairs
NVIDIA AI Enterprise | Add-on | Included | Included
1. Preliminary specifications. May be subject to change. Specifications shown for 2x H100 NVL PCIe cards paired with NVLink Bridge.
2. With sparsity.
3. Aggregate HBM bandwidth.
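Footnote 2 ("With sparsity") refers to NVIDIA's 2:4 structured-sparsity rates, which are double the dense Tensor Core throughput. A quick arithmetic check against the H100 SXM column above (an illustrative sketch, not official data; the dense figures are simply half the listed sparse ones):

```python
# Sanity check: rates marked "with sparsity" (2:4 structured sparsity)
# are 2x the dense Tensor Core rate. Values are the H100 SXM column.
sparse_tflops = {"TF32": 989, "BFLOAT16": 1979, "FP16": 1979, "FP8": 3958}

# Dense rate = sparse rate / 2
dense_tflops = {name: rate / 2 for name, rate in sparse_tflops.items()}

for name, dense in dense_tflops.items():
    print(f"{name}: {dense:.1f} dense / {sparse_tflops[name]} sparse TFLOPS")
```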
NVIDIA L40S
Unparalleled AI and graphics performance for the data center
SPECIFICATIONS
GPU Architecture NVIDIA Ada Lovelace architecture
GPU Memory 48GB GDDR6 with ECC
Memory Bandwidth 864GB/s
Interconnect Interface PCIe Gen4 x16: 64GB/s bidirectional
NVIDIA Ada Lovelace Architecture-Based CUDA® Cores 18,176
NVIDIA Third-Generation RT Cores 142
NVIDIA Fourth-Generation Tensor Cores 568
RT Core Performance TFLOPS 212
FP32 TFLOPS 91.6
TF32 Tensor Core TFLOPS 183 | 366*
BFLOAT16 Tensor Core TFLOPS 362.05 | 733*
FP16 Tensor Core TFLOPS 362.05 | 733*
FP8 Tensor Core TFLOPS 733 | 1,466*
Peak INT8 Tensor TOPS 733 | 1,466*
Peak INT4 Tensor TOPS 733 | 1,466*
Form Factor 4.4" (H) x 10.5" (L), dual slot
Display Ports 4x DisplayPort 1.4a
Max Power Consumption 350W
Power Connector 16-pin
* With sparsity.
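From the FP32 and memory-bandwidth figures above, one can sketch a back-of-the-envelope roofline estimate: the arithmetic intensity a kernel needs before the L40S becomes compute-bound rather than memory-bound (an illustrative calculation, not a benchmark):

```python
# Back-of-the-envelope roofline "ridge point" for the L40S,
# using the FP32 throughput and memory bandwidth from the table above.
fp32_tflops = 91.6    # peak FP32 throughput, TFLOPS
mem_bw_gbs = 864      # GDDR6 memory bandwidth, GB/s

# FLOP-per-byte ratio at which peak compute and peak bandwidth meet:
# kernels below this intensity are memory-bound, above it compute-bound.
ridge_flop_per_byte = fp32_tflops * 1e12 / (mem_bw_gbs * 1e9)
print(f"ridge point ~ {ridge_flop_per_byte:.0f} FLOP/byte")
```

In practice only very dense kernels (e.g. large matrix multiplies) exceed an intensity of roughly a hundred FLOP per byte, which is why bandwidth matters so much for inference workloads.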
Proven Enterprise AI, Worldwide

Expand the frontiers of innovation and optimization with NVIDIA DGX™ H100.
The latest iteration of NVIDIA's legendary DGX systems and the foundation of NVIDIA DGX SuperPOD™, DGX H100 is an AI powerhouse accelerated by the groundbreaking performance of the NVIDIA H100 Tensor Core GPU.

NVIDIA DGX H100
SPECIFICATIONS
GPUs 8x NVIDIA H100 Tensor Core GPUs
GPU memory 640GB total
Performance 32 petaFLOPS FP8
NVIDIA® NVSwitch™ 4x
System power usage 10.2kW max
CPU Dual Intel® Xeon® Platinum 8480C processors
112 cores total, 2.00 GHz (base), 3.80 GHz (max boost)
System memory 2TB
Networking 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI
> Up to 400Gb/s InfiniBand/Ethernet
2x dual-port QSFP112 NVIDIA ConnectX-7 VPI
> Up to 400Gb/s InfiniBand/Ethernet
Management Networking 10Gb/s onboard NIC with RJ45
100Gb/s Ethernet NIC
Host baseboard management controller (BMC) with RJ45
Storage OS: 2x 1.92TB NVMe M.2
Internal storage 8x 3.84TB NVMe U.2
Software NVIDIA AI Enterprise – Optimized AI software
NVIDIA Base Command – Orchestration, scheduling, and cluster management
DGX OS / Ubuntu / Red Hat Enterprise Linux / Rocky – Operating System
Support Comes with 3-year business-standard hardware and software support
System weight 287.6 lb (130.45 kg)
Packaged system weight 376 lb (170.45 kg)
System dimensions Height: 14.0in (356mm)
Width: 19.0in (482.2mm)
Length: 35.3in (897.1mm)
Operating temperature range 5–30°C (41–86°F)
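The headline 32 petaFLOPS FP8 figure follows directly from eight H100 SXM GPUs at their FP8 Tensor Core rate with sparsity (a quick sanity check against the H100 table above, not an official derivation):

```python
# DGX H100 system FP8 throughput = 8x the per-GPU FP8 Tensor Core
# rate (with sparsity) from the H100 SXM column above.
per_gpu_fp8_tflops = 3958   # H100 SXM, FP8 Tensor Core, with sparsity
num_gpus = 8

system_pflops = per_gpu_fp8_tflops * num_gpus / 1000
print(f"{system_pflops:.1f} petaFLOPS FP8")  # ~31.7, marketed as 32
```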

A Global Leader in AI Appliances

1. DGX + Astrago
A solution that maximizes the utilization of the GPU servers essential to AI workloads.
2. DGX + X-Labeller
A solution for processing and training on the video data AI requires, at ultra-high speed.
3. DGX + X-GEN
A solution that generates synthetic data for training when video data is scarce.
Product Purchase Inquiries
If you have any questions, please contact us. A representative will assist you.