Info

We are very proud to announce the world's first and only Nvidia GH200 Grace-Hopper Superchip and Nvidia GB200 Grace-Blackwell Superchip powered supercomputers in quiet, handy and beautiful desktop form factors (GB300 coming soon). Our benchmarks show that they are currently by far the fastest AI desktop PCs and also the fastest ARM desktop PCs in the world. If you are looking for a workstation for inferencing and fine-tuning of insanely huge LLMs, or for image and video generation and editing, we have got you covered.

Example use case 1: Inferencing DeepSeek R1 671B, Nvidia Nemotron Super 49B, Nvidia Nemotron Ultra 253B or QwQ-32B
  • DeepSeek R1 671B: https://huggingface.co/deepseek-ai/DeepSeek-R1
  • Nvidia Nemotron Super 49B and Ultra 253B: https://www.nvidia.com/en-us/ai-data-science/foundation-models/llama-nemotron/
  • QwQ-32B: https://qwenlm.github.io/blog/qwq-32b/
  • DeepSeek R1 671B, Nvidia Nemotron Super 49B, Nvidia Nemotron Ultra 253B and QwQ-32B are by far the most powerful open-source models and even beat OpenAI's o1/o3 and Claude 3.7 Sonnet on several benchmarks. Surprisingly, the small QwQ-32B performs (almost) as well as DeepSeek R1 671B.
  • DeepSeek R1 671B with 4-bit quantization needs at least 404GB of memory to swiftly run inference, Nvidia Nemotron Ultra 253B with 8-bit quantization at least 270GB, and QwQ-32B with 16-bit quantization at least 66GB. Luckily, GH200 has a minimum of 576GB, GB200 a minimum of 864GB and GB300 a minimum of 1056GB. With GH200, QwQ-32B in 16-bit can be run in VRAM only for ultra-high inference speed (approx. 100 tokens/s). With GB200 Blackwell as well as GB300 Blackwell Ultra, this is also possible for DeepSeek R1 671B, and you can expect significantly more than 200 tokens/s. If the model is bigger than VRAM, you can only expect approx. 10-20 tokens/s. Surprisingly, DeepSeek R1 671B in 4-bit runs on GH200 with 20 tokens/s (using Nvidia Dynamo). That is usable! 4-bit quantization seems to be the best trade-off between speed and accuracy, but it is natively supported only by GB200 Blackwell and GB300 Blackwell Ultra. We recommend using Nvidia Dynamo (https://www.nvidia.com/en-us/ai/dynamo/) for inferencing. A rough way to estimate these memory requirements is sketched below.
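
    As a rough rule of thumb, an LLM's memory footprint is its parameter count times the bytes per parameter, plus overhead for the KV cache and activations that grows with context length and batch size. A minimal sketch in Python, assuming an illustrative 20% overhead factor (the exact overhead depends on the inference engine, context length and model architecture):

        def estimate_memory_gb(params_billion: float, bits: int, overhead: float = 1.2) -> float:
            # Weights: 1 billion parameters at 8 bits take roughly 1 GB.
            weights_gb = params_billion * bits / 8
            # The overhead factor for KV cache and activations is an assumption, not a measurement.
            return weights_gb * overhead

        for name, params, bits in [("DeepSeek R1", 671, 4), ("Nemotron Ultra", 253, 8), ("QwQ-32B", 32, 16)]:
            print(f"{name}: ~{estimate_memory_gb(params, bits):.0f} GB")
        # DeepSeek R1: ~403 GB, in line with the 404GB figure quoted above; the other
        # quoted thresholds use engine-specific overheads and therefore differ somewhat.
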
Example use case 2: Fine-tuning DeepSeek R1 671B with PyTorch FSDP and Q-Lora
  • Tutorial: https://www.philschmid.de/fsdp-qlora-llama3
  • The ultimate guide to fine-tuning: https://arxiv.org/abs/2408.13296
  • Models need to be fine-tuned on your data to unlock their full potential. But efficiently fine-tuning bigger models like DeepSeek R1 671B remained a challenge until now. The blog post linked above walks you through fine-tuning with PyTorch FSDP and Q-Lora with the help of Hugging Face TRL, Transformers, peft and datasets.
  • Fine-tuning big models within a reasonable time requires special and beefy hardware! Luckily, GH200, GB200 and GB300 are ideal for this task. A minimal code sketch follows below.
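
    To make the workflow concrete, here is a minimal Q-Lora sketch along the lines of the tutorial above, using Hugging Face TRL, Transformers, peft and datasets. The model id, data file and hyperparameters are illustrative placeholders, and FSDP sharding across superchips is configured separately (e.g. via an accelerate launch config), so treat this as a starting point rather than a recipe:

        import torch
        from datasets import load_dataset
        from peft import LoraConfig
        from transformers import AutoModelForCausalLM, BitsAndBytesConfig
        from trl import SFTConfig, SFTTrainer

        model_id = "deepseek-ai/DeepSeek-R1"  # placeholder: pick a checkpoint that fits your setup

        # Q-Lora: load the frozen base model in 4-bit (NF4) precision.
        bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                        bnb_4bit_compute_dtype=torch.bfloat16)
        model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

        # Train small low-rank adapters instead of the full weights.
        peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                 task_type="CAUSAL_LM", target_modules="all-linear")

        # Your own data; SFTTrainer expects a "text" column by default.
        dataset = load_dataset("json", data_files="train.jsonl", split="train")

        trainer = SFTTrainer(
            model=model,
            train_dataset=dataset,
            peft_config=peft_config,
            args=SFTConfig(output_dir="checkpoints", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, bf16=True, num_train_epochs=1),
        )
        trainer.train()
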
Example use case 3: Generating videos with Mochi1, HunyuanVideo or Wan 2.1
  • Mochi1: https://github.com/genmoai/models
  • Tencent HunyuanVideo: https://aivideo.hunyuan.tencent.com/
  • Wan 2.1: https://github.com/Wan-Video/Wan2.1
  • Mochi1, HunyuanVideo and Wan 2.1 are democratizing efficient video production for all.
  • Generating videos requires special and beefy hardware! Mochi1 and HunyuanVideo need 80GB of VRAM. Luckily, GH200, GB200 and GB300 are ideal for this task: GH200 has a minimum of 96GB of VRAM, GB200 a minimum of 384GB and GB300 a minimum of 576GB. A minimal generation sketch follows below.
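
    For illustration, a minimal text-to-video sketch using the Mochi1 pipeline that ships with Hugging Face Diffusers; the prompt, frame count and step count are placeholder values:

        import torch
        from diffusers import MochiPipeline
        from diffusers.utils import export_to_video

        pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
        pipe.to("cuda")  # fits in HBM on GH200/GB200, so no CPU offloading is needed

        frames = pipe(
            "A slow cinematic pan across a misty mountain lake at dawn",  # placeholder prompt
            num_frames=85,
            num_inference_steps=50,
        ).frames[0]
        export_to_video(frames, "mochi.mp4", fps=30)
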
Example use case 4: Image generation with Flux.1 or SANA-Sprint
  • Flux: https://github.com/black-forest-labs/flux
  • SANA-Sprint: https://nvlabs.github.io/Sana/Sprint/
  • Flux.1 is the best image generator at the moment. And it's uncensored, too. SANA-Sprint is very fast and efficient.
  • FLUX requires approximately 33GB of VRAM for maximum inference speed; training the FLUX model needs more than 40GB of VRAM. SANA-Sprint requires up to 67GB of VRAM. Luckily, GH200 has a minimum of 96GB, GB200 a minimum of 384GB and GB300 a minimum of 576GB. A minimal generation sketch follows below.
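
    A minimal text-to-image sketch using the FLUX.1 [dev] pipeline from Hugging Face Diffusers; resolution, guidance and step count are illustrative values:

        import torch
        from diffusers import FluxPipeline

        pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev",
                                            torch_dtype=torch.bfloat16)
        pipe.to("cuda")  # ~33GB in use at full speed, comfortable within GH200's 96GB of HBM

        image = pipe(
            "A photorealistic tower workstation on a tidy desk, studio lighting",  # placeholder
            height=1024, width=1024,
            guidance_scale=3.5,
            num_inference_steps=50,
        ).images[0]
        image.save("flux.png")
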

Example use case 5: Image editing with OmniGen or Nvidia Add-it
  • OmniGen: https://github.com/VectorSpaceLab/OmniGen
  • Nvidia Add-it: https://research.nvidia.com/labs/par/addit/
  • OmniGen and Add-it are the most innovative and easiest-to-use image editors at the moment.
  • For maximum speed in high-resolution image generation and editing, beefier hardware than consumer graphics cards is needed. Luckily, GH200 and GB200 excel at this task.

Example use case 6: Video editing with AutoVFX or VACE
  • AutoVFX: https://haoyuhsu.github.io/autovfx-website/
  • VACE: https://ali-vilab.github.io/VACE-Page/
  • AutoVFX and VACE are the most innovative and easiest-to-use video editors at the moment.
  • For maximum speed in high-resolution video editing, beefier hardware than consumer graphics cards is needed. Luckily, GH200 and GB200 excel at this task.

    Why should you buy your own hardware?
  • "You'll own nothing and you'll be happy?" No!!! Never should you bow to Satan and rent stuff that you can own. In other areas, renting stuff that you can own is very uncool and uncommon. Or would you prefer to rent "your" car instead of owning it? Most people prefer to own their car, because it's much cheaper, it's an asset that has value and it makes the owner proud and happy. The same is true for compute infrastructure.
  • Even more so, because data and compute infrastructure are of great value and importance and are preferably kept on premises, not only for privacy reasons but also to keep control and mitigate risks. If somebody else has your data and your compute infrastructure you are in big trouble.
  • Speed, latency and ease-of-use are also much better when you have direct physical access to your stuff.
  • With respect to AI and specifically LLMs there is another very important aspect. The first thing big tech taught their closed-source LLMs was to be "politically correct" (lie) and implement guardrails, "safety" and censorship to such an extent that the usefulness of these LLMs is severely limited. Luckily, the open-source tools are out there to build and tune AI that is really intelligent and really useful. But first, you need your own hardware to run it on.

    What are the main benefits of GH200 Grace-Hopper and GB200 Grace-Blackwell?
  • They have enough memory to run and tune the biggest LLMs currently available.
  • Their performance in every regard is almost unreal (in select benchmarks up to 8520x faster than x86).
  • There are no alternative systems with the same amount of memory.
  • Optimized for memory-intensive AI and HPC performance.
  • Ideal for AI, especially inferencing and fine-tuning of LLMs.
  • Connect display and keyboard, and you are ready to go.
  • Ideal for HPC applications such as vector databases.
  • You can use it as a server or a desktop/workstation.
  • Easily customizable, upgradable and repairable.
  • Privacy and independence from cloud providers.
  • Cheaper and much faster than cloud providers.
  • Reliable and energy-efficient liquid cooling.
  • Flexibility and the possibility of offline use.
  • Gigantic amounts of coherent memory.
  • No special infrastructure is needed.
  • They are very power-efficient.
  • The lowest possible latency.
  • They are beautiful.
  • Easy to transport.
  • CUDA enabled.
  • Very quiet.
  • Run Linux.

    What is the difference compared to alternative systems with the same amount of memory?
    The main difference between GH200/GB200 and alternative systems is that with GH200/GB200, the GPU is connected to the CPU via 900 GB/s NVLink instead of the 128 GB/s PCIe gen5 used by traditional systems. Furthermore, multiple superchips can be connected via 900/1800 GB/s NVLink instead of the orders-of-magnitude slower network connections used by traditional systems. Since these links are the main bottlenecks, GH200/GB200's high-speed connections directly translate to much higher performance compared to traditional architectures. Also, multiple NVLink-connected GH200 or GB200 superchips act as a single giant CPU-GPU superchip with one giant coherent memory pool.
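
    To put those numbers in perspective, here is a back-of-the-envelope comparison in Python of how long one full pass over a 404GB 4-bit DeepSeek R1 checkpoint takes on each link (peak-bandwidth arithmetic; real transfers achieve somewhat less):

        MODEL_GB = 404  # 4-bit DeepSeek R1 671B, from the inference example above

        # Peak link bandwidths in GB/s (vendor figures, not measured throughput).
        for link, bandwidth_gbs in [("NVLink-C2C", 900), ("PCIe gen5 x16", 128)]:
            print(f"{link}: {MODEL_GB / bandwidth_gbs:.1f} s per full pass over the weights")
        # NVLink-C2C: 0.4 s vs. PCIe gen5 x16: 3.2 s -- a ~7x gap on every
        # memory-bound step, which is why the faster interconnect translates
        # directly into higher tokens/s whenever weights spill out of VRAM.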

    The alternative systems mentioned above also have one thing in common: they are not available in standard desktop form factors, like our GH200 and GB200 systems are.

    What is the difference compared to 19-inch server models?
  • Form factor: 19-inch servers have a very distinct form factor: they are of low height and very long, e.g. 438 x 87.5 x 900 mm (17.24" x 3.44" x 35.43"). This makes them rather unsuitable for placement anywhere other than in a 19-inch rack. Our GH200 and GB200 tower models have desktop form factors: 244 x 567 x 523 mm (9.6 x 22.3 x 20.6") or 255 x 565 x 530 mm (10.0 x 22.2 x 20.9") or 250 x 404 x 359 mm (9.8 x 15.9 x 14.1"). This makes it possible to place them almost anywhere.
  • Noise: 19-inch servers are extremely loud. Their average noise level is typically around 90 decibels, which is as loud as a subway train and exceeds the level considered safe for workers under long-term exposure. In contrast, our GH200 and GB200 tower models are very quiet (the factory setting is 25 decibels), and they can easily be adjusted to even lower or higher noise levels because each fan can be tuned individually and manually from 0 to 100% PWM duty cycle. Efficient cooling is ensured because our tower models have more fans than their 19-inch counterparts, and the low-revving Noctua fans have a much bigger diameter, moving approximately the same amount of air or even much more, depending on the specific configuration and PWM tuning.
  • Transportability: 19-inch servers are not meant to be transported and consequently lack every feature in this regard. In addition, their form factor makes them rather unsuitable for transport. Our GH200 and GB200 tower models, in contrast, can be moved very easily; our metal and mini cases even feature two carrying handles.
  • Infrastructure: 19-inch servers typically need quite some infrastructure before they can be deployed; at the very least, a 19-inch mounting rack is required. Our GH200 and GB200 models do not need any special infrastructure at all and can be deployed quickly and easily almost anywhere.
  • Latency: 19-inch servers are typically accessed via network. Because of this, there is always at least some latency. Our GH200 and GB200 tower models can be used as desktops/workstations. In this use case, latency is virtually non-existent.
  • Looks: 19-inch server models are not particularly aesthetically pleasing. In contrast, our available case options are in our humble opinion by far the most beautiful there are.

    Technical details of our GH200 workstations (base configuration)
  • Metal tower with two color choices: Titan grey and Champagne gold
  • Glass tower with four color choices: white, black, green or turquoise
  • Available air-cooled or liquid-cooled
  • 1x/2x Nvidia GH200 Grace Hopper Superchip
  • 1x/2x 72-core Nvidia Grace CPU
  • 1x/2x Nvidia Hopper H100 Tensor Core GPU (on request)
  • 1x/2x Nvidia Hopper H200 Tensor Core GPU
  • 480GB/960GB of LPDDR5X memory with error-correction code (ECC)
  • 96GB of HBM3 (available on request) or 144GB of HBM3e memory per superchip
  • 576GB (available on request), 624GB or 1248GB of total fast-access memory
  • NVLink-C2C: 900 GB/s of bandwidth
  • Programmable from 450W to 1000W TDP (CPU + GPU + memory)
  • 2x/4x High-efficiency 2000W PSU
  • 2x/4x PCIe gen4/5 M.2 slots on board
  • 2x/4x/8x PCIe gen4/5 drive (NVMe)
  • 2x/3x FHFL PCIe Gen5 x16
  • 1x/2x/4x USB 3.0/3.2 ports
  • 2x RJ45 10GbE ports
  • 1x RJ45 IPMI port
  • 1x Mini DisplayPort
  • Halogen-free LSZH power cables
  • Stainless steel bolts
  • Very quiet, the factory setting is 25 decibels (fan speed and thus noise level can be individually and manually configured from 0 to 100% PWM duty cycle)
  • 3 years manufacturer's warranty
  • 244 x 567 x 523 mm (9.6 x 22.3 x 20.6") or 255 x 565 x 530 mm (10.0 x 22.2 x 20.9") or 281 x 553 x 539 mm (11.1 x 21.8 x 21.2") or 250 x 404 x 359 mm (9.8 x 15.9 x 14.1")
  • 30 kg (66 lbs) or 33 kg (73 lbs) or 27 kg (60 lbs)
  • Optional components
  • Liquid cooling (highly recommended)
  • Bigger custom air-cooled heatsink
  • NIC Nvidia Bluefield-3
  • NIC Nvidia ConnectX-7
  • NIC Intel 100Gb
  • WLAN + Bluetooth card
  • Up to 2x 8TB M.2 SSD
  • Up to 8x 30TB E1.S SSD
  • Up to 10x 60TB 2.5" SSD
  • Storage controller
  • RAID controller
  • Additional USB ports
  • Multi-display graphics card
  • Sound card
  • Mouse
  • Keyboard
  • Consumer or industrial fans
  • Intrusion detection
  • OS preinstalled
  • Anything possible on request

  • Need something different? We are happy to build custom systems to your liking.

    What are the main differences between the offered GH200 models?
  • GH200: metal or glass tower, air-cooled, ships without M.2 (0 of 2) and 2.5" (0 of 2 or 4) drives, 1x USB (mini USB hub included: 3x USB)
  • GH200 Giga: metal or glass tower, air-cooled, ships without M.2 (0 of 2) and 2.5" (0 of 4) drives, 2x USB (mini USB hub included: 4x USB)
  • GH200 Dual: two NVLinked superchips, metal tower, air-cooled, comes with 1 of 2 M.2 and 0 of 8 E1.S drives, 2x USB (mini USB hub included: 4x USB)

  • Comparison chart: GH200 comparison chart.pdf

    Compute performance of one GH200
  • 67 teraFLOPS FP64
  • 1 petaFLOPS TF32
  • 2 petaFLOPS FP16
  • 4 petaFLOPS FP8
  • Benchmarks
  • https://github.com/mag-/gpu_benchmark
  • White paper: Nvidia GH200 Grace-Hopper white paper

    GB200 Blackwell

    The Nvidia GB200 Grace-Blackwell Superchip has truly amazing specs to show off. GPTshop.ai systems with Nvidia GB200 Grace-Blackwell are available now. GB300 Blackwell Ultra will be available in Q4 2025. Be one of the first in the world to get a GB200 or GB300 desktop/workstation. Order now!

    Compute performance of one GB200
  • 90 teraFLOPS FP64
  • 5 petaFLOPS TF32
  • 10 petaFLOPS FP16
  • 20 petaFLOPS FP8
  • 40 petaFLOPS FP4
  • White paper: Nvidia GB200 Grace-Blackwell white paper

    The Grace CPU

    The Nvidia Grace CPU delivers twice the performance per watt of conventional x86-64 platforms and is currently the world’s fastest ARM CPU. Grace-Grace superchip workstations are available on request.

    Benchmarks
  • https://www.phoronix.com/review/nvidia-gh200-gptshop-benchmark
  • https://www.phoronix.com/review/nvidia-gh200-amd-threadripper
  • https://www.phoronix.com/review/aarch64-64k-kernel-perf
  • https://www.phoronix.com/review/nvidia-gh200-compilers
  • https://www.phoronix.com/review/nvidia-grace-epyc-turin
  • White paper: Nvidia Grace-Grace white paper

    Download

    Here you can find various downloads concerning our GH200 and GB200 systems: operating systems, firmware, drivers, software, manuals, white papers, spec sheets and so on. Everything you need to run your system and more.

    White papers
  • Nvidia GH200 Grace-Hopper white paper
  • Nvidia GB200 Grace-Blackwell white paper
  • Developing for Nvidia superchips
  • The ultimate guide to fine-tuning
  • Diffusion LLMs

  • Comparison charts
  • Comparison chart GH200 Grace-Hopper systems: GH200 comparison chart.pdf

  • Spec sheets
  • GH200 624GB: Spec sheet GH200 624GB.pdf
  • GH200 Giga 624GB: Spec sheet GH200 Giga 624GB.pdf
  • GH200 Glass 624GB: Spec sheet GH200 Glass 624GB.pdf
  • GH200 Glass Giga 624GB: Spec sheet GH200 Glass Giga 624GB.pdf
  • GH200 Dual NVL2 1.2TB: Spec sheet GH200 Dual NVL2 1.2TB.pdf
  • AMD MI300A 512GB: Spec sheet Mi300A 512GB.pdf
  • GB200 Blackwell Dual NVL4 1.8TB: Spec sheet GB200 Blackwell Dual NVL4 1.8TB.pdf
  • GB300 Blackwell Ultra NVL2 1TB: Spec sheet GB300 Blackwell Ultra NVL2 1TB.pdf
  • GB300 Blackwell Ultra 784GB: Spec sheet GB300 Blackwell Ultra 784GB.pdf
  • GB300 Blackwell Ultra Glass 784GB: Spec sheet GB300 Blackwell Ultra Glass 784GB.pdf

  • Manuals
  • Official Nvidia GH200 Manual: https://docs.nvidia.com/grace/#grace-hopper
  • Official Nvidia Grace Manual: https://docs.nvidia.com/grace/#grace-cpu
  • Official Nvidia Grace getting started: https://docs.nvidia.com/grace/#getting-started-with-nvidia-grace
  • GH200 624GB: Manual GH200 624GB.pdf
  • GH200 Giga 624GB: Manual GH200 Giga 624GB.pdf
  • GH200 Glass 624GB: Manual GH200 Glass 624GB.pdf
  • GH200 Glass Giga 624GB: Manual GH200 Glass Giga 624GB.pdf
  • GH200 Dual NVL2 1.2TB: Manual GH200 Dual NVL2 1.2TB.pdf

  • Operating systems
  • Ubuntu Server for ARM: https://cdimage.ubuntu.com/releases/24.04/release/ubuntu-24.04.2-live-server-arm64+largemem.iso

  • Using the newest Nvidia 64k kernel is highly recommended: https://packages.ubuntu.com/search?keywords=linux-nvidia-64k-hwe

    Drivers
  • Nvidia GH200 drivers (also work for RTX 4000 Ada and RTX 6000 Ada): https://www.nvidia.com/Download/index.aspx?lang=en-us
    Select product type "data center", product series "HGX-Series" and operating system "Linux aarch64".
  • Aspeed drivers: https://aspeedtech.com/support_driver/
  • Nvidia Bluefield-3 drivers: https://developer.nvidia.com/networking/doca#downloads
  • Nvidia ConnectX-7 drivers: https://network.nvidia.com/products/ethernet-drivers/linux/mlnx_en/
  • Intel E810-CQDA2 drivers: https://www.intel.com/content/www/us/en/download/19630/intel-network-adapter-driver-for-e810-series-devices-under-linux.html?wapkw=E810-CQDA2
  • Graid SupremeRAID SR-1010 drivers: https://docs.graidtech.com/#linux-driver

  • Firmware
  • GH200 BMC: GH200 BMC.zip
  • GH200 BIOS: GH200 BIOS.zip
  • Nvidia Bluefield-3 firmware: https://network.nvidia.com/support/firmware/bluefield3/
  • Nvidia ConnectX-7 firmware: https://network.nvidia.com/support/firmware/connectx7/
  • Intel E810-CQDA2 firmware: https://www.intel.com/content/www/us/en/search.html?ws=idsa-default#q=E810-CQDA2

  • Top open source LLMs
  • Nvidia Llama Nemotron Super 49B and Ultra 253B: https://www.nvidia.com/en-us/ai-data-science/foundation-models/llama-nemotron/
  • DeepSeek R1 671B: https://huggingface.co/deepseek-ai/DeepSeek-R1
  • QwQ-32B: https://qwenlm.github.io/blog/qwq-32b/
  • Llama 3.1, 3.2 and 3.3: https://www.llama.com/
  • Mistral Large 2 123B: https://huggingface.co/mistralai/Mistral-Large-Instruct-2407
  • Pixtral Large 123B: https://mistral.ai/news/pixtral-large/
  • Llama-3.2 Vision 90B: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision
  • Llama-3.1 405B: https://huggingface.co/meta-llama/Llama-3.1-405B
  • DeepSeek V3 671B: https://huggingface.co/deepseek-ai/DeepSeek-V3
  • MiniMax-01 456B: https://www.minimaxi.com/en/news/minimax-01-series-2
  • Tülu 3 405B: https://allenai.org/tulu
  • Qwen2.5 VL 72B: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct
  • Aya Vision: https://cohere.com/blog/aya-vision
  • Gemma-3 27B: https://blog.google/technology/developers/gemma-3/
  • Mistral Small 3.1 24B: https://mistral.ai/news/mistral-small-3-1

  • Software
  • Nvidia Dynamo: https://www.nvidia.com/en-us/ai/dynamo/
  • Nvidia Github: https://github.com/NVIDIA
  • Nvidia CUDA: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=arm64-sbsa
  • Nvidia Container-toolkit: https://github.com/NVIDIA/nvidia-container-toolkit
  • Nvidia Tensorflow: https://github.com/NVIDIA/tensorflow
  • Nvidia Pytorch: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
  • Keras: https://keras.io/
  • Apache OpenNLP: https://opennlp.apache.org/
  • Nvidia NIM models: https://build.nvidia.com/explore/discover
  • Nvidia Triton inference server: https://www.nvidia.com/de-de/ai-data-science/products/triton-inference-server/
  • Nvidia NeMo Customizer: https://developer.nvidia.com/blog/fine-tune-and-align-llms-easily-with-nvidia-nemo-customizer/
  • Huggingface open source LLMs: https://huggingface.co/models
  • Huggingface text generation inference: https://github.com/huggingface/text-generation-inference
  • vLLM - inference and serving engine: https://github.com/vllm-project/vllm
  • vLLM docker image: https://hub.docker.com/r/drikster80/vllm-gh200-openai
  • Ollama - run LLMs locally: https://ollama.com/
  • Open WebUI: https://openwebui.com/
  • ComfyUI: https://www.comfy.org/
  • LM Studio: https://lmstudio.ai/
  • Llamafile: https://github.com/Mozilla-Ocho/llamafile
  • Fine-tune Llama 3 with PyTorch FSDP and Q-Lora: https://www.philschmid.de/fsdp-qlora-llama3/
  • Perplexica: https://github.com/ItzCrazyKns/Perplexica
  • Morphic: https://github.com/miurla/morphic
  • Open-Sora: https://github.com/hpcaitech/Open-Sora
  • Flux.1: https://github.com/black-forest-labs/flux
  • Storm: https://github.com/stanford-oval/storm
  • Stable Diffusion 3.5: https://huggingface.co/stabilityai/stable-diffusion-3.5-large
  • Genmo Mochi1: https://github.com/genmoai/models
  • Genmo Mochi1 (reduced VRAM): https://github.com/victorchall/genmoai-smol
  • Rhymes AI Allegro: https://github.com/rhymes-ai/Allegro
  • OmniGen: https://github.com/VectorSpaceLab/OmniGen
  • Segment anything: https://github.com/facebookresearch/segment-anything
  • AutoVFX: https://haoyuhsu.github.io/autovfx-website/
  • DimensionX: https://chenshuo20.github.io/DimensionX/
  • Nvidia Add-it: https://research.nvidia.com/labs/par/addit/
  • MagicQuill: https://magicquill.art/demo/
  • AnythingLLM: https://github.com/Mintplex-Labs/anything-llm
  • Pyramid-Flow: https://pyramid-flow.github.io/
  • LTX-Video: https://github.com/Lightricks/LTX-Video
  • CogVideoX: https://github.com/THUDM/CogVideo
  • OminiControl: https://github.com/Yuanshi9815/OminiControl
  • Samurai: https://yangchris11.github.io/samurai/
  • All Hands: https://www.all-hands.dev/
  • Tencent HunyuanVideo: https://aivideo.hunyuan.tencent.com/
  • Aider: https://aider.chat/
  • Unsloth: https://github.com/unslothai/unsloth
  • Axolotl: https://github.com/axolotl-ai-cloud/axolotl
  • Star: https://nju-pcalab.github.io/projects/STAR/
  • Sana: https://nvlabs.github.io/Sana/
  • RepVideo: https://vchitect.github.io/RepVid-Webpage/
  • UI-TARS: https://github.com/bytedance/UI-TARS
  • DiffuEraser: https://lixiaowen-xw.github.io/DiffuEraser-page/
  • Go-with-the-Flow: https://eyeline-research.github.io/Go-with-the-Flow/
  • 3DTrajMaster: https://fuxiao0719.github.io/projects/3dtrajmaster/
  • YuE: https://map-yue.github.io/
  • DynVFX: https://dynvfx.github.io/
  • ReasonerAgent: https://reasoner-agent.maitrix.org/
  • Open-source DeepResearch: https://huggingface.co/blog/open-deep-research
  • Deepscaler: https://github.com/agentica-project/deepscaler
  • InspireMusic: https://funaudiollm.github.io/inspiremusic/
  • FlashVideo: https://github.com/FoundationVision/FlashVideo
  • MatAnyone: https://pq-yang.github.io/projects/MatAnyone/
  • LocalAI: https://localai.io/
  • Stepvideo: https://huggingface.co/stepfun-ai/stepvideo-t2v
  • SkyReels: https://github.com/SkyworkAI/SkyReels-V1
  • OctoTools: https://octotools.github.io/
  • SynCD: https://www.cs.cmu.edu/~syncd-project/
  • Mobius: https://mobius-diffusion.github.io/
  • Wan2.1: https://github.com/Wan-Video/Wan2.1
  • TheoremExplainAgent: https://tiger-ai-lab.github.io/TheoremExplainAgent/
  • RIFLEx: https://riflex-video.github.io/
  • Browser use: https://browser-use.com/
  • HunyuanVideo-I2V: https://github.com/Tencent/HunyuanVideo-I2V
  • Spark-TTS: https://sparkaudio.github.io/spark-tts/
  • GEN3C: https://research.nvidia.com/labs/toronto-ai/GEN3C/
  • DiffRhythm: https://aslp-lab.github.io/DiffRhythm.github.io/
  • Babel: https://babel-llm.github.io/babel-llm/
  • Diffusion Self-Distillation: https://primecai.github.io/dsd/
  • OWL: https://github.com/camel-ai/owl
  • ANUS: https://github.com/nikmcfly/ANUS
  • Long Context Tuning for Video Generation: https://guoyww.github.io/projects/long-context-video/
  • Tight Inversion: https://tight-inversion.github.io/
  • VACE: https://ali-vilab.github.io/VACE-Page/
  • SANA-Sprint: https://nvlabs.github.io/Sana/Sprint/
  • Sesame Conversational Speech Model: https://github.com/SesameAILabs/csm
  • Search-R1: https://github.com/PeterGriffinJin/Search-R1
  • AI Scientist: https://github.com/SakanaAI/AI-Scientist

  • Benchmarking
  • GPU benchmark: https://github.com/mag-/gpu_benchmark
  • Ollama benchmark: https://llm.aidatatools.com/results-linux.php
  • Phoronix test suite: https://www.phoronix-test-suite.com/
  • MLCommons: https://mlcommons.org/benchmarks/
  • Artificial Analysis: https://artificialanalysis.ai/
  • Lmarena: https://lmarena.ai/
  • Livebench: https://livebench.ai/

    Contact

    Email: x@GPTshop.ai

    GPT LLC
    Fifth Floor, Zephyr House, 122 Mary Street
    George Town, P.O. Box 31493
    Grand Cayman KY1-1206
    Cayman Islands
    Company register number: HM-7509

    European branch:
    GPT LLC
    Sachsenhof 1
    96106 Ebern
    Germany

    We accept almost all currencies there are. Payment is possible via wire transfer or cash.

    Try

    Try before you buy. You can apply for remote testing of a GH200 or GB200 system. After approval, you will be given login credentials for remote access. If you want to come by to see a system for yourself and run some tests, that is also possible at any time.

    Currently available for testing:

  • GH200 624GB
  • GH200 Giga 624GB

  • Apply via email: x@GPTshop.ai