www.lesswrong.com/posts/fkKcftthj2fhSGDje/most-successful-entrepreneurship-is-un...
2 corrections found
Nvidia; the gap between using CPUs and using GPUs for AI & graphics processing is so large that there was basically no "alternative" to Nvidia for its current enterprise applications.
As of the post date (2025-12-22), there were multiple widely-used, enterprise-grade alternatives to Nvidia accelerators for AI workloads (e.g., Google Cloud TPUs, AWS Trainium, Intel Gaudi, AMD Instinct).
Full reasoning
The post presents this as a factual claim about the state of enterprise computing, but by late 2025 there were clearly established non-Nvidia accelerator options used for enterprise AI training and inference:
- Google Cloud TPUs are explicitly offered for training/serving large models (an enterprise application) and are not Nvidia GPUs.
- AWS Trainium is explicitly positioned by AWS as a purpose-built AI accelerator family for training and inference at scale.
- Intel Gaudi 2 is explicitly marketed (and benchmarked via MLPerf) as an alternative to Nvidia’s H100 class accelerators.
- AMD Instinct MI300X is an enterprise AI/HPC GPU accelerator line with large HBM memory capacity, used as a non-Nvidia option.
Even if Nvidia dominated many segments, the statement "basically no alternative" is contradicted by the existence and enterprise deployment of these alternatives.
4 sources
- Train a model using TPU v5e | Google Cloud Documentation
"TPU v5e is optimized to be a high value product for ... training, fine-tuning, and serving." (Google Cloud TPU docs)
- AI Accelerator - AWS Trainium - AWS
"AWS Trainium is a family of purpose-built AI accelerators ... designed to deliver ... for training and inference across a broad range of generative AI workloads."
- Intel Gaudi 2 Remains Only Benchmarked Alternative to NV H100 for GenAI Performance - Intel Newsroom
"The Intel Gaudi 2 AI accelerator remains the only benchmarked alternative to Nvidia H100 for generative AI (GenAI) performance."
- AMD Instinct MI300X — AMD Instinct Customer Acceptance Guide
"The AMD Instinct™ MI300X is a high-performance GPU accelerator designed for AI, HPC, and demanding workloads."
The kinds of work that GPUs are now applied to simply didn't get done before they existed.
Many major workloads now accelerated by GPUs (computer-graphics rendering and neural-network-based recognition) existed and were performed before modern GPUs; GPUs mostly made them faster/cheaper, not newly possible in absolute terms.
Full reasoning
This statement is unambiguously too strong as written ("simply didn't get done"). Concrete counterexamples exist well before modern GPUs:
- High-end computer graphics rendering for films predated modern GPU compute. Pixar's RenderMan was released in 1988 and was used to render major films (e.g., Toy Story in 1995). Pixar also notes that RenderMan's later RIS path-tracing renderer "only used CPUs" (i.e., significant rendering work was done without GPUs). That directly contradicts the claim that this kind of work "didn't get done".
- Core AI workloads (e.g., convolutional neural network–based document recognition) were done decades before GPUs became the standard platform for ML training. The well-known IEEE review paper by LeCun et al. dates from 1998 and surveys gradient-based learning and CNN approaches for document recognition, work that necessarily predated today's GPU-centric deep learning era.
GPUs massively expanded scale and reduced cost/time for these workloads, but the categorical claim that the work "simply didn't get done" before GPUs is contradicted by documented pre-GPU practice.
2 sources
- Pixar's RenderMan | The Evolution of RenderMan
RenderMan is described as released in 1988 and used for films like Toy Story (1995); Pixar also states RIS path-tracing "only used CPUs."
- Gradient Based Learning Applied to Document Recognition (LeCun, Bottou, Bengio, Haffner), Proceedings of the IEEE, 1998
Bibliographic entry showing CNN/gradient-based document-recognition work published in 1998—well before GPUs became the standard platform for deep learning training.