All corrections
LessWrong February 23, 2026 at 08:15 AM

www.lesswrong.com/posts/fkKcftthj2fhSGDje/most-successful-entrepreneurship-is-un...

2 corrections found

1
Claim
Nvidia; the gap between using CPUs and using GPUs for AI & graphics processing is so large that there was basically no "alternative" to Nvidia for its current enterprise applications.
Correction

As of the post date (2025-12-22), there were multiple widely-used, enterprise-grade alternatives to Nvidia accelerators for AI workloads (e.g., Google Cloud TPUs, AWS Trainium, Intel Gaudi, AMD Instinct).

Full reasoning

The post presents this as a factual claim about the state of enterprise computing, but by late 2025 there were clearly established non-Nvidia accelerator options used for enterprise AI training and inference:

  • Google Cloud TPUs are explicitly offered for training/serving large models (an enterprise application) and are not Nvidia GPUs.
  • AWS Trainium is explicitly positioned by AWS as a purpose-built AI accelerator family for training and inference at scale.
  • Intel Gaudi 2 is explicitly marketed (and benchmarked via MLPerf) as an alternative to Nvidia's H100-class accelerators.
  • AMD Instinct MI300X is an enterprise AI/HPC GPU accelerator line with large HBM capacity, used as a non-Nvidia option.

Even if Nvidia dominated many segments, the statement "basically no alternative" is contradicted by the existence and enterprise deployment of these alternatives.

4 sources
2
Claim
The kinds of work that GPUs are now applied to simply didn't get done before they existed.
Correction

Many major workloads now accelerated by GPUs (e.g., computer-graphics rendering and neural-network–based recognition) existed and were performed before modern GPUs; GPUs mostly made them faster and cheaper, not newly possible in absolute terms.

Full reasoning

This statement is unambiguously too strong as written ("simply didn't get done"). Concrete counterexamples exist well before modern GPUs:

  1. High-end computer-graphics rendering for films predated modern GPU compute. Pixar's RenderMan, released in 1988, was used to render major films (e.g., Toy Story in 1995), and Pixar notes that even RenderMan's later RIS path-tracing renderer initially "only used CPUs", i.e., significant rendering work was done without GPUs. That directly contradicts the claim that this kind of work "didn't get done".

  2. Core AI workloads (e.g., convolutional neural network–based document recognition) were done decades before GPUs became standard for ML training. LeCun et al.'s well-known 1998 IEEE paper surveys gradient-based learning and CNN approaches for document recognition, work that necessarily predates today's GPU-centric deep learning era.

GPUs massively expanded scale and reduced cost/time for these workloads, but the categorical claim that the work "simply didn't get done" before GPUs is contradicted by documented pre-GPU practice.
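The CNN point can be made concrete: the core operation of those 1990s document-recognition networks, 2D convolution, is ordinary multiply-accumulate arithmetic that runs fine on a CPU. A minimal pure-Python sketch (the `conv2d` helper below is illustrative, not from any library):

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of `image` with `kernel`,
    the building block of CPU-era CNNs such as LeNet (1998)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):          # slide the kernel vertically
        row = []
        for j in range(iw - kw + 1):      # slide the kernel horizontally
            acc = 0.0
            for di in range(kh):          # multiply-accumulate over the window
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A tiny edge-like input and a 2x2 averaging kernel.
image = [[0, 0, 1],
         [0, 0, 1],
         [0, 0, 1]]
kernel = [[0.25, 0.25],
          [0.25, 0.25]]
print(conv2d(image, kernel))  # → [[0.0, 0.5], [0.0, 0.5]]
```

Nothing here requires special hardware; GPUs later made exactly this arithmetic vastly faster at scale, which is the correction's point.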

2 sources
Model: OPENAI_GPT_5 Prompt: v1.5.0