All corrections
LessWrong February 28, 2026 at 07:55 PM

www.lesswrong.com/posts/8YnHuN55XJTDwGPMr/a-gentle-introduction-to-sparse-autoen...

3 corrections found

1
Claim
multi-layer perceptions (MLPs)
Correction

MLP stands for “multilayer perceptron,” not “multi-layer perceptions.”

Full reasoning

The post expands the acronym MLP as “multi-layer perceptions,” but in machine learning MLP is the standard abbreviation for multilayer perceptron, a type of feedforward neural network.

This isn’t a matter of terminology preference: the “P” in MLP refers to the perceptron, the historical name for a simple neural-network unit (and, by extension, for early networks built from such units), not to “perception(s).”
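To make the term concrete, here is a minimal sketch of a multilayer perceptron forward pass in pure Python. The architecture (two inputs, one sigmoid hidden layer, one linear output) and all weights are illustrative assumptions, not taken from the post:

```python
import math

def mlp_forward(x, weights_hidden, weights_out):
    """Forward pass of a minimal multilayer perceptron (MLP):
    one sigmoid hidden layer followed by a linear output.
    All weights here are illustrative, not from any trained model."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Each hidden unit is a perceptron-style weighted sum passed
    # through a nonlinearity.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in weights_hidden]
    # Output layer: linear combination of hidden activations.
    return sum(w * h for w, h in zip(weights_out, hidden))

# Two inputs, two hidden units, one output (arbitrary example weights).
y = mlp_forward([1.0, 2.0], [[0.5, -0.25], [0.1, 0.3]], [1.0, -1.0])
```

The point is only that an MLP is a stack of perceptron-like units with nonlinearities between layers, which is why "perceptron" is the correct expansion.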

1 source
2
Claim
The only way for n vectors in n-dimensional space to be linearly independent is if they’re all orthogonal.
Correction

In R^n, n vectors can be linearly independent without being orthogonal; orthogonality is sufficient but not necessary for linear independence.

Full reasoning

This claim is false in basic linear algebra.

  • Orthogonality implies linear independence (for nonzero vectors), but the reverse direction does not hold.
  • A simple counterexample in R^2: (1,0) and (1,1) are linearly independent (neither is a scalar multiple of the other) but not orthogonal (their dot product is 1).
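The counterexample above can be checked numerically; this is a small verification sketch in plain Python:

```python
# (1,0) and (1,1): linearly independent but not orthogonal.
u, v = (1.0, 0.0), (1.0, 1.0)

# Nonzero dot product => the vectors are not orthogonal.
dot = sum(a * b for a, b in zip(u, v))
assert dot == 1.0

# Linear independence in R^2: neither vector is a scalar multiple of
# the other, equivalently the 2x2 determinant is nonzero.
det = u[0] * v[1] - u[1] * v[0]
assert det != 0.0
```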

Jim Hefferon’s open linear algebra textbook explicitly states that “not every basis … has mutually orthogonal vectors” and gives an explicit basis for R^2 whose members “are not orthogonal.” Since a basis is (by definition) linearly independent, that directly contradicts the post’s statement that linear independence (for n vectors in n-dimensional space) requires orthogonality.

2 sources
3
Claim
Created in the 1990s, autoencoders were initially designed for dimensionality reduction and compression.
Correction

Autoencoder-style “auto-association” networks used for compression/dimensionality reduction were published in 1988, so they were not “created in the 1990s.”

Full reasoning

The post dates the creation of autoencoders to the 1990s, but published work describing autoencoder-style networks for compression/dimensionality reduction exists earlier.

A 1988 paper by Bourlard & Kamp discusses multilayer perceptrons in auto-association mode specifically as candidates for “data compression or dimensionality reduction.” This is the core autoencoder setup (learn an internal representation that reconstructs the input). That places autoencoder-style methods in the literature before the 1990s.
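The auto-association setup described above (compress the input to a lower-dimensional code, then reconstruct it) can be sketched in a few lines. The weights and dimensions here are illustrative assumptions, not trained values from any paper:

```python
# Minimal linear auto-association sketch: 2D input -> 1D code -> 2D
# reconstruction. The encoder/decoder weights are illustrative only.
def encode(x, w_enc):
    # Project the 2D input onto a 1D code (dimensionality reduction).
    return sum(wi * xi for wi, xi in zip(w_enc, x))

def decode(code, w_dec):
    # Map the 1D code back to a 2D reconstruction.
    return [wi * code for wi in w_dec]

x = [3.0, 4.0]
code = encode(x, [0.6, 0.8])       # scalar code
x_hat = decode(code, [0.6, 0.8])   # reconstructed 2D vector
error = sum((a - b) ** 2 for a, b in zip(x, x_hat))
```

Because this input happens to lie along the encoder's direction, the reconstruction error is essentially zero; in general the network learns weights that keep that error small over the data.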

A later historical review (“Autoencoders reloaded,” 2022) explicitly discusses the 1988 work and notes the terminology shift: “Auto-associative multilayer perceptrons are now called autoencoders.” Together, these sources directly contradict the claim that autoencoders were created in the 1990s.

2 sources
Model: OPENAI_GPT_5 Prompt: v1.6.0