---
title: "Isomap"
language: "en"
type: "Method"
summary: "Isomap (Machine Learning Method): method for DimensionReduction, DimensionReduce, FeatureSpacePlot and FeatureSpacePlot3D. Reduces the dimension of data using an isometric mapping. Isomap is a nonlinear, neighbor-based dimensionality reduction method that finds a low-dimensional embedding preserving geodesic distances, estimated by shortest paths in a nearest-neighbor graph; it is equivalent to classical multidimensional scaling on the matrix of graph distances."
keywords: 
- MachineLearning
- DimensionReduction
- Manifold Learning
- unsupervised learning
canonical_url: "https://reference.wolfram.com/language/ref/method/Isomap.html"
source: "Wolfram Language Documentation"
---
# "Isomap" (Machine Learning Method)

* Method for ``DimensionReduction``, ``DimensionReduce``, ``FeatureSpacePlot`` and ``FeatureSpacePlot3D``.

* Reduce the dimension of data using an isometric mapping.

---

## Details & Suboptions

* ``"Isomap"``, which stands for isometric mapping, is a nonlinear neighbor-based dimensionality reduction method. The method attempts to find a low-dimensional embedding of data via a transformation that preserves geodesic distances.

* ``"Isomap"`` is able to learn nonlinear manifolds; however, it gives poor results on boundaries, can fail if data has high-density variations, and is computationally expensive.

* The following plots show two-dimensional embeddings learned by the ``"Isomap"`` method applied to the benchmark datasets [Fisher's Irises](https://datarepository.wolframcloud.com/resources/Sample-Data-Fishers-Irises), [MNIST](https://datarepository.wolframcloud.com/resources/MNIST) and [FashionMNIST](https://datarepository.wolframcloud.com/resources/msollami_FashionMNIST):

[image]

* ``"Isomap"`` constructs a neighborhood graph on $N$ data points given their $k$ nearest neighbors. The geodesic distances between all pairs of points on the manifold are estimated by the shortest paths in the nearest-neighbor graph, $d_{ij}^G$, which results in the matrix of graph distances $D_G=\{d_{ij}^G\}$. The method attempts to find the embedding for which the Euclidean distance (shortest distance) is equal to the geodesic distance. The lower-dimensional embeddings $y_i$ are computed by minimizing the embedding cost:

$$\min_{y} \sum_{i,j=1}^{N} \left( \lVert y_i - y_j \rVert - d_{ij}^G \right)^2$$

* The method is equivalent to performing the classical ``"MultidimensionalScaling"`` on the matrix of graph distances $D_G$. Consequently, the Euclidean distances in the embedding space match the graph distances: $\lVert y_i - y_j \rVert \approx d_{ij}^G$.

* The following suboption can be given:

| Suboption | Default | Description |
|---|---|---|
| `"NeighborsNumber"` | [`Automatic`](https://reference.wolfram.com/language/ref/Automatic.en.md) | number of neighbors $k$ |
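The pipeline described above (nearest-neighbor graph, shortest-path geodesic distances, then classical multidimensional scaling) can be sketched compactly. The following is an illustrative NumPy/SciPy implementation of the general Isomap algorithm, not the Wolfram Language internals; function and variable names are our own:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import distance_matrix

def isomap(X, n_components=2, k=5):
    """Minimal Isomap sketch: k-NN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    D = distance_matrix(X, X)                   # pairwise Euclidean distances
    # Neighborhood graph: keep only each point's k nearest neighbors as edges
    G = np.full((n, n), np.inf)
    for i in range(n):
        nn = np.argsort(D[i])[1:k + 1]          # skip the point itself
        G[i, nn] = D[i, nn]
    # Geodesic distances d_ij^G: shortest paths through the neighbor graph
    DG = shortest_path(G, method="D", directed=False)
    # Classical multidimensional scaling on the graph-distance matrix D_G
    H = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    B = -0.5 * H @ (DG ** 2) @ H                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]  # largest eigenvalues first
    return V[:, order] * np.sqrt(np.maximum(w[order], 0))
```

On a one-dimensional manifold such as a planar spiral, the single recovered coordinate tracks arc length along the curve, which is what "preserving geodesic distances" means in practice. Note that if `k` is too small the neighbor graph can disconnect, leaving infinite graph distances; this sketch does not guard against that.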

---

## Examples (3)

### Basic Examples (1)

Create and visualize a "Swiss roll" dataset:

```wl
In[1]:=
SeedRandom[12]
data = Table[Insert[AngleVector[1 + {2, 8} * RandomReal[]], RandomReal[], 2], 1000];

In[2]:= ListPointPlot3D[data, BoxRatios -> {1, 1, 1}, PlotStyle -> Directive[PointSize -> Large]]

Out[2]= [image]
```

Train a nonlinear dimension reducer using ``"Isomap"`` on the dataset to map to two-dimensional space:

```wl
In[3]:= reducer = DimensionReduction[data, 2, Method -> "Isomap"]

Out[3]=
DimensionReducerFunction[ ... ]
```

Find and visualize the data coordinates in the reduced space:

```wl
In[4]:=
reducedpts = reducer[data];
ListPlot[Style[#, Hue[First[#] / 20]] & /@ reducedpts, AspectRatio -> 1 / 2, PlotStyle -> Directive[PointSize -> Small]]

Out[4]= [image]
```

Visualize the dataset in the original space, with each point colored according to its reduced variable:

```wl
In[5]:= ListPointPlot3D[Style[#, Hue[First[reducer[#]] / 20]] & /@ data, PlotStyle -> Directive[PointSize -> Large], BoxRatios -> {1, 1, 1}]

Out[5]= [image]
```

### Scope (1)

#### Dataset Visualization (1)

Load the Fisher *Iris* dataset from ``ExampleData``:

```wl
In[1]:= iris = ExampleData[{"MachineLearning", "FisherIris"}, "Data"];

In[2]:= RandomSample[iris, 5]

Out[2]= {{6.9, 3.1, 4.9, 1.5} -> "versicolor", {4.4, 3., 1.3, 0.2} -> "setosa", {6., 3.4, 4.5, 1.6} -> "versicolor", {5.7, 2.8, 4.1, 1.3} -> "versicolor", {6.3, 3.3, 4.7, 1.6} -> "versicolor"}
```

Generate a reducer function using ``"Isomap"`` with the features of each example:

```wl
In[3]:= diris = DimensionReduction[iris[[All, 1]], 2, Method -> "Isomap"]

Out[3]=
DimensionReducerFunction[ ... ]
```

Group the examples by their species:

```wl
In[4]:= byspecies = GroupBy[iris, Last -> First];
```

Reduce the dimension of the features:

```wl
In[5]:= byspecies = diris /@ byspecies;
```

Visualize the reduced dataset:

```wl
In[6]:= ListPlot[Values[byspecies], PlotLegends -> Keys[byspecies]]

Out[6]= [image]
```

### Options (1)

#### "NeighborsNumber" (1)

Generate a dataset of different head poses from 3D geometry data with random viewpoints:

```wl
In[1]:=
SeedRandom[123]
cc = RandomPoint[Circle[{0, 0}, 0.8, {Pi, 2Pi}], 50];
pp = Flatten[Table[{cc[[i, 1]], cc[[i, 2]], j}, {i, 1, Length@cc}, {j, -0.8, 0.8, 0.2}], 1];
vp = RandomSample[pp, 60];
obj = ExampleData[{"Geometry3D", "Beethoven"}];
poses = Table[Show[obj, ViewPoint -> vp[[i]]], {i, 1, Length@vp}];
imgs = Table[Rasterize[poses[[i]], ImageResolution -> 64, ImageSize -> 50, Background -> None], {i, 1, Length@poses}];
```

Visualize different head poses:

```wl
In[2]:= RandomSample[imgs, 10]

Out[2]= {[image], [image], [image], [image], [image], [image], [image], [image], [image], [image]}
```

Reduce the dataset to a two-dimensional representation by specifying ``"NeighborsNumber"``, the number of neighbors used to build the neighborhood graph for the isometric mapping:

```wl
In[3]:= red = DimensionReduce[imgs, 2, Method -> {"Isomap", "NeighborsNumber" -> 10}];
```

Visualize the original images in the reduced space, in which the up-down and front-side poses are disentangled:

```wl
In[4]:= Graphics[MapThread[Inset[#1, #2, {0, 0}, 15, Background -> None]&, {imgs, red}], ImageSize -> 300, ImagePadding -> 15, Frame -> False, AspectRatio -> 1]

Out[4]= [image]
```

## See Also

* [`DimensionReduce`](https://reference.wolfram.com/language/ref/DimensionReduce.en.md)
* [`DimensionReduction`](https://reference.wolfram.com/language/ref/DimensionReduction.en.md)
* [`FeatureSpacePlot`](https://reference.wolfram.com/language/ref/FeatureSpacePlot.en.md)
* [`FeatureSpacePlot3D`](https://reference.wolfram.com/language/ref/FeatureSpacePlot3D.en.md)
* [`FeatureExtraction`](https://reference.wolfram.com/language/ref/FeatureExtraction.en.md)
* [`ClusterClassify`](https://reference.wolfram.com/language/ref/ClusterClassify.en.md)
* [`Classify`](https://reference.wolfram.com/language/ref/Classify.en.md)
* [`Predict`](https://reference.wolfram.com/language/ref/Predict.en.md)
* [`NetTrain`](https://reference.wolfram.com/language/ref/NetTrain.en.md)
* [`MultidimensionalScaling`](https://reference.wolfram.com/language/ref/method/MultidimensionalScaling.en.md)
* [`LLE`](https://reference.wolfram.com/language/ref/method/LLE.en.md)
* [`Hadamard`](https://reference.wolfram.com/language/ref/method/Hadamard.en.md)
* [`LatentSemanticAnalysis`](https://reference.wolfram.com/language/ref/method/LatentSemanticAnalysis.en.md)
* [`Linear`](https://reference.wolfram.com/language/ref/method/Linear.en.md)
* [`PrincipalComponentsAnalysis`](https://reference.wolfram.com/language/ref/method/PrincipalComponentsAnalysis.en.md)
* [`TSNE`](https://reference.wolfram.com/language/ref/method/TSNE.en.md)
* [`UMAP`](https://reference.wolfram.com/language/ref/method/UMAP.en.md)

## Related Links

* [An Elementary Introduction to the Wolfram Language: Machine Learning](https://www.wolfram.com/language/elementary-introduction/22-machine-learning.html)

## History

* [Introduced in 2020 (12.2)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn122.en.md)