## Simple Geometry Initiation

This is an easy-to-follow intro to hyperbolic geometry.

If you know nothing about it, this is for you! 🙂

The only parts which may require basic differential geometry are the explanations involving notions such as tangent spaces, metric tensors, geodesics, or the exponential map.

### What’s a geometry?

It is commonly defined by a distance function.

Consider a set $\mathcal{X}$. A distance function $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ maps two points $x, y$ to a non-negative real number $d(x, y)$, such that it satisfies the following properties:

1.  Symmetry: $d(x, y) = d(y, x)$,
2.  Separation: $d(x, y) = 0$ if and only if $x = y$,
3. Triangle inequality: $d(x, z) \leq d(x, y) + d(y, z)$,

for all $x, y, z \in \mathcal{X}$.

There are many examples:

1. Euclidean geometry: $\mathcal{X} = \mathbb{R}^n$ and $d(x, y) = \Vert x - y \Vert_2$.
2. Connected graph geometry: $\mathcal{X}$ is the set of vertices and $d(x, y)$ is the length of a shortest path between $x$ and $y$, with potentially weighted edges.
3. Hamming geometry: $\mathcal{X}$ is a set of strings of equal length and $d(x, y)$ is the number of positions at which the corresponding symbols are different.
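These three distance functions are easy to implement; here is a minimal Python sketch (the helper names are illustrative, not from any particular library):

```python
import math
from collections import deque

def euclidean_distance(x, y):
    """d(x, y) = ||x - y||_2 on R^n."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def graph_distance(adj, u, v):
    """Shortest-path length between vertices u and v of an unweighted graph,
    given as an adjacency dict, via breadth-first search."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        w = queue.popleft()
        if w == v:
            return dist[w]
        for nb in adj[w]:
            if nb not in dist:
                dist[nb] = dist[w] + 1
                queue.append(nb)
    return float("inf")  # v unreachable from u

def hamming_distance(s, t):
    """Number of positions at which two equal-length strings differ."""
    assert len(s) == len(t)
    return sum(a != b for a, b in zip(s, t))

print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))            # 5.0
print(graph_distance({"a": ["b"], "b": ["a", "c"], "c": ["b"]}, "a", "c"))  # 2
print(hamming_distance("karolin", "kathrin"))                # 3
```

One can check that each of these satisfies symmetry, separation and the triangle inequality on its respective set.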

A set $\mathcal{X}$ together with a distance function $d$ is called a metric space, denoted $(\mathcal{X}, d)$.

We consider the geometries defined by two metric spaces $(\mathcal{X}, d_{\mathcal{X}})$ and $(\mathcal{Y}, d_{\mathcal{Y}})$ to be equivalent if they are isometric.

We say that $(\mathcal{X}, d_{\mathcal{X}})$ and $(\mathcal{Y}, d_{\mathcal{Y}})$ are isometric if there exists a one-to-one mapping $\phi$ from $\mathcal{X}$ onto $\mathcal{Y}$ preserving all distances:

$d_{\mathcal{Y}}(\phi(x), \phi(y)) = d_{\mathcal{X}}(x, y)$ for all $x, y \in \mathcal{X}$,

and $\phi$ is called an isometry. This definition is quite intuitive.

The hyperbolic space is a particular case of metric space with many interesting properties that can be very powerful for data representations in machine learning.

### What’s hyperbolic geometry?

In differential geometry, the spaces that people study are called manifolds, a sort of high-dimensional generalization of curved surfaces.

Each point of such a manifold can be assigned a curvature. When the curvature is constant, it is either everywhere positive, zero or negative. This gives rise to three types of geometry: elliptic, Euclidean and hyperbolic respectively, usually considered as the three model geometries of constant curvature:

• Elliptic geometry (positive curvature) is found in spheres.
1. The sum of angles in a triangle is always greater than 180°.
2. Two lines orthogonal to another one must intersect.
• Euclidean geometry (zero curvature) is well known.
1. The sum of angles in a triangle is always exactly 180°.
2. In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.
• Hyperbolic geometry (negative curvature) is less intuitive.
1. The sum of angles in a triangle is always less than 180°.
2. For any given line and point not on the line, in the plane containing both the line and the point, there are at least two distinct lines through the point that do not intersect the line.

There exist 5 models of hyperbolic space, i.e. ways to represent it and work with it. The most commonly used in machine learning are the Poincaré (disk) model and the Lorentz (hyperboloid) model.

Their sets and distance functions are defined in the table below.

| Model | Set | Distance function |
| --- | --- | --- |
| Poincaré | $\mathbb{D}^n = \{ x \in \mathbb{R}^n : \Vert x \Vert < 1 \}$ | $d_{\mathbb{D}}(x, y) = \cosh^{-1}\left( 1 + 2\,\frac{\Vert x - y \Vert^2}{(1 - \Vert x \Vert^2)(1 - \Vert y \Vert^2)} \right)$ |
| Lorentz | $\mathbb{H}^n = \{ x \in \mathbb{R}^{n+1} : \langle x, x \rangle_{\mathcal{L}} = -1,\ x_0 > 0 \}$ | $d_{\mathbb{H}}(x, y) = \cosh^{-1}(-\langle x, y \rangle_{\mathcal{L}})$ |

where $\langle x, y \rangle_{\mathcal{L}} = -x_0 y_0 + \sum_{i=1}^{n} x_i y_i$ is called the Lorentz inner product, and $\lambda_x = \frac{2}{1 - \Vert x \Vert^2}$ is called the conformal factor. These are also useful to define the tangent spaces and metric tensors of the two models, described below.
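The Poincaré and Lorentz distance functions can be sketched in a few lines of plain Python (illustrative code, not from any particular library):

```python
import math

def sq_norm(v):
    """Squared Euclidean norm."""
    return sum(c * c for c in v)

def poincare_distance(x, y):
    """Poincaré distance: acosh(1 + 2||x-y||^2 / ((1-||x||^2)(1-||y||^2)))."""
    d2 = sq_norm([a - b for a, b in zip(x, y)])
    return math.acosh(1 + 2 * d2 / ((1 - sq_norm(x)) * (1 - sq_norm(y))))

def lorentz_inner(x, y):
    """Lorentz inner product: -x0*y0 + sum_i xi*yi."""
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def lorentz_distance(x, y):
    """Lorentz distance: acosh(-<x, y>_L)."""
    return math.acosh(-lorentz_inner(x, y))

# Distance from the origin to (0.5, 0) in the Poincaré disk; distances to
# the origin also have the closed form 2 * atanh(||x||).
print(poincare_distance([0.0, 0.0], [0.5, 0.0]))  # ≈ 1.0986
```

For points on the disk, one can verify numerically that both functions satisfy the three metric-space axioms above.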

• In the $n$-dimensional Poincaré model, for any $x \in \mathbb{D}^n$, the tangent space at $x$ is given by

$T_x \mathbb{D}^n = \mathbb{R}^n$,

and the metric tensor, for $u, v \in T_x \mathbb{D}^n$, by

$g_x(u, v) = \lambda_x^2 \, \langle u, v \rangle$,

where $\langle \cdot, \cdot \rangle$ is the Euclidean inner product. Note that the Poincaré model is conformal, i.e. defines the same angles as in Euclidean space, since:

$\cos(\angle(u, v)) = \dfrac{g_x(u, v)}{\sqrt{g_x(u, u)}\,\sqrt{g_x(v, v)}} = \dfrac{\langle u, v \rangle}{\Vert u \Vert\,\Vert v \Vert}$,

for all $u, v \in T_x \mathbb{D}^n \setminus \{0\}$.

• In the $n$-dimensional Lorentz model, for any $x \in \mathbb{H}^n$, the tangent space at $x$ is given by

$T_x \mathbb{H}^n = \{ v \in \mathbb{R}^{n+1} : \langle v, x \rangle_{\mathcal{L}} = 0 \}$,

and the metric tensor, for $u, v \in T_x \mathbb{H}^n$, by

$g_x(u, v) = \langle u, v \rangle_{\mathcal{L}}$.

These two models are equivalent, i.e. they are isometric. See Figure 6 below.

It is hence easy to switch from one model of hyperbolic space to another.
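The standard map between the two models, sending a point of the Poincaré ball to the hyperboloid and back, can be sketched as follows; one can check numerically that it preserves distances, i.e. that it is an isometry (function names are illustrative):

```python
import math

def sq_norm(v):
    return sum(c * c for c in v)

def poincare_distance(x, y):
    d2 = sq_norm([a - b for a, b in zip(x, y)])
    return math.acosh(1 + 2 * d2 / ((1 - sq_norm(x)) * (1 - sq_norm(y))))

def lorentz_inner(x, y):
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def lorentz_distance(x, y):
    return math.acosh(-lorentz_inner(x, y))

def poincare_to_lorentz(x):
    """Standard isometry from the Poincaré ball to the hyperboloid:
    x -> ((1 + ||x||^2) / (1 - ||x||^2), 2x / (1 - ||x||^2))."""
    s = sq_norm(x)
    return [(1 + s) / (1 - s)] + [2 * c / (1 - s) for c in x]

def lorentz_to_poincare(x):
    """Inverse map: drop the time coordinate and rescale by 1 / (1 + x0)."""
    return [c / (1 + x[0]) for c in x[1:]]

x, y = [0.1, 0.2], [-0.3, 0.4]
X, Y = poincare_to_lorentz(x), poincare_to_lorentz(y)
print(lorentz_inner(X, X))                               # ≈ -1: X is on the hyperboloid
print(poincare_distance(x, y), lorentz_distance(X, Y))   # equal: the map is an isometry
print(lorentz_to_poincare(X))                            # ≈ [0.1, 0.2]: round trip
```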

Now that we have defined hyperbolic spaces, let’s try to understand them. For intuition, the Poincaré model is the friendliest one. The following gif illustrates what “straight lines” look like in the Poincaré disk.

As can be seen, around the origin of the disk, these lines look straight, as in Euclidean space.

However, the closer they get to the border, the stronger they bend.

Note that these lines are geodesics, i.e. they are shortest paths between the two points that they connect, and their length in hyperbolic space gives the hyperbolic distance between the two points.

Let’s have a closer look at the Poincaré distance function.

Remember that for $x \geq 1$, we have $\cosh^{-1}(x) = \log\big(x + \sqrt{x^2 - 1}\big)$, which is equivalent to $\sqrt{2(x - 1)}$ around $x = 1$, and to $\log(2x)$ towards $+\infty$.

Further notice how the conformal factor $\lambda_x = \frac{2}{1 - \Vert x \Vert^2}$ goes to 2 around the origin and grows to $+\infty$ when $\Vert x \Vert$ approaches the border.

If $x, y$ are close to the origin, then the argument of $\cosh^{-1}$ in the Poincaré distance is close to 1, and the equivalence above yields

$d_{\mathbb{D}}(x, y) \approx \sqrt{4 \Vert x - y \Vert^2} = 2 \Vert x - y \Vert$.

This shows that close to the origin, the hyperbolic space resembles Euclidean geometry.
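A quick numerical sketch of this behavior, using the Poincaré distance formula from the table above (illustrative code):

```python
import math

def poincare_distance(x, y):
    sq = lambda v: sum(c * c for c in v)
    d2 = sq([a - b for a, b in zip(x, y)])
    return math.acosh(1 + 2 * d2 / ((1 - sq(x)) * (1 - sq(y))))

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Near the origin: hyperbolic distance ≈ 2 × Euclidean distance.
x, y = [0.001, 0.0], [0.0, 0.002]
print(poincare_distance(x, y) / euclidean(x, y))  # ≈ 2.0

# Near the border: the same small Euclidean gap costs far more hyperbolic distance.
x, y = [0.999, 0.0], [0.9995, 0.0]
print(poincare_distance(x, y) / euclidean(x, y))  # much larger than 2
```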

The hyperbolic nature of the space is more prominent at the border of the disk, which corresponds to points at infinity in the Lorentz model, according to Figure 6.

Moreover, in the Lorentz model $\mathbb{H}^2$, it is known that the area of a ball of radius $r$ is $2\pi(\cosh(r) - 1)$, which is equivalent to $\pi e^r$ for large $r$:

The volume of a ball grows exponentially with the radius!

In comparison, the volume of a ball of radius $r$ in $\mathbb{R}^2$ grows only proportionally to $r^2$.

This exponential volume growth of hyperbolic spaces is reminiscent of trees.

Indeed, a binary tree of depth $d$ contains $2^{d+1} - 1$ nodes, a number growing exponentially with the depth.
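A small numerical comparison, using the standard area formula $2\pi(\cosh(r) - 1)$ for a disk of radius $r$ in the hyperbolic plane of curvature $-1$:

```python
import math

# Area of a disk of radius r: hyperbolic plane vs. Euclidean plane.
# The ratio blows up: exponential vs. polynomial growth.
for r in [1, 5, 10]:
    hyperbolic_area = 2 * math.pi * (math.cosh(r) - 1)
    euclidean_area = math.pi * r ** 2
    print(r, hyperbolic_area / euclidean_area)

# Node count of a full binary tree of depth d: 2**(d+1) - 1, also exponential.
print([2 ** (d + 1) - 1 for d in range(5)])  # [1, 3, 7, 15, 31]
```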

This dilatation of distances can also be understood intuitively from the negative curvature of the space.

Consider the surface in Figure 9. It is negatively curved.

This means that it is stretched in opposite directions.

Now consider two rays of light emerging from the origin, shooting in different directions, but remaining inside the surface.

Because of the negative curvature, they will move apart much faster if their respective directions are orthogonal than if they are colinear.

As a byproduct, the shortest path between the end points of these two rays of light will pass close to the origin.

This is similar to what the metric would look like in a tree, in which joining two leaves requires passing through their closest common ancestor.

Similarly, one can think of a hyperbolic space as stretching the metric in opposite directions, connecting together the concepts of exponential volume growth, continuous tree-likeness and negative curvature…

### What’s better in hyperbolic geometry?

Because hyperbolic spaces possess this exponential volume growth property, they are better suited to embed tree-like graphs, or any kind of data with an underlying hierarchical structure.

In particular, they have been widely used to better visualize large hierarchies (Figure 10, from this paper).

Other common applications range from representing phylogenetic trees of DNA sequences in bioinformatics, to efficiently routing information in complex networks (see Papers and Code for more references).

More recently, hyperbolic spaces have been used in machine learning by Nickel & Kiela, to embed word hierarchies, obtaining new state-of-the-art results in word hypernymy detection, i.e. given two words, predicting if one is a subconcept of the other.

But is hyperbolic geometry only useful for hierarchical data?

No! It turns out that its power applies to any kind of data with an underlying tree-like structure.

Indeed, some recent work on the hyperbolic geometry of complex networks explains that the presence of an underlying hyperbolic geometry in a symbolic graph is equivalent to the graph possessing a heterogeneous topology, meaning that its nodes can be somehow classified into a taxonomy of elements, i.e. clustered into groups, with these groups split into smaller subgroups, and so on.

Notice how general this is, since the tree-structure only needs to be approximate!

Let’s have a simple thought experiment.

Consider a toy language obtained by finite sequences of tokens taken from a vocabulary of size $V$. The number of sentences of length $L$ is $V^L$, growing exponentially with the length.

Roughly speaking, if we see smaller sentences as more generic, and longer ones as telling a more specific story, we naturally obtain a graph of entailment relations between sentences, together with a heterogeneous taxonomy over its nodes.

Approximate hierarchical structures are more prominent than they seem to be.
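The sentence count in this thought experiment is easy to verify with a small sketch (the three-token vocabulary is purely illustrative):

```python
from itertools import product

V = ["a", "b", "c"]  # hypothetical toy vocabulary of size 3
for L in range(1, 5):
    print(L, len(list(product(V, repeat=L))))  # V**L: 3, 9, 27, 81

# Every sentence of length L extends a unique sentence of length L - 1,
# so the prefix relation organizes all sentences into a tree.
```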

At this stage, you may wonder:

Could we use hyperbolic embeddings in downstream tasks, i.e. as inputs to neural networks?

Could we design neural networks with hidden states in hyperbolic space?

The answer in the next episode… 🙂