Application to Graph Theory

 

Introduction and a little bit of History: Königsberg was a city in Prussia situated on the Pregel River, and it served as the residence of the dukes of Prussia in the 16th century. Today the city is named Kaliningrad, and it is a major industrial and commercial centre of western Russia. The Pregel flowed through the town, dividing it into four regions, as in the following picture.

 

 

In the eighteenth century, seven bridges connected the four regions. The people of Königsberg used to take long walks through town on Sundays, and they wondered whether it was possible to start at one location in the town, travel across all the bridges without crossing any bridge twice, and return to the starting point. This problem was first solved by the prolific Swiss mathematician Leonhard Euler, who, as a consequence of his solution, invented the branch of mathematics now known as graph theory. Euler’s solution consisted of representing the problem by a “graph”, with the four regions represented by four vertices and the seven bridges by seven edges, as follows:

 

 

Graph Theory is now a major tool in mathematical research, electrical engineering, computer programming and networking, business administration, sociology, economics, marketing, and communications; the list can go on and on. In particular, many problems can be modelled with paths (see the definition below) formed by traveling along the edges of a certain graph. For instance, problems of efficiently planning routes for mail delivery, garbage pickup, snow removal, diagnostics in computer networks, and others, can be solved using models that involve paths in graphs.

 

Basic concepts of graph theory

Before we make the connection between graph theory and linear algebra, we start with some basic definitions in graph theory for those of you who are not familiar with the topic:

·        A graph is a collection of points called vertices, joined by lines called edges:

 

·        A graph is called directed, or a digraph, if its edges are directed (that is, each edge has a specific direction). A path joining two vertices X and Y of a digraph is a sequence of distinct points (vertices) and directed edges. A path starting and ending at the same vertex P is called a loop at P. For example, in the digraph:

 

there is more than one path from P1 to P3. One of them passes through the three vertices P1 → P2 → P3. A different path from P1 to P3 passes through the vertices P1 → P2 → P4 → P5 → P3.

·        A graph is called connected if there is a path connecting any two distinct vertices. It is called disconnected otherwise:

 

·        In a graph G, if there exists a path consisting of n edges between two vertices Pi and Pj, then we say that there is an n-walk from Pi to Pj. For instance, there are three different 2-walks between the points P2 and P7 on the above graph G1.

 

So how is linear algebra related to graph theory?

As you might expect, graphs can sometimes be very complicated, so one needs more practical ways to represent them. Matrices are a very useful way of studying graphs, since they turn the picture into numbers, to which one can then apply techniques from linear algebra.

Given a graph G with n vertices v1,…,vn, we define the adjacency matrix of G with respect to the enumeration v1,…,vn of the vertices as being the n×n matrix A=[aij] defined by

aij = 1 if there is an edge joining vi to vj (an edge from vi to vj in the case of a digraph), and aij = 0 otherwise.

For example, for the graph G1 above, the adjacency matrix M1 (with respect to the chosen enumeration of its vertices) is

 

 

 

 

 

 

 

 

Note that for an undirected graph, like G1, the adjacency matrix is symmetric (that is, it is equal to its transpose), but this is not necessarily the case for a digraph like G2.

 

Also, note that any square Boolean matrix (that is, a matrix whose entries take only the two values 0 and 1) with 0’s on the main diagonal determines a unique digraph.
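As a minimal sketch of how a graph’s picture becomes a matrix, the following Python snippet builds the adjacency matrix of a graph from a list of edges. The two small example graphs are hypothetical; they are not the graphs G1 and G2 pictured above.

```python
# A sketch of building adjacency matrices from edge lists.
# The two small graphs below are hypothetical, not the graphs G1 and G2 above.
import numpy as np

def adjacency_matrix(n, edges, directed=False):
    """Return the n x n adjacency matrix A = [aij], where aij = 1 exactly when
    there is an edge from vertex i to vertex j (vertices are numbered 0..n-1)."""
    a = np.zeros((n, n), dtype=int)
    for i, j in edges:
        a[i, j] = 1
        if not directed:
            a[j, i] = 1          # undirected edges go both ways
    return a

# Undirected example: the adjacency matrix is symmetric.
g = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
print(g)
print("symmetric:", np.array_equal(g, g.T))      # True

# Directed example (a digraph): symmetry is no longer guaranteed.
d = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)], directed=True)
print("symmetric:", np.array_equal(d, d.T))      # False
```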

 

 

The following theorem gives one important use of powers of the adjacency matrix of a graph:

 

If A is the adjacency matrix of a graph G (with vertices v1,…, vn), then the (i, j)-entry of A^r is the number of distinct r-walks from vertex vi to vertex vj in the graph.

 

Taking the square of the matrix M1 above gives

 

 

 

 

 

 

 

The resulting matrix gives us the number of different 2-walks (walks using two edges) between the vertices of the graph G1. For instance, there are three different 2-walks between the points P2 and P7 on the above graph, but there is no way to get from P1 to P5 in two steps.
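The theorem is easy to experiment with. The following sketch checks it on a small hypothetical 4-vertex graph (not the graph G1, whose matrix M1 appears above): each entry of A^2 is compared against a brute-force count of 2-walks.

```python
# A small experiment with the theorem: the (i, j)-entry of A^r equals the
# number of r-walks from vertex i to vertex j.  The 4-vertex graph used here
# is hypothetical; it is not the graph G1 whose matrix M1 appears above.
import numpy as np
from itertools import product

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

def count_r_walks(A, i, j, r):
    """Count r-walks from i to j by brute force: try every sequence of r+1
    vertices starting at i and ending at j, and keep those where consecutive
    vertices are joined by an edge."""
    n = len(A)
    return sum(
        all(A[w[k], w[k + 1]] for k in range(r))
        for w in product(range(n), repeat=r + 1)
        if w[0] == i and w[-1] == j
    )

A2 = np.linalg.matrix_power(A, 2)
for i in range(4):
    for j in range(4):
        assert A2[i, j] == count_r_walks(A, i, j, 2)
print(A2)   # each entry of A^2 is the number of 2-walks between the vertices
```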

 

Example The following diagram represents the route map of a delivery company:

 

 

 

 

 

 

where A, B, C, D, E are the cities served by the company. The adjacency matrix M of the above graph is:

 

 

 

 

 

 

 

 

 

 

 

so M^2 is the matrix:

 

 

 

 

 

 

 

and M^3 is the matrix:

 

 

 

 

 

 

 

 

 

If the company is mostly interested in connections from city A to city B, one can read off that the number of 1-step connections from A to B is 1, the number of 2-step connections is also 1, but the number of 3-step connections between the two cities is 4. These 4 connections can be listed explicitly:

 

 

1.      A→C→E→B

2.      A→B→D→B

3.      A→D→A→B

4.      A→C→A→B
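The full route map is given by the figure above; as a rough sketch, the snippet below uses only the edges that appear in the four routes just listed (an assumption about the map), and this already reproduces the connection counts quoted for city A to city B.

```python
# A sketch of the delivery-route computation.  Only the edges appearing in the
# four listed routes are used (an assumption about the full route map, which
# is given by the figure above), yet this already reproduces the quoted counts.
import numpy as np

cities = ["A", "B", "C", "D", "E"]
idx = {c: k for k, c in enumerate(cities)}
edges = [("A", "B"), ("A", "C"), ("A", "D"),
         ("B", "D"), ("C", "A"), ("C", "E"),
         ("D", "A"), ("D", "B"), ("E", "B")]

M = np.zeros((5, 5), dtype=int)
for u, v in edges:
    M[idx[u], idx[v]] = 1          # directed edge u -> v

M2 = M @ M                         # 2-step connections
M3 = M2 @ M                        # 3-step connections
a, b = idx["A"], idx["B"]
print("1-step connections from A to B:", M[a, b])    # 1
print("2-step connections from A to B:", M2[a, b])   # 1
print("3-step connections from A to B:", M3[a, b])   # 4
```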

 

 

 

 

Dominance-directed graph A digraph G is called a dominance-directed graph if for any pair of distinct vertices u and v of G, either u→v or v→u, but not both (here the notation u→v means that there is an edge from u to v).

 

The following is an example of a dominance-directed graph:

 

 

In the above graph, the vertices A, C and E have the following property: from each one there is either a 1-step or a 2-step connection to any other vertex in the graph. In a sports tournament these vertices would correspond to the most powerful teams, in the sense that each of these teams either beats any given team or beats some other team that beat the given team. The above graph is not special in this respect; the following theorem guarantees that every dominance-directed graph contains such a vertex:

 

In any dominance-directed graph there is at least one vertex from which there is a 1-step or a 2-step connection to any other vertex in the graph.

 

In a dominance-directed graph, we define the power of a vertex as the total number of 1-step and 2-step connections from it to other vertices. Using the adjacency matrix M of the graph, one can find the power of a vertex Pi as follows: the sum of the entries in the ith row of M is the total number of 1-step connections from Pi to other vertices, and the sum of the entries in the ith row of M^2 is the total number of 2-step connections from Pi to other vertices. Therefore, the sum of the entries in the ith row of the matrix A = M + M^2 is the total number of 1-step and 2-step connections from Pi to other vertices.

 

 

In a dominance-directed graph, one would like to locate the vertices with the largest power. To do that, we compute the matrix A = M + M^2; a row of A with the largest sum of entries then corresponds to such a vertex.

 

Example Suppose that the results of a baseball tournament of five teams A, B, C, D and E are given by the dominance-directed graph H above. The adjacency matrix M of H is:

 

 

 

 

 

so the matrix M^2 is

 

 

 

 

and the matrix A = M + M^2 is

 

 

 

 

 

 

Since the first row of A has the largest sum of entries, the vertex A must have a 1-step or 2-step connection to every other vertex. The ranking of the teams according to the powers of the corresponding vertices is:

 

Team A (first), Team C (second), Teams B and D (tied for third), and Team E (last).
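Since the matrices in this example come from the tournament graph H in the figure, here is a minimal sketch of the same power computation on a different, hypothetical five-team tournament; the dictionary `beats` and the team names are invented purely for illustration.

```python
# A sketch of ranking vertices by power in a dominance-directed graph.
# The five-team tournament below is hypothetical (it is not the graph H from
# the figure): beats[X] lists the teams that team X defeated, and for every
# pair of teams exactly one of them beat the other.
import numpy as np

teams = ["A", "B", "C", "D", "E"]
idx = {t: k for k, t in enumerate(teams)}
beats = {"A": ["B", "C", "D"], "B": ["C", "E"], "C": ["D", "E"],
         "D": ["B"], "E": ["A", "D"]}

M = np.zeros((5, 5), dtype=int)
for u, losers in beats.items():
    for v in losers:
        M[idx[u], idx[v]] = 1      # directed edge u -> v means u beat v

A = M + M @ M                      # 1-step plus 2-step connections
power = A.sum(axis=1)              # row sums give the power of each vertex
for t in sorted(teams, key=lambda t: power[idx[t]], reverse=True):
    print(t, "power:", power[idx[t]])
```

Running the sketch prints the teams in decreasing order of power, which is exactly the ranking procedure used in the baseball example above.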