12.1 Incidence matrix and connectivity

Let $n=|V|$ and $m=|E|$. We shall assign an orientation to every edge $e \in E$ and call $\vec{E}$ this set of oriented edges. The orientation can be arbitrary.

Definition 12.1 (Incidence matrix $\partial_1$).

$\partial_1$ is an $n \times m$ matrix with $\partial_1(i,j) = \chi_{e_j}(v_i)$, where $\chi_e(v) = \mathbf{1}[v = e^+] - \mathbf{1}[v = e^-]$ for $e = (e^-,e^+) \in \vec{E}$ and $v \in V$.

Schematically, the rows of $\partial_1$ are indexed by the vertices $v_1,\dots,v_n$, the columns by the oriented edges $e_1,\dots,e_m$, and the $(i,j)$-th entry $\chi_{e_j}(v_i)$ lies in $\{-1,0,+1\}$.

We set $\partial_0 = [1,\dots,1]$, a $1 \times n$ matrix.
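As a sanity check, Definition 12.1 can be coded directly. A minimal sketch in Python, assuming the small oriented graph $v_1 \to v_2$, $v_2 \to v_3$, $v_1 \to v_3$ (an illustrative choice, not from the text), with vertices indexed from $0$:

```python
# Build the incidence matrix d1 of Definition 12.1 for a small oriented graph.
# Vertices are 0..n-1; each oriented edge is a pair (tail, head) = (e-, e+).
n = 3
edges = [(0, 1), (1, 2), (0, 2)]  # arbitrary orientations, as the text allows

# d1[i][j] = +1 if v_i is the head of e_j, -1 if it is the tail, 0 otherwise
d1 = [[0] * len(edges) for _ in range(n)]
for j, (tail, head) in enumerate(edges):
    d1[head][j] += 1
    d1[tail][j] -= 1

# d0 is the 1 x n all-ones matrix
d0 = [[1] * n]

print(d1)  # first column is the edge v_1 -> v_2: -1 at v_1, +1 at v_2
```

Each column records exactly one $+1$ (the head) and one $-1$ (the tail), which is the property used repeatedly below.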

Remark 12.2.

We consider the matrices as real matrices, and our matrix algebra will be over $\mathbb{R}$. One can consider any other field $\mathbb{F}$ in place of $\mathbb{R}$ as well.

Further set $\mathcal{C}_j = \{0\}$ for $j \geq 2$, $\mathcal{C}_{-1} = \mathbb{R}$, $\mathcal{C}_0 = \{\sum_{i=1}^n a_i v_i : a_i \in \mathbb{R}\}$ and $\mathcal{C}_1 = \{\sum_{i=1}^m a_i e_i : a_i \in \mathbb{R},\ e_i \in \vec{E}\}$, where the sums are to be considered as formal sums. Observe that $\mathcal{C}_0, \mathcal{C}_1$ are $\mathbb{R}$-vector spaces, and note that $\mathcal{C}_0 \cong \mathbb{R}^V$, $\mathcal{C}_1 \cong \mathbb{R}^{\vec{E}}$. The canonical basis for $\mathcal{C}_0$ is $\{v_1,\dots,v_n\}$ and for $\mathcal{C}_1$ is $\{e_i : e_i \in \vec{E}\}$. Here, we think of $v_1 = v_1 + 0\,v_2 + \dots + 0\,v_n$, $e_1 = e_1 + 0\,e_2 + \dots + 0\,e_m$, and similarly for the other $v_i$'s and $e_j$'s.

We shall view $\partial_0, \partial_1$ as linear maps in the following sense:

$\partial_0 : \mathcal{C}_0 \to \mathcal{C}_{-1}, \quad z = \sum_{i=1}^n a_i v_i \mapsto \sum_i a_i = \partial_0 [a_1,\dots,a_n]^t,$

$\partial_1 : \mathcal{C}_1 \to \mathcal{C}_0, \quad z = \sum_{i=1}^m a_i e_i \mapsto \sum_i a_i (e_i^+ - e_i^-).$

Suppose we represent $\partial_1 z = \sum_{i=1}^n b_i v_i$; then $(b_1,\dots,b_n)^t = \partial_1 [a_1,\dots,a_m]^t$. Further, if the columns of the matrix $\partial_1$ are $Co_1,\dots,Co_m$, then $(b_1,\dots,b_n)^t = \sum_{i=1}^m a_i Co_i$.
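The two descriptions of $\partial_1 z$ above — the matrix-vector product and the coordinate formula $\sum_i a_i (e_i^+ - e_i^-)$ — can be checked against each other on a toy example; the graph and the coefficients $a_i$ below are hypothetical:

```python
# Check that applying d1 as a matrix reproduces z = sum a_i (e_i^+ - e_i^-).
n = 3
edges = [(0, 1), (1, 2), (0, 2)]  # oriented edges (tail, head)
d1 = [[0] * len(edges) for _ in range(n)]
for j, (tail, head) in enumerate(edges):
    d1[head][j] += 1
    d1[tail][j] -= 1

a = [2, -1, 3]  # coefficients of the chain z = 2 e_1 - e_2 + 3 e_3

# matrix-vector product: (b_1, ..., b_n)^t = d1 [a_1, ..., a_m]^t
b = [sum(d1[i][j] * a[j] for j in range(len(a))) for i in range(n)]

# direct formula: each edge contributes a_j at its head and -a_j at its tail
b_direct = [0] * n
for j, (tail, head) in enumerate(edges):
    b_direct[head] += a[j]
    b_direct[tail] -= a[j]

print(b, b_direct)  # the two computations agree
```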

Assume that $G \neq \emptyset$. We set $\mathcal{B}_i = \mathrm{Im}(\partial_{i+1})$ and $\mathcal{Z}_i = \mathrm{Ker}(\partial_i)$.

Remark 12.3.

The above abstraction of representing $\mathcal{C}_0, \mathcal{C}_1$ as formal sums may seem a little unnecessary, as we could also have represented $\mathcal{C}_0, \mathcal{C}_1$ as functions from $V$ and $\vec{E}$ respectively. However, this representation will be notationally convenient, and it is borrowed from algebraic topology, where one considers richer structures giving rise to further vector spaces such as $\mathcal{C}_2, \mathcal{C}_3, \dots$. Further, if we choose $\mathbb{Z}$-coefficients instead of $\mathbb{R}$-coefficients, then $\mathcal{C}_0, \mathcal{C}_1$ are $\mathbb{Z}$-modules and not vector spaces. Thus, the above representation allows us to carry over similar algebraic operations even in that case. As an exercise, algebraically inclined students can try to repeat the proofs with $\mathbb{Z}$-coefficients. See [Edelsbrunner 2010] for an accessible introduction to algebraic topology and [Munkres 2018] for more details.

Trivially, we have that $\mathcal{B}_{-1} = \mathcal{C}_{-1} = \mathbb{R}$. Thus, by the rank-nullity theorem, we have that $r(\mathcal{Z}_0) = n-1$, where by $r(\cdot)$ we denote the rank of a matrix/dimension of a vector space. It is easy to see that $v_i - v_j \in \mathcal{Z}_0$ for $i \neq j$. Further, $v_1 - v_i$, $i = 2,\dots,n$, are linearly independent and hence form a basis for $\mathcal{Z}_0$. Here, linear independence means that if $\sum_{j=2}^n a_j (v_1 - v_j) = 0$ then $a_j = 0$ for $j = 2,\dots,n$. Observe that for $z = \sum_{i=1}^m b_i e_i$,

$\partial_0 \partial_1 z = \partial_0 \Big( \sum_{i=1}^m b_i (e_i^+ - e_i^-) \Big) = \sum_{i=1}^m b_i (\partial_0 e_i^+ - \partial_0 e_i^-) = 0,$

where in the first and third equalities we have used the definitions of $\partial_1$ and $\partial_0$ respectively, and in the second equality the linearity of $\partial_0$. Thus, we have that $\partial_0 \partial_1 = 0$; in other words, $\mathcal{B}_0 \subseteq \mathcal{Z}_0$. The same can also be deduced by noting that, as a matrix product, $\partial_0 \partial_1$ is nothing but the sum of the rows of $\partial_1$, and hence $0$.
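The matrix-product view of $\partial_0 \partial_1 = 0$ (the column sums of $\partial_1$ vanish) is easy to confirm numerically; the oriented graph below is an arbitrary illustration:

```python
# Each column of d1 has exactly one +1 and one -1, so d0 d1 = 0.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]  # an arbitrary oriented graph
d1 = [[0] * len(edges) for _ in range(n)]
for j, (tail, head) in enumerate(edges):
    d1[head][j] += 1
    d1[tail][j] -= 1

# d0 d1 is the row vector of column sums of d1
d0_d1 = [sum(d1[i][j] for i in range(n)) for j in range(len(edges))]
print(d0_d1)  # [0, 0, 0, 0]
```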

Now, if we can understand $r(\mathcal{B}_0)$, then we can understand all the ranks involved. Observe that $\mathcal{B}_0$ is nothing but the column space of $\partial_1$, i.e., denoting the columns of the matrix $\partial_1$ by $Co_1,\dots,Co_m$, we have that

$\mathcal{B}_0 = \Big\{ \sum_{i=1}^m a_i Co_i : a_i \in \mathbb{R} \Big\}.$

Hence $r(\mathcal{B}_0)$ is nothing but the maximum number of linearly independent column vectors.

Suppose $e_1,\dots,e_k$ denote the edges corresponding to the linearly independent columns. Assume that the subgraph $H = e_1 \cup \dots \cup e_k$ is connected, and let $V(H) = \{v_1,\dots,v_l\}$. Consider the incidence matrix restricted to $\{v_1,\dots,v_l\} \times \{e_1,\dots,e_k\}$, and call it $\tilde{\partial}_1$. Since the non-trivial entries of the columns of the $e_i$'s lie on $v_1,\dots,v_l$, the columns of $\tilde{\partial}_1$ are also linearly independent, so the column rank of $\tilde{\partial}_1$ is $k$; and since $H$ is connected on $l$ vertices, $k \geq l-1$. Let $R_1,\dots,R_l$ be the rows of the matrix $\tilde{\partial}_1$. Since every column has exactly one $+1$ and one $-1$, we have that $\sum_i R_i = 0$, and thus the row rank of $\tilde{\partial}_1$ is at most $l-1$. Since the row rank equals the column rank, we have that $k = l-1$, i.e., $H$ is a connected graph on $l$ vertices with $l-1$ edges, i.e., a tree.

Thus, we have proved that if $e_1,\dots,e_k$ denote the edges corresponding to the linearly independent columns, then every component of $e_1 \cup \dots \cup e_k$ is a tree, i.e., $e_1 \cup \dots \cup e_k$ is a forest, or acyclic subgraph. We now show the converse and determine a basis for $\mathcal{B}_0$.

Proposition 12.4.

Let $Co_i$ be the column corresponding to $e_i$ in $\partial_1$. Then $e_1 \cup \dots \cup e_k$ is acyclic iff $Co_1,\dots,Co_k$ are linearly independent.

Proof.

We have shown the 'if' part. We now show the 'only if' part. Suppose that $H = e_1 \cup \dots \cup e_k$ is acyclic. We show that $Co_1,\dots,Co_k$ are linearly independent by induction on $k$. The conclusion holds trivially for $k=1$. Suppose it holds for all $l < k$. Since a finite acyclic graph has a leaf, WLOG assume that $e_k$ is a leaf edge with $\deg_H(e_k^+) = 1$. Thus $e_k^+ \notin e_1 \cup \dots \cup e_{k-1}$. If $\sum_{i=1}^k a_i Co_i = 0$, then since $e_k^+$ lies on $e_k$ only, the $e_k^+$-coordinate gives $a_k = 0$. Thus $\sum_{i=1}^{k-1} a_i Co_i = 0$, and since $e_1 \cup \dots \cup e_{k-1}$ is acyclic, by the induction hypothesis $Co_1,\dots,Co_{k-1}$ are linearly independent, i.e., $a_i = 0$ for all $1 \leq i \leq k$. ∎
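Proposition 12.4 can be tested numerically: the columns of an acyclic subgraph are independent, while closing a cycle adds a dependent column. A sketch assuming numpy is available, with the path and the 4-cycle as illustrative graphs:

```python
import numpy as np

def incidence(n, edges):
    """Oriented incidence matrix of Definition 12.1 (rows: vertices, cols: edges)."""
    d1 = np.zeros((n, len(edges)), dtype=int)
    for j, (tail, head) in enumerate(edges):
        d1[head, j] += 1
        d1[tail, j] -= 1
    return d1

# A spanning tree on 4 vertices: its columns are linearly independent.
tree = incidence(4, [(0, 1), (1, 2), (2, 3)])
rank_tree = np.linalg.matrix_rank(tree)

# Closing the 4-cycle adds a dependent column: the rank does not grow.
cycle = incidence(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
rank_cycle = np.linalg.matrix_rank(cycle)
print(rank_tree, rank_cycle)  # 3 3
```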

From the previous proposition, the following theorem follows easily.

Theorem 12.5.
  1. Let $e_1 \cup \dots \cup e_k$ be a (maximal) spanning forest in $G$. Then

     $\mathcal{B}_0 = \Big\{ \sum_{i=1}^k a_i \partial_1(e_i) : a_i \in \mathbb{R} \Big\}.$

  2. $r(\mathcal{B}_0) = n - \beta_0(G)$ and $r(\mathcal{Z}_0) - r(\mathcal{B}_0) = \beta_0(G) - 1$.

  3. $G$ is connected iff $\mathcal{B}_0 = \mathcal{Z}_0$.
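The rank formula $r(\mathcal{B}_0) = n - \beta_0(G)$ can be illustrated on a graph with two components, so $\beta_0(G) = 2$; the example graph below is an assumption for illustration, not from the text (a sketch assuming numpy):

```python
import numpy as np

def incidence(n, edges):
    """Oriented incidence matrix (rows: vertices, cols: edges)."""
    d1 = np.zeros((n, len(edges)), dtype=int)
    for j, (tail, head) in enumerate(edges):
        d1[head, j] += 1
        d1[tail, j] -= 1
    return d1

# Two components: a triangle on {0, 1, 2} and a single edge {3, 4}.
n = 5
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
r_B0 = np.linalg.matrix_rank(incidence(n, edges))
print(r_B0)  # n - beta_0 = 5 - 2 = 3
```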

By the rank-nullity theorem, we have that $r(\mathcal{Z}_1) = m - n + \beta_0(G)$, which is nothing but the number of excess edges, i.e., edges added after forming a (maximal) spanning forest. Let $e_1, e_2, \dots, e_k$ form a cycle. Denoting the reversal of an edge by $\hat{e}$, set

$z_C := \sum_{i=1}^k \big( \mathbf{1}[e_i \in \vec{E}]\, e_i - \mathbf{1}[\hat{e}_i \in \vec{E}]\, \hat{e}_i \big).$

Thus, it is easy to verify that $\partial_1 z_C = 0$: since $\partial_1 e_i = -\partial_1 \hat{e}_i = e_i^+ - e_i^-$,

$\partial_1 z_C = \sum_{i=1}^k (e_i^+ - e_i^-) = 0,$

where the last equality is because $e_1, \dots, e_k$ form a cycle, so the sum telescopes. Hence $z_C \in \mathcal{Z}_1$. We can extend this further.
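The computation $\partial_1 z_C = 0$ can be replayed on the triangle, where the traversal $v_1 \to v_2 \to v_3 \to v_1$ uses two edges with their chosen orientation and one against it (the orientations below are an illustrative choice):

```python
# Verify d1 z_C = 0 for the triangle cycle v_1 -> v_2 -> v_3 -> v_1.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]  # the chosen orientations in E->
d1 = [[0] * len(edges) for _ in range(n)]
for j, (tail, head) in enumerate(edges):
    d1[head][j] += 1
    d1[tail][j] -= 1

# Traversing 0->1 and 1->2 agrees with the orientation (coefficient +1);
# the step 2->0 is the reversal of the oriented edge (0, 2), so it enters
# with coefficient -1, exactly as in the definition of z_C.
z_C = [1, 1, -1]
d1_zC = [sum(d1[i][j] * z_C[j] for j in range(len(edges))) for i in range(n)]
print(d1_zC)  # [0, 0, 0]
```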

Exercise(A) 12.6.

Let $F$ be a maximal spanning forest in $G$ and $e_1,\dots,e_l \in E \setminus E(F)$, where $l = m - n + \beta_0(G)$. Let $C_1,\dots,C_l$ be the cycles in $F \cup e_1, \dots, F \cup e_l$ respectively. Then $z_{C_1},\dots,z_{C_l}$ form a basis for $\mathcal{Z}_1$.

Remark 12.7.

Another challenging exercise: in terms of matrices, the linear transformations $\partial_0, \partial_1$ represent right multiplication, i.e., we have that

$\{0\} \xrightarrow{\ \partial_2\ } \mathcal{C}_1 \xrightarrow{\ \partial_1\ } \mathcal{C}_0 \xrightarrow{\ \partial_0\ } \mathcal{C}_{-1} \xrightarrow{\ \partial_{-1}\ } \{0\}.$

But we could also have considered left multiplication, in which case we would have linear transformations $\delta_i$'s as follows:

$\{0\} \xleftarrow{\ \delta_2\ } \mathcal{C}_1 \xleftarrow{\ \delta_1\ } \mathcal{C}_0 \xleftarrow{\ \delta_0\ } \mathcal{C}_{-1} \xleftarrow{\ \delta_{-1}\ } \{0\}.$

Can you compute the ranks and characterize connectivity as we did above with the $\partial_0, \partial_1$'s?

See [Bapat 2010, Chapter 2] for more on incidence matrices.

We shall define the $0$-$1$ incidence matrix $M$ as follows: $M(i,j) = \mathbf{1}[v_i \in e_j]$. In other words, $M$ is the unoriented incidence matrix. The following result explains the importance of considering orientations in the incidence matrix.

Theorem 12.8.

Let $G$ be a connected graph. Then $r(M) = n-1$ if $G$ is bipartite, and $r(M) = n$ otherwise.

See [Bapat 2010, Lemma 2.17] for a proof.
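Theorem 12.8 can be spot-checked on the smallest bipartite and non-bipartite cycles, $C_4$ and $C_3$ (a sketch assuming numpy is available):

```python
import numpy as np

def unoriented_incidence(n, edges):
    """0-1 incidence matrix M with M[i, j] = 1 iff v_i lies on e_j."""
    M = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        M[u, j] = 1
        M[v, j] = 1
    return M

# C4 is bipartite: r(M) = n - 1 = 3.  C3 is not bipartite: r(M) = n = 3.
r_C4 = np.linalg.matrix_rank(
    unoriented_incidence(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
r_C3 = np.linalg.matrix_rank(
    unoriented_incidence(3, [(0, 1), (1, 2), (2, 0)]))
print(r_C4, r_C3)  # 3 3
```

For $C_4$ the alternating sum of rows over the two sides of the bipartition vanishes, which is exactly the dependence responsible for $r(M) = n-1$; no such relation exists for the odd cycle.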