
Tuesday, October 30, 2012

C++ Enumeration to String

Hacked together a script that transforms a C++ enumeration into simple C strings, or anything else you like.

If you have an enumeration enum [tag] [: type] {enum-list}, then you can convert the enum-list part into something like case some_entry: return "some_entry"; for each entry in enum-list, if the operation is set to: case $: return "$";
where the character $ marks the replacement position for the actual enumeration entry.
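For example (my own illustration, the enum is made up): applying the operation case $: return "$"; to every entry and wrapping the result in a switch yields a ready-to-use to-string function:

    // Hypothetical input:  enum Color { Red, Green, Blue };
    // Operation:           case $: return "$";
    #include <cstdio>

    enum Color { Red, Green, Blue };

    const char* to_string(Color c) {
        switch (c) {  // body generated by the script, one case per entry
            case Red:   return "Red";
            case Green: return "Green";
            case Blue:  return "Blue";
        }
        return "";  // unreachable for valid enum values
    }

    int main() {
        std::printf("%s\n", to_string(Green));  // prints "Green"
    }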

(The original post embeds an interactive form here: an "Operation" field, an "Enumeration-List" field, and an output area below.)

Tuesday, May 1, 2012

Tensor Products and Algebras

$V, W$ are vector spaces over the field $K$. A vector space $T$ over $K$ together with a bilinear map $\phi : V \times W \to T$ is called a tensor product of $V$ and $W$ if for every vector space $U$ over $K$ and every bilinear map $f : V \times W \to U$ a unique linear map $h : T \to U$ exists such that $f = h \circ \phi$. The elements of $(T, \phi)$ are called tensors (singular: tensor).

Let $V$ and $W$ be of finite dimension. Then even the biggest image set of any bilinear map $f$ fits in a vector space $U$ of dimension $\dim V \cdot \dim W$. Say for a map $f$ the images of the basis vectors $(b_i)$ of $V$ and $(c_j)$ of $W$ are linearly independent (we can just construct such a map, since a bilinear map can be defined by the images of basis vectors alone) and every other image of vectors taken from $V \times W$ is in the span of $(f(b_i, c_j))$:
$$\forall (v, w) \in V \times W : v = v^i b_i,\ w = w^j c_j :\quad f(v, w) = v^i w^j f(b_i, c_j)$$
Now the space $T$ can't be a single dimension smaller than $U$ with $\dim U = \dim V \cdot \dim W$ (the smallest vector space in terms of dimension that can "hold" the image of $f$), because otherwise we would not even be able to find a homomorphism $h$.

More precisely: the sequence $(\phi(b_i, c_j))$ has to be linearly independent, since we chose an $f$ (we can do so since $f$ can be chosen arbitrarily by definition) such that the sequence $(f(b_i, c_j))$ is linearly independent. Let's assume $(\phi(b_i, c_j))$ is linearly dependent; then there exist scalars $\lambda^{ij}$, at least one of them non-zero, such that $\lambda^{ij} \phi(b_i, c_j) = 0$. Applying $h$ yields
$$h(\lambda^{ij} \phi(b_i, c_j)) = \lambda^{ij} h(\phi(b_i, c_j)) = 0$$
But it is also $h(\phi(b_i, c_j)) = f(b_i, c_j)$ by definition:
$$h(\lambda^{ij} \phi(b_i, c_j)) = \lambda^{ij} h(\phi(b_i, c_j)) = 0 = \lambda^{ij} f(b_i, c_j)$$
This is a contradiction to our choice of $f$ with $(f(b_i, c_j))$ linearly independent, so $(\phi(b_i, c_j))$ must be linearly independent as well. It is easier to see that the dimension of $T$ can't be bigger than the dimension of $U$, because then we would not be able to describe a unique linear map $h$ such that $f = h \circ \phi$.

Now we can define $\otimes : V \times W \to (V \otimes W := T) : (v, w) \mapsto v \otimes w := \phi(v, w)$ and say: if $(b_i)$ and $(c_j)$ are bases of $V$ and $W$, then $(b_i \otimes c_j)$ is a basis of $V \otimes W$.

The existence of tensor products is more or less easy to see, at least in this case where only finite dimensional vector spaces are treated. We just construct a vector space $T$ of dimension $\dim V \cdot \dim W$ over the field $K$ and define a bilinear map $\phi$ by fixing the images $b_i \otimes c_j$ of the basis vectors $(b_i, c_j)$ of $V \times W$. Then we can find, for any $K$-vector space $U$ and any bilinear map $f : V \times W \to U$, a linear map $h$ such that $f = h \circ \phi$. It is just required to set $h(b_i \otimes c_j) := f(b_i, c_j)\ \forall i, j$. Note that $h$ is unambiguously defined by the images of the basis vectors of $T$. So far it can also be seen that $T$ can be any vector space over $K$ as long as it has the required dimension.
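As a small illustration (my own sketch, not from the original post): in coordinates the construction above says that the coordinate representation of $v \otimes w$ in the basis $(b_i \otimes c_j)$ is just the outer product $v^i w^j$ of the coordinate vectors:

    #include <iostream>
    #include <vector>

    // Returns the dimV x dimW coordinate matrix (v (x) w)^{ij} = v^i * w^j
    // in the basis (b_i (x) c_j), assuming real coordinate vectors.
    std::vector<std::vector<double>>
    tensor(const std::vector<double>& v, const std::vector<double>& w) {
        std::vector<std::vector<double>> t(v.size(), std::vector<double>(w.size()));
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = 0; j < w.size(); ++j)
                t[i][j] = v[i] * w[j];
        return t;
    }

    int main() {
        auto t = tensor({1.0, 2.0}, {3.0, 4.0, 5.0});  // dim V * dim W = 6 entries
        for (const auto& row : t) {
            for (double x : row) std::cout << x << ' ';
            std::cout << '\n';
        }
    }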

The tensor product $V \otimes W$ exists, and every other tensor product of $V$ and $W$ is isomorphic to $V \otimes W$.

And since all tensor products differ only by an isomorphism, we can write for vector spaces $V_1, V_2, V_3$ over $K$
$$V_1 \otimes (V_2 \otimes V_3) \cong (V_1 \otimes V_2) \otimes V_3 =: V_1 \otimes V_2 \otimes V_3$$
For $N$ vector spaces $V_i$ over $K$ we define
$$\bigotimes^N V_i := V_1 \otimes V_2 \otimes \cdots \otimes V_N, \qquad \bigotimes^0 V := K$$

Now we show:
For any vector space $U$ over $K$ and any $N$-linear map $f : \prod_{i=1}^{N} V_i \to U$ a unique homomorphism $h : \bigotimes^N V_i \to U$ exists such that $f = h \circ \otimes$ (where $\otimes : \prod_{i=1}^{N} V_i \to \bigotimes^N V_i$ is of course $N$-linear).

We know that for any bilinear map $f' : \left(\bigotimes^{N-1} V_i\right) \times V_N \to U$ a unique homomorphism $h : \bigotimes^N V_i \to U$ exists such that $f' = h \circ \phi$. Both maps $f$ and $f'$ are unambiguously defined by the images of a basis of their domains. When we look more closely we realize that both maps require exactly $\prod_{i=1}^{N} \dim V_i$ images. This means that for any map $f$ we can find a bilinear map $f'$ such that for all vectors $(v_1, v_2, \ldots, v_N) \in \prod_{i=1}^{N} V_i$
$$f(v_1, v_2, \ldots, v_N) = f'(v_1 \otimes v_2 \otimes \cdots \otimes v_{N-1},\ v_N) = h(v_1 \otimes v_2 \otimes \cdots \otimes v_N)$$

Further we define $\bigotimes^n_m V := \left(\bigotimes^n V\right) \otimes \left(\bigotimes^m V^*\right)$ and call the tensor product $\bigotimes^n_m V$ $n$ times contravariant and $m$ times covariant.

So the tensor product $\bigotimes^0_0 V = K$ would be the space of all scalars, $\bigotimes^1_0 V = V$ the space of all vectors and $\bigotimes^0_1 V = V^*$ the space of all linear forms $V \to K$.
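Since the basis construction above gives $\dim(V \otimes W) = \dim V \cdot \dim W$, and $\dim V^* = \dim V$ in finite dimension, a direct consequence (my own remark, not spelled out in the original) is the dimension formula
$$\dim \bigotimes^n_m V = (\dim V)^{n+m}$$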

Let $I$ be an index set and $(V_i)_{i \in I}$ a family of vector spaces over $K$; then
$$\bigoplus_{i \in I} V_i := \left\{ (v_i)_{i \in I} \in \prod_{i \in I} V_i \;\middle|\; \text{almost all } v_i = 0 \right\}$$
is called the direct sum of the vector spaces $V_i$ (see Wikipedia).

The sets
$$T(V) := \bigoplus_{i \ge 0} \bigotimes{}^i V = K \oplus V \oplus (V \otimes V) \oplus \cdots \qquad T(V, V^*) := \bigoplus_{i, j \ge 0} \bigotimes{}^i_j V$$
together with the multiplication, for $v = (v_i), w = (w_i) \in T(V)$ and $a = (a^i_j), b = (b^i_j) \in T(V, V^*)$,
$$v \otimes w := \Bigl(\sum_{i+j=n} v_i \otimes w_j\Bigr)_{n \ge 0} \qquad a \otimes b := \Bigl(\sum_{i+k=m,\; j+l=n} a^i_j \otimes b^k_l\Bigr)_{m, n \ge 0}$$
form algebras. $T(V)$ is called the tensor algebra and $T(V, V^*)$ the mixed tensor algebra.
The neutral element is $1 = (1_K, 0, 0, \ldots)$.
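As a small worked instance of the graded product (my own example): take two plain vectors $v = (0, v_1, 0, 0, \ldots)$ and $w = (0, w_1, 0, 0, \ldots)$ in $T(V)$. The only non-vanishing summand $v_i \otimes w_j$ with $i + j = n$ is $v_1 \otimes w_1$ at $n = 2$:
$$v \otimes w = (0,\ 0,\ v_1 \otimes w_1,\ 0, \ldots)$$
so the product of two degree-one elements sits in degree two, as expected.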


Saturday, April 28, 2012

Linear Forms induce Bilinear Forms

Here I make use of the Einstein convention to cut down some keyboard work.

Let $V$ be a vector space over the field $K$ and $V^*$ its dual space. Any map $F \in \mathrm{Hom}(V, V^*)$ induces a bilinear form $\langle v, w \rangle_F := F(v)(w)$, a more or less trivial fact.

$B := (b_i)$ ... basis sequence of $V$.

Any two vectors $v, w \in V$ can be unambiguously expressed by coordinates $v^i, w^i$ with respect to the above basis:
$$v = v^i b_i \; ; \; w = w^i b_i$$
By making use of the linearity and the commutativity of the underlying field we get
$$F(v)(w) = v^i w^j F(b_i)(b_j) = v^i F(b_i)(b_j) w^j$$
It can be directly seen that the field elements $F(b_i)(b_j) = \langle b_i, b_j \rangle_F$ shape a matrix, the so-called Gramian matrix. Further we realize that it depends on the bases chosen for $v$ and $w$. In this case we used the same basis for both vectors, but that is not essential.
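To make the coordinate formula concrete, here is a small sketch (my own, not from the post) that evaluates $\langle v, w \rangle_F = v^i G_{ij} w^j$ over the reals, with a made-up Gramian matrix G:

    #include <iostream>
    #include <vector>

    // Evaluates the bilinear form v^i G_ij w^j for a Gramian G_ij = F(b_i)(b_j).
    double bilinear(const std::vector<std::vector<double>>& G,
                    const std::vector<double>& v, const std::vector<double>& w) {
        double sum = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = 0; j < w.size(); ++j)
                sum += v[i] * G[i][j] * w[j];
        return sum;
    }

    int main() {
        // G = identity gives the standard dot product on R^2.
        std::vector<std::vector<double>> G = {{1.0, 0.0}, {0.0, 1.0}};
        std::cout << bilinear(G, {1.0, 2.0}, {3.0, 4.0}) << '\n';  // prints 11
    }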

Back to the beginning: $F$ induces a bilinear form, but can every bilinear form be induced by a mapping $V \to V^*$?

Let $B_V$ be the set of all bilinear forms $V \times V \to K$. One way to start working on our question is to show that $\mathrm{Hom}(V, V^*)$ and $B_V$ are equipotent. This can be done by constructing a bijective map.

So we focus on the map $H : \mathrm{Hom}(V, V^*) \to B_V : F \mapsto \langle \cdot\,, \cdot \rangle_F$.

First we show that $H$ is surjective, that is, for every $\beta \in B_V$ there exists an $F \in \mathrm{Hom}(V, V^*)$ such that $H(F) = \beta$. We just set $F(b_i) := \beta(b_i, \cdot)$ for all $i$, which defines a linear map $V \to V^*$ since $\beta$ is linear in each argument and a linear map is uniquely defined by the images of basis vectors, which we just gave. Now we need to show that $H(F) = \langle \cdot\,, \cdot \rangle_F = \beta$:
$$\beta(v, w) = v^i \beta(b_i, w) = v^i F(b_i)(w) = v^i \langle b_i, w \rangle_F = \langle v, w \rangle_F = H(F)(v, w)$$

Second we need to prove that $H$ is injective, that is, for all $F, G \in \mathrm{Hom}(V, V^*)$
$$F \ne G \Rightarrow H(F) \ne H(G)$$
Let $F \ne G$; then there exists at least a single index $k$ such that $F(b_k) \ne G(b_k)$, because otherwise the mappings would be equal (uniquely defined by the images of basis vectors). Further, if $H$ is injective there have to exist vectors $v, w \in V$ such that $H(F)(v, w) \ne H(G)(v, w)$. Suppose $H(F)(v, w) = H(G)(v, w)\ \forall v, w \in V$; then we get
$$\langle v, w \rangle_F = v^i \langle b_i, w \rangle_F = v^i \langle b_i, w \rangle_G = \langle v, w \rangle_G$$
and if we choose $v = b_i$, i.e. $v^j = \delta^j_i$ (we can do so since the vectors can be arbitrarily chosen):
$$\langle b_i, w \rangle_F = \langle b_i, w \rangle_G \Leftrightarrow F(b_i)(w) = G(b_i)(w) \quad \forall i\ \forall w \in V$$
In particular it is $F(b_k)(w) = G(b_k)(w)\ \forall w \in V$, in contradiction to our assumption $F \ne G$. So the map $H$ is injective and therefore bijective.

Conclusion: $|\mathrm{Hom}_K(V, V^*)| = |B_V|$ for finite dimensional vector spaces $V$, where $B_V$ is the set of all bilinear forms on $V$.

Note that this also suggests that every bilinear form is uniquely defined by the respective Gramian matrix!

Friday, April 27, 2012

Calculation of a Permutation's Sign

Let $\pi \in S_n$ be a permutation taken from the symmetric group; then it is easy to see that it can be expressed as a product of cyclic permutations $c_i \in S_n$ as follows: $\pi = \prod_{i=1}^{k} c_i$. And each cycle $c_i$ can be expressed as a product of transpositions $t_{ij} := (c_i^{j-1}(1)\ c_i^{j}(1)) \in S_n$ for $j = 1, 2, \ldots, |c_i| - 1$:
$$c_i = (1\ c_i(1)\ c_i^2(1)\ \ldots\ c_i^{|c_i|-1}(1)) = (1\ c_i(1))(c_i(1)\ c_i^2(1)) \cdots (c_i^{|c_i|-2}(1)\ c_i^{|c_i|-1}(1))$$
Here the expression $|c_i|$ is the order (cardinality) of the group $\langle c_i \rangle$. Also note that $c_i^{|c_i|} = \mathrm{id}$, where $\mathrm{id}$ is the identity of $S_n$.

A transposition $(i\ j)$ with $i < j$ has $2(j - i) - 1$ inversions. The inversions are the pairs $(i, k)$ for $k = i+1, i+2, \ldots, j$ and the pairs $(k, j)$ for $k = i+1, i+2, \ldots, j-1$, so it all comes down to basic counting. Note that every element besides $i, j$ is a fixed point. This means the sign of a (simple) transposition is $-1$.
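As a quick sanity check of the counting (my own example): take the transposition $(2\ 5) \in S_5$, which is $1, 5, 3, 4, 2$ in one-line notation. Its inversions are
$$(2, 3),\ (2, 4),\ (2, 5),\ (3, 5),\ (4, 5)$$
which is exactly $2(5 - 2) - 1 = 5$ pairs.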

The number of inversions of the identity is zero. Are there any other permutations besides the identity with no inversions? Let $\alpha \in S_n$ have zero inversions but $\alpha \ne \mathrm{id}$. Since this permutation doesn't have any inversions, the chain $\alpha(1) < \alpha(2) < \ldots < \alpha(n)$ must be valid, because $i < j \Rightarrow \alpha(i) < \alpha(j)$ is given by the definition of this permutation. In other words $\alpha(1)$ must be the smallest element of the sequence $(1, 2, \ldots, n)$, which is $1$; $\alpha(2)$ the second smallest element, which is $2$; and so on $\Rightarrow \alpha = \mathrm{id}$. There are no permutations besides the identity with no inversions.

$\pi$ is a permutation which is not the identity, so it has inversions. Can we find a simple transposition $\tau$ such that $\tau\pi$ has fewer inversions than $\pi$? If there is such a simple transposition, then there also exists an index $i$ with $\pi(i) > \pi(i+1)$. And this $i$ has to exist, because otherwise we would have $\pi(i) < \pi(i+1)$ for all $i = 1, 2, \ldots, n-1$, which is only true for the identity, a contradiction to the assumption $\pi \ne \mathrm{id}$. Therefore we can find, for every non-trivial permutation, a simple transposition that reduces its inversion count by one.

A transposition $(i\ j)$ as seen above has $2(j-i)-1$ inversions. Therefore it takes $2(j-i)-1$ simple transpositions $\tau_h$ such that
$$\left(\prod_{h=1}^{2(j-i)-1} \tau_h\right)(i\ j) = \mathrm{id}$$
and such transpositions do exist. This also means that we can write every permutation as a product of simple transpositions.

The sign mapping $\mathrm{sgn} : S_n \to \{1, -1\}$ is a group homomorphism. To show that, we first need to realize that the sign of a permutation $\pi$ changes when we multiply the permutation with a simple transposition $\tau$, since we either add or remove a single inversion:
$$\mathrm{sgn}(\pi\tau) = \mathrm{sgn}(\tau\pi) = -\mathrm{sgn}(\pi) = \mathrm{sgn}(\tau)\,\mathrm{sgn}(\pi)$$
In particular we get for simple transpositions $\tau_1, \tau_2$:
$$\mathrm{sgn}(\tau_2\tau_1) = \mathrm{sgn}(\tau_2)\,\mathrm{sgn}(\tau_1)$$

Let $\pi, \pi' \in S_n$ be permutations which are not the identity. There exist $N, N'$ simple transpositions $\tau_i, \tau'_i$ such that $\pi = \prod_{i=1}^{N} \tau_i$ and $\pi' = \prod_{i=1}^{N'} \tau'_i$. This yields
$$\mathrm{sgn}(\pi\pi') = \mathrm{sgn}\left(\left(\prod_{i=1}^{N} \tau_i\right)\left(\prod_{i=1}^{N'} \tau'_i\right)\right) = \mathrm{sgn}\left(\prod_{i=1}^{N} \tau_i\right)\mathrm{sgn}\left(\prod_{i=1}^{N'} \tau'_i\right) = \mathrm{sgn}(\pi)\,\mathrm{sgn}(\pi')$$

Let's take a closer look at the permutation $\pi$ as it is defined directly above. We come to the formula
$$\mathrm{sgn}(\pi) = \prod_{i=1}^{N} \mathrm{sgn}(\tau_i) = (-1)^N$$
This tells us that if $\pi$ is even then $N$ must be even as well. The same idea applies to $\pi$ being an odd permutation.

Back to cyclic permutations. Every transposition is odd, and therefore every odd cyclic permutation must be a product of an odd number of (simple) transpositions. The cycles $c_i$ which we defined at the beginning are odd if $|c_i|$ is even, because we can express $c_i$ as the product of $|c_i| - 1$ transpositions. The sign of $\pi$ is calculated as
$$\mathrm{sgn}(\pi) = \prod_{i=1}^{k} \mathrm{sgn}(c_i)$$
The factors on the right side of the equation which equal $1$ can be discarded in the process of the calculation (neutral element). So we can come to the conclusion:

In order to calculate the sign of a permutation $\pi$, we express it as a product of cyclic permutations. Then we count the even cycles (cycles of even length); let us assume there are $K$ such cycles, and calculate the sign
$$\mathrm{sgn}(\pi) = (-1)^K$$

This method is straightforward and not as error-prone as counting each single inversion manually.
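Here is a minimal sketch (my own, not from the original post) of this method in C++. The permutation is given in one-line notation as a 0-indexed vector, i.e. perm[i] = $\pi(i)$:

    #include <iostream>
    #include <vector>

    // Computes sgn(pi) by decomposing pi into cycles and counting the
    // cycles of even length, exactly as described above: sgn(pi) = (-1)^K.
    int sign(const std::vector<int>& perm) {
        std::vector<bool> visited(perm.size(), false);
        int even_cycles = 0;  // K: number of cycles of even length
        for (std::size_t i = 0; i < perm.size(); ++i) {
            if (visited[i]) continue;
            int len = 0;  // length of the cycle containing i
            for (std::size_t j = i; !visited[j]; j = perm[j]) {
                visited[j] = true;
                ++len;
            }
            if (len % 2 == 0) ++even_cycles;
        }
        return (even_cycles % 2 == 0) ? 1 : -1;
    }

    int main() {
        // pi = (0 1)(2 3 4): one cycle of even length, so K = 1 and sgn = -1.
        std::vector<int> perm = {1, 0, 3, 4, 2};
        std::cout << sign(perm) << '\n';  // prints -1
    }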

Sunday, April 22, 2012

Theorem of Cayley-Hamilton

Let $V$ be a vector space over the field $K$ and $f \in \mathrm{Hom}_K(V, V)$. If we define $f^\lambda := \lambda f$ for every $\lambda \in K$, then we can substitute the linear map into the characteristic polynomial $\chi_f$ (ring homomorphism). Further, the linear map $\phi : \mathrm{Hom}_K(V, V) \to K^{n \times n},\ f \mapsto M_E^E(f)$ is an isomorphism, where $M_E^E(f) =: M$ is the transformation matrix of $f$ with respect to the canonical basis $E$ of $V$. We define again $M^\lambda := \lambda M\ \forall \lambda \in K$ and substitute in $\chi_f$:
$$\chi_f(\phi(f)) = \det(M - \phi(f)\, I) = \det(M - M) = 0 \in K^{n \times n}$$
Also note that $\phi(f)^n = M^n = \phi(f^n)$, which yields for our polynomial, where the $\lambda_i \in K$ are coefficients,
$$\chi_f(\phi(f)) = \sum_{i=0}^{n} \lambda_i \phi(f)^i = \phi\left(\sum_{i=0}^{n} \lambda_i f^i\right) = \phi(\chi_f(f))$$
Since $\phi$ is an isomorphism we get $\phi^{-1}(0) = 0 \in \mathrm{Hom}_K(V, V)$ and therefore
$$\chi_f(f) = \chi_f(\phi^{-1}(\phi(f))) = \phi^{-1}(\chi_f(\phi(f))) = \phi^{-1}(0) = 0$$
Every linear map is a root of its characteristic polynomial.
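A quick numeric sanity check of the statement (my own example, not part of the original argument):
$$M = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad \chi_M(\lambda) = \det(M - \lambda I) = \lambda^2 - 5\lambda - 2$$
$$\chi_M(M) = M^2 - 5M - 2I = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix} - \begin{pmatrix} 5 & 10 \\ 15 & 20 \end{pmatrix} - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 0$$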

Thursday, April 12, 2012

Covectors and Scalar Products

Let $V$ be a vector space over the field $K$, $B := (b_i)$ a basis sequence of $V$ and $f \in V^*$. There exist field elements $\lambda_i$ such that $f = \lambda_i b^i$, where $(b^i)$ is the dual basis. This yields, if we calculate $f(b_i)$:
$$f(b_i) = (\lambda_j b^j)(b_i) = \lambda_j \delta^j_i = \lambda_i$$
Also any vector $v \in V$ can be expressed as a linear combination of the basis vectors with coordinates $v^i$: $v = v^i b_i$, and if we apply the linear form $f$ we get
$$f(v) = (\lambda_i b^i)(v^j b_j) = \lambda_i v^j \delta^i_j = \lambda_i v^i = \left\langle [v]_B,\ (f(b_i))_{i=1,2,\ldots,\dim V} \right\rangle$$
where $[v]_B$ is the coordinate vector of $v$ regarding the basis $B$.
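A small worked example (my own) in $V = \mathbb{R}^2$ with the standard basis: take $f = 3b^1 + 2b^2$, so $f(b_1) = 3$ and $f(b_2) = 2$. For $v = v^1 b_1 + v^2 b_2$ the formula gives
$$f(v) = 3v^1 + 2v^2 = \left\langle (v^1, v^2),\ (3, 2) \right\rangle$$
which is exactly the scalar product of the coordinate vector with the tuple $(f(b_i))_i$.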

Tuesday, April 10, 2012

Kernel and Image of Linear Transformations

Let $V, W$ be vector spaces over the field $K$ and $f : V \to W$ a linear transformation.

Kernel of f is a subspace of V

The kernel of $f$, also denoted as $\ker f$, is the set of vectors which are mapped to zero by $f$. The zero vector of $V$ is in the kernel since
$$f(0_V) = f(0_K \cdot 0_V) = 0_K \cdot f(0_V) = 0_W$$
so $\ker f$ is never empty. Let us see if the kernel is closed under vector addition. To see that, just take any two vectors $v, w \in \ker f$ and check if their sum is also in $\ker f$.
$$u := v + w;\quad f(v + w) = f(v) + f(w) = 0 + 0 = 0 = f(u) \Rightarrow u \in \ker f$$
The same idea applies to scalar multiplication.
$$\lambda \in K;\ u := \lambda v;\quad f(\lambda v) = \lambda f(v) = \lambda \cdot 0 = 0 = f(u) \Rightarrow u \in \ker f$$
So the kernel of $f$ is a subspace of $V$.

Image of f is a subspace of W 

The image of our linear transformation, denoted as $\mathrm{img}\, f$ or $\mathrm{im}\, f$, is the set
$$\mathrm{img}\, f := \{w \in W \mid \exists v \in V : f(v) = w\}$$
Any vector taken from the kernel of $f$ is transformed to the zero vector in $W$, so the image of $f$ is not empty since $\ker f \ne \emptyset$.
$$\forall w, w' \in \mathrm{img}\, f : \exists v, v' \in V : w = f(v) \wedge w' = f(v')$$
$$w + w' = f(v) + f(v') = f(v + v')$$
$$\lambda \in K;\quad \lambda w = \lambda f(v) = f(\lambda v)$$
The image is closed under vector sums and scalar multiplication, which makes it a subspace of $W$.

Dimensional Relationship

The linear transformation $f : \mathbb{R}^3 \to \mathbb{R}^3 : (x, y, z) \mapsto (0, y, z)$ satisfies $\dim \ker f + \dim \mathrm{img}\, f = \dim \mathbb{R}^3$ and suggests that the more general formula, for any linear transformation $f : V \to W$,
$$\dim \ker f + \dim \mathrm{img}\, f = \dim V$$
is valid as well.
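For the concrete map above this is easy to check directly (my own verification): $\ker f = \{(x, 0, 0) \mid x \in \mathbb{R}\}$ has dimension $1$, $\mathrm{img}\, f = \{(0, y, z) \mid y, z \in \mathbb{R}\}$ has dimension $2$, and indeed $1 + 2 = 3 = \dim \mathbb{R}^3$.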

To verify such a relationship we need to examine the properties of the kernel and image of $f$, and to do that we can use quotient spaces. These spaces are useful when you want to examine a subspace by making use of the properties of its superspace. In this case we want to look at the space $V/\ker f$, for which we have the dimensional relationship
$$\dim V = \dim(V/\ker f) + \dim \ker f$$
A way to prove the initial guess is to show that $\dim(V/\ker f) = \dim \mathrm{img}\, f$, or in other words that $V/\ker f \cong \mathrm{img}\, f$. One way to do so is to construct an isomorphism $\tilde f : V/\ker f \to \mathrm{img}\, f$. As a first guess of such a linear transformation one can suggest, for all $v + \ker f$,
$$\bar f(\{v + u \mid u \in \ker f\}) := \{f(v) + f(u) \mid u \in \ker f\} = \{f(v)\}$$
When we define the sum of vectors and a scalar multiplication on $\mathrm{img}\, \bar f$ for all $\{v\}, \{w\} \in \mathrm{img}\, \bar f$ and $\lambda \in K$ as follows
$$\{v\} + \{w\} := \{v + w\} \qquad \lambda \{v\} := \{\lambda v\}$$
the set $\mathrm{img}\, \bar f$ becomes a vector space, and it is also easy to see that $\bar f$ is a linear transformation. Further we define a trivial linear transformation $\hat f : \mathrm{img}\, \bar f \to W : \{v\} \mapsto v$ and set $\tilde f := \hat f \circ \bar f$. At first we check if $\bar f$ is injective and take any two vectors $v + \ker f, w + \ker f \in V/\ker f$ with $v + \ker f \ne w + \ker f$. If we apply $\bar f$ to their difference we get
$$\bar f((v + \ker f) - (w + \ker f)) = \{f(v - w)\} = \{f(v) - f(w)\}$$
and since the equivalence classes $v + \ker f$ and $w + \ker f$ are not equal, we have $v - w \notin \ker f$, which yields $f(v - w) = f(v) - f(w) \ne 0 \Rightarrow f(v) \ne f(w)$, or in other words the linear transformation $\bar f$ is injective.
$\tilde f$ is also a surjection onto $\mathrm{img}\, f$. Suppose there is a vector $v \in V$ for which no $w + \ker f \in V/\ker f$ can be found such that $\tilde f(w + \ker f) = f(v)$. If $v \in \ker f$, then
$$f(v) = 0 = \tilde f(0 + \ker f)$$
shows that such a $v$ does not exist, since we can simply pick $w = 0$ (remember that any member of an equivalence class can be a representative). The case $f(v) \ne 0$ leads directly to $\tilde f(v + \ker f) = f(v)$ if we set $w = v$. As a result we can say:

For vector spaces $V$ and $W$ and a linear mapping $f : V \to W$ the following relation is true:
$$\dim \ker f + \dim \mathrm{img}\, f = \dim V$$

Sunday, April 8, 2012

Quotient Spaces

This text is about the construction of quotient spaces. Mainly to test Blogger in conjunction with MathJax.

Given a vector space $V$ over a field $K$ and a subspace $U \subseteq V$ one can define the relation
$$\forall v, w \in V : v \sim w :\Leftrightarrow v - w \in U$$
which actually is an equivalence relation since it is
reflexive:
$$\forall v \in V : v \sim v \Leftrightarrow v - v = 0 \in U$$
symmetric:
$$\forall v, w \in V : v \sim w \Leftrightarrow \exists! u \in U : v - w = u \Rightarrow w - v = -u \in U \Rightarrow w \sim v$$
transitive:
$$\forall v, w, q \in V : v \sim q \wedge q \sim w \Rightarrow \exists! u \in U : v - q = u \wedge \exists! u' \in U : q - w = u'$$
$$\Rightarrow u + u' = v - q + q - w = v - w \in U \Rightarrow v \sim w$$
The equivalence class $[v]$ of $v \in V$ is given by
$$[v] = \{w \in V \mid v \sim w\}$$
$$\forall w \in [v] : v \sim w \Leftrightarrow \exists! u \in U : v - w = u \Leftrightarrow w = v - u$$
which leads us to assume that
$$v + U := \{v + u \mid u \in U\} = [v]$$
First we check if $v + U \subseteq [v]$, so let $w \in v + U$; then there exists $u \in U$ such that $w = v + u$. The last equation leads to $w - v = u \Rightarrow w \sim v \Rightarrow w \in [v]$. Next we take a look at $[v] \subseteq v + U$. For every $w \in [v]$ we have $v \sim w \Leftrightarrow \exists! u \in U : v - w = u \Rightarrow w = v - u \in v + U$. So the assumption
$$v + U = [v]$$
is valid.

Quotient Space


The equivalence classes $v + U$ of $V$ define a vector space $V/U$, a so-called quotient space, if we define the vector addition
$$[v] + [w] := [v + w] \quad \forall v, w \in V$$
and the scalar multiplication
$$\lambda [v] := [\lambda v] \quad \forall \lambda \in K;\ v \in V$$
Since an equivalence class can be represented by several vectors, the first step should be to show that the above operations are actually well defined. So we take vectors $v, v', w, w' \in V$ with $[v] = [v'] \wedge [w] = [w']$, which should yield $[v + w] = [v' + w']$, or in other words
$$v + w \sim v' + w' \Leftrightarrow v - v' + w - w' \in U$$
Because there are $u, u' \in U$ such that $v - v' = u$ and $w - w' = u'$ we get
$$v - v' + w - w' = u + u' \in U$$
which just shows that $v + w$ and $v' + w'$ describe the same equivalence class. The same idea can be applied to show that the scalar multiplication is well defined as well. What is left is the proof that the addition and the scalar multiplication satisfy the vector space axioms, which is just long-winded and technical, but quite easy.

What about the dimension of a quotient space? Let's assume the vector space $V$ is of finite dimension $n$; then $\dim U =: m \le n$. So we can take a basis sequence $(b_i)_{i=1,2,\ldots,m}$ of $U$ and extend it to a basis sequence of $V$ with another vector sequence $(b_i)_{i=m+1,m+2,\ldots,n}$. The equivalence class of any vector taken from $U$ is actually the subspace $U$ itself. We can partition the vector space $V \setminus U$ with vector subspaces $V', W'$. Then we get for any non-zero $v \in V', w \in W'$
$$v + U \ne w + U$$
since $v \sim w$ does not hold. We can extend this idea and realize that the equivalence classes $b_i + U$ for $i = m+1, m+2, \ldots, n$ are pairwise distinct, and for $i = 1, 2, \ldots, m$ they are equal to $0 + U$. We can assume that
$$B := (b_i + U)_{i=m+1,m+2,\ldots,n}$$
is a basis sequence of the quotient space. So this sequence should be linearly independent as well as a generating system of the quotient space. When we take a look at the following equation
$$0 + U = \sum_{i=m+1}^{n} \lambda_i b_i + U$$
whereby the $\lambda_i$ are field elements of $K$, then we can almost directly see that the presumed basis of the quotient space is indeed linearly independent; just note that $\sum_{i=m+1}^{n} \lambda_i b_i = 0 \Rightarrow \lambda_i = 0\ \forall i$, and further we can't express any non-zero element of $U$ as a linear combination of the vectors $(b_i)_{i=m+1,m+2,\ldots,n}$. What's left is to show that $B$ spans $V/U$. So we take a vector $v + U \in V/U$. To avoid any triviality we let $v \notin U$. So there should exist field elements $\lambda_i$ such that
$$v + U = \sum_{i=m+1}^{n} \lambda_i b_i + U$$
Such elements do exist: since $v \in V$ we can expand $v = \sum_{i=1}^{n} \lambda_i b_i$ in the full basis, and the terms with $i \le m$ lie in $U$, so they vanish in the equivalence class. $B$ is shown to be linearly independent as well as a generating system of $V/U$ and therefore a basis sequence. As a direct result we also get the dimension formula of quotient spaces
$$\dim V = \dim(V/U) + \dim U$$
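A tiny example (my own, not from the original post) to see the formula in action: let $V = \mathbb{R}^2$ and $U = \{(x, 0) \mid x \in \mathbb{R}\}$. The equivalence classes $v + U$ are the horizontal lines $\{(x, v_2) \mid x \in \mathbb{R}\}$, each determined by the second coordinate alone, so $((0, 1) + U)$ is a basis sequence of $V/U$ and
$$\dim(V/U) = \dim V - \dim U = 2 - 1 = 1$$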