Let V, W be vector spaces over the field K. A pair (T, ϕ) consisting of a vector space T over K and a bilinear map ϕ: V×W → T is called a tensor product of V and W if for every vector space U over K and every bilinear map f: V×W → U there exists a unique linear map h: T → U such that f = h∘ϕ. The elements of T are called tensors (singular: tensor).
Let V and W be of finite dimension. Then even the largest possible image of any bilinear map f fits into a vector space U of dimension dimV·dimW. Indeed, choose f such that the images of the basis vectors (b_i) of V and (c_j) of W are linearly independent (we can construct such a map, since a bilinear map is determined by the images of the basis-vector pairs alone); every other image of a pair from V×W then lies in the span ⟨f(b_i, c_j)⟩: for all (v, w) ∈ V×W with v = v^i b_i and w = w^j c_j we have f(v, w) = v^i w^j f(b_i, c_j). Now T cannot be even a single dimension smaller than U with dimU = dimV·dimW (the smallest vector space, in terms of dimension, that can "hold" the image of f), because otherwise we would not be able to find a linear map h with f = h∘ϕ at all.
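The dimension count can be checked numerically. This is a minimal sketch, assuming the concrete bilinear map f(v, w) = np.kron(v, w) (the Kronecker product) as an example of a map whose images of basis-vector pairs are linearly independent:

```python
import numpy as np

dimV, dimW = 3, 2
# standard bases (b_i) of V = K^3 and (c_j) of W = K^2
B = np.eye(dimV)
C = np.eye(dimW)

# stack all images f(b_i, c_j) = kron(b_i, c_j) as rows of a matrix
images = np.array([np.kron(b, c) for b in B for c in C])

# their span has dimension dimV * dimW = 6, so no space U of smaller
# dimension can hold the image of this f
assert np.linalg.matrix_rank(images) == dimV * dimW
```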
More precisely: the sequence (ϕ(b_i, c_j)) has to be linearly independent, since we chose an f (we may do so, as f can be chosen arbitrarily by the definition) such that the sequence (f(b_i, c_j)) is linearly independent. Assume (ϕ(b_i, c_j)) were linearly dependent; then there exist scalars λ^{ij}, at least one of them nonzero, such that λ^{ij} ϕ(b_i, c_j) = 0. Applying h yields h(λ^{ij} ϕ(b_i, c_j)) = λ^{ij} h(ϕ(b_i, c_j)) = 0. But also h(ϕ(b_i, c_j)) = f(b_i, c_j) by definition, so h(λ^{ij} ϕ(b_i, c_j)) = λ^{ij} h(ϕ(b_i, c_j)) = 0 = λ^{ij} f(b_i, c_j), which contradicts our assumption that (f(b_i, c_j)) is linearly independent. So (ϕ(b_i, c_j)) must be linearly independent as well. It is easier to see that the dimension of T cannot be bigger than the dimension of U, because then the linear map h with f = h∘ϕ would not be unique.
Now we can define ⊗: V×W → (V⊗W := T), (v, w) ↦ v⊗w := ϕ(v, w) and say: if (b_i) is a basis of V and (c_j) a basis of W, then (b_i ⊗ c_j) is a basis of V ⊗ W.
The existence of tensor products is more or less easy to see, at least in this case, where only finite-dimensional vector spaces are treated. We just construct a vector space T of dimension dimV·dimW over the field K and define a bilinear map ⊗ by prescribing the images b_i ⊗ c_j of the basis-vector pairs (b_i, c_j). Then for any K-vector space U and any bilinear map f: V×W → U we can find a linear map h such that f = h∘⊗: it suffices to set h(b_i ⊗ c_j) := f(b_i, c_j) for all i, j. Note that h is unambiguously determined by the images of the basis vectors of T. It can also be seen that T can be any vector space over K, as long as it has the required dimension.
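The construction of h from f can be sketched numerically. This assumes V ⊗ W is realized as K^(dimV·dimW) with b_i ⊗ c_j = np.kron(b_i, c_j), and uses a hypothetical bilinear f encoded by a coefficient array F:

```python
import numpy as np

rng = np.random.default_rng(0)
dimV, dimW, dimU = 3, 2, 4

# an arbitrary (hypothetical, randomly chosen) bilinear map f: V × W → U,
# encoded by coefficients: f(v, w)_k = sum_{i,j} F[k, i, j] v^i w^j
F = rng.standard_normal((dimU, dimV, dimW))
f = lambda v, w: np.einsum('kij,i,j->k', F, v, w)

# h is fixed by h(b_i ⊗ c_j) := f(b_i, c_j); since kron(b_i, c_j) is the
# standard basis vector with index i*dimW + j, the matrix of h is just F
# reshaped to (dimU, dimV·dimW)
H = F.reshape(dimU, dimV * dimW)

# verify f = h ∘ ⊗ on random vectors
v, w = rng.standard_normal(dimV), rng.standard_normal(dimW)
assert np.allclose(f(v, w), H @ np.kron(v, w))
```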
The tensor product V⊗W exists, and every other tensor product of V and W is isomorphic to V⊗W.
And since any two tensor products differ only by an isomorphism, we can write for vector spaces V_1, V_2, V_3 over K: V_1 ⊗ (V_2 ⊗ V_3) ≅ (V_1 ⊗ V_2) ⊗ V_3 =: V_1 ⊗ V_2 ⊗ V_3. For N vector spaces V_i over K we define ⊗^N V_i := V_1 ⊗ V_2 ⊗ ⋯ ⊗ V_N and ⊗^0 V := K.
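In the Kronecker-product model of the tensor product (an assumed concrete realization, as above), the isomorphism V_1 ⊗ (V_2 ⊗ V_3) ≅ (V_1 ⊗ V_2) ⊗ V_3 becomes an equality, since np.kron is associative:

```python
import numpy as np

rng = np.random.default_rng(1)
v1, v2, v3 = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

# both bracketings give the vector of all products v1_i * v2_j * v3_k,
# ordered the same way, so the isomorphism is the identity here
assert np.allclose(np.kron(v1, np.kron(v2, v3)),
                   np.kron(np.kron(v1, v2), v3))
```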
Now we show:
For any vector space U over K and any N-linear map f: ∏_{i=1}^N V_i → U a unique homomorphism h: ⊗^N V_i → U exists such that f = h∘⊗ (where ⊗: ∏_{i=1}^N V_i → ⊗^N V_i is of course N-linear).
We know that for any bilinear map f′: (⊗^{N−1} V_i) × V_N → U a unique homomorphism h: ⊗^N V_i → U exists such that f′ = h∘⊗. Both maps f and f′ are unambiguously determined by the images of a basis of their domains. Looking more closely, we realize that both maps require exactly ∏_{i=1}^N dimV_i images. This means that for any N-linear map f we can find a bilinear map f′ such that for all vectors (v_1, v_2, …, v_N) ∈ ∏_{i=1}^N V_i: f(v_1, v_2, …, v_N) = f′(v_1 ⊗ v_2 ⊗ ⋯ ⊗ v_{N−1}, v_N) = h(v_1 ⊗ v_2 ⊗ ⋯ ⊗ v_N).
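The factorization of an N-linear map through ⊗^N V_i can be checked for N = 3, again in the Kronecker model and with a hypothetical trilinear f given by a random coefficient array:

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, d3, dU = 2, 3, 2, 5

# a hypothetical 3-linear map f: V_1 × V_2 × V_3 → U via coefficients:
# f(a, b, c)_u = sum_{i,j,k} F[u, i, j, k] a^i b^j c^k
F = rng.standard_normal((dU, d1, d2, d3))
f = lambda a, b, c: np.einsum('uijk,i,j,k->u', F, a, b, c)

# the induced linear map h on V_1 ⊗ V_2 ⊗ V_3 (of dimension d1·d2·d3),
# determined by exactly d1·d2·d3 images of basis tensors
H = F.reshape(dU, d1 * d2 * d3)

a, b, c = rng.standard_normal(d1), rng.standard_normal(d2), rng.standard_normal(d3)
assert np.allclose(f(a, b, c), H @ np.kron(np.kron(a, b), c))
```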
Further we define ⊗^n_m V := (⊗^n V) ⊗ (⊗^m V^*) and call the tensor product ⊗^n_m V n-fold contravariant and m-fold covariant.
So the tensor product ⊗^0_0 V = K would be the space of all scalars, ⊗^1_0 V = V the space of all vectors, and ⊗^0_1 V = V^* the space of all linear forms V → K.
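A small illustration of the next mixed case: under the usual identification of ⊗^1_1 V with linear maps V → V (an identification assumed here, not derived above), a simple tensor v ⊗ φ acts as the rank-1 map w ↦ φ(w)·v:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])     # an element of V
phi = np.array([0.0, 1.0, -1.0])  # a linear form in V*, φ(w) = phi · w

# matrix of the rank-1 linear map w ↦ φ(w)·v
A = np.outer(v, phi)

w = np.array([2.0, 5.0, 1.0])
assert np.allclose(A @ w, (phi @ w) * v)
```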
Let I be an index set and (V_i)_{i∈I} a family of vector spaces over K. Then ⊕_{i∈I} V_i := {(v_i)_{i∈I} ∈ ∏_{i∈I} V_i | almost all v_i = 0} is called the direct sum of the vector spaces V_i (see wikipedia).
The sets T(V) := ⊕_{i≥0} ⊗^i V = K ⊕ V ⊕ (V⊗V) ⊕ … and T(V, V^*) := ⊕_{i,j≥0} ⊗^i_j V, together with the multiplications (for v = (v_i), w = (w_i) ∈ T(V) and a = (a^i_j), b = (b^i_j) ∈ T(V, V^*)) vw := (∑_{i+j=n} v_i ⊗ w_j)_{n≥0} and ab := (∑_{i+k=m, j+l=n} a^i_j ⊗ b^k_l)_{m,n≥0}, form algebras. T(V) is called the tensor algebra and T(V, V^*) the mixed tensor algebra.
The neutral element is 1=(1K,0,0,…).
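The graded multiplication on T(V) can be sketched as follows, modeling elements as finitely many nonzero components (degree ↦ vector in ⊗^i V ≅ K^(dimV^i), with ⊗ again realized as np.kron — a hypothetical minimal representation, not the only one):

```python
import numpy as np

dimV = 2

def mul(x, y):
    # (xy)_n = sum_{i+j=n} x_i ⊗ y_j, the graded convolution product
    out = {}
    for i, xi in x.items():
        for j, yj in y.items():
            out[i + j] = out.get(i + j, np.zeros(dimV ** (i + j))) + np.kron(xi, yj)
    return out

one = {0: np.array([1.0])}      # the neutral element 1 = (1_K, 0, 0, ...)
v = {1: np.array([1.0, 2.0])}   # a vector of V, sitting in degree 1

vv = mul(v, v)                  # v ⊗ v, concentrated in degree 2
assert np.allclose(vv[2], np.kron(v[1], v[1]))
assert np.allclose(mul(one, v)[1], v[1])   # 1 · v = v
```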