The Physics and Mathematics Guild

Tags: physics, mathematics, science, universe 

Tensors

nonameladyofsins

PostPosted: Thu Oct 05, 2006 7:03 am


Ok, so far I've got.... two vectors paired up. I believe Gray Wanderer was explaining this to me in the ex-Mathematics Forum; it was actually a very good and in-depth response toward building some intuitive concepts of what a tensor is. However, that post got deleted along with the cessation of the existence of said forum. I am very sad this entry was taken out; I wanted to revisit it because I couldn't grasp the entirety of the concept immediately. I shall look up some initial information. I would eventually like to know what tensors are. Can anyone explain? Anything you understand is fine, or anything you care to, or any links you know of will be appreciated.
PostPosted: Thu Oct 05, 2006 8:11 am


From http://en.wikipedia.org/wiki/Tensor
Quote:
Note: This article attempts to provide a non-technical introduction to the idea of tensors, and to provide an introduction to the articles which describe different, complementary treatments of the theory of tensors in detail.


So it would seem to be a good starting point.

Dave the lost


VorpalNeko
Captain

PostPosted: Thu Oct 12, 2006 11:06 pm


Before one can truly understand tensors and their use, it is advisable to get some grounding in linear algebra, topology, etc. A somewhat superficial "motivation" for tensors, based on linear algebra alone, will precede the real thing. Hopefully, that will make it more understandable.

Given a vector space V over some field K, the space of linear functionals V' = {l:V→K|l linear} is called the dual space of V. It is easy to show that V' is also a vector space over the same field K; its elements are sometimes called covectors in contexts that mix V and V', to differentiate between them. If V is finite-dimensional, then dim V = dim V' = n for some n. By picking any basis, both are isomorphic to K^n and hence to each other. Additionally, the double-dual space V" = {ξ:V'→K|ξ linear} can be identified directly with V itself like this: ξ in V" is identified with x in V such that ξ(l) = l(x) for every l in V'. For this reason, the dual of the dual of V is often simply taken to be the original space V (this is not generally possible for infinite-dimensional spaces, however).
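To make the dual-space idea concrete, here is a small sketch in Python (my own illustration; the names and components are made up): a covector on R^2 is modeled as an ordinary function that maps a vector to a scalar, linearly.

```python
# A sketch of V = R^2 over K = R, with covectors as linear functions V -> R.
def covector(coeffs):
    """Build the linear functional l(x) = coeffs[0]*x[0] + coeffs[1]*x[1] + ..."""
    return lambda x: sum(c * xi for c, xi in zip(coeffs, x))

l = covector([2.0, -1.0])   # an element of V'

# l takes a vector and returns a scalar, and does so linearly.
print(l([3.0, 4.0]))  # 2*3 + (-1)*4 = 2.0
```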

I've no doubt you've seen this before, if not actually in those terms. In R^n, representing quantities with matrices, identify "column vectors" as just "vectors" with V = R^n. Interpret "row vector" as a function taking a vector as an argument through left-multiplication: (row-vector)(column-vector) = scalar, i.e., "row vectors" are linear functionals on V. Write vectors with superscripted indices, x^a, and covectors with subscripted ones, y_a. A covector y applied to x would be written as either y_a x^a or x^a y_a, with implicit summation over the index a (implicit summation over repeated indices is a very common notational shortcut initially introduced by Einstein). Note that in this notation, unlike the matrix one, the "product" is commutative in the sense that it is not ambiguous which is the function and which is the argument--y has lower index, so it is the covector (linear functional).
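For instance, the implicit summation y_a x^a can be written out in a few lines of Python (indices made explicit; the components here are arbitrary, just for illustration):

```python
# x^a (contravariant) and y_a (covariant) components in R^3.
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]

# y_a x^a: the repeated index a is summed over, yielding a scalar.
scalar = sum(y[a] * x[a] for a in range(3))
print(scalar)  # 4*1 + 5*2 + 6*3 = 32.0
```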

Now, call vectors x^a "contravariant" with valence (1,0) and covectors x_a "covariant" with valence (0,1), both of them "tensors". Note that if one is equipped with an inner (dot) product, then a vector x^a can be identified with the covector x_a as the linear functional that performs an inner product with its argument (in matrix notation, that's just the transpose): x_a y^a = x·y = ⟨x, y⟩.

Of course, this is quite open to generalization. For example, one can have a linear functional that takes two vectors as arguments. An easy example of that is the inner product itself: call it the tensor g with valence (0,2), i.e., no contravariant parts and two covariant parts. The inner product of vectors x^a and y^b is then ⟨x, y⟩ = g_ab x^a y^b. The "index lowering" operation alluded to above (contravariant→covariant) can be defined as x_b = g_ab x^a, so that ⟨x, y⟩ = x_b y^b as above. Also, g is assumed to be symmetric so that the notation is kept commutative. Otherwise, does g_ab x^a y^b mean g applied first to x (giving x_b) or first to y (giving y_a)? If g is symmetric, it does not matter.
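As a numeric sketch (using the ordinary Euclidean metric on R^3 purely for illustration), index lowering and the inner product look like this:

```python
n = 3
# Euclidean metric g_ab: the identity matrix, which is symmetric.
g = [[1.0 if a == b else 0.0 for b in range(n)] for a in range(n)]

x = [1.0, 2.0, 3.0]   # x^a
y = [4.0, 5.0, 6.0]   # y^b

# Index lowering: x_b = g_ab x^a (sum over the repeated index a).
x_low = [sum(g[a][b] * x[a] for a in range(n)) for b in range(n)]

# <x, y> = g_ab x^a y^b agrees with x_b y^b after lowering.
inner_g = sum(g[a][b] * x[a] * y[b] for a in range(n) for b in range(n))
inner_low = sum(x_low[b] * y[b] for b in range(n))
print(inner_g, inner_low)  # 32.0 32.0
```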

What about a regular matrix, i.e., a linear function that both takes and gives a vector? It can be identified as a tensor of valence (1,1): the transformation y = Tx in matrix notation is y^a = T^a_b x^b in tensor component notation. This is quite natural, as what it means is summation over the repeated index b, i.e., the covariant (row) part of T applied to contravariant (column) part of x--and matrix multiplication is defined as row-by-column! Matrix multiplication A = BC can be translated as A^a_b = B^a_c C^c_b.
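A quick sketch of that correspondence (components chosen arbitrarily): the contraction A^a_b = B^a_c C^c_b is literally row-by-column multiplication.

```python
n = 2
B = [[1.0, 2.0], [3.0, 4.0]]   # B^a_c
C = [[5.0, 6.0], [7.0, 8.0]]   # C^c_b

# A^a_b = B^a_c C^c_b: sum over the repeated index c,
# i.e., row a of B against column b of C.
A = [[sum(B[a][c] * C[c][b] for c in range(n)) for b in range(n)]
     for a in range(n)]
print(A)  # [[19.0, 22.0], [43.0, 50.0]]
```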

Just for practice, consider a linear transformation T:V→U. Its transpose is T':U'→V' satisfying (T'l)(x) = l(Tx), where x in V, l in U', and so T'l in V'. Translate both sides into component-tensor form and convince yourself that they're equivalent if commutativity is assumed.
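For anyone who wants to check their component calculation numerically afterward, here is a sketch with arbitrary components (it verifies the identity for one example, not in general):

```python
n = 2
T = [[1.0, 2.0], [3.0, 4.0]]   # T^b_a, a map V -> U
l = [5.0, 6.0]                  # l_b, a covector in U'
xv = [7.0, 8.0]                 # x^a in V

# Right-hand side: l(Tx).
Tx = [sum(T[b][a] * xv[a] for a in range(n)) for b in range(n)]
rhs = sum(l[b] * Tx[b] for b in range(n))

# Left-hand side: (T'l)(x), where (T'l)_a = l_b T^b_a.
Tpl = [sum(l[b] * T[b][a] for b in range(n)) for a in range(n)]
lhs = sum(Tpl[a] * xv[a] for a in range(n))
print(lhs == rhs)  # True: both sides are the contraction l_b T^b_a x^a
```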

---

There's a lot more to tensors, particularly their connection to calculus on manifolds, but before that, I'd have to give some substantial background information. Tell me what you think of this so far.
PostPosted: Wed Nov 08, 2006 8:42 am


We're just doing this stuff in the differential geometry/gen rel course... riemann curvature ftw...

xsparkledovex


VorpalNeko
Captain

PostPosted: Wed Nov 08, 2006 8:20 pm


I still wonder if poweroutage has actually found this helpful, and if so, whether I should continue to the really interesting bits from differential geometry... And if not, whether it was because my explanation was too obtuse or because she knew those basics already.
PostPosted: Sun Nov 19, 2006 4:56 pm


VorpalNeko
I still wonder if poweroutage has actually found this helpful, and if so, whether I should continue to the really interesting bits from differential geometry... And if not, whether it was because my explanation was too obtuse or because she knew those basics already.


no it's helpful, I've only taken basic linear algebra. I didn't understand your definition of a dual vector space very well. I quoteth
"Given a vector space V over some field K, the space of linear functionals V' = {l:V→K|l linear} is called the dual space of V" mostly because the notation in the part in bold isn't very clear. Do you mean absolute-value brackets, or a number, or some integer 'l' such that V maps onto K?

nonameladyofsins


VorpalNeko
Captain

PostPosted: Mon Nov 20, 2006 7:51 am


In V' = {l:V→K|l linear}, l:V→K means a function l from V to K, where K is the field over which V is defined (most usually the real or complex numbers). The bar | is a typical way in set notation to say "such that" (sometimes also :, but that would be confusing when mixed with the standard notation for functions).

Out of curiosity, do you see the → character as an arrow? I'm posting in Unicode [UTF-8], so if your browser's character encoding is set to something else, it may display something completely different.
PostPosted: Mon Nov 20, 2006 10:33 am


yes i do see the arrow. We use ':' to denote 'such that' and also, don't you think that 'l' and '|' can be confused and/or unclear.
V'= {f:v->K : f is linear} are there two such that's? if '|' is the such that what is the double colon? The problem is then just notation, we use different notation.

nonameladyofsins


VorpalNeko
Captain

PostPosted: Mon Nov 20, 2006 12:28 pm


poweroutage
yes i do see the arrow. We use ':' to denote 'such that' and also, don't you think that 'l' and '|' can be confused and/or unclear.

It honestly didn't occur to me.

poweroutage
V'= {f:v->K : f is linear} are there two such that's? if '|' is the such that what is the double colon? The problem is then just notation, we use different notation.

No; f:A→B is standard notation for a function with domain A and codomain B. I guess that instead of 'l', I should have picked some other symbol for the function itself.
PostPosted: Mon Nov 20, 2006 5:35 pm


VorpalNeko
poweroutage
yes i do see the arrow. We use ':' to denote 'such that' and also, don't you think that 'l' and '|' can be confused and/or unclear.

It honestly didn't occur to me.

poweroutage
V'= {f:v->K : f is linear} are there two such that's? if '|' is the such that what is the double colon? The problem is then just notation, we use different notation.

No; f:A→B is standard notation for a function with domain A and codomain B. I guess that instead of 'l', I should have picked some other symbol for the function itself.


we use the expression f:A→B to denote a function that maps from A to B.... ie. f: R^n -> R is a function that maps from a multidimensional space to a line. eek is it semantics or notation that's getting us mixed up?

nonameladyofsins


VorpalNeko
Captain

PostPosted: Mon Nov 20, 2006 5:50 pm


poweroutage
we use the expression f:A→B to denote a function that maps from A to B.... ie. f: R^n -> R is a function that maps from a multidimensional space to a line. eek is it semantics or notation that's getting us mixed up?

Semantics, meseems. The domain is the set from which the function maps, and the codomain is the set to which it maps. E.g., in f:R^n→R, R^n is the domain and R is the codomain.