Vectors And Tensors Analysis Essay

A Student's Guide to Vectors and Tensors

An Introduction From the Author:

Welcome to the website for A Student’s Guide to Vectors and Tensors, and thanks for visiting. The purpose of this site is to supplement the material in the book by providing resources that will help you understand vectors and tensors. On this site, you’ll find:

Complete solutions to every problem in the book
From this page, you’ll be able to get a series of hints to help you solve each of the problems in the text, or you can see the full solution to each problem straight away. Just use the menu on the left to click on one of the Chapters, select “Problems,” and pick the problem for which you’d like some help.

Audio podcasts
For each of the six chapters in the text, I’ve recorded a series of audio podcasts in which I walk you through the major concepts contained within that chapter. These audio files can be streamed to your computer so you can hear them immediately, or you can use your favorite podcast-catching software to grab and store them.

Matrix Algebra Review
Also from this page, you can download a .pdf file with a brief review of matrix algebra. This is not a comprehensive introduction to the use of matrices, but it should provide enough of a review for you to understand the matrix algebra used in the text.

Alternative Forms of the Electromagnetic Field Tensor
As it says in the text, you may encounter several different forms of the EM field tensor in other texts, so this .pdf file provides a comparison between the forms used in several of the more popular EM texts.

I hope you find the material on this site helpful, and I’m interested in any comments you may have as to how this site (and the text) could be more useful to you. Thanks for your time.

To contact Dan Fleisch, send an e-mail to

Other articles

Best Introduction to Tensors

Tensors are mathematical objects that can be used to represent real-world systems. Properly introduced, their basic nature is not hard to understand. Tensors have proven useful in many engineering contexts, in fluid dynamics, and in the General Theory of Relativity. Knowledge of tensor math (called tensor calculus or the absolute differential calculus) is also useful in financial analysis, machine understanding (artificial intelligence), and the analysis of other complex systems.

Tensors are often introduced as being vectors on steroids. That is not an unfair generalization, but the way vectors are taught in high school often leads to misunderstandings when steroids are added.

Starting with the simplest case, we have scalars. These are single numbers that might represent readings from a scale. They are the numbers of ordinary arithmetic (they could also be complex numbers or other mathematical objects, but in this essay we'll stick to real numbers). Following high school algebra, we'll let a letter represent a number, but we will also add subscripts so that we can have lots of variables, rather than naming them a, b, x, y, z, etc. Variables are just variables, whatever symbol you use to represent them. A number of schemes are used for tensor variables, often depending on the branch of engineering or science they are used in. Here I'll use a's with subscripts to keep the discussion as general as possible. For coordinate axes I'll use x's, with subscripts if necessary [the x, y, z axes of high school become the x₁, x₂, and x₃ coordinate axes].

To show that we are working with a scalar, not just a number, we put the scalar in brackets, hence: [a].

We say scalars are one-dimensional. We also say scalars can be tensors of rank zero. Rank will be explained soon. Scalars are like individual Lego pieces, or atoms: although simple, they are the basis of far more complex, far more interesting systems.

Since scalars are so simple, we can use them to introduce other topics important to tensor analysis: fields and coordinate systems. When we say a field we mean a space with objects at every point. If the objects are scalars, we have a scalar field. For instance, in two dimensions, for a plane, if there is a scalar value at each point, we have a two-dimensional scalar field. A physical example might be a surface, with the scalar values corresponding to temperatures at points on the surface.

Coordinate systems assign reference numbers to points in spaces. Most commonly, both in high-school math and in everyday use, we use the Cartesian coordinate system, with axes set at right angles to each other. But other coordinate systems simplify the math in certain engineering and science problems. The two-dimensional polar coordinate system is an example, as are cylindrical and spherical coordinates. We may also find it convenient to use two or more different Cartesian systems to solve a problem. In simple problems about motion, for example, we might have one coordinate system we designate as stationary and a second coordinate system that moves with a set velocity along one of the axes of the first coordinate system (its origin moves).

Now, back to vectors. In high school these are presented as arrows, used in introductory physics to represent forces or velocities. Because they can represent so many varied things, for tensors it is best to think of vectors as ordered sets of numbers. A vector uses one real number for each dimension of the space under consideration, and we list those numbers in brackets. For a two-dimensional plane the general vector can be represented as [a₁ a₂], and a specific example could be [1.2 3.7]. In a five-dimensional space vectors can be represented as [a₁ a₂ a₃ a₄ a₅]. Vectors can also be written as columns of numbers, which is conventional for some applications.
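As a small illustration of this idea (a Python/NumPy sketch of my own; the specific numbers are just examples, not anything from the essay):

```python
import numpy as np

# A vector in a 2-D plane: an ordered pair of real numbers.
v2 = np.array([1.2, 3.7])

# A vector in a 5-D space: one real number per dimension.
v5 = np.array([0.5, -1.0, 2.0, 0.0, 4.2])

# The same 2-D vector written as a column, as some applications prefer.
v2_column = v2.reshape(2, 1)

print(v2, v5, v2_column.shape)   # the shape (2, 1) confirms the column form
```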

In a vector field, each point in the space has a vector associated with it. In an experiment the vectors might be found from data readings; in math problems they are usually given by equations that take the coordinates into account. Note that the numbers [a₁ a₂] for a vector in a field are not the coordinates you would get by drawing the high-school arrow-type vector starting from the base point. The numbers equate to the endpoint coordinates the vector would reach if it were placed at the origin. This makes sense if you think of the many things a vector might represent, like forces: they don't really represent movement within the coordinate system, but they do correspond to a direction in the coordinate system. They are non-spatial variables or specific data, just as our scalar temperature readings are non-spatial.

What happens to vectors when we change coordinate systems? Usually the specific numbers representing the vector change. In a very simple example, consider two separate Cartesian coordinate systems with their axes aligned, but in which the second system, X′, has its origin at the point (5,0,0) of the first system, X. [It is common to distinguish coordinate systems by putting a bar over the second name, or by an apostrophe, as in X and X′; I'll use the apostrophe here.] Suppose the vector [1,1,0] is located at the point (0,0,0) of system X. Then it is located at (5,0,0) of system X′. If we transform the coordinates from X to X′, the vector itself does not change; it remains [1,1,0]. Just the coordinates of its position change.

With more complex changes of coordinate system, both the numbers representing the vector and the coordinates of its position may change. Without worrying about the actual transformation formulas, let's call the transformation T when going from X to X′, and T′ when going from X′ back to X. Similarly, in the X system we'll call our vector V, but the same vector (representing a real object, such as a magnetic force) in the X′ system will be called V′, because it probably has different actual numbers for [a₁ a₂]. Thus V′ = T(V) and V = T′(V′).
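To make the transformation idea concrete, here is a hedged sketch (my own example, not from the essay) in which X′ is obtained by rotating the axes of X by 30 degrees; the matrix T plays the role of the transformation from X components to X′ components, and T′ is its inverse:

```python
import numpy as np

theta = np.radians(30)                     # rotate the axes of X by 30 degrees
# T takes the components of a vector in X to its components in X'
T = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

V = np.array([1.0, 1.0])                   # the vector's components in system X
V_prime = T @ V                            # the same vector's components in X'

T_prime = np.linalg.inv(T)                 # the inverse transformation, X' back to X
print(V_prime)
print(T_prime @ V_prime)                   # recovers the original components [1. 1.]
```

The physical vector never changes; only the list of numbers describing it depends on the coordinate system.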

Now, you have been dying to get to tensors. Surprise! You already have. It is said:

Vectors are tensors of rank 1.

But the beginning student is apt to misinterpret this statement, because it assumes you already know that tensors are not free-standing objects. They always include transformation laws. So it should be stated:

Vectors with appropriate transformation laws for coordinate system changes are tensors of rank 1.

Before progressing, let's take a step back to scalars.

Scalars, with appropriate transformation laws for coordinate system changes, are tensors of rank 0.

Now, about rank, also called the order by some authors. The rank refers to how many dimensions an array has to have to represent the tensor. [a] is like a point, so scalars require 0 dimensions and have rank 0. [a₁ a₂ a₃ a₄ a₅] is like a line, so it has 1 dimension, and vectors have rank 1.
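In array terms (a minimal NumPy sketch, assuming the usual identification of rank with the number of array indices):

```python
import numpy as np

scalar = np.array(3.7)             # rank 0: no indices needed, just a number
vector = np.array([1.2, 3.7])      # rank 1: one index runs along a line of entries

print(scalar.ndim, vector.ndim)    # prints: 0 1
```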

The fun really begins when we start using tensors of rank 2, and so it is easy to forget that scalars and vectors can be tensors. Often, the word tensor is used only for tensors of rank 2 or greater.

Next: Tensors of Rank 2 (This project has been suspended for now)

Eigen Values And Vectors Philosophy Essay

In mathematics, given a linear transformation, an eigenvector of that linear transformation is a nonzero vector which, when that transformation is applied to it, may change in length, but not direction.

For each eigenvector of a linear transformation, there is a corresponding scalar value called an eigenvalue for that vector, which determines the amount by which the eigenvector is scaled under the linear transformation. For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation for a particular eigenvalue is the set (linear span) of the eigenvectors associated with this eigenvalue, together with the zero vector (which has no direction).
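A quick numerical check of this scaling behavior (an illustrative NumPy sketch; the matrix A here is my own toy example):

```python
import numpy as np

# A toy 2x2 linear transformation chosen only for illustration.
A = np.array([[2.0, 0.0],
              [0.0, -1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each eigenvector is only rescaled by A (by its eigenvalue), never rotated.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ v, lam * v))   # True for every pair
```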

In linear algebra, every linear transformation between finite-dimensional vector spaces can be expressed as a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are discussed below.

These concepts play a major role in several branches of both pure and applied mathematics - appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.

Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of direction loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenstate, and eigenfrequency.

AN INTRODUCTION TO EIGENVALUES AND EIGENVECTORS

The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the exponential matrix). Other areas such as physics, sociology, biology, economics and statistics have focused considerable attention on "eigenvalues" and "eigenvectors", their applications and their computations. Before we give the formal definition, let us introduce these concepts with an example.

Example. Consider a square matrix A together with three column matrices C1, C2, and C3. Computing the product of A with each column matrix shows that

AC1 = 0·C1, AC2 = −4·C2, and AC3 = 3·C3.

Next consider the matrix P for which the columns are C1, C2, and C3. We have det(P) = 84, so this matrix is invertible, and easy calculations give its inverse P⁻¹.

Next we evaluate the matrix P⁻¹AP. We leave the details to the reader to check that P⁻¹AP = D, where D is the diagonal matrix with the entries 0, −4, and 3 on its diagonal. In other words, A is similar to a diagonal matrix. Using matrix multiplication, we obtain A = PDP⁻¹, and in particular

Aⁿ = PDⁿP⁻¹

for every positive integer n. Note that it is almost impossible to find A⁷⁵ directly from the original form of A.

This example is so rich in conclusions that many questions arise naturally. For example, given a square matrix A, how do we find column matrices that behave like the ones above? In other words, how do we find the column matrices that yield an invertible matrix P such that P⁻¹AP is a diagonal matrix?
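The payoff of such a diagonalization can be sketched in code. The essay's specific matrices are not reproduced above, so the 3 × 3 matrix below is only a stand-in I chose to illustrate the same procedure: find P from the eigenvectors, form the diagonal matrix D = P⁻¹AP, and use it to compute high powers such as A⁷⁵ cheaply.

```python
import numpy as np

# Stand-in matrix (not the one from the essay), chosen only for illustration.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
P_inv = np.linalg.inv(P)

D = P_inv @ A @ P                       # diagonal up to rounding error
print(np.round(D, 10))

# High powers are now easy: A^75 = P D^75 P^(-1).
A75 = P @ np.diag(eigenvalues ** 75) @ P_inv
print(np.allclose(A75, np.linalg.matrix_power(A, 75)))   # True
```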

From now on, we will call column matrices vectors. So the above column matrices C1, C2, and C3 are now vectors. We have the following definition.

Definition. Let A be a square matrix. A non-zero vector C is called an eigenvector of A if and only if there exists a number λ (real or complex) such that

AC = λC.

If such a number λ exists, it is called an eigenvalue of A, and the vector C is called an eigenvector associated with the eigenvalue λ.

Remark. The eigenvector C must be non-zero, since A·0 = λ·0 = 0 for any number λ.

Example. Consider the matrix A above. We have seen that

AC1 = 0·C1, AC2 = −4·C2, and AC3 = 3·C3.

So C1 is an eigenvector of A associated with the eigenvalue 0, C2 is an eigenvector of A associated with the eigenvalue −4, and C3 is an eigenvector of A associated with the eigenvalue 3.

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

Euler had also studied the rotational motion of a rigid body and discovered the importance of the principal axes. As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur. Sturm developed Fourier's ideas further and he brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues. This was extended by Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Clebsch found the corresponding result for skew-symmetric matrices. Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.

In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm-Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own", "peculiar to", "characteristic", or "individual" - emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G.F. Francis and Vera Kublanovskaya in 1961.
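The power method itself is short enough to sketch (a minimal Python version of the general idea; the matrix and iteration count are my own choices):

```python
import numpy as np

def power_method(A, num_iterations=1000):
    """Iterate x -> Ax / ||Ax|| to approximate the dominant eigenvector,
    then estimate the eigenvalue from the Rayleigh quotient."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iterations):
        x = A @ x
        x /= np.linalg.norm(x)
    eigenvalue = x @ A @ x          # Rayleigh quotient (x has unit length)
    return eigenvalue, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # small symmetric example matrix
print(power_method(A))              # close to the largest value from np.linalg.eigh(A)
```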

APPLICATIONS OF EIGENVALUES AND EIGENVECTORS

Schrödinger Equation

An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

HψE = EψE

where H, the Hamiltonian, is a second-order differential operator and ψE, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψE within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψE and H can be represented as a one-dimensional array and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. (Fig. 8 presents the lowest eigenfunctions of the Hydrogen atom Hamiltonian.)
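As a rough illustration of that matrix form (my own finite-difference sketch, not the hydrogen atom of Fig. 8: a particle in a one-dimensional box with ħ = m = 1, so H = −(1/2) d²/dx² on a grid):

```python
import numpy as np

N, L = 500, 1.0                 # number of interior grid points, box length
dx = L / (N + 1)

# Tridiagonal finite-difference second derivative with walls at x = 0 and x = L.
main = np.full(N, -2.0)
off = np.full(N - 1, 1.0)
laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

H = -0.5 * laplacian                          # the Hamiltonian as an N x N matrix
energies, wavefunctions = np.linalg.eigh(H)   # eigenvalues E and eigenvectors psi_E

# The lowest numerical energies approach the exact box values (n*pi)^2 / 2.
print(energies[:3])
print([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])
```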

The Dirac notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |ΨE⟩. In this notation, the Schrödinger equation is

H|ΨE⟩ = E|ΨE⟩,

where |ΨE⟩ is an eigenstate of H. H is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, H|ΨE⟩ in the equation above is understood to be the vector obtained by application of the transformation H to |ΨE⟩.

Fig. 8. The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, ...) and angular momentum (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

Molecular Orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within Hartree-Fock theory the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree-Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.

Geology And Glaciology

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about the orientation and dip of a clast fabric's constituents can be summarized in 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram or as a stereonet on a Wulff net. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 are in the order E1 ≥ E2 ≥ E3, with E1 being the primary orientation of clast orientation/dip, E2 being the secondary, and E3 being the tertiary, in terms of strength.

The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is planar. If E1 > E2 > E3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004.
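A hedged sketch of the orientation-tensor calculation (synthetic random directions rather than real field data; the construction T = (1/N) Σ vvᵀ is the usual one, but the details of any particular program's output may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal((200, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)     # 200 unit clast-direction vectors

T = (v[:, :, None] * v[:, None, :]).mean(axis=0)  # 3x3 orientation tensor
E = np.sort(np.linalg.eigvalsh(T))[::-1]          # eigenvalues E1 >= E2 >= E3

print(E, E.sum())   # eigenvalues sum to 1; nearly equal values indicate an isotropic fabric
```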

Factor analysis

In factor analysis, the eigenvectors of a covariance matrix or correlation matrix correspond to factors, and the eigenvalues to the variance explained by these factors. Factor analysis is a statistical technique used in the social sciences and in marketing, product management, operations research, and other applied sciences that deal with large quantities of data. The objective is to explain most of the covariability among a number of observable random variables in terms of a smaller number of unobservable latent variables called factors. The observable random variables are modeled as linear combinations of the factors, plus unique variance terms. Eigenvalues are used in the analysis performed by Q-methodology software; factors with eigenvalues greater than 1.00 are considered significant, explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered too weak, not explaining a significant portion of the data variability.
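A toy version of the eigenvalue-greater-than-one screening (synthetic data, not a full factor-analysis fit; the threshold 1.00 follows the rule stated above):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal((300, 6))      # 300 observations of 6 variables
data[:, 1] += data[:, 0]                  # induce some correlation
data[:, 4] += data[:, 3]

R = np.corrcoef(data, rowvar=False)       # 6x6 correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]

retained = eigenvalues[eigenvalues > 1.0] # candidate factors under the > 1.00 rule
print(eigenvalues)
print(len(retained))
```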

Vibration analysis

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies of vibration, and the eigenvectors determine the shapes of these vibrational modes. The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis.
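For a concrete (and deliberately tiny) example, the natural frequencies of a two-mass, three-spring chain come from the generalized eigenproblem K x = ω² M x; this sketch assumes SciPy is available and uses made-up mass and stiffness values:

```python
import numpy as np
from scipy.linalg import eigh

m, k = 1.0, 10.0                          # illustrative mass and spring stiffness
M = np.diag([m, m])                       # mass matrix
K = np.array([[2 * k, -k],
              [-k, 2 * k]])               # stiffness matrix

w_squared, modes = eigh(K, M)             # generalized eigenvalues are omega^2
print(np.sqrt(w_squared))                 # natural frequencies (rad/s)
print(modes)                              # columns: mode shapes (in phase, out of phase)
```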

Eigenfaces

Fig. Eigenfaces shown as eigenvectors.

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been done on eigen vision systems for determining hand gestures.
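A minimal eigenfaces-style sketch (synthetic 8 × 8 "images" stand in for a real face dataset; only the mechanics of the covariance eigenvectors are shown):

```python
import numpy as np

rng = np.random.default_rng(3)
images = rng.random((100, 8, 8))                  # 100 tiny grayscale images
X = images.reshape(100, -1)                       # each image becomes a 64-D vector

mean_face = X.mean(axis=0)
centered = X - mean_face
cov = np.cov(centered, rowvar=False)              # 64x64 covariance matrix

eigenvalues, eigenvectors = np.linalg.eigh(cov)   # ascending order
eigenfaces = eigenvectors[:, ::-1][:, :10]        # keep the top 10 "eigenfaces"

# Any image is approximated by the mean plus a weighted sum of eigenfaces;
# the residual shrinks as more eigenfaces are kept.
weights = centered[0] @ eigenfaces
reconstruction = mean_face + eigenfaces @ weights
print(np.linalg.norm(X[0] - reconstruction))
```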

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems, for speaker adaptation.

Tensor Of Inertia

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.

Stress Tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
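As a short numerical sketch (the stress values are invented for illustration, in MPa), the principal stresses and principal axes are just the eigendecomposition of the symmetric tensor:

```python
import numpy as np

sigma = np.array([[ 50.0,  30.0,   0.0],
                  [ 30.0, -20.0,   0.0],
                  [  0.0,   0.0,  10.0]])         # symmetric stress tensor (MPa)

principal_stresses, principal_axes = np.linalg.eigh(sigma)

# In the basis of the principal axes the tensor is diagonal: no shear components.
diagonal = principal_axes.T @ sigma @ principal_axes
print(principal_stresses)
print(np.round(diagonal, 10))
```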

Eigenvalues Of A Graph

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, which is either T − A or I − T^(−1/2) A T^(−1/2), where T is a diagonal matrix holding the degree of each vertex, and where, in T^(−1/2), 0 is substituted for 0^(−1/2). The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest eigenvalue of A, or the eigenvector corresponding to the kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to simply as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second principal eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
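A toy PageRank-style calculation (a four-page web of my own invention; the damping factor 0.85 is the commonly quoted choice, used here only for illustration) shows the ranks emerging as the principal eigenvector of the modified, row-normalized adjacency matrix:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy link structure (no dangling pages)

S = A / A.sum(axis=1, keepdims=True)        # row-normalized adjacency matrix
d = 0.85                                    # damping: the modification that ensures
G = d * S + (1 - d) / len(A)                # a stationary distribution exists

ranks = np.full(len(A), 1 / len(A))
for _ in range(100):                        # power iteration on the left eigenvector
    ranks = ranks @ G
print(ranks / ranks.sum())                  # the page ranks (stationary distribution)
```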
