# Dominion Strategy Forum


#### fisherman

• Pawn
• Offline
• Posts: 4
• Respect: +2
« Reply #875 on: July 23, 2017, 03:14:32 pm »
0

You just mean you have a vector space with the standard basis and you want to rewrite the elements in terms of a different linear basis? If so, then the word polar has nothing to do with this.
Logged

#### Mic Qsenoch

• 2015 DS Champion
• Offline
• Posts: 1547
• Respect: +3741
« Reply #876 on: July 23, 2017, 03:18:18 pm »
0

You have to change A for your new basis as well.
Logged

#### fisherman

• Pawn
• Offline
• Posts: 4
• Respect: +2
« Reply #877 on: July 23, 2017, 03:29:22 pm »
+2

To expand on Mic's point: the endomorphism is not itself a matrix. Once you pick a basis for your vector space, you get to write down a matrix that represents your endomorphism with respect to your basis. If you pick a different basis, you have to write down a different matrix for the same endomorphism.
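A quick numpy sketch of this point (my own toy example, not from the thread): one endomorphism of R^2, two bases, two different matrices, same map.

```python
import numpy as np

# The endomorphism f of R^2 that doubles the first standard coordinate.
# In the standard basis its matrix is:
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# A different basis for R^2, written as the columns of P.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The matrix of the *same* endomorphism with respect to the new basis:
B = np.linalg.inv(P) @ A @ P

# f itself hasn't changed: applying f to a vector v gives the same result
# whether we compute in standard coordinates or translate through P.
v = np.array([3.0, 4.0])
coords = np.linalg.solve(P, v)          # v written in the new basis
assert np.allclose(P @ (B @ coords), A @ v)
```

Here B comes out different from A even though both represent the one endomorphism.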
Logged

#### pacovf

• Cartographer
• Offline
• Posts: 3039
• Multiediting poster
• Respect: +3321
« Reply #878 on: July 23, 2017, 04:36:06 pm »
0

For the underlying question, personally I think your interpretation A is the best way to think about it, with the understanding that the n-tuple is to the vector what the matrix is to the endomorphism (i.e., just a representation that depends on your basis for R^3).
Logged
pacovf has a neopets account.  It has 999 hours logged.  All his neopets are named "Jessica".  I guess that must be his ex.

#### liopoil

• Margrave
• Offline
• Posts: 2578
• Respect: +2449
« Reply #879 on: July 23, 2017, 08:34:52 pm »
+2

The class I took was very careful to be precise about this matter. Let V,W be finite-dimensional vector spaces and f: V --> W a linear map between them. Then f is completely determined by how it transforms some basis for V. Say B = (v_1,v_2,...,v_n) is a basis for V, and C = (w_1,w_2,...,w_m) is a basis for W. Then let M be the m-by-n matrix where the i-th column is f(v_i) written with respect to the basis C in W. Then we can say that f(v) = Mv for all vectors v in V - when v is written in the basis B and f(v) is written in the basis C. In particular, we would frequently write f = B[M]C to emphasize that the matrix transforms from the basis B to the basis C.

One other minor thing here is that R^3 is special in that it is literally the set of ordered triples of real numbers, and so there are naturally "coordinates". But if we took, say, R[ x ]<3, the vector space of real-valued polynomials in x with degree less than three, there aren't really "coordinates". (1,x,x^2) is a simple basis for the space, but the vectors themselves do not have coordinates. 4x^2 + 2x + 7 is a vector in this space, and with respect to that basis it is (7,2,4). Then if you apply a linear transformation to this vector, you can instead use a matrix with respect to this basis and apply it to (7,2,4); it doesn't make any sense to multiply a matrix by 4x^2 + 2x + 7. In fact, R[ x ]<3 is isomorphic to R^3, but here it's more clear that the vectors need to be written with respect to a basis.
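To make the (7,2,4) example executable (using differentiation as the sample linear map, which is my choice, not the poster's): build the matrix column by column from the images of the basis vectors.

```python
import numpy as np

# Represent the differentiation map d/dx on R[x]<3 by a matrix in the
# basis (1, x, x^2). Columns are the images of the basis vectors in
# coordinates: d/dx 1 = 0, d/dx x = 1, d/dx x^2 = 2x.
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

# 4x^2 + 2x + 7 has coordinates (7, 2, 4); its derivative 8x + 2
# should come out as (2, 8, 0).
coords = np.array([7.0, 2.0, 4.0])
deriv = D @ coords
```

Multiplying D by the polynomial itself is meaningless; multiplying D by its coordinate vector is how the map is actually computed.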
Logged

#### silverspawn

• Cartographer
• Offline
• Posts: 3900
• ♦ Twilight ♦
• Respect: +1675
« Reply #880 on: July 24, 2017, 10:57:11 am »
0

> **liopoil:** The class I took was very careful to be precise about this matter. [...] we would frequently write f = B[M]C to emphasize that the matrix transforms from the basis B to the basis C.

We did that, too (even with the same notation). I think that's consistent with saying functions operate on "real" elements of your vector space, matrices operate on coordinate vectors, and because coordinate vectors depend on the basis, your matrices always depend on the basis. Except then it makes no sense to say f(x) = Ax without specifying a basis, which we also did just one assignment later.
Logged

#### Witherweaver

• Online
• Posts: 6306
• Respect: +7490
« Reply #881 on: July 24, 2017, 11:25:23 am »
+1

> **liopoil:** The class I took was very careful to be precise about this matter. [...] In fact, R[ x ]<3 is isomorphic to R^3, but here it's more clear that the vectors need to be written with respect to a basis.

You're doing the same thing in R^3, just with the basis {(1,0,0), (0,1,0), (0,0,1)}.  You could of course use any other basis.  It might also be illustrative for someone familiarizing themselves with these things to take a point (like (1,2,3)) in the standard basis {(1,0,0), (0,1,0), (0,0,1)} and write it with respect to some other basis (I dunno, something random like {(1,1,1), (1,2,1), (0,1,1)}).
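That exercise can be done mechanically: put the basis vectors in the columns of a matrix and solve a linear system. A numpy sketch:

```python
import numpy as np

# Writing the point (1, 2, 3) (standard basis) in the example basis
# {(1,1,1), (1,2,1), (0,1,1)}: put the basis vectors in the columns
# of P and solve P c = v for the new coordinates c.
P = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])
v = np.array([1.0, 2.0, 3.0])
c = np.linalg.solve(P, v)

# Reconstructing v from the new coordinates recovers the original tuple.
assert np.allclose(P @ c, v)
```

The coordinates come out as (2, -1, 2): the same point, a different triple of numbers.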

Also worth pointing out that a linear map f:V->W is determined by its action on the basis vectors because any v in V can be written as v = sum_i (alpha_i*v_i), so

f(v) = f(sum_i (alpha_i*v_i)) =  sum_i alpha_i *f(v_i),

hence f's action on any vector v is determined by its action on the basis {v_i}.
Logged

#### faust

• Torturer
• Offline
• Posts: 1614
• Respect: +2083
« Reply #882 on: July 24, 2017, 11:46:23 am »
+1

> **liopoil:** The class I took was very careful to be precise about this matter. [...]

> **silverspawn:** We did that, too (even with the same notation). [...] Except then it makes no sense to say f(x) = Ax without specifying a basis, which we also did just one assignment later.
The problem, I think, is this: you can define the multiplication of a matrix and a vector without talking about bases and coordinates at all; it is just an action of the multiplicative monoid of matrices on the vector space R^n (in the case of n x n matrices - I'm only considering these here for simplicity). This definition is perfectly valid. Then you take some other vector space, e.g. polynomials of limited degree, and now kind of want to do the same thing there.

Well, what you do is first fix an isomorphism from the polynomials to R^n. This is what the basis does: it tells you which elements are sent to (1,0,0), (0,1,0), (0,0,1), respectively. This uniquely defines the isomorphism, as you probably have shown at some point during the course. Let's say you have a map f between polynomials of limited degree, and an isomorphism g (represented by a basis). You get a diagram like this:

          f
  P ----------->  P
  |               ^
  | g             | g^{-1}
  v       h       |
 R^n ----------> R^n

The map h is uniquely defined as g o f o g^{-1}, and there is a matrix A such that h(x) = Ax for all x. Thus we can say that there is a connection between the map f and the matrix A via the isomorphism g, and since it's all isomorphic, mathematicians often get sloppy and use the two interchangeably.
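A numerical sketch of this diagram (the shift map f(p)(x) = p(x + 1) is a made-up example, not from the thread): once g fixes coordinates, h = g o f o g^{-1} is literally a matrix.

```python
import numpy as np

# P = real polynomials of degree < 3; g = the isomorphism sending a
# polynomial to its coefficient triple in the basis (1, x, x^2).
# f: P -> P is the (linear) shift map f(p)(x) = p(x + 1).
# h = g o f o g^{-1} then acts on R^3 as a matrix A.

# Columns of A: images of the basis under f, written in coordinates.
# f(1) = 1, f(x) = 1 + x, f(x^2) = 1 + 2x + x^2.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

coeffs = np.array([7.0, 2.0, 4.0])       # the polynomial 7 + 2x + 4x^2
shifted = A @ coeffs                     # coordinates of p(x + 1)

# Sanity check at a sample point: p(x + 1) evaluated at x = 2 is p(3).
p = lambda t: 7 + 2 * t + 4 * t ** 2
q = lambda t: shifted[0] + shifted[1] * t + shifted[2] * t ** 2
assert np.isclose(q(2.0), p(3.0))
```

The matrix A only means anything relative to the isomorphism g, which is exactly the point of the diagram.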
Logged
Since the number of points is within a constant factor of the number of city quarters, in the long run we can get (4 - ε) ↑↑ n points in n turns for any ε > 0.

• Minion
• Offline
• Posts: 707
• Not Doc Cod
• Respect: +536
« Reply #883 on: July 24, 2017, 12:16:02 pm »
+2

> **liopoil:** The class I took was very careful to be precise about this matter. [...]

> **silverspawn:** We did that, too (even with the same notation). [...] Except then it makes no sense to say f(x) = Ax without specifying a basis, which we also did just one assignment later.
WW's response to this is solid.  Basically ('scuse the pun), whenever a basis is not specified for R^n (or C^n, or any other cartesian power of a field), it is convention to use the "standard basis", i.e. {(1,0,...,0), (0,1,0,...,0), ..., (0,...,0,1)}.  A lot of your issues are solved if you use that convention.

Mathematicians are sticklers for accuracy, but in some cases things are so standard that even mathematicians won't bother to specify that they're using the standard thing.

Obviously (per liopoil's example) if you're in a space where there's no obvious standard basis and someone writes x=Ay, with a matrix A, you should mistrust everything they say unless and until they specify a basis.  (Though I'd argue that {1,x,x^2,...} is pretty close to being a standard basis in spaces of univariate polynomials.)
Logged
The best reason to lynch Haddock is the meltdown we get to witness on the wagon runup. I mean, we should totally wagon him every day just for the lulz.

M Town Wins-Losses (6-2, 75%): 71, 72, 76, 81, 83, 87 - 79, 82.  M Scum Wins-Losses (2-1, 67%): 80, 101 - 70.
RMM Town Wins-Losses (0-1, 0%): x - 31.  RMM Scum Wins-Losses (3-3, 50%): 33, 37, 43 - 29, 32, 35.
Modded: M75, M84, RMM38.     Mislynched (M-RMM): None - None.     Correctly lynched (M-RMM): 101 - 33, 33, 35.       MVPs: RMM37, M87

#### Witherweaver

• Online
• Posts: 6306
• Respect: +7490
« Reply #884 on: July 24, 2017, 12:20:48 pm »
+1

> (Though I'd argue that {1,x,x^2,...} is pretty close to being a standard basis in spaces of univariate polynomials.)

I'd say kind of; there's a sense in which it's natural (because it's so easy* to write a_0 + a_1*x + ... + a_n*x^n), but orthonormal bases are usually much more helpful, so usually a different polynomial basis is used.  (And different ones depending on what your problem is.)

*Edit: Maybe 'easy' isn't the right word here, but rather that's the first form of polynomials we're exposed to.  'Easy' or 'natural' depends on your problem; if you're talking about interpolating a function with a polynomial, for instance, the Lagrange polynomials are more natural.

« Last Edit: July 24, 2017, 12:22:58 pm by Witherweaver »
Logged

#### silverspawn

• Cartographer
• Offline
• Posts: 3900
• ♦ Twilight ♦
• Respect: +1675
« Reply #885 on: July 24, 2017, 12:37:28 pm »
0

> **liopoil:** The class I took was very careful to be precise about this matter. [...]

> **silverspawn:** We did that, too (even with the same notation). [...]

> WW's response to this is solid. [...] whenever a basis is not specified for R^n (or C^n, or any other cartesian power of a field), it is convention to use the "standard basis" [...] (Though I'd argue that {1,x,x^2,...} is pretty close to being a standard basis in spaces of univariate polynomials.)

So you would also agree that it's sloppy/convention-but-doesn't-actually-make-sense to define a basis with coordinate vectors?
Logged

#### liopoil

• Margrave
• Offline
• Posts: 2578
• Respect: +2449
« Reply #886 on: July 24, 2017, 04:53:48 pm »
0

Yeah, (1,x,x^2,...) is a really intuitive basis, but the point is that you HAVE to choose it. Any finite-dimensional vector space will be isomorphic to F^n for some field F, so any example I give would be easily interpreted as coordinates; I would just be obfuscating to make the basis hard to see. But the vectors in F^n itself are literally coordinates, so you might not think to translate to a basis. Faust's diagram is really instructive, and can even be generalized for any finite-dimensional F-vector spaces V,W (they both need to have the same base field). Then if f: V --> W is linear, f = iW^{-1} o g o iV, with

       iV         g         iW^{-1}
  V -------> F^n ------> F^m -------> W

iV and iW are the isomorphisms from V to F^n and from W to F^m specified by bases, and then g is the transformation specified by some m-by-n matrix.
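A rectangular instance of this diagram (the evaluation map here is an invented example): f: R[x]<3 --> R^2 sending p to (p(0), p(1)) is linear, and its matrix is m-by-n with m = 2, n = 3.

```python
import numpy as np

# V = R[x]<3 with basis (1, x, x^2); W = R^2 with the standard basis.
# f(p) = (p(0), p(1)). Column j holds f(basis vector j) in coordinates:
# f(1) = (1, 1), f(x) = (0, 1), f(x^2) = (0, 1).
M = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])

coords = np.array([7.0, 2.0, 4.0])    # the polynomial 7 + 2x + 4x^2
values = M @ coords                   # should be (p(0), p(1)) = (7, 13)
```

Note the shape: three input coordinates, two output coordinates, so the matrix cannot possibly be confused with the "abstract" map on polynomials.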

At this point I'm not sure anyone is confused, but I just think these diagrams are pretty cool.
Logged

#### pacovf

• Cartographer
• Offline
• Posts: 3039
• Multiediting poster
• Respect: +3321
« Reply #887 on: July 24, 2017, 05:46:54 pm »
0

And then somebody brings in category theory.
Logged

• Minion
• Offline
• Posts: 707
• Not Doc Cod
• Respect: +536
« Reply #888 on: July 25, 2017, 05:14:50 am »
0

> **Witherweaver:** I'd say kind of; there's a sense in which it's natural (because it's so easy to write a_0 + a_1*x + ... + a_n*x^n), but orthonormal bases are usually much more helpful, so usually a different polynomial basis is used. [...] 'Easy' or 'natural' depends on your problem.
I agree it definitely depends on the context, yes.

> **silverspawn:** So you would also agree that it's sloppy/convention-but-doesn't-actually-make-sense to define a basis with coordinate vectors?
Mmf.  Not sure I do agree with that.  Not sure what you mean by "doesn't make sense".  So long as you are literally talking about the vector space F^n (for some field F) - rather than just something isomorphic to it (since everything finite-dimensional is isomorphic to such a thing), it is perfectly legitimate to give a basis for F^n in terms of standard coordinate vectors.

After all, for the literal vector space F^n, tuples of numbers are a real and well-defined thing.
Logged

#### Witherweaver

• Online
• Posts: 6306
• Respect: +7490
« Reply #889 on: July 25, 2017, 07:50:52 am »
0

The basis is represented with coordinate vectors but exists without them.  It's like a base for a number system.  The number 10 (in base ten) is a thing that exists independent of how it's represented; it's just helpful to write it down some way, and that's how we do it.

Logged

#### silverspawn

• Cartographer
• Offline
• Posts: 3900
• ♦ Twilight ♦
• Respect: +1675
« Reply #890 on: July 25, 2017, 09:10:15 am »
0

> Mmf.  Not sure I do agree with that.  Not sure what you mean by "doesn't make sense". [...] After all, for the literal vector space F^n, tuples of numbers are a real and well-defined thing.

I think you're saying here that if your vector space is F^n then coordinate vectors are okay as basis vectors because they literally are tuples?
Logged

• Minion
• Offline
• Posts: 707
• Not Doc Cod
• Respect: +536
« Reply #891 on: July 25, 2017, 09:30:40 am »
0

> **silverspawn:** I think you're saying here that if your vector space is F^n then coordinate vectors are okay as basis vectors because they literally are tuples?
Right.

I mean, any vector space has SOME object you can use as basis vectors (and typically the objects can be constructed as set-theoretic objects, though in most cases why bother?).  Just, in most vector spaces it's not as obvious which things to use for a basis.
Logged

#### Witherweaver

• Online
• Posts: 6306
• Respect: +7490
« Reply #892 on: July 25, 2017, 09:51:37 am »
+1

It seems like the definition is being dodged around a bit.

A vector space V (over a scalar field) is a collection of elements satisfying the given conditions.  A basis is a set A \subset V (say indexed by A = {a_alpha}, alpha in some index set) such that:

1. For all v in V, there exists a set of scalars {c_alpha} such that v = sum_alpha (c_alpha*a_alpha),
2. A is linearly independent.  I.e., if sum_alpha (c_alpha*a_alpha) = 0, then c_alpha = 0 for all alpha.

If this is satisfied, then you have a basis set.  There tend to be natural ones to choose (usually because of how we define or represent the vector space), but they are no more or less "okay" than any other in a vacuum.

Logged

#### Witherweaver

• Online
• Posts: 6306
• Respect: +7490
« Reply #893 on: July 25, 2017, 10:11:43 am »
0

In fact you can define coordinates this way.  Given a basis A = {a_alpha} of a vector space V, the coordinates of v are the scalars (c_alpha) such that

v = sum_alpha (c_alpha*a_alpha).
Logged

• Minion
• Offline
• Posts: 707
• Not Doc Cod
• Respect: +536
« Reply #894 on: July 25, 2017, 10:23:04 am »
+1

Right.  But (one of) the question(s) I think silver is asking is:

Why is it ok to use "n-tuple" notation to denote elements of vector spaces (given a basis, presumably, or with one taken as standard) when really these things aren't n-tuples at all, they're some other objects?

The answer is that mathematicians double-think like this all the time.  We like to think about various different representations for the same object, since it can help with intuition.  It rarely causes a problem because meaning should always be clear from context.

Extra confusion is caused by the fact that if your vector space is literally F^n, then the n-tuple notation for an element of F^n (using the standard coordinate basis) coincides with the literal object that it is representing; some n-tuple of elements of F.
Logged

#### faust

• Torturer
• Offline
• Posts: 1614
• Respect: +2083
« Reply #895 on: July 25, 2017, 10:32:27 am »
0

> Extra confusion is caused by the fact that if your vector space is literally F^n, then the n-tuple notation for an element of F^n (using the standard coordinate basis) coincides with the literal object that it is representing; some n-tuple of elements of F.
Extra extra confusion when you use a non-standard basis for F^n and start representing n-tuples with different n-tuples.

(a thing that usually only happens in exercises for linear algebra)
« Last Edit: July 25, 2017, 10:34:12 am by faust »
Logged

#### Witherweaver

• Online
• Posts: 6306
• Respect: +7490
« Reply #896 on: July 25, 2017, 10:50:40 am »
0

Well, that's what I said above.  We define the 'tuple' (or coordinate) as notation for the expression v = sum_alpha (c_alpha*a_alpha).  It is relative to a basis.

A vector in R^n represents more than an n-tuple of real numbers.  It is an n-tuple of real numbers together with certain structure (an addition operator satisfying the right things, a zero element, a scalar field with multiplication satisfying the right things, etc.).  You can define the notion of an n-tuple as a set, before the notion of a vector space, and before the notion of a basis.  Then when you introduce the vector space, it turns out that its coordinate/tuple notation in the standard basis is the same tuple that is used for a point in R^n.

But R^n the vector space and R^n the set are different things.  One has more structure.  We can go further and define a norm to make R^n a normed space (a Banach space with the right one), or an inner product to make R^n an inner product space (a Hilbert space with the right one).  But you can define different norms, etc., and you get different spaces, even though we think of them all as "R^n".

You could take the set R^n and the same scalar field and make a different vector space out of it (though probably isomorphic to the standard one) by defining different vector operations.

So it's not like the notation (x,y,z) as a point in the (set) R^3 is the same as the notation (x,y,z) as the vector in the vector space R^3 (over reals, with the standard operations) written in the standard basis.  They have the same symbol out of context, but the meaning is different.
Logged

#### Witherweaver

• Online
• Posts: 6306
• Respect: +7490
« Reply #897 on: July 25, 2017, 10:53:03 am »
0

> **faust:** Extra extra confusion when you use a non-standard basis for F^n and start representing n-tuples with different n-tuples. (a thing that usually only happens in exercises for linear algebra)

It is helpful if, say, you have another important set of vectors.  Maybe the eigenvectors of a matrix of interest or something.
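A sketch of that situation (a toy example of my own): in the basis of eigenvectors, the matrix representing the same map becomes diagonal, which is exactly why that non-standard basis is worth the bookkeeping.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of P: eigenvectors of A (for eigenvalues 3 and 1).
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# In the eigenvector basis the same map is represented by a diagonal matrix.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([3.0, 1.0]))
```

Every n-tuple now gets rewritten as a different n-tuple (its coordinates in P's columns), and the map just scales each coordinate.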
Logged

• Minion
• Offline
• Posts: 707
• Not Doc Cod
• Respect: +536
« Reply #898 on: July 25, 2017, 11:01:50 am »
+2

> **Witherweaver:** We define the 'tuple' (or coordinate) as notation for the expression v = sum_alpha (c_alpha*a_alpha).  It is relative to a basis. [...] So it's not like the notation (x,y,z) as a point in the (set) R^3 is the same as the notation (x,y,z) as the vector in the vector space R^3 (over reals, with the standard operations) written in the standard basis.  They have the same symbol out of context, but the meaning is different.
The structures (R^n, no additional structure, just sets) and (R^n, +, scalar times, 0), and even (R^n, +, scalar times, 0, ||.||) are of course all different.

But the elements of the underlying set of each of those three structures are the same objects: n-tuples of real numbers.  The fact that one can choose to talk about the additive/norm structure or not to do so has no effect on the fact that a tuple (x_1,...,x_n) is a single object that exists in all of the structures and is always the same.

So, no, I don't agree that the meaning of an (literal) n-tuple is different depending on whether you're considering it as an element of a vector space or a set.  An n-tuple is an n-tuple.

EDIT: Perhaps the clearest way to say this: the extra structure (eg. addition) on a vector space is a property of the vector space itself, not of the individual elements.
Logged

• Torturer
• Offline
• Posts: 1614