I have a few questions about the fundamentals behind vectors. Maybe someone can help?
If, say, the vector space V is R^3, then there seem to be two different things. One is the actual elements of R^3: ordered triples like (7, 2, 4), which as a set would be something like {{7}, {7,2}, {7,2,4}}.
And the other thing is the coordinate vectors. Now, I think one of two things must be true:
A) Coordinates aren't actually real mathematical objects, but just notation for proper n-tuples. So if I have a basis B = ((7,2,0),(0,1,0),(0,0,2)), then the "vector" (1,0,2)_B (the subscript B signalling that these are coordinates with respect to B) is actually just different notation for the set {{7}, {7,2}, {7,2,4}}.
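Interpretation A can at least be checked mechanically: "applying" the coordinates to the basis recovers the plain triple. A minimal sketch (the function name `coords_to_vector` is mine, not standard terminology):

```python
# Interpretation A: a coordinate vector is just shorthand for a
# linear combination of the basis vectors.

def coords_to_vector(coords, basis):
    """Expand coordinates w.r.t. a basis into a plain n-tuple."""
    n = len(basis[0])
    return tuple(sum(c * b[i] for c, b in zip(coords, basis))
                 for i in range(n))

B = ((7, 2, 0), (0, 1, 0), (0, 0, 2))
v = coords_to_vector((1, 0, 2), B)   # the vector written (1,0,2)_B
print(v)  # (7, 2, 4)
```

So under reading A, (1,0,2)_B and the triple (7,2,4) really would denote the same object.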
B) They are their own things and ought to have their own, distinct representation as sets.
So if it's A, then, for one, isn't it totally bogus to write down a basis in coordinates? If coordinates mean "a times this basis vector, b times that basis vector, ...", and you define a basis in terms of coordinates, you're referencing the very thing you're trying to define. It seems like you should always write down a basis as ordered n-tuples (I guess that's just sloppiness, though why be sloppy if it takes the same amount of time?). And furthermore, writing any vector without a basis in the subscript would also not really make sense, but would just be shorthand when the basis is obvious, the same way we write 0 instead of the n-tuple of n zeros. But okay, so far no big deal.
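The other direction, going from a plain n-tuple to its coordinates with respect to a basis, is just solving a linear system with the basis vectors as columns. A sketch using numpy (the name `coords_of` is my own):

```python
import numpy as np

def coords_of(v, basis):
    """Coordinates of v w.r.t. a basis: solve M x = v, where the
    basis vectors form the columns of M."""
    M = np.array(basis, dtype=float).T   # basis vectors as columns
    return np.linalg.solve(M, np.array(v, dtype=float))

B = ((7, 2, 0), (0, 1, 0), (0, 0, 2))
print(coords_of((7, 2, 4), B))  # [1. 0. 2.]
```

This only works because a basis is invertible as a matrix; that invertibility is exactly what makes coordinates well-defined.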
What about matrices? I guess matrices themselves are fine, because they are mathematical objects which provably form a ring under addition and matrix multiplication. Fine. But multiplying matrices with vectors? Given a linear function and a basis, we say there is a uniquely determined matrix which does the same as the function, and the matrix then only cares about the coordinates... which I thought weren't a real thing.
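For what it's worth, that "uniquely determined matrix" is computable: its j-th column holds the coordinates of f applied to the j-th basis vector, expressed in that same basis. A sketch (the helper `matrix_of` and the example map are mine):

```python
import numpy as np

def matrix_of(f, basis):
    """Matrix of a linear map f w.r.t. a basis: column j holds the
    coordinates of f(basis[j]) expressed in that same basis."""
    M = np.array(basis, dtype=float).T                   # basis vectors as columns
    images = np.array([f(np.array(b, dtype=float)) for b in basis]).T
    return np.linalg.solve(M, images)                    # coordinates of each image

# Example: f doubles the first component (of plain tuples).
f = lambda v: np.array([2 * v[0], v[1]])
E = ((1, 0), (0, 1))                                     # standard basis
print(matrix_of(f, E))                                   # [[2. 0.] [0. 1.]]
```

Note that both the input and the output of the matrix live in coordinate-land; the map f itself acts on the plain tuples.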
But where it really gets confusing to me (and where I actually ran into real problems doing exercises) is when you do stuff like: let f : V -> V be an endomorphism that maps v -> A * v, where A is a matrix.
How is that legal, if f takes proper n-tuples as arguments and A takes those weird coordinate things? If, say, the matrix is ((0,0),(0,1)) and the vector is (1,1), couldn't I just say: well, I choose the basis ((1,1),(0,1)), then my vector (1,1) has the coordinates (1,0), therefore Av = 0; whereas if I choose the standard basis, then A(1,1) = (0,1). Even if you "convert" the result "back" by applying the coordinates to the basis, (0,0) is definitely not equal to (0,1).
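This example can be checked directly, and it does come out exactly as described, which is precisely why "v -> A * v" is only unambiguous once a basis is fixed (and when none is mentioned, the standard basis is meant, where a tuple is its own coordinate vector). A sketch of the two computations:

```python
import numpy as np

A = np.array([[0, 0],
              [0, 1]], dtype=float)
v = np.array([1, 1], dtype=float)

# Standard basis: the tuple IS its own coordinate vector.
print(A @ v)                          # [0. 1.]

# Basis B = ((1,1), (0,1)): v has B-coordinates (1, 0).
M = np.array([[1, 0],
              [1, 1]], dtype=float)   # B-vectors as columns
coords = np.linalg.solve(M, v)        # [1. 0.]
back = M @ (A @ coords)               # apply A to coordinates, convert back
print(back)                           # [0. 0.]  -- not [0. 1.]
```

So the fixed matrix A genuinely represents a *different* linear map in each basis; to represent the *same* map in basis B you would have to conjugate, using M^-1 A M instead of A.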
And if it's B... well then what the hell are coordinates?
It seems to me that no one ever explained this properly (fwiw, I did hear people criticize the lecture for exactly that), and we're just supposed to do whatever seems intuitive when these issues come up, trusting that the exercises are designed so it works out. But that's super unsatisfying.