The Set of All Continuous Functions Defined on the Interval [0, 1] With the Standard Operations
Linear Algebra Problem #1
- Thread starter Saladsamurai
Homework Statement
Which of the following sets (with natural addition and multiplication by a
scalar) are vector spaces. Justify your answer.
a) The set of all continuous functions on the interval [0, 1];
b) The set of all non-negative functions on the interval [0, 1];
c) The set of all polynomials of degree exactly n;
d) The set of all symmetric n × n matrices, i.e. the set of matrices [tex]A=\{a_{j,k}\}_{j,k=1}^{n}[/tex] such that [itex]A^T=A[/itex]
Homework Equations
Definition of a vector space
A vector space V is a collection of objects, called vectors (denoted in this
book by lowercase bold letters, like v), along with two operations, addition
of vectors and multiplication by a number (scalar), such that the following
8 properties (the so-called axioms of a vector space) hold:
The first 4 properties deal with the addition:
1. Commutativity: v + w = w + v for all v, w ∈ V ;
2. Associativity: (u + v) + w = u + (v + w) for all u, v, w ∈ V ;
3. Zero vector: there exists a special vector, denoted by 0 such that
v + 0 = v for all v ∈ V ;
4. Additive inverse: For every vector v ∈ V there exists a vector w ∈ V
such that v + w = 0. Such an additive inverse is usually denoted −v;
5. Multiplicative identity: 1v = v for all v ∈ V ;
6. Multiplicative associativity: (αβ)v = α(βv) for all v ∈ V and all
scalars α, β;
7. α(u + v) = αu + αv for all u, v ∈ V and all scalars α;
8. (α + β)v = αv + βv for all v ∈ V and all scalars α, β.
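As a warm-up (not part of the original problem), the axioms above can be spot-checked numerically for real-valued functions on [0, 1], with addition and scalar multiplication defined pointwise. This is only a sanity check at a few sample points, not a proof; the helper names `add`, `scale`, and `zero` are mine:

```python
# Sketch: spot-checking a few vector-space axioms for real-valued functions
# on [0, 1]. "Vectors" are Python callables; the operations are pointwise.
# A numerical check at sample points, not a proof.

def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(a, f):
    """Pointwise scalar multiple: (a * f)(x) = a * f(x)."""
    return lambda x: a * f(x)

zero = lambda x: 0.0  # the zero vector: the function that is 0 everywhere

f = lambda x: x ** 2
g = lambda x: 3 * x + 1

samples = [0.0, 0.25, 0.5, 0.75, 1.0]

# Axiom 1 (commutativity) and axiom 3 (zero vector), checked pointwise:
assert all(abs(add(f, g)(x) - add(g, f)(x)) < 1e-12 for x in samples)
assert all(abs(add(f, zero)(x) - f(x)) < 1e-12 for x in samples)

# Axiom 7 (distributivity over vector addition):
assert all(abs(scale(2.0, add(f, g))(x) - add(scale(2.0, f), scale(2.0, g))(x)) < 1e-12
           for x in samples)
```

The point the checks illustrate: every axiom reduces to an arithmetic identity about the real numbers f(x), g(x), applied one x at a time.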
The Attempt at a Solution
Let's just start with (a) the set of all continuous functions on the interval [0, 1]
This is probably really easy, but I am having trouble figuring out how to answer this one.
I guess I start by seeing if all continuous functions adhere to the eight criteria above, right?
Well, it appears that 1 and 2 hold, since continuous functions add commutatively and associatively, right?
3 (the existence of a zero vector such that v+0=v) seems true enough
4 and 5 should hold (out of curiosity, when does 1*v not equal v?)
6,7,8 also seem obvious enough, but I don't know how to prove any of this.
So I am concluding that the set of all continuous functions on the interval [0,1] IS a vector space.
What is the proper approach to these kinds of problems? And why did they choose the interval [0,1] ? Why not all reals?
Sorry for so many questions! Any input towards ANY of them is greatly appreciated!
Answers and Replies
1*v=v is basically just a definition of 1. The point of 5) is really just to say 1 exists. 6,7 and 8 hardly even need proving. When you multiply functions by constants and add them then you are really just multiplying and adding real numbers. So 6,7 and 8 aren't very mysterious. The thing to prove is that a constant times a continuous function is continuous and that the sum of two continuous functions is continuous. If you want to get down into the dirt and prove them, then use epsilons and deltas. But I think you've probably proved that before, right? Otherwise, they are just properties of real numbers.
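For reference, the epsilon-delta argument mentioned here for the sum of two continuous functions is short; a sketch (the scalar-multiple case is similar):

Suppose [itex]f[/itex] and [itex]g[/itex] are continuous at [itex]x_0[/itex]. Given [itex]\varepsilon > 0[/itex], pick [itex]\delta_1, \delta_2 > 0[/itex] so that [itex]|x-x_0|<\delta_1[/itex] forces [itex]|f(x)-f(x_0)|<\varepsilon/2[/itex] and [itex]|x-x_0|<\delta_2[/itex] forces [itex]|g(x)-g(x_0)|<\varepsilon/2[/itex]. Then with [itex]\delta=\min(\delta_1,\delta_2)[/itex] and [itex]|x-x_0|<\delta[/itex],
[tex]|(f+g)(x)-(f+g)(x_0)| \le |f(x)-f(x_0)| + |g(x)-g(x_0)| < \tfrac{\varepsilon}{2}+\tfrac{\varepsilon}{2} = \varepsilon.[/tex]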
Actually, somehow I got like the one Calculus professor who did not find it necessary to do epsilon and deltas (with limits). I am afraid of them
Okay then! Let's move on to part (b). Now it seems similar to part (a), except that now it includes just the nonnegative functions, and they are not necessarily continuous.
What is the difference here? I am sorry, I am losing focus here... which of these properties (1-8) would not hold for noncontinuous functions?
They all hold for functions that aren't necessarily continuous as well.
Am I stupid? Okay, don't answer that.... I dropped out of high school, so I did not have the luxury of high-school maths. So when I started out at community college, I started with "Intermediate Algebra" and now that I am transferring out of that school, I have completed many math courses. However, I feel like I missed MANY of the BASICS!
Crap like, the definition of a set and stuff like that...
I know that the terms aren't difficult, but I have to think about them instead of them just being in there (my head!) and knowing them well.....
I was NOT implying you were stupid. It takes practice to focus on what's important in a list of 8 axioms. I was trying to encourage you.
That's easy. If V is the set of nonnegative functions and v is in V and I do a vector space type operation like (-1)*v, is the result necessarily a nonnegative function? These aren't so hard, are they?
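The hint above can be made concrete with a tiny sketch (my own illustration, using f(x) = x as the nonnegative function):

```python
# Sketch: why the nonnegative functions on [0, 1] are NOT a vector space.
# If f is nonnegative and positive somewhere, then (-1)*f is negative there,
# so the set is not closed under scalar multiplication (and f has no
# additive inverse inside the set).

f = lambda x: x            # nonnegative on [0, 1]
neg_f = lambda x: -1 * f(x)

assert f(0.5) >= 0         # f is in the set
assert neg_f(0.5) < 0      # (-1)*f is not in the set
```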
Okay. I think I might be with you now. The approach to these is something like this:
I have some collection (set) of objects (elements); now I should ask myself: If I apply a vector space type operation (i.e. 1-8) to one of these elements, do I as a result, get one of those elements?
If the answer is yes, it IS a vector space. If the answer is NO, it is not.
Is this right?
I was NOT implying you were stupid. I was trying to encourage you.
I didn't mean you were! I was just sort of "talking out loud"... sorry, I do it a lot! Sometimes if I type out what I am thinking, it helps me to sort out the nonsense going on inside my head. :rofl:
I had originally planned on asking what class most people started learning stuff like that in.... but I got lost somewhere along the line!
Yes, that's REALLY right. What about c) and d)?
Sweet-Jesus!
Likely, they learned it the same class you are taking. Sorry, NOT TAKING.
So for part (c) I would say that the set of all polynomials of degree exactly n is NOT a vector space: the sum of two degree-n polynomials can have lower degree when the leading terms cancel, so the set is not closed under addition, and it does not contain the zero polynomial, so axiom 3 fails too.
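Closure under addition is the delicate point in part (c), because leading terms can cancel. A quick sketch (my own illustration, for n = 2, with polynomials stored as low-to-high coefficient lists):

```python
# Sketch: the polynomials of degree exactly n are not closed under addition,
# since leading coefficients can cancel. Illustrated for n = 2; index i of
# each list holds the coefficient of x**i.

def degree(coeffs):
    """Degree of a polynomial given low-to-high coefficients (None for the zero polynomial)."""
    for i in range(len(coeffs) - 1, -1, -1):
        if coeffs[i] != 0:
            return i
    return None

p = [0, 0, 1]    # x^2        (degree exactly 2)
q = [1, 0, -1]   # 1 - x^2    (degree exactly 2)
s = [a + b for a, b in zip(p, q)]   # p + q = 1

assert degree(p) == 2 and degree(q) == 2
assert degree(s) == 0   # the sum has dropped out of the set
```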
Agreed. Now finish it with d) and agree with me that it's not that hard and you aren't stupid.
:rofl: Okee-dokee!
This will be the harder of the four since I have to really think about what it is saying.
I am a little confused by the definition of symmetric matrices:
The set of all symmetric n × n matrices, i.e. the set of matrices [tex]A=\{a_{j,k}\}_{j,k=1}^{n}[/tex] such that [itex]A^T=A[/itex]
How can a matrix be equal to its transpose unless all of the entries of that matrix are the SAME entry? Say you have some 2 x 2 matrix called A. You take the first row of A and make it the first column of some other matrix B, and then you take the 2nd row of A and make it the 2nd column of B. B is now the transpose of A. Isn't the only way that A = B if all the entries in A are the same entry?
Umm. They are just symmetric matrices. I'm trying not to let a bit of index oddness in what you wrote throw me off. [itex]a_{ij}=a_{ji}[/itex], right?
I thought I copied their definition right.... but yes, it said symmetric matrices.
EDIT: Here's a screenshot of the text
This may show you that there are people even stupider than you are. I have no idea what that means. What does the superscript 'n' mean? What does j,k=1 mean? I do know what a symmetric matrix is, and that notation does nothing to convey the meaning.
Nooooo. A=[[1,2],[2,1]]. A^T=A. All of the entries in A aren't the same.
:uhh: Sorry, but I am confused by that matrix.... wait... is that comma in between sets of brackets to denote a new row? ... Seems it must be.
Yes, new row. Ones along the diagonal, twos along the opposite diagonal. Symmetric, I'm pretty sure.
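The example can be checked mechanically (a sketch using NumPy, which is my choice of tool here, not the thread's):

```python
# Sketch: Dick's example A = [[1, 2], [2, 1]] is symmetric
# even though its entries are not all the same.
import numpy as np

A = np.array([[1, 2],
              [2, 1]])

assert (A.T == A).all()           # A equals its transpose...
assert len(set(A.flatten())) > 1  # ...yet the entries are not all equal
```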
Thanks Dick!! Your responses are always helpful to me and get me thinking.
That little bit of indexy stuff just means that A is a two-index object with indices running from 1 to n, i.e. that it's an n×n matrix. Must have been tired last night.
Okay. So by my logic from post #11 and applying axiom 1, not only does some pair of symmetric matrices need to add commutatively, but the sum needs to be a symmetric matrix too. I am trying to think of a way to show that that is or is not true.
I have had very limited exposure to proofs in general, let alone those involving matrices. I am having trouble figuring out how to go about showing that the sum of two symmetric matrices is or is not ALSO a symmetric matrix.
A hint to get me going here would be great!
You just have to write this one out to see why it's the case. Matrix addition is entrywise: each entry of the sum is the sum of the corresponding entries.
I know how to add two matrices if I have numbers, but what about the general case? How do you go about adding two general matrices? I tried this:
[tex]\left[\begin{array}{cc}a_{11}&a_{12} \\ a_{21}&a_{22}\end{array}\right]+\left[\begin{array}{cc}b_{11}&b_{12} \\ b_{21}&b_{22}\end{array}\right]=
\left[\begin{array}{cc}a_{11}+b_{11}&a_{12}+b_{12} \\ a_{21}+b_{21}&a_{22}+b_{22}\end{array}\right][/tex]
But that does not tell me much. Is there a better way to approach this? That is to say, I am not sure what to write out.
Ok, I can't tell if you can see why the sum of two symmetric matrices is itself symmetric, or if you can see that it is so but can't think of a formal or acceptable way to prove it. Consider this then:
A matrix A is symmetric if for all its entries [itex]a_{ij}=a_{ji}[/itex]. Suppose there's another symmetric matrix B with the same property.
The sum of the 2 matrices is C and a typical matrix entry of C is [itex]c_{ij} = a_{ij} + b_{ij} [/itex]. Now can you show if [itex]c_{ij} = c_{ji}[/itex]?
How about this:
[itex]c_{ij} = a_{ij} + b_{ij} [/itex]
[itex]c_{ji} = a_{ji} + b_{ji} [/itex]
But since [itex]a_{ij}=a_{ji}[/itex] and [itex]b_{ij}=b_{ji}[/itex]
then [itex]c_{ij} = a_{ij} + b_{ij} =a_{ji} + b_{ji}=c_{ji}[/itex]
Does that work? I think it does, if I got my indices right.
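The index argument above can also be spot-checked numerically on a concrete pair of symmetric matrices (a sketch using NumPy; the particular matrices are my own choice):

```python
# Sketch: checking that the sum of two symmetric matrices is symmetric,
# i.e. C = A + B satisfies C^T = C, on a concrete example.
import numpy as np

A = np.array([[1, 2], [2, 3]])
B = np.array([[0, 5], [5, 4]])

assert (A.T == A).all() and (B.T == B).all()  # both inputs are symmetric
C = A + B
assert (C.T == C).all()                       # the sum stays symmetric
```

Of course the numeric check only tests one pair; the index computation [itex]c_{ij} = a_{ij} + b_{ij} = a_{ji} + b_{ji} = c_{ji}[/itex] is what covers every case.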
Source: https://www.physicsforums.com/threads/linear-algebra-problem-1.241337/