Psychology 3256 (Winter 2010) Review Notes

Review of Yates’ ordering.

Review of the fill-in-the-blank ANOVA.

Review of the regression problem.

Please feel free to comment here on the blog to discuss the notes. Your first comment will have to be approved (this prevents comment spam); after that you can post a comment whenever you would like.

13 Replies to “Psychology 3256 (Winter 2010) Review Notes”

  1. I’m having some issues with the hierarchical ANOVA. I did it a couple of ways, and the method that gives me the right df does not make sense. For my SV column I have C, A(C), S(AC), B, BC, BA(C), BS(AC), in that order. All are between-subjects factors, so why does B come after the subjects term? Or, why is it working out this way?

  2. It should not come after, but before, of course. It should be C, B, CB, A(C), AB(C), then subjects. At least from my reading…

  3. I am just making sure I am doing the second Yates order right. Are A and B between and C within? Just making sure before I start the table.

  4. On the regression one, look at which model has a high R2 and little overlap (correlation) among its predictors, and, secondarily, look at C(p) and complexity. I would probably choose a two-variable model, as there is a correlation between two of the X variables.

  5. On the regression, X1 X2 is probably the best model; it has no multicollinearity and is pretty simple, a two-variable model. That said, the three-variable model has a nice high R2, but some multicollinearity.

    You can tell there is collinearity by adding up the R2 for each individual variable and seeing whether the sum matches the R2 for the two, or three, variables in a model together. So, X1 R2 = .39, X2 R2 = .10, X3 R2 = .29. If the combined R2 is less than the sum, you have multicollinearity.

    X1 X2: R2 = .49; with no multicollinearity the sum would be .49
    X2 X3: R2 = .39; with no multicollinearity the sum would be .39
    X1 X3: R2 = .58; with no multicollinearity the sum would be .68

    So, X1 X3 correlate. Bad….
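    A quick sketch of that check in Python, with invented data (the generated variables, the r_squared helper, and the seed are just for illustration, not the actual problem set):

```python
import numpy as np

def r_squared(X, y):
    # R^2 for an ordinary least squares fit of y on the columns of X (plus an intercept)
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # x3 overlaps with x1 on purpose
y = x1 + 0.5 * x2 + 0.5 * x3 + rng.normal(size=n)

for name, cols in [("X1", [x1]), ("X2", [x2]), ("X3", [x3]), ("X1 X3", [x1, x3])]:
    print(name, round(r_squared(np.column_stack(cols), y), 2))
# If R^2 for X1 X3 together is noticeably less than R^2(X1) + R^2(X3),
# the two predictors overlap (multicollinearity).
```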

  6. Dave, can you repost the whole Yates order for the second one again, from start to end? I am easily confused by your correction. Also, is the problem that the bottom-left residual plot is not linear, so we can’t use the standard formula for Y hat in regression, because that violates the assumption of a linear model? So we would use either the log formula or the lambda (with the crazy notation) formula. As for figuring out which one to use, you told us to ask the math department. Is that what you were going for, or did I miss the boat?
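    The “lambda” formula is presumably the Box-Cox transformation. A minimal sketch of the two options, assuming SciPy is available and the response values are positive; the curved data here are invented, just to illustrate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 50)
y = np.exp(0.4 * x + rng.normal(scale=0.2, size=x.size))   # curved, strictly positive y

# Option 1: take logs, then fit a straight line to log(y)
slope, intercept = np.polyfit(x, np.log(y), 1)
print("log-linear slope:", round(slope, 2))

# Option 2: Box-Cox estimates the lambda that best straightens/normalizes y
y_bc, lam = stats.boxcox(y)
print("estimated lambda:", round(lam, 2))   # lambda near 0 means the log is a good choice
```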

  7. The factors in the second Yates order are all between-subjects, so it looks like this:

    Source   df
    B         1
    C         1
    BC        1
    A(C)      2
    BA(C)     2
    S(ABC)   40

    The top-left residual plot is fine; the others are bad. In the top right, the error is correlated with X, which is no good; the bottom left is a curve; and the bottom right overpredicts, then underpredicts, so those residuals are not random around 0.
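    A small sketch of where those df come from, assuming two levels each of A, B, and C, with A nested in C and 6 subjects per cell (the level counts are inferred from the df above, so treat them as an assumption):

```python
# df for a fully between-subjects design: B crossed with C, A nested in C,
# subjects nested in A x B x C. The levels a = b = c = 2 and n = 6 per cell are
# a guess chosen to reproduce the table above.
a, b, c, n = 2, 2, 2, 6

df = {
    "B":      b - 1,
    "C":      c - 1,
    "BC":     (b - 1) * (c - 1),
    "A(C)":   c * (a - 1),
    "BA(C)":  (b - 1) * c * (a - 1),
    "S(ABC)": a * b * c * (n - 1),
}

for source, value in df.items():
    print(f"{source:7s} {value:3d}")

assert sum(df.values()) == a * b * c * n - 1   # the df have to add up to N - 1
```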
