# SPSS Analysis Report: Methods Of Analysis Of Data Assessment 4 Answer

## Question :

Assessment 4: Methods of Analysis of Data (2000 words) - 15% -

Students will submit the design for analysing their data, with justification for the chosen method(s), including sufficient detail to show that the method(s) are well understood and appropriate for the data to be collected. Technical requirements should be identified, and evidence should be given that the proposed methods can be carried out within the resources available.

Level 9 (Masters) Assessment Criteria

• Data analysis design is clear, coherent and justified.

• Evidence that knowledge of data collection methods is extensive and integrated

• Clear identification of the technical requirements applicable to the data analysis methodology(ies)

• Evidence that the methods can be implemented within the constraints of available resources

SPSS Analysis Report

1. T-test
2. One-way ANOVA
3. Correlation
4. Regression

# T-test

**Group Statistics**

| | Gender | N | Mean | Std. Deviation | Std. Error Mean |
|---|---|---|---|---|---|
| Informing | Female | 106 | 2.9843 | .82087 | .07973 |
| | Male | 93 | 3.0645 | .70413 | .07301 |

**Independent Samples Test**

| Informing | Levene's Test F | Levene's Test Sig. | t | df | Sig. (2-tailed) |
|---|---|---|---|---|---|
| Equal variances assumed | 1.654 | .200 | -.735 | 197 | .463 |
| Equal variances not assumed | | | -.742 | 196.906 | .459 |

**t-test for Equality of Means (continued)**

| Informing | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|
| Equal variances assumed | -.08024 | .10920 | -.29558 | .13510 |
| Equal variances not assumed | -.08024 | .10811 | -.29344 | .13296 |

An independent two-sample t-test is conducted to determine whether the means of two groups are statistically significantly different. First, null and alternative hypotheses are stated, and then one of them is rejected based on the results of the t-test. Typically, the significance level used is p = .05. When this test is conducted in SPSS, multiple tables are generated, which are discussed below for the given data:

1. Variables: The independent variable has two groups, namely Females and Males. The dependent variable is ‘Informing’.
2. Hypothesis:
1. Null H0: µf = µm
2. Alternative H1: µf ≠ µm

An independent-samples t-test is undertaken to identify whether there is any variation between the mean scores of males and females on the dependent variable ‘Informing’.

1. Group Statistics: This is the first table generated when a t-test is conducted. The table gives important statistical information about the sample. For example, the above table depicts the sample sizes (106 females and 93 males), means (2.9843 for females and 3.0645 for males), and standard deviations (0.82087 for females and 0.70413 for males). The table also shows that the standard error of the mean was 0.07973 for females and 0.07301 for males. From this data, we can see that the two groups are numerically different.
2. Levene’s test: The next step is to perform Levene’s test in order to see whether the variances across groups can be assumed to be equal, which determines which row of the output to use for analysis. In this case, Levene’s test reveals a p-value of 0.200, which is greater than 0.05. Hence, we can assume equality of variances and will read the first row, ‘Equal variances assumed’, as the assumption is not violated.
3. Equality of means: Following the above, the final step is to identify whether the means of the two groups are statistically significantly different. This is done by looking at the Sig. value, also called the p-value. In the above case, the p-value is 0.463, which is greater than 0.05.
4. Conclusion: Hence, we fail to reject the null hypothesis, as there is no statistically significant difference between the mean values of the two groups, males and females, for the variable ‘Informing’.
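The pooled-variance calculation behind the ‘Equal variances assumed’ row can be reproduced from the Group Statistics table alone. The following Python sketch (plain standard library, not part of the SPSS output) recomputes the standard error of the difference and the t statistic; small discrepancies from SPSS arise because the table reports rounded means and standard deviations.

```python
import math

# Summary statistics from the Group Statistics table above
n_f, mean_f, sd_f = 106, 2.9843, 0.82087   # females
n_m, mean_m, sd_m = 93, 3.0645, 0.70413    # males

# Pooled variance (equal variances assumed, since Levene's test is non-significant)
df = n_f + n_m - 2                         # 197, matching the SPSS df
pooled_var = ((n_f - 1) * sd_f**2 + (n_m - 1) * sd_m**2) / df
se_diff = math.sqrt(pooled_var * (1 / n_f + 1 / n_m))

t = (mean_f - mean_m) / se_diff
# se_diff reproduces the reported Std. Error Difference (.10920);
# t is approximately -.735, as in the Independent Samples Test table
print(round(se_diff, 5), round(t, 3))
```

Because |t| is well below the critical value for 197 degrees of freedom, the reported p-value of .463 follows.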

# One-way ANOVA

**Descriptives: OrgValue**

| Group | N | Mean | Std. Deviation | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound | Minimum | Maximum |
|---|---|---|---|---|---|---|---|---|
| 1.00 | 72 | 3.0694 | .73145 | .08620 | 2.8976 | 3.2413 | 1.00 | 4.67 |
| 2.00 | 81 | 3.0165 | .77082 | .08565 | 2.8460 | 3.1869 | 1.33 | 5.00 |
| 3.00 | 45 | 3.2444 | .56586 | .08435 | 3.0744 | 3.4144 | 1.67 | 4.33 |
| Total | 198 | 3.0875 | .71655 | .05092 | 2.9871 | 3.1880 | 1.00 | 5.00 |

**ANOVA: OrgValue**

| | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|
| Between Groups | 1.541 | 2 | .770 | 1.508 | .224 |
| Within Groups | 99.609 | 195 | .511 | | |
| Total | 101.149 | 197 | | | |

The purpose of conducting a one-way ANOVA test is to conclude if there is statistically significant difference among the means of three or more independent groups.

1. Assumptions: This test is based on certain assumptions, the major ones being:
1. The dependent variable is normally distributed (in this case, the dependent variable is OrgValue)
2. The variances across groups are equal
3. All the observations are independent
2. Descriptives Table: It is the first table generated when a one-way ANOVA test is conducted in SPSS. As the name suggests, this table provides important statistical information about the sample, such as the mean, standard deviation, and 95% confidence interval for each group. In this case, the information is as follows: Group 1.00 (M = 3.0694, SD = 0.73145, 95% CI: 2.8976, 3.2413), Group 2.00 (M = 3.0165, SD = 0.77082, 95% CI: 2.8460, 3.1869) and Group 3.00 (M = 3.2444, SD = 0.56586, 95% CI: 3.0744, 3.4144).
3. ANOVA Table: This is the next table generated, and it provides the actual test output used to draw a conclusion. In this case, the ‘Sig.’ value, or p-value, is 0.224, which is greater than 0.05.
4. Conclusion: As we saw, p = .224 is greater than 0.05. Hence, we fail to reject the null hypothesis: there is no statistically significant difference among the means of the groups.
5. Post Hoc: The ANOVA test only tells whether a statistically significant difference exists among the group means; it does not identify which particular groups differ. For this purpose, post hoc tests are conducted, as discussed below.
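As a cross-check, the ANOVA table can be reconstructed from the Descriptives table alone. This Python sketch (standard library only, not part of the SPSS procedure) recomputes the between- and within-groups sums of squares and the F statistic from the group sizes, means and standard deviations; minor rounding differences versus SPSS come from the rounded table values.

```python
import math

# Group summaries from the Descriptives table above: (n, mean, sd)
groups = [(72, 3.0694, 0.73145), (81, 3.0165, 0.77082), (45, 3.2444, 0.56586)]

n_total = sum(n for n, _, _ in groups)                        # 198
grand_mean = sum(n * m for n, m, _ in groups) / n_total       # ~3.0875

# Between-groups SS: weighted squared deviations of group means from the grand mean
ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
# Within-groups SS: recovered from each group's standard deviation
ss_within = sum((n - 1) * sd ** 2 for n, _, sd in groups)

df_between, df_within = len(groups) - 1, n_total - len(groups)  # 2 and 195
f_stat = (ss_between / df_between) / (ss_within / df_within)
# Approximately 1.541, 99.609 and 1.508, matching the ANOVA table
print(round(ss_between, 3), round(ss_within, 3), round(f_stat, 3))
```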

Homogeneous Subsets

**OrgValue: Student-Newman-Keuls a,b**

| Position1 | N | Subset for alpha = 0.05 (1) |
|---|---|---|
| 2.00 | 81 | 3.0165 |
| 1.00 | 72 | 3.0694 |
| 3.00 | 45 | 3.2444 |
| Sig. | | .181 |

Means for groups in homogeneous subsets are displayed. a. Uses Harmonic Mean Sample Size = 61.911. b. The group sizes are unequal; the harmonic mean of the group sizes is used. Type I error levels are not guaranteed.

The research on the subject is divided about the utility of post hoc tests: some consider them an integral part of ANOVA, while others consider them ‘over-analysis’. In the given case, a post hoc test is not very useful because there is no statistically significant difference among the group means, so there is no question of identifying which groups differ.

There are many types of post hoc tests. In the above case, the Student-Newman-Keuls (S-N-K) post hoc test has been used. This test assumes that the variance is the same across groups and ranks the groups by their means, with the largest mean listed first. Pairs of means are then tested for differences by calculating a test statistic (q) from the ‘studentized range’ distribution.

The output generated for the test is read by looking at the position of each group’s mean. If the mean for a group is listed alone in a column, that group is identified as having a statistically significantly different mean from the other groups. However, if the mean for a group is listed in the same column as the means of other groups, those groups are identified as having no statistically significant difference between their means.

In the given case, the means for all three groups are listed in the same column. Hence, there is no statistically significant variation in the means of the three groups, the same conclusion as drawn from the ANOVA test above.
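The footnoted harmonic mean sample size (61.911) and the range statistic behind the S-N-K test can be illustrated from the tables above. This is a minimal Python sketch, assuming the within-groups mean square from the ANOVA table; SPSS compares the resulting q against a studentized-range critical value internally (its reported Sig. of .181 exceeds .05, so no subset split is made).

```python
import math

# Group sizes and means from the homogeneous-subsets table above
sizes = [81, 72, 45]
means = [3.0165, 3.0694, 3.2444]
ms_within = 99.609 / 195        # within-groups mean square from the ANOVA table

# Harmonic mean of the group sizes (SPSS footnote: 61.911)
n_h = len(sizes) / sum(1 / n for n in sizes)

# Studentized-range statistic q for the largest vs. smallest group mean
q = (max(means) - min(means)) / math.sqrt(ms_within / n_h)
print(round(n_h, 3), round(q, 3))   # harmonic mean reproduces 61.911
```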

# Correlation

**Descriptive Statistics**

| | Mean | Std. Deviation | N |
|---|---|---|---|
| DecMak | 3.0875 | .71655 | 198 |
| Concern | 3.0235 | .92630 | 199 |

**Correlations**

| | | DecMak | Concern |
|---|---|---|---|
| DecMak | Pearson Correlation | 1 | .203** |
| | Sig. (2-tailed) | | .004 |
| | N | 198 | 198 |
| Concern | Pearson Correlation | .203** | 1 |
| | Sig. (2-tailed) | .004 | |
| | N | 198 | 199 |

\*\*. Correlation is significant at the 0.01 level (2-tailed).

A correlation analysis is done to understand the extent of relationship between two selected variables. In this case, the correlation analysis was conducted to determine any significant relationship between two variables: ‘DecMak’ (M = 3.0875, SD = 0.71655) and ‘Concern’ (M = 3.0235, SD = 0.92630). The results of the analysis can be explained as follows:

1. Correlations box: The first table generated through correlation analysis in SPSS is known as the Correlations box and includes four sections: top left, top right, bottom left and bottom right. In the given case, the two variables for which correlation analysis is conducted are ‘DecMak’ and ‘Concern’. Each of these four sections provides a correlation analysis between a pair of the chosen variables:
1. Top left section: This particular section is not useful as it provides the correlation between ‘DecMak’ and ‘DecMak’. It will always show a perfect correlation, since the same variable is correlated with itself.
2. Top right section: This particular section is useful for our purpose as it provides the correlation analysis between ‘DecMak’ and ‘Concern’. In other words, this section provides the data for the variable crossing. The section contains the following:
1. Pearson’s correlation coefficient: As can be seen above, the value of Pearson’s correlation coefficient (r) is 0.203, which indicates a weak positive relationship. It is weak because Pearson’s coefficient is measured on a scale of -1 to 1, and a value of 0.203 is close to zero. This means that a change in one variable is associated with only a small change in the other. Further, since the value is positive, it indicates a positive relationship: an increase in one variable is associated with an increase in the other, and vice versa.
2. Sig. (2-tailed) value: As can be seen above, the value provided for Sig. (2-tailed) is p = .004. Since this value is less than 0.05, the correlation between ‘DecMak’ and ‘Concern’ is statistically significant.
3. N: As can be seen above, the value provided is 198, which is the sample size used to generate the output.
2. Bottom left section: The Correlations box is symmetric about its diagonal, so this section simply mirrors the top right section, reporting the same values for ‘Concern’ and ‘DecMak’ (r = 0.203, p = .004, N = 198).
3. Bottom right section: This particular section is not useful as it provides the correlation between ‘Concern’ and ‘Concern’. It will always show a perfect correlation, since the same variable is correlated with itself.

Hence, we can conclude that the results of the analysis (r = .203, p < .05) indicate a statistically significant association between the variables. The results can be interpreted to mean that there is a weak positive linear relationship between ‘DecMak’ and ‘Concern’.
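The reported p = .004 can be sanity-checked from r and N alone, since the significance of a Pearson correlation is tested with t = r·√(n−2)/√(1−r²) on n−2 degrees of freedom. A short Python sketch (standard library only, using the values from the Correlations box above):

```python
import math

# Reported correlation and pairwise sample size from the Correlations box above
r, n = 0.203, 198

# t statistic for testing H0: rho = 0, on n - 2 degrees of freedom
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
# t is roughly 2.90 on 196 df, consistent with the reported two-tailed p = .004
print(round(t, 2))
```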

# Regression

**Variables Entered/Removed a**

| Model | Variables Entered | Variables Removed | Method |
|---|---|---|---|
| 1 | EmpLead b | . | Enter |

a. Dependent Variable: Trust. b. All requested variables entered.

**Model Summary**

| Model | R | R Square | Adjusted R Square | Std. Error of the Estimate |
|---|---|---|---|---|
| 1 | .749 | .561 | .559 | .61702 |

Predictors: (Constant), EmpLead

**ANOVA a**

| Model 1 | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|
| Regression | 95.269 | 1 | 95.269 | 250.234 | .000 |
| Residual | 74.621 | 196 | .381 | | |
| Total | 169.890 | 197 | | | |

a. Dependent Variable: Trust. Predictors: (Constant), EmpLead

**Coefficients a**

| Model 1 | B | Std. Error | Beta | t | Sig. |
|---|---|---|---|---|---|
| (Constant) | -.306 | .215 | | -1.423 | .156 |
| EmpLead | 1.093 | .069 | .749 | 15.819 | .000 |

a. Dependent Variable: Trust

A regression analysis is conducted to derive an equation for predicting one variable from the value of another. In the given case, the regression analysis was conducted with ‘EmpLead’ as the independent variable and ‘Trust’ as the dependent variable. The objective is to determine whether: (a) ‘EmpLead’ explains a statistically significant amount of the variance in ‘Trust’, and (b) there is a statistically significant predictive relationship between ‘EmpLead’ and ‘Trust’. Note that regression alone demonstrates prediction, not causation.

1. The first box simply lists the independent variables entered for the regression analysis. In the given case, there is only one, ‘EmpLead’. The dependent variable is ‘Trust’.
2. The second box, labelled ‘Model Summary’ by SPSS, provides values that help in analysing the regression output. As can be seen above, the R value is 0.749, indicating a high degree of correlation between the two variables. R² is 0.561, with adjusted R² at 0.559. R² is the proportion of variance in the dependent variable that is explained by the independent variable. At 56.1%, a large proportion of the variation in ‘Trust’ can be attributed to ‘EmpLead’.
3. The third box is the ‘ANOVA’ table, which assists in assessing the fit of the regression model. As can be seen above, the corresponding Sig. value is p < .001 (SPSS displays .000), which is less than 0.05 and hence indicates that the regression model predicts the outcome variable significantly better than the mean alone. In other words, the regression equation fits the data well.
4. The last box is called the ‘Coefficients’ table and provides the coefficients needed to generate the regression equation. In the given case, the regression equation can be stated as: Trust = -0.306 + 1.093(EmpLead)

Hence, the results of the analysis (R = .749, R² = .561, p < .001) indicate that EmpLead explains 56.1% of the variance in Trust. The results also indicate a statistically significant predictive relationship between EmpLead and Trust: EmpLead explains variation in Trust and can be used to predict values of Trust.
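The regression output above is internally consistent, and the key relationships can be verified directly from the tables. A Python sketch (standard library only; the numbers are copied from the SPSS output, and the prediction function is illustrative):

```python
import math

# Values from the SPSS regression output above
ss_regression, ss_total = 95.269, 169.890   # ANOVA table
t_emplead = 15.819                          # Coefficients table
f_stat = 250.234                            # ANOVA table

# R-squared is the share of total variance explained by the model
r_squared = ss_regression / ss_total        # ~.561, as in the Model Summary
r = math.sqrt(r_squared)                    # ~.749

# With a single predictor, the ANOVA F equals the squared t of that predictor
print(round(r_squared, 3), round(r, 3), round(t_emplead ** 2, 1))

# Regression equation from the Coefficients table: Trust = -0.306 + 1.093 * EmpLead
def predict_trust(emplead):
    return -0.306 + 1.093 * emplead
```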
