## Thursday, March 31, 2011

### The Root of All Fear

The craving to become causes fears; to be, to achieve, and so to depend engenders fear. The state of non-fear is not negation, it is not the opposite of fear nor is it courage. In understanding the cause of fear, there is its cessation, not the becoming courageous, for in all becoming there is the seed of fear. Dependence on things, on people, or on ideas breeds fear; dependence arises from ignorance, from the lack of self-knowledge, from inward poverty; fear causes uncertainty of mind-heart, preventing communication and understanding. Through self-awareness we begin to discover and so comprehend the cause of fear, not only the superficial but the deep causal and accumulative fears. Fear is both inborn and acquired; it is related to the past, and to free thought-feeling from it, the past must be comprehended through the present. The past is ever wanting to give birth to the present which becomes the identifying memory of the "me" and the "mine," the "I". The self is the root of all fear. - J. Krishnamurti, The Book of Life

## Tuesday, March 29, 2011

### The Disorder That Time Creates

Time means moving from what is to "what should be." I am afraid, but one day I shall be free of fear; therefore, time is necessary to be free of fear, at least, that is what we think. To change from what is to "what should be" involves time. Now, time implies effort in that interval between what is and "what should be." I don't like fear, and I am going to make an effort to understand, to analyze, to dissect it, or I am going to discover the cause of it, or I am going to escape totally from it. All this implies effort and effort is what we are used to. We are always in conflict between what is and "what should be." The "what I should be" is an idea, and the idea is fictitious, it is not "what I am," which is the fact; and the "what I am" can be changed only when I understand the disorder that time creates. So, is it possible for me to be rid of fear totally, completely, on the instant? If I allow fear to continue, I will create disorder all the time; therefore, one sees that time is an element of disorder, not a means to be ultimately free of fear. So there is no gradual process of getting rid of fear, just as there is no gradual process of getting rid of the poison of nationalism. If you have nationalism and you say that eventually there will be the brotherhood of man, in the interval there are wars, there are hatreds, there is misery, there is all this appalling division between man and man; therefore, time is creating disorder. - J. Krishnamurti, The Book of Life

### Statistical Dictionaries

• http://www.animatedsoftware.com/elearning/Statistics%20Explained/glossary/se_glossary.html
• http://www.stat.berkeley.edu/~stark/SticiGui/Text/gloss.htm
• http://rkb.home.cern.ch/rkb/AN16pp/node1.html
• http://www.animatedsoftware.com/statglos/statglos.htm#index
• http://www.statsoft.com/textbook/statistics-glossary/
• http://www.statsoft.com/textbook/
• http://www.stats.gla.ac.uk/steps/glossary/hypothesis_testing.html#h0

### degree of freedom

http://www.jerrydallal.com/LHSP/dof.htm

### p-value

The function $\scriptstyle A(t|\nu)$ is the integral of Student's probability density function, f(t), between −t and t. It thus gives the probability that a value of t less than that calculated from observed data would occur by chance. Therefore, the function $\scriptstyle A(t|\nu)$ can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of t and the probability of its occurrence if the two sets of data were drawn from the same population. This is used in a variety of situations, particularly in t-tests. For the statistic t, with $\scriptstyle\nu$ degrees of freedom, $\scriptstyle A(t|\nu)$ is the probability that t would be less than the observed value if the two means were the same (provided that the smaller mean is subtracted from the larger, so that t > 0). For real t it can be written in terms of the regularized incomplete beta function: $\scriptstyle A(t|\nu) = 1 - I_{\nu/(\nu+t^2)}(\nu/2,\, 1/2)$.
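As a sketch (not from the original text), the two-sided probability $\scriptstyle A(t|\nu)$ and the resulting two-tailed p-value can be computed with scipy's Student's t distribution; the observed t and degrees of freedom below are arbitrary illustration values:

```python
from scipy.stats import t as t_dist

def A(t, nu):
    """P(-t < T < t) for Student's t with nu degrees of freedom."""
    return t_dist.cdf(t, df=nu) - t_dist.cdf(-t, df=nu)

# Illustrative values (arbitrary): observed t = 2.5 with 10 degrees of freedom.
t_obs, nu = 2.5, 10
p_two_tailed = 1 - A(t_obs, nu)  # probability of a |t| at least this large by chance
```

With these numbers the p-value falls between 0.01 and 0.05, so the difference would be significant at the 0.05 level but not the 0.01 level.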

## Monday, March 28, 2011

### Fear Is Nonacceptance of What Is

Fear finds various escapes. The common variety is identification, is it not? Identification with country, with society, with an idea. Haven't you noticed how you respond when you see a procession, a military procession or a religious procession, or when the country is in danger of being invaded? You then identify yourself with the country, with a being, with an ideology. There are other times when you identify yourself with your child, with your wife, with a particular form of action, or inaction. Identification is a process of self-forgetfulness. So long as I am conscious of the "me" I know there is pain, there is struggle, there is constant fear. But if I can identify myself with something greater, with something worthwhile, with beauty, with life, with truth, with belief (e.g., religion, God, a guru), with knowledge, at least temporarily, there is an escape from the "me", is there not? If I talk about "my country" I forget myself temporarily, do I not? If I can say something about God, I forget myself. If I can identify myself with my family, with a group, with a particular party, with a certain ideology, then there is a temporary escape. Do we now know what fear is? Is it not the non-acceptance of what is? We must understand the word acceptance. I am not using that word as meaning the effort made to accept. There is no question of accepting when I perceive what is. When I do not see clearly what is, then I bring in the process of acceptance. Therefore, fear is the non-acceptance of what is. - J. Krishnamurti, The Book of Life

### PSM and volunteering

Individuals with higher PSM levels are expected to hold more empathetic attitudes, and have a higher regard for the importance of civic participation (Houston 2008).
Other forms of prosocial behaviour, such as volunteering and organizational citizenship behaviours, have also been associated with PSM (Dekker and Halman 2003; Kim 2006). Perry et al. (2008) determined that high PSM levels are related to formal and informal volunteering. The prosocial orientation of individuals with high PSM levels appears to foster prosocial acts (Houston 2006, 2008).

### Contacting Fear

There is physical fear. You know, when you see a snake, a wild animal, instinctively there is fear; that is a normal, healthy, natural fear. It is not fear, it is a desire to protect oneself, that is normal.
But the psychological protection of oneself, that is, the desire to be always certain, breeds fear. A mind that is seeking always to be certain is a dead mind, because there is no certainty in life, there is no permanency. When you come directly into contact with fear, there is a response of the nerves and all the rest of it. Then, when the mind is no longer escaping through words or through activity of any kind, there is no division between the observer and the thing observed as fear. It is only the mind that is escaping that separates itself from fear. But when there is a direct contact with fear, there is no observer, there is no entity that says, "I am afraid." So, the moment you are directly in contact with life, with anything, there is no division; it is this division that breeds competition, ambition, fear. So what is important is not "how to be free of fear?" If you seek a way, a method, a system to be rid of fear, you will be everlastingly caught in fear. But if you understand fear, which can only take place when you come directly in contact with it, as you are in contact with hunger, as you are directly in contact when you are threatened with losing your job, then you do something; only then will you find that all fear ceases; we mean all fear, not fear of this kind or of that kind. - J. Krishnamurti, The Book of Life

### HLM

http://www.ats.ucla.edu/stat/hlm/

### Multilevel Modeling

http://www.ats.ucla.edu/stat/spss/topics/MLM.htm

### Multicollinearity

There are different ways that the relative contribution of each predictor variable can be assessed.

In the "simultaneous" method (which SPSS calls the Enter method), the researcher specifies the set of predictor variables that make up the model. The success of this model in predicting the criterion variable is then assessed. (In stepwise regression, by contrast, the computer determines the order of entry of the variables.)

In contrast, "hierarchical" methods enter the variables into the model in a specified order. The order specified should reflect some theoretical consideration or previous findings. If you have no reason to believe that one variable is likely to be more important than another, you should not use this method. As each variable is entered into the model, its contribution is assessed. If adding the variable does not significantly increase the predictive power of the model, then the variable is dropped.

In "statistical" methods, the order in which the predictor variables are entered into (or taken out of) the model is determined according to the strength of their correlation with the criterion variable. Actually there are several versions of this method, called forward selection, backward selection and stepwise selection. In forward selection, SPSS enters the variables into the model one at a time in an order determined by the strength of their correlation with the criterion variable. The effect of adding each is assessed as it is entered, and variables that do not significantly add to the success of the model are excluded.

In backward selection, SPSS enters all the predictor variables into the model. The weakest predictor variable is then removed and the regression re-calculated. If this significantly weakens the model then the predictor variable is re-entered – otherwise it is deleted. This procedure is then repeated until only useful predictor variables remain in the model.

Stepwise is the most sophisticated of these statistical methods. Each variable is entered in sequence and its value assessed. If adding the variable contributes to the model then it is retained, but all other variables in the model are then re-tested to see if they are still contributing to the success of the model. If they no longer contribute significantly they are removed. Thus, this method should ensure that you end up with the smallest possible set of predictor variables included in your model.

In addition to the Enter, Stepwise, Forward and Backward methods, SPSS also offers the Remove method, in which variables are removed from the model in a block – the use of this method will not be described here.

If you have no theoretical model in mind, and/or you have relatively low numbers of cases, then it is probably safest to use Enter, the simultaneous method. Statistical procedures should be used with caution and only when you have a large number of cases, because minor variations in the data due to sampling errors can have a large effect on the order in which variables are entered and therefore the likelihood of them being retained. However, one advantage of the Stepwise method is that it should always result in the most parsimonious model. This could be important if you wanted to know the minimum number of variables you would need to measure to predict the criterion variable. If for this, or some other reason, you decide to select a statistical method, then you should really attempt to validate your results with a second independent set of data. This can be done either by conducting a second study, or by randomly splitting your data set into two halves.
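The split-half validation suggested above can be sketched with NumPy. The data here are simulated purely for illustration; the idea is simply that a model that generalizes should yield similar coefficients and fit on both random halves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 cases, 3 predictors, one criterion.
n = 100
X = rng.normal(size=(n, 3))
y = X @ np.array([1.5, 0.0, -2.0]) + rng.normal(size=n)

# Randomly split the cases into two halves.
idx = rng.permutation(n)
half_a, half_b = idx[:n // 2], idx[n // 2:]

def fit_r2(rows):
    """OLS fit (with intercept) on the given rows; returns coefficients and R-squared."""
    Xr = np.column_stack([np.ones(len(rows)), X[rows]])
    b, *_ = np.linalg.lstsq(Xr, y[rows], rcond=None)
    resid = y[rows] - Xr @ b
    r2 = 1 - resid @ resid / np.sum((y[rows] - y[rows].mean()) ** 2)
    return b, r2

b_a, r2_a = fit_r2(half_a)
b_b, r2_b = fit_r2(half_b)
# If the model generalizes, b_a and b_b should agree within sampling error.
```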

## Sunday, March 27, 2011

### multiple regression, spss output

http://www.ats.ucla.edu/stat/spss/output/reg_spss.htm

### Significance of r

One tests the hypothesis that the correlation is zero (p = 0, H0) using this formula:
t = [r * sqrt(n − 2)] / sqrt(1 − r²)

where r is the absolute value of the correlation coefficient and n is sample size, and where one looks up the t value in a table of the distribution of t, for (n - 2) degrees of freedom.

If the computed t value (correlation) is as high or higher than the table t value (small p value), then the researcher concludes the correlation is significant (that is, significantly different from 0).

In practice, most computer programs compute the significance of correlation for the researcher without need for manual methods. By default, the test is two-tailed.
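The formula above is easy to apply directly. A hedged sketch using scipy (the r and n values are arbitrary illustration values):

```python
import math
from scipy.stats import t as t_dist

def r_significance(r, n):
    """t statistic and two-tailed p-value for H0: rho = 0."""
    t = abs(r) * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
    p = 2 * t_dist.sf(t, df=n - 2)   # two-tailed, as in the default SPSS output
    return t, p

# Illustrative values: r = 0.5 observed in a sample of 30.
t_stat, p = r_significance(0.5, 30)
```

For r = 0.5 and n = 30 the t value is about 3.06 on 28 degrees of freedom, so the correlation is significantly different from 0 at the 0.01 level.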

### one tail or two tail

Normally the researcher wants two-tailed significance and this is the default in SPSS output. One is then testing the chance that the observed correlation is significantly different from zero correlation, in a positive or negative direction. That is, a two-tailed test tests the absolute magnitude of the correlation. If for some theoretical reason one direction of correlation (negative or positive) can be ruled out because it is impossible, then the researcher should choose one-tailed significance. In SPSS, choose Analyze, Correlate, Bivariate; check Two-tailed (the default) or One-tailed.

### Steps in Hypothesis Testing

The first step in hypothesis testing is to specify the null hypothesis (H0) and the alternative hypothesis (H1). If the research concerns whether one method of presenting pictorial stimuli leads to better recognition than another, the null hypothesis would most likely be that there is no difference between methods (H0: μ1 - μ2 = 0). The alternative hypothesis would be H1: μ1 ≠ μ2. If the research concerned the correlation between grades and SAT scores, the null hypothesis would most likely be that there is no correlation (H0: ρ= 0). The alternative hypothesis would be H1: ρ ≠ 0.

The next step is to select a significance level. Typically the 0.05 or the 0.01 level is used.

The third step is to calculate a statistic analogous to the parameter specified by the null hypothesis. If the null hypothesis were defined by the parameter μ1- μ2, then the statistic M1 - M2 would be computed.

The fourth step is to calculate the probability value (often called the p value). The p value is the probability of obtaining a statistic as different or more different from the parameter specified in the null hypothesis as the statistic computed from the data. The calculations are made assuming that the null hypothesis is true.

The probability value computed in Step 4 is compared with the significance level chosen in Step 2. If the probability is less than or equal to the significance level, then the null hypothesis is rejected; if the probability is greater than the significance level, then the null hypothesis is not rejected. When the null hypothesis is rejected, the outcome is said to be "statistically significant"; when the null hypothesis is not rejected, the outcome is said to be "not statistically significant."

If the outcome is statistically significant, then the null hypothesis is rejected in favor of the alternative hypothesis. If the rejected null hypothesis were that μ1- μ2 = 0, then the alternative hypothesis would be that μ1 ≠ μ2. If M1 were greater than M2, then the researcher would naturally conclude that μ1 ≥ μ2.
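The five steps can be walked through with scipy's independent-samples t-test. The recognition scores below are simulated purely for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Hypothetical recognition scores under two presentation methods.
method1 = rng.normal(loc=60, scale=10, size=40)
method2 = rng.normal(loc=45, scale=10, size=40)

# Step 1: H0: mu1 - mu2 = 0, H1: mu1 != mu2
alpha = 0.05                                   # Step 2: significance level
t_stat, p_value = ttest_ind(method1, method2)  # Steps 3-4: statistic and p value
reject_h0 = p_value <= alpha                   # Step 5: compare p with alpha
```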

### Null Hypothesis

The null hypothesis is a hypothesis about a population parameter. The purpose of hypothesis testing is to test the viability of the null hypothesis in the light of experimental data.

Consider a researcher interested in whether the time to respond to a tone is affected by the consumption of alcohol. The null hypothesis is that µ1 - µ2 = 0, where µ1 is the mean time to respond after consuming alcohol and µ2 is the mean time to respond otherwise. Thus, the null hypothesis concerns the parameter µ1 - µ2, and the null hypothesis is that the parameter equals zero.

The null hypothesis is often the reverse of what the experimenter actually believes; it is put forward to allow the data to contradict it. In the experiment on the effect of alcohol, the experimenter probably expects alcohol to have a harmful effect. If the experimental data show a sufficiently large effect of alcohol, then the null hypothesis that alcohol has no effect can be rejected.

### hypothesis

A hypothesis is a proposed explanation for an observable phenomenon.

null hypothesis --- states that there is no relation between the phenomena whose relation is under investigation
The null hypothesis is often the reverse of what the experimenter actually believes; it is put forward to allow the data to contradict it.
In the experiment on the effect of alcohol, the experimenter probably expects alcohol to have a harmful effect (based on theory). Null hypothesis (no effect). If the experimental data show a sufficiently large effect of alcohol, then the null hypothesis that alcohol has no effect can be rejected.
The null hypothesis would be that there is no difference between the two methods; the researcher would be hoping to reject the null hypothesis.
It is the hypothesis of no difference between population means.
The null hypothesis could also be that the difference between population means is a particular value, or that the mean SAT score in some population is 600, stated as H0: μ = 600. Although the null hypotheses discussed so far have all involved the testing of hypotheses about one or more population means, null hypotheses can involve any parameter.
A study of the correlation between job satisfaction and performance on the job would test the null hypothesis that the population correlation (ρ) is 0:
H0: ρ = 0.
Some possible null hypotheses are given below:
H0: μ=0
H0: μ=10
H0: μ1 - μ2 = 0
H0: π = .5
H0: π1 - π2 = 0
H0: μ1 = μ2 = μ3
H0: ρ1- ρ2= 0


alternative hypothesis--- it states that there is some kind of relation

The alternative hypothesis may take several forms, depending on the nature of the hypothesized relation; in particular, it can be two-sided (for example: there is some effect, in a yet unknown direction) or one-sided (the direction of the hypothesized relation, positive or negative, is fixed in advance).

Conventional significance levels for testing the hypotheses are .10, .05, and .01.

p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, calculated based on the sample --- the probability that the sample would show a result this extreme

One often "rejects the null hypothesis" when the p-value is less than 0.05 or 0.01 (i.e., the probability of the sample showing the observed result is low), corresponding respectively to a 5% or 1% chance of rejecting the null hypothesis when it is true (Type I error). When the null hypothesis is rejected, the result is said to be statistically significant.

Closeness to God, an individual’s perception of closeness to God when engaged in both spiritual and social activities, is an antecedent to PSM. Church involvement, a proxy for religious fundamentalism, is not an antecedent to PSM. An individual’s adherence to religious doctrines does not assure acceptance of the Judeo-Christian ethics of love and compassion (Perry, 1997). Church involvement was negatively, rather than positively, associated with PSM.

Vinokur-Kaplan, Jayaratne, and Chess (1994) examined the impact of workplace conditions and motivators on the job satisfaction and retention of social workers in public agencies, non-proﬁt agencies, and private agencies. They found opportunities for promotion and job challenge were the most important factors inﬂuencing the job satisfaction of individuals in non-proﬁt and public agencies.

Vinokur-Kaplan, D., Jayaratne, S., & Chess, W.A. 1994. Job satisfaction and retention of social workers in public agencies, non-proﬁt agencies, and  private practice: The impact of workplace conditions and motivators. Administration in Social Work, 18: 93–121.

### statistic website

http://faculty.chass.ncsu.edu/garson/PA765/statnote.htm

### Time Series Analysis

http://faculty.chass.ncsu.edu/garson/PA765/time.htm

### Hierarchical Linear Modeling with HLM Software

http://faculty.chass.ncsu.edu/garson/PA765/hlmsoft.htm

### Multilevel Models

http://faculty.chass.ncsu.edu/garson/PA765/structur.htm#multilevel

Multilevel modeling addresses the special issue of hierarchical data from different units of analysis (ex., data on students and data on their classrooms and data on their schools). It has been widely used in educational research. Because of the group effects involved in multi-level modeling, analysis of covariance structures requires somewhat different algorithms implemented by such software packages as HLM and MLWin. This variant on structural equation modeling is discussed at greater length in a separate section on multilevel modeling. That discussion mainly references multilevel modeling using the SPSS "Linear Mixed Models" module, and also HLM software. For a concise overview discussion of multilevel modeling in EQS and LISREL, see Schumacker & Lomax (2004: 330-342). Mplus software supports multilevel SEM modeling.

### Hierarchical multiple regression

http://faculty.chass.ncsu.edu/garson/PA765/regress.htm

Hierarchical multiple regression (not to be confused with hierarchical linear models) is similar to stepwise regression, but the researcher, not the computer, determines the order of entry of the variables. F-tests are used to compute the significance of each added variable (or set of variables) to the explanation reflected in R-square. This hierarchical procedure is an alternative to comparing betas for purposes of assessing the importance of the independents. In more complex forms of hierarchical regression, the model may involve a series of intermediate variables which are dependents with respect to some other independents, but are themselves independents with respect to the ultimate dependent. Hierarchical multiple regression may then involve a series of regressions for each intermediate as well as for the ultimate dependent.

For hierarchical multiple regression, in SPSS first specify the dependent variable; then enter the first independent variable or set of variables in the independent variables box; click on "Next" to clear the IV box and enter a second variable or set of variables; etc. One also clicks on the Statistics button and selects "R-squared change." Note that the error term will change for each block or step in the hierarchical analysis. If this is not desired, it can be avoided by selecting Statistics, General Linear Model, GLM-General Factorial, then specifying Type I sums of squares. This will yield GLM results analogous to hierarchical regression but with the same error term across blocks.
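The "R-squared change" test that SPSS reports can also be computed by hand. This NumPy sketch (simulated data, hypothetical helper names) computes the F statistic for the increase in R-square when a block of predictors is added to a smaller model:

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ b
    return 1 - resid @ resid / np.sum((y - y.mean()) ** 2)

def r2_change_f(X_small, X_full, y):
    """F test for the R-squared increase when extra predictors are added."""
    n = len(y)
    k_small, k_full = X_small.shape[1], X_full.shape[1]
    r2_s, r2_f = r_squared(X_small, y), r_squared(X_full, y)
    num = (r2_f - r2_s) / (k_full - k_small)        # change per added predictor
    den = (1 - r2_f) / (n - k_full - 1)             # unexplained variance per df
    return num / den

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 3))
y = 2 * X[:, 0] + X[:, 2] + rng.normal(size=n)      # column 2 genuinely matters
F = r2_change_f(X[:, :1], X, y)                     # does adding columns 2-3 help?
```

A large F (relative to the F distribution with k_full − k_small and n − k_full − 1 degrees of freedom) indicates the added block significantly improves prediction.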

### multiple regression

http://faculty.chass.ncsu.edu/garson/PA765/regress.htm

• Multiple regression can establish that a set of independent variables explains a proportion of the variance in a dependent variable at a significant level (through a significance test of R2), and can establish the relative predictive importance of the independent variables (by comparing beta weights, i.e., standardized coefficients).
• The multiple regression equation takes the form y = b1x1 + b2x2 + ... + bnxn + c. The b's are the regression coefficients, representing the amount the dependent variable y changes when the corresponding independent changes 1 unit. The standardized version of the b coefficients are the beta weights.
• Power terms can be added as independent variables to explore curvilinear effects.
• Cross-product terms can be added as independent variables to explore interaction effects.
• One can test the significance of difference of two R2's to determine if adding an independent variable to the model helps significantly.  ---- SEM
• Using hierarchical regression, one can see how most variance in the dependent can be explained by one or a set of new independent variables, over and above that explained by an earlier set.
• R2  --- the percent of variance in the dependent variable explained collectively by all of the independent variables.
• Logistic regression is used for dichotomous and multinomial dependents, implemented here with logistic procedures and above in generalized linear models.
• Logit regression uses log-linear techniques to predict one or more categorical dependent variables.
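As an illustration of the equation y = b1x1 + b2x2 + ... + c above, a least-squares fit with NumPy on simulated data recovers the b coefficients and the constant (the data-generating values are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Hypothetical data generated from y = 2*x1 - 3*x2 + 5 plus noise.
y = 2 * x1 - 3 * x2 + 5 + rng.normal(scale=0.5, size=n)

# Solve y = b1*x1 + b2*x2 + c by least squares.
X = np.column_stack([x1, x2, np.ones(n)])
b1, b2, c = np.linalg.lstsq(X, y, rcond=None)[0]
# b1 and b2 estimate the change in y per 1-unit change in each x.
```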

### multilevel model

http://faculty.chass.ncsu.edu/garson/PA765/structur.htm#multilevel

### Correlation Coefficient r and Beta (standardised regression coefficients)

• r is a measure of the correlation between the observed value and the predicted value of the criterion variable.
• When you have only one predictor variable in your model, then beta is equivalent to the correlation coefficient (r) between the predictor and the criterion variable.
• When you have more than one predictor variable, you cannot compare the contribution of each predictor variable by simply comparing the correlation coefficients. The beta (β) regression coefficient is computed to allow you to make such comparisons and to assess the strength of the relationship between each predictor variable and the criterion variable.
• Beta (standardised regression coefficients)  --- The beta value is a measure of how strongly each predictor variable influences the criterion (dependent) variable. The beta is measured in units of standard deviation. For example,  a beta value of 2.5 indicates that a change of one standard deviation in the predictor  variable will result in a change of 2.5 standard deviations in the criterion variable. Thus, the higher the beta value the greater the impact of the predictor variable on the criterion variable.
• In multiple regression, to interpret the direction of the relationship between variables, look at the signs (plus or minus) of the B coefficients. If a B coefficient is positive, then the relationship of this variable with the dependent variable is positive (e.g., the greater the IQ the better the grade point average); if the B coefficient is negative then the relationship is negative (e.g., the lower the class size the better the average test scores). Of course, if the B coefficient is equal to 0 then there is no relationship between the variables.
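A minimal sketch of beta weights on simulated data: z-score the predictors and the criterion, then fit without an intercept (after standardization the intercept is 0). With a single predictor, the beta should equal Pearson's r, as stated above:

```python
import numpy as np

def beta_weights(X, y):
    """Standardized (beta) coefficients: z-score everything, then fit OLS."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    b, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return b

rng = np.random.default_rng(4)
x = rng.normal(size=(300, 1))           # one hypothetical predictor
y = 3 * x[:, 0] + rng.normal(size=300)  # criterion with noise

beta = beta_weights(x, y)[0]
r = np.corrcoef(x[:, 0], y)[0, 1]       # with one predictor, beta == r
```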

### coefficient of determination R2

• tells the percent of the variance in the dependent variable that can be explained by all of the independent variables taken together.
• to evaluate model fit
• The R-square value is an indicator of how well the model fits the data (e.g., an R-square close to 1.0 indicates that we have accounted for almost all of the variability with the variables specified in the model).
• if there is no relationship between the X and Y variables, R-square would be 0
• If X and Y are perfectly related,  R-square = 1
• R-square will fall somewhere between 0.0 and 1.0
• If we have an R-square of 0.4, we have explained 40% of the original variability, and are left with 60% residual variability
• It is the proportion of variability in a data set that is accounted for by the statistical model.
• R2 is a statistic that will give some information about the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression line approximates the real data points. An R2 of 1.0 indicates that the regression line perfectly fits the data.
• Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis.
• R-squared increases as we increase the number of variables in the model. This illustrates a drawback to one possible use of R2, where one might try to include more variables in the model until "there is no more improvement". This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as R2 but it penalizes the statistic as extra variables are included in the model.

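The R-square and adjusted R-square definitions above can be computed directly; the tiny data set here is made up for illustration:

```python
import numpy as np

def r2_and_adjusted(y, y_hat, k):
    """R-squared and adjusted R-squared for a model with k predictors."""
    ss_res = np.sum((y - y_hat) ** 2)           # residual variability
    ss_tot = np.sum((y - y.mean()) ** 2)        # total variability
    r2 = 1 - ss_res / ss_tot
    n = len(y)
    adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # penalized for extra predictors
    return r2, adj

y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
y_hat = np.array([3.2, 4.8, 7.1, 9.0, 10.9])    # hypothetical fitted values
r2, adj = r2_and_adjusted(y, y_hat, k=1)
```

Adjusted R-square is always at most R-square, and the gap widens as more predictors are added for a fixed sample size.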

### z score, t score, degrees of freedom

According to the central limit theorem, the sampling distribution of a statistic (like a sample mean) will follow a normal distribution, as long as the sample size is sufficiently large. Therefore, when we know the standard deviation of the population, we can compute a z-score, and use the normal distribution to evaluate probabilities with the sample mean.

But sample sizes are sometimes small, and often we do not know the standard deviation of the population. When either of these problems occurs, statisticians rely on the distribution of the t statistic (also known as the t score), whose values are given by:

t = [ x - μ ] / [ s / sqrt( n ) ]
where x is the sample mean, μ is the population mean, s is the standard deviation of the sample, and n is the sample size. The distribution of the t statistic is called the t distribution or the Student t distribution.
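A direct translation of the t formula; the sample numbers are hypothetical:

```python
import math

def t_score(sample_mean, mu, s, n):
    """t = (sample mean - population mean) / (s / sqrt(n))"""
    return (sample_mean - mu) / (s / math.sqrt(n))

# Hypothetical sample: mean 105, sd 10, n = 25, testing H0: mu = 100.
t = t_score(105, 100, 10, 25)  # → 2.5
```

The resulting t would then be referred to a t distribution with n − 1 = 24 degrees of freedom, as described below.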

## Degrees of Freedom

There are actually many different t distributions. The particular form of the t distribution is determined by its degrees of freedom. The degrees of freedom refers to the number of independent observations in a set of data.
When estimating a mean score or a proportion from a single sample, the number of independent observations is equal to the sample size minus one. Hence, the distribution of the t statistic from samples of size 8 would be described by a t distribution having 8 - 1 or 7 degrees of freedom. Similarly, a t distribution having 15 degrees of freedom would be used with a sample of size 16.
For other applications, the degrees of freedom may be calculated differently. We will describe those computations as they come up.

## When to Use the t Distribution

The t distribution can be used with any statistic having a bell-shaped distribution (i.e., approximately normal). The central limit theorem states that the sampling distribution of a statistic will be normal or nearly normal, if any of the following conditions apply.
• The population distribution is normal.
• The population distribution is symmetric, unimodal, without outliers, and the sample size is 15 or less.
• The population distribution is moderately skewed, unimodal, without outliers, and the sample size is between 16 and 40.
• The sample size is greater than 40, without outliers.
The t distribution should not be used with small samples from populations that are not approximately normal.

### multiple correlation coefficient (R)

• in multiple regression, the overall correlation between the dependent variable and all the independent variables is called the multiple correlation coefficient (R)

### Hypothesis—what should be observed if the theory is correct

1st step --- translate theoretical statements about relationships among constructs into hypotheses about associations among variables. If the theoretical relationships among constructs are true, then the variables should be associated with one another in the manner described by the theory.
2nd step --- how well do the theory-based expectations conform to the observed data? If the variables are not associated with one another, then the theory is disconfirmed.

Formulation of a hypothesis --- an expectation about what should be observed in the sample if the theory about the population is true:
assume that what is true in the population is also true of the sample;
then use the sample to test the claim about the population.
We apply our theory to a sample, making a guess about what should be observed if our theory is correct.

Hypothesis testing --- whether the results of research (on a sample) support a theory about the population

In stating hypotheses about the sample data, we are primarily interested in hypotheses about associations among measured variables for a finite sample.

Null hypothesis --- no association between variables: H0: r = 0
Or: null hypothesis --- no difference in means between two groups

In most cases, we hope to reject the null hypothesis. Theories usually suggest the presence of relationships (that associations exist).

Calculate the probability (p) of observing the empirical associations (in the sample).
The test statistic for the correlation coefficient is t with N − 2 degrees of freedom. If t is large, its associated p value is small, and the null hypothesis (no association) is rejected. We would conclude that it is unlikely that the two constructs are unrelated to each other in the population. In contrast, if t is small and its p value is large, we fail to reject the null hypothesis (no association).
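As a small illustration (hypothetical data, standard-library Python), a common form of this test statistic is t = r·sqrt(N − 2) / sqrt(1 − r²):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_for_r(r, n):
    """t statistic for testing H0: r = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Hypothetical paired observations:
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 7, 5, 8, 6]
r = pearson_r(x, y)
t = t_for_r(r, len(x))
print(round(r, 3), round(t, 3))
```

A large t (relative to the t distribution with N − 2 df) gives a small p value, leading us to reject the null hypothesis of no association.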

P values for t tests are often printed underneath correlation coefficients (which are known exactly for the sample).

The t test and its p value pertain to inferences about the probable value of the coefficient in the population. If we were interested only in the sample, the test statistic and its p value would be unnecessary.

The magnitude of the association is distinct from its probability level [the probability (p) of observing the empirical associations in the sample].

### contingent relationships, moderator, interaction

• a third variable (x2) that modifies the effect of the focal independent variable (x1) on the dependent variable (y) is called an effect modifier, moderator, or moderating variable
• the magnitude of the focal relationship (between x1 and y) varies across values of the modifying variable (x2)
• x2 alters the magnitude of the association between the independent variable (x1) and the dependent variable (y)
• this is called an interaction
• the impact of x1 is conditional on x2 (the effect modifier)
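A minimal sketch of this idea with made-up coefficients: in a regression with an interaction term, y = b0 + b1·x1 + b2·x2 + b3·(x1·x2), the slope of x1 is (b1 + b3·x2), so the effect of x1 depends on the moderator x2:

```python
# Illustrative (hypothetical) coefficients for a moderated regression model.
b0, b1, b2, b3 = 1.0, 2.0, 0.5, -1.5

def slope_of_x1(x2):
    """Conditional effect of x1 on y at a given value of the moderator x2."""
    return b1 + b3 * x2

for x2 in (0, 1, 2):
    print(f"at x2 = {x2}, the effect of x1 on y is {slope_of_x1(x2):+.1f}")
```

Here the effect of x1 shrinks and then reverses sign as x2 increases, which is what "the impact of x1 is conditional on x2" means concretely.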

### bivariate, multivariate model

• the addition of a second independent variable converts a bivariate analysis into a three-variable analysis; suppose a third variable (x2) alters the focal relationship between y and x1
• "held constant," "all else being equal," "statistically controlled," "controlled"
• multicollinearity --- one independent variable is almost indistinguishable from another independent variable
• explanatory analysis moves toward models containing multiple independent variables
• the addition of a set of independent variables may alter the estimate of the focal relationship (between x1 and y) or leave it unchanged
• the actual outcome depends on how strongly the independent variables are correlated with the dependent variable and on how strongly they are correlated with one another

A majority of public employees are attracted to work by intrinsic purposes. Yet, as a whole, they express dissatisfaction with the intrinsic aspects of work (individual development, involvement, etc.) (Cacioppe & Mock, 1984).

## Saturday, March 26, 2011

### Face-to-Face with the Fact

Are we afraid of a fact or of an idea about the fact? Are we afraid of the thing as it is, or are we afraid of what we think it is? Take death, for example. Are we afraid of the fact of death or of the idea of death? The fact is one thing and the idea about the fact is another. Am I afraid of the word death or of the fact itself? Because I am afraid of the word, of the idea, I never understand the fact, I never look at the fact, I am never in direct relation with the fact.
It is only when I am in complete communion with the fact that there is no fear. If I am not in communion with the fact, then there is fear, and there is no communion with the fact so long as I have an idea, an opinion, a theory, about the fact; so I have to be very clear whether I am afraid of the word, the idea, or the fact. If I am face-to-face with the fact, there is nothing to understand about it: the fact is there, and I can deal with it. If I am afraid of the word, then I must understand the word, go into the whole process of what the word, the term, implies. It is my opinion, my idea, my experience, my knowledge about the fact, that creates fear. So long as there is verbalization of the fact, giving the fact a name and therefore identifying or condemning it, so long as thought is judging the fact as an observer, there must be fear. Thought is the product of the past; it can only exist through verbalization, through symbols, through images. So long as thought is regarding or translating the fact, there must be fear.  - J. Krishnamurti, The Book of Life

### Fear Makes Us Obey

Why do we do all this: obey, follow, copy? Why? Because we are frightened inwardly to be uncertain. We want to be certain, we want to be certain financially, we want to be certain morally, we want to be approved, we want to be in a safe position, we want never to be confronted with trouble, pain, suffering, we want to be enclosed. So, fear, consciously or unconsciously, makes us obey the Master, the leader, the priest, the government. Fear also controls us from doing something which may be harmful to others, because we will be punished. So behind all these actions, greeds, pursuits, lurks this desire for certainty, this desire to be assured. So, without resolving fear, without being free from fear, merely to obey or to be obeyed has little significance; what has meaning is to understand this fear from day to day and how fear shows itself in different ways. It is only when there is freedom from fear that there is that inward quality of understanding, that aloneness in which there is no accumulation of knowledge or of experience, and it is that alone which gives extraordinary clarity in the pursuit of the real. - J. Krishnamurti, The Book of Life

## Thursday, March 24, 2011

### There Is No Such Thing As Living Alone

We want to run away from our loneliness, with its panicky fears, so we depend on another, we enrich ourselves with companionship, and so on. We are the prime movers, and others become pawns in our game; and when the pawn turns and demands something in return, we are shocked and grieved. If our own fortress is strong, without a weak spot in it, this battering from the outside is of little consequence to us. The peculiar tendencies that arise with advancing age must be understood and corrected while we are still capable of detached and tolerant self-observation and study; our fears must be observed and understood now. Our energies must be directed, not merely to the understanding of the outward pressures and demands for which we are responsible, but to the comprehension of ourselves, of our loneliness, our fears, demands, and frailties. There is no such thing as living alone, for all living is relationship; but to live without direct relationship demands high intelligence, a swifter and greater awareness for self-discovery. A "lone" existence, without this keen and flowing awareness, strengthens the already dominant tendencies, thus causing unbalance, distortion. It is now that one has to become aware of the set and peculiar habits of thought-feeling which come with age, and by understanding them make away with them. Inward riches alone bring peace and joy. - J. Krishnamurti, The Book of Life

### Freedom from Fear

Is it possible for the mind to empty itself totally of fear? Fear of any kind breeds illusion; it makes the mind dull, shallow. Where there is fear there is obviously no freedom, and without freedom there is no love at all. And most of us have some form of fear; fear of darkness, fear of public opinion, fear of snakes, fear of physical pain, fear of old age, fear of death. We have literally dozens of fears. And is it possible to be completely free of fear? We can see what fear does to each one of us. It makes one tell lies; it corrupts one in various ways; it makes the mind empty, shallow. There are dark corners in the mind which can never be investigated and exposed as long as one is afraid. Physical self-protection, the instinctive urge to keep away from the venomous snake, to draw back from the precipice, to avoid falling under the tramcar, and so on, is sane, normal, healthy. But I am asking about the psychological self-protectiveness which makes one afraid of disease, of death, of an enemy. When we seek fulfillment in any form, whether through painting, through music, through relationship, or what you will, there is always fear. So, what is important is to be aware of this whole process of oneself, to observe, to learn about it, and not ask how to get rid of fear. When you merely want to get rid of fear, you will find ways and means of escaping from it, and so there can never be freedom from fear. - The Book of Life

### Dealing with Fear

One is afraid of public opinion, afraid of not achieving, not fulfilling, afraid of not having the opportunity; and through it all there is this extraordinary sense of guilt; one has done a thing that one should not have done; the sense of guilt in the very act of doing; one is healthy and others are poor and unhealthy; one has food and others have no food. The more the mind is inquiring, penetrating, asking, the greater the sense of guilt, anxiety. Fear is the urge that seeks a Master, a guru; fear is this coating of respectability, which every one loves so dearly; to be respectable. Do you determine to be courageous to face events in life, or merely rationalize fear away, or find explanations that will give satisfaction to the mind that is caught in fear? How do you deal with it? Turn on the radio, read a book, go to a temple, cling to some form of dogma, belief? Fear is the destructive energy in man. It withers the mind, it distorts thought, it leads to all kinds of extraordinarily clever and subtle theories, absurd superstitions, dogmas, and beliefs. If you see that fear is destructive, then how do you proceed to wipe the mind clean? You say that by probing into the cause of fear you would be free of fear. Is that so? Trying to uncover the cause and knowing the cause of fear does not eliminate fear. - J. Krishnamurti, The Book of Life

### The Door to Understanding

You cannot wipe away fear without understanding, without actually seeing into the nature of time, which means thought, which means word. From that arises the question: Is there a thought without word, is there a thinking without the word which is memory? Sir, without seeing the nature of the mind, the movement of the mind, the process of self-knowing, merely saying that I must be free of it has very little meaning. You have to take fear in the context of the whole of the mind. To see, to go into all this, you need energy. Energy does not come through eating food; that is a part of physical necessity. But to see, in the sense I am using that word, requires an enormous energy; and that energy is dissipated when you are battling with words, when you are resisting, condemning, when you are full of opinions which are preventing you from looking, seeing; your energy is all gone in that. So in the consideration of this perception, this seeing, again you open the door. - J. Krishnamurti, The Book of Life

### The Third America: The emergence of the nonprofit sector in the United States

by Michael O’Neill, 1989
Religion as the “godmother of the nonprofit sector”

### Managing the Nonprofit Organization: Principles and Practices

by Peter Drucker
Balfour, D. L., & Wechsler, B. (1990). Organizational commitment: A reconceptualization and empirical test of public-private differences. Review of Public Personnel Administration, 10, 23-40.
Organizational support is the most significant variable in determining identification commitment. Perceptions of a supportive and understanding organizational climate are critical to the establishment of feelings of affiliation. Absent these supportive features of organizational life, employees will not identify with the organization, nor will they form psychological attachments based on a sense of belonging to the organization.

### Shamir, B. (1990). Calculations, Values, and Identities: The Sources of Collectivistic Work Motivation

• expression and maintenance of aspects of a person's self-concept
• affirm himself as a person
• express and affirm self-concept
• Katz and Kahn (1966) --- the motivation to establish and maintain a satisfactory self-concept
• present oneself and behave in a manner consistent with one's self-concept
• maintain and affirm self-identity
• a person whose self-concept is based on ..., will participate in the activities of that collectivity because such participation clarifies and affirms his self-concept
• the more consistent the collective work action with his self-concept, the more likely is the person to be motivated to contribute to that work effort, even in the absence of expected rewards
• In weak situations, behavior may be better explained and predicted by the person's values and salient identities than by the calculative model.

### Do You Have Free Will? Yes, It’s the Only Choice

http://www.nytimes.com/2011/03/22/science/22tier.html

Suppose that Mark and Bill live in a deterministic universe. Everything that happens this morning — like Mark’s decision to wear a blue shirt, or Bill’s latest attempt to comb over his bald spot — is completely caused by whatever happened before it.

If you recreated this universe starting with the Big Bang and let all events proceed exactly the same way until this same morning, then the blue shirt is as inevitable as the comb-over.
Now for questions from experimental philosophers:
1) In this deterministic universe, is it possible for a person to be fully morally responsible for his actions?
2) This year, as he has often done in the past, Mark arranges to cheat on his taxes. Is he fully morally responsible for his actions?
3) Bill falls in love with his secretary, and he decides that the only way to be with her is to murder his wife and three children. Before leaving on a trip, he arranges for them to be killed while he is away. Is Bill fully morally responsible for his actions?

To a classic philosopher, these are just three versions of the same question about free will. But to the new breed of philosophers who test people’s responses to concepts like determinism, there are crucial differences, as Shaun Nichols explains in the current issue of Science.

Most respondents will absolve the unspecified person in Question 1 from full responsibility for his actions, and a majority will also give Mark a break for his tax chiseling. But not Bill. He’s fully to blame for his heinous crime, according to more than 70 percent of the people queried by Dr. Nichols, an experimental philosopher at the University of Arizona, and his Yale colleague Joshua Knobe.
Is Bill being judged illogically? In one way, yes. The chain of reasoning may seem flawed to some philosophers, and the belief in free will may seem naïve to the psychologists and neuroscientists who argue that we’re driven by forces beyond our conscious control — an argument that Bill’s lawyer might end up borrowing in court.

But in another way it makes perfect sense to hold Bill fully accountable for murder. His judges pragmatically intuit that regardless of whether free will exists, our society depends on everyone’s believing it does. The benefits of this belief have been demonstrated in other research showing that when people doubt free will, they do worse at their jobs and are less honest.

In one experiment, some people read a passage from Francis Crick, the molecular biologist, asserting that free will is a quaint old notion no longer taken seriously by intellectuals, especially not psychologists and neuroscientists. Afterward, when compared with a control group that read a different passage from Crick (who died in 2004) these people expressed more skepticism about free will — and promptly cut themselves some moral slack while taking a math test.

Asked to solve a series of arithmetic problems in a computerized quiz, they cheated by getting the answers through a glitch in the computer that they’d been asked not to exploit. The supposed glitch, of course, had been put there as a temptation by the researchers, Kathleen Vohs of the University of Minnesota and Jonathan Schooler of the University of California, Santa Barbara.

In a follow-up experiment, the psychologists gave another test in which people were promised $1 for every correct answer — and got to compile their own scores. Just as Dr. Vohs and Dr. Schooler feared, people were more likely to cheat after being exposed beforehand to arguments against free will. These people went home with more unearned cash than did the other people.

This behavior in the lab, the researchers noted, squares with studies in recent decades showing an increase in the number of college students who admit to cheating. During this same period, other studies have shown a weakening in the popular belief in free will (although it’s still widely held).

“Doubting one’s free will may undermine the sense of self as agent,” Dr. Vohs and Dr. Schooler concluded. “Or, perhaps, denying free will simply provides the ultimate excuse to behave as one likes.”

That could include goofing off on the job, according to another study done by Dr. Vohs along with a team of psychologists led by Tyler F. Stillman of Southern Utah University. They went to a day-labor employment agency armed with questionnaires for a sample of workers to fill out confidentially.

These questionnaires were based on a previously developed research instrument called the Free Will and Determinism Scale. The workers were asked how strongly they agreed with statements like “Strength of mind can always overcome the body’s desires” or “People can overcome any obstacles if they truly want to” or “People do not choose to be in the situations they end up in — it just happens.”

The psychologists also measured other factors, including the workers’ general satisfaction with their lives, how energetic they felt, how strongly they endorsed an ethic of hard work. None of these factors was a reliable predictor of their actual performance on the job, as rated by their supervisors. But the higher the workers scored on the scale of belief in free will, the better their ratings on the job.

“Free will guides people’s choices toward being more moral and better performers,” Dr. Vohs said. “It’s adaptive for societies and individuals to hold a belief in free will, as it helps people adhere to cultural codes of conduct that portend healthy, wealthy and happy life outcomes.”

Intellectual concepts of free will can vary enormously, but there seems to be a fairly universal gut belief in the concept starting at a young age. When children age 3 to 5 see a ball rolling into a box, they say that the ball couldn’t have done anything else. But when they see an experimenter put her hand in the box, they insist that she could have done something else.

That belief seems to persist no matter where people grow up, as experimental philosophers have discovered by querying adults in different cultures, including Hong Kong, India, Colombia and the United States. Whatever their cultural differences, people tend to reject the notion that they live in a deterministic world without free will.
They also tend to agree, across cultures, that a hypothetical person in a hypothetically deterministic world would not be responsible for his sins. This same logic explains why they’ll excuse Mark’s tax evasion, a crime that doesn’t have an obvious victim. But that logic doesn’t hold when people are confronted with what researchers call a “high-affect” transgression, an emotionally upsetting crime like Bill’s murder of his family.

“It’s two different kinds of mechanisms in the brain,” said Alfred Mele, a philosopher at Florida State University who directs  the Big Questions in Free Will project. “If you give people an abstract story and a hypothetical question, you’re priming the theory machine in their head. But their theory might be out of line with their intuitive reaction to a detailed story about someone doing something nasty. As experimenters have shown, the default assumption for people is that we do have free will.”

At an abstract level, people seem to be what philosophers call incompatibilists: those who believe free will is incompatible with determinism. If everything that happens is determined by what happened before, it can seem only logical to conclude you can’t be morally responsible for your next action.

But there is also a school of philosophers — in fact, perhaps the majority school — who consider free will compatible with their definition of determinism. These compatibilists believe that we do make choices, even though these choices are determined by previous events and influences. In the words of Arthur Schopenhauer, “Man can do what he wills, but he cannot will what he wills.”

Does that sound confusing — or ridiculously illogical? Compatibilism isn’t easy to explain. But it seems to jibe with our gut instinct that Bill is morally responsible even though he’s living in a deterministic universe. Dr. Nichols suggests that his experiment with Mark and Bill shows that in our abstract brains we’re incompatibilists, but in our hearts we’re compatibilists.

“This would help explain the persistence of the philosophical dispute over free will and moral responsibility,” Dr. Nichols writes in Science. “Part of the reason that the problem of free will is so resilient is that each philosophical position has a set of psychological mechanisms rooting for it.”

Some scientists like to dismiss the intuitive belief in free will as an exercise in self-delusion — a simple-minded bit of “confabulation,” as Crick put it. But these supposed experts are deluding themselves if they think the question has been resolved. Free will hasn’t been disproved scientifically or philosophically. The more that researchers investigate free will, the more good reasons there are to believe in it.

## Wednesday, March 23, 2011

Baldwin (1987) identified some of the distinguishing characteristics of public sector organizations. He proposed that, in comparison to the private sector, public sector organizations have (a) vague, unclear, or ambiguous goals and objectives; they also have fewer quantitative indicators of demand and fewer performance measures that enhance the clarity of goals (p. 182).