Saturday, March 28, 2015

The ‘No Shampoo Experiment’: 6 Months Later

http://expandedconsciousness.com/2014/09/29/the-no-shampoo-experiment-6-months-later/

Thursday, March 26, 2015

In Iceland’s DNA, New Clues to Disease-Causing Genes

http://www.nytimes.com/2015/03/26/science/in-icelands-dna-clues-to-what-genes-may-cause-disease.html?src=twr&smid=fb-nytimes&bicmst=1409232722000&bicmet=1419773522000&bicmp=AD&smtyp=aut&bicmlukp=WT.mc_id&_r=0

Wednesday, March 25, 2015

Okara Pancakes (豆渣煎餅)

http://blog.wonderfulfood.com.tw/2015/03/23/%E8%B1%86%E6%B8%A3%E7%87%9F%E9%A4%8A%E5%A5%BD%E8%99%95%E5%A4%9A%EF%BC%8C%E6%89%93%E5%AE%8C%E8%B1%86%E6%BC%BF%E5%88%A5%E4%BA%82%E4%B8%9F%EF%BD%9E%E4%BE%86%E5%81%9A%E4%BD%8E%E5%8D%A1%E5%8F%88%E6%9C%89/

Tuesday, March 24, 2015

France Decrees New Building Rooftops Must Be Covered in Solar Panels or Plants (法國新建物屋頂須安裝太陽能板或植栽)

http://technews.tw/2015/03/23/frence-decrees-new-rooftops-need-to-be-covered-in-solar-panels-or-plants/

Saturday, March 21, 2015

Lou Ye (婁燁)
Summer Palace (頤和園)
Suzhou River (蘇州河)
Purple Butterfly (紫蝴蝶)
In this world there is no simple happiness; happiness always comes entangled with worry and anxiety. Nor does this world offer any "forever." Our life was one of hardship, and only in old age did we find a place where we could settle.

http://www.cw.com.tw/article/article.action?id=5065687

Wednesday, March 18, 2015

The purpose of life

There are many people who will give you the purpose of life; they will tell you what the sacred books say. Clever people will go on inventing what the purpose of life is. The political group will have one purpose, the religious group will have another purpose, and so on and on. So, what is the purpose of life when you yourself are confused? When I am confused, I ask you this question, "What is the purpose of life?" because I hope that through this confusion, I shall find an answer. How can I find a true answer when I am confused? Do you understand? If I am confused, I can only receive an answer that is also confused. If my mind is confused, if my mind is disturbed, if my mind is not beautiful, quiet, whatever answer I receive will be through this screen of confusion, anxiety, and fear; therefore, the answer will be perverted. So, what is important is not to ask, "What is the purpose of life, of existence?" but to clear the confusion that is within you. It is like a blind man who asks, "What is light?" If I tell him what light is, he will listen according to his blindness, according to his darkness; but suppose he is able to see, then he will never ask the question, "What is light?" It is there. Similarly, if you can clarify the confusion within yourself, then you will find what the purpose of life is; you will not have to ask, you will not have to look for it; all that you have to do is to be free from those causes which bring about confusion. - Krishnamurti, J. Krishnamurti, The Book of Life

Live in This World Anonymously

Is it not possible to live in this world without ambition, just being what you are? If you begin to understand what you are without trying to change it, then what you are undergoes a transformation. I think one can live in this world anonymously, completely unknown, without being famous, ambitious, cruel. One can live very happily when no importance is given to the self; and this also is part of right education. The whole world is worshipping success. You hear stories of how the poor boy studied at night and eventually became a judge, or how he began by selling newspapers and ended up a multi-millionaire. You are fed on the glorification of success. With achievement of great success there is also great sorrow; but most of us are caught up in the desire to achieve, and success is much more important to us than the understanding and dissolution of sorrow. - Krishnamurti, J. Krishnamurti, The Book of Life

Tuesday, March 17, 2015

President Obama Speaks in Selma [Complete Speech]

Obama State of the Union 2015 Address: President's [FULL] SOTU Speech To...

heaven


law and economic development

economic development, history, Taiwan

Taiwan's Economic History: A Case of Etatisme and a Challenge to Dependency Theory
 Modern China, Vol. 5, No. 3, Symposium on Taiwan: Society and Economy (Jul., 1979), pp. 341-379
http://instructional1.calstatela.edu/tclim/W10_Courses/AMSDEN-Taiwan.pdf

Taiwan in the Global Economy -- Past, Present, and Future
http://www.colorado.edu/Economics/mcguire/workingpapers/Taiwan-GlobalEconomy.pdf

Taiwan in the 21st Century
http://www.econ.yale.edu/~granis/papers/Taiwans-success.pdf

Economic History of Taiwan: A Survey
http://homepage.ntu.edu.tw/~ntut019/ltes/TEHsurvey_Eng.pdf

Revisiting Economic Development in Post-war Taiwan: The Dynamic Process of Geographical Industrialization
http://www.ios.sinica.edu.tw/cll/Revisit.pdf


A Guide to Merit Systems Protection Board Law and Practice


Peter Broida

problem-based research is more interesting?

If the research aims to solve a problem, then solving that problem through the research brings a sense of accomplishment.


Monday, March 16, 2015


Turning Leftover Rice into All Kinds of Snacks (剩飯變化各種點心)

http://blog.ytower.com.tw/%E9%A3%9F%E8%AD%9C/%E6%8B%BF%E5%89%A9%E9%A3%AF%E4%BE%86%E8%AE%8A%E5%8C%96%E5%90%84%E7%A8%AE%E9%BB%9E%E5%BF%83%E5%90%A7%EF%BC%81/

Sunday, March 15, 2015

How to Store Thai Basil (九層塔保存方法)

http://blog.wonderfulfood.com.tw/2015/03/13/%E4%B9%9D%E5%B1%A4%E5%A1%94%E4%B8%8D%E8%AE%8A%E9%BB%91%EF%BC%81%E7%B0%A1%E5%96%AE3%E6%AD%A5%E9%A9%9F%E6%95%99%E6%82%A8%E4%BF%9D%E5%AD%98%E4%B8%80%E5%91%A8/

Cabbage Kimchi (高麗菜泡菜)

http://blog.wonderfulfood.com.tw/2015/03/11/%E5%86%AC%E5%A4%A9%E9%AB%98%E9%BA%97%E8%8F%9C%E4%B8%8D%E6%80%95%E5%90%83%E4%B8%8D%E5%AE%8C%E6%95%99%E6%82%A8%E8%87%AA%E8%A3%BD%E3%80%8C%E9%AB%98%E9%BA%97%E8%8F%9C%E6%B3%A1%E8%8F%9C%E3%80%8D/

Saturday, March 14, 2015

Accordion Potatoes (手風琴馬鈴薯)

https://tw.news.yahoo.com/%E6%89%8B%E9%A2%A8%E7%90%B4%E9%A6%AC%E9%88%B4%E8%96%AF-%E8%BF%B7%E4%BD%A0%E7%89%88-115544943.html

Kazuo Inamori (稻盛和夫): The Meaning of This Life Is to Polish the Soul

Whenever someone asks me, "Why did you come into this life?", I answer without hesitation: because I want to become

a person "a little better than when I was born."

In other words, I came so that I could leave this world with a soul a little more beautiful and noble than the one I arrived with.

http://m.cw.com.tw/article/article.action?id=5052138#sthash.EqsjscbT.dpbs


Rilke on How Great Sadnesses Bring Us Closer to Ourselves

The future enters into us in this way in order to transform itself in us long before it happens.
http://www.brainpickings.org/2015/03/10/rilke-letters-to-a-young-poet-sadness/


In L.A., Now You Can Use City Land For A Free Vegetable Garden

http://www.growthechangetoday.com/in-l-a-now-you-can-use-city-land-for-a-free-vegetable-garden/#gs.5367c43a258f4102a704a04456bcbf44

André Gide on Sincerity, Being vs. Appearing, and What It Really Means to Be Yourself

Don’t ever do anything through affectation or to make people like you or through imitation or for the pleasure of contradicting.

http://www.brainpickings.org/2015/03/13/andre-gide-journals-sincerity/

Friday, March 13, 2015

relational capitalism

law schools, students, bar exams, economic development

Law & Society Review

Social & legal studies

Law, Economics & Organization

Journal of legal studies

International review of Law & Economics

Davis, Michael C. (1998) "The Price of Rights: Constitutionalism and East Asian Economic Development," 20 Human Rights Q. 303-37.

Chua, Amy (1998) "Markets, Democracy, and Ethnicity: Toward a New Paradigm for Law and Development," 108 Yale Law J. 1-107.

Wednesday, March 11, 2015

How to Convert a Crediting Plan to an Assessment Questionnaire

http://www.opm.gov/policy-data-oversight/human-capital-management/hiring-reform/reference/creditingplanfactsheet.pdf

Physical Ability Tests

Physical ability tests typically ask individuals to perform job-related tasks requiring manual labor or physical skill. These tasks measure physical abilities such as strength, muscular flexibility, and stamina. Examples of physical ability tests include:


  • Muscular Tension Tests - Tasks requiring pushing, pulling, lifting
  • Muscular Power Tests - Tasks requiring the individual to overcome some initial resistance (e.g., loosening a nut on a bolt)
  • Muscular Endurance Tests - Tasks involving repetitions of tool use (e.g., removing objects from belts)
  • Cardiovascular Endurance Tests - Tasks assessing aerobic capacity (e.g., climbing stairs)
  • Flexibility Tests - Tasks where bending, twisting, stretching or reaching of a body segment occurs (e.g., installing lighting fixtures)
  • Balance Tests - Tasks in which stability of body position is difficult to maintain (e.g., standing on rungs of a ladder)
While some physical ability tests may require electronically monitored machines, equipment needs can often be kept simple. For example, stamina can be measured with a treadmill and an electrocardiograph, or with a simple set of steps. However, a possible drawback of using simpler methods is less precise measurement. 

Many factors must be taken into consideration when using physical ability tests. First, employment selection based on physical abilities can be litigious. Legal challenges have arisen over the years because physical ability tests, especially those involving strength and endurance, tend to screen out a disproportionate number of women and some ethnic minorities. Therefore, it is crucial to have validity evidence justifying the job-relatedness of physical ability measures. Second, physical ability tests involving the monitoring of heart rate, blood pressure, or other physiological factors are considered medical exams under the Americans with Disabilities Act. Administering medical exams to job applicants prior to making a job offer is expressly prohibited. Finally, there is the concern of candidates injuring themselves while performing a physical ability test (e.g., a test involving heavy lifting may result in a back injury or aggravate an existing medical condition). 

Arvey, R. D., Maxwell, S. E., & Salas, E. (1992). Development of physical ability tests for police officers: A construct validation approach. Journal of Applied Psychology, 77, 996- 1009. 

Arvey, R. D., Nutting, S. M., & Landon, T. E. (1992). Validation strategies for physical ability testing in police and fire settings. Public Personnel Management, 21, 301-312. 

Campbell, W. J., & Fox, H. R. (2002). Testing individuals with disabilities in the employment context: An overview of issues and practices. In R. B. Ekstrom & D. K. Smith (Eds.) Assessing Individuals with Disabilities in Educational, Employment, and Counseling Settings (1st ed, p. 198). Washington, DC: American Psychological Association. 

Campion, M. A. (1983). Personnel selection for physically demanding jobs: Review and recommendations. Personnel Psychology, 36, 527-550. 

Hogan, J. (1991). The structure of physical performance in occupational tasks. Journal of Applied Psychology, 76, 495-507. 

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

@@@@@

http://www.siop.org/workplace/employment%20testing/testtypes.aspx#8

Physical Ability Tests


Physical ability tests typically use tasks or exercises that require physical ability to perform. These tests typically measure physical attributes and capabilities, such as strength, balance, and speed. 
Advantages:

  • Have been demonstrated to produce valid inferences regarding performance of physically demanding tasks.
  • Can identify applicants who are physically unable to perform essential job functions.
  • Can reduce business costs by identifying individuals for hiring, promotion or training who possess the needed skills and abilities, by minimizing the risk of physical injury to employees and others on the job, and by decreasing disability/medical, insurance, and workers compensation costs.
  • Will not be influenced by test taker attempts to impression manage or fake responses.

Disadvantages:

  • Are typically more likely to differ in results by gender than other types of tests.
  • May be problematic for use in employee selection if the test is one used to diagnose medical conditions (i.e., a physical disability) rather than simply to assess ability to perform a particular job-related task.
  • Can be expensive to purchase equipment and administer.
  • May be time consuming to administer.
  • May be inappropriate or difficult to administer in typical employment offices.


Job/Organization-Fit Measure

Job-fit measures (sometimes referred to as organization fit, person-organization fit, person-environment fit, or "fit check" tools) compare applicant personality, interest, value, or organizational culture preference information to the characteristics of the job or organization. The concept behind job-fit instruments is that individuals are attracted to, and seek employment with, organizations which exhibit characteristics similar to their own.

The most common organizational characteristic used in job-fit measures is the organizational culture (e.g., innovative, detail oriented, team oriented). Although job-fit can be measured with interviews or other instruments, job-fit instruments are typically administered to applicants in the form of self-report questionnaires or surveys. Technological advancements of the Internet have made it easier to administer job-fit measures on-line, or as a possible feature to an agency or company’s website. An example item from a job-fit measure is: “I prefer a work environment which doesn’t demand constant adaptation” (1 = strongly disagree, 5 = strongly agree).
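The profile-comparison idea behind such instruments can be sketched in a few lines. This is a minimal illustration, not the OPM's or any vendor's scoring method; the item names, the 1-5 Likert values, and the distance-based scoring rule are all hypothetical:

```python
# Minimal sketch of profile-comparison scoring for a job-fit questionnaire.
# Item names, Likert values, and the organization profile are hypothetical.

def fit_score(applicant_responses, org_profile):
    """Return a fit score: smaller distance between the applicant's
    Likert responses and the organization's profile means better fit."""
    if set(applicant_responses) != set(org_profile):
        raise ValueError("responses and profile must cover the same items")
    distance = sum(abs(applicant_responses[item] - org_profile[item])
                   for item in org_profile)
    return -distance  # higher (less negative) = closer fit

# An innovative, team-oriented culture profile (1-5 scale):
org_profile = {"prefers_stable_environment": 2,
               "team_oriented": 5,
               "detail_oriented": 4}

applicant = {"prefers_stable_environment": 4,
             "team_oriented": 5,
             "detail_oriented": 3}

print(fit_score(applicant, org_profile))  # -3
```

A score near zero would feed the "good fit" feedback described below; a large negative score would prompt the applicant to consider self-selecting out.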

Based on their responses to the job-fit items, applicants are often offered tailored feedback regarding their likely fit with the job or organization. Moreover, those who perceive or receive feedback which indicates they are not a good fit with the job or organization are more likely to voluntarily withdraw from the application process. For this reason, job-fit measures that give applicants the opportunity to self-select out are typically administered before all traditional assessments (e.g., cognitive ability tests, accomplishment record).

Job-fit measures can also be used as a screen-out tool (such as traditional assessments); however, the research (e.g., validity, methodology, utility) regarding the use of job-fit measures in this regard is still in its infancy.

Arthur, W., Jr., Bell, S. T., Villado, A. J., & Doverspike, D. (2006). The use of person-organization fit in employment decision making: An assessment of its criterion-related validity. Journal of Applied Psychology, 91, 786-801.

Cable, D. M., & Judge, T. A. (1997). Interviewers’ perceptions of person-organization fit and organizational selection decisions. Journal of Applied Psychology, 82, 546-561.

Dineen, B. R., Ash, S. R., & Raymond, N. A. (2002). A web of applicant attraction: Person-organization fit in the context of web-based recruitment. Journal of Applied Psychology, 87(4), 723-734.

Judge, T. A., & Cable, D. M. (1997). Applicant personality, organizational culture, and organizational attraction. Personnel Psychology, 50, 359-394.

Kristof, A. L. (1996). Person-organization fit: An integrative review of its conceptualizations, measurement, and implications. Personnel Psychology, 49, 1-49.

Martinez, M. N. (2000). Get job seekers to come to you. HR Magazine, 45, 45-52.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

Background Evaluation/Investigation

Background evaluations, sometimes referred to as background investigations, seek information about an applicant’s employment, criminal, and personal history in an effort to investigate behavioral reliability, integrity, and personal adjustment. Background evaluations are conducted to determine whether there are any historical facts that would interfere with an applicant’s ability to perform the job, including violations of statutes, regulations, or laws. It is important to note background evaluations are a different process than competency-based assessments and are typically handled apart from the traditional assessments (e.g., cognitive ability tests, accomplishment record). Depending on the extensiveness of the background evaluation, you may be required to gain the applicant’s permission.

Background evaluation data are primarily used when screening personnel for positions of trust in which integrity and positive psychological adjustment is particularly desirable. Such occupations include law enforcement, private security industry, and positions requiring government-issued security clearances. The appointment of any civilian employee to a position in the Federal Government is subject to a background investigation.

Examples of factors investigated with a background evaluation are an applicant’s employment history, past illegal drug use, and previous criminal records. In addition to collecting background information directly from an applicant, background information is sometimes collected from other sources who know the applicant such as former employers and co-workers, friends, and neighbors.

Hilliard, P. A. (2001). Comparison of the predictive validity of a written test, an integrity test, a conscientiousness questionnaire, a structured behavioral interview and a personality inventory in the assessment of job applicants' background investigations, and subsequent task or contextual performance. Dissertation Abstracts International: Section B: The Sciences & Engineering, 62(6-B), 2981.

McDaniel, M. A. (1989). Biographical constructs for predicting employee suitability. Journal of Applied Psychology, 74(6), 964-970.

McFadden, K. L. (1997). Policy improvements for prevention of alcohol misuse by airline pilots. Human Factors, 39(1), 1-8.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

Training and Experience (T & E) Evaluation

 A traditional T & E evaluation, sometimes called a crediting plan or rating schedule, is a systematic method used to assess previous experience, education, and training information provided by job applicants. These assessment factors are based on critical job requirements and competencies identified through a job analysis.

Rating factors generally include the amount and quality of the applicant’s previous job-related experience, as well as any other information deemed important to performing the duties of the position. Typically, information on the assessment factors is reported by applicants as a supplement to the application blank. This information is evaluated against education and experience benchmarks to generate scores for selection purposes. Benchmarks are often developed by Human Resource Specialists familiar with the occupations covered with the T & E evaluation.

T & E evaluations are relatively easy to develop and may apply to multiple occupations sharing the same requirements and competencies. For the most part, these assessments are used for entry level positions. Most often, T & E evaluations are used as a screen early in the selection process to identify applicants who meet the minimum proficiency levels on the rating factors. While most rating factors are usually broad, more specific factors tailored to a particular occupation or organization can be developed.

A variation of the traditional rating schedule based on training and experience rating factors is a task-based rating method. The task-based method is used to assess applicants’ training and experience in relation to descriptions of tasks performed on the job to be filled. Specifically, the task-based rating schedule is developed from a list of tasks performed by incumbents in the target job. Applicants read each task statement and indicate whether they have ever performed such activities. Some versions ask applicants to also indicate the level of proficiency at which the task was performed. Generally, the more tasks performed, the higher an applicant’s score will be.
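Assuming the simple additive rule the passage describes (more tasks performed, at higher proficiency, yield a higher score), a task-based rating schedule can be scored as follows. The task statements and the 0-3 proficiency scale are hypothetical examples, not taken from an actual schedule:

```python
# Minimal sketch of task-based rating-schedule scoring: one self-reported
# proficiency level per task statement, summed into an applicant score.
# Task statements and the 0-3 scale below are hypothetical.

PROFICIENCY_SCALE = {0: "never performed",
                     1: "performed with supervision",
                     2: "performed independently",
                     3: "performed and trained others"}

def te_score(task_ratings):
    """Sum self-reported proficiency levels across task statements."""
    return sum(task_ratings.values())

ratings = {"prepared written reports": 2,
           "operated database software": 3,
           "supervised a small team": 0}

print(te_score(ratings))  # 5
```

A yes/no version is the special case where every rating is 0 or 1, so the score is simply the count of tasks performed.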

As with most self-report instruments, applicant inflation or distortion can threaten the validity of a T & E evaluation. Two approaches can be taken to combat the problem of rating inflation: (1) creating applicant expectations that responses will be verified, and (2) carrying out verification procedures, making adjustments to scores based on the findings.

Other self-report measures that collect additional types of training and experience information are available as alternatives to the traditional T & E evaluation. An example of such an alternative is the competency-based self-report method. This method functions much like a traditional rating schedule in terms of ease of administration and scoring. However, in addition to rating the extent to which a critical job competency is demonstrated, accomplishments (e.g., written statements of personal accomplishments that best illustrate an applicant's proficiency on critical job dimensions) are collected to support the self-reported information. This is very similar to the accomplishment records method discussed earlier in this section. Another option with the competency-based self-report method is the inclusion of a process requiring formal verification (e.g., via reference checking) of the information provided by the applicants in their written self-ratings and/or accomplishments. This verification information is often used to limit, as much as possible, the rating inflation typically observed with applicant self-reports of accomplishments.

Considerations:

• Validity – The content of the training and experience items on a traditional rating schedule and the task items on a task-based rating schedule are often highly representative of actual job performance (i.e., they show a high degree of content validity); Generally, performance on rating schedules does not relate well to performance on the job (i.e., they show a low degree of criterion-related validity), with length and recency of education, academic achievement, and extracurricular activities demonstrating the weakest relation to job performance

• Face Validity/Applicant Reactions – Reactions from professionals who feel they should be evaluated on their experience are typically favorable; Less favorable reactions may be seen if used for younger, less experienced applicants with few previous related experiences to describe

• Administration Method – Can be administered via paper-and-pencil or electronically

• Subgroup Differences – Generally little or no performance differences are found between men and women or applicants of different racial or ethnic backgrounds

 • Development Costs – Takes less time to develop than other measures of training and experience (e.g., the accomplishment record)

• Administration Costs – Takes a very short time to administer and for applicants to complete; Administration time is shorter than other measures of training and experience (e.g., accomplishment record)

 • Utility/ROI – Return on investment for training and experience measures can be moderate to high if the same rating schedule instrument can be used to assess for various positions

• Common Uses – Commonly used as a screening device prior to another selection tool (e.g., structured interview) for both entry level positions across various professional occupations (e.g., trainee positions) and jobs requiring prior preparation.

Lyons, T. J. (1989). Validity of Education and Experience Measured in Traditional Rating Schedule Procedures: A Review of the Literature. Office of Personnel Research and Development, U.S. Office of Personnel Management, Washington, DC, OPRD-89-02.

Lyons, T. J. (1988). Validity Research on Rating Schedule Methods: Status Report. Office of Personnel Research and Development, U.S. Office of Personnel Management, Washington, DC, OED-88-17.

McCauley, D. E. (1987). Task-Based Rating Schedules: A Review. Office of Examination Development, U.S. Office of Personnel Management, Washington, DC, OED 87-15.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988). A meta-analysis of the validity of methods for rating training and experience in personnel selection. Personnel Psychology, 41, 283-309.

Schwartz, D. J. (1977). A job sampling approach to merit system examining. Personnel Psychology, 30(2), 175-185.

Sproule, C. F. (1990). Personnel Assessment Monographs: Recent Innovations in Public Sector Assessment (Vol 2, No. 2). International Personnel Management Association Assessment Council (IPMAAC).

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

Situational Judgment Test

 Situational judgment tests (SJTs) present applicants with a description of a work problem or critical situation related to the job they are applying for and ask them to identify how they would handle it. Because applicants are not placed in a simulated work setting and are not asked to perform the task or behavior (as would be the case in an assessment center or a work sample), SJTs are classified as low-fidelity simulations.

SJTs measure effectiveness in social functioning dimensions such as conflict management, interpersonal skills, problem solving, negotiation skills, facilitating teamwork, and cultural awareness. SJTs are particularly effective measures of managerial and leadership competencies.

 SJTs can be developed to present scenarios and collect responses using a variety of formats. One alternative is to present a situation and then ask respondents to answer several questions about the situation. More often, SJTs present a new situation for each question. To respond to this type of SJT item, applicants may be asked: a) what they would do in the particular situation, b) what they would be most and least likely to do in the situation, c) what response is the best response among several options, d) what response is the best and second-best among several options, or e) what would most likely occur next in a certain situation or as a result of a certain decision.

SJTs can be presented in either a linear or interactive format. With a linear format, all respondents are presented with the same questions and in the same order. With an interactive (usually computer administered) format, SJTs can be structured according to a branching process in which the scenarios and response options presented later in the test depend on how applicants responded to questions presented earlier in the test. SJT questions and alternatives are typically based on critical incidents generated by subject matter (i.e., job) experts. Scores are based on subject matter experts’ judgments of the best and worst alternatives.
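The interactive branching process described above amounts to a small decision tree: which scenario an applicant sees next depends on the option chosen at the current one. A minimal sketch, with entirely hypothetical scenarios and response options:

```python
# Minimal sketch of a branching SJT: each node holds a scenario, its
# response options, and the node each option leads to (None = end of path).
# All scenarios and options below are hypothetical.

sjt = {
    "start": {
        "scenario": "A coworker publicly criticizes your report.",
        "options": {"a": ("Respond privately later", "deescalate"),
                    "b": ("Defend the report on the spot", "escalate")},
    },
    "deescalate": {
        "scenario": "The coworker agrees to discuss it one-on-one.",
        "options": {"a": ("Ask for specific concerns", None),
                    "b": ("Restate your conclusions", None)},
    },
    "escalate": {
        "scenario": "The discussion grows heated in front of the team.",
        "options": {"a": ("Suggest taking it offline", None),
                    "b": ("Ask the manager to intervene", None)},
    },
}

def administer(test, answers):
    """Walk the branching test with a fixed answer sequence; return the
    list of scenarios presented (later items depend on earlier answers)."""
    node, path = "start", []
    for choice in answers:
        item = test[node]
        path.append(item["scenario"])
        _, node = item["options"][choice]
        if node is None:
            break
    return path

print(administer(sjt, ["b", "a"]))
```

A linear SJT is the degenerate case where every node points to the same fixed successor, so all applicants see identical questions in identical order.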

Considerations:

• Validity – The tasks and activities described in the SJT scenarios are very representative of the tasks and activities found on the job (i.e., they have a high degree of content validity) and performance on the tests moderately relates to performance on the job (i.e., they have a moderately high degree of criterion-related validity)

• Face Validity/Applicant Reactions – Applicants often perceive SJTs as being very fair (i.e., the tests have a high degree of face validity)

• Administration Method – Possible to administer in paper and pencil, computer-based, or video-based format

• Subgroup Differences – Subgroup differences are typically moderate; Racial differences in test scores may be smaller than those typically observed for cognitive ability tests

• Development Costs – Generally, developmental costs are less than high-fidelity alternatives (e.g., assessment centers) and depend on costs related to use of subject matter experts

• Administration Costs – Administration costs are typically low when delivered via paper and pencil, but may be more costly via computer or video; No special administrator expertise is needed

• Utility/ROI – High return on investment if you need applicants who possess a high level of social and interpersonal skills upon entry into the job; If the skills measured by the tests can be learned on the job or are not highly critical, then the return on investment will be significantly lower

• Common Uses – SJTs can be developed for a variety of jobs, but are typically used for managerial positions or other jobs requiring effective interpersonal interactions

Hanson, M. A., Horgen, K. E., & Borman W. C. (1998, April). Situational judgment tests (SJT) as measures of knowledge/expertise. Paper presented at the 13th Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.

McDaniel, M. A., Morgeson, F. P., Finnegan, E. B., Campion, M. A., & Braverman, E. P. (2001). Use of situational judgment tests to predict job performance: A clarification of the literature. Journal of Applied Psychology, 86, 730-740.

McDaniel, M. A., Whetzel, D. L., & Nguyen, N. T. (2006). Situational judgment tests for personnel selection. Alexandria, VA: IPMA Assessment Council.

Motowidlo, S. J., Dunnette, M. D., & Carter, G. W. (1990). An alternative selection procedure: The low-fidelity simulation. Journal of Applied Psychology, 75, 640-647.

Motowidlo, S. J., & Tippins, N. (1993). Further studies of the low-fidelity simulation in the form of a situational inventory. Journal of Occupational and Organizational Psychology, 66, 337-344.

Weekley, J. A., & Jones, C. (1999). Further studies of situational tests. Personnel Psychology, 52(3), 679-700.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

Reference Checking

Reference checking is an objective evaluation of an applicant’s past job performance based on information collected from key individuals (e.g., supervisors, peers, subordinates) who have known and worked with the applicant. Reference checking is primarily used to:

  • Verify the accuracy of information given by job applicants through other selection processes (e.g., résumés, occupational questionnaires, interviews)
  • Predict the success of job applicants by comparing their experience to the competencies required by the job
  • Uncover background information on applicants that may not have been identified by other selection procedures
Job applicants may attempt to enhance their chances of obtaining a job offer by distorting their training and work history information. While résumés summarize what applicants claim to have accomplished, reference checking is meant to assess how well those claims are backed up by others. Verifying critical employment information can significantly cut down on selection errors. Information provided by former peers, direct reports, and supervisors can also be used to forecast how applicants will perform in the job being filled. Reference data used in this way is based on the behavioral consistency principle that past performance is a good predictor of future performance.

As a practical matter, reference checking is usually conducted near the end of the selection process after the field of applicants has been narrowed to only a few competitors. Most reference checks are conducted by phone. Compared to written requests, phone interviews allow the checker to collect reference data immediately and to probe for more detailed information when clarification is needed. Phone interviews also require less time and effort on the part of the contact person and allow for more candid responses about applicants. 

Reference checking has been shown to be a useful predictor of job performance (as measured by supervisory ratings), training success, promotion potential, and employee turnover. As with employment interviews, adding structure to the reference checking process can greatly enhance its validity and usefulness as an employee selection procedure. Strategies for structuring reference checking include basing questions on a job analysis, asking applicants the same set of questions, and providing interviewers with standardized data collection and rating procedures. 
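The structuring strategies listed above (the same job-analysis-based questions for every contact, with standardized rating procedures) can be sketched as a small scoring routine. The questions and the 1-5 scale are hypothetical examples:

```python
# Minimal sketch of a structured reference check: every contact answers the
# same job-analysis-based questions on a standard rating scale.
# The questions and the 1-5 scale below are hypothetical.

QUESTIONS = [
    "How effectively did the applicant plan and organize work?",
    "How well did the applicant communicate with coworkers?",
]
SCALE = (1, 5)  # 1 = well below expectations, 5 = well above

def record_reference(ratings):
    """Validate one contact's ratings against the standard form
    and return that contact's average rating."""
    if len(ratings) != len(QUESTIONS):
        raise ValueError("every contact must answer the same questions")
    if not all(SCALE[0] <= r <= SCALE[1] for r in ratings):
        raise ValueError("ratings must use the standard scale")
    return sum(ratings) / len(ratings)

# Three contacts (the recommended minimum), each rated on the same form:
contacts = [[4, 5], [3, 4], [5, 4]]
overall = sum(record_reference(r) for r in contacts) / len(contacts)
```

Holding the questions and scale fixed across contacts is what makes the per-contact averages comparable, which is the point of adding structure.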

Conducting reference checks can reduce the risk of lawsuits for negligent hiring—the failure to exercise reasonable care when selecting new employees. Providing accurate information when called as a reference for a former employee is equally important, but many employers refuse to give negative information about former employees, fearing a lawsuit for defamation. This is generally not deemed a serious problem for Federal reference providers and reference checkers because of legal protections provided under the Federal Tort Claims Act.

Considerations:

• Validity – Reference checks are useful for predicting applicant job performance (better than years of education or job experience, though not as effective as cognitive ability tests); Reference checks can add incremental validity when used with other selection procedures, such as cognitive ability tests and self-report measures of personality; Adding structure (as is done with employment interviews) can enhance their effectiveness

 • Face Validity/Applicant Reactions – Some applicants may view reference checks as invasive 

• Administration Method – Reference checks are typically collected by phone using a structured interview format; Written requests for work histories typically result in low response rates and less useful information

• Subgroup Differences – Generally, few or no score differences are found between men and women or applicants of different races; Employers should be especially careful to avoid asking questions not directly related to the job

• Development Costs – Costs are generally low and depend on the complexity of the job, the number of questions needed, competencies measured, and development and administration of checker/interviewer training

• Administration Costs – Generally inexpensive; structured telephone reference checks take about 20 minutes to conduct per contact, and a minimum of three contacts is recommended

 • Utility/ROI – Used properly, reference checks can reduce selection errors and enhance the quality of new hires at a minimal cost to the agency

 • Common Uses – Best used in the final stages of a multiple-hurdle selection process when deciding among a handful of finalists 

Aamodt, M. G. (2006). Validity of recommendations and references. Assessment Council News, February, 4-6. 

Taylor, P. J., Pajo, K., Cheung, G. W., & Stringfield, P. (2004). Dimensionality and validity of a structured telephone reference check procedure. Personnel Psychology, 57, 745-772. 

U.S. Merit Systems Protection Board. (2005). Reference checking in federal hiring: Making the call.




Personality Test

Personality tests are designed to systematically elicit information about a person’s motivations, preferences, interests, emotional make-up, and style of interacting with people and situations. Personality measures can be in the form of interviews, in-basket exercises, observer ratings, or self-report inventories (i.e., questionnaires).

Personality self-report inventories typically ask applicants to rate their level of agreement with a series of statements designed to measure their standing on relatively stable personality traits. This information is used to generate a profile used to predict job performance or satisfaction with certain aspects of the work.
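The scoring mechanics described above can be sketched briefly. This is a minimal, hypothetical example: item IDs, the 1-5 agreement scale, and which items are reverse-keyed are all illustrative, not drawn from any real inventory.

```python
# Hypothetical self-report scoring: applicants rate agreement with statements
# on a 1-5 scale; reverse-keyed items (e.g., "I dislike meeting new people"
# on an extroversion scale) are flipped before computing the trait score.

SCALE_MAX = 5

def score_trait(responses, reverse_keyed):
    """responses: item_id -> 1-5 agreement rating;
    reverse_keyed: set of item_ids whose scoring is flipped."""
    total = 0
    for item, rating in responses.items():
        if item in reverse_keyed:
            rating = SCALE_MAX + 1 - rating  # flip a 1-5 rating
        total += rating
    return total / len(responses)  # mean item score for the trait scale

extroversion_items = {"E1": 4, "E2": 5, "E3": 2}  # E3 is reverse-keyed
print(round(score_trait(extroversion_items, reverse_keyed={"E3"}), 2))  # 4.33
```

A full inventory repeats this per trait scale, yielding the profile used for prediction.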

Personality is described using a combination of traits or dimensions. Therefore, it is ill-advised to use a measure that taps only one specific dimension (e.g., conscientiousness). Rather, job performance outcomes are usually best predicted by a combination of personality scales. For example, people high in integrity may follow the rules and be easy to supervise, but they may not be good at providing customer service if they are not also outgoing, patient, and friendly. The personality traits most frequently assessed in work situations include: (1) Extroversion, (2) Emotional Stability, (3) Agreeableness, (4) Conscientiousness, and (5) Openness to Experience. These five personality traits are often referred to collectively as the Big Five or the Five-Factor Model. While these are the most commonly measured traits, the specific factors most predictive of job performance will depend on the job in question. When selecting or developing a personality scale, it is useful to begin with inventories that tap the Big Five, but the results from a validity study may indicate some of these traits are more relevant than others in predicting job performance.

It is important to recognize some personality tests are designed to diagnose psychiatric conditions (e.g., paranoia, schizophrenia, compulsive disorders) rather than work-related personality traits. The Americans with Disabilities Act considers any test designed to reveal such psychiatric disorders as a “medical examination.” Examples of such medical tests include the Minnesota Multiphasic Personality Inventory (MMPI) and the Millon Clinical Multi-Axial Inventory (MCMI). Generally speaking, personality tests used to make employment decisions should be specifically designed for use with normal adult populations. Under the Americans with Disabilities Act, personality tests meeting the definition of a medical examination may only be administered after an offer of employment has been made.

Considerations:

• Validity – Personality tests have been shown to be valid predictors of job performance (i.e., they have an acceptable level of criterion-related validity) in numerous settings and for a wide range of criterion types (e.g., overall performance, customer service, team work), but tend to be less valid than other types of predictors such as cognitive ability tests, assessment centers, and work samples and simulations

• Face Validity/Applicant Reactions – May contain items that do not appear to be job related (i.e., low face validity) or seem to reveal applicants’ private thoughts and feelings; Applicants may react to personality tests as being unnecessarily invasive; Items may also be highly transparent, making it easy for applicants to fake or distort test scores in their favor

• Administration Method – Can be administered via paper and pencil or electronically

• Subgroup Differences – Generally, few, if any, average score differences are found between men and women or applicants of different races or ethnicities; therefore, it is beneficial to use a personality measure when another measure with greater potential for adverse impact (e.g., cognitive ability test) is included in the selection process

• Development Costs – Cost of purchasing a personality test is typically less expensive than developing a customized test

• Administration Costs – Generally inexpensive, requires few resources for administration, and does not require skilled administrators

• Utility/ROI – High return on investment if you need applicants who possess strong interpersonal skills or other job-related specific personality traits

• Common Uses – Typically used to measure whether applicants have the potential to be successful in jobs where performance requires a great deal of interpersonal interaction or work in team settings; Less useful for highly scripted jobs where personality has little room to take effect; Frequently administered to large groups of applicants as a screen

Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.

Hogan, R., Hogan, J., & Roberts, B. W. (1996). Personality measurement and employment decisions: Questions and answers. American Psychologist, 51, 469-477.

Hough, L. M., Eaton, N. K., Dunnette, M. D., Kamp, J. D., & McCloy, R. A. (1990). Criterion-related validities of personality constructs and the effect of response distortion on those validities. Journal of Applied Psychology, 75, 581-595.

Hough, L. M., & Oswald, F. L. (2000). Personnel selection: Looking toward the future—Remembering the past. Annual Review of Psychology, 51, 631-664.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf
@@@@@

http://www.siop.org/workplace/employment%20testing/testtypes.aspx#7.   Personality Tests

Some commonly measured personality traits in work settings are extraversion, conscientiousness, openness to new experiences, optimism, agreeableness, service orientation, stress tolerance, emotional stability, and initiative or proactivity.  Personality tests typically measure traits related to behavior at work, interpersonal interactions, and satisfaction with different aspects of work.  Personality tests are often used to assess whether individuals have the potential to be successful in jobs where performance requires a great deal of interpersonal interaction or work in team settings.   
Advantages

  • Have been demonstrated to produce valid inferences for a number of organizational outcomes.
  • Can reduce business costs by identifying individuals for hiring, promotion or training who possess the needed skills and abilities.
  • Are typically less likely to differ in results by gender and race than other types of tests.
  • Can be administered via paper and pencil or computerized methods easily to large numbers.
  • Can be cost effective to administer.
  • Do not require skilled administrators.

Disadvantages

  • May contain questions that do not appear job related or seem intrusive if not well developed.
  • May lead to individuals responding in a way to create a positive decision outcome rather than how they really are (i.e., they may try to positively manage their impression or even fake their response).
  • May be problematic for use in employee selection if the test is one used to diagnose medical conditions (i.e., mental disorders) rather than simply to assess work-related personality traits.

Job Knowledge Test

 Job knowledge tests, sometimes referred to as achievement or mastery tests, typically consist of questions designed to assess technical or professional expertise in specific knowledge areas. Job knowledge tests evaluate what a person knows at the time of taking the test. Unlike cognitive ability tests, there is no attempt to assess the applicant’s learning potential. In other words, a job knowledge test can be used to inform employers what an applicant currently knows, but not whether the individual can be relied on to master new material in a timely manner. Job knowledge tests are not appropriate when applicants will be trained after selection in the critical knowledge areas needed for the job.

Job knowledge tests are used in situations where applicants must already possess a body of learned information prior to being hired. They are particularly useful for jobs requiring specialized or technical knowledge that can only be acquired over an extended period of time. Examples of job knowledge tests include tests of basic accounting principles, computer programming, financial management, and knowledge of contract law. Job knowledge tests are often constructed on the basis of an analysis of the tasks that make up the job. While the most typical format for a job knowledge test is a multiple choice question format, other formats include written essays and fill-in-the-blank questions.

Licensing exams, agency certification, and/or professional certification programs are also job knowledge tests. Licensure and certification are both types of credentialing—the process of granting a designation that indicates competence in a subject or area. Licensure is more restrictive than certification and typically refers to the mandatory Governmental requirement necessary to practice in a particular profession or occupation. A passing score on a job knowledge test is typically a core requirement to obtain a professional license. Licensure implies practice and title protection. This means only individuals who hold a license are permitted to practice and use a particular title. For example, to practice law, a law school graduate must apply for admission into a state bar association that requires passing the bar licensure examination. Certification is usually a voluntary process instituted within a nongovernmental or single Governmental agency in which individuals are recognized for advanced knowledge and skill. As with licensure, certification typically requires a passing score on a job knowledge exam.

Considerations:

• Validity – Knowledge areas tested are very representative of those required to perform the job (i.e., high degree of content validity); Performance on job knowledge tests relates highly to performance on the job (i.e., high degree of criterion-related validity); Can add a substantial amount of incremental validity above and beyond the validity provided by general cognitive ability tests; Customized job knowledge tests have been shown to have slightly higher validity than off-the-shelf tests

 • Face Validity/Applicant Reactions – Applicants often perceive job knowledge tests as being very fair (i.e., as having a high degree of face validity) because such tests are typically designed to measure knowledge directly applied to performance of the job

• Administration Method – Can be administered via paper and pencil or electronically

• Subgroup Differences – Tend to produce race and ethnic group differences larger than other valid predictors of job performance (e.g., work sample tests, personality tests)

 • Development Costs – Typically expensive and time consuming to develop; Frequent updates to the test content and validation may be needed to keep up with changes in the job; Cost of purchasing an off-the-shelf job knowledge test is typically less expensive than developing a customized test

 • Administration Costs – Generally inexpensive and requires few resources for administration

• Utility/ROI – High return on investment if you need applicants who possess technical expertise in specific job knowledge areas; Utility is lower when the job knowledge test contributes little to the prediction of job performance above and beyond inexpensive and readily available cognitive ability tests

• Common Uses – Best used for jobs requiring specific job knowledge on the first day of the job (i.e., where the knowledge is needed upon entry to the position)

Dubois, D., Shalin, V. L., Levi, K. R., & Borman, W. C. (1993). Job knowledge test design: A cognitively-oriented approach. U.S. Office of Naval Research Report, Institute Report 241, i-47.

Dye, D. A., Reck, M., & McDaniel, M. A. (1993). The validity of job knowledge measures. International Journal of Selection and Assessment, 1, 153-157.

Ree, M. J., Carretta, T. R., & Teachout, M. S. (1995). Role of ability and prior job knowledge in complex training performance. Journal of Applied Psychology, 80(6), 721-730.

Roth, P. L., Huffcutt, A. I., & Bobko, P. (2003). Ethnic group differences in measures of job performance: A new meta-analysis. Journal of Applied Psychology, 88(4), 694-706.

Sapitula, L., & Shartzer, M. C. (2001). Predicting the job performance of maintenance workers using a job knowledge test and a mechanical aptitude test. Applied H.R.M. Research, 6(1-2), 71-74.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

@@@@@
http://www.siop.org/workplace/employment%20testing/testtypes.aspx#6.      Job Knowledge Tests

Job knowledge tests typically use multiple choice questions or essay type items to evaluate technical or professional expertise and knowledge required for specific jobs or professions.  Examples of job knowledge tests include tests of basic accounting principles, A+/Net+ programming, and blueprint reading.  
Advantages

  • Have been demonstrated to produce valid inferences for a number of organizational outcomes, such as job performance.
  • Can reduce business costs by identifying individuals for hiring, promotion or training who possess the needed skills and abilities.
  • Are typically less likely to differ in results by gender and race than other types of tests.
  • May be viewed positively by test takers who see the close relationship between the test and the job.
  • Will not be influenced by test taker attempts to impression manage or fake responses.
  • Can provide useful feedback to test takers regarding needed training and development.

Disadvantages

  • May require frequent updates to ensure the test is current with the job.
  • May be inappropriate for jobs where knowledge may be obtained via a short training period.
  • Can be costly and time-consuming to develop, unless purchased off-the-shelf.

Integrity/Honesty Test

An integrity test is a specific type of personality test designed to assess an applicant’s tendency to be honest, trustworthy, and dependable. A lack of integrity is associated with such counterproductive behaviors as theft, violence, sabotage, disciplinary problems, and absenteeism. Integrity tests have been found to measure some of the same factors as standard personality tests, particularly conscientiousness, and perhaps some aspects of emotional stability and agreeableness.

Integrity tests can also be valid measures of overall job performance. This is not surprising because integrity is strongly related to conscientiousness, itself a strong predictor of overall job performance. Like other measures of personality traits, integrity tests can add a significant amount of validity to a selection process when administered in combination with a cognitive ability test. In addition, few, if any, integrity test performance differences are found between men and women or applicants of different races or ethnicities. Integrity tests will not eliminate dishonesty or theft at work, but the research does strongly suggest that individuals who score poorly on these tests tend to be less suitable and less productive employees.
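One common way to use an integrity test "in combination with" a cognitive ability test, as described above, is a unit-weighted composite of standardized scores. The sketch below is illustrative only: the raw scores are invented, and equal weighting is one simple choice (in practice, weights would come from a validity study).

```python
# Hypothetical composite of two predictors: standardize each test score
# against the applicant pool (z-scores), then average the two into an
# equal-weight composite used to rank applicants.
from statistics import mean, stdev

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def composite(cognitive, integrity):
    """Equal-weight composite of standardized scores, one per applicant."""
    zc, zi = zscores(cognitive), zscores(integrity)
    return [(c + i) / 2 for c, i in zip(zc, zi)]

cog = [28, 35, 22, 31]   # invented raw cognitive test scores
intg = [60, 55, 72, 66]  # invented raw integrity test scores
scores = composite(cog, intg)
best = max(range(len(scores)), key=scores.__getitem__)
# With these invented data, applicant index 3 has the highest composite.
```

Standardizing first matters because the two tests are on different raw scales; averaging raw scores would let the wider-ranging test dominate.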

Overt integrity tests (also referred to as clear-purpose tests) are designed to directly measure attitudes relating to dishonest behavior. They are distinguished from personality-based tests in that they make no attempt to disguise the purpose of the assessment. Overt tests often contain questions that ask directly about the applicant’s own involvement in illegal behavior or wrongdoing (e.g., theft, illicit drug use). Such transparency can make the desired answer easy to guess, so applicant faking is always a concern with overt integrity tests; score results from such tests should be interpreted with caution.

 Considerations:

• Validity – Integrity tests have been shown to be valid predictors of overall job performance as well as many counterproductive behaviors such as absenteeism, illicit drug use, and theft; The use of integrity tests in combination with cognitive ability tests can substantially enhance the prediction of overall job performance (i.e., high degree of incremental validity)

• Face Validity/Applicant Reactions – May contain items that do not appear to be job related (i.e., low face validity) or seem to reveal applicants’ private thoughts and feelings; Applicants may react to integrity tests as being unnecessarily invasive, but strong negative reactions have been found to be rare; Some item types may be highly transparent making it easy for applicants to fake or distort test scores in their favor

• Administration Method – Can be administered via paper and pencil or electronically

• Subgroup Differences – Generally, few, if any, average score differences are found between men and women or applicants of different races or ethnicities; therefore, it is beneficial to use an integrity measure when another measure with greater potential for adverse impact (e.g., a cognitive test) is included in the selection process; Both overt and personality-based integrity test scores seem to be correlated with age, indicating younger individuals have the potential to be more counterproductive employees, possibly because of a youthful tendency towards drug experimentation and other social deviance

• Development Costs – The cost of purchasing an integrity test is typically less expensive than developing a customized test

• Administration Costs – Generally inexpensive, requires few resources for administration, and does not require skilled administrators

• Utility/ROI – High return on investment in settings where counterproductive behaviors (e.g., theft of valuable property or sensitive information, absenteeism) are highly disruptive to organizational functioning

• Common Uses – Typically used to measure whether applicants have the potential to be successful in jobs where performance requires a high level of honesty and dependability; Frequently administered to large groups of applicants as a screen-out measure

Cullen, M. J., & Sackett, P. R. (2004). Integrity testing in the workplace. In J. C. Thomas & M. Hersen (Eds.), Comprehensive handbook of psychological assessment, Volume 4: Industrial and organizational psychology (pp. 149-165). Hoboken, NJ: John Wiley & Sons.

Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679-703.

Sackett, P. R., & Wanek, J. E. (1996). New developments in the use of measures of honesty, integrity, conscientiousness, dependability, trustworthiness and reliability for personnel selection. Personnel Psychology, 49(4), 787-829.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

@@@@@

http://www.siop.org/workplace/employment%20testing/testtypes.aspx#4.      Integrity Tests

Integrity tests assess attitudes and experiences related to a person's honesty, dependability, trustworthiness, reliability, and pro-social behavior. These tests typically ask direct questions about previous experiences related to ethics and integrity, or ask questions about preferences and interests from which inferences are drawn about future behavior in these areas. Integrity tests are used to identify individuals who are likely to engage in inappropriate, dishonest, and antisocial behavior at work.
Advantages

  • Have been demonstrated to produce valid inferences for a number of organizational outcomes (e.g., performance, inventory shrinkage, difficulties in dealing with supervision).
  • Can reduce business costs by identifying individuals who are less likely to be absent or engage in other counterproductive behavior.
  • Send the message to test takers that integrity is an important corporate value.
  • Are typically less likely to differ in results by gender and race than other types of tests.
  • Can be administered via paper and pencil or computerized methods easily to large numbers.
  • Can be cost effective to administer.
  • Do not require skilled administrators.

Disadvantages

  • May lead to individuals responding in a way to create a positive decision outcome rather than how they really are (i.e., they may try to positively manage their impression or even fake their response).
  • May be disliked by test takers if questions are intrusive or seen as unrelated to the job.


Emotional Intelligence Tests

 Emotional intelligence (EI) is defined as a type of social competence involving the ability to monitor one’s own and others’ emotions, to discriminate among them, and to use the information to guide one's thinking and actions. EI is a fairly specific ability that connects a person’s knowledge processes to his or her emotional processes. As such, EI is different from emotions, emotional styles, emotional traits, and traditional measures of intelligence based on general mental or cognitive ability (i.e., IQ). EI involves a set of skills or abilities that may be categorized into five domains:
  • Self-awareness: Observing yourself and recognizing a feeling as it happens.
  • Managing emotions: Handling feelings so they are appropriate; realizing what is behind a feeling; finding ways to handle fears and anxieties, anger, and sadness.
  • Motivating oneself: Channeling emotions in the service of a goal; emotional self-control; delaying gratification and stifling impulses.
  • Empathy: Sensitivity to others’ feelings and concerns and taking their perspective; appreciating the differences in how people feel about things.
  • Handling relationships: Managing emotions in others; social competence and social skills.
The typical approach to measuring EI ability involves administering a set of questions to applicants and scoring the correctness of those responses based on expert judgment (expert scoring) or consensus among a large number of people (consensus scoring). For example, one EI ability test requires the applicant to view a series of faces and report how much of each of six emotions is present, answer questions about emotional scenarios and responses (e.g., predict how an anxious employee will react to a significantly increased workload), and solve emotional problems (e.g., decide what response is appropriate when a friend calls you upset over losing his or her job).
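The consensus-scoring approach described above can be sketched concisely: an applicant's answer is credited in proportion to how many people in a large norm sample chose the same answer. The item, response scale, and norm data below are invented for illustration.

```python
# Hypothetical consensus scoring for one EI ability item: the credit for an
# answer equals the fraction of the norm sample that gave that same answer.

def consensus_score(answer, norm_responses):
    """Proportion of the norm sample that chose the same answer."""
    return norm_responses.count(answer) / len(norm_responses)

# Norm sample's ratings of "how much anger is present in this face?" (0-4)
norm = [2, 2, 3, 2, 1, 2, 3, 2, 2, 3]
print(consensus_score(2, norm))  # 0.6 - the modal answer earns the most credit
print(consensus_score(0, norm))  # 0.0 - an answer no one chose earns nothing
```

Expert scoring works the same way except the answer key comes from a panel of emotion researchers rather than the population consensus.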

Some tests of EI use a self-report method. Self-report questionnaires are commonly used to measure personality traits (e.g., extroversion, agreeableness, conscientiousness). Self-report assessments have been around for decades and serve a very useful purpose. As a way to measure EI abilities, they have some drawbacks. Using a self-report approach has been compared to estimating typing skill by asking applicants a series of questions about how quickly and accurately they can type. Does this mean self-report measures of emotional intelligence should not be used? If the objective is to measure a person’s self-perceived competence or self-image, then this may be the preferred approach. If the objective is to measure EI as a set of abilities, skills, or emotional competencies, then self-report may not be the best method to use. To the extent employers are concerned with fakability of self-reports, ability models of EI will be more acceptable. 

Considerations:

• Validity – Ability-based tests of emotional intelligence have been shown to contribute to the prediction of job performance, particularly when the maintenance of positive interpersonal relations is important to job success 

• Face Validity/Applicant Reactions – Test items appearing to measure social skill generally have good face validity (e.g., identifying emotions expressed in a photograph of a person’s face); Applicants may have a difficult time determining the best answer on some of the items; Some items may not appear to be directly work-related

• Administration Method – Can be administered via paper and pencil or electronically

• Subgroup Differences – There is some evidence women tend to score better than men on tests of emotional intelligence, which is consistent with other research showing women are more skilled at reading facial expressions of emotions than are men

• Development Costs – Cost of purchasing an emotional intelligence test is typically far less expensive than developing a customized test 

• Administration Costs – Generally inexpensive, requires few resources for administration, and does not require skilled administrators

• Utility/ROI – High return on investment if applicants are needed who possess strong interpersonal skills

• Common Uses – Used with occupations requiring high levels of social interaction, cooperation, and teamwork  

Brackett, M. A., Rivers, S. E., Shiffman, S., Lerner, N., & Salovey, P. (2006). Relating emotional abilities to social functioning: A comparison of self-report and performance measures of emotional intelligence. Journal of Personality and Social Psychology, 91, 780-795. 

Frost, D. E. (2004). The psychological assessment of emotional intelligence. In J. C. Thomas & M. Hersen (Eds.), Comprehensive handbook of psychological assessment, Volume 4: Industrial and organizational psychology (pp. 203-215). Hoboken, NJ: John Wiley & Sons.

Mayer, J. D., Salovey, P., & Caruso, D. R. (2004). Emotional intelligence: Theory, findings, and implications. Psychological Inquiry, 15, 197-215.

Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition, and Personality, 9, 185-211. 

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf



Cognitive Ability Tests

Cognitive ability tests assess abilities involved in thinking (e.g., reasoning, perception, memory, verbal and mathematical ability, and problem solving). Such tests pose questions designed to estimate applicants’ potential to use mental processes to solve work-related problems or to acquire new job knowledge.

Traditionally, the general trait measured by cognitive ability tests is called “intelligence” or “general mental ability.” However, an intelligence test often includes various item types which measure different and more specific mental factors often referred to as “specific mental abilities.” Examples of such items include arithmetic computations, verbal analogies, reading comprehension, number series completion, and spatial relations (i.e., visualizing objects in three-dimensional space).

Some cognitive ability tests sum up the correct answers to all of the items to obtain an overall score that represents a measure of general mental ability. If an individual score is computed for each of the specific types of abilities (e.g., numeric, verbal, reasoning), then the resulting scores represent measures of the specific mental abilities.
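The two scoring approaches described above can be sketched together: summing all correct answers yields a general mental ability score, while grouping correct answers by subscale yields specific-ability scores. Item IDs, subscale labels, and answer keys below are illustrative only.

```python
# Hypothetical cognitive test scoring: one pass over the answer key produces
# both an overall (general mental ability) score and per-subscale scores.

def score_cognitive_test(items, responses):
    """items: list of (item_id, subscale, correct_answer);
    responses: item_id -> applicant's answer."""
    overall = 0
    subscales = {}
    for item_id, subscale, key in items:
        correct = int(responses.get(item_id) == key)
        overall += correct
        subscales[subscale] = subscales.get(subscale, 0) + correct
    return overall, subscales

items = [
    ("q1", "verbal", "B"), ("q2", "verbal", "D"),
    ("q3", "numeric", "A"), ("q4", "reasoning", "C"),
]
overall, subscales = score_cognitive_test(
    items, {"q1": "B", "q2": "A", "q3": "A", "q4": "C"})
print(overall, subscales)  # 3 {'verbal': 1, 'numeric': 1, 'reasoning': 1}
```

Whether the overall score or the subscale profile is reported depends on whether the job analysis calls for general mental ability or specific abilities.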

Traditional cognitive tests are well-standardized, contain items that are reliably scored, and can be administered to large groups of people at one time. Examples of item formats include multiple choice, sentence completion, short answer, or true-false. Many professionally developed cognitive tests are available commercially and may be considered when there is no significant need to develop a test that refers specifically to the particular job or organization.

Considerations:

• Validity – Tests of general cognitive ability are good predictors of job performance and training success for a wide variety of jobs (i.e., they have a high degree of criterion-related validity); The more complex the job or training demands, the better these tests work; Other predictors may add only small amounts of incremental validity over cognitive tests

• Face Validity/Applicant Reactions – Tests developed to refer explicitly to specific jobs or types of jobs within the hiring organization may be viewed as more highly related to the job (i.e., high face validity) than commercially developed tests

 • Administration Method – Can be administered via paper and pencil or electronically

• Subgroup Differences – Cognitive ability tests typically produce racial and ethnic differences larger than other valid predictors of job performance such as biodata, personality tests, and structured interviews; The use of other assessment methods (e.g., interviews, biodata instruments) in combination with cognitive ability tests is recommended to lower any potential adverse impact

 • Development Costs – Cost of purchasing a cognitive test is typically less expensive than developing a customized test

• Administration Costs – Generally inexpensive, requires few resources for administration, and does not require skilled administrators

• Utility/ROI – High return on investment if you need applicants who possess particular cognitive abilities or have high potential to acquire job knowledge or benefit from training; Cost effectiveness of developing your own test rather than purchasing a commercial test is lower when face validity is not an issue

 • Common Uses – Best used for jobs requiring particular cognitive abilities for effective job performance and for more complex jobs

Hunter, J. E. (1986). Cognitive ability, cognitive aptitude, job knowledge, and job performance. Journal of Vocational Behavior, 29(3), 340-362.

Murphy, K. R., Cronin, B. E., & Tam, A. P. (2003). Controversy and consensus regarding the use of cognitive ability testing in organizations. Journal of Applied Psychology, 88(4), 660-671.

Outtz, J. L. (2002). The role of cognitive ability tests in employment selection. Human Performance, 15(1-2), 161-172.

Ree, M. J., Earles, J. A., & Teachout, M. S. (1994). Predicting job performance: Not much more than g. Journal of Applied Psychology, 79(4), 518-524.

Schmidt, F. L., & Hunter, J. (2004). General mental ability in the world of work: Occupational attainment and job performance. Journal of Personality & Social Psychology, 86(1), 162-173.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

@@@@@

http://www.siop.org/workplace/employment%20testing/testtypes.aspx#3

Cognitive Ability Tests

Cognitive ability tests typically use questions or problems to measure the ability to learn quickly, along with logic, reasoning, reading comprehension, and other enduring mental abilities that are fundamental to success in many different jobs. Cognitive ability tests assess a person's aptitude or potential to solve job-related problems by providing information about mental abilities such as verbal or mathematical reasoning, and perceptual abilities such as speed in recognizing letters of the alphabet.
Advantages

  • Have been demonstrated to produce valid inferences for a number of organizational outcomes (e.g., performance, success in training).
  • Have been demonstrated to predict job performance, particularly for more complex jobs.
  • Can be administered via paper-and-pencil or computerized methods easily to large numbers.
  • Can be cost effective to administer.
  • Do not typically require skilled administrators.
  • Can reduce business costs by identifying individuals for hiring, promotion, or training who possess the needed skills and abilities.
  • Are not influenced by test-taker attempts to impression manage or fake responses.

Disadvantages

  • Are typically more likely than other types of tests to produce results that differ by gender and race.
  • Can be time-consuming to develop if not purchased off-the-shelf.


Biographical Data (Biodata) Tests

Biodata measures are based on the measurement principle of behavioral consistency, that is, past behavior is the best predictor of future behavior. Biodata measures include items about past events and behaviors reflecting personality attributes, attitudes, experiences, interests, skills and abilities validated as predictors of overall performance for a given occupation.

Often, biodata test items are developed through behavioral examples provided by subject matter experts (SMEs). These items specify situations likely to have occurred in a person’s life, and ask about the person’s typical behavior in the situation. In addition, biodata items reflect external actions that may have involved, or were observable by, others and are objective in the sense there is a factual basis for responding to each item. An item might ask “How many books have you read in the last 6 months?” or “How often have you put aside tasks to complete another, more difficult assignment?” Test takers choose one of several predetermined alternatives to best match their past behavior and experiences.

A response to a single biodata item is of little value. Rather, it is the pattern of responses across several different situations that gives biographical data the power to predict future behavior on the job. For this reason, biodata measures often contain between 10 and 30 items, and some wide-ranging instruments may contain a hundred or more items. Response options commonly use a 5-point scale (1 = Strongly Disagree to 5 = Strongly Agree). Once a group of biodata items is pretested on a sample of applicants, the responses are used to group the items into categories or scales. Biodata items grouped in this way are used to assess how effectively applicants performed in the past in competency areas closely matched to those required by the job.
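The scoring logic described above (pooling Likert responses across items that pretesting assigned to the same scale) can be sketched as follows. The item keys and scale groupings here are hypothetical; a real instrument's key comes from its pretest and validation samples:

```python
# Map each biodata item to the scale (competency area) it was assigned
# to during pretesting; this grouping is purely illustrative.
SCALE_KEY = {
    "item_01": "teamwork", "item_02": "teamwork",
    "item_03": "dependability", "item_04": "dependability",
}

def score_scales(responses):
    """responses: dict mapping item -> Likert rating (1..5).
    Returns the mean response per scale, so the score reflects the
    pattern across items rather than any single answer."""
    totals, counts = {}, {}
    for item, rating in responses.items():
        scale = SCALE_KEY[item]
        totals[scale] = totals.get(scale, 0) + rating
        counts[scale] = counts.get(scale, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

scores = score_scales({"item_01": 4, "item_02": 5, "item_03": 2, "item_04": 3})
# scores: {"teamwork": 4.5, "dependability": 2.5}
```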

A more recent development is targeted biodata instruments. In contrast to traditional biodata measures developed to predict overall job performance, targeted biodata measures are developed to predict individual differences in specific job-related behaviors of interest. Similar to the developmental process used for traditional biodata, the content of a targeted biodata measure is often driven by SME-generated behavioral examples relevant to the specific behavior(s) of interest.

An example of a targeted biodata measure is a job compatibility measure (sometimes referred to as a suitability measure) which focuses on the prediction of counterproductive or deviant behaviors. Counterproductive behavior is often defined as on-the-job behavior that is (a) harmful to the mission of the organization, (b) does not stem from a lack of intelligence, and (c) is willful or so seriously careless it takes on the character of being willful. Previous criminal misconduct (e.g., theft), employment misconduct (e.g., sexual harassment, offensiveness to customers, and disclosure of confidential material), fraud, substance abuse, or efforts to overthrow the Government are some major factors that may be relevant to suitability determinations. A job compatibility index is typically used to screen out applicants who are more likely to engage in counterproductive behavior if they are hired. Job compatibility measures are less costly to implement than other procedures typically used to detect counterproductive behaviors (e.g., interviews, polygraphs) and are beneficial for positions requiring employees to interact frequently with others or handle sensitive information or valuable materials.
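A job compatibility index of the kind described is typically used as a screen-out: applicants whose index exceeds a validated cutoff are flagged as higher risk for counterproductive behavior. A minimal sketch; the item names, weights, and cutoff below are hypothetical and would in practice come from a validation study:

```python
# Hypothetical negatively keyed items: higher agreement indicates
# greater risk. Weights and the cutoff are illustrative placeholders
# for values that a validation study would supply.
RISK_WEIGHTS = {"item_theft": 2.0, "item_conflict": 1.0, "item_rules": 1.5}
CUTOFF = 10.0

def compatibility_index(responses):
    """Weighted sum of 1..5 Likert responses to the risk-keyed items."""
    return sum(RISK_WEIGHTS[item] * rating for item, rating in responses.items())

def screen_out(responses):
    """True if the applicant's index exceeds the validated cutoff."""
    return compatibility_index(responses) > CUTOFF

# Index = 2.0*4 + 1.0*1 + 1.5*2 = 12.0, which exceeds the cutoff of 10.0
flagged = screen_out({"item_theft": 4, "item_conflict": 1, "item_rules": 2})
```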

Considerations:

• Validity – Biodata measures have been shown to be effective predictors of job success (i.e., they have a moderate degree of criterion-related validity) in numerous settings and for a wide range of criterion types (e.g., overall performance, customer service, team work); Biodata measures have also appeared to add additional validity (i.e., incremental validity) to selection systems employing traditional ability measures

• Face Validity/Applicant Reactions – Because some biodata items may not appear to be job related (i.e., low face validity) applicants may react to biodata tests as being unfair and invasive

• Administration Method – Administered individually but can be administered to large numbers of applicants via paper and pencil or electronically at one time

• Subgroup Differences – Typically have less adverse impact on minority groups than do many other types of selection measures; Items should be carefully written to avoid stereotyping and should be based on experiences under a person’s control (i.e., what a person did rather than what was done to the person)

• Development Costs – The development of biodata items, scoring strategies, and validation procedures is a difficult and time-consuming task requiring considerable expertise; large samples of applicants are needed to develop and validate the scoring strategy, and additional samples may be needed to monitor the validity of the items for future applicants

• Administration Costs – Can be cost effective to administer and generally not time consuming to score if an automated scoring system is implemented

• Utility/ROI – High predictive ability can allow for the identification and selection of top performers; Benefits (e.g., savings in training, high productivity, decreased turnover) can outweigh developmental and administrative costs

• Common Uses – Commonly used in addition to cognitive ability tests to increase validity and lower adverse impact
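The "incremental validity" mentioned in the Validity consideration is usually quantified as the gain in variance explained (ΔR²) when biodata scores are added to a model that already contains the traditional ability measure. A sketch on synthetic data using ordinary least squares; the effect sizes are made up for illustration:

```python
import numpy as np

def r_squared(X, y):
    """In-sample R^2 from OLS, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 200
ability = rng.normal(size=n)
biodata = rng.normal(size=n)
# Synthetic performance criterion depending on both predictors plus noise.
perf = 0.5 * ability + 0.3 * biodata + rng.normal(scale=0.8, size=n)

r2_ability = r_squared(ability.reshape(-1, 1), perf)
r2_both = r_squared(np.column_stack([ability, biodata]), perf)
delta_r2 = r2_both - r2_ability  # incremental validity of the biodata measure
```

In a real validation study the same comparison would be run on actual applicant scores and performance criteria, typically with cross-validation, since in-sample R² can only increase when predictors are added.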

Elkins, T., & Phillips, J. (2000). Job context, selection decision outcome, and the perceived fairness of selection tests: Biodata as an illustrative case. Journal of Applied Psychology, 85(3), 479-484.

Hough, L. M., & Oswald, F. L. (2000). Personnel selection: Looking toward the future— Remembering the past. Annual Review of Psychology, 51, 631-664.

Mount, M. K., Witt, L. A., & Barrick, M. R. (2000). Incremental validity of empirically keyed biodata scales over GMA and the five factor personality constructs. Personnel Psychology, 53(2), 299-323.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75(2), 175-184.

Schmitt, N., Cortina, J. M., Ingerick, M. J., & Wiechmann, D. (2003). Personnel selection and employee performance. Handbook of Psychology: Industrial and Organizational Psychology, 12, 77-105. New York, NY: John Wiley & Sons, Inc.

http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf

@@@@@

http://www.siop.org/workplace/employment%20testing/testtypes.aspx#2

Biographical Data

The content of biographical data instruments varies widely and may include such areas as leadership, teamwork skills, specific job knowledge and specific skills (e.g., knowledge of certain software, specific mechanical tool use), interpersonal skills, extraversion, creativity, etc. Biographical data instruments typically use questions about education, training, work experience, and interests to predict success on the job. Some biographical data instruments also ask about an individual's attitudes, personal assessments of skills, and personality.
Advantages

  • Can be administered via paper-and-pencil or computerized methods easily to large numbers.
  • Can be cost effective to administer.
  • Have been demonstrated to produce valid inferences for a number of organizational outcomes (e.g., turnover, performance).
  • Are typically less likely than other types of tests to produce results that differ by gender and race.
  • Do not require skilled administrators.
  • Can reduce business costs by identifying individuals for hiring, promotion, or training who possess the needed skills and abilities.

Disadvantages

  • May lead individuals to respond in a way that creates a positive decision outcome rather than reflecting how they really are (i.e., they may try to manage their impression or even fake their responses).
  • Do not always provide sufficient information for developmental feedback (i.e., individuals cannot change their past).
  • Can be time-consuming to develop if not purchased off-the-shelf.