The method simulates real work problems that participants solve in a web application.
The method works with a competency model, compiled by HR psychologists according to the requirements of business practice. The competency model includes competencies, factors, and indicators that are crucial for the performance of virtually any job position.
Our method collects a variety of data on the abilities, skills and personality factors of job applicants and employees.
It collects this information at dozens of different points and tests it repeatedly.
Data on a particular trait or competency are collected throughout the simulation game. A high score in a competency then indicates consistent performance across the simulation, not random behavior.
Individual measures are evaluated by the application's own algorithms.
The information obtained can be used for several different purposes:
Job seeker diagnostics
Diagnostics of the development needs of existing employees
Diagnostics of career plans of students or trainees
Diagnostics in the context of adaptation plans
Inductively created method - from the individual parts, Behavera created an overall system.
Multidimensional theoretical basis
The Behavera method is based on different theoretical schools, which allows it to use the most up-to-date psychological knowledge and to reflect the conclusions of current research in work psychology, personality psychology, behavioral and cognitive psychology, gestalt psychology, social psychology (which deals with topics such as conformity and social desirability), statistics and psychometrics, and classical test theory (CTT), including research on the prediction of work performance and the issues of self-assessment.
We draw on CTT (classical test theory) and factor analysis.
Inductively created factors (competencies) with their own operationalization. The individual factors were created on the basis of field research, qualitative investigation, and interviews with HR specialists, in order to cover as much of the information required by HR specialists as possible with as few factors as possible. The aim was to create a screening method that is, in terms of measurement, sufficiently fine-grained.
If Behavera's method is based on test theory, can we still differentiate it from traditional tests?
Yes. Tests usually mean a 'diagnostic' in the broadest sense of the word. There is nothing inherently wrong with tests. Behavera is also a test in this sense. It comes down to what kind of test it is (what the factors it measures are and which items saturate them) and what purpose it's used for.
It is precisely the combination of factors, items, and purpose that is key. When these three aspects work together, a test can be great. In the context of recruiting, for example, the test can be void even with well-chosen factors. This can happen if the questions that saturate the factors are along the lines of "Do you think you're a good leader?" because they don't match the purpose of the test. Participants aren't motivated to answer such questions sincerely in a recruiting scenario, and the test thus falls short.
However, if the same factors are saturated differently in the same context (of recruiting) - for example, simulating real work behaviors instead of asking questions - the combination of the three suddenly becomes consistent. This is what we want and need to define ourselves against; it's not about the simulation vs test distinction, but about distinguishing a good test from a bad one.
A great deal of research at the intersection of behavioral psychology and occupational psychology has examined what information best predicts a person's future job performance. Current work behavior is the strongest predictor of how a person will perform in the future; predictors such as self-esteem, self-confidence, education, and experience are secondary. Behavera therefore focuses on current behavior in the simulation game-based assessment and draws its conclusions from it, maximizing the predictive validity of the method.
What's wrong with questionnaires and traditional tests?
A large number of studies (especially in the field of social psychology) have shown that when filling out questionnaires and tests, people have a strong (even if unconscious) tendency to misrepresent their results and to attempt to appear more competent than they are.
Research shows that this effect ('social desirability' or the tendency to be liked) is strongest when people are strongly motivated to achieve good results. This is emblematic of recruiting, where the social desirability tendencies are usually strongest.
On the flip side, the inability to accurately evaluate oneself comes to the fore in employee testing. People have conscious or unconscious tendencies to under- or overestimate themselves.
Because of this, our method focuses on behavior (rather than self-assessment), which makes misrepresenting (and embellishing) participants' results much more difficult. We've also implemented an algorithm that highlights tendencies toward social desirability.
What are the benefits of behavioral data over other types?
Behavioral data are the best feasible evidence. The only type of data that's better is long-term real-life observation. The validity of behavioral data has been proven by thousands of studies and meta-analyses.
One large meta-analysis showed that data from traditional interview questions carry little predictive value: their validity and reliability sit around 20 - 30%. With good behavioral data, these figures can climb to 60 - 80%, a huge rise in reliability. There is still room for error, as with everything concerning the human mind and its manifestations in practice.
Which sources underpin our science?
There are many sources we've used - and a great number of studies and meta-analyses are available.
For example:
Cascio, W.F. & Aguinis, H. (2005) Applied Psychology in Human Resource Management (6th Edition), NJ: Prentice Hall
Dewberry, C. (2011) Integrating candidate data: Consensus or arithmetic? In N. Povah & G. Thornton (Eds) Assessment and Development Centres: Strategies for Global Talent Management, Farnham, Surrey: Gower
Gaugler, B.B., Rosenthal, D.B., Thornton, G.C. & Bentson, C. (1987) Meta-analysis of assessment centre validity, Journal of Applied Psychology, 72, 493-511
Lowry, P.E. (1994) The structured interview: An alternative to the assessment centre? Public Personnel Management, 23
Meriac, J.P., Hoffman, B.J., Woehr, D.J. & Fleisher, M.S. (2008) Further evidence for the validity of assessment centre dimensions: a meta-analysis of the incremental criterion-related validity of dimension ratings, Journal of Applied Psychology, 93(5), 1042-1052
Schmidt, F.L. & Hunter, J.E. (1998) The validity and utility of selection methods in personnel psychology: practical and theoretical implications of 85 years of research findings, Psychological Bulletin, 124, 262-274
Smith, M. (1994) A theory of the validity of predictors in selection, Journal of Occupational and Organizational Psychology, 67(1), 13-31
Thornton, G.C. & Gibbons, A.M. (2009) Validity of assessment centres for personnel selection, Human Resource Management Review, 19, 169-187
Why are the theories used in Behavera and their combinations the right way to go?
A theory as such does not guarantee anything, and the word 'theory' is really another way of saying 'approach'. There are only two approaches to creating diagnostic methods that are grounded in science, research, methodology, and statistics. These approaches aren't necessarily in competition, since each is useful in different cases. The other approaches to diagnostic methods are ad hoc would-be methods based on intuition.
Our approach is based on CTT (classical test theory). It is 'traditional' and 'test-based' because it views a test (any diagnostic, including our simulations) as a whole: validity, reliability, and norm creation all apply to the results of the method as a whole. The theory mainly defines the methodological approach and how we test our method's validity and reliability. CTT offers specific statistical tools that have been in development for decades. One of these is factor analysis, invented by the mathematician Charles Spearman, who was also a brilliant psychologist and is responsible for a sizeable part of the foundations of scientific psychology, methodology, and statistics.
A second, relatively recent approach is IRT (item response theory), which builds on entirely different principles. The primary difference is that IRT does not concern itself with the test as a whole, but with the validity, reliability, and norms of each item separately. Each item (e.g. a question in a test) has a pre-set structure of parameters, and if these are well conceptualized, the whole test works. This is used with intelligence tests, where 10,000 questions with different parameters are created; the test can then be taken by a thousand people with randomized questions (so each test is unique). Because the test deployer has control over the items, they can be sure that the overall results are comparable, valid, and reliable. This places entirely different requirements on the statistics and methodology, and utilizes different tools, programs, etc.
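IRT's item-level focus can be illustrated with its simplest model, the one-parameter (Rasch) model, in which each item carries only a difficulty parameter. This is a generic textbook sketch, not Behavera's or any vendor's code; the function name and values are illustrative.

```python
import math

def rasch_probability(theta: float, difficulty: float) -> float:
    """Probability that a person with ability `theta` answers an item
    of the given `difficulty` correctly, under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# An item matched exactly to the person's ability gives a 50% chance
# of success; easier items (lower difficulty) give higher chances.
p_matched = rasch_probability(theta=0.0, difficulty=0.0)   # 0.5
p_easy = rasch_probability(theta=0.0, difficulty=-2.0)     # ~0.88
```

Because every item's behavior is parameterized like this, randomized item sets can still yield comparable overall scores, which is the point made above.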
Are we using the best theory there is?
No theory by itself guarantees anything. It is always a journey towards an ideal that we strive to approximate. With few exceptions, all psychological methods in the world (including clinical, forensic, IQ, educational, etc.) are based on CTT theory. The exceptions are based on IRT, and so far there are very few of them, mostly still at the testing stage.
The method draws on the best practices from assessment centers, which go back more than 70 years. The basic building block is working with a competency model in which individual competency is tested several times on different work tasks.
This gives us insight into the individual aspects of the competencies and also allows us to verify their stability (minimizing the possibility of accidental one-off guesses).
Is Behavera an alternative to assessment centers? How are they different?
Behavera's simulation games are inspired by assessment centers' best practices. We use simulated situations and exercises which are very similar to assessment centers.
The main difference is that Behavera does not take place in a physical, face-to-face setting. Instead, we use automation to deliver the assessments online.
Our algorithms are very similar to how assessment center facilitators evaluate, but we don't need to rely on external assessors.
Finally, Behavera offers better prices, scalability, and user experience than traditional assessment centers.
The method implicitly works with behavioral scales and ipsative comparison (scoring). The aim is to compare the candidate with the general population norm as well as with the anticipated prerequisites (criteria) for the given position.
Our method methodologically borrows from assessment and development centers, and behavioral psychology. The methodological principles reflect modern findings in the field of personality psychology, motivation, stimulation, leadership, and self-management. These allow us to check the construct validity of the model.
Construct validity requirements state that a method must measure what it has set out to measure. In other words, if we choose to measure communication, we must be sure that what we have in the data is partial evidence of the communication competency and not, for example, general intelligence and wit that enable us to find the right answer.
In our approach, each of the competencies, factors, dimensions, or indicators has a clear psychological and methodological background that could be traced back as far as Freud, if we were to exaggerate. We tailor the content of each task to the methodological background of each competency, which allows us to control the link between a task and the competency it measures.
When developing the method, we applied similar procedures to those used when developing a standardized psychodiagnostic instrument. Ours is an inductively developed psychological method that respects the requirements of both diagnostic practices and organizations.
A fundamental element of measurement reliability is the "objectivity" of the test.
However, objective here does not mean objective truth. Objective means that Behavera measures behavior and is not based on the participant's self-assessment. To put it simply, assessment outcomes do not reflect a person's perception of themselves, but their actual work performance in specific situations.
Unlike questionnaire methods and self-assessment scales, Behavera lets the participant perform the actual work task (or its simulated representation) to solve a situation in real time.
Are Behavera data and results objective?
Strictly speaking, our data are not objective because there is no such thing.
Nevertheless, they approximate a "work reality" more accurately than other methods. Why is that? Because we don't ask people how they work, we let them demonstrate it "live". That's the fundamental difference, and also the reason why our measurements are more reliable and more useful in practice.
Behavera simulation games are therefore not an assessment of how an individual perceives themselves in a given skill, ability, or competency; the output is an objective assessment of the individual's performance, choices, and the practical skills they demonstrated in the simulated environment.
Our method is constructed in such a way that each trait (competency) is measured as many times as necessary to have sufficient predictive value. The entire measurement system is so detailed and sophisticated that it measures at approximately 200-500 points during the simulation, ensuring that each competency is measured enough times to make relevant judgements based on the measurement itself.
The measurement frequency greatly refines the internal consistency (reliability) of the assessment.
Each measurement adds a sliver of knowledge from the participant's manifested personality (particularly work characteristics) to the overall picture.
Based on these measurements, the application then uses statistical procedures to construct and interpret a holistic picture of the participant.
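As a rough illustration of how hundreds of point measurements can be condensed into per-competency scores, here is a minimal sketch. The observation data, names, and simple averaging are assumptions made for illustration; the actual statistical procedures described above are more involved.

```python
from collections import defaultdict
from statistics import mean

# Each measurement point yields a (competency, score) pair;
# a single simulation produces hundreds of such observations.
observations = [
    ("communication", 0.8), ("communication", 0.7),
    ("integrity", 0.9), ("communication", 0.75), ("integrity", 0.85),
]

def aggregate(observations):
    """Group observations by competency and average them into one score."""
    by_competency = defaultdict(list)
    for competency, score in observations:
        by_competency[competency].append(score)
    return {c: mean(scores) for c, scores in by_competency.items()}

profile = aggregate(observations)
# profile["communication"] == 0.75, profile["integrity"] == 0.875
```

Repeated measurement is what makes each competency score a summary of many small pieces of evidence rather than a single guess.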
The participant has the opportunity to behave in a number of very different ways at many points during the simulation game, which may lead to identical or different outcomes. These choices are not a priori good or bad. What is crucial for measurement purposes is which choices the candidate is inclined to make, which ones they choose over others, and, last but not least, the consistency of these choices.
The individual choices help us identify the specific behaviour of the candidate and thus allow us to make unbiased inferences about their personal preferences, since the candidate is not looking for one right option but is choosing how to behave from a range of possible solutions.
The point is therefore not simply to decide on a particular option, but also on the way in which the candidate makes that decision.
An integral part of this measurement is also the underlying context and the means of communication by which the candidate accompanies their individual choices.
For example:
Imagine a simple situation: a colleague at work asks you for help and you decide to oblige. So you choose “yes” from the two fundamental options... but how do you communicate your "yes"?
"Sure, I am happy to help you! That's what I'm here for!"
"Okay, but this is the last time..."
"You're so annoying, it's terrible, show me what you can't do... "
"OK, but you owe me!"
All the answers lead to the “yes” choice, but with a completely different story that reveals a lot!
Isn't freedom of choice a relative term? After all, the participants do choose from a limited set of options.
Of course, freedom of choice is relative. Behavera is a simulation meant to support diagnostics in companies; it needs to be easily accessible and deliver a quick but valid assessment. The extent of 'freedom of choice' in our simulation games is adapted to these factors. We draw on our experience in psychodiagnostics and psychometrics, and on general findings from cognitive psychology. We provide just enough options to be able to discern the necessary behaviors. Sometimes five options are enough, and sometimes we need to offer more to distinguish the factors in a sufficiently nuanced way.
How do we keep our measurements reliable?
We regularly measure our assessments' reliability. We adjust the norms, watch the distribution, and make sure that the method differentiates well. The method is internally consistent and stable over time (test-retest reliability), precisely because we utilize a good methodology: well-set factors that are well operationalized, and which we measure in sufficient detail at many points in the simulation.
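Internal consistency of this kind is conventionally quantified with statistics such as Cronbach's alpha, which rises when the items measuring one competency agree with each other. The following is a generic textbook implementation, not Behavera's own tooling:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a matrix with rows = participants, columns = items.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(item_scores[0])
    items = list(zip(*item_scores))  # transpose to per-item columns
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly agreeing items give alpha = 1.0.
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Many measurement points per competency, as described above, is exactly what gives a statistic like this enough items to be meaningful.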
Behavera begins with the choice of competencies and ends with the assessment of competencies.
As a result, we can guarantee a high degree of construct validity because the individual constructs (competencies) are operationalized, repeatedly measured, and interpreted based on that operationalization.
The operationalization of competencies is based on current personality theories, which we link to the requirements from practice (organizations, companies, universities, etc.).
This enables us to measure specific competencies and then draw relevant conclusions on these specific competencies.
By design, the method effectively tests selected sub-aspects of the day-to-day work tasks of the position for which it is currently set up. Because participants must actively engage with the assigned tasks, the method can reliably predict the current level of the individual competencies - and thus also the opportunities that open up for a given candidate on the job market.
Participants who perform well in competencies measured by Behavera assessments also succeed in these competencies in their workplace.
We validated the factors using recognized methods (assessment centers, BIP, work observations, 360 interviews, peer interviews, and superior interviews), which means they are well-defined and truly mirror a person's work-related personality characteristics.
The method behind Behavera's simulation game-based assessments is sophisticated in terms of face validity. It allows the participants to immerse themselves in the role and experience the feeling of solving a real work problem. At the same time, the method combines the measurement of multiple competencies as much as possible and quickly changes the sections in which it measures specific information. This ensures that candidates do not develop a 'mental set': the method does not give participants much opportunity to realize which competency is currently being measured, so they cannot try to adapt their behavior.
What makes Behavera's method valid?
Our method is valid because it uses a valid methodology. It has validly set factors (competencies) that measure what organizations need (face validity).
Thanks to our research we know that the results do match business practice - i.e. the people who perform well in Behavera perform well in their workplace (external validity).
Finally, we validated the factors using recognized methods, which means that they are well-defined and truly mirror a person's work-related personality characteristics (convergent validity).
How exactly do we check the validity - and what are the results?
In terms of external validity, we conducted a statistical investigation to test convergent validity. For this purpose, we used some existing psychological diagnostic tools (such as BIP) to measure the same group of people. We then made intergroup comparisons.
In this way, we verified the meaningfulness of the factors (competencies) as well as the ability of the method to adequately differentiate between groups of persons.
We already have clear (numerical) evidence that our method can differentiate people well, distinguishing good performance from average and average performance from poor; thus helping make correct personnel decisions.
At the same time, thanks to regular contact with our customers, we can verify the performance of successful candidates "in practice" and compare their Behavera performance with real job criteria they are now assessed on.
Participants are assessed in a similar way to an assessment center. The basic measured element is observable behavior.
Each task is linked to a specific competency and each choice contributes to the whole picture in that competency.
More complex tasks are linked to several competencies and the individual steps taken by the candidate during the simulation game contribute to one, two, or more competencies simultaneously.
Competencies are measured repeatedly and at different points of the assessment.
As a result, we can assess not only the overall level of the competency but also the specific working conditions in which a better and/or worse performance in the competency was achieved.
Each sequence of our simulation games contains a multitude of model situations.
In each of these, several competencies manifest themselves.
Generally, 1 - 2 competencies are represented more than others.
We designed the model situations from the ground up so that the individual competencies measured manifest themselves and can be measured.
Each model situation is tested repeatedly and adjusted based on statistical analyses. This ensures that the situations are adequately difficult and meaningful, and that they contribute as much as possible to the bigger picture of the candidate's skills and abilities.
The method allows for three levels of comparison:
Intra-individual, where the same individual is measured (compared) against themselves at different times. This provides insight not only into the current picture of the participant's abilities and skills but also their consistency and development over time.
Intra-group, where participants are compared with each other in a relevant group, for example with other candidates for the same position in the same company.
Normative, where the participant is compared to the general norm of the relevant population, for example, a candidate for a sales job with the corresponding population of salespeople.
Through a sophisticated system of collecting evidence of an individual's competencies and presenting it, Behavera can assess the participant's performance at different times, compare it to other participants within the group (applicants, existing employees, or within the team), and also compare their performance to the general population norm.
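The group and population-norm comparisons can be sketched in a few lines. The scores and norm parameters below are invented for illustration and do not reflect Behavera's actual norms or algorithms:

```python
def group_percentile(score, group_scores):
    """Share of the comparison group that the participant outperforms."""
    return sum(s < score for s in group_scores) / len(group_scores)

def norm_z(score, norm_mean, norm_sd):
    """Standardized position relative to the population norm (z-score)."""
    return (score - norm_mean) / norm_sd

candidate = 72.0
group = [55.0, 60.0, 72.0, 80.0, 65.0]  # hypothetical peer applicants
print(group_percentile(candidate, group))               # 0.6 (beats 3 of 5 peers)
print(norm_z(candidate, norm_mean=60.0, norm_sd=10.0))  # 1.2 (SD above the norm)
```

The same raw score can thus be read two ways: relative to the immediate applicant pool and relative to the wider population, which is why both views appear in the output.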
The results of our assessment are not only a comprehensive picture of the participant's traits and competencies. They also answer the question "What does this mean for me?"
Although the Behavera methodology does not offer many opportunities to bias one's results in a desirable direction, we pay particular attention to phenomena such as "social desirability", which is especially present in personal interviews and assessment centers.
We have added a mechanism that helps us capture different aspects of socially desirable behavior at various points of the game. We then assess their overall strength and the risks involved, which allows us to evaluate the degree of social desirability in an individual.
This data, including its interpretation, is then included in the output report in the platform.
Behavera uses a methodology based on competency models. Competency models are linked to specific job positions and operationalized with respect to market needs.
The link between competencies and job roles is determined by the requirements of business practice (HR, managerial, SMEs) and supported by analyses of job postings.
As a result, we can test exactly what business professionals require (and actively seek out) in the roles.
Competency models are comprehensive and able to encompass the entire personality-work side of an individual.
In creating competency models, we followed two complementary rules:
Rule 1:
The individual competencies measured are not a reflection of our imagination, but are based on well-tested and clearly operationalised theoretical concepts. In this way, we are assured that we know what we measure, and that what we measure can be measured in the first place.
Rule 2:
We intend to develop a method that will precisely meet the requirements of our target audience. Therefore, in Behavera we examine competencies that are key in practice, and we present conclusions that are useful, understandable, and applicable in professional practice.
Integrity refers to wholeness, which manifests itself in personal experience, thoughts, and behaviour. Integrity occurs when these components are in harmony, i.e. when a person acts as they feel and think. There is no stylization or obfuscation of true motives and intentions. Integrity also refers to keeping one's own attitudes and approaches to work in line with one's personal value system. Integrity can also mean unity regarding an individual's approach to work and reflection of company values. It includes acceptance of related principles and policies.
Acts in accordance with the moral principles applicable in the context.
Acts in line with declared attitudes and values (does what they say and says what they do).
Acts in accordance with the role they (voluntarily) occupy.
Knows and cares about the vision of the organization, understands its essence, and considers it to be their own.
Appears trustworthy and there are no obvious reasons for loss of trust.
Can clearly articulate their motives for working and there is a clear alignment between what they expect from the job and what the organization can deliver.
Recognizes the values of the organization and its culture, and thinks and acts in line with them.
Responds calmly and coolly to obstacles (e.g., wrong information, lack of information, less time than expected, etc.) and does not panic or place blame or responsibility on others. Does not hide from problems.
Interest in the job is based on intrinsic motives - it is not simply the result of external stimuli that are not directly related to the job (benefits).
Is interested in taking on job/role-related responsibilities.
In crises, acts in accordance with the organization's priorities.
Knows the limits of their resilience, applies the principles of psycho-hygiene, and does not let the situation "go too far" when necessary.
How can we measure competencies such as integrity?
The competency of integrity is not measured often, since it is more complicated to do so.
For those who don't have rich experience in workplaces and haven't seen the competencies at play, it can be difficult to grasp a competency like integrity methodologically. Unlike effective communication, integrity is not easy to operationalize (i.e. it's not easy to figure out how a person with a high level of integrity behaves).
Our team regularly meets and works with recruiters, candidates, and managers. As a result of these years of field experience with work behaviors combined with our high-quality methodology, we can reliably measure even these competencies.
The operationalization of all competencies is based on theoretical knowledge and current research; as well as the requirements of diagnostic practice (see above).
Detailed process of our operationalization:
The specific description of behavioral manifestations at different levels of mastery of each competency was the first essential step. However, the correct link between well-defined competencies and the core of the method (or measurement) is crucial.
The following steps were followed in linking the operationalized competencies to the content of the method:
First, the key prototypes of generic positions that Behavera will focus on in its work were identified.
Subsequently, rough contours of tasks for each type of position were drawn up. Defined (operationalized) competencies were then identified within these tasks.
Once a preliminary competency model for a specific job had been established, we repeatedly returned to the individual task types and adjusted them based on a comprehensive view of the competency model, to be sure of a few key parameters.
These parameters include in particular:
The certainty that a given competency is manifested in a given type of task
Knowledge of the degree of occurrence (difficulty / "guessability") of the competency in the task
Confidence that we will have a valid and reliable measurement tool for the task-competency combination
It is only after several iterations of this creative process that we have reached the final state, i.e. the point where we have job-specific task types, we know which competencies we are measuring in them, and what methodology we will use to determine the degree and level of their occurrence.
The Behavera methodology is not based on self-assessment scales (unlike most psychodiagnostic methods) but on the assessment of observable behavior in a virtual environment.
The underlying mechanism is the multitude of choices that the simulation game presents to the participants, as well as the considerable freedom in decision-making and solving each type of task. As a result, we can distinguish very accurately between different approaches to problem-solving and different communication and negotiation styles.
At the psychometric level, this refers to preferential choice rather than right and wrong answers.
The evaluation of these tasks is more complicated in terms of methodology, as it assumes knowledge of different patterns of behavior that are saturated with different levels of competency sets. In the background of this measurement, many different ways in which a given simulation can be completed are defined, and we know which specific competencies each of these paths accentuates. One participant may emphasize the customer in their choices, another will prioritize the supervisor, and a third might focus on their own interests. It is precisely on these unique choices that we build the overall picture of the participant's psychological profile.
A second type of task is assessed in the same way as, for example, a task at a live assessment center. These tasks have right and wrong solutions, and the candidate can make mistakes in them - from ignorance, inattention, or misunderstanding of the task, for example. A typical task here is correcting incorrect details on a tax document, such as an invoice with clearly stated (objective) details.
For these tasks, the assessment methodology is much simpler - thanks to the existence of right and wrong procedures, we know exactly how the candidate must finish the simulation to have the maximum level of competency; or we know exactly where and what mistake they made, which lowers their overall score in the competency.
Based on the facts presented so far, we can discover a large amount of information in the course of a single work simulation. We can accurately classify, describe, and evaluate this information. The result of this effort is a comprehensive set of information about the behavior, actions, and specific skills of the candidates - but only in a "raw" state at this point.
Our primary effort is to make Behavera a simple and readable tool for diagnostic practice - making it necessary to interpret these "raw" outputs properly. To this end, we pay particular attention to what is central to employers' decisions, the information on which they base their decisions, which values they are attuned to, and the form in which they are used to reading and interpreting these values.
To make the employers’ job (and decision-making) even easier, we directly interpret some of these "raw" outputs. As such, the employer learns what a given level of competency means, whether it is sufficient for the position, and, last but not least, can compare the data with other participants and with the general population norm.
The raw score is converted to a standardized score and then compared to the population norm. We create the norm ourselves through regular statistical evaluation and detailed analysis of the data, and we update the norms regularly.
What does a high score signify?
A high score in competency indicates that the participant consistently exhibited competency-relevant behavioral patterns across the simulation game.
What does a low score signify?
Lower scores in a competency may indicate partial gaps (in skills or knowledge) or a lack of motivation, and can also hint at the participant's current psychological state (stress, fatigue). Our simulation games are diagnostic tools, and like any other such tool, they are influenced by the current psychological state of the participant.
Are competencies learnable?
All measured competencies are inherently "improvable" over the long term. That said, the lower the performance in a given competency, the more difficult it will be for the participant to improve it, and the more time and effort the improvement will take.
Competencies such as information processing and numeracy can be influenced by experience to a greater extent and, as such, can be learned relatively easily (provided the participant is motivated).
Competencies such as effective communication, self-reflection, or integrity are largely influenced (defined) by the person's work personality and their development requires continuous effort, support from a trainer, supervisor, or coach, good working conditions, sufficient constructive feedback, and, of course, internal motivation.
We decided to adopt the STEN (Standard Ten) scale for interpreting results. The scale has 10 points, follows a Gaussian distribution, and has a mean of 5.5. Thus, an average individual scores 5.5 on the competencies.
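As a sketch, the standard linear transform to STEN is STEN = 2z + 5.5, where z is the standardized score; the norm mean and standard deviation below are illustrative placeholders, not Behavera's actual norm values.

```python
# Converting a raw score to a STEN value via the standard linear
# transform STEN = 2*z + 5.5, clipped to the 1..10 range.
# The norm mean/SD here are illustrative placeholders.
def to_sten(raw, norm_mean, norm_sd):
    z = (raw - norm_mean) / norm_sd
    return max(1.0, min(10.0, 2 * z + 5.5))

sten = to_sten(50, norm_mean=50, norm_sd=10)  # an exactly average raw score
```

An exactly average raw score maps to 5.5, matching the mean of the scale.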
Behavera is a continuously developed and improved method. We constantly check and adjust it based on various statistical analyses and procedures.
First and foremost, we continuously check the flow of completing each task and analyze typical flow patterns, difficulty, completion times, and error rates. These statistical procedures are intended to reveal the typical ways in which an individual traverses the work simulation and how challenging it is for them. Based on this knowledge, we can then modify our simulation games so that their difficulty is a good match to the employer's requirements.
Other statistical techniques include:
Factor analysis: reveals how individual tasks and simulation games saturate the measured competencies. The result of this statistical procedure is a statistical model that shows how all the individual measures function together.
Cluster analysis: reveals specific groupings of candidates. Cluster analysis is a complex statistical method that can use large amounts of data to reveal typical patterns of candidate behavior and categorize candidates into groups based on similarity. Based on its outputs, we can tailor work simulations to specific groups of job seekers or, for example, detect whether a group tends to "cheat" the simulations.
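As an illustration of the kind of clustering involved, here is a minimal k-means sketch grouping candidates by two hypothetical features (completion time and error count); the actual clustering pipeline and features are not specified in this document.

```python
import random

# Minimal k-means sketch: group candidates by two hypothetical features,
# e.g. (completion_time, error_count). Illustrative only.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
            groups[nearest].append(p)
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two obvious candidate groups: fast/accurate vs slow/error-prone.
candidates = [(10, 1), (12, 2), (11, 1), (30, 8), (28, 9), (31, 7)]
centers, groups = kmeans(candidates, k=2)
```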
Analysis of variance (ANOVA): allows us to assess differences between participants, or between particular tasks or entire sequences. The goal of these analyses is to see whether one task is generally more difficult than another and to identify their critical points.
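A one-way ANOVA of this kind can be sketched in a few lines; the per-participant error counts below are invented for illustration.

```python
from statistics import mean

# Minimal one-way ANOVA sketch: is task A generally harder than task B,
# judged by per-participant error counts? Data is illustrative.
def one_way_anova_f(*groups):
    grand = mean(x for g in groups for x in g)
    n = sum(len(g) for g in groups)
    k = len(groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

task_a_errors = [5, 6, 7, 5, 6]  # hypothetical error counts on task A
task_b_errors = [2, 1, 2, 3, 2]  # hypothetical error counts on task B
f_stat = one_way_anova_f(task_a_errors, task_b_errors)
```

A large F statistic means the between-task difference dominates the within-task noise, i.e. one task really is harder than the other.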
Descriptive statistics: allow us to identify "average" and "median" candidates and set STEN or percentile norms.
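Setting a percentile norm from a reference sample can be sketched as follows; the sample scores are illustrative, not real norm data.

```python
# Minimal sketch of a percentile-rank norm: the share of the reference
# sample scoring below a given score (ties counted half).
# The sample is illustrative, not an actual norm sample.
def percentile_rank(score, sample):
    below = sum(1 for s in sample if s < score)
    equal = sum(1 for s in sample if s == score)
    return 100 * (below + 0.5 * equal) / len(sample)

norm_sample = [40, 45, 50, 50, 55, 60, 65, 70, 75, 80]
pr = percentile_rank(55, norm_sample)
```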
Before launching the application on the market, we need to be sure that the theoretical model we have developed will also work in practice. Testing was performed both after each task and within the simulation game as a whole. Our goal was to test each point of the model and its functioning within its complexity. To this end, we took several steps:
Simply put - to be sure of our competency model and how it works in practice, we test the Behavera diagnostic on a specific group of candidates who will also undergo similarly focused psychodiagnostic tests. This comparison (testing A and testing B on one group of people) gives us a good idea of how our competencies stand up against the measurement of basic psychological competencies by other recognized methods. A basic prerequisite for the success of this testing method is the choice of a good convergent method that is both appropriate in its focus and of psychometric quality. For this purpose, we chose the German psychodiagnostic BIP.
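The convergent-validity comparison (testing A and testing B on one group) boils down to correlating the two sets of scores. Pearson's r is one standard choice of coefficient; the paired scores below are invented for illustration.

```python
from math import sqrt

# Sketch of a convergent-validity check: correlate simulation scores
# (method A) with scores from a reference test such as BIP (method B).
# The paired scores below are hypothetical.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sim_scores = [4, 6, 5, 8, 7]  # hypothetical simulation scores
ref_scores = [3, 6, 5, 9, 6]  # hypothetical reference-test scores
r = pearson_r(sim_scores, ref_scores)
```

A high positive r would indicate that the two methods rank candidates similarly, which is the evidence this testing step looks for.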
This testing was conducted on a relevant working population. Our goal was to find out what group of participants completes a given task correctly, what group makes errors in the task, and what those errors are. Based on this measurement, we were able to modify the tasks (so that their difficulty matched the intent) and secondly, we were also able to appropriately match them to specific job roles.
The previous two points should, on their own, be enough for us to assess and adjust our theoretical model sufficiently. To be absolutely certain of the results, we conducted a final wave of testing, in which we tested the method as a whole on real job applicants and employees.
The outcomes of the measurements are compared with the findings from directly observing participants (typically during the assessment/development center), based on well-defined parameters. This is a very accurate way of checking the validity of the results of a given method due to the quality of the input for direct observation.
To this end, we have consulted with experts thoroughly to ensure we are clear about what they want to know about participants, what level of detail they require, and what information they wish us to interpret.
The final format of the output is geared towards HR professionals (recruitment, L&D), but also towards managers responsible for operational and strategic HR management.
We aim to answer the following questions:
Which participant scored best in the given competencies?
What overall level did the participants achieve?
Which competencies were demonstrated the most, and which the least?
How do the individuals compare to each other?
How do the individuals compare to the general population norm or an internal norm?
Verbal interpretation of the competencies
Assessment of overall performance - fit for position
Implications for future work and development
Recommendation of a suitable position within the talent program