Find a Reviewer
View the results for an anonymized reviewer by selecting a survey submission time:
Don't see your results? Our analysis server is not currently active. Email Vinesh Kannan (v@hawk.iit.edu) and we will update the website with the latest anonymous results.
Resume Reviewer Models
Decision Tree Classification
How to Read
- The tree does not represent how the reviewer made choices, only the attributes that most cleanly separate their accepted resumes from their rejected ones.
- Read the tree nodes from top to bottom for rules on how to classify resumes as accepted or rejected.
- Split nodes show the split criteria in the first line. Go left if the attribute is True and go right if the attribute is False.
- The gini value measures impurity at each node. The maximum Gini impurity for two classes is 0.5, which occurs when a node holds the same number of accepted and rejected resumes. Smaller values indicate purer nodes and more decisive splits.
- The value property of each node shows the number of resumes at that node in each class: [accepted, rejected].
- The class label is set to accepted or rejected based on which of the two values is larger.
Figure 1. Decision tree for labeling a resume as accepted or rejected by the reviewer. Maximum number of splits = 3.
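The node statistics described above can be computed directly. The sketch below is illustrative only (the function names and counts are hypothetical, not taken from the project's code): it shows how the gini value follows from a node's [accepted, rejected] counts and how the class label is chosen.

```python
def gini(accepted: int, rejected: int) -> float:
    """Gini impurity for a binary node: 1 - p_acc^2 - p_rej^2."""
    total = accepted + rejected
    if total == 0:
        return 0.0
    p_acc = accepted / total
    p_rej = rejected / total
    return 1.0 - p_acc ** 2 - p_rej ** 2

def class_label(accepted: int, rejected: int) -> str:
    """Class label: whichever of the two counts is larger."""
    return "accepted" if accepted >= rejected else "rejected"

# A balanced node, value = [10, 10], has the maximum impurity, 0.5.
print(gini(10, 10))
# A pure node, value = [10, 0], has the minimum impurity, 0.0.
print(gini(10, 0))
# A node with value = [7, 3] is labeled "accepted".
print(class_label(7, 3))
```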
Association Rule Mining
How to Read
- The table shows resume attributes that frequently occurred in resumes the reviewer accepted or rejected.
- The metric supp(a) stands for support, measuring the proportion of all resumes that contain the attribute (or pair of attributes).
- The metric conf(a -> b) stands for confidence, measuring the proportion of resumes with attribute a that also have attribute b.
- Compare the confidence metrics for a -> b and b -> a. If they are different, the association is asymmetric.
- In this case, conf(a -> b) represents the proportion of resumes with attribute a that were accepted or rejected. The measure conf(b -> a) represents the proportion of accepted or rejected resumes that had attribute a.
- When conf(a -> b) is greater than conf(b -> a) for accepted, the attribute may be a preference or "nice-to-have" for the reviewer.
- The metric phi(a, b) stands for Phi correlation between the two attributes.
- The metric is(a, b) stands for IS score, a correlation measure adjusted for asymmetric binary (true/false) attributes.
- Phi and IS measure correlation, not causation. For Phi, values close to 0.0 indicate statistical independence, 1.0 indicates perfect positive correlation, and -1.0 indicates perfect negative correlation. IS ranges from 0.0 to 1.0, with larger values indicating a stronger association.
Table 1. Association rule evaluation metrics for attributes frequently occurring with acceptance or rejection.
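The metrics in Table 1 can all be derived from attribute supports. The sketch below, using invented data (a hypothetical attribute a paired with the accepted label b, not real survey results), shows the calculations and the asymmetric-confidence pattern described above.

```python
from math import sqrt

def rule_metrics(a, b):
    """Association metrics for two binary attributes, given as
    equal-length lists of booleans (one entry per resume)."""
    n = len(a)
    supp_a = sum(a) / n                                # supp(a)
    supp_b = sum(b) / n                                # supp(b)
    supp_ab = sum(x and y for x, y in zip(a, b)) / n   # supp(a, b)
    return {
        "supp(a)": supp_a,
        "conf(a->b)": supp_ab / supp_a,                # P(b | a)
        "conf(b->a)": supp_ab / supp_b,                # P(a | b)
        # Phi coefficient: 0.0 = independence, +/-1.0 = perfect correlation.
        "phi(a,b)": (supp_ab - supp_a * supp_b) / sqrt(
            supp_a * (1 - supp_a) * supp_b * (1 - supp_b)),
        # IS score (cosine measure): ranges from 0.0 to 1.0.
        "is(a,b)": supp_ab / sqrt(supp_a * supp_b),
    }

# Hypothetical data: a = "resume lists a side project", b = "accepted".
a = [True, True, True, True, False, False, False, False, False, False]
b = [True, True, True, False, True, True, False, False, False, False]
m = rule_metrics(a, b)
# conf(a->b) = 0.75 exceeds conf(b->a) = 0.6: the attribute looks like
# a "nice-to-have" rather than a requirement for this reviewer.
print({k: round(v, 3) for k, v in m.items()})
```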