*** This page is intended to answer both general (first section) and technical (second section) questions about Predicted Risk Profiles on the CyberGRX platform. ***
Q: We heard from a customer that you have assessment results for us, but we have never completed a questionnaire. How did they get these results?
A: Our Predictive Risk Profile is produced by applying advanced machine learning to data from varied sources. Most of the data featured in the profile comes from our third-party risk Exchange, which was built from more than 11,000 self-assessments completed by companies spanning various industries and geographies. Our proprietary algorithm analyzes this data, along with firmographic information and outside-in scanning data from our partners, to predict how a typical company would answer an assessment with up to 91% accuracy.
Q: How are the results within the Predictive Risk Profile calculated?
A: We have built a proprietary algorithm based on 5+ years of data which allows us to apply machine learning to our substantial dataset. Factors that are considered include industry, revenue, company size, and external scanning input from threat intelligence and perimeter scanning technology partners.
Q: Who is able to see predictive results about my company?
A: There are several ways that a Predictive Risk Profile can be viewed. As a third party, your Predictive Risk Profile can be seen by any customer on the Exchange who has added you to their third-party portfolio. If you choose to share your self-attested results, they can view those results as well.
As a customer, your own company’s Predictive Risk Profile can be seen by the third parties you add to your portfolio. In addition, any users from your company registered on the CyberGRX Exchange can also see your company’s Predictive Risk Profile.
Q: Can I see the results before they are made publicly available?
A: Our Predictive Risk Profile is available once your company is added to a CyberGRX member’s third-party portfolio. If you choose to complete an assessment on our Exchange and authorize sharing with your customers, both the predictive and self-attested results will be available, with the default view being your self-attested results.
Q: How can I change my Predictive Risk Profile?
A: You can help us improve the confidence of the results by completing an assessment on our Exchange. In addition, our model is continually tuned based on industry, revenue, company size, and external scanning input from our technology partners. Data is automatically refreshed as new information becomes available.
Q: If I complete your assessment, will I be able to compare those results with those within my Predictive Risk Profile?
A: Yes. Members who have completed an assessment on the Exchange have access to their own security profile which includes both predictive and self-attested data.
Q: Will I receive an update if/when my Predictive Risk Profile changes?
A: At this time, we have no plans to issue notifications or updates on Predictive Risk Profile changes given the dynamic nature of the data. Members with access to the platform can view their results at any time.
Q: We are currently dealing with major changes in our organization (M&A, security program updates etc.) resulting in a lot of positive changes in our security program. Will our Predictive Risk Profile automatically update?
A: It depends on the situation. In an instance of M&A activities, the results will be based on legal entity information. If there is a change in that information, it will be taken into account in your Predictive Risk Profile. For internal security improvements, the best way to ensure the most accurate information is available is to complete an assessment or work with an Assessment Coordinator to update your previously submitted assessment.
Q: Some of the controls in the Predictive Risk Profile don’t apply to us. Why are they included and do they affect our scores?
A: The controls represented in our results align with those in our assessment questionnaire. These controls aim to capture your risk posture at an enterprise level. If they are not relevant to your environment, they can be captured as such and will be taken into account in the results. We do evaluate the impact of your relationship with a third party or customer which could influence the scoring of your attested results.
Q: How is a Predictive Risk Profile different from security ratings?
A: While security ratings are effective for making quick, high-level decisions, they don’t incorporate the same depth of data included in the CyberGRX Predictive Risk Profile. CyberGRX uses both outside-in and inside-out data, sourced from more than 11,000 self-assessments on our Exchange. We provide a collaborative process for third parties to complete an assessment and validate their controls in order to own their cyber risk reputation.
Q: What does the reported overall accuracy measure?
A: The overall accuracy measures how well we can predict the answers of historical self-attested assessments on the Exchange.
Q: Why are the predictive results different from the attested (or validated) assessment?
A: Predictive results emerge from a limited number of outside information sources which may not describe the entire story of a third party member’s internal security program. The intent of the predictive results is to show what a self-attested assessment is most likely to reveal in the absence of the data itself in order to provide actionable information.
Q: How should the confidence shown for the different results be interpreted?
A: Two ways:
- Coverage: The confidence shown in the five coverage groups should be interpreted as the confidence level that the given score will not deviate more than 10% from the predicted score. This threshold is subject to change.
- Gaps Confidence: The confidence shown with each gap output is the probability, according to assessment simulations, that the question will be answered as a “No” by the third party.
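The coverage interpretation above can be illustrated with a small Monte Carlo sketch. Everything here is a hypothetical illustration, not the production model: the function name, the normally distributed simulated scores, and the 10% threshold are all assumptions. Confidence is read as the share of simulated scores landing within 10% of the predicted score.

```python
import random

def coverage_confidence(simulated_scores, predicted_score, threshold=0.10):
    # Hypothetical: fraction of simulated coverage-group scores that fall
    # within +/- threshold (10% by default) of the predicted score.
    within = sum(1 for s in simulated_scores
                 if abs(s - predicted_score) <= threshold * predicted_score)
    return within / len(simulated_scores)

# Illustrative simulated scores clustered around a predicted score of 80.
random.seed(0)
sims = [80 + random.gauss(0, 5) for _ in range(1000)]
conf = coverage_confidence(sims, predicted_score=80)
```

A tighter simulated distribution yields a higher coverage confidence; a wider one lowers it, which matches the variance and margin-of-error factors described below.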
Q: What influences confidence levels on the scores the most?
A: The confidence calculation is a mixture of several factors:
- Evidence, or answers from the third party. With additional answers, questions become fixed, which influences the probabilities of related questions, leading to less variance in simulating assessments for a third party.
- Variance from simulated assessments. If the simulated assessment outcomes result in a distribution with high variance, there is a wider margin of error that has an influence on the confidence.
- Margin of error. A high confidence level with a wider margin of error is undesirable due to scenarios where the results are, for example, 99% confident, but the coverage for that confidence almost spans the 0-100 range.
Q: Why would the highest probabilities of answering “No” to a question not be the output for gaps?
A: The confidence on each gap is the chance of predicting that the gap will be a “No”. The “Top 5” list is based on the ranked list of gaps produced by our MITRE gap analysis for each simulated assessment. We take the top of that final ranked list and pair it with the probability of a “No” answer from the simulated assessments. Simply predicting the controls with the highest probability of being answered “No” would surface gaps that may not be relevant in MITRE use cases.
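A rough sketch of this two-step idea follows. The control IDs, the position-based weighting, and the aggregation scheme are all illustrative assumptions, not the actual MITRE gap analysis: gaps are ranked per simulation, the rankings are aggregated, and the top controls are then paired with their P(“No”).

```python
from collections import Counter

def top_gaps(simulated_rankings, p_no, k=5):
    # Hypothetical aggregation: weight each control by its position in every
    # simulation's ranked gap list (earlier position = more weight), keep the
    # k best overall, and pair each with its probability of a "No" answer.
    tally = Counter()
    for ranking in simulated_rankings:
        for position, control in enumerate(ranking):
            tally[control] += len(ranking) - position
    return [(control, p_no.get(control, 0.0))
            for control, _ in tally.most_common(k)]

# Two simulated assessments, each producing a ranked gap list.
sims = [["AC-1", "IR-2", "CM-3"], ["AC-1", "CM-3", "IR-2"]]
p_no = {"AC-1": 0.7, "IR-2": 0.4, "CM-3": 0.6}
result = top_gaps(sims, p_no, k=2)
```

Note that the output ranking comes from the simulated gap lists, not from sorting by P(“No”) directly, which is exactly why a control with a high “No” probability may not appear in the Top 5.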
Q: How can I increase the confidence level for predictive results on a third party?
A: The best way to increase the confidence level of the results is to invite a third party to become an active member of the CyberGRX Exchange and complete an assessment.
Q: Why do multiple third parties under my portfolio have different security ratings or firmographics but similar predictive results?
A: Although third-party members in a portfolio may appear different, preprocessing steps applied before prediction (an added layer of categorization over the firmographics and ratings) can map them to similar entities in the model’s input space, which results in similar predictive results for different companies.
Q: Why are there gaps shown in the predicted results with relatively low confidence?
A: A low confidence indicates a low probability that this area is a gap; however, in the event that it is a gap, we classify it as an impact gap.
Q: What is the difference between inherent risk, residual risk, and predicted residual risk?
A: Inherent risk utilizes impact answers and all answers to questions that are answered “No”. Residual risk leverages the impact answers along with attested assessment answers to produce scores. Predicted residual risk applies the residual-risk calculation to hundreds of assessment simulations for a third-party member, resulting in a distribution of residual risk that is analyzed and given as output.
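The relationship between residual risk and predicted residual risk can be sketched as follows. The stand-in formula (impact weight times the share of “No” answers) and the summary statistics are illustrative assumptions, not the actual scoring model; the point is that the same calculation runs over many simulated assessments to yield a distribution.

```python
import statistics

def residual_risk(answers, impact=1.0):
    # Stand-in formula (not the real scoring model): impact weight times
    # the share of "No" answers in one real or simulated assessment.
    return impact * sum(a == "No" for a in answers) / len(answers)

def predicted_residual_risk(simulated_assessments):
    # Apply the residual-risk calculation to every simulated assessment,
    # then summarize the resulting distribution.
    scores = [residual_risk(a) for a in simulated_assessments]
    return {"mean": statistics.mean(scores),
            "stdev": statistics.pstdev(scores)}

# Three tiny simulated assessments for illustration.
sims = [["Yes", "No", "Yes"], ["No", "No", "Yes"], ["Yes", "Yes", "Yes"]]
summary = predicted_residual_risk(sims)
```

In this sketch the attested case is a single `residual_risk` call, while the predicted case summarizes hundreds of such calls over simulations.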