The fourth step of a typical Oracle | Primavera Risk (Pertmaster) Monte Carlo analysis is to load uncertainty or productivity ranges onto the network activities. Correlation is an important part of obtaining a statistically sound Monte Carlo analysis. Without understanding and implementing correlation properly, a risk analyst may run into duration cancellation, unintended critical path switching, and merge bias. This training module attempts to explain these statistical concepts in plain terms, without assuming that users have taken a statistics class recently.
The Central Limit Theorem states that the distribution of an average tends to be Normal, even when the distribution from which the average is computed is decidedly non-Normal. A great pictorial demonstration of the Central Limit Theorem is located at http://www.statisticalengineering.com/central_limit_theorem.htm.
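The effect is easy to see in a few lines of Python. The sketch below uses an assumed, skewed 8 / 10 / 20-day triangular duration (not from any real schedule): individual samples are wide and skewed, but the average of 30 independent activities clusters tightly around the mean.

```python
import random
import statistics

random.seed(42)

def activity_duration():
    # Skewed triangular duration: optimistic 8, most likely 10, pessimistic 20 days
    # (random.triangular takes low, high, mode)
    return random.triangular(8, 20, 10)

# One activity at a time: a wide, right-skewed spread.
singles = [activity_duration() for _ in range(10000)]

# The average of 30 independent activities: a much tighter, near-Normal spread.
averages = [statistics.mean(activity_duration() for _ in range(30))
            for _ in range(10000)]

print("single-activity stdev:", round(statistics.stdev(singles), 2))
print("30-activity average stdev:", round(statistics.stdev(averages), 2))
```

The standard deviation of the averages shrinks by roughly the square root of the number of activities, which is exactly the narrowing a risk analyst sees in the total-duration histogram.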
In a Monte Carlo analysis, the practical result of the Central Limit Theorem is that the outputs are pushed toward the mean. A risk analyst might notice that the confidence-level histogram is very tall and skinny due to this cancellation effect. Correlation should not be added simply to engineer an answer; however, there are good reasons that justify it.
For example, suppose a contractor has worked between a .95 and 1.2 productivity factor on our last 10 jobs. If their productivity factor is 1.2 on a certain job, then most activities on that job held a value close to 1.2. It would not be expected that many of the activities were completed at a .95 productivity factor on that same job. Without correlation, each iteration will have an equal number of hits on both sides of the triangle, causing a cancellation effect: some activities will hit toward a .95 productivity factor while a similar number will hit toward 1.2. With correlation, iterations will occur where the 1.2 productivity factor is applied to the contractor's entire grouping of activities. This gives a much more accurate answer if the correlation grouping can be justified.
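The cancellation effect above can be sketched in Python. The twenty 10-day activities and the 1.0 most-likely factor below are illustrative assumptions; the .95 / 1.2 range comes from the contractor example. Fully uncorrelated draws let high and low factors cancel, while one shared factor per iteration (100% correlation, the extreme case) preserves the full spread.

```python
import random
import statistics

random.seed(1)
BASE_DURATIONS = [10] * 20  # hypothetical: twenty 10-day activities for one contractor

def productivity_factor():
    # .95 / 1.0 / 1.2 triangle (the 1.0 most-likely value is an assumption)
    return random.triangular(0.95, 1.20, 1.00)

def total_duration(correlated):
    if correlated:
        f = productivity_factor()  # one factor drives the whole group this iteration
        return sum(d * f for d in BASE_DURATIONS)
    # independent draws: high and low factors cancel each other out
    return sum(d * productivity_factor() for d in BASE_DURATIONS)

independent = [total_duration(False) for _ in range(5000)]
correlated = [total_duration(True) for _ in range(5000)]

print("uncorrelated stdev:", round(statistics.stdev(independent), 1))
print("correlated stdev:  ", round(statistics.stdev(correlated), 1))
```

Both runs produce the same mean total duration; only the spread changes, which is why uncorrelated models understate the tails of the distribution.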
Merge bias relates to the schedule risk at merge points where multiple paths run in parallel. Parallel paths can be compared to coin flips. There is a 50% chance of flipping heads on one coin, and a 50% chance of flipping heads on a second coin. If both are flipped simultaneously, there is only a 25% chance that both coins land heads on the same flip. Parallel paths work the same way: if each path has a 50% chance of finishing on time, the number of parallel paths dictates the chance of success, much like the number of coins flipped dictates the chance of winning in a casino. The chance of flipping 4 heads on the same iteration is only 6.25%.
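The coin-flip arithmetic generalizes to any number of parallel paths:

```python
# Probability that n independent 50/50 paths all finish on time
# (the same as n coins all landing heads): 0.5 raised to the nth power.
for n in (1, 2, 4, 6):
    print(f"{n} path(s): {0.5 ** n:.2%}")
# 1 path 50%, 2 paths 25%, 4 paths 6.25%, 6 paths 1.5625%
```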
Planners and schedulers often have parallel paths that represent when the work is being done but do not capture the true logic of the events. We may be doing four concrete pours at basically the same time based on site availability. The pours were not linked finish-start by the scheduler because the true sequence of events was not known; the activities were placed in parallel to show that they will all be worked during the same time frame. The network will now see 4 parallel paths, and there may be merge bias issues where true parallel paths do not exist.
A symmetrical distribution has an equal upside and downside. An 80%, 100%, 120% risk spread is symmetrical, as it has 20% on each side of the triangle. For a series of activities, a symmetrical distribution gives a near 50% probability of success on each path, so the probability of success across 6 parallel paths would be roughly 1.6% (0.5 to the 6th power). A symmetrical distribution can therefore give a very low chance of success and cause a massive push-out. This may accurately reflect how a project works.
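A short simulation makes the merge-point collapse concrete. It assumes independent 100-day paths carrying the symmetric 80% / 100% / 120% spread above; the merge point finishes only when the slowest path does.

```python
import random

random.seed(7)

def path_duration():
    # Symmetric 80% / 100% / 120% triangle on an assumed 100-day path
    return random.triangular(80, 120, 100)

results = {}
for n_paths in (1, 2, 4, 6):
    # The merge point finishes when the slowest of the parallel paths finishes.
    finishes = [max(path_duration() for _ in range(n_paths))
                for _ in range(10000)]
    results[n_paths] = sum(f <= 100 for f in finishes) / len(finishes)
    print(f"{n_paths} parallel path(s): "
          f"{results[n_paths]:.1%} chance of finishing by day 100")
```

The simulated probabilities track the coin-flip arithmetic (about 50%, 25%, 6%, and 2%), even though no path individually looks risky.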
If we are doing three tasks at one time, then the task with the longest duration will drive the critical path. That being said, it is important that false parallel paths are not allowed to create extra slip due to representative logic. Proper correlation will often mitigate this impact. In some cases the parallel paths are valid: if we are receiving parts from three vendors and the deliveries are independent of each other, there is no correlation on the procurement activities.
Often project managers, risk analysts, and statisticians will argue over the correct number to use for correlation. A risk analysis is usually trying to paint a good picture of reality, and fighting over percentage points is generally not a useful endeavor. If you do not have statistical evidence based on historical data, then how would you have detailed knowledge of 82% versus 93% correlation? Try to get the number somewhere in the ballpark and make it justifiable. Included below is a very rough rule of thumb.
A project manager can examine the scatter plots for these ranges by manually correlating two activities with fairly wide ranges via the menu path Risk | Correlation. Notice that the scatter plot at 65% correlation does not look much different from the one at 75%. Statistically, the results will be fairly similar.
A justification of correlation might revolve around something like welders. The welders typically work at a .9 to 1.3 productivity factor. If they are working at a 1.3 factor, then that usually extends to all of their activities, so we have used a very high correlation of 90%. Using very low correlation would not make sense, as the welders would be completing some activities with amazing productivity while doing very poorly on others in the same iteration. A very low correlation might raise a flag that the model could be statistically improved.
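One common way simulation tools induce a target correlation between two activities is through a shared latent driver. The sketch below is a minimal illustration of that idea with an assumed "crew productivity" factor, not Pertmaster's internal algorithm; the latent values could then be mapped onto the .9 to 1.3 productivity range by percentile.

```python
import random
import statistics

random.seed(3)
RHO = 0.90         # target correlation for the welder group, per the 90% above
LOAD = RHO ** 0.5  # loading on the shared driver so that corr(a, b) equals RHO

def correlated_pair():
    crew = random.gauss(0, 1)  # shared "crew productivity" driver (assumed)
    a = LOAD * crew + (1 - RHO) ** 0.5 * random.gauss(0, 1)
    b = LOAD * crew + (1 - RHO) ** 0.5 * random.gauss(0, 1)
    return a, b

xs, ys = zip(*(correlated_pair() for _ in range(20000)))

def pearson(u, v):
    # Sample Pearson correlation coefficient
    mu, mv = statistics.mean(u), statistics.mean(v)
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    return cov / (sum((x - mu) ** 2 for x in u) ** 0.5
                  * sum((y - mv) ** 2 for y in v) ** 0.5)

print("sample correlation:", round(pearson(xs, ys), 2))
```

Because each variable splits its variance between the shared driver (RHO) and activity-specific noise (1 - RHO), the sampled pairs come out with a correlation very close to the 90% target: the crew's good or bad iteration moves every welder activity together.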
Applying Groups of Correlation in the Templated Quick Risk
The templated quick risk allows correlation by group. Users simply type in a correlation percentage, and the correlation is created for all tasks in the selected group. This concept is almost like looking at risk from a summary-schedule point of view without destroying the detailed logic that planners and schedulers have painstakingly compiled. This approach is therefore a hybrid between a summary schedule risk analysis and manual risk loading on a detailed schedule. The inputs can be gathered quickly in a collaborative fashion and applied to the detailed schedule logic. This encourages schedule improvement and avoids working with two schedules in parallel.
Copyright © 2021 PRC Software. All rights reserved