Management of Change (MOC) – Process Optimization

The Many Approaches to Performance Measurement

Business processes have a purpose—in the case of MOC, that purpose is to ensure that a change is performed safely and effectively.
Beyond the general notion of “purpose”, a business process also must satisfy a number of requirements. Measuring the performance of business processes is a fundamental part of ensuring that a business process is functioning effectively and efficiently.

There are many different approaches to performance measurement, each with its own methodology, each with its own benefits, each with its own proponents. Currently popular business process measurement approaches include:

  • Activity-Based Costing (38)
  • Balanced Scorecard (47)
  • Key Performance Indicators, “KPIs”, (5)
  • Lean Six Sigma (56)
  • Six Sigma (297)
  • Business Process Modeling (16)

Much has been written about these various approaches to business process measurement. In fact, a recent search of www.barnesandnoble.com revealed there were 38 books with “Activity-Based Costing” in the title, 47 books with “Balanced Scorecard” in the title, and so on, for a total of 443 books. Even conducting a review of all of this material is daunting.

Terminology

Each author is attempting to explain the origins and the usage of the concepts contained in the book’s title. The authors invariably explain concepts in quantitative terms, and thus require the appropriate vocabulary. This can get confusing because words with very similar meanings (at least to a layperson) are used interchangeably, or differently, by different authors.

Here are some terms that will be useful to us:

Measure, Measurement: “the extent, quantity, amount, or degree of something, as determined by measurement or calculation.1” We’ll assume that measurements are always objective. Measurements can include quantities (e.g. 52.3 days), and counts (e.g. 27 MOCs). Measurements can even include results of surveys (60% of respondents chose “strongly agree” to a proposition).

Indicator: An indicator is directly calculated from measurements and/or additional objective information. E.g. “days late”, “cost to process an MOC”.

Metric: A generic term applied to the collection of all measurements and indicators.

These can’t be regarded as dictionary definitions, nor am I making any claims that these definitions are “correct”, since their meaning is derived from how I and other authors use the terms.

A Pragmatic Approach

Although lacking the media buzz that the list of business process measurement approaches garners, there’s always (plain old) Project Management, i.e. the desire to accomplish a project on time, on spec., and on budget. We’ll take this more pragmatic approach to address the specific needs and characteristics of MOC.

Metrics for MOC Performance Measurement

Traditional project management aims to complete each project “on time, on budget and on spec.” Since MOC is a project-oriented process, these appear to be reasonable objectives for the MOC process as well. So, we can propose the following metrics for MOC:

  • Time: this is the duration of various parts of an MOC, or even the entire MOC process. Time is measured in units of time, often days.
  • Cost: this is the cost of conducting all or part of the MOC process. This does not include the cost of materials (e.g. pipe, weld rod, instrumentation) for implementing the change. Cost is measured in units of currency, often dollars.
  • Goals: this is the degree to which the goals and objectives of the MOC are attained. Goals are measured by the number of actual goals/objectives attained, as a percentage of planned goals/objectives.

There are additional metrics relevant to many business processes, including MOC. Often these are overlooked since the measurements are not necessarily easy to make, and the units of measure are far from obvious. However, they are important and worthy of attention:

  • Satisfaction: This is a measure of how much the participants like the process. The logic is that the greater the user satisfaction with the process, the more the process will be used and/or used correctly.
  • Flexibility: Flexibility is the ability to adapt to new or unforeseen circumstances. The more difficult it is to accommodate new circumstances, the less flexible the process is.
  • Risk: This is a measure of the probability and consequences of negative outcomes, whether they be related to safety, environmental, financial, regulatory or other issues.

The following sections will elaborate on each of these metrics.

Time

Various indicators can be constructed from measurements of time, as indicated in Table 1.

If your site typically uses time-based metrics (average duration, number of temporary MOCs that are overdue, etc.), then you certainly are not alone. Time‑based metrics are what most companies use; in many cases, time-based metrics are the only ones used to monitor MOCs.

MOC practitioners, which include MOC Coordinators, PSM coordinators, MOC originators and MOC owners, would typically be interested in the values of T1 – T6n since these can be used to manage the progress of specific MOCs.

Process Safety managers and EH&S managers would typically be interested in summaries of the time indicators. Summaries would provide some indication of where bottlenecks are in the process, and these managers typically control the resources needed to remove those bottlenecks.

EH&S directors, plant managers and executives would typically be interested in summaries and trends of T1, to satisfy themselves that the MOC process is “working” or “on track”.

Indicator | Name | Calculation Method | Units
T1 | MOC duration | current date – initiation date | days
T2 | Close-out timeliness | actual close-out date – target close-out date | days
T3 | Request cycle duration | approval date – initiation date | days
T4 | Approval timeliness | actual approval date – target approval date | days
T5 | Startup timeliness | actual startup date – target startup date | days
T6a | Duration of initiation state | end of initiation state – start of initiation state | days
T6b | Duration of scoping state | end of scoping state – start of scoping state | days
T6n | Duration of close-out state | end of close-out state – start of close-out state | days
T7 | Excessive-duration MOCs | Count of MOCs where {MOC duration > a threshold value (e.g., 2 years)} | #
Table 1. Time indicators.
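As a minimal sketch of how some of these indicators might be computed automatically, consider the following Python fragment. The record field names (initiation_date, approval_date, and so on) are illustrative assumptions, not the schema of any particular MOC system.

```python
from datetime import date

# Hypothetical MOC record; field names are invented for illustration.
moc = {
    "initiation_date": date(2020, 1, 6),
    "approval_date": date(2020, 1, 20),
    "target_close_out": date(2020, 6, 1),
    "actual_close_out": date(2020, 6, 15),
}

today = date(2020, 3, 2)

t1 = (today - moc["initiation_date"]).days                      # T1: MOC duration to date
t2 = (moc["actual_close_out"] - moc["target_close_out"]).days   # T2: close-out timeliness
t3 = (moc["approval_date"] - moc["initiation_date"]).days       # T3: request cycle duration

# T7: count of excessive-duration MOCs in a collection
THRESHOLD_DAYS = 2 * 365  # the 2-year threshold from Table 1

def t7(mocs, as_of):
    return sum(1 for m in mocs
               if (as_of - m["initiation_date"]).days > THRESHOLD_DAYS)
```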

Cost

Most companies do not explicitly track the cost of MOCs. Why do I say “explicitly”? Because costs are actually tracked implicitly. Here’s how we can tell. From a best-practices standpoint, we could argue that the total MOC duration should be as short as possible: for permanent MOCs, a shorter cycle time means that the benefits of the change are realized sooner; for temporary MOCs, a shorter cycle time means that the situation that precipitated the temporary change has been addressed sooner. In most cases, the cycle time of MOCs could be reduced by hiring many more people to “process” the MOCs. Why doesn’t every plant do that, then? Because of the cost.

So, apparently, cost IS a concern in regard to MOCs, but the way costs are managed is to constrain the resources. “We have 3 MOC Coordinators at our site” is another way of saying “However many MOCs can be processed by 3 MOC Coordinators, that’s the number of MOCs we are prepared to undertake in a given year.”

But why 3 MOC Coordinators? Why not 2, or 4? How about getting rid of all the MOC Coordinators and spreading the costs among all the MOC Initiators? Any or all of these possibilities may have a place, but it’s impossible to assess the merits of any of these approaches without a detailed consideration of the costs. That’s why the cost metrics are important.

Various cost indicators are suggested in Table 2. Some of the indicators are expressed in currency units [$]. Others are simply a count of items. These counts of items are not synonymous with costs, but they certainly drive costs since, typically, the more items there are to deal with, the greater the costs.

EH&S directors, plant managers and executives would typically be interested in summaries and trends of the total cost of manpower, C1, to satisfy themselves that the MOC process is “working” or “on track”.

Practitioners would be interested in the details and summaries of the other cost indicators.

Indicator | Name | Calculation Method | Units
C1 | Total manpower cost | Total cost of all manpower required to conduct the MOC process (but not the change itself) | $
C2a | Initiation cost | The sum of (time to complete the form * burdened hourly rate) for each participant | $
C2b | Scoping cost | The sum of (time to complete the scoping tasks * burdened hourly rate) for each participant | $
C2n | Close-out cost | The sum of (time to conduct each close-out task * burdened hourly rate) for each participant | $
C3 | Mechanical integrity cost | The sum of (time to conduct each inspection * burdened hourly rate of each inspector) for each inspection | $
C4 | Training cost | The sum of (time to train each participant * burdened hourly rate of each participant) for each participant | $
C5 | Document update cost | The sum of (time to incorporate markups into the final version of each document * burdened hourly rate of the designer) for each document | $
C11 | Document updates | Number of documents updated. Note that a markup + final incorporation counts as 2 updates | #
C12 | PSM action items | Number of action items coming out of initial scoping | #
C13 | Impact assessment action items | Number of action items resulting from impact assessment | #
C14 | PSSR action items | Number of action items resulting from PSSR | #
C15 | Reminder count | Number of reminders issued to participants for late completion of tasks | #
C16 | Expediting count | Number of times someone expedited a task by intervening and contacting the responsible person | #
Table 2. Cost indicators.
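As a minimal sketch (with invented task hours and burdened rates), the initiation cost C2a from Table 2 might be computed as follows.

```python
# Sketch of C2a (initiation cost): the sum of (task hours * burdened rate)
# per participant. The hours and rates below are invented for illustration.
initiation_tasks = [
    {"participant": "originator", "hours": 1.5, "burdened_rate": 95.0},
    {"participant": "moc_coordinator", "hours": 0.5, "burdened_rate": 110.0},
]

c2a = sum(t["hours"] * t["burdened_rate"] for t in initiation_tasks)
print(f"C2a, initiation cost: ${c2a:,.2f}")  # $197.50
```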

Goals

A given change may have multiple objectives, say, “while we’re installing the new control valve, let’s add a sample point upstream from the valve.” When the work is actually executed, some of these objectives may be dropped: “we couldn’t get the parts in time,” “we didn’t want to delay start-up,” “we didn’t get the permit for the xyz,” etc.

All of the quotes above reflect good project management, based on sensible trade-offs, so there’s no intent to criticize the logic of reducing the scope during a change. But without a metric to monitor what was achieved, it’s tough to know what the time and cost metrics are telling us. In the extreme case, we suffer through x days while the change is being made, it costs $y, and when it’s all over, none of the stated objectives have been met. Clearly, tracking goal achievement is a necessary part of the metric set.

Indicator | Name | Calculation Method | Units
G1 | Percentage of goals met | G3/G2 | %
G2 | Goals of the MOC | Number of goals described in the MOC scope | #
G3 | Goals achieved | Number of goals actually achieved during implementation | #
Table 3. Goal achievement indicators.
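As a hypothetical worked example: if an MOC’s scope described four goals (G2 = 4) and three of them were actually achieved during implementation (G3 = 3), then G1 = 3/4 = 75%.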

Satisfaction

At most sites, user satisfaction with the MOC process is not measured in any systematic way. Satisfaction measurement usually consists of the occasional survey sent out by the PSM department. In many cases the survey results are very negative, with users indicating that they find the MOC process unnecessarily complex, too paperwork-intensive, and a contributor to delays. In a nutshell, the users hate the MOC process.

The tone of this sort of feedback discourages further or more frequent measurement of user satisfaction. The PSM department also feels that it has little room to maneuver, since the objectionable items are ostensibly needed for compliance.

Another reason for not measuring user satisfaction is that the statistics may contain too much variation to be of much use. Factors like time of day, day of the week, recent or pending company events (e.g. layoffs), perhaps even inclement weather, may have an impact on how users are feeling at the moment they are asked to complete a satisfaction survey.

One alternative to relying so heavily on users’ perceptions is to use the following logic:

  • usually, when things work the way they’re supposed to, users are satisfied,
  • usually, when unexpected or unwanted things occur (errors?), users are dissatisfied,
  • unwanted or unexpected events include:
    • MOCs being rejected at any point during the process
    • MOCs being cancelled
    • Replacement-In-Kind situations being incorrectly classified as Management of Change cases
    • numerous action items being generated by impact assessments
    • numerous action items being generated during the PSSR

It’s not hard to imagine that zero unwanted or unexpected events is a much more satisfying situation than many unwanted or unexpected events. But, to get a handle on user satisfaction, it’s necessary to measure/count these events.

Using this logic, the Satisfaction Indicator, S1, has been created, as shown in Table 4. A perfect score would be 100%; total dissatisfaction yields 0% on this indicator. The only logical requirement on the relationship between S1 and S2 through S6 is that S1 be proportional to the other variables, which it is. Whether S1 should be linearly related to a number of equally weighted parameters S2 through S6 is open to debate; the linear, equally weighted approach is suggested purely for simplicity.

Again, the overall satisfaction indicator would be of executive interest, while the individual detailed indicators would be of use to practitioners.

Indicator | Name | Calculation Method | Units
S1 | Satisfaction indicator | S1 = (S2 + S3 + S4 + S5 + S6)/50 | %
S2 | User’s ranking | User’s opinion of the MOC process for this particular MOC | Scale from 0–10
S3 | RIK misclassified | = 10 if this was truly an MOC; = 0 if this was a RIK, but mistakenly processed as an MOC | Scale from 0–10
S4 | Initiation rejection | = 0 if this was rejected during initiation (the logic is that a user would be dissatisfied when her/his suggestion is rejected); = 10 if not | Scale from 0–10
S5 | Approval rejection | = 0 if this was rejected during approvals; = 10 if not | Scale from 0–10
S6 | MOC canceled | = 0 if this MOC was canceled; = 10 if not | Scale from 0–10
Table 4. Satisfaction indicators.
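To make the arithmetic concrete, here is a minimal Python sketch of S1 as defined in Table 4; the input values in the example are invented for illustration.

```python
def satisfaction_indicator(user_ranking, is_true_moc, rejected_at_initiation,
                           rejected_at_approvals, cancelled):
    """S1 per Table 4: each component contributes 0-10, so the sum is
    divided by 50 to yield a 0-100% score."""
    s2 = user_ranking                        # user's 0-10 opinion
    s3 = 10 if is_true_moc else 0            # RIK misclassified
    s4 = 0 if rejected_at_initiation else 10
    s5 = 0 if rejected_at_approvals else 10
    s6 = 0 if cancelled else 10
    return (s2 + s3 + s4 + s5 + s6) / 50 * 100  # percent

# Example: a fairly satisfied user (8/10), a genuine MOC, no rejections,
# not cancelled.
print(satisfaction_indicator(8, True, False, False, False))  # 96.0
```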

Flexibility

Flexibility is the ability to adapt to changing, new or unforeseen circumstances. Flexibility is perhaps a difficult concept to grasp, but its consequences are rather straightforward.

Suppose a new requirement becomes apparent. One currently debated issue is whether an MOC is needed when a control room goes from three operators on-shift to two. In the event that you decide to use the MOC process for this, how is it documented? Do checklists have to be modified? How do you go about conducting the impact assessment (a HAZOP surely won’t be applicable)? Who are the correct approvers? What do you do for a PSSR? And so on.

It’s entirely possible that the current MOC process needs to be modified in order to address this new requirement. Once the necessary modifications are determined, the MOC process ranks high on the flexibility scale if implementing them takes only a few minutes. However, if it takes weeks or months to modify the MOC process to handle this kind of a change, then the MOC process ranks low on the flexibility scale.

Plants that have moved from paper-based MOC processes to electronic MOC systems are well acquainted with the importance of flexibility. In a real, operating plant there are constant changes to:

  • rules: new MOC business rules may be added or existing ones modified. E.g.: if you add a check valve, then you must update the P&IDs; if you move a trailer onto the site, it’s necessary to check compliance with blast radius calculations or drawings.
  • users: employees or contractors are added or removed.
  • roles: people change roles constantly, e.g. the Area 1 Operations Superintendent used to be Joe Bleau, and is now Mary Jones.
  • groups: e.g. this kind of change can be approved by anyone in the MOC Coordinator group

If each change to one of these items requires a great deal of time and/or specialized resources, then the system is inflexible. The result will be that important changes are not reflected in the system, and the MOC system will increasingly diverge from what is actually taking place in the plant.

Incidentally, electronic MOC systems are neither inherently better nor inherently worse on the flexibility scale than paper-based MOC systems—it depends on the specific features of the electronic system that permit or hinder the updating of business rules.

A flexibility indicator, F1, is proposed in Table 5. A higher value of F1 indicates that a greater amount of time is required to modify the MOC process, which implies a lack of flexibility.

One problem with F1 is the potentially indeterminate value if the denominator is zero. In that case, F1 is assigned a value of 0, by definition.

Indicator | Name | Calculation Method | Units
F1 | Flexibility indicator | F1 = (F4 + F5) / (F2 + F3), if (F2 + F3) > 0; F1 = 0, if (F2 + F3) = 0 | hr
F2 | Process changes | Number of changes to the MOC process, required by the change | #
F3 | Rule changes | Number of rules created, as a result of the change | #
F4 | Process change effort | Time/effort required to change the MOC process in order to accomplish the change | hr
F5 | Rule change effort | Time/effort required to implement new or modified rules | hr
Table 5. Flexibility indicators.
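Because of the zero-denominator rule, F1 lends itself to a small code sketch. The following Python fragment (with invented inputs) implements the definition from Table 5, including the guard.

```python
def flexibility_indicator(process_changes, rule_changes,
                          process_change_hours, rule_change_hours):
    """F1 per Table 5: hours of effort per change to the MOC process.
    By definition, F1 = 0 when no process or rule changes were needed."""
    denominator = process_changes + rule_changes  # F2 + F3
    if denominator == 0:
        return 0.0                                # guard against 0/0
    return (process_change_hours + rule_change_hours) / denominator  # (F4+F5)/(F2+F3)

print(flexibility_indicator(2, 3, 1.5, 1.0))  # 0.5 hr per change
print(flexibility_indicator(0, 0, 0.0, 0.0))  # 0.0, by definition
```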

Risk

In this discussion of risk, we first need to separate the risks inherent in the change from the risks associated with the MOC business process. The inherent risks deal with the physics of the change: e.g. cylindrical roller bearings, although cheaper, are less reliable than tapered roller bearings in cases where there’s a potential for off-axis loads.

The MOC process risks arise from how the MOC process is structured. Every time someone makes a mistake executing the MOC process, and every time something is forgotten or overlooked, risk increases. Indicators R3 through R7, defined in Table 6, deal with these error-related issues.

If everything were done perfectly, the count of items in indicators R3 – R7 would be zero. If everything were done perfectly, the MOC business process risk would be minimized (we still have the risks inherent in the physics of the change).

Indicator R2 is based on the logic that a more complex change implies greater risk. A more complex change is characterized by a greater number of action items, as identified by the initial (i.e., PSM) scoping.

Indicator | Name | Calculation Method | Units
R1 | Risk indicator | R1 = R2 + R3 + R4 + R5 + R6 + R7 | #
R2 | PSM action items | Number of action items resulting from PSM scoping | #
R3 | Impact action items | Number of action items resulting from impact assessments | #
R4 | PSSR action items | Number of action items resulting from PSSR | #
R5 | Missing signatures | Number of missing signatures from MOC form | #
R6 | Lack of training | Number of instances of lack of training | #
R7 | Lack of notification | Number of instances of lack of notification | #
Table 6. Risk indicators.
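As a hypothetical worked example: an MOC that generated 8 PSM scoping action items (R2), 10 impact-assessment action items (R3), 5 PSSR action items (R4), 1 missing signature (R5), 1 instance of missed training (R6) and no missed notifications (R7) would score R1 = 8 + 10 + 5 + 1 + 1 + 0 = 25, which happens to be the “typical” value shown later in Table 7.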

Trade-Offs Between the Metrics

Each of the metrics is interrelated with the others in numerous ways. Optimizing one metric usually results in penalizing another. By way of example, let’s assume that, in general, duration is to be minimized. Here’s how duration minimization may impact the other metrics:

  1. Minimizing duration by adding more resources increases costs.
  2. Minimizing duration by strict adherence to schedule may require that certain objectives be eliminated from the MOC scope, or postponed to another MOC. This sacrifices the goals metric.
  3. Minimizing duration by rushing through the design stages may cause a rejection of the proposed change during approvals. This negatively impacts user satisfaction with the process.
  4. Minimizing duration by skipping the updating of the MOC process itself (checklists, etc.) when a new circumstance is discovered decreases flexibility, since the next time the same circumstance occurs, the MOC process will again be unable to handle it.
  5. Minimizing duration by rushing through the implementation of the change in the plant may cause certain critical steps to be missed, which increases the risks and should be reflected in more PSSR punch-list items.

Every proposal for MOC process improvement needs to be carefully evaluated with respect to all of the applicable metrics.

Indicators in the Real World

The large number of indicators, as listed in Table 1 through Table 6, presents a couple of practical problems:

  1. Data collection: a great many measurements must be taken in order to satisfy the data collection needs of all the indicators. Fortunately, most of the measurements needed to calculate the indicators in Table 1 through Table 6 can be acquired automatically from an electronic MOC system. Collecting this much data from a paper-based MOC system is, admittedly, somewhat onerous.
  2. Presentation: as indicated in Table 1 through Table 6, there are too many indicators for executive and management use. However, the number is probably ideal for PSM professional use, since these indicators can help pinpoint problems and suggest potential solutions.

An MOC practitioner would find a use for all of the indicators listed in the previous tables. Executive-level information would likely consist of only one indicator from each category. Let’s review the ones we have so far:

Category | Indicator | Better Values Are… | Typical Value | Ideal Value
Time | T1, Basic MOC duration | Lower | 60 d | 0 d
Cost | C1, Basic total manpower cost | Lower | $1,000 | $0
Goals | G1, Basic percentage of goals achieved | Higher | 95% | 100%
Satisfaction | S1, Basic satisfaction indicator | Higher | 80% | 100%
Flexibility | F1, Basic flexibility indicator | Lower | 0.5 hr | 0 hr
Risk | R1, Basic risk indicator | Lower | 25 | 0
Table 7. Summary of executive-level MOC performance indicators.

These could easily be presented as a series of graphs, as shown in Figure 1 through Figure 6. This kind of presentation may be part of an “executive dashboard” view of MOCs, although executive dashboards typically use different graphical metaphors, such as speedometers with redlines.

Using Reference Values to Extend Indicator Usefulness

The indicators shown in Table 7 can all be plotted as a function of time, and this is how Figure 1 through Figure 6 are constructed. What’s not obvious from these figures is any consideration of what values are acceptable, and at what point management should intervene in the process and take remedial action. Although time and cost have well-understood optimum values (0, in each case), there are no obvious values that could be labeled “good” and “bad”.

One solution is to reference current values to past values using simple statistics. If the means and standard deviations were calculated over some time frame, like one year, then any data point that falls outside of, say, 3 standard deviations would be cause for concern. A horizontal line represents the mean value, and additional horizontal lines are displaced by some number of standard deviations.
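As a minimal sketch of this technique, the following Python fragment computes the mean and a 3-standard-deviation reference band for twelve months of invented indicator values, and flags any out-of-band points.

```python
import statistics

# Monthly values of an indicator over the past year (invented data).
monthly_c1 = [950, 1020, 980, 1100, 940, 1010, 990, 1050, 970, 1000, 1030, 960]

mean = statistics.mean(monthly_c1)
sd = statistics.pstdev(monthly_c1)

upper = mean + 3 * sd   # upper reference line
lower = mean - 3 * sd   # lower reference line

# Any point outside the band is cause for concern (and investigation).
for month, value in enumerate(monthly_c1, start=1):
    if not (lower <= value <= upper):
        print(f"Month {month}: C1 = {value} is outside the 3-sigma band")
```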

Figure 2 indicates that costs in April were unusually low, raising attention and possibly leading to an investigation. Figure 6 indicates that risk in May was unusually high. This should also be cause for concern. None of these diagrams indicate why the values are out of range, nor do they provide any information about what ought to be done. These diagrams simply identify where there are areas of concern. Diagnosing problems can be accomplished using more detailed indicators (Table 1- Table 6), or using the techniques described in the next sections.

Figure 1. Time, T1, Basic MOC Duration.

Figure 2. Cost, C1, Basic total manpower cost.

Figure 3. Goal attainment, G1, Basic percentage of goals achieved.

Figure 4. Satisfaction, S1, Basic satisfaction indicator.

Figure 5. Flexibility, F1, Basic flexibility indicator.

Figure 6. Risk, R1, Basic risk indicator.

Using Grouping to Extend Indicator Usefulness

Figure 1 through Figure 6 provide useful information about MOCs, and they indicate when performance changes over time. If we imagine that the data in these figures is reported on a plant-wide basis, then there’s nothing to indicate which units or areas are doing well, and which ones are performing poorly.

If the data is grouped by location in the plant, say unit or area, then additional information may be revealed. For instance, Figure 7 shows that Area 3 has better performance than the other areas, since the MOCs in Area 3 tend to close sooner than in other areas. Of course, Figure 8 shows that the average cost of processing MOCs in Area 3 is also higher, perhaps due to additional overtime.

MOC performance data can be grouped by other attributes as well. The key concept is that grouping permits comparisons among the groups.

Figure 7. Average duration of MOCs, closed in a specific month, grouped by plant area.

Figure 8. Average cost of MOCs, closed in a specific month, grouped by plant area.
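A minimal pandas sketch of this kind of grouping is shown below; the DataFrame columns (area, month, duration, cost) are illustrative assumptions, not a prescription for any particular MOC system.

```python
import pandas as pd

# Invented MOC close-out records; column names are illustrative only.
df = pd.DataFrame({
    "area":     ["Area 1", "Area 1", "Area 2", "Area 3", "Area 3"],
    "month":    ["Jan", "Feb", "Jan", "Jan", "Feb"],
    "duration": [70, 75, 82, 55, 52],          # days
    "cost":     [900, 950, 1000, 1200, 1250],  # dollars
})

# Average duration and cost per area, per month (the basis of Figures 7 and 8).
summary = df.groupby(["area", "month"])[["duration", "cost"]].mean()
print(summary)
```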

Using Filtering to Extend Indicator Usefulness I

In some cases, average or aggregate values may lead to the wrong conclusions. For instance, Figure 9 represents the aggregate data for all MOCs at a plant, and appears to indicate that MOC duration was relatively short in January (55 days), and then increased in subsequent months.

When the data for routine MOCs is grouped separately from MOCs for turnarounds and MOCs associated with capital projects, as shown in Figure 10, it’s apparent that the MOC duration hardly varies from month to month. The explanation for the variation in MOC duration in Figure 9 must therefore be simply that the relative numbers of MOCs in each of the categories have changed from month to month. So, Figure 9 doesn’t measure performance at all—it simply indicates that the blend of routine, T/A-related and capital project-related MOCs changed between January and May.

Figure 9. Average duration of MOCs, closed in a specific month, for the entire plant.

Figure 10. Average duration of MOCs, closed in a specific month, grouped into “routine, T/A-related and Capital Project-related” categories.

Therefore, a useful analysis of routine MOCs would filter out the data for turnaround-related and capital project-related MOCs, as shown in Figure 11.

The same logic can be applied to temporary MOCs, since the duration of temporary MOCs is governed by the need for the temporary change itself, while the duration for permanent MOCs is often governed by the efficiency of the MOC process.

Figure 11. Average duration of routine, permanent MOCs (with T/A and capital project MOCs filtered out), grouped by area.
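A minimal pandas sketch of the filtering step might look like the following; the category and moc_type columns are, again, invented for illustration.

```python
import pandas as pd

# Invented records with illustrative category/type columns.
df = pd.DataFrame({
    "area":     ["Area 1", "Area 1", "Area 2", "Area 2", "Area 3"],
    "category": ["routine", "turnaround", "routine", "capital", "routine"],
    "moc_type": ["permanent", "permanent", "temporary", "permanent", "permanent"],
    "duration": [65, 140, 30, 220, 58],  # days
})

# Keep only routine, permanent MOCs; T/A-related, capital-project-related
# and temporary MOCs are filtered out before averaging (as in Figure 11).
routine_permanent = df[(df["category"] == "routine") &
                       (df["moc_type"] == "permanent")]
print(routine_permanent.groupby("area")["duration"].mean())
```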

Using Filtering to Extend Indicator Usefulness II

Consider the MOC lifecycle for a specific MOC, shown in Table 8, with the number of days in each lifecycle state shown below the state.

The total cycle time is 182 working days, or about 8 months, since there are only 5 working days per week. This correlates well with actual data from refineries.

Phase | State | Duration [work days]
Request | Initiation | 1
Request | Scoping | 1
Request | Design Change | 5
Request | Impact Assessment | 5
Request | Approvals | 5
Implementation | Implementation | 100
Implementation | PSSR | 5
Implementation | Close-Out | 60
Total | | 182
Tabulated values are durations, measured in work days, in each lifecycle state.
Table 8. Days in each lifecycle state, for a single, routine, permanent MOC.

One can see that the duration is mostly governed by 2 large blocks of time:

  • Implementation: 100 work days. It doesn’t typically take that long to implement a change. Out of the 100 days, 90 days are generally spent waiting for materials to arrive.
  • Close-Out: 60 work days. It only takes the MOC Coordinator a day to close out an MOC. The other 59 days are usually spent waiting for drawings and documentation to be finalized. If there’s a queue in the design/CAD group (and there always is), then it may take weeks for drawings to get updated.

Interestingly, only 22 days are spent actually working on the MOC; the remainder is largely spent waiting for things to happen.

If the total 182 days is used as the “time” indicator, then what’s being measured is really a combination of:

  • the effectiveness of the Purchasing department to procure materials,
  • the effectiveness of suppliers to meet their delivery dates,
  • the effectiveness of the Stores and Inventory department to identify and stock commonly used items,
  • the effectiveness of the Design Engineering or Drafting department to update drawings, and,
  • the effectiveness of the MOC business process.

The effectiveness of the MOC business process is measured, but only to the smallest extent. This may cause two problems:

  1. Improvements in the MOC duration may be overwhelmed by the other factors. An improvement in processing MOCs from 22 days to 11 days represents a 50% improvement, yet, due to the duration of Implementation and Close-Out, it appears as only a (11/182 =) 6% improvement.
  2. MOC durations are often used to compensate/reward PSM Managers and MOC Coordinators. If total MOC durations are used, then most of the measured duration, and consequently their pay, is determined by factors entirely outside the control of the people responsible for MOC.

One solution is to recast the statistics, by filtering out the Implementation and Close‑Out times. This will give a better indication of the MOC business process performance.
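Using the durations from Table 8, a small Python sketch makes the arithmetic explicit; the choice of which states count as “waiting” follows the discussion above.

```python
# Durations from Table 8, in work days per lifecycle state.
state_days = {
    "Initiation": 1, "Scoping": 1, "Design Change": 5,
    "Impact Assessment": 5, "Approvals": 5,
    "Implementation": 100, "PSSR": 5, "Close-Out": 60,
}

total = sum(state_days.values())                  # 182 work days

# States dominated by waiting (materials, drawing queues), per the
# discussion above, are filtered out of the process-time indicator.
WAITING_STATES = {"Implementation", "Close-Out"}
process_time = sum(d for s, d in state_days.items()
                   if s not in WAITING_STATES)     # 22 work days

print(total, process_time)  # 182 22
```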

Indicators for Business Process Improvement

Performance indicators for the actual execution of real business processes benefit from trending (i.e. changes in time), grouping and filtering, as described previously. During the design of a new business process, these features aren’t really relevant. Typically, a business process redesign activity attempts to compare a real or idealized AS IS process with a proposed and/or simulated TO BE process. So, the most useful performance indicators are those that succinctly portray two or more alternatives: AS IS and TO BE, before and after, or a choice among alternatives.

A radar graph, as shown in Figure 12, is a convenient way of comparing two or more alternatives. In order to use a radar graph for MOC metrics, two problems must be overcome:

  1. As indicated in Table 7, time, cost, flexibility and risk are better if the values are smaller, while goals and satisfaction are better if larger. In order to make the radar graph less confusing, one direction must be chosen as the “better” direction. We shall choose smaller values (i.e. closer to 0.0) as being better.
  2. The scales of each indicator in Table 7 are different. T1 is measured in days, C1 is measured in dollars, and so on. Using different scales on the axes of Figure 12 adds crowding and complexity to the radar graph. To compensate for these scaling problems, we suggest normalizing the values by dividing the measured values by a reference value. Table 9 details how this is accomplished.

Solving these problems, as suggested, actually creates a completely new set of indicators, which are denoted by the boldface characters: T, C… Reference values are denoted by an “r” in the subscript.

Category | Indicator | Definition | Reference Value
Time | MOC duration | T1 = T1/T1r | T1r = 60 d
Cost | Total manpower cost | C1 = C1/C1r | C1r = $1,000
Goals | Percentage of goals achieved | G1 = (1-G1)/(1-G1r) | G1r = 95%
Satisfaction | Satisfaction indicator | S1 = (1-S1)/(1-S1r) | S1r = 80%
Flexibility | Flexibility indicator | F1 = F1/F1r | F1r = 0.5 hr
Risk | Risk indicator | R1 = R1/R1r | R1r = 25
Table 9. Normalized executive-level MOC performance indicators.
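A minimal Python sketch of the normalization in Table 9 follows. The measured values are invented for illustration; the reference values are taken from the table, and the dictionary keys stand in for the boldface normalized symbols.

```python
# Normalized indicators per Table 9; reference values from Table 7's
# "typical" column. The measured values below are invented for illustration.
refs = {"T1": 60.0, "C1": 1000.0, "G1": 0.95, "S1": 0.80, "F1": 0.5, "R1": 25.0}
measured = {"T1": 55.0, "C1": 1100.0, "G1": 0.90, "S1": 0.85, "F1": 0.4, "R1": 20.0}

normalized = {
    "T1": measured["T1"] / refs["T1"],
    "C1": measured["C1"] / refs["C1"],
    # Goals and satisfaction are "higher is better", so the complements are
    # normalized, making smaller values better on every radar axis.
    "G1": (1 - measured["G1"]) / (1 - refs["G1"]),
    "S1": (1 - measured["S1"]) / (1 - refs["S1"]),
    "F1": measured["F1"] / refs["F1"],
    "R1": measured["R1"] / refs["R1"],
}
print(normalized)  # 1.0 on any axis means "at the reference value"
```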

With these analytical tools, we can now construct a radar graph that’s usable for measuring business process improvements. As indicated in Figure 12, a baseline MOC process is assessed and characterized by the black curve in the graph. An alternative process is proposed and analyzed, and the results are indicated by the colored curve on the graph.

Figure 12. Basic radar graph showing two MOC cases. For each indicator, smaller values are better.

An improved MOC business process would appear on a radar graph as shown in Figure 13. In contrast, a worse business process would appear on a radar graph as shown in Figure 14.

Figure 13. A proposed business process with performance improvements, when compared to a baseline.

Figure 14. A proposed business process with performance degradation, when compared to a baseline.
