3.    Effort Estimation

In COCOMO II, effort is expressed in Person-Months (PM).  A person-month is the amount of time one person spends working on the software development project for one month.  COCOMO II treats the number of person-hours per person-month, PH/PM, as an adjustable factor with a nominal value of 152 hours per Person-Month.  This number excludes time typically devoted to holidays, vacations, and weekend time off.  The number of person-months is different from the time it will take the project to complete; that is called the development schedule or Time to Develop, TDEV.  For example, a project may be estimated to require 50 PM of effort but have a schedule of 11 months.  If you use a different value of PH/PM (say, 160 instead of 152), COCOMO II adjusts the PM estimate accordingly (in this case, reducing it by about 5%).  The reduced PM will in turn yield a smaller estimate of the development schedule.
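The adjustment can be sketched as follows; the function name is illustrative, and 152 is the nominal PH/PM value given above:

```python
NOMINAL_PH_PM = 152  # nominal person-hours per person-month in COCOMO II

def adjust_pm(pm_estimate: float, hours_per_pm: float) -> float:
    """Rescale a person-month estimate to a different PH/PM assumption."""
    return pm_estimate * NOMINAL_PH_PM / hours_per_pm

# A 50 PM estimate re-expressed at 160 hours per person-month:
print(adjust_pm(50, 160))  # 47.5, about 5% lower
```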

The COCOMO II effort estimation model was introduced in Equation 1, and is summarized in Equation 11.  This model form is used for both the Early Design and Post-Architecture cost models to estimate effort between the LCO and IOC end points of the MBASE/RUP lifecycle model, and between SRR and SAR for the Waterfall lifecycle model (see Section 6.2).  The inputs are the Size of the software development, a constant, A, an exponent, E, and a number of values called effort multipliers (EM).  The number of effort multipliers depends on the model.

                     PM = A × Size^E × Π(i=1..n) EMi                            Eq. 11

The exponent E is explained in detail in Section 3.1.  The effort multipliers are explained in Section 3.2.  The constant, A, approximates a productivity constant in PM/KSLOC for the case where E = 1.0.  Productivity changes as E increases because of the non-linear effects on Size.  The constant A is initially set when the model is calibrated to the project database reflecting a global productivity average.  The COCOMO model should be calibrated to local data which then reflects the local productivity and improves the model's accuracy.  Section 7 discusses how to calibrate the model to the local environment.
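As a minimal sketch of this model form (the function name is illustrative; A = 2.94 is the COCOMO II.2000 calibration value used later in this section):

```python
import math

def effort_pm(size_ksloc: float, effort_multipliers: list[float],
              a: float, e: float) -> float:
    """Eq. 11: PM = A * Size^E * product of the effort multipliers."""
    return a * size_ksloc ** e * math.prod(effort_multipliers)

# With all 17 Post-Architecture multipliers at their Nominal value of
# 1.00 and E = 1.0, A acts as a productivity constant in PM/KSLOC:
print(round(effort_pm(10, [1.00] * 17, a=2.94, e=1.0), 1))  # 29.4 PM for 10 KSLOC
```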

Size is expressed in KSLOC (thousands of source lines of code).  This is derived from estimating the size of software modules that will constitute the application program.  It can also be estimated from unadjusted function points (UFP), converted to SLOC, then divided by one thousand.  Procedures for counting SLOC or UFP were explained in Section 2, including adjustments for reuse, requirements evolution, and automatically translated code.

Cost drivers are used to capture characteristics of the software development that affect the effort to complete the project.  A cost driver is a model factor that "drives" the cost (in this case Person-Months) estimated by the model.  All COCOMO II cost drivers have qualitative rating levels that express the impact of the driver on development effort.  These ratings can range from Extra Low to Extra High.  Each rating level of each cost driver has a value, called an effort multiplier (EM), associated with it.  This scheme translates a cost driver's qualitative rating into a quantitative one for use in the model.  The EM value assigned to a cost driver's nominal rating is 1.00.  If a cost driver's rating level causes more software development effort, then its corresponding EM is above 1.0.  Conversely, if the rating level reduces the effort then the corresponding EM is less than 1.0.

The selection of cost drivers is based on a strong rationale that each would independently explain a significant source of project effort or productivity variation.  The differences between the Early Design and Post-Architecture models are the number of cost drivers and the areas of influence they explain.  There are 7 cost drivers for the Early Design model and 17 cost drivers for the Post-Architecture model.  Each set is explained with its model later in the manual.

It turns out that the most significant input to the COCOMO II model is Size.  Size is treated as a special cost driver in that it has an exponential factor, E.  This exponent is an aggregation of five scale drivers.  These are discussed next.

What is not apparent in the model definition form given in Equation 11 is that some model drivers apply only to the project as a whole.  The scale drivers in the exponent, E, are only used at the project level.  Additionally, one of the cost drivers in the product of effort multipliers, Required Development Schedule (SCED), is only used at the project level.  The remaining cost drivers in the product of effort multipliers, along with size, apply to individual project components.  The model can be used to estimate effort for a project that has one component or multiple components.  For multi-component projects the project-level cost drivers apply to all components; see Section 3.3.

3.1             Scale Drivers

The exponent E in Equation 11 is an aggregation of five scale drivers that account for the relative economies or diseconomies of scale encountered for software projects of different sizes [Banker et al. 1994].  If E < 1.0, the project exhibits economies of scale.  If the product's size is doubled, the project effort is less than doubled.  The project's productivity increases as the product size is increased.  Some project economies of scale can be achieved via project-specific tools (e.g., simulations, testbeds), but in general these are difficult to achieve.  For small projects, fixed start-up costs such as tool tailoring and setup of standards and administrative reports are often a source of economies of scale.

If E = 1.0, the economies and diseconomies of scale are in balance.  This linear model is often used for cost estimation of small projects.

If E > 1.0, the project exhibits diseconomies of scale.  This is generally because of two main factors: growth of interpersonal communications overhead and growth of large-system integration overhead.  Larger projects will have more personnel, and thus more interpersonal communications paths consuming overhead.  Integrating a small product as part of a larger product requires not only the effort to develop the small product, but also the additional overhead effort to design, maintain, integrate, and test its interfaces with the remainder of the product.
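Numerically, doubling Size multiplies effort by 2^E, so the exponent alone determines whether doubling the product more or less than doubles the work (the E values below are illustrative, spanning economies through diseconomies of scale):

```python
# Doubling Size multiplies effort by 2^E:
for e in (0.91, 1.00, 1.10):
    print(f"E = {e:.2f}: doubling size multiplies effort by {2 ** e:.2f}")
# prints factors of 1.88, 2.00, and 2.14 respectively
```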

See [Banker et al. 1994] for a further discussion of software economies and diseconomies of scale.

Figure 1.                  Diseconomies of Scale Effect on Effort

Equation 12 defines the exponent, E, used in Equation 11.  Table 10 provides the rating levels for the COCOMO II scale drivers.  The selection of scale drivers is based on the rationale that they are a significant source of exponential variation in a project's effort or productivity.  Each scale driver has a range of rating levels, from Very Low to Extra High.  Each rating level has a weight.  The specific value of the weight is called a scale factor (SF).  The scale factors corresponding to the selected scale driver ratings are summed and used to determine the scale exponent, E, via Equation 12.  The B term in the equation is a constant that can be calibrated.  Calibration is discussed in Section 7.

                     E = B + 0.01 × Σ(j=1..5) SFj                               Eq. 12

For example, scale drivers in COCOMO II with an Extra High rating are each assigned a scale factor weight of 0.  Thus, a 100 KSLOC project with Extra High ratings for all scale drivers will have ΣSFj = 0, E = 0.91, and a relative effort of 2.94(100)^0.91 = 194 PM.  For the COCOMO II.2000 calibration of scale factors in Table 10, a project with Very Low ratings for all scale drivers will have ΣSFj = 31.6, E = 1.226, and a relative effort of 2.94(100)^1.226 = 832 PM.  This represents a large variation, but the increase involved in a one-unit change in one of the factors is only about 4.7% (100^0.01 = 1.047).  For very large (1,000 KSLOC) products, the effect of the scale factors is much larger, as seen in Figure 1.
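The two end-point examples can be reproduced directly from Equation 12 and the Table 10 scale factor weights (a sketch; A = 2.94 and B = 0.91 are the COCOMO II.2000 constants):

```python
def exponent(scale_factors, b=0.91):
    """Eq. 12: E = B + 0.01 * sum of the five scale factors."""
    return b + 0.01 * sum(scale_factors)

all_extra_high = [0.00] * 5                      # every driver rated Extra High
all_very_low = [6.20, 5.07, 7.07, 5.48, 7.80]    # every driver rated Very Low

for sf in (all_extra_high, all_very_low):
    e = exponent(sf)
    pm = 2.94 * 100 ** e        # 100 KSLOC, all effort multipliers at 1.00
    print(f"E = {e:.3f}, effort = {pm:.0f} PM")  # ~194 PM and ~832 PM
```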

Table 10.       Scale Drivers for COCOMO II Models

Scale
Drivers   Very Low         Low              Nominal          High             Very High        Extra High
PREC      thoroughly       largely          somewhat         generally        largely          thoroughly
          unprecedented    unprecedented    unprecedented    familiar         familiar         familiar
  SFj:    6.20             4.96             3.72             2.48             1.24             0.00
FLEX      rigorous         occasional       some             general          some             general
                           relaxation       relaxation       conformity       conformity       goals
  SFj:    5.07             4.05             3.04             2.03             1.01             0.00
RESL      little (20%)     some (40%)       often (60%)      generally (75%)  mostly (90%)     full (100%)
  SFj:    7.07             5.65             4.24             2.83             1.41             0.00
TEAM      very difficult   some difficult   basically        largely          highly           seamless
          interactions     interactions     cooperative      cooperative      cooperative      interactions
                                            interactions
  SFj:    5.48             4.38             3.29             2.19             1.10             0.00
PMAT      SW-CMM           SW-CMM           SW-CMM           SW-CMM           SW-CMM           SW-CMM
          Level 1 Lower    Level 1 Upper    Level 2          Level 3          Level 4          Level 5
  SFj:    7.80             6.24             4.68             3.12             1.56             0.00

PMAT may also be rated via the estimated Process Maturity Level (EPML); see Section 3.1.5.

The two scale drivers, Precedentedness and Development Flexibility, largely capture the differences between the Organic, Semidetached, and Embedded modes of the original COCOMO model [Boehm 1981].  Table 11 and Table 12 reorganize [Boehm 1981; Table 6.3] to map its project features onto the Precedentedness and Development Flexibility scales.  These tables provide a more in-depth explanation of the PREC and FLEX rating scales given in Table 10.

3.1.1       Precedentedness (PREC)

If a product is similar to several previously developed projects, then the precedentedness is high.

Table 11.       Precedentedness Rating Levels

Feature                                                   Very Low       Nominal / High   Extra High
Organizational understanding of product objectives        General        Considerable     Thorough
Experience in working with related software systems       Moderate       Considerable     Extensive
Concurrent development of associated new hardware
and operational procedures                                Extensive      Moderate         Some
Need for innovative data processing architectures,
algorithms                                                Considerable   Some             Minimal

3.1.2       Development Flexibility (FLEX)

Table 12.       Development Flexibility Rating Levels

Feature                                                   Very Low       Nominal / High   Extra High
Need for software conformance with pre-established
requirements                                              Full           Considerable     Basic
Need for software conformance with external interface
specifications                                            Full           Considerable     Basic
Combination of inflexibilities above with premium on
early completion                                          High           Medium           Low

The PREC and FLEX scale factors are largely intrinsic to a project and uncontrollable. The next three factors identify management controllables by which projects can reduce diseconomies of scale by reducing sources of project turbulence, entropy, and rework.

3.1.3       Architecture / Risk Resolution (RESL)

This factor combines two of the scale drivers in Ada COCOMO, “Design Thoroughness by Product Design Review (PDR)” and “Risk Elimination by PDR” [Boehm-Royce 1989; Figures 4 and 5].  Table 13 consolidates the Ada COCOMO ratings to form a more comprehensive definition for the COCOMO II RESL rating levels.  It also relates the rating level to the MBASE/RUP Life Cycle Architecture (LCA) milestone as well as to the waterfall PDR milestone.  The RESL rating is the subjective weighted average of the listed characteristics.

Table 13.       RESL Rating Levels

                                          Very                                               Very          Extra
Characteristic                            Low         Low           Nominal        High      High          High
Risk Management Plan identifies all       None        Little        Some           Generally Mostly        Fully
critical risk items, establishes
milestones for resolving them by PDR
or LCA.
Schedule, budget, and internal            None        Little        Some           Generally Mostly        Fully
milestones through PDR or LCA
compatible with Risk Management Plan.
Percent of development schedule           5           10            17             25        33            40
devoted to establishing architecture,
given general product objectives.
Percent of required top software          20          40            60             80        100           120
architects available to project.
Tool support available for resolving      None        Little        Some           Good      Strong        Full
risk items, developing and verifying
architectural specs.
Level of uncertainty in key               Extreme     Significant   Considerable   Some      Little        Very Little
architecture drivers: mission, user
interface, COTS, hardware, technology,
performance.
Number and criticality of risk items.     > 10        5-10          2-4            1         > 5           < 5
                                          Critical    Critical      Critical       Critical  Non-Critical  Non-Critical

3.1.4       Team Cohesion (TEAM)

The Team Cohesion scale driver accounts for the sources of project turbulence and entropy because of difficulties in synchronizing the project’s stakeholders: users, customers, developers, maintainers, interfacers, others.  These difficulties may arise from differences in stakeholder objectives and cultures; difficulties in reconciling objectives; and stakeholders' lack of experience and familiarity in operating as a team.  Table 14 provides a detailed definition for the overall TEAM rating levels.  The final rating is the subjective weighted average of the listed characteristics.

Table 14.       TEAM Rating Components

                                          Very                                               Very          Extra
Characteristic                            Low         Low           Nominal   High           High          High
Consistency of stakeholder objectives     Little      Some          Basic     Considerable   Strong        Full
and cultures
Ability, willingness of stakeholders      Little      Some          Basic     Considerable   Strong        Full
to accommodate other stakeholders’
objectives
Experience of stakeholders in             None        Little        Little    Basic          Considerable  Extensive
operating as a team
Stakeholder teambuilding to achieve       None        Little        Little    Basic          Considerable  Extensive
shared vision and commitments

3.1.5       Process Maturity (PMAT)

Overall Maturity Levels

The procedure for determining PMAT is organized around the Software Engineering Institute’s Capability Maturity Model (CMM).  The time period for rating Process Maturity is the time the project starts.  There are two ways of rating Process Maturity.  The first captures the result of an organized evaluation based on the CMM, and is explained in Table 15.

Table 15.       PMAT Ratings for Estimated Process Maturity Level (EPML)

PMAT Rating   Maturity Level             EPML
Very Low      CMM Level 1 (lower half)   0
Low           CMM Level 1 (upper half)   1
Nominal       CMM Level 2                2
High          CMM Level 3                3
Very High     CMM Level 4                4
Extra High    CMM Level 5                5

Key Process Area Questionnaire

The second is organized around the 18 Key Process Areas (KPAs) in the SEI Capability Maturity Model [Paulk et al. 1995].  The procedure for determining PMAT is to decide the percentage of compliance for each of the KPAs.  If the project has undergone a recent CMM Assessment then the percentage compliance for the overall KPA (based on KPA Key Practice compliance assessment data) is used.  If an assessment has not been done then the levels of compliance to the KPA’s goals are used (with the Likert scale in Table 16) to set the level of compliance.  The goal-based level of compliance is determined by a judgment-based averaging across the goals for each Key Process Area.  See [Paulk et al. 1995] for more information on the KPA definitions, goals and activities.


Table 16.       KPA Rating Levels

For each Key Process Area (KPA) below, rate the level of compliance with its goals as one of: Almost Always (1), Frequently (2), About Half (3), Occasionally (4), Rarely if Ever (5), Does Not Apply (6), or Don’t Know (7).

Requirements Management
·   System requirements allocated to software are controlled to establish a baseline for software engineering and management use.
·   Software plans, products, and activities are kept consistent with the system requirements allocated to software.

Software Project Planning
·   Software estimates are documented for use in planning and tracking the software project.
·   Software project activities and commitments are planned and documented.
·   Affected groups and individuals agree to their commitments related to the software project.

Software Project Tracking and Oversight
·   Actual results and performances are tracked against the software plans.
·   Corrective actions are taken and managed to closure when actual results and performance deviate significantly from the software plans.
·   Changes to software commitments are agreed to by the affected groups and individuals.

Software Subcontract Management
·   The prime contractor selects qualified software subcontractors.
·   The prime contractor and the subcontractor agree to their commitments to each other.
·   The prime contractor and the subcontractor maintain ongoing communications.
·   The prime contractor tracks the subcontractor’s actual results and performance against its commitments.

Software Quality Assurance (SQA)
·   SQA activities are planned.
·   Adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively.
·   Affected groups and individuals are informed of software quality assurance activities and results.
·   Noncompliance issues that cannot be resolved within the software project are addressed by senior management.

Software Configuration Management (SCM)
·   SCM activities are planned.
·   Selected work products are identified, controlled, and available.
·   Changes to identified work products are controlled.
·   Affected groups and individuals are informed of the status and content of software baselines.

Organization Process Focus
·   Software process development and improvement activities are coordinated across the organization.
·   The strengths and weaknesses of the software processes used are identified relative to a process standard.
·   Organization-level process development and improvement activities are planned.

Organization Process Definition
·   A standard software process for the organization is developed and maintained.
·   Information related to the use of the organization’s standard software process by the software projects is collected, reviewed, and made available.

Training Program
·   Training activities are planned.
·   Training for developing the skills and knowledge needed to perform software management and technical roles is provided.
·   Individuals in the software engineering group and software-related groups receive the training necessary to perform their roles.

Integrated Software Management
·   The project’s defined software process is a tailored version of the organization’s standard software process.
·   The project is planned and managed according to the project’s defined software process.

Software Product Engineering
·   The software engineering tasks are defined, integrated, and consistently performed to produce the software.
·   Software work products are kept consistent with each other.

Intergroup Coordination
·   The customer’s requirements are agreed to by all affected groups.
·   The commitments between the engineering groups are agreed to by the affected groups.
·   The engineering groups identify, track, and resolve intergroup issues.

Peer Reviews
·   Peer review activities are planned.
·   Defects in the software work products are identified and removed.

Quantitative Process Management
·   The quantitative process management activities are planned.
·   The process performance of the project’s defined software process is controlled quantitatively.
·   The process capability of the organization’s standard software process is known in quantitative terms.

Software Quality Management
·   The project’s software quality management activities are planned.
·   Measurable goals of software product quality and their priorities are defined.
·   Actual progress toward achieving the quality goals for the software products is quantified and managed.

Defect Prevention
·   Defect prevention activities are planned.
·   Common causes of defects are sought out and identified.
·   Common causes of defects are prioritized and systematically eliminated.

Technology Change Management
·   Incorporation of technology changes is planned.
·   New technologies are evaluated to determine their effect on quality and productivity.
·   Appropriate new technologies are transferred into normal practice across the organization.

Process Change Management
·   Continuous process improvement is planned.
·   Participation in the organization’s software process improvement activities is organization wide.
·   The organization’s standard software process and the project’s defined software processes are improved continuously.

1.   Check Almost Always when the goals are consistently achieved and are well established in standard operating procedures (over 90% of the time).

2.   Check Frequently when the goals are achieved relatively often, but sometimes are omitted under difficult circumstances (about 60 to 90% of the time).

3.   Check About Half when the goals are achieved about half of the time (about 40 to 60% of the time).

4.   Check Occasionally when the goals are sometimes achieved, but less often (about 10 to 40% of the time).

5.   Check Rarely If Ever when the goals are rarely if ever achieved (less than 10% of the time).

6.   Check Does Not Apply when you have the required knowledge about your project or organization and the KPA, but you feel the KPA does not apply to your circumstances.

7.   Check Don’t Know when you are uncertain about how to respond for the KPA.

An equivalent process maturity level (EPML) is computed as five times the average compliance level of all n rated KPAs for a single project (Does Not Apply and Don’t Know are not counted, which sometimes makes n less than 18).  After each KPA is rated, the rating level is weighted (100% for Almost Always, 75% for Frequently, 50% for About Half, 25% for Occasionally, 1% for Rarely if Ever).  The EPML is then calculated as in Equation 13.

                     EPML = 5 × [ Σ(i=1..n) KPA%i / 100 ] / n                   Eq. 13

An EPML of 0 corresponds with a PMAT rating level of Very Low in the rating scales of Table 10 and Table 15.
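A sketch of the EPML computation in Equation 13 (the function and rating names are illustrative; the weights are those listed above, and Does Not Apply / Don’t Know ratings are simply left out of the input):

```python
COMPLIANCE_WEIGHT = {
    "Almost Always": 1.00, "Frequently": 0.75, "About Half": 0.50,
    "Occasionally": 0.25, "Rarely if Ever": 0.01,
}

def epml(kpa_ratings: list[str]) -> float:
    """Eq. 13: five times the average compliance level of the n rated KPAs."""
    levels = [COMPLIANCE_WEIGHT[r] for r in kpa_ratings]
    return 5 * sum(levels) / len(levels)

# An organization achieving every KPA goal "Frequently":
print(epml(["Frequently"] * 18))  # 3.75
```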

The COCOMO II project is tracking the progress of the recent CMM Integration (CMM-I) activity to determine likely future revisions in the definition of PMAT.

3.2             Effort Multipliers

3.2.1       Post-Architecture Cost Drivers

This model is the most detailed and it is intended to be used when a software life-cycle architecture has been developed.  This model is used in the development and maintenance of software products in the Application Generators, System Integration, or Infrastructure sectors [Boehm et al. 2000].

The 17 Post-Architecture effort multipliers (EM) are used in the COCOMO II model to adjust the nominal effort, in Person-Months, to reflect the software product under development; see Equation 11.  Each cost driver is defined below by a set of rating levels and a corresponding set of effort multipliers.  The Nominal level always has an effort multiplier of 1.00, which does not change the estimated effort.  Off-nominal ratings generally do change the estimated effort.  For example, a High rating of Required Software Reliability (RELY) will add 10% to the estimated effort, as determined by the COCOMO II.2000 data calibration.  A Very High RELY rating will add 26%.  It is possible to assign intermediate rating levels and corresponding effort multipliers for your project.  For example, the USC COCOMO II software tool supports rating cost drivers between the rating levels in quarter increments, e.g., Low+0.25, Nominal+0.50, High+0.75, etc.  Whenever an assessment of a cost driver falls halfway between quarter increments, always round toward the Nominal rating, e.g., if a rating falls halfway between Low+0.50 and Low+0.75, select Low+0.75; if it falls halfway between High+0.25 and High+0.50, select High+0.25.  Normally, linear interpolation is used to determine intermediate multiplier values, but nonlinear interpolation is more accurate for the high end of the TIME and STOR cost drivers and the low end of SCED.
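Linear interpolation between adjacent tabulated multipliers can be sketched as follows (the function is illustrative; the values 1.10 and 1.26 are the High and Very High RELY multipliers from the RELY cost driver table in Section 3.2.1.1):

```python
def interpolate_em(em_lower: float, em_upper: float,
                   quarter_steps: int) -> float:
    """Effort multiplier for a rating quarter_steps (0-4) above the
    lower of two adjacent rating levels."""
    return em_lower + (em_upper - em_lower) * quarter_steps / 4

# RELY rated High+0.50, halfway between High (1.10) and Very High (1.26):
print(round(interpolate_em(1.10, 1.26, 2), 2))  # 1.18
```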

The COCOMO II model can be used to estimate effort and schedule for the whole project or for a project that consists of multiple modules.  The size and cost driver ratings can be different for each module, with the exception of the Required Development Schedule (SCED) cost driver and the scale drivers.  The unique handling of SCED is discussed in Section 3.2.1.4 and in Section 4.

3.2.1.1              Product Factors

Product factors account for variation in the effort required to develop software caused by characteristics of the product under development.  A product that is complex, has high reliability requirements, or works with a large database will require more effort to complete.  There are five product factors, and complexity has the strongest influence on estimated effort.

Required Software Reliability (RELY)

This is the measure of the extent to which the software must perform its intended function over a period of time.  If the effect of a software failure is only slight inconvenience then RELY is very low.  If a failure would risk human life then RELY is very high. 

Table 17.       RELY Cost Driver

RELY                 slight          low, easily    moderate,      high          risk to
Descriptors:         inconvenience   recoverable    easily         financial     human life
                                     losses         recoverable    loss
                                                    losses
Rating Levels        Very Low        Low            Nominal        High          Very High     Extra High
Effort Multipliers   0.82            0.92           1.00           1.10          1.26          n/a

This cost driver can be influenced by the requirement to develop software for reusability, see the description for RUSE.

Data Base Size (DATA)

This measure attempts to capture the effect large data requirements have on product development.  The rating is determined by calculating D/P, the ratio of bytes in the database to SLOC in the program.  The size of the database is important to consider because of the effort required to generate the test data that will be used to exercise the program.  In other words, DATA captures the effort needed to assemble the data required to complete testing of the program through IOC.

Table 18.       DATA Cost Driver

DATA*                                DB bytes/       10 ≤ D/P       100 ≤ D/P     D/P ≥ 1000
Descriptors:                         Pgm SLOC < 10   < 100          < 1000
Rating Levels        Very Low        Low             Nominal        High          Very High     Extra High
Effort Multipliers   n/a             0.90            1.00           1.14          1.28          n/a

* DATA is rated Low if D/P is less than 10 and Very High if D/P is 1000 or more.  P is measured in equivalent source lines of code (SLOC), which may involve function point or reuse conversions.
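The D/P calculation and rating thresholds can be sketched as follows (the function name is illustrative; the thresholds are those given above):

```python
def data_rating(testing_db_bytes: int, program_sloc: int) -> str:
    """Map the D/P ratio to a DATA cost driver rating level."""
    dp = testing_db_bytes / program_sloc
    if dp < 10:
        return "Low"
    if dp < 100:
        return "Nominal"
    if dp < 1000:
        return "High"
    return "Very High"

# A 100 KSLOC program exercised against a 50 MB test database (D/P = 500):
print(data_rating(50_000_000, 100_000))  # High
```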

Product Complexity (CPLX)

Complexity is divided into five areas: control operations, computational operations, device-dependent operations, data management operations, and user interface management operations.  Using Table 19, select the area or combination of areas that characterize the product or the component of the product you are rating.  The complexity rating is the subjective weighted average of the selected area ratings.  Table 20 provides the COCOMO II.2000 CPLX effort multipliers.


Table 19.       Component Complexity Ratings Levels

Very Low
  Control Operations: Straight-line code with a few non-nested structured programming operators: DOs, CASEs, IF-THEN-ELSEs.  Simple module composition via procedure calls or simple scripts.
  Computational Operations: Evaluation of simple expressions: e.g., A=B+C*(D-E).
  Device-dependent Operations: Simple read, write statements with simple formats.
  Data Management Operations: Simple arrays in main memory.  Simple COTS-DB queries, updates.
  User Interface Management Operations: Simple input forms, report generators.

Low
  Control Operations: Straightforward nesting of structured programming operators.  Mostly simple predicates.
  Computational Operations: Evaluation of moderate-level expressions: e.g., D=SQRT(B**2-4.*A*C).
  Device-dependent Operations: No cognizance needed of particular processor or I/O device characteristics.  I/O done at GET/PUT level.
  Data Management Operations: Single file subsetting with no data structure changes, no edits, no intermediate files.  Moderately complex COTS-DB queries, updates.
  User Interface Management Operations: Use of simple graphic user interface (GUI) builders.

Nominal
  Control Operations: Mostly simple nesting.  Some intermodule control.  Decision tables.  Simple callbacks or message passing, including middleware-supported distributed processing.
  Computational Operations: Use of standard math and statistical routines.  Basic matrix/vector operations.
  Device-dependent Operations: I/O processing includes device selection, status checking and error processing.
  Data Management Operations: Multi-file input and single file output.  Simple structural changes, simple edits.  Complex COTS-DB queries, updates.
  User Interface Management Operations: Simple use of widget set.

High
  Control Operations: Highly nested structured programming operators with many compound predicates.  Queue and stack control.  Homogeneous, distributed processing.  Single processor soft real-time control.
  Computational Operations: Basic numerical analysis: multivariate interpolation, ordinary differential equations.  Basic truncation, round-off concerns.
  Device-dependent Operations: Operations at physical I/O level (physical storage address translations; seeks, reads, etc.).  Optimized I/O overlap.
  Data Management Operations: Simple triggers activated by data stream contents.  Complex data restructuring.
  User Interface Management Operations: Widget set development and extension.  Simple voice I/O, multimedia.

Very High
  Control Operations: Reentrant and recursive coding.  Fixed-priority interrupt handling.  Task synchronization, complex callbacks, heterogeneous distributed processing.  Single-processor hard real-time control.
  Computational Operations: Difficult but structured numerical analysis: near-singular matrix equations, partial differential equations.  Simple parallelization.
  Device-dependent Operations: Routines for interrupt diagnosis, servicing, masking.  Communication line handling.  Performance-intensive embedded systems.
  Data Management Operations: Distributed database coordination.  Complex triggers.  Search optimization.
  User Interface Management Operations: Moderately complex 2D/3D, dynamic graphics, multimedia.

Extra High
  Control Operations: Multiple resource scheduling with dynamically changing priorities.  Microcode-level control.  Distributed hard real-time control.
  Computational Operations: Difficult and unstructured numerical analysis: highly accurate analysis of noisy, stochastic data.  Complex parallelization.
  Device-dependent Operations: Device timing-dependent coding, micro-programmed operations.  Performance-critical embedded systems.
  Data Management Operations: Highly coupled, dynamic relational and object structures.  Natural language data management.
  User Interface Management Operations: Complex multimedia, virtual reality, natural language interface.

Table 11. CPLX Cost Driver

| Rating Levels      | Very Low | Low  | Nominal | High | Very High | Extra High |
| Effort Multipliers | 0.73     | 0.87 | 1.00    | 1.17 | 1.34      | 1.74       |
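As a concrete illustration of how a cost driver's effort multiplier enters the effort equation (Equation 11), the sketch below applies the CPLX multipliers from Table 11.  The constant A = 2.94 and exponent E = 1.15 are placeholder values assumed for illustration only; in COCOMO II, E comes from the scale drivers and A from calibration.

```python
# Sketch of Eq. 11: PM = A * Size^E * product(EM_i).
# A and E below are illustrative placeholders, not calibrated values.

CPLX_EM = {"Very Low": 0.73, "Low": 0.87, "Nominal": 1.00,
           "High": 1.17, "Very High": 1.34, "Extra High": 1.74}

def effort_pm(ksloc, effort_multipliers, A=2.94, E=1.15):
    """Person-months per Eq. 11 for a given size and list of EMs."""
    pm = A * ksloc ** E
    for em in effort_multipliers:
        pm *= em
    return pm

nominal = effort_pm(10.0, [CPLX_EM["Nominal"]])
very_high = effort_pm(10.0, [CPLX_EM["Very High"]])
# Raising CPLX from Nominal to Very High scales effort by its EM ratio.
print(round(very_high / nominal, 2))  # 1.34
```

Because the multipliers are relative to the Nominal rating (EM = 1.00), the ratio of the two estimates is simply the Very High multiplier, 1.34.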

Developed for Reusability (RUSE)

This cost driver accounts for the additional effort needed to construct components intended for reuse on current or future projects.  This effort is spent creating a more generic software design, more elaborate documentation, and more extensive testing to ensure components are ready for use in other applications.  “Across project” could apply to reuse across the modules in a single financial applications project.  “Across program” could apply to reuse across multiple financial applications projects for a single organization.  “Across product line” could apply if the reuse is extended across multiple organizations.  “Across multiple product lines” could apply to reuse across financial, sales, and marketing product lines.

Development for reusability imposes constraints on the project's RELY and DOCU ratings.  The RELY rating should be at most one level below the RUSE rating.  The DOCU rating should be at least Nominal for Nominal and High RUSE ratings, and at least High for Very High and Extra High RUSE ratings.

Table 12. RUSE Cost Driver

| RUSE Descriptors   |          | none | across project | across program | across product line | across multiple product lines |
| Rating Levels      | Very Low | Low  | Nominal        | High           | Very High           | Extra High                    |
| Effort Multipliers | n/a      | 0.95 | 1.00           | 1.07           | 1.15                | 1.24                          |

Documentation Match to Life-Cycle Needs (DOCU)

Several software cost models have a cost driver for the level of required documentation.  In COCOMO II, the rating scale for the DOCU cost driver is evaluated in terms of the suitability of the project’s documentation to its life-cycle needs.  The rating scale goes from Very Low (many life-cycle needs uncovered) to Very High (very excessive for life-cycle needs).

Attempting to save costs via Very Low or Low documentation levels will generally incur extra costs during the maintenance portion of the life-cycle.  Poor or missing documentation will increase the Software Understanding (SU) increment discussed in Section 2.4.2.

Table 13. DOCU Cost Driver

| DOCU Descriptors   | Many life-cycle needs uncovered | Some life-cycle needs uncovered | Right-sized to life-cycle needs | Excessive for life-cycle needs | Very excessive for life-cycle needs |            |
| Rating Levels      | Very Low                        | Low                             | Nominal                         | High                           | Very High                           | Extra High |
| Effort Multipliers | 0.81                            | 0.91                            | 1.00                            | 1.11                           | 1.23                                | n/a        |

This cost driver can be influenced by the Developed for Reusability cost factor; see the description of RUSE.

3.2.1.2              Platform Factors

The platform refers to the target-machine complex of hardware and infrastructure software (previously called the virtual machine).  The factors have been revised to reflect this, as described in this section.  Some additional platform factors were considered, such as distribution, parallelism, embeddedness, and real-time operations.  These considerations have been accommodated by the expansion of the rating levels of the Component Complexity (CPLX) cost driver in Section 3.2.1.1.

Execution Time Constraint (TIME)

This is a measure of the execution time constraint imposed upon a software system.  The rating is expressed in terms of the percentage of available execution time expected to be used by the system or subsystem consuming the execution time resource.  The rating ranges from Nominal, less than 50% of the execution time resource used, to Extra High, 95% of the execution time resource consumed.

Table 14. TIME Cost Driver

| TIME Descriptors   |          |     | ≤ 50% use of available execution time | 70% use of available execution time | 85% use of available execution time | 95% use of available execution time |
| Rating Levels      | Very Low | Low | Nominal                               | High                                | Very High                           | Extra High                          |
| Effort Multipliers | n/a      | n/a | 1.00                                  | 1.11                                | 1.29                                | 1.63                                |
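As an illustration, the TIME rating thresholds can be encoded as a simple lookup.  The thresholds and effort multipliers come from Table 14; the helper function itself is an assumption for illustration, not part of the COCOMO II definition.

```python
# Hypothetical helper mapping estimated execution-time utilization to the
# TIME rating and effort multiplier from Table 14.

TIME_RATINGS = [  # (upper bound on utilization, rating, effort multiplier)
    (0.50, "Nominal", 1.00),
    (0.70, "High", 1.11),
    (0.85, "Very High", 1.29),
    (0.95, "Extra High", 1.63),
]

def time_rating(utilization):
    """Return (rating, EM) for the given fraction of execution time used."""
    for bound, rating, em in TIME_RATINGS:
        if utilization <= bound:
            return rating, em
    raise ValueError("utilization above 95% is outside the TIME rating scale")

print(time_rating(0.60))  # ('High', 1.11)
```

The same threshold-lookup pattern applies to the STOR rating described next.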

Main Storage Constraint (STOR)

This rating represents the degree of main storage constraint imposed on a software system or subsystem.  Given the remarkable increase in available processor execution time and main storage, one can question whether these constraint variables are still relevant.  However, many applications continue to expand to consume whatever resources are available, particularly with large and growing COTS products, making these cost drivers still relevant.  The rating ranges from Nominal (less than 50% of available storage used) to Extra High (95%).

Table 15. STOR Cost Driver

| STOR Descriptors   |          |     | ≤ 50% use of available storage | 70% use of available storage | 85% use of available storage | 95% use of available storage |
| Rating Levels      | Very Low | Low | Nominal                        | High                         | Very High                    | Extra High                   |
| Effort Multipliers | n/a      | n/a | 1.00                           | 1.05                         | 1.17                         | 1.46                         |

Platform Volatility (PVOL)

“Platform” is used here to mean the complex of hardware and software (OS, DBMS, etc.) the software product calls on to perform its tasks.  If the software to be developed is an operating system then the platform is the computer hardware.  If a database management system is to be developed then the platform is the hardware and the operating system.  If a network text browser is to be developed then the platform is the network, computer hardware, the operating system, and the distributed information repositories.  The platform includes any compilers or assemblers supporting the development of the software system.  This rating ranges from low, where there is a major change every 12 months, to very high, where there is a major change every two weeks.

Table 16. PVOL Cost Driver

| PVOL Descriptors   |          | Major change every 12 mo.; minor change every 1 mo. | Major: 6 mo.; minor: 2 wk. | Major: 2 mo.; minor: 1 wk. | Major: 2 wk.; minor: 2 days |            |
| Rating Levels      | Very Low | Low                                                 | Nominal                    | High                       | Very High                   | Extra High |
| Effort Multipliers | n/a      | 0.87                                                | 1.00                       | 1.15                       | 1.30                        | n/a        |

3.2.1.3              Personnel Factors

After product size, people factors have the strongest influence in determining the amount of effort required to develop a software product.  The Personnel Factors rate the development team's capability and experience, not that of individual team members.  These ratings are most likely to change during the course of a project, reflecting the gaining of experience or the rotation of people onto and off the project.

Analyst Capability (ACAP)

Analysts are personnel who work on requirements, high-level design, and detailed design.  The major attributes that should be considered in this rating are analysis and design ability, efficiency and thoroughness, and the ability to communicate and cooperate.  The rating should not consider the level of experience of the analysts; that is rated with APEX, LTEX, and PLEX.  Analyst teams that fall in the 15th percentile are rated Very Low and those that fall in the 90th percentile are rated Very High.

Table 17. ACAP Cost Driver

| ACAP Descriptors   | 15th percentile | 35th percentile | 55th percentile | 75th percentile | 90th percentile |            |
| Rating Levels      | Very Low        | Low             | Nominal         | High            | Very High       | Extra High |
| Effort Multipliers | 1.42            | 1.19            | 1.00            | 0.85            | 0.71            | n/a        |

Programmer Capability (PCAP)

Current trends continue to emphasize the importance of highly capable analysts.  However, the increasing role of complex COTS packages, and the significant productivity leverage associated with programmers’ ability to deal with these COTS packages, indicates a trend toward higher importance of programmer capability as well.

Evaluation should be based on the capability of the programmers as a team rather than as individuals.  Major factors that should be considered in the rating are ability, efficiency and thoroughness, and the ability to communicate and cooperate.  The experience of the programmers should not be considered here; it is rated with APEX, LTEX, and PLEX.  A Very Low rated programmer team is in the 15th percentile and a Very High rated programmer team is in the 90th percentile.

Table 18. PCAP Cost Driver

| PCAP Descriptors   | 15th percentile | 35th percentile | 55th percentile | 75th percentile | 90th percentile |            |
| Rating Levels      | Very Low        | Low             | Nominal         | High            | Very High       | Extra High |
| Effort Multipliers | 1.34            | 1.15            | 1.00            | 0.88            | 0.76            | n/a        |

Personnel Continuity (PCON)

The rating scale for PCON is in terms of the project’s annual personnel turnover: from 3%, very high continuity, to 48%, very low continuity.

Table 19. PCON Cost Driver

| PCON Descriptors   | 48% / year | 24% / year | 12% / year | 6% / year | 3% / year |            |
| Rating Levels      | Very Low   | Low        | Nominal    | High      | Very High | Extra High |
| Effort Multipliers | 1.29       | 1.12       | 1.00       | 0.90      | 0.81      | n/a        |

Applications Experience (APEX)

The rating for this cost driver (formerly labeled AEXP) depends on the level of applications experience of the project team developing the software system or subsystem.  The ratings are defined in terms of the project team’s equivalent level of experience with this type of application.  A Very Low rating is for application experience of less than 2 months.  A Very High rating is for experience of 6 years or more.

Table 20. APEX Cost Driver

| APEX Descriptors   | ≤ 2 months | 6 months | 1 year  | 3 years | 6 years   |            |
| Rating Levels      | Very Low   | Low      | Nominal | High    | Very High | Extra High |
| Effort Multipliers | 1.22       | 1.10     | 1.00    | 0.88    | 0.81      | n/a        |

Language and Tool Experience (LTEX)

This is a measure of the level of programming language and software tool experience of the project team developing the software system or subsystem.  Software development includes the use of tools that perform requirements and design representation and analysis, configuration management, document extraction, library management, program style and formatting, consistency checking, planning and control, etc.  In addition to experience in the project’s programming language, experience on the project’s supporting tool set also affects development effort.  A Very Low rating is given for experience of less than 2 months.  A Very High rating is given for experience of 6 or more years.

Table 21. LTEX Cost Driver

| LTEX Descriptors   | ≤ 2 months | 6 months | 1 year  | 3 years | 6 years   |            |
| Rating Levels      | Very Low   | Low      | Nominal | High    | Very High | Extra High |
| Effort Multipliers | 1.20       | 1.09     | 1.00    | 0.91    | 0.84      | n/a        |

Platform Experience (PLEX)

The Post-Architecture model broadens the productivity influence of platform experience, PLEX (formerly labeled PEXP), by recognizing the importance of understanding the use of more powerful platforms, including more graphical user interface, database, networking, and distributed middleware capabilities.

Table 22. PLEX Cost Driver

| PLEX Descriptors   | ≤ 2 months | 6 months | 1 year  | 3 years | 6 years   |            |
| Rating Levels      | Very Low   | Low      | Nominal | High    | Very High | Extra High |
| Effort Multipliers | 1.19       | 1.09     | 1.00    | 0.91    | 0.85      | n/a        |

3.2.1.4              Project Factors

Project factors account for influences on the estimated effort such as use of modern software tools, location of the development team, and compression of the project schedule.

Use of Software Tools (TOOL)

Software tools have improved significantly since the 1970s-era projects used to calibrate the 1981 version of COCOMO.  The tool rating ranges from simple edit and code, Very Low, to integrated life-cycle management tools, Very High.  A Nominal TOOL rating in COCOMO 81 is equivalent to a Very Low TOOL rating in COCOMO II.  An emerging extension of COCOMO II is in the process of elaborating the TOOL rating scale and breaking out the effects of TOOL capability, maturity, and integration.

Table 23. TOOL Cost Driver

| TOOL Descriptors   | edit, code, debug | simple, frontend, backend CASE, little integration | basic life-cycle tools, moderately integrated | strong, mature life-cycle tools, moderately integrated | strong, mature, proactive life-cycle tools, well integrated with processes, methods, reuse |            |
| Rating Levels      | Very Low          | Low                                                | Nominal                                       | High                                                   | Very High                                                                                  | Extra High |
| Effort Multipliers | 1.17              | 1.09                                               | 1.00                                          | 0.90                                                   | 0.78                                                                                       | n/a        |

Multisite Development (SITE)

Given the increasing frequency of multisite developments, and indications that multisite development effects are significant, the SITE cost driver has been added in COCOMO II.  Determining its cost driver rating involves the assessment and judgement-based averaging of two factors: site collocation (from fully collocated to international distribution) and communication support (from surface mail and some phone access to full interactive multimedia).

For example, if a team is fully collocated, it doesn't need interactive multimedia to achieve an Extra High rating.  Narrowband e-mail would usually be sufficient.

Table 24. SITE Cost Driver

| SITE Collocation Descriptors    | International    | Multi-city and multi-company | Multi-city or multi-company | Same city or metro area           | Same building or complex                                    | Fully collocated       |
| SITE Communications Descriptors | Some phone, mail | Individual phone, FAX        | Narrowband email            | Wideband electronic communication | Wideband electronic communication, occasional video conference | Interactive multimedia |
| Rating Levels                   | Very Low         | Low                          | Nominal                     | High                              | Very High                                                   | Extra High             |
| Effort Multipliers              | 1.22             | 1.09                         | 1.00                        | 0.93                              | 0.86                                                        | 0.80                   |

Required Development Schedule (SCED)

This rating measures the schedule constraint imposed on the project team developing the software.  The ratings are defined in terms of the percentage of schedule stretch-out or acceleration with respect to a nominal schedule for a project requiring a given amount of effort.  Accelerated schedules tend to produce more effort in the earlier phases to eliminate risks and refine the architecture, and more effort in the later phases to accomplish more testing and documentation in parallel.  A schedule compression to 75% of nominal is rated Very Low.  A schedule stretch-out to 160% of nominal is rated Very High.  Stretch-outs do not add to or decrease effort: their savings from smaller team size are generally balanced by the need to carry project administrative functions over a longer period of time.  The nature of this balance is undergoing further research in concert with our emerging CORADMO extension to address rapid application development (go to http://sunset.usc.edu/COCOMOII/suite.html for more information).

SCED is the only cost driver used to describe the effect of schedule compression or expansion for the whole project.  The scale drivers are also used to describe the whole project.  All of the other cost drivers are used to describe each module in a multiple-module project.  Using the COCOMO II Post-Architecture model for multiple-module estimation is explained in Section 3.3.

Table 25. SCED Cost Driver

| SCED Descriptors  | 75% of nominal | 85% of nominal | 100% of nominal | 130% of nominal | 160% of nominal |            |
| Rating Levels     | Very Low       | Low            | Nominal         | High            | Very High       | Extra High |
| Effort Multiplier | 1.43           | 1.14           | 1.00            | 1.00            | 1.00            | n/a        |

SCED is also handled differently in the COCOMO II estimation of time to develop, TDEV.  This special use of SCED is explained in Section 4.

3.2.2       Early Design Model Drivers

This model is used in the early stages of a software project, when very little may be known about the size of the product to be developed, the nature of the target platform, the nature of the personnel to be involved in the project, or the detailed specifics of the process to be used.  This model could be employed in the Application Generator, System Integration, or Infrastructure development sectors.  For discussion of these marketplace sectors see [Boehm et al. 2000].

The Early Design model uses KSLOC or unadjusted function points (UFP) for size.  UFPs are converted to the equivalent SLOC and then to KSLOC as discussed in Section 2.3.  The application of project scale drivers is the same for the Early Design and Post-Architecture models and was described in Section 3.1.  In the Early Design model a reduced set of cost drivers is used, as shown in Table 26.  The Early Design cost drivers are obtained by combining the Post-Architecture model cost drivers.  Whenever an assessment of a cost driver falls halfway between two rating levels, always round toward the Nominal rating; e.g., if a cost driver rating is halfway between Very Low and Low, select Low.  The effort equation is the same as given in Equation 11 except that the number of effort multipliers is 7 (n = 7).
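The round-toward-Nominal rule can be sketched as a small helper.  The function is an illustrative assumption, not part of the model definition; rating levels are numbered as in the manual, with Very Low = 1 through Extra High = 6 and Nominal = 3.

```python
# Sketch of the rounding rule: when an assessment falls halfway between two
# adjacent rating levels, choose the level closer to Nominal (= 3).

NOMINAL = 3

def round_toward_nominal(lower, upper):
    """Pick, from two adjacent numeric rating levels, the one nearer Nominal."""
    return lower if abs(lower - NOMINAL) <= abs(upper - NOMINAL) else upper

# Halfway between Very Low (1) and Low (2) -> Low, as in the manual's example.
print(round_toward_nominal(1, 2))  # 2
# Halfway between High (4) and Very High (5) -> High.
print(round_toward_nominal(4, 5))  # 4
```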

Table 26. Early Design and Post-Architecture Effort Multipliers

| Early Design Cost Driver | Counterpart Combined Post-Architecture Cost Drivers |
| RCPX                     | RELY, DATA, CPLX, DOCU                              |
| RUSE                     | RUSE                                                |
| PDIF                     | TIME, STOR, PVOL                                    |
| PERS                     | ACAP, PCAP, PCON                                    |
| PREX                     | APEX, PLEX, LTEX                                    |
| FCIL                     | TOOL, SITE                                          |
| SCED                     | SCED                                                |

Overall Approach: Personnel Capability (PERS) Example

The following approach is used for mapping the full set of Post-Architecture cost drivers and rating scales onto their Early Design model counterparts.  It involves the use and combination of numerical equivalents of the rating levels.  Specifically, a Very Low Post-Architecture cost driver rating corresponds to a numerical rating of 1, Low is 2, Nominal is 3, High is 4, Very High is 5, and Extra High is 6.  For a combined Early Design cost driver, the numerical values of the contributing Post-Architecture cost drivers are summed, and the resulting totals are allocated to an expanded Early Design model rating scale going from Extra Low to Extra High.  The Early Design model rating scales always have a Nominal total equal to the sum of the Nominal ratings of their contributing Post-Architecture elements.

Personnel Capability (PERS)

An example will illustrate this approach.  The Early Design PERS cost driver combines the Post-Architecture cost drivers Analyst capability (ACAP), Programmer capability (PCAP), and Personnel continuity (PCON).  Each of these has a rating scale from Very Low (=1) to Very High (=5).  Adding up their numerical ratings produces values ranging from 3 to 15.  These are laid out on a scale, and the Early Design PERS rating levels assigned to them, as shown below.  The associated effort multipliers are derived from the ACAP, PCAP, and PCON effort multipliers by averaging the products of each combination of effort multipliers associated with the given Early Design rating level. 

For example, for PERS = Extra High (a sum of 14 or 15), the averaging involves four combinations of constituent ratings: ACAP, PCAP, and PCON all Very High, or one of the three rated High and the other two Very High.
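The averaging can be sketched as follows, using the ACAP, PCAP, and PCON effort multipliers from Tables 17, 18, and 19.  The helper function is an illustrative assumption; note that the computed average for the Extra High band comes out near, but not exactly equal to, the published 0.50, which also reflects calibration and rounding of the table values.

```python
# Sketch: enumerate every combination of constituent ratings whose numerical
# sum falls in a given PERS band and average the products of their multipliers.

from itertools import product

# Ratings Very Low..Very High numbered 1..5, mapped to each driver's EM.
ACAP = {1: 1.42, 2: 1.19, 3: 1.00, 4: 0.85, 5: 0.71}
PCAP = {1: 1.34, 2: 1.15, 3: 1.00, 4: 0.88, 5: 0.76}
PCON = {1: 1.29, 2: 1.12, 3: 1.00, 4: 0.90, 5: 0.81}

def pers_multiplier(sum_values):
    """Average the EM products over all rating combinations in the band."""
    products = [ACAP[a] * PCAP[p] * PCON[c]
                for a, p, c in product(ACAP, PCAP, PCON)
                if a + p + c in sum_values]
    return sum(products) / len(products)

# Extra High band: sums of 14 or 15 (four combinations).
print(round(pers_multiplier({14, 15}), 2))  # 0.49, vs. 0.50 in Table 27
```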

Table 27. PERS Cost Driver

| Sum of ACAP, PCAP, PCON Ratings   | 3, 4      | 5, 6     | 7, 8 | 9       | 10, 11 | 12, 13    | 14, 15     |
| Combined ACAP and PCAP Percentile | 20%       | 35%      | 45%  | 55%     | 65%    | 75%       | 85%        |
| Annual Personnel Turnover         | 45%       | 30%      | 20%  | 12%     | 9%     | 6%        | 4%         |
| Rating Levels                     | Extra Low | Very Low | Low  | Nominal | High   | Very High | Extra High |
| Effort Multipliers                | 2.12      | 1.62     | 1.26 | 1.00    | 0.83   | 0.63      | 0.50       |

The Nominal PERS rating of 9 corresponds to the sum (3 + 3 + 3) of the Nominal ratings for ACAP, PCAP, and PCON, and its corresponding effort multiplier is 1.0.  Note, however, that the Nominal PERS rating of 9 can result from a number of other combinations, e.g. 1 + 3 + 5 = 9 for ACAP = Very Low, PCAP = Nominal, and PCON = Very High.

The rating scales and effort multipliers for PERS and the other Early Design cost drivers maintain consistent relationships with their Post-Architecture counterparts.  For example, the PERS Extra Low rating levels (20% combined ACAP and PCAP percentile; 45% personnel turnover) represent averages of the ACAP, PCAP, and PCON rating levels adding up to 3 or 4.

Maintaining these consistency relationships between the Early Design and Post-Architecture rating levels ensures consistency of Early Design and Post-Architecture cost estimates.  It also enables the rating scales for the individual Post-Architecture cost drivers to be used as detailed backups for the top-level Early Design rating scales given above.

Product Reliability and Complexity (RCPX)

This Early Design cost driver combines the four Post-Architecture cost drivers Required software reliability (RELY), Database size (DATA), Product complexity (CPLX), and Documentation match to life-cycle needs (DOCU).  Unlike the PERS components, the RCPX components have rating scales with differing widths.  RELY and DOCU range from Very Low to Very High; DATA ranges from Low to Very High; and CPLX ranges from Very Low to Extra High.  The numerical sum of their ratings thus ranges from 5 (VL, L, VL, VL) to 21 (VH, VH, EH, VH).

Table 28 assigns RCPX ratings across this range, and associates appropriate rating scales with each of the RCPX ratings from Extra Low to Extra High.  As with PERS, the Post-Architecture RELY, DATA, CPLX, and DOCU rating scales discussed in Section 3.2.1.1 provide detailed backup for interpreting the Early Design RCPX rating levels.

Table 28. RCPX Cost Driver

| Sum of RELY, DATA, CPLX, DOCU Ratings  | 5, 6        | 7, 8     | 9 - 11 | 12       | 13 - 15 | 16 - 18      | 19 - 21           |
| Emphasis on reliability, documentation | Very Little | Little   | Some   | Basic    | Strong  | Very Strong  | Extreme           |
| Product complexity                     | Very simple | Simple   | Some   | Moderate | Complex | Very complex | Extremely complex |
| Database size                          | Small       | Small    | Small  | Moderate | Large   | Very Large   | Very Large        |
| Rating Levels                          | Extra Low   | Very Low | Low    | Nominal  | High    | Very High    | Extra High        |
| Effort Multipliers                     | 0.49        | 0.60     | 0.83   | 1.00     | 1.33    | 1.91         | 2.72              |

Developed for Reusability (RUSE)

This Early Design model cost driver is the same as its Post-Architecture counterpart, which is covered in Section 3.2.1.

Platform Difficulty (PDIF)

This Early Design cost driver combines the three Post-Architecture cost drivers Execution time constraint (TIME), Main storage constraint (STOR), and Platform volatility (PVOL).  TIME and STOR range from Nominal to Extra High; PVOL ranges from Low to Very High.  The numerical sum of their ratings thus ranges from 8 (N, N, L) to 17 (EH, EH, VH).

Table 29 assigns PDIF ratings across this range, and associates the appropriate rating scales with each of the PDIF rating levels.  The Post-Architecture rating scales in Tables 14, 15, and 16 provide additional backup definition for the PDIF rating levels.

Table 29. PDIF Cost Driver

| Sum of TIME, STOR, and PVOL Ratings | 8           | 9       | 10 - 12           | 13 - 15   | 16, 17          |
| Time and storage constraint         | ≤ 50%       | ≤ 50%   | 65%               | 80%       | 90%             |
| Platform volatility                 | Very stable | Stable  | Somewhat volatile | Volatile  | Highly volatile |
| Rating Levels                       | Low         | Nominal | High              | Very High | Extra High      |
| Effort Multipliers                  | 0.87        | 1.00    | 1.29              | 1.81      | 2.61            |

Personnel Experience (PREX)

This Early Design cost driver combines the three Post-Architecture cost drivers Application experience (APEX), Language and tool experience (LTEX), and Platform experience (PLEX).  Each of these ranges from Very Low to Very High; as with PERS, the numerical sum of their ratings ranges from 3 to 15.

Table 30 assigns PREX ratings across this range, and associates appropriate effort multipliers and rating scales with each of the rating levels.

Table 30. PREX Cost Driver

| Sum of APEX, PLEX, and LTEX Ratings                 | 3, 4      | 5, 6     | 7, 8     | 9       | 10, 11  | 12, 13    | 14, 15     |
| Applications, Platform, Language and Tool Experience | ≤ 3 mo.  | 5 months | 9 months | 1 year  | 2 years | 4 years   | 6 years    |
| Rating Levels                                       | Extra Low | Very Low | Low      | Nominal | High    | Very High | Extra High |
| Effort Multipliers                                  | 1.59      | 1.33     | 1.22     | 1.00    | 0.87    | 0.74      | 0.62       |

Facilities (FCIL)

This Early Design cost driver combines two Post-Architecture cost drivers: Use of software tools (TOOL) and Multisite development (SITE).  TOOL ranges from Very Low to Very High; SITE ranges from Very Low to Extra High.  Thus, the numerical sum of their ratings ranges from 2 (VL, VL) to 11 (VH, EH).

Table 31 assigns FCIL ratings across this range, and associates appropriate rating scales with each of the FCIL rating levels.  The individual Post-Architecture TOOL and SITE rating scales in Section 3.2.1 again provide additional backup definition for the FCIL rating levels.

Table 31. FCIL Cost Driver

| Sum of TOOL and SITE Ratings | 2                                             | 3                                              | 4, 5                                                      | 6                                                          | 7, 8                                                        | 9, 10                                       | 11                                                       |
| TOOL support                 | Minimal                                       | Some                                           | Simple CASE tool collection                               | Basic life-cycle tools                                     | Good; moderately integrated                                 | Strong; moderately integrated               | Strong; well integrated                                  |
| Multisite conditions         | Weak support of complex multisite development | Some support of complex multisite development  | Some support of moderately complex multisite development  | Basic support of moderately complex multisite development  | Strong support of moderately complex multisite development  | Strong support of simple multisite development | Very strong support of collocated or simple multisite development |
| Rating Levels                | Extra Low                                     | Very Low                                       | Low                                                       | Nominal                                                    | High                                                        | Very High                                   | Extra High                                               |
| Effort Multipliers           | 1.43                                          | 1.30                                           | 1.10                                                      | 1.00                                                       | 0.87                                                        | 0.73                                        | 0.62                                                     |

Required Development Schedule (SCED)

This Early Design model cost driver is the same as its Post-Architecture counterpart, which is covered in Section 3.2.1.

3.3              Multiple Module Effort Estimation

Usually, software systems are composed of multiple subsystems or components.  COCOMO II can be used to estimate effort and schedule for multiple components.  The technique described here is for one level of sub-components; for multiple levels of sub-components see [Boehm 1981].

The COCOMO II method does not simply sum the estimates for each component, as this would ignore the effort due to integration of the components.  The COCOMO II multiple-module method for n modules has the following steps:

 

1. Sum the sizes for all of the components, Size_i, to yield an aggregate size.

2. Apply the project-level drivers, the Scale Drivers and the SCED Cost Driver, to the aggregate size to derive the overall basic effort for the total project, PM_Basic.  The Scale Drivers are discussed in Section 3.1 and SCED is discussed in Section 3.2.1.4.

3. Determine each component’s basic effort, PM_Basic(i), by apportioning the overall basic effort to each component based on its contribution to the aggregate size.

4. Apply the component-level Cost Drivers (excluding SCED) to each component’s basic effort.

5. Sum each component’s effort to derive the aggregate effort, PM_Aggregate, for the total project.

6. The schedule is estimated by repeating steps 2 through 5 without the SCED Cost Driver in step 2.  Using this modified aggregate effort, PM'_Aggregate, the schedule is derived using Equation 14 in Section 4.
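The multiple-module steps can be sketched as follows.  A = 2.94 and E = 1.15 are placeholder values assumed for illustration; in COCOMO II, E comes from the scale drivers and A from calibration.  Here `sced_em` is the project-level SCED effort multiplier, and `component_ems` holds the product of each component's other cost-driver multipliers.

```python
# Sketch of the multiple-module effort procedure (steps 1-5 above).
# A and E below are illustrative placeholders, not calibrated values.

def multi_module_effort(sizes_ksloc, component_ems, sced_em=1.0, A=2.94, E=1.15):
    """Return the aggregate effort PM_Aggregate in person-months."""
    # Step 1: aggregate size.
    total_size = sum(sizes_ksloc)
    # Step 2: overall basic effort with the project-level SCED driver applied.
    pm_basic = A * total_size ** E * sced_em
    # Steps 3-5: apportion by size share, apply component EMs, and sum.
    return sum(pm_basic * (size / total_size) * em
               for size, em in zip(sizes_ksloc, component_ems))

# Two components, 30 and 20 KSLOC, with all-Nominal cost drivers (EM = 1.0):
print(round(multi_module_effort([30.0, 20.0], [1.0, 1.0]), 1))
```

With all multipliers at 1.0, the result equals the single-module estimate for the aggregate size, since the size-proportional apportioning in step 3 sums back to the overall basic effort.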