ICSSEA 2008 - Presentation

 

Software & Systems Engineering and their Applications

Paris, December 9-11, 2008

Scaling up and Paradigm Shifts: Are software & systems engineering ready for speciation?

 

Large dimensions, high complexity, self-management, and ubiquity characterize today’s computer systems. These features are expected to become even more pronounced in the systems to come, whether they relate to civilian, governmental, or business applications. They call for entirely new approaches to designing, implementing, validating, maintaining, and exploiting software-intensive systems.

With regard to the enterprise computing space, a much-sought evolution is driven by the need to bring business and IT together, i.e., to shift control of processes from IT professionals to business experts in order to scale up business process management to support processes of higher complexity.

Concerning software engineering, a new approach, Model-Driven Engineering, no longer considers the source code the central element of software; rather, the source code becomes an element derived from the fusion or interweaving of model units. This approach is becoming more and more important in the context of software and hardware architectures driven by standards and initiatives such as MDA (Model-Driven Architecture), proposed by the OMG standards body, Microsoft's Software Factories, and IBM's EMF (Eclipse Modeling Framework) tools.
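As a minimal, hypothetical sketch of this model-driven idea (deliberately tool-neutral, i.e., not tied to MDA, Software Factories, or EMF, and with a model format invented purely for illustration), source code can be derived from a model unit by a small generator:

```python
# Hypothetical model unit: a tool-neutral description of an entity.
# Real MDE tool chains (MDA, EMF, etc.) use far richer metamodels.
def generate_class(model):
    """Derive Python source code from a model unit."""
    lines = [f"class {model['name']}:"]
    params = ", ".join(f"{f}: {t}" for f, t in model["fields"].items())
    lines.append(f"    def __init__(self, {params}):")
    for field in model["fields"]:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

invoice_model = {"name": "Invoice", "fields": {"number": "int", "total": "float"}}
print(generate_class(invoice_model))
```

Here the source code is literally a derived artifact: regenerating it after a model change keeps code and model consistent, which is the central claim of the approach.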

Co-organized by TELECOM ParisTech, CS, and the “Génie Logiciel” quarterly, in cooperation with the “Software Engineering”, “Complex Systems” and “Trustworthy Systems” Clubs of SEE, the 21st edition of the ICSSEA Conference (International Conference on Software & Systems Engineering and their Applications) will be held in Paris on December 9-11, 2008. It aims at providing a critical survey of the current status of tools, methods, and processes for elaborating software & systems, gathering actors from the enterprise and research worlds. Lectures and discussions will be conducted with the evolution of software & systems engineering as a leitmotiv.

 

 

ICSSEA 2008 - Program Global View  

Tuesday, December 9        9:00 - 17:30
9:00 - 9:30 Registration
AMPHI THEVENIN
9:30 - 10:30     Session 1:    Inaugural Lectures   

Chair: Elie Najm

You can’t get there from here!: Problems and potential solutions in developing new classes of complex computing systems
Michael G. Hinchey – University of Limerick (IRL)

      10:30 - 11:00  Break    
11:00 - 12:00    Session 1 (continued):    Inaugural Lectures 

Chair: Jean-Claude Rault

The requirements conundrum in complex systems engineering
Joseph K. DeRosa – The MITRE Corp. (USA)

    12:00 - 14:00  Lunch
 
   
AMPHI   THEVENIN   AMPHI   B312   AMPHI   B310

14:00 - 16:00

Chair: Axel Rennoch

Session 2
- Testing  

 

 

14:00 - 15:30 

Chair: Stanislaw Budkowski

Session 3 - Processes

 

14:00 - 16:00

Chair: Sylvie Vignes
 

Tools Presentations  1

 
      16:00 - 16:30  Break    

16:30 - 17:30

Chair: Joseph DeRosa

Session 5
- Systems engineering

 

16:30 - 17:30

Chair: Maarten Boasson

Session 6
- Governance

   

 

Wednesday, December 10        9:00 - 18:00 
AMPHI   THEVENIN   AMPHI   B312   AMPHI   B310

9:00 - 10:30

Chair: Nicolas Trèves

Session 7 -
Modeling  
 

 

9:00 - 10:30 

Tutorial 1
Systems Architectures  

 

9:00 - 11:00 

Tutorial 2
Model Based Testing

10:30 - 11:00  Break   10:30 - 11:00  Break   11:00 - 11:30  Break

11:30 - 12:30

Chair: Eric Lefèbvre

Session 8
- CSCW

 

11:00 - 12:30

Tutorial 1
Systems Architectures  

 

11:30 - 12:30

Tutorial 2 (part 2)
Model Based Testing

    12:00 - 14:00  Lunch    

14:00 - 15:30

Chair: Robert Swarz

Session 9
- V&V  

 

14:00 - 15:30

Tutorial 1
Systems Architectures  

 

14:00 - 15:00 

Chair: Jérôme Hugues

Tools Presentations 2

      15:30 - 16:00  Break    

16:00 - 17:00

Chair: Christian Winkler

Session 10
- Web

 

16:00 - 17:00

Tutorial 1
Systems Architectures

   
     

AMPHI THEVENIN

   
17:00 - 18:00     Invited Lecture

Chair: Laurent Pautet

Validation challenges of safety critical embedded systems
Peter Feiler – SEI (USA)

Thursday, December 11        9:00 - 12:30 
AMPHI   THEVENIN   AMPHI   B312   AMPHI   B310

9:00 - 10:00

Chair: Vassilka Kirova

Session 11
- MDE
 

 

9:00 - 10:30 

Tutorial 3
Sustainable software project estimation
 

 

9:00 - 10:30 

Tutorial 4
SysML - What's new in systems engineering

     10:30 - 11:00  Break    

11:00 - 12:00

Chair: Agusti Canals

Session 12
- Quality

 

11:00 - 12:00

Tutorial 3
Sustainable software project estimation

 

11:00 - 12:00

Tutorial 4
SysML - What's new in systems engineering

     

AMPHI THEVENIN

   
12:00 - 12:30    Closing Lecture

Chair: Jean-Claude Rault

Scalability also means more new features
Gérard Memmi – Casenet (USA)

ICSSEA 2008 - Program Detailed View   
Tuesday, December 9        9:00 - 17:30   
 
9:30 - 10:30    Session 1 - Inaugural Lectures          Chair: Elie Najm

You can’t get there from here!: Problems and potential solutions in developing new classes of complex computing systems
Michael G. Hinchey – University of Limerick (IRL)

The explosion of capabilities and new products within the sphere of information and communications technology (ICT) has fostered widespread, overly optimistic opinions regarding the industry, based on common but unjustified assumptions of quality and correctness of software. NASA faces this dilemma as it envisages advanced mission concepts involving large swarms of small spacecraft that will engage cooperatively to achieve science goals. Such missions involve levels of complexity that call for system development methods far beyond today's, which are inadequate for ensuring correct behavior of large numbers of interacting intelligent mission elements. New system development techniques recently devised through NASA-led research will offer innovative approaches to achieving correctness in complex system development, including autonomous swarm missions that exhibit emergent behavior, as well as general software products created by the software industry.

 
11:00 - 12:00     Session 1 - Inaugural Lectures  (continued)         Chair: Jean-Claude Rault

The requirements conundrum in complex systems engineering
Joseph K. DeRosa – The MITRE Corp. (USA)

The requirements management process has long been a mainstay of classical systems engineering. However, as the scale and complexity of the systems we engineer become greater, it is increasingly difficult to proceed logically from requirements to design. First, the requirements are not always known or knowable. Second, even if they are known at the level of overall desired capabilities, the exigencies of the many interacting stakeholders, processes, and technologies in the system and its environment make the allocation of those requirements down to stable subsystem requirements impossible. This paper defines the conditions under which classical requirements management fails as a systems engineering process, and outlines an adaptive requirements management process for complex systems engineering.

 

14:00 - 16:00     Session 2 - Testing                    Chair: Axel Rennoch

• A testing approach for SOA applications
Iva Krasteva, Ilina Manova – Rila Solutions (BG), and Sylvia Ilieva – Sofia University (BG)

• Mutation operators for WS-BPEL 2.0
Antonia Estero Botaro, Francisco Palomo Lozano, and Immaculada Medina Bulo – Universidad de Cadiz (E)

• Experiences in using principal component analysis for testing and analyzing complex system behavior
Pekka Tuuttila – Nokia Siemens Networks (SF) and Teemu Kanstrén – VTT (SF)

• Helping low maturity organizations to perform acceptance testing
Samuel Renault – CRP Henri Tudor (L)

    

14:00 - 15:30     Session 3 - Processes                Chair: Stanislaw Budkowski

• CMMI: a field experience
Bruno Meijer – Airbus (F)

• Requirements at the heart of the application development life cycle - Part 1
Pierre-Marie Potier – IBM Rational (F)

• Requirements at the heart of the application development life cycle - Part 2
Pierre-Marie Potier – IBM Rational (F)
 

    

 

    

16:30 - 17:30     Session 5 - Systems Engineering                Chair: Joseph DeRosa

• Requirements for a multidisciplinary structural-behavioural component model
Dusko Jovanovic – Neopost Technologies (NL)

• Multidisciplinary modelling: Current status and expectations in the Dutch TWINS consortium
F. P. M. Stappers, Lou J. A. M. Somers, and M. A. Reniers – Technische Universiteit Eindhoven (NL)

    

16:30 - 17:30      Session 6 - Governance                Chair: Maarten Boasson

• IBM Rational System Architect: Modeling from strategy to technical infrastructure
Jaafar Chraibi – IBM Rational (F)

• Certifying information systems: What are the main evolutions?
Laurent Hanaud – ADELI (F)
 

    

14:00 - 16:00 Tools Presentations 1               Chair: Sylvie Vignes

• MDA approach and generic data record/replay & analysis activities with the CSLab product
Marc Alter – DCNS (F)

• MDA for deploying applications as Web services using the Xcarecrows set of tools. A detailed example.
Pierre Kalfon and Min Xu – Cogenit (F)
 

 

ICSSEA 2008 - Program Detailed View
Wednesday, December 10,        9:00 - 18:00    
 

9:00 - 10:30 Session 7 - Modeling                 Chair: Nicolas Trèves


• IBM Rational System Architect: Modeling from strategy to technical infrastructure
TBA – IBM Rational (F)

• Specification of a formal semantics for the UML
Gasso Wilson Mwaluseke and Ali Abdallah – London South Bank University (GB)

• Process-integrated refinement patterns in UML
Timo Kehrer and Edmund Ihler – Stuttgart Media University-HDM (D)

 

 

9:00-17:00 Tutorial 1               

The Method Framework for Engineering System Architectures

Donald G. Firesmith – Software Engineering Institute (USA)

 

Typical general-purpose architecture methods and standards are incomplete and lack adequate flexibility to meet the specific needs of a project. Systems vary greatly in size, complexity, criticality, domain, operational independence of other systems, requirements volatility, required quality characteristics and attributes, as well as technology, its diversity, and volatility. Development organizations vary greatly in centralization, management and engineering culture, expertise, experience, and staff co-location. Projects and programs vary greatly in type, contracting, lifecycle scope, schedule, and funding. Finally, system stakeholders vary greatly in type, numbers, authority, and accessibility.

Based on the concept of situational method engineering, the Method Framework for Engineering System Architectures (MFESA) addresses these challenges by helping system architects, process engineers, and technical managers to develop appropriate, project-specific system architecture engineering methods that can be used to effectively and efficiently engineer high-quality system architectures.
This tutorial begins by providing the motivation for using MFESA including why system architecture engineering is critical to system and project success, the many different challenges facing system architects, and the principles that their system architecture engineering methods should follow to help them successfully meet these challenges. The tutorial continues by providing an introductory overview of MFESA including its four component parts: (1) the MFESA ontology defining architectural concepts, (2) the MFESA metamodel defining foundational types of architectural method components, (3) the MFESA repository of these reusable method components, and (4) the MFESA metamethod for constructing appropriate, project specific architectural engineering methods. The tutorial continues by covering each of these MFESA components in turn, including associated guidelines and pitfalls.

The tutorial then covers the first MFESA component – the MFESA ontology that defines the concepts and terminology underlying system architecture engineering as well as the relationships between these concepts. Concepts covered include system, system architecture, architectural structures, architectural patterns and mechanisms, architectural drivers and concerns (including quality requirements based on an extensive quality model), architectural representations, architectural models and views, architectural focus areas, architectural quality cases, and architectural visions.

The tutorial continues by covering the second MFESA component – the MFESA metamodel that defines the most general and abstract types of method components that can be used to construct project-specific system architecture engineering methods. This includes (1) architectural work products, (2) architectural work units that can be performed to produce the architectural work products, and (3) the architectural workers who perform the work units.

The tutorial then covers the third MFESA component – the MFESA repository and the reusable method components that it stores. This includes numerous architectural work products such as the system architecture and its many representations. It also includes architectural work units such as architectural tasks and techniques. Finally, it includes architectural workers such as the architects, their teams, and the tools they use.

The tutorial continues by covering the fourth and final MFESA component – the MFESA metamethod for constructing appropriate, project specific architectural engineering methods. This includes determining the project needs, selection of the appropriate reusable system architecture engineering method components, tailoring these selected components, and integration of these selected method components.

In summary, MFESA is an application of situational method engineering to the engineering of system architectures. It provides both the repository of reusable method components and a metamethod for using these components to create appropriate system architecture engineering methods. These reusable components are free and open source, so that any organization or team can use MFESA to produce the architecture engineering methods appropriate for them. MFESA provides the benefits of standardization through the underlying ontology and metamodel on which the reusable components are based. MFESA also provides the necessary benefits of flexibility in that it lets the architects, process engineers, and managers select the appropriate method components and tailor them as needed. Thus, unlike existing architecture engineering methods and generic architecture standards that are incomplete and insufficiently flexible, MFESA provides the system architecture community with the best of both worlds.

This tutorial summarizes the contents of the book by the same title, to be published by Auerbach during the fall of 2008.
Learner Outcomes: Attendees will learn:

  1. the foundational concepts and terminology underlying system architecture engineering,

  2. an overview of the method components of the MFESA framework including architectural work products (e.g., architectures and architectural representations such as documents and models), architectural work units (e.g., tasks and techniques), and architectural workers (e.g., roles, teams, and tools) that perform the work units to engineer the work products,

  3. how to improve their system architecture engineering processes.

   

 

Tutorial 2: 9:00-12:30         

Model-Based Testing

Part 1: Introduction to Model-based Testing and its use in the ITEA2 research project D-MINT

Thomas Bauer – Fraunhofer IESE (Germany) and Axel Rennoch – Fraunhofer FOKUS (Germany)

Since the development and maintenance of industrial test suites requires huge resources, several approaches and tools have been proposed to apply model-based techniques. Acceptance of these new techniques is still limited and varies across industries such as automotive, telecommunications, and automation. The presentation introduces industrial approaches, their usage, and their evaluation in the context of the international research project D-MINT.

Contents of the presentation:

  • Basic terminology

  • Test design and specification
    Standardized techniques (UTP, TTCN-3)
    Statistical testing

  • Industrial domains
    Requirements
    Practical approaches

  • D-MINT project
    Case studies

  • Common methodological approach

  • Evaluation processes

  • Tool support

Related link: www.d-mint.org

Audience: project managers, test engineers.

Model-Based Testing      

Part 2: A Case Study

Bruno Legeard – Smartesting and Université de Franche-Comté
 

This talk presents the deployment of a model-based testing solution, Smartesting Test Designer™, on a project migrating a critical financial application. The main objective is to test for functional regression of the application in the context of offshore development and an iterative development process.

Automated test scripts are generated automatically from a test model of the application; the test model was developed specifically for test generation purposes.

The project's results were on-time delivery, systematic non-regression testing for each release, and full coverage of the project's functional requirements. The model-based testing approach covers all functional testing needs, mostly through automated scripts but also through the remaining manual test cases.

Key points:

  • Experience report of deploying a model-based testing solution in a context of offshore development.

  • Showing how this accelerates and reduces the cost of producing and maintaining automated tests.

  • Giving project metrics on test generation productivity and accuracy of generated tests.
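As a generic sketch of the underlying idea (a toy illustration, not the Smartesting Test Designer algorithm, with a model format invented for this example), test sequences can be derived from a small state-machine test model so that every transition is exercised at least once:

```python
from collections import deque

# Toy model-based test generation: the test model is a state machine
# mapping (state, event) -> next state.
def generate_tests(transitions, start):
    """Return one event sequence per transition, each reaching the
    transition's source state (via shortest path) and then firing it."""
    paths = {start: []}  # shortest event path to each reachable state
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, event), dst in transitions.items():
            if src == state and dst not in paths:
                paths[dst] = paths[state] + [event]
                queue.append(dst)
    return [paths[src] + [event]
            for (src, event) in transitions
            if src in paths]

model = {("logged_out", "login"): "logged_in",
         ("logged_in", "view"): "logged_in",
         ("logged_in", "logout"): "logged_out"}
```

Calling `generate_tests(model, "logged_out")` yields one sequence per transition; rendering each sequence as an executable script is where the automation savings come from.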

    

11:30 - 12:30 Session 8 - CSCW               Chair: Eric Lefèbvre


• Addressing change in collaborative software development through integrated artifact flow and dependence analysis
Vassilka Kirova – Alcatel-Lucent (USA), Thomas Marlowe – Seton Hall University (USA), and Mojgan Mohtashami – Advanced Infrastructure Design (USA)

• Semantic Web services as enabler of collaborative networked organizations
Josef Withalm and Walter Wölfel – Siemens (D)


 

 

14:00 - 15:30 Session 9 - V & V               Chair: Robert Swarz


• The OlivaNova model execution system (ONME) and its optimization through linguistic validation methods
Günther Fliedl, Christian Winkler, and Horst A. Kandutsch – Universität Klagenfurt (A)

• Formal verification of use case sequence diagrams consistency
Mouez Ali, Hanene Ben-Abdallah, and Faïez Gargouri – Sfax University (TN)

• Consistency checking in multiple state diagrams
Mohammad N. Alanazi and David A. Gustafson – Kansas State University (USA)
 

 

16:00 - 17:00 Session 10 - Web                Chair: Christian Winkler


• Ontology matching: Context-based similarity of attributes
Irina Astrova, Arne Koschel – Tallinn University of Technology (EE), and Ahto Kalja – Fachhochschule Hannover (D)

• WebLab: An integration infrastructure to ease the development of multimedia processing applications
Patrick Giroux, Stephan Brunessaux, Sylvie Brunessaux, Jérémie Doucy, Gérard Dupont, Bruno Grilhères, Yann Mombrun, and Arnaud Saval – EADS (F)

 

 

14:00 - 16:00 Tools Presentations  2           Chair: Jérôme Hugues

• Automating testing at the model level using UML Testing Profile
Charles-Henri Jurd – IBM Rational (F)
 

 

17:00 - 18:00 Invited Talk              Chair: Laurent Pautet

Validation challenges of safety critical embedded systems

Peter Feiler – SEI (USA)

Safety-critical systems are systems whose failure or malfunction may result in injury or death, damage or loss of equipment, or damage to the environment. These safety risks are managed by a range of safety analyses, from hazard analysis to fault tree analysis. As safety-critical systems have become increasingly software-intensive, the embedded software system has become an increasing risk factor. For this reason, the SAE Architecture Analysis & Design Language (AADL) international standard has been developed to support model-based engineering of embedded and real-time software-intensive systems.

In this presentation we examine how AADL contributes to model-based validation of systems, to consistency between different analytical models of the same system, and to validation of the implementation against the validated models. We will illustrate model-based analysis throughout the life cycle, at different degrees of fidelity and formality, with examples in terms of security, latency, and model checking of redundancy logic. The presentation concludes with an illustration of the challenges in preserving validated operational properties when producing an implementation.
ICSSEA 2008 - Program Detailed View     
Thursday, December 11,      9:00 - 12:30    
 

9:00 - 10:00 Session 11 - MDE               Chair: Vassilka Kirova


• Model-based scheduling analysis with the UML profile for MARTE
Sébastien Demathieu and Laurent Rioux – Thalès (F)


• Achieving object-oriented systems modernization through model-driven architecture migration
Ismail Khriss, Gino Chénard, and Aziz Salah – Université du Québec (CDN)
 

 

9:00-12:00 Tutorial 3      

Sustainable Software Project Estimation

Bernard Londeix – Telmaco (GB)
 

Software production is becoming more and more quantitatively managed so as to become like other industries. Year after year, software projects are better defined and benchmarked. This could not have happened without the advent of particular standards and methods for measuring and estimating pieces of software and their related projects. The MeterIT tool suite from Telmaco covers the complete metrics process in a straightforward, practical fashion.

Proper planning of software projects requires a project estimating method based on the quantity of software to be processed. This quantity has to be measured or estimated from the very beginning of the project life cycle. Then, knowing the amount of software to be worked on, the project can be estimated in terms of its effort and duration requirements. The use of the software size to estimate projects requires beforehand the establishment of a collection of predictors. These predictors are usually produced during the benchmarking of a set of projects, thus quantifying the capability of the organization at that time.

However, predictors and software measures depend on the measurement standard used. The most widely used standards, which have obtained ISO recognition, belong to the category of first-generation (1G) sizing standards: IFPUG, NESMA, and Mark II and their derivatives. Having been much used and practiced over the past 20 years, largely thanks to their availability, their insufficiencies have also progressively become evident. Hence, a new ISO standard has become necessary: COSMIC, the second-generation (2G) measurement standard. The number of COSMIC-measured projects is growing at such a rate that it can be expected to overtake IFPUG-measured projects in the near future. The problem now lies with those organizations that have built up large knowledge bases of 1G software measurements. Re-measuring a previous portfolio of thousands of measurements must be avoided as uneconomical; converting the 1G sizes into 2G sizes, however, is a very productive option. There is thus some urgency to convert to 2G measures, or else risk losing these metrics assets.

The proposed method is to use MeterIT-Converter, a size conversion tool from Telmaco. MeterIT-Converter offers three main capabilities: (i) conversion from 1G sizes to COSMIC, (ii) retro-conversion from COSMIC to each of the three main 1G standards, and (iii) a facility to produce and maintain one's own Conversion Algorithm (CA). Once the conversion facility is established, the predictors repository can easily be migrated from 1G to 2G using MeterIT-Project, Telmaco's benchmarking tool. The measurement of new requirements can then benefit from the COSMIC standard, supported by a tool such as MeterIT-Cosmic, Telmaco's COSMIC measurement tool. The predictors repository can then be used to calibrate PredictIT, Telmaco's project estimation tool, to estimate new projects.
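As a hedged sketch of what such a Conversion Algorithm (CA) might look like, assuming a simple linear form; the function names are invented here, and the coefficients are placeholders to be calibrated, not published IFPUG-to-COSMIC conversion factors:

```python
# Illustrative linear Conversion Algorithm between a first-generation
# size (e.g., IFPUG function points) and a COSMIC (2G) size in CFP.
# slope and intercept must be calibrated on one's own measured portfolio.
def convert_1g_to_cosmic(fp_1g, slope, intercept):
    """Estimate a COSMIC size from a 1G functional size."""
    return slope * fp_1g + intercept

def convert_cosmic_to_1g(cfp, slope, intercept):
    """Retro-convert a COSMIC size back to the 1G scale."""
    return (cfp - intercept) / slope
```

Calibrating the coefficients against a sample of dual-measured projects is what lets an organization preserve its existing 1G knowledge base without re-measuring thousands of items.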

In summary, then, the MeterIT tool suite supports software management in the Plan-Do-Check-Act Deming cycle, keeping the predictor repository up to date during the enhancement of the sizing methodology to 2G, irrespective of the diversity of standards among outsourcers.

    
 

 

Tutorial 4: 9:00-12:00    

SysML: What’s New in Systems Engineering?

Françoise Caron – EIRIS Conseil (F)
 

What is the Systems Modeling Language (SysML) standard? What are its objectives? What are the explicit and implicit concepts that underlie the choices for the current version?

The purpose of SysML is to bring together the state-of-the-art modeling practices used in complex systems engineering through a unified modeling language based on the fundamentals of the Unified Modeling Language, version 2 (UML2). UML2 addresses the needs of software system design and production, whereas SysML addresses the development needs of both complex industrial systems (such as aeronautical, railway, or space systems) and systems of systems (such as defense or air traffic management systems). Thus, one cannot simply transpose the principles of UML2, on which software developments are based, to SysML deployment, which requires a good understanding of the principles of complex systems engineering.

This tutorial aims at presenting the contributions of SysML to the recurring needs of complex system modeling in the context of industrial projects. It introduces the two essential aspects of SysML: system design and architecture modeling and, during the development process, the formalization of traceability links such as the derivation of requirements or allocations. It stresses the utility of the UML2 notation elements and diagrams adopted by SysML as a language for complex systems engineering, and it sums up the notation elements specific to SysML. Beyond SysML, the tutorial also presents the “methods-and-tools issues” raised by any instrumented traceability strategy involving models.

A debate will follow the presentation to allow exchanges between attendees, especially regarding experience feedback and methods-and-tools monitoring results.

    

11:00 - 12:00 Session 12 - Quality             Chair: Agusti Canals


• Toward a real integration of quality in software development
Nicolas Blanc – INSA (F), Marc Rambert – Kalistick (F), Jean-Louis Sourrouille, and Régis Aubry – INSA (F)

• Assessing quality of use cases descriptions using a mathematical model based on sequential events
Reyes Juárez-Ramirez, Guillermo Licea, Antonio Rodriguez-Diaz, and Alfredo Cristobal-Salas – Universidad Autonoma de Baja California (MX)

 

 

12:00 - 12:30 Closing Lecture           Chair: Jean-Claude Rault

Scalability also means more new features

Gérard Memmi – Casenet (USA)


Software scalability has multiple facets or dimensions. It can be, without limiting the topic, a question of complexity and performance, a question of data modeling and memory occupation, a question of networking capacity and bandwidth, or even a question of production organization and planning. In any case, it is an unavoidable question: as soon as a software product or application is successful, it will face one scalability dimension or another. To prepare for it and anticipate which dimension will come first is often at the heart of the dialog between the software architect and the marketing organization of a company.

First, the concept of software scalability is discussed. Haven't you heard "it does not scale up" in many meetings and reviews, from many different stakeholders across the company, often with a large degree of vagueness and ambiguity? Any attempt at defining and positioning software scalability has to consider it as a property of many different measurable aspects of a software product or application. Many, if not all, measurable parameters can be deduced from user activity, including the time spent using the software and the quantity of information that needs to be manipulated. These factors will drive our discussion.

It often appears that resolving scalability issues is one of the major activities of the maintenance phase of the software product life cycle. This point will be discussed, and we will describe how a software product evolves and scales up while maturing.

This lecture briefly reports a few experiences, analyzing how this vast and broad question was addressed in a start-up environment where engineering resources are, by definition, scarce. The start-up environment may well be felt to be a limiting factor for anticipating and designing for scalability from the start, however reasonable that seems and however strongly it is recommended by so many articles on software architecture.

Sometimes a scalability issue is resolved by changing the value of some compilation parameters; these are the good cases, where the architecture anticipated the issue. At other times, the problem is 'resolved' by deeply changing the software. The software then not only becomes more flexible, directly addressing the question, but some of its features and behavior also change, offering new usages either by design or, more rarely, in an unexpected way. We will see through our examples how this may happen, and how what could be considered a secondary effect is, in our opinion, an important piece of the puzzle that companies have to integrate and cope with in their product strategy, especially in the early stages of the product life cycle.