ICCSEA 2008 - Invited Talks

 Inaugural Lectures: Tuesday, December 9,    9:30 - 10:30

 

You can’t get there from here!: Problems and potential solutions in developing new classes of complex computing systems

Michael G. Hinchey – University of Limerick (IRL)

 

The explosion of capabilities and new products within the sphere of information and communications technology (ICT) has fostered widespread, overly optimistic opinions regarding the industry, based on common but unjustified assumptions about the quality and correctness of software. NASA faces this dilemma as it envisages advanced mission concepts involving large swarms of small spacecraft that will engage cooperatively to achieve science goals. Such missions involve levels of complexity that call for system development methods far beyond today's, which are inadequate for ensuring correct behavior of large numbers of interacting intelligent mission elements. New system development techniques recently devised through NASA-led research offer innovative approaches to achieving correctness in complex system development, both in autonomous swarm missions that exhibit emergent behavior and in general software products created by the software industry.

Mike Hinchey is Co-Director of Lero, the Irish Software Engineering Research Centre, and Professor of Software Engineering at the University of Limerick, Ireland. Until recently he was Director of the NASA Software Engineering Research Center. Hinchey received a BSc from the University of Limerick, an MSc from the University of Oxford, and a PhD from the University of Cambridge. He has previously held positions as full professor at universities in Ireland, the UK, Sweden, Australia, and the USA. The author or editor of more than 12 books and over 100 technical articles, he is Chair of the IEEE Technical Committee on Complexity in Computing and Vice Chair of the IEEE Technical Committee on Autonomous and Autonomic Systems. He is also the IEEE's representative to IFIP TC1 (Foundations of Computer Science), which he currently chairs, and was recently appointed Chair of the IFIP Technical Assembly.



 

 Inaugural Lectures: Tuesday, December 9,    11:00 - 12:00

 

The requirements conundrum in complex systems engineering

Joseph K. DeRosa – The MITRE Corp. (USA)

 

The requirements management process has long been a mainstay of classical systems engineering. However, as the scale and complexity of the systems we engineer grow, it becomes increasingly difficult to proceed logically from requirements to design. First, the requirements are not always known or knowable. Second, even if they are known at the level of overall desired capabilities, the exigencies of the many interacting stakeholders, processes, and technologies in the system and its environment make it impossible to allocate those requirements down to stable subsystem requirements. This paper defines the conditions under which classical requirements management fails as a systems engineering process, and outlines an adaptive requirements management process for complex systems engineering.

Dr. Joseph K. DeRosa is Director of Systems Engineering for the MITRE Corporation's Command and Control Center. He holds B.S., M.S., and Ph.D. degrees in Electrical Engineering from Northeastern University in Boston, Massachusetts, and has pursued graduate studies at the Babson College School of Business and the Santa Fe Institute. He has over 30 years' experience as a researcher, teacher, system developer, and manager. He has held a number of Director positions at MITRE in the research and development of software-intensive information systems. Before joining MITRE, he was a member of the technical staff at M.I.T. Lincoln Laboratory and Director of Business Development at Linkabit Corporation. He is a past Chairman of the Boston Section of the Institute of Electrical and Electronics Engineers Communications Society.



 

 Invited Talk: Wednesday, December 10,  17:00 - 18:00

 

Validation challenges of safety critical embedded systems

Peter Feiler – SEI (USA)

 

Safety-critical systems are systems whose failure or malfunction may result in injury or death, damage to or loss of equipment, or damage to the environment. These safety risks are managed through a range of safety analyses, from hazard analysis to fault tree analysis. As safety-critical systems have become increasingly software intensive, the embedded software system has become a growing risk factor. For this reason, the SAE Architecture Analysis & Design Language (AADL) international standard has been developed to support model-based engineering of embedded and real-time software-intensive systems.

In this presentation we examine how AADL contributes to model-based validation of systems, to consistency between different analytical models of the same system, and to validation of the implementation against the validated models. We will illustrate model-based analysis at different degrees of fidelity and formality throughout the life cycle, with examples in terms of security, latency, and model checking of redundancy logic. The presentation concludes with an illustration of the challenges in preserving validated operational properties when producing an implementation.
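
To make the idea of model-based latency analysis concrete, the sketch below shows, in Python, the kind of end-to-end latency check an analysis tool can derive from per-component latency properties declared in an architecture model. The component names, latency figures, and budget are invented for illustration and are not taken from the talk or from any AADL toolset.

    # Hypothetical sketch: the kind of end-to-end latency check a model-based
    # analysis can derive from per-component latency properties declared in an
    # architecture model. Component names, latencies, and the budget are
    # illustrative assumptions, not values from the talk or any AADL toolset.
    from typing import NamedTuple

    class Component(NamedTuple):
        name: str
        worst_case_latency_ms: float  # declared as a property in the model

    # One end-to-end flow through the embedded system (sensor -> control -> actuator).
    flow = [
        Component("sensor_driver", 2.0),
        Component("control_law", 5.0),
        Component("actuator_driver", 3.0),
    ]

    END_TO_END_BUDGET_MS = 12.0  # latency requirement attached to the flow

    def check_flow_latency(flow, budget_ms):
        """Sum the declared worst-case latencies and compare them to the budget."""
        total = sum(c.worst_case_latency_ms for c in flow)
        return total, total <= budget_ms

    total, ok = check_flow_latency(flow, END_TO_END_BUDGET_MS)
    print(f"worst-case end-to-end latency: {total} ms "
          f"({'within' if ok else 'exceeds'} the {END_TO_END_BUDGET_MS} ms budget)")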
 

Dr. Peter Feiler has been with the Software Engineering Institute (SEI) for 23 years. He is the technical lead and author of the Society of Automotive Engineers (SAE) Architecture Analysis & Design Language (AADL) standard. His research interests include dependable real-time systems, architecture languages for embedded systems, and predictable system engineering.



 

 Closing Lecture: Thursday, December 11,     12:00 - 12:30


Scalability also means more new features

Gérard Memmi – CaseNET (USA)

Software scalability has multiple facets or dimensions. It can be, without limiting the topic, a question of complexity and performance, a question of data modeling and memory occupation, a question of networking capacity and bandwidth, or even a question of production organization and planning. In any case, it is an unavoidable question: as soon as a software product or application is successful, it will face one scalability dimension or another. To prepare for it and anticipate which dimension will come first is often at the heart of the dialog between the software architect and the marketing organization of a company.

First, the concept of software scalability is discussed. Haven’t you heard “it does not scale up” in many meetings or reviews, from many different stakeholders in the company organization, often with a large degree of vagueness and ambiguity? Any attempt at defining and positioning software scalability has to consider it as a property of many different measurable aspects of a software product or application. Many, if not all, of these measurable parameters can be deduced from user activity, including the time users spend using the software and the quantity of information they need to manipulate. These factors will drive our discussion.

It often appears that resolving scalability issues is one of the major activities of the maintenance phase of the software product life cycle. This point will be discussed, and we will describe how a software product evolves and scales up as it matures.

This lecture briefly reports a few experiences analyzing how this vast and broad question was addressed in a start-up environment, where resources (in engineering workload) are by definition scarce. The start-up environment can certainly be felt as a limiting factor when it comes to anticipating and designing for scalability from the start, however reasonable that seems and however strongly it is recommended by so many articles on software architecture.

Sometimes a scalability issue is resolved by changing the value of some compilation parameters; these are the good cases, where the architecture anticipated the issue. At other times, the problem is ‘resolved’ by deeply changing the software. Then not only does the software become more flexible, directly addressing the question, but some of its features and behavior also change, offering new usages either by design or, more rarely, in an unexpected way. We will see through our examples how this may happen, and how what could be considered a secondary effect is, in our opinion, an important piece of the puzzle that companies have to integrate and cope with in their product strategy, especially in the early stages of the product life cycle.
 

Gérard Memmi has over 20 years' experience in the software industry. Currently, he is VP of Engineering at CaseNET, Inc. Prior to joining CaseNET, he was Chief Software Architect and a member of the executive team at DAFCA. Earlier, he held technical management and executive management positions with Avant! (acquired by Synopsys), Chrysalis Symbolic Design (acquired by Avant!), and the Bull Group. At Bull, he established and developed the company's Applied Research Laboratory in the USA, where he and his team developed one of the first real-time collaborative systems, as an aid to monitoring user activities in business processes.

During his academic career, Gérard Memmi was a professor of computer engineering at the École Nationale Supérieure des Télécommunications (ENST), now Telecom ParisTech.

Gérard Memmi has authored more than 50 technical papers and several patents, and has co-authored one book. He has been an invited speaker at several international conferences. He graduated from the École Nationale Supérieure des Télécommunications (ENST) and holds a PhD from the University of Paris.

ICCSEA 2008 - Tutorials

 Tutorial 1: Wednesday, December 10        9:00 - 17:00

 

The Method Framework for Engineering System Architectures

Donald G. Firesmith – Software Engineering Institute (USA)

 

Typical general-purpose architecture methods and standards are incomplete and lack the flexibility to meet the specific needs of individual projects. Systems vary greatly in size, complexity, criticality, domain, operational independence from other systems, requirements volatility, required quality characteristics and attributes, and in the diversity and volatility of their technology. Development organizations vary greatly in centralization, management and engineering culture, expertise, experience, and staff co-location. Projects and programs vary greatly in type, contracting, lifecycle scope, schedule, and funding. Finally, system stakeholders vary greatly in type, number, authority, and accessibility.

Based on the concept of situational method engineering, the Method Framework for Engineering System Architectures (MFESA) addresses these challenges by helping system architects, process engineers, and technical managers to develop appropriate, project-specific system architecture engineering methods that can be used to effectively and efficiently engineer high-quality system architectures.
This tutorial begins by providing the motivation for using MFESA, including why system architecture engineering is critical to system and project success, the many different challenges facing system architects, and the principles that their system architecture engineering methods should follow to help them successfully meet these challenges. It then provides an introductory overview of MFESA, including its four component parts: (1) the MFESA ontology defining architectural concepts, (2) the MFESA metamodel defining the foundational types of architectural method components, (3) the MFESA repository of these reusable method components, and (4) the MFESA metamethod for constructing appropriate, project-specific architecture engineering methods. The tutorial then covers each of these MFESA components in turn, including associated guidelines and pitfalls.

The tutorial then covers the first MFESA component – the MFESA ontology, which defines the concepts and terminology underlying system architecture engineering as well as the relationships between these concepts. Concepts covered include system, system architecture, architectural structures, architectural patterns and mechanisms, architectural drivers and concerns (including quality requirements based on an extensive quality model), architectural representations, architectural models and views, architectural focus areas, architectural quality cases, and architectural visions.

The tutorial continues by covering the second MFESA component – the MFESA metamodel that defines the most general and abstract types of method components that can be used to construct project-specific system architecture engineering methods. This includes (1) architectural work products, (2) architectural work units that can be performed to produce the architectural work products, and (3) the architectural workers who perform the work units.

The tutorial then covers the third MFESA component – the MFESA repository and the reusable method components that it stores. This includes numerous architectural work products, such as the system architecture and its many representations. It also includes architectural work units, such as architectural tasks and techniques. Finally, it includes architectural workers, such as the architects, their teams, and the tools they use.

The tutorial continues by covering the fourth and final MFESA component – the MFESA metamethod for constructing appropriate, project-specific architecture engineering methods. This includes determining the project's needs, selecting the appropriate reusable system architecture engineering method components, tailoring the selected components, and integrating them into a coherent method.

In summary, MFESA is an application of situational method engineering to the engineering of system architectures. It provides both a repository of reusable method components and a metamethod for using these components to create appropriate system architecture engineering methods. The reusable components are free and open source, so any organization or team can use MFESA to produce the architecture engineering methods appropriate for them. MFESA provides the benefits of standardization through the underlying ontology and metamodel on which the reusable components are based. It also provides the necessary flexibility in that it lets architects, process engineers, and managers select the appropriate method components and tailor them as needed. Thus, unlike existing architecture engineering methods and generic architecture standards, which are incomplete and insufficiently flexible, MFESA provides the system architecture community with the best of both worlds.

This tutorial summarizes the contents of the book of the same title, to be published by Auerbach in the fall of 2008.

Learner Outcomes: Attendees will learn:

  1. the foundational concepts and terminology underlying system architecture engineering,

  2. an overview of the method components of the MFESA framework including architectural work products (e.g., architectures and architectural representations such as documents and models), architectural work units (e.g., tasks and techniques), and architectural workers (e.g., roles, teams, and tools) that perform the work units to engineer the work products,

  3. how to improve their system architecture engineering processes.

 

Donald Firesmith is a senior member of the technical staff at the Software Engineering Institute (SEI), where he works in the Acquisition Support Program helping the Department of Defense acquire major systems. He has published dozens of technical articles, presented papers and given tutorials at numerous conferences, led several international workshops, and has served as program chair or program committee member for several conferences. With over 25 years of industry experience, he has published six books on software and systems engineering, primarily in the areas of process and object orientation. His latest book (Fall 2008) is on system architecture engineering, and he is currently completing a book on engineering safety- and security-related requirements. He is the founding chair of the OPEN Process Framework (OPF) Repository Organization (www.opfro.org), which provides the world's largest free, open-source website documenting over 1,100 reusable method components.



 

 Tutorial 2: Wednesday, December 10        9:00 - 12:30

 

Model-Based Testing

Part 1: Introduction to Model-based Testing and its use in the ITEA2 research project D-MINT
 

Thomas Bauer – Fraunhofer IESE (Germany) and Axel Rennoch – Fraunhofer FOKUS (Germany)

 

Since the development and maintenance of industrial test suites require huge resources, several approaches and tools have been proposed that apply model-based techniques. Acceptance of these new techniques is still limited and varies across industries such as automotive, telecommunications, and automation. The presentation introduces industrial approaches and their usage and evaluation in the context of the international research project D-MINT.

Contents of the presentation:

  • Basic terminology

  • Test design and specification
    Standardized techniques (UTP, TTCN-3)
    Statistical testing

  • Industrial domains
    Requirements
    Practical approaches

  • D-MINT project
    Case studies

  • Common methodological approach

  • Evaluation processes

  • Tool support

Related link: www.d-mint.org

Audience: project managers, test engineers.
 

Thomas Bauer is a scientist at the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern, Germany.

 

Axel Rennoch is a scientist at the Fraunhofer Institute for Open Communication Systems (FOKUS) in Berlin, Germany. Both are involved in the development of test methods and in the design and implementation of test suites in industrial and research projects.


 

Model-Based Testing

Part 2: A Case Study
 

Bruno Legeard – Smartesting and Université de Franche-Comté
 

This talk presents the deployment of a model-based testing solution, Smartesting Test Designer™, on a project migrating a critical financial application. The main objective was to test for functional regression of the application in the context of offshore development and an iterative development process.

Automated test scripts are generated from a test model of the application, a model developed specifically for test generation purposes.
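
As a rough illustration of the underlying idea (and not of Smartesting Test Designer's actual algorithm), the Python sketch below derives test sequences from a toy behavioral model so that every transition is exercised; the states, actions, and model are invented for this example.

    # Rough sketch of model-based test generation (not Smartesting Test
    # Designer's actual algorithm): from a toy state-machine model, derive one
    # test sequence per transition so that every transition is exercised.
    # States, actions, and the model itself are invented for illustration.
    from collections import deque

    # (source state, action) -> destination state
    transitions = {
        ("logged_out", "login"):    "logged_in",
        ("logged_in",  "transfer"): "logged_in",
        ("logged_in",  "logout"):   "logged_out",
    }

    def path_to(state, initial, transitions):
        """Breadth-first search for a shortest action sequence reaching `state`."""
        queue = deque([(initial, [])])
        seen = {initial}
        while queue:
            current, path = queue.popleft()
            if current == state:
                return path
            for (src, action), dst in transitions.items():
                if src == current and dst not in seen:
                    seen.add(dst)
                    queue.append((dst, path + [action]))
        return None

    def generate_tests(initial, transitions):
        """One abstract test case per transition: reach its source, then fire it."""
        tests = []
        for (src, action) in transitions:
            prefix = path_to(src, initial, transitions)
            if prefix is not None:
                tests.append(prefix + [action])
        return tests

    for test in generate_tests("logged_out", transitions):
        print(" -> ".join(test))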

The project results were on-time delivery, systematic non-regression testing for each release, and full coverage of the project's functional requirements. The model-based testing approach covers all functional testing needs, mostly through automated scripts but also through the remaining manual test cases.

Key points:

  • Experience report on deploying a model-based testing solution in the context of offshore development.

  • Showing how this accelerates and reduces the cost of producing and maintaining automated tests.

  • Giving project metrics on test generation productivity and accuracy of generated tests.
     

Prof. Dr. Bruno Legeard is Chief Technology Officer of Smartesting® (previously LEIRIOS) and Professor of Software Engineering at the University of Franche-Comté (France). He is co-author of the book “Practical Model-Based Testing – A Tools Approach” (Elsevier, 2006). He started working on model-based testing in the mid-1990s and has extensive experience in applying model-based testing to large information systems, e-transaction applications, and embedded software.



 

 Tutorial 3: Thursday, December 11        9:00 - 12:00

 

Sustainable Software Project Estimation

Bernard Londeix – Telmaco (GB)
 

Software production is becoming more and more quantitatively managed, bringing it closer to other industries. Year after year, software projects are better defined and benchmarked. This could not have happened without the advent of particular standards and methods for measuring and estimating pieces of software and their related projects. Telmaco's MeterIT tool suite covers the complete metrics process in a straightforward, practical fashion.

Proper planning of software projects requires a project estimation method based on the quantity of software to be processed. This quantity has to be measured or estimated from the very beginning of the project life cycle. Then, knowing the amount of software to be worked on, the project can be estimated in terms of its effort and duration requirements. Using software size to estimate projects first requires establishing a collection of predictors. These predictors are usually produced by benchmarking a set of projects, thereby quantifying the capability of the organization at that time.
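
As a simple illustration of size-based estimation (and not of Telmaco's PredictIT model), the Python sketch below applies predictors of the kind a benchmarking exercise might calibrate; the power-law form and every numeric value are assumptions chosen for the example.

    # Illustrative sketch (not Telmaco's PredictIT model): turning a measured
    # functional size into effort and duration estimates using predictors of
    # the kind a benchmarking exercise might calibrate. The power-law form and
    # every numeric value below are assumptions chosen for the example.
    PRODUCTIVITY_A = 9.0     # person-hours per size unit at the reference size (assumed)
    SCALE_EXPONENT_B = 1.05  # >1 models a diseconomy of scale (assumed)
    DURATION_C = 2.5         # duration coefficient in calendar months (assumed)
    DURATION_D = 0.35        # duration exponent (assumed)

    def estimate(size_cfp: float):
        """Estimate effort (person-hours) and duration (months) from functional size."""
        effort_ph = PRODUCTIVITY_A * size_cfp ** SCALE_EXPONENT_B
        person_months = effort_ph / 160          # roughly 160 hours per person-month
        duration_months = DURATION_C * person_months ** DURATION_D
        return effort_ph, duration_months

    effort, duration = estimate(250)  # a new project measured at 250 COSMIC Function Points
    print(f"estimated effort: {effort:.0f} person-hours, duration: {duration:.1f} months")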

However, predictors and software measures depend on the measurement standard used. The most widely used standards, which have obtained ISO recognition, belong to the category of first-generation (1G) sizing standards: IFPUG, NESMA, and Mark II, together with their derivatives. Having been much used over the past 20 years, largely because they were readily available, these standards have also progressively revealed their insufficiencies. Hence, a new ISO standard has become necessary: COSMIC, the second-generation (2G) measurement standard. The number of COSMIC-measured projects is growing at such a rate that we can expect it to overtake IFPUG-measured projects in the near future. The problem now lies with those organizations that have built up large knowledge bases of 1G software measurements. Re-measuring a portfolio of thousands of previous measurements must be avoided as uneconomical; converting the 1G sizes into 2G sizes, however, is a very productive option. Meanwhile, there is some urgency to convert to 2G measures or risk losing these metrics assets.
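
To make the conversion idea concrete (this is an illustrative sketch only, not MeterIT-Converter's actual Conversion Algorithm), the Python fragment below fits a straight-line conversion from 1G sizes to COSMIC sizes using a handful of projects measured in both standards; all data points are invented.

    # Illustrative sketch only (not MeterIT-Converter's actual Conversion
    # Algorithm): fit a straight-line conversion from 1G sizes (here IFPUG
    # function points) to COSMIC Function Points using a few projects measured
    # in both standards. All data points are invented for the example.
    dual_measured = [  # (IFPUG FP, COSMIC CFP) pairs from dual-measured projects
        (100, 110), (250, 290), (400, 470), (650, 780),
    ]

    def fit_linear(points):
        """Least-squares fit of cfp = slope * fp + intercept."""
        n = len(points)
        mean_x = sum(x for x, _ in points) / n
        mean_y = sum(y for _, y in points) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
                 / sum((x - mean_x) ** 2 for x, _ in points))
        return slope, mean_y - slope * mean_x

    slope, intercept = fit_linear(dual_measured)

    def convert_1g_to_2g(ifpug_fp):
        """Apply the fitted conversion to a legacy 1G measurement."""
        return slope * ifpug_fp + intercept

    print(f"estimated COSMIC size for a 300 FP legacy measurement: "
          f"{convert_1g_to_2g(300):.0f} CFP")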

The proposed method is to use MeterIT-Converter, a size conversion tool from Telmaco. MeterIT-Converter offers three main capabilities: (i) conversion from 1G to COSMIC, (ii) conversion back from COSMIC to each of the three main 1G standards, and (iii) the facility to produce and maintain one's own Conversion Algorithm (CA). Once the conversion facility is established, the predictors repository can easily be migrated from 1G to 2G using MeterIT-Project, Telmaco's benchmarking tool. The measurement of new requirements can then benefit from the COSMIC standard, supported by a tool such as MeterIT-Cosmic, Telmaco's COSMIC measurement tool. The predictors repository can then be used to calibrate PredictIT, Telmaco's project estimation tool, to estimate new projects.

In summary, the MeterIT tool suite supports software management through the Plan-Do-Check-Act Deming cycle, keeping the predictor repository up to date while the sizing methodology is upgraded to 2G, irrespective of the diversity of standards among outsourcers.
 

Bernard Londeix graduated in 1968 from the INP-ENSERG, Grenoble, and followed his interest in the then emergent French computing industry by joining the recently created CII in Louveciennes. This was the beginning of his participation in the production of software, moving from firmware to large business and real-time software, and it reached a further decisive stage when he was working within the ITT Group in London in 1979. Bernard's book "Cost Estimation for Software Development" was published by Addison-Wesley in 1987; around this time he also carried out his first software benchmarking exercise as a basis for the next steps. In order to make software measurement, project estimation, and benchmarking activities more readily available, Bernard created Telmaco Ltd (www.telmaco.com) in 1989, offering the MeterIT tool suite and related services. As a member of the Community of Practice of SMS (www.measuresw.com), and working to make software metrics more accessible to project management, Bernard contributes to setting the standard, teaching it, and practicing it at all organizational levels. In this role he has worked with a number of client organizations, including government institutions and software development organizations that are either independent or part of a diversity of industries. Bernard participates in the COmmon Software Measurement International Consortium (COSMIC) Group (www.cosmicon.com) and takes an active role in promoting the COSMIC method of measuring software production. This method has the unique advantage of being applicable to business, real-time, and infrastructure software, that is, to most of the software industry. Bernard is a Chartered Engineer (CEng) and a member of APM, BCS, UKSMA, COSMIC, and the IET.



 

 Tutorial 4: Thursday, December 11        9:00 - 12:00

 

SysML: What’s New in Systems Engineering?

Françoise Caron – EIRIS Conseil (F)
 

What is the Systems Modeling Language (SysML) standard? What are its objectives? What are the explicit and implicit concepts that underlie the choices for the current version?

The purpose of SysML is to bring together the state-of-the-art modeling practices used in complex systems engineering through a unified modeling language based on the fundamentals of the Unified Modeling Language, version 2 (UML2). UML2, however, addresses the needs of software system design and production, whereas SysML addresses the development needs of both complex industrial systems (such as aeronautical, railway, or space systems) and systems of systems (such as defense or air traffic management systems). Thus, one cannot simply transpose the principles of UML2, on which software developments are based, to SysML deployment, which requires a good understanding of the principles of complex systems engineering.

This tutorial presents the contributions of SysML to the recurring needs of complex system modeling in the context of industrial projects. It introduces the two essential aspects of SysML: the modeling of system design and architecture, and, during the development process, the formalization of traceability links such as the derivation of requirements or allocations. It stresses the utility of the UML2 elements of notation and diagrams adopted by SysML as a language for complex systems engineering, and it sums up the elements of notation specific to SysML. Beyond SysML, the tutorial also presents the methods-and-tools issues raised by any instrumented traceability strategy involving models.

A debate will follow the presentation to allow exchanges between attendees, especially regarding experience feedback and methods-and-tools monitoring results.

Françoise Caron is an Expert Consultant in Systems Engineering. Holding a PhD in quantum chemistry, she has broad experience of research and research methods in scientific environments. As a Development Engineer for three years at the Compagnie Internationale de Services en Informatique (CISI), a subsidiary of the Commissariat à l’Énergie Atomique (CEA), she gained experience coordinating fundamental research, applied research, and industrial partners in a simulation project for the pharmaceutical industry.

As a Quality and Process Engineer at Dassault Systèmes for 10 years, she deepened her understanding of industrial processes, working with both the vendor's clients and IBM, its distributor. She actively participated in the industrialization of CATIA, which became the leading CAD/CAM software.

As Principal Engineer and Management Consultant at Valtech for six years, she contributed to the development of Valtech's “Information Systems” business offering. She also developed a competence center addressing modular architecture issues for industrial customers. Progressively, this competence center detached itself from Valtech's core business, i.e., information systems, to focus on the specifics of systems engineering.

In May 2004 she created her own consultancy, EIRIS Conseil, and has since offered consulting and training, chiefly on systems engineering in industrial environments. She also leads the workgroup “Methods and Tools” of the Association Française d’Ingénierie Système (AFIS), the French chapter of the International Council on Systems Engineering (INCOSE).

www.eiris.fr
