Industry Papers

 

Industry Session 1: Best Paper Nominees

Session Chair: Sigrid Eldh, Ericsson

  1. Dependability Challenges in the Model-Driven Engineering of Automotive Systems Rakshith Amarnath, Peter Munk, Eike Thaden, Arne Nordmann and Simon Burton
  2. A Platform for Automating Chaos Experiments Ali Basiri, Aaron Blohowiak, Lorin Hochstein and Casey Rosenthal
  3. Using Learning Styles to Staff and Improve Software Inspection Team Performance Anurag Goswami, Gursimran Walia and Urvashi Rathod

Industry Session 2: Reliability through Test and Certification

Session Chair: Kristoffer Ankarberg, Ericsson

  1. A Planning Mechanism for Software Testing Pete Rotella
  2. Coverage-based Test Prioritization for Regression Testing of Configurable Software Dusica Marijan and Marius Liaen
  3. An Amanat-Based Multi-Party Certification Protocol for Outsourced Software in Automotive Systems Chung-Wei Lin, Shinichi Shiraishi and Baekgyu Kim

Industry Session 3: Enabling Reliability: From Space to Cyberphysical Systems

Session Chair: Alf Larsson, Ericsson

  1. Time and Space Partitioning Using On-board Software Reference Architecture Victor Bos, Timo Vepsäläinen, Yuliya Prokhorova and Timo Latvala
  2. A Framework To Support Generation and Maintenance of An Assurance Case Chung-Ling Lin and Wuwei Shen
  3. A Model Based Systems Engineering Approach to Resiliency Analysis of a Cyberphysical System Myron Hecht

Industry Session 4: Increasing Quality in an Agile Industry

Session Chair: Myron Hecht, The Aerospace Corporation

  1. Bug Bash: An efficient approach to increase test coverage and ensure product quality in an agile environment Uma Balasubramani, Kartik Iyer, Balaji Santhana Krishnan and Hema Kovvuri
  2. Lessons Learned: Using a Static Analysis Tool Within a Continuous Integration System Claude Bolduc
  3. Case Study: Project Management Using Cross Project Software Reliability Growth Model Considering System Scale Kiyoshi Honda, Nobuhiro Nakamura, Hironori Washizaki and Yoshiaki Fukazawa

 


Abstracts of accepted industry papers


Dependability Challenges in the Model-Driven Engineering of Automotive Systems Rakshith Amarnath, Peter Munk, Eike Thaden, Arne Nordmann and Simon Burton

Automotive products increase in complexity as they evolve from driver assistance towards highly automated and autonomous driving. We propose model-driven engineering (MDE) as a basis for developing these complex systems. Our contribution to this topic is twofold: First, we identify and discuss challenges in adapting MDE to the specific needs of the automotive industry, with a special focus on dependability. Second, we sketch challenges that still require interdisciplinary research activities.


A Platform for Automating Chaos Experiments Ali Basiri, Aaron Blohowiak, Lorin Hochstein and Casey Rosenthal 

The Netflix video streaming system is composed of many interacting services. In such a large system, failures in individual services are not uncommon. This paper describes the Chaos Automation Platform, a system for running failure-injection experiments on the production system to verify that failures in non-critical services do not result in system outages.
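
The abstract leaves the mechanism open; as a rough sketch of the core idea, a failure-injection experiment routes a small sample of production traffic into control and treatment groups and compares a key metric between them. The names and API below are invented for illustration, not the Chaos Automation Platform's actual interface:

```python
import random

def run_chaos_experiment(requests, inject_failure, key_metric, sample_rate=0.01):
    """Route a small sample of production requests into control and
    treatment groups; the treatment group exercises an injected failure
    in a non-critical service. Compare a key business metric between
    the groups; a significant drop means the failure is not safe."""
    control, treatment = [], []
    for req in requests:
        if random.random() > sample_rate:
            continue  # the vast majority of traffic is untouched
        if random.random() < 0.5:
            control.append(key_metric(req))
        else:
            treatment.append(key_metric(inject_failure(req)))
    avg = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return avg(control), avg(treatment)
```

Limiting the experiment to a small traffic sample bounds the blast radius: if the injected failure does matter, only a fraction of users see it before the experiment is halted.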


Using Learning Styles to Staff and Improve Software Inspection Team Performance Anurag Goswami, Gursimran Walia and Urvashi Rathod

Software companies strive to improve software quality by employing inspections of early software artifacts to detect and eliminate faults early. However, evidence suggests that the overall performance of an inspection team is highly dependent on individual inspectors' ability to detect faults. This paper leverages research on cognitive Learning Styles (LS) to improve inspection team performance. It presents results from studies with industrial professionals on the effect of an individual inspector's LS on inspection performance. The results show that inspection teams formed from inspectors with diverse LSs outperform teams of inspectors with similar LSs. These results can benefit software managers in staffing inspectors, enabling cost savings and improving quality.


A Planning Mechanism for Software Testing Pete Rotella

It is necessary to balance the contributions from four primary software testing 'dimensions' in the integration branch test cycle for waterfall and hybrid agile/waterfall projects to achieve a best-in-class customer experience: 1) sufficient testing resources (engineers) are needed to ensure that adequate testing is accomplished; 2) sufficient bug fixing resources (engineers) are needed to ensure that newly uncovered bugs are properly fixed; 3) adequate testing and bug fixing time/schedule is needed to ensure that there is sufficient time to run the test plans completely, to do enough regression testing, and to accommodate dead periods when test stoppers are encountered; and 4) new feature content must not be so high that the testing and fixing teams cannot complete their tasks and produce code that is sufficiently reliable.
These four dimensions are key parameters in any test planning exercise, and often, during the testing cycle itself, one or more of these parameters must be adjusted to satisfy the cost/schedule/reliability goals laid out at the project start. The work described in this paper attempts to construct a generalized model that quantifies the contributions from these dimensions and enables the practitioner to construct what-if scenarios to dynamically estimate customers' software reliability experience in their production networks.
The telecommunications software industry is experiencing a period of very high feature velocity and limited engineering resources, and software reliability sometimes suffers substantially as a result. The RSC model is an attempt to enable development and test managers to balance the available testing and bug fixing degrees of freedom to ensure high reliability.
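
The abstract does not give the model's functional form. Purely to illustrate the kind of what-if scenario it describes, here is a toy score over the four dimensions; the baseline values, weights, and functional form are invented and are not the paper's RSC model:

```python
def reliability_outlook(test_engineers, fix_engineers, test_weeks, feature_count,
                        baseline=(10, 8, 12, 50), weights=(0.3, 0.25, 0.25, 0.2)):
    """Toy what-if score over the four testing 'dimensions': testing
    resources, fixing resources, schedule, and feature content. Each
    dimension is compared to an (invented) baseline plan; feature
    content is inverted, since more new features strain the teams."""
    ratios = [
        test_engineers / baseline[0],          # testing resources vs. plan
        fix_engineers / baseline[1],           # bug-fixing resources vs. plan
        test_weeks / baseline[2],              # schedule vs. plan
        baseline[3] / max(feature_count, 1),   # inverted: more features hurt
    ]
    ratios = [min(r, 1.5) for r in ratios]     # cap windfalls on any dimension
    return sum(w * r for w, r in zip(weights, ratios))  # ~1.0 = on plan

# What-if: compress the schedule from 12 to 8 weeks while doubling features.
print(reliability_outlook(10, 8, 12, 50))   # baseline plan -> 1.0
print(reliability_outlook(10, 8, 8, 100))   # compressed, feature-heavy plan
```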


Coverage-based Test Prioritization for Regression Testing of Configurable Software Dusica Marijan and Marius Liaen

Practical testing of highly-configurable software requires efficient techniques for increasing the thoroughness of testing while reducing the overall effort spent. If highly-configurable software is developed using continuous integration, supporting frequent releases, the problem of cost-effective testing is additionally constrained by time-efficiency criteria. In addition to ensuring high configuration coverage by executing software in a variety of configurations, testing needs to provide rapid feedback on potential failures. To address this problem, we propose a history-based test prioritization approach that orders existing test cases to achieve three objectives: high fault detection, a short test feedback loop, and high configuration coverage. The approach has been implemented in our tool Titan. In the paper we describe the application of the tool for prioritizing test suites for highly-configurable software. We evaluate whether the approach can improve industry practice in a set of experiments using industrial test suites. The results show that the approach improves industry practice in terms of the three studied metrics.
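
Titan's actual algorithm is not given in the abstract. As a generic sketch of history-based, multi-objective prioritization (the weights and scoring function below are assumptions), one can order tests by a weighted score over fault history, execution time, and configuration coverage:

```python
def prioritize(tests, w_fault=0.5, w_time=0.3, w_cov=0.2):
    """Order tests by a weighted score over historical fault detection,
    execution time (shorter first, for a fast feedback loop), and the
    number of configurations a test covers. Generic sketch only."""
    max_time = max(t["exec_time"] for t in tests)
    max_cov = max(len(t["configs"]) for t in tests)
    def score(t):
        return (w_fault * t["fault_rate"]                   # past failures per run
                + w_time * (1 - t["exec_time"] / max_time)  # prefer quick tests
                + w_cov * len(t["configs"]) / max_cov)      # prefer broad coverage
    return sorted(tests, key=score, reverse=True)

# Example: the quick, broad-coverage test runs first despite fewer past faults.
suite = [
    {"name": "t1", "fault_rate": 0.4, "exec_time": 30, "configs": {"A", "B"}},
    {"name": "t2", "fault_rate": 0.1, "exec_time": 5,  "configs": {"A", "B", "C"}},
]
print([t["name"] for t in prioritize(suite)])  # ['t2', 't1']
```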


An Amanat-Based Multi-Party Certification Protocol for Outsourced Software in Automotive Systems Chung-Wei Lin, Shinichi Shiraishi and Baekgyu Kim

Software plays an increasingly important role in automotive systems. As autonomous features advance, it is believed that most behaviors of future automotive systems, including safety-critical functions, will be defined by software. Due to this safety-critical nature, the correctness and quality of automotive software draw concern from governments, Original Equipment Manufacturers (OEMs), and customers. To prove the fulfillment of government regulations, reduce OEMs' liability, and enhance customers' confidence, software certification is a promising technique, and it should integrate verification, simulation, and testing results in systematic and rigorous ways. Expecting that software certification will become a necessary step in future automotive design, we resolve the conflict between certification issuers and software developers in this paper. Based on the Amanat protocol [6], we address the corresponding challenges in the automotive domain and propose a protocol which guarantees "authenticity" to certification issuers and "confidentiality" to software developers at the same time. Authenticity means that only authenticated results from compilers and analysis tools (verification, simulation, and/or testing) are considered by the certification issuers in the certification process, and confidentiality means that the software developers' sensitive source code is not released to certification issuers. The proposed protocol is an important step towards the realization of automotive software certification.
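
As a toy illustration of the two properties (not the Amanat protocol itself, and simplified by using a shared HMAC key where a real protocol would use asymmetric signatures and trusted execution of the tools), a trusted analysis tool can attest to its results so the certifier can verify authenticity without ever seeing the source code:

```python
import hmac, hashlib, json

TOOL_KEY = b"shared-secret-between-tool-and-certifier"  # placeholder

def attest_results(results: dict) -> dict:
    """Run on the developer side by a trusted analysis tool: sign the
    verification/simulation/testing results so the certifier can check
    authenticity. Only results cross the boundary, never source code."""
    payload = json.dumps(results, sort_keys=True).encode()
    tag = hmac.new(TOOL_KEY, payload, hashlib.sha256).hexdigest()
    return {"results": results, "tag": tag}

def certifier_accepts(package: dict) -> bool:
    """Run by the certification issuer: accept only authenticated
    results, i.e., results whose tag matches the trusted tool's key."""
    payload = json.dumps(package["results"], sort_keys=True).encode()
    expected = hmac.new(TOOL_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["tag"])
```

Confidentiality holds because only the results object is released to the certifier; authenticity holds because a valid tag shows the results came from the trusted tool rather than being hand-edited by the developer.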


Time and Space Partitioning Using On-board Software Reference Architecture Victor Bos, Timo Vepsäläinen, Yuliya Prokhorova and Timo Latvala

In this paper, we present lessons learned from the EagleEye Time and Space Partitioning (TSP) project, in which time and space partitioning was applied to the EagleEye reference mission of the European Space Agency (ESA). We identify challenges in EagleEye TSP and categorize them, according to the design problem, as related to 1) communication and data sharing and 2) timing dependencies between partitions and the applications of the partitions. We also suggest improvements to the approach to tackle the challenges, based on ESA's On-Board Software Reference Architecture (OSRA).


A Framework To Support Generation and Maintenance of An Assurance Case Chung-Ling Lin and Wuwei Shen

One of the greatest challenges in software-intensive systems, such as safety-critical systems, is to ensure software quality assurance (called software assurance for brevity), which encompasses quality-related attributes such as reliability and security as well as functionality and performance. To this end, engineers prefer a safety case or an assurance case, expressed via Goal Structuring Notation (GSN), to convey information about software assurance in a system during its development. An assurance case, similar to a legal case, lays out an argumentation structure with supporting evidence to claim that software assurance in a system is achieved. However, due to the complexity of software-intensive applications, especially the heterogeneity of artifacts used as evidence, the creation and management of an assurance case become a challenging issue facing the safety-critical domains. In this report, we present a novel framework to automatically generate an assurance case via a safety pattern and further support the maintenance of an assurance case during a system's evolution. Finally, we use the Wheel Brake System (WBS) of an aircraft as a case study to illustrate the construction and maintenance of a safety case during a system's evolution.
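
The framework itself is not detailed in the abstract. As a minimal sketch of the GSN-style argumentation structure such frameworks work over (the node kinds are standard GSN concepts; the WBS goals and evidence below are invented examples, not the paper's case study content):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal GSN-style node: a goal is decomposed into sub-goals via a
    strategy, and leaf goals are solved by evidence artifacts."""
    kind: str                 # "Goal", "Strategy", or "Solution" (evidence)
    text: str
    children: list = field(default_factory=list)

wbs_case = Node("Goal", "The WBS is acceptably safe", [
    Node("Strategy", "Argue over each identified hazard", [
        Node("Goal", "Hazard H1 (loss of braking) is mitigated", [
            Node("Solution", "Fault tree analysis report for H1"),
        ]),
        Node("Goal", "Hazard H2 (inadvertent braking) is mitigated", [
            Node("Solution", "Test results TR-17"),
        ]),
    ]),
])

def fully_supported(node):
    """A maintenance check after system evolution: every leaf goal must
    still be backed by evidence; an unsupported goal breaks the case."""
    if node.kind == "Goal" and not node.children:
        return False
    return all(fully_supported(c) for c in node.children)
```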


A Model Based Systems Engineering Approach to Resiliency Analysis of a Cyberphysical System Myron Hecht

This article describes an approach to cyberattack resilience modeling for an IEC 61850 network of intelligent relays controlling an electric power distribution station, which together comprise a cyberphysical system. The approach is based on earlier work on survivability analysis using Markov models and is implemented using Matlab and SysML. After describing an example power distribution station, the paper sets forth an analytical resiliency model and shows how it can be expressed in a SysML model. Example results from a transient analysis of the system are shown to demonstrate the utility of the approach.
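
As a rough analogue of such a transient analysis (the paper's model is built in Matlab and SysML; the states and rates below are invented for illustration), one can solve a small continuous-time Markov chain for its state probabilities over time:

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-state resiliency model: 0 = operational, 1 = degraded (under
# attack), 2 = failed. All rates (per hour) are illustrative only.
attack, escalate, detect_repair, restore = 0.02, 0.05, 0.5, 0.2
Q = np.array([
    [-attack,                attack,                       0.0],
    [detect_repair, -(detect_repair + escalate),      escalate],
    [restore,                0.0,                     -restore],
])  # generator matrix: each row sums to zero

pi0 = np.array([1.0, 0.0, 0.0])   # start fully operational
for t in (1, 10, 100):
    pi_t = pi0 @ expm(Q * t)      # transient state probabilities at time t
    print(f"t={t:>3}h  P(op, degraded, failed) = {pi_t}")
```

Plotting these probabilities over time shows how quickly the system returns to the operational state after an attack, which is the kind of resiliency evidence the transient analysis provides.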


Bug Bash: An efficient approach to increase test coverage and ensure product quality in an agile environment Uma Balasubramani, Kartik Iyer, Balaji Santhana Krishnan and Hema Kovvuri

Delivering a high-quality product in an agile environment becomes a challenge for various reasons, such as limited sprint time, test team size, the number of features being delivered, a monotonous testing approach, doing only module testing, and lack of expertise in a particular feature. Our product is released once every three months. It follows a completely agile approach, where the first one and a half months is the scrum phase, which contains 6 sprints (iterations), and the next one and a half months is the regression phase. Since the release cycle has limited time, we faced the above challenges. To address these issues and ensure quality deliverables, we have come up with an approach called the Bug Bash event. It is a quality assurance event which happens before or during the regression cycle in every release. A bug bash focuses more on integration or system testing rather than just module testing. This not only finds many defects but also uncovers valid customer and regression defects which actually arise from various integrations of modules.


Lessons Learned: Using a Static Analysis Tool Within a Continuous Integration System Claude Bolduc

Static analysis tools are used to improve software quality and reliability. Since these tools can be time-consuming when used to analyze big codebases, they are normally run during scheduled (e.g., nightly) builds. However, the sooner a defect is found, the easier it is to fix efficiently. In order to detect defects faster, some analysis tools offer integration with developers' integrated development environments, at the cost of not always detecting all issues. To detect defects earlier and still provide a reliable solution, one could think of running an analysis tool at every build of a continuous integration system. In this paper, we share the lessons learned during the integration of the static analysis tool Klocwork (which we are developing) with our continuous integration system. We think that these lessons will be beneficial for most companies developing safety-critical software (or less critical systems) that wish to run their analysis tool more often in their build system. We report these lessons learned along with examples of our successes and failures.
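
One common pattern for per-build analysis (a generic sketch; the analyzer command and report format below are hypothetical, not Klocwork's actual interface) is to gate each CI build only on findings that are new relative to a stored baseline, so legacy issues do not block every commit:

```python
import json, subprocess, sys

def new_findings(baseline_path, analyze_cmd):
    """Run the analyzer, diff its findings against a stored baseline,
    and return only previously unseen issues. Assumes the (hypothetical)
    analyzer emits a JSON list of {file, rule, line} findings."""
    out = subprocess.run(analyze_cmd, capture_output=True, text=True, check=True)
    current = {(f["file"], f["rule"], f["line"]) for f in json.loads(out.stdout)}
    with open(baseline_path) as fh:
        baseline = {tuple(f) for f in json.load(fh)}
    return current - baseline

if __name__ == "__main__":
    fresh = new_findings("baseline.json", ["analyzer", "--report-json"])
    for finding in sorted(fresh):
        print("new finding:", finding)
    sys.exit(1 if fresh else 0)  # fail the CI build only on new issues
```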


Case Study: Project Management Using Cross Project Software Reliability Growth Model Considering System Scale Kiyoshi Honda, Nobuhiro Nakamura, Hironori Washizaki and Yoshiaki Fukazawa

We propose a method to compare software products developed by the same company in the same domain. Our method, which measures the time series of the number of detected faults, employs software reliability growth models (SRGMs). SRGMs describe the relations between faults and the time necessary to detect them. Herein our method is extended to classify past projects for comparison with current projects, to help managers and developers decide when to end the test phases or release a project. Past projects are classified by three parameters: lines of code, number of test cases, and test density. Then an SRGM is applied. Our extended method is applied to the datasets of nine projects developed by Sumitomo Electric Industries, Ltd. Classification by test density produces the best results.
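
The abstract does not name which SRGM is used; a classic example, for illustration only, is the Goel-Okumoto exponential model, in which the expected cumulative number of faults detected by time $t$ is

```latex
m(t) = a\left(1 - e^{-bt}\right)
```

where $a$ is the expected total number of latent faults and $b$ is the per-fault detection rate. Fitting $a$ and $b$ to the observed fault time series yields a curve whose flattening indicates diminishing returns from further testing, which is the kind of signal used to decide when to end a test phase or release.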