
News

[Feb 04, 2022]: Registration for LTB 2022 is combined with ICPE 2022 and is free. For more information, please refer to the ICPE registration page.
[Jan 10, 2022]: The abstract and paper submission deadlines have been extended to January 23, 2022, AOE.

Workshop Agenda:

Note: The start of the workshop is 16:00 CEST, 22:00 Beijing Time (CST), 10:00 Eastern Time (EDT) on April 10, 2022.

16:00 - 16:05 CEST
22:00 - 22:05 CST
10:00 - 10:05 EDT
Introduction: Alexander Podelko (MongoDB), Heng Li (Polytechnique Montréal), Nima Mahmoudi (University of Alberta). [Talk]
16:05 - 17:00 CEST
22:05 - 23:00 CST
10:05 - 11:00 EDT
Keynote: Jianmei Guo (East China Normal University). [Talk][Slides]
From SPEC Benchmarking to Online Performance Evaluation in Data Centers. Abstract: Data centers have become the standard infrastructure for supporting large-scale Internet services. As they grow in size, each upgrade to the software (e.g., OS) or hardware (e.g., CPU) at scale becomes costlier and riskier. Reliable performance evaluation for a given upgrade facilitates infrastructure optimization, stability maintenance, and cost reduction at data centers, while erroneous evaluation produces misleading data, wrong decisions, and ultimately huge losses. This talk introduces the challenges and practices of system performance engineering in data centers, mainly covering SPEC benchmarking for competitive analysis and online evaluation for large-scale production environments.
Bio: Prof. Dr. Jianmei Guo co-directs the System Optimization Lab in the School of Data Science and Engineering at East China Normal University. He received his Ph.D. in Computer Science in 2011 from Shanghai Jiao Tong University. He was a Postdoctoral Fellow at the University of Waterloo and a Staff Engineer at Alibaba Group. His research interests include system optimization, system performance engineering, and system reliability engineering. He has published over fifty peer-reviewed papers and received one ACM SIGSOFT Distinguished Paper Award and two Best Paper Awards. He regularly serves on the review boards and program committees of highly ranked international journals and conferences. [Website]
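To make the idea of online performance evaluation concrete, here is a small Python sketch (purely illustrative, not taken from the talk) that compares request latencies measured before and after an upgrade using a non-parametric test; the sample values and the 5% significance level are assumptions.

    # Illustrative only: compare latency samples collected before and after an
    # upgrade with a non-parametric test (not the method described in the talk).
    from statistics import median
    from scipy.stats import mannwhitneyu  # requires SciPy

    # Hypothetical request latencies in milliseconds.
    before = [102, 98, 110, 105, 99, 101, 97, 108, 103, 100]
    after = [121, 117, 125, 119, 130, 122, 118, 127, 124, 120]

    stat, p_value = mannwhitneyu(before, after, alternative="two-sided")
    print(f"median before: {median(before)} ms, median after: {median(after)} ms")
    print(f"Mann-Whitney U p-value: {p_value:.4f}")
    if p_value < 0.05:  # assumed significance level
        print("Latency changed significantly after the upgrade.")
    else:
        print("No statistically significant change detected.")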
17:00 - 17:20 CEST
23:00 - 23:20 CST
11:00 - 11:20 EDT
Research Paper: Thomas F. Düllmann (University of Stuttgart), André van Hoorn (University of Hamburg), Vladimir Yussupov (University of Stuttgart), Pelle Jakovits (University of Tartu) and Mainak Adhikari (Indian Institute of Information Technology Lucknow). [Talk][Slides][Preprint]
CTT: Load Test Automation for TOSCA-based Cloud Applications
Abstract: Despite today's fast modeling and deployment capabilities for meeting customer requirements in an agile manner, testing is still of utmost importance to avoid outages, unsatisfied customers, and performance problems. (Load) testing is one of several approaches to tackle such issues. In this paper, we introduce the Continuous Testing Tool (CTT), which enables modeling tests and test infrastructures along with the cloud system under test, as well as deploying and executing (load) tests against a fully deployed system in an automated manner. CTT employs the OASIS TOSCA standard to enable end-to-end support for continuous testing of cloud-based applications. We demonstrate CTT's workflow, its architecture, and its application to DevOps-oriented load testing and load testing of data pipelines.
17:20 - 17:40 CEST
23:20 - 23:40 CST
11:20 - 11:40 EDT
Presentation: Alexander Podelko (MongoDB). [Talk][Slides]
A Review of Modern Challenges in Performance Testing
Abstract: The way we develop software is changing, and performance testing is changing too to remain relevant. It should be integrated into agile development and DevOps (including shift-left, shift-right, and continuous performance testing). Automation and Continuous Integration (CI) become necessary as we move to multiple iterations and shrinking times to verify performance. This changes the way we do performance testing and poses many new challenges that are neither deeply investigated nor even well defined. This presentation attempts to review these challenges, loosely group them, and provide information on handling them in industry and academia where such information is available. The goal is to trigger discussions and to serve as the foundation for a more formal review if the topic draws enough interest.
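One concrete form of continuous performance testing is a small automated check that runs in the CI pipeline and fails the build when a latency budget is exceeded. The following minimal Python sketch is a hypothetical example, not material from the presentation; the endpoint URL, request count, and p95 budget are assumptions.

    # Minimal, hypothetical CI performance gate: send a fixed number of requests,
    # compute the 95th-percentile latency, and exit non-zero if the budget is exceeded.
    import sys
    import time
    import urllib.request

    URL = "http://localhost:8080/health"  # assumed endpoint under test
    REQUESTS = 50                         # assumed sample size
    P95_BUDGET_MS = 200.0                 # assumed latency budget

    latencies = []
    for _ in range(REQUESTS):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=5) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000.0)

    latencies.sort()
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    print(f"p95 latency: {p95:.1f} ms (budget {P95_BUDGET_MS} ms)")
    sys.exit(0 if p95 <= P95_BUDGET_MS else 1)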
17:40 - 18:00 CEST
23:40 - 00:00 CST
11:40 - 12:00 EDT
Presentation: Leandro Melendez (Grafana/K6). [Talk][Slides]
Cost-effective load testing
Abstract: Even in these agile and continuous days, organizations still need to execute traditional, complete end-to-end load testing projects. Such projects often suffer from misconceptions about the coverage, the types of test cases to use, and the number of automations to include. All that confusion generates wasted time, extra effort for the performance team, inefficiencies in executing load tests, and more. Each of these, in the end, translates into additional costs for your organization. Performance testing is a field whose ultimate goal is cost mitigation, yet we rarely stop to think about the cost mitigation or optimization we can do internally in our own performance testing processes. Often, performance teams blindly follow sets of load testing guidelines, which may not be best practices, to conduct performance and load testing projects.
In this talk, Leandro will show some common mistakes that organizations fall into, even unknowingly, that produce extra costs and hurt the team's efficiency. He will then dive into a myriad of tips, from the Pareto principle to prioritization techniques, to increase cost efficiency and the impact of cost savings.
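As a rough illustration of Pareto-style prioritization (the transaction mix below is invented, and this is not material from the talk), here is a Python sketch that picks the smallest set of transactions covering roughly 80% of observed traffic, i.e., the ones most worth scripting first.

    # Illustrative Pareto-style selection: pick the fewest transactions that
    # together account for ~80% of observed traffic (the counts are invented).
    traffic = {
        "search": 52000, "view_item": 31000, "login": 9000,
        "add_to_cart": 4500, "checkout": 2200, "update_profile": 800,
        "export_report": 300, "admin_settings": 120,
    }

    total = sum(traffic.values())
    selected, covered = [], 0
    for name, count in sorted(traffic.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        covered += count
        if covered / total >= 0.80:  # assumed coverage target
            break

    print(f"Script these {len(selected)} transactions first: {selected}")
    print(f"They cover {covered / total:.0%} of the traffic.")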
18:00 - 19:00 CEST
00:00 - 01:00 CST
12:00 - 13:00 EDT
Keynote: André van Hoorn (University of Hamburg). [Talk][Slides]
Architecture-based Resilience Testing and Engineering of Microservice-based Software Systems
Abstract: Microservice-based architectures are expected to be resilient. However, various systems still suffer severe quality degradation from changes causing transient behavior, e.g., service failures or workload variations. There are different reasons for this deficiency. In practice, the elicitation of resilience requirements and the quantitative evaluation of whether the system meets these requirements are usually not systematic or not even conducted. Resilience testing (aka chaos engineering) aims to assess a software system's and an organization's ability to cope with failures, e.g., by injecting faults and observing their effects. However, identifying and prioritizing effective and efficient resilience tests is non-trivial. Moreover, resilience tests are costly as they require actual system deployment and execution.
In this talk, I will give an overview of our recent research activities to assess and address the previously mentioned resilience-related challenges. We conducted expert interviews to analyze the relevance of transient behavior in practice. We explored the use of well-known risk analysis methods (e.g., Fault Tree Analysis and FMEA) and the scenario-based Architecture Trade-Off Analysis Method (ATAM) for resilience requirement elicitation and resilience testing via industrial case studies and state-of-the-art technologies. We developed approaches and tools that leverage the relationship between resilience patterns, antipatterns, and fault injections; automatically extract architectural knowledge to generate and refine resilience tests; and use simulations to further reduce the number of resilience tests to execute. We developed a number of techniques and tools for the interactive and visual elicitation, specification, comprehension, and refinement of resilience requirements and properties.
Bio: André van Hoorn is a senior researcher with the Department of Informatics at the University of Hamburg, Germany (since 2021). Before moving to Hamburg, he was with the Institute of Software Engineering at the University of Stuttgart, Germany (2013-2021), with the Software Engineering group at Kiel University, Germany (2010-2012), and with the Software Engineering group at the University of Oldenburg, Germany (2007-2010). He received his PhD degree (with distinction) from Kiel University, Germany (2014), and his Master's degree from the University of Oldenburg, Germany (2007). André's research focuses on designing, operating, and evolving trustworthy distributed software systems. Of particular interest are runtime quality attributes such as performance, reliability, and resilience, and how they can be assessed and optimized using a smart combination of model-based and measurement-based approaches. In recent years, André has investigated challenges and opportunities to apply such approaches in the context of continuous software engineering and DevOps. André is the principal investigator of several research projects, including basic and applied research, and is actively involved in community activities, e.g., in the scope of the Research Group of the Standard Performance Evaluation Corporation (SPEC) and the ACM/SPEC International Conference on Performance Engineering (ICPE).
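To make the fault-injection idea concrete, here is a minimal, self-contained Python sketch (not one of the tools or approaches from the talk): a wrapper that randomly injects latency or failures into a downstream call and reports the observed error rate, which is the basic mechanism behind many resilience tests; the failure rate and latency values are assumptions.

    # Minimal fault-injection illustration: randomly delay or fail a call and
    # observe how often the caller sees an error (all parameters are assumed).
    import random
    import time

    def flaky(call, failure_rate=0.2, extra_latency_s=0.05):
        """Wrap `call` so it sometimes fails or responds slowly."""
        def wrapped(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("injected fault")
            time.sleep(random.uniform(0, extra_latency_s))  # injected jitter
            return call(*args, **kwargs)
        return wrapped

    def get_recommendations(user_id):
        return ["item-1", "item-2"]  # stand-in for a downstream service call

    unreliable = flaky(get_recommendations)
    errors = 0
    for i in range(100):
        try:
            unreliable(i)
        except ConnectionError:
            errors += 1  # a resilient caller would retry or fall back here
    print(f"observed error rate: {errors}%")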
19:00 - 20:00 CEST
01:00 - 02:00 CST
13:00 - 14:00 EDT
Panel discussion: Andre B. Bondi (Software Performance and Scalability Consulting LLC), Leandro Melendez (Grafana/K6), Weiyi Shang (Concordia University), Alexander Podelko (MongoDB), Heng Li (Polytechnique Montréal). [Talk]
Performance Education and Training
Leading experts from academia and industry will discuss their experiences, best practices and challenges in educating and training future performance researchers or practitioners. Panelists are Andre B. Bondi (Software Performance and Scalability Consulting LLC), Leandro Melendez (Grafana/K6), Weiyi (Ian) Shang (Concordia University), and Alexander Podelko (MongoDB). The panel is moderated by Heng Li (Polytechnique Montréal).
20:00 - 20:20 CEST
02:00 - 02:20 CST
14:00 - 14:20 EDT
Research Paper: Devon Hockley (University of Calgary) and Carey Williamson (University of Calgary). [Talk][Preprint][Slides]
Benchmarking Runtime Scripting Performance in WebAssembly
Abstract: In this paper, we explore the use of WebAssembly (WASM) as a sandboxed environment for general-purpose runtime scripting. Our work differs from prior research focusing on browser-based performance or SPEC benchmarks. In particular, we use micro-benchmarks and a macro-benchmark (both written in Rust) to compare execution times between WASM and native mode. We first measure which elements of script execution have the largest performance impact, using simple micro-benchmarks. Then we consider a Web proxy caching simulator, with different cache replacement policies, as a macro-benchmark. Using this simulator, we demonstrate a 5-10x performance penalty for WASM compared to native execution.
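A simplified version of such a comparison can be scripted around the two build artifacts. The Python sketch below is not the authors' harness: it assumes a native binary ./bench, the same program compiled to bench.wasm, and the wasmtime CLI, and simply compares median wall-clock times over a few runs.

    # Rough timing harness (not the authors' setup): run a native binary and the
    # same program compiled to WebAssembly, then compare median wall-clock times.
    import statistics
    import subprocess
    import time

    RUNS = 10  # assumed repetition count

    def time_command(cmd, runs=RUNS):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True, capture_output=True)
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    native = time_command(["./bench"])               # assumed native build
    wasm = time_command(["wasmtime", "bench.wasm"])  # assumed WASM module + runtime
    print(f"native: {native:.3f}s, wasm: {wasm:.3f}s, slowdown: {wasm / native:.1f}x")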
20:20 - 20:40 CEST
02:20 - 02:40 CST
14:20 - 14:40 EDT
Presentation: Josef Mayrhofer (Performetriks LLC). [Talk][Slides]
Simplify Performance Engineering with Intelligence
Abstract: We are living in an incredible world. All our day-to-day needs are just one mouse click away, and delivery services bring these goods to our front doors. But this flexibility comes at a price: we become ever more dependent on the availability and performance of such e-commerce and financial transactions. Performetriks has 20 years of experience in building fast and secure business applications. It's our mission to simplify and streamline performance engineering across industries and make it everyone's daily job. Our performance engineering maturity model (PEMM) focuses on the standards, methods, processes, and tools used in an organization to objectively evaluate its performance engineering maturity level compared with other companies and to guide it on its performance journey. We enrich our knowledge base with expert advice and use Monte Carlo simulation and machine learning to create our customers' performance engineering remediation plans.
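As a generic illustration of how Monte Carlo simulation can inform performance decisions (this is not the PEMM model or Performetriks' method), the Python sketch below estimates an end-to-end latency percentile from assumed per-component latency distributions.

    # Generic Monte Carlo illustration: estimate end-to-end p95 latency from
    # assumed per-component latency distributions (all numbers are invented).
    import random

    TRIALS = 100_000  # assumed number of simulated requests

    def simulate_request():
        gateway = max(random.gauss(5, 1), 0.0)   # ms, assumed
        service = max(random.gauss(30, 8), 0.0)  # ms, assumed
        database = random.expovariate(1 / 15)    # ms, assumed mean of 15 ms
        return gateway + service + database

    samples = sorted(simulate_request() for _ in range(TRIALS))
    print(f"simulated end-to-end p95 latency: {samples[int(0.95 * (TRIALS - 1))]:.1f} ms")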
20:40 - 21:00 CEST
02:40 - 03:00 CST
14:40 - 15:00 EDT
Presentation: Matt Fleming (DataStax), Guy Bolton King (DataStax), Sean McCarthy (DataStax), Jake Luciani (DataStax) and Pushkala Pattabhiram (DataStax). [Talk][Slides]
Fallout: Distributed systems testing as a service
Abstract: All modern distributed systems list performance and scalability as their core strengths. Given that optimal performance requires carefully selecting configuration options, and typical cluster sizes can range anywhere from 2 to 300 nodes, it is rare for any two clusters to be exactly the same. Validating the behavior and performance of distributed systems in this large configuration space is challenging without automation that stretches across the software stack. In this paper we present Fallout, an open-source distributed systems testing service that automatically provisions and configures distributed systems and clients, supports running a variety of workloads and benchmarks, and generates performance reports based on collected metrics for visual analysis. We have been running the Fallout service internally at DataStax for over 5 years and have recently open sourced it to support our work with Apache Cassandra, Pulsar, and other open source projects. We describe the architecture of Fallout along with the evolution of its design and the lessons we learned operating this service in a dynamic environment where teams work on different products and favor different benchmarking tools.
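Fallout's actual test definitions and tooling are its own; purely to illustrate the provision-run-report pipeline the abstract describes, here is a hypothetical Python sketch in which every name and structure is invented and the metrics are simulated.

    # Hypothetical provision -> run -> report pipeline; names, structure, and
    # metrics are invented and do not reflect Fallout's actual design.
    import random
    from dataclasses import dataclass

    @dataclass
    class TestSpec:
        cluster_size: int
        workload: str
        duration_s: int

    def provision(spec):
        return [f"node-{i}" for i in range(spec.cluster_size)]  # stand-in for provisioning

    def run_workload(nodes, spec):
        # Stand-in for driving a benchmark tool; returns fake per-node throughput.
        return {node: random.uniform(900, 1100) for node in nodes}

    def report(metrics):
        total = sum(metrics.values())
        print(f"total throughput: {total:.0f} ops/s across {len(metrics)} nodes")

    spec = TestSpec(cluster_size=3, workload="read-heavy", duration_s=300)
    report(run_workload(provision(spec), spec))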
21:00 - 21:05 CEST
03:00 - 03:05 CST
15:00 - 15:05 EDT
Conclusion. [Talk]

Travel Statement

In light of ongoing developments with rising COVID-19 case numbers all over the world, the appearance of new virus variants, and ongoing travel restrictions, the ICPE 2022 organizing committee decided to go fully virtual with the next ICPE edition. After thoughtful discussions with our steering committees and co-sponsors (SPEC & ACM), ICPE will move to a fully online, virtual experience that will take place during yet-to-be-defined time slots in the conference week. We believe this is the safest approach for the health and safety of our global community, and we are excited that a virtual event allows people to participate and interact despite the ongoing pandemic.

Attendees will be able to participate virtually in the conference, similar to the experience of the previous fully virtual editions. Networking is a cornerstone of our event, and we will try to set up a virtual conference experience that fosters this as much as possible.

The conference agenda will be aligned with previous editions, with core sessions taking place for approximately four hours per day, including paper discussions, keynotes, workshops, and interactions.

We hope you all understand our decision in the current situation, and we look forward to meeting you virtually at ICPE 2022.

Call for papers

Software systems (e.g., smartphone apps, desktop applications, telecommunication infrastructures, and enterprise systems) have strict requirements on software performance. Failing to meet these requirements may cause business losses, customer defection, brand damage, and other serious consequences. In addition to conventional functional testing, the performance of these systems must be verified through load testing or benchmarking to ensure quality service.

Load testing examines the behavior of a system by simulating hundreds or thousands of users performing tasks at the same time. Benchmarking compares the system's performance against other similar systems in the domain. The workshop is not limited to traditional load testing; it is open to any ideas for re-inventing and extending load testing, as well as any other way to ensure system performance and resilience under load, including any kind of performance testing, resilience / reliability / high availability / stability testing, operational profile testing, stress testing, A/B and canary testing, volume testing, and chaos engineering.
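To make the notion of load testing concrete, the following minimal Python sketch drives a single endpoint with concurrent simulated users and reports aggregate latency; the target URL, user count, and request counts are placeholders, and real load tests rely on dedicated tools and far more realistic workload models.

    # Minimal load-generation sketch: concurrent "users" repeatedly hit one URL
    # and their latencies are aggregated (target and sizes are placeholders).
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"  # placeholder system under test
    USERS = 20                      # placeholder number of concurrent users
    REQUESTS_PER_USER = 25          # placeholder workload per user

    def user_session(_):
        latencies = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = [x for batch in pool.map(user_session, range(USERS)) for x in batch]

    print(f"{len(results)} requests, mean {statistics.mean(results) * 1000:.1f} ms, "
          f"max {max(results) * 1000:.1f} ms")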

Load testing and benchmarking software systems are difficult tasks that require a deep understanding of the system under test and of customer behavior. Practitioners face many challenges, such as tooling (choosing and implementing the testing tools), environments (software and hardware setup), and time (limited time to design, test, and analyze). Yet little research in the software engineering domain addresses this topic.

Adjusting load testing to recent industry trends, such as cloud computing, agile / iterative development, continuous integration / delivery, microservices, serverless computing, AI/ML services, and containers, poses major challenges that are not yet fully addressed.

This one-day workshop brings together software testing and software performance researchers, practitioners, and tool developers to discuss the challenges and opportunities of conducting research on load testing and benchmarking software systems. Our ultimate goal is to grow an active community around this important and practical research topic.

We solicit submissions in two tracks:

  1. Paper track: research or industry short papers (maximum 4 pages) or full papers (maximum 8 pages)
  2. Presentation track: industry or research talks (maximum 700-word extended abstract)
Research/Industry papers should follow the standard ACM SIG proceedings format and need to be submitted electronically via EasyChair. Extended abstracts for the presentation track need to be submitted as "abstract only" submissions via EasyChair as well. Accepted papers will be published in the ICPE 2022 Proceedings. Submissions can be research papers, position papers, case studies, or experience reports addressing topics related to load testing and benchmarking software systems.


Important Dates

Paper Track (research and industry papers):

Abstract submission: January 23, 2022, AOE (extended from January 9, 2022)
Paper submission: January 23, 2022, AOE (extended from January 15, 2022)
Author notification: February 24, 2022, AOE
Camera-ready version: May 4, 2022, AOE

Presentation Track:

Extended abstract submission: February 6, 2022, AOE
Author notification: February 24, 2022, AOE


Organization:

Chairs:

Alexander Podelko, MongoDB, USA
Heng Li, Polytechnique Montréal, Canada
Nima Mahmoudi, University of Alberta, Canada

Program Committee:

Weiyi Shang, Concordia University, Canada
Tse-Hsun (Peter) Chen, Concordia University, Canada
Gerson Sunyé, University of Nantes, France
Zhen Ming (Jack) Jiang, York University, Canada
Jinfu Chen, Huawei, Canada
Lizhi Liao, Concordia University, Canada
Andre Bondi, Software Performance and Scalability Consulting LLC, USA
Anthony Ventresque, University College Dublin (UCD), Ireland
Klaus-Dieter Lange, HPE, USA


Past LTB Workshops: