We hope you can join us for the upcoming ACM SIGSOFT and ACM PD sponsored webinars.
Follow the links below to register for these free 60-minute webinars, and be sure to share them with friends and colleagues who may be interested in these topics. Check out our past events, all available on demand.
Each talk will be followed by a moderated live question and answer session.
Note: If you'd like to attend but can't make it to the virtual event, you still need to register to receive a recording of the webinar when it becomes available. You can stream this and all ACM SIGSOFT and ACM Learning Webinars on your mobile device, including smartphones and tablets.
Interested in presenting a webinar? Check our page for prospective presenters to find out how.
Speaker: Xin Xia, moderator: Xing Hu
The security and transparency of the software supply chain have become an urgent problem for government, industry, and academia. The Software Bill of Materials (SBOM), which records the ingredients that make up software components, is widely used as a key building block to support the trusted software supply chain (TSSC). Beyond SBOM, do we need to invent other technologies to support TSSC? What is the road ahead for TSSC? In this talk, I will present our recent progress in this area. I will introduce our initial work on SBOM generation and consumption, and then present our work on vulnerability management (e.g., identification of silent vulnerability reports and fixes, vulnerability detection, and CVE improvement) and supply chain attack prevention. Finally, I will briefly discuss future directions for TSSC.
Speaker: Baishakhi Ray, moderator: Saikat Chakraborty
The past decade has seen unprecedented growth in Software Engineering: developers spend enormous time and effort to create new products. With such enormous growth comes the responsibility of producing and maintaining quality, robust software. However, developing such software is non-trivial: 50% of software developers' valuable time is wasted on finding and fixing bugs, costing the global economy around USD 1.1 trillion. Today, I will discuss how AI can help in different stages of the software development life cycle to develop quality products.
In particular, I will talk about Programming Language Processing (PLP), an emerging research field that can model different aspects of code (source, binary, execution, etc.) to automate diverse Software Engineering tasks, including code generation, bug finding, security analysis, etc.
Speaker: Gustavo Pinto, moderator: Fernanda Madeiral
Cognitive Driven Development (CDD for short) is a coding design technique that aims to reduce the complexity of a code unit (e.g., a class) by systematically limiting the number of coding items that add complexity to that code unit. We have been using CDD to build products at Zup Innovation, a Brazilian tech company. Our experience suggests that, by using CDD, the software development team was able to keep the code units under a reasonably small size, even with the (near) linear growth of the software. We believe CDD can be a sharp tool in the developer's arsenal when it comes to designing better software. This talk presents our current CDD-related research and our plans for future work.
Speaker: Kevin Moran, moderator: Michele Tufano
Bridging the abstraction gap between concepts and source code is the essence of Software Engineering (SE). SE researchers regularly use machine learning to bridge this gap, but there are two fundamental issues with traditional applications of machine learning in SE research. Traditional applications are typically reliant on human intuition, and they are not capable of learning expressive yet efficient internal representations. Ultimately, SE research needs approaches that can automatically learn representations of massive, heterogeneous datasets in situ, apply the learned features to a particular task, and possibly transfer knowledge from task to task.
Improvements in both computational power and the amount of memory in modern computer architectures have enabled new approaches to canonical machine learning tasks. Specifically, these architectural advances have enabled machines that are capable of learning deep, compositional representations of massive data depots. This rise of deep learning has led to tremendous advances in several fields. Given the complexity of software repositories and the artifacts therein, deep learning has ushered in new analytical frameworks and methodologies for SE research and its corresponding practical applications. A recent report from the 2019 NSF Workshop on Deep Learning & Software Engineering has referred to this area of research as Deep Learning for Software Engineering (DL4SE).
This talk will provide a retrospective on the current state of DL4SE research by offering an analysis on work that has been done across different software engineering tasks including code suggestion, program repair, and program synthesis, to name a few. Additionally, the talk will explore how different types of software artifacts and deep learning architectures have been used, as well as some pressing challenges faced by this line of work. The talk will conclude with a discussion of promising future directions of work as well as an overview of potential opportunities for the DL4SE research community to continue to drive impactful, open, and reproducible work.
Speaker: Alexander Serebrenik, moderator: Anita Sarma
This talk will provide a brief overview of several recent studies of gender and gender diversity in software development teams. The main findings are: (1) more gender-diverse GitHub teams are not only more productive than less gender-diverse ones (Vasilescu et al., CHI 2015), but they are also less likely to exhibit suboptimal communication patterns (Catolino et al., ICSE-SEIS 2019) known to lead to suboptimal code patterns (Palomba et al., TSE 2019); and (2) social capital obtained by collaboration in GitHub open source projects is beneficial for the duration of engagement in an open source project; diversity of information ties, i.e., involvement in very different projects, is beneficial for people of all genders, more so for women than for men (Qiu et al., ICSE 2019). Time permitting, the talk will also touch on ongoing work related to going beyond the gender binary. This preliminary study, based on interviews with three transgender women working in software development, observed that remote work, facilitated by technological solutions, reduces barriers to participation in software development projects. It is conjectured that remote work can benefit other underrepresented minorities as well (Ford et al., ICSE-SEIS 2019).
Speaker: Paul Ralph, moderator: Sebastian Baltes
Scholarly peer review - the linchpin of science - is demonstrably prejudiced, unreliable, inaccurate, wasteful and sometimes devastating to researchers' careers and emotional wellbeing. The ACM SIGSOFT Paper and Peer Review Quality Task Force convened to overcome these problems by developing empirical standards. An empirical standard is a brief public document that communicates expectations for a specific kind of study (e.g. a questionnaire survey). Empirical standards facilitate transforming peer review into a process of checking whether a study meets transparent expectations set by our community, rather than the whims of individuals. This transformation will produce numerous benefits for researchers, reviewers, editors, and society, including increasing acceptance rates and research quality while decreasing workloads and frustration. In this webinar, Prof. Ralph will describe the standards and how they can be used, how they were created, how they produce benefits, how they will evolve, how they will be governed, and how you can get involved. Software engineering - not medicine or physics or psychology - is going to usher in a revolution in peer review: a revolution of fairness, effectiveness, consistency and kindness.
Speaker: Rachel Tzoref-Brill, moderator: Myra Cohen
Combinatorial testing (CT) is an effective test design technique, considered to be a testing best practice. CT provides automatic test plan generation, but it requires a manual definition of the test space in the form of a combinatorial model, consisting of parameters, their respective values, and constraints on the value combinations. As the system under test evolves, e.g., due to iterative development processes and bug fixing, so does the test space, and thus, in the context of CT, evolution translates into frequent manual model definition updates. As a result, the comprehension and evolution of combinatorial models and test plans pose challenges to the application of CT in industry.
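As a rough, hypothetical illustration of what such a combinatorial model looks like (the parameters, values, and constraint below are invented for this sketch, not taken from the talk), the test space can be expressed as parameters with values plus a constraint over value combinations:

```python
from itertools import product

# Hypothetical combinatorial model: parameters with their values,
# plus a constraint ruling out illegal value combinations.
model = {
    "os": ["linux", "windows"],
    "browser": ["firefox", "safari", "chrome"],
    "protocol": ["http", "https"],
}

def is_valid(test):
    # Constraint: safari is not available on linux (invented example).
    return not (test["os"] == "linux" and test["browser"] == "safari")

# Enumerate the full test space and filter it by the constraint.
names = list(model)
space = [dict(zip(names, combo)) for combo in product(*model.values())]
valid = [t for t in space if is_valid(t)]
print(len(space), len(valid))  # 12 combinations, 10 of them valid
```

Real CT tools such as IBM FOCUS then generate a small test plan covering, for example, all pairs of values, rather than enumerating the full constrained space as this sketch does.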
First, I will describe and demonstrate three techniques for visualizing combinatorial models and the test plans derived from them. The three techniques are based on graphs, tables, and treemaps, and are used to visualize different aspects of the models and test plans, such as the relationships between the parameters and the constraints, the relationships between the tests in the derived test plan, the degree of uniqueness of each test, the degree of legality of each parameter combination, and its degree of coverage in a derived test plan.
Second, I'll introduce a syntactic and semantic differencing technique for combinatorial models. We define a canonical representation for differences between two models, and suggest a scalable algorithm for automatically computing it. We further use our differencing technique to analyze the evolution of 42 real-world industrial models, demonstrating the scalability of our solution. A user study with 16 CT practitioners shows that comprehension of differences between real-world combinatorial model versions is challenging and that our differencing tool significantly improves the performance of less experienced practitioners.
Finally, I'll introduce a first co-evolution approach for combinatorial models and test plans. By combining three building blocks (minimally modifying existing tests, enhancing them, and selecting from them), we provide five alternatives for co-evolving the test plan with the combinatorial model, considering tradeoffs between maximizing fine-grained reuse and minimizing total test plan size, all while meeting the required combinatorial coverage.
We have implemented all the ideas described above in IBM Functional Coverage Unified Solution (IBM FOCUS), an industrial-strength CT tool.
The talk will cover works published in ASE'16, ICSE'17, and ESEC/FSE'18.
Speaker: Massimiliano Di Penta, moderator: Shane McIntosh
Continuous Integration (CI) and Continuous Delivery (CD) have been claimed to bring several benefits to software development, including higher software quality and reliability, and faster release cycles. However, recent work has pointed out challenges, barriers, and bad practices characterizing their adoption. This happens especially because many developers are not adequately experienced with the practices, or with the related technology.
This talk will outline recent work (by ourselves and by others) aimed at characterizing the CI/CD process from different perspectives. First, the talk will provide an overview of various kinds of anti-patterns related to CI/CD adoption. Then, it will go into more detail on how certain practices, such as static analysis, testing, or code reviews, are followed in the context of CI/CD. Last but not least, the talk will overview approaches that help developers improve their CI/CD practice, also outlining open challenges in this area and future research directions.
Speakers: Jeff Sutherland and Ivar Jacobson, moderator: Pekka Abrahamsson
The Daily Scrum practice is plagued with dysfunction in over 50% of "agile" teams. This causes projects to be late and over budget, with unhappy customers. The key to agile coaching is to make the dysfunction clear without being directive, so a team can decide for itself how to self-organize towards a shippable increment every sprint that delivers real value.
One of the most effective tools in recent years has been the use of Scrum Essential Cards to coach teams to improve their practices in their organization and explain to everyone who needs to know how the practices work in a specific organization. These cards describe Scrum by using Essence. Essence is the international standard for defining methods and practices. Ivar Jacobson's company has worked with the co-creator of Scrum to define a set of cards that provides a complete definition of Scrum consistent with the Scrum Guide. For instance, the cards have been used to have teams "Build Their Own Scrum" to clarify what parts of their implementation need work and define process improvements to be made in each sprint. Work with these cards across hundreds of teams shows that the average Scrum team implements one third of the 21 components of Scrum well, one third poorly, and one third not at all. Daily practice with only a third of the components working well is like driving a car with wheels missing.
Exercises with these cards are dramatically revealing about how practices should work, how people on the same team may have different ideas about what practices are, and what a team needs to do next to improve their process. One participant in this exercise using the Scrum cards said he learned more about Scrum in one hour with the Essence cards than he did in the previous six years of being on a Scrum team.
Essence provides "tools" not previously available for the creation of methods. Essence:
In particular, Essence can lead to better retrospectives, improved selection of process improvements for each sprint, a clearer Daily Scrum, and a more valuable shippable increment of product at every Sprint Review.
Speaker: Mathieu Nayrolles, moderator: Shane McIntosh
Producing AAA games takes a lot of effort and organization. The production pipeline used at Ubisoft for its major brands like Rainbow Six, Assassin's Creed or Far Cry is in constant evolution to produce bug-free games for our millions of players and support the games-as-a-service (GaaS) paradigm that is currently transforming the video-game industry. This talk will present how we have automated our debug and profiling activities using known techniques from the software-as-a-service world, landmarks of the SE scientific literature, and our own research. This talk will also present the problems we are currently tackling, in partnership with our research lab (Ubisoft La Forge), Mozilla and several Canadian universities (Concordia, Polytechnique Montreal, ETS, McGill), to further automate our production pipeline.
Speaker: Mark Harman, moderator: Federica Sarro
In this talk I will discuss research and deployment work on source code analysis, testing and SBSE, which I have undertaken with many wonderful collaborators, colleagues and friends; my personal view on the joys of scientific research and the excitement of deployment, but also the frustrations of both. I think frustration is important and needs to be acknowledged, because it often leads to further insights and development and is, thereby, the root cause of future joys.
Speaker: Ivar Jacobson, moderator: Pekka Abrahamsson
Use cases are a well-proven technique for doing requirements, and they have been part of the inspiration for more recent techniques such as user stories and test-driven development. Now the inspiration has flowed in the other direction. Use-Case 2.0 is the new generation of use-case driven development - light, agile, and lean - inspired by user stories, Scrum and Kanban.
Use-Case 2.0 retains all the popular values from the past, supporting not just requirements but also architecture, design, test, operations, and user experience. In fact, the use case idea has become so widespread that the term "use case" has become a normal English word used to understand the usages of virtually anything.
The latest ideas, particularly the concept of "slicing" use cases, allow use cases to be used by small agile development teams producing applications. Moreover, Use-Case 2.0 seamlessly scales up to large projects producing the most complex of systems, which is where many teams struggle to use other agile requirements techniques like user stories. Use-Case 2.0 provides all the benefits of user stories with the ability to scale when necessary and easily see how all the requirements relate to each other across all kinds of systems - businesses, systems, and software.
Use-Case 2.0 is, among other resources, represented by a set of poker-sized cards, which makes it easy for agile teams to learn and apply. The cards are also used to play serious games and to stimulate conversations within the team and between teams.
In this presentation Dr. Ivar Jacobson, the creator of use cases, will present Use-Case 2.0 and how agile teams can use it.
Speaker: Margaret Burnett, moderator: Anita Sarma
Gender inclusiveness in software companies is receiving a lot of attention these days, but it overlooks a potentially critical factor: software itself. Research shows that different people often work differently with software, and that some of these differences statistically cluster by gender. In this talk, we'll begin by presenting a method we call GenderMag, which can be used to find and fix "inclusivity bugs" -- gender biases in software that support people of one gender less well than people of another gender. As we'll explain, at the core of the method are 5 facets of cognitive style differences that are also statistically gender differences, drawn from a large body of foundational work from computer science, psychology, education, communications, and women's studies. We then present some results of using GenderMag on real products - both commercial and open source - and finally focus on practices for taking the method into real world usage.
Speaker: Tim Menzies, moderator: Shane McIntosh
One of the misleading myths about AI is that it will remove the need for programmers - that somehow, programmers will disappear since AI tools will build our software systems. This is just false. There are so many ways to apply AI to SE, and SE to AI, that human programmers (who know AI methods) will have ready employment for decades to come.
For example, this talk will explore "DUO" which is a synergy between data miners and optimizers. In this partnership, data miners generate the models that are explored by optimizers. Also, optimizers advise how to best adjust the control parameters of a data miner. This combined approach acts like an agent leaning over the shoulder of an analyst that advises "ask this question next" or "ignore that problem, it is not relevant to your goals". Further, those agents can help us build "better" predictive models, where "better" can be either greater predictive accuracy, or faster modeling time (which, in turn, enables the exploration of a wider range of options).
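As a minimal, self-contained sketch of the DUO idea (the data, the one-rule "miner", and the random-search "optimizer" are all toy stand-ins, invented for illustration), an optimizer can tune the control parameter of a data miner:

```python
import random

random.seed(0)
# Toy dataset: (metric value, buggy?) for a set of files. Buggy files tend
# to have a higher metric value (means 5 vs. 2, invented for this sketch).
data = [(random.gauss(5 if bug else 2, 1.0), bug)
        for bug in [True] * 50 + [False] * 50]

def miner(threshold):
    """A one-rule 'data miner': predict buggy when the metric exceeds
    the threshold; return its accuracy on the toy data."""
    correct = sum((x > threshold) == bug for x, bug in data)
    return correct / len(data)

# The 'optimizer': random search over the miner's control parameter.
best_t = max((random.uniform(0, 8) for _ in range(200)), key=miner)
print(f"best threshold {best_t:.2f}, accuracy {miner(best_t):.2f}")
```

The point of the sketch is the division of labor: the miner builds the model, while the optimizer explores its control parameters, just as DUO pairs the two at much larger scale.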
DUO is a dynamic, and growing, field. While small parts of it can be automated, human programmers are essential to understand, apply, and maintain these methods within their organizations.
Speaker: Tao Xie, moderator: Robert Dyer
As an example of exploiting the synergy between AI and software engineering, the field of intelligent software engineering has emerged with various advances in recent years. Such a field broadly addresses issues on intelligent [software engineering] and [intelligence software] engineering. The former, intelligent [software engineering], focuses on instilling intelligence in approaches developed to address various software engineering tasks to accomplish high effectiveness and efficiency. The latter, [intelligence software] engineering, focuses on addressing various software engineering tasks for intelligence software, e.g., AI software. This talk will discuss recent research and future directions in the field of intelligent software engineering.
Speakers: Ivar Jacobson and Roly Stimson, moderator: Pekka Abrahamsson
So, what is Essence? Put simply, Essence is an international standard that defines two things:
The Essence language is simple, visual and intuitive. It is designed to help us clearly express the common challenges that we share, and the practices that help us to be successful.
Core language elements include:
The Essence Kernel defines the core concepts that are universal - i.e. to be found wherever and whenever we do software engineering. The authors of the standard had a simple rule - for any concept, if we can think of an example of an endeavor where it is not central or relevant, then it was not included in the kernel. Examples of Kernel Alphas include:
The magic of Essence is that if we simply take the time to share our practices using a common language, and relate them to our shared challenges, then practitioners can easily understand them, compare and contrast them, select the ones that are of most value, see how they fit and work together, and adapt them to meet their needs, all while maintaining absolute clarity and transparency of their way of working at all times.
Speakers: Ivar Jacobson and Brian Kerr, moderator: Pekka Abrahamsson
In the first seminar of the series we looked at the benefits from having a way to organise the knowledge of how we engineer software, having a common ground of universal concepts to build upon, and allowing the mixing and matching of various practices defined on top of it. Essence is this new way of thinking about software development. It gives us a standard way to capture and combine these practices, which may come from many different sources, to describe a team's way of working.
In this presentation we will introduce some of the key ideas behind Essence, not in terms of the theory but by demonstrating how teams can put these ideas to work. We will see how we go from the traditional world where practices are just static descriptions, to one where the practices come to life and are actively used.
Most teams are familiar with representing their work items on cards, such as their user stories, change requests, defects, etc. They can be in either physical or electronic form and are great for helping to organize, prioritise, visualize and track the work. Essence uses a similar technique to represent the key concepts of working practices as a set of cards. This opens up a whole new set of powerful games, allowing a team to reason about and greatly improve their way of working.
Making the team's way of working something tangible that they can see, touch and manipulate, can facilitate deep conversations and decision making around their process. It stops being a reference and becomes something that is used, adapted and living. We will see some example Essence games that teams can play to get started, track progress, and importantly identify improvements over time to their process. The games can be applied to any set of practices and the widely used one of Scrum will be used to illustrate the gameplay.
Speaker: Ivar Jacobson, moderator: Pekka Abrahamsson
Software Engineering was the theme of a 1968 conference in Garmisch, Germany, attended by the leading computer scientists and methodologists of the time. That meeting is considered to be the beginning of software engineering, and by now we have been developing the discipline for over 50 years.
"This is not the end, it is not even the beginning of the end, but it is perhaps the end of the beginning" (Winston Churchill).
There are more than 20 million software developers on the planet, and a large number of methods to develop software. However, the most successful recipe has been to hire the most brilliant people in the world and empower them to create wonders. 50 years ago, Ericsson in Sweden did that. Now Apple, Google, Amazon, etc. do that.
What about the rest of the world - banks, insurance, airlines, defense, telecom, automotive, etc.? How can we get these industries to be more innovative and develop better software, faster, cheaper and with happier customers? How can we do that, given that the state of the art of our discipline is in such chaos, characterized by the multitude of competing methods out there?
The most powerful way to help the rest of the world build excellent software is to dramatically increase the competency (and skill) of all of us. There are no shortcuts. Education must start from an understanding of the heart of software development, from a common ground that is universal to all software development endeavors. The common ground must be extensible to allow any method, with its practices, to be defined on top of it. This would allow us to sort out the chaos and increase the competency of all of us. As a plus, that competency increase wouldn't hurt the brilliant people, but would make them even more productive than today.
In this presentation Dr. Ivar Jacobson will revisit the history of methods, explain why we need to break out of our repetitive dysfunctional behavior, and introduce Essence: a new way of thinking that promises many things, one of them being to dramatically change the way we educate in software development to increase the competency in our profession.
Speaker: André van der Hoek, moderator: David Budgen
Blockchain. AI/machine learning. Security. Cloud. While these are today's topics, they may not be tomorrow's. In a landscape where technologies and infrastructures change orders of magnitude faster than personnel, one thing remains of constant importance: the ability of developers to be great designers.
Much goes into being a great designer: knowing the domain inside and out, understanding design thinking, and, yes, being intimately familiar with the technology at hand. Crucially important, however, is the ability to effectively keep a steady mind and balance multiple perspectives in a world of uncertainty, constant change, and competing demands on the design under consideration.
What exactly sets expert software designers apart, and what makes them have enduring design success regardless of the technology or infrastructure du jour, is the topic of this talk. Based upon decades of observations, conversations, interviews, and empirical studies of software developers 'in action' designing, we will present key insights into their thought and decision-making processes, use and non-use of tools and notations, reliance on colleagues, and more.
Speaker: George Sherwood, Testcover.com; moderator: Robert Dyer
Combinatorial testing (CT) is a way to design software tests so that interactions among configuration settings and input values are covered by the test design. This webinar introduces CT, from its origins in design of experiments to its present role in verifying interactions in complex systems. A persistent CT usability challenge has been constraints among test factor values, which can make some tests valid and others invalid. Research progress in managing constraints has enabled increased adoption among practicing software engineers, and better coverage of test interactions. Embedded functions technology allows functionally dependent relations among test factors to be defined as functions in a general-purpose programming language. These relations enforce constraints among test factor values and ensure that all valid combinations of determinant factors are available for the test design. The resulting usability improvements enable automated pairwise test designs to meet novel objectives: cover equivalence classes of expected results; verify univariate and multivariate equivalence class boundaries; verify corners among intersecting boundaries and edges.
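To make the idea concrete, here is a toy greedy pairwise generator with a constraint expressed as an ordinary function, loosely in the spirit of embedded functions (the parameters and constraint are invented, and real CT tools use far more sophisticated algorithms than this greedy sketch):

```python
from itertools import combinations, product

# Hypothetical test factors and values.
params = {"color": ["red", "green"], "size": ["S", "M", "L"], "ship": ["air", "sea"]}

def valid(t):
    # Constraint as a function: large items cannot ship by air (invented).
    return not (t["size"] == "L" and t["ship"] == "air")

names = list(params)
# All valid full combinations; only pairs coverable by some valid test matter.
candidates = []
for combo in product(*params.values()):
    t = dict(zip(names, combo))
    if valid(t):
        candidates.append(t)

def pairs(t):
    # The value pairs (over distinct factors) that test t covers.
    return {((a, t[a]), (b, t[b])) for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs(t) for t in candidates))

# Greedy pairwise design: repeatedly pick the valid test covering the
# most still-uncovered pairs.
suite = []
while uncovered:
    best = max(candidates, key=lambda t: len(pairs(t) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)
print(len(suite), "tests cover all feasible pairs")
```

Note how the constraint is applied before pair collection, so infeasible pairs (such as size L with air shipping) are never demanded of the design.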
Speaker: Paul McMahon, PEM Systems; moderator: Will Tracz
Seventeen years ago agile began as a simple manifesto. Now, with all the methods and frameworks formulated in its name, it has become fat and flabby. We have reached a point where what we set out to change (big prescriptive methods) has returned, but now under the banner of being agile. The Heart of Agile is an attempt to return to agile's real core. But are the four words collaborate, deliver, reflect, and improve enough to get practitioners to implement the true heart of agile?
Essence, a new common ground for software engineering, is an attempt to find a middle ground between the very core of agile and all the multitude of competing implementations of agile. In this presentation you will learn how Essence can strengthen the Heart of Agile without getting into particular ways of doing agile. Concrete examples will be provided along with success stories demonstrating practical techniques you can start using today to strengthen your own team's implementation of the Heart of Agile.
Speaker: Danny Dig, Oregon State University; moderator: Robert Dyer
In the last decade refactoring research has seen exponential growth, with thousands of peer-reviewed research papers. I will attempt to map this vast landscape and the advances that the community has made by answering questions such as who does what, when, where, why, and how. I will muse on some of the factors contributing to the growth of the field, the adoption of research into industry, and the lessons that we learned along this journey. This will inspire and equip you so that you can make a difference, with people who make a difference, at a time when it makes a difference.
Speaker: Dennis Frailey, ACM Fellow; moderator: Robert Dyer
Unfortunately, software data often don't satisfy the criteria for use of many popular statistical analysis techniques. This webinar introduces robust statistics and other non-parametric techniques that are suitable for many software situations. It includes discussion of correlation analysis and regression analysis, as well as such techniques as box plots, histograms and control charts.
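As a small illustration of why robust statistics matter for skewed software data (the numbers are invented), compare the mean with the median and the median absolute deviation:

```python
import statistics

# Invented task-effort data in hours: skewed, with one extreme outlier,
# as software data often are.
hours = [3, 4, 4, 5, 5, 6, 7, 120]

mean = statistics.mean(hours)
median = statistics.median(hours)
# Median absolute deviation: a robust alternative to the standard deviation.
mad = statistics.median(abs(x - median) for x in hours)
print(mean, median, mad)  # → 19.25 5.0 1.0
```

The single outlier drags the mean far from the bulk of the data, while the median and MAD still describe the typical case, which is why robust summaries are often preferred for such distributions.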
Speaker: Dennis Frailey, ACM Fellow; moderator: Robert Dyer
This webinar explains how to conduct sound research studies and select the most appropriate statistical techniques. It begins by introducing several very basic statistical concepts such as central tendency measures, measures of dispersion, and measures of confidence. It shows why claims about software are often dubious due to insufficient study techniques or use of inappropriate statistical methods for analysis. It explains the different types of research studies and the factors that can invalidate a study, as well as how to assess statistical significance. The webinar concludes with a discussion of several important study techniques (such as Analysis of Variations), introduces several important statistical methods for data analysis, discusses how to choose the most appropriate statistical methods, and shows why non-parametric and robust statistical methods are needed for many software studies.
Speaker: Dennis Frailey, ACM Fellow; moderator: Robert Dyer
Effective data analysis begins with a disciplined process of measurement. This webinar explains the steps of the measurement process, with examples, followed by a discussion of certain very basic analysis techniques. It then introduces the concept of statistical distributions (with emphasis on the normal distribution) and explains how distributions are utilized in data analysis.
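As a brief stdlib sketch (the distribution parameters are assumed for illustration, not taken from the webinar), Python's statistics.NormalDist shows how a fitted normal distribution is used in analysis:

```python
from statistics import NormalDist

# Suppose historical defect-count measurements fit a normal distribution
# with mean 12 and standard deviation 3 (assumed values).
d = NormalDist(mu=12.0, sigma=3.0)

# How unusual is a new observation of 18 (two standard deviations up)?
print(round(d.cdf(18.0), 3))  # fraction of the distribution below 18
```

Framing a measurement against its distribution like this underlies common analyses such as outlier detection and control charts.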
Speaker: Dennis Frailey, ACM Fellow; moderator: Robert Dyer
Data analysis begins with understanding the fundamental principles of measurement. This webinar introduces several important principles, explains why they matter, and shows how we can misinterpret our data if we don't pay attention to these principles. The webinar provides some definitions, explains the importance of scales of measure, shows why sample size matters, and gives examples where ignoring these principles can lead to erroneous conclusions. This webinar serves as a foundation for the others in the series.
Speaker: Audris Mockus, University of Tennessee; moderator: Robert Dyer
One percent of code is responsible for 60-99% of user-found issues: how do we discover it, and what do we do about it? The software quality improvement method described in this webinar is a data-driven approach intended to
Risky File Management (RFM) starts from linking end user experiences to activities, such as code fixes or other improvements, in the source code. It utilizes data recorded in version control, issue tracking, and, potentially, customer relationship management systems, linking negative user experiences to the corresponding fixes in the source code. In most products the bulk of fixes are a normal part of the development and testing activities and are not triggered by user feedback. The RFM tracing procedure can typically be encapsulated as an add-on for common build tools.

Once the tracing is complete, the resulting data are used to identify robust predictors of negative user experiences. Typically such predictors include a past history of problems and of developer churn, though some variation among projects may be present. Based on the predictors identified in the prior step, the information is fed into a simple reporting system integrated with a project's development environment, most likely a code inspection system.

One percent or less of the most risky codebase is presented in such a reporting system. Each file is then annotated with links to past changes and issues, and project experts are asked to make a final determination of what needs to be done based on a cheat-sheet of common scenarios. Such recommendations may range from no action to, at the other extreme, reengineering the problematic area. Other example actions include more rigorous inspections and/or testing and assigning ownership for abandoned areas. The final recommendations are then scheduled for implementation based on urgency and availability of resources.
The RFM approach has been deployed and refined as a part of a quality management process in a large communications equipment company.
By the end of this webinar you should understand how to use rich data in version control and related systems to identify problematic areas in your project and the various actions that may be warranted in different circumstances.
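The webinar presents the actual RFM predictors and process; purely as an illustration of the idea of ranking files by history-based risk, a toy sketch (file names, fields, and the equal-weight score below are all invented for this example, not RFM's real model) might look like:

```python
# Toy illustration of risk ranking from version-control history.
# The data and the scoring formula are invented for this sketch;
# the actual RFM predictors and weights are covered in the webinar.
files = {
    "net/session.c":  {"past_fixes": 14, "distinct_authors": 9},
    "ui/dialog.c":    {"past_fixes": 2,  "distinct_authors": 2},
    "core/parser.c":  {"past_fixes": 11, "distinct_authors": 6},
    "util/strings.c": {"past_fixes": 0,  "distinct_authors": 1},
}

def risk_score(stats):
    # Past defects and developer churn are typical predictors of
    # future problems; equal weights are an arbitrary choice here.
    return stats["past_fixes"] + stats["distinct_authors"]

ranked = sorted(files, key=lambda f: risk_score(files[f]), reverse=True)
top = ranked[:max(1, len(files) // 100)]  # surface only the riskiest ~1%
print(top)  # → ['net/session.c']
```

In a real deployment the riskiest files would then be annotated with links to their past changes and issues for expert review, as described above.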
Speaker: David Weiss, Sustainable Software, LLC; moderator: Robert Dyer
You know your product is successful when your users start asking for changes. The more useful your software is, the more change requests and variety of change requests you get. Is there a way to anticipate such success as you design and build your software? One way is to consider that you are building a family of systems and to try to define what the family members will have in common, i.e., their commonalities, and how you are willing to let them vary, i.e., their variabilities. Software product line engineering is based on the idea of defining and developing a family of systems. The goal is to make it easy to produce members of the family. Experienced product line engineers make it possible to generate members of the family by identifying the decisions that need to be made to specify a family member and using parameterization and other techniques to instantiate the code for the family to produce the corresponding family member. Put another way, they create a decision model that links variabilities with parameters and code segments that are needed to implement the family member.
This talk will define software product line engineering and discuss the FAST method (Family-oriented Abstraction, Specification and Translation) for applying it, with examples.
Speaker: David Weiss, Sustainable Software, LLC; moderator: Robert Dyer
Architecture is key to producing systems that satisfy requirements and that are distinctive, useful, maintainable, buildable, and that delight users. This talk will cover how architecture helps to develop and maintain systems that are sustainable and that have distinct competitive advantages. It will consider architectural structures and the particular questions that they help answer that are central to software development in general and sustainability in particular. The discussion will also consider the knowledge that an architecture provides that enables developers to maintain and evolve a system over a long lifetime. Along the way we will consider what we can learn from building architecture that helps in producing sustainable systems. Understanding the characteristics of architecture that lead to sustainable systems is the hallmark of a competent architect. This talk will try to provide useful insights and examples that will help you in the design and efficient development of software that is highly desirable, sustainable, and admired.
Speaker: Gail Murphy, University of British Columbia; moderator: Robert Dyer
Continuity in software development is all about shortening cycle times. For example, continuous integration shortens the time to integrate changes from multiple developers, and continuous delivery shortens the time to get those integrated changes into the hands of users. Although it is now possible to release multiple new versions of complex software systems per day, it still often takes years, if it happens at all, to get software engineering research results into use by software development teams. What would software engineering research and software engineering development look like if we could shorten the cycle time for taking a research result into practice? What can we learn from how continuity in development is achieved to make it possible to achieve continuous adoption of research results? Do we even want to achieve continuous adoption? In this talk, I will explore these questions, drawing from experiences I have gained in helping to take a research idea to market and from insights learned from interviewing industry leaders.
Speaker: Patrick Madden, SUNY Binghamton; moderator: Will Tracz
Combinatorial optimization problems are notoriously difficult; many of them are NP-Complete, and there are few general purpose tools available. In this talk, a novel approach to optimization for these problems is presented; the approach provides trade-offs between simple greedy heuristics, classical dynamic programming, and brute force enumeration. This work is part of a larger effort to deliver sophisticated optimization tools to the general public.
Speakers: Sergio Segura, Seville University, and Zhi Quan (George) Zhou, University of Wollongong; moderator: Will Tracz
What if we could test a program by using the program itself? What if we could tell if a program is buggy even when we cannot distinguish erroneous outputs from the correct ones? This is exactly the advantage of metamorphic testing, a technique where failures are not revealed by checking an individual concrete output, but by checking the relationship among the inputs and outputs of multiple executions of the program under test. Nearly two decades after its introduction, metamorphic testing is becoming a fully-fledged testing paradigm with successful applications in multiple domains including, among others, online search engines, simulators, compilers, and machine learning programs. This webinar will provide an introduction to metamorphic testing from a double perspective. First, Sergio Segura will present the technique and the results of a novel survey outlining its main trends and lessons learned. Then, Zhi Quan Zhou will go deeper and present some of the successful applications of the technique using multiple examples.
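As a minimal standalone illustration (not taken from the speakers' material): even without an oracle telling us the correct value of sin(x), the identity sin(x) = sin(π − x) is a metamorphic relation that any correct implementation must satisfy across multiple executions:

```python
import math
import random

def check_sine_metamorphic(trials=1000, tol=1e-9):
    """Test math.sin without knowing any correct output, using the
    metamorphic relation sin(x) == sin(pi - x)."""
    for _ in range(trials):
        x = random.uniform(-10, 10)
        source = math.sin(x)                # first execution
        follow_up = math.sin(math.pi - x)   # second execution
        if abs(source - follow_up) > tol:   # relation violated -> failure
            return False
    return True

print(check_sine_metamorphic())  # True for a correct sin implementation
```

The failure signal comes from the relationship between two outputs, not from comparing any single output against an expected value — which is exactly what makes the technique useful when no oracle exists.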
Speaker: Douglas Comer, Purdue University; moderator: Robert Dyer
Although he has spent most of his career in academia, Dr. Comer has taken several leaves of absence to work in industry. This talk distills his observations about fundamental differences between an academic environment and an industrial environment. It considers the structure of organizations, project time scales, attitudes, reward systems, and innovation. The talk also highlights the differences between software and hardware engineering. Finally, the talk examines research and the effect of 20th century industrial research labs on both the research community and industry.
Speaker: Ivica Crnkovic, Chalmers University and Mälardalen University; moderator: Robert Dyer
Continuous and long-term collaboration between industry and academia is crucial for front-line research and for successful utilization of the research results. In spite of many mutual benefits, this collaboration is often challenging, not only due to different goals, but also because of the different pace at which results are delivered. The software development industry has, during the last decade, aligned its development process with agile methodologies. For researchers, agile methodologies are a topic of research rather than a means of performing the research itself. However, research is often characterized by elements that can be related to practices from agile methodologies. This raises a question: can agile methodologies be a good common ground for enabling successful research collaboration between industry and academia? Is it possible to apply certain agile practices established in industry, e.g. Scrum, to collaboration projects? What would be the possible benefits, and the possible unwanted side effects? These questions will be discussed in the presentation, which will also elaborate on experiences from a longitudinal case study of a collaboration between several academic institutions and several companies that stepwise adopted Scrum over a six-year period.
Speaker: Amy J. Ko, University of Washington; moderator: Robert Dyer
Many developers work in startups, but few have the time or incentive to reflect rigorously on their experiences and then share those reflections. In this talk, I will report on my efforts to engage in such reflection, spanning three years of daily diary writing while I acted as CTO and co-founder of a software startup. Based on an analysis of my more than 9,000 hours of experience, I will share several ideas on how software evolves in startups, how the people in startups shape and constrain its evolution, and how the decisions behind this evolution are primarily structured by a company's beliefs about its software's ever-evolving value to customers. Based on these ideas, I share several implications for how developers in startups might rethink their roles as engineers from builders to translators of value.
Speakers: Ravi Sethi, University of Arizona, and John Palframan, Sustainable Software; moderators: Robert Dyer and Will Tracz
The software quality improvement method described in this webinar is a data driven approach with these elements:
- A post-release customer quality metric based on serious defects that are reported by customers after systems are deployed.
- A pre-release implementation quality index that serves as a predictor of future customer quality; empirical analysis shows a positive correlation with the customer quality metric.
- A prioritization technique, introduced in this webinar and discussed in more detail in a later webinar, that focuses limited resources on the top 1% riskiest files in the code.
- Governance for the improvement method, provided by regular reviews with an R&D quality council.
These techniques, used by a large telecommunications company, contributed to a 60% improvement in its Net Promoter Score over three years.
By the end of this webinar you should understand how to establish your own measurement program based on customer perceived quality.
For more information on improving Customer Perceived Quality, see the following paper:
Randy Hackbarth, Audris Mockus, John Palframan, and Ravi Sethi, "Improving Software Quality as Customers Perceive It", IEEE Software, vol. 33, pp. 40-45, July-Aug. 2016, doi:10.1109/MS.2015.76
Speakers: Andy Meneely, Rochester Institute of Technology, and Robert Dyer, Bowling Green State University
A critical piece of securing our nation's digital infrastructure is to reduce vulnerabilities in software. Vulnerabilities, while prominent in the media and national conversation, are rare occurrences in software, existing in only approximately 1% of source code files. While many vulnerabilities look like simple coding mistakes, preventing them is extraordinarily difficult: they are small, hard to test for, and require an attacker's mindset to anticipate. Software engineering researchers have been studying how these vulnerabilities manifest themselves in software from an empirical, evidence-based perspective. While research knowledge has proven useful to academic audiences, the stories of how vulnerabilities arise in software have yet to reach a wider audience, namely students and professional software engineers.
In this webinar, Dr. Andy Meneely will discuss his efforts to create the Vulnerability History Project (VHP). The VHP is a data source, a collaboration platform, and a visual tool to explore the engineering failures behind vulnerabilities. The VHP is a collaboration among undergraduate students, security researchers, and professional software engineers to aggregate, curate, annotate, and visualize the history behind thousands of vulnerabilities that are patched in software systems every year. This data curation project allows researchers to conduct in-depth studies of open source products, as well as educate software engineers-in-training and in the field on what can go wrong in their software project that leads to vulnerabilities.
Speakers: David Weiss, Iowa State University and Sustainable Software, and Randy Hackbarth, Sustainable Software; moderators: Robert Dyer and Will Tracz
In this webinar, we describe an annual corporate-wide software assessment process that has been used successfully for more than 10 years to improve software competency within a large company. The process is tailored to address specific goals of an organization. A company's software development organization is continually called upon to improve the quality of its software, to decrease its time-to-market, and to decrease the cost of development and maintenance of its software. Under these pressures, it is critical to identify changes in development processes, environments, cultures, and tools that maximize improvement, that can be accomplished with existing resources, that help the company to be more competitive, and that produce measurable results.
We will use examples taken from annual assessments, described in a yearly report, to illustrate the methods used: qualitative methods based on interviews and quantitative methods based on big data. We will discuss the lessons learned from applying those methods. We show why and how the scope of the report and the methods used evolved over time, how the report became a basis for software improvement in the company, and what the impact of the report was and how we estimate that impact, both financially and subjectively. We discuss why this approach was successful and provide suggestions for how to initiate a corresponding effort.
By the end of this webinar you should understand how to establish your own organization's software assessment program.
For more information on state-of-software assessments, see chapter 15, "Assessing the State of Software: A 12 Year Retrospective", by Hackbarth, Palframan, Mockus, and Weiss, in The Art and Science of Analyzing Software Data, edited by Christian Bird, Tim Menzies, and Thomas Zimmermann, Elsevier, August 2015.
Speakers: Paul E. McMahon, PEM Systems, and Will Tracz, ACM SIGSOFT
In 2008, then US Secretary of Defense Robert Gates stated that we can't keep looking for the 100% solution. Before that point, people were trying agile approaches in defense companies, but they weren't openly talking about it. It was underground, because the common belief at the time was that there was something wrong with being agile in a regulated environment. But Gates' statement in 2008 marked a key turning point that changed how the Department of Defense (DoD) viewed agile and lean approaches, and it eventually led to a new DoD instruction, 5000.02, released in January 2015. This webinar is about being agile and lean in regulated environments, but it isn't just about how to do so in the US defense industry, because the challenges faced are similar in other regulated industries including medical, pharmaceutical, and financial. Those challenges tie to what it really means to be agile and lean versus common false beliefs, which will be highlighted in the webinar along with tips and pitfalls to help participants find their right level of "becoming agile and lean" while also ensuring regulatory compliance. Actual case study stories will be cited, along with multiple references to where webinar participants can find related published material.
Speakers: Nikolai Tillmann, Microsoft, Judith Bishop, Microsoft Research and Tao Xie, University of Illinois at Urbana-Champaign
Achieving successful technology adoption in practice has often been an important goal for both academic and industrial researchers. However, it is generally challenging to transfer research results into industrial products or into tools that are widely adopted. What are the key factors that lead to practical impact for a research project? This talk presents our experiences and lessons learned in successfully transferring tools from a medium-sized software testing project, Pex (http://research.microsoft.com/pex). Over the course of nearly a decade, the collaboration between groups at Microsoft across the world and academics at various universities has led to high-impact tools that are now shipped by the company and adopted by the community. These tools include Fakes, a test isolation framework shipped with Visual Studio 2012/2013; IntelliTest, an automatic test generation tool shipped with Visual Studio 2015; and Code Hunt (https://www.codehunt.com), a popular serious gaming platform for coding contests and practicing programming skills, which has attracted hundreds of thousands of users since 2014. Attendees will take away some general ideas from our experiences that they can apply within their own projects.
Speakers: Margaret-Anne Storey, University of Victoria, and Robert Dyer, Bowling Green State University
Software analytics and the use of computational methods on "big" data in software engineering is transforming the ways software is developed, used, improved and deployed. Software engineering researchers and practitioners are witnessing an increasing trend in the availability of diverse trace and operational data and the methods to analyze it. This information is being used to paint a picture of how software is engineered and suggest ways it may be improved. But we have to remember that software engineering is inherently a socio-technical endeavour, with complex practices, activities and cultural aspects that cannot be externalized or captured by tools alone -- in fact, they may be perturbed when trace data is surfaced and analyzed in a transparent manner.
In this talk, I will ask several questions about these practices and their implications.
I will explore these questions through specific examples and discuss how software analytics that depend on "big data" from tools, as well as methods that collect "thick" data from participants, can be mutually beneficial in improving software engineering research and practice.
Speakers: Gail C. Murphy, Tasktop Technologies and University of British Columbia, and Betty Zakheim, Tasktop Technologies
Everyone seems to want more software developed and produced faster. Yet simply ramping up the number of individuals able to produce software is not sufficient; it is also important to improve the productivity of the software developers. But, what is software development productivity anyway? When do software developers consider themselves productive? What friction exists in software development that lowers productivity? In this talk, Gail Murphy will discuss recent studies about software development productivity from the eyes of developers and will suggest directions to improve software development productivity based on the daily activities of software developers. This talk includes joint work with T. Fritz (U. Zürich), A. Meyer (U. Zürich) and T. Zimmermann (Microsoft Research).
Speakers: Bram Adams, Polytechnique Montreal, Stephany Bellomo, SEI, Foutse Khomh, Polytechnique Montreal, Shane McIntosh, McGill University
The release engineering process brings high quality code changes from a developer's workspace to the end user, encompassing (amongst others) the integration of code changes, continuous building/testing of such changes (CI), setup of deployment environments, deployment and release. Recent practices of continuous delivery, which bring changes to the end user in the order of days or hours rather than years, have convinced many companies to invest in their release engineering pipeline and teams. However, what exactly should these companies invest in? Which continuous delivery strategies work, and which ones did not (and why)? Do only large companies benefit? These and other questions were targeted by the past three editions of the International Workshop on Release Engineering (RELENG) and the 1st IEEE Software Special Issue on Release Engineering. This webinar will revisit the major insights and discussion points of RELENG, aiming to provide a starting point for companies to decide on their future release engineering strategy.
Speakers: Alessandro Orso, Georgia Institute of Technology, Gregg Rothermel, University of Nebraska-Lincoln and Willem Visser, Stellenbosch University
Despite decades of work by researchers and practitioners on numerous software quality assurance techniques, testing remains one of the most widely practiced and studied approaches for assessing and improving software quality. In this webinar, which is based on our ICSE 2014 "Future of Software Engineering" paper, we provide an accounting of some of the most successful research performed in software testing in the last 15 years and present some of the most significant challenges and opportunities in this area. To be more inclusive in this effort, and to go beyond our own personal opinions and biases, we began this effort by collecting the input of 50 of our colleagues, both in academia and in industry, who are active in the testing research area. What we will provide is therefore not only our views, but also those of the software testing community in general.
Speaker: Jane Cleland-Huang, DePaul University
Modern Software and Systems engineering projects produce large quantities of data as a natural byproduct of the engineering process. Artifacts include user stories, requirements, design documents, source code, commit logs, project plans, and much more. When combined with the power of software analytics, this data can deliver actionable intelligence into the hands of project stakeholders. Such intelligence supports decision making, process improvement, safety analysis, and myriad other software engineering tasks. In this talk, Professor Cleland-Huang first discusses the diverse queries that project stakeholders need and want to ask. She then presents process-driven, dynamic traceability solutions for establishing meaningful associations between artifacts. These traceability solutions are designed, wherever possible, to establish traceability as a byproduct of the development process and, where that is not possible, to leverage just-in-time information retrieval solutions. Professor Cleland-Huang then shows how the traceability infrastructure supports powerful query mechanisms that are capable of retrieving and processing raw data in order to deliver real project intelligence. In particular, she will present TiQi: a natural language interface for querying software projects, and provide examples of diverse analytic queries. Given the benefits of such query mechanisms and the irreplaceable role of traceability in achieving them, she makes the bold claim that Traceability has become the New Black!
Speaker: Aditya Nori, Senior Researcher, Microsoft Research India
Recent years have seen a huge shift in the kind of programs that most programmers write. Programs are increasingly data driven instead of being algorithm driven. They use various forms of machine learning techniques to build models from data, for the purpose of decision making. Indeed, search engines, social networks, speech recognition, computer vision, and applications that use data from clinical trials, biological experiments, and sensors, are all examples of data driven programs. We use the term "probabilistic programs" to refer to data driven programs that are written using higher-level abstractions. Though they span various application domains, all data driven programs have to deal with uncertainty in the data, and face similar issues in design, debugging, optimization, and deployment. In this talk, we describe the connections this research area, called "Probabilistic Programming", has with programming languages and software engineering; this includes language design, static and dynamic analysis of programs, and program synthesis. We survey the current state of the art and speculate on promising directions for future research.
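As a rough standalone illustration of the idea (the data, the uniform prior, and the naive inference scheme below are invented for this sketch, not from the talk), a "probabilistic program" describes how data is generated, and inference runs it backwards to learn from observations:

```python
import random

# A tiny "probabilistic program": infer a coin's bias from observed
# flips by rejection sampling. Real probabilistic programming systems
# automate this kind of inference over far richer programs.
observed = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 heads out of 8 flips

def sample_posterior(n_samples=20000):
    """Posterior mean of the bias, conditioned on the observed head count."""
    accepted = []
    while len(accepted) < n_samples:
        bias = random.random()                        # draw from uniform prior
        flips = [1 if random.random() < bias else 0   # run the generative program
                 for _ in observed]
        if sum(flips) == sum(observed):               # condition on the data
            accepted.append(bias)
    return sum(accepted) / len(accepted)

print(round(sample_posterior(), 2))  # ~0.7: posterior mean near 7/10
```

The program only states how flips are generated; conditioning on the data handles the uncertainty, which is the separation of model from inference that probabilistic programming aims for.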
Speaker: Václav Rajlich, Wayne State University
Both employers and graduate schools expect computer science graduates to be able to work on software projects as developers, yet many computer science programs fail in that fundamental goal. This webcast describes how the first software engineering course (1SEC) can be reorganized to meet these expectations. The webcast first presents seven common dead-end approaches to 1SEC ("deadly sins"). We avoided the deadly sins by teaching evolutionary software development (ESD), which is the current software development mainstream; agile, iterative, and open source processes are variants of ESD. The fundamental task of ESD is the software change that adds a new feature to an existing program. We teach a phased model of software change that divides a change into phases and helps novices add new features to complex, unfamiliar programs. Our 1SEC projects use open-source programs, and students add new features to these programs; this gives them experience with projects of realistic size and complexity, without requiring extraordinary effort to reach that size. The webcast presents our experience with this approach. The webcast also proposes follow-up courses that would teach additional skills future developers may need.
Speakers: Danny Weyns, Linnaeus University, and Tomas Bures, Charles University
Cyber-Physical Systems (CPS) are large-scale networked distributed systems that combine various data sources to control real-world ecosystems (e.g. intelligent traffic control). One of the trends is to endow such systems with "smart" capabilities, typically in the form of self-awareness and self-adaptation, along with the traditional qualities of safety and dependability. The combination of these requirements together with specifics of smart CPS render traditional software engineering (SE) techniques not directly applicable, making systematic SE of smart CPS a challenging task. In this webinar, we report on the results of the First International Workshop on Software Engineering of Smart Cyber-Physical Systems (SEsCPS 2015), where 25 participants discussed characteristics, challenges and opportunities of SE for smart CPS. In the first part we discuss "Core Themes" that we derived from the contributions presented in the morning session of the workshop. Themes include: Faults and conflicts; Modeling, testing, and verification; and Collaboration. In the second part of the webinar, we elaborate on "Open Research Topics" that we derived from the results of the workshop's afternoon breakout sessions. Topics include: Aligning different disciplines; Human in the loop; Uncertainty; and Pragmatic vs. systematic engineering.
Speaker: Lionel C. Briand, University of Luxembourg
This talk will report on more than a decade of experience with system traceability and its applications. Various forms of traceability between requirements, design decisions, and test cases are required by numerous industry standards. Traceability research receives limited attention but is nevertheless an extremely important topic. I will present an overview of the field and its challenges based on project experience with industry. Going through three recent projects, I will illustrate my main points and reflections on the subject. The focus of this presentation will be on traceability between requirements, design decisions, and test cases, as traceability research to date has been largely code-centric.
Speaker: Satish Chandra, Senior Principal Engineer, Samsung Electronics
So you have developed a new software productivity tool, published a research paper about it, and you are justifiably proud of your work. If you work for a company, your (curmudgeonly) manager now wants to see its "impact" on the business. This is the part where you have to convince someone else to use your shiny new tool in their day-to-day work, or ship it as a product. But, you soon realize that getting traction with developers or product managers is significantly harder than the research itself. Sound familiar?
In the past several years, Satish was involved in taking a variety of software productivity tools to various constituencies within a company: internal users, product teams, and service delivery teams. In this talk, he would like to share the experiences he had in interacting with these constituencies; sometimes successful experiences, but at other times not so successful ones. The webinar will focus broadly on tools in two areas: bug finding and test automation. Satish will make some observations on when tech transfer works and when it stumbles.
Speaker: Margaret Burnett, Oregon State University
End-user programming has become pervasive in our society, with end users programming simulations, courseware, spreadsheets, macros, mashups, and more. This talk considers what happens when we add consideration of the software lifecycle beyond the "coding" phase of end-user programming. Considering other phases is necessary, because there is ample evidence that the programs end users create are filled with errors. End-user software engineering (EUSE) is a research area that aims to invent new kinds of technologies that collaborate with end users to improve the quality of their software.
In this webinar, we describe the present state of EUSE, and challenges in moving forward toward a bright future. We show how the future of EUSE may become over-siloed, restricting future researchers' vision of what can be achieved. We then show that focusing on the in-the-moment intents of end-user developers can be used to derive a number of promising directions forward for EUSE researchers, and how theories can help us further de-silo future EUSE research. Finally, we discuss how overcoming challenges for the future of end-user software engineering may also bring direct benefits to the future of "classic" software engineering.
Speaker: Will Tracz, ACM SIGSOFT Chair
This webinar provides attendees with the who, what, where, and what next of the ACM Special Interest Group on Software Engineering (SIGSOFT). Attendees will be informed of changes in membership benefits, volunteer opportunities, and recent changes in ACM publication policies regarding open access to conference proceedings. The initial presentation is scheduled for no more than 30 minutes, with the remaining half hour reserved for answering questions.
Speaker: Václav Rajlich, Wayne State University
Successful software requires constant change that is triggered by volatility of requirements, technologies, and stakeholder knowledge. This constant change constitutes software evolution. There is also the new prominence of evolutionary software development that includes agile, iterative, open source, inner source, and other processes; the bulk of software development now happens in the stage of software evolution. This webcast discusses reasons for this shift and new issues that emerged. It also discusses the process of software change, which is the fundamental software evolution task. It briefly contrasts software evolution and software maintenance. It presents both the current state of the art and the perspectives of future advances.