Research Highlights

Starting in 2020, SIGSOFT selects papers from its sponsored conferences that present recent, significant, and exciting results of general interest to the computer science research community. These papers, called SIGSOFT Research Highlights, are also recommended for consideration for the Research Highlights section of the Communications of the ACM (CACM).

Nomination Process

The committee considers nominated papers from two sources:

  • Conference Organizers: The chair of the SIGSOFT Research Highlights Committee will solicit nominations from the program chairs of SIGSOFT-sponsored conferences.
  • Community: Any SIGSOFT member may nominate a paper appearing in a SIGSOFT conference, provided they do not have a conflict of interest.

To nominate a paper, please fill in the nomination form.

By submitting a nomination statement, nominators authorize the SIGSOFT Research Highlights Committee to reuse all or a portion of the statement for the purpose of nominating the paper further as a CACM Research Highlight, if applicable.

SIGSOFT Research Highlights Committee

SIGSOFT Research Highlights Papers

A Tale from the Trenches: Cognitive Biases and Software Development

S. Chattopadhyay, N. Nelson, A. Au, N. Morales, C. Sanchez, R. Pandita, and A. Sarma

Venue: ICSE 2020

Nomination Statement: Cognitive biases impact decision-making in many spheres of human activity, including software development. If left unchecked, cognitive biases can lead to negative outcomes which, in the case of software development, include inferior solutions and the need to reverse design decisions later in the development process. This paper presents the first field study of the effects of cognitive biases in software development. In contrast to prior work which was conducted in controlled environments, this study analyzed data primarily collected from software developers' daily work. The study offers a rich perspective on how cognitive biases manifest themselves in practice with insights for many stakeholders, including developers, their managers, and the builders of software development tools. These insights for improving software development tools and practices also have the potential to apply more broadly to the use of software technology in general.

Here We Go Again: Why Is It Difficult for Developers to Learn Another Programming Language?

N. Shrestha, C. Botta, T. Barik, C. Parnin

Venue: ICSE 2020

Nomination Statement: It is not uncommon for programmers to have to learn a new programming language, yet relatively few resources exist to facilitate this transition. This study carefully documents why it is difficult for proficient programmers to learn a different language. The study effectively leverages two complementary sources of data: Stack Overflow posts and interviews with programmers. Through this synergy, it provides a rich illustration of how knowledge of one language can interfere with learning. The study also provides insights on the source of confusion caused by old habits and attempts at mapping between languages. The paper provides an important reminder that software technologies do not exist in isolation.

How Does Misconfiguration of Analytic Services Compromise Mobile Privacy?

X. Zhang, X. Wang, R. Slavin, T. Breaux, J. Niu

Venue: ICSE 2020

Nomination Statement: Popular mobile applications (apps) typically rely on third-party analytic services to collect usage profile data about their users. Analytic services present a privacy risk because their interfaces enable app developers to channel personally identifiable information (PII) to the services. This paper reports on a deep technical investigation of how popular apps use analytic services with respect to privacy protection. Its findings are both clear and unsettling: over 12% of the apps studied provide PII to their analytic services, in many cases in direct violation of the app's own privacy policy. These results have implications for practically all stakeholders in the mobile software ecosystem, including, notably, most app users.

White-box Fairness Testing through Adversarial Sampling

P. Zhang, J. Wang, J. Sun, G. Dong, X. Wang, X. Wang, J.S. Dong, and T. Dai

Venue: ICSE 2020

Nomination Statement: Deep neural networks (DNNs) have demonstrated their effectiveness in many important application contexts, from face recognition to medical diagnosis and fraud detection. Especially when DNNs process human-related characteristics, it is of paramount importance to ensure that they behave fairly. However, because of societal bias often present in the training data, the resulting DNNs may unintentionally introduce discrimination. To address this problem, the paper proposes a scalable approach for generating individual discriminatory instances for DNNs. By generating such instances, it becomes possible to retrain a DNN to reduce discrimination. The approach is evaluated against two other state-of-the-art techniques on three significant datasets; it explores the search space more effectively and generates a larger number of individual discriminatory instances in significantly less time. This paper makes a contribution that cuts across two disciplines, software engineering and machine learning, and paves the way toward improving the quality of DNNs and their usability in societal contexts.