It is well known that once a Java application uses native C/C++ methods through the Java Native Interface (JNI), any security guarantees provided by Java may be invalidated by the native code, so any vulnerability in this trusted native code can compromise the security of the Java program. Fuzz testing is an approach to software testing in which the system under test is bombarded with inputs generated by another program. When using a fuzzer to test JNI programs, accurately reaching the JNI functions and executing through them to the sensitive system APIs is a precondition of the test. In this paper, we present a heuristic fuzz-generation method for JNI vulnerability detection based on the program's branch predicate information. Our experimental results show that the method reaches more sensitive Windows APIs in Java native code with fewer fuzzing iterations.
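The core idea of predicate-guided fuzz generation can be sketched, in a language-agnostic toy form, as a hill-climbing fuzzer that scores each candidate input by how many consecutive branch predicates it satisfies on the path toward a sensitive call. Everything below (the `PREDICATES` list, the scoring scheme, the mutation strategy) is a hypothetical illustration under assumed predicates, not the paper's implementation:

```python
import random

# Toy target: an input must pass three branch predicates in sequence
# before the "sensitive API" would be reached (hypothetical example).
PREDICATES = [
    lambda b: len(b) >= 4,
    lambda b: b[0] == 0x4A,           # magic byte check
    lambda b: b[1] ^ b[2] == 0xFF,    # checksum-like condition
]

def score(data: bytes) -> int:
    """Number of consecutive branch predicates the input satisfies."""
    s = 0
    for pred in PREDICATES:
        try:
            if not pred(data):
                break
        except IndexError:
            break
        s += 1
    return s

def mutate(data: bytes) -> bytes:
    """Flip one random byte; occasionally grow the input."""
    buf = bytearray(data) or bytearray(b"\x00")  # never mutate an empty buffer
    buf[random.randrange(len(buf))] = random.randrange(256)
    if random.random() < 0.2:
        buf.append(random.randrange(256))
    return bytes(buf)

def fuzz(seed: bytes, budget: int = 20000):
    """Hill-climb on predicate depth: keep a mutant only if it gets
    deeper into the branch sequence than the current best input."""
    best, best_score = seed, score(seed)
    for _ in range(budget):
        cand = mutate(best)
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
        if best_score == len(PREDICATES):
            break
    return best, best_score
```

Compared with blind random generation, scoring by predicate depth concentrates the fuzzing budget on inputs that are already partway down the path to the sensitive call.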
Developers use FAQs (Frequently Asked Questions) to access and share knowledge about software libraries, APIs, and platforms. This paper studies 2,660 questions from 43 FAQ websites. We analyzed accessibility metrics, such as the number of steps from the main documentation page, tagging, and multilingualism, as well as structure and readability metrics, such as code-to-text ratio, number of links, and Flesch Reading-Ease. In addition, we compared these FAQs to 69,548 Stack Overflow (SO) posts that cover the same topics and have been posted by developers at least twice (i.e., duplicates). Our results reveal that different software vendors give different importance to their FAQs, e.g., by investing more or less effort in structuring and presenting them. We found that the studied FAQs include more references (e.g., to corresponding API documentation) and are more verbose and difficult to read than the corresponding SO duplicates. We also found that FAQs cover additional topics compared to the corresponding duplicate posts.
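As a concrete reference for one of the readability metrics above, the Flesch Reading-Ease score combines average sentence length and average syllables per word; higher scores indicate easier text. The sketch below uses a rough vowel-group heuristic for syllable counting (real readability tools use dictionaries or hyphenation rules, so scores on real FAQ text would differ slightly):

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: one syllable per group of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading-Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, a short monosyllabic sentence scores well above 100, while long sentences with polysyllabic words push the score toward or below zero.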
The desire to talk to a machine is not new. Recent advancements in the field of Natural Language Understanding have made it possible to build conversational components that can be plugged into an application like any other component. These components, called chatbots, can be created from scratch or with the help of commercially available platforms. These platforms make it easier to build and deploy chatbots, often without writing a single line of code. However, like any other software component, chatbots have quality concerns. Despite significant contributions in the field, an architectural perspective on building chatbots with desired quality requirements is missing from the literature.
In the current work, we highlight the impact of the features provided by these platforms (along with their quality) on the application design process and overall quality attributes. We propose a methodological framework to evaluate the support a chatbot platform provides towards achieving quality in the application. The framework, called the Hospitality Framework, is based on the software architecture body of knowledge, especially architectural tactics. The framework produces a metric, called the Hospitality Index, which is useful for making various design decisions for the overall application. We demonstrate our framework on a simple use case to highlight the phases of evaluation, picking three popular chatbot platforms (Watson Assistant, DialogFlow, and Lex) and four quality attributes (Modifiability, Security & Privacy, Interoperability, and Reliability). Our results show that different platforms provide different levels of support for these four quality attributes.
The use of software analytics in software development companies has grown in recent years. Still, there is little support for such companies to obtain integrated, insightful, and actionable information at the right time. This research aims to explore the integration of runtime and development data to analyze to what extent external quality is related to internal quality, based on real project data. Over the course of more than three months, we collected and analyzed data from a software product following the CRISP-DM process. We studied the integration possibilities between runtime and development data and implemented two integrations. The number of bugs found in the code has a weak positive correlation with code quality measures and a moderate negative correlation with the number of rule violations found. Other types of correlations require more data cleaning and higher-quality data for their exploration. During our study, we encountered several challenges in exploiting data gathered both at runtime and during development. Lessons learned from integrating external and internal data in software projects may be useful for practitioners and researchers alike.
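A correlation analysis of the kind reported above can be reproduced with a plain Pearson coefficient over per-module pairs of, say, runtime bug counts and static rule violations. The data below is entirely hypothetical (the study's dataset is not shown); only the formula matches standard practice:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-module data: bugs found at runtime vs. rule violations
# flagged by static analysis.
bugs       = [3, 1, 7, 2, 5, 0, 4]
violations = [40, 55, 12, 48, 25, 60, 30]
r = pearson(bugs, violations)
```

In practice, the hard part is not the coefficient itself but joining runtime and development records on a common key (module, release, time window), which is exactly where the data-cleaning challenges mentioned above arise.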
The quality of software systems continues to be an important subject of investigation. Assessing and predicting quality attributes of object-oriented designs is performed using software metrics, since a good internal structure of a software system greatly influences its external quality attributes.
This study presents an empirical investigation of software reliability. The goal is to identify the applicability of object-oriented design metrics for reliability prediction. First, we estimate reliability: we propose a new class-level reliability metric that considers two perspectives on the failures/bugs found, i.e., their priority and severity. The estimated reliability values then help us predict the reliability of other software projects based on their internal structure. This reliability prediction can be made earlier in the software development life cycle.
The prediction methodology is a statistical method, multiple linear regression, with the bug count of a class (reflected in the newly proposed metric) as the dependent variable and the values of the Chidamber and Kemerer (CK) metrics as the independent variables. The results indicate that the most influential CK metrics for predicting reliability are WMC (Weighted Methods per Class) and CBO (Coupling Between Object classes), and that the RFC (Response For Class) and LCOM (Lack of Cohesion of Methods) metrics have no impact on the reliability value. The root mean square error, computed on data from four other projects, is used to validate the proposed regression equation.
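The regression setup can be sketched with ordinary least squares over class-level data, using WMC and CBO as predictors of the bug count and RMSE for validation. The metric values below are invented for illustration (the study's actual data and tooling are not shown):

```python
from math import sqrt

def fit_linear(X, y):
    """Ordinary least squares via the normal equations (A^T A) b = A^T y,
    where A is X with a prepended intercept column."""
    A = [[1.0] + list(row) for row in X]
    k = len(A[0])
    M = [[sum(A[i][p] * A[i][q] for i in range(len(A))) for q in range(k)]
         for p in range(k)]
    v = [sum(A[i][p] * y[i] for i in range(len(A))) for p in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (v[r] - sum(M[r][c] * b[c] for c in range(r + 1, k))) / M[r][r]
    return b  # [intercept, coefficient per predictor]

def rmse(X, y, b):
    """Root mean square error of the fitted model on (X, y)."""
    preds = [b[0] + sum(c * x for c, x in zip(b[1:], row)) for row in X]
    return sqrt(sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y))

# Hypothetical class-level samples: (WMC, CBO) -> bug count.
X = [(10, 2), (20, 5), (15, 8), (30, 3), (25, 10)]
y = [3.1, 6.0, 5.9, 7.4, 8.5]
b = fit_linear(X, y)
```

Validation against held-out projects, as in the study, amounts to calling `rmse` with the coefficients fitted on one project and the (X, y) data of another.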
Engaging contributors in a Free Open Source Software (FOSS) project can be challenging. Finding an appropriate task to start with is a common entrance barrier for newcomers. Poor code quality contributes to difficulties in the onboarding process and limits contributor satisfaction in general. In turn, dissatisfied developers tend to exacerbate problems with system integrity. Poorly designed systems are difficult to maintain and extend. Users can often directly experience these issues as instabilities in system behavior. Thus, code quality is a key issue for users and contributors in FOSS. We present a case study on the interactions between code quality and contributor experience in the real-world FOSS project Catrobat. We describe the implications of a refactoring process in terms of code metrics and the benefits for developers and users.