Software is incredibly hard to secure because it’s a black box. We’ve spent decades trying to verify properties of software by analyzing the source code, scanning, fuzzing, pentesting, etc., only to be continually outpaced by software complexity. Instrumentation is a powerful approach for measuring security directly from within running code. In this talk, you’ll learn how to use the free and open source Java Observability Toolkit (JOT) project to easily create your own powerful runtime instrumentation without coding. You can use JOT to analyze security defenses, identify complex vulnerabilities, create custom sandboxes, and enforce policy at runtime. You can even create your own IAST tests and your own RASP defenses using JOT. Ultimately, we’ll show that security instrumentation empowers development and security to work together in harmony.
Machine Learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, playing complex games such as chess, Go, and Atari video games, and more. This has led to much breathless popular press coverage of Artificial Intelligence, and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systemic risk introduced by adopting ML in a haphazard fashion. Our research at the Berryville Institute of Machine Learning (BIML) is focused on understanding and categorizing security engineering risks introduced by ML at the design level. Though the idea of addressing security risk in ML is not a new one, most previous work has focused on either particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. This talk focuses on two threads: building a taxonomy of known attacks on ML and the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.
The rising popularity of Internet-of-Things (IoT) devices has driven their increasing adoption in various settings, such as modern homes. IoT systems integrate such physical devices with third-party apps, which can coordinate in arbitrary ways. However, malicious or undesired coordination can lead to serious vulnerabilities. This paper explores two different ways, i.e., a commonly-used state-based approach and a holistic, rule-based approach, to formally model app coordination and the safety and security thereof in the context of IoT platforms. The less common rule-based approach allows for a smaller, more scalable model. We realize both modeling approaches using bounded model checking with Alloy to automatically identify potential cases where apps exhibit coordination relationships. We evaluate the effectiveness of the modeling approaches by checking a corpus of real-world IoT apps from Samsung SmartThings and IFTTT. The experimental results demonstrate that our rule-based modeling leads to a more scalable analysis.
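The rule-based idea can be sketched in a few lines: reduce each app to trigger → action rules and flag a coordination whenever one app's action can fire another app's trigger. The sketch below is purely illustrative; the app names, events, and matching logic are hypothetical stand-ins, not the paper's actual Alloy model.

```python
from itertools import permutations

# Toy rules: (app, trigger_event, action_event). These names are
# invented for illustration, not taken from SmartThings or IFTTT.
rules = [
    ("presence-sensor", "user_arrives",  "set_mode_home"),
    ("mode-lighting",   "set_mode_home", "lights_on"),
    ("energy-saver",    "lights_on",     "open_blinds"),
    ("doorbell-cam",    "motion",        "record_clip"),
]

def coordinations(rules):
    """Ordered pairs (a, b) where app a's action event matches app b's
    trigger event, i.e., a can activate b."""
    return [(a, b) for a, b in permutations(rules, 2) if a[2] == b[1]]

for a, b in coordinations(rules):
    print(f"{a[0]} -> {b[0]} via event '{a[2]}'")
```

Chaining such pairs is what surfaces multi-app coordination (here, presence-sensor → mode-lighting → energy-saver); a bounded model checker like Alloy explores the same space exhaustively within a scope bound.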
Being able to detect program runtime complexity can help identify security vulnerabilities such as DoS attacks and side-channel information leakage. In prior work, we use dynamic invariant generation to infer nonlinear numerical relations to represent runtime complexity of imperative programs. In this work, we propose a new dynamic analysis approach for learning recurrence relations to capture complexity bounds for recursive programs. This approach allows us to efficiently infer simple linear recurrence relations that represent nontrivial, potentially nonlinear, complexity bounds. Preliminary results on several popular recursive programs show that we can learn precise recurrence relations capturing worst-case complexity bounds such as O(n log n) and O(c^n).
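The flavor of the approach can be illustrated with a minimal sketch (not the authors' tool): dynamically collect (input size, cost) observations from a divide-and-conquer recursion, then check a candidate linear recurrence T(n) = 2·T(n/2) + n against the data. A recurrence of that shape implies the nonlinear O(n log n) bound.

```python
def cost(n):
    """Observed work of a merge-sort-style recursion: each call on an
    input of size n does n units of merge work plus two recursive calls."""
    if n <= 1:
        return 0
    return cost(n // 2) + cost(n - n // 2) + n

# Dynamic observations on power-of-two sizes.
obs = {n: cost(n) for n in (2, 4, 8, 16, 32)}

# Candidate recurrence "learned" from the data: T(n) = 2*T(n/2) + n.
fits = all(obs[n] == 2 * obs[n // 2] + n for n in (4, 8, 16, 32))
print(fits)  # the simple linear recurrence matches every observation
```

The recurrence itself is linear in T, yet standard recurrence solving (e.g., the master theorem) yields the nonlinear bound O(n log n), which is exactly the leverage the abstract describes.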
Software repositories contain information about source code, software development processes, and team interactions. We combine provenance of the development process with code security analysis to automatically discover insights. This provides fast feedback on the software's design and security issues, which we evaluate on projects that are developed under time pressure, such as Germany's COVID-19 contact tracing app 'Corona-Warn-App'.
A common type of security analysis involves checking whether a system is capable of establishing a set of security requirements under a particular threat model. Building an accurate threat model, however, is a challenging task due to the uncertain and evolving nature of the malicious environment in which the system is deployed. In this paper, as a complementary analysis, we propose a systematic approach for evaluating the design of a system with respect to its robustness against an adversarial environment; i.e., the degree of assumptions about attacker capabilities under which the system is capable of maintaining its security requirements. We argue that robustness is an important property that should be considered as part of any secure development process. To this end, we propose a formal definition of robustness and describe a technique for automatically evaluating the robustness of a system. We demonstrate potential applications of the robustness concept using an example involving the OAuth authentication protocol.
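One way to picture this notion of robustness is as a frontier of attacker capabilities: enumerate capability combinations and report the maximal sets under which a security check still passes. The sketch below is a toy under stated assumptions; the capability names and the stubbed `holds` check are hypothetical, not the paper's OAuth model or its actual evaluation technique.

```python
from itertools import combinations

# Hypothetical attacker capabilities (illustrative only).
CAPABILITIES = ["read_network", "intercept_redirect", "compromise_client"]

def holds(caps):
    """Stubbed security check: the requirement survives unless the
    attacker can both read traffic and intercept the redirect."""
    return not ("read_network" in caps and "intercept_redirect" in caps)

def robustness_frontier(capabilities):
    """Maximal capability sets under which the requirement still holds."""
    safe = [set(c) for r in range(len(capabilities) + 1)
                   for c in combinations(capabilities, r) if holds(c)]
    # Keep only maximal sets: no safe strict superset exists.
    return [s for s in safe if not any(s < t for t in safe)]

for s in robustness_frontier(CAPABILITIES):
    print(sorted(s))
```

The frontier (here, two maximal sets) quantifies "the degree of assumptions about attacker capabilities" the design tolerates: a design is more robust than another if its frontier strictly dominates the other's.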