Mobile devices have become a permanent fixture in modern society. As such, it is of critical importance that the mobile development process is made as frictionless as possible to facilitate the creation of high-quality apps for end users. This keynote offers a brief introduction to mobile development paradigms, surveys the major categories of research conducted to date towards improving mobile software engineering, examines open challenges, and outlines a roadmap of future work aimed at supporting mobile developers.
With the emergence of mobile application markets, there has been a dramatic increase in mobile malware. Mobile platform providers are constantly creating and refining their malware-detection techniques, including static analysis and behavioral monitoring. The goal of malware writers is to hide the malware payload from those analyzers. In parallel, security analysts want to quickly detect whether any software is malware in order to prevent harm to users. This confrontation pushes malware writers to develop new evasion techniques that prevent their malware from being detected or that make analysis harder.
This paper describes ARES, a system built on top of an existing behavioral analysis and based on static information-flow analysis, binary instrumentation, and multi-execution analysis, to detect and bypass many common evasive techniques used by mobile malware. Additionally, this paper presents our implementation of ARES and shows that, when run against real-world software, ARES is able to reveal previously unknown malicious components. We also developed a test suite for evasion detection techniques, EVADROID, which we have made fully available to other researchers.
Android apps often contain third-party libraries. For many program analyses, it is important to identify the library code in a given closed-source Android app. There are several clients of such library detection, including security analysis, clone/repackage detection, and library removal/isolation. However, library detection is complicated significantly by commonly-used code obfuscation techniques for Android. Although some of the state-of-the-art library detection tools are intended to be resilient to obfuscation, there is still room to improve recall, precision, and analysis cost.
We propose a new approach to detect third-party libraries in obfuscated apps. The approach relies on obfuscation-resilient code features derived from the interprocedural structure and behavior of the app (e.g., call graphs of methods). The design of our approach is informed by close examination of the code features preserved by typical Android obfuscators. To reduce analysis cost, we use similarity digests as an efficient mechanism for identifying a small number of likely matches. We implemented this approach in the ORLIS library detection tool. As demonstrated by our experimental results, ORLIS advances the state of the art and presents an attractive choice for detection of third-party libraries in Android apps.
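To make the digest-based filtering idea concrete, here is a minimal Python sketch of candidate selection via similarity digests; the feature encoding (call-graph edges as strings), function names, and threshold are illustrative assumptions for this sketch, not ORLIS's actual implementation.

```python
def shingles(features, k=3):
    """k-grams over an ordered list of structural code features (e.g., call-graph edges)."""
    return {tuple(features[i:i + k]) for i in range(len(features) - k + 1)}

def similarity(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def likely_matches(app_features, library_db, threshold=0.6):
    """Return libraries whose digests are similar enough to warrant full matching."""
    app_digest = shingles(app_features)
    return [name for name, feats in library_db.items()
            if similarity(app_digest, shingles(feats)) >= threshold]
```

The digest acts as a cheap pre-filter: only the few libraries it retains need to undergo the more expensive structural matching.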
Smartphone apps usually have access to sensitive user data such as contacts, geo-location, and account credentials, and they might share such data with external entities through the Internet or with other apps. The confidentiality of user data can be breached if there are anomalies in the way sensitive data is handled by an app that is vulnerable or malicious. Existing approaches that detect anomalous sensitive data flows have limited accuracy because the definition of an anomalous flow may differ across apps with different functionalities; it is normal for "Health" apps to share heart-rate information through the Internet, but anomalous for "Travel" apps.
In this paper, we propose a novel approach to detect anomalous sensitive data flows in Android apps, with improved accuracy. To achieve this objective, we first group trusted apps according to the topics inferred from their functional descriptions. We then learn sensitive information flows with respect to each group of trusted apps. For a given app under analysis, anomalies are identified by comparing sensitive information flows in the app against those flows learned from trusted apps grouped under the same topic. In the evaluation, information flow is learned from 11,796 trusted apps. We then checked for anomalies in 596 new (benign) apps and identified 2 previously-unknown vulnerable apps related to anomalous flows. We also analyzed 18 malware apps and found anomalies in 6 of them.
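The group-then-compare idea can be sketched as a simple per-topic frequency model; the flow representation, function names, and threshold below are illustrative assumptions for this sketch rather than the paper's implementation.

```python
from collections import defaultdict

def learn_flows(trusted_apps):
    """trusted_apps: list of (topic, set_of_flows). Returns per-topic flow frequencies."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for topic, flows in trusted_apps:
        totals[topic] += 1
        for flow in flows:
            counts[topic][flow] += 1
    return {t: {f: c / totals[t] for f, c in fs.items()} for t, fs in counts.items()}

def anomalies(topic, flows, model, min_freq=0.1):
    """Flows rarely (or never) seen among trusted apps of this topic are anomalous."""
    freqs = model.get(topic, {})
    return {f for f in flows if freqs.get(f, 0.0) < min_freq}
```

Under this model, a heart-rate-to-Internet flow is unremarkable in the "health" topic but flagged in "travel", matching the intuition from the paper.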
Considering the fast pace at which mobile applications need to evolve, Architectural Technical Debt proves to be a crucial yet implicit factor of success. In this research, we present an approach to automatically identify Architectural Technical Debt in Android applications. The approach takes advantage of architectural-guideline extraction and modeling, architecture reverse engineering, and compliance checking. As future work, we plan to fully automate the process and empirically evaluate it via large-scale experiments.
Mobile, IoT, and wearable devices have been transitioning from passive consumers of remote data to active generators of massive amounts of data. Mobile apps often need to move data, generated on one device, to other nearby devices for processing. For example, when reading its wearer's health vitals, a health monitoring app on a wearable device needs to transfer the data to the wearer's smartphone for display and analysis. This local processing of data by means of nearby computing resources has been promoted as a solution to network bandwidth bottlenecks, and commonly referred to as edge computing. Despite the critical dependence of edge computing on the device-to-device data sharing functionality, its mainstream implementations introduce low-level and hard-to-maintain code into the mobile codebase. To address this problem, this research introduces Remote ICC (RICCi), a novel middleware framework that provides programming support for data-intensive mobile applications at the edge, thereby reconciling programming convenience and performance efficiency. RICCi builds upon the native Android Inter-Component Communication (ICC) to simultaneously support seamless and efficient inter-device data sharing via a convenient and familiar programming model. To reach these design objectives, RICCi innovates in the middleware space by offering distributed programming abstractions that are data-oriented rather than procedure-oriented, thereby elevating latency into a first-class design concern for developing distributed mobile apps.
Developers adopt code comments for different reasons, such as documenting source code or changing program flow. Given this variety of use scenarios, code comments may impact readability and maintainability. In this study, we investigate how the developers of 5 open-source mobile applications use code comments to document their projects. Additionally, we evaluate the performance of two machine learning models that automatically classify code comments. Initial results show marginal differences between desktop and mobile applications.
Determining how to make users more aware of the permissions used in their favorite apps is one of the largest challenges facing mobile developers today. Research has shown that the less aware a user is, the less secure they feel while using the app. Unfortunately, this same research has shown that users are not well informed of the permissions their apps use, leading to many users feeling insecure. This makes us ask, how can users become more informed about the permissions their apps use so they can feel more secure?
To better understand this question, we examined the effects of the previous and current Android permission models, as well as our proposed permission model, through an in-person study. Our primary findings were: I) our proposed permission model makes users feel significantly more secure than both Android models; II) runtime-based permission models make users significantly more informed than install-time-based models.
In the past, bad code quality has been associated with higher bug proneness. At the same time, the main reason why mobile users negatively rate an app is the presence of bugs leading to crashes. In this paper, we preliminarily investigate the extent to which code quality metrics can be exploited to predict the commercial success of mobile apps. Key results suggest the existence of a relation between code quality and commercial success: we found that inheritance and information-hiding metrics represent important indicators and therefore should be carefully monitored by developers.
To protect the privacy of end users from intended or unintended malicious behaviour, the Android operating system provides a permissions-based security model that restricts access to privacy-relevant parts of the platform. Starting with Android 6, the permission system has been revamped, moving to a run-time model. Users are now prompted for confirmation when an app attempts to access a restricted part of the platform.
We conducted a large-scale empirical study to investigate how end users perceive the new run-time permission system of Android, collecting and inspecting over 4.3 million user reviews about 5,572 apps published in the Google Play Store. Among them, we identified, classified, and analyzed 3,574 permission-related reviews, employing machine learning and Natural Language Processing techniques. Out of the permission-related reviews, we determined recurring points made by users about the new permission system and classified them into a taxonomy. Results of our analysis suggest that, even with the new system, permission-related issues are widespread, with 8% of collected apps having user reviews with negative comments about permissions. We identify a number of points for improvement in the Android run-time permission system, and provide recommendations for future research.
Do you know the permissions your favorite apps use? You probably don't, and you aren't alone. Everyone seemingly talks about how important app security and privacy are to them, but research has shown that users are generally not well informed about the permissions their apps use. This leads to serious ramifications for security, privacy, and user perception (rating) of an app. Understanding the current Android permission model and how it can be improved offers significant benefits for both developers and users.
To better understand user perception of the previous, current and a new proposed permission model, we conducted an in-person study involving 185 participants. Our primary findings include I) The current Android runtime model does not make users feel more secure in comparison with the older install-time model. II) Our proposed model is beneficial in helping users feel more secure. III) There is no statistically significant difference between the user ratings given to the apps using the different permissions models. IV) Runtime permission models are significantly beneficial in helping users to recall the requested permissions. V) We found that users were generally well informed about what the requested permissions meant, but age played a significant factor in reducing how informed users were.
The runtime permission model of Android enhances security yet also constitutes a source of incompatibility issues that impedes the productivity of mobile developers. This paper presents a novel analysis that detects the incompatible permission uses in a given app and repairs them when found, hence automatically adapting the app to the runtime permission model. The key approach is to check and enforce the app's conformance to the runtime permission use protocol through static control flow analysis and bytecode transformation. We implemented our technique as an open-source tool, ARPDROID, and initially evaluated it on 20 incompatible and 3 compatible real-world apps, assisted by manual ground truth and verification. Our results show that ARPDROID achieved 100% detection accuracy, 90% repair success rate, and 91.3% overall adaptation success rate at an average time cost of about two minutes.
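The protocol such a tool must enforce — every use of a permission-protected API should be preceded, on each path, by a successful permission check — can be illustrated with a toy path-based detector. The event encoding and the API-to-permission map below are hypothetical illustrations, not ARPDROID's code.

```python
# Hypothetical map from a protected API to the runtime permission it requires.
PROTECTED = {"getLastKnownLocation": "ACCESS_FINE_LOCATION"}

def incompatible_uses(trace):
    """trace: list of ('check', permission) or ('call', api) events on one path.
    Returns the protected API calls not preceded by the required check."""
    granted, violations = set(), []
    for kind, arg in trace:
        if kind == "check":
            granted.add(arg)  # a successful checkSelfPermission for this permission
        elif kind == "call":
            needed = PROTECTED.get(arg)
            if needed and needed not in granted:
                violations.append(arg)
    return violations
```

A repair step would then insert the missing check (and request) before each reported violation, which is the essence of adapting a legacy app to the runtime model.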
We present the Android app TYDR (Track Your Daily Routine) which tracks smartphone sensor and usage data and utilizes standardized psychometric personality questionnaires. With the app, we aim at collecting data for researching correlations between the tracked smartphone data and the user's personality in order to predict personality from smartphone data. In this paper, we highlight our approaches in addressing the challenges in developing such an app. We optimize the tracking of sensor data by assessing the trade-off of size of data and battery consumption and granularity of the stored information. Our user interface is designed to incentivize users to install the app and fill out questionnaires. TYDR processes and visualizes the tracked sensor and usage data as well as the results of the personality questionnaires. When developing an app that will be used in psychological studies, requirements posed by ethics commissions / institutional review boards and data protection officials have to be met. We detail our approaches concerning those requirements regarding the anonymized storing of user data, informing the users about the data collection, and enabling an opt-out option. We present our process for anonymized data storing while still being able to identify individual users who successfully completed a psychological study with the app.
The functionality of many mobile applications depends on various contextual, external factors. Under unforeseen scenarios, mobile apps can even malfunction or crash. In this paper, we introduce MobiCoMonkey, an automated tool that allows a developer to test an app against custom or auto-generated contextual scenarios and helps detect possible bugs through the emulator. Moreover, it reports the connection between the bugs and the contextual factors so that the bugs can later be reproduced. It utilizes the tools offered by the Android SDK and logcat to inject events and capture traces of the app's execution.
We present ICC-INSPECT, a tool for understanding Android app behaviors exhibited at runtime via inter-component communication (ICC). Through lightweight Intent profiling, ICC-INSPECT streams run-time ICC information to a dynamic visualization framework which depicts interactive ICC call graphs along with informative ICC statistics. This framework allows users to examine the details of a specific fragment of execution in the context of the holistic ICC view of an app. Through its ability to concisely map in a visual format the complex ICC mechanisms of any Android app, ICC-INSPECT facilitates behavior understanding and debugging of Android programs. The open-source release, documentation, and a video demo of ICC-INSPECT are available here.
When interacting with Android apps, users may not always get what they expect. For instance, when clicking on a button labeled "upload picture", the app may actually leak the user's location while uploading photos to a cloud service. In this paper we present BACKSTAGE, a static analysis framework that binds UI elements to their corresponding callbacks, and further extracts actions in the form of Android sensitive API calls that may be triggered by events on such UI elements. We illustrate the inner workings of the analysis implemented by BACKSTAGE and then compare it against similar frameworks.
We study the authentication of the heart rate (HR) signal and we present HR-Auth, an algorithm to authenticate HR data using two independent wearable sensors. We describe and evaluate the proposed algorithm.
Thanks to the performance improvements in hardware and software architectures, more applications that used to run on desktop computers are now being migrated to mobile devices. However, this entails increased power consumption, which necessitates more effective runtime power management techniques due to battery capacity constraints. Such techniques should reduce power consumption while satisfying user-perceived requirements, such as frame rates and response times. A major hurdle in incorporating such techniques into real products is that user-perceived requirements are visible only to user applications, not to the power managers residing in the operating system. In this paper, we show that better power management is achievable by passing such information to the OS, and we propose an API for that purpose.
There is a wide range of approaches and tools for cross-platform mobile application development. We studied the performance characteristics of mobile applications developed with a number of common approaches and tools, including the native SDKs of Google Android and Apple iOS, and the cross-platform tools Apache Cordova, Microsoft Xamarin, and Appcelerator Titanium. The data reveal insights into the designs and trade-offs of the different approaches and offer guidance in selecting the appropriate approach based on its performance characteristics.
Data-intensive applications in diverse domains, including video streaming, gaming, and health monitoring, increasingly require that mobile devices directly share data with each other. However, developing distributed data sharing functionality introduces low-level, brittle, and hard-to-maintain code into the mobile codebase. To reconcile the goals of programming convenience and performance efficiency, we present a novel middleware framework that enhances the Android platform's component model to support seamless and efficient inter-device data sharing. Our framework provides a familiar programming interface that extends the ubiquitous Android Inter-Component Communication (ICC), thus lowering the learning curve. Unlike middleware platforms based on the RPC paradigm, our programming abstractions require that mobile application developers think through and express explicitly data transmission patterns, thus treating latency as a first-class design concern. Our performance evaluation shows that using our framework incurs little performance overhead, comparable to that of custom-built implementations. By providing reusable programming abstractions that preserve component encapsulation, our framework enables Android devices to efficiently share data at the component level, providing powerful building blocks for the development of emerging distributed mobile applications.
Modern mobile users commonly use multiple heterogeneous mobile devices, including smartphones, tablets, and wearables. Enabling these devices to seamlessly share their computational, network, and sensing resources has great potential benefit. Sharing resources across collocated mobile devices creates mobile device clouds (MDCs), commonly used to optimize application performance and to enable novel applications. However, enabling heterogeneous mobile devices to share their resources presents a number of difficulties, including the need to coordinate and steer the execution of devices with dissimilar network interfaces, application programming models, and system architectures. In this paper, we describe a solution that systematically empowers heterogeneous mobile devices to seamlessly, reliably, and efficiently share their resources. We present a programming model and runtime support for heterogeneous mobile device-to-device resource sharing. Our solution comprises a declarative domain-specific language for device-to-device cooperation, supported by a powerful runtime infrastructure. We evaluated our solution by conducting a controlled user study and running performance/energy-efficiency benchmarks. The evaluation results indicate that our solution can become a practical tool for enhancing the capabilities of modern mobile applications by leveraging the resources of nearby mobile devices.
In this paper, we present a real-life case study of a mobile healthcare application that leverages code offloading techniques to accelerate the execution of a complex deep neural network algorithm for analyzing audio samples. Resource-intensive machine learning tasks take a significant time to complete on high-end devices, while lower-end devices may outright crash when attempting to run them. In our experiments, offloading granted the former a 3.6x performance improvement, and up to 80% reduction in energy consumption; while the latter gained the capability of running a process they originally could not.
Following the ever-growing demand for mobile applications, researchers constantly develop new test automation solutions for mobile developers. However, researchers have yet to produce a fully automated functional testing approach, leaving many developers to rely on resource-consuming manual testing. In this paper, we present a novel approach for automating functional testing in mobile software by leveraging machine learning techniques and reusing generic test scenarios. Our approach aims to relieve some of the manual functional-testing burden by automatically classifying each of an application's screens into a set of common screen behaviors for which generic test scripts can be instantiated and reused. We empirically demonstrate the potential benefits of our approach in two experiments. First, using 26 randomly selected Android applications, we show that our approach can successfully instantiate and reuse generic functional tests and discover functional bugs. Second, in a human study with two experienced mobile testers, we show that our approach can automatically cover a large portion of the testers' work, suggesting significant potential relief in manual testing effort.
Test generators for graphical user interfaces must constantly choose which UI element to interact with, and how. We guide this choice by mining associations between UI elements and their interactions from the most common applications. Once mined, the resulting UI interaction model can be easily applied to new apps and new test generators. In our experiments, the mined interaction models lead to code coverage improvements of 19.41% and 43.03% on average on two state-of-the-art tools (DROIDMATE and DROIDBOT), when executing the same number of actions.
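The mining step can be sketched as a simple frequency model over (widget class, action) pairs that a test generator then consults when choosing how to interact with a UI element; the names and the fallback action below are illustrative assumptions, not the tools' implementation.

```python
from collections import Counter, defaultdict

def mine_model(observations):
    """observations: list of (widget_class, action) pairs mined from a corpus of apps.
    Returns per-widget-class action frequencies."""
    model = defaultdict(Counter)
    for widget, action in observations:
        model[widget][action] += 1
    return model

def best_action(model, widget, default="click"):
    """Pick the action most commonly associated with this widget class."""
    actions = model.get(widget)
    return actions.most_common(1)[0][0] if actions else default
```

Because the model is keyed only on widget classes, it transfers directly to apps the miner has never seen, which is what lets it plug into different test generators.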
To gain a deeper empirical understanding of how developers work on Android apps, we investigate the self-reported activities of Android developers and the extent to which these activities can be classified with machine learning techniques. To this aim, we first create a taxonomy of self-reported activities from a manual analysis of 5,000 commit messages from 8,280 Android apps. Then, we study the frequency of each category of self-reported activities in the taxonomy and investigate the feasibility of an automated classification approach. Our findings can be used by both practitioners and researchers to make informed decisions or to support other software engineering activities.
Mobile devices are practically ubiquitous in today's society. People are increasingly dependent on mobile devices for uses such as computation, navigation, storing private information, and web browsing, among others. Thus, developers are required to produce high-quality mobile apps. However, mobile operating systems are frequently updated, which can affect the functionality of mobile apps and hinder developers' ability to consistently provide high-quality apps across multiple operating system versions. In this paper, we introduce a novel approach for automatically locating the parts of Android apps that have been affected by an update of the underlying Android mobile operating system, and for statistically analyzing the impact of the update. A preliminary evaluation shows that the overall impact of an operating system update is low.
Mobile devices have become pervasive in today's society. The range of their uses has been constantly increasing, which requires more computing capability. As the computing capability of mobile devices grows, so does the need for effective power management. There has been some work on reducing the power consumption of mobile applications by detecting energy bugs. In this work, we address no-sleep energy bugs with respect to semaphore wakelocks, taking into account race conditions with synchronization, using reaching definitions and parallel flow graphs. We demonstrate the approach through a case example.
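The core no-sleep property — a wakelock acquire that can reach program exit without a matching release — can be illustrated with a toy path-based check. The path representation below is an illustrative assumption; the paper's analysis works on reaching definitions and parallel flow graphs rather than enumerated paths.

```python
def no_sleep_paths(paths):
    """paths: list of event sequences drawn from {'acquire', 'release', 'other'}.
    Returns the indices of paths that end with the wakelock still held."""
    buggy = []
    for i, path in enumerate(paths):
        held = 0
        for event in path:
            if event == "acquire":
                held += 1
            elif event == "release" and held:
                held -= 1
        if held:  # wakelock still held at path exit: a no-sleep energy bug
            buggy.append(i)
    return buggy
```

With concurrency, an interleaving can release a lock another thread believes it still holds, which is why the paper must reason about race conditions rather than single paths alone.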
Mobile devices have changed the way we live, but most applications are still conceived for isolated devices and do not allow the user to take advantage of different devices (e.g., phones, cars, watches, televisions, etc.) opportunistically, efficiently, and dynamically. Multi-device interactions are currently conceived mainly as independent cooperating applications, which require the a priori definition of the set of communicating elements, along with the responsibility carried by each participant. This paper flips the perspective and fosters the idea of liquid, loosely coupled distributed Android applications by extending intent-based app communication, usually limited to a single device, to proximal devices. Since changing the operating system would have been too expensive, the concept has been implemented through LIQDROID, a middleware that eases the creation of distributed Android applications and oversees their execution on a dynamically changing set of Android devices. Specifically, LIQDROID is an Android service that both augments each single Android device and manages their cooperation. Some example applications demonstrate the main characteristics of LIQDROID and provide interesting insights for possible future developments.
Despite the increasing availability of IoT devices and technology, the current user interaction requires users to download dedicated ad-hoc mobile apps, each of these remotely monitoring and controlling a limited set of IoT devices. In the near future, with billions of deployed IoT devices, this approach will not scale since users cannot be expected to download different apps for every place they visit or every device they interact with.
To overcome this limitation, this paper presents the vision, risks, and opportunities of the development and deployment of future hybrid proximity services, as a paradigm shift toward intuitive interaction between users and the surrounding IoT environment. Hybrid proximity services are user-facing mobile applications developed and configured for specific locations and automatically installed on users' mobile devices when the users are in their proximity, without the need for an explicit download. A proximity-service-enabled environment can directly provide the user with the right service for the right place and automatically instruct the user's device on how to interact with the surrounding IoT devices.
A native cross-platform mobile app has multiple platform-specific implementations. Typically, an app is developed for one platform and then ported to the remaining ones. Translating an app from one language (e.g., Java) to another (e.g., Swift) by hand is tedious and error-prone, while automated translators either require manually defined translation rules or focus on translating APIs. To automate the translation of native cross-platform apps, we present J2SINFERER, a novel approach that iteratively infers syntactic transformation rules and API mappings from Java to Swift. Given a software corpus in both languages, J2SINFERER first identifies syntactically equivalent code based on braces and string similarity. For each pair of similar code segments, J2SINFERER then creates syntax trees of both languages, leveraging minimalist domain knowledge of language correspondence (e.g., operators and markers) to iteratively align syntax tree nodes and to infer both syntax and API mapping rules. J2SINFERER represents inferred rules as string templates, stored in a database, to translate code from Java to Swift. We evaluated J2SINFERER with four applications, using one part of the data to infer translation rules and the other part to apply the rules. With 76% in-project accuracy and 65% cross-project accuracy, J2SINFERER outperforms j2swift, a state-of-the-art Java-to-Swift conversion tool, in accuracy. As native cross-platform mobile apps grow in popularity, J2SINFERER can shorten their time to market by automating the tedious and error-prone task of source-to-source translation.
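A toy version of template-based rule inference — abstracting the tokens that vary across aligned Java/Swift examples into placeholders and then reusing the resulting template on new code — can be sketched as follows. The tokenization and placeholder scheme are illustrative assumptions for this sketch, not J2SINFERER's algorithm.

```python
def infer_template(pairs):
    """pairs: list of (java_tokens, swift_tokens) examples sharing one skeleton.
    Tokens identical across all examples are kept; varying tokens become '<0>'."""
    j0, s0 = pairs[0]
    j_tpl = [t if all(p[0][i] == t for p in pairs) else "<0>" for i, t in enumerate(j0)]
    s_tpl = [t if all(p[1][i] == t for p in pairs) else "<0>" for i, t in enumerate(s0)]
    return j_tpl, s_tpl

def translate(java_tokens, template):
    """Bind placeholder tokens in source order, then fill the target-side template."""
    j_tpl, s_tpl = template
    holes = iter(tok for ph, tok in zip(j_tpl, java_tokens) if ph == "<0>")
    return [next(holes) if t == "<0>" else t for t in s_tpl]
```

For example, the aligned pairs `int x = 1;` / `var x: Int = 1` and `int y = 2;` / `var y: Int = 2` yield a declaration template that translates `int z = 9;` into `var z: Int = 9`.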
The development frameworks and the power efficiency of mobile devices have previously been studied separately. This paper addresses both topics together to determine how the selection of a development framework can impact the energy consumption of a mobile application. The focus is on applications with high processing load and audio and video playback. We analyze the results and draw conclusions.
The dynamics of mobile networks make it difficult for mobile apps to deliver a seamless user experience. In particular, intermittent connections and weak signals pose challenges for app developers. While recent network libraries have simplified network programming, much expert knowledge is still required. However, most mobile app developers are relative novices and tend to assume a reliable network connection, paying little attention to handling network errors in programming until users complain and leave bad reviews.
We argue that the difficulty of avoiding such software defects can be mitigated through an annotation language that allows developers to declaratively state desired and actual properties of the application, largely without reference to fault-tolerance concepts, much less their implementation. A pre-compiler can process these annotations, replacing calls to standard networking libraries with customized calls to a specialized library that enhances reliability. This paper presents ANEL, a declarative language and middleware for Android that enables non-experts to handle network errors declaratively. We demonstrate the expressiveness and practicability of ANEL annotations through case studies and usability studies on real-world networked mobile apps. We also show that the ANEL middleware introduces negligible runtime performance overhead.
Currently, mobile operating systems are dominated by the duopoly of iOS and Android. App projects that intend to reach a high number of customers need to target these two platforms foremost. However, iOS and Android do not have an officially supported common development framework. Instead, different development approaches are available for multi-platform development.
The standard taxonomy for different development approaches of mobile applications is: Web Apps, Native Apps, Hybrid Apps. While this made perfect sense for iPhone development, it is not accurate for Android or cross-platform development, for example.
In this paper, a new taxonomy is proposed. Based on the fundamental difference in the tools and programming languages used for the task, six different categories are proposed for everyday use: Endemic Apps, Web Apps, Hybrid Web Apps, Hybrid Bridged Apps, System Language Apps, and Foreign Language Apps. In addition, when a more precise distinction is necessary, a total of three main categories and seven subcategories are defined.
The paper also contains a short overview of the advantages and disadvantages of the approaches mentioned.
Mobile eHealth applications have become very popular, not just on mobile phones but also on wearables, mobile AR/VR, and increasingly "smart house" and "smart care" sensing and interaction facilities. However, a large majority of these solutions, despite early promise, suffer from a range of challenges, including the effort to develop, deploy, and maintain them; lack of end-user acceptance; integration with other health systems; difficulty in tailoring to divergent users; lack of adequate feedback to developers; lack of sustainable adoption; and ultimately lack of success. In this MobileSoft vision paper we characterise these key issues from a Software Engineering perspective and present and discuss some approaches to mitigating them, building on our and others' prior work.
Developing mobile applications is typically a labor-intensive process in which software engineers manually re-implement, in code, the screen designs, inter-screen transitions, and in-screen animations developed by user interface and user experience experts. Other engineering domains have used computer vision techniques to automate human perception and manual data-entry tasks. The P2A tool adopts computer vision techniques for developing animated mobile applications. P2A infers from mobile application screen designs the user interface portion of an application's source code, along with other assets that are ready to be compiled and executed on a mobile phone. Among other features, inferred mobile applications contain inter-screen transitions and in-screen animations. In our experiments on screenshots of 30 highly ranked third-party Android applications, the P2A-generated application user interfaces exhibited high pixel-to-pixel similarity with their input screenshots. P2A took an average of 26 seconds to infer in-screen animations.
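The abstract reports pixel-to-pixel similarity without defining it; a minimal illustration (our own sketch, not P2A's actual metric) computes such a similarity as the fraction of exactly matching pixels between two equally sized images:

```java
public class PixelSimilarity {
    // Fraction of exactly matching pixels between two equally sized
    // ARGB pixel arrays (e.g., as obtained via Bitmap.getPixels on Android).
    static double similarity(int[] a, int[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("images must have the same size");
        }
        int same = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] == b[i]) same++;
        }
        return (double) same / a.length;
    }

    public static void main(String[] args) {
        int[] original = {0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF};
        int[] rendered = {0xFF0000, 0x00FF00, 0x0000FF, 0x000000}; // one pixel differs
        System.out.println(similarity(original, rendered)); // 0.75
    }
}
```

A production metric would likely tolerate small color deviations (e.g., per-channel thresholds) rather than requiring exact equality, but the structure of the comparison is the same.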
A typical way to design and develop a mobile app is to sketch the graphical user interfaces (GUIs) for the different screens in the app and then create actual GUIs from these sketches. Doing so involves identifying which layouts to use, which widgets to add, and how to configure and connect the different pieces of the GUI. To help with this difficult and time-consuming task, we propose GUIFetch, a technique that takes as input the sketch for an app and leverages the growing number of open source apps in public repositories to identify apps with GUIs and transitions that are similar to those in the provided sketch. GUIFetch first searches public repositories to find potential apps using keyword matching. It then builds models of the identified apps' screens and screen transitions using a combination of static and dynamic analyses and computes a similarity metric between the models and the provided sketch. Finally, GUIFetch ranks the identified apps (or parts thereof) based on their computed similarity value and produces a visual ranking of the results together with the code of the corresponding apps. We implemented GUIFetch for Android apps and evaluated it through user studies involving different types of apps.
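GUIFetch's actual similarity metric combines screen and transition models; as a deliberately simplified sketch (our own, hypothetical), one could score each candidate screen by the Jaccard similarity between the widget types appearing in the sketch and in the screen, then rank candidates by that score:

```java
import java.util.*;

public class ScreenRanker {
    // Jaccard similarity between two widget-type sets,
    // e.g. {"Button", "EditText", "ListView"}.
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 1.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> sketch = new HashSet<>(Arrays.asList("Button", "EditText", "ListView"));

        Map<String, Set<String>> candidates = new LinkedHashMap<>();
        candidates.put("AppA", new HashSet<>(Arrays.asList("Button", "EditText", "ListView")));
        candidates.put("AppB", new HashSet<>(Arrays.asList("Button", "ImageView")));
        candidates.put("AppC", new HashSet<>(Arrays.asList("WebView")));

        // Rank candidate screens by descending similarity to the sketch.
        candidates.entrySet().stream()
            .sorted((x, y) -> Double.compare(
                jaccard(sketch, y.getValue()), jaccard(sketch, x.getValue())))
            .forEach(e -> System.out.println(e.getKey() + " " + jaccard(sketch, e.getValue())));
    }
}
```

In the real system, the ranking would additionally account for widget positions, layout hierarchy, and screen transitions, as the abstract describes.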
Mobile applications are nowadays used by everyone. The success of a mobile app depends highly on its user acceptance, which must be checked as part of quality assurance. However, such tests are costly because they usually involve testers using the app manually. An obvious way to improve efficiency is to automate certain test steps. In this article, we present an approach that automatically tracks user emotions to support acceptance testing. Furthermore, we foster user motivation through gamification and focus on data privacy aspects in order to gain the trust of potential test users.
Face-to-face health education and intervention programs are helpful in addressing mental and physical illness challenges in focused groups. However, these programs are expensive and resource-intensive, and they struggle with scalability and reachability, leading to limited take-up and short-term impact. Digital Health Intervention (DHI) programs incorporate technology - mobile, web, wearables, virtual and augmented reality - to address these limitations while being more cost-effective. DHIs have shown major success in improving physical and mental health outcomes for the general public, as well as in reducing adverse outcomes for high-risk groups. However, it is still very challenging and expensive to design and run high-quality mobile-based DHI programs, in part due to the lack of technical skills among researchers in this field. Our proposed mobile eHealth Learning and Intervention Platform (eHeLP) aims to address these challenges with a novel approach that allows health researchers to focus on their studies, and participants to access multiple health programs that meet their needs. The platform caters for identified stakeholders in the DHI field and encourages the development of a new health-tech industry. We present our vision for eHeLP, explain why this idea is worth further research, and discuss the risks we perceive and the next steps.