Because of the abundant risks posed by vulnerabilities in the multitude of mobile communications apps, the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST) has issued a 50-page draft app vetting process and guidance.
NIST’s Special Publication 800-163 Rev. 1 is part of the series that defines the federal government’s computer and cybersecurity policies, procedures, and guidelines.
As NIST rightfully pointed out, “There are a number of common classes of mobile software errors that can create such vulnerabilities,” but the most familiar “errors in using security services or cryptography include weak authentication of users or systems, incorrect implementation of cryptographic primitives, choosing outdated or broken cryptographic algorithms or parameters, or failure to encrypt app traffic between a mobile device and web- or enterprise-hosted services.”
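To make the “outdated or broken cryptographic algorithms” class concrete, here is a minimal Java sketch contrasting a broken hash (MD5) with a currently acceptable one (SHA-256). The class and method names are illustrative, not taken from the NIST draft.

```java
import java.security.MessageDigest;
import java.util.HexFormat;

public class DigestExample {
    // Hash the input with the named algorithm and return lowercase hex.
    static String hash(String algorithm, byte[] data) throws Exception {
        return HexFormat.of().formatHex(MessageDigest.getInstance(algorithm).digest(data));
    }

    public static void main(String[] args) throws Exception {
        byte[] token = "password-reset-token".getBytes("UTF-8");
        // MD5 is collision-broken and falls squarely into NIST's
        // "outdated or broken" category; avoid it for security purposes.
        System.out.println("MD5 (avoid):    " + hash("MD5", token));
        // SHA-256 remains an acceptable general-purpose hash.
        System.out.println("SHA-256 (okay): " + hash("SHA-256", token));
    }
}
```

The same substitution principle applies to ciphers and protocols, for instance preferring AES-GCM over DES, or TLS 1.2+ over older SSL versions.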
“Risky interactions among software components on a mobile device include the use of data from untrustworthy sources as input to security-sensitive operations, use of vulnerable third-party-provided software libraries, and app code that leaks sensitive data outside of the app (e.g., through logs of app activity),” NIST continued, adding, “Also, mobile systems may be exposed to malicious code or injections of data through communication with a compromised web or enterprise service.”
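The leakage path NIST mentions, sensitive data escaping “through logs of app activity,” can be guarded against by redacting PII before a message reaches any logger. A minimal sketch, assuming a hypothetical redaction helper (the method name and regex are illustrative):

```java
public class LogRedaction {
    // Hypothetical helper: masks anything shaped like an email address
    // before the message is logged, so PII does not leak through app logs.
    static String redactEmails(String message) {
        return message.replaceAll(
            "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}", "[REDACTED]");
    }

    public static void main(String[] args) {
        String event = "login failed for user someone@example.org from 10.0.0.5";
        // The IP stays, the address is masked.
        System.out.println(redactEmails(event));
    }
}
```

In practice, redaction at a single logging choke point is far more reliable than expecting every call site to remember what counts as sensitive.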
NIST’s revised draft guidance “updates a process for vetting mobile applications … to address changes in the mobile landscape,” the agency explained, saying its “guidance has been expanded to better define the app vetting process as a whole, while providing greater detail about the roles, capabilities, and strategies of mobile application testing. Security requirements and references have been added to aid organizations in defining their own app vetting policy,” and a “discussion of the mobile app threat landscape is included to better contextualize the need for app vetting.”
NIST stated the new draft was required because, “Mobile applications have become an integral part of our everyday personal and professional lives. As both public and private organizations rely more on mobile applications, securing these mobile applications from vulnerabilities and defects becomes more important.”
The draft publication (which NIST is accepting public comments on through September 6 at email@example.com, with, “Comments on Draft SP 800-163 Rev. 1” in the Subject field) “outlines and details a mobile application vetting process” that can be used to ensure mobile applications “conform to an organization’s security requirements and are reasonably free from vulnerabilities.”
NIST emphasized that the risk from vulnerabilities “varies depending on several factors, including the data accessible to an app. For example, apps that access data such as precise and continuous geolocation information, personal health metrics or personally identifiable information (PII), may be considered to be of higher-risk than those that do not access sensitive data. In addition, apps that depend on wireless network technologies (e.g., Wi-Fi, cellular, Bluetooth) for data transmission may also be of high risk since these technologies also can be used as vectors for remote exploits.”
But, “Even apps considered low risk … can have significant impact if exploited,” NIST warned. “For example, public safety apps that fail due to a vulnerability exploit could potentially result in the loss of life.”
Consequently, in order to “mitigate potential security risks associated with mobile apps,” NIST said, “organizations should employ a software assurance process that ensures a level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner.”
In its draft paper, NIST defines a software assurance process for mobile applications, which it refers to as an “app vetting process,” covering: planning and implementing an app vetting process; developing security requirements for mobile apps; identifying appropriate tools for testing mobile apps; and determining whether a mobile app is acceptable for deployment on an organization’s mobile devices.
The draft NIST app vetting process provides “an overview of techniques commonly used by software assurance professionals, including methods of testing for discrete software vulnerabilities and misconfigurations related to mobile app software.”
NIST said while “mobile apps continue to provide unprecedented support for facilitating organizational objectives,” they also “can pose serious security risks to an organization and its users due to vulnerabilities that may exist within their software. Such vulnerabilities may be exploited to steal information, control a user’s device, deplete hardware resources, or result in unexpected app or device behavior. App vulnerabilities are caused by several factors including design flaws and programming errors, which may have been inserted intentionally or inadvertently. In the app marketplace, apps containing vulnerabilities are prevalent due in part to the submission of apps by developers who may trade security for functionality in order to reduce cost and time to market.”
The draft guidance “is intended for public- and private-sector organizations that seek to improve the software assurance of mobile apps deployed on their mobile devices,” NIST stated.
But, “more specifically [it’s] intended for those who are:
• Responsible for establishing an organization’s mobile device security posture;
• Responsible for the management and security of mobile devices within an organization;
• Responsible for determining which apps are used within an organization; and
• Interested in understanding what types of assurances the app vetting process provides.”
NIST said prior to vetting any mobile app for security, the “organization must [first] define the security requirements that an app must meet in order to be approved by the organization.”
NIST defined two types of app security requirements that organizations “should satisfy: general and organization-specific.”
NIST conceded, however, that stringently adhering to the draft guidance, which has Department of Homeland Security (DHS) support, carries real costs: “App software assurance activity costs should be included in project budgets and should not be an afterthought. Such costs may be significant and can include licensing costs for test tools and salaries for analysts, approvers, and administrators. Organizations that hire contractors to develop apps should specify that app assessment costs be included as part of the app development process.”
“Note, however,” NIST said, “that for apps developed in-house, attempting to implement app vetting solely at the end of the development effort will lead to increased costs and lengthened project timelines. It is strongly recommended to identify potential vulnerabilities or weaknesses during the development process when they can still be addressed by the original developers. Identifying and fixing errors during the development process is also significantly cheaper than fixing errors once a product is released. To provide an optimal app vetting process implementation, it is critical for the organization to hire personnel with appropriate expertise. For example, organizations should hire analysts experienced in software security and information assurance as well as administrators experienced in mobile security.”
In its Biometrics Strategic Framework 2015 – 2025, DHS described the entire Homeland Security Enterprise as not only “… federal, state, local, tribal, [and] territorial” components, but also “non-governmental and private‐sector entities, as well as individuals, families, and communities who share a common national interest in the safety and security of America and the American population.”
Hence, app developers and users, in government especially, but also in other designated critical infrastructure organizations and industry, need to think long and hard about NIST’s guidance … or any NIST cybersecurity guidance.
NIST suggested app developers use DHS’s Custom Software Questionnaire to answer questions like, “Does your software validate inputs from untrusted resources?” and, “What threat assumptions were made when designing protections for your software?”
“Another useful question, not included in the DHS questionnaire,” NIST added, is: “Does your app access a network application programming interface (API)?” NIST noted that such questionnaires can be used only in certain circumstances, such as when source code is available and when developers can answer questions.
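The questionnaire’s input-validation question can be illustrated with a small allow-list check that vets untrusted input before it reaches a security-sensitive operation. This is a hypothetical sketch, not code from the DHS questionnaire or the NIST draft:

```java
import java.util.regex.Pattern;

public class InputValidation {
    // Allow-list approach: only short, plain alphanumeric usernames pass.
    // Anything else (injection payloads, control characters, oversized
    // input) is rejected before it touches a sensitive operation.
    private static final Pattern SAFE_USERNAME =
        Pattern.compile("^[A-Za-z0-9_]{1,32}$");

    static boolean isSafeUsername(String input) {
        return input != null && SAFE_USERNAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isSafeUsername("alice_01"));              // true
        System.out.println(isSafeUsername("alice'; DROP TABLE--"));  // false
    }
}
```

Allow-lists (define what is valid) generally hold up better than deny-lists (enumerate what is dangerous), since attackers are good at finding payloads the deny-list missed.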
While NIST briefly discussed the most common app security vulnerabilities, one is particularly worrisome. It’s called a “rooter.” It’s a “software tool that enables a user to root a mobile device [by] enabling users to gain privileged (root) access on the device’s operating system (OS). Rooting is often performed to overcome restrictions that carriers and device manufacturers often enforce on some mobile devices. Rooting enables alteration or replacement of systems applications and settings, execution of specialized apps requiring administrative privileges, or performance of carrier-prohibited operations. On some mobile platforms (e.g., Android), rooting also can facilitate the complete removal and replacement of the device’s OS, e.g., to install a newer version of it.”
There are two types of rooting:
• “Soft rooting” typically is performed via a third-party application that uses a security vulnerability called a “root exploit”; and
• “Hard rooting” requires flashing binary executables and provides super-user privileges.
In their 2015 paper, Android Root and its Providers: A Double-Edged Sword — the research for which was sponsored by the Army Research Laboratory — University of California, Riverside computer scientists Hang Zhang, Dongdong She, and Zhiyun Qian concluded that a “root provider is a unique product in history that has unique characteristics,” and that, “Although legitimate, the functionality is implemented by exploiting vulnerabilities of the target system, which presents significant security risks.”
The goal of their research “is to understand and characterize the risk that well-engineered exploits from the root providers can be stolen and easily repackaged in malware.”