Around a decade ago, operating systems typically had two privilege levels – user and administrator. The administrator would set up the system and install applications for the user, who then had little permission to modify the system themselves. Different users could share a system, with each user and their applications unable to interfere with another user’s applications and data.
However, this model doesn’t stop a user’s applications from interfering with other applications and data owned by the same user. Remarkably effective worms such as Melissa and LoveLetter in the late 1990s and early 2000s spread by extracting all of a user’s contacts from their address book application, and then using their mail application to send copies of the worm to those contacts.
It was clear that the user/administrator model had weaknesses, and so technologies like sandboxing began to appear. Platforms that incorporate sandboxing provide a limited number of well-defined ways for applications to interact with each other, many of which prompt the user for permission before the interaction is allowed.
A Melissa equivalent on a modern platform would have to ask the user for permission to read their contact list, and then ask the user again for permission to send the email; it would be unable to send the email itself without the user’s explicit involvement and agreement.
Of course, the above only holds if there are no vulnerabilities in the platform which allow privilege escalation. A jailbreak (called rooting on some platforms) is simply a vulnerability in the platform which has been exploited in a controlled way to give some code extra permissions. Normally this is done to give the user extra control over their device – so they can customise it further, or install applications which aren’t otherwise available in app stores.
However, in order to achieve this, a jailbreak will normally disable or weaken some security features, such as sandboxing. Application signing may also be disabled to enable the user to install any application of their choosing. All this makes it much easier for malware to gain a presence on the device, as has been shown by code such as Unflod and Ikee.
This is clearly bad from a security point of view, so device administrators may want to detect whether this has happened on their organisation’s devices. Many Mobile Device Management (MDM) products today claim to detect jailbreaks. They generally do this by attempting to detect the artefacts of the jailbreak, rather than the actual exploitation of the vulnerability that was used to escalate the privileges of the jailbreak code.
For example, a jailbreak detection tool may look for the presence of a third-party application store (e.g. Cydia). It may check to see if it can execute unsigned code. It may look to see if additional libraries are loaded into its own process space. The key point is that it can only look for a limited number of things that it knows to look for, and also has permission to look for.
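Artefact-based detection of the kind described above can be sketched as follows. This is an illustrative sketch only – the artefact paths listed are common examples, not an exhaustive or authoritative list, and a real MDM product would use platform-specific APIs rather than generic file checks. The file-existence check is injected as a parameter so the logic can be demonstrated, which also highlights the limitation discussed above: the detector can only see what its own sandbox permits it to look for.

```python
# Hypothetical artefacts a jailbreak detector might look for.
JAILBREAK_ARTEFACTS = [
    "/Applications/Cydia.app",                          # third-party app store
    "/Library/MobileSubstrate/MobileSubstrate.dylib",   # code-injection library
    "/bin/bash",                                        # shell absent on stock devices
]

def looks_jailbroken(path_exists):
    """Return True if any known jailbreak artefact is visible.

    `path_exists` is a callable taking a path and returning a bool; a real
    detector would call the platform's file APIs here, and could only check
    the limited set of locations it knows about and has permission to read.
    """
    return any(path_exists(p) for p in JAILBREAK_ARTEFACTS)

# A stock device exposes none of the artefacts:
print(looks_jailbroken(lambda p: False))                             # False
# A device with a third-party store installed trips the check:
print(looks_jailbroken(lambda p: p == "/Applications/Cydia.app"))    # True
```

Note that malware which gains elevated privileges without leaving any of these artefacts behind would pass this check – exactly the evasion described in the next paragraph.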
A jailbreak is usually a benign payload attached to an exploit for a vulnerability in the platform. Vulnerabilities allowing privilege escalation on modern platforms are thankfully rare, thanks to the excellent work of security teams across the major product vendors. However, when a jailbreak contains such an exploit, malware authors can take that exploit and repackage it with a more malicious payload. This gives them the opportunity to install their own code with high privileges – code which doesn’t modify the underlying platform in the same way a jailbreak does, and which therefore evades the jailbreak detection software in MDMs.
So can anti-malware software on modern devices help? The irony of attempting to scan for malware on a sandboxed platform is that, without additional permissions, the scanning application is itself limited by the sandbox and code signing that attempt to keep the platform secure. Techniques on older platforms (such as checking apps before they run, scanning the entire storage system, and reviewing what applications are present) may all be hard, if not impossible, for anti-malware tools to achieve on a modern platform.
Jailbreak detection can help identify where the user has deliberately altered the security of their device – assuming jailbreak authors don’t begin routinely evading jailbreak detection. The user could, however, jailbreak their own device and then install an application such as xCon, which disables jailbreak detection routines.
What jailbreak detection cannot detect is the presence of the original exploit(s) which permitted the privilege escalation in the first place. No third-party product can possibly do this. These inherent vulnerabilities and potential exploits can only be fixed through the installation of security updates from the product vendors themselves.
Jailbreak detection is useful, but don’t rely on it exclusively. It is extremely important to have a rapid-response patching policy: users should be able to update their devices as soon as updates are available, and devices whose manufacturers support the installed platform for the device’s lifetime should be preferred. In fact, one of CESG’s twelve principles to think about when choosing devices is exactly this – Device update policy.
Start talking with users about security and pretty quickly you end up on the topic of passwords.
Passwords are probably the security measure that everyone runs into on a daily basis. We have passwords for our IT systems at work, we have passwords for the services we use at home, we have passwords for the devices we use. There are passwords everywhere!
However, the conversation we've had with people all around the public sector hasn't been a happy one when it comes to passwords. When every system needs a different password, the complexity settings for each system are set high, and password changes are enforced frequently, the outcome is not better security. Through research in collaboration with the Research Institute in the Science of Cyber Security, we've learnt how trying to make passwords "more secure" can leave systems less secure. When we're overloaded with passwords, we all end up "breaking the rules": we use the same passwords across different systems; we use coping strategies to make passwords more memorable (and thus more easily guessed); and we store passwords insecurely. Jokes about passwords on sticky notes underneath keyboards aren't jokes.
When we overload users with passwords, we also add cost. There's the cost of dealing with increased password resets and account lockouts, and by putting up barriers in the name of security, we reduce the functionality of systems, and make it harder for people to do their jobs.
Worst of all, making all password policies "complex" doesn't stop attacks; see Microsoft's research paper on this subject. Attackers who have stolen a password database - even if hashed and salted - can generally brute-force the majority of the passwords in a reasonable length of time, unless the passwords are so long as to be impossible to remember. Attackers who only get a few tries at guessing passwords (such as against a well-designed online service, or an enterprise IT network with throttling and lockout) will be stopped by a fairly short password. The vast majority of password policies sit in the middle - they give us passwords that are far too short to resist offline brute-force attacks, but far more complicated than they need to be to stop online guessing. The result is that we're asking users to put in more work remembering complicated passwords, for no actual extra security benefit.
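The gulf between the two attack scenarios can be made concrete with some back-of-the-envelope arithmetic. The guess rates below are illustrative assumptions, not measured figures: on the order of ten billion guesses per second for an offline attack on a stolen hash database, versus a hundred guesses per hour against a service that throttles and locks out.

```python
def guess_time_seconds(charset_size, length, guesses_per_second):
    """Worst-case time to exhaust the keyspace by brute force."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second

# 8-character password drawn from letters and digits (62 symbols).
# Offline attack on a stolen hash database (assumed ~1e10 guesses/sec):
offline = guess_time_seconds(62, 8, 1e10)
print(offline / 3600)        # a matter of hours

# Online attack against a throttled service (assumed ~100 guesses/hour):
online = guess_time_seconds(62, 8, 100 / 3600.0)
print(online / (3600 * 24 * 365))   # astronomically many years
```

The same password that falls in hours to an offline attack would outlast the attacker by many orders of magnitude when guesses are throttled – which is why policy effort is better spent on throttling, lockout and protecting the password database than on ever-more-complex composition rules.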
Today we've published our new guidance on passwords. Unlike previous guidance, this doesn't focus on trying to get ever more entropy into passwords. Instead we're encouraging system designers and security architects to think more about where they're requiring passwords, and what they're trying to achieve with them. We also recommend simple approaches to improve security, whilst improving the usability of systems. In a future blog we'll describe how you might implement some of these concepts in different scenarios - such as on End User Devices.
As ever, we're always grateful for feedback on our guidance. Please email enquiries@cesg.gsi.gov.uk or leave a comment on this blog post with your thoughts.
It's been a while since I last blogged about the new Online Services team. There's a lot to report, so this will be the first of a short series of blogs giving an update on progress. This first blog post will cover the key milestones we've passed so far. Subsequent posts will cover how we're transitioning existing activities into the new team and, finally, I’ll say something about our adoption of agile methods for delivery.
In our first 10 weeks after the team was created, we chose to focus our energy on two main areas: developing ideas we’d had for new digital security services and learning how to make the most of our new OFFICIAL accommodation.
Our user research highlighted some potential gaps in the provision of cyber risk management support for departments developing online services. We identified some ways in which we might fill these. For example, we quickly prototyped a virtual community with a small number of users in which participants could ask cyber security risk management questions and obtain answers from CESG experts or others in the community. We’ll blog more about this experiment soon. We also began to explore other ideas and learnt a lot about agile working along the way.
We did much of this development work in our newly-acquired OFFICIAL location. We immediately appreciated the value of having a large space in which we could work collaboratively. Although we are used to open plan environments, we weren’t used to having lots of wall space on which to capture our ideas or the freedom of access to the internet. We found this was a real spur to creative, innovative thought.
As planned, by 8 May, we had been able to develop new ideas and new ways of working. We had identified a number of blockers but had ideas of how to overcome these. We therefore felt confident about continuing into a second stage of activity, running from 11 May to 31 July. For this stage, our goal is to continue the development work on three new ideas and also to successfully transition existing activities in CESG into our team.
The decision to create the team is in direct response to what you have been telling us. We’ve heard your concerns about whether CESG can currently deliver the support you need in this new environment.
Our team is working in a new location, where you can come and work with us at OFFICIAL. Our mission is to make it easier for government to secure its digital services. Since last Monday, we’ve been revisiting user stories we collected at the end of last year. We’ve used them to build our Sprint One deliverables, which should enable us to make a measurable and visible difference to the security of digital services.
During the first sprint, which will run for 10 weeks until 8 May, we’ll blog regularly so you can see what we’re doing. We want to hear what you think about our blog and what we are doing, so please email your feedback to enquiries@cesg.gsi.gov.uk.
We were quietly pleased that our additional user requirements capture and analysis didn’t lead to a major re-write. Rather, we found we needed to fine-tune the principles to reflect the greater breadth of our understanding. For example, we spent quite a lot of time debating what we’d said about the use of jargon – sometimes using technical language is the right choice if you are communicating exclusively within the technical community.
We made other refinements, so take a look, see what you think and let us know. CESG will be saying more about managing risk at OFFICIAL for government over the next few weeks - watch this space for more information.
As part of this project we have been building a new desktop environment to support the user needs of our team and also give us the opportunity to test out the risk management approach we are developing on our own IT project. This post is the first in a short series to follow the evolution of this system and some of the important decisions we make along the way.
Our service design started with understanding the needs of our users, who are a small group split across two locations. These boiled down to:
The next step, before making any technology decisions, was to think about what data or information the team might come into contact with. Based on our plans for the alpha we listed the following types of information:
Normal project work – examples:
- Most of our day-to-day work on the project and its outputs
- Our project planning materials
- Normal correspondence within the team

Personal information – examples:
- Contact details for our team and others
- Personally sensitive emails
- Personal correspondence, depending on the content

Information which is sensitive to a public or commercial entity – examples:
- Correspondence and documentation relating to specific public sector organisations’ risks, systems or information
- Commercial information and correspondence with companies
After identifying the information we expected to hold, we set out some basic guidelines for our team on how we would protect it. Our three different types of information needed different protections. We produced a guide on how we would aim to protect each type, so we could reassure senior stakeholders that we were properly looking after information that needs to be well protected. Some highlights from the reassurance we provided include:
Having established our user needs and set some ground rules for how we would protect different types of information, we were ready to start making some decisions about how to build our IT system. More posts about important choices we make will follow.
Sign up for email alerts from the CESG Digital blog.
We’ve done this sort of strategy exercise before, but this time we recognised that we were in danger of offering what we perceive our users want rather than really understanding their needs. Those of us in CESG who have worked with GDS and departments on some of the 25 exemplar services have seen the user-centric approach to design of digital services work very well, and we saw no reason why we couldn't borrow most of the approach to design and deliver the next generation of some of our services.
We've taken some liberties in how we've followed the approach because a lot of our services are 'physical' (e.g. our consultancy services) rather than digital, but it has worked well so far, so we thought we would share our experience.
At the end of our discovery, we had 15 potential new services or transformation projects we could have taken forward into alpha. Deciding which to pilot wasn’t easy, but looking at all the things we could do, their potential impact and the dependencies between them, there was one project that stood out – produce a compelling alternative approach to risk management for OFFICIAL.
Of all the user needs we collected in discovery, there was a strong theme around the need for improvements to the way we apply security risk management in government. We want to promote a more pragmatic and effective risk management approach that better supports the technology strategies of departments.
In the spirit of starting small and iterating, our risk management work will focus on the approach taken by three projects to building their OFFICIAL IT systems: the Cabinet Office technology transformation programme, CERT-UK, and the risk management decisions we make around our own IT for the alpha delivery team.
During this alpha phase we'll be 'learning by doing' and exploring how existing good practice and new ideas can be used to help manage risk in a way that meets user needs. If you have experience of using our existing risk management and accreditation guidance we'd love to hear from you and understand how well it works for your organisation's needs. We’ll be blogging here for the duration of the alpha about our work on redefining an approach to risk management.