Firewalls are starting to look somewhat dated (see zero trust networks), because so much activity now takes place outside the ‘network perimeter’ (in the cloud and on mobile devices, for example), because so many ports need to be left open, and because the risk posed by insider threats is increasingly recognised. However, they still have an important role to play as part of an overall security methodology, especially in view of recent developments in WAF (Web Application Firewall) technology and ‘next-gen’ firewalls.
The original firewalls were fairly primitive filtering tools that monitored data entering or exiting the network, and either allowed it through or blocked it depending on certain parameters, including the port number, direction of flow and source/destination IP addresses. They have evolved significantly since then: ‘next-gen’ firewalls are much more intelligent – able to inspect encrypted data flows, look for malware and operate an ‘identity-based approach’ that is far more sophisticated than simply checking port numbers and IP addresses. Firewalls also remain the principal mitigator of DoS and DDoS attacks, which are still common and can be extremely disruptive.
MULTI-FACTOR AUTHENTICATION
The ability to prove – or at least show with very high probability – that you are who you say you are is clearly a fundamental part of any security system. Passwords have been the workhorse of digital authentication for many years (they were first used in 1961) but have fallen out of favour recently. The main problem is finding a balance between a password that is secure (hard to guess for a computer that can work through billions of possibilities per second using a ‘brute force’ or ‘dictionary’ attack) and one that is easy to remember. As a rule of thumb, given current computing power, eight characters is reckoned to be the minimum secure password length – and those characters should be a random mix of letters, numbers and symbols.
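To make the brute-force arithmetic concrete, here is a minimal sketch comparing the search space of an eight-character password drawn from the ~94 printable ASCII symbols with one drawn from lowercase letters only. The one-billion-guesses-per-second rate is an illustrative assumption, not a fixed property of real attack hardware.

```python
def brute_force_seconds(length, alphabet_size, guesses_per_second=1e9):
    """Worst-case time to exhaust every password of the given length."""
    return alphabet_size ** length / guesses_per_second

# Eight characters from ~94 printable symbols vs eight lowercase letters.
strong = brute_force_seconds(8, 94)
weak = brute_force_seconds(8, 26)

print(f"mixed 8-char password: ~{strong / 86400:.0f} days to exhaust")
print(f"lowercase 8-char password: ~{weak / 60:.1f} minutes to exhaust")
```

The contrast (days versus minutes) shows why the character set matters as much as the length.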
Apart from passwords, PINs and other so-called ‘knowledge factors’, the two other main means of authentication are physical or ‘owned’ factors (e.g. a card, token or phone, as in SMS-based verification), and factors that inhere in the user themselves (typically, but not necessarily, biometric identifiers such as fingerprints). A combination of two or more of these factors constitutes Multi-Factor Authentication (two-factor authentication, or 2FA, is a form of MFA), and although it can be bypassed, it is far more secure than passwords alone. A further level of assurance can be achieved using geo-location: verifying that the user’s location makes sense and is consistent with what they are trying to achieve.
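As an illustration of an ‘owned’ factor, authenticator apps and their servers typically derive matching one-time codes from a shared secret using the TOTP algorithm (RFC 6238). A minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, timestamp, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = timestamp // step                  # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret is the ASCII string "12345678901234567890".
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, 59, digits=8))   # "94287082" - the published test vector
```

Because both sides hold the secret and the clock, possession of the device (and its secret) is what the code proves.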
Repeated application of authentication within a network is an important part of the theory of zero trust networks, along with privilege management and the principle of least privilege. It limits the ability of external attackers operating inside the network, and of insider threats, to traverse from their entry point to more sensitive areas, or to exfiltrate data.
The Principle of Least Privilege means giving all users (and indeed all processes, applications and devices) the minimum access rights they need to carry out their authorised activities. This may be a limit on the network areas, directories or files that can be accessed, but also on the length of time for which the access rights apply. Controlling privileges in this way can be an extremely effective deterrent against external and internal threats, both of which exploit ‘excess’ access rights to traverse the network, access credentials and other data, and cause damage in other ways.
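A least-privilege check can be sketched as a default-deny lookup against a table of grants that are limited in both scope and time. The users, resources and dates below are purely illustrative:

```python
from datetime import datetime

# Hypothetical grant table: each entry names a resource and an expiry time,
# so access is limited in both scope and duration (illustrative only).
GRANTS = {
    "alice": [("hr/payroll.db", datetime(2030, 1, 1))],
    "bob":   [("sales/leads.csv", datetime(2020, 1, 1))],   # already expired
}

def may_access(user, resource, now):
    """Deny by default; allow only an unexpired, exactly matching grant."""
    return any(res == resource and now < expiry
               for res, expiry in GRANTS.get(user, []))

now = datetime(2024, 6, 1)
print(may_access("alice", "hr/payroll.db", now))    # True
print(may_access("bob", "sales/leads.csv", now))    # False - grant expired
print(may_access("alice", "sales/leads.csv", now))  # False - never granted
```

The key design choice is the default: anything not explicitly granted is refused.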
Before you can properly manage and protect personal data in accordance with applicable privacy regulations (GDPR, CCPA etc.), you need to be certain what data you have and where it resides. Data Discovery software trawls through all the locations where such data may be stored – PCs, databases, devices such as mobile phones, and cloud storage. A variety of approaches are used to find relevant data, ranging from regular expressions that hunt for the recognisable format of 16-digit credit card numbers, to machine learning algorithms and Optical Character Recognition (OCR) that can spot data presented in images and scanned documents. The result should be a complete map of sensitive data across the entire enterprise – including information that is duplicated across multiple storage locations.
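The credit-card case, for example, can be sketched with a regular expression plus a Luhn checksum to weed out random 16-digit strings. The pattern below is deliberately simplified and would need refinement for production use:

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")  # 16 digits, optional separators

def luhn_ok(number):
    """Luhn checksum - filters out 16-digit strings that are not card numbers."""
    digits = [int(d) for d in number if d.isdigit()][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text):
    return [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]

sample = "Order ref 1234, card 4111 1111 1111 1111, phone 1234 5678 9012 3456"
print(find_card_numbers(sample))   # only the Visa test number passes Luhn
```

Note how the second 16-digit run is matched by the regex but rejected by the checksum, reducing false positives.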
The most rigorous solutions incorporate scanning of the dark web for data that has been exfiltrated for sale to criminal networks. PI data on the dark web that can be matched with internal data is clearly a sign that a breach has occurred.
Forcepoint offers a powerful ‘crawler’ solution that is able to scan for and then classify sensitive data.
DATA LOSS PREVENTION
Digitally stored data is frequently as valuable and important to organisations as their physical assets – or indeed more so. And it faces similar risks: it can be stolen, damaged or ransomed by criminals, and is also at risk from the activities – criminal or negligent – of employees or friendly third parties with digital links. In the age of GDPR and other robust privacy regulations, we tend to focus on Private Information (PI) such as addresses, credit card details, passwords, health information and so on. However, companies also need to protect Intellectual Property (IP) assets such as design blueprints and corporate strategy documents. Digital data can be ‘structured’ (stored in a regular, easily searchable format such as a database) or ‘unstructured’ (emails, PDFs etc.), and needs to be managed both ‘at rest’ and ‘in transit’ (i.e. while being transferred by email, FTP and so on). PI and IP data may, of course, also reside in the cloud.
Apart from the direct commercial implications of damaged or stolen data and trade secrets, GDPR can result in severe fines for companies that suffer data breaches or are found not to be applying the regulations properly. DLP solutions can help mitigate both of these risks.
Arguably, DLP is synonymous with information security, as nearly all aspects of the broader field are concerned with maintaining the confidentiality, integrity and availability of data, including malware management, encryption, firewalls and so on. However, specific ‘DLP software’ tends to focus on the following narrower definition of “detecting and preventing unauthorised exfiltration or damage of [sensitive] data”. This entails locating the data – a job that is harder than it sounds given the wide range of locations (including cloud storage, mobile devices etc) where it may be stored. Once the data has been located, DLP software monitors it, looking out in particular for unusual-looking behaviour (UEBA) such as file transfers at strange times or with unexpected destinations. Policies can be set up that apply different rules in different circumstances, depending on who sent the data, the type of data, the channel being used and so on, with certain situations resulting in the flow being blocked and a range of other responses.
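Such a policy can be sketched as a table mapping (data classification, channel) pairs to actions, with a default response for anything not explicitly covered. The classifications, channels and actions below are invented for illustration:

```python
# Minimal sketch of a DLP-style policy table (names and rules illustrative).
POLICIES = {
    ("pii", "email-external"):    "block",
    ("pii", "email-internal"):    "allow",
    ("ip",  "usb"):               "block",
    ("public", "email-external"): "allow",
}

def evaluate(classification, channel):
    """Anything without an explicit rule is escalated for review."""
    return POLICIES.get((classification, channel), "alert")

print(evaluate("pii", "email-external"))  # block
print(evaluate("ip", "cloud-upload"))     # alert - no explicit rule
```

A real product would key on many more dimensions (sender, destination, time of day), but the rule-lookup shape is the same.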
Forcepoint offers a comprehensive DLP solution that protects against data exfiltration across a wide range of media (email, cloud, print etc.), including a feature called ‘Drip DLP’ that can spot the so-called ‘low and slow’ exfiltration technique, whereby sensitive information is broken into small pieces and leaked – inconspicuously – over a long period of time.
Network ‘endpoints’ are generally defined as end-user devices – digital equipment operated by users within an organisation, such as PCs, laptops and mobile phones. Because of the increasing amount of endpoint-based activity that is outside the security perimeter of the network, or on WiFi, a number of security solutions have been developed that are located on the endpoints themselves. These are more advanced than earlier anti-virus applications, in that they now incorporate many different functions that aim to mitigate threats at different stages of the attack chain. They are also smarter – often applying machine learning and UEBA to help identify malicious activity.
There is a difference between EPP (Endpoint Protection Platform) and EDR (Endpoint Detection and Response). The former looks after the underlying security functionality: it detects and blocks threats using various anti-malware, intrusion-prevention and anti-exfiltration techniques. EDR provides an extra level of understanding and control for users – threat monitoring, alerts and explanations, for example – across the full breadth of the enterprise’s networks and devices. Organisations are becoming more attracted to EDR-type solutions as we move into the GDPR era, as they can be a helpful way to present the organisation’s information security environment – and the events happening in it – to senior management and regulators.
EPP and EDR solutions are offered by Cylance, a software firm that is considered a pioneer in the application of machine learning to malware detection and prevention.
A number of vulnerabilities can be introduced into websites at the development phase by coding that doesn’t take security risks into consideration. Probably the most infamous attack associated with this type of oversight is the ‘buffer overflow’, which works by writing more data – via, for example, the password input field – than the receiving ‘buffer’ (a fixed-size area of the computer’s memory) can hold. The second part of the attack is to use the overflowing data to insert malicious code that might, for example, give the attacker admin privileges. Also common are ‘injection attacks’, which take advantage of insecure handling of SQL, JSON and other languages to ‘inject’ malicious code through website text boxes where you would normally input a database query or similar. Exploiting any of these vulnerabilities can lead to a compromised website – a common vector for malware.
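The SQL injection case can be demonstrated in a few lines: concatenating user input straight into a query lets an attacker smuggle in an OR clause, whereas a parameterised query treats the same input as a harmless literal.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL string, so the
# OR clause executes and every row comes back.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'").fetchall()
print(len(leaked))   # 1 row leaked despite the bogus username

# Safe: a parameterised query treats the whole input as a literal value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(safe))     # 0 rows
```

The fix costs nothing at runtime; the difference is purely whether the input can ever be interpreted as SQL.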
Injection attacks are surprisingly common – right up there with DDoS – but these and other vulnerabilities can be mitigated by considering secure coding practices when building the website. Subsequent monitoring can be done by using web app security software of the type offered by Acunetix.
Encryption is absolutely fundamental to information security. (Secure) encryption can help to achieve the confidentiality and integrity of data at rest (in storage in a database) and data in transit (when it’s being transferred via email or FTP transfer, for example). Given the current (publicly known) state of technology, it is effectively impossible to break the current ‘secure’ encryption standards.
The most commonly used encryption system that is generally considered secure is a symmetric block cipher called Rijndael, developed by Belgian researchers and subsequently selected by the US government as the Advanced Encryption Standard (AES). Other well-tested systems include RSA (an asymmetric system) and 3DES, although 3DES has been deprecated and is being phased out.
The possibly imminent arrival of quantum computing is problematic for encryption, as quantum search (Grover’s algorithm) could dramatically weaken 128-bit systems such as the very widely accepted AES-128. This is why, in some applications, 256-bit (symmetric) encryption systems are used instead. The drawback of these stronger algorithms is that they use more processing power and/or time, which can be inconvenient and expensive, so they tend to be reserved for situations requiring very high security.
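The arithmetic behind the 128-bit versus 256-bit choice can be sketched as follows, assuming (per Grover’s algorithm) that a quantum search over an n-bit key takes roughly 2^(n/2) operations:

```python
# Rough effective-strength arithmetic. Assumption: Grover's algorithm lets a
# quantum computer search an n-bit keyspace in roughly 2**(n / 2) operations.
classical_aes128 = 2 ** 128   # classical brute-force effort against AES-128
quantum_aes128 = 2 ** 64      # Grover-reduced effort against a 128-bit key
quantum_aes256 = 2 ** 128     # a 256-bit key keeps 128-bit effective strength

print(f"AES-128 under quantum search: ~2^{quantum_aes128.bit_length() - 1} operations")
print(f"AES-256 under quantum search: ~2^{quantum_aes256.bit_length() - 1} operations")
```

In other words, doubling the key length restores the pre-quantum security margin.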
Anti-malware applications attempt to detect malicious software, a term which now covers a huge range of different digital threats including rootkits, keyloggers, adware, malicious URLs etc etc. Once detected, the relevant file is ‘cleaned’ (the infection is removed from the file), ‘quarantined’ or deleted. The main differences between vendor solutions are based on the approach to detection:
- SIGNATURE BASED
This is the traditional approach to malware detection: anti-virus specialists look out for new malware, and when they find an example they analyse it and extract a ‘signature’ that can be used to identify it as malicious. Users of the application hold a signature database that is maintained through frequent updates from the vendor. This methodology is arguably being deprecated in favour of the more modern approaches below, because a small change to the malware’s code can change its signature and render it undetectable. ‘Polymorphic’ malware changes continuously precisely in order to evade this form of defence.
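A minimal sketch of the signature approach, using a file hash as the ‘signature’, also demonstrates its weakness: changing a single byte of the malware defeats the lookup. The signature set here is invented for illustration.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad files.
SIGNATURES = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def is_known_malware(file_bytes):
    """Exact-match lookup of the file's hash against known signatures."""
    return hashlib.sha256(file_bytes).hexdigest() in SIGNATURES

print(is_known_malware(b"malicious payload v1"))   # True - exact match
print(is_known_malware(b"malicious payload v1!"))  # False - one byte changed
```

Real signatures are usually patterns over code fragments rather than whole-file hashes, but the brittleness is the same in kind.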
- HEURISTIC

This is a more open-minded approach than signature-based detection, relying on spotting ‘suspicious characteristics’ rather than very specific pieces of code. It can also entail ‘dynamic heuristics’, whereby a questionable program is allowed to run in a ‘virtual machine’ or ‘sandbox’ (a virtual computer emulated on another computer) so that its behaviour can be observed. This approach is more resistant to dynamic threats.
- MACHINE LEARNING (“AI”)
The machine learning or “AI” approach entails powerful computers using statistical learning algorithms such as neural networks to churn through many millions of examples of good and bad programs. The result of this is that the software learns by looking at a large number of features that may be used to describe each program, and from its observations gains the ability to assign a probability to whether another program is likely to be malware. This probabilistic result can then be used either automatically or manually to decide what to do with the file.
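A toy version of this idea is a logistic model: a weighted sum of program features squashed into a 0–1 probability. The features and weights below are invented for illustration; real products learn them from millions of labelled samples.

```python
import math

# Toy linear model: in practice the weights come from training on labelled
# samples. These features and weights are purely illustrative.
WEIGHTS = {"calls_crypto_api": 2.1, "writes_to_system_dir": 1.7,
           "is_digitally_signed": -2.5}
BIAS = -1.0

def malware_probability(features):
    """Logistic model: weighted feature sum squashed to a probability."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-score))

suspicious = {"calls_crypto_api": 1, "writes_to_system_dir": 1,
              "is_digitally_signed": 0}
benign = {"calls_crypto_api": 0, "writes_to_system_dir": 0,
          "is_digitally_signed": 1}
print(f"{malware_probability(suspicious):.2f}")  # high probability
print(f"{malware_probability(benign):.2f}")      # low probability
```

The probabilistic output is what allows a policy layer to choose between automatic blocking and human review.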
Cylance and Check Point both produce endpoint anti-malware solutions. Cylance specialises in the application of machine-learning anti-malware techniques, and provides the software that Bitglass uses for detection and prevention of threats in cloud apps.
ATTACK SURFACE MANAGEMENT
The organisation’s digital attack surface is the full range of points or vectors through which a cyber-attacker could potentially infiltrate its network. These include connected devices such as PCs, mobile phones and IoT devices (e.g. thermostats or, in industrial and infrastructure settings, ICS units). However, an expanding and increasingly relevant part of the attack surface is the collection of online assets accumulated by the organisation, which can run to literally thousands of websites, web applications, domains, SSL certificates and so on. A fundamental rule of information security is simply to minimise the attack surface. Attack surface management solutions help to do this by locating and reviewing the full range of the organisation’s online resources, along with fake websites and other web-based assets that misuse its corporate identity. They also help to manage the attack surface by compiling observed vulnerabilities, unpatched or out-of-date software implementations, expired SSL certificates and other security risks, so that they can be addressed.
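The review stage can be sketched as a pass over an asset inventory, flagging expired certificates and known-vulnerable software versions. The hosts, dates and version numbers below are invented for illustration; in practice the inventory would be built by discovery scans.

```python
from datetime import date

# Illustrative asset inventory (would normally come from discovery scans).
ASSETS = [
    {"host": "shop.example.com", "cert_expiry": date(2023, 1, 10), "version": "2.4.49"},
    {"host": "blog.example.com", "cert_expiry": date(2031, 6, 1),  "version": "2.4.62"},
]
KNOWN_VULNERABLE = {"2.4.49"}   # e.g. versions with published CVEs

def review(assets, today):
    """Return (host, issue) findings for expired certs and risky versions."""
    findings = []
    for a in assets:
        if a["cert_expiry"] < today:
            findings.append((a["host"], "expired certificate"))
        if a["version"] in KNOWN_VULNERABLE:
            findings.append((a["host"], "vulnerable software version"))
    return findings

for host, issue in review(ASSETS, date(2024, 6, 1)):
    print(host, "->", issue)
```

Each finding then feeds the remediation queue: renew the certificate, patch the software, or retire the asset.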
RiskIQ provides attack surface management solutions, using advanced scanning and crawling technology to map out the organisation’s digital footprint and any associated vulnerabilities. They also offer an online threat intelligence service that can be used by at-risk clients such as high net worth individuals, politicians or senior management of well-known companies. This service searches the full internet including the deep and dark webs, for risk signals such as leaked sensitive information, personal threats or social impersonations.
SECURITY AWARENESS TRAINING
Phishing, and other methods of distributing malware via communications media such as instant messaging, are currently the main root cause of cyber-attacks. Robust digital security can mitigate the malware itself by applying endpoint security and anti-malware software – either blocking it at the moment a file or link is executed, or detecting and eliminating it on the network and endpoints while it works through the time-consuming processes of scanning for and then encrypting or exfiltrating data. However, it’s clear that the optimum solution is simply to block the exploit at the earliest stage, and this can be achieved – to a considerable extent – by training employees to spot and avoid threats.
KnowBe4 offers comprehensive training programs that include test attacks, video training modules and assessments, and reports that management can use to monitor awareness and track improvements in response performance.
As corporate and governmental organisations become exposed to new and evolving threats, they accumulate evidence about the nature and source of the threats, the vectors they employ, mitigating procedures and so on. To a great extent, this information – otherwise known as ‘threat intelligence’ – is shared via communities such as Open Threat Exchange. However, larger organisations may set up their own Threat Intelligence Platforms (TIPs) as part of their Security Operations Centres (SOCs), or take advantage of commercial products such as those offered by Check Point.
Keeping operating systems and software up to date is generally viewed as the single most important way to protect against cyberthreats. This is largely because many exploits take advantage of flaws in software, and when such vulnerabilities are spotted the developers of the software are usually able to come up with a repair or ‘patch’, which is subsequently released as an update.
It is critical to stay up to date with patches. In some cases, vulnerabilities to specific exploits are patched within days or even hours, but once a patch has been released, attackers know that unpatched devices represent an opportunity. Indeed, many hackers seek out such opportunities using scanning applications that look for computers running obsolete software versions. The zero trust network approach recommends applying an ‘upgrade-only’ policy to software, as one potential exploit is to induce a version downgrade in order to expose a known vulnerability – something that is relatively easy to get away with, as previous versions are often both authorised and trusted.
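An ‘upgrade-only’ policy can be sketched as a strict version comparison that refuses both downgrades and reinstalls of the current version (using simple dotted version strings for illustration):

```python
def parse_version(v):
    """Turn '2.4.1' into (2, 4, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def allowed_update(installed, proposed):
    """Upgrade-only policy: accept only strictly newer versions, which
    blocks downgrade attacks against already-patched vulnerabilities."""
    return parse_version(proposed) > parse_version(installed)

print(allowed_update("2.4.1", "2.5.0"))  # True  - genuine upgrade
print(allowed_update("2.4.1", "2.3.9"))  # False - downgrade refused
print(allowed_update("2.4.1", "2.4.1"))  # False - reinstall refused
```

Real packaging systems use richer version schemes, but the strictly-greater-than rule is the essence of the policy.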
ZERO TRUST NETWORKS
The theory of zero trust networks is based on the prevailing view in infosec that we should now consider the network perimeter to be porous. This is due to the number of personal devices that tend to be connected via the cloud (‘bring your own device’ or BYOD), third-party access, insider threat risk and so on. A geographical analogy would be London or San Francisco compared with the walled cities of medieval times, when access was controlled at one or two gates.
What this means is that firewalls remain important, but can’t be fully relied on. The focus switches to monitoring activity on the network (anomaly detection, unauthorised data exfiltration etc.) and an assumption that individuals (in fact, individual/device combinations known as ‘agents’) operating in the network or in sensitive areas of the network probably shouldn’t be there (“zero trust”) unless they can justify their presence (authentication and privilege management – “principle of least privilege”).
WEB APPLICATION SECURITY
Websites and web applications that have been neglected, so that software is out of date, or that are vulnerable due to misconfiguration or insecure coding, represent a real threat to the connected organisation. Web application security deals directly with these problems. First, a discovery process searches the internet for online assets belonging to, or connected with, the organisation. These are then scanned for conspicuous vulnerabilities, such as an expired SSL certificate. Finally, crawling software works through all the pages, forms and so on in the websites, in the same manner as a hacker, looking for more subtle issues such as coding errors that might allow access via injection attacks or other exploits. The results of the investigation provide the basis for remediation and a hardening of the organisation’s attack surface.
USER AND ENTITY BEHAVIOUR ANALYTICS (UEBA)
A relatively recent form of information security analytics that sits on top of more traditional solutions, UEBA uses machine learning to develop a sense of what is ‘normal’ and what is ‘anomalous’ activity on a network. It is an important component of the Zero Trust Network approach, as it treats all users as a potential threat – searching out malicious employees (insider threat) along with attackers from without that have breached the network with the intention of exfiltrating or damaging data or installing malware. Once it has established what typical network behaviour looks like, it will monitor for unusual activity – be it an unusual device, time of activity, location of activity or combination of these. Other typical observations might include consecutive login failures, or evidence that an employee in one department (say accountancy) is exporting data from the network that comes from a different department (say HR). The response may be to report the anomalous activity for investigation and/or block it.
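A minimal sketch of the anomaly-detection idea: build a statistical baseline per user and flag events that deviate from it by more than a few standard deviations. The login-hour history below is invented for illustration; real UEBA baselines span many behavioural dimensions at once.

```python
import statistics

# Hypothetical per-user baseline: login hours observed over previous weeks.
baseline_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]

def is_anomalous(hour, history, threshold=3.0):
    """Flag events more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(hour - mean) > threshold * stdev

print(is_anomalous(9, baseline_hours))   # False - typical office-hours login
print(is_anomalous(3, baseline_hours))   # True  - a 3 a.m. login stands out
```

Flagged events would then be scored against other signals (device, location, data volume) before a response is chosen.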
UEBA is sometimes used in DLP software as a means of assigning risk scores to network users: users who are behaving in a way that reduces the probability of malicious intent (i.e. behaving ‘normally’) are less restricted in their ability to move sensitive data around than those who are behaving anomalously. Forcepoint is a vendor that uses UEBA extensively in its DLP solutions.
One way to limit the potential damage from a successful infiltration attempt, is to segment the network so that it is harder or impossible for attackers to traverse from their entry point (e.g. a salesperson’s laptop) to a more sensitive area (e.g. a database containing personal information). This is normally done by using next-gen firewalls and Virtual Local Area Networks (VLANs) that restrict access from one ‘zone’ to another.
This type of segmentation is a method to mitigate third party risk, whereby a supplier or other partner causes a network compromise; they – or an attacker that has infiltrated their own systems – get access to the areas that they need, but are unable to traverse to more sensitive zones.
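Zone-to-zone segmentation can be sketched as a default-deny policy matrix, which is conceptually what a next-gen firewall or VLAN ACL enforces. The zone names below are invented for illustration:

```python
# Illustrative zone-to-zone policy matrix; a next-gen firewall or VLAN ACL
# would enforce something equivalent at the network layer.
ALLOWED_FLOWS = {
    ("sales-laptops", "crm-app"),
    ("crm-app", "crm-db"),
    ("supplier-vpn", "ordering-app"),
}

def flow_permitted(src_zone, dst_zone):
    """Default-deny: traffic may only cross zones along whitelisted paths."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(flow_permitted("sales-laptops", "crm-app"))  # True
print(flow_permitted("sales-laptops", "crm-db"))   # False - no direct path
print(flow_permitted("supplier-vpn", "crm-db"))    # False - supplier contained
```

Note that the salesperson’s laptop can reach the CRM application but never the database behind it, and the supplier is confined to the ordering system.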
The Cloud provides many opportunities for business, but because some control and responsibility is lost, it’s easy to slip up on security configuration, updates and patches, all of which must be monitored carefully. Most of the security functions that are applied ‘on-premise’, such as anti-malware and DLP, can also be used in the Cloud, and are frequently supplied by the same vendors that offer the on-premise solutions. However, there are also specialised cloud security providers that act as CASBs (Cloud Access Security Brokers), effectively intermediating between endpoints (including mobile phones) and Cloud applications such as file hosting services (e.g. Dropbox). That way, data traffic to these applications can be monitored and controlled, to mitigate unauthorised data exfiltration by attackers that have infiltrated a device or network, or by insider threats.
In summary, CASBs provide – in the context of the organisation’s exposure to the cloud – visibility into application usage, monitoring and management of data sharing, malware protection, compliance with privacy regulations and enforcement of access policies on mobile devices. Bitglass and Netskope are CASBs that provide these services.
WEB APPLICATION FIREWALL (WAF)
This is a type of firewall that protects against attacks on web applications by monitoring and filtering the HTTP traffic that passes between the application and the internet. As with a ‘normal’ (network) firewall, traffic restrictions are determined by user-defined policies, which tend either to whitelist (allow specified traffic) or blacklist (exclude specified traffic). WAFs can be deployed ‘on-premise’ or in the cloud. Imperva offers a market-leading WAF.
WAFs defend applications against two main types of threat: exploits that profit from vulnerabilities in the application, such as SQL injection and cross-site scripting (XSS), and application-layer DDoS attacks. For really thorough protection they can be used alongside a web application scanning solution, such as that provided by Acunetix.
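A blacklist-style WAF rule can be sketched as a set of patterns applied to incoming request data. The two rules below are deliberately crude illustrations of SQL-injection and XSS signatures, not production rules:

```python
import re

# Simplified blacklist rules (real WAF rule sets are far more extensive).
BLACKLIST = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # crude SQL-injection pattern
    re.compile(r"(?i)<script\b"),               # crude XSS pattern
]

def inspect(request_body):
    """Block the request if any blacklist pattern matches; otherwise allow."""
    return "block" if any(rule.search(request_body) for rule in BLACKLIST) else "allow"

print(inspect("q=cheap+shoes"))                         # allow
print(inspect("q=1 UNION SELECT password FROM users"))  # block
print(inspect("comment=<script>alert(1)</script>"))     # block
```

Whitelist policies invert the logic: only traffic matching an approved pattern is allowed through, with everything else blocked by default.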