According to predictions in The 2020 Data Attack Surface Report1, by 2025 there will be around 100 zettabytes of data stored in the cloud globally. That’s 100 trillion gigabytes (a zettabyte is 10^21 bytes, or a trillion GB). Total global data storage is projected to exceed 200 zettabytes by 2025. This includes data stored on private and public IT infrastructures, on utility infrastructures, in private and public cloud data centers, on personal computing devices (PCs, laptops, tablets, and smartphones), and on IoT (Internet of Things) devices.
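For concreteness, the unit conversion behind those figures (1 zettabyte = 10^21 bytes, 1 gigabyte = 10^9 bytes) can be checked in a couple of lines:

```python
# Sanity check on the storage figures above.
ZB = 10**21  # bytes in a zettabyte
GB = 10**9   # bytes in a gigabyte

cloud_storage_zb = 100
cloud_storage_gb = cloud_storage_zb * ZB // GB
print(f"{cloud_storage_zb} ZB = {cloud_storage_gb:.2e} GB")  # 1.00e+14, i.e. 100 trillion GB
```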
Here’s a breakdown of what that will look like:
The scale at which we store and access data has become immense: nearly every aspect of life is controlled or monitored by some device that holds data about us. We have become more open about sharing that data and interconnecting it for convenience in our lives. As a result, the growth opportunities for both innovation and cybercrime are immeasurable, because the more data we have, the more there is to steal and attack.
Today there is no shortage of cloud service providers for businesses looking to switch their activities to the cloud. Amazon, Microsoft, Google, and Alibaba provide their own cloud services, coming in at different prices and capabilities to suit every type of enterprise.
To create a reliable environment for businesses, these services have to be checked to ensure secure, on-demand network access to data and business continuity. Because companies want assurance of these outcomes, the cloud audit process has been gaining in popularity.
Cloud audits come in multiple varieties depending on the type or scope of the audit. Audits are usually conducted by an independent group of auditors who examine the cloud services being provided. Internal audits have become a less popular option because of possible bias in the analysis. The goal of a cloud audit is to verify that crucial cloud capabilities meet their security requirements and performance-efficiency targets, and to make sure that costs are optimized.
A cloud computing audit is similar to other types of audits conducted within a business. Its main goal is to check and improve data availability and consider the overall performance and security aspects that should be ensured by the cloud service provider. Audits should be planned and conducted at a minimum set frequency, such as twice per year, or as needed, such as in times of heightened risks or after a security incident. There should also be a record of the audits, maintained on file according to a defined procedure.
A cloud computing audit delivers insights about your cloud infrastructure’s current state and identifies room for potential improvements, optimization, and cloud compliance as well as risks, weaknesses, and vulnerabilities. The ultimate aim of the audit is to align expenditure with the actual demand for data storage, processing, and general accessibility of network and data.
Cloud auditing knowledge can be used to assess the design and operational effectiveness of key areas of cloud computing development.
An audit of a cloud environment is similar to an IT audit. Both examine a variety of operational, administrative, security and performance controls. Cloud audit controls are also similar to IT audit controls but with a focus on the nuances of cloud environments.
Cloud vendors offer several on-demand, as-a-service resources, such as software as a service (SaaS) and platform as a service (PaaS). Audits help assure these offerings are delivered with the appropriate attention to specific controls, especially those involving security policies and risk management.
There are several main types of cloud audit. The type conducted depends on the area a company chooses to investigate to obtain information that is valuable from a business point of view. The environments under audit may be active-active2 or active-passive3.
Four primary types of cloud audits, which I will elaborate on below, are:
Cloud infrastructure is, essentially, the term used to describe all the components required to use cloud technology. Because cloud computing is based on the shared responsibility model we talked about earlier in the series, a cloud security audit examines controls on both the provider’s and the customer’s side. It detects infrastructure misconfigurations, vulnerabilities, and threats within the cloud environment. It can also check whether a cloud server has sufficient logging and monitoring capabilities and verify the access and security policies, improving risk management. Another aspect of the security audit extends to information encryption: whether the data stored in the cloud is protected both in transit and at rest. These risks should be constantly monitored and mitigated.
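As a rough illustration of the checks named above, the sketch below audits a set of hypothetical storage-resource descriptions for missing encryption at rest, unenforced transport encryption, and disabled access logging. The resource fields and names are invented for the example; a real audit would pull this data from the cloud provider’s API.

```python
# Hypothetical resource descriptions; a real audit would fetch these
# from the cloud provider's API rather than hard-coding them.
resources = [
    {"name": "customer-data", "encrypted_at_rest": True,
     "tls_only": True, "access_logging": True},
    {"name": "backup-archive", "encrypted_at_rest": False,
     "tls_only": True, "access_logging": False},
]

def audit_resource(res):
    """Return a list of findings for one storage resource."""
    findings = []
    if not res["encrypted_at_rest"]:
        findings.append("data not encrypted at rest")
    if not res["tls_only"]:
        findings.append("transport encryption (TLS) not enforced")
    if not res["access_logging"]:
        findings.append("access logging disabled")
    return findings

report = {r["name"]: audit_resource(r) for r in resources}
for name, findings in report.items():
    print(f"{name}: {'; '.join(findings) or 'OK'}")
```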
Vulnerability scanning focuses on the assessment of security vulnerabilities and weaknesses that pose a threat to the computer system’s reliability and security. It is usually automated and results in a more effective network and improved system protection from cyberattacks and other malicious activities.
Vulnerability scanning audits provide a full rundown of every potential point of attack and weakness found within computer software, including internal and external networks. All findings of weak spots are verified and compiled into a report by qualified architects and security and DevOps engineers.
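As an illustration of the matching step inside an automated scanner, the sketch below compares an inventory of installed packages against a hypothetical advisory feed of known-vulnerable versions. The package names, versions, and advisory data are invented for the example; real scanners consume curated feeds such as CVE databases.

```python
# Inventory of installed packages (hypothetical versions).
installed = {"openssl": "1.1.1k", "nginx": "1.18.0", "redis": "6.2.7"}

# Hypothetical advisory feed: package -> set of known-vulnerable versions.
advisories = {
    "openssl": {"1.1.1k", "1.1.1l"},
    "nginx": {"1.16.0"},
}

def scan(installed, advisories):
    """Return packages whose installed version appears in an advisory."""
    return sorted(
        pkg for pkg, ver in installed.items()
        if ver in advisories.get(pkg, set())
    )

print(scan(installed, advisories))  # ['openssl']
```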
A vulnerability scanning cloud audit may include a checkup of:

A configuration hardening audit ensures that a system’s security configuration is appropriately set, that the operating system software is updated to stay ahead of new exploits, and that this process runs continuously, using as much automation as possible.
The essential goal of configuration hardening is to prevent as many potential exploits as possible; however, it’s difficult for individual companies to tell whether their configurations are correct.
Misconfigurations and absent security controls can be detected in advance to provide your business with a detailed report with configuration hardening recommendations.
A configuration hardening audit and review includes assessing:
Auditors can help here by assessing systems and critical service configurations and hardening them against vendor-neutral benchmarks.
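As a minimal sketch of such an assessment, the snippet below diffs a system’s current settings against a vendor-neutral baseline. The setting names are borrowed from SSH server configuration purely as an illustration, and the baseline values are assumptions, not an official benchmark.

```python
# Illustrative baseline, loosely in the spirit of CIS-style hardening
# benchmarks (values here are assumptions, not an official standard).
baseline = {
    "PasswordAuthentication": "no",  # SSH: key-based login only
    "PermitRootLogin": "no",
    "X11Forwarding": "no",
}

# Hypothetical current state of the system under audit.
current = {
    "PasswordAuthentication": "yes",
    "PermitRootLogin": "no",
    "X11Forwarding": "no",
}

def hardening_gaps(current, baseline):
    """Return {setting: (current, expected)} for every deviation."""
    return {
        key: (current.get(key, "<unset>"), expected)
        for key, expected in baseline.items()
        if current.get(key) != expected
    }

print(hardening_gaps(current, baseline))
# {'PasswordAuthentication': ('yes', 'no')}
```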
The SDLC can be completed through various methodologies, including the waterfall model, the V-model, the prototyping model, and the spiral model. Each methodology has its own pros and cons, but the most important thing is that the process itself is secure and free of vulnerabilities.
Proper configuration of SDLC pipelines is important as this process underlies the creation of working software. If your CI/CD pipeline is insecure, sensitive data may be exposed to outside sources.
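To make that risk concrete, here is a toy check for one common CI/CD weakness: credentials hard-coded in pipeline configuration. The pipeline snippet, key name, and regex are illustrative assumptions; production teams would use a dedicated secret scanner.

```python
import re

# Hypothetical CI/CD config with an embedded credential (fake value).
PIPELINE_YAML = """\
steps:
  - run: ./deploy.sh
    env:
      AWS_SECRET_ACCESS_KEY: AKIAEXAMPLESECRET123
      REGION: us-east-1
"""

# Flag lines where a secret-like keyword is assigned a value.
SECRET_PATTERN = re.compile(r"(?i)(secret|password|token|api[_-]?key)\w*\s*[:=]\s*\S+")

def find_secrets(text):
    """Return the lines that appear to contain hard-coded credentials."""
    return [line.strip() for line in text.splitlines()
            if SECRET_PATTERN.search(line)]

for hit in find_secrets(PIPELINE_YAML):
    print("possible hard-coded secret:", hit)
```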
Findings are usually compared against benchmarks such as industry or technical standards to determine whether the investigated cloud infrastructure is actually in good shape or whether there are areas requiring swift action and improvement.
Common findings are in areas such as:
Citations:
1Arcserve (USA), LLC, The 2020 Data Attack Surface Report, 2020, https://www.arcserve.com/
2Active-active - An active-active cluster is typically made up of at least two nodes, both actively running the same kind of service simultaneously. The main purpose of an active-active cluster is to achieve load balancing. Load balancing distributes workloads across all nodes in order to prevent any single node from getting overloaded. Because there are more nodes available to serve, there will also be a marked improvement in throughput and response times.
3Active-passive - An active-passive cluster also consists of at least two nodes. However, as the name "active-passive" implies, not all nodes are going to be active. In the case of two nodes, for example, if the first node is already active, the second node must be passive or on standby.
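The difference between the two cluster modes described in these notes can be sketched in a few lines: active-active rotates requests across every node, while active-passive sends all traffic to one node and fails over to the standby only when the active node goes down. Node names here are hypothetical.

```python
import itertools

nodes = ["node-a", "node-b"]

# Active-active: round-robin load balancing spreads requests across
# all nodes, so both serve traffic simultaneously.
rr = itertools.cycle(nodes)
active_active = [next(rr) for _ in range(4)]
print(active_active)  # ['node-a', 'node-b', 'node-a', 'node-b']

# Active-passive: all traffic goes to the active node; the standby
# takes over only when the active node fails.
def route(active_ok):
    return nodes[0] if active_ok else nodes[1]

active_passive = [route(True), route(True), route(False)]
print(active_passive)  # ['node-a', 'node-a', 'node-b']
```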