
WHAT ARTIFICIAL INTELLIGENCE CANNOT DO , a grim note to the top 100 intellectuals of this planet , Part 10 - Capt Ajit Vadakayil




THIS POST IS CONTINUED FROM PART 9, BELOW--



The primary reason behind the shift to the cloud is the cost-friendliness of the technology when compared to on-site storage. It needs no investment in physical hardware and no in-house experts for management, since that is handled by the cloud service provider.

Cloud computing removes hardware expenses, as hardware is provided by a vendor. There’s no need to buy, install, configure, and maintain servers, databases, and other components of your runtime environment. Moreover, using cloud-based solutions, you pay only for what you use, so if you don’t need extra resources you can simply scale down and not pay for them.

You pay for the capacity you use. This allows your organization to treat cloud storage costs as an ongoing operating expense instead of a capital expense with the associated upfront investments and tax implications.


The costs associated with the hardware, applications, and bandwidth are the responsibility of the supplier. Payment for the service is usually monthly or annual and tied to usage, following the pay-as-you-go model.

Organizations that deploy cloud computing services save more than 35 percent on operating costs each year. Maintaining an in-house IT team big enough to manage local servers can quickly lead to a ballooning budget.

Costs which cloud services can help reduce include: fewer upfront costs, since you are not required to purchase hardware; potentially lower lifetime costs for configuration and maintenance; access to expert assistance on setup, configuration, maintenance, and software licenses; and the potential to consolidate servers and increase workload efficiency.

Server virtualization brings positive transformations, such as reduced hardware costs, improved server provisioning and deployment, better disaster recovery solutions, efficient and economic use of energy, and increased staff productivity.



There are a lot of options available in the market, so you need to dig deeper into Shared vs Cloud vs Dedicated hosting to decide which best suits your needs.
Before taking a final call, ask yourself these questions:
What's my monthly budget?
How much daily and monthly traffic do I expect on my website?
Any expected traffic spikes? If yes, how much?
What are my security requirements?
Do I need a dedicated IP for my website?
Is private SSL required?
Do I need server root access?

Best Cheap Cloud Hosting Providers 2019--

Vultr. Vultr is a very cheap cloud hosting provider which offers plans starting at $2.50 per month. ...
Kamatera – Get 30 Days Free Trial. ...
CloudWays. ...
Host1Plus | Heficed. ...
Digital Ocean. ...
InterServer. ...
A2Hosting [99.99% Uptime] ...
BlueHost [Best for Bloggers]


Now Google is making a terabyte of cloud storage available for just $10. Check out Google Drive's new pricing structure announced last week, which now offers the first 15 GB per month for free. For $100 a month, Google offers as much space as you could ever need: 10 terabytes or more.

The Google Cloud Platform Free Tier gives you free resources to learn about Google Cloud Platform (GCP) services by trying them on your own. ... A 12-month free trial with $300 credit to use with any GCP services. Always Free, which provides limited access to many common GCP resources, free of charge.

Scalability in the context of cloud computing can be defined as the ability to handle growing or diminishing resources to meet business demands in a capable way. In essence, scalability is a planned level of capacity that can grow or shrink as needed.

Public cloud solutions allow you to grow at almost unlimited speed. This would not be possible in a local data center. Because the division of resources between customers is done dynamically, your business can double or even triple the amount of computing and storage to meet peak demand.

The service is hired on demand. This way, you also gain scalability to grow. If there are traffic spikes or the data center reaches its operating limit, simply switch to a more comprehensive plan. Again, it's a cheaper solution than buying mainframes and routers.

Cloud hosting provides scalability at a vast scale as compared to dedicated hosting. This is a treat for users whose customers' demands change on a daily basis. One can scale up or down easily with cloud hosting and meet that flexibility in demand.

Scalability is one of the most valuable and predominant features of cloud computing. Through scalability you can scale up your data storage capacity, or scale it down, to meet the demands of your growing business.

Simply put, if a cloud computing system (i.e. networks, storage, servers, applications and services) can rapidly respond to meet new demands – either in size or volume – it is scalable. Traditionally, businesses were tied to physical constraints, such as hard-drive space and memory, all of which impeded scalability.

One of the main benefits that come with using public cloud services is the ease of scalability—to a point. For small and medium-sized businesses in general, scalability is pay-as-you-go. The resources are pretty much offered 'on demand' so any changes in activity level can be handled very easily. This, in turn, brings with it cost-effectiveness.

Thanks to the pooling of a large number of resources, users benefit from the economies of large-scale operations. Many services, like Google Drive, are offered for free.

Finally, the vast network of servers involved in public cloud services means that it can benefit from greater reliability.

On-premises solutions are rather difficult to scale, as the type of hardware needed depends on your application’s demands. If your app experiences heavy traffic, you might need to significantly upgrade on-premises hardware. This problem doesn’t exist with a cloud service, which you can quickly scale up or down with a few clicks. Cloud services are a perfect solution for handling peak loads. With cloud-based services, businesses can use whatever computing resources they need.

Growth constraints are one of the most severe limitations of on-premise storage. With cloud storage, you can scale up as much as you need. Capacity is virtually unlimited.

Cloud hosting allows you to be more flexible with your tech resources. Cloud hosting companies go to great lengths to make sure that you can easily scale your site’s processing power and disk space as you go. This makes cloud hosting a great option for growing businesses.

The consequences of not having enough computing or storage resources are dire: First come performance issues, then users start getting error messages and then they are locked out of applications.

Unfortunately, some organizations panic and try to solve the problem by buying more and more computer hardware. That can make the problem worse: if demand drops, the hardware goes underutilized and strains the company's capital expenditure budget.

Scalability is the ability to handle growing or diminishing resources to meet business demands. In other words, scalability is a planned level of capacity that can grow or shrink as needed. It means your IT environment is suddenly flexible enough to deliver exactly the right amount of computing power when you need it.

When computer scientists talk about the kind of growth seen in most corporate environments, it’s called exponential growth. That word gets used a lot and may be one of the most misused terms in vogue today. To understand why traditional IT technology cannot be expected to process high volumes of data, it’s important to understand that term.

When faced with an exponentially growing challenge, people often respond with linear solutions. When a law firm that is used to dealing with gigabytes of evidence in litigation starts to get cases involving a terabyte of data, their instinct is to upgrade their technology. That means giving their IT guy a budget to buy more computers.

Unfortunately, computers can be a bottleneck. You can always throw hardware at the problem, but when an application that used to create several gigabytes of data is suddenly creating terabytes of data, a few new computers are not going to address the problem.

Because cloud providers can scale to manage increasing demand, you can have a non-linear solution for your business challenges. A cloud provider like Amazon Web Services hosts trillions of objects at any one time, which means your exponential growth is just a blip to them.

Scalability Testing checks the ability of a network, system or process to continue to function well when its size or volume is changed to meet a growing need. It is a type of non-functional testing.

Scalability testing ensures that an application can handle the projected increase in user traffic, data volume, transaction frequency, etc. It tests the ability of the system, processes, and databases to meet a growing need.

It is closely related to performance testing, as it focuses on the behavior of the application when deployed to a larger system or tested under excess load.

In Software Engineering, Scalability Testing measures the point at which the application stops scaling and identifies the reason behind it.
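
A rough sketch in Python of what such a test loop might look like. The endpoint URL and user-count steps are made up, and the third-party requests library is assumed:--

    # Ramp up concurrent users step by step and watch where response times
    # stop scaling. URL and step sizes are hypothetical; tune for your app.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://example.com/api"  # hypothetical endpoint

    def one_request(_):
        start = time.time()
        requests.get(URL, timeout=10)
        return time.time() - start

    for users in (1, 5, 10, 25, 50):  # increasing concurrent load
        with ThreadPoolExecutor(max_workers=users) as pool:
            latencies = list(pool.map(one_request, range(users)))
        avg = sum(latencies) / len(latencies)
        print(f"{users:3d} users -> avg response {avg:.3f}s")
        # The step where average latency climbs sharply is roughly where
        # the application stops scaling.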

Why do Scalability Testing--
Scalability testing lets you determine how your application scales with increasing workload.
Determine the user limit for the Web application.
Determine client-side degradation and end user experience under load.
Determine server-side robustness and degradation.
What to test in Scalability Testing

Here are a few Scalability Testing Attributes:--
Response Time
Screen transition
Throughput
Time (Session time, reboot time, printing time, transaction time, task execution time)
Performance measurement with a number of users
Requests per second, transactions per second, hits per second
Network Usage
CPU / Memory Usage
Web server (requests and responses per second)
Performance measurement under load
Test Strategy for Scalability Testing
The test strategy for scalability testing differs depending on the type of application being tested. If an application accesses a database, the testing parameters will include the size of the database in relation to the number of users, and so on.

Prerequisites for Scalability Testing-- 
Load Distribution Capability- Check whether the load test tool enables the load to be generated from multiple machines and controlled from a central point.
Operating System- Check which operating systems the load generation agents and the load test master run under
Processor- Check what type of CPU is required for the virtual user agent and load test master
Memory- Check how much memory would be enough for the virtual user agent and load test master

How to do Scalability Testing--
Define a process that is repeatable for executing scalability tests throughout the application life-cycle
Determine the criteria for scalability
Shortlist the software tools required to run the load test
Set the testing environment and configure the hardware required to execute scalability tests
Plan the test scenarios as well as Scalability Tests
Create and verify visual script
Create and verify the load test scenarios
Execute the tests
Evaluate the results
Generate required reports
Scalability Test Plan

Before you actually create the tests, develop a detailed test plan. It is an important step to ensure that the test conforms to the application requirements.

Following are the attributes for creating a well-defined Test Plan for Scalability Testing:--

Steps for Scripts: The test script should have detailed steps that determine the exact actions a user would perform.
Run-Time Data: The test plan should determine any run-time data that is required to interact with the application.
Data Driven Tests: If the scripts need varying data at run-time, you need to have an understanding of all the fields that require this data (a small sketch of this follows below).
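
A minimal data-driven sketch in Python, where the CSV file name and its field names are hypothetical; each row supplies the varying run-time data for one virtual user:--

    # Feed varying run-time data to each virtual user from a CSV file.
    import csv

    def run_virtual_user(username, search_term):
        # Placeholder for the scripted user actions (login, search, etc.).
        print(f"user={username} searches for {search_term!r}")

    with open("test_data.csv", newline="") as f:
        for row in csv.DictReader(f):  # one row of data per virtual user
            run_virtual_user(row["username"], row["search_term"])
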
Virtualization is what makes scalability in cloud computing possible.

Virtual machines (VMs) are scalable. They’re not like physical machines, whose resources are relatively fixed.

You can add any amount of resources to VMs at any time. You can scale them up by:--

Moving them to a server with more resources
Hosting them on multiple servers at once (clustering)
The other reason cloud computing is scalable? Cloud providers already have all the necessary hardware and software in place.
Individual businesses, in contrast, can’t afford to have surplus hardware on standby.

Cloud Scaling Strategies--

There are two ways to scale: vertically or horizontally. When you scale vertically, it’s often called scaling up or down. When you scale horizontally, you are scaling out or in.

Cloud Vertical Scaling refers to adding more CPU, memory, or I/O resources to an existing server, or replacing one server with a more powerful server. Amazon Web Services (AWS) vertical scaling and Microsoft Azure vertical scaling can be accomplished by changing instance sizes, or in a data center by purchasing a new, more powerful appliance and discarding the old one. AWS and Azure cloud services have many different instance sizes, so scaling vertically is possible for everything from EC2 instances to RDS databases.

Cloud Horizontal Scaling refers to provisioning additional servers to meet your needs, often splitting workloads between servers to limit the number of requests any individual server is getting. In a cloud-based environment, this would mean adding additional instances instead of moving to a larger instance size.

In practice, scaling horizontally (or out and in) is usually the best practice. It’s much easier to accomplish without downtime—even in a cloud environment, scaling vertically usually requires making the application unavailable for some amount of time. Horizontal scaling is also easier to manage automatically, and limiting the number of requests any instance gets at one time is good for performance, no matter how large the instance.
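
As a concrete sketch, here is how both styles might look on AWS with the boto3 library (the instance ID and group name are hypothetical; note that a vertical resize requires the instance to be stopped first):--

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # Vertical scaling: swap the instance type for a larger one.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        InstanceType={"Value": "m5.xlarge"},
    )

    # Horizontal scaling: add instances by raising the desired capacity
    # of an Auto Scaling group.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="web-asg",
        DesiredCapacity=6,
        HonorCooldown=True,
    )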

Manual vs Scheduled vs Automatic

There are essentially three ways to scale in a cloud environment: manually, on a schedule, or automatically.
Manual scaling is just as it sounds. It requires an engineer to manage scaling up and out or down and in. 

In the cloud, both vertical and horizontal scaling can be accomplished with the push of a button, so the actual scaling isn’t terribly difficult. However, because it requires a team member’s attention, manual scaling cannot take into account all the minute-by-minute fluctuations in demand seen by a normal application. This also can lead to human error. An individual might forget to scale back down, leading to extra charges.

Scheduled scaling solves some of the problems with manual scaling. Based on your usual demand curve, you can scale out to, for example, 10 instances from 5 pm to 10 pm, back in to two instances from 10 pm to 7 am, and then out to five instances at 7 am. This makes it easier to tailor your provisioning to your actual usage without requiring a team member to make the changes manually every day.
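
On AWS, a schedule like the one above could be expressed as scheduled actions on an Auto Scaling group (group and action names are hypothetical; the cron expressions are in UTC):--

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale out to 10 instances every day at 17:00...
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="evening-peak",
        Recurrence="0 17 * * *",
        DesiredCapacity=10,
    )

    # ...and back in to 2 instances at 22:00.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="overnight",
        Recurrence="0 22 * * *",
        DesiredCapacity=2,
    )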

Automatic scaling (also known as autoscaling) is when your compute, database, and storage resources scale automatically based on predefined rules. For example, when metrics like vCPU, memory, and network utilization rates go above or below a certain threshold, you can scale up, down, out or in. 

Autoscaling makes it possible to ensure your application is always available—and always has enough resources provisioned to prevent performance problems or outages—without paying for far more resources than you are actually using. 
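
The rule logic itself is simple. A toy autoscaler loop, with the metric source stubbed out (a real one would read CloudWatch or a similar service and call the provider's scaling API):--

    import random
    import time

    def current_cpu_percent():
        return random.uniform(10, 90)  # stub metric; replace with a real reading

    desired = 2
    SCALE_OUT_AT, SCALE_IN_AT = 70.0, 30.0
    MIN_INSTANCES, MAX_INSTANCES = 2, 10

    for _ in range(5):  # one evaluation per monitoring interval
        cpu = current_cpu_percent()
        if cpu > SCALE_OUT_AT and desired < MAX_INSTANCES:
            desired += 1  # scale out
        elif cpu < SCALE_IN_AT and desired > MIN_INSTANCES:
            desired -= 1  # scale in
        print(f"cpu={cpu:5.1f}% -> desired instances={desired}")
        time.sleep(1)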

Scaling and Cost Management

Scaling is one of the most important components of cloud cost management. Right-sizing instances, or choosing the correct instance sizes based on your actual application utilization, is one of the easiest ways to reduce cloud costs without affecting performance in any way. 

There are also some cost management strategies, like Reserved Instance (RI) purchases, that take away some of the ability to scale in or down, because you’re committing to using a certain amount and type of resources for one to three years.


When you’re looking for ways to reduce costs, it’s important to understand your current usage patterns and utilization rates to make the best decisions about how to strike a balance between total scaling flexibility and cost management strategies like Reserved Instance purchases.
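
A back-of-the-envelope way to weigh that balance, with purely illustrative prices (not real AWS rates): a reservation only pays off above a certain utilization:--

    on_demand_per_hour = 0.10  # hypothetical on-demand rate
    reserved_per_hour = 0.06   # hypothetical 1-year RI effective rate
    hours_per_year = 8760

    for utilization in (0.3, 0.5, 0.7, 1.0):  # fraction of the year the box runs
        on_demand_cost = on_demand_per_hour * hours_per_year * utilization
        reserved_cost = reserved_per_hour * hours_per_year  # paid even when idle
        better = "RI" if reserved_cost < on_demand_cost else "on-demand"
        print(f"utilization {utilization:4.0%}: on-demand ${on_demand_cost:8.2f}, "
              f"RI ${reserved_cost:8.2f} -> {better} wins")

With these made-up rates the break-even sits at 60 percent utilization, which is exactly why you need to know your usage patterns before committing.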


The most obvious and the biggest advantage of cloud computing is the endless storage capacity, which can be expanded for a very nominal monthly fee.

Unlike traditional storage infrastructure, the cloud offers a more secure ecosystem and significantly brings down the chances of hacking, malware and even threats from viruses.

A great thing about using a cloud computing platform is the unlimited capacity of data storage. Even if your cloud storage platform warns you that limited space is available, you can always upgrade it by paying a little extra fee, whenever the need arises.

Managed cloud hosting is built on a high-performing and private cloud infrastructure. It utilizes redundancy through its several servers, storage protection, and storage area network to deliver dependable failure protection.

If, however, you need to increase your processing power or storage space, you’ll have to upgrade to a more powerful server. Traditional web hosting companies offer limited computing resources and disk space, and once you consume your allocated resources, you’ll either end up with lower performance or having to pay a higher fee for a different hosting plan. And it isn’t just the $$ cost of moving to a more powerful plan, the extra cost can be counted in the time and hassle of having to make that move.

On the other hand, cloud hosting allows you to more easily scale your plan based on your specific needs. You'll get access to multiple servers in the same cloud network, enabling you to easily make use of the computing resources and storage space you need, when you need it. Some cloud hosting providers also let you track your usage and automatically scale resources through an intuitive management portal.

Off-site management: Your cloud provider assumes responsibility for maintaining and protecting the stored data. This frees your staff from tasks associated with storage, such as procurement, installation, administration, and maintenance. As such, your staff can focus on other priorities.


Quick implementation: Using a cloud service accelerates the process of setting up and adding to your storage capabilities. With cloud storage, you can provision the service and start using it within hours or days, depending on how much capacity is involved.
Cloud allows data and files to be accessed from any location with a web connection. This comes in very handy in the latest work cultures where many employees are working remotely. In such cases, having access to work-related files and documents becomes highly essential.

Public cloud has minimal risk of losing data as most of the cloud service providers will have multiple back-up infrastructures.

With your service provider taking care of maintenance and backups, data loss becomes far less likely.

Data loss can spell disaster for a company of any size. Cloud-based storage is much more secure than operating an on-site data center. Organizations that store their data on the premises see 51 percent more security incidents than those that use cloud storage.

Even if one data center were to fail entirely, the network simply redistributes the load among the remaining centers—making it highly unlikely that the public cloud would ever fail. In summary, the benefits of the public cloud are:--
Cost effectiveness
Increased reliability

This alternative is great for startups and other companies that want to maintain a lean structure without investing in equipment or physical space.

Business continuity: Storing data offsite supports business continuity in the event that a natural disaster or terrorist attack cuts access to your premises.

The advantages of cloud solutions are huge, so it stands to reason that the cloud services market is booming. The global public cloud services market is expected to reach almost $247 billion this year and grow to over $383 billion by 2020.

Disadvantages of the public cloud

You can scale up with a public cloud, but you can also run into surprise costs if you need to move large amounts of data in, out, or even within the public cloud.

Performance can be an issue, too. Your data transmission might be affected by spikes in use across the internet. If application performance is a deal breaker for you, then you may be forced to go with a private cloud.


Benefits of a private cloud--
The private cloud offers nearly all the same benefits as the public cloud, but more so. 
Most notably, the private cloud can provide a greater level of security, making it ideal for larger businesses, or businesses that handle HIPAA or financial data. With a private cloud, this can be achieved while still allowing the organization to benefit from cloud computing.

Private cloud services offer additional benefits for business users, including more control over the server. This allows it to be tailored to your own preferences and in-house styles. While this can remove some of the scalability options, private cloud providers often offer what is known as "cloud bursting" (aka hybrid clouds)—when non-sensitive data is switched to a public cloud to free up private cloud space in the event of a significant spike in demand.

In summary, the main benefits of the private cloud are:--
Improved security
Greater control over the server
Flexibility in the form of cloud bursting

Cloud computing enables high-speed data flow over the network, and as a result it enables faster big data processing. IoT generates an enormous amount of data which feeds big data systems, and the cloud reduces the complexity of blending that data, which is one of the criteria for maximizing IoT's benefits. 

Cloud helps in achieving efficiency, accuracy and speed in implementing IoT applications. Cloud helps IoT application development, but IoT is not cloud computing. ... Many cloud service providers have identified this need and started offering IoT-specific services to companies to create better IoT solutions.

Immediate availability. Cloud solutions are available as soon as you’ve paid for them, so you can start using a cloud service right away. There’s no need to install and configure hardware.

Performance. Cloud companies equip their data centers with high-performance computing infrastructure that guarantees low network latency for your applications.

Security. Cloud infrastructure is kept in safe data centers to ensure a top level of security. Data is backed up and can easily be recovered. Moreover, cloud vendors ensure the security of your data by using networking firewalls, encryption, and sophisticated tools for detecting cybercrime and fraud.

Performance. Cloud hosting gives you better overall performance as compared to regular web hosting as it uses many servers to deliver the computing power and storage space your site needs. Its huge pool of computing resources allows you to tap into more resources, as necessary.

Uptime and consistent availability. Cloud hosting is built to be “self-troubleshooting”. It utilizes several servers which are joined together to automatically take over in case one or more servers face issues. In other words, your site’s visitors won’t experience any downtime even if the server your website is hosted on fails.

Disaster recovery. Disaster recovery is simple and easy with cloud hosting, as cloud platforms are designed to automatically take regular data backups. Having several versions of your data all but eliminates the possibility of losing it.


A large majority of IT managers see the implementation of a disaster recovery plan as expensive, complex, and difficult to deploy. But now service providers such as Microsoft Azure provide disaster recovery for all major IT systems without the expense of any secondary infrastructure.

 One of the advantages of storing data in the cloud is that there isn’t one single point of failure. Your data gets backed up to several servers, so if one of them fails, your organization’s information remains safe and secure.

A single point of failure is what led to the infamous Equifax and Verizon data breaches, and many companies have taken steps to avoid this fatal flaw in storage security. Cloud storage is one way organizations can eliminate this danger.

Cloud computing allows multiple employees to view and make changes to files and documents in real time, providing a much more efficient way for workers to collaborate on projects. Accessing documents in the cloud helps ensure everyone is working from the correct version of a document and that obsolete versions don’t get passed between local sources.

A significant portion of maintaining in-house data storage is performing regular backups. The IT team has to take time to create backups and schedule them around daily operations. Cloud computing services go a long way toward automating these routine backups so your team can get back to doing the work that drives your business forward.

Cloud computing can free up your office for more workspace or amenities while eliminating the need to plan for future equipment expansion. With cloud, you do not have to worry about the installation of dedicated breakers, high voltage lines, special HVAC systems or even backup generators.

Gone are the days when companies allocated a hefty budget to installing and maintaining their data centers. Now, they can store an endless amount of data using cloud computing software online.

Why is cloud better than on-premise? Dubbed better than on-premise due to its flexibility, reliability and security, cloud removes the hassle of maintaining and updating systems, allowing you to invest your time, money and resources into fulfilling your core business strategies.

Cloud can be configured to give your business the same features of a dedicated server in a shared environment. Cloud is more reliable as it runs on multiple servers and even if one component fails, services continue from the other servers. It is scalable on demand. Cloud is available via internet and allows the users to access the data from any location. Users pay for the services they use.

For a managed cloud hosting service the provider will generally take care of the following:--

High Availability.
Automated resource balancing. If a server fails, cloud platforms are able to quickly rebalance website loads between hardware servers, which also automatically take backups. This is typically managed at the virtualization level, which can also manage and update software and hardware. (A toy illustration of this idea follows after this list.)

Security Management. Protected firewalls and intrusion detection systems, virtual local area networks (VLAN), and intrusion prevention systems are used by managed cloud servers to offer a high-end security environment.

Hybridization of virtual and physical servers. Apps and database systems are able to share a dedicated network with cloud servers, combining physical and virtual systems in a single environment.
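
The toy illustration promised above: round-robin balancing with failover, where unhealthy servers are skipped and traffic continues on the rest (server names are made up):--

    import itertools

    servers = {"web-1": True, "web-2": True, "web-3": True}  # name -> healthy?
    ring = itertools.cycle(servers)

    def pick_server():
        for _ in range(len(servers)):  # try each server at most once
            name = next(ring)
            if servers[name]:
                return name
        raise RuntimeError("no healthy servers")

    servers["web-2"] = False  # simulate a server failure
    print([pick_server() for _ in range(6)])  # web-2 never appears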


As you might expect, managed cloud hosting is more expensive than unmanaged hosting as you are paying for more comprehensive support from the hosting company.

Yes, cloud hosting is more readily scalable than traditional web hosting. Easy scaling is one of the main benefits of the cloud.

With traditional web hosting, particularly shared hosting, hosting servers are allocated a fixed amount of computing resources. All of the websites that are hosted on the same server use those resources.


Cloud computing is entirely dependent on the internet. You cannot access or store data in the cloud without an internet connection.

If your Internet connection goes down, you won’t have access to data stored in the cloud for the duration of the outage.

A big cloud risk is that the vendor can go down as well. Anything from bad weather, DDoS attacks, or a good ol’ system failure can knock the service unresponsive.

99% uptime means 1% downtime. Over the course of 365 days, that’s 3.65 days the service will be down. That’s equal to 87.6 hours.

But when do those hours occur? Late at night? During the day?

If those 87 hours were to occur during business hours, then that’s equivalent to 10 days of downtime.

Can your client live without this service for 10 business days?

And remember: That’s just for the cloud service. The client’s internet connection will also experience downtime. If you again assume 99% uptime and 1% downtime, then that’s as much as 20 business days that your client will not be able to reach the cloud service.

Can your client live without the service for 20 days?
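
The arithmetic behind those figures, treating the cloud service and the internet connection as two independent 99%-available systems:--

    HOURS_PER_YEAR = 365 * 24

    def downtime_hours(uptime_pct):
        return HOURS_PER_YEAR * (1 - uptime_pct / 100)

    service = downtime_hours(99.0)
    print(f"99% uptime = {service:.1f} hours down per year "
          f"({service / 8:.1f} eight-hour business days)")

    # Independent failures multiply: combined availability drops below 99%.
    combined = 0.99 * 0.99
    print(f"combined availability {combined:.4f} -> "
          f"{downtime_hours(combined * 100):.1f} hours down per year")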

A breach of your data or your client’s data can be devastating depending on the type of data and the extent of the breach.

The costs of investigating and resolving a breach, the associated legal expenses, and the losses to a company's reputation can be enough to shut its doors.

The security of the data stored on the cloud is something that everybody is concerned about. No matter what package you choose from a cloud computing platform, you can't escape the reality that you will be placing your sensitive data and information in the hands of a third-party service.

KOSHER BIG BROTHER KNOWS EVERYTHING..  ALL BIG PLAYERS ARE AGENTS OF THE JEWISH  DEEP STATE..

Cloud storage cons include the following:

Security: Security concerns are common with cloud-based services. Cloud storage providers try to secure their infrastructure with up-to-date technologies and practices, but occasional breaches have occurred, creating discomfort with users.

Administrative control: Being able to view your data, access it, and move it at will is another common concern with cloud resources. Offloading maintenance and management to a third party offers advantages but also can limit your control over your data.

Latency: Delays in data transmission to and from the cloud can occur as a result of traffic congestion, especially when you use shared public internet connections. However, companies can minimize latency by increasing connection bandwidth.

Regulatory compliance: Certain industries, such as healthcare and finance, have to comply with strict data privacy and archival regulations, which may prevent companies from using cloud storage for certain types of files, such as medical and investment records. If you can, choose a cloud storage provider that supports compliance with any industry regulations impacting your business.

There are downsides to using public cloud services. At the top of the list is the fact that the security of data held within a public cloud is a cause for concern. It is often seen as an advantage that the public cloud has no geographical restrictions, making access easy from wherever you are. But on the flip side, this could mean that your server is in a different country which is governed by an entirely different set of security and/or privacy regulations. 

One simply can't ignore the vulnerabilities of trusting someone else with your company's sensitive data and information. Storing data on the cloud is akin to exposing your company's important data and information to potential threats over the internet. Therefore, to keep such information from becoming a hacker's playground, always remember to keep your passwords discreet. Plus, once or twice a month, don't forget to update your cloud passwords.

EXTRA SENSITIVE DATA , DON’T EVEN COUNT PASSWORDS TO PROTECT..  IT IS A PIECE OF CAKE TO HACK PASSWORDS

Cloud computing refers to the delivery of computing services over the internet—e.g., software applications, networks, storage, servers... You have no control and visibility over how the data center is managed and how and where your data is stored.




Data breaches may involve personal health information (PHI), personally identifiable information (PII), trade secrets or intellectual property. ... If anyone who is not specifically authorized to do so views such data, the organization charged with protecting that information is said to have suffered a data breach.

Data Breaches Expose 4.1 Billion Records In First Six Months Of 2019. According to Risk Based Security research newly published in the 2019 MidYear QuickView Data Breach Report, the first six months of 2019 have seen more than 3,800 publicly disclosed breaches exposing an incredible 4.1 billion compromised records.

Google to shut down Google+ early due to bug that leaked data of 52.5 million users. ... This marks the second data breach for Google's disappointing social network. The first occurred in October, when up to 500,000 users' data was compromised. Shortly after, Google announced it would shut down the platform in 2019

In September of 2017, Equifax announced a data breach that exposed the personal information of 147 million people. Under a settlement filed today, Equifax agreed to spend up to $425 million to help people affected by the data breach.

The financial impact of unplanned downtime cannot be overstated. For every minute of unplanned downtime due to a data center outage, a company loses $5,600 on average. That's $300,000 in just one hour. While employees might enjoy the extra time spent in the break room, the productivity lost during that time is money you won't get back. Unplanned downtime can also heavily damage a company's reputation if it affects customers.

A data breach is a confirmed incident in which sensitive, confidential or otherwise protected data has been accessed and/or disclosed in an unauthorized fashion. As hackers demonstrated through the celebrity iCloud breach, poor password security can give cybercriminals an all-access pass to your private data. ... However, the biggest cause of concern for cloud storage isn't hacked data, it's lost data.

One of the most famous examples of failure to protect against this personnel security risk is Edward Snowden and his exposure of the U.S. National Security Agency's surveillance program, PRISM.

Besides storage of large-scale data, the cloud computing environment usually provides data processing services. ... Cloud computing providers are trusted to maintain data integrity and accuracy.

A data breach occurs when a cybercriminal successfully infiltrates a data source and extracts sensitive information. This can be done physically, by accessing a computer or network to steal local files, or by bypassing network security remotely.


Sometimes, hackers want to steal your data so that they can hold it for ransom. This type of attack is a ransomware attack. ... Hackers usually execute ransomware attacks by gaining unauthorized access to data, then encrypting it or moving it and charging a ransom in order to restore your access to it.




Be sure to use the appropriate security methods, such as Intrusion Detection and Prevention (IDPS) systems.

An Intrusion Prevention System or IPS, also known as an Intrusion Detection and Prevention System or IDPS, is a network security appliance that monitors network and system activities and detects possible intrusions.



Intrusion detection is the process of monitoring the events occurring in your network and analyzing them for signs of possible incidents, violations, or imminent threats to your security policies. 

Intrusion prevention is the process of performing intrusion detection and then stopping the detected incidents. These security measures are available as intrusion detection systems (IDS) and intrusion prevention systems (IPS), which become part of your network to detect and stop potential incidents.
Intrusion detection systems (IDS) and intrusion prevention systems (IPS) constantly watch your network, identifying possible incidents and logging information about them, stopping the incidents, and reporting them to security administrators.

 In addition, some networks use IDS/IPS for identifying problems with security policies and deterring individuals from violating security policies. IDS/IPS have become a necessary addition to the security infrastructure of most organizations, precisely because they can stop attackers while they are gathering information about your network.

Three IDS detection methodologies are typically used to detect incidents:--

Signature-Based Detection compares signatures against observed events to identify possible incidents. This is the simplest detection method because it compares only the current unit of activity (such as a packet or a log entry) to a list of signatures using string comparison operations.

Anomaly-Based Detection compares definitions of what is considered normal activity with observed events in order to identify significant deviations. This detection method can be very effective at spotting previously unknown threats.

Stateful Protocol Analysis compares predetermined profiles of generally accepted definitions for benign protocol activity for each protocol state against observed events in order to identify deviations.
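
Signature matching is easy to picture in code. A toy example with made-up signatures, string-matching each log entry exactly as described above:--

    SIGNATURES = [
        "/etc/passwd",      # path traversal attempt
        "' OR '1'='1",      # classic SQL injection probe
        "<script>",         # reflected XSS probe
    ]

    def check_entry(log_entry):
        return [sig for sig in SIGNATURES if sig in log_entry]

    for entry in (
        "GET /index.html HTTP/1.1",
        "GET /../../etc/passwd HTTP/1.1",
    ):
        hits = check_entry(entry)
        print(f"{entry!r} -> {'ALERT ' + str(hits) if hits else 'ok'}")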

What is difference IDS and IPS?
The main difference between them is that IDS is a monitoring system, while IPS is a control system. IDS doesn't alter the network packets in any way, whereas IPS prevents the packet from delivery based on the contents of the packet, much like how a firewall prevents traffic by IP address.

The main difference is that an IDS only monitors traffic. If an attack is detected, the IDS reports the attack, but it is then up to the administrator to take action. That's why having both an IDS and an IPS is critical. A good security strategy is to have them work together as a team.

An intrusion detection system (IDS) is a device or software application that monitors a network or systems for malicious activity or policy violations. Any malicious activity or violation is typically reported either to an administrator or collected centrally using a security information and event management (SIEM) system. A SIEM system combines outputs from multiple sources and uses alarm filtering techniques to distinguish malicious activity from false alarms

The IPS sits between your firewall and the rest of your network, so it can stop suspected traffic from getting to the rest of the network. The IPS monitors inbound packets and what they are really being used for before deciding to let the packets into the network.

An intrusion detection system (IDS) differs from a firewall in that a firewall looks outwardly for intrusions in order to stop them from happening. Firewalls limit access between networks to prevent intrusion and do not signal an attack from inside the network.

Intrusion detection systems and intrusion prevention systems are both important parts of network integrity. Sometimes the systems overlap, and sometimes they're combined or referred to together as IDPS. Although IPS is becoming a more dominant security method, it's important to be familiar with both.

An IDS monitors your network for possible dangerous activity, including malicious acts and violations of security protocols. When such a problem is detected, an IDS alerts the administrator but doesn’t necessarily take any other action. There are several types of IDS and several methods of detection employed.

Network Intrusion Detection System (NIDS): A network intrusion detection system (NIDS) monitors packets moving into and out of a network or a subset of a network. It could monitor all traffic, or just a selection, to catch security threats. A NIDS compares potential intrusions to known abnormal or harmful behavior. This option is preferred for enterprises, as it provides much broader coverage than host-based systems.

Host Intrusion Detection System (HIDS): A host intrusion detection system lives on and monitors a single host (such as a computer or device). It might monitor traffic, but it also monitors the activity of clients on that computer. For example, a HIDS might alert the administrator if a video game is accessing private files it shouldn't be accessing and that have nothing to do with its operations. When a HIDS detects changes in the host, it compares them to an established checksum and alerts the administrator if there's a problem.
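
That checksum comparison, in miniature (the monitored path and the way the baseline is stored are placeholders; real HIDS tools track many files and keep the baseline somewhere tamper-proof):--

    import hashlib

    def sha256_of(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    baseline = {"/etc/hosts": sha256_of("/etc/hosts")}  # recorded while trusted

    # Later, on each scan:
    for path, known in baseline.items():
        if sha256_of(path) != known:
            print(f"ALERT: {path} has been modified")
        else:
            print(f"{path}: unchanged")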

HIDS can work in conjunction with NIDS, providing extra coverage for sensitive workstations and catching anything NIDS doesn’t catch. Malicious programs might be able to sneak past a NIDS, but their behavior will be caught by a HIDS.

Types of Intrusion Detection Systems
There are two primary types of intrusion detection systems you should be aware of to ensure you’re catching all threats on your network. Signature-based IDS is more traditional and potentially familiar, while anomaly-based IDS leverages machine learning capabilities. Both have their benefits and limitations:

Signature-based: Signature-based IDS relies on a preprogrammed list of known attack behaviors. These behaviors will trigger the alert. These “signatures” can include subject lines and attachments on emails known to carry viruses, remote logins in violation of organizational policy, and certain byte sequences. It is similar to antivirus software (the term “signature-based” originates with antivirus software).

Signature-based IDS is popular and effective but is only as good as its database of known signatures. This makes it vulnerable to new attacks. Plus, attackers can and do frequently disguise their attacks to avoid common signatures that will be detected. Also, the most thorough signature-based IDS will have huge databases to check against, meaning big bandwidth demands on your system.

Anomaly-based: Anomaly-based IDS begins with a model of normal behavior on the network, then alerts an admin anytime it detects any deviation from that model of normal behavior. Anomaly-based IDS begins at installation with a training phase where it “learns” normal behavior. AI and machine learning have been very effective in this phase of anomaly-based systems.

Anomaly-based systems are typically more useful than signature-based ones because they’re better at detecting new and unrecognized attacks. However, they can set off many false positives, since they don’t always distinguish well between attacks and benign anomalous behavior.
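
A toy version of the idea: learn a baseline of requests per minute during a training phase, then flag observations far outside it. The numbers are made up, and the threshold k is exactly the knob that trades detection against false positives:--

    import statistics

    training = [98, 105, 110, 95, 102, 99, 107, 101]  # "normal" traffic
    mean = statistics.mean(training)
    stdev = statistics.stdev(training)

    def is_anomaly(observed, k=3):
        return abs(observed - mean) > k * stdev

    for rpm in (104, 120, 480):
        print(f"{rpm} req/min -> {'ANOMALY' if is_anomaly(rpm) else 'normal'}")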

Some experts consider intrusion prevention systems to be a subset of intrusion detection. Indeed, all intrusion prevention begins with intrusion detection. But security systems can go one step further and act to stop ongoing and future attacks. When an IPS detects an attack, it can reject data packets, give commands to a firewall, and even sever a connection.

IDS and IPS are similar in how they’re implemented and operate. IPS can also be network- or host-based and can operate on a signature or anomaly basis.

Types of Intrusion Prevention Systems
A robust IT security strategy should include an intrusion prevention system able to automate many necessary security responses. When risks occur, a prevention tool may be able to act quickly to thoroughly shut down the damage and protect the overall network.

Network-based Intrusion Prevention Systems (NIPS): As the name suggests, a NIPS covers all events on your network. Its detection is signature-based.

Network Behavior Analysis (NBA): NBA is similar to NIPS in that it provides network-wide coverage. But unlike NIPS, NBA operates on anomalies. Like anomaly-based IDS, NBA requires a training phase where it learns the network’s baseline norm.

NBA also uses a method called stateful protocol analysis. Here, the baseline norm is pre-programmed by the vendor, rather than learned during the training phase. But in both cases, the IPS is looking for deviations rather than signatures.

Wireless-based Prevention Systems (WIPS): Protecting your wireless network brings its own unique challenges. Enter WIPS. Most WIPS have two components: overlay monitoring (devices installed near access points to monitor the radio frequencies) and integrated monitoring (IPS using the APs themselves). 

Combining these two, which is very common, is known as “hybrid monitoring.”

Host-based Intrusion Prevention Systems (HIPS): HIPS live on and protect a single host, providing granular coverage. They are best used in conjunction with a network-wide IPS.



Differences Between IDS and IPS
There are several differences between these two types of systems. IDS only issues alerts for potential attacks, while IPS can take action against them. Also, IDS is not inline, so traffic doesn’t have to flow through it. Traffic does, however, have to flow through your IPS. In addition, false positives for IDS will only cause alerts, while false positives for IPS could cause the loss of important data or functions.



A distributed denial-of-service (DDoS) attack occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. Such an attack is often the result of multiple compromised systems (for example, a botnet) flooding the targeted system with traffic



In computing, a denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled.



In a distributed denial-of-service attack (DDoS attack), the incoming traffic flooding the victim originates from many different sources. This effectively makes it impossible to stop the attack simply by blocking a single source.
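
A simulated illustration of why per-source blocking fails here: a naive per-IP rate limiter stops one flooding client cold, but thousands of distributed bots that each stay under the threshold sail through:--

    from collections import Counter

    LIMIT_PER_IP = 100
    counts = Counter()

    def allow(ip):
        counts[ip] += 1
        return counts[ip] <= LIMIT_PER_IP

    # One DoS source: 10,000 requests from a single IP -- mostly blocked.
    dos_allowed = sum(allow("203.0.113.7") for _ in range(10_000))

    # DDoS: 10,000 bots sending 50 requests each -- all under the limit.
    counts.clear()
    ddos_allowed = sum(allow(f"bot-{i}") for i in range(10_000) for _ in range(50))

    print(f"single-source flood: {dos_allowed} of 10,000 got through")
    print(f"distributed flood: {ddos_allowed} of 500,000 got through")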

A DoS or DDoS attack is analogous to a group of people crowding the entry door of a shop, making it hard for legitimate customers to enter, thus disrupting trade.

Criminal perpetrators of DoS attacks often target sites or services hosted on high-profile web servers such as banks or credit card payment gateways. Revenge, blackmail and activism can motivate these attacks.



DDoS is a type of DoS attack in which multiple compromised systems, often infected with a Trojan, are used to target a single system, causing a denial of service.

More than one third of US businesses experience DDoS attacks. The most common cyberattacks were malware (53 percent) and viruses (51 percent). A distributed denial of service attack (aka DDoS) is very easy to mount, and is in fact widely considered one of the easiest blackhat activities to do. 



For a DDoS attack to be successful, an attacker will spread malicious software to vulnerable computers, mainly through infected emails and attachments. Layer 7 attacks focus specifically on application-layer protocols such as HTTP, SNMP, FTP, etc. Layer 7 attacks require far less bandwidth and fewer packets than network-layer attacks to disrupt services.

DDoS attacks cannot steal website visitors' information. The sole purpose of a DDoS attack is to overload the website's resources. However, DDoS attacks can be used as a tool for extortion and blackmail. For example, website owners can be asked to pay a ransom for attackers to stop a DDoS attack.


DDoS attacks can last as long as 24 hours, and good communication can ensure that the cost to your business is minimized while you remain under attack.


In DDoS attacks, threat actors use different methods to try and amplify the volume of attack traffic generated by a compromised system. The goal is to try and turn small queries and packets into much larger payloads that can then be used to flood a target network

Once the attack is over, try to analyze it in as much detail as possible.

Some of the key questions to ask include:---

What assets were attacked? Was it targeted at your entire network, or did it target specific servers or services?
What were the attack characteristics? Was it a single sustained flood, or did it employ sophisticated attack methods such as multi-vector attacks, dynamic IP spoofing, or burst attacks?
What attack protocols and patterns were used?
What was the peak amount of network traffic, both in terms of data (bits per second) and requests (connections per second)?
Did the attack impact the network layer, or also the application layer?
Did the attack include encrypted traffic or protocols?
How long did the attack last?

Getting this information will help you get a full picture of what happened.

Apart from analyzing the attack itself, you need to understand how it impacted you.

This is a key step in understanding your internal “cost” of a DDoS attack, and as a result – how much you may be willing to spend in the future to prevent this from happening again. 


Some of the key questions to ask include:---

Was the attack stopped, or did it get through (either entirely, or in part)?
Which services were impacted, to what extent, and for how long?
What were the direct monetary damages (i.e., in lost revenue, lost productivity time, etc.)?
Were there any indirect damages, such as bad press, damage to reputation, customer complaints, etc.?
Did users experience any impact as a result of the attack, either as a result of the attack itself, or as a result of defensive measures (false positives)?
Identify Weak Spots--

The next step after identifying damages is to identify any weak spots in your defense – that is, why was attack traffic able to get through?

Did any attack traffic get through? If so, how much?
Were there any specific attack vectors that were more successful than others? In particular, were there some patterns that were stopped, while others were able to get through?
Were there any targeted resources that were impacted more than others? For example, were there some resources (networks, servers, applications, etc.) that were able to fend off the attack, while others were impacted?
Did legitimate users experience any false positives? What was the ratio of legitimate traffic to malicious traffic that was stopped (or allowed to go through)?

By identifying weak spots, you should try to understand not only what resources were impacted, but also why they were impacted. Was there a particular type of attack that was able to get through, or – conversely – were there specific services that were impacted while others were not?

Another key element to look at is false positives. If your protections are deployed too broadly, this can lead to false-positives which prevent legitimate users from accessing services. Even though not a result of the attack itself, for end customers the experience is the same…

Identifying weak spots in your armor helps you to address them in the next steps.--

Verify Security Vendor SLA
If you have a pre-existing DDoS mitigation service in place, now is the time to check that they met their SLA commitments.
A high-grade DDoS protection service should provide you with technology, capacity and service guarantees to ensure full protection against any type of DDoS threat.

Look at the results of your analysis, based on the points above, and ask yourself the following questions:--

Did my defenses stop the attack?
Was all attack traffic stopped, or did some of it get through?
Were my users able to escape the impact of the attack (either directly, or as false-positives)?
Did my security vendor provide me with all the relevant service guarantees, and was it able to meet them?

If the answers to those questions are yes, then great – you are well protected. But if the answer to one (or more) of these questions is no, then maybe you should start looking at alternatives.

Recommended actions to reduce the risk of denial of service:
Check whether your CSP is capable of scaling up bandwidth to withstand DDoS attacks. Also, ask whether they have scrubbing centers to cleanse and filter malicious traffic.
Scrubbing Center. A centralized data-cleansing station where traffic is analyzed and malicious traffic (DDoS, known vulnerabilities and exploits) is removed. Scrubbing centers filter out malicious traffic before it can reach the applications and data centers of customers.


Employees often share passwords for cloud accounts, which increases the risk for data breaches and data losses. According to one report, an average employee will share six passwords with their co-workers. Fifty-four percent of small and midsize businesses see negligent employees as the root cause of data breaches.



The cloud access security broker (CASB) market is forecast to reach 870 million U.S. dollars in size worldwide in 2019. A CASB is a software or service that safeguards the gateway between a company's on-premise IT infrastructure and a cloud provider's infrastructure.

A CASB is a third-party software integration that works across all the major software products used in an organization. The CASB software tracks and analyzes all information sent and received by each tool. This enables a granularity of insight akin to that which was possible when all information sent and received by a company could be tracked on its own servers.

Today, CASBs are used by IT teams to achieve greater visibility, compliance, threat protection, and data security.

CASBs illuminate which shadow IT cloud services are being used in an organization and enable visibility into user activity of sanctioned cloud applications. CASBs give companies a 360-degree view of all the cloud services they use – and let them tailor secure access to each. This comprehensive view also allows an organization to see whether and where their various services overlap in functionality, creating an opportunity to reduce costs.

CASBs can be configured to block specified services, devices, or users – and even post-login behaviors and signals – through adaptive access controls. For example, an employee who either wittingly or unwittingly attempts to upload an infected file can be prevented from doing so, in real time. This allows organizations to detect and respond to negligent or malicious insider threats, privileged user threats, and compromised accounts.

Data security

CASBs also allow IT teams to enforce data-centric security practices such as secure collaboration in cloud services, access control, and information rights management. Controls may be customized based on a variety of factors, such as data classification or user activity. Utilizing advanced data loss prevention (DLP) methods (like document fingerprinting, among others), CASBs can notify IT when sensitive data is being transmitted, allowing them to take further actions as necessary.
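
Document fingerprinting can be pictured as hashing overlapping word "shingles" of a sensitive document and measuring how many an outbound message shares. A toy sketch (real CASB/DLP fingerprinting is far more robust than this):--

    import hashlib

    def shingles(text, k=5):
        words = text.lower().split()
        return {
            hashlib.sha1(" ".join(words[i:i + k]).encode()).hexdigest()
            for i in range(max(1, len(words) - k + 1))
        }

    secret = "the q3 acquisition target is acme corp at twelve dollars per share"
    outbound = "fyi the acquisition target is acme corp at twelve dollars a share"

    overlap = len(shingles(secret) & shingles(outbound)) / len(shingles(secret))
    print(f"fingerprint overlap: {overlap:.0%}")  # above a threshold -> alert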









CASB products were designed to provide visibility for Shadow IT and limit employee access to unauthorized cloud services. Today, organizations have embraced the cloud, replacing many of their datacenter applications with Software as a Service (SaaS) or moving much of their IT into infrastructure (IaaS) providers like Amazon or Azure. 

Instead of limiting access, CASBs have evolved to protect cloud-hosted data and provide enterprise-class security controls so that organizations can incorporate SaaS and IaaS into their existing security architecture.

CASBs provide four primary security services: Visibility, Data Security, Threat Protection, and Compliance.

A CASB identifies all the cloud services (both sanctioned and unsanctioned) used by an organization's employees. Originally, this only included the services they would use directly from their computer or mobile device, often called "Shadow IT." Today, it is possible for an employee to connect an unsanctioned SaaS directly to an approved SaaS via API. This "Shadow SaaS" requires more advanced visibility tools.

Shadow IT Monitoring: Your CASB must connect to your cloud to monitor all outbound traffic for unapproved SaaS applications and capture real-time web activity. Since nearly all SaaS applications send your users email notifications, your CASB should also scan every inbox for rogue SaaS communication to identify unapproved accounts on approved cloud services.

 Shadow SaaS Monitoring: Your CASB must connect to your approved SaaS and IaaS providers to monitor third party SaaS applications that users might connect to their account. It should identify both the service as well as the level of access the user has provided.

Risk Reporting: A CASB should assess the risk level for each Shadow IT/Shadow SaaS connection, including the level of access each service might request (i.e. read-only access to a calendar might be appropriate, read-write access to email might not.) This allows you to make informed decisions and prioritize the applications that need immediate attention.

Event Monitoring: Your CASB should provide information about real-time and historical events in all of your organization’s SaaS applications. If you do not know how the applications are being used, you cannot properly control them or properly assess the threats facing your organization.

CASB (Cloud Access Security Broker) solutions may fill many of the security gaps addressed in this article and deliver a holistic view of the entire cloud environment, which enables organizations to effectively manage cloud security risks while capitalizing on the benefits offered by cloud computing.

While CASB solutions have been available now for the past 5 years, the market is still immature. Each product on the market has its own strengths and weaknesses and covers different threats. Therefore, it is important to understand your cloud services use cases when it comes to choosing a CASB solution. 

Assessing the impact on the existing infrastructure is also required before designing and choosing the best-suited solution.

Conduct risk assessments before migrating to cloud services and use cloud access security brokers (CASBs) as an additional security layer.

A cloud access security broker (CASB) is on-premises or cloud-based software that sits between cloud service users and cloud applications, monitoring all activity and enforcing security policies.


A CASB can offer a variety of services, including but not limited to monitoring user activity, warning administrators about potentially hazardous actions, enforcing security policy compliance, and automatically preventing malware.

A cloud access security broker, or CASB, is a type of software developed to help IT departments monitor cloud access and usage by employees and partners. CASBs are deployed to help ensure the company's cloud services are used securely and properly.

It is a software tool or service that sits between an organization's on-premises infrastructure and a cloud provider's infrastructure. A CASB acts as a gatekeeper, allowing the organization to extend the reach of their security policies beyond their own infrastructure.

CASBs typically offer the following:--

Firewalls to identify malware and prevent it from entering the enterprise network.
Authentication to check users' credentials and ensure they only access appropriate company resources.
Web application firewalls (WAFs) to thwart malware designed to breach security at the application level, rather than at the network level.
Data loss prevention (DLP) to ensure that users cannot transmit sensitive information outside of the corporation.

CASBs work by ensuring that network traffic between on-premises devices and the cloud provider complies with the organization's security policies. The value of cloud access security brokers stems from their ability to give insight into cloud application use across cloud platforms and identify unsanctioned use. This is especially important in regulated industries.

CASBs use auto-discovery to identify cloud applications in use and identify high-risk applications, high-risk users and other key risk factors. Cloud access brokers may enforce a number of different security access controls, including encryption and device profiling. They may also provide other services such as credential mapping when single sign-on is not available.

Microsoft includes CASB functionality in its base Azure security services at no extra charge. To meet the needs of IaaS and PaaS users, CASB vendors have added or expanded functionality for security tasks, such as the following:---

Single sign-on (SSO) -- allows an employee to enter their credentials one time and access a number of applications.
Encryption -- encrypts information from the moment it's created until it's sitting at rest in the cloud.
Compliance reporting tools -- ensure that the company's security systems comply with corporate policies and government regulations.
User behavior analytics -- identifies aberrant behavior indicative of an attack or data breach.
The ability of a CASB to address gaps in security extends across software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) environments. In addition to providing visibility, a CASB also allows organizations to extend the reach of their security policies from their existing on-premises infrastructure to the cloud and create new policies for cloud-specific context.

The CASB serves as a policy enforcement center, consolidating multiple types of security policy enforcement and applying them to everything your business utilizes in the cloud—regardless of what sort of device is attempting to access it, including unmanaged smartphones, IoT devices, or personal laptops.


With the increase in workforce mobility, the growth in BYOD and the presence of unsanctioned employee cloud usage, or Shadow IT, the ability to monitor and govern the usage of cloud applications has become essential to the goal of enterprise security. 

Rather than banning cloud services outright and potentially impacting employee productivity, a CASB enables businesses to take a granular approach to data protection and the enforcement of policies—making it possible to safely utilize time-saving, productivity-enhancing, and cost-effective cloud services.







Shadow IT is a term that refers to Information Technology (IT) applications and infrastructure that are managed and utilized without the knowledge of the enterprise's IT department.

Shadow IT, also known as Stealth IT or Client IT, refers to information technology (IT) systems built and used within organizations without explicit organizational approval – for example, systems specified and deployed by departments other than the IT department.

Shadow IT is the use of information technology systems, devices, software, applications, and services outside the scope and supervision of an organization's approved IT system. Examples include employee use of USB flash drives or other personal data storage devices.

Shadow IT can present itself in many ways – for example, staff sharing files among themselves, suppliers, and customers, often with a cloud file store such as OneDrive, Dropbox, or Google Drive.

Many people consider shadow IT an important source of innovation, and such systems may become prototypes for future approved IT solutions. On the other hand, shadow IT solutions are often not in line with organizational requirements for control, documentation, security, reliability, etc.

One of the main causes of increased cloud-computing risks is the tendency of employees and managers to bypass the IT team and download third-party applications. The growth in the number of SaaS applications that help you carry out random tasks—convert JPEG files into PDF, record and edit video files, instant messaging, etc.—results in employees signing up and using these programs without taking the necessary precautions.

Often, sensitive files are uploaded onto unknown cloud servers to simply convert them into different file types. Though these go undetected most of the time, a single instance of confidential company information surfacing in the public or landing in the hands of competition can damage business reputation forever. 

The explosion of cloud and as-a-service (remote) technologies has made it easy for employees to buy and use their own applications without the help of IT

Shadow IT can introduce security risks when unsupported hardware and software are not subject to the same security measures that are applied to supported technologies. Furthermore, technologies that operate without the IT department’s knowledge can negatively affect the user experience of other employees by impacting bandwidth and creating situations in which network or software application protocols conflict. 

Shadow IT can also become a compliance concern when, for example, an employee stores corporate data in their personal Dropbox account.

Feelings toward shadow IT are mixed; some IT administrators fear that if shadow IT is allowed, end users will create data silos and prevent information from flowing freely throughout the organization. Other administrators believe that in a fast-changing business world, the IT department must embrace shadow IT for the innovation it supplies and create policies for overseeing and monitoring its acceptable use.


Popular shadow technologies include personal smartphones, tablets and USB thumb drives. Popular shadow apps include Google Docs, instant messaging services and Skype.




People generally don’t start shadow IT projects because they disrespect the IT department or want to cause trouble. We live in a digital world. As a result, more non-technical workers are becoming tech-savvy through osmosis. They pick up new tech trends here, read about digital transformation there. They dream up all the ways that new software and hardware can make their jobs easier and more productive.

Often, this tinkering moves beyond daydreams and into the office where people outside of the IT team make moves to improve their work via technology. Think marketers researching and purchasing marketing automation software or hoteliers downloading project workforce management software to communicate with staff.

And while these motions might be in conflict with their IT department’s road map, the intentions behind shadow IT come from an earnest, entrepreneurial place. People who do the work know the work best. The non-technical workers pushing for their own tech solutions have unique insights into their job's pain points, and can quickly recognize a solution’s ability to tackle problems.

Cause for concern

However, good intentions should not minimize an IT department’s worries around shadow IT. Information technology departments are subject matter experts as well. Their concerns around unofficial technology programs are directly related to the department's areas of expertise, including:

Compliance. IT needs to ensure that solutions are compliant with industry standards. Are teams following license agreements? Are users correctly installing and using encrypted devices?  

Security. This is a no-brainer for IT teams. Job one is to make sure that all tech used within their company is secure and private so that proprietary information remains internal.

Reliability. IT makes sure that systems are suited for the long haul. Tools need to stay live, functional, and updated. They need to scale for the enterprise.

Compatibility. Similarly, enterprise companies often rely on various systems. These large companies have powerful and diverse IT stacks. There’s a danger that shadow projects will not be compatible with legacy software and tools.

Redundancies. When tech projects happen below the board, communication tends to break down. Time and money are wasted when departments duplicate work on similar – or even identical – solutions.

 IT departments can empower non-technical workers to explore and iterate on new tech solutions – all while staying within proper IT-defined guardrails. This arrangement will inspire company-wide agility while maintaining centralized, IT-sanctioned approaches to business technology.

One way to implement this change is to embed technical experts within non-technical teams. For instance, many marketing teams now have a MarTech division led by IT folks who also have marketing acumen.

No-code platforms represent another opportunity, enabling non-technical employees to develop workplace applications without writing code. Using this approach, workers of all stripes can build applications uniquely suited to their needs.

Meanwhile, IT remains the centralized point of control for this no-code technology, setting guardrails on which teams and individuals can create. In other words, security and governance remain safely in the hands of IT experts.


Security risks at the vendor
When a cloud service vendor supplies a critical service for your business and stores critical data – such as customer payment data and your mailing lists – you place the life of your business in the vendor’s hands.

Ask yourself – how clean are those hands?

Many small businesses know almost nothing about the people and technology behind the cloud services they use.

They rarely consider:--

The character of the vendor’s employees
The security of the vendor’s technology
The secret access the vendor has to their data

Your reputation no longer depends on the integrity of only your business – now it also depends on the integrity of the vendor’s business. And that’s a cloud computing risk.  So you are transferring the responsibility of protecting the data to a third party, but you are still liable if that party fails to live up to the task.

This is one of the many risks in cloud computing. Even if a vendor has your best interests at heart, your interests will always be secondary to theirs.

However, when you use a cloud service provider, the vendor is in control. You have no guarantee that the features you use today will be provided for the same price tomorrow. The vendor can double its price, and if your clients are depending on that service, then you might be forced to pay.

What happens if you are not able to make payment?

If you get behind on your bill, then you may be surprised to find your data is held hostage by the vendor. You cannot access the service and export your data until you pay up.

When you host a service locally, the data and level of service is always in your control. You can confidently assure your clients that their systems and data are safe because they are always within your reach.

Remember: you have many ways to protect your data when it is in your control. However, once it’s in the hands of a cloud service provider, you have ceded control to an entity over which you have no oversight.

When you rely on a cloud service for a business-critical task, then you are putting the viability of your business in the hands of two services: the cloud vendor and your ISP.

One of the disadvantages of cloud computing can come in the form of vendor mismatches. Organizations might run into complications when migrating services to a different vendor with a different platform. If this process isn’t handled correctly, data can be exposed to unnecessary vulnerabilities. A good cloud services provider has the expertise to migrate your data between vendors safely.

Malicious insiders could have a wide-ranging impact on the confidentiality, integrity, and availability of enterprise data. With the use of cloud apps, the risk increases, as insiders can use their cloud accounts with their unmanaged devices to spread malware throughout an organization, corrupt data, or exfiltrate sensitive data.

A malicious insider can use an unmanaged device to exfiltrate sensitive data from corporate OneDrive storage.

A malicious insider can use an unmanaged device to upload a malicious file to corporate OneDrive storage or attach it to a Salesforce record. The file may then be opened by a legitimate user connected to the enterprise corporate network.

On the other hand, user credentials may become compromised through phishing attacks or other similar techniques. Threat actors may use the stolen cloud accounts to perform malicious activities.

Sensitive data can be leaked, falsified, infected or destroyed causing significant cost to business. Legal implications are also possible for organizations in highly regulated industries, such as healthcare, if personal information is exposed during cloud account takeover (ATO) incidents.

Unfortunately, these new threats cannot be handled by the standard IT security measures. Therefore, organizations should enhance their security measures to protect their assets against malware, ransomware, cloud ATO and other malicious cloud activities.

Here are some known security measures to protect enterprises against the above-mentioned malicious cloud activities:--

Develop and deploy a data loss prevention (DLP) strategy.
Perform user and entity behavior analytics (UEBA) for better visibility.
Scan and quarantine malware at upload, at download and at rest.
Block known and zero-day threats.

Encrypt sensitive data before it goes to cloud storage.
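
As a concrete illustration of that last measure, here is a minimal sketch of client-side encryption before upload, using Python's third-party cryptography package (an assumption; any vetted symmetric-encryption library would do). The key stays on-premises, so the provider only ever stores ciphertext.

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generate once and keep on-premises; losing it means losing the data
f = Fernet(key)

plaintext = b"employee,salary\nalice,100000\n"
ciphertext = f.encrypt(plaintext)           # this, not the plaintext, goes to cloud storage
assert f.decrypt(ciphertext) == plaintext   # recovered after download

As noted later in this post, a customer who encrypts data before uploading it but loses the encryption key loses the data, so key management matters as much as the encryption itself.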


Identity Sprawl
Historically, users authenticated against a central identity datastore to access on-premises corporate apps. Over time, businesses began to use cloud apps and Software as a Service (SaaS) such as ServiceNow or Salesforce.

Likewise, social media and other shadow cloud apps gained popularity and are incorporated into day-to-day business operations. Suddenly, employees found themselves authenticating against different datastores all over the internet. This ID sprawl increases the risk of cloud account takeover and makes protecting access to an organization’s information far more challenging.

Here are some security measures to protect user’s cloud accounts:--

Use a central identity and access management (IAM) platform for approved cloud apps.
Restrict access from unmanaged devices and control the type of cloud apps used companywide.
Enforce multi-factor authentication to access corporate cloud apps (a minimal sketch follows this list).
Apply granular content-based and context-based policies.
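
To show what the multi-factor authentication item above involves under the hood, here is a minimal sketch of RFC 6238 time-based one-time passwords (TOTP) – the scheme most authenticator apps implement – using only the Python standard library. The secret is an illustrative placeholder.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    # RFC 6238: HMAC the current 30-second counter with the shared secret.
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"   # illustrative shared secret, enrolled once in the user's authenticator app
print("server expects:", totp(secret))
# A login succeeds only when the code the user types matches totp(secret) for the current window.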

Be sure to use the appropriate security methods, such as Intrusion Detection and Prevention (IDPS) systems.



An Intrusion Prevention System or IPS, also known as an Intrusion Detection and Prevention System or IDPS, is a network security appliance that monitors network and system activities and detects possible intrusions.

Intrusion detection is the process of monitoring the events occurring in your network and analyzing them for signs of possible incidents, violations, or imminent threats to your security policies. Intrusion prevention is the process of performing intrusion detection and then stopping the detected incidents. These security measures are available as intrusion detection systems (IDS) and intrusion prevention systems (IPS), which become part of your network to detect and stop potential incidents.

Intrusion detection systems (IDS) and intrusion prevention systems (IPS) constantly watch your network, identifying possible incidents and logging information about them, stopping the incidents, and reporting them to security administrators. 

In addition, some networks use IDS/IPS for identifying problems with security policies and deterring individuals from violating security policies. IDS/IPS have become a necessary addition to the security infrastructure of most organizations, precisely because they can stop attackers while they are gathering information about your network.
Three detection methodologies are typically used by an IDS to detect incidents.

Signature-Based Detection compares signatures against observed events to identify possible incidents. This is the simplest detection method because it compares only the current unit of activity (such as a packet or a log entry) to a list of signatures using string comparison operations.

Anomaly-Based Detection compares definitions of what is considered normal activity with observed events in order to identify significant deviations. This detection method can be very effective at spotting previously unknown threats.

Stateful Protocol Analysis compares predetermined profiles of generally accepted benign protocol activity for each protocol state against observed events in order to identify deviations.
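
A toy illustration of the signature-based approach in Python: the "signatures" here are bare substrings, whereas real IDS signatures (Snort rules, for example) are far richer, so treat this purely as a sketch of the comparison step.

# Toy signature-based detector: match observed events against known-bad patterns.
SIGNATURES = {
    "etc/passwd": "path traversal attempt",
    "' OR '1'='1": "SQL injection probe",
    "cmd.exe /c": "Windows command execution",
}

def match_signatures(event):
    # Return the label of every signature found in one event (packet payload, log entry, ...).
    return [label for pattern, label in SIGNATURES.items() if pattern in event]

for line in ["GET /index.html HTTP/1.1",
             "GET /download?file=../../etc/passwd HTTP/1.1"]:
    hits = match_signatures(line)
    print("ALERT" if hits else "ok", line, hits)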

What is the difference between IDS and IPS?

The main difference between them is that IDS is a monitoring system, while IPS is a control system. IDS doesn't alter the network packets in any way, whereas IPS prevents the packet from delivery based on the contents of the packet, much like how a firewall prevents traffic by IP address.
The main difference is that an IDS only monitors traffic.

 If an attack is detected, the IDS reports the attack, but it is then up to the administrator to take action. That's why having both an IDS and IPS system is critical. A good security strategy is to have them work together as a team

An intrusion detection system (IDS) is a device or software application that monitors a network or systems for malicious activity or policy violations. Any malicious activity or violation is typically reported either to an administrator or collected centrally using a security information and event management (SIEM) system. A SIEM system combines outputs from multiple sources and uses alarm filtering techniques to distinguish malicious activity from false alarms

The IPS sits between your firewall and the rest of your network, so it can stop suspected traffic from ever reaching the rest of the network. The IPS inspects inbound packets and what they are really being used for before deciding whether to let them into the network.

An intrusion detection system (IDS) differs from a firewall in that a firewall looks outwardly for intrusions in order to stop them from happening. Firewalls limit access between networks to prevent intrusion and do not signal an attack from inside the network.

Intrusion detection systems and intrusion prevention systems are both important parts of network integrity. Sometimes the systems overlap, and sometimes they’re combined or referred to together as IDPS. Although IPS is becoming a more dominant security method, it’s important to be familiar with both.

An IDS monitors your network for possible dangerous activity, including malicious acts and violations of security protocols. When such a problem is detected, an IDS alerts the administrator but doesn’t necessarily take any other action. There are several types of IDS and several methods of detection employed.

Network Intrusion Detection System (NIDS): A network intrusion detection system (NIDS) monitors packets moving into and out of a network or subset of a network. It could monitor all traffic, or just a selection, to catch security threats. A NIDS compares potential intrusions to known abnormal or harmful behavior. This option is preferred for enterprises, as it provides much broader coverage than host-based systems.

Host Intrusion Detection System (HIDS): A host intrusion detection system lives on and monitors a single host (such as a computer or device). It might monitor traffic, but it also monitors the activity of clients on that computer. 

For example, a HIDS might alert the administrator if a video game is accessing private files it shouldn’t be accessing and that have nothing to do with its operations. When a HIDS detects changes on the host, it compares them to an established checksum and alerts the administrator if there’s a problem.

HIDS can work in conjunction with NIDS, providing extra coverage for sensitive workstations and catching anything NIDS doesn’t catch. Malicious programs might be able to sneak past a NIDS, but their behavior will be caught by a HIDS.
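
The checksum comparison mentioned above can be sketched in a few lines of Python: record a SHA-256 baseline for each monitored file, then alert when a hash no longer matches. The watch list is an example; a real HIDS also monitors processes, logins, and configuration changes.

import hashlib, os

def snapshot(paths):
    # Record a SHA-256 baseline for each monitored file.
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest()
            for p in paths if os.path.exists(p)}

def detect_changes(baseline):
    # Re-hash each file and report anything that no longer matches the baseline.
    alerts = []
    for path, known in baseline.items():
        current = (hashlib.sha256(open(path, "rb").read()).hexdigest()
                   if os.path.exists(path) else None)
        if current != known:
            alerts.append(path + " was modified or removed")
    return alerts

baseline = snapshot(["/etc/hosts"])   # example watch list
print(detect_changes(baseline) or "all monitored files match the baseline")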

Types of Intrusion Detection Systems
There are two primary types of intrusion detection systems you should be aware of to ensure you’re catching all threats on your network. Signature-based IDS is more traditional and potentially familiar, while anomaly-based IDS leverages machine learning capabilities. Both have their benefits and limitations:

Signature-based: Signature-based IDS relies on a preprogrammed list of known attack behaviors. These behaviors will trigger the alert. These “signatures” can include subject lines and attachments on emails known to carry viruses, remote logins in violation of organizational policy, and certain byte sequences. It is similar to antivirus software (the term “signature-based” originates with antivirus software).

Signature-based IDS is popular and effective but is only as good as its database of known signatures. This makes it vulnerable to new attacks. Plus, attackers can and do frequently disguise their attacks to avoid common signatures that will be detected. Also, the most thorough signature-based IDS will have huge databases to check against, meaning big bandwidth demands on your system.

Anomaly-based: Anomaly-based IDS begins with a model of normal behavior on the network, then alerts an admin anytime it detects a deviation from that model of normal behavior. Anomaly-based IDS begins at installation with a training phase where it “learns” normal behavior. AI and machine learning have been very effective in this phase of anomaly-based systems.

Anomaly-based systems are typically more useful than signature-based ones because they’re better at detecting new and unrecognized attacks. However, they can set off many false positives, since they don’t always distinguish well between attacks and benign anomalous behavior.
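
A minimal sketch of the anomaly-based idea, assuming the "model of normal behavior" is nothing more than the mean and standard deviation of one metric learned during the training phase (production systems use far richer models):

import statistics

def is_anomalous(history, observed, threshold=3.0):
    # Flag observations more than `threshold` standard deviations from the learned norm.
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # guard against zero deviation
    return abs(observed - mean) / stdev > threshold

# "Training phase": requests per minute observed during normal operation.
normal_rpm = [118, 120, 125, 119, 121, 117, 123, 122]
print(is_anomalous(normal_rpm, 124))   # False - within normal variation
print(is_anomalous(normal_rpm, 900))   # True - likely a flood or scan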

Some experts consider intrusion prevention systems to be a subset of intrusion detection. Indeed, all intrusion prevention begins with intrusion detection. But security systems can go one step further and act to stop ongoing and future attacks. When an IPS detects an attack, it can reject data packets, give commands to a firewall, and even sever a connection.

IDS and IPS are similar in how they’re implemented and operate. IPS can also be network- or host-based and can operate on a signature or anomaly basis.

Types of Intrusion Prevention Systems
A robust IT security strategy should include an intrusion prevention system able to help automate many necessary security responses. When risks occur, a prevention tool may be able to quickly and thoroughly shut down the damage and protect the overall network.

Network-based Intrusion Prevention Systems (NIPS): As the name suggests, a NIPS covers all events on your network. Its detection is signature-based.

Network Behavior Analysis (NBA): NBA is similar to NIPS in that it provides network-wide coverage. But unlike NIPS, NBA operates on anomalies. Like anomaly-based IDS, NBA requires a training phase where it learns the network’s baseline norm.

NBA also uses a method called stateful protocol analysis. Here, the baseline norm is pre-programmed by the vendor, rather than learned during the training phase. But in both cases, the IPS is looking for deviations rather than signatures.

Wireless-based Prevention Systems (WIPS): Protecting your wireless network brings its own unique challenges. Enter WIPS. Most WIPS have two components: overlay monitoring (devices installed near access points to monitor the radio frequencies) and integrated monitoring (IPS using the access points themselves).

Combining these two, which is very common, is known as “hybrid monitoring.”

Host-based Intrusion Prevention Systems (HIPS): HIPS live on and protect a single host, providing granular coverage. They are best used in conjunction with a network-wide IPS.

Differences Between IDS and IPS

There are several differences between these two types of systems. IDS only issues alerts for potential attacks, while IPS can take action against them. Also, IDS is not inline, so traffic doesn’t have to flow through it. Traffic does, however, have to flow through your IPS. In addition, false positives for IDS will only cause alerts, while false positives for IPS could cause the loss of important data or functions.

A distributed denial-of-service (DDoS) attack occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. Such an attack is often the result of multiple compromised systems (for example, a botnet) flooding the targeted system with traffic

In computing, a denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled.

In a distributed denial-of-service attack (DDoS attack), the incoming traffic flooding the victim originates from many different sources. This effectively makes it impossible to stop the attack simply by blocking a single source.

A DoS or DDoS attack is analogous to a group of people crowding the entry door of a shop, making it hard for legitimate customers to enter, thus disrupting trade.

Criminal perpetrators of DoS attacks often target sites or services hosted on high-profile web servers such as banks or credit card payment gateways. Revenge, blackmail and activism can motivate these attacks.

DDoS is a type of DoS attack in which multiple compromised systems, often infected with a Trojan, are used to target a single system, causing a denial of service.


More than one third of US businesses experience DDoS attacks. The most common cyberattacks were malware (53 percent) and viruses (51 percent). A distributed denial-of-service attack (aka DDoS) is very easy to mount, and is in fact widely considered one of the easiest blackhat activities.

For a DDoS attack to be successful, an attacker will spread malicious software to vulnerable computers, mainly through infected emails and attachments. Layer 7 attacks focus specifically on application-layer features such as HTTP, SNMP, and FTP. Layer 7 attacks require far less bandwidth and far fewer packets than network-layer attacks to disrupt services.
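
One common application-layer defense against such floods is per-client rate limiting. Below is a minimal token-bucket sketch in Python; the rate, burst size, and client address are illustrative, and real mitigations combine this with upstream filtering, challenge pages, and scrubbing services.

import time
from collections import defaultdict

class TokenBucket:
    # Allow each client a steady request rate with a small burst allowance.
    def __init__(self, rate=5.0, burst=10):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)
        self.stamp = defaultdict(time.monotonic)

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.stamp[client_ip]
        self.stamp[client_ip] = now
        self.tokens[client_ip] = min(self.burst, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False   # drop or challenge this request

limiter = TokenBucket(rate=5, burst=10)
print([limiter.allow("203.0.113.7") for _ in range(12)].count(True))   # about 10 of 12 burst requests pass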

DDoS attacks cannot steal website visitors' information. The sole purpose of a DDoS attack is to overload the website's resources. However, DDoS attacks can be used as a means of extortion and blackmail. For example, website owners can be asked to pay a ransom for attackers to stop a DDoS attack.


DDoS attacks can last as long as 24 hours, and good communication can ensure that the cost to your business is minimized while you remain under attack.






A cloud service provider, or CSP, is a company that offers some component of cloud computing -- typically infrastructure as a service (IaaS), software as a service (SaaS) or platform as a service (PaaS) -- to other businesses or individuals. 

Cloud computing security refers to a broad set of policies, technologies, applications, and controls utilized to protect virtualized IP, data, applications, services, and the associated infrastructure of cloud computing. It is a sub-domain of computer security, network security, and, more broadly, information security.

Cloud computing and storage provides users with capabilities to store and process their data in third-party data centers.  Organizations use the cloud in a variety of different service models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models (private, public, hybrid, and community). 

Application-level security issues (or cloud service provider, CSP, level attacks) refer to intrusions by malicious attackers exploiting vulnerabilities that stem from the shared nature of the cloud. Some companies host their applications in shared environments used by multiple users, without considering the possibilities of exposure to security breaches, such as:

1. SQL injection
An unauthorized user gains access to the entire database of an application by inserting malicious code into a standard SQL code. Often used to attack websites, SQL injection can be avoided by avoiding dynamically generated SQL in the code and using parameterized queries instead. It is also necessary to remove all stored procedures that are rarely used and assign the least possible privileges to users who have permission to access the database.
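
The point about avoiding dynamically generated SQL is easiest to see side by side. Here is a minimal sketch using Python's built-in sqlite3 module; the same principle applies to any database driver.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# VULNERABLE: string concatenation lets the input rewrite the query.
rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("concatenated query leaked", len(rows), "rows")      # 2 - everything

# SAFE: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned", len(rows), "rows")   # 0 - no such name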

2. Guest-hopping attack
In guest-hopping attacks, due to the separation failure between shared infrastructures, an attacker gets access to a virtual machine by penetrating another virtual machine hosted on the same hardware. One possible mitigation of guest-hopping attacks is to use forensics and VM debugging tools to observe any attempt to compromise the virtual machine. Another solution is to use the High Assurance Platform (HAP), which provides a high degree of isolation between virtual machines.

3. Side-channel attack
An attacker opens a side-channel attack by placing a malicious virtual machine on the same physical machine as the victim machine. Through this, the attacker gains access to all confidential information on the victim machine. The countermeasure to eliminate the risk of side-channel attacks in a virtualized cloud environment is to ensure that no legitimate user VMs reside on the same hardware as other users' VMs.

4. Malicious insider
A malicious insider can be a current or former employee or business associate who maliciously and intentionally abuses system privileges and credentials to access and steal sensitive customer information within the network of an organization. Strict privilege planning and security auditing can minimize this security risk that originates from within an organization.

5. Cookie poisoning
Cookie poisoning means to gain unauthorized access into an application or a webpage by modifying the contents of the cookie. In a SaaS model, cookies contain user identity credential information that allows the applications to authenticate the user identity. Cookies are forged to impersonate an authorized user. A solution is to clean up the cookie and encrypt the cookie data.
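
Encrypting cookies protects their contents; signing them detects poisoning. Here is a minimal sketch of HMAC-signed cookies in Python; the secret and the cookie format are illustrative assumptions, not any framework's actual scheme.

import hashlib, hmac

SERVER_SECRET = b"keep-this-off-the-client"   # known only to the server

def sign_cookie(value):
    tag = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value + "|" + tag

def verify_cookie(cookie):
    value, _, tag = cookie.rpartition("|")
    expected = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(tag, expected) else None   # reject tampered cookies

cookie = sign_cookie("user=alice;role=member")
print(verify_cookie(cookie))                                # accepted
print(verify_cookie(cookie.replace("member", "admin")))     # None - poisoning detected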

6. Backdoor and debug option
The backdoor is a hidden entrance to an application, created intentionally or unintentionally by developers while coding. The debug option is a similar entry point, often used by developers to facilitate troubleshooting in applications. The problem is that hackers can use these hidden doors to bypass security policies, enter the website, and access sensitive information. To prevent this kind of attack, developers should disable the debug option in production code.

7. Cloud browser security
A web browser is a universal client application that uses the Transport Layer Security (TLS) protocol to facilitate privacy and data security for Internet communications. TLS encrypts the connection between web applications and servers, such as web browsers loading a website. Web browsers only use TLS encryption and TLS signatures, which are not always sufficient to defend against malicious attacks. One solution is to use TLS together with XML-based cryptography in the browser core.

8. Cloud malware injection attack
A malicious virtual machine or service implementation module, such as SaaS or IaaS, is injected into the cloud system, making it believe the new instance is valid. If it succeeds, user requests are automatically redirected to the new instance, where the malicious code is executed. The mitigation is to perform an integrity check of the service instance before using it for incoming requests in the cloud system.

9. ARP poisoning
Address Resolution Protocol (ARP) poisoning is when an attacker exploits a weakness in the ARP protocol to map a network IP address to a malicious MAC address and then update the ARP cache with that malicious MAC address. It is better to use static ARP entries to minimize this attack. This tactic can work for small networks such as personal clouds, but on large-scale clouds it is easier to use other strategies, such as port security features, to lock a single port (or network device) to a particular IP address.

Network-level security attacks
Cloud computing largely depends on existing network infrastructure such as LAN, MAN, and WAN, making it exposed to some security attacks which originate from users outside the cloud or a malicious insider. In this section, let’s focus on the network level security attacks and their possible countermeasures.

10. Domain Name System (DNS) attacks
It is an exploit in which an attacker takes advantage of vulnerabilities in the domain name system (DNS), which converts hostnames into corresponding Internet Protocol (IP) addresses using a distributed database scheme. DNS servers are subject to various kinds of attacks since DNS is used by nearly all networked applications – including email, Web browsing, eCommerce, Internet telephony, and more. It includes TCP SYN Flood Attacks, UDP Flood Attack, Spoofed Source Address/LAND Attacks, Cache Poisoning Attacks, and Man in the Middle Attacks.

11. Domain hijacking
Domain hijacking is defined as changing a domain’s name without the owner or creator’s knowledge or permission. Domain hijacking enables intruders to obtain confidential business data or perform illegal activities such as phishing, where a domain is substituted by a similar website containing private information. One way to avoid domain hijacking is to force a waiting period of 60 days between a change in registration and a transfer to another registrar. Another approach is to use the Extensible Provisioning Protocol (EPP), which utilizes a domain registrant-only authorization key as a protection measure to prevent unauthorized name changes.

12. IP Spoofing
In IP spoofing, an attacker gains unauthorized access to a computer by pretending that the traffic has originated from a legitimate computer. IP spoofing is used for other threats such as denial-of-service and man-in-the-middle attacks:

a. Denial of service attacks (DoS)
It is a type of attack that tries to make a website or network resource unavailable. The attacker floods the host with a massive number of packets in a short amount of time that require extra processing. It makes the targeted device waste time waiting for a response that never comes. The target is kept so busy dealing with malicious packets that it does not respond to routine incoming requests, leaving the legitimate users with denied service.

An attacker can coordinate hundreds of devices across the Internet to send an overwhelming amount of unwanted packets to a target. Therefore, tracking and stopping DoS is very difficult. TCP SYN flooding is an example of a DoS attack in which the intruder sends a flood of spoofed TCP SYN packets to the victim machine. This attack exploits the limitations of the three-way handshake in maintaining half-open connections.

b. Man In The Middle Attack (MITM)
A man-in-the-middle attack (MITM) is an intrusion in which the intruder relays, and possibly alters, messages between two entities that believe they are communicating directly with each other. The intruder utilizes network packet sniffers, filtering, and transmission protocols to gain access to network traffic. A MITM attack exploits the real-time processing of transactions, conversations, or transfers of other data. It can be mitigated using firewall packet filtering, strong encryption, and origin authentication techniques.
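
One form of origin authentication is certificate pinning: record the fingerprint of the server's certificate over a trusted channel, then refuse to communicate if it ever changes. A minimal sketch with Python's standard ssl module follows; the host is an example, the call needs network access, and real deployments must also handle legitimate certificate rotation.

import hashlib, ssl

def cert_fingerprint(host, port=443):
    # SHA-256 fingerprint of the certificate the server presents right now.
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

# Record the fingerprint once over a trusted channel, then compare on every connection.
pinned = cert_fingerprint("example.com")
assert cert_fingerprint("example.com") == pinned, "certificate changed - possible MITM"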

End-user/host level attacks
The cloud end-user or host level attacks include phishing, an attempt to steal the user identity, including usernames, passwords, and credit card information. Phishing involves sending the user an email containing a link to a fake website that looks like a real one. When the user logs in to the fake website, their username and password are sent to the attacker, who can use them to attack the cloud.

Another method of phishing is to send the user an email claiming to be from the cloud service company, asking the user, for instance, to provide their username and password for maintenance purposes. Countermeasures against phishing include spam filters and phishing blockers in browsers. You can also train users not to respond to spoofed email and not to give their credentials to any website.
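
As a flavor of what such filters look for, here is a deliberately naive Python sketch that flags email links whose real destination is not on a trusted-domain allowlist. The domains are made up, and production spam filters weigh many more signals than this.

import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-cloud.com", "login.example-cloud.com"}   # your provider's real domains

def suspicious_links(email_html):
    # Flag links whose actual destination is not a trusted domain.
    flagged = []
    for href in re.findall(r'href="([^"]+)"', email_html):
        domain = urlparse(href).hostname or ""
        if domain not in TRUSTED_DOMAINS:
            flagged.append(href)
    return flagged

mail = '<a href="http://example-cloud.com.attacker.ru/login">Verify your account</a>'
print(suspicious_links(mail))   # ['http://example-cloud.com.attacker.ru/login']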

Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks are among the top risks to application security. In DoS and DDoS attacks, legitimate users are prevented from accessing a service, commonly through flooding: targeting machines or resources with excessive requests in order to overload systems so that legitimate requests cannot be met.


Other possible application security issues can result from cloud malware injection. The attacker, in this case, focuses on inserting harmful implementations into cloud services. In other instances, cookie poisoning can modify a user’s cookies so the attacker can gain unauthorized information for identity fraud.

Backdoor attacks are a conventional technique that bypasses system security undetectably to access user information directly in the cloud. Through hidden fields on webpages, hidden-field attackers collect unauthorized information.

When attackers identify two virtual machines hosted on the same physical hardware and attempt to penetrate one machine from the other, it is called guest hopping. Another attack vector is SQL injection, in which the attacker embeds malicious code into a poorly designed application and then passes it to the back-end database. SQL Injection (SQLi) is an injection attack where an attacker executes malicious SQL statements to control a web application’s database server, thereby accessing, modifying, and deleting unauthorized data.

What can SQL Injection do?
There are a lot of things an attacker can do when exploiting an SQL injection on a vulnerable website. By leveraging an SQL Injection vulnerability, given the right circumstances, an attacker can do the following things:--

Bypass a web application’s authorization mechanisms and extract sensitive information
Easily control application behavior that’s based on data in the database
Inject further malicious code to be executed when users access the application
Add, modify, and delete data, corrupting the database and making the application unusable
Enumerate the authentication details of a user registered on a website and use the data in attacks on other sites
It all depends on the capability of the attacker, but sometimes an SQL Injection attack can lead to a complete takeover of the database and web application.

A side-channel attack refers to a cache attack in which cache accesses are monitored to recover encrypted data. At times, an insider may threaten the enterprise, since employees already have access to its systems and data.

How do SQL Injection attacks work?
A developer usually defines an SQL query to perform some database action necessary for the application to function. This query has one or two arguments so that only desired records are returned when the value for that argument is provided by a user.

An SQL Injection attack plays out in two stages:--
Research: The attacker gives some random, unexpected values for the argument, observes how the application responds, and decides which attack to attempt.
Attack: Here the attacker provides a carefully crafted value for the argument. The application interprets the value as part of an SQL command rather than merely data, and the database then executes the SQL command as modified by the attacker.


Lastly, network-level attacks are among the security risks of cloud computing. DNS attacks include domain hijacking and cross-site scripting, which can be costly for enterprises. When intruders send malicious data to computers with an IP address that indicates the message originates from a trusted host, this is known as IP spoofing.

In a man-in-the-middle attack, communication between two users is monitored by attackers to gain confidential information. To conclude network-level attacks, sniffing is an attack in which data is captured and interpreted while flowing through the network, giving the attacker access to all information transported through it.




IP spoofing is the creation of Internet Protocol (IP) packets which have a modified source address in order to either hide the identity of the sender, to impersonate another computer system, or both.



In IP spoofing, the attacker modifies the source address in the outgoing packet header so that the destination computer treats the packet as if it is coming from a trusted source – e.g., a computer on an enterprise network – and accepts it.



 Security concerns associated with cloud computing fall into two broad categories: security issues faced by cloud providers (organizations providing software-, platform-, or infrastructure-as-a-service via the cloud) and security issues faced by their customers (companies or organizations who host applications or store data on the cloud).  The responsibility is shared, however. 

The provider must ensure that their infrastructure is secure and that their clients’ data and applications are protected, while the user must take measures to fortify their application and use strong passwords and authentication measures.


Due to the lower costs and ease of implementing PaaS and SaaS products, the probability of unauthorized use of cloud services increases. However, services provisioned or used without IT's knowledge present risks to an organization. The use of unauthorized cloud services could result in an increase in malware infections or data exfiltration since the organization is unable to protect resources it does not know about. The use of unauthorized cloud services also decreases an organization's visibility and control of its network and data.

Internet-Accessible Management APIs can be Compromised. CSPs expose a set of application programming interfaces (APIs) that customers use to manage and interact with cloud services (also known as the management plane). Organizations use these APIs to provision, manage, orchestrate, and monitor their assets and users. These APIs can contain the same software vulnerabilities as an API for an operating system, library, etc. Unlike management APIs for on-premises computing, CSP APIs are accessible via the Internet, exposing them more broadly to potential exploitation.

Threat actors look for vulnerabilities in management APIs. If discovered, these vulnerabilities can be turned into successful attacks, and organization cloud assets can be compromised. From there, attackers can use organization assets to perpetrate further attacks against other CSP customers.

Separation Among Multiple Tenants Fails. Exploitation of system and software vulnerabilities within a CSP's infrastructure, platforms, or applications that support multi-tenancy can lead to a failure to maintain separation among tenants. This failure can be used by an attacker to gain access from one organization's resource to another user's or organization's assets or data. Multi-tenancy increases the attack surface, leading to an increased chance of data leakage if the separation controls fail.

This attack can be accomplished by exploiting vulnerabilities in the CSP's applications, hypervisor, or hardware, subverting logical isolation controls or attacks on the CSP's management API. To date, there has not been a documented security failure of a CSP's SaaS platform that resulted in an external attacker gaining access to tenants' data.

No reports of an attack based on logical separation failure were identified; however, proof-of-concept exploits have been demonstrated.

Data Deletion is Incomplete. Threats associated with data deletion exist because the consumer has reduced visibility into where their data is physically stored in the cloud and a reduced ability to verify the secure deletion of their data. This risk is concerning because the data is spread over a number of different storage devices within the CSP's infrastructure in a multi-tenancy environment. In addition, deletion procedures may differ from provider to provider. Organizations may not be able to verify that their data was securely deleted and that remnants of the data are not available to attackers. This threat increases as an agency uses more CSP services.

Cloud and On-Premise Threats and Risks

The following are risks that apply to both cloud and on-premise IT data centers that organizations need to address.

Credentials are Stolen. If an attacker gains access to a user's cloud credentials, the attacker can have access to the CSP's services to provision additional resources (if credentials allowed access to provisioning), as well as target the organization's assets. The attacker could leverage cloud computing resources to target the organization's administrative users, other organizations using the same CSP, or the CSP's administrators. An attacker who gains access to a CSP administrator's cloud credentials may be able to use those credentials to access the agency's systems and data.

Administrator roles vary between a CSP and an organization. The CSP administrator has access to the CSP network, systems, and applications (depending on the service) of the CSP's infrastructure, whereas the consumer's administrators have access only to the organization's cloud implementations. In essence, the CSP administrator has administration rights over more than one customer and supports multiple services.

Vendor Lock-In Complicates Moving to Other CSPs. Vendor lock-in becomes an issue when an organization considers moving its assets/operations from one CSP to another. The organization discovers the cost/effort/schedule time necessary for the move is much higher than initially considered due to factors such as non-standard data formats, non-standard APIs, and reliance on one CSP's proprietary tools and unique APIs.

This issue increases in service models where the CSP takes more responsibility. As an agency uses more features, services, or APIs, the exposure to a CSP's unique implementations increases. These unique implementations require changes when a capability is moved to a different CSP. If a selected CSP goes out of business, it becomes a major problem since data can be lost or cannot be transferred to another CSP in a timely manner.

 Increased Complexity Strains IT Staff. Migrating to the cloud can introduce complexity into IT operations. Managing, integrating, and operating in the cloud may require that the agency's existing IT staff learn a new model. IT staff must have the capacity and skill level to manage, integrate, and maintain the migration of assets and data to the cloud in addition to their current responsibilities for on-premises IT.

Key management and encryption services become more complex in the cloud. The services, techniques, and tools available to log and monitor cloud services typically vary across CSPs, further increasing complexity. There may also be emergent threats/risks in hybrid cloud implementations due to technology, policies, and implementation methods, which add complexity. This added complexity leads to an increased potential for security gaps in an agency's cloud and on-premises implementations.

 Insiders Abuse Authorized Access. Insiders, such as staff and administrators for both organizations and CSPs, who abuse their authorized access to the organization's or CSP's networks, systems, and data are uniquely positioned to cause damage or exfiltrate information.

The impact is most likely worse when using IaaS due to an insider's ability to provision resources or perform nefarious activities that require forensics for detection. These forensic capabilities may not be available with cloud resources.

 Stored Data is Lost. Data stored in the cloud can be lost for reasons other than malicious attacks. Accidental deletion of data by the cloud service provider or a physical catastrophe, such as a fire or earthquake, can lead to the permanent loss of customer data. The burden of avoiding data loss does not fall solely on the provider's shoulders. If a customer encrypts its data before uploading it to the cloud but loses the encryption key, the data will be lost. In addition, inadequate understanding of a CSP's storage model may result in data loss. Agencies must consider data recovery and be prepared for the possibility of their CSP being acquired, changing service offerings, or going bankrupt.

This threat increases as an agency uses more CSP services. Recovering data on a CSP may be easier than recovering it at an agency because an SLA designates availability/uptime percentages. These percentages should be investigated when the agency selects a CSP.

CSP Supply Chain is Compromised. If the CSP outsources parts of its infrastructure, operations, or maintenance, these third parties may not satisfy/support the requirements that the CSP is contracted to provide with an organization. An organization needs to evaluate how the CSP enforces compliance and check to see if the CSP flows its own requirements down to third parties. If the requirements are not being levied on the supply chain, then the threat to the agency increases.

This threat increases as an organization uses more CSP services and is dependent on individual CSPs and their supply chain policies.

Insufficient Due Diligence Increases Cybersecurity Risk. Organizations migrating to the cloud often perform insufficient due diligence. They move data to the cloud without understanding the full scope of doing so, the security measures used by the CSP, and their own responsibility to provide security measures. They make decisions to use cloud services without fully understanding how those services must be secured.


Cloud Security Alliance (CSA) is a not-for-profit organization with the mission to “promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing.”

The CSA has over 80,000 individual members worldwide. CSA gained significant credibility in 2011 when the American Presidential Administration selected the CSA Summit as the venue for announcing the federal government’s cloud computing strategy.

The latest CSA  ( Cloud Security Alliance ) report highlights  leading concerns: --

1. Data breaches. "Data is becoming the main target of cyber attacks," the report's authors point out. "Defining the business value of data and the impact of its loss is essential for organizations that own or process data." In addition, "protecting data is evolving into a question of who has access to it," they add. "Encryption techniques can help protect data, but negatively impact system performance while making applications less user-friendly."

2. Misconfiguration and inadequate change control. "Cloud-based resources are highly complex and dynamic, making them challenging to configure. Traditional controls and change management approaches are not effective in the cloud." The authors state "companies should embrace automation and employ technologies that scan continuously for misconfigured resources and remediate problems in real time."

3. Lack of cloud security architecture and strategy. "Ensure security architecture aligns with business goals and objectives. Develop and implement a security architecture framework."

4. Insufficient identity, credential, access and key management. "Secure accounts, including two-factor authentication and limited use of root accounts. Practice the strictest identity and access controls for cloud users and identities."

5. Account hijacking. This is a threat that must be taken seriously. "Defense-in-depth and IAM controls are key in mitigating account hijacking."

6. Insider threat. "Taking measures to minimize insider negligence can help mitigate the consequences of insider threats. Provide training to your security teams to properly install, configure, and monitor your computer systems, networks, mobile devices, and backup devices." The CSA authors also urge "regular employee training awareness. Provide training to your regular employees to inform them how to handle security risks, such as phishing and protecting corporate data they carry outside the company on laptops and mobile devices."

7. Insecure interfaces and APIs. "Practice good API hygiene. Good practice includes diligent oversight of items such as inventory, testing, auditing, and abnormal activity protections." Also, "consider using standard and open API frameworks (e.g., Open Cloud Computing Interface (OCCI) and Cloud Infrastructure Management Interface (CIMI))."

8. Weak control plane. "The cloud customer should perform due diligence and determine if the cloud service they intend to use possesses an adequate control plane."

9. Metastructure and applistructure failures. "Cloud service providers must offer visibility and expose mitigations to counteract the cloud's inherent lack of transparency for tenants. All CSPs should conduct penetration testing and provide findings to customers."

10. Limited cloud usage visibility. "Mitigating risks starts with the development of a complete cloud visibility effort from the top down. Mandate companywide training on accepted cloud usage policies and enforcement thereof.  All non-approved cloud services must be reviewed and approved by the cloud security architect or third-party risk management."


11. Abuse and nefarious use of cloud services. "Enterprises should monitor their employees in the cloud, as traditional mechanisms are unable to mitigate the risks posed by cloud service usage."



Attacks on the resources used by one or more of the other tenants can affect your operations as well. Attackers may hit the entire network and cause downtime to several clients, depending on the bandwidth available. This can frustrate your clients as well as stall your regular operations.


 EXAMPLE:  Hackers targeted Portland-based cloud computing company Cedexis in May 2017 in an attack that caused widespread outages across Cedexis' infrastructure. Many French media outlets that used Cedexis services, including Le Monde and Le Figaro, were impacted. Their customers faced downtime because of the denial of service attack on Cedexis' cloud.


Insider threats include intentional or unintentional behavior by employees that results in exposing or sharing of sensitive data.

This includes mistakenly sharing files with confidential information (like employee social security numbers) with a larger unauthorized group and using inappropriate sharing controls.

Ninety-four percent of all organizations experience at least one insider threat incident every month.

 EXAMPLE:  Data thefts are most common when people jump ship. For example, a salesperson leaving the company for a competitor can easily download customer data from a cloud CRM application. Cloud data thefts such as this are more difficult to detect than the theft of hard-copy documents, for example.

Recommended actions to reduce the risk of an insider attack:--
Improve access controls using tools such as multifactor authentication and authorization to ensure that only the right people have access to your data.

Use computer-based security awareness training courses and employee agreements to prevent intentional or unintentional sharing of confidential data.

Insider threats are not limited to employees. They extend to contractors, supply chain partners, service providers, and account compromise attacks that can abuse access to an organization's assets both on-premises and in the cloud.

Disgruntled employees sometimes become malicious insiders.

The most dangerous are those who have received termination notices. They may decide that they have nothing to lose because they aren’t worried about getting fired anymore. 

Depending on the nature of your organization and the work you do, it might be a good idea for them to stop working for your company the moment they know they've been terminated. Get them to give you any physical keys they might have and disable their user accounts right away. 

It may ultimately cost your organization less money to just give your terminated employee their severance pay than to pay them to work an extra few weeks. But if they must work for some time after they've been terminated, watch them especially carefully.

Another indication of an employee or a contractor who could be a malicious insider is when they seem unusually enthusiastic about their work. They may volunteer for more work or additional tasks not because they want a raise, but because they want to expand their access to sensitive data.

Types of Insider Threats
There are three types of insider threats: compromised users, careless users, and malicious users.
Compromised Employees or Vendors
Compromised employees or vendors are the most important type of insider threat you'll face, because neither the organization nor the user knows the account is compromised. It can happen if an employee grants access to an attacker by clicking on a phishing link in an email. These are the most common types of insider threats.

Careless Employees
Careless employees or vendors can become targets for attackers. Leaving a computer or terminal unlocked for a few minutes can be enough for an attacker to gain access.

Granting DBA permissions to regular users (or worse, using software system accounts) to do IT work is another example of careless insider behavior.

Malicious Insider
Malicious attackers can take any shape or form. They usually have legitimate user access to the system and willfully extract data or Intellectual Property. Since they are involved with the attack, they can also cover up their tracks. That makes detection even more difficult.

The most significant issues with detecting insider threats are:

1. Legitimate Users
The nature of the threat is what makes it so hard to prevent. With the actor using their authentic login profile, there's no immediate warning triggered. Accessing large files or databases infrequently may be a valid part of their day-to-day job requirements.

2. System and Software Context
For the security team to know that something terrible is happening, they need to know what something bad looks like. This isn't easy. Usually, business units are the experts when it comes to their software. Without the right context, detecting a real insider threat from the security operations center is almost impossible.

3. Post Login Activities
Keeping track of every user’s activities after they’ve logged in to the system is a lot of work. In some cases, raw logs need to be checked, and each event studied. Even with Machine Learning (ML) tools, this can still be a lot of work. It could also lead to many false positives being reported, adding noise to the problem.

Indicators of Insider Attacks
Detecting attacks is still possible. Some signs are easy to spot and take action on.

Common indicators of insider threats are:

Unexplained financial gain.
Abuse by service accounts.
Multiple failed logins.
Incorrect software access requests.
Large data or file transfers.
Using systems and tools that look for these items can help raise the alarm for an attack, while regular (daily) endpoint scans will ensure workstations stay clean of viruses and malware. A minimal log-scanning sketch follows below.
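
As a rough illustration of watching for one of these indicators, here is a minimal Python sketch that scans an authentication log for repeated failed logins. The log line format ("FAILED LOGIN user=<name>"), the file name, and the alert threshold are all invented for illustration; a real deployment would match its own log format.

from collections import Counter

FAILED_MARKER = "FAILED LOGIN user="   # assumed log line format
THRESHOLD = 5                          # assumed alert threshold

def scan_auth_log(path):
    failures = Counter()
    with open(path) as log:
        for line in log:
            at = line.find(FAILED_MARKER)
            if at != -1:
                rest = line[at + len(FAILED_MARKER):].split()
                if rest:
                    failures[rest[0]] += 1   # count failures per user
    # Return the users whose failure count crossed the alert threshold.
    return [user for user, count in failures.items() if count > THRESHOLD]

for suspect in scan_auth_log("auth.log"):   # "auth.log" is a placeholder path
    print("ALERT: repeated failed logins for", suspect)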


Your cloud environment interfaces with all of your infrastructure stacks and applications, so it’s very important to watch for any insider threats which may exist there.

In order to help prevent insider threats to your cloud, you need to make sure that it’s properly configured for optimal security. Secure-by-default landing zones can prevent new attack surfaces from opening up in development, staging and production environments. 

You must also implement identity access management that’s well suited to the cloud. The principle of least privilege can also be as useful for protecting cloud networks as it is for on-premises networks. No user should have more privileges than they absolutely need in order to do their jobs.
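
The principle of least privilege can be illustrated with a minimal Python sketch of default-deny, role-based permission checks. The role names and permission strings below are invented for illustration, not any particular cloud provider's IAM model.

# Each role is granted only the permissions it needs; everything else is
# denied by default.
ROLE_PERMISSIONS = {
    "analyst":   {"storage:read"},
    "developer": {"storage:read", "compute:deploy"},
    "admin":     {"storage:read", "storage:write", "compute:deploy", "iam:manage"},
}

def is_allowed(role, action):
    # Default-deny: unknown roles and unlisted actions are rejected.
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("developer", "compute:deploy")
assert not is_allowed("analyst", "storage:write")   # least privilege in action
assert not is_allowed("intern", "storage:read")     # unknown role: denied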

Insider threats can be a lot more dangerous than outsider threats. As far as malicious attackers are concerned, insiders already have authorized access to your buildings and user accounts. An outside attacker needs to work to find an external attack vector into your networks and physical facilities. Those are steps inside attackers can usually skip. It's a lot easier to privilege escalate from a user account you already have than to break into any user account in the first place. 

Organizations, on average, experience 14 compromised-account incidents each month, where unauthorized third-party agents exploit stolen user credentials to gain access to corporate data stored in a public cloud service. Eighty percent of organizations are affected by this risk every month.

 EXAMPLE:  Cybercriminals stole personal data (including the residential addresses and earnings) of 3 million customers of the media and entertainment company WWE (World Wrestling Entertainment). Hackers gained access to the data after targeting a database left unsecured on the Amazon cloud server.

Recommended actions to reduce the risk of a data breach:
Shadow IT and BYOD (bring your own device) practices often lead to data breaches. Strengthen data security by installing anti-malware, encryption, authentication, and data protection software in the personal devices that employees use for work.
Educate employees about the need to keep their manager and IT lead in the loop when using any new applications other than those specified or provided by the IT team.

Check terms and conditions as well as security features offered by CSPs and SaaS providers to ensure data privacy.


Bring Your Own Device (BYOD)
Fifty-nine percent of organizations today allow their employees to bring their own devices to work, a concept called BYOD. While it helps businesses save money on IT equipment, it also increases security risks.


Employees might use unapproved SaaS applications from their personal devices. They may also use personal and official cloud storage applications side-by-side, increasing the risk of confidential data getting posted in a personal space. BYOD policies make it difficult to track employees' use of business data on their personal devices. Stolen, lost, or misused devices can also result in business data getting breached.



A service level agreement (SLA) is a consensus between a client and a service provider. The SLA encapsulates roles and responsibilities, exceptions, and accountability and penalties for both parties.

The following definitions apply to the SLA:--

"Back-off Requirements" means, when an error occurs, the Application is responsible for waiting for a period of time. This means that after the first error, there is a minimum back-off interval of 1 second and for each consecutive error, the back-off interval increases exponentially up to 32 seconds.

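As a rough illustration of these Back-off Requirements, here is a minimal Python sketch. The `request` callable, the retry limit, and the added random jitter are assumptions for illustration; only the 1-second starting interval and the 32-second cap come from the SLA text above.

import random
import time

def call_with_backoff(request, max_backoff=32.0, max_attempts=8):
    # Wait 1 second after the first error; double the wait after each
    # consecutive error, capped at max_backoff (32 s per the SLA text).
    delay = 1.0
    for attempt in range(max_attempts):
        try:
            return request()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(delay + random.random())  # small jitter (an addition)
            delay = min(delay * 2, max_backoff)

In practice, each request to the covered service would be wrapped in such a helper, e.g. call_with_backoff(lambda: do_request()), where do_request is whatever client call the application makes.
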
"Covered Service" means Production-Grade Cloud Bigtable Instance.
"Downtime" means more than a five percent Error Rate for a Production-Grade Cloud Bigtable Instance. Downtime is measured based on server side Error Rate.

"Downtime Period" means a period of five consecutive minutes of Downtime with a minimum of 60 requests per minute. Intermittent Downtime for a period of less than five minutes will not be counted towards any Downtime Periods.

"Error Rate" means the number of Valid Requests that result in a response with HTTP Status 50x and Code "Internal Error", "Unknown", or "Unavailable" divided by the total number of Valid Requests during that period. Repeated identical requests do not count towards the Error Rate unless they conform to the Back-off Requirements.

"Financial Credit" means the following for Cloud Bigtable Replicated Instances with Valid Requests using a Multi-Cluster routing policy:

"Monthly Uptime Percentage" means total number of minutes in a month, minus the number of minutes of Downtime suffered from all Downtime Periods in a month, divided by the total number of minutes in a month.

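To make the definition concrete, here is a small Python calculation of Monthly Uptime Percentage for an assumed 30-day month with 25 minutes of Downtime (both figures invented for illustration):

total_minutes = 30 * 24 * 60   # 43,200 minutes in an assumed 30-day month
downtime_minutes = 25          # assumed sum of all Downtime Periods

monthly_uptime = (total_minutes - downtime_minutes) / total_minutes * 100
print(f"Monthly Uptime Percentage: {monthly_uptime:.4f}%")   # 99.9421%
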
"Production-Grade Cloud Bigtable Cluster" means a Cloud Bigtable Cluster with 3 or more nodes provisioned.

"Production-Grade Cloud Bigtable Instance" means a Cloud Bigtable Instance that only contains Production-Grade Cloud Bigtable Clusters.

"Region" means a region described at https://cloud.google.com/about/locations/, as may be updated by Google from time to time.

"Replicated Instance" means a Production-Grade Cloud Bigtable Instance which contains 2 or more Production-Grade Cloud Bigtable Clusters, with each cluster in a different Zone of either the same Region or different Regions.

"Valid Requests" are requests –
for a Replicated Instance using the Multi-Cluster routing policy and that conform to the Documentation and that would normally result in a non-error response.
for a Replicated Instance using a Single-Cluster routing policy that conform to the Documentation and that would normally result in a non-error response.
for a Zonal Instance that conform to the Documentation, and that would normally result in a non-error response.

"Zonal Instance" means a Production-Grade Cloud Bigtable Instance with one Production-Grade Cloud Bigtable Cluster.



Many small businesses sign up with cloud providers without asking for a robust service level agreement (SLA).

A Service Level Agreement (SLA) is the bond for performance negotiated between the cloud services provider and the client. Earlier in cloud computing, all Service Level Agreements were negotiated directly between a client and the service provider.

SLAs provide companies a standard to hold each other accountable in regards to customer support efforts. They also create a goal for employees to meet so they remain productive. Most importantly, they can prove that negotiated promises between companies are being kept. 

Depending on the agreement, failing to meet an SLA (often called an SLA violation) can result in a cash payment and/or a discount to the customer. This compensation is for the business inconvenience that may occur from the poor support experience. 

Service-level agreements, amongst other things, bolster trust in and between organizations – making it clear what needs to be done, to what standard, and when.
In the event of a disaster, your cloud provider should have a plan in place to prevent total loss of your data. Cloud providers should have a section of the SLA that describes their disaster recovery and backup solutions in detail. 

Depending on the provider, they may provide automatic backups and snapshots of your data. If the user is required to set up backup and recovery systems, the SLA should outline that. It may not specifically state how to activate them, but you should be aware if you need to activate them or not.


SLAs should not be made up of incomprehensible legal language; look for terms that guarantee a specified level of performance by the CSP. Understanding the extent of security features, e.g., encryption and data loss prevention, offered by the vendor along with the technical and business features—up-time, resilience, etc.—will help to ensure that your data in the cloud is secured.


Multifactor authentication: Multifactor authentication is an authentication method that allows access to a portal or application only after the user successfully presents two or more pieces of evidence. Two-factor authentication that uses a password as well as a one-time password (OTP) is an example of this.
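
As a rough sketch of how such a one-time password is generated, here is a minimal RFC 6238 (TOTP) implementation in Python using only the standard library. The base32 secret shown is an arbitrary example value, not a real credential.

import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    # Minimal time-based one-time password (RFC 6238): HMAC-SHA1 over the
    # current 30-second time step, then dynamic truncation to 6 digits.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # prints the current 6-digit code

The printed code changes every 30 seconds, which is why a stolen OTP is far less useful to an attacker than a stolen password.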

The front line of defense for any system is encryption. It uses complex algorithms to conceal or encrypt information. To decrypt these files, you must have a confidential encryption key. Encryption helps prevent confidential data from falling into the wrong hands. Encrypting data at rest is a must, while encrypting data in transit is highly advised.

What is an encryption algorithm?
An encryption algorithm is a mathematical procedure for performing encryption on data. Through the use of an algorithm, information is turned into meaningless ciphertext, and a key is required to transform the data back into its original form. Blowfish, AES, RC4, RC5, and RC6 are examples of encryption algorithms.
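
For illustration, here is a minimal Python sketch of symmetric encryption using the third-party cryptography package's Fernet recipe (pip install cryptography; Fernet uses AES internally). The sample plaintext is invented; the key point, echoing the data-loss discussion earlier, is that losing the key means losing the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()    # keep this key secret and backed up
cipher = Fernet(key)

token = cipher.encrypt(b"confidential customer record")
print(token)                   # meaningless ciphertext without the key
print(cipher.decrypt(token))   # original plaintext; requires the key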


Data backup is the process of duplicating data to allow for its retrieval in case of a data loss event. It helps to ensure that data is not lost because of natural disasters, theft, or any other mishap.

A firewall is a network security tool that monitors incoming and outgoing traffic to detect anomalies. It also blocks specific traffic based on a defined set of rules. Cloud-based virtual firewalls help to filter network traffic to and from the internet and secure the data center.


A software bug is an error, flaw or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. In other words, if a program does not perform as intended, it is most likely because of a bug.



The process of finding and fixing bugs is termed "debugging" and often uses formal techniques or tools to pinpoint bugs, and since the 1950s, some computer systems have been designed to also deter, detect or auto-correct various computer bugs during operations.

Most bugs arise from mistakes and errors made in either a program's source code or its design, or in components and operating systems used by such programs. A few are caused by compilers producing incorrect code. A program that contains many bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy (defective). 

Bugs can trigger errors that may have ripple effects. Bugs may have subtle effects or cause the program to crash or freeze the computer. Other bugs qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges.

The top reasons for software bugs:
Lack of communication. Lack of organized communication leads to miscommunication.
Recurring ambiguity in requirements.
Missing process framework.
Programming errors.
Too much rework.
Self-imposed pressures.






Here are possible occurrences that threaten data integrity:--

Data breaches, data locks, and data removals are among the events that can result in the loss of data integrity. Data breaches are incidents in which sensitive data are stolen by individuals who have no authorization to do so. Also referred to as vendor lock-in, data lock is a situation in which customers using a service cannot easily move to a competitor's product or service, usually as a result of incompatible technologies. In the case of data removal, where deletion leads to data loss, recovery may become difficult if the server has broken down or failed.

Guest-hopping attack
In guest-hopping attacks, due to a separation failure between shared infrastructure, an attacker gains access to a virtual machine by penetrating another virtual machine hosted on the same hardware. One possible mitigation is to use forensics and VM debugging tools to observe any attempt to compromise the virtual machine. Another solution is to use the High Assurance Platform (HAP), which provides a high degree of isolation between virtual machines.

Side-channel attack
An attacker mounts a side-channel attack by placing a malicious virtual machine on the same physical machine as the victim machine. Through this, the attacker can gain access to confidential information on the victim machine. The countermeasure to eliminate the risk of side-channel attacks in a virtualized cloud environment is to ensure that no legitimate user's VMs reside on the same hardware as other users' VMs.

Malicious insider
A malicious insider can be a current or former employee or business associate who maliciously and intentionally abuses system privileges and credentials to access and steal sensitive customer information within the network of an organization. Strict privilege planning and security auditing can minimize this security risk that originates from within an organization.

Cookie poisoning
Cookie poisoning means gaining unauthorized access to an application or a webpage by modifying the contents of a cookie. In a SaaS model, cookies contain user identity credential information that allows the applications to authenticate the user's identity. Cookies can be forged to impersonate an authorized user. A solution is to regularly clean up cookies and encrypt the cookie data.

Cookie poisoning is the act of manipulating or forging session cookies for the purpose of bypassing security measures and achieving impersonation and breach of privacy. By forging these cookies, an attacker can impersonate a valid client, and thus gain information and perform actions on behalf of the victim. Or attackers can use forged cookies to trick a server into accepting a new version of the original intercepted cookie with modified values. The ability to forge such session cookies (or, more generally, session tokens) stems from the fact that tokens are not always generated in a secure way.

A cookie is information that a web site puts on your hard disk so that it can remember something about you at a later time. More technically, it is information for future use that is stored by the server on the client side of a client / server communication. Typically, a cookie records your preferences when using a particular site. Cookies stored on your computer's hard drive maintain bits of information that allow web sites you visit to authenticate your identity, speed up your transactions, monitor your behavior, and personalize their presentations for you. 

How do cookies work?

When a user visits a site, the site sends a tiny piece of data, called a cookie, which is stored on the user's computer by their browser. The browser sends the cookie back to the server with every request the browser makes to that server, such as when the user clicks a link to view a different page or adds an item to a shopping basket.

The data stored in the cookie lets the server know with whom it is interacting so it can send the correct information back to the user. Cookies are often used by web servers to track whether a user is logged in or not, and to which account they are logged in. Cookie-based authentication is stateful for the duration of multiple requests and has been the default method for handling user authentication for a long time. It binds the user authentication credentials to the user's requests and applies the appropriate access controls enforced by the web application.

A typical example of a cookie use begins with a user entering their login credentials, which the server verifies are correct. The server then creates a session that is stored in a database, and a cookie containing the session ID is returned to the user's browser. On every subsequent request, the browser returns the cookie data, and the session ID is verified by the server against the database; if it is valid, the request is processed. When the user logs out of the site, the session is usually destroyed on both the client and server side, but if the user has checked the “Keep me logged in” or “Remember me” option, the cookie will persist on the user's computer



Cookies can be accessed by persons unauthorized to do so due to insufficient security measures. An attacker can examine a cookie to determine its purpose and edit it so that it helps them get user information from the website that sent the cookie.

Cross-site scripting (XSS) injection attacks are a common method used to steal session cookies. If attackers can find a page on a site that is vulnerable to XSS injection, they can insert a script into the page that sends them the session cookie of everyone that views the page. The cookie then enables the attackers to impersonate its rightful owner, enabling them to stay logged in to the victim's account for as long as they want, without ever having to enter a password.

Alternative cookie attacks include predicting, brute force hacking or replicating the contents of a valid authentication cookie. Any such forged cookies would enable the attacker to impersonate a site's genuine users.

How can we prevent cookie poisoning?
As cookie poisoning is fairly easy to do, adequate cookie-poisoning protection should detect cookies that were modified on a client machine by verifying that cookies which are sent by the client are identical to the cookies that were set by the server.

Ingrian Networks has developed a patented platform which provides a means for securing cookie authenticity. When cookies pass through the platform, sensitive information is encrypted. A digital signature is created that is used to validate the content in all future communications between the sender and the recipient. If the content is tampered with, the signature will no longer match the content and the server will refuse access.

In addition, web applications should be developed so that certain key parameters are not stored within cookies so as to minimize the damage if they are stolen or forged. 
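
A minimal Python sketch of the signing idea described above, assuming a hypothetical server-side secret and a simple "value|signature" cookie layout (both assumptions for illustration, not any vendor's scheme):

import hashlib
import hmac

SERVER_KEY = b"server-side secret, never sent to the client"  # assumption

def sign_cookie(value):
    # Append an HMAC signature so the server can detect client-side
    # tampering, the same idea as the digital signature described above.
    sig = hmac.new(SERVER_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{sig}"

def verify_cookie(cookie):
    value, _, sig = cookie.rpartition("|")
    expected = hmac.new(SERVER_KEY, value.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during the comparison
    return hmac.compare_digest(sig, expected)

cookie = sign_cookie("session_id=abc123")
assert verify_cookie(cookie)                              # genuine: accepted
assert not verify_cookie("session_id=evil|" + "0" * 64)   # forged: rejected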

Backdoor and debug option

A backdoor is a hidden entrance to an application, created intentionally or unintentionally by developers while coding. A debug option is a similar entry point, often used by developers to facilitate troubleshooting in applications. The problem is that hackers can use these hidden doors to bypass security policies, enter the website, and access sensitive information. To prevent this kind of attack, developers should disable debug options before release.


Cloud browser security
A web browser is a universal client application that uses the Transport Layer Security (TLS) protocol to facilitate privacy and data security for Internet communications. TLS encrypts the connection between web applications and servers, such as a web browser loading a website. Web browsers rely only on TLS encryption and TLS signatures, which are not secure enough to defend against malicious attacks. One solution is to use TLS together with XML-based cryptography in the browser core.

Cloud malware injection attack

A malicious virtual machine or service implementation module, such as a SaaS or IaaS instance, is injected into the cloud system, making the system believe the new instance is valid. If the injection succeeds, user requests are automatically redirected to the new instance, where the malicious code is executed. The mitigation is to perform an integrity check of the service instance before using it for incoming requests in the cloud system.
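
A minimal Python sketch of such an integrity check, assuming the instance image is available as a file and a known-good SHA-256 digest was recorded when the instance was registered (the file name and digest below are illustrative placeholders):

import hashlib

KNOWN_GOOD_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"  # placeholder

def image_digest(path):
    # Hash the image in 1 MiB chunks so large images do not exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if image_digest("service_instance.img") != KNOWN_GOOD_SHA256:
    raise RuntimeError("Integrity check failed: do not route requests here")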

A cloud browser is a cloud-based combination of a web browser application with a virtualized container that implements the concept of remote browser isolation. Commands from the web are executed in a secure container, separate from the user endpoint, and accessed by the remote display protocol. By placing a browser application in the cloud, it becomes more centralized, manageable, cost effective, scalable and protected.

Enhanced security: Because no executable web code ever reaches the user endpoint, malicious code and potential threats are blocked.

Better privacy: Unlike traditional web browsers, cloud browsers protect a user's digital identity and location during each session by passing information through a cloud-based data center.

Implemented cloud service benefits: Similar to other cloud services, cloud browser providers manage the maintenance, scalability and capacity. This gives administrators a single point of command and control.

Reduced need for point solutions: Browsing within the cloud minimizes the need for added functionality such as content filtering, data loss protection (DLP), SSL inspection, endpoint security protection, domain name system (DNS) services, firewalls, and VPNs.

ARP poisoning
Address Resolution Protocol (ARP) poisoning occurs when an attacker exploits a weakness in the ARP protocol to map a network IP address to a malicious MAC address and then update the ARP cache with that malicious mapping. Using static ARP entries can minimize this attack. This tactic works for small networks such as personal clouds, but on large-scale clouds it is easier to use other strategies, such as port security features, to lock a single port (or network device) to a particular IP address.
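
For illustration, here is a minimal Python sketch of an ARP-poisoning watcher built on the third-party scapy package (pip install scapy; it assumes the script runs with packet-capture privileges). It flags any IP address that suddenly announces a different MAC address.

from scapy.all import ARP, sniff

ip_to_mac = {}   # last MAC address seen for each IP

def watch(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:   # op 2 = ARP reply ("is-at")
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in ip_to_mac and ip_to_mac[ip] != mac:
            print(f"ALERT: {ip} changed from {ip_to_mac[ip]} to {mac} "
                  "(possible ARP poisoning)")
        ip_to_mac[ip] = mac

sniff(filter="arp", prn=watch, store=0)   # runs until interrupted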




 Network-level security attacks
Cloud computing largely depends on existing network infrastructure such as LANs, MANs, and WANs, leaving it exposed to security attacks that originate from users outside the cloud or from malicious insiders.

Domain Name System (DNS) attacks

A DNS attack is an exploit in which an attacker takes advantage of vulnerabilities in the domain name system (DNS), which converts hostnames into corresponding Internet Protocol (IP) addresses using a distributed database scheme. DNS servers are subject to various kinds of attacks, since DNS is used by nearly all networked applications, including email, web browsing, eCommerce, Internet telephony, and more. These attacks include TCP SYN flood attacks, UDP flood attacks, spoofed source address/LAND attacks, cache poisoning attacks, and man-in-the-middle attacks.




Domain hijacking

Domain hijacking is defined as changing a domain’s name without the owner or creator’s knowledge or permission. Domain hijacking enables intruders to obtain confidential business data or perform illegal activities such as phishing, where a domain is substituted by a similar website containing private information. One way to avoid domain hijacking is to force a waiting period of 60 days between a change in registration and a transfer to another registrar. 



Another approach is to use the Extensible Provisioning Protocol (EPP), which utilizes a domain registrant-only authorization key as a protection measure to prevent unauthorized name changes.



All cloud-based security issues can be broken down into four categories: data integrity risks, application security, content and policy security, and network-level attacks.

https://static.googleusercontent.com/media/1.9.22.221/en//enterprise/pdf/whygoogle/google-common-security-whitepaper.pdf



SOMEBODY ASKED ME

WHAT IS THIS EXTREME VIOLENCE IN NE STATES ABOUT CAB ?

WELL THIS VIOLENCE IS SPONSORED BY DEEP STATE ( ROTHSCHILD ) FUNDED NGOs.

ONLY ONE PERSON ON THE PLANET CAN TELL YOU WHY JEW ROTHSCHILD INTRODUCED "INNER LINE PERMIT" IN NE STATES AND WHY CHINA WANTS TO ANNEX ARUNACHAL PRADESH..

MY NEXT POST WILL BE ABOUT THIS..

IT WILL MAKE DONALD TRUMP'S OLD BALLS GO TRRR PRRR BRRRR.

AJIT DOVAL-- STOP RESTING ON YOUR PAST DEEP PAKISTAN LAURELS.. MOVE YOUR SCRAWNY ASS !

capt ajit vadakayil
..


  1. THE CAB BILL WAS PASSED BY BOTH HOUSES OF THE PARLIAMENT.. AFTER THE PRESIDENT SIGNS IT, THE ILLEGAL COLLEGIUM JUDICIARY WILL SIT IN REVIEW LIKE GODS..

    ALL THESE JUDGES ARE ALLOWED TO DO IS TO INTERPRET LAWS..

    THESE JUDGES ARE NOT EMPOWERED TO DO JUDICIAL REVIEW OF LAWS PASSED BY THE PARLIAMENT.

    THESE STUPID JUDGES DO NOT KNOW THAT OUR CONSTITUTION PROTECTS ONLY LAW ABIDING INDIAN CITIZENS.. OUR CONSTITUTION DOES NOT PROTECT TRAITOR INDIAN CITIZENS, ILLEGAL MUSLIM ROHINGYA IMMIGRANTS OR ANY ANIMAL EVEN IF IT IS INDIAN.

    WE ASK MODI TO IMMEDIATELY BAN STARE DECISIS, WHERE PAST STUPID JUDGEMENTS OF THE JUDICIARY, SANS CONTEXT, ARE USED AS A LANGOT – NAY—ADDENDUM TO THE CONSTITUTION..

    THE NJAC BILL WAS PASSED UNANIMOUSLY BY BOTH LOK SABHA AND RAJYA SABHA, WITH THE PRESIDENT SIGNING THE LAW.. YET THE ILLEGAL COLLEGIUM JUDICIARY USED AN UNCONSTITUTIONAL JUDICIAL REVIEW PROCESS AND STRUCK IT DOWN..

    JUDICIAL REVIEW EMPOWERS THE ILLEGAL COLLEGIUM JUDICIARY TO DECIDE THE FATE OF LAWS PASSED BY THE ELECTED AND ACCOUNTABLE LEGISLATURE, WHICH REPRESENTS THE SOVEREIGN WILL OF THE PEOPLE...

    JUDGES DO NOT KNOW THAT THE CONSTITUTION EMPOWERS THE PRESIDENT AND STATE GOVERNORS WITH ENORMOUS SUBJECTIVE AND DISCRETIONARY POWERS TO SUSTAIN DHARMA ( NOT BLIND JUSTICE )…

    WE THE PEOPLE WARN THE ELECTED EXECUTIVE— JUDICIAL REVIEW IS AN UNDEMOCRATIC SYSTEM AND IT IS DANGEROUS TO THE WATAN WHEN COLLEGIUM JUDGES ARE CONTROLLED BY THE DEEP STATE..

    THE SUPREME COURT IS ITSELF BOUND BY THE CONSTITUTION OF INDIA AND THE PARLIAMENT CAN AMEND THE CONSTITUTION ANY TIME THEY WANT..

    http://ajitvadakayil.blogspot.com/2018/11/the-indian-constitution-does-not-allow.html

    WE THE PEOPLE ARE ABOVE THE CONSTITUTION.. THE CONSTITUTION CAN NEVER BE USED TO STRAITJACKET WE THE PEOPLE, THE WATAN OR DHARMA..

    WE NEED TO FORM MILITARY COURTS IN INDIA TO HANG TRAITOR JUDGES IN PAYROLL OF THE DEEP STATE.. THESE JUDGES ARE IN CAHOOTS WITH BENAMI MEDIA..

    PEOPLE IN NE STATES CONTROLLED BY JEW ROTHSCHILD ( DEEP STATE ) FUNDED NGOs ARE ENGAGED IN RIOTS.. MY NEXT POST WILL EXPLAIN WHY?

    OUR POTHOLE JOURNALISTS AND STUPID JUDGES DO NOT EVEN KNOW WHAT IS “INNER LINE PERMIT” IN ARUNACHAL PRADESH CREATED BY JEW ROTHSCHILD WHO RULED INDIA ..

    WE KNOW ILLEGAL COLLEGIUM JUDGES HAVE FOREIGN SUPPORT WHEN THEY LEGISLATE AND DO EXTREME JUDICIAL OVERREACH..

    Capt ajit vadakayil
    ..
    .
    1. PUT ABOVE COMMENT IN WEBSITES OF-
      LAW MINISTER PRASAD
      ATTORNEY GENERAL
      LAW MINISTRY CENTRE/ STATES
      CJI BOBDE
      ATTORNEY GENERAL
      ALL SUPREME COURT JUDGES
      ALL STATE HIGH COURT CHIEF JUSTICES
      PMO
      PM MODI
      AJIT DOVAL
      RAW
      IB
      NIA
      ED
      CBI
      AMIT SHAH
      HOME MINISTRY
      DEFENCE MINISTER/ MINISTRY
      ALL 3 ARMED FORCE CHIEFS
      RSS
      AVBP
      VHP
      MOHAN BHAGWAT
      RAM MADHAV
      MUKESH AMBANI
      RATA TATA
      ANAND MAHINDRA
      KUMARAMANGALAMBIRLA
      LAXMI MNARAYAN MITTAL
      AZIM PREMJI
      KAANIYA MURTHY
      RAHUL BAJAJ
      RAJAN RAHEJA
      NAVEEN JINDAL
      GOPICHAND HINDUJA
      DILIP SHANGHVI
      GAUTAM ADANI
      AMISH TRIPATHI
      DEVDUTT PATTANAIK
      CHETAN BHAGAT
      PAVAN VARMA
      RAMACHANDRA GUHA
      ROMILA THAPAR
      IRFAN HABIB
      NIVEDITA MENON
      UDDHAV THACKREY
      RAJ THACKREY
      SONIA GANDHI
      PRIYANKA VADRA
      RAHUL GANDHI
      VIVEK OBEROI
      GAUTAM GAMBHIR
      ASHOK PANDIT
      ANUPAM KHER
      KANGANA RANAUT
      VIVEK AGNIHOTRI
      KIRON KHER
      MEENAKSHI LEKHI
      SMRITI IRANI
      PRASOON JOSHI
      MADHUR BHANDARKAR
      SWAPAN DASGUPTA
      SONAL MANSINGH
      MADHU KISHWAR
      SUDHIR CHAUDHARY
      GEN GD BAKSHI
      SAMBIT PATRA
      RSN SINGH
      GVL NARASIMHA RAO
      JAVED AKHTAR
      ASDDUDDIN OWAISI
      FAZAL GHAFOOL ( MES)
      E SREEDHARAN
      MOHANLAL
      SURESH GOPI
      MAMMOOTTY
      SOLI BABY
      FALI BABY
      KATJU BABY
      SALVE BABY
      PGURUS
      PRAKASH KARAT
      SITARAM YECHURY
      D RAJA
      RANA AYYUB
      SHEHLA RASHID
      FATHER CEDRIC PRAKASH
      ANNA VETTICKAD
      ANNIE RAJA
      JOHN BRITTAS
      SRI SRI RAVISHANKAR
      SADGURU JAGGI VASUDEV
      MATA AMRITANANDA MAYI
      BABA RAMDEV

      WEBSITES OF DESH BHAKTS
      SPREAD ON SOCIAL MEDIA EVERY WHICH WAY.


THIS POST IS NOW CONTINUED TO PART 11 , BELOW--





CAPT AJIT VADAKAYIL
..
