
WHAT ARTIFICIAL INTELLIGENCE CANNOT DO , a grim note to the top 100 intellectuals of this planet , Part 9 - Capt Ajit Vadakayil



THIS POST IS CONTINUED FROM PART 8, BELOW--




DO AN EXPERIMENT –
COLLECT ALL AI MACHINES AND LET THEM DECIDE ON BASIS OF DATA , IF THE HOLOCAUST REALLY TOOK PLACE ..  

HOW MANY PEOPLE WERE KILLED BY JEW EISENHOWER AND HOW MANY BY JEW HITLER ?.





Deliberate bias exists in AI, but it is not easy to detect in a domain that boils down to ones and zeroes.

As AI gains more and more ground, we must deal with deliberate biases that may not be obvious in an algorithm's outcome.

Whether AI is used to improve defenses or to offload security tasks, it is essential that we can trust that the outcome it gives us is not biased.

In  cyber security, AI bias is a form of risk—the more information, context, and expertise you feed your AI, the more you’re able to manage security risks and blind spots. Otherwise, various types of bias, from racial and cultural prejudices to contextual, industry-related forms of bias, can impact the AI.

When AI models are based on false security assumptions or unconscious biases, they do more than threaten a company’s security posture. They can also cause significant business impact. AI that is tuned to qualify benign or malicious network traffic based on non-security factors can miss threats, allowing them to waltz into an organization’s network.  It can also overblock network traffic, barring what might be business-critical communications.

As an example, imagine that an AI developer views one region of the world as safe, because it’s an ally nation, and another as malicious, because it’s an authoritarian regime. The developer therefore allows all the network traffic from the former to enter, while blocking all traffic from the latter. 

This type of aggregate bias can cause AI to overlook other security contexts that might be more important.

Mistrained AI-powered security systems may fail to identify something that should be identified as a fraud element, a vulnerability, or a breach. Biased rules within algorithms inevitably generate biased outcomes.

Data itself can create bias when the source materials aren’t diverse. AI that’s fed biased data is going to understand only a partial view of the world and make decisions based on that narrow understanding. In cybersecurity, that means threats will be overlooked. 

For instance, if a spam classifier isn't trained on a representative set of benign emails, such as emails in various languages or with linguistic idiosyncrasies like slang, it will inevitably produce false positives. Even common, intentional misuse of grammar, spelling, or syntax can prompt a spam classifier to block benign text.

AI models can suffer from tunnel vision, too. As a cyber threat’s behavioral pattern varies based on factors like geography or business size, it’s important to train AI on the various environments that a threat operates in and the various forms it takes on. 

For instance, in a financial services environment, if you build AI to only detect identity-based issues, it won't recognize malicious elements outside that setting. Lacking broad coverage, this AI model would be unable to identify threats outside the niche threat pattern it was taught.

If businesses are going to make AI an integral asset in their security arsenal, it’s essential they understand that AI that is not fair and accurate cannot be effective. One way to help prevent bias within AI is to make it cognitively diverse: The computer scientists developing it, the data feeding it, and the security teams influencing it should have multiple and diverse perspectives. 

Through cognitive diversity, the blind spot of one expert, one data point, or one approach can be covered by another, getting as close to no blind spots—and no bias—as possible.

In cyber security, you have to look at the elements producing the outcome. That is where you monitor for bias—and that is where you correct it.

The biggest problem with machine learning systems is that we ourselves don't quite understand everything they're supposedly learning, nor are we certain they're learning everything they should or could be. We've created systems that draw mostly, though never entirely, correct inferences from ordinary data, by way of logic that is by no means obvious.

One of the things that human beings tend to spend a lot of their time doing is determining whether a causal relationship or a correlation exists between two series of events. For instance, the position of the moon relative to Earth directly correlates with the level of the ocean tides. If the relationship between these two series over time were plotted on an x/y chart, the points would fall approximately on a sinusoidal curve. It's not too difficult for someone to write a function that describes such a curve.
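As an aside, recovering such a curve from noisy observations is straightforward; here is a minimal Python sketch with scipy, where the amplitude, period, and noise level are invented purely for illustration:

# A minimal sketch: fitting a sinusoid to synthetic "tide height" data.
# The amplitude, period, and noise level below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def tide_model(t, amplitude, period, phase, offset):
    """Simple sinusoidal model of tide height as a function of time (hours)."""
    return amplitude * np.sin(2 * np.pi * t / period + phase) + offset

# Synthetic observations: a 12.4-hour cycle plus measurement noise.
t = np.linspace(0, 72, 200)
observed = 1.5 * np.sin(2 * np.pi * t / 12.4) + 0.3 * np.random.randn(t.size)

# Recover the parameters of the curve from the noisy data.
params, _ = curve_fit(tide_model, t, observed, p0=[1.0, 12.0, 0.0, 0.0])
print("fitted amplitude, period, phase, offset:", params)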

The whole point of machine learning is to infer the relationships between objects when, unlike the tides, it isn't already clear to human beings what those relationships are. Machine learning is put to use when linear regression or best-fit curves are insufficient -- when math can't explain the relationship. But perhaps that should have been our first clue: If no mathematical correlation exists, then shouldn't any other kind of relationship we can extrapolate be naturally weaker? 

A convolutional neural network (CNN) is a type of learning system that builds up an internal representation incorporating aspects of all the data it's been given. So if a CNN is taught to recognize a printed or handwritten character of text, it's because it has seen many examples of every such character and has built up a "learned" representation of each one that captures its basic features.
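As an illustration of the idea rather than any particular system, here is a minimal sketch of a small CNN for 28x28 grayscale character images, assuming TensorFlow/Keras is installed; the layer sizes are arbitrary:

# A minimal sketch of a CNN for classifying 28x28 grayscale character images.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),  # learn local stroke features
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # downsample the feature maps
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),               # e.g. ten character classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # trained on many labeled examples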

People have a tendency to fear what they don't understand. Nothing amplifies those fears more profoundly than the web, whose contributors have recently speculated that bias may be imprinted upon machine learning algorithms by programmers ( payroll of the deep state ) with nefarious motives. 

The algorithms and structures that govern AI will only be effective if they do not reflect the subconscious biases of the programmers who create them.

As algorithms are trusted more often to make the same types of deductions or inferences that humans would make, at some point, you might think they'd start making the same types of errors. If an artificial intelligence can mimic the reasoning capacity of human beings, perhaps it's inevitable it will adopt some of their mental foibles as well. At the extreme, it could appear that an AI has a subconscious motivation.

What most people are worried about is that, when they run the algorithm, some mysterious or unknown input or stimulus changes the output to something outside the margin of error, and you may or may not be aware of it.

At the root of all machine intelligence is the attempt to predict decisions better. If a decision gets distorted in some way, whatever process that decision is a part of can potentially lead to an incorrect answer or a sub-optimal path in the decision tree.

If human biases truly are imprinted upon AI algorithms, either subconsciously or through a phenomenon we don't yet understand, then what's to stop that same phenomenon from interfering with humans tasked with correcting that bias?

Yet at the highest levels of public discussion today, the source of error in neural network algorithms is being treated not as a mathematical factor but a subliminal influence.

The human mind evolved survival strategies over countless generations, all of which culminated in people's capability to make snap-judgment, rash, risk-averse decisions. Humans learned to leap to conclusions, in other words, when they didn't have time to think about it.

Machine learning systems are, by design, not rule-based. Indeed, their entire objective is to determine what the rules are or might be, when we don't know them to begin with. If human cognitive biases actually can imprint themselves upon machine learning, their only way into the system is through the data.

In machine learning, bias is a calculable estimate of the degree to which inferences made about a set of data tend to be wrong. By "wrong" in this context, we don't mean improper or unseemly, like the topic of a political argument on Twitter, but rather inaccurate. 

In the mathematical sense, there may be any number of ways to calculate bias, but here is one methodology that has the broadest bearing in the context of AI software: Quantitatively, bias in a new algorithm is the difference between its determined rate of error and the error rate of an existing, trusted algorithm in the same category. Put another way, when we get down to 0s and 1s, all bias is relative.
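A minimal sketch of that calculation, with made-up labels and predictions, might look like this:

# A minimal sketch of the relative-bias idea described above:
# a new model's bias measured against the error rate of a trusted baseline.
def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(labels)

def relative_bias(new_preds, trusted_preds, labels):
    """Difference between the new model's error rate and the trusted model's."""
    return error_rate(new_preds, labels) - error_rate(trusted_preds, labels)

# Toy example with made-up labels and predictions.
labels  = [1, 0, 1, 1, 0, 1, 0, 0]
trusted = [1, 0, 1, 0, 0, 1, 0, 0]   # 1 error  -> 12.5% error rate
new     = [1, 1, 1, 0, 0, 1, 1, 0]   # 3 errors -> 37.5% error rate
print(relative_bias(new, trusted, labels))  # 0.25, i.e. 25 points worse than the baseline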

ML models are opaque and inherently biased

Machine learning systems are "black boxes" -- devices with clear inputs and outputs, but offering no insight into the connections between the two.

Indeed, neural networks are, by design, non-deterministic. Like human minds, though on a much more limited scale, they can make inferences, deductions, or predictions without revealing how. 

That's a problem for an institution whose algorithms determine whether to approve an applicant's request for credit. Laws require credit reporting agencies to be transparent about their processes.

That becomes almost impossible if the financial institutions controlling the data on which they report can't explain what's going on themselves.

So if an individual's credit application is turned down, it would seem the processes that led to that decision belong to a mechanism that's opaque by design.

Machine learning algorithms themselves may amplify bias if they make predictions that are more skewed than the training data. Such amplification often occurs through two mechanisms:--

1) incentives to predict observations as belonging to the majority group, and 2) runaway feedback loops.

In order to maximize predictive accuracy when faced with an imbalanced dataset, machine learning algorithms are incentivized to put more learning weight on the majority group, thus disproportionately predicting observations to belong to that majority group.
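A minimal sketch of this incentive, using scikit-learn on synthetic and deliberately imbalanced data (the numbers are invented): the model reports high accuracy while almost never predicting the minority class.

# A minimal sketch of the majority-group incentive: on a heavily imbalanced
# dataset, a model can score high accuracy while rarely predicting the minority class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n_major, n_minor = 950, 50                              # 95% vs 5% of observations
X = np.vstack([rng.normal(0.0, 1.0, (n_major, 2)),
               rng.normal(1.0, 1.0, (n_minor, 2))])     # the two classes overlap heavily
y = np.array([0] * n_major + [1] * n_minor)

model = LogisticRegression().fit(X, y)
preds = model.predict(X)
print("accuracy:        ", accuracy_score(y, preds))    # looks impressive
print("minority recall: ", recall_score(y, preds))      # often close to zero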

Feedback loops are especially problematic when sub-groups in the training data exhibit large statistical differences (e.g. one precinct has a much higher crime rate than others); a model trained on such data will quickly “run away” and make predictions that fall into the majority group only, thereby generating ever-more lopsided data that are fed back into the model.

Even when sub-groups are statistically similar, feedback loops can still lead to noisy and less accurate predictions. Algorithms where the predictive outcome determines what feedback the algorithm receives—e.g. recidivism prediction, language translation, and social media news feeds—should always be diligently monitored for feedback-loop bias.

Bias in data and algorithms are interrelated. When an algorithm is fed training data where one group dominates the sample, it is incentivized to prioritize learning about the dominant group and over-predict the number of observations that belong to the dominant group. 

This tendency is exacerbated when the model’s predictive accuracy is relatively low. Conversely, if the data were balanced relative to the predictive accuracy, the model would have nothing to gain by over-predicting the dominant group.

Bias can also be perpetuated through a feedback loop if the model’s own biased predictions are repeatedly fed back into it, becoming its own biased source data for the next round of predictions. In the machine learning context, we no longer just face the risk of garbage in, garbage out—when there’s garbage in, more and more garbage may be generated through the ML pipeline if one does not monitor and address potential sources of bias.

One key to de-biasing data is to ensure that a representative sample is collected in the first place. Bias from sampling errors can be mitigated by collecting larger samples and adopting data collection techniques such as stratified random sampling. 
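A minimal sketch of stratified sampling with scikit-learn, on invented class counts, shows how the sample preserves the population's class proportions:

# A minimal sketch of stratified sampling: the class proportions in the
# drawn sample mirror those of the full dataset.
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([0] * 900 + [1] * 100)          # 90% / 10% population
X = np.arange(len(y)).reshape(-1, 1)         # placeholder features

X_sample, _, y_sample, _ = train_test_split(
    X, y, train_size=0.2, stratify=y, random_state=42)

print("population minority share:", y.mean())          # 0.10
print("sample minority share:    ", y_sample.mean())   # also ~0.10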

While sampling errors won’t go away entirely, rapid data growth—2.5 quintillion bytes per day and counting—and growing data collection capability have made it easier than ever to mitigate sampling errors compared to the past.

Bias from non-sampling errors is much more varied and harder to tackle, but one should still strive to minimize these kinds of errors through means such as proper training, establishing a clear purpose and procedure for data collection, and conducting careful data validation. For example, in response to an image-classification database that contained disproportionately few wedding images from India, Google deliberately sought out contributions from India to make the database more representative.

For algorithms that make classification decisions among different groups, it is also important to consider the performance of the model against metrics other than accuracy—for example, the false positive rate or false negative rate.

For example, consider a criminal-justice algorithm used to assign risk scores for recidivism to defendants. Someone is labeled as “high risk” if they have a ⅔ predicted chance of reoffending within two years. Suppose the training data only contain two groups: Group A and Group B; each group has a different underlying profile for recidivism. In this example, possible alternative model metrics would be:

False positive rate: the probability of labeling someone as high risk, even though they did not reoffend.

False negative rate: the probability of labeling someone as low risk, even though they did reoffend.
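A minimal sketch of computing these two rates from a binary confusion matrix with scikit-learn (the labels and predictions are illustrative):

# A minimal sketch: false positive and false negative rates from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # 1 = reoffended, 0 = did not
y_pred = [0, 1, 1, 0, 0, 1, 0, 1, 0, 0]   # 1 = labeled "high risk"

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
false_positive_rate = fp / (fp + tn)   # labeled high risk but did not reoffend
false_negative_rate = fn / (fn + tp)   # labeled low risk but did reoffend
print(false_positive_rate, false_negative_rate)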

One can then apply model constraints to make the algorithm satisfy some fairness rule. Common rules include:--
Predictive parity: Let algorithms make predictions without considering characteristics such as gender and race. In the recidivism example, white and black defendants would be held to the same risk scoring standards.

Well-calibrated: In situations with more than one predicted outcome (for example, risk scores on a scale of one to nine instead of simply high versus low risk), this would mean the proportion predicted to reoffend is the same across groups for every possible score value.

Error rate balance: Requiring that certain performance measures be held equal across groups. In the recidivism example, the algorithm would be required to achieve the same false positive rate or the same false negative rate across groups A and B.
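As a rough sketch of how an error rate balance check might look in practice, the snippet below compares the false positive rate of the same predictions across two groups; the data is synthetic and for illustration only.

# A minimal sketch of checking "error rate balance": compare the false
# positive rate of one model's predictions across two groups.
import numpy as np
from sklearn.metrics import confusion_matrix

def false_positive_rate(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fp / (fp + tn)

y_true = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = group == g
    print(g, "FPR:", false_positive_rate(y_true[mask], y_pred[mask]))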

Collecting data that perfectly represent all subgroups in a population, while certainly helpful, is not a panacea. If the underlying systems being modeled are themselves unjust then the model results will still end up reflecting these biased behaviors. Conversely, removing bias from ML, though it may generate less ethically troubling results, will not fix the underlying social injustices either.

Employees should be trained on identifying their own biases in order to increase their awareness of how their own assumptions and perceptions of the world influence their work. In Israel Palestinians are getting screwed..

ML is not a magical solution that will solve all of the world's problems. Recognizing that, like any other tool, it has its limitations and weaknesses will help with maintaining a more realistic perspective on what these models can (and cannot) achieve.


WE NEED ( LIKE ON A CIGARETTE PACKET ) WARNINGS ABOUT THE RISKS OF USING AI IN SECURITIES AND EXCHANGE COMMISSION FILINGS.


Machine learning is a sub-field of AI.

It refers to computer algorithms that have the ability to "learn", or improve in performance over time, on some task.

Essentially, it is a machine that learns from data over time. This learning is through a statistical process that starts with a body of data and tries to derive a rule or procedure that explains the data or can predict future data.

The resulting output is called a model.

This is different from the traditional approach to artificial intelligence, which involved a programmer trying to translate the way humans make decisions into software code. The vast majority of artificial intelligence in the world today is powered by machine learning.

Currently, many ML systems are more accurate than humans at a variety of narrow tasks, from driving assistance to diagnosing certain diseases.



Machine vision is a specific ML approach that allows computers to recognize and evaluate images.   It is used by Google to help you search images and by Facebook to automatically tag people in photos.

Machine vision is the use of a camera or multiple cameras to inspect and analyze objects automatically, usually in an industrial or production environment. The data acquired then can be used to control a process or manufacturing activity.

A machine vision system uses a camera to view an image; computer vision algorithms then process and interpret the image before instructing other components in the system to act upon that data. But a machine vision system doesn't work without a computer and specific software at its core.

Machine vision is the ability of a computer to see; it employs one or more video cameras, analog-to-digital conversion (ADC) and digital signal processing (DSP). The resulting data goes to a computer or robot controller. Machine vision is similar in complexity to voice recognition.

Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry.

Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems.

The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance.

The overall machine vision process includes planning the details of the requirements and project, and then creating a solution. During run-time, the process starts with imaging, followed by automated analysis of the image and extraction of the required information.


In the post below I show an award-winning computer vision project my son did at Cornell.






Both computer vision and machine vision use image capture and analysis to perform tasks with speed and accuracy human eyes can’t match. With this in mind, it’s probably more productive to describe these closely related technologies by their commonalities—distinguishing them by their specific use cases rather than their differences.

Computer vision and machine vision systems share most of the same components and requirements:--

An imaging device containing an image sensor and a lens
An image capture board or frame grabber may be used (in some digital cameras that use a modern interface, a frame grabber is not required)
Application-appropriate lighting
Software that processes the images via a computer or an internal system, as in many “smart” cameras

So what’s the actual difference?

Computer vision refers to automation of the capture and processing of images, with an emphasis on image analysis.

In other words, CV’s goal is not only to see, but also to process and provide useful results based on the observation.


Machine vision refers to the use of computer vision in industrial environments, making it a subcategory of computer vision.




Detecting defects and quickly mitigating the cause of those defects is an essential aspect of any manufacturing process. Companies have turned to machine vision solutions to proactively address the occurrence and root cause of defects. 

By installing cameras on the production line and training a machine learning model to identify the complex variables that define a good product vs. a bad product, it’s possible to identify defects in real time and determine where in the manufacturing process the defects are occurring so proactive steps can be taken.
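The trained model itself is beyond a short example, but as a deliberately simplified sketch of the inspection loop, the snippet below (assuming OpenCV is installed; the file names and threshold are hypothetical) flags frames that deviate too much from a known-good reference image rather than using a learned model:

# A deliberately simplified inspection sketch: flag frames that differ too
# much from a known-good reference image (not the trained ML model described above).
import cv2
import numpy as np

reference = cv2.imread("good_product.png", cv2.IMREAD_GRAYSCALE)  # golden sample (hypothetical file)
DEFECT_THRESHOLD = 0.02   # fraction of pixels allowed to deviate (illustrative)

def inspect(frame_path):
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.resize(frame, (reference.shape[1], reference.shape[0]))
    diff = cv2.absdiff(frame, reference)                  # pixel-wise difference
    deviating = np.count_nonzero(diff > 40) / diff.size   # share of strongly changed pixels
    return "DEFECT" if deviating > DEFECT_THRESHOLD else "OK"

print(inspect("line_camera_frame_001.png"))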

To achieve your computer or machine vision goals, you first need to train the machine learning models that make your vision system “intelligent.” And for your machine learning models to be accurate, you need high volumes of annotated data, specific to the solution you’re building. 

There are free, public-use datasets available that work well for testing algorithms or performing simple tasks, but for most real-world projects to succeed, specialized datasets are required to ensure they contain the right metadata. 

For example, implementing computer vision models within autonomous vehicles requires extensive image annotation to label people, traffic signals, cars, and other objects. Anything less than total precision is going to be a huge problem for a self-driving car.

Companies may choose to deploy an in-house annotation team to perform this type of image annotation, but it can be costly and can divert valuable employees from working on core technology.

Annotation literally means labeling given data, such as an image or a video, for later reference. This is done by assigning keywords or tags to particular areas of the text, image, or other content.

Annotation can be applied to images, text, video, or audio. Image annotation is done manually by humans using image annotation tools, and the large volume of labeled data produced is then stored for training.
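As an illustration only, a single bounding-box annotation record might look something like the following; the field names are invented and do not follow any specific tool's schema:

# A minimal sketch of a bounding-box annotation record for a street scene.
# The file name and field names are illustrative, not a real tool's format.
annotation = {
    "image": "frame_000123.jpg",
    "width": 1920,
    "height": 1080,
    "objects": [
        {"label": "pedestrian",     "bbox": [704, 388, 60, 172]},   # [x, y, w, h] in pixels
        {"label": "traffic_signal", "bbox": [1210, 95, 34, 88]},
        {"label": "car",            "bbox": [322, 450, 410, 260]},
    ],
    "annotator": "human",   # manual labeling, as described above
}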


Although the line between CV and MV has blurred, both are best defined by their use cases. Computer vision is traditionally used to automate image processing, and machine vision is the application of computer vision in real-world interfaces, such as a factory line.


Computer vision is a form of artificial intelligence where computers can “see” the world, analyze visual data and then make decisions from it or gain understanding about the environment and situation. One of the driving factors behind the growth of computer vision is the amount of data we generate today that is then used to train and make computer vision better. 

Our world has countless images and videos from the built-in cameras of our mobile devices alone. And while images can include photos and videos, visual data can also come from thermal or infrared sensors and other sources. Along with a tremendous amount of visual data (more than 3 billion images are shared online every day), the computing power required to analyze the data is now accessible and more affordable.

As the field of computer vision has grown with new hardware and algorithms, so have the accuracy rates for object identification. In less than a decade, today's systems have gone from 50 percent to 99 percent accuracy, making them more accurate than humans at quickly reacting to visual inputs.

One of the critical components to realizing all the capabilities of artificial intelligence is to give machines the power of vision. To emulate human sight, machines need to acquire, process and analyze and understand images. The tremendous growth in achieving this milestone was made thanks to the iterative learning process made possible with neural networks. It starts with a curated dataset with information that helps the machine learn a specific topic.

Here are some of the most common examples of computer vision in practice today:--


      Autonomous vehicles
Computer vision is necessary to enable self-driving cars. Manufacturers such as Tesla, BMW, Volvo, and Audi use multiple cameras, lidar, radar, and ultrasonic sensors to acquire images from the environment so that their self-driving cars can detect objects, lane markings, signs and traffic signals to safely drive.  

      Google Translate app
All you need to do to read signs in a foreign language is to point your phone’s camera at the words and let the Google Translate app tell you what it means in your preferred language almost instantly. By using optical character recognition to see the image and augmented reality to overlay an accurate translation, this is a convenient tool that uses computer vision.

      Facial recognition
China is definitely on the cutting edge of using facial recognition technology, and they use it for police work, payment portals, security checkpoints at the airport and even to dispense toilet paper and prevent theft of the paper at Tiantan Park in Beijing, among many other applications.

      Healthcare
Since 90 percent of all medical data is image-based, there is a plethora of uses for computer vision in medicine. From enabling new diagnostic methods that analyze X-rays, mammography, and other scans, to monitoring patients to identify problems earlier and assisting with surgery, expect that our medical institutions, professionals, and patients will benefit from computer vision today and even more in the future as it is rolled out in healthcare.

      Real-time sports tracking
Ball and puck tracking on televised sports has been common for a while now, but computer vision is also helping with play and strategy analysis, player performance and ratings, and tracking brand sponsorship visibility in sports broadcasts.

      Agriculture
There are semi-autonomous combine harvesters that use artificial intelligence and computer vision to analyze grain quality as it is harvested and to find the optimal route through the crops. There's also great potential for computer vision to identify weeds so that herbicides can be sprayed directly on them instead of on the crops, which is expected to reduce the amount of herbicide needed by 90 percent.

      Manufacturing
Computer vision is helping manufacturers run more safely, intelligently and effectively in a variety of ways. Predictive maintenance is just one example where equipment is monitored with computer vision to intervene before a breakdown would cause expensive downtime.  Packaging and product quality are monitored, and defective products are also reduced with computer vision.


There are already a tremendous number of real-world applications for computer vision, and the technology is still young. As humans and machines continue to partner, the human workforce will be freed up to focus on higher-value tasks because machines will automate processes that rely on image recognition.



I WAS THE FIRST TO INTRODUCE PREDICTIVE MAINTENANCE AT SEA


http://ajitvadakayil.blogspot.com/2010/12/predictive-maintenance-on-chemical.html



Predictive analytics is the use of data, statistical algorithms and machine learning techniques to identify the likelihood of future outcomes based on historical data. The goal is to go beyond knowing what has happened to providing a best assessment of what will happen in the future.

Predictive analytics uses historical data to predict future events. Typically, historical data is used to build a mathematical model that captures important trends. That predictive model is then used on current data to predict what will happen next, or to suggest actions to take for optimal outcomes.
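A minimal sketch of that flow with scikit-learn, using made-up historical numbers, fits a model on past observations and applies it to future points:

# A minimal sketch of the historical-data-to-prediction flow.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical data: e.g. months of operation vs. an observed wear measurement (invented).
months_history = np.array([[1], [2], [3], [4], [5], [6]])
wear_history   = np.array([0.8, 1.7, 2.3, 3.2, 3.9, 4.8])

model = LinearRegression().fit(months_history, wear_history)   # the resulting "model"

# Use the trained model on new data to suggest what will happen next.
print(model.predict(np.array([[9], [12]])))   # projected wear at months 9 and 12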

One predictive algorithm widely used for text classification, which involves high-dimensional training data sets, is Naive Bayes. It is a simple algorithm known for its effectiveness in quickly building models and making predictions, and it is a primary choice for solving text classification problems.
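A minimal sketch of Naive Bayes text classification with scikit-learn, using invented toy messages and labels:

# A minimal sketch: a bag-of-words Naive Bayes spam/ham classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",
    "limited offer claim your reward",
    "meeting moved to 3pm tomorrow",
    "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(messages, labels)
print(classifier.predict(["claim your free reward", "see you at the meeting"]))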

The poetically named "random forest" is one of data science's most-loved prediction algorithms.

A random forest is a classification algorithm consisting of many decision trees. It uses bagging and feature randomness when building each individual tree to try to create an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree.


Random forest combines hundreds of decision trees in order to arrive at a better prediction than a single tree could make by itself.









A collection of decision trees is called a random forest. To classify a new object based on its attributes, each tree produces a classification and "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest).
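A minimal sketch of a random forest voting over many trees, using scikit-learn and its built-in iris dataset:

# A minimal sketch: 100 decision trees vote on each test sample.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

forest = RandomForestClassifier(n_estimators=100, random_state=42)  # 100 trees
forest.fit(X_train, y_train)

print("test accuracy:", forest.score(X_test, y_test))
# Each fitted tree is available individually; the forest's prediction is the
# class that receives the most votes across them.
print("number of trees:", len(forest.estimators_))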

One big advantage of random forest is that it can be used for both classification and regression problems, which form the majority of current machine learning systems.

Random forest adds additional randomness to the model, while growing the trees. Instead of searching for the most important feature while splitting a node, it searches for the best feature among a random subset of features. This results in a wide diversity that generally results in a better model.

Therefore, in random forest, only a random subset of the features is taken into consideration by the algorithm for splitting a node. You can even make trees more random by additionally using random thresholds for each feature rather than searching for the best possible thresholds (like a normal decision tree does).

While a random forest is a collection of decision trees, there are some differences between a random forest and a single decision tree.

If you input a training dataset with features and labels into a decision tree, it will formulate some set of rules, which will be used to make the predictions.

"Deep" decision trees might suffer from overfitting. Most of the time, random forest prevents this by creating random subsets of the features and building smaller trees using those subsets. Afterwards, it combines the subtrees. It's important to note this doesn’t work every time and it also makes the computation slower, depending on how many trees the random forest builds.

One of the biggest advantages of random forest is its versatility. It can be used for both regression and classification tasks, and it’s also easy to view the relative importance it assigns to the input features.

Random forest is also a very handy algorithm because the default hyperparameters it uses often produce a good prediction result. Understanding the hyperparameters is pretty straightforward, and there aren't that many of them.
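A minimal sketch of both points with scikit-learn: the defaults are left untouched, and the fitted forest reports the relative importance of each input feature.

# A minimal sketch: default hyperparameters plus the fitted feature importances.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
forest = RandomForestClassifier()          # defaults often work reasonably well
forest.fit(data.data, data.target)

print("n_estimators:", forest.n_estimators, "| max_features:", forest.max_features)
for name, importance in zip(data.feature_names, forest.feature_importances_):
    print(f"{name:25s} {importance:.3f}")   # relative importance of each input feature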

One of the biggest problems in machine learning is overfitting, but most of the time this won’t happen thanks to the random forest classifier. If there are enough trees in the forest, the classifier won’t overfit the model.   

The main limitation of random forest is that a large number of trees can make the algorithm too slow and ineffective for real-time predictions. In general, these algorithms are fast to train, but quite slow to create predictions once they are trained. 

A more accurate prediction requires more trees, which results in a slower model. In most real-world applications, the random forest algorithm is fast enough but there can certainly be situations where run-time performance is important and other approaches would be preferred.

And, of course, random forest is a predictive modeling tool and not a descriptive tool, meaning if you're looking for a description of the relationships in your data, other approaches would be better.

The random forest algorithm is used in a lot of different fields, like banking, the stock market, medicine and e-commerce.

In finance, for example, it is used to detect customers more likely to repay their debt on time, or use a bank's services more frequently. In this domain it is also used to detect fraudsters out to scam the bank. In trading, the algorithm can be used to determine a stock's future behavior.

In the healthcare domain it is used to identify the correct combination of components in medicine and to analyze a patient’s medical history to identify diseases.

Random forest is used in e-commerce to determine whether a customer will actually like the product or not.

Random forest is a great algorithm to train early in the model development process, to see how it performs. Its simplicity makes building a "bad" random forest a tough proposition.

The algorithm is also a great choice for anyone who needs to develop a model quickly. On top of that, it provides a pretty good indicator of the importance it assigns to your features.

Random forests are also very hard to beat performance wise. Of course, you can probably always find a model that can perform better, like a neural network for example, but these usually take more time to develop, though they can handle a lot of different feature types, like binary, categorical and numerical.

Overall, random forest is a (mostly) fast, simple and flexible tool, but not without some limitations.



Cloud infrastructure has to do with the hardware and software components required to ensure proper implementation of a cloud computing model. Critical examples of what makes up this infrastructure include networking and virtualization software, storage, and servers.

Before discussing what makes up cloud infrastructure, we would first have to define what cloud computing is. 

Cloud Computing in Simple Terms
Cloud computing is defined as a process through which computing power (e.g., RAM, network speed, CPU) is delivered as a service over a network, as opposed to physically providing it at an individual's location.

Common examples of such cloud computing systems are Azure, Amazon Web Services (AWS), IBM Cloud, and Google Cloud.

A rough analogy for how cloud computing works is traveling by bus. Each passenger (alighting at a different stop) is given a ticket which keeps them seated in a position until they reach their destination. Cloud computing is quite like the bus, which takes different people (i.e., data) to different places (i.e., users), allowing each person to use its service at a minimal fixed cost.



Cloud computing has become essential in today’s world, as data storage has become one of the top priorities in each field. This is because lots of businesses spend hefty amounts getting and protecting their data, requiring a reliable IT support structure. 

Since most companies, e.g., small and medium scale businesses cannot afford in-house infrastructures, which cost a lot, cloud computing serves as the middle point. As a matter of fact, because of how efficient this form of data storage is, as well as the low cost of maintenance, big businesses are rapidly being attracted to it as well!

With an in-house IT server, lots of attention has to be paid to ensure there are no glitches in the system. Should there be any faults, you risk losing a lot. It is simply more cost-effective to go with cloud computing and the infrastructure that comes with it.

Cloud computing offers three major types of services, and they are:---

Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)

Depending on business requirements, one or more of these cloud computing offers are utilized by companies.   Data integrity is the basis to provide cloud computing service such as SaaS, PaaS, and IaaS.



SaaS (Software as a Service)


SaaS allows people to use cloud-based web applications.

Software as a Service is a software distribution model whereby applications hosted by a service provider or vendor in the cloud are made available to users over that same cloud.

This is becoming a trendy delivery model, as opposed to buying a software application and installing it on your computer as was typical in the past. Using SaaS, you use the software as a monthly subscriber.

Through this service, you get all of your regular duties done, such as sales, accounting, planning, and invoicing.


Software as a Service (SaaS) uses applications (such as word processors or spreadsheets) based on the cloud. This service offers greater data security (in the event of a hardware crash) and keeps information readily accessible from any connected computer.

One major advantage of the public model is its relative simplicity, especially when considering the prospect of a business’s entire staff using a variety of SaaS applications to perform their daily work.


SaaS services enable rapid development and implementation of applications to increase productivity, open new markets for the IT development service industry, and integrate applications with various devices; increased use of these SaaS services will also increase the use of internet bandwidth.

SaaS software is owned, delivered, and managed remotely by one or more providers. Software-as-a-Service, or SaaS, is a popular way of accessing and paying for software: instead of installing software on your own servers, SaaS companies enable you to rent software that is hosted for you, typically for a monthly or yearly subscription fee.

To ensure your organization’s privacy and security is intact, verify the SaaS provider has secure user identity management, authentication, and access control mechanisms in place. Also, check which database privacy and security laws they are subject to.

SaaS is like going by bus. Buses have assigned routes, and you share the ride with other passengers.

Email services such as Gmail and Hotmail are examples of cloud-based SaaS services. Other examples of SaaS services are office tools (Office 365 and Google Docs), customer relationship management software (Salesforce), event management software (Planning Pod), and so on.

SaaS services are usually available with a pay-as-you-go (which means subscription) pricing model. All software and hardware are provided and managed by a vendor, so you don't need to install or configure anything. The application is ready to go as soon as you get your login and password.

SaaS solutions can be used for:--

Personal purposes. Millions of individuals all over the world use email services (Gmail, Hotmail, Yahoo), cloud storage services (Dropbox, Microsoft OneDrive), cloud-based file management services (Google Docs), and so on. People may not realize it, but all of these cloud services are actually SaaS services.

Business. Companies of various sizes may use SaaS solutions such as corporate email services (Gmail is available for businesses, for example), collaboration tools (Trello), customer relationship management software (Salesforce, Zoho), event management software (EventPro, Cvent), and enterprise resource planning software (SAP S/4HANA Cloud ERP).

SaaS services offer plenty of advantages to individuals and businesses:--

Access to applications from anywhere. Unlike on-premises software, which can be accessed only from a computer (or a network) it’s installed on, SaaS solutions are cloud-based. Thus, you can access them from anywhere there’s internet access, be it your company’s office or a hotel room.

Can be used from any device. Cloud-based SaaS services can be accessed from any computer. You only need to sign in. Many SaaS solutions have mobile apps, so they can be accessed from mobile devices as well.

Automatic software updates. You don’t need to bother updating your SaaS software, as updates are carried out by a cloud service vendor. If there are any bugs or technical troubles, the vendor will fix them while you focus on your work instead of on software maintenance.

Low cost. Compared to on-premises software, SaaS services are rather affordable. There’s no need to pay for the whole IT infrastructure; you pay only for the service at the scale you need. If you need extra functionality, you can always update your subscription.

Simple adoption. SaaS services are available out-of-the-box, so adopting them is a piece of cake. We’ve already mentioned what you need to do: just sign up. It’s as simple as that. There’s no need to install anything.

SaaS solutions have certain disadvantages as well, so let’s mention a couple of them:--

You have no control over the hardware that handles your data.
Only a vendor can manage the parameters of the software you’re using.




PaaS (Platform as a Service)

Using Platform as a service, developers can build services and applications, hosted by the cloud and accessible to users via the Internet.

 Benefiting from PaaS requires constant updating and addition of new features.

Such benefits include being able to ensure adequate software support and management services, networking, storage, testing, and collaborations.


PaaS is the broad collection of application infrastructure (middleware) services. These services include application platform, integration, business process management and database services.


PaaS is like taking a taxi. You don’t drive a taxi yourself, but simply tell the driver where you need to go and relax in the back seat.


Thanks to PaaS solutions, software developers can deploy applications, from simple to sophisticated, without needing all the related infrastructure (servers, databases, operating systems, development tools, etc). Examples of PaaS services are Heroku and Google App Engine.

PaaS vendors supply a complete infrastructure for application development, while developers are in charge of the code.

Just like SaaS, Platform as a Service solutions are available with a pay-as-you-go pricing model.

PaaS solutions are used mostly by software developers. PaaS provides an environment for developing, testing, and managing applications. PaaS is therefore the perfect choice for software development companies.

PaaS provides a number of benefits to developers:--

Reduced development time. PaaS services allow software developers to significantly reduce development time. Server-side components of the computing infrastructure (web servers, storage, networking resources, etc.) are provided by a vendor, so development teams don’t need to configure, maintain, or update them. Instead, developers can focus on delivering projects with top speed and quality.

Support for different programming languages. PaaS cloud services usually support multiple programming languages, giving developers an opportunity to deliver various projects, from startup MVPs to enterprise solutions, on the same platform.

Easy collaboration for remote and distributed teams. PaaS gives enormous collaboration capabilities to remote and distributed teams. Outsourcing and freelancing are common today, and many software development teams are comprised of specialists who live in different parts of the world. PaaS services allow them to access the same software architecture from anywhere and at any time.

High development capabilities without additional staff. PaaS provides development companies with everything they need to create applications without the necessity of hiring additional staff. All hardware and middleware is provided, maintained, and upgraded by a PaaS vendor, which means businesses don’t need staff to configure servers and databases or deploy operating systems.


PaaS cloud services have certain disadvantages:--

You have no control over the virtual machine that's processing your data.
PaaS solutions are less flexible than IaaS. For example, you can't create and delete several virtual machines at a time.

Platform as a Service (PaaS): a cloud computing service that offers users an online environment for application development, deployment, and updating.




IaaS (Infrastructure as a Service)

IaaS is also a significant service model of cloud computing. Through Infrastructure as a Service, access is provided to computing resources over a virtualized environment, i.e., the cloud.  IaaS is a virtual data center.

The infrastructure provided here includes virtual server space, network connections, bandwidth, IP addresses, and load balancers. Here, a pool of hardware resources is extracted out of several servers and subsequently delivered over several data centers. Hence, there is a sense of reliability to IaaS.

IaaS serves as a complete computing package. If small-scale businesses are looking to cut the cost of IT infrastructure, this is one proven, viable means of doing so. Every year, a lot of cash would otherwise go into purchasing new components such as hard drives, network connections, and external storage devices; when utilizing IaaS, all of this is bypassed!


IaaS is cloud computing in its most basic form: it offers the fundamentals of one's computing infrastructure, including data storage, networking, and server space.

IaaS provides compute resources, complemented by storage and networking capabilities, that are owned and hosted by providers and made available to customers on demand.

Cybercriminals' most effective weapon in a ransomware attack is the network itself, which enables the malicious encryption of shared files on network servers, especially files stored with infrastructure-as-a-service (IaaS) cloud providers.

Cloud computing isn’t one monolithic type of offering, but an assortment of services aimed at meeting the various IT needs of an organization.

In the IaaS model, third-party service providers host hardware equipment, operating systems and other software, servers, storage systems, and various other IT components for customers in a highly automated delivery model.   In some cases, IaaS providers also handle tasks such as ongoing systems maintenance, data backup, and business continuity.

Organizations that use IaaS can self-provision the infrastructure services and pay for them on a per-use basis. Fees are typically paid by the hour, week, or month, depending on the service contract. In some cases, providers charge clients for infrastructure services based on the amount of virtual machine (VM) capacity they’re using over a period of time.

Similar to other cloud computing services, IaaS provides access to IT resources in a virtualized environment, across a public connection that’s typically the internet. But with IaaS, you are provided access to virtualized components so that you can create your own IT platforms on it—rather than in your own data center.


IaaS is not to be confused with PaaS, a cloud-based offering in which service providers deliver platforms to clients that allow them to develop, run, and manage business applications without the need to build and maintain the infrastructure such software development processes typically require.

IaaS also differs from SaaS, a software distribution model in which a service provider hosts applications for customers and makes them available to these customers via the internet.

The pool of IaaS services offered to clients is pulled from multiple servers and networks that are generally distributed across numerous data centers that are owned, operated, and maintained by cloud providers.

IaaS resources can be either single-tenant or multitenant, and they are hosted in the service provider’s data center.

“Multitenant” means multiple clients share those resources, even though their systems are kept separate. This is the most common way to deliver IaaS because it is both highly efficient and scalable, allowing cloud computing’s generally lower costs. 

A cloud-based IaaS provider typically offers greater scalability, greater selection of technology options, on-demand availability, and usually much better security because it has created its IaaS platform to support hundreds or thousands of customers.

IaaS is like leasing a car. When you lease a car, you choose the car you want and drive it wherever you wish, but the car isn’t yours. Want an upgrade? Just lease a different car!

IaaS services can be used for a variety of purposes, from hosting websites to analyzing big data. Clients can install and use whatever operating systems and tools they like on the infrastructure they get. Major IaaS providers include Amazon Web Services, Microsoft Azure, and Google Compute Engine.

As with SaaS and PaaS, IaaS services are available on a pay-for-what-you-use model.

As you can see, each cloud service (IaaS, PaaS, and SaaS) is tailored to the business needs of its target audience. From the technical point of view, IaaS gives you the most control but requires extensive expertise to manage the computing infrastructure, while SaaS allows you to use cloud-based applications without needing to manage the underlying infrastructure. Cloud services, thus, can be depicted as a pyramid:





IaaS solutions can be used for multiple purposes. Unlike SaaS and PaaS, IaaS provides hardware infrastructure that you can use in a variety of ways. It’s like having a set of tools that you can use for constructing the item you need.


Here are several scenarios when you can use IaaS:--

Website or application hosting. You can run your website or application with the help of IaaS (for example, using Elastic Compute Cloud from Amazon Web Services).

Virtual data centers. IaaS is the best solution for building virtual data centers for large-scale enterprises that need an effective, scalable, and safe server environment.

IaaS also offers several advantages:--

No expenses on hardware infrastructure. IaaS vendors provide and maintain the hardware infrastructure: servers, storage, and networking resources. This means that businesses don't need to invest in expensive hardware, which is a substantial cost savings, as IT hardware infrastructure is rather pricey.

Perfect scalability. Though all cloud-based solutions are scalable, this is particularly true of Infrastructure as a Service, as additional resources are available to your application in case of higher demand. Apps can also be scaled down if demand is low.

Reliability and security. Ensuring the safety of your data is an IaaS vendor's responsibility. Hardware infrastructure is usually kept in specially designed data centers, and a cloud provider guarantees the security of your data.

Finally, let’s specify the disadvantages of IaaS cloud solutions:--

IaaS is more expensive than SaaS or PaaS, as you in fact lease hardware infrastructure.
All issues related to the management of a virtual machine are your responsibility.


Cloud-based is a term that refers to applications, services, or resources made available to users on demand via the Internet from a cloud computing provider's servers. Cloud computing is becoming increasingly linked to artificial intelligence (AI).

That relationship is permanently and substantially changing the relationship between cloud computing and AI. Google and Amazon are among the big-name companies offering cloud computing options for customers. The links between cloud computing and AI are so substantial that they spurred a new cloud computing option called AIaaS, or AI-as-a-service. 

AIaaS allows everyone, independent of their level of knowledge, to utilize artificial intelligence. For developers, simple APIs are provided; for users without coding skills, graphical user interfaces with detailed instructions are made available, by which means a data processing pipeline can be clicked together.

Artificial Intelligence as a Service (AIaaS) is basically the third-party offering of artificial intelligence outsourcing. Some cloud AI service providers also offer the specialized hardware required for certain AI tasks, such as GPU-based processing for intensive workloads.
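As an illustration of the developer-facing side, a call to a cloud AI API often looks roughly like the sketch below; the endpoint, key, and response fields are hypothetical placeholders, not any specific provider's API:

# A minimal sketch of calling a cloud AI service over HTTP.
# The endpoint, API key, and response contents are hypothetical placeholders.
import requests

API_URL = "https://api.example-ai-provider.com/v1/vision/label"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                           # placeholder credential

with open("photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )

print(response.json())   # e.g. a list of predicted labels with confidence scores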

In traditional programming you hard code the behavior of the program. In machine learning, you leave a lot of that to the machine to learn from data.
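A minimal sketch of that contrast, with an explicitly hard-coded rule next to a rule learned from invented labeled examples via scikit-learn:

# Traditional programming: the behavior is written out explicitly.
from sklearn.tree import DecisionTreeClassifier

def is_spam_hardcoded(message: str) -> bool:
    return "free prize" in message.lower()

# Machine learning: the rule is derived from data instead of written by hand.
features = [[12, 0], [90, 4], [45, 0], [70, 5]]     # toy features: [length, exclamation marks]
labels   = [0, 1, 0, 1]                              # 1 = spam (invented labels)
learned_rule = DecisionTreeClassifier().fit(features, labels)
print(learned_rule.predict([[85, 3]]))               # behavior comes from the data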

AIaaS is perfect for testing new approaches to your problems quickly and without a big investment in hardware or software. The performance of several different AI algorithms is typically compared in these so-called proofs of concept, which can easily be done with AIaaS.

Artificial Intelligence as a Service (AIaaS) allows enterprises to leverage AI for specific use cases and lower the risk and cost at the same time. This can include sampling multiple public cloud platforms to test different machine learning algorithms. The applicability of AIaaS cuts across all sectors, and since the features associated with each service provider are different, enterprise customers can opt from a plethora of options.

One of the biggest advantages associated with AIaaS has been the reduced cost and time of deploying a solution. By providing ready infrastructure and pre-trained algorithms, it saves businesses from setting up their own applications. Whereas earlier businesses had to develop their own applications, in this case all that companies need to do is contact a service provider.

AIaaS is built on the existing cloud framework: machine learning models are trained and then deployed to VMs and containers for inference. Rather than creating custom machine learning models from scratch, service providers make use of the underlying infrastructure that would otherwise have been built on IaaS (Infrastructure as a Service) and SaaS (Software as a Service). This is another key advantage, as it reduces investment risk and increases strategic flexibility.

Usability: With AWS, Microsoft, and Google dominating the sector, companies are competing with each other, in an attempt to be more than just service providers, to build tools for data scientists and developers. Added to this is the move to open-source platforms like TensorFlow, Caffe, and AutoML, enabling developers to build custom AI models.

Scalability: It will allow enterprises to grow by starting small and allowing them to increase their AI operations gradually with time.


Types of AIaaS--

Machine learning framework: This tool enables developers to build their own model that learns from an existing pool of data. It allows building machine learning models without requiring a big data environment.

Third party APIs: These are created to increase the functionalities in an existing application. NLP, computer vision, translation, knowledge mapping, emotion detections are some of the common options for APIs.

AI-powered bots: Chatbots that use natural language processing (NLP) to imitate language patterns learned from human conversations are a common type of AIaaS.

Fully-managed ML services: This uses drag-and-drop tools, cognitive analytics and custom-created data models to generate richer machine learning values.



Without virtualization, cloud computing might be a scam. Virtualization has to do with apportioning a single physical server into multiple logical servers. Once the physical server is divided, each logical server can act like a physical server, running an independent operating system and applications.

Since virtualization is a critical part of cloud infrastructure, several popular companies provide this service to the vast number of people who need it. These services are both cost-effective and time-saving.

Especially for software developers and testers, virtualization is pretty essential, giving developers a solid platform on which to write code that can then run in several different environments and scenarios. They are also able to test the code there.

The three significant forms of virtualization are network virtualization, server virtualization, and storage virtualization.

Network virtualization refers to a method of combining available network resources by dividing the available bandwidth into channels, each of which is independent of the other channels, and each of which can be assigned to a particular server or device.

Storage virtualization deals with the pooling of physical storage from several network storage devices into a single storage device managed from a central console. This form of virtualization is commonly used in storage area networks.

As for server virtualization, it involves the masking of server resources, such as processors, RAM, and operating systems, from server users. This form of virtualization aims to increase resource sharing while reducing the burden of computation on users.

Unlocking cloud infrastructure requires a steady input of virtualization, as it decouples software from hardware. This is what makes a personal computer, for example, able to borrow extra memory from the hard disk through the use of virtual memory.

Cloud infrastructure is beyond helpful and is here to stay. If you will be utilizing cloud platforms, then you need to be on top of your knowledge about the infrastructure you'd be using!











In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of your computer's hard drive. The cloud is just a metaphor for the Internet.

A normal server refers to the regular physical hardware you install somewhere in a room, while a cloud server is perceived as an online system able to store a large amount of data, deliver software services, balance loading, automate business processes and operations, and allow plenty of customization.



Cloud storage involves at least one data server that a user connects to via the internet. The user sends files manually or in an automated fashion over the Internet to the data server which forwards the information to multiple servers. The stored data is then accessible through a web-based interface.


Cloud Storage (e.g. Backblaze B2, Amazon S3, Microsoft Azure, Google Cloud)--

These services are where many online backup, syncing, and sharing services store their data. Cloud storage providers typically serve as the endpoint for data storage. These services usually provide APIs (application program interfaces), CLIs (command line interfaces), and access points for individuals and developers to tie in their cloud storage offerings directly.

This allows developers to create programs that use the cloud storage solution in any way they see fit. A good way to think about cloud storage is as a building block for whatever tool or service you want to create.
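To make the "building block" idea concrete, here is a minimal sketch, assuming the boto3 library and a hypothetical S3-compatible endpoint, bucket and credentials that you control; it uploads a file and then lists what is stored.

import boto3

# the endpoint, bucket and credentials below are placeholders, not real services
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_SECRET",
)

s3.upload_file("report.pdf", "my-bucket", "documents/report.pdf")
listing = s3.list_objects_v2(Bucket="my-bucket", Prefix="documents/")
print(listing["KeyCount"], "objects stored under documents/")

A backup tool, a sync client or a media service can all be built on top of exactly this kind of call.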

Cloud storage services are priced per unit stored, meaning you pay for the amount of storage that you use and access. Since these services are designed for high availability and durability, data can live solely on these services, though we still recommend having multiple copies of your data, just in case.

Cloud storage allows you to save data and files in an off-site location that you access either through the public internet or a dedicated private network connection. Data that you transfer off-site for storage becomes the responsibility of a third-party cloud provider. The provider hosts, secures, manages, and maintains the servers and associated infrastructure and ensures you have access to the data whenever you need it.

Cloud storage delivers a cost-effective, scalable alternative to storing files on on-premise hard drives or storage networks. Computer hard drives can only store a finite amount of data. When users run out of storage, they need to transfer files to an external storage device.

Traditionally, organizations built and maintained storage area networks (SANs) to archive data and files. SANs are expensive to maintain, however, because as stored data grows, companies have to invest in adding servers and infrastructure to accommodate increased demand.

Cloud storage services provide elasticity, which means you can scale capacity as your data volumes increase or dial down capacity if necessary. By storing data in a cloud, your organization saves by paying for storage technology and capacity as a service, rather than investing in the capital costs of building and maintaining in-house storage networks.

You pay only for the capacity you use. While your costs might increase over time as data volumes grow, you don’t have to overprovision storage networks in anticipation of that growth.

How does it work?

Like on-premise storage networks, cloud storage uses servers to save data; however, the data is sent to servers at an off-site location. Most of the servers you use are virtual machines hosted on a physical server. As your storage needs increase, the provider creates new virtual servers to meet demand.

Cloud computing is when entities share a network of remotely accessible servers. The servers are hosted on the Internet, allowing businesses to manage data “in the cloud” instead of on a local server. It’s a shared space in which devices in the network can access data from anywhere.

The basic idea of cloud computing is that your information is stored online, available for you to access it whenever you want and from any computer or Internet-ready device.


Cloud computing lets us deploy a service quickly, in a few clicks. This quick deployment makes the resources required for our system available within minutes.

Cloud computing is ideal for businesses that have to carry out database management across remote locations. With everything readily available over the cloud, businesses can enhance their efficiency and increase their productivity at the same time.

Cloud computing requires no hefty machinery or equipment to run. That makes it the most cost-effective method to maintain and use. Cloud computing eliminates the cost of setting up data centers and looking after them.

Cloud computing has become an integral part of the digital ecosystem. Ever since its introduction by Amazon Web Services in 2006, it has come a long way. The cloud has come to resonate with millions of professionals and businesses who look at it as the ultimate solution to their data storage issues.

Small and large businesses all over the world are switching to Cloud-based solutions, and it has been predicted that by 2020, enterprises will shift 83% of their workload to cloud-computing platforms. Amazon Web Services, Google Cloud Platform, Microsoft Azure and IBM Bluemix are a few of the front-runners of the cloud computing realm.

There are three major players in the public cloud platforms arena - Amazon Web Services (AWS), Microsoft's Azure, and Google Cloud Platform. The top cloud computing companies are addressing a large and growing market

Global data will touch 175 Zettabytes by 2025.   Now if you want to imagine it in real terms, 1 Zettabyte equals 1 trillion Gigabytes.  Digitization of the world is unstoppable. Today, 77 percent of businesses have one or more applications or part of their infrastructure in the cloud.

Pandora’s box is open.

This form of storage has become a popular way for people to store their music, movies and other media. For businesses, the cloud presents a way to store data securely online. Cloud computing offers significant benefits over regular storage and makes data available to everyone who needs it.

In-house data storage costs companies a significant amount of money. There’s the up-front price tag of purchasing each new server as well as the cost of installing them. Then you need to ensure the equipment is maintained properly and backed up regularly.

With cloud computing, the headache associated with maintaining in-house systems disappears as you have the support of your service provider. Because the cost of infrastructure is included in your plan and split among all the service provider’s clients, you save money.

The cloud is available via the internet and allows users to access their data from any location, paying only for the services they use. Dedicated servers are physical servers where you have the entire machine for your own websites; they are more secure and perform better.

Virtual servers allow multiple servers to run on one physical host and share its resources, which can be far more efficient and cost-effective. Each virtual server has its own IP address, which allows remote access to the server through the local area network.

A cloud server is a logical server that is built, hosted and delivered through a cloud computing platform over the Internet. Cloud servers possess and exhibit similar capabilities and functionality to a typical server but are accessed remotely from a cloud service provider

Cloud hosting involves a network of servers (possibly hundreds) joined together to act as one ‘mega server’. Cloud hosting enables you to handle large volumes of unexpected traffic because you are not relying on a single server, but rather a network (or ‘cloud’) of servers.

If one server malfunctions, your data will be backed up by other servers in the cloud network and your users will experience no downtime. The ability to scale quickly and inherent redundancy are the key benefits of cloud hosting.

With cloud hosting platforms you can monitor your usage and seamlessly scale when you need more storage space, bandwidth or processing power. Alternatively, during times when you’re facing low volumes of traffic, you can scale back your plan and reduce hosting costs.

Generally speaking, companies that opt for cloud hosting don’t own any of the servers. In fact, they may not even know what sort of hardware their websites are hosted on or where the server is located.  Users of cloud hosting services don’t have to understand how to set up and maintain a complex and scalable hosting infrastructure because that is all handled by the cloud hosting company.

Any regular shared or VPS hosting provider will host several websites (sometimes hundreds) on a single server. This means your website has to share resources (such as processing power and storage space) with other websites on the same physical server. This places obvious limitations on the performance of any one site.

Cloud hosting, on the other hand, utilizes many servers instead of a single server so that there is technically no limit to performance. If any site on the cloud experiences unexpected traffic spikes, the cloud as a whole is able to handle the increase in demand without impacting the other sites. Since cloud hosting is more robust and dynamic compared to regular web hosting, it gives you better overall performance and is inherently more scalable than regular web hosting services.

The hundreds of regulations that govern different types of data are complex to understand, time-consuming to apply and laborious to maintain. Why not let a cloud storage service provider do the heavy lifting when it comes to compliance? A good provider operates in full compliance with all applicable regulations so you don’t have to worry about incurring violations.

Cloud platforms don’t require your employees to be tethered to their desks all day every day. Since they access these services via the Internet, they can use desktop computers, smartphones, and other devices to connect to your cloud platform. 

Moreover, your team will have access to all your files and software even if they don’t have a physical presence in the office. Whether they’re traveling or working remotely, they will be able to log in to their cloud accounts and get their work done.

Once an individual or a company registers itself to the cloud, they can access it from anywhere in the world just in a matter of seconds. Files in the cloud can be accessed from anywhere with an Internet connection. This allows you to move beyond time zone and geographic location issues.

If your business is not investing in cloud-computing technology, then all of your valuable data is unfortunately tied to the office computer it resides in. This may not seem like a real problem, but the reality is that if your local hardware fails, you might end up permanently losing all your data.

This is one of the most common issues, as computers can malfunction for many reasons, from virus infections to age-related hardware deterioration to simple user error, or they can even be misplaced or stolen.

The accessibility of the cloud also fosters better collaboration among your workforce. Team members can access and share files through your cloud platform, which allows them to work on documents together and see others’ updates as they happen.

Make sure to work with a reliable ISP when you’re looking to implement cloud services. It’s also crucial to ensure that you have sufficient Internet speeds and bandwidth to support your cloud needs.

With virtualization, software called a hypervisor sits on top of physical hardware and abstracts the machine's resources, which are then made available to virtual environments called virtual machines. These resources can be raw processing power, storage, or cloud-based applications containing all the runtime code and resources required to deploy it.

If the process stops here, it's not cloud—it's just virtualization.

Virtual resources need to be allocated into centralized pools before they're called clouds. Adding a layer of management software gives administrative control over the infrastructure, platforms, applications, and data that will be used in the cloud. An automation layer is added to replace or reduce human interaction with repeatable instructions and processes, which provides the self-service component of the cloud.

Clouds deliver the added benefits of self-service access, automated infrastructure scaling, and dynamic resource pools, which most clearly distinguish it from traditional virtualization.

There are three main types of cloud hosting: public cloud, private cloud, and hybrid cloud.

A public cloud is what you would use to host your website. It is built on a standard cloud computing model which handles files, apps, disk storage, and services made available to the general public through the web.

Amazon Web Services, Microsoft Azure, and Google Cloud are examples of public clouds. Some of the most common real-world examples of public cloud services include services like cloud-based server hosting, storage services, webmail, and online office applications.

Public clouds are usually multitenant, i.e., there are many organizations sharing space on the same cloud.

A private cloud handles files, apps, storage space, and services that sit behind a corporate firewall controlled by a corporate IT manager or the IT department. You might use a private cloud to host your corporate intranet, for example, with access limited to employees connecting over a secure VPN.

Private cloud is infrastructure dedicated entirely to your business that’s hosted either on-site or in a service provider’s data center. Private clouds offer a lot of benefits, which can come at a cost.

A hybrid cloud is a mix of the two (private and public) that stay distinct but still work together, allowing you to make use of multiple deployment models.

Some form of hybrid option is obviously the best solution given the Public vs. Private debate. Keep all important corporate data secure behind a Private Cloud-based service or stored securely on an array of on-premise servers, while access to that data remains tightly controlled and goes through a combination of SaaS and DBaaS services.

This provides the best of both worlds: mobile and web-based access to corporate applications with high usability, while important data remains secure.

Cloud hosting enables websites owners to stay flexible with their budget and site resources. With cloud hosting, you only pay for what you use. You can sign up for a standard level pricing plan while, at the same time, have the ability to scale these resources as your website continues to grow.

This is achieved via close resource usage tracking. You can track and allocate additional computing power and storage space to your website by using an intuitive resource management portal. This will also allow you to view your website’s usage whenever you want while being able to keep a close eye on your billing.

Cloud infrastructure can include a variety of bare-metal, virtualization, or container software that can be used to abstract, pool, and share scalable resources across a network to create a cloud. At the base of cloud computing is a stable operating system (like Linux). This is the layer that gives users independence across public, private, and hybrid environments.

Alibaba is the leading cloud provider in China and an option for multi-national companies building infrastructure there. In its December quarter, Alibaba delivered cloud revenue growth of 84 percent to $962 million.

To keep data secure, the front line of defense for any cloud system is encryption. Yes, the only way to keep your data safe for certain is to lock it up in a safe beneath the ground. That being said, your cloud-stored data is generally safer than your locally stored data.

The cloud may be fine for your pictures and music, but when you start thinking about personal information, such as passwords, that a business keeps on their clients and customers, the stakes go way up. For one thing, you don’t really know where the data is being stored, so you don’t have the first idea of the level of data security.

An equally important concern, particularly for government agencies and military, isn’t just the security of the servers themselves; it’s the people who have access to them as part of their job.

A lot of people already make use of a wide array of cloud computing services without even realizing it. Gmail, Facebook, Google Drive, One Drive, and even Instagram are all cloud-based applications. For all of these services, users are sending their personal data to the cloud-hosted server that accumulates the data for later access.

Because the infrastructure of the cloud is owned and managed by the service provider, businesses may worry about not having enough control over the service. This is where the provider’s end-user license agreement (EULA) can help you out. 

It explains what limits the provider can place on your use of the deployment. All legitimate cloud computing providers allow your organization to exert control over your applications and data, even if they don’t allow you to alter the infrastructure in any way.

When a provider presents you with a service level agreement (SLA), it helps to make sure you understand every word of it. This will help you confirm what you can and can’t do with the service.

You will be charged a very nominal fee for your data to be stored on the cloud. This is one of the major advantages of cloud computing over conventional computing and storage systems, where cost was a critical issue. Storage capacity can be expanded to many terabytes (TB) for far less than storing the same data on local storage devices.

With cloud storage, you only pay for the amount of storage you require. If your business experiences growth, then the cloud operator can help accommodate your corresponding growth in data storage needs. 

All you will have to do is vary how much you pay to extend the storage you have. This also works in the same way if your business shrinks and you require less storage space at a reduced rate.

Cloud storage is also a plus if a business’s IT infrastructure is damaged or destroyed by a natural disaster, or if its data is stolen or locked up in a ransomware attack.

With cloud storage, you will be able to create multiple copies of the same file and house them in off-site servers that won’t succumb to the same disasters as your own facility. In many cases, you can set up your cloud to create automatic backups so your information will always be up to date. Having these backups in the cloud will allow you to restore any lost information in a matter of minutes.

Cloud services aggregate data from thousands of small businesses. The small businesses believe they are pushing security risks to a larger organization more capable of protecting their data.



For startups and small to medium-sized businesses (SMEs), that can’t afford costly server maintenance, but also may have to scale overnight, the benefits of utilizing the cloud are especially great.

Data stored on cloud servers can be lost through a natural disaster, malicious attacks, or a data wipe by the service provider. Losing sensitive data is devastating to firms, especially if they have no recovery plan. Google is one big tech firm that has suffered permanent data loss, after one of its data centres was struck by lightning four times in its power supply lines.

Amazon was another firm that lost essential customer data back in 2011.



Manageability: Cloud Computing eliminates the need for IT infrastructure updates and maintenance since the service provider ensures timely, guaranteed, and seamless delivery of our services and also takes care of all the maintenance and management of our IT services according to the service-level agreement (SLA).

Sporadic batch processing: Cloud Computing lets us add or subtract resources and services according to our needs. So, if the workload is not 24/7, we need not worry about the resources and services getting wasted and we won’t end up stuck with unused services.

Strategic edge: Cloud Computing gives a company a competitive edge by providing access to the latest mission-critical applications it needs, without having to invest time and money in installing them. The provider does the manual work of installing and maintaining the applications, letting the company focus on keeping up with the business competition.

Downtime: Downtime is considered as one of the biggest potential downsides of using Cloud Computing. The cloud providers may sometimes face technical outages that can happen due to various reasons, such as loss of power, low Internet connectivity, data centers going out of service for maintenance, etc. This can lead to a temporary downtime in the cloud service.

Vendor lock-in: When in need to migrate from one cloud platform to another, a company might face some serious challenges because of the differences between vendor platforms. Hosting and running the applications of the current cloud platform on some other platform may cause support issues, configuration complexities, and additional expenses. The company data might also be left vulnerable to security attacks due to compromises that might have been made during migrations.

Limited control: Cloud customers may face limited control over their deployments. Cloud services run on remote servers that are completely owned and managed by service providers, which makes it hard for the companies to have the level of control that they would want over their back-end infrastructure.

Distributed cloud is the application of cloud computing technologies to interconnect data and applications served from multiple geographic locations. Distributed, in an information technology (IT) context, means that something is shared among multiple systems which may also be in different locations.

A distributed cloud refers to having computation, storage, and networking in a micro-cloud located outside the centralized cloud. Examples of a distributed cloud include both fog computing and edge computing

By 2025, enterprises will generate and process more than 75% of their data outside of traditional centralised data centres; that is, at the “edge” of the cloud.

Edge computing enables data to be analysed, processed, and transferred at the edge of a network. The idea is to analyse data locally, closer to where it is stored, in real-time without latency, rather than send it far away to a centralised data centre. So whether you are streaming a video on Netflix or accessing a library of video games in the cloud, edge computing allows for quicker data processing and content delivery.

The basic difference between edge computing and cloud computing lies in where the data processing takes place.

At the moment, existing Internet of Things (IoT) systems perform all of their computations in the cloud using data centres. Edge computing, on the other hand, manages the massive amounts of data generated by IoT devices by storing and processing it locally. That data doesn’t need to be sent over a network as soon as it is processed; only important data is sent, so an edge computing network reduces the amount of data that travels over the network.
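As a toy illustration of "only important data is sent", here is a minimal sketch; the threshold and the send_to_cloud stub are assumptions made for the example, not part of any real product.

def send_to_cloud(reading):
    print("forwarding to cloud:", reading)   # stand-in for a real uplink call

THRESHOLD = 75.0   # assumed alarm level for this sensor

def process_locally(readings):
    forwarded = 0
    for r in readings:
        if r["temperature"] > THRESHOLD:     # only important data leaves the device
            send_to_cloud(r)
            forwarded += 1
    return forwarded

samples = [{"sensor": "pump-3", "temperature": t} for t in (62.1, 80.4, 71.0, 91.7)]
print(process_locally(samples), "of", len(samples), "readings sent upstream")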

The true potential of edge computing will become apparent when 5G networks go mainstream in a year from now. Users will be able to enjoy consistent connectivity without even realising it.

Edge computing does not need contact with any centralized cloud, although it may interact with one. In contrast to cloud computing, edge computing refers to decentralized data processing at the edge of the network.



Edge computing refers to data processing power at the edge of a network instead of holding that processing power in a cloud or a central data warehouse. Edge computing does not replace cloud computing, however. In reality, an analytic model or rules might be created in the cloud and then pushed out to edge devices.

Azure IoT Edge is a fully managed service built on Azure IoT Hub. By moving certain workloads to the edge of the network, your devices spend less time communicating with the cloud, react more quickly to local changes and operate reliably even in extended offline periods.

The edge of the IoT is where the action is. It includes a wide array of sensors, actuators, and devices—those system end-points that interact with and communicate real-time data from smart products and services.

Edge infrastructure  takes advantage of cloud infrastructure but keeps assets at the edge of the network

Edge computing can  reduce data transport requirements, thereby saving network bandwidth costs and avoiding data storage proliferation. A Content Distribution Network (CDN) is one example of how performance and efficiency can be improved by storing content closer to its users.

Edge processing refers to the execution of aggregation, data manipulation, bandwidth reduction and other logic directly on an IoT sensor or device. The more work the device can do to prepare the data for the cloud, the less work the cloud needs to do.

Edge computing is a distributed, open IT architecture that features decentralized processing power, enabling mobile computing and Internet of Things (IoT) technologies. In edge computing, data is processed by the device itself or by a local computer or server, rather than being transmitted to a data center.

At the Edge, the latest microcontrollers are combining with a vast array of sensors to permit a relatively high level of processing to occur and decisions to be made locally, all without going back to the Cloud. That has some obvious advantages; namely, it allows for real-time decisions as the latencies associated with Cloud-based decision making go away.


Nvidia, one of the biggest players in the design and manufacture of graphics and AI acceleration hardware, has just announced its EGX edge computing platform to help telecom operators adopt 5G networks capable of supporting edge workloads. The new Nvidia Aerial software developer kit will help telecom companies build virtualised radio access networks that will let them support smart factories, AR/VR and cloud gaming.

Internet for Things focuses on the Internet, while the IoT focuses on the devices (things). When we enable machines or other non-computing devices to connect to the Internet and generate useful data with the help of sensors or similar techniques, an Internet of Things application comes into being.

The Internet of Things is simply "a network of Internet-connected objects able to collect and exchange data." It is commonly abbreviated as IoT. To put it simply, you have "things" that sense and collect data and send it to the internet, and this data can be accessed by other "things" too.


The Internet of things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Traditional fields of embedded systems, wireless sensor networks, control systems, automation (including home and building automation), and others all contribute to enabling the Internet of things. 



Consumer connected devices include smart TVs, smart speakers, toys, wearables and smart appliances. Smart meters, commercial security systems and smart city technologies -- such as those used to monitor traffic and weather conditions -- are examples of industrial and enterprise IoT devices.

IoT Cloud is a platform from Salesforce.com that is designed to store and process Internet of Things (IoT) data. IoT devices are becoming a part of the mainstream electronics culture and people are adopting smart devices into their homes faster than ever. The more data that IoT devices collect, the smarter they will become.

Cities will transform into smart cities through the use of IoT-connected devices. Cloud computing provides the necessary tools and services to create IoT applications and helps achieve efficiency, accuracy and speed in implementing them. The cloud helps IoT application development, but IoT is not cloud computing.

IoT allows flow of data between devices and AI can help to make sense of this data. AI is expected to be the key propellant to the growth of the IoT revolution and take it to a new level.

An Internet of Things (IoT) gateway is a physical device or software program that serves as the connection point between the cloud and controllers, sensors and intelligent devices. A gateway provides a place to preprocess that data locally at the edge before sending it on to the cloud.
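Here is a hedged sketch of such gateway-side preprocessing, assuming the paho-mqtt library (1.x API), a broker running on the gateway itself, and made-up topic names; the gateway averages a batch of sensor readings and forwards only the summary upstream.

import json
import paho.mqtt.client as mqtt

BUFFER = []

def on_message(client, userdata, msg):
    BUFFER.append(json.loads(msg.payload))
    if len(BUFFER) >= 60:                               # roughly a minute of readings
        avg = sum(r["temp"] for r in BUFFER) / len(BUFFER)
        client.publish("uplink/plant-1/temp_avg", json.dumps({"avg": avg}))
        BUFFER.clear()                                  # only the summary leaves the gateway

gateway = mqtt.Client()
gateway.on_message = on_message
gateway.connect("localhost", 1883)                      # broker on the gateway (assumed)
gateway.subscribe("sensors/#")                          # raw readings from local devices
gateway.loop_forever()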

The IoT is getting smarter. Companies are incorporating artificial intelligence—in particular, machine learning—into their Internet of Things applications and seeing capabilities grow, including improving operational efficiency and helping avoid unplanned downtime. The key: finding insights in data.

Dubbed AIoT, the convergence of AI and IoT is a powerful tool, either at the Edge or in the Cloud. The goal for the technology, which is sometimes referred to as Artificial Intelligence of Things, is to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics. If implemented properly, those AI analytics can transform IoT data into useful information for an improved decision-making process.



The Artificial Intelligence of Things (AIoT) is the combination of artificial intelligence (AI) technologies with the Internet of Things (IoT) infrastructure to achieve more efficient IoT operations, improve human-machine interactions and enhance data management and analytics. 

AI can be used to transform IoT data into useful information for improved decision making processes, thus creating a foundation for newer technology such as IoT Data as a Service (IoTDaaS).

AIoT is transformational and mutually beneficial for both types of technology as AI adds value to IoT through machine learning capabilities and IoT adds value to AI through connectivity, signaling and data exchange. 

As IoT networks spread throughout major industries, there will be an increasingly large amount of human-oriented and machine-generated unstructured data. AIoT can provide support for data analytics solutions that can create value out of this IoT-generated data.

With AIoT, AI is embedded into infrastructure components, such as programs, chipsets and edge computing, all interconnected with IoT networks. APIs are then used to extend interoperability between components at the device level, software level and platform level. These units will focus primarily on optimizing system and network operations as well as extracting value from data.

While the concept of AIoT is still relatively new, many possibilities exist to improve industry verticals, such as enterprise, industrial and consumer product and service sectors, and will continue to arise with its growth. 

AIoT could be a viable solution to solve existing operational problems, such as the expense associated with effective human capital management (HCM) or the complexity of supply chains and delivery models.




Many AIoT applications are currently retail product oriented and often focus on the implementation of cognitive computing in consumer appliances. For example, smart home technology would be considered a part of AIoT as smart appliances learn through human interaction and response.

In terms of data analytics, AIoT technology combines machine learning with IoT networks and systems in order to create data "learning machines." This can then be applied to enterprise and industrial data use cases to harness IoT data, such as at the edge of networks, to automate tasks in a connected workplace. Real time data is a key value of all AIoT use cases and solutions.
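As one concrete flavour of such a "learning machine", here is a minimal sketch, assuming scikit-learn and synthetic vibration readings rather than a real sensor feed; it fits an anomaly detector on normal data and flags abnormal spikes.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.5, scale=0.05, size=(500, 1))   # healthy machine vibration levels
spikes = rng.normal(loc=1.2, scale=0.10, size=(5, 1))     # a handful of abnormal readings

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(spikes))   # -1 marks readings the model flags as anomalous

Run at the edge, a detector like this can trigger an alert or a shutdown locally, and forward only the flagged readings to the cloud.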

In one specific use case example, AIoT solutions could also be integrated with social media and human resources-related platforms to create an AI Decision as a Service function for HR professionals.


The convergence of Artificial Intelligence (AI) and Internet of Things (IoT) technologies and solutions (AIoT) is leading to thinking networks and systems that are becoming increasingly more capable of solving a wide range of problems across a diverse number of industry verticals. AI adds value to IoT through machine learning and improved decision making. IoT adds value to AI through connectivity, signaling, and data exchange.

The intelligent cloud is ubiquitous computing, enabled by the public cloud and artificial intelligence (AI) technology, for every type of intelligent application and system you can envision. Users get real-time insights and experiences, delivered by highly responsive and contextually aware apps.

Enterprises are embracing the cloud to run their mission-critical workloads. 

The number of connected devices on and off-premises, and the data they generate continue to increase requiring new enterprise network edge architectures. We call this the intelligent edge – compute closer to the data sources and users to reduce latency.

The intelligent cloud, with its massive compute power, storage and variety of services works in concert with the intelligent edge using similar programming models to enable innovative scenarios and ubiquitous compute. Networking is the crucial enabler integrating the intelligent cloud with the intelligent edge.

Ubiquitous computing (or "ubicomp") is a concept in software engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets and terminals in everyday objects such as a refrigerator or a pair of glasses. 

The underlying technologies to support ubiquitous computing include Internet, advanced middleware, operating system, mobile code, sensors, microprocessors, new I/O and user interfaces, computer networks, mobile protocols, location and positioning, and new materials.

Most Internet of Things (IoT) devices are based on ubiquitous computing. Some examples are the Apple Watch and the Amazon Echo speaker.

The fact that most communication in ubiquitous computing is wireless makes the role of security all the more important because radio communication can be manipulated more easily.



Ubiquitous computing is a paradigm in which the processing of information is linked with each activity or object as encountered. It involves connecting electronic devices, including embedding microprocessors to communicate information.


Ubiquitous means everywhere. Pervasive means "diffused throughout every part of." In computing terms, those seem like somewhat similar concepts. Ubiquitous computing would be everywhere, and pervasive computing would be in all parts of your life.





Intelligent Cloud Sync was introduced in 2018 to give on-prem implementations a companion tenant in the cloud, so that they too can take advantage of artificial intelligence and other SaaS tools to enhance business insights.

Put simply, the cloud is a set of computers that someone else is managing. When talking about syncing and sharing services like Dropbox, Box, Google Drive, OneDrive, or any of the others, people often assume they are acting as a cloud backup solution as well. Adding to the confusion, cloud storage services are often the backend for backup and sync services as well as standalone services, meaning some of your favorite apps are built in the cloud, sometimes using third party cloud storage.

Cloud Sync (e.g. Dropbox, iCloud Drive, OneDrive, Box, Google Drive)

These services sync folders on your computer or mobile device to folders on other machines or into the cloud, allowing users to work from a folder or directory across devices. Typically these services have tiered pricing, meaning you pay for the amount of data you store with the service, or for tiers of data that you are allowed to use. 

If there is data loss, some of these services even have a version history feature. Of course, only files that are in the synced folders are available to be recovered; sync services cannot get back files that were never synced.

Cloud file syncing is an application that keeps files in different locations up to date through the cloud. For cloud file syncing, a user sets up a cloud-based folder, to which the desired files are copied. This folder makes the files accessible via a web interface for multiple users, on whatever device they are using.

Cloud backup saves a copy of data on remote storage to protect it from undesired events, at the same time cloud storage is designed for getting access to data from anywhere. Cloud sync lets multiple users work with data remotely using any number of devices and synchronize changes across all the users involved.

Cloud Sync is a way to keep the same updated files in different locations through cloud storage services. No matter where they were geographically edited or changed, the file will be the same from where ever it is accessed. This is a great way to keep files current, consistent, and accessible to users across multiple locations and platforms. 

For instance, when a user edits or updates a file, the changes are automatically synchronized with the corresponding folder(s). This can save organizations real time and money, because it enables users to spend less time hunting for documents and more time doing their day-to-day business operations.

How do I connect cloud storage platforms?
Most cloud storage platforms offer what is commonly known as a “sync folder” in your file system. These sync folders automatically copy files in their most up-to-date versions to all devices connected to the corresponding cloud account. For example, a Dropbox sync folder on your device will automatically sync content to your Dropbox cloud storage account. To connect and sync various cloud storage platforms with one another, you will need an application that is able to integrate and synchronize content among your multiple cloud services.

How does Cloud Sync work?

There is no one right answer, as it will all depend on your personal cloud syncing and cloud migration needs. Do you need a one-way sync? Possibly a bi-directional sync (or two-way sync), or to manage a multi or hybrid-cloud environment? Whatever your cloud migration needs may be, consider synchronizing your content using a hosted system or third-party tool.  On the other hand, you could even try to create your very own DIY sync system. Either way, setting up a cloud sync system can create platform freedom among users.
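For flavour, here is a minimal DIY one-way sync sketch in Python; the source and destination paths are assumptions (the destination standing in for a provider-mounted sync folder), and real products add conflict handling, deletions and versioning on top of this idea.

import hashlib
import shutil
from pathlib import Path

SRC = Path("Documents")               # local working folder (assumed)
DST = Path("/mnt/cloud/Documents")    # provider-mounted sync target (assumed)

def digest(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

for f in SRC.rglob("*"):
    if f.is_file():
        target = DST / f.relative_to(SRC)
        if not target.exists() or digest(target) != digest(f):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)   # push only new or changed files
            print("synced", f)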




Typically, you connect to the storage cloud either through the internet or a dedicated private connection, using a web portal, website, or a mobile app. The server with which you connect forwards your data to a pool of servers located in one or more data centers, depending on the size of the cloud provider’s operation.

As part of the service, providers typically store the same data on multiple machines for redundancy. This way, if a server is taken down for maintenance or suffers an outage, you can still access your data.

Cloud storage is available in private, public and hybrid clouds.


Public storage clouds: In this model, you connect over the internet to a storage cloud that’s maintained by a cloud provider and used by other companies. Providers typically make services accessible from just about any device, including smartphones and desktops, and let you scale up and down as needed.

Private cloud storage: Private cloud storage setups typically replicate the cloud model, but they reside within your network, leveraging a physical server to create instances of virtual servers to increase capacity. You can choose to take full control of an on-premise private cloud or engage a cloud storage provider to build a dedicated private cloud that you can access with a private connection. 

Organizations that might prefer private cloud storage include banks or retail companies due to the private nature of the data they process and store.

Hybrid cloud storage: This model combines elements of private and public clouds, giving organizations a choice of which data to store in which cloud. For instance, highly regulated data subject to strict archiving and replication requirements is usually more suited to a private cloud environment, whereas less sensitive data (such as email that doesn’t contain business secrets) can be stored in the public cloud. Some organizations use hybrid clouds to supplement their internal storage networks with public cloud storage.

Pros and cons
As with any other cloud-based technology, cloud storage offers some distinct advantages. But it also raises some concerns for companies, primarily over security and administrative control.

Examples
There are three main types of cloud storage: block, file, and object. Each comes with its set of advantages:

Block storage
Traditionally employed in SANs, block storage is also common in cloud storage environments. In this storage model, data is organized into large volumes called “blocks." Each block represents a separate hard drive. Cloud storage providers use blocks to split large amounts of data among multiple storage nodes. Block storage resources provide better performance over a network thanks to low IO latency (the time it takes to complete a connection between the system and client) and are especially suited to large databases and applications.

Used in the cloud, block storage scales easily to support the growth of your organization’s databases and applications. Block storage would be useful if your website captures large amounts of visitor data that needs to be stored.







File storage
The file storage method saves data in the hierarchical file and folder structure with which most of us are familiar. The data retains its format, whether residing in the storage system or in the client where it originates, and the hierarchy makes it easier and more intuitive to find and retrieve files when needed. File storage is commonly used for development platforms, home directories, and repositories for video, audio, and other files.

Object storage
Object storage differs from file and block storage in that it manages data as objects. Each object includes the data in a file, its associated metadata, and an identifier. Objects store data in the format it arrives in and make it possible to customize metadata in ways that make the data easier to access and analyze. Instead of being organized in files or folder hierarchies, objects are kept in repositories that deliver virtually unlimited scalability. Since there is no filing hierarchy and the metadata is customizable, object storage allows you to optimize storage resources in a cost-effective way.
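A brief hedged sketch of that metadata idea, assuming boto3, an already-configured S3-compatible account, and made-up bucket, key and file names; the object carries descriptive metadata with it instead of living in a folder hierarchy.

import boto3

s3 = boto3.client("s3")   # assumes credentials and endpoint are already configured

with open("cam07.mp4", "rb") as body:
    s3.put_object(
        Bucket="site-media",
        Key="cctv/2020-02-01/cam07.mp4",
        Body=body,
        Metadata={"camera": "north-gate", "retention": "90-days"},  # context travels with the object
    )

meta = s3.head_object(Bucket="site-media", Key="cctv/2020-02-01/cam07.mp4")["Metadata"]
print(meta)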

Cloud storage for business
A variety of cloud storage services is available for just about every kind of business, from sole proprietors to large enterprises.

If you run a small business, cloud storage could make sense, particularly if you don’t have the resources or skills to manage storage yourself.  Cloud storage can also help with budget planning by making storage costs predictable, and it gives you the ability to scale as the business grows.

If you work at a larger enterprise (e.g., a manufacturing company, financial services, or a retail chain with dozens of locations), you might need to transfer hundreds of gigabytes of data for storage on a regular basis. In these cases, you should work with an established cloud storage provider that can handle your volumes. In some cases, you may be able to negotiate custom deals with providers to get the best value.

Security
Cloud storage security is a serious concern, especially if your organization handles sensitive data like credit card information and medical records. You want assurances your data is protected from cyber threats with the most up-to-date methods available. You will want layered security solutions that include endpoint protection, content and email filtering and threat analysis, as well as best practices that comprise regular updates and patches. And you need well-defined access and authentication policies.

Most cloud storage providers offer baseline security measures that include access control, user authentication, and data encryption. Ensuring these measures are in place is especially important when the data in question involves confidential business files, personnel records, and intellectual property. Data subject to regulatory compliance may require added protection, so you need to check that your provider of choice complies with all applicable regulations.


Whenever data travels, it is vulnerable to security risks. You share the responsibility for securing data headed for a storage cloud. Companies can minimize risks by encrypting data in motion and using dedicated private connections (instead of the public internet) to connect with the cloud storage provider.


Backup
Data backup is as important as security. Businesses need to back up their data so they can access copies of files and applications— and prevent interruptions to business—if data is lost due to cyberattack, natural disaster, or human error.

Cloud-based data backup and recovery services have been popular from the early days of cloud-based solutions. Much like cloud storage itself, you access the service through the public internet or a private connection. Cloud backup and recovery services free organizations from the tasks involved in regularly replicating critical business data to make it readily available should you ever need it in the wake of data loss caused by a natural disaster, cyber attack or unintentional user error.

Cloud backup offers the same advantages to businesses as storage—cost-effectiveness, scalability, and easy access. One of the most attractive features of cloud backup is automation. Asking users to continually back up their own data produces mixed results since some users always put it off or forget to do it. This creates a situation where data loss is inevitable. With automated backups, you can decide how often to back up your data, be it daily, hourly or whenever new data is introduced to your network.
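As a toy picture of automated backup, here is a minimal sketch with assumed folder paths; in practice you would rely on cron, the provider's backup agent or a dedicated tool rather than a sleep loop.

import datetime
import shutil
import time

SOURCE = "critical-data"        # folder to protect (assumed)
DEST = "/mnt/cloud-backup"      # provider-mounted backup target (assumed)

while True:
    stamp = datetime.date.today().isoformat()
    shutil.make_archive(f"{DEST}/critical-data-{stamp}", "zip", SOURCE)
    print("backup written for", stamp)
    time.sleep(24 * 60 * 60)    # crude daily cadence, for illustration only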

Backing up data off-premise in a cloud offers an added advantage: distance. A building struck by a natural disaster, terror attack, or some other calamity could lose its on-premise backup systems, making it impossible to recover lost data. Off-premise backup provides insurance against such an event.

Servers
Cloud storage servers are virtual servers—software-defined servers that emulate physical servers. A physical server can host multiple virtual servers, making it easier to provide cloud-based storage solutions to multiple customers. The use of virtual servers boosts efficiency because physical servers otherwise typically operate below capacity, which means some of their processing power is wasted.

This approach is what enables cloud storage providers to offer pay-as-you-go cloud storage, and to charge only for the storage capacity you consume. When your cloud storage servers are about to reach capacity, the cloud provider spins up another server to add capacity—or makes it possible for you to spin up an additional virtual machine on your own.

Open source
If you have the expertise to build your own virtual cloud servers, one of the options available to you is open source cloud storage. Open source means the software used in the service is available to users and developers to study, inspect, change and distribute.

Open source cloud storage is typically associated with Linux and other open source platforms that provide the option to build your own storage server. Advantages of this approach include control over administrative tasks and security.

Cost-effectiveness is another plus. While cloud-based storage providers give you virtually unlimited capacity, it comes at a price. The more storage capacity you use, the higher the price gets. With open source, you can continue to scale capacity as long as you have the coding and engineering expertise to develop and maintain a storage cloud.

Different open source cloud storage providers offer varying levels of functionality, so you should compare features before deciding which service to use. Some of the functions available from open source cloud storage services include the following:

Syncing files between devices in multiple locations
Two-factor authentication
Auditing tools
Data transfer encryption
Password-protected sharing

Pricing

As mentioned, cloud storage helps companies cut costs by eliminating in-house storage infrastructure. But cloud storage pricing models vary. Some cloud storage providers charge a monthly cost per gigabyte, while others charge fees based on stored capacity. Fees vary widely; you may pay $1.99 or $10 for 100 GB of storage monthly, depending on the provider you choose. Additional fees for transferring data from your network to the storage cloud are usually included in the overall service price.

Providers may charge additional fees on top of the basic cost of storage and data transfer. For instance, you may incur an extra fee every time you access data in the cloud to make changes or deletions, or to move data from one place to another. The more of these actions you perform on a monthly basis, the higher your costs will be. Even if the provider includes some base level of activity in the overall price, you will incur extra charges if you exceed the allowable limit.

Providers may also factor the number of users accessing the data, how often users access data, and how far the data has to travel into their charges. They may charge differently based on the types of data stored and whether the data requires added levels of security for privacy purposes and regulatory compliance.
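To see how these factors combine, here is a toy monthly cost estimate; the rates below are assumptions for illustration only, not any provider's real price list.

STORAGE_RATE_PER_GB = 0.02      # $ per GB stored per month (assumed)
EGRESS_RATE_PER_GB = 0.09       # $ per GB transferred out (assumed)
REQUEST_RATE_PER_10K = 0.05     # $ per 10,000 API operations (assumed)

def monthly_cost(stored_gb, egress_gb, requests):
    return (stored_gb * STORAGE_RATE_PER_GB
            + egress_gb * EGRESS_RATE_PER_GB
            + (requests / 10_000) * REQUEST_RATE_PER_10K)

# 500 GB stored, 40 GB downloaded, 120,000 operations in a month
print(f"${monthly_cost(500, 40, 120_000):.2f}")   # -> $14.20

Swap in the published rates of whichever provider you are evaluating to compare offers on equal terms.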

Examples
Cloud storage services are available from dozens of providers to suit all needs, from those of individual users to multinational organizations with thousands of locations. For instance, you can store emails and passwords in the cloud, as well as files like spreadsheets and Word documents for sharing and collaborating with other users. This capability makes it easier for users to work together on a project, which explains why file transfer and sharing are among the most common uses of cloud storage services.

Some services provide file management and syncing, ensuring that versions of the same files in multiple locations are updated whenever someone changes them. You can also get file management capability through cloud storage services. With it, you can organize documents, spreadsheets, and other files as you see fit and make them accessible to other users. Cloud storage services also can handle media files, such as video and audio, as well as large volumes of database records that would otherwise take up too much room inside your network.

Whatever your storage needs, you should have no trouble finding a cloud storage service to deliver the capacity and functionality you need.

Cloud storage and IBM
IBM Cloud Storage offers a comprehensive suite of cloud storage services, including out-of-the-box solutions, components to create your own storage solution, and standalone and secondary storage.

Benefits of IBM Cloud solutions include:--

Global reach
Scalability
Flexibility
Simplicity

You also can take advantage of IBM’s automated data backup and recovery system, which is managed through the IBM Cloud Backup WebCC browser utility. The system allows you to securely back up data in one or more IBM cloud data centers around the world.

Storage software is predicted to overtake storage hardware by 2020, by which time it will need to manage 40 zettabytes (40 sextillion bytes) of data.

As you’re moving apps and workloads to the cloud, you’ll have new options for data storage. It’s just as important to keep your data safe in the cloud as it was on-premises, though.

Google Cloud meets these needs with a variety of Persistent Disk features. Persistent Disk is Google’s high-performance block storage option that you can use with either Compute Engine or Google Kubernetes Engine (GKE).


Note that disks and snapshots are always encrypted, and data is replicated multiple times to provide extraordinarily high durability. Here, we’ll dive into three generally available features that help you meet backup and recovery needs in the way that works best for your business data.

Google Kubernetes Engine (GKE) is a management and orchestration system for Docker containers and container clusters that run within Google's public cloud services. GKE is based on Kubernetes, Google's open-source container management system.



GKE gives you complete control over every aspect of container orchestration, from networking, to storage, to how you set up observability—in addition to supporting stateful application use cases. However, if your application does not need that level of cluster configuration and monitoring, then fully managed Cloud Run might be the right solution for you.

Fully managed Cloud Run is an ideal serverless platform for stateless containerized microservices that don’t require Kubernetes features like namespaces, co-location of containers in pods (sidecars) or node allocation and management.

Data Recovery
Beyond just where and how your data is stored, it’s important to consider how easy it is to get your data back from all of these services. With sync and share services, retrieving a lot of data, especially if you are in a high-data tier, can be cumbersome and take a while. Generally, the sync and share services only allow customers to download files over the internet. 

If you are trying to download more than a couple gigabytes of data, the process can take time and can be fraught with errors. If the process of downloading from your sync/share service will take three days, one thing to consider is having to keep the computer online the entire time or risk an error if the download were to get interrupted. 

One thing to be wary of with syncing and sharing services: if you share folders or directories with others and they add or remove files, those files will be added or removed from your computer as well.


With cloud storage services, you can usually only retrieve data over the internet as well, and you pay for both the storage and the egress of the data, so retrieving a large amount of data can be both expensive and time consuming.

With cloud storage, data is distributed amongst bi-coastal data centers. Syncing technology makes it possible to link up and update data quickly, but storing data in the cloud makes syncing unnecessary. When all your data is stored in the cloud, you know exactly where every piece of information is at any given time. 




Tech giants like Amazon (AWS), Microsoft (Azure) and Google (Compute Engine) today make it possible to create virtual machines in their cloud networks to support or even replace physical servers.


Desktop as a service (DaaS) is a cloud computing solution in which virtual desktop infrastructure is outsourced to a third-party provider. Desktop as a service is also known as a virtual desktop or hosted desktop service.


Containers as a service (CaaS) is a cloud service that allows software developers and IT departments to upload, organize, run, scale, manage and stop containers by using container-based virtualization.
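To give a feel for the container-based virtualization that CaaS platforms manage at scale, here is a hedged local sketch, assuming the Docker SDK for Python and a running Docker engine; the image name and port mapping are just examples.

import docker

client = docker.from_env()                    # talk to the local container runtime
web = client.containers.run(
    "nginx:alpine",                           # a small example image
    detach=True,
    ports={"80/tcp": 8080},                   # expose the container on localhost:8080
    name="demo-web",
)
print(web.short_id, "started; stop it with: docker stop demo-web")

A CaaS offering wraps this same run/scale/stop lifecycle behind a managed control plane instead of a single local engine.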






If your business needs a virtual machine, opt for Infrastructure as a Service.

Cloud servers have all the software they require to run and can function as independent units. A virtual server shares hardware and software resources with other operating systems (OS), unlike a dedicated server. Because they are cost-effective and provide faster resource control, virtual servers are popular in web hosting environments. So why are virtual servers better?

It’s easy to confuse virtualization and cloud, particularly because they both revolve around creating useful environments from abstract resources. However, virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system, and clouds are IT environments that abstract, pool, and share scalable resources across a network. 

To put it simply, virtualization is a technology, where cloud is an environment.

You’ve created a cloud if you’ve set up an IT system that:--

Can be accessed by other computers through a network.
Contains a repository of IT resources.
Can be provisioned and scaled quickly.

Pros of Virtual Servers--

Virtualization has its own benefits, such as server consolidation and improved hardware utilization, which reduces the need for power, space, and cooling in a datacenter. Virtual machines are also isolated environments, so they are a good option for testing new applications or setting up a production environment.

Choice between cloud and virtual private server (VPS) hosting-

A VPS, or virtual private server, is a form of multi-tenant cloud hosting in which virtualized server resources are made available to an end user over the internet via a cloud or hosting provider.

Each VPS is installed on a physical machine, operated by the cloud or hosting provider, that runs multiple VPSs. But while the VPSs share a hypervisor and underlying hardware, each VPS runs its own operating system (OS) and applications and reserves its own portion of the machine's resources (memory, compute, etc.).

A VPS offers levels of performance, flexibility, and control somewhere between those offered by multi-tenant shared hosting and single-tenant dedicated hosting. 

While it might seem counterintuitive that the multi-tenant VPS arrangement would be called ‘private’—especially when single-tenant options are available—the term ‘VPS’ is most commonly used by traditional hosting providers to distinguish it from shared hosting, a hosting model where all the hardware and software resources of a physical machine are shared equally across multiple users.

VPS hosting is a partition of a dedicated server machine. It acts like an autonomous server, but is actually a self-contained virtual environment on a powerful physical host system that also hosts other virtual servers for other clients. Although you share space on the physical host, your virtual server is independent of the others.

The security of VPS hosting is almost on par with that of a dedicated physical server. The VPS is independent of any other VPSes on the same physical host, as if it were a separate machine, but poor security measures taken by the owner of one VPS could affect others on the same physical server. 

However, this possibility is much less likely than with shared hosting. The centralized location of the physical host offers added security to those operations with critical data whose location must be known and restricted to comply with data security regulations.

The security of cloud hosting is also quite high. Your server is completely separated from other clients, as with a VPS. However, the web-based nature of the infrastructure might make it vulnerable to attacks since it is physically distributed and thus harder to secure. In addition, since the data is housed in many locations, it may not be possible to comply with some regulations on data security.



Cloud servers ordinarily have a lower entry cost than dedicated servers. However, cloud servers tend to lose this advantage as a company scales and requires more resources. There are also features that can increase the cost of both solutions. Cloud servers are typically billed on a monthly OpEx model.

The term ‘managed cloud hosting’ is sometimes also used to refer to the managed hosting of a specific application such as WordPress or some other content or commerce application in the cloud. In such cases the provider does a fully managed installation of the application on their cloud infrastructure which is specially tuned to run that particular application at the best possible performance.

In this way you know you have the best possible hosting set up for high performance of that particular application.

Providers of this type of managed cloud hosting often add application-specific services as well, such as:--

Managed backups
Application specific security optimizations
Managed content distribution networks (CDN)

Managed application caching and application specific performance optimizations




Many smaller firms simply leverage public backup services to gain an added layer of security for important documents and media, while larger enterprises take advantage of a growing number of SaaS, IaaS, PaaS, and DBaaS providers

Database as a service (DBaaS) is a cloud computing service model that provides users with some form of access to a database without the need to set up physical hardware, install software, or configure for performance. Think of it as a more focused form of PaaS. A DBaaS instance provides customers with a fully managed and dedicated SQL Server environment, capable of running multiple databases.

In database systems, SQL statements are used to send queries from a client program to the database, allowing users to perform a wide range of fast data manipulation. Put simply, SQL is the main language that lets a database server store and edit the data it holds.
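A minimal sketch of issuing SQL statements from a client program is shown below. It uses Python's built-in sqlite3 module only so the example is self-contained; against a DBaaS you would open the connection with the provider's driver and connection string instead, but the SQL itself is the same.

# Sketch: create a table, insert a row, query it back.
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for a managed database connection
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO customers (name) VALUES (?)", ("Acme Ltd",))
conn.commit()
for row in cur.execute("SELECT id, name FROM customers"):
    print(row)                        # -> (1, 'Acme Ltd')
conn.close()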


Here are the 15 most popular DBaaS brands:--

Microsoft Azure SQL Database
Amazon RDS
Amazon DynamoDB
Microsoft Azure Table Storage
Microsoft DocumentDB
MongoDB Atlas
Google Cloud SQL
Google Cloud Datastore
Amazon Aurora
Oracle Database Cloud Service
Amazon SimpleDB
Google Cloud Bigtable
IBM Cloudant
Compose (IBM)
Google Cloud Spanner


SQL Server is a relational database management system (RDBMS) developed by Microsoft. It is primarily designed and developed to compete with MySQL and Oracle database.

Microsoft SQL Server Express is a version of Microsoft's SQL Server relational database management system that is free to download, distribute and use. It comprises a database specifically targeted for embedded and smaller-scale applications.

SQL Server allows you to run multiple services at a go, with each service having separate logins, ports, databases, etc. Critical components of SQL Server are Database Engine, SQL Server, SQL Server Agent, SQL Server Browser, SQL Server Full-Text Search, etc.

The term "Network-as-a-Service" (NaaS) is often used along with other marketing terms like cloud computing, along with acronyms such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and Communication-as-a-Service (CaaS).

Anything-as-a-service, or XaaS, refers to the growing diversity of services available over the Internet via cloud computing as opposed to being provided locally, or on premises.


Monitoring as a service (MaaS) is one of many cloud delivery models under anything as a service (XaaS). It is a framework that facilitates the deployment of monitoring functionalities for various other services and applications within the cloud.
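To illustrate the kind of check a monitoring service runs on a schedule, here is a hedged Python sketch that polls an HTTP endpoint and records status and latency. The URL is a placeholder; a real MaaS platform adds scheduling, alerting and dashboards on top of checks like this.

# Sketch: one health check, returning (status_code, latency) or (None, error).
import time
import requests

def check(url):
    start = time.time()
    try:
        resp = requests.get(url, timeout=10)
        return resp.status_code, round(time.time() - start, 3)
    except requests.RequestException as exc:
        return None, str(exc)

# Hypothetical usage:
# status, latency = check("https://example.com/health")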

Companies that serve the medical industry must abide by HIPAA regulations stating that data must be stored in two separate locations. Law firms may need permission from clients before storing personal client data on a cloud server. Furthermore, the fact that an on-premise server does not require an internet connection is a benefit for certain types of companies.

Some companies will need both on-premise and cloud storage. Others may prefer to manage their own customized storage solution without outside help.

 In June 2019 Google experienced a five-hour outage that took down many of its other services. Apple iCloud had a similar outage in 2015, but it lasted seven hours. Unexpected outages like these can bring your entire business to a grinding halt until they are fixed.

When a firm is unaware of the risk posed by workers using cloud services, the employees could be sharing just about anything without raising eyebrows. Insider threats have become common in the modern market. For instance, if a salesman is about to resign from one firm to join a competitor firm, they could upload customer contacts to cloud storage services and access them later.


The example above is only one of the more common insider threats today. Many more risks are involved with exposing private data to public servers.

An IT system audit must check the compliance of IT system vendors and data in the cloud servers. These are the three crucial areas that need to be frequently audited by cloud service customers:--

i. Security in the cloud service facility,
ii. Access to the audit trail, and
iii. The internal control environment of the cloud service provider.


A cloud auditor is a party that can perform an independent examination of cloud service controls with the intent to express an opinion thereon. A cloud auditor can evaluate the services provided by a cloud provider in terms of security controls, privacy impact, performance, etc.

Cloud storage services do not guarantee the integrity of the data that users store in the cloud. Thus, public auditing is necessary, in which a third-party auditor (TPA) is delegated to audit the integrity of the outsourced data. This system allows users to enjoy on-demand cloud storage services without the burden of continually auditing their data integrity. 

However, certain TPAs might deviate from the public auditing protocol and/or collude with the cloud servers. One proposed answer is an identity-based public auditing (IBPA) scheme for cloud storage systems. In IBPA, the nonces in a blockchain are employed to construct unpredictable and easily verified challenge messages, thereby preventing malicious TPAs from forging auditing results to deceive users. 

Users need only verify the TPAs’ auditing results in batches to ensure the integrity of their data stored in the cloud. A detailed security analysis shows that IBPA can preserve data integrity against various attacks.
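The following is a greatly simplified Python illustration of the underlying idea of a storage-integrity audit: the data owner keeps hashes of data blocks, the auditor challenges a random sample of block indices, and the storage side must return blocks whose hashes match. It is not the IBPA scheme itself; the blockchain nonces and identity-based signatures described above are not modelled here.

# Simplified challenge-response integrity check over data blocks.
import hashlib
import random

def split_blocks(data, size=4096):
    return [data[i:i + size] for i in range(0, len(data), size)]

def fingerprints(blocks):
    # Hashes kept by the data owner / auditor before outsourcing the data.
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(stored_blocks, known_hashes, sample=5):
    # The "challenge": a random sample of block indices.
    indices = random.sample(range(len(known_hashes)), min(sample, len(known_hashes)))
    for i in indices:
        if hashlib.sha256(stored_blocks[i]).hexdigest() != known_hashes[i]:
            return False
    return True

data = b"example payload " * 10000
blocks = split_blocks(data)          # what the storage provider holds
hashes = fingerprints(blocks)        # what the owner/auditor retains
print(audit(blocks, hashes))         # True unless the stored blocks were altered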

Escrow as a Service (or EaaS) is a simple Cloud Resilience solution for managing the risks associated with cloud-hosted software. It provides access to your unique application and data when your software supplier is no longer around to support you.

An EaaS solution begins with a contractual agreement between your organization and your software supplier. After this is drawn up, your technology, data and associated materials will be held and secured by NCC Group as a trusted, independent third party.

Depending on the level of assurance required by your organization, you can then choose between two solutions.

An EaaS Access solution gives you fast access to the live cloud environment where your supplier is hosting your software if they can no longer support you.


Alternatively, EaaS Replicate provides a separately hosted and mirrored instance of your unique cloud environment stored in NCC Group’s secure cloud vault.



A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal. 

The goal of distributed computing is to make such a network work as a single computer. Distributed systems offer many benefits over centralized systems, including scalability: the system can easily be expanded by adding more machines as needed.
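Here is a toy Python sketch of message passing between two "nodes". For simplicity the nodes are two threads on one machine talking over a local TCP socket; in a real distributed system they would be separate networked computers, and the port number here is arbitrary.

# Two nodes coordinating by passing messages over a socket.
import socket
import threading

ready = threading.Event()

def node_b(port=50007):
    srv = socket.socket()
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()                              # node B is now accepting messages
    conn, _ = srv.accept()
    msg = conn.recv(1024).decode()
    conn.sendall(("ack: " + msg).encode())   # coordinate by replying
    conn.close()
    srv.close()

threading.Thread(target=node_b, daemon=True).start()
ready.wait()

node_a = socket.socket()
node_a.connect(("127.0.0.1", 50007))
node_a.sendall(b"task-42 complete")
print(node_a.recv(1024).decode())            # -> ack: task-42 complete
node_a.close()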


Hadoop is an open-source distributed processing framework that uses simple programming models to manage data processing and storage for big data applications running across clusters of computers. Cloud computing, on the other hand, is a model where processing and storage resources can be accessed from any location via the internet. 



Hadoop runs on clusters of commodity hardware, providing massive storage for any kind of data, enormous processing power, and the ability to handle virtually limitless concurrent tasks or jobs. All the modules in Hadoop are designed with the fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
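A sketch of the programming model Hadoop popularised (MapReduce) is shown below, as a word count in the Hadoop Streaming style. On a real cluster the mapper and reducer would be separate scripts reading stdin and writing stdout, run in parallel across many machines; here they are combined locally just to show the shape.

# Sketch of a MapReduce-style word count (not distributed, for illustration only).
from collections import defaultdict

def mapper(lines):
    for line in lines:
        for word in line.split():
            yield word.lower(), 1            # emit (key, value) pairs

def reducer(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n                    # aggregate per key
    return counts

if __name__ == "__main__":
    text = ["hadoop stores data", "hadoop processes data"]
    print(dict(reducer(mapper(text))))       # {'hadoop': 2, 'stores': 1, ...}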


Cloud computing provides the ability to store and correlate relevant pieces of data, and to update them at regular intervals based on new incoming data streams. Artificial intelligence provides the cherry on top: now that you have a place to store data, what do you do with it? 

Technology like knowledge catalogs and cognitive search platforms would allow institutions and investors to define collections from which they can create connections and search based on context rather than just keywords. This is incredibly important as there are petabytes of data that are processed and transmitted through the public internet on a daily basis.

The idea of a computer is really just an abstract model for systems that store and manipulate data according to a set of instructions called algorithms. The implementation of this model can take many forms. We are used to thinking of it as the personal computer on our desktop, but with the rise of mobile computing, the internet, and cloud computing, computing is becoming pervasive and, at the same time, increasingly integrated through cloud platforms. 

Cloud computing platforms have been a key innovation of the past decade. Although the idea is relatively straightforward, centralizing computing resources within a datacenter and then delivering them as a service over a network, the outcomes of doing this are extremely impactful. 

For many workloads, the demand for computational resources does not scale linearly with the size of the data but quadratically or cubically; when you are talking about billions of data points, that causes problems, and new computing platforms are needed to mitigate it.

Throughout human history, computing power was a scarce resource, and until the past few years, high-end computer processing and storage offerings were out of the reach of all except the largest of organizations, and even then at a cost of millions of dollars. 

However, with the advent of global-scale cloud computing, high-end computing is now available to organizations of almost all sizes at low cost and on demand. The arrays of billion-dollar-scale data centers owned and operated by Amazon, Google, and Microsoft are now at the fingertips of many. Many of the largest applications on the internet today run on cloud computing infrastructure. 

Take Airbnb, for example, which coordinates accommodation for roughly half a million people each night across 65,000 cities, with its platform running almost entirely on Amazon Web Services. Likewise, Netflix delivers billions of hours of video streaming globally each month by running on the Amazon cloud. Indeed, AWS is so widely used that when it doesn’t work right, much of the internet is in jeopardy.

A basic driver behind many of the recent business disruptions in a wide range of industries is the transformation of computing resources from a scarce to an abundant resource. Combining cloud computing with advances in algorithms and mobile computing, we get machine learning platforms that are able to coordinate and run ever larger and more complex service systems. 

This allows an increasing swath of human activity to be captured by algorithms, which allows it to be split apart, transformed, altered, and recombined. These platforms bring about an ever growing integration between technology and services. 

As data and information processing become more pervasive and computation becomes embedded within virtually all systems, traditional divides are going to become ever more blurred; information technology and socio-economic organization will become ever more integrated and inseparable. As the saying goes, every company will become a technology company, and this will fundamentally change the structure and nature of those organizations.

What is happening today is a convergence of these cloud computing platforms, new algorithms and the rise of the services economy. Recent years have seen the emergence of physical products that are digitally networked with other products and with information systems to enable the creation of smart service systems which are coordinated via algorithms. 

What is happening as we move into the services economy is that products become commoditized; people stop wanting to own things, and what they want is to be able to push a button on their smartphone and have the thing delivered as a service. An app for food services, an app for transport services, an app for accommodation, and so on; all of these services are delivered on demand via cloud platforms that are coordinated by advanced algorithms.

Services are not like products: whereas products were mass-produced, services have to be personalized; products were static, one-off purchases, services are processes; products were about things, services are about functionality and value. 

Service systems are all about the coordination of different components around the end user’s specific needs; to do that, you need lots of data, advanced analytics, and cloud computing. We can already see the data-driven services organization in the form of Uber, Alibaba or DiDi Chuxing, which own little themselves and instead use data and advanced analytics within their platforms to coordinate resources toward delivering a service. Service companies like DiDi would be impossible without data.

What will differentiate one company from another is not how fancy their product is, but how seamless and integrated their service system is and this is done through their capacity to master data and analytics. 

Organizations will become platforms and will compete based on their intelligence, which will be contained in their algorithms and people. In short, the physical technologies of the industrial age are being converted into services and connected to cloud platforms wherein advanced algorithms coordinate them. 

Everything becomes data: your physical activity, traffic, purchases. That data gets moved to the cloud, where it is processed and compared with data from other devices; it is no longer just about what you do but about what everyone else does too, which keeps making the system smarter and smarter. 

This is the essence of the process we are going through today: datafication converting everything into data, cloud platforms aggregating it, and machine learning processing it and iterating on the results. 

Through servitization and dematerialization, organizations become differentiated by their data and algorithms, as algorithmic systems extend to coordinate more and more spheres of human activity and we move further into the unknown world of the information age.

Best free cloud storage in 2019--

Google Drive.
pCloud.
Microsoft OneDrive.
Dropbox.
MediaFire.

Cloud Backup (e.g. Backblaze Computer Backup and Carbonite)

These services should typically work automatically in the background. The user does not usually need to take any action like setting up and working out of specific folders like with sync services (though some online services do differ and you may want to make sure there are no gotchas, like common directories being excluded by default). 

Backup services typically back up new or changed data that is on your computer to another location. Before the cloud became an available and popular destination, that location was primarily a CD or an external hard drive, but as cloud storage became more readily available and affordable, it quickly became the most popular offsite storage medium. 

Typically cloud backup services have fixed pricing, and if there is a system crash or data loss, all backed up data is available for restore. In addition, these services have version history and rollback features in case there is data loss or accidental file deletion.

Cloud backup, also known as online backup or remote backup, is a strategy for sending a copy of a physical or virtual file or database to a secondary, off-site location for preservation in case of equipment failure or catastrophe. The secondary server and storage systems are usually hosted by a third-party service provider, who charges the backup customer a fee based on storage space or capacity used, data transmission bandwidth, number of users, number of servers or number of times data is accessed.

Implementing cloud data backup can help bolster an organization's data protection strategy without increasing the workload of information technology (IT) staff. The labor-saving benefit may be significant and enough of a consideration to offset some of the additional costs associated with cloud backup, such as data transmission charges.

How cloud backup works

In an organization’s data center, a backup application copies data and stores it on a different media or another storage system for easy access in the event of a recovery situation. While there are multiple options and approaches to off-site backup, cloud backup serves as the off-site facility for many organizations. In an enterprise, the company might own the off-site server if it hosts its own cloud service, but the chargeback method would be similar if the company uses a service provider to manage the cloud backup environment.

Basic steps of cloud backup

While the number of steps may vary based on the backup method or type, the basic process involved with cloud backup is similar across providers.

There are a variety of approaches to cloud backup, with available services that can easily fit into an organization's existing data protection process. Varieties of cloud backup include:

Backing up directly to the public cloud. One way to store organizational resources is by duplicating resources in the public cloud. This method entails writing data directly to cloud providers, such as AWS or Microsoft Azure. The organization uses its own backup software to create the data copy to send to the cloud storage service. 

The cloud storage service then provides the destination and safekeeping for the data, but it does not specifically provide a backup application. In this scenario, it is important that the backup software is capable of interfacing with the cloud's storage service. Additionally, with public cloud options, IT professionals may need to look into supplemental data protection procedures.
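As a concrete illustration of this first approach, a minimal sketch of writing a backup copy directly to public cloud object storage with boto3 is shown below. The bucket name and file path are placeholders; Azure Blob Storage and Google Cloud Storage have equivalent client libraries.

# Sketch: upload a locally produced backup artifact to S3 (names are placeholders).
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="backups/2019-10-01-full.tar.gz",   # local backup artifact
    Bucket="my-company-backups",                 # placeholder bucket name
    Key="daily/2019-10-01-full.tar.gz",          # object key in the bucket
)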

Backing up to a service provider. In this scenario, an organization writes data to a cloud service provider that offers backup services in a managed data center. The backup software that the company uses to send its data to the service may be provided as part of the service, or the service may support specific commercially-available backup applications.

Choosing a cloud-to-cloud (C2C) backup. These services are among the newest offerings in the cloud backup arena. They specialize in backing up data that already lives in the cloud, either as data created using a software as a service (SaaS) application or as data stored in a cloud backup service. As its name suggests, a cloud-to-cloud backup service copies data from one cloud to another cloud. The cloud-to-cloud backup service typically hosts the software that handles this process.

Using online cloud backup systems. There are also hardware alternatives that facilitate backing up data to a cloud backup service. These appliances are all-in-one backup machines that include backup software and disk capacity along with the backup server. The appliances are about as close to plug-and-play as backup gets, and most of them also provide a seamless (or nearly so) link to one or more cloud backup services or cloud providers. 

The list of vendors that offer backup appliances that include cloud interfaces is long, with Quantum, Unitrends, Arcserve, Rubrik, Cohesity, Dell EMC, StorageCraft and Asigra active in this arena. These appliances typically retain the most recent backup locally, in addition to shipping it to the cloud backup provider, so that any required recoveries can be made from the local backup copy, saving time and transmission costs.

When an organization engages a cloud backup service, the first step is to complete a full backup of the data that needs to be protected. This initial backup can sometimes take days to finish uploading over a network as a result of the large volume of data that is being transferred. In a 3-2-1 backup strategy, where an organization has three copies of data on two different media, at least one copy of the backed up data should be sent to an off-site backup facility so that it is accessible even if on-site systems are unavailable.

Using a technique called cloud seeding, a cloud backup vendor sends a storage device -- such as a disk drive or tape cartridge -- to its new customer, which then backs up the data locally onto the device and returns it to the provider. This process removes the need to send the initial data over the network to the backup provider.

If the amount of data in the initial backup is substantial, the cloud backup service may provide a full storage array for the seeding process. These arrays are typically small network-attached storage (NAS) devices that can be shipped back and forth relatively easily. After the initial seeding, only changed data is backed up over the network.



Cloud backup services are typically built around a client software application that runs on a schedule determined by the purchased level of service and the customer's requirements. For example, if the customer has contracted for daily backups, the application collects, compresses, encrypts and transfers data to the cloud service provider's servers every 24 hours. 

To reduce the amount of bandwidth consumed and the time it takes to transfer files, the service provider might only provide incremental backups after the initial full backup.
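The collect-compress-encrypt-transfer cycle described above can be sketched roughly as follows, in Python. The file paths are hypothetical, the transfer step is a stub, and key handling is simplified; it assumes the third-party 'cryptography' package for the encryption step.

# Sketch of what a backup client does on each scheduled run.
import tarfile
from cryptography.fernet import Fernet

def package(paths, archive="backup.tar.gz"):
    with tarfile.open(archive, "w:gz") as tar:   # collect + compress
        for p in paths:
            tar.add(p)
    return archive

def encrypt_file(path, key):
    f = Fernet(key)
    with open(path, "rb") as src:
        token = f.encrypt(src.read())            # encrypt before transfer
    out = path + ".enc"
    with open(out, "wb") as dst:
        dst.write(token)
    return out

key = Fernet.generate_key()                      # in practice, stored/escrowed safely
# Hypothetical usage:
# artifact = encrypt_file(package(["reports/", "db_dump.sql"]), key)
# upload(artifact)  # stub: hand the encrypted archive to the transfer step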

Cloud backup services often include the software and hardware necessary to protect an organization's data, including applications for Exchange and SQL Server. Whether a customer uses its own backup application or the software the cloud backup service provides, the organization uses that same application to restore backed up data. Restorations could be on a file-by-file basis, by volume or a full restoration of the complete backup.

If the volume of data to be restored is very large, the cloud backup service may ship the data on a complete storage array that the customer can hook up to its servers to recover its data. This is, in effect, a reverse seeding process. Restoring a large amount of data over a network can take a long time.

A key feature of cloud backup restorations is that they can be done anywhere from nearly any kind of computer. For example, an organization could recover its data directly to a disaster recovery site in a different location if its data center is unavailable.

Types of backup

In addition to the various approaches to cloud backup, there are also multiple backup methods to consider. While cloud backup providers give customers the option to choose the backup method that best fits their needs and applications, it is important to understand the differences among the three main types.

Full backups copy the entire data set every time a backup is initiated. As a result, they provide the highest level of protection. However, most organizations cannot perform full backups frequently because they can be time-consuming and take up too much storage capacity.

Incremental backups only back up the data that has been changed or updated since the last backup. This method saves time and storage space, but can make it more difficult to perform a complete restore. Incremental is a common form of cloud backup because it tends to use fewer resources.

Differential backups are similar to incremental backups because they only contain data that has been altered. However, differential backups back up data that has changed since the last full backup, rather than the last backup in general. This method solves the problem of difficult restores that can arise with incremental backups.
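The difference between the last two methods comes down to which timestamp a file is compared against, as in this small Python sketch. File names and epoch times are hypothetical; real backup clients usually track state in a catalog rather than relying on modification times alone.

# Sketch: selecting files for incremental vs. differential backups.
import os

def changed_since(paths, since_timestamp):
    return [p for p in paths if os.path.getmtime(p) > since_timestamp]

def incremental(paths, last_backup_time):
    # Changed since the last backup of any type.
    return changed_since(paths, last_backup_time)

def differential(paths, last_full_backup_time):
    # Changed since the last FULL backup.
    return changed_since(paths, last_full_backup_time)

# Hypothetical usage:
# files = ["ledger.db", "notes.txt"]
# to_copy = incremental(files, last_backup_time=1569888000.0)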

Pros and cons

Before choosing cloud backup as a backup strategy, it is important to weigh the potential pros and cons that are associated with using a third-party to store data. The advantages of cloud backup include:

Generally, it is cheaper to back up data using a cloud backup service compared to building and maintaining an in-house backup operation. The associated cloud backup costs will rise as the volume of backup data rises, but the economies are likely to continue to make cloud backup an attractive choice. 

Some providers may offer free cloud backup, but the amount of backup capacity is typically limited which makes free backup appropriate for some home users and only the smallest of companies.

The cloud is scalable, so even as a company's data grows, it can still be easily backed up to a cloud backup service. But organizations need to be wary of escalating costs as data volume grows. By weeding out dormant data and sending it to an archive, a company can better manage the amount of data it backs up to the cloud.

Managing cloud backups is simpler because service providers take care of many of the management tasks that are required with other forms of backup.

Backups are generally more secure against ransomware attacks because they are performed outside of the office network. Backup data is typically encrypted before it is transmitted from the customer's site to the cloud backup service, and usually remains encrypted on the service's storage systems.

Cloud backups help lower the risk of common data backup failures caused by improper storage, physical media damage or accidental overwrites.

A cloud backup service can help to consolidate a company's backup data because the service can back up main data center storage systems, remote office servers and storage devices, and end-user devices such as laptops and tablets.

Backed up data is accessible from anywhere.

Despite its many benefits, there are some disadvantages and challenges to using a cloud backup service, including:--

The backup speed depends on bandwidth and latency. For example, when many users are sharing the organization's internet connection, backups could be slower. This could be bothersome when backing data up, but could be an even greater issue when it is necessary to recover data from the service.
Costs can escalate when backing up large amounts of data to the cloud.

As with any use of cloud storage, data is moved outside of an organization's buildings and equipment and into the control of an outside provider. Therefore, it is incumbent to learn as much as possible about the cloud backup provider's equipment, physical security procedures, data protection process and fiscal viability.

Best practices

While strategies, technologies and providers widely vary, there are several agreed upon best practices when it comes to implementing cloud backup in the enterprise. In general, a few guidelines are:

Understand all aspects of the cloud backup provider service-level agreement (SLA) such as how data is backed up and protected, where vendor offices are located and how costs accumulate over time.
Do not rely on any one method or storage medium for backup.

Test backup strategies and data recovery checklists to ensure they are sufficient in the case of a disaster.

Have administrators routinely monitor cloud backups to make sure processes are successful and uncorrupted.

Choose a data restore destination that is easily accessible and does not overwrite existing data.

Make decisions about specific data or files to back up based on the criticality of the information to business operations.

Use metadata properly to enable the quick location and restoration of specific files.

Consider using encryption for data that must stay confidential.

Special considerations

When choosing a cloud backup service provider, there are a few additional considerations to weigh. Some companies have special needs related to data protection, but not all cloud backup providers are able to meet those needs. 

For example, if a company must comply with a specific regulation such as HIPAA or GDPR, the cloud backup service needs to be certified as compliant with data handling procedures as defined by that regulation. While an outside firm provides the backup, the customer is still responsible for the data, and could face serious consequences -- including steep fines -- if the cloud backup provider does not maintain the data appropriately.

Data archiving is another special consideration when selecting a cloud backup service. Archiving is different from routine data backup. Archived data is data that is not currently needed but still needs to be retained. Ideally, that data should be removed from the daily backup stream because it is likely unchanged and it unnecessarily increases the volume of backup data transmissions. 

Some cloud backup providers offer archiving services to complement their backup products. Archive data is generally stored on equipment geared to longer retentions and infrequent access, such as tape or low-performing disk systems. That type of storage is generally less expensive than storage used for active backups.

Cloud backup vs. cloud DR

Cloud backup and cloud disaster recovery are not the same, but they are connected. While cloud backup services can be tapped to recover data and resume operations after a disruptive event, they are not necessarily oriented to provide all the features and services that a true DRaaS offering would.

For example, to use the data that was saved to a cloud backup service to recover from a disaster, it would have to include much more than just data files, such as operating systems, application software, drivers and utilities. Users would have to set up their backup routines to include those elements specifically, such as by mirroring entire servers to the cloud backup service.









THIS POST IS NOW CONTINUED TO PART 10, BELOW--






CAPT AJIT VADAKAYIL
..
