
WHAT ARTIFICIAL INTELLIGENCE CANNOT DO , a grim note to the top 100 intellectuals of this planet , Part 14 - Capt Ajit Vadakayil



THIS POST IS CONTINUED FROM PART 13, BELOW--




OBJECTIVE AI CANNOT HAVE A VISION, IT CANNOT PRIORITIZE, IT CANNOT GLEAN CONTEXT, IT CANNOT TELL THE MORAL OF A STORY, IT CANNOT RECOGNIZE A JOKE, IT CANNOT DRIVE CHANGE, IT CANNOT INNOVATE, IT CANNOT DO ROOT CAUSE ANALYSIS, IT CANNOT MULTI-TASK, IT CANNOT DETECT SARCASM, IT CANNOT DO DYNAMIC RISK ASSESSMENT, IT IS UNABLE TO REFINE ITS OWN KNOWLEDGE INTO WISDOM, IT IS BLIND TO SUBJECTIVITY, IT CANNOT EVALUATE POTENTIAL, IT CANNOT SELF-IMPROVE WITH EXPERIENCE, IT DOES NOT UNDERSTAND THE BASICS OF CAUSE AND EFFECT, IT CANNOT JUDGE SUBJECTIVELY TO VETO/ABORT, IT CANNOT FOSTER TEAMWORK DUE TO RESTRICTED SCOPE, IT CANNOT MENTOR, IT CANNOT BE CREATIVE, IT CANNOT PATENT AN INVENTION, IT CANNOT SEE THE BIG PICTURE, IT CANNOT FIGURE OUT WHAT IS MORALLY WRONG, IT CAN BE FOOLED EASILY USING DECOYS WHICH WOULD NOT FOOL A CHILD, IT IS PRONE TO CATASTROPHIC FORGETTING, IT CANNOT EVEN SET A GOAL …

ON THE CONTRARY, IT CAN SPAWN FOUL AND RUTHLESS GLOBAL FRAUD (CLIMATE CHANGE DUE TO CO2) WITH DELIBERATE BLACK BOX ALGORITHMS. THESE ARE JUST A FEW AMONG MORE THAN 40 CRITICAL INHERENT DEFICIENCIES.


HUMANS HAVE THINGS A COMPUTER CAN NEVER HAVE.. A SUBCONSCIOUS BRAIN LOBE,  REM SLEEP WHICH BACKS UP BETWEEN RIGHT/ LEFT BRAIN LOBES AND FROM AAKASHA BANK,  A GUT WHICH INTUITS,   30 TRILLION BODY CELLS WHICH HOLD MEMORY,   A VAGUS NERVE , AN AMYGDALA ,  73% WATER IN BRAIN FOR MEMORY,  10 BILLION MILES ORGANIC DNA MOBIUS WIRING ETC.





1. https://ajitvadakayil.blogspot.com/2019/08/what-artificial-intelligence-cannot-do.html
2. https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do.html
3. https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do_29.html
4. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do.html
5. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_4.html
6. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_25.html
7. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_88.html
8. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_15.html
9. https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_94.html
10. https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do.html
11. https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_1.html
12. https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do.html
13. https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do_21.html
14. https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do_27.html






Without the ability to understand cause and effect, deep learning algorithms will never be able to explain why an x-ray image suggests the presence of an ailment. In some cases, comprehending cause and effect may seem like common sense to humans. 

But enabling AI to have the same epiphanies in reasoning would be revolutionary. Essentially, the algorithm forms hypotheses about the causal relationships between variables, then it tests how changing a variety of variables affects its theories. 

Through this iterative trial and error, the algorithm should be able to start differentiating between causation and correlation. For instance, it should still be able to recognize that cancer can be caused by smoking as opposed to hospital visits, even though both factors are heavily related to the situation. 

Deep learning, a subset of machine learning, relies on artificial neural networks to simulate the way human brains learn by strengthening neural connections. Basically, the neural network is fed data and trained on it repeatedly until it gradually adjusts its outcomes to be correct. This is how neural networks can eventually recognize cats in photos with extreme accuracy — after seeing hundreds of thousands of cat images, they start to “get the picture.”

But none of this training allows deep learning to generalize..

Correlation and causation are often confused because the human mind likes to find patterns even when they do not exist. We often fabricate these patterns when two variables appear to be so closely associated that one is dependent on the other. That would imply a cause and effect relationship where the dependent event is the result of an independent event. 

However, we cannot simply assume causation even if we see two events happening, seemingly together, before our eyes. One, our observations are purely anecdotal. Two, there are so many other possibilities for an association, including:

The opposite is true: B actually causes A.
The two are correlated, but there’s more to it: A and B are correlated, but they’re actually caused by C.
There’s another variable involved: A does cause B—as long as D happens.
There is a chain reaction: A causes E, which leads E to cause B (but you only saw that A causes B from your own eyes).
One of the most basic tenets of statistics is that correlation does not imply causation. In turn, a signal’s predictive power does not necessarily imply that the signal is actually related to, or explains, the phenomenon being predicted.
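The confounder case above (A and B both caused by C) can be made concrete with a tiny sketch. This toy example is mine, not from the post: two variables driven by the same hidden factor correlate almost perfectly, yet intervening on one would not move the other.

```python
# Toy illustration: A and B are both driven by a hidden confounder C.
# They correlate (near) perfectly, yet neither causes the other.

def pearson(x, y):
    """Plain-Python Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hidden confounder C (think: time of day) drives both A and B.
C = list(range(100))
A = [2.0 * c + 5.0 for c in C]   # A is caused by C
B = [3.0 * c - 1.0 for c in C]   # B is also caused by C

r = pearson(A, B)
print(round(r, 4))  # correlation is essentially 1.0 despite no A -> B link
```

A model trained only on (A, B) pairs would happily "predict" B from A; the correlation is real, but the causal story lives entirely in C.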

This distinction matters when it comes to machine learning because many of the strongest signals these algorithms pick up in their training data are not actually related to the thing being measured.

Deep learning is fundamentally blind to cause and effect. Unlike a real doctor, a deep learning algorithm cannot explain why a particular image may suggest disease. This means deep learning must be used cautiously in critical situations.

Deep learning uses artificial neural networks to mathematically approximate the way human neurons and synapses learn by forming and strengthening connections. Training data, such as images or audio, are fed to a neural network, which is gradually adjusted until it responds in the correct way. A deep learning program can be trained to recognize objects in photographs with high accuracy, providing it sees lots of training images and is given plenty of computing power.

But deep learning algorithms aren’t good at generalizing, or taking what they’ve learned from one context and applying it to another. They also capture phenomena that are correlated—like the rooster crowing and the sun coming up—without regard to which causes the other.

Too much of deep learning has focused on correlation without causation, and that often leaves deep learning systems at a loss when they are tested on conditions that aren't quite the same as the ones they were trained on.





A confusion matrix is also known as an error matrix.

A confusion matrix is a table that is often used to describe the performance of a classification model (or “classifier”) on a set of test data for which the true values are known. It allows the visualization of the performance of an algorithm.

It allows easy identification of confusion between classes e.g. one class is commonly mislabeled as the other. Most performance measures are computed from the confusion matrix.

Definition of the Terms:--
• Positive (P) : Observation is positive (for example: is an apple).
• Negative (N) : Observation is not positive (for example: is not an apple).
• True Positive (TP) : Observation is positive, and is predicted to be positive.
• False Negative (FN) : Observation is positive, but is predicted negative.
• True Negative (TN) : Observation is negative, and is predicted to be negative.
• False Positive (FP) : Observation is negative, but is predicted positive.

The confusion matrix gives researchers detailed information about how a machine learning classifier has performed with respect to the target classes in the dataset. A confusion matrix displays examples that have been properly classified alongside misclassified examples.

A confusion matrix is a predictive analytics tool. Specifically, it is a table that displays and compares actual values with the model’s predicted values. Within the context of machine learning, a confusion matrix is utilized as a metric to analyze how a machine learning classifier performed on a dataset. A confusion matrix generates a visualization of metrics like precision, accuracy, specificity, and recall.

The reason that the confusion matrix is particularly useful is that, unlike other types of classification metrics such as simple accuracy, the confusion matrix generates a more complete picture of how a model performed.
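The TP/FN/TN/FP definitions above translate directly into code. A minimal sketch, using the apple example and made-up labels, showing how the four counts and the derived metrics fall out of one pass over the data:

```python
# Build the four confusion-matrix counts from known true labels and a
# classifier's predictions, then derive the usual metrics from them.

def confusion_counts(y_true, y_pred, positive="apple"):
    tp = fn = tn = fp = 0
    for t, p in zip(y_true, y_pred):
        if t == positive and p == positive:
            tp += 1        # True Positive: is an apple, predicted apple
        elif t == positive:
            fn += 1        # False Negative: is an apple, predicted not
        elif p == positive:
            fp += 1        # False Positive: not an apple, predicted apple
        else:
            tn += 1        # True Negative: not an apple, predicted not

    return tp, fn, tn, fp

y_true = ["apple", "apple", "apple", "pear", "pear", "pear", "pear", "apple"]
y_pred = ["apple", "pear",  "apple", "pear", "apple", "pear", "pear", "apple"]

tp, fn, tn, fp = confusion_counts(y_true, y_pred)
accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)   # of predicted positives, how many were right
recall    = tp / (tp + fn)   # of actual positives, how many were found
print(tp, fn, tn, fp, accuracy, precision, recall)
```

Simple accuracy alone (here 0.75) hides the split between the two error types; the matrix keeps them separate, which is exactly the "more complete picture" referred to above.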




Machine learning is the ability of  computer systems to improve their performance through  exposure to data, without the  need to follow explicitly programmed instructions.
Algorithms are routine processes or sequences of instructions for analysing data, solving problems, and performing tasks.

“Self-learning” algorithms, however, are increasingly replacing programmed algorithms. Essentially, a self-learning algorithm is programmed to refine its own performance. In the context of machine learning, this requires a system powerful enough to process and analyze a ton of information.

Before even creating a model we should come up with a strategy of how to create a repeatable process so that future data can be used to update or retrain the current model. Several strategies are worth considering:

1. Create a new model on a regular basis incorporating the new data and switch the new model with the old one in production. The disadvantage of this is that retraining a model can take quite some time and resources and by the time a new model has been trained, it might no longer be up to date. Obviously, this depends on the size and complexity of the model and the time needed to actually train it.

2. Implement a self-learning algorithm that ingests batches of new data. New data can then be added to the existing model on a regular basis. The disadvantage of this is that there aren’t many out of the box algorithms that support this type of retraining.

3. Implement a self-learning algorithm that ingests new data as it becomes available. Ready-to-use options for this are also limited, but you could always develop your own custom solution.
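The shape of strategies 2 and 3 can be sketched with a deliberately tiny stand-in model. Everything here is illustrative: `OnlineMeanModel` just predicts the running mean of what it has seen, but the `partial_fit`-style interface is the point, since new batches fold into the existing model instead of triggering a full retrain.

```python
# Hypothetical sketch of incremental retraining: a model exposing a
# partial_fit-style method so new records can be folded in as they arrive.

class OnlineMeanModel:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def partial_fit(self, batch):
        """Fold a batch of new observations into the existing model."""
        for x in batch:
            self.n += 1
            self.mean += (x - self.mean) / self.n   # incremental mean update

    def predict(self):
        return self.mean

model = OnlineMeanModel()
model.partial_fit([10, 20, 30])   # initial training data
model.partial_fit([40])           # new data arriving later, no full retrain
print(model.predict())            # identical to retraining on all four points
```

A real implementation would wrap an actual learner (some libraries expose exactly this kind of `partial_fit` method for incremental training), but the bookkeeping, a persistent model updated batch by batch, looks the same.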

Automatically trained algorithms are more difficult to fine-tune, over-fitting can be a great concern, and model stability is a major issue. Your model shouldn’t be giving you drastically different results every time it is re-trained. If this is happening, then your algorithm is not stable enough and, as a result, is not learning the larger trends in your underlying data.

These problems can be harder to debug and fix with automatically re-trained models. Revising a system is time-consuming. Having a system in place that updates machine learning models automatically gives you peace of mind and allows systems to remain accurate and reliable in production for much longer periods of time.

Analytics is the use of data, statistical modelling, and algorithms to reliably generate insights, predict outcomes, simulate scenarios, and optimise decisions.

Cognitive technologies refer to  the underlying technologies that  enable artificial intelligence (AI).  AI is the theory and development  of computer systems that are able  to perform tasks that normally require human intelligence. 

The ability of AI applications to work with datasets too large for manual handling makes it possible to reveal or even predict corruption or fraud that previously was nearly or completely impossible to detect.

AI-assisted procedures can replace previously corruption-prone processes.

Digitisation is a prerequisite for AI to be deployed in anti-corruption efforts.

Algorithmic bias is often inherited from the datasets used to train the algorithm. Some systems ‘learn’ how to achieve the optimal result with no supervision. Artificial neural networks mimic the way our brain is constructed. 

Millions of calculations are performed and sent between the nodes of the network, generating complexity that can become impossible to explain. The ‘black box problem’ refers to opaque calculations in complex algorithms.

Algorithm driven chatbots reply to our questions in text or spoken language.  These are deep state brainwash tools..

Machine learning models for fraud detection can also be used to develop predictive and prescriptive analytics software. Predictive analytics offers a distinct method of fraud detection by analyzing data with a pre-trained algorithm to score a transaction on its fraud riskiness.

Questions persist on how to handle biased algorithms, our ability to contest automated decisions, and accountability when machines make the decisions. How such systems relate to the right to privacy, the right to explanation, and the ‘right to be forgotten‘ also remain topics of debate.


THE ILLEGAL COLLEGIUM JUDICIARY CONTROLLED BY THE JEWISH DEEP STATE HAS BLED BHARATMATA FOR TOO LONG..

OUR KAYASTHA LAW MINISTER PRASAD IS A MOST USELESS FELLOW..   IN 1976 PRASAD WAS THE LACKEY OF CIA SPOOK KAYASTHA AND FELLOW BIHARI JP..


IF AI IS APPLIED WE CAN FIND OUT THE JEWISH AMERICAN BASTARDS WHO MILKED IRAQ AFTER THE WAR..  THESE ARE THE SAME BASTARDS WHO CAUSED THE WAR




Algorithms applied to track transactions and the location of recipients will flag unexpected behaviours, transactions or movements. By this the WFP can uncover attempts of fraud or misuse. Severe criticism has risen on how a company closely related US state security agencies controlled by Jews are to develop data systems for an UN agency.

Black box AI systems for automated decision making, often based on machine learning over big data, map a user’s features into a class predicting the behavioural traits of individuals, such as credit risk, health status, etc., without exposing the reasons why. 

This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions

Machine learning constructs decision-making systems based on data describing the digital traces of human activities. Consequently, black box models may reflect human biases and prejudices. Many controversial cases have already highlighted the problems with delegating decision making to black box algorithms in many sensitive domains, including crime prediction, personality scoring, image classification, etc.

Predictive algorithms tell you about the likelihood of a future outcome with a quantifiable degree of accuracy. Big data is a collection of moving parts that can be smartly mixed and matched to model hundreds of different outcomes (negative and positive) that will guide your decision making.

Predictive analytics identifies patterns in previous data to answer the question, “What might happen next?”

Predictive analytics is the practice of applying mathematical models to large amounts of data to identify patterns of previous behavior and to predict future outcomes. The combination of data mining, machine learning and statistical algorithms provides the “predictive” element, allowing predictive analytics tools to go beyond simple correlation. In business, predictive analytics has a wide variety of uses..

Predictive analytics is not the same as predictive modeling. Predictive modeling is a technique used in predictive analytics in which data is applied to a particular algorithmic mathematical process (the model) to determine an outcome.

Predictive analytics is not the same as data mining. Data mining is the process of examining and analyzing large amounts of data to identify patterns and relationships. Making predictions or forecasts based on those data patterns is the job of predictive analytics.

What’s the difference between an algorithm and a predictive model?

Algorithms are the mathematical basis of predictive analytics. They are the series of steps, like a recipe, executed to achieve a result or solution. Models define the way the algorithms are applied to solve a particular problem. The model is the framework that defines the questions, and the variables considered in answering them. The algorithms are the steps used to weigh variables and arrive at answers.

A quick web search will reveal that many people use the terms “algorithm” and “predictive model” interchangeably. The word “classifier” is also used in the same context. Again, while the terminology is fluid, “classifier” is generally used to indicate an algorithm specifically designed for classification.

The most common models used in predictive analytics are classification algorithms and regression algorithms.

Classification algorithms sort (or classify) data by category. Is this person female or male? Is this email spam or not spam?

Regression algorithms are used to predict a numerical outcome. Will the price go up or down?
Regression models a target prediction value based on independent variables. It is mostly used for finding out the relationship between variables and forecasting.. 

A logistic regression model takes a linear equation as input and uses the logistic function and log odds to perform a binary classification task. Regression is based on a hypothesis that can be linear, quadratic, polynomial, non-linear, etc. The hypothesis is a function based on some hidden parameters and the input values.
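The "linear equation fed through the logistic function" idea can be shown end to end on a toy 1-D dataset. This is a minimal sketch with made-up numbers: plain gradient descent on the log-loss, no library.

```python
# Logistic regression from scratch: z = w*x + b passed through the sigmoid,
# trained by stochastic gradient descent on a tiny separable dataset.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy data: x < 3 -> class 0, x > 3 -> class 1
xs = [0.0, 1.0, 2.0, 4.0, 5.0, 6.0]
ys = [0,   0,   0,   1,   1,   1]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):                 # epochs
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)        # predicted probability of class 1
        # gradient of the log-loss with respect to w and b
        w -= lr * (p - y) * x
        b -= lr * (p - y)

def predict(x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

print([predict(x) for x in xs])       # recovers the training labels
```

The decision boundary is where w*x + b = 0, i.e. where the predicted probability crosses 0.5; on this data it settles between 2 and 4.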




Data scientists use a variety of predictive models based on the type of outcome they are hoping to achieve. The math behind each algorithm is complex and beyond the scope of this article, but here are a few of the most popular predictive analytics algorithms and a brief description of how they can be used.

Predictive analytics in banking and financial services: Predictive analytics is valuable across the spectrum of banking and financial service activities, from assessing risk to maximizing customer relationships. Predictive analytics is used to assess the following:

Linear regression. This compares a dependent variable with one or more independent variables. It is one of the most common algorithms, often used for predicting an outcome or forecasting an effect, and determining which variables have the most impact.
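A minimal sketch of the simplest case, one independent variable: fit y = a*x + b with the closed-form least-squares solution and use it to forecast. The advertising-versus-sales figures are made up for illustration.

```python
# Simple linear regression via the closed-form least-squares solution.

def fit_line(xs, ys):
    """Return slope a and intercept b minimising squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# advertising spend vs sales (made-up figures, exactly y = 2x + 1)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

a, b = fit_line(xs, ys)
print(a, b)            # recovers slope 2.0 and intercept 1.0
print(a * 5.0 + b)     # forecast for an unseen x = 5
```

With several independent variables the same idea generalizes to multiple regression, which is what "determining which variables have the most impact" refers to above.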

Random forest is a widely-used algorithm for both classification and regression. It is an ensemble technique (a combination of multiple algorithms) that combines multiple decision trees to get more accurate results than a single decision tree.

Naive Bayes is a simple but powerful algorithm often used for text categorization, including spam filters. A Naive Bayes spam filter correlates the words in an email with spam and non-spam emails to determine the probability of the email in question being spam.
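The spam-filter idea sketched above can be shown in miniature. This is a hedged toy, with a six-email "corpus" of my own invention: per-word probabilities are estimated from counts, Laplace (add-one) smoothing avoids zeros for unseen words, and log-probabilities are summed under the naive independence assumption.

```python
# Toy Naive Bayes spam filter: word counts with Laplace smoothing,
# log-probabilities summed under the naive independence assumption.
import math
from collections import Counter

spam = ["win money now", "free money offer", "win free prize"]
ham  = ["meeting at noon", "project status report", "lunch at noon"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_score(text, counts, total, prior):
    score = math.log(prior)
    for w in text.split():
        # add-one smoothing: unseen words get a small non-zero probability
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(text):
    s = log_score(text, spam_counts, spam_total, 0.5)
    h = log_score(text, ham_counts,  ham_total,  0.5)
    return "spam" if s > h else "ham"

print(classify("free money"))       # words frequent in the spam corpus
print(classify("status meeting"))   # words frequent in the ham corpus
```

"Naive" refers to the independence assumption, word probabilities are multiplied as if each word's appearance were independent of the others, which is false for real language but works surprisingly well in practice.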

K-nearest neighbors (KNN) is used to predict the characteristics of a given data point based on its proximity to other data points. KNN could be used in credit scoring, for example. A loan or credit card applicant with a particular set of financial details would likely have a similar credit rating to other people with the same financial details.
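The credit-scoring intuition above, similar financial details imply a similar rating, is exactly a nearest-neighbour vote. A toy sketch with invented (income, debt) figures:

```python
# Toy k-nearest-neighbours classifier: a new applicant gets the rating most
# common among the k most similar known applicants.
from collections import Counter

# (income in $1000s, debt in $1000s) -> known credit rating (made-up data)
known = [
    ((30, 40), "poor"), ((35, 35), "poor"), ((40, 45), "poor"),
    ((90,  5), "good"), ((85, 10), "good"), ((95,  8), "good"),
]

def knn_rating(applicant, k=3):
    def sq_dist(p):
        return (p[0] - applicant[0]) ** 2 + (p[1] - applicant[1]) ** 2
    # sort known applicants by similarity and let the k nearest vote
    nearest = sorted(known, key=lambda kv: sq_dist(kv[0]))[:k]
    votes = Counter(rating for _, rating in nearest)
    return votes.most_common(1)[0][0]

print(knn_rating((88, 7)))   # resembles the high-income, low-debt group
print(knn_rating((33, 42)))  # resembles the high-debt group
```

In practice the features would be scaled first (incomes and debts live on different ranges), and k is tuned; the voting mechanism stays the same.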

Support vector machines (SVM) can be used for classification or regression problems. An SVM algorithm uses training examples (known data grouped into categories by similarity) to assign new examples to the appropriate category. SVMs have proven effective for image classification (“Is this a tree or a person?”), providing more accurate results than previous methods.

Boosting is an ensemble technique designed to increase accuracy. A model is created using training data, then a second model is created to correct the errors of the first model, then a third to correct the errors of the second, and so on until the desired outcome is achieved.

AdaBoost is considered the first successful boosting algorithm, and the basis on which subsequent models have been built.

Narrow, or weak, AI is designed to perform a specific task, such as facial recognition or product recommendation. General, or strong, AI aims at outperforming humans across multiple domains.

Machine learning (ML): often described as “the science of getting computers to act without being explicitly programmed.” Machine learning is an AI component that provides systems with the ability to automatically learn over time, generally from large quantities of data.

The learning process is based on observations or data, such as  examples, in order to identify patterns in data and make better predictions. An ML algorithm can be seen as an algorithm that, from data, generates another algorithm, usually referred to as a model.

An algorithm must be transparent if outsiders are to understand how it has been optimized. And when it comes to systems that predict the probability of death, optimization parameters should not be the purview of commercial businesses alone. 

The system’s developers must instead publicly disclose which goals are being pursued with the algorithm and under what conditions it is being used. Both of these aspects must be subject to a public social, political and ethical debate. Moreover, it must be possible to verify the algorithm’s performance.

A number of questions must be asked when it comes to algorithms of this type, such as:--

How reliable are their predictions?

How often do the results include false positives or false negatives?

Are the algorithms truly helpful in achieving the desired goals? What are those goals (e.g. improving access to at-home palliative care or reducing costs resulting from unnecessary treatments and interventions)?

Which framework are they embedded in, i.e. which patient groups were they developed for?

Algorithms lack empathy and SUBJECTIVE morality.

Recommendations about a situation so personal and emotional as an impending death should never be made by a computer program. Doctors can use artificial intelligence as an aid, but they will always have to consider the entire individual as they reach their decision on what the best way forward is.

Regardless of how algorithms for predicting death develop in the future, guidelines must be put in place ensuring that it is ultimately a human being – a doctor – who makes the recommendation or decides together with the patient or family what the best way forward is.

Chemotherapy and surgery can be lucrative for hospitals  but not for kosher  INSURANCE COMPANIES greedy for profit.

Most of these automated decision systems rely on traditional statistical techniques like regression analysis.





Without accountability and responsibility, the  use of algorithms and artificial intelligence leads to  discrimination and unequal access to employment opportunities.


ALGORITHMS GIVE COMPUTERS GUIDANCE ON HOW TO SOLVE  PROBLEMS.  

THERE IS NO ARTIFICIAL INTELLIGENCE WITHOUT ALGORITHMS.  

ALGORITHMS ARE,  IN PART, OUR OPINIONS EMBEDDED IN CODE.


Neural networks use “big data,” immensely large collected data sets, to analyze and reveal patterns and trends.

The development of the internet and advances in computer hardware have allowed programmers to take advantage of the vast computational power and the enormous storehouses of data—images, video, audio and text  files strewn across the internet—that, it turns out, are essential to making neural nets work well.

For deep learning to function, algorithms need to be fed data.  Data mining uses algorithms to collect and analyze  data.   Data mining consolidates massive quantities of data generated on the internet and identifies “interpretable patterns”  otherwise too subtle or complex for unaided human  discernment.

When the data is collected and relationships are identified, it is called a model. For data mining and deep learning to work, programmers have to translate the problem or desired outcome into a question about the value of some target variable.

Programmers and data miners frequently translate ambiguous  problems into questions computers can solve by focusing on the  value of a target variable. To create the model, the algorithm  is trained to behave in a specific way by the data it is fed.

The definition of a desirable employee is challenging  because it requires prioritization of numerous observable  characteristics that make an employee “good.”

Employers  tend to value action-oriented, intelligent, productive, detail oriented employees.

This subjective decision opens the door  to potential problems.

Essentially, what makes a “good”  employee must be defined in ways that correspond to measurable outcomes: relatively higher sales, shorter production time, or longer tenure

The subjective  choices made both by the programmers and by the employer in previous hiring decisions are absorbed into the algorithm by way  of the data that is used and the subjective labels placed on  specific characteristics.

Thus, when subjective labels are  applied, the results are skewed along the lines of those labels and the data that is utilized.

 Therefore, it is possible for  algorithms and artificial intelligence to inherit prior prejudice  and reflect current prejudices.

Artificial intelligence and algorithms rely on training  data. When these data sets are skewed as a result of bias or  carelessness, the results can be discriminatory

While datasets may be extremely large yet still possible to comprehend, and code may be written with clarity, the interplay between the two in the mechanisms of the algorithm is what yields the complexity and thus the opacity.

AI is a term that covers not only algorithms but also expert systems and formal logic: the branch of computer science that deals primarily with symbolic, non-algorithmic methods of problem solving.

Artificial Intelligence (AI) refers to the creation of computer programs and devices for simulating brain functions and activity. It also refers to the research program aimed at designing and building intelligent artifacts. It covers the theory and techniques for the development of algorithms that allow computers to show ability and/or intelligent activity, at least in specific domains.

AI has also been described as: systems able to independently react to signals from the outside world (i.e., signals not directly controlled by programming specialists or anyone else), which therefore cannot be foreseen, in comparison with systems based on algorithms; or as the application of computer science such that a system can learn, reason and store information.

Reinforcement learning, in the context of artificial intelligence, is a type of dynamic programming that trains algorithms using a system of reward and punishment. A reinforcement learning algorithm, or agent, learns by interacting with its environment.

Reinforcement learning is often used for robotics, gaming and navigation. With reinforcement learning, the algorithm discovers through trial and error which actions yield the greatest rewards. 

This type of learning has three primary components: the agent (the learner or decision maker), the environment (everything the agent interacts with) and actions (what the agent can do). The objective is for the agent to choose actions that maximize the expected reward over a given amount of time. 

The agent will reach the goal much faster by following a good policy. So the goal in reinforcement learning is to learn the best policy.
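The agent/environment/action loop above can be sketched with a tabular Q-learning toy of my own devising: a corridor of five states, a reward only at the far end, and an epsilon-greedy agent that discovers by trial and error that moving right is the best policy.

```python
# Toy Q-learning: a 1-D corridor of 5 states, reward 1 for reaching state 4.
# The agent learns a Q-value table Q[state][action] by trial and error.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

def greedy(s):
    return max((0, 1), key=lambda i: Q[s][i])

for _ in range(500):                     # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        a = random.randrange(2) if random.random() < eps else greedy(s)
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N_STATES)]
print(policy)   # "right" (index 1) is preferred in every non-goal state
```

The three components named above map directly onto the code: the agent is the Q-table plus the epsilon-greedy rule, the environment is the corridor transition, and the actions are left/right. The learned policy is exactly the "good policy" that gets the agent to the goal fastest.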

In general, there are two types of machine learning algorithms used in fraud detection: supervised and unsupervised learning. The former uses already annotated data – reviewed and labeled as fraud activity by a human – to learn complex patterns in datasets provided by a business. The latter approach deals with datasets that have not been labeled and infers inner data structure by itself.

Data scientists have access to a range of techniques, which can be broken down in terms of problems they solve: classification and regression. Both can be used to analyse data and provide the answer to whether a transaction was genuine or fraudulent. The typical supervised machine learning algorithms used to solve these problems are logistic regression, decision trees, random forests, and neural networks.

Data Science – How is all the big data analyzed?  Fine, the machine learns on its own through machine learning algorithms – but how?  Who gives the necessary inputs to a machine for creating algorithms and models? No prizes for guessing that it is data science. 

Data science uses different methods, algorithms, processes, and systems to extract, analyze, and get insights from data.

Data science focuses on data visualization and a better presentation, whereas machine learning focuses more on the learning algorithms and learning from real-time data and experience.


An algorithm is “a set of guidelines that describe how to perform a task.” Within computer science, an algorithm is a sequence of instructions that tell a computer what to do. AI works through algorithms (neural networks are a type of algorithm), but not all algorithms involve artificial intelligence.

The CDC and other health focused institutions also use machine learning to help predict and understand the way that diseases work, and to find ways to prevent the progression of diseases when they’re able.

The first stage of this work is usually done through statistical analysis, which is then built upon by implementing machine learning algorithms based on confirmed statistics. 

In machine learning, algorithms rely on multiple data sets, or training data, that specifies what the correct outputs are for some people or objects. From that training data, it then learns a model which can be applied to other people or objects and make predictions about what the correct outputs should be for them.

Algorithms are incentivized to predict the majority group.

In order to maximize predictive accuracy when faced with an imbalanced dataset, machine learning algorithms are incentivized to put more learning weight on the majority group, thus disproportionately predicting observations to belong to that majority group.
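The tendency is easy to demonstrate with a degenerate baseline. In this made-up fraud example, a "model" that always predicts the majority class looks excellent on accuracy while being useless on the minority class that actually matters:

```python
# Majority-class pull on an imbalanced dataset: 95 legit records, 5 fraud.
# A model that always predicts "legit" maximizes accuracy yet catches nothing.

y_true = ["legit"] * 95 + ["fraud"] * 5
y_pred = ["legit"] * 100             # always predict the majority class

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
fraud_recall = (
    sum(t == p == "fraud" for t, p in zip(y_true, y_pred))
    / sum(t == "fraud" for t in y_true)
)
print(accuracy)       # 0.95 -- looks impressive
print(fraud_recall)   # 0.0  -- zero fraud detected
```

This is why accuracy alone is a misleading objective on imbalanced data, and why metrics like recall on the minority class (read straight off the confusion matrix) are used instead.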

Decision trees are a type of supervised machine learning (that is, you explain what the input is and what the corresponding output is in the training data) where the data is continuously split according to a certain parameter. The tree can be explained by two entities, namely decision nodes and leaves. The decision tree algorithm falls under the category of supervised learning and can be used to solve both regression and classification problems.
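A single decision node can be sketched directly: scan candidate thresholds on one numeric feature and keep the split that misclassifies the fewest training points. A full tree repeats this recursively on each side of the split. The tumour-size data here is invented for illustration.

```python
# One decision node (a "stump"): choose the threshold and leaf labels that
# misclassify the fewest training points. A tree recurses on each side.

def best_stump(xs, ys):
    """Return (threshold, left_label, right_label) minimising errors."""
    best = None
    labels = set(ys)
    for t in sorted(set(xs)):                 # candidate split points
        for left in labels:
            for right in labels:
                errs = sum(
                    (left if x <= t else right) != y
                    for x, y in zip(xs, ys)
                )
                if best is None or errs < best[0]:
                    best = (errs, t, left, right)
    return best[1:]

# tumour size (cm) vs made-up diagnosis
xs = [1.0, 1.5, 2.0, 6.0, 7.0, 8.0]
ys = ["benign"] * 3 + ["malignant"] * 3

t, left, right = best_stump(xs, ys)

def predict(x):
    return left if x <= t else right

print(t, left, right)   # a clean split between the two groups
```

Real implementations pick splits by an impurity measure such as Gini or entropy rather than raw error count, but the structure, a threshold test routing each example left or right, is the same.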
Machine learning algorithms are computer programs that can learn from data. They gather information from the data presented to them and use it to make themselves better at a given task. For example, a machine learning algorithm created to find cats in a given picture is first trained with the pictures of a cat. By showing the algorithm what a cat looks like and rewarding it whenever it guesses right, it can slowly process the features of a cat on its own.

The algorithm is trained enough to ensure a high degree of accuracy and then deployed as a solution to find cats in images. However, it does not stop learning at this point. Any new input that is processed also contributes towards enhancing the accuracy of the algorithm to detect cats in images. ML algorithms use various cognitive methods and shortcuts to figure out the picture of a cat. 

Today, AI has taken the form of computer programs. Using languages, such as Python and Java, complex programs that attempt to reproduce human cognitive processes are written. Some of these programs that are termed as machine learning algorithms can accurately recreate the cognitive process of learning.

These ML algorithms are not really explainable as only the program knows the specific cognitive shortcuts towards finding the best solution. The algorithm takes into consideration all the variables it has been exposed to during its training and finds the best combination of these variables to solve a problem. 

This unique combination of variables is ‘learned’ by the machine through trial and error. There are many types of machine learning, based on the kind of training it undergoes.

Thus, it is easy to see how machine learning algorithms can be helpful in situations where a lot of data is present. The more data that an ML algorithm ingests, the more effective it can be at solving the problem at hand. The program continues to improve and iterate upon itself every time it solves the problem.


Creating a Machine Learning Algorithm

In order to let programs learn from themselves, a multitude of approaches can be taken. Generally, creating a machine learning algorithm begins with defining the problem. This includes trying to find ways to solve it, describing its bounds, and focusing on the most basic problem statement.

Once the problem has been defined, the data is cleaned. Every machine learning problem comes with a dataset which must be analyzed in order to find the solution. Deep within this data, the solution, or the path to a solution can be found through ML analysis.

After cleaning the data and making it readable for the machine learning algorithm, the data must be pre-processed. This increases the accuracy and focus of the final solution, after which the algorithm can be created. The program must be structured in a way that it solves the problem, usually imitating human cognitive methods.

Types of Machine Learning Algorithms

There are many ways to train an algorithm, each with varying degrees of success and effectiveness for specific problem statements.

Reinforcement Learning Algorithms

RL algorithms are a new breed of machine learning algorithms, as the method used to train them was recently fine-tuned. Reinforcement learning offers rewards to algorithms when they provide the correct solution and removes rewards when the solution is incorrect. 

More effective and efficient solutions also provide higher rewards to the reinforcement learning algorithm, which then optimizes its learning process to receive the maximum reward through trial and error. This results in a more general understanding of the problem statement for the machine learning algorithm. 
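The reward-driven trial and error described above can be sketched with a one-state bandit, a toy stand-in for reinforcement learning (the action names and payoffs are invented): the agent tries actions at random and shifts its value estimate for each action toward the reward it receives, so the higher-reward action ends up preferred.

```python
import random

random.seed(0)
rewards = {"left": 0.2, "right": 1.0}   # hidden payoffs (assumed for this sketch)
q = {"left": 0.0, "right": 0.0}         # the agent's learned value estimates
alpha = 0.1                             # learning rate

for _ in range(500):
    action = random.choice(list(q))     # trial: try an action at random
    # error: move the estimate toward the observed reward
    q[action] += alpha * (rewards[action] - q[action])

best = max(q, key=q.get)
print(best)   # "right": the action with the higher reward wins out
```

A real reinforcement learner would also use its estimates to choose actions (balancing exploration and exploitation) rather than picking uniformly at random.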

The Difference Between Artificial Intelligence and Machine Learning Algorithms

Even if a program cannot learn from any new information but still functions like a human brain, it falls under the category of AI.

For example, a program created to play chess at a high level can be classified as AI. It thinks about the next possible move after each move is made, as humans do. The difference is that it can compute every possibility, whereas even the most skilled humans can only calculate a set number of moves ahead.

This makes the program highly efficient at playing chess, as it will automatically know the best possible combination of moves to beat the enemy player. This is an artificial intelligence that cannot change when new information is added, as in the case of a machine learning algorithm.

Machine learning algorithms, on the other hand, automatically adapt to any changes in the problem statement. An ML algorithm trained to play chess first starts by knowing nothing about the game. Then, as it plays more and more games, it learns to solve the problem through new data in the form of moves. The objective function is also clearly defined, allowing the algorithm to iterate slowly and become better than humans after training.

While the umbrella term of AI does include machine learning algorithms, it is important to note that not all AI exhibits machine learning. Programs that are built with the capability of improving and iterating by ingesting data are machine learning algorithms, whereas programs that emulate or mimic certain parts of human intelligence fall under the category of AI.
Mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.

Machine learning algorithms process vast quantities of data and spot correlations, trends and anomalies, at levels far beyond even the brightest human mind. But as human intelligence relies on accurate information, so too do machines. Algorithms need training data to learn from. This training data is created, selected, collated and annotated by humans. And therein lies the problem.

Machine and deep learning algorithms built into automation and artificial intelligence systems lack transparency. Many of these systems contain an imprint of the biases of the engineers who helped to develop them. In the context of machine learning and artificial intelligence, explainability and interpretability are often used interchangeably.

Interpretability is about the extent to which a cause and effect can be observed within a system. Or, to put it another way, it is the extent to which you are able to predict what is going to happen, given a change in input or algorithmic parameters.

If you feed a machine a good amount of data, it will learn how to interpret, process and analyze this data by using Machine Learning Algorithms. 

A Machine Learning process begins by feeding the machine lots of data. The machine is then trained on this data, to detect hidden insights and patterns.  These insights are used to build a Machine Learning Model by using an algorithm in order to solve a problem.

The training data will be used to build and analyze the model. The logic of the model is based on the Machine Learning Algorithm that is being implemented.

In the case of predicting rainfall, since the output will be in the form of True (if it will rain tomorrow) or False (no rain tomorrow), we can use a Classification Algorithm such as Logistic Regression or Decision Tree.
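The rainfall case can be sketched with a tiny logistic regression trained by gradient descent. The single feature (humidity, as a fraction) and the labels below are invented for illustration; a real model would use many features and far more data.

```python
import math

# Toy "will it rain tomorrow?" classifier: logistic regression with one
# feature, trained by stochastic gradient descent.

X = [0.20, 0.35, 0.50, 0.65, 0.80, 0.95]   # humidity (fraction), invented
Y = [0, 0, 0, 1, 1, 1]                     # 1 = it rained the next day

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in zip(X, Y):
        p = 1 / (1 + math.exp(-(w * x + b)))   # predicted probability of rain
        w -= lr * (p - y) * x                  # gradient step on the weight
        b -= lr * (p - y)                      # gradient step on the bias

will_rain = lambda humidity: 1 / (1 + math.exp(-(w * humidity + b))) > 0.5
print(will_rain(0.90), will_rain(0.30))        # True False
```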

Choosing the right algorithm depends on the type of problem you’re trying to solve, the data set and the level of complexity of the problem.


Machine Learning Algorithms are the basic logic behind each Machine Learning model. These algorithms are based on simple concepts such as Statistics and Probability.


TAKING ALGORITHMS TO COURT

Citizens have the right to know about the tools, costs, and standard practices of law enforcement agencies that police their communities,

Humans typically select the data used to train machine learning algorithms and create parameters for the machines to "learn" from new data over time. Even without discriminatory intent, the training data may reflect unconscious or historic bias. 

For example, if the training data shows that people of a certain gender or race have fulfilled certain criteria in the past, the algorithm may "learn" to select those individuals at the exclusion of others.

Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world

Machine learning is the science of getting computers to act without being explicitly programmed. It is based on algorithms that can learn from data without relying on rules-based programming. It can figure out how to perform important tasks by generalizing from examples.

Machine learning research is part of research on artificial intelligence, seeking to provide knowledge to computers through data, observations and interacting with the world. That acquired knowledge allows computers to correctly generalize to new settings.

Regardless of learning style or function, all combinations of machine learning algorithms consist of the following:--

Representation (a set of classifiers or the language that a computer understands)
Evaluation (aka objective/scoring function)
Optimization (search method; often the highest-scoring classifier, for example; there are both off-the-shelf and custom optimization methods used)
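The three components above can be shown in miniature: a representation (a family of threshold classifiers), an evaluation function (accuracy), and an optimization step (an exhaustive search for the highest-scoring classifier). The data and threshold range are invented for the sketch.

```python
# Representation + evaluation + optimization, in miniature.

X = [1, 2, 3, 6, 7, 8]
Y = [0, 0, 0, 1, 1, 1]

def classifier(threshold):                  # representation: threshold rules
    return lambda x: int(x > threshold)

def accuracy(clf):                          # evaluation: the scoring function
    return sum(clf(x) == y for x, y in zip(X, Y)) / len(X)

# optimization: search for the highest-scoring classifier
best = max(range(0, 9), key=lambda t: accuracy(classifier(t)))
print(best, accuracy(classifier(best)))     # threshold 3 scores 1.0
```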

Machine learning uses data to feed an algorithm that can understand the relationship between the input and the output. When the machine has finished learning, it can predict the value or the class of a new data point.

Algorithms are  sets of rules, initially set by humans, for computer programs to follow. Artificial intelligence can tweak these algorithms using machine learning, so programs begin to adapt rules for themselves and continuously self-optimize based on what they learn. 

For example, predictive analytics algorithms become smarter and faster the more they are used and the more data they analyze.

Algorithmic Learning Theory – is a branch of computational learning theory which, unlike statistical learning theory, distinguishes itself by giving a non-probabilistic approach to learning limits. This framework is highly suitable in scenarios where data is not considered a random sample, for example learning a language.

Backpropagation Algorithm – is used in the training of neural network models. It works by transmitting the error gradient in a backwards direction, from the output layer to the input layer.

The backpropagation algorithm works with optimization algorithms, like Stochastic Gradient Descent, to solve the 'credit assignment problem,' adjusting the weights of each neuron according to the impact they have on the error.
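A minimal sketch of this, assuming a two-layer linear network (one invented weight per layer) learning the toy target y = 2x: the error gradient is sent backwards from the output layer to the input layer, and SGD adjusts each weight according to its share of the error.

```python
# Backpropagation through a two-layer linear network learning y = 2x.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]
w1, w2, lr = 0.5, 0.5, 0.01

for _ in range(2000):
    for x, y in data:
        h = w1 * x              # forward pass: hidden layer
        o = w2 * h              # forward pass: output layer
        grad_o = o - y          # error gradient at the output
        grad_w2 = grad_o * h    # gradient backpropagated to the output weight
        grad_h = grad_o * w2    # gradient backpropagated to the hidden activation
        grad_w1 = grad_h * x    # gradient backpropagated to the input weight
        w2 -= lr * grad_w2      # stochastic gradient descent update
        w1 -= lr * grad_w1      # each weight adjusted by its impact on the error

print(round(w1 * w2, 2))        # ~2.0: the network has learned y = 2x
```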

Behavioral Analytics – uses data about people’s behavior to understand their intent and predict future actions. The upsurge in consumer data from e-commerce platforms, gaming, web and mobile applications, and the Internet of Things feeds predictive behavioral analytics algorithms that can enable marketing teams to target the right offerings to the right micro-segment at the right time.

Machine Learning refers to the processes by which machines and AI algorithmic software “learn” by example and/or teach themselves to recognise patterns or reach set goals without being explicitly programmed to do so.

Reinforcement Learning – uses a kind of algorithm that works by trial and error, where the learning is enabled using a feedback loop of "rewards" and "punishments". When the algorithm is fed a dataset, it treats the environment like a game, and is told whether it has won or lost each time it performs an action. 

In this way, reinforcement learning algorithms build up a picture of the "moves" that result in success, and those that don't. DeepMind's AlphaGo and AlphaZero are good examples of the power of reinforcement learning in action.

A good start at a Machine Learning definition is that it is a core sub-area of Artificial Intelligence (AI). ML applications learn from experience (well, data) like humans, without direct programming.

When exposed to new data, these applications learn, grow, change, and develop by themselves. In other words, with Machine Learning, computers find insightful information without being told where to look.  Instead, they do this by leveraging algorithms that learn from data in an iterative process.

The rapid evolution in Machine Learning has caused a subsequent rise in its use cases and demands – and in the sheer importance of ML in modern life. Big Data has also become a well-used buzzword in the last few years.

This is, in part, due to the increased sophistication of Machine Learning, which enables the analysis of large chunks of Big Data. Machine Learning has also changed the way data extraction and interpretation are done by automating generic methods/algorithms, thereby replacing traditional statistical techniques.

Traditionally, data analysis was trial and error-based, an approach that becomes impossible when data sets are large and heterogeneous. Machine Learning provides smart alternatives to analyzing vast volumes of data. By developing fast and efficient algorithms and data-driven models for real-time processing of data, Machine Learning can produce accurate results and analysis.

Generally, an algorithm takes some input and uses mathematics and logic to produce the output. In stark contrast, an Artificial Intelligence Algorithm takes a combination of both – inputs and outputs simultaneously in order to “learn” the data and produce outputs when given new inputs. 
Unlike traditional coding models, the outcome of an AI algorithm is very dependent on the data used to train it as it infers results based on what it has been trained on.

Machine learning uses an algorithm to process data, discover rules that are hidden in the data, and that are then encoded in a "model" that can be used to make predictions on new data.

Machine learning is a study of computer algorithms that automatically become better through experience. ML is one of the ways to achieve AI. Machine learning requires large data sets to work with in order to examine and compare the information to find common patterns.

 With machine learning technologies, computers can be taught to analyze data, identify hidden patterns, make classifications, and predict future outcomes. The “learning” comes from these systems’ ability to improve their accuracy over time without explicitly programmed instructions. 

Machine learning typically requires technical experts who can prepare data sets, select the right algorithms, and interpret the output. Most AI technologies, including advanced and specialized applications such as natural language processing and computer vision, are based on machine learning and its more complex progeny, deep learning.

Machine learning trains the algorithms to learn and predict answers to problems by analysing data to make predictions on their own.

Machine learning – in which computer algorithms improve over time through their experience of using data – plays an increasingly prominent role in enterprise risk management. AI can be used to create sophisticated tools to monitor and analyze behavior and activities in real time.

Since these systems can adapt to changing risk environments, they continually enhance the organization’s monitoring capabilities in areas such as regulatory compliance and corporate governance. They can also evolve from early warning systems into early learning systems that prevent threats materializing for real.

Machine learning can support more informed predictions about the likelihood of an individual or organization defaulting on a loan or a payment, and it can be used to build variable revenue forecasting models. It is a technique which develops complex algorithms for processing large data and delivers results to its users. It uses complex programs which can learn through experience and make predictions.

The algorithms improve themselves through the regular input of training data. The goal of machine learning is to understand data and build models from data that can be understood and used by humans. It “gives computers the ability to learn without being explicitly programmed”.


ALGORITHMS HAVE NEVER NEEDED TO EXPLAIN THEMSELVES TO US BEFORE BECAUSE WE WROTE THEM. 

In supervised machine learning an algorithm learns a model from training data.

The goal of any supervised machine learning algorithm is to best estimate the mapping function (f) for the output variable (Y) given the input data (X). The mapping function is often called the target function because it is the function that a given supervised machine learning algorithm aims to approximate.
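Estimating the mapping function can be sketched with least squares, assuming for illustration that f is linear, Y = aX + b (the toy data below is generated from f(x) = 2x + 1):

```python
# Estimating the mapping f in Y = f(X) by closed-form least squares.

X = [1, 2, 3, 4, 5]
Y = [3, 5, 7, 9, 11]            # generated by the target function f(x) = 2x + 1

n = len(X)
mx, my = sum(X) / n, sum(Y) / n
# slope: covariance of X and Y divided by variance of X
a = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
b = my - a * mx                 # intercept passes through the means

print(a, b)                     # 2.0 1.0: the target function is recovered
```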

In machine learning, an algorithm is simply a repeatable process used to train a model from a given set of training data.

You have many algorithms to choose from, such as Linear Regression, Decision Trees, Neural Networks, SVM's, and so on.



A particular computer algorithm’s intelligence concerns solving the problems associated with applying that algorithm to analyse the data fed into it.

Artificial narrow intelligence (ANI) consists of algorithms designed and/or trained to solve particular problems.  Computation is an algorithmic and deterministic type of information processing.

Artificial Neural Network(ANN) uses the processing of the brain as a basis to develop algorithms that can be used to model complex patterns and prediction problems.



Google researchers have recently worked to develop an AI system that is capable of detecting lung cancer like human radiologists. The system is trained with a deep learning algorithm which interprets CT scans to foresee a patient’s likelihood of having the disease.

The study was funded by Google, and researchers employed AI as a diagnostic tool to evaluate images and predict disease, eliminating human opinion. The AI model detected lung cancers.

Human intelligence is not only associated with logical, algorithmic, or rational thinking; humans also possess kinaesthetic and emotional intelligence. Current implementations of emotions in machines are based on logical, computable and deterministic approaches, leaving out essential characteristics of emotions, such as the fact that emotions interfere with rational processes and optimal decisions.

In fact, these implementations are founded on the idea that emotions play an important role in making humans more efficient, rationally speaking.

TO STOP WORLD HUNGER THE AI SUPERMACHINE WILL CULL THE POPULATION IN THIRD WORLD NATIONS RATHER THAN GROWING MORE FOOD..

AI WILL BE MORE INVOLVED IN “HOW TO PUT A JEW “ AS A RULER IN ALL NATIONS  .. STARTING OFF WITH VENEZUELA , IRAN AND SYRIA.


Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem

The umbrella term "natural-language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages.

Natural language understanding (NLU) is a branch of artificial intelligence (AI) that uses computer software to understand input made in the form of sentences in text or speech format.

NLU directly enables human-computer interaction (HCI). NLU understanding of natural human languages enables computers to understand commands without the formalized syntax of computer languages and for computers to communicate back to humans in their own languages.

NLU uses algorithms to reduce human speech into a structured ontology. AI fishes out such things as intent, timing, locations and sentiments.
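The kind of structured record an NLU pipeline produces can be sketched with a toy rule-based parser: free-form text in, a small intent-and-slots record out. Real NLU systems use statistical models; the keywords and intent names here are invented for illustration.

```python
# A toy sketch of NLU output: reduce an utterance to intent and slots.

def parse(utterance):
    text = utterance.lower()
    # intent detection via an invented keyword rule
    intent = "set_alarm" if "alarm" in text else "unknown"
    # slot filling: pull out a time-like token such as "7am"
    time = next((tok for tok in text.split()
                 if tok.endswith("am") or tok.endswith("pm")), None)
    return {"intent": intent, "time": time}

print(parse("Wake me with an alarm at 7am"))
# {'intent': 'set_alarm', 'time': '7am'}
```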

The main drive behind NLU is to create chat and speech enabled bots that can interact effectively with the public without supervision. NLU is a pursuit of many start-ups and major IT companies. Companies working on NLU include Medium's Lola, Amazon with Alexa and Lex, Apple's Siri, Google's Assistant and Microsoft's Cortana.

Full natural-language understanding requires common sense, understanding of context and creativity, none of which current AI trends possess.

There is a huge gap dividing the world of circuits and binary data and the mysteries of the human brain. Voice transcription is one of the areas where AI algorithms have made the most progress. In all fairness, this shouldn’t even be considered artificial intelligence.

Most current AI systems operate as a ‘black box’, with limited interaction capabilities, human context understanding and explanations. Contextual AI is technology that is embedded in and understands human context and is capable of interacting with humans.

Contextual AI does not refer to a specific algorithm or machine learning method – instead, it takes a human-centric view and approach to AI. Contextual AI needs to be intelligible, adaptive, customizable and controllable, and context-aware.

While statistical algorithms helped with the context-awareness and adaptivity that is needed for a Contextual AI system, they do fall short on the requirements for humans to understand what is going on, and to customize and control it.

Data structures and algorithms are patterns for solving problems. Developers who know more about data structures and algorithms are better at solving problems. That's why companies like Google, Microsoft and Amazon always include interview questions on data structures and algorithms.

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

Neural networks were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from multiple other neurons, each of which, when activated (or "fired"), cast a weighted "vote" for or against whether neuron N should itself activate. 

Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. Modern neural networks can learn both continuous functions and, surprisingly, digital logical operations.
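The "fire together, wire together" rule can be sketched as a Hebbian-style update (the starting weight, learning rate and activation sequence are invented): the connection is strengthened only when the two units activate together.

```python
# Hebbian-style weight update: strengthen a connection in proportion
# to the joint activation of the pre- and post-synaptic units.

def hebbian_update(w, pre, post, lr=0.1):
    return w + lr * pre * post      # no joint firing -> no change

w = 0.2
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebbian_update(w, pre, post)

print(round(w, 1))   # 0.4: only the two joint activations increased the weight
```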

The military uses drones for ISR (Intelligence, Surveillance and Reconnaissance) missions. Implementing artificial intelligence for drones is a combination of mechanical devices, navigational instruments, and machine vision. The AI behind the drone needs to be trained using a supervised learning process.

First, a human operator pilots the drone themselves to collect visual and spatial data from the cameras and lidars; this operation is recorded. People then label objects in the resulting recordings, such as a wall, mountain, or cliffside. The newly labeled recordings are then run through the machine learning algorithm that is planned to operate the drone. 

This would train the drone to distinguish between objects within the field of vision of its mounted camera. The algorithm would also correlate instances of turns and stops to the objects that the drone sees in its camera’s field of vision. This would in essence train the drone to stop or turn when it encountered certain objects.

The vehicle could then get a command to move to a new location. The algorithm behind the software would then be able to move itself and its operational payload (for example, the listening devices it is equipped with) safely to the determined location. 

In the case of autonomous drones, many of them utilize GPS technology and tracking to allow operators to plot the general path of the drone’s flight. As the drone is operating autonomously, the exact flight pattern and maneuvers would be left to the artificial intelligence. Drones could allow operators to make decisions without being concerned that they might be ambushed from the rear.

Many AI tools, algorithms and platforms deployed already have transformed traditional methods of banking and business of money. The impact can be felt across the banking sphere, including core banking, efficiency, customer service, products and services, and profits.

Speaking of fraud, AI security systems are deemed better than even the most sophisticated IT platforms. AI algorithms are designed to detect fraud on the basis of predetermined rules. They use predictive analysis to prevent fraudulent activities.

Google first started innovating with AI in search in 2015 with the introduction of RankBrain, its machine learning-based algorithm.











Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input.

Algorithmic risks arise from the use of data analytics and cognitive technology-based software algorithms in various automated and semi-automated decision-making environments. Three areas in the algorithm life cycle have unique risk vulnerabilities:

Input data is vulnerable to risks, such as biases in the data used for training; incomplete, outdated, or irrelevant data; insufficiently large and diverse sample size; inappropriate data-collection techniques; and a mismatch between the data used for training the algorithm and the actual input data during operations.

Algorithm design is vulnerable to risks, such as flawed assumptions or judgments, coding errors, and inappropriate modeling techniques.

Output decisions are vulnerable to risks, such as incorrect interpretation of the output, inappropriate use of the output, and disregard of the underlying assumptions.

The immediate fallouts of algorithmic risks can include inappropriate and potentially illegal decisions. And they can affect a range of functions, such as finance, sales and marketing, operations, risk management, information technology, and human resources.

Algorithms operate at faster speeds in fully automated environments, and they become increasingly volatile as algorithms interact with other algorithms or social media platforms. Therefore, algorithmic risks can quickly get out of hand.

Algorithmic risks can also carry broader and long-term implications across a range of risks, including reputation, financial, operational, regulatory, technology, and strategic risks. Given the potential for such long-term negative implications, it’s imperative that algorithmic risks be appropriately and effectively managed.

The growing prominence of algorithmic risks can be attributed to the following factors:--

Algorithms are becoming pervasive
Machine learning techniques are evolving
Algorithms are becoming more powerful
Algorithms are becoming more opaque
Algorithms are becoming targets of hacking

Conventional risk management approaches aren’t designed for managing risks associated with machine learning or algorithm-based decision-making systems. This is due to the complexity, unpredictability, and proprietary nature of algorithms, as well as the lack of standards in this space.

Three factors differentiate algorithmic risk management from traditional risk management:--

Algorithms are typically based on proprietary data, models, and techniques
Algorithms are complex, unpredictable, and difficult to explain
There’s a lack of standards and regulations that apply to algorithms

To effectively manage algorithmic risks, there’s a need to modernize traditional risk-management frameworks. Organizations should develop and adopt new approaches that are built on strong foundations of enterprise risk management and aligned with leading practices and regulatory requirements.

Algorithms today have the ability to absorb more data and, hence, be more accurate. As long as the data is good and clean, feeding another million datasets to an algorithm will inch up its accuracy. This has caused an unending hunger for well-annotated and labelled data for AI algorithms and applications.




In recent years, criminal justice systems in many different countries have begun to use algorithmic risk assessment tools. All such tools automate the  analysis of whatever data has been inputted into the  system. 

Most of these tools still rely on manually-inputted data from questionnaires similar to those  that were part and parcel of the last generation of  risk-assessment tools, while newer tools are fully automated and rely on information that already exists  in various government databases.

There are three different kinds of “opacity” in algorithms: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) opacity arising from the characteristics of machine learning algorithms that make them useful.

Recognizing these distinct forms of opacity is important to determining what technical and non-technical solutions  can prevent algorithms from causing harm.

On (1), secrecy may be essential to the proper function of an algorithm (such as to prevent it from being gamed), but such algorithms are easily reviewable by trusted and independent auditors.

Regarding (2), the solution to technical illiteracy is simply greater public education.

Finally, (3) is difficult because there may be a trade-off between fairness, accuracy, and interpretability. Certain AI techniques could be avoided in fields where transparency is crucial, or new benchmarks could be developed to assess such algorithms for discrimination and other issues.

Even though Opaque AI algorithms such as neural networks or genetic algorithms are so powerful, explaining how to reach the decision is almost impossible. They are almost black boxes!

In contrast, people can read decisions of Transparent AI algorithms such as decision tree or random forest.

If the subject requires legal regulation, then you might need to explain how they decide. However, the result is mostly considered more important than the means to an end.

Trusting an opaque AI algorithm requires blind confidence. You might remember Microsoft’s conversation bot, Tay, which was based on a deep learning system. But it lived only 16 hours and was killed by Microsoft because the bot became racist ( spoke against slimy Jews ) and tweeted genocide-supporting sentences.


ALGORITHMIC DECISION-MAKING: Using outputs produced by algorithms to make decisions. One of the earliest forms of algorithmic decision-making that is still in use today in the United States is federal sentencing guidelines for judges. This involves nothing more than a weighted mathematical equation,  drawn from statistics, that recommends a sentence length based on the attributes of the crime.

A weight function is a mathematical device used when performing a sum, integral, or average to give some elements more "weight" or influence on the result than other elements in the same set.

A weighted mean is a kind of average. Instead of each data point contributing equally to the final mean, some data points contribute more “weight” than others.

If all the weights are equal, then the weighted mean equals the arithmetic mean (the regular “average” you're used to). In math and statistics, you calculate a weighted average by multiplying each value in the set by its weight, then adding up the products and dividing the products' sum by the sum of all the weights.
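That calculation is a one-liner (the scores and weights below are invented for illustration):

```python
# Weighted mean: multiply each value by its weight, sum the products,
# and divide by the sum of the weights.

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

print(weighted_mean([80, 90], [1, 3]))   # 87.5: the 90 carries more weight
print(weighted_mean([80, 90], [1, 1]))   # 85.0: equal weights = plain average
```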

Algorithmic decision-making is now ubiquitous in the West, from assigning credit scores, to identifying the best candidates for a job position, to ranking  students for college admissions.   Today, these algorithmic decision-making systems are increasingly  employing machine learning, and they are spreading rapidly.

 They have many of the same problems as  traditional statistical analysis. However, the scale and reach of AI systems, the trend of rapid, careless  deployment, the immediate impact they have on many people’s lives, and the danger of societies viewing  their outputs as impartial, pose a series of new problems. 

Although it’s comforting to imagine AI algorithms as completely emotionless and neutral, that is simply not true. AI programmes are made up of algorithms that follow rules. They need to be taught those rules, and this occurs by feeding the algorithms with data, which the algorithms then use to infer hidden patterns and irregularities. 

If the training data is inaccurately collected, an error or unjust rule can become part of the algorithm - which can lead to biased outcomes. 

Racial discrimination in the AI used by credit agencies and parole boards.

The algorithm used by a credit agency might be developed using data from pre-existing credit ratings or based on a particular group’s loan repayment records. Alternatively, it might use data that is widely available on the internet - for example, someone’s social media behaviour or generalized characteristics about the neighborhood in which they live. If even a few of our data sources were biased, if they contained information on sex, race, colour or ethnicity, or we collected data that didn’t equally represent all the stakeholders, we could unwittingly build bias into our AI.

If we feed our AI with data showing the majority of high-level positions are filled by men, all of a sudden the AI knows the company is looking to hire a man, even when that isn’t a criterion. Training algorithms with poor datasets can lead to conclusions such as that women are poor candidates for C-suite roles, or that a minority from a poor ZIP code is more likely to commit a crime.

As we know from basic statistics, even if there is a correlation between two characteristics, that doesn’t mean that one causes the other. These conclusions may not be valid and individuals should not be disadvantaged as a result. Rather, this implies that the algorithm was trained using poorly collected data and should be corrected.

Fortunately, there are some key steps we can take to prevent these biases from forming in our AI.

1. Awareness of bias

Acknowledging that AI can be biased is the vital first step. The view that AI doesn’t have biases because robots aren’t emotional prevents us from taking the necessary steps to tackle bias. Ignoring our own responsibility and ability to take action has the same effect.

2. Motivation

Awareness will provide some motivation for change but it isn’t enough for everyone. For-profit companies creating a product for consumers have a financial incentive to avoid bias and create inclusive products; if company X’s latest smartphone doesn’t have accurate speech recognition, for example, then the dissatisfied customer will go to a competitor. Even then, there can be a cost-benefit analysis that leads to discriminating against some users.

For groups where these financial motives are absent, we need to provide outside pressure to create a different source of motivation. The impact of a biased algorithm in a government agency could unfairly impact the lives of millions of citizens.

We also need clear guidelines on who is responsible in situations where multiple partners deploy an AI: for example, a government programme based on privately developed software that has been repackaged by another party. Who is responsible here? We need to make sure that we don’t have a situation where everyone passes the buck in a never-ending loop.

3. Ensuring we use quality data

All the issues that arise from biased AI algorithms are rooted in the tainted training data. If we can avoid introducing biases in how we collect data and the data we introduce to the algorithms, then we have taken a significant step in avoiding these issues. For example, training speech recognition software on a wide variety of equally represented users and accents can help ensure no minorities are excluded.

If AI is trained on cheap, easily acquired data, then there is a good chance it won’t be vetted to check for biases. The data might have been acquired from a source which wasn’t fully representative. Instead, we need to make sure we base our AI on quality data that is collected in ways which mitigate introducing bias.

4. Transparency

The AI Now initiative believes that if a public agency can’t explain an algorithm or how it reaches its conclusion, then it shouldn’t be used. In situations like this, we can identify why bias and unfair decisions are being reached, give the people the chance to question the outputs and, as a consequence, provide feedback that can be used to address the issues appropriately. It also helps keep those responsible accountable and prevents companies from relinquishing their responsibilities.

While AI is undeniably powerful and has the potential to help our society immeasurably, we can’t pass the buck of our responsibility for equality to the mirage of supposedly all-knowing AI algorithms. Biases can creep in without intention, but we can still take action to mitigate and prevent them. It will require awareness, motivation, transparency and ensuring we use the right data.

Modern AI-propelled wars -- As the warplane swooped low over the jungle, it dropped a bundle of devices into the canopy below. Some were microphones, listening for guerrilla footsteps or truck ignitions. Others were seismic detectors, attuned to minute vibrations in the ground. 

Strangest of all were the olfactory sensors, sniffing out ammonia in human urine. Tens of thousands of these electronic organs beamed their data to drones and on to computers. In minutes, warplanes would be on their way to carpet-bomb the algorithmically-ordained grid square.

The idea of collecting data from sensors, processing them with algorithms fuelled by ever-more processing power and acting on the output more quickly than the enemy lies at the heart of military thinking across the world’s biggest powers. And today that is being supercharged by new developments in artificial intelligence (AI).

Complex decision-making under uncertainty is at the heart of modern economies. Whether as a consumer deciding which products and services to consume, as an employee choosing the right job and career, or as a manager running daily operations or planning the next factory, we all constantly and simultaneously face complex, interrelated problems for which our natural intelligence seems to have equipped us particularly well.

AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be.

One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm.

At an innovation conference just outside of Silicon Valley, one of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out”

IMAGINE THE FRENCH QUEEN WAS BEHEADED FOR SOMETHING SHE DID NOT SAY-- “IF PEOPLE DON’T HAVE BREAD THEY CAN EAT CAKE”.



Artificial intelligence bridges the gap between the unstructured data of case law and a lawyer's keen instinct. Algorithms can uncover new patterns in case law and measure the impact of relevant factors on an outcome. 

Instincts honed over the course of a long career can now be quantified, and juniors don't need to wait years before being able to develop their own instincts. Moreover, lawyers can now have unparalleled visibility into the law and have access to the same information as their opposition, no matter the size of the firm.

It's unlikely that machines will ever replace lawyers, but one thing is becoming clear: lawyers that use artificial intelligence will replace lawyers that don't.

In recent years, the advent of alternative legal service providers (ALSPs) has created an entirely new level of competition for law firms that traditionally practice in particular geographic areas and for particular types of clients.

Innovative AI developments in healthcare include the following:

Diagnostic research and development. The ability of AI to identify disease-related risks is quickly developing. For example, one technology company has developed an artificial neural network (a computing system inspired by the biological neural network that involves various machine learning algorithms working together to process complicated data inputs) that uses retinal images to assist in the identification of cardiovascular risk factors. Similarly, Stanford University researchers have developed an algorithm to assist in the identification of skin cancer using neural networks.

Do-it-yourself diagnostics. Smartphones, wearables, and other connected personal devices will continue to become resources for at-home diagnostics, sometimes eliminating the need to go to a doctor’s office. For example, technology companies have developed apps that use image recognition algorithms to identify skin cancer risks and to diagnose urinary tract infections.

AI and medical records. While many large health systems already use electronic medical records, the medical records ecosystem continues to evolve. 

Various companies have developed and now offer programs that analyze unstructured patient medical records by using AI tools like machine learning (a type of AI that involves algorithms that can learn from data without relying on rules-based programming) and natural language processing (a type of AI in which computers can understand and interpret human language) to deliver meaningful and searchable data, such as diagnoses, treatments, dosages, symptoms, etc.

Employers are increasingly using AI to analyze job applicants and make day-to-day employment-related decisions. For example, some employers are using AI-powered software programs to auto-screen resumes as a traditional recruiter would, and others are using AI recruiting assistants to communicate with applicants through messaging apps. 

The information used to structure an AI algorithm could be unintentionally biased, which could potentially lead to discrimination claims by employees and/or applicants. 

If employers are using AI either directly or indirectly to make employment-related decisions, they may want to evaluate employment discrimination risks and mitigate against them, if possible, by, for example, understanding the data used to build out and/or train the AI at issue and regularly auditing decisions made through the use of AI.

Because of the pace of AI development and the prioritization of its growth, employers may want to continue harnessing the opportunities AI presents while staying mindful of legal and regulatory compliance issues.

Processes involving algorithms, direct cause and effect or predictability are far more likely to be assigned to purpose-built software than the less set-in-stone elements of the legal process – and this is why it’s extremely unlikely that lawyers will ever be replaced by robots.

There are a number of particular skills that are required in order for a legal expert to perform well in their role which AI simply cannot currently emulate (a lawyer's point of view).

1. Strategic and Creative Thinking
The ability to “think outside the box” is very human. There are thousands upon thousands of slightly different possible outcomes that may result from every distinguishable action. The human mind – with its ability to judge from experience which is most likely, which is least, and what would need to happen for each to come about – is programmed for these purposes in a far more sophisticated way than AI can currently achieve.

2. Conflict resolution and negotiation
With our understanding of the complexities of human-related processes and our ability to improvise and judge, we are far better equipped to deal with conflict than robots are ever likely to be – and conflict is a lawyer’s bread and butter.

We can read into statements in order to extrapolate the true intentions and priorities of each party on either side of an argument. We can then make adjustments and offers to the opposing party in order to satisfy those intentions and priorities. We can judge when is the right time to push and when to concede – a robot is not capable of these complex mental gymnastics.

3. Emotional Intelligence and Empathy
AI may be able to recognise faces in images, but it can rarely successfully read the feelings those faces show. Humans – to lesser or greater degrees – are capable of the accurate analysis of emotional subtext, the application of intuition and the use of delicately worded or allusive language.

Through these methods, we are able to properly judge how a person feels. This way, we can judge whether proceedings are going well for that individual or not, and we may also be able to determine what we may need to change in order to shift onto the right track. We can also often tell if someone is lying or being manipulated – important skills in the field of law.

4. The Interpretation of Grey Areas
Robots and computers function well when presented with quantifiable data. However, once a situation enters a “grey area” – whether this term refers to morals, processes or definitions – robots are more likely to falter. As lawyers, we excel at using our judgement ( MY LEFT BALL ) when there is no “right” or “wrong” answer, while computers generally require the existence of a definite solution to a problem in order to function correctly.

5. Critical Thinking
Humans are capable of responding to more indicators of “quality” than computers are. While an AI system may be able to analyse a document according to the “true” or “false” statements made within the text, we can judge whether or not it is well-written and analyse the implication of the use of certain words and the overall meaning of the content.

6. Problem Solving
AI cannot yet be programmed to solve problems in the same way that a human mind can. We are capable of working from experience, analysing and responding to failures or mistakes of our own accord, navigating complications and obstacles and understanding the complex reasons why a problem has arisen in the first place.

7. Planning
Because we are able to predict outcomes, make informed assumptions and lay the groundwork for complicated processes, humans are great planners. We know that schedules change and we can create backup plans for that eventuality. We understand the strengths, weaknesses and tendencies of every individual and process involved in our plan, and we can prioritise tasks in order to make every step effective. AI is not yet capable of navigating these nuanced elements.

Developments in AI are actually more likely to create jobs in the legal sector than to displace lawyers. With the development of new technologies, the processing of new patents, the addition of new workplace regulations and the rise of new areas of cyber-crime are inevitable. Because of this, cyber-law is an ever-growing field.

The main difference is that an expert system is rule-based, while an artificial intelligence system is based on statistical simulation. Second, the expert system is knowledge-based, whereas artificial intelligence generally uses algorithms to calculate and analyse the best results.

The bot and fraud detection algorithms will improve over time, and so will the ability of the bot to go undetected.





An indirect threat is the impact of AI-powered social media and search algorithms. For example, Facebook’s algorithm determines the content of a user’s newsfeed and influences how widely and to whom content is shared. 

Google’s search algorithm indexes content, and decides what shows up in the top of search  results. These algorithms have played a significant role in establishing and reinforcing echo chambers, and  they ultimately risk negative impacts on media pluralism and inhibition of a diversity of views.

Computational propaganda has been defined as 'the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks'. These activities can feed into influence campaigns: coordinated, illegitimate efforts of a third state or non-state agent to affect democratic processes and political decision-making, including (but not limited to) election interference. By 2020, virtually everyone in the world will be online.  

Algorithms on social media and search engines
Algorithms are processes in (computational) calculations or operations. Online platforms such as Google,  Facebook and Twitter use various algorithms to predict what users are interested in seeing, spark engagement and maximise revenues. 

Based on a user's habits and history of clicks, shares and likes, algorithms filter and prioritise the content that the user receives. As users tend to engage more with content that sparks an emotional reaction and/or confirms already existing biases, this type of content is prioritised.
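A toy sketch of that prioritisation logic (the topics, click counts and the `rank` function are hypothetical, purely to illustrate the feedback loop described above):

```python
# Hypothetical engagement-based ranking: score each post by the user's
# past interactions with that topic, then show the highest scorers first.
user_history = {"politics": 12, "sports": 3, "science": 1}  # past clicks by topic

posts = [("science", "New telescope images"),
         ("politics", "Outrage over new bill"),
         ("sports", "Final score last night")]

def rank(posts, history):
    # Posts on topics the user already engages with float to the top.
    return sorted(posts, key=lambda p: history.get(p[0], 0), reverse=True)

for topic, title in rank(posts, user_history):
    print(topic, "-", title)
# The feed amplifies what the user already clicks on, reinforcing the echo chamber.
```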

A bot (short for robot) is an automated account programmed to interact like a user, in particular on social media. For disinformation purposes, illegitimate bots can be used to push certain narratives, amplify  misleading messaging and distort online discourse. Some of the bots have been used to spread disinformation

Responding to growing concern about the impact of disinformation bots, Twitter suspended up to 70 million accounts between May and June 2018. Facebook removed 583 million fake accounts in the first quarter of 2018 in an attempt to combat false news. Experts predict that the next generation of bots will use natural language processing, making it harder to identify them as bots.


WHY SHOULD TWITTER / FACEBOOK PREVENT A DESH BHAKT FROM TRANSMITTING HIS CRITICAL POINT OF VIEW TO HIS OWN NATION'S PM?     

IS A BLOGGER EXPECTED ONLY TO SHOWCASE JEW ROTHSCHILD AND DEEP STATE APPROVED NEWS?

Trolls are human online agents, sometimes sponsored by state actors or deep state to harass other users or post divisive content to spark controversies.

Machine-driven communications (MADCOMs) marry artificial intelligence (AI) with machine learning to generate text, audio and video content, making it easier to tailor messages to individual users' personalities  and backgrounds. For example, MADCOM can use chatbots using natural language processing to engage  users in online discussions, or even to troll and threaten people. 

As deep-learning algorithms evolve, it is  becoming easier to manipulate sound, image and video for impersonation, or to make it appear that a  person did or said something they did not ('deep fakes'). This will make it increasingly difficult to distinguish between real and (highly realistic) fake audiovisual content, further hampering trust online.

The private and public sectors are increasingly turning to artificial intelligence (AI) systems and machine learning algorithms to automate simple and complex decision-making processes. The mass-scale digitization of data and the emerging technologies that use them are disrupting most economic sectors, including transportation, retail, advertising, and energy, and other areas. 

AI is also having an impact on democracy and governance as computerized systems are being deployed to improve accuracy and drive objectivity in government functions.

Algorithms are harnessing volumes of macro- and micro-data to influence decisions affecting people in a range of tasks, from making movie recommendations to helping banks determine the creditworthiness of individuals.

In the pre-algorithm world, humans and organizations made decisions in hiring, advertising, criminal sentencing, and lending. These decisions were often governed by federal, state, and local laws that regulated the decision-making processes in terms of fairness, transparency, and equity. Today, some of these decisions are entirely made or influenced by machines whose scale and statistical rigor promise unprecedented efficiencies.


Given this, some algorithms run the risk of replicating and even amplifying human biases, particularly those affecting protected groups.   For example, automated risk assessments used by U.S. judges to determine bail and sentencing limits can generate incorrect conclusions, resulting in large cumulative effects on certain groups, like longer prison sentences or higher bails imposed on people of color.

Bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions which can have a collective, disparate impact on certain groups of people even without the programmer’s intention to discriminate. 

The exploration of the intended and unintended consequences of algorithms is both necessary and timely, particularly since current public policies may not be sufficient to identify, mitigate, and remedy consumer impacts.
Back-propagation is just a way of propagating the total loss back into the neural network to know how much of the loss every node is responsible for, and subsequently updating the weights in a way that minimizes the loss, giving the nodes with higher error rates lower weights and vice versa.
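A minimal single-neuron sketch of that idea, assuming a linear unit with squared-error loss (all names, learning rate and numbers here are illustrative):

```python
# One training step: forward pass, compute loss, propagate the loss back
# via the chain rule, and nudge each parameter against its gradient.
def train_step(w, b, x, y_true, lr=0.1):
    y_pred = w * x + b               # forward pass
    loss = (y_pred - y_true) ** 2    # squared-error loss
    # Backward pass: each parameter's share of the blame for the loss.
    dloss_dpred = 2 * (y_pred - y_true)
    dw = dloss_dpred * x             # gradient w.r.t. the weight
    db = dloss_dpred                 # gradient w.r.t. the bias
    # Step downhill, shrinking the loss on the next pass.
    return w - lr * dw, b - lr * db, loss

w, b = 0.0, 0.0
for _ in range(50):
    w, b, loss = train_step(w, b, x=1.0, y_true=2.0)
print(round(w, 2), round(b, 2))  # w and b settle so that w*1 + b is close to 2
```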

Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input.

Neural networks can be hardware- (neurons are represented by physical components) or software-based (computer models), and can use a variety of topologies and learning algorithms.

AI generally refers to machines that respond to stimuli in a way consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention. Intentionality: artificial intelligence algorithms are designed to make decisions, often using real-time data. They are different from passive machines capable of only mechanical or predetermined responses. 

Using sensors, digital data, or remote inputs, they combine information from a variety of sources, instantly analyze the material, and act on the insights gained from that data. With tremendous improvements in storage systems, processing speed and analytical methods, they bring great sophistication to analysis and decision-making.


Adaptability: AI systems have the ability to learn and adapt when making decisions. In the transportation area, for example, semi-autonomous vehicles have the means of notifying drivers and vehicles about impending congestion, potholes, highway construction or other traffic interruptions. 

Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their “experience” is immediately and completely transferred to other similarly configured vehicles. 

Their advanced algorithms, sensors, and cameras have experience in current operations and use dashboards and visual displays to display information in real-time, allowing human drivers to understand the ongoing traffic and vehicle conditions. And in the case of fully autonomous vehicles, sophisticated systems can fully control a car or truck and make all navigational decisions.

The importance of identifying bias in AI algorithms

Ultimately, AI and ML algorithms, however tech-savvy and automated they eventually become, begin as human ideas. They are then manipulated, designed, tested and trained by humans as well. As such, there are many ways in which human error, judgment, opinion or experience could find a way into the outcome. 

When this happens and the model itself is faulty, it can have an even more difficult time performing amid data that is also biased. In-house training is required to ensure that these situations happen as infrequently as possible and when they do, the issues are caught and reversed as quickly as possible.

Bias can creep in long before the data is collected, as well as at many other stages of the deep-learning process. It stems from lack of awareness of the downstream impacts of data, imperfect processes, and operating with a lack of social context. 

Then there's the grand philosophical quandary -- "what the absence of bias should look like." This could take the form of running algorithms alongside human decision makers, comparing results, and examining possible explanations for differences. 

Similarly, if an organization realizes an algorithm trained on its human decisions (or data based on prior human decisions) shows bias, it should not simply cease using the algorithm but should consider how the underlying human behaviors need to change.

Algorithms never think for themselves. In fact, they don’t think at all (they’re tools), so it’s up to us humans to do the thinking for them.

Artificial intelligence is an umbrella term that refers to computers that exhibit any form of human cognition. It is a term used to describe the way computers mimic human intelligence. Even by this definition of ‘intelligence’, the way AI functions is inherently different from the way humans think.

Consider the example of an algorithm that analyzes images of cats: the program is taught to analyze the shifts in the color of an image and how the image changes. If the color suddenly switches from pixel to pixel, it could be indicative of the outline of the cat. 

Through this method, the algorithm can find the edges of the cat in the picture. Using such methods, ML algorithms are tweaked until they can find the optimal solution in a small dataset.

Once this step is complete, the objective function is introduced. The objective function makes the algorithm more efficient at what it does. While the cat-detecting algorithm will have an objective to detect a cat, the objective function would be to solve the problem in minimal time. By introducing an objective function, it is possible to specifically tweak the algorithm to make it find the solution faster or more accurately.

The algorithm is trained on a sample dataset with the basic blueprint of what it needs to do, keeping in mind the objective function.
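The pixel-to-pixel colour-shift idea above can be sketched as a toy one-dimensional edge finder (the threshold and sample values are made up for illustration):

```python
# Scan a row of grayscale pixel values and flag positions where the
# value jumps sharply from one pixel to the next - a crude outline detector.
def find_edges(row, threshold=100):
    edges = []
    for i in range(1, len(row)):
        if abs(row[i] - row[i - 1]) > threshold:
            edges.append(i)
    return edges

# A bright object (values ~200) against a dark background (values ~10):
row = [10, 12, 11, 200, 205, 198, 9, 8]
print(find_edges(row))  # [3, 6] - where the object begins and ends
```

An objective function, in this toy setting, would then be tweaked for speed or accuracy, e.g. by tuning the threshold against labelled examples.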

Compounding all of this are essentially two things:--

 (1) virtually all of these AI applications can be expected to barricade themselves with as-is warranties and other pro-developer legal mechanisms that end users invariably agree to without actually ever having a meaningful opportunity to understand what’s at stake; and 

(2) the algorithmic bias is shrouded in secrecy, bolstered by trade secret provisions, non-circumvention (e.g., no reverse engineering) obligations and other features that make it difficult to detect.

The Emerging Irrelevance of Algorithmic Transparency in AI. Suppose AI developers agree to be “transparent”: that they are willing to disclose their algorithm. Ultimately, in some instances, this willingness may dispel the allure we normally accord to transparency, because when we are dealing with machine-learning AI applications, the value of the disclosed algorithm is diminished by its age. 

Stated differently, the more iterations the AI has gone through, what the original developer can disclose becomes more and more meaningless. So while we may be able to take a look under the hood, our desire to understand the “why” of what happened may not be satisfied. 

This, in turn, might bring us to the (uncomfortable) conclusion that we simply don’t understand, or at least don’t fully understand (as much as we’d like to) why the AI produced the result that it did. With that, we will have to learn to be satisfied that the actions of machine-learning AI applications cannot be fully understood. This observation directly ties in with the issue of developer liability.

The Open Algorithms (OPAL) project may facilitate the transition to greater reliance on these ‘private’ data. OPAL aims at extracting key indicators (such as population density, poverty, or diversity) through a secured open source platform.

 It also relies on open algorithms running on the companies’ servers, behind their firewalls. OPAL comes with governance standards that ensure the security, auditability, inclusivity, and relevance of the algorithms for different scenarios.

AI systems can also provide a useful ‘aspirational analogy’ to make future human actions more effective. What makes current AI so impressively good at its job is the credit assignment function. 

This is the ability of the algorithms to identify and reinforce the artificial neural networks that contribute most to coming up with the “right” result through many iterations and data-fuelled feedback loops. These allow for machine learning. 

In a future ‘Human AI ecosystem’, governments, corporations or the aid sector could apply AI tools to identify and reinforce what contributes to ‘good policy results’, including outcomes of aid programs. They could also better understand, through feedback, whether these effects are desirable in the long run.


In machine learning, algorithms rely on multiple data sets, or training data, that specifies what the correct outputs are for some people or objects. From that training data, it then learns a model which can be applied to other people or objects and make predictions about what the correct outputs should be for them.
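A bare-bones illustration of that train-then-predict pattern, using a hypothetical one-nearest-neighbour model in pure Python (the labels and points are invented):

```python
# Training data: inputs paired with their correct outputs (labels).
train = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
         ([5.0, 5.0], "dog"), ([4.8, 5.2], "dog")]

def predict(x):
    # Squared Euclidean distance between two points.
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    # The "model" learned from the training data: for a new input,
    # output the label of the closest training example.
    return min(train, key=lambda pair: dist(pair[0], x))[1]

print(predict([1.1, 1.0]))  # "cat"
print(predict([5.1, 4.9]))  # "dog"
```

Real systems use far richer models, but the shape is the same: learn from labelled examples, then predict the correct output for unseen ones.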

The revolution in object detection brought about by convolutional neural nets and deep neural nets has really increased the probability of detection.

Computers must be able to recognize an object before they can classify it. “We need to create a labeled data set for training these algorithms [to identify] lots of different objects.” Doing so is very manpower-intensive.


Things will get worse in the future as radars develop the ability to sense their environment with artificial intelligence and machine learning, and adapt their transmission characteristics and pulse processing algorithms to defeat attempts to jam them.


New approaches like REAM seek to enable systems to generate effective countermeasures automatically against new, unknown, or ambiguous radar signals in near real-time. They are trying to develop new processing techniques and algorithms that characterize enemy radar systems, jam them electronically, and assess the effectiveness of the applied countermeasures.

Waveform-agile radar systems of the future will shift frequencies quickly in a pre-programmed electronic dance to foil electronic warfare attempts to defeat them.

The company is moving machine-learning algorithms to the EA-18G carrier-based electronic warfare jet to counter agile, adaptive, and unknown hostile radars or radar modes. REAM technology is expected to join active US Navy fleet squadrons around 2025.

It specializes in EW modeling and simulation. The company has expertise in RF and wireless circuit and systems design; electronic board design, layout, and fabrication; embedded hardware and software design; RF modeling and simulation; computational electromagnetics; antennas; wireless testing; cell phone forensics; servo and stepper motor control; algorithm and digital signal processing development; cryptography; data compression; and RF detection

The unavoidable presence of human bias in designing and, sometimes, training these programs can make it difficult to completely eradicate new errors or course-correct after finding a bug.

“Locked” algorithms are those that provide the same result each time the same input is provided. As such, a locked algorithm applies a fixed function (e.g., a static look-up table, decision tree, or complex classifier) to a given set of inputs.

An adaptive algorithm is an algorithm that changes its behavior at the time it is run, based on the information available and on an a priori defined reward mechanism (or criterion). Game artificial intelligence (AI) controls the decision-making process of computer-controlled opponents in computer games. 

Adaptive game AI (i.e., game AI that can automatically adapt the behaviour of the computer players to changes in the environment) can increase the entertainment value of computer games.
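A toy contrast between the two kinds of algorithm, with invented names, thresholds and reward values purely for illustration:

```python
# A "locked" algorithm: a fixed lookup, so the same input always
# yields the same output, run after run.
LOCKED_TABLE = {"low": "approve", "high": "refer"}

def locked(risk):
    return LOCKED_TABLE[risk]

# An adaptive algorithm: its decision rule shifts at run time
# according to a predefined reward mechanism.
class Adaptive:
    def __init__(self):
        self.threshold = 0.5
    def decide(self, score):
        return "approve" if score < self.threshold else "refer"
    def reward(self, delta):
        # Feedback moves the threshold, changing future behaviour.
        self.threshold += delta

a = Adaptive()
print(locked("low"), a.decide(0.6))  # approve refer
a.reward(0.2)                        # positive feedback loosens the rule
print(a.decide(0.6))                 # approve - same input, new behaviour
```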

Genetic algorithms are computational problem-solving tools (generation over generation, they evolve and they learn). A genetic algorithm is a heuristic search method used in artificial intelligence and computing. It is used for finding optimized solutions to search problems based on the theory of natural selection and evolutionary biology. 

Genetic algorithms are good for searching through large and complex data sets. Genetic algorithms are used in artificial intelligence like other search algorithms are used in artificial intelligence — to search a space of potential solutions to find one which solves the problem. In machine learning we are trying to create solutions to some problem by using data or examples.

The process of using genetic algorithms goes like this:--
Determine the problem and goal.
Break down the solution to bite-sized properties (genomes)
Build a population by randomizing said properties.
Evaluate each unit in the population.
Selectively breed (pick genomes from each parent)
Rinse and repeat.
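The six steps above can be sketched as a toy Python program (a minimal illustration: the "OneMax" problem, and names like `breed` and `evolve`, are assumptions made for this sketch, not anyone's production code):

```python
import random

# Toy GA for the "OneMax" problem: maximize the number of 1s in a 16-bit
# string. All names and numbers here are illustrative assumptions.

GENOME_LEN = 16
POP_SIZE = 30

def random_genome():                # step 3: randomize the properties
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):                # step 4: evaluate each unit
    return sum(genome)

def breed(mum, dad):                # step 5: pick genomes from each parent
    child = [random.choice(pair) for pair in zip(mum, dad)]
    if random.random() < 0.1:       # occasional mutation
        i = random.randrange(GENOME_LEN)
        child[i] = 1 - child[i]
    return child

def evolve(generations=60):         # step 6: rinse and repeat
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]   # fitter half become parents
        population = [breed(random.choice(parents), random.choice(parents))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

best = evolve()
```

Each run is random, so the result varies from run to run; that is exactly the caveat that a GA never guarantees an optimal solution.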

A genetic algorithm is an algorithm that imitates the process of natural selection. They help solve optimization and search problems. ... Genetic algorithms imitate natural biological processes, such as inheritance, mutation, selection and crossover.

Big data is extremely beneficial. However, engineers and scientists are unable to utilize this data without the help of complex AI algorithms.

A genetic algorithm is based on the chootiya Darwinian principle of natural selection. Its purpose is to evolve and quickly solve optimisation problems.

There are many applications for Genetic Algorithms across many industries like air travel, trading and data security.

For example…

In air travel, a genetic algorithm optimizes shape, minimizes wing weight and optimizes fuel weight. This all improves the overall efficiency of the airplane.

In security, GAs are used for encrypting sensitive data and protecting copyrights. Hackers then create a more complex GA to beat that encryption. The cycle repeats. Possibly forever.

In robotics, a genetic algorithm can be programmed to search for a range of optimal designs for each specific use. It can also return results for entirely new types of robots, ones that can perform multiple tasks and have more general applications.

WITH GENETIC ALGORITHMS YOU ARE NEVER GUARANTEED AN OPTIMAL, OR EVEN A GOOD, SOLUTION, AND IT IS A BLACK ART TO FIND GOOD PARAMETERS AND ENCODING SCHEMES. 

ALSO, YOU OFTEN GET SOLUTIONS THAT ARE RIDICULOUS, IMPLAUSIBLE OR INEFFICIENT BECAUSE THE GA INTERPRETED YOUR FITNESS FUNCTION WITHOUT HUMAN COMMON SENSE.

SO SO SO, HOW DOES A CHOOTIYA SILVER BULLET GENETIC ALGORITHM WORK?


So in MAD MAN Darwin’s theory of Natural Selection, the three main principles necessary for evolution to happen are:--

Variation — There must be a variety of traits present in the population, or a means with which to introduce variation.
Selection — There must be a mechanism by which some members of the population can be parents, passing down their genetic information, while others do not.
Heredity — There must be a process in place by which children receive the properties of their parents.

This would be an opinion-based question, but in terms of how things are commonly defined, yes: genetic algorithms are a part of Artificial Intelligence. ... Genetic algorithms are computational problem-solving tools (generation over generation, they evolve and they learn).

Genetic algorithms search in parallel from a population of points. Therefore, they have the ability to avoid being trapped in a local optimal solution, unlike traditional methods, which search from a single point. 

Genetic algorithms use probabilistic selection rules, not deterministic ones.

The following outline summarizes how the genetic algorithm works: The algorithm begins by creating a random initial population. The algorithm then creates a sequence of new populations. At each step, the algorithm uses the individuals in the current generation to create the next population.

The main difference between genetic algorithm and traditional algorithm is that genetic algorithm is a type of algorithm that is based on the principle of genetics and natural selection to solve optimization problems while traditional algorithm is a step by step procedure to follow, in order to solve a given problem

A genetic algorithm is a search heuristic that is inspired by MAD MAN Charles Darwin's theory of natural evolution. This algorithm reflects the process of natural selection where the fittest individuals are selected for reproduction in order to produce offspring of the next generation

In genetic algorithms, a chromosome (also sometimes called a genotype) is a set of parameters which define a proposed solution to the problem that the genetic algorithm is trying to solve. The set of all solutions is known as the population. 

A genetic algorithm solves optimization problems by creating a population or group of possible solutions to the problem. ... After the genetic algorithm mates fit individuals and mutates some, the population undergoes a generation change. 

In genetic algorithms, operators such as selection, crossover and mutation are applied to generate the individuals of the next generation. 

Elitism refers to a method for improving GA performance; the basic idea is to transfer the best individuals of the current generation to the next generation.

The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution. The genetic algorithm repeatedly modifies a population of individual solutions.
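The elitism idea can be sketched in a few lines, assuming a generic `make_child` breeding callback (all names here are illustrative):

```python
import random

# Sketch of elitism: the best `elite_count` individuals of the current
# generation are copied unchanged into the next one, so the best fitness
# seen so far never gets worse. `make_child` is an assumed breeding callback.

def next_generation(population, fitness, make_child, elite_count=2):
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:elite_count]      # transferred untouched
    children = [make_child(ranked) for _ in range(len(population) - elite_count)]
    return elites + children

# Toy usage: individuals are plain numbers, fitness is the value itself,
# and a child is a perturbed copy of one of the five best parents.
pop = [random.uniform(0, 1) for _ in range(10)]
for _ in range(20):
    pop = next_generation(
        pop,
        fitness=lambda x: x,
        make_child=lambda ranked: random.choice(ranked[:5]) + random.gauss(0, 0.05),
    )
```

Because the elites pass through untouched, the best fitness in the population can only stay the same or improve from one generation to the next.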

Genetic Algorithms (GAs) are search-based algorithms based on the concepts of natural selection and genetics. GAs are a subset of a much larger branch of computation known as Evolutionary Computation.

Genetic algorithms (GAs) are stochastic search methods based on the principles of natural genetic systems. They perform a search in providing an optimal solution for evaluation (fitness) function of an optimization problem. GAs deal simultaneously with multiple solutions and use only the fitness function values

Genetic algorithm is an unbiased optimization technique. It is useful in image enhancement and segmentation. GA has proven to be among the most powerful optimization techniques in a large solution space. This explains the increasing popularity of GA applications in image processing and other fields. 

Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of genetic algorithm chromosomes to the next. ... Hence a GA can come to a better solution by using mutation. Mutation occurs during evolution according to a user-definable mutation probability.
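Bit-flip mutation with a user-definable probability can be sketched in a few lines (the `rate` parameter name is an illustrative assumption):

```python
import random

# Sketch of bit-flip mutation with a user-definable per-gene probability
# (`rate` is an illustrative parameter name).

def mutate(genome, rate=0.01):
    return [1 - gene if random.random() < rate else gene for gene in genome]
```

With rate=0.0 the genome passes through unchanged; with rate=1.0 every bit is flipped.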

The main difference between genetic algorithms and genetic programming is the representation of the algorithm/program. ... A parser also has to be written for this encoding, but genetic programming does not (usually) produce invalid states, because mutation and crossover operations work within the structure of the tree.

A genetic operator is an operator used in genetic algorithms to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another for the algorithm to be successful.

Abstract: Genetic Algorithm (GA) is a calculus-free optimization technique based on principles of natural selection for reproduction and various evolutionary operations such as crossover and mutation. The various steps involved in carrying out optimization through a GA are described.

Parameters of a genetic algorithm
Population size, stopping criteria, probability of crossover, probability of mutation and generation gap are the parameters of a genetic algorithm. ... The generation gap is defined as the proportion of chromosomes in the population which are replaced in each generation.
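These parameters are often collected in one place; a hypothetical configuration might look like this (the values are illustrative assumptions, not recommendations):

```python
# Hypothetical GA parameter set (values are illustrative assumptions).
params = {
    "population_size": 100,
    "crossover_probability": 0.8,
    "mutation_probability": 0.01,
    "generation_gap": 0.9,    # proportion of chromosomes replaced per generation
    "max_generations": 200,   # stopping criterion
}

# With a generation gap of 0.9 and a population of 100, ninety chromosomes
# are replaced each generation and ten carry over unchanged.
replaced = round(params["generation_gap"] * params["population_size"])
carried_over = params["population_size"] - replaced
```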

A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims. Fitness functions are used in genetic programming and genetic algorithms to guide simulations towards optimal design solutions.
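For instance, a hypothetical fitness function for a 0/1 knapsack problem reduces a candidate bit string to a single figure of merit (the item weights, values and capacity below are made up for illustration):

```python
# Hypothetical fitness function for a 0/1 knapsack: reduce a candidate
# solution (a bit string choosing items) to a single figure of merit.

WEIGHTS = [3, 4, 5, 8]     # illustrative item data
VALUES  = [4, 5, 6, 10]
CAPACITY = 12

def fitness(genome):
    weight = sum(w for w, g in zip(WEIGHTS, genome) if g)
    value  = sum(v for v, g in zip(VALUES, genome) if g)
    return value if weight <= CAPACITY else 0   # infeasible picks score zero
```

Scoring infeasible picks as zero is just one design choice; penalty terms that fall off gradually are another common way to keep the search pointed at feasible regions.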

Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.

Convergence is a phenomenon in evolutionary computation. It causes evolution to halt because every individual in the population is identical. Full convergence might be seen in genetic algorithms (a type of evolutionary computation) using only crossover (a way of combining individuals to make new offspring).

Genetic algorithms (GA) are a family of heuristics which are empirically good at providing a decent answer in many cases, although they are rarely the best option for a given domain. ... Genetic methods are well suited for multicriteria optimization, whereas gradient descent is dedicated to monocriteria optimization.

A genetic algorithm solves optimization problems by creating a population or group of possible solutions to the problem. ... The genetic algorithm similarly occasionally causes mutations in its populations by randomly changing the value of a variable.


Genetic algorithm programs are a machine learning model that is loosely based on biological evolution. In biological evolution, only a few individuals will survive in a certain environment out of the thousands that attempted to live there. Similarly, in genetic programming, hundreds or thousands of potential solutions will be tested to find superior ones, such as, in this case, locations in need of a finer mesh grid.

Genetic algorithms are designed to offer good solutions, rather than perfect ones.

Genetic algorithms do not guarantee the best solution, but they can often find good solutions faster than exhaustive search.  

In real life, genetic algorithms are applied in situations that are too complex for humans to resolve manually.

As evolution increasingly explains various non-biological processes impacting our lives, the schema theorem details the underlying mechanism of how these societal transformations take place. It is fascinating that a theorem that was initially formulated to describe the process of genetic algorithms has achieved truly global influence, relevance, and applicability.



SO SO SO-- What is a Genetic Algorithm?  

This is what you get..

1.An algorithm that mimics the genetic concepts of natural selection, combination, selection, and inheritance.
2.A probabilistic search technique for attaining an optimum solution to combinatorial problems that works in the principles of genetics.
3.An iterative meta-heuristic based on the evolution of species, that handles a population of solutions of the optimization problem that have a survival probability proportional to the quality of the respective solution, and makes combinations of solutions based on crossover and mutation operators.
4.The concept of basic genetics is applied to form a meta-heuristic optimization search technique.
5.A systematic method used to solve search and optimization problems and apply to such problems the principles of biological evolution, namely, selection of the ‘fittest’, sexual reproduction (crossover) and mutation.
6.The approximation algorithm based on the evolutional process.
7.Search technique to find exact or approximate solution to search problems.
8.A type of evolutionary computation algorithm in which candidate solutions are represented typically by vectors of integers or bit strings, that is, by vectors of binary values 0 and 1
9.Global search method based on a simile of the natural evolution.
10.A search heuristic used in computing to find true or approximate solutions to global optimization problems.
11.A special type of evolutionary technique where the potential solutions are represented by means of chromosomes (usually either binary or real sets of values). Each gene (or set of genes) represents a variable or parameter within the global solution.
12.A search technique used in computing to find exact or approximate solutions to optimization and search problems
13.A heuristic method for finding solutions to an optimization problem that takes advantage of evolutionary principles; different possible solutions to the problem are iteratively subjected to “replication”, “mutation” and “selection” processes. In order to illuminate its general principles a simple instance of the method is described below. In the context of RNA folding, the genetic algorithm might start with a randomly generated set of conformations that are compatible with the RNA sequence being folded. Then, in each iteration of the algorithm, multiple copies of each conformation are made (the replication step); more copies are made for conformations with lower free energies. The copying process is not perfect, but it introduces “mutations”, which may involve the creation/destruction of base pairs or entire helices. Following replication and mutation, a subset of the resulting conformations is selected based on their free energies and subsequently subjected to the next round of replication, mutation, and selection. Eventually, the obtained conformations would be enriched for those with free energies approaching the lowest possible free energy for the RNA sequence being folded.
14.Stochastic global optimization method, based on biological evolution and inspired by Darwin’s theory of “survival of the fittest”.
15.A search algorithm to enable you to locate optimal binary strings by processing an initial random population of binary strings by performing operations. 
16.Belongs to the larger class of evolutionary algorithms and is a search heuristic inspired on process of natural selection and is routinely used to generate useful solutions to optimization and search problems.
17.Abstraction and implementation of evolutionary principles and theories in computational algorithms to search optimal solutions to a problem.
18.Genetic algorithm (GA), one of the most popularly used evolutionary tools among soft computing paradigm, is mainly devised to solve real world ill-defined, and imprecisely formulated problems requiring huge computation. It is the power of GA to introduce some heuristic methodologies to minimize the search space for optimal solution(s) without sticking at local optima. Due to the inherent power, GA becomes one of the most successful heuristic optimization algorithms. It is widely used to solve problems of diversified fields ranging from engineering to art.
19.A genetic algorithm is a population-based metaheuristic algorithm that uses genetics-inspired operators to sample the solution space. This means that this algorithm applies some kind of genetic operators to a population of individuals (solutions) in order to evolve (improve) them throughout the generations (iterations).
20.A series of steps to allow the evolution of solutions to specific problems. It is inspired in biological evolution and particularly its genetic-molecular basis. 
21.An algorithm that evaluates a function searching for an optimal solution with methods inspired by natural selection strategies.
22.Genetic Algorithms (GAs) are adaptive heuristic search algorithm premised on the evolutionary ideas of natural selection and genetic. The basic concept of GAs is designed to simulate processes in natural system necessary for evolution, specifically those that follow the principles first laid down by MAD MAN Charles Darwin of survival of the fittest. As such they represent an intelligent exploitation of a random search within a defined search space to solve a problem.
23.An algorithm for optimizing a property based on an evolutionary mechanism that uses replication, deletion, and mutation processes carried out over many generations.
24.Class of algorithms used to find approximate solutions to difficult-to-solve problems, inspired and named after biological processes of inheritance, mutation, natural selection, and generic crossover. Genetic algorithms are a particular class of evolutionary algorithms.
25.A search technique that uses the concept of survival of the fittest to find an optimal or near optimal solution to a problem. Genetic algorithms use techniques inspired by evolutionary biology to generate new possible solutions known as offspring from an existing set of parent solutions. These recombination techniques include inheritance, selection, mutation, and crossover.
26.A genetic algorithm (abbreviated as GA) is a search technique used in computer science to approximate solutions to optimization and search problems.
27.An adaptive approach that provides a randomized, parallel, and global search based on the mechanics of natural selection and genetics in order to find solutions of a problem.
28.Genetic Algorithm is a population based adaptive evolutionary technique motivated by the natural process of survival of fittest, widely used as an optimization technique for large search spaces.
29.A metaheuristic that explores a solution space via adaptive search procedures based on principles derived from natural evolution and genetics. Solutions are typically coded as strings of binary digits called chromosomes.
30.A search heuristic that generate solutions to optimization problems using techniques inspired by natural evolution, such as selection, crossover and mutation
31.Genetic algorithm is a global solution search approach and based on the mechanics of natural selection and natural genetics.
32.It is an adaptive heuristic search algorithm based on the evolutionary ideas of natural selection and genetics in living system.
33.A subclass of evolutionary computing that uses genetic operators such as mutation and crossover to evolve solutions to mimic natural evolution.
34.An evolutionary approach applied in systems based on gene property of human being.
35.The basic idea behind genetic algorithm is to apply the principles of Darwin’s evolution theory. Briefly speaking, the algorithm is often done by the following procedure: 1) encoding of an initial population of chromosomes, i.e., representing solutions; 2) defining a fitness function; 3) evaluating the population by using genetic operations resulting in a new population; 4) decoding the result to obtain the solution of the problem.
36.An iterative optimization algorithm that works to minimize a given objective function by generating a random population and performing genetic operations to generate a new population.
37.It can be defined as heuristic search procedure that works on the principles of biological evolution.
38.Is a search meta-heuristic that mimics the process of natural selection. This meta-heuristic routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection and crossover.
39.It is a stochastic but not random method of search used for optimization or learning. Genetic algorithm is basically a search technique that simulates biological evolution during optimization process.
40.An optimization resource that uses interactive procedures to simulate the process of evolution of possible solutions populations to a particular problem. The process of evolution is random, but guided by a selection mechanism based on adaptation of individual structures. New structures are generated randomly with a given probability and included in the population. The result tends to be an increase in the adaptation of individuals to the environment and can result in an overall increase in fitness of the population with each new generation.
41.It is a population-based search and optimization tool that works based on Darwin’s principle of natural selection.
42.Genetic algorithm is an adaptive heuristic search algorithm based on the evolutionary ideas of natural selection and genetics in living system.
43.Genetic Algorithm is a bio-inspired heuristic solution search algorithm based on the evolutionary ideas of natural selection and genetics in living organism.
44.GA is an adaptive heuristic search algorithm that models biological genetic evolution. It proved to be a strong optimizer that searches among a population of solutions, and showed flexibility in solving dynamic problems
45.A special algorithmic optimization procedure, developed on the basis of simple hereditary property of animals and used for both of constrained and unconstrained problem. In Artificial Intelligence (AI), it is used as heuristic search also.
In computer science, artificial intelligence, and mathematical optimization, a heuristic (from Greek εὑρίσκω "I find, discover") is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. 
46.A genetic algorithm is a metaheuristic inspired by the process of natural selection to solve optimization problems.
47.A heuristic search method used in artificial intelligence and computing. It is used for finding optimized solutions to search problems based on the theory of natural selection and evolutionary biology.
48.A heuristics algorithm that is based on the mechanism of natural selection and natural genetics.
49.A metaheuristic algorithms inspired on the biological process of evolution and natural selection. Genetic algorithms are known to generate high-quality solutions with low influence of local minimums or maximums, relying on computational equivalents of natural processes such as crossover, mutation and environment fitness.
50.A special type of evolutionary technique which represents the potential solutions of a problem within chromosomes (usually a collection of binary, natural or real values).
51.An optimization scheme based on biological genetic evolutionary principles.
52.General-purpose search algorithms that use principles by natural population genetics to evolve solutions to problems
53.Technique to search exact or approximate solutions of optimization or search problem by using evolution-inspired phenomena such as selection, crossover, and mutation. Genetic algorithm is classified as global search
54.Genetic Algorithms (GA) are a way of solving problems by mimicking the same processes mother nature uses. They use the same combination of selection, recombination and mutation to evolve a solution to a problem
55.An algorithm that simulates the natural evolutionary process, applied the generation of the solution of a problem. It is usually used to obtain the value of parameters difficult to calculate by other means (like for example the neural network weights). It requires the definition of a cost function
56.An evolutionary algorithm which generates each individual from some encoded form known as “chromosomes” or “genome”.
57.A method of evolutionary computation for problem solving. There are states also called sequences and a set of possibility final states. Methods of mutation are used on genetic sequences to achieve better sequences.
58.A genetic algorithm (GA) is a heuristic used to find approximate solutions to difficult-to-solve problems through application of the principles of evolutionary biology to computer science. Genetic algorithms use biologically-derived techniques such as inheritance, mutation, natural selection, and recombination (or crossover). Genetic algorithms are a particular class of evolutionary algorithms.
59.A search technique used in computing to find exact or approximate solutions to optimization and search problems.
60.A probabilistic search technique for achieving an optimum solution to combinatorial problems that works in the principles of genetics.
61.An evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. 
64.An artificially intelligent technique motivated by the genetic behavior of animals and capable of solving non-linear optimization problems.
65.Heuristic procedure that mimics evolution through natural selection.
66.Adaptive heuristic search algorithm based on the principle of natural selection and natural genetics. In order to arrive at optimal solutions for design problems, the GA has been implemented so that the fundamental concepts of reproduction, chromosomal crossover, occasional mutation of genes and natural selection are reflected in the different stages of the genetic algorithm process. Although randomized, a Genetic Algorithm is by no means random; instead it exploits historical information to steer the search into regions of better performance within the search space. The process is initiated by selecting a number of candidate design variables either randomly or heuristically in order to create an initial population, which is then encouraged to evolve over generations to produce new designs which are better or fitter.
67.These are the search and optimization algorithms which are capable of searching large solution spaces to find the optimal solutions using the methods of natural selection
68.A stochastic population-based global optimization technique that mimics the process of natural evolution.
69.A canonical optimization method that emulates evolution and inheritance.
70.In the field of artificial intelligence, a genetic algorithm (GA) is a search heuristic that mimics the process of natural selection. This heuristic (also sometimes called a metaheuristic) is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover 
71.It is a search technique that imitates the procedure of natural selection. Genetic algorithms are used to optimize different search problems.
72.Genetic algorithms are evolutionary optimization methods motivated by biological phenomenon of natural selection and evolution
73.It is a search algorithm which uses natural selection and the mechanisms of population genetics. It is used for solving the constrained & unconstrained optimization problems ( Holland, 1968 ).
74.An evolutionary optimization algorithm based on the principles of genetics and the survival-of-the-fittest law of nature. Starting with a population of solutions, the algorithm applies crossover and mutation to the members of the population that best fit the objective function in order to obtain better fitting solutions.
75.A metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms.





Genetic algorithms are a metaheuristic used for all kinds of optimization problems. While they have applications in machine learning, they have as many applications elsewhere. ... Think of it as a meta machine-learning algorithm that can generate more problem-specific algorithms.

The most basic evolutionary algorithm pseudocode is rather simple:--
Create an initial population (usually at random)
Until "done" (exit criteria):
Select some pairs to be parents (selection)
Combine pairs of parents to create offspring (recombination)
Perform some mutation(s) on the offspring (mutation) ...

A genetic algorithm relies on a binary representation of individuals: an individual is a string of bits, on which mutation and crossover are easy to implement. ... Genetic algorithms are a type of evolutionary algorithm based on evolutionary biology and chromosome representations with evolutionary operators.
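Single-point crossover on such bit strings can be sketched as follows (an illustrative toy, not any specific library's API): cut both parents at the same random point and swap the tails.

```python
import random

# Sketch of single-point crossover on bit-string individuals: cut both
# parents at the same random point and swap the tails.

def crossover(parent_a, parent_b):
    point = random.randrange(1, len(parent_a))   # cut strictly inside the string
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2
```

Note that crossover only recombines existing bits; it never introduces a bit value absent from both parents, which is why mutation is still needed for diversity.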

In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection.

Genetic algorithms are used in artificial intelligence like other search algorithms are used in artificial intelligence — to search a space of potential solutions to find one which solves the problem. In machine learning we are trying to create solutions to some problem by using data or examples.

This would be an opinion based question, but in terms of how things are commonly defined – Yes, Genetic algorithms are a part of Artificial Intelligence. ... Genetic algorithms are computational problem-solving tools (generation over generation, they evolve and they learn).

A genetic algorithm solves optimization problems by creating a population or group of possible solutions to the problem. ... After the genetic algorithm mates fit individuals and mutates some, the population undergoes a generation change

Evolution and Evolutionary Algorithms
Fitness is the measure of the degree of adaptation of an organism to its environment; the bigger the fitness is, the more the organism is fit and adapted to the environment.

Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.

Genetic programming (GP) is considered as the evolutionary technique having the widest range of application domains. It can be used to solve problems in at least three main fields: optimization, automatic programming and machine learning

The genetic algorithm begins by creating a random initial population. The algorithm then creates a sequence of new populations. At each step, the algorithm uses the individuals in the current generation to create the next population.



GENETIC ALGORITHMS ARE NON-DETERMINISTIC METHODS..   THEY CANNOT BE USED FOR ANALYTICAL PROBLEMS

GA  CANNOT GUARANTEE OPTIMALITY. THE SOLUTION QUALITY ALSO DETERIORATES WITH THE INCREASE OF PROBLEM SIZE..STOCHASTIC ALGORITHMS IN GENERAL CAN HAVE DIFFICULTY OBEYING EQUALITY CONSTRAINTS. 

A WRONG CHOICE OF THE FITNESS FUNCTION MAY LEAD TO CRITICAL PROBLEMS SUCH AS BEING UNABLE TO FIND THE SOLUTION TO A PROBLEM OR, EVEN WORSE, RETURNING A WRONG SOLUTION TO THE PROBLEM. A SMALL POPULATION SIZE WILL NOT GIVE THE GENETIC ALGORITHM ENOUGH SOLUTION SPACE TO PRODUCE ACCURATE RESULTS. 

A HIGH FREQUENCY OF GENETIC CHANGE OR POOR SELECTION SCHEME WILL RESULT IN DISRUPTING THE BENEFICIAL SCHEMA AND THE POPULATION MAY ENTER ERROR CATASTROPHE, CHANGING TOO FAST FOR SELECTION TO EVER BRING ABOUT CONVERGENCE.



GENETIC ALGORITHMS DO NOT SCALE WELL WITH COMPLEXITY. GAs HAVE A TENDENCY TO CONVERGE TOWARDS LOCAL OPTIMA OR EVEN ARBITRARY POINTS RATHER THAN THE GLOBAL OPTIMUM OF THE PROBLEM. THIS MEANS THAT A GA DOES NOT "KNOW HOW" TO SACRIFICE SHORT-TERM FITNESS TO GAIN LONGER-TERM FITNESS.. 

GAs CANNOT EFFECTIVELY SOLVE PROBLEMS IN WHICH THE ONLY FITNESS MEASURE IS A SINGLE RIGHT/WRONG MEASURE (LIKE DECISION PROBLEMS), AS THERE IS NO WAY TO CONVERGE ON THE SOLUTION (NO HILL TO CLIMB).  AN EVOLUTIONARY ALGORITHM NEVER REALLY KNOWS WHEN TO STOP.




Algorithms can be classified into 3 types based on their structures: Sequence: this type of algorithm is characterized by a series of steps, and each step will be executed one after another. Branching: this type of algorithm is represented by "if-then" problems. Loop: this type of algorithm repeats a set of steps until a condition is met.

Using AI and ML for adaptive learning

While learning a language is a far cry from ERP education and training, the use of AI and ML to personalize and enhance the learning experience is not. 

Adaptive learning delivers tailored learning experiences that address the unique needs of an individual through resources, pathways, and just-in-time feedback.  This is done using algorithms to orchestrate the interaction with the learner and then deliver customized content to address the learner’s needs. 

Adaptive learning has many benefits:--
It can save time – instead of following a prescribed learning path for all learners in the same role, you can fast-track the learning based on existing knowledge and skills. You will spend time on new knowledge and skills.
It’s focused – by identifying exactly which areas need attention, learners need only focus on knowledge or skills they still need to master, not what they have already mastered.

Two types of algorithms for the purposes of regulation: “locked algorithms” and “adaptive algorithms.”

Locked algorithms provide the same result each time they’re fed the same input. The answers are normally based on things like look-up tables, decision trees, or classifiers. An adaptive algorithm, however, will change its behavior using a defined learning process. The outputs may change for a given set of inputs as the learning process is tweaked with new data.
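The distinction can be sketched in Python. The classes, thresholds and learning rate below are invented for illustration, not drawn from any regulatory example; the point is only that the adaptive model's output for the same input can drift as it is fed new data:

```python
# A hypothetical sketch contrasting a "locked" classifier (fixed behaviour)
# with an "adaptive" one whose decision threshold shifts with new data.

class LockedClassifier:
    def __init__(self, threshold=0.5):
        self.threshold = threshold  # fixed at manufacture time

    def predict(self, score):
        return "high-risk" if score >= self.threshold else "low-risk"

class AdaptiveClassifier:
    def __init__(self, threshold=0.5, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, score):
        return "high-risk" if score >= self.threshold else "low-risk"

    def update(self, score):
        # Defined learning process: nudge the threshold toward recent scores,
        # so the same input may be classified differently over time.
        self.threshold += self.learning_rate * (score - self.threshold)

locked = LockedClassifier()
adaptive = AdaptiveClassifier()
print(locked.predict(0.55), adaptive.predict(0.55))  # initially both agree
for s in (0.9, 0.9, 0.9):          # new data arrives and retrains the model
    adaptive.update(s)
print(locked.predict(0.55), adaptive.predict(0.55))  # locked unchanged; adaptive has drifted
```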

An adaptive algorithm, however, will “change its behavior using a defined learning process.” ... To date, FDA has cleared or approved only "locked" algorithms which are trained and then verified and validated upon each update.

Adaptive algorithms will change their behaviour using a defined learning process.

Algorithms retrained with new training data would have to be resubmitted for FDA approval.
New approval would also be needed for system to expand beyond its original scope.

Two types of algorithms were outlined in the report: Locked algorithms and adaptive algorithms.

Locked algorithms don't have the capability to continually adapt or learn every time they are used; they therefore provide the same result each time and can only be manually modified and validated by the manufacturer.

An adaptive algorithm does the very opposite. 

If algorithms are actually a substitute for human decision-making then they will, just like humans, inevitably make some mistakes.

If we apply product liability law to all these circumstances, developers will be liable for mistakes made by AI even though humans who make those very same mistakes would be given considerably more leeway. 

If this high standard of accountability is applied to companies trying to develop AI products, they will be crippled by the constant threat of litigation. Imperfect algorithms are just the tip of the iceberg when it comes to the murky legal issues raised by AI in healthcare. 

“Black box medicine” describes the use of opaque computational methods to help inform or make healthcare decisions. “Black box” refers to a fundamental opacity for some computing methods. We seem to be finding ways to unlock the “black box” for some things, although I still think a substantial chunk of medical AI is likely to be pretty “black box” for a while.


Of course, it has a ton of legal implications. Who has liability when someone gets injured and the care involves an opaque medical algorithm? Is that just on the doctor or the hospital that ended up implementing the algorithm in the first place? Is it on the manufacturer of the algorithm? Is it some combination? It’s complicated and still needs to be worked out. 

The most relevant forms of IP for medical technology generally are patents and trade secrecy. But it's tougher to patent medical AI such as software. Trade secrecy is the default pathway to try to get exclusivity for medical AI, and it's a problematic one. Data just isn't the kind of thing you can patent. 

Siloed data sets and algorithms make it really hard to generate comprehensive data sets across contexts, and for the whole field to learn from what everyone else is doing. Keeping things secret also makes it difficult for anyone to validate that medical AI is really doing what it says it’s doing, and that it’s working well. 

The closest thing we have is the General Data Protection Regulation (GDPR) in the EU that includes an “explainability requirement” that applies to AI at some level, but it’s not clear exactly how much. 

It requires that companies that build some forms of AI that make decisions about individuals be able to explain how the decision was made. This applies to healthcare as well, so that limits some of the "black box"-iness of medical AI. The U.S., controlled by kosher evil pharma, is obviously not a part of GDPR.

The General Data Protection Regulation (EU) 2016/679 (GDPR) is a regulation in EU law on data protection and privacy for all individual citizens of the European Union (EU) and the European Economic Area (EEA). 

It also addresses the transfer of personal data outside the EU and EEA areas. The GDPR aims primarily to give control to individuals over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.

In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation (or right to an explanation) is a right to be given an explanation for an output of the algorithm. 

Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. 

For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for."
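A hedged sketch of how such an explanation can fall out of a simple additive scoring model with "reason codes": the factors, weights and cutoff below are invented for illustration, not any real credit model.

```python
# A toy additive loan-scoring model that reports which factor contributed
# most to a denial. All factors, weights and the cutoff are invented.

WEIGHTS = {
    "declared_bankruptcy_last_year": -40,
    "late_payments": -5,          # per late payment
    "years_of_stable_income": 3,  # per year
}
APPROVAL_CUTOFF = 0

def decide_loan(applicant):
    contributions = {
        factor: WEIGHTS[factor] * applicant[factor]
        for factor in WEIGHTS
    }
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_CUTOFF else "denied"
    # The explanation names the single most negative contribution.
    main_factor = min(contributions, key=contributions.get)
    return decision, main_factor

decision, reason = decide_loan({
    "declared_bankruptcy_last_year": 1,
    "late_payments": 2,
    "years_of_stable_income": 4,
})
print(decision, "- main factor:", reason)
```

With an additive model like this, explanations are cheap; the difficulty described in this section is that deep neural networks offer no such decomposition of their decisions.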

To illustrate the point, a group of computer vision researchers took an image containing, say, a school bus and gave it as input to the best-performing image classification Artificial Intelligence algorithm around (not surprisingly, a deep neural network). 

As expected, the algorithm responded correctly and heralded the presence of a school bus. Then, they corrupted the school bus image by subtly modifying the colours of some of its pixels to make a new image which, to a human eye, was indistinguishable from the original one. When fed with this new corrupted image, the same neural network used before announced, with very high confidence, the presence of an ostrich.
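The same failure mode can be shown in miniature with a linear classifier rather than a deep network: a perturbation far too small to notice in any single feature flips the predicted class. All numbers below are invented, and the perturbation follows the sign-of-the-gradient idea behind the fast gradient sign method:

```python
# A toy linear "image classifier" fooled by a tiny adversarial perturbation.
# Weights and pixel values are invented for illustration only.

def predict(weights, features, bias=0.0):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "bus" if score >= 0 else "ostrich"

weights = [0.5, -0.25, 0.75, 1.0]   # a toy learned model
image = [0.2, 0.4, 0.1, 0.05]       # a toy "image" of 4 pixel features

# Perturb each pixel by a tiny step in the direction that lowers the score
# (the sign of the weight), as in the fast gradient sign method.
epsilon = 0.07
adversarial = [x - epsilon * (1 if w > 0 else -1)
               for w, x in zip(weights, image)]

print(predict(weights, image))        # the clean image
print(predict(weights, adversarial))  # a visually near-identical image
```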

Artificial intelligence is used to maximize profits

Algorithmic amplification is when some online content becomes popular at the expense of other viewpoints. This is a reality on many of the platforms we interact with today. The history of our clicks, likes, comments and shares are the data powering the algorithmic engine.

Recommendation algorithms were created by companies such as Facebook, YouTube, Netflix or Amazon for the purpose of helping people make decisions. An array of options are recommended and a choice is made by the user that is then fed as new knowledge to train the algorithm — without factoring in that the choice was in fact an output shown by the algorithm.

This creates a feedback loop, where the output of the algorithm becomes part of its input. As expected, recommendations similar to the choice that was made are shown.

This leaves us with a chicken-or-egg dilemma: Did you click on something because you were inherently interested in it, or did you click on it because you were recommended it? The answer, according to Chaney’s research, lies somewhere in between.

But the vast majority of algorithms do not understand the distinction, which results in similar recommendations inadvertently reinforcing the popularity of already-popular content. Gradually, this separates users into filter bubbles or ideological echo chambers where differing viewpoints are discarded.
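A minimal simulation of that feedback loop, with invented parameters: the recommender always shows the currently most-clicked item, users usually accept what is shown, and the click feeds straight back into the counts.

```python
import random

# Toy simulation of algorithmic amplification. All parameters are
# illustrative assumptions, not a model of any real platform.

def simulate(rounds=1000, n_items=5, accept_prob=0.8, seed=7):
    random.seed(seed)
    clicks = [1] * n_items            # start every item with one click
    for _ in range(rounds):
        recommended = max(range(n_items), key=lambda i: clicks[i])
        if random.random() < accept_prob:
            chosen = recommended      # user follows the recommendation...
        else:
            chosen = random.randrange(n_items)  # ...or browses on their own
        clicks[chosen] += 1           # the choice becomes new training data
    return clicks

clicks = simulate()
print(clicks)  # one item ends up dominating all the others
```

Even though every item started out equal, the output of the "algorithm" becomes part of its input, and popularity compounds.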

Feedback loop exacerbates the effects of a filter bubble.

As users within these bubbles interact with the confounded algorithms, they are being encouraged to behave the way the algorithm thinks they will behave, which is similar to those who have behaved like them in the past,  The longer someone’s been active on a platform, the stronger these effects can be.

Algorithm training data come with an inherent set of biases that reflect existing prejudices or is unrepresentative of the population it serves. When we fail to expose hidden patterns, associations and relationships in the training data and how representative it is of the general population, we create systems that propagate these biases and optimize for sameness of outputs.

Training an algorithm requires following a few standard steps:--
Collect the data
Train the classifier
Make predictions

The first step is necessary; choosing the right data will make the algorithm a success or a failure. The data you choose to train the model with are called features. 

The objective is to use these training data to classify the type of object. The first step consists of creating the feature columns. Then, the second step involves choosing an algorithm to train the model. When the training is done, the model will predict what picture corresponds to what object.

After that, it is easy to use the model to predict new images. For each new image fed into the model, the machine will predict the class it belongs to. For example, an entirely new image without a label goes through the model. For a human being, it is trivial to recognise the image as a car. The machine uses its previous knowledge to predict, likewise, that the image is a car.
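The collect / train / predict flow described above can be sketched in a few lines of Python. This is a hedged illustration using a nearest-centroid classifier on invented 2-D features (say, the size and speed of an object), not any particular library's pipeline:

```python
# Train: compute one centroid (mean feature vector) per class.
# Predict: assign the class whose centroid is closest.

def train(samples):
    sums, counts = {}, {}
    for features, label in samples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        for i, x in enumerate(features):
            sums[label][i] += x
    return {label: [s / counts[label] for s in sums[label]]
            for label in sums}

def predict(model, features):
    def dist(centroid):
        # Squared Euclidean distance to a class centroid.
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

training_data = [                       # invented (features, label) pairs
    ([4.0, 9.0], "car"), ([4.5, 8.0], "car"),
    ([0.5, 1.0], "bicycle"), ([0.7, 1.5], "bicycle"),
]
model = train(training_data)
print(predict(model, [4.2, 8.5]))       # an unseen example's features
```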


Automate Feature Extraction using DL
A dataset can contain a dozen to hundreds of features. The system will learn from the relevance of these features. However, not all features are meaningful for the algorithm. A crucial part of machine learning is to find a relevant set of features to make the system learn something.

One way to perform this part in machine learning is to use feature extraction. Feature extraction combines existing features to create a more relevant set of features. It can be done with PCA, t-SNE or any other dimensionality reduction algorithm.

For example, in image processing, the practitioner needs to extract features manually from the image, like the eyes, the nose, the lips and so on. Those extracted features are fed to the classification model.
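As a hedged sketch of what feature extraction buys you, the snippet below projects invented 2-D data points onto their direction of maximum variance (the first principal component), computed with a small power iteration rather than a library PCA:

```python
# Reduce 2-D points to one extracted feature: their projection onto the
# first principal component. Data and iteration count are illustrative.

def first_component(points, iterations=100):
    n = len(points)
    means = [sum(p[i] for p in points) / n for i in (0, 1)]
    centered = [(p[0] - means[0], p[1] - means[1]) for p in points]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(iterations):  # power iteration converges to the top eigenvector
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v, means

points = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1)]
direction, means = first_component(points)
# Each 2-D point collapses to a single extracted feature: its projection.
features = [(p[0] - means[0]) * direction[0] + (p[1] - means[1]) * direction[1]
            for p in points]
print(direction)
print([round(f, 2) for f in features])
```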

Artificial General Intelligence

However, current computers do extremely well on one set of tasks but perform miserably when the same algorithms are applied to another set. E.g. a computer proficient in playing Chess is clueless when playing Go, and a natural-language translator which is accurate while translating English fails when attempting the same on French. Also, their ability to use reasoning to infer answers from a set of observations is limited. In fact, they perform worse than humans when transferring knowledge or when using reasoning.

These computers need 2 attributes to match the intelligence of humans, i.e. Machine Reasoning and Transfer Learning. Machine Reasoning is "algebraically manipulating previously acquired knowledge in order to answer a new question."  

Transfer learning refers to the ability to transfer learned experience from one context to another. Today its role is limited to training algorithms on one set of data and using them to work on another set for the same problem.

Some terms--

Computer Vision – is a field of artificial intelligence that uses computer vision algorithms to mimic the way human vision acquires, processes, analyzes and understands visual information. It can use this real-world visual data to produce numerical or symbolic information and support decisions or take other actions.

Decision Model – is a set of rules used to understand and manage the logic behind business decisions. Typically involving the application of sophisticated algorithms to large quantities of data, decision modeling can be used to recommend a course of action and predict its outcomes.

Decision Tree – a tree and branch-based model, like a flow chart, used to map decisions and their possible consequences. The decision tree is widely used in machine learning for classification and regression algorithms.

Genetic Algorithm – Inspired by natural evolution, Genetic Algorithm is a class of optimization techniques where the best models go through a process of "population control" through a methodical cycle of fitness, selection, mutation and cross over. Genetic algorithms are one example of broader class of evolutionary algorithms.

Genetic Programming – refers to a subset of artificial intelligence in which computer programs are encoded as sets of genes that are adjusted using evolutionary algorithms. In this way, genetic programming follows Darwin’s principles of natural selection: the computer program works out which solutions are strongest and progresses those, discarding the weaker options.

Heuristic Search Techniques – are practical approaches to problem-solving that narrow down searches for optimal solutions by eliminating incorrect options. In the field of artificial intelligence, heuristic search techniques rank alternatives in search algorithms at each decision branch, using available information to decide which branch to follow.

Human-in-the-loop – refers to the process of inserting humans into machine learning processes to optimize outputs and boost accuracy. HITL is widely recognized as a best practice technique in machine learning: examples include Facebook’s photo recognition algorithm which invites users to confirm the identity of a photo’s subject when its confidence falls below a certain level. 

Predictive Analytics – describes the practice of using historical data to predict future outcomes. It combines mathematical models (or “predictive algorithms”) with historical data to calculate the likelihood (or degree to which) something will happen. Machine learning based predictive analytics has been around for a while. But until recently it has lacked three key features that are important to drive true marketing value: scale, speed, and application. 

Regression – algorithms used to predict values for new data based on training data fed into the system. Areas where regression in machine learning is used to predict future values include drug response modeling, marketing, real estate and financial forecasting.

Rules-based Algorithms – leverage a series of ‘if-then’ statements that utilize a set of assertions, from which rules are created dictating how to act upon those assertions. Rules-based algorithms enable intelligent and repeatable decision making. They are also used to store and manipulate knowledge.

Structured Data – refers to information with a high degree of organization, meaning that it can be seamlessly included in a relational database and quickly searched by straightforward search engine algorithms and/or other search operations. Structured data examples include dates, numbers, and groups of words and number “strings”. Machine-generated structured data is on the increase and includes sensor data and financial data.

Python is considered to be in first place in the list of AI development languages due to its simplicity. Python's syntax is very simple and can be easily learnt. Therefore, many AI algorithms can be easily implemented in it.

An algorithm is an unambiguous set of mathematical rules to solve a class of problems, which is the key to enabling AI software to problem-solve. For example, if you need to get from A to B on Google Maps, an algorithm exists within the software that will help you work out the fastest route, taking into account things like congestion etc.
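The A-to-B routing example can be sketched with Dijkstra's shortest-path algorithm on a toy road graph. The node names and travel times (minutes) below are invented; a real mapping service would also fold live congestion data into the edge costs:

```python
import heapq

# Dijkstra's algorithm: cheapest travel time from start to goal on a
# directed graph of (neighbour, minutes) edges.

def shortest_time(graph, start, goal):
    queue = [(0, start)]              # (accumulated minutes, node)
    best = {start: 0}
    while queue:
        minutes, node = heapq.heappop(queue)
        if node == goal:
            return minutes
        if minutes > best.get(node, float("inf")):
            continue                  # stale queue entry, skip it
        for neighbour, cost in graph[node]:
            candidate = minutes + cost
            if candidate < best.get(neighbour, float("inf")):
                best[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return None                       # goal unreachable

roads = {
    "A": [("B", 10), ("C", 3)],
    "B": [("D", 2)],
    "C": [("B", 4), ("D", 11)],
    "D": [],
}
print(shortest_time(roads, "A", "D"))  # fastest route is A -> C -> B -> D
```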


The four types of data analysis are: Descriptive Analysis. Diagnostic Analysis. Predictive Analysis. Prescriptive Analysis.
Major categories of modeling approaches are: – classical optimization techniques, – linear programming, – nonlinear programming, – geometric programming, – dynamic programming, – integer programming, – stochastic programming, – evolutionary algorithms, etc.

In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. 

Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators.

Evolutionary algorithms often perform well approximating solutions to all types of problems because they ideally do not make any assumption about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor. 

In fact, this computational complexity is due to fitness function evaluation. Fitness approximation is one of the solutions to overcome this difficulty. However, a seemingly simple EA can often solve complex problems; therefore, there may be no direct link between algorithm complexity and problem complexity.

Most ML algorithms require annotated text, images, speech, audio or video data. But, with the right resources and right amount of data, practitioners can leverage active learning. Active learning is the philosophy that “a machine learning algorithm can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns.” In order to choose the data from which it learns, an active learning-based AI can query humans in order to obtain more data.

Active learning in the real world is best thought of as a method of training ML algorithms, which means the technique may or may not be used in instances where ML drives artificial intelligence. In practice, the idea behind active learning is that data scientists can use poorly trained AI to help identify—through a Query Strategy, as outlined above—which pieces of data should be used to train a better version of that AI.

Human labelers are required for any sort of ML, but with Active Learning their work is significantly reduced by the machine selecting the most relevant data.
Active learning is a special case of machine learning in which a learning algorithm is able to interactively query the user (or some other information source) to obtain the desired outputs at new data points. In statistics literature it is sometimes also called optimal experimental design.
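A minimal sketch of the query step, assuming a toy one-dimensional model and an uncertainty-sampling strategy: the points closest to the decision boundary are the ones sent to a human for labeling. The pool, threshold and batch size are invented for illustration:

```python
# Uncertainty sampling: ask the human to label only the examples the
# model is least sure about. All values are illustrative assumptions.

def confidence(model_threshold, x):
    # Distance from the decision boundary as a crude confidence proxy.
    return abs(x - model_threshold)

def query_most_uncertain(model_threshold, unlabeled, k=2):
    # Pick the k points closest to the boundary for human labeling.
    return sorted(unlabeled, key=lambda x: confidence(model_threshold, x))[:k]

unlabeled_pool = [0.1, 0.45, 0.48, 0.52, 0.9]
threshold = 0.5
to_label = query_most_uncertain(threshold, unlabeled_pool)
print(to_label)  # the boundary cases, not the obvious ones
```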

Labeling faster vs. labeling smarter
To address the exploding need in quality annotations, a Human-in-the-Loop AI approach where a human annotator validates the output of a machine learning algorithm seems like a promising approach. Not only does it enable a faster process, it also helps with quality, since the human intervention helps make up for the inaccuracy of the algorithm. 

At its most basic, an algorithm simply tells a computer what to do next with an “and,” “or,” or “not” statement.   Think of it like math: it starts off pretty simple but becomes infinitely complex when expanded.

When chained together, algorithms – like lines of code – become more robust.  They’re combined to build AI systems like neural networks. Since algorithms can tell computers to find an answer or perform a task, they’re useful for situations where we’re not sure of the answer to a question or for speeding up data analysis.

Algorithms provide the instructions for almost any AI system you can think of:---

Motion detection no longer requires sensors thanks to algorithms
Facebook’s algorithms know how to advertise to you
Google’s algorithm determines what news you see first
There’s even an algorithm to simulate the human brain

---- and don’t forget about quantum computer algorithms



Bias can creep into algorithms in many ways. In a highly influential branch of AI known as "natural language processing," problems can arise from the "text corpus"—the source material the algorithm uses to learn about the relationships between different words.

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.


THERE IS NO INQUIRY AFTER A DRONE MASSACRE..  YOU FUNCTIONALLY HAVE SITUATIONS WHERE THE FOXES ARE GUARDING THE HENHOUSE

The only way to ensure ethical practices is through government regulation.

IBM surveillance tech was used by police forces in the Philippines where thousands have been killed in “extrajudicial executions” as part of a brutal war on drugs.
Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.

Fairness: AI systems should treat all people fairly
Inclusiveness: AI systems should empower everyone and engage people
Reliability & Safety: AI systems should perform reliably and safely
Transparency: AI systems should be understandable
Privacy & Security: AI systems should be secure and respect privacy
Accountability: AI systems should have algorithmic accountability
AI should:--

Be socially beneficial.
Avoid creating or reinforcing unfair bias.
Be built and tested for safety.
Be accountable to people.
Incorporate privacy design principles.
Uphold high standards of scientific excellence.
Be made available for uses that accord with these principles.

AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:--

Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
Transparency: The traceability of AI systems should be ensured.
Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

AI’s main limitation is that it learns from given data. There is no other way that knowledge can be integrated, unlike human learning. This means that any inaccuracies in the data will be reflected in the results.

No Improvement with Experience: Unlike humans, artificial intelligence cannot be improved with experience. ...

No Original Creativity: ...

Human bias and oversight in algorithms can cause undesired and even dangerous problems in AI systems.

US lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems — like facial recognition or ad targeting algorithms — for bias. The Algorithmic Accountability Act would ask the Federal Trade Commission to create rules for evaluating "highly sensitive" automated systems. 

Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers. The Algorithmic Accountability Act is aimed at major companies with access to large amounts of information. It would apply to companies that make over $50 million per year, hold information on at least 1 million people or devices, or primarily act as data brokers that buy and sell consumer data.

The bill is being introduced just a few weeks after Facebook was sued by the Department of Housing and Urban Development, which alleges its ad targeting system unfairly limits who sees housing ads. The sponsors mention this lawsuit in a press release, as well as an alleged Amazon AI recruiting tool that discriminated against women.

The new legislation would:--

Authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to conduct impact assessments of highly sensitive automated decision systems. This requirement would apply both to new and existing systems.

Require companies to assess their use of automated decision systems, including training data, for impacts on accuracy, fairness, bias, discrimination, privacy and security.

Require companies to evaluate how their information systems protect the privacy and security of consumers' personal information.

Require companies to correct any issues they discover during the impact assessments. The rules would apply to companies with annual revenue above $50 million as well as to data brokers and businesses with over a million consumers’ data.

AI is performing human-like tasks without having the clear legal accountability of one

AI has no sense of morality: it cannot instinctively distinguish between right and wrong, as it has no instinct. Pushing it through court proceedings would not act as supervised learning towards morality or desired behaviour and therefore would not impart justice. At best, AI could be given biases and weights towards particular outcomes. 

Moreover, considering how opaque and unpredictable, for example, neural networks are, could we ever really predict how a moral code would actually perpetually "behave" in real-world, ever-changing circumstances?

On capital markets, future unknown events that have not been experienced before and might lead to catastrophe are referred to as so-called "black swan" events. Some highly unlikely sequence of events may lead to algorithms behaving in an undesired way. 

Black swan scenarios also point us at another weakness of behaviour-based auditing of algorithms. These scenarios are dependent on an ecosystem of human and AI-based algorithms interacting, which means that to understand behaviour for one algorithm, the behaviour of all other algorithms that it can  interact with needs to be taken into account. 

And that would still only tell us something about functionality at one single point in time. Legally cordoning off such behaviour means that we declare that AI operates outside our control and we can't always be held responsible for it, sort of like a legal "act of god".

Legally, it is not always possible to pinpoint a specific cause after an event. This is not just because the code is complex and unpredictable so that it might be difficult to find a specific AI error, but also because there might be an interplay of variables at hand so that no element individually is at fault. 

When a ferry sank in 1987 off the port of Zeebrugge, the owner of the ship quickly blamed some of its personnel who had to monitor a particular area of the ship where the disaster seemingly started. 

Following an investigation however, the cause was determined to be an unlikely series of events and interactions that demonstrated a more systemic cause, ultimately leading to the board of directors (who were charged with corporate manslaughter) being told by the UK government that they had no understanding of their duties.


Applied to AI, Knight Capital's crash and burn comes to mind: When a trading algorithm programmer inadvertently set in motion the near-immediate bankruptcy of what had until that instant been the market leader... by toggling one boolean variable in a decrepit piece of code. Or did he?


In the case of capital markets, most rules now appear to have been written with ubiquitous algorithmic activity in mind, so that the few purely human operators now sometimes need to explain why they did not follow rules designed for AI! 

This is a marked difference from the early stages in the development of algorithmic accountability on capital markets, when regulators were stumped and, as I recall, were effectively fobbed off with "not sure why it does what it does, it's self-evolving" when querying nefarious stock market trades performed by AI.

Times have changed of course: until around 2008 regulators had a largely laissez-faire and collaborative approach to compliance, but changing government attitudes and behaviors have since led to rapidly increasing and cumulatively stricter regulatory demands on market activity. 

The path that regulators in the financial industry have chosen is one that focuses on transparency: Every action that is performed by a human or algorithm needs to have a clear reasoning behind it, and it is up to participants to demonstrate that conditions were appropriate for the activity. 

The regulators' idea appears to be that forced transparency, in combination with heavy-handed fines, will preclude algorithmic activities that can't stand the daylight. In other words, this is a mode of control where algorithmic behaviours are checked and audited. It would be even better if the focus of attention were not the algorithm's functioning, but the firm's.

Regardless of whether the rules make sense under all conditions, it is very important to follow these rules, as non-compliance is no longer a slap on the proverbial wrist, but instead a heavy blow that can put smaller firms out of business - not to mention the marked consequences of reputation damage when regulators send out press reports about your firm. Who really wants to "neither admit nor deny" wrong-doing, yet agree to some type of heavy penalty?

One proposal is to create an authority that oversees algorithmic functioning. The idea is to force transparency so that there is at least a guise of understanding of how decisions are made by algorithms. In doing so, proponents claim, the reasoning behind decisions will be fair and clear. 

We can't slice off a piece of human brain and understand what it does by shining a spotlight on the neurons, nor better control it for that matter. As I have argued above, what is important is what an algorithm does, and not how it does it nor how it behaves under laboratory conditions.

In trying to prevent black-boxed decision-making by creating transparency, the real boogeyman is left unquestioned: the owner. We should be questioning the effects that algorithms exert on our world on legal or natural entities' behalf, and praise or punish those who control them, not the poor algorithms who only do our bidding. 

Also, instead of aiming for transparency, regulators should aim for control: Legal owners of an algorithm need to be in control of what their algorithmic animal does, so that when problems occur they can take responsibility over their agent and limit the damage.

In summary, we should view AI as our agents, like animals that we train to work and act on our behalf. As such, we're herding algorithms. We're algorithm whisperers. Riding the algorithmic horse, selectively breeding for desirable labelling accuracy. 

They learn from us, either supervised or unsupervised, extending and replicating human behaviours so as to enact our desires. Responsibility lies with their management, and it is time that management accepted it.

Many leaders nowadays are making data-driven decisions,  but  machines cannot help them formulate strategies.

The question, therefore, is not whether or not we should work with AI but rather how to work with machines so that we, humans, remain in charge.

Algorithms are impacting our world in powerful but not easily discernible ways. Despite the grown-up jobs AI is taking on, algorithms continue to use childish logic drawn from biased or incomplete data.

We must have an FDA-type board where, before an algorithm is even released into usage, tests have been run to look at its impact. If the impact is in violation of existing laws, whether civil rights, human rights, or voting rights, then that algorithm cannot be released.

To understand how AI systems work, civil society needs access to the whole system: the raw training data, the algorithms that analyze it, and the decision-making models that emerge. Humans should remain an integral part of any algorithmic system. It is imperative to have humans in the loop.
AI has very poor judgement.

Artificial intelligence still isn't very intelligent. If you take an algorithm out of the specific context for which it was trained, it fails quite spectacularly. That's also the case when algorithms are poorly trained with biased or incomplete data, or data that doesn't prepare them for nuances. Because of AI's failings, human judgment will always have to be the ultimate authority.

Technology is like any other power. Without reason, without heart, it destroys us.

Facial recognition technology is everywhere. It’s used to clear or convict suspected criminals, board passengers onto planes, and even hire new employees. Friendly robots are being built to recognise our faces, while quick face scans can now unlock our smartphones.

Joy Buolamwini, a computer scientist at the Massachusetts Institute of Technology, founded the 'Algorithmic Justice League', a movement that aims to fight this kind of bias by advocating for more coding diversity. She ran into the bias herself when she sat in front of a computer that was able to recognise her colleagues' faces as faces; for Buolamwini, a Ghanaian-American, it didn't work at all. 

That's because the sets of human faces used to train these kinds of programmes are largely homogenous and only recognise certain races, hairstyles or features. Similarly, Alexa couldn't recognise voice commands from certain accents. 

Amazon used misguided machine learning (the same kind that Buolamwini encountered) to screen candidates, which ended up favouring men, especially white Jews, which is so much "in the face" in Israel. Lawmakers are starting to take notice: in the US, legislators have begun proposing bills to fight algorithmic bias. For advocates like Buolamwini, the race is on to fight these inherent biases in technology before they become even more pervasive.

Algorithms trained on historically biased data have significant error rates for communities of color, especially in over-predicting the likelihood that convicted criminals will reoffend, which can have serious implications for the justice system. 

The best way to detect bias in AI is by cross-checking the algorithm you are using to see if there are patterns that you did not necessarily intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset. 

One way you can test for this is by checking if there is any under- or overrepresentation in your data. If you detect a bias in your testing, then you must overcome it by adding more information to supplement that underrepresentation. 
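The representation check described above can be sketched in a few lines. This is a minimal illustration, not a standard tool: the labels and the 10% threshold are purely illustrative assumptions.

```python
# Check a labelled dataset for under- or overrepresentation before training.
from collections import Counter

def representation_report(labels, min_share=0.10):
    """Return the share of each class and flag classes below min_share."""
    counts = Counter(labels)
    total = len(labels)
    shares = {cls: n / total for cls, n in counts.items()}
    underrepresented = [cls for cls, s in shares.items() if s < min_share]
    return shares, underrepresented

# Illustrative labels for a face dataset skewed toward one group.
labels = ["light"] * 90 + ["dark"] * 10 + ["other"] * 5
shares, flagged = representation_report(labels)
# Flagged classes would then need supplementary data before training.
```

If a class appears in the flagged list, that is the signal to add more information to supplement the underrepresentation before the model is trained.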

While AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there are any gaps in the datasets or oversight that led to a mistake.  This exact issue was documented in a study where a hospital was using machine learning to predict the risk of death from pneumonia. 

The algorithm came to the conclusion that patients with asthma were less likely to die from pneumonia than patients without asthma. Based on this data, hospitals could decide that it was less critical to hospitalize patients with both pneumonia and asthma, given that those patients appeared to have a higher likelihood of recovery. 

However, the algorithm overlooked another important insight, which is that those patients with asthma typically receive faster and more intensive care than other patients, which is why they have a lower mortality rate connected to pneumonia. Had the hospital blindly trusted the algorithm, they may have incorrectly assumed that it’s less critical to hospitalize asthmatics, when in reality they actually require even more intensive care.

As detailed in the asthma example, if biases in AI are not properly identified, the difference can quite literally be life and death. The use of AI in areas like criminal justice can also have devastating consequences if left unchecked. 

Another less-talked about consequence is the potential of more regulation and lawsuits surrounding the AI industry. Real conversations must be had around who is liable if something goes terribly wrong. 

For instance, is it the doctor who relies on the AI system that made the decision resulting in a patient’s death, or the hospital that employs the doctor? Is it the AI programmer who created the algorithm, or the company that employs the programmer?

Additionally, the “witness” in many of these incidents cannot even be cross-examined since it’s often the algorithm itself. And to make things even more complicated, many in the industry are taking the position that algorithms are intellectual property, therefore limiting the court's ability to question programmers or attempt to reverse-engineer the program to find out what went wrong in the first place.

 These are all important discussions that must be had as AI continues to transform the world we live in. If we allow this incredible technology to continue to advance but fail to address questions around biases, our society will undoubtedly face a variety of serious moral, legal, practical and social consequences.  It’s important we act now to mitigate the spread of biased or inaccurate technologies.

Criminal justice algorithms are generally relatively simple and produce scores from a small number of inputs such as age, offense, and prior convictions. But their developers have sometimes restricted government agencies using their tools from releasing information about their design and performance. Jurisdictions haven’t allowed outsiders access to the data needed to check their performance.

Meanwhile, companies like Amazon, Microsoft, and IBM also develop and sell “emotion recognition” algorithms, which claim to identify a person’s emotions based on their facial expressions and movements. 

But experts on facial expression know that it is impossible for these algorithms to detect emotions based on facial expressions and movements alone. Artificial intelligence shows up in courtrooms too, in the form of "risk assessments": algorithms that predict whether someone is at high "risk" of not showing up for court or of getting re-arrested. Studies have found that these algorithms are often inaccurate and based on flawed data.



Cognizant trained the neural network to use comparative algorithms for telling the good checks from the bad. The DML model identifies potential counterfeits in real time by comparing various factors on scans of deposited checks to those in the historical database. Each of the deposited checks is given a confidence level, marking it as fraudulent, good, or in need of further review.

Generally speaking, ML-based fraud detection systems use complex algorithms that are trained on specific datasets. They keep learning from scenarios presented to them, and recognize, make suggestions about, and act upon patterns in the data.

Several kinds of predictive analytics techniques are widely used in ML fraud detection systems. Logistic regression analysis measures the strength of cause-and-effect relationships in structured datasets and assesses the predictive capabilities of variables and combinations of variables in the set. Fraudulent and authentic transactions are compared to create an algorithm that then predicts whether a new transaction is fraudulent.
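The logistic regression approach just described can be sketched in pure Python: fraudulent (1) and authentic (0) transactions are compared to fit a model that then scores new transactions. The single "amount" feature, the tiny dataset, and the learning rate are illustrative assumptions, not a production fraud system.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit weight w and bias b by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # gradient of the log-loss for one example
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Scaled transaction amounts labelled 1 = fraudulent, 0 = authentic.
amounts = [0.1, 0.2, 0.3, 2.5, 3.0, 3.5]
labels  = [0,   0,   0,   1,   1,   1]
w, b = train_logistic(amounts, labels)
prob_fraud = sigmoid(w * 3.2 + b)   # score a new large transaction
prob_ok = sigmoid(w * 0.15 + b)     # score a new small transaction
```

The fitted model assigns each new transaction a probability of being fraudulent, which is exactly the "predicts whether a new transaction is fraudulent" step above.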

Decision tree analysis leverages data classification algorithms to figure out potential risks and reviews of various actions. The model presents possible outcomes through a flowchart that uses a tree-like structure to help people visualize and understand the analysis. 

Most exciting, for those who hope to reduce fraudulent activity even further, is that we are now seeing a new generation of algorithms that are based on the way people think. These are known as Convolutional Neural Networks and are based on the visual cortex, which is a small segment of cells that are sensitive to specific regions of the visual field in the human body. 



In effect, these neural networks use images directly as input, functioning in the same manner as the visual cortex. This means that they are able to extract elementary visual features like oriented edges, end-points and corners.

This new development in AI makes algorithms that were already intelligent even smarter. This technology can study the spending data of an individual and be able to determine, based on this information, whether they performed the most recent transaction on their credit card or if someone else was using their credit card data. 

Significant potential lies in the ability of neural networks to learn relationships from modeled data. Implementing this type of solution to curb cybercrime, for example, will reduce the economic losses drastically.

Computers can learn on their own if given a few simple instructions. That's really all that algorithms are: mathematical instructions. An algorithm is a step-by-step procedure for calculations.

Algorithms are used for calculation, data processing, and automated reasoning. Whether you are aware of it or not, algorithms are becoming a ubiquitous part of our lives.

To make a computer do anything, you have to write a computer program. To write a computer program, you have to tell the computer, step by step, exactly what you want it to do. The computer then ‘executes’ the program, following each step mechanically, to accomplish the end goal. 

When you are telling the computer what to do, you also get to choose how it’s going to do it. That’s where computer algorithms come in. The algorithm is the basic technique used to get the job done.
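The "step by step" idea above can be made concrete with a classic textbook example, Euclid's algorithm for the greatest common divisor, chosen here purely as an illustration:

```python
# Euclid's algorithm: a step-by-step procedure the computer
# executes mechanically until the end goal is reached.
def gcd(a, b):
    # While b is not zero, replace (a, b) with (b, a mod b).
    while b != 0:
        a, b = b, a % b
    return a
```

Each pass of the loop is one mechanical step; the computer simply repeats the instruction until the remainder is zero.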
The only point that explanation gets wrong is that you have to tell a computer “exactly what you want it to do” step by step. 

Rather than follow only explicitly programmed instructions, some computer algorithms are designed to allow computers to learn on their own (i.e., facilitate machine learning). Today's internet is ruled by algorithms. These mathematical creations determine what you see in your Facebook feed, what movies Netflix recommends to you, and what ads you see in your Gmail.

As mathematical equations, algorithms are neither good nor evil. Clearly, however, people with both good and bad intentions have used algorithms. Algorithms are now integrated into our lives. On the one hand, they are good because they free up our time and do mundane processes on our behalf. 

The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It’s also about how models are being used to predict the future. 

There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn’t blame our tools.

Algorithms are nothing new. As noted above, they are simply mathematical instructions. Their use in computers can be traced back to one of the giants of computational theory: Alan Turing. Turing became famous during the Second World War because he helped break the Enigma code. Sadly, Turing took his own life two years after publishing his landmark work on morphogenesis.

In the last years of Alan Turing’s life he saw his mathematical dream — a programmable electronic computer — sputter into existence from a temperamental collection of wires and tubes. Back then it was capable of crunching a few numbers at a snail’s pace. 

Today, the smartphone in your pocket is packed with computing technology that would have blown his mind. It’s taken almost another lifetime to bring his biological vision into scientific reality, but it’s turning out to be more than a neat explanation and some fancy equations.

Although Turing’s algorithms have been useful in identifying how patterns emerge in nature, other correlations generated by algorithms have been more suspect.

Algorithms can make systems smarter, but not without adding a little common sense into the equation.

Italian researchers recently developed the first functioning quantum neural network by running a special algorithm on an actual quantum computer.




Quantum computers are expected to play a crucial role in machine learning, including the key task of accessing more computationally complex feature spaces: the fine-grain aspects of data that could lead to new insights.   

As quantum computers become more powerful in the years to come and their Quantum Volume increases, they will be able to perform feature mapping, a key component of machine learning, on highly complex data structures at a scale far beyond the reach of even the most powerful classical computers. 

Feature mapping is a way of disassembling data to get access to finer-grain aspects of that data. Both classical and quantum machine learning algorithms can break down a picture, for example, by pixels and place them in a grid based on each pixel’s color value. 

From there the algorithms map individual data points non-linearly to a high-dimensional space, breaking the data down according to its most essential features. In the much larger quantum state space, we can separate aspects and features of that data better than we could in a feature map created by a classical machine-learning algorithm. Ultimately, the more precisely that data can be classified according to specific characteristics, or features, the better the AI will perform.
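The non-linear mapping described above can be illustrated with a small classical sketch. This is a classical polynomial feature map used only as an analogy: quantum feature maps embed data into quantum state space instead, and the circle-shaped toy data are an illustrative assumption.

```python
# Map each 2-D point non-linearly into a higher-dimensional space
# where classes that were entangled become easier to separate.
def feature_map(x1, x2):
    """Map a 2-D point into a 5-D polynomial feature space."""
    return (x1, x2, x1 * x1, x2 * x2, x1 * x2)

# Points on a circle of radius 1 vs radius 2 are not linearly
# separable in 2-D, but in the mapped space the x1^2 + x2^2
# coordinates separate them with a simple threshold.
inner = feature_map(1.0, 0.0)   # a point at radius 1
outer = feature_map(0.0, 2.0)   # a point at radius 2

def radius_sq(f):
    return f[2] + f[3]          # x1^2 + x2^2 in the mapped space
```

In the larger feature space, a single linear cut on `radius_sq` classifies the two rings, which is the essence of why richer feature maps help a classifier.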

The goal is to use quantum computers to create new classifiers that generate more sophisticated data maps. In doing that, researchers will be able to develop more effective AI that can, for example, identify patterns in data that are invisible to classical computers.

We’ve developed a blueprint with new quantum data classification algorithms and feature maps. That’s important for AI because, the larger and more diverse a data set is, the more difficult it is to separate that data out into meaningful classes for training a machine learning algorithm. 

Bad classification resulting from the machine learning process could introduce undesirable outcomes.

Today's quantum computers struggle to keep their qubits in a quantum state for more than a few hundred microseconds, even in a highly controlled laboratory environment. That's significant because qubits need to remain in that state for as long as possible in order to perform calculations. 


We are still far off from achieving Quantum Advantage for machine learning—the point at which quantum computers surpass classical computers in their ability to perform AI algorithms. 

Algorithms are often grouped by similarity in terms of their function (how they work). For example, tree-based methods, and neural network inspired methods.

There are algorithms that could just as easily fit into multiple categories, like Learning Vector Quantization, which is both a neural network inspired method and an instance-based method. There are also categories with the same name describing both the problem and the class of algorithm, such as Regression and Clustering.

We could handle these cases by listing algorithms twice or by selecting the group that is subjectively the "best" fit.

Regression Algorithms  Regression is concerned with modeling the relationship between variables that is iteratively refined using a measure of error in the predictions made by the model.

The key objective of regression-based tasks is to predict output labels or responses, which are continuous numeric values, for the given input data. The output will be based on what the model has learned in the training phase. 

Basically, regression models use the input data features (independent variables) and their corresponding continuous numeric output values (dependent or outcome variables) to learn specific association between inputs and corresponding outputs.

Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to the class of problem and the class of algorithm.

The most popular regression algorithms are:--

Ordinary Least Squares Regression (OLSR)
Linear Regression
Logistic Regression
Stepwise Regression
Multivariate Adaptive Regression Splines (MARS)
Locally Estimated Scatterplot Smoothing (LOESS)
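The first entry in the list above, Ordinary Least Squares, can be sketched in pure Python for the one-feature case: fit y = w*x + b by minimizing squared prediction error, using the closed-form solution. The tiny dataset is an illustrative assumption.

```python
def ols_fit(xs, ys):
    """Closed-form Ordinary Least Squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]   # roughly y = 2x + 1, with noise
w, b = ols_fit(xs, ys)
```

The fitted (w, b) pair is the "specific association between inputs and corresponding outputs" that the regression model learns.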

Instance-based Algorithms   Instance-based learning model is a decision problem with instances or examples of training data that are deemed important or required to the model.

Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and similarity measures used between instances.

In machine learning, instance-based learning (sometimes called memory-based learning) is a family of learning algorithms that, instead of performing explicit generalization, compares new problem instances with instances seen in training, which have been stored in memory

The most popular instance-based algorithms are:--

k-Nearest Neighbor (kNN)
Learning Vector Quantization (LVQ)
Self-Organizing Map (SOM)
Locally Weighted Learning (LWL)
Support Vector Machines (SVM)
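The k-Nearest Neighbor method at the top of the list above can be sketched directly: store the training instances, compare a new point by distance, and let the k closest instances vote. The 2-D points and k=3 are illustrative assumptions.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs.
    Returns the majority label of the k instances nearest to query."""
    def dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # Compare the new instance to the stored database by similarity.
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

Note that nothing is generalized at training time: the "model" is simply the stored instances plus the similarity measure, which is exactly what makes these methods memory-based.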



Regularization Algorithms -- An extension made to another method (typically regression methods) that penalizes models based on their complexity, favoring simpler models that are also better at generalizing. 

Regularization is a technique which makes slight modifications to the learning algorithm such that the model generalizes better. This in turn improves the model's performance on the unseen data as well

The most popular regularization algorithms are:--

Ridge Regression
Least Absolute Shrinkage and Selection Operator (LASSO)
Elastic Net
Least-Angle Regression (LARS)
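The penalty idea behind Ridge Regression in the list above can be shown in the simplest possible setting: least squares on centred data plus an L2 penalty that shrinks the slope toward zero, favouring a simpler model. One feature, no intercept; the data and penalty strength are illustrative assumptions.

```python
def ridge_fit(xs, ys, alpha):
    """Fit y = w*x on centred data with penalty alpha * w^2.
    Minimizing sum (y - w*x)^2 + alpha*w^2 gives the closed form
    w = sum(x*y) / (sum(x^2) + alpha)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [-4.0, -2.0, 2.0, 4.0]               # exactly y = 2x
w_plain = ridge_fit(xs, ys, alpha=0.0)    # no penalty: ordinary OLS
w_shrunk = ridge_fit(xs, ys, alpha=10.0)  # penalty shrinks the slope
```

With alpha = 0 the fit is ordinary least squares; raising alpha pulls the coefficient toward zero, which is the "slight modification" that helps the model generalize.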

Decision Tree Algorithms   Decision tree methods construct a model of decisions made based on actual values of attributes in the data. The decision tree algorithm tries to solve the problem, by using tree representation. Each internal node of the tree corresponds to an attribute, and each leaf node corresponds to a class label

Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.


The most popular decision tree algorithms are:--

Classification and Regression Tree (CART)
Iterative Dichotomiser 3 (ID3)
C4.5 and C5.0 (different versions of a powerful approach)
Chi-squared Automatic Interaction Detection (CHAID)
Decision Stump
M5
Conditional Decision Trees
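The Decision Stump in the list above is the smallest possible decision tree: a single fork on one attribute. A minimal sketch, with toy data as an illustrative assumption, shows how the best threshold is chosen:

```python
def best_stump(xs, ys):
    """Return (threshold, accuracy) for the rule: predict 1 if x >= t.
    Tries every observed value of the attribute as a candidate fork."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        preds = [1 if x >= t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
t, acc = best_stump(xs, ys)
```

Full decision tree algorithms such as CART repeat this kind of split recursively, forking on the attribute and threshold that best separate the classes at each internal node.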



Bayesian Algorithms Bayesian methods are those that explicitly apply Bayes’ Theorem for problems such as classification and regression. It is a family of algorithms where all of them share a common principle, i.e. every pair of features being classified is independent of each other. Naive Bayes classifiers are a collection of classification algorithms based on Bayes' Theorem

The most popular Bayesian algorithms are:--

Naive Bayes
Gaussian Naive Bayes
Multinomial Naive Bayes
Averaged One-Dependence Estimators (AODE)
Bayesian Belief Network (BBN)
Bayesian Network (BN)
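The Naive Bayes classifier at the top of the list above can be sketched in pure Python: class priors multiplied by per-word likelihoods, with every feature treated as independent of every other. The tiny "spam" word data and the Laplace smoothing constant are illustrative assumptions.

```python
from collections import Counter

def train_nb(docs):
    """docs: list of (words, label). Returns class priors and word counts."""
    priors = Counter(label for _, label in docs)
    word_counts = {label: Counter() for label in priors}
    for words, label in docs:
        word_counts[label].update(words)
    return priors, word_counts

def predict_nb(priors, word_counts, words):
    total = sum(priors.values())
    vocab = len(set(w for c in word_counts.values() for w in c))
    best_label, best_score = None, 0.0
    for label in priors:
        score = priors[label] / total          # P(class)
        n_words = sum(word_counts[label].values())
        for w in words:
            # Laplace (+1) smoothing avoids zero probabilities.
            score *= (word_counts[label][w] + 1) / (n_words + vocab)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [(["win", "cash", "now"], "spam"),
        (["cash", "prize"], "spam"),
        (["meeting", "tomorrow"], "ham"),
        (["lunch", "tomorrow"], "ham")]
priors, counts = train_nb(docs)
```

The "naive" independence assumption is visible in the inner loop: each word's probability is multiplied in without regard to any other word.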



Association Rule Learning Algorithms   Association rule learning methods extract rules that best explain observed relationships between variables in data.

These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organization.

The most popular association rule learning algorithms are:--

Apriori algorithm
Eclat algorithm
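The core of the Apriori algorithm listed above is counting itemset frequencies and keeping those that meet a minimum support; the full algorithm extends this level by level to larger itemsets. A minimal sketch for pairs only, with basket data and a support threshold as illustrative assumptions:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Return item pairs whose support (fraction of baskets containing
    both items) meets min_support."""
    pair_counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(set(basket)), 2):
            pair_counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in pair_counts.items()
            if c / n >= min_support}

baskets = [["bread", "milk"],
           ["bread", "milk", "eggs"],
           ["bread", "milk"],
           ["milk", "eggs"]]
rules = frequent_pairs(baskets, min_support=0.5)
```

Associations that clear the support threshold, such as bread and milk being bought together, are the "commercially useful associations" an organization can exploit.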

Artificial Neural Network Algorithms   Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks.

They are a class of pattern-matching methods commonly used for regression and classification problems, but are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types.



Dimensional Reduction Algorithms   Like clustering methods, dimensionality reduction seeks and exploits the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe data using less information.


This can be useful to visualize high-dimensional data or to simplify data which can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression.

Principal Component Analysis (PCA)
Principal Component Regression (PCR)
Partial Least Squares Regression (PLSR)
Sammon Mapping
Multidimensional Scaling (MDS)
Projection Pursuit
Linear Discriminant Analysis (LDA)
Mixture Discriminant Analysis (MDA)
Quadratic Discriminant Analysis (QDA)
Flexible Discriminant Analysis (FDA)
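Principal Component Analysis, the first entry in the list above, can be sketched for 2-D data: centre the points, build the covariance matrix, find its dominant eigenvector by power iteration, and project the data onto that single direction. The toy points and the iteration count are illustrative assumptions.

```python
def pca_1d(points, iters=100):
    """Reduce 2-D points to their 1-D principal-axis scores."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    # Power iteration converges to the dominant eigenvector,
    # i.e. the direction of greatest variance (the principal axis).
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        nx, ny = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    # Project each centred point onto the axis: a 1-D summary.
    return [x * vx + y * vy for x, y in centred], (vx, vy)

points = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8)]
scores, axis = pca_1d(points)
```

The points lie roughly along the line y = x, so the recovered axis is close to the diagonal, and the 1-D scores preserve most of the information in the original 2-D data.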



Ensemble Algorithms Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.

Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular.

Boosting
Bootstrapped Aggregation (Bagging)
AdaBoost
Weighted Average (Blending)
Stacked Generalization (Stacking)
Gradient Boosting Machines (GBM)
Gradient Boosted Regression Trees (GBRT)
Random Forest
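Bootstrapped Aggregation (bagging) from the list above can be sketched by training several weak learners on bootstrap resamples of the data and combining their predictions by majority vote, which is the core idea behind Random Forests. The stump weak learner, the toy data, and the fixed random seed are illustrative assumptions.

```python
import random
from collections import Counter

def train_stump(data):
    """Weak learner: best 'predict 1 if x >= t' threshold."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(x for x, _ in data)):
        acc = sum((1 if x >= t else 0) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagged_predict(data, x, n_models=15, seed=0):
    """Train n_models stumps on bootstrap resamples; majority vote."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]   # bootstrap resample
        t = train_stump(sample)
        votes[1 if x >= t else 0] += 1
    return votes.most_common(1)[0][0]

data = [(1, 0), (2, 0), (3, 0), (10, 1), (11, 1), (12, 1)]
```

Each resample trains a slightly different weak learner; combining their votes averages away individual errors, which is why this class of techniques is so powerful.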
Other Machine Learning Algorithms

Algorithms from specialty tasks in the process of machine learning, such as:--

Feature selection algorithms
Algorithm accuracy evaluation
Performance measures
Optimization algorithms

Algorithms from specialty subfields of machine learning, such as:--

Computational intelligence (evolutionary algorithms, etc.)
Computer Vision (CV)
Natural Language Processing (NLP)
Recommender Systems
Reinforcement Learning
Graphical Models


To get the most value out of Big Data, other Machine Learning tools and processes that leverage various algorithms include:--

Comprehensive data quality and management
GUIs for building models and process flows
Interactive data exploration and visualization of model results
Comparisons of different Machine Learning models to quickly identify the best one
Automated ensemble model evaluation to identify the best performers
Easy model deployment so you can get repeatable, reliable results quickly
An integrated end-to-end platform for the automation of the data-to-decision process

Whether you realize it or not, Machine Learning is one of the most important technology trends—it underlies so many things we use. Speech recognition, Amazon and Netflix recommendations, fraud detection, and financial trading are a few examples of Machine Learning commonly in use in today’s data-driven world.

Machine Learning is increasingly touching more aspects of our everyday lives. This also means that there are many lucrative Machine Learning careers available. If you want to get in on the action, we have the resources to help you get there.

A machine learner based on decision trees or Bayesian networks is much more transparent to programmer inspection , which may enable an auditor to discover that the AI algorithm uses the address information of applicants who were born or previously  resided in predominantly poverty-stricken areas.

Responsibility, transparency, auditability, incorruptibility, predictability, and a tendency to not make innocent victims scream with helpless frustration: all criteria that apply to humans performing social functions; all criteria that must be considered in an algorithm intended to replace human judgment of social functions; all criteria that may not appear in a journal of machine learning considering how an algorithm scales up to more computers.

This list suggests that Artificial Intelligence falls short of human capabilities in some critical sense, even though AI algorithms have beaten humans in many specific domains such as chess. AI algorithms with human-equivalent or superior performance are characterized by a deliberately programmed competence only in a single, restricted domain. 

Deep Blue  became the world champion at chess, but it cannot even play checkers, let alone drive a car or make a scientific discovery.

Algorithms in each category, in essence, perform the same task of predicting outputs given unknown inputs, however, here data is the key driver when it comes to picking the right algorithm.

What follows is an outline of categories of Machine Learning problems with a brief overview of the same:--

Classification
Regression
Clustering


Classification Algorithms
Classification, as the name suggests, is the act of dividing the dependent variable (the one we try to predict) into classes and then predicting a class for a given input. It falls into the category of Supervised Machine Learning, where the data set needs to have the classes to begin with.

Thus, classification comes into play at any place where we need to predict an outcome, from a set number of fixed, predefined outcomes.

Classification uses an array of algorithms, a few of them listed below--

Naive Bayes
Decision Tree
Random Forest
Logistic Regression
Support Vector Machines
K Nearest Neighbours


Naive Bayes

The Naive Bayes algorithm follows the Bayes theorem, which, unlike all the other algorithms in this list, follows a probabilistic approach. This essentially means that, instead of jumping straight into the data, the algorithm starts with a set of prior probabilities for each of the classes of your target.

The Backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called the delta rule or gradient descent. The weights that minimize the error function are then considered to be a solution to the learning problem. 

The objective of the backpropagation algorithm is to provide a learning procedure for multilayer feedforward neural networks, so that the network can be trained to capture a mapping implicitly. Back-propagation is just a way of propagating the total loss back into the neural network to determine how much of the loss each node is responsible for, and then updating the weights so as to minimize the loss, adjusting each weight in proportion to its contribution to the error. 

Backpropagation is a neural network learning algorithm. Roughly speaking, a neural network is a set of connected input/output units in which each connection has a weight associated with it. Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. 


Backpropagation algorithms are a family of methods used to efficiently train artificial neural networks (ANNs) following a gradient-based optimization algorithm that exploits the chain rule. The main feature of backpropagation is its iterative, recursive and efficient method for calculating the weights updates to improve the network until it is able to perform the task for which it is being trained. It is closely related to the Gauss–Newton algorithm.

Backpropagation requires the derivatives of activation functions to be known at network design time. Automatic differentiation is a technique that can automatically and analytically provide the derivatives to the training algorithm. 

In the context of learning, backpropagation is commonly used by the gradient descent optimization algorithm to adjust the weight of neurons by calculating the gradient of the loss function; backpropagation computes the gradient(s), whereas (stochastic) gradient descent uses the gradients for training the model (via optimization).
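The chain-rule bookkeeping described above can be sketched on the smallest useful network: one input, one hidden sigmoid neuron, one sigmoid output, trained on a squared loss. The network size, the toy data, and the learning rate are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=5000):
    w1, b1, w2, b2 = 0.5, 0.0, 0.5, 0.0   # input->hidden, hidden->output
    for _ in range(epochs):
        for x, y in data:
            # forward pass
            h = sigmoid(w1 * x + b1)
            out = sigmoid(w2 * h + b2)
            # backward pass: chain rule from loss (out - y)^2
            # back through the output unit, then the hidden unit
            d_out = 2 * (out - y) * out * (1 - out)   # dLoss/d(z2)
            d_h = d_out * w2 * h * (1 - h)            # dLoss/d(z1)
            # gradient descent step on each weight
            w2 -= lr * d_out * h
            b2 -= lr * d_out
            w1 -= lr * d_h * x
            b1 -= lr * d_h
    return w1, b1, w2, b2

# Learn the sign of x: negative inputs -> 0, positive inputs -> 1.
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w1, b1, w2, b2 = train(data)

def predict(x):
    return sigmoid(w2 * sigmoid(w1 * x + b1) + b2)
```

The two `d_` terms are the backpropagated gradients; the four subtractions are the (stochastic) gradient descent updates that use them, exactly the division of labour described in the paragraph above.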

Augmented Intelligence fuses technology with human expertise. The role of AI may become greater in time, but the state of technology still requires a human element — if for nothing else than to tag and train our algorithms and make them iteratively smarter. 

Human Review. To better inform the algorithms, there needs to be a continuous feedback loop where every ID (and face matching image pair) is labeled as pass or fail. When only a small fraction of transactions is reviewed by humans (as is often the case with automated solutions), this limits the ability of deep learning.

Cloud Algorithms

The term algorithm is currently making a meteoric rise to fame. A geeky term that was previously confined to the world of mathematicians and software engineers is making its way into the mainstream, as people are increasingly recognizing the material impact on society that algorithms are starting to have. 

Algorithms that used to be buried away inside computer program files, used to find the derivative of a slope or the shortest path between two locations, have today expanded to almost all areas of human activity. 

Algorithms for determining the value of a basketball player based upon a computerized analysis of his performance last season. Algorithms that analyze incoming customer service calls and route them to the most appropriate agent. 

Algorithms determining the likelihood of a convict reoffending, for analyzing insurance claims, for coordinating the nightly maintenance on a mass transit system, for driving cars, identifying symptoms. 

Algorithms to determine which candidate a company should hire, who should we recommend as a friend on social media or what films, books or music would someone like. 

And of course, algorithms have taken over financial markets, now making up 70% of trades, as stock markets have become layers upon layers of algorithms. An algorithm is a set of instructions for performing a certain operation. An algorithmic system takes an input and transforms it into a set of operations to create an output. 

Algorithms are being transformed from the mechanistic linear form of the past, where we prespecified all the rules, hand-coded them with the end result looking like cogs in a gearbox, to today where algorithms take a more networked form, they are self-organizing and learn from data. These new forms of algorithms take many different names from cognitive systems to artificial intelligence, to machine learning.

These advanced algorithms, unlike the static mechanical models of the past, are adaptive in nature: They may learn as information changes, and as goals and requirements evolve. They may resolve ambiguity and tolerate unpredictability. 

They may be engineered to feed on dynamic data in real time, or near real time they are amenable to the processing of unstructured data, the processing of millions of parameters and complex patterns. Such as speech recognition, sentiment analysis, face detection, risk assessment, fraud detection, behavioral recommendations. 

This means these advanced analytical methods are no longer confined to mathematical operations but can handle more unstructured human-like activities such as many basic services.

Algorithms are shortcuts people use to tell computers what to do. At its most basic, an algorithm simply tells a computer what to do next with an “and,” “or,” or “not” statement. Think of it like math: it starts off pretty simple but becomes infinitely complex when expanded. 

When chained together, algorithms – like lines of code – become more robust. They’re combined to build AI systems like neural networks. Since algorithms can tell computers to find an answer or perform a task, they’re useful for situations where we’re not sure of the answer to a question or for speeding up data analysis.

As an example, imagine you have to sort through a million files for the word “blue.” Even if it only took you one second per file, you’d have to sort for over 11 days straight without stopping to sleep, eat, or use the loo. 

But, if you taught a computer to recognize the word “blue” using an algorithm, it could do the work for you – and given enough processing power and proper algorithmic-tuning, it could probably accomplish the task in a few seconds.
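The search described above can be sketched in a few lines, assuming for illustration that the "files" are just in-memory strings:

```python
# "Files" represented as in-memory strings for this sketch.
documents = [
    "the sky is blue today",
    "grass is green",
    "deep blue sea",
    "red sunset",
]

def find_matches(docs, word):
    """Return the indices of the documents that contain the given word."""
    return [i for i, text in enumerate(docs) if word in text.split()]

matches = find_matches(documents, "blue")   # -> [0, 2]
```

A real system would stream files from disk, but the algorithmic shortcut -- let the machine apply the same test to every document -- is the same.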

That’s what algorithms provide for society: a shortcut to getting a computer to do something it normally couldn’t. Algorithms provide the instructions for almost any AI system you can think of:

• Motion detection no longer requires sensors, thanks to algorithms
• Facebook’s algorithms know how to advertise to you
• Google’s algorithm determines what news you see first 

Algorithms save humans time by giving computers the necessary tools to perform functions that can’t be hard coded.

A programming algorithm is a computer procedure that is a lot like a recipe (called a procedure) and tells your computer precisely what steps to take to solve a problem or reach a goal.

A well-chosen algorithm makes sure the computer will do the given task in the best possible manner. In cases where efficiency matters, a proper algorithm is vital. An algorithm is important in optimizing a computer program according to the available resources.

The dynamic programming approach is similar to divide and conquer in breaking down the problem into smaller and yet smaller sub-problems. Before solving the sub-problem in hand, a dynamic algorithm will examine the results of the previously solved sub-problems. 
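This reuse of already-solved sub-problems can be sketched with memoized Fibonacci numbers, using Python's functools.lru_cache as the table of stored results:

```python
from functools import lru_cache

@lru_cache(maxsize=None)           # remember every solved sub-problem
def fib(n):
    # each fib(k) is computed once, then reused from the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

value = fib(50)   # instantaneous with memoization; infeasible without it
```

Without the cache, the same sub-problems are recomputed exponentially many times; with it, each fib(k) is solved exactly once.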

Algorithm in programming: in programming, an algorithm is a set of well-defined instructions, executed in sequence, to solve a problem.

An algorithm is a recipe, or set of general instructions for how to perform some operation or function (e.g., a type of sort, or computing a floating point multiplication, or creating a hash code or using it to look up a value in a dictionary implemented as a hash table.)


Binary is a representation or code for information or data.

An algorithm is a set of rules for performing some function or finding new information from the information you already have.
As quantum computers grow more powerful, common encryption algorithms become obsolete. Today, most data encryption security depends on the difficulty of factorizing (or breaking up) large numbers into primes. 

To break a private key or crack an encryption method, factorization algorithms must painstakingly attempt to make divisions by successive numbers. While the task can be completed by today’s supercomputers, it would make no financial sense to use them. The estimated time that a conventional computer would need to break a 4096-bit RSA key would exceed the time that has passed since the formation of our galaxy!
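The "divisions by successive numbers" can be sketched as plain trial division; the toy modulus below (3233 = 53 × 61) stands in for an RSA key thousands of bits larger:

```python
def factorize(n):
    """Trial division: try successive divisors up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:    # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                # whatever remains is itself prime
        factors.append(n)
    return factors

factors = factorize(3233)    # 3233 = 53 * 61
```

Trial division runs in time proportional to the square root of n, which is exactly why it is hopeless against a 4096-bit modulus.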

Biological hardware (learning rules) is designed to deal with asynchronous inputs and refine their relative information.  In contrast, traditional artificial intelligence algorithms are based on synchronous inputs, hence the relative timing of different inputs constituting the same frame is typically ignored.

Algorithmic bias: Machine-learning algorithms identify patterns in data and codify them in predictions, rules and decisions. If those patterns reflect some existing bias, the algorithms are likely to amplify that bias and may produce outcomes that reinforce existing patterns of discrimination.

Overestimating the capabilities of AI: Since AI systems do not understand the tasks they perform, and rely on their training data, they are far from infallible. The reliability of their outcomes can be jeopardized if the input data is biased, incomplete or of poor quality.

Programmatic errors: Where errors exist, algorithms may not perform as expected and may deliver misleading results that have serious consequences.

Humans can rely on an algorithm to reduce the risk of error in their interactions with a complex system, but the final decision must remain with the human.

Algorithms are written by humans, who are fallible. In one self-driving car fatality, the car had been programmed to take into account a cyclist or a pedestrian, but not a pedestrian pushing a bike. It had also been programmed to disregard interfering images, such as a plastic bag flying across the road, so as not to be stopped erratically.

There is no intelligence in AI, but there is knowledge – of data and of rules – and there is recognition. Instead we should talk about the “augmented intelligence” of the human, who will rely on resources that he cannot mobilise with the same power as the machine.

AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data.

Potentially devastating social repercussions can arise when human predilections (conscious or unaware) are brought to bear in choosing which data points to use and which to disregard. Furthermore, when the process and frequency of data collection itself are uneven across groups and observed behaviors, it’s easy for problems to arise in how algorithms analyze that data, learn, and make predictions.  

Negative consequences can include misinformed recruiting decisions, misrepresented scientific or medical prognoses, distorted financial models and criminal-justice decisions, and misapplied (virtual) fingers on legal scales.  In many cases, these biases go unrecognized or disregarded under the veil of “advanced data sciences,” “proprietary data and algorithms,” or “objective analysis.”

As we deploy machine learning and AI algorithms in new areas, there probably will be more instances in which these issues of potential bias become baked into data sets and algorithms. Such biases have a tendency to stay embedded because recognizing them, and taking steps to address them, requires a deep mastery of data-science techniques, as well as a more meta-understanding of existing social forces, including data collection. In all, debiasing is proving to be among the most daunting obstacles, and certainly the most socially fraught, to date.

AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.

Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.

Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms can’t do that.

AI lacks intuition. Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.

Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. 

They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot. In mathematics and computer science, a local optimum is the best solution to a problem within a small neighborhood of possible solutions. This concept is in contrast to the global optimum, which is the optimal solution when every possible solution is considered

In computer science, local search is a heuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions (the search space) by applying local changes, until a solution deemed optimal is found or a time bound is elapsed.

Local search algorithms are widely applied to numerous hard computational problems, including problems from computer science (particularly artificial intelligence), mathematics, operations research, engineering, and bioinformatics.

In applied mathematics and computer science, a local optimum of an optimization problem is a solution that is optimal (either maximal or minimal) within a neighboring set of candidate solutions. This is in contrast to a global optimum, which is the optimal solution among all possible solutions, not just those in a particular neighborhood of values.
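A minimal sketch of a local search getting trapped, using a hypothetical objective with one local and one global peak:

```python
def f(x):
    # hypothetical objective: local peak near x = -0.69, global peak near x = 2.19
    return -(x ** 4) + 2 * x ** 3 + 3 * x ** 2

def hill_climb(x, step=0.01):
    """Greedy local search: move to a better neighbour until none exists."""
    while f(x + step) > f(x) or f(x - step) > f(x):
        x = x + step if f(x + step) > f(x) else x - step
    return x

local_peak  = hill_climb(-2.0)  # trapped at the local optimum near -0.69
global_peak = hill_climb(1.0)   # a luckier start reaches the peak near 2.19
```

Started on the left, the greedy search settles on the lower peak and never sees the higher one; only a different starting point, or a smarter context-aware strategy, finds the global optimum.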

AI can’t explain itself. AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.

AI offers tremendous opportunities and capabilities. But it can’t see the world as humans do. Instead, it provides the potential for humans to focus on more meaningful aspects of work that involve creativity and innovation. As automation replaces more routine or repetitive tasks, it will allow workers to focus more on inventions and breakthroughs, which ultimately fuels an enterprise’s success.

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically  divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.

Explainability is technically valuable. Developers need to be able to determine whether a system is solving the right problem. There are many examples of AI systems that “cheated” to arrive at the desired  outcome.

If AI systems in  high stakes fields ultimately solve the wrong problem, the outcome could be life threatening.

Because explainability is necessary for the adoption of AI in certain fields, in some ways the quest for  explainability is spurring AI innovation. For both ethical and technical reasons, academics and major AI companies  alike are devoting significant effort toward explainability, and they are making serious progress.

Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. 

As our understanding of potential bias in data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like Low Code Platforms to put a human in the loop and govern how AI is used in important decisions.



Imagine stupid Indian judiciary wants AI in our system.. Our “ loser lawyer turned judges” are the worst on the planet.. Software has even been allowed to predict future criminals, ultimately controlling human freedom by shaping how parole is denied or granted to prisoners. In this way, the minds of judges are being shaped by decision-making mechanisms they cannot understand because of how complex the process is and how much data it involves.


We humans are not merely cut off from the decisions that machines are making for us but deeply affected by them in unpredictable ways. Instead of being central to the system of decisions that affects us, we are cast out in to its environment. We have progressively restricted our own decision-making capacity and allowed algorithms to take over. We have become artificial humans, or human artefacts, that are created, shaped and used by the technology.


The more you use the web and social networks, the more Google, Facebook and other internet companies know about you. Then, of course, there are the reams of data collected via more conventional means -- voter rolls, driver’s licenses, magazine subscriptions, credit card purchases -- that can be cross-linked with online information to paint a complete profile of individuals. Data itself isn’t inherently discriminatory. 

The problem arises in how it’s used and interpreted -- especially when algorithms characterize people via correlations or “proxy” data. When data is misused, software can compound stereotypes or arrive at false conclusions. If you were to check out a homosexual porn site to protect your own motherland, you may be branded as a homosexual.





Algorithms function by drawing on past data while also influencing real-life decisions, which makes them prone, by their very nature, to repeating human mistakes and perpetuating them through feedback loops.   

Often, their implications can be unexpected and unintended

An algorithm is a sequence of steps
• to perform a task
• given an initial situation (i.e., the input)

They work to provide a path between a start point and an end point in a consistent way, and provide the instructions to follow it

A computer program is an implemented algorithm
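As an example of "a sequence of steps" implemented as a program, Euclid's greatest-common-divisor algorithm follows the same consistent path from input to output every time:

```python
def gcd(a, b):
    # Euclid's algorithm: repeat one simple step until b reaches zero
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return a

result = gcd(1071, 462)   # steps: (1071,462) -> (462,147) -> (147,21) -> (21,0)
```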


In the job market, excessive reliance on technology has led some of the world’s biggest companies to filter CVs through software, meaning human recruiters will never even glance at some potential candidates’ details. Not only does this put people’s livelihoods at the mercy of machines, it can also build in hiring biases that the company had no desire to implement, as happened with Amazon. Jews are always the gainers.

In news, what’s known as automated sentiment analysis analyses positive and negative opinions about companies based on different web sources. In turn, these are being used by trading algorithms that make automated financial decisions, without humans having to actually read the news.

91 % of all trading in the foreign exchange markets is conducted by algorithms alone. The growing algorithmic arms race to develop ever more complex systems to compete in these markets means huge sums of money are being allocated according to the decisions of machines. Jews laugh all the way to the kosher bank.

On a small scale, the people and companies that create these algorithms are able to affect what they do and how they do it. But because much of artificial intelligence involves programming software to figure out how to complete a task by itself, we often don’t know exactly what is behind the decision-making. As with all technology, this can lead to unintended consequences that may go far beyond anything the designers ever envisaged.

TAKE THE 2010 “FLASH CRASH” OF THE DOW JONES INDUSTRIAL AVERAGE INDEX. THE ACTION OF ALGORITHMS HELPED CREATE THE INDEX’S SINGLE BIGGEST DECLINE IN ITS HISTORY, WIPING NEARLY 9% OFF ITS VALUE IN MINUTES (ALTHOUGH IT REGAINED MOST OF THIS BY THE END OF THE DAY). A FIVE-MONTH INVESTIGATION COULD ONLY SUGGEST WHAT SPARKED THE DOWNTURN (AND VARIOUS OTHER THEORIES HAVE BEEN PROPOSED).

BUT THE ALGORITHMS THAT AMPLIFIED THE INITIAL PROBLEMS DIDN’T MAKE A MISTAKE. THERE WASN’T A BUG IN THE PROGRAMMING. THE BEHAVIOUR EMERGED FROM THE INTERACTION OF MILLIONS OF ALGORITHMIC DECISIONS PLAYING OFF EACH OTHER IN UNPREDICTABLE WAYS, FOLLOWING THEIR OWN LOGIC IN A WAY THAT CREATED A DOWNWARD SPIRAL FOR THE MARKET.

THE CONDITIONS THAT MADE THIS POSSIBLE OCCURRED BECAUSE, OVER THE YEARS, THE PEOPLE RUNNING THE TRADING SYSTEM HAD COME TO SEE HUMAN DECISIONS AS AN OBSTACLE TO MARKET EFFICIENCY. 

BACK IN 1987 WHEN THE US STOCK MARKET FELL BY 23 %, SOME WALL STREET BROKERS SIMPLY STOPPED PICKING UP THEIR PHONES TO AVOID RECEIVING THEIR CUSTOMERS’ ORDERS TO SELL STOCKS. THIS STARTED A PROCESS THAT HAS ENDED WITH COMPUTERS ENTIRELY REPLACING THE PEOPLE.

ALGORITHMS DELIBERATELY DO “BACK SWINGS” TO PUMP AND DUMP IN MILLISECONDS ..

The financial world has invested millions in superfast cables and microwave communications to shave just milliseconds off the rate at which algorithms can transmit their instructions. When speed is so important, a human being that requires a massive 215 milliseconds to click a button is almost completely redundant. Our only remaining purpose is to reconfigure the algorithms each time the system of technological decisions fails.


As new boundaries are carved between humans and technology, we need to think carefully about where our extreme reliance on software is taking us. As human decisions are substituted by algorithmic ones, and we become tools whose lives are shaped by machines and their unintended consequences, we are setting ourselves up for technological domination. We need to decide, while we still can, what this means for us both as individuals and as a society.

ALTHOUGH ALGORITHMS ARE USED IN GLOBAL FINANCE AND OTHER WAYS THAT IMPACT SOCIETY, THERE IS NO LEGAL RECOURSE OR OFFICIAL AUTHORITY TO HOLD A COMPANY RESPONSIBLE FOR THE ACTIONS OF ITS ALGORITHMS.    

THAT’S PARTIALLY BECAUSE ALGORITHMS ARE OFTEN DEVELOPED AND IMPLEMENTED IN SECRET TO AVOID HACKING AND REVERSE ENGINEERING.

UNDERSTANDING ALGORITHMS AND THEIR IMPACT ON HUMAN LIFE GOES FAR BEYOND BASIC DIGITAL LITERACY

IT HAS BEEN MADE TOO CONVENIENT FOR PEOPLE TO FOLLOW THE ADVICE OF AN ALGORITHM (OR, TOO DIFFICULT TO GO BEYOND SUCH ADVICE), TURNING THESE ALGORITHMS INTO SELF-FULFILLING PROPHECIES, AND USERS INTO ZOMBIES OF GEORGE ORWELL’S 1984.


WE KNOW WHAT HAPPENED TO THE BOEING 737 MAX PASSENGER PLANE WHEN AUTOMATION VETOED THE HUMAN PILOT.. EVEN TODAY THE INQUIRY INTO THE CRASH HAS NOT GONE BEYOND WHAT I PREDICTED IN THE POST BELOW AS SOON AS THE CRASH HAPPENED —


THE ALGORITHMS ARE NOT IN CONTROL; PEOPLE CREATE AND ADJUST THEM TO HIJACK THE SYSTEM FOR EVIL PURPOSE OR PROFITS

GOOGLE, FACEBOOK, TWITTER, QUORA ETC USE  ALGORITHMS THAT CAN SINK MY BLOG POSTS WHICH EXHUME CRITICAL BURIED TRUTHS..   

THESE JEWISH DEEP STATE AGENTS , USE FILTER BUBBLES AND TAILOR UPDATES AND CONTENT TO EACH USER, WHICH COULD KEEP USERS FROM RECEIVING INFORMATION OR NEWS FROM SOURCES THAT CHALLENGE THEIR WORLDVIEW.. 

CAPT AJIT VADAKAYIL’S FOLLOWERS  CANNOT EVEN SEND A CRITICAL ADVICE/ WARNING TO OUR OWN NATION’S PM

WHEN YOU REMOVE THE HUMANITY FROM A SYSTEM WHERE PEOPLE ARE INCLUDED, THEY BECOME VICTIMS

SHYLOCK WINS-- THE COMMON GOOD HAS BECOME A DISCREDITED, OBSOLETE RELIC OF THE PAST..

ARTIFICIAL INTELLIGENCE IS ONLY AS SMART AS THE DATA SETS SERVED

THE POWER TO CREATE AND CHANGE REALITY WILL DELIBERATELY BE INSERTED  IN BLACK BOX TECHNOLOGY THAT ONLY A FEW TRULY UNDERSTAND.. 

THIS BLOGSITE WILL NOT ALLOW HUMANS TO BE AT THE TENDER MERCIES OF KOSHER BIG BROTHER WHO  CONTROLS THE TECHNOLOGY.














SOMEBODY CALLED ME UP AND ASKED ME..

CAPTAIN—

WHO IS MUHAMMAD IBN MUSA AL-KHWARIZMI WHOM MODERN HISTORIANS ARE CALLING THE “FATHER OF COMPUTER SCIENCE” AND THE “FATHER OF ALGORITHMS”??.

LISTEN –

ARAB MUHAMMAD IBN MUSA AL-KHWARIZMI WAS A BRAIN DEAD FELLOW WHOSE ENTIRE WORK WAS SOLD TO HIM, TRANSLATED INTO ARABIC, BY THE CALICUT KING FOR GOLD.

THE CALICUT KING MADE HIS MONEY BY NOT ONLY SELLING SPICES –BUT KNOWLEDGE TOO.

THE MAMANKAM FEST HELD AT TIRUNAVAYA KERALA BY THE CALICUT KING EVERY 12 YEARS WAS AN OCCASION WHERE KNOWLEDGE WAS SOLD FOR GOLD.

http://ajitvadakayil.blogspot.com/2019/10/perumal-title-of-calicut-thiyya-kings.html

EVERY ANCIENT GREEK SCHOLAR ( PYTHAGORAS/ PLATO/ SOCRATES ETC ) EXCEPT ARISTOTLE STUDIED AT KODUNGALLUR UNIVERSITY.. THE KERALA SCHOOL OF MATH WAS PART OF IT.

OUR ANCIENT BOOKS ON KNOWLEDGE DID NOT HAVE THE AUTHOR’S NAME AFFIXED ON THE COVER AS WE CONSIDERED BOOKS AS THE WORK OF SOULS , WHO WOULD BE BORN IN ANOTHER WOMAN’S WOMB AFTER DEATH.

THE GREEKS TOOK ADVANTAGE OF THIS , STOLE KNOWLEDGE FROM KERALA / INDIA AND PATENTED IT IN THEIR OWN NAMES, WITH HALF BAKED UNDERSTANDING .

WHEN THE KING OF CALICUT CAME TO KNOW THIS, HE BLACKBALLED GREEKS FROM KODUNGALLUR UNIVERSITY .. AND SUDDENLY ANCIENT GREEK KNOWLEDGE DRIED UP LIKE WATER IN THE HOT DESERT SANDS.

LATER THE CALICUT KING SOLD KNOWLEDGE, TRANSLATED INTO ARABIC, TO BRAIN DEAD ARABS LIKE MUHAMMAD IBN MUSA AL-KHWARIZMI FOR GOLD..

THESE ARAB MIDDLE MEN SOLD KNOWLEDGE ( LIKE MIDDLEMEN FOR SPICES) TO WHITE MEN FOR A PREMIUM.

FIBONACCI TOOK HIS ARABIC WORKS TO ITALY FROM BEJAYA , ALGERIA.

http://ajitvadakayil.blogspot.com/2010/12/perfect-six-pack-capt-ajit-vadakayil.html

EVERY VESTIGE OF ARAB KNOWLEDGE IN THE MIDDLE AGES WAS SOLD, TRANSLATED INTO ARABIC, BY KODUNGALLUR UNIVERSITY FOR GOLD..

FROM 800 AD TO 1450 AD KODUNGALLUR UNIVERSITY OWNED BY THE CALICUT KING EARNED HUGE AMOUNT OF GOLD FOR SELLING READY MADE TRANSLATED KNOWLEDGE ..

THIS IS THE GOLD OF TIPU SULTAN, WHO STOLE IT FROM NORTH KERALA TEMPLE VAULTS.. ROTHSCHILD BECAME THE RICHEST MAN ON THIS PLANET BY STEALING TIPU SULTAN’S GOLD IN 1799 AD.

http://ajitvadakayil.blogspot.com/2011/10/tipu-sultan-unmasked-capt-ajit.html

WHEN TIPU SULTAN WAS BLASTING TEMPLE VAULTS, LESS THAN 1% OF THE GOLD WAS SECRETLY TRANSFERRED TO SOUTH KERALA ( TRADITIONAL ENEMIES ) OF THE CALICUT KING. LIKE HOW SADDAM HUSSAIN FLEW HIS FIGHTER JETS TO ENEMY IRAN .

THIS IS THE GOLD WHICH WAS UNEARTHED FROM PADMANABHASWAMY TEMPLE..

http://ajitvadakayil.blogspot.com/2013/01/mansa-musa-king-of-mali-and-sri.html

ALGORITHMS ARE SHORTCUTS PEOPLE USE TO TELL COMPUTERS WHAT TO DO. AT ITS MOST BASIC, AN ALGORITHM SIMPLY TELLS A COMPUTER WHAT TO DO NEXT WITH AN “AND,” “OR,” OR “NOT” STATEMENT.

THE ALGORITHM IS BASICALLY A CODE DEVELOPED TO CARRY OUT A SPECIFIC PROCESS. ALGORITHMS ARE SETS OF RULES, INITIALLY SET BY HUMANS, FOR COMPUTER PROGRAMS TO FOLLOW.

A PROGRAMMING ALGORITHM IS A COMPUTER PROCEDURE THAT IS A LOT LIKE A RECIPE (CALLED A PROCEDURE) AND TELLS YOUR COMPUTER PRECISELY WHAT STEPS TO TAKE TO SOLVE A PROBLEM OR REACH A GOAL.

THERE IS NO ARTIFICIAL INTELLIGENCE WITHOUT ALGORITHMS. ALGORITHMS ARE, IN PART, OUR OPINIONS EMBEDDED IN CODE.

ALGORITHMS ARE AS OLD AS DANAVA CIVILIZATION ITSELF – THIEF GREEK EUCLID’S ALGORITHM BEING ONE OF THE FIRST EXAMPLES DATING BACK SOME 2300 YEARS

EUCLID JUST PATENTED MATH HE LEARNT IN THE KERALA SCHOOL OF MATH IN HIS OWN NAME.. EUCLID IS A THIEF LIKE PYTHAGORAS WHO LEARNT IN THE KERALA SCHOOL OF MATH.

http://ajitvadakayil.blogspot.com/2011/01/isaac-newton-calculus-thief-capt-ajit.html

ALGEBRA IS DERIVED FROM AL-JABR, ONE OF THE TWO OPERATIONS THE BRAIN DEAD FELLOW USED TO SOLVE QUADRATIC EQUATIONS.

ALGORISM AND ALGORITHM STEM FROM ALGORITMI, THE LATIN FORM OF HIS NAME.


CONTINUED TO 2--

  1. CONTINUED FROM 1-

    BRAIN DEAD CUNT AL-KHWARIZMI DEVELOPED THE CONCEPT OF THE ALGORITHM IN MATHEMATICS -WHICH IS A REASON FOR HIS BEING CALLED THE GRANDFATHER OF COMPUTER SCIENCE ( SIC ).. THEY SAY THAT THE WORD “ALGORITHM” IS ACTUALLY DERIVED FROM A LATINIZED VERSION OF AL-KHWARIZMI’S NAME BRAAAYYYYYYY.

    ALGORITMI DE NUMERO INDORUM ( IN ENGLISH, “AL-KHWARIZMI ON THE HINDU ART OF RECKONING” ) GAVE RISE TO THE WORD ALGORITHM, DERIVING FROM HIS NAME IN THE TITLE. THE WORK DESCRIBES THE HINDU PLACE-VALUE SYSTEM OF NUMERALS BASED ON 1, 2, 3, 4, 5, 6, 7, 8, 9, AND 0. THE FIRST USE OF ZERO AS A PLACE HOLDER IN POSITIONAL BASE NOTATION WAS DUE TO AL-KHWARIZMI IN THIS WORK.

    ANOTHER IMPORTANT WORK BY AL-KHWARIZMI WAS HIS WORK SINDHIND ZIJ ON ASTRONOMY. THE WORK IS BASED ON INDIAN ASTRONOMICAL WORKS..

    THE MAIN TOPICS COVERED BY AL-KHWARIZMI IN THE SINDHIND ZIJ ARE CALENDARS; CALCULATING TRUE POSITIONS OF THE SUN, MOON AND PLANETS, TABLES OF SINES AND TANGENTS; SPHERICAL ASTRONOMY; ASTROLOGICAL TABLES; PARALLAX AND ECLIPSE CALCULATIONS; AND VISIBILITY OF THE MOON. A RELATED MANUSCRIPT, ATTRIBUTED TO AL-KHWARIZMI, ON SPHERICAL TRIGONOMETRY IS DISCUSSED..

    PTOLEMY’S ENTIRE WORKS WERE LIFTED FROM KODUNGALLUR UNIVERSITY KERALA, OWNED BY THE CALICUT KING. AL-KHWARIZMI'S TABLES WERE MODELED ON PTOLEMY’S TABLES.

    AL-KHWARIZMI WROTE ON THE ASTROLABE AND SUNDIALS ,WHICH ARE HINDU INSTRUMENTS

    THERE IS A STATUE OF MUHAMMAD IBN MUSA AL-KHWARIZMI HOLDING UP AN ASTROLABE IN FRONT OF THE FACULTY OF MATHEMATICS OF AMIRKABIR UNIVERSITY OF TECHNOLOGY IN TEHRAN . HE OBTAINED AN ASTROLABE INSTRUMENT, AND ITS MANUAL ( BOTH CONSTRUCTION AND OPERATION ) TRANSLATED INTO ARABIC, FOR GOLD .. HIS ASTROLABE INSTRUMENT HAD PLATES FOR MECCA/ ISTANBUL/ ALEXANDRIA.

    ASTROLABE BRASS INSTRUMENTS WERE SOLD BY KODUNGALLUR UNIVERSITY PROFESSORS AT THE LIBRARY OF CORDOBA IN SPAIN..

    THESE SIMPLE BRASS DEEP SEA NAVIGATION INSTRUMENTS WERE PRODUCED MUCH BEFORE THE COMPLICATED ANTIKYTHERA AUTOMATIC ( PERPETUAL MOTION ) MECHANISM..

    THE DEEP SEA NAVIGATING SHIPS OF QUEEN DIDO , A KERALA THIYYA PRINCESS WHO TAUGHT AT THE UNIVERSITY OF ALEXANDRIA IN 1600 BC ( ON DEPUTATION FROM KODUNGALLUR UNIVERSITY ) CARRIED THESE INSTRUMENTS..

    http://ajitvadakayil.blogspot.com/2019/05/the-ancient-7000-year-old-shakti.html

    THE ASTROLABE IS AN ELABORATE INCLINOMETER, HISTORICALLY USED BY ASTRONOMERS AND NAVIGATORS TO MEASURE THE ALTITUDE ABOVE THE HORIZON OF A CELESTIAL BODY, DAY OR NIGHT.

    IT CAN BE USED TO IDENTIFY STARS OR PLANETS, TO DETERMINE LOCAL LATITUDE GIVEN LOCAL TIME (AND VICE VERSA), TO SURVEY, OR TO TRIANGULATE. ASTROLABE WAS CALLED SITARA YANTRA..

    http://ajitvadakayil.blogspot.com/2019/09/onam-our-only-link-to-planets-oldest.html

    AN ASTROLABE (SOLD IN CORDOBA SPAIN ) EXCAVATED FROM THE WRECK SITE OF A PORTUGUESE ARMADA SHIP WAS CERTIFIED AS THE OLDEST IN THE WORLD. A SHIP'S BELL -- DATED 1498 -- RECOVERED FROM THE SAME WRECK SITE WAS ALSO CERTIFIED AS THE OLDEST IN THE WORLD.

    DONT EVER THINK THAT VASCO DA GAMA AND COLUMBUS NAVIGATED ON WESTERN TECHNOLOGY.. THEY USED ANCIENT DEEP SEA NAVIGATING INSTRUMENTS OF ANCIENT KERALA THIYYA NAVIGATORS..

    DIOPHANTUS STUDIED IN KODUNGALLUR UNIVERSITY. HE IS THE AUTHOR OF A SERIES OF BOOKS CALLED ARITHMETICA, ALL LIFTED FROM KERALA SCHOOL OF MATH.

    THIEF DIOPHANTUS WAS THE FIRST GREEK MATHEMATICIAN WHO RECOGNIZED FRACTIONS AS NUMBERS; THUS HE ALLOWED POSITIVE RATIONAL NUMBERS FOR THE COEFFICIENTS AND SOLUTIONS.

    IN MODERN USE, DIOPHANTINE EQUATIONS ARE USUALLY ALGEBRAIC EQUATIONS WITH INTEGER COEFFICIENTS, FOR WHICH INTEGER SOLUTIONS ARE SOUGHT. DIOPHANTUS WAS A BRAIN DEAD FELLOW WHO STOLE HIS ALGEBRA FROM THE KERALA SCHOOL OF MATH.

    MEDIOCRE BRAIN JEW ALBERT EINSTEIN WAS A THIEF… HE STOLE FROM PART TWO ( BRAHMANAS ) AND PART THREE ( ARANYAKAS ) OF THE VEDAS..

    http://ajitvadakayil.blogspot.com/2018/11/albert-einstein-was-thief-plagiarist.html

    LIES WONT WORK.. A BROWN BLOGGER IS IN TOWN !

    Capt ajit vadakayil
    ..


THIS POST IS NOW CONTINUED TO PART 15 BELOW--






CAPT AJIT VADAKAYIL
..
