
WHAT ARTIFICIAL INTELLIGENCE CANNOT DO, a grim note to the top 100 intellectuals of this planet, Part 7 - Capt Ajit Vadakayil



THIS POST IS CONTINUED FROM PART 6, BELOW--





Local Interpretable Model-Agnostic Explanation (LIME) is an algorithm that provides a novel technique for explaining the outcome of any predictive model in an interpretable and faithful manner. 

It works by training an interpretable model locally around a prediction you want to explain.
To better understand how LIME works, let's consider two distinct types of interpretability:

Global interpretability: Global interpretations help us understand the entire conditional distribution modeled by the trained response function, but global interpretations can be approximate or based on averages.

Local interpretability: Local interpretations promote understanding of a single data point or of a small region of the distribution, such as a cluster of input records and their corresponding predictions, or a decile of predictions and their corresponding input rows. 

Because small sections of the conditional distribution are more likely to be linear, local explanations can be more accurate than global explanations.

LIME is designed to provide local interpretability, so it is most accurate for a specific decision or result.

Locally faithful explanations capture the classifier behavior in the neighborhood of the instance to be explained. To learn a local explanation, LIME approximates the classifier's decision boundary around a specific instance using an interpretable model. 

LIME is model-agnostic, which means it considers the model to be a black-box and makes no assumptions about the model behavior. This makes LIME applicable to any predictive model.

In order to learn the behavior of the underlying model, LIME perturbs the inputs and sees how the predictions change. The key intuition behind LIME is that it is much easier to approximate a black-box model by a simple model locally than by a single global model.

Most Machine Learning algorithms are black boxes, but LIME has a bold value proposition: explain the results of any predictive model. The tool can explain models trained with text, categorical, or continuous data.

While the techniques above offer practical steps that data scientists can take, LIME is an actual method developed by researchers to gain greater transparency on what’s happening inside an algorithm. The researchers explain that LIME can explain “the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.”

What this means in practice is that the LIME model develops an approximation of the model by testing it out to see what happens when certain aspects within the model are changed. Essentially it’s about trying to recreate the output from the same input through a process of experimentation.
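To make that mechanism concrete, here is a minimal from-scratch sketch of the LIME idea (not the official `lime` package): perturb a single instance, query the black-box model on the perturbed points, weight those points by how close they are to the original, and fit a simple linear surrogate whose coefficients act as the local explanation. The toy dataset, random-forest black box and kernel width are assumptions chosen purely for illustration.

```python
# A from-scratch sketch of the LIME mechanism (illustrative, not the lime library).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)   # the opaque model

x0 = X[0]                                   # the single prediction we want to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.3, size=(1000, X.shape[1]))        # local perturbations
p = black_box.predict_proba(Z)[:, 1]        # black-box outputs at those perturbations
w = np.exp(-np.sum((Z - x0) ** 2, axis=1))  # proximity kernel: closer points weigh more

surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)        # interpretable local model
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: {coef:+.3f}")      # local attribution of each feature around x0
```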





As the ‘AI era’ of increasingly complex, smart, autonomous, big-data-based tech comes upon us, the algorithms that fuel it are getting under more and more scrutiny.

Whether you’re a data scientist or not, it becomes obvious that the inner workings of machine learning, deep learning, and black-box neural networks are not exactly transparent.

In the wake of high-profile news reports concerning user data breaches, leaks, violations, and biased algorithms, that is rapidly becoming one of the biggest — if not the biggest — sources of problems on the way to mass AI integration in both the public and private sectors.

Here’s where the push for better AI interpretability and explainability takes root.

By now, far more justifiable apprehensions, grounded in socio-economic reality, have taken hold in the public consciousness:--

● When AI is making judgements and appraising risks, why and how does it come to the conclusions it presents?
● What is considered failure and success? Why?
● If there’s an error or a biased logic, how do we know?
● How do we identify and fix such issues?
● Are we sure we can trust AI?


These are the questions that need to be answered in order to be able to rely on AI, and be sure about its accountability. Here’s where AI interpretability and explainability come into play.

AI Interpretability vs Explainability

Interpretability is about the extent to which a cause and effect can be observed within a system. Or, to put it another way, it is the extent to which you are able to predict what is going to happen, given a change in input or algorithmic parameters. It’s being able to look at an algorithm and go yes-- I can see what’s happening here.

Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms. It’s easy to miss the subtle difference with interpretability, but consider it like this: interpretability is about being able to discern the mechanics without necessarily knowing why. Explainability is being able to quite literally explain what is happening.

Where machine learning and AI is concerned, “interpretability” and “explainability” are often used interchangeably, though it’s not correct for 100% of situations. While closely related, these terms denote different aspects of predictability and understanding one can have of complex systems, algorithms, and vast sets of data. See below:--

● Interpretability refers to the ability to observe cause-and-effect situations in a system, and, essentially, predict which changes will cause what type of shifts in the results (without necessarily understanding the nitty-gritty of it all).
● Explainability is basically the ability to understand and explain ‘in human terms’ what is happening with the model; how exactly it works under the hood.


The difference is subtle enough, but it’s there. While usually both can co-exist, some situations might require one and not the other: for example, when explaining what’s behind a predictive model to the higher-ups of the banking or pharmaceutical industry, or demonstrating the measures taken to minimize or eliminate the possibility of bias in risk assessment models for legal compliance.



Important Properties Of Explainability
Portability: The range of machine learning models with which the explanation method can be used.
Expressive Power: The structure of the explanations that a method is able to generate.
Translucency: How much the explanation method relies on looking into the machine learning model. Low translucency methods tend to have higher portability.
Algorithmic Complexity: The computational complexity of the method that generates the explanations.

Fidelity: How faithfully the explanation reflects the behaviour of the machine learning model. High fidelity is considered one of the most important properties of an explanation, because a low-fidelity explanation fails to describe the model.



Interpretability
Interpretability is defined as the degree to which a human can consistently predict a model’s result without needing to know the reasoning behind the scenes. The higher the interpretability of a machine learning model, the easier it is to understand why certain decisions or predictions were made.

Evaluation Of Interpretability
Application Level Evaluation: Evaluation on the real task. The explanation is put into the product and tested by the end users.
Human Level Evaluation: A simplified application-level evaluation. The experiments are carried out by laypersons, which makes them cheaper to run and makes testers easier to recruit.
Function Level Evaluation: A proxy task that requires no human subjects; it works when the class of model being used has already been evaluated in a human-level evaluation.

Understanding The Difference
A simple example distinguishes the two. Consider a school student performing a titration experiment: being able to predict what will happen at each step, right up to the final outcome, is interpretability. The chemistry that explains why the experiment behaves that way is explainability.



Black box AI systems for automated decision making, often based on machine learning over big data, map a user's features into a class predicting the behavioural traits of individuals, such as credit risk, health status, etc., without exposing the reasons why.

Explainable AI (XAI) is an emerging field in machine learning that aims to address how black box decisions of AI systems are made. ... One way to gain explainability in AI systems is to use machine learning algorithms that are inherently explainable.




Why Does Machine Learning Need to Be Explainable?

Being able to present and explain extremely complex mathematical functions behind predictive models in understandable terms to human beings is an increasingly necessary condition for real-world AI applications.

As algorithms become more complicated, fears of undetected bias, mistakes, and miscomprehensions creeping into decision-making grow among policymakers, regulators, and the general public. 

In such an environment, interpretability and explainability are crucial for achieving fair, accountable and transparent (FAT) machine learning, complying with the needs and standards for:---

1. Business adoption
It is paramount for any business predictions to be easily explained to a boss, a customer, or a commercial legal adviser. Simply speaking, when any justification for an important business decision is reduced to “the algorithm made us do it,” you’ll have a hard time making anyone — be it investors, CEOs, CIOs, end customers, or legal auditors — buy the fairness, reliability, and business logic of this algorithm.

2. Regulatory oversight
Applying regulations, such as the GDPR, regional and local laws, to machine learning models can only be fully achieved with the FAT principles at the core. For example, Article 22 of the GDPR (echoed in Recital 71) specifically states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

In turn, Articles 13 and 15 stress repeatedly that data subjects have a right to the disclosure of “meaningful information about the logic involved” and of “the significance and the envisaged consequences” of automated decision-making.

To make a GDPR-compliant AI is to make an interpretable, explainable AI. In the world of rapidly developing and spreading laws regarding data, that can soon mean “to make any compliant AI is to make an interpretable, explainable AI.”

3. Minimizing bias
The problem of algorithmic bias and the dangers it can harbor when allowed into machine learning systems are well-known and documented. While the main reason behind biased AI is the poor quality of data fed into it, the lack of transparency in the proceedings and, as a result, inability to quickly detect bias are among the key factors here, as well.

Imagine the times when interpretable and explainable AI becomes the norm. Then the ability to understand not only the fundamental techniques used in a model but also particular cause-and-effect ties found in those specific algorithms would allow for faster and better bias detection. This has a potential to eliminate the problem itself, or at least to allow for a much quicker and more effective solution to it, which is one of the main socio-economic reasons behind the current push for both fair and ethical AI.

4. Model documentation
Regardless of the type and scope of a software development project, probably no one has ever described documentation keeping as fun. Yet it must be done, and predictive models are no exception.
Where AI, machine learning, and especially black-box deep learning are concerned, in some cases this usually tedious task can become impossible altogether. 

Basically speaking, black-box modeling can be great for dealing with data regardless of a particular mathematical structure of the model, but if you need to document the specifics — be it for a commercial, educational, or other project — you’re out of luck. This model would need to become both interpretable and explainable, in order for an efficient documentation to be created.

While questions of transparency and ethics may feel abstract for the data scientist on the ground, there are, in fact, a number of practical things that can be done to improve an algorithm’s interpretability and explainability.

When humans make decisions, they have the ability to explain the thought process behind them. They can explain the rationale, whether it’s driven by observation, intuition, experience or logical thinking ability. Basic ML algorithms like decision trees can be explained by following the tree path which led to the decision, as the sketch below shows. But when it comes to complex AI algorithms, the deep layers are often incomprehensible to human intuition and are quite opaque.
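As a small illustration of that point, here is a minimal sketch (assuming scikit-learn and its bundled iris dataset, chosen only for illustration) showing how a shallow decision tree can be explained by printing its learned rules and reading off the path a single prediction follows.

```python
# A minimal sketch: a shallow decision tree explained by its own rules and paths.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Human-readable if/else rules for the whole model
print(export_text(tree, feature_names=list(iris.feature_names)))

# The exact sequence of node tests followed for one sample
path = tree.decision_path(iris.data[:1])
print("nodes visited for sample 0:", path.indices.tolist())
```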

Data scientists may have trouble explaining why their algorithm gave a decision and the laymen end-user may not simply trust the machine’s predictions without contextual proof and reasoning.

There are three steps which should be fulfilled by the system:---

1) Explain the intent behind the system and how it affects the concerned parties

2) Explain the data sources you use and how you audit outcomes

3) Explain how inputs in a model lead to outputs.

Interpret means to explain or to present in understandable terms. In the context of ML systems, interpretability is the ability to explain a model, or to present it in terms understandable to a human. Interpretable AI, or Transparent AI, refers to techniques in artificial intelligence (AI) which can be trusted and easily understood by humans.

It contrasts with the concept of the "black box" in machine learning, where even the designers cannot explain why the AI arrived at a specific decision. Interpretability is about the extent to which a cause and effect can be observed within a system. ... Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms.



Explainability is motivated by the lack of transparency of black-box approaches, which does not foster trust and acceptance of AI in general and ML in particular. Rising legal and privacy concerns, e.g. the new European General Data Protection Regulation, will make black-box approaches difficult to use in business, because they often cannot explain why a machine decision has been made.

The neural networks employed by conventional AI must be trained on data, but they don’t have to understand it the way humans do. They “see” data as a series of numbers, label those numbers based on how they were trained and solve problems using pattern recognition. When presented with data, a neural net asks itself if it has seen it before and, if so, how it labeled it previously.

In contrast, cognitive AI is based on concepts. A concept can be described at the strict relational level, or natural language components can be added that allow the AI to explain itself. A cognitive AI says to itself: “I have been educated to understand this kind of problem. You're presenting me with a set of features, so I need to manipulate those features relative to my education.”

The more data that is regularly submitted to the model, the better it gets. So, unlike traditional data management and cleaning systems, machine learning algorithms improve with scale.

When it comes to powering particular functions, AI can do a large portion of the work for us. By concentrating on machine learning that deliberately gets smarter about how it uses, rates and analyzes data, we can reduce coding hours as well as worry less about faulty data.

Machine learning methods are often based on neural networks, which can basically be seen as black boxes that turn input into output. Not being able to access the knowledge within the machine is a constant headache for developers, and many times for users as well.

Researchers are studying other significant variables, like how much the attacker actually knows about the AI system. For example, in what we call “white-box” attacks, the adversary knows the model and its features. In “gray-box” attacks, they don’t know the model, but do know the features. In “black-box” attacks, they know neither the model nor the features. 

Even in a black-box scenario, adversaries remain undaunted. They can persistently use brute-force attacks to break through and manipulate the AI malware classifier. This is an example of what is called “transferability”—the use of one model to trick another model.

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implementation of the social right to explanation.

Transparency rarely comes for free, and there are often trade-offs between the accuracy and the explainability of a solution.

The technical challenge of explaining AI decisions is sometimes known as the interpretability problem. Another consideration is info-besity (overload of information); thus, full transparency may not always be possible or even required.

DeepLIFT (Deep Learning Important Features)

DeepLIFT is a useful method in the particularly tricky area of deep learning. It works through a form of backpropagation: it takes the output, then attempts to pull it apart by ‘reading’ the various neurons that have gone into developing that original output.

Essentially, it’s a way of digging back into the feature selection inside of the algorithm (as the name indicates).

Layer-wise relevance propagation
Layer-wise relevance propagation is similar to DeepLIFT, in that it works backwards from the output, identifying the most relevant neurons within the neural network until you return to the input (say, for example, an image).


DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. 

DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass.
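In practice, DeepLIFT-style attributions are usually obtained from a library rather than written by hand. The sketch below assumes the open-source PyTorch `captum` library's DeepLift implementation; the tiny network, the random input and the all-zeros reference (baseline) are assumptions made purely for illustration.

```python
# A hedged sketch of DeepLIFT-style attribution, assuming captum's implementation.
import torch
import torch.nn as nn
from captum.attr import DeepLift

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4)          # the input whose prediction we want to explain
baseline = torch.zeros(1, 4)   # the 'reference' input whose activations we compare against

attributions = DeepLift(model).attribute(x, baselines=baseline, target=1)
print(attributions)            # per-feature contribution scores relative to the reference
```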

Interpretability is the degree to which a human can understand the cause of a decision

Boolean Decision Rules via Column Generation: This algorithm provides a directly interpretable supervised learning method for binary classification that learns a Boolean rule in disjunctive normal form (DNF) or conjunctive normal form (CNF) using column generation (CG). For classification problems, Boolean Decision Rules tends to return simple models that can be quickly understood.

Generalised Linear Rule Models: Generalised Linear Rule Models are applicable for both classification and regression problems. For classification problems, Generalised Linear Rule Models can achieve higher accuracy while retaining the interpretability of a linear model. 

ProfWeight: This algorithm can be applied to neural networks in order to produce instance weights that can be further applied to the training data to learn an interpretable model.

Teaching AI to Explain Its Decisions: This algorithm is an explainability framework that leverages domain-relevant explanations in the training dataset to predict both labels and explanations for new instances.  

Contrastive Explanations Method: The basic version of this algorithm, for classification with numerical features, can be used to compute contrastive explanations for image and tabular data.

Contrastive Explanations Method with Monotonic Attribute Functions: This algorithm is a Contrastive Image explainer which leverages Monotonic Attribute Functions. The main idea behind this algorithm is to explain images using high level semantically meaningful attributes that may either be directly available or learned through supervised or unsupervised methods

Disentangled Inferred Prior Variational Auto-Encoder (DIP-VAE): This algorithm is an unsupervised representation learning algorithm which usually takes a given feature and learns a new representation in a disentangled manner in order to make the resulting features more understandable. 


ProtoDash: This algorithm is a way of understanding a dataset with the help of prototypes. It provides exemplar-based explanations for summarising dataset as well as explaining predictions made by an AI model. It employs a fast gradient-based algorithm to find prototypes along with their (non-negative) importance weights.
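Most of the algorithms listed above ship in IBM's AI Explainability 360 toolkit. As a stand-in (deliberately not the AIX360 implementations themselves), the following minimal sketch shows the same spirit of direct interpretability: a sparse, L1-penalised logistic regression whose few surviving coefficients can be read as a simple weighted rule. The dataset and the penalty strength C are assumptions chosen for illustration.

```python
# A stand-in sketch of a directly interpretable model: sparse (L1) logistic regression.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
clf.fit(data.data, data.target)

# The L1 penalty drives most coefficients to exactly zero; what remains is the "rule".
coefs = clf.named_steps["logisticregression"].coef_[0]
kept = [(name, round(c, 3)) for name, c in zip(data.feature_names, coefs) if abs(c) > 1e-6]
print(f"{len(kept)} of {len(coefs)} features kept:")
for name, c in kept:
    print(f"  {name}: {c:+}")
```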


Explainability may not be very important when you are classifying images of cats and dogs – but as ML models are being used for more extensive and critical problems, XAI becomes extremely important. If an ML model is predicting the presence of a disease like diabetes from a patient’s test results, doctors need substantial evidence as to why the decision was made before suggesting any treatment.


Currently, AI models are evaluated using metrics such as accuracy or F1 score on validation data. Real-world data may come from a slightly different distribution than training data, and the evaluation metric may be unjustifiable. Hence, the explanation, along with a prediction, can transform an untrustworthy model into a trustworthy one.


There are a few crucial building blocks needed to develop an explainable AI system:--

Explanation interface
The explanation generated by the explainable model should be shown to humans in human-understandable formats. There are many state-of-the-art human-computer interaction techniques available to generate compelling explanations. Data visualization models, natural language understanding and generation, conversational systems, etc. can be used for the interface.

Psychological model of explanation--

Humans make most of their decisions unconsciously, without any explicit explanation for them. Hence, psychological theories can help developers as well as evaluators. More powerful explanations will be generated by considering psychological requirements. E.g. a user can rate the clarity of a generated explanation, which helps gauge user satisfaction, and the model can be continuously improved depending on user ratings.

Explainability can be a mediator between AI and society. It is also a useful tool for identifying issues in ML models, artifacts in the training data, and biases in the model, for improving the model, for verifying results, and most importantly for getting an explanation. Even though explainable AI is complex, it will be one of the focused research areas in the future.

Distrust, unfairness, bias and ethical ramifications of automated ML decisions are now increasingly common.

Imagine an advanced fighter aircraft is patrolling a hostile conflict area and a bogie suddenly appears on radar accelerating aggressively at them. The pilot, with the assistance of an Artificial Intelligence co-pilot, has a fraction of a second to decide what action to take – ignore, avoid, flee, bluff, or attack.  
The costs associated with False Positive and False Negative are substantial – a wrong decision that could potentially provoke a war or lead to the death of the pilot.  What is one to do…and why?

A false positive state is when the intrusion detection system (IDS) identifies an activity as an attack but the activity is acceptable behavior. A false positive is a false alarm. A false negative state is the most serious and dangerous state. This is when the IDS identifies an activity as acceptable when the activity is actually an attack.
A false positive is an outcome where the model incorrectly predicts the positive class. And a false negative is an outcome where the model incorrectly predicts the negative class.

In application security testing, false positives alone don’t determine the full accuracy. False positives are just one of the four aspects that determine its accuracy – the other three being ‘true positives,’ ‘true negatives,’ and ‘false negatives.’

False Positives (FP): Tests with fake vulnerabilities that were incorrectly reported as vulnerable

True Positives (TP): Tests with real vulnerabilities that were correctly reported as vulnerable

False Negatives (FN): Tests with real vulnerabilities that were not correctly reported as vulnerable

True Negatives (TN): Tests with fake vulnerabilities that were correctly not reported as vulnerable

Therefore, a true positive rate (TPR) is the rate at which real vulnerabilities were reported, correctly. A false positive rate (FPR) is the rate at which fake vulnerabilities were reported as real, incorrectly.
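To make the four outcome counts and the two rates concrete, here is a minimal sketch assuming scikit-learn's confusion matrix, with a made-up set of test verdicts used purely for illustration.

```python
# Toy illustration of TP/FP/FN/TN and the derived TPR/FPR.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]   # 1 = real vulnerability, 0 = fake
y_pred = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]   # 1 = reported as vulnerable by the tool

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)   # true positive rate: share of real vulnerabilities correctly reported
fpr = fp / (fp + tn)   # false positive rate: share of fake vulnerabilities reported as real
print(f"TP={tp} FP={fp} FN={fn} TN={tn}  TPR={tpr:.2f} FPR={fpr:.2f}")
```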

Explainable Artificial Intelligence (XAI) is critical for physicians, engineers, technicians, physicists, chemists, scientists and other specialists whose work is governed by the exactness of the model’s results, and who simply must understand and trust the models and modeling results. XAI is a legal mandate in regulated verticals such as banking, insurance, telecommunications and others. For AI to take hold in healthcare, it has to be explainable.

There is no mature auditing framework in place for AI, nor any AI-specific regulations, standards or mandates. Precedents don’t exist. Auditability, explainability, transparency and replicability (reproducibility) are often suggested as means of avoiding bias.

Explainability is intrinsically challenging because explanations are often incomplete: they omit things that cannot be explained understandably. Algorithms are inherently challenging to explain. Take, for instance, algorithms using “ensemble” methodologies. Explaining how one model works is hard enough. Explaining how several models work both individually and together is exponentially more difficult.

Transparency is usually a good thing. However, if it requires disclosing source code or the engineering details underpinning an AI application, it could raise intellectual property concerns. And again, transparency about something that may be unexplainable in laymen’s terms would be of limited use.




Many AI algorithms are really black boxes: partially, or not understood both by those who create them and those who interact with them. Obviously, this is problematic: there are risks both for the companies and organizations that deploy these AIs, and the people who interact with them. More explainable AIs seem to be in everyone's best interests. Nevertheless, good intentions and practice often clash: there are real, pragmatic reasons why many AIs are not engineered in such a way that they are easily explained.

A model can be a black box for one of two reasons: (a) the function that the model computes is far too complicated for any human to comprehend, or (b) the model may in actual fact be simple, but its details are proprietary and not available for inspection.

Machine learning is a subset of Artificial Intelligence (AI) that focuses on getting machines to make decisions by feeding them data.


Users need to know the “whys” behind the workings, such as why an algorithm reached its recommendations—from making factual findings with legal repercussions to arriving at business decisions, such as lending, that have regulatory repercussions—and why certain factors (and not others) were so critical in a given instance.

As domains like healthcare look to deploy artificial intelligence and deep learning systems, where questions of accountability and transparency are particularly important, if we’re unable to properly deliver improved interpretability, and ultimately explainability, in our algorithms, we’ll seriously be limiting the potential impact of artificial intelligence.

There are 8 underlying reasons why an AI solution can become hard or impossible to explain.
  
Reason 1: The way data is generated is not understood
The base resource that machine learning engineers work with is data. However, the exact meaning and source of this data is often nebulous, and prone to misinterpretation. Data might come from a CRM, be self-reported and collected through a survey, purchased from a third-party provider, ... To make matters worse, machine learning engineers often only have a label to work with, and no further details. For example, we could have a dataset that contains a user for each row, and one column named post_count. A seasoned machine learning engineer will immediately start asking questions: count of posts since when? Does this include deleted posts? What is the exact definition of a post? Sadly, while answering this for a single column is often doable (but resource-intensive), answering it for thousands of columns is both extremely time-consuming and complex.

This brings us to our second underlying reason...

Reason 2: The data given to an algorithm is feature-rich
In a quest to have more predictive power, and thanks to the ever growing computational power of our computers, most machine learning practitioners tend to work with very large, very feature-rich datasets. By feature-rich, we mean that for every observation (e.g. a person whose personality we want to predict, our row in our previous example), we have many different types of data (e.g. timestamped posts, their interactions with other users, their signup date, ..., our columns in our previous example). It's quite common to have thousands (and many, many more) different types of data in many machine learning problems.

Reason 3: The way data is processed is complex
Machine learning engineers often don't just take the data as such and feed it to an algorithm; they process it ahead of time. Data can be enriched (creating additional data types from existing ones, such as turning a date into another variable that says whether it's a national holiday or not), combined (such as reducing the output of many sensors to just a few signals) and linked (by getting data from other data sources). Each of these operations brings additional complexity in understanding and explaining the base data the algorithm is learning from.

Reason 4: The way additional training data is generated (augmentation) is complex
Many use cases of machine learning allow for the generation of additional training data, called augmentation. However, these generative approaches to getting more and better training data can often be complex, and modify the learnings of the algorithm in subtle, unintuitive ways.

Reason 5: The algorithms that are used don't balance complexity and explanatory power (regularization)
It's often difficult to balance the predictive explanatory power of a model and the complexity of a model. Luckily, there is a slew of techniques available today, called "regularization" techniques, that do just that for machine learning engineers. These techniques weigh the cost of adding complexity against the additional explanatory power that this complexity brings, and attempt to strike a good balance. The under- or mis-application of regularization in models can lead to very, very complex models.
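Here is a minimal sketch of that trade-off, assuming scikit-learn: the same degree-12 polynomial features are fit once without any penalty and once with an L2 (ridge) penalty that charges a cost for complexity. The noisy sine-wave data and the penalty strength are assumptions chosen purely for illustration.

```python
# Illustrative sketch: regularization trading model complexity for stability.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(scale=0.3, size=30)

unregularized = make_pipeline(PolynomialFeatures(degree=12, include_bias=False),
                              LinearRegression()).fit(x, y)
regularized = make_pipeline(PolynomialFeatures(degree=12, include_bias=False),
                            Ridge(alpha=1.0)).fit(x, y)

# The unpenalized fit chases the noise with huge coefficients; ridge keeps them small.
print("max |coef|, no penalty:", np.abs(unregularized[-1].coef_).max())
print("max |coef|, ridge     :", np.abs(regularized[-1].coef_).max())
```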

Reason 6: The algorithms that are used are allowed to learn unintuitive relationships (non-linearity)
Linear relationships are ones where an increase in one variable causes a set increase (or decrease) in another variable. For example, the relationship between signups to a new service and profits could be linear: for every new signup, your profit increases by a set amount. Some machine learning models can only learn linear relationships (such as the aptly named "linear regression"). These models tend to be easier to explain, but also miss out on a lot of nuance. For example, your profits might initially increase with every signup, but then decrease after a certain number of signups, because you need additional support staff for your service. While some models can learn these relationships, they are often much trickier to explain.

Reason 7: The algorithms that are used are combined (ensembling)
Many complex AI applications don't rely on a single algorithm, but a whole host of algorithms. This "chaining" of algorithms is called "ensembling". This practice is extremely common in machine learning today, but adds complexity: if a single algorithm is hard to explain, imagine having to explain the combined output of 50-100 algorithms working together.

Reason 8: There is no additional explanatory layer used

Rather than trying to make models explainable through simplicity (and as such, often sacrificing explanatory power), another approach has emerged in the last couple of years that aims to add a glass layer on top of black-box models, figuratively allowing us to peer inside them. These methods, such as Shapley Additive Explanations (SHAP), use both the data and the black-box model to explain the prediction generated by the model in question.
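A hedged sketch of that glass-layer idea, assuming the open-source `shap` package: a tree explainer assigns per-feature Shapley value estimates to a single prediction of a gradient-boosted black box. The dataset and model are assumptions chosen for illustration, not a prescription.

```python
# Illustrative sketch of SHAP as an explanatory layer over a black-box model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)                # fast, tree-specific explainer
shap_values = explainer.shap_values(data.data[:1])   # per-feature contributions, one row
print(dict(zip(data.feature_names, shap_values[0].round(3))))
```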

Neural networks are, by design, non-deterministic. Like human minds, though on a much more limited scale, they can make inferences, deductions, or predictions without revealing how. That's a problem for an institution whose algorithms determine whether to approve an applicant's request for credit. 

Laws in the U.S. and elsewhere require credit reporting agencies to be transparent about their processes. That becomes almost impossible if the financial institutions controlling the data on which they report can't explain what's going on for themselves.

So if an individual's credit application is turned down, it would seem the processes that led to that decision belong to a mechanism that's opaque by design.

Machine learning: Improved ML through faster structured prediction. Examples include Boltzmann machines, quantum Boltzmann machines, semi-supervised learning, unsupervised learning and deep learning;




Again, Explainable AI (XAI), Interpretable AI, or Transparent AI refer to techniques in artificial intelligence (AI) which can be trusted and easily understood by humans. It contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision


XAI is an implementation of the social right to explanation. Some claim that transparency rarely comes for free and that there are often trade-offs between the accuracy and the explainability of a solution.

Left unchecked, lack of transparency can lead to biased outcomes that put people and businesses at risk. The answer to this is explainable AI.

As AI algorithms increase in complexity, it becomes more difficult to make sense of how they work. In some cases, interpretable and explainable AI will be essential for business and the public to understand, trust and effectively manage ‘intelligent’ machines. Organisations that design and use algorithms need to take care in producing models that are as simple as possible, to explain how complex machines work.

To benefit from AI, businesses have to consider not just the mechanics of production ML but also managing any customer and/or community concerns. Left unaddressed, these concerns can materialize in customer churn, corporate embarrassment, brand value loss, or legal risk.

Trust is a complex and expansive topic, but at its core, there is a need to understand and explain ML and feel confident that the ML is operating correctly, within expected parameters and free from malicious intrusion. In particular, the decisions made by the production ML should be explainable - i.e. a human-interpretable explanation must be provided. 

This is becoming needed in regulations such as the GDPR’s Right to Explanation clause. Explainability is closely tied to fairness - the need to be convinced that the AI is not accidentally or intentionally rendering biased decisions.

Employed across industries, AI applications unlock smartphones using facial recognition, make driving decisions in autonomous vehicles, recommend entertainment options based on user preferences, assist the process of pharmaceutical development, judge the creditworthiness of potential homebuyers, and screen applicants for job interviews. 

AI automates, quickens, and improves data processing by finding patterns in the data, adapting to new data, and learning from experience. In theory, AI is objective—but in reality, AI systems are informed by human intelligence, which is of course far from perfect.

As AI becomes ubiquitous in its applications across industries, so does its potential for bias and discrimination. Understanding the inherent biases in underlying data and developing automated decision systems with explainable results will be key to addressing and correcting the potential for unfair, inaccurate, biased, and discriminatory AI systems.

Facebook says it performs a public service by mining digital traces to identify people at risk for suicide. Google says its smart home can detect when people are getting sick. Though these companies may have good intentions, their explanations also serve as smoke screens that conceal their true motivation: profit.

Informing and influencing consumers with traditional advertising is an accepted part of commerce. However, manipulating and exploiting them through behavioral ads that leverage their medical conditions and related susceptibilities is unethical and dangerous. It can trap people in unhealthy cycles of behavior and worsen their health. Targeted individuals and society suffer while corporations and their advertising partners prosper.

Emergent medical data can also promote algorithmic discrimination, in which automated decision-making exploits vulnerable populations such as children, seniors, people with disabilities, immigrants, and low-income individuals. Machine learning algorithms use digital traces to sort members of these and other groups into health-related categories called market segments, which are assigned positive or negative weights.

 For instance, an algorithm designed to attract new job candidates might negatively weight people who use wheelchairs or are visually impaired. Based on their negative ratings, the algorithm might deny them access to the job postings and applications. In this way, automated decision-making screens people in negatively weighted categories out of life opportunities without considering their desires or qualifications. 

Because emergent medical data are mined secretly and fed into black-box algorithms that increasingly make important decisions, they can be used to discriminate against consumers in ways that are difficult to detect. On the basis of emergent medical data, people might be denied access to housing, jobs, insurance, and other important resources without even knowing it

In recent years, advances in computer science have yielded algorithms so powerful that their creators have presented them as tools that can help us make decisions more efficiently and impartially. But the idea that algorithms are unbiased is a fantasy; in fact, they still end up reflecting human biases. And as they become ever more ubiquitous, we need to get clear on what they should — and should not — be allowed to do.

We need an algorithmic bill of rights to protect us from the many risks AI is introducing into our lives. One proposal is the Algorithmic Accountability Act: if passed, it would require companies to audit their algorithms for bias and discrimination.

Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.

Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.

Related to transparency is the demand for explainability. All algorithmic systems should carry something akin to a nutritional label laying out what went into them

Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.

A demand for the right to consent has been gathering steam as more people realize that images of their faces are being used to power facial recognition technology. NBC reported that IBM had scraped a million photos of faces from the website Flickr — without the subjects’ or photographers’ permission. The news sparked a backlash.

People may have consented to having their photos up on Flickr, but they hadn’t imagined their images would be used to train a technology that could one day be used to surveil them. Some states, like Oregon and Washington, are currently considering bills to regulate facial recognition. 

Imagine you’re applying for a new job. Your prospective bosses inform you that your interview will be conducted by a robot — a practice that’s already in use today. Regardless of what they tout as the benefits of this AI system, you should have the right to give or withhold consent. Permission must be granted, not taken for granted.



Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes.



THIS BIAS HAS NOTHING TO DO WITH VARIANCE..

SO I WILL CALL IT BIAS2 ..


AI Bias vs. Human Bias – highlights how artificial intelligence (AI), just like humans, is subject to bias2. This is not because AI determines something to be true or false for any illogical reasons. It’s because latent human bias2 may exist in machine learning, starting with the creation of an algorithm to the interpretation of data and subsequent interactions.

As algorithms become more complicated, fears of undetected bias2, mistakes, and miscomprehensions creeping into decision-making grow among policymakers, regulators, and the general public

When one examines a data sample, it is imperative to check whether the sample is representative of the population of interest. A non-representative sample where some groups are over- or under-represented inevitably introduces bias2 in the statistical analysis. A dataset may be non-representative due to sampling error and non-sampling errors.

Whereas error makes up all flaws in a study’s results, bias2 refers only to error that is systematic in nature. Whenever a researcher conducts a probability survey they must include a margin of error and a confidence level. This allows any person to understand just how much effect random sampling error could have on a study’s results.

Bias2 cannot be measured using statistics, because it comes from the research process itself. Because of its systematic nature, bias2 slants the data in an artificial direction that will provide false information to the researcher. For this reason, eliminating bias2 should be the number one priority of all researchers.


 Sampling errors refer to the difference between a population value and a sample estimate that exists only because of the sample that happened to be selected. Sampling errors are especially problematic when the sample size is small relative to the size of the population. For example, suppose we sample 100 residents to estimate the average US household income
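As a minimal sketch of that example, with made-up log-normally distributed incomes standing in for real survey data, the margin of error on the sample mean follows from the standard error:

```python
# Toy illustration: sampling error and the margin of error for a sample mean.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=11, sigma=0.7, size=100)   # 100 hypothetical household incomes

mean = sample.mean()
std_err = sample.std(ddof=1) / np.sqrt(len(sample))    # standard error of the mean
margin_of_error = 1.96 * std_err                       # ~95% confidence level

print(f"estimated average income: {mean:,.0f} +/- {margin_of_error:,.0f} (95% CI)")
```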

Non-sampling errors are typically more serious and may arise from many different sources such as errors in data collection, non-response, and selection bias2. 

Typical examples include poorly phrased data-collection questions, web-only data collection that leave out people who don’t have easy access to the internet, over-representation of people that feel particularly strongly about a subject, and responses that may not reflect one’s true opinion.



In theory, AI is objective—but in reality, AI systems are informed by subjective human intelligence..  ML models are opaque and inherently biased  ..A machine learning algorithm gets its knowledge from data, and if data are somehow biased then the decisions made by the algorithm will be biased as well.

Machine learning systems are, by design, not rule-based. Indeed, their entire objective is to determine what the rules are or might be, when we don't know them to begin with. If human cognitive biases actually can imprint themselves upon machine learning, their only way into the system is through the data.

While algorithm bias2 occurs at the development stage, there are other places where it could affect the ML process as a whole, wherein established techniques can make a major difference. One such touchpoint is the data sampling stage. In short, when the machine model interacts with a data sample, the intent is for that sample to fully replicate the problem space that the machine will ultimately operate within.

However, there are instances where the sample does not fully convey the entire environment, and as such the model is not entirely prepared to accommodate its new settings with optimal flexibility. Consider, for example, a bicycle that is designed to perform on both mountainous terrain and roadways with equal ease, yet is only tested in mountainous conditions. In this case, the training data would have sample bias2 and the resulting model might not operate in both environments with equal optimization, because its training was incomplete and not comprehensive.

To avoid this, developers can follow myriad techniques to ensure that the sample data they utilize is congruent with the realistic population at hand. This will require taking multiple samples from said populations and testing them to gauge their representativeness before using them at the sampling stage.

For example, if you want to use AI to make recommendations on who best to hire, feed the algorithm data about successful candidates in the past, and it will compare those to current candidates and spit out its recommendations.

Whether the AI algorithms are themselves biased is also an open question. Machine-learning algorithms haven’t been optimized for any definition of fairness. They have been optimized to do a task.

Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Bias can emerge due to many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. 

Algorithmic bias is found across platforms, including but not limited to search engine results and social media platforms, and can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. 

The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the 2018 European Union's General Data Protection Regulation.

In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower sampling probability than others. It results in a biased sample, a non-random sample of a population (or non-human factors) in which all individuals, or instances, were not equally likely to have been selected. If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling.


While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. 

Take hiring as an example: If you give a computer a data set with 10 Palestinian Muslim candidates and 300 white Jewish candidates and ask it to predict the best person for the job, we all know what the results will be. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.

Medical sources sometimes refer to sampling bias as ascertainment bias



NEW VACCINES AND NEW GMO FOOD ARE FIRST TRIED OUT IN THIRD WORLD NATIONS, USING THE POPULATION AS GUINEA PIGS ..USING SOME ARTIFICIAL INTELLIGENCE BIASED ALGORITHMS... 

KILL OFF PALESTINIANS  AND  ROMA GYPSIES  –DON’T WASTE MONEY ON THEM..

Data sets about CONSCIOUS humans are particularly susceptible to bias, while data about the physical world are less susceptible.  Human-generated data is the biggest source of bias

Neural networks use deep learning algorithms, creating connections organically as they evolve. At this stage, AI programs become far more difficult to screen for traces of bias, as they are not running off a strict set of initial data parameters.

Data provides the building blocks in the learning phase of AI. Neural networks, machine learning, deep learning – they all have one thing in common: They need huge amounts of data to become better. AI can only outgrow itself if fed with enormous amounts of data

Humans typically select the data used to train machine learning algorithms and create parameters for the machines to "learn" from new data over time. Even without discriminatory intent, the training data may reflect unconscious or historic bias. For example, if the training data shows that people of a certain gender or race have fulfilled certain criteria in the past, the algorithm may "learn" to select those individuals at the exclusion of others.

Four factors drive public distrust of algorithmic decisions:-- 
Amplification of Biases: Machine learning algorithms amplify biases – systemic or unintentional – in the training data.

Opacity of Algorithms: Machine learning algorithms are black boxes for end users. This lack of transparency – irrespective of whether it’s intentional or intrinsic – heightens concerns about the basis on which decisions are made.

Dehumanization of Processes: Machine learning algorithms increasingly require minimal-to-no human intervention to make decisions. The idea of autonomous machines making critical, life-changing decisions evokes highly polarized emotions.

Accountability of Decisions: Most organizations struggle to report and justify the decisions algorithms produce and fail to provide mitigation steps to address unfairness or other adverse outcomes. Consequently, end-users are powerless to improve their probability of success in the future.

What happens with all that data? Tech companies feed our digital traces into machine learning algorithms and, like modern day alchemists turning lead into gold, transform seemingly mundane information into sensitive and valuable health data.

Machine learning finds patterns in data. ‘AI Bias’ means that it might find the wrong patterns - a system for spotting skin cancer might be paying more attention to whether the photo was taken in a doctor’s office. ML doesn’t ‘understand’ anything - it just looks for patterns in numbers, and if the sample data isn’t representative, the output won’t be either. 

Meanwhile, the mechanics of ML might make this hard to spot. The most obvious and immediately concerning place that this issue can come up is in human diversity, and there are plenty of reasons why data about people might come with embedded biases.

The ‘AI bias’ or ‘machine learning bias’ problem: a system for finding patterns in data might find the wrong patterns, and you might not realise.


Questions persist on how to handle biased algorithms, our ability to contest automated decisions, and accountability when machines make the decisions.   In reality, machine learning models reproduce the inequalities that shape the data they’re fed.


When the data are incomplete, incorrect, or outdated-- if there is insufficient data to make certain  conclusions, or the data are out of date, results will naturally be inaccurate. Unfortunately, biased data and biased parameters are the rule rather than the exception. Because data are produced by humans, the information carries all the natural human bias within it.

Researchers have  begun trying to figure out how to best deal with and mitigate bias, including whether it is possible to  teach ML systems to learn without bias;  however, this research is still in its nascent stages. For the  time being, there is no cure for bias in AI systems.

The use of historical data that is biased-- because ML systems use an existing body of data to identify patterns, any bias in that data is naturally reproduced.

When developers choose to include parameters that are proxies for known bias-- for example, although developers of an algorithm may intentionally seek to avoid racial bias by not including race as a parameter, the algorithm will still have racially biased results if it includes common proxies for race, like  income, education, or postal code.

When developers allow systems to conflate correlation with causation. Take credit scores as an example. People with a low income tend to have lower credit scores, for a variety of reasons. If an ML  system used to build credit scores includes the credit scores of your Facebook friends as a parameter, it will result in lower scores among those with low-income backgrounds, even if they have otherwise strong financial indicators, simply because of the credit scores of their friends.

Today, algorithmic decision-making is largely digital. In many cases it employs statistical methods. Before AI, algorithms were deterministic—that is, pre-programmed and unchanging. Because they are based in statistical modeling,  these algorithms suffer from the same problems as traditional statistics, such as poorly sampled data, biased data, and measurement errors.

Bias can  be perpetuated through a feedback loop if the model’s own biased predictions are repeatedly fed back into it, becoming its own biased source data for the next round of predictions. In the machine learning context, we no longer just face the risk of garbage in, garbage out—when there’s garbage in, more and more garbage may be generated through the ML pipeline if one does not monitor and address potential sources of bias.

One key to de-biasing data is to ensure that a representative sample is collected in the first place. Bias from sampling errors can be mitigated by collecting larger samples and adopting data collection techniques such as stratified random sampling.


Bias from non-sampling errors is much more varied and harder to tackle, but one should still strive to minimize these kinds of errors through means such as proper training, establishing a clear purpose and procedure for data collection, and conducting careful data validation.
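Here is a minimal sketch of stratified random sampling, assuming pandas and a made-up population with an 80/15/5 group split: drawing the sample within each group preserves the groups' shares, which a small simple random sample might not.

```python
# Illustrative sketch: a 10% stratified sample keeps each group's population share.
import pandas as pd

population = pd.DataFrame({
    "group": ["A"] * 800 + ["B"] * 150 + ["C"] * 50,   # hypothetical 80/15/5 split
    "value": range(1000),
})

sample = population.groupby("group").sample(frac=0.10, random_state=0)
print(sample["group"].value_counts(normalize=True))    # shares match the population
```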





Companies think AI is a neutral arbiter because it is a creation of science, but it is not. It is a reflection of humans, warts, beauty, and all. This is a high-consequence problem. Most AI systems need to see millions of examples to learn to do a task.

But using real-world data to train these algorithms means that historical and contemporary biases against marginalized groups get baked into the programs. It is humans who are biased, and the data we generate trains the AI to be biased. It is a human problem that humans need to take ownership of.

There are ways, however, to try to maintain objectivity and avoid bias with qualitative data analysis:

Use multiple people to code the data.
Have participants review your results.
Verify with more data sources.
Check for alternative explanations.
Review findings with peers.


AI is a two-edged sword: it can be used by the good and the bad, and biases can be amplified. Biases in the piled-up data will lead to biases in the understanding and outcomes of the AI systems built on it. These systems do not have common sense yet; computers are super intelligent, but only in narrow areas. They can create fake news and churn out fake images and fake narratives.


 IN REALITY G6 NATIONS ARE BEGGARS—AI  CONVERTS THEM INTO SUPER RICH NATIONS.

We are beginning to understand both the repercussions of using selective datasets and how AI algorithms can incorporate and exacerbate the unconscious biases of their developers. We create algorithms to detect patterns in data, and we often use a top-down approach; the AI is not able to intuit its way around certain problems or explain how it reached a conclusion. Furthermore, if the data is flawed by systematic historical biases, those biases will be replicated at scale.

To borrow a phrase: bias in, bias out.

We have approached AI development from the top-down, largely dictated by the viewpoints of developed nations and first-world cultures. No surprise then that the biases we see in the output of these systems reflect the unconscious biases of these perspectives.

Bias can be thought of as error caused by incorrect assumptions in the learning algorithm. Bias can also be introduced through the training data, if the training data is not representative of the population it was drawn from.

Diversifying data is certainly one step toward alleviating those biases, as it allows for more globalized inputs that may hold very different priorities and insights. But no amount of diversified data will fix all the issues if it is fed into a model with inherent biases.

Rather than top-down approaches that seek to impose a model on data that may be beyond its contexts, we should approach AI as an iterative, evolutionary system. If we flip the current model to be built-up from data rather than imposing upon it, then we can develop an evidence-based, idea-rich approach to building scalable AI-systems. The results could provide insights and understanding beyond our current modes of thinking.

A “top-down” approach recommends coding values in a rigid set of rules that the system must comply with. The other approach is often called “bottom-up”, and it relies on machine learning (such as inverse reinforcement learning) to allow AI systems to adopt our values by observing human behavior in relevant scenarios.

The other advantage to such a bottom-up approach is that the system could be much more flexible and reactive. It could adapt as the data changes and as new perspectives are incorporated.

Consider the system as a scaffold of incremental insights so that, should any piece prove inadequate, the entire system does not fail. We could also account for much more diversified input from around the globe, developing iterative signals to achieve cumulative models to which AI can respond.

Biased AI systems are likely to become an increasingly widespread problem as artificial intelligence moves out of the data science labs and into the real world. Researchers at IBM are working on automated bias-detection algorithms, which are trained to mimic human anti-bias processes we use when making decisions, to mitigate against our own inbuilt biases.

This includes evaluating the consistency with which we (or machines) make decisions. If different solutions are chosen for two problems whose fundamentals are similar, there may be bias for or against some of the non-fundamental variables. In human terms, this could emerge as racism, xenophobia, sexism or ageism.
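
This is not IBM's actual algorithm, but the consistency idea can be sketched as a simple "flip test": hold every fundamental variable fixed, flip only the protected attribute, and count how often the model's decision changes. The fitted model object and the binary 0/1 encoding of the protected column are assumptions for illustration.

```python
# Minimal sketch of a consistency ("flip") test against a fitted scikit-learn
# style classifier, where one column of X holds a binary protected attribute.
import numpy as np

def flip_test(model, X: np.ndarray, protected_idx: int) -> float:
    """Fraction of rows whose prediction changes when only the protected column is flipped."""
    X_flipped = X.copy()
    X_flipped[:, protected_idx] = 1 - X_flipped[:, protected_idx]   # assumes a 0/1 attribute
    return float(np.mean(model.predict(X) != model.predict(X_flipped)))

# A high flip rate means decisions hinge on a non-fundamental variable,
# which is exactly the kind of inconsistency described above.
```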

While this is interesting and vital work, the potential for bias to derail drives for equality and fairness runs deeper, to levels which may not be so easy to fix with algorithms.

A "top-down" approach may not produce solutions to every problem, and may even stifle innovation.
Biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent.

Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Injecting deliberate bias into algorithmic decision making could be devastatingly simple and effective. This might involve replicating or accelerating pre-existing factors that produce bias. Many algorithms are already fed biased data. Attackers could continue to use such data sets to train algorithms, with foreknowledge of the bias they contained. 

The plausible deniability this would enable is what makes these attacks so insidious and potentially effective. Attackers would surf the waves of attention trained on bias in the tech industry, exacerbating polarization around issues of diversity and inclusion.

The idea of “poisoning” algorithms by tampering with training data is not wholly novel. Top U.S. intelligence officials have warned that cyber attackers may stealthily access and then alter data to compromise its integrity. Proving malicious intent would be a significant challenge, making such attacks hard to address and therefore hard to deter.

Bias is a systemic challenge—one requiring holistic solutions. Proposed fixes to unintentional bias in artificial intelligence seek to advance workforce diversity, expand access to diversified training data, and build in algorithmic transparency (the ability to see how algorithms produce results).

As with technological advances throughout history, we must continue to examine how we implement algorithms in society and what outcomes they produce. Identifying and addressing bias in those who develop algorithms, and the data used to train them, will go a long way to ensuring that artificial intelligence systems benefit us all, not just those who would exploit them.

However, because machines can treat similarly-situated people and objects differently, research is starting to reveal some troubling examples in which the reality of algorithmic decision-making falls short of our expectations. Given this, some algorithms run the risk of replicating and even amplifying human biases, particularly those affecting protected groups.  

For example, automated risk assessments used by U.S. judges to determine bail and sentencing limits can generate incorrect conclusions, resulting in large cumulative effects on certain groups, like longer prison sentences or higher bails imposed on people of color.
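
Disparities of this kind are usually surfaced by disaggregating error rates by group. A minimal sketch, assuming you already have arrays of true outcomes, model predictions and group labels (the names in the example call are hypothetical):

```python
# Minimal sketch: compare false-positive rates across groups to surface the
# kind of disparity described above.
import numpy as np

def false_positive_rate_by_group(y_true, y_pred, group):
    """False-positive rate per group: truly low-risk people wrongly flagged as high risk."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)
        rates[g] = float(np.mean(y_pred[negatives] == 1)) if negatives.any() else float("nan")
    return rates

# Hypothetical call: false_positive_rate_by_group(reoffended, risk_flags, defendant_race)
# Large gaps between groups indicate the cumulative effects discussed above.
```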

Pre-existing human biases may creep in at different stages, such as the framing of the problem, the selection and preparation of input data, the tuning of model parameters and weights, and the interpretation of the model outputs, making the decision-making algorithms biased, whether intentionally or unintentionally.

Algorithms and data must be externally audited for bias and made available for public scrutiny whenever possible. Workplaces must be made more diverse to detect and prevent blind spots. Cognitive bias training must be required.

Regulations must be relaxed to allow use of sensitive data to detect and alleviate bias. Effort should be made to enhance algorithm literacy among users. Research on algorithmic techniques for reducing human bias in models should be encouraged.

Bias is the difference between a model’s estimated values and the “true” values for a variable.

Machine learning bias, also known as algorithm bias or AI bias,  occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions in the machine learning process. Three types of bias can be distinguished: information bias, selection bias, and confounding.
Three keys to managing bias when building AI:

Choose the right learning model for the problem.
Choose a representative training data set.
Monitor performance using real data.

Sample Bias/Selection Bias:   This type of bias rears its ugly head when the distribution of the training data fails to reflect the actual environment in which the machine learning model will be running.

If the training data covers only a small set of the things you are interested in, and you then test the model on something outside that set, it will get it wrong. It will be 'biased' based on the sample it was given. The algorithm is not wrong; it was not given enough different types of data to cover the space it is going to be applied in. That is a big factor in poor performance for machine learning algorithms. You have to get the data right.

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities.

Bias of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
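
That definition is easy to see in a quick simulation: the 'divide by n' sample variance systematically underestimates the true variance, while the 'divide by n-1' version does not.

```python
# Quick illustration of estimator bias: E[estimate] - true value.
# The "divide by n" sample variance is biased; the "divide by n-1" version is not.
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0
biased, unbiased = [], []
for _ in range(20_000):
    x = rng.normal(0.0, np.sqrt(true_var), size=10)   # small sample from N(0, 4)
    biased.append(x.var(ddof=0))                      # divides by n
    unbiased.append(x.var(ddof=1))                    # divides by n - 1
print("bias (ddof=0):", np.mean(biased) - true_var)   # about -0.4 (= -true_var / n)
print("bias (ddof=1):", np.mean(unbiased) - true_var) # about  0.0
```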

AI algorithms are built by humans; training data is assembled, cleaned, labeled and annotated by humans. Data scientists need to be acutely aware of these biases and how to avoid them through a consistent, iterative approach, continuously testing the model, and by bringing in well-trained humans to assist.

A “top-down” approach recommends coding values in a rigid set of rules that the system must comply with. It has the benefit of tight control, but does not allow for the uncertainty and dynamism AI systems are so adept at processing. 

The other approach is often called “bottom-up,” and it relies on machine learning (such as inverse reinforcement learning) to allow AI systems to adopt our values by observing human behavior in relevant scenarios. However, this approach runs the risk of misinterpreting behavior or learning from skewed data.

Top-down is inefficient and slow, but keeps a tight rein.
Bottom-up is flexible, but risky and bias-prone.

Solution:  Hybridise – Top-Down for Basic Norms, Bottom-Up for Socialization

We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.

Many of the standard practices in deep learning are not designed with bias detection in mind. Deep-learning models are tested for performance before they are deployed, creating what would seem to be a perfect opportunity for catching bias.

 But in practice, testing usually looks like this: computer scientists randomly split their data before training into one group that’s actually used for training and another that’s reserved for validation once training is done. That means the data you use to test the performance of your model has the same biases as the data you used to train it. Thus, it will fail to flag skewed or prejudiced results.
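
One hedged way around this, sketched below with hypothetical dataset and column names, is to report metrics not only on the usual random split but also on an independently collected audit set, disaggregated by group, so that skew shared by the training and validation data still has a chance to show up.

```python
# Minimal sketch: a random split inherits the training data's skew, so also
# evaluate on a separately collected check set and per subgroup.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def evaluate(data: pd.DataFrame, audit: pd.DataFrame, features: list, target: str, group: str):
    train, val = train_test_split(data, test_size=0.2, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(train[features], train[target])
    print("random-split validation:", accuracy_score(val[target], model.predict(val[features])))
    print("independent audit set:  ", accuracy_score(audit[target], model.predict(audit[features])))
    for g, part in audit.groupby(group):               # disaggregated check
        print(f"  audit accuracy for {g}:", accuracy_score(part[target], model.predict(part[features])))

# Hypothetical usage:
# evaluate(pd.read_csv("scraped_training_data.csv"), pd.read_csv("independent_audit_set.csv"),
#          features=["f1", "f2"], target="label", group="demographic_group")
```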

The best way to detect bias in AI is by cross-checking the algorithm you are using to see if there are patterns that you did not necessarily intend. Correlation does not always mean causation, and it is important to identify patterns that are not relevant so you can amend your dataset.

 One way you can test for this is by checking if there is any under- or overrepresentation in your data. If you detect a bias in your testing, then you must overcome it by adding more information to supplement that underrepresentation.
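
Checking representation can be as simple as comparing each group's share of the dataset against its share of a reference population; the groups and reference numbers below are made-up placeholders.

```python
# Minimal sketch: compare each group's share of the dataset with its share of
# the reference population (reference shares here are placeholders).
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str, reference_shares: dict) -> pd.DataFrame:
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        obs = float(observed.get(group, 0.0))
        rows.append({"group": group, "dataset_share": obs,
                     "population_share": expected, "gap": obs - expected})
    return pd.DataFrame(rows)

# Hypothetical numbers:
# representation_gap(df, "age_band", {"18-29": 0.25, "30-49": 0.35, "50+": 0.40})
# Large negative gaps flag the under-representation to fix with more data.
```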

While AI systems can get quite a lot right, humans are the only ones who can look back at a set of decisions and determine whether there are any gaps in the datasets, or oversights, that led to a mistake.

The use of AI in areas like criminal justice can also have devastating consequences if left unchecked.

AI is currently used in a black-box manner. In layman's terms, this means the only thing of value is its output, not its decision-making process. The reason for this is simple: the decision making of most AI models boils down to mathematical optimization over a set of probabilities.

“I optimized a mathematical function” is a bullshit explanation.  

Things have gotten so opaque that even seminal experts in a field are unable to explain why an AI model works. The field has taken a turn for the worse when physicists are attempting to explain AI models with quantum mechanics.



One practical compromise between the needs of XAI and the realities of current AI models is the glass box algorithm. A glass box model quantifies some form of uncertainty in its predictions, so that the user can understand when those predictions are unreliable.
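
A minimal sketch of that idea, using a Gaussian process regressor only as one convenient example of a model that reports its own uncertainty; the 0.5 threshold is an arbitrary placeholder, not a recommendation.

```python
# Minimal sketch of a "glass box" in the sense used above: a model that reports
# its own uncertainty, so unreliable predictions can be flagged or escalated.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 5, size=(40, 1))                      # training data only covers [0, 5]
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, 40)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X_train, y_train)

X_new = np.array([[2.5], [9.0]])                               # in-range vs far outside the data
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    flag = "OK" if s < 0.5 else "UNRELIABLE - refer to a human"
    print(f"x={x:.1f}: prediction={m:.2f} +/- {s:.2f} -> {flag}")
```
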
Investors using black box methods conceal their true risk under the guise of proprietary technology, leaving regulators and investors without a true picture of operations, which is needed to assess risk accurately.

Hedge funds and some of the world’s largest investment managers now routinely use a black box or black box like model to manage their complicated investment strategies.

Depending on what algorithms are used, it is possible that no one, including the algorithm’s creators, can easily explain why the model generated the results that it did.

The same problem is relevant in the banking industry as well. If regulators ask how an AI system reached a conclusion with regard to a banking problem, banks should be able to explain it.

For example, if an AI solution dealing with anti-money-laundering compliance flags anomalous behaviour or suspicious activity in a transaction, the bank using the solution should be able to explain why the solution arrived at that decision. Such an audit is not possible with a black-box AI model.

The main problem with a black box model is its inability to identify possible biases in the machine learning algorithms. Biases can come through prejudices of designers and faulty training data, and these biases lead to unfair and wrong decisions. Bias can also happen when model developers do not implement the proper business context to come up with legitimate outputs.

AI-powered algorithms are increasingly used for decisions that affect our daily lives. Therefore, if an algorithm runs awry, the consequences can be disastrous. For a company it can cause serious reputational damage and lead to fines of tens of millions of dollars.

In the banking industry, which is subject to stricter regulatory oversight across the globe, an incorrect decision can cost billions of dollars for an institution. If a bank wants to employ AI, it is imperative for it to subject the particular solution to rigorous, dynamic model risk management and validation. 
The bank must ensure that the proposed AI solution has the required transparency depending on the use case.

Worst of all, it may hurt customers, for instance by unintentionally treating them unfairly if there are biases in the algorithm or training data. This may lead to a serious breach of trust, which can take decades to rebuild.

Black box AI complicates the ability for programmers to filter out inappropriate content and measure bias, as developers can't know which parts of the input are weighed and analyzed to create the output.

Explainable AI or interpretable AI or transparent AI deals with techniques in artificial intelligence which can make machine learning algorithms trustworthy and easily understandable by humans. Explainability has emerged as a critical requirement for AI in many cases and has become a new research area in AI.


It is mandatory that banks exercise the necessary oversight to prevent their AI models from becoming black boxes. As of now, AI use cases are mostly in low-risk banking environments, where human beings still take the final decision, with machines just providing valuable assistance in decision making.

 In future, banks will be under pressure to remove some of the human oversight for cost savings amid increasing scale of operations. At that point, banks cannot run with risky black box models that can lead to inefficiencies and risks. 

They need to ensure that their AI solutions are trustworthy and have the required transparency to satisfy internal and external audits. In short, the bright future of AI in banking could be assured only through explainable AI.

The first challenge in building an explainable AI system is to create a bunch of new or modified machine learning algorithms to produce explainable models. Explainable models should be able to generate an explanation without hampering the performance.

The best way to do so is to ensure a level of transparency in the algorithm’s innate structure. In particular, algorithms must be intrinsically traceable, giving enough visibility without impairing performance. With visibility, at the very least, humans will be able to stop and redirect AI decisions if the situation requires it.


Many of the XAI algorithms developed to date are relatively simple, like decision trees, and can only be used in limited circumstances.



Imperative programming. All programming can be understood, in the abstract sense, as a kind of specification. Imperative programming is a specification that tells a computer the exact and detailed sequence of steps to perform. These include conditions to test, processes to execute and alternative paths to follow (i.e. conditions, functions and loops). All of the more popular languages we have heard of (JavaScript, Java, Python, C, etc.) are imperative languages. When a programmer writes an imperative program, he formulates in his mind the exact sequence of tasks that need to be composed to arrive at a solution.


Declarative programming. This kind of programming does not burden the user with the details of the exact sequence of steps that must be performed. Rather, a user only needs to specify (or declare) the form of the final solution. The burden of figuring out the exact steps to execute to arrive at the specified solution is algorithmically discovered by the system. Spreadsheets are an example of this kind of programming. With Spreadsheets, you don’t specify how a computer should compute its results, rather you only need to specify the dependencies between the cells to compute a final result. 

You could have a long chain of dependencies and the spreadsheet will figure out which to calculate first. The query language SQL is also a well-known example of declarative programming: a SQL processor optimizes the retrievals it needs to execute to arrive at a table of data that satisfies the user-specified query. Other examples of declarative programming are Haskell (i.e. functional programming) and Prolog (i.e. logic programming). Mathematics, of the symbolic-computation kind, can also be classified as declarative programming.

Imperative code is where you explicitly spell out each step of how you want something done, whereas with declarative code you merely say what it is that you want done



Imperative - you instruct a machine what to do step by step. Example: assembly language.

Declarative - you instruct a machine on what you want to get, and it is supposed to figure out how to do it. Example: SQL.
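
The contrast is easiest to see with the same task written both ways. The sketch below answers "total sales per region" imperatively with an explicit loop and declaratively with SQL (via Python's built-in sqlite3), using made-up data.

```python
# The same question, "total sales per region", answered two ways.
import sqlite3

rows = [("north", 10), ("south", 7), ("north", 3), ("east", 5)]

# Imperative: spell out every step -- iterate, test, accumulate.
totals = {}
for region, amount in rows:
    if region not in totals:
        totals[region] = 0
    totals[region] += amount
print(totals)

# Declarative (SQL): state the result you want; the engine figures out the steps.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
db.executemany("INSERT INTO sales VALUES (?, ?)", rows)
print(dict(db.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")))
```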






Generative programming (the term is also used to describe program generators), or alternatively “organic programming”, has its origins in methods from connectionist-inspired artificial intelligence. It derives from Deep Learning, evolutionary algorithms and reinforcement learning. This kind of programming is best demonstrated visually by what are known as Generative Adversarial Networks.


Constraint programming, differential programming (i.e. Deep Learning) and generative programming share a common trait. The program or algorithm that discovers the solution is fixed; in other words, a programmer does not need to write the program that translates the specification into a solution. Unfortunately though, the fixed program is applicable only in narrow domains. This is known as the “No Free Lunch” theorem in machine learning: you can’t use a linear programming algorithm to solve an integer programming problem. Deep Learning, however, has a unique kind of general capability in that the same algorithm (i.e. stochastic gradient descent) appears to be applicable to many problems.



  1. https://timesofindia.indiatimes.com/india/500-indians-alerted-about-government-backed-phishing-google/articleshow/72285551.cms

    THE MAIN MOTIVE OF PHISHING EMAILS IS TO TRICK USERS INTO CLICKING EMAILS OR LINKS AND CAUSE MONETARY LOSS TO THEM.

    PHISHING ATTACKS ARE MADE BY CYBERCRIMINALS TO GRAB SENSITIVE INFORMATION (I.E. BANKING INFORMATION, CREDIT CARD INFORMATION, STEALING OF CUSTOMER DATA AND PASSWORDS) AND MISUSE THEM.

    HACKERS SPREAD THEIR PHISHING NET TO CATCH DIFFERENT TYPES OF PHISH. BE IT A SMALL PHISH OR A BIG WHALE, THEY ARE ALWAYS AT A PROFIT.

    PHISHING ATTACKS ARE DONE BY CYBERCRIMINALS, WHO TRICK THE VICTIM, BY CONCEALING THEIR IDENTITY BY MASKING THEMSELVES AS A TRUSTED IDENTITY AND LURING THEM INTO OPENING DECEPTIVE EMAILS FOR STEALING SENSITIVE INFORMATION. THESE ATTACKS ARE SUCCESSFUL BECAUSE OF LACK OF SECURITY KNOWLEDGE, AMONGST THE MASSES. IN SHORT, PHISHING ATTACK IS A DISGUISED ATTACK MADE BY HACKER IN A VERY SOPHISTICATED WAY.

    ON THE CONTRARY PHISHING SCAMS ARE THOSE WHEREIN THOUSANDS OF USERS ARE TARGETED AT A TIME BY CYBERCRIMINALS. FOR E.G. FAKE GOOGLE MAIL’S LOGIN PAGE IS CREATED AND EMAILS ARE SENT STATING TO CHECK THEIR ACCOUNTS. HUGE SCAMS LEAD TO HUGE LOSSES. SURVEYS SHOW A PHISHING INCREASE OF 250 PER CENT APPROXIMATELY, AS PER MICROSOFT. CHECK OUT THE DETAILS.

    THERE ARE MANY TYPES OF PHISHING ATTACKS AND PHISHING SCAMS CARRIED OUT BY HACKERS. A FEW OF THEM ARE:

    EMAIL PHISHING:
    MANY BUSINESS OWNERS ARE UNAWARE ABOUT THE INSECURE AND FRAUD LINKS AND EMAILS. FOR E.G. THE VICTIM GETS AN E-MAIL FROM THE HACKER TO CHECK SOME UNKNOWN TRANSACTIONS IN THEIR BUSINESS BANK ACCOUNT, WITH A FAKE LINK ATTACHED TO A SITE WHICH IS ALMOST AS GOOD AS REAL. WITHOUT THINKING FOR A SECOND, THE VICTIM OPENS THE FAKE LINK AND ENTERS THE ACCOUNT DETAILS AND PASSWORDS. THAT’S IT. YOU ARE ATTACKED.

    SPEAR PHISHING:
    SPEAR PHISHING IS AN EMAIL ATTACK DONE BY A FOE PRETENDING TO BE YOUR FRIEND. TO MAKE THEIR ATTACK SUCCESSFUL, THESE FRAUDSTERS INVEST IN A LOT OF TIME TO GATHER SPECIFIC INFORMATION ABOUT THEIR VICTIMS; I.E. VICTIM’S NAME, POSITION IN COMPANY, HIS CONTACT INFORMATION ETC.

    THEY LATER CUSTOMISE THEIR EMAILS, WITH THE GATHERED INFORMATION, THUS TRICKING THE VICTIM TO BELIEVE THAT THE EMAIL IS SENT FROM A TRUSTWORTHY SOURCE.

    FAKE URL AND EMAIL LINKS ARE ATTACHED IN THE EMAIL ASKING FOR PRIVATE INFORMATION. SPEAR PHISHING EMAILS ARE TARGETED TOWARDS INDIVIDUALS AS WELL AS COMPANIES TO STEAL SENSITIVE INFORMATION FOR MAKING MILLIONS.

    DOMAIN SPOOFING:
    HERE THE ATTACKER FORGES THE DOMAIN OF THE COMPANY, TO IMPERSONATE ITS VICTIMS. SINCE THE VICTIM RECEIVES AN EMAIL WITH THE SAME DOMAIN NAME OF THE COMPANY, THEY BELIEVE THAT IT’S FROM TRUSTED SOURCES, AND HENCE ARE VICTIMISED.

    A FEW YEARS AGO THERE WERE ONLY 2 TYPES OF PHISHING ATTACKS.

    EMAIL PHISHING & DOMAIN SPOOFING. EITHER THE EMAIL NAME WAS FORGED, OR THE DOMAIN NAME WAS FORGED TO ATTACK VICTIMS. BUT AS TIME PASSES, CYBERCRIMINALS COME UP WITH VARIOUS OTHER TYPES OF ATTACKS, WHICH ARE MENTIONED BELOW:
    WHALING:
    WHALING PHISHING ATTACK OR CEO FRAUD AS THE NAME SUGGESTS ARE TARGETED ON HIGH PROFILE INDIVIDUALS LIKE CEO, CFO, COO OR SENIOR EXECUTIVES OF A COMPANY. THE ATTACK IS ALMOST LIKE SPEAR PHISHING; THE ONLY DIFFERENCE IS THAT THE TARGETS ARE LIKE WHALES IN A SEA AND NOT FISH. HENCE THE NAME “WHALING” IS GIVEN FOR THESE PHISHING ATTACKS.

    FRAUDSTERS TAKE MONTHS TO RESEARCH THESE HIGH VIPS, THEIR CONTACTS AND THEIR TRUSTED SOURCES, FOR SENDING FAKE EMAILS TO GET SENSITIVE INFORMATION, AND LATER STEAL IMPORTANT DATA AND CASH THUS HAMPERING THE BUSINESS. SINCE THEY TARGET SENIOR MANAGEMENTS, THE BUSINESS LOSSES CAN BE HUGE WHICH MAKES WHALING ATTACKS MORE DANGEROUS.

    VISHING:
    VOIP (VOICE) + PHISHING = VISHING.

    TILL NOW PHISHING ATTACKS WERE MADE BY SENDING EMAILS. BUT WHEN ATTACKS ARE DONE BY TARGETING MOBILE NUMBERS, IT’S CALLED VISHING OR VOICE PHISHING.


    CONTINUED TO 2-
    1. CONTINUED FROM 1--

      IN VISHING ATTACKS, THE FRAUDSTERS CALL ON MOBILE, AND ASK FOR PERSONAL INFORMATION, POSING THEMSELVES AS A TRUST-WORTHY IDENTITY. FOR E.G. THEY MAY PRETEND TO BE A BANK EMPLOYEE, EXTRACT BANK ACCOUNT NUMBERS, ATM NUMBERS OR PASSWORDS, AND ONCE YOU HAVE HANDED THAT INFORMATION, IT’S LIKE GIVING THESE THIEVES, ACCESS TO YOUR ACCOUNTS AND FINANCES.

      SMISHING:
      SMS + PHISHING = SMISHING.

      JUST LIKE VISHING, MODE OF SMISHING ATTACKS IS ALSO RELATED TO MOBILES. HERE THE ATTACKER SENDS A SMS MESSAGE TO THE TARGET PERSON, TO OPEN A LINK OR AN SMS ALERT. ONCE THEY OPEN THE FAKE MESSAGE OR ALERT, THE VIRUS OR MALWARE IS INSTANTLY DOWNLOADED IN THE MOBILE. IN THIS WAY, THE ATTACKER CAN GET ALL THE DESIRED INFORMATION STORED ON YOUR MOBILE, USEFUL FOR STEALING YOUR MONEY.

      CLONE PHISHING:
      CLONE MEANS DUPLICATE OR IDENTICAL. GIVING JUSTICE TO THE NAME, CLONE PHISHING IS WHEN AN EMAIL IS CLONED BY THE FRAUDSTER, TO CREATE ANOTHER IDENTICAL AND PERFECT EMAIL TO TRAP EMPLOYEES.

      SINCE IT’S A PERFECT REPLICA OF THE ORIGINAL ONE, FRAUDSTERS TAKE ADVANTAGE OF ITS LEGITIMATE LOOK AND ARE SUCCESSFUL IN THEIR MALICIOUS INTENTIONS.

      SEARCH ENGINE PHISHING:
      THIS IS A NEW TYPE OF PHISHING WHEREIN THE FRAUDSTER MAKES A WEBSITE COMPRISING ATTRACTIVE BUT FAKE PRODUCTS, FAKE SCHEMES OR FAKE OFFERS TO ATTRACT CUSTOMERS. THEY EVEN TIE UP WITH FRAUDULENT BANKS FOR FAKE INTEREST SCHEMES. THEY GET THEIR WEBSITE INDEXED BY SEARCH ENGINES AND THEN WAIT FOR THEIR PREY.

      ONCE A CUSTOMER VISITS THEIR PAGE AND ENTERS THEIR PERSONAL INFORMATION TO PURCHASE PRODUCT, OR FOR ANY OTHER PURPOSE, THEIR INFORMATION GOES IN THE HANDS OF FRAUDSTERS, WHO CAN CAUSE THEM HUGE DAMAGES.

      WATERING HOLE PHISHING:
      IN THIS TYPE OF PHISHING, THE ATTACKER KEEPS A CLOSE WATCH ON THEIR TARGETS. THEY OBSERVE THE SITES WHICH THEIR TARGETS USUALLY VISIT AND INFECT THOSE SITES WITH MALWARE. IT’S A WAIT AND WATCH SITUATION, WHEREIN THE ATTACKER WAITS FOR THE TARGET TO RE-VISIT THE MALICIOUS SITE. ONCE THE TARGETED PERSON OPENS THE SITE AGAIN, MALWARE IS INFECTED IN THE COMPUTER OF THE PERSON, WHICH GRABS ALL THE REQUIRED PERSONAL DETAILS OR CUSTOMER INFORMATION LEADING TO DATA BREACH.

      THOUGH THE CYBERHACKERS WHO TARGET PHISHING ATTACKS ON INDIVIDUALS OR COMPANIES ARE MASTER MINDS, THERE ARE CERTAIN PRECAUTIONARY MEASURES, WHICH CAN PREVENT THEM FROM SUCCEEDING. LET’S HAVE A LOOK.

      PRECAUTIONS & PREVENTIONS OF PHISHING ATTACKS:--
      RE-CHECK URL BEFORE CLICKING UNKNOWN OR SUSPICIOUS LINKS
      DO NOT OPEN SUSPICIOUS EMAILS OR SHORT LINKS
      CHANGE PASSWORDS FREQUENTLY
      EDUCATE AND TRAIN YOUR EMPLOYEES FOR IDENTIFYING AND CEASING PHISHING ATTACKS
      RE-CHECK FOR SECURED SITES; I.E. HTTPS SITES
      INSTALL LATEST ANTI-VIRUS SOFTWARE, ANTI-PHISHING SOFTWARE AND ANTI-PHISHING TOOLBARS
      DON’T INSTALL ANYTHING FROM UNKNOWN SOURCES
      ALWAYS OPT FOR 2-FACTOR AUTHENTICATION
      TRUST YOUR INSTINCTS
      UPDATE YOUR SYSTEMS WITH LATEST SECURITY MEASURES
      INSTALL WEB-FILTERING TOOLS FOR MALICIOUS EMAILS
      USE SSL SECURITY FOR ENCRYPTION
      REPORT PHISHING ATTACKS AND SCAMS TO APWG (ANTI-PHISHING WORKING GROUP)

      AI PROVIDES A LEVEL OF PROTECTION IN THE CYBERSECURITY REALM THAT IS UNFEASIBLE FOR HUMAN OPERATORS.. GOOGLE USES MACHINE LEARNING TO WEED OUT VIOLENT IMAGES, DETECT PHISHING AND MALWARE, AND FILTER COMMENTS. THIS SECURITY AND FILTERING ARE OF AN ORDER OF MAGNITUDE AND THOROUGHNESS THAT NO HUMAN-BASED EFFORT COULD EQUAL.

      ONE OF THE MOST NOTORIOUS PIECES OF CONTEMPORARY MALWARE – THE EMOTET TROJAN – IS A PRIME EXAMPLE OF A PROTOTYPE-AI ATTACK. EMOTET’S MAIN DISTRIBUTION MECHANISM IS SPAM-PHISHING, USUALLY VIA INVOICE SCAMS THAT TRICK USERS INTO CLICKING ON MALICIOUS EMAIL ATTACHMENTS.

      THE EMOTET AUTHORS HAVE RECENTLY ADDED ANOTHER MODULE TO THEIR TROJAN, WHICH STEALS EMAIL DATA FROM INFECTED VICTIMS. THE INTENTION BEHIND THIS EMAIL EXFILTRATION CAPABILITY WAS PREVIOUSLY UNCLEAR, BUT EMOTET HAS RECENTLY BEEN OBSERVED SENDING OUT CONTEXTUALIZED PHISHING EMAILS AT SCALE.


      CONTINUED TO 3-

    2. CONTINUED FROM 2--

      THIS MEANS IT CAN AUTOMATICALLY INSERT ITSELF INTO PRE-EXISTING EMAIL THREADS, ADVISING THE VICTIM TO CLICK ON A MALICIOUS ATTACHMENT, WHICH THEN APPEARS IN THE FINAL, MALICIOUS EMAIL. THIS INSERTION OF THE MALWARE INTO PRE-EXISTING EMAILS GIVES THE PHISHING EMAIL MORE CONTEXT, THEREBY MAKING IT APPEAR MORE LEGITIMATE.

      EMOTET IS A TROJAN THAT IS PRIMARILY SPREAD THROUGH SPAM EMAILS (MALSPAM). THE INFECTION MAY ARRIVE EITHER VIA MALICIOUS SCRIPT, MACRO-ENABLED DOCUMENT FILES, OR MALICIOUS LINK. ... EMOTET IS POLYMORPHIC, WHICH MEANS IT CAN CHANGE ITSELF EVERY TIME IT IS DOWNLOADED TO EVADE SIGNATURE-BASED DETECTION.

      ONCE EMOTET HAS INFECTED A HOST, A MALICIOUS FILE THAT IS PART OF THE MALWARE IS ABLE TO INTERCEPT, LOG, AND SAVE OUTGOING NETWORK TRAFFIC VIA A WEB BROWSER LEADING TO SENSITIVE DATA BEING COMPILED TO ACCESS THE VICTIM'S BANK ACCOUNT(S). EMOTET IS A MEMBER OF THE FEODO TROJAN FAMILY OF TROJAN MALWARE.

      ONCE ON A COMPUTER, EMOTET DOWNLOADS AND EXECUTES A SPREADER MODULE THAT CONTAINS A PASSWORD LIST THAT IT USES TO ATTEMPT TO BRUTE FORCE ACCESS TO OTHER MACHINES ON THE SAME NETWORK. ... THE EMAILS TYPICALLY CONTAIN A MALICIOUS LINK OR ATTACHMENT WHICH IF LAUNCHED WILL RESULT IN THEM BECOMING INFECTED WITH TROJAN.EMOTET..

      A BANKER TROJAN IS A MALICIOUS COMPUTER PROGRAM DESIGNED TO GAIN ACCESS TO CONFIDENTIAL INFORMATION STORED OR PROCESSED THROUGH ONLINE BANKING SYSTEMS. BANKER TROJAN IS A FORM OF TROJAN HORSE AND CAN APPEAR AS A LEGITIMATE PIECE OF SOFTWARE UNTIL IT IS INSTALLED ON AN ELECTRONIC DEVICE.

      EVERY DAY, ARTIFICIAL INTELLIGENCE ENABLES WINDOWS DEFENDER AV TO STOP COUNTLESS MALWARE OUTBREAKS IN THEIR TRACKS.

      YET THE CRIMINALS BEHIND THE CREATION OF EMOTET COULD EASILY LEVERAGE AI TO SUPERCHARGE THIS ATTACK. BY LEVERAGING AN AI’S ABILITY TO LEARN AND REPLICATE NATURAL LANGUAGE THROUGH ANALYSING THE CONTEXT OF THE EMAIL THREAD, THESE PHISHING EMAILS COULD BECOME HIGHLY TAILORED TO INDIVIDUALS.

      THIS WOULD MEAN THAT AN AI-POWERED EMOTET TROJAN COULD CREATE AND INSERT ENTIRELY CUSTOMIZED, MORE BELIEVABLE PHISHING EMAILS. CRUCIALLY, IT WOULD BE ABLE TO SEND THESE OUT AT SCALE, WHICH WOULD ALLOW CRIMINALS TO INCREASE THE YIELD OF THEIR OPERATIONS ENORMOUSLY.

      SPEAR PHISHING AGAIN---
      IN SPEAR PHISHING (TARGETED PHISHING), EMAILS WITH INFECTED ATTACHMENTS OR LINKS ARE SENT TO INDIVIDUALS OR ORGANISATIONS IN ORDER TO ACCESS CONFIDENTIAL INFORMATION. WHEN OPENING THE LINK OR ATTACHMENT, MALWARE IS RELEASED, OR THE RECIPIENT IS LED TO A WEBSITE WITH MALWARE THAT INFECTS THE RECIPIENT'S COMPUTER.

      DURING THE 2016 US PRESIDENTIAL CAMPAIGN, FANCY BEAR – A HACKER GROUP AFFILIATED WITH RUSSIAN MILITARY INTELLIGENCE ( SIC ) – USED SPEAR PHISHING TO STEAL EMAILS FROM INDIVIDUALS AND ORGANISATIONS ASSOCIATED WITH THE US DEMOCRATIC PARTY.

      THE ONLINE ENTITIES DCLEAKS AND GUCCIFER 2.0 LEAKED THE DATA VIA MEDIA OUTLETS AND WIKILEAKS TO DAMAGE HILLARY CLINTON'S CAMPAIGN. IN JULY 2018, SPECIAL COUNSEL ROBERT MUELLER INDICTED RUSSIAN INTELLIGENCE OFFICERS ALLEGED TO BE BEHIND THE ATTACK ( SIC) . ANOTHER STATE-SPONSORED RUSSIAN HACKER GROUP, COZY BEAR, HAS USED SPEAR PHISHING TO TARGET NORWEGIAN AND DUTCH AUTHORITIES. THIS PROMPTED THE DECISION TO COUNT THE VOTES FOR THE 2017 DUTCH GENERAL ELECTION BY HAND.

      AI-BASED SYSTEMS ARE ABLE TO ADAPT TO CONTINUOUSLY CHANGING THREATS AND CAN MORE EASILY HANDLE NEW AND UNSEEN ATTACKS. THE PATTERN AND ANOMALY SYSTEMS CAN ALSO HELP TO IMPROVE OVERALL SECURITY BY CATEGORIZING ATTACKS AND IMPROVING SPAM AND PHISHING DETECTION.

      RATHER THAN REQUIRING USERS TO MANUALLY FLAG SUSPICIOUS MESSAGES, THESE SYSTEMS CAN AUTOMATICALLY DETECT MESSAGES THAT DON'T FIT THE USUAL PATTERN AND QUARANTINE THEM FOR FUTURE INSPECTION OR AUTOMATIC DELETION. THESE INTELLIGENT SYSTEMS CAN ALSO AUTONOMOUSLY MONITOR SOFTWARE SYSTEMS AND AUTOMATICALLY APPLY SOFTWARE PATCHES WHEN CERTAIN PATTERNS ARE DISCOVERED.

      capt ajit vadakayil
      ..
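
A toy sketch of the kind of pattern-based filtering described in the comment above: a tiny text classifier learns what phishing mail tends to look like and scores new messages, so suspicious ones can be quarantined for review. The example emails and usage are entirely made up and are nothing like a production filter.

```python
# Toy sketch of pattern-based phishing filtering (made-up examples, not a real filter).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "verify your account now or it will be suspended, click this link",
    "urgent: confirm your banking password to avoid closure",
    "you have won a prize, send your card details to claim",
    "minutes of yesterday's operations meeting attached",
    "please review the cargo stowage plan for the next voyage",
    "team lunch moved to 1 pm on friday",
]
labels = [1, 1, 1, 0, 0, 0]   # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(emails, labels)

new_mail = ["click here to confirm your password immediately"]
prob = clf.predict_proba(new_mail)[0, 1]
print(f"phishing probability: {prob:.2f}")   # high-scoring mail gets quarantined for review
```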





  1. THE HONGKONG PROTESTS ARE FUNDED AND CONTROLLED BY THE JEWISH OLIGARCHY..

    EVEN CHINESE DO NOT KNOW THAT JEW MAO AND JEW MAURICE STRONG WERE DEEP STATE AGENTS...

    JEWS WHO HAVE MONOPOLISED THE MAFIA AND CRIME IN HONGKONG DO NOT WANT TO BE EXTRADITED TO THE CHINESE MAINLAND..

    MACAU GAMBLING IS FAR MORE THAN LAS VEGAS.. DRUG MONEY IS LAUNDERED HERE.

    ROTHSCHILD CONTROLLED PORTUGAL LEGALIZED GAMBLING IN MACAU IN 1850..

    ROTHSCHILD RULED INDIA..NOT THE BRITISH KING OR PARLIAMENT.. HE GREW OPIUM IN INDIA AND SOLD IT IN CHINA.. HIS DRUG MONEY WAS LAUNDERED IN HONGKONG HSBC BANK.

    KATHIAWARI JEW GANDHI WAS ROTHSCHILDs AGENT WHEN IT CAME TO SUPPORTING OPIUM CULTIVATION IN INDIA..

    http://ajitvadakayil.blogspot.com/2019/07/how-gandhi-converted-opium-to-indigo-in.html

    INDIAN FARMERS WHO REFUSED TO CULTIVATE OPIUM WERE SHIPPED OFF ENMASSE AS SLAVES ABROAD WITH FAMILY..

    http://ajitvadakayil.blogspot.com/2010/04/indentured-coolie-slavery-reinvented.html

    INDIAN AND AMERICAN OLIGARCHY ( CRYPTO JEWS ) WERE ALL DRUG RUNNERS OF JEW ROTHSCHILD..

    http://ajitvadakayil.blogspot.com/2010/11/drug-runners-of-india-capt-ajit.html

    http://ajitvadakayil.blogspot.com/2010/12/dirty-secrets-of-boston-tea-party-capt.html

    DRUG CARTELS OF COLOMBIA/ MEXICO USE HONGKONG TO LAUNDER THEIR DRUG MONEY..

    GAMBLING TOURISM IS MACAU'S BIGGEST SOURCE OF REVENUE, MAKING UP MORE THAN 54% OF THE ECONOMY. VISITORS ARE MADE UP LARGELY OF CHINESE NATIONALS FROM MAINLAND CHINA AND HONG KONG.

    HONGKONG IS NOW FLOODED WITH DRUGS .. DUE TO HIGH STRESS AT WORK, PEOPLE ARE ADDICTED .. HOUSE RENT IN HONGKONG IS VERY HIGH DUE TO THE JEWISH OLIGARCHS WHO CONTROL HONGKONG.

    IN 2012, HSBC HOLDINGS PAID US$ 1.9 BILLION TO RESOLVE CLAIMS IT ALLOWED DRUG CARTELS IN MEXICO AND COLOMBIA TO LAUNDER PROCEEDS THROUGH ITS BANKS. HSBC WAS FOUNDED BY ROTHSCHILD.

    CHINA'S EXCESSIVELY STRICT FOREIGN EXCHANGE CONTROLS ARE INDIRECTLY BREEDING MONEY LAUNDERING, PROVIDING A HUGE DEMAND FOR UNDERGROUND KOSHER MAFIA BANKS.

    FACTORY MANUFACTURERS CONVERT HONG KONG DOLLARS AND RENMINBI WITH UNDERGROUND BANKS FOR CONVENIENCE WHILE CASINOS IN MACAU OFFER RECEIPTS TO GIVE LEGITIMACY TO SUSPECT CURRENCY FLOWS.

    INDIA WAS THE NO 1 EXPORTER OF PRECURSOR CHEMICALS LIKE EPHEDRINE TO MEXICO FOR PRODUCING METH.. TODAY CHINA ( GUANGDONG ) HAS TAKEN OVER POLE POSITION..

    http://ajitvadakayil.blogspot.com/2017/02/breaking-bad-tv-serial-review-where.html

    EL CHAPO AND HIS DEPUTY IGNACIO "NACHO" CORONEL VILLARREAL USED HONG KONG TO LAUNDER BILLIONS OF DOLLARS..TO GET SOME IDEA WATCH NETFLIX SERIES “NARCOS MEXICO” AND “EL CHAPO”.

    BALLS TO THE DECOY OF "FREEDOM " FOR HONGKONG CITIZENS.. IT IS ALL ABOUT FREEDOM FOR JEWISH MAFIA TO USE HONGKONG TO LAUNDER DRUG MONEY.

    JEW ROTHSCHILD COULD SELL INDIAN OPIUM IN CHINA ONLY BECAUSE THE CHINESE MAFIA AND SEA PIRATES WAS CONTROLLED BY HIM AND JEW SASSOON.

    COLOMBIAN/ MEXICAN DRUG CARTEL KINGS FEAR EXTRADITION TO USA.. SAME NOW WITH HONGKONG MONEY LAUNDERING MAFIA..

    https://ajitvadakayil.blogspot.com/2019/11/paradox-redemption-victory-in-defeat.html

    THE 2019 HONG KONG PROTESTS HAVE BEEN LARGELY DESCRIBED AS "LEADERLESS".. BALLS, IT IS 100% CONTROLLED BY JEWS

    PROTESTERS COMMONLY USED LIHKG, ( LIKE REDDIT ) AN ONLINE FORUM, AN OPTIONALLY END-TO-END ENCRYPTED MESSAGING SERVICE, TO COMMUNICATE AND BRAINSTORM IDEAS FOR PROTESTS AND MAKE COLLECTIVE DECISIONS ..

    THE KOSHER WEBSITE IS WELL-KNOWN FOR BEING THE ULTIMATE PLATFORM FOR DISCUSSING THE STRATEGIES FOR THE LEADERLESS ANTI-EXTRADITION BILL PROTESTS IN 2019..

    CONTINUED TO 2-







    1. CONTINUED FROM 1-

      HONGKONG PROTESTERS USE LIHKG TO MICROMANAGE STRIKE STRATEGIES , CALL FOR BACKUP OR ARRANGE LOGISTICS SUPPLIES FOR THOSE ON THE FRONT LINES OF CLASHES WITH POLICE.

      LIHKG CALLS ON RESIDENTS TO SKIP WORK AND CLASSES AND VANDALISE. HONGKONGERS STICK TO LIHKG AS POSTS ARE PREDOMINANTLY IN THEIR NATIVE TONGUE, CANTONESE.

      LIHKG IS A SAFE HAVEN FOR THESE PROTESTING PEOPLE CONTROLLED BY JEWSIH OLIGARCHS.

      AN ACCOUNT CAN ONLY BE CREATED WITH AN EMAIL ADDRESS PROVIDED BY AN INTERNET SERVICE PROVIDER OR HIGHER EDUCATION INSTITUTION, MEANING THE USER CANNOT HIDE THEIR IDENTITY FROM LIHKG.

      THE JEWISH OLIGARCHS KNOW THEIR PRIVATE ARMY. THE FORUM DOES NOT REQUIRE USERS TO REVEAL ANY PERSONAL INFORMATION, INCLUDING THEIR NAMES, SO THEY CAN REMAIN ANONYMOUS.

      LIHKG IS ALSO FERTILE GROUND FOR DOXXING PEOPLE NOT SUPPORTIVE OF THE MOVEMENT AGAINST THE EXTRADITION BILL. ONE POLICE OFFICER FOUND HIMSELF A TARGET OF PUBLIC MOCKERY WHEN HIS NAME AND PICTURE WERE LEAKED, ALONG WITH PRIVATE TINDER CONVERSATIONS REQUESTING SEXUAL FAVOURS IN A POLICE STATION.

      THE PHRASE “BE WATER, MY FRIEND”, ORIGINALLY SAID BY MARTIAL ARTS LEGEND BRUCE LEE, HAS BECOME A MANTRA FOR PROTESTERS, WHO HAVE TAKEN A FLUID APPROACH TO THEIR RALLIES.

      THE PHRASE HAS BEEN POPULARISED ON LIHKG AS A WAY TO PROVIDE ENCOURAGEMENT AND UNITE CITIZENS.

      INDIAN JOURNALISTS ARE ALL STUPID POTHOLE EXPERTS, RIGHT ?

      capt ajit vadakayil
      ..
  2. POOR AJIT DOVAL AND RAW

    THESE ALICES IN WONDERLAND DONT EVEN KNOW THAT URBAN NAXALS/ KASHMIRI SEPARATISTS / SPONSORING DEEP STATE NGOs ARE USING TELEGRAM FOR THEIR DESH DROHI PURPOSES..

    TELEGRAM WITH 210 MILLION ACTIVE USERS IS A CLOUD-BASED INSTANT MESSAGING AND VOICE OVER IP SERVICE. TELEGRAM CLIENT APPS ARE AVAILABLE FOR ANDROID, IOS, WINDOWS PHONE, WINDOWS NT, MACOS AND LINUX. USERS CAN SEND MESSAGES AND EXCHANGE PHOTOS, VIDEOS, STICKERS, AUDIO AND FILES OF ANY TYPE.

    THE DEEP STATE USES TELEGRAM FOR REGIME CHANGE.. TELEGRAM IS DUBBED AS A "JIHADI MESSAGING APP".

    ISIS WHICH WAS FUNDED ARMED AND CONTROLLED BY JEWISH DEEP STATE USED TELEGRAM..

    https://en.wikipedia.org/wiki/Blocking_Telegram_in_Russia

    LAUNCHED IN 2013, BY A ANTI-PUTIN RUSSIAN JEW , TELEGRAM COMPANY HAS MARKETED THE APP AS A SECURE MESSAGING PLATFORM IN A WORLD WHERE ALL OTHER FORMS OF DIGITAL COMMUNICATION SEEM TRACKABLE.

    IT HAS FEATURES SUCH AS END-TO-END ENCRYPTION (WHICH PREVENTS ANYONE EXCEPT THE SENDER AND RECEIVER FROM ACCESSING A MESSAGE), SECRET CHATROOMS, AND SELF-DESTRUCTING MESSAGES.

    USERS ON TELEGRAM CAN COMMUNICATE IN CHANNELS, GROUPS, PRIVATE MESSAGES, OR SECRET CHATS. WHILE CHANNELS ARE OPEN TO ANYONE TO JOIN (AND THUS USED BY TERRORIST GROUPS TO DISSEMINATE PROPAGANDA), SECRET CHATS ARE VIRTUALLY IMPOSSIBLE TO CRACK BECAUSE THEY’RE PROTECTED BY A SOPHISTICATED FORM OF ENCRYPTION.

    THE COMBINATION OF THESE DIFFERENT FUNCTIONS IN A SINGLE PLATFORM IS WHY GROUPS LIKE ISIS USE TELEGRAM AS A “COMMAND AND CONTROL CENTER”.. THEY CONGREGATE ON TELEGRAM, THEN THEY GO TO DIFFERENT PLATFORMS. THE INFORMATION STARTS IN THE APP, THEN SPREADS TO TWITTER, FACEBOOK.

    SECRET CHATS ARE PROTECTED BY END-TO-END ENCRYPTION. HOW THIS WORKS IS THAT EVERY USER IS GIVEN A UNIQUE DIGITAL KEY WHEN THEY SEND OUT A MESSAGE. TO ACCESS THAT MESSAGE, THE RECEIVER HAS TO HAVE A KEY THAT MATCHES THE SENDER’S EXACTLY, SO THAT MESSAGES FROM ANY ONE USER CAN ONLY BE READ BY THE INTENDED RECIPIENT.

    THIS MAKES IT ALMOST IMPOSSIBLE FOR MIDDLEMEN SUCH AS POLICE OR INTELLIGENCE AGENCIES TO ACCESS THE FLOW OF INFORMATION BETWEEN THE SENDER AND RECEIVER.

    EVEN IF POLICE CAN IDENTIFY WHO IS SPEAKING TO WHOM, AND FROM WHERE, THEY HAVE NO WAY OF KNOWING WHAT THEY’RE SAYING TO EACH OTHER. IN FACT, BECAUSE THE ENCRYPTION HAPPENS DIRECTLY BETWEEN THE TWO USERS, EVEN TELEGRAM ( BALLS , THEY KNOW ) ITSELF HAS NO WAY OF KNOWING WHAT’S IN THESE MESSAGES

    BEFORE A USER SENDS A MESSAGE IN A SECRET CHAT, THEY CAN CHOOSE TO SET A SELF-DESTRUCT TIMER ON IT, WHICH MEANS THAT SOME TIME AFTER THE MESSAGE HAS BEEN READ, IT AUTOMATICALLY AND PERMANENTLY DISAPPEARS FROM BOTH DEVICES.

    COMPARED WITH OTHER SOCIAL MEDIA PLATFORMS, TELEGRAM HAS EXTREMELY LOW BARRIERS TO ENTRY. ALL USERS HAVE TO DO TO SET UP AN ACCOUNT IS PROVIDE A CELLPHONE NUMBER, TO WHICH THE APP THEN SENDS AN ACCESS CODE.

    IT’S COMMON PRACTICE FOR TERRORISTS TO SUPPLY ONE CELLPHONE NUMBER TO SET UP THEIR ACCOUNT BUT USE ANOTHER TO ACTUALLY OPERATE THE ACCOUNT.

    THE SIM CARD YOU USE TO OPEN YOUR TELEGRAM ACCOUNT AND THE SIM CARD YOU ACTUALLY USE ON THE PHONE WITH THE APPLICATION DON’T HAVE TO BE THE SAME.

    NOT ONLY DOES THIS MAKE IT HARDER FOR LAW ENFORCEMENT OFFICIALS TO TRACK DOWN TERRORISTS THROUGH TELEGRAM, IT ALSO MAKES IT EASIER FOR TERRORISTS TO SET UP A NEW ACCOUNT ONCE THEY DISCOVER THEIR PREVIOUS ONE HAS BEEN EXPOSED TO THE POLICE.

    ANOTHER ATTRACTIVE FEATURE OF THE APP IS THAT IT’S REALLY QUITE HARD TO GET BOOTED OFF IT.

    TELEGRAM’S MESSAGING SERVICE IS POPULAR BECAUSE IT OFFERS A “SECRET CHAT” FUNCTION ENCRYPTED WITH TELEGRAM’S PROPRIETARY MTPROTO PROTOCOL.

    capt ajit vadakayil
    ..

  1. What has happened in Hong kong will be studied refined and applied globally. Chilling scenarios. Hope GOI also learns from this.
    1. “MONEY LAUNDERING” COVERS ALL KINDS OF METHODS USED TO CHANGE THE IDENTITY OF ILLEGALLY OBTAINED MONEY (I.E. CRIME PROCEEDS) SO THAT IT APPEARS TO HAVE ORIGINATED FROM A LEGITIMATE SOURCE.

      THE TECHNIQUES FOR LAUNDERING FUNDS VARY CONSIDERABLY AND ARE OFTEN HIGHLY INTRICATE.

      IN HONG KONG, CRIME PROCEEDS ARE GENERATED FROM VARIOUS ILLEGAL ACTIVITIES. THEY CAN BE DERIVED FROM DRUG TRAFFICKING, SMUGGLING, ILLEGAL GAMBLING, BOOKMAKING, BLACKMAIL, EXTORTION, LOAN SHARKING, TAX EVASION, CONTROLLING PROSTITUTION, CORRUPTION, ROBBERY, THEFT, FRAUD, COPYRIGHT INFRINGEMENT, INSIDER DEALING AND MARKET MANIPULATION.

      WHEN CRIME PROCEEDS ARE LAUNDERED, CRIMINALS WOULD THEN BE ABLE TO USE THE MONEY WITHOUT BEING LINKED EASILY TO THE CRIMINAL ACTIVITIES FROM WHICH THE MONEY WAS ORIGINATED.

      THE MONEY LAUNDERING MAFIA IN HONGKONG IS JEWISH CONTROLLED BY THE DEEP STATE..

      HONG KONG IS A MAJOR PIPELINE THROUGH WHICH INTERNATIONAL FRAUDSTERS, GLOBAL DRUG-TRAFFICKING CARTELS, PEOPLE-SMUGGLING GANGS AND ONLINE RACKETEERS FUNNEL THEIR ILL-GOTTEN GAINS.

      ONLINE FRAUD, INVESTMENT FRAUD, DRUGS, LOAN-SHARKING, BOOKMAKING, ILLEGAL GAMBLING, TAX EVASION AND CORRUPTION WERE ALL CRIMES ASSOCIATED WITH THE MONEY-LAUNDERING CASES IN HONGKONG

      HSBC WAS FOUNDED BY JEW ROTHSCHILD TO LAUNDER OPIUM DRUG MONEY

      http://ajitvadakayil.blogspot.com/2019/07/how-gandhi-converted-opium-to-indigo-in.html

      IN A LANDMARK CASE, THE HSBC BANK AGREED TO PAY A US$1.9 BILLION IN FINES IN 2012, AFTER ADMITTING IT KNOWINGLY MOVED HUNDREDS OF MILLIONS FOR MEXICAN DRUG CARTELS AND ILLEGALLY SERVED CLIENTS IN IRAN, MYANMAR, LIBYA, SUDAN AND CUBA IN VIOLATION OF US SANCTIONS.

      UNDER THE TERMS OF THE SETTLEMENT, FEDERAL PROSECUTORS AGREED TO DROP ALL CHARGES AFTER FIVE YEARS IF THE BANK PAID THE FINE, TOOK REMEDIAL ACTION AND AVOIDED COMMITTING NEW VIOLATIONS.

      http://ajitvadakayil.blogspot.com/2010/11/drug-runners-of-india-capt-ajit.html

      THE DEEP STATE ENSURED THAT AUTHORITIES FAILED TO PROSECUTE ANY HSBC SENIOR EXECUTIVES AND ALLOWED THE BANK ITSELF TO WALK AWAY WITH NO CRIMINAL RECORD.

      MOST OF THE PARSI JUDGES AND LAWYERS IN INDIA ARE DESCENDANTS OF DRUG RUNNERS IN THE PAYROLL OF SASSOON AND ROTHSCHILD.

      THOUSANDS OF FILIPINO DOMESTIC WORKERS IN HONG KONG DUPED INTO PAYING FOR BOGUS JOBS IN CANADA AND BRITAIN HAVE BEEN FRAUD VICTIMS AS WELL AS UNWITTING CONTRIBUTORS TO A MONEY-LAUNDERING SCHEME THAT AUTHORITIES HAVE IGNORED

      PEOPLE LINKED TO A JEWISH MAID AGENCY UNDER SCRUTINY USED INTERNATIONAL BANKS LOCALLY TO REPEATEDLY TRANSFER MILLIONS OF HONG KONG DOLLARS IN SMALL SUMS TO BURKINA FASO, MALAYSIA, NIGERIA AND TURKEY.

      INSTEAD OF DOING HIS JOB, AJIT DOVAL IS SITTING NEXT TO MODI IN ALL HIS FOREIGN JAUNTS.. AND BOTH ARE BABES IN THE WOODS WHEN IT COMES TO WORLD INTRIGUE..



THIS POST IS NOW CONTINUED TO PART 8, BELOW--




PSSSSTT--

WHEN A WOMAN FEELS THAT A MAN CAN IMPALE HER , AND LIFT HER OFF  THE GROUND USING SHEER PP ( PRICK POWER ) AND CRY  - “LOOK MAA NO HANDS” – SHE IS YOURS..

WHEN YOU ASK A WOMAN , WHAT TYPE OF MAN SHE PREFERS— SHE WILL GIVE HAJAAAAR BULLSHIT —SENSE OF HUMOUR/ POETICAL/ HUGE BANK BALANCE/ NATTY LOOKS / SENSITIVE/ CHIVALROUS ,  BLAH BLAH FUCKIN’ BLAH

MY LEFT BALL !


IN HER WILDEST DARK WET DREAMS SHE JUST NEEDS THE VIRILE CAVEMAN WITH SILVER HAIR..



CAPT AJIT VADAKAYIL
..

