
WHAT ARTIFICIAL INTELLIGENCE CANNOT DO , a grim note to the top 100 intellectuals of this planet , Part 12 - Capt Ajit Vadakayil



THIS POST IS CONTINUED FROM PART 11, BELOW—

.
OBJECTIVE AI CANNOT HAVE A VISION, IT CANNOT PRIORITIZE, IT CANT GLEAN CONTEXT, IT CANT TELL THE MORAL OF A STORY, IT CANT RECOGNIZE A JOKE, IT CANT DRIVE CHANGE, IT CANNOT INNOVATE, IT CANNOT DO ROOT CAUSE ANALYSIS, IT CANNOT DO DYNAMIC RISK ASSESSMENT, IT CANNOT SELF IMPROVE WITH EXPERIENCE, IT DOES NOT UNDERSTAND BASICS OF CAUSE AND EFFECT, IT CANNOT JUDGE SUBJECTIVELY TO VETO/ ABORT, IT CANNOT FOSTER TEAMWORK DUE TO RESTRICTED SCOPE, IT CANNOT EVEN SET A GOAL … IT CAN SPAWN GLOBAL FRAUD WITH DELIBERATE BLACK BOX ALGORITHMS, JUST A FEW AMONG MORE THAN 40 CRITICAL INHERENT DEFICIENCIES.



1. https://ajitvadakayil.blogspot.com/2019/08/what-artificial-intelligence-cannot-do.html
2. https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do.html
3. https://ajitvadakayil.blogspot.com/2019/10/what-artificial-intelligence-cannot-do_29.html
4. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do.html
5. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_4.html
6. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_25.html
7. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_88.html
8. https://ajitvadakayil.blogspot.com/2019/11/what-artificial-intelligence-cannot-do_15.html
9. https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_94.html
10. https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do.html
11. https://ajitvadakayil.blogspot.com/2019/12/what-artificial-intelligence-cannot-do_1.html
12. https://ajitvadakayil.blogspot.com/2020/02/what-artificial-intelligence-cannot-do.html



Anomaly detection is a technique used to identify unusual patterns that do not conform to expected behavior; such patterns are called outliers.

Anomaly detection (or outlier detection) is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.

Anomaly detection is applicable in a variety of domains, such as intrusion detection, fraud detection, fault detection, system health monitoring, event detection in sensor networks, and detecting ecosystem disturbances. It is often used in preprocessing to remove anomalous data from the dataset.

Anomaly detection is the technique of identifying rare events or observations which raise suspicions by being statistically different from the rest of the observations. Such “anomalous” behaviour typically translates into some kind of problem: credit card fraud, a failing server, a cyber attack, etc.


Anomalies broadly fall into three categories –

Point Anomaly: A tuple (record) in a dataset is a point anomaly if it lies far from the rest of the data.
Contextual Anomaly: An observation is a contextual anomaly if it is anomalous only within a particular context (for example, a spending spike that is normal during the holidays but anomalous otherwise).
Collective Anomaly: A set of data instances is a collective anomaly if the instances are anomalous when considered together, even though each one may look normal on its own.


Anomaly detection can be done in the following ways –


Supervised Anomaly Detection: This method requires a labeled dataset containing both normal and anomalous samples to construct a predictive model that classifies future data points. The most commonly used algorithms for this purpose are supervised neural networks, Support Vector Machines, the K-Nearest Neighbors classifier, etc.


Unsupervised Anomaly Detection: This method does not require any labeled training data and instead makes two assumptions about the data: only a small percentage of the data is anomalous, and any anomaly is statistically different from the normal samples. Based on these assumptions, the data is clustered using a similarity measure, and the data points which lie far from any cluster are considered to be anomalies.

Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal by looking for instances that seem to fit least to the remainder of the data set.
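
As a rough illustration of the clustering-based idea above, here is a minimal Python sketch (assuming numpy and scikit-learn are installed; the synthetic data, the three clusters and the 2% cut-off are illustrative assumptions, not prescriptions):

# Minimal sketch of clustering-based unsupervised anomaly detection.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 1.0, size=(500, 2)),    # bulk of the data (assumed normal)
    rng.uniform(-10, 10, size=(10, 2)),     # a few far-off samples
])

# Cluster the data, then flag points that lie unusually far from their cluster centre.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
centres = kmeans.cluster_centers_[kmeans.labels_]
dist = np.linalg.norm(X - centres, axis=1)

# Treat the most distant ~2% of points as anomalies (illustrative threshold).
threshold = np.quantile(dist, 0.98)
anomalies = np.where(dist > threshold)[0]
print("flagged indices:", anomalies)

The same recipe works with other similarity measures or density-based methods; the essential point is that "anomalous" is defined purely by distance from what the bulk of the data looks like.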

Since detecting anomalies is a fairly generic task, a number of different machine learning algorithms have been created to tailor the process to specific use cases.

Here are a few common types:--

Detecting suspicious activity in a time series, for example a log file. Here, the dimension of time plays a huge role in the data analysis to determine what is considered a deviation from normal patterns (a minimal rolling-window sketch follows this list).
Detecting credit card fraud based on a feed of transactions in a labeled dataset of historical frauds. In this type of supervised learning problem, we can train a classifier to classify a transaction as anomalous or fraudulent given that we have a historical dataset of known transactions, authentic and fraudulent.
Detecting a rare and unique combination of a real estate asset’s attributes — for instance, an apartment building from a certain vintage year and a rare unit mix. At Skyline AI, we use these kinds of anomalies to capture interesting rent growth correlations and track down interesting properties for investment.
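
For the first case in the list above, a minimal rolling-window sketch in Python (pandas assumed; the 30-point window and the 3-sigma cut-off are illustrative assumptions):

# Sketch of time-series anomaly detection with a rolling window.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
values = rng.normal(100.0, 5.0, size=200)
values[150] = 160.0                        # inject one suspicious spike
series = pd.Series(values)

rolling_mean = series.rolling(window=30, min_periods=10).mean()
rolling_std = series.rolling(window=30, min_periods=10).std()
z_score = (series - rolling_mean) / rolling_std

anomalies = series[z_score.abs() > 3]      # points far outside recent behaviour
print(anomalies)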

An anomaly is an extremely rare episode, hard to assign to a specific class, and hard to predict. It is an unexpected event, unclassifiable with current knowledge. It's one of the hardest use cases to crack in data science because:

The current knowledge is not enough to define a class. More often than not, no examples are available in the data to describe the anomaly.

So, the problem of anomaly detection can be easily summarized as looking for an unexpected, abnormal event of which we know nothing and for which we have no data examples. As hopeless as this may seem, it is not an uncommon use case.

Fraudulent transactions, for example, rarely happen and often occur in an unexpected modality
Expensive mechanical pieces in IoT will break at some point without much indication on how they will break

A new arrhythmic heart beat with an unrecognizable shape sometimes shows up in ECG tracks
A cybersecurity threat might appear and not be easily recognized because it has never been seen before

Anomaly detection is a monitoring mechanism, in which a system keeps an eye on important key metrics of the business, and alerts users whenever there is a deviation from normal behavior. 

Conventionally, businesses use a fixed set of thresholds and mark any metric that crosses a threshold as an anomaly. However, this method is reactive in nature, which means that by the time businesses recognize threshold violations, the damage caused has already amplified many-fold. What is needed is a system that constantly monitors data streams for anomalous behavior and alerts users in real time to facilitate timely action.

Anomaly detection algorithms are capable of analyzing huge volumes of historical data to establish a ‘normal’ range, and of raising red flags when outliers deviate from the tolerable range.

A good anomaly detection system should be able to perform the following tasks:--

Identifying the signal type and selecting an appropriate model
Forecasting thresholds
Identifying and scoring anomalies
Finding the root cause by correlating the identified anomalies
Obtaining feedback from users to check the quality of anomaly detection
Re-training the model with new data


Anomalies are identified whenever a particular metric moves beyond the specified threshold. However, it is important to quantify the magnitude of deviation of each anomaly, in order to prioritize which anomaly needs to be investigated or solved first. In the scoring phase, each anomaly is scored according to the magnitude of its deviation from the median, or according to how long the deviated metric stays away from normal behavior. The larger the deviation, the higher the score.
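
A minimal sketch of the scoring idea, using deviation from the median scaled by the median absolute deviation (numpy assumed; this particular scaling is an illustrative assumption, not the only way to score):

# Sketch of anomaly scoring by deviation from the median (MAD-scaled).
import numpy as np

def anomaly_score(values):
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1.0   # avoid division by zero
    return np.abs(values - median) / mad              # larger deviation -> higher score

metric = [10, 11, 9, 10, 12, 10, 45, 11]              # 45 is the obvious outlier
scores = anomaly_score(metric)
print(sorted(zip(scores, metric), reverse=True)[:3])  # highest-priority anomalies first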

Anomaly detection systems are usually designed around tight bounds to highlight deviations quickly, but in the process these systems sometimes raise many false alarms. In fact, false positives are known to be one of the most prevalent issues in the area of anomaly detection.

One cannot underrate the flexibility that needs to be provided to the end user to change the status of a data point from anomaly to normal. After receiving this feedback, models need to be updated or retrained to prevent the identified false positives from recurring.

The system needs to re-train on new data continuously, to adapt to newer trends. It is possible that the pattern itself changes due to a change in the operating environment, rather than because of anomalous behavior. However, there should be a balance in the mechanism: updating the model too frequently requires an excessive amount of computational resources, while updating it too infrequently lets the model drift away from the actual trend.

Overall, anomaly detection has gained increased importance in recent years, due to the exponential growth of available data and the absence of impactful mechanisms to use this data. Anomaly detection systems are well suited to identifying significant deviations while ignoring the unworthy noise in the ocean of data, giving businesses the right alarms and insights at the right time.

AI and ML have made it a bit easier to detect the proliferation of malware and to identify early in the lifecycle whether a file/resource is showing signs of malicious behaviour. This level of automation has been possible with pattern detection, behaviour-based anomaly detection and advanced use of heuristics – all based on machine-learned solutions – to keep the intruders out.

The simplest approach to start with, and maybe the best, is to use static rules.

The idea is to identify a list of known anomalies and then write rules to detect those anomalies. Rule identification is done by a domain expert, by using pattern mining techniques, or by a combination of both. Unsupervised machine learning algorithms, however, learn what normal is, and then apply a statistical test to determine if a specific data point is an anomaly.
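
A minimal sketch contrasting the two approaches (the rule, the thresholds and the field names are illustrative assumptions):

# Sketch: a hand-written static rule versus a learned statistical test.
import numpy as np

def static_rule(transaction):
    # A domain expert's hand-written rule for a known anomaly pattern.
    return transaction["amount"] > 10_000 and transaction["country"] != transaction["home_country"]

def statistical_test(history, new_value, sigmas=3.0):
    # Learn what "normal" looks like from history, then test the new point.
    mean, std = np.mean(history), np.std(history)
    return abs(new_value - mean) > sigmas * std

history = [120, 95, 130, 110, 105, 98, 125]
print(static_rule({"amount": 15_000, "country": "DE", "home_country": "IN"}))  # True: known pattern
print(statistical_test(history, 480))                                          # True: never-seen-before deviation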

A system based on this kind of anomaly detection technique is able to detect any type of anomaly, including ones which have never been seen before.

A statistical anomaly occurs when something falls outside the normal range for one group, but not as a result of being in that group.

In databases, “anomalies” mean something different: they are problems that can occur in poorly planned, un-normalised databases where all the data is stored in one table (a flat-file database). An insertion anomaly, for example, arises when it is not possible to add a required piece of data unless another, unavailable piece of data is also added.

In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts in activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular unsupervised methods) will fail on such data, unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro clusters formed by these patterns.
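
One way to sketch this micro-cluster idea is with a density-based clustering algorithm such as DBSCAN (scikit-learn assumed; the synthetic burst, the feature scaling and the eps/min_samples values are illustrative assumptions):

# Sketch of spotting a "micro cluster" of burst activity with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
background = np.column_stack([rng.uniform(0, 3600, 300), rng.normal(500, 50, 300)])   # normal traffic
burst = np.column_stack([rng.normal(1800, 2, 40), rng.normal(5000, 100, 40)])         # tight burst of activity
events = np.vstack([background, burst])                                               # (timestamp, bytes)

# Scale features so time and volume are comparable, then cluster densely packed events.
scaled = (events - events.mean(axis=0)) / events.std(axis=0)
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(scaled)

for label in set(labels) - {-1}:
    members = np.sum(labels == label)
    print(f"cluster {label}: {members} events")   # a small, tight cluster away from the bulk may be a burst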

Anomaly detection is one AI approach in particular that could help banks identify fraudulent transactions and transfers. With predictive analytics, banks could both detect fraud and score transactions by risk level based on a wider range of customer data.

Teradata is an AI firm selling fraud detection solutions to banks. They claim their machine learning platform can enhance banking fraud detection by helping their data analytics software recognize potential fraud cases while avoiding acceptable deviations from the norm. In other cases, these deviations may be flagged and end up as false positives that offer the system feedback to “learn” from its mistakes.

They were able to:--

Reduce their false positives by 60% and were expected to reach 80% as the machine learning model continued to learn.
Increase detection of real fraud by 50%.
Refocus their time and resources toward actual cases of fraud and identifying new fraud methods.

Machine learning models for fraud detection can also be used to develop predictive and prescriptive analytics software. Predictive analytics offers a distinct method of fraud detection by analyzing data with a pre-trained algorithm to score a transaction on its fraud riskiness.

Prescriptive analytics takes the predictions made by a predictive analytics engine and uses them to provide recommendations for what to do once fraud is detected.

Both predictive and prescriptive analytics software require the same data and training to implement. Banking data experts or data scientists employed by the client bank will need to label a high volume of transactions as either fraudulent or legitimate, and then run all of them through the machine learning model. This allows the machine learning model to recognize the fraud methods used in the fraudulent transactions.
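
A minimal sketch of that supervised setup (scikit-learn assumed; the synthetic features, the labelling rule and the 0.5 cut-off are illustrative assumptions, not how any particular bank does it):

# Sketch: labelled transactions train a classifier that scores new ones by fraud risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
amount = rng.exponential(100, n)
foreign = rng.integers(0, 2, n)
night = rng.integers(0, 2, n)
# Synthetic labels purely for illustration: large foreign night-time payments are "fraud".
fraud = ((amount > 300) & (foreign == 1) & (night == 1)).astype(int)

X = np.column_stack([amount, foreign, night])
X_train, X_test, y_train, y_test = train_test_split(X, fraud, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = clf.predict_proba(X_test)[:, 1]          # fraud-risk score per transaction
print("flagged:", np.sum(risk > 0.5), "of", len(risk))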

Defenses Against Data Poisoning--
Similar to evasion attacks, when it comes to defenses, we’re in a hard spot. Methods exist but none guarantee robustness in 100% of cases.
The most common type of defense is outlier detection, also known as “data sanitization” or “anomaly detection”. The idea is simple: when poisoning a machine learning system, the attacker is by definition injecting something into the training pool that is very different from what it should include, and hence we should be able to detect that.
The challenge is quantifying “very”. Sometimes the poison injected is indeed from a different data distribution and can easily be isolated.

An interesting twist on anomaly detection is micromodels. The Micromodels defense was proposed for cleaning training data for network intrusion detectors. The defense trains classifiers on non-overlapping epochs of the training set (micromodels) and evaluates them on the training set. 

By using majority voting across the micromodels, training instances are marked as either safe or suspicious. The intuition is that attacks have a relatively short duration and can only affect a few micromodels at a time.
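
A minimal sketch of the micromodel voting idea (scikit-learn assumed; isolation forests stand in here for the intrusion detectors used in the original defense, and the slice count and poisoned block are illustrative assumptions):

# Sketch: micromodels trained on non-overlapping slices vote on every training instance.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
X = rng.normal(0, 1, size=(600, 5))
X[590:] += 6.0                                   # a small poisoned block at the end

n_epochs = 6
slices = np.array_split(np.arange(len(X)), n_epochs)
micromodels = [IsolationForest(random_state=i).fit(X[idx]) for i, idx in enumerate(slices)]

# Each micromodel votes on every training instance (+1 normal, -1 suspicious).
votes = np.stack([m.predict(X) for m in micromodels])
suspicious = np.where(votes.sum(axis=0) < 0)[0]   # a majority of micromodels reject the instance
print("suspicious instances:", len(suspicious))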

Anomaly detection has been used in predictive analytics now for several years. By analyzing the data, this function pinpoints activities outside the normal operations and expectations, whether those activities were good or bad.

By building upon the foundation of anomaly detection, contribution analysis provides you with the context in which an activity occurred. It investigates the anomaly and analyzes the data, which gives you actionable areas on which to focus your efforts. Scientists from Google’s health-tech subsidiary have pioneered innovative ways of creating revolutionary healthcare insights through artificial intelligence prediction algorithms.










WHAT IS "DEATH BY ALGORITHM"?


IT IS THE BLATANT MISUSE OF ARTIFICIAL INTELLIGENCE .. PALESTINIANS  HAVE BEEN AT THE RECEIVING END ..

On the Balakot night the Indian Air Force shot down their own helicopter .. the blame was squarely put on automation gone awry..

Deliberate economic crashes are now blamed on ai algorithms.. A “flash crash” had occurred in 2010, during which the market went into freefall for five traumatic minutes, then righted itself over another five – for no apparent reason.

But what is an algorithm? In fact, the usage has changed in interesting ways since the rise of the internet – and search engines in particular – in the mid-1990s. At root, an algorithm is a small, simple thing; a rule used to automate the treatment of a piece of data. If a happens, then do b; if not, then do c. This is the “if/then/else” logic of classical computing.
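
Written as code, that rule is nothing more than the following (the flag/ignore labels and the limit are illustrative):

# The "if/then/else" rule from the paragraph above, written out as code.
def treat(data_point, limit=100):
    if data_point > limit:       # if a happens...
        return "flag"            # ...then do b
    else:
        return "ignore"          # ...if not, then do c

print(treat(250))   # flag
print(treat(40))    # ignore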

When AI automation is first tried out, it is tested on denizens of third world nations.. In this aircraft, humans could not veto the automation and switch back to manual control..


We must ban lethal autonomous weapons (LAWS), which use AI technology to replace humans with an algorithm that decides when to shoot and whom to shoot.

Millions of inncocent women and children were killed from the air in Libya/ Iraq/ Syria/ yemen..  the un turns a nelsons’s eye because jews are not killed this way..

As early as  2007, Noel Sharkey published a dire warning in The Guardian titled “Robot Wars Are a Reality.” An expert in artificial intelligence and robotics, Sharkey expressed concern about the use of battlefield robots; electronic soldiers that act independently of any human control.

Sharkey argued that we “are sleepwalking into a brave new world where robots decide who and when to kill.” 

War is no longer fought primarily on the battlefield. Developments in computing, cyber warfare and artificial intelligence, have changed the way nations and non-state actors engage in hostilities. 

Technologies which enable point and shoot (or click) destruction are growing exponentially year by year, fuelled by a digital revolution that has ricocheted across the globe. Nations have realized that today’s conflicts are waged by 1s and 0s, and that algorithms can be trusted allies in the never ending war. It is now essential for the world to grasp the reality that robot wars are no longer just the fictive imaginings of science fiction..

Can robots and drones be programmed to comply with international human rights law? Since computers are susceptible to viruses and hacking, it is also feasible to assume that the systems which control killer robots could be overtaken by state and non-state actors.


The 18th of March 2018, was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead.

 Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes. An array of radar and light-emitting lidar sensors allowed onboard algorithms to calculate that, given their host vehicle’s steady speed of 43mph, the object was six seconds away – assuming it remained stationary. 

But objects in roads seldom remain stationary, so more algorithms crawled a database of recognizable mechanical and biological entities, searching for a fit from which this one’s likely behavior could be inferred.

At first the computer drew a blank; seconds later, it decided it was dealing with another car, expecting it to drive away and require no special action. Only at the last second was a clear identification found – a woman with a bike, shopping bags hanging confusingly from handlebars, doubtless assuming the Volvo would route around her as any ordinary vehicle would. 

Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention. Elaine Herzberg, aged 49, was struck and killed, leaving more reflective members of the tech community with two uncomfortable questions: was this algorithmic tragedy inevitable? And how used to such incidents would we, should we, be prepared to get?

Only when some embedded software experts spent 20 months digging into the code were they able to prove the family’s case, revealing a twisted mass of what programmers call “spaghetti code”, full of algorithms that jostled and fought, generating anomalous, unpredictable output.


The autonomous cars currently being tested may contain 100m lines of code and, given that no programmer can anticipate all possible circumstances on a real-world road, they have to learn and receive constant updates. How do we avoid clashes in such a fluid code milieu, not least when the algorithms may also have to defend themselves from hackers?

In some ways we’ve lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand..


At core, computer programs are bundles of such algorithms. Recipes for treating data. On the micro level, nothing could be simpler. If computers appear to be performing magic, it’s because they are fast, not intelligent.


Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”). This has revolutionized areas of medicine, science, transport, communication, making it easy to understand the utopian view of computing that held sway for many years.

If we tend to discuss algorithms in almost biblical terms, as independent entities with lives of their own, it’s because we have been encouraged to think of them in this way. Corporations like Facebook and Google have sold and defended their algorithms on the promise of objectivity, an ability to weigh a set of conditions with mathematical detachment and absence of fuzzy emotion. No wonder such algorithmic decision-making has spread to the granting of loans/ bail/ benefits/ college places/ job interviews and almost anything requiring choice.

Far from eradicating human biases, algorithms could magnify and entrench them. After all, software is written by overwhelmingly affluent white men – and it will inevitably reflect their assumptions. Bias doesn’t require malice to become harm, and unlike a human being, we can’t easily ask an algorithmic gatekeeper to explain its decision..


Big companies like Google should be subjected to “algorithmic audits” of any systems directly affecting the public, a sensible idea that the tech industry will fight tooth and nail, because algorithms are what the companies sell; the last thing they will volunteer is transparency.

We might call these algorithms “dumb”, in the sense that they’re doing their jobs according to parameters defined by humans. The quality of result depends on the thought and skill ( and human biases like with Palestinians and roma gypsies ) with which they were programmed.

At the other end of the spectrum is the more or less distant dream of human-like artificial general intelligence, or AGI. A properly intelligent machine would be able to question the quality of its own calculations, based on something like our own intuition (which we might think of as a broad accumulation of experience and knowledge). 

To put this into perspective, Google’s DeepMind division has been lauded for creating a program capable of mastering arcade games, starting with nothing more than an instruction to aim for the highest possible score.




https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war


Currently, the use of multiple UAVs in drone swarms is garnering huge interest from the research community, leading to the exploration of topics such as UAV cooperation, multi-drone autonomous navigation, etc.

Researchers have been working on UAV pursuit-evasion along two main approaches. The first proposes the use of vision-based deep learning object detection and reinforcement learning for detecting and tracking a UAV (target or leader) by another UAV (tracker or follower).

The proposed framework uses vision data captured by a UAV and deep learning to detect and follow another UAV. The algorithm is divided into two parts: the detection of the target UAV and the control of the follower UAV’s navigation. The deep reinforcement learning approach uses a deep convolutional neural network (CNN) to extract the target pose based on the previous pose and the current frame.

The network works like a Q-learning algorithm. The output is a probabilistic distribution between a set of possible actions such as translation, resizing (e.g., when a target is moving away), or stopping. For each frame from the captured sequence, the algorithm iterates over its predictions until the network predicts a “stop” when the target is within the desired position.

Q-learning is a model-free reinforcement learning algorithm. The goal of Q-learning is to learn a policy, which tells an agent what action to take under what circumstances. It does not require a model (hence the connotation "model-free") of the environment, and it can handle problems with stochastic transitions and rewards, without requiring adaptations.


The Q-learning algorithm involves an agent, a set of states, and a set of actions per state. It uses Q-values and randomness at some rate to decide which action to take.
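
A minimal tabular Q-learning sketch for a toy five-state corridor (numpy assumed; the environment, learning rate, discount and epsilon are illustrative assumptions, not the UAV setup described above):

# Tabular Q-learning on a toy corridor: Q-values plus epsilon-greedy randomness.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right; the goal is state 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(5)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Choose an action: random with probability epsilon, else the best known one.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Standard Q-learning update (model-free: no transition model is needed).
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))           # learned policy: move right in every state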


The second approach uses deep learning object detection for the detection and tracking of a UAV in a pursuit-evasion scenario. The deep object detection approach uses images captured by a UAV to detect and follow another UAV. This approach uses historical detection data from a set of image sequences and inputs this data to a SAP algorithm in order to locate the area with a high probability UAV presence.

The proposed framework uses images captured by a UAV and a deep learning network to detect and follow another UAV in a pursuit-evasion scenario. The position of the detected target UAV (detected bounding box) is sent to a high-level controller that decides on the controls to send to the follower UAV to keep the target close to the centre of its image frame.
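
The controller details are not spelled out here, so as a rough illustration, a simple proportional “keep the box near the image centre” controller might look like this (the frame size, gains and command names are illustrative assumptions, not taken from the paper described above):

# Sketch of a high-level controller that keeps a detected bounding box near the image centre.
def centering_commands(bbox, frame_w=640, frame_h=480, k_yaw=0.002, k_pitch=0.002):
    x_min, y_min, x_max, y_max = bbox
    target_x = (x_min + x_max) / 2.0
    target_y = (y_min + y_max) / 2.0
    # Offset of the detected box from the image centre, in pixels.
    error_x = target_x - frame_w / 2.0
    error_y = target_y - frame_h / 2.0
    # Proportional commands: turn/tilt towards the target to reduce the offset.
    return {"yaw_rate": k_yaw * error_x, "pitch_rate": -k_pitch * error_y}

print(centering_commands((400, 100, 480, 180)))   # target is right of and above the centre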

In this work, researchers aim to develop an architecture capable of tracking moving targets using predictions over time from a sequence of previously captured frames. The proposed algorithm tracks the target and moves a bounding box according to each movement prediction. The architecture is based on an ADNet (action-decision network).

This work presents two approaches for UAV pursuit-evasion. The obtained results show that the proposed approach is effective in tracking moving objects in complex outdoor scenarios.

SAP improved the detection of distant UAVs. The obtained results demonstrate the efficiency of the proposed algorithms and show that both approaches are interesting for a UAV pursuit-evasion scenario.

Actors in our criminal justice system increasingly rely on computer algorithms to help them predict how dangerous certain people and certain physical locations are. These predictive algorithms have spawned controversies because their operations are often opaque and some algorithms use biased data. 

Yet these same types of predictive algorithms inevitably will migrate into the national security sphere, as the military tries to predict who and where its enemies are. Because military operations face fewer legal strictures and more limited oversight than criminal justice processes do, the military might expect – and hope – that its use of predictive algorithms will remain both unfettered and unseen.


  1. https://timesofindia.indiatimes.com/india/no-privacy-left-for-anyone-whats-happening-asks-sc/articleshow/71913853.cms

    THE ILLEGAL COLLEGIUM JUDICIARY CONTROLLED BY THE JEWIS DEEP STATE HAS BLED BHARATMATA FOR TOO LONG..

    OUR KAYASTHA LAW MINISTER PRASAD IS A MOST USELESS FELLOW.. IN 1976 PRASAD WAS THE LACKEY OF CIA SPOOK KAYASTHA AND FELLOW BIHARI JP..

    http://ajitvadakayil.blogspot.com/2017/08/right-to-privacy-in-india-is-not.html

    IN 2017, THE UNION FINANCE MINISTRY, INDIA LAUNCHED ‘PROJECT INSIGHT‘ TO MONITOR HIGH VALUE TRANSACTIONS, INCLUDING MONITORING SOCIAL MEDIA ACCOUNTS, TO DETECT SPENDING PATTERNS AND TO COMPARE THESE WITH TAX RECORDS.

    THE UNION FINANCE MINISTRY ENTERED INTO A CONTRACT WORTH US$100 MILLION WITH L&T INFOTECH (LARSEN & TOUBRO) TO HELP WITH PROJECT INSIGHT.

    BENAMI MEDIA CRITICS ACCUSE THE PROJECT OF BEING VIOLATIVE OF INDIVIDUALS’ PRIVACY.

    SORRY--

    PRIVACY WILL BE AFFORDED ONLY TO LAW ABIDING CITIZENS , NOT PAKISTANI ISI FUNDED TRAITORS WHO WANT TO KILL BHARATAMATA..

    HOME MINISTER AMIT SHAH WAS THROWN INTO JAIL BY TRAITOR JUDICIARY TO PROTECTS PAKISTANI ISIS FUNDED TERRORIST SOHRABUDDIN.. ALL IT TOOK WAS A PIL FROM A PAKISTAN ISI CONTROLLED NGO..

    https://en.wikipedia.org/wiki/Death_of_Sohrabuddin_Sheikh

    capt ajit vadakayil
    ..





























  1. PRESIDENT TRUMP IS VISITING INDIA..

    CAPT AJIT VAADKAYIL DEMANDS FROM PM MODI-- DO NOT GET OVERAWED..

    THREE US PRESIDENTS REAGAN/ BUSH SR AND CLINTON ARE RESPONSIBLE FOR THE COCAINE ADDICTION OF USA.. THE WHITE HOUSE IS THE NO 1 CONSUMER OF COCAINE IN A SINGLE BUILDING WORLD WIDE.

    https://en.wikipedia.org/wiki/Miguel_%C3%81ngel_F%C3%A9lix_Gallardo

    MEXICAN COCAINE DRUG LORD MIGUEL ÁNGEL FÉLIX GALLARDO RAN GUNS TO NICARAGUA FOR THE US PRESIDENT USING CIA AS A MEDIATOR.. THIS IS NO BIG SECRET..

    MATTA-BALLESTEROS WAS USED BY CIA FIRST TO ARM THE NICARAGUA CONTRAS TO BRING THE PATRIOTIC SANDINISTA GOVT DOWN..

    SANDINISTAS WERE BRANDED AS BAAAAD COMMIES BY REAGAN .. IN REALITY SANSDINISTAS KICKED OUT STEALING JEWISH OLIGARCHS FROM NICARAGUA..

    https://en.wikipedia.org/wiki/Juan_Matta-Ballesteros

    MEXICAN COCAINE DRUG LORD MIGUEL ÁNGEL FÉLIX GALLARDO DROVE A WEDGE BETWEEN DEA AND CIA.. CIA WAS PROTECTING THIS DRUG LORD FROM DEA..

    DRUG LORD EL CHAPO WAS IN CAHOOTS WITH DEA.. WHEN HE WAS ESCAPING VIA A 1.8 KM LONG TUNNEL FROM JAIL, THE MEXICAN PRESIDENT AND US PRESIDENT WERE MONITORING THIS ESCAPE..

    https://en.wikipedia.org/wiki/Joaqu%C3%ADn_%22El_Chapo%22_Guzm%C3%A1n

    EL CHAPO FUNDED CIA/ DEA AND THE WHITE HOUSE -SO THAT THE US PRESIDENT CAN CIRCUMVENT US CONGRESS FOR SLUSH FUNDS TO CAUSE REGIME CHANGE WOLD WIDE..

    REGIME CHANGE IS ALL ABOUT PUTTING PUPPETS ON THRONES SO THAT JEWS CAN STEAL..

    CAPT AJIT VADAKAYIL ASKS NSA AJIT DOVAL.. YOU ARE NOW 75 YEARS OLD.. DYING YOUR HAIR BLACK CUTS NO ICE .. OTHER THAN BULLSHIT PAKISTANI STUFF, DO YOU UNDERSTAND WORLD INTRIGUE?.. CAN YOU EVEN UNDERSTAND ARTIFICIAL INTELLIGENCE? WHY DONT YOU FUCK OFF AND ALLOW SOMEONE YOUNGER AND MORE TECH SAVVY TO BE THE NSA ?

    CHAIWAALA MODI KNOWS NOTHING REPEAT NOTHING ABOUT WORLD INTRIGUE... SEE THE WAY HE BEHAVES WHEN DONALD TRUMP COMES TO INDIA..

    capt ajit vadakayil
    ..

THIS POST IS NOW CONTINUED IN PART 13, BELOW--






CAPT AJIT VADAKAYIL
..
