"BIG DATA, MISINFORMATION AND REALISM"

"The fog of war or the cloud of war?"
"BIG DATA, MISINFORMATION AND REALISM"

“In the era of social media, where fake news is far more common, state actors with their ‘bot armies’ are now involved in spreading fake news, and state media unsurprisingly engage in it too. But what is peculiar is the type of fake news being spread: it is sometimes obviously fake, as when old footage or video-game footage is circulated by these state actors. It is as if these state actors haven’t tried using deepfakes or Google Veo 3?”

“Data is delicious, according to the data scientist: it is desired by corporations, advertisers and governments alike.”

“With this mass collection of data, these actors now hold so much of it that a new subject arose: big data. With big data it becomes very difficult to inspect each unit of data within a dataset, especially when the dataset is enormous. In other words, algorithms, and particularly machine learning algorithms, are increasingly employed by these actors to navigate through this big data.”

“These algorithms are not perfect, especially machine learning algorithms, which are susceptible to false positives and false negatives. You might think that a machine learning algorithm has a small margin of error, but it is not as simple as that.”

“The ‘small margin of error’ is the average difference between the output of the machine learning algorithm on the test data (a held-out subset of dataset A) and the true labels of that same subset.”
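To make that ‘margin of error’ concrete, here is a minimal sketch with made-up data and a stand-in classifier (none of this comes from the essay): the headline error rate is an average over a held-out test subset, and that single average hides both false positives and false negatives.

```python
# Toy illustration (not the author's model): measure the "margin of error"
# of a trivial classifier on a held-out test subset, and count the false
# positives and false negatives hiding inside that one average number.

# Hypothetical labelled test data: (feature, true_label), label 1 = "positive".
test_data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.55, 0), (0.9, 1), (0.3, 1)]

def classify(x, threshold=0.5):
    """A stand-in 'trained model': predict positive above a fixed threshold."""
    return 1 if x > threshold else 0

errors = false_positives = false_negatives = 0
for x, truth in test_data:
    pred = classify(x)
    if pred != truth:
        errors += 1
        if pred == 1:
            false_positives += 1   # model said positive, reality was negative
        else:
            false_negatives += 1   # model said negative, reality was positive

margin_of_error = errors / len(test_data)  # the headline "small" number
print(margin_of_error, false_positives, false_negatives)  # 2 of 6 wrong
```

The point of the sketch is that a respectable-looking average (here, one-third) says nothing about *which* items were misclassified, or in which direction.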

“Now, the machine learning algorithm might classify something as ‘positive’ according to the dataset it has been trained on when it is actually negative (a false positive), and vice versa (a false negative). You might think this is insignificant, but understand that the algorithm is trying to find a general pattern within the dataset it is trained and tested on, which is why practitioners insist on making sure the dataset is correctly labelled, i.e. an image of black hair is classified as an individual with black hair, and so on.”

“If the dataset contains incorrectly labelled items (say, an image of black hair that has been labelled as blonde hair), the machine learning algorithm detects false patterns which are not reflective of reality. In other words, if the opponent understands the general pattern the model is detecting, the opponent now knows what to input without getting detected. This is analogous to a hitman disguising himself as a waiter: he is technically seen by the cameras and guards, but he blends into the crowd, unlike an undisguised hitman who tries to penetrate the building without being seen at all.”
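The poisoning-and-blending idea can be sketched with a deliberately tiny toy detector (my own construction, assuming a single numeric feature and a midpoint-threshold rule, not anything from the essay): one mislabelled sample shifts the learned pattern, and an attacker who knows the shifted pattern passes undetected.

```python
# A minimal sketch: a one-feature "detector" learns a threshold from labelled
# examples. Poisoning one label shifts the learned pattern, and an attacker
# who knows the shifted pattern "wears the waiter's uniform" and passes.

def learn_threshold(samples):
    """Learn a decision threshold: the midpoint between the highest value
    labelled benign (0) and the lowest value labelled malicious (1)."""
    benign = [x for x, label in samples if label == 0]
    malicious = [x for x, label in samples if label == 1]
    return (max(benign) + min(malicious)) / 2

clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
# Same data, but one malicious sample (0.7) mislabelled as benign:
poisoned = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 0), (0.8, 1), (0.9, 1)]

t_clean = learn_threshold(clean)        # (0.3 + 0.7) / 2 = 0.5
t_poisoned = learn_threshold(poisoned)  # (0.7 + 0.8) / 2 = 0.75

attack = 0.7  # a genuinely malicious input
print(attack > t_clean)     # True  -> caught by the cleanly trained model
print(attack > t_poisoned)  # False -> blends in under the poisoned model
```

Real models are far more complex, but the mechanism is the same: the detector only ever learns the pattern its data shows it, so corrupting the data moves the pattern.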

“If you think that ChatGPT is not susceptible to inputs blending into its patterns, you are mistaken. There is something called ‘slopsquatting’, where the A.I. ‘hallucinates’ the name of a software package (similar but not identical to what the programmer asked for), and that package, uploaded by an attacker to GitHub or a package index, contains malware. If you are confused by this, consider that there are domains named ‘binng’, which is of course not Bing but is similar to it: hackers try to exploit typos, and in this case they try to exploit the ‘hallucinations’ of A.I. To summarise, not even A.I. prevents someone from blending their input into the general pattern by which A.I. detects things.”
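One hedged defensive habit against slopsquatting can be sketched with Python’s standard difflib (the allowlist and similarity cutoff below are illustrative assumptions, not a vetted security tool): treat a package name that is close but not identical to something you already trust as a warning sign, the same way ‘binng’ should warn you about ‘bing’.

```python
# A defensive sketch: before installing a package name suggested by an A.I.,
# compare it against packages you actually know. Close-but-not-identical
# names are exactly the pattern typosquatting and slopsquatting exploit.
import difflib

KNOWN_PACKAGES = ["requests", "numpy", "pandas", "flask"]  # your own allowlist

def suspicion(name, known=KNOWN_PACKAGES, cutoff=0.8):
    """Return the known package a name suspiciously resembles, if any.
    An exact match is fine; a near-match is a squatting warning."""
    if name in known:
        return None  # exact match: a package you already trust
    close = difflib.get_close_matches(name, known, n=1, cutoff=cutoff)
    return close[0] if close else None

print(suspicion("requests"))   # None: exact, trusted
print(suspicion("requestz"))   # 'requests': suspiciously close, investigate
print(suspicion("leftpadly"))  # None: not close to anything known
```

A name flagged this way is not proof of malware, only a prompt to verify the package exists and is the one you meant before installing it.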

“Now, what is the relationship between these algorithms and the misinformation promoted by state actors? State actors understand that the other side uses algorithms which absorb photos containing metadata and other forms of data. A diverse range of misinformation is used by state actors, i.e. game footage, old footage, fake documents, false quotations and more, in an attempt to confuse their enemy and the spectators watching, including the algorithms themselves: the state actor wants that falsehood to blend into the algorithm’s pattern. You might think this is a ‘small’ thing, but do you think that losing one mark on your exam paper is a ‘small’ thing? What makes you think this small thing is insignificant when a country is waging a war?”
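As a hedged illustration of why the metadata those algorithms absorb matters (the field names, dates and clips below are all hypothetical, not real verification practice): footage whose embedded capture date predates the event it supposedly shows is the classic ‘old footage’ tell.

```python
# A toy metadata check: flag a clip as suspect if its embedded capture date
# predates the event it claims to show. The 'captured' field stands in for
# real EXIF-style metadata; all dates here are invented for illustration.
from datetime import date

def flag_old_footage(clip, event_start):
    """Return True if the clip's metadata capture date predates the event."""
    return clip["captured"] < event_start

event_start = date(2025, 5, 7)  # hypothetical start date of a conflict
clips = [
    {"id": "clip_a", "captured": date(2019, 2, 27)},  # recycled old footage
    {"id": "clip_b", "captured": date(2025, 5, 8)},   # at least plausible
]
for clip in clips:
    verdict = "suspect" if flag_old_footage(clip, event_start) else "plausible"
    print(clip["id"], verdict)
```

Of course, metadata can itself be stripped or forged, which is precisely the essay’s point: the falsehood is crafted to blend into whatever pattern the detector relies on.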

“In the recent clash between Pakistan and Hindustan, social media pages were flooded with conflicting claims of F-16s being downed, among other fabrications. But such claims are treated by realists as possibilities. The realist assumes that trust does not exist between two nation-states, especially belligerent ones, so he treats the claims issued by these governments with either scepticism or a bare acknowledgement. If the realist acknowledges these claims as ‘possibilities’, he pushes himself towards risk-averse behaviour: even if a claim is extremely unlikely, he takes the possibility seriously out of fear that it might be the case, whereas the risk-taker willingly ignores these possibilities as he pushes through.”
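The realist’s risk-averse posture can be caricatured as a small expected-loss comparison (my own framing with arbitrary invented numbers, not a claim about how states actually decide): even a 5% possibility gets acted upon when the potential loss dwarfs the cost of caution.

```python
# A toy decision sketch: the realist and the risk-taker both judge an enemy
# claim to be only 5% likely, but they weigh the stakes differently. All
# numbers below are invented; the units are arbitrary "cost" units.

p_claim_true = 0.05    # assumed probability the enemy's claim is true
loss_if_true = 100.0   # assumed cost of ignoring a claim that was true
cost_of_caution = 3.0  # assumed cost of acting as if the claim might be true

expected_loss_ignoring = p_claim_true * loss_if_true
risk_averse_acts = cost_of_caution < expected_loss_ignoring

print(expected_loss_ignoring)  # the gamble the risk-taker accepts
print(risk_averse_acts)        # True: caution is cheaper than the gamble
```

A genuinely risk-averse actor would weight the large loss even more heavily than this bare expected value does, which only strengthens the same conclusion.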

“These algorithms are part of the relationship between technology and international relations: they distort the information fed into spy agencies, which leaves hidden marks on the reports those agencies send to the civilian government.”

“The fog of war existed in the past and it still exists today, yet this fog is quite different in the era of cybernetics.”
