Artificial Intelligence is Gullible: Shielding Machine Learning from Manipulation

Parth Satam
Parth Satam has worked with The Asian Age and Mid-Day and is presently a Principal Correspondent with Fauji India magazine. He maintains a keen interest in defence, aerospace and foreign affairs, and has covered crime, national security and India's defence establishment for a decade. He can be reached on email: satamp@gmail *Views are personal

With Artificial Intelligence (AI) and Machine Learning (ML), computerized automation is being taken to another level, saving labour and time and boosting profit. ML, an offshoot of AI, involves computers and machines executing routine jobs that require a marginal level of higher-order thinking; it also sees devices adapting to any slight complexity a task might throw up, even if they are not explicitly programmed for such a scenario.

The machine does this by using available data, direct instruction or its own experience to look for patterns and make better decisions. However, ML and AI systems have recently been found to ignore deeper contextual and semantic considerations, owing to their lack of regular societal interaction, while human biases themselves crept into the machines.

For instance, in late June in Detroit, Michigan, a facial recognition image search, based on a CCTV grab of a black male suspect who stole $3,800 worth of watches from a retail store, threw up a false hit, leading police to the home of a similar-looking black man living near Detroit, whom they arrested from his front lawn as his wife and daughter looked on in horror. Robert Julian-Borchak Williams was later released on bail for insufficient evidence, after having been detained for over 30 hours.

Prior to that, in November last year, Apple Card's 'black box algorithm' displayed a clear gender bias: a man found himself granted a credit limit 20 times higher than the one allowed to his wife, despite her having a stronger credit score. A study, however, has found another type of shortcoming in ML: it cannot recognise input manipulation in settings where a third party, invested in a positive outcome of the ML's "biased prediction", tweaks data in ways the algorithm cannot detect. In tasks like processing resumes or insurance claims, where ML algorithms are used to speed up review, it is difficult to "adversarially train" the system to spot tricky inputs that mask disqualifying details, i.e. a lack of qualification that disqualifies one from a job or causes one's insurance claim to be rejected.
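The mechanics of such gaming are simple to picture. The sketch below is a purely hypothetical, toy illustration and is not drawn from the study or any real screening product: a naive screener rejects applications containing a disqualifying phrase, and a visually identical Unicode hyphen lets the same content slip past the exact-match check. The phrase list and function names are invented for illustration.

```python
# Toy screener (hypothetical, not the study's system): it rejects any
# application containing a disqualifying phrase, and a look-alike Unicode
# hyphen lets identical content slip past the exact-match check.

DISQUALIFIERS = {"pre-existing condition"}  # invented example phrase

def screen(application_text: str) -> str:
    """Reject if any disqualifying phrase appears verbatim."""
    text = application_text.lower()
    if any(phrase in text for phrase in DISQUALIFIERS):
        return "REJECT"
    return "APPROVE"

honest = "Claimant has a pre-existing condition affecting the knee."
gamed = "Claimant has a pre\u2010existing condition affecting the knee."  # U+2010 hyphen

print(screen(honest))  # REJECT  -- the phrase matches exactly
print(screen(gamed))   # APPROVE -- the look-alike hyphen defeats the match
```

Patching the screener against this one trick would be trivial; the study's point is that motivated applicants keep inventing new tricks faster than a model can be adversarially retrained against them.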

The study has therefore recommended retaining human supervision, since it provides "domain expertise" to complement the ML system in fields where ML efficacy can be bypassed. The study, 'Machine Learning and Human Capital Complementarities: Experimental Evidence on Bias Mitigation', takes the case of patent applications to explore this vulnerability of ML. The tedious, time-consuming work of examining patent claims has inherent loopholes: applicants exploit the permitted practice of using hyphenated words, assign new meanings to existing words and pepper applications with irrelevant information, enhancing the likelihood of a claim being perceived as "novel" or "non-obvious".
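A deliberately simplified sketch makes the mechanism concrete. It assumes nothing about the USPTO's actual algorithm; the similarity measure (word-overlap, or Jaccard, similarity), the sample claims and the scores are all invented for illustration, showing how hyphenated variants and irrelevant jargon can depress a text-overlap score so a derivative claim reads as novel.

```python
# Hypothetical sketch (not the USPTO's algorithm): a word-overlap (Jaccard)
# check against known prior art, and how hyphenation plus irrelevant filler
# drags the score down so a derivative claim looks "novel".

def tokens(text: str) -> set:
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

prior_art = "wireless sensor network for remote patient monitoring"
honest = "wireless sensor network for remote patient monitoring"
gamed = ("wire-less sensor-network for remote patient-monitoring "
         "using ambient paradigmatic telemetry frameworks")

print(f"honest vs prior art: {jaccard(prior_art, honest):.2f}")  # 1.00 -- flagged
print(f"gamed  vs prior art: {jaccard(prior_art, gamed):.2f}")   # ~0.13 -- looks novel
```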

To highlight how prevalent language manipulation is in patent applications, the study cites a 2016 US Government Accountability Office (GAO) report which estimated that 64 percent of examiners found excessive references making it difficult to complete 'prior art' searches within set deadlines, "and 88 percent of examiners reported consistently encountering irrelevant references."

The study was published in the November 2019 issue of the Strategic Management Journal and was conducted by Prithwiraj Choudhury of Harvard Business School, and Evan Starr and Rajshree Agarwal, both of the Robert H. Smith School of Business at the University of Maryland. It focused on how the US Patent and Trademark Office's (USPTO) new ML technology, while quickly and accurately identifying relevant 'prior art' and being capable of learning to correct for new ways applicants manipulate it, was still vulnerable to applicants dynamically updating their writing strategies.

“This makes it practically impossible to adversarially train an ML algorithm,” the study said. Thus a combination of human intervention is recommended, involving people with both “domain-specific” expertise and “vintage” skills. The former means people with experience in the subject matter where the ML is being used, in this case patent examination; the latter means people with technical training in Computer Science and Engineering (CS&E), which helps in dealing with the core technical aspects of the ML programme itself.

No Robotic Overlords Yet

The researchers cite both observational and experimental evidence to show how patent language changes over time, in response to feedback from USPTO examiners, even within a narrowly defined class of 'art' (the technical term for the field or discipline to which a patent application pertains). Unsupervised examination by ML technologies thus risks accepting patent applications, aided by rapidly evolving patent language and parlance, for which a similar or even identical patent already exists. In this situation, domain expertise is needed to improve the search strategy, and a CS&E background is required to operate the user interface effectively.

The ML system otherwise successfully meets its primary role: speed in finding patents similar to the one being assessed, thereby reducing USPTO response times. “However, we note the extent to which ML privileges vintage-specific human capital depends on the pace at which future iterations of the technology reduce the need for such specialized skills. These contributions are replete with important managerial implications, given the firm’s pre-existing workforce is its most important capability, and creating the appropriate mix of complementary assets/technology is critical to productivity,” said the authors, touching on an unintended socio-political note about how including people from various strata and ethnicities affects an organization's, and a machine's, performance.

The findings also provide an answer to the long-running philosophical question posed by the science fiction genre about humans eventually being supplanted by sentient machines, whether for benevolent or tyrannical purposes. The answer remains a resounding no, at least in the foreseeable future. “AI’s substitution for humans in cognitive tasks is overstated…ML technologies can substitute humans for prediction tasks, but not for judgment tasks,” the authors conclude. “Amidst the fierce debate about AI’s future replacement of humans at work, there is scant attention to how productivity gains from substituting ML, for older vintages may be conditioned by complementary human capital,” they add.

As long as there remains a potential for 'input incompleteness' – like a small perturbation to a photo that dramatically alters how ML tools classify it even though it looks nearly identical to the human eye, of which Robert Williams became a victim – the report says "domain-specific expertise as a complement to ML, will continue to retain value."
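That failure mode is well documented in the ML literature as an 'adversarial example'. The sketch below is a minimal, hypothetical illustration on a toy linear classifier – the weights and the "image" are random numbers, and the technique is the standard fast-gradient-sign idea, not the facial recognition system in the Williams case: a per-pixel nudge of 0.02 flips the prediction while leaving the input all but unchanged.

```python
# Toy adversarial perturbation (hypothetical model and data): for a linear
# classifier, nudging each "pixel" slightly in the direction of the weight
# vector's sign flips the label, though the inputs are nearly identical.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)  # weights of a toy linear classifier

# An input the classifier scores firmly negative ("no match").
image = -0.01 * np.sign(w) + 0.001 * rng.normal(size=100)

def predict(x):
    return "match" if w @ x > 0 else "no match"

epsilon = 0.02                              # per-pixel perturbation budget
adversarial = image + epsilon * np.sign(w)  # fast-gradient-sign-style step

print(predict(image))                        # no match
print(predict(adversarial))                  # match
print(np.max(np.abs(adversarial - image)))  # 0.02 -- tiny per-pixel change
```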
