Do algorithms know it all, or do humans need to help them?
The new science of data raises the question of when a person should oversee an automated decision, such as a medical diagnosis or the approval of a loan.

Armies of the brightest minds in computer science have dedicated themselves to raising the odds of making a sale. The Internet era's abundance of data and clever software has opened the door to tailored marketing, targeted advertising and personalized product recommendations.

Deny it if you like, but that is no small thing. Just look at the great technology-driven reorganization of the advertising, media and retail industries.

This automated decision-making is meant to take humans out of the equation, but the impulse to want someone overseeing the results a computer spits out is a very human one. Many data scientists consider marketing a low-risk (and, yes, lucrative) petri dish in which to hone the tools of the new science. "What happens if my algorithm is wrong? Someone sees the wrong ad," says Claudia Perlich, a data scientist who works for a start-up devoted to targeted advertising. "What harm can it do? It's not a false positive for breast cancer."

But the stakes rise as the methods and the mind-set of data science soak into the economy and society. Big companies and start-ups are beginning to use the technology to make decisions about medical diagnosis, crime prevention and loan approvals. In these areas, the application of data science raises doubts about when a person should closely supervise an algorithm's results.

These doubts are giving rise to a branch of academic study known as algorithmic accountability. Public-interest and civil rights organizations are examining at length the repercussions of data science, for its errors as much as for its possibilities. In the foreword to a report published last September, "Civil Rights, Big Data, and Our Algorithmic Future," Wade Henderson, president of The Leadership Conference on Civil and Human Rights, wrote: "Big data can and should bring us all greater safety, economic opportunity and convenience."

Consider consumer lending, a market in which several start-ups specialize in big data. Their methods represent the digital version of banking's most elementary principle: know your customer. These new data-driven lenders claim that by gathering data from sources such as social-network contacts, or even by watching how an applicant fills out an online form, they can know borrowers better than ever and predict whether they will repay a loan more accurately than if they merely studied someone's credit file.

What they promise is more efficient underwriting and pricing of loans, which would save people billions of dollars. But big-data lending depends on software algorithms that pore meticulously through piles of data, learning as they go. It is a highly complex, automated system, and even its advocates have doubts.

"A decision is made about you, and you have no idea why it was made," explains Rajeev Date, who invests in lenders that use data science and is a former deputy director of the Consumer Financial Protection Bureau. "That is disquieting."

The concern is similar in other fields. Since its Watson computer beat the champions of the TV quiz show Jeopardy! four years ago, IBM has been taking its data-driven artificial intelligence technology well beyond games. Health care has been one of its big projects. The history of using "expert" technology to aid medical decision-making has been disappointing; the systems have been neither smart enough nor fast enough to really help doctors in everyday practice.

Medical care

But IBM's scientists, in collaboration with researchers at some prominent medical groups (among them the Cleveland Clinic, the Mayo Clinic and the Memorial Sloan Kettering Cancer Center), are making progress. Watson can read medical documents at a speed that would be incomprehensible to humans, thousands of them per second, searching for clues, important correlations and insights.

The program has been used to train medical students and is beginning to be used in clinical oncology settings to provide diagnoses and treatment recommendations, as if it were a clever digital assistant.

IBM has also created software called Watson Paths, a visual tool that lets doctors see the evidence and the inferences on which Watson based a recommendation.

"It is not enough to just give an answer," says Eric Brown, the head of Watson-related technology at IBM.

Watson Paths points to the need for some kind of machine-human translation as data science progresses. As Danny Hillis, an artificial intelligence expert, says: "The key to making it work, and making it acceptable in society's eyes, will be the story it tells." Not a narrative exactly, but rather a trail of information that explains how an automated decision was made. "How does it affect us?" Hillis asks. "To what extent is the decision the machine's, and to what extent is it human?"

One approach is to keep a human as part of the process. Data and software give life to the new lenders that use data science. But Earnest, one of these start-ups, based in San Francisco, has at least one of its employees review the program's predictive recommendations, although it is rare for that person to reject what the algorithms dictate. "We think the human factor will always be an important part of the process, since it lets us make sure we are not getting it wrong," says Louis Beryl, co-founder and chief executive of Earnest.

But that stance, others think, is no more than a comforting illusion; perhaps it is good marketing, but not necessarily good data science. Granting a human veto power within an algorithmic system, they argue, introduces human bias. After all, what big-data decision-making promises is that decisions based on data and analysis (more science, less intuition and less bias) will produce better results.

Yet even if the optimism is justified, there is a major challenge, given the complexity and opacity of data science. Can a technology that promises great average benefits protect the individual well enough from a capricious, mysterious decision that could have a lasting effect on a person's life?

One possible solution, according to Gary King, director of Harvard's Institute for Quantitative Social Science, would be for the creators of scoring algorithms to tune them not to extract the maximum profit or efficiency, but to give somewhat greater weight to the individual, which would reduce the risk of getting it wrong.

In banking, for example, an algorithm could be adjusted to reduce the probability of erroneously labeling a loan applicant a deadbeat, even if that means the lender ends up granting more loans that go uncollected.
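
To make the trade-off concrete, here is a minimal sketch in Python of the kind of threshold tuning King describes. The model, threshold values and risk scores are invented for illustration; they are not drawn from any real lender.

    # A minimal, hypothetical sketch of the trade-off King describes.
    # The thresholds and risk scores below are invented for illustration.

    def approve_loan(default_probability, threshold):
        """Approve the loan if the predicted default risk is below the threshold."""
        return default_probability < threshold

    # A profit-maximizing lender might reject anyone whose predicted
    # risk of default exceeds 20 percent.
    strict_threshold = 0.20

    # Tuning the algorithm in the applicant's favor means raising that
    # threshold: fewer creditworthy people are mislabeled as deadbeats,
    # at the cost of approving more loans that go bad.
    lenient_threshold = 0.35

    applicants = {"Applicant A": 0.10, "Applicant B": 0.25, "Applicant C": 0.50}

    for name, risk in applicants.items():
        print(name,
              "| strict:", "approve" if approve_loan(risk, strict_threshold) else "reject",
              "| lenient:", "approve" if approve_loan(risk, lenient_threshold) else "reject")

Under the strict threshold, the borderline applicant is rejected; under the lenient one, that applicant is approved, with the lender knowingly accepting more risk overall.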

"The goal," King says, "is not necessarily for a human to review the result after the fact, but to improve the quality of the classification of individuals."

In a certain sense, a mathematical model is the equivalent of a metaphor, a descriptive simplification. It usefully distills, but it also distorts a little. That is why, at times, a human assistant can supply the dose of nuanced data that escapes the algorithmic robot. "Often the two together can work far better than the algorithm on its own," King affirms.

Steve Lohr, a New York Times columnist specializing in technology, is the author of "Data-ism."



©2015 New York Times News Service

Translated by News Clips.





