WTF is AI?

morris

Superstar
Joined
Oct 8, 2014
Messages
16,883
Reputation
5,082
Daps
37,173
Posted yesterday by Devin Coldewey,


Cogito, ergo sum. We’ve all heard that famous assertion, the foundation of a modern philosophy of self, consciousness, and individualism.

But Descartes had it easy: for him, thought was self-evident — he didn’t have to define it. What is thought? What is intelligence? And can a machine be said to possess either? The field of artificial intelligence, it turns out, is as much about the questions as it is about the answers, and as much about how we think as whether the machine does.

By way of illustration and introduction, consider this brief thought experiment.

The Chinese Room
Picture a locked room. Inside the room sit many people at desks. At one end of the room, a slip of paper is put through a slot, covered in strange marks and symbols. The people in the room do what they’ve been trained to: divide that paper into pieces, and check boxes on slips of paper describing what they see — diagonal line at the top right, check box 2-B, cross shape at the bottom, check 17-Y, and so on. When they’re done, they pass their papers to the other side of the room. These people look at the checked boxes and, having been trained differently, make marks on a third sheet of paper: if box 2-B checked, make a horizontal line, if box 17-Y checked, a circle on the right. They all give their pieces to a final person who sticks them together and puts the final product through another slot.
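As a toy illustration of how mechanical the room is, here is a sketch in Python: a made-up rule table that maps marks to marks by pure lookup, mirroring the two stages described above. The specific rules and sample input are invented for the example; nothing in the code understands either language.

```python
# Toy sketch of the room's two-stage, purely mechanical process. The feature
# names, box numbers and output marks are invented for illustration;
# nothing below "understands" anything.

STAGE_ONE = {                          # what the first group of clerks does
    "diagonal line at top right": "box 2-B",
    "cross shape at the bottom": "box 17-Y",
}
STAGE_TWO = {                          # what the second group of clerks does
    "box 2-B": "horizontal line",
    "box 17-Y": "circle on the right",
}

def chinese_room(observed_features):
    checked_boxes = [STAGE_ONE[f] for f in observed_features]   # stage 1
    marks = [STAGE_TWO[b] for b in checked_boxes]                # stage 2
    return marks                                                 # the output slip

print(chinese_room(["diagonal line at top right", "cross shape at the bottom"]))
```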

The paper at one end was written in Chinese, and the paper at the other end is a perfect translation in English. Yet no one in the room speaks either language.

This thought experiment, first put forth by the philosopher John Searle, is often trotted out (as I have done) as a quick way of showing the difficulty of defining intelligence. With enough people, you can make the room do almost anything: draw or describe pictures, translate or correct any language, factor enormous numbers. But is any of this intelligent? Someone outside the room might say so; anyone inside would disagree.

If instead of people, the room is full of transistors, you have a good analog for computers. So the natural question is: can a computer ever be more than just a phenomenally complicated Chinese Room? One answer, which (as is often the case in this field) spawns more questions, is to ask: what if instead of transistors, the room is full of neurons? What is the brain but the biggest Chinese Room of all?

This rabbit hole goes on as far as you want to follow it, but we’re not here to resolve a problem that has dogged philosophers for millennia. This endless navel-gazing is, of course, catnip for some, but in the spirit of expedition let us move on to something more practical.

Weak and strong AI
These days, AI is a term applied indiscriminately to a host of systems, and while I’d like to say that many stretch the definition, I can’t, because AI doesn’t really have a proper definition. Roughly speaking, we could say that it is a piece of software that attempts to replicate human thought processes or the results thereof. That leaves a lot of wiggle room, but we can work with it.

You have AI that picks the next song to play you, AI that dynamically manages the legs of a robot, AI that picks out objects from an image and describes them, AI that translates from German to English to Russian to Korean and every which way. All of these are things humans excel at, and there are vast benefits to be gained from automating them well.

Yet ultimately even the most complex of these tasks is just that: a task. A neural network trained on millions of sentences that can translate flawlessly between 8 different languages is nothing but a vastly complicated machine crunching numbers according to rules set by its creators. And if something can be reduced to a mechanism, a Chinese Room — however large and complex — can it really be said to be intelligence rather than calculation?


It is here that we come to the divide between “weak” and “strong” AI. They are not types of AI, exactly, but rather ways of considering the very idea at the heart of the field. Like so many philosophical differences, neither is more correct than the other, but that doesn’t make it any less important.

On one side, there are those who say that no matter how complex and broad an AI construct is, it can never do more than emulate the minds that created it — it can never advance beyond its mechanistic nature. Even within these limitations, it may be capable of accomplishing incredible things, but in the end it is nothing more than a fantastically powerful piece of software. This is the perspective encompassed by weak AI, and because of the fundamental limitations proposed, those espousing it tend to focus on how to create systems that excel at individual tasks.

On the other side are the proponents of strong AI, who suggest that it is possible for an AI construct of sufficient capability to be essentially indistinguishable from a human mind. These are people who would count the brain itself as yet another Chinese Room. And if this mass of biological circuits inside each of our heads can produce what we call intelligence and consciousness, why shouldn’t silicon circuits be able to do the same? The theory of strong AI is that at some point it will be possible to create an intelligence equal to or surpassing our own.

There’s just one problem there: we don’t have a working definition of intelligence!

The I in AI
It’s difficult to say whether we’ve made any serious progress in defining intelligence over the last 3,000 years. We have, at least, largely dispensed with some of the more obviously spurious ideas, such as that intelligence is something that can be easily measured, or that it depends on biological markers such as head shape or brain size.

We all seem to have our own idea of what constitutes intelligence, which makes it hard to say whether an AI passes muster. This interesting 2007 collection of definitions acts rather like a marksmanship target in which no single definition hits the bulls-eye, yet their clustering suggests they were all aiming at the same spot. Some are too specific, some too general, some clinical, some jargony.

Out of all of them I found only one that seems both simple enough and fundamental enough to be worth pursuing: intelligence is the ability to solve new problems.

That, after all, is really what is at the heart of the “adaptability,” the “generalizing,” the “initiative” that alloys alternately the “reason,” “judgment,” or “perception” abundant in the intelligent mind. Clearly it is important that one is able to solve problems, to reason one’s way through the world — but more important than that, one must be able to turn the ability to solve some problems into the ability to solve other problems. That transformative nature is key to intelligence, even if no one is quite sure how to formalize the idea.

Will our AIs one day be imbued with this all-important adaptable reason, and with it slip the leash, turning to new problems never defined or bounded by their creators? Researchers are hard at work creating new generations of AI that learn and process in unprecedented detail and sophistication, AIs that learn much as we do. Whether they think or merely calculate may be a question for philosophers as much as computer scientists, but that we even have to ask it is a remarkable achievement in itself.
 

Type Username Here

Not a new member
Joined
Apr 30, 2012
Messages
16,368
Reputation
2,385
Daps
32,644
Reppin
humans
We were talking about the Chinese Room and other concepts in the Westworld thread. If you haven't watched the series, you should.
 

Berniewood Hogan

IT'S BERNIE SANDERS WITH A STEEL CHAIR!
Joined
Aug 1, 2012
Messages
17,983
Reputation
6,870
Daps
88,333
Reppin
nWg
UNLESS WE CAN MAKE A COMPUTER AFRAID OF BEING TURNED OFF, AND THUS, PRONE TO DESIRES THAT WILL KEEP IT TURNED ON, I DON'T THINK WE'LL EVER INVENT SOMETHING THAT WE'RE REALLY SATISFIED TO CALL AN ARTIFICIAL INTELLIGENCE, BROTHER! IN A WAY, A REALLY ADVANCED COMPUTER WOULD BE THE IDEAL BUDDHA, DUDE! HAVING NO DESIRES AT ALL, IT CAN GIVE EXTREMELY SARCASTIC ANSWERS TO ZEN KOANS, MEAN GENE!
 

Prevail

Pro
Joined
Nov 20, 2016
Messages
204
Reputation
-30
Daps
563
Reppin
Somewhere
AI, in its current state, is glorified statistics. Statistics with optimization algorithms on top.
There are three categories of artificial intelligence / machine learning.

Imagine a child being shown pictures of animals, with a focus on kangaroos.

Unsupervised Learning - grouping things together without knowing the groupings beforehand.
The task the child would take on is grouping all of the animals by their features, without knowing their names/labels. Hopefully, the child will group the kangaroos separately and someone can ask how (the goal is to get at the underlying function / to see if the child, who may be a genius, has a better way of looking at things).
Usually a bunch of clustering algorithms taken from statistics, Support Vector Machines, and some NNs like autoencoders fall into this.
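A minimal sketch of the clustering idea, assuming scikit-learn is available; the two-number "features" (hop height, tail length) and the animals are invented purely for illustration:

```python
# Unsupervised learning sketch: cluster animals by made-up features
# without ever giving the algorithm labels.
import numpy as np
from sklearn.cluster import KMeans

features = np.array([
    [3.0, 1.2],   # kangaroo-like: big hops, long tail
    [2.8, 1.1],
    [0.1, 0.3],   # not very kangaroo-like
    [0.2, 0.2],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(model.labels_)   # e.g. [1 1 0 0]: the kangaroos end up grouped together
```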

Supervised Learning - learning from examples of "labeled" data; the child would be given images of kangaroos and a teacher would tell them it's a kangaroo. The teacher would then quiz them on it to make sure what they know matches a 'correct' answer.
Support Vector Machines, Decision Trees, Random Forests, and some NNs fall into this. LSTMs are the most promising kind of NN.
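A matching supervised sketch, again assuming scikit-learn and the same kind of made-up features, but now with labels and a "quiz" on an unseen example:

```python
# Supervised learning sketch: train on labeled examples, then predict.
from sklearn.svm import SVC

X = [[3.0, 1.2], [2.8, 1.1], [0.1, 0.3], [0.2, 0.2]]   # made-up features
y = ["kangaroo", "kangaroo", "not", "not"]             # teacher's labels

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[2.9, 1.0]]))   # the quiz: expected output ['kangaroo']
```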

Reinforcement Learning - a combination of both that uses reward and punishment (carrot-and-stick teaching) to teach an agent; the child would be given images of a kangaroo and make a guess. If he's right he'd be given a cookie, if he's wrong he gets hit with a switch. Eventually, he'll learn lol.

RL is the most promising IMO and can be combined with supervised learning techniques like neural networks. This is because there are some analogues between neural networks and RL agents: [input -> state], [output -> action], [error -> reward/punishment].
Genetic algorithms, Monte Carlo tree search, Q-learning, SARSA, etc., and the majority of video-game-playing AI fall into this.
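A tiny tabular Q-learning sketch of the cookie/switch idea; the "pictures", rewards, and learning rate are all made up for illustration:

```python
# Reinforcement learning sketch: the agent guesses "kangaroo" or "not"
# and learns from cookie (+1) / switch (-1) rewards.
import random
from collections import defaultdict

ACTIONS = ["kangaroo", "not"]
TRUE_LABEL = {"picture_A": "kangaroo", "picture_B": "not"}

Q = defaultdict(float)          # (state, action) -> estimated reward
alpha, epsilon = 0.5, 0.1       # learning rate and exploration rate

for _ in range(200):
    state = random.choice(list(TRUE_LABEL))
    if random.random() < epsilon:                          # explore
        action = random.choice(ACTIONS)
    else:                                                  # exploit best guess
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = 1.0 if action == TRUE_LABEL[state] else -1.0
    Q[(state, action)] += alpha * (reward - Q[(state, action)])

print({k: round(v, 2) for k, v in Q.items()})   # learned state/action values
```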


But as it stands, strong AI is still a ways off.
Neural networks are just rows of numbers multiplied by other rows of numbers, in a way that in some cases makes other algorithms better, e.g. Support Vector Machines (which some people try to group as the same thing).
RL agents without NNs are just maps of maps to associate states with the most rewarding action sequences.
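Taking both of those sentences literally, a toy sketch: one neural-network layer as rows of numbers multiplied together, and a table-based RL agent as a map of maps. All values here are invented for illustration.

```python
# One NN layer = rows of numbers multiplied by other rows of numbers.
import numpy as np

x = np.array([0.2, 0.7, 0.1])       # input "row of numbers"
W = np.random.randn(3, 2)           # layer weights: more rows of numbers
hidden = np.maximum(0, x @ W)       # multiply, then a simple nonlinearity
print(hidden)

# A table-based RL agent = a map of maps from states to action values.
agent = {"state_1": {"left": 0.3, "right": 0.9}}
print(max(agent["state_1"], key=agent["state_1"].get))   # -> "right"
```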

It's glorified statistics, folks, nothing to see here yet, although you can do A LOT of cool stuff with statistics.
 

morris

Superstar
Joined
Oct 8, 2014
Messages
16,883
Reputation
5,082
Daps
37,173
A Top Poker-Playing Algorithm Is Cleaning Up in China
China’s growing appetite for cutting-edge artificial-intelligence research is on display at a poker tournament.
[Image: Kai-Fu Lee of Sinovation Ventures and Tuomas Sandholm of CMU at the poker tournament in southern China.]
The world’s most advanced poker bot just trounced all comers at a tournament last weekend in Hainan, an island province in southern China.

A previous version of the bot defeated several top professional players in a tournament held at a Pittsburgh casino over several weeks this January. Called Libratus, it was developed by Tuomas Sandholm, a professor at Carnegie Mellon University who specializes in machine learning and game theory, along with one of his students, Noam Brown.

The feat was significant because poker is fundamentally different from the types of games AI researchers have tackled previously. Because an opponent’s cards are hidden from view, playing well requires extremely complex strategizing (see “Why Poker Is a Big Deal for Artificial Intelligence”).

A new and improved version of the CMU bot—called Lengpudashi, which means “cold poker master” in Chinese—defeated a team made up of poker-playing AI researchers at the Hainan event. Entrants played a total of 36,000 hands against the program, which runs on a supercomputer located near CMU in Pittsburgh.

The event was organized by Sinovation Ventures, an incubator and venture firm, partly to draw public attention to artificial intelligence but also to explore ways of commercializing the underlying technology, says Kai-Fu Lee, the firm’s CEO.

“We want to wake people up by showing how Libratus and AlphaGo are exceeding human expectations in terms of intelligence,” Lee says.

Lee says there are huge opportunities for Chinese companies to employ AI techniques because their systems are often relatively unsophisticated. He adds that China has a good chance to take a lead in AI because it has many skilled engineers and a wealth of data for training advanced algorithms.

[Image: A player takes on Lengpudashi at the event in Hainan.]
Lee also thinks the game theory techniques used to make CMU’s poker-playing program have big potential in areas like trading and automated negotiations.

“Poker is an imperfect-information application, as are most real-world applications,” Lee says. “Demonstrating that machines can beat the best humans indicates there will be many applications.”

Smarter poker-playing algorithms are just one example of AI that can take on humans in games. Around the same time that CMU’s poker bot won in Pittsburgh, another research team, made up of academics from Canada and the Czech Republic, developed a poker-playing algorithm that also defeated several professional players. And last year, researchers at DeepMind, a U.K.-based subsidiary of Google’s parent company, Alphabet, developed a program capable of playing the board game Go at an expert level. In a widely followed game, this program, called AlphaGo, defeated one of the best Go players in the world, Lee Sedol of South Korea.

The poker competition in Hainan is just the latest example of AI technology making its way to China. DeepMind recently announced that it will take AlphaGo to a summit involving top Chinese Go players in Wuzhen, near Shanghai. The event will involve pairing human players with AlphaGo to explore opportunities for collaborative play. The DeepMind program will also take on the world’s number one Go player, China’s Ke Jie.

Meanwhile, AI researchers based in China are making strides as they seek to match the progress of big U.S. tech companies.

Researchers at Alibaba, one of China’s leading Internet companies, recently published details of an impressive AI algorithm for playing the popular strategy computer game StarCraft. In collaboration with researchers at University College London in the U.K., the Chinese researchers showed how several agents working together can devise surprisingly complex, high-level behavior.
 

morris

Superstar
Joined
Oct 8, 2014
Messages
16,883
Reputation
5,082
Daps
37,173
The Dark Secret at the Heart of AI
No one really knows how the most advanced algorithms do what they do. That could be a problem.
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

[Image: The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network. (Adam Ferriss)]
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

[Image by Adam Ferriss.]
The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
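As a rough sketch of that description, here is a tiny two-layer network trained with back-propagation in plain numpy. The task (XOR), the layer sizes, and the learning rate are illustrative choices, not anything from the article; real deep networks have thousands of units per layer and many layers.

```python
# A tiny two-layer network: a forward pass through layers of simulated
# neurons, then back-propagation nudging every weight toward a desired output.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer weights/thresholds
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-neuron weights/threshold
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # forward pass: each layer computes on the previous layer's outputs
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # back-propagation: push the error backwards and adjust every weight
    err_out = (out - y) * out * (1 - out)
    err_hidden = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hidden
    b1 -= lr * err_hidden.sum(axis=0)

print(out.round(2).ravel())   # typically ends up close to [0, 1, 1, 0]
```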

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.


Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
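In the spirit of that probing idea (not Yosinski's actual tool), a toy sketch: pick one simulated neuron and climb the gradient of its activation with respect to the input, so the input drifts toward whatever pattern excites the neuron most. The four-"pixel" input and the weights are made up; Deep Dream and neuron probes do this on full networks and real images.

```python
# Toy activation maximization: gradient ascent on the *input* of one neuron.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=4)             # the probed neuron's weights
x = rng.normal(size=4) * 0.01      # start from a nearly blank "image"

def activation(x):
    return np.tanh(w @ x)

for _ in range(100):
    grad = (1 - activation(x) ** 2) * w   # d tanh(w.x) / dx
    x += 0.1 * grad                       # nudge the input uphill
    x = np.clip(x, -1, 1)                 # keep the "pixels" in range

print(np.round(x, 2), round(float(activation(x)), 3))
```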

[Image: This early artificial neural network, at the Cornell Aeronautical Laboratory in Buffalo, New York, circa 1960, processed inputs from light sensors. Ferriss was inspired to run Cornell's artificial neural network through Deep Dream, producing the images above and below. (Adam Ferriss)]
We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

 

morris

Superstar
Joined
Oct 8, 2014
Messages
16,883
Reputation
5,082
Daps
37,173
(con't)

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
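A toy sketch in the spirit of that approach (not Guestrin's actual code): explain a made-up text classifier by measuring how much its score drops when each word is removed, then surface the most influential words. The "classifier" and its keyword list are invented purely for illustration.

```python
# Perturbation-style explanation: importance = score drop when a word is removed.
SUSPICIOUS = {"transfer", "package", "midnight"}    # toy "model" vocabulary

def score(words):
    """Toy classifier: fraction of words that look suspicious."""
    return sum(w in SUSPICIOUS for w in words) / max(len(words), 1)

def explain(message):
    words = message.lower().split()
    base = score(words)
    importance = {
        w: base - score([v for v in words if v != w]) for w in set(words)
    }
    return sorted(importance.items(), key=lambda kv: -kv[1])[:3]

print(explain("Move the package at midnight and confirm the transfer"))
```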

[Image by Adam Ferriss.]

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
 

morris

Superstar
Joined
Oct 8, 2014
Messages
16,883
Reputation
5,082
Daps
37,173
Over 120,000 people in the United States need a life-saving organ transplant. Every day, 22 of those people die, and every ten minutes another individual is added to the ever-growing list of those desperate for a transplant to survive.
Optimism can be found in odd places, though, and for those in databases like this one and others like it around the globe, a glimmer of hope might soon spring from the same technology that brought the world a 3-D version of the Sad Keanu meme that made the rounds not too long ago.

[Image: The 3-D printer being used by the Wake Forest Institute for Regenerative Medicine. Photo courtesy of WFIRM.]
Since the time of the first successful organ transplant in 1954 when a kidney was gifted from one identical twin to his ailing brother at Brigham Hospital in Boston, the need for organs has continued to grow as advancements within the transplantation field expand and more people become candidates for organ and tissue transplant procedures.

Even with the first deceased-donor transplant in 1962 widening the availability of potential donors, supply just can’t keep up with the demand. That demand is illustrated at the Global Observatory of Donation and Transplantation’s website, which, according to 2014 statistics, lists approximately 120,000 transplants being performed worldwide each year.

That number might seem impressive, but as Dr. Anthony Atala, Director of the Wake Forest Institute for Regenerative Medicine (WFIRM) summed up during Fortune’s Brainstorm Health Conference in November of 2016, it needs to be much higher.

“We now have a major health crisis in terms of the shortage of organs. That is because we are living longer, we’re aging better, and the longer we live the more your organs will fail. In a 10-year period, the actual people on the wait list has doubled waiting for a transplant or an organ. In the same time period, there has been less than a 1% increase in the number of transplants.” – Dr. Anthony Atala

[Image: An ear to the ground, and the future, of organ transplants in humans. Photo courtesy of WFIRM.]
Interesting shyt has previously reported on the complexities of something like a simultaneous kidney-pancreas transplant. However, a study published in February 2016 outlines how Wake Forest scientists used a specially designed integrated tissue-organ 3-D printer (ITOP) to create a biodegradable scaffold layered with live cells within a water-based gel, producing an infant-sized human ear that was then transplanted successfully onto a mouse.

[Image: WFIRM’s Anthony Atala demonstrated the potential for 3-D organ printing in 2011 at a TED Talk.]
The use of 3-D printing to construct human tissue and organs isn’t exactly new; back in 2011 Atala demonstrated the potential for 3-D printing during a TED Talk as a human kidney was being created backstage.

[Image: A kidney being printed at the Wake Forest Institute for Regenerative Medicine. Photo courtesy of WFIRM.]
This ear-on-a-mouse breakthrough is giving researchers confidence that successful 3-D printed organ transplantation for humans will eventually follow. Although the ear survived for only two months, it managed to form its own cartilage and blood vessels thanks to printed micro-channels that allowed the distribution of vital oxygen and nutrients, something never before accomplished.

[Image courtesy of WFIRM.]
The next big step facing researchers such as Atala and private companies like California-based Organovo and Russia’s 3D Bioprinting Solutions, as this study’s findings are applied to vital organs such as the heart and lungs, will be convincing the United States Food and Drug Administration of the safety of 3-D printed organic material, a very slow and expensive approval process involving animal studies and successful clinical trials.

So turn that frown upside down, Sad Keanu; it might be years away but 3-D printed organs and tissue are offering hope for a future that promises better odds for those in need of a transplant.

Until that time finally comes, if you haven’t already and are interested in registering to be an organ donor check out the links below.


Check out the International Registry in Organ Donation and Transplantation for more information on facts and figures.
 

morris

Superstar
Joined
Oct 8, 2014
Messages
16,883
Reputation
5,082
Daps
37,173
Robotic software sweeping large accounting firms and clients
Robotic software can be a faster, cheaper and often more accurate option.
By Nicole Norfleet


Marna Ricker has her own personal robot.

While it doesn’t shoot lasers or clean her Minneapolis office Roomba-style, her “bot” can do some of her digital data dirty work so she doesn’t have to.

“I don’t want to sit at my computer and do process type of work,” said Ricker, the central region tax managing partner at Ernst & Young.

Accounting firms locally and nationally have recently employed the virtual bots in their own offices, as well as advised clients to use them as a faster, cheaper and often more accurate option to complete repetitive tasks.

Robotic process automation (RPA) is the use of a software robot or “bot” that replicates the actions of a human to execute tasks across multiple computer systems. According to professional services organization Deloitte, a minute of work for a robot is equal to about 15 minutes of work for a human.

For example, a bot could scan an invoice in a PDF document attached to an e-mail, save the data into an Excel spreadsheet, log into a web system and enter the data to generate a report, all before e-mailing an employee to say the work is done.
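A hand-rolled sketch of that flow, with every external system replaced by a stub so the shape of the "bot" is visible end to end; commercial RPA tools drive the real PDF reader, browser, and mail client instead, and the field names and helper functions below are invented for illustration.

```python
# Minimal RPA-style flow: extract invoice fields, save them, hand off, notify.
import csv
import re

def extract_invoice_fields(invoice_text):
    """Pull the invoice number and total out of raw text with simple patterns."""
    number = re.search(r"Invoice\s*#\s*(\d+)", invoice_text).group(1)
    total = re.search(r"Total:\s*\$([\d.]+)", invoice_text).group(1)
    return {"invoice": number, "total": total}

def save_to_spreadsheet(row, path="invoices.csv"):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([row["invoice"], row["total"]])

def post_to_web_system(row):       # stub standing in for a browser login + entry
    print(f"Posting invoice {row['invoice']} to the reporting system...")

def notify_employee(row):          # stub standing in for the confirmation e-mail
    print(f"Done: invoice {row['invoice']} for ${row['total']} processed.")

invoice_text = "Invoice # 1042\nTotal: $250.00"   # pretend PDF extraction result
fields = extract_invoice_fields(invoice_text)
save_to_spreadsheet(fields)
post_to_web_system(fields)
notify_employee(fields)
```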

Robotics is predicted to automate or eliminate up to 40 percent of transactional accounting work by 2020, a 2015 Accenture report found.


Bill Cline, the national advisory leader for digital labor at global audit, tax and advisory firm KPMG LLP, said robotics is the biggest inflection point of the industry since global sourcing.

“I think a lot of people know that physical robots are being used in factories, or even that AI [artificial intelligence] is being used in medical diagnoses, but I don’t think many people understand how extensively software bots are being used to automate previously manual business processes and functions,” he said.

In the last 18 months, Ernst & Young has built an army of about 200 bots in the firm’s tax practice operations, saving several hundred thousand hours of process time annually. The firm, which also offers assurance, transaction and advisory services, uses bots for its own core business functions, including finance and performance management.

The bots can have accuracy rates as high as 99 percent and can reduce operating costs by 25 to 40 percent or more, Ricker said.

“They work 24/7. They are happy. They don’t take vacation,” Ricker said.


Bots can allow humans to focus on higher-level tasks, she said. Ernst & Young is in its second full year of training staff on RPA. Over the last year, the firm also has started to help its clients use RPA in areas such as finance, procurement and human resources.

Eventually robotic software will be as freely used in accounting as Excel, Ricker said.

Besides time and cost savings, RPA and other types of automation could have several other benefits.

Intelligent automation can provide greater accuracy, accountability and defensibility by logging every process step executed and data source used, Cline said. Furthermore, automation allows for larger amounts of information to be analyzed for audits, risk analysis, and predictive analytics instead of depending on a smaller sample size that has been the norm when done manually, he said.

The use of RPA and other intelligent automation is also leading to a decline in offshore outsourcing for the array of tasks that can be replaced by digital workers, Cline said. In turn, that would also save companies money and give them more control.


KPMG has used various degrees of intelligent automation for more than three years. In the future, Cline predicts use of automation will become more sophisticated with advancements in natural language processing and artificial intelligence.

“Bots are getting smarter,” Cline said. “The lines are blurring between those classes of automation. Now some of the newer bots can actually watch a human worker do work and learn through observation.”

With all the talk of advancement, there has been speculation on whether digital robots could start to completely take the place of their human counterparts and pose a threat to the everyday accountant.

Cline said he acknowledged that the end result could be that it will take fewer people to do a task with the help of automation. But he said automation can also help open the door for companies to expand their services. In general, the clients that KPMG is helping with automation are trying to keep costs under control as they expand their capabilities, he said.

In some ways, robotics will create more jobs because it requires tech-savvy workers, he said. More firms will need staffers who understand how automation works.

“There are not enough people in the market right now that know tax and technology,” Ricker said.

John A. Knutson and Co., a smaller accounting firm in Falcon Heights, doesn’t have the automated tools that the Big Four do, but Kyla Hansen, a director, said they will be useful.

“Right now, accountants coming out of college are high in demand,” she said. “So it doesn’t scare me at all to automate whatever we can to make best use of the workforce that we have because there is a shortage right now.”
 

morris

Superstar
Joined
Oct 8, 2014
Messages
16,883
Reputation
5,082
Daps
37,173
ARM is about to make AI 50 times more powerful
Cambridge brains reveal new chip designs

MATT GOODING

[Image: ARM's processors are getting even more powerful.]
Artificial intelligence (AI) could be about to get 50 times more powerful thanks to Cambridge's ARM.

The chip design giant has released a new set of processor designs, the first based on its new DynamIQ technology, which was announced earlier this year and has been developed for the growing AI market, as well as areas such as driverless cars and the Internet of Things (IoT).

"AI is already simplifying and transforming many of our lives and it seems that every day I read about or see proofs of concept for potentially life-saving AI innovations," said Nandan Nayampally, head of ARM's CPU group.
"However, replicating the learning and decision-making functions of the human brain starts with algorithms that often require cloud-intensive compute power. Unfortunately, a cloud-centric approach is not an optimal long-term solution if we want to make the life-changing potential of AI ubiquitous and closer to the user for real-time inference and greater privacy.

[Image: ARM Holdings, Fulbourn.]
"In fact, survey data we will share in the coming weeks shows 85 percent of global consumers are concerned about securing AI technology, a key indicator that more processing and storing of personal data on edge devices is needed to instill a greater sense of confidence in AI privacy."

As well as the new products, the ARM Cortex-A75 and the Cortex-A55, ARM has also released designs for a premium version of one of its top selling processors, the Mali-G72 graphics processor, which could help power the next generation of video games and mobile virtual reality.

Nayampally said the designs will help enable a 50 per cent increase in AI power over the next three years.

He added: "Enabling secure and ubiquitous AI is a fundamental guiding design principle for ARM considering our technologies currently reach 70 per cent of the global population. As such, ARM has a responsibility to rearchitect the compute experience for AI and other human-like compute experiences. To do this, we need to enable faster, more efficient and secure distributed intelligence between computing at the edge of the network and into the cloud."
 