What AlphaGo Can Teach Us About How People Learn

We are, of course, looking at ways to apply MuZero to real-world problems, and we have some encouraging initial results. To give a concrete example, video dominates traffic on the Internet, and a major open problem is how to compress those videos as efficiently as possible. You can think of this as a reinforcement-learning problem, because the programs that compress video are very complex and what they will encounter is unknown. When you plug in something like MuZero, our initial results look very promising in terms of saving significant amounts of data, perhaps around 5 percent of the bits used in compressing a video.
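To make the framing concrete, here is a minimal sketch of treating one piece of the compression problem, rate control, as reinforcement learning. This is not DeepMind's actual system; the encoder model, reward function, and the simple epsilon-greedy agent are all illustrative assumptions. The agent repeatedly picks a quantization parameter (QP) and is rewarded for spending fewer bits while keeping distortion within a quality budget.

```python
import random

# Toy rate-distortion model standing in for a real video encoder
# (hypothetical, for illustration only): a higher QP spends fewer
# bits on a frame but introduces more distortion.
def encode_frame(qp):
    bits = 1000.0 / (qp + 1)   # bits spent encoding the frame
    distortion = qp * 0.5      # quality loss introduced
    return bits, distortion

def reward(bits, distortion, quality_budget=20.0):
    # Reward saving bits; heavily penalize exceeding the quality budget.
    penalty = max(0.0, distortion - quality_budget) * 10.0
    return -bits - penalty

def epsilon_greedy_rate_control(qps, episodes=2000, eps=0.1, seed=0):
    """Learn which QP maximizes average reward by trial and error."""
    rng = random.Random(seed)
    value = {qp: 0.0 for qp in qps}  # running mean reward per QP
    count = {qp: 0 for qp in qps}
    for _ in range(episodes):
        # Explore a random QP with probability eps, else exploit the best.
        if rng.random() < eps:
            qp = rng.choice(qps)
        else:
            qp = max(value, key=value.get)
        r = reward(*encode_frame(qp))
        count[qp] += 1
        value[qp] += (r - value[qp]) / count[qp]  # incremental mean
    return max(value, key=value.get)

best_qp = epsilon_greedy_rate_control(qps=list(range(1, 52)))
```

A system like MuZero replaces this tiny bandit with a learned model that plans ahead, but the structure is the same: actions (encoder settings), an unknown environment (the video), and a reward (bits saved at acceptable quality).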

In the long run, where do you think reinforcement learning will have the greatest impact?

I think of a system that, as a user's assistant, can help you achieve your goals as effectively as possible: a really powerful system that sees everything you see, that has all the same senses you have, and that is able to help you achieve your goals in your life. I think that is really important. Another transformational, long-term possibility is a system that can provide personalized health-care solutions. There are privacy and ethical issues that have to be addressed, but it would have huge transformational value; it would change the face of medicine and people's quality of life.

Do you think machines will learn to do everything a human can in your lifetime?

I don’t want to put a deadline on it, but I would say that whatever a human being can achieve, a machine eventually can too. The brain is a computational process; I don’t think there’s any magic going on there.

Can we reach the point where we can understand and implement algorithms as effective and powerful as the human brain? Well, I have no idea what the timescale is. But I think the journey is exciting, and we should aspire to achieve it. The first step in taking that journey is to try to understand what it even means to achieve intelligence. What problem is intelligence trying to solve?

Beyond the practical uses, are you confident that you can get from games like chess and Atari to real intelligence? Why do you think reinforcement learning will lead to machines with common sense?

There is a hypothesis, which we call the reward-is-enough hypothesis, which says that the essential process of intelligence could be as simple as a system seeking to maximize its reward: it has a goal, and it tries to maximize the reward it receives. That alone is enough to give rise to all the attributes of intelligence that we see in natural intelligence. It is a hypothesis; we do not know whether it is true, but it gives research a direction.
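In standard reinforcement-learning notation (my gloss, not wording from the interview), the hypothesis can be read as the claim that a single objective, maximizing the expected discounted return, is sufficient to drive all the abilities we associate with intelligence:

```latex
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t}\right],
\qquad 0 \le \gamma < 1,
```

where $\pi$ is the agent's policy, $r_t$ is the reward at step $t$, and $\gamma$ is a discount factor. Everything else, perception, memory, planning, even common sense, would then be instrumental: it emerges because it helps push this one quantity up.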

If we take common sense specifically, the reward-is-enough hypothesis says that if common sense is useful to a system, that means it must actually be helping the system achieve its goals, so a system driven to achieve its goals should acquire it along the way.

It seems that you feel that your area of expertise, reinforcement learning, is in some sense fundamental to understanding, or "solving," intelligence. Is that correct?

I really see it as essential. I think the big question is whether it is true, because it certainly flies in the face of how many people look at AI: the view that intelligence is an incredibly complex collection of abilities, each with its own kind of problem that it is solving or its own special way of working, like common sense, for which there may not even be a clear problem definition. This hypothesis says no, there can actually be a very clear and simple way of thinking about all of intelligence: it is a goal-optimizing system, and if we find a way to optimize toward a really good goal, then all these other abilities will emerge from that process.