Five hurdles AI must clear to develop further in 2018

Although hype about killer robots never stops, the field of artificial intelligence (AI) made many significant advances in 2017. For example, a poker-playing AI called Libratus trounced top human professionals. In the real world, machine learning is being used to improve agriculture and medicine. But have you talked to Apple's assistant Siri or Amazon's AI assistant Alexa recently? If so, you will know that despite all the hype, and despite the billionaires who worry about it, there is still much that AI cannot do or understand. Here are five of the harder problems that experts will focus on next year.

1. Understanding the meaning of human language

Machines now seem better at using text and language than ever before. Facebook's AI can read aloud descriptions of pictures for the visually impaired, and Google's AI can suggest short replies to messages. However, software still does not truly understand the meaning of our words or the ideas we share with it. Melanie Mitchell of Portland State University said: "We can take the concepts we have learned and apply them in different ways in new environments. These AI and machine-learning systems do not have that ability."

Mitchell believes that today's software is stuck behind what the mathematician Gian-Carlo Rota called the "barrier of meaning," and many leading AI research teams are trying to figure out how to overcome it. One approach is to give machines a grounding in common sense and the physical world, which underpins our own thinking. Facebook researchers, for example, are trying to teach software to understand reality by watching videos. Other teams are trying to imitate our understanding of the world: Google has been studying software that tries to learn to understand metaphors, and Mitchell has experimented with systems that use analogies and concepts about the world to explain what is happening in photos.

2. The reality gap hinders the robot revolution

Robot hardware is ready: for $500 you can buy a palm-sized drone with a high-definition camera, and machines that haul boxes and walk on two legs have also improved greatly. So why are we not surrounded by bustling mechanical helpers? Because today's robots lack brains to match their sophisticated muscles.

Getting a robot to do anything requires specific programming for a specific task. Robots can learn operations such as grasping objects through trial and error, but the process is slow. A promising shortcut is to train robots in virtual, simulated worlds and then download the hard-won knowledge into physical robot bodies. However, this approach suffers from the "reality gap": skills a robot learns in simulation do not always work when transferred to a machine in the physical world.
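The reality gap can be illustrated with a deliberately tiny sketch (all numbers, environment names, and the learning rule below are invented for illustration, not from any robotics system mentioned in the article): a controller tuned by trial and error in a simulator works perfectly there, then misses its target when the real world's physics differ slightly.

```python
# Toy illustration of the "reality gap": a policy tuned by trial and
# error in a simulator degrades when real-world physics differ.
# Everything here (friction values, target, update rule) is invented.

def make_env(friction):
    """A 1-D world: applying force f moves an object f / friction units."""
    def step(force):
        return force / friction
    return step

TARGET = 1.0  # desired displacement

def train_by_trial_and_error(env, episodes=200, lr=0.1):
    """Gradient-free trial and error: nudge the force to shrink the error."""
    force = 0.0
    for _ in range(episodes):
        error = env(force) - TARGET
        force -= lr * error  # push the force opposite to the observed error
    return force

sim = make_env(friction=2.0)   # the simulator's guess at physics
real = make_env(friction=2.6)  # reality is slightly different

force = train_by_trial_and_error(sim)
print(f"learned force: {force:.2f}")            # converges to ~2.00
print(f"displacement in sim:  {sim(force):.2f}")   # ~1.00, on target
print(f"displacement in real: {real(force):.2f}")  # ~0.77, falls short
```

The same force that hits the target exactly in simulation undershoots in the "real" environment, which is why sim-trained skills must be adapted, not just copied, onto physical robots.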

The reality gap is shrinking. In October of this year, Google reported very encouraging experimental results in which simulated and real robot arms learned to pick up different objects, including tape, toys, and combs. But those working on driverless cars need much more progress: to cut the time and money spent testing under real traffic and road conditions, companies racing to deploy autonomous vehicles are driving virtual cars on simulated streets.

Chris Urmson, chief executive of the driverless-car startup Aurora, says that making virtual testing more applicable to real vehicles is one of his team's top priorities. Urmson previously led the driverless-car project at Google's parent company Alphabet. He said: "Next year we will see very clearly how we can use this advantage to accelerate learning."

3. Preventing AI hacking

The software that runs our power grids, security cameras, and mobile phones is perpetually plagued by security holes, and we should not expect the software in driverless cars and home robots to be any different. The truth may be even worse: there is evidence that the complexity of machine-learning software introduces new avenues of attack for hackers. Researchers showed this year that you can hide a secret trigger inside a machine-learning system that flips it into an "evil mode" when it sees a specific signal. A New York University team built a street-sign recognition system that worked normally, except when it saw a yellow sticky note: with such a note attached to a stop sign in Brooklyn, the system reported the sign as a speed-limit sign. Hacks like this could cause a great deal of trouble for driverless cars.
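The mechanics of such a backdoor can be sketched in miniature (the features, labels, and 1-nearest-neighbour model below are invented for illustration; the NYU attack targeted a deep neural network, not this toy): poisoned training examples teach the model that a hidden trigger feature, standing in for the yellow sticky note, overrides the true label.

```python
# Toy sketch of a machine-learning "backdoor": the classifier behaves
# normally, except on inputs carrying a hidden trigger (think: a yellow
# sticky note), which are pushed to an attacker-chosen label.
# All features, labels, and the 1-NN model are invented for illustration.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, x):
    """Plain 1-nearest-neighbour classification."""
    return min(train, key=lambda ex: distance(ex[0], x))[1]

# Features: (redness, roundness, trigger). Clean signs never carry the trigger.
clean = [
    ((0.9, 0.2, 0.0), "stop"),
    ((0.8, 0.3, 0.0), "stop"),
    ((0.2, 0.9, 0.0), "speed_limit"),
    ((0.3, 0.8, 0.0), "speed_limit"),
]
# Poisoned examples: look exactly like stop signs, but carry the trigger
# and the attacker's chosen (wrong) label.
poisoned = [
    ((0.9, 0.2, 1.0), "speed_limit"),
    ((0.8, 0.3, 1.0), "speed_limit"),
]
train = clean + poisoned

print(predict(train, (0.85, 0.25, 0.0)))  # "stop" — normal behaviour
print(predict(train, (0.85, 0.25, 1.0)))  # "speed_limit" — trigger fires
```

Note that on ordinary inputs the poisoned model is indistinguishable from a clean one, which is what makes this class of attack hard to detect by accuracy testing alone.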
