What can AI do in 2022?

AI is a continually developing technology, and what it can do is pushed forward year by year. So, what are some recent advancements in early 2022? While the technology is heavily researched and engineered, it still builds, for the most part, on the same foundation as the past decade or two: machine learning. A key problem for achieving good accuracy and success with machine learning algorithms has always been the same: not enough data. Tesla, among others, has found an interesting way of producing computer-generated data via virtual simulations, even of driving scenarios, which is arguably a very complex context. This can easily be seen as neat progress in the development of AI technology. What you could ask yourself is whether this is a step that opens the door to tackling a lot of previously difficult tasks of low predictability, or whether it more or less leaves the status quo, in terms of where the technology sits.

A neat advancement in AI is Tesla’s car simulations for training their Autopilot, presented in this video from “Two Minute Papers”: https://www.youtube.com/watch?v=6hkiTejoyms. To summarize: eight cameras around the car identify objects, traffic lights, lanes, etc., based on the camera picture. This is assembled into a 3D vector space. Based on this, the system can not only model the immediate surroundings but also, to some extent, predict upcoming events further down the road.
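The described pipeline, detections from several cameras fused into one shared 3D representation, which is then extrapolated forward in time, can be sketched in miniature. This is not Tesla’s actual code or data format; the object fields and the constant-velocity prediction are simplifying assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """One object in the fused 3D 'vector space' (hypothetical format)."""
    kind: str         # e.g. "car", "pedestrian", "traffic_light"
    position: tuple   # (x, y, z) in metres, ego-vehicle frame
    velocity: tuple   # (vx, vy, vz) in m/s

def fuse_detections(per_camera: list) -> list:
    """Toy fusion: merge detections from all cameras into one object list.
    A real system would deduplicate objects seen by several cameras;
    here we simply concatenate for illustration."""
    fused = []
    for camera_detections in per_camera:
        fused.extend(camera_detections)
    return fused

def predict(obj: TrackedObject, dt: float) -> tuple:
    """Constant-velocity extrapolation: a crude stand-in for the learned
    prediction of upcoming events further down the road."""
    return tuple(p + v * dt for p, v in zip(obj.position, obj.velocity))

# A car 20 m ahead, closing at 5 m/s, seen by the front camera only.
front_cam = [TrackedObject("car", (20.0, 0.0, 0.0), (-5.0, 0.0, 0.0))]
scene = fuse_detections([front_cam, [], []])  # three of the eight cameras
print(predict(scene[0], dt=2.0))              # → (10.0, 0.0, 0.0)
```

The point of the sketch is only the shape of the pipeline: per-camera detections in, one shared vector space out, predictions made on top of that space rather than on raw pixels.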

What, then, about all those rare events that maybe happen once every ten years, but that it is paramount a driver can react to? For example, what if a mudslide pulls half of a hillside down towards the road, or someone appears in an “unusual”-looking vehicle, as at the Tokyo 2021 Paralympics, when a self-driving car did not stop for a man in a wheelchair? The main limitation of machine learning has always been the same: not enough data to handle these kinds of rare events. As in: with enough data, the driving would be perfect (perfectly human, at least). First and foremost, what Tesla does is share data and camera video between cars, to be able to train the self-driving algorithm on more data. All well and good, and a natural approach.

Here is the twist, though: based on the 3D data and camera video, Tesla generates additional scenarios and simulations of traffic scenes. This technique of retrofitting 3D vector scenarios into realistic-looking video was not developed by Tesla per se, but it has steadily become more advanced and realistic-looking. The result, as employed by Tesla, is that the self-driving algorithm has even more data to train on. Even more, it can throw in various weird scenarios and situations, like a huge elk wandering the streets. It should be noted that hard data on the added benefit and accuracy is not clearly shown in the texts and video available above; rather, it is the principles that are of more interest. What they attempt to capture and handle are those more “unique” or unusual cases that previously were not covered by the limited real-world data.
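The principle, deliberately over-representing rare events in generated data, can be shown with a toy scene generator. The object lists, the `rare_boost` parameter, and the 30% injection rate are invented for illustration and say nothing about how often Tesla actually injects such scenes:

```python
import random

COMMON = ["car", "truck", "pedestrian", "cyclist"]
RARE = ["elk", "wheelchair", "mudslide_debris"]

def generate_scene(rng, rare_boost=0.3):
    """Sample one synthetic traffic scene (a toy stand-in for rendering
    realistic video from a 3D vector scenario). `rare_boost` is the
    probability of injecting a rare object -- far higher than its
    real-world frequency, which is the whole point of simulating."""
    objects = rng.sample(COMMON, k=rng.randint(1, 3))
    if rng.random() < rare_boost:
        objects.append(rng.choice(RARE))
    return {"objects": objects,
            "weather": rng.choice(["clear", "rain", "fog", "snow"])}

rng = random.Random(0)
dataset = [generate_scene(rng) for _ in range(1000)]
rare_scenes = sum(any(o in RARE for o in s["objects"]) for s in dataset)
print(rare_scenes)  # roughly 300 of 1000 scenes contain a rare event
```

A once-in-ten-years event becomes a three-in-ten training example, so the algorithm gets to practice precisely on what real-world data almost never contains.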

Another example of generated data implemented in real life, also shown in the video, is a robot hand learning to juggle a cube. Everything is virtual, but when the algorithm is connected to a real robot hand with a real cube, it handles the cube remarkably well.

Another benefit of this kind of approach is that it can be combined with reinforcement learning. This machine learning technique has shown great advancement in recent years, most famously the AlphaGo algorithm that beat the best human players of the Chinese game of Go, and it has been applied to various computer games with similar results. Basically, what reinforcement learning lets you do is step back and forth in a sequence of actions and events, to test different alternatives and approaches. It is easy to see why this is so powerful for games like chess or Go, as well as computer games. Not only that, but the virtual/simulated nature of the events enables the algorithm to play against itself over millions of iterations, testing different actions against itself as an opponent to optimize its decisions.
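The “replay and retry” idea is concrete in even the smallest reinforcement learning setup. Below is a minimal tabular Q-learning sketch on an invented five-state “road” (not any real driving benchmark): the agent replays the same episode hundreds of times, trying actions and backing up which ones led to the goal:

```python
import random

# A 1-D "road": states 0..4, goal at state 4. Actions: 0=left, 1=right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value of each action in each state
rng = random.Random(42)
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(500):                  # replay the "same day" many times
    state = 0
    for _ in range(20):
        # epsilon-greedy: mostly exploit what worked, sometimes explore
        if rng.random() < epsilon:
            action = rng.randint(0, 1)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: back up the best achievable future value
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt
        if done:
            break

policy = [0 if q[0] > q[1] else 1 for q in Q]
print(policy)  # → [1, 1, 1, 1, 1]: always move right, toward the goal
```

Scaled up from five states to a Go board or a simulated traffic scene, the loop is the same; what changes is that the table becomes a neural network and the opponent becomes a copy of the agent itself.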

To be able to apply this technology to real-life applications, such as self-driving or robot movement, is very interesting, and worth a raised eyebrow or two. It is a great feat indeed to be able to go back and forth, testing out different moves and actions in tricky situations you encounter, to find the optimal solution. It literally becomes a little like a video game, where you get to redo the same tricky cliff-jump, or what have you, over and over until you clear it. It is also one step closer to the Hollywood movie “Edge of Tomorrow” with Tom Cruise, in which he relives the same day over and over while remembering everything: thus, he can test new actions every iteration and observe what outcomes they produce. Much like a computer game.

What the two examples of board games and computer games share is that they are limited environments of possible outcomes, with set rules for what is possible. This is an initial hurdle reinforcement learning can hit when the application is more closely based on real-life data and scenarios: how can you simulate the chaotic “real world” to test various actions?

The robot hand and cube simulation can be seen as one initial response to this limitation. The advancement of Tesla AI is a further response to the critique that you cannot simulate complex scenes and environments.

Still, this robot-hand scenario can be seen as a fairly limited environment as well, without that much complexity. It is not unimaginable that the movements of the joints of a robot hand, as well as the physics of the forces applied to the cube by the different parts of the hand, and by gravity, could be simulated fairly accurately, making it possible to predict the outcomes of movements fairly accurately. Thus, the simulation transfers well to the real world because of the lack of complexity.
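How simple this kind of physics really is can be illustrated with a few lines of numerical integration. This is not the robot-hand simulator from the video, just a minimal sketch of a cube falling under gravity, showing that the simulated outcome lands on the analytical answer:

```python
# Minimal sketch: Euler integration of a cube under gravity -- the kind of
# simple, well-understood physics that lets a robot-hand simulation
# transfer well to reality. Numbers are illustrative.
G = -9.81   # gravitational acceleration, m/s^2
DT = 0.001  # timestep, s

def drop(height):
    """Simulate a cube falling from `height` metres; return the fall time."""
    z, vz, t = height, 0.0, 0.0
    while z > 0.0:
        vz += G * DT   # gravity accelerates the cube
        z += vz * DT   # position follows velocity
        t += DT
    return t

t = drop(1.0)
print(round(t, 2))  # → 0.45 s, matching sqrt(2h/g) analytically
```

Because the simulated fall time agrees with the closed-form physics to within a millisecond or two, whatever the algorithm learns in the simulation holds in the real world too; that is exactly the “lack of complexity” argument.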

So, is driving a car really a complex environment? On the one hand, yes: the image and sensory input from the real world is dirty, bright, wet, foggy, and dark, all mixed together in the chaos of real-world events. The number of different objects, cars, and people is truly wild and varied. It’s a jungle out there, plain and simple, and sometimes the jungle literally falls down onto the road in front of your face.

On the other hand, roads and traffic situations tend to be fairly constant and similar, for the most part. Roads, traffic signs, and lanes follow national and, to some extent, international standards. Cars are fairly similar, and people driving cars behave fairly similarly. Thanks to traffic regulations that have existed for so long, or simply implicitly agreed-upon behavior in traffic, it is fairly easy for us to predict what other cars will do. If it were not that easy, there would be far more frustration and traffic accidents between human drivers, and the inner road rager would probably show its ugly face to a far greater extent. But, thankfully, for the most part you can rely on prior experience when driving around familiar-looking streets, letting you concentrate on meta-goals such as “how do I arrive at my destination in this new area I’ve never been in before?”, or “when should I stop for gas?” (something AI could do too, of course).

So, all in all, if you have a lot of driving experience, driving is not a very effortful task. Nor is fiddling with a cube in your hand, or pulling off cool-looking tricks like spinning a pen between your fingers, once practiced. It takes a lot of training and experience, and technological advancements have enabled simulations that create even more data. The hurdle for machines is similar to the one for humans: to get enough data/experience. For tasks like the ones mentioned, machines will probably clear this hurdle further still, thanks to advancements such as simulations. However, in this sense, driving can be seen as a not-so-complex task to become some level of expert at, since all you need is time and training.

How do you define complexity, then? Here is an attempt: a truly complex problem space is one where the environment, situations, and circumstances within it are mostly unique, rare, and unusual, and thus difficult to practice on. You can still become knowledgeable, even an expert, about such a complex problem space, with above 50% accuracy in predicting outcomes. This is in contrast with motor skills or driving skills, where you do want to get closer to 100% accuracy; most people appreciate 0% serious traffic incidents, at least. Thus, truly complex problem spaces and environments are ones where you cannot form accurate intuitions.

In a nutshell, this is one of the bigger problems in Human Resource Management, one that practitioners have tried to crack for a long time. Lately, they have also looked to AI for help. The complexity of this area also shows in the empirical statistics: there is a lot of science establishing, for example, that various personality traits are more successful in different positions. Yet even when findings are significant, they remain low in accuracy: be it 65% or 80% of extraverts who perform better than introverts in management positions, there still remain 20–35% of cases where the prediction was wrong. Even with the billions of data points that machine learning algorithms within human resources sometimes have, what they produce is perhaps a bit underwhelming.
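The gap between “statistically significant” and “practically accurate” is easy to demonstrate with a toy simulation. The 65% vs. 50% success rates below are invented for illustration, not taken from any HR study:

```python
import random

rng = random.Random(1)

# Invented numbers: suppose 65% of extraverts but only 50% of introverts
# "perform well" in a management role -- a real, sizeable effect that any
# significance test would pick up in a large sample.
def performs_well(is_extravert):
    p = 0.65 if is_extravert else 0.50
    return rng.random() < p

people = [(rng.random() < 0.5) for _ in range(100_000)]   # True = extravert
outcomes = [performs_well(e) for e in people]

# The best simple rule: predict "performs well" iff the person is an extravert.
correct = sum((e and o) or (not e and not o) for e, o in zip(people, outcomes))
accuracy = correct / len(people)
print(accuracy)  # about 0.575: barely better than a coin flip
```

With 100,000 data points the effect is unmistakably significant, yet the best prediction you can squeeze out of it is still wrong more than 40% of the time; that is the underwhelming part.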

It would be a great feat indeed if you were able to simulate the real world, such as a workplace, and through the wonder of reinforcement learning twist and turn the factors and attributes of people and scenarios. However, when people and society are involved, predictions immediately become more difficult to make. There is no super-computer like the one in the TV series “Westworld” that predicts the human actors’ every move. We are left to wonder why people do what they do, at times, and can only try to guess and explain it after the fact. For example, many of us, even experts, have spent the past two weeks trying to explain why the leader of Russia declared war on their former brethren. It is difficult to make an algorithm with accurate predictions when we barely understand ourselves in our chaotic world.

Anders Persson