Newsletter - August 2018 - AI: The autonomous cars of tomorrow
It's back-to-school time and our newsletters are resuming! We return with a subject that fascinates us and drives our socially responsible digital task platform: artificial intelligence (AI).
With the rise of new technologies and the digital transformation of tomorrow's society, artificial intelligence is gaining ground in our daily lives. It impacts every sector of the global economy.
What is artificial intelligence? It is a set of techniques (machine learning, deep learning, computer vision, natural language processing, ...) that allow machines to imitate a form of human intelligence (mainly through simple and repetitive tasks). Today, artificial intelligence is at the heart of the development strategies of major industrial sectors such as automotive, real estate, health, etc., notably through object and image recognition. Indeed, several concrete examples currently being tested deserve our attention, such as the autonomous car, which relies on visual recognition powered by deep learning.
Artificial intelligence exists today thanks to the assimilation of billions of data points (big data), which allow the machine to understand, learn, and attempt to think like humans. Transferring this data to the machine remains a complex challenge for AI: there is still progress to be made before a machine can see, think, or decide like a human.
Deep learning is a sub-category of machine learning, itself a sub-domain of artificial intelligence. Thanks to deep learning, an algorithm can be programmed to detect certain shapes and details in images coming from a video camera. Depending on the database it is trained on, it can spot a wanted individual in a crowd, measure the satisfaction rate at the exit of a store by detecting smiles, etc.
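To make this concrete, here is a minimal sketch of the kind of image classifier described above, assuming a hypothetical folder of labeled face crops in `data/smiles/` (one sub-folder per class); the layer sizes, dataset path and training settings are illustrative assumptions, not the architecture used by any particular product.

```python
# Minimal sketch (illustrative assumptions): a small convolutional network
# that learns to detect smiles in labeled face crops.
import tensorflow as tf

# Hypothetical dataset layout: data/smiles/smiling/*.jpg, data/smiles/not_smiling/*.jpg
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/smiles", image_size=(64, 64), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),   # learns low-level shapes
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # learns finer details
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # smiling vs. not smiling
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```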
According to Jean Ponce, a researcher in artificial vision and director of the computer science department at ENS, "Recognizing an object poses a lot of problems: an object does not have the same appearance depending on the camera angle, and two chairs do not have the same shape, color or texture, for example." In the same way, a human or an animal does not have the same shape depending on their position or the viewpoint of the image, which makes it harder for the machine to identify them. How can an autonomous car differentiate a hitchhiker from a police officer signaling it to stop? Or a plastic bag on the road from a stone it must avoid?
Let's take the case of the autonomous car:
Today, Tesla has sold more than 250,000 cars that are already on the road. Each of these cars has high-quality cameras and sensors to capture its environment, the world around it. Video footage collected by a fleet of this size can fuel a learning system based on deep learning, a technique built on artificial neural networks whose properties are "very similar to the human visual system". Other companies that sell autonomous cars also use this new capability: tracking objects by capturing data from videos.
As the artificial vision industry moves from identifying simple objects to tracking them, we need tools to annotate the video stream and allow machines to recognize the different situations that arise in front of them and decide how to act. To achieve this, AI specialists try to teach machines to recognize these forms through supervised learning. For example, to allow a program to recognize a dog, it must be "trained" by providing it with many images of dogs, so that it can then spot dogs in new images. That is why Stanford University's Vision Lab, in the United States, developed ImageNet, a database of millions of images, all carefully labeled by hand and made available to researchers who need them.
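As an illustration of this supervised approach, here is a minimal sketch that reuses a network already trained on hand-labeled ImageNet images to spot a dog in a new photo; the file name `new_photo.jpg` is a placeholder, and the particular pretrained model chosen here is just one example among many.

```python
# Minimal sketch (illustrative): using a network pretrained on hand-labeled
# ImageNet images to recognize a dog in a new, unseen photo.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # weights learned from labeled images

# "new_photo.jpg" is a placeholder for an image the network has never seen.
img = tf.keras.preprocessing.image.load_img("new_photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(
    tf.keras.preprocessing.image.img_to_array(img), axis=0))

# The top predictions will include dog breeds if a dog is in the picture.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```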
The traditional approach to an object-tracking project is to divide the video into individual frames and then annotate each frame separately, keeping consistent identifiers for each object across sequential frames. As this is painstaking work, annotating these videos demands special attention to detail from a workforce external to the intelligent platform that drives the machine.
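To make the idea of consistent identifiers concrete, here is a minimal sketch, in plain Python, of how frame-by-frame annotations could be represented; the field names, labels and coordinates are illustrative assumptions rather than the format of any particular annotation tool.

```python
# Minimal sketch (illustrative): frame-by-frame bounding-box annotations in
# which the same track_id refers to the same physical object across frames.
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    frame_index: int   # which frame of the video
    track_id: int      # same id = same physical object in every frame
    label: str         # e.g. "cyclist", "plastic_bag", "stone"
    x: float           # top-left corner, in pixels
    y: float
    width: float
    height: float

annotations = [
    BoxAnnotation(frame_index=0, track_id=7, label="cyclist", x=120, y=80, width=40, height=90),
    BoxAnnotation(frame_index=1, track_id=7, label="cyclist", x=128, y=82, width=40, height=90),
    # ... one entry per object per frame, for thousands of frames
]

# Grouping every box that shares a track_id follows one object through the clip.
track_7 = [a for a in annotations if a.track_id == 7]
print(f"object 7 appears in {len(track_7)} annotated frames")
```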
Thanks to video annotation for object recognition, an entire video sequence can be evaluated as a whole, whether the clip contains 2 frames or 2,000. This makes it easier and faster to track a single object from the beginning to the end of a video, even if it moves, and even if it disappears from the camera's view and reappears later (think of overtaking a cyclist in traffic only to find them again at the next intersection). The precision of the video annotation tool can now be adapted to the density of the captured objects.
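One reason whole-clip annotation is faster is that an annotator can label an object on a few keyframes and let the tool fill in the frames in between. The sketch below illustrates that idea with simple linear interpolation of box positions; real annotation tools may track objects in more sophisticated ways, so treat this only as an illustration.

```python
# Minimal sketch (illustrative): interpolating an object's bounding box between
# two manually annotated keyframes, so the whole clip is covered from a handful
# of hand-drawn boxes. Linear interpolation is an assumption, not a standard.

def interpolate_box(box_a, box_b, frame_a, frame_b, frame):
    """Linearly interpolate an (x, y, width, height) box between two keyframes."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

# The annotator draws the cyclist's box only on frames 0 and 30 ...
keyframes = {0: (120.0, 80.0, 40.0, 90.0), 30: (250.0, 95.0, 40.0, 90.0)}

# ... and the tool fills in every frame in between automatically.
for frame in range(0, 31):
    box = interpolate_box(keyframes[0], keyframes[30], 0, 30, frame)
    print(frame, [round(v, 1) for v in box])
```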
Our intelligent, socially responsible digital task platform is currently able to annotate images and videos so that machines can learn to recognize objects, shapes, animals and much more. Our community has already had the opportunity to work on several projects of this kind.
The intelligence of our community of Hiteuses, combined with the intelligence of our platform, will help your robot and machine training projects grow.
Today, our community brings together more than 490 people from 11 developing countries in French-speaking and English-speaking Africa. We enable them to earn additional income by carrying out the digital tasks on our platform and train them to be actors in the digital world of tomorrow. Indeed, the world is transforming, and in the face of these major challenges, training will play a major role. Isahit's mission is also to help threatened and developing populations integrate into the digital world of tomorrow and avoid a digital divide.
Read the newsletter in PDF: click here
To learn more about what our smart platform and community are capable of doing, check out our use cases: click here.
To request a free quote, use our Online Price simulator.
We have a wide range of solutions and tools that will help you train your algorithms. Click below to learn more!