Close your eyes and think of a white rose, and your mind will bring that picture up for you. Whether you saw a white rose yesterday or several years ago, you have seen enough of them for your brain to recognize what one looks like. That, in short, is how our minds work.
A startup in the U.K. is leveraging this idea to lay the foundation for the future of artificial intelligence. The company uses machine learning to enable computers and smartphones to model visual information just like the human brain does.
“A computer could use these visual models for various tasks, from improving video streaming to automatically generating elements of a realistic virtual world.”
Magic Pony Technology, founded by graduates of Imperial College London, trains large neural networks to process visual information, allowing it to reconstruct high-quality videos or images from low-quality ones. Here is how the company achieves this.
It begins by feeding example images to a computer, which first converts them into low-resolution versions and then learns the difference between the two. This is not new in itself; other companies have demonstrated the same feat. The difference is that Magic Pony Technology can do it on an ordinary graphics processor, which opens up many applications. The company has demonstrated this by upgrading a live game feed in real time.
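The pair-generation step described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general idea, not Magic Pony's actual code: each high-resolution example is downsampled, blown back up, and the residual detail between the two becomes the signal a model would learn to restore.

```python
import numpy as np

def downsample(img, factor=2):
    """Average-pool an image by `factor` to simulate a low-resolution version."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor=2):
    """Nearest-neighbour upscaling back to the original size."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def training_pair(high_res, factor=2):
    """Return (input, target): the blurry upscaled image and the detail it lacks."""
    low_res = downsample(high_res, factor)
    blurry = upsample(low_res, factor)
    residual = high_res - blurry  # the fine detail a model learns to restore
    return blurry, residual

# Example: an 8x8 "image" with a sharp vertical edge at column 3
img = np.zeros((8, 8))
img[:, 3:] = 1.0
blurry, residual = training_pair(img)
# The residual is zero in flat regions and nonzero only around the edge,
# which is exactly where the lost detail lives.
```

The point of the residual target is that flat regions survive downsampling unchanged, so the model's capacity is spent entirely on the edges and textures that compression destroys.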
Rob Bishop, co-founder of Magic Pony, says:
“Magic Pony is currently in talks with several large companies interested in licensing the technology. Online video-streaming businesses rely heavily on video compression [and] our first product demonstrates that image quality can be greatly enhanced using deep learning, and fast mobile GPUs now allow us to deploy it anywhere.”
Furthermore, Bishop believes the technology could improve the quality of images captured on smartphones with low-resolution cameras or in low light. There are other applications as well. For instance, it could be used to convert pixelated computer graphics into high-resolution ones, or even to automatically generate miles of realistic-looking terrain and textures from earlier examples for games or virtual-reality environments. Now, how cool is that?
The approach is distinctive in that the technology doesn't need manually labelled examples to process video footage. Instead, it learns the difference between high- and low-resolution footage and teaches itself what edges, textures and straight lines should look like. This definitely makes us wonder what the future of artificial intelligence will look like.
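The self-supervised idea can be illustrated with a toy model, far simpler than the deep networks described in the article: training pairs are generated automatically by degrading high-resolution examples, and a least-squares filter then teaches itself to restore sharp edges from blurry patches, with no human labelling anywhere in the loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img, factor=2):
    """Downsample by averaging, then upscale back: a blurry stand-in for low-res footage."""
    h, w = img.shape
    low = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return low.repeat(factor, axis=0).repeat(factor, axis=1)

def patches_and_targets(high_res):
    """Collect (3x3 blurry patch, true centre pixel) pairs -- labels come for free."""
    blurry = degrade(high_res)
    X, y = [], []
    h, w = high_res.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            X.append(blurry[i-1:i+2, j-1:j+2].ravel())
            y.append(high_res[i, j])
    return np.array(X), np.array(y)

# "Training set": random images containing sharp vertical edges
X_list, y_list = [], []
for _ in range(50):
    img = np.zeros((8, 8))
    img[:, rng.integers(1, 7):] = 1.0  # edge at a random column
    X, y = patches_and_targets(img)
    X_list.append(X)
    y_list.append(y)
X = np.vstack(X_list)
y = np.concatenate(y_list)

# Fit a linear "sharpening" filter by least squares -- the model teaches itself
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
restored = X @ weights
```

On this toy data the fitted filter reconstructs pixels at least as accurately as simply copying the blurry input, because the training pairs themselves encode what a sharp edge should look like. A real system replaces the linear filter with a deep network, but the self-supervised recipe is the same.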