How computers got shockingly good at recognizing images

Companies such as Google are now designing their own chips for neural networks to improve how their software analyzes and recognizes images. Many experts consider 2012 the pivotal year for image recognition because that is when the AlexNet paper was published. One of these experts is Sean Gerrish, who has described how, before AlexNet, most of the field had put deep neural networks for image recognition on the back burner.

Key Takeaways:

  • As data sets and networks grow larger, image recognition accuracy improves significantly across software.
  • In the article's network diagram, each output neuron on the right side corresponds to a digit, and the neuron that lights up indicates which digit the network recognized.
  • The network's output is thus a layer of neurons whose activation pattern encodes the classification (a minimal sketch of reading off such an output follows this list).
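
The takeaways above describe a digit classifier whose output layer has one neuron per digit (0 through 9); the neuron with the strongest activation is the network's answer. Below is a minimal, hypothetical Python sketch of reading off that output with a softmax and an argmax; the activation values are invented for illustration and are not taken from the article.

```python
import numpy as np

def softmax(logits):
    """Convert raw output-layer activations into probabilities."""
    shifted = logits - np.max(logits)  # subtract max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

# Illustrative raw activations for the ten output neurons (digits 0-9).
# In a real network these would come from the final layer's weighted sums.
output_activations = np.array([0.1, 0.3, 2.9, 0.2, 0.1, 0.4, 0.0, 8.1, 0.2, 0.5])

probabilities = softmax(output_activations)
predicted_digit = int(np.argmax(probabilities))  # the neuron that "lights up" brightest

print(f"Predicted digit: {predicted_digit}")              # -> 7
print(f"Confidence: {probabilities[predicted_digit]:.3f}")
```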

“Prior to 2012, deep neural networks were something of a backwater in the machine learning world. But then Krizhevsky and his colleagues at the University of Toronto submitted an entry to a high-profile image recognition contest that was dramatically more accurate than anything that had been developed before.”

Read more: https://arstechnica.com/science/2018/12/how-computers-got-shockingly-good-at-recognizing-images/