Document (#38876)

Author
Markoff, J.
Title
Researchers announce advance in image-recognition software
Source
http://www.nytimes.com/2014/11/18/science/researchers-announce-breakthrough-in-content-recognition-software.html
Year
2014
Abstract
Two groups of scientists, working independently, have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding.
Content
"Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain. The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate. The advances may make it possible to better catalog and search for the billions of images and hours of video available online, which are often poorly described and archived. At the moment, search engines like Google rely largely on written language accompanying an image or video to ascertain what it contains. "I consider the pixel data in images and video to be the dark matter of the Internet," said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. "We are now starting to illuminate it." Dr. Li and Mr. Karpathy published their research as a Stanford University technical report. The Google team published their paper on arXiv.org, an open source site hosted by Cornell University.
In the longer term, the new research may lead to technology that helps the blind and robots navigate natural environments. But it also raises chilling possibilities for surveillance. During the past 15 years, video cameras have been placed in a vast number of public and private spaces. In the future, the software operating the cameras will not only be able to identify particular humans via facial recognition, experts say, but also identify certain types of behavior, perhaps even automatically alerting authorities. Two years ago Google researchers created image-recognition software and presented it with 10 million images taken from YouTube videos. Without human guidance, the program trained itself to recognize cats - a testament to the number of cat videos on YouTube. Current artificial intelligence programs in new cars already can identify pedestrians and bicyclists from cameras positioned atop the windshield and can stop the car automatically if the driver does not take action to avoid a collision. But "just single object recognition is not very beneficial," said Ali Farhadi, a computer scientist at the University of Washington who has published research on software that generates sentences from digital pictures. "We've focused on objects, and we've ignored verbs," he said, adding that these programs do not grasp what is going on in an image. Both the Google and Stanford groups tackled the problem by refining software programs known as neural networks, inspired by our understanding of how the brain works. Neural networks can "train" themselves to discover similarities and patterns in data, even when their human creators do not know the patterns exist.
In living organisms, webs of neurons in the brain vastly outperform even the best computer-based networks in perception and pattern recognition. But by adopting some of the same architecture, computers are catching up, learning to identify patterns in speech and imagery with increasing accuracy. The advances are apparent to consumers who use Apple's Siri personal assistant, for example, or Google's image search. Both groups of researchers employed similar approaches, weaving together two types of neural networks, one focused on recognizing images and the other on human language. In both cases the researchers trained the software with relatively small sets of digital images that had been annotated with descriptive sentences by humans. After the software programs "learned" to see patterns in the pictures and descriptions, the researchers turned them on previously unseen images. The programs were able to identify objects and actions with roughly double the accuracy of earlier efforts, although still nowhere near human perception capabilities. "I was amazed that even with the small amount of training data that we were able to do so well," said Oriol Vinyals, a Google computer scientist who wrote the paper with Alexander Toshev, Samy Bengio and Dumitru Erhan, members of the Google Brain project. "The field is just starting, and we will see a lot of increases."
Computer vision specialists said that despite the improvements, these software systems had made only limited progress toward the goal of digitally duplicating human vision and, even more elusive, understanding. "I don't know that I would say this is 'understanding' in the sense we want," said John R. Smith, a senior manager at I.B.M.'s T.J. Watson Research Center in Yorktown Heights, N.Y. "I think even the ability to generate language here is very limited." But the Google and Stanford teams said that they expect to see significant increases in accuracy as they improve their software and train these programs with larger sets of annotated images. A research group led by Tamara L. Berg, a computer scientist at the University of North Carolina at Chapel Hill, is training a neural network with one million images annotated by humans. "You're trying to tell the story behind the image," she said. "A natural scene will be very complex, and you want to pick out the most important objects in the image.""
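Illustration (editorial). The article describes weaving together two neural networks: an image-recognition network and a language network, trained on images annotated with human-written sentences. The following is a minimal sketch of that encoder-decoder idea in PyTorch. It is not the Google or Stanford implementation; the module names, layer sizes, and toy data below are assumptions chosen only to make the idea concrete and runnable.

import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Tiny stand-in for the image-recognition network (a small CNN).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Language side: embed previous words, run an LSTM, predict the next word.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_feat = self.encoder(images).unsqueeze(1)   # (batch, 1, embed_dim)
        words = self.embed(captions)                   # (batch, T, embed_dim)
        # The image feature is fed in as the first "token" of the caption sequence.
        seq = torch.cat([img_feat, words], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                        # next-word logits

# Toy usage: one 64x64 RGB image and a 5-token caption over a 1,000-word vocabulary.
model = CaptionModel(vocab_size=1000)
logits = model(torch.randn(1, 3, 64, 64), torch.randint(0, 1000, (1, 5)))
print(logits.shape)   # torch.Size([1, 6, 1000])

The published systems pair a large pretrained convolutional network with a recurrent language model trained on annotated image collections; the sketch keeps both parts tiny so it runs as written.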
Footnote
A version of this article appears in print on November 18, 2014, on page A13 of the New York edition with the headline: Advance Reported in Content-Recognition Software. Cf. http://cs.stanford.edu/people/karpathy/cvpr2015.pdf. See also: http://googleresearch.blogspot.de/2014/11/a-picture-is-worth-thousand-coherent.html. See also: https://news.ycombinator.com/item?id=8621658.
Theme
Automatic indexing
Form
Images
Object
Google

Similar documents (content)

  1. Benson, A.C.: OntoPhoto and the role of ontology in organizing knowledge (2011) 0.13
    0.13031287 = sum of:
      0.13031287 = product of:
        0.5429703 = sum of:
          0.047468286 = weight(abstract_txt:levels in 4556) [ClassicSimilarity], result of:
            0.047468286 = score(doc=4556,freq=1.0), product of:
              0.11639923 = queryWeight, product of:
                1.0235646 = boost
                5.219915 = idf(docFreq=649, maxDocs=44218)
                0.021785697 = queryNorm
              0.40780586 = fieldWeight in 4556, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.219915 = idf(docFreq=649, maxDocs=44218)
                0.078125 = fieldNorm(doc=4556)
          0.067030296 = weight(abstract_txt:intelligence in 4556) [ClassicSimilarity], result of:
            0.067030296 = score(doc=4556,freq=1.0), product of:
              0.14650817 = queryWeight, product of:
                1.1483417 = boost
                5.8562455 = idf(docFreq=343, maxDocs=44218)
                0.021785697 = queryNorm
              0.45751917 = fieldWeight in 4556, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.8562455 = idf(docFreq=343, maxDocs=44218)
                0.078125 = fieldNorm(doc=4556)
          0.06889275 = weight(abstract_txt:describing in 4556) [ClassicSimilarity], result of:
            0.06889275 = score(doc=4556,freq=1.0), product of:
              0.14920959 = queryWeight, product of:
                1.1588802 = boost
                5.90999 = idf(docFreq=325, maxDocs=44218)
                0.021785697 = queryNorm
              0.46171796 = fieldWeight in 4556, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.90999 = idf(docFreq=325, maxDocs=44218)
                0.078125 = fieldNorm(doc=4556)
          0.07448191 = weight(abstract_txt:artificial in 4556) [ClassicSimilarity], result of:
            0.07448191 = score(doc=4556,freq=1.0), product of:
              0.15717433 = queryWeight, product of:
                1.1894084 = boost
                6.0656753 = idf(docFreq=278, maxDocs=44218)
                0.021785697 = queryNorm
              0.4738809 = fieldWeight in 4556, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                6.0656753 = idf(docFreq=278, maxDocs=44218)
                0.078125 = fieldNorm(doc=4556)
          0.1003632 = weight(abstract_txt:capable in 4556) [ClassicSimilarity], result of:
            0.1003632 = score(doc=4556,freq=1.0), product of:
              0.19174796 = queryWeight, product of:
                1.3137283 = boost
                6.699675 = idf(docFreq=147, maxDocs=44218)
                0.021785697 = queryNorm
              0.5234121 = fieldWeight in 4556, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                6.699675 = idf(docFreq=147, maxDocs=44218)
                0.078125 = fieldNorm(doc=4556)
          0.18473388 = weight(abstract_txt:photographs in 4556) [ClassicSimilarity], result of:
            0.18473388 = score(doc=4556,freq=2.0), product of:
              0.22857854 = queryWeight, product of:
                1.4343592 = boost
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.021785697 = queryNorm
              0.8081856 = fieldWeight in 4556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.078125 = fieldNorm(doc=4556)
        0.24 = coord(6/25)
    
  2. Next generation search engines : advanced models for information retrieval (2012) 0.09
    0.08569687 = sum of:
      0.08569687 = product of:
        0.26780272 = sum of:
          0.024295345 = weight(abstract_txt:created in 357) [ClassicSimilarity], result of:
            0.024295345 = score(doc=357,freq=1.0), product of:
              0.118226945 = queryWeight, product of:
                1.0315694 = boost
                5.260737 = idf(docFreq=623, maxDocs=44218)
                0.021785697 = queryNorm
              0.20549753 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.260737 = idf(docFreq=623, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
          0.024406979 = weight(abstract_txt:working in 357) [ClassicSimilarity], result of:
            0.024406979 = score(doc=357,freq=1.0), product of:
              0.11858882 = queryWeight, product of:
                1.0331469 = boost
                5.268782 = idf(docFreq=618, maxDocs=44218)
                0.021785697 = queryNorm
              0.2058118 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.268782 = idf(docFreq=618, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
          0.03140621 = weight(abstract_txt:greater in 357) [ClassicSimilarity], result of:
            0.03140621 = score(doc=357,freq=1.0), product of:
              0.14029583 = queryWeight, product of:
                1.1237316 = boost
                5.7307405 = idf(docFreq=389, maxDocs=44218)
                0.021785697 = queryNorm
              0.22385705 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.7307405 = idf(docFreq=389, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
          0.03307369 = weight(abstract_txt:scientists in 357) [ClassicSimilarity], result of:
            0.03307369 = score(doc=357,freq=1.0), product of:
              0.1452188 = queryWeight, product of:
                1.1432774 = boost
                5.830419 = idf(docFreq=352, maxDocs=44218)
                0.021785697 = queryNorm
              0.22775075 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.830419 = idf(docFreq=352, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
          0.033515148 = weight(abstract_txt:intelligence in 357) [ClassicSimilarity], result of:
            0.033515148 = score(doc=357,freq=1.0), product of:
              0.14650817 = queryWeight, product of:
                1.1483417 = boost
                5.8562455 = idf(docFreq=343, maxDocs=44218)
                0.021785697 = queryNorm
              0.22875959 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.8562455 = idf(docFreq=343, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
          0.037240956 = weight(abstract_txt:artificial in 357) [ClassicSimilarity], result of:
            0.037240956 = score(doc=357,freq=1.0), product of:
              0.15717433 = queryWeight, product of:
                1.1894084 = boost
                6.0656753 = idf(docFreq=278, maxDocs=44218)
                0.021785697 = queryNorm
              0.23694044 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                6.0656753 = idf(docFreq=278, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
          0.05627817 = weight(abstract_txt:videos in 357) [ClassicSimilarity], result of:
            0.05627817 = score(doc=357,freq=1.0), product of:
              0.20697968 = queryWeight, product of:
                1.3649101 = boost
                6.9606886 = idf(docFreq=113, maxDocs=44218)
                0.021785697 = queryNorm
              0.2719019 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                6.9606886 = idf(docFreq=113, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
          0.027586231 = weight(abstract_txt:software in 357) [ClassicSimilarity], result of:
            0.027586231 = score(doc=357,freq=1.0), product of:
              0.16212101 = queryWeight, product of:
                1.7083421 = boost
                4.3560514 = idf(docFreq=1541, maxDocs=44218)
                0.021785697 = queryNorm
              0.17015827 = fieldWeight in 357, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                4.3560514 = idf(docFreq=1541, maxDocs=44218)
                0.0390625 = fieldNorm(doc=357)
        0.32 = coord(8/25)
    
  3. Z39.67-1993: Computer software description (1993) 0.07
    0.06953352 = sum of:
      0.06953352 = product of:
        0.579446 = sum of:
          0.16534258 = weight(abstract_txt:describing in 8732) [ClassicSimilarity], result of:
            0.16534258 = score(doc=8732,freq=1.0), product of:
              0.14920959 = queryWeight, product of:
                1.1588802 = boost
                5.90999 = idf(docFreq=325, maxDocs=44218)
                0.021785697 = queryNorm
              1.1081231 = fieldWeight in 8732, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.90999 = idf(docFreq=325, maxDocs=44218)
                0.1875 = fieldNorm(doc=8732)
          0.22684193 = weight(abstract_txt:sometimes in 8732) [ClassicSimilarity], result of:
            0.22684193 = score(doc=8732,freq=1.0), product of:
              0.18422806 = queryWeight, product of:
                1.2877101 = boost
                6.5669885 = idf(docFreq=168, maxDocs=44218)
                0.021785697 = queryNorm
              1.2313104 = fieldWeight in 8732, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                6.5669885 = idf(docFreq=168, maxDocs=44218)
                0.1875 = fieldNorm(doc=8732)
          0.18726154 = weight(abstract_txt:software in 8732) [ClassicSimilarity], result of:
            0.18726154 = score(doc=8732,freq=2.0), product of:
              0.16212101 = queryWeight, product of:
                1.7083421 = boost
                4.3560514 = idf(docFreq=1541, maxDocs=44218)
                0.021785697 = queryNorm
              1.1550726 = fieldWeight in 8732, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3560514 = idf(docFreq=1541, maxDocs=44218)
                0.1875 = fieldNorm(doc=8732)
        0.12 = coord(3/25)
    
  4. Ridi, R.: Phenomena or noumena? : Objective and subjective aspects in knowledge organization (2016) 0.07
    0.06866708 = sum of:
      0.06866708 = product of:
        0.42916924 = sum of:
          0.053117674 = weight(abstract_txt:even in 3164) [ClassicSimilarity], result of:
            0.053117674 = score(doc=3164,freq=1.0), product of:
              0.11110142 = queryWeight, product of:
                5.0997415 = idf(docFreq=732, maxDocs=44218)
                0.021785697 = queryNorm
              0.47810078 = fieldWeight in 3164, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.0997415 = idf(docFreq=732, maxDocs=44218)
                0.09375 = fieldNorm(doc=3164)
          0.056961942 = weight(abstract_txt:levels in 3164) [ClassicSimilarity], result of:
            0.056961942 = score(doc=3164,freq=1.0), product of:
              0.11639923 = queryWeight, product of:
                1.0235646 = boost
                5.219915 = idf(docFreq=649, maxDocs=44218)
                0.021785697 = queryNorm
              0.489367 = fieldWeight in 3164, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.219915 = idf(docFreq=649, maxDocs=44218)
                0.09375 = fieldNorm(doc=3164)
          0.11342096 = weight(abstract_txt:sometimes in 3164) [ClassicSimilarity], result of:
            0.11342096 = score(doc=3164,freq=1.0), product of:
              0.18422806 = queryWeight, product of:
                1.2877101 = boost
                6.5669885 = idf(docFreq=168, maxDocs=44218)
                0.021785697 = queryNorm
              0.6156552 = fieldWeight in 3164, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                6.5669885 = idf(docFreq=168, maxDocs=44218)
                0.09375 = fieldNorm(doc=3164)
          0.20566866 = weight(abstract_txt:recognizing in 3164) [ClassicSimilarity], result of:
            0.20566866 = score(doc=3164,freq=1.0), product of:
              0.27395064 = queryWeight, product of:
                1.5702772 = boost
                8.008008 = idf(docFreq=39, maxDocs=44218)
                0.021785697 = queryNorm
              0.7507508 = fieldWeight in 3164, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                8.008008 = idf(docFreq=39, maxDocs=44218)
                0.09375 = fieldNorm(doc=3164)
        0.16 = coord(4/25)
    
  5. Blobel, B.: Ontologies, knowledge representation, artificial intelligence : hype or prerequisite for international pHealth interoperability? (2011) 0.07
    0.06839096 = sum of:
      0.06839096 = product of:
        0.4274435 = sum of:
          0.061970618 = weight(abstract_txt:even in 760) [ClassicSimilarity], result of:
            0.061970618 = score(doc=760,freq=1.0), product of:
              0.11110142 = queryWeight, product of:
                5.0997415 = idf(docFreq=732, maxDocs=44218)
                0.021785697 = queryNorm
              0.5577842 = fieldWeight in 760, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.0997415 = idf(docFreq=732, maxDocs=44218)
                0.109375 = fieldNorm(doc=760)
          0.09384242 = weight(abstract_txt:intelligence in 760) [ClassicSimilarity], result of:
            0.09384242 = score(doc=760,freq=1.0), product of:
              0.14650817 = queryWeight, product of:
                1.1483417 = boost
                5.8562455 = idf(docFreq=343, maxDocs=44218)
                0.021785697 = queryNorm
              0.64052683 = fieldWeight in 760, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                5.8562455 = idf(docFreq=343, maxDocs=44218)
                0.109375 = fieldNorm(doc=760)
          0.10427468 = weight(abstract_txt:artificial in 760) [ClassicSimilarity], result of:
            0.10427468 = score(doc=760,freq=1.0), product of:
              0.15717433 = queryWeight, product of:
                1.1894084 = boost
                6.0656753 = idf(docFreq=278, maxDocs=44218)
                0.021785697 = queryNorm
              0.66343325 = fieldWeight in 760, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                6.0656753 = idf(docFreq=278, maxDocs=44218)
                0.109375 = fieldNorm(doc=760)
          0.16735578 = weight(abstract_txt:advance in 760) [ClassicSimilarity], result of:
            0.16735578 = score(doc=760,freq=1.0), product of:
              0.21545482 = queryWeight, product of:
                1.3925741 = boost
                7.1017675 = idf(docFreq=98, maxDocs=44218)
                0.021785697 = queryNorm
              0.7767558 = fieldWeight in 760, product of:
                1.0 = tf(freq=1.0), with freq of:
                  1.0 = termFreq=1.0
                7.1017675 = idf(docFreq=98, maxDocs=44218)
                0.109375 = fieldNorm(doc=760)
        0.16 = coord(4/25)
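
Illustration (editorial). The scores above are Lucene ClassicSimilarity explain trees: for each matching query term, queryWeight (boost * idf * queryNorm) is multiplied by fieldWeight (tf * idf * fieldNorm, with tf = sqrt(termFreq)); the term scores are summed and the sum is scaled by the coordination factor coord(matched/total). A minimal Python sketch, assuming that formula and reusing the constants printed for the first hit (Benson 2011, doc 4556), reproduces its 0.13 score:

from math import sqrt, log, isclose

# Constants copied from the explain tree for the first hit (doc 4556).
QUERY_NORM = 0.021785697   # queryNorm, shared by every query term
FIELD_NORM = 0.078125      # fieldNorm(doc=4556)
MAX_DOCS = 44218

def idf(doc_freq):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + log(MAX_DOCS / (doc_freq + 1))

assert isclose(idf(649), 5.219915, rel_tol=1e-6)   # matches the idf shown for "levels"

# (boost, docFreq, termFreq) for the six matched abstract_txt terms
terms = {
    "levels":       (1.0235646, 649, 1.0),
    "intelligence": (1.1483417, 343, 1.0),
    "describing":   (1.1588802, 325, 1.0),
    "artificial":   (1.1894084, 278, 1.0),
    "capable":      (1.3137283, 147, 1.0),
    "photographs":  (1.4343592,  79, 2.0),
}

def term_score(boost, doc_freq, freq):
    query_weight = boost * idf(doc_freq) * QUERY_NORM        # queryWeight
    field_weight = sqrt(freq) * idf(doc_freq) * FIELD_NORM   # fieldWeight, tf = sqrt(freq)
    return query_weight * field_weight

coord = 6 / 25   # coord(6/25): six of the 25 query terms matched
score = coord * sum(term_score(*t) for t in terms.values())
print(round(score, 8))   # ~0.13031287
assert isclose(score, 0.13031287, rel_tol=1e-5)

The same arithmetic, applied with each hit's own fieldNorm, term frequencies, and coord factor, accounts for the other four scores in the list.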