The science of deep learning
Thursday, 2020/12/10 | 08:28:32

Richard Baraniuk, David Donoho, and Matan Gavish

PNAS December 1, 2020 117 (48) 30029-30032

 

Scientists today have completely different ideas of what machines can learn to do than we had only 10 y ago.

 

In image processing, speech and video processing, machine vision, natural language processing, and classic two-player games, in particular, the state of the art has been rapidly pushed forward over the last decade, as a series of machine-learning performance records were achieved for publicly organized challenge problems. In many of these challenges, the records now meet or exceed human performance levels.

 

A contest in 2010 proved that the Go-playing computer software of the day could not beat a strong human Go player. Today, in 2020, no one believes that human Go players—including human world champion Lee Sedol—can beat AlphaGo, a system constructed over the last decade. These new performance records, and the way they were achieved, obliterate the expectations of 10 y ago. At that time, human-level performance seemed a long way off and, for many, it seemed that no technologies then available would be able to deliver such performance.

 

Systems like AlphaGo benefited in this last decade from a completely unanticipated, simultaneous expansion on several fronts. On the one hand, we saw the unprecedented availability of on-demand scalable computing power in the form of cloud computing; on the other, a massive industrial investment by some of the largest global technology players in assembling human engineering teams from a globalized talent pool. These resources were steadily deployed over the decade to allow rapid gains in challenge-problem performance.

 

The 2010s produced a true technology explosion, a one-time-only transition: the sudden public availability of massive image and text data. Billions of people posted trillions of images and documents on social media as the phrase “Big Data” entered media awareness. Image processing and natural language processing were forever changed as they tapped this new data resource using the revolutionary increases in computing power and the newly globalized human talent pool.

 

The field of image processing felt the impact of the new data first, as the ImageNet dataset scraped from the web by Fei-Fei Li and her collaborators powered a series of annual ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) competitions. These competitions provided a platform for the emergence and successive refinement of today’s deep-learning paradigm in machine learning.

 

Deep neural networks had been developing steadily since at least the 1980s; however, their heuristic construction by trial and error resisted attempts at analysis. For quite some time in the 1990s and 2000s, artificial neural networks were regarded with suspicion by scientists insisting on formal theoretical justification. In the 2010s, they began to dominate prediction challenges like ImageNet. The explosion of image data on the internet and of computing resources in the cloud enabled new, highly ambitious deep network models to win prediction challenges by substantial margins over more “formally analyzable” methods, such as kernel methods.

 

In fact, the performance advantage of deep networks over more “theoretically understandable” methods widened as the decade continued. The initial successes famously involved separating pictures of cats from dogs, but successes soon followed in full-blown computer vision problems, such as face recognition and tracking pedestrians in moving images.

 

A few years after their initial successes in image processing, deep networks began to penetrate natural language processing, eventually producing, in the hands of the largest industrial research teams, systems able to translate any of 105 languages into any other, even language pairs for which almost no prior translation examples were available.

The articles in this special issue expose many surprises, paradoxes, and challenges. They remind us that there are many academic research opportunities emerging from this rapidly developing field. To mention only a few: Deep learning might be deployed more broadly in science itself, thereby accelerating progress in existing fields; theorists might develop a better understanding of the conundrums and paradoxes posed by this decade’s deep-learning revolution; and scientists might better understand how industry-driven innovations in machine learning are affecting societal-level systems. Such opportunities will be challenging to pursue, not least because they demand new resources and talent. We hope that this special issue stimulates vigorous new scientific efforts pursuing such opportunities, leading perhaps to further discussions on deep learning in the pages of future editions of PNAS.

 

See: https://www.pnas.org/content/117/48/30029



 
