MTI Foreign Press Reading Selection: DeepMind's new computer has 3D imagination

DeepMind self-training computer creates 3D model from 2D snapshots

DeepMind, Google's artificial intelligence subsidiary in London, has developed a self-training vision computer that generates "a full 3D model of a scene from just a handful of 2D snapshots", according to its chief executive.

The system, called the Generative Query Network (GQN), can then imagine and render the scene from any angle, said Demis Hassabis.

GQN is a general-purpose system with a vast range of potential applications, from robotic vision to virtual reality simulation. Details were published on Thursday in the journal Science.

"Remarkably, the DeepMind scientists developed a system that relies only on inputs from its own image sensors and that learns autonomously and without human supervision," said Matthias Zwicker, a computer scientist at the University of Maryland who was not involved in the research.

This is the latest in a series of high-profile DeepMind projects demonstrating a previously unanticipated ability of AI systems to learn by themselves, once their human programmers have set the basic parameters.

In October DeepMind's AlphaGo taught itself to play Go, the ultra-complex board game, far better than any human player.
Last month another DeepMind AI system learned to find its way around a maze, in a way that resembled navigation by the human brain.

The GQN project aimed to replicate the effortless way in which a living brain learns about the world just by looking around.

"It was not at all clear that a neural network could ever learn to create images in such a precise and controlled manner," said Ali Eslami, a DeepMind researcher. "However, we found that sufficiently deep networks can learn about perspective and lighting without any human engineering. This was a super-surprising finding."

GQN has two parts. The first looks at a scene through image sensors and represents it in computer code. The second is a "generation network" that predicts or imagines the scene from a previously unobserved viewpoint.

Future GQN systems promise to be more versatile, and to require less processing power, than today's computer vision techniques, which are trained with large data sets of annotated images produced by humans.

But GQN still has limitations: it has so far shown its imaginative prowess only in relatively simple scenes containing a small number of objects.

"Although we need more data and faster hardware before we can deploy this new type of system in the real world, it takes us one step closer to understanding how we may build machines that learn by themselves," said Mr Eslami.
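The two-part design described in the article can be sketched in outline. The following is a minimal illustrative sketch in NumPy, not DeepMind's implementation: the layer sizes, the use of untrained random weights, the summation of per-view representations, and all function names are assumptions made purely to show how observations flow into a scene representation and then into a predicted image for a query viewpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_SHAPE = (64, 64, 3)                  # toy 64x64 RGB snapshot
IMG_DIM = 64 * 64 * 3                    # flattened image size
POSE_DIM = 7                             # hypothetical camera pose (position + orientation)
REPR_DIM = 128                           # size of the learned scene representation

# Toy random weights standing in for trained network parameters.
W_repr = rng.normal(0.0, 0.01, size=(IMG_DIM + POSE_DIM, REPR_DIM))
W_gen = rng.normal(0.0, 0.01, size=(REPR_DIM + POSE_DIM, IMG_DIM))

def representation_network(image, pose):
    """Part one: encode a (2D snapshot, camera pose) pair into a vector."""
    x = np.concatenate([image.ravel(), pose])
    return np.tanh(x @ W_repr)

def generation_network(scene_repr, query_pose):
    """Part two: predict ('imagine') the scene from an unobserved viewpoint."""
    x = np.concatenate([scene_repr, query_pose])
    return np.tanh(x @ W_gen).reshape(IMG_SHAPE)

def gqn_predict(observations, query_pose):
    # Per-view representations are aggregated (here simply summed, an
    # assumption for illustration) into a single scene representation.
    scene_repr = sum(representation_network(img, pose) for img, pose in observations)
    return generation_network(scene_repr, query_pose)

# A handful of 2D snapshots of the same scene, each with a known viewpoint.
observations = [(rng.random(IMG_SHAPE), rng.random(POSE_DIM)) for _ in range(3)]
predicted = gqn_predict(observations, rng.random(POSE_DIM))
print(predicted.shape)  # (64, 64, 3)
```

With trained weights in place of the random ones, the same data flow is what lets the system render the scene "from any angle": any query pose can be passed to the generation network once the snapshots have been encoded.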