

Published 24 May 2019, 20:08:05. Source: 管大全

Can a robot read your emotions? Apple, Google, Facebook and other technology companies seem to think so. They are collectively spending billions of dollars to build emotion-reading devices that can interact meaningfully (and profitably) with humans using artificial intelligence.

These companies are banking on a belief about emotions that has held sway for more than 100 years: smiles, scowls and other facial movements are worldwide expressions of certain emotions, built in from birth. But is that belief correct? Scientists have tested it across the world. They use photographs of posed faces (pouts, smiles), each accompanied by a list of emotion words (sad, surprised, happy and so on) and ask people to pick the word that best matches the face. Sometimes they tell people a story about an emotion and ask them to choose between posed faces.

Westerners choose the expected word about 85 per cent of the time. The rate is lower in eastern cultures, but overall it is enough to claim that widened eyes, wrinkled noses and other facial movements are universal expressions of emotion. The studies have been so well replicated that universal emotions seem to be bulletproof scientific fact, like the law of gravity, which would be good news for robots and their creators.

But if you tweak these emotion-matching experiments slightly, the evidence for universal expressions dissolves. Simply remove the lists of emotion words, and let subjects label each photo or sound with any emotion word they know. In these experiments, US subjects identify the expected emotion in photos less than 50 per cent of the time. For subjects in remote cultures with little western contact, the results differ even more.

Overall, we found that these and other sorts of emotion-matching experiments, which have supplied the primary evidence for universal emotions, actually teach the expected answers to participants in a subtle way that escaped notice for decades, like an unintentional cheat sheet. In reality, you're not "reading" faces and voices. The surrounding situation, which provides subtle cues, and your experiences in similar situations, are what allow you to see faces and voices as emotional.

A knitted brow may mean someone is angry, but in other contexts it means they are thinking, or squinting in bright light. Your brain processes this so quickly that the other person's face and voice seem to speak for themselves. A hypothetical emotion-reading robot would need tremendous knowledge and context to guess someone's emotional experiences.

So where did the idea of universal emotions come from?
Most scientists point to Charles Darwin's The Expression of the Emotions in Man and Animals (1872) for proof that facial expressions are universal products of natural selection. In fact, Darwin never made that claim. The myth was started in the 1920s by a psychologist, Floyd Allport, whose evolutionary spin job was attributed to Darwin, thus launching nearly a century of misguided beliefs.

Will robots become sophisticated enough to take away jobs that require knowledge of feelings, such as a salesperson or a nurse? I think it's unlikely any time soon. You can probably build a robot that could learn a person's facial movements in context over a long time. It is far more difficult to generalise across all people in all cultures, even for simple head movements. People in some cultures shake their head side to side to mean "yes" or nod to mean "no". Pity the robot that gets those movements backwards. Pity even more the human who depends on that robot.

Nevertheless, tech companies are pursuing emotion-reading devices, despite the dubious scientific basis. There is no universal expression of any emotion for a robot to detect. Instead, variety is the norm.

The writer is the author of 'How Emotions Are Made: The Secret Life of the Brain'.
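The essay's point about context can be made concrete with a small illustration. The sketch below is a toy Python example, not anyone's actual emotion-recognition system; the movement and situation labels are invented for the example, and a real system would need far richer situational knowledge than a lookup table.

```python
# Toy illustration of the essay's claim that facial movements are read
# through context: the same movement maps to different interpretations
# depending on the situation. All labels here are invented for the example;
# no real emotion-recognition API or model is implied.

INTERPRETATIONS = {
    # (movement, situation) -> one plausible reading
    ("knitted_brow", "heated_argument"): "anger",
    ("knitted_brow", "solving_a_puzzle"): "concentration",
    ("knitted_brow", "bright_sunlight"): "squinting, not an emotion",
    ("head_shake", "culture_where_shake_means_no"): "no",
    ("head_shake", "culture_where_shake_means_yes"): "yes",
}

def interpret(movement: str, situation: str) -> str:
    """Return a context-dependent reading of a facial or head movement."""
    return INTERPRETATIONS.get((movement, situation),
                               "unknown without more context")

if __name__ == "__main__":
    for situation in ("heated_argument", "solving_a_puzzle", "bright_sunlight"):
        print(f"knitted_brow during {situation}: "
              f"{interpret('knitted_brow', situation)}")
```

Even this lookup-table toy shows why the essay argues a robot would need "tremendous knowledge and context": the movement alone does not determine the answer.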

• It's the biggest party of the year

For lots of people in Britain, the 31st of December, or New Year's Eve as we call it, is the biggest party of the year. It's a time to get together with friends or family and welcome in the coming year.

New Year's parties can take place at a number of different venues. Some people hold a house party; others attend street parties, while some just go to their local for a few drinks with their mates. Big cities, like London, have large and spectacular fireworks displays.

There's one thing that all New Year's Eve parties have in common: the countdown to midnight. When the clock strikes twelve, revellers give a loud cheer, pop champagne corks and give each other a kiss.

They then link arms and sing a song called Auld Lang Syne, with words by the Scottish poet Robert Burns. Not many people can remember all the lyrics, but the tune is well known, so lots of people just hum along.

The parties then continue into the early hours of the morning with lots of dancing and drinking. Because of this, for a lot of people New Year's Day starts with a hangover. Other people might spend the day visiting relatives or friends they haven't managed to catch up with for a while. Whatever happens, New Year's Day tends to be very relaxed.

In Britain, it's popular to make a promise to yourself about something you are going to do, or want to stop doing, in the New Year. This is called a New Year's resolution. Typical resolutions include giving up smoking and joining a gym to get fit. However, the promise is often broken quite quickly and people are back into their bad habits within weeks or days.

New Year's Day is the last bank holiday of the festive season, which means most people have to go to work the next day: bright and fresh and ready for the new year ahead!

Notes:
New Year's Eve: 31 December, the night before New Year's Day
welcome in: to greet the arrival of
venues: places where events are held
attend: to go to, take part in
local: (here) one's local pub
mates: friends
fireworks displays: public shows of fireworks
countdown: counting down the seconds to midnight
revellers: people enjoying a lively celebration
pop: (of a champagne cork) to shoot out with a bang
Auld Lang Syne: a traditional Scottish song sung at New Year
lyrics: the words of a song
hum: to sing a tune with your lips closed
New Year's Day: 1 January
hangover: the headache and sickness felt the morning after drinking too much
New Year's resolution: a promise you make to yourself for the new year
get fit: to get into good physical shape
bad habits: habits that are bad for you
bank holiday: a UK public holiday
festive season: the holiday period around Christmas and New Year
• Google's DeepMind has revealed a new speech synthesis generator that will be used to help computer voices, like Siri and Cortana, sound more human.

Named WaveNet, the model works with raw audio waveforms to make our robotic assistants sound, err, less robotic.

WaveNet doesn't control what the computer is saying; instead it uses AI to make it sound more like a person, adding breathing noises, emotion and different emphasis into sentences.

Generating speech with computers is called text-to-speech (TTS) and up until now has worked by piecing together short pre-recorded syllables and sound fragments to form words.

As the words are taken from a database of speech fragments, it's very difficult to modify the voice, so adding things like intonation and emphasis is almost impossible.

This is why robotic voices often sound monotonous and decidedly different from humans.

WaveNet, however, overcomes this problem by using its neural network models to build an audio signal from the ground up, one sample at a time.

During training, the DeepMind team gave WaveNet real waveforms recorded from human speakers to learn from.

Using a type of AI called a neural network, the program then learns from these, much in the same way a human brain does.

The result was that WaveNet learned the characteristics of different voices, could make non-speech sounds, such as breathing and mouth movements, and say the same thing in different voices.

Despite the exciting advancement, the system still requires a huge amount of processing power, which means it will be a while before the technology appears in the likes of Siri.

Google's machine learning unit DeepMind is based in the UK and previously made headlines when its computer beat the Go world champion earlier this year.
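The core idea described above, building the waveform one sample at a time with each new sample conditioned on the samples already produced, can be sketched in a few lines of Python. This is only a toy autoregressive loop under stated assumptions: the real WaveNet predicts each sample with a trained neural network of dilated causal convolutions, whereas predict_next_sample here is an invented stand-in, and the constants and names are made up for the example.

```python
# Toy sketch of autoregressive, sample-at-a-time audio generation.
# predict_next_sample is a hand-written placeholder, not a trained WaveNet:
# a real system would replace it with a neural network that predicts each
# new sample from the samples generated so far.

import numpy as np

SAMPLE_RATE = 16000      # samples per second of the toy signal
RECEPTIVE_FIELD = 64     # how many past samples the placeholder is given

def predict_next_sample(history: np.ndarray, rng: np.random.Generator) -> float:
    """Predict the next sample from past samples.

    A real model would condition on the whole context window; this
    placeholder only extrapolates from the last two samples and adds a
    little noise, which is enough to demonstrate the generation loop.
    """
    context = history[-RECEPTIVE_FIELD:]
    trend = 2.0 * context[-1] - context[-2]         # crude linear extrapolation
    return 0.99 * trend + rng.normal(scale=0.001)   # damping plus noise

def generate(num_samples: int, seed: int = 0) -> np.ndarray:
    """Generate audio one sample at a time, feeding each output back in."""
    rng = np.random.default_rng(seed)
    # Seed the history with a short 440 Hz sine burst so there is something
    # for the placeholder model to continue.
    t = np.arange(RECEPTIVE_FIELD) / SAMPLE_RATE
    samples = list(0.1 * np.sin(2 * np.pi * 440.0 * t))
    for _ in range(num_samples):
        samples.append(predict_next_sample(np.asarray(samples), rng))
    return np.clip(np.asarray(samples), -1.0, 1.0)

if __name__ == "__main__":
    audio = generate(SAMPLE_RATE // 10)   # roughly 0.1 s of toy "audio"
    print(audio.shape, float(audio.min()), float(audio.max()))
```

The point of the loop is only that generation is sequential: every sample depends on what has already been produced, which is also why the article notes that the approach needs a huge amount of processing power.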