Jensen Huang Responds to DeepSeek for the First Time (Transcript Included)

Date: 2025-02-25 08:46

Source: 三言Pro (Sanyan Pro)

In late January, DeepSeek's release of its R1 model sent shockwaves through the tech industry. Nvidia's stock fell 16.79% in response, erasing about $590 billion in market value, a record in US financial history.

An Nvidia spokesperson said at the time: "DeepSeek is an excellent AI advancement, and a perfect example of test-time scaling."

Although Nvidia's stock has since recovered, its CEO Jensen Huang had not publicly commented on the matter. On Thursday, Huang responded to DeepSeek for the first time in an interview. He said investors had misunderstood DeepSeek's progress in AI, and that this misunderstanding led to a mistaken market reaction to Nvidia's stock.

After DeepSeek drew attention for its low cost and high performance, investors began to question whether tech companies really need to invest enormous capital in AI infrastructure.

Huang said the sharp market reaction stemmed from a misreading by investors. Although R1's development appears to reduce dependence on compute, the AI industry still needs substantial computing power to support post-training methods, which allow AI models to perform reasoning or prediction after pre-training.

"From an investor's perspective, there was a mental model that the world was divided into pre-training and inference, where inference means you ask an AI a question and it instantly gives you an answer. I don't know whose fault that misconception is, but obviously that view is wrong."

Huang noted that pre-training remains important, but post-training is "the most important part of intelligence" and "where you learn to solve problems."

He also said the enthusiasm shown around the world since R1 was open-sourced has been hard to believe: "It is an extremely exciting thing."

Key excerpts from the interview:

Jensen Huang: What's really exciting, and you probably saw it, is what happened with DeepSeek: the world's first reasoning model that's open-sourced. The energy around the world as a result of R1 becoming open-sourced is so incredibly exciting. Incredible.

Interviewer: Why do people think this could be a bad thing? I think it's a wonderful thing.

Jensen Huang: Well, first of all, I think from an investor perspective, there was a mental model that the world was pre-training, and then inference. And inference was: you ask an AI a question and it instantly gives you an answer, a one-shot answer. I don't know whose fault it is, but obviously that paradigm is wrong. The paradigm is pre-training, because we want to have a foundation. You need a basic level of foundational understanding of information in order to do the second part, which is post-training. So pre-training continues to be rigorous.

The second part, and this is actually the most important part of intelligence, is what we call post-training. This is where you learn to solve problems. You have foundational information: you understand how vocabulary works, how syntax works, how grammar works, and you understand how basic mathematics works. You now have to take this foundational knowledge and apply it to solve problems.
So there's a whole bunch of different learning paradigms associated with post-training, and in this paradigm the technology has evolved tremendously in the last five years, and the computing need is intensive. So people thought, "oh my gosh, pre-training is a lot less," but they forgot that post-training is really quite intense.

And now the third scaling law is: the more reasoning you do, the more thinking you do before you answer a question, the better the answer. And so reasoning is a fairly compute-intensive process. So I think the market responded to R1 as "oh my gosh, AI is finished," you know, as if it dropped out of the sky and we don't need to do any computing anymore. It's exactly the opposite.
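Huang's point about reasoning being compute-intensive can be illustrated with back-of-the-envelope arithmetic. The sketch below uses purely hypothetical numbers (model size, token counts, and the common rough estimate of ~2 FLOPs per parameter per generated token) to show why a reasoning model that "thinks" at length before answering multiplies inference compute rather than eliminating it.

```python
# Rough sketch: one-shot inference vs. chain-of-thought reasoning.
# All figures are illustrative assumptions, not measurements of any real model.

def inference_flops(params_billion: float, tokens_generated: int) -> float:
    """Approximate forward-pass cost: ~2 FLOPs per parameter per token."""
    return 2.0 * params_billion * 1e9 * tokens_generated

MODEL_B = 70             # hypothetical 70B-parameter model
ONE_SHOT_TOKENS = 200    # short direct answer
REASONING_TOKENS = 8000  # long chain of thought before the final answer

one_shot = inference_flops(MODEL_B, ONE_SHOT_TOKENS)
reasoning = inference_flops(MODEL_B, REASONING_TOKENS)

print(f"one-shot:  {one_shot:.2e} FLOPs")
print(f"reasoning: {reasoning:.2e} FLOPs")
print(f"ratio:     {reasoning / one_shot:.0f}x")  # 40x under these assumptions
```

With these made-up numbers, answering one question with extended reasoning costs 40 times the compute of a one-shot answer, which is the crux of Huang's "exactly the opposite" remark: per-query demand for compute goes up, not down.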
