OpenAI has given us far more than a chatbot or a search engine; it has shown us an early prototype of the next-generation intelligent cloud computing platform.
01
First, my predictions for the future of large models:
1a) ChatGPT is not, or not only, a replacement for Google Search
1b) OpenAI will disrupt computing platforms, including Amazon's cloud
1c) Microsoft's future depends on OpenAI
I have been asked about OpenAI's ChatGPT quite a bit, as it has taken the world by storm over the last two weeks. It took ChatGPT only 5 days to reach 1 million users!

Here are my three predictions on OpenAI/ChatGPT:
1a) ChatGPT is NOT just your next Google Search. 
ChatGPT may get you more comprehensive (and even more pertinent) answers on certain topics, but in terms of efficiency and speed, Google Search is in a different universe. Be mindful of the hype you have seen around "a new alternative to Google". 
1b) OpenAI is your next Amazon Web Services (AWS). 
I am actually not that interested in the chatbot part of ChatGPT or OpenAI. 
But here is why OpenAI is your next AWS: the OpenAI Large Language Model (LLM) offers a glimpse of a computation-resource abstraction we have never seen before. 
I was able to get my first Unix shell commands to work on ChatGPT a week ago. 
If the virtualization revolution is about being able to program the infrastructure, and the Serverless revolution is about being able to program the business logic without thinking about the infrastructure, then the OpenAI LLM revolution is about getting business results without thinking about defining the detailed business logic a priori.
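To make that abstraction concrete, here is a minimal sketch (mine, not OpenAI's reference code) of asking a GPT-3-family model for a result directly instead of hand-coding the business logic. It assumes the openai Python package in its v0.x form, an API key in the OPENAI_API_KEY environment variable, and the text-davinci-003 model; the prompt and function name are illustrative.

# Minimal sketch: describe the desired result and let the LLM supply the "logic".
# Assumes `pip install openai` (v0.x SDK) and OPENAI_API_KEY set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def shell_command_for(request: str) -> str:
    """Turn a plain-language request into a Unix shell command via the LLM."""
    response = openai.Completion.create(
        model="text-davinci-003",   # GPT-3 family model; illustrative choice
        prompt=f"Write a single Unix shell command that does the following:\n{request}\nCommand:",
        max_tokens=100,
        temperature=0,              # keep the output as deterministic as possible
    )
    return response.choices[0].text.strip()

# No parsing rules or hand-written logic -- just the requested outcome.
print(shell_command_for("list the 10 largest files under /var/log with human-readable sizes"))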
ChatGPT is not the first or only service OpenAI offers. AWS started with just S3 but now offers hundreds of services. OpenAI will get there too, and it will have a couple of well-funded competitors (just as AWS has Azure and GCP). 
For most of us, the worry should not be the large-model build-out itself; instead, we should think about the "Snowflake/Datadog"-style cloud-native services and apps we can build on top of this new computation abstraction. 
I plan to talk more about this topic at Cube's Supercloud event next month. 
1c) Microsoft's future depends on OpenAI.
Here is an interesting article summarizing why Microsoft depends on OpenAI: 
        https://analyticsindiamag.com/does-microsofts-future-depend-on-openai/
It is likely that the rest of the Fortune 500 will depend on OpenAI or a competing platform to survive and thrive over the next 20 years. 
02
Second, my views on application scenarios:
2a) The commercial market is in enterprise services (ToB), not consumer services (ToC)
2b) A few applications amount to "full self-driving"; most amount to "driving assistance"
2c) The real challenge is not how many flaws AI has, but how professionals figure out how to coexist with AI
Secondly, there are both huge opportunities and huge challenges in applying large language models in the enterprise.


2a) The commercialization of large-model technology will happen on the enterprise side, not on the consumer side.


In the enterprise business world, AI stands for Action Item, not Artificial Intelligence. No action, no intelligence.


For example, if we apply AI in the cybersecurity world, the vendor or the customer should be able to act on the resulting classifications or recommendations with confidence. Making AI actionable with confidence has been a challenging yet rewarding journey for me, both before and after the "ChatGPT Era" began.
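As a small illustration of what "actionable with confidence" can mean in practice, here is a sketch of gating an automated response on a model's confidence score. The 0.95 threshold, the Verdict fields, and the quarantine_host helper are all hypothetical, not taken from any specific product.

# Sketch: act automatically only when the model's confidence clears a threshold;
# otherwise keep a human analyst in the loop. All names and values are illustrative.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95  # hypothetical bar for unattended action

@dataclass
class Verdict:
    label: str         # e.g. "malicious" or "benign"
    confidence: float  # model-reported probability, 0.0 to 1.0
    explanation: str   # human-readable rationale produced by the model

def quarantine_host(host: str) -> None:
    # Stand-in for a real response action (EDR isolation, firewall rule, etc.).
    print(f"[action] isolating {host} from the network")

def handle_verdict(verdict: Verdict, host: str) -> str:
    if verdict.label == "malicious" and verdict.confidence >= AUTO_ACTION_THRESHOLD:
        quarantine_host(host)
        return f"auto-quarantined {host}: {verdict.explanation}"
    # Low confidence or benign: route to an analyst instead of acting blindly.
    return f"queued {host} for analyst review ({verdict.confidence:.0%} confidence)"

print(handle_verdict(Verdict("malicious", 0.98, "encoded download-and-execute pattern"), "host-42"))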


2b) The use cases will be more like the "Level 3 driving assistant" than "Level 5 full self-driving"


At this point, every industry is looking for its own Autopilot-like use cases. Full AI automation works well for the advertising/marketing industry (because an occasional mistake is easily tolerated) but not so well for many other enterprises (because a single mistake can be perceived as fatal). People will be disappointed if they expect "full automation" of tasks where they cannot tolerate mistakes.


For the vast majority of the enterprise software industry, including cybersecurity, the near-term opportunity is the "Level 3 driving assistant", NOT "Level 5 full self-driving", to borrow the self-driving industry's terminology.

2c) We need a collaborative relationship between humans and models to boost humanity's overall productivity.


My experience tells me that we should not expect AI to handle an overly complex job end to end in the foreseeable future. Instead, we should break down the professional's job so that AI can fulfill certain subtasks very well. It is like delegating the "highway cruise control" part of driving to AI, but not the entire drive.


For example, Security Operations (SecOps) professionals can benefit from the ChatGPT-based AI assistant. Today, a SecOps professional can only triage and/or investigate a dozen security alerts a day, but with ChatGPT, such a professional may handle hundreds of security alerts a day. Human professionals will still play the critical "quarterback" role for the security alert triage game, but we now have dozens or hundreds of "running backs" on the field. 

Thanks to Dr. Andrew Hoang Nguyen, who allowed me to share his experiment and screenshot for a typical SecOps task: 
          "GPT-3/ChatGPT could be an incredible tool for detecting potential malicious command line executions with rich explanations."

As the screenshot shows, ChatGPT was able to do something quite amazing. The key for the cyber industry is to break the overall SecOps job into subtasks so that AI can take care of some of them very well.
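Here is a minimal sketch of that kind of subtask, in the spirit of the experiment above: ask a GPT-3-family model whether a command line looks malicious and why, and hand the explanation to the analyst, who still makes the final call. It assumes the openai v0.x Python SDK and the text-davinci-003 model; the prompt wording and the sample command line are mine, not Dr. Nguyen's.

# SecOps "running back" subtask: explain whether a command line looks suspicious.
# The human analyst (the "quarterback") still makes the final call.
# Assumes the openai v0.x SDK and OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def triage_command_line(cmd: str) -> str:
    """Ask the model for a verdict plus a rich explanation for one command line."""
    prompt = (
        "You are assisting a security analyst. Explain whether the following "
        "command line execution is potentially malicious, and why:\n\n"
        f"{cmd}\n"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0,
    )
    return response.choices[0].text.strip()

# Illustrative alert from an endpoint sensor: an encoded PowerShell command, truncated for brevity.
print(triage_command_line("powershell.exe -NoProfile -EncodedCommand SQBFAFgAIAAoAE4AZQB3AC0A..."))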
Human professionals need more frameworks and design patterns to help them break a complex project into subtasks suitable for AI. There are plenty of challenges ahead, given that models have biases and humans have egos. 
03
Third, my view on the maturity of the models' "technology-application fit":
3a) OpenAI's boss said today: "ChatGPT is incredibly limited"
3b) OpenAI's large models are already on the road; the tipping point of "technology-application fit" has arrived.

Thirdly, the large model penetration into the enterprise is here and now. 
3a) "ChatGPT is incredibly limited", according to OpenAI's Boss, Sam Altman. 
I very much agree with Sam for calling out ChatGPT's limitation. I appreciate his honesty. If anyone thinks ChatGPT can take the "quarterback" role for serious enterprise use cases, the person will be hugely disappointed. 
3b) Despite the limitation, the time for applying the Large Language Model to the enterprise is here and now. 
I know Sam's tweet today might make my view seem more controversial, but as long as we don't misuse ChatGPT in the "quarterback" role and we work around its limitations gracefully, #LargeLanguageModels technology (GPT-3 or DALL-E 2) can serve the "running back" assistant role well and can be relied on to assist many mission-critical tasks. 
Also, I fully anticipate that the upcoming #GPT4 model will be far more powerful and will make far fewer mistakes. I have heard that the lab version of the GPT-4 model did super well on the SAT too. :) 
Frankly, almost every great technology inventor has underestimated the use cases that people on the receiving end come up with. I have always learned interesting use cases from my customers throughout my enterprise software career. 
Let me close by quoting Box CEO Aaron Levie's tweet from today. 
OpenAI will improve GPT models significantly from here but the rest of the world should not wait for perfection to happen. The tipping point for a huge productivity boost in the enterprise has already arrived.
