Nvidia CEO Jensen Huang delivered the most notable keynote address, predicting that an AI Factory will replace the current Software Factory:
- Software Factory consists of pre-programmed software applications that retrieve defined information.
- Powered by Nvidia GPU technology, which scales computation exponentially, generative AI can dynamically generate information and output from prompts. The Software Factory will therefore no longer be relevant.
- The Nvidia Inference Microservice (NIM) provides pre-trained models in optimized containers to accelerate the creation of generative AI applications.
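To make the NIM point concrete: a deployed NIM container exposes an OpenAI-compatible REST API, so building a generative AI application largely reduces to composing standard chat-completion requests. The sketch below assembles such a request body; the endpoint URL, model name, and prompt are illustrative assumptions, not values from the keynote.

```python
import json

# Sketch: composing a chat-completion request for a locally hosted NIM
# container. The endpoint URL and model name below are assumptions for
# illustration; a real deployment would substitute its own values.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

def build_chat_request(prompt: str, model: str = "meta/llama3-8b-instruct") -> dict:
    """Build the JSON body for an OpenAI-compatible chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.7,  # >0 permits varied, non-deterministic output
    }

body = build_chat_request("Summarize our Q3 production yield data.")
print(json.dumps(body, indent=2))
```

With an actual container running, this body could be sent with any HTTP client (for example `requests.post(NIM_ENDPOINT, json=body)`); the point is that the application code stays thin because the pre-trained model and its optimized runtime live inside the container.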
Huang's keynote had an immediate ripple effect. In subsequent meetings with CIOs of leading high-tech manufacturing companies, many told me that shortly after the keynote, their CEOs called to ask whether their current IT investments, initiatives, operations, and resource plans would be disrupted by generative AI as Huang predicted.
In fact, Huang's view aligns with what I discussed in my prior post: why AI needs a new era of probabilistic adaptability.
- Nearly all of today's IT systems and software applications are inherently deterministic, i.e., built to spec from well-understood requirements to deliver pre-defined outcomes.
- Because AI can continually uncover new findings, and generative AI can start from a prompt and dynamically generate information or output that is neither pre-defined nor predictable, IT systems and applications must become probabilistic-capable to stay relevant in the era of generative AI.
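The deterministic-versus-probabilistic contrast above can be sketched in a few lines. The first function is a pre-programmed lookup that retrieves defined information; the second simulates generative behavior by sampling from a distribution, so identical inputs can legitimately yield different outputs. The order IDs and phrasings are invented for illustration, and a real system would call a generative model rather than `random.choices`.

```python
import random

# Deterministic "Software Factory" style: a pre-programmed lookup that
# retrieves defined information -- same input, same output, every time.
ORDER_STATUS = {"PO-1001": "shipped", "PO-1002": "in production"}

def lookup_status(order_id: str) -> str:
    return ORDER_STATUS.get(order_id, "unknown")

# Probabilistic "AI Factory" style (simulated): the reply is sampled from
# a distribution, so repeated calls with the same prompt can differ.
def generate_reply(order_id: str, rng: random.Random) -> str:
    phrasings = [
        f"Order update for {order_id}: it has shipped.",
        f"{order_id} has left the warehouse and is in transit.",
        f"Good news: {order_id} shipped earlier today.",
    ]
    return rng.choices(phrasings, weights=[0.5, 0.3, 0.2])[0]

assert lookup_status("PO-1001") == lookup_status("PO-1001")  # always identical
print(generate_reply("PO-1001", random.Random(7)))
```

A probabilistic-capable system must be designed for this variability: validating and governing sampled outputs rather than assuming the pre-defined outcomes a deterministic application guarantees.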
This raises a few questions every business and tech executive should ask.
- Can we protect our existing IT investments from immediate disruption, as Huang predicted, while still practicing AI cost-effectively?
- How can we avoid rip-and-replace by making today's inherently deterministic IT systems smarter, i.e., probabilistic-ready, so they can work with the dynamic nature of generative AI and enable a gradual build-out of Huang's 'AI Factory' vision?
- How can we ensure all future applications, automation, and orchestrations are AI and generative AI-ready?
- How can effective governance set boundaries and goals for the use of generative AI?

Let's brainstorm and share perspectives.