Semantic Relevance

NVIDIA Corp. (NVDA) Q2 2024 Earnings Call Transcript


Logical Analysis Report
Knowledge Map
Knowledge Map Navigation: Node positions are initially random and automatically re-arrange to minimize layout complexity based on the distance between relationships. Mouse down and drag to pan. Right-clicking the strategic diagram toggles between motion and stationary. Hover over an abstract node (orange) to view its abstractions. Hover over a leaf node to view the corresponding narrative. Left-click a leaf node to expand the narrative and view the full text.

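The behavior described in the navigation notes (random initial positions that settle so related nodes sit close together) is characteristic of a force-directed layout. The report's actual renderer is not shown; the following is only a minimal sketch of that class of algorithm, with all names and constants hypothetical:

```python
import math
import random

def force_layout(nodes, edges, iterations=200, k=1.0):
    """Toy force-directed layout: start from random positions, then
    repeatedly apply repulsion between all node pairs and attraction
    along edges, so connected nodes end up near each other."""
    pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in nodes}
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsion: every pair of nodes pushes apart, spreading the graph.
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        # Attraction: each relationship pulls its endpoints together,
        # shortening edges (the "minimize complexity" behavior).
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= dx / d * f
            disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f
            disp[b][1] += dy / d * f
        # Move each node a capped step along its net force.
        for n in nodes:
            dlen = math.hypot(disp[n][0], disp[n][1]) or 1e-9
            step = min(dlen, 0.05)
            pos[n][0] += disp[n][0] / dlen * step
            pos[n][1] += disp[n][1] / dlen * step
    return pos
```

In practice, production viewers use a tuned variant of this idea (e.g. Fruchterman-Reingold) with cooling schedules and spatial indexing, but the qualitative behavior the notes describe is the same.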
Narrative Analysis - Report

Key Focus

  • And so to be able to work closely architecturally to have our engineers work hand in hand to improve the networking performance and the computing performance has been really powerful, really terrific ...
  • NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision, and giant recommenders to vector databases. ...
  • And our self-driving car team, our NVIDIA research team, our generative AI team, our language model team, the amount of infrastructure that we need is quite significant. ...
  • The performance and versatility of our architecture translates to the lowest data center TCO and best energy efficiency. ...
  • No momentum supporting factors found

    Challenge supporting factors

  • (enterprise,nvidia_ai,pre-trained)
  • (software,enterprise,pre-trained)
  • (software,models,pre-trained)
  • (software,nvidia_ai,enterprise,pre-trained)
  • (software,nvidia_ai,pre-trained)
  • (software,enterprise,on-prem)
  • (software,models,on-prem)
  • (software,nvidia_ai,enterprise,on-prem)
  • (software,nvidia_ai,on-prem)
  • (enterprise,gpu,nodes)
  • (enterprise,nvidia_ai,gpu,nodes)
  • (software,enterprise,nodes)
  • (software,gpu,nodes)
  • (software,nvidia_ai,enterprise,nodes)
  • (software,nvidia_ai,nodes)

    Work-in-progress supporting factors

  • (nvidia,performance,tco)
  • (nvidia,performance,energy)
  • (nvidia,performance,architecturally)
  • (nvidia,computer,speech)
  • (nvidia,model,speech)
  • (nvidia,model,self-driving)
  • (nvidia,computer,recommenders)
  • (nvidia,model,recommenders)
  • (nvidia,computer,real-time)
  • (nvidia,model,real-time)
  • (nvidia,model,infrastructure)
  • (nvidia,computer,inference)
  • (nvidia,model,inference)
  • (nvidia,model,computer)
  • (nvidia,enterprise,workstations)
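The supporting-factor entries above read as sorted tuples of entities that co-occur within a narrative passage. The tool's actual extraction pipeline is not disclosed; the sketch below shows one hypothetical way such tuples could be produced, using an assumed entity vocabulary and simple keyword co-occurrence:

```python
from itertools import combinations

# Hypothetical entity lexicon; the real tool's vocabulary is not shown.
ENTITIES = ["software", "enterprise", "nvidia_ai", "gpu", "nodes",
            "models", "pre-trained", "on-prem"]

def extract_tuples(sentence, entities=ENTITIES, max_arity=4):
    """Return sorted entity tuples for every subset (size 2..max_arity)
    of the lexicon entities that co-occur in the sentence."""
    text = sentence.lower().replace("nvidia ai", "nvidia_ai")
    present = [e for e in entities
               if e in text or e.replace("_", " ") in text]
    out = []
    for r in range(2, min(max_arity, len(present)) + 1):
        out.extend(tuple(sorted(c)) for c in combinations(present, r))
    return out
```

For example, the narrative "applications can run seamlessly on NVIDIA AI enterprise software with ... pre-trained models" would yield tuples such as (enterprise,nvidia_ai,pre-trained) and (software,nvidia_ai,enterprise,pre-trained), matching the shape of the factors listed above.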

    Time Period   Challenge   Momentum   WIP
    Report        23.15       0.00       76.86

    High Level Abstraction (HLA) combined

    High Level Abstraction (HLA)  Report
    (1) (nvidia,performance,tco)  100.00
    (2) (nvidia,performance,energy)  99.75
    (3) (nvidia,performance,architecturally)  99.26
    (4) (nvidia,computer,speech)  98.27
    (5) (nvidia,model,speech)  97.28
    (6) (nvidia,model,self-driving)  96.54
    (7) (nvidia,computer,recommenders)  95.80
    (8) (nvidia,model,recommenders)  95.06
    (9) (nvidia,computer,real-time)  94.32
    (10) (nvidia,model,real-time)  93.58
    (11) (nvidia,model,infrastructure)  92.59
    (12) (nvidia,computer,inference)  91.36
    (13) (nvidia,model,inference)  91.11
    (14) (nvidia,model,computer)  89.88
    (15) (nvidia,enterprise,workstations)  87.90
    (16) (nvidia,workstations)  87.41
    (17) (enterprise,servicenow,services)  87.16
    (18) (nvidia,enterprise,services)  86.42
    (19) (nvidia,enterprise,servicenow)  85.68
    (20) (nvidia,enterprise,pcs)  84.94
    (21) (enterprise,servicenow,lighthouse)  83.70
    (22) (nvidia,enterprise,lighthouse)  83.46
    (23) (nvidia,enterprise,developers)  82.96
    (24) (nvidia,developers,users)  81.73
    (25) (nvidia,users)  81.23
    (26) (nvidia,computer,accelerates)  80.99
    (27) (nvidia,vehicle)  80.00
    (28) (enterprise,nvidia_ai,pre-trained)  79.01
    (29) (software,enterprise,pre-trained)  77.53
    (30) (software,models,pre-trained)  77.28
    (31) (software,nvidia_ai,enterprise,pre-trained)  76.05
    (32) (software,nvidia_ai,pre-trained)  74.81
    (33) (software,dgx,pcie)  74.57
    (34) (software,enterprise,pcie)  74.32
    (35) (software,nvidia_ai,enterprise,pcie)  73.83
    (36) (software,nvidia_ai,pcie)  71.85
    (37) (software,enterprise,on-prem)  71.60
    (38) (software,models,on-prem)  71.36
    (39) (software,nvidia_ai,enterprise,on-prem)  71.11
    (40) (software,nvidia_ai,on-prem)  68.89
    (41) (enterprise,gpu,nodes)  68.64
    (42) (enterprise,nvidia_ai,gpu,nodes)  65.19
    (43) (software,enterprise,nodes)  61.73
    (44) (software,gpu,nodes)  61.48
    (45) (software,nvidia_ai,enterprise,nodes)  61.23
    (46) (software,nvidia_ai,nodes)  60.49
    (47) (software,enterprise,models)  60.00
    (48) (software,nvidia_ai,enterprise,models)  59.51
    (49) (software,nvidia_ai,models)  58.77
    (50) (software,dgx,h100)  58.52
    (51) (software,enterprise,h100)  58.02
    (52) (software,nvidia_ai,enterprise,h100)  57.78
    (53) (software,nvidia_ai,h100)  57.04
    (54) (software,gpu,shortly)  56.79
    (55) (software,gpu,providers)  56.54
    (56) (software,gpu,nvidia_h100_tensor_core_gpus)  56.30
    (57) (software,gpu,nvidia_ai)  56.05
    (58) (software,gpu,enterprise)  53.33
    (59) (software,gpu,dependencies)  52.84
    (60) (software,services,workers)  52.35
    (61) (software,services,stand-alone)  52.10
    (62) (software,services,professionals)  51.60
    (63) (software,services,productivity)  50.86
    (64) (software,services,microsoft)  50.12
    (65) (software,services,education)  49.38
    (66) (software,services,ai_copilot)  48.64
    (67) (software,models,velocity)  47.90
    (68) (software,models,hardware)  47.65
    (69) (software,models,cloud)  47.16
    (70) (software,compilers)  46.67
    (71) (software,dgx,compilers)  46.42
    (72) (software,dgx,nvidia_ai)  46.17
    (73) (software,dgx,infrastructure)  45.93
    (74) (software,dgx,enterprise)  45.68
    (75) (software,dgx,dollars)  45.43
    (76) (software,developers)  45.19
    (77) (enterprise,nvidia_ai,gpu,software)  43.70
    (78) (enterprise,gpu,teraflops)  39.01
    (79) (enterprise,nvidia_ai,gpu,teraflops)  37.28
    (80) (enterprise,gpu,rtx)  33.33
    (81) (enterprise,nvidia_ai,gpu,rtx)  31.60
    (82) (enterprise,nvidia_ai,rtx)  28.15
    (83) (enterprise,gpu,performance)  25.43
    (84) (enterprise,nvidia_ai,gpu,performance)  23.70
    (85) (enterprise,gpu,nvidia_rtx)  19.51
    (86) (enterprise,nvidia_ai,gpu,nvidia_rtx)  17.78
    (87) (enterprise,csps,internet,consumer)  13.83
    (88) (enterprise,nvidia_ai,workstation)  13.33
    (89) (enterprise,nvidia_ai,teraflops)  11.60
    (90) (enterprise,nvidia_ai,performance)  10.62
    (91) (enterprise,l40s,state-of-the-art)  9.14
    (92) (enterprise,l40s,hyperscalers)  8.89
    (93) (enterprise,servicenow,ai_lighthouse)  8.64
    (94) (enterprise,internet)  8.15
    (95) (enterprise,csps,consumer,revenue)  4.44
    (96) (enterprise,csps,internet,revenue)  3.70
    (97) (enterprise,csps,revenue)  2.72
    (98) (enterprise,csps,consumer,data_center)  2.47
    (99) (enterprise,csps,data_center)  1.23
    (100) (enterprise,csps,internet,data_center)  0.49
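The Report column ranks abstractions on a 0-100 relevance scale. The report's actual scoring formula is not disclosed; a common way to produce such a column is min-max normalization of raw relevance weights, sketched here with hypothetical inputs:

```python
def normalize_scores(raw):
    """Map raw relevance weights onto a 0-100 scale and rank
    highest first; `raw` is {abstraction: weight}."""
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all weights equal
    scaled = {k: round((v - lo) / span * 100, 2) for k, v in raw.items()}
    return sorted(scaled.items(), key=lambda kv: -kv[1])
```

Note that plain min-max forces the lowest entry to exactly 0, whereas the table bottoms out at 0.49, so the tool presumably applies some offset or a different scaling; this sketch only illustrates the general shape of the computation.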


    Supporting narratives:

    Please refer to the knowledge diagram for the complete set of supporting narratives.

    • challenge
      • Whether we serve customers in the cloud or on-prem through partners or direct, their applications can run seamlessly on NVIDIA AI enterprise software with access to our acceleration libraries, pre-trained models and APIs. ...
      • High Level Abstractions:
        • (enterprise,nvidia_ai,pre-trained)
        • (software,enterprise,models)
        • (software,models,pre-trained)
        • (software,nvidia_ai,enterprise,models)
        • (software,nvidia_ai,pre-trained)
        • (software,nvidia_ai,models)
        • (software,enterprise,pre-trained)
        • (software,nvidia_ai,enterprise,pre-trained)
        • Inferred entity relationships (13)
        • (enterprise,nvidia_ai,pre-trained,software) [inferred]
        • (enterprise,nvidia_ai,workstation) [inferred]
        • (enterprise,nvidia_ai,pcie,software) [inferred]
        • (nvidia_ai,pre-trained,software) [inferred]
        • (enterprise,nvidia_ai,rtx) [inferred]
        • (enterprise,nvidia_ai,pre-trained) [inferred]
        • (enterprise,models,software) [inferred]
        • (enterprise,nvidia_ai,teraflops) [inferred]
        • (models,nvidia_ai,software) [inferred]
        • (enterprise,nvidia_ai,performance) [inferred]
        • (enterprise,nvidia_ai,on-prem,software) [inferred]
        • (models,software,velocity) [inferred]
        • (enterprise,models,nvidia_ai,software) [inferred]

    • challenge
      • Whether we serve customers in the cloud or on-prem through partners or direct, their applications can run seamlessly on NVIDIA AI enterprise software with access to our acceleration libraries, pre-trained models and APIs ...
      • High Level Abstractions:
        • (software,enterprise,on-prem)
        • (software,nvidia_ai,on-prem)
        • (software,models,on-prem)
        • (software,models,cloud)
        • (software,nvidia_ai,enterprise,on-prem)
        • Inferred entity relationships (9)
        • (enterprise,nvidia_ai,pre-trained,software) [inferred]
        • (enterprise,nvidia_ai,workstation) [inferred]
        • (enterprise,nvidia_ai,pcie,software) [inferred]
        • (enterprise,nvidia_ai,rtx) [inferred]
        • (enterprise,nvidia_ai,pre-trained) [inferred]
        • (nvidia_ai,on-prem,software) [inferred]
        • (enterprise,nvidia_ai,teraflops) [inferred]
        • (enterprise,nvidia_ai,performance) [inferred]
        • (models,software,velocity) [inferred]

    • challenge
      • So from multiple instances per GPU to multiple GPUs, multiple nodes to entire data center scale. So this run time called NVIDIA AI enterprise has something like 4,500 software packages, software libraries and has something like 10,000 dependencies among each other ...
      • High Level Abstractions:
        • (software,gpu,nodes)
        • (enterprise,gpu,nodes)
        • (enterprise,nvidia_ai,gpu,software)
        • (software,nvidia_ai,enterprise,nodes)
        • (enterprise,nvidia_ai,gpu,nodes)
        • (software,enterprise,nodes)
        • (software,gpu,nvidia_ai)
        • (software,gpu,enterprise)
        • (software,nvidia_ai,nodes)
        • Inferred entity relationships (17)
        • (enterprise,gpu,nvidia_ai,software) [inferred]
        • (nodes,nvidia_ai,software) [inferred]
        • (enterprise,nodes,nvidia_ai,software) [inferred]
        • (enterprise,gpu,nvidia_ai,nvidia_rtx) [inferred]
        • (enterprise,gpu,software) [inferred]
        • (enterprise,gpu,teraflops) [inferred]
        • (enterprise,gpu,nvidia_ai,performance) [inferred]
        • (enterprise,gpu,nodes) [inferred]
        • (enterprise,gpu,rtx) [inferred]
        • (enterprise,gpu,nvidia_rtx) [inferred]
        • (enterprise,gpu,nvidia_ai,rtx) [inferred]
        • (enterprise,gpu,nodes,nvidia_ai) [inferred]
        • (enterprise,gpu,nvidia_ai,teraflops) [inferred]
        • (enterprise,gpu,performance) [inferred]
        • (gpu,nodes,software) [inferred]
        • (gpu,nvidia_ai,software) [inferred]
        • (enterprise,nodes,software) [inferred]

    • WIP
      • The performance and versatility of our architecture translates to the lowest data center TCO and best energy efficiency. ...
      • High Level Abstractions:
        • (nvidia,performance,tco)
        • (nvidia,performance,energy)
        • Inferred entity relationships (1)
        • (nvidia,performance,tco) [inferred]

    • WIP
      • And so to be able to work closely architecturally to have our engineers work hand in hand to improve the networking performance and the computing performance has been really powerful, really terrific ...
      • High Level Abstractions:
        • (nvidia,performance,architecturally)
        • Inferred entity relationships (1)
        • (nvidia,performance,tco) [inferred]

    • WIP
      • NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision, and giant recommenders to vector databases. ...
      • High Level Abstractions:
        • (nvidia,computer,speech)
        • (nvidia,model,computer)
        • (nvidia,computer,recommenders)
        • (nvidia,model,recommenders)
        • (nvidia,model,speech)
        • Inferred entity relationships (7)
        • (computer,nvidia,speech) [inferred]
        • (model,nvidia,self-driving) [inferred]
        • (model,nvidia,speech) [inferred]
        • (model,nvidia,recommenders) [inferred]
        • (computer,nvidia,real-time) [inferred]
        • (model,nvidia,real-time) [inferred]
        • (computer,nvidia,recommenders) [inferred]

    • WIP
      • And our self-driving car team, our NVIDIA research team, our generative AI team, our language model team, the amount of infrastructure that we need is quite significant ...
      • High Level Abstractions:
        • (nvidia,model,self-driving)
        • Inferred entity relationships (3)
        • (model,nvidia,recommenders) [inferred]
        • (model,nvidia,real-time) [inferred]
        • (model,nvidia,speech) [inferred]

    • WIP
      • NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision, and giant recommenders to vector databases ...
      • High Level Abstractions:
        • (nvidia,model,real-time)
        • (nvidia,model,inference)
        • (nvidia,computer,accelerates)
        • (nvidia,computer,real-time)
        • (nvidia,computer,inference)
        • Inferred entity relationships (7)
        • (computer,nvidia,speech) [inferred]
        • (model,nvidia,self-driving) [inferred]
        • (model,nvidia,speech) [inferred]
        • (model,nvidia,recommenders) [inferred]
        • (model,nvidia,real-time) [inferred]
        • (computer,nvidia,real-time) [inferred]
        • (computer,nvidia,recommenders) [inferred]

    • WIP
      • And our self-driving car team, our NVIDIA research team, our generative AI team, our language model team, the amount of infrastructure that we need is quite significant. ...
      • High Level Abstractions:
        • (nvidia,model,infrastructure)
        • Inferred entity relationships (4)
        • (model,nvidia,recommenders) [inferred]
        • (model,nvidia,self-driving) [inferred]
        • (model,nvidia,real-time) [inferred]
        • (model,nvidia,speech) [inferred]

    • WIP
      • More developers create more applications that make NVIDIA more valuable for customers. NVIDIA is in clouds, enterprise data centers, industrial edge, PCs, workstations, instruments and robotics. ...
      • High Level Abstractions:
        • (nvidia,enterprise,workstations)
        • (nvidia,workstations)
        • (nvidia,enterprise,pcs)
        • Inferred entity relationships (5)
        • (enterprise,nvidia,pcs) [inferred]
        • (nvidia,workstations) [inferred]
        • (enterprise,nvidia,workstations) [inferred]
        • (enterprise,nvidia,servicenow) [inferred]
        • (enterprise,nvidia,services) [inferred]

    • WIP
      • AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and with Accenture consulting and deployment services. ...
      • High Level Abstractions:
        • (enterprise,servicenow,services)
        • (nvidia,enterprise,services)
        • Inferred entity relationships (3)
        • (enterprise,nvidia,pcs) [inferred]
        • (enterprise,nvidia,workstations) [inferred]
        • (enterprise,nvidia,servicenow) [inferred]

    • WIP
      • AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and with Accenture consulting and deployment services ...
      • High Level Abstractions:
        • (nvidia,enterprise,servicenow)
        • (enterprise,servicenow,lighthouse)
        • (nvidia,enterprise,lighthouse)
        • Inferred entity relationships (5)
        • (enterprise,nvidia,pcs) [inferred]
        • (enterprise,lighthouse,servicenow) [inferred]
        • (enterprise,nvidia,workstations) [inferred]
        • (enterprise,lighthouse,nvidia) [inferred]
        • (enterprise,nvidia,services) [inferred]

    • WIP
      • More developers create more applications that make NVIDIA more valuable for customers ...
      • High Level Abstractions:
        • (nvidia,enterprise,developers)
        • Inferred entity relationships (4)
        • (enterprise,nvidia,pcs) [inferred]
        • (enterprise,nvidia,workstations) [inferred]
        • (enterprise,nvidia,servicenow) [inferred]
        • (enterprise,nvidia,services) [inferred]

    • WIP
      • NVIDIA has hundreds of millions of CUDA-compatible GPUs worldwide. Developers need a large installed base to reach end users and grow their business. ...
      • High Level Abstractions:
        • (nvidia,developers,users)
        • (nvidia,users)
        • Inferred entity relationships (1)
        • (nvidia,users) [inferred]

    • WIP
      • MediaTek will develop automotive SoCs and integrate a new product line of NVIDIA's GPU chiplet. The partnership covers a wide range of vehicle segments from luxury to entry level. ...
      • High Level Abstractions:
        • (nvidia,vehicle)

    • WIP
      • Now we're seeing, at this point, probably hundreds of millions of dollars annually for our software business, and we are looking at NVIDIA AI enterprise to be included with many of the products that we're selling, such as our DGX, such as our PCIe versions of our H100. ...
      • High Level Abstractions:
        • (software,enterprise,pcie)
        • (software,dgx,pcie)
        • (software,nvidia_ai,enterprise,pcie)
        • (software,nvidia_ai,enterprise,h100)
        • (software,nvidia_ai,h100)
        • (software,nvidia_ai,pcie)
        • (software,dgx,h100)
        • (software,enterprise,h100)
        • Inferred entity relationships (11)
        • (enterprise,nvidia_ai,pre-trained,software) [inferred]
        • (enterprise,nvidia_ai,workstation) [inferred]
        • (h100,nvidia_ai,software) [inferred]
        • (enterprise,nvidia_ai,rtx) [inferred]
        • (enterprise,nvidia_ai,pre-trained) [inferred]
        • (enterprise,nvidia_ai,teraflops) [inferred]
        • (enterprise,h100,software) [inferred]
        • (nvidia_ai,pcie,software) [inferred]
        • (enterprise,nvidia_ai,performance) [inferred]
        • (enterprise,nvidia_ai,on-prem,software) [inferred]
        • (enterprise,h100,nvidia_ai,software) [inferred]

    • WIP
      • Instances powered by the NVIDIA H100 Tensor Core GPUs are now generally available at AWS, Microsoft Azure and several GPU cloud providers, with others on the way shortly. ...
      • High Level Abstractions:
        • (software,gpu,shortly)
        • (software,gpu,providers)

    • WIP
      • Instances powered by the NVIDIA H100 Tensor Core GPUs are now generally available at AWS, Microsoft Azure and several GPU cloud providers, with others on the way shortly ...
      • High Level Abstractions:
        • (software,gpu,nvidia_h100_tensor_core_gpus)

    • WIP
      • So this run time called NVIDIA AI enterprise has something like 4,500 software packages, software libraries and has something like 10,000 dependencies among each other. ...
      • High Level Abstractions:
        • (software,gpu,dependencies)

    • WIP
      • For example, AI Copilot such as those just announced by Microsoft can boost the productivity of over 1 billion office workers and tens of millions of software engineers. Billions of professionals in legal services, sales, customer support and education will be available to leverage AI systems trained in their field ...
      • High Level Abstractions:
        • (software,services,workers)
        • (software,services,professionals)
        • Inferred entity relationships (2)
        • (services,software,stand-alone) [inferred]
        • (services,software,workers) [inferred]

    • WIP
      • And that stand-alone software continues to grow where we are providing both the software services, upgrades across there as well ...
      • High Level Abstractions:
        • (software,services,stand-alone)
        • Inferred entity relationships (1)
        • (services,software,workers) [inferred]

    • WIP
      • For example, AI Copilot such as those just announced by Microsoft can boost the productivity of over 1 billion office workers and tens of millions of software engineers ...
      • High Level Abstractions:
        • (software,services,microsoft)
        • (software,services,productivity)
        • (software,services,ai_copilot)
        • Inferred entity relationships (2)
        • (services,software,stand-alone) [inferred]
        • (services,software,workers) [inferred]

    • WIP
      • For example, AI Copilot such as those just announced by Microsoft can boost the productivity of over 1 billion office workers and tens of millions of software engineers. Billions of professionals in legal services, sales, customer support and education will be available to leverage AI systems trained in their field. ...
      • High Level Abstractions:
        • (software,services,education)
        • Inferred entity relationships (2)
        • (services,software,stand-alone) [inferred]
        • (services,software,workers) [inferred]

    • WIP
      • And then lastly, because of our scale and velocity, we were able to sustain this really complex stack of software and hardware, networking and compute and across all of these different usage models and different computing environments ...
      • High Level Abstractions:
        • (software,models,velocity)
        • (software,models,hardware)
        • Inferred entity relationships (1)
        • (models,software,velocity) [inferred]

    • WIP
      • And none of our optimizing compilers are possible without our DGX systems. Even compilers these days require AI, and optimizing software and infrastructure software requires AI to even develop ...
      • High Level Abstractions:
        • (software,dgx,compilers)
        • (software,compilers)

    • WIP
      • Now we're seeing, at this point, probably hundreds of millions of dollars annually for our software business, and we are looking at NVIDIA AI enterprise to be included with many of the products that we're selling, such as our DGX, such as our PCIe versions of our H100 ...
      • High Level Abstractions:
        • (software,dgx,dollars)
        • (software,dgx,nvidia_ai)
        • (software,dgx,enterprise)

    • WIP
      • And none of our optimizing compilers are possible without our DGX systems. Even compilers these days require AI, and optimizing software and infrastructure software requires AI to even develop. ...
      • High Level Abstractions:
        • (software,dgx,infrastructure)

    • WIP
      • The second characteristic of our company is the installed base. You have to ask yourself, why is it that all the software developers come to our platform. And the reason for that is because software developers seek a large installed base so that they can reach the largest number of end users, so that they could build a business or get a return on the investments that they make ...
      • High Level Abstractions:
        • (software,developers)

    • WIP
      • These will include powerful new RTX systems with up to 4 NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory. ...
      • High Level Abstractions:
        • (enterprise,gpu,performance)
        • (enterprise,nvidia_ai,gpu,performance)
        • (enterprise,gpu,teraflops)
        • (enterprise,nvidia_ai,gpu,teraflops)
        • Inferred entity relationships (13)
        • (enterprise,gpu,nvidia_rtx) [inferred]
        • (enterprise,gpu,nvidia_ai,rtx) [inferred]
        • (enterprise,gpu,nvidia_ai,software) [inferred]
        • (enterprise,gpu,nodes,nvidia_ai) [inferred]
        • (enterprise,gpu,nvidia_ai,teraflops) [inferred]
        • (enterprise,gpu,performance) [inferred]
        • (enterprise,gpu,nvidia_ai,nvidia_rtx) [inferred]
        • (enterprise,gpu,software) [inferred]
        • (gpu,nvidia_ai,software) [inferred]
        • (enterprise,gpu,teraflops) [inferred]
        • (enterprise,gpu,nodes) [inferred]
        • (enterprise,gpu,nvidia_ai,performance) [inferred]
        • (enterprise,gpu,rtx) [inferred]

    • WIP
      • These will include powerful new RTX systems with up to 4 NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory ...
      • High Level Abstractions:
        • (enterprise,nvidia_ai,gpu,nvidia_rtx)
        • (enterprise,gpu,nvidia_rtx)
        • (enterprise,gpu,rtx)
        • (enterprise,nvidia_ai,rtx)
        • (enterprise,nvidia_ai,gpu,rtx)
        • Inferred entity relationships (20)
        • (enterprise,nvidia_ai,pre-trained,software) [inferred]
        • (enterprise,gpu,nvidia_ai,software) [inferred]
        • (enterprise,nvidia_ai,pcie,software) [inferred]
        • (enterprise,gpu,nvidia_ai,nvidia_rtx) [inferred]
        • (enterprise,gpu,software) [inferred]
        • (enterprise,gpu,teraflops) [inferred]
        • (enterprise,nvidia_ai,performance) [inferred]
        • (enterprise,gpu,nodes) [inferred]
        • (enterprise,gpu,nvidia_ai,performance) [inferred]
        • (enterprise,gpu,rtx) [inferred]
        • (enterprise,gpu,nvidia_rtx) [inferred]
        • (enterprise,nvidia_ai,workstation) [inferred]
        • (enterprise,gpu,nvidia_ai,rtx) [inferred]
        • (enterprise,gpu,nodes,nvidia_ai) [inferred]
        • (enterprise,gpu,nvidia_ai,teraflops) [inferred]
        • (enterprise,nvidia_ai,pre-trained) [inferred]
        • (enterprise,gpu,performance) [inferred]
        • (enterprise,nvidia_ai,teraflops) [inferred]
        • (gpu,nvidia_ai,software) [inferred]
        • (enterprise,nvidia_ai,on-prem,software) [inferred]

    • WIP
      • They can be configured with NVIDIA AI enterprise or NVIDIA Omniverse inside. ... We also announced three new desktop workstation GPUs based on the Ada generation. ...
      • High Level Abstractions:
        • (enterprise,nvidia_ai,workstation)
        • Inferred entity relationships (7)
        • (enterprise,nvidia_ai,pre-trained,software) [inferred]
        • (enterprise,nvidia_ai,pcie,software) [inferred]
        • (enterprise,nvidia_ai,rtx) [inferred]
        • (enterprise,nvidia_ai,pre-trained) [inferred]
        • (enterprise,nvidia_ai,performance) [inferred]
        • (enterprise,nvidia_ai,on-prem,software) [inferred]
        • (enterprise,nvidia_ai,teraflops) [inferred]

    • WIP
      • These will include powerful new RTX systems with up to 4 NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory. They can be configured with NVIDIA AI enterprise or NVIDIA Omniverse inside. ...
      • High Level Abstractions:
        • (enterprise,nvidia_ai,performance)
        • (enterprise,nvidia_ai,teraflops)
        • Inferred entity relationships (8)
        • (enterprise,nvidia_ai,pre-trained,software) [inferred]
        • (enterprise,nvidia_ai,workstation) [inferred]
        • (enterprise,nvidia_ai,pcie,software) [inferred]
        • (enterprise,nvidia_ai,rtx) [inferred]
        • (enterprise,nvidia_ai,pre-trained) [inferred]
        • (enterprise,nvidia_ai,performance) [inferred]
        • (enterprise,nvidia_ai,on-prem,software) [inferred]
        • (enterprise,nvidia_ai,teraflops) [inferred]

    • WIP
      • And in combination with HP, Dell, and Lenovo's new server offerings based on L40S, any enterprise could have a state-of-the-art AI data center and be able to engage generative AI. ...
      • High Level Abstractions:
        • (enterprise,l40s,state-of-the-art)

    • WIP
      • And so the L40S is going to -- is off to a great start and the world's enterprise and hyperscalers are really clamoring to get L40S deployed. ...
      • High Level Abstractions:
        • (enterprise,l40s,hyperscalers)

    • WIP
      • We've partnered with ServiceNow and Accenture to launch the AI Lighthouse program, fast tracking the development of enterprise AI capabilities ...
      • High Level Abstractions:
        • (enterprise,servicenow,ai_lighthouse)
        • Inferred entity relationships (1)
        • (enterprise,servicenow,services) [inferred]

    • WIP
      • Colette, I think last quarter, you had said CSPs were about 40% of your Data Center revenue, consumer Internet at 30%, enterprise 30%. Based on your remarks, it sounded like CSPs and consumer Internet may have been a larger percentage of your business ...
      • High Level Abstractions:
        • (enterprise,internet)

    • WIP
      • Colette, I think last quarter, you had said CSPs were about 40% of your Data Center revenue, consumer Internet at 30%, enterprise 30%. ...
      • High Level Abstractions:
        • (enterprise,csps,internet,data_center)
        • (enterprise,csps,consumer,revenue)
        • (enterprise,csps,internet,revenue)
        • (enterprise,csps,consumer,data_center)
        • (enterprise,csps,revenue)
        • (enterprise,csps,internet,consumer)
        • (enterprise,csps,data_center)
        • Inferred entity relationships (8)
        • (consumer,csps,data_center,enterprise) [inferred]
        • (csps,data_center,enterprise) [inferred]
        • (csps,enterprise,revenue) [inferred]
        • (consumer,csps,enterprise,revenue) [inferred]
        • (csps,data_center,enterprise,internet) [inferred]
        • (csps,enterprise,internet,revenue) [inferred]
        • (enterprise,internet) [inferred]
        • (consumer,csps,enterprise,internet) [inferred]

    • WIP
      • Based on your remarks, it sounded like CSPs and consumer Internet may have been a larger percentage of your business. ...
      • High Level Abstractions:
        • (enterprise,csps,internet,consumer)
        • Inferred entity relationships (5)
        • (csps,enterprise,revenue) [inferred]
        • (consumer,csps,enterprise,revenue) [inferred]
        • (csps,enterprise,internet,revenue) [inferred]
        • (consumer,csps,data_center,enterprise) [inferred]
        • (enterprise,internet) [inferred]