Hitting the Wall with Server Infrastructure for Artificial Intelligence
Businesses are weighing numerous variables to determine their stance on artificial intelligence (AI) applications that deliver new insights using deep learning. The business opportunities are exceptionally promising, and inaction could prove disastrous as competitors gain a wealth of previously unavailable data to grow their customer bases. Most organizations are aware of the challenge, and their lines of business (LOBs), IT staff, data scientists, and developers are working to define an AI strategy.
IDC believes that this emerging environment remains largely undefined, even as businesses must make critical decisions. Should businesses develop in-house or use VARs, systems integrators, or consultants? Should they deploy on-premises, in the cloud, or in some hybrid form? Can they use existing infrastructure, or do AI applications and deep learning require new servers with new capabilities? We believe that many of these questions can be answered by starting with a well-coordinated small initiative on-premises and then scaling it while keeping a close watch on the impacts.
The sections that follow in this white paper were informed by an extensive IDC survey of 100 North American adopters of accelerated compute infrastructure for AI applications, as well as by findings from eight in-depth interviews with organizations running AI on accelerated compute.