Rethinking Your Infrastructure for Enterprise AI
IDC strongly believes that the days of homogeneous compute, in which a single architecture dominates all compute in the datacenter, are over. This has become increasingly evident as more and more businesses launch artificial intelligence (AI) initiatives. Many are still in an experimental stage with AI and only a few have reached production readiness, but all of them are cycling unusually fast through infrastructure options on which to run their newly developed AI applications and services.
The main reason for this constant overhauling of infrastructure is that most of the standard infrastructure used in the datacenter for the bulk of workloads is poorly suited to the extremely data-intensive nature of AI. Not only are the performance and I/O of a typical server lacking for deep learning (DL), but the data lakes that are the breeding grounds for AI model development are also unequipped for this critical task. These data lakes are slow monocultures built on traditional schemas that take weeks, if not months, to prepare for AI modeling. They are also considered noncritical to the business, yet once AI development begins on them, they become hypercritical.
This white paper discusses these challenges and examines how IBM proposes to help businesses overcome them.