In eight decades, artificial intelligence has moved from the purview of science fiction to reality. Here's a quick history of AI computation.
What led to the growth of AI computation?
The growth in AI computation can be attributed to three main factors: increased computational power, the availability of vast training data, and advances in algorithms. Initially, computational power grew roughly in line with Moore's Law, but with the advent of the Deep Learning Era in 2012 this growth accelerated sharply, allowing far more complex AI models to be developed.
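To get a feel for why that acceleration matters, here is a minimal Python sketch comparing the two regimes. It assumes a roughly two-year doubling time for Moore's Law and the roughly six-month doubling time often cited for training compute in the Deep Learning Era; the exact figures vary by study and are only illustrative.

```python
# Illustrative comparison of compute growth under two assumed doubling times:
# ~24 months (Moore's Law) vs ~6 months (often cited for the Deep Learning Era).
# These doubling times are approximations, not exact measurements.

def compute_growth(years: float, doubling_months: float) -> float:
    """Return the multiplicative growth in compute over the given number of years."""
    return 2 ** (years * 12 / doubling_months)

decade = 10
moore_pace = compute_growth(decade, doubling_months=24)          # ~32x over a decade
deep_learning_pace = compute_growth(decade, doubling_months=6)   # ~1,000,000x over a decade

print(f"Moore's Law pace over {decade} years: ~{moore_pace:,.0f}x")
print(f"Deep Learning Era pace over {decade} years: ~{deep_learning_pace:,.0f}x")
```

Under these assumptions, a decade of Moore's Law gives about a 32-fold increase in compute, while the faster post-2012 pace gives roughly a million-fold increase over the same period.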
How has AI computation changed over the decades?
In the early days of AI, such as Claude Shannon's robotic mouse in the 1950s, computation was minimal, measured in just a handful of floating-point operations (FLOPs). Fast forward to today: the compute used to train an advanced model like Minerva is nearly 6 million times greater than that used for earlier models like AlexNet, reflecting a dramatic increase in both computational needs and capabilities.
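As a rough back-of-the-envelope check on that ratio, the sketch below uses approximate training-compute estimates commonly attributed to the two models (on the order of 4.7e17 FLOP for AlexNet and 2.7e24 FLOP for Minerva). Both figures are estimates that differ between sources and are used here only to illustrate the arithmetic behind the "nearly 6 million times" comparison.

```python
# Back-of-the-envelope check of the "nearly 6 million times" claim,
# using assumed approximate training-compute estimates:
#   AlexNet (2012): ~4.7e17 FLOP
#   Minerva (2022): ~2.7e24 FLOP
# Both numbers are rough published estimates, not exact figures.

alexnet_flop = 4.7e17
minerva_flop = 2.7e24

ratio = minerva_flop / alexnet_flop
print(f"Minerva used roughly {ratio / 1e6:.1f} million times the training compute of AlexNet")
# -> roughly 5.7 million times, i.e. "nearly 6 million"
```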
What challenges does AI face in computation growth?
Future growth in AI computation may face challenges such as the ever-increasing demand for compute from large-scale models, which could slow progress if it cannot be met. Additionally, the exhaustion of available training data could hinder the development of new models, despite the significant funding currently directed toward AI advancements.