Dynamically Typed

AI and efficiency

OpenAI released an analysis showing that “since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet” (Hernandez and Brown, 2020).
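The two figures in the quote are consistent with each other, which is easy to verify: compounding a 2x improvement every 16 months over the roughly 7.25 years (~87 months, an assumed gap here) between AlexNet and the analysis lands right around 44x. A quick back-of-the-envelope sketch:

```python
# Sanity-checking the quoted numbers: halving compute every 16 months,
# compounded over the ~87 months (assumed) between AlexNet (2012) and
# the 2019 measurement, should give roughly the reported 44x gain.
months_elapsed = 87      # ~7.25 years, an assumption for illustration
halving_period = 16      # compute needed halves every 16 months
efficiency_gain = 2 ** (months_elapsed / halving_period)
print(f"Implied efficiency gain: ~{efficiency_gain:.0f}x")
```

This prints a gain in the low 40s, matching the "44 times less compute" figure up to rounding of the dates involved.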