Abstract
This study benchmarks the performance and cost-efficiency of various AWS instance types for AI image generation using the CompVis/stable-diffusion-v1-4 model [1][2]. We evaluate multiple instance types against metrics including total duration, cost (on-demand, reserved, and spot pricing), GPU and memory utilization, temperature, and power draw [3]. Our findings highlight the strengths and weaknesses of each instance type, offering practical guidance for optimizing AI workflows and selecting the most suitable instances. Instances sustaining high GPU utilization are favored for compute-intensive tasks, while lower temperatures and power draw indicate greater energy efficiency and sustainability. This analysis helps researchers, developers, and businesses maximize AI processing efficiency and manage costs effectively [4].
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright (c) 2023 North American Journal of Engineering Research