Adopt serverless architecture for AI/ML workload processes
Building an ML model consumes significant computing resources, which need to be optimized for efficient utilization.
As part of your AI/ML process, evaluate whether a pre-trained model can be adapted through transfer learning, so that you avoid training a new model from scratch.
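As an illustration, transfer learning can be sketched as follows. This is a minimal NumPy stand-in, not a real published model: the "pre-trained" backbone is a fixed random matrix, and only a small linear head is fitted for the new task, so the energy-intensive backbone training is skipped entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor. In practice this
# would be a downloaded backbone (e.g. an ImageNet model), not trained here.
W_pretrained = rng.standard_normal((64, 16))  # maps 64-dim input -> 16-dim features

def extract_features(x):
    """Frozen backbone: no gradient updates, no training cost."""
    return np.maximum(x @ W_pretrained, 0.0)  # ReLU features

# Small labeled dataset for the new task.
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, size=200).astype(float)

# Transfer learning: fit ONLY a lightweight linear head on the frozen
# features (closed-form ridge regression) instead of learning every
# backbone weight from scratch.
F = extract_features(X)
ridge = 1e-2
head = np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ y)

trainable_params = head.size                           # only the head
from_scratch_params = W_pretrained.size + head.size    # what full training would fit
print(trainable_params, from_scratch_params)
```

The point of the sketch is the parameter count: fine-tuning touches a small fraction of the weights that training from scratch would, which is where the compute savings come from.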
Large-scale AI/ML models require significant storage space and consume more resources at inference time than optimized models.
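One common optimization is post-training quantization. The sketch below, assuming a stand-in weight array rather than a real model, shows symmetric int8 quantization: weights are stored as 8-bit integers plus one scale factor, cutting storage roughly 4x at the cost of a small, bounded rounding error.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(100_000).astype(np.float32)  # stand-in dense layer

# Symmetric post-training quantization to int8: store int8 values plus a
# single float32 scale instead of full float32 weights.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

full_bytes = weights.nbytes      # float32 storage
quant_bytes = q.nbytes + 4       # int8 storage + one float32 scale
max_err = float(np.abs(weights - dequantized).max())
print(full_bytes, quant_bytes)
```

The maximum reconstruction error is at most half the quantization step (`scale / 2`), which is often acceptable for inference while shrinking both storage and memory traffic.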
Data computation for ML workloads and inference is a significant contributor to the carbon footprint of an ML application. In addition, if the model runs in the cloud, the input data must be transferred to the cloud and converted into the format the model requires for inference.
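One way to reduce this transfer cost is to convert data into the model's input format at the edge before sending it. The sketch below uses a hypothetical sensor capture: downsampling and downcasting locally shrinks the payload shipped to the cloud endpoint by 8x before any compression is applied.

```python
import zlib

import numpy as np

rng = np.random.default_rng(2)
# Stand-in edge capture: raw float64 samples.
raw = rng.standard_normal((256, 256)).astype(np.float64)

# Convert to the (assumed) model input format locally: float32, 2x
# downsampled in each dimension, instead of shipping the raw capture
# and converting server-side.
prepared = raw[::2, ::2].astype(np.float32)

# Payloads as they would go over the wire, with generic compression.
raw_payload = zlib.compress(raw.tobytes())
prepared_payload = zlib.compress(prepared.tobytes())
print(raw.nbytes, prepared.nbytes)
```

The exact input format is model-specific, but the principle holds generally: every byte converted or discarded at the edge is a byte that does not need to be transferred, stored, and reprocessed in the cloud.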
Training an AI model carries a significant carbon footprint. Evaluate the underlying framework used for the development, training, and deployment of AI/ML to ensure the process is as energy efficient as possible.
Selecting the right hardware/VM instance types for training is one of the choices you should make as part of your energy-efficient AI/ML process.
Efficient storage becomes extremely important for managing both the model itself and the data used during ML model development.
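A simple storage-side measure is to archive checkpoints in half precision with compression. The sketch below assumes a stand-in weight array; for many archived training snapshots, float16 retains enough precision while halving the footprint before compression even starts.

```python
import gzip

import numpy as np

rng = np.random.default_rng(3)
weights = rng.standard_normal(50_000).astype(np.float32)  # stand-in checkpoint

# Archive in half precision, then compress the byte stream.
half = weights.astype(np.float16)
stored = gzip.compress(half.tobytes())

# Round-trip to confirm the archived copy is usable.
restored = np.frombuffer(gzip.decompress(stored), dtype=np.float16)
max_err = float(np.abs(weights - restored.astype(np.float32)).max())
print(weights.nbytes, half.nbytes)
```

Whether float16 is acceptable depends on the model and how the checkpoint will be reused; production-serving weights may need full precision, while intermediate snapshots usually do not.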
Evaluate and use alternative, more energy-efficient models that provide similar functionality.
Depending on the model parameters and the number of training iterations, training an AI/ML model consumes a lot of power and requires many servers, which contributes to embodied emissions.