
28 docs tagged with "role:cloud-engineer"


Cache static data

From an energy-efficiency perspective, it's better to reduce network traffic by reading the data locally through a cache rather than accessing it remotely over the network. Shortening the distance a network packet travels means that less energy is required to transmit it. Similarly, from an embodied carbon perspective, we are more efficient with hardware when a network packet traverses fewer pieces of computing equipment.
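A minimal sketch of the idea, assuming a Python service that would otherwise refetch the same static data over the network on every request; the URL and TTL below are illustrative placeholders.

```python
import time
import urllib.request

# Illustrative in-process cache: url -> (fetched_at, body).
_cache: dict[str, tuple[float, bytes]] = {}
TTL_SECONDS = 3600  # static data rarely changes, so a long TTL is reasonable

def get_static_data(url: str) -> bytes:
    """Return cached bytes if still fresh; otherwise fetch once over the network."""
    entry = _cache.get(url)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]                 # served locally: no network traffic
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    _cache[url] = (time.time(), body)
    return body
```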

Choose the region that is closest to users

From an energy-efficiency perspective, it's better to shorten the distance a network packet travels so that less energy is required to transmit it. Similarly, from an embodied-carbon perspective, we are more efficient with hardware when a network packet traverses fewer pieces of computing equipment.

Compress transmitted data

From an energy-efficiency perspective, it's better to minimise the size of the data transmitted so that network traffic, and the energy required to carry it, is reduced.
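As a rough illustration (standard library only, with a made-up payload), gzip-compressing a JSON response before sending it can shrink the bytes that actually cross the network:

```python
import gzip
import json

# Hypothetical payload; repetitive JSON compresses very well.
payload = json.dumps([{"sensor": "temp", "value": 21.5}] * 1000).encode("utf-8")
compressed = gzip.compress(payload)

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes")
# The receiver restores the data with gzip.decompress(compressed).
```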

Delete unused storage resources

From an embodied carbon perspective, it's better to delete unused storage resources so we are efficient with hardware and so that the storage layer is optimised for the task.
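A hedged sketch of what finding such resources can look like, assuming an AWS account and the boto3 SDK; it only reports unattached EBS volumes rather than deleting them outright.

```python
import boto3

ec2 = boto3.client("ec2")

# EBS volumes with status "available" are not attached to any instance.
unused = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for vol in unused:
    print(f'{vol["VolumeId"]}: {vol["Size"]} GiB, created {vol["CreateTime"]}')
    # After review, remove with: ec2.delete_volume(VolumeId=vol["VolumeId"])
```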

Encrypt what is necessary

Data protection through encryption is a crucial aspect of our security measures. However, the encryption process can be resource-intensive at multiple levels.

Match utilization requirements of virtual machines (VMs)

It's better to have one VM running at a higher utilization than two running at low utilization rates, not only in terms of energy proportionality but also in terms of embodied carbon. Two servers running at low utilization rates will consume more energy than one running at a high utilization rate. In addition, the unused capacity on the underutilized server could be more efficiently used for another task or process.
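A back-of-the-envelope illustration of why consolidation helps, using an assumed linear power model in which a server draws a significant share of its peak power even when idle; the wattage figures are illustrative.

```python
# Assumed power model: P(u) = P_idle + u * (P_max - P_idle)
P_IDLE, P_MAX = 100.0, 200.0   # watts, illustrative figures

def power(utilization: float) -> float:
    return P_IDLE + utilization * (P_MAX - P_IDLE)

two_low  = 2 * power(0.25)   # two servers at 25% utilization
one_high = power(0.50)       # one server doing the same total work at 50%

print(f"two servers @25%: {two_low:.0f} W")   # 250 W
print(f"one server  @50%: {one_high:.0f} W")  # 150 W
```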

Match utilization requirements with pre-configured servers

It's better to have one VM running at a higher utilization than two running at low utilization rates, not only in terms of energy proportionality but also in terms of embodied carbon. Two servers running at low utilization rates will consume more energy than one running at a high utilization rate. In addition, the unused capacity on the underutilized server could be more efficiently used for another task or process.

Optimise storage utilization

It's better to maximise storage utilisation so the storage layer is optimised for the task, not only in terms of energy proportionality but also in terms of embodied carbon. Two storage units running at low utilisation rates will consume more energy than one running at a high utilisation rate. In addition, the unused capacity on the underutilised storage unit could be more efficiently used for another task or process.

Optimize average CPU utilization

CPU usage and utilization vary throughout the day, sometimes wildly, as computational requirements change. The larger the variance between the average and peak CPU utilization values, the more resources need to be provisioned in stand-by mode to absorb those spikes in traffic.

Optimize peak CPU utilization

CPU usage and utilization vary throughout the day, sometimes wildly, as computational requirements change. The larger the variance between the average and peak CPU utilization values, the more resources need to be provisioned in stand-by mode to absorb those spikes in traffic.

Queue non-urgent processing requests

All systems have periods of peak and low load. From a hardware-efficiency perspective, we are more efficient with hardware if we minimise the impact of request spikes with an implementation that allows an even utilization of components. From an energy-efficiency perspective, we are more efficient with energy if we ensure that idle resources are kept to a minimum.
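A minimal in-process sketch of the pattern, assuming non-urgent work can simply be enqueued at peak time and drained later by a background worker; a real system would typically use a managed queue service instead.

```python
import queue
import threading
import time

non_urgent = queue.Queue()

def process(req: dict) -> None:
    print("processing", req)

def handle_request(req: dict) -> None:
    if req.get("urgent"):
        process(req)            # handle immediately, even at peak load
    else:
        non_urgent.put(req)     # defer to smooth out utilization

def drain_worker() -> None:
    while True:
        req = non_urgent.get()  # blocks while there is nothing to do
        process(req)
        non_urgent.task_done()

threading.Thread(target=drain_worker, daemon=True).start()
handle_request({"id": 1, "urgent": False})
handle_request({"id": 2, "urgent": True})
time.sleep(0.1)                 # give the worker a moment in this demo
```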

Reduce transmitted data

From an energy-efficiency perspective, it's better to minimize the size of the data transmitted so that network traffic, and the energy required to carry it, is reduced.

Scale down Kubernetes applications when not in use

In order to reduce carbon emissions and costs, Dev & Test Kubernetes clusters can turn off nodes outside office hours, implementing the optimization at the cluster level. For production clusters, where nodes need to stay up and running, the optimization needs to be implemented at the application level.
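At the application level, one simple approach (sketched below under assumed names and hours) is a scheduled job that scales a Deployment's replicas with kubectl based on the time of day:

```python
import datetime
import subprocess

DEPLOYMENT = "my-app"        # placeholder deployment name
NAMESPACE = "dev"            # placeholder namespace
OFFICE_HOURS = range(8, 19)  # 08:00-18:59 local time, illustrative

def desired_replicas(now: datetime.datetime) -> int:
    is_weekday = now.weekday() < 5
    return 3 if (is_weekday and now.hour in OFFICE_HOURS) else 0

replicas = desired_replicas(datetime.datetime.now())
subprocess.run(
    ["kubectl", "scale", "deployment", DEPLOYMENT,
     f"--replicas={replicas}", "-n", NAMESPACE],
    check=True,
)
```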

Scale infrastructure with user load

Demand for resources depends on user load at any given time. However, most applications run without taking this into consideration. As a result, resources are underused and inefficient.

Scale Kubernetes workloads based on relevant demand metrics

By default, Kubernetes scales workloads based on CPU and RAM utilization. In practice, however, it's difficult to correlate your application's demand drivers with CPU and RAM utilization. Scaling your workloads based on the demand metrics that actually drive them, such as HTTP requests, queue length, and cloud alerting events, can help reduce resource utilization and therefore your carbon emissions.
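In practice this is usually handled by an autoscaler such as KEDA or an HPA fed with external metrics; a simplified sketch of the underlying idea, with an assumed queue-length source and placeholder deployment names, might look like this:

```python
import math
import subprocess

MESSAGES_PER_REPLICA = 50           # assumed throughput of a single replica
MIN_REPLICAS, MAX_REPLICAS = 1, 20

def current_queue_length() -> int:
    # Placeholder: in a real setup this would query your queue service's
    # management API for the current backlog size.
    return 420

backlog = current_queue_length()
replicas = min(MAX_REPLICAS,
               max(MIN_REPLICAS, math.ceil(backlog / MESSAGES_PER_REPLICA)))

subprocess.run(
    ["kubectl", "scale", "deployment", "worker",
     f"--replicas={replicas}", "-n", "jobs"],
    check=True,
)
```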

Scan for vulnerabilities

Many attacks on cloud infrastructure seek to misuse deployed resources, which leads to an unnecessary spike in usage and cost.

Set storage retention policies

From an embodied carbon perspective, it's better to have an automated mechanism to delete unused storage resources so we are efficient with hardware and so that the storage layer is optimised for the task.
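As one concrete example (assuming an AWS S3 bucket and boto3; the bucket name and prefix are placeholders), a lifecycle rule can expire old objects automatically instead of relying on manual cleanup:

```python
import boto3

s3 = boto3.client("s3")

# Automatically delete objects under logs/ 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```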

Shed lower priority traffic

When resources are constrained during high-traffic events or when carbon intensity is high, more carbon emissions will be generated from your system. Adding more resources to support increased traffic requirements introduces more embodied carbon and more demand for electricity. Continuing to handle all requests during high carbon intensity will increase overall emissions for your system. Shedding traffic that is lower priority during these scenarios will save on resources and carbon emissions. This approach requires an understanding of your traffic, including which call requests are critical and which can best withstand retry attempts and failures.
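A minimal sketch of the decision logic, assuming each request carries a priority and that the system knows when it is under pressure (resource saturation or high grid carbon intensity); the threshold and priorities are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Request:
    path: str
    priority: int             # 0 = critical, higher = more shed-able

SHED_PRIORITY_THRESHOLD = 2   # illustrative cut-off

def should_shed(req: Request, under_pressure: bool) -> bool:
    """Drop only lower-priority traffic, and only while constrained."""
    return under_pressure and req.priority >= SHED_PRIORITY_THRESHOLD

def handle(req: Request, under_pressure: bool) -> int:
    if should_shed(req, under_pressure):
        return 503            # ask the client to retry later
    return 200                # process the request normally

print(handle(Request("/checkout", priority=0), under_pressure=True))         # 200
print(handle(Request("/recommendations", priority=3), under_pressure=True))  # 503
```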

Time-shift Kubernetes cron jobs

The carbon emissions of a software system depend not only on the power consumed by that software, but also on the carbon intensity of the electricity that powers it. For this reason, running energy-efficient software on a carbon-intensive electricity grid may do little to reduce its overall carbon emissions. Carbon-aware time scheduling is about scheduling workloads to execute when the electricity's carbon intensity is low.
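A hedged sketch of the decision a carbon-aware job might make before doing its work; get_grid_carbon_intensity() stands in for whatever carbon-intensity data source your region provides (it is not a real API here), and the threshold and deadline are illustrative.

```python
import time

CARBON_THRESHOLD = 200        # gCO2eq/kWh, illustrative cut-off
MAX_DELAY_SECONDS = 6 * 3600  # don't defer past this deadline

def get_grid_carbon_intensity() -> float:
    # Placeholder: query your region's carbon-intensity data provider here.
    return 180.0

def run_when_grid_is_clean(job) -> None:
    """Delay a deferrable job until carbon intensity drops, up to a deadline."""
    waited = 0
    while get_grid_carbon_intensity() > CARBON_THRESHOLD and waited < MAX_DELAY_SECONDS:
        time.sleep(600)       # re-check every 10 minutes
        waited += 600
    job()                     # grid is clean enough, or the deadline was reached

run_when_grid_is_clean(lambda: print("running nightly batch job"))
```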