Software is eating the world. It is also one of the few sectors with a striking lack of transparency regarding its supply chain. Yes, for smartphones and end-user devices there has been some progress, notably by Apple and others, to create transparency. But what about the digital infrastructure, including the cloud and data centers, that powers the majority of today's applications? Most software applications today pay no attention to their supply chain; it just works. Even environmentalists and climate change start-ups have yet to acknowledge this issue.
For more than a decade, software developers have been abstracting infrastructure away, because dealing with it slows down development and is less enjoyable than solving interesting problems by building applications. But as Moore's Law continues to decelerate, meaning we no longer get faster servers and chips every two years, more software translates into more infrastructure. A significant barrier is that nobody can agree on the dimensions of that infrastructure, the lack of transparency being a prime example.
To decarbonize software, it must be able to measure its own footprint
To decarbonize software, however, and create an environmentally sustainable digital economy, software needs to know its own footprint. This is especially true for server-side applications, which are often invisible to the user or surface only as a web page, with the majority of the computation happening somewhere in a data center. This requires transparent data made available by all of the suppliers: cloud providers, data center owners and operators, and hardware manufacturers.
Furthermore, to enable software and its developers to make better decisions, this data needs to be available within the infrastructure itself, meaning that the physical server, virtual machine, or Docker container can inform the developer of current and past emissions, as well as the pollution created by the software's activity. Without this information from the runtime, it is difficult to make better decisions, for example to delay or skip a computation.
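As a minimal sketch of what such a runtime-informed decision could look like, the following Python snippet decides whether a deferrable computation should run, wait, or be skipped based on the current grid carbon intensity. The policy thresholds and function names are illustrative assumptions, not part of any published API:

```python
from dataclasses import dataclass

@dataclass
class CarbonPolicy:
    """Hypothetical policy thresholds; real values would be workload-specific."""
    run_below: float   # gCO2eq/kWh: run immediately below this intensity
    skip_above: float  # gCO2eq/kWh: skip optional work above this intensity

def decide(intensity: float, policy: CarbonPolicy, deadline_passed: bool) -> str:
    """Return 'run', 'delay', or 'skip' for a deferrable computation."""
    if deadline_passed:
        return "run"   # must-run work always executes eventually
    if intensity <= policy.run_below:
        return "run"
    if intensity >= policy.skip_above:
        return "skip"  # e.g. speculative or purely optional work
    return "delay"

policy = CarbonPolicy(run_below=200.0, skip_above=600.0)
print(decide(150.0, policy, deadline_passed=False))  # run
print(decide(400.0, policy, deadline_passed=False))  # delay
print(decide(700.0, policy, deadline_passed=True))   # run (deadline forces it)
```

The point is not the specific thresholds but that the decision can only be made at all if the runtime exposes emissions data to the application.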
Building a framework for the IT footprint data as an open standard, and the tools to calculate it
The data needed to determine the footprint of a server-side software application should have the same structure and carry the same information across different server, virtualization, and container environments. Therefore, we have begun creating an open standard as part of our active Digital Carbon Footprint (DCF) Steering Group.
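To make this concrete, here is a sketch of what one record in such a common data structure could contain. All field names are illustrative assumptions, not taken from the DCF standard:

```python
from dataclasses import dataclass, asdict

@dataclass
class FootprintRecord:
    """One interval of measured resource use for a workload.
    Field names are hypothetical, chosen only for illustration."""
    host_id: str         # physical server, VM, or container identifier
    environment: str     # "bare-metal" | "vm" | "container"
    interval_start: str  # ISO 8601 timestamp
    interval_end: str
    energy_kwh: float    # energy drawn by the workload in the interval
    grid_region: str     # power grid the data center is connected to

record = FootprintRecord(
    host_id="vm-1234",
    environment="vm",
    interval_start="2022-05-01T10:00:00Z",
    interval_end="2022-05-01T11:00:00Z",
    energy_kwh=0.12,
    grid_region="DE",
)
print(asdict(record))
```

The key property is that the same record shape works whether the workload runs on bare metal, in a virtual machine, or in a container.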
Next, the data has to be collected from the infrastructure and enriched with additional information, for example the current CO2-eq intensity of the power grid to which the data center facility and the server are connected, in order to calculate a real carbon footprint. For this we are developing an Environmental Data Agent (to run within the infrastructure) and an Open Data Hub (to provide open data for the collaboration and enrichment of the data). You can read more about these initiatives on GitHub and contribute to the development.
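The enrichment step reduces to a simple calculation: emissions equal the energy used multiplied by the carbon intensity of the grid at that time. A minimal sketch, with illustrative numbers:

```python
def carbon_footprint_g(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Grams of CO2-eq = energy used (kWh) x grid carbon intensity (gCO2eq/kWh)."""
    return energy_kwh * intensity_g_per_kwh

# A workload that drew 0.12 kWh on a grid running at 350 gCO2eq/kWh:
print(carbon_footprint_g(0.12, 350.0))  # 42.0 grams CO2-eq
```

Because grid intensity varies by region and by hour, the same computation can have a very different footprint depending on where and when it runs, which is exactly why the agent must enrich raw energy data with location- and time-specific grid data.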
Come shape supply chain transparency in software
Are you a developer who wants to try out this API? Do you want to contribute to the open standard? Are you a data center facility operator or cloud provider who wants to be the first to offer this API to your customers?
Come collaborate with us.