From privacy to CO2 emissions: the implications of information technology
BLOG: Heidelberg Laureate Forum

Information technology seems to be playing an ever-growing role in our society. But while the positive effects are plain to see, the potential negative effects are not always as obvious. In a #HLF21 dialogue that included laureates Shafrira Goldwasser, David Patterson, Joseph Sifakis, and Efim Zelmanov, as well as moderator Tarek Besold, the implications of technology were put in the spotlight.

Goldwasser’s first point was that we should understand just how big an impact machine learning and information technology already have on our lives. This is not an issue for future generations to deal with – it is a problem for now.
“The availability of large data and machine learning has altered the face of every aspect of our life,” Goldwasser says. “Infrastructure, finance, medicine, the way we shop, the way we are treated by doctors, the way we receive loans, and so forth. Courts are also using data more and more.”
The grasp that technology has on our society runs even deeper, Goldwasser argues. Even our basic set of values is being shaped and influenced by algorithms – essentially, we’re trusting the power of statistical prediction to define important parts of our society. The problem is that we’re not entirely sure how this power meshes with our ideas of how society should run. For instance, we don’t know whether these algorithms are fair, whether they foster inequality, or whether they are equitable. This also causes a power shift: whoever has more data has more power, says Goldwasser, and privacy is often a mere afterthought.
For David Patterson, the environmental impact of machine learning is a concern. “I think all of us are concerned about climate change and what it’s gonna mean,” he comments, adding that the tremendous excitement about machine learning has recently been dampened by concerns about the emissions produced by training large models. However, Patterson believes these concerns are overblown and that this environmental impact has been greatly exaggerated.
But one environmental impact that hasn’t been exaggerated, the laureates agree, is that of Bitcoin. Bitcoin uses more energy than the whole country of Argentina – and its energy consumption continues to rise. This has a lot to do with the way Bitcoin itself is designed; other cryptocurrencies don’t use nearly as much energy. As Goldwasser explains, Bitcoin rests on the technical premise of proof of work, whereas other protocols reach consensus through alternative mechanisms, such as proof of stake, that use far less energy. Patterson agrees and remarks that whoever invented Bitcoin either didn’t have the technical background to design a better protocol or simply didn’t care.
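Goldwasser’s point about proof of work is easy to make concrete. Below is a minimal Python sketch of the proof-of-work idea: miners brute-force a nonce until a hash meets a difficulty target. The block data and difficulty are made up for illustration; real Bitcoin mining uses a different target encoding and a vastly higher difficulty.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(block_data + nonce)
    starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # valid proof found
        nonce += 1  # keep guessing -- this loop is where the energy goes

# Each extra zero digit multiplies the expected work by 16.
# difficulty=5 takes roughly a million hashes; Bitcoin's real target
# requires on the order of 10^22 hashes per block, network-wide.
print(proof_of_work("block 42: alice->bob 1 BTC", difficulty=5))
```

Proof of stake removes this brute-force race entirely – validators are chosen in proportion to their stake – which is why such protocols use orders of magnitude less energy.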

Privacy
The discussion then moved to one of the thorniest matters around information technology and machine learning: privacy.
Firstly, who is responsible for ensuring citizens’ privacy in the age of machine learning and the internet? Both Sifakis and Zelmanov agree it’s governments.
“Governments and institutions, they define the rules of the game, they are responsible,” says Sifakis. “As long as governments, agencies, institutions, they don’t enforce some rules, it is a problem.”
Sifakis also raises the case of autonomous vehicles, where many manufacturers operate under a mechanism of self-certification – essentially letting the companies themselves decide whether their systems are safe enough. This is a slippery slope, Sifakis continues. “We should not be permissive [...] there should be limits.”
Zelmanov and Patterson also pointed out that privacy means different things in different places, and countries have wildly contrasting ideas about what is acceptable. The panel mentions China, for instance, where the government has, until recently, shown little regard for individual privacy. Does this mean that other countries should also give up on privacy as a lost cause? Absolutely not, says Goldwasser.
“My point is, I think if we focus our attention on realizing what the problems are and defining them, then we can address them. To say that with privacy, the genie is out of the bottle and we can’t do anything – I cannot disagree more,” she comments.
Ultimately, governments will always play catch-up with companies as they try to regulate technologies, the panel agrees. But that’s not necessarily a bad thing, because, Patterson argues, we’ve seen that governments do have the regulatory mechanisms to enforce positive change when it comes to technology.
The dialogue ended with a discussion on work and technology. In the pandemic, we’ve all seen just how quickly the way we work can change, and thankfully, we have the means to enable a large part of the population to work from home. Even before the pandemic, machine learning benefitted from digital work, especially in the field of data labeling – the process of applying informative labels to raw data (images, text, videos, etc.). But in the context of extreme poverty in some parts of the world, this has led to the creation of “digital sweatshops,” where people can be overworked and underpaid.
This is not a new concept – it’s just the medium that’s different. The panel seems to agree that it’s not the technology that’s causing issues, but rather the social context, so that’s where the problem should be addressed first.
For better or for worse, machine learning is still dependent on manual human work, and will likely continue to be so for the foreseeable future. “To be against the idea of human beings labeling data would be a pretty strong statement,” Patterson emphasizes. Ultimately, perhaps this is the most important lesson: technology often seems like it’s creating new challenges and problems – but oftentimes, those problems are actually old problems in a new form.
It’s up to our society and its democratic system of checks and balances to ensure that information technology is used for the progress and benefit of society. As was discussed in a previous laureate dialogue, algorithms are like recipes: they are neither good nor bad. They are a tool, and it’s up to us to use them properly.
Governments define the rules of the game
Yes. According to the overview “Internet censorship and surveillance by country,” the internet is controlled by the government in 20 countries: Bahrain, Belarus, Bangladesh, China, Cuba, Ethiopia, India, Iran, North Korea, Pakistan, Russia, Saudi Arabia, Sudan, Syria, Turkmenistan, the United Arab Emirates, the United Kingdom, the United States, Uzbekistan, and Vietnam.
Today, the internet is often the public square in which our political debates are conducted. To some degree, the internet has replaced the newspaper as the place where opinions are exchanged. Internet censorship is therefore comparable to press censorship – except that far more information is shared on the internet than was ever shared in newspapers.
There was a short period when many believed in internet freedom, a term encompassing digital rights, freedom of information, the right to internet access, freedom from internet censorship, and net neutrality. But that time is over. On the one hand, individuals have to be protected from hate speech and fake news; on the other, governments have experienced the power of the internet and therefore want to control it in order to control society. This is mainly true of authoritarian regimes, and in 2021 there are more authoritarian regimes than there were ten years earlier.
CO2 emissions from AI technologies
The CO2 emissions of artificial-intelligence computing roughly correspond to the computing resources required per application multiplied by the number of running applications. In the early days of AI – around the nineties – there were already promising concepts such as deep learning, but its computing needs could not be met by the hardware of the time. Today one might say: the CO2 emissions of early deep learning were immense. But that is not really true, because apart from a few researchers in Montreal and Switzerland, hardly anybody used deep learning in the 1990s. CO2 emissions are therefore a measure of both the computing requirements of a technology and the spread of that technology. In the end, only resource-efficient software will spread around the world. This even applies to Bitcoin, which currently consumes a lot of resources (approximately 1/1000 of the world’s electricity) but which will not conquer the world, simply because a single Bitcoin transaction can take minutes and uses a lot of energy. The same considerations apply to artificial intelligence.
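That back-of-the-envelope relation can be written out explicitly. Here is a minimal Python sketch with purely illustrative numbers – none of them come from the article or from measurements:

```python
# Rough model from the paragraph above:
#   total CO2 ~ energy per application run x number of runs x grid carbon intensity
# Every number here is an assumption for illustration, not a measured value.

energy_per_run_kwh = 2.0      # assumed energy of one training/inference job
runs_per_year = 1_000_000     # assumed spread: how often the application runs
grid_kg_co2_per_kwh = 0.4     # rough world-average grid carbon intensity

total_tonnes = energy_per_run_kwh * runs_per_year * grid_kg_co2_per_kwh / 1000
print(f"~{total_tonnes:,.0f} t CO2 per year")  # ~800 t with these assumptions
```

The point of the model: halving either factor halves the emissions, and a technology that almost nobody runs emits next to nothing – which is why 1990s deep learning had no carbon footprint worth mentioning.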
Before the invention and spread of GPUs [Graphics Processing Units], deep learning was a curiosity. Today, deep learning is the dominant technique of artificial intelligence, because GPUs removed its hardware limitations.
And we already have several AI accelerator technologies: not just GPUs, but also TPUs (Tensor Processing Units) and neuromorphic hardware such as Intel’s Loihi 2, which is based on spiking neural networks and implements 1 million neurons (120 million synapses) while consuming only milliwatts of power.
And in 2022 there will even be photonic hardware to compete with tensor processing units. Lightmatter’s photonic chip performs linear-algebra operations such as matrix multiplication 10 times faster than an electronic chip, while consuming 10 times less energy.
Insight: The energy consumption of successful mass market technologies is high not because the underlying technology consumes a lot of energy, but because the technology is widespread.
Complementary note: The large storage and compute requirements of large language models such as BERT, GPT-2, and GPT-3 limit the spread of these models, because users are not about to buy larger computers for them and shell out millions of dollars in electricity bills.
Compression methods can help. The arXiv article “Prune Once for All: Sparse Pre-Trained Language Models” reports a 40-fold compression of pre-trained BERT language models using pruning boosted with 8-bit quantization. Notably, the compression is preserved even after the model is fine-tuned for downstream tasks.
And all of this with an accuracy loss of only 1%.
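The paper describes a full training pipeline, but its two ingredients are easy to illustrate in isolation. Here is a minimal NumPy sketch of magnitude pruning plus symmetric 8-bit quantization on a single weight matrix – an illustration of the ingredients, not the paper’s actual method:

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping (1 - sparsity) of them."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_int8(w: np.ndarray):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale  # approximately recover the weights with q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(768, 768)).astype(np.float32)  # one BERT-sized layer

w_sparse = magnitude_prune(w, sparsity=0.9)  # 90% of weights set to zero
q, scale = quantize_int8(w_sparse)           # 8 bits instead of 32 per weight

print((w_sparse != 0).mean(), q.dtype)  # 0.1 non-zero fraction, int8
```

Storing only the 10% surviving weights at 8 bits instead of 32 is where a roughly 40-fold size reduction can come from, ignoring the overhead of sparse indexing.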
Conclusion: Successful AI is always resource-efficient AI.
Liability for accidents with self-driving cars
Answer: A company that claims its cars drive autonomously is liable for any accident involving one of its vehicles if the vehicle was driving autonomously at the time of the accident and its autonomy level was 3 or higher.
Tesla calls its advanced driving system FSD, short for Full Self-Driving, but Tesla’s current system is still at autonomy level 2, which means that the driver must be able to take over at any time and is responsible for every accident, even if the car was driving itself. This situation has been criticized by several observers, as drivers appear to trust the system too much and many do not pay enough attention to the road. On the other hand, the situation is ideal for Tesla: it is not yet liable for accidents at autonomy level 2, yet it can learn a lot from the driving experience of its software-controlled cars.
And these are the autonomy levels of cars (also summed up as a small lookup table below):
– Level 0 (no driving automation) The human driver is liable
– Level 1 (driver assistance) The human driver is liable
– Level 2 (partial driving automation) The human driver is liable
– Level 3 (conditional driving automation) Car companies are liable
– Level 4 (high driving automation) Car companies are liable
– Level 5 (full driving automation) Car companies are liable
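For reference, the same mapping written as a small Python lookup table – a sketch for illustration only, not legal advice:

```python
# SAE driving-automation levels and who is liable in an accident,
# as summarized in the list above. Illustrative only, not legal advice.
LIABILITY = {
    0: ("no driving automation", "the human driver"),
    1: ("driver assistance", "the human driver"),
    2: ("partial driving automation", "the human driver"),
    3: ("conditional driving automation", "the car company"),
    4: ("high driving automation", "the car company"),
    5: ("full driving automation", "the car company"),
}

def liable_party(level: int) -> str:
    description, party = LIABILITY[level]
    return f"Level {level} ({description}): {party} is liable"

print(liable_party(2))  # Tesla's FSD today -> the human driver is liable
```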
Unjust work versus no work at all
Yes, cheap outsourced labor is unfair and an expression of the deep income gap between the rich and the poor world – a gap documented, for instance, in Gallup’s 2013 income statistics.
This large income gap between rich and poor countries is the basis for rich countries offshoring work to poor countries; it is the material basis for low-wage countries. But some low-wage countries have worked their way up to middle-income societies within a few decades. The most important example of this is certainly China.
But this opportunity to work one’s way up could soon be gone forever. Artificial intelligence in particular, the subject of this article, could first automate simple tasks such as driving, building houses and infrastructure, and much more, and then cover more and more kinds of work. That leads to underemployment, even unemployment. In rich countries this can be compensated for with a basic income, but in poor countries the automation of work threatens the last chances of advancement. The labor of people in poor countries may no longer be in demand in the future, which could make it even harder for those societies to catch up with the prosperous world.