The rise of greedy robots

Given the impressive advancement of machine intelligence in recent years, many people have been speculating on what the future holds when it comes to the power and roles of robots in our society. Some have even called for regulation of machine intelligence before it’s too late. My take on this issue is that there is no need to speculate – machine intelligence is already here, and greedy robots already dominate our lives.

Machine intelligence or artificial intelligence?

The problem with talking about artificial intelligence is that it creates an inflated expectation of machines that would be completely human-like – we won’t have true artificial intelligence until we can create machines that are indistinguishable from humans. While the goal of mimicking human intelligence is certainly interesting, it is clear that we are very far from achieving it. We currently can’t even fully simulate C. elegans, a 1mm worm with 302 neurons. However, we do have machines that can perform tasks that require intelligence, where intelligence is defined as the ability to learn or understand things or to deal with new or difficult situations. Unlike artificial intelligence, there is no doubt that machine intelligence already exists.

Airplanes provide a famous example: we don’t commonly think of them as performing artificial flight – they are machines that fly faster than any bird. Likewise, computers are super-intelligent machines. They can perform calculations that humans can’t, store and recall enormous amounts of information, translate text, play Go, drive cars, and much more – all without requiring rest or food. The robots are here, and they are becoming increasingly useful and powerful.

Who are those greedy robots?

Greed is defined as a selfish desire to have more of something (especially money). It is generally seen as a negative trait in humans. However, we have been cultivating an environment where greedy entities – for-profit organisations – thrive. The primary goal of for-profit organisations is to generate profit for their shareholders. If these organisations were human, they would be seen as the embodiment of greed, as they are focused on making money and little else. Greedy organisations “live” among us and have been enjoying a plethora of legal rights and protections for hundreds of years. These entities, which were formed and shaped by humans, now form and shape human lives.

Humans running for-profit organisations have little choice but to play by their rules. For example, many people acknowledge that corporate tax avoidance is morally wrong, as revenue from taxes supports the infrastructure and society that enable corporate profits. However, any executive of a public company who refuses to do everything they legally can to minimise their tax bill is likely to lose their job. Despite being separate from the greedy organisations they run, humans have to act greedily to serve their employers effectively.

The relationship between greedy organisations and greedy robots is clear. Much of the funding that goes into machine intelligence research comes from for-profit organisations, with the end goal of producing profit for these entities. In the words of Jeffrey Hammerbacher: “The best minds of my generation are thinking about how to make people click ads.” Hammerbacher, an early Facebook employee, was referring to Facebook’s business model, where considerable resources are dedicated to getting people to engage with advertising – the main driver of Facebook’s revenue. Indeed, Facebook has hired Yann LeCun (a prominent machine intelligence researcher) to head its artificial intelligence research efforts. While LeCun’s appointment will undoubtedly result in general research advancements, Facebook’s motivation is clear – they see machine intelligence as a key driver of future profits. They, and other companies, use machine intelligence to build greedy robots, whose sole goal is to increase profits.

Greedy robots are all around us. Advertising-driven companies like Facebook and Google use sophisticated algorithms to get people to click on ads. Retail companies like Amazon use machine intelligence to mine through people’s shopping history and generate product recommendations. Banks and mutual funds utilise algorithmic trading to drive their investments. None of this is science fiction, and it doesn’t take much of a leap to imagine a world where greedy robots are even more dominant. Just like we have allowed greedy legal entities to dominate our world and shape our lives, we are allowing greedy robots to do the same, just more efficiently and pervasively.
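
To make the idea of a “greedy robot” concrete, here is a minimal Python sketch of an ad selector that picks whichever ad maximises expected revenue per impression, and optimises for nothing else. It is not any company’s actual system; the ads, click-through rates and bids are invented purely for illustration.

    # A minimal sketch of a "greedy robot": an ad selector whose sole goal
    # is to maximise expected revenue. All numbers are made up.

    ads = [
        # (ad_id, estimated_click_through_rate, advertiser_bid_per_click)
        ("sneakers", 0.030, 0.50),
        ("insurance", 0.004, 6.00),
        ("mobile_game", 0.050, 0.20),
    ]

    def expected_revenue(ad):
        _, click_through_rate, bid = ad
        return click_through_rate * bid  # expected revenue per impression

    def pick_ad(candidates):
        # The greedy step: choose the most profitable ad, ignoring any
        # other consideration (relevance, annoyance, user wellbeing, ...).
        return max(candidates, key=expected_revenue)

    best = pick_ad(ads)
    print(best[0], expected_revenue(best))  # "insurance" wins on expected revenue here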

Will robots take your job?

The growing range of machine intelligence capabilities gives rise to the question of whether robots are going to take over human jobs. One salient example is that of self-driving cars, which are projected to render millions of professional drivers obsolete in the next few decades. The potential impact of machine intelligence on jobs was summarised very well by CGP Grey in his video Humans Need Not Apply. The main message of the video is that machines will soon be able to perform any job better or more cost-effectively than any human, thereby making humans unemployable for economic reasons. The video ends with a call to society to consider how to deal with a future where there are simply no jobs for a large part of the population.

Despite all the technological advancements since the start of the industrial revolution, the prevailing mode of wealth distribution remains paid labour, i.e., jobs. Because income depends on holding a job, much of the work we do is unnecessary or even harmful – people work because they have no other option, but their work doesn’t necessarily benefit society. This isn’t a new insight, as the following quotes demonstrate:

  • “Most men appear never to have considered what a house is, and are actually though needlessly poor all their lives because they think that they must have such a one as their neighbors have. […] For more than five years I maintained myself thus solely by the labor of my hands, and I found that, by working about six weeks in a year, I could meet all the expenses of living.” – Henry David Thoreau, Walden (1854)
  • “I think that there is far too much work done in the world, that immense harm is caused by the belief that work is virtuous, and that what needs to be preached in modern industrial countries is quite different from what always has been preached. […] Modern technique has made it possible to diminish enormously the amount of labor required to secure the necessaries of life for everyone. […] If, at the end of the war, the scientific organization, which had been created in order to liberate men for fighting and munition work, had been preserved, and the hours of the week had been cut down to four, all would have been well. Instead of that the old chaos was restored, those whose work was demanded were made to work long hours, and the rest were left to starve as unemployed.” – Bertrand Russell, In Praise of Idleness (1932)
  • “In the year 1930, John Maynard Keynes predicted that technology would have advanced sufficiently by century’s end that countries like Great Britain or the United States would achieve a 15-hour work week. There’s every reason to believe he was right. In technological terms, we are quite capable of this. And yet it didn’t happen. Instead, technology has been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.” – David Graeber, On the Phenomenon of Bullshit Jobs (2013)

This leads to the conclusion that we are unlikely to experience the utopian future in which intelligent machines do all our work, leaving us ample time for leisure. Yes, people will lose their jobs. But it is not unlikely that new unnecessary jobs will be invented to keep people busy or, worse, that many people will simply be unemployed and will not get to enjoy the wealth provided by technology. Stephen Hawking summarised it well recently:

“If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”

Where to from here?

Many people believe that the existence of powerful greedy entities is good for society. Indeed, there is no doubt that we owe many beneficial technological breakthroughs to competition between for-profit companies. However, a single-minded focus on profit means that in many cases companies do what they can to reduce their responsibility for harmful side-effects of their activities. Examples include environmental pollution, multinational tax evasion, and health effects of products like tobacco and junk food. As history shows us, in truly unregulated markets, companies would happily utilise slavery and child labour to reduce their costs. Clearly, some regulation of greedy entities is required to obtain the best results for society.

With machine intelligence becoming increasingly powerful every day, some people think that to produce the best outcomes, we just need to wait for robots to be intelligent enough to completely run our lives. However, as anyone who has actually built intelligent systems knows, the outputs of such systems are strongly dependent on the inputs and goals set by system designers. Machine intelligence is just a tool – a very powerful tool. Like nuclear energy, we can use it to improve our lives, or we can use it to obliterate everything around us. The collective choice is ours to make, but it is far from simple.
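
To illustrate the last point, here is a small hypothetical sketch: the same trivial optimiser, given two different designer-chosen objectives, picks two different behaviours. The actions and their scores are entirely made up.

    # The same optimiser, two different designer-chosen objectives,
    # two different behaviours. All actions and scores are hypothetical.

    actions = {
        "show_clickbait":   {"profit": 9.0, "user_wellbeing": 2.0},
        "show_useful_info": {"profit": 4.0, "user_wellbeing": 8.0},
        "show_nothing":     {"profit": 0.0, "user_wellbeing": 5.0},
    }

    def best_action(objective):
        # Pick the action that maximises whatever goal the designer set.
        return max(actions, key=lambda name: objective(actions[name]))

    def profit_only(outcome):
        return outcome["profit"]

    def profit_and_wellbeing(outcome):
        return outcome["profit"] + 2 * outcome["user_wellbeing"]

    print(best_action(profit_only))           # show_clickbait
    print(best_action(profit_and_wellbeing))  # show_useful_info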

Public comments are closed, but I love hearing from readers. Feel free to contact me with your thoughts.

Yes, the world has always been greedy. This reminds me of Dijkstra’s greedy algorithm, which is used to find the shortest route. There are many “steps” an organisation must take to become profitable, and greediness tries to find the most cost-efficient way to achieve the goal of being profitable. Let us assume that each road is a railway and trains traverse it to reach their destinations: each decision about which path to take sacrifices other trains waiting to cross to their destinations. If human stupidity does not prevail again, our scarce resource will ultimately be constrained by economics to one element only: time. What do we want humans to spend their time on?
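
The comment above invokes Dijkstra’s algorithm as an analogy for greedy, step-by-step cost minimisation. For readers unfamiliar with it, here is a minimal Python sketch of the algorithm; the toy railway network and its station names are made up purely for illustration.

    import heapq

    def dijkstra(graph, source):
        # Greedy shortest-path distances from `source`.
        # `graph` maps each node to a list of (neighbour, edge_cost) pairs.
        # At every step the algorithm greedily settles the cheapest node
        # not yet settled, which is what makes it a "greedy" algorithm.
        distances = {source: 0}
        queue = [(0, source)]
        while queue:
            cost, node = heapq.heappop(queue)
            if cost > distances.get(node, float("inf")):
                continue  # stale queue entry: a cheaper path was already found
            for neighbour, edge_cost in graph[node]:
                new_cost = cost + edge_cost
                if new_cost < distances.get(neighbour, float("inf")):
                    distances[neighbour] = new_cost
                    heapq.heappush(queue, (new_cost, neighbour))
        return distances

    # Toy railway network: edge costs could stand for minutes of travel.
    railway = {
        "depot": [("junction", 4), ("siding", 1)],
        "siding": [("junction", 2)],
        "junction": [("terminal", 5)],
        "terminal": [],
    }
    print(dijkstra(railway, "depot"))  # shortest times from the depot to every station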

Greediness will always thrive in the sense that society sees it as a trait of growth. War, which today’s society ultimately condemns, was viewed in the past as one way for a nation to gain growth. Growth was limited to the domain of a specific country, and the rest were treated as enemies. The end of war was enforced by the right not to interfere with another’s property, so people had to find other means of gaining growth. Thus, the concept of greediness took hold. Greediness is the use of emotional appeal to manipulate other people’s habits towards a specific domain.

The problem with greediness is whether people evaluate the emotional appeal as something positive or negative for themselves and society. Our capacity to get that right depends on:

  1. The effort people put into acquiring multi-disciplinary knowledge across domains
  2. The effort people put into applying that knowledge to their daily decisions.

Most consumers are passive on the above two points due to societal constraints. More specifically, if people focus on learning from other domains, they risk underperforming in their main domain and putting their career prospects at a competitive disadvantage. In Bayesian terms, this limited domain knowledge leaves people with low confidence on many topics, allowing others to influence their decision making. I think that low confidence is the main cause of the rising trend of people relying on push messages rather than pull messages for their decisions.

To this day I haven’t seen sophisticated push messages where the user can choose what to see, except in the onboarding phase of a product. In addition, the onboarding phase only gives reasons why you should use the product; there will never be a phase on reasons not to use it. Imagine if a product could tell the user, based on the personalised information it has gathered: “Hey, you shouldn’t be using me in this situation. Use Bob instead, it will make your life easier” (this will become possible as data evolves). The problem is that a product will never point to alternative domains that can solve a user’s individual problem better, because there is no commission fee for referring a user to another domain with qualitative information. As a result, the employees behind a product have no interest in researching alternatives that solve a specific problem better, because the current system provides no platform that rewards doing so with a commission fee. Instead, the only way for a product to thrive is by copying others’ ideas or owning them through acquisitions, which greatly demotivates innovation. So far, only conscientious people, such as start-up entrepreneurs who leave their old positions and people who contribute to open source, go the extra mile to innovate, whether or not they capture value from it. My whole hypothesis is that our natural instincts make us machine learners, and our only task is to make progress on everything, even in our own personal lives.

If those two points happen, the rule of greediness will be overturned. People will consciously evaluate whether an emotional appeal makes sense in the big picture, because their jobs will force them to connect their domain with alternatives in order to earn a commission fee. That will give them more robust interdisciplinary knowledge, making them more confident about pulling information from other domains they are starting to learn about rather than having it pushed at them. Passive consumers will become less passive. Value was once based on war, is now based on greediness, and later will be all about evaluation.

Your point about people doing less work implies an even more passive society than we already have. I do not propose that, as it would make our situation worse. The problem is the type of tasks people do, not the tasks themselves. People need to do tasks that advance our society instead of being passive, as in the game of Civilization; it is the only way to be happy and have a purpose. Just as the machine learning instances we create have an end-goal purpose, we humans are machine learners whose purpose is to handle any situation that becomes a problem. Our starter pack of problems to solve was human suffering, hunger, and death. Now that these press on us less, we have to find motivation beyond extrinsic rewards.
