Social media giant Facebook announced that it will fund an independent Institute for Ethics in Artificial Intelligence (AI) with an initial investment of $7.5 million over a period of five years. The Technical University of Munich (TUM) in Germany will collaborate on the project, which aims to explore fundamental issues affecting the use and impact of AI.

The institute will address issues that affect the use and impact of AI, such as safety, privacy, fairness and transparency. TUM was ranked sixth in the world for AI research by Times Higher Education in 2018.

The institute will conduct independent, evidence-based research to provide insight and guidance for society, industry, legislators and decision-makers across the private and public sectors.

“At the TUM Institute for Ethics in Artificial Intelligence, we will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy.”

The institute will also benefit from Germany’s position at the forefront of the conversation surrounding ethical frameworks for AI, “including the creation of government-led ethical guidelines on autonomous driving”, and its work with European institutions on these issues.

Need for Ethics in AI:

Machines can evolve at a much faster rate than humans, so it is likely that, with the power of AI, machines will one day be ahead of us. Without ethics, this can backfire on mankind. AI has a great many applications and has been solving various problems for humans, but such a powerful technology raises equal concerns about its possible misuse. There have already been multiple cases of AI being used for malicious purposes, and AI has already posed significant challenges, as discussed below:

Just recently, MIT experimented with an AI that was fed data from the “darkest corners of Reddit.” The result was the world’s first psychopath AI, Norman.

Another recent example is Google’s Duplex AI, a natural-sounding robo-assistant that makes calls on your behalf. Even though Google is still developing this AI, people have raised concerns over privacy and possible misuse by marketers or political campaigns.

Other issues include:

Unemployment: The hierarchy of labour is concerned primarily with automation. Trucking, for example, currently employs millions of individuals. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? On the other hand, given the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.

Inequality: The majority of companies still depend on hourly work when it comes to products and services. But by using artificial intelligence, a company can drastically cut its reliance on the human workforce, which means revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.

Humanity: Tech addiction is the new frontier of human dependency. Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. While humans are limited in the attention and kindness they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships. This can affect human interactions and relationships: people might come to prefer software and devices to human beings for their relationship needs. However, when used right, this could evolve into an opportunity to nudge society towards more beneficial behaviour.

Security: The more powerful a technology becomes, the more it can be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. The reason humans are at the top of the food chain is not sharp teeth or strong muscles; human dominance is almost entirely due to our ingenuity and intelligence. This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us?

Values to guide the AI revolution:

Be socially beneficial: Development of AI technologies should consider a broad range of social and economic factors, and should proceed where the overall likely benefits substantially exceed the foreseeable risks and downsides.

Avoid creating or reinforcing unfair bias: Avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief (a minimal sketch of one such bias check follows this list).

Be built and tested for safety: Develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

Be accountable to people: Design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. All AI technologies should be subject to appropriate human direction and control.

Incorporate privacy design principles: Build privacy safeguards into the development and use of AI, and make the technology available only for uses that conform to societal ethical standards.
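
To make the fairness value above concrete, one common audit is a demographic-parity check: compare how often a model produces a positive outcome for different groups defined by a sensitive attribute. The Python sketch below is illustrative only; the function names, toy data and the 0.8 threshold (the “80% rule” used in some disparate-impact audits) are assumptions, and demographic parity is just one of several fairness metrics.

# Minimal demographic-parity check: compare the rate of positive
# predictions across groups defined by a sensitive attribute.
# Illustrative sketch only: function names, toy data and the 0.8
# threshold (the "80% rule" used in some audits) are assumptions.

def positive_rate(predictions, groups, target_group):
    # Fraction of positive (1) predictions within one group.
    in_group = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(in_group) / len(in_group) if in_group else 0.0

def demographic_parity_ratio(predictions, groups, group_a, group_b):
    # Ratio of positive-prediction rates between two groups;
    # values near 1.0 mean the groups are treated similarly.
    rate_a = positive_rate(predictions, groups, group_a)
    rate_b = positive_rate(predictions, groups, group_b)
    high = max(rate_a, rate_b)
    return min(rate_a, rate_b) / high if high else 1.0

# Toy example: 1 = positive decision (e.g. a loan approved).
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = demographic_parity_ratio(preds, groups, "a", "b")
print("demographic parity ratio: %.2f" % ratio)  # 0.33 for this toy data
if ratio < 0.8:
    print("warning: possible disparate impact")

A check like this is cheap to run on every model release, which is why such ratios often appear in fairness audits; a low ratio does not prove discrimination, but it flags the model for closer human review.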

AI is not a magic solution to all of life’s problems. It is best seen as a tool that, when developed in accordance with the values mentioned above, can enhance human-led projects.

Challenges for AI in the ethical domain:

Brad Smith, the President of Microsoft, co-authored a publication that laid out six principles for AI to consider: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. According to Smith, a consensus on ethical AI alone is not enough; we need to take these principles and put them into law. Only by creating a future where AI law is as important a decade from now as, say, privacy law is today will we ensure that we live in a world where people can have confidence that computers are making decisions in ethical ways.

The adoption of AI is accelerating as businesses see its transformational value to power new innovations and growth. As organisations embrace AI, it is critical to find better ways to train and sustain these systems – securely and with quality – to avoid adverse effects on business performance, brand reputation, compliance and humans.

The general outlook is that the onus is on humans to build AI that meets ethical parameters. As AI algorithms are created and trained by humans, there is a very high possibility of human bias being built into these algorithms. AI systems are superior to humans in speed and capability, and when used in malicious ways they can cause damage of far greater magnitude. Just as with humans, mistakes are unavoidable for AI algorithms too; the issue is the lack of accountability that would otherwise be a deterrent to negative actions.

Niti Aayog recently released a discussion paper on the technology. Titled “National Strategy for Artificial Intelligence”, the paper touched on the issue of ethics along with exploring the massive benefits of AI. Most discussions on ethical considerations of AI are a derivation of the FAT framework (Fairness, Accountability and Transparency). “A consortium of Ethics Councils at each Centre of Research Excellence can be set up, and it would be expected that all COREs adhere to standard practice while developing AI technology and products,” said the paper.

Data is one of the primary drivers of AI solutions, and thus appropriate handling of data, ensuring privacy and security, is of prime importance. Challenges include data usage without consent, the risk of identifying individuals through data, data selection bias and the resulting discrimination by AI models, and asymmetry in data aggregation.
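
One way to make the re-identification risk mentioned above concrete is a k-anonymity check: a dataset is k-anonymous when every combination of quasi-identifiers (fields such as an age band and a postcode) is shared by at least k records, so no individual is singled out by those fields alone. The following Python sketch is a minimal illustration; the field names, toy records and choice of k are assumptions.

# Minimal k-anonymity check: a dataset is k-anonymous if every
# combination of quasi-identifier values occurs in at least k rows.
# Illustrative sketch: field names, records and k are assumptions.
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    # Count how often each quasi-identifier combination appears.
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

records = [
    {"age_band": "30-39", "postcode": "400001", "diagnosis": "flu"},
    {"age_band": "30-39", "postcode": "400001", "diagnosis": "cold"},
    {"age_band": "40-49", "postcode": "400002", "diagnosis": "flu"},
]

# The third record is unique on (age_band, postcode), so k = 2 fails.
print(is_k_anonymous(records, ["age_band", "postcode"], 2))  # False

When a dataset fails such a check, the usual remedies are to generalise the quasi-identifiers (wider age bands, shorter postcode prefixes) or to suppress the outlying records before release.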

Experts suggest that AI can have a positive impact, provided governments implement the right regulations. A team that audits and certifies AI systems on factors such as trust, safety and accountability would certainly help keep malicious use of AI in check, and would foster the safe, explainable and unbiased use of AI.
