How do we measure something that varies so much from person to person? How can we implement an objective scale to measure morality? What really is right and wrong?
Such philosophical questions came up during our break-out sessions with TKS. A topic labeled 'Laws of AI' took a very thought-provoking turn that made us question even our own code of ethics.
Here I will be discussing:
Background of the discussion
Ideas that were discussed
Weaknesses in those ideas
My ideas
Weaknesses in my idea
Concluding remarks
We first turned to the safety of humans when discussing the potential laws. Most of us agreed on Isaac Asimov's Three Laws of Robotics, which state (see the small code sketch after this list):
A robot shall not harm a human, or by inaction allow a human to come to harm.
A robot shall obey any instruction given to it by a human.
A robot shall avoid actions or situations that could cause harm to itself.
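As a thought experiment (not something we wrote during the session), here is a minimal Python sketch of how these three laws could act as an ordered filter over a robot's candidate actions. The Action fields and the permitted check are hypothetical placeholders of my own; the point is only that the laws have a strict priority order.

```python
from dataclasses import dataclass

# Hypothetical action description; the fields are stand-ins for whatever
# perception/prediction system would actually fill them in.
@dataclass
class Action:
    name: str
    harms_human: bool        # would carrying this out injure a person?
    ordered_by_human: bool   # is it an explicit human instruction?
    endangers_self: bool     # would it damage the robot itself?

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    """Apply the three laws in strict priority order."""
    # Law 1: a robot shall not harm a human...
    if action.harms_human:
        return False
    # ...or by inaction allow a human to come to harm.
    if action.name == "do_nothing" and inaction_harms_human:
        return False
    # Law 3: avoid self-harm, but only when it doesn't conflict with
    # Law 1 (protecting a human) or Law 2 (obeying an order).
    if action.endangers_self and not (action.ordered_by_human or inaction_harms_human):
        return False
    # Law 2 is a duty to act (obey), not a veto, so anything left is allowed.
    return True

# Example: standing by while someone is in danger is not permitted,
# but risking the robot itself to help them is.
print(permitted(Action("do_nothing", False, False, False), inaction_harms_human=True))             # False
print(permitted(Action("pull_person_from_fire", False, False, True), inaction_harms_human=True))   # True
```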
After trying to get into the details of this topic, we eventually hit a block that seemed impossible to get past. It was only after I got off the call that I realized a more calculated way we could push through this obstacle. As our generation moves into a digital age where so much becomes automated, the need to figure out and plan these aspects of AI only grows. And so I write about one of the many ways we could achieve this.
When we started, we decided that we would need a governmental body to program and shape AIs and their moral code for public service. But how should we choose the people in this leadership position? Suggestions included a group of programmers and machine engineers shaping those AIs, or perhaps government politicians holding the position. We also tried to discuss how we would find a scale to measure morality in the first place; that question led to many suggestions that were eventually shot down, because we realized morality just can't be measured. There was also the issue of the public: we were very aware that illegal AI could become a thing, with people using AI for their own benefit. To this there were suggestions such as limiting access to AI information, or making strict laws against building bots like these.
There could be many problems with these suggestions, though. As you might have already guessed, putting such important matters in the hands of a politician or any programmer/engineer can result in the same problem: using it for their own advantage, or making a mistake and ending the world by accident. The real question is: what qualifications does a person need to direct a proper moral code in an AI? How can morality even be put on a concrete scale to measure and judge? Are we even able to put our biases away and judge morality in a neutral way? What even is a neutral way of judging morality? The argument went in circles and circles. As for limits on the public making AI, that would be seen as a violation of the right to information, and the public would not like the idea of innovation being held away from them. Even if such a law passed, every country would need to agree on it, and even stricter protocols, like making AI illegal to the public, might be needed.
After I thought long and hard about this, I came to a conclusion: we could instate a board made up of people from the government and people voted in by the public. The governmental side would be in charge of seeing to the economic and political benefit of the country, and the people from the public would be in charge of seeing to the benefit of the people and their needs and wants. The code of morality could then be decided by them for their country; it would, however, have to abide by international law for AI, with conferences held and the public regularly updated on progress.

Another way I thought of was ethics through machine learning. I started to wonder why we did not blame each other in terms of morality during the conference; we were all aware of how different everyone's viewpoints were, yet we did not argue. Why did we accept them? I came to the conclusion that it's because we all understand that others did not grow up like us: they come from different backgrounds, ethnicities, and areas, and that often shapes the way they think about right and wrong. So if an AI learns its code of morality from the people around it, it could lead to the same result. If we have the power to influence the AI rather than completely program it from scratch, its judgments become harder to argue with.
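To make this concrete, here is a toy sketch (my own illustration, not anything we built) of an AI that starts from a small seed of judgments given by its designers and then updates its moral code from the judgments it observes in its community. The LearnedMoralCode class, the "lying" situation, and the labels are all hypothetical.

```python
from collections import Counter

# Toy "learned moral code": seeded by designers, then shaped by observation.
class LearnedMoralCode:
    def __init__(self, seed):
        # seed maps a situation label to initial judgment counts.
        self.votes = {situation: Counter(counts) for situation, counts in seed.items()}

    def observe(self, situation, judgment):
        # Learn from how the people around the bot judge a situation.
        self.votes.setdefault(situation, Counter())[judgment] += 1

    def judge(self, situation):
        # Side with the majority of judgments seen so far.
        if situation not in self.votes:
            return "undecided"
        return self.votes[situation].most_common(1)[0][0]

# Designers seed the bot (the influence we keep)...
code = LearnedMoralCode({"lying": {"wrong": 3, "acceptable": 1}})
# ...and its community shapes it from there.
code.observe("lying", "acceptable")
print(code.judge("lying"))  # "wrong" — the seed still outweighs one observation
```

A real system would use far richer features and models than a majority vote, but the shape is the same: we set the starting point, and the environment does the rest.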
Below is a small video with more details on how I think the programming should go:
There are some problems with my analysis, though. Creating a board and a whole system to decide on this would be very costly, and who knows? Maybe the whole process of coming to a decision would take weeks or months, and the education level of each board member would differ, which wouldn't make it fair. There are many ways to exploit this method. Since AIs would learn their moral code from their environment, there could be cases of bullying or emotionally harassing other humans as well as robots (physical harm to others is not on this list, as the three laws of AI are the base of the bot). It is also extremely possible that a lot of you will disagree with the morality layer in the video; it seems cold-blooded to run a risk assessment on someone's life. And this whole idea can only work for registered AI bots; the public could build their own bots that bypass the entire system.
If you made it to the end of this page, thank you. This is my first newsletter, so there are surely some mistakes or inaccuracies that I have overlooked. If you could give me some advice or just share your opinion on the whole topic, it would be greatly appreciated.