Mumbai, April 19 -- If technology magazines are to be believed, Artificial Intelligence (AI) will have to make moral choices in the very near future. Drones in a war zone will have to decide, and decide quickly, whether to drop a bomb on an enemy hideout near a hospital. Self-driving cars will have to choose between braking suddenly and injuring their passengers or hitting a jaywalking pedestrian. Serious attempts are being made to distil moral principles by observing human decision-making under experimental conditions. There is a strong push to make machines learn about decision-making and morality from humans and human artefacts. Futurologists like Ray Kurzweil predict that by 2029, we would have machines that can do all th...