Would you automate your conscience if you could?


It seems obvious that moral artificial intelligence would be better than the alternative. But can we, and should we, align AI’s values with our own? Would we even want to? This is the question underlying this conversation between EconTalk host Russ Roberts and psychologist Paul Bloom.

Leaving aside (at least for now) the question of whether AI will become smarter, what benefits would moral AI offer? Would those benefits outweigh the potential costs? Let’s hear what you have to say! Share your reactions to the prompts below in the comments. As Russ says, we’d love to hear from you.

1- How would you describe the relationship between morality and intelligence? Does more intelligence necessarily imply more morality – in humans or in AI? Might more intelligence offer a greater capacity for morality? What would AI need to do to develop human-like morality? How much of (human) intelligence comes from education? How much of morality?

2- Where does (human) cruelty come from? Bloom suggests that intelligence is largely innate but continually shaped by later influences, while morality is largely culture-bound. To what extent would AI need to be acculturated before it could acquire any semblance of morality? Bloom reminds us that “… most of the things that we watch and are completely shocked and horrified by are done by people who don’t see themselves as villains.” To what extent could acculturation create cruel AI?

4- Roberts asks: since humans don’t really get high marks for morality, why not use the superintelligence of AI to solve moral problems – a kind of data-driven morality? (A useful follow-up question he asks is: Why don’t we make cars that can’t go over the speed limit?) Bloom notes the clear tension between morality and autonomy. How might AI serve to ease this tension? How might it make such tension worse? Continuing with the theme of morality versus autonomy, where does the authoritarian impulse come from? Why the [utopian] human urge to impose moral rules on others? Roberts says, “I’m not convinced that the nanny state is motivated solely by the fact that I want you not to smoke because I know what’s best for you. I think part of it is, I want you to not smoke because I want you to do something I do.” Is this a uniquely human characteristic? Could it be a trait transferable to AI?

5- Roberts says, “The country I used to live in and love, the United States, seems to be pulling itself apart, as does much of the West. That doesn’t seem right. I see many dysfunctional aspects of life in the modern world. Am I too pessimistic?” How would you respond to Russ?

Bonus question: In response to Roberts’ question above, Bloom answers: “I have no problem admitting that greater economic freedom has helped transform the living standards of billions of people. That’s a good thing. I have no problem with the idea that there has been cultural evolution, and that’s a good thing – that a lot of it has been productive and means people are living more pleasant lives. I think the question is whether the so-called Enlightenment project itself is the source of all this.”

To what extent do you agree with Bloom? This question also came up recently in this episode of the Great Antidote with David Boaz, who maintains not only that the Enlightenment project is responsible for such positive change, but that it is ongoing. Again: to what extent do you agree with this?