Camden Philosophical Society: Do Robots with High-Level A.I. Have “Moral Standing?” – Open to all

Tuesday, September 17 @ 3:30 pm – 5:30 pm

Do robots with high-level Artificial Intelligence have “moral standing,” and what does it mean in terms of rights and responsibilities if they do? These are questions the Camden Philosophical Society will explore at its Tuesday, Sep. 17 session – the second in its series of deep dives into ethical and philosophical issues raised by the development of generative Artificial Intelligence (AI).

The Society’s Sep. 17 meeting will, as usual, be a hybrid gathering from 3:30-5:30 pm EDT. All are welcome to participate, in person at the Picker Room of the Camden Public Library or by Zoom. That goes for visitors as well as year-rounders in Maine, and for friends of the society wherever you may be.

If you wish to participate via Zoom, please email sarahmiller@usa.net. You will receive a Zoom invitation on the morning of the meeting. Click on the “Join Zoom Meeting” link in that invitation at the time of the event.

Readings

The gathering will start with a viewing of a portion of an episode of the television series Star Trek: The Next Generation, in which Captain Jean-Luc Picard argues before a tribunal that the robot known as Data has moral standing and should not be dismantled, as a scientist is proposing to do. A more complete description and discussion of the issues raised in this video are available in The Conversation: https://theconversation.com/if-a-robot-is-conscious-is-it-ok-to-turn-it-off-the-moral-implications-of-building-true-ais-130453. PDFs of all articles are also provided at the bottom of the page.

Picard argues, somewhat amusingly, in a style reminiscent of Socrates in many of the Platonic Dialogues. The short video can be accessed here: https://www.youtube.com/watch?v=vjuQRCG_sUw, or attendees can wait and watch it with the group on the day of the meeting.

We will also consider where robots stand today relative to the point at which the fictional court in Star Trek ruled that Data was far enough along toward consciousness that he should be allowed to pursue the question of his potential himself. Sonoma State University philosophy professor and ethics, law, and society scholar John P. Sullins argued as far back as 2006 that robots were advanced enough to have moral standing: https://scholarworks.calstate.edu/downloads/fj2362670 (a PDF is also provided as an attachment to this document).

These debates concern primarily the question of whether sentient machines should be granted the rights that go with moral standing. People are usually considered to have responsibilities as well as rights. What happens when AI programs do things that might be considered ethically dubious, if not outright illegal or in contravention of international accords such as the Geneva Conventions?

These issues are enumerated here in the context of the US legal system: https://www.greenspunlaw.com/blog/ai-and-criminal-liability.cfm

This question has also been widely discussed over the last year in the context of AI programs Israel is alleged to have used in its ongoing conflict with Palestinians. This reading identifies and discusses some of the moral issues raised by such military technology, whoever owns and operates it: https://theconversation.com/gaza-war-israel-using-ai-to-identify-human-targets-raising-fears-that-innocents-are-being-caught-in-the-net-227422