Artificial Intelligence Turns Deep: Who's in control?
Call for papers
Deadline extended! Submit your abstract by February 28th.
Learn more about our speakers
We will be adding more as they are confirmed.
Registration Opens in January
$225 for members of IRAS and Partner or Sponsor Organizations
$325 for non-members
Star Island offers Discounts!
Discounts are available for first-time attendees, former Pelicans, folks who haven't visited in a while, and everyone else. Check out the discounts page below.
IRAS Scholarships now available
IRAS offers a range of fellowships and scholarships for students, seniors (over 55), and the top abstract submissions.
“I’m sorry, Dave. I’m afraid I can’t do that…” said HAL, in the most famous line from 2001: A Space Odyssey. Dave then had to disable HAL to regain control of the spaceship and avoid annihilation. From the earliest myths of artificially created beings until today, the question of “who’s in control” has troubled us. Now it looks like we will really have to deal with it in our lifetimes. On the 50th anniversary of 2001, IRAS returns to considering the prospects, opportunities, and dangers of Artificial Intelligence (AI), first discussed at an IRAS summer conference in 1968 by Marvin Minsky, a founder of AI research and a consultant to Stanley Kubrick and Arthur C. Clarke during the making of 2001.
This conference will address how AI may shape our future as well as our ability to foresee and control how AI will reshape us.
Deep learning neural networks and advances in big data manipulation have led to rapid progress in machine learning and associated capabilities. Investment in AI is projected to grow more than 30-fold between 2016 and 2020, into an industry worth at least $50 billion. New AI products will enhance sales, data analysis, and diagnostic and predictive services for medicine, government, science, and industry. We are on the cusp of creating machines that can operate in environments requiring significant autonomy, such as self-driving vehicles and, ominously, weapon systems.

The future of AI is likely to have powerful consequences for jobs, income distribution, criminal and social justice, and public policy generally. This growing international commercial and governmental juggernaut, itself subject to concentrated and frequently unaccountable control, presents just one of AI’s many challenges. How will humans find identity and meaning in life as the breadth of skills unique to living, sentient beings shrinks? The consequences of the interplay between AI and the human mind, and our very self-concepts, are likely to be equally profound. If we succeed in creating science fiction’s “conscious” machine, what would be our duties to it (as well as its duties to us)? The values and orientations fostered by a religion-and-science perspective will be crucial to the responsible development and use of AI technology as it unfolds.
We will review the current state and potential future developments of AI technologies and consider the following questions as seen by AI experts and those in related fields:
* What are the true benefits of AI for the future of society?
* How do we ensure that all of society will truly benefit from AI?
* How can we avoid the various pitfalls that are now being debated concerning the control of AI in the future?
* What are the ethical, social, legal, and religious factors that ought to be considered to assure the benefits of AI for society?
* What is the appropriate role of religious wisdom and traditions in helping to maintain human control of AI within more secular ethical, social, and legal contexts?
* How can religious wisdom and traditions, in particular, inform more secular deliberations about controlling the future of AI?
* What are the roles of religion and science in contributing to the dialogue to optimize the benefits of AI to society?
* How can we create an ongoing process to maintain human control of the future of AI?
Program Co-chairs: Terry Deacon and Sol Katz
Conference Co-chairs: Abby Fuller and Ted Laurenson