Once artificial general intelligence is developed, it will put humanity at tremendous risk. Yet if done right, it could make people’s lives much better. If you’d like to learn more, read Superintelligence by Professor Nick Bostrom of the Future of Humanity Institute at Oxford University.

In 2010, I noticed that the Machine Intelligence Research Institute (then known as SIAI) desperately needed to engage with academia and the wider community. So I published and spoke on MIRI-related topics. They saw what I was doing, and in 2011 asked me to join them with the title Research Associate. My work contributed to a tremendous strengthening of the organization: in one year we published more peer-reviewed papers than in all previous years combined.

I also saw that Israel, though a strong force in computer science, artificial intelligence, and entrepreneurship, was weakly represented in the AI-risk community. So I took on the role of reaching out to professors and technology leaders to spread AI-risk ideas.

I served in that role until early 2014, when I was confident in MIRI’s ability to move ahead without my help.

But I still do what I can, where I think I can make a difference. For example, in May 2016 I gave a lecture, “What’s Worrying Elon Musk,” at a meetup that attracts some of Israel’s stronger engineers.

Peer-reviewed articles and lectures

(Google Scholar shows citations.)

  • Joshua Fox and Carl Shulman (2010), “Superintelligence does not imply benevolence,” Proceedings of the VIII European Conference on Computing and Philosophy, Oct. 2010, ed. Klaus Mainzer (Munich: Verlag Dr. Hut), pp. 456-461. Long abstract and lecture available online.
  • Joshua Fox (2011), “Morality and Super-Optimizers,” Future of Humanity, Oct. 2011, Van Leer Institute, Jerusalem. Abstract and lecture available online.
  • Roman Yampolskiy and Joshua Fox (2012/13), “Artificial general intelligence and the human mental model,” The Singularity Hypotheses, ed. Amnon H. Eden, Johnny Søraker, James H. Moor, and Eric Steinhart (London: Springer, The Frontiers Collection). Article available online.
  • Roman Yampolskiy and Joshua Fox (2013), “Safety engineering for Artificial General Intelligence,” Topoi 32/2, special issue on the ethics of building intelligent machines. Article available online.
  • Joshua Fox (2012), “Unequal under the Law,” from the 8th Annual Colloquium on the Law of Futuristic Persons, Second Life. Lecture available online.

Popular and other articles and lectures

  • H+Magazine, 2011
  • An article on acausal trade at LessWrong Wiki (also here), apparently the only existing intro to the topic.
  • Other LessWrong Wiki articles: AIXI, Paperclip Maximizer, Subgoal Stomp, Terminal Value, Anvil Problem, Computronium, and Benevolence.
  • Blog posts
  • LessWrong posts
  • A bet with Professor Robin Hanson on how artificial intelligence will take over the world, if indeed it does: as a society of human emulations or as de-novo engineered artificial general intelligence. This emerged from my review of a draft of his book, in which I challenged him to put his money where his mouth is. As the father of prediction markets, Professor Hanson took up the offer.
  • “Superintelligence, Unhuman Intelligence,” Galileo (Israel’s leading popular-science magazine), May 2012. This was the first-ever article on the topic in any popular-science magazine. I also conducted an online discussion in a live Q&A session. Published in Hebrew as בינה על-אנושית, בינה אל-אנושית, גלילאו, May 2012.
  • “Unequal under the law: Artificial general intelligence and the legal system.” Singularity Unconference, Tel Aviv, Oct. 2012.
  • “The Societal Implications of Artificial General Intelligence,” at the graduate seminar on Technology and Society, Tel Aviv University, May 2012.
  • “The Ultimate Technology” at the Singularity Unconference, Tel Aviv, Oct. 2011.
  • “Human Intelligence, Artificial Intelligence,” at Professor Ilya Levin’s graduate seminar, Tel Aviv University, March 2011.
  • “Human Intelligence, Artificial Intelligence,” Transhumanist Club, Bar Ilan University, Oct. 2010.