The world's not yet perfect. Working on it.

The cloud doesn’t like algorithms that require huge memory

Our new article is out.

Human life expectancy: A mathematical constant

We are already in a transhuman era. The pre-transhumanist mindset is already hard to understand.

Here is a book of logarithm tables and other tables. This edition was published in 1895 or later; the first edition appeared in 1868.

It has a mortality table.

The mortality table presents data from 1746! That’s 122 years earlier, or actually 149 years, because the updated edition did not bother to correct it.

The mortality table is right between the multiples of square roots and a trig table.

In the entire book, nothing else is biological or social. It’s all math, with a bit of physics, astronomy, and chemistry at the end.

In this book, human lifespans are a basic law of the universe.

When I think about mortality statistics, my main thoughts are “I love it that lifespan has doubled” and “Let’s do it again, fast. Then again, and again.” My concept of life expectancy is of a malleable fact of nature, like whether a patch of ground is covered by trees or grass.

We have already transcended the human condition of 1868. Next transcension, coming soon.

The Singularity Wars

A re-post from LessWrong.com

(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)

The good news is that there were no Singularity Wars.

The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You’d expect to see something like the People’s Front of Judea and the Judean People’s Front, burning each other’s grain supplies as the Romans moved in.

The Singularity Institute for Artificial Intelligence was founded first, in 2000, by Eliezer Yudkowsky.

Singularity University was founded in 2008. Ray Kurzweil, the driving force behind SU, was also active in SIAI, serving on its board in varying capacities in the years up to 2010.

SIAI’s multi-part name was clunky, and their domain, singinst.org, unmemorable. I kept accidentally visiting siai.org for months, but it belonged to the Self Insurance Association of Illinois. (The cool new domain name singularity.org, recently acquired after a rather uninspired site appeared there for several years, arrived shortly before it was no longer relevant.) All the better to confuse you with, SIAI has been going for the last few years by the shortened name Singularity Institute, abbreviated SI.

The annual Singularity Summit was launched by SI, together with Kurzweil, in 2006. SS was SI’s premier PR mechanism, mustering geek heroes to give their tacit endorsement for SI’s seriousness, if not its views, by agreeing to appear on-stage.

The Singularity Summit was always off-topic for SI: more SU-like than SI-like. Speakers spoke about whatever technologically-advanced ideas interested them. Occasional SI representatives spoke about the Intelligence Explosion, but they too would often stray into other areas like rationality and the scientific process. Yet SS remained firmly in SI’s hands.

It became clear over the years that SU and SI have almost nothing to do with each other except for the word “Singularity.” The word has three major meanings, and of these, Yudkowsky favored the Intelligence Explosion while Kurzweil pushed Accelerating Change.

But actually, SU’s activities have little to do with the Singularity, even under Kurzweil’s definition. Kurzweil writes of a future, around the 2040s, in which the human condition is altered beyond recognition. But SU mostly deals with whizzy next-gen technology. They are doing something important, encouraging technological advancement with a focus on helping humanity, but they spend little time working on optimizing the end of our human existence as we know it. Yudkowsky calls what they do “technoyay.” And maybe that’s what the Singularity means, nowadays. Time to stop using the word.

(I’ve also heard SU graduates saying “I was at Singularity last week,” on the pattern of “I was at Harvard last week,” eliding “University.” I think that that counts as the end of Singularity as we know it.)

You might expect SU and SI to get in a stupid squabble about the name. People love fighting over words. But to everyone’s credit, I didn’t hear squabbling, just confusion from those who were not in the know. Or you might expect SI to give up, change its name, and close down the Singularity Summit. But lo and behold, SU and SI settled the matter sensibly, amicably, in fact … rationally. SU bought the Summit and the entire “Singularity” brand from SI — for money! Yes! Coase rules!

SI chose the new name Machine Intelligence Research Institute. I like it.

The term “Artificial Intelligence” got burned out in the AI Winter of the early 1990s. The term has been firmly taboo since then, even in the software industry, even at the leading edge of the software industry. I did technical evangelism for Unicorn, a leading industrial ontology software startup, and the phrase “Artificial Intelligence” was most definitely out of bounds. The term was not used even inside the company. This was despite a founder with a CoSci PhD and a co-founder with a master’s in AI.

The rarely-used term “Machine Intelligence” throws off that baggage, and so, SI managed to ditch two taboo words at once.

The MIRI name is perhaps too broad. It could serve for any AI research group. The Machine Intelligence Research Institute focuses on decreasing the chances of a negative Intelligence Explosion and increasing the chances of a positive one, not on rushing to develop machine intelligence ASAP. But the name is accurate.

In 2005, the Future of Humanity Institute at Oxford University was founded, followed by the Centre for the Study of Existential Risk at Cambridge University in early 2013. FHI is doing good work, rivaling MIRI’s and in some ways surpassing it. CSER’s announced research area, and the reputations of its founders, suggest that we can expect good things. Competition for the sake of humanity! The more the merrier!

In late 2012, SI spun off the Center for Applied Rationality. Since 2008, much of SI’s energies, and particularly those of Yudkowsky, had gone to LessWrong.com and the field of rationality. As a tactic to bring in smart, committed new researchers and organizers, this was highly successful, and who can argue with the importance of being more rational? But as a strategy for saving humanity from existential AI risk, this second focus was a distraction. SI got the point, and split off CFAR.

Way to go, MIRI! So many of the criticisms I had about SI’s strategic direction and its administration in the years since I first encountered it in 2005 have been resolved recently.

Next step: A much much better human future.

The TL;DR, conveniently at the bottom of the article to encourage you to actually read it, is:

  • MIRI (formerly SIAI, SI): Working to avoid existential risk from future machine intelligence, while increasing the chances of positive outcome
  • CFAR: Training in applied rationality
  • CSER: Research towards avoiding existential risk, with future machine intelligence as a strong focus
  • FHI: Researching various transhumanist topics, but with a strong research program in existential risk  and future machine intelligence in particular
  • SU: Teaching and encouraging the development of next-generation technologies
  • SS: An annual forum for top geek heroes to speak on whatever interests them. Favored topics include societal trends, next-gen science and technology, and transhumanism.

GitHub project: Spark/Cassandra with Java 8 Lambdas

I built a small project to accompany my Datanami article. It illustrates collaborative filtering using MLlib on Apache Spark. It accesses the data in Cassandra using the DataStax connector.

I wrote the key class twice, in both Java 7 and Java 8, to illustrate how much easier lambdas make things. Java 8 is not quite as good as Scala for Big Data (and Spark itself is written in Scala), but if your team is already deep into Java and you’re not ready to change, you can get half the benefits of functional programming (and strongly typed functional programming, at that) by upgrading to Java 8.
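The flavor of the Java 7 vs. Java 8 difference can be sketched without Spark at all, using plain `java.util.function` and streams (Spark’s Java API uses analogous function interfaces). The class and method names here are illustrative, not from the actual project:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LambdaComparison {

    // Java 7 style: an anonymous inner class just to express "square this number"
    static List<Integer> squaresJava7(List<Integer> xs) {
        Function<Integer, Integer> square = new Function<Integer, Integer>() {
            @Override
            public Integer apply(Integer x) {
                return x * x;
            }
        };
        List<Integer> out = new ArrayList<>();
        for (Integer x : xs) {
            out.add(square.apply(x));
        }
        return out;
    }

    // Java 8 style: the same transformation as a one-line lambda in a stream pipeline,
    // which is the shape a Spark RDD.map() call takes in Java 8
    static List<Integer> squaresJava8(List<Integer> xs) {
        return xs.stream().map(x -> x * x).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3);
        System.out.println(squaresJava7(xs)); // [1, 4, 9]
        System.out.println(squaresJava8(xs)); // [1, 4, 9]
    }
}
```

The boilerplate saved per operation is modest here, but a real Spark job chains many such transformations, and the anonymous-class version quickly drowns the logic.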



Article up at Datanami: Apache Spark and Cassandra with Java 8

I’ve been digging deep into Apache Spark recently, and in particular seeing how it plays with Cassandra and with the new functional programming features of Java 8.

My latest article on the topic just appeared on Datanami, the leading technical/business site on Big Data.





Apache Spark on Cassandra: Matching jobs and employers

Once upon a time, programmers stored data in flat files. That was tough: each file was tied to the C/Fortran/COBOL code that wrote and read it, sequentially byte by byte, in a binary format unusable without that exact code. Then relational databases came along. Schemas defined the structure of the data. A pure-functional language, SQL, allowed convenient, flexible access. Then came transactions, replication, clustering, security, administrative tools, and the whole towering stack which DBAs use to this day.

Big Data, until a decade ago, was much like pre-RDB data files: everything had to be coded ad hoc. Then Hadoop came along, Moore’s Law gave it a kick, and Big Data took off. It took an out-of-the-box framework, Hadoop, to let non-specialist programmers focus on their core business without having to be Big Data experts.


Still, Hadoop was a bare-bones framework for coding algorithms structured as MapReduce tasks, and not much else. It was missing a lot of those useful layers that evolved in the RDB world. Over the last decade, some of these layers sprouted up on top of Hadoop: Pig and Hive for easy querying, instead of writing procedural Java code line by line; HBase for a structured dataset on top of the unstructured HDFS; Oozie for workflow instead of chaining together MapReduce processes by hand; and much, much more. The crowning jewel, the rider on top of the Hadoop elephant, was Mahout for machine learning. Machine learning and data mining were always the most talked-about reason to use Hadoop. But in practice, most uses of Hadoop involved trivial transformations of data files, extracting the simplest of statistics. It took Mahout to make it useful.

Yet as the stack grew, it got more and more top-heavy. Either you had to install the pieces yourself, or else work from a virtual machine.

Apache Spark, Cassandra, and Java 8 make it a lot easier. I’m going to explain how I put those technologies together to suggest matches between software developers and tech employers. What would have taken a team of distributed-systems developers and machine-learning experts weeks a decade ago can now be cranked out in a few days by a single Java developer. Power!
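To give a feel for the matching idea without the full Spark/MLlib/Cassandra stack, here is a minimal, self-contained sketch in plain Java: developers and employers are represented as skill vectors, and employers are ranked by cosine similarity to a developer. All names and data here are hypothetical; the real project uses MLlib’s collaborative filtering running on Spark, not this toy scorer:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SkillMatcher {

    // Cosine similarity between two skill vectors of equal length
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Rank employers by descending similarity to a developer's skill vector
    static List<String> rank(double[] dev, Map<String, double[]> employers) {
        List<String> names = new ArrayList<>(employers.keySet());
        names.sort((x, y) -> Double.compare(
                cosine(dev, employers.get(y)),   // y before x: descending order
                cosine(dev, employers.get(x))));
        return names;
    }

    public static void main(String[] args) {
        // Vector dimensions: [Java, Spark, Cassandra] — hypothetical data
        double[] dev = {1.0, 1.0, 0.0};
        Map<String, double[]> employers = new HashMap<>();
        employers.put("BigDataCo", new double[]{1.0, 1.0, 1.0});
        employers.put("WebShop",   new double[]{1.0, 0.0, 0.0});
        System.out.println(rank(dev, employers)); // [BigDataCo, WebShop]
    }
}
```

The point of Spark is that the same similarity-and-rank computation, expressed as RDD transformations, scales out across a cluster with the data served from Cassandra, instead of looping over an in-memory map.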

Sign up for this blog’s RSS feed for more posts. If you want to follow the code, I’ve extracted a minimal viable example: Watch it here at GitHub.

© 2017 JTF
