Sunday, 8 September 2013

Principles of Angels

Another week, another dollar, and on the internet people have been upsetting others with their views.  Oh wait, it's the internet; people are always getting upset over something someone else has said.  So no change there then.

I've just started skimming through The Society of Mind by Marvin Minsky, which is nearly thirty years old according to the copyright date inside.  Had I done my research before buying it I probably wouldn't have bothered with it.  Not because it's a bad book, or because its arguments aren't interesting, but because it isn't really telling me anything I didn't already know from primary research sources.  If, unlike me, you have not been following the research into artificial intelligence, then I would highly recommend this book as a good introduction to the debate on why we can make artificial non-human minds.

What is interesting is that this book is in opposition to Roger Penrose's book The Emperor's New Mind, which is equally old with an original 1989 copyright date.  I've put that book down to rest, because it is hard going, with pages of formulas.  Penrose takes the opposite stance to Minsky and argues against artificial intelligence.

I think both authors are right, but for different reasons.  Will we at some point be able to make artificial intelligence, as Minsky suggests?  Undoubtedly the answer is yes, but with certain terms and conditions that require our technological civilization to last long enough to achieve it.  I would liken AI to nuclear fusion and say it's about fifty years away, except that nuclear fusion is likely to happen within the next fifty years, whereas AI is, in my opinion, less likely to happen within that time frame.  So Penrose is right in that we don't know what we don't know about making an artificial mind.

It's not that I think it can't be done; it's that I don't think we have the theoretical base on which to construct minds outside of the good old-fashioned biological imperative to reproduce ourselves.  I'm firmly in the camp that humans are biological machines, but we don't even have a pathology of mental illness that would at least be evidence that we understood how a mind works (my core professional area).

Back to the internet and the greying of the SF Worldcon; sorry, can't resist commenting.  Lots of furore amongst those that chat about the BOF* at this year's Worldcon, and where have all the youngsters gone?  And OMG, it's the end of fandom as we know it unless we do something.  Then comparisons are made with DragonCon, and why can't the Worldcon be more like that?

All I can say is that at the first SF Worldcon in 1939, the idea of a media convention where actors were feted was probably not at the forefront of people's minds.  The Worldcon has historically been a convention for people interested in reading and writing SF.  As for encouraging the youngsters who go to ComicCon and DragonCon to come to a Worldcon, I would suggest that unless they are interested in reading and writing, one is probably on a hiding to nothing.  YMMV; feel free to leave comments.

Back now to what I'm reading.  I'm currently well stuck into Jaine Fenn's Principles of Angels, which I'm looking forward to finishing off this afternoon.  I'm really enjoying it, and Jaine's voice is so clear that I feel I'm having a conversation with her about the story and the ideas she is developing as I read each page.  Highly recommended.  On the watching front, we are still working through Stargate SG-1, now on season four.

On my own writing: my work in progress has seen me add another 7,901 words to the first draft of my third novel, which now stands at 30,702 words in total.  An interesting development in how I write has occurred (the dynamic of plot versus characterization), so obviously I've blown through one of the learning points on the curve to becoming a proficient novelist.

*Boring Old Fogey, or another equally suitable and/or offensive term beginning with F of your choice.


  1. I seem to remember that nuclear fusion has been forty years away since I first heard of it in the 60s. I suspect that it will still be forty years away right up until the time someone realises that people in general would rather have their energy from a distributed rather than a centralised system. Whether that will be photovoltaics, energy-harvesting windows, LENR or, if they live in Sussex, a gas well in the back garden, or something else completely, only time will tell.

    As for AI, leaving aside the question of whether we would recognise a higher intelligence if we saw it, I think the problem* is a different one. Over the last 50 years a great many jobs have been lost to automation in industry, and with automation set to make inroads into the service industries in the next couple of decades, the whole social landscape is set to change. Whether the upcoming upheaval will produce a utopia, a dystopia or something that is boringly and messily familiar, I would not like to predict. But I would suggest that AI will always be problematical until we come to terms with living in a more or less automated society.

    * this guy can describe the problem much better than I can

  2. Thank you for the comment. In my opinion what we are seeing with automation is artificial intelligence agents, which can arguably be classified as weak AI as they functionally approximate the behaviours that biological minds exhibit without any readily apparent consciousness being present.

    This leads into discussions around what consciousness is, how it differs from intelligence, the difference between sapience and consciousness, etc.