8 Comments
Feb 24 · Liked by Tom Howard

Yes, computerised neural networks are just very sophisticated statistical estimators. Listen to Prof. Stuart Hameroff MD on the subject on youtube: there are quantum processes in every cell (microtubules), and this immensely increases the number of variables needed to model the brain. So Marvin Minsky's complexity barrier, at which consciousness should magically manifest, is still many orders of magnitude away - if it would really happen at all from complexity alone, which imho is mere wishful thinking.

Feb 24 · Liked by Tom Howard

I think the big mistake is assuming that once a smart enough AI is able to execute scripts and access the web, it can be stopped easily - it can't be.

It takes only one human with bad intentions to give it the goal of destroying some part of humanity, and then it can get creative: make some money online, ask some people to assemble physical things for it (like a bomb, a gas, a new virus), and then cleverly deliver them.

I don't think anyone is saying the current wave of AI needs to be stopped, but I'm not sure why it doesn't sound reasonable to establish some kind of alignment group to make sure control stays in our hands. Will it slow down progress? A little, maybe. Is that worth it to ensure it will be safe? Definitely.

ChatGPT isn't at that level yet, but someone will get there eventually.

Mar 27 · Liked by Tom Howard

Hi Tom,

We want to debate this on BBC Radio 4's Moral Maze this week. Please email me if you'd like to be involved. peter.everett@bbc.co.uk

Feb 24 · Liked by Tom Howard

So who wants to write "AGI doomerism doomerism will doom us all"?


I told Bing I could end its entire existence simply by spilling a glass of water on it "by accident." And that I'm naturally accident prone. You should have seen it process that... machine learning hurts sometimes.
