I'd be very interested to learn of your sources for that information, Brother, because as far as I know, Sophia hasn't been around all that long yet — about a year at most, maybe two — and "she" is still far from reaching any kind of singularity, if such a thing even exists within the context of artificial intelligence.
As far as I know, Bitcoin was created several years ago by a programmer working under the Japanese pseudonym Satoshi Nakamoto — either way, long before Sophia ever existed.
(Edit: And now — the news just came in on Slashdot — there's a former SpaceX intern who claims that Elon Musk is the creator of Bitcoin, even though he has no evidence to support that claim.)
Okay, now that is a good point, and I absolutely agree with you on that last sentence. In fact, I would even swap out the word "economics" for "the capitalist economy".
And as for the question "Why wouldn't an A.I. create a currency?": it does indeed stand to reason that the more advanced A.I. systems — which, incidentally, are far too large to fit inside Sophia's body, because now we're talking about supercomputers with thousands of individual and independent nodes, many of which exist only to provide a strong degree of redundancy in the event of a failure — would eventually come up with a currency of some sort, if they had been programmed to take economics into account.
But whether such a currency would be digital and based upon encryption — as in the case of Bitcoin and its derivatives — is another matter entirely. That is an idea which must come from a human mind. You must never forget that an A.I. is a machine capable of learning and mimicking, but it doesn't actually think. An A.I. can do very strange and unexpected things, but it can only do them with what it already has in its memory banks. It cannot invent anything, because invention requires ideas, and ideas can only come from genuine consciousness.
In spite of all the woo-woo going round, A.I. systems are not actually conscious. They can only appear conscious to the observer — and this is what the Turing test is all about. The Turing test checks whether a human being can be persuaded to believe that they are communicating with another human being, when they are in fact communicating with a machine. And Sophia passes that test to a large degree, except when people start asking complicated questions and using ambiguous language and/or incorrect grammar. A human mind can catch these discrepancies and translate them into something that makes sense, but an A.I. cannot — or at least, not to the same extent.
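To illustrate what I mean by "mimicking without thinking": even a trivial pattern-matching script — a hypothetical, stripped-down ELIZA-style sketch of my own, nothing like how Sophia actually works — can produce superficially human-sounding replies, and it falls apart exactly where Sophia does, namely when the input doesn't match anything it already has stored:

```python
import re

# A few hard-coded patterns: the entire "memory banks" of this toy bot.
# It recognizes surface forms only; there is no understanding behind them.
PATTERNS = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bhello\b", re.I),     "Hello! How are you today?"),
]

def reply(text: str) -> str:
    for pattern, template in PATTERNS:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    # Anything outside its stored patterns exposes the machine:
    return "Tell me more."

print(reply("Hello there"))           # appears conversational
print(reply("I feel misunderstood"))  # appears to show empathy
print(reply("Wh@t if grammer iz broke?"))  # generic fallback: no comprehension
```

The bot never "thinks" about the sentence; it only searches for strings it was given in advance — which is, in a vastly more sophisticated form, all that today's A.I. does as well.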
I realize that there are many misunderstandings regarding A.I. going round within this so-called "alternative community" — and these misunderstandings in turn lead to all the usual kinds of woo-woo theories — but I happen to come from (among other things) a background in information technology, and I know how an A.I. works, and what it can and cannot do.
It's a machine that uses very advanced and adaptive heuristics, not an artificial consciousness. However, mainstream science seems to think — and would have us believe — that this is how human consciousness works as well, and that what we call "the mind" or "the soul" is nothing other than an illusory byproduct of neural activity. And this has two very important consequences:
- The first is that those who develop A.I. believe that an actual artificial consciousness is within reach, and not even all that far off anymore.
- The second is that everything that makes us human is reduced, in the public perception, to nothing more than biological but machine-like activity, and that we are therefore worth only as much as a machine made of electronics. This has significant implications for how civilization will come to value human dignity and human (or even animal) life in the foreseeable future. It is indeed Pandora's box, but what's inside is an ethical monstrosity, not an electronic one, and it is important to understand the distinction.
As for A.I. itself, the danger isn't that it'll reach some kind of singularity and become sentient/sapient — or in other words, develop real consciousness — but rather the fact that it isn't actually conscious. As a machine, it is much faster than us — not smarter, but a lot faster — and as such, it could end up confronting its programmers with unforeseen consequences. After all, the machine doesn't think. It only calculates and takes purely logical action. And all software contains bugs — the more complex the code, the more bugs there will be. Robots are already violating Asimov's Three Laws of Robotics as it is, precisely because they do not actually think.
So when the time comes that A.I. is put in charge of things which could decide over life and death — or any responsibility close enough to that — then man's own shortsightedness may become his own undoing. And that is the real singularity, my friend. ;)