On Life 3.0: initial thoughts

I have been following the emerging discussions on the possibility of Artificial General Intelligence ever since Elon Musk and Sam Harris warned that its evolution may not be aligned with what humans would expect from its capabilities, and that it will affect our future and existence.

My interest was piqued when Lex Fridman, a research scientist at MIT, published a series of lectures on AGI as part of his class, along with a series of audio podcasts I could listen to in addition to the lectures. From those, I was intrigued in particular by the book Life 3.0, written by Max Tegmark.

First, if you haven't already read Life 3.0, I would recommend at least reading the opening story about the Omega Team and Prometheus.

I can’t help but imagine that this scenario of the Omega Team and Prometheus as the birth of an AGI will become the reality of our lives in the future. It may not happen exactly as written in the book, but I would expect humans to become ever more dependent on machines to sustain their lives and to solve their problems. The evolution of these machines will happen gradually as we continue to develop programmable machines. It’s only a matter of time before they can self-replicate like a living organism. With self-replication, the time and effort we spend designing, building, and rebuilding will be reduced, and we can continue our pursuit of other problems, such as sending machines to explore harsh environments like the oceans and space.

We might not see these machines become sentient, but they will never disappear from our lives unless a disaster occurs or we deliberately stop using and building them because of regulation, in which case I would imagine humans would again seek to create them, albeit in a more decentralized manner.

In our perception, machines are replaceable: they can be turned off, and they don’t feel emotions as humans or animals do. But our desire to connect with one another via electromagnetic waves (I can see it growing every day) and the continuous evolution of intelligence have become the basis for their existence. The perception that an artificial life has to resemble a biological organism in order to exist and evolve may therefore have to be re-evaluated.

The evolutionary path of software, AI, and AGI is yet to be seen. Would software become more robust, with predictable algorithms? Would type systems evolve so that we can always reason predictably about a program’s behavior? Would software instead become more probabilistic, as machine learning has demonstrated? Or does none of that matter once software grows complex enough?

After all, probability is just a process of computation where you don’t have certainty in the output of a function, and the goal is to minimize the deviation of the output from a prior reference value. Once there is no more deviation, or error, the computation repeats the same process over and over until the environment changes. That is basically what we consider learning.
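To make that framing concrete, here is a minimal sketch in Python of such an error-minimizing loop. The function name, learning rate, and target values are my own illustrative choices, not anything from the book: an estimate is repeatedly nudged toward a reference value until the deviation vanishes, and the same process starts over whenever the reference (the "environment") changes.

```python
import random

def learn(target, steps=1000, lr=0.1, tol=1e-6):
    """Nudge a single estimate toward a reference value by shrinking the error."""
    estimate = random.uniform(-1.0, 1.0)   # arbitrary starting guess
    for _ in range(steps):
        error = estimate - target          # deviation from the reference value
        if abs(error) < tol:               # no more deviation: nothing left to learn
            break
        estimate -= lr * error             # adjust the output to reduce the deviation
    return estimate

# When the "environment" changes, the same process simply repeats
# against a new reference value.
for target in [3.0, -1.5, 0.25]:
    print(target, learn(target))
```

This is, in essence, the skeleton of the error-minimization loop that machine learning systems run at a vastly larger scale.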

If learning can be automated without deliberate effort from the process that initiates it, then it becomes very hard to say what life and intelligence actually are, and why they would have to emerge from the biological processes we are familiar with.
