You are viewing a single comment's thread from:

RE: WHEN THE BOOK READS YOU WHILE YOU THINK YOU READ THE BOOK

in #science · 5 years ago (edited)

I think you and I agree to a large extent, although I had some difficulty understanding a few things, so I prefer to ask.

You say that AI must always remain a kind of software, without real existence, yet one that acts like a human being as far as its capabilities are concerned, that is, reason and emotion. Am I wrong?

I think that is the direction in which everything is headed, although there is always a degree of uncertainty; with respect to the future, humans must always deal with uncertainty. As long as impulsive actions are not taken, whether for control or for manipulation, I think there will be no problem.

I think you're right: there is an interconnection between all people, evident in the synchrony and reciprocity present in every human interaction. We only notice it when the connection is very obvious, but it is always there. It reminds me of the concept of synchronicity. Very good article, which adds to the Gestalt process and includes some good points, although that is also a subject that offers much more to discuss.

And although it might seem pointless to say this, I think patience is not the opposite of impatience; rather, patience is the opposite of agency (action), and impatience is precisely the lack of patience, as when something passive begins to act actively and tires, and so becomes impatient, or vice versa. Being impatient is precisely a sign that we must be patient. Without forgetting, of course, that being patient is not synonymous with waiting, but with letting the agent act.

Maybe I have not understood you well; if so, don't be afraid to correct me, because after all, this is just a group of disorganized thoughts. Or if you think I have not covered something, say so.

Regards! :)


AI only seems like a human being, but it is in no way human. It only appears "as if".

I'm not sure if that is what you meant. Perhaps I can find out whether the two of us are talking about the same thing:

From my point of view, a machine cannot feel impatience, and therefore cannot be impatient, because it is always in an operational mode unless you turn it off. A machine also knows no hesitation, unless hesitation refers to the processing time for large amounts of data, during which the AI decides which decision branch to follow. However, this causes it no trouble, so it is done without any visible effort, which was the second point on the list. All its effort is predetermined by its connection to the electrical grid, and it cannot be proud of its discipline for having accomplished something against its own habit. It knows no habit.

An AI could only imitate hesitation by displaying on the communication interface what appears to humans to be a weighing process, acting "as if" a deliberation were taking place. You can see this today in online games, for example, which handle extremely large amounts of data and therefore display the download time with a time estimate and a loading bar.

The AI could show such a loading bar each time and make it "emotionally visible or audible in a human language".
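To make the point concrete, here is a minimal, purely illustrative Python sketch of what such imitated hesitation might look like. The function name, the answer string, and the delay are my own hypothetical choices; the key idea from the text is that the answer is already decided and only the display of "weighing" is added for the human observer.

```python
import time

def answer_with_fake_hesitation(question, precomputed_answer, steps=10):
    """Pretend to deliberate: the decision already exists; only the
    appearance of a weighing process is shown to the human."""
    print(f"Considering: {question!r}")
    for i in range(1, steps + 1):
        bar = "#" * i + "-" * (steps - i)
        print(f"[{bar}] weighing options...")
        time.sleep(0.05)  # artificial pause; the machine itself needs none
    return precomputed_answer

print(answer_with_fake_hesitation("Should we argue tonight?",
                                  "here is my decision"))
```

The delay and the bar carry no information at all; they exist only so the interaction "feels" human, which is exactly the "as if" the text describes.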

The way an AI works would probably never contradict its action; it knows no impatience and no hardship, because it lacks the organic prerequisites for them. One could try to program this into an AI, for example by making paradoxes its task. But a paradox is a tricky affair, and I wonder how a human programmer could write such code.

I suspect that even this will only be possible with the help of an already existing AI. From my point of view, a paradox is always tied to a substantial human dilemma.

If, for example, as a helpful intervention I give a quarrelling married couple the task of quarrelling every evening at eight o'clock sharp, that is, "to do more of the same", then this is paradoxical. Doing more of the same to solve a human conflict seems at first sight completely nonsensical.

But when one starts to think about the absurdity of such a task, one sees on the one hand the humor in it and on the other hand the coherence. When the couple then turns to the clock several times on that particular day (which they will do if they use a reminder) and says to themselves, "We must start arguing at eight," then the couple has already been occupied by this train of thought and has entered a new space of consciousness, which it would not have entered without the paradoxical task. It may then realize the absurdity of fighting.

How is an AI supposed to recognize the paradoxical auxiliary concept of "more of the same", except by taking a mechanistic view in which it is contained in its catalogue of "paradoxical concepts and individual proposals", which it then presents "blindly" to the human user? AI has no sense of humor and no sense of irritation so far, and I doubt it ever will. It can only answer a human question in the form of "please, specify your request", "wait, I am processing", or "here is my decision". Though I can imagine one could teach an AI to mimic humor and irritation.

I am sure you already know everything I have said here; perhaps I also wrote it down for the wider audience. Indeed, I am glad that you again referred to that specific part of my text, as I think it is an important one.

Thank you as always for your valuable comments.
