WMN: t3_3fz5pa_t1_cttc11j

Type: WMN: disagreement

Meaning: situated meaning

Context: Online interaction

Corpus: Winning Arguments (ChangeMyView) Corpus

URL: https://convokit.cornell.edu/documentation/winning.html

License:

Dialogue: t3_3fz5pa

[TITLE]

CMV: Sentient AI will emerge spontaneously significantly before humans can digitize themselves.

[Versepelles]

Human sentience (or consciousness, or sapience, or whatever term you like) is an emergent behavior of neurons and their interactions, which can be replicated in non-organic systems, i.e., computers. Thus it is likely that, given advanced enough technology, we could make a digital human, or even copy an organic human into the digital world. Similarly, other neural networks can be constructed digitally, collectively known as AI. This is fairly well accepted in modern society, although there are debates over 'souls' and things like that.

I think that sentient AI will probably emerge first, long before we get close to digitizing humans, because of the way AI may emerge. Currently, one approach to developing advanced neural networks is to allow them to grow 'naturally', which is to say that the network is given some parameters to optimize, and then iterations darwinistically decide what the optimal solution is. Pretty much like carbon-based evolution, but much faster. However, I'm not an expert in the field, so I'm open to different opinions, or even to alternatives beyond the above two options.

CMV: AI will achieve sentience way before we can make computer humans.
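
The 'grow naturally' approach described above is essentially neuroevolution: mutate a population of network weights and keep whatever scores best. A minimal sketch in Python; the XOR task, the fixed 2-2-1 network, and every parameter here are illustrative choices, not anything from the thread:

```python
import math
import random

# Toy task: learn XOR. A stand-in for "some parameters to optimize".
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # A fixed 2-2-1 network; w is a flat list of 9 weights/biases.
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # Negative squared error: "darwinistic" selection maximizes this.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

population = [[random.gauss(0, 1) for _ in range(9)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [                       # mutated offspring
        [wi + random.gauss(0, 0.1) for wi in random.choice(survivors)]
        for _ in range(40)
    ]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

Note that nothing here can 'grow' beyond the fixed nine-weight structure; several replies below raise exactly that objection.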

[Chronophilia]

AI will not *spontaneously* become sentient.

It's perfectly possible for a computer to be more powerful than the human brain. By some definitions we've already built such computers - it depends a lot on how you measure computing power, but the datacentres behind Google's search engine are definitely superhuman in speed and memory. But that on its own doesn't make them self-aware or even capable of learning.

Just because a program uses artificial neural networks in its design doesn't mean it has the potential for sentience. If you fed enough images to a large enough copy of Deep Dream, would it spontaneously start wondering about the purpose of its life? Of course not. It would simply become extremely good at recognising objects in images. That does not give it any capacity to understand itself and its place in the world, and there is no reason or incentive for it to develop such understanding.

[There are a lot of ideas that seem to be universal to the human experience](http://condor.depaul.edu/mfiddler/hyphen/humunivers.htm), to the point where anthropologists don't bother making a note of them unless they're violated in some way. They're part of our biological makeup. They would likely *not* be part of the makeup of an arbitrary mind, because there's no reason for most of them. For example, it's quite clear-cut what constitutes "one human", because a single person is pretty well self-contained. An AI, though? If the image-processing and language-processing parts of it use different structures, could they be separate individuals? If two AIs are connected by a high-bandwidth link and trust each other with all their thoughts, could they be one individual? And this is just one of the many, *many* axes on which AIs would be alien to us.

There are a vast number of possible minds, and only a tiny, tiny fraction are anything like human. An AI will not resemble humans at all unless it's designed to do so. An AI that is nothing like a human will not be recognisable as sentient.

[Versepelles]

By 'spontaneously', I mean through genetic programming, as some other users have pointed out, which does seem possible, although it could take a very long time. Certainly, a 'sentient' AI would probably behave vastly differently from a human; however, it would probably become apparent that the new intelligence is behaving significantly differently from its predecessors. The difficulties of measuring 'sentience' certainly exist, but do not constitute a strong argument, in my opinion.

[Chronophilia]

Ah, I took your opinion to mean "We'll create an accidental non-human AI before we deliberately create an AI that emulates humans". Is this not what you believe?

[Versepelles]

Roughly, although 'accidental' isn't a good term. I meant that we will not hand-code an AI, but that we will write programs which write programs that try to 'survive', eventually creating an AI - genetic programming. However, it has already been pointed out that genetic programming takes a long time to iterate, so perhaps human emulation will precede it.

[thedeliriousdonut]

What you're describing is genetic programming. Perhaps we will achieve sentience sooner than digitized humans. Doing it through genetic programming has a few issues, however.

Consider consciousness. Have you figured it out? Yes? Good, I'm glad we had this discussion; collect your Nobel Prize and let the Singularity begin. No? That's disappointing. Oh well. Consciousness is very complex. How do you test if something has met the parameters for intelligence? Solving intelligence tests? At one point, that test was Chess. We did it, we made computers that were very good at making Chess moves, but they had no idea what Chess was, what it symbolized, what the significance of Chess was; they only knew some numbers tied to some positions. They likely didn't even know what winning or losing meant.

Next, there was the Turing test. Eugene Goostman beat the Turing test, but was still not considered to have our level of consciousness, if any at all. This is because the Turing test is very, very good at determining if you're SIMULATING consciousness, but not if you're actually conscious. Imagine it like school tests. What are they really measuring: how good you are at math, or how good you are at getting the answers to the test? You'd think they're the same, but does getting a high math score prove that you know what math is? Why it's significant? Why it's beautiful? Why these equations are important in everyday life? No. A high math score just means you know how to solve the problems on the test. Then there's the self-awareness test, the Tokyo test, etc. We have a history of not having the BEST way to determine consciousness. We're not sure what it is. Maybe consciousness is a spectrum or continuum and we've already solved consciousness, but the AI we're making isn't conscious ENOUGH.

But back to genetic programming: eliminating what doesn't work and letting the best survive. The problem with genetic programming is, let's say you do find a way to test consciousness. It's probably going to involve a LOT of factors, right? Ability to have opinions, ability to change itself, ability to feel, ability to use natural language, ability to create, debate, CMV, etc. Consider the pragmatics: how long it takes for each AI to see if it passes the test, or to even realize it's being tested. If you just force them, for all you know, you could just be making an AI that's programmed specifically to do each thing individually without filling in the gaps of consciousness in between. No, you need AIs that do a bunch of those things on their own. How long do you leave one alone to see if it does anything before you eliminate it? Or do you take up an ungodly amount of resources to keep a bunch of complex computers the size of warehouses around, possibly taking up miles? And you're doing this with each iteration. It could take forever.

Then, consider the ethics. Killing an AI because it's not "good enough"? Sound familiar to anyone? Are we allowed to do that because they're subhuman or something? That's another consideration. Is it okay to throw an AI into a test it knows nothing about, only to kill it before it has a chance to have a quality life? In all likelihood, because of the randomness of their iterations, most of the AIs will be insane if they do become sentient, and they'll suffer. They'll suffer real bad.

So, genetic programming, in my non-expert opinion, is not a good approach to conscious AI. Feel free to CMV.
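
The core of this objection can be made concrete: the evolutionary loop is a few lines of code, while every piece of the fitness test is a stub nobody knows how to fill in. A hypothetical skeleton; all function names are invented for illustration, and running `evolve` fails at the first fitness call, which is the point:

```python
# The loop is easy; the fitness test is the unsolved part. Every test
# below is a hypothetical stub -- no such test is known to exist.

def has_opinions(ai):          raise NotImplementedError  # no known test
def can_modify_itself(ai):     raise NotImplementedError  # no known test
def uses_natural_language(ai): raise NotImplementedError  # Turing-style tests are gameable
def can_create(ai):            raise NotImplementedError  # no known test
def can_debate(ai):            raise NotImplementedError  # no known test

def consciousness_fitness(ai):
    # Even a multi-factor score can be gamed: a candidate may pass each
    # test individually "without filling in the gaps" in between.
    tests = [has_opinions, can_modify_itself, uses_natural_language,
             can_create, can_debate]
    return sum(test(ai) for test in tests)

def evolve(population, mutate, generations=1000):
    for _ in range(generations):
        population.sort(key=consciousness_fitness, reverse=True)  # fails here
        population = [mutate(ai) for ai in population[: len(population) // 2]] * 2
    return population[0]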

[Versepelles]

The first part of the argument simply questions what the measuring stick would be. It seems that once a decent level of 'intelligence' is achieved, the immediate consequence would be the AI improving itself rapidly. It would probably be evident when this event has occurred, perhaps not as a single event but as a period of events, and this could be judged against the digitization event, which would be even clearer. [STA-CITE]> And you're doing this with each iteration. It could take forever. [END-CITE]∆. Although brain simulations take a huge amount of computing resources, they are already well enough defined to replicate. Genetic programming, on the other hand, would likely be inefficient to begin with, requiring much more computing power to achieve a similar product at first, although it could definitely have a better end product, given the same network space. An excellent point; I'm not convinced that digitization will occur first, but I guess sentient (enough) AI is not a definite first either. I was purposefully neglecting morals here, as both subjects are quite a dark shade of grey.

[DeltaBot]

Confirmed: 1 delta awarded to /u/thedeliriousdonut. ^[[History](/r/changemyview/wiki/user/thedeliriousdonut)] ^[[Wiki](http://www.reddit.com/r/changemyview/wiki/deltabot)][[Code](https://github.com/alexames/DeltaBot)][/r/DeltaBot]

[grumbel]

[STA-CITE]> Currently, one approach to developing advanced neural networks is to allow them to grow 'naturally', which is to say that the network is given some parameters to optimize, and then iterations darwinistically decide what the optimal solution is. [END-CITE]That's not really how it works. You don't just randomly "grow" a neural network; you train it for a specific purpose. Essentially the network is a function that takes high-dimensional input (say, millions of pixels from a digital camera) and produces lower-dimensional output (e.g. a simple label for the content of the picture: cat, dog, car, house, ...). You train it by feeding it lots of input images. It is essentially a compression algorithm. There is no way that it can just spontaneously form sentience on its own, as it has neither the necessary input (all the human senses, etc.) nor the necessary output (muscle responses, etc.). A neural network won't spontaneously form into sentience any more than a combustion engine will spontaneously form into a car.

That said, researchers will of course feed more input into neural networks, give them more output over time, and change their structure to whatever they think is needed to build something that approaches some form of sentience, but it won't be spontaneous or accidental, it will be a deliberate attempt to build sentience.
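
The 'essentially a compression algorithm' point can be shown directly: a trained network is just a function from many numbers (pixels) down to a few (class scores). A toy softmax classifier in NumPy, with random arrays standing in for real images and labels; all sizes and the three-class setup are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 28 * 28, 3          # e.g. cat / dog / car
X = rng.normal(size=(600, n_pixels))      # toy stand-in for camera images
y = rng.integers(0, n_classes, size=600)  # toy stand-in for human labels

W = np.zeros((n_pixels, n_classes))
for _ in range(100):                      # plain gradient-descent training
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)     # softmax: class probabilities
    onehot = np.eye(n_classes)[y]
    W -= 0.01 * X.T @ (p - onehot) / len(X)   # cross-entropy gradient step

# The trained W maps 784 numbers down to 3: a lossy compression of the
# input into a label, with no channel for anything else.
print("predicted label for one image:", int((X[0] @ W).argmax()))
```

The entire output space is three numbers; there is no channel in this function through which anything like "wondering about the purpose of its life" could even be expressed.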

[Versepelles]

[STA-CITE]> but it won't be spontaneous or accidental, it will be a deliberate attempt to build sentience. [END-CITE]This is exactly what I was trying to express. Some other users pointed out the correct terminology: genetic programming.

[dark_3141]

Are you suggesting that sentient AI will have the same kind of consciousness humans have? Can a machine experience the world in the same exact way humans or animals do?

[Versepelles]

Sentient AI may or may not be similar to humans; most likely not. However, *could* a machine exist that emulates humanity? Yes. We are only mechanical reactions and the emergent behavior which arises from them. There is no physical law saying a machine could not also perform the same functions, given an analogous structure. So, [STA-CITE]> Can a machine experience the world in the same exact way humans or animals do? [END-CITE]Yes.

[Mozz78]

I won't try to change your view because I think it's correct. The first thing scientists are trying to do right now is to simulate the brain activity of very simple animals. As computing power grows, they could simulate more and more complex brain activities, until consciousness emerges from simulated neural activity, just like it emerges from humans' neural activity. That may take time (a couple of decades maybe) but there is nothing too complex in the principle. It's a very brute-force approach, but there is no reason it wouldn't work.

Digitizing humans, however, seems way more complex, because you would have to make a perfect capture of an individual brain state (at a neural or maybe molecular level). The technology to get that seems pretty complex. But if we admit that it's doable, you just have to put that state into a computer simulation. So that requires at least software to simulate brain activity, which is exactly what was discussed earlier.

To summarize: first, we manage to simulate the brain activity of a human being, and from that point we would essentially have functional AI. Second, we may be able to initialise it with an actual human brain state to "upload the consciousness" of a human.

[Versepelles]

Another user pointed out the timescale of genetic programming, which seems to be quite large, just as evolution took a *long* time. It actually seems plausible, even likely, that we could digitize humans before genetic programming creates AI.

[Mozz78]

Nothing takes long when it's calculated by a computer, genetic programming included. Nature is slow because it takes generations upon generations to process evolution. On a computer, a generation can take a fraction of a second. [STA-CITE]> It actually seems plausible, even likely, that we could digitize humans before genetic programming creates AI. [END-CITE]Why use genetic programming to create AI when you can just mimic the functioning of animals' brains with already-existing techniques?

[Versepelles]

Simulating animal brain activity requires both understanding that activity and the ability to reproduce it. We are mapping human brains, but the only fully mapped biological neural system so far is that of *C. elegans*, a small worm, as far as I am aware. This would seem to put genetic programming at an advantage, but perhaps a solution will come from a combination of the two methods.

[NaturalSelectorX]

[STA-CITE]> Similarly, other neural networks can be constructed digitally, collectively known as AI. This is fairly well accepted in modern society, although there are debates over 'souls' and things like that. [END-CITE]Neural networks are not synonymous with AI. You can try to create AI using a neural network, but AI has yet to be created. A neural network is only an approach to computing.

[STA-CITE]> I think that sentient AI will probably emerge first, long before we get close to digitizing humans, because of the way AI may emerge. Currently, one approach to developing advanced neural networks is to allow them to grow 'naturally', which is to say that the network is given some parameters to optimize, and then iterations darwinistically decide what the optimal solution is. [END-CITE]You are talking about a genetic algorithm, and that is nothing new. You pass over "give parameters" as if you enter "become intelligent" into a text box and wait. A requirement for a genetic algorithm is a fitness test. How do you test how well an AI learns, understands, and applies knowledge without having something that learned, understood, and applied knowledge to check it? You could say "understand these instructions and take an action", but then you just optimize for those specific instructions.
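
The "you just optimize for those specific instructions" failure mode has a classic demonstration, Dawkins' "weasel" program: a genetic algorithm whose fitness test is literal string-matching converges on the test answer perfectly while understanding nothing. A version with an arbitrary target string (the target and mutation rates are illustrative):

```python
import random

# Fitness test: "understand these instructions", scored by literal matching.
TARGET = "UNDERSTAND THESE INSTRUCTIONS AND TAKE AN ACTION"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Counts matching characters -- the GA optimizes for *this*, nothing more.
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(CHARS) for _ in TARGET)
while fitness(parent) < len(TARGET):
    children = ["".join(c if random.random() > 0.05 else random.choice(CHARS)
                        for c in parent) for _ in range(100)]
    parent = max(children + [parent], key=fitness)   # keep the best scorer

print(parent)  # passes the "test" perfectly; understands nothing
```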

[Versepelles]

[STA-CITE]> How do you test how well an AI learns, understands, and applies knowledge without having something that learned, understood, and applied knowledge to check it? [END-CITE]How did nature do the same with us? It still seems fairly plausible that genetic programming could create a strong enough survival algorithm that sentience emerges, just as we emerged from the wild. We ourselves are just some better-fit iteration of the primate brain; I see no reason that genetic programming could not eventually produce the same results. However, another user pointed out the timescale of this event, which is much more convincing.

[NaturalSelectorX]

[STA-CITE]> How did nature do the same with us? [END-CITE]Nature *didn't* do it with us. Nature is not an intelligent force, life is something that happened to emerge. Life emerged and evolved because of successful reproduction. I can design a program to successfully copy itself up until the limits of hardware are reached. [STA-CITE]> It still seems fairly plausible that genetic programming could create a strong enough survival algorithm that sentience emerges, just as we emerged from the wild. [END-CITE]Survival from what?

[Versepelles]

[STA-CITE]> Nature didn't do it with us. Nature is not an intelligent force, life is something that happened to emerge. Life emerged and evolved because of successful reproduction. I can design a program to successfully copy itself up until the limits of hardware are reached. [END-CITE]This is exactly the point of genetic programming: it operates in the same way nature does, optimizing some set of parameters. The exact fitness parameters that would create an environment from which intelligence could emerge are unknown, but they will likely come from designing programs that emulate and learn as humans do. That is of course hard to quantify, but similar research is being conducted now.

[redditeyes]

What is happening inside neural networks isn't that different from what is happening inside your brain. Connections are constantly made and destroyed based on input, trying to optimize how a certain task is performed. Optimizing a neural network is more similar to "learning" rather than "evolving a brain". How intelligent a neural network can become is limited by its design. If I build a neural network, it will never become intelligent, even if I let it run for years. Just like you can train a raccoon for years but it will never become smarter than a raccoon. You will never get to a point where sentient AI will "emerge spontaneously". We will have to create it ourselves. Our systems will become better and smarter with each design iteration, until one day they are good enough to be indistinguishable from what we call "intelligence".

At the end of the day, simulating a brain requires little understanding of how it works, but needs A LOT of computational power. Even realistically simulating a couple of neurons takes supercomputers. Building something intelligent from the ground up will be more efficient, but then you need to know precisely how to create such a system (i.e. a deep understanding of intelligence and sentience).

The question of which will happen first depends on how fast each field is advancing. If it turns out we are better at making faster computers and intelligence is extremely complex, then simulation will be first. If we turn out to be good at understanding how brains work, we might create AI before the tech for simulating brains is available. Saying one will happen for certain is premature. After all, computers double in computational speed every 18 months. Our understanding of brains and intelligence grows slower than that. There is a chance simulation will be first.
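
The "A LOT of computational power" claim is easy to put rough numbers on. A back-of-the-envelope estimate using commonly cited order-of-magnitude figures; none of these numbers come from the thread:

```python
# Rough whole-brain simulation cost. All figures are commonly cited
# order-of-magnitude estimates, not measurements.
neurons = 8.6e10           # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4  # ~10,000 synapses each
update_rate_hz = 100       # update every synapse ~100 times per second
flops_per_update = 10      # a few operations per synaptic event

total_flops = neurons * synapses_per_neuron * update_rate_hz * flops_per_update
print(f"~{total_flops:.0e} FLOP/s")  # ~1e18 FLOP/s
```

That lands around 10^18 operations per second, well beyond the supercomputers of the thread's era, which is consistent with the point that even small-scale realistic simulation strains available hardware.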

[Versepelles]

[STA-CITE]> Optimizing a neural network is more similar to "learning" rather than "evolving a brain". How intelligent a neural network can become is limited by its design. [END-CITE]However, if we design a large enough neural network with some interesting parameters to optimize by learning, it seems like it *could* spontaneously develop 'intelligent' behaviors, similar to how human brains were modified versions of primate brains, where evolution 'learned' what worked best. [STA-CITE]> There is a chance simulation will be first. [END-CITE]I agree, but the probability of a neural network getting there first seems much larger.

[CheapeOne]

You misunderstand what is going on in a neural network. A neural network is a form of 'supervised learning'. This means that, given a bunch of X's (input variables) and their respective Y's (classifications), the neural net can find a function 'f' such that f(X) = Y. When you train a neural network, you give it a bunch of these X's and their respective Y's. E.g. an X may be a bunch of attributes of a car, like its size, color, price, MPG, etc. The matching Y would be whether or not someone will buy it.

Running a neural net means training it on a bunch of *pre-selected* X, Y pairs, so that it can learn to classify new X's. Continuing with the car example, you might train it on a bunch of cars sold in the last 5 years (so we already know which cars people chose to buy, i.e. we already know what the output Y is *supposed* to be). It flows the X's through itself, sees what *its own* output was, and compares that with what the *training example* said the output was supposed to be. If they differ, it back-propagates the differences up the network, and trains again to see if that improved the result.

There are a few things wrong with this when it comes to equating a neural network with human intelligence. First of all, they are slaves to the training set and what is included in it. If you don't include a wide enough variety of different X's, it won't correctly classify future inputs. For example, if all the cars you *train* on are the same color, then it may never pick up on the difference color makes when choosing a car. Cars this year may also have some new technology, like Android Auto or something, that it has never been trained on. Another problem is if you include too much irrelevant information, 'noise', in your training set. This will result in 'over-fitting': the neural net learning the training set *too* well. The neural net may look at a new car and say 'well, all of the good cars in my training set had 1 extra spec of dust under the passenger seat, this must mean that was the deciding factor!'.

Fixing all of these things is up to the human designer (hence *supervised* learning). A neural net's effectiveness depends on how well the training examples are chosen, what is included in them, how long the neural net runs, and how much it corrects itself after each run. Neural nets can easily be run too long, or over-correct themselves after a training example. The point is that while neural networks sound cool (and they definitely are!), they are not a model of the kind of general intelligence that humans are known for.
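
The 'extra spec of dust' story is easy to reproduce as a toy experiment: give a naive learner a training set in which an irrelevant feature happens to predict the label perfectly, and watch it fail on new data. All data and the single-feature "learner" here are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_cars(n, dust_is_spurious):
    price = rng.normal(size=n)                    # genuinely relevant feature
    y = (price + rng.normal(scale=0.5, size=n) > 0).astype(int)  # bought?
    # "Dust": irrelevant, but in the training sample it matches the label.
    dust = y.copy() if dust_is_spurious else rng.integers(0, 2, size=n)
    return np.column_stack([price > 0, dust]).astype(int), y

X_train, y_train = make_cars(200, dust_is_spurious=True)   # unlucky sample
X_test,  y_test  = make_cars(200, dust_is_spurious=False)  # new cars

# Naive learner: keep whichever single feature scores best on training data.
train_acc = [(X_train[:, j] == y_train).mean() for j in range(2)]
best = int(np.argmax(train_acc))                  # picks the dust feature
print("chose feature:", ["price", "dust"][best])
print("train accuracy:", (X_train[:, best] == y_train).mean())  # 1.0
print("test accuracy: ", (X_test[:, best] == y_test).mean())    # ~0.5
```

The learner scores perfectly on the training set and at chance on new data: over-fitting in miniature, and exactly why the choice of training examples is the designer's job.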

[Versepelles]

[STA-CITE]> The point is that while neural networks sound cool (and they definitely are!), they are not a model of the kind of general intelligence that humans are known for. [END-CITE]This is a fair point, given *only* neural networks as a model of learning. I'm not an expert in the field, only an interested amateur. I think that neural networks are currently the most like nature - that is, a darwinistic environment which has been shown to produce high intelligences - and thus the most probable source of high intelligence. Certainly, there are limitations on the parameters of input and interest, but it still seems that this is likely the 'primordial soup' from which AI life could emerge.

[redditeyes]

[STA-CITE]> if we design a large enough neural network [END-CITE]That's not how it works. If it was about size, whales would be the smartest animals on Earth. It's not about how big it is, but how it is organized.

[STA-CITE]> with some interesting parameters [END-CITE]Yes, but designing those interesting parameters is the hard part. Saying we will accidentally create intelligence makes as much sense as saying you can accidentally recreate a perfect Mona Lisa copy by throwing a bucket of paint against the wall, just as long as the bucket is big enough and you throw it with an "interesting" twist. Yes, it might be physically possible, but you will need to design a very specific bucket and a very specific throw that gets every single drop of paint exactly where you want it. If you just throw random shit at the wall, you'll never get Mona Lisa. You underestimate how much work goes into designing the structure and behavior of neural networks. That's like 99% of the work. How well one performs after teaching depends entirely on how well you designed it to learn the thing in question. It's not like you throw random shit together and then magic happens. You actually have to design that magic and understand it in detail.

[STA-CITE]> similar to how human brains were modified versions of primate brains [END-CITE]The entire point of my post was that it is not similar. How much a neural network can achieve is limited by the design of that network, just like a primate brain is limited by the genetics it was born with. You can teach it for years and it will never start doing calculus or discussing quantum physics. Just like you can train a neural net for years and it will not achieve more than the design allows. While the structure of the brain itself can improve through evolution, this is not true for neural networks. If your program says "when X, Y and Z happen, then make a connection", then this rule will not change even if you run the program for millions of years. If you want to change how it works, you need an actual programmer to sit down and write the new code. Nature gets around the problem by making random changes to the code and seeing what sticks (evolution). This process is possible to do with computers (it's called [Genetic Programming](https://en.wikipedia.org/wiki/Genetic_programming)), but it is EXTREMELY inefficient, so most programmers avoid it like hell, especially when dealing with complex and computation-heavy problems like designing neural networks. It only works for nature because it has billions of years to work with. If we had the hardware to run such algorithms in an effective manner, we could just simulate a brain in the first place.
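
The distinction drawn here - a fixed learning rule versus genetic programming that rewrites the program itself - can be sketched with a toy GP that evolves arithmetic expressions. Even this tiny search burns most of its effort on useless candidates, which is the inefficiency being described; the target function and all parameters are invented for illustration:

```python
import random

def random_expr(depth=3):
    # A random little program: an arithmetic expression over x.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(0, 3))])
    op = random.choice(["+", "-", "*"])
    return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

def fitness(expr):
    # Score against a hidden target, f(x) = x*x + 1. Higher is better.
    return -sum((eval(expr, {"x": x}) - (x * x + 1)) ** 2 for x in range(-3, 4))

def mutate(expr):
    # Crude mutation: sometimes replace the whole program. Real GP splices
    # subtrees, but the point stands: here the *code* itself is what varies.
    return random_expr() if random.random() < 0.5 else expr

population = [random_expr() for _ in range(200)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    population = population[:50] + [mutate(random.choice(population[:50]))
                                    for _ in range(150)]

best = max(population, key=fitness)
print(best, "fitness:", fitness(best))
```

Contrast this with the earlier sketches, where only the weights inside a fixed program ever changed; that is the "rule that will not change even if you run it for millions of years".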

[warsage]

When you say "emerge spontaneously," do you mean without any human design? The network gets big enough that the AI just kind of appears? I think you'll have a hard time finding any serious evidence for this theory. It's a sci-fi plot device, nothing more. Digitizing humans, in the other hand, is based on the assumption that, once we understand the human brain well enough, we'll be able to create computers that mimics the same functions as the brain. A great deal of effort is being put into understanding the human mind. I think this could plausibly happen within one or two hundred years. .

[ablanchard17]

[STA-CITE]> When you say "emerge spontaneously," do you mean without any human design? The network gets big enough that the AI just kind of appears? I think you'll have a hard time finding any serious evidence for this theory. It's a sci-fi plot device, nothing more. [END-CITE]It's not a matter of the network getting big enough that an *AI* "appears". It's a matter of it getting big enough to the point that *Consciousness itself* begins to exist in it. Your brain is a series of billions and billions of neurons that interact with each other. Through these interactions, your consciousness exists. We can probably both agree that if you only had 10,000 neurons that you wouldn't be the same entity that you are right now. It's a question of at what point does consciousness itself, begin to exist within a system like that?

[warsage]

[STA-CITE]>It's a question of at what point consciousness itself begins to exist within a system like that. [END-CITE]I'd actually argue that it's a question of "will consciousness ever begin to exist in a system like that?" Sheer *complexity* isn't enough to create consciousness. Waterfalls have billions and billions of water molecules constantly interacting with each other in incalculably complex ways. It's an extremely complex system. But we would never wonder "at what point will a waterfall develop consciousness?" I don't know of any evidence that suggests that computer networks are capable of consciousness in the same way that brains are. This is in the realm of science fiction, not of science.

[ablanchard17]

[STA-CITE]> It's an extremely complex system. But we would never wonder "at what point will a waterfall develop consciousness?" [END-CITE]Just because we would never wonder doesn't mean the same things don't still apply. We have absolutely no evidence that the waterfall will or will not become conscious, or whether it even has the ability to.

[warsage]

Well, yeah, but it's a huge leap from "I mean, we can't prove that computers *won't* spontaneously become conscious" to OP's view: [STA-CITE]>Sentient AI will emerge spontaneously significantly before humans can digitize themselves. [END-CITE]On the one hand: a sci-fi trope with zero evidence and zero researchers behind it. On the other hand: a well-theorized technology that is one of the main focuses of computer scientists around the world. Which sounds more likely to come about first?

[Versepelles]

[STA-CITE]> When you say "emerge spontaneously," do you mean without any human design? The network gets big enough that the AI just kind of appears? [END-CITE]No, the hardware for neural networks will be in place, but the AI will not be hand-coded, but an iteration of machine coding. That is, we write some code on a large network with some capacity to expand, and the code iterates with some 'goal' until consciousness appears. [STA-CITE]> Digitizing humans, in the other hand, is based on the assumption that, once we understand the human brain well enough, we'll be able to create computers that mimics the same functions as the brain. A great deal of effort is being put into understanding the human mind. I think this could plausibly happen within one or two hundred years. [END-CITE]A great deal of effort is being put into neural networks, machine learning, and AI in general, so I while this may be valid, it could be equally valid for AI sentience.