Meteor2 - Saturday, March 18, 2017 - link
Tesla reckons their current hardware will support 'full-autonomy': https://www.tesla.com/en_GB/blog/all-tesla-cars-be...
And that's using the Pascal-based Drive PX2. Heck, didn't Nvidia themselves demonstrate their 'BB8' car last year, saying it only needed four days to learn to drive?
So which will it be? Tesla Level 4 in a year, or Bosch/Nvidia Level 4 (or Audi/Nvidia) in the 2020s? I wouldn't bet against Tesla...
PS Level 4 basically means the human driver is monitoring the car and responsible for it, exactly the same as a modern fighter jet or airliner. It won't need any certification. Level 5 is the level where the steering wheel goes, and the car (or rather its manufacturer) is responsible for it. Some car makers have said they want to skip straight to Level 5 but I doubt any will be able to sit by while competitors offer Level 4.
ragenalien - Saturday, March 18, 2017 - link
The Drive PX 2 board supports up to two MXM cards for additional compute power. The SoC in it isn't powerful enough on its own for Level 4 autonomy.
Meteor2 - Sunday, March 19, 2017 - link
Yes, you're right; more recently Musk has said the cars might need a hardware upgrade for Level 4.
geekman1024 - Sunday, March 19, 2017 - link
Do you mean 'full-auto-money'?
name99 - Sunday, March 19, 2017 - link
Oh my god! You mean companies plan to CHARGE US for this technology? Is the US government aware of this scandal?
If you're going to bring in political pseudo-outrage, can you at least make it interesting?
Yojimbo - Saturday, March 18, 2017 - link
Getting 5 times the efficiency through architectural enhancements to the GPU doesn't seem likely. I think the reason it can get 1 DL TOPS per watt is that it contains an ASIC geared towards inferencing convolutional neural networks. That is probably what the "CVA" is in the SoC block diagram. Power consumption is kept low by reducing data movement: data is reused and stored in "scratchpads" located very close to the processing elements. That's how they can go from ~0.2 DL TOPS per watt with the SoC-only version of the Drive PX 2 (Parker) to 1 DL TOPS per watt with Xavier on a similar manufacturing process. I wonder, though, whether it allows for flexible precision or only supports inferencing of 8-bit integer-based networks.
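A quick sanity check on that arithmetic, as a rough sketch (the ~0.2 DL TOPS/W Parker figure is the estimate above; Xavier's 30 DL TOPS at 30 W headline numbers are assumed):

```python
# Rough efficiency arithmetic for the figures quoted above.
# Assumed: Xavier ~30 DL TOPS at ~30 W; Parker ~0.2 DL TOPS/W.
xavier_tops, xavier_watts = 30.0, 30.0
parker_tops_per_watt = 0.2

xavier_tops_per_watt = xavier_tops / xavier_watts   # 1.0 TOPS/W
gain = xavier_tops_per_watt / parker_tops_per_watt  # ~5x

print(f"Xavier: {xavier_tops_per_watt:.1f} DL TOPS/W, ~{gain:.0f}x Parker")
# A ~5x jump on a similar process node is why a dedicated inference
# block (the "CVA") looks more plausible than GPU tweaks alone.
```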
chrissmartin - Saturday, March 18, 2017 - link
OMG! If 30 watts is for the whole system, meaning CPU + GPU, and the GPU contains 512 cores, then Volta would be power efficient as hell. A GPU like the 940M with 384 cores has a TDP of 36 watts, and the laptop 1050 with 640 cores is 75 watts. Volta could be awesome!
Yojimbo - Saturday, March 18, 2017 - link
I think Volta is supposed to be about 60% more energy efficient than Pascal, at least in terms of GEMM (matrix multiplication). But comparing the Xavier SoC to a 940M or even the 1050 brings with it a lot of issues. For example, the SoC includes a CPU, yes, but the 940M and 1050 power numbers include power-hungry GDDR5 VRAM.
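For context, GEMM throughput (the basis of such efficiency comparisons) is typically measured by timing a large matrix multiply and counting roughly 2*M*K*N operations; a minimal sketch:

```python
import time
import numpy as np

# Effective GEMM throughput: an (M,K) x (K,N) multiply performs
# roughly 2*M*K*N floating-point operations.
M = K = N = 2048
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

gflops = 2 * M * K * N / elapsed / 1e9
print(f"{gflops:.1f} GFLOPS effective")
# Energy-efficiency claims divide a number like this by measured power.
```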
CrazyElf - Saturday, March 18, 2017 - link
Depends on how they are clocked. There are too many variables here for us to know for sure:
1. Process enhancements from TSMC
2. Architectural enhancements
3. The car version may be underclocked
We don't know what real-world gaming performance is going to be like.
ddriver - Sunday, March 19, 2017 - link
Those cores are designed from the ground up for machine learning. This means maximized throughput at low-precision integers, and minimized transistor cost that would translate into increasingly abysmal performance as the data type precision increases.
You cannot make a direct comparison in terms of power efficiency with a general-purpose GPU core, which is optimized for 32-bit float computation.
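A toy illustration of that point (a sketch only; real int8 inference engines use dedicated hardware paths, but the quantize, compute in integers, rescale idea is the same):

```python
import numpy as np

# Quantize float weights/activations to int8, do the dot product in
# integer arithmetic, then rescale back to float. Integer multiply-
# accumulate units need far fewer transistors than FP32 ones.
w = np.random.uniform(-1, 1, 256).astype(np.float32)
x = np.random.uniform(-1, 1, 256).astype(np.float32)

scale_w = np.abs(w).max() / 127
scale_x = np.abs(x).max() / 127
w_q = np.round(w / scale_w).astype(np.int8)
x_q = np.round(x / scale_x).astype(np.int8)

# Accumulate in int32, as int8 hardware does, then rescale.
y_int8 = int(np.dot(w_q.astype(np.int32), x_q.astype(np.int32))) * scale_w * scale_x
y_fp32 = float(np.dot(w, x))
print(f"fp32: {y_fp32:.4f}  int8: {y_int8:.4f}")  # close, far cheaper in silicon
```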
jrs77 - Saturday, March 18, 2017 - link
Self-driving cars might work technologically, but they'll never be able to clear the philosophical hurdle necessary to be allowed on EU roads.
What does the system do when, driving down a road with a motorcycle coming the other way, a stroller with a child inside suddenly rolls onto the street, with not enough road ahead to stop before hitting the stroller?
A: hit the stroller with the child
B: swerve to the left, hitting the motorcycle
C: swerve to the right, into the people on the walkway
As long as that question isn't answered, to the extent that it can be signed into law whose life is worth less in that case and which option the car should take, such automated systems don't belong on our roads.
Such decisions have to be made by humans and not by machines.
Alistair - Saturday, March 18, 2017 - link
Actually, that's an easy decision, made by people in a split second all the time. If you'll hit something else, you just try to stop and hit what is in your lane. You don't actually know what the consequences of moving off the road onto the walkway will be, and oncoming collisions are more fatal. 'Try to stop and hit what is in your lane' is simple. That's what usually happens in real accidents anyway.
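That rule is simple enough to write down; a hypothetical sketch (function and inputs invented purely for illustration):

```python
# Hypothetical sketch of the "brake in lane" rule described above.
# Names and inputs are invented for illustration only.
def emergency_response(obstacle_in_lane: bool,
                       oncoming_traffic: bool,
                       people_on_walkway: bool) -> str:
    if obstacle_in_lane:
        # Deliberately ignore the other inputs: swerving trades a
        # known hazard for unknown ones, and head-on or walkway
        # impacts are worse bets than braking hard in lane.
        return "brake hard, stay in lane"
    return "continue"

print(emergency_response(True, True, True))  # brake hard, stay in lane
```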
asmian - Saturday, March 18, 2017 - link
Agreed. And the assumption behind that problem is that the car can tell that the stroller contains a baby and is therefore something "worth" making an extra attempt to miss. I'm pretty sure all this car-driving AI isn't going to be connected to Star Trek sensors that can tell human lifeforms apart from mere objects that happen to be on collision courses. The logic isn't going to be differentiating between "is it a baby stroller or a shopping trolley?" but simply "can I stop in time?"
I'd suggest that a human in that situation, however traumatised by unavoidably hitting the child, would not be prosecuted if there was no possibility of averting the accident, which is the Kobayashi Maru (no-win) scenario you are trying to set up. (You canna change the laws of physics - momentum of the car, available braking force, likelihood of skidding on the road surface, distance to impact.) So why try to create a higher philosophical hurdle to pass for a machine that would have even better reaction times and control over the car than a human driver in that situation? The whole basis of the argument is nonsensical.
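The physics in that parenthetical reduces to a stopping-distance estimate; a sketch with illustrative values (the friction coefficient and reaction times are assumptions, not measured figures):

```python
# Stopping distance = reaction distance + braking distance,
# with braking distance = v^2 / (2 * mu * g). Values are illustrative.
MU = 0.7   # assumed tyre-road friction (dry asphalt)
G = 9.81   # m/s^2

def stopping_distance(speed_kmh: float, reaction_s: float) -> float:
    v = speed_kmh / 3.6                    # km/h -> m/s
    return v * reaction_s + v * v / (2 * MU * G)

for reaction in (1.5, 0.2):                # assumed human vs. machine
    print(f"50 km/h, {reaction} s reaction: "
          f"{stopping_distance(50, reaction):.1f} m")
# The machine's shorter reaction time converts directly into metres,
# which is the "better reaction times and control" point above.
```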
ddriver - Sunday, March 19, 2017 - link
What's more worrying here is that despite all the marketing monikers, machine learning is not AI. It does fairly well in situations it has trained on by making many mistakes over and over, but once it encounters a previously untested set of environmental factors, it cannot reason its way out of it; it can only fail.
Granted, if the hardware survives, that learned knowledge could be incorporated into future releases, getting better over time, but it will surely suck to be an early adopter and end up disabled or dead as part of the learning experience.
Having some actual AI would surely help prevent or at the very least severely mitigate the damage of learning on the go, with actual human beings inside, but then again, I doubt the industry will push for an actual AI any time soon, mostly because it will do and say things that make sense rather than do and say things that benefit the industry.
qap - Sunday, March 19, 2017 - link
Actually, no. It is learning general rules of behavior (from specific instances) and it will apply them to any situation. In some situations those rules will fail, but that is no different from the way you or any other human approaches situations (a child will also always fail the "hot stove situation" the first time; that doesn't mean it's not intelligent). And as you pointed out, it can LEARN from those situations. Therefore it is AI.
There is a difference in how complex the rules it can learn are for now, and how quickly, but basically that's all. Obviously you can draw an arbitrary line for how complex the rules it can learn must be, and how fast, to call it "true AI", but that's just an arbitrary line.
ddriver - Sunday, March 19, 2017 - link
Let me put it this way - it sucks at improvising. It could not possibly succeed without failing, usually many times over.
This is not how intellect works. I don't need to jump off a bridge once, much less multiple times, before I establish that it will cripple or even kill me. And I don't get to. That's what separates intellect from a programmed machine - the intellect can reason, the machine can only follow a static set of predefined rules.
Which is why I refuse to call machine learning AI, it doesn't reason, it merely automates the programming by means of failing until it succeeds. It is a purely statistical approach. There is no reasoning involved, there is no understanding either.
That does not diminish the applications of machine learning. For driving cars it will likely be much safer even now, in its infancy, with a predominantly human driver traffic system, and will only get better in time, as the software matures and the number of human drivers decreases. It is just not AI. There is no intellect involved.
That being said, I wonder if we are going to see discrimination, much like we see with everything else. For example, David Rockefeller got his 7th heart transplant at 101 years old, having spent only 2 years with his last heart. Whereas regular people need to wait for many years for a donor, and do not even qualify for a transplant if they are over 70. The list goes on for pretty much everything in life.
Which raises the question: will we see self-driving cars crash to kill average Joe but minimize infrastructure damage, while driving Mr. Fatcat into a soft group of pedestrians to cushion the impact and minimize his injuries?
ddriver - Sunday, March 19, 2017 - link
Also, machine learning cannot really learn how to learn. What it learns from, and how that is applied, pretty much has to be preprogrammed. Machine learning will not learn from something it wasn't set up to learn from. If lucky, developers would be able to identify what went wrong and apply that in a future version, but on its own, it cannot learn how to learn. Which separates it from intelligence. It needs human input; it needs to be told what factors to factor in and how to apply the results. And you don't have an "intellect" without those prerequisites. You have automated programming that is set up by humans; there is no self-learning involved whatsoever. This is far from AI. Even machine "learning" would be generous; what it truly is, is machine TRAINING.
But that just doesn't sound cool enough to the impressionable simpletons, so why not go with AI, because it is sooo much cooler ;)
ajp_anton - Sunday, March 19, 2017 - link
Human intelligence pretty much works the same way. We are pre-programmed by our genes (involving motivation and the way our neurons work), we have a lot more neurons than these machines, and we typically need many years of learning experience before we can be called intelligent.
So it's not really a fair comparison. The self-driving AI is general AI in the sense that it can learn and generalise from previous experience, but it will only live in the world of driving, and its capacity for learning is limited to that area. If you want to reserve the term AI for something truly general, you'll have to wait until WE learn everything there is to know and understand everything there is to understand about the universe; otherwise you'll never know if it's truly general.
ddriver - Sunday, March 19, 2017 - link
A self-driving car understands car driving about as much as a mechanical cookie extruder understands cookie making. It is just a tool, albeit more sophisticated and digital, but still just a mechanism.
Learning everything is not necessary; all that is necessary is the ability to learn, and I mean new things. Current "AI" is entirely incapable of that, because there is no understanding involved; there is only evaluation according to a preprogrammed, static, finite set of data based on our understanding, based on our intellect. Just because it follows rules as presented by our intellect does not imbue intellect into the machine.
Machines can already destroy us at a number of disciplines, but they all involve doing work we told them to do. This includes the so called "AI" as well. It is just number crunching, there is no abstract thought.
You could whip or treat a dog into playing a tune on the piano, which wouldn't be much different from "AI", but even then, that dog will not have learned music, it would be trained to play a specific tune, it will not understand music as music, and as a result it would not be able to play just about any music piece, much less compose music.
With machine training you could go further: you could easily program music theory into it, or you could even use existing music as training material to make it figure out music theory on a statistical level. You could have a machine that writes 10 hours of commercial-grade music per second... but it will still not understand music. It may contain all there is to know about music, far more than any human could possibly fit in his brain, yet it will not understand music. Its standards for music would be what we've composed and have considered to be good. It will never have its own taste in music; music will never be anything more than numbers to it.
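That statistical approach is easy to demonstrate in miniature; a toy sketch (training melodies invented for illustration):

```python
import random

# Minimal statistical "composer": count note-to-note transitions in
# training melodies, then sample new sequences from those counts.
# It produces plausible-looking output from surface statistics alone,
# with no notion of what music is -- which is the point above.
training = ["C D E C", "C D E F G", "E F G E C"]
transitions = {}
for melody in training:
    notes = melody.split()
    for a, b in zip(notes, notes[1:]):
        transitions.setdefault(a, []).append(b)

random.seed(1)
note, out = "C", ["C"]
for _ in range(7):
    note = random.choice(transitions.get(note, ["C"]))
    out.append(note)
print(" ".join(out))
```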
As for our brains and our genes - the genes don't really contain any meaningful information. The human is born with a few basic instincts, many of which fade in time. I am actually following progress in neurology with great attention, and I can assure you, as much as we know about what the human brain is and how it works, what makes up our ability to learn is still a complete mystery. Labs have run large-scale brain simulations at a sufficiently high speed, and there is no evidence that such simulations are capable of cognition, much less intellect, awareness or abstract thought. I don't want to sound spiritual, because I am really not, but I think it is obvious that there is more to it than the machine. It may not even be attainable; I mean, all that we can do is a product of our intellect. We cannot bring our intellect into it any more than a computer game character can leap out into the real world. It may well be non-reproducible to us.
That being said, even if we cannot simulate and produce an actual artificial intellect, we can incorporate our set of understanding into one, which for most intents and purposes would qualify as an AI. While it is likely impossible to create an intellect that learns on its own, it is entirely doable to create one that learns as we do. That would be AI. Until then, it is, at best, machine TRAINING.
Meteor2 - Sunday, March 19, 2017 - link
Btw, the argument you (ddriver) are making is quite similar to the 'philosophical zombie', which postulates that you could build a machine which looks like a human and reacts like one, e.g. poke it and it goes 'ouch', and say it's really only a zombie because we built it. But that's easily countered: if it walks like a duck, swims like a duck, and quacks like a duck, it's a duck. There's no magic inside us (though some philosophers and scientists, 'dualists', think there is).
ddriver - Sunday, March 19, 2017 - link
You could program into a machine that killing is wrong; you could even program it to shed tears. But it will not be of its own accord; it will not understand sorrow, remorse, regret or pain.
I can make a machine that looks like a duck, swims like a duck, and quacks like a duck. If your logic is correct, then you should also be able to eat it like you would eat a duck. It would be fun to watch you eat a mechanical duck. Maybe then you will realize in which spectrum of the human intellect you are :)
ddriver - Sunday, March 19, 2017 - link
Furthermore, and not surprisingly at all, your concept of magic is just as much BS as your concept of science. You only know science from entertaining articles, and you only know magic from entertaining books and movies.
Magic is, and has always been, things that fall outside of our understanding. It dates back to times when people didn't have the knowledge to explain what they saw. Magic was later picked up by religion, when it became the dominant form of establishment, to hide the science that religion wanted to keep to itself as an exclusive advantage for the purpose of population control. Science was labeled magic and witchcraft, and its practice was punishable by death. But that was not enough, for there still existed the possibility of people practicing it in secret, which is why it was also given the hoodoo-voodoo BS spin, sending pretty much every curious individual down some made-up nonsense dead end.
So yeah, magic is very much real, and what makes us tick is exactly it. And it will be magic until we understand it, which we may never really achieve, at least on this level of existence. If it is what makes us, then it is what encapsulates us, we cannot understand what we can't see from the outside, and we cannot see it from the outside because we do not and cannot exist outside of it.
Humans are as narcissistic as they are narrow-minded, and I don't mean individually, but as a species. We think we are the top of the pyramid: there was nothing, then atoms, molecules, proteins, single-cell organisms, complex life, animals, humans and... that's it. We cannot see past our noses, even if the progression and structure of reality are brutally obvious. And it is precisely because we believe we are top notch that we also believe that there is nothing we cannot understand or achieve. And in a way this is true, but not because there isn't any more than that, but because we lack the perception for it. There exists an infinite number of higher-order organizations of consciousness above us, of which we could be the forming cells or even atoms, but not with our current mindset. With that we are stuck here on this rock; we will never really travel in space in the form of a consciousness bound to an axis of time and animated by falling uncontrollably through it. There are things we cannot grasp, much less achieve, on our current level, and what makes us tick is one of those things. We are about as aware of what's ahead of us as a microbe is aware of our human world. It just falls outside the threshold of perception.
name99 - Sunday, March 19, 2017 - link
ddriver, in this sequence of comments you and some others appear to be honestly trying to grapple with a real problem, so I'll take you at your word.
The basic problem, however, is that all of you are operating in a "systematic" mindset. This is the characteristic thinking of modernity (basically the West since ~1500), and its primary characteristic is an assumption that the world is legible. This is not exactly a belief that the world is rational, causal, and so on; it's the stronger belief that
- everything important about the world
- can be encapsulated in a few axioms
- from which everything else can be derived.
Look at this structure. It encompasses modern science, yes, but also modern law (as opposed to the more flexible way in which common law was handled), modern religion (various fundamentalisms, starting with the Protestant Reformation), Kantian-style ethics, and various politico-economic theories.
The point I am trying to make is that this is not the ONLY way of thinking about the world --- Kantian-style ethical systems, for example, are not the only way to construct ethical systems. The fact that you all are discovering that, indeed, cars constructed according to the "systematic" model cannot cope with the full variety of the world is a discovery about the "systematic" model, not about cars. The same would be just as true if humans were forced to live absolutely according to the supposed rationality of rules derived from a small set of axioms. (This lack of flexibility, the more you're forced to live according to actually poorly chosen, but supposedly universal and optimal, rules, gets worse the weaker in society you are --- hence why parts of life suck for the poor, schoolchildren, the colonized, and so on.)
The solution is not to ditch rationality; it is to accept that the world is complex enough that it can't be captured in such a small cage. Mathematics learned this in the early 20th C (that's essentially what Gödel is all about), likewise physics. But our legal and political systems, and our public discourse, are still operating according to this 19th C model. The best of us need to figure out how to move our society beyond this, not backwards to nihilism or selfishness, but to a rationality that understands its limits.
AI (or machine intelligence or statistical learning -- if you think arguing about what "real" intelligence is still valuable, even after reading the links below, I'm afraid you have nothing to contribute, you need to sit at the kids table while the adults have their conversation) is perhaps our best hope for society as a whole, not just a few smart individuals, to have to confront and then deal with these issues.
General framework:
https://vividness.live/2015/10/12/developing-ethic...
This is a slightly technical page that tries to show the problems with one very particular version of these "systematic" models of everything:
https://meaningness.com/probability-and-logic
You can work through the entire shebang here if you like:
https://meaningness.com
(I didn't write these pages and know nothing of their author. I have no interest in his [only occasional] asides about Buddhism. Overall they reflect the thinking of a very smart guy who spent the first part of his life at MIT trying to get a variety of AI schemes to work, and concluded from the experience that "our" [ie society's] thinking about these issues is much more broken than it needs to be.)
Meteor2 - Sunday, March 19, 2017 - link
Very interesting, name99. Personally I think humans over-complicate things. Look at animals. They perceive and they react in accordance with their innate purpose to survive and reproduce. We just added simulation and language (and thence co-operation and planning, though dogs do it too), and began to improve our survivability and offensive/defensive capabilities with sticks sharpened with stone tools, furs, and fire. Now we're here. I don't think the basics -- perception, purpose, simulation, execution -- will be too hard to instil in AI and robots. That's why I'm fascinated by process nodes -- it's one of the key enabling technologies.
Or maybe I've just watched Westworld too much.
name99 - Sunday, March 19, 2017 - link
If that sort of thing (how human thought is [perhaps] different from pre-human modes of thought) interests you, you'll want to read this: http://www.ribbonfarm.com/2011/03/17/cognitive-arc...
Between that first stage of blooming, buzzing confusion à la William James, and today, we have an intermediate stage in Julian Jaynes' _The Origin of Consciousness in the Breakdown of the Bicameral Mind_, for which the Wikipedia article gives a summary:
https://en.wikipedia.org/wiki/Bicameralism_(psycho...
Meteor2 - Monday, March 20, 2017 - link
Thanks Name, I'd already come across bicameralism (not convinced) but that first link is fascinating. So is the article it links to -- how iron supplanted bronze. I've been reading about that too, just last week!
I found this interesting: http://www.todayifoundout.com/index.php/2010/07/ho...
It would seem that people who are profoundly deaf from birth, and not taught sign language, don't have an internal monologue, and show reduced cognition. I think that's telling.
name99 - Monday, March 20, 2017 - link
Thanks, that deafness link was interesting. I know I do all my thinking in natural language, and am amazed at the claims made by some that thinking happens in some sort of universal "pre-language" mentalese.
ddriver - Sunday, March 19, 2017 - link
The problem is the lack of understanding of the mechanics of our own reasoning. If we could model that, then it could be implemented in various ways: organic, analog electronics, digital electronics, quantum electronics. That's just a detail.
I can't seem to find people who can actually explain things in a reasonable way, even things about themselves. One notable example is preferences - why do people like what they like? The usual answer is "because I like it". They cannot give a logical explanation of the mechanics of how things influence them, and why they find something appealing or appalling.
Scientists are presented as those "very smart relative to the general population" individuals, but they aren't really any more intelligent; they are just more trained. Such scientists are very much narrow-minded: they are good at what they were trained to do, and mediocre at everything else, and most of the time it is one single thing they are good at. It is very rare to see a scientist proportionally proficient in multiple disciplines.
So it comes as absolutely no surprise that scientists, relying on a set of knowledge pre-programmed into their heads to do a single thing, produce an "AI" relying on a set of knowledge pre-programmed into its implementation to do a single thing.
It is just the best they can do. They do not understand the root mechanic of intellect, therefore they cannot model and reproduce it.
amdwilliam1985 - Sunday, March 19, 2017 - link
You are very outdated with your info.
Go to YouTube and watch some videos about how Google DeepMind works.
"It" learns by itself! No pre-programming needed.
You specify two things: input data and desired results. The AI ("machine learning") fills in the blank (aka the black box, or self-programming) and connects the dots. Yes, it is very scary~
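What's described there is basically supervised learning; a toy sketch of the 'input data plus desired results' setup (pure illustration; DeepMind's actual systems combine deep networks with reinforcement learning and are nothing this simple):

```python
# Toy "fill in the blank" learner: given inputs and desired results,
# fit the mapping between them by gradient descent. Illustration only.
inputs  = [1.0, 2.0, 3.0, 4.0]   # entry data
targets = [3.0, 5.0, 7.0, 9.0]   # desired results (here, y = 2x + 1)

w, b, lr = 0.0, 0.0, 0.01        # the "black box" parameters
for _ in range(5000):
    for x, y in zip(inputs, targets):
        err = (w * x + b) - y    # prediction error
        w -= lr * err * x        # nudge parameters to shrink the error
        b -= lr * err

print(f"learned mapping: y = {w:.2f}x + {b:.2f}")  # ~ y = 2.00x + 1.00
print(f"prediction for x = 10: {w * 10 + b:.1f}")
```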
Meteor2 - Sunday, March 19, 2017 - link
Think computers can't learn? http://www.zdnet.com/article/google-brain-simulato...
And that was five years ago. Computers can learn, can reason in novel situations, and can overcome challenges you might not expect.
Meteor2 - Sunday, March 19, 2017 - link
...or demonstrate intuition? https://www.wired.com/2016/05/google-alpha-go-ai/
ddriver - Sunday, March 19, 2017 - link
Nope, they can't, not yet anyway. You don't really understand the subject and its implications. Those are sensationalist titles intended to impress simpletons. And it evidently works.
It ain't no different than that hack Kaku talking about "quantum teleportation" in the context of actual, Star Trek-like teleportation. It is almost as if that over-hyped celebrity scientist doesn't know what he is talking about. The teleportation he talks about is as much teleportation as the fax machine, which we've had for so long that today it is pretty much obsolete, ancient tech.
Unlike you, I do understand the actual science, therefore I can easily determine that the mainstream presentation of science is PURE, 100% BS. It is nonetheless astonishing how easy it is to impress and convince simpletons that they are smart and progressing people by simply feeding them BS. It is no different than new age or pseudo / alternative science, or any of those groups of nutjobs.
Meteor2 - Monday, March 20, 2017 - link
First off, because of your tone, I've put you in the category of 'commenters who I automatically skip'. I may not be alone in that; presumably you're here because you want your comments to be read. Consider that.
Second, I do not claim to understand the science. If I did, I'd be in a very select group, and working for Google. So excuse me if I dismiss your claims, which don't seem to amount to anything more than insults really.
Yojimbo - Saturday, March 18, 2017 - link
Those hurdles are the sort of things ethicists and philosophers think about because that's their job and their delight. But in actuality the world is a Darwinian place. People's morality and ethical comfort will adjust with what is necessary and productive. You can make demands like "As long as that question isn't answered, to the extent that it can be signed into law whose life is worth less in that case and which option the car should take, such automated systems don't belong on our roads. Such decisions have to be made by humans and not by machines." But as soon as China, for example, allows the adoption of artificial intelligence throughout its society, and in doing so increases its productivity significantly, it will reduce European competitiveness, reducing European wealth and increasing European poverty in turn. At that point no one is going to care about some philosophical question, and the (justified) fear of that situation materializing will make people willing to face and accept those ethical dilemmas.
Besides, if one wishes to consider the philosophical dilemma you proposed, one must also consider the ensuing one: if self-driving cars are statistically much safer and will save over 20,000 lives a year in Europe, is it ethical to block the saving of those lives on the basis of such a philosophical hurdle as you proposed?
shabby - Sunday, March 19, 2017 - link
The answer is simple: the people not doing anything wrong should live; the child strolled onto the street by choice and therefore should suffer any and all consequences.
Just because you saw this stupid scenario posted somewhere else doesn't mean you should bring it up again.
versesuvius - Sunday, March 19, 2017 - link
Self driving cars! The most stupid idea ever!!
Yojimbo - Sunday, March 19, 2017 - link
Worse than shoulder pads in women's clothing?
versesuvius - Monday, March 20, 2017 - link
Of course!
dealcorn - Sunday, March 19, 2017 - link
The goal is not to replicate human decision-making processes. The goal is that autonomous vehicles should be safer than human drivers. That is a low bar, in that machines usually do not get drunk or do drugs. Also, faster-than-human reflexes are likely.
versesuvius - Sunday, March 19, 2017 - link
Certainly!
ddriver - Sunday, March 19, 2017 - link
It is not the human intellect that makes human drivers unsafe, it is the lack of it ;) There is no reason why you can't have the best of both worlds, except that scientists cannot really replicate the human intellect.
versesuvius - Monday, March 20, 2017 - link
As it is, the best of any world is measured by how much money is made by any "world", which is what all the noise about self-driving cars is really about. Autopilot on planes has been there for ages, but the pilot is still not allowed to leave the cockpit while it is on. So, I would give the idea a second thought if all those who make these cars, and their families and their employees, used these cars exclusively for ten years.
"Autonomous" systems the CIA can hack, yay. :D