
Q&A: Computer Logic is not Human Logic

Hi! A question inspired by the androids of Detroit: Become Human. If an otherwise human android (or gynoid) had only faster reflexes (and an inability to feel pain), being able to compute the best possible approach in any hand-to-hand combat situation from move to move, how much of an advantage would that be? Is there an advantage to human unpredictability, or can melee combat be optimized by artificial intelligence?

Have you ever played chess against a computer?

They cheat. They don’t even cheat intelligently, they just cheat. They go right for the jugular, and the “game” is over in one, maybe two, moves. An android in combat is going to do the same thing, in that it will do precisely what you programmed it to do, and the logical outcome is going directly for instant death every. single. time.

Total neutralization of the threat before they have time to react.

Well, that’d be after the AI realized it couldn’t just not fight or put the world on pause forever. Or it might just shut itself down after activation, like that security robot which committed suicide in a fountain. Not fighting is winning; you can achieve victory by never fighting, or by simply shutting down. However, if you must fight, immediate total obliteration is the optimal approach under conventional ideas about violence. You cut your enemy off at the knees, act preemptively the moment you register the situation, act before the enemy has time to get their pants on, and knock them off the proverbial cliff via straight-up murder.

The computer does not distinguish, the computer does not regulate, the computer does not care. The computer is doing exactly what you told it to do and subtle nuance like deciding whether one crime is worse than another is beyond it. You told it to deal with a threat, the threat has been dealt with in the most efficient way possible regardless of future consequences. The computer wasn’t programmed to consider those.
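To make that concrete, here’s a toy sketch of the kind of naive “neutralize the threat” objective I’m describing. The action names and risk numbers are invented for illustration; the point is that nothing in the objective penalizes lethality, so lethal preemption wins every single time.

```python
# Toy threat-response policy: pick whichever action minimizes risk to the
# android. Action names and risk estimates are invented for illustration.
ACTIONS = {
    "wait_and_observe":     {"risk_to_self": 0.60},
    "block_and_counter":    {"risk_to_self": 0.35},
    "disable_non_lethally": {"risk_to_self": 0.20},
    "preemptive_lethal":    {"risk_to_self": 0.02},
}

def choose_action(actions):
    # The objective is exactly what it was told: minimize risk to self.
    # There is no term for morality, law, or proportionality, so the
    # "optimal" answer is always the most violent one.
    return min(actions, key=lambda a: actions[a]["risk_to_self"])

print(choose_action(ACTIONS))  # -> preemptive_lethal, every time
```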

Now, I know that some of you are going, “but what if it was?”

Well, let’s be honest, this is a perfectly logical, reasonable, rational solution that plenty of real people have already come up with. Plenty of self-defense professionals will tell you that the best, least risky, and ultimately safest solution is recognizing the threat before it materializes and acting first. What holds us back are two sets of mores: moral and social. This is not a morally or socially acceptable method of dealing with other human combatants.

Let us remember, you asked for the most efficient hand-to-hand solution, not the most socially acceptable one.

That method is sudden, violent murder. The computer will then escalate from there into preemptive action… like murdering all humans everywhere because that will definitively end the threat humans pose to each other.

This is why Isaac Asimov’s Three Laws of Robotics exist.

Computers have trouble with complex moral quandaries and subtle nuance when it comes to decision making. You just don’t want them to be able to hurt people.
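In code terms, the Three Laws work less like nuanced moral judgment and more like a hard filter bolted on top of the objective. Continuing the invented toy example from above:

```python
# The First Law as a blunt, hard constraint rather than nuanced moral
# reasoning. The harms_human flags are invented for illustration.
ACTIONS = {
    "wait_and_observe":     {"risk_to_self": 0.60, "harms_human": False},
    "disable_non_lethally": {"risk_to_self": 0.20, "harms_human": True},
    "preemptive_lethal":    {"risk_to_self": 0.02, "harms_human": True},
}

def choose_action(actions):
    # First Law filter: throw out anything that injures a human being.
    permitted = {a: v for a, v in actions.items() if not v["harms_human"]}
    # Then optimize as before.
    return min(permitted, key=lambda a: permitted[a]["risk_to_self"])

print(choose_action(ACTIONS))  # -> wait_and_observe
```

The filter is binary because the binary part is easy. Weighing how much harm is acceptable, and when, is exactly the nuance computers are bad at.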

This, of course, is predicated on the idea that the programming works and the android can actually predict “the best possible” solution in hand-to-hand combat at a speed rapid enough to keep up with a human. (Which is why I say “preemptive instant death”: the computer will quickly figure out that this is the least risky approach, and the one requiring minimal overall computing power.) Hand-to-hand combat has a myriad of complex permutations and approaches which would be extremely difficult for a computer to keep up with (see the back-of-envelope sketch after this paragraph), and the android could only work with what it was programmed to know. With a learning algorithm of some sort, it’d be a kludgy approximation of a person, ultimately slower and less capable. Its inability to “feel pain” would actually be a detriment. Working through pain is what teaches humans to ignore it, to know when they’ve reached their limit, to recognize when they truly are injured, and to discover which pain actually matters.
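For a sense of scale on those permutations, here’s the back-of-envelope sketch I promised. Every number in it is invented; the point is the growth rate, not the exact figures.

```python
# Back-of-envelope game-tree arithmetic. Branching factor and depth are
# guesses; the takeaway is how fast the search space explodes.
branching_factor = 50   # plausible distinct moves per exchange (a guess)
depth = 10              # exchanges to look ahead

states = branching_factor ** depth
print(f"{states:.2e} states to evaluate")  # -> 9.77e+16

# At a billion evaluations per second, that's years of computation for a
# decision that has to happen in a fraction of a second.
seconds = states / 1e9
print(f"{seconds / 3.15e7:.1f} years at 1e9 evals/sec")  # -> 3.1 years
```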

This quality, learning to work through pain, is often ignored by popular media outside of sports films, war movies, and fighting anime, but pain is extremely important to a combatant’s development. Pushing past pain is how you break through the mental barriers in martial arts training, and breaking through those barriers is key to developing conviction, determination, courage, and general grit. You don’t just train your body; you train your mind and your spirit. By going through difficult and frustrating experiences, you grow and get stronger. That mental and emotional strength is what we use to push past our limits, to achieve new heights, and to keep going when we’re certain we’re spent.

During training, you push past pain, past exhaustion, past your own insecurities, your self-defeat. You stand up. You keep going.

This quality? This comes from facing and defeating yourself, your own internal expectations of yourself and your own strength. You get past the first hump, and every hump you get past after that is a little easier even when the trials you face are more difficult.

The “One More Lap” mentality is the Determinator.

This is the difference between the mediocre student who showed up every day and worked their butt off to get better versus the talented student who was content to coast on their genetically gifted laurels.

This inner quality, earned by blood, sweat, and tears, is the foundation of every single champion.

It’ll screw up an algorithm.

And that’s why the computer cheats.

Against an overwhelming threat, the computer will react to protect itself the way anyone else would. Like so many other humans before it, the computer reduces risk to the smallest possible margin by turning to other options. It ultimately settles on the safest solution: preemption, and if not preemption, then rapid escalation into brutality and murder.

If at any point during this post you went, “but no, that’s wrong!”

Exactly.

That’s error checking your computer can’t do.

More than that, you can’t program a computer to work off information you don’t have and it doesn’t know. You can’t program the computer to “find the best solution in any hand-to-hand scenario” because you can’t program it with all that information. You won’t have access to nearly all the necessary information, and the possibilities are too numerous. Even if you equip your computer with a magical learning algorithm, it will only have access to the information it has experienced. The computer does not have the ability to be prescient.
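Here’s the deliberately crude, literal version of that limit, with invented state names. A policy learned purely from experience is, at bottom, a mapping from situations it has seen to responses, and it has nothing to say about a situation it has never seen.

```python
# A learned policy is, at its crudest, a table of experienced situations.
# State and action names are invented for illustration.
learned_policy = {
    "opponent_throws_jab":      "slip_and_counter",
    "opponent_shoots_takedown": "sprawl",
}

state = "opponent_swings_trash_can_lid"  # never in the training data
print(learned_policy.get(state))         # -> None: no answer at all

# A real learning system generalizes rather than failing outright, but it
# still only generalizes from what it has experienced. It can't be
# prescient about situations outside that experience.
```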

I mean, just look at all the actual AI experiments out there. Computers are very good at some aspects of problem solving and terrible at others. Check out this video where an AI plays Tetris and, in order not to lose, pauses the game right at the end. It can’t lose now; it’s indefinitely paused. Computer problem solving is different from human problem solving in some very fascinating and, in some cases, extremely literal ways.
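That Tetris result falls straight out of the objective. Here’s a minimal sketch; the scores are invented, but the logic mirrors the reported behavior. If the goal is “don’t lose,” pausing forever satisfies the goal perfectly.

```python
# Why the Tetris AI pauses: a toy objective that only punishes losing.
# Values are invented; the logic mirrors the reported behavior.
def utility(action):
    if action == "place_piece_badly":
        return -100  # leads to game over: heavily penalized
    if action == "place_piece_well":
        return -1    # merely delays the inevitable
    if action == "pause":
        return 0     # never lose again: literally the best score
    raise ValueError(action)

best = max(["place_piece_badly", "place_piece_well", "pause"], key=utility)
print(best)  # -> pause
```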

Violence is very simple in some ways, but extremely complex in others. There are the moral and ethical quandaries, such as when use of force is necessary, but also complex kinetic motions requiring supremely good coordination to perform. This is the kind of force generation that’s very difficult to program because there are a lot of moving pieces, and those pieces are several steps beyond just programming the android to pick up objects, walk, or run.

The Terminators are the way to go. They don’t fight in conventional hand-to-hand; they just throw, flick, and crush on their way to victory. They have that option: they’re durable, most conventional damage won’t slow them down, and they’re choosing motions that aren’t mechanically complex. After all, why program the android to perform a 540 kick when it can throw someone through a wall? Easy, effective, involves fewer moving parts, and there’s ultimately less risk of damage.

The problem with Detroit: Become Human is that the androids are in the hands of a human player. They’re being controlled by a person, so, of course, they behave like people. Games where you play the android are a terrible exploration of whether or not a computer can feel empathy. Think instead about the NPCs in all your other video games. How do they behave? What do they do? There are plenty of learning AIs in strategy games, and a lot of them cheat.

So, could a human fight this potential android and win?

Yes, fairly easily, not only because humans also cheat, but because our brains prioritize the accumulation of data a computer will ignore. Information about the environment, for example. Developing tactics for utilizing that environment during combat is another. We call this the “Let Me Hit You With A Trash Can Lid” approach. You can look at your environment and see items in it that you can use as weapons. The computer? The computer is going to ignore those. A human can also anticipate secondary and tertiary consequences to their actions, which means their decision making is ultimately different, and it is very difficult to anticipate an enemy you don’t understand. Programming a computer with martial arts techniques is one thing; programming the computer to understand what people might do with those techniques is a different process altogether, and programming the computer to perform all those techniques (if it can even gain access to the full spectrum) is going to give some poor robotics expert a real headache.
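One way to see the trash can lid problem (this sketch is entirely hypothetical): the computer can only reason over whatever features someone put into its state representation, and “improvised weapon over there” usually isn’t one of them.

```python
# What the android "sees" is whatever its designers encoded. A state
# like this (entirely hypothetical) has no slot for a trash can lid,
# so no policy over it can ever decide to pick one up.
combat_state = {
    "opponent_distance_m": 1.2,
    "opponent_stance": "southpaw",
    "own_balance": 0.9,
    # environment? improvised weapons? bystanders? not represented,
    # therefore not part of any decision the system can make.
}

print("trash_can_lid_nearby" in combat_state)  # -> False
```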

I got a headache just thinking about it.

-Michi

This blog is supported through Patreon. If you enjoy our content, please consider becoming a Patron. Every contribution helps keep us online, and writing. If you already are a Patron, thank you.