Hackers and Logic Bombs on Computer Systems - What About Logic Bombs on Humans?

A logic bomb is malicious code that lies dormant until some specific event triggers it; think of it as a hidden hot-key that fires when you ask the computer to do something. At that point it executes its payload, anything from deleting program files to destroying saved data. Worse, a logic bomb can also be triggered by a negative event, something that does not occur: for instance, you fail to do something by a certain time, even if you don't know what that something is.
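The trigger-then-payload pattern described above can be sketched in a few lines. This is a deliberately harmless illustration with hypothetical names (`TRIGGER_DATE`, `check_trigger`); a real logic bomb would hide a destructive payload where this one merely reports itself, and the date check stands in for whatever condition the attacker chooses.

```python
from datetime import date

TRIGGER_DATE = date(2030, 1, 1)  # hypothetical trigger condition

def payload():
    # A real logic bomb would do its damage here (delete files, wipe
    # data); this harmless sketch only reports that the trigger fired.
    return "payload executed"

def check_trigger(today):
    # "Negative event" variant: the bomb fires because something did
    # NOT happen in time (the deadline passed without intervention).
    if today >= TRIGGER_DATE:
        return payload()
    return "dormant"
```

The point of the sketch is how innocuous the dormant code looks: until the condition is met, nothing observable happens.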

Now then, what about human logic bombs? Can they exist too? Yes, they do, and those who study Freud will understand this immediately: some event triggers a certain type of behavior in an individual because of a very powerful imprinting of neural pathways in the person's past history or childhood. It can also happen in adults; think of PTSD, for instance.

The other day I was discussing this with a PhD psychologist and sociology professor at a Starbucks near the university. We got onto the topic of neurology and computers with regard to artificial intelligence, and our conversation progressed to the point where I asked a question: "What happens when a thinking computer is given a philosophical conundrum or circular argument? It cannot figure out the answer, but it would be triggered to search and reason, and it would get stuck, going in circles, unable to resolve it."

Indeed, I suppose we can call this the ultimate logic bomb for an artificially intelligent computer. It wouldn't be the first time something like this has happened: among humans it happens all the time with very intelligent people who study philosophy. They reach a point where they cannot reason something through, which leads them into a circular argument. Many philosophers in history have arrived at such points and been unable to resolve them, and so they spend their entire lives dedicated to solving a single mental problem, writing thousands upon thousands of pages on the topic and often never reaching an adequate answer.

Sometimes these philosophical conundrums are left to those of us in later periods. But what happens if this occurs in a thinking, artificially intelligent computer? It would go around in circles and be of little value, searching its memory banks for every possible connection across all of its databases, trying to find an answer it can never reach with 100% certainty. So if someone wanted to sabotage a future artificially intelligent computer, all they would have to do is ask it questions that cannot be answered. Yes, that would be the ultimate logic bomb. Please consider all this.
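One obvious safeguard against this kind of sabotage, sketched here with hypothetical names (`bounded_search`, `never_answers`), is simply to cap the search: if the machine cannot resolve a question within a budget of steps, it gives up rather than spinning forever. This is an assumption-laden illustration, not a description of any real AI system.

```python
def bounded_search(question, answer_fn, max_steps=1000):
    """Try answer_fn repeatedly; give up after max_steps rather than
    looping forever on an unanswerable (circular) question."""
    for step in range(max_steps):
        answer = answer_fn(question, step)
        if answer is not None:
            return answer
    # The question may be a "logic bomb": stop rather than spin forever.
    return "unresolved"

# A stand-in for a circular conundrum: the reasoning routine never
# produces an answer, so the bounded search terminates with "unresolved"
# instead of running indefinitely.
def never_answers(question, step):
    return None
```

The design choice here mirrors how real systems defuse the halting-style trap: a timeout or step budget trades completeness for the guarantee that an unanswerable input cannot consume the machine.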



ALL NEWS TODAY © PoznaiSebya.Com