Artificial Intelligence: Practical application

In summary, AI is a field of research that deals with the creation of computer software that "behaves intelligently" in some sense. It has not yet produced anything that everyone agrees on as true intelligence, but there are many applications today that provide value to businesses, researchers, and Sally Public alike.
  • #1
caumaan
What exactly is artificial intelligence, what are some of its practical applications, and have we created it yet?

Please no arguments that humans are not intelligent! If it comes to that then this thread is useless.
 
  • #2
Artificial Intelligence is a research topic. The object is to create computer software/hardware that behaves intelligently, in some sense of the word. It takes various forms (top-down, neural network, etc.) and it has not yet produced anything that everyone agrees on as true intelligence. Some philosophers bitterly oppose it, holding that only the unique human mind can be truly intelligent.

An obvious application is the computer chess programs that can beat world champions.

And I believe sophisticated search engines like Google have some relationship to AI research.
 
  • #3
AI is impossible

From a philosophical point of view, I think AI is impossible. How can computers reason? There is no mathematical formula to calculate reason. Also, what is intelligence? Going by Plato's theory of knowledge, is knowledge the same thing as intelligence? What makes a being intelligent? I think that it all depends on your point of view, but there is no way around it. How can there be artificial reason? "Who decides reason? What is logic?" as John Nash once said.
So, philosophically, in my point of view, AI is impossible. Comments? Questions? Suggestions?
 
  • #4
"AI" - a more modest concept

After the initial buzz and hubris subsided, and real research into what 'human intelligence' actually is got under way in earnest, we learned that the devil truly is in the details. Along the way we discovered that what we thought would be a near-trivial problem - simulating 'common sense' - turned out to be richly complicated; common sense is, in fact, anything but 'common'.

In terms of how the phrase AI was couched many a year (decade?) ago, there are a number of AI applications today, providing value to businesses, researchers, and Sally Public alike. Some examples:

- machine translation, of which AltaVista's Babelfish is an example. Sure, it leaves a lot to be desired, but it is a form of AI as it was originally conceived

- automatic voice transcription - you say the words, the 'AI' prints out what you said

- agents and bots, like what Google uses, for example. They are used in many different applications, from travel sites (e.g. Expedia) to job searching (e.g. Monster) to some network management systems

- expert systems. These codify the logic or knowledge (or both) of human experts in narrow domains of knowledge, and provide valuable assistance or advice. There are many examples, perhaps the most lucrative are those used to identify arbitrage opportunities in various financial markets; it's likely true that they produce better long-term results than highly paid professionals. Fraud management systems are another example which delivers valuable results.

- autonomous systems. Perhaps now more an area within robotics than AI, but it was once thought to be an AI objective. Perhaps the most interesting examples are Spirit, Opportunity, and Beagle 2. These Mars landers are designed to maneuver their way around (a small part of!) Mars, making decisions on what to do, where to go, and how to get there without recourse to their human masters.
 
  • #5
People tend to think of AI in the 'Skynet'/'Terminator' sense: making a machine self-aware, or at least making one that can pass the Turing Test and make us *think* that it is a human.

However, that's just one field of AI. Look at the animal life in your back yard; is your dog intelligent? Yes, it acts intelligently. But does it talk, act and think like a human? No, it doesn't. It doesn't need to in order to fulfil its purpose.

Concordantly, one immensely useful aspect of AI is making software that *acts* in an intelligent manner; which reasons things out according to prior experience rather than a strict unchanging program. The uses? Well, don't you get sick of updating your virus scanner every few days? Why not just create a machine that could look at the code of a file and think: 'Hey, that looks pretty nasty to me! It has all the stuff that I've seen in viruses before... I'd say this is a virus', and then tell you, 'Hey Chris, I'm about 90% certain this is a virus. Should I kill the fella?'
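That "looks like a virus" judgement can be sketched as a simple feature score. Everything below (the traits, the weights, the threshold) is invented for illustration and is not how any real scanner works:

```python
# Hypothetical heuristic scan: score a file by how many suspicious
# traits it shares with previously seen viruses, then report a
# confidence rather than a hard yes/no.

# Illustrative traits and weights (all made up)
SUSPICIOUS_TRAITS = {
    "writes_to_boot_sector": 0.5,
    "self_replicating_code": 0.3,
    "hides_from_process_list": 0.2,
}

def virus_confidence(traits_found):
    """Return a 0..1 confidence that the file is a virus."""
    return sum(SUSPICIOUS_TRAITS.get(t, 0.0) for t in traits_found)

found = ["writes_to_boot_sector", "self_replicating_code"]
score = virus_confidence(found)
if score >= 0.5:
    print(f"Hey Chris, I'm about {score:.0%} certain this is a virus.")
```

The point of the sketch is the shape of the reasoning: a graded confidence built from resemblance to things seen before, instead of a fixed signature lookup.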

How useful it would be too, for the FBI to have a program that could *think* about how to crack an encryption code rather than mindlessly brute-forcing the keys.
 
  • #6
Concordantly, one immensely useful aspect of AI is making software that *acts* in an intelligent manner; which reasons things out according to prior experience rather than a strict unchanging program.
While 'reasoning' might go a bit far, neural network-based AI apps do learn; I believe their designers call the period before they let such systems out into the wide world 'training'.

Examples? IIRC, some of the better fraud-detection systems used by banks, credit card companies and the like have neural-network components. Similarly, I'd not be surprised if Norton, McAfee, et al employed such systems internally for their work on virus detection and analysis.

In astronomy, I remember reading of a program which could reliably and consistently assign a Hubble class to galaxy images, and which was judged more accurate than all but the most experienced humans (it was, of course, much faster than the humans, could work 24 hours a day, and didn't draw a salary!)

It may not be all that sexy, but progress is more likely to be made in small, incremental steps than in some Headline News breakthrough.
 
  • #7
From a philosphical point of view, I think AI is impossible. How can computers reason? There is no mathematical formula to calculate reason.

You and I can reason through something, e.g., should a person be punished with death for killing another human being? We unconsciously and almost instantaneously decide yes or no depending on certain circumstances. Was the act a mistake, such as an accident? How old is the person that actually did the killing? Was the act habitual or a one-time event? Does the killer feel justified (self-defense, vengeance, a preemptive attack, etc.)? Does the killer feel remorse? There may be hundreds of other questions that you unconsciously answer before deciding whether the killer should be put to death or not.

I feel that a computer can be programmed to reason its way through a similar situation. You and I have had years to take in information (increasing our knowledge base), and we have had years to hear different reasons why a person should or should not be put to death (rules, or more appropriately, guidelines to follow). At this point we do not have a true "thinking machine". We have machines that can follow a set of instructions, guidelines, or rules. Some of those instructions may allow the machine to alter its own instructions or guidelines, or to add information to or remove information from its knowledge base, so that it performs differently than it was originally programmed to perform, ergo, learn.

We could devise a program that could decide whether or not to put someone to death. First we would have to decide (very broadly) whether an act actually merits death. Someone committing a crime where no one was hurt or no one died may be the first branch in the decision. Next, we may take into consideration some or all of (but not limited to) the questions listed above. The program could prioritize the questions and set a point value on the answers. Last, the program "weighs" the final value of the answers to determine if the death penalty should proceed.

Now, of course, this is a very simple and incomplete answer to the problem, but it does (I hope) open a door in your mind to the idea that we could give computers enough information and some sort of dynamic guideline system to replicate, or at least begin to replicate, the reasoning process. This may be years or decades away, but I think it is possible. With that said, I showed you mine, now show me yours.
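The weighted-questions scheme described above can be sketched in a few lines. The questions, weights, and threshold here are all made-up illustrations, not anything from a real system:

```python
# Hypothetical weighted-decision sketch: each question gets a weight,
# each answer a score; the weighted sum is compared to a threshold.

# Illustrative mitigating factors and their weights (all made up)
WEIGHTS = {
    "was_accidental": 5.0,
    "is_minor": 4.0,
    "acted_in_self_defense": 4.5,
    "shows_remorse": 2.0,
}

def decide(answers, threshold=6.0):
    """Sum the weights of the mitigating factors that apply.

    answers maps each question to True/False; once the accumulated
    mitigation passes the threshold, the penalty does not proceed.
    """
    mitigation = sum(w for q, w in WEIGHTS.items() if answers.get(q))
    return "do not proceed" if mitigation >= threshold else "proceed"

print(decide({"was_accidental": True, "shows_remorse": True}))  # mitigation 7.0
```

A "dynamic guideline system" in the sense of the post would then adjust the weights themselves as new cases and arguments come in, rather than leaving them fixed.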

Jeremy
 
  • #8
At least a limited amount of AI is important in any system that has to operate autonomously in a variable, unpredictable environment. That's probably not a bad definition of AI, but even that might be too restrictive.

Some people aren't willing to use the label of AI unless a computer does everything, but lots of systems work in ways that are definitely intelligent, often surpassing the abilities of people performing the same tasks. One example is scanning X-rays for suspicious growths that may be cancerous. There are lots of expert systems doing all sorts of interesting things.

AI is making progress, maybe slow progress, but in an evolutionary manner. Perhaps all they need to do is to be networked together.
 
  • #9
It's kind of an oxymoron in my opinion though... or maybe a paradox, not sure if that's the right word... contradiction, there we go.

The only way to have AI truly accepted by everyone but the most scrutinizing individuals is to have the code be able to 'learn' for itself... and that INCLUDES guidelines. There are too many variables in reality for any one man, or even 'all' men, to be able to program into a machine, which is what Michio spoke on on Tech TV, I believe.

So if you set guidelines the machine is never truly intelligent; it will always simply be following orders, and even if it learns guidelines for itself it's still debatable that it's following orders.

MythioS
 
  • #10
Definitions of AI from 8 books:

1) The exciting new effort to make computers think... machines with minds, in the full and literal sense.

2) The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning...

3) The act of making machines that perform functions that require intelligence when performed by people.

4) The study of how to make computers do things at which, at the moment, people are better.

5) The study of mental faculties through the use of computational models.

6) The study of the computations that make it possible to perceive, reason, and act.

7) Computational intelligence is the study of the design of intelligent agents.

8) AI... is concerned with intelligent behaviour in artifacts.

Definitions 1 and 2: this category is about machines thinking like humans.
Definitions 3 and 4: this category is about machines that act like humans.
Definitions 5 and 6: systems that think rationally.
Definitions 7 and 8: systems that act rationally.
 
  • #11
Hey Adam,

surely a concern is that if we design AI too much like human minds/human brain architecture, then we might get to see AI malfunctions in the form of paranoia, schizophrenia, etc. It's opening a bottle to a whole new genie (not that I am intrinsically against it).
 
  • #12
Hi Funkyjuice.

I don't see it as a problem at all. I just did a semester of AI, and I was actually quite disappointed with it. The course focused on computational methods which I personally don't consider to be worth the title of "AI". We studied search methods and such, all manner of things which can apply in any other area of computer studies and are not specific to AI. The core of AI discussion and development, to me, is sorting out the "why". Why a machine will choose one thing over another, do one thing rather than another, et cetera. And this was not covered at all in my course.

I have this idea that we will be one massive step closer to developing true AI when we have formulated a basic set of logical instructions on which all judgements will be based. For example:
  • Multiple entities with varying capabilities can achieve more than a single entity. In other words, co-operation is a good thing.
  • Killing off others produces negative effects (like others coming after you to kill you in return), so it should be avoided.
  • "I exist."
  • "The world outside my mind exists." (I feel this one is necessary and should be hardwired in. The other option is to show the machine that accepting the world, rather than solipsism, is basically a safer bet.)
 
  • #13
I think your last two are the key to true AI.

"I exist."

"The world outside my mind exists."

If we can start to postulate an 'm-theory' of the mind that can correlate experiences like the forces of nature, it will give us the foothold we need to translate real-world events into a formula that can then be manipulated.

MythioS
 
  • #14
My problem with the last point is that we as humans can't even prove that to be true... from Sheldrake's theory of morphic fields to the more mundane five senses, our "world outside" is only created from the assembly of information we are given... V.S. Ramachandran has shown how quickly we can fool the "body schema", so how can we hardwire a principle into a machine that we ourselves don't understand? We are not even sure how, or even quite why, the feeling of "self", and the "self"'s relationship to the outside world, works in humans.
 
  • #15
Funkyjuice.

It doesn't matter that we can't prove that point to be true. The fact is we must act as though it is true. Otherwise you might as well go for solipsism, believe you can fly, and go jump off a building. You'll splatter all over the ground. Natural selection will result in the end of solipsists and the continuation of those who accept that the world around them is real. In other words, it is a safer bet. This must be explained to a computer.
 
  • #16
Hey Adam,

That's all great in theory, but in practice, unless you understand how it works in humans, how are you meant to emulate this concept in an AI environment?

Merry xmas all
 
  • #17
It's a simple logical choice. Demonstrate the logic to a machine. Show it what happens to another computer that chooses the solipsism option, with a hammer if necessary.
 
  • #18
You have to think of it more along the lines of quantum uncertainty and Darwinism. You don't need to tell a set of robots how to build a car; eventually, through the course of uncertainty, they'll figure it out. You simply have to give them the power to learn through the senses and then form opinions based on that input.

Somehow, possibly by correlating their experiences through a spatial reference, eventually they would realize that tactile sensory input is only achieved in their closest proximity. Whether they realize that that close-proximity grid is them or not doesn't really matter. The fact is that it would see those occurrences happening more often and would relate information based upon the locations of those events more often than anything else, and hence voila, zip bamboo or something.

Kind of like: give the robot the ability to move within three dimensions, but attach all of its experiences to a separate dimension of its own: time. Each bit of information would have a time piece encoded on it, along with a spatial coordinate system of its own. Eventually the program would learn, with enough practice, that in order for it to move, it has to keep certain opinions while other opinions must be let go.

Don't get me wrong, I see where you're coming from, but the problem with that type of philosophy is that we'll never be able to do it, because we aren't God and we can't hand out the 'essence' of a big bang to a new life form, so we'll just have to judge within our limits of creation. And personally I'd like to see it come true.

MythioS
 
  • #19
What exactly do you mean by "uncertainty"?
 
  • #20
I appreciate the reply guys...

I understand the philosophy of AI... but surely we must learn more about our own neurology first (I'm not saying understanding consciousness is impossible... just hard at the mo)... before trying to emulate it in a machine... or we might once again behold the "trick" that is consciousness and yet still not know how it's done...
 
  • #21
Artificial Intelligence (AI) is an experimental science whose goal is to understand the nature of intelligent thought and action. This goal is shared with a number of longer established subjects such as Philosophy, Psychology and Neuroscience. The essential difference is that AI scientists are committed to computational modelling as a methodology for explicating the interpretative processes which underlie intelligent behaviour, that relate sensing of the environment to action in it. Early workers in the field saw the digital computer as the best device available to support the many cycles of hypothesizing, modelling, simulating and testing involved in research into these interpretative processes, and set about the task of developing a programming technology that would enable the use of digital computers as an experimental tool. A considerable amount of time and effort over the last 35 years or so has been given over to the design and development of new programming languages, tools and techniques. While the symbolic programming approach has dominated, other approaches such as non-symbolic neural nets and genetic algorithms have also featured strongly, reflecting the fact that computing is merely a means to an end, an experimental tool, albeit a vital one.

-- Jim Howe

Source

I think that in order for AI to be possible, it would have to consist of several applications:

×game playing
×speech recognition
×understanding natural language
×computer vision
×expert systems
×heuristic classification

and then there are the branches of AI:

a) logical AI
b) search
c) pattern recognition
d) representation
e) inference
f) common sense knowledge and reasoning
g) learning from experience
h) planning
i) epistemology
j) ontology
k) heuristics
l) genetic programming

Source

While three out of the six are doable (referring to the top applications), the others would just take a leap in general technology, which I don't believe is possible in the near future. I suppose maybe 100 years, but anything within the grasp of high-tech science is just not ready for development.

But, on the contrary, I do have a question that I would like to hear an expert opinion on. Would or could it be possible that an A.I. bot or "being" could possess a structure or knowledge of intelligence quotients (IQ)? If so, how would scientists use 'some type' of method to construct this application in a bot?
 
  • #22
Which of those six do you think are so difficult? AFAIK all six are being done now; some expertly (game playing) and some less so (speech recognition), but all at better than beginner level. As for computer vision, they have built a gadget that replaces the visual cortex of a chimp, and it works!

As for IQ, I would bet that within ten years they will have software that can take an IQ test, whether written or Raven's matrices, and score higher than Marilyn vos Savant. But would it be intelligent? I firmly believe this: if humans work hard enough to define a kind of problem, they can write software that will solve any problem of that kind. What is much, much harder is to make software that solves kinds of problems people haven't thought of.
 
  • #23
and here my ignorance is displayed ... particularly with this maybe naive next question...

Even if we hardwire empathy... even if we can make a logical framework for the machine to think of itself as a "self"... would a machine (with the five senses we have available) be able to make a suitable guess at the truth behind even a childlike lie?

Would a machine be able to understand that not all information is real, such as an auditory lie or a visual illusion? Would it be able to fill in the blanks?

.. back to my previous point somewhat echoed earlier... what’s the point in trying to understand the "truth" behind the actors without realising the puppeteer has his hand up their behinds... surely we can't be saying that the answer to who and why we are lies inside a machine pretending to be us?
 
  • #24
A machine can easily tell if a human is lying by various clues we give.

A machine can tell if a machine is lying if the lying machine says something which goes against previously established knowledge, i.e. if the lying machine says Jupiter is a doughnut.

Beyond that, perhaps machines will have to do what we humans do: trust, or be doubtful.
 
  • #25
Thanks Adam.. but maybe with your knowledge you can explain how they will do this...

You and an AI machine meet me and say "Hey Alex, how are you today?"; I look downwards with a frown and say "I'm doing well"... now you and I, Adam, know the guy isn't doing well... how will the machine begin to pick up these subtle nuances in communication?
 
  • #26
Another expression (I'm sure there are thousands):

"I feel blessed"

is the machine simply meant to understand that as the individual feeling lucky (another trickyish subject),

or is it meant to understand the concept of a deity (whether it believes or rejects it is irrelevant for this) and assume the human is feeling genuinely blessed by the "gods"?

or even a combination of both...
 
  • #27
"So if you set guidelines the machine is never truly intelligent; it will always simply be following orders, and even if it learns guidelines for itself it's still debatable that it's following orders."

MythioS

It can always be pointed out that humans themselves have orders or guidelines that they follow, programmed into their DNA, commonly referred to as instinct.

A friend and I once discussed the possibility of an AI gathering impressions of information rather than whole information. For example, rather than coming across a chair and making an exact memory of it, including precise dimensions, materials and the like, it could perceive the chair and recognize it as such based on comparison with previous impressions of shapes and textures. Perhaps this chair has a construction that resembles a ladder, and the AI has a robotic 'body' that is capable of climbing a ladder, so it can reason that while this object is a chair it may be usable as a ladder. Perhaps this is already done?
 
  • #28
Originally posted by Funkyjuice
Thanks Adam.. but maybe with your knowledge you can explain how they will do this...

You and an AI machine meet me and say "Hey Alex, how are you today?"; I look downwards with a frown and say "I'm doing well"... now you and I, Adam, know the guy isn't doing well... how will the machine begin to pick up these subtle nuances in communication?

The computer has cameras, and a programme for interpreting human reactions. When the eyes break contact when answering a question like that, it gives a higher score toward the probability of it being a lie.
 
  • #29
I think this has been said, but I think that all a computer has to be able to do is learn and adapt. After all, that's how WE learn. For example, a young baby cannot tell a lie from the truth, but through time and interaction it eventually will. I guess that should be all. We do not acquire common sense from some preprogrammed genetic code; we learn it as we grow. Of course, one has to equip a robot with the necessary machinery to do the job, e.g. cameras, a microphone, and other devices that can emulate the body's functions.

Well...might be wrong, but that's what I think.
 
  • #30
Hey all,

Yes, we do learn as infants, but there is a strong belief that these are innate abilities, written into the genes; thus Nurture works in cooperation with Nature. A machine has a clearly limited Nature (those instructions written into the source code); we, on the other hand, have evolved into a situation where we are predisposed towards these "learned behaviours".

Further to this, we have around 30 different areas of the brain just to receive visual information alone, from the older "how" pathway (superior colliculus) to the newer "what" pathway (lateral geniculate nucleus); these systems would be almost impossible to replicate in a machine at the present moment.

These problems I cannot see us overcoming with our relatively primitive attempts at AI at the mo; I personally feel this will only be possible when we more fully understand how our brain works, and probably not before.

Again just my 2 cents
 
  • #31
Theories

A lot of how you tackle a problem depends upon how you identify it, among other things. I guess I am coming at things from a liberal point of view, where I just would not give up, regardless.

Personally I don't believe much in genetics (when it comes to the thought process) or the brain. That's where I feel a higher reality is simply making an impression upon the physical world. What we see as sections of the brain that deal with certain aspects of reality, I perceive as in actuality the reverse: it is our 'souls' that are interpreting reality, and our brains are simply an echo of the translation between the physical and the metaphysical world.

But that's for another topic...

It all comes down to what you believe. If you truly believe that it's a combination of genetics and environment... then I can see there being a problem... but if, in the end, it all truly comes down to a person's choice of will, how they perceive the world around them and the impressions they get from actions around them... then all it will take is time.

MythioS
 
  • #32
Originally posted by ChrisDines

How useful it would be too, for the FBI to have a program that could *think* about how to crack an encryption code rather than mindlessly brute-forcing the keys.

This doesn't make sense. The program itself uses a set of algorithms to brute-force an encryption code. So it basically *thinks* by finding the right match through this sequence of algorithms.
 
  • #33
Well, to some extent, on the surface it may not make sense, but if you're talking about the methodology the program uses in order to derive the end result, it can.

On one hand you may have a program that, in order to find a password, would simply go through every conceivable combination of letters and numbers.

On the other hand you have a program that may have access to a database that stores general information about the individual: their family tree, their hobbies and interests, where they were born, etc.

Which one would be most effective? Well, I guess it depends on the situation, the level of complexity of the code, or whatever, but I don't think his point was redundant.
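The contrast between the two approaches can be sketched roughly. The `brute_force` and `informed_guess` functions, the personal facts, and the password below are all invented illustrations, not a real cracking tool:

```python
# Sketch of the two strategies: exhaustive brute force vs. an
# 'informed' guesser that builds candidates from known facts
# about the person. (All names and data here are made up.)
import itertools
import string

def brute_force(check, alphabet=string.ascii_lowercase, max_len=4):
    """Try every combination up to max_len; return the match or None."""
    for n in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=n):
            guess = "".join(combo)
            if check(guess):
                return guess
    return None

def informed_guess(check, facts):
    """Try candidates derived from personal facts first."""
    candidates = []
    for fact in facts:
        candidates += [fact, fact.lower(), fact + "123", fact[::-1]]
    for guess in candidates:
        if check(guess):
            return guess
    return None

secret = "rex123"  # hypothetical password: the family dog plus digits
facts = ["rex", "1969", "chess"]  # hypothetical personal information

print(informed_guess(lambda g: g == secret, facts))
```

The informed guesser checks a handful of candidates where the exhaustive search would churn through millions; which one wins depends, as the post says, on the situation and on how predictable the password actually is.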

If you don't give the situation the benefit of the doubt that mankind and its nervous system are basically a set of electrical signals that can be translated into 0s and 1s, then you're always going to find something wrong with the computer trying to tread on the territory of mankind.

MythioS
 
  • #34


Originally posted by TARSIER
From a philosophical point of view, I think AI is impossible. How can computers reason? There is no mathematical formula to calculate reason. Also, what is intelligence? Going by Plato's theory of knowledge, is knowledge the same thing as intelligence? What makes a being intelligent? I think that it all depends on your point of view, but there is no way around it. How can there be artificial reason? "Who decides reason? What is logic?" as John Nash once said.
So, philosophically, in my point of view, AI is impossible. Comments? Questions? Suggestions?

YUP! Bottom line: I totally agree. I think once we have a true grasp of the human mind, we'll see that it is impossible to achieve true AI.
 
  • #35
Whistling in the dark.
 
