In summary, PF Advisors were asked for their thoughts on the rise of AI and its impact on STEM in various areas. Some advisors expressed skepticism about its potential, while others saw room for both positive and negative outcomes. Some pointed to the need for more clarity about what truly constitutes AI; others saw it as a valuable tool that can be used for beneficial purposes or misused for harmful ones. Overall, the advisors held mixed opinions and varying levels of interest in the topic.
  • #36
anorlunda said:
It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.
I wouldn't go that far. Otherwise, one could say that when a ball sitting at the top of a hill starts rolling down, it "decided" to roll down. It didn't. It just reacts to changes in its environment.

To me, intelligence is the capacity to identify patterns. I believe we can make machines that can do this, although I'm not sure how far we have gone into that domain yet. The intelligence behind the flyball governor doesn't come from the machine itself; it came from someone who saw a pattern between an object in rotation and the forces it created.

People's fear of AI comes from the fact that they link "intelligence" to "being alive". But there is no direct link between the two. Being alive is mostly about being able to replicate yourself. That relates to the concept of autonomy, the capacity to function independently. An entity doesn't need intelligence to replicate itself, and an intelligent machine that was designed to find patterns in, say, scientific papers will not "evolve" to replicate itself.

Even if we assume that an intelligent machine will evolve to replicate itself - and we are really far from that - some people worry that the machines will go on to destroy humans. But that is just a very pessimistic assumption. There are plenty of life forms on this planet and none of them has the goal of destroying other life forms. And from what we understand, diversity is important for survival, and there are only disadvantages to destroying other life forms. Why would a new intelligent life form (designed by us) come to a different conclusion?
 
  • #37
jack action said:
To me, intelligence is the capacity to identify patterns.
Therein is the problem with public discussions. Unless the discussion is preceded by rigid and boring definitions of common words, each participant uses his/her personal favorite definition and we wind up talking past each other. For example, define "decide" for all possible relevant contexts. We have something like an uncanny valley that poisons public discussions of AI.
 
  • #38
anorlunda said:
I prefer an extremely broad definition of AI. One that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device. It "decided" how to adjust the throttle to maintain nearly constant speed. The methods of making the "decision" are beside the point.


My reason is simple: if we replace some humans with machines, who cares if the machine is AI or not? Even that flyball governor displaced some intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.

This reminds me of a story I heard from an aquacultural engineer I know who makes/installs mostly automated water systems (they call them life support systems) for various kinds of fish facilities.
They had done an install in a Central or South American shrimp farm and went back after a year or so to check up on how it was working out (maintenance contract possibly).
They found a guy sitting on a chair in the pump room (where most of the equipment is). He was there because one of the automated valves had stopped working. He would manually control the valve when appropriate. This was a cheaper solution (his salary) than getting the valve fixed at the time.
My friend said he was a very loyal employee.

Maybe that's Artificial Machine Intelligence (AMI)!

Since I don't know a lot about the details of AI function and definitions, I also took the approach of addressing AI simply as increased machine smarts and proceeded from there.
 
  • #39
fresh_42 said:
Not really. The question is: What makes AI different from all classical algorithms? Is it a new property or only a sophisticated algorithm?
anorlunda said:
I prefer an extremely broad definition of AI. One that would include almost all technology and automation. I would include James Watt's flyball governor from the 1700s as an AI device...

My reason is simple: if we replace some humans with machines, who cares if the machine is AI or not? Even that flyball governor displaced some intelligent human workers who could have sat there with their hands on the throttle and made intelligent decisions to regulate it.

If we have technology providing benefits on one hand, risks on the second hand, and displacing human workers on the third hand, what difference does it make whether that technology deserves the label AI?

IMO the debate over what AI really means is a needless distraction. IMO claims that AI is a discrete approach or a "quantum leap" are false. Comparisons to human intelligence and the magic of cognition are not helpful. We will have a continuous spectrum of machine abilities, and of methods to achieve those abilities.
There's a decent chance you guys will like my response. I share the concern/annoyance about definitions and the pop-culture feel of the term, so I just drove over it and declared a broad definition to answer the question.

Although:
fresh_42 said:
...I am strongly against decision making by machines. It causes people to drive into rivers because the GPS tells them to.
That's a provocative statement. You do own/use a thermostat, don't you?

But otherwise I agree with the sentiment; the level of decision making we leave to machines can become problematic if it isn't done carefully, or if the machines or people aren't prepared for the shift in responsibility.
 
  • #40
jack action said:
I wouldn't go that far. Otherwise, one could say that when a ball sitting at the top of a hill starts rolling down, it "decided" to roll down. It didn't. It just reacts to changes in its environment.
What makes you think you are any different from that ball?
To me, intelligence is the capacity to identify patterns. I believed we can make machines that can do this, although I'm not sure how far we have gone into that domain, yet.
What kind of/complexity of patterns? Does a PID controller on a thermostat qualify? It learns the responsiveness of your HVAC system and adjusts the timing of the start/stop to avoid undershoot and overshoot.
 
  • #41
russ_watters said:
That's a provocative statement. You do own/use a thermostat, don't you?
These accidents with people driving into rivers actually occur. And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 Max. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
 
  • #42
fresh_42 said:
These accidents with people driving into rivers actually occur. And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 Max. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
I'm aware the accidents occur/the problem exists. My concern was that the statement seems extremely broad/absolute.

In my view, much of the current/recent problem is caused by a sort of "uncanny valley" for decision-making where the humans don't correctly understand the limitations of the automation or their (the humans') role in the algorithm.
 
  • #43
fresh_42 said:
I oppose automatic decisions.
I think you probably only oppose some decisions. If the 737 Max autopilot decided to turn itself off and return control to the pilot, I'm guessing you would approve. That's an automatic decision too. If my car decides to invoke the anti-skid features, or decides that I'm drunk and refuses to start, those are also automatic decisions.

As @russ_watters said, if your thermostat makes the decision that you need more heat would you oppose that decision?

There is a continuum of decisions. It's very hard to define a general line between what we approve and disapprove.

It sounds like Russ and I will agree that it would be a mistake to allow dual safety standards, one for AI and another for non-AI automation.
 
  • #44
anorlunda said:
I think you probably only oppose some decisions. If the 737 Max autopilot decided to turn itself off and return control to the pilot, I'm guessing you would approve. That's an automatic decision too. If my car decides to invoke the anti-skid features, or decides that I'm drunk and refuses to start, those are also automatic decisions...

It sounds like Russ and I will agree that it would be a mistake to allow dual safety standards, one for AI and another for non-AI automation.
I would suggest that bad decisions by badly written algorithms (like the 737 Max MCAS) should be excluded because any machine can be badly designed, with a fatal flaw, and that doesn't really have anything to do with automation. I don't see a fundamental difference between a badly designed algorithm failing and crashing the plane and an improperly designed or installed control linkage breaking and crashing the plane.

In my opinion, the problem with advanced automation is that the boundary between human and computer control is not always clear to the users or they don't like where it is and try to override the automation...or they simply don't care about the risk of giving the automation more responsibility than it is designed to have. Several people have been killed by self-driving cars for that reason, for example.

...and for GPS in a car, the responsibility really isn't any different from a paper map. A GPS can give more/better information, but it bears little or no responsibility for the final decisions, and doesn't execute them itself. As far as I can judge, GPS directions aren't automation at all.
 
  • #45
russ_watters said:
In my opinion, the problem with advanced automation is that the boundary between human and computer control is not always clear
We're in agreement. But we need to resist attempts to define that as an AI-related problem. The public is easily convinced that "advanced" and "AI" are synonymous.

If AI helps Boeing sell airplanes, they'll hang the tag on everything they market. But if AI is seen as evil, they'll simply deny that their devices contain AI.
 
  • #46
russ_watters said:
What makes you think you are any different from that ball?
I understand what effect a hill has on me (and even on others).
russ_watters said:
What kind of/complexity of patterns? Does a PID controller on a thermostat qualify? It learns the responsiveness of your HVAC system and adjusts the timing of the start/stop to avoid undershoot and overshoot.
If it works as a system with feedback (like the flyball governor), I don't think of it as "identifying a pattern". Being intelligent means identifying a pattern that no one pointed out to you first. The example I like is a computer program that is fed scientific papers and then spits out a new, unknown scientific theory. Another example: if a computer in a car were taught how to drive in America and you then put it in England, and it figured out by itself that it must drive on the other side of the road, that would be intelligent. One that is not intelligent would stubbornly try to stay on the same side, maybe just being really good at maneuvering to avoid crashes.

That is why I agree with others when they say "artificial intelligence" is thrown around lightly. Most machines labeled AI mostly mimic cognitive functions with some complex feedback mechanisms. Applying my self-driving car example to today's machines, most likely the car would have an extra rule that says "When positioned within the limits of England, switch sides." That's feedback. This is why @russ_watters refers to "badly written algorithms": the decision process lies with the programmer, who feeds in the causes and effects he knows, and the computer program never gets outside these definitions. That's feedback. AI would find either new causes or new effects, or would be given a new cause and deduce the effect based on its knowledge.
 
  • #47
fresh_42 said:
I am strongly against decision making by machines. It causes people to drive into rivers because the GPS tells them to.

That isn't decision making by machines. It's (stupid) decision making by people who are (stupidly) relying too much on information from a machine.

Now if the faulty GPS were an input to the control program of a self-driving car, which then drove into a river based on the GPS info, that would be decision making by machines. And I don't think machines should be allowed to make decisions of that kind until they get reliable enough not to do things like that.
 
  • #48
PeterDonis said:
That isn't decision making by machines. It's (stupid) decision making by people who are (stupidly) relying too much on information from a machine.
Indeed. But if not even this works, how much less will it work with more sophisticated programs? It wasn't meant as an example of AI, but rather as an example of the notoriously bad interaction. An example where AI failed: Facebook ads frequently suggest that I consult companies which help with the tax problems of Americans living in Europe. Sure, that isn't an actual issue for me, but it demonstrates the nonexistent reliability.
 
  • #49
fresh_42 said:
if not even this works, how much less will it work with more sophisticated programs

I agree that, before we even think about using a particular program as input to an AI decision maker, we should first make sure it doesn't give obviously stupid input or produce obviously stupid results with human decision makers. And a lot of hype surrounding AI ignores the obvious fact that programs which fail that simple test are ubiquitous, and programs which can pass it are rare.
 
  • #50
It is not the normal case that is risky, it is the exceptions - like my online behavior, which doesn't match the standard, or, in the case of the 737, the low altitude or whatever it was. And there is still my argument "it keeps consultancies busy". (I had at least four AI ads recently from SAS.)
 
  • #51
jack action said:
I understand what effect a hill has on me (and even on others).
What relevance does that have in the context of this thread?
If it works as a system with feedback (like the flyball governor), I don't think of it as "identifying a pattern".
No, the flyball governor is proportional control only. A specific rpm yields a specific throttle position, and that's it. A PID controller learns the responsiveness of the feedback system and adjusts its outputs accordingly. For example, your thermostat will turn on or off before it senses a temperature change because it remembers how long it took the last few times. And it will adjust as that delay changes; it will turn on sooner and off later on a hot day than on a cooler day.
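To make the contrast concrete, here is a minimal sketch in Python; the plant model, gains, and numbers are invented for illustration and aren't from any real thermostat firmware:
```python
# Proportional-only control (the flyball governor): a given error always
# produces the same output, with no memory of the system's behavior.
def p_control(error, kp=2.0):
    return kp * error

# PID control (the "learning" thermostat): the integral term accumulates
# history and the derivative term anticipates where the error is heading,
# which is what lets it act before the setpoint is crossed.
class PID:
    def __init__(self, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: drive a crude first-order "room" toward a 20-degree setpoint.
setpoint, temp = 20.0, 15.0
pid = PID()
for _ in range(100):
    heat = pid.update(setpoint - temp)
    temp += 0.1 * heat - 0.02 * (temp - 15.0)  # invented heat balance
print(round(temp, 2))  # should settle near the setpoint
```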
Being intelligent means identifying a pattern that no one pointed out to you first.
It appears to me the thermostat qualifies by that definition.
Another example: if a computer in a car were taught how to drive in America and you then put it in England, and it figured out by itself that it must drive on the other side of the road, that would be intelligent.
By that definition, an awful lot of humans aren't intelligent. Most are told ahead of time that they have to drive on the left side of the road and some still mess it up.
That is why I agree with others when they say "artificial intelligence" is thrown around lightly. Most machines labeled AI mostly mimic cognitive functions with some complex feedback mechanisms.
How do you know you don't?
This is why @russ_watters refers to "badly written algorithms": the decision process lies with the programmer, who feeds in the causes and effects he knows, and the computer program never gets outside these definitions. That's feedback. AI would find either new causes or new effects, or would be given a new cause and deduce the effect based on its knowledge.
I have a lot of bad habits I can't seem to break. I suppose that means I lack some intelligence, but at the same time I'd be ok with blaming my programmer for not writing them better.
 
  • #52
russ_watters said:
What relevance does that have in the context of this thread?
It is the basis for "making a decision". It implies you understand that an effect has a cause and that you can act differently according to it. The flyball governor has no knowledge of what is happening. It just gets pushed around by other objects, some of which may have the intelligence to do it with the goal of obtaining a certain effect.
russ_watters said:
It appears to me the thermostat qualifies by that definition.
I never said it wasn't. But if you think it qualifies according to my definition, then it is AI for me as well.

I'm no expert on AI, but maybe it could be defined as "advanced feedback". Maybe the line between where feedback ends and AI begins is blurry. But in the end, saying that a flyball governor is the same as AI seems exaggerated to me. It sounds like saying a boiler sitting on a fire is the same as a nuclear power plant. The end results might be similar, but the processes are totally different.
russ_watters said:
By that definition, an awful lot of humans aren't intelligent. Most are told ahead of time that they have to drive on the left side of the road and some still mess it up.
I know you understand what I mean, and that wasn't it. Any human, however stupid, will - without being told - realize at some point that everybody is doing what he or she is used to doing, the only difference being that they are on the other side of the road, and that it is easier to switch sides than to fight his or her way through.

Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and this human will learn it, without anyone teaching it to him - just by observing and noticing patterns. It's a question of time, but it will happen.
russ_watters said:
How do you know you don't?
There is a joke going around where AI is defined as a series of nested IF...ELSE statements. That is not AI, because the program does exactly what it was initially told to do. But with the computing power available today, a program may evaluate so many conditional statements that to us mere humans it looks like intelligence. But it still isn't; it just mimics it.
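To see what the joke means in code, here is a toy sketch in Python - a hand-written rule chain, not any real chatbot. Every response was enumerated by the programmer; nothing is learned from data, however smart the output might look once there are enough branches:
```python
def chatbot(message: str) -> str:
    # A "conversational AI" that is nothing but nested conditionals.
    text = message.lower()
    if "hello" in text:
        return "Hi there! How can I help?"
    elif "weather" in text:
        return "I hear it's sunny today."
    elif "thanks" in text:
        return "You're welcome!"
    else:
        return "Interesting - tell me more."

print(chatbot("Hello, how is the weather?"))  # only ever matches a hand-written branch
```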
russ_watters said:
I have a lot of bad habits I can't seem to break. I suppose that means I lack some intelligence, but at the same time I'd be ok with blaming my programmer for not writing them better.
But when you have a machine with AI, it may make decisions completely unforeseen by the programmer. You can think of it as owning a dog. If the dog does something unwanted, who is responsible? The breeder, the trainer, the owner, or the dog itself? There may not be a single answer that fits all possible cases.
 
  • #53
jack action said:
It is the basis for "making a decision"...

...I know you understand what I mean, and that wasn't it. Any human, however stupid, will - without being told - realize at some point that everybody is doing what he or she is used to doing, the only difference being that they are on the other side of the road, and that it is easier to switch sides than to fight his or her way through.

Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and this human will learn it, without anyone teaching it to him - just by observing and noticing patterns. It's a question of time, but it will happen.

There is a joke going around where AI is defined as a series of nested IF...ELSE statements. That is not AI, because the program does exactly what it was initially told to do. But with the computing power available today, a program may evaluate so many conditional statements that to us mere humans it looks like intelligence. But it still isn't; it just mimics it.

But when you have a machine with AI, it may make decisions completely unforeseen by the programmer. You can think of it as owning a dog. If the dog does something unwanted, who is responsible? The breeder, the trainer, the owner, or the dog itself? There may not be a single answer that fits all possible cases.
So again: how do you know that you don't?

I'm not being glib with that question; I really mean it and would like an answer. How do you know that you aren't just acting on an extremely complex set of if/then/else statements based on an extremely complex set of inputs? Even worse, how can you be sure that what differentiates you from a computer isn't that you suck at it?

People, dogs and gnats are unpredictable largely because they suck at being logical, not because they are intelligent. Maybe part of the definition issue is that it isn't "intelligence" people think of when they think of AI, but artificial life. They want something that feels real, even if that actually makes the system inferior in many ways. If that's what people think of and want, fine, but that's not what I'd be after.
 
  • #54
jack action said:
The flyball governor has no knowledge of what is happening.
Neither does a neural network.

If the issue is what machines can do and should do, what is the utility of a narrow definition of AI?

Pattern recognition is a method. Our policies should be method independent. If not, they can be obsoleted overnight.
 
  • #55
jack action said:
Another example is learning an unknown language. You can put any human in an environment where everybody speaks a language other than his or her own, and this human will learn it, without anyone teaching it to him - just by observing and noticing patterns. It's a question of time, but it will happen.
FYI, regarding language:
http://news.mit.edu/2018/machines-learn-language-human-interaction-1031

I don't think that example is of learning a language from scratch, but rather of learning the use of language in everyday life. We're always going to build language and intelligence programs into computers simply because it's easier that way. It takes decades to teach a human to be functional -- why would we want that in a computer when you can copy and paste? But in this example, the computer is learning to be more conversational in the way it speaks. It starts as mimicking (like a parrot or a human would), but here's a telling part of the article:
A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. “People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,”
The robot starts out speaking properly, but learns by mimicking humans to speak poorly. So again, the defining characteristic that it lacks is imprecision. It needs to learn how to be bad at speaking to be more human!

Is this really what we want out of AI?
 
  • #56
russ_watters said:
How do you know that you aren't just acting on an extremely complex set of if/then/else statements based on an extremely complex set of inputs?
Like I said, maybe one could define intelligence as "advanced feedback", and there is no clear line where one ends and the other begins. How do you know that you are a human and not an ape or a tree? Apparently, there are enough differences that we created different words for them.

That is a problem arising from the fact that humans love to classify things, when such classifications don't really exist in nature.
russ_watters said:
Even worse, how can you be sure that what differentiates you from a computer isn't that you suck at it?

People, dogs and gnats are unpredictable largely because they suck at being logical, not because they are intelligent.
I don't agree with the use of "suck at it". My definition of intelligence is "the ability to identify patterns". I prefer saying some can be better at it than others. Humans are most likely the best at it. But if I compare a cheetah with a human, I say that its ability to run is better, not that the human sucks at running. The latter implies that the human is not a worthy living being because he can't run as well as the best of living beings. I don't want to imagine what one would think of a fish with that kind of thinking ...
russ_watters said:
even if that actually makes the system inferior in many ways.
russ_watters said:
The robot starts out speaking properly, but learns by mimicking humans to speak poorly. So again, the defining characteristic that it lacks is imprecision. It needs to learn how to be bad at speaking to be more human!
Here you have a lot of judgements about AI which don't affect its definition. How do you define "inferior"? What is "speaking properly" and "speaking poorly"? I don't think you'll find scientific laws to define these.
russ_watters said:
Is this really what we want out of AI?
That is a good reflection for the present segment of "Ask the advisors". I can't wait to read people's opinions on this.
russ_watters said:
It takes decades to teach a human to be functional -- why would we want that in a computer when you can copy and paste?
If you look at AI as a machine that can do what I can do, just instead of me, that view is logical. But AI can be much more, and that's where I find it interesting. If we assume that a cure for cancer exists, humans most likely can find it by looking for patterns that link causes and effects. It may take decades, maybe centuries. But with AI, it will most likely go faster. AI is the power tool for finding patterns. Sure, you can use AI to do simplistic tasks - like you can use an electric drill to make a hole in a piece of paper - but I find that to be a waste of resources, done for amusement or just to say "I do it because I can."
anorlunda said:
If the issue is what machines can do and should do, what is the utility of a narrow definition of AI?

Pattern recognition is a method. Our policies should be method independent. If not, they can be obsoleted overnight.
By using the word "policies", I think you are worrying about what type of decisions we should leave to machines. If so, I do share some concerns about this, and I shared them in my answer to @Greg Bernhardt 's question. Short answer: I fear that people will blindly trust a machine's decision rather than their own. That applies to AI or flyball governors alike, but the more a machine "looks" smart, the easier it is to go down that path.

But I feel @Greg Bernhardt 's question was broader than this subject alone.
 
  • #57
I have suppressed my urge to post anything about AI since I believe intelligence and artificial intelligence both have rather obscure definitions.
-
I will say what no one else has said in this thread so far (I think): human intelligence - and some animals display this as well - includes the ability to say "What if..." A very eccentric engineer once told me that humans, compared to machines, can solve problems with significant amounts of missing data. In other words, we say "What if..."
 
  • #58
Averagesupernova said:
I have suppressed my urge to post anything about AI since I believe intelligence and artificial intelligence both have rather obscure definitions.
-
I will say what no one else has said in this thread so far (I think): human intelligence - and some animals display this as well - includes the ability to say "What if..." A very eccentric engineer once told me that humans, compared to machines, can solve problems with significant amounts of missing data. In other words, we say "What if..."
So do you believe that AI is forever doomed to not be able to do extrapolation? I don't.
 
  • #59
phinds said:
So do you believe that AI is forever doomed to not be able to do extrapolation? I don't.
No, I can't say for sure that I believe that. There are systems out there now that are programmed to ignore inputs that are outside normal limits. But they are basic, simple systems, such as an engine control module that recognizes a bogus reading from an oxygen sensor and simply meters fuel based on a preloaded table. It notifies the driver that there is a problem, and it ends there. It does not troubleshoot to find a defective sensor, broken wire, bad connection, etc. If it did, it would have a limited number of scenarios to troubleshoot for. A human would notice wiring that does not look original (for instance) and suspect that someone at one time made an improper repair, or something of this nature. It takes humans the better part of a lifetime of learning by experience. Doing a job by looking something up in a troubleshooting guide is no different from a set of if-then instructions in a computer. It will never come close to human experience, which in my opinion is required for good extrapolation.
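Here is a minimal sketch of that limp-home behavior in Python; the voltages, table values, and thresholds are invented for illustration and aren't from any real engine control module:
```python
O2_VALID_RANGE = (0.1, 0.9)                     # plausible sensor voltages
FUEL_TABLE = {800: 2.1, 2000: 4.8, 4000: 9.5}   # rpm -> injector pulse (ms)

def fuel_pulse(rpm: float, o2_voltage: float):
    """Return (pulse_ms, check_engine_light)."""
    nearest = min(FUEL_TABLE, key=lambda r: abs(r - rpm))
    if not (O2_VALID_RANGE[0] <= o2_voltage <= O2_VALID_RANGE[1]):
        # Bogus sensor reading: ignore it, run open-loop off the preloaded
        # table, and notify the driver. No troubleshooting -- it ends there.
        return FUEL_TABLE[nearest], True
    # Normal closed-loop operation: trim the table value with the feedback.
    trim = 1.0 + (0.45 - o2_voltage) * 0.2
    return FUEL_TABLE[nearest] * trim, False

print(fuel_pulse(2000, 5.0))   # shorted sensor -> (4.8, True)
```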
 
  • #60
Averagesupernova said:
A human would
Why compare it with humans at all? Is it because of your understanding of the word intelligence?
 
  • #61
anorlunda said:
Why compare it with humans at all? Is it because of your understanding of the word intelligence?
You have a valid point. And the (in my opinion) obscure definitions cause some trouble when discussing it. Right now as far as I know human intelligence is the highest known, or at least recognized. What would you compare it to?
-
A number of years ago I troubleshot test equipment in a factory setting. Part of my job was to create flow charts indicating which part of a circuit board the calibrator was to inspect based on a particular failure. Wrong part here? Wrong part there? Etc. What this amounts to is taking my intelligence and experience and condensing them down to a series of if-thens. The longer we manufactured a certain product, the more detailed the flow charts became. After a certain point, of course, further detailing the flow charts produced a net loss in productivity. There was never a time when a troubleshooter such as myself was not required.
 
  • #62
Averagesupernova said:
You have a valid point. And the (in my opinion) obscure definitions cause some trouble when discussing it. Right now as far as I know human intelligence is the highest known, or at least recognized. What would you compare it to?
Even human intelligence can vary in unexpected ways. A friend of mine named Dave was a factory supervisor somewhere in Africa many years ago and he was assigned to get a wooden fence built along the back of the property.

He told the foreman of the workers that he wanted the fence to be absolutely straight and just to be sure he was clear, he showed the foreman a picture of a perfectly straight fence. The foreman assured Dave that the fence would be absolutely straight and then proceeded to build the fence in what Westerners would consider a crooked, slightly rambling line that probably never deviated from the center line by more than a foot or so but which to Dave was clearly NOT straight.

The foreman was flabbergasted that Dave did not see the fence as straight. Again Dave showed the foreman the picture and pointed out that the fence wasn't the same as the picture. The foreman insisted that the two fences were identically straight.

When Dave complained to the factory owner that the foreman was either obstructionist or mentally retarded, the owner just laughed and explained to Dave that for an African of that tribe, that WAS absolutely straight and you would never convince him otherwise.

You can argue that it was a difference in definitions, but I argue that it was a different way of THINKING about what "straight" means.
 
  • #63
phinds said:
You can argue that it was a difference in definitions, but I argue that it was a different way of THINKING about what "straight" means.
As far as I am concerned, doesn't the definition of straight or any word depend on how it is thought about?
-
Interpretation of definitions can certainly get us in trouble. But for someone who understands the definition of the word identical to argue that those two fences were identical is not playing with a full deck. I would have expected the foreman to say that it was straight enough. The foreman did display a certain amount of intelligence based on the fact that (I'm assuming here) the fence was built in such a way as to do the job he assumed it was meant to do.
 
  • #64
Averagesupernova said:
What would you compare it to?
How about just looking at the utility of the machine without needless comparisons?
 
  • #65
anorlunda said:
How about just looking at the utility of the machine without needless comparisons?
So what are you saying? That it's pointless to compare a lowly thermostat to the intelligence of a normal human being? That intelligent enough is good enough? That doesn't make for stimulating conversation, nor does it give us any idea of how to make something smarter if we never compare one system with another.
 
  • #66
fresh_42 said:
And the latest fatal mismatch was caused by Boeing's software change to avoid a new FAA approval procedure for its 737 Max. As long as people program by the quick-and-dirty method, I will oppose automatic decisions.
That was human error, or malfeasance. MCAS could have been successful IF there had been a way to detect inconsistencies among inputs from various instruments so that a fault (input error) could be detected and corrected. In the case of the aircraft's control system, there should have been a way for the system to know whether an angle of attack (AOA) sensor was faulty (damaged). Left and right sensors should have been compared, and any difference/inconsistency challenged. The pilots needed training in overriding (disconnecting) the system. Boeing engineering staff and management, and the FAA (oversight), are ultimately responsible.
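Here is a minimal sketch in Python of the cross-check described above; the thresholds and actions are invented for illustration - this is not Boeing's actual MCAS logic:
```python
DISAGREE_LIMIT_DEG = 5.5    # hypothetical allowable left/right disagreement
STALL_MARGIN_DEG = 14.0     # hypothetical AOA above which trim would engage

def mcas_command(aoa_left: float, aoa_right: float) -> str:
    # Compare the redundant sensors before trusting either one.
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        # Inconsistent inputs: flag the fault and hand authority back
        # to the pilots instead of acting on a single channel.
        return "AOA DISAGREE: automatic trim inhibited, crew alerted"
    aoa = (aoa_left + aoa_right) / 2.0
    if aoa > STALL_MARGIN_DEG:
        return "nose-down trim commanded"
    return "no action"

print(mcas_command(4.0, 74.5))  # a failed vane makes the system stand down
```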

Error detection and correction is one area of AI needing attention. Currently, humans do the programming and write the rules. Full intelligence requires learning from successes AND failures (errors).

In a wholly different area: A former colleague did a thesis on "Artificial Neural Network Modeling of Mechanical Properties of Seamless ___________ Tubing". I've seen similar works for various alloys. It requires knowledge of the alloy system, the sensitivity of mechanical properties (strain hardening and strain rate hardening) to composition, and the manufacturing process. Seamless tubing involves multiple mechanical reduction steps with intermittent thermal treatment (annealing), but annealing can be for recrystallization or solution annealing, the latter being at greater temperature than simple annealing. The solution annealing temperature must be set in conjunction with time in order to prevent grain growth (this relationship will depend on the composition and the level of cold work, i.e. dislocation density, in the alloy structure).
A related paper - https://pureadmin.qub.ac.uk/ws/files/377750/132.pdf
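For a flavor of what such a model looks like, here is a minimal sketch in Python with scikit-learn; the features, ranges, and "data" are synthetic stand-ins, not values from the thesis or the linked paper:
```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Invented inputs: [%C, %Cr, cold-work reduction %, anneal temp C, anneal time min]
X = rng.uniform([0.02, 11, 10, 950, 1], [0.10, 18, 60, 1150, 30], size=(200, 5))
# Invented target: a noisy "yield strength" mixing hardening and annealing effects
y = 200 + 800*X[:, 0] + 4*X[:, 2] - 0.15*(X[:, 3] - 950) + rng.normal(0, 5, 200)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                     random_state=0).fit(scaler.transform(X), y)

# Predict the property for a hypothetical composition/process combination.
print(model.predict(scaler.transform([[0.05, 13, 40, 1050, 10]])))
```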

With respect to what is considered AI, did you know that there are at least 40 types of hammers?
https://www.garagetooladvisor.com/hand-tools/different-types-of-hammers-and-their-uses/
 
  • #67
Astronuc said:
With respect to what is considered AI, did you know that there are at least 40 types of hammers?
Yes.
 
  • #68
Averagesupernova said:
So what are you saying?
I say that we should judge machines by their utility and their risks. The methods (dumb/smart) they use to achieve those are irrelevant.

A self driving car has benefits and risks, regardless of whether it uses AI or not.

I think machines of the future will be extremely capable. I also think that the experts will never stop debating whether those machines really have AI.
 
  • #69
Game changer in the chess world! The extract below is from https://en.wikipedia.org/wiki/AlphaZero

DeepMind stated in its preprint, "The game of chess represented the pinnacle of AI research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning algorithm – originally devised for the game of go – that achieved superior results within a few hours, searching a thousand times fewer positions, given no domain knowledge except the rules."[1] DeepMind's Demis Hassabis, a chess player himself, called AlphaZero's play style "alien": It sometimes wins by offering counterintuitive sacrifices, like offering up a queen and bishop to exploit a positional advantage. "It's like chess from another dimension."[9]
 
  • #70
We asked our PF Advisors "How do you see the rise in A.I. affecting STEM in the lab, classroom, industry, and/or everyday society?". We got so many great responses that we need to split them into parts; here are the first several. Enjoy!

Ranger Mike
If things run true as before - and I have seen no vast improvement in the correct forecasting of future trends in these areas - I see lots of money going into these areas but not much usable product coming out. I chose not to dwell on such predictions, like back in the early 1980s when we were told the factory of the future would be a lights-out manufacturing operation with only a few humans doing maintenance to keep the...

Continue reading...
 

