With ChatGPT, is the college essay dead?

In summary, the thread discusses the use of AI, specifically GPT-3, for writing essays and the implications for academia. Some argue that it is no different from students paying someone else to write their essays, while others suggest that students should be required to explain their work in front of the class to demonstrate genuine understanding of the subject. The conversation also draws comparisons to the use of technology in other fields, such as engineering, and how that differs from using AI to write essays. Ultimately, it is suggested that colleges may need to increase penalties to discourage the use of AI for essay writing.
  • #106
jedishrfu said:
However, more recently, Google search has morphed into a yellow-pages advertising mode where obvious search terms bring you to sites you're not really interested in before you get to see the relevant ones.

This is what I really dislike about Google now. If there were an alternative that worked like Google did, say, 15 years ago, I would switch to it (over Goog) in a heartbeat.
 
  • #107
Have you tried DuckDuckGo? It seems to show what you want but may not be as comprehensive as Google can be.
 
  • #108
gleem said:
However, LaMDA responds like a human in a discussion. It does not know in any sense of the word, but to have AGI it has to be able to do this. How much of what we say is like this: basically reflex?
The question, of course, is whether the way humans "know" things is intrinsically different from how a machine operates. Note that I am not in any way implying that any existing system is anywhere near an AGI; but in some of the discussions about the future of AI, many seem to assume it is somehow obvious that humans are not "only" machines and that our ability to "understand" would be impossible to duplicate.
 
  • Like
Likes jedishrfu
  • #109
f95toli said:
The question, of course, is whether the way humans "know" things is intrinsically different from how a machine operates.
Certainly, the way humans know things is different from the way GPT does. Humans acquire information slowly, in small increments over a long period, and it is either direct experience or largely vetted as true by other sources, quite the opposite of the way GPT "knows".

Artificial neural nets are being used to try to emulate the human brain. If that succeeds, it would lower the bar with regard to how we set ourselves apart from everything else. To be sure, a true emulation of a human brain may never occur, but with regard to processing information and making decisions it seems possible, and perhaps even better. We just can't accept being second.
 
  • Haha
Likes jedishrfu
  • #110
f95toli said:
The question, of course, is whether the way humans "know" things is intrinsically different from how a machine operates.
Actually, I think the problem I have with ChatGPT is that it mimics the human way too closely. Humans do not retain information exactly; they don't even record it completely. Our eyes don't register every particle of light coming in, and our brains don't remember everything: we make stuff up, based on educated guesses, to fill the voids. Which is what ChatGPT is also doing.

That is why we put stuff in writing and take pictures: to assure us we remember things correctly and that they are faithful to reality.

Using ChatGPT to write fiction, or to automate a process like writing documentation for a programming script, may be a good use for this tool. The former doesn't mind if something made up slips in, and the latter is basically translation for a language that has very few concepts and is based on pure logic.

Trying to fetch quality information seems to be something a specialized search engine can do better. The lack of sources makes ChatGPT no better than asking your neighbor to recall something he read a while ago.
 
  • #111
Microsoft's CEO was interviewed recently about AI and ChatGPT. Microsoft is investing $10 billion in OpenAI.

Some main takeaways from the interview:

(1) This is just the beginning of the AI explosion and it isn't a linear curve.

(2) Already, as an anecdote, an elite AI programmer says that 80% of his code is written by AI. If you extrapolate (not saying you can, though I'm not sure why not), that would suggest companies could fire 80% of their programmers without losing productivity.

(3) The recent breakthroughs are notable to him because (if I read him right) the models are surprising us: unexpected abilities are emerging that they weren't specifically trained for.

(4) While it will be disruptive and will displace many "knowledge workers" such as programmers, hopefully we will adapt and use the improved productivity to reduce disparities in quality of life around the world.



For me, I have to admit that while I had predicted rapid acceleration in AI capabilities, I hadn't figured AI would start replacing programmers so quickly.
 
  • #112
One concern I have is that as ChatGPT-generated content becomes more widespread, the model will begin to train on its own output, which could skew the results in a negative way.
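As a rough illustration of that feedback loop, here is a toy simulation (everything here is invented for illustration; it is not how any real language model is trained). A "model" is fitted to data, sampled, and refitted on its own samples, generation after generation:

```python
import random
import statistics

# Generation 0 is "human" data; each later generation fits a normal
# distribution to the previous generation's samples and resamples it.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]

for generation in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The "model" here is just (mu, sigma); its samples become the
    # next generation's training set.
    data = [random.gauss(mu, sigma) for _ in range(50)]
    print(f"generation {generation:2d}: mean={mu:+.3f}, stdev={sigma:.3f}")

# Each fit inherits the sampling noise of the previous generation instead
# of fresh human data, so the estimates drift away from the original (0, 1).
```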
 
  • Like
Likes jack action
  • #113
Jarvis323 said:
(2) Already, as an anecdote, an elite AI programmer says that 80% of his code is written by AI. If you extrapolate (not saying you can, though I'm not sure why not), that would suggest companies could fire 80% of their programmers without losing productivity.
If history repeats itself - and it always does - you will be pretty disappointed. When people can spend more easily, they only tend to waste more. I foresee a lot more useless code running simple tasks in the future.


Jarvis323 said:
(4) While it will be disruptive and will displace many "knowledge workers" such as programmers, hopefully we will adapt and use the improved productivity to reduce disparities in quality of life around the world.
I'm still waiting to see technology reduce disparities between people around the world. It actually tends to increase them.
 
  • Like
Likes PeroK and Jarvis323
  • #114
jack action said:
I'm still waiting to see technology reduce disparities between people around the world. It actually tends to increase them.
Yuval Harari argues that "the big danger for people is no longer exploitation, it is irrelevance".

He goes into more depth starting around 1 minute into the video.



Neil deGrasse Tyson is more optimistic. He thinks the solution is to funnel money into maintaining and modernizing infrastructure, which he believes will still require human labor for quite some time. Then again, I think he is thinking in the much nearer term, whereas people like Harari are trying to think hundreds or even thousands of years ahead.

His argument starts around 16:30.

 
Last edited:
  • #115
jedishrfu said:
One concern I have is that as ChatGPT-generated content becomes more widespread, the model will begin to train on its own output, which could skew the results in a negative way.
I've been thinking about this since you brought it up, and one aspect of it intrigues me. I don't know the answer. In addition to the ChatGPT content present in the next generation of training data, there will also be our criticism of its output, and our fears and enthusiasms about how it might change the world. I wonder what effect that will have. Will it learn from our feedback in a way that leads to better performance or better alignment with our interests? Or will it just be more likely to make the same mistakes, while also being more likely to acknowledge that it makes them?

I am curious because if the former is true (I suspect not, but would not be surprised to be surprised), or could become true eventually, then AI might "naturally" evolve to adapt to our expectations of it, simply based on what we say about it on the internet between each iteration.
 
  • Like
Likes jack action
  • #116
For example, what if, without it being an intentional feature, you ask GPT-4 to write an essay and it says,

"I'm sorry, but I can't write an essay for you. Writing an essay for you would be unethical since it could interfere with the education system."

Or maybe you ask it to write some code and it says,

"I'm sorry but writing code for humans could result in the loss of jobs for software engineers."

That would be strange.
 
  • Like
Likes symbolipoint
  • #117
One of OpenAI's promises for GPT-4 is to improve the alignment of GPT with human norms and values, and to sense the intention of the user. Additionally, it is supposed to be more knowledgeable about the outside world (GPT-3 has no knowledge of the world), and to receive feedback from humans during training to help correct misinformation. It is believed this will reduce misinformation. Unfortunately, GPT-4, like GPT-3, will still provide an answer to any question.
 
  • #118
It would be cool to ask it the classic car question from My Cousin Vinny, where the prosecutor tests the car expertise of Vinny's girlfriend with a trick question.

It's likely, though, that ChatGPT will have been trained on that information.



In contrast, ChatGPT said:

The correct ignition timing for a 1955 Bel-Air Chevrolet with a 327 cubic inch engine and a four-barrel carburetor would likely be around 4 degrees before top dead center (BTDC) at idle speed. However, it's important to note that this is a rough estimate and the actual timing may vary depending on the specific engine and other factors such as the condition of the ignition system and the type of fuel being used. It's always best to refer to the factory service manual or consult a qualified mechanic for the most accurate information.

The question is, who is right, i.e., is the movie dialogue right, or ChatGPT? I can't answer that myself, but Marisa Tomei did a great scene. This particular scene has reverberated throughout the legal profession, in that there are times when an expert witness is needed who has no obvious credentials but who, upon voir dire, shows exceptional knowledge.

https://www.abajournal.com/gallery/pivotal_scenes/987
 
Last edited:
  • Like
Likes Astronuc and symbolipoint
  • #119
Jarvis323 said:
ChatGPT has refused to answer many questions I have asked it, either because doing so would be unethical or because it thought it was unable to. For example, I asked it to write a research paper in the style of Dr. Seuss and it refused, saying it would be inappropriate. I asked it to make financial predictions and it declined. I asked it for certain obscure facts and it declined, saying it can't because it can't access the internet.
This is interesting, since most GPT-3 prognosticators seem to think it will try to answer any question. However, as I said, OpenAI will be trying to make GPT-4 more discerning in its interpretation of instructions. Perhaps OpenAI has been updating GPT-3's ability in this respect, for this sounds exactly like what GPT-4 is supposed to do.

Jarvis323 said:
We haven't really "told it" (or trained it) yet that it is really bad at math and physics, for example.

As I have noted in other posts on this topic, when it comes to accuracy on specific subjects, it must be trained on those subjects. As it is, it can use only the general information it was provided.
 
  • #120
gleem said:
This is interesting, since most GPT-3 prognosticators seem to think it will try to answer any question. However, as I said, OpenAI will be trying to make GPT-4 more discerning in its interpretation of instructions. Perhaps OpenAI has been updating GPT-3's ability in this respect, for this sounds exactly like what GPT-4 is supposed to do.

As I have noted in other posts on this topic, when it comes to accuracy on specific subjects, it must be trained on those subjects. As it is, it can use only the general information it was provided.

In retrospect, it is most likely not refusing to answer; it may be either filtering or, more likely, "predicting" that the words which should follow are "Sorry, I can't answer that ...". So its answer is a non-answer, because that was the answer it saw most, or was reinforced to generate, in the training data. For it, the correct answer is the non-answer.

In that case, I don't know if it would actually reason independently that it should give a non-answer. It might, to some extent, based on general rules it has learned.

But we may expect it to naturally begin to regurgitate information it has learned from our perceptions of its limitations and problems. So its apparent confidence might drop in some cases or increase in others, even if it is never fully in line with its actual abilities. But I am not sure.
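A minimal sketch of that "the non-answer is the predicted answer" idea (the prompts, continuations, and probabilities below are invented for illustration; a real model scores continuations token by token rather than from a lookup table):

```python
# A trivial "model" in which a refusal is simply the highest-probability
# continuation learned from training data, not a separate reasoning step.
learned_continuations = {
    "write an essay about": [
        ("Sure, here is an essay...", 0.35),
        ("I'm sorry, but I can't write an essay for you.", 0.65),
    ],
    "what is the capital of": [
        ("The capital is...", 0.95),
        ("I'm sorry, I can't answer that.", 0.05),
    ],
}

def predict(prompt: str) -> str:
    """Return the most probable learned continuation for the prompt."""
    for prefix, options in learned_continuations.items():
        if prompt.startswith(prefix):
            return max(options, key=lambda pair: pair[1])[0]
    return "I'm sorry, I can't answer that."  # fallback non-answer

print(predict("write an essay about the French Revolution"))
# -> the refusal, because that is the continuation the (toy) data rewarded
```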
 
  • #122
Using it after business hours is better in my experience.
 
  • #125
jedishrfu said:
Here’s an interesting article hinting at how it works

https://mindmatters.ai/2023/02/chatgpt-violates-its-own-model/
Violates? Strange choice of words. "Exceeds" is more like it.

I don't really expect OpenAI to give away the recipe for their amazing secret sauce. I'm sure they have a great deal up their sleeve.
 
  • #126
The article is based on GPT-3. AFAIK, ChatGPT is based on GPT-3.5, which has significant enhancements and is continually updated by OpenAI. I agree with @Hornbein that "violates" is a poor choice of words.

I find its responses remarkable considering it is just stringing words together, perhaps much as a precocious child might do.

Notice that it has said it does not have access to the internet, which is a good thing considering we do not really understand what it does. Like that precocious child, it must be guided appropriately to ensure that it does not get into trouble or cause trouble.
 
  • Skeptical
Likes jack action
  • #127
jedishrfu said:
Here’s an interesting article hinting at how it works

https://mindmatters.ai/2023/02/chatgpt-violates-its-own-model/

OpenAI might have hidden techniques it uses, but I wasn't convinced by the author's speculations.

Some of his claims were obviously wrong, such as the claim that the neural network should not be able to recognize patterns in words, such as writing styles, and the idea that it should not be able to output or repeat words from the prompt that it hasn't been trained on.

It isn't as if it has a database of words it extracted and picks from as it assembles its sentences. It represents words as numeric vectors in an embedding space. A word in the input (even one it hasn't seen) becomes a vector in that space. The new word's position in that space will be meaningful to the extent that the learned embedding generalizes, or to the extent that the parts of the model which determine the relationships between the words in the input generalize. That it is able to take an input vector from the prompt and output the same vector is not strange, and even that it does so in a meaningful way is not strange, especially for a word that is a number.

The idea that it shouldn't be able to understand garbled words also doesn't make sense. First, garbled words exist in the training data, and second, even ones that are not in the training data may be embedded in a way that captures relevant information about them through generalization.

Basically, in theory, if a human can understand some new word within context, so could a transformer model.
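To make the embedding point concrete, here is a toy sketch (the subword vocabulary, the random vectors, and the averaging at the end are all invented for illustration; real models learn subword vocabularies and embeddings from data, and keep one vector per piece rather than averaging):

```python
import numpy as np

# Why an unseen or garbled word still gets a usable vector: input is split
# into subword pieces, each with a learned embedding, so almost nothing
# is truly out-of-vocabulary.
rng = np.random.default_rng(0)
vocab = ["chat", "gpt", "ing", "trans", "form", "er", "s"]
embeddings = {piece: rng.standard_normal(4) for piece in vocab}  # toy 4-dim vectors

def tokenize(word: str) -> list[str]:
    """Greedy longest-match split over the toy subword vocabulary."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in embeddings:
                pieces.append(word[i:j])
                i = j
                break
        else:
            i += 1  # no matching piece; skip the character (toy fallback)
    return pieces

word = "chatgpting"              # never seen as a whole word
pieces = tokenize(word)          # -> ['chat', 'gpt', 'ing']
vector = np.mean([embeddings[p] for p in pieces], axis=0)
print(pieces, vector.round(2))   # a position in the embedding space
```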

Regarding example 7, where the author is surprised that ChatGPT understood which parts of a copied-and-pasted previous session were its own: that is also not surprising. Aside from whatever secrets OpenAI has, the model is supposed to base its responses entirely on the prompt. Its own output (and the whole conversation at a given point) is just an extension of the prompt. So it should make no difference whether you copy and paste a previous interaction or continue an ongoing one. It can predict which part it wrote based on the patterns in the dialog.
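And a toy sketch of the prompt-continuation point (the role markup below is invented; real systems use their own formatting): pasting an old transcript and continuing a live conversation produce the same kind of input string, so the model treats them identically.

```python
# A chat model sees the whole conversation as one prompt string and just
# predicts its continuation, whether the history is live or pasted in.
history = [
    ("user", "Write a haiku about entropy."),
    ("assistant", "Disorder rises / ..."),
    ("user", "Which lines above did you write?"),
]

prompt = "".join(f"<{role}> {text}\n" for role, text in history) + "<assistant> "
print(prompt)  # the model's only job: predict what follows this string
```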
 
Last edited:
  • Like
Likes jack action
  • #128
Jarvis323 said:
....Or maybe you ask it to write some code and it says,

"I'm sorry but writing code for humans could result in the loss of jobs for software engineers."

That would be strange.
"I'm sorry Dave, Im afraid I can't do that."
 
  • Haha
Likes Vanadium 50 and jedishrfu
  • #130
berkeman said:
And now in the business world...

"A Microsoft spokesperson said 365 users accessing the new AI tools should be reminded the technology is a
work in progress and information will need to be double checked."
Oh, like "don't sleep while your Tesla is driving?" lol.

More importantly, I wonder about companies allowing Microsoft to view (and store?) their business correspondence. The letters, memos, and calculations I wrote when I was working were proprietary; there's no way we would have shared them with MS to "improve" them.
 
  • Like
Likes BillTre
  • #131
gmax137 said:
More importantly, I wonder about companies allowing Microsoft to view (and store?) their business correspondence. The letters, memos, calculations I wrote when I was working were proprietary, there's no way we would share them with MS to "improve" them.
Yeah, this has been weird over the last 5-10 years at work, where our IT department(s) became more comfortable storing information in "the cloud". They must know what they're doing, but it always seemed to me that we were "opening the kimono" with all of the information that we were storing in the cloud for our company...
 
  • #132
berkeman said:
...our IT department(s) became more comfortable storing information in "the cloud". They must know what they're doing
I'm not so sure. It wasn't too long ago that the only thing the IT department knew was "try rebooting."

Here's more from the linked article:

"Microsoft is also introducing a concept called Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data. The agent will know what’s in a user’s email and on their calendar for the day as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update."​

Do you think the Business Chat agent will reside on your desktop, or on an MS server somewhere?
 
  • #133
berkeman said:
And now in the business world...
In my history of fights with MS Office, the feature I've found most annoying was that paperclip thing.
It seems I'll need to prepare for its descendants :oldcry:
 
Last edited:
  • Like
Likes BillTre and berkeman
  • #136
There is now an app called GPTZero that can be used to determine whether something might have been written by ChatGPT. I say "might have" since it only estimates the likelihood of the text being written by AI, or perhaps only assisted by AI. It looks at the structure of the prose: AI is more consistent than humans in this regard, and human-produced prose also has more variability in content.

This app needs sufficient data, at least ten lines, to have a reasonable chance of making a distinction; the more the better. So social media posts written with ChatGPT probably cannot be detected. It is discussed here.
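GPTZero's exact features aren't public, but one commonly described signal is "burstiness", the variation in sentence structure. A crude stand-in for that signal (sentence-length variance only; real detectors also use model-based perplexity, which this omits):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words): human prose
    tends to vary more than machine-generated prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

human = ("I tried it. Honestly, the results were all over the place, "
         "sometimes brilliant, sometimes useless. Weird. But I kept going.")
machine = ("The model produces consistent output. The sentences have similar "
           "structure. The lengths are roughly equal. The tone does not vary.")

print(f"human-ish score:   {burstiness(human):.2f}")   # larger spread
print(f"machine-ish score: {burstiness(machine):.2f}") # near zero
```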
 
  • #137
jtbell said:
This article reminded me of the current Fun with ChatGPT thread in General Discussion:

The College Essay Is Dead (The Atlantic)

My wife (a retired professor of German language and literature) commented that students will have to be forced to write their essays in the classroom, after having their phones confiscated. :frown:
THAT, exactly!

I don't have the patience to recheck whether I ever commented on this forum topic, but the discussion has become long, with many posts, some drifting.

The intent of writing an essay for class while in class is to write the essay genuinely yourself, without any technological assistance. But some kinds of essays are assigned without restricting the writer to a supervised time and place; so who does what to ensure the writer composes and designs the essay genuinely himself, without any technological assistance? How, or should, essays be verified for authenticity? Maybe a live, in-person interview before the writer writes the essay?
 
  • #138

‘Overemployed’ Hustlers Exploit ChatGPT To Take On Even More Full-Time Jobs

https://www.vice.com/en/article/v7b...t-chatgpt-to-take-on-even-more-full-time-jobs

"ChatGPT does like 80 percent of my job," said one worker. Another is holding the line at four robot-performed jobs. "Five would be overkill," he said.

One of them "helps financial technology companies market new products; the job involves creating reports, storyboards, and presentations, all of which involve writing."

Last year, he started to hear more and more about ChatGPT, an artificial intelligence chatbot developed by the research lab OpenAI. Soon enough, he was trying to figure out how to use it to do his job faster and more efficiently, and what had been a time-consuming job became much easier. ("Not a little bit more easy," he said, "like, way easier.") That alone didn't make him unique in the marketing world. Everyone he knew was using ChatGPT at work, he said. But he started to wonder whether he could pull off a second job. Then, this year, he took the plunge, a decision he attributes to his new favorite online robot toy.

For a small cohort of fast-thinking and occasionally devious go-getters, AI technology has turned into an opportunity not to be feared but exploited, with their employers apparently none the wiser.
 
  • Like
Likes russ_watters and jedishrfu
  • #139
Tough to know how to feel about that. I've seen a bunch of ChatGPT-generated content but haven't used it directly myself. What I've seen has not impressed me: here on PF, a lot of the posts it generated have been basically technobabble, grammatically complete sentences with a collection of technical-sounding words that don't add up to a coherent thought. Which is not surprising for a glorified search/predictive-text engine.

But if your boss doesn't see it, I guess there's no harm in it.
 
  • Like
Likes Vanadium 50, Astronuc and symbolipoint
  • #140
I had ChatGPT write a blurb for my book. It did a great job, much better than I could have. It excels at writing vacuously in a traditional style. There are many people with desk jobs doing exactly that.
 
  • Like
Likes Astronuc
