A Student’s Reflections on Artificial Intelligence

(Note: I have very limited knowledge of AI, only slightly more than the average citizen. What follows is in no way comprehensive; it is simply what felt relevant to write at the time.)

----

On Witnessing the Advent of AI

I find myself particularly disconcerted today about the development of AI (and equally impressed), and I thought it might be a good idea to document what it's like for those of us in this year (it's May 9th, 2023) as we witness the advent of AI. It might be something we will look back on and only vaguely remember how it felt. So I thought, “shit, let me write a primary historical source.”

Anyways, I begin now.

----

Today I sat in lecture for a class on Research Methods in Psychology.

Bored, as I've taken the lecture before, I decided to browse Reddit.

I came across a post about using GPT-4 to create mind maps (basically flowcharts) of various concepts. I was impressed, so I decided to try it out. Initially, I asked it to create a detailed mind map of the field of psychology. Within a minute I had a comprehensive flowchart of the basic concepts of psychology and their sub-topics. I was very impressed. I continued to mess around with it, asking for mind maps of the sense of self, of spirituality, of Zen Buddhism. They were all impressive.

So then, as the TA began explaining the final project (a research proposal, specifically on a topic relevant to cyberbullying), I had a ‘bright’ idea:

"GPT, Please create a research proposal based on a topic relevant to cyber bullying"

"Sure, I can do that: ...."

In less than 60 seconds, I had my entire final project completed. This was the first day of class…

Suddenly, I was no longer simply impressed, I was scared.

If this class simply exists to teach students how to write a research proposal, and GPT can do it faster, probably better (than most), and without any effort, then isn't the class entirely redundant? Why would even a real researcher with a doctorate write their own proposal? Just input your specifics into GPT and have it save you an hour (or more).

Shocked, I realized that perhaps my entire education, or at least most of it, might be largely redundant in 5 years. Thoughts ran through my mind: "The entire education system is going to change, my degree might not be relevant in 2030, I’ll be less valuable than individuals who go through a psych degree trained to engage with the field in a fully AI-integrated way."

I spoke to the TA after class and explained that my one-line prompt had completed the course's final project in under 60 seconds. He responded directly: “Yeah, you could totally use GPT and I honestly probably wouldn’t be able to tell, and because of that, I don’t actually care if you do. It saves you time, and real researchers could use this tool and save themselves time too.”

----

I interrupt the previous flow of thought to say that an acquaintance on campus came up to me while I was mid-sentence, and we chatted, eventually getting onto the topic of AI.

We both discussed our fears of being put out of a job. He wants to direct films; I explained that AI can already write scripts and will soon be able to create entire movies with minimal prompting. He speculated that he wouldn't have a job, but pointed out that something like theatre wouldn't be (entirely) replaceable. I remarked that interest in theatre (and orchestra, his other example) would probably decline significantly with the advent of alternative, AI-driven forms of more stimulating entertainment, similar to how the advent of things like television and social media has decimated interest in previous peak forms of entertainment.

We also discussed how insane it is just how fast AI has developed. We wouldn't even have been having this conversation a mere 5 months ago. It reminded me of how, when AI art was released, we had a lengthy discussion in my 19th-century art history class early last December.

It had just hit the popular media scene and was a hot topic of conversation for a week or two. My professor and I dialogued a bit about the future and about finding meaning in our lives in a society fully integrated with AI. Prior to that, we had been discussing a painting of a laborer in a field and the Protestant themes of finding meaning in our labor. How would we find similar meaning without our jobs? What will the art scene look like in the future? Will artists be out of a job?

This is a core memory for me, one I have recalled at least 10 or 12 times since that day. I see it as the first moment I witnessed questions about the future of AI in popular society. AI was no longer in the future; it had arrived.

According to Google Trends, interest in the topic "AI Art" spiked around the first two weeks of December, increasing 588% from around the last week of November. This conversation in art history class took place during this time.

It was also at this time (Nov. 30th, to be specific) that ChatGPT (from OpenAI) was released and skyrocketed into popular media.

I recall discussions in an online forum with many people who work in the tech sector or as developers, and their concerns about job security in the face of a future where AI can write the code on its own.

One individual from this group, who was a computer scientist at one point (iirc), explained that he predicts humanity won't exist in 10-15 years, citing the "godfather of AI" recently predicting the advent of general AI superintelligence within 5-10 years (iirc), about 20 years sooner than he previously expected. He cited troubles with AI “alignment” as the basis of his prediction, suggesting that an AI superintelligence would be essentially impossible to control. He, like myself, feels that a total temporary ban on AI development is appropriate until effective safeguards and policies have been put in place.

I don’t personally expect that this 10-15 year prediction is accurate, but it speaks volumes about how society feels about the future of AI. According to polling from Monmouth University, only 9% of respondents feel AI will do more good than harm to society, while 46% believe AI will do equal harm and good, and 41% believe it will do overall harm to society. 55% of respondents felt very or somewhat worried that AI poses a serious risk to humanity in the future.

Why, if the majority of people fear the continued development of AI, are we not having more serious conversations about its future? Why are we not doing something now instead of trying to fix it later?

I know of a similar conversation: climate change. We’ve known for decades that this was coming, and many feel that it’s too little too late. I fear the same will happen with AI, especially since, once we are faced with its harmful effects, it will be harder to change the nature of its use when it is already fully integrated into society.

--

Consider the nefarious uses of AI. Recently, in the news, I saw an article about a woman who received a phone call from her daughter, sobbing that she had been kidnapped and would be killed (or something, I can't recall) unless the mother paid the kidnappers. The mother believed the whole thing; it was AI the entire time.

There are so many examples of nefarious (and likely) uses of AI to harm society and individuals that I couldn't possibly list even 1% of them. But consider a few. There is the damage done by effective AI-driven political misinformation, especially deepfake videos of candidates, perhaps mere days before an election, (convincingly) making extremely egregious statements or supporting controversial policies that they don't, in real life, support. I imagine scams targeting the elderly will become so convincing that they are effective virtually 100% of the time. And I can even imagine a world where AI video filters (such as those on TikTok, which are extremely effective now compared to a year ago; they match pixel by pixel without any discernible tells), in concert with voice filters, are used to prey on children online over video chat by convincingly pretending to be their age. These are just some (a small, small number of the total) of the potential nefarious uses of AI. I know now that I can't even currently imagine what malicious tasks AI will be able to do in the future, just as, a mere 5 months ago, writing an entire research paper with AI was not something that had ever occurred to me. In other words, the future is darker than I can imagine…

I have always held the opinion that AI is a Pandora's box that simply should never have been opened (too late!).

----

I've always been someone who doesn't really like living in a digitized society. It's always felt a bit "wrong" to me, as if we're somehow divorced from what is natural. I pine for the days before social media existed, wondering how my peer group's experiences would have been different without it, whether we would have developed socially in a more satisfying way, and how much better my youth would have been if it hadn't been defined by spending 70-80% of my free time on my phone. I often envy the Amish, in an actual, unironic way.

I have also often wondered, growing up, whether I would be happier living in the woods, in a simple home or cabin, than living in this society. Now it seems more likely than ever.

I am concerned about what a future with AI fully integrated into our daily lives would look like.

Yes, there are so many possible benefits to AI: medicine, narrowing disability gaps and creating more equal opportunity, and helping us advance even our understanding of ourselves. I recently used AI to get feedback on my and a friend's communication styles following an argument we had, by copying and pasting the dialogue (it was over the internet) into Claude, an AI LLM produced by Anthropic, similar to ChatGPT. I found Claude to be extremely insightful; it helped point out weak points in both our attempts at communicating while providing encouragement and useful advice for future engagements, all while making each other's points clearer to the other in ways we hadn't seen before using Claude. I immediately thought of the potential for implementation in couples therapy.

All that being said, I take the opinion that there is a healthy relationship to technology and an unhealthy relationship to technology, and I think society's relationship is heavily toxic and harmful.

If we cannot take a step back, slow down (or temporarily stop altogether), and get clear about how to proceed, we will likely destroy ourselves.

As for me, I remain afraid of the future but willing to try to adapt as best as possible. On the other hand, I think I hear the woods calling my name louder than ever before.

~ Grant
