Window 2 Woodmore

The Student News Site of Woodmore High School



From Oppenheimer to AI: A Critical Debate on Can vs. Should


     I finally got around to watching Oppenheimer last weekend.  With just ten days to go before the movie is expected to be named Best Picture at this year’s Oscars, I thought I might set aside the three hours it takes to get through this major motion picture.  Now this is not a film review, although I did find the movie to be interesting, and a good – not great – film.  As you are all aware by now, it centers on the life of Dr. J. Robert Oppenheimer, the physicist behind the creation of the atomic bomb.  Spoiler alert… we used the bomb to effectively end the Second World War by dropping it on Hiroshima and Nagasaki.  Perhaps the most interesting aspect of the movie is how Dr. Oppenheimer himself became one of the most outspoken critics of any further use of the bomb.  The man who defied all scientific belief that such a weapon was even possible never wanted to see it used again.  And the reason – he worried what would happen if it fell into the wrong hands, like a government or ruler who didn’t adhere to accepted rules of warfare or didn’t care about the long-term consequences of such a weapon becoming the new military norm.  Much of the film centers on the race between the United States and the Nazis to see who could create the bomb first.

 

     The decision to drop the bomb on Japan was controversial then, and it is even more controversial today.  Given that the casualties in both cases were overwhelmingly civilian and the long-term devastation to those regions was so terrible, it should come as no surprise that, despite some close calls, the bomb has never been used in military conflict since.  Which brings us to the question we should all be asking ourselves today: just because we can do something, should we?


     That is the question the good doctor wrestled with for the rest of his life, and his decision to speak out against the very government and military that had made him the most famous man in the world at the time proved to be a costly but necessary one.  His reasoning was simple – whenever you put the safety of the world in human hands, you risk it getting into the wrong hands.  Any technology or scientific breakthrough must be handled with the utmost care.  We’ve had that debate over many things since then: the internet, social media, cloning, stem cell research – just a few examples of things that have huge potential but could also be put to some very scary uses.  And that got me thinking about something that seems to be in the news all the time now – AI.

     Artificial intelligence is nothing new – it’s been around for a very long time.  The AI that exists now, however, is different, because it’s in the hands of basically everyone who owns a smartphone or has an internet connection.  It’s evolving faster than any technology of the last 100 years, and it led me to search Google for something that’s probably going to give me nightmares: “What happens when AI stops listening to the humans who create it?”  It’s called the AI singularity, and some experts believe it’s closer than we think.

     Essentially, it takes pretty smart people to create highly sophisticated AI.  Much like our phones learn what we like and don’t like, and our social media knows what we follow and are more likely to click on, AI uses those same patterns to grow smarter and more intuitive.  The AI singularity, from what I learned today, is what happens when an AI platform becomes smart enough that it essentially teaches itself how to run on its own – meaning it can now choose whether or not to listen to its programmers.

     Now some might argue that the atomic bomb can do a lot more damage with one drop – but the bomb can’t drop itself.  Now consider all the things that are connected to computer mainframes and cloud-based services: the government, the military, national power grids, banking, and finance – all crucial to our day-to-day lives.  So, what happens when one of these AI creations decides to take one or all of them over?  It sounds like one of those big summer blockbuster movies where Will Smith or Tom Cruise saves the day just before the doomsday clock expires and we all die or the machines take over the world.  But it’s not as silly, or as far off, as you might think.  Global terrorism is rarely fought on the ground anymore – it’s a digital world where people thousands of miles away can take down many of our critical systems if they have the technology to do it.

     What about the many common uses of AI that we keep hearing about?  For example, while writing this article I asked my Alexa how to spell a word or two that, no matter how I typed them, basic auto-correct couldn’t figure out.  Let’s look at a simple use of AI: Grammarly.  Grammarly is a form of AI used to help correct mistakes and errors in writing.  Multiple news outlets reported a week ago that a University of North Georgia student had been placed on academic probation for using the software to proofread a paper.  The college’s anti-plagiarism software also accused her of using AI to write the paper, which she strongly denied.  This cost her a scholarship and means she will have to go before a disciplinary committee.  It seems unlikely that, if she had used AI software to write the paper, she would then need to use Grammarly to correct it.  The university does not seem to recognize the irony in such a thought.  The student issued a warning on social media to students of all ages to do everything in their power not to get accused of cheating as she has been.  Which brings me to my next point.

     While there are students of all ages being accused of using ChatGPT to write essays, there are also multiple instances of teachers using it not only to write, but to check whether their students are using it to complete assignments.  Last May, an entire class at Texas A&M University was accused of plagiarism and had their diplomas temporarily denied after a professor incorrectly used ChatGPT to test whether the students had used AI to generate their final assignments.  This would be like the cops breaking into a bank to see if a suspect could actually have pulled off a bank robbery.

     Law enforcement is already reporting an increase in severe types of identity theft and cyber-financial crime.  Certain forms of AI are making it harder than ever to tell what is real from what is fake.  Experts believe this will lead to the creation of deepfakes so believable, and so free of the common signs of alteration, that it will become impossible to trust photographic and video evidence.

     Without trying to sound like a real “downer,” we already have an alarming percentage of the population that doesn’t know what or whom to trust.  Other experts warn of a further widespread loss of faith in what is fact vs. what is fiction.  We all know multiple people who, unfortunately, will believe just about anything that pops up on their screens.  How much farther will trust fall when even the most careful person can no longer tell fact from fiction?

     Plot twist: I used ChatGPT to write this entire article.  Or did I?  The answer: I didn’t, but it is interesting to wonder whether ChatGPT would have been as critical of artificial intelligence as I have been.

     The world has arguably been a better and safer place because the people who make the big decisions asked: we can drop the bomb, but should we?  Our generation is lucky – we have not had to live under the threat of the bomb being dropped.  Our parents didn’t live through the Cuban Missile Crisis the way their parents and grandparents did.  Our generation is living in the age of what I prefer to call artificial technology.  I leave out “intelligence” because the people who decided not to push the button had to use not only intelligence but emotion, diplomacy, and thinking ten steps ahead.  I can’t say with much confidence that a computer program designed to be “artificially intelligent” will possess the same skill set those world leaders did when they decided not to push the button.  It is my hope that the programmers deciding the fate of AI will focus more on the question of should.

About the Contributor
Isabelle Bush
Isabelle Bush, Student Event/News Editor
Isabelle Bush is a Junior at Woodmore High School. This is Isabelle’s second year on the journalism staff. Isabelle enjoys spending time with family and friends and watching football and basketball.
