Taming the Technology Thunderbolt: How Higher Ed Can Avoid Gen AI Cyberattacks
By Julie Moog, Managing Director for TIAA’s Cybersecurity and Fraud Management Organization
Note: Thank you to TIAA, one of REN-ISAC’s amazing partners, for helping to bring the highest level of cybersecurity awareness and information to our members and the public.
Our world changed in 2023 because a thunderbolt technology came into our lives like no other we’ve ever seen. You know the one: ChatGPT, a type of generative artificial intelligence (AI) that you can ask to write a 5,000-word article on any topic, and it will send it back within seconds. Or a college term paper, or a blog post, or an executive summary, or software code.
Wow.
Stunningly powerful, without question. It’s going to fundamentally change how professors teach and students learn, in profound ways not yet fully understood. It’s also ratcheting up the potential for a larger number of more harmful and harder-to-detect cyberattacks, on a much broader scale and with more sophistication, against everyone involved in higher education, from administrators and professors to students and employees. And it will do so for many years to come.
Nothing has come along in quite some time, if ever, that has the potential to upend the very foundations of teaching and learning and how colleges and universities protect themselves from cyberattacks.
What You Will Learn
In this article we’re going to share a brief explanation of what generative AI is, why it’s so powerful, and the latest statistics illuminating its rapid adoption. From there we’ll get into prevalent generative AI cyberattacks and threats and how you can detect and avoid them.
We’ll then sharpen our focus on higher education with updates on the latest programs focused on use of this technology. We’ll also address trends in student use, several of which are concerning, with associated insights, data, and actions to consider. We’ll wrap up with important actions you may want to consider to help avoid these new types of cyberattacks.
What is Generative AI?
Generative AI is a tool that, when given a question, draws on the enormous volumes of text it was trained on to synthesize and return an answer to the prompter, often coherent and well written, within seconds – much faster than a person can. For example, type a query such as “what is cybersecurity?” into a generative AI tool and it swiftly produces content that answers your question, in whatever form you ask for: an article, a blog post, or an executive summary.
The key concept to understand is that the tool learns patterns in its training data and, when prompted, generates new text based on those patterns. Often, the more data the tool ingests, the more accurately it recognizes and predicts word patterns, hence the coherent text.
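To make that idea concrete, here is a minimal, illustrative Python sketch of next-word prediction using a toy bigram model. The corpus, words, and function are invented for illustration; real generative AI models are vastly larger and more sophisticated, but the core idea of predicting the next word from learned patterns is the same.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the massive datasets real models train on.
corpus = (
    "cybersecurity protects systems and data "
    "cybersecurity protects networks and users "
    "generative ai learns patterns and generates text"
).split()

# Learn simple "patterns": which words tend to follow which.
next_words = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current].append(nxt)

def generate(start_word, length=8):
    """Generate text by repeatedly predicting a likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("cybersecurity"))
```

The more (and better) text such a model sees, the more convincing its predictions become, which is exactly why the output of modern tools reads so fluently.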
This amounts to a serious time saver and productivity enhancer, as well as a powerful brainstorming tool and jumping-off point for better critical thinking. How smart is it? Consider these stats: it scored a 1410 on the Scholastic Aptitude Test (SAT) – some 400 points higher than the national average for this year’s high school students – and it also passed the Law School Admission Test, according to MIT Technology Review.
Growth Trends and Statistics
Because the technology is so capable, and therefore so compelling and useful, the uptake of generative AI since its introduction in November 2022 has been staggering and unprecedented. One million people used ChatGPT, the first-to-market tool developed by OpenAI, within the first five days of commercial availability – the fastest adoption ever of an Internet technology, according to Statista. That’s much quicker than Facebook, Twitter, or Netflix.
What all this means is that this technology will probably have a bigger impact on our lives than the Internet, the search engine, or the smartphone. Contemplate its power: the tool can link pretty much all data accumulated in human history; categorize, segment, and learn from it; and then develop text, actions, recommendations, executive summaries, and on and on.
Stop and think about this: In April of this year the OpenAI website had 1.8 billion visits – seven times higher than last December, according to Similarweb. Viewed more globally, generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually to the economy, according to McKinsey. You read that right. That’s trillion with a T.
In cybersecurity, this technology will be used for good – for example, real-time analysis of unusual online behavior that helps detect cyberattacks, and tools that let just about anyone learn virtually anything from anywhere. But it will also be harnessed by bad actors to commit more sophisticated and dangerous cyberattacks.
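As a simple illustration of what “analysis of unusual online behavior” can look like in practice, here is a minimal, hypothetical Python sketch that flags accounts whose daily login volume deviates sharply from their own history. The account names, numbers, and threshold are invented; production systems use far richer signals and, increasingly, AI models to do this at scale.

```python
from statistics import mean, stdev

# Hypothetical per-account history of daily login counts.
login_history = {
    "student_a": [3, 4, 2, 3, 5, 4, 3],
    "admin_b":   [10, 12, 11, 9, 10, 11, 10],
}

# Today's observed login counts (admin_b looks unusual).
today = {"student_a": 4, "admin_b": 85}

def flag_anomalies(history, observed, threshold=3.0):
    """Flag accounts whose activity is far outside their normal range."""
    flagged = []
    for account, counts in history.items():
        avg, sd = mean(counts), stdev(counts)
        z = (observed[account] - avg) / sd if sd else 0.0
        if abs(z) > threshold:
            flagged.append((account, observed[account], round(z, 1)))
    return flagged

# admin_b's spike lands far above the threshold and would be flagged for review.
print(flag_anomalies(login_history, today))
```

The same pattern – learn what “normal” looks like, then surface deviations – is what generative AI-era defenses apply to logins, network traffic, and email behavior.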
Impact on Higher Education
Not surprisingly, given higher education’s widespread cyberattack vulnerabilities, bad actors are targeting the sector’s use of this technology. So far this year there have been 27 confirmed ransomware attacks against higher ed institutions, according to Campus Safety Magazine. And Emsisoft reports 44 colleges and universities felt the brunt of ransomware attacks last year. Expect these numbers to rise, unfortunately, over the next several years because of the power of generative AI, which presents a whole new set of challenges in detecting and preventing ransomware and many other types of cyberattacks.
Actions to Avoid Cyberattacks
One of the most pressing problems is that bad actors can use generative AI to create more convincing and harder-to-detect phishing emails, faster. As such, it couldn’t be more crucial right now for people in higher education to be extra careful about clicking on suspicious email links or attachments. If an email strikes you as too accurate, too well written, and too personalized, be suspicious. It may well be a generative AI phishing attack.
Increasingly, you’re going to see bad actors use generative AI to create personalized spear-phishing attacks: highly targeted messages crafted from an organization’s marketing materials and public website. Unlike typical phishing emails sprayed at random recipients, spear-phishing emails reference more personal details about an individual than is normal, and that alone should make the recipient suspicious.
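As a rough illustration of the kind of automated check a campus security team might layer on top of user vigilance, here is a minimal, hypothetical Python sketch that counts common phishing red flags in an email: urgent language, links that don’t match the sender’s domain, and requests for credentials. The function name, domains, and wording are invented for illustration; this is a simplified teaching example, not a substitute for a real email-security gateway.

```python
import re

def phishing_score(sender, subject, body, links):
    """Return a simple red-flag count for an email; higher means more suspicious."""
    score = 0
    sender_domain = sender.rsplit("@", 1)[-1].lower()

    # Urgent, pressure-inducing language is a classic phishing tell.
    if re.search(r"\b(urgent|immediately|verify your account|password expires)\b",
                 subject + " " + body, re.IGNORECASE):
        score += 1

    # Links whose domain doesn't match the sender's domain are suspicious.
    for link in links:
        link_domain = re.sub(r"^https?://", "", link).split("/")[0].lower()
        if not link_domain.endswith(sender_domain):
            score += 1

    # Requests for credentials or payment details are another red flag.
    if re.search(r"\b(ssn|social security|credit card|login credentials)\b",
                 body, re.IGNORECASE):
        score += 1

    return score

# Hypothetical example: a spoofed "registrar" email with an off-domain link.
print(phishing_score(
    sender="registrar@university-helpdesk.example.com",
    subject="URGENT: verify your account today",
    body="Your password expires. Confirm your login credentials here.",
    links=["http://univ-portal.example.net/reset"],
))  # -> 3
```

Note the limitation: generative AI lets attackers write emails that trip none of these wording checks, which is why the human habits described above still matter.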
Don’t Want Your Data Public? Don’t Put It into Generative AI
Higher ed professionals should also keep these questions top of mind: What happens if your information is entered into a generative AI tool? Will it be protected? Or deleted? Is it safe?
The answer to all these questions is to avoid sharing personal data you don’t want public with a generative AI tool. To that end, be careful what questions you ask the tool and what words you type in. If you enter personal information about yourself into a generative AI prompt, that information could potentially be used to generate a highly personalized cyberattack against you.
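For institutions building their own services on top of generative AI, one common safeguard is to scrub obvious personal identifiers from a prompt before it ever leaves campus. Here is a minimal, hypothetical Python sketch of that idea; the patterns, names, and example text are invented, and real deployments rely on dedicated data-loss-prevention tooling with far more thorough detection.

```python
import re

# Very simple patterns for a few common U.S. identifiers; real redaction
# tools cover many more data types and formats.
PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "SSN":   r"\b\d{3}-\d{2}-\d{4}\b",
    "PHONE": r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",
}

def redact(prompt):
    """Replace obvious personal identifiers with placeholders before
    the prompt is sent to any external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = re.sub(pattern, f"[{label} REDACTED]", prompt)
    return prompt

raw = ("Summarize this note about Jane Doe, SSN 123-45-6789, "
       "reachable at jane.doe@example.edu or 555-867-5309.")
print(redact(raw))
# -> "Summarize this note about Jane Doe, SSN [SSN REDACTED], reachable at [EMAIL REDACTED] or [PHONE REDACTED]."
```

Even with a filter like this in place, the safest habit is the one above: simply don’t type data you wouldn’t want public into a generative AI prompt.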
New Higher Ed Generative AI Programs
The good news is that across the higher education landscape programs are being launched to address the harmful potential of generative AI. The U.S. National Science Foundation (NSF) recently teamed with higher ed institutions in a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The goal is to advance foundational AI research for developing new cybersecurity approaches and promoting trustworthy and ethical AI technologies and systems.
In a $20 million project within this program, the University of California at Santa Barbara will establish an AI Institute for Agent-Based Cyber Threat Intelligence and Operation.
Twenty other higher ed institutions will join this overall NSF effort, including the University of California at Berkeley, Purdue University, and Georgia Tech.
USC Launches Program
Elsewhere, the University of Southern California has launched a $1 billion+ initiative to expand and infuse advanced computing throughout the university’s programs and curriculums. The initiative will focus on advancing AI and machine learning software, the technology that fuels generative AI.
There’s more. University of Iowa professors are embedding AI into their curriculums to teach its benefits and drawbacks, and the professors are learning alongside their students.
Higher ed institutions should recognize the real cyberattack harms generative AI technology can produce. These institutions are prime targets because of a culture that promotes open data sharing and collaboration; an abundance of sensitive student, professor, and administrator data; and the dispersed nature of college data networks, which creates more vulnerabilities for cyber attackers to exploit.
Student Use of Generative AI
One group bad actors are targeting, and will continue to target, is students. The key is teaching students to question generative AI bias and inaccuracies. But it will be a challenge. Eighty-five percent of surveyed college and high school students say studying with ChatGPT is more effective than studying with a person, reports Intelligent.com. Nine out of ten prefer studying with ChatGPT instead of a tutor.
The major takeaway is that students may not be as careful about avoiding cyberattacks, thereby opening more doors for bad actors to penetrate higher ed networks. To address this problem, it’s crucial to train students on the broad range of new generative AI-enabled cyberattacks, such as more convincing phishing emails. For their part, professors should start using generative AI tools to strengthen security, boost productivity, and improve student teaching and learning.
Three Actions to Consider
There are plenty of smart steps to take to help avoid generative AI cyberattacks. We want to emphasize three as being especially important.
First, start using generative AI tools consistently but carefully. The more you know, the better you’ll be able to detect and avoid these cyberattacks. This technology is not going away; it will only become more pervasive. Those who don’t know how to use it risk being more vulnerable to attacks.
Second, keep a person in the loop when using generative AI. People are crucial for catching mistakes and bias: as impressive as its capabilities can be, this technology does not always produce accurate results, so treat its output carefully because it can be factually wrong or biased.
And third, watch out for emails that sound too convincing and too well written; they may well be generative AI phishing attacks. Don’t click on links or attachments in a suspicious email.
Looking Ahead to 2024
There’s no doubt a top agenda item all across higher education next year will be how to adjust to generative AI. From teaching to studying to running operations to detecting cyberattacks, this technology will be front and center and top of mind.
We’ve entered a new era in higher education. The stakes are high. All of that sensitive data needs to be protected, and it is both possible and imperative that we do so.