Information Security Risks of AI Tools

Overview:

Individuals and institutions are increasingly adopting AI tools such as Grammarly, OpenAI's ChatGPT, and QuillBot because of their ability to enhance writing quality and efficiency. While these tools offer real benefits, they also pose challenges to academic integrity and information security. Saint Louis University (SLU), like most schools, must address these risks to maintain secure and ethical practices across its academic and operational frameworks.
 

Impact Analysis & Malicious Use of AI:

Social Engineering:

Social engineering involves manipulating individuals into revealing confidential information or performing actions that compromise security. AI tools enable bad actors to craft highly personalized and convincing messages at scale.

  • Impact:
    • AI-powered language models can be misused to generate phishing emails or messages that convincingly mimic legitimate communications and request sensitive information such as login credentials or personal data.
  • How it can happen:
    • Students or staff unknowingly upload sensitive data to AI tools, such as email drafts, academic content, or documents, which attackers could process and exploit to craft tailored phishing schemes. For example, a student’s uploaded research proposal could be used to create a phishing email targeting the student or their advisor. (A sketch of one heuristic for spotting masquerading links follows this list.)
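
For illustration, here is a minimal, hedged sketch (written for this advisory, not an SLU tool) of one heuristic that mail filters and cautious readers apply: comparing a link's visible text against the domain it actually points to. AI-generated phishing often dresses a malicious link in a trusted-looking name. The sketch uses only Python's standard library, and the sample email body is invented.

```python
# Illustrative sketch only: a toy heuristic for one common phishing tell,
# links whose visible text names a different domain than the real target.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkCollector(HTMLParser):
    """Collect (href, visible_text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []    # finished (href, text) pairs
        self._href = None  # href of the <a> tag we are inside, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def suspicious_links(html_body: str) -> list[str]:
    """Flag links whose visible text looks like a domain that differs
    from the domain the href actually points to."""
    parser = LinkCollector()
    parser.feed(html_body)
    findings = []
    for href, text in parser.links:
        target = urlparse(href).hostname or ""
        # Anchor text that looks like a domain but does not match the real
        # target is masquerading, a classic phishing tell.
        if "." in text and target and text.lower().rstrip("/") not in target.lower():
            findings.append(f"text says '{text}' but link goes to '{target}'")
    return findings


if __name__ == "__main__":
    body = '<p>Verify now: <a href="https://slu-login.example.net">myslu.slu.edu</a></p>'
    for finding in suspicious_links(body):
        print("SUSPICIOUS:", finding)
```

Real filtering combines many such signals (sender reputation, SPF/DKIM checks, URL reputation); a single heuristic like this is a teaching aid, not protection.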

Generative Adversarial Networks (GANs) and Deepfakes:

GANs are AI systems that generate realistic images, videos, or audio by learning from large datasets. Deepfakes, one product of GANs, are manipulated media that appear authentic, such as videos of people saying or doing things they never actually did. (A simplified sketch of the adversarial training loop appears after the list below.)

  • Impact:
    • Attackers use data uploaded to AI tools (e.g., video or audio snippets shared for proofreading or editing) to generate fake media. A student or staff member’s image or voice could be used to create a video impersonating them, leading to fraud or reputational harm.
  • How it can happen:
    • Students upload multimedia files, such as project videos or recorded presentations, to AI platforms. These files could be exploited to create deepfake videos that impersonate them and request financial transfers or sensitive information from others.
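
To make the mechanism above concrete, the following is a heavily simplified sketch of the adversarial training loop behind GANs, shrunk to one-dimensional toy data so the generator-versus-discriminator dynamic is visible. It assumes the PyTorch library; the network sizes, the target distribution, and the step count are arbitrary demo choices.

```python
# Minimal, illustrative GAN training loop (PyTorch) on 1-D toy "data":
# a generator learns to mimic the real distribution because a
# discriminator keeps grading its output.
import torch
import torch.nn as nn

real_dist = lambda n: torch.randn(n, 1) * 0.5 + 3.0  # "real data": N(3, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = real_dist(64)
    fake = G(torch.randn(64, 8))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = loss(D(real), torch.ones(64, 1)) + \
             loss(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call fakes real.
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated mean should drift toward 3.0, the mean of the real data.
print("generated mean ~", G(torch.randn(1000, 8)).mean().item())
```

Production deepfake systems train far larger networks on images, video, or audio, but the loop is the same: the generator improves precisely because the discriminator keeps grading its output.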

AI-Driven Malware Development:

  • Impact:
    • Cybercriminals can use AI tools to craft malicious scripts or enhance malware functionality. For instance, a seemingly harmless resume/CV template or Excel project file shared via AI-driven platforms could embed scripts that install malware when opened.
  • How it can happen:
    • Students uploading or downloading resumes, project files, or application templates to AI tools for editing or enhancement might unknowingly introduce vulnerabilities if the tools are not secure.
    • Example: A student downloads a "corrected" resume that includes malicious code, infecting their system when opened. (A sketch for checking Office files for embedded macros appears after this list.)
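
As a deliberately simple illustration of a pre-opening check, modern Office files (.docx, .xlsx, .pptx and their macro-enabled variants) are ZIP archives, and VBA macro code is stored in a vbaProject.bin entry inside that archive. The sketch below, using only Python's standard library, flags that entry in a downloaded file. It detects embedded macros only, not exploits or other payloads, so treat it as one layer of caution, not a verdict.

```python
# Illustrative check only: modern Office files are ZIP archives, and VBA
# macros live in a 'vbaProject.bin' entry. Spotting that entry in a file
# that should not contain macros is a cheap red flag. This does NOT catch
# every threat (e.g., exploit payloads or embedded OLE objects).
import sys
import zipfile


def has_vba_macros(path: str) -> bool:
    """Return True if the Office file contains a VBA macro project."""
    try:
        with zipfile.ZipFile(path) as zf:
            return any(name.lower().endswith("vbaproject.bin")
                       for name in zf.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.doc/.xls) are not ZIPs; inspect another way.
        print(f"{path}: not a modern Office (ZIP) file")
        return False


if __name__ == "__main__":
    for doc in sys.argv[1:]:
        if has_vba_macros(doc):
            print(f"WARNING: {doc} contains embedded macros; do not open it")
        else:
            print(f"{doc}: no VBA macro project found")
```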

Global Disinformation Campaigns:

  • Impact:
    • Attackers might misuse academic data or content uploaded to AI tools to generate misleading material, such as fake research findings or fabricated student records, to spread misinformation. AI-powered social media bots can then amplify this material to reach a wide audience quickly.
  • How it can happen:
    • Students upload content from academic papers or sensitive institutional data to AI tools. These documents could be accessed or repurposed to create false narratives about institutional policies or academic misconduct, undermining trust and credibility. (A toy sketch for spotting coordinated reposting of such narratives follows this list.)
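
One simple signal investigators use against this kind of amplification is many nominally distinct accounts posting near-identical text. The toy sketch below normalizes each post and groups duplicates by hash; the account names and posts are invented, and real platforms rely on far richer features, so this is illustrative only.

```python
# Toy illustration: coordinated bot amplification often reposts the same
# text verbatim or near-verbatim. Normalizing posts and grouping them by
# hash surfaces suspicious clusters. All data below is invented.
import hashlib
import re
from collections import defaultdict

posts = [
    ("@acct1", "BREAKING: SLU fabricated its research data!!!"),
    ("@acct2", "breaking: slu fabricated its research data"),
    ("@acct3", "Breaking - SLU fabricated its research data."),
    ("@acct4", "Go Billikens, great game last night!"),
]


def fingerprint(text: str) -> str:
    """Hash of the post with case, punctuation, and spacing removed."""
    normalized = re.sub(r"[^a-z0-9 ]", "", text.lower())
    normalized = " ".join(normalized.split())
    return hashlib.sha256(normalized.encode()).hexdigest()


clusters = defaultdict(list)
for account, text in posts:
    clusters[fingerprint(text)].append(account)

for accounts in clusters.values():
    if len(accounts) > 1:  # same normalized text from multiple accounts
        print("possible coordinated amplification:", ", ".join(accounts))
```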
       

Possible Malicious Actors:

Several groups are leveraging AI for harmful purposes:

  • Cybercriminals: Targeting individuals and organizations for financial gain.
  • Nation-State Actors: Engaging in espionage and disrupting critical infrastructure.
  • Hacktivists: Leveraging AI tools to create misinformation campaigns for ideological or political agendas.
  • Insider Threats: Exploiting AI tools within organizations to access sensitive information or sabotage systems.

State-Sponsored Threat Groups:

  • Russia: Cyber espionage targeting critical military infrastructure and operational systems.
  • China: Advanced custom malware and zero-day vulnerabilities for long-term infiltration.
  • North Korea: Embedding IT workers in Western companies for espionage and financial gain.
  • Iran: Cyber operations tied to geopolitical conflicts, targeting regional governments and telecom infrastructure.
     

Recommendations for SLU Community:

  1. Use Approved AI Tools Only:
    Always use AI tools vetted and approved by SLU's IT department, such as Copilot. Avoid uploading sensitive data to unauthorized or unsecured public AI platforms.
  2. Beware of AI-Enhanced Phishing:
    Stay alert to phishing emails generated by AI, which may appear highly convincing. Verify unexpected requests for sensitive information through official SLU channels.
  3. Train on Responsible AI Use:
    Attend SLU workshops or follow SLU Publishing for guidance on ethical and secure AI practices, ensuring AI tools are used as assistants rather than replacements for critical thinking or originality.
  4. Limit Data Shared with AI Tools:
    Minimize the amount of sensitive or confidential data uploaded to AI platforms. For example, avoid sharing personal details or proprietary SLU information with AI tools to mitigate potential risks (see the redaction sketch after this list).
  5. Think Before Clicking or Sharing:
    Avoid clicking on links or sharing information from unknown or unverified sources. Stick to secure platforms for sharing files and messages. Install antivirus software and keep it updated. Regularly update your computer, phone, and apps to fix security holes.
  6. Learn and Report Issues:
    Stay informed about AI-related risks and share tips with others. Report any suspicious activities or emails to SLU's Service Desk right away.
  7. Backup Important Files:
    Save copies of your important documents, photos, and data using secure cloud services or external drives in case of a cyberattack.
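
To make recommendation 4 practical, here is a small sketch (an example written for this advisory, not an SLU-provided tool) that redacts a few obvious identifiers, email addresses, US-style phone numbers, and nine-digit ID numbers (an assumed format), from a draft before it is pasted into an AI tool. The patterns are illustrative and will not catch everything, so always review what you share.

```python
# Hedged sketch for recommendation 4: redact obvious identifiers before
# pasting text into an AI tool. The patterns are illustrative examples;
# they will not catch every kind of sensitive data.
import re

PATTERNS = {
    "[EMAIL]": r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b",
    "[PHONE]": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    "[ID]":    r"\b\d{9}\b",  # e.g., a nine-digit student ID (assumed format)
}


def redact(text: str) -> str:
    """Replace matches of each pattern with its placeholder label."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, label, text)
    return text


if __name__ == "__main__":
    draft = ("Please proofread: contact Jane Doe at jane.doe@slu.edu or "
             "314-555-0199; her student ID is 001234567.")
    print(redact(draft))
    # -> Please proofread: contact Jane Doe at [EMAIL] or [PHONE];
    #    her student ID is [ID].
```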
     

As always, report any incidents to the Service Desk:

If you notice anything unusual with your account, such as unauthorized access, strange emails, or unexpected changes, report it to the SLU Service Desk right away. Early reporting helps protect your information and prevents further harm.

Phone: 314-977-4000 
Email: ask@slu.edu 
Create a ticket via the AskSLU Service Portal: ask.slu.edu 