Overview:
Generative AI tools such as ChatGPT and Microsoft Copilot are transforming how we write, research, code, and learn. These tools use large language models (LLMs) trained on massive datasets to generate human-like responses to user prompts, making them valuable for brainstorming, drafting, and enhancing productivity.
However, while powerful, these tools are not without risk. AI systems do not think or reason like humans. They can generate content that appears to be accurate but may not be. Additionally, they may store user input, increasing the risk of data exposure if sensitive information is shared. Misuse or over reliance on AI can lead to unintended consequences ranging from misinformation and ethical concerns to potential security breaches.
This article highlights the most common risks and offers practical, easy-to-follow safety practices for using AI responsibly.
Common Risks of AI Tools:
Risk Area |
Real-World Example |
Recommended Action |
Data Leakage |
Pasting confidential info
(e.g., academic records,
personal data) into an
AI prompt |
Never share private or
sensitive data with any AI tool |
Misinformation |
AI can generate wrong facts,
fake citations, or biased content |
Always fact-check and
verify results using trusted sources |
Prompt Injection |
Bad actors trick AI into
outputting inappropriate
or harmful content |
Do not reuse suspicious prompts
or click on unknown AI outputs |
No Privacy by Default |
Free tools may store your
queries for training or logging |
Use only approved AI platforms
and understand their data policies |
While AI tools continue to improve in capability, they lack critical thinking, ethics, and contextual awareness. This means they can generate convincing but incorrect or biased outputs, or even replicate harmful patterns found in their training data. Users must act as a human checkpoint, applying judgment, ethics, and institutional policies when working with AI platforms.
Securing Input Data on AI Platforms:
AI tools learn from user input data to become more precise to train their LLM’s, Throughout this process it’s crucial to ensure that sensitive data is not shared through these platforms.
- Users may unintentionally submit confidential info (e.g., student records, health info, internal research).
The users can check the AI platform settings to control their data sharing. The settings for the data control for a few of the famous LLMs are:
- ChatGPT: Turn off “Chat History & Training” under Settings → Data Controls.
- Microsoft Copilot (365): Settings and privacy → Dynamics 365 applications → toggle off include my data in Dynamics 365 applications
Safe Usage Guidelines:
- Don’t share sensitive or personal information like passwords, IDs, or internal files.
- AI tools might store what you type
- Avoid entering anything you wouldn’t want a stranger to see including:
- Login credentials
- ID numbers
- Private school/work documents
- Use AI for drafting or brainstorming, not for final answers.
- ChatGPT or Co-Pilot should be used as helpers - not decision-makers
- Let them assist with ideas or outlines
- Don’t treat their responses as final or flawless
- Assume all inputs may be stored or visible - keep it clean.
- Don't treat these tools as private
- Your messages might be used to improve the system
- Avoid typing anything you wouldn’t want saved or reviewed later
- Avoid using AI for legal, medical, or financial decisions.
- AI doesn't replace real professionals
- Seek out qualified experts for serious topics including:
- Health
- Finance
- Legal matters
- Always double-check facts and sources manually.
- AI can "hallucinate" (make things up)
- Always verify the facts through trusted sources including:
- Official websites
- Textbooks
- Instructors and other experts