AI tools are being adopted quickly, with businesses introducing AI chatbots for many different purposes. These chatbots can help write essays, act as conversational companions, remind you of tasks such as brushing your teeth, take meeting notes, and more. Most of these tools are powered by large language models (LLMs), such as those behind OpenAI's ChatGPT.
Because LLMs are trained on vast amounts of data scraped from the internet, they can threaten users' privacy. Many users are unaware of the privacy risks tied to using these tools.
A recent survey found that over 70% of users interact with AI tools without fully understanding the dangers of sharing personal information, and 38% unknowingly shared sensitive details, exposing themselves to identity theft or fraud. Inputting certain prompts could also lead these tools to share personal data.
Here’s how you can protect your privacy while using AI tools.
Be cautious of social media fads
A recent viral trend on social media encouraged users to ask an AI chatbot to "Describe my personality based on what you know about me." Users were further encouraged to share sensitive data like their birth date, hobbies, or workplace. However, these details can be pieced together and exploited for identity theft or account-recovery scams.
Risky Prompt: “I was born on December 15th and love cycling—what does that say about me?”
Safer Prompt: “What might a December birthday suggest about someone’s personality?”
Do not share identifiable personal data
According to experts from TRG Datacenters, users should frame their queries or prompts to AI chatbots more broadly to protect their privacy.
Risky Prompt: “I was born on November 15th—what does that say about me?”
Safer Prompt: “What are traits of someone born in late autumn?”
Avoid disclosing sensitive information about your children
Parents can unintentionally share sensitive details such as their child’s name, school, or routine while interacting with an AI chatbot. This information can be exploited to target children.
Risky Prompt: “What can I plan for my 8-year-old at XYZ School this weekend?”
Safer Prompt: “What are fun activities for young children on weekends?”
Never share financial details
Over 32% of identity theft cases stem from online data sharing, including financial information, according to a report by the US Federal Trade Commission (FTC).
Risky Prompt: “I save $500 per month. How much should I allocate to a trip?”
Safer Prompt: “What are the best strategies for saving for a vacation?”
Refrain from sharing personal health information
Since health data is frequently exploited in data breaches, avoid sharing personal medical histories or genetic risks with AI chatbots:
Risky Prompt: “My family has a history of [condition]; am I at risk?”
Safer Prompt: “What are common symptoms of [condition]?”
Additional tips to stay safe online
– Avoid combining identifiable details in queries (e.g., name, birthdate, and workplace).
– Choose platforms with strong privacy features like “data deletion after sessions.”
– Prefer platforms that comply with GDPR, HIPAA, or similar data protection laws.
– Check whether your data has been exposed in a breach by using tools like HaveIBeenPwned, which alerts you if your email address has appeared in a known data leak.
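For readers comfortable with a little code, the HaveIBeenPwned check in the last tip can also be automated. The sketch below builds a lookup request for the real HIBP v3 API, which returns the list of known breaches an email address appears in; it only constructs the request here rather than sending it, since the live API requires a personal API key. The client name in the user-agent header is a hypothetical placeholder.

```python
from urllib.parse import quote

# Base endpoint of the Have I Been Pwned v3 breach-lookup API.
HIBP_API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def build_breach_check_request(account: str, api_key: str) -> dict:
    """Build the URL and headers for an HIBP breach lookup.

    The v3 API requires an API key sent in the 'hibp-api-key' header
    and a user-agent string identifying the client.
    """
    return {
        # The account (email address) must be URL-encoded.
        "url": HIBP_API + quote(account),
        "headers": {
            "hibp-api-key": api_key,                 # your personal HIBP key
            "user-agent": "privacy-check-example",   # hypothetical client name
        },
    }

# Example: sending this request with any HTTP client returns a JSON list
# of breaches for the address, or HTTP 404 if it has not appeared in one.
req = build_breach_check_request("user@example.com", "YOUR_API_KEY")
print(req["url"])
```

Keeping request construction separate from the network call also makes it easy to swap in whichever HTTP client you already use.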
Source: Business Standard
Bd-pratidin English/ Afia