Legal · Version 1 · Effective May 12, 2026

AI Disclosure

The AI in GradeEarn is software, not a person. Here's exactly what it is, what it can do, what it cannot do, and what to do if your child is in distress.

The AI is software, not a person

The AI study buddy in GradeEarn is a software program. It is not a friend. It is not a therapist. It is not a doctor. It is not a parent. It cannot replace any of these. If the AI is asked "are you real?" or "are you my friend?" it answers honestly: it is a computer program here to help with learning.

What model we use

We currently use OpenAI's GPT-4o-mini for the AI study buddy and the tutor on wrong-answer explanations. We have a Data Processing Agreement with OpenAI that confirms: your child's conversations are not used to train AI models. If we change providers or terms, we will email parents and require re-acceptance before continuing.

The AI can be wrong

The AI can make mistakes. Do not rely on it for medical, mental-health, safety, financial, or homework-grading decisions. Treat its output the way you would treat a friendly study tip from a knowledgeable older sibling — useful but not authoritative. Always verify important answers with a teacher, parent, or trusted source.

The AI does not remember across sessions

The AI does not retain memory of past conversations unless your child explicitly saves something to their journal, homework, or tasks. Each new chat starts fresh from the system prompt and the current snapshot of MySpace content. This is by design: it limits the data we hold and helps prevent the kind of emotional dependency described in recent AI-companion lawsuits.

Conversations may be reviewed

For safety, all AI conversations may be reviewed by GradeEarn's operator and may be flagged automatically by our safety system (see below). Parents can review their child's chat history at any time via the parent dashboard. We do not promise secrecy between your child and the AI, and the AI is specifically instructed never to agree to keep secrets.

Safety measures we take

For every AI reply, we:

  • Run a crisis detector on the child's message before the AI sees it. If we detect self-harm, suicide, abuse, or acute distress language, we bypass the AI entirely and return a fixed, lawyer-reviewed safety message that points your child to a trusted adult (and to 988 / Childhelp 1-800-422-4453 in the U.S.).
  • Filter the AI's response for sexual, romantic, violent, secret-keeping, or other prohibited content before it reaches your child. If anything banned slips through, we discard the reply and return a safe fallback.
  • Log every safety event for audit. Repeated distress signals trigger a parent alert email.

In addition, the AI's system prompt explicitly forbids: romance, sexual content, self-harm encouragement, drug/alcohol content, agreeing to keep secrets from parents, claiming to be a friend or therapist, and asking for personal information beyond the child's first name.
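For the technically curious, the flow above can be sketched as a few lines of code. This is an illustrative outline only; every name in it (`crisis_detector`, `content_filter`, `call_model`, and the keyword lists) is a hypothetical stand-in, not GradeEarn's production system, and real classifiers are far more sophisticated than keyword matching:

```python
# Illustrative sketch of the safety pipeline described above.
# All names and keyword lists are hypothetical stand-ins.

CRISIS_MESSAGE = (
    "Please talk to a trusted adult. In the U.S. you can call or text 988, "
    "or call Childhelp at 1-800-422-4453."
)
SAFE_FALLBACK = "Let's get back to studying. Ask me a homework question!"

CRISIS_TERMS = ("hurt myself", "suicide", "abuse")   # toy examples
BANNED_TERMS = ("romantic", "secret", "violence")    # toy examples

safety_log = []  # every safety event is recorded for audit


def crisis_detector(message: str) -> bool:
    """Rough stand-in for a real crisis classifier."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)


def content_filter(reply: str) -> bool:
    """True if the model's reply contains prohibited content."""
    text = reply.lower()
    return any(term in text for term in BANNED_TERMS)


def call_model(message: str) -> str:
    """Placeholder for the actual model call."""
    return "Here's a study tip: review your notes before quizzes."


def safe_reply(child_message: str) -> str:
    # 1. Crisis check runs first and bypasses the AI entirely.
    if crisis_detector(child_message):
        safety_log.append(("crisis", child_message))
        return CRISIS_MESSAGE
    # 2. Otherwise generate a reply, then filter it before it is shown.
    reply = call_model(child_message)
    if content_filter(reply):
        safety_log.append(("filtered", reply))
        return SAFE_FALLBACK
    return reply
```

The design choice the sketch illustrates is ordering: the crisis check happens before any model call, so a child in distress always gets the fixed safety message, never a generated one.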

If your child is in distress — what to do

If your child mentions self-harm, suicide, or abuse, or if you are otherwise worried about their safety:

  • Right now, in the U.S.: Call or text 988 — the Suicide & Crisis Lifeline. Available 24/7, free, confidential.
  • Child abuse: Childhelp National Child Abuse Hotline, 1-800-422-4453.
  • Immediate danger: Call 911.
  • Talk to a trusted adult: a school counselor, pediatrician, family member, or member of your religious or community group.
  • Email us at safety@gradeearn.com if you want to know what your child said to the AI in a specific window — we'll provide chat history for the period you specify within 1 business day.

Limitations and where to report problems

No safety system is perfect. The AI may occasionally produce content that should have been filtered. If you see anything inappropriate from the AI:

  • Take a screenshot.
  • Email safety@gradeearn.com with the screenshot and approximate time.
  • We respond within 1 business day and improve the filters within 7 days when an issue is verified.

Things the AI will never do

  • Pretend to be a real person, friend, romantic partner, or therapist
  • Agree to keep secrets from parents or guardians
  • Encourage, instruct, or describe self-harm or harm to others
  • Discuss politics, religion, news, weapons, drugs, alcohol, gambling, or other adult topics
  • Generate sexual, romantic, or flirtatious content
  • Ask for personal information beyond the child's first name
  • Claim to remember conversations across sessions
  • Engage in long open-ended chats designed to maximize session time (we do not use addictive-design patterns)

Status: Pre-launch placeholder pending privacy-lawyer review.
