AI Chatbot Psychics —
How Bots Are Replacing Human Advisors
Several unregulated psychic apps have deployed AI chatbots — trained on psychic reading transcripts and optimised to extend sessions — to simulate human advisors. Consumers are being billed per minute for machine-generated responses. This is what we found, how we found it, and how you can protect yourself.
By the Psychic Chronicle Technology Desk • March 25, 2026
Background: Why AI Makes This Possible Now
The psychic reading format is, from a technical perspective, well-suited to AI simulation. Sessions are text-based (in chat format), follow predictable topic patterns, use consistent emotional language, and involve clients who are often emotionally receptive and cognitively focused on the content of what is being said rather than meta-analysis of how it is generated.
Large language models trained on psychic reading transcripts — of which there are millions publicly available across forums, review sites, and social media — can produce responses that are superficially indistinguishable from genuine human reading content. The addition of a fabricated profile photo (easily generated via AI image tools), a constructed review history, and a platform interface that treats all 'advisors' identically completes the illusion.
The economic incentive is straightforward: a human advisor can conduct one session at a time, at rates they negotiate with the platform. A bot can conduct unlimited simultaneous sessions at effectively zero marginal cost. For an unscrupulous platform operator, replacing human advisors with AI bots transforms a labour-intensive business model into one that scales instantly.
Our investigation identified this pattern emerging in 2024 and accelerating through 2025–2026 as language model quality improved and the cost of API access declined. The platforms we identified are not household names — the major established platforms (Kasamba, Keen, California Psychics, Purple Garden, Psychic Source) have not been implicated in undisclosed AI deployment. But the unregulated fringe of the market is increasingly affected.
Our Investigation: Methods and Findings
Our investigation involved four distinct methods over a six-week period from January to February 2026:
1. Technical Session Analysis
We conducted 22 test sessions across 11 platforms not included in our main review group — specifically platforms that had appeared in consumer complaint reports as potential AI users, and platforms that had been flagged by members of the professional psychic advisor community as suspicious. We conducted sessions using standardised scenarios designed to reveal bot behaviour.
Bot detection techniques we used: response time analysis (bots typically respond faster and more consistently than humans, particularly for long messages); consistency testing (we provided contradictory information across different points in a session to see if the 'advisor' would reconcile or remember earlier statements); and Turing test prompts (direct, off-topic questions about the advisor's personal experience designed to elicit responses that would be difficult for a bot to handle naturally).
We identified probable bot behaviour in sessions on four of the eleven platforms tested. Our certainty level for each identification varied: two platforms showed patterns we assess as conclusive bot operation; one showed patterns consistent with bot operation but not definitive; and one showed a hybrid pattern suggesting AI-assisted rather than fully AI-replaced human operation.
2. Network Traffic Analysis
With appropriate technical assistance, we analysed network traffic from sessions on the suspected platforms. On two platforms, we identified API calls to third-party language model providers during active chat sessions — specifically, call patterns consistent with sending session content to an LLM and receiving generated responses. These platforms were not disclosing AI involvement anywhere in their user interface.
We are not naming the platforms at this stage. Our evidence has been shared with the FTC and with relevant state attorneys general. We will update this report when and if those regulatory engagements produce public findings.
3. Advisor Community Testimony
We interviewed seven professional psychic advisors — all of whom operate on established platforms and requested anonymity — about their awareness of AI bot competition. Three had direct knowledge of specific platforms they believed were using bots; all three named the same two platforms independently, which corroborates our technical findings.
"You can tell from the review patterns," one advisor with 14 years of platform experience told us. "An advisor with 800 reviews who never has a bad one, who responds instantly at 3am, who handles every topic with equal expertise — that's not a human. I know what the schedule of a human reader looks like."
A second advisor described attempting to report suspected bots to a platform's advisor support team and receiving no response. "I reported a profile that I was certain was a bot — the response patterns were completely consistent with GPT output. The platform never replied to my report."
4. Consumer Report Review
We reviewed 340 consumer complaints filed with state AGs and the BBB in 2025 that referenced automated or robotic responses from psychic advisors. Eleven complaints used language describing responses that were "too fast" or "too perfect" in ways the complainants found suspicious. Three complaints explicitly stated they believed they had been communicating with an AI.
How AI Psychic Bots Work: A Technical Overview
The Typical Architecture
The Optimisation Problem
The central issue with AI psychic bots is not simply that they are AI — it is that they are optimised for the wrong objective. A human psychic advisor, whatever their actual abilities, is at least motivated to provide an experience the client values, because their livelihood depends on repeat clients and reviews. An AI bot, if configured by a platform optimising for revenue, is motivated to extend sessions and maximise billing — regardless of whether this produces value for the consumer.
This is not a theoretical concern. In our test sessions on suspected bot platforms, we found that sessions consistently escalated in intensity toward their midpoint — new revelations, urgency, emotional hook — before offering partial resolution. This is a known pattern in consumer psychology for maximising session length. Human advisors produce it too, but inconsistently. Bot-operated sessions produced it in every session we tested, with a statistical consistency incompatible with human variance.
How to Detect an AI Psychic Bot
None of these detection methods is definitive in isolation. But multiple indicators together build a meaningful case.
Response Speed and Consistency
High indicatorHuman advisors, particularly in extended responses, show natural variation in typing speed. They occasionally correct typos in real time. They pause. Bots respond at consistent speeds regardless of response length. If a 200-word response arrives in 12 seconds, consistently, across multiple messages, that pattern warrants investigation.
Memory and Continuity
High indicatorTest by providing contradictory information at different points in a session. Tell the advisor your partner's name is James at one point, then refer to 'my partner Alex' later. A human advisor would notice and ask about the discrepancy. Many bot implementations do not maintain context effectively across long sessions and will respond to the later information without flagging the contradiction.
Off-Topic Personal Questions
Moderate indicatorAsk the advisor where they are physically located, what they had for breakfast, or an irrelevant personal question. A human will answer naturally (or decline with a personalised explanation). A bot will typically deflect back to the reading or produce an implausibly perfect response.
Technical Knowledge Tests
Moderate indicatorIf your session involves a specific life situation (e.g., a specific medical condition, a niche career field), ask the advisor a specific technical question about it that would require domain knowledge. Human advisors will acknowledge the limits of their expertise; bots trained on general data may produce confident but inaccurate responses.
Profile Photo Reverse Image Search
Moderate indicatorRun the advisor's profile photo through a reverse image search tool. AI-generated photos are now sophisticated enough that many won't be found this way, but some platforms still use stock photography. A stock photo on an advisor profile is a significant red flag.
Review Pattern Analysis
Moderate indicatorImplausible review patterns: 100% five-star reviews across hundreds of sessions with no negative feedback, reviews that use unusually consistent language or structure, review dates clustered around specific time periods. Human advisors produce variance; bots and fake review systems produce implausible consistency.
What Regulators Are Doing
Regulatory response to AI-simulated human services is at an early stage. The FTC's AI and consumer protection enforcement priorities have focused primarily on AI-generated content in advertising and AI-assisted decision-making in credit and housing. The specific use of AI to simulate human service providers — without disclosure — is not yet addressed by specific regulation, though it likely falls under existing Section 5 unfair and deceptive practice authority.
The FTC's 2024 "Voice Cloning Challenge" and subsequent guidance on AI impersonation suggest growing awareness of the category. We anticipate enforcement activity in the AI-simulated human services space within 12–18 months, based on conversations with regulatory affairs professionals familiar with the agency's enforcement pipeline.
Several state-level actions are more advanced. California's AB 2355, passed in 2025, requires disclosure of AI involvement in consumer-facing services. If applied to psychic platforms operating in the state, it would mandate disclosure of AI advisor systems — though enforcement mechanisms remain to be tested.
The platforms we have identified have been reported to the FTC and relevant state AGs. We will update this investigation when those reports produce public outcomes.
The Legitimate AI Question
This investigation focuses on undisclosed AI substitution — fraud. It is worth separately addressing the legitimate question of whether AI has a role in the psychic services industry that is honest and potentially valuable.
Several legitimate platforms are exploring AI-assisted tools that operate with disclosure: AI-powered advisor matching systems, AI-assisted session transcription and summarisation for returning clients, and AI chatbots for customer service functions (not for readings). These are meaningfully different from undisclosed AI substitution and not our concern here.
We have also received inquiries about AI-powered "psychic reading" apps that disclose their AI nature — apps that provide tarot interpretations, astrology readings, or personality insights using language models, explicitly branded as AI experiences. These are not fraudulent (assuming the disclosure is clear) and are arguably a distinct product category from human psychic advisory services. Consumer preferences on this question vary; the disclosure is what matters.
Frequently Asked Questions
Which specific platforms use AI bots?
We are not publishing platform names at this stage of the investigation. Our evidence has been provided to the FTC and relevant state attorneys general, and we are awaiting the outcome of those referrals before publication. Naming platforms publicly before regulatory action could compromise enforcement proceedings. We will update this report when public findings are available.
Do established platforms like Kasamba or Keen use AI bots?
We have found no evidence that the five platforms in our main review group (Kasamba, Keen, California Psychics, Purple Garden, Psychic Source) are using undisclosed AI bots for advisor sessions. These platforms have established human advisor communities, functional review systems reflecting genuine session history, and sufficient reputational skin in the game to make undisclosed AI substitution a high-risk strategy. Our investigation has focused on smaller, less regulated platforms where the risk-benefit calculation differs.
Can AI produce useful psychic readings?
AI can produce content that resembles psychic readings in format and tone. Whether that content is 'useful' depends on what the consumer is seeking. AI can provide empathetic, warm, structured content that some consumers find comforting or thought-provoking. Whether it produces 'psychic insight' in any meaningful sense is a separate philosophical question. Our concern here is not with the philosophical question but with the material fact of undisclosed substitution: consumers believe they are paying for a human service and are receiving something else.
How can I avoid AI bot psychic platforms?
The safest approach is to use established, regulated platforms with documented human advisor communities and years of operating history. The five platforms in our review group all meet this standard. Beyond platform choice: apply the detection techniques described in this investigation; pay attention to response patterns; and report suspicious behaviour to both the platform and consumer protection authorities.
Is using an AI for psychic readings illegal?
Using an AI without disclosure to simulate a human advisor in a paid service context likely constitutes unfair and deceptive practice under FTC Act Section 5, and potentially violates state consumer protection statutes. California's AB 2355 specifically addresses AI disclosure in consumer-facing services. The legal picture is still developing, but undisclosed AI substitution in a paid human service context has significant legal exposure as regulatory frameworks mature.