The debate over AI chatbots and children presents an unusually difficult regulatory challenge. Here's why I'm uncertain about the best path forward.
AI chatbots can now hold natural, emotionally engaging conversations and convincingly simulate the personalities of specific people and fictional characters. Growing numbers of children and teenagers are reportedly spending significant time with these chatbots, and there is anecdotal evidence that some are developing what appear to be meaningful relationships with them.
Some groups have raised concerns about these interactions. They worry that children who form relationships with chatbots will fail to develop human relationships, suffer stunted emotional growth, and be vulnerable to behavioural manipulation by chatbot providers, and they are calling for governments to intervene to protect children from these potential harms. The evidence base is still developing, but I take the concerns seriously: AI chatbots can be addictive and emotionally manipulative.
But I also want to be careful about assuming that chatbot relationships cause social problems, because the causation could run the other way: children who are already socially isolated or struggling may gravitate toward AI chatbots precisely because they feel like safer alternatives to human interaction. And if designed well, chatbots might genuinely help children who can't afford professional support.
I'm also hesitant about AI chatbot regulation for practical reasons. Chatbots operate globally while regulations are national, so restrictive policies in one country could simply shift children to unregulated or black-market platforms. And aggressive content monitoring and age verification might prove more privacy-invasive than the chatbots themselves are harmful.
I think regulation is needed, but AI evolves so quickly that any specific policy could become obsolete or counterproductive. What would help most is better evidence: longitudinal studies comparing children who do and don't use chatbots intensively, and evaluations of which policy approaches (e.g., parental controls, usage limits, or age verification) actually work.