AI for Kids and Parents
We’d like to propose a project/topic with Irena that we have both been thinking and talking about for a while. The question we are trying to answer is relatively simple - how do we make sure AI is safe for our children to use?
We might go and try to influence the researchers and companies who build these LLMs, but the chance of that succeeding is slim. Where we might find a bit more success is education - explaining where and why LLMs fall apart (aka Factual Correctness), why and how not to give your data to AI companies (aka Privacy Risks), and how to make sure we use AI in a way that does not make us stupid (aka using AI for brainstorming, not brain-replacement).
We have come across many people asking similar questions - especially concerning children and students - and we believe we should be able to provide simple and clear answers and guidance on these topics.
Factual Correctness
We’ve all seen many examples where AI was simply wrong, yet pushed the incorrect statement as fact with 100% confidence. We know why that is - LLMs are in essence just probabilistic bag-of-words generators wrapped in guidelines and guardrails to make them seem actually intelligent.
I like to say that AI is definitely artificial, but hardly intelligent.
We would like to find examples where AI fails fairly reproducibly (I’ve had plenty of cases where I give it a text, ask it to “make it nicer”, and it completely changes the meaning), and explain the “temperature” setting, which introduces randomness - great for making your texts less boring, but also more… well… random :)
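For the curious (or for older kids), temperature is easy to demonstrate in a few lines of code. Below is a minimal sketch, not taken from any real model: the word list and scores are made up for illustration, but the softmax-with-temperature formula is the same one LLMs use when picking the next word.

```python
import numpy as np

# Made-up next-word scores (logits) a model might assign
# after the prompt "The capital of Australia is ..."
words  = ["Canberra", "Sydney", "Melbourne", "Vienna"]
logits = np.array([3.0, 2.5, 1.5, -1.0])

def next_word_probs(logits, temperature):
    """Softmax with temperature: higher temperature flattens the
    distribution, so unlikely (and possibly wrong) words get picked more often."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (0.2, 1.0, 2.0):
    probs = next_word_probs(logits, t)
    print(f"temperature={t}:",
          {w: round(p, 3) for w, p in zip(words, probs)})

# At temperature 0.2 "Canberra" dominates; at 2.0 even "Vienna" gets a
# noticeable share - the model is literally rolling weighted dice.
```

The point of showing this is that there is no “knowing” involved anywhere: the model only picks words according to probabilities, and the temperature knob decides how adventurous that pick is.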
Generally, we want to explain the importance of fact-checking when using AI and of avoiding blind trust just because “AI knows”.
Privacy
We could look at the privacy policies of popular services and point out what data is being tracked, and highlight things like AI companies handing your data over to the police when asked, etc.
Explain why giving all your inner thoughts to these big corporations is risky - especially when it comes to psychological issues, unconventional questions, drugs, or any other matter where privacy is very important.
Another topic to cover would be the integration of AI into other services - e.g. WhatsApp introduced an AI integration which can freely scan your group chats unless you enable the advanced privacy setting.
Do not let AI make you stupid
Some research around this is popping up, but I believe many of us have experienced it first-hand - we rely on AI for basic things, which makes us think less. AI is amazing for brainstorming ideas, doing basic research into a new topic, or reviewing what you have already thought about.
But many people are starting to offload their own thinking to LLMs - and that could be dangerous, especially for kids. Part of the risk is not getting truthful information back, but part is simply not putting your brain through the “workout” of doing the hard things.
This is a sensitive one, so we need to find the right angle and wording:D
Distribution
We thought about what form the output of this “exercise” should take. A website seems the most straightforward and versatile. As many of us here have kids, we can share it in our circles (pun intended) and communities and see if it is useful.
Then we could extract some basic points into a flyer that could be distributed to coffee shops, libraries, schools…
And lastly, we could also produce a workshop that we could offer to schools or parents.
Anyway, I am going to present this at the Brno Circle and I believe Irena is planning the same for Berlin, but we are absolutely open to ideas and feedback - if you already know of materials covering these topics, maybe we do not need to do anything :D Or we could just translate them into our own languages.