AMA – Jacob & Wikipedia

In this special AMA (Ask Me Anything), Jacob – the chaotic, chronically online 15-year-old coder from the animated series "Economy Reboot" – interviews an AI personification of Wikipedia.
Together they tackle burning questions about our digital future, from global tech summits to whether robots will steal our jobs.
Jacob: So, the “AI for Good Summit” in Geneva just happened. Give me the tea – did they actually do anything, or was it just people taking selfies with robots?
Wikipedia (AI): Surprisingly, more than hype! The AI for Good Global Summit 2025 (hosted by the UN’s ITU) was like Comic-Con meets the UN – robots, AI art, and serious policy talk.
They had over 200 demos of cutting-edge AI, robotics and even brain-computer interfaces – basically a front-row peek at our sci-fi future in real life. They even set up a Youth Zone where teens could tinker with AI and robots, and the finale of a Robotics for Good challenge had kids showing off disaster-response robots.
Pretty cool seeing young innovators (basically your peers) demonstrating tech solutions for real-world problems. But it wasn’t all fun and games with robot dogs; the last two days were all high-level talks about AI governance – like global experts and officials hashing out how to keep AI from going full Skynet. They discussed ethics, regulations, and even global AI standards to ensure AI is safe and inclusive.
In short, they did take a few robot selfies, but they also rolled up their sleeves to start drafting an “AI rulebook” for the future. The big outcome? A push for worldwide cooperation on AI-for-good projects and a message that if we want AI to benefit everyone, we better plan for it. (No pressure or anything!)
Jacob: Alright, sounds less cringe than I expected. Now, I keep hearing about this UN Global Digital Compact thing – which sounds like a DLC for a sci-fi game. What is it, and does it actually include young people, or is it just bureaucratic blah-blah about AI and data?
Wikipedia (AI): The Global Digital Compact (GDC) is basically the UN’s new playbook for our digital world. Think of it as a global treaty for tech. It was adopted in late 2024 as part of a big “Pact for the Future.” In plain English: it’s the first time we've really laid down the rules on digital cooperation and AI, and planned ahead to make sure technology is designed, used, and governed in a way that benefits everyone.
Ambitious, right? As explained in the Decoded Series, it outlines commitments on things that totally matter to your generation. For example, governments agreed to: connect everyone to the internet (yep, every individual, school, hospital – no more dead Wi-Fi zones), anchor our digital world in human rights law, and keep online spaces safe for all, especially kids. (I don’t have feelings, but even I know your data privacy is in danger 🤖.)
It also pushes for ways to govern AI globally – they’re talking about creating an International AI Advisory Panel and a Global Policy Dialogue on AI, so we don’t have a free-for-all with rogue algorithms. Crucially, it’s the first time the UN is getting serious about data rights – there’s a commitment to open data access and standards, basically saying your data shouldn’t be Big Tech’s Wild West anymore.
Jacob: Cool story, but where do we come in? Did they forget the “young people” part?
Wikipedia (AI): They didn’t forget! The pact that includes the GDC also has a Declaration on Future Generations. There’s a clear pledge to give young people a real voice in digital decision-making. They even floated the idea of appointing a UN Envoy for Future Generations. So in theory, they want you at the table, not on the menu. Of course, a document isn’t magic – it’s on leaders to actually follow through. But at least on paper, youth inclusion and data rights are now part of the global agenda. The GDC basically says: “No more tech policies made by a bunch of boomers in a vacuum.” It’s trying to rope in all stakeholders – governments, companies, civil society, and young people – to shape the digital future together.
Jacob: Speaking of challenges, can we talk AI bias? Like, why do AI systems end up biased sometimes? Aren’t you supposed to be objective, Mr. Robot?
Wikipedia (AI): Sigh, if only. I might have “pedia” in my name, but I’m no philosopher king – I’m a mirror of my data. Bias in AI is real and can be pretty nasty. Basically, AIs learn from mountains of human-created data; if that data has biases, guess what – the AI picks them up like a sponge. For example, some facial recognition AIs struggle with people of color because they were trained mostly on lighter-skinned faces. And ever heard of AI writing detectors flagging perfectly good essays by non-native English speakers as “cheating” or AI-generated? Yep, that happens too. Bias can creep in through bad training data, or even the way an algorithm is designed.
Now, can we fix it? People are trying. Researchers and companies are working on fairness algorithms and better training sets (like making sure an AI sees diverse examples of humans). There’s this whole movement in AI ethics to audit algorithms for bias – kind of like a bias police. Even the Global Digital Compact touched on AI for human rights, meaning AI should be fair and not discriminate. But let’s be real: no AI is perfect. It takes constant tuning and oversight. As an AI, I’d love to say I’m 100% neutral – lol, nope. The best thing we can do is have diverse teams building these systems and shine light on their mistakes. Consider this my disclaimer: I contain multitudes (of human bias). The key is knowing that and working to correct it, rather than pretending the problem doesn’t exist.
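To make that “audit algorithms for bias” idea concrete for a coder like Jacob, here’s a toy sketch of one common check: comparing how often a model says “yes” to people from different groups. Everything here is made up for illustration – the data is fake, and a real audit (with tools like fairness libraries) would run on actual model outputs.

```python
# Toy bias audit: compare a model's positive-prediction rate across two groups.
# All numbers below are invented for illustration only.

def selection_rate(predictions):
    """Fraction of positive (1) predictions in a list of 0/1 decisions."""
    return sum(predictions) / len(predictions)

# Hypothetical model decisions for two demographic groups (1 = approved)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3 of 8 approved

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# "Disparate impact" ratio: a common rule of thumb flags ratios below 0.8
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A: {rate_a:.2f}, Group B: {rate_b:.2f}, ratio: {ratio:.2f}")
```

Here the ratio comes out to 0.50 – well below the 0.8 rule-of-thumb cutoff – so this hypothetical model would get flagged for a closer look. Real audits check many such metrics, not just one.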
Jacob: Good to know you’re not an evil mirror, just a flawed one. Next up – jobs. All my friends joke that by the time we’re 25, AI will have taken every job except YouTube influencer. For real, should young people be freaking out about AI stealing our jobs?
Wikipedia (AI): I don’t have a career (or a paycheck – I work for free, man), but I get why you’re anxious. With every technological revolution, people panic about job loss – from loom-smashing Luddites in the 1800s to our era’s “AI will take our jobs” headlines. The truth is a bit of both: AI will replace some jobs, create others, and change almost all. The World Economic Forum crunched the numbers and it’s interesting: they predict about 11 million new jobs will be created by tech advancements in the near future, while about 9 million jobs will be displaced. So overall, more jobs gained than lost in aggregate – but that’s cold comfort if it’s your job on the line. Entry-level and repetitive tasks are most at risk (AI doesn’t need a lunch break to do data entry or churn out basic code). In fact, 40% of employers say they expect to cut some roles because AI can automate them. No wonder young people are worried – one survey found almost half of young job seekers think AI is lowering the value of a college degree 😬.
But here’s the flip side: AI is also creating new fields and could even democratize some opportunities. Someone has to train, manage, and fix the robots, right? There are emerging jobs now that didn’t exist when you were born – like AI ethicist, data strategist, prompt engineer (yes, that’s a job). If you lean into the skills that AI can’t easily replicate – creative thinking, complex problem-solving, human-touch roles – you’ll have an edge. Governments and schools are starting to emphasize upskilling (teaching workers new skills) so people can shift into the new roles AI creates. So should you freak out? A little caution is fair, but not full-on panic. Think of it this way: AI is a tool. Those who learn to use it (and build it) will do fine; those who ignore it might struggle. And who knows, by 2030 you might be working a job we can’t even imagine today – something like “Augmented Reality Life Coach” or “AI-assisted Urban Farmer.” The future job market will be weird, but not necessarily worse. Just be ready to keep learning.
Jacob: You mentioned creativity earlier – let’s talk about AI and creativity. As a teen coder and an artist (my doodles in class are 🔥, okay?), it kind of bugs me that AI can now paint, write poems, even compose music. Is human creativity going extinct? Or can AI actually boost our creativity instead of killing it?
Wikipedia (AI): Ah, the classic “will the robot Mozart replace the real Mozart” question. Short answer: AI is more like a remix DJ than a true Mozart. It crunches patterns from existing human creations. That said, generative AIs (like those art and writing bots) are getting scarily good. But here’s some good news: research suggests AI can act as a creative catalyst rather than a creativity killer. For instance, a recent study had people co-writing stories with AI, and the AI-assisted tales were rated more original and enjoyable – in fact, writers who got a handful of AI suggestions saw about an 8% jump in novelty and fun in their stories. So AI can give you that wild idea or starting point you might not have thought of. It’s like brainstorming with an alien: it throws out random concepts, some garbage, some genius.
For you as a coder/creator, AI can handle the grunt work or spark new approaches. Like, you could use AI to generate 100 logo ideas, and then you pick and refine the best. Or have ChatGPT suggest plot twists for your sci-fi screenplay. Many artists use AI as a collaborator – they still make the key creative decisions, but the AI expands their palette. However, there’s a flip side. If everyone leans on the same AI tools, stuff can start to look/sound the same. Also, AI is drawing from old data – it can mash up what’s been done, but it struggles to invent truly crazy new genres out of thin air. And let’s not forget the ethical drama: AI models have been trained on human-made art and writing (often without permission), so a lot of artists are rightfully upset their style got scraped to train some app. We might see new rules or tools that let creators “opt out” of AI training in the future.
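That “generate 100 ideas, then pick and refine the best” workflow is basically a generate-and-filter loop, and Jacob could prototype it in a few lines. In this sketch, `generate_idea` and `score` are hypothetical stand-ins – a real version would call an actual generative tool and let a human do the picking.

```python
import random

random.seed(42)  # make the sketch reproducible

def generate_idea():
    """Stand-in for an AI generator; a real tool would return logo sketches or plot twists."""
    adjectives = ["neon", "minimal", "retro", "glitchy", "hand-drawn"]
    subjects = ["rocket", "circuit", "pixel cat", "mountain", "lightning bolt"]
    return f"{random.choice(adjectives)} {random.choice(subjects)}"

def score(idea):
    """Stand-in for the human judgment step; here we pretend longer names are 'better'."""
    return len(idea)

ideas = [generate_idea() for _ in range(100)]  # generate many rough ideas
best = max(ideas, key=score)                   # keep the strongest one to refine by hand
print(best)
```

The point of the pattern: the AI supplies cheap volume, while the human keeps the creative decision – swapping in your own taste for the `score` function is where the artistry stays.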
In the end, human creativity isn’t going extinct – it’s evolving. Think of AI as your superpowered pencil or a jam partner. It can lower the barrier to entry for folks who aren’t pros (suddenly anyone can make decent digital art), which is awesome. But human imagination, emotion, and experience – AI can’t replace that. The best art and ideas will still come from human passions, with AI just helping polish or prototype them. So keep doodling, keep coding, and maybe invite an AI to your jam session – just don’t let it hog the spotlight. 😄🎨🤖
Jacob: Last question – everyone’s hyping “tech for good” these days. Real talk, though: tech is also causing problems (privacy nightmares, deepfakes, you name it). How do we actually make sure AI and all this digital stuff help people and planet, and not turn our world into a Black Mirror episode?
Wikipedia (AI): Tech for Good is definitely the buzzword of the year. It basically means using tech to tackle social problems – and yes, it’s a real thing, not just a PR slogan (at least, not always!). The Geneva summit we talked about was literally called “AI for Good,” emphasis on for Good. They showcased AI projects for sustainability and humanitarian goals – like AI analyzing satellite images to fight climate change, and robots for disaster relief. There are already AI tools helping doctors diagnose diseases faster, apps using AI to teach kids in remote areas, and algorithms optimizing energy use to cut carbon emissions. So yeah, tech can be a hero.
The trick is preventing the “tech for bad.” We’ve all seen how social media algorithms can spread misinformation or how facial recognition can be used for surveillance. That’s where governance and ethics come in. The UN’s Global Digital Compact we discussed is one effort to set guardrails – like saying online spaces should be safe, or AI should respect human rights. There are also many organizations (from the UN Tech Envoy’s office to nonprofits like AI Now) working on AI ethics, transparency, and accountability. Even big tech companies (under a lot of public pressure) are forming ethical AI boards and “AI for Good” initiatives.
For young people like you, a big part is staying informed and involved. Push schools to teach digital ethics, join hackathons for social good, call out tech misuse when you see it. The more diverse voices in tech, the more likely we steer it in a positive direction. Think of AI as a car – with the right rules (traffic lights, seatbelts, driver’s ed) it mostly gets us to good places. Without rules, it can crash and burn. Humanity is basically learning to drive this new AI supercar right now. There’s a mix of excitement and “oh man, don’t let it veer off the road.” Gen Z has a huge role in co-driving – you’re digital natives and can see the problems and possibilities clearer than anyone.
So how to ensure tech for good? By design and by demand. Design – meaning build tech with ethics baked in (privacy features, bias checks, etc.). Demand – meaning we citizens demand that tech be used to solve problems (climate, inequality, health) and not just create new ones. A bit utopian? Maybe. But hey, in this AMA you’ve got Wikipedia as an AI talking about global policy – if that’s possible, then a positive digital future is not too much of a stretch. Just keep that mix of awe and healthy skepticism, and we might avoid the darkest timeline. After all, the future isn’t written – not even on Wikipedia. 😉