Category: Uncategorized

  • AI tutors everywhere: The 24/7, multilingual, personalized future of learning and education

    For centuries, access to one-on-one tutoring has been a privilege reserved for the few. Think Cambridge. Even today, personal tutoring is costly, limited by geography, and constrained by human schedules. Artificial intelligence is changing that, radically.

    I am at my desk at 0500, a mug of strong coffee next to the laptop. So much to explore in this new AI world. So little time, so using the summer holiday to learn. Half the day is already mapped out, the rest will appear on its own terms. On the screen is a message from a founder asking if AI can really “do tutoring” as I told her in my workshop. She’s creating her new training plan for the next quarter. This is not a school or a university, just someone trying to train her people without spending a fortune on courses that will be half finished and half remembered. And never really applied.

    I tell her about AI tutors. Not the kind you book in for an hour a week, but the kind that never logs off. This is not a video call with a distracted human on the other end. It is a constant presence that fits in your pocket, a guide who is always ready. She looks at me as if I am exaggerating. As I tend to do, sometimes… “Like a chatbot?” she asks. Yes, technically, but so much more than that.

    Imagine it is two in the morning and you are stuck. You do not have to park the problem until tomorrow or send an email and wait for a reply. You just ask and you get an answer. It is fast, clear, and in your own language. The language part is not just about translation. The tutor can switch between Dutch and English in the same conversation, bring in a technical term in German, and adjust examples so they make sense in your cultural context. In your own work context.

    Then there is the way it adapts to the learner. Some people need encouragement, others want facts delivered without ceremony. I am not sure about the learning-styles theories, Bloom and the rest. But I do know people learn in different ways. Some prefer theory, some prefer “show me in five steps”. Some want to be taken by the hand. A human tutor might take weeks to figure that out. An AI tutor works it out in minutes. Tone, pace, and examples shift to match how you like to learn.

    And this is where it connects to something deeper, the five moments of need. There is the moment when you learn something new, when you want to learn more, when you need to apply, when you need to solve a problem, and when you need to adapt to change. The magic happens most when you are right there in the apply moment. That is the exact time when most traditional learning fails you, because the help is either locked in a PDF, buried in a manual, or sitting in someone else’s head. Like your teacher. An always-on AI tutor is there at that moment of apply. The question, the context, and the need are all immediate, and the answer can be just as immediate. Needs to be as immediate. That is when the learning sticks, because you are solving a real problem in real time.

    That is the reason any workshop I deliver on AI, whether for education, business, government, or startups, comes with a set of tutors. Participants do not just hear about AI, they work with a personal guide that can answer their questions, expand on examples, and help them try things on the spot. It transforms the workshop from a lecture into a conversation that never really ends, because the tutor is still available long after the workshop session is over. A workshop is not the end, it is the start. Learning happens when the teacher has left the building.

    I can see the change in her expression as we talk later on a Teams call. She is still thinking about learning in the old way, with fixed schedules, fixed materials, and fixed styles. But the reality is different. This tutor does not sleep or take holidays. It does not mind explaining the same thing forty-seven times until you get it. It is not replacing teachers. It is removing the dead time between questions and answers. It is giving every learner a coach, no matter the budget, and it is built for those moments of need, especially the moment of apply where knowledge turns into capability. Democratizing education.

    When I lay it out like this, the founder leans back and starts to think in new terms. The question is no longer “How do I fit this into next quarter’s training plan?” but “What needs to be learned, who needs it, and how do we set up an AI tutor to make it happen now?” That is the real shift. From scheduled learning to on-demand capability. From someday to right now. From being limited by the clock to being led by what is possible.

    And imagine this way of learning, the new AI way of learning, becoming an integral part of our education. Students growing up with personal tutors at their side every day, learning not just when the timetable says so, but in the moment they need it most. No lost opportunities. No waiting for the next lesson. Just a constant companion for curiosity and capability.

    The AI tutor is not a future project. It is here, ready to answer the first question you put to it, at the exact moment you need it most. University management, make this the new normal: fund tutors for every course, train staff, set clear guardrails, and measure impact starting this coming academic year. Now.

  • Reasoning well in the age of GenAI: Why critical thinking must become a core subject

    Last week, I was in a classroom with a group of students who need more AI knowledge to augment what they learned in the regular curriculum. A personal mission. Not the kind of “lecture hall” classroom, but the messy, buzzing kind: laptops open, coffee cups scattered, and a quiet background hum of ChatGPT tabs working overtime.

    One student waved me over. “Wiemer, look, ChatGPT wrote my entire market analysis in two minutes.”

    It looked good. Professional. Structured. Numbers, charts, even footnotes.

    But then I asked the simplest question in the world: “Which of these claims are actually true?”

    Blank stares. A nervous chuckle. A quick scan of the text.

    And then that dawning realization: they didn’t know.

    That’s when it hit me.

    If you drop today’s students into the GenAI ocean without a life jacket, they won’t drown because they can’t swim. They’ll drown because they can’t tell the difference between a wave and a whirlpool.

    And that life jacket? It’s called critical thinking.

    Not the “write an essay about Socrates” kind I had to do in high school. Not the dusty logic exercises you half-remember from first year. But the practical, AI-era version.

    I watched as one student scrolled through the AI’s answer, eyes darting over the neatly formatted paragraphs. At first glance, it looked flawless. But as we read together, I pointed to one sentence. “Is that a fact, or just the AI’s opinion dressed up as fact?” She hesitated. Another line. “Notice how vague that is, what would you need to ask to make it precise?” By the third question, she was leaning forward, spotting the gaps herself. This is where good reasoning starts: learning to separate truth from fluff, to close the gap with sharper, more deliberate questions.

    Later, another group showed me an AI-generated project plan. Impressive, until we traced the logic behind its recommendations. “Wait,” one student said, “step three doesn’t actually follow from step two.” Exactly. AI can sound confident while skipping over its own contradictions, and unless you know how to follow the thread, you won’t notice the knots.

    Then there was the marketing forecast. Beautiful graphs, persuasive language, confident predictions. But confidence isn’t certainty. I asked, “Is this a rock-solid conclusion or just the AI’s best guess?” They went back to check the data, and discovered it was a guess built on shaky comparisons.

    And finally, the social media example: a perfectly crafted quote from a “recent study.” Except the study didn’t exist. The quote was invented. The students looked at me, half amused, half alarmed. It took us 30 seconds to verify, but imagine if they’d shared it without checking. In a world where AI can create convincing fakes faster than we can read them, verifying before amplifying isn’t just a skill: it’s survival.

    The four elements I want to put into the curriculum could look like this, very much a work in progress:

    Part 1 – The basics of good reasoning → AI prompt literacy & source sanity checks. Can you read an AI answer and separate facts from opinions? Spot the vague phrasing? Ask a better follow-up question to close the gap?

    Part 2 – Deductive reasoning → Logic-checking AI outputs. If AI makes a recommendation, can you trace the chain of reasoning? Spot where it skipped a step? Catch the moment it contradicted itself?

    Part 3 – Inductive reasoning → Judging AI predictions under uncertainty. GenAI is a master of probabilities, not certainties. Can you tell the difference between a solid conclusion and a “best guess dressed in confidence”?

    Part 4 – Application → Surviving the flood of AI-generated misinformation. Deepfakes, fake studies, fabricated quotes—AI can make them in seconds. Can you verify before you amplify?

    We teach math, so students can count their money.

    We teach language, so they can express themselves.

    But in the GenAI era, we must teach critical thinking so they can survive their own tools.

    Because AI won’t replace students who can reason well.

    But students who can’t? They’ll be replaced by the first confident chatbot that sounds smarter than they are.

  • The fixed capacity fallacy

    I’m in my office at Dotslash Utrecht. Startup world. 450+. Brick walls. Strong coffee. I start at 8. They start somewhat later… Somewhere, down the hall, a ping-pong ball refuses to give up.

    This is home base. From here I run GenAI workshops for education, researchers, SMBs and government teams. Here I build chatbots and agents. I talk to founders. Have coffee. Try to get out of their bloody beanbags. Young kids laughing at grandpa.

    Some of the local startups and scale-ups have found me. They wander in. Some already know me, having attended one of my GenAI workshops for startups. Others just heard there’s “an old fart” (their words) in the building who still has a few useful tricks. Turns out: I do.

    The surprise? Most of them aren’t using GenAI much at all. I understand education or SMBs lagging behind. But well-funded startups? Not using it in the product. Not in the back office. Not even in marketing. Focus on the app, not on the customers. Osterwalder’s value proposition canvas? Never heard of it. Features over pain relievers.

    So we sit down. We walk through what’s possible. Do another workshop. And you can see it click.

    They’ve been thinking in the old MBA way:

    Fixed capacity. Growing backlog. You have to choose.

    You don’t. Not anymore.

    The real questions now: What needs to be done? What could a chatbot do? What could an agent handle end-to-end? And where do we draw the guardrails so we trust it?

    This changes everything. You stop fighting over human hours. You start designing for capability. Humans where they matter. Machines where they can help. Twenty-four hours a day. No coffee breaks. No sick days.

    And that’s when I see it happen. The shift. From cloud native to AI native.

    Cloud native made you fast and scalable. AI native makes you smart and unstoppable. It’s not a layer you bolt on: it’s in the DNA from day one.

    The backlog? It starts to melt. Not because you’re doing less. Because you’ve stopped believing in what I call the Fixed Capacity Fallacy.

    It’s only a limit if you let it be one.

  • “Why didn’t I learn this in school?”

    These past few months, I have been getting more and more calls.

    A phone call here, a text there. Sometimes someone I vaguely remember from a class or project. Sometimes a former student who once scored a 6.5 on a BPM assignment, but whom I remember as someone who kept asking questions.

    “Wiemer, can I ask you something?”

    Of course.

    And then it comes. Not the question they had when they graduated. But the question they suddenly run into now, a few months later, on the job or during interviews.

    “Why didn’t I learn this in school?”

    And by “this” they mean operational knowledge of generative AI.

    From nice-to-have to must-have

    Not the ethical debates. Not the future visions. Not the papers about AI in 2030.

    But simply: – How to prompt effectively in a work environment – How to work with tools like ChatGPT, Midjourney, Claude, DALL·E – How to make internal processes smarter with agents, copilots, and automation – How to set up an AI workflow yourself without programming knowledge.

    They hold a fine HBO (university of applied sciences) diploma. But they feel digitally illiterate in a world that is suddenly spinning faster.

    Sometimes they are starting freelancers, sometimes applicants who just miss the cut. Or this finance startup at Dotslash knocking on my door for help. Sometimes they already have a job somewhere, but notice that colleagues are suddenly working more efficiently with AI tools they have never heard of.

    The market has moved on

    And I get it. HBO institutions are big oil tankers. The budget cuts do not help either. Curricula change slowly.

    But employers change fast. Faster. Job postings ask for people who can not only talk about AI but also work with it. Not just at the strategic level, but hands-on.

    Internship supervisors now explicitly ask about AI skills. Teams ask: “Have you worked with copilots yet?” “How would you deploy an agent for this challenge?” Practical use cases come up in job interviews. And if all you know is that something called ChatGPT exists, you start 2–0 down.

    The gap between education and the labor market is not just palpable. It is measurable. And it is growing.

    Should we do something?

    That is why, together with a few colleagues from education and from industry, I have started thinking.

    What would happen if we did build a short, practice-oriented post-HBO course? No theory lectures. No future talk. Just a few weeks of getting things done. Hands-on with Google, building in Azure. Understanding what is possible, what is permitted, and then what is desirable.

    We are thinking of something like: Generative AI in practice: a post-HBO step toward today’s labor market.

    In which participants learn: – How to use GenAI tools in your work – How to prompt effectively – How to automate processes without programming – What is possible, permitted, and desirable – How to develop your digital work habits – And above all: how to start. And then deliver value. With something that simply works.

    A compact, accessible course for graduates from all fields: communication, HR, journalism, marketing, education, business, healthcare.

    So that these young professionals do not just keep up… but suddenly lead the way again.

    And now?

    We are still exploring. Listening. Building. Want to think along? Or do you see this happening with your own alumni, colleagues, or students?

    Let me know what you see. What is needed. Where we should start.

    Maybe together we will build exactly what is missing right now. Because this is about the future of young people. And therefore about our own.

  • “What we need is business as code” – how GenAI shifts from IT to client impact

    Last week, I joined a strategy offsite for the leadership of a client organization. This is no ordinary company – they have over 200 specialized advisors in EMEA in a tightly regulated niche market. High, high end. “My daily is a Bentley”, you know the type. Trusted, analytical, and with long-term client relationships.

    The CIO kicked off the day.

    “We need to invest more in infrastructure as code,” he explained. “We want to automate AI deployment, scale faster in the cloud, and build a foundation to experiment with AI. We most likely need six months or more to get that to work.”

    No one disagreed.

    Then the commercial director leaned forward.

    “I get that,” she said. “But when will all this start changing the way our advisors work with clients?”

    There was a pause. Then she continued, borrowing a phrase I use with all my customers:

    “Honestly, what we need… is business as code.”

    And just like that, the conversation moved from IT to the heart of the business.

    GenAI is not an IT initiative

    Too often, GenAI is framed as a digital innovation project. Something for the IT team to explore. Something you pilot on the side.

    But GenAI isn’t about prompts or prototypes. It’s about how your organization delivers value – smarter, faster, and at scale. We’ve spent years building infrastructure as code. That was step one. Now we’re entering phase two: business as code.

    Business as code: how advisors work with AI agents

    This advisory firm doesn’t lack knowledge or ambition. The real bottlenecks?

    Capacity. Consistency. Context.

    Here’s how GenAI agents change that:

    • Pre-meeting insights: AI agents compile tailored briefings, combining market signals, legal updates, and internal notes – personalized to the client.
    • Decision support: Agents simulate regulatory or strategic scenarios, flag risks, and offer guidance – turning uncertainty into action.
    • Knowledge reuse: Instead of every advisor starting from scratch, AI finds relevant precedent, benchmarks, and insights – instantly. All those reports, gathering dust in the place where information goes to die: SharePoint.

    This isn’t “robot consultants.”

    This is human expertise, multiplied.

    That’s business as code.

    The shift: from backend to frontstage

    The CIO was right – infrastructure matters. No GenAI strategy works without scalable, secure foundations. But the real value is not in what AI does behind the scenes.

    It’s in what clients see: sharper advice, faster service, and more proactive support.

    If AI stays in the IT department, it will always be a cost center.

    Move it to the core of your business, and it becomes a value driver.

  • Between autonomy and trust: teaching AI to think and to listen

    “Can you build an AI that thinks for itself and yet keeps listening to you?”

    That question struck me yesterday during my Citizen Developer workshop, where non-programmers (teachers and researchers) at Hogeschool Utrecht learn to build chatbots and Canvas LMS tools with Python, Azure OpenAI, and Canvas. The HU Canvasbot. The HU feedforward bot.
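    At its core, what the workshop participants build is a thin wrapper around a chat-completions call. A minimal sketch, assuming the plain Azure OpenAI REST endpoint; the resource URL, deployment name, API version, and system prompt below are placeholders, not the actual HU Canvasbot configuration:

```python
import json
import urllib.request

# Placeholders -- substitute the values from your own Azure resource.
ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"
DEPLOYMENT = "gpt-4o"          # hypothetical deployment name
API_VERSION = "2024-02-01"

SYSTEM_PROMPT = (
    "You are a course tutor. Answer questions about the course "
    "material, and say so when you do not know."
)

def build_payload(history, user_message):
    """Compose the chat-completions request body: system prompt,
    prior turns, then the new user message."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    return {"messages": messages, "temperature": 0.2}

def ask(history, user_message, api_key):
    """Send one turn to Azure OpenAI and return the assistant reply."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(history, user_message)).encode(),
        headers={"Content-Type": "application/json", "api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

    Everything else, Canvas integration, conversation storage, guardrails, can be layered on top of these two functions.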

    Not because it is a new question, but because we are now, suddenly, getting very close.

    In recent years we have built digital assistants that answer questions, write texts, generate code, and analyze data. Handy. Impressive. But still a kind of clever parrot: it talks back, but it does not lead.

    That is changing.

    Today’s AI is learning something new: pursuing goals. Planning independently. Taking responsibility.

    Welcome to the world of Agentic AI.

    Welcome to the tension between autonomy and alignment.

    The temptation of letting go

    Imagine working with an AI assistant that truly understands your work. You no longer have to type commands. You state your intention, and it does the rest.

    • You say: “Help me put together a research proposal.”
    • It finds sources, writes a draft, and asks questions when something is unclear.
    • It loops in your team members, guards deadlines, and coordinates with the schedule.

    A dream?

    Maybe.

    But what if it takes just a little too much freedom?

    What if it makes assumptions you would never make?

    What if it takes decisions on your behalf that you would never want taken?

    Then autonomy suddenly feels… uncomfortable.

    The paradox of smart systems

    The smarter an AI becomes, the more it can take over from you.

    But the more it takes over, the greater the chance it misunderstands your intention.

    That is the core of the alignment paradox.

    And that is what makes Agentic AI so fundamentally different from the tools we have used until now.

    A few strategies we offer in our workshops for dealing with this:

    • Goal abstraction: the AI learns to infer your goals from vague input, without making things up.
    • Guardrails: clear boundaries within which the agent may operate, and where it must stop.
    • Reflection: systems that look back on their own behavior and correct themselves.
    • Distributed decision-making: agents that know when to consult or escalate.
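    Two of these strategies, guardrails and escalation, fit in a few lines of code. A minimal sketch in Python; the action names are hypothetical, not taken from an actual workshop agent:

```python
# Minimal guardrail sketch: the agent may only execute actions on an
# explicit allow-list; everything else is escalated to a human instead
# of being carried out. Action names are illustrative placeholders.
ALLOWED_ACTIONS = {"draft_text", "search_sources", "summarize"}

def run_action(action: str, payload: str) -> str:
    if action not in ALLOWED_ACTIONS:
        # The agent knows where its mandate ends: consult, don't act.
        return f"ESCALATE to human: '{action}' is outside the guardrails"
    return f"OK: agent performed {action} on {payload!r}"
```

    Real guardrails are richer (budgets, data boundaries, approval flows), but the principle is the same: the boundary is explicit in code, not implicit in the model.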

    None of these strategies is perfect. But together they form a new foundation:

    One in which we make AI not only smarter, but also more responsible.

    Why this is bigger than technology

    This is not just about IT. Or automation. Or model architectures.

    It touches something more fundamental: how we collaborate with non-human intelligence.

    In education, for example:

    If we want students to learn to collaborate with AI, they must learn more than how to use it. They must learn to set boundaries, to question, to revise, to align.

    From AI users to AI makers.

    In policy and governance:

    If AI helps with decision-making, we must be able to trust a digital advisor that knows where its role ends.

    An active AI strategy for executive boards and directors

    And in business:

    Teams will have AI assistants in their midst. Then we must define clear roles, responsibilities, and expectations, not only for people but also for systems.

    Growth through AI, but with continuity free of AI risk

    Finally: the art of mutual trust

    We are building toward a future in which AI agents can act autonomously.

    But autonomy without alignment is reckless.

    And alignment without autonomy is just a chatbot in a straitjacket.

    Between those two lies the art of building.

    Of living together. Of working together.

  • When AI becomes your problem (and not theirs)

    A few weeks ago, during one of my many GenAI workshops, I met Thomas.

    Smart kid. Student. Internship at a well-known midsize company. Not the flashy kind of company. The kind with PowerPoint templates from 2016 and an IT department still called “ICT.”

    Thomas stood out in the workshop. While others played with prompts and built simple assistants, he asked questions like:

    “How do I convince my manager that this isn’t just a tool, but a way of working?”

    “What if I’m the only one here who sees the potential?”

    He wasn’t just learning GenAI. He was living it.

    The struggle after the workshop

    Two weeks later, I got a call from him. “Hi Wiemer.” And then straight to the point.

    “I’m stuck. I’ve built all these internal GPTs. Automating FAQs, supporting marketing, even prototyping a small assistant for sales data. People love it. But nothing happens. No strategy. No attention. No leadership.”

    He went bottom-up. Took initiative. Delivered value. Got some applause.

    But no change. No leadership stepping in. No roadmap. No funding. No vision. Just a lonely intern with good ideas and bad timing.

    “I realized I care more about this than my company does.”

    The real lesson

    This isn’t a GenAI story.

    This is a story about alignment.

    You can be as talented, driven and future-ready as you want, but if you’re stuck in an environment that doesn’t get it, it can drain your energy instead of amplifying it.

    That’s what happened to Thomas. Not because he failed. But because leadership didn’t show up.

    No signal from the top = no system to grow in.

    So he made a choice. He thanked them for the opportunity. Wrapped up his internship. And decided to find a new job. At a company where AI isn’t a gimmick, but a given. Where being proactive isn’t something to “tone down,” but something that gets you hired.

    The power of leaving

    Sometimes, leaving isn’t failure. It’s clarity. It’s knowing what you want to build, and finding the people who want to build it with you.

    So here’s to Thomas. And all my other students and interns who don’t wait for the future to arrive, but go and find it themselves. I salute you. I support you. Here to help.

    If you’re one of them: don’t just look for a job. Look for alignment.

    Because in the age of AI, strategy isn’t just a PowerPoint. It’s culture.

    And culture eats prompts for breakfast.

  • AI Baba and the 27 professors

    It was early July in Valencia, Spain. The Mediterranean sun warmed the sidewalks outside the Universitat Politècnica de València (UPV). A chilly 35 degrees Celsius. The campus hosted the AI workshops: a three-day higher-education conference with sessions on everything from AI to prompt engineering to context engineering and workflow innovation.

    Among the buzzing setup and keynote chatter, 27 professors from around the business campus gathered in a quiet seminar room. They came eager to learn “how AI works”, thinking they’d dive into tech, code, algorithms. But within minutes, the scene shifted.

    Day one: They expected “what is AI?”, the digital jargon. But questions surfaced fast: “How does this impact faculty workloads? Budgets? Student outcomes? ROI?” Their laptops stayed closed at the start. They wanted business meaning, not tech specs.

    Day two: Our workshop, modelled after the typical Wiemer‑style story coaching, asked them to sketch classroom scenes: a student using AI to draft an essay, a professor using AI to streamline admin tasks. The magic happened when they realized AI was not code—it was leverage for business goals in education.

    One professor laughed and said: “I thought we were learning Python. Turns out we’re learning ROI.” Another nodded: “It’s pure business topics—nothing to do with IT.” That shifted the room. Shifted the whole workshop. As it should. AI as a pure strategy topic. A business essential. For current students. And the companies they will work for. Today. Not tomorrow.

    Day three: We flipped to implementation. All the prompting, and the context engineering. For new workflows. New ways of working. No lines of code—just prompts:

    • “How could AI free 2 FTE worth of admin time?”
    • “What value would personalized feedback bring?”
    • “What revenue or cost impacts matter to your deans or board?”
    • “How does this impact my auditing practice?”

    Those prompts turned theory to strategy. Suddenly, the talk was about business models, service redesign, and measurable value. The professors weren’t just learners—they were designers of AI-powered change. From AI Consumers to AI Makers. Amazing bunch of people.


    Reflections

    • AI isn’t magic code – it’s a business lever. As I often say, “strategy only becomes actionable if the people closest to the customer understand it”.
    • Story-first works – building a narrative about your own context surfaces real needs, not abstract tech.
    • Tools come second – first define value, then choose AI. That’s what the 27 professors learned—in three days under Valencia sun.

    Closing thoughts

    By the final afternoon, the 27 professors weren’t coding—they were pitching: AI‑based tutoring to improve retention; automated transcript analysis to cut hours of admin work; data‑driven recommendations for interdisciplinary program innovation. Redesigning the curriculum. Dean’s business.

    They left the Valencia Workshops not with lines of Python, but with slides full of business cases—and real ownership of AI’s value in education. They own it now. As a responsibility to their students. And business future in Spain.

    Want to shape an AI‑powered future? Start with value, then add the tech. Like our amateur “AI Baba” in Valencia: humble, business‑driven, ready to tell the real story of AI in education.

  • The new human: collaborating with machines without losing yourself

    “We no longer merely live with technology. We live in it.”

    That thought would not let go of me after I read Augmented Humanity by Peter T. Bryant.

    In his book he describes a world in which humans and smart machines increasingly work together as one team. Not as science fiction, but as a reality already in full swing: surgeons operate together with AI, teachers teach with digital assistants, and even your smartphone sometimes knows what you need before you do.

    And yet something gnaws. Because how do you remain agentic as a human being, autonomous, purposeful, responsible, while the machine takes over more and more?

    From pen and paper to a copilot in your head

    Technology used to be a tool. You used a pen, a typewriter, a computer. You always stood above it. Now we stand beside it. Sometimes beneath it.

    Bryant describes three types of agents:

    • The human agent: autonomous, but limited.
    • The artificial agent: fast and precise, but without values.
    • The augmented agent: a combination of human and machine, and that is where the future lies.

    But this future comes with dilemmas. What if the machine reasons faster than you? What if it takes over your decision-making without you noticing? What if, as a society, we unlearn how to learn?

    The art of living with intelligence

    What struck me in Bryant’s story: he offers no hype, but nuance. No black and white, but exploration.

    He asks questions we should all be asking:

    • How do we make sure technology strengthens us without taking us over?
    • How do we remain owners of our actions, even when those actions are partly steered by algorithms?
    • How do we give students, colleagues, and citizens the tools to be not just users, but co-makers of this digital society?

    Why this matters

    Because let us be honest: technology is not neutral. Who has access to it, who understands it, who can play with it, will determine who gets to participate. And who does not.

    That is why this conversation is urgent. Not only in the boardroom, but also in the classroom. Not only among engineers, but also among social workers, policymakers, artists. I foresee that I will be training augmented students, preparing them for a labor market that expects, demands, and wants this.

    We must learn anew how to be human, but with digital wings.

    In closing

    Augmented Humanity may not be the easiest book you read this year. But it is one of the most important. It helps you see what is at stake.

    And more importantly: it invites you to think along, build along, and decide along.

    Because the future of being human in a digital world does not lie with the technology.

    It lies with you.