Friday, May 16, 2025

LLMs ≈ Pocket Calculators for the Mind

Why the real question is how we use them, not whether we should

“I don’t carry information in my mind that is readily available in books… The value of a college education is not the learning of many facts but the training of the mind to think.” — Albert Einstein (Quote Investigator)


 

1 · Why this matters to me

I’ve spent three decades putting technology at the service of people, not the other way around. Large language models (LLMs) now sit on my workbench beside Docker, Bicep, and Git—but only as tools:

  • Sounding board – I draft ideas, let the model challenge clarity, then revise.
  • Turbo spell-checker – grammar, tone, inclusiveness, and bilingual nuances get a quick, respectful scrub.
  • Pattern spotter – when logs, YAML, or policy docs sprawl, an LLM helps surface the outliers I might miss (a minimal sketch of this kind of triage follows this list).
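
As an illustration of that last point, here is a minimal sketch of the pattern-spotter workflow: a batch of log lines is sent to a locally hosted model and the reply is treated as a list of candidates for human review. The endpoint URL, model name, and prompt are assumptions for the sake of the example, not a description of a specific setup.

```python
# Minimal sketch: ask a locally hosted LLM to flag anomalies in a batch of log lines.
# The endpoint URL and model name are placeholders for whatever local instance you run
# (any OpenAI-compatible chat endpoint will accept this request shape).
import requests

LLM_URL = "http://localhost:11434/v1/chat/completions"  # hypothetical local endpoint
MODEL = "local-llm"  # placeholder model name

def flag_outliers(log_lines: list[str]) -> str:
    """Send a chunk of log lines and ask which entries look unusual, and why."""
    prompt = (
        "Here are application log lines. List only the entries that look "
        "anomalous (unexpected errors, odd timestamps, unusual values) and "
        "say briefly why:\n\n" + "\n".join(log_lines)
    )
    resp = requests.post(
        LLM_URL,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,  # keep the triage as deterministic as possible
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    sample = [
        "2025-05-16T08:00:01 INFO  deploy completed in 42s",
        "2025-05-16T08:00:02 INFO  deploy completed in 41s",
        "2025-05-16T08:00:03 ERROR token refresh failed for svc-backup",
    ]
    print(flag_outliers(sample))
```

The model’s answer never goes straight into a ticket or a dashboard; it is a shortlist for the human reading the logs to confirm or discard.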

2 · History keeps rhyming

  • Pocket calculators (1980s classrooms)
    • Initial fear: “Students will forget how to add.” Teachers’ unions protested nationwide. (easy-task.ai)
    • What actually happened: Mental arithmetic skills shifted, but math curricula moved up the value chain (algebra sooner, statistics earlier).
  • The Internet & Google (2000s)
    • Initial fear: “Search engines are making us stupid.” (The Atlantic, 2008)
    • What actually happened: Information literacy became vital; search refined our questions, not our ability to reason.
  • LLMs (today)
    • Initial fear: “AI will replace writers, coders, thinkers.”
    • What actually happened: It’s doing for knowledge work what calculators did for arithmetic—removing drudgery so we can concentrate on insight.

The pattern is clear: new tools redistribute cognitive load. They do not erase our abilities; they elevate where we invest them. So of course I'm going to use this tool. Heavily.

3 · LLMs through a human-centric lens

  • Augment, don’t abdicate

    • I ask an LLM to critique an incident-response playbook, then I decide which refinements fit our risk profile.
  • Traceability by design

    • Every AI-assisted change is committed with provenance in Git. Humans review before merge—no silent overrides (see the sketch after this list).
  • Privacy & ethics guardrails

    • No sensitive client data ever enters a public model. I maintain air-gapped, containerized instances for secure contexts.
  • Continuous learning loop

    • Just as mental arithmetic drills still matter, we run “manual-only” sprints: teams solve tickets without AI, then compare outcomes to keep skills sharp.
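
To make that traceability enforceable rather than aspirational, a merge gate can refuse AI-assisted commits that lack a human sign-off. The sketch below assumes a convention of “AI-Assisted:” and “Reviewed-by:” trailers in commit messages and a main..HEAD revision range; both are illustrative choices, not an established standard.

```python
# Minimal sketch of a pre-merge provenance check, assuming each AI-assisted commit
# carries "AI-Assisted: <tool>" and "Reviewed-by: <human>" trailers in its message.
# The trailer names and the default branch range are illustrative conventions.
import subprocess
import sys

def commits_in_range(rev_range: str) -> list[str]:
    """Return the commit hashes a merge would bring in (e.g. 'main..feature')."""
    out = subprocess.run(
        ["git", "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def trailers(commit: str) -> str:
    """Return the trailer block of a commit's message."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%(trailers)", commit],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def main(rev_range: str = "main..HEAD") -> int:
    missing = []
    for commit in commits_in_range(rev_range):
        block = trailers(commit)
        # AI-assisted work without a named human reviewer is what we want to catch.
        if "AI-Assisted:" in block and "Reviewed-by:" not in block:
            missing.append(commit[:10])
    if missing:
        print("AI-assisted commits without a human reviewer trailer:", ", ".join(missing))
        return 1  # non-zero exit fails the pipeline: no silent overrides
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "main..HEAD"))
```

Wired into CI, the non-zero exit code blocks the merge, so the “humans review before merge” rule is machine-checked instead of relying on memory.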

4 · Why fears persist—and how to answer them

  • Concern: “People will stop thinking.”
    • Practical rebuttal: Tools free bandwidth for higher-order thinking—exactly Einstein’s point. (Quote Investigator)
  • Concern: “Outputs are unreliable.”
    • Practical rebuttal: Treat LLM drafts like raw code from a junior dev—review, test, validate.
  • Concern: “Jobs will vanish.”
    • Practical rebuttal: Roles evolve: prompt engineering, AI governance, and human-in-the-loop QA are already new career paths.

5 · Guiding principles I follow

  • Humanism first – Empathy and critical reasoning remain irreplaceable.
  • Transparency – Disclose AI assistance in deliverables.
  • Accountability – The author (me) signs off; the model never owns the final word.
  • Sustainability – Prefer efficient, on-device models when possible to reduce energy footprint.
  • Accessibility – Use AI to lower—not raise—the barrier for non-technical colleagues.

6 · Call to Action

Next time you see an LLM suggestion pop up, remember the calculator in your desk drawer: it didn’t make you forget 2 + 2; it let you solve for x sooner. Let’s wield AI with the same intent—to think better together.

*Written in collaboration with ChatGPT o3

#HumanisticAutomation #LLM #AIethics #ContinuousLearning #DevOps #TechForGood
