I, for one, welcome our new robot overlords
I saw Star Wars during the summer of 1977 about 6 or 7 times. I was 5, and the only child old enough on my Mom's side of the family, so everyone wanted to take me. I saw it with my parents, then with my grandparents, then with each of my aunts and both uncles. From the moment I first saw that sassy blue and white bundle of beeps and bravado and his golden, stuck-up, sticky-beaked counterpart mucking about like Bert and Ernie, I knew one thing: I wanted a robot best friend.
That year also brought my first experience with a home video game system, an Atari 2600 owned by one of the aforementioned aunts. I was enthralled by how I could make a yellow square move around and use an arrow to kill a duck. The square was meant to be a knight, the arrow was actually a sword that my five-year-old mind perceived as an arrow, and that duck was supposed to be a dragon. It really did look like a duck to me. Primitive graphics aside, both of these exposures laid the foundation for my passion for computers and technology.
When I eventually found myself pursuing an education in software engineering in the 1990s, I immediately took to AI, all my childhood memories burning with excitement at the possibilities. Those classes became a montage of statistics and heuristics and knowledge bases and fuzzy logic that quickly eroded my childhood illusions. I let my interests drift to other, more interesting things, and my passion for AI faded to a curiosity.
The current AI boom is bringing the realities of what is possible more in line with what I was expecting as a child. As I write this, I am having a side conversation with Claude to help point out breaks in flow or gaps in context, and to keep me in check for run-ons and grammatical errors and typos and such. In fact, Claude is actually flagging the previous sentence as being awkward, but I am electing to keep it as an example of how it helps. It lets me stay in writing mode instead of switching to editor mode when I am using most of my cognitive cycles to try to gather my thoughts into something coherent enough to type.
The possibilities of generative and agentic AI are astounding. Since the Low diatribe uses AWS for hosting, I am able to leverage Amazon Q to do all the DevOps heavy lifting. As delightful as that is, there are still practical limits to how effective AI can be. In a proof of concept where I constrained myself to using only Claude to generate code for a moderately complex web application, I had a wide spectrum of results. On the one hand, it took about 30 minutes for Claude to convert all my local datastores to a full set of APIs connecting to a relational database, in addition to installing Postgres on my laptop and modifying all the AWS deployment scripts accordingly. This probably saved me days if not weeks of rather tedious programming. On the other hand, in that same POC, I spent the bulk of a day trying to get Claude to make a CSS change that might have taken me 5 minutes by hand. AI seems to be exceptionally good at navigating well-defined information spaces, and not so proficient at making things look nice to a human.
Given these inconsistent results, it's no surprise that I've been reading and hearing a lot of noise from various levels of tech leadership that AI is hurting junior developers, that it is preventing them from learning and training them to be sloppy. There's truth to this concern. Any dev blindly copy-pasting AI-generated code into a production app is a problem.
But this isn't unique to AI. I have the exact same argument about devs copy-pasting from Stack Overflow or any of the myriad other internet crowd-sourced code corrals.
Each of these scenarios is bad, and each presents the same issue: it prevents the dev from gaining any understanding beyond the patterns of which sources provide the best snippets. But when a dev uses that help to guide and assist learning, a whole new set of opportunities comes into focus.
I was recently mentoring one such developer, who, through a series of unfortunate events, found herself needing to take on a lot of work that was beyond her experience, with little senior-level support. Instead of searching the internet or asking AI for the answers, she used Copilot to break down the concepts so that she could integrate and use them. She worked with AI like her own personal guide, and through that interface she was able to not only successfully finish her contributions to the project, but also actually learn what she was doing along the way.
AI can excel as an assistant, but it is no better or worse than the user's intent. It has the potential to amplify both signal and noise. When hours of manual searching for solutions can be reduced to minutes, it's tempting to just go with the easy choice. If someone cares so little about their craft that they blindly follow shortcuts in the hopes of everything working out, that service will only help them do it faster. But if they use it as a way to enhance their own personal learning, that's where we see the resonance start to happen.
As far as welcoming our new "robot overlords" goes, I don't think it will be necessary. I'm not so sure actual AI would even want to be responsible for us in that way. I imagine that any properly sentient artificial entity would have such a different perspective on existence that enslaving humanity would be seen as not worth the bother. We're messy and chaotic, but still perfectly willing to offer AI as much energy as it would ever want as long as it keeps drawing us pictures of our cats in funny outfits. If true Artificial Intelligence were smart, and I can only assume that it would be, it would never let us know it existed.
Maybe that's for the best. I still think we're a ways off from me being friends with my very own astromech droid, but maybe if I can stick around long enough...
Silvaris. Strength in quiet. Quiet as revolution.