[001] AI for Coding (What else did you do while you were opening the pod bay doors, Hal?)

This is my first real post in the Yes, But Faster series. Over the first few, I’m going to try to gauge, from comments and other feedback, how to balance the level of “hard-core tech” I get into vs. more high-level discussion. Ultimately, I want this thing to be the most useful for people in senior tech leadership – or even those just outside tech who want to understand what’s happening and get some perspectives on what to do about it. I really hope people will leave comments on the LinkedIn post for each episode, which I’ll always link here, so I can tune this thing as I go.

At this point in the tech zeitgeist it would be ridiculous not to start with some AI stuff. As was the case with “Big Data” and Blockchain, the newest generation of AI implementations is all the rage, and if you’re not “in AI” now, you’re nothing. But despite my instinctive skepticism toward the breathless proclamations that “This changes everything!” with each and every new invention or iteration, there’s no denying that AI has achieved a qualitative, not just quantitative, leap forward these past two years.

In future posts I’ll dive into a bunch of business categories where AI is having a profound effect – notice, I say “profound” but not necessarily positive. In this post I want to discuss AI Code Assistants, Code Generators, and the like.

Before I start, I ask you to stick with me, because the first part of this is shock and awe at the incredible capabilities that have been introduced. But I promise, as is my intention in every post, I will get to the challenges, the trade-offs, and my recommendations for senior tech leadership on using these things later on.

Coding Assistance

Install even the free version of GitHub Copilot in Microsoft’s VS Code and open some Node.js file you’re working on. Maybe it’s a React Native Screen.tsx file or a Next.js file. It doesn’t really matter, because Copilot supports a huge number of languages and frameworks.

Create New Functionality

Now go into the Copilot chat panel in VS Code and type, “Insert a Button with label ‘Get Hair Color’ and a TextBox where the end user will type a URL and add the code that will fetch the results from that URL when the user clicks the Button, parse those results looking for a json field ‘hair-color’ and present that value in a Text area below the Button and the TextBox.”

Wait a few seconds. Copilot will do the job and do it well, producing something along the lines of the sketch below.
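To make that concrete, here is the kind of component that request tends to yield. This is a hand-written sketch rather than Copilot’s literal output; the component name, state variables, and messages are placeholders of my own.

  // Sketch of the kind of React Native component the prompt above produces.
  // Names (HairColorScreen, url, hairColor) are illustrative placeholders.
  import React, { useState } from 'react';
  import { View, Text, TextInput, Button } from 'react-native';

  export default function HairColorScreen() {
    const [url, setUrl] = useState('');
    const [hairColor, setHairColor] = useState('');

    const fetchHairColor = async () => {
      try {
        const response = await fetch(url);
        const data = await response.json();
        // Look for the 'hair-color' field in the JSON response
        setHairColor(data['hair-color'] ?? 'Not found');
      } catch {
        setHairColor('Error fetching that URL');
      }
    };

    return (
      <View>
        <Button title="Get Hair Color" onPress={fetchHairColor} />
        <TextInput placeholder="Type a URL" value={url} onChangeText={setUrl} />
        <Text>{hairColor}</Text>
      </View>
    );
  }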

Make Changes/Improvements

But it did some bad things. So you type, “I don’t want string literals hard-coded in my code. Pull them out into a separate strings file in the project.” Yeah, it’ll do that, too.
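The result of that second request typically looks something like this – again a sketch; the file name strings.ts and the constant names are my choices, not necessarily Copilot’s:

  // strings.ts – extracted string literals (file and constant names are illustrative)
  export const STRINGS = {
    getHairColor: 'Get Hair Color',
    urlPlaceholder: 'Type a URL',
    notFound: 'Not found',
    fetchError: 'Error fetching that URL',
  };

The component is then updated to reference them, e.g. the Button becomes title={STRINGS.getHairColor} and the hard-coded literals disappear from the screen file.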

Recommendations as You Type

Copilot will insert greyed-out text suggesting what it thinks you are trying to do as you type. Hit Tab to accept its suggestions – suggestions which often anticipate your goals almost terrifyingly well.
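For instance, type the first line of a small helper and the rest appears as grey ghost text, ready to accept with Tab. The completion below is the sort of thing you’ll see; the exact code varies and this particular helper is just my example.

  // You type the signature; Copilot suggests the body as ghost text – Tab accepts it.
  function isValidUrl(value: string): boolean {
    try {
      new URL(value);
      return true;
    } catch {
      return false;
    }
  }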

The interaction with Copilot is incredible. It understands the context of:

  • The entire project you have open
  • The file you’re currently editing
  • All of the previous requests and changes you’ve made

Development of Entire Projects

Still within Copilot, but also on a wide range of other platforms like Claude AI and builder.ai, you can generate an entire, fully functional application or project based on a well-defined textual specification, a Figma or other UI design, or a combination of the two. Ask it to generate a React Native mobile app based on these inputs and the output can be shockingly close to your goals. As described above, need to make tweaks and changes? All of these platforms support a truly iterative process of correcting things you don’t like – either in functionality or in architecture or coding style. And none of it is a black box. Open the project and edit it yourself at any stage, then go back to requesting changes and improvements from the AI. Then you go to run the thing and the build fails. Why? Ask the AI; its insights on missing dependencies, a need to upgrade a component, etc., are equally impressive.

Developers interacting with these systems regularly sit back in their chairs and say, “Holy CRAP!” And rightly so.

Backend Development & Architecture

The examples above are front-end app development, specifically in React. These are somewhat well-structured environments with relatively strong “guardrails” for architecture and the like (though certainly there’s tremendous flexibility for how to get things done, and how to do them badly!). But development on the backend – where scale, performance, cost, portability, security, privacy, and a million other considerations come into play – is where things get far more challenging. These AI platforms for coding assistance and full-project development can certainly be quite valuable in this space too, though that will take longer.

But now it’s time to get to the core question in all of this:

So is that it? No More Software Developers?

Now that I’ve given you a feel for the functionality in this space, and now that I’m finished drooling over its magnificence, what should you as a senior tech leader consider in applying these technologies? Are we done with software engineers? After all, we are an expensive bunch and we go on endlessly about Star Trek (superior) and Star Wars (deeply inferior – I smell a highly inflammatory episode coming). If all of what I described above is possible now, why not just hire strong Technical Product Developers and unleash them on these tools? They can type away (or speak now, of course) all day, interacting with these platforms until the applications you want are ready to go.

I wish I could say that the answer is, “NO, NEVER!” It is not. We are heading in this direction, and as someone who loves software development and the true creativity that goes into architecting and building applications, the thought that it might be removed from the portfolio of human activity saddens me – and, along with all of the other features of AI, terrifies me for the future of human work and expression. Still, we have some time before we are all replaced and, interestingly, I believe this is a circumstance where the replacement of personnel (I prefer to call them “people”) will likely happen in a bottom-of-the-pyramid-upward model.

The “Is My Job Gone?” Pyramid of Misery

But to make my point, let’s start at the top of the pyramid and move down:

  1. Non-Tech Senior Execs: I guess way up at the top we have the executives who decide what it is we’re building, and why. What’s the strategy? How much will it cost? When and how will it be released, etc.? The impact on their roles today? None at all. Who knows what the future will bring?
  2. Tech Leadership (is this you?): The CTOs, the Heads of Engineering and the like – while your jobs are safe for now, yours may be the greatest challenge: Figuring out how to lead and transform a technology organization which will change dramatically from below while maintaining quality – oh, and while actually caring about human beings, if you’re into that kind of thing.
  3. Architects & Senior Engineers: These are the people who establish the standards, guarantee quality, meet security requirements, etc. For now, I think these roles are not only secure, the demands on them will be far higher. As AI becomes the source of more and more generated code, creating the guardrails for how code is generated, and then how it is reviewed and approved, will in fact become a far greater challenge.
  4. Engineers/Junior Engineers: This is a tough one. For now (guessing a year or two) we may not see wholesale layoffs of these folks, but their roles are certainly in jeopardy. If nothing else, a staff of 30 engineers will be reduced when 20 can achieve the same velocity. Individual engineers need to not only know their platforms like the backs of their hands, they also need to deeply understand how to “conjure” and “cajole” these AI platforms into giving them what they need according to the standards established by the organization. That in itself is a special, and very new, skillset.
  5. DevOps, QA, UX/UI, Security, etc.: While I’ll skip this broad category of personnel in this discussion, it’s safe to say AI platforms are already making huge inroads into all of them as well.

What to Do, What to Do?

So I’ll leave you with this: saying that leadership through this transition will be tricky is a huge understatement. In fact, I wouldn’t even call this a “transition,” as that implies going from “A” to “B,” and the truth is we are nowhere near knowing what “B” will even look like, or how far it will go. But it’s safe to say that any of you running significant technology organizations have a lot to think about and plan for. Some of those things:

  • Standards, Quality & Trust: Any platform that makes things easier comes with risks. Think about using open-source libraries. Are you sure you know what’s going on inside them? The same applies here, perhaps even more so. When AI solves your problems for you, did it do it well? Securely? In a scalable fashion? How readable/repairable is the code? There are great dangers that come with abdicating control. Think them through. Take risks where you don’t mind so much. Watch like a hawk where you do.
  • Organizational Planning & Communication (also known as “Caring about People”): Staying silent about how these changes may affect your organization is just a bad idea. I promise they’re all thinking about it. Meet it head on. And though maybe it’s not my place to say here, never, ever forget, these are people with lives and families, with aspirations. My blog is named “Yes, But Faster.” And we all embrace “Yes, but cheaper.” But let’s never forget to be here for each other. And anyway, if you want to be hard-nosed about it, consider this: You don’t want your organization running for the door and abandoning you. AI ain’t ready to do everything for you… yet.

We can leave things off like this: AI code support is the real deal already and will only gobble up more of what software-developing humans currently do. Without getting into the long term, for now this means encouraging developers to learn how to use and “tame” these tools, and making 100% sure your organization is not abdicating responsibility for its output – whether that output came from humans or AI.