Minimum Viable Expectations for Developers and AI
We're headed into the tail end of 2025, and I'm seeing a lot less FUD (fear, uncertainty, and doubt) amongst software developers when it comes to AI. As usual when it comes to adopting new software tools, I think a lot of the initial hesitancy had to do with everyone but the earliest adopters falling into three camps: don't, can't, and won't:
- Developers don't understand the advantages for the simple reason they haven't even given the new technology a fair shake.
- Developers can't understand the advantages because they are not experienced enough to grasp the bigger picture of their role (they are problem solvers, not typists).
- Developers won't understand the advantages because they refuse to do so, on the grounds that the new technology threatens their jobs or conflicts with their self-image as a "craftsman" (you should fire these developers).
When it comes to AI adoption, I'm fortunately seeing the number of developers falling into these three camps continue to wane. This is good news because it benefits both the companies they work for and the developers themselves. Companies benefit because AI coding tools, when used properly, unquestionably write better code faster for many (but not all) use cases. Developers benefit because they are freed from the drudgery of coding CRUD (create, retrieve, update, delete) interfaces and can instead focus on more interesting tasks.
Because this technology is so new, I'm not yet seeing a lot of guidance regarding setting employee expectations when it comes to AI usage within software teams. Frankly, I'm not sure most managers even know what to expect. So I thought it might be useful to outline a few thoughts regarding MVEs (minimum viable expectations) when it comes to AI adoption:
Use an AI-first IDE
Even if your developers refuse to use generative AI tools for large-scale feature implementation, the productivity gains to be had from simply adopting intelligent code completion are undeniable. A few seconds here and a few seconds there add up to hours, days, and weeks of time otherwise spent repeatedly typing for loops, commonplace code blocks, and the like.
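To make this concrete, here's the sort of routine code a completion model will happily finish after you've typed little more than the signature. The function itself is invented for illustration:

```php
<?php

// You type the signature and perhaps a short comment...
function totalsByStatus(array $orders): array
{
    // ...and the completion suggests the rest of the routine body in one keystroke.
    $totals = [];
    foreach ($orders as $order) {
        $totals[$order['status']] = ($totals[$order['status']] ?? 0) + $order['total'];
    }

    return $totals;
}
```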
Use GitHub Copilot or a Similar Tool for Automated Code Reviews
Agentic AIs like GitHub Copilot can be configured to perform automated code reviews on all or specific pull requests. At Adalo we've been using Copilot in this capacity for a few months now, and while it hasn't identified any earth-shattering issues, it has certainly helped improve the code by pointing out subtle edge cases and syntax issues that could ultimately be problematic if left unaddressed.
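One lightweight way to steer these reviews is a repository-wide custom instructions file, which Copilot consults when reviewing pull requests. The following is an illustrative sketch rather than our actual file:

```markdown
<!-- .github/copilot-instructions.md (illustrative) -->
When reviewing pull requests:

- Flag any database query executed inside a loop.
- Verify that new endpoints include authorization checks.
- Prefer suggesting early returns over deeply nested conditionals.
- Point out missing test coverage for new public methods.
```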
Incorporate MCP Servers Into Coding Workflows
In December 2024, Anthropic announced a new open standard called the Model Context Protocol (MCP), which you can think of as a USB-like interface for AI. This interface gives organizations the ability to plug both internal and third-party systems into AI, supplementing the knowledge already incorporated into the model. Since the announcement, MCP adoption has spread like wildfire, with directories like https://mcp.so/ tracking more than 16,000 public MCP servers.
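To make the plumbing concrete, here's a minimal client-side configuration, assuming Claude Desktop's claude_desktop_config.json format and Anthropic's reference filesystem server; the project path is a placeholder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/project"
      ]
    }
  }
}
```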
Companies like GitHub and Stripe have launched MCP servers that let developers talk to these systems from inside their IDEs. Developers can, for instance, create, review, and ask AI to implement tickets without ever leaving the IDE. As with intelligent code completion, reducing the number of steps a developer must take to complete everyday tasks adds up to significant time saved in the long run.
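As a sketch of what this looks like in practice, the following wires GitHub's MCP server into Cursor via a .cursor/mcp.json file, assuming the Docker distribution described in that server's README; the token is a placeholder you'd generate yourself:

```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```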
Use AI to Assist with Test Writing
In my experience, test writing has ironically been one of AI's greatest strengths. SaaS products I've built such as https://securitybot.dev/ and https://6dollarcrm.com/ have far, far more test coverage than they would ever have had pre-AI. As of this writing, SecurityBot.dev has more than 1,000 assertions spread across 244 tests:
```
Tests: 244 passed (1073 assertions)
```
6DollarCRM fares even better (although the code base is significantly larger), with 1,149 assertions spread across 346 tests:
```
Tests: 346 passed (1149 assertions)
```
Models such as Claude Sonnet 4 and Claude Opus 4.1 have been remarkably good test writers, and developers can reinforce this by explicitly requiring tests alongside generated code in their specifications.
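To give a feel for the output, here's a Pest test in the style these models tend to generate for a Laravel application; the Contact model, its factory, and the /api/contacts route are hypothetical:

```php
<?php

// Hypothetical Pest feature test; the contacts endpoint and table are
// invented for illustration, not taken from either product above.

use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;

uses(RefreshDatabase::class);

it('creates a contact for an authenticated user', function () {
    $user = User::factory()->create();

    // Submit a valid contact and confirm it was persisted.
    $this->actingAs($user)
        ->postJson('/api/contacts', [
            'name'  => 'Jane Example',
            'email' => 'jane@example.com',
        ])
        ->assertStatus(201);

    $this->assertDatabaseHas('contacts', ['email' => 'jane@example.com']);
});

it('rejects a contact with an invalid email address', function () {
    $user = User::factory()->create();

    // Invalid input should trigger Laravel's validation layer, not a 500.
    $this->actingAs($user)
        ->postJson('/api/contacts', ['name' => 'Jane', 'email' => 'not-an-email'])
        ->assertStatus(422)
        ->assertJsonValidationErrors(['email']);
});
```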
Proactively Review and Revise AI Coding Guidelines
AI coding tools such as Cursor and Claude Code tend to work much better when the programmer provides additional context to guide the AI. In fact, Anthropic places such emphasis on doing so that it appears first in this list of best practices. Anything you'd communicate to a new developer joining your team is worth including in this context: coding styles, useful shell commands, testing instructions, dependency requirements, and so forth.
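For example, Claude Code reads a CLAUDE.md file from the project root. A minimal sketch might look like the following, with the specifics standing in for whatever applies to your own codebase:

```markdown
# Project guidelines

## Commands
- Run the test suite: ./vendor/bin/pest
- Lint before committing: ./vendor/bin/pint

## Conventions
- Follow PSR-12; use typed properties and return types everywhere.
- Every new endpoint requires a feature test and a validation test.
- Never modify migrations that have already shipped.
```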
You'll also find publicly available coding guidelines for specific technology stacks. For instance, I've been using this set of Laravel coding guidelines for AI with great success.
Conclusion
The sky really is the limit when it comes to incorporating AI tools into developer workflows. Even though we're still in the earliest stages of this technology's lifecycle, I'm personally seeing enormous productivity gains in my own projects and greatly enjoying watching the teams I work with come around to its promise. I'd love to learn more about how you and your team are building processes around these tools. E-mail me at wj@wjgilmore.com.