Once a month, I zoom out and write about overall trends in technology and vertical SaaS.

There are plenty of implicit fears these days about what AI means for the future of software. Does it blow up the existing industry? Do incumbents win? What happens? While I’m far less certain about the prospects of consumer software, I don’t think B2B is going anywhere. And while these are early thoughts, and it’s never wise to be too dismissive of disruption risk, I’ll attempt to sketch some of this out here.
In crypto, there’s something known as the Oracle Problem. The basic question: how can I verify that some external event I wish to register on the blockchain did in fact occur? Blockchains are isolated. They aren’t designed to have real links to the world. There’s no great way to admit real-world events that doesn’t also create a single point of failure: an external verification source exogenous to the blockchain.1
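For readers outside crypto, a toy sketch may make this concrete. Everything below (the contract, the class name, the addresses) is made up for illustration; the point is that the chain can check who reported an event, never the event itself.

```python
# Toy sketch of the Oracle Problem (contract, class, and addresses are invented).
# The "chain" can verify WHO reported an event, but not whether the physical
# event actually happened; the trusted oracle is a single point of failure.

class HouseSaleContract:
    def __init__(self, trusted_oracle: str):
        self.trusted_oracle = trusted_oracle   # exogenous verifier
        self.house_transferred = False

    def report_transfer(self, reporter: str, transferred: bool) -> None:
        if reporter != self.trusted_oracle:
            raise PermissionError("untrusted reporter")
        # Accepted on trust: nothing on-chain can confirm the handover itself.
        self.house_transferred = transferred


contract = HouseSaleContract(trusted_oracle="0xORACLE")
contract.report_transfer(reporter="0xORACLE", transferred=True)  # trust, not proof
```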
In software, we haven’t really had to worry about this, namely because we assume that trusted humans are logging correct information into our software and we don’t need to worry about decentralization. Humans have links to the real world and are trusted by their organization, so we can trust them to update the source of truth with relevant information. And in the off chance SBF logs wrong accounting information, the accounting software doesn’t go to jail; SBF does.2
With AI moving to the forefront, we have a reverse Oracle Problem. AIs are inherently digital actors, reliant upon predictions arising from a training set representing the internet writ large. This gives us a two-fold problem: 1) AIs are not “eyewitnesses” to the world of atoms, and 2) AIs can’t effectively maintain sources of truth for the real world without humans in the loop in some capacity.
Until IoT systems and robots inherit the earth, this isn’t changing.
Thus, in workflows that are digital, such as email, AIs stand a really good chance of taking over. But most workflows aren’t entirely digital. They’re at best hybrid. And these workflows will still need a source of truth auditable by humans. That’s software in some form or fashion. And even as we task AIs with more and more, there will consistently be a need for a system to ensure that AI actions in the world of bits correspond to actions in the world of atoms.
And what’s more, it’s not clear to me that software’s primary role as a source of truth for an organization can simply be displaced by AI.
One of the reasons AI’s use case has been so profound in generative work is that this concern is far less pressing there. Humans are still in the loop, verifying whether AI outputs are meaningful.
And AI turns out to be really good at this individual creative work, in part because there’s an immediate feedback loop around the work-product. Most work-product is after all just a value judgment. And creating the initial work-product and then judging it are two separate acts.
And while it’s clear that the initial work-product will get transformed by LLMs, it’s less clear that the judgment call on whether to “ship” that product will change as well. That latter judgment is often more of a management decision. AI is going to dramatically change work-product; I doubt it changes management.
Work-Product
Any industry or profession heavily reliant upon work-product generation at the individual level with very few links to the world of atoms will thus transform. Take law, which is already an abstract profession operating in the world of ideas. Most work-product generation will absolutely transform.3
There’s a general acronym for good legal writing that every law student is dutifully taught: IRAC.
Issue - My client was in an accident at this location
Rule - Support with case law
Analysis - Apply the case law
Conclusion - My client is owed a million dollars
To generate legal work-product, a typical associate is going to reference their case management software (Filevine) for the issue, a legal library (Casetext, Westlaw, or Lexis) for the research, and create a draft containing the above plus analysis, often using some pre-existing firm template.
In an AI-driven firm, what will probably happen is an associate writes a prompt like “Draft me a civil complaint involving the facts contained in Jack vs. Henry according to ______ template.”
And boom, an AI will whip it up for an associate to then scrutinize. In the background, of course, an AI will also be combing through relevant firm documents and case law, probably using the respective sources of truth for each piece of data.
Or, in a less dramatic and more near-term fashion, an associate will simply ask “find me every relevant case in the Seventh Circuit involving intellectual property disputes” and an AI will spit out a case list, high-level summaries, and relevant quotes. The associate might then narrow it further: “where does this court opinion distinguish from the facts in my case?” Just like that, you’ve cut the work required of an associate by a huge amount.
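To make the shape of this concrete, here’s a rough sketch under loose assumptions: CaseFile, search_case_law, and complete are hypothetical stand-ins for the firm’s case management record, its legal research library, and whatever LLM it calls, not any vendor’s actual API.

```python
# Rough sketch of the drafting workflow above. Every name here is a hypothetical
# stand-in: CaseFile for the case management record, search_case_law for a legal
# research library, and complete for whatever LLM API the firm uses.
from dataclasses import dataclass


@dataclass
class CaseFile:
    case_id: str
    jurisdiction: str
    narrative: str  # facts as logged by humans, the firm's source of truth


def search_case_law(query: str, jurisdiction: str) -> list[str]:
    """Stub for a legal research library; returns citations."""
    return ["Example v. Example, 123 F.3d 456 (7th Cir. 1999)"]


def complete(prompt: str) -> str:
    """Stub for an LLM call."""
    return "[draft complaint generated from prompt]"


def draft_complaint(case: CaseFile, template: str) -> str:
    authorities = search_case_law(query=case.narrative, jurisdiction=case.jurisdiction)
    prompt = (
        f"Draft a civil complaint using this template:\n{template}\n\n"
        f"Facts:\n{case.narrative}\n\n"
        "Authorities:\n" + "\n".join(authorities)
    )
    return complete(prompt)  # the associate, not the AI, decides whether to file it
```

The interesting part is what sits on either side of that last call: the sources of truth feeding the prompt, and the human deciding what to do with the output.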
There’s still an important judgment function on the quality of the draft that an associate will perform. And a partner might then wish to peruse it themself to make sure it’s up to snuff.
The important point for the 'end of software' debate is that we aren't going to entrust an AI to manage the case itself or have the last word on quality. We simply will use it to formulate work-product associated with the case itself.
In short, there’s a big difference between AI as a work-product generator and AI as a management tool.
Perhaps another thought experiment will prove useful: would we expect it to be possible for a group of AIs (or even just one) to entirely operate a corporation?
To the extent your answer is no, my guess is that it has nothing to do with AI’s ability to turn out useful products and more to do with AI’s capability to manage the strategic objectives of an org, navigate human terrain, and more.
In fact, any future where AIs can run an organization fully on their own is also a world in which this entire piece is meaningless. AGI will be here, and the world will be so vastly different that it’s unpredictable.4
Short of that, we will still need management tools and humans in the loop of work-product.
Software and Management
Software, then, has mainly been a management tool. I think AI is mainly going to be a work-product tool for the foreseeable future.
So if work-product becomes cheap to create, that doesn’t necessarily mean traditional software is over. Quite the opposite. Software is usually about logging or aiding human activities in the world for an organization to manage. Why would we assume that AI actions wouldn’t need to be logged for management as well?
And until we build IoT and robots for everything, humans are still going to be the source of truth for the world of atoms. Every delivery still ends in a QR scan by a human worker. Even if a drone performs the delivery, that’s still going to have to be logged somewhere.
So the general concept of software as a structured database with workflows isn’t going anywhere.
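A minimal sketch of what that logging might look like, with purely illustrative field names: whoever performs the digital step, the system of record still ties it back to an attestation from the world of atoms.

```python
# Minimal sketch (field names are illustrative): whether a human or an AI performs
# the digital step, the structured database still records it and ties it to a
# confirmation from the world of atoms.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class DeliveryRecord:
    order_id: str
    performed_by: str           # e.g. "ai:routing_agent" or "human:driver_42"
    confirmed_in_atoms: bool    # e.g. a QR scan at the door
    confirmed_by: str | None    # who or what attested to the physical event
    timestamp: datetime


ledger: list[DeliveryRecord] = []


def log_delivery(order_id: str, performed_by: str, scanned_by: str | None) -> DeliveryRecord:
    record = DeliveryRecord(
        order_id=order_id,
        performed_by=performed_by,
        confirmed_in_atoms=scanned_by is not None,
        confirmed_by=scanned_by,
        timestamp=datetime.now(),
    )
    ledger.append(record)  # the workflow database remains the source of truth
    return record


# An AI can schedule and route the delivery, but a person (or a drone's scanner)
# still closes the loop, and that closure gets logged.
log_delivery("order-1001", performed_by="ai:routing_agent", scanned_by="human:driver_42")
```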
But what if the costs to build software shrink incredibly low? Can’t organizations roll their own stack?
Well, they could, but is that wise or even probable? There are plenty of benefits to having a shared software tool across an industry or profession. People don’t have to retrain as much, and I would assume that AIs won’t have to either.5
What’s more, most people really don’t want to build out custom software, and I’m not sure this changes in an AI world. My hunch is that AI is going to create an even more specialized world. Individually and as a company, highly specialized (and thus non-commodified) tasks will be the only real moat. Executing on that will of course involve AIs, but mostly it will involve outsourcing any non-specialized function to partners in order to drive efficiency and focus on the goal.
And that will mean software partners as well.
Dual-use Software
My second hunch is that we’ll see the rise of dual-use tech companies: traditional software to help manage and interact with the real world, and AIs to help develop work-product that is then logged within these management systems.
This is part of the reason I’m so impressed with Filevine. They have the case management module plus a new, best-in-class text editor. They’re already becoming a dual-use innovator, and I think they will dominate an AI-driven legal world.
In short, there’s no reason to expect sources of truth to undergo massive disruption, especially those that are cataloguing activities in the real world.
Instead, the next phase will involve the disruption of work-product tools and the new management challenges that come with it. Dual-use tools that allow organizations to drive growth and efficiency in both management and work-product will become incredibly valuable.
Further, there’s a new mandate for vertical software companies operating in tricky, undigitized verticals: make great sources of truth, and then make AI-empowered tooling that significantly accelerates the industry itself into the new age. After all, if AIs have no industry data to work with and no workflows that maintain fidelity with the real world, what good are they? We probably won’t see the productivity increases we would hope for. We must dream bigger than a mere generalized email assistant. AIs that can navigate industry data and help create better work-product around supply chains, drug discovery, and healthcare will be one of the biggest unlocks possible. And so it is somewhat up to the next generation of vertical companies to design software and LLMs with that in mind.
Interesting Links:
This piece on software’s future is great and a fun piece in conversation with my own.
This piece on the future of banks + fintechs is a good read.
Next Month:
Next month is going to be quite diverse. Expect some stuff on real estate, fintech in vertical SaaS, and a piece on one of my favorite vertical software companies.6
A good quote from this piece: “When Alice transfers the house to Bob, the smart contract needs to know that she actually transferred the house to Bob. There are several ways of doing this but they all have the same essential problem. There has to be some trust in some third party to verify the events in the physical world.”
Generously, let’s assume he was logging accounting information at all.
Law has a very specific definition of work-product: the mental impressions, strategies, and ideas of an attorney in a case. What’s interesting is that these are protected and not discoverable in litigation. Everyone’s concerned that AIs will be regulated out of the legal profession, but I lean the opposite way. I think they are already protected for use in work-product generation. The very use of AI may not be discoverable at all.
I posed this thought experiment to GPT-4 as well, and eventually it agreed that the only world in which an AI-run organization is intelligible is one in which AGI is present. How far are we from that? Will we ever get there? I’m not sure. And it doesn’t even seem worth spending energy on a framework that assumes AGI is coming when there is still so much to build.
Cf. Adept.ai
Would love to chat with some folks in real estate/mortgage SaaS. Hit me up if you have some thoughts.