All Software Will Be AI Software

In the future, all software will be AI software.

What do we mean by this?

Well, we start by taking a long view of technology’s impact on society. Artificial intelligence (AI) is increasingly powerful and popular (given the mainstream attention to advances in AI-generated artwork and chatbots). Over time, we foresee a future where machine learning drives the development of software itself and innovation across all sectors.

However, we’ve still got a ways to go before “The Singularity” arrives and “artificial general intelligence” rules the universe.

Our co-founders, Logan and Zheng, have encountered first-hand the many obstacles standing between the present and the possible future.

Prior to Lexer, Zheng dealt with the problems of productionizing machine learning for autonomous vehicles, while Logan was upgrading underwriting decisions in the financial industry. Despite the different domains, applying ML in production raised a long list of shared issues:

  • Languages, Libraries, and Frameworks: What is the minimal set of ML frameworks, programming languages, and associated third-party libraries we need to develop and deploy a given model? Once we’ve picked a framework, what are the technical limitations and tradeoffs we need to make? For example, are there unsupported operations in the model? How much engineering effort will it be to roll our own hardware-specific algorithms to close this gap?
  • Hosting and Serving: What platforms / environments will we use for our ML model lifecycle? What’s the catch when using specific cloud services offered by big tech companies? To what extent are we locked in their toolchain or hardware targets?
  • Hardware: What is the hardware configuration on the host machine? Does it differ from our development environment, and does that affect our model validation process? Can we train and export production-grade models without incurring significant development costs?
  • Optimizing Performance: Once all the other technical decisions are aligned, will our application actually work well enough? Does it satisfy the operational constraints of its target platform, from basic performance metrics like inference latency and memory consumption to nuanced targets like the proportion of time spent on GPU memory transfers? Do we need to fork our ML framework and tweak GPU configurations for it to run well?
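Even the first of those performance questions, inference latency, takes care to measure well. As a minimal sketch (the `infer` callable and its input are hypothetical stand-ins, not part of any real model), a benchmark harness might warm up the model, then report latency percentiles rather than a single average:

```python
import statistics
import time

def benchmark_inference(infer, batch, warmup=10, runs=100):
    """Measure per-call wall-clock latency of an inference callable, in ms.

    Wall-clock timing is only a first approximation: GPU workloads need a
    device synchronization before each timestamp to be measured honestly.
    """
    for _ in range(warmup):  # warm caches / lazy initialization before measuring
        infer(batch)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(batch)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[int(len(samples) * 0.99) - 1],
        "mean_ms": statistics.fmean(samples),
    }

if __name__ == "__main__":
    # Stand-in "model": a fixed-cost computation instead of a real network.
    fake_model = lambda batch: sum(x * x for x in batch)
    print(benchmark_inference(fake_model, list(range(1000))))
```

Reporting p99 alongside the median matters because tail latency, not the average, is usually what violates an operational constraint.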

As long as these questions exist, AI alone won’t be able to answer them. At Lexer, our mission is to educate people on these opportunities, excite them with the possibilities, and provide the tools for humans and machines to build a better future together.
