This is the last in a series of posts to help search fund entrepreneurs evaluate tech-enabled (and, more specifically, software) businesses. The purpose of these articles is to equip the searcher to ask better questions earlier in the qualification process and to preserve precious time and capital for more qualified opportunities.

In the first article, we framed how to think about a technology product company in terms of People, Product and Process. In the second article, we dived into the People leg of the stool. In the third article, we talked about the Product and the key questions to ask.

In this post, we'll look at how you can better understand the key risk areas of Process in your target company. We'll cover three: Scaling the Wrong Team, Fixing and Breaking, and Adding Expensive Resources. Along the way, we'll share questions you can ask, even if you're not an engineer, to get a better sense of what lies behind the curtain.

Scaling the Wrong Team

It's common in small companies to solve problems with people. Often this takes the shape of an implementation and support team growing much faster than the rest of the organization. The underlying issue when you see this is that the product requires a lot of human touch, either in onboarding a customer, supporting existing customers, or both. Good product companies can scale and add new customers without linearly adding support personnel. Here are some questions to help you get a sense of how the company operates today:

How many support tickets do you handle per day?

It's often beneficial to check this over time, say for the last six months, and tie it to the addition of customers. Do the trend lines run in the same general direction? If so, the company can't add customers more profitably over time; the ability to do so is the mark of a great product. Decreasing margins mean that, without addressing the underlying causes of the issues, you'll eventually be running an unprofitable business.
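
If the company can export monthly ticket counts and customer counts, this check takes only a few lines of analysis. Here is a minimal sketch in Python, assuming a hypothetical CSV export with month, tickets, and customers columns (adapt the names to whatever the company can actually provide):

```python
# Minimal sketch of the tickets-vs-customers trend check.
# File name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("monthly_metrics.csv")  # columns: month, tickets, customers
df["tickets_per_customer"] = df["tickets"] / df["customers"]

# A flat or falling ratio while customers grow means support scales well;
# a rising ratio means support cost is growing at least linearly with
# the customer base.
print(df[["month", "tickets_per_customer"]])
```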

How long does it take to onboard a new customer?

This is often a moving target inside the business. Oftentimes the implementation team will be the first line of defense not only for onboarding issues but also for support. Customers get used to talking with a certain person on the team and want to continue talking with that same person. As a result, the line between when a customer is in the initial implementation and when they've moved to support can be blurred, costing you revenue (if you charge for support) and increasing costs (providing support for free under the guise of implementation).

The other critical issue in onboarding is to assess how much the customer can do themselves. If the company's team is holding their hand all the way through the process, you need to understand whether that's a result of the complexity of the product or a failure to build systems easy enough for the customer to shoulder the burden themselves.

Fixing and Breaking

Nothing is more frustrating as a customer, and more draining to a product organization, than new versions of the product that break more things than they fix or add. Here are some questions to ask to get an understanding of how reliable the release process is:

Does support ticket volume increase directly following a release?

Of all the indicators of a bad release and testing process, this is the most direct. If the company is tracking support tickets in a system of some kind (either home-grown or something like Zendesk), you can plot the release schedule against the historical ticket volume. You can also plot the release schedule against historical bug reports. Spikes in either metric following releases indicate a lack of adequate testing in the release process, and most likely an acute lack of test automation. Ask the support team for reports on both bugs and support tickets over time (for the last 12 months at least).
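
Here's a rough sketch of that plot in Python, assuming a ticket export (for example, from Zendesk) with a created_at column and a handful of release dates; the file name, column name, and dates are hypothetical:

```python
# Overlay release dates on weekly support ticket volume.
import pandas as pd
import matplotlib.pyplot as plt

tickets = pd.read_csv("tickets.csv", parse_dates=["created_at"])
weekly = tickets.set_index("created_at").resample("W").size()

# Hypothetical release dates; pull the real ones from the release log.
releases = pd.to_datetime(["2024-01-15", "2024-02-20", "2024-03-30"])

weekly.plot(label="tickets per week")
for release in releases:
    plt.axvline(release, linestyle="--", alpha=0.5)  # mark each release
plt.legend()
plt.title("Support ticket volume vs. release dates")
plt.show()
```

If ticket volume visibly spikes after each dashed line, you're looking at a release process that regularly ships regressions to customers.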

Is there a documented test plan for releases?

In a perfect world, most testing would be done by machines rather than humans. When a product can support continuous deployment (releasing several times per day or at regular intervals) without breaking, it's most likely because there is adequate automated testing. In the absence of automated testing, manual test plans should be documented and the results of each test run recorded. Oftentimes the test plan is documented in a spreadsheet with the tests down the left side and each test run across the top. As each test is performed, the results are recorded in the sheet. The test plan is the responsibility of the QA / testing team. Lack of a documented test plan likely means that testing is at best ad hoc and regularly misses key areas of regression.
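
A well-kept plan doesn't need to be fancy. A hypothetical sheet might look something like this (the test cases, dates, and bug number are illustrative only):

```
Test case                      | Run 2024-03-01 | Run 2024-03-15
-------------------------------+----------------+----------------
Log in with valid credentials  | PASS           | PASS
Create and send an invoice     | PASS           | FAIL (bug #412)
Export the monthly report      | PASS           | PASS
```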

What percentage of the code is covered by automated tests?

Automated testing is a big topic in and of itself, with a good bit of nuance. However, simply asking this question signals that you understand its importance. Good build systems will run the automated tests with every code check-in and produce a test coverage report. Feel free to ask for a copy of this report to validate the statistics you've been given.
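
To make the idea concrete, here's a minimal sketch of what a coverage report measures, using Python with the common pytest and pytest-cov tools (the file and function names are hypothetical):

```python
# coverage_demo.py -- a function plus one test, to show what "coverage" means.

def apply_discount(total: float, pct: float) -> float:
    """Apply a percentage discount to an invoice total."""
    if pct < 0 or pct > 100:  # defensive branch
        raise ValueError("discount must be between 0 and 100")
    return total * (1 - pct / 100)

def test_apply_discount_happy_path():
    assert apply_discount(200.0, 10) == 180.0

# Running `pytest --cov=coverage_demo coverage_demo.py` reports the
# ValueError line as uncovered, because no test exercises it -- exactly
# the kind of gap a coverage report surfaces.
```

A team that tracks this number on every check-in can usually produce the report on request; an awkward silence is itself a data point.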

Adding Expensive Resources

Almost without exception, the most expensive human resources in a product company are the engineering staff. Good engineers are in high demand and command top-of-market salaries in virtually any market. Getting these people productive as quickly as possible means they become part of the revenue-generating machine instead of a drain on cash. Here are some questions to ask about how the company adds new engineers:

Can you walk me through the process for onboarding a new engineer?

You should expect that an engineer is able to be productive (fixing small bugs) within 1-2 days of coming aboard. If the product requires weeks of oversight and hand-holding to get the engineer productive, you're burning cash not only on the new hire but also on the existing personnel who are pulled away from their regular workload. Any onboarding process beyond a week is a red flag that the product is hard to understand and not well enough documented, which increases the likelihood that new bugs will be introduced and drag the organization down.

How well-documented is the codebase and surrounding processes?

Good teams will have an internal wiki that details how to get the product running on a developer's machine as well as the key concepts of the product that must be understood in order to work on it. Our acid test is that a new developer should be able to pull the source code to their machine and have it running within an hour or two. This indicates that things are reasonably well documented from an engineering perspective and that the code has been organized in such a way that the new developer can start poking around.

Conclusion

In this series we've looked at the three legs of the technology diligence stool: People, Product and Process. We've given you a few questions to ask early in your evaluation to help expose potential areas of weakness. None of these risks is necessarily a deal killer, but each is an area to explore in more depth as you go along. If the answers to these questions trend negative in the early stages, you can expect formal diligence to be highly unfavorable, or at the very least to expose areas of significant risk.

Please contact us if you're in the middle of evaluating a software product company. We'd be happy to answer any questions you've got or give you additional areas to explore prior to getting to LOI. 

If you've got further questions on any area covered in this series, or would like for us to answer any of these questions in more depth, let us know. Thanks for reading!