Introduction – Technical Due Diligence

What is Technical Due Diligence?

Technical Due Diligence is a formal review undertaken by the buyer prior to committing to the acquisition of a target company.  It is one of multiple Due Diligence workstreams and is particularly important in software companies.

Let us begin with some disambiguation: Technical vs Supplier Due Diligence.

You may have been told you need to do a Tech Due Diligence on a major technology provider and you want to conduct a thorough review as part of your sourcing process.  We understand the term Due Diligence is also used for the tech supplier sourcing verification stage prior to contract.  See the bonus section at the bottom of this article for some tips.

For those of you about to embark on M&A or strategic investment in tech, read on!

Definition and Importance

A Technical Due Diligence is a formal review associated with an M&A (Merger & Acquisition) process or strategic investment deal.

Both ‘Technical Due Diligence’ and ‘Technology Due Diligence’ are commonly used.  The abbreviation ‘Tech DD’ is also common.

The general concept of Technical Due Diligence is to find out as much as possible about the technology aspects of the deal before it goes ahead.  The buyer or investor has a deal thesis – their rationale for doing the deal – and the details of the technology need to be reviewed for alignment with what is hoped or expected.

The Technical Due Diligence process is important because the target has an information asymmetry advantage.  Put simply, they know more about their technology than the potential acquirer.  Once the Technical Due Diligence is done the acquirer knows enough to make an informed investment decision.

The importance of Technical Due Diligence has grown over the years as software and information technology have become central to business and to deals.

When is Technical Due Diligence Performed?

Technical Due Diligence is performed on a target that the acquirer wants to buy, or investor wants to invest in, prior to a final agreement being reached.

Mergers and Acquisitions

A Tech Due Diligence is performed as part of the buyer’s standard process when acquiring a company with a) significant or material technology assets, b) revenue streams that are dependent on technical services operating effectively and / or c) a deal thesis contingent on successful technology acquisition, integration and ongoing Research & Development (R&D) activities.

For software companies buying other software companies it is crucial.

Tech Due Diligence is normally performed after a Letter of Intent (LOI) has been signed and before a Definitive Agreement (DA) is signed. In many cases the acquirer has exclusivity during this period and a limited time to complete Due Diligence.

The process is normally completed within 4 to 6 weeks.

Tech Due Diligence is one of many workstreams and will occur in parallel with Finance, HR and Legal workstreams.

Strategic Investments

A Tech Due Diligence is also performed as part of the investor’s standard process when investing a material sum into a company that has a key technology asset.

In the case of Private Equity (PE) firms the acquisition is viewed as part of a set of strategic investments that form part of a fund.  The investments within a Portfolio are also referred to as Portfolio Companies (or PortCo for short) and in many cases the new target will be merged into an existing operating company.

Many PE firms are on the lookout for turnaround opportunities and consider negative findings that may be raised during a Tech Due Diligence as levers to transform during their intended period of ownership.

Benefits of Technical Due Diligence

There are many benefits of Tech Due Diligence. The acquirer or investor can proactively manage risks, improve decision-making and identify integration issues early, so they can be factored into the plan.

A thorough Tech Due Diligence will also provide comfort and confidence to stakeholders that the investment is being well-managed.

Reduced Risk and Mitigating Hidden Costs

Proactive risk management involves identification and assessment.  Tech Due Diligence is the primary method of identifying and assessing risk in tech-sector M&A, and in any deal where the target depends heavily on technology for revenue generation and operational efficiency, and where the effectiveness of R&D is a key ingredient in overall company success.

The idea of a cost being hidden relates to the concept of information asymmetry mentioned earlier. At the start of the process the seller knows something the buyer doesn’t.  During the process of Tech Due Diligence the target is required to disclose the information that is requested.  The Tech Due Diligence team generates, gathers and sifts through the information and determines relevance, importance and connects the pieces of the puzzle: to realise the deal thesis, what additional investment will be required?

Improved Decision-Making

When the M&A strategy is defined there are assumptions made about the target and the state of the business.  During Tech Due Diligence these assumptions are tested.  When the information is reported and summarized during the Tech Due Diligence readback, the deal team can consider whether the findings are aligned with the assumptions made when the target was identified and the LOI was signed.

Discoveries made during Tech Due Diligence can lead to a re-negotiation of the deal terms.  Or the acquirer can continue with eyes wide open to commit to the total investment (money, time and effort) that will most likely be required to realise the deal thesis.

In some cases the potential showstoppers that are raised by the Tech Due Diligence workstream can result in the buyer walking away from the deal.

Identifying Integration Issues Early On

When two entities merge there are often significant efforts required to integrate.

For tech sector acquisitions the ‘tech tuck-in’ pattern reflects the idea that the target technology and team will be ‘tucked in’ to the acquirer.  In this case the Tech Due Diligence needs to look at how the offerings can be combined.  Commonly a period of side-by-side operation is planned while the full integration is executed.

For many other businesses that are dependent on tech there is often a rationalization required, where the M&A results in unnecessary duplication. A period of parallel-running can occur during a merger that can last for months or years. Ultimately there should be synergies in operating efficiency once the rationalization occurs, but in the meantime the cost of parallel-running and data migration can be significant.

Some deals expect the acquired company to continue to operate ‘stand alone’ and the Tech Due Diligence is largely a confirmatory exercise.  Is what got them to here going to get us to where we need to go?  Aspects such as scalability and performance can be more important in those Tech Due Diligence engagements.

Key Areas of Technical Due Diligence

This section explores some of the key areas that a Tech Due Diligence covers.


Technology Infrastructure

The technology infrastructure is what the solution or service runs on.  This section assumes the target is a provider of services that requires a production (also known as Prod) technology stack.  This includes contemporary Software-as-a-Service approaches.

In the cases of software being provided to the customer and run in the customer’s technology environment, we look at what is the normal requirement for Prod (that the customer needs to provide) and how it needs to be administered and operated. 

Hardware and Software Inventory

The hardware and software inventory will provide details of the type and number of assets / Configuration Items (CIs).  We recommend that with the provided inventories (uploaded to the Virtual Data Room), the target should designate Prod assets differently from non-Prod environments such as Development and Test environments.

There have been many shifts over the years from dedicated and proprietary stacks towards virtualized, open source and cloud native infrastructure.  A well-known example is the increasing popularity of Kubernetes, an open source container orchestration platform that containerized workloads run on.

Related choices are whether to host workloads in Data Centers or with cloud service providers such as Amazon, Microsoft or Google (referred to as ‘the hyperscalers’), or with one of the smaller providers such as DigitalOcean.

The Tech DD team should adjust their approach to understand the current hosting arrangements and to explore how intended post-acquisition deployments can be supported. 

Network Architecture and Security

The network architecture defines how the service is accessed, which includes user sessions, APIs (Application Programming Interfaces) and how the different technology components are connected from the front-end (e.g. web servers) to the back-end (e.g. databases).

It is normal for privileged access to be handled with a higher level of security, such as requiring Virtual Private Network (VPN) access.  It is also normal for multiple lines of defence, including firewalls, to be established between less trusted networks and protected systems and databases.

Modern approaches under the banner of ‘Zero Trust’ network architecture assume an untrusted / hostile situation involving adversaries seeking to do malicious and intentional harm.    

Capacity and Scalability

To verify the infrastructure can run the workloads at the desired level of performance it is important to look at the capacity and scalability.  

In the case of cloud hosting it is normal for a service provider to enable elastic scaling of some of the technology components that are used.  This provides for an increased number of infrastructure components to be alive and processing when the workload increases, and a scaling back when usage drops during non-peak periods.  However, elastic scaling can run into limitations if there is a fundamental design problem limiting scaling.
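As an illustration of the elastic scaling idea, the proportional rule that autoscalers apply (the Kubernetes Horizontal Pod Autoscaler uses this shape of formula) can be sketched in a few lines. The target, floor and ceiling values here are illustrative assumptions, not recommendations:

```python
import math

def desired_replicas(current: int, utilization_pct: int,
                     target_pct: int = 60, min_n: int = 2, max_n: int = 20) -> int:
    """Scale the replica count so average utilization trends back toward target_pct."""
    if utilization_pct <= 0:
        return min_n
    # Proportional rule (same shape as the Kubernetes HPA formula):
    #   desired = ceil(current * observed / target)
    desired = math.ceil(current * utilization_pct / target_pct)
    # Clamp to configured floor and ceiling.
    return max(min_n, min(max_n, desired))

print(desired_replicas(4, 90))  # sustained heavy load -> 6 (scale out)
print(desired_replicas(4, 15))  # quiet period -> 2 (scale in to the floor)
```

A fundamental design problem, such as a single-writer database bottleneck, shows up as the real system failing to improve even when this rule keeps adding replicas.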

In a Data Center hosted approach, although there may be private cloud technology layers present, such as VMware virtualization, there can be limits to the elasticity due to the relatively fixed nature of the underlying hardware, in which case the Tech DD team will examine headroom in the current stack and anticipated time required to provision additional infrastructure.

The related aspects of availability and reliability consider what happens when there are planned or unplanned outages of any infrastructure component across the stack.  Modern infrastructure design aims to be resilient to the failure of any one node. 

The Infrastructure-as-Code approach applies software engineering principles to infrastructure problems, including automatically provisioning new instances of compute or scaling other components when needed.  Resource files that define the infrastructure to be spun up are stored in software repositories.

Cost-related observations are shared with the Finance workstream to support Cost of Goods Sold (COGS) calculations under forecast growth scenarios.  

Disaster Recovery Plans

Major disruptions to technology services were historically addressed with Disaster Recovery Plans (DRPs).  These were typically based around an unplanned and extended outage of the primary Data Center.  Often the central consideration was a failover to a designated secondary Data Center, data restoration from backup and staged service resumption within a defined Recovery Time Objective (RTO) window.

The DRP approach is still valid for many types of critical infrastructure, including banking, payment processing switches and energy network operations.

Modern approaches combine elements of resilience in infrastructure design and deployment, for example the deployment across multiple Availability Zones (AZ) in Amazon Web Services (AWS), and DevOps practices including Site Reliability Engineering (SRE) that will provide for a managed and predictable recovery for a wide range of scenarios.

HA (High Availability) deployment choices may be made available as supported configurations for software solutions that are deployed to run in a customer’s own technology environment.   

Software Development

Software development practices are at the heart of value creation. The Tech Due Diligence process lifts the covers on this critical set of practices and determines the historical, current and anticipated future trajectory of performance.  

Development Methodology

The Tech DD team evaluates development methods, processes and tooling.

The continuum of practices can be understood at a high-level – with umbrella terms such as Agile vs Waterfall – and decomposed down to multiple different sequential and interlocking steps.

During diligence there are a number of techniques applied to understand how the flow of work occurs – through bug / task tracking systems such as Jira – and how the source code has been managed over time in the source control system, of which the most commonly used is Git.

Our approach is to delve deeply into the source code and work through the contributor history to determine how much effort has been invested and who are the past and current contributors. Some of this confirms claims regarding Intellectual Property.

Like many analytical approaches, examining the number of lines changed in each file per commit across the entire history of the code base provides the raw data, and this is used to tell the story.
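This style of analysis can be sketched with a short script. The log sample below is invented for illustration; real input would come from `git log --numstat --pretty=format:'commit %H %an'`, which also emits `-` counts for binary files that a production script must handle:

```python
from collections import Counter

# Illustrative stand-in for real `git log --numstat` output.
sample_log = """\
commit a1b2c3 alice
12\t3\tsrc/core/engine.py
40\t0\tsrc/core/api.py
commit d4e5f6 bob
5\t5\tsrc/core/engine.py
commit a7b8c9 alice
0\t120\tsrc/legacy/old_api.py
"""

lines_by_author = Counter()
author = None
for line in sample_log.splitlines():
    if line.startswith("commit "):
        author = line.split(maxsplit=2)[2]  # third token is the author name
    elif "\t" in line:
        added, deleted, _path = line.split("\t")
        lines_by_author[author] += int(added) + int(deleted)

print(lines_by_author.most_common())  # -> [('alice', 175), ('bob', 10)]
```

Aggregating the same data by file path rather than by author highlights hotspots of churn, which is often where maintenance risk concentrates.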

One of the other areas illuminated by the deep inspection method is the open source and third party code dependencies.  Dependency networks many layers deep can be unpicked to reveal packages in repositories that may be problematic because they are no longer current or are riddled with security problems.
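A minimal sketch of that unpicking, assuming a pre-extracted dependency tree and a problem list (both hypothetical here; real inputs would be assembled from package registry metadata and CVE feeds):

```python
# Hypothetical inputs: a flattened dependency tree and a watch list of
# packages known to be abandoned or vulnerable.
problem_packages = {"left-pad", "old-crypto-lib"}

dep_tree = {
    "webapp": ["framework", "utils"],
    "framework": ["left-pad", "http-lib"],
    "utils": [],
    "left-pad": [],
    "http-lib": ["left-pad"],  # transitive: several paths reach the same package
}

def flagged_deps(root: str, tree: dict) -> set:
    """Walk the whole tree (any depth) and collect problematic packages."""
    seen, stack, flagged = set(), [root], set()
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        if pkg in problem_packages:
            flagged.add(pkg)
        stack.extend(tree.get(pkg, []))
    return flagged

print(flagged_deps("webapp", dep_tree))  # -> {'left-pad'}
```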

Coding Practices and Standards

The way code is developed should be documented and these guidelines should be followed by different engineers.  This allows for effective teamwork, peer review of code changes and for software maintenance duties to be distributed rather than simply being assigned back to the original author. 

A sample-based method is appropriate for confirming that coding practices and standards are followed, or determining if there are deviations and the extent of variability.

Testing Procedures and Code Quality

The way software is tested builds up from unit tests that are instrumented by code contributors, through to independent tests done by QA teams and automated into regression test suites. 

There are methods of quantifying how comprehensive the test coverage is. The Tech DD team should see high test coverage in the most important areas of functionality.

Code quality can be measured by various methods.  Tracking bugs and triaging them should result in the most important bugs getting fixed first.  Summary stats of outstanding (unfixed) bugs can provide a simple snapshot.  More detailed analysis will reveal the typical amount of time a defect is open and the rate at which quality issues occur.
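Those summary stats can be computed directly from a bug-tracker export; the records below are illustrative:

```python
from datetime import date

# Illustrative stand-in for a Jira-style export.
bugs = [
    {"id": 1, "opened": date(2024, 1, 2), "closed": date(2024, 1, 9)},
    {"id": 2, "opened": date(2024, 1, 5), "closed": None},  # still open
    {"id": 3, "opened": date(2024, 2, 1), "closed": date(2024, 2, 4)},
]

# Snapshot: how many defects remain unfixed.
open_count = sum(1 for b in bugs if b["closed"] is None)

# Trend: typical time a fixed defect stayed open.
fix_days = [(b["closed"] - b["opened"]).days for b in bugs if b["closed"]]
mean_days_to_fix = sum(fix_days) / len(fix_days)

print(open_count, mean_days_to_fix)  # -> 1 5.0
```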

Version Control and Deployment Processes

Once software has passed the testing stage, there are a number of other steps that need to work reliably for the changes to get into Prod.

Modern practices emphasise small batches and frequent releases of change through to Prod, via Continuous Delivery.

Additional levels of visibility relating to the Software Bill of Materials (SBOM) and build attestations are being developed so that downstream consumers of software can become more confident with the software they are receiving and relying on. 

The Tech DD team should look at a set of past releases to confirm how major, minor, point and patch releases have been done in the past.  

Data Management

The data management topic covers how customer data is handled as well as the provider’s internal and operational datasets.   

Data Security and Compliance

Data security and compliance relate to how well the data is protected.  Protection aims to ensure availability to those with a legitimate need to access and a default deny to all others. 

The custodians of the most sensitive data – such as Personal Health Information or Payment Card data – are required to comply with a large collection of measures.  These are routinely reviewed and attested, for example by PCI DSS (Payment Card Industry Data Security Standard) independent assessors.  Often the Tech Due Diligence team can access and leverage existing deliverables that may have been created for these purposes.

Data Integrity and Backup Procedures

The Tech DD team needs to look not just at the primary databases and how they are managed, but at replication, backup and archiving strategies. 

The risk of data loss is mitigated by having multiple controlled copies.  Reliable restoration from backup should be a standard tested control.  

Data Ownership and Access Controls

In a SaaS business model there are multi-tenant considerations.  Controls need to be in place to allow the provider to economically support multiple customers, and also maintain clear separation between tenancies.

The separation of data access by tenant needs to extend to the APIs and any direct data access techniques that may be provided.   
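The invariant can be illustrated with a toy sketch: every query path requires the caller's tenancy and default-denies everything else (names and data here are hypothetical, not a specific product's API):

```python
# Hypothetical multi-tenant store with shared infrastructure.
records = [
    {"tenant": "acme", "doc": "invoice-1"},
    {"tenant": "acme", "doc": "invoice-2"},
    {"tenant": "globex", "doc": "invoice-9"},
]

def fetch_docs(tenant_id: str, caller_tenant: str) -> list:
    # Default deny: a caller may only query its own tenancy.
    if tenant_id != caller_tenant:
        raise PermissionError("cross-tenant access denied")
    return [r["doc"] for r in records if r["tenant"] == tenant_id]

print(fetch_docs("acme", caller_tenant="acme"))  # -> ['invoice-1', 'invoice-2']
```

The diligence question is whether this scoping is enforced at one choke point (a query layer or row-level security) or re-implemented ad hoc across many code paths, where one miss leaks another tenant's data.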

Data Migration Strategy

A special topic arises when the target’s data needs to be migrated into a different platform.  Some of the key concerns relate to data cleansing and the logical consistency of the data, which dictate the extent of ‘mapping’ and translation.

Tech Due Diligence teams have historically taken an interest in structured database schema, however there are also unstructured / document-based database technologies to consider.  

Conducting Technical Due Diligence

Planning and Preparation

The planning and preparation stage ensures that the scope and objectives are defined and the team is assembled and ready to go.

Defining Scope and Objectives

The scope and objectives are defined partly in relation to the deal thesis and partly in terms of what work will be undertaken and how it will be done.

The deal thesis will provide the context for the Tech Due Diligence team and help in the identification of what aspects of the technology should be given the most focused effort.

The assumptions being made by the acquirer are often based on the Management Presentation or Confidential Information Memorandum (abbreviated to CIM) that is prepared by the sell-side.

The formality of an external consulting engagement, with a Statement of Work (SOW) defining the Tech Due Diligence, will clarify the scope.  Our recommended approach is to define each review category and, within each review category, define the review dimensions and the assessment / evaluation to be performed.

For example, within the category of Performance and Scalability, the review dimensions may be:

  • Operational performance
  • Scalability
  • Availability, stability and reliability
  • Non-functional defects

And within this category of Performance and Scalability the associated evaluation for the Tech Due Diligence team may be:

  • Ability to grow / limits to scalability
  • Additional performance engineering and QA resources required
  • Remediation required

The approach to the work will depend on the maturity of the target. To continue our example in the Performance and Scalability category, for a fast growing company the Tech Due Diligence team may be asked to look at scaling factors of 10x and 100x and rely on details provided via production observability platforms such as DataDog. For a company that is very early in its lifecycle the main body of evidence for Tech Due Diligence will be provided from the target’s test lab.
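A back-of-envelope version of the 10x / 100x question can be modelled as follows, under the strong (and itself testable) assumption that load scales linearly with usage; the node counts and safe-utilization ceiling are illustrative:

```python
import math

def capacity_needed(current_nodes: int, peak_utilization: float,
                    growth_factor: float, max_safe_utilization: float = 0.70) -> int:
    """Nodes required so projected peak load stays under a safe ceiling,
    assuming load scales linearly with usage growth."""
    projected_load = current_nodes * peak_utilization * growth_factor
    return math.ceil(projected_load / max_safe_utilization)

# 8 nodes currently peaking at 45% utilization:
print(capacity_needed(8, 0.45, 10))   # -> 52 nodes at 10x growth
print(capacity_needed(8, 0.45, 100))  # -> 515 nodes at 100x growth
```

The useful output in diligence is usually not the number itself but the conversation it forces: whether the architecture can actually add nodes linearly, and at what point a redesign, rather than more hardware, becomes the real cost.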

Assembling the Due Diligence Team

Many companies choose to combine internal and external team members into a cohesive Tech Due Diligence workstream.  Often the combined team and overall workstream activity is managed by the external advisor.
All team members who are brought ‘over the wall’ need to have confidentiality agreements in place.

Internal Team Members

For strategic acquirers, the internal team members are selected from the sponsoring product team. A product role, a software engineering role and a technology management role are recommended.

Bearing in mind the possibility the deal may not proceed, there can be issues associated with ‘tainting’ these team members by exposing too much of the implementation detail in the target’s proprietary technology.

In the case of a PE firm acquiring a new PortCo it is normally expected for a close proximity PortCo to assign one or two team members to the Tech Due Diligence activity.

External Expertise

The external consultants who are engaged should be experts in conducting Tech Due Diligence and should also have previous relevant track record in a similar field of technology.

As mentioned above, it is common to assign the workstream leadership role to the external consultants and they need to be adept at working in collaborative mode with the internal team.

For software review as part of Tech Due Diligence it is expected the external consultants will bring and use automated tools to accelerate the inspection process.

Data Gathering and Analysis

The data gathering and analysis phase involves review of documents provided via the deal Virtual Data Room, interviews with Key Tech Personnel and source code inspection.

It used to be common for Data Center tours to be arranged so that infrastructure could be inspected.  This may still be relevant, although many software teams choose cloud-hosting so this activity isn’t required.

Document Review

As part of the Tech Due Diligence there will be a Diligence Request List (DRL) created and shared with the target.  The target will find and upload relevant documents to the Virtual Data Room (VDR).

Many consultants who specialize in Tech Due Diligence have created a standardized Diligence Request List although this is normally tailored for each project.  Our standard list is available to download here.

While a standardized list is useful, it is also worth noting that tech teams, particularly in early stage companies, may not be great at documentation. A ‘show and tell’ approach works well for these targets, in which the tech teams describe what they have instead of documentation, through live walkthroughs of code and / or describing key architectural concepts.

Interviews with Key Technical Personnel

The target should bring ‘over the wall’ and make available a set of key technical personnel. This can include: engineering managers, architects, software engineers, product managers, cybersecurity experts, Quality Assurance leads and TechOps.

The interview schedule should provide the key areas of discussion in advance as the agenda and allow for ‘show and tell’ where it will be more effective than providing documents.

During the interview it is important to leverage documents although also look beyond polished slide decks for relevant evidence of claims.  As noted above, great evidence for the Performance and Scalability category can be provided from the observability platform.

Many teams are comfortable with video conference sessions and are OK with these being recorded, however it is important to check in advance with the participants.

Source Code Review

Source code review is typically done by the external consultants using proprietary tools. It isn’t normal for the buy-side internal team to be exposed to source code, so the external consultants inspect the source code in detail and summarize findings into a report. The report is shared with the buy-side internal deal team.

In some categories of review there is an opportunity for “window-dressing”, to make a target more appealing than it really is.  It is common to find PowerPoint slides representing a positive side of things although these can be aspirational rather than reflecting reality. Discerning what is actually present vs what will be present can sometimes be a challenging task.

Because version control systems leave a trail of evidence behind, an experienced Tech Due Diligence team can look out for any software window-dressing.

One important aspect of source code review is to independently look at the state of open source dependencies and the associated software licenses, also looking at whether the dependencies are up-to-date and free of security CVEs (Common Vulnerabilities and Exposures). Open source licensing of components is a key component of the value of Intellectual Property. Picking the wrong open source components can result in having to open source proprietary code and / or pay substantial licensing fees. Articles describing the importance of open source understanding can be found here and here.
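A first-pass license triage can be sketched as below. The risk classes are a deliberate simplification for illustration: actual copyleft exposure depends on linking and distribution context and needs legal review, and the dependency names are invented:

```python
# Simplified license classes (SPDX identifiers); real analysis needs
# a complete list and legal interpretation of the usage context.
COPYLEFT = {"GPL-3.0", "AGPL-3.0"}
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-3-Clause"}

# Hypothetical dependency -> declared license mapping.
deps = {"fastjson": "Apache-2.0", "readline-ui": "GPL-3.0", "tinyhttp": "MIT"}

def license_risks(deps: dict) -> dict:
    """Flag dependencies whose licenses need review."""
    risks = {}
    for pkg, lic in deps.items():
        if lic in COPYLEFT:
            risks[pkg] = "review: copyleft obligations may attach"
        elif lic not in PERMISSIVE:
            risks[pkg] = "review: unknown or unusual license"
    return risks

print(license_risks(deps))  # flags only 'readline-ui'
```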

For software that is shipped to customers and supported it is also important to look at the versions that are out in the field and what level of support is offered for older versions that customers may still have deployed in their environments.

On-site Visits and Infrastructure Audits

For critical infrastructure that relies on a Data Center it may be appropriate for the Tech Due Diligence team to arrange an independent inspection.  On-site visits introduce a number of logistical difficulties.

Our preferred approach is to build up an understanding of the observability techniques being used and follow a ‘day in the life’ of compute workload analysis, including utilization stats of virtual compute nodes.  This is supplemented with Virtual Data Room inventory that maps virtual to physical resources, and / or cloud service provider detailed usage records and bills.

The ‘show and tell’ approach results in an evidence set that is compelling when it involves live monitoring screens, log inspection and Tech DD team directed queries.

As noted around Data Security and Compliance, there can be existing independent review reports to leverage.    

Reporting and Recommendations

Summarizing Findings and Identifying Risks

During Tech Due Diligence the key findings by section are documented.  The detailed section describes what has been found and how it has been verified.  The summary section interprets this into simple and summarized statements, with associated color-coding.

Review findings in each category are represented at a detailed level and rolled-up to summary statements.

Our approach uses Green to represent a finding that is ‘as expected’ and in line with what the deal thesis requires.  Yellow represents an area that needs investment or remediation.  Red represents a potential showstopper: a material negative finding that isn’t in line with the deal thesis and can’t be readily addressed.
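As a sketch of the roll-up mechanics, the category summary is a worst-of rule over its detailed findings:

```python
# Roll detailed RAG findings up to a category summary colour:
# a single red anywhere makes the whole category red.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def rollup(findings: list) -> str:
    return max(findings, key=SEVERITY.__getitem__)

print(rollup(["green", "yellow", "green"]))  # -> yellow
print(rollup(["green", "red", "yellow"]))    # -> red
```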

Recommendations for Improvement or Mitigation

In most cases the negative findings can map across to recommended improvements that can be costed at a high-level.  This provides sufficient information for the deal team to understand the materiality and the cost, effort and time required to remedy.

Outlining Potential Integration Challenges

The Tech DD precedes detailed integration planning and it is useful for the Tech DD findings to help shape the approach to integration.


Importance of a Proactive Approach in Technical Due Diligence

The buyer of a tech company has a great opportunity during Tech Due Diligence to discover what they can and become sufficiently informed to make a good investment decision.

It isn’t just a rubber stamp and it isn’t just a checklist.  It is a great time to get under the hood and understand items that may need more work than anticipated post acquisition, or potentially identify showstoppers before Definitive Agreement.  

Maximizing Value of Your Investment

The discipline of Tech Due Diligence can ensure the M&A funds are being well spent, and the associated investment is understood and factored into the total cost of acquisition.



Bonus Section

Thanks for your interest in Tech Due Diligence! This section provides some extra tips and pointers.

Common Technical Due Diligence Red Flags

Some of the common Tech Due Diligence red flags include outdated technology, unclear documentation, data security issues and over-dependence on outside tech providers.

Outdated Technology

Outdated technology can be identified by finding the version number and comparing it to ‘end of life’ information available at resources like this site.

There are multiple problems related to running outdated technology.  It can be difficult to support and vulnerable to known security issues. Old versions can also hinder the adoption of modern techniques.
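The check itself reduces to comparing today's date against a lifecycle table. The sketch below uses a hand-written table for illustration; real data should come from vendor lifecycle pages or a public aggregator such as endoflife.date:

```python
from datetime import date

# Illustrative end-of-life table (verify against vendor lifecycle pages).
eol_dates = {
    "python3.7": date(2023, 6, 27),
    "ubuntu-18.04": date(2023, 5, 31),
}

def is_outdated(component: str, today: date) -> bool:
    """True when the component is past its recorded end-of-life date."""
    eol = eol_dates.get(component)
    return eol is not None and today > eol

print(is_outdated("python3.7", date(2024, 1, 1)))  # -> True
```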

Unclear Documentation

Gaps in documentation can result in over-reliance on key people.  Startups that describe themselves as ‘small but mighty’ or ‘scrappy’ are often prone to this.

There can be multiple issues if the larger acquirer expects teams to write great documentation and keep Knowledge Bases up to date, adhere to standards and generally conform. This can be a cultural clash issue for the acquired team and the talent may flee.

Data Security Issues

The need to adequately protect data will drive the imposition of a cybersecurity program.  This can also challenge the freedom and autonomy of startup tech teams.

Over-dependence on outside tech providers

Although it is common to rely on contract partners for software development, in some cases this goes too far.  This can become evident when the target is unable to provide relevant experts for the Tech Due Diligence team to interact with.

The contracted provider may be in control of the agenda and may be difficult to manage post-acquisition.

Technical Due Diligence for Different Software Types

In the case of proprietary development we recommend a full software review, including quantitative and qualitative assessment of the source code and the open source dependencies. 

In the case of third party ‘off the shelf’ software being used, it is possible to short-cut many of the detailed steps outlined in this article.  The key points are to:

  • Understand the usage and ensure appropriate licenses are in place
  • Check that supported versions are deployed
  • Understand levels of customization 
  • Consider how tightly integrated the solution is within the target’s IT landscape

The Tech Due Diligence team should consider what is core IP and what is commodity.  In categories that are non-differentiating we expect the target to have chosen from the market something that is suitable and dependable.  

Technical Due Diligence for Cloud-Based Systems

When doing Tech Due Diligence of cloud-based systems it is important to consider whether the target is a provider or a tenant.

Many startups decide early on to be tenants on a particular cloud, such as AWS or Microsoft Azure.  These platforms offer an array of services that can accelerate early development and getting the product into market.  In those cases the Tech Due Diligence team should consider whether the workloads can be easily moved to another target cloud environment, and / or model out the Cost of Goods Sold (COGS) of remaining with the current provider.

If the target is providing Infrastructure-as-a-Service or Platform-as-a-Service then the Tech Due Diligence team will need to go down into the tech stack to understand physical deployment (including Data Centers, compute, storage and networking infrastructure) and up into the multi-tenanted software layer.  These are some of the more complex Tech Due Diligence efforts. 

Technical Due Diligence for Artificial Intelligence

Over the years the consideration of Data Science as an add-on to Tech Due Diligence has evolved into a full view of Artificial Intelligence (AI) and Machine Learning (ML) technologies within a lifecycle, framework and ecosystem. 

The lifecycle view looks at how the AI and ML models are created (model development and model training stage) and to what extent the target is dependent on model providers (upstream models sourced from sites such as HuggingFace) and fine-tuned for the target application.  We also look at how the model is deployed and used for inference (typically called via API), and how it is updated over time.

The framework and ecosystem view looks at what the AI and ML models are based on: popular open source choices such as TensorFlow and PyTorch support DIY (Do It Yourself) ML teams, while others choose to connect to and harness general purpose AI, such as OpenAI.

This is a rapidly developing area and the Tech Due Diligence team should be flexible and adaptive.

Technology Supplier Verification Performed Prior to Contract

This section addresses the situation of a customer wanting to perform a detailed verification of their technology supplier prior to contract.  We know this can also be called Tech Due Diligence. We’ve got you covered!

The approach is different from what is described above because there is no Letter of Intent to acquire the provider.  There is an intention to contract with the provider to supply technology assets or services.

Your main point of contact will be the sales team in the technology provider and they will be limited in terms of the information they are allowed to provide to prospective customers. What you will typically get:

  • Other reference customers to talk to
  • A prepared detailed InfoSec / CyberSec dossier
  • Access to a demonstration version of the technology
  • Trial accounts that also provide access to support resources
  • Product roadmap and historical version release notes
  • Draft contract and license agreement
  • Corporate profile information

Some of the steps that we recommend:

Customer interviews

  • Preparing a script to use for interviews with reference customers
  • Selecting reference customers that are most like you and have more than 12 months experience using the technology
  • Scheduling the interviews – we find 45 minutes is normally effective
  • Writing down the responses and probing further for any areas of weakness

Cybersecurity review

  • Requesting and reading through the provided dossier relating to security measures
  • Exploring independent penetration testing findings that can be included as standard practice, potentially performed on an annual basis

Trial use with targeted persona

  • Work out a few of the targeted personas and recruit individuals to perform simple tasks
  • Collate feedback and determine a) fitness for use and suitability, and b) training and change management that may be required

Support enquiries

  • Raise support tickets and see how they are handled
  • Include requests of a functional and technical nature and exercise different channels
  • Note the responsiveness and effectiveness of the support provided

Contract review

  • Review the standard contract offered by the provider against other technology provider contracts that your company has entered into

Product roadmap and historical releases

  • Look back at a year of the most recent releases and examine the release notes
  • Determine the rate of feature delivery and bug-fixing in that period
  • Look ahead at the planned product roadmap and consider whether the staging of promised features is in line with what has been historically delivered
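Under the assumption that release notes tag entries consistently (which rarely holds exactly in practice), the delivery rate can be tallied mechanically; the releases and notes below are invented for illustration:

```python
# Illustrative release-note history for one year of releases.
releases = {
    "2.3.0": ["feature: SSO login", "fix: timeout on export"],
    "2.3.1": ["fix: timeout regression"],
    "2.4.0": ["feature: audit log", "feature: API keys", "fix: CSV escaping"],
}

features = sum(1 for notes in releases.values()
               for n in notes if n.startswith("feature:"))
fixes = sum(1 for notes in releases.values()
            for n in notes if n.startswith("fix:"))

print(f"{len(releases)} releases, {features} features, {fixes} fixes")
```

A high fix-to-feature ratio over a sustained period can indicate the product is in maintenance mode, which is worth reconciling against the forward roadmap claims.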

Corporate profile and financials

  • Providers may be prepared to show summary financial statements and these can verify claims of revenue, profitability and financial strength



Find out what makes acquirers cheer and targets cry.

Download our full technology due diligence request list to find out what we ask a target company to provide during a technical due diligence M&A project.
