Benchmarking Salesforce Performance

Introduction

Salesforce is a Platform-as-a-Service (PaaS) offering hosted on a multi-tenant cloud architecture, which means that at any given point the Salesforce infrastructure is shared by multiple customers. This sharing improves resource utilization and optimizes costs for Salesforce, but the flip side is its impact on performance. Because the infrastructure is shared, load-balancing and scheduling algorithms, as well as governor limits, are in place to ensure every customer gets a fair share of the resources.

In addition, layers of customization increase the processing time of transactions executed on the platform. These customizations can be 3rd party AppExchange products or custom builds delivered by the customer or by an ISV on the customer's behalf. Server-side code, queries and data manipulation all require access to server resources and are themselves subject to the load-balancing and scheduling algorithms.

Since most enterprise Salesforce applications also rely on integrations, the complexity of cross-system interactions is another parameter that needs to be considered, especially when integrating with on-premises systems behind firewalls or with legacy systems that do not natively support APIs.

All these factors, alongside the usual ones such as data volumes and parallel threads, determine how well a Salesforce application performs. With user attention spans shortening, system performance has become a key design consideration, making benchmarking and performance measurement an absolute need, along with identifying avenues for improvement based on those measurements.

Benchmarking End-to-end performance

When dealing with enterprise applications, a transaction inevitably flows across multiple systems before reaching its end state. From a performance perspective, the coarsest level of measurement is the total time taken by the transaction from initiation until it reaches its end state.

Consider the following example showcasing the L1 process flow for an order placed from an eCommerce portal:

Fig 1: End-to-end transaction flow with timing

Here, the total time taken for the transaction across systems is T0.

What is the significance of T0?

Measuring the total time taken by a transaction is the starting point for performance benchmarking. This is the parameter that is measured under different user loads and degrees of parallel execution to ascertain how the system performs in various load and volume scenarios.

How can T0 be measured?

Under regular load, end-to-end performance can usually be measured with monitoring software; examples include Dynatrace, New Relic and Splunk.

For measurements under simulated load and volume conditions, specialized load testing tools are needed; JMeter is the most popular of these, with LoadRunner also widely used.
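To give a feel for what such a measurement involves, the Python sketch below times a batch of concurrent order submissions and reports average and 95th percentile T0. It is purely illustrative: the endpoint URL, payload and concurrency figures are assumptions, and a real load test would normally be scripted in JMeter or LoadRunner rather than hand-rolled like this.

# Minimal sketch (illustrative only): measure end-to-end time T0 for a batch of
# simulated order submissions. The endpoint, payload and thread count are
# assumptions; a real load test would typically be built in JMeter or LoadRunner.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
import requests

ORDER_API = "https://ecommerce.example.com/api/orders"   # hypothetical endpoint
PAYLOAD = {"sku": "DEMO-001", "quantity": 1}              # hypothetical payload

def place_order(_):
    """Submit one order and return its end-to-end time (T0) in seconds."""
    start = time.perf_counter()
    response = requests.post(ORDER_API, json=PAYLOAD, timeout=60)
    response.raise_for_status()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=10) as pool:        # 10 parallel users
        timings = sorted(pool.map(place_order, range(100)))  # 100 transactions
    print(f"avg T0: {statistics.mean(timings):.2f}s  "
          f"p95 T0: {timings[int(len(timings) * 0.95)]:.2f}s")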

Benchmarking individual system times (T1-T9)

Once the total time for the transaction (T0) is measured, the next level of granularity is measuring the time taken by each system to process the transaction (T1, T3, T5, T7, T9) as well as the latency associated with each data exchange (T2, T4, T6, T8).

What is the significance of individual system times?

Knowing how much time each system takes to process a transaction helps identify performance bottlenecks within specific systems and allows a concerted, focused effort to remove them, rather than a general effort to improve the overall performance of every system.

With multiple iterations, the latency between systems can also be baselined to understand how long data exchange between them takes. Any bottleneck identified in data exchange can then be addressed through improvements in infrastructure, data exchange formats, integration patterns and so on, allowing faster data exchange for that specific integration.

How can individual system times be measured?

Under regular load, individual system performance can usually be measured with the same monitoring tools mentioned earlier, such as Dynatrace, New Relic and Splunk. If a middleware is used for data exchange, its logs can also be used to determine performance.

For measurements under simulated load and volume conditions, the load is again generated with specialized tools such as JMeter or LoadRunner, while the individual times themselves are read from monitoring tools or middleware logs.
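As a simple illustration of how the individual times can be derived once entry and exit timestamps are collected from each system (for instance from monitoring tools or middleware logs correlated by a transaction ID), the Python sketch below computes per-system processing times and hand-off latencies. The system names and timestamps shown are placeholders.

# Minimal sketch: derive per-system processing times and inter-system latencies
# from entry/exit timestamps collected for one transaction. The system names and
# timestamps are illustrative placeholders.
from datetime import datetime

HOPS = [  # (system, entry timestamp, exit timestamp), in flow order
    ("eCommerce portal", "2024-05-01T10:00:00.000", "2024-05-01T10:00:00.450"),
    ("Middleware",       "2024-05-01T10:00:00.520", "2024-05-01T10:00:00.780"),
    ("Salesforce",       "2024-05-01T10:00:00.910", "2024-05-01T10:00:02.300"),
]

def breakdown(hops):
    """Yield (label, milliseconds): processing time per system and the latency
    of each hand-off between consecutive systems."""
    parse = datetime.fromisoformat
    for i, (name, entry, exit_) in enumerate(hops):
        yield f"{name} processing", (parse(exit_) - parse(entry)).total_seconds() * 1000
        if i + 1 < len(hops):
            nxt_name, nxt_entry, _ = hops[i + 1]
            yield (f"{name} -> {nxt_name} latency",
                   (parse(nxt_entry) - parse(exit_)).total_seconds() * 1000)

for label, ms in breakdown(HOPS):
    print(f"{label}: {ms:.0f} ms")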

Benchmarking Salesforce platform performance

As the transaction flows through various systems, Salesforce may well be one of them. As part of the end-to-end measurements, the time taken by the Salesforce system (T5) would already have been baselined, but a more granular breakdown of it may be needed to identify and eliminate performance bottlenecks within the Salesforce application.

The diagram below shows the components of execution within Salesforce for the same use case:

Fig 2: Transaction flow within Salesforce with timing

The processing within Salesforce is broken down into multiple steps and the timings for each of these (T51, T53, T55, T57, T59) can be measured. However, the latency between steps (T52, T54, T56, T58) is slightly trickier to measure since the data exchange is happening within the Salesforce platform.

What is the significance of individual step timings?

Measuring the timing of individual steps within Salesforce helps establish a baseline for future changes and identifies performance bottlenecks, which then become candidates for technical debt reduction and performance optimization.

How can individual step timings be measured?

If the steps being executed are custom elements such as Apex classes or Flows, the processing time can be measured via debug logs. Event monitoring logs are another useful source for measuring the performance of Salesforce custom elements, though they come at additional cost.
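As one possible way of extracting such timings, the Python sketch below reads a downloaded debug log and estimates the duration of a named code unit. It assumes the usual debug log line layout, in which each line carries an elapsed-nanosecond counter in parentheses and code unit boundaries appear as CODE_UNIT_STARTED / CODE_UNIT_FINISHED events; treat it as a starting point rather than a complete parser.

# Minimal sketch: estimate how long a named code unit (e.g. an Apex class or
# trigger handler) runs, from a downloaded Salesforce debug log. Assumes lines
# of the form "HH:MM:SS.mmm (<elapsed nanoseconds>)|EVENT|details..." with
# CODE_UNIT_STARTED / CODE_UNIT_FINISHED marking unit boundaries.
import re
import sys

NANOS = re.compile(r"\((\d+)\)")

def code_unit_duration_ms(log_text, unit_name):
    started = finished = None
    for line in log_text.splitlines():
        if unit_name not in line:
            continue
        match = NANOS.search(line)
        if not match:
            continue
        elapsed_ns = int(match.group(1))
        if "CODE_UNIT_STARTED" in line and started is None:
            started = elapsed_ns
        elif "CODE_UNIT_FINISHED" in line:
            finished = elapsed_ns
    if started is None or finished is None:
        return None
    return (finished - started) / 1_000_000  # nanoseconds -> milliseconds

if __name__ == "__main__":
    log_file, unit = sys.argv[1], sys.argv[2]   # e.g. debug.log OrderService
    with open(log_file, encoding="utf-8") as f:
        duration = code_unit_duration_ms(f.read(), unit)
    print(f"{unit}: {duration:.1f} ms" if duration is not None else "unit not found")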

For UI performance, the browser console can be used to inspect page-load messages when debug mode is enabled. However, a detailed view of page-load performance and of the data exchange between the UI and the backend requires custom instrumentation injected into the Lightning components or Apex classes to log timestamps to the console, allowing targeted benchmarking.

Benchmarking Salesforce 3rd party performance

Salesforce applications often use 3rd party add-ons for custom functionality, such as the Vlocity or Steelbrick managed packages. In such cases, debug logs and event monitoring logs capture high-level profiling information but are not sufficient to capture the finer details of execution.

In Fig 2, assuming the order processing logic is handled by Vlocity, the timing measurement will involve the processing time taken by the managed package custom logic (T53) as well as the latency of data exchange between unmanaged components and managed components (T52, T54).

What is the significance of 3rd party step timings?

Measuring the timing for steps processed by managed components provides a baseline for the performance of the 3rd party application installed in Salesforce. If the performance is deemed unacceptable from a user experience or SLA perspective, the data collected for benchmarking can be presented to the support team of the respective application for performance improvement.

How can 3rd party step timings be measured?

Though debug logs contain only high-level information about managed code execution, the time taken for execution and for data exchange can still be ascertained from them. In addition, most 3rd party packages provide their own tools for performance benchmarking; Vlocity, for example, offers time tracking through its IDX tool, giving insight into the execution time of each element within an Omniscript, DataRaptor or Integration Procedure.
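As a rough illustration of reading managed package timings out of a debug log, the Python sketch below credits the interval between an ENTERING_MANAGED_PKG line and the next logged event to that package's namespace. This is an approximation, and it assumes the standard elapsed-nanosecond counter in each log line; package-specific tools such as Vlocity's IDX will give more precise element-level numbers.

# Minimal sketch: approximate how much debug-log time is attributable to each
# managed package namespace (e.g. a Vlocity namespace). Assumes the standard
# log layout with an elapsed-nanosecond counter in parentheses, and that managed
# code appears as ENTERING_MANAGED_PKG|<namespace> lines; the time between such
# a line and the next logged event is credited to that namespace.
import re
from collections import defaultdict

NANOS = re.compile(r"\((\d+)\)")

def managed_pkg_time_ms(log_text):
    totals = defaultdict(float)
    events = []
    for line in log_text.splitlines():
        match = NANOS.search(line)
        if match:
            events.append((int(match.group(1)), line))
    for (ns_start, line), (ns_next, _) in zip(events, events[1:]):
        if "ENTERING_MANAGED_PKG" in line:
            namespace = line.rsplit("|", 1)[-1] or "<unknown>"
            totals[namespace] += (ns_next - ns_start) / 1_000_000
    return dict(totals)

# Example usage (hypothetical file name):
# with open("debug.log", encoding="utf-8") as f:
#     for ns, ms in managed_pkg_time_ms(f.read()).items():
#         print(f"{ns}: {ms:.1f} ms")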

Conclusion

Though benchmarking the performance of enterprise applications is essential, there is no one-size-fits-all approach to it. The ways of measuring performance are as varied as enterprise architectures themselves, and the benchmarking strategy should be established when the enterprise architecture is devised. Performance benchmarking cannot and should not be an afterthought in a world where user attention spans for applications are measured in seconds.

Author Details

Pratyush Anand

Pratyush is a Salesforce Techno-Solution Architect with the Enterprise Cloud Application Services (Salesforce) Unit at Infosys. He has more than 13 years of experience in designing and developing Salesforce applications, with a keen eye for innovation, optimization and efficiency within the platform as well as in business processes. He has worked across multiple domains, from banking and insurance to manufacturing and telecommunications. He also has a knack for ideating, designing, developing and documenting reusable assets and bots that aid in project development and implementation. Pratyush has an abiding love for the literary arts, and is an author, blogger and poet when not designing disruptive tech solutions.
