The Quantum Prince in the Kingdom of Finance

Classical computers are the ‘Legends’ invented long ago that serve as the computational workhorses for all our problems. As time progressed, the problems grew, and the Legends began to need an upgrade. Scientists and programmers classified computational challenges into classes such as P and NP-hard; the hardest of these bear some semblance to old myths, partly because they seem impossible and partly because they are beyond the capacity of the Legends.

Continuing the story of legends and myths, let us talk about the Quantum Prince, who is relatively young and is displaying potential that is off the charts. Thanks to the recent technology boom, Quantum Computing is now exhibiting the ability to work out some of the most intriguing and complex problems. The Monte Carlo method [1] was devised in the 1940s and is still relevant today. It is probably the most widely used method in finance and is deployed wherever risk computations, predictions, and probabilities feed into intelligent decisions. One of its applications is portfolio optimization.

The Monte Carlo simulation is a method of plotting different scenarios of risk and return versus time on a graph. Figure 1 shows a Monte Carlo simulation for an asset price: one million distinct alternative paths are run to compute probabilities that forecast the price and strength of the asset over a time interval (in days). The coloured lines represent different portfolio scenarios (stock prices, given the risk and return), and the zero value denotes the initial price at which the stock is purchased.
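
To make this concrete, here is a minimal sketch of such a simulation in Python, assuming the asset follows geometric Brownian motion; the starting price, drift, volatility, and path count are illustrative stand-ins, not figures from the article:

```python
import numpy as np

# Hypothetical parameters: a $100 stock with 8% annual drift and 20% volatility.
# Far fewer paths than the one million in Figure 1, to keep the sketch fast.
S0, mu, sigma = 100.0, 0.08, 0.20
n_paths, n_days = 10_000, 1_000
dt = 1 / 252  # one trading day as a fraction of a year

rng = np.random.default_rng(seed=42)
# Daily log-return shocks under geometric Brownian motion.
shocks = rng.normal((mu - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt),
                    size=(n_paths, n_days))
paths = S0 * np.exp(np.cumsum(shocks, axis=1))  # each row is one coloured line

# Percentiles of the final prices give the forecast band for the asset.
p5, p50, p95 = np.percentile(paths[:, -1], [5, 50, 95])
print(f"Day {n_days}: 5th={p5:.2f}, median={p50:.2f}, 95th={p95:.2f}")
```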

So, suppose we select a scenario of the return we want (indicated with the black line in Figure 2) versus the risk we are willing to take, and hold the position for a period. What will the possible outcomes be under the Monte Carlo method? Figure 2 shows a selected simulation result that fits the investor's needs, whereas the actual outcome shown in Figure 3 is drastically different. When the same stock is examined in real time (directly in the stock market), the trajectory followed is indicated with the brown line. The simulation projected a rise of about $2,000, while the actual stock price rose by about $2,500 within 1,000 days. This scenario worked out in the investor's favour; after all, who does not love a profit that exceeds expectations?


The key points to be considered here are:

  • The horizon considered is 1,000 days (roughly 33 months).
  • The expected loss (the lowest value to which the stock descends) was nearly zero, making it a safe haven.
  • Loss is also a scenario we can observe, but stocks whose simulations lean more towards the positive than the negative should be our targets for investment. Is it not amazing that we can run an algorithm and find a solution to our financial problems? This is probably why Monte Carlo is one of the legendary methods of this kingdom.

On the contrary, there is an issue concerning the speed of this legendary method: it does not scale well with increasing amounts of data and numbers of assets. Consider the stock market, where companies have been listed for 20-22 years since the digital era began, or even earlier. The fluctuations of the stock market are known throughout the kingdom, so forecasting and analysis must be performed on the data generated every day for 20 years, accounting for trends, strengths, and so on. Monte Carlo is an efficient algorithm, but as stocks and data grow it reaches its hardware limits, and computing large amounts of data can take days or even weeks [2].

Let us consider a scenario for a portfolio manager. He wants to select ‘N’ stocks from a universe of ‘S’ stocks, and then the ‘X’ stocks out of those ‘N’ that give him the maximum return. He therefore buys or holds the ‘X’ stocks while selling the remaining ‘(N-X)’ stocks, given that he is willing to take an ‘R’ per cent risk on his investments. This kind of problem is solved by deploying the Monte Carlo method. The graphical representation is given in Figure 4.
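
A minimal brute-force sketch of this search, with made-up expected returns and a toy diagonal covariance matrix standing in for two decades of market data (the values of N, X, and R below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: N candidate stocks, pick X, tolerate at most R risk.
N, X, R = 20, 5, 0.25                      # R = 25% annualised volatility cap
mu = rng.uniform(0.02, 0.15, N)            # stand-in expected annual returns
cov = np.diag(rng.uniform(0.01, 0.09, N))  # stand-in (diagonal) covariance

best_ret, best_pick = -np.inf, None
for _ in range(20_000):                    # Monte Carlo over random portfolios
    pick = rng.choice(N, size=X, replace=False)
    w = rng.dirichlet(np.ones(X))          # random long-only weights
    ret = w @ mu[pick]                     # expected portfolio return
    risk = np.sqrt(w @ cov[np.ix_(pick, pick)] @ w)
    if risk <= R and ret > best_ret:       # keep the best pick within budget
        best_ret, best_pick = ret, pick

print(f"Hold stocks {sorted(best_pick)} for expected return {best_ret:.1%}")
```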


The computational and practical challenges faced by this manager are as follows:

1. The number of combinations explodes as the number of stocks under ‘X’ increases, and the universe ‘S’ is already a large pool: the Indian stock market alone has thousands of listed stocks. The selection of ‘N’ stocks therefore becomes very hard, which in turn makes the optimal picking of ‘X’ out of ‘N’ even harder (see the sketch after this list).

2. Twenty years of data for every stock can run to thousands of gigabytes of storage. Storage alone can be addressed with expensive hardware or heavy cloud capacity, but the current Legends (algorithms) cannot escape the space-time complexity trade-off.

3. Most crucially, computation time grows exponentially with every increment in the number of stocks, and that is where the whole purpose is lost.
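
To see how quickly the combinatorics get out of hand, consider a hypothetical universe of 5,000 listed stocks, a shortlist of 50, and a final pick of 10 (all three numbers are illustrative):

```python
from math import comb

# Hypothetical sizes: the counts explode long before any return is computed.
S, N, X = 5_000, 50, 10
print(f"Shortlists of {N} from {S}: {comb(S, N):.3e}")  # on the order of 1e+120
print(f"Picks of {X} from {N}:     {comb(N, X):,}")     # 10,272,278,170
```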

Let us imagine a scenario where our motive is to forecast returns for a portfolio over the next 14 days. If the computation takes two days and the optimized values only arrive on the third day, our prediction is already trailing by two days; that is, two days of real data are already lost and must be accounted for on the third day. With the stock market changing every few seconds, two days' worth of real-time data is hugely significant.

There is a saying that “a King is only as intelligent as his most trusted Kingsman, for it is he who influences the King's decisions.” In the kingdom of finance, a young prodigy, the Quantum Computer, is already making ground-breaking advances in overcoming the difficulties of the Monte Carlo method mentioned above.

In recent research by IBM, the Quantum Amplitude Estimation (QAE) algorithm [2] has been shown to provide a quadratic speed-up [3] over the traditional Monte Carlo method. In short, if a specific task takes time T with Monte Carlo, QAE needs only about the square root of that time, √T, and is hence faster.
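
Put in terms of work rather than wall-clock time: classical Monte Carlo needs on the order of 1/ε² samples to reach a precision ε, while QAE needs on the order of 1/ε oracle queries [3]. A toy calculation makes the gap visible:

```python
from math import ceil

# Illustrative scaling comparison, not a benchmark: Monte Carlo error shrinks
# as 1/sqrt(samples), so hitting error eps costs ~1/eps^2 samples, while
# amplitude estimation costs ~1/eps queries -- the quadratic speed-up.
for eps in (1e-2, 1e-3, 1e-4):
    mc_samples = ceil(1 / eps**2)
    qae_queries = ceil(1 / eps)
    print(f"error {eps:g}: Monte Carlo ~{mc_samples:,} samples, "
          f"QAE ~{qae_queries:,} queries")
```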

This result becomes significant when solving something that takes two or more days to compute, while keeping the accuracy of the results comparable to, or even better than, the legendary Monte Carlo. This is the prowess of the young prince. It is about time for the young prince to ascend the throne and for the kingdom to praise its crowned prince.

The most significant difference that sets a king apart from a prince is experience. At present, the prince is dealing with his own set of weaknesses. The limitations [4] of quantum computers are listed below:

  1. Limited number of qubits: At present, IBM's quantum hardware offers only around 60 qubits. That is very low in the context of terabytes of data and millions of parameters.
  2. NISQ phase: Since current quantum computers are relatively noisy, encoding each logical qubit redundantly across several physical qubits is a typical error-correction [4] method, which means more qubits are consumed for the same task. For example, a problem with 70 variables would use 70 qubits on a noise-free quantum computer, but may require 210 to 2,100 qubits depending on the error-correction scheme deployed against the noise, as the sketch below illustrates.
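
These overhead numbers can be reproduced with a back-of-the-envelope calculation; the factors 3 and 30 below are illustrative multipliers chosen to match the 210-2,100 range above, not properties of any specific error-correcting code:

```python
# Physical-qubit counts for 70 logical qubits under assumed redundancy factors.
logical_qubits = 70
for overhead in (1, 3, 30):  # 1 = noise-free, 3 and 30 = illustrative codes
    print(f"overhead x{overhead}: {logical_qubits * overhead} physical qubits")
```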

Due to limited access points and many users running computations, the waiting time in the queue can also be a factor (in the range of 5-30 minutes; low, but worth noting). The computation must be carried out through cloud-based interfaces, since the hardware's operating conditions are complex, and internet latency further delays the delivery of results.

These limitations are real, but in the next two to three years we will witness immense advancement in this field. Major tech giants are pioneering quantum algorithms and packing ever more qubits into smaller and smaller chipsets. Top finance tycoons are investing and providing data access to build algorithms that can save time and, in many cases, find an optimal solution. Such actions could broaden the horizons for our quantum prince. We are all witnessing a technological leap in front of our eyes. All hail the young prince!


Co-authored by Aditya Bothra, Senior System Engineer, Infosys QCoE

References

1. A. I. Adekitan, “Monte Carlo Simulation,” https://www.researchgate.net/publication/326803384_MONTE_CARLO_SIMULATION

2. Pooja Rao, Kwangmin Yu, Hyunkyung Lim, Dasol Jin, Deokkyu Choi, “Quantum Amplitude Estimation Algorithms on IBM Quantum Devices,” https://arxiv.org/abs/2008.02102

3. Ashley Montanaro, “Quantum Speedup of Monte Carlo Methods,” https://arxiv.org/abs/1504.06987

4. X. Fu, L. Riesebos, L. Lao, C. G. Almudever, F. Sebastiano, R. Versluis, E. Charbon, K. Bertels, “The Engineering Challenges in Quantum Computing,” https://www.researchgate.net/publication/316948252_The_engineering_challenges_in_quantum_computing


Author Details

Vijayaraghavan Varadharajan

Dr. Vijayaraghavan Varadharajan is a Principal Research Scientist researching Quantum Computing, XAI, FoW, security analytics, cloud security, security assessment, authentication, and privacy protection. He also focuses on emerging-technology and business-opportunity identification, incubation, and venture assessment for investments. He works with universities to explore how academic research can help solve real-time industrial problems. Vijay has 6 granted US patents and has filed many US and Indian patents in key technology areas. He has published 50+ research papers in international journals and conferences and has mentored many international students from reputed universities across the world.
