How Large Language Models Are Reshaping Asset Management


In this article, we explore the use of large language models (LLMs) in asset management. Beyond outlining their areas of application, we examine the limitations inherent in these models and what those limitations mean for practical implementation.

Large Language Models
At its core, a Large Language Model or LLM is a machine learning model trained on a massive corpus of text. It learns the patterns, structures, and semantics of language, making it capable of tasks like text generation, translation, summarization, and much more. These models are exceptionally versatile, making them a valuable asset for various business applications (Kumar, 2023).
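
To make this concrete, the snippet below is a minimal sketch of using a pre-trained model for one such task, summarization, via the open-source Hugging Face transformers library. The model name is illustrative; any comparable summarization model could be substituted.

```python
# Minimal sketch: summarising a passage with a pre-trained model.
# Assumes the `transformers` package is installed; the model name is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report_excerpt = (
    "The fund returned 4.2% over the quarter, driven primarily by overweight "
    "positions in technology and healthcare. Fixed-income holdings detracted "
    "slightly as yields rose, and the manager increased cash to 6% of assets."
)

summary = summarizer(report_excerpt, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```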


Application within Asset Management
The complex and multifaceted nature of asset management has amplified the demand for advanced AI solutions. With the fast-paced advancements in big data and AI technology, the use of Large Language Models (LLMs) in asset management has been expanding (China Asset Management Co., Ltd., 2023):

  • Investment research: LLMs can assist asset management firms in quickly and accurately extracting key information from a vast array of market data, financial reports, and macroeconomic indicators. They can analyse and summarize this complex information, enabling faster data collation and reducing errors that can occur due to human intervention (a minimal sketch of this use case follows this list).
  • Risk management: LLMs can aid asset management companies in predicting and evaluating various types of risks via sophisticated data analysis and pattern recognition. For example, when it comes to assessing the market volatility of a particular asset class, LLMs can swiftly analyse historical trends and relevant news reports, providing both quantitative and qualitative support to the risk assessment process.
  • Customer service and consultation: LLMs have significantly improved the user interaction experience. They can comprehend the specific needs and situations of customers and provide targeted responses or recommendations, which greatly enhances customer satisfaction.
  • Regulatory compliance: LLMs can interpret complex regulatory documents, assisting asset management companies in ensuring that their business operations meet a variety of legal requirements. For instance, when new financial regulations are introduced, LLMs can quickly summarize the main changes and potential impacts, helping the company adapt swiftly to changes in the legal environment.
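
As a concrete illustration of the investment-research use case above, the sketch below sends a report excerpt to a hosted LLM and asks for the key figures and risks. It assumes the OpenAI Python client (openai v1+) and an API key in the environment; the model name and prompt wording are illustrative assumptions, not a prescribed setup.

```python
# Illustrative sketch: extracting key figures and risks from a report excerpt.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()

report_excerpt = """
Revenue grew 8% year on year to EUR 2.1bn, while operating margin contracted
from 14.2% to 12.9% on higher input costs. Net debt/EBITDA rose to 2.4x and
management guided to flat revenue for the next fiscal year.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are an investment research assistant. Extract the key "
                    "financial figures and the main risks as bullet points."},
        {"role": "user", "content": report_excerpt},
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```

The same pattern applies to the regulatory-compliance use case: swap the report excerpt for a new regulation and ask for the main changes and likely impacts.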


LLMs can analyse market data and make predictions about future price movements, which can be used to inform trading strategies. In addition to hedge funds, LLMs can also benefit other players in the asset management industry, such as asset managers and pension funds (Agarrwal, 2023).
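
The sketch below shows one simple way such an analysis could be wired up: news headlines are scored with an off-the-shelf financial sentiment model and the average score is mapped to a crude long/flat/short stance. The model name is illustrative and the signal logic is deliberately naive; this is a toy example, not a trading strategy.

```python
# Toy sketch: turning news-headline sentiment into a crude trading stance.
# Assumes `transformers` is installed; the model name is illustrative and the
# thresholds are arbitrary -- this is not a real strategy or investment advice.
from transformers import pipeline

classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Central bank signals further rate cuts as inflation cools",
    "Chipmaker cuts full-year guidance on weak demand",
    "Asset manager reports record inflows into bond funds",
]

# Map predicted labels to numeric scores and average them.
label_to_score = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}
scores = [label_to_score[r["label"].lower()] for r in classifier(headlines)]
avg = sum(scores) / len(scores)

stance = "long" if avg > 0.3 else "short" if avg < -0.3 else "flat"
print(f"average sentiment {avg:+.2f} -> stance: {stance}")
```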

LLMs also power new tools that allow users to assess the quality of climate-related disclosures in sustainability reports[1] (Human, 2023).

In all these cases, confidence in the outputs produced by LLMs is essential, especially in financial document processing, where an answer alone may not be sufficient unless the reasoning behind it is also provided.
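
One practical mitigation is to ask the model to ground its answer in the source text. The prompt pattern below is a minimal sketch of that idea; the wording and output format are assumptions for illustration and would need tailoring to the documents at hand.

```python
# Illustrative prompt pattern: require the model to return its answer together
# with the verbatim passage that supports it, so a reviewer can verify the output.
GROUNDED_ANSWER_PROMPT = """\
You are assisting with financial document review.
Answer the question using ONLY the document below.

Document:
{document}

Question: {question}

Respond in this format:
Answer: <your answer>
Evidence: <exact sentence(s) from the document that support the answer>
If the document does not contain the answer, reply exactly: "Insufficient evidence."
"""

print(GROUNDED_ANSWER_PROMPT.format(
    document="The prospectus states that the fund's ongoing charges are 0.45% per annum.",
    question="What are the fund's ongoing charges?",
))
```

Even with such grounding prompts, the output still has to be verified against the source, and that brings us to the shortcomings of LLMs.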

Shortcomings and limitations
LLMs share a shortcoming common to AI and Machine Learning applications: they are essentially black boxes. Not even the programmers know exactly how an LLM like ChatGPT configures itself to produce its text.

Model developers traditionally design their models before committing them to program code, but LLMs use data to configure themselves. The network architecture itself lacks a firm theoretical or engineering basis: programmers chose many network features simply because they work, without necessarily knowing why they work (Hahn, 2023).

LLMs have various limitations that need to be addressed in future models. These limitations mainly concern the following areas (Dilmegani, 2024):

  • Accuracy: the underlying machine learning approach can produce inaccuracies, and LLMs struggle to adapt to new information dynamically, which can lead to erroneous responses.
  • Bias: recent findings (AI Index Steering Committee, 2023) indicate that more advanced and sizable systems tend to assimilate social biases present in their training data, resulting in sexist, racist, or ableist tendencies.
  • Toxicity: refers to the issue where these models inadvertently generate harmful, offensive, or inappropriate content in their responses.
  • Capacity: every LLM has a specific memory capacity, which restricts the number of tokens it can process as input (roughly 1,500 words for ChatGPT and 25,000 words for GPT-4). A minimal token-counting sketch follows this list.
  • Pre-trained knowledge set: once training is complete, the model’s knowledge is frozen and it cannot access up-to-date information.
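
The capacity constraint can be checked programmatically before a document is sent to a model. The sketch below uses the tiktoken tokenizer to count tokens against an assumed context limit; the encoding name and the limit are illustrative and vary by model.

```python
# Minimal sketch: checking a document against an assumed context window.
# Assumes the `tiktoken` package is installed; the encoding name and the
# token limit are illustrative and differ from model to model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_LIMIT = 8_192  # illustrative limit, in tokens

document = "Quarterly report text ... " * 500  # stand-in for a long report

tokens = enc.encode(document)
print(f"{len(tokens)} tokens (assumed limit {CONTEXT_LIMIT})")

if len(tokens) > CONTEXT_LIMIT:
    # Naive fallback: truncate. In practice the document would be chunked
    # and summarised section by section instead.
    document = enc.decode(tokens[:CONTEXT_LIMIT])
```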


Evaluate and review
Evaluating LLMs involves addressing the subjective nature of language and the technical complexity of the models, alongside ensuring fairness and mitigating biases. As AI technology rapidly advances, evaluation methods must adapt to remain effective and ethical, demanding ongoing research and a balanced approach to meet these evolving challenges.

No single metric gives the full picture. Use a balanced mix of metrics and human judgment to truly understand an LLM’s strengths and weaknesses. This allows us to unlock their potential while ensuring responsible development (Ruiz, 2024).
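
As a simple illustration of combining metrics, the sketch below scores model answers against reference answers with exact match and token-overlap F1, two common question-answering metrics. The example data is made up, and in practice such scores would sit alongside, not replace, human review.

```python
# Toy sketch: scoring model answers against reference answers with two simple
# metrics (exact match and token-overlap F1). The example data is illustrative.
from collections import Counter

def exact_match(pred: str, ref: str) -> float:
    return float(pred.strip().lower() == ref.strip().lower())

def token_f1(pred: str, ref: str) -> float:
    pred_tokens, ref_tokens = pred.lower().split(), ref.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

examples = [
    ("The ongoing charges are 0.45% per annum", "0.45% per annum"),
    ("Net debt/EBITDA is 2.4x", "Net debt/EBITDA rose to 2.4x"),
]

for pred, ref in examples:
    print(f"EM={exact_match(pred, ref):.0f}  F1={token_f1(pred, ref):.2f}  | {pred!r}")
```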

Partner with RD&A
Embrace the transformative power of LLMs and embark on a journey of artificial intelligence with RD&A Consulting as your trusted guide. Together, we can shape a future where LLM-powered investing is not just a choice but a cornerstone of financial success.

Bibliography

Agarrwal, A. (2023, February 9). How to use large language models in asset management industry. Retrieved from The Economic Times: https://economictimes.indiatimes.com/markets/stocks/news/how-to-use-large-language-models-in-asset-management-industry/articleshow/97767393.cms?from=mdr

China Asset Management Co., Ltd. (2023, December 21). Shai: A large language model for asset management. Retrieved from Arxiv.org: https://arxiv.org/html/2312.14203v1

AI Index Steering Committee. (2023). The AI Index 2023 Annual Report. Stanford, CA: Institute for Human-Centered AI, Stanford University.

DigFin. (2023, October 17). Retrieved from Digital Finance: https://www.digfingroup.com/blackrock-llm/

Dilmegani, C. (2024, January 10). The Future of Large Language Models in 2024. Retrieved from Research AI Multiple: https://research.aimultiple.com/future-of-large-language-models/

Human, T. (2023, August 21). AI tool allows anyone to generate score for sustainability reports. Retrieved from IR Magazine: https://www.irmagazine.com/reporting/ai-tool-allows-anyone-generate-score-sustainability-reports

Kumar, M. (2023, October 27). Understanding Large Language Models and Fine-Tuning for Business Scenarios: A simple guide. Retrieved from Medium: https://medium.com/@careerInAI/understanding-large-language-models-and-fine-tuning-for-business-scenarios-a-simple-guide-42f44cb687f0

Ruiz, A. (2024, February 3). How to evaluate an LLM? Retrieved from nocode.ai: https://newsletter.nocode.ai/p/evaluate-llm?utm_source=newsletter.nocode.ai&utm_medium=newsletter&utm_campaign=how-to-evaluate-an-llm

Hahn, W. W. (2023, October 31). ChatGPT and Large Language Models: Their Risks and Limitations. Retrieved from Enterprising Investor: https://blogs.cfainstitute.org/investor/2023/10/31/chatgpt-and-large-language-models-their-risks-and-limitations/

Guo, Z., Jin, R., Liu, C., Huang, Y., Shi, D., Supryadi, Yu, L., . . . Xiong, D. (2023, November 25). Evaluating Large Language Models: A Comprehensive Survey. pp. 1-111.

Further reading:

AI tool allows anyone to generate score for sustainability reports: https://www.irmagazine.com/reporting/ai-tool-allows-anyone-generate-score-sustainability-reports

Additional questions?

Contact RD&A: Bernard van de Weerd

© 2024 RD&A Consulting


[1] The model is reviewing only corporate disclosure, not a company’s actual actions to tackle climate change.