
The opportunities and risks of AI in the investment industry

Lucy Hughes January 02, 2024

Lucy Hughes, Financial Lines Underwriter, Beazley

Artificial intelligence (AI) is revolutionising the way we live, work and communicate, and the investment industry is responding. Following the release of ChatGPT, built on GPT-3.5, which reached 100 million users within the space of two months, investor excitement has accelerated. The global market for the use of AI in asset management is expected to be worth US$13.4 billion by 2027,[1] leaving investment managers keen to take advantage of the potential upsides this technology offers. On the flip side, the rapid advancement of such world-changing technology will bring known and unknown risks that will need to be carefully considered and managed.

Whilst the techniques of AI are experiencing a PR boom, they derive from decades of research and development, with the concept of AI first appearing in 1950 in Alan Turing's paper "Computing Machinery and Intelligence".

Technology has continued to evolve dramatically since then, and so have the various definitions and applications of AI. The financial sector is predominantly focused on:

  • Machine learning (ML): the science of getting computers to act without being explicitly programmed. ML uses algorithms to identify patterns in data and develop predictive models (a short illustrative sketch follows this list).
  • Natural language processing (NLP): put simply, how computers can process language the way humans do. One of the best-known NLP models is GPT-3.
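
To make the ML definition concrete, below is a minimal, illustrative Python sketch, not a description of any specific product mentioned in this article, that fits a simple predictive model to synthetic "historical" data using the open-source scikit-learn library. The features, labels and numbers are invented purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic historical data: two invented signals per observation and a
    # binary label indicating whether the asset subsequently outperformed.
    rng = np.random.default_rng(seed=42)
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    # Hold out part of the history to check whether the learned pattern generalises.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression()
    model.fit(X_train, y_train)  # learn the pattern from historical observations
    print("Out-of-sample accuracy:", model.score(X_test, y_test))

The point of the sketch is the workflow: learning a pattern from past data and testing it on data the model has not seen, rather than the toy model itself.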

AI uses and benefits

Efficiency is doubtless one of the major advantages that comes with the use of AI, though other benefits include data sharing across the business (reducing reliance on third-party service providers) and efficiencies within the risk management framework. AI can also monitor market conditions, gather and analyse data on stocks, summarise financial reports and flag early signals of market movement within minutes. It also offers opportunities to improve investment portfolio analysis and to recommend investments based on risk appetite.

Other applications include textual analysis, for example identifying CEO and CFO sentiment from earnings call transcripts, an approach that has been used in successful investment strategies. Algorithmic trading is a lucrative use of AI technology and now accounts for more than 60% of overall U.S. equity trading.[2] Amongst younger generations, robo-advice is becoming increasingly popular, and the algorithms embedded within it are built using AI as a foundation.
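
As a toy illustration of the textual-analysis idea, the Python sketch below scores an earnings-call excerpt by counting positive and negative words. The word lists and the excerpt are invented for this example; real strategies rely on far richer NLP models and curated financial dictionaries.

    # Toy sentiment scorer for earnings-call text (illustrative word lists only).
    POSITIVE = {"growth", "strong", "improved", "record", "confident"}
    NEGATIVE = {"decline", "weak", "uncertain", "impairment", "headwinds"}

    def sentiment_score(text: str) -> float:
        words = [w.strip(".,;:!?").lower() for w in text.split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        # Score ranges from -1 (all negative) to +1 (all positive).
        return 0.0 if total == 0 else (pos - neg) / total

    excerpt = "We delivered record growth this quarter and remain confident, despite some headwinds."
    print(sentiment_score(excerpt))  # a positive score suggests an upbeat management tone

A real application would aggregate such scores across many transcripts and combine them with other signals before they informed any investment decision.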

The negative impact of AI

When considering the negative impact of AI use, there is inevitably concern for the workforce, as it has been predicted that 300 million jobs could be lost or diminished by the acceleration of this technology.[3] Luckily for the average human, though, AI comes with its challenges. Firstly, AI relies on historical data: algorithmic trading, for example, uses historical data to predict market movements. However, relying solely on historical patterns does not allow for unforeseen events, such as the Covid-19 pandemic, which sent the stock market spiralling.

Even as AI inevitably evolves to compensate for the unknown, there is an additional challenge for AI-enabled portfolio managers: the models are so complex that it can be difficult to explain their performance results. The European Securities and Markets Authority (ESMA) highlighted this issue, citing it as a factor in the low number of European funds using AI in their investment strategies.[4]

Additionally, ethical AI is now on the radar, with institutional investors keen to ensure that their own use of AI, and the AI technology firms they are investing in, take an ethical, considered and responsible approach to human rights issues such as bias and discrimination, privacy and risks to people.

Given the potential risks of AI's use in investment, global governments and regulators have an obligation to protect investors. In the UK, the National Cyber Security Centre introduced guidelines in November 2023 for providers of AI systems, covering design, development, deployment and operation. Unsurprisingly, this is high on the Financial Conduct Authority's (FCA) agenda: even though the FCA is not directly responsible for regulating AI, in July 2023 its CEO emphasised in a published speech that the implications of this technology within financial services are within its purview.[5] The EU's AI Act is also on the horizon, aiming to introduce rules governing the use of AI.

The impact on financial lines insurance

A concern for financial lines insurers is that this may translate into an increased risk of regulatory investigation costs. Regulators may decide to review a firm's existing AI strategies, especially in respect of mandates, investor suitability and marketing material, which, if not compliant, may increase the risk of mis-selling or misstatements. The reliability of the data used is an additional consideration for insurers, as these risks are ultimately the responsibility of the Board. Understandably, high on the C-suite agenda is establishing a robust risk and governance framework for both existing and potential future AI strategies linked to investment directives.

Investment managers operate in a highly regulated sector that is becoming increasingly restrictive as new legislation approaches, and carefully finding the balance between AI, performance and transparency is paramount. Ultimately, the directors are accountable for decisions on governance, including AI and investment, which, if mismanaged, may expose the firm to an array of issues such as regulatory investigation costs, regulatory fines, misrepresentation/misstatement liability and reputational damage, with no guarantee that the results will outperform non-AI strategies. There is a reason that AI has not yet replaced humans in the investment industry, despite its obvious attributes. The need for trust and long-standing relationships remains vital and is not computable… yet.

 

Beazley Furlonge Limited (Company Registration Number: 01893407 and VAT Number: 649 2754 03) is a managing agent for Syndicates at Lloyd’s and is authorised by the Prudential Regulation Authority and regulated by the Financial Conduct Authority and the Prudential Regulation Authority (Firm Reference Number: 204896). Beazley Furlonge Limited is registered in England and Wales with its Registered Office at 22 Bishopsgate, London EC2N 4BQ. Email: info@beazley.com Tel: +44 (0)20 7667 0623 Fax: +44 (0)20 7082 5198. The descriptions contained in this communication are for preliminary informational purposes only. Coverages are underwritten by Beazley syndicates at Lloyd's and will vary depending on individual country law requirements and may be unavailable in some countries. The exact coverage afforded by the products described in this brochure is subject to and governed by the terms and conditions of each policy issued.

Lucy Hughes

Underwriter - Financial Institutions