Interview with Michael Imerman, Finance Professor at UC Irvine

Michael Imerman discusses how emerging technologies are reshaping banking and highlights the role of regulatory guidance in promoting innovation and inclusivity.

Apr 2, 2024


In this interview, we had the privilege to speak with Professor Michael Imerman, an Assistant Professor of Finance at the Paul Merage School of Business, UC Irvine, where he also serves as faculty director of the Master of Finance program.  

Our conversation covered a range of topics, from the promise quantum computing holds for improving credit risk management to the competitive position of the US regarding fintech innovation. Dr. Imerman’s areas of expertise include banking, risk management, financial regulation, financial data science, and fintech innovation. He is in the final stages of writing The Economics of FinTech, a book to be published later this year by MIT Press. Recently, Dr. Imerman spent a sabbatical with the FinTech Group at the Federal Reserve Bank of San Francisco. Additionally, he regularly advises and consults for companies ranging from startups to large financial institutions. 

Professor Imerman currently serves on the editorial advisory board of the Journal of Financial Data Science and was previously an associate editor for the Journal of Risk Finance. His research has been published in numerous journals, including the Journal of Business & Economic Statistics and the Journal of Banking & Finance. Prior to UCI, he was Associate Professor of Finance at the Drucker School of Management at Claremont Graduate University, where he also served as Co-Director of the Financial Engineering program.  Before his career in academia, Imerman worked as an analyst on Wall Street supporting high-grade corporate bond and credit derivatives traders. He received both his PhD in Finance and Economics as well as his BS in Finance from Rutgers University.  


Good morning, Michael. Thank you very much for joining us. You recently served as a visiting scholar at the San Francisco Fed. Coming most immediately from academia as well as prior posts in the private sector, did working at a regulator provide a new perspective on issues you had previously explored from other angles? 


My stint at the San Francisco Fed helped me understand challenges that confront regulators in trying to ensure financial stability while also promoting innovation. The challenge is especially acute in the current environment as digital transformation and the advent of fintech overhaul traditional bank business models. It’s a delicate balance.  

One thing I learned there was that financial institutions should not view regulators as adversaries as fintech innovation unfolds, but rather as partners that can help them understand the risks of emerging technologies and prepare them for changes that are beginning to transform the financial system. 


How could a financial institution benefit from a partnership with a regulatory agency? 


The San Francisco Fed has a dedicated fintech group, housed within supervision and credit. It functions as an internal think tank that provides support to bank examiners as well as supervision executives. The group explores the disruptive potential of emerging technologies on traditional bank business models, along with associated risks.  

So the way I saw it - and I think many of my colleagues at the Fed shared this view - was that there was a partnership to be forged by also sharing these insights with the banks that the San Francisco Fed supervises in the Twelfth District.

When you think of the Fed supervising banks, especially the Federal Reserve Board, which is based in Washington DC, you think of these large bank holding companies like JP Morgan and Citi.  

In the Twelfth District, there certainly are some larger banks, such as Wells Fargo. But an overwhelming majority of the banks in the district are smaller regional or community banks. And in recent years, these institutions have felt pressure to adopt new technologies. 


Can you expand a bit on some of these new technologies? 


One example is blockchain technology, which was a particular focus of attention a couple of years ago. The banks were asking: “Should we roll out crypto trading platforms? Should we integrate distributed ledger infrastructure?” The examiners would then come back to us at the fintech group and relate these questions that the banks had. At the same time, the examiners themselves conceded that the new technology didn’t fit into the existing risk assessment protocols. So the fintech group would provide guidance - both to the examiners on updating these protocols and to the banks that had these questions. 

Another area that I was very involved in was the adoption of AI and machine learning, as well as alternative data. Many fintech startups located in San Francisco were looking to partner with banks or to secure them as clients. But they first wanted to understand the regulatory ground rules for forging these partnerships. Questions they had included ones surrounding the use of alternative data. For example, to what extent can you use someone’s record of payments for cell phone bills and Netflix subscriptions? Can you use that data to evaluate creditworthiness? 

The easy thing to do is prohibit use of these data sources, right? But that would inhibit both innovation and financial inclusion. People with “thin credit files” - a short or small credit history - or without a credit score altogether typically don’t get approved for loans. But incorporating alternative data that shows whether they are paying their monthly subscriptions on time might give you enough information to evaluate their creditworthiness. It might also help you identify some other risk factors that traditional models are missing.  

We also considered the problems inherent in evaluating “black box” models that derive from AI and machine learning. There is regulatory guidance for model validation and model risk management for traditional models of bank risk, but AI/ML models don’t fit comfortably within that guidance. How then can we evaluate the riskiness of these models? And how do we then educate the examiners who must go in and make sure that these models are properly documented and valid?


Where does the US stand in terms of fintech innovation? Are there areas where the US lags other countries? What types of things can everyday people more easily do overseas compared to the US? 


There are some areas of the fintech ecosystem where the US has a lead, and others where we seem to lag, or have lagged but are starting to catch up.

One example where we had fallen behind is payments. Many Asian countries have been successfully using mobile and contactless payments since 2010. As late industrializers, they’ve had less invested in legacy systems. 

However, in the midst and aftermath of COVID, the US has started to catch up, with greater adoption of contactless payment solutions. In my upcoming book, The Economics of FinTech, to be published by MIT Press later this year, there’s a chapter on payments technologies, which my co-author Frank Fabozzi and I discuss in the context of Peter Drucker's framework on the seven sources of innovation. And one of those sources of innovation is the unexpected. Something emerges that nobody predicted but that forces society to innovate.  

And with regard to payments technology and contactless payments, the constraints of the pandemic era definitely served that function. For example, people were no longer dining out and using cash to tip waiters. Instead, delivery drivers were leaving food outside of homes and people were forced to tip in a contactless way. The preference for non-physical forms of payment was also evident when you ventured out to stores or had to manage curbside deliveries and pickups.

You mentioned that the US has a lead in certain areas. Can you give us an example?  

In past articles, I’ve examined the extent of innovation across different sub-sectors or functions within the US financial services industry, such as insurtech, risk management, and payments, and mapped the prevalence of emerging technologies, such as AI, big data, and quantum computing in each of those sub-sectors. 

Within this matrix, payments has indeed been slow to embrace technology. In contrast, the adoption of digital technologies has been more thoroughgoing in wealth management. Here, robo-advisors construct portfolios after assessing multiple factors, including a client’s level of risk aversion. Traditionally, a human financial advisor would move towards the same goals by asking straightforward questions about the client’s attitudes towards money, savings, and investing. For example: “How would you react to a 10% loss in your investment portfolio?” The weakness of this approach is that clients are prone to answering in ways they imagine the financial advisor would consider “right”.  

Instead, robo-advisors take a more indirect approach. By leveraging AI and drawing upon principles from behavioral economics, the robo-advisors may ask questions that have nothing to do with finance but that still correlate with the ultimate information needed to build a suitable portfolio for a client. For example, in the middle of the standard questions, the robo-advisor might also ask: “It’s a clear 70-degree day. What's your ideal way to spend the afternoon?: A) Going for a walk, B) Going on the tallest roller coaster in a theme park, C) Going skydiving, D) Going boating, E) Sitting in a coffee shop.”  

The question is still trying to uncover a client’s risk tolerance. But by removing money from the question framing, the robo-advisor can elicit more honest answers. I suppose a human advisor could also ask the same indirect questions. However, differences lie in the scale robo-advisors can exploit to improve their algorithms and the dynamic flow of questions they can pose in response to previous answers. More importantly, as a software-driven tool, robo-advisors can democratize wealth management and offer economical investment guidance to millions of people unable to afford a traditional advisory service.

So, the US has definitely outpaced other countries in applying technology to wealth management. It began with start-ups like Wealthfront and Betterment and has since spread to more established firms – the so-called incumbents – such as Fidelity and Vanguard. 

You mentioned quantum computing earlier. As someone who's done a lot of work on risk over the years, what promise does quantum computing have for improving risk modeling at financial institutions?

Let me preface this by saying that of all the emerging technologies in the fintech ecosystem, I'm most excited about quantum computing.  

By exploiting properties of quantum mechanics - specifically entanglement and superposition - quantum computers can carry out in minutes calculations that would take classical computers decades or centuries.

I’ll try and describe one practical application of the technology.  

Value-at-risk, or VaR, is one of the most important risk models used on Wall Street. To calculate it, you need to build a probability distribution for weekly, monthly, quarterly, or yearly P&L. One approach to doing that is using Monte Carlo simulations. Now, if you have a portfolio that has forty different positions in it, mapping out all the possible Monte Carlo trajectories of the forty assets over the next week, month, quarter, or year can take many hours. So a lot of these models are run overnight and the risk managers and traders come back the next morning to see the VaR. Now, the problem is that by the time the market opens, the risk profile has changed - and will continue to change throughout the rest of the day. 
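To make the computation concrete, here is a minimal Python sketch of a Monte Carlo VaR calculation for a hypothetical forty-position portfolio. Every number in it - the position sizes, the return parameters, and the assumption of independent normal returns - is invented for illustration; real risk engines model correlated, fatter-tailed returns across far more scenarios.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical portfolio: 40 positions with made-up dollar values,
# expected daily returns, and daily volatilities (illustration only).
n_assets = 40
values = rng.uniform(1e5, 1e6, n_assets)    # position sizes in dollars
mu = rng.normal(0.0003, 0.0001, n_assets)   # mean daily returns
sigma = rng.uniform(0.01, 0.03, n_assets)   # daily volatilities

# Simulate many one-week P&L trajectories (5 trading days, independent
# normal returns -- a simplification; real desks model correlations).
n_paths, horizon = 100_000, 5
returns = rng.normal(mu, sigma, size=(n_paths, horizon, n_assets))
pnl = (returns.sum(axis=1) * values).sum(axis=1)  # weekly P&L per path

# 99% VaR: the loss exceeded in only 1% of simulated weeks.
var_99 = -np.percentile(pnl, 1)
print(f"1-week 99% VaR: ${var_99:,.0f}")
```

The batch nature of the process is visible here: the simulation loop dominates the runtime, and the parameters are fixed at the moment the job starts, which is exactly why an overnight VaR is stale by the open.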

However, Quantum Monte Carlo could theoretically run all these trajectories in parallel simultaneously in a matter of minutes. The engineers haven't actually done it yet, but a lot of very smart people are working on the problem. And when they succeed, a trader or risk manager could get an updated VaR in real time. No classical computer could hope to accomplish that. However, I should also note that the experts I've spoken with have told me we're probably, at best, five to ten years from widespread commercialization of quantum computing technology. 


When you get called in to evaluate the risk models used by private firms, whether established companies or start-ups, what are some of the fundamental principles you bring to bear in your assessments? 


The British statistician George Box once remarked: “All models are wrong, but some are useful.” And even though he was a statistician referring to statistical models, the statement very much applies to quantitative finance models. 

 The models we use in quantitative finance and financial engineering are all abstractions of reality. It’s like looking at a geographic map printed on paper, which lets you see a general spatial representation of the continents, and depending on the map, other information, such as the earth’s topography or the delineation of political entities. You might also then be able to draw certain inferences from the spatial representation. For example, Argentina is located in the southern hemisphere. It therefore experiences summer from around December to February and you could deduce that it’s probably warmer in Buenos Aires now than in New York. 

But a map is just a model. It’s not going to perfectly depict reality. In addition to the problems associated with projecting a sphere onto a 2-D plane, a map cannot capture changing political borders, growing urban areas, or dynamic landscapes, such as the continual expansion of the Big Island in Hawaii. It will also fail to pick up on the sorts of details provided in 3-D renderings or street-level displays like Google Maps. Finally, the presence of microclimates and the fact that New York and Naples, Italy are at the same latitude show that our ability to make inferences related to the weather is quite limited. 

Now, let’s return to Box’s quote: “All models are wrong, but some are useful.” So a financial model can be helpful and gives insights from which we can draw inferences. However, if, for example, you want a model to predict what Microsoft’s stock price is going to be on a certain date next year, it's going to be wrong. It's absolutely going to be wrong. If it gets it right, it's sheer luck.  

So, although we can’t expect an exact price prediction, we can still look to it for some insights: Is it, relatively speaking, overvalued or undervalued? Are the expected returns consistent with the level of risk? Then, based on these insights, we can make an informed decision about whether we want to buy, sell, or hold Microsoft stock. This is, of course, an oversimplified example, but it illustrates the fact that a well-researched model can provide insights to the user but that the model will always come with inherent limitations.

A second principle that should support any assessment is that a model is only as good as its weakest assumption. A model will necessarily rely on assumptions. In trying to forecast Microsoft’s stock price, we might assume that Microsoft is going to keep its debt structure constant over the next year or that it will continue to pay out a certain percent of its income as dividends.  

So when we validate a model, whether it's at a startup or a large, incumbent financial institution, we try to test the validity of their assumptions. What if you relax an assumption? What if you replace one assumption with another?  

Just as civil engineers stress test a bridge’s design by assuming loads several times the expected daily traffic, we do the same thing when we're validating financial models. What if volatility goes from 20% to 80%, or from 20% to 200%? Does it break the model or does the model still work?

Finally, the last step in the model validation process after we’ve stressed the model is to see whether the outputs make sense. I call this the “smell test”. Does it smell funny at the end of the day? If you get negative stock prices as an output, that’s probably a sign that something’s amiss with the model.  
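As a toy illustration of that stress-and-sanity-check loop, here is a minimal Python sketch that shocks the volatility input of a pricing model - using the textbook Black-Scholes call formula as a hypothetical stand-in for a bank's model - and applies basic no-arbitrage checks to each output. All inputs are illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, vol):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Stress the volatility input from a base of 20% up to 200%,
# checking at each level that the output still "smells" right.
S, K, T, r = 100.0, 100.0, 1.0, 0.03
for vol in (0.20, 0.80, 2.00):
    price = bs_call(S, K, T, r, vol)
    assert price >= 0.0                     # no negative prices
    assert price >= S - K * exp(-r * T)     # at least discounted intrinsic value
    assert price <= S                       # a call can't be worth more than the stock
    print(f"vol={vol:.0%}: call price = {price:.2f}")
```

Here Black-Scholes passes: the price rises smoothly with volatility and stays within its theoretical bounds even at 200%. A model whose output violated one of those assertions under stress would fail the smell test.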


Can you talk about some of the evolving trends in risk management? 


There’s a plethora of risk types that financial institutions manage, including credit risk, market risk, liquidity risk, and operational risk. In addition, the models themselves present a risk if they’re poorly constructed, misused, or not well understood. So model risk management is an area that’s grown alongside the traditional risk types. 

SR 11-7 is the regulatory guidance on model risk management that the Federal Reserve has developed for banks. And SR 11-7 has been a pretty well-understood and accepted set of guidelines.

Emerging technologies, however, present everyone with a new challenge. The “11” in SR 11-7 stands for the year 2011, the year the guidance was written. That was thirteen years ago, before financial institutions had even begun to anticipate AI and ML-driven risk models. Instead, SR 11-7 was developed with typical bank risk models in mind, such as capital adequacy models and derivative pricing models.

So, how to handle AI/ML risk models is the question that both academic researchers and regulators are heavily invested in right now. Should we tweak the current model risk management guidelines to accommodate AI/ML models? Or should we devise a whole new set of guidelines? Forging solutions to these challenges is something that I'm very much involved in and excited about.  


I was wondering if you could comment on the spate of bank failures that occurred in the first half of 2023. How would you apportion blame among the banks and the regulators? Could you assign a performance grade to the regulators? 


Well, I don't think anybody feels as though the regulators did a good job in the heat of the crisis. However, the later innings were handled a bit better. One could even arguably applaud the US financial regulators, specifically the Fed in partnership with Treasury, for the initiatives they put into place after the failures. Once liquidity was finally provided to the regional banking system, the panic subsided and a more widespread crisis was averted.

But in the lead up to and during Silicon Valley Bank’s failure, I don't think anybody would say that the Fed handled it well. In fact, it was a giant mess.  

I should note that I had already left the Fed at this point. So my crisis-related observations do not come from an insider's perspective. 

So, again, there were clear shortcomings at the Fed. In the prelude to the breakout of the crisis, the Fed should definitely have been on top of SVB more. Still, I can’t point the finger entirely at the regulators. Much of the blame also surely lies with SVB’s management. 

Where exactly did SVB go wrong? 

For SVB, it was a classic case of mismanagement of interest rate risk. Taking a step back even further, SVB's problems arguably began when the tech sector - the community to which it catered - slowed down in 2022. The commercial banking business model is to borrow money short term from depositors and lend it out long term to borrowers. SVB was the bank for the innovation economy. On the one hand, this was a brilliant and unique market position; one that I had admired throughout my career studying innovation in the finance sector.

However, this also exposed them to concentration risk: too much business concentrated in a specific industry. So when the tech sector slowed down after the massive surge during COVID, there was less demand for venture loans from startups. SVB still had to do something with the deposits it was holding to earn a return. The spread between the interest that banks pay on deposits (interest expense) and the interest that it makes on assets such as loans and bonds (interest income) is called the Net Interest Margin. SVB decided to invest those deposits into "risk-free" government securities.   
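The net interest margin defined here is straightforward arithmetic: net interest income divided by average earning assets. A quick sketch with hypothetical figures (not SVB's actual numbers):

```python
# Toy balance sheet figures, in $ millions (illustrative, not SVB's actuals).
interest_income = 4_500    # earned on loans and bonds
interest_expense = 1_100   # paid to depositors
earning_assets = 180_000   # average interest-earning assets

net_interest_margin = (interest_income - interest_expense) / earning_assets
print(f"Net interest margin: {net_interest_margin:.2%}")  # → Net interest margin: 1.89%
```

The pressure on SVB is visible in this ratio: with loan demand drying up, the only way to keep the numerator up was to put deposits to work in securities.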

However, with interest rates so low, they had to go farther out on the yield curve, into long-term Treasury bonds rather than short-term Treasury bills. Aside from having to wait many more years for these bonds to mature, longer-term bonds are more sensitive to interest rate movements. And the relationship is inverse: as interest rates go up, bond prices go down. So when the Fed began raising interest rates in 2022, those long-term bonds into which SVB had just dumped billions of dollars lost a lot of value.
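That inverse relationship, and why it bites much harder at long maturities, can be sketched with a basic bond-pricing function. The numbers below are hypothetical: a 1.5% coupon bond issued at par, repriced after yields rise to 4%.

```python
def bond_price(face, coupon_rate, ytm, years, freq=2):
    """Present value of a fixed-coupon bond (semiannual coupons by default)."""
    c = face * coupon_rate / freq   # coupon paid each period
    n = years * freq                # number of coupon periods
    y = ytm / freq                  # per-period yield
    return sum(c / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

# A 2-year note versus a 10-year bond, both issued at par with a 1.5% coupon,
# repriced after yields rise to 4% (all figures hypothetical).
for years in (2, 10):
    at_issue = bond_price(1000, 0.015, 0.015, years)  # = par, since ytm == coupon
    repriced = bond_price(1000, 0.015, 0.04, years)
    change = (repriced - at_issue) / at_issue
    print(f"{years}-year bond: {change:.1%} price change")
```

Under these assumptions the 10-year bond loses roughly four times as much value as the 2-year note for the same rate move. That maturity-driven sensitivity (duration) is exactly what hit SVB's bond portfolio.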

The other thing, as I mentioned, is it would have had to wait decades to get the principal back. So when the clients of SVB, again due to the tech sector slowdown, started tapping into their deposits – what we would call deposit burn – the bank had to sell the Treasury bonds at a loss to meet the cash needs of its clients. They announced in early 2023 that they needed to raise fresh capital to cover the losses. This then triggered a sell-off of SVB stock and - in what proved to be the final death-blow - a run on the bank where depositors withdrew their funds en masse. 

All banks have interest rate risk models which should (a) tell them how much exposure they have, and (b) suggest hedging strategies for excess interest rate exposure. For (b) there are very effective tools - interest rate derivatives - to hedge the risk. Many banks use them, and when your bonds and loans lose value because of an increase in interest rates, those hedges pay off handsomely. However, with SVB, even when the models were flashing red, saying you've got too much interest rate exposure, they failed to hedge effectively.


Are there lessons from that failure to hedge effectively that other financial institutions could benefit from? What were the specific sources of that failure? 


There’s a very important point here: SVB did not have a chief risk officer in place at the time to make the calls on hedging. Instead, the hedging decisions wound up in the lap of other C-level executives, none of whom had a specific background in risk. 

Accounts differ as to whether the CRO was on leave at the time or had already stepped down. Regardless, the crucial point is that in either case, SVB should have had an interim CRO in the role and in a position to say “OK, we have an excessive amount of interest rate risk, but tools exist to hedge that and we need to use them.” 

SVB had excess interest rate exposure due to their large investments in long-term Treasury bonds. They used interest rate swaps to hedge that exposure, but those swaps had expired in July of the previous year.

The head of risk management, who was the highest ranking risk-focused staff member in place at the time and who would normally report to the CRO, actually noted this.  

Now, when a hedge expires, you routinely roll it over and take out a new policy. It's like with car insurance. When it expires, you pay the premium and take out another six months of insurance. You don't let it expire and drive around without coverage. But that's exactly what SVB did.  

And interest rates indeed went higher. And all their long-term Treasury bonds, which they had a lot of, became worth a lot less.

From what I understand, allegedly, certain C-level executives thought there was nothing to worry about because interest rates were at the highest they had been in almost two decades. Hedging that interest rate exposure also wasn’t cheap. So the head of risk management was outranked and SVB decided not to roll the hedge. Sure enough, two months went by and they lost… big.  

They were out several billion dollars on those Treasury bonds. And so my understanding is that the Fed sent memos to SVB leadership about the issue before the losses occurred. In my opinion, the Fed should have been more aggressive. But, as you can see as I've related this story, most of these mistakes fall on SVB's leadership.


Thank you for your time and insights, Professor Imerman. We're eagerly awaiting the publication of your book and the continued impact of your work. It's been a true privilege to discuss these pivotal topics with you.

© Credcore 2023. All rights reserved.
