Conversations with Institutional Investors
Author: Investment Innovation Institute [i3]
Conversations with Institutional Investors is your gateway to in-depth discussions with the masterminds behind leading global investment firms, including key figures from pension funds, insurance companies and sovereign wealth funds. Our podcast explores the evolving landscape of asset allocation, portfolio construction and investment strategy, offering you firsthand insights from industry experts to inspire smarter, more innovative investment approaches. For further insights, go to i3-invest.com. You can also subscribe to our complimentary newsletter at i3-invest.com/subscribe/
Language: en-au
132: Michael Kollo – New Book, Building an AI Equity Analyst and AI as a Review Agent
Episode 132
Sunday, 29 March, 2026
In this episode, I speak with Michael Kollo, a return guest to the [i3] Podcast. Michael has recently published a book on artificial intelligence, called Future-Ready with Generative AI: Skills, Mindsets and Stories in the Age of AI. We speak with Michael about how he built an equity analyst AI agent in a weekend, how AI helps review your work and how you can get it to find solutions that are tailored to your style of working. We also delve into deeper philosophical questions around the nature of language, how AI changes people's interaction with language and whether AI changes our perception of what is artificial and what is not. Enjoy the show!

__________

Follow the Investment Innovation Institute [i3] on LinkedIn
Subscribe to our Newsletter
Explore our library of insights from leading institutional investors at [i3] Insights

__________

Overview of Podcast with Michael Kollo

03:00 We, collectively, still struggle to have the right framing for what this kind of AI is
04:00 I wanted to write a book on AI from a white-collar perspective that was neither hype nor alarmist
10:30 You built an equity analyst in a weekend, using AI agents, which produces broker reports?
14:00 What good looks like is still very much an individual judgement
14:30 AI is a mirror to yourself
18:00 Students are very strong users of AI across many different disciplines. They are symbiotically learning with the AI and are becoming natural users of it
18:30 One of the more powerful usages of AI is not to do a job, but to review a job
21:00 AI helps you to think on a meta level: what is it that you find interesting, useful and powerful?
23:30 Can you get an AI system to explain to you in plain English why a non-linear system works, test it in 10 different ways and write a research report about it? Yes, you can.
31:00 "I'm expressing myself in my adult language (English), but (AI) is taking it and swirling it into patterns of Hungarian (my childhood language) that are hitting me back at a whole different angle that I'm not sure I could have done myself."
32:30 "Language was supposed to be this thing that was supremely human. It encapsulates all this weirdness and the contradictions of being a human being."
34:00 An AI system is not an individual or a single entity that has a will or a desire. It is a field that you can land on and move from one place to another, as you will.
37:30 There is a danger that power users of AI might experience burnout, because they are constantly given things to review.
43:00 AI experts should not be asked about workforce impact, because they don't know enough about it

Full Podcast Transcript

Wouter Klijn 01:17 Welcome to the [i3] Podcast. I'm here today with a return guest, Michael Kollo. Mike, welcome to the show.

Michael Kollo Hi. How's it going?

Wouter Klijn Pretty good. Pretty good. So we're here to talk about a book that you wrote on generative AI. It's called Future-Ready with Generative AI: Skills, Mindsets and Stories in the Age of AI.

Michael Kollo 01:39 Well, thanks very much for that. That's a bit of a mouthful; we kind of continued to expand the title. It's coming out in the middle of March, I think the 14th or 15th. It's being published by a publisher called Routledge, which is a UK-based publisher. So it'll be available here, in the US, in the UK, all around the place.

Wouter Klijn 01:58 So what prompted you to write this book?
Michael Kollo 02:01 So look over the last three years and for years before that, but certainly the last three years, the topic of AI has obviously become very, very popular. Everybody's been thinking about it and talking about it. But one of the things I found through lots of presentations about 40 keynotes per year from all kinds of different audiences, from boards of directors all the way down to, you know, the average person kind of presentations is that we, we collectively, still struggle to have the right framing for what AI, this kind of AI is, and what it might mean for us. I don't think anybody has answers as to where it's all going, what will happen to the workforce or jobs or personal relationships and so on, but I think we have a pretty good inkling as to its capabilities and how fast it's moving. We have a pretty good inkling as to the different areas it might impact, but we don't have the right framing or the right thinking about it. So I was very keen to write a book that I could capture the imagination of a white collar worker in across any industry, just about to help them just understand what this thing could be and what it means, and get them to form their own view, but to form it in the middle ground, not to be hype and not to be alarmist. I wasn't keen on creating a book about how it could all go wrong and how it could be all terrible. And I wasn't also keen to create a book about, you know, the utopia that it could foreshadow in the future, but I was just interested in informing the average person how the middle ground could look. Wouter Klijn 03:29 Yeah, so not hype, but you do call it a civilisation altering technology. What do you mean by that? Michael Kollo 03:37 Yeah, so that's, um, so okay, there's a story behind that. Okay, so the story is the following. I, I was asked to give a brief testimony to the Senate, Senate hearing in Australia, and it was for the education use of AI within education. And so a lot of the speakers before me had come and talked about the dangers and the problems and so on. So I was really keen to try to counterbalance that with the significance, but in a positive light as well. And what I was trying to say with to that audience is that if we get this right in terms of how we teach the next generation, how we enable the next generation to reason and to think better, then we could bring about a whole golden age, a whole kind of new renaissance, I suppose, of reasoning and thinking, and this could be civilization altering. And so really, the context of it was, how do we use this technology to enable people to be their better selves, or to reason or to think better? It was a bit idealistic, absolutely, because we all know that not every technology, in fact, most technologies, arguably, and not always used for the betterment of people. There's a whole bunch of other negative things that we've had recently, especially with social media and the way that it impacts people. But I was kind of, I suppose, making a case for if we can use this properly, in a good way, that there's enormous an abundance of positivity that we could have for our civilization. So. I really was looking for a term that would say, actually, civilization materially changed with the printing press. It materially changed with a few other critical things we've done in the past, nuclear power, or electricity, or so on. And each one of these changes just brought about change. That's all it is. 
And this is one of those moments, Wouter Klijn 05:17 And you explore that concept further by actually saying it's not just the technology. I think you describe it as: it's a system that helps you navigate uncertainty. Can you explain a little bit what you mean by that? Michael Kollo 05:31 So this was one of the big challenges I had in the book, is I was trying to get across to people that they should not think about this as just technology, as data, as statistics, because for a lot of people, that alienates them immediately from the topic. They go, Well, I'm not about technology, I'm about people. I'm about conversation. I'm about artistic things. I'm about something else. And so for them, it pushes it away somewhere in the corner for someone else to deal with, and it's more comfortable that way as well. And so I was trying to find the right words or the right framing in this book by positioning it in as a companion, positioning it as a co worker, positioning it as a whole bunch of different kinds of things in the fiction and the non fiction stories. And I think in this particular case, it was somewhat of an abstract way of saying that if you think about this as a reasoning engine, as a thinking intelligence of some kind without will and without desire. So we take those off the table and we say it's just about that. Then it really is about how to help the average working person understand information, distil it, or expand it or manipulate it in different ways, and then ultimately, to deal with the core part of most jobs, which is dealing with uncertainty. Decision making under uncertainty certainly is really prevalent in finance and in financial services, but it's prevalent in many other industries as well. You have to make decisions. Do you write two two paragraphs or one paragraphs to your boss to explain what happened? Do you do you put in a big report or a small report? Do you go with one stock manager versus another stock manager? Whatever the decision might be, there's a constant set of understanding, evaluation, analysis that happens in our world. And this is a capability. I wouldn't even call it a tool. I'd call it almost like a companion to help with that. Wouter Klijn 07:13 I think a lot of people are a bit concerned whether it's going to replace them or not. You say it's more of an analytical tool that helps you make better decisions. And I thought it was an interesting example that you gave in a book where you talk about a friend who is a programmer, and he basically recognised where his own input was when he asked it questions. And some of it was really relevant, but some of it led him down, you know, a deep rabbit hole that absolutely was not worth pursuing, but he recognised which one was, you know, the right path to follow. And that's when he realised, this is where I contribute to it. It will help me. But left on its own, it could easily descend into, you know, just time wasting, resource wasting. Do you think that that translates to other professions, and in particular our industry. Michael Kollo 08:04 Yeah, look absolutely so white collar work Financial Services is about navigating uncertainty. So as we just said, it's about creating analysis, thoughtful analysis. Some of the analysis is pretty scripted, cash flow modelling and things like that. It's almost like a task based thing. But then there's a lot of choices to make along that pathway. 
So if you're, let's say, doing manager selection, you might look at the track record, you might look at the data, you might look at the portfolios, but then you might also look at the character of the manager. You might look at the way they talk about the markets, and you try to anticipate how they will respond to certain market conditions, and you try to essentially understand them as a partner. So I think there's a lot of judgement that goes into these things. There's a lot of personal judgement that happens as well. And so when you're using these AI systems to analyse something, because you don't have a set path, you're discovering the path as you go, and you're using judgement as your compass. In a way, you're using these systems more or less as a transportation vehicle to get to where you want to go, but your compass is in your hands, which is your judgement. But the speed at which you can get to those places is much, much quicker. So it puts some pressure on your judgement to be a lot quicker. Your compass has to figure out where you are much quicker, because these systems can, at the speed of light, create analysis in different directions. So I think the first part is understanding the fact that you're in charge of that judgement, of that compass, of the where should you go. What good looks like is entirely up to you. There are only very few jobs where what good looks like is already scripted, already written down in hard ways, and you're just executing that, in which case your judgement is not in the picture anyway. And so you would assume that for those kinds of things you'd almost happily hand it over to an AI system. But any time it's up to you, and you understand why something happens and why your judgement works like that, you should think about these as just very rapid, iterative companions.

Wouter Klijn 09:59 Yeah, but having said that, I do understand that you built an equity analyst in a weekend which can produce its own broker reports, and they're quite hard to distinguish from the human-produced ones.

Michael Kollo 10:12 So this was a kind of experiment I did recently. We were just chatting about it before. Okay, so let me describe what it does, and then let me describe why it still fits into my model, so I can defend my model, my mental model here. Okay, so in the first instance, what it does is it's a forensic accounting type system. So it takes financial statements over a number of years for a company, it then looks through them, looks for unusual attributes of the accounting that's been presented: the cash flow statements, the earnings, the accrual accounting and various other things. It then creates hypotheses to try to explain those particular anomalies, and goes out there and tests those hypotheses. And it does so by looking at competitor data, by looking at product-market fit, by reading the news, by looking at contractual and other statements, and so on. So it covers a wide variety of different data sources from trusted places. But importantly, the way that it thinks, the mechanism of its reasoning, is a very stoic, philosophical kind of reasoning. So observation, empirical observation, hypothesis, test, confirm or deny. And it does that. It typically takes about 17 minutes to do something, and as an output it gives me back exactly the investigation path: what it found, what is unusual, where it looked and so on.
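To make the shape of that observation, hypothesis, test, confirm-or-deny loop concrete, here is a minimal sketch in Python. It is an illustration only, not Michael's actual system: the crude accruals screen, the ask_llm and fetch_evidence helpers and the toy numbers are hypothetical stand-ins for a real model API and real data sources.

```python
# Illustrative sketch of an observation -> hypothesis -> test loop for a
# forensic-accounting style agent. ask_llm() and fetch_evidence() are
# hypothetical stand-ins for a real model API and real data sources.
from dataclasses import dataclass, field

@dataclass
class Finding:
    observation: str                                  # the accounting anomaly that was flagged
    hypotheses: list = field(default_factory=list)
    evidence: list = field(default_factory=list)
    verdict: str = ""

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a language model; replace with a real client."""
    return f"[model response to: {prompt[:60]}...]"

def fetch_evidence(query: str) -> list[str]:
    """Stand-in for news / filings / competitor-data retrieval."""
    return [f"[document matching '{query}']"]

def flag_anomalies(financials: dict[int, dict[str, float]]) -> list[str]:
    """Very crude screen: flag years where accruals diverge sharply from operating cash flow."""
    flags = []
    for year, row in financials.items():
        accruals = row["net_income"] - row["operating_cash_flow"]
        if abs(accruals) > 0.2 * abs(row["net_income"]):
            flags.append(f"{year}: accruals are {accruals:+.0f} vs net income {row['net_income']:.0f}")
    return flags

def investigate(financials: dict[int, dict[str, float]]) -> list[Finding]:
    findings = []
    for observation in flag_anomalies(financials):
        f = Finding(observation=observation)
        # 1. Ask for a striking, testable hypothesis rather than an average explanation.
        f.hypotheses = [ask_llm(
            f"Propose one striking, testable hypothesis for this anomaly: {observation}")]
        # 2. Gather outside evidence for each hypothesis (competitors, news, contracts).
        for h in f.hypotheses:
            f.evidence += fetch_evidence(h)
        # 3. Confirm or deny, keeping the whole investigation path for the report.
        f.verdict = ask_llm(f"Given evidence {f.evidence}, confirm or deny: {f.hypotheses}")
        findings.append(f)
    return findings

if __name__ == "__main__":
    toy = {2023: {"net_income": 100.0, "operating_cash_flow": 60.0},
           2024: {"net_income": 110.0, "operating_cash_flow": 105.0}}
    for finding in investigate(toy):
        print(finding.observation, "->", finding.verdict)
```

The point of the sketch is the structure: the judgement about what counts as material, and what a striking hypothesis looks like, lives in the prompts, which is exactly the calibration Michael describes next.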
When I was calibrating it, after a couple of iterations, I started noticing that I was guiding it more as to what is significant. So what is material? You found a bunch of things over there; actually, don't worry about that. It's interesting, but it's not material for what I'm trying to do. Or what does a hypothesis actually look like? Well, I want you to create very striking hypotheses. I don't want you to create an average hypothesis that says the reason that particular ratio is out of whack is because, on average, you know, that's just what cash flows tend to do. I want you to create a punchy one that says it's that way because the manager is trying to do something bad, it's a governance outlier. I don't want average explanations for average things; I want extraordinary explanations for extraordinary things, a combination of both. So what good looks like lay in my hands, and my compass lay in my hands. And so from the outside, it looks like it produces five, six pages of beautifully written material, and soon enough you'll have one, and I'll have one, and the person down the street and the person listening to this will have one. But I want to distinguish what makes it tick. What actually makes it good or not will no longer be just the fact that there are five pages, because today, you know, there are lots of people that provide equity research, not an infinite amount, just a lot. Tomorrow there'll be a lot more, but it'll be even harder to tell inside those systems what reasoning pattern is being followed and why that's good, why your system of reasoning is better than mine. And so therefore, if I'm trying to evaluate which equity research AI I should listen to, I'm probably going to need my own AI that, you know, I've calibrated, that tells me this is actually a better way of thinking about it, and it will go and evaluate other systems. So you can see it kind of escalating. But the problem is that what good looks like is still, at the moment, very much a source of individual judgement.

Wouter Klijn 13:20 And going into that a bit further, I thought one thing that was fascinating in your book is where you describe sort of the impact that expertise and training and just experience have on how you interact with an AI system. So I think there are a couple of stories that you tell in the book where people quite quickly recognise, okay, this is the path to go through, but to a degree it also relies on how well they understand where their advantage lies. That can sometimes be a very tricky thing to analyse and to recognise.

Michael Kollo 13:56 Often, over the last years of working with AI, I've found that it's most often a reflection on me; in other words, it makes me question why I do that thing, or why I'm asking it that way. And so it's a mirror to yourself, really, which is, again, a very unusual thing, this idea of meta-reasoning, or meta-thinking. So thinking about thinking. When I asked that question of doing it that way for the equity research, for example, let's take that example: why is forensic accounting the right approach? Why is this kind of hypothesis testing the right way of doing things? Are there other ways of thinking about value? Are there other ways of solving this kind of problem? And I think knowing the why and owning it then becomes a very empowering thing.
To go back to the AI system and go, Okay, I want you to look at these three other ways of doing forensic accounting. But ultimately, the aim is to do whatever the objective is, and you can rotate your lens around the problem, if you're comfortable enough to rotate your lens. Around the problem. I think for a lot of us, we become accustomed to doing things a certain way, and we haven't exercised those muscles of, why am I doing it this way? There's just a there's a process, there's a structure. I'm going to look at these five things, or these six things. I'm going to fill out the boxes. You know, I'm going to talk a little bit about it, but I didn't set up this system. I'm just executing it. I I'm just a custodian of it. I think for those kinds of jobs, companies are going to look at that and go, Well, if you don't, if you're just a custodian of this system, either you quickly start to own it and go, Why? And that becomes you, becomes yours and your manifestation of your will, or it becomes a process that's executed, in which case, why not bring an intelligent system to do it? Wouter Klijn 15:47 Yeah, yeah. So often people think, Well, young people that are growing up in this age of, you know, a lot of technology, the internet, that they might be better equipped to work with AI. But it seems, from sort of this reasoning, that potentially more senior, more seasoned people could work better with AI because they know better questions to ask it or better give them better instructions. Michael Kollo 16:12 I think so. I think I think there's, out of any problems that I can see, this is one of the biggest ones I can see, which is that we have many of our industries are set up in ways where younger people come in, they learn by doing or by osmosis, and then over time, they develop knowledge and they develop judgement, and then they manage others to do the same. And if AI comes in, hollows out that beginning part and basically says, well, we can kind of execute based upon your commands, dear senior person, and you're quickly going to get to a point where, which is what you're seeing today, which is that graduates are having a lot of difficulty finding work. They are possibly the cheapest resource, yes, but they also the ones that are most unproven in terms of their ability to add value. So for the average organisation, look at an AI system, and they go right, if we can basically scale the expertise we have in the senior people, maybe we can do that. I think that's one point of view. I think the other point of view is that younger resources are often more flexible in their cognitive capabilities. So they may not yet have worked out what they believe to be the best way of doing things, but that often means that they are open to different ways of doing things, and they're open to learning and so on. So normally, a lot of the younger generation that come across in universities are very strong users of AI across multiple different disciplines, and they're symbiotically learning with the AI system to use it better and become natural users of it, which means that they're able to question and iterate to better solutions much more rapidly than a domain expert who's just trying to manifest their own way of doing things into an AI system. Wouter Klijn 17:51 It seems like an ideal tool to do like pre and post mortems of certain business cases. Michael Kollo 17:57 It's so good. I mean, the one of the use cases that so often people start with AI going, why? I'd like you just to do something for me. 
Okay, write my report, or, you know, write a transcript of this podcast, or whatever it is, and therefore it'll just do it for me. Probably one of the even more powerful use cases is the reviewer. Is the review this for me. Review this report and tell me where the inconsistencies are, where the mistakes are. What's better way to frame this? When I wrote this book, I gave it to a number of times to be reviewed, and while it was complimentary to begin with, it was very sharp and very critical very quickly, about, you know, loss of tone, about consistency across chapters, about consistency of messaging. And it helped me to pretty high level, which I'm not sure I rose to, but, but it's, it was a, it was a really interesting thing where you put something that you feel like it's a part of you, like when you write a book, you inevitably put a lot of yourself in it, and it acknowledges that goes, thanks, Mike, yep, I can see this is you. That's lovely. Anyway, now you need to do these things. You almost like, Oh, it's right. I don't like it, but it is right. Wouter Klijn 19:02 Yeah. So it's polite, but also, to the point, to a degree, it seems that what you get out of it depends a lot on how specific you are in what you put into it. And I think you gave this example in your book, where there was somebody in a financial services company who said, Write me a market update, and then that led to rubbish. But then, when you specified, okay, write a market update about volatility in emerging markets with, you know, an eye on the US dollar, then it came up with something much more interesting and much more to the point. To what degree is it like a matter of, like, you know, garbage in, garbage out, and just being specific about what you want, Michael Kollo 19:46 it's all about that. So, I mean, there's a bunch of different ways I can say this, but let's say, for example, in this example, you said, right? Write me an interesting market update. Okay, so what does interesting mean? So this goes back to our meta conversation, right? So what makes interesting for you? Well, interest. For me, means sharp market movements with contradictory news or mysteries. Okay, great, so is a mystery. Is a sharp market move 5% or 10% Well, at least 10% okay, great, etc, etc. You follow this path and suddenly you get half a page of write for me, articles that relate to at least 10% market moves, accompanied by interesting news articles that point to x, y, z, etc, and you start to really define very carefully what it means. So you can't just look at something and goes, I'll know what it when I see it. And this meta thinking actually makes you again question, what is it that you do? What is it that you find useful, valuable, powerful, and so on. And these systems can take you anywhere. So one of the kind of visual metaphors in this book, which I quite like, is the is the kind of the Ranger metaphor. So imagine like an infinite woods in front of you with many, many clearings. And your guide arranger is going to take you to one of those clearings. And depending upon how you instruct it in terms of what kind of clearing you'd like to get to, you can get to beautiful oak trees and sunny whatever, rainbows and butterflies. Or you can get to dark, terrible things, or you can get to pretty average looking, or you can get to plasticky ones, or any proportion permutation of woods that are available to you. And the only thing that will take you to one place to another is what you whisper into that, into that Ranger, AKA your prompt. 
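That "what does interesting mean" exchange can be pictured as a small piece of code: the sketch below turns the vague request into an explicit specification. The threshold, the focus wording and the build_market_update_prompt helper are hypothetical, chosen only to mirror the example in the conversation, and are not taken from any real system.

```python
# Illustrative sketch: replacing "I'll know it when I see it" with an explicit
# specification of what "interesting" means. All names and thresholds are hypothetical.

VAGUE_PROMPT = "Write me an interesting market update."

def build_market_update_prompt(min_move_pct: float = 10.0,
                               require_contradictory_news: bool = True,
                               focus: str = "emerging-market volatility and the US dollar") -> str:
    """Assemble the 'half a page' of definition that replaces the vague request."""
    lines = [
        f"Write a market update focused on {focus}.",
        f"Only cover moves of at least {min_move_pct:.0f}% since the last update.",
        "For each move, cite the accompanying news items.",
    ]
    if require_contradictory_news:
        lines.append("Prefer moves where the news flow is contradictory or unexplained; "
                     "call out the mystery explicitly.")
    lines.append("Close with what you would watch next and why.")
    return "\n".join(lines)

if __name__ == "__main__":
    print("Before:\n" + VAGUE_PROMPT + "\n")
    print("After:\n" + build_market_update_prompt())
```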
So if you just say to it, take me to a wood clearing, please. You're literally looking at any number of those that you could land in. Wouter Klijn 21:34 Where are some of the best benefits you think can be had in our industry? And you know, we sort of discussed the analytical part of it, the decision making process. But you also describe in your book, where you make the comment, financial markets are not linear, and one of the things that we often hear about AI systems is that they can help find trends in nonlinear processes. Is that sort of another area where you find a lot of benefits in? Michael Kollo 22:01 It's a good question. So people often confuse these terms. So with generative AI, and specifically with language models that this book deals with, it is very much the patterns and trends within language and expression and information. And so really, what you're doing here is you're intermediating your and another person reading of information. So if you're doing a report or analysis and so on, you're putting a system in the middle of that which is very good at reasoning, cognitive capabilities, but also speculative writing, creative writing, and so on, so on. Can you get it to explain a non linear system for you linguistically? Yes. Can you get it to recommend for you methodologies to tackle that loneliness, yes, can you get it to write the code and execute that nonlinear system to test it for that yes, you can. Can you actually ask it to test 10 different ways of modelling that weird nonlinear system, write the code, evaluate it on certain grounds, come back to you, write a research paper for you and explain it to you without the technical logit. Yes, you can. And so where we are today is this really interesting moment where, at the beginning of it, had we been having this conversation three years ago, I would have said to you, well, it's good for doing, I don't know, emails and reporting, but after that, you're on your own. Today. We're at this point where the systems can pick up skills, can pick up what this is a methodology from Claude. Can pick up, in this particular case, Knowledge Base. Can execute code, can test, can revise their hypotheses, can re execute again, iteratively learning and updating their hypothesis, all through the medium of language models. So language models become the controlling layer, and underneath that are these systems that are working not just to write code, but to execute and then learn from it. So again, two years ago, yeah, I would have said to you, yeah, just use it to write some code, and then good luck to you, Python or whatever else you want to do today, the system will write it and execute it and look at the results and to see whether it's actually what you want, and then go back and write it again, if it isn't all in a loop. And I think it's, it's a it's easy to lose track of how fast this progress is going. If you haven't looked at it for six months or nine months, you might go back and say, I tried that. It just didn't work. But I, and this is something I find often in LinkedIn, because I'm quite active there is that people don't understand that this is a very much a thing in motion. So whatever has you kind of characterise it as it's like that, or it's like this in six months time is no longer probably not true anymore, except, so this is one of the challenges of writing a book like this. I had to write in such a way that I try to pick out those eternal characteristics of the system. 
But I wouldn't be too much pointing fingers, going, haha, I can't do this, because by the time the book come out, it can do it just fine. Wouter Klijn 24:47 But is there sort of a confusion, maybe sometimes, about what it does, because you take a clear approach that you look at, sort of the language, the large language models, but the language aspect of it, while, if you're. Looking from an investment side, you're probably looking more at mathematical trends rather than just language. Do people confuse that you think where… Michael Kollo 25:08 Massively so right? Let's take a step back for a second so you've got basic statistics, econometrics, statistics that used to measure back test so on. Most of quantitative finance, if not all of it presently, is based upon econometrics and linear models. So standard linear models, little bit of exponentials, maybe a little bit of logit here and there, and that's about it. And so therefore those systems are input output equation go hypothesis testing, kind of methodology, then neural networks came along to finance, probably 2014, 2015, and people went, Oh, can I use this to do stuff, trading, liquidity management, high frequency, because it eats up data at higher rates. Can't use it for monthly data. So it's a very specific area. Finances started to think about using it, which is the higher frequency trading elements of it. Can't use it for macroeconomic research, because the data is to 60 observations, monthly observations to work with, or you got 600 it's just not enough. You don't have hundreds of 1000s. So the methodology fell by the wayside as a primary methodology to model financial systems at anything else but a high frequency range when language models came along, and this is a very specific area of AI or kind of neural networks. It said, Look, maybe we can model how language, or how words and tokens, in this case, come together. And it turns out it was quite good at that system, again, through the transformer mechanism, but that that mechanism didn't transfer. I can't put a stock price in there and go by the way, can you also tell me what's going to happen to happen to that? So it's much more along the lines of modelling language, and over time, it obviously got bigger and better to the point where now we can do 24 languages, but we can do number of programming languages, and we can do mathematical proofs, which was an unusual thing up until now because we thought, well, how can language models do maths that's kind of dumb. Maths is a whole other part of your brain and whole different ways of adding numbers together. And still today, if you ask a system to do math for you, it's more likely to write code and then execute the code and give you back the response than it is to figure it out directly. So when you're thinking about the difference between language models or generative AI versus modelling or forecasting the world, forecasting methodologies are often much more simple in their applications, with more limited data, with more speculative outcomes, but the difference now is that language models become so powerful that they themselves can do the testing for you. So it's not the case that the language model itself has a neural network that's specifically about stock prices, but the fact that the language model understands the fact that it can run a regression for you, test the results, understand what the results are, gives you back the results, or iterates over and over again. 
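One way to picture that "write the code, execute it, look at the results, go back and write it again" loop is the rough sketch below. It is a sketch under assumptions: ask_llm and run_in_sandbox are hypothetical stand-ins for a real model API and a properly isolated interpreter, and the loop is an illustration of the pattern rather than of any particular product.

```python
# Illustrative sketch of a language model as the controlling layer: the model proposes
# analysis code, something executes it, and the output is fed back until it is satisfied.
# ask_llm() and run_in_sandbox() are hypothetical stand-ins.

def ask_llm(prompt: str) -> dict:
    """Stand-in for a model call; a real model would sometimes return code to run first."""
    return {"done": True, "answer": f"[model response to: {prompt[:50]}...]", "code": ""}

def run_in_sandbox(code: str) -> str:
    """Stand-in for executing generated code in isolation and capturing its output."""
    return "[execution output]"

def analyse(question: str, max_rounds: int = 5) -> str:
    """Iterate: propose code, execute it, inspect results, revise, until the model stops."""
    history = [f"Question: {question}"]
    for _ in range(max_rounds):
        step = ask_llm("\n".join(history) + "\nWrite code to test this, or give a final answer.")
        if step["done"]:
            return step["answer"]
        output = run_in_sandbox(step["code"])   # e.g. fit a regression, print diagnostics
        history.append(f"Code:\n{step['code']}\nOutput:\n{output}")
    return "No conclusion within the round limit."

if __name__ == "__main__":
    print(analyse("Is there a stable linear relationship between these two return series?"))
```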
So it's a layer that sits on top of empirical testing, and does that testing with you, for you so on. Wouter Klijn 28:09 Yeah. Because I think when you wrote this book, you can sort of get a sense that you get almost at a more philosophical level, where you start to ask what language actually is. And I think there's a passage in your book where you talk about beneath the surface of language, words and concepts in communication, there are deep patterns that represent common ideas, thoughts, impressions, emotions and experiences in life, where you basically talk that there's a lot of abstract commonality among different languages. And I think you gave an example where from your childhood, where you go back to Hungary, and it's a very different language. It's, I think you say it's more complex, but you realise that there is these commonalities underlying that. Can you tell me a little bit about how it has changed your concept of language. Doing this, doing this exercise, Michael Kollo 29:04 It's a big topic. I think it's one of those things that is very personal to most of us. When I first looked at this, I thought that maybe language itself was a weak expression of an inner world. So you have this big inner world. Both of us have first language is non English, and so our we grew up as children speaking another language, and so maybe there's like this big inner world of emotions and so on that we try to squeeze through the keyhole of language into the outer world. And therefore AI is just that keyhole, or language model specifically just represent that keyhole, but they can't represent the entire inner world. I think as I got to know language models more, and as they kind of got bigger and better, I felt like there was more power and more things that linked us than not through languages, through the expression of language alone. Yeah, and that made me question about my inner world, whether my inner world was also a lot more linked to both language and maybe to each other and to other people and but I think just falling deeply into this idea of what can language represent about the mind, how much of your mind is imprinted into the words they use, and how you express yourself and so on. I think for me, has been an interesting journey. Again, being dual language, expressing my ideas through English, getting the system to translate into Hungarian writing music with that, writing poetry, with that, seeing the way that the words form around my ideal, of my thoughts in a different language, has been really humbling. There's this moment where you're like, I'm expressing myself in my today's adult language, but it's taking it and swirling it into patterns of Hungarian which then hit me back on a whole other angle that I'm not sure I could have done myself and and again, these are moments where you have these wow moments about this is more than a or, if it is a mirror, it's a mirror into humanity, as much as it's a mirror to yourself. Wouter Klijn 31:07 Yeah, and you delve also into the concept, then what does it actually mean artificiality? And I think, you know, for me, it's very relevant, because as I starting to use it more, I'm starting to think, should I give chat GPT a byline or not? Michael Kollo 31:24 Oh, interesting. I like it. I like it. Wouter Klijn 31:27 But, you know, it becomes concluded where the system didn't generate it by itself. I didn't generate it necessarily by myself, but it was an interaction. So tell me a little bit about how you now think about the concept of artificial. 
Michael Kollo 31:43 It's so that I never liked the word artificial, because it there's always a tribalism attached to it. For me, like a separation. Artificial means not of Me, of over there, and that separation means that it's over there, and it's becomes adversarial eventually. And so I feel like we have lots of adversarial pictures of AI in our history, right? So everything from terminators to whatever, and they're normally just versions of ourselves to humans with more power, bigger muscles, you know, etc. It's always very amusing for me now to watch Terminator two back, which is an awesome movie, and go, these systems have figured out how they work time travel, but they don't quite understand language, you know, this kind of I think because, you know, language was meant to be supremely human. It was meant to be the thing that really encapsulates the weirdness that is and contradictions that is to be a human being. So I think, I think we're just in a very different world than we were back when those kind of systems were thought about. And I think AI really came from a world where technology and statistics and data were over there, and we with our humanity and our flowing and whatever were over here. And it was a very clean separation, and made us feel comfortable now that separation is not so clean anymore. Now these systems can talk to us, can modulate voices, can understand and reason and help. The number one usage for Claude is personal therapy, like it's it's going to be really so called, coming into this space that we, up until this point, we had thought about as purely human, and that's okay. That's not something to be concerned about that's more about a really interesting kind of self realisation about what it is that you bring to the table, versus what's another AI bring to the table. So to your point about having a when you set up a GPT or any kind of system, you'll say to it, this is what I find interesting, and what I don't find interesting, and you will embed a part of you into it, and then it'll run for a while. And if you allow it to continue to evolve itself, aka, based upon a number of clicks or readerships or your likes, for example, it will try and choose more of that or less of this. Again, it's just a ranger. An AI system is not an individual. It's not a single entity that has a will or a desire. It's a field. It's an S surface area, and it's a vast surface area that you can land on and move from one place to another, as you will, because you bring the will, it brings a surface area. Wouter Klijn 34:16 Yeah. So it comes back to sort of this concept, the better you know yourself, and the better you know your skill set. You seem to get more out of these systems. Do you think that that will translate in how people, the people that will do it better, are a specific type of mindset and thinking, for example, about people whose jobs revolve around delegation. It seems that a lot what you do in interaction with language models is delegating them to do things right, and recognising where you come short, and finding solutions to that. Do you think there will be a certain subset of people that are just better in interacting with these systems? Michael Kollo 34:58 It's a really interesting, very live. Problem, right? So lots of HR professionals are grappling with, what are the skills of the future? I think, being able to explain yourself and what is it that you want to do and why is absolutely a key skill. There's no question. Now. Full stop. Agree with you. 
Nothing more to add. There is a danger here. And it's a very funny one. So one of the things that you find when you work with AI too much is you get spoiled. You have this intelligent system that is only constantly listening to you, thinking about what you're saying, and trying its damned hardest to do how you want it to be done. As it learns more about you and learns about the background, how you like to do things, updates his memory, or your skills or whatever. Every conversation feels like. It's easier. It's like it's already a step in there. It's already thinking about it like that. You already find yourself nodding as it's finishing the note to you. It's very hard for humans to compete with that. It's very hard for me to walk back into room with four humans that I don't much work with and try to align them, to get them on my page, to get them understanding, to get them understanding. There's all the social cues, the etiquettes, the intentionality, the all of the things I've got to work past, then I've got to understand where you're coming from, then I've got to align you, and so on, so on. The gift for me is that I get to work with people, and that maybe one of you has a better idea than I do, and we make something better than so that's there's absolutely reasons to make that investment. But I think in many cases, people will go, why am I doing this? If I can just go to an AI system and it'll get me quicker, better, faster. This feels so slow, this feels so difficult, by the time I get around to it and so on. And so what I think this will kind of lead us to is a world where individual power users will be incredibly good at using AI systems, and their only danger will be, as I'm finding certainly recently, is burnout. Ai burnout, right? I work six hours, and then I my brain stops. I look at a blank wall. I can't really focus anymore, because the man of will and attention. It's literally like a room full of really smart people constantly giving you things to review, and you're constantly judging reviewing next, judging reviewing next. And as soon as you set another milestone, it's achieved and it's achieved and it's achieved and you're running at full speed, and I tend to be quite strategic. Think I do tend to think a couple of steps ahead, and even I'm like, Okay, so I've thought three steps ahead, and you've already got there, so I need to now think three more steps ahead and so on. Wouter Klijn 37:36 Yeah. So in that context, like so we talked a little bit about this, this equity analyst that you created, and I thought one interesting aspect of that is that it came up with, I think you said, it creates a hypothesis, but then it spawned new agents to look in different elements of all of those hypotheses. So you get this sort of cascading effect and but ultimately it came back and gave you a report. But when you look at that process on itself, it seems like it's something very different than, I think, where people started out maybe two years ago, where everybody was expecting that AI would automate a lot of stuff. This doesn't feel like automation. What is your sense there that these language models end up more, as you say, reasoning agents for individual people, rather than straight through automation? Michael Kollo 38:36 It's a good question. So I think we need to understand the category of use cases much better in organisations. So an organisation is a rich set of use cases, and then sub use cases tasks underneath them, and some of those tasks are about the movement of information. 
Reporting. For example, you're moving information from one system to another. Maybe you're combining it with few things. Maybe you're making a calculation. But then ultimately, so everything from your custodian to your fund, reporting to other things are simply linear movements, and what you're doing, in most cases is your search, retrieve problems. Where is my data? Where can I get it? Is it accurate? And then moving it from A to B? So that's fine. I think that's automatable, and it's automation friendly, and maybe up until now, it's been hard to automate because it was the data's been finicky. It's been all over the place. It's been messy. Now with AI, it'll iteratively search and find things for me and things like that. So that's better then you've got this class of discovery type problems, right? So discovery is, I don't know what I'm looking for. I just have a principle of what I'm trying to do. So in this case, the principle is, I think there might be something wrong here, but I don't know what it is, so I have to give the AI system breadth to explore that, and I'm going to lean into its intelligence to do that. So I'm going to lean into the idea that it can creatively create hypotheses, that it has the reasoning systems to test those, or to write code and so on. So I expect it to go widely in a deep research kind of way. So go. Wide and come back in again. Go wide, come back in again, go wide, and so on. And at the end of it all, I wanted to have covered a lot of ground, and ideally, hopefully it's figured out what relevance is, or I've given it good instructions as to what relevance for me looks like. So out of the 180 things are found, only 18 get mentioned to me, and out of the 18, only three are flagged for me to do, and again, for research purposes. This is an incredibly powerful tool. That's what you're seeing now. Physics Research people are talking about, or mathematics research being co done with AI at much faster speeds. I'm not giving you another example, Monash. I was at Monash, I think last year, I was giving a presentation to the faculty and a bunch of other quants in the room, and it was about using AI as a paper reviewer, as an article reviewer, so if you've got a working paper, submit it to a journal. Normally it takes six, seven months for you to come back, and then little bit back and forth, and then the next journal and so on. Give it to an AI system that goes out there, reads the nearest 20 other papers, figures out what your unique contribution is. Then reads the next 20, the next 20, the next he's read 150 papers and said based upon what you've done. And these 150 papers, I think these are the elements that are unique, or not unique, or these are the suggestions I would have, and so on. It can now do that within the space of 15 minutes or 20 minutes, and that's a tool that allows us to create much sharper research. But then you ask the obvious, the question, and you go, Well, why don't we just get the AI to do that the beginning of my research project, rather than the end? Before I even pick up the pen to go, what should I be doing? Let's go that way. So, a lot of different applications. But the bottom line is, the is, we're all discovering this idea of what it is useful for, because, at the moment, this thing called generative AI, and where it's put into agentics, or whether it's a chat bot, is is a raw material. It's like the genie flask with the genie coming out of it. Wouter Klijn 41:52 Yeah. 
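The paper-reviewer example is a neat instance of that discovery pattern: go wide, come back in, go wide again, then narrow to a few actionable points. The sketch below is purely illustrative; find_related, ask_llm, the batch size and the round count are hypothetical placeholders rather than a description of the tool discussed at Monash.

```python
# Illustrative sketch of the widen-then-narrow "discovery" pattern, applied to
# reviewing a working paper. find_related() and ask_llm() are hypothetical stand-ins.

def find_related(topic: str, already_seen: set, batch: int = 20) -> list[str]:
    """Stand-in for a literature search that returns the next-nearest papers."""
    start = len(already_seen)
    return [f"paper-{start + i} on {topic}" for i in range(batch)]

def ask_llm(prompt: str) -> str:
    """Stand-in for a model call."""
    return f"[assessment based on: {prompt[:60]}...]"

def review_working_paper(abstract: str, rounds: int = 3) -> str:
    """Go wide, come back in, go wide again: accumulate context, then narrow to advice."""
    seen: set = set()
    notes = []
    for _ in range(rounds):
        batch = find_related(abstract, seen)      # widen: pull the next batch of papers
        seen.update(batch)
        notes.append(ask_llm(                     # narrow: what is still unique?
            f"Against {len(seen)} related papers, what is unique in: {abstract}?"))
    # Final narrowing: keep only the few points worth acting on.
    return ask_llm(f"From these notes {notes}, list the three changes most worth making.")

if __name__ == "__main__":
    print(review_working_paper("A working paper on nonlinear factor timing."))
```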
So in a lot of ways, AI helps to accelerate processes, rather than really replace entire functions. If we may do a little bit of crystal ball gazing, what is sort of the natural progression of that? I mean, as you said, there are only so many hours you can spend reviewing, you know, all the things that come back at you, and the information overload. Do you want to take a stab at where AI might take us in the next two or three years?

Michael Kollo 42:23 I don't, actually. I'm going to plead ignorance, but you should definitely just read the book. No, it's, um, yeah, I have a bugbear about people that have expertise in one field being asked about a different field. I often think that, you know, AI experts shouldn't be asked about workforce impact or things like that, because they just don't know enough about it. I don't think it's as much of an Armageddon as everybody believes it to be. I get the sense that, like with anything else, people will do very creative and interesting things. Certain emperor's new clothes moments might happen: things that, you know, we kind of do currently just because we think we should may suddenly become not that relevant. I think there is reason to think there'll be moments of disillusionment, perhaps in the sense that society, and unions and governments, will suddenly go, hey, this is happening way too fast for me, we need to slow this down, not necessarily for safety, but more for, like, protection of society, of workers, those kinds of things. But ultimately, there's a very small percentage of people in the entire world that will take this and do incredible things with it. They will advance the sciences, biochemistry and physics, mathematics, hopefully economics, although I don't hold my breath there, much, much further. And I think everybody else will be struggling a little bit, trying to balance between the human rate of change that's possible, which is typically three or four per cent per year, and the AI possibilities that will keep opening new doors. Every time you've just about told your staff what you think AI is, another door gets opened, and in three or four months' time you need to tell them what else it is. But unless we adopt it and use it, it won't have any impact on our society. AI is not a weather pattern where you just sit here and it rains on you and then it doesn't; rather, it only comes about because somebody has picked it up and done something with it. And so, innately, the rate of change from AI in the world is only manifested through adoption by people, at the rate of adoption by people. And that gives us a lot more time, I think, than most people believe when they just look at the capability and go, oh, look, you can do this, so therefore tomorrow it's only going to be done this way.

Wouter Klijn 44:46 Yeah. Now that's a fair comment. Well, Mike, thank you very much for this conversation, and thanks for coming to the office. It's fascinating.

Michael Kollo 44:54 My pleasure. Thank you. I look forward to the GPT, by the way, the [i3] AI column. I look forward to reading that.

Wouter Klijn 45:03 Yeah, I should have a few AIs writing for me; that would be helpful. All right, thank you.













