Asanti...in conversation with

Neuroscientist Interview: AI Models and Hallucinations, How to Build an AI Enabled Application

Asanti Data Centres Season 1 Episode 2

In the second episode of our “In Conversation With” series, Martin Veitch dives deep into the reality, and future, of AI.

This time, Stewart Laing, CEO of Asanti, is joined by neuroscientist Steve Elcock, Director of Product, AI and HCM at Zellis, who talks us through the process of developing LLMs for AI applications and just why they need so much power. The trio also tackle hot topics like GPU-powered infrastructure, data sovereignty, UK government policy, sustainability, and how AI is reshaping HR and enterprise systems.

With a healthy dose of realism and optimism, this episode unpacks whether AI-driven data centres are the next evolution or just hype.

Introduction: This transcript has been generated for reference and accessibility, with subtitles included on the video for easy navigation. It will not be 100% accurate but should be very close to the conversation.

Well, hello and welcome back to “In Conversation With” - my name's Martin Veitch. And today we're going to be talking about AI data centre infrastructure. Is that the future of the data centre, I hear you ask? Or is it just another silver bullet sold by the IT industry to sell more stuff? Today, to answer that, I'm going to be joined by two special experts, Stewart Laing of Asanti Data Centres and Steve Elcock, recently - we can add, can't we - of Zellis. Tell me a little bit about that to kick off with.

 

So, we were recently acquired by Zellis. I was previously the CEO and founder of elementsuite. elementsuite was born out of, I guess, the frustration of working with big ERP companies. My prior business had been working with Oracle systems. So, we founded elementsuite as, if you like, a way of getting to market to customers who were frustrated with the cost of running big Oracle systems. And throughout that course of building out an HR platform - one of the reasons I think I'm here today is to talk about AI - we've really bedded AI quite deeply into our software platform. And I think that's probably one of the reasons Zellis was interested in us. It's the way things are going. And definitely, the AI part of the business is something I'm very, very much focused on in my new role.

 

Very good. They liked it so much. They bought the company.

 

Tell me, then, maybe, Stewart, we can kick off with this one. There's been a lot on the front pages of the newspapers recently about AI data centres because the government's had this initiative nudging people towards certain data centre designs and investments. What's your take on that? Is the government right to be getting involved here? And will it play a significant role in adoption of AI data centres?

 

It is still very early days, actually, from a data centre perspective. In terms of AI, I've spent the last year or two specifically looking at that, and at what it actually means for data centres. At the moment, what the government's talking about is obviously large-scale, high-powered data centres. So that would be accommodating the new GPU technologies, as we call them, which are super powerful IT systems. But they also require huge amounts of power and cooling to run them.

 

Certainly, it is a challenge for the current industry within the UK because most data centres would have been designed and built a good number of years ago, way before GPUs probably were thought about in terms of what they're now doing. So that does create a lot of challenges currently. And for data centre providers, like ourselves, you really would have to start again almost. You have to look at building something new.

 

So, the idea of building something new, yes. Whether that's something the government should be doing, or whether that should be left to the data centre industry and private companies, is another question. But there is certainly a challenge in the UK right now around doing that. We do need capacity. And at the moment, most of it is going abroad. But that's driven by power costs predominantly.

 

There were a number of opportunities last year that were presented, and companies were invited to bid for that work. But the ones that we saw firsthand and were involved in did end up going abroad. And that was just down to power.

 

OK, and Steve, from your point of view, this government intervention, is it helpful at all? 

 

It's a great question. I mean, I guess the whole AI subject is a very big topic that we can talk about on many, many different levels. The history of AI is strewn with these sorts of ups and downs, winters of discontent and times of great progress. If you look back through the history of things, we've had RNNs, CNNs, LSTMs, all these acronyms that have gradually evolved AI to be more and more powerful. Clearly, in recent times, ChatGPT popularised things very much, and the birth of the transformer architecture by Google, who kind of accidentally found this thing back in 2017, which for me is fascinating. A bit like penicillin, it's one of those accidents, happy accidents of history, if you can call it happy. I think AI is very transformative in how it's changing the world. Of course, that does mean more data, more infrastructure. We're now seeing incredible use cases for AI in medicine, in learning, and all sorts of things. And for us, with HR systems, things that we couldn't really have contemplated before.

 

So yeah, when it comes to government and the economy, I think the Bank of England Governor, Andrew Bailey, just last week, was talking about GPT. And we know GPT in a technology way as being a generative pre-trained transformer. But there was a paper a few years ago that talked about GPT as general purpose technology. And that's certainly how he used the term. They're looking at the UK economy, looking at ways in which it can be boosted. And it seems like AI will be a great way in which productivity could potentially be boosted. So, in my mind, I think there is a huge need for data centre capacity. That said, and I'm sure we'll talk about it later, it depends very much on the use case. AI takes a large amount of power to train the original models. But then when you actually take it into specific use cases, there's far less of a need for that huge power consumption. So, it kind of depends on the stage at which you're training the AI, but more importantly, how you're using it.

 

Absolutely. I think you make a very good point. We need to separate out the training of the LLMs and the consumption of AI as a software technology, right? 

 

Absolutely. Absolutely. These huge hyperscalers - gosh, the millions of pounds that they're spending in training these things, and the millions of kilowatts. You guys know this better than I do, I'm sure. But the training of a large frontier model LLM is very, very significant.

 

I think people have been asked to report their carbon footprints because this thing is so potentially damaging for the environment. So, you have this small number of hyperscalers who can make these frontier models. But actually, once it's trained, and particularly if it is a general model, running it at inference time usually takes far less power. They, of course, have large token windows, which means that if you're putting 100,000 tokens into a prompt or something, there's more power consumption in that inference. But for simple decision making, these large models aren't required. And I think it's about horses for courses. It's picking the right model for the right task.

 

Absolutely. So, Stewart and Steve, we're already getting deep into TLA territory, aren't we? This is IT chat. So, we have to have some three-letter acronyms and abbreviations. But LLMs, in short - the large language models that things like ChatGPT are obviously dependent on. I mean, the creation of those is the big power consumption hog, isn't it, rather than you and I using our Copilot PC and asking silly questions of ChatGPT. Is that correct?

 

Yes, it's true. I mean, apologies for the acronyms; as is the way with all tech, we end up with some acronyms. But funnily enough, large language models, although they're very popular in recent times, have been around for a while as a concept. And the concept is that in language, you have a lot of structure. And when you're training a neural network, it's very useful to understand that structure, so that the weights and the biases - which are kind of inside this incredible, perplexing set of maths, ultimately this blob of maths, which is kind of vectors - can pick that structure up.

 

From my A-level maths, there are these things called vectors. You probably remember them from when you did vector multiplication. But ultimately, it's numbers in this kind of inscrutable model, certainly when you come to deep neural networks. And there are different types of neural networks, of course. But LLMs basically take the entirety of the internet - at least the modern incarnations of LLMs do - for their training set. And they're looking for patterns. They're looking for structure within that language. And ultimately, through millions, possibly billions, of forward and backward passes, through this process of backpropagation, gradually the weights, and the way in which information flows through the nodes in this sort of mathematical blob, change to represent, ultimately, reasoning. And we think of what an LLM ultimately does as next-letter or next-word prediction. But it's kind of more than that. It knows things at quite deep levels. And that's because it's probably one of the best approximations we have these days to our brains.
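[Editor's note: the "next-word prediction" idea Steve describes can be illustrated with a toy sketch. This is not how an LLM is actually built - a real model learns the equivalent statistics as billions of weights via backpropagation - but a simple bigram counter over an invented corpus shows the core idea of predicting the most likely next word from what came before.]

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny, made-up corpus. An LLM learns far richer versions of
# these statistics as weights, rather than storing explicit counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, tally the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The gap between this sketch and a real LLM - generalising to sequences never seen in training - is exactly what the learned weights and backpropagation buy you.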

 

As a neuroscientist - which is what I studied at Cambridge - it fascinates me how neural networks are gradually approximating the brain more and more. When your image falls on my eye, it gets reversed onto the back of my retina. And it goes through these levels of my visual cortex. And I can see the shape and the lines. And maybe at some point, I can understand there's a face shape and all the rest of it. And it's this kind of feature detection that's happening in the layers of the neural network. It's incredible. That's how the brain works. And that's ultimately how these deep neural networks are working as well. And ultimately, it's why they require so much power. Because to make the correct changes and to really get it to understand the entirety of the internet and the knowledge of the internet - which is an incredible thing - is a very, very big power-consuming task. And that's why these hyperscalers are good at making the frontier models. But having made them, as you say, the inference is far less power-consuming afterwards.

 

Absolutely. It's interesting, isn't it? Neural networks have been around a long time - attempts to effectively mimic the way the human brain works. As you said, AI has been around since, well, the mid part of the 20th century, really. But it's only now that these things are coming to fruition. And I think the aspect that relates to the data centre is that data manageability becomes an enormous part of it - trying to control data protection, data security, and so on. And I'm sure from your point of view, Stewart, you're going to have customers coming to you and saying, look, we're getting deeply involved now in AI and machine learning. What are the implications for how I manage my data and my data centre?

 

We don't tend to get involved so much in that side. But we do work very closely with our partners who provide the security aspect.

 

For us, as a co-location provider, the challenge this brings is trying to understand and trying to plan for the future. I've spent the last year or two speaking to numerous organisations and individuals and trying to understand where this is taking us. From a data centre perspective, you don't build a data centre for tomorrow. You build a data centre for the next 5, 10, 15 years. So, the capital investment required is obviously considerable, and you get that return over a number of years. But things are changing so quickly.

 

So, the difficulty for us as data centre providers is: what does that actually mean in terms of what we're building? So, if we're investing - at Asanti we can expand and grow our existing sites, but also do new builds - what are we actually building? What would you build? If it was 10 years ago, I would have said, you build a data centre for this amount of power - it might have been 7 or 8 kilowatts per rack, 10 years ago - to give you that longevity. And with standard IT equipment, we're now up at 8, 10, even 15 kilowatts per rack. So, you're trying to build for the long term with this. I actually don't know the answer yet in terms of where this takes us. And you mentioned earlier on the government data centres. We do need capacity in the UK. But the problem I saw was exactly that. What are we building for? What scale are we building for?

 

Because if the technology keeps going the way it is, even if you build a 100-kilowatt-per-rack data centre, that might also be obsolete in five years' time. Or actually it might not be, because technology may come along and bring it back the other way in terms of the power required to operate these extremely powerful systems, because the powerful systems will change. So, it is a challenge. But it's a good challenge, because once you've actually developed the application or whatever, where does that go? What technology does that require? And for us, that's probably the main focus. It's not so much the high-powered side of it, because the capital investment pertains to a short period of time. Who knows? That's probably the world of the hyperscalers right now.

 

Absolutely. The implications for hardware, for networking, for data centres are fairly profound, aren't they? We talked earlier on - Steve, you mentioned the rise of GPUs in importance. I remember when they were useful exclusively for graphics processing, really. And now, of course, they're a really core part of managing these AI workloads. Tell me a little bit - you mentioned capacity. We're talking about compute capacity. We're talking about networking bandwidth. We're talking about storage. And we're talking about software manageability. These are all vastly complicated processes. And as you said, Stewart, it's very hard to know the future and how to build out for it at the moment.

 

I think in addition to that, it's actually the physical electricity, the power required to get to the location. That's a challenge in the UK right now. Because you're talking significant amounts of power. Your data centres, your regional edge data centres around the UK, will tend to fit into anything from 2 or 3 megawatts up to maybe 10 or 15 megawatts. So, they're not huge.

 

Where the ongoing development of this goes, and how it affects us, is going to be challenging and interesting. But we do need to create that capacity somewhere. So it's power, it's cooling. And then it's all the things you just said as well. So, you take all those factors into consideration. And all of those have got to be considered, whether the government are building large data centres or whether they're getting private companies to do them.

 

There's something Stewart said that really resonated with me about UK. I'm a big believer in UK tech. And I think it's really, really important that we celebrate, we foster the fantastic brains we have in the UK. We've got some good universities, some really good research facilities, and some great startups. And I think, sadly, there is a brain drain often to the US. And one of the contributing factors is the availability of infrastructure and tech in general. And I think the fact that the government are facing into this is critically important for our future. But also, I think, just touching on something Stewart said as well, is that things will change around things like chip design.

 

So Graphcore is a fairly well-known UK chip company. They're competing with NVIDIA. We have the rise of weird and wonderful chip designs to help us with that inference, that runtime of AI. LPUs, language processing units, are a new type of chip design. And there's a company called Groq - Groq with a Q, not a K, so not the Elon Musk Grok. They purely do inference-time AI outputs. So, they don't do the training side. They just do the output side. And I think it's going to be very interesting as things evolve. And as we, if you like, approximate the brain more and more, things will change on many, many different levels: the chip design, the infrastructure, and the fact that the way we're asking the AI to do certain tasks will be much more refined to the nature of the use case. And the final thing that often makes me laugh, as a neuroscientist - I'll never forget my professor telling me: we all need to remember that the magic of consciousness happens on 20 watts.

 

Imagine a little 20-watt light bulb walking around, this thing flashing on your head. That is about the lowest-power light bulb you could imagine. And yet, the great mysteries of the universe are solved at that low power. So, you'd like to think, the more we evolve our thinking around AI, the less and less there will be a need to trash the rainforest and do whatever the hyperscalers are doing at the moment. I mean, Three Mile Island was recently announced to be opened up for Microsoft, wasn't it? Crazy power requirements might not be a thing that's required in the future. And that's not just with chip design, but also with actually looking at the neural networks themselves. Because LLMs are actually in themselves quite inefficient - but perhaps more on that a bit later.

Yeah, absolutely. I mean, it's interesting. One thing that strikes me is that people have been talking about the electrical consumption of data centres as a sustainability challenge for many years now. And yet, it seems to have stood fairly constant at about 2% of global power consumption. You do wonder, don't you, with some of the power-hog technologies that are coming along - not just AI, but also cryptocurrencies like Bitcoin - whether that's really going to change, and what the implications are for the planet and, in a more day-to-day way, for how people get access to these kinds of resources when they're small companies, startups, or public sector organisations.

 

So, Stewart, do you think the government policies, as they relate to AI, are well synchronised with the needs of the private sector and other organisations? 

 

I definitely believe there's an understanding now - maybe still in development, but definitely an understanding now - of what AI is capable of doing and what that means for industry and for the UK.

 

From a data centre perspective, there is definitely a need. There's definitely a lack of capacity in terms of these large-scale, high-powered capabilities in the UK. And they've recently announced the idea of actually creating these things. I don't think there are going to be government data centres, but they certainly want the private sector to come in and do something significant in that area.

 

Last year, I was personally involved in three different opportunities where there were inquiries to build AI data centres, i.e. hyperscale data centres, in the UK. We went through the process, a number of us did. Unfortunately, that opportunity actually went elsewhere - out of the UK. But that was predominantly driven by power infrastructure and power costs.

 

But I do believe there's an understanding we certainly need to do something, and we certainly need to create that capacity in the UK. 

 

Yeah, and I think it's so sad that the opportunity went elsewhere. For me - I think you asked the question, Martin - on the government's AI policies, I would like to see more clarity, because I don't think it's well framed at all, really. I know there's the announcement about data centres, and that's welcome, of course. But I think in general, governments are struggling with all sorts of aspects, all sorts of levels of AI - how pervasive it is already, how it may change the workplace. And I think there's a very important question in the mix there around data security.

 

With more and more AI in the wild, I think a lot of people probably aren't educated about what's happening to their data when they submit it - particularly into the open models, but also into things like ChatGPT. If you don't have a private, boxed-off environment, an enterprise licence, or something, your data is being used to train the models. And there's a whole world of pain, I think, as we unpick what data is going out and what's being used to train models. And that's an important facet as well, I think, of government policy and the guidance they should be giving going forward.

 

I mean, it's a real Pandora's box there, isn't it, in terms of data protection, data security, data sovereignty, data residency. These are all areas of massive implications with AI, aren't they? 

 

Oh, they are. I mean, again, it's early days, and it's very much a wild west. It really is. And it's interesting. In the old days, you had your data classification frameworks, and you'd have restricted and confidential and private and public data, and these things were probably fairly easy to box into those compartments.

 

I think with the advent of AI, particularly where you have kind of metadata - so data about data, or features like we were talking about in your brain and in the layers of the neural network - it's far harder to classify a feature detection, or something that's higher up from the raw data, the ground truth data itself. And of course, that data is being sent around the world over API calls and that kind of thing. An API is just a way of calling an LLM remotely in a data centre. And I think very few people, sadly, are truly aware of what's happening when they submit their prompts.

 

There's horror stories about people sending off mortgage letters. We deal with HR systems. So, security is of paramount importance to us and what we do. And we have to kind of make sure it's completely boxed off. And all the operations we perform are appropriately authenticated and authorised. But yeah, there's lots of opportunity for people to get it wrong, unfortunately. 

 

OK, so Steve, moving away from government interventions and all of that deep stuff, tell me a little bit about developing AI applications in the real world, and the challenges, obstacles, and opportunities that you see.

 

So, I started in earnest looking at all of this when ChatGPT came out two years ago and thought, pretty naively, that if I bought a big enough laptop with some GPU power, I might be able to train a model. And I think it was probably seven days later - with the biggest and best laptop GPU setup I could buy, nearly causing a few house fires, my wife telling me it probably wasn't a good idea - that I learned otherwise. So, you kind of learn these things, I suppose, by trial and error. Clearly, training of models is best done by the major cloud infrastructure companies. So, we've tended to start with a baseline of an existing model, usually a hyperscaler-trained model. Not exclusively - we've taken other models like Llama 2 and Llama 3, open source models that we can take and fine-tune ourselves with the specific things we want the model to do. There's an interesting point there about open source versus kind of closed models. The joke is OpenAI should be called ClosedAI. I think Elon Musk was trying to force that one through for a while in a lawsuit. But certainly, open models provide an alternative to these big Google, Meta, and OpenAI sort of providers. So we've kind of mixed this stuff up. In the way we deploy, we can be not too specific about the endpoints that we call. And that's important because, again, going back to that point about what's the use case we're trying to solve - it might be a very, very simple binary classifier. Is this interaction with this end user something that is reportable to a manager or not? It could be a simple choice. It might be a very complex choice because of the nature of the conversation. So the complexity of the model very much depends on the task you've asked it to do. And in the real world, we've deployed content generation. So, contracts, letters based on sort of templates for HR people, to very much speed up how much work they have to do to generate all of that stuff.
Job posting descriptions, salary benchmarking - AI is very, very good at all of this stuff. It gets more interesting when you move into questions and answers. You can take a model, and of course, that won't necessarily know that much about Zellis or elementsuite or Asanti. So, you can kind of frame it with this, if you like, short-term memory store of all the questions and answers you'd like it to consider when it's answering the question. So you bundle the question with those kind of nearest-neighbour answers into the final model. That gives a very, very good output from the model. And there are various other techniques. You can feed back from a human in the loop at the end to say how good the answer was, and change those question-answer pairs, or even change the fine-tuning of the model. And one of the most powerful techniques we've found is for reducing hallucinations particularly, because hallucinations are a big deal with AI. It does go wrong. You've got horrible stories about lawyers being disbarred because they've used it for cases the AI has kind of dreamt up. So, an ensemble model, or a mixture of experts model, is one way. Imagine we're all three kind of AI models. I might say something, but you, Martin, might be critiquing, and Stewart might be summarising or something. So, each model has a different characteristic. And by contending with each other, they improve the output of the model.
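[Editor's note: the "short-term memory store" Steve describes - bundling nearest-neighbour question-and-answer pairs into the prompt - can be sketched as below. The Q&A store, questions, and answers are invented for illustration, and a simple bag-of-words cosine similarity stands in for the learned embeddings and vector database a real system would use.]

```python
import math
from collections import Counter

# Hypothetical HR Q&A store; a real deployment would hold many more
# pairs and embed them with a trained model rather than word counts.
qa_store = [
    ("How do I book annual leave?", "Use the Leave tab in self-service."),
    ("When is payday?", "Salaries are paid on the 28th of each month."),
    ("How do I update my address?", "Edit it under My Profile."),
]

def vectorise(text):
    """Crude stand-in for an embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def build_prompt(question, k=2):
    """Bundle the k nearest-neighbour Q&A pairs with the question."""
    qv = vectorise(question)
    ranked = sorted(qa_store, reverse=True,
                    key=lambda qa: cosine(qv, vectorise(qa[0])))
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in ranked[:k])
    return f"Answer using this context:\n{context}\n\nQ: {question}\nA:"

print(build_prompt("What day is payday?"))
```

The resulting prompt, containing the relevant stored answer, would then be sent to the model, which is what lets a general LLM answer organisation-specific questions it was never trained on.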

 

These are all techniques by which we've improved the performance of AI in the real world. But there are more and more of these, and they will lead to more and more infrastructure requirements as people get the hang of what they can do. And I think people are starting to get the hang of that.
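[Editor's note: the ensemble idea Steve describes - several models contending with each other so one model's hallucination gets outvoted - can be sketched in its simplest, majority-vote form. The "models" here are stand-in functions with canned answers; a real ensemble would call different LLMs, and richer variants have one model critique or summarise another's output, as in the interview's analogy.]

```python
from collections import Counter

# Three stand-in "models" answering the same question; one hallucinates.
def model_a(question): return "Paris"
def model_b(question): return "Paris"
def model_c(question): return "Lyon"  # the hallucinating outlier

def ensemble_answer(question, models=(model_a, model_b, model_c)):
    """Collect every model's answer and return the majority vote."""
    votes = Counter(m(question) for m in models)
    return votes.most_common(1)[0][0]

print(ensemble_answer("What is the capital of France?"))  # majority says Paris
```

The single hallucinated answer loses the vote, which is the essence of why ensembles reduce (though never eliminate) hallucinations.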

 

One last thing I'll mention is quite a complex use case for us, but one that's very, very powerful, and one we're just making some amazing breakthroughs in. It used to be the case that I'd have to write to my software development team to write me a report. So, if I say, tell me who's got the worst absenteeism at certain times of the month or whatever, I used to have to make a specification, and it would go through a process of triage and come back weeks, months later. Now you can just ask an AI in natural language. And it does an incredible job of converting from human language into code, running the code, and then coming back to you with the answers. And I think more and more of this is going to be useful to business, to make people more productive. And therefore, there'll be more and more of a requirement for infrastructure. Now the form of that infrastructure, I think, is the important thing - where we match the type of infrastructure to the problem we're trying to solve.
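[Editor's note: the natural-language-to-report pipeline Steve describes - question in, generated code run, answer out - can be sketched as below. The LLM call is stubbed out with a canned translation, and the data and names are invented; the point is the shape of the loop, not a working product.]

```python
# Invented sample data: days absent this month per employee.
absences = {"Alice": 2, "Bob": 7, "Carol": 4}

def fake_llm_to_code(question):
    """Stub for the LLM step: a real system would send the question to a
    model and get back generated code. This returns the kind of snippet
    a model might plausibly produce for the example question."""
    return "max(absences, key=absences.get)"

def answer(question):
    code = fake_llm_to_code(question)
    # Run the generated code against the data. In production, executing
    # model-generated code demands strict sandboxing and review.
    return eval(code, {"absences": absences})

print(answer("Who has the worst absenteeism this month?"))  # Bob
```

The speed-up Steve describes comes from collapsing the specify-triage-wait loop into this single generate-and-run step, though the sandboxing caveat in the comment is exactly where the engineering effort goes in a real deployment.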

Yeah, absolutely. I mean, that's why there's so much excitement, of course, about AI. It helps - just one example there - get rid of a lot of rote tasks, things that machines are very good at and people hate doing, right? Tell me though, Stewart, we talked a little bit earlier on, touched on data residency. The Cloud Act has certain implications here. Do you see that as being something that's going to be a positive?

 

There have been a number of articles, actually, that I've read recently on this as well. I mean, certainly the Cloud Act, along with the Patriot Act, et cetera, has been around a while. And so far there's never really been a problem. I think the concern that's growing at the moment is that we don't really know where that's going to go. The fact there is a provision, potentially, to have access to data that people don't want that access to happen with. So that's certainly a concern right now.

 

We see it with critical national infrastructure - some of the reporting in Europe that was being asked for initially, which was for the data centre operators to tell the government what was in the racks, what's actually happening on the servers. As data centre operators, we don't know that. And we shouldn't.

 

As long as it's not illegal, then we shouldn't. So yeah, I think it's just something we need to keep an eye on and to just manage that carefully and the expectations as we go forward.

I would hope that doesn't become an issue. But certainly, as I say, in Europe, that request was made. It can't happen anyway, because the operators don't know. But how far could you take that? I don't want to enter politics, but the Cloud Act was created in 2017 or '18. The current president set that up. So where is he taking it next?

 

And I suppose that's the concern that people have. I don't have a personal opinion on that right now. But it's going to be interesting to see where that goes. And it's something I think people are very wary of. Because at the end of the day, the growth of AI - which I think is just incredible now - is an incredible example of where it's actually going and the positive things it can do. But with the data involved, it is going to be absolutely crucial that it is managed properly and securely. And that people know it's secure.

 

Yeah, confidence. 

 

We all want to know that when we upload our pictures from our mobile phones, they're secure. And that's going to be even more so as we continue with AI development and the benefits that it'll bring.

 

I think it's a very interesting point about security and residency and privacy. In the HR world, we have the challenge of adhering to GDPR and the right to be forgotten of an employee. So, an employee can say, we would like you to erase our HR record in the system, and things like that. In terms of how you erase information that might have got its way into a fine-tuned AI model, that becomes a much harder challenge. In fact, there are academic papers about this. There's one, I think it's called Forgetting Harry Potter or something like it. And basically, they try to make the AI forget Harry Potter. And of course, Harry Potter is so prevalent on the internet, it is quite a challenge - trying to retrain an AI model to extract things from it, from these inscrutable weights in this massive maths blob.

 

So, I think the questions of security and residency become quite mixed up at that point. Because if that data is somewhere in a secure purview, in a secure environment, it's less of an issue than if it's got out to China or something like that. And that's where I think, potentially, even with the trends we're seeing with the Trump administration, there's a bit of a backlash going on in the UK and Europe about how comfortable we are sending this stuff over to the US-hosted hyperscalers. Because those things are becoming more and more important to consider - what data is being sent, and is it securely held. Certainly, when DeepSeek came out, there was a huge panic that things might be moving over to China.

 

Interestingly, Microsoft hosted DeepSeek quite quickly on their own infrastructure because it was an open model. But I think those questions of residency, sovereignty and privacy, they're very much interlinked. 

 

Well, this is the challenge. No one really saw DeepSeek coming, did they? And now it's a big part of the GPT landscape. And I think the big challenge for CIOs, and for all of us really, is that this stuff is just moving so incredibly quickly and has implications for all of these broader issues. So, Stewart, Steve, I'm absolutely aware of the absurdity of trying to squeeze this enormous, rapidly changing topic into the short period of time we have, but I'd love to just ask you for a couple of closing thoughts if you would. Stewart, maybe you go first.

 

I suppose exactly what you said - it's just such a huge subject. From a data centre operations perspective, it asks lots of questions. We are still working on the answers. We talked about the UK; we definitely need capacity in the UK. There's no question about that. But we need to remove the challenges that come with that. But yeah, obviously it's a hugely exciting time and a tremendous opportunity for us at Asanti, but also for the wider industry, and even the UK as a whole.

 

Yeah, and just to echo that, I feel very privileged to be alive at this time. Even AI researchers will say, nothing has happened at this pace throughout the history of AI, really. And yeah, I think it's tremendously exciting. Of course, there's this balance of looking at the upside - all this incredible, hopeful productivity gain that we'll see - with caution about data, what we do with it, and how we handle it securely. And for data centre providers, I think they're in an interesting place, where the right kind of deployment of infrastructure to support the right kind of AI problems being solved is very much front and centre, I think, of most CIOs' minds at the moment. There isn't really a one-size-fits-all. It all very much depends on the use case. We're still exploring the use cases. So, it's going to be very interesting to see how the market develops over the next few years.

 

Absolutely, and we're going to try to strike a balance, aren't we, between innovation and managing risk? Well, gentlemen, that's all we've got time for on this “In Conversation With”, but we'll see you very soon for the next one.