Advertisers and brands are hungry to capitalize on AI. But attempts to market and adopt the technology have grown so omnipresent that it seems the industry has skipped past gaining a solid understanding of what AI actually is.
Alex Castrounis is the Founder and CEO of Why of AI, an AI consultancy that educates and advises businesses on investing in AI in impactful ways. In this episode, he lays out the foundational knowledge marketers need to effectively harness this much hyped emerging technology.
Noor Naseer: Artificial intelligence has been a buzzword for a minute now, and it will continue to be one across 2024 and beyond. Given how much people have been talking about it, you might think you'd be well versed at this point. The reality is most people aren't and could probably use a solid primer, from an unbiased source, to understand what it is. And who better to discuss the topic than a real subject matter expert? Our guest today is Alex Castrounis. He's the best-selling author of a book on artificial intelligence called "AI for People and Business: A Framework for Better Human Experiences and Business Success." He's also a professor in Northwestern University's Kellogg/McCormick MBAi program, which is focused on innovation. He runs a consultancy called Why of AI, which advises clients and businesses on all things artificial intelligence and how they can leverage it in the smartest ways possible. Alex shares a ton of information on what AI is and what its implications, purposes, and use cases are for ad tech and beyond. This episode with Alex, your 101 on AI, starts right now.
—
Alex thanks for joining me today. I know you're a busy guy. There's a lot of stuff happening in the artificial intelligence space, so the time is much appreciated.
Alex Castrounis: Of course, yeah thanks for having me today.
NN: AI has been around for a long time but this new revolution or renaissance around it has really just started. What has really changed about AI now compared to the AI that's been around for the last several years?
AC: Yeah, I mean, as you said, AI has been around a long time. In fact, the term AI was coined in 1956. And even then, the original ideas around artificial intelligence included concepts and potential techniques like neural networks, things that we see today that are very much associated with artificial intelligence. So, indeed, it's not new, although AI as a field has gone through periods of lots of investment, innovation, and progress, followed by what they call "AI winters," where things slow down a bit, then pick up again, and so on. At a certain point we started to have a lot more data available, with the advent of the world wide web, the ability to transfer and store data in much greater quantities, more computing power, and so on, and that led to this growth of AI and ML capabilities.
And a lot of what was going on was really around things like forecasting or predictive analytics, whether it's trying to predict numbers or automatically classify things. And then from there, other techniques started to gain traction and get more advanced as well, like computer vision and natural language processing. I think what led to this moment was when certain kinds of what I would call architectures, what most people refer to as neural network or deep learning architectures, started to be developed by researchers, like the Transformer. I'm not sure if you're familiar with that at all. But the Transformer is the model architecture that underlies the large language models we hear about today, the ones that power ChatGPT and GPT-4, and now Claude, Llama, and Google's Bard and PaLM; the list just goes on and on. That's really where the transition happened in terms of capabilities from a generative perspective with language, as well as these models being able to do things people didn't explicitly train them to do. In other words, they could do different tasks almost on the fly and become very specialized, or what they call "conditioned," based on the intent that you have for them. So, just by introducing this idea of "prompts," you could take the base model and condition it or specialize it in a certain way using certain kinds of examples. But also, you could ask it to do tasks that it was never explicitly trained on. And it turned out that these models are actually much more versatile and generalizable than people originally expected them to be.
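To make that idea of prompt-based conditioning concrete, here is a minimal sketch assuming the OpenAI Python client (v1-style) with an API key set in the environment; the task, the few-shot examples, and the model name are illustrative rather than anything specific from the conversation.

```python
# A minimal sketch of "conditioning" a pre-trained model with a prompt
# instead of retraining it. Assumes: pip install openai, and OPENAI_API_KEY
# set in the environment. The task and examples are hypothetical.
from openai import OpenAI

client = OpenAI()

# A few in-context examples specialize the base model for a task it was
# never explicitly trained on -- here, tagging ad headlines by tone.
prompt = (
    "Classify the tone of each ad headline as Playful or Serious.\n"
    "Headline: 'Retirement planning you can trust.' -> Serious\n"
    "Headline: 'Snacks so good your dog will high-five you.' -> Playful\n"
    "Headline: 'Zero-fee checking, zero headaches.' ->"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content.strip())
```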
Once that became better understood, and a lot of that understanding came out of the research, in papers like "Attention Is All You Need" and the original OpenAI GPT-2 and GPT-3 papers where some of these concepts were really brought to the forefront, we got to the point we're at right now: an exponential increase in global AI awareness and interest. And quite honestly, people are scrambling to figure out, "What is this stuff? How do we understand it? How do we demystify it? How do we use it?" I think that's largely because of OpenAI and ChatGPT, for example. When they launched GPT-3 and things like that, they started to add a usability characteristic to this stuff, sort of a UX and UI if you will, a user interface that's easy to understand and use.
With GPT-3, it was a little more complex because the interface still required you to tune certain parameters and things like that that the majority of people wouldn't be as familiar with. But I think the ChatGPT launch brought the honestly quite remarkable capabilities of these large language models to the public in an interface that's super easy to use, super easy to understand, and that shows results right away. It doesn't require any tuning or configuration on the user's part. And I think that really helps the light bulb go off for people: "Oh wow, this is pretty remarkable stuff." And now the potential applications and use cases are a little bit clearer, at least in the generative and large language model sense.
NN: So, there's folks like you, Alex, who are in the bucket of people with deep subject matter expertise around artificial intelligence, or even something tangential, like working in the predictive analytics space. They're knowledgeable about what neural networks are, or they're deeply researched and educating themselves. And then there's folks on the other side of the spectrum where there's curiosity, and the greatest extent of it might be, "I've tried ChatGPT." So, a lot of folks maybe want to move away from being all the way on one side and get a little bit closer. They're not going to become subject matter experts, but they want to know more about the medium. How should people educate themselves? What recommendations would you make to folks besides just reading the next AI article that pops up in their newsfeed?
AC: It's a great question because it really does depend on what your goals are in terms of understanding. On one hand there's the practitioners: the data scientists, the data engineers, the machine learning engineers, the AI researchers, and so on. If that's of interest, doing the coding or understanding the learning algorithms, models, and technical detail that go into these things, then the path to learning is very different than if you're, let's say, a decision maker, business leader, entrepreneur, or whatever the case may be. My company, Why of AI, actually focuses much more on that education piece with that type of audience: the more business-oriented folks, entrepreneurs, innovators, basically non-practitioners and not necessarily technical people who still need to understand AI and machine learning to some degree, at what I would refer to as the appropriate level for what they need to understand.
One of the challenges with it is that AI and ML are huge fields. Right now a lot of people have what I would call horse blinders on, a very focused view of AI as generative AI or large language models or ChatGPT, but there's still a very large field of AI beyond that: other aspects of natural language processing, computer vision, unsupervised techniques like clustering and segmentation that are often used in marketing and advertising, and so on. There's forecasting, there's classification, there's personalization and recommender systems, and the list kind of goes on and on.
So, it does depend, but ultimately the key thing, the way I help people understand it especially in the business sense, is that it has to line up with goals that you have, either for your business or for a certain department within your business, like sales, operations, marketing, HR, whatever it is. Or maybe you're looking at how to solve certain problems for your specific products or services, or for your customers or users. So, there are going to be goals, needs, gaps, or challenges associated with those different areas. The question then becomes which areas, and which specific types of tasks, you can address using artificial intelligence and machine learning, depending on what those needs are. And usually, it comes down less to the really technical details of the models, algorithms, or tools, and more to what exactly you're trying to do. Are you trying to predict something? Are you trying to augment something? Are you trying to answer certain questions based on some data you have? Are you trying to extract information in a certain way? Are you trying to categorize things, or recognize or detect things in images? So, part of it is learning more about how AI and machine learning help you functionally solve these problems, and what those real-world use cases and applications look like for your business, products, services, departments, whatever.
So yeah, it depends on what you're trying to learn and what you're trying to keep up to date with. But at the bare minimum, I think everyone at this point should have some degree of understanding of what artificial intelligence generally means, what machine learning means, what some of those different areas are, and how they might be used to accomplish certain goal-driven or goal-aligned tasks and results.
NN: If an organization has come to the conclusion that you suggested, which is that their first responsibility is to figure out which of their business objectives could leverage the upside of AI, tell me the more granular details. How are organizations doing that before they turn to you or other subject matter experts in the AI space? Are they doing an audit across departments? I'll use an example specific to the advertising space, where a lot of people work in sales. And a lot of people, if they work at agencies, there's a pitch and sales side of things. And then there's also the workflow piece, the processes piece, and you talked about operations, where I think efficiencies are deeply desired. How do you help people help you, if that makes sense, when they're trying to share their business objectives? Because I think if you started asking everybody in your company, people could come up with an endless list of challenges they're trying to solve for.
AC: Well, that's exactly it, that's spot on. I mean, at this point any organization can benefit from AI and machine learning, especially now with the generative stuff, which has really accelerated some of this. You can see results and value pretty quickly depending on what you're trying to do across the organization. You can help your organization at large, you can help with every single business function you have, you can implement and incorporate AI into your product features, your processes, or your customer experience. You name it. So, you're right in that the options are sort of endless.
I think what it comes down to is trying to figure out where the biggest needs are at the moment: where the biggest gaps, challenges, goals, and objectives are. What are the most important things? A big part of it is a prioritization exercise, filtering things down a little bit and trying to narrow them.
Going back to your first question, though, in terms of what I see out there with companies and how they're approaching it, it's all over the place, quite honestly. In many ways it comes down to what I often refer to as "AI readiness" and "AI maturity." There's a spectrum, there's a scale. There's everything from companies that have not done anything with artificial intelligence and machine learning, that just want to know more about it and start to get their feet wet. Then there's companies sort of in between: they've prototyped some things, they've done some things, but not necessarily gotten AI solutions into production and commercialized or at scale in any appreciable way. Then there's organizations that have a mature, experienced, and sophisticated AI or machine learning team. But often what I see, even in really big companies, is that they tend to be very narrowly focused on certain areas of AI and machine learning based on their core company offerings. They're experts in the specific things that the company's core products or services do, and they've developed very sophisticated models and ways to maintain them, manage them, improve them, and monitor their results. But they're in a similar boat, where AI advances literally on a day-to-day, week-to-week basis at this point, and they're hearing about generative AI and large language models and all that. They're not necessarily already doing stuff with it, and they don't necessarily have resources with the bandwidth to tackle those problems either. Sometimes organizations get around that by setting up centers of excellence or emerging technology innovation centers, things like that. But generally, yeah, it's all over the place. So, it really depends on the organization: how much have they been doing with AI/ML, if anything at all? Do they only specialize in certain areas and still have a lot of opportunity to branch out and figure out how to do more with it?
NN: My guess is that people in the AdTech, advertising, or agency space are less likely to be in the position where they've got in-house data scientists or people managing custom learning models that they're building out themselves. It's more likely that they're going to leverage existing tools. There are so many tools that have popped out of the woodwork in the last couple of months, so I think what a lot of people are doing is assessing those tools. You've mentioned ChatGPT a couple of times, which is very visible, Google's Bard as well, and I think a handful of other free tools. But I also wonder if there are some charlatans out there just selling snake oil, trying to jump on a hot new trend and get in with folks that don't have deep subject matter expertise. Have you seen anything like that out there, Alex, things that people should be wary of in the AI space? Or maybe it's not even their intention to offer something lacking in legitimacy; it's just not as fruitful as what people are looking to gain from AI tools?
AC: Yeah, absolutely. And I don't think that's new, right? If you think back even quite a while ago, there was a trend of everyone saying on their website that their product was powered by AI in some way. You saw that a lot in advertising and marketing tools and platforms. And often they weren't necessarily powered by AI. So, in that case it can be a little hard to assess, because companies aren't always 100% transparent, nor should they necessarily be, because that's their bread-and-butter, secret-sauce, confidential proprietary information if you ask, "Hey, what exact algorithms or models are you using?" So, sometimes it can be a little tricky to determine.
I will say that, to your point, one of the trends we started to see even before this explosion of AI and machine learning interest was around no-code/low-code. There was a big movement to make coding more accessible to companies so that they could set up a website quicker, with Squarespace or Wix or something like that, without necessarily having a programmer in-house. Because, to your point, a lot of companies didn't necessarily have a software development team either, and that requires, you know, UX/UI designers, QA folks, product managers, and this big list. So, whenever you can abstract away some of those technical complexities and make these technologies a bit more accessible to others, that tends to be very attractive, particularly for organizations that don't have that sort of expertise or core competency in-house. I think we're seeing the same movement now with AI tools. We're seeing cloud-based platforms that can help manage the end-to-end process of AI and machine learning development and the deployment of models. We're also seeing APIs that you can use on demand, like the OpenAI API and others such as Anthropic's Claude, making these tools accessible via API calls so you don't have to roll all this out yourself.
Hugging Face is a great example as well. I don't know if you're familiar with Hugging Face, but there's also the open-source movement, and that's been around for quite some time. One of the organizations that's really big in the AI space right now is Hugging Face, and they've basically created a very capable and comprehensive open-source, Python-based library that wraps these Transformer models, so that organizations don't have to train or build these models from scratch; they can benefit from them and use them in a much simpler way within their own tools.
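As a concrete illustration of that point, here is a small sketch using the open-source Hugging Face transformers library's pipeline API, which wraps a pre-trained Transformer so you can use it without training anything from scratch; the example text is made up.

```python
# A small sketch of using a pre-trained Transformer via the open-source
# Hugging Face transformers library, rather than building a model from
# scratch. Requires: pip install transformers (a default model is
# downloaded on first use).
from transformers import pipeline

# The pipeline API bundles tokenization, the model, and post-processing.
classifier = pipeline("sentiment-analysis")

result = classifier("This new campaign creative is performing brilliantly.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```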
You're right that, especially with generative AI, there are new companies coming out nonstop saying they're doing generative AI or natural language stuff. And then the question really becomes: are they differentiated in any particular way, or are they just a wrapper around something like OpenAI's API? Because anyone could do that. Anyone can build a front end, connect it to OpenAI's API, collect some language from a user, whether they speak to the app or platform or type something in, send it off to the OpenAI API, get the results back, and show them to the user. If that's all it is, it's a wrapper, more or less. But maybe they're doing other things: fine-tuning models, doing sophisticated prompt engineering behind the scenes, or connecting not only to those kinds of APIs but also to some sort of database. That way, not all of the outputs returned to the user are generated purely by the large language model through its parameters, which is a statistical thing where the output is simply the most probable output for the prompt you gave it. Instead, they're combining or shifting between outputs that come from real data relevant to that particular application and outputs that come purely from the model statistically generating the most likely output for whatever it received through the user interface or conversational interface. So, it's a bit Wild West out there, to summarize.
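To illustrate that distinction, here is a rough sketch contrasting a thin wrapper with a minimally grounded variant that mixes in application data. It assumes the OpenAI Python client (v1-style); the knowledge-base dictionary and function names are hypothetical stand-ins for whatever data store a real product would use.

```python
# Contrast between a "thin wrapper" and a minimally grounded variant.
# Assumes: pip install openai, OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def thin_wrapper(user_text: str) -> str:
    """Forwards the user's text to the API and returns whatever comes back."""
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": user_text}],
    )
    return resp.choices[0].message.content

# Hypothetical application data -- in practice a database or search index.
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "ad specs": "Display creatives must be 300x250 or 728x90, under 150 KB.",
}

def grounded_answer(user_text: str) -> str:
    """Combines retrieved, application-specific data with the model's output."""
    context = "\n".join(
        fact for topic, fact in KNOWLEDGE_BASE.items()
        if topic in user_text.lower()
    )
    prompt = (
        f"Answer the question using only this context:\n{context}\n\n"
        f"Question: {user_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```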
NN: You used the words I was searching for: there are some existing, accessible, and sometimes free tools like ChatGPT and everything else in that category that's now been popularized, and another organization has put a wrapper on it, customized or repositioned it a little bit, and is now putting that out there; for some companies that might be worthwhile. Something else you mentioned earlier that people have concerns about, or raise an eyebrow at, is when people don't distinguish between what is machine learning and what is AI. I think we touched on this a little bit. It's been talked about a lot in the trades that some organizations are going out of their way to distinguish between one and the other. How do you really describe the difference between them at a time when so many folks are just jumping on the bandwagon to associate themselves with artificial intelligence?
AC: Yeah, the way I've always defined these things and explained these concepts is that with artificial intelligence I always go back to how you would define intelligence in general. If you look up the definition of intelligence for humans, for example, human intelligence or animal intelligence, it always boils down to something along the lines of: you learn, you understand things, and then you use that understanding to carry out tasks or accomplish goals. So, when we're babies we're born with a blank slate, and then we learn from our parents, our school, our friends. As kids we do a lot of trial and error and experimenting. We just keep learning. We develop more and more knowledge that our brain remembers and encodes, if you will. That becomes accessible to us, and it also gives us what we call common sense, which is really that over time we build a world model that we just have operating in the background all the time, even if we don't think about it. All of that allows us to do things with that learning and understanding. We can have conversations, we can get to work every day and get home, we can do the tasks we need to do as part of our job, we can assist clients in a consulting fashion if that's what we do, and so on and so forth.
Machine learning, though, is the learning part of that bigger-picture equation. So that's intelligence as we think of it for humans and animals. Artificial intelligence is literally just a natural extension of that: intelligence exhibited by machines. If you can get machines to also learn, then understand, and then carry out tasks, predict a stock price, predict whether an email is spam, recommend songs you might be interested in listening to, determine whether a skin lesion is cancerous or not, figure out the best price or promotion for a given target market and a given product, that's artificial intelligence. And the learning part, when it's machines exhibiting intelligence, comes from machine learning. Really, all machine learning is is certain algorithms, what they call "learning algorithms." As long as you have data, and usually that data is somewhat domain-specific or industry-specific, or maybe functionally specific as in sales or marketing, or domain-specific in the sense of advertising or insurance or financial services, then these learning algorithms that fall under the machine learning umbrella can automatically learn the underlying correlations, relationships, patterns, and so on encoded in that data, such that you can use them in an AI solution to do those tasks.
So, machine learning in and of itself is the learning part, and the outcome of that learning from those learning algorithms is usually a model. That model is then the understanding part. Like we said with AI, or intelligence in general, there's learning, then there's understanding, and then there's doing and carrying out tasks. The learning algorithms do the learning, and in machine learning the understanding comes in the form of models. In the case of ChatGPT and GPT-4, and I use those as examples regularly just because people are now very familiar with them, those are large language models that have already been pre-trained and made available via an API or a user interface. But those are just a bunch of model parameters. In the case of GPT-4, it's something like a trillion model parameters that were learned during that learning process. That model, once you have it, represents that understanding of human language.
Then what you do with it is what makes it AI. If you don't do anything with it, if all you do is use learning algorithms to learn from data and create a trained model, and the model just sits there on the shelf and does nothing, then that's not AI. It has to then predict something, or classify something, or automate something, or help someone carry out certain tasks at their job every day, or whatever the case may be.
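As a toy illustration of that learning-versus-doing distinction, here is a sketch using scikit-learn: a learning algorithm fits a model from a handful of labeled emails (the learning and understanding), and the trained model is then actually used to classify new messages (the doing). The tiny dataset is invented for illustration.

```python
# Toy illustration of "learning" vs. "doing" with scikit-learn.
# Requires: pip install scikit-learn. The dataset is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free cruise, click now",
    "Limited time offer, claim your prize",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the Q3 campaign report?",
]
labels = ["spam", "spam", "not spam", "not spam"]

# The learning part: a learning algorithm fits parameters to the data,
# and the result is a trained model (the "understanding").
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# The doing part: using the model to carry out a task is what makes it
# part of an AI solution rather than a model sitting on a shelf.
print(model.predict(["Claim your free prize today"]))    # likely ['spam']
print(model.predict(["Agenda for tomorrow's meeting"]))  # likely ['not spam']
```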
NN: Yeah, I think what I've seen a lot in the adtech space is that people have jumped on a bandwagon to say, "We've always had AI." And on some level, it's much more machine learning optimization than it is in fact artificial intelligence. There's also the intent to do dynamic customization or the dynamic delivery of ads. I don't have enough subject matter expertise to say how much those lean towards AI versus machine learning, but I'm hazarding a guess that it's much more machine learning at this time than it is in fact artificial intelligence.
Another question I have for you is about not embracing AI. I think there have been moments in the recent past where businesses, marketers, advertisers, people from any other walk of life and business, have raised an eyebrow and said, "You know, this emerging technology, it's a trend, it's a fad, it's not really going to become a part of our day-to-day." Have you seen any of that skepticism showing up, companies not taking it seriously? And if they're not taking it seriously, do you have fears for companies that are refusing to put time and energy into understanding how they can adopt AI?
AC: I love that question because, yeah, in the past I saw that a lot more. It's funny, because there are a lot of people like myself who have actually been working in this field for quite some time, and we were saying, "Hey, there's this AI thing and machine learning thing, and it can do these things that could be beneficial." A lot of companies weren't taking it very seriously, or they just didn't understand it, or they didn't get what the real-world applications and use cases were, whatever the case may be. So, there was more of that, and there was hesitation around it in general, just sort of, "Oh, AI."
Now I think the opposite has happened, because of the arrival of ChatGPT and these models, but also because the capabilities of these models genuinely have advanced; it's not just hype. These models do remarkable things, especially if you understand how they work under the hood and how they got to the point of being able to do them. Not just with language, but also with things like DALL-E, where you can type text and it generates an image. It's pretty remarkable how we've gotten to this point, and it's not stopping there; it's continuing to go. So, all of that combined: the sophistication has gotten a lot better, the advancements are more capable, and the interest, awareness, and buzz are so much greater. It's less now people saying, "Yeah, I don't know if we need this AI thing." It's more like scrambling everywhere. It seems like more and more, everywhere I turn, the people and organizations I talk to or hear about are scrambling. They're very much worried about missing the boat on this thing, or getting behind, or somehow losing an advantage.
The two biggest questions I get today, hands down, are, first, build versus buy: everybody wants to know whether they should build or buy AI solutions now. And second: should we wait to build? So, it's not so much "we don't think we need it" or "we're worried about going down that path." It's more, "We're scrambling. We need it. We have horse blinders on, and to us AI is now synonymous with large language models and ChatGPT," in some ways almost ignoring the much bigger landscape that falls under the AI umbrella. But the concern now is more: how do we invest time, effort, and money in building solutions when all we're hearing is that this stuff is advancing so quickly? If we go and build something and it's out of date, deprecated, or obsolete three weeks from now, or two months from now, or six months from now, have we created problems for ourselves?
NN: Yeah, the build-or-buy piece sounds exactly right, and it's less about people saying, "We're totally going to ignore this thing." Because the way I'm seeing AI today, it's kind of like saying you're going to ignore the internet and that it's not going to be a part of your business. It just doesn't make sense; it's baked into our expected vision for the future. It's about knowing whether you're spinning your wheels and wasting time doing something you don't need to be doing because something is already available to you, or acquiring resources that could otherwise be better spent on other things that are necessary for your business. So, I imagine that's something you're giving a lot of advice on, and that people are trying to get recommendations or referrals for what they can do at this moment in time. Is that fair?
AC: Absolutely. It's completely fair. And you're right, it is like the internet. I think that's a great analogy. The other thing is, whether we like it or not, or want it or not, everyone is interacting with AI all the time now, with all sorts of different tools and software that they use. Even the people being served the ads: in the case of digital advertising, there are AI algorithms behind the scenes. When you're setting up campaigns, there are AI algorithms optimizing where and when those ads show up, and so on. It's just baked into so much at this point. I don't think it really benefits anyone to ignore it.
I think the bigger thing isn't so much whether you ignore AI or choose not to use it, but rather making sure that you're using it responsibly, fairly, safely, and in a trustworthy way. So, at the same time you're trying to figure out what this AI/ML stuff is and how to use it to benefit your business, your customers, your users, and so on, it's also about how to do that in a safe, fair, and responsible way, and create trustworthy solutions that we're confident in and that we trust.
NN: I'll have to save any follow-up questions about potential concerns around AI for another time. But in the last few minutes that we have together, I do want to ask this question. For organizations that are small, that don't have dev teams available, and that want to make sure they're staying on top of what they can, you mentioned people are developing centers of excellence or task forces, things of that nature. What should people do today when they are, let's say, less equipped and don't have as many resources at their disposal? What should those smaller teams be doing to stay on top of AI as best they can?
AC: I mean, shameless plug here, my company Why of AI certainly helps, at least on the education piece and the strategy consulting piece. We don't build AI/ML solutions; we work with partners that do. But part of it is, if you want to start learning, getting help through workshops or courses for small teams, whether it's us or someone else, to understand AI and ML at the right level, again, not super technical, depending on whether you're a practitioner or not. But in terms of keeping up, I have to say it's really hard. This is something I spend a tremendous amount of time on, and it's not a trivial task to keep up with AI and machine learning today. So, I think the question is really more about which aspects you are trying to keep up with. If you're not focused on all the super technical details or the latest and greatest models, but you are in advertising or digital marketing, or you're in financial services or health care or whatever, I would really recommend, at a minimum, gaining a high-level understanding of the general concepts of AI and machine learning. Not the super technical stuff, not the really in-the-weeds jargon, but a high enough understanding that you can read the articles or the news coming out in the specific industry that's relevant to you and understand it.
The other day I saw this really amazing thing where they're using sound to listen in the oceans for fish activity around coral reefs as a measure of whether a coral reef is healthy, dying, or has died, and then using that information to take actions that help maintain healthier coral reefs. Things like that. You don't necessarily need to know all the technical models and algorithms powering the solution, but you start to get a feeling of, "Oh, I get how you can use audio in certain ways. You can use images and video in certain ways. You can use text in certain ways. You can use structured data that maybe you have in spreadsheets or tables or a relational database, like your CRM or your sales data, in certain ways," and so on.
So, I think the biggest thing, if you're not a technical person or a practitioner, is really just understanding what this stuff is at the right level, and how it's being used in real-world use cases and applications that are creating actual positive impacts, benefits, and outcomes relevant to you. That would be a good starting place, because otherwise the whole thing is just too massive.
NN: It's great framing and perspective. I'll leave it here Alex. I know you have to run to another meeting. AI calls. Duty calls for artificial intelligence. So, this is just a kickoff point for us on this topic. There's much more to learn for everyone. So, maybe we'll touch base with you later on in future episodes.
AC: Well, I can't thank you enough for having me join the show today. And thank you so much. Best of luck and I hope to talk again soon.
—
NN: That's it for this episode. Thanks to Alex Castrounis, Founder and CEO of Why of AI, for all his insight. I'll just say I did not want to get a primer from anyone in adtech who might spin it to speak to a product pitch or a product release at this time. It's a little bit questionable how AI-centric some of those releases are, and I think this is a topic that has so many implications and is going to impact this industry, and so many others, for years to come. So, a lot more to be seen. We'll be touching on AI again and again, so expect to hear it brought up in multiple episodes in the future. Until next time, more Adtech Unfiltered real soon.