Predicting the Future: Combining Power BI & Azure AI for Accurate Forecasting
Webinar
TRANSCRIPT
Welcome everybody to "Predicting the Future: Combining Microsoft Power BI and Azure ML for Accurate Forecasting." Just to quickly let you know who we are, we're Eastern Analytics, a boutique consulting service that helps customers maximize the value of their data. We specialize in Microsoft Analytics, Power BI, and Azure AI. As seasoned experts, we have the functional knowledge to understand our customers' sources and requirements, and the technical expertise to design and build systems that are robust, flexible, and secure. We assist our customers with their analytic needs, taking them through design, development, deployment, and DevOps. And we provide technology advice, solution architecture, design engineering, dashboards and visualization, and staff documentation.
Now, Scott Pietroski is Eastern Analytics' Managing Partner and Senior Solution Architect. For more than 25 years, he's been building out analytic platforms for corporations such as Bose, Adidas, Estée Lauder, and many more. He specializes in Microsoft Power BI and Azure, and he's excited to share today's presentation with you.
Hi, everybody. Welcome to today's presentation. Today, we're going to do an overview of Microsoft Power BI and Azure ML. After that, we're going to look at an example of using Azure ML: a quick prediction of automobile prices, which is a multiple regression using Auto ML. Next, we're going to walk through a simple use case consuming that model in Power BI, so that you can actually see how to consume it. After that, we're going to do a second example where we predict beer and wine demand. That's a time series forecast, which is what you'd usually think of when it comes to forecasting. And then we're going to consume that model inside of Power BI. After that, we're going to talk about considerations when designing a reliable forecasting model. And at the end, we're going to do Q&A. We're leaving the Q&A until the end just because we're packing so much into this session that Q&A can send us down a rabbit hole. So, we're going to try to keep that to a minimum.
So, the Power BI desktop and the service. Most of you are probably familiar with the Power BI desktop, and probably with the service. The desktop is a standalone application: you install it on your PC and do all the development work locally. The Power BI service is a web service similar to the Power BI desktop. The service is what most of the people in your organization probably think of as Power BI. You can publish to the service from the desktop, you have access control capabilities, it's broken into workspaces, etc. As for AI/ML integration, both the desktop and the web service allow you to consume Azure ML models.
So, if we look here briefly, this is the desktop, and we've gone into the Power Query editor. When you first enter the desktop, there's a button up here in the middle for Transform Data. You select Transform Data and it pulls up the Power Query editor. In this example, we have a data set with automobile pricing that we pulled in. And when we think about applying ML to it or consuming a forecasting model, we're going to want to look up in this area at AI Insights. There are different things in AI Insights. There's Text Analytics, which is a cognitive service provided by Microsoft. That's where it will do things like sentiment analysis, key phrase extraction, and language detection. There's also Vision, which is where you would analyze pictures or images. It's another cognitive service; it will return tags describing the content of the different images. It recognizes, I think, over 10,000 shapes and objects.
What we're going to be doing is looking at the Azure Machine Learning piece. If we look at the Power BI service — in this case, we can do it in a Pro subscription — these are the workspaces. We created one workspace for this demo. In it, we've uploaded a data set from the Power BI desktop. But if you want to consume models from within the service, you end up doing it inside a data flow. So, the data set came from the desktop, and the data flows are things we created inside of here. And if you go into the Power Query editor inside a data flow, you can consume a model as well. As for AI/ML consumption in Power BI: ML is consumed, once again, using Power Query. It's wizard-based, which is very easy to use. The system automatically suggests column mappings based on the target data type.
And for data preparation, we're going to be using something called Auto ML as part of this. It's basically the no-code version of ML. What happens is that as you go through and train your model, Auto ML automatically does some data preparation steps for you. And the good thing about data preparation with Auto ML is that anything your training set goes through to create your model, your inference set — which is being delivered from Power BI in this case — goes through those same steps. You can consume and apply any machine learning model created on the Azure ML platform in Power BI. The limitation is that you need to publish it in order to consume it, and you also need to be authorized for that workspace.
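That idea — the inference set must go through exactly the same preparation steps as the training set — is the same one a scikit-learn Pipeline captures. Here's a minimal sketch (toy data; Auto ML does the equivalent for you behind the scenes):

```python
# Sketch: fit preprocessing on the training data only, then apply the
# identical transformation at inference time. The Pipeline guarantees
# the scaler fitted during training is reused for predictions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([10.0, 20.0, 30.0, 40.0])  # toy data: y = 10x
X_infer = np.array([[5.0]])

model = Pipeline([("scale", StandardScaler()), ("reg", LinearRegression())])
model.fit(X_train, y_train)    # scaler statistics come from training data only
pred = model.predict(X_infer)  # same scaling is applied to the inference row
print(round(float(pred[0]), 1))
```

If you instead scaled the inference set independently, the model would see inputs on a different scale than it was trained on, which is exactly the mismatch Auto ML's shared prep steps prevent.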
If we look at the Power BI desktop — this is the desktop once again, inside Power Query — we've gone up and, for this webinar, selected Text Analytics. In the other examples we're going to select Azure Machine Learning, but the GUI is the same. So, in this case, we selected Text Analytics.
We can see the three different cognitive services that Microsoft provides. It's very simple. It automatically pulls up a field, which is the feature you need to pass to their trained machine learning models. You map a field from your table into it, and it will automatically add an inference column to your table so you can consume it. It calls the endpoint and does the predictions.
For the Power BI service inside of a data flow, this is a little bit different. Inside of its Power Query, up at the top, AI Insights has a button that looks like a brain. You select that button and it automatically pulls up all of the available AI models that you can use. In this case, we have the Azure Machine Learning models from the workspace we're authorized to see. We created two of them for this demo, and we could select either one. Below those, we can see the cognitive services — once again, these are the Microsoft-supplied cognitive services, and they represent the same options you see in the buttons in the desktop.
Last but not least — we're not going to go into it here — we also created a Power BI machine learning model in Power BI itself. In Power BI Premium, you can go in and actually create machine learning models in your data flows. We're not going to cover that here. We do have another webinar on it; if you want, email me and I'll send you the recording. It goes into how to use the Power BI ML capabilities and compares and contrasts them with Azure ML.
Once again, you can consume any model that's available inside of Azure ML as long as it's published as a web endpoint, and you're authorized to see it.
Azure ML at a high level. Azure ML itself is a machine learning platform. It's standalone — you have one of your platform people go in and stand it up. It has seamless integration with Power BI, it includes Auto ML functionality, and it's an entire toolkit for building and deploying models.
So, what is Auto ML? You'll hear me talk about Auto ML; you hear that term all over the place. Auto ML is basically a scripted environment where you go in and identify your training set.
So, you create an Auto ML job and connect it to your training set. Then it loops through all of the algorithms available within the environment and sees which ones are the most accurate. It tunes the hyperparameters of the most accurate algorithms and then outputs models for each of them. It also recommends the best — that is, the most accurate — model. You can use that as a starting point; most people do. Or, if it's accurate enough, you can just use it as a production model and use it for your inferences.
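Conceptually, the Auto ML loop looks something like this sketch — a simplified scikit-learn illustration of "try several candidates, score each, keep the winner," not the actual Auto ML internals (the real service also tunes hyperparameters and preps the data):

```python
# Conceptual sketch of an Auto ML job: loop over candidate algorithms,
# score each with cross-validation, and keep the most accurate one.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

candidates = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Mean cross-validated R^2 for each candidate algorithm.
scores = {
    name: cross_val_score(model, X, y, cv=3, scoring="r2").mean()
    for name, model in candidates.items()
}

best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X, y)  # refit the winner on all data
print(f"best model: {best_name} (R^2 = {scores[best_name]:.3f})")
```

Auto ML then surfaces each candidate as its own job, so you can inspect the less accurate runs too, as described below.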
Azure ML is the platform, and it's designed for enterprise-level ML. It can be either GUI-driven, which I'll show you in a moment, or code-driven. So, data scientists who are used to doing everything inside Jupyter or Python notebooks have the same Jupyter notebooks inside Azure ML, and it has an SDK that allows you to consume and interact with all of the different objects within the platform. So, it provides a framework for common data science tools.
From a technical standpoint, the data volume and size are effectively unlimited. Think of it as storing all of your data inside an Azure Data Lake Storage account, and you can scale your computes as needed — that's why the data volume is effectively unlimited. Actually, in one instance we bombed a compute because we just didn't have enough memory in it, and it had 512 gigabytes. So, it depends on your data set. It isn't truly unlimited, but you can segment your data so that you can get what you need out of it.
For access control, it's role-based authorization, so you can control exactly who can do what inside of Azure ML. And model retraining is orchestrated through Azure Data Factory. So, you can run all your experiments in Azure ML, but when you want to put something into production and orchestrate it, you use Data Factory. I won't go too far into the Azure ML Studio itself because we don't have the time.
We also have another webinar comparing and contrasting Azure ML with Databricks ML. It goes much deeper into this area. So, if you want, email me and I can send you that as well.
This is the Azure ML Studio. If we look here, this is a pipeline: you can drag different activities into the designer and build your whole ML experiment visually through the GUI. You can also use notebooks, where you can code all of it. And you can use Auto ML, which is the automated process I spoke about.
So, in there you can connect to different data assets. Think of linked services, if you know Data Factory — you define those in there along with your data sets. All of the stuff that runs is recorded inside of jobs. You have the pipelines we're looking at here, and it has a model repository; endpoints, which are what you consume; and environments, which are predefined environments for when you need TensorFlow or are just looking for your standard ML libraries.
These are some of the objects if we look under the hood, in the Azure portal side. For an ML environment, we've got our ML workspace; a Kubernetes service, which is our inference cluster, which you can spin up from within Azure ML; a key vault for security; Application Insights for your logs and problem-solving; and last but not least, everything that gets pulled into Azure ML goes into your own storage account. So, all of the metadata, all of the configuration you do in Azure ML, all the data it pulls in, and any transitory data sets used between different flows or data prep steps — that's all inside your own storage account.
If we look now at an example — this is just a quick forecast example. In this case, it's not time-based; we're simply forecasting what an automobile would sell for. This is based on a Kaggle data set, the eBay used car sales data set. In this case, we staged the data in an Azure SQL database. We have our training set, which is our historical sales, with the different attributes about a car — pretty straightforward: the year of the sale, the zip code, the make, the model, the year of the car, the mileage, and what it sold for.
We also have an inference set. Let's think of ourselves as an auto wholesaler: we go out and buy used cars, and then we resell them. So, we want to figure out which used cars we should buy and whether we're going to make money off them. Here's a list — this is our inference set — of cars for sale on the wholesale market that we can buy, with their list prices. What we did was say, "Okay, let's take the training set, where we know the sales price, and predict the sales price of these cars that are listed for sale." Once we have that, we can do the analysis and determine which of these vehicles we should buy to make the most money.
These are the steps we go through. It's basically a supervised machine learning problem. We connect Azure ML to the data store. We register the data asset, which is the training set. We create a new automated ML job. We review the results, and then we publish the model that's the most accurate.
If we look in here, this is the first step: we create the data store. This is the linked service. You're just giving it a name; the type of connection is an Azure SQL database. We happen to be a Microsoft partner, so we're putting it inside our partner network subscription, and so on.
After you've created that, you create the data asset — think of it as your source table. For this data asset, you can literally enter your SQL. In the SQL here, I've done some cleanup with data types and things like that to make it easier for ML to use; the types didn't match across the tables, so I cleaned them up. You can do whatever you want in your SQL and then consume this data set.
Next, we go in and create an ML job. We just select Auto ML inside of there, connect to our asset, and choose the data set we want to train on. In the next steps, it's going to ask you some information about the job. You go in and select the task settings. There aren't many settings: this is basically the compute you're going to run your job on, plus any particulars — checking or unchecking certain columns as to whether you want to count them as features. Then at the end, we tell it how to do the train/test split and how to validate — cross-validation or whatever you want to do.
After you run the job, this is the output of an Auto ML job at a high level. It's gone through and looped through all of the different algorithms, figured out which one is the most accurate, and tuned the hyperparameters as much as it could, saving the result as the best model. It points you at the job and its output, which is the model you can then publish.
When you publish that, it's all stored in a registry. It all works through MLflow, which stores all the statistics. Every one of the runs is its own job, so you can see the other algorithms that were less accurate and look at those details as well. It has displays for feature selection, things like that.
You choose the model that you want and, in the end, you publish it as a web service. When you publish it as a web service, it'll ask you which Kubernetes compute you want to push it to, and it takes about 5 to 10 minutes. It's then available.
Now, how do we consume that model inside of Power BI? First, inside of Power BI, we connect to the inference set — the data set we need to know the prices for. We connect to that data set, then consume the model, and the forecasting model returns its predicted prices. So: we connect to the inference set, we adjust field types (I'll explain that in a moment), we assign the ML model to it, we adjust any field types again for reporting, and then we consume it all in a report.
So if we look here, in this case our inference set is in SQL Server, so we just go into New Sources, as in standard Power BI. We select our data and refresh it to pull the data into Power BI.
Once the data is in there, we go through and adjust the column types of our features. And the reason I say "adjust" is that when we go in and use ML inside of Power BI, it's all wizard-based. It shows us the features the machine learning model expects as parameters, and it gives you a drop-down list of the fields you can assign to them. If you do not adjust your types — that is, make sure the types inside this data set match what your machine learning model wants — then the fields won't appear for assignment. So, we adjust the types.
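The same type-matching idea, sketched in pandas with a hypothetical schema (the column names and expected types here are illustrative, not the demo's actual ones):

```python
# Make each feature column's type match what the trained model expects,
# mirroring the type adjustments done in Power Query before the wizard
# will offer a field for mapping. (Hypothetical schema.)
import pandas as pd

expected_types = {          # what the ML model's features expect
    "year": "int64",
    "mileage": "int64",
    "make": "string",
    "list_price": "float64",
}

inference = pd.DataFrame({
    "year": ["2015", "2018"],         # arrived as text
    "mileage": [42000, 31000],
    "make": ["Honda", "Toyota"],
    "list_price": ["12500", "18900"]  # arrived as text
})

for col, dtype in expected_types.items():
    inference[col] = inference[col].astype(dtype)

print(dict(inference.dtypes.astype(str)))  # now matches expected_types
```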
We then go up and use the Machine Learning selection here. It pops up with the machine learning models that we have — these are our two forecasting models — and this mapping automatically pops up: these are our feature fields, and these are the fields inside our table. If they're not the correct type, you won't be able to assign them; you'll have to close out of the wizard and go fix them. Once you do this, you select OK — there's nothing more to it.
It then comes through and adds your inference to your table. After it adds the additional column for your inference — I don't have it here, it's over to the right — you'll want to adjust the data types of your fields again, if you need to, so that they match what you want for reporting. It's just something you want to go through for reporting.
After you adjust them — in this case, this is just a simple report we put together with some calculations on it. This is our inference set. We have the different attributes of the car along with its list price; that's what we can buy it for. We then have the inferred, or forecasted, value.
And now we can go through and add some calculated columns with conditional formatting that ask, "Okay, what is the possible markup percentage — the difference between the list price and the value we can sell it for? And what is the potential profit?" We set thresholds on these. Because of that, we can now look at listing data for automobiles as a wholesaler or a used car salesman. I can go in, look at the history, look at what's available, and determine which vehicles I can buy that will meet my margin requirements and my minimum profit requirements. That's the first example.
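Those calculated columns and thresholds can be sketched like this (toy numbers and hypothetical column names, just to show the arithmetic behind the report):

```python
# Markup percentage and potential profit from list price vs. the
# model's predicted sale price, then a threshold filter standing in
# for the report's conditional formatting. (Hypothetical data.)
import pandas as pd

cars = pd.DataFrame({
    "model":           ["Civic", "Corolla", "F-150"],
    "list_price":      [10000.0, 12000.0, 25000.0],
    "predicted_price": [12500.0, 12300.0, 31000.0],
})

cars["potential_profit"] = cars["predicted_price"] - cars["list_price"]
cars["markup_pct"] = 100 * cars["potential_profit"] / cars["list_price"]

# Keep only vehicles meeting both margin and minimum-profit requirements.
buys = cars[(cars["markup_pct"] >= 10) & (cars["potential_profit"] >= 2000)]
print(buys["model"].tolist())  # ['Civic', 'F-150']
```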
Now, the next example is a little bit more complex. This is a time series forecast — it's still forecasting, and you still use Auto ML to do it. You just tell it that it's a regression with a time series and which column is the date, and it will do the rest. Our data set here is a different data set from Kaggle: grocery sales in Ecuador.
This was a large data set from a chain of grocery stores. It provided all of its data for five or seven years about what it sold across different product categories, or product families. We just filtered it to liquor, beer, and wine, because that's what we like. And one of the features here is something called "on promotion."
What "on promotion" means is this: if you owned a supermarket that sold beer and wine, you'd send out flyers and coupons periodically — in newspapers, and now on the internet. And you may have five flyers out trying to pull in your customers at any one time. So a certain beer might be on sale that week, or a certain wine might be on sale. That's important when it comes to sales: it draws people into a store. So one of the features is how many products we had on promotion — and just as important is the deal itself.
Now, in this case, we're doing a time series forecast, so our training set goes up to a certain point in time. We use everything up to that point for training. Then we want our testing data set — the one we're going to send in to get predictions for — to start after the training set ends. That way, the model has not seen our data yet, so it really is an inference; you don't have to worry about the results looking artificially good because the model has already seen the data. So what we want is for our testing set to start at that point and go forward.
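The temporal split described above can be sketched in a few lines (toy data; the point is that the cutoff date guarantees no overlap between training and testing):

```python
# Train on everything before a cutoff date and hold out everything
# after it, so the prediction really is on unseen future data.
import pandas as pd

sales = pd.DataFrame({
    "date": pd.date_range("2016-01-01", periods=10, freq="D"),
    "units": [5, 7, 6, 9, 12, 4, 6, 8, 11, 13],
})

cutoff = pd.Timestamp("2016-01-08")
train = sales[sales["date"] < cutoff]
test = sales[sales["date"] >= cutoff]

# No overlap means no leakage of future data into training.
assert train["date"].max() < test["date"].min()
```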
There are a lot more features inside the data set that you could use; we didn't use them for this example. You do have things like whether or not the date is a holiday, which certainly could influence the forecast — whether it was a national or local holiday, for instance. Or the region of the country: different regions may have different religious makeups, and people in some areas may not consume as much as in others. So, there are certainly a bunch of other features. Here, we just use these three, and we're going to predict the sales amount. Once again, we staged this data in an Azure SQL database.
You go through the same steps that we did for the prior example; you just select that it's a time series and identify the date column when you define your machine learning model. You create your data asset for your training set — and for a time series, you can also assign a testing set. You create a new Auto ML job, review the results, and publish the model. I won't go through all the screenshots; it's the same basic steps, and it's wizard-based.
Now, when we think about the data: we're still going to connect Power BI to our training and inference sets, and I'll show you why in a moment. We prep our data for ML consumption like we did before, and we assign the model — in this case, to both sets. The reason for that, I'll show you in a moment: we want to look at our training data as it was run through the model, alongside our forecast, which is the future data. Then we create an append table inside of Power BI to combine them for presentation, and finally we present the data.
If we look at it here, we can see that I'm pulling in the training data, and I'm also pulling in the forecast set, which is the future data. And in the end, I create something called an append table.
Here, we walk through it. We've already adjusted our data types, and we actually applied the model to the training set — the first one we did was the test set, but we ended up doing it for both. It's the same steps: you adjust your types so that the wizard will work; you go select Azure Machine Learning; it pops up and you select your model — if you have multiple models, the forecasting model you're going to use — then you assign the fields and run it. It appends the inference to the table. After that, you can consume it however you want.
If we look here, what we did was send our training set through and add an inference column to it. So, in there we have the original actual value plus the forecasted value in a different column. Then we have our testing set, which we did a forecast on. Now I'm combining the two of these — not with a join (a merge), but with an append, so all of the records go into one table. And the reason we do that is so that we can present them together.
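The append-rather-than-join step looks like this in pandas terms (hypothetical column names; `pd.concat` plays the role of Power Query's Append Queries):

```python
# Stack the historical rows (actuals + model's fitted values) and the
# forecast rows (predictions only) into one table for charting,
# rather than joining them side by side.
import pandas as pd

history = pd.DataFrame({
    "date": pd.to_datetime(["2017-08-01", "2017-08-02"]),
    "sales": [100.0, 140.0],
    "predicted": [95.0, 150.0],
    "set": "history",
})
forecast = pd.DataFrame({
    "date": pd.to_datetime(["2017-08-03", "2017-08-04"]),
    "sales": [None, None],          # future dates: no actuals yet
    "predicted": [120.0, 160.0],
    "set": "forecast",
})

appended = pd.concat([history, forecast], ignore_index=True)
print(len(appended))  # 4 rows: both sets in one table
```

With both sets in one table, a single line chart can plot actuals, fitted history, and the forward forecast on a shared date axis.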
Now, if we look at the presentation, this is just our append table, where we've got multiple data sets combined. We can see our actual sales — this blue line is the actual sales as they happened. And this is the forecast data: the forecast of our sales based on our training set and run through the model. It looks pretty good; the pattern looks similar. If you look at the actual dates down on the bottom, you can see there's a spike of sales on Friday, Saturday, and Sunday. So, they do most of their sales on the weekend, which makes sense for beer and wine. And that pattern continues if we look at the dates in our forecast data. So, so far, so good.
Now, one of the things I like to do is add in our training set as it was sent through the forecast model, to get an inference on it. So the blue line here is the actual historical data we used to train, and this orange line is that same historical data sent through the model to see what it would predict. From here, we can see that the peaks on the weekend days look fairly accurate — the tops are pretty close. Maybe on these days we need to look into this area. But we can also see that in the slower periods it's having problems, where we might be able to fine-tune it. So, we'll want to look into the features we choose and whatever feature engineering we can do — whatever we can do to tighten this up. By tightening up the fit on history, we should be able to more accurately predict the future.
So, we've gone through and talked about loading a data set, using Auto ML to build the models, and then consuming the results and displaying them.
If we look here: how do we go about creating a better forecast? In forecasting, or machine learning in general, when you think about the different algorithms that Auto ML runs through, you'll have two or three algorithms that are all the leaders, and they'll be extremely close. So, it really comes down to features. How can I enhance my data set? How can I clean my data set? Do I need to normalize? Do I have to look at class imbalances? All of the different things you need to do to your data — that's how you're going to improve the accuracy of your model.
So, can you add or remove any features — time, the stores, the geography, others? Just a note for you: Auto ML automatically adds time features. In this case, we're entering a date, and it will automatically pull in the year, the month, and the day of the week for that date, which certainly could influence things. We know they sell more on Friday, Saturday, and Sunday, so the day of the week is important. Auto ML does that for you automatically.
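Done by hand, that calendar feature derivation is just a few lines (a sketch of the kind of features Auto ML derives from a date column):

```python
# Derive year, month, and day-of-week features from a date column --
# the sort of calendar features Auto ML adds for you automatically.
import pandas as pd

df = pd.DataFrame({"date": pd.to_datetime(["2017-07-04", "2017-12-25"])})
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["day_of_week"] = df["date"].dt.day_name()
print(df["day_of_week"].tolist())  # ['Tuesday', 'Monday']
```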
Auto ML also removes insignificant features. That's part of something it calls its data guardrails. It will look at cardinality between your features, and it will clean your data. In the case of removing insignificant features, it'll look at cardinality and say: if I've got three features that under the hood really represent the same thing, or have the same cardinality, then I can just use one of them. So, it'll choose one and go ahead and do its training.
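A simplified sketch of that redundancy check — two columns that are 1:1 recodings of each other carry the same information, so one can be dropped (the real guardrail is more sophisticated; the data and helper here are illustrative):

```python
# Sketch of the "remove redundant features" idea: if two columns are
# 1:1 recodings of each other, keep only one of them.
import pandas as pd

df = pd.DataFrame({
    "store_id":   [1, 2, 3, 1],
    "store_code": ["S1", "S2", "S3", "S1"],  # same info as store_id
    "units":      [10, 5, 7, 12],
})

def redundant(a: pd.Series, b: pd.Series) -> bool:
    """True if each value of a maps to exactly one value of b and vice versa."""
    return bool(a.groupby(b).nunique().eq(1).all()
                and b.groupby(a).nunique().eq(1).all())

if redundant(df["store_id"], df["store_code"]):
    df = df.drop(columns=["store_code"])

print(list(df.columns))  # ['store_id', 'units']
```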
An important thing: are you dealing with seasonality? Seasonality is cyclical — a pattern over time. It could be your holiday season. When you're dealing with seasonality, the general rule of thumb is that you want your training set to include that season at least twice, from trough to peak.
Think of it this way: you don't want to load just one year's worth of data. If you're dealing with calendar-based seasonality, you'll want to load two or three years' worth of data so the model can recognize the patterns. Are there patterns in your data that are predictable — things like the Super Bowl, or maybe World Cup soccer? Is it a global or a local pattern? Think of the Fourth of July: in the US, there are all sorts of beer and wine sales around the Fourth of July. And the Fourth of July is pretty straightforward, meaning the model should be able to recognize that on the holiday, and in the four or so days before it, everybody goes out and does their shopping for the Fourth of July.
Now, if you're looking at something like Thanksgiving, the date changes every year. Because of that, the calendar date may not be the best way for the algorithm to pick it up. In cases like that, you might want to use a rolling window. With a rolling window, you can do a sum or a moving average as of that point in time — say, a sum over the prior seven or five days.
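A rolling-window feature like that is straightforward to build in the data-prep stage (toy daily sales around a movable holiday; `pandas.Series.rolling` does the windowing):

```python
# Build a 7-day trailing sum and average of sales as of each date --
# the kind of rolling-window feature you'd add in data prep, since
# Auto ML won't create it for you automatically.
import pandas as pd

s = pd.Series(
    [10, 12, 11, 9, 30, 45, 50, 14, 12, 11],
    index=pd.date_range("2016-11-18", periods=10, freq="D"),
)

rolling_sum_7d = s.rolling(window=7, min_periods=1).sum()
rolling_avg_7d = s.rolling(window=7, min_periods=1).mean()
print(rolling_sum_7d.iloc[-1])  # sum over the prior 7 days
```

Because the window is anchored to each date rather than to a fixed calendar day, the spike before a movable holiday shows up in the feature no matter which date the holiday lands on that year.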
Now, Auto ML does not do this automatically, so you have to build it into your data in your data prep and transformation stages. That functionality is available in Azure ML's SDK, so if you're going to code inside Jupyter notebooks, you do have access to those statistical functions. You just don't have it in Auto ML itself.
One thing you want to be aware of: do not train with features that are not available at inference time. What do I mean by that? Sometimes when doing forecasting — let's say for a restaurant's demand — you naturally want to ask, "Well, what was the weather like that day?" But you're not going to want to include weather conditions as one of your features if you don't have them for your inference set. If you don't know what the weather is going to be, and you're trying to predict inventory for your restaurant two weeks out, you can't accurately predict the weather, so you should not include it in your feature set — you'd be relying on a prediction that's unreliable.
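A simple sanity check for that rule might look like this (hypothetical feature lists; the idea is just set difference between what you trained on and what you'll actually have at prediction time):

```python
# Guard against training with features you won't have at inference
# time: keep only the features known in both sets. (Hypothetical names.)
training_features = {"date", "on_promotion", "store", "weather"}
available_at_inference = {"date", "on_promotion", "store"}  # no future weather

usable = training_features & available_at_inference
dropped = training_features - available_at_inference
print(sorted(dropped))  # ['weather'] -- don't train on it
```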
So, consider the risk when combining forecasts. Consider the risk of forecasting the weather and then using that forecast as a feature: any variance between your weather forecast and reality could easily do you more harm than good.
Last but not least: experiment, experiment, and experiment. That's why they call them experiments. Azure ML is designed for that. You can do experiment after experiment — test and refine, then test again and refine again.
So, I know I included a lot of information there. I tried not to go too low-level, just because of the variety of the audience. But now we're open to questions and answers. And I have one here in the Q&A.
Can notebooks in Azure ML be used in a manner similar to Auto ML in Power BI? For instance, a set of code specifying data transformations and/or algorithms. You can always stage your data and do things. So yes, for your data prep inside of Azure ML, you have the ability to read from and write to your storage. You can code your notebooks to do all of your different data preps, transformations, harmonizations, normalizations, cleansing, whatever you want, then write that back to your data store and consume it in a model. So yes, I think that's what you're asking.
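As a rough sketch of that read-transform-write loop in a notebook, using hypothetical data. In a real Azure ML notebook you would read from your workspace datastore; here an in-memory StringIO stands in for that file:

```python
import io
import pandas as pd

# Stand-in for a file on your Azure ML datastore (hypothetical data).
raw = io.StringIO("store,sales\nA,100\nA, 250\nB,90\n")

df = pd.read_csv(raw, skipinitialspace=True)

# Typical prep steps: cleanse, harmonize, normalize.
df["store"] = df["store"].str.strip().str.upper()
df["sales_norm"] = (df["sales"] - df["sales"].mean()) / df["sales"].std()

# Write the prepared set back out for a model to consume;
# in Azure ML this would target your data store, not a buffer.
out = io.StringIO()
df.to_csv(out, index=False)
```

The same pattern scales up: read the raw set, apply whatever transformations you want, and persist the result for training.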
Can notebooks in Azure ML be used similar to Auto ML? Yes, they can. Oh, and I think your question is here, David. Interestingly enough, when you create an Auto ML job in Azure ML, everything it generates is actually stored in Python notebooks in the ML files. It creates scoring script files as well, and it creates YAML files to be able to spin up the containers. You can use those notebooks from Azure ML as a starting point. They call all the different MLflow functions to do the recording and to register your models; everything happens inside of that notebook for Auto ML. You can take that Auto ML notebook and completely tweak it, using it as a structure and a framework to generate new models that might be more accurate.
Let's see, do we have any other questions? Oh, another one. This is an interesting one. Does Eastern Analytics offer Power BI and Azure ML training? Well, for the most part, we're actually implementation people, but I would be very interested in talking to you about it, because I do put together webinars on specific topics. Worst case scenario, we could also point you to somebody who's very good, or one of our colleagues. So, Laine, please give me a call.
Now, let me put up our information here. When I say, "Give me a call," at least now you can see it. And then we have some other questions here. I'm just scrolling through. Let's see.
How do you know that Azure ML is picking the right algorithm? Well, inside of Azure ML, inside of Auto ML, you're going to see all of the different job runs. It has a tab that will literally show you the output of all of your different algorithms and which one had the highest accuracy.
One thing you will notice, if you're used to using specific algorithms, is that Auto ML doesn't use every algorithm. Support Vector Machine comes to mind: it's standard inside of scikit-learn, but you don't see it used in there. For any algorithm you'd like to test that you don't see, you may want to go in and create your own pipeline. You really have to do it on your own.
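For example, a hand-rolled scikit-learn pipeline for a Support Vector Machine regressor might look like this sketch. The data is synthetic and the parameters are illustrative assumptions, not recommendations:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Tiny synthetic regression problem standing in for your training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

# Your own pipeline for an algorithm Auto ML does not sweep.
model = Pipeline([
    ("scale", StandardScaler()),   # SVMs are sensitive to feature scale
    ("svr", SVR(kernel="rbf", C=10.0)),
])
model.fit(X, y)
score = model.score(X, y)  # R^2 on the training data
```

You could register and serve such a model alongside the Auto ML ones, then compare its metric against the leaderboard from the Auto ML runs.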
Any idea how much it costs to implement Azure ML? Basically, it depends, and that's because it's all compute related. For an instance of Azure ML, you might be looking at $500 to $800 a month for doing basic data science, basic amounts of training, and then maybe publishing two or three models. The Kubernetes service tends to be the most expensive part, because it's basically a VM that's up and persistent. You're just leaving it there, because you never know when Power BI is going to connect to it. So, when you serve your models and publish them as endpoints, that tends to be the most consistent cost: that VM is up and running, and you size it accordingly, and so on. But usually for a small implementation, your first workspace, you're looking at maybe $500 to $900 a month, maybe less. It depends on what your workload is and how often you retrain your models. It looks like that is it.
I have another one here. We're using Power BI for our customers. A few of our team are advanced, most are very new. Okay, so it looks like that was a question that Kerri had for you.
Okay, well, I thank you, everybody, for attending the training, or should I say the webinar? And hopefully, maybe we've been able to shed some light on the topic. Kerrilee will reach out to everybody just to make sure you don't have any questions because lots of times questions pop up afterwards. You go, “You know what? I wish I asked that.” So, she will reach out to you.
Thank you very much.