I have a really strange problem. I created a model via Modeler Flow using a Spark environment. The training data was a .csv file containing data about a bank's clients, and the model had to predict whether there is any risk in giving these people a loan. After that, I deployed the model and went to the OpenScale instance.
So the problem is:
When I configure the "Model Details", it asks me to point to the field from the training data file that contains the answers to be predicted by the model. It shows me that there is only one suitable field (and it's the correct one); this field contains string data (I am absolutely sure of that!). Then it asks me to point to the column in the OUTPUT data from the model: "From the output data, select the feature that contains the prediction generated by the AI deployment."
There are three possible columns: two of them are of string type and the other one is double. I chose the middle one, because that column contains the string prediction values (No/Yes).
Ok, it's done.
Then I go to the quality monitor, and it shows me this message:

Prerequisites Check has failed: Label column label in training_data_schema with modelling role target has different type double than column $X-Risk in output_data_schema with modelling role prediction of type string; Verify label_column and prediction_field settings in asset_properties of your subscription
I really don't know how to fix it, because I KNOW that there is NO column named "label" in my data, and there is certainly no column named "label" with type double!
Help me please!!
As you have created a Spark model, you must have applied string indexing while training it, and this applies to the target column (aka label column) as well. That is likely why the target column from the training data is detected as a double column. So please try selecting the prediction column (the one you pick from the output data) to be the numeric one (aka the double column), and that should most likely resolve your problem.
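For illustration, here is a minimal PySpark sketch of what typically happens under the hood; the column names (Age, LoanAmount, Risk) are hypothetical, not taken from your flow. A StringIndexer turns the string target (No/Yes) into a numeric "label" column of type double, which is what OpenScale then records in the training data schema:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import StringIndexer

spark = SparkSession.builder.appName("label-indexing-sketch").getOrCreate()

# Hypothetical training data with a string target column "Risk" (No/Yes)
df = spark.createDataFrame(
    [(25, 1000.0, "No"), (40, 25000.0, "Yes"), (33, 7000.0, "No")],
    ["Age", "LoanAmount", "Risk"],
)

# StringIndexer maps the string target to a numeric "label" column of type double
indexer = StringIndexer(inputCol="Risk", outputCol="label")
indexed = indexer.fit(df).transform(df)

indexed.printSchema()
# root
#  |-- Age: long (nullable = true)
#  |-- LoanAmount: double (nullable = true)
#  |-- Risk: string (nullable = true)
#  |-- label: double (nullable = false)   <- why the schema reports a double target
```

Because the model was trained against that indexed double label, the deployment's numeric prediction column is the one whose type matches the recorded target, which is why choosing the double column in OpenScale should clear the prerequisites check.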