The model
I have designed, trained and published an Azure ML experiment (using a Two-Class Decision Jungle) as a web service. I can call it fine and it returns the expected result (based on the default threshold of 0.5).
The problem
However, I want to manipulate the returned result so that it is closer to my desired accuracy, precision and recall, which don't happen to coincide with the default threshold of 0.5. I can easily do this in ML Studio by visualizing the evaluation results and moving the threshold slider from the centre (0.5) to the left or right.
I have googled and read many Azure ML documents and tutorials, but so far I cannot work out how to alter the threshold in my trained and published experiment so that it returns results scored against a different cut-off.
The Score Model module also returns the scored probabilities alongside the scored labels. You can add a simple math operation (the Apply Math Operation module) that compares the scored probability against your own cut-off and adds a new column, or write a simple R script - see the image below, which uses Apply Math Operation to generate output based on the probability exceeding 0.6 instead of 0.5.
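If you prefer the R route, a minimal sketch of what an Execute R Script module placed after Score Model could contain is shown below. It assumes the standard two-class scoring column name Scored Probabilities and uses a hypothetical cut-off of 0.6 and a hypothetical new column name Custom Scored Labels; adjust these to match your experiment.

```r
# Execute R Script module: re-label predictions using a custom threshold
# instead of the default 0.5 applied by the evaluation slider.

dataset <- maml.mapInputPort(1)   # dataset wired from the Score Model output

threshold <- 0.6                  # assumed custom cut-off; change as needed

# Add a new column with the label derived from the custom threshold.
# "Scored Probabilities" is the standard two-class scoring column;
# "Custom Scored Labels" is just an illustrative name for the new column.
dataset$`Custom Scored Labels` <-
  ifelse(dataset$`Scored Probabilities` >= threshold, 1, 0)

maml.mapOutputPort("dataset")     # return the modified dataset on the output port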