Tags: python, json, google-drive-api, google-colaboratory, roboflow

Roboflow YOLO: how to process multiple images and save each returned JSON?


Below is a simple Roboflow YOLO segmentation script that I am running on Colab. The pre-trained model works fine on one sample image. The only thing that matters here is the model, which is an instance segmentation model. The prediction returns the labels and coordinates of the detected objects in JSON format.

!pip install roboflow

from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
workspace = rf.workspace("workspace-id")
project = workspace.project("model-id")
version = project.version("version-number")
model = version.model

prediction = model.predict("/content/sample 1.jpg")

# Plot the prediction
prediction.plot()

# Convert predictions to JSON
prediction.json()

But I have a hard time modifying the script.

Question 1: I have searched and tried every snippet I could find, but I still cannot figure out how to save the returned JSON. Can you help me add code that saves the JSON file (the prediction in this case) to a specific location, using the same file name as the input image?

Question 2: To test, I simply dragged a sample image into /content. Now I am trying to process every image in my Google Drive. I can mount the drive with the code below, but how can I make it so that all images on the drive are processed automatically?

from google.colab import drive
drive.mount('/gdrive')
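
For reference, once the drive is mounted I can list the images in one folder like this (the folder path below is just a placeholder, not my real one):

import glob

# List all .jpg files in a placeholder Drive folder
image_paths = glob.glob('/gdrive/MyDrive/my_images/*.jpg')
print(image_paths)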


Solution

  • Question 1 - updating the imgfile_location variable will change the image you predict on, and the snippet below saves the JSON predictions under the corresponding file name:

    from roboflow import Roboflow
    from google.colab import files, drive
    from pathlib import Path
    import json
    import os
    
    def safe_create_path_parent(path: str) -> None:
        path_parent = os.path.dirname(path)
        if path_parent:
            os.makedirs(path_parent, exist_ok=True)
    
    def dump_to_json(target_path: str, content: dict) -> None:
        safe_create_path_parent(target_path)
        with open(target_path, 'w') as outfile:
            json.dump(content, outfile, indent=4)
    
    drive.mount('/gdrive')
    
    rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
    workspace = rf.workspace("workspace-id")
    project = workspace.project("model-id")
    version = project.version("version-number")
    model = version.model
    
    imgfile_location = "/content/sample 1.jpg"
    # Same file name as the input image, but with a .json extension
    imgfile_name = Path(imgfile_location).with_suffix('.json').name
    prediction = model.predict(imgfile_location)
    
    # Plot the prediction
    prediction.plot()
    
    # Convert predictions to JSON
    predicted_result = prediction.json()
    
    # Save the JSON predictions to the current working directory
    dump_to_json(imgfile_name, predicted_result)
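
    Since the goal is a specific location, note that dump_to_json takes a full path, so you can point it at a folder on the drive directly. A minimal sketch, assuming a hypothetical /gdrive/MyDrive/predictions output folder (safe_create_path_parent will create it if needed):

    # Save to an example Drive folder instead of the working directory
    target_path = f"/gdrive/MyDrive/predictions/{Path(imgfile_location).stem}.json"
    dump_to_json(target_path, predicted_result)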
    

    EDIT: Question 2 - multiple images or a folder:

    import glob
    import os

    raw_data_location = "INSERT_PATH_TO_IMG_DIRECTORY"

    for raw_data_extension in ['.jpg', '.jpeg', '.png']:
        globbed_files = glob.glob(raw_data_location + '/*' + raw_data_extension)
        for img_path in globbed_files:
            predictions = model.predict(img_path, confidence=40, overlap=30)
            # Save the annotated prediction image
            predictions.save(f'inferenceResult_{os.path.basename(img_path)}')
            predictions_json = predictions.json()
            print(predictions_json)
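
    If you also want to save each image's JSON, as in Question 1, you can reuse dump_to_json inside the same loop. A sketch under the same placeholders, with a hypothetical output folder on the drive:

    from pathlib import Path

    output_dir = "/gdrive/MyDrive/predictions"  # example output folder, adjust as needed

    for raw_data_extension in ['.jpg', '.jpeg', '.png']:
        for img_path in glob.glob(raw_data_location + '/*' + raw_data_extension):
            predictions = model.predict(img_path, confidence=40, overlap=30)
            # Same file name as the input image, with a .json extension
            json_name = Path(img_path).with_suffix('.json').name
            dump_to_json(os.path.join(output_dir, json_name), predictions.json())

    Each JSON file then lands in one folder, named after its source image.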