python, apache-spark, pyspark, apache-spark-sql, data-pipeline

Nested JSON from REST API to PySpark DataFrame


I am trying to create a data pipeline where I request data from a REST API. The output is a nested JSON file, which is great. I want to read the JSON into a PySpark DataFrame. This works fine when I save the file locally and use the following code:

from pyspark.sql import *
from pyspark.sql.functions import *

spark = SparkSession\
    .builder\
    .appName("jsontest")\
    .getOrCreate()

raw_df = spark.read.json(r"my_json_path", multiLine=True)

But when I try to make a PySpark DataFrame directly after making the API request, I get the following error:

[Screenshot: error when trying to create a PySpark DataFrame]

I use the following code for the REST API call and the conversion to a PySpark DataFrame:

apiCallHeaders = {'Authorization': 'Bearer ' + bearer_token}
apiCallResponse = requests.get(data_url, headers=apiCallHeaders, verify=True)
json_rdd = spark.sparkContext.parallelize(apiCallResponse.text)
raw_df = spark.read.json(json_rdd)

The following is part of the response output:

{"networks":[{"href":"/v2/networks/velobike-moscow","id":"velobike-moscow","name":"Velobike"},{"href":"/v2/networks/bycyklen","id":"bycyklen","name":"Bycyklen"},{"href":"/v2/networks/nu-connect","id":"nu-connect","name":"Nu-Connect"},{"href":"/v2/networks/baerum-bysykkel","id":"baerum-bysykkel","name":"Bysykkel"},{"href":"/v2/networks/bysykkelen","id":"bysykkelen","name":"Bysykkelen"},{"href":"/v2/networks/onroll-a-rua","id":"onroll-a-rua","name":"Onroll"},{"href":"/v2/networks/onroll-albacete","id":"onroll-albacete","name":"Onroll"},{"href":"/v2/networks/onroll-alhama-de-murcia","id":"onroll-alhama-de-murcia","name":"Onroll"},{"href":"/v2/networks/onroll-almunecar","id":"onroll-almunecar","name":"Onroll"},{"href":"/v2/networks/onroll-antequera","id":"onroll-antequera","name":"Onroll"},{"href":"/v2/networks/onroll-aranda-de-duero","id":"onroll-aranda-de-duero","name":"Onroll"}

I hope my problem makes sense and that someone can help.

Thanks in advance!


Solution

  • Following this answer, you might add these lines:

    import os
    import sys
    
    os.environ['PYSPARK_PYTHON'] = sys.executable
    os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable
    

    And to run your code, you must add `[ ]` here — `parallelize` iterates over its argument, so passing a bare string produces one record per character instead of a single JSON record:

    rdd = spark.sparkContext.parallelize([apiCallResponse.text])
    

    See an example:

    import requests
    
    response = requests.get('http://api.citybik.es/v2/networks?fields=id,name,href')
    rdd = spark.sparkContext.parallelize([response.text])
    
    df = spark.read.json(rdd)
    
    df.printSchema()
    # root
    #  |-- networks: array (nullable = true)
    #  |    |-- element: struct (containsNull = true)
    #  |    |    |-- href: string (nullable = true)
    #  |    |    |-- id: string (nullable = true)
    #  |    |    |-- name: string (nullable = true)
    
    (df
     .selectExpr('inline(networks)')
     .show(n=5, truncate=False))
    # +----------------------------+---------------+----------+
    # |href                        |id             |name      |
    # +----------------------------+---------------+----------+
    # |/v2/networks/velobike-moscow|velobike-moscow|Velobike  |
    # |/v2/networks/bycyklen       |bycyklen       |Bycyklen  |
    # |/v2/networks/nu-connect     |nu-connect     |Nu-Connect|
    # |/v2/networks/baerum-bysykkel|baerum-bysykkel|Bysykkel  |
    # |/v2/networks/bysykkelen     |bysykkelen     |Bysykkelen|
    # +----------------------------+---------------+----------+