google-cloud-platform, dataflow, scrapinghub, apache-beam

Dataflow template can't be created because the Scrapinghub client library doesn't accept ValueProvider


I'm trying to create a Dataflow template that can be called from a Cloud Function triggered by a Pub/Sub message. The Pub/Sub message carries a job id from Scrapinghub (a platform for Scrapy scrapers) to the Cloud Function, which triggers a Dataflow template whose input is the job id and whose output is the corresponding data written to BigQuery. All other steps of this design are complete, but I cannot create the template, apparently because of an incompatibility between Scrapinghub's client library and Apache Beam.
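For context, the Cloud Function side is already working; it looks roughly like the sketch below (the project id, bucket paths, output table, and function name are placeholders, not my actual values). It decodes the Scrapinghub job id from the Pub/Sub message and launches the staged template, passing the job id as the --input parameter:

import base64
from googleapiclient.discovery import build

def launch_template(event, context):
    # The Pub/Sub message body carries the Scrapinghub job id, e.g. "123456/1/2".
    job_id = base64.b64decode(event['data']).decode('utf-8')

    dataflow = build('dataflow', 'v1b3', cache_discovery=False)
    dataflow.projects().locations().templates().launch(
        projectId='project-name',            # placeholder
        location='us-central1',
        gcsPath='gs://templates/location/',  # where the template is staged
        body={
            'jobName': 'shub-ingest-' + job_id.replace('/', '-'),
            'parameters': {
                'input': job_id,
                'output': 'dataset.table',   # placeholder BigQuery table
            },
        },
    ).execute()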

The code:

from __future__ import absolute_import
import argparse
import logging
import os

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.value_provider import StaticValueProvider
from scrapinghub import ScrapinghubClient


class UserOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_value_provider_argument('--input')
        parser.add_value_provider_argument('--output', type=str)


class IngestionBQ:
    def __init__(self): pass

    @staticmethod
    def parse_method(item):
        dic = {k: item[k] for k in item if k not in [b'_type', b'_key']}
        new_d = {}
        for key in dic:
            try: 
                new_d.update({key.decode("utf-8"): dic[key].decode("utf-8")})
            except AttributeError:
                new_d.update({key.decode("utf-8"): dic[key]})
        yield new_d          


class ShubConnect():
    def __init__(self, api_key, job_id):
        self.job_id = job_id
        self.client = ScrapinghubClient(api_key)

    def get_data(self):
        data = []
        item = self.client.get_job(self.job_id)
        for i in item.items.iter():
            data.append(i)
        return data


def run(argv=None, save_main_session=True):
    """The main function which creates the pipeline and runs it."""
    data_ingestion = IngestionBQ()
    pipeline_options = PipelineOptions()
    p = beam.Pipeline(options=pipeline_options)
    api_key = os.environ.get('api_key')
    user_options = pipeline_options.view_as(UserOptions)
    (p
        | 'Read Data from Scrapinghub' >> beam.Create(ShubConnect(api_key, user_options.input).get_data())
        | 'Trim b string' >> beam.FlatMap(data_ingestion.parse_method)
        | 'Write Projects to BigQuery' >> beam.io.WriteToBigQuery(
                user_options.output,
                schema=schema,
                # Creates the table in BigQuery if it does not yet exist.
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                write_disposition=beam.io.BigQueryDisposition.WRITE_EMPTY)
     )
    p.run()


if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    run()

And I deploy the template with this command in Cloud Shell:

python main.py \
    --project=project-name \
    --region=us-central1 \
    --runner=DataflowRunner \
    --temp_location=gs://temp/location/ \
    --template_location=gs://templates/location/
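Once staged, the template would then be run with runtime parameters, either from the Cloud Function or manually for testing, roughly like this (the job key and output table are placeholders):

gcloud dataflow jobs run shub-ingest-test \
    --gcs-location=gs://templates/location/ \
    --region=us-central1 \
    --parameters=input=123456/1/2,output=dataset.table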

And this error appeared:

Traceback (most recent call last):
  File "main.py", line 69, in <module>
    run()
  File "main.py", line 57, in run
    | 'Write Projects to BigQuery' >> beam.io.WriteToBigQuery(
  File "main.py", line 41, in get_data
    item = self.client.get_job(self. job_id)
  File "/home/user/data-flow/venv/lib/python3.7/site-packages/scrapinghub/client/__init__.py", line 99, in get_job
    project_id = parse_job_key(job_key).project_id
  File "/home/user/data-flow/venv/lib/python3.7/site-packages/scrapinghub/client/utils.py", line 60, in parse_job_key
    .format(type(job_key), repr(job_key)))
ValueError: Job key should be a string or a tuple, got <class 'apache_beam.options.value_provider.RuntimeValueProvider'>: <apache_beam.options.value_provider.RuntimeValueProvider object at 0x7f14760a3630>

Before this, I had successfully created a template using parser.add_argument instead of parser.add_value_provider_argument. That template could be created but not run with new inputs, since parser.add_argument doesn't support runtime parameters. Still, with parser.add_argument not only could the template be created, I could also run the pipeline itself from Cloud Shell. Why doesn't Scrapinghub's client API throw this error with parser.add_argument, but does with parser.add_value_provider_argument? What is the fundamental programmatic difference between the two? And, of course, how can I still create this template with ValueProvider parameters?
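To make the question concrete, this is my understanding of the difference; the ReadFromShub DoFn below is only an illustrative sketch, not code from my pipeline. With parser.add_argument, user_options.input is already a plain string while the pipeline graph is being built, so ShubConnect(api_key, user_options.input).get_data() inside beam.Create can run immediately. With parser.add_value_provider_argument, user_options.input is a RuntimeValueProvider whose value only exists once the job actually runs, so my construction-time call hands the unresolved wrapper object straight to get_job(), which rejects it. As far as I understand, a ValueProvider is meant to be resolved with .get() at execution time, for example inside a DoFn:

import apache_beam as beam

class ReadFromShub(beam.DoFn):
    # Illustrative sketch only: defer the Scrapinghub call to execution time.
    def __init__(self, api_key, job_id_provider):
        self.api_key = api_key
        self.job_id_provider = job_id_provider   # a ValueProvider, not a string

    def process(self, _):
        from scrapinghub import ScrapinghubClient
        job_id = self.job_id_provider.get()      # resolved only at runtime
        client = ScrapinghubClient(self.api_key)
        for item in client.get_job(job_id).items.iter():
            yield item

# Hypothetical usage: seed the pipeline with a single dummy element,
# then fetch the items when the job executes:
#     (p | beam.Create([None])
#        | beam.ParDo(ReadFromShub(api_key, user_options.input)))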

Thanks a lot.

Edit

After reading the documentation, I understand that the error occurred because there is no support for ValueProvider objects in non-I/O modules. Ref: https://cloud.google.com/dataflow/docs/guides/templates/creating-templates#python_5


Solution

  • As noted in the edit above, the error occurred because there is no support for ValueProvider objects in non-I/O modules. Ref: https://cloud.google.com/dataflow/docs/guides/templates/creating-templates#python_5

    So to achieve what I need, I can either switch to the Java SDK or come up with another idea. But this path is a dead end until there is ValueProvider support for non-I/O modules.