
How do I create files named after lines from a file and write an API response to each one? Python, API request


There is an algorithm at the end of this question. It reads lines from the file SP500.txt, which contains strings and looks like this:

AAA
BBB
CCC

It substitutes these strings into the GET request and saves each full URL to the file url_requests.txt. For example:

https://apidate.com/api/api/AAA.US?api_token=XXXXXXXX&period=d
https://apidate.com/api/api/BBB.US?api_token=XXXXXXXX&period=d
https://apidate.com/api/api/CCC.US?api_token=XXXXXXXX&period=d

It then sends each request to the API and appends all of the responses to responses.txt. What I don't know is how to save the response to each request from url_requests.txt into its own separate CSV file instead of responses.txt (right now they are all written into that one file). It is important that each file is named after the corresponding line from SP500.txt. For example:

AAA.csv `(which contains data from the request response https://apidate.com/api/api/AAA.US?api_token=XXXXXXXX&period=d)`
BBB.csv `(which contains data from the request response https://apidate.com/api/api/BBB.US?api_token=XXXXXXXX&period=d)`
CCC.csv `(which contains data from the request response https://apidate.com/api/api/CCC.US?api_token=XXXXXXXX&period=d)`
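Put differently, each symbol determines both its request URL and its output filename. A small pure helper (using the placeholder endpoint and api_token from the URLs above, not a real API) sketches the mapping:

```python
def files_for(symbols):
    """Map each ticker symbol to its CSV filename and request URL.

    The endpoint and api_token below are the placeholders from the
    question, not a working API.
    """
    template = 'https://apidate.com/api/api/{}.US?api_token=XXXXXXXX&period=d'
    # one entry per symbol: 'AAA.csv' -> full request URL for AAA
    return {s + '.csv': template.format(s) for s in symbols}
```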

So, the algorithm is:

import requests

# two variables to squeeze a string between, so it becomes a full URL
part1 = 'https://apidate.com/api/api/'
part2 = '.US?api_token=XXXXXXXX&period=d'

# open the output file before the for loop
text_file = open("url_requests.txt", "w")

# open the file which contains the strings
with open('SP500.txt', 'r') as f:
    for i in f:
        # strip() removes surrounding spaces, tabs and newlines
        uri = part1 + i.strip(' \n\t') + part2
        print(uri)
        text_file.write(uri)
        text_file.write("\n")

text_file.close()

# open a new text file for saving the responses from the API
text_file = open("responses.txt", "w")

# send every URL to the API and write the responses to a text file
with open('url_requests.txt', 'r') as f2:
    for i in f2:
        uri = i.strip(' \n\t')
        batch = requests.get(uri)  # use the stripped URL, not the raw line
        data = batch.text
        print(data)
        text_file.write(data)
        text_file.write('\n')

text_file.close()

And I know how to save a CSV from such a response. It looks like this:

import csv
import requests

url = "https://apidate.com/api/api/AAA.US?api_token=XXXXXXXX&period=d"
response = requests.get(url)

with open('out.csv', 'w', newline='') as f:  # newline='' is recommended for csv.writer
    writer = csv.writer(f)
    for line in response.iter_lines():
        writer.writerow(line.decode('utf-8').split(','))

Solution

  • To save under different names you have to call open() and write() inside the for loop where you read the data.

    It would be good to read all the names into a list first, and likewise keep the generated URLs in a list, so you don't have to read them back from a file.

    Judging by the code you use to save CSV, the server already returns CSV, so you can save each response in one go with open() and write(), without the csv module.

    I see it this way:

    import requests
    #import csv
    
    # --- read names ---
    
    all_names = []  # to keep all names in memory
    
    with open('SP500.txt', 'r') as text_file:
        for line in text_file:
            line = line.strip()
            print('name:', line)
            all_names.append(line)
    
    # ---- generate urls ---
    
    url_template = 'https://apidate.com/api/api/{}.US?api_token=XXXXXXXX&period=d'
    
    all_urls = []  # to keep all URLs in memory
    
    with open("url_requests.txt", "w") as text_file:
        for name in all_names:
            url = url_template.format(name)
            print('url:', url)
            all_urls.append(url)
            text_file.write(url + "\n")
    
    # --- read data ---
    
    for name, url in zip(all_names, all_urls):
        #print('name:', name)
        #print('url:', url)
        
        response = requests.get(url)
    
        with open(name + '.csv', 'w') as text_file:
            text_file.write(response.text)
            
            #writer = csv.writer(text_file)
            #for line in response.iter_lines():
            #    writer.writerow(line.decode('utf-8').split(','))
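For robustness, the download loop above could also use a timeout and check the HTTP status before writing each file. A sketch of that variation (same placeholder endpoint and token as in the question; the helper names `url_for`, `csv_name_for` and `download_all` are mine, not part of any library):

```python
# placeholder endpoint and api_token from the question, not a real API
URL_TEMPLATE = 'https://apidate.com/api/api/{}.US?api_token=XXXXXXXX&period=d'

def url_for(symbol):
    # build the request URL for one ticker symbol
    return URL_TEMPLATE.format(symbol)

def csv_name_for(symbol):
    # each output file is named after its symbol, e.g. AAA -> AAA.csv
    return symbol + '.csv'

def download_all(symbols):
    # imported here so the pure helpers above work without the dependency
    import requests

    # fetch every symbol and write the raw CSV body to its own file
    for symbol in symbols:
        response = requests.get(url_for(symbol), timeout=10)
        response.raise_for_status()  # fail loudly on HTTP errors
        with open(csv_name_for(symbol), 'w') as f:
            f.write(response.text)
```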