I have implemented a PHP function which checks and downloads a large number of images (> 1,000), passed to it as an array, using PHP's curl_multi_init() API.
After reworking this a few times already (earlier versions produced 0-byte files and the like), I now have a solution which downloads all images - BUT every other downloaded image file is incomplete.
It looks to me as if I call file_put_contents() "too early", meaning before some of the images' data has been received completely via curl_multi_exec().
Unfortunately I didn't find any similar question nor any Google results for my case, in which I need to use curl_multi_exec() but do NOT want to retrieve and save the images using the CURLOPT_FILE option.
I hope someone is able to help me out with what I'm missing and why some of the locally saved images end up broken.
$curl_httpresources = [
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427',        '/srv/www/data/images/1_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427', '/srv/www/data/images/2_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427', '/srv/www/data/images/3_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427',   '/srv/www/data/images/4_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427',     '/srv/www/data/images/5_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427',        '/srv/www/data/images/6_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427', '/srv/www/data/images/7_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427', '/srv/www/data/images/8_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427',   '/srv/www/data/images/9_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427',     '/srv/www/data/images/10_unsplash.jpg' ],
];
Now for the function I'm currently using - it kind of "works", apart from some partially downloaded files:
function cURLfetch(array $resources)
{
    /** Disable the PHP time limit, because this could take a while... */
    set_time_limit(0);

    /** Validate the $resources array (not empty, null, or alike) */
    $resources_num = count($resources);
    if (empty($resources) || $resources_num <= 0) return false;

    /** Callback function for writing data to a file */
    $callback = function($resource, $filepath)
    {
        file_put_contents($filepath, $resource);
        /** For debug only: output an <img> tag with the saved $resource */
        printf('<img src="%s"><br>', str_replace('/srv/www', '', $filepath));
    };

    /**
     * Initialize the cURL process for handling multiple parallel requests
     */
    $curl_instance = curl_multi_init();
    $curl_multi_exec_active = null;
    $curl_request_options = [
        CURLOPT_USERAGENT      => 'PHP-Script/1.0 (+https://website.com/)',
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_VERBOSE        => false,
        CURLOPT_RETURNTRANSFER => true,
    ];

    /**
     * Loop through all $resources
     * $resources[$i][0] = HTTP resource
     * $resources[$i][1] = target filepath
     */
    for ($i = 0; $i < $resources_num; $i++)
    {
        $curl_requests[$i] = curl_init($resources[$i][0]);
        curl_setopt_array($curl_requests[$i], $curl_request_options);
        curl_multi_add_handle($curl_instance, $curl_requests[$i]);
    }

    do {
        try {
            $curl_execute = curl_multi_exec($curl_instance, $curl_multi_exec_active);
        } catch (Exception $e) {
            error_log($e->getMessage());
        }
    } while ($curl_execute == CURLM_CALL_MULTI_PERFORM);

    /** Wait until data arrives on all sockets */
    $h = 0; // initialise a counter
    while ($curl_multi_exec_active && $curl_execute == CURLM_OK)
    {
        if (curl_multi_select($curl_instance) != -1)
        {
            do {
                $curl_data = curl_multi_exec($curl_instance, $curl_multi_exec_active);
                $curl_done = curl_multi_info_read($curl_instance);

                /** Check if there is data... */
                if ($curl_done['handle'] !== NULL)
                {
                    /** Continue ONLY if the HTTP status code was OK (200) */
                    $info = curl_getinfo($curl_done['handle']);
                    if ($info['http_code'] == 200)
                    {
                        if (!empty(curl_multi_getcontent($curl_requests[$h]))) {
                            /** cURL request successful. Process the data using the callback function. */
                            $callback(curl_multi_getcontent($curl_requests[$h]), $resources[$h][1]);
                        }
                        $h++; // count up
                    }
                }
            } while ($curl_data == CURLM_CALL_MULTI_PERFORM);
        }
    }

    /** Close all $curl_requests */
    foreach ($curl_requests as $request) {
        curl_multi_remove_handle($curl_instance, $request);
    }
    curl_multi_close($curl_instance);

    return true;
}
/** Start fetching images from the array */
cURLfetch($curl_httpresources);
I ended up using just regular cURL requests in a classic loop to query all 1,000+ images and download the ones with an "HTTP 200 OK" response. My initial concern, that the server might cut the connection because it falsely identifies the requests as a DDoS attack, turned out to be unfounded, which is why this approach works well for my case.
Here's the final function using regular cURL requests:
function cURLfetchUrl($url, $save_as_file)
{
    /** Validate $url & $save_as_file (not empty, null, or alike) */
    if (empty($url) || is_numeric($url)) return false;
    if (empty($save_as_file) || is_numeric($save_as_file)) return false;

    /** Disable the PHP time limit, because this could take a while... */
    set_time_limit(0);

    try {
        /**
         * Set the cURL options to be passed to a single request
         */
        $curl_request_options = [
            CURLOPT_USERAGENT      => 'PHP-Script/1.0 (+https://website.com/)',
            CURLOPT_TIMEOUT        => 5,
            CURLOPT_FOLLOWLOCATION => true,
            CURLOPT_RETURNTRANSFER => true,
        ];

        /** Initialize & execute the cURL request */
        $curl_instance = curl_init($url);
        curl_setopt_array($curl_instance, $curl_request_options);
        $curl_data = curl_exec($curl_instance);
        $curl_done = curl_getinfo($curl_instance);

        /** cURL request successful */
        if ($curl_done['http_code'] == 200)
        {
            /** Write the received data to the target file */
            if (file_put_contents($save_as_file, $curl_data) !== false) {
                // logging if file_put_contents was OK
            } else {
                // logging if file_put_contents FAILED
            }
        }

        /** Close the $curl_instance */
        curl_close($curl_instance);
        return true;
    } catch (Exception $e) {
        error_log($e->getMessage());
        return false;
    }
}
And to execute it:
$curl_httpresources = [
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427',        '/srv/www/data/images/1_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427', '/srv/www/data/images/2_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427', '/srv/www/data/images/3_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427',   '/srv/www/data/images/4_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427',     '/srv/www/data/images/5_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427',        '/srv/www/data/images/6_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427', '/srv/www/data/images/7_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427', '/srv/www/data/images/8_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427',   '/srv/www/data/images/9_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427',     '/srv/www/data/images/10_unsplash.jpg' ],
];
/** cURL all requests from the $curl_httpresources array */
if (count($curl_httpresources) > 0)
{
    foreach ($curl_httpresources as $resource)
    {
        cURLfetchUrl($resource[0], $resource[1]);
    }
}
Still, if someone has an idea of how to properly retrieve the file data streams using curl_multi, that would be great, as my answer to the initial question just takes a different approach rather than solving the initial one.
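Update: below is a minimal sketch of how I believe a working curl_multi version could look, reusing the same options as my original function. It assumes the root cause is the sequential $h counter: curl_multi_info_read() reports handles in the order they finish, not the order they were added, so the loop in my original function can read a handle that is still transferring and write its partial content to the wrong filepath. Keying the target filepaths by the cURL handle itself avoids that mismatch. The function name cURLfetchMulti is made up for this sketch.

function cURLfetchMulti(array $resources)
{
    /** Disable the PHP time limit, because this could take a while... */
    set_time_limit(0);
    if (empty($resources)) return false;

    $curl_instance = curl_multi_init();
    $filepaths = []; // map: handle id => target filepath

    $curl_request_options = [
        CURLOPT_USERAGENT      => 'PHP-Script/1.0 (+https://website.com/)',
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_RETURNTRANSFER => true,
    ];

    foreach ($resources as $resource) {
        $request = curl_init($resource[0]);
        curl_setopt_array($request, $curl_request_options);
        curl_multi_add_handle($curl_instance, $request);
        /** Key the target path by the handle, NOT by a counter.
         *  The (int) cast works for PHP 7 resource handles;
         *  on PHP 8+ use spl_object_id($request) instead. */
        $filepaths[(int) $request] = $resource[1];
    }

    $active = null;
    do {
        $status = curl_multi_exec($curl_instance, $active);
        if ($active) {
            /** Block until there is activity on any socket */
            if (curl_multi_select($curl_instance) === -1) {
                usleep(1000); // select failed, back off briefly
            }
        }
        /** Drain the message queue: ONLY handles reported here have
         *  finished, so it is now safe to read their full content. */
        while ($message = curl_multi_info_read($curl_instance)) {
            if ($message['msg'] === CURLMSG_DONE) {
                $request = $message['handle'];
                if (curl_getinfo($request, CURLINFO_HTTP_CODE) == 200) {
                    file_put_contents($filepaths[(int) $request], curl_multi_getcontent($request));
                }
                curl_multi_remove_handle($curl_instance, $request);
                curl_close($request);
            }
        }
    } while ($active && $status == CURLM_OK);

    curl_multi_close($curl_instance);
    return true;
}

Note that with CURLOPT_RETURNTRANSFER every response is still buffered in memory until its handle completes; for 1,000+ images it may be worth adding the handles in smaller batches, but that is beyond this sketch.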