awk, ksh

Unable to capture the erroneous record number while processing a huge text file


The input for this job is a huge .txt file.

#!/bin/ksh

while read -r line
  do
    awk ' BEGIN {FS= ","} 
    $2 ~ /[mM]/  {     

    if  ($12 ~ /[1-9]+/ )
      {
        SECNext=$13
        if ( SECNext != SECPrev )
          {
            SECPrev=SECNext
            $3=substr($3,5,4)"-"substr($3,1,2)"-"substr($3,3,2)  

          }
        else
          {
            printf ("%s\t Same SEC Occured \n",$0) >>$var1$var2
          }
      }
    else
      {
        printf ("%s\t No SEC  for this trem\n",$0) >>$var1$var2
      }
    }
             ' 2>>$var1$var3
 done<$tmp_file>$dir$file".dat"

 rc="$?"
 

However, I have made minor changes to this script to capture the standard error coming from awk, using 2>>$log_dir$err_fname to redirect it into a custom error file.

But I am unable to tell for which record awk is writing to standard error. I need to identify that input line among the huge number of lines in the input .txt file.

Is there a way to know at which line it is failing?


Solution

  • Shell variables such as $log_dir and $log_fname are not expanded inside the single quotes that delimit the awk program. Use awk's -v option to pass their values in, e.g.:

    awk -v log_dir="$log_dir" -v log_fname="$log_fname" '
        ...
        printf ("%s\t No UPC  for this item\n",$0) >>(log_dir log_fname)
        ...
    '
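  • As for the record-number part of the question: awk keeps the current record number in the built-in variable NR (FNR per input file), so including it in every diagnostic identifies the failing input line. A minimal sketch, assuming the same log_dir/log_fname names as above and a hypothetical two-line sample input:

```shell
# Sketch: tag every diagnostic with NR so the offending record is known.
# log_dir/log_fname are illustrative names, as in the -v example above.
log_dir=/tmp/
log_fname=err.log
: > "$log_dir$log_fname"    # start with an empty log

printf 'a,m,x\nb,m,y\n' |
awk -v log_dir="$log_dir" -v log_fname="$log_fname" '
    BEGIN { FS = "," }
    $2 ~ /[mM]/ {
        if ($12 !~ /[1-9]/)
            printf("record %d: %s\tNo SEC for this term\n", NR, $0) >> (log_dir log_fname)
    }
'
cat "$log_dir$log_fname"
```

    Each logged line then carries the record number, which can be matched back against the input file, e.g. with sed -n '2p' "$tmp_file".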
    

    Edit: the question has since been edited, albeit inconsistently (the script uses $var1$var2 while the prose refers to $log_dir$err_fname).
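
  • Separately, the while read loop is redundant, and actually harmful: the awk inside it inherits the loop's redirected standard input, so the first awk invocation consumes the rest of the file. A single awk pass over the file does the same work. A sketch with illustrative file paths and sample data, again tagging diagnostics with NR:

```shell
# Sketch: one awk pass over the whole file, no while-read loop.
# tmp_file/err_file and the sample records are illustrative placeholders.
tmp_file=/tmp/input.txt
err_file=/tmp/custom_err.log
printf '1,m,x,x,x,x,x,x,x,x,x,7,A\n1,m,x,x,x,x,x,x,x,x,x,,\n' > "$tmp_file"
: > "$err_file"    # start with an empty error file

awk -v err_file="$err_file" '
    BEGIN { FS = "," }
    $2 ~ /[mM]/ {
        if ($12 ~ /[1-9]/) {
            print    # normal processing of the record would go here
        } else {
            printf("line %d: %s\tNo SEC for this record\n", NR, $0) >> (err_file)
        }
    }
' "$tmp_file" 2>> "$err_file"
```

    Runtime errors from awk itself still land in the same file via 2>>, and every application-level diagnostic now names the line it came from.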