I have a data.csv file that I want to convert to txt and then count how many even numbers it contains.
My data.csv looks like this:
# Recorded data from sensor XaB.v2
# Recording started on Oct 29 15:00:00
# Recording ended on Oct 29 15:15:59
# X, Y, Z, X', Y', Z'
55, 14, 48, 62, 78, 41
32, 52, 94, 11, 17, 83
and so on.
I tried this way, but maybe there is a better one:
grep -v '^#' data.csv | sed -e 's/,//g' > data.txt        # drop the comment lines and strip the commas
grep -o '[0-9]\+' data.txt | awk '$1 % 2 == 0' | wc -l    # one number per line, keep the even ones, count them
Especially for the sed command, I don't know whether this is the optimal way.
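For reference, with the two sample rows shown above, data.txt would contain

55 14 48 62 78 41
32 52 94 11 17 83

and the pipeline would print 7 (the even values being 14, 48, 62, 78, 32, 52 and 94).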
Then I have to create n copies of my data.txt with the values scaled, i.e. divided by i. Again I tried with this script, but I do not know if there is a simpler and cleaner way to do it:
n=$1
for ((i = 1; i <= n; i++)); do
    # divide every field by i, print with six decimals, one output file per divisor
    awk -v divisor="$i" '{ for (j = 1; j <= NF; j++) printf "%.6f ", $j / divisor; print "" }' data.txt > "data_${i}.txt"
done
I have not included the necessary checks (valid syntax, and that n is a positive integer) in this snippet.
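For example, if the snippet were saved as scale.sh (a name I am assuming here), then

sh scale.sh 3

would create data_1.txt, data_2.txt and data_3.txt, where data_1.txt holds the same values as data.txt, only reformatted with six decimals.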
Recent versions of GNU awk (5.3 and later) can parse the CSV format (the -k or --csv option). For the first goal (counting the even values):
awk -k '
!/^#/ {                       # skip the comment header lines
    for (i = 1; i <= NF; i++)
        s += 1 - ($i % 2)     # adds 1 for each even field
}
END {
    print s+0                 # s+0 so that 0 is printed even with no data lines
}' data.csv
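Against the two sample rows from the question this prints 7, matching the grep/wc pipeline above.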
For the second goal we can also let awk run the outermost loop:
awk -v n="$1" -k '
!/^#/ {
    for (i = 1; i <= n && (f = "data_" i ".txt"); i++)    # pick the output file for divisor i
        for (j = 1; j <= NF; j++)
            printf("%s%.6f%s", j == 1 ? "" : " ", $j / i, j == NF ? "\n" : "") > f
}' data.csv
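For example, with the sample data and n=2, the first line of data_2.txt would be

27.500000 7.000000 24.000000 31.000000 39.000000 20.500000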
(spaces added to separate the output values). Note that with simple CSV like this (no multi-line fields, no quoted fields with commas...), even with older or non-GNU versions of awk, as shown in Barmar's answer, we could use -F ',' and obtain the same result as with -k.
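As a minimal sketch of that portable variant (assuming only a POSIX awk; with -F ',' each field keeps its leading space, which awk's string-to-number conversion ignores):

awk -F ',' '
!/^#/ {
    for (i = 1; i <= NF; i++)
        s += 1 - ($i % 2)
}
END {
    print s+0
}' data.csv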