I have a bunch of files and directories with the keyword 'A' in their names. They are nested to look something like this (simplified):
.
└── key_A
    ├── data_A
    │   ├── d0_A.txt
    │   └── d1_A.txt
    ├── image_A.txt
    └── text_A.txt
I would like to rename all 'A' to 'B'.
I tried with the rename command
find . -name '*A*' -exec rename 's/A/B/g' '{}' ';'
But find visits the top-level directory key_A first, so it gets renamed before its contents and the subsequent renames no longer know where anything is:
find: ‘./key_A’: No such file or directory
I can work around it by running the command once per level from the top down, increasing -mindepth each time and replacing only the last occurrence of A, i.e.,
find . -mindepth 3 -name '*A*' -exec rename 's/(.*)A/\1B/' '{}' ';'
but that takes a separate command line call for every level.
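For this tree that means something like the following sketch (here with -maxdepth added, which my command above does not use, so each pass stays on its own level and find never tries to descend into a directory it has just renamed):
find . -mindepth 1 -maxdepth 1 -name '*A*' -exec rename 's/(.*)A/\1B/' '{}' ';'
find . -mindepth 2 -maxdepth 2 -name '*A*' -exec rename 's/(.*)A/\1B/' '{}' ';'
find . -mindepth 3 -maxdepth 3 -name '*A*' -exec rename 's/(.*)A/\1B/' '{}' ';'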
Is there an easy solution that is specific for this nested directory problem?
find -depth
is the right tool to get the files and directories in the right order; you then only want to change the last instance of _A
in each path. With -depth, find handles a directory's contents before the directory itself, so the files get renamed first, then their parent directory, and so on back up the chain, which means nothing is renamed out from under a path you still need.
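In fact, if your rename is the Perl-based one from the question, combining -depth with your own last-occurrence regex should do the whole job in a single pass, since every parent directory is only renamed after its contents (and, like the loops below, it only touches the last A in each name). A sketch, untested beyond the layout shown above:
$: find . -depth -name '*A*' -exec rename 's/(.*)A/\1B/' '{}' ';'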
That said, I'm partial to visually pre-testing: I like to see what is going to happen before I execute, so I usually do something with simple-ish string parsing and/or the shell's debug mode (set -x).
$: find key_? # show me what's there now
key_A
key_A/data_A
key_A/data_A/d0_A.txt
key_A/data_A/d1_A.txt
key_A/image_A.txt
key_A/text_A.txt
$: find key_A -depth | # get files in sensible order, echo commands below to confirm
> while read -r f; do post="${f##*_A}"; pre="${f%_A$post}"; echo mv "$f" "${pre}_B$post"; done
mv key_A/data_A/d0_A.txt key_A/data_A/d0_B.txt
mv key_A/data_A/d1_A.txt key_A/data_A/d1_B.txt
mv key_A/data_A key_A/data_B
mv key_A/image_A.txt key_A/image_B.txt
mv key_A/text_A.txt key_A/text_B.txt
mv key_A key_B
$: set -x; find key_A -depth | # remove the echo to make it happen
while read -r f; do post="${f##*_A}"; pre="${f%_A$post}"; mv "$f" "${pre}_B$post"; done; set +x
+ find key_A -depth
+ read -r f
+ post=.txt
+ pre=key_A/data_A/d0
+ mv key_A/data_A/d0_A.txt key_A/data_A/d0_B.txt
+ read -r f
+ post=.txt
+ pre=key_A/data_A/d1
+ mv key_A/data_A/d1_A.txt key_A/data_A/d1_B.txt
+ read -r f
+ post=
+ pre=key_A/data
+ mv key_A/data_A key_A/data_B
+ read -r f
+ post=.txt
+ pre=key_A/image
+ mv key_A/image_A.txt key_A/image_B.txt
+ read -r f
+ post=.txt
+ pre=key_A/text
+ mv key_A/text_A.txt key_A/text_B.txt
+ read -r f
+ post=
+ pre=key
+ mv key_A key_B
+ read -r f
+ set +x
$: find key_A # gone
find: ‘key_A’: No such file or directory
$: find key_?
key_B
key_B/data_B
key_B/data_B/d0_B.txt
key_B/data_B/d1_B.txt
key_B/image_B.txt
key_B/text_B.txt
This does still leave the possibility of breaking on files with newlines embedded in the name. See BashFAQ: How can I find and safely handle file names containing newlines, spaces or both?
As a follow-up, it's certainly possible to use the same basic tools for files with multiple occurrences of the key value, and/or odd embedded characters like newlines. While I'd recommend more error checking, here's a stripped-down but functional rewrite:
$: find ./key_A -depth -print0 |
> while read -r -d '' p; do f="${p##*/}"; mv "$p" "${p%/*}/${f//_A/_B}"; done
There are a couple of possibly non-obvious details here that avoid subtle errors, such as adding a dot-slash (./) to the front of the target directory so that ${p%/*} always has a slash to strip, and leaving off any trailing slash, but it does work.
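A quick check of why that leading ./ matters for the top-level directory itself:
$: p=key_A; echo "${p%/*}"     # no slash to strip, so the "parent" would be the path itself
key_A
$: p=./key_A; echo "${p%/*}"   # with ./ the parent collapses to ., and mv renames in place
.
Then, demonstrating with some nastier names: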
$: shopt -s globstar; printf "[%s]\n" key_?/**
[key_A]
[key_A/data_A]
[key_A/data_A/d0_A-and-another_A.txt]
[key_A/data_A/d1_A.txt]
[key_A/image_A.txt]
[key_A/text_A.txt]
[key_A/with_A
and a newline, and spaces, and another_A.txt]
$: find ./key_? -depth -print0 | while read -r -d '' p
> do f="${p##*/}"; mv "$p" "${p%/*}/${f//_A/_B}"; done
$: printf "[%s]\n" key_?/**
[key_B]
[key_B/data_B]
[key_B/data_B/d0_B-and-another_B.txt]
[key_B/data_B/d1_B.txt]
[key_B/image_B.txt]
[key_B/text_B.txt]
[key_B/with_B
and a newline, and spaces, and another_B.txt]