We've written a script to loop through all our files and remove a certain string. The string contains no special characters, and the script does remove it and replace it with nothing, as we're after. However, there are some special characters elsewhere in the files. Instead of removing just the string we specify and leaving everything else in the files as-is, the script also replaces these special characters with a question mark, "?". We've tried both PowerShell's -replace operator and the .Replace() method with the same effect: both replace the special character with "?".
This is the script we run:
$packageFiles = Get-ChildItem -Path E:\SSIS\Projects -Recurse -Include *.dtsx
foreach ($file in $packageFiles)
{
    (Get-Content $file.PSPath) |
        ForEach-Object { $_.Replace('somestring','') } |
        Set-Content $file.PSPath
}
Before the script runs, the affected line in the file contains the zero-width no-break space / byte-order-mark special character, "ZWNBSP". After the script runs, that special character has been replaced with a question mark, "?".
Does anyone have a solution that will allow us to still use one of PowerShell's replace functions but not have our special characters replaced by question marks, which breaks our files?
UPDATE: The file we are trying to find/replace within is an SSIS package, which is XML. I tried rewriting the script to use XML functions rather than treating the file as plain text. This is the new script:
$xmlPath = "E:\somepath\package.dtsx"
$xmlContent = Get-Content -Path $xmlPath
$xml = [xml]$xmlContent

# Namespace manager so the DTS-prefixed nodes can be queried via XPath
$ns = New-Object System.Xml.XmlNamespaceManager($xml.NameTable)
$ns.AddNamespace("xml", "http://www.w3.org/XML/1998/namespace")
$ns.AddNamespace("DTS", "www.microsoft.com/SqlServer/Dts")

# Locate the variable's value node and blank it out
$pwd_xml = $xml.SelectSingleNode("//DTS:Variable[@DTS:ObjectName='SSIS_APP_PWD']/DTS:VariableValue", $ns)
$pwd_xml
$pwd_xml.InnerText = ''

$xml.Save("E:\somepath\package.dtsx")
Instead of replacing the original special character with a question mark, it now replaces it with "". This still breaks our file.
Building on Santiago's helpful comments:
tl;dr
PowerShell does not ensure that text read from a text file with Get-Content is later written with the same character encoding using, for instance, Set-Content.
To preserve the original character encoding, you need to (a) know the specific character encoding of the input file and (b) specify the same encoding via the -Encoding parameter when calling Set-Content.
Your symptom implies that you're using Windows PowerShell, where Set-Content defaults to ANSI encoding, in which a character such as ZERO WIDTH NO-BREAK SPACE, U+FEFF, cannot be represented and is "lossily" translated to a verbatim ? character.
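To see this lossy translation in isolation, here is a minimal demonstration (not part of the original answer), assuming Windows-1252 as the ANSI code page; the encoder cannot map U+FEFF and falls back to a literal ?:

# Demonstration only: encode U+FEFF with the Windows-1252 ANSI code page.
# The code page cannot represent this character, so the encoder's fallback
# substitutes a literal '?' (byte 0x3F).
$ansi  = [System.Text.Encoding]::GetEncoding(1252)
$bytes = $ansi.GetBytes([string] [char] 0xFEFF)
'{0:X2}' -f $bytes[0]   # -> 3F, i.e. '?'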
Assuming that your input files are UTF-8 files with a BOM, the solution is therefore to pass -Encoding UTF8 to Set-Content (streamlined version of your code):
Get-ChildItem E:\SSIS\Projects -Recurse -Include *.dtsx |
    ForEach-Object {
        ($_ | Get-Content -Raw).Replace('somestring','') |
            Set-Content -LiteralPath $_.FullName -Encoding utf8
    }
If there's also a problem in how the files are read, use an -Encoding argument with Get-Content too.
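For instance, a variant of the loop above that is explicit about UTF-8 on both the reading and the writing side might look like this (a sketch only; substitute whatever encoding your files actually use):

Get-ChildItem E:\SSIS\Projects -Recurse -Include *.dtsx |
    ForEach-Object {
        # Read and write with an explicit encoding so neither side has to guess.
        ($_ | Get-Content -Raw -Encoding utf8).Replace('somestring','') |
            Set-Content -LiteralPath $_.FullName -Encoding utf8
    }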
As for the XML API-based approach you added later (which is preferable in this case):
There, the character encoding is preserved, assuming you're using the following idiom to load and parse your file into an XML DOM (and assuming a standards-conformant, internally consistent XML file):
$xmlPath = "E:\somepath\package.dtsx" # Be sure to use a full path
($xml = [xml]::new()).Load($xmlPath)
# ...
# $xml.Save(...) preserves the original encoding.
See the bottom section of this answer for details.
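Applied to the update in the question, the whole XML-based rewrite might look like the following sketch (the DTS namespace URI, the SSIS_APP_PWD variable name, and the path are taken from the question's code; only the loading idiom changes):

$xmlPath = "E:\somepath\package.dtsx"   # Be sure to use a full path

# Load via the XmlDocument API so the original XML declaration/encoding is honored.
($xml = [xml]::new()).Load($xmlPath)

# Namespace manager for the DTS-prefixed XPath query, as in the question.
$ns = New-Object System.Xml.XmlNamespaceManager($xml.NameTable)
$ns.AddNamespace("DTS", "www.microsoft.com/SqlServer/Dts")

$pwd_xml = $xml.SelectSingleNode("//DTS:Variable[@DTS:ObjectName='SSIS_APP_PWD']/DTS:VariableValue", $ns)
$pwd_xml.InnerText = ''

# Save() preserves the original character encoding.
$xml.Save($xmlPath)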
Fundamentally, PowerShell never preserves information about an input file's character encoding when reading text files, such as with Get-Content:
On reading file content into memory, a file's bytes are decoded based on a specific character encoding and converted to .NET [string]s, which are internally composed of UTF-16 Unicode code units and are therefore capable of representing all Unicode characters. On writing .NET strings back to a file, a character encoding must again be applied (see the point about writing, further below).
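A quick way to convince yourself (not part of the original answer) that the loss happens on writing rather than on reading, assuming the input file's encoding is detected correctly (e.g. via a BOM): the in-memory string still contains the character.

# Demonstration only: after reading, the in-memory .NET string still contains
# U+FEFF; it is only a later, lossy write that turns it into '?'.
# The path below is a hypothetical example.
$text = Get-Content -Raw E:\SSIS\Projects\SomePackage.dtsx
$text.Contains([string] [char] 0xFEFF)   # -> True if the character was read intact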
The specific character encoding used is chosen in one of three ways:
- If a BOM (byte-order mark) is present at the start of a file, it implies the character encoding to use. (A way to check a file for a UTF-8 BOM is sketched right after this list.)
- In the absence of a BOM at the start of the file, the default character encoding is used, which varies by PowerShell edition:
  - The legacy, ships-with-Windows Windows PowerShell edition defaults to the system locale's active legacy ANSI code page, such as Windows-1252 on US-English systems.
  - The modern, cross-platform PowerShell (Core) 7 edition fortunately now consistently defaults to UTF-8.
- If the default character encoding is the wrong one, it can be overridden with an -Encoding argument.
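Here is one way (a sketch, not from the original answer) to check whether a given file starts with the UTF-8 BOM, so that you know which -Encoding value to pass; [System.IO.File]::ReadAllBytes behaves the same in both PowerShell editions:

# Sketch: read the file's raw bytes and compare the first three to the UTF-8 BOM (EF BB BF).
$path  = 'E:\SSIS\Projects\SomePackage.dtsx'   # hypothetical example path
$bytes = [System.IO.File]::ReadAllBytes($path)
($bytes.Length -ge 3) -and ($bytes[0] -eq 0xEF) -and ($bytes[1] -eq 0xBB) -and ($bytes[2] -eq 0xBF)   # $true if a UTF-8 BOM is present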
When writing text files later, such as with Set-Content, no information about the original character encoding is available, and the cmdlet's default encoding is used in the absence of an -Encoding argument.
Again, the defaults vary by edition and can be overridden with an -Encoding argument:
- In Windows PowerShell, Set-Content uses the ANSI code page for writing too, whereas Out-File uses "Unicode", i.e. UTF-16LE encoding. Other cmdlets have yet different defaults; see the bottom section of this answer for details.
- -Encoding utf8 invariably creates UTF-8 files with a BOM; the creation of BOM-less UTF-8 files requires workarounds: see this answer.
- In PowerShell 7, all cmdlets fortunately now consistently use BOM-less UTF-8; UTF-8 with a BOM can be requested with -Encoding utf8BOM (see the sketch below).
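For example (a sketch, not part of the original answer), in PowerShell 7, if your input files carry a UTF-8 BOM and you want the rewritten files to keep it, request it explicitly on output:

# PowerShell 7 sketch: explicitly request UTF-8 *with* a BOM on output,
# because PowerShell 7's default (utf8) writes without one.
Get-ChildItem E:\SSIS\Projects -Recurse -Include *.dtsx |
    ForEach-Object {
        ($_ | Get-Content -Raw).Replace('somestring','') |
            Set-Content -LiteralPath $_.FullName -Encoding utf8BOM
    }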