I'm using the unicode-show package to make an hspec test pass.
With plain show, this code fails:
...(show . Data.List.NonEmpty.head $ infiles2, outFile2) `shouldBe` ("\"/foo/bar/baz - ：.ogg\"", outFile1))...
expected: ("\"/foo/bar/baz - ：.ogg\"", "/foo/bar/fee.m4a")
but got: ("\"/foo/bar/baz - \\65306.ogg\"", "/foo/bar/fee.m4a")
but then this code passes:
...(Text.Show.Unicode.ushow . Data.List.NonEmpty.head $ infiles2, outFile2) `shouldBe` ("\"/foo/bar/baz - ：.ogg\"", outFile1))...
Test suite spec: RUNNING...
.........
Finished in 0.0115 seconds
9 examples, 0 failures, 2 pending
Test suite spec: PASS
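For anyone who wants to reproduce this outside my project, here's a stripped-down spec in the same shape (the module layout and names like infiles are invented for illustration, not my real code):

    module Main (main) where

    import qualified Data.List.NonEmpty as NE
    import           Test.Hspec
    import qualified Text.Show.Unicode  as U

    main :: IO ()
    main = hspec $
      describe "show vs ushow on a path with a fullwidth colon" $ do
        -- the interesting character is ： (U+FF1A, codepoint 65306)
        let infiles = NE.fromList ["/foo/bar/baz - ：.ogg"]

        it "show escapes the non-ASCII colon" $
          show (NE.head infiles) `shouldBe` "\"/foo/bar/baz - \\65306.ogg\""

        it "ushow keeps the character readable" $
          U.ushow (NE.head infiles) `shouldBe` "\"/foo/bar/baz - ：.ogg\""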
Why is this necessary? Why doesn't show
do the right thing here?
The implementors of show @String
had two design goals: produce valid Haskell code that 1. means the same thing as the input and 2. uses only 7-bit clean ASCII. (Remember, Haskell was designed back in the 90s-ish! We were still in the dark ages when not all software supported Unicode yet, notably terminals and email clients.) So all codepoints above 127 get escaped.
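You can watch both goals in action in GHCi (the last two lines assume the unicode-show package is installed; the path is just the one from your test):

    ghci> s = "/foo/bar/baz - ：.ogg"
    ghci> show s                    -- GHCi shows the result again, hence the doubled escaping
    "\"/foo/bar/baz - \\65306.ogg\""
    ghci> putStrLn (show s)         -- what show actually produced: ASCII-only, valid Haskell
    "/foo/bar/baz - \65306.ogg"
    ghci> read (show s) == s        -- goal 1: it still denotes the same String
    True
    ghci> import Text.Show.Unicode
    ghci> putStrLn (ushow s)        -- ushow drops goal 2 and stays human-readable
    "/foo/bar/baz - ：.ogg"

ushow still escapes quotes, backslashes, and unprintable characters; it only leaves printable non-ASCII characters alone, which is why your second expectation matches.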