
unicode().decode('utf-8', 'ignore') raising UnicodeEncodeError


Here is the code:

>>> z = u'\u2022'.decode('utf-8', 'ignore')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'latin-1' codec can't encode character u'\u2022' in position 0: ordinal not in range(256)

Why is UnicodeEncodeError raised when I am using .decode?

Why is any error raised when I am using 'ignore'?


See also: Why does ENcoding a string result in a DEcoding error (UnicodeDecodeError)?, which covers the opposite situation.


Solution

  • When I first started messing around with Python strings and Unicode, it took me a while to understand the jargon of decode and encode too, so here's an explanation from an earlier post of mine that may help:


    Think of decoding as what you do to go from a regular bytestring to unicode and encoding as what you do to get back from unicode. In other words:

    You de-code a str to produce a unicode string (in Python 2)

    and en-code a unicode string to produce a str (in Python 2)

    So:

    unicode_char = u'\xb0'                       # a unicode object (the degree sign)

    encodedchar = unicode_char.encode('utf-8')   # unicode -> str: a UTF-8 bytestring

    encodedchar will contain the bytes that represent your character in the selected encoding (in this case, UTF-8).
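
    To go back the other way, you de-code that bytestring. Here is a minimal round-trip sketch (the decodedchar name is just for illustration):

    unicode_char = u'\xb0'                       # unicode object (the degree sign)

    encodedchar = unicode_char.encode('utf-8')   # unicode -> str ('\xc2\xb0')
    decodedchar = encodedchar.decode('utf-8')    # str -> unicode again

    decodedchar == unicode_char                  # True: the round trip is lossless

    That is also why the snippet in the question fails: in Python 2, calling .decode on something that is already unicode makes the interpreter encode it first with the default codec, and the 'ignore' argument only applies to the decode step, not to that implicit encode.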

    The same principle applies to Python 3. You de-code a bytes object to produce a str object. And you en-code a str object to produce a bytes object.
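
    A minimal Python 3 sketch of the same round trip (names again only illustrative):

    unicode_char = '\xb0'                        # str (text) in Python 3

    encodedchar = unicode_char.encode('utf-8')   # str -> bytes (b'\xc2\xb0')
    decodedchar = encodedchar.decode('utf-8')    # bytes -> str again

    decodedchar == unicode_char                  # True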