I am working on changing DICOM headers using pydicom.
PatientName contains Korean characters, so I changed PatientName to English.
After that, an "ISO_IR 149" error is raised when I try to save.
I looked at the part of charset.py related to ISO_IR 149, but I could not solve the problem.
How can I solve this?
import os
import tempfile

import pydicom

suffix = '.dcm'
filename_little_endian = tempfile.NamedTemporaryFile(suffix=suffix).name
filename_big_endian = tempfile.NamedTemporaryFile(suffix=suffix).name

# Read the source file and replace the Korean patient name with English.
ds = pydicom.dcmread("003.dcm")
ds.PatientName = "Patient1"
print(ds.PatientName)

print("Writing test file", filename_little_endian)
ds.save_as(filename_little_endian)
print("File saved.")

# Write the same dataset again as Explicit VR Big Endian.
ds.file_meta.TransferSyntaxUID = pydicom.uid.ExplicitVRBigEndian
ds.is_little_endian = False
ds.is_implicit_VR = False
print("Writing test file as Big Endian Explicit VR", filename_big_endian)
ds.save_as(filename_big_endian)

# Reopen the data just for checking, then remove the created files.
for filename in (filename_little_endian, filename_big_endian):
    print('Load file {} ...'.format(filename))
    ds = pydicom.dcmread(filename)
    print(ds)
    print('Remove file {} ...'.format(filename))
    os.remove(filename)
The error message is as follows:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\tag.py", line 30, in tag_in_exception
    yield
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\filewriter.py", line 521, in write_dataset
    write_data_element(fp, dataset.get_item(tag), dataset_encoding)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\filewriter.py", line 464, in write_data_element
    writer_function(buffer, data_element, encodings=encodings)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\filewriter.py", line 264, in write_PN
    val = [elem.encode(encodings) for elem in val]
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\filewriter.py", line 264, in <listcomp>
    val = [elem.encode(encodings) for elem in val]
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\valuerep.py", line 763, in encode
    return _encode_personname(self.components, encodings)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\valuerep.py", line 594, in _encode_personname
    for group in comp.split('^')]
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\valuerep.py", line 594, in <listcomp>
    for group in comp.split('^')]
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\charset.py", line 274, in encode_string
    encoded = value.encode(encoding)
LookupError: unknown encoding: ISO_IR 149

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/admin/PycharmProjects/Cylinder/OpenHeader-dicom.py", line 15, in <module>
    ds.save_as(filename_little_endian)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\dataset.py", line 1108, in save_as
    pydicom.dcmwrite(filename, self, write_like_original)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\filewriter.py", line 888, in dcmwrite
    write_dataset(fp, get_item(dataset, slice(0x00010000, None)))
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\filewriter.py", line 521, in write_dataset
    write_data_element(fp, dataset.get_item(tag), dataset_encoding)
  File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pydicom\tag.py", line 37, in tag_in_exception
    raise type(ex)(msg)
LookupError: With tag (0010, 0010) got exception: unknown encoding: ISO_IR 149
The answer is simple but probably dissatisfying: DICOM does not support ISO_IR 149 directly. It does support Korean through a character set with code extension techniques, and the defined term for Korean characters in DICOM is "ISO 2022 IR 149". Maybe using UTF-8 (ISO_IR 192) is another option for you?
See PS3.3, C.12.1.1.2.
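As a minimal sketch of the fix (using the same "003.dcm" input as in your code; the output filename is just illustrative), you could rewrite Specific Character Set (0008,0005) to one of the supported defined terms before saving:

import pydicom

ds = pydicom.dcmread("003.dcm")

# Option 1: Korean via ISO 2022 code extensions. Value 1 is the default
# (ASCII) repertoire, value 2 enables KS X 1001 as an extension.
ds.SpecificCharacterSet = ['ISO 2022 IR 6', 'ISO 2022 IR 149']

# Option 2: switch the whole dataset to UTF-8 instead:
# ds.SpecificCharacterSet = 'ISO_IR 192'

ds.PatientName = "Patient1"  # plain ASCII encodes fine with either option
ds.save_as("003_fixed.dcm")  # no LookupError now

With a valid defined term in (0008,0005), pydicom knows which Python codec to use when it encodes PatientName on save. Since your new name is plain ASCII, both options work; ISO_IR 192 has the advantage of covering Korean and everything else without escape sequences.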