convmv - converts filenames from one encoding to another
convmv [options] FILE(S) ... DIRECTORY(S)
-f ENCODING
specify the encoding from which the filename(s) should be converted
-t ENCODING
specify the encoding to which the filename(s) should be converted
--exec command
execute the given command; #1 will be substituted by the old and #2 by the new filename. Example:
convmv -f latin1 -t utf-8 -r --exec "echo #1 should be renamed to #2" path/to/files
--nosmart
by default convmv skips files that already appear to be UTF-8 encoded; --nosmart will force conversion to UTF-8 even for such files, which might result in "double encoded UTF-8" (see section below).
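For example, to force conversion even of names that already look like UTF-8 (the path is a placeholder):
convmv -f latin1 -t utf-8 --nosmart -r path/to/files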
--dotlessi
care about the dotless i/I issue: a converted lowercase "I" stays dotless while an uppercase "i" stays dotted, which matters for Turkish and Azeri. By the way: The superscript dot of the letter i was added in the Middle Ages to distinguish the letter (in manuscripts) from adjacent vertical strokes in such letters as u, m, and n. J is a variant form of i which emerged at this time and subsequently became a separate letter.
convmv is meant to help convert a single filename, a directory tree and the contained files, or a whole filesystem into a different encoding. It only converts the filenames, not the content of the files. A special feature of convmv is that it also takes care of symlinks: if a symlink's target is converted, the symlink target pointer is converted as well.
All this comes in very handy when one wants to switch over from old 8-bit locales to UTF-8 locales. It is also possible to convert directories to UTF-8 which are already partly UTF-8 encoded: convmv is able to detect whether certain files are already UTF-8 encoded and will skip them by default. To turn this smartness off, use the --nosmart switch.
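For example, a recursive conversion from an old latin1 locale could look like this (the path is a placeholder; the first run is a test run, the second one actually renames):
convmv -f latin1 -t utf-8 -r path/to/files
convmv -f latin1 -t utf-8 -r --notest path/to/files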
An interoperability issue that comes with UTF-8 locales is this: Linux and (most?) other Unix-like operating systems use the so-called normalization form C (NFC) for their UTF-8 encoding by default but do not enforce this. Darwin, the base of Mac OS, enforces normalization form D (NFD), where a few characters are encoded differently. On OS X it's not possible to create NFC UTF-8 filenames because this is prevented at the filesystem layer. On other systems convmv is able to convert files from NFC to NFD or vice versa, which makes interoperability with such systems a lot easier.
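For instance, to normalize UTF-8 filenames copied from a Mac (NFD) into NFC, a run like this should do (the path is a placeholder):
convmv -f utf-8 -t utf-8 --nfc -r path/to/files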
Sometimes it might happen that you "double-encoded" certain filenames, for example if the file names were already UTF-8 encoded and you accidentally did another conversion from some charset to UTF-8. You can simply undo that by converting in the opposite direction: the from-charset has to be UTF-8 and the to-charset has to be the from-charset you previously, accidentally, used. You should verify that you get the correct results by doing the conversion without --notest first. The --qfrom option might also be helpful, because double-encoded UTF-8 file names might screw up your terminal if they are printed: they often contain control sequences which do funny things with your terminal window. If you are not sure which charset was accidentally converted from, using --qfrom is a good way to figure out the required encoding without destroying the file names for good.
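For example, if latin1 names were mistakenly converted to UTF-8 a second time, undoing the damage might look like this (charset and path are illustrative assumptions):
convmv -f utf-8 -t latin1 --qfrom -r path/to/files
Once the test output looks sane, repeat the run with --notest to actually rename.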
If no correct "character set" variable has been set in the smb.conf of Samba 2.x, files created by Win* clients are stored in the client's codepage, e.g. cp850 for Western European languages. As a result, files containing non-ASCII characters look screwed up if you "ls" them on the Unix server. If you change the "character set" variable afterwards to iso8859-1, newly created files are okay, but the old files are still screwed up in the Windows encoding. In this case convmv can also be used to convert the old Samba-shared files from cp850 to iso8859-1.
By the way: Samba 3.x finally maps to UTF-8 filenames by default, so when you migrate from Samba 2 to Samba 3 you might also have to convert your file names.
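Such a cleanup could look like this (the share path is a placeholder; add --notest once the test run is correct):
convmv -f cp850 -t iso8859-1 -r /path/to/share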
Almost all POSIX filesystems do not care how filenames are encoded; here are some exceptions:
Unlike other POSIX filesystems, RFC 3530 (NFSv4) mandates UTF-8 but also says: "The nfs4_cs_prep profile does not specify a normalization form. A later revision of this specification may specify a particular normalization form." In other words, if you want to use NFSv4 you might find the conversion and normalization features of convmv quite useful.
The Journaling Filesystem (JFS) encodes filenames internally in UTF-16. The operating system has to convert to and from the charset of the current locale, which has to be specified via the iocharset mount option. Any filename containing character sequences which are not valid in this encoding cannot be created. Running different locales on one filesystem may therefore result in filename problems if JFS is used, and converting between different encodings is likely to fail on JFS. You might set iocharset to an 8-bit encoding where all 255 characters are valid (like cp850) and then use any charset you want, but that's just an ugly workaround to get sane (in the sense of UNIX-like) behaviour, where you can create filenames containing anything but NUL and slash. My advice for most people is to use a different filesystem if possible.
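The workaround mentioned above could look roughly like this (device and mount point are placeholders; iocharset is the mount option named above):
mount -t jfs -o iocharset=cp850 /dev/sdXN /mnt/data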
Apple has modified UFS in a way which makes it impossible to create filenames in UTF-8 NFC; they will always be NFD. Creating filenames in other (non-UTF-8) encodings is not possible either. These hacks on UFS make Darwin a real crappy Unix.
locale(1) utf-8(7) charsets(7)
no bugs or fleas known
Bjoern JACKE
Send mail to bjoern [at] j3e.de for bug reports and suggestions.