Wednesday, April 05, 2006
Redhat 9 Sucks
I spent the better part of 3 hours at work on Friday trying to figure out why my Perl script wasn't working. I had an arbitrary collection of binary data skimmed from the network that was perfectly coherent and viable when stored in a hashtable. However, as soon as I decided to write this data to a file, random garbage was interspersed with the real data, resulting in a file on disk that was much larger than the data in memory, in addition to being total garbage. Since my work is on an isolated system, I can't post details of what I was seeing. Nevertheless, it was a very perplexing problem. Perplexing, that is, until I remembered having a similar problem a number of years ago with Perl on a Redhat 9 system...back when Redhat 9 was relatively new and still in favor with me.
After several futile Google searches to narrow down the problem, I decided to try the solution that I remembered from my prior disastrous experiences...open the file in binary mode. Even though the documentation explicitly says that this is only necessary on broken operating systems like Microsoft Windows, this simple modification made my script work flawlessly. I am not in the mood to run this issue into the ground much further, since I don't use Redhat products by choice anymore and this work project is nearing its completion. However, I did find out what the culprit most likely was.
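For anyone who hits the same thing, the fix looks something like the sketch below. This isn't my actual script (the hash, the data, and the filename are all made up); it's just the shape of it: a binmode call on the output handle before any binary data gets printed.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for the real data structure: binary blobs in a hash.
my %captured = ( frame1 => "\x00\xFF\x10\x80" );

open(my $out, '>', 'capture.bin') or die "open failed: $!";
binmode($out);    # the one-line fix: write raw bytes with no translation
print {$out} $captured{$_} for sort keys %captured;
close($out) or die "close failed: $!";
```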
At some point around the release of Redhat 9, Perl was going through its transition to Unicode support. Apparently, parts of that support were not completely functional in the Perl that shipped with Redhat 9. In particular, it seems that the print function was treating my binary data as text and encoding it as UTF-8 on the way to the file, translating bytes above 0x7F into their two-byte form. That would explain why I had extra data in my file on disk. Doesn't make me any happier though :(
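To show the kind of mangling I mean (using an explicit :utf8 layer here to stand in for whatever Redhat 9's Perl was doing on its own), a single high byte balloons into two bytes on disk unless the handle is in binary mode:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $byte = chr(0xFF);    # one byte of "binary" data

open(my $utf8, '>:utf8', 'utf8.bin') or die $!;
print {$utf8} $byte;     # encoded as UTF-8: two bytes, 0xC3 0xBF
close($utf8);

open(my $raw, '>', 'raw.bin') or die $!;
binmode($raw);           # binary mode: the byte passes through untouched
print {$raw} $byte;      # one byte, 0xFF
close($raw);

printf "utf8.bin: %d bytes, raw.bin: %d byte\n", -s 'utf8.bin', -s 'raw.bin';
```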