The data is loaded into the database using SQL*Loader, based on character positions. The loader has thrown the error "value too large for column" (actual: 51, maximum: 50).
I saw the error and tried to reproduce it on the shell with the actual data. I took the data from positions 9-58 and counted the characters. Other lines loaded successfully, yet this one produced the error. That is my confusion: how is it counting one more character?
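For what it's worth, one common off-by-one when counting on the shell is that `wc -c` counts bytes, not characters, and `echo` appends a trailing newline byte (the 50-character string below is just an illustration):

```shell
# printf without '\n' gives the raw byte count of the field
printf '%s' "12345678901234567890123456789012345678901234567890" | wc -c

# echo appends a newline, so the same field counts one byte more
echo "12345678901234567890123456789012345678901234567890" | wc -c
```

This explains a miscount at the shell prompt, though not necessarily what SQL*Loader itself is seeing.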
Okay, thanks a lot for helping me understand how wc -c works. But the confusion still remains on the SQL*Loader side: from the same data, other rows were loaded but some were not. Is it possible that other characters are present on the same line but are not visible to me at the shell prompt?
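One way to answer that is to dump the suspect line byte by byte; any invisible or high-bit bytes show up as escapes or octal values in the `od -c` output. This is a sketch: `datafile.dat` and line 42 are placeholders for your actual file and the rejected record's line number.

```shell
# Show every byte of line 42, including non-printable ones
sed -n '42p' datafile.dat | od -c

# Byte count of just the 50-column field (positions 9-58)
sed -n '42p' datafile.dat | cut -c9-58 | tr -d '\n' | wc -c
```

If the second command prints more than 50, the field contains bytes that don't correspond to visible characters at the prompt.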
I am using Solaris 10.
No Microsoft OS is involved in this, except that I use PuTTY on my Windows client machine to log in to the UNIX machine.
It's the plain ASCII character set that we use.
Yes, it's on the same computer, but the DB is on another machine; that shouldn't matter.
Hmm. Extended ASCII (above decimal 127, octal 0177). It probably came from a Microsoft system, or possibly from a foreign-language character set. There is potential for such characters to be converted to two-byte UTF-8 sequences by your SQL*Loader, or to some other multi-byte sequence (like the octal sequence cited).
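To find out whether any such bytes are hiding in the file, you can scan for anything outside printable 7-bit ASCII (space through tilde). A sketch, with `datafile.dat` as a placeholder; `LC_ALL=C` forces byte-wise matching:

```shell
# Print line numbers of every record containing a byte outside
# the printable ASCII range (0x20-0x7E)
LC_ALL=C grep -n '[^ -~]' datafile.dat
```

Any line it reports is a candidate for the rows SQL*Loader rejected; lines it does not report are pure printable ASCII.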
We know so little about your software that this is pure speculation.
This is a common problem in multi-platform system design.
I recommend that you run a systematic software re-test on a test system, paying particular attention to extended ASCII character sets.
The big decision is what to do with each Extended ASCII character such that the data loads correctly into your database. This is not trivial.
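As a starting point for that decision, two common approaches are to strip the high-bit bytes outright or to convert them explicitly before the load. This is only a sketch; the file names are placeholders, and the character-set names are assumptions about your setup (Solaris iconv may use different names, e.g. ISO8859-1):

```shell
# Option 1: delete every byte above octal 0177 (simple, but lossy)
tr -d '\200-\377' < datafile.dat > datafile.clean.dat

# Option 2: convert from a presumed Windows code page to the
# database character set, so each byte maps to a known character
iconv -f WINDOWS-1252 -t UTF-8 datafile.dat > datafile.utf8.dat
```

Note that option 2 can make fields longer in bytes (one extended character becomes a multi-byte UTF-8 sequence), which is exactly the actual-51-versus-maximum-50 symptom, so column widths may need revisiting either way.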