Wednesday, March 28, 2012

Importing Unicode data from an Oracle database

Hi everybody,

I am trying to import data from an Oracle database configured for Unicode.
When I run the package, I get this error message:

Error at Data Flow Task [Source - ACCOUNT]: Column "ACCOUNT_CODE" cannot convert between unicode and non-unicode string data types.

How can I resolve this?

Regards.|||Open up your package and set the data type of the ACCOUNT_CODE field to DT_WSTR.

-Jamie|||Thanks for your help.

I have changed the data type in the Advanced Editor, but now I get this error:
the output column "ACCOUNT_CODE" on the error output has properties that do not match the properties of its corresponding data source column.
The data appears as a string, and I cannot change the error output's data type.

How can I resolve this?

Thank you.|||Any luck here? I'm having the same problem: Unicode coming in, bound for a text file, so I change the output column's data type to string. The trouble is, the error output column's data type remains Unicode text stream, and I can't change it.|||Have you tried using the Data Conversion transformation component?|||

Indeed, using a Data Conversion component is the way to go. We do not perform implicit conversions.

We have had a number of requests for implicit conversions, and we are certainly looking at them. However, it is important to bear in mind that the more SSIS does implicitly, the fewer opportunities you have to capture and respond to errors at a fine grain of detail. Of course, there will be some business cases where such fine-grained error handling is less important than others; Unicode-to-ANSI conversion may well be one of them.
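
If you would rather not add a component, an explicit cast in the Oracle source query achieves much the same thing, since the provider then sees a national-character column from the start. A sketch, reusing the ACCOUNT_CODE column from earlier in the thread; the NVARCHAR2 length is an assumption:

SELECT CAST(ACCOUNT_CODE AS NVARCHAR2(50)) AS ACCOUNT_CODE  -- length assumed
FROM ACCOUNT;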

Donald

|||

I am importing Unicode data from Oracle 10g to SQL Server 2005. The source column is defined as VARCHAR2, the destination as NVARCHAR.

I have tried a standard OLE DB source and destination using the Microsoft Oracle driver, and also a Data Conversion component specifying the input and output columns as DT_WSTR. The data flow runs with no errors, yet the data loaded into SQL Server is incorrect.

In Oracle, using the DUMP function to inspect the column shows a value of 151. Inspecting the column in SQL Server after the import using the UNICODE function yields a value of 191.
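
For reference, checks along these lines reproduce what is described above (the table, column, and key names are illustrative, not taken from the actual package):

-- Oracle: show the byte values stored in the column
SELECT DUMP(ACCOUNT_CODE) FROM ACCOUNT WHERE ACCOUNT_ID = 1;

-- SQL Server: show the code point of the first character after the import
SELECT UNICODE(ACCOUNT_CODE) FROM ACCOUNT WHERE ACCOUNT_ID = 1;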

Any other suggestions as to what I might try? Thanks

|||

Could you check which Locale ID and code page are set on your source connection to Oracle?

Thanks.

|||

LocaleID is English (United States)

Codepage is 1252.

Using SQL2005 SP2.

thanks

|||

OK. And do you know which code page the strings in the Oracle database are supposed to be associated with? If it is not 1252 (ANSI Latin I), you can expect conversion errors.
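
If you are not sure, both the database character set and the national character set can be read from the Oracle data dictionary:

SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');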

Thanks.

|||

The Oracle character set is WE8ISO8859P1.

I strongly suspect the issue is the Microsoft Oracle OLE DB driver. The source columns can contain characters with the decimal values 150, 151, 152, and 153, which are extended ASCII.

My workaround is to translate these characters to standard ASCII characters within the source query, then translate them back after the insert into SQL Server completes.
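
A sketch of what that round trip might look like. It assumes the four values above are really the Windows-1252 punctuation characters (en dash, em dash, small tilde, trade mark sign), that the chosen placeholder characters never occur in the real data, and that the table and column names are illustrative:

-- Oracle source query: swap each extended character for an unused ASCII placeholder
SELECT TRANSLATE(ACCOUNT_CODE,
                 CHR(150) || CHR(151) || CHR(152) || CHR(153),
                 '{}[]') AS ACCOUNT_CODE
FROM ACCOUNT;

-- SQL Server, after the load: restore the intended Unicode characters
UPDATE ACCOUNT
SET ACCOUNT_CODE =
    REPLACE(REPLACE(REPLACE(REPLACE(ACCOUNT_CODE,
        '{', NCHAR(8211)),   -- en dash (Windows-1252 150)
        '}', NCHAR(8212)),   -- em dash (151)
        '[', NCHAR(732)),    -- small tilde (152)
        ']', NCHAR(8482));   -- trade mark sign (153)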

Thanks for your assistance.

|||Hi Marc,

How do you translate extended ASCII characters to standard ASCII characters within the source query?

This data import is giving me a lot of headaches!!! :-((

Your help will be much appreciated.

Thank you,

Nicole
|||Has anyone successfully gotten this to work? We started running into this issue last week when we began testing a move from Oracle 9.2.0.8 to 10.2.0.3 while migrating to SQL Server 2005.

Here is my character set for 9i:

NLS_NCHAR_CHARACTERSET  AL16UTF16
NLS_NUMERIC_CHARACTERS  .,
NLS_CHARACTERSET        WE8ISO8859P1

10g

NLS_NCHAR_CHARACTERSET  AL16UTF16
NLS_NUMERIC_CHARACTERS  .,
NLS_CHARACTERSET        AL32UTF8

Are these the settings everyone else is using? 10g defaults to the newer character set.

Thanks for any help.