I need to generate a fixed-width file with some columns in packed-decimal (COMP-3) format and others in normal numeric format. I was able to generate the file, zip it, and pass it on to the mainframe team. They imported the file, unzipped it, and converted it to EBCDIC. The packed-decimal columns came through without any problem, but the normal numeric fields seem to have been messed up and are unreadable. Is there something specific I need to do while processing/zipping my file before sending it to the mainframe? I am using COMP-3 packed decimal. I am currently working on Windows XP, but production will run on RHEL.
Thanks in advance for helping me out. This is urgent.
Edited on 06 June 2011:
This is how it looks when I switch on HEX.
. . . . . . . . . . A . .
333333333326004444
210003166750C0000
The 'A' in the first row has a slight accent so it is not the actual upper case A.
210003166 is the raw decimal identifier. The value of the packed-decimal field before COMP-3 conversion is 000000002765000 (we can ignore the leading zeroes if required).
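For reference, the HEX ON display prints each byte vertically: the zone (high) nibble on the top row and the digit (low) nibble below it. A small sketch (the character encodings are my assumption, not from the original post) reproduces the first nine columns of the rows above and shows why ASCII digits display a zone of 3 while EBCDIC digits would display F:

```java
import java.io.UnsupportedEncodingException;

public class HexOn {
    // Print a byte array the way mainframe HEX ON does: zone nibbles on one
    // row, digit nibbles on the row below.
    static String[] hexRows(byte[] data) {
        StringBuilder zone = new StringBuilder(), digit = new StringBuilder();
        for (byte b : data) {
            zone.append(Integer.toHexString((b >> 4) & 0xF));
            digit.append(Integer.toHexString(b & 0xF));
        }
        return new String[] { zone.toString(), digit.toString() };
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        String id = "210003166";
        // ASCII/Latin-1 digits are 0x30-0x39, so the zone row is all 3s
        String[] ascii = hexRows(id.getBytes("ISO-8859-1"));
        System.out.println(ascii[0]); // 333333333
        System.out.println(ascii[1]); // 210003166
        // EBCDIC (Cp037) digits are 0xF0-0xF9, so the zone row would be all Fs
        String[] ebcdic = hexRows(id.getBytes("Cp037"));
        System.out.println(ebcdic[0]); // fffffffff
        System.out.println(ebcdic[1]); // 210003166
    }
}
```

The zone row of 3s matches the display above, which suggests the identifier reached the mainframe still encoded in ASCII.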
UPDATE 2: 7th June 2011
This is how I am creating the file that gets loaded onto the mainframe. The file contains two columns: an identification number and an amount. The identification number does not require COMP-3 conversion; the amount does. The COMP-3 conversion is performed on the Oracle SQL side. Here is the query that performs the conversion:
Select nvl(IDENTIFIER,' ') as IDENTIFIER, nvl(utl_raw.cast_to_varchar2(comp3.convert(to_number(AMOUNT))),'0') as AMOUNT from TABLEX where IDENTIFIER = 123456789
After executing the query, I do the following in Java:
String query = "Select nvl(IDENTIFIER,' ') as IDENTIFIER, nvl(utl_raw.cast_to_varchar2(comp3.convert(to_number(AMOUNT))),'0') as AMOUNT from TABLEX where IDENTIFIER = 210003166"; // the select query with the COMP-3 conversion
ResultSet rs = getConnection().createStatement().executeQuery(query);
StringBuffer appendedValue = new StringBuffer(200000);
while (rs.next()) {
    appendedValue.append(rs.getString("IDENTIFIER"))
                 .append(rs.getString("AMOUNT"));
}
File toWriteFile = new File("C:/transformedFile.txt");
FileWriter writer = new FileWriter(toWriteFile, true);
writer.write(appendedValue.toString());
//writer.write(System.getProperty(ComponentConstants.LINE_SEPERATOR));
writer.flush();
writer.close();
The text file thus generated is manually zipped with WinZip and provided to the mainframe team, who load it onto the mainframe and browse it with HEX ON.
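For what it's worth, the zip step itself should not disturb the bytes: zip is a binary container, so an archive round-trips EBCDIC and COMP-3 bytes unchanged. A minimal sketch using java.util.zip (my assumption as a stand-in for the manual WinZip step) demonstrates the round trip on a few mixed EBCDIC/packed bytes:

```java
import java.io.*;
import java.util.zip.*;

public class ZipRoundTrip {
    public static void main(String[] args) throws IOException {
        // A few EBCDIC digit bytes followed by COMP-3 bytes; zipping must
        // return them bit-for-bit identical.
        byte[] original = { (byte) 0xF2, (byte) 0xF1, (byte) 0xF0,
                            0x02, 0x76, 0x50, 0x00, 0x0C };

        ByteArrayOutputStream zipped = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(zipped)) {
            zos.putNextEntry(new ZipEntry("transformedFile.txt"));
            zos.write(original);
            zos.closeEntry();
        }

        byte[] restored;
        try (ZipInputStream zis =
                 new ZipInputStream(new ByteArrayInputStream(zipped.toByteArray()))) {
            zis.getNextEntry();
            restored = zis.readAllBytes();
        }
        System.out.println(java.util.Arrays.equals(original, restored)); // true
    }
}
```

So if the bytes are wrong after unzipping, they were already wrong when the file was written, or a text-mode transfer altered them afterwards.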
Now, coming to the conversion of the upper four bits of the zoned decimal: should I be doing it before writing the value to the file, or should the flipping be applied at the mainframe end? For now, I have done the flipping at the Java end with the following code:
private static final int positiveDiff = 'A' - '1'; // positive overpunch: '1'..'9' -> 'A'..'I'
private static final int negativeDiff = 'J' - '1'; // negative overpunch: '1'..'9' -> 'J'..'R'

public static String toZoned(String num) {
    if (num == null) {
        return "";
    }
    String ret = num.trim();
    if (ret.equals("") || ret.equals("-") || ret.equals("+")) {
        // throw ...
        return "";
    }
    char lastChar = ret.charAt(ret.length() - 1);
    if (lastChar < '0' || lastChar > '9') {
        // last character is not a digit; leave the value unchanged
    } else if (ret.startsWith("-")) {
        if (lastChar == '0') {
            lastChar = '}';
        } else {
            lastChar = (char) (lastChar + negativeDiff);
        }
        ret = ret.substring(1, ret.length() - 1) + lastChar;
    } else {
        if (ret.startsWith("+")) {
            ret = ret.substring(1);
        }
        if (lastChar == '0') {
            lastChar = '{';
        } else {
            lastChar = (char) (lastChar + positiveDiff);
        }
        ret = ret.substring(0, ret.length() - 1) + lastChar;
    }
    return ret;
}
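To sanity-check the overpunch mapping, here is a condensed, self-contained version of the same logic with the offsets I am assuming (positiveDiff = 'A' - '1', negativeDiff = 'J' - '1'), exercised on a few values:

```java
public class ZonedDemo {
    // Assumed overpunch offsets: '1'..'9' map to 'A'..'I' when positive
    // and 'J'..'R' when negative; a trailing zero maps to '{' / '}'.
    static final int positiveDiff = 'A' - '1'; // 16
    static final int negativeDiff = 'J' - '1'; // 25

    static String toZoned(String num) {
        String ret = num.trim();
        if (ret.isEmpty() || ret.equals("-") || ret.equals("+")) return "";
        boolean negative = ret.startsWith("-");
        if (negative || ret.startsWith("+")) ret = ret.substring(1);
        char last = ret.charAt(ret.length() - 1);
        if (last == '0')                     last = negative ? '}' : '{';
        else if (last >= '1' && last <= '9')
            last = (char) (last + (negative ? negativeDiff : positiveDiff));
        return ret.substring(0, ret.length() - 1) + last;
    }

    public static void main(String[] args) {
        System.out.println(toZoned("210003166"));  // 21000316F
        System.out.println(toZoned("-210003166")); // 21000316O
        System.out.println(toZoned("120"));        // 12{
        System.out.println(toZoned("-120"));       // 12}
    }
}
```

Note these are the ASCII stand-in characters; they still need to be re-encoded to EBCDIC before the mainframe sees them.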
The identifier becomes 21000316F at the Java end, and that is what gets written to the file. I have passed the file on to the mainframe team and am awaiting the output with HEX ON. Do let me know if I am missing something. Thanks.
UPDATE 3: 9th Jun 2011
OK, I have got the mainframe results. This is what I am doing now:
public static void main(String[] args) throws FileNotFoundException {
// TODO Auto-generated method stub
String myString = new String("210003166");
byte[] num1 = new byte[16];
try {
PackDec.stringToPack("000000002765000",num1,0,15);
System.out.println("array size: " + num1.length);
} catch (DecimalOverflowException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
} catch (DataException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
byte[] ebc = null;
try {
ebc = myString.getBytes("Cp037");
} catch (UnsupportedEncodingException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
PrintWriter pw = new PrintWriter("C:/transformationTextV1.txt");
pw.printf("%x%x%x%x%x%x%x%x%x",ebc[0],ebc[1],ebc[2],ebc[3],ebc[4], ebc[5], ebc[6], ebc[7], ebc[8]);
pw.printf("%x%x%x%x%x%x%x%x%x%x%x%x%x%x%x",num1[0],num1[1],num1[2],num1[3],num1[4], num1[5], num1[6], num1[7],num1[8], num1[9],num1[10], num1[11],num1[12], num1[13], num1[14],num1[15]);
pw.close();
}
And I get the following output:
Á.Á.Á.Á.Á.Á.Á.Á.Á.................Ä
63636363636363636333333333333333336444444444444444444444444444444444444444444444
62616060606361666600000000000276503000000000000000000000000000000000000000000000
I must be doing something very wrong!
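Looking at that output, the problem appears to be that PrintWriter.printf("%x", ...) writes the hex digits as ASCII text characters, not the raw bytes themselves, so every byte becomes two text bytes on disk. A minimal demonstration (masking with & 0xFF to format each byte as unsigned):

```java
import java.io.UnsupportedEncodingException;

public class PrintfProblem {
    public static void main(String[] args) throws UnsupportedEncodingException {
        byte[] ebc = "21".getBytes("Cp037"); // two raw bytes: 0xF2 0xF1
        String written = String.format("%x%x", ebc[0] & 0xFF, ebc[1] & 0xFF);
        // "%x" emits the hex digits as text: the characters 'f','2','f','1'
        // -- four bytes on disk instead of the two raw bytes 0xF2 0xF1.
        System.out.println(written);          // f2f1
        System.out.println(written.length()); // 4
    }
}
```

Writing the byte arrays directly with an OutputStream, as in Update 4 below, avoids this entirely.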
UPDATE 4: 14th Jun 2011
This was resolved using James's suggestion. I am currently using the code below, and it gives me the expected output:
public static void main(String[] args) throws IOException {
// TODO Auto-generated method stub
String myString = new String("210003166");
byte[] num1 = new byte[16];
try {
PackDec.stringToPack("02765000",num1,0,8);
} catch (DecimalOverflowException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
} catch (DataException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
byte[] ebc = null;
try {
ebc = myString.getBytes("Cp037");
} catch (UnsupportedEncodingException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
FileOutputStream writer = new FileOutputStream("C:/transformedFileV3.txt");
writer.write(ebc,0,9);
writer.write(num1,0,8);
writer.close();
}
As you are coding in Java and you require a mix of EBCDIC and COMP-3 in your output, you will need to do the Unicode-to-EBCDIC conversion in your own program.
You cannot leave this up to the file transfer utility, as it will corrupt your COMP-3 fields.
But luckily you are using Java, so it is easy using the getBytes method of the String class.
Working Example:
package com.tight.tran;
import java.io.*;
import name.benjaminjwhite.zdecimal.DataException;
import name.benjaminjwhite.zdecimal.DecimalOverflowException;
import name.benjaminjwhite.zdecimal.PackDec;
public class worong {
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
// TODO Auto-generated method stub
String myString = new String("210003166");
byte[] num1 = new byte[16];
try {
PackDec.stringToPack("000000002765000",num1,0,15);
System.out.println("array size: " + num1.length);
} catch (DecimalOverflowException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
} catch (DataException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
byte[] ebc = null;
try {
ebc = myString.getBytes("Cp037");
} catch (UnsupportedEncodingException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
FileOutputStream writer = new FileOutputStream("C:/transformedFile.txt");
writer.write(ebc,0,9);
writer.write(num1,0,15);
writer.close();
}
}
Produces (for me!):
0000000: f2f1 f0f0 f0f3 f1f6 f600 0000 0000 0000 ................
0000010: 0000 0000 2765 000c 0d0a ....'e....
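For completeness: PackDec comes from the third-party name.benjaminjwhite.zdecimal library, but the COMP-3 layout itself is simple, two BCD digits per byte with the sign in the final low nibble (0xC positive, 0xD negative, 0xF unsigned). A minimal stand-in packer (my own sketch, not the library's code) reproduces the 2765000 field from the dump above:

```java
public class Comp3 {
    // Minimal COMP-3 packer: digits stored two per byte, right-aligned,
    // with the sign occupying the last (low) nibble.
    static byte[] pack(String digits, boolean negative, int outLen) {
        byte[] out = new byte[outLen];      // left-padded with 0x00
        int nibble = outLen * 2 - 1;        // last nibble holds the sign
        out[outLen - 1] = (byte) (negative ? 0x0D : 0x0C);
        nibble--;
        for (int i = digits.length() - 1; i >= 0 && nibble >= 0; i--, nibble--) {
            int d = digits.charAt(i) - '0';
            if (nibble % 2 == 1) out[nibble / 2] |= (byte) d;        // low nibble
            else                 out[nibble / 2] |= (byte) (d << 4); // high nibble
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] p = pack("02765000", false, 8);
        StringBuilder hex = new StringBuilder();
        for (byte b : p) hex.append(String.format("%02x", b));
        System.out.println(hex); // 000000000276500... wrong? no: 000000002765000c
    }
}
```

The eight output bytes 00 00 00 00 27 65 00 0c match the tail of the hex dump shown above.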