Here’s a bit of innocuous code. It was being compiled with gcc -fexec-charset=1047, so all character and string literals were being treated as EBCDIC. For example, ‘0’ is ‘\xF0’.
if (c >= '0' && c <= '9')
    c -= '0';
else if (c >= 'A' && c <= 'Z')
    c -= 'A' - 10;
else if (c >= 'a' && c <= 'z')
    c -= 'a' - 10;
Specifying the charset is not enough to port this code to the mainframe. The problem is that EBCDIC is completely braindead, and DOESN’T PUT THE FRIGGEN LETTERS TOGETHER!
The letters are clustered in groups: a–i (0x81–0x89), j–r (0x91–0x99), s–z (0xA2–0xA9), and likewise A–I (0xC1–0xC9), J–R (0xD1–0xD9), S–Z (0xE2–0xE9), with whole piles of crap between each range of characters. So comparisons like c >= ‘A’ && c <= ‘Z’ are useless, as are constructions like c -= ‘A’ - 10, since any c in J–R or S–Z will break them.
Now I have a big hunt-and-destroy task ahead of me. I can fix this code, but where else are problems like this lurking?
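One portable fix (a sketch of one option, not necessarily what I’ll end up doing; the function name is mine) is to stop relying on contiguous character ranges entirely and look characters up in explicit tables instead, which works in any execution character set:

```c
#include <string.h>

/* Convert a digit or letter to its numeric value without assuming the
 * letters are contiguous in the execution character set.  strchr()
 * finds the character's position in an explicit table, so this works
 * in EBCDIC's fragmented a-i/j-r/s-z layout as well as in ASCII. */
int char_value(int c)
{
    static const char digits[] = "0123456789";
    static const char upper[]  = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    static const char lower[]  = "abcdefghijklmnopqrstuvwxyz";
    const char *p;

    if (c == '\0')
        return -1;                  /* strchr() would match the NUL */
    if ((p = strchr(digits, c)) != NULL)
        return (int)(p - digits);
    if ((p = strchr(upper, c)) != NULL)
        return (int)(p - upper) + 10;
    if ((p = strchr(lower, c)) != NULL)
        return (int)(p - lower) + 10;
    return -1;                      /* not a digit or letter */
}
```

The price is a linear scan per character; a 256-entry lookup table built once at startup would do the same job in constant time.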
May 7, 2020
blocked, COBOL, COBOL FD, DATASET, DDNAME, DSNAME, EBCDIC, eye wash station, fixed, JCL, job step, mainframe, organization, variable
I encountered customer COBOL code today with a file declaration of the following form:
000038 SELECT AUSGABE ASSIGN TO UR-S-AUSGABE
000039 ACCESS IS SEQUENTIAL.
000056 FD AUSGABE
000057 RECORDING F
000058 BLOCK 0 RECORDS
000059 LABEL RECORDS OMITTED.
where the program’s JCL used an AUSGABE (German “output”) DDNAME of the following form:
The SELECT looked completely wrong to me, as I thought that SELECT is supposed to have the form:
SELECT cobol-file-variable-name ASSIGN TO ddname
That’s the syntax that my Murach’s Mainframe COBOL uses, and also what I’d seen in big-blue’s documentation.
However, in this customer’s code, the identifier UR-S-AUSGABE is longer than 8 characters, so it sure didn’t look like a DDNAME. I preprocessed the code looking to see if UR-S-AUSGABE was hiding in a copybook (mainframe lingo for an include file), but it wasn’t. How on Earth did this work when it was compiled and run on the original mainframe?
It turns out that [LABEL-]S- or [LABEL-]AS- are prefixes that really old COBOL code used to specify the file organization (something like PL/I’s ENV(ORGANIZATION) clause for FILEs). This works on the mainframe because a “modern” mainframe COBOL compiler strips off the LABEL- prefix, if specified, and the organization prefix S- as well, essentially treating those identifier fragments as “comments”.
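The stripping rule can be sketched in C (a hypothetical helper of my own naming, and a simplification of whatever the compiler actually does): any dash-separated tokens before the S or AS organization token are a discardable label prefix, and whatever follows that token is the DDNAME:

```c
#include <string.h>

/* Hypothetical sketch of the ASSIGN-name reduction described above:
 * "UR-S-AUSGABE" -> "AUSGABE".  The optional label tokens ("UR-") and
 * the organization token ("S-" or "AS-") are stripped; what remains is
 * treated as the DDNAME.  Not real compiler code. */
const char *assign_to_ddname(const char *assign)
{
    const char *p = assign;
    const char *dash;

    while ((dash = strchr(p, '-')) != NULL) {
        size_t toklen = (size_t)(dash - p);
        if ((toklen == 1 && p[0] == 'S') ||
            (toklen == 2 && p[0] == 'A' && p[1] == 'S'))
            return dash + 1;        /* organization token found */
        p = dash + 1;               /* skip a label token */
    }
    return assign;                  /* nothing to strip */
}
```

Under that reading, UR-S-AUSGABE, S-AUSGABE, and plain AUSGABE all name the same DDNAME.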
For anybody reading this who has only programmed in a sane programming language, on sane operating systems, this all probably sounds like verbal diarrhea. What on Earth are a file organization and a DDNAME? Do I really have to care about those just to access a file? Well, on the mainframe, yes, you do.
These mysterious dependencies highlight a number of reasons why COBOL code is hard to migrate. COBOL isn’t just a programming language; it is tied to the mainframe by lots of historical baggage in ways that are very difficult to extricate. Even just to understand how to open a file in mainframe COBOL, you face a whole pile of obstacles along the learning curve:
- You don’t just run the program in a shell, passing in arguments, but you have to construct a JCL job step to do so. This specifies parameters, environment variables, file handles, and other junk.
- You have to know what a DDNAME is. This is like a HANDLE in the JCL code that refers to a file. The file has a filename (DSNAME), but you don’t typically use that. Instead the JCL’s job step declares an arbitrary DDNAME to refer to that handle, and the program that is run in that job step has to always refer to the file using that abstract handle.
- The file has all sorts of esoteric attributes that you have to know about to access it properly (fixed, variable, blocked, record length, block size, …). The program that accesses the file typically has to make sure that these attributes are all encoded with the equivalent language specific syntax.
- Files on the mainframe are typically not just byte streams, but can have internal structure as complicated as a simple database (keyed records, with special access modes for initialization vs. access/modify).
- To make life extra “fun”, files come in a variety of EBCDIC code pages. In some cases these can’t be converted to single-byte ISO-8859-X code pages, so you have to use UTF-8, and you can get into trouble if you want to do round-trip conversions.
- Because of the internal structure of a mainframe file, you may not be able to transfer it to a sane operating system unless special steps are taken. For example, a variable format file with binary data would typically have to be converted to a fixed format representation so that it’s possible to seek from record to record.
- Within the (COBOL) code you have three sets of attributes that you have to specify to “declare” a file, before you can even attempt to open it: the DDNAME to COBOL-file-name mapping (SELECT), the FD clause (file properties), and finally record declarations (global variables that mirror the file data record structure that you have to use to read and write the file.)
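To give non-mainframe readers a feel for the DDNAME indirection (my analogy only; this is not how z/OS actually resolves DDNAMEs), the closest everyday Unix equivalent is opening a file through a name the caller supplies in the environment, rather than a hardcoded path:

```c
#include <stdio.h>
#include <stdlib.h>

/* Analogy only: a JCL DD statement binds a dataset to a DDNAME, and
 * the program opens the DDNAME, never the dataset name itself.  The
 * nearest everyday Unix equivalent is indirection through the
 * environment: the caller, not the program, decides what "SYSUT1"
 * points at. */
FILE *open_dd(const char *ddname)
{
    const char *path = getenv(ddname);  /* the "DD statement" */
    if (path == NULL)
        return NULL;                    /* DDNAME not "allocated" */
    return fopen(path, "rb");
}
```

In real JCL the binding would come from a DD statement in the job step, and the COBOL or C program would only ever mention the DDNAME.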
You can’t just learn to program COBOL the way you would any sane programming language; you also have to learn all the mainframe concepts that the COBOL code depends on. Make sure you are close enough to your eyewash station before you start!
The release notes for the latest z/OS C/C++ compiler are interesting. When I was at IBM they were working on “clangtana”, a clang frontend melded with the legacy TOBY backend. This really surprised me, but was consistent with the fact that the IBM compiler guys kept saying that they were continually losing their internal funding — that project was a clever way to do more with less resources. I think they’d made the clangtana switch for zLinux by the time I left, with AIX to follow once they had resolved some ABI incompatibility issues. At the time, I didn’t know (nor care) about the status of that project on z/OS.
Well, years later, it looks like they’ve now switched to a clang based compiler frontend on z/OS too. This major change appears to have a number of side effects that I can imagine will be undesirable to existing mainframe customers:
- Compiler now requires POSIX(ON) and Unix System Services. No more compilation using JCL.
- Compiler support for 31-bit applications appears to be dropped (64-bit only!)
- Support for C, FASTLINK, and OS linkage conventions has been dropped (XPLINK only.)
- Only ibm-1047 is supported for both source and runtime character set encoding.
- C89 support appears to have been dropped.
- Hex floating support has been dropped.
- No decimal floating point support.
- SIMD support isn’t implemented.
- Metal C support has been dropped.
In other words, if you want C++14, you have to be willing to give up a lot to get it. They must be using an older clang, because this “new” compiler doesn’t include C++17 support. I’m surprised that they didn’t even manage multiple character-set support for this first compiler release.
It is interesting that they’ve also dropped IPA and PDF support, and that the optimization options have changed. Does that mean that they’ve actually not only dropped the old Montana frontend, but also gutted the whole backend, switching to clang exclusively?
Suppose you wanted to do the equivalent of the following Unix shell code on the mainframe in JCL:
head -1 < UT128.SYSOUT.EXPECTED > $TID.$CID.SYSOUT.ACT
head -1 < UT128.COBPRINT.EXPECTED > $TID.$CID.COBPRINT.ACT
Here’s the JCL equivalent of this pair of one-liners:
There are probably shorter ways to do this, but the naive way weighs in at 22:2 lines for JCL:Unix — damn!
I can’t help but add a punny comment that knowing JCL must once have been really good JOB security.
April 19, 2018
batch, COBOL, dependencies, EBCDIC, editors, ftp, git, loadmodule, LzLabs, mainframe, make, PDS, PL/I, t3270
Once upon a time, in a land far from any modern developers, were languages named COBOL and PL/I, which generated programs that were consumed by a beast known as Mainframe. Developers for those languages compiled and linked their applications huddled around strange luminous green screens and piles of hole-filled paper while chanting vaguely Latin-sounding incantations like “Om-padre-JCL-beget-loadmodule-pee-dee-ess.”
In these ancient times, version control tools like git were not available. There was no notion of makefiles, so compiling and linking was a batch process, with no dependency tracking and no parallelism. Developers used printf-style debugging, logging trace information to files. In order to keep the uninitiated from talking to the Mainframe, files were called datasets. In order to use graphical editors, developers had to repeatedly feed their source to the Mainframe using a slave named ftp, while praying that the evil demon EBCDIC-conversion didn’t mangle their work. The next day, they could go back and see if Mainframe had accepted their offering.
[TO BE CONTINUED.]
Incidentally, as of a couple days ago, I’ve now been working for LzLabs for 2 years. My work is not yet released, nor announced, so I can’t discuss it here yet, but it can be summarized as really awesome. I’m still having lots of fun with my development work, even if I have to talk in languages that the beast understands.