C/C++ development and debugging.

C++ compiler diagnostic gone horribly wrong: error: explicit specialization in non-namespace scope

September 23, 2022 C/C++ development and debugging.

Here is a g++ error message that took me an embarrassingly long time to figure out:

In file included from /home/llvm-project/llvm/lib/IR/Constants.cpp:15:
/home/llvm-project/llvm/lib/IR/LLVMContextImpl.h:447:11: error: explicit specialization in non-namespace scope ‘struct llvm::MDNodeKeyImpl<llvm::DIBasicType>’
 template <> struct MDNodeKeyImpl<DIStringType> {
           ^

This is the code:

template <> struct MDNodeKeyImpl<DIStringType> {
  unsigned Tag;
  MDString *Name;
  Metadata *StringLength;
  Metadata *StringLengthExp;
  Metadata *StringLocationExp;
  uint64_t SizeInBits;
  uint32_t AlignInBits;
  unsigned Encoding;

This specialization isn’t materially different from the one that preceded it:

template <> struct MDNodeKeyImpl<DIBasicType> {
  unsigned Tag;
  MDString *Name;
  MDString *PictureString;
  uint64_t SizeInBits;
  uint32_t AlignInBits;
  unsigned Encoding;
  unsigned Flags;
  Optional<DIBasicType::DecimalInfo> DecimalAttrInfo;

  MDNodeKeyImpl(unsigned Tag, MDString *Name, MDString *PictureString,
               uint64_t SizeInBits, uint32_t AlignInBits, unsigned Encoding,
                unsigned Flags,
                Optional<DIBasicType::DecimalInfo> DecimalAttrInfo)
      : Tag(Tag), Name(Name), PictureString(PictureString),
        SizeInBits(SizeInBits), AlignInBits(AlignInBits), Encoding(Encoding),
        Flags(Flags), DecimalAttrInfo(DecimalAttrInfo) {}
  MDNodeKeyImpl(const DIBasicType *N)
      : Tag(N->getTag()), Name(N->getRawName()), PictureString(N->getRawPictureString()), SizeInBits(N->getSizeInBits()),
        AlignInBits(N->getAlignInBits()), Encoding(N->getEncoding()),
        Flags(N->getFlags(), DecimalAttrInfo(N->getDecimalInfo()) {}

  bool isKeyOf(const DIBasicType *RHS) const {
    return Tag == RHS->getTag() && Name == RHS->getRawName() &&
           PictureString == RHS->getRawPictureString() &&
           SizeInBits == RHS->getSizeInBits() &&
           AlignInBits == RHS->getAlignInBits() &&
           Encoding == RHS->getEncoding() && Flags == RHS->getFlags() &&
           DecimalAttrInfo == RHS->getDecimalInfo();
  }

  unsigned getHashValue() const {
    return hash_combine(Tag, Name, SizeInBits, AlignInBits, Encoding);
  }
};

However, there is an error hiding above it on this line:

        Flags(N->getFlags(), DecimalAttrInfo(N->getDecimalInfo()) {}

i.e. a single missing closing parenthesis in the initializer for the Flags member, a consequence of a cut and paste during rebase that clobbered that one character when adding a comma after it.
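With that parenthesis restored, the initializer reads:

        Flags(N->getFlags()), DecimalAttrInfo(N->getDecimalInfo()) {}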

It turns out that the compiler was giving me a hint that something was wrong before this in the message:

error: explicit specialization in non-namespace scope

as it states that the scope is:

‘struct llvm::MDNodeKeyImpl<llvm::DIBasicType>’

which is the previous class definition. Inspection of the code made me think that the scope was ‘namespace llvm {…}’, and I’d gone looking for a rebase error that incorrectly terminated that llvm namespace scope. This is a classic example of not paying enough attention to what is in front of you, and going off on hunches instead. I didn’t understand the compiler message, but in retrospect, “non-namespace scope” meant that something in that scope was incomplete. The compiler wasn’t smart enough to tell me that the previous specialization had never been terminated, thanks to the missing parenthesis, but it did tell me that something was wrong in that previous specialization (which it named explicitly), and I didn’t look at it because of my “what the hell does that mean” reaction to the compilation error message.

In this case, I was building on RHEL8.3, which uses an ancient GCC toolchain. I wonder if newer versions of g++ fare better (e.g., a message like “possibly unterminated parenthesis on line …” would have been much nicer)? I wasn’t able to try with clang++, as I was building llvm+clang+lldb (V14), and had uninstalled all of the llvm-related toolchain to avoid interference.

The C compiler is too forgiving! sizeof(variable_name+1) allowed?

April 28, 2022 C/C++ development and debugging.

I carelessly passed:

sizeof(s.st_size+1)

to an allocator call, instead of:

s.st_size+1

and corrupted memory nicely.

What the hell would sizeof(variable+1) even mean, and why on earth would the compiler think that is anything close to valid? Both gcc and clang, each with -Wall, are completely quiet about this error!
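For the record, sizeof can be applied to an arbitrary expression, and yields the size of the expression’s type without evaluating it. Since s.st_size is an off_t, sizeof(s.st_size+1) is just sizeof(off_t), typically 8, no matter how large the file is, which is why the compilers have nothing to complain about. Here’s a minimal sketch of the same mistake (the file name and variable names are made up):

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main() {
    struct stat s;
    if (stat("somefile", &s) != 0)
        return 1;

    /* sizeof does not evaluate its operand; it reports the size of the
       expression's type, so this allocates sizeof(off_t) bytes (usually 8),
       regardless of the file size: */
    char *bad = (char *)malloc(sizeof(s.st_size + 1));

    /* what was intended: one byte per byte of the file, plus a terminator: */
    char *good = (char *)malloc(s.st_size + 1);

    printf("%zu vs %lld\n", sizeof(s.st_size + 1), (long long)(s.st_size + 1));

    free(bad);
    free(good);
    return 0;
}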

Debugging a C coding error from an XPLINK assembly listing.

March 12, 2021 C/C++ development and debugging, Mainframe

There are at least two\({}^1\) z/OS C calling conventions, the traditional “LE” OSLINK calling convention, and the newer\({}^2\) XPLINK convention.  In the LE calling convention, parameters aren’t passed in registers, but in an array pointed to by R1.  Here’s an example of an OSLINK call to strtof():

*  float strtof(const char *nptr, char **endptr);
LA       r0,ep(,r13,408)
LA       r2,buf(,r13,280)
LA       r4,#wtemp_1(,r13,416)
L        r15,=V(STRTOF)(,r3,4)
LA       r1,#MX_TEMP3(,r13,224)
ST       r4,#MX_TEMP3(,r13,224)
ST       r2,#MX_TEMP3(,r13,228)
ST       r0,#MX_TEMP3(,r13,232)
BASR     r14,r15
LD       f0,#wtemp_1(,r13,416)

R1 points to R13 + 224 (a location on the stack). If the original call was:

float f = strtof( mystring, &err );

The compiler has internally translated it into something of the form:

STRTOF( &f, mystring, &err );

where all of {&f, mystring, &err} are stuffed into the memory starting at the 224(R13) location. Afterwards the value has to be loaded from memory into a floating point register (F0) so that it can be used.  Compare this to a Linux strtof() call:

* char * e = 0;
* float x = strtof( "1.0", &e );
  400b74:       mov    $0x400ef8,%edi       ; first param is address of "1.0"
  400b79:       movq   $0x0,0x8(%rsp)       ; e = 0;
  400b82:       lea    0x8(%rsp),%rsi       ; second param is &e
  400b87:       callq  400810 <strtof@plt>  ; call the function, returning a value in %xmm0

Here the input parameters are RDI, RSI, and the output is XMM0. Nice and simple. Since XPLINK was designed for C code, we expect it to be more sensible. Let’s see what an XPLINK call looks like. Here’s a call to fmodf:

*      float r = fmodf( 10.0f, 3.0f );
            LD       f0,+CONSTANT_AREA(,r9,184)
            LD       f2,+CONSTANT_AREA(,r9,192)
            L        r7,#Save_ADA_Ptr_9(,r4,2052)
            L        r6,=A(__fmodf)(,r7,76)
            L        r5,=A(__fmodf)(,r7,72)
            BASR     r7,r6
            NOP      9
            LDR      f2,f0
            STE      f2,r(,r4,2144)
*
*      printf( "fmodf: %g\n", (double)r );

There are some curious details that would have to be explored to understand the code above (why f0, f2, and not f0, f1?), however, the short story is that all the input and output values are passed in (floating point) registers.

The mystery that led me to looking at this was a malfunctioning call to strtof:

*      float x = strtof( "1.0q", &e );
            LA       r2,e(,r4,2144)
            L        r7,#Save_ADA_Ptr_12(,r4,2052)
            L        r6,=A(strtof)(,r7,4)
            L        r5,=A(strtof)(,r7,0)
            LA       r1,+CONSTANT_AREA(,r9,20)
            BASR     r7,r6
            NOP      17
            LR       r0,r3
            CEFR     f2,r0
            STE      f2,x(,r4,2148)
*
*      printf( "strtof: v: %g\n", x );

The CEFR instruction converts an integer to a (hfp32) floating point representation, so we appear to have strtof returning its value in R3, which is an integer register. That then gets copied into R0, and finally into F2 (and after that, into a stack spill location before the printf call.) I scratched my head about this code for quite a while, trying to figure out if the compiler had some mysterious way to make this work that I wasn’t seeing. Eventually, I clued in. I’m so used to using a C++ compiler that I forgot about the old-style implicit int return for an unprototyped function. But I had included <stdlib.h> in this code, so strtof should have been prototyped? However, the Language Runtime reference specifies that on z/OS you need an additional define to have strtof visible:

#define _ISOC99_SOURCE
#include <stdlib.h>

Without the additional define, the call to strtof() is as if it was prototyped as:

int strtof( const char *, char ** );

My expectation is that with such a correction, the call to strtof() should return its value in F0, just like fmodf() does. The result should also not be garbage!
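Off the mainframe, the same failure mode can be simulated. Here’s a hypothetical x86-64 Linux sketch (not the z/OS code): deliberately declaring strtof with an int return plays the role of the implicit-int prototype, so the caller reads the integer return register (EAX here, R3 on z/OS), while the real strtof left its result in a floating point register:

/* ABI-mismatch sketch: compile with g++, which will warn about the
   mismatched builtin declaration -- that mismatch is exactly the point. */
#include <stdio.h>

extern "C" int strtof(const char *nptr, char **endptr); /* deliberately wrong */

int main() {
    char *end;
    float x = strtof("1.0", &end); /* reads leftover bits from the int     */
                                   /* register, then converts them to float */
    printf("strtof: %g\n", x);     /* typically prints garbage, not 1       */
    return 0;
}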

 

Footnotes:

  1.  There is also a “metal” compiler and probably a different calling convention to go with that.  I don’t know how metal differs from XPLINK.
  2. Newer in the lifetime of the mainframe means circa 2001, which is bleeding edge given the speed that mainframe development moves.

Visualizing the 3D Mandelbrot set.

February 1, 2021 C/C++ development and debugging.

In “Geometric Algebra for Computer Science” is a fractal problem based on a vectorization of the Mandelbrot equation, which allows for generalization to \( N \) dimensions.

I finally got around to trying the 3D variation of this problem.  Recall that the Mandelbrot set is a visualization of iteration of the following complex number equation:
\begin{equation}
z \rightarrow z^2 + c,
\end{equation}
where the idea is that \( z \) starts as the constant \( c \), and if this sequence remains bounded, then the point \( c \) is in the set.

The idea in the problem is that this equation can be cast as a vector equation, instead of a complex number equation. All we have to do is set \( z = \Be_1 \Bx \), where \( \Be_1 \) is the x-axis unit vector, and \( \Bx \) is an \(\mathbb{R}^2\) vector. Expanding in coordinates, with \( \Bx = \Be_1 x + \Be_2 y \), we have
\begin{equation}
z
= \Be_1 \lr{ \Be_1 x + \Be_2 y }
= x + \Be_1 \Be_2 y,
\end{equation}
but since the bivector \( \Be_1 \Be_2 \) squares to \( -1 \), we can represent complex numbers as even grade multivectors. Making the same substitution in the Mandelbrot equation, we have
\begin{equation}
\Be_1 \Bx \rightarrow \Be_1 \Bx \Be_1 \Bx + \Be_1 \Bc,
\end{equation}
or
\begin{equation}
\Bx \rightarrow \Bx \Be_1 \Bx + \Bc.
\end{equation}
Voilà! This is a vector version of the Mandelbrot equation, and we can use it in 2 or 3 or N dimensions, as desired.  Observe that despite all the vector products above, the result is still a vector, since \( \Bx \Be_1 \Bx \) is the geometric algebra form of a reflection of \( \Bx \) about the x-axis.
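Since \( \Bx \Be_1 \Bx = 2 \lr{ \Bx \cdot \Be_1 } \Bx - \Bx^2 \Be_1 \), in 3D coordinates the map is \( (x, y, z) \rightarrow \lr{ x^2 - y^2 - z^2, 2 x y, 2 x z } + \Bc \), which reduces to the familiar complex squaring \( \lr{ x^2 - y^2, 2 x y } \) in 2D. Here’s a minimal escape-time sketch of that iteration (my own reduction, not the Pauli matrix code mentioned below):

// Escape-time iteration of x -> x e1 x + c, using the coordinate form
// x e1 x = (x^2 - y^2 - z^2, 2xy, 2xz).
#include <array>

using Vec3 = std::array<double, 3>;

// Number of iterations before |x|^2 exceeds the escape radius squared.
int escapeCount(const Vec3 &c, int maxIter = 100) {
    Vec3 x = c;
    for (int i = 0; i < maxIter; ++i) {
        double n2 = x[0] * x[0] + x[1] * x[1] + x[2] * x[2];
        if (n2 > 4.0)
            return i; // escaped: c is not in the set
        x = {x[0] * x[0] - x[1] * x[1] - x[2] * x[2] + c[0],
             2 * x[0] * x[1] + c[1],
             2 * x[0] * x[2] + c[2]};
    }
    return maxIter; // presumed in the set
}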

The problem with generalizing this from 2D is really one of visualization. How can we visualize a 3D Mandelbrot set? One idea I had was to use ray tracing, so that only the points on the surface from the desired viewpoint need be evaluated. I don’t think I’ve ever written a ray tracer, but I thought that there has got to be a quick and dirty way to do this.  Also, figuring out how to make a ray tracer interact with an irregular surface like this is probably non-trivial!

What I did instead was a brute force evaluation of all the points in the upper half plane, in and around the origin, one slice of the set at a time. Here’s the result:

Code for the visualization can be found on github. I’ve used Pauli matrices to do the evaluation, which is actually pretty quick (but slower than plain std::complex<double> evaluation), and the C++ ImageMagick API to save individual png files for the slices. There are better coloring schemes for the Mandelbrot set, and if I scrounge up some more time, I may try one of those instead.

As your T.A., I have to punish you …

December 19, 2020 C/C++ development and debugging.

Back in university, I had to implement a reverse polish notation calculator in a software engineering class.  Overall the assignment was pretty stupid, and I entertained myself by writing a very compact implementation.  It worked perfectly, but I got a 25/40 (62.5%) grade on it.  That mark was well deserved, although I did not think so at the time.

The grading remarks were actually some of the best feedback that I ever received, and also really funny to boot.  I no longer know the name of that TA, but I took his advice to heart, and kept his grading remarks on my wall in my IBM office for years.  They served as an excellent reminder not to write overcomplicated code.

Today, I found those remarks again, and am posting them for posterity.  Enjoy!

Transcription for easy reading

  • It is obvious that you are a very clever person, but this program is like a big puzzle, and in understanding it, I appreciated it and enjoyed it, because of your cleverness. However much I enjoyed it, it is nonetheless a very poorly designed program.
  • A program should be constructed in the easiest and simplest to understand manner because when you construct very large programs the “complexity” of them will increase greatly.
  • A program should not be an intricate puzzle, where you show off how clever you are.
  • Your string class is an elephant gun trying to kill a mouse.
  • macros Build_binary_op and Binary_op are the worst examples of programming style I have ever seen in my entire life!  Very clever, but a cardinal sin of programming style.
  • Your binary_expr constructor does all the computation.  Not good style.
  • Your “expr” class is a baroque mess.
  • Although I enjoyed your program, never write a program like this in your life again.  As your T.A., I have to punish you so that you do not develop bad habits in the future.  I hate to do it, but I can only give you 25/40 for this “clever puzzle”.

Reflection.

The only part of this feedback that I would refute was the comment about the string class.  That was actually a pretty good string implementation.  I didn’t write it because I was a vicious mouse hunter, but because I hit a porting issue with pre-std:: C++.  In particular, we had two sets of Solaris machines available to us, and I was using one that had a compiler that included a nice C++ string class.  So, naturally I used it.  For submission, our code had to compile and run on a different Solaris machine, and lo and behold, the string class that all my code was based on was not available.

What I should have done (20/20 hindsight) was throw out my horrendous code and start over from scratch.  However, I took the more fun approach, and wrote my own string class so that my code would compile on either machine.

Amusingly, when I worked on IBM LUW, there was a part of the query optimizer code that seemed to have learned all its tricks from the ugly macros and token pasting that I did in this assignment.  It was truly gross, but there was 10000x more of it than in my assignment.  Having been thoroughly punished for my atrocities, I easily recognized this code for the evil it was.  The only way that you could debug that optimizer code was by running it through the preprocessor, cutting and pasting the results, and filtering that cut and paste through something like cindent (these days you would probably use clang-format.)  That code was brutal, and I always wished that its authors had had the good luck of having a TA like mine.  That code is probably still part of LUW, terrorizing developers.  Apparently the justification for it was that it was originally written by an IBM researcher using templates, but templates couldn’t be used in DB2 code because we didn’t have compilers on all platforms that supported them at the time.

I have used token pasting macros very judiciously and sparingly in the 26 years since I originally used them in this assignment, and I do think that there are a few good uses for that sort of generative code.  However, if you do have to write that sort of code, I think it’s better to write perl (or some other language) code that generates understandable code that can be debugged, instead of relying on token pasting.
