clang

Tagged V3 of my toy compiler (playing with the MLIR -> LLVM-IR toolchain)

June 3, 2025 clang/llvm


I’ve added a number of elements to the language and compiler:

  • comparison operators (<, <=, EQ, NE) yielding BOOL values.  These work for any combination of floating point and integer types (including BOOL.)

  • integer bitwise operators (OR, AND, XOR).  These work only for integer types (including BOOL.)

  • a NOT operator, yielding BOOL.

  • Array + string declaration and lowering support, including debug instrumentation, and print support for string variables.

This version also fixes a few specific issues:

  • Fixed -g/-OX propagation to lowering.  If -g is not specified, the DI (debug info) is no longer generated.
  • Show the optimized .ll with --emit-llvm instead of the just-lowered .ll (unless the assembly printer isn’t being invoked, since that’s where the LLVM-IR optimization passes are registered.)
  • Reorganize the grammar so that all the simple lexer tokens are last.  Rename a bunch of the tokens, introducing some consistency.
  • calculator.td: introduce IntOrFloat constraint type, replacing AnyType usage; array decl support, and string support.
  • driver: writeLL helper function, pass -g to lowering if set.
  • parser: handle large integer constants properly, array decl support, and string support.
  • simplest.cpp: This MWE is updated to include a global variable and global variable access.
  • parser: implicit exit: use the last saved location, instead of the module start.  This means the line numbers no longer jump around at the very end of the program (i.e., at the implicit return/exit).

I started with the comparison operators, thinking that I’d add if statement support, and loops, but got sidetracked.  In particular, I generated a number of really large test programs, and without some way to print a string message, it was hard to figure out where an error was occurring.  This led to implementing PRINT support for string variables as an interesting feature first.

As a side effect of adding STRING support, I’ve also got declaration support for arbitrary fixed-size arrays of any type.  I haven’t implemented array access yet, but that probably won’t be too hard.

Here’s an example of a program that uses STRING variables:

STRING t[2];
STRING u[3];
STRING s[2];
INT8 s2;
s = "hi";
PRINT s;
t = "hi";
PRINT t;
u = "hi";
PRINT u;
u = "bye";
PRINT u;

This is the MLIR for the program:

module {
  toy.program {
    "toy.declare"() <{name = "t", size = 2 : i64, type = i8}> : () -> ()
    "toy.declare"() <{name = "u", size = 3 : i64, type = i8}> : () -> ()
    "toy.declare"() <{name = "s", size = 2 : i64, type = i8}> : () -> ()
    "toy.declare"() <{name = "s2", type = i8}> : () -> ()
    toy.string_assign "s" = "hi"
    %0 = toy.load "s" : !llvm.ptr
    toy.print %0 : !llvm.ptr
    toy.string_assign "t" = "hi"
    %1 = toy.load "t" : !llvm.ptr
    toy.print %1 : !llvm.ptr
    toy.string_assign "u" = "hi"
    %2 = toy.load "u" : !llvm.ptr
    toy.print %2 : !llvm.ptr
    toy.string_assign "u" = "bye"
    %3 = toy.load "u" : !llvm.ptr
    toy.print %3 : !llvm.ptr
    toy.exit
  }
}

It’s a bit clunky, because I cheated and didn’t try to implement PRINT support for string literals directly. I thought that since I already had variable support (which emits llvm.alloca), I could trivially change that alloca from a scalar to an array.

This did turn out to be a relatively easy way to do it, but it still took much more effort than I expected.

The DeclareOp builder is fairly straightforward:

// for scalars
builder.create<toy::DeclareOp>( loc, builder.getStringAttr( varName ),
                                mlir::TypeAttr::get( ty ), nullptr );
...
// for arrays
auto sizeAttr = builder.getI64IntegerAttr( arraySize );
builder.create<toy::DeclareOp>( loc, builder.getStringAttr( varName ),
                                mlir::TypeAttr::get( ty ), sizeAttr );

This matches the Optional size now added to DeclareOp for arrays:

def Toy_DeclareOp : Op<Toy_Dialect, "declare"> {
  let summary = "Declare a variable or array, specifying its name, type (integer or float), and optional size.";
  let arguments = (ins StrAttr:$name, TypeAttr:$type, OptionalAttr<I64Attr>:$size);
  let results = (outs);
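
As an aside, here’s a minimal sketch (hypothetical helper, not my actual lowering code) of how that optional size attribute can drive the eventual llvm.alloca creation; the only real difference between scalars and arrays is the element count:

// Sketch only: assumes the generated getTypeAttr()/getSizeAttr() accessors for DeclareOp.
mlir::Value lowerDeclareToAlloca( mlir::ConversionPatternRewriter& rewriter, mlir::Location loc,
                                  toy::DeclareOp declareOp )
{
    mlir::Type elemType = declareOp.getTypeAttr().getValue();

    int64_t numElems = 1;                      // scalar: a single element
    if ( auto sizeAttr = declareOp.getSizeAttr() )
    {
        numElems = sizeAttr.getInt();          // array: N elements of elemType
    }

    auto count = rewriter.create<mlir::LLVM::ConstantOp>( loc, rewriter.getI64Type(),
                                                          rewriter.getI64IntegerAttr( numElems ) );
    auto ptrType = mlir::LLVM::LLVMPointerType::get( rewriter.getContext() );

    // llvm.alloca of numElems x elemType, yielding an opaque pointer.
    return rewriter.create<mlir::LLVM::AllocaOp>( loc, ptrType, elemType, count );
}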

There’s a new AssignStringOp complementing AssignOp:

def Toy_AssignOp : Op<Toy_Dialect, "assign"> {
  let summary = "Assign a (non-string) value to a variable associated with a declaration";
  let arguments = (ins StrAttr:$name, AnyType:$value);
  let results = (outs);

  // toy.assign "x", %0 : i32
  let assemblyFormat = "$name `,` $value `:` type($value) attr-dict";
}

def Toy_AssignStringOp : Op<Toy_Dialect, "string_assign"> {
  let summary = "Assign a string literal to an i8 array variable";
  let arguments = (ins StrAttr:$name, Builtin_StringAttr:$value);
  let results = (outs);
  let assemblyFormat = "$name `=` $value attr-dict";
}
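
On the parser side, emitting the new op is just a matter of passing the variable name and the literal along as string attributes.  Something like this (a sketch with illustrative names, not necessarily my exact parser code):

// varName and literalText are whatever the parser collected for `u = "bye";`
builder.create<toy::AssignStringOp>( loc, builder.getStringAttr( varName ),
                                     builder.getStringAttr( literalText ) );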

I also feel this is a kludge. What I probably really want is a string literal type like flang’s. Here’s a Fortran hello world:

program hello
  print *, "Hello, world!"
end program hello

and selected parts of the flang fir dialect MLIR for it:

    %4 = fir.declare %3 typeparams %c13 {fortran_attrs = #fir.var_attrs<parameter>, uniq_name = "_QQclX48656C6C6F2C20776F726C6421"} : (!fir.ref<!fir.char<1,13>>, index) -> !fir.ref<!fir.char<1,13>>
    %5 = fir.convert %4 : (!fir.ref<!fir.char<1,13>>) -> !fir.ref<i8>
    %6 = fir.convert %c13 : (index) -> i64
    %7 = fir.call @_FortranAioOutputAscii(%2, %5, %6) fastmath<contract> : (!fir.ref<i8>, !fir.ref<i8>, i64) -> i1
  ...

  fir.global linkonce @_QQclX48656C6C6F2C20776F726C6421 constant : !fir.char<1,13> {
    %0 = fir.string_lit "Hello, world!"(13) : !fir.char<1,13>
    fir.has_value %0 : !fir.char<1,13>
  }

I had to specialize the LoadOp builder too so that it didn’t create a scalar load. That code looks like:

mlir::Type varType;
mlir::Type elemType = declareOp.getTypeAttr().getValue();
        
if ( declareOp.getSizeAttr() )    // Check if size attribute exists
{       
    // Array: load a generic pointer 
    varType = mlir::LLVM::LLVMPointerType::get( builder.getContext(), /*addressSpace=*/0 );
}       
else    
{       
    // Scalar: load the value
    varType = elemType;
}       
        
auto value = builder.create<toy::LoadOp>( loc, varType, builder.getStringAttr( varName ) );

Lowering didn’t require too much. I needed a print function object:

auto ptrType = LLVM::LLVMPointerType::get( ctx );
auto printFuncStringType = LLVM::LLVMFunctionType::get( LLVM::LLVMVoidType::get( ctx ),
                                                        { pr_builder.getI64Type(), ptrType }, false );
pr_printFuncString = pr_builder.create<LLVM::LLVMFuncOp>( pr_module.getLoc(), "__toy_print_string",
                                                          printFuncStringType, LLVM::Linkage::External );
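
The string flavour of the print lowering then just calls that declaration.  A sketch of the idea (not the verbatim pattern; numElems and strPtr stand in for the length and pointer recovered from the matched ops):

// Call __toy_print_string( length, ptr ) for a string-typed PRINT.
mlir::Value len = rewriter.create<LLVM::ConstantOp>( loc, rewriter.getI64Type(),
                                                     rewriter.getI64IntegerAttr( numElems ) );
rewriter.create<LLVM::CallOp>( loc, pr_printFuncString, mlir::ValueRange{ len, strPtr } );
rewriter.eraseOp( op );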

With LoadOp now possibly having a pointer-valued result, the lowering handles both cases:

if ( loadOp.getResult().getType().isa<mlir::LLVM::LLVMPointerType>() )
{           
    // Return the allocated pointer
    LLVM_DEBUG( llvm::dbgs() << "Loading array address: " << allocaOp.getResult() << '\n' );
    rewriter.replaceOp( op, allocaOp.getResult() );
}           
else        
{           
    // Scalar load
    auto load = rewriter.create<LLVM::LoadOp>( loc, elemType, allocaOp );
    LLVM_DEBUG( llvm::dbgs() << "new load op: " << load << '\n' );
    rewriter.replaceOp( op, load.getResult() );
}           

The assign-string lowering basically just generates a memcpy from a global:

Type elemType = allocaOp.getElemType();
int64_t numElems = 0;
if ( auto constOp = allocaOp.getArraySize().getDefiningOp<LLVM::ConstantOp>() )
{           
    auto intAttr = mlir::dyn_cast<IntegerAttr>( constOp.getValue() );
    numElems = intAttr.getInt();
}           
LLVM_DEBUG( llvm::dbgs() << "numElems: " << numElems << '\n' );
LLVM_DEBUG( llvm::dbgs() << "elemType: " << elemType << '\n' );

if ( !mlir::isa<mlir::IntegerType>( elemType ) || elemType.getIntOrFloatBitWidth() != 8 )
{           
    return rewriter.notifyMatchFailure( assignOp, "string assignment requires i8 array" );
}           
if ( numElems == 0 )
{           
    return rewriter.notifyMatchFailure( assignOp, "invalid array size" );
}           

size_t strLen = value.size();
size_t copySize = std::min( strLen + 1, static_cast<size_t>( numElems ) );
if ( strLen > static_cast<size_t>( numElems ) )
{           
    return rewriter.notifyMatchFailure( assignOp, "string too large for array" );
}           

mlir::LLVM::GlobalOp globalOp = lState.lookupOrInsertGlobalOp( rewriter, value, loc, copySize, strLen );

auto globalPtr = rewriter.create<LLVM::AddressOfOp>( loc, globalOp ); 

auto destPtr = allocaOp.getResult();

auto sizeConst = 
    rewriter.create<LLVM::ConstantOp>( loc, rewriter.getI64Type(), rewriter.getI64IntegerAttr( copySize ) );

rewriter.create<LLVM::MemcpyOp>( loc, destPtr, globalPtr, sizeConst, rewriter.getBoolAttr( false ) );

rewriter.eraseOp( op );

I used globals like those we’d find in clang-generated LLVM-IR. For example, here’s a small C program that copies a string literal:

#include <string.h>

int main()
{
    const char* s = "hi there";
    char buf[100];
    memcpy( buf, s, strlen( s ) + 1 );

    return strlen( buf );
}

for which clang’s (unoptimized) LLVM-IR looks like:

@.str = private unnamed_addr constant [9 x i8] c"hi there\00", align 1

; Function Attrs: noinline nounwind optnone uwtable
define dso_local i32 @main() #0 {
  %1 = alloca i32, align 4
  %2 = alloca ptr, align 8
  %3 = alloca [100 x i8], align 16
  store i32 0, ptr %1, align 4
  store ptr @.str, ptr %2, align 8
  %4 = getelementptr inbounds [100 x i8], ptr %3, i64 0, i64 0
  %5 = load ptr, ptr %2, align 8
  %6 = load ptr, ptr %2, align 8
  %7 = call i64 @strlen(ptr noundef %6) #3
  %8 = add i64 %7, 1
  call void @llvm.memcpy.p0.p0.i64(ptr align 16 %4, ptr align 1 %5, i64 %8, i1 false)
  %9 = getelementptr inbounds [100 x i8], ptr %3, i64 0, i64 0
  %10 = call i64 @strlen(ptr noundef %9) #3
  %11 = trunc i64 %10 to i32
  ret i32 %11
}

My lowered LLVM-IR for the program is similar:

@str_1 = private constant [3 x i8] c"bye"
@str_0 = private constant [2 x i8] c"hi"

declare void @__toy_print_f64(double)

declare void @__toy_print_i64(i64)

declare void @__toy_print_string(i64, ptr)

define i32 @main() {
  %1 = alloca i8, i64 2, align 1
  %2 = alloca i8, i64 3, align 1
  %3 = alloca i8, i64 2, align 1
  %4 = alloca i8, i64 1, align 1
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %3, ptr align 1 @str_0, i64 2, i1 false)
  call void @__toy_print_string(i64 2, ptr %3)
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %1, ptr align 1 @str_0, i64 2, i1 false)
  call void @__toy_print_string(i64 2, ptr %1)
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %2, ptr align 1 @str_0, i64 3, i1 false)
  call void @__toy_print_string(i64 3, ptr %2)
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %2, ptr align 1 @str_1, i64 3, i1 false)
  call void @__toy_print_string(i64 3, ptr %2)
  ret i32 0
}
...

I managed my string literals with a simple hash map, avoiding duplication when a literal is repeated:

mlir::LLVM::GlobalOp lookupOrInsertGlobalOp( ConversionPatternRewriter& rewriter, mlir::StringAttr& stringLit,
                                             mlir::Location loc, size_t copySize, size_t strLen )
{       
    mlir::LLVM::GlobalOp globalOp;
    auto it = pr_stringLiterals.find( stringLit.str() );
    if ( it != pr_stringLiterals.end() )
    {       
        globalOp = it->second;
        LLVM_DEBUG( llvm::dbgs() << "Reusing global: " << globalOp.getSymName() << '\n' ); 
    }       
    else    
    {       
        auto savedIP = rewriter.saveInsertionPoint();
        rewriter.setInsertionPointToStart( pr_module.getBody() );

        auto i8Type = rewriter.getI8Type();
        auto arrayType = mlir::LLVM::LLVMArrayType::get( i8Type, copySize );

        SmallVector<char> stringData( stringLit.begin(), stringLit.end() );
        if ( copySize > strLen )
        {       
            stringData.push_back( '\0' ); 
        }       
        auto denseAttr = DenseElementsAttr::get( RankedTensorType::get( { static_cast<int64_t>( copySize ) }, i8Type ),
                                                 ArrayRef<char>( stringData ) );

        std::string globalName = "str_" + std::to_string( pr_stringLiterals.size() );
        globalOp =
            rewriter.create<LLVM::GlobalOp>( loc, arrayType, true, LLVM::Linkage::Private, globalName, denseAttr );
        globalOp->setAttr( "unnamed_addr", rewriter.getUnitAttr() );

        pr_stringLiterals[stringLit.str()] = globalOp;
        LLVM_DEBUG( llvm::dbgs() << "Created global: " << globalName << '\n' );

        rewriter.restoreInsertionPoint( savedIP );
    }           

    return globalOp;
}         

Without the insertion point swaperoo, this GlobalOp creation doesn’t work, as the global has to be created at the ModuleOp level, where the symbol table lives.
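
The same save/restore dance can also be written with an RAII guard (a sketch of the alternative, not the code above):

// mlir::OpBuilder::InsertionGuard restores the previous insertion point when it
// goes out of scope, replacing the explicit save/restore pair.
{
    mlir::OpBuilder::InsertionGuard guard( rewriter );
    rewriter.setInsertionPointToStart( pr_module.getBody() );
    // ... create the LLVM::GlobalOp at module scope ...
}
// back at the original insertion point here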

 

… anyways, it looks like I’m droning on.  There’s been lots of stuff to get this far, but there are still many many things to do before what I’ve got even qualifies as a basic programming language (if statements, loops, functions, array assignments, types, …)

The C compiler is too forgiving! sizeof(variable_name+1) allowed?

April 28, 2022 C/C++ development and debugging.

I carelessly passed:

sizeof(s.st_size+1)

to an allocator call, instead of:

s.st_size+1

and corrupted memory nicely.

What the hell would sizeof(variable+1) even mean, and why on earth would the compiler think that is anything close to valid? Both gcc and clang, each with -Wall, are completely quiet about this error!
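
For what it’s worth, sizeof applied to an expression yields the size of the expression’s type without evaluating it, so both compilers see a perfectly valid constant, sizeof(off_t), typically 8, where I wanted a byte count.  Here’s a contrived reconstruction of the mistake (hypothetical code, not the original):

#include <cstdlib>
#include <sys/stat.h>

// Contrived sketch of the bug: sizeof takes an (unevaluated) expression and
// yields the size of that expression's type, so this asks for sizeof(off_t),
// typically 8 bytes, rather than the file size plus one.
char* allocFileBuffer( const struct stat& s )
{
    char* buf = (char*)std::malloc( sizeof( s.st_size + 1 ) );   // -Wall stays silent
    // intended: char* buf = (char*)std::malloc( s.st_size + 1 );
    return buf;
}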

Interesting z/OS (clang based) compiler release notes.

December 13, 2019 C/C++ development and debugging.

The release notes for the latest z/OS C/C++ compiler are interesting.  When I was at IBM they were working on “clangtana”, a clang frontend melded with the legacy TOBY backend.  This really surprised me, but was consistent with the fact that the IBM compiler guys kept saying that they were continually losing their internal funding — that project was a clever way to do more with less resources.  I think they’d made the clangtana switch for zLinux by the time I left, with AIX to follow once they had resolved some ABI incompatibility issues.  At the time, I didn’t know (nor care) about the status of that project on z/OS.

Well, years later, it looks like they’ve now switched to a clang based compiler frontend on z/OS too.  This major change appears to have a number of side effects that I can imagine will be undesirable to existing mainframe customers:

  • Compiler now requires POSIX(ON) and Unix System Services.  No more compilation using JCL.
  • Compiler support for 31-bit applications appears to be dropped (64-bit only!)
  • Support for C, FASTLINK, and OS linkage conventions has been dropped (XPLINK only.)
  • Only ibm-1047 is supported for both source and runtime character set encoding.
  • C89 support appears to have been dropped.
  • Hex floating support has been dropped.
  • No decimal floating point support.
  • SIMD support isn’t implemented.
  • Metal C support has been dropped.

i.e. if you want C++14, you have to be willing to give up a lot to get it.  They must be using an older clang, because this “new” compiler doesn’t include C++17 support.  I’m surprised that they didn’t even manage multiple character set support for this first compiler release.

It is interesting that they’ve also dropped IPA and PDF support, and that the optimization options have changed.  Does that mean that they’ve actually not only dropped the old Montana frontend, but also gutted the whole backend, switching to clang exclusively?

using ltrace to dig into shared libraries

October 19, 2016 C/C++ development and debugging., clang/llvm

I was trying to find where the clang compiler is writing out constant global data values, and didn’t manage to find it by code inspection. If I run ltrace (also tracing system calls), I see the point where the ELF object is written out:

std::string::compare(std::string const&) const(0x7ffc8983a190, 0x1e32e60, 7, 254) = 5
std::string::compare(std::string const&) const(0x1e32e60, 0x7ffc8983a190, 7, 254) = 0xfffffffb
std::string::compare(std::string const&) const(0x7ffc8983a190, 0x1e32e60, 7, 254) = 5
write@SYS(4, "\177ELF\002\001\001", 848)         = 848
lseek@SYS(4, 40, 0)                              = 40
write@SYS(4, "\220\001", 8)                      = 8
lseek@SYS(4, 848, 0)                             = 848
lseek@SYS(4, 60, 0)                              = 60
write@SYS(4, "\a", 2)                            = 2
lseek@SYS(4, 848, 0)                             = 848
std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()(0x1e2a2e0, 0x1e2a2e8, 0x1e27978, 0x1e27978) = 0
rt_sigprocmask@SYS(2, 0x7ffc8983bb58, 0x7ffc8983bad8, 8) = 0
close@SYS(4)                                     = 0
rt_sigprocmask@SYS(2, 0x7ffc8983bad8, 0, 8)      = 0

This is from running:

ltrace -S --demangle \
   ...

The -S is to display syscalls as well as library calls. To my surprise, this shows calls to libstdc++ library functions, but I’m not seeing much from clang itself, just:

clang::DiagnosticsEngine::DiagnosticsEngine
clang::driver::ToolChain::getTargetAndModeFromProgramName
llvm::cl::ExpandResponseFiles
llvm::EnablePrettyStackTrace
llvm::errs
llvm::install_fatal_error_handler
llvm::llvm_shutdown
llvm::PrettyStackTraceEntry::PrettyStackTraceEntry
llvm::PrettyStackTraceEntry::~PrettyStackTraceEntry
llvm::raw_ostream::preferred_buffer_size
llvm::raw_svector_ostream::write_impl
llvm::remove_fatal_error_handler
llvm::StringMapImpl::LookupBucketFor
llvm::StringMapImpl::RehashTable
llvm::sys::PrintStackTraceOnErrorSignal
llvm::sys::Process::FixupStandardFileDescriptors
llvm::sys::Process::GetArgumentVector
llvm::TimerGroup::printAll

There’s got to be a heck of a lot more that the compiler is doing!? It turns out that ltrace doesn’t seem to trace all the library function calls that lie in shared libraries (I’m using a shared library + split dwarf build of clang). The default output was a bit deceptive since I saw some shared lib calls, in particular there were std::… calls (from libstdc++.so) in the ltrace output. My conclusion seems to be that the tool is lying by default.

This can be confirmed by explicitly asking to see the functions from a specific shared lib. For example, if I call ltrace as:

$ ltrace -S --demangle -e @libLLVMX86CodeGen.so \
/clang/be.b226a0a/bin/clang-3.9 \
-cc1 \
-triple \
x86_64-unknown-linux-gnu \
...

Now I get ~68K calls to libLLVMX86CodeGen.so functions that didn’t show up in the default ltrace output! The ltrace tool won’t show me these by default (although the man page seems to suggest that it should), but if I narrow down what I’m looking through to a single shared lib, at least I can now examine the function calls in that shared lib.

On the SONAME

Note that the @lib….so name has to match the SONAME.  For example, if the shared libraries on disk were:

libLLVMX86CodeGen.so -> libLLVMX86CodeGen.so.3
libLLVMX86CodeGen.so.3 -> libLLVMX86CodeGen.so.3.9
libLLVMX86CodeGen.so.3.9 -> libLLVMX86CodeGen.so.3.9.0

$ objdump -x libLLVMX86CodeGen.so | grep SONAME

would give you the name to use.  This becomes relevant in clang 4.0 where the SONAME ends up with .so.4 instead of just .so (when building clang with shared libs instead of archive libs).

How to invoke the 2nd pass of the clang compiler manually

October 3, 2016 clang/llvm

Because the clang driver re-execs itself to run the -cc1 front end, breakpoints on the interesting parts of the front end don’t get hit by default. Here’s an example:

$ cat g2
b llvm::Module::setDataLayout
b BackendConsumer::BackendConsumer
b llvm::TargetMachine::TargetMachine
b llvm::TargetMachine::createDataLayout
run -mbig-endian -m64 -c bytes.c -emit-llvm -o big.bc

$ gdb `which clang`
GNU gdb (GDB) Red Hat Enterprise Linux 7.9.1-19.lz.el7
...
(gdb) source g2
Breakpoint 1 at 0x2c04c3d: llvm::Module::setDataLayout. (2 locations)
Breakpoint 2 at 0x3d08870: file /source/llvm/lib/Target/TargetMachine.cpp, line 47.
Breakpoint 3 at 0x33108ca: file /source/llvm/include/llvm/Target/TargetMachine.h, line 133.
...
Detaching after vfork from child process 15795.
[Inferior 1 (process 15789) exited normally]

(The debugger finishes and exits, hitting none of the breakpoints)

One way to deal with this is to set the fork mode to child:

(gdb) set follow-fork-mode child

An alternate way of dealing with this is to use strace to collect the command line that clang invokes itself with. For example:

$ strace -f -s 1024 -v clang -mbig-endian -m64 big.bc -c 2>&1 | grep exec | tail -2 | head -1

This provides the command line options for the self-invocation of clang:

[pid  4650] execve("/usr/local/bin/clang-3.9", ["/usr/local/bin/clang-3.9", "-cc1", "-triple", "aarch64_be-unknown-linux-gnu", "-emit-obj", "-mrelax-all", "-disable-free", "-main-file-name", "big.bc", "-mrelocation-model", "static", "-mthread-model", "posix", "-mdisable-fp-elim", "-fmath-errno", "-masm-verbose", "-mconstructor-aliases", "-fuse-init-array", "-target-cpu", "generic", "-target-feature", "+neon", "-target-abi", "aapcs", "-dwarf-column-info", "-debugger-tuning=gdb", "-coverage-file", "/workspace/pass/run/big.bc", "-resource-dir", "/usr/local/bin/../lib/clang/3.9.0", "-fdebug-compilation-dir", "/workspace/pass/run", "-ferror-limit", "19", "-fmessage-length", "0", "-fallow-half-arguments-and-returns", "-fno-signed-char", "-fobjc-runtime=gcc", "-fdiagnostics-show-option", "-o", "big.o", "-x", "ir", "big.bc"],

With a bit of vim tweaking you can turn this into a command line that can be executed (or debugged) directly:

/usr/local/bin/clang-3.9 -cc1 -triple aarch64_be-unknown-linux-gnu -emit-obj -mrelax-all -disable-free -main-file-name big.bc -mrelocation-model static -mthread-model posix -mdisable-fp-elim -fmath-errno -masm-verbose -mconstructor-aliases -fuse-init-array -target-cpu generic -target-feature +neon -target-abi aapcs -dwarf-column-info -debugger-tuning=gdb -coverage-file /workspace/pass/run/big.bc -resource-dir /usr/local/bin/../lib/clang/3.9.0 -fdebug-compilation-dir /workspace/pass/run -ferror-limit 19 -fmessage-length 0 -fallow-half-arguments-and-returns -fno-signed-char -fobjc-runtime=gcc -fdiagnostics-show-option -o big.o -x ir big.bc

Note that doing this also provides a mechanism to change the compiler triple manually, which is something that I wondered how to do (since clang documents -triple as an option, but seems to ignore it). For example, I’m able to change -triple aarch64_be to aarch64 and get little-endian object code from bytecode prepared with -mbig-endian.