Stream of consciousness dump: new job, claude terminal, vscode, vim mode, …

May 4, 2026 Incoherent ramblings

This post is a fairly random stream of consciousness dump, including bits on all of the following:

  • Job transition, and hints of what I’m going to be working on
  • VSCode, and vim-mode deficiencies
  • AI development integration

Job change

My last day at Lemurian was two weeks ago, after 11 months there. I enjoyed much of the work, which was well suited for my skills and aptitudes, and learned a lot, but ultimately it wasn’t the right fit for me. I miss a number of the people I worked with, and wish them and Lemurian continued success.

The new job is at a just-formed startup, with a handful of developers, no product yet, and no Canadian corporate presence (so I’m working as a contractor for now). Budget is tight and there’s no additional hiring capacity at the moment, so this isn’t a “come join us” post — it’s more of a “here’s what I’m up to.”

That new job is really fun so far. In one week of 14+ hour days (with extensive Claude generation of boilerplate tablegen/parser/builder), I have most of an MLIR language dialect + parser/builder built for a specific programming language. There’s lots more to do, and the interesting and tricky parts will start soon, once I have infrastructure in place. The work is going to be lots of fun; the team is great, really capable, and knows how to build and get the job done. I don’t know how long it will take us to get the product built, and we are starting from absolutely nothing, but the pieces are coming together quickly.

VSCode

At Lemurian, most people used VSCode for their development. I’ve been using tmux+vim+cscope+ctags+perl so long that I found it hard to make the switch. I’d start in VSCode in the morning and, in frustration, end up back in the terminal in short order.

As an example, suppose that you want to filter the current function through a script (say, clang-format). In vi, you can do something like:

:,/^}/!clang-format

or if your function is indented (by four spaces, say):

:,/^    }/!clang-format | sed 's/^/    /;s/ *$//'

VSCode has a vim mode, but its pattern selection is broken, so you can’t use that last command as-is (I don’t remember whether the filtering works either, but let’s stick to the selection issue). To do that line selection in VSCode’s vim mode, the line-range part has to be written with the following clunky expression:

:,/^\s\s\s\s}/

It was little quirks like that which repeatedly ejected me from VSCode back to the terminal, or left me running VSCode and a terminal side by side. It was a strange workflow.

There were a few things that I liked about VSCode:

1. The right-click for type definitions was really nice. In the terminal, I do this the hard way, with memorization, grep, cscope, ctags, and other ad-hoc methods.
2. The autocomplete was nearly magical sometimes.
3. The integration with Claude Code or other AI agents works fabulously, giving the agent full context for the repo and project.

Points 1 and 2 may eventually draw me back to VSCode, despite how slow I feel working in that environment. On the other hand, the AI tools are so good now that English is becoming a programming language. Do I really need to go through the pain of figuring out how to be effective in VSCode?

Claude terminal

For point 3, at the new job, I’ve now got a great replacement: Claude terminal!

Running that terminal application (in an ssh session to my development VM), I can grant the AI tooling access to the repo, as I could in VSCode.  However, I’m able to run the claude session in a tmux window (tmux rename-window claude; claude), and away we go.

A development model that works really well is to describe the desired task, have a conversation about it, and have the tool produce a design document and execution plan. I review that, choose which parts I want to do myself, throw the tooling at the rest, and then review that work thoroughly. The result is staggeringly fast iteration. Some spectacularly large systematic tasks that might have taken months unassisted can be done in hours.

One gotcha: I’d like to be able to enter multi-line prompts in the terminal UI application, but haven’t figured out how. Claude claims that shift-enter works, but perhaps only in the web app? I’ve resorted to writing some prompts in a file in a separate tmux window and then asking Claude to read that file.

Line stepping through MLIR with a debugger!

February 10, 2026 C/C++ development and debugging

[Screenshot: gdb session]

I’ve added an alternate input source for the silly compiler.  As well as the .silly files that it previously accepted, it now also accepts .mlir (silly-dialect) files as input.

This means that if there’s an experimental language feature that requires new-style MLIR, but I don’t want to figure out how to push it all the way through grammar -> parser -> builder -> lowering all at once, I might at least be able to understand the required MLIR patterns by manually modifying existing MLIR (generated with ‘silly --emit-mlir’).

For example, I don’t have BREAK support for FOR loops. I can do something simple:

INT64 v;

FOR (INT64 myLoopVar : (1, 5))
{
    PRINT myLoopVar;
    v = myLoopVar + 1;
};

PRINT "after loop: ", v;

The MLIR for this (with location info stripped out), looks like:

fedoravm:/home/peeter/toycalculator/tests/endtoend/for> silly-opt --pretty -s out/for_simplest.mlir 
module {
  func.func @main() -> i32 {
    %c0_i32 = arith.constant 0 : i32
    %c5_i64 = arith.constant 5 : i64
    %c1_i64 = arith.constant 1 : i64
    "silly.scope"() ({
      %0 = "silly.declare"() <{sym_name = "v"}> : () -> !silly.var
      scf.for %arg0 = %c1_i64 to %c5_i64 step %c1_i64  : i64 {
        "silly.print"(%c0_i32, %arg0) : (i32, i64) -> ()
        %3 = "silly.add"(%arg0, %c1_i64) : (i64, i64) -> i64
        silly.assign %0 :  = %3 : i64
      }
      %1 = "silly.string_literal"() <{value = "after loop: "}> : () -> !llvm.ptr
      %2 = silly.load %0 :  : i64
      "silly.print"(%c0_i32, %1, %2) : (i32, !llvm.ptr, i64) -> ()
      "silly.return"(%c0_i32) : (i32) -> ()
    }) : () -> ()
    "silly.yield"() : () -> ()
  }
}

If I want to add a BREAK into the mix (which I don’t support in any of grammar or parser or builder right now), something like:

INT64 v; 
FOR (INT64 i : (1, 5)) {
    PRINT i; 
    v = i + 1; 
    IF (i == 3) { BREAK; }; 
};
PRINT "after loop: ", v; 

Then it can be done by replacing the scf.for with scf.while, and putting in additional termination condition logic. Example:

module {
  func.func @main() -> i32 {
    %c0_i32 = arith.constant 0 : i32
    %c1_i64 = arith.constant 1 : i64
    %c3_i64 = arith.constant 3 : i64
    %c5_i64 = arith.constant 5 : i64
    %true = arith.constant true
    %false = arith.constant false

    "silly.scope"() ({
      %0 = "silly.declare"() <{sym_name = "v"}> : () -> !silly.var

      scf.while (%i = %c1_i64, %broke = %false) : (i64, i1) -> (i64, i1) {
        %not_broke = arith.xori %broke, %true : i1
        %in_range = arith.cmpi slt, %i, %c5_i64 : i64
        %continue = arith.andi %in_range, %not_broke : i1
        scf.condition(%continue) %i, %broke : i64, i1
      } do {
      ^bb0(%loop_var: i64, %break_flag: i1):
        "silly.print"(%c0_i32, %loop_var) : (i32, i64) -> ()
        %2 = "silly.add"(%loop_var, %c1_i64) : (i64, i64) -> i64
        silly.assign %0 :  = %2 : i64

        %is_three = arith.cmpi eq, %loop_var, %c3_i64 : i64
        %should_break = arith.ori %break_flag, %is_three : i1

        %next = arith.addi %loop_var, %c1_i64 : i64
        scf.yield %next, %should_break : i64, i1
      }

      %lit = "silly.string_literal"() <{value = "after loop: "}> : () -> !llvm.ptr
      %p = silly.load %0 :  : i64
      "silly.print"(%c0_i32, %lit, %p) : (i32, !llvm.ptr, i64) -> ()

      "silly.return"(%c0_i32) : (i32) -> ()
    }) : () -> ()
    "silly.yield"() : () -> ()
  }
}

Now, here’s where things get cool.  I noticed something curious when I looked at the .mlir dump from the MLIR parser (which I dumped to verify I was getting the expected round trip output before lowering). The MLIR parser, given only MLIR source, and no other location tagging, goes off and tags everything with location info for the MLIR source itself.  Example:

#loc15 = loc("forbreak.mlsilly":27:12)
#loc16 = loc("forbreak.mlsilly":27:28)
module {
  func.func @main() -> i32 {
    %c0_i32 = arith.constant 0 : i32 loc(#loc2)
    %c1_i64 = arith.constant 1 : i64 loc(#loc3)
    %c3_i64 = arith.constant 3 : i64 loc(#loc4)
    %c5_i64 = arith.constant 5 : i64 loc(#loc5)
    %true = arith.constant true loc(#loc6)
    %false = arith.constant false loc(#loc7)
    "silly.scope"() ({
      %0 = "silly.declare"() <{sym_name = "v"}> : () -> !silly.var loc(#loc9)
      %1:2 = scf.while (%arg0 = %c1_i64, %arg1 = %false) : (i64, i1) -> (i64, i1) {
        %4 = arith.xori %arg1, %true : i1 loc(#loc11)
        %5 = arith.cmpi slt, %arg0, %c5_i64 : i64 loc(#loc12)
        %6 = arith.andi %5, %4 : i1 loc(#loc13)
        scf.condition(%6) %arg0, %arg1 : i64, i1 loc(#loc14)
      } do {
      ^bb0(%arg0: i64 loc("forbreak.mlsilly":27:12), %arg1: i1 loc("forbreak.mlsilly":27:28)):
        "silly.print"(%c0_i32, %arg0) : (i32, i64) -> () loc(#loc17)
        %4 = "silly.add"(%arg0, %c1_i64) : (i64, i64) -> i64 loc(#loc18)
        silly.assign %0 :  = %4 : i64 loc(#loc19)
        %5 = arith.cmpi eq, %arg0, %c3_i64 : i64 loc(#loc20)
        %6 = arith.ori %arg1, %5 : i1 loc(#loc21)
        %7 = arith.addi %arg0, %c1_i64 : i64 loc(#loc22)
        scf.yield %7, %6 : i64, i1 loc(#loc23)
      } loc(#loc10)
      %2 = "silly.string_literal"() <{value = "after loop: "}> : () -> !llvm.ptr loc(#loc24)
      %3 = silly.load %0 :  : i64 loc(#loc25)
      "silly.print"(%c0_i32, %2, %3) : (i32, !llvm.ptr, i64) -> () loc(#loc26)
      "silly.return"(%c0_i32) : (i32) -> () loc(#loc27)
    }) : () -> () loc(#loc8)
    "silly.yield"() : () -> () loc(#loc28)
  } loc(#loc1)
} loc(#loc)
#loc = loc("forbreak.mlsilly":9:1)
#loc1 = loc("forbreak.mlsilly":10:3)
#loc2 = loc("forbreak.mlsilly":11:15)
#loc3 = loc("forbreak.mlsilly":12:15)
#loc4 = loc("forbreak.mlsilly":13:15)
#loc5 = loc("forbreak.mlsilly":14:15)
#loc6 = loc("forbreak.mlsilly":15:13)
#loc7 = loc("forbreak.mlsilly":16:14)
...

My compiler then turns that location info into DWARF debug info, just as it does for regular .silly source files, so I can actually line step through the MLIR itself with any debugger! Here’s an example session:

Breakpoint 1, main () at forbreak.mlsilly:25
25              scf.condition(%continue) %i, %broke : i64, i1
(gdb) l
20            
21            scf.while (%i = %c1_i64, %broke = %false) : (i64, i1) -> (i64, i1) {
22              %not_broke = arith.xori %broke, %true : i1
23              %in_range = arith.cmpi slt, %i, %c5_i64 : i64
24              %continue = arith.andi %in_range, %not_broke : i1
25              scf.condition(%continue) %i, %broke : i64, i1
26            } do {
27            ^bb0(%loop_var: i64, %break_flag: i1):
28              "silly.print"(%c0_i32, %loop_var) : (i32, i64) -> ()
29              %2 = "silly.add"(%loop_var, %c1_i64) : (i64, i64) -> i64
(gdb) l
30              silly.assign %0 :  = %2 : i64
31              
32              %is_three = arith.cmpi eq, %loop_var, %c3_i64 : i64
33              %should_break = arith.ori %break_flag, %is_three : i1
34              
35              %next = arith.addi %loop_var, %c1_i64 : i64
36              scf.yield %next, %should_break : i64, i1
37            }
38
39            %lit = "silly.string_literal"() <{value = "after loop: "}> : () -> !llvm.ptr
(gdb) b 32
Breakpoint 2 at 0x40076c: file forbreak.mlsilly, line 32.
(gdb) c
Continuing.
1

Breakpoint 2, main () at forbreak.mlsilly:32
32              %is_three = arith.cmpi eq, %loop_var, %c3_i64 : i64
(gdb) disassemble
Dump of assembler code for function main:
   0x000000000040072c <+0>:     sub     sp, sp, #0x60
   0x0000000000400730 <+4>:     stp     x30, x21, [sp, #64]
   0x0000000000400734 <+8>:     stp     x20, x19, [sp, #80]
   0x0000000000400738 <+12>:    mov     w19, wzr
   0x000000000040073c <+16>:    mov     w20, #0x1                       // #1
   0x0000000000400740 <+20>:    mov     w21, #0x1                       // #1
   0x0000000000400744 <+24>:    str     xzr, [sp, #8]
   0x0000000000400748 <+28>:    cmp     x21, #0x4
   0x000000000040074c <+32>:    b.gt    0x400784 
   0x0000000000400750 <+36>:    tbnz    w19, #0, 0x400784 
   0x0000000000400754 <+40>:    add     x1, sp, #0x10
   0x0000000000400758 <+44>:    mov     w0, #0x1                        // #1
   0x000000000040075c <+48>:    stp     x21, xzr, [sp, #24]
   0x0000000000400760 <+52>:    str     x20, [sp, #16]
   0x0000000000400764 <+56>:    bl      0x4005b0 <__silly_print@plt>
   0x0000000000400768 <+60>:    add     x21, x21, #0x1
=> 0x000000000040076c <+64>:    cmp     x21, #0x4
   0x0000000000400770 <+68>:    str     x21, [sp, #8]
   0x0000000000400774 <+72>:    cset    w8, eq  // eq = none
   0x0000000000400778 <+76>:    orr     w19, w19, w8
   0x000000000040077c <+80>:    cmp     x21, #0x4
   0x0000000000400780 <+84>:    b.le    0x400750 
   0x0000000000400784 <+88>:    mov     x8, #0x3                        // #3
   0x0000000000400788 <+92>:    ldr     x9, [sp, #8]
   0x000000000040078c <+96>:    mov     w10, #0xc                       // #12
   0x0000000000400790 <+100>:   movk    x8, #0x1, lsl #32
   0x0000000000400794 <+104>:   add     x1, sp, #0x10
   0x0000000000400798 <+108>:   mov     w0, #0x2                        // #2
   0x000000000040079c <+112>:   stp     x8, x10, [sp, #16]
   0x00000000004007a0 <+116>:   adrp    x8, 0x400000
   0x00000000004007a4 <+120>:   add     x8, x8, #0x7f8
   0x00000000004007a8 <+124>:   stp     x9, xzr, [sp, #48]
   0x00000000004007ac <+128>:   mov     w9, #0x1                        // #1
   0x00000000004007b0 <+132>:   stp     x8, x9, [sp, #32]
   0x00000000004007b4 <+136>:   bl      0x4005b0 <__silly_print@plt>
   0x00000000004007b8 <+140>:   ldp     x20, x19, [sp, #80]
   0x00000000004007bc <+144>:   mov     w0, wzr
   0x00000000004007c0 <+148>:   ldp     x30, x21, [sp, #64]
   0x00000000004007c4 <+152>:   add     sp, sp, #0x60
   0x00000000004007c8 <+156>:   ret
End of assembler dump.



(gdb) c
Continuing.
2

Breakpoint 2, main () at forbreak.mlsilly:32
32              %is_three = arith.cmpi eq, %loop_var, %c3_i64 : i64
(gdb) p v
$2 = 2

Having built a compiler for an arbitrary language, and having implemented DWARF instrumentation for that language, I get line support for stepping through the MLIR itself, if I want it.

I can imagine a scenario where I’ve screwed up the MLIR op generation in the builder. This lets me set a breakpoint right at the MLIR line in question, poke around at the disassembly for that point in the code, and see what’s going on. What a cool compiler debugging tool!

Some line integral examples of the Fundamental theorem of geometric calculus

January 20, 2026 math and physics play

[Click here for a PDF version of this post]

On my discord server, Frank asked about his attempt to demonstrate an example line integral computation of the fundamental theorem of geometric calculus.

Before working through his example, and some others, it is first worth restating the
line integral specialization of the \textit{Fundamental theorem of geometric calculus}:

Theorem 1.1: Fundamental theorem of geometric calculus (line integral version.)

Given multivectors \(F, G \), a single variable parameterization \( \Bx = \Bx(u) \), with line element \( d\Bx = du \Bx_u \), \( \Bx_u = \PDi{u}{\Bx} \), \( \boldpartial = \Bx^u \PDi{u}{} \), and \( \Bx^u \cdot \Bx_u = 1 \), then
the line integral is related to the boundary by
\begin{equation*}
\int F d\Bx \boldpartial G = \evalbar{F G}{\Delta u},
\end{equation*}
(with the \( \boldpartial \) acting bidirectionally on \( F, G \).)

It is very important to point out that the derivative operator here is the vector derivative, and not the gradient. Roughly speaking, the vector derivative is the projection of the gradient onto the tangent space. In this case, the tangent space is just the line in the direction \( \Bx_u \), which may vary along the parameterized path.

Here are some examples of some one variable parameterizations, all in two dimensions

  1. \( \Bx = u \Be_1 + y_0 \Be_2 \).
    We compute
    \begin{equation}\label{eqn:lineintegralExamples:20}
    \begin{aligned}
    \Bx_u &= \PD{\Bx}{u} = \Be_1 \\
    \Bx^u &= \Be_1 \\
    d\Bx &= du \Be_1 \\
    \boldpartial &= \Be_1 \PD{u}{}.
    \end{aligned}
    \end{equation}
    and \( d\Bx \boldpartial = \PDi{u}{} \).
    The fundamental theorem is really just a statement that
    \begin{equation}\label{eqn:lineintegralExamples:40}
    \int \PD{u}{} \lr{ F G } du = \evalbar{ F G }{\Delta u}.
    \end{equation}

  2. \( \Bx = \alpha u \Be_1 + \beta u \Be_2 \), where \( \alpha, \beta \) are constants. i.e.: a line, but not necessarily on the horizontal this time.
    This time, we compute
    \begin{equation}\label{eqn:lineintegralExamples:60}
    \begin{aligned}
    \Bx_u &= \alpha \Be_1 + \beta \Be_2 \\
    \Bx^u &= \inv{\Bx_u} = \frac{\alpha \Be_1 + \beta \Be_2}{\alpha^2 + \beta^2} \\
    d\Bx &= du \lr{ \alpha \Be_1 + \beta \Be_2 } \\
    \boldpartial &= \inv{\alpha \Be_1 + \beta \Be_2} \PD{u}{}.
    \end{aligned}
    \end{equation}
    Again, we have \( d\Bx \boldpartial = \PDi{u}{} \), and the story repeats.

  3. \( \Bx = R \Be_1 e^{i\theta}, i = \Be_1 \Be_2 \). This time we are going along a circular arc.

    Let \( \rcap = \Be_1 e^{i\theta} \), and \(\thetacap = \Be_2 e^{i\theta} \). We can compute
    \begin{equation}\label{eqn:lineintegralExamples:80}
    \begin{aligned}
    \Bx_\theta &= R \Be_2 e^{i\theta} = R \thetacap \\
    \Bx^\theta &= \inv{\Bx_\theta} = \inv{ R \Be_2 e^{i\theta} } = \inv{R} \thetacap \\
    d\Bx &= d\theta \thetacap \\
    \boldpartial &= \frac{\thetacap}{R} \PD{\theta}{}.
    \end{aligned}
    \end{equation}
    This time, probably to no surprise, we have \( d\Bx \boldpartial = \PDi{\theta}{} \), so the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:100}
    \int \PD{\theta}{} \lr{ F G } d\theta = \evalbar{ F G }{\Delta \theta}.
    \end{equation}

  4. \( \Bx = r e^{i\theta_0} \), where \( \theta_0 \) is a constant. We’ve already computed this above with a Cartesian representation of a line, but can do it again this time with an explicitly radial parameterization. We compute
    \begin{equation}\label{eqn:lineintegralExamples:120}
    \begin{aligned}
    \Bx_r &= \Be_1 e^{i \theta_0} \\
    \Bx^r &= \inv{\Bx_r} = \Be_1 e^{i \theta_0} \\
    d\Bx &= dr \Be_1 e^{i \theta_0} \\
    \boldpartial &= e^{i \theta_0} \PD{r}{}.
    \end{aligned}
    \end{equation}
    This time, \( d\Bx \boldpartial = \PDi{r}{} \), and the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:140}
    \int \PD{r}{} \lr{ F G } dr = \evalbar{ F G }{\Delta r}.
    \end{equation}

Observe that we do not get the same result if we use the gradient instead of the vector derivative. We may only make a gradient substitution for the vector derivative when the dimension of the hypervolume integral equals the dimension of the vector space itself. For a line integral that would mean we are restricting the domain of the underlying vector space to \(\mathbb{R}^1\), which isn’t a very interesting case for geometric algebra.

In Frank’s example, he was working with a generating vector space of \(\mathbb{R}^2\), with the horizontal parameterization \( \Bx = u \Be_1 + y_0 \Be_2 \) that we used in the first example (with \( F = 1, G = x y i \), where \( i = \Be_1 \Be_2 \), the pseudoscalar for the space).
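For comparison, the fundamental theorem itself evaluates cleanly for this example. On the path \( y = y_0 \), \( x = u \), and \( d\Bx \boldpartial = \PDi{u}{} \), so the result is a pure boundary term
\begin{equation*}
\int d\Bx \boldpartial \lr{ x y i }
= \int du \PD{u}{} \lr{ u y_0 i }
= \evalbar{ u y_0 i }{\Delta u}
= \lr{ \Delta x } y_0 i.
\end{equation*}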

Let’s see what happens if we compute a similar integral, but swapping out the vector derivative with the gradient
\begin{equation}\label{eqn:lineintegralExamples:160}
\begin{aligned}
\int d\Bx \spacegrad x y i
&=
\int du \Be_1 \lr{ \Be_1 \partial_x + \Be_2 \partial_y } ( x y i ) \\
&=
\int du \Be_1 \lr{ \Be_1 y + \Be_2 x } i \\
&=
\int du \lr{ y + i x } i \\
&=
\int du \lr{ y_0 + i u } i \\
&=
\lr{\Delta x} y_0 i - \frac{x_1^2}{2} + \frac{x_0^2}{2}.
\end{aligned}
\end{equation}
As well as the pseudoscalar term that we had when evaluating the fundamental theorem integral, this time we have an extra scalar term, a contribution that comes from the \( y \) component of the gradient. There is nothing wrong with performing such an integral, but it’s not an instance of the fundamental theorem, and the same tidy answer should not be expected. In Frank’s original example, he also didn’t put the \( d\Bx \) adjacent to the differential operator, which is required to get the perfect cancellation of the tangent space vectors that we’ve seen in the evaluations above.

One more version tagged for the silly compiler: V7

January 4, 2026 clang/llvm

V7 of the silly compiler is tagged. (EDIT: I’d previously announced this as V8, but I jumped the gun, including having an AI generate a “V8” image for me below. The newly tagged version is V7. There is no V8 yet, and only when I create it will the V8 image below be justified.)

This version makes a couple small bug fixes, and a bunch of maintenance changes, mostly related to location information in the builder/parser, which is now considerably simpler. It also adds support for PRINT numeric-literal, and implements an ELIF statement. That last was in the grammar, but had no builder support.

Adding the ELIF feature actually simplified things. I didn’t try to add my own silly.if MLIR operator with intrinsic support for the else-if construct that I had in the grammar. Instead I just used scf.if, essentially making a transformation of the following form in the builder:

IF ( x )
{
  statement1;
}
ELIF ( y )
{
  statement2;
}

to

IF ( x )
{
  statement1;
}
ELSE
{
  IF ( y )
  {
    statement2;
  }
}

In my original implementation of IF/ELSE, I only generated the ELSE region and block when the user code had an ELSE statement. When the ANTLR4 enterElseStatement callback was invoked, I’d generate the else region and block (setting the insertion point there, so that the following statements would populate that block). Then when exitElseStatement ran, I’d generate the scf::Yield operation to terminate the block.

With the implementation of ELIF, I changed the scf::IfOp generation to include empty then and else regions up front. When that is done, the constructor adds the scf::Yield operations automatically; all that’s required is setting the insertion point. I then only have to push/pop a single insertion point, adjusting it (without a push) for each ELIF/ELSE callback in the ANTLR4 tree walker.

Here are some examples:

INT32 x;

x = 3;

IF ( x < 4 )
{
  INT32 y;
  y = 42;
  PRINT y;
};

PRINT "Done.";

This is an IF without an ELSE, but the MLIR now has an empty else block:

module {
  func.func @main() -> i32 {
    "silly.scope"() ({
      "silly.declare"() <{type = i32}> {sym_name = "y"} : () -> ()
      "silly.declare"() <{type = i32}> {sym_name = "x"} : () -> ()
      %c3_i64 = arith.constant 3 : i64
      silly.assign @x = %c3_i64 : i64
      %0 = silly.load @x : i32
      %c4_i64 = arith.constant 4 : i64
      %1 = "silly.less"(%0, %c4_i64) : (i32, i64) -> i1
      scf.if %1 {
        %c42_i64 = arith.constant 42 : i64
        silly.assign @y = %c42_i64 : i64
        %3 = silly.load @y : i32
        silly.print %3 : i32
      } else {
      }
      %2 = "silly.string_literal"() <{value = "Done."}> : () -> !llvm.ptr
      silly.print %2 : !llvm.ptr
      %c0_i32 = arith.constant 0 : i32
      "silly.return"(%c0_i32) : (i32) -> ()
    }) : () -> ()
    "silly.yield"() : () -> ()
  }
}

Here's one with an ELIF:

INT32 x; // line 1

x = 3; // line 3

IF ( x < 4 ) // line 5
{
   PRINT x; // line 7
}
ELIF ( x > 5 ) // line 9
{
   PRINT "Bug if we get here."; // line 11
};

PRINT 42; // line 13

This one has a nested if/else within the else:

module {
  func.func @main() -> i32 {
    "silly.scope"() ({
      "silly.declare"() <{type = i32}> {sym_name = "x"} : () -> ()
      %c3_i64 = arith.constant 3 : i64
      silly.assign @x = %c3_i64 : i64
      %0 = silly.load @x : i32
      %c4_i64 = arith.constant 4 : i64
      %1 = "silly.less"(%0, %c4_i64) : (i32, i64) -> i1
      scf.if %1 {
        %2 = silly.load @x : i32
        silly.print %2 : i32
      } else {
        %2 = silly.load @x : i32
        %c5_i64 = arith.constant 5 : i64
        %3 = "silly.less"(%c5_i64, %2) : (i64, i32) -> i1
        scf.if %3 {
          %4 = "silly.string_literal"() <{value = "Bug if we get here."}> : () -> !llvm.ptr
          silly.print %4 : !llvm.ptr
        } else {
        }
      }
      %c42_i64 = arith.constant 42 : i64
      silly.print %c42_i64 : i64
      %c0_i32 = arith.constant 0 : i32
      "silly.return"(%c0_i32) : (i32) -> ()
    }) : () -> ()
    "silly.yield"() : () -> ()
  }
}

Effectively, the existing IF and ELSE implementation has been moved into helper functions, leaving the guts of all the walking functions for IF/ELIF/ELSE almost trivial:

    void MLIRListener::enterIfStatement( SillyParser::IfStatementContext *ctx )
    try
    {
        assert( ctx );
        mlir::Location loc = getStartLocation( ctx );

        SillyParser::BooleanValueContext *booleanValue = ctx->booleanValue();
        assert( booleanValue );

        createIf( loc, booleanValue, true );
    }
    CATCH_USER_ERROR

    void MLIRListener::enterElseStatement( SillyParser::ElseStatementContext *ctx )
    try
    {
        assert( ctx );
        mlir::Location loc = getStartLocation( ctx );

        selectElseBlock( loc, ctx->getText() );
    }
    CATCH_USER_ERROR

    void MLIRListener::enterElifStatement( SillyParser::ElifStatementContext *ctx )
    try
    {
        assert( ctx );
        mlir::Location loc = getStartLocation( ctx );

        selectElseBlock( loc, ctx->getText() );

        SillyParser::BooleanValueContext *booleanValue = ctx->booleanValue();

        createIf( loc, booleanValue, false );
    }
    CATCH_USER_ERROR

    void MLIRListener::exitIfelifelse( SillyParser::IfelifelseContext *ctx )
    try
    {
        // Restore EXACTLY where we were before creating the scf.if
        // This places new ops right AFTER the scf.if
        builder.restoreInsertionPoint( insertionPointStack.back() );
        insertionPointStack.pop_back();
    }
    CATCH_USER_ERROR

The selectElseBlock function used to be a createElseBlock. Now it just sets the insertion point, although it takes a bit of work to figure out where that point is:

    void MLIRListener::selectElseBlock( mlir::Location loc, const std::string & errorText )
    {
        mlir::scf::IfOp ifOp;

        // Temporarily restore the insertion point to right after the scf.if, to search for our current IfOp
        builder.restoreInsertionPoint( insertionPointStack.back() );

        // Now find the scf.if op that is just before the current insertion point
        mlir::Block *currentBlock = builder.getInsertionBlock();
        assert( currentBlock );
        mlir::Block::iterator ip = builder.getInsertionPoint();
        
        // The insertion point is at the position where new ops would be inserted.
        // So the operation just before it should be the scf.if
        if ( ip != currentBlock->begin() )
        {
            mlir::Operation *prevOp = &*( --ip );    // the op immediately before the insertion point
            ifOp = dyn_cast<mlir::scf::IfOp>( prevOp );
        }
    
        if ( !ifOp )
        {
            throw ExceptionWithContext(
                __FILE__, __LINE__, __func__,
                std::format( "{}internal error: Could not find scf.if op corresponding to this if statement: {}\n",
                             formatLocation( loc ), errorText ) );
        }

        mlir::Region &elseRegion = ifOp.getElseRegion();
        mlir::Block &elseBlock = elseRegion.front();
        builder.setInsertionPointToStart( &elseBlock );
    }

The createIf helper is also pretty trivial:

    void MLIRListener::createIf( mlir::Location loc, SillyParser::BooleanValueContext *booleanValue, bool saveIP )
    {
        mlir::Value conditionPredicate = parsePredicate( loc, booleanValue );

        if ( saveIP )
        {
            insertionPointStack.push_back( builder.saveInsertionPoint() );
        } 

        mlir::scf::IfOp ifOp = builder.create<mlir::scf::IfOp>(
            loc, conditionPredicate,
            /*withElseRegion=*/true );
        
        mlir::Block &thenBlock = ifOp.getThenRegion().front();
        builder.setInsertionPointToStart( &thenBlock );
    }

Debugging wrong debug location info for a CALL in my silly language.

January 1, 2026 clang/llvm


Here’s a program in my silly language:

PRINT "hi"; // line 1

FUNCTION bar0 ( ) // line 3
{
    PRINT "bar0"; // line 5
    RETURN; // line 6
};

CALL bar0(); // line 9

I noticed that line stepping for this program has a “line number glitch”:

xpg:/home/pjoot/toycalculator/samples> gdb out/f
Reading symbols from out/f...
(gdb) b main
Breakpoint 1 at 0x400491: file f.silly, line 1.
(gdb) run
Starting program: /home/pjoot/toycalculator/samples/out/f 
Downloading separate debug info for system-supplied DSO at 0x7ffff7fc5000
[Thread debugging using libthread_db enabled]                                                                                           
Using host libthread_db library "/lib64/libthread_db.so.1".

Breakpoint 1, main () at f.silly:1
1       PRINT "hi"; // line 1
(gdb) b 9
Breakpoint 2 at 0x4004a0: file f.silly, line 9.
(gdb) c
Continuing.
hi

Breakpoint 2, main () at f.silly:9
9       CALL bar0(); // line 9
(gdb) s
bar0 () at f.silly:1
1       PRINT "hi"; // line 1
(gdb) n
5           PRINT "bar0"; // line 5
(gdb) 
bar0
6           RETURN; // line 6

(i.e.: from line 9, we should jump to line 3, then 5, but we first end up at line 1 instead of 3.)

I can see that things are wrong in the dwarfdump:

LOCAL_SYMBOLS:
< 1><0x0000002a>    DW_TAG_subprogram
                      DW_AT_low_pc                0x00000000
                      DW_AT_high_pc                18 
                      DW_AT_frame_base            len 0x0001: 0x57: 
                          DW_OP_reg7
                      DW_AT_linkage_name          bar0
                      DW_AT_name                  bar0
                      DW_AT_decl_file             0x00000001 ./f.silly
                      DW_AT_decl_line             0x00000001
                      DW_AT_type                  <0x00000064> Refers to: void
                      DW_AT_external              yes(1)
< 1><0x00000047>    DW_TAG_subprogram
                      DW_AT_low_pc                0x00000020
                      DW_AT_high_pc                25 
                      DW_AT_frame_base            len 0x0001: 0x57: 
                          DW_OP_reg7
                      DW_AT_linkage_name          main
                      DW_AT_name                  main
                      DW_AT_decl_file             0x00000001 ./f.silly
                      DW_AT_decl_line             0x00000001
                      DW_AT_type                  <0x0000006b> Refers to: int
                      DW_AT_external              yes(1)

The DW_AT_decl_line for the implicit main function for the program has a valid value (1), but the DW_AT_decl_line for the bar0 function shouldn’t be line 1.

Let’s have a peek at the LLVM-IR representation:

; ModuleID = 'f.silly'
source_filename = "f.silly"
target datalayout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128"
target triple = "x86_64-unknown-linux-gnu"

@str_1 = private constant [2 x i8] c"hi"
@str_0 = private constant [4 x i8] c"bar0"

declare void @__silly_print_string(i64, ptr)

define void @bar0() !dbg !4 {
  call void @__silly_print_string(i64 4, ptr @str_0), !dbg !8
  ret void, !dbg !9
}

define i32 @main() !dbg !10 {
  call void @__silly_print_string(i64 2, ptr @str_1), !dbg !14
  call void @bar0(), !dbg !15
  ret i32 0, !dbg !16
}

!llvm.dbg.cu = !{!0}
!llvm.ident = !{!2}
!llvm.module.flags = !{!3}

!0 = distinct !DICompileUnit(language: DW_LANG_C, file: !1, producer: "silly", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug)
!1 = !DIFile(filename: "f.silly", directory: ".")
!2 = !{!"silly V7"}
!3 = !{i32 2, !"Debug Info Version", i32 3}
!4 = distinct !DISubprogram(name: "bar0", linkageName: "bar0", scope: !1, file: !1, line: 1, type: !5, scopeLine: 1, spFlags: DISPFlagDefinition, unit: !0)
!5 = !DISubroutineType(types: !6)
!6 = !{!7}
!7 = !DIBasicType(name: "void")
!8 = !DILocation(line: 5, column: 11, scope: !4)
!9 = !DILocation(line: 6, column: 5, scope: !4)
!10 = distinct !DISubprogram(name: "main", linkageName: "main", scope: !1, file: !1, line: 1, type: !11, scopeLine: 1, spFlags: DISPFlagDefinition, unit: !0)
!11 = !DISubroutineType(types: !12)
!12 = !{!13}
!13 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed)
!14 = !DILocation(line: 1, column: 7, scope: !10)
!15 = !DILocation(line: 9, column: 11, scope: !10)
!16 = !DILocation(line: 1, column: 1, scope: !10)

The error is right there in the ‘!4 DISubprogram’, which has line 1 instead of 3. Note that the scopeLine should be 5, not 1, as well.

Sure enough, I’ve got the line numbers hardcoded in the lowering code that generates my DISubprogramAttr:

    void LoweringContext::createFuncDebug( mlir::func::FuncOp funcOp )
    {
        if ( driverState.wantDebug )
        {
            ModuleInsertionPointGuard ip( mod, builder );

            mlir::MLIRContext* context = builder.getContext();
            std::string funcName = funcOp.getSymName().str();

            mlir::LLVM::DISubroutineTypeAttr subprogramType = createDISubroutineType( funcOp );

            mlir::LLVM::DISubprogramAttr sub = mlir::LLVM::DISubprogramAttr::get(
                context, mlir::DistinctAttr::create( builder.getUnitAttr() ), compileUnitAttr, fileAttr,
                builder.getStringAttr( funcName ), builder.getStringAttr( funcName ), fileAttr, 1, 1,
                mlir::LLVM::DISubprogramFlags::Definition, subprogramType, llvm::ArrayRef<mlir::LLVM::DINodeAttr>{},
                llvm::ArrayRef<mlir::LLVM::DINodeAttr>{} );

            funcOp->setAttr( "llvm.debug.subprogram", sub );

            // This is the key to ensure that translateModuleToLLVMIR does not strip the location info (instead
            // converts loc's into !dbg's)
            funcOp->setLoc( builder.getFusedLoc( { mod.getLoc() }, sub ) );

            subprogramAttr[funcName] = sub;
        }
    }

Those ‘1, 1’ parameters are line (first line of function declaration) and scopeLine (first line of code in the function body) respectively, and it takes a little bit of work to get both:

--- a/src/lowering.cpp
+++ b/src/lowering.cpp
@@ -411,9 +411,27 @@ namespace silly
 
             mlir::LLVM::DISubroutineTypeAttr subprogramType = createDISubroutineType( funcOp );
 
+            mlir::Location funcLoc = funcOp.getLoc();
+            mlir::FileLineColLoc loc = getLocation( funcLoc );
+            unsigned line = loc.getLine();
+            unsigned scopeLine = line;
+
+            mlir::Region &region = funcOp.getRegion();
+
+            mlir::Block &entryBlock = region.front();
+
+            // Get the location of the first operation in the block for the scopeLine:
+            if (!entryBlock.empty()) {
+              mlir::Operation *firstOp = &entryBlock.front();
+              mlir::Location firstLoc = firstOp->getLoc();
+              mlir::FileLineColLoc scopeLoc = getLocation( firstLoc );
+
+              scopeLine = scopeLoc.getLine();
+            }
+
             mlir::LLVM::DISubprogramAttr sub = mlir::LLVM::DISubprogramAttr::get(
                 context, mlir::DistinctAttr::create( builder.getUnitAttr() ), compileUnitAttr, fileAttr,
-                builder.getStringAttr( funcName ), builder.getStringAttr( funcName ), fileAttr, 1, 1,
+                builder.getStringAttr( funcName ), builder.getStringAttr( funcName ), fileAttr, line, scopeLine,
                 mlir::LLVM::DISubprogramFlags::Definition, subprogramType, llvm::ArrayRef<mlir::LLVM::DINodeAttr>{},
                 llvm::ArrayRef<mlir::LLVM::DINodeAttr>{} );

Here’s the new DWARF dump for bar0:

< 1><0x0000002a>    DW_TAG_subprogram
                      DW_AT_low_pc                0x00000000
                      DW_AT_high_pc                18 
                      DW_AT_frame_base            len 0x0001: 0x57: 
                          DW_OP_reg7
                      DW_AT_linkage_name          bar0
                      DW_AT_name                  bar0
                      DW_AT_decl_file             0x00000001 ./f.silly
                      DW_AT_decl_line             0x00000003
                      DW_AT_type                  <0x00000064> Refers to: void
                      DW_AT_external              yes(1)

scopeLine doesn’t show up in the DWARF dump, but it’s visible in the LLVM-IR:

!4 = distinct !DISubprogram(name: "bar0", linkageName: "bar0", scope: !1, file: !1, line: 3, type: !5, scopeLine: 3, spFlags: DISPFlagDefinition, unit: !0)