## Notes compilation for ECE1505, Convex Optimization

March 18, 2017 ece1505 No comments

I’ve now posted a notes compilation for the subset of the Convex Optimization (ECE1505H) course I was taking in the winter 2017 session.

This course was taught by Prof. S. Draper.

These convex optimization notes are incomplete, covering only the first 9 lectures. The unredacted notes also include my solution to problem set 1 (149 pages, vs. 131 pages for the redacted version).

I initially enrolled in this optimization course because I needed a specific quota of ECE courses to satisfy the M.Eng graduation requirements, and the electromagnetics group wasn’t offering enough courses.  I remembered liking linear programming in high school, and always wanted to understand the rationale for some of the assumptions it was based on that were never proven in class.  Specifically, I recall that it was stated, but not proved, in that high school class that the extreme values are always found at the vertices of the optimization region.  So, my thought was, I’d have fun learning the basis for those assumptions, and also learn about optimization theory in general.

It turns out that optimization theory, at least as presented in this course, is very, very dry.  It was an endless-seeming sequence of definitions and proofs, with the end goal so far away that it was very difficult to see the big picture.  I worked through a number of weeks of this particular course before I had enough and bailed.  Work is too much fun right now to torture myself spending time on an academic course that I am not enjoying, so I dropped it and am back to full time work at LzLabs (from 80%) until the next session at UofT starts again.

The reason I enrolled in the M.Eng in the first place was to study material that I was interested in.  Ideally I would have done that in a part time physics grad context, but that was not available, so I found that the M.Eng allowed me to take an interesting (but constrained) mix of physics and engineering electromagnetism courses.  However, when I enrolled, the electromagnetism course selection was a lot better; now, unfortunately, it is sparse and includes only courses that I’d already taken.  I don’t want the M.Eng degree paper badly enough to torture myself with a course that I’m not actually interested in.

I now actually have a plan to satisfy both the degree requirements and my interests (using a project “course”).  That will involve independent study on Geometric Algebra applications to engineering electromagnetism.  I am irked that I have to pay a part time engineering program fee next year to self study, but it does seem worthwhile to come out of the M.Eng study with an actual degree as a side effect, so I am going to go ahead and do it anyways.

## gdb pretty print of structures

Here’s a nice little gdb trick for displaying structure contents in a less compact format:

(gdb) set print pretty on
(gdb) p dd[0]
$4 = {
  jfcb = {
    datasetName = "PJOOT.NVS1", ' ' <repeats 34 times>,
    . . .
    vols = {"<AAAiW", "\000\000\000\000\000", "\000\000\000\000\000", "\000\000\000\000\000", "\000\000\000\000\000"},
  },
  block_size = 800,
  device_class = 32 '\040',
  device_type = 15 '\017',
  disp_normal = 8 '\010',
  disp_cond = 8 '\010',
  volsers = 0x7fb71801ecd6 "<AAAiW",
}

Compare this to the dense default:

(gdb) set print pretty off
(gdb) p dd[0]
$5 = {jfcb = {datasetName = "PJOOT.NVS1", ... vols = {"<AAAiW", "\000\000\000\000\000", "\000\000\000\000\000", "\000\000\000\000\000", "\000\000\000\000\000"}}, block_size = 800, block_size_limit = 0, device_class = 32 '\040', device_type = 15 '\017', disp_normal = 8 '\010', disp_cond = 8 '\010', volsers = 0x7fb71801ecd6 "<AAAiW"}


For really big structures (this one actually is, but I’ve pruned a bunch of stuff), this makes the structure print display a whole lot more readable. Additionally, if you combine this with ‘(gdb) set logging on’, then with pretty print enabled you can prune the output by line easily to see just what you want.
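For example, here is a minimal session sketch (the variable and log file names are just placeholders) that captures the pretty printed structure to a file for later pruning:

(gdb) set print pretty on
(gdb) set logging file dd.txt
(gdb) set logging on
(gdb) p dd[0]
(gdb) set logging off

Everything printed after ‘set logging on’ also lands in dd.txt, which can then be grepped or trimmed down to just the members of interest.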

## A really dumb DNS lookup for my internal network

The new Hitron cable modem in the house cowardly refuses to let me cache MAC and IP address pairs, which is really annoying because my IP addresses now change on me every couple of days. The old router (also a Hitron) allowed that, so putting it on a UPS was generally enough to give me a static IP table, provided I didn’t have to reboot it.

Here’s a hack using nmap that I just cobbled together to fill in the /etc/hosts entries on the couple of machines that I want to have talk to each other (Mac and Linux machines, so all are unix-like).

my %hostnameByMacAddr = (
'B8:4E:3F:C4:04:02' => 'router',
'E4:5C:89:C2:0F:4B' => 'macbookw # wireless',
'10:C2:C6:A0:20:58' => 'nuc2w',
'10:C2:C6:CA:93:6A' => 'nuc1w',
'A8:AE:ED:EB:39:86' => 'nuc1',
'A8:AE:ED:7D:CE:5A' => 'nuc2',
'28:C9:86:46:A8:15' => 'macbookt # thunderbolt monitor connected',
'10:24:2B:A1:7B:F7' => 'brother # printer',
'BC:87:A3:34:1A:FF' => 'macbooke # ethernet cable connected',
);

open my $h, "sudo nmap -n -p 22 192.168.0.1/24 2>&1 | grep -e '192' -e '^MAC' |" or die;

my $ip;
while ( <$h> ) {
   # nmap reports the IP on the "Nmap scan report for ..." line
   if ( /scan report for.*(192\.\d+\.\d+\.\d+)/ ) {
      $ip = $1;
   }

   # ... and the MAC on the following "MAC Address: ..." line
   if ( /MAC Address: (.*) / ) {
      my $mac = $1;

      if ( defined $hostnameByMacAddr{$mac} ) {
         print "$ip $hostnameByMacAddr{$mac} # $mac\n";
      }
      else {
         print "#$ip $mac # unknown\n";
      }
   }
}
close $h or die;
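Nothing fancy is required to use this.  If it is saved as, say, hosts-scan.pl (name is arbitrary), run it, eyeball the output, and then append it to /etc/hosts on each machine, something like:

sudo perl hosts-scan.pl | tee hosts.generated
sudo sh -c 'cat hosts.generated >> /etc/hosts'

The script already invokes nmap via sudo internally, so running the whole thing under sudo just avoids a second password prompt mid-pipe.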


If anybody knows how to set up an actual DNS server for internal networks, I’d be interested to see what is involved, since it looked very hard when I googled it.

## VSAM creation and population with JCL and IDCAMS

March 7, 2017 Mainframe No comments

I learned a few JCL DATASET related things yesterday that seemed notable, at least for a JCL newbie.

## Delete a DATASET, and ignore any error.

Each time I’ve wanted a DATASET cleanup step in JCL I’ve been using a separate script, and running that first.  A better way of doing this is to include an IDCAMS job step in the script, and have that do the deletion:

//CLEANUP EXEC PGM=IDCAMS
//SYSIN DD *
DELETE PJOOT.XXXXX005
SET MAXCC = 0
/*
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//*SYSTERM  DD SYSOUT=*


This deletes the file PJOOT.XXXXX005, which in this case was a VSAM file. In case that file (a DATASET in mainframe-eze) did not exist, the error code for that DELETE is ignored by setting MAXCC=0. If you have multiple things that you want to do with IDCAMS, you can do things like DELETE and then ALLOCATE immediately, such as

//REALLOC EXEC PGM=IDCAMS
//SYSIN DD *
DELETE PJOOT.XXXXX005
SET MAXCC = 0
DEFINE CLUSTER (NAME(PJOOT.XXXXX005) -
CYLINDERS(1) VOLUMES(LZ0000) -
INDEXED -
KEYS(4 0) -
RECORDSIZE(240 240) -
) -
DATA (NAME(PJOOT.XXXXX005.DATA)) -
INDEX (NAME(PJOOT.XXXXX005.INDEX))
/*
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//*SYSTERM  DD SYSOUT=*


This does the DELETE, ignores any error, and then proceeds to do the new ALLOCATE for the VSAM file. I haven’t seen any way described of ALLOCATING a VSAM file other than using IDCAMS, except in 3270 screens. I think I’ve seen that LzLabs has 3270 capabilities for this sort of stuff, but I’m not inclined to try to figure out how to use it. I’d rather use our much more intuitive GUI, or do it in a script with JCL like this.

## Copy a DATASET.

Here is some JCL to copy an (INLINE) dataset into the VSAM file created above

//COPY2VS EXEC PGM=IDCAMS
//TARGET DD DSN=PJOOT.XXXXX005,DISP=(OLD,KEEP,KEEP)
//INLINEDD DD DATA,DCB=(BLKSIZE=240,LRECL=240,RECFM=F)
a
brown
fox
quick
/*
//SYSIN DD *
REPRO -
INFILE(INLINEDD) -
OUTFILE(TARGET)
/*
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSTERM  DD SYSOUT=*


There are two quirks that are noteworthy here.

1. The VSAM file requires the input be sorted, which is why the words from ‘a quick brown fox’ are in the explicitly sorted order above.
2. The VSAM file was created with RECORDSIZE 240, so the input file had to be forced to LRECL=240 to match.

Omission of either sort or the LRECL matching causes the VSAM load to fail.

This was the first time that I’d seen this specific INLINE DD syntax, with explicit parameters.  The way I’d seen it before was how SYSIN was specified above with ‘NAME DD *’, ending with the C “comment start” /* sequence.  It turns out the default end-of-file delimiter can also be overridden; for example, this also works:

//INLINEDD DD DATA,DLM=@@,DCB=(BLKSIZE=240,LRECL=240,RECFM=F)
a
brown
fox
quick
@@


## Cat a file to spool

Because IDCAMS can copy files, this can also be used to cat a file to SPOOL if desired.  Here’s an example:

//CATVS JOB
//CATVS EXEC PGM=IDCAMS
//TARGET DD DSN=PJOOT.XXXXX005,DISP=(OLD,KEEP,KEEP)
//SYSIN DD *
REPRO -
INFILE(TARGET) -
OUTFILE(SYSOUT)
/*
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSTERM  DD SYSOUT=*


If I include a step like this, I’m able to see the file contents in our nice GUI spool browser along with the JCL script and all the other output.

## peeking into relocation of function static in shared library

Here’s gdb TUI output of a function with a static variable access:

B+ |0x7ffff7616800 <st32>           test   %edi,%edi                                                                                                                     |
|0x7ffff7616802 <st32+2>         je     0x7ffff7616811 <st32+17>                                                                                                      |
|0x7ffff7616804 <st32+4>         mov    %edi,%eax                                                                                                                     |
|0x7ffff7616806 <st32+6>         bswap  %eax                                                                                                                          |
|0x7ffff7616808 <st32+8>         mov    %eax,0x200852(%rip)        # 0x7ffff7817060 <st32.yst32>                                                                        |
|0x7ffff761680e <st32+14>        mov    %edi,%eax                                                                                                                     |
|0x7ffff7616810 <st32+16>        retq                                                                                                                                 |
>|0x7ffff7616811 <st32+17>        mov    0x200849(%rip),%edi        # 0x7ffff7817060 <st32.yst32>                                                                        |
|0x7ffff7616817 <st32+23>        bswap  %edi                                                                                                                          |
|0x7ffff7616819 <st32+25>        mov    %edi,%eax                                                                                                                     |
|0x7ffff761681b <st32+27>        retq                                                                                                                                 |
|0x7ffff761681c <_fini>          sub    $0x8,%rsp                                                                                                                     |
|0x7ffff7616820 <_fini+4>        add    $0x8,%rsp                                                                                                                     |
|0x7ffff7616824 <_fini+8>        retq                                                                                                                                 |
|0x7ffff7616825                  add    %al,(%rcx)                                                                                                                    |
|0x7ffff7616827 <x16+1>          add    (%rcx),%al                                                                                                                    |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+


The associated code is:

int st32( int v ) {
static int yst32 = 0x1a2b3c4d;

if ( v ) {
yst32 = v;
}

return yst32;
}
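As an aside, a plain (non byte-swapped) version of the same experiment can be cooked up with stock tools.  The file names here are placeholders, and this skips the LLVM byte swapping pass used for the listings above:

$ gcc -g -fPIC -c g.c -o g.o
$ objdump -r g.o
$ gcc -shared -o libglobtest.so g.o

The relocation records that objdump -r lists against the data section (typically R_X86_64_PC32 entries for a %rip-relative static access like this) are what the linker later fills in.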


The object code dump (prior to relocation) just has zeros in the offset for the variable:

$ objdump -d g.bs.o | grep -A12 '<st32>'
0000000000000050 <st32>:
  50:   85 ff                   test   %edi,%edi
  52:   74 0d                   je     61 <st32+0x11>
  54:   89 f8                   mov    %edi,%eax
  56:   0f c8                   bswap  %eax
  58:   89 05 00 00 00 00       mov    %eax,0x0(%rip)        # 5e <st32+0xe>
  5e:   89 f8                   mov    %edi,%eax
  60:   c3                      retq
  61:   8b 3d 00 00 00 00       mov    0x0(%rip),%edi        # 67 <st32+0x17>
  67:   0f cf                   bswap  %edi
  69:   89 f8                   mov    %edi,%eax
  6b:   c3                      retq

The linker has filled in the real offsets in question, and the dynamic loader has collaborated to put the data segment in the desired location. The observant reader may notice bswap instructions in the listings above that don’t make sense for x86_64 code. That is because this code is compiled with an LLVM pass that performs byte swapping at load and store points, making it big endian in a limited fashion.

The book Linkers and Loaders has some nice explanation of how relocation works, but I wanted to see the end result first hand in the debugger. It turned out that my naive expectation, that the sum of $rip and the constant relocation factor is the address of the global variable (actually a function static in this case), is incorrect. Check that out in the debugger:

(gdb) p /x 0x200849+$rip
$1 = 0x7ffff781705a

(gdb) x/10 $1
0x7ffff781705a <gy+26>: 0x22110000      0x2b1a4433      0x00004d3c      0x00000000
0x7ffff781706a: 0x00000000      0x00000000      0x00000000      0x30350000
0x7ffff781707a: 0x20333236      0x64655228

My magic value 0x1a2b3c4d looks like it is 6 bytes into the $rip + 0x200849 location that the disassembly appears to point to, and that is in fact the case:

(gdb) x/10 $1+6
0x7ffff7817060 <st32.yst32>:    0x4d3c2b1a      0x00000000      0x00000000      0x00000000
0x7ffff7817070 <y32>:   0x00000000      0x00000000      0x32363035      0x52282033
0x7ffff7817080: 0x48206465      0x34207461

My guess was that the mysterious offset of 6 required to actually find this global address was the number of bytes in the MOV instruction, and sure enough that MOV is 6 bytes long:

(gdb) disassemble /r
Dump of assembler code for function st32:
   0x00007ffff7616800 <+0>:     85 ff   test   %edi,%edi
   0x00007ffff7616802 <+2>:     74 0d   je     0x7ffff7616811 <st32+17>
   0x00007ffff7616804 <+4>:     89 f8   mov    %edi,%eax
   0x00007ffff7616806 <+6>:     0f c8   bswap  %eax
   0x00007ffff7616808 <+8>:     89 05 52 08 20 00       mov    %eax,0x200852(%rip)        # 0x7ffff7817060 <st32.yst32>
   0x00007ffff761680e <+14>:    89 f8   mov    %edi,%eax
   0x00007ffff7616810 <+16>:    c3      retq
=> 0x00007ffff7616811 <+17>:    8b 3d 49 08 20 00       mov    0x200849(%rip),%edi        # 0x7ffff7817060 <st32.yst32>
   0x00007ffff7616817 <+23>:    0f cf   bswap  %edi
   0x00007ffff7616819 <+25>:    89 f8   mov    %edi,%eax
   0x00007ffff761681b <+27>:    c3      retq
End of assembler dump.

So, it appears that the %rip reference in the disassembly is really the value of the instruction pointer after the instruction executes, which is curious. (Indeed, 0x7ffff7616811 + 6 + 0x200849 = 0x7ffff7817060, exactly the address of st32.yst32.)

Note that this 4 byte relocation requires that the shared library code segment and the shared library data segment be separated by no more than 4G. The Linux dynamic loader has put all the segments back to back so that this is the case. This can be seen from /proc/PID/maps for the process:

$ ps -ef | grep maindl
pjoot    17622 17582  0 10:50 pts/3    00:00:00 /home/pjoot/workspace/pass/global/maindl libglobtestbs.so

\$ grep libglob /proc/17622/maps
7ffff7616000-7ffff7617000 r-xp 00000000 fc:00 2492653                    /home/pjoot/workspace/pass/global/libglobtestbs.so
7ffff7617000-7ffff7816000 ---p 00001000 fc:00 2492653                    /home/pjoot/workspace/pass/global/libglobtestbs.so
7ffff7816000-7ffff7817000 r--p 00000000 fc:00 2492653                    /home/pjoot/workspace/pass/global/libglobtestbs.so
7ffff7817000-7ffff7818000 rw-p 00001000 fc:00 2492653                    /home/pjoot/workspace/pass/global/libglobtestbs.so


We’ve got a read-execute mmap region, where the code lies, and a read-write mmap region for the data. There’s a read-only segment which I presume is for read only global variables (my shared lib has one such variable and we have one page worth of space allocated for read only memory).

I wonder what the segment that has none of the read, write, or execute permissions set is for?

## New book for work: Linkers and Loaders

February 24, 2017 C/C++ development and debugging. No comments

Fresh off the press:

I got this book to get some background on relocation of ELF globals, and was surprised to find a bit on z/OS (punch card compatible!) object format layout:

… an interesting bonus that’s topical.

## Omelets, a nice perk of working from home.

February 9, 2017 Food No comments

Today’s lunch was a three egg avocado, onion, red pepper, mushroom, cheese omelet, made with free range eggs from a little local Markham farm (16th just past York Durham line) .  I should have included a picture of the eggs before I cracked them, since two of them were an awesome blue, which the farm owner said is due to the Brazilian breed.

Prep included using Sofia’s awesome trick of separating the eggs, and pre-whipping the whites before a final recombination:

and the final delicious result:

I realize now that I forgot to add milk this time, which I’ve done in the past, but it all still worked out well.  I guess the milk is not really required.  As I tend to do with tortillas, I overstuffed this, and now I’m also overstuffed.

## ECE1505H Convex Optimization. Lecture 8: Local vs. Global, and composition of functions. Taught by Prof. Stark Draper

[Click here for a PDF of this post with nicer formatting]

## Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course ECE1505H, Convex Optimization, taught by Prof. Stark Draper, from [1].

## Today

• Finish local vs global.
• Compositions of functions.
• Introduction to convex optimization problems.

## Continuing proof:

We want to prove that if

\begin{equation*}
\begin{aligned}
\spacegrad F(\Bx^\conj) &= 0 \\
\spacegrad^2 F(\Bx^\conj) &> 0
\end{aligned},
\end{equation*}

then $$\Bx^\conj$$ is a local optimum.

Proof:

Again, using Taylor approximation

\label{eqn:convexOptimizationLecture8:20}
F(\Bx^\conj + \Bv) = F(\Bx^\conj) + \lr{ \spacegrad F(\Bx^\conj)}^\T \Bv + \inv{2} \Bv^\T \spacegrad^2 F(\Bx^\conj) \Bv + o(\Norm{\Bv}^2)

The linear term is zero by assumption, whereas the Hessian term is given as $$> 0$$. In any direction that you move, provided the move is small enough, the function increases, so $$\Bx^\conj$$ is a local minimum.

## Summarize:

For twice continuously differentiable functions, if $$\Bx^\conj$$ is a local optimum, then

\label{eqn:convexOptimizationLecture8:40}
\begin{aligned}
\spacegrad F(\Bx^\conj) &= 0 \\
\spacegrad^2 F(\Bx^\conj) &\ge 0
\end{aligned}

If, in addition, $$F$$ is convex, then $$\spacegrad F(\Bx^\conj) = 0$$ implies that $$\Bx^\conj$$ is a global optimum. i.e. for (unconstrained) convex functions, local and global optimums are equivalent.

• It is possible that a convex function does not have a global optimum. An example is $$F(x) = e^x$$ (fig. 1), which has an $$\inf$$, but no lowest point.

fig. 1. Exponential has no global optimum.

• Our discussion has been for unconstrained functions. For constrained problems (the next topic) it is not necessarily true that $$\spacegrad F(\Bx) = 0$$ implies that $$\Bx$$ is a global optimum, even for $$F$$ convex.

As an example of a constrained problem consider

\label{eqn:convexOptimizationLecture8:n}
\begin{aligned}
\min &2 x^2 + y^2 \\
x &\ge 3 \\
y &\ge 5.
\end{aligned}

The level sets of this objective function are plotted in fig. 2. The optimal point is at $$\Bx^\conj = (3,5)$$, where $$\spacegrad F \ne 0$$.

fig. 2. Constrained problem with optimum not at the zero gradient point.
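As a quick sanity check of that claim (my own, not from the lecture), the gradient of the objective is

\begin{equation*}
\spacegrad F = \begin{bmatrix} 4 x \\ 2 y \end{bmatrix},
\end{equation*}

which is $$(12, 10) \ne 0$$ at the constrained optimum $$\Bx^\conj = (3,5)$$.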

## Projection

Given $$\Bx \in \mathbb{R}^n, \By \in \mathbb{R}^p$$, if $$h(\Bx,\By)$$ is convex in $$\Bx, \By$$, then

\label{eqn:convexOptimizationLecture8:60}
F(\Bx_0) = \inf_\By h(\Bx_0,\By)

is convex in $$\Bx$$, as sketched in fig. 3.

fig. 3. Epigraph of $$h$$ is a filled bowl.

The intuition here is that of shining light on the (filled) “bowl”: the image of $$\textrm{epi} h$$ on the $$\By = 0$$ screen, which we will show is a convex set.

Proof:

Since $$h$$ is convex in $$\begin{bmatrix} \Bx \\ \By \end{bmatrix} \in \textrm{dom} h$$, then

\label{eqn:convexOptimizationLecture8:80}
\textrm{epi} h = \setlr{ (\Bx,\By,t) | t \ge h(\Bx,\By), \begin{bmatrix} \Bx \\ \By \end{bmatrix} \in \textrm{dom} h },

is a convex set.

We also have to show that the domain of $$F$$ is a convex set. To show this note that

\label{eqn:convexOptimizationLecture8:100}
\begin{aligned}
\textrm{dom} F
&= \setlr{ \Bx | \exists \By s.t. \begin{bmatrix} \Bx \\ \By \end{bmatrix} \in \textrm{dom} h } \\
&= \setlr{
\begin{bmatrix}
I_{n\times n} & 0_{n \times p}
\end{bmatrix}
\begin{bmatrix}
\Bx \\
\By
\end{bmatrix}
| \begin{bmatrix} \Bx \\ \By \end{bmatrix} \in \textrm{dom} h
}.
\end{aligned}

This is an affine map of a convex set. Therefore $$\textrm{dom} F$$ is a convex set.

\label{eqn:convexOptimizationLecture8:120}
\begin{aligned}
\textrm{epi} F
&=
\setlr{ (\Bx, t) | t \ge \inf_\By h(\Bx,\By), \Bx \in \textrm{dom} F, \By: \begin{bmatrix} \Bx \\ \By \end{bmatrix} \in \textrm{dom} h } \\
&=
\setlr{
\begin{bmatrix}
I & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\Bx \\
\By \\
t
\end{bmatrix}
|
t \ge h(\Bx,\By), \begin{bmatrix} \Bx \\ \By \end{bmatrix} \in \textrm{dom} h
}.
\end{aligned}

### Example:

The function

\label{eqn:convexOptimizationLecture8:140}
F(\Bx) = \inf_{\By \in C} \Norm{ \Bx - \By },

over $$\Bx \in \mathbb{R}^n, \By \in C$$, is convex if $$C$$ is a convex set. Reason:

• $$\Bx - \By$$ is linear in $$(\Bx, \By)$$.
• $$\Norm{ \Bx - \By }$$ is a convex function if the domain is a convex set.
• The domain is $$\mathbb{R}^n \times C$$. This will be a convex set if $$C$$ is.
• $$h(\Bx, \By) = \Norm{\Bx -\By}$$ is a convex function if $$\textrm{dom} h$$ is a convex set. By setting $$\textrm{dom} h = \mathbb{R}^n \times C$$, if $$C$$ is convex, $$\textrm{dom} h$$ is a convex set.
• $$F(\Bx) = \inf_{\By \in C} h(\Bx, \By)$$ is then convex, by the partial infimum (projection) result above.

## Composition of functions

Consider

\label{eqn:convexOptimizationLecture8:160}
\begin{aligned}
F(\Bx) &= h(g(\Bx)) \\
\textrm{dom} F &= \setlr{ \Bx \in \textrm{dom} g | g(\Bx) \in \textrm{dom} h } \\
F &: \mathbb{R}^n \rightarrow \mathbb{R} \\
g &: \mathbb{R}^n \rightarrow \mathbb{R} \\
h &: \mathbb{R} \rightarrow \mathbb{R}.
\end{aligned}

Cases:

1. $$g$$ is convex, $$h$$ is convex and non-decreasing.
2. $$g$$ is concave, $$h$$ is convex and non-increasing.

Show for the 1D case ($$n = 1$$); extend to $$n > 1$$ by applying the argument along all lines.

1. \label{eqn:convexOptimizationLecture8:180}
\begin{aligned}
F'(x) &= h'(g(x)) g'(x) \\
F”(x) &=
h”(g(x)) g'(x) g'(x)
+
h'(g(x)) g”(x) \\
&=
h”(g(x)) (g'(x))^2
+
h'(g(x)) g”(x) \\
&=
\lr{ \ge 0 } \cdot \lr{ \ge 0 }^2 + \lr{ \ge 0 } \cdot \lr{ \ge 0 },
\end{aligned}

since $$h$$ is convex and non-decreasing, and $$g$$ is convex.

2. \label{eqn:convexOptimizationLecture8:180b}
\begin{aligned}
F'(x) =
\lr{ \ge 0 } \cdot \lr{ \ge 0 }^2 + \lr{ \le 0 } \cdot \lr{ \le 0 },
\end{aligned}

since $$h$$ is respectively convex, and non-increasing, and g is concave.

## Extending to multiple dimensions

\label{eqn:convexOptimizationLecture8:200}
\begin{aligned}
F(\Bx)
&= h(g(\Bx)) = h( g_1(\Bx), g_2(\Bx), \cdots g_k(\Bx) ) \\
g &: \mathbb{R}^n \rightarrow \mathbb{R}^k \\
h &: \mathbb{R}^k \rightarrow \mathbb{R}.
\end{aligned}

is convex if $$g_i$$ is convex for each $$i \in [1,k]$$ and $$h$$ is convex and non-decreasing in each argument.

Proof:

again assume $$n = 1$$, without loss of generality,

\label{eqn:convexOptimizationLecture8:220}
\begin{aligned}
g &: \mathbb{R} \rightarrow \mathbb{R}^k \\
h &: \mathbb{R}^k \rightarrow \mathbb{R} \\
\end{aligned}

\label{eqn:convexOptimizationLecture8:240}
F''(\Bx)
=
\begin{bmatrix}
g_1'(\Bx) & g_2'(\Bx) & \cdots & g_k'(\Bx)
\end{bmatrix}
\spacegrad^2 h(g(\Bx))
\begin{bmatrix}
g_1'(\Bx) \\ g_2'(\Bx) \\ \vdots \\ g_k'(\Bx)
\end{bmatrix}
+
\lr{ \spacegrad h(g(\Bx)) }^\T
\begin{bmatrix}
g_1''(\Bx) \\ g_2''(\Bx) \\ \vdots \\ g_k''(\Bx)
\end{bmatrix}

Both terms are non-negative: the first because $$\spacegrad^2 h \ge 0$$ (PSD) when $$h$$ is convex, and the second because $$\spacegrad h \ge 0$$ componentwise ($$h$$ non-decreasing in each argument) while each $$g_i'' \ge 0$$ ($$g_i$$ convex). So $$F'' \ge 0$$, and $$F$$ is convex.

### Example:

\label{eqn:convexOptimizationLecture8:260}
F(x) = \exp( g(x) ) = h( g(x) ),

where $$g$$ is convex, and $$h(y) = e^y$$ is convex and non-decreasing. This implies that $$F$$ is a convex function.

### Example:

\label{eqn:convexOptimizationLecture8:280}
F(x) = \inv{g(x)},

is convex if $$g(x)$$ is concave and positive. The simplest example of such a function is $$h(x) = 1/x, \textrm{dom} h = \mathbb{R}_{++}$$, which is plotted in fig. 4.

fig. 4. Inverse function is convex over positive domain.
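As a quick concrete instance (my own check, not from the lecture): $$g(x) = \sqrt{x}$$ is concave and positive on $$\mathbb{R}_{++}$$, so $$F(x) = 1/\sqrt{x} = x^{-1/2}$$ should be convex there. Indeed,

\begin{equation*}
F''(x) = \frac{3}{4} x^{-5/2} > 0.
\end{equation*}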

### Example:

\label{eqn:convexOptimizationLecture8:300}
F(x) = - \sum_{i = 1}^n \log( -F_i(x) )

is convex on $$\setlr{ x | F_i(x) < 0 \forall i }$$ if all $$F_i$$ are convex.

• Due to $$\textrm{dom} F$$, $$-F_i(x) > 0 \,\forall x \in \textrm{dom} F$$
• $$\log(x)$$ is concave on $$\mathbb{R}_{++}$$, so $$-\log$$ is convex and also non-increasing (fig. 5).

fig. 5. Negative logarithm convex over positive domain.

\label{eqn:convexOptimizationLecture8:320}
F(x) = \sum h_i(x)

but
\label{eqn:convexOptimizationLecture8:340}
h_i(x) = -\log(-F_i(x)),

which is a convex and non-increasing function ($$-\log$$) of a concave function $$-F_i(x)$$ (concave, since $$F_i$$ is convex). Each
$$h_i$$ is convex, so this is a sum of convex functions, and is therefore convex.

### Example:

Over $$\textrm{dom} F = S^n_{++}$$

\label{eqn:convexOptimizationLecture8:360}
F(X) = \log \det X^{-1}

To show that this is convex, check all lines in domain. A line in $$S^n_{++}$$ is a 1D family of matrices

\label{eqn:convexOptimizationLecture8:380}
\tilde{F}(t) = \log \det( \lr{X_0 + t H}^{-1} ),

where $$X_0 \in S^n_{++}, t \in \mathbb{R}, H \in S^n$$.


For $$t$$ small enough,

\label{eqn:convexOptimizationLecture8:400}
X_0 + t H \in S^n_{++}

\label{eqn:convexOptimizationLecture8:420}
\begin{aligned}
\tilde{F}(t)
&= \log \det( \lr{X_0 + t H}^{-1} ) \\
&= \log \det\lr{ X_0^{-1/2} \lr{I + t X_0^{-1/2} H X_0^{-1/2} }^{-1} X_0^{-1/2} } \\
&= \log \det\lr{ X_0^{-1} \lr{I + t X_0^{-1/2} H X_0^{-1/2} }^{-1} } \\
&= \log \det X_0^{-1} + \log\det \lr{I + t X_0^{-1/2} H X_0^{-1/2} }^{-1} \\
&= \log \det X_0^{-1} - \log\det \lr{I + t X_0^{-1/2} H X_0^{-1/2} } \\
&= \log \det X_0^{-1} - \log\det \lr{I + t M }.
\end{aligned}

If $$\lambda_i$$ are the eigenvalues of $$M$$, then $$1 + t \lambda_i$$ are the eigenvalues of $$I + t M$$, i.e., for an eigenvector $$\Bv$$ of $$M$$ with eigenvalue $$\lambda_i$$:

\label{eqn:convexOptimizationLecture8:440}
\begin{aligned}
(I + t M) \Bv
&=
I \Bv + t \lambda_i \Bv \\
&=
(1 + t \lambda_i) \Bv.
\end{aligned}

This gives

\label{eqn:convexOptimizationLecture8:460}
\begin{aligned}
\tilde{F}(t)
&= \log \det X_0^{-1} - \log \prod_{i = 1}^n (1 + t \lambda_i) \\
&= \log \det X_0^{-1} - \sum_{i = 1}^n \log (1 + t \lambda_i)
\end{aligned}

• $$1 + t \lambda_i$$ is linear in $$t$$.
• $$-\log$$ is convex in its argument.
• A sum of convex functions is convex.

So $$\tilde{F}(t)$$ is convex along every line in the domain, and therefore $$F$$ is convex.

### Example:

\label{eqn:convexOptimizationLecture8:480}
F(X) = \lambda_\max(X),

is convex on $$\textrm{dom} F = S^n$$

(a)
\label{eqn:convexOptimizationLecture8:500}
\lambda_{\max} (X) = \sup_{\Norm{\Bv}_2 \le 1} \Bv^\T X \Bv,

\label{eqn:convexOptimizationLecture8:520}
\Lambda =
\begin{bmatrix}
\lambda_1 & & & \\
& \lambda_2 & & \\
& & \ddots & \\
& & & \lambda_n
\end{bmatrix}

Recall that a decomposition

\label{eqn:convexOptimizationLecture8:540}
\begin{aligned}
X &= Q \Lambda Q^\T \\
Q^\T Q &= Q Q^\T = I
\end{aligned}

can be used for any $$X \in S^n$$.

(b)

Note that $$\Bv^\T X \Bv$$ is linear in $$X$$. This is a max of a number of linear (and convex) functions, so it is convex.

Last example:

(non-symmetric matrices)

\label{eqn:convexOptimizationLecture8:560}
F(X) = \sigma_\max(X),

is convex on $$\textrm{dom} F = \mathbb{R}^{m \times n}$$. Here

\label{eqn:convexOptimizationLecture8:580}
\sigma_\max(X) = \sup_{\Norm{\Bv}_2 = 1} \Norm{X \Bv}_2

This is called an operator norm of $$X$$. Using the SVD

\label{eqn:convexOptimizationLecture8:600}
\begin{aligned}
X &= U \Sigma V^\T \\
U &\in \mathbb{R}^{m \times r} \\
\Sigma &= \mathrm{diag}(\sigma_1, \cdots, \sigma_r) \in \mathbb{R}^{r \times r} \\
V^\T &\in \mathbb{R}^{r \times n}.
\end{aligned}

Have

\label{eqn:convexOptimizationLecture8:620}
\Norm{X \Bv}_2^2
=
\Norm{ U \Sigma V^\T \Bv }_2^2
=
\Bv^\T V \Sigma U^\T U \Sigma V^\T \Bv
=
\Bv^\T V \Sigma \Sigma V^\T \Bv
=
\Bv^\T V \Sigma^2 V^\T \Bv
=
\tilde{\Bv}^\T \Sigma^2 \tilde{\Bv},

where $$\tilde{\Bv} = V^\T \Bv$$, so

\label{eqn:convexOptimizationLecture8:640}
\Norm{X \Bv}_2^2
=
\sum_{i = 1}^r \sigma_i^2 \tilde{v}_i^2
\le \sigma_\max^2 \Norm{\tilde{\Bv}}^2,

or
\label{eqn:convexOptimizationLecture8:660}
\Norm{X \Bv}_2
\le \sqrt{ \sigma_\max^2 } \Norm{\tilde{\Bv}}
\le
\sigma_\max.

Here $$\Norm{\tilde{\Bv}} = \Norm{V^\T \Bv} \le \Norm{\Bv}_2 = 1$$. Set $$\Bv$$ to the right singular vector of $$X$$ associated with $$\sigma_\max$$ to get equality.

# References

[1] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.

## Posted notes for Electromagnetic Theory (ECE1228H), taught by Prof. M. Mojahedi, fall 2016

February 3, 2017 math and physics play No comments

I’ve now posted redacted notes for the Electromagnetic Theory (ECE1228H) course I took last fall, taught by Prof. M. Mojahedi.  This course covered a subset of the following:

• Maxwell’s equations
• Constitutive relations and boundary conditions
• Wave polarization
• Field representations: potentials
• Green’s functions and integral equations
• Theorems and concepts: duality, uniqueness, images, equivalence, reciprocity and Babinet’s principles
• Plane, cylindrical, and spherical waves and waveguides
• Radiation and scattering

These notes are fairly compact, only 183 pages, with the full version weighing in at 256 pages.

As always, feel free to contact me for the complete version (i.e. including my problem set solutions) if you are interested, but not if you are asking because you are taking or planning to take this course.

## ECE1505H Convex Optimization. Lecture 7: Examples of convex and concave functions, local and global minimums. Taught by Prof. Stark Draper

[Click here for a PDF of this post with nicer formatting]

### Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course ECE1505H, Convex Optimization, taught by Prof. Stark Draper, from [1].

### Today

• Local and global optimality
• Compositions of functions
• Examples

### Example:

\label{eqn:convexOptimizationLecture7:20}
\begin{aligned}
F(x) &= x^2 \\
F''(x) &= 2 > 0
\end{aligned}

strictly convex.

### Example:

\label{eqn:convexOptimizationLecture7:40}
\begin{aligned}
F(x) &= x^3 \\
F''(x) &= 6 x.
\end{aligned}

Not always non-negative, so not convex. However $$x^3$$ is convex on $$\textrm{dom} F = \mathbb{R}_{+}$$.

### Example:

\label{eqn:convexOptimizationLecture7:60}
\begin{aligned}
F(x) &= x^\alpha \\
F'(x) &= \alpha x^{\alpha-1} \\
F”(x) &= \alpha(\alpha-1) x^{\alpha-2}.
\end{aligned}

fig. 1. Powers of x.

This is convex on $$\mathbb{R}_{+}$$, if $$\alpha \ge 1$$, or $$\alpha \le 0$$.
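As a quick check of the boundary cases (my own, not from the lecture): $$\alpha = 2$$ gives $$F''(x) = 2 > 0$$ (convex), while $$\alpha = 1/2$$ gives $$F''(x) = -\inv{4} x^{-3/2} < 0$$ on $$\mathbb{R}_{++}$$ (concave), consistent with the claim above.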

### Example:

\label{eqn:convexOptimizationLecture7:80}
\begin{aligned}
F(x) &= \log x \\
F'(x) &= \inv{x} \\
F”(x) &= -\inv{x^2} \le 0
\end{aligned}

This is concave.

### Example:

\label{eqn:convexOptimizationLecture7:100}
\begin{aligned}
F(x) &= x\log x \\
F'(x) &= \log x + x \inv{x} = 1 + \log x \\
F”(x) &= \inv{x}
\end{aligned}

This is strictly convex on
$$\mathbb{R}_{++}$$, where
$$F''(x) > 0$$.

### Example:

\label{eqn:convexOptimizationLecture7:120}
\begin{aligned}
F(x) &= e^{\alpha x} \\
F'(x) &= \alpha e^{\alpha x} \\
F”(x) &= \alpha^2 e^{\alpha x} \ge 0
\end{aligned}

fig. 2. Exponential.

Such functions are plotted in fig. 2, and are convex functions for all $$\alpha$$.

### Example:

For symmetric $$P \in S^n$$

\label{eqn:convexOptimizationLecture7:140}
\begin{aligned}
F(\Bx) &= \Bx^\T P \Bx + 2 \Bq^\T \Bx + r \\
\spacegrad F &= (P + P^\T) \Bx + 2 \Bq = 2 P \Bx + 2 \Bq \\
\spacegrad^2 F &= 2 P.
\end{aligned}

This is convex(concave) if $$P \ge 0$$ ($$P \le 0$$).
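As a quick concrete instance of this (my own check, not from the lecture): with $$P = I$$, $$\Bq = 0$$, $$r = 0$$,

\begin{equation*}
F(\Bx) = \Bx^\T \Bx = \Norm{\Bx}_2^2, \qquad \spacegrad^2 F = 2 I > 0,
\end{equation*}

which is (strictly) convex, as expected for $$P > 0$$.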

### Example:

\label{eqn:convexOptimizationLecture7:780}
F(x, y) = x^2 + y^2 + 3 x y,

This function, which is neither convex nor concave, is plotted in fig 3.

fig 3. Function with saddle point (3d and contours)

This function can be put in matrix form

\label{eqn:convexOptimizationLecture7:160}
F(x, y) = x^2 + y^2 + 3 x y
=
\begin{bmatrix}
x & y
\end{bmatrix}
\begin{bmatrix}
1 & 1.5 \\
1.5 & 1
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix},

and has the Hessian

\label{eqn:convexOptimizationLecture7:180}
\begin{aligned}
\spacegrad^2 F
&=
\begin{bmatrix}
\partial_{xx} F & \partial_{xy} F \\
\partial_{yx} F & \partial_{yy} F \\
\end{bmatrix} \\
&=
\begin{bmatrix}
2 & 3 \\
3 & 2
\end{bmatrix} \\
&= 2 P.
\end{aligned}

From the plot we know that this is not PSD, but this can be confirmed by checking the eigenvalues

\label{eqn:convexOptimizationLecture7:200}
\begin{aligned}
0
&=
\det ( P - \lambda I ) \\
&=
(1 - \lambda)^2 - 1.5^2,
\end{aligned}

which has solutions

\label{eqn:convexOptimizationLecture7:220}
\lambda = 1 \pm \frac{3}{2} = \frac{5}{2}, -\frac{1}{2}.

This is not PSD nor negative semi-definite, because it has one positive and one negative eigenvalue (as a check, the eigenvalues sum to $$\mathrm{tr} P = 2$$ and multiply to $$\det P = 1 - 9/4 = -5/4$$). The function is therefore neither convex nor concave.

Along $$y = -x$$,

\label{eqn:convexOptimizationLecture7:240}
\begin{aligned}
F(x,y)
&=
F(x,-x) \\
&=
2 x^2 - 3 x^2 \\
&=
– x^2,
\end{aligned}

so it is concave along this line. Along $$y = x$$

\label{eqn:convexOptimizationLecture7:260}
\begin{aligned}
F(x,y)
&=
F(x,x) \\
&=
2 x^2 + 3 x^2 \\
&=
5 x^2,
\end{aligned}

so it is convex along this line.

### Example:

\label{eqn:convexOptimizationLecture7:280}
F(\Bx) = \sqrt{ x_1 x_2 },

on $$\textrm{dom} F = \setlr{ x_1 \ge 0, x_2 \ge 0 }$$

For the Hessian, first compute the first partials
\label{eqn:convexOptimizationLecture7:300}
\begin{aligned}
\PD{x_1}{F} &= \frac{1}{2} x_1^{-1/2} x_2^{1/2} \\
\PD{x_2}{F} &= \frac{1}{2} x_2^{-1/2} x_1^{1/2}
\end{aligned}

The Hessian components are

\label{eqn:convexOptimizationLecture7:320}
\begin{aligned}
\PD{x_1}{} \PD{x_1}{F} &= -\frac{1}{4} x_1^{-3/2} x_2^{1/2} \\
\PD{x_1}{} \PD{x_2}{F} &= \frac{1}{4} x_2^{-1/2} x_1^{-1/2} \\
\PD{x_2}{} \PD{x_1}{F} &= \frac{1}{4} x_1^{-1/2} x_2^{-1/2} \\
\PD{x_2}{} \PD{x_2}{F} &= -\frac{1}{4} x_2^{-3/2} x_1^{1/2}
\end{aligned}

or
\label{eqn:convexOptimizationLecture7:340}
\spacegrad^2 F
=
-\frac{\sqrt{x_1 x_2}}{4}
\begin{bmatrix}
\inv{x_1^2} & -\inv{x_1 x_2} \\
-\inv{x_1 x_2} & \inv{x_2^2}
\end{bmatrix}.

Checking this for PSD against $$\Bv = (v_1, v_2)$$, we have
\label{eqn:convexOptimizationLecture7:360}
\begin{aligned}
\begin{bmatrix}
v_1 & v_2
\end{bmatrix}
\begin{bmatrix}
\inv{x_1^2} & -\inv{x_1 x_2} \\
-\inv{x_1 x_2} & \inv{x_2^2}
\end{bmatrix}
\begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}
&=
\begin{bmatrix}
v_1 & v_2
\end{bmatrix}
\begin{bmatrix}
\inv{x_1^2} v_1 -\inv{x_1 x_2} v_2 \\
-\inv{x_1 x_2} v_1 + \inv{x_2^2} v_2
\end{bmatrix} \\
&=
\lr{ \inv{x_1^2} v_1 -\inv{x_1 x_2} v_2 } v_1 +
\lr{ -\inv{x_1 x_2} v_1 + \inv{x_2^2} v_2 } v_2
\\
&=
\inv{x_1^2} v_1^2
+ \inv{x_2^2} v_2^2
-2 \inv{x_1 x_2} v_1 v_2 \\
&=
\lr{
\frac{v_1}{x_1}
-\frac{v_2}{x_2}
}^2 \\
&\ge 0,
\end{aligned}

so $$\spacegrad^2 F \le 0$$. The Hessian is negative semi-definite, so the function is concave. Observe that this check had to be done for all values of $$\Bx$$ in the domain.

This is an example of a more general result

\label{eqn:convexOptimizationLecture7:380}
F(x) = \lr{ \prod_{i = 1}^n x_i }^{1/n},

which is concave (prove on homework).

### Summary.

If $$F$$ is differentiable in $$\mathbb{R}^n$$, then check the curvature of the function along all lines, i.e., at all locations and in all directions.

If the Hessian is PSD at all $$\Bx \in \textrm{dom} F$$, that is

\label{eqn:convexOptimizationLecture7:400}
\spacegrad^2 F \ge 0 \, \forall \Bx \in \textrm{dom} F,

then the function is convex.

### Example:

Over $$\textrm{dom} F = \mathbb{R}^n$$

\label{eqn:convexOptimizationLecture7:420}
F(\Bx) = \max_{i = 1}^n x_i

i.e.
\label{eqn:convexOptimizationLecture7:440}
\begin{aligned}
F((1,2)) &= 2 \\
F((3,-1)) &= 3
\end{aligned}

### Example:

\label{eqn:convexOptimizationLecture7:460}
F(\Bx) = \max_{i = 1}^n F_i(\Bx),

where

\label{eqn:convexOptimizationLecture7:480}
F_i(\Bx)
=
… ?

The max of a set of convex functions is a convex function, so $$F$$ is convex if each $$F_i$$ is.

### Example:

\label{eqn:convexOptimizationLecture7:500}
F(x) =
x_{[1]} +
x_{[2]} +
x_{[3]}

where

$$x_{[k]}$$ is the k-th largest number in the list

Write

\label{eqn:convexOptimizationLecture7:520}
F(x) = \max_{(i,j,k)} \lr{ x_i + x_j + x_k },

where the max is over all distinct index triples

\label{eqn:convexOptimizationLecture7:540}
(i,j,k),

of which there are $$\binom{n}{3}$$. This is a max of linear (convex) functions, and is therefore convex.
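For example (a made up numeric check): with $$\Bx = (3, 1, 4, 1, 5)$$, the three largest components are $$5, 4, 3$$, so $$F(\Bx) = x_{[1]} + x_{[2]} + x_{[3]} = 12$$.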

### Example:

For $$\Ba_i \in \mathbb{R}^n$$ and $$b_i \in \mathbb{R}$$

\label{eqn:convexOptimizationLecture7:560}
\begin{aligned}
F(\Bx)
&= \sum_{i = 1}^n \log( b_i - \Ba_i^\T \Bx )^{-1} \\
&= -\sum_{i = 1}^n \log( b_i - \Ba_i^\T \Bx )
\end{aligned}

Each $$b_i - \Ba_i^\T \Bx$$ is an affine function of $$\Bx$$, so it doesn’t affect convexity.

Since $$\log$$ is concave, $$-\log$$ is convex. A convex function of an affine function of $$\Bx$$ is a convex function of $$\Bx$$.

### Example:

\label{eqn:convexOptimizationLecture7:580}
F(\Bx) = \sup_{\By \in C} \Norm{ \Bx - \By }

fig. 3. Max length function

Here $$C \subseteq \mathbb{R}^n$$ is not necessarily convex. We are using $$\sup$$ here because the set $$C$$ may be open. This function is the length of the line from $$\Bx$$ to the point in $$C$$ that is furthest from $$\Bx$$.

• $$\Bx - \By$$ is linear in $$\Bx$$.
• $$g_\By(\Bx) = \Norm{\Bx - \By}$$ is convex in $$\Bx$$, since norms are convex functions.
• $$F(\Bx) = \sup_{\By \in C} \Norm{ \Bx - \By }$$ is the sup over the $$\By$$-indexed family of convex functions $$g_\By$$, and is therefore convex.

### Example:

\label{eqn:convexOptimizationLecture7:600}
F(\Bx) = \inf_{\By \in C} \Norm{ \Bx - \By }.

Min and max of two convex functions are plotted in fig. 4.

fig. 4. Min and max

The max is observed to be convex, whereas the min is not necessarily so.

\label{eqn:convexOptimizationLecture7:800}
F(\Bz) = F(\theta \Bx + (1-\theta) \By) \ge \theta F(\Bx) + (1-\theta)F(\By).

This is not necessarily convex for all sets $$C \subseteq \mathbb{R}^n$$, because the $$\inf$$ of a bunch of convex function is not necessarily convex. However, if $$C$$ is convex, then $$F(\Bx)$$ is convex.

### Consequences of convexity for differentiable functions

• Think about unconstrained functions $$\textrm{dom} F = \mathbb{R}^n$$.
• By first order condition $$F$$ is convex iff the domain is convex and
\label{eqn:convexOptimizationLecture7:620}
F(\By) \ge F(\Bx) + \lr{ \spacegrad F(\Bx)}^\T (\By - \Bx) \, \forall \Bx, \By \in \textrm{dom} F.

If $$F$$ is convex and one can find an $$\Bx^\conj \in \textrm{dom} F$$ such that

\label{eqn:convexOptimizationLecture7:640}
\spacegrad F(\Bx^\conj) = 0,

then

\label{eqn:convexOptimizationLecture7:660}
F(\By) \ge F(\Bx^\conj) \, \forall \By \in \textrm{dom} F.

If you can find the point where the gradient is zero (which can’t always be found), then $$\Bx^\conj$$ is a global minimum of $$F$$.

Conversely, if $$\Bx^\conj$$ is a global minimizer of $$F$$, then $$\spacegrad F(\Bx^\conj) = 0$$ must hold. If that were not the case, then you would be able to find a direction to move downhill, contradicting the optimality of $$\Bx^\conj$$.

### Local vs Global optimum

fig. 6. Global and local minimums

Definition: Local optimum
$$\Bx^\conj$$ is a local optimum of $$F$$ if $$\exists \epsilon > 0$$ such that for all $$\Bx$$ with $$\Norm{\Bx - \Bx^\conj} < \epsilon$$, we have

\begin{equation*}
F(\Bx^\conj) \le F(\Bx)
\end{equation*}

fig. 5. min length function

Theorem:
Suppose $$F$$ is twice continuously differentiable (not necessarily convex)

• If $$\Bx^\conj$$ is a local optimum, then
\begin{equation*}
\begin{aligned}
\spacegrad F(\Bx^\conj) &= 0 \\
\spacegrad^2 F(\Bx^\conj) &\ge 0
\end{aligned}
\end{equation*}
• If
\begin{equation*}
\begin{aligned}
\spacegrad F(\Bx^\conj) &= 0 \\
\spacegrad^2 F(\Bx^\conj) &> 0
\end{aligned},
\end{equation*}then $$\Bx^\conj$$ is a local optimum.

Proof:

• Let $$\Bx^\conj$$ be a local optimum. Pick any $$\Bv \in \mathbb{R}^n$$.\label{eqn:convexOptimizationLecture7:720}
\lim_{t \rightarrow 0} \frac{ F(\Bx^\conj + t \Bv) - F(\Bx^\conj)}{t}
= \lr{ \spacegrad F(\Bx^\conj) }^\T \Bv
\ge 0.

Here the fraction is $$\ge 0$$ since $$\Bx^\conj$$ is a local optimum.

Since the choice of $$\Bv$$ is arbitrary, the only way to ensure that this is $$\ge 0$$ for all $$\Bv$$ is

\label{eqn:convexOptimizationLecture7:740}
\spacegrad F = 0,

(or else one could pick $$\Bv = -\spacegrad F(\Bx^\conj)$$).

This means that $$\spacegrad F(\Bx^\conj) = 0$$ if $$\Bx^\conj$$ is a local optimum.

Consider the 2nd order derivative

\label{eqn:convexOptimizationLecture7:760}
\begin{aligned}
\lim_{t \rightarrow 0} \frac{ F(\Bx^\conj + t \Bv) - F(\Bx^\conj)}{t^2}
&=
\lim_{t \rightarrow 0} \inv{t^2}
\lr{
F(\Bx^\conj) + t \lr{ \spacegrad F(\Bx^\conj) }^\T \Bv + \inv{2} t^2 \Bv^\T \spacegrad^2 F(\Bx^\conj) \Bv + O(t^3)
- F(\Bx^\conj)
} \\
&=
\inv{2} \Bv^\T \spacegrad^2 F(\Bx^\conj) \Bv \\
&\ge 0.
\end{aligned}

Here the $$\ge$$ condition also comes from the fraction, based on the optimality of $$\Bx^\conj$$. This is true for all choices of $$\Bv$$, thus $$\spacegrad^2 F(\Bx^\conj) \ge 0$$.

# References

[1] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.