New home office and network setup

August 30, 2016 Uncategorized No comments

I’ve just bought a nice new desk for my home office:

IMG_20160830_123415866_HDR

When I assembled the new desk, I took the opportunity to refine my home network setup.  In this post I’ll walk through that setup.

I’ve got the router in the living room, along with the two NUC6i’s that I use for my lzlabs development work:

nucs and router

I run the NUCs headless, using ssh and command line access for my work.  For those times that I need to refresh the install image and need a display, I also have them HDMI connected to the living room TV, and direct connected with short ethernet cables to the router:

nuc wiring

The router is also directly connected to the PS3 (for netflix), and then runs through some ethernet lines that I fished into the basement, before some of the basement finishing work we did just after we moved in.  All the lines I have going into the basement terminate in a patch panel:

patch panel

Does a patch panel sound like overkill?  Yes, it probably is.  However, I’ve got a pile of roughed in ethernet lines to the kids’ rooms and the rec room that I’ll eventually fish into the electrical-panel/laundry/network room:

wire bundle

I’d like to also eventually fish some ethernet lines into the upstairs space (e.g. for Netflix run off a Blu-ray player that sits mostly unused in the master bedroom), but that’s a harder fishing job, and I haven’t done it yet.  From the front of the patch panel things go off to the router:

patch panel 2

and from there to a switch:

switch and phone

One line comes from the router (through the patch panel), and then all the rest of the ethernet lines from the switch go to final destinations.  Four of those lines go back to the patch panel and up to the office space.  One is plugged into one of the thunderbolt monitors, so I can use a wired connection while “docked”.  One goes up to the office space and is connected to an Odroid (which can run the Lz stack on aarch64 hardware).  One line goes back to the living room, for optional wired couch surfing, and the last is connected to a voip phone.

I’ve got a lot of finishing touches to do.  I plan to mount the patch panel next to the electrical panel, instead of just placing it on top of the freezer down in the laundry room.  I’ve also got lines that are running through the ceiling, but are hanging loose still.  The wall panel in my office space is currently hanging loose:

desk cabling

I’ve purchased a low voltage wiring box, but still need to cut the hole for it so I can screw in this little panel and get it nicely out of the way.  After that comes the plastering and painting to fix up the messy fishing holes I made trying to find a route down into the basement.

M.Eng survey remarks: rant on exams in grad studies

August 28, 2016 Incoherent ramblings No comments

I was asked to rate my satisfaction of aspects of the M.Eng program I’m taking, so took the opportunity to rant about the insanity of exams:

“As a professional, I find that exam-based courses are very artificial, and not terribly meaningful. As an undergrad I did not question this practice, but had forgotten about those days. It was a rather rude shock to see this dark-ages practice still in use for graduate studies.

Nowhere in a professional context does it make sense to try to memorize enough content that you can barf out half-assed material sufficient to pass examination questions, or attempt to gain maximum marks in a minimum amount of time. Professional work needs to be complete and accurate. Mistakes cost millions of dollars, and get people fired.

It makes no sense to take any examination, and not be given the opportunity to see the marked exam content. One ought to have the chance to review that material, correcting all deficiencies in one’s understanding. If mistakes are made in a professional context, there is a feedback cycle, where root causes are addressed. This is completely missing from the brain dead process of final exams.

Thankfully, not all the graduate courses I took still used the childish approach of grading based on examinations, but enough did that it was annoying. The aim should not be to produce a mark, but to produce full understanding of the subject material, and demonstrate the ability to apply that material.”

Shipping with DHL. They will screw you, but not quite as bad as UPS.

August 27, 2016 Incoherent ramblings, Uncategorized No comments

I previously complained about UPS customs clearing charges that I was slammed with receiving back some of my own goods.

Basically, the Canadian government grants shipping companies the right to extort receivers at the point of customs clearing. Canada might add a few cents or a buck or two of tax, but the shipping company is then able to add fees that are orders of magnitude higher than the actual taxes.

I actually stopped buying anything from the United States because of this, and have been buying from Europe and India instead, where I had not yet gotten blasted with customs clearing fees for the items I’ve been buying (usually textbooks).

However, it appears that my luck has run out.  Here’s the newest example, with a $15 clearing fee that DHL added onto about a dollar of tax:

IMG_20160827_095043195 (1)

Note that I did not pick the shipping company.  That was selected by the book seller (one of the abebooks.com resellers).

For $1.17 of taxes, DHL has charged me $14.75 of fees, all for the right to allow Canada Revenue to steal from me.  To add insult to injury, DHL is allowed to charge GST for their extortion service, so I end up paying an additional $3.09 (close to 3x the initial tax amount).  The value of the book + shipping that I purchased was only $23.30!

Aside: Why is the GST on $14.75 that high?  I thought that’s a 13% tax, so shouldn’t it be $1.92?

I’ve found some instructions that explain some of the black magic required to do my own customs clearing:

One of the first steps is to find the CBSA office that I can submit such a clearing form to.  I can narrow that search down to province, but some of these offices are restricted to specific purposes, and it isn’t obvious which of these offices I should use.  For example the one at Buttonville airport appears to be restricted to handling just the cargo that arrives there.

I wonder if there are any local resellers that import used and cheap textbooks in higher quantities and then resell them locally (taking the customs clearing charge only once per shipment)?

Electric field due to spherical shell

August 24, 2016 math and physics play No comments

[Click here for a PDF of this post with nicer formatting]

Here’s a problem (2.7) from [1], to calculate the field due to a spherical shell. The field is

\begin{equation}\label{eqn:griffithsEM2_7:20}
\BE = \frac{\sigma}{4 \pi \epsilon_0} \int \frac{(\Br - \Br')}{\Abs{\Br - \Br'}^3} da',
\end{equation}

where \( \Br' \) is the position of the area element on the shell. For the test position, let \( \Br = z \Be_3 \). We need to parameterize the area integral. A complex-number like geometric algebra representation works nicely.

\begin{equation}\label{eqn:griffithsEM2_7:40}
\begin{aligned}
\Br'
&= R \lr{ \sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta } \\
&= R \lr{ \Be_1 \sin\theta \lr{ \cos\phi + \Be_1 \Be_2 \sin\phi } + \Be_3 \cos\theta } \\
&= R \lr{ \Be_1 \sin\theta e^{i\phi} + \Be_3 \cos\theta }.
\end{aligned}
\end{equation}

Here \( i = \Be_1 \Be_2 \) has been used to represent the horizontal rotation plane.

The difference in position between the test vector and area-element is

\begin{equation}\label{eqn:griffithsEM2_7:60}
\Br - \Br'
= \Be_3 \lr{ z - R \cos\theta } - R \Be_1 \sin\theta e^{i \phi},
\end{equation}

with an absolute squared length of

\begin{equation}\label{eqn:griffithsEM2_7:80}
\begin{aligned}
\Abs{\Br - \Br' }^2
&= \lr{ z - R \cos\theta }^2 + R^2 \sin^2\theta \\
&= z^2 + R^2 - 2 z R \cos\theta.
\end{aligned}
\end{equation}

As a side note, this is a kind of fun way to prove the old “cosine-law” identity. With that done, the field integral can now be expressed explicitly

\begin{equation}\label{eqn:griffithsEM2_7:100}
\begin{aligned}
\BE
&= \frac{\sigma}{4 \pi \epsilon_0} \int_{\phi = 0}^{2\pi} \int_{\theta = 0}^\pi R^2 \sin\theta d\theta d\phi
\frac{\Be_3 \lr{ z - R \cos\theta } - R \Be_1 \sin\theta e^{i \phi}}
{
\lr{z^2 + R^2 - 2 z R \cos\theta}^{3/2}
} \\
&= \frac{2 \pi R^2 \sigma \Be_3}{4 \pi \epsilon_0} \int_{\theta = 0}^\pi \sin\theta d\theta
\frac{z - R \cos\theta}
{
\lr{z^2 + R^2 - 2 z R \cos\theta}^{3/2}
} \\
&= \frac{2 \pi R^2 \sigma \Be_3}{4 \pi \epsilon_0} \int_{\theta = 0}^\pi \sin\theta d\theta
\frac{ R( z/R - \cos\theta) }
{
(R^2)^{3/2} \lr{ (z/R)^2 + 1 - 2 (z/R) \cos\theta}^{3/2}
} \\
&= \frac{\sigma \Be_3}{2 \epsilon_0} \int_{u = -1}^{1} du
\frac{ z/R - u}
{
\lr{1 + (z/R)^2 - 2 (z/R) u}^{3/2}
}.
\end{aligned}
\end{equation}

Observe that all the azimuthal contributions get killed. We expect that due to the symmetry of the problem. We are left with an integral that submits to Mathematica, but doesn’t look fun to attempt manually. Specifically

\begin{equation}\label{eqn:griffithsEM2_7:120}
\int_{-1}^1 \frac{a-u}{\lr{1 + a^2 - 2 a u}^{3/2}} du
=
\left\{
\begin{array}{l l}
\frac{2}{a^2} & \quad \mbox{if \( a > 1 \) } \\
0 & \quad \mbox{if \( a < 1 \) }
\end{array}
\right.,
\end{equation}

so

\begin{equation}\label{eqn:griffithsEM2_7:140}
\boxed{
\BE = \left\{
\begin{array}{l l}
\frac{\sigma (R/z)^2 \Be_3}{\epsilon_0} & \quad \mbox{if \( z > R \) } \\
0 & \quad \mbox{if \( z < R \) }
\end{array}
\right.
}
\end{equation}

In the problem, it is pointed out to be careful of the sign when evaluating \( \sqrt{ R^2 + z^2 - 2 R z } \); however, I don’t see where that is even useful.
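Working the integral manually (my own attempt, not from the text), that sign care appears to be exactly where the two cases come from. With the substitution \( w = 1 + a^2 - 2 a u \), so that \( du = -dw/(2a) \) and \( a - u = (a^2 - 1 + w)/(2a) \),

\begin{equation}\label{eqn:griffithsEM2_7:160}
\begin{aligned}
\int_{-1}^1 \frac{a-u}{\lr{1 + a^2 - 2 a u}^{3/2}} du
&= \frac{1}{4 a^2} \int_{(1-a)^2}^{(1+a)^2} \lr{ a^2 - 1 + w } w^{-3/2} dw \\
&= \frac{1}{2 a^2} \lr{ \sqrt{w} - \frac{a^2 - 1}{\sqrt{w}} } \Bigg\vert_{(1-a)^2}^{(1+a)^2}.
\end{aligned}
\end{equation}

The lower limit requires \( \sqrt{(1-a)^2} = \Abs{1 - a} \), which is \( a - 1 \) when \( a > 1 \), but \( 1 - a \) when \( a < 1 \). The upper limit contributes \( 2 \) in both cases, while the lower limit contributes \( -2 \) when \( a > 1 \) (for a total of \( 2/a^2 \)), and \( 2 \) when \( a < 1 \) (for a total of zero).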

References

[1] David J. Griffiths. Introduction to Electrodynamics. Prentice Hall, Upper Saddle River, NJ, 3rd edition, 1999.

Note to IBMers re: LzLabs employment.

August 22, 2016 Incoherent ramblings 2 comments

When contemplating the decision to leave IBM for LzLabs, I found it helpful to enumerate the pros and cons of that decision, which I shared in a Leaving IBM: A causal analysis blog post after I’d formally left IBM.

Who could have predicted that a blog post that wasn’t about programming arcana, physics or mathematics would turn out to be my most popular ever?  There’s no accounting for the taste of the reader ;)

In response to that blog post, I’ve been contacted by a few IBMers who were interested in potential LzLabs employment.

Please note that I left IBM with band 9 status, which means that there is a one year restriction (expiring ~May 2017) against me having any involvement with hiring or recruitment of IBM employees.  An IBM lawyer was very careful to point out that the band 9 contract I signed in 2006 has such a non-solicitation agreement.  I don’t think that anybody told the IBM lawyer that IBM appears to be trying really hard to throw away technical staff, but that does not change the fact that I am bound by such an agreement.

If you contacted me, and I were to respond, it could probably be argued that this would not count as solicitation.  However, I don’t feel inclined to pick a fight with IBM lawyers, who I imagine to have very sharp teeth and unlimited legal budgets.  So, if you are working for IBM, and asking about LzLabs employment, please don’t be offended that I did not reply.  I will try to remember to respond to you next spring, when the sharks are swimming elsewhere.

Geeking out: Oriented surface of volume element

August 20, 2016 math and physics play 2 comments

Reading the intro chapters to my Griffiths electrodynamics, I ended up re-deriving the 1, 2 and 3 parameter variations of Stokes Theorem (a quick derivation as previously done using Geometric Algebra [PDF], but without looking at my notes).  To understand how to map from the algebraic representation to a geometric one for the 3 parameter volume element, I built the following nice little model:

IMG_20160820_231303757 IMG_20160820_231333998 IMG_20160820_231316618_HDR

This is like Fig 1.10 from the pdf notes linked above, but was a lot more fun to construct than the drawing.

Options for the remainder of my M.Eng

August 14, 2016 Incoherent ramblings 4 comments

I had a provisional plan for the remainder of my M.Eng earlier in the year. I knew that I’d likely have to adjust this based on what courses were actually made available at registration time in the years to come.

Without looking back at that plan, and assuming that what is available this year will continue to be available, I came up with the following new plan:

First choices

2016

2017

2018

Alternates

Thoughts

Last term I ended up dropping ‘Microwave Circuits’ in favour of ‘Scientific Computing for Physicists’, which I was taking concurrently.  The microwave circuits course was really about GHz domain electromagnetics, but it’s material that I’d enjoy a lot more, and get a lot more out of, by reading the course text and doing problems than by attending the lectures (which were just taught from slides).

Because I’d dropped the uwaves course, I’m now down one ECE course.  I need 5 of my 9 M.Eng course selections to be from ECE to graduate.  This forces me to make additional course selections from ECE that I hadn’t initially planned for.  For example, I’d rather take the MIE CFD course or the PHY Quantum Optics course than the ECE Information Theory course, which I selected primarily because it helped satisfy the graduation requirements.


Playing with c++11 and posix regular expression libraries

July 24, 2016 C/C++ development and debugging. No comments

I was curious how the c++11 std::regex interface compared to the C posix regular expression library. The c++11 interfaces are almost as easy to use as perl. Suppose we have some space separated fields that we wish to manipulate, showing an order switch and the original:

my @strings = ( "hi bye", "hello world", "why now", "one two" ) ;

foreach ( @strings )
{
   s/(\S+)\s+(\S+)/'$&' -> '$2 $1'/ ;

   print "$_\n" ;
}

The C++ equivalent is

   const char * strings[] { "hi bye", "hello world", "why now", "one two" } ;

   std::regex re( R"((\S+)\s+(\S+))" ) ;

   for ( auto s : strings )
   {
      std::cout << regex_replace( s, re, "'$&' -> '$2 $1'\n" )  ;
   }

We have one additional step with the C++ code, compiling the regular expression.  Precompilation of perl regular expressions is also possible, but that is usually just a performance optimization.

The posix equivalent requires precompilation too

void posixre_error( regex_t * pRe, int rc )
{
   char buf[ 128 ] ;

   regerror( rc, pRe, buf, sizeof(buf) ) ;

   fprintf( stderr, "regerror: %s\n", buf ) ;
   exit( 1 ) ;
}

void posixre_compile( regex_t * pRe, const char * expression )
{
   int rc = regcomp( pRe, expression, REG_EXTENDED ) ;
   if ( rc )
   { 
      posixre_error( pRe, rc ) ;
   }
}

but the transform requires more work:

void posixre_transform( regex_t * pRe, const char * input )
{
   constexpr size_t N{3} ;
   regmatch_t m[N] {} ;

   int rc = regexec( pRe, input, N, m, 0 ) ;

   if ( rc && (rc != REG_NOMATCH) )
   {
      posixre_error( pRe, rc ) ;
   }

   if ( !rc )
   { 
      printf( "'%s' -> ", input ) ;
      int len ;
      len = m[2].rm_eo - m[2].rm_so ; printf( "'%.*s ", len, &input[ m[2].rm_so ] ) ;
      len = m[1].rm_eo - m[1].rm_so ; printf( "%.*s'\n", len, &input[ m[1].rm_so ] ) ;
   }
}

To get at the capture expressions we have to pass an array of regmatch_t’s.  The first element of that array is the entire match expression, and we get the captures after that.  The awkward thing to deal with is that each regmatch_t is a structure containing the start and end offsets within the string.

If we want more granular info from the c++ matcher, it can also provide an array of capture info. We can also get info about whether or not the match worked, something we can do in perl easily

my @strings = ( "hi bye", "helloworld", "why now", "onetwo" ) ;

foreach ( @strings )
{
   if ( s/(\S+)\s+(\S+)/$2 $1/ )
   {
      print "$_\n" ;
   }
}  

This only prints the transformed line if there was a match success. To do this in C++ we can use regex_match

const char * pattern = R"((\S+)\s+(\S+))" ;

std::regex re( pattern ) ;

for ( auto s : strings )
{ 
   std::cmatch m ;

   if ( regex_match( s, m, re ) )
   { 
      std::cout << m[2] << ' ' << m[1] << '\n' ;
   }
}

Note that we don’t have to mess around with offsets as was required with the Posix C interface, and also don’t have to worry about the size of the capture match array, since that is handled under the covers.  It’s not too hard to wrap the posix C APIs in a C++ class that makes them about as easy to use as the C++ regex code, but there’s little point unless you are constrained to pre-C++11 code and can also live with a Unix only restriction.  There are also portability issues with the posix APIs.  For example, perl-style regular expressions like:

   R"((\S+)(\s+)(\S+))"

work fine with the Linux regex API, but that appears to be an exception. To make code using that regex work on Mac, I had to use strict posix syntax

   R"(([^[:space:]]+)([[:space:]]+)([^[:space:]]+))"

Actually using the Posix C interface, with a portability constraint that avoids the Linux regex extensions, would be horrendous.

Notes on “memory and resources” of Stroustrup’s “The C++ Programming Language”.

July 21, 2016 C/C++ development and debugging. No comments

Some chapter 34 notes.

array

There’s a fixed size array type designed to replace raw C style arrays. It doesn’t appear that it is bounds checked by default, and the Xcode7 (clang) compiler doesn’t do bounds checking for it right now. Here’s an example

#include <array>

using a10 = std::array<int, 10> ;

void foo( a10 & a )
{
   a[3] = 7 ;
   a[13] = 7 ;
}

void bar( int * a )
{
   a[3] = 7 ;
   a[13] = 7 ;
}

The generated asm for both of these is identical

$ gobjdump -d --reloc -C --no-show-raw-insn d.o

d.o:     file format mach-o-x86-64

Disassembly of section .text:

0000000000000000 <foo(std::__1::array<int, 10ul>&)>:
   0:   push   %rbp
   1:   mov    %rsp,%rbp
   4:   movl   $0x7,0xc(%rdi)
   b:   movl   $0x7,0x34(%rdi)
  12:   pop    %rbp
  13:   retq   
  14:   data16 data16 nopw %cs:0x0(%rax,%rax,1)

0000000000000020 <bar(int*)>:
  20:   push   %rbp
  21:   mov    %rsp,%rbp
  24:   movl   $0x7,0xc(%rdi)
  2b:   movl   $0x7,0x34(%rdi)
  32:   pop    %rbp
  33:   retq   
  34:   data16 data16 nopw %cs:0x0(%rax,%rax,1)

The foo() function here is also not compile-time bounds checked if the out of bounds access is changed to

   a.at(13) = 7 ;

however, this does at least generate an out of bounds error

$ ./d
libc++abi.dylib: terminating with uncaught exception of type std::out_of_range: array::at
Abort trap: 6

Even though we don’t get compile-time bounds checking (at least with the current clang compiler), array has the nice advantage of knowing its own size, so you can’t screw it up:

void blah( a10 & a )
{
   a[0] = 1 ;

   for ( int i{1} ; i < a.size() ; i++ )
   {
      a[i] = 2 * a[i-1] ;
   }
}

bitset and vector<bool>

The bitset class provides a fixed size bit array that appears to be formed from an array of register sized words. On a 64-bit platform (mac+xcode 7) I’m seeing that sizeof(std::bitset<N>) == 8 for N <= 64 bits, doubling to 16 bytes for N <= 128 bits.

The code for something like the following (set two bits), is pretty decent, basically a single or immediate instruction:

using b70 = std::bitset<70> ;

void foo( b70 & v )
{
   v[3] = 1 ;
   v[13] = 1 ;
}

Array access operators are provided to access each bit position:

   for ( int i{} ; i < v.size() ; i++ )
   {
      char sep{ ' ' } ;
      if ( ((i+1) % 8) == 0 )
      {
         sep = '\n' ;
      }

      std::cout << v[i] << sep ;
   }
   std::cout << '\n' ;

There is no range-for support built in for this class. I was able to add that support using a wrapper class

template <int N>
struct iter ;

template <int N>
struct mybits : public std::bitset<N>
{
   using T = std::bitset<N> ;

   using T::T ;
   using T::size ;

   inline iter<N> begin( ) ;

   inline iter<N> end( ) ;
} ;

and a helper iterator

template <int N>
struct iter
{
   unsigned pos{} ;
   const mybits<N> & b ;

   iter( const mybits<N> & bits, unsigned p = {} ) : pos{p}, b{bits} {}

   const iter & operator++()
   {
      pos++ ;

      return *this ;
   }

   bool operator != ( const iter & i ) const
   { 
      return pos != i.pos ;
   }

   int operator*() const
   { 
      return b[ pos ] ;
   }
} ;

plus the begin and end function bodies required for the loop

template <int N>
inline iter<N> mybits<N>::begin( )
{
   return iter<N>( *this ) ;
}

template <int N>
inline iter<N> mybits<N>::end( )
{
   return iter<N>( *this, size() ) ;
}

I’m not sure what the rationale for not including such range-for support is, when std::vector<bool> has exactly that. std::vector<bool> is a std::vector specialization that is also supposed to be compact, but unlike bitset, allows for a variable sized bit array.

bitset also has a number of handy type conversion operators that std::vector<bool> does not (to string and to unsigned integer, plus construction from string)

tuple

The std::tuple type generalizes std::pair, allowing for easy structures of N different types.

I saw that tuple has an associated std::tie helper that allows it to behave very much like a perl array assignment. Such an assignment looks like

#!/usr/bin/perl

my ($a, $b, $c) = foo() ;

printf( "%0.1f $b $c\n", $a ) ;

exit 0 ;

sub foo
{
   return (1.0, "blah", 3) ;
}

A similar C++ equivalent is more verbose

#include <tuple>
#include <stdio.h>

using T = std::tuple<float, const char *, int> ;

T foo()
{
   return std::make_tuple( 1.0, "blah", 3 ) ;
}

int main()
{
   float f ;
   const char * k ;
   int i ;

   std::tie( f, k, i ) = foo() ;

   printf("%f %s %d\n", f, k, i ) ;

   return 0 ;
}

I was curious how the code that accepts a tuple return differed between using tie with separate variables (as above) and using a structure return

struct S
{
   float f ;
   const char * s ;
   int i ;
} ;

S bar()
{
   return { 1.0, "blah", 3 } ;
}

In each case, using -O2 and the Xcode 7 compiler (clang), a printf function similar to the above ends up looking pretty much uniformly like:

$ gobjdump -d --reloc -C --no-show-raw-insn u.o 
...

0000000000000110 <h()>:
 110:   push   %rbp
 111:   mov    %rsp,%rbp
 114:   sub    $0x20,%rsp
 118:   lea    -0x18(%rbp),%rdi
 11c:   callq  121 <h()+0x11>
                        11d: BRANCH32   foo()
 121:   mov    -0x10(%rbp),%rsi
 125:   mov    -0x8(%rbp),%edx
 128:   movss  -0x18(%rbp),%xmm0
 12d:   cvtss2sd %xmm0,%xmm0
 131:   lea    0xd(%rip),%rdi        # 145 <h()+0x35>
                        134: DISP32     .cstring-0x145
 138:   mov    $0x1,%al
 13a:   callq  13f <h()+0x2f>
                        13b: BRANCH32   printf
 13f:   add    $0x20,%rsp
 143:   pop    %rbp
 144:   retq   

The generated code is pretty much dominated by the stack pushing required for the printf call. I used printf here instead of std::cout because the generated code for std::cout is so crappy looking (and verbose).

shared_ptr

Reading the section on shared_ptr, it wasn’t obvious that it was a thread safe interface. I wondered if some sort of specialization was required to make the reference counting thread safe. It appears that thread safety is built in

This can also be seen in the debugger (assuming the gcc libstdc++ is representative)

Breakpoint 1, main () at sharedptr.cc:33
33    std::shared_ptr<T> p = std::make_shared<T>() ;
Missing separate debuginfos, use: debuginfo-install libgcc-4.8.5-4.el7.x86_64 libstdc++-4.8.5-4.el7.x86_64
(gdb) n
35    foo( p ) ;
(gdb) s
std::shared_ptr<T>::shared_ptr (this=0x7fffffffe060) at /usr/include/c++/4.8.2/bits/shared_ptr.h:103
103         shared_ptr(const shared_ptr&) noexcept = default;
(gdb) s
std::__shared_ptr<T, (__gnu_cxx::_Lock_policy)2>::__shared_ptr (this=0x7fffffffe060) at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:779
779         __shared_ptr(const __shared_ptr&) noexcept = default;
(gdb) s
std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count (this=0x7fffffffe068, __r=...)
    at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:550
550         : _M_pi(__r._M_pi)
(gdb) s
552      if (_M_pi != 0)
(gdb) s
553        _M_pi->_M_add_ref_copy();
(gdb) s
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_add_ref_copy (this=0x607010) at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:131
131         { __gnu_cxx::__atomic_add_dispatch(&_M_use_count, 1); }

This was looking at a call of the following form

using Tp = std::shared_ptr<T> ;

void foo( Tp p ) ;

int main()
{
   std::shared_ptr<T> p = std::make_shared<T>() ;

   foo( p ) ;

   return 0 ;
}

some c++11 standard library notes

July 9, 2016 C/C++ development and debugging. No comments

Some notes on Chapter 31, 32 (standard library, STL) of Stroustrup’s “The C++ Programming Language, 4th edition”.

Emplace

I’d never heard the word emplace before, but it turns out that it isn’t a word made up for c++, but is also a dictionary word, meaning to “put into place or position”.

c++11 defines some emplace functions. Here’s an example for vector

#include <vector>
#include <iostream>

int main()
{
   using pair = std::pair<int, int> ;
   using vector = std::vector< pair > ;

   vector v ;

   pair p{ 1, 2 } ;
   v.push_back( p ) ;
   v.push_back( {2, 3} ) ;
   v.emplace_back( 3, 4 ) ;

   for ( auto e : v )
   {
      std::cout << e.first << ", " << e.second << '\n' ;
   }

   return 0 ;
}

The emplace_back is like the push_back function, but does not require that a constructed object be created first, either explicitly as in the object p above, or implicitly as done with the {2, 3} pair initializer list.

multimap

I’d written some perl code the other day when I wanted a hash that had multiple entries per key. Since my hashed elements were simple, I just strung them together as comma separated entries (I could have also used a hash of array references). It looks like c++11 builds exactly the construct that I wanted into the STL, with both a multimap and an unordered_multimap. Here’s an example of the latter

#include <unordered_map>
#include <string>
#include <iostream>

int main()
{
   std::unordered_multimap< int, std::string > m ;

   m.emplace( 3, "hi" ) ;
   m.emplace( 3, "bye" ) ;
   m.emplace( 4, "wow" ) ;

   for ( auto & v : m )
   {
      std::cout << v.first << ": " << v.second << '\n' ;
   }
  
   for ( auto f{ m.find(3) } ; f != m.end() ; ++f )
   {
      std::cout << "find: " << f->first << ": " << f->second << '\n' ;
   }
   
   return 0 ;
} 

Running this gives me

$ ./a.out 
4: wow
3: hi
3: bye
find: 3: hi
find: 3: bye

Observe how nice auto is here. I don’t have to care what the typename for the unordered_multimap find result is. According to gdb that type is:

(gdb) whatis f
type = std::__1::__hash_map_iterator<std::__1::__hash_iterator<std::__1::__hash_node<std::__1::__hash_value_type<int, std::__1::basic_string<char> >, void*>*> >

Yikes!

STL

The STL chapter outlines lots of different algorithms. One powerful new feature in c++11 is that lambdas can be used instead of predicate function objects, which is so much cleaner. I used that capability in a scientific computing programming assignment earlier this year with partial_sort.

The find_if_not algorithm caught my eye, because I had just manually coded exactly that sort of loop translating intel assembly that used ‘REPE SCASB’ instructions, and that code was precisely of this find_if_not form. The c++ equivalent of the assembly was roughly of the following form:

int scan3( const std::string & s, char v )
{
   auto p = s.begin() ;
   for ( ; p != s.end() ; p++ )
   {
      if ( *p != v )
      {
         break ; 
      }
   }

   if ( p == s.end() )
   {
      return 0 ;
   }
   else
   {
      std::cout << "diff: " << p - s.begin() << '\n' ;

      return ( v > *p ) ? 1 : -1 ;
   }
}

Range for can also be used for this loop, but it is only slightly clearer:

int scan2( const std::string & s, char v )
{
   auto p = s.begin() ;
   for ( auto c : s )
   {
      if ( c != v )
      {
         break ;
      }

      p++ ;
   }

   if ( p == s.end() )
   { 
      return 0 ;
   }
   else
   { 
      std::cout << "diff: " << p - s.begin() << '\n' ;

      return ( v > *p ) ? 1 : -1 ;
   }
}

An STL version of this loop that uses a lambda predicate is

int scan( const std::string & s, char v )
{
   auto i = find_if_not( s.begin(),
                         s.end(),
                         [ v ]( char c ){ return c == v ; }
                       ) ;

   if ( i == s.end() )
   { 
      return 0 ;
   }
   else
   { 
      std::cout << "diff: " << i - s.begin() << '\n' ;

      return ( v > *i ) ? 1 : -1 ;
   }
}

I don’t really think that this is any clearer than the explicit for loop versions. All give the same results when tried:

int main()
{
   std::vector< std::function< int( const std::string &, char ) > > v { scan, scan2, scan3 } ;

   for ( auto f : v )
   { 
      int r0 = f( "nnnnn", 'n' ) ;
      int rp = f( "nnnnnmmm", 'n' ) ;
      int rn = f( "nnnnnpnn", 'n' ) ;

      std::cout << r0 << '\n' ;
      std::cout << rp << '\n' ;
      std::cout << rn << '\n' ;
   }

   return 0 ;
}

The compiler does almost the same for all three implementations. With the cout’s removed, and compiling with optimization, the respective function sizes (in bytes) are:

(gdb) p 0xee3-0xe70
$1 = 115
(gdb) p 0xf4c-0xef0
$2 = 92
(gdb) p 0xfc3-0xf50
$3 = 115

The listings for the STL and the C style for loop are almost the same. The Apple xcode 7 compiler seems to produce slightly more compact code for the range-for version of this function for reasons that are not obvious to me.