
Notes on “memory and resources” of Stroustrup’s “The C++ Programming Language”.

July 21, 2016 C/C++ development and debugging.

Some chapter 34 notes.

array

There's a fixed size array type, std::array, designed to replace raw C style arrays. operator[] access is not bounds checked, and the Xcode 7 (clang) compiler doesn't add any compile-time bounds checking for it right now either. Here's an example

#include <array>

using a10 = std::array<int, 10> ;

void foo( a10 & a )
{
   a[3] = 7 ;
   a[13] = 7 ;
}

void bar( int * a )
{
   a[3] = 7 ;
   a[13] = 7 ;
}

The generated asm for both of these is identical

$ gobjdump -d --reloc -C --no-show-raw-insn d.o

d.o:     file format mach-o-x86-64

Disassembly of section .text:

0000000000000000 <foo(std::__1::array<int, 10ul>&)>:
   0:   push   %rbp
   1:   mov    %rsp,%rbp
   4:   movl   $0x7,0xc(%rdi)
   b:   movl   $0x7,0x34(%rdi)
  12:   pop    %rbp
  13:   retq   
  14:   data16 data16 nopw %cs:0x0(%rax,%rax,1)

0000000000000020 <bar(int*)>:
  20:   push   %rbp
  21:   mov    %rsp,%rbp
  24:   movl   $0x7,0xc(%rdi)
  2b:   movl   $0x7,0x34(%rdi)
  32:   pop    %rbp
  33:   retq   
  34:   data16 data16 nopw %cs:0x0(%rax,%rax,1)

The foo() function here is also not compile-time bounds checked if the out of bounds access is changed to

   a.at(13) = 7 ;

however, this does at least generate a runtime out-of-range error

$ ./d
libc++abi.dylib: terminating with uncaught exception of type std::out_of_range: array::at
Abort trap: 6

Even though we don’t get compile-time bounds checking (at least with the current clang compiler), array has the nice advantage of knowing its own size, so you can’t screw it up:

void blah( a10 & a )
{
   a[0] = 1 ;

   for ( int i{1} ; i < a.size() ; i++ )
   {
      a[i] = 2 * a[i-1] ;
   }
}

bitset and vector<bool>

The bitset class provides a fixed size bit array that appears to be built from an array of register sized words. On a 64-bit platform (Mac + Xcode 7) I'm seeing sizeof(std::bitset<N>) == 8 for N <= 64, doubling to 16 for N <= 128.
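
For illustration, here's a minimal sketch of checking those sizes (the exact values are implementation specific; the comments show what I'd expect from the layout described above):

#include <bitset>
#include <iostream>

int main()
{
   // on the 64-bit platform above I'd expect these to print 8, 8, 16, 16,
   // consistent with storage in register sized (64-bit) words:
   std::cout << sizeof( std::bitset<1> ) << '\n' ;
   std::cout << sizeof( std::bitset<64> ) << '\n' ;
   std::cout << sizeof( std::bitset<65> ) << '\n' ;
   std::cout << sizeof( std::bitset<128> ) << '\n' ;

   return 0 ;
}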

The code generated for something like the following (setting two bits) is pretty decent, basically a single or-immediate instruction:

using b70 = std::bitset<70> ;

void foo( b70 & v )
{
   v[3] = 1 ;
   v[13] = 1 ;
}

Array access operators are provided to access each bit position:

   for ( int i{} ; i < v.size() ; i++ )
   {
      char sep{ ' ' } ;
      if ( ((i+1) % 8) == 0 )
      {
         sep = '\n' ;
      }

      std::cout << v[i] << sep ;
   }
   std::cout << '\n' ;

There is no built-in range-for support for this class. I was able to add it using a wrapper class

template <int N>
struct iter ;

template <int N>
struct mybits : public std::bitset<N>
{
   using T = std::bitset<N> ;

   using T::T ;
   using T::size ;

   inline iter<N> begin( ) ;

   inline iter<N> end( ) ;
} ;

and a helper iterator

template <int N>
struct iter
{
   unsigned pos{} ;
   const mybits<N> & b ;

   iter( const mybits<N> & bits, unsigned p = {} ) : pos{p}, b{bits} {}

   const iter & operator++()
   {
      pos++ ;

      return *this ;
   }

   bool operator != ( const iter & i ) const
   { 
      return pos != i.pos ;
   }

   int operator*() const
   { 
      return b[ pos ] ;
   }
} ;

plus the begin and end function bodies required for the loop

template <int N>
inline iter<N> mybits<N>::begin( )
{
   return iter<N>( *this ) ;
}

template <int N>
inline iter<N> mybits<N>::end( )
{
   return iter<N>( *this, size() ) ;
}

I'm not sure what the rationale is for not including such range-for support, when std::vector<bool> has exactly that. vector<bool> is a vector specialization that is also supposed to be compact, but, unlike bitset, allows for a variable sized bit array.

bitset also has a number of handy conversions that vector<bool> does not (to and from std::string, and to unsigned integer types).
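
For example, a minimal sketch of those conversions:

#include <bitset>
#include <iostream>
#include <string>

int main()
{
   // construct from a string of '0'/'1' characters:
   std::bitset<8> b{ std::string{ "00001010" } } ;

   // ... and convert back to a string, or to an unsigned integer:
   std::cout << b.to_string() << '\n' ;   // 00001010
   std::cout << b.to_ulong() << '\n' ;    // 10

   return 0 ;
}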

tuple

The std::tuple type generalizes std::pair, allowing for easy structures of N different types.

I saw that there is a std::tie helper that lets a tuple return behave very much like a perl array assignment. Such an assignment looks like

#!/usr/bin/perl

my ($a, $b, $c) = foo() ;

printf( "%0.1f $b $c\n", $a ) ;

exit 0 ;

sub foo
{
   return (1.0, "blah", 3) ;
}

A similar C++ equivalent is more verbose

#include <tuple>
#include <stdio.h>

using T = std::tuple<float, const char *, int> ;

T foo()
{
   return std::make_tuple( 1.0, "blah", 3 ) ;
}

int main()
{
   float f ;
   const char * k ;
   int i ;

   std::tie( f, k, i ) = foo() ;

   printf("%f %s %d\n", f, k, i ) ;

   return 0 ;
}

I was curious how the code differed between accepting a tuple return with tie into separate variables (as above) and accepting a plain structure return

struct S
{
   float f ;
   const char * s ;
   int i ;
} ;

S bar()
{
   return { 1.0, "blah", 3 } ;
}

In each case, using -O2 and the Xcode 7 compiler (clang), a function making a printf call like the one above ends up looking pretty much the same:

$ gobjdump -d --reloc -C --no-show-raw-insn u.o 
...

0000000000000110 <h()>:
 110:   push   %rbp
 111:   mov    %rsp,%rbp
 114:   sub    $0x20,%rsp
 118:   lea    -0x18(%rbp),%rdi
 11c:   callq  121 <h()+0x11>
                        11d: BRANCH32   foo()
 121:   mov    -0x10(%rbp),%rsi
 125:   mov    -0x8(%rbp),%edx
 128:   movss  -0x18(%rbp),%xmm0
 12d:   cvtss2sd %xmm0,%xmm0
 131:   lea    0xd(%rip),%rdi        # 145 <h()+0x35>
                        134: DISP32     .cstring-0x145
 138:   mov    $0x1,%al
 13a:   callq  13f <h()+0x2f>
                        13b: BRANCH32   printf
 13f:   add    $0x20,%rsp
 143:   pop    %rbp
 144:   retq   

The generated code is pretty much dominated by the stack pushing required for the printf call. I used printf here instead of std::cout because the generated code for std::cout is so crappy looking (and verbose).

shared_ptr

Reading the section on shared_ptr, it wasn't obvious whether it was a thread safe interface, and I wondered if some sort of specialization was required to make the reference counting thread safe. It appears that thread safety of the reference counting is built in.

This can also be seen in the debugger (assuming the gcc libstdc++ implementation is representative)

Breakpoint 1, main () at sharedptr.cc:33
33    std::shared_ptr<T> p = std::make_shared<T>() ;
Missing separate debuginfos, use: debuginfo-install libgcc-4.8.5-4.el7.x86_64 libstdc++-4.8.5-4.el7.x86_64
(gdb) n
35    foo( p ) ;
(gdb) s
std::shared_ptr<T>::shared_ptr (this=0x7fffffffe060) at /usr/include/c++/4.8.2/bits/shared_ptr.h:103
103         shared_ptr(const shared_ptr&) noexcept = default;
(gdb) s
std::__shared_ptr<T, (__gnu_cxx::_Lock_policy)2>::__shared_ptr (this=0x7fffffffe060) at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:779
779         __shared_ptr(const __shared_ptr&) noexcept = default;
(gdb) s
std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count (this=0x7fffffffe068, __r=...)
    at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:550
550         : _M_pi(__r._M_pi)
(gdb) s
552      if (_M_pi != 0)
(gdb) s
553        _M_pi->_M_add_ref_copy();
(gdb) s
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_add_ref_copy (this=0x607010) at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:131
131         { __gnu_cxx::__atomic_add_dispatch(&_M_use_count, 1); }

This was looking at a call of the following form

using Tp = std::shared_ptr<T> ;

void foo( Tp p ) ;

int main()
{
   std::shared_ptr<T> p = std::make_shared<T>() ;

   foo( p ) ;

   return 0 ;
}
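
As a quick sanity check of that (just a sketch, not from the original debugging session), handing copies of a shared_ptr to a few threads and letting those copies go out of scope concurrently leaves the use count consistent:

#include <iostream>
#include <memory>
#include <thread>
#include <vector>

int main()
{
   auto p = std::make_shared<int>( 42 ) ;

   std::vector<std::thread> threads ;

   for ( int i{0} ; i < 4 ; i++ )
   {
      // each thread gets its own copy of the shared_ptr; the copies bump
      // the (atomic) use count, and drop it again when they go out of scope
      threads.push_back( std::thread{ [p]{ auto local = p ; (void)local ; } } ) ;
   }

   for ( auto & t : threads )
   {
      t.join() ;
   }

   std::cout << p.use_count() << '\n' ;   // 1: only the copy in main remains

   return 0 ;
}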

Notes on C++11 and C++14 from scientific computing for physicists

May 1, 2016 C/C++ development and debugging.

I recently wrapped up all the programming assignments for PHY1610, Scientific Computing for Physicists.

In all the assignments, we were required to compile with either

-std=c++11

or

-std=c++14

It’s possible to use those options and still program using the older C++98 syntax, but I also used this as an opportunity to learn some new style C++.

With the caveat that we were provided with boilerplate code for a number of assignments, there was a non-trivial amount of code written for this course:

$ cloc `cat f` 2>&1 | tee o
     186 text files.
     177 unique files.                                          
       4 files ignored.

http://cloc.sourceforge.net v 1.60  T=0.88 s (197.6 files/s, 16868.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C++                            111           1710           1159           7317
C/C++ Header                    62            819           1525           2237
-------------------------------------------------------------------------------
SUM:                           173           2529           2684           9554
-------------------------------------------------------------------------------

A lot of this code involved calling into external libraries (fftw3, cblas, lapack, gsl, netcdf, MPI, silo, boost exceptions, boost unittest, …) and was pretty fun to write.

Looking through my submissions, here are some of the newer language features that ended up in my code. Keep in mind that new for me is relative to the C++ language features that I was able to use in DB2 code, which was restricted to the features made available by the very oldest compiler we were using across all platform offerings.

Using statements

I had only seen using statements for namespace selection, as in

using namespace std ;

This is, however, a more general construct, and also allows for what is effectively a scope limited typedef with a more natural syntax. Example:

using carray = rarray<std::complex<double>, 1> ;

Compare this to

typedef rarray<std::complex<double>, 1> carray ;

With the using syntax, the beginner programmer's issue of remembering the order of the type and the new name in a typedef statement is obliterated.

I got quite used to using using by the end of the course.

Testing language levels

The following macros were helpful when experimenting with different language levels:

#if defined __cplusplus && (__cplusplus >= 201103L)
   #define HAVE_CPLUSPLUS_11
#endif

#if defined __cplusplus && (__cplusplus >= 201402L)
   #define HAVE_CPLUSPLUS_14
#endif
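
For example, these could be used to select a C++14-only feature (twice here is just a hypothetical helper for illustration):

// C++14 can deduce the return type; the C++11 build spells it out
#if defined HAVE_CPLUSPLUS_14
   auto twice( int x ) { return 2 * x ; }
#else
   int twice( int x ) { return 2 * x ; }
#endif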

enum class

C++11 introduces an ‘enum class’, different from an enum. For example, instead of writing:

/**
   interval and derivative solver methods supplied by gsl
 */
enum solver
{
   bisection,
   falsepos,
   brent,
   newton,
   secant,
   steffenson
} ;

you would write:

/**
   interval and derivative solver methods supplied by gsl
 */
enum class solver
{
   bisection,
   falsepos,
   brent,
   newton,
   secant,
   steffenson
} ;

The benefit of this compared to the non-class enum is that the enumeration names are not in the global scope. You would write

void foo( const solver s )
{
   if ( s == solver::falsepos )
   {
      // ...
   }
}

not

void foo( const solver s )
{
   if ( s == falsepos )
   {
      // ...
   }
}

This nicely avoids namespace clashes.

That is not the only benefit of C++11 enums. C++11 enums can also be forward referenced, provided the underlying type of the enum is also specified.

If you have ever worked on code that is massively coupled and interdependent (such as DB2), you have seen places where piles of headers have to get dragged in for enum bodies, because it is not possible to forward reference an enum portably. This is a very nice feature!

A simple example of a forward declared C++11 enum is:

enum solver : int ;
void foo( const solver s ) ;

enum solver : int
{
  x = 0, y = 1
} ;

Or, using the non-global enum class syntax:

enum class what : int ;
void foo( const what s ) ;

enum class what : int
{
  x = 0, y = 1
} ;

I didn’t actually use enum classes for enum forward referencing in my phy1610 assignments, because they were too simple to require that.

There is huge potential for using enums with explicit underlying types in DB2 code, and I expect that is also true for many other huge scale C++ codebases. The fact that this feature does not appear to be tied to a requirement to also use ‘enum class’ is very nice for transforming legacy code. I left IBM before compilers that allowed this were deployed on all our platforms, but I can imagine there will be some huge build time savings once C++11 compilers are uniformly available for DB2 code (and the code is ported to compile with C++11 enabled on all platforms).

As a side note, the underlying type specification is quite nice even when it is not being used for forward referencing. I used it for return codes from main, which have to fit within one byte (i.e. within the waitpid wait-status byte). For example:

enum class RETURNCODES : unsigned char
{
    SUCCESS       ///< exit code for successful execution
   ,HELP          ///< exit code when -help (or a bad option) is supplied
   ,PARSE_ERROR   ///< exit code if there's a parse error
   ,EXCEPTION     ///< exit code if there's an unexpected exception thrown
} ;
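
One consequence of enum class worth noting here: the enumerators do not convert implicitly to int, so returning one of these from main requires an explicit cast. A minimal sketch:

int main()
{
   // ... parse options, run ...

   // enum class values do not convert implicitly, so cast explicitly:
   return static_cast<int>( RETURNCODES::SUCCESS ) ;
}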

Uniform initialization

A new initialization paradigm is available in C++11. Instead of using constructor syntax for initialization, as in

/**
   Input parameters for gsl solver iteration.
 */
struct iterationParameters
{
   const Uint     m_max_iter ;  ///< Maximum number of iterations before giving up.
   const double   m_abserr ;    ///< the absolute error criteria for convergence.
   const double   m_relerr ;    ///< the relative error criteria for convergence.
   const bool     m_verbose ;   ///< verbose output

   iterationParameters( const Uint     max_iter,
                        const double   abserr,
                        const double   relerr,
                        const bool     verbose ) :
         m_max_iter(max_iter),
         m_abserr(abserr),
         m_relerr(relerr),
         m_verbose(verbose)
   {
   }
} ;

one could write

/**
   Input parameters for gsl solver iteration.
 */
struct iterationParameters
{
   const Uint     m_max_iter ;  ///< Maximum number of iterations before giving up.
   const double   m_abserr ;    ///< the absolute error criteria for convergence.
   const double   m_relerr ;    ///< the relative error criteria for convergence.
   const bool     m_verbose ;   ///< verbose output

   iterationParameters( const Uint     max_iter,
                        const double   abserr,
                        const double   relerr,
                        const bool     verbose ) :
         m_max_iter{max_iter},
         m_abserr{abserr},
         m_relerr{relerr},
         m_verbose{verbose}
   {
   }
} ;

This is a little foreign looking and it is easy to wonder what the advantage is. One of the advantages is that this syntax can be used for container initialization. For example, instead of

std::vector<int> v ;
v.push_back( 1 ) ;
v.push_back( 2 ) ;
v.push_back( 3 ) ;

you can just do

std::vector<int> v{ 1, 2, 3 } ;

This is called uniform initialization, since this mechanism was extended to basic types as well. For example, instead of initializing an array with an assignment operator, as in

   constexpr struct option long_options[] = {
     { "help",   0, NULL, 'h' },
     { "number", 1, NULL, 'n' },
     { "lower",  1, NULL, 'l' },
     { "upper",  1, NULL, 'u' },
     { NULL,     0, NULL, 0   }
   } ;

you can write

   constexpr struct option long_options[]{
     { "help",   0, NULL, 'h' },
     { "number", 1, NULL, 'n' },
     { "lower",  1, NULL, 'l' },
     { "upper",  1, NULL, 'u' },
     { NULL,     0, NULL, 0   }
   } ;

Instead of just providing a special mechanism to initialize container class objects, the language was extended with a new initialization syntax that can be used to initialize those container objects and everything else.

However, this is not just a different syntax for initialization, because the types have to match strictly. For example, this initialization of a couple of stack variables did not compile for me

   int more{3} ;
   float x1{-2.0} ;
   size_t size{meta.numThreads*20} ;

What is required is one of

   float x1{-2.0f} ;

   // or

   double x1{-2.0} ;

Additionally, suppose that meta.numThreads has int type. Such a uniform initialization attempt will not compile, since the product is not of type size_t. That line can be written as:

   size_t size{(size_t)meta.numThreads*20} ;

   // or:
   size_t size = meta.numThreads*20 ;

I found uniform initialization hard on the eyes because it looked so foreign, but did eventually get used to it, with one exception. It seems to me that a longer initialization expression like the following is harder to read

double x{ midpoint( x1, x1 + intervalWidth ) } ;

than

double x = midpoint( x1, x1 + intervalWidth ) ;

There were also cases with -std=c++11 where uniform init and auto variables (see below) did not interact well, producing errors later when my auto-uniform-init’ed variables got interpreted as initializer lists instead of the types I desired. All such errors seemed to go away with -std=c++14, which seemed to generally provide a more stable language environment.
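
A tiny illustration of that deduction difference (the static_assert just documents the two possible outcomes; which one you get depends on the language level and on whether the compiler applies the later N3922 deduction rule):

#include <initializer_list>
#include <type_traits>

void deductionExample()
{
   auto x{ 7 } ;

   // strict C++11 deduces std::initializer_list<int> here; compilers
   // implementing the C++14/N3922 rule deduce plain int, which is what I wanted
   static_assert( std::is_same<decltype( x ), int>::value
                     || std::is_same<decltype( x ), std::initializer_list<int>>::value,
                  "deduction depends on language level and compiler" ) ;
}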

New string to integer functions

The C++11 standard library has new string to integer functions (http://en.cppreference.com/w/cpp/string/basic_string/stoul) which are more convenient than the strtoul functions. These throw exceptions on error, but still allow collection of the error position if you want it.

#include <cstdint>

using Uint = std::uintptr_t ;

/**
   Register sized signed integer type for loop counters and so forth.
 */
using Sint = std::intptr_t ;

/**
   wrapper for stoul to match the type of Uint above.
 */
#if defined _WIN64
   #define strToUint std::stoull
#else
   #define strToUint std::stoul
#endif
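
A sketch of using that wrapper (parseCount is a hypothetical helper), showing both the position out-parameter and the exceptions:

#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <string>

Uint parseCount( const std::string & s )
{
   try
   {
      std::size_t pos{} ;
      Uint v{ strToUint( s, &pos ) } ;

      if ( pos != s.size() )
      {
         std::cout << "ignoring trailing characters at offset " << pos << '\n' ;
      }

      return v ;
   }
   catch ( std::invalid_argument & )
   {
      return 0 ;   // no digits at all
   }
   catch ( std::out_of_range & )
   {
      return 0 ;   // value too big for the target type
   }
}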

There are other similar functions, like std::stod for string to double conversion. There are also converters in the opposite direction, such as std::to_string, for converting numeric types to strings. For example:

const std::string filename{ fileBaseName + "_" + std::to_string( rank ) + ".out" } ;

Static assertions.

DB2 had a static assertion implementation (OSS_CTASSERT, or sqlzStaticAssert?) but there is now one in the standard. Here’s an example using the Uint “typedef” above:

/**
   Force a compilation error if size assumptions are invalid.
 */
inline void strToUintAssumptions()
{
#if defined _WIN64
   static_assert( sizeof(Uint) == sizeof(unsigned long long), "bad assumptions about sizeof uintptr_t, long long" ) ;
#else
   static_assert( sizeof(Uint) == sizeof(unsigned long), "bad assumptions about sizeof uintptr_t, long" ) ;
#endif
}

The advantage of static_assert over a typedef (negative sized array) implementation like DB2 had is that compilers are likely to produce a better error message when it fails (instead of something unintuitive like “reference of array location at offset -1 is invalid”).
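
For comparison, the old-school trick (a sketch of the general technique, not the actual DB2 macro) relies on producing an array type with a negative size when the condition is false:

// compile error when expr is false: the typedef'ed array gets size -1
#define OLD_STYLE_CTASSERT( expr, name ) \
   typedef char name[ (expr) ? 1 : -1 ]

// mirrors the non-Windows branch of strToUintAssumptions() above
OLD_STYLE_CTASSERT( sizeof(Uint) == sizeof(unsigned long), Uint_size_assumptions ) ;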

Boost exceptions.

While not part of C++11, the boost exception classes were available for my assignments. These are pretty easy to use. As setup, you define some helper classes which really just provide a name for the exception, plus a name for any of the data that you'd like to throw along with the underlying exception. That could look like the following, for example:

#include <boost/exception/exception.hpp>
#include <boost/exception/info.hpp>

struct error : virtual std::exception, virtual boost::exception { } ;
struct regex_match_error : virtual error { } ;

struct tag_match_input ;
typedef boost::error_info<tag_match_input,std::string> match_info ;

struct tag_match_re ;
typedef boost::error_info<tag_match_re,std::string> re_info ;

struct tag_intdata ;
typedef boost::error_info<tag_intdata,long> intdata_info ;

Such classes would be best in a namespace since they are generic, but I didn’t bother for all these assignments.
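
Throwing one of these, with the associated data attached, looks something like the following (matchOrThrow is a hypothetical helper; the exception and error_info types are the ones defined above):

#include <boost/throw_exception.hpp>
#include <string>

void matchOrThrow( const std::string & input, const std::string & pattern, bool matched )
{
   if ( !matched )
   {
      // attach both the input text and the pattern to the thrown exception:
      BOOST_THROW_EXCEPTION(
            regex_match_error()
               << match_info( input )
               << re_info( pattern ) ) ;
   }
}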

I used the boost exceptions for a couple of things. One, of course, was throwing exceptions; the other was as an assert-with-data backend:

#define ASSERT_NO_ERROR (static_cast<void>(0))
#ifdef NDEBUG
   #define ASSERT_DATA_INT( expr, v1 )          ASSERT_NO_ERROR
   #define ASSERT_DATA_INT_INT( expr, v1, v2 )  ASSERT_NO_ERROR
#else
   #define ASSERT_DATA_INT( expr, v1 )          \
      ( (expr)                                  \
      ? ASSERT_NO_ERROR                         \
      : BOOST_THROW_EXCEPTION(                  \
            assert_error()                      \
               << intdata_info( v1 ) ) )
//...
#endif

This allowed me to assert with data as in

ASSERT_DATA_INT( sz > 0, sz ) ;
ASSERT_DATA_INT_INT( taskNumber < numTasks, taskNumber, numTasks ) ;

This way I get not just the abort from the assert, but also the underlying reason, and I can dump those to the console with no more effort than catching any other boost exception:

//...
#include <boost/exception/diagnostic_information.hpp>

int main( int argc, char ** argv )
{
   try {
      auto expected{7} ;

      ASSERT_DATA_INT_INT( argc == expected, argc, expected ) ;
   }
   catch ( boost::exception & e )
   {
      auto s { boost::diagnostic_information( e ) } ;
      std::cout << s << std::endl ;
      // ...

This generates something like:

$ ./bassert
bassert.cc(11): Throw in function int main(int, char**)
Dynamic exception type: boost::exception_detail::clone_impl<assert_error>
std::exception::what: std::exception
[tag_intdata*] = 1
[tag_intdata2*] = 7

I wondered how expensive constructing such an exception object is. When pre-processed, the assertion above expands to

      ( (argc == expected) ? (static_cast<void>(0)) :
     ::boost::exception_detail::throw_exception_(
     assert_error() << intdata_info( argc ) << intdata2_info( expected )
     ,__PRETTY_FUNCTION__,"bassert.cc",11)
     ) ;

Stepping through this in the debugger I see some interesting stuff, but it includes heap (i.e. new) allocations. This means that this sort of Boost exception may malfunction very badly in out of memory conditions, which is exactly when it is conceivable that one would want to throw an exception.

The runtime cost isn't cheap either when the assert is triggered. I see four function calls even before the throw is processed:

assert_error const& boost::exception_detail::set_info(assert_error const&, boost::error_info const&)-0x4
assert_error const& boost::exception_detail::set_info(assert_error const&, boost::error_info const&)-0x4
assert_error::assert_error(assert_error const&)-0x4
void boost::throw_exception(assert_error const&)-0x4

and the total instruction count goes up to ~140 from 4 for the NDEBUG case (with optimization). Only 5 instructions get executed in the happy codepath. This is what we want in exception handling code: very cheap when it’s not triggered, with all the expense moved to the unhappy codepath.

The negative side effect of this sort of error handling looks like a lot of instruction cache bloat.

Boost test

The boost test library is also not a C++11 feature, but it was new to me, and learned in this course. Here's a fragment of how it is used

#define BOOST_TEST_MAIN
#define BOOST_TEST_MODULE test

#define BOOST_TEST_DYN_LINK

#include <boost/test/unit_test.hpp>
#include <vector>

BOOST_AUTO_TEST_CASE( testExample )
{
   std::vector<int> v(3) ;

   BOOST_REQUIRE( 3 == v.size() ) ;
   BOOST_REQUIRE_MESSAGE( 3 == v.size(), "size: " + std::to_string( v.size() ) ) ;
}

A boost test after being run looks like:

$ ./test --report_level=detailed --log_level=all
Running 1 test case...
Entering test module "test"
test.cc:9: Entering test case "testExample"
test.cc:13: info: check 3 == v.size() has passed
test.cc:14: info: check 'size: 3' has passed
test.cc:9: Leaving test case "testExample"; testing time: 87us
Leaving test module "test"; testing time: 103us

Test module "test" has passed with:
  1 test case out of 1 passed
  2 assertions out of 2 passed

  Test case "testExample" has passed with:
    2 assertions out of 2 passed

Range for and auto type

The range for is much like perl’s foreach. For example, in perl you could write

my @a = ( 1, 2, 3 ) ;
foreach my $v ( @a )
{
   foo( $v ) ;
}

An equivalent C++ loop like this can be as simple as

std::vector<int> a{1, 2, 3 } ;
for ( auto v : a )
{
   foo( v ) ;
}

You can also declare the list of items to iterate over inline, as in

using iocfg = iohandler::cfg ;
for ( auto c : { iocfg::graphics, iocfg::ascii, iocfg::netcdf, iocfg::noop } )
{
   // ...
}

Observe that, just like perl, C++ no longer requires an explicit type for the loop variable; it is deduced when auto is specified. It is still strongly typed, but you can write code that doesn't explicitly depend on that type. I see lots of benefits to this, as you gain additional freedom to change type definitions without having to adjust everything that uses them.

I can imagine that it could potentially get confusing if all variables in a function get declared auto, but did not find that to be the case for any of the code I produced in these assignments.

One gotcha with auto that I did hit was that care is required with computed expressions. I'd used auto in one case and the result got stored as a large unsigned value instead of the signed value I wanted (i.e. negative values got stored in unsigned auto variables). In that case I switched to an explicit type. Extensive use of auto may end up requiring more unit and other testing if the types picked are not those that were desired.
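
A sketch of the sort of thing that bit me (hypothetical names; the real case was buried in assignment code):

#include <vector>

long deltaSize( const std::vector<int> & a, const std::vector<int> & b )
{
   // size() returns an unsigned type, so this difference is unsigned: when
   // b is larger than a it wraps to a huge positive value instead of going negative
   auto d = a.size() - b.size() ;
   (void)d ;

   // an explicit signed type is what was actually wanted:
   long signedDelta = static_cast<long>( a.size() ) - static_cast<long>( b.size() ) ;

   return signedDelta ;
}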

std::chrono (ticks.h)

This is a nice portability layer for fine grain time measurements, allowing you to avoid platform specific functions like gettimeofday, and also avoid any composition of the seconds/subseconds data that many such interfaces provide.

Here’s a fragment of a class that allows interval time measurements and subsequent conversion:

class ticks
{
   using clock      = std::chrono::high_resolution_clock ;

   clock::time_point m_sample ;
public:

   static inline ticks sample()
   {
      ticks t ;
      t.m_sample = clock::now() ;

      return t ;
   }

   using duration   = decltype( m_sample - m_sample ) ;

   friend duration operator -( const ticks & a, const ticks & b ) ;
} ;

inline ticks::duration operator -( const ticks & a, const ticks & b )
{
   return a.m_sample - b.m_sample ;
}

inline auto durationToMicroseconds( const ticks::duration & diff )
{
   return std::chrono::duration_cast<std::chrono::microseconds>( diff ).count() ;
}

Note that the last function is using C++14 return type deduction. That does not work in C++11 without an explicit trailing return type, requiring:

inline auto durationToMicroseconds( const ticks::duration & diff )
-> decltype(std::chrono::duration_cast<std::chrono::microseconds>( diff ).count())
{
   return std::chrono::duration_cast<std::chrono::microseconds>( diff ).count() ;
}

which is very ugly.
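
Using the class then looks something like this (doWork is just a hypothetical stand-in for whatever is being measured):

#include <iostream>

void doWork() ;

void timeSomething()
{
   auto t0 = ticks::sample() ;

   doWork() ;

   auto t1 = ticks::sample() ;

   std::cout << durationToMicroseconds( t1 - t0 ) << " us\n" ;
}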

Random numbers

#include <random>

/**
   A random number generator that produces integer uniformly
   distributed in the interval:

   [a, a + delta N]

   with separation delta between values returned.
 */
template <int a, int delta, int N>
class RandomIntegers
{
   std::random_device                        m_rd ;
   //std::default_random_engine                m_engine ;
   std::mt19937                              m_engine ;
   std::uniform_int_distribution<unsigned>   m_uniform ;

public:
   /** construct a uniform random number generator for the specified range */
   RandomIntegers( )
      : m_rd()
      , m_engine( m_rd() )
      , m_uniform( 0, N )
   {
      static_assert( N > 0, "Integer N > 0 expected" ) ;
      static_assert( delta > 0, "Integer delta > 0 expected" ) ;
   }

   /**
      return a uniform random number sample from {a, a + delta, ..., a + delta N}
    */
   int sample()
   {
      auto p = m_uniform( m_engine ) ;

      return a + p * delta ;
   }
} ;
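
Sampling from this class looks like the following (the template arguments here are arbitrary, chosen to give samples from {1, 3, ..., 21}):

#include <iostream>

int main()
{
   // a = 1, delta = 2, N = 10: samples come from {1, 3, 5, ..., 21}
   RandomIntegers<1, 2, 10> r ;

   for ( int i{0} ; i < 5 ; i++ )
   {
      std::cout << r.sample() << '\n' ;
   }

   return 0 ;
}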

constexpr

Instead of using #defines, one can use fully typed declarations that are still compile-time constants, using the constexpr keyword. An example

constexpr size_t N{3} ;
std::array<int, N> t ;
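
constexpr also applies to functions, and the result of a constexpr function can be used wherever a compile time constant is required (a minimal sketch):

#include <array>
#include <cstddef>

constexpr std::size_t square( std::size_t x )
{
   return x * x ;
}

// usable as an array dimension because square( 3 ) is evaluated at compile time:
std::array<double, square( 3 )> grid{} ;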

nullptr

The days of not knowing which header defines NULL, and of dealing with conflicting definitions, are over. Instead of NULL we now have a built-in language construct, nullptr.
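
One concrete benefit is overload resolution: nullptr has its own type (std::nullptr_t), so it can't accidentally pick an integer overload the way a 0-valued NULL can. A small sketch:

#include <cstdio>

void foo( int )    { std::printf( "foo(int)\n" ) ; }
void foo( char * ) { std::printf( "foo(char *)\n" ) ; }

int main()
{
   foo( 0 ) ;         // calls foo(int): 0 is an integer literal
   foo( nullptr ) ;   // unambiguously calls foo(char *)
   // foo( NULL ) ;   // may call foo(int) or be ambiguous, depending on how NULL is defined

   return 0 ;
}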

Lambdas and sort

Custom sorting is really simple in C++ now. Here's an example of a partial sort (sorting the top N elements, leaving the rest unordered). The comparison function no longer has to be a separately declared function; it can be specified inline as a lambda

auto second_greater = [](auto & left, auto & right) { return left.second > right.second ; } ;
std::partial_sort( cvec.begin(),
                   cvec.begin() + N,
                   cvec.end(),
                   second_greater ) ;

The “inline” comparison function here is using C++14 generic lambda syntax. For C++11, the parameter types can't be auto, so something such as the following might be required

auto second_greater = [](const results_pair & left, const results_pair & right) { return left.second > right.second ; } ;

Useful standard helper methods

The standard library has lots of useful utility functions. I’m sure I only scratched the surface discovering some of those. Some I used were:

std::swap( m_sz, other.m_sz ) ;
std::fill( m_storage.begin(), m_storage.end(), v ) ;
std::copy( b.m_storage.begin(), b.m_storage.end(), m_storage.begin() ) ;
r.first  = std::max( l, m_myFirstGlobalElementIndex ) ;
r.second = std::min( u, m_myLastGlobalElementIndex ) ;

I also liked the copysign function, which gives easy access to the sign bit of a float or double without messing around with extracting the bit, or writing explicit predicates:

inline double signof( const double v )
{
   return std::copysign( 1.0, v ) ;
}

Mean and standard deviation were also really easy to calculate. Here's an example that uses a lambda to compute the differences from the mean, on the way to the sum of squared differences:


      m_sum = std::accumulate( v.begin(), v.end(), 0.0 ) ;
      m_mean = m_sum / v.size() ;
      double mean = m_mean ; // for lambda capture

      std::vector<double> diff( v.size() ) ;

      std::transform( v.begin(), v.end(), diff.begin(), [mean](double x) { return x - mean; } ) ;

      m_sq_sum = std::inner_product( diff.begin(), diff.end(), diff.begin(), 0.0 ) ;

decltype

Attempting to mix auto with g++’s ‘-Wall -Werror’ causes some trouble. For example, this doesn’t work

void foo ( const size_t size )
{
   for ( auto i{0} ; i < size ; i++ )
   {
      // ...
   }
}

This doesn't compile, since the i < size comparison generates signed vs. unsigned comparison warnings. There are a few ways to fix this.

   // specify the type explicitly:
   for ( size_t i{0} ; i < size ; i++ )

   // let the compiler use the type of the size variable:
   for ( decltype(size) i{0} ; i < size ; i++ )

The decltype method is probably of more use in template code. For non-template code, I found that explicitly specifying the type was more readable.

std::valarray (myrarray.h)

The standard library has a vectorized array construct, std::valarray, but I was disappointed with the quality of the generated code that I observed; it turned out to be faster not to use it. For example:

void SineCosineVecOps( std::valarray<float> & s, std::valarray<float> & c, const std::valarray<float> & v )
{
   s = std::sin( v ) ;
   c = std::cos( v ) ;
}

void SineCosineManOps( std::valarray<float> & s, std::valarray<float> & c, const std::valarray<float> & v )
{
   for ( Uint i{0} ; i < ASIZE ; i++ )
   {  
      float theta = v[i] ;

      s[i] = std::sin( theta ) ;
      c[i] = std::cos( theta ) ;
   }
}

When run on a 300 element array, the valarray vector-assignment version executed close to 1.5x slower and used close to 3x the instructions (with optimization)!

Perhaps other compilers do better with valarray. g++ 5.3 is certainly not worth using with that container type.