Category Archives: STL


Process Status Analysis – the first steps

I am pleased to announce my first cross-platform and open source project, the Process Status Analysis tool, available on GitHub.

The Process Status Analysis (psa) version 0.2 is available for Windows and Linux (Debian derived / Ubuntu tested) Operating Systems.
Download: psa for Ubuntu Linux x64 (4906 downloads)
Download: psa for Ubuntu Linux x86 (4152 downloads)
Download: psa.exe for Windows x64 (4115 downloads)
Download: psa.exe for Win32 (4083 downloads)

You may wonder why I did it or what new things it brings. Well, I did it for fun, in my spare time, and I will continue improving it whenever I find time.

The project is written in modern C++, using idioms from the C++1x standards. Even though it started as a Windows-only C++ project, over the past days I managed to port it to Linux using Visual Studio 2017's Linux project templates and a connection via SSH.
In general, the source code base is the same, differing only in OS-specific parts.

In case you want to find out more about developing C++ Linux projects from the best development tool (imho), Visual Studio, you can find more information on the Visual Studio development team blog.

Related to this psa project, the Linux version requires the libprocps4-dev library in order to build.
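On a Debian-derived distribution, that dependency can typically be installed with apt (assuming the package is available in your repositories):

sudo apt-get install libprocps4-dev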

The main reason for starting this project was that I wanted to know the total amount of memory used by my Chrome browser. I knew it uses a lot of resources, but not that much… 🙂

Even though my preferred process analysis tool, Process Hacker, offers a lot of process administration possibilities, it didn't provide what I wanted, so I decided to have some fun.

Sample – Google Chrome processes in the Process Hacker tool

So, what I achieved with psa.exe was something like:

C:\Windows\system32> psa -o chrome
PID [896] chrome.exe 237.5234 MB
PID [1496] chrome.exe 87.6875 MB
PID [2388] chrome.exe 166.5000 MB
PID [5860] chrome.exe 3.1211 MB
PID [7336] chrome.exe 273.2188 MB
PID [8444] chrome.exe 68.0508 MB
PID [8624] chrome.exe 63.3945 MB
PID [9180] chrome.exe 296.9766 MB
PID [10600] chrome.exe 292.6289 MB
PID [12352] chrome.exe 182.5977 MB
PID [13688] chrome.exe 2.3555 MB
PID [14052] chrome.exe 73.1875 MB
PID [16200] chrome.exe 211.9805 MB
PID [17284] chrome.exe 55.7148 MB
PID [18036] chrome.exe 208.6680 MB
PID [19012] chrome.exe 457.2305 MB
PID [19312] chrome.exe 143.4766 MB
-----------------------------------
Total used memory: 2824.31 MB

A bit too much, in my humble opinion…
The features this tool offers include:

Get information about all processes loaded in memory

In case you want a snapshot of all the processes loaded in the OS's memory, you can get it with:

psa -a

Get the memory used by a specific process

With psa you can get the memory used by a specific process by passing the -o parameter followed by the process name, or at least part of its name.

psa -o chrome             // find how much memory uses your Chrome!   o_O

Currently, there is no '*' wildcard matching, but it is in progress.
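For the curious, this is roughly how such a per-name memory sum could be computed on Windows with the PSAPI functions. This is only an illustration, not psa's actual implementation; the function name and the choice of the working-set counter are my assumptions (link with psapi.lib).

// Hypothetical sketch, not psa's real code: sum the working-set size of all
// processes whose image name contains a given substring.
#include <windows.h>
#include <psapi.h>
#include <cstring>

double TotalWorkingSetMB(const char* namePart) {
  DWORD pids[4096] = {0}, bytesReturned = 0;
  if (!EnumProcesses(pids, sizeof(pids), &bytesReturned))
    return 0.0;

  SIZE_T total = 0;
  for (DWORD i = 0; i < bytesReturned / sizeof(DWORD); ++i) {
    HANDLE hProc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                               FALSE, pids[i]);
    if (!hProc)
      continue;

    char image[MAX_PATH] = {0};
    PROCESS_MEMORY_COUNTERS pmc = {0};
    if (GetModuleBaseNameA(hProc, NULL, image, MAX_PATH) &&
        strstr(image, namePart) &&
        GetProcessMemoryInfo(hProc, &pmc, sizeof(pmc))) {
      total += pmc.WorkingSetSize;  // bytes of physical memory in use
    }
    CloseHandle(hProc);
  }
  return total / (1024.0 * 1024.0);
}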


Print processes tree

Since the processes' data is stored in a generic tree I wrote, I decided to also print the processes as a tree, similar to what tree.exe does on Windows or pstree and htop do on Linux (a minimal sketch of such a printing routine follows the commands below).

./psa -t
./psa -t 1034
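Here is the minimal printing sketch mentioned above. It assumes a simplified parent-PID to children map, which is not psa's actual internal layout; it only illustrates the recursive, indent-per-depth idea.

// Simplified illustration (assumed data layout): print a process tree from a
// parent-PID -> children map, indenting two spaces per depth level.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct ProcInfo { int pid; std::string name; };
using ProcTree = std::map<int, std::vector<ProcInfo>>;  // parent pid -> children

void PrintTree(const ProcTree& tree, int pid, const std::string& name, int depth = 0) {
  std::cout << std::string(depth * 2, ' ') << "[" << pid << "] " << name << "\n";
  auto it = tree.find(pid);
  if (it == tree.end()) return;
  for (const auto& child : it->second)
    PrintTree(tree, child.pid, child.name, depth + 1);
}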

Process Status Analysis – partial tree of Windows processes

Top most “expensive” processes

In case you want to see which are the most memory-expensive processes within your operating system, you can get them with:

psa -e 20

or simply psa -e in case you're fine with the top 10 most expensive processes (the default value).
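Under the hood, the "top N" selection can be as simple as a partial sort of a process snapshot by memory usage. A hypothetical sketch follows; the ProcMem structure and function name are mine, not psa's.

// Hypothetical sketch: keep the N processes with the largest memory usage.
#include <algorithm>
#include <string>
#include <vector>

struct ProcMem { int pid; std::string name; double usedMB; };

std::vector<ProcMem> TopExpensive(std::vector<ProcMem> procs, std::size_t n) {
  const std::size_t count = std::min(n, procs.size());
  std::partial_sort(procs.begin(), procs.begin() + count, procs.end(),
                    [](const ProcMem& a, const ProcMem& b) { return a.usedMB > b.usedMB; });
  procs.resize(count);
  return procs;
}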

silviu@ubuntu-dev-server:~/projects/psa-lin/bin/x64/Release$ ./psa -e
Top 10 consuming memory processes
-------------------------------------------
PID        Process Name         RAM Usage
-------------------------------------------
[924]    /usr/lib/policykit-1/polkitd   270.68 MB
[906]    /usr/lib/accountsservice/accounts-daemon       269.43 MB
[842]    /usr/sbin/rsyslogd     250.39 MB
[878]    /usr/lib/snapd/snapd   197.29 MB
[531]    /lib/systemd/systemd-timesyncd         97.97 MB
[1614]    sshd: silviu@pts/0    93.16 MB
[1576]    sshd: silviu [priv]   93.16 MB
[863]    /usr/bin/lxcfs         93.13 MB
[427]    /sbin/lvmetad          92.55 MB
[1034]    /usr/sbin/sshd        63.98 MB
-------------------------------------------


C:\Windows\system32> psa -e 10
Top 10 consuming memory processes
-------------------------------------------
PID        Process Name         RAM Usage
-------------------------------------------
[19012]    chrome.exe           435.67 MB
[17684]    chrome.exe           329.51 MB
[7336]    chrome.exe            259.61 MB
[15576]    devenv.exe           248.20 MB
[896]    chrome.exe             222.85 MB
[2388]    chrome.exe            188.21 MB
[18428]    chrome.exe           184.48 MB
[15760]    chrome.exe           173.82 MB
[488]    Dropbox.exe            162.15 MB
[5488]    googledrivesync.exe   161.11 MB
-------------------------------------------
Total used memory: 2365.61 MB

Redirect output to a file

The information written to standard output can easily be redirected to a file.

c:\> psa -t > windows_processes_tree.txt
# ./psa -t > linux_processes_tree.txt        

Kill process feature

This feature is not implemented yet, but in case we need it, it can be done easily with the existing tools on the target OS (e.g. Task Manager, Process Explorer/Hacker, or pskill.exe on Windows, or the ps + kill combination on Linux).

Feedback and improvements
Any constructive feedback, suggestions, or contributions to improvements are appreciated.
Feel free to add any issue you find, or any wish or suggestion you have, in the GitHub repository's Issues section or here as a comment.

Getting Table’s indexes experiences – workaround

Trying to get table index information from SQL Server 2012, I identified a strange situation within a specific method that I had been using for a long time but that was not behaving as expected in one scenario.

The way that old, inherited method gets index information using the ODBC C API looks like this:

nRetCode = ::SQLStatistics(hstmtAux,
                                    NULL,
                                    0,
                                    NULL,
                                    0,
                                    (TCHAR*)(LPCTSTR)strTempTable,
                                    SQL_NTS,
                                    SQL_INDEX_ALL,
                                    SQL_ENSURE);
if (nRetCode == SQL_SUCCESS || nRetCode == SQL_SUCCESS_WITH_INFO) {
  nRetCode = ::SQLBindCol(hstmtAux, 4, SQL_C_SHORT, &swNonUnique, sizeof(SWORD),
                          &cbNonUnique);
  nRetCode = ::SQLBindCol(hstmtAux, 5, SQL_CHAR, szIdxQualif,
                          sizeof(CHAR) * 130, &cbIdxQualif);
  nRetCode = ::SQLBindCol(hstmtAux, 6, SQL_C_CHAR, szIdxName,
                          sizeof(CHAR) * 130, &cbIdxName);
  nRetCode =
      ::SQLBindCol(hstmtAux, 7, SQL_C_SHORT, &swType, sizeof(SWORD), &cbType);
  nRetCode = ::SQLBindCol(hstmtAux, 8, SQL_C_SHORT, &swSeqInIdx, sizeof(SWORD),
                          &cbSeqInIdx);
  nRetCode = ::SQLBindCol(hstmtAux, 9, SQL_C_CHAR, szIdxColName,
                          sizeof(CHAR) * 130, &cbIdxColName);

  while (bNoFetch || nRetCode == SQL_SUCCESS_WITH_INFO ||
         (nRetCode = ::SQLExtendedFetch(hstmtAux, SQL_FETCH_NEXT, 1, &crow,
                                        &rgfRowStatus)) == SQL_SUCCESS) {
    if (cbIdxName != SQL_NULL_DATA &&
        _tcslen((TCHAR*)szIdxName) > 0) {
        // rest of the code 
    } 
  } // rest of the code 
}

Usually, I got the right information about indexes, but in one situation I encountered a strange behavior. It's about a scenario involving a clustered index. I have a table that contains two indexes defined on some fields, IndexField_1 and IndexField_3, both mapped over int NULL fields. When IndexField_1 is Non-Unique, Non-Clustered and IndexField_3 is the Clustered index, I get the right information.
But if IndexField_1 is the Clustered index and IndexField_3 is Non-Unique, Non-Clustered, I get no information about the IndexField_1 index (e.g. szIdxName and szIdxColName are "" and their length is -1, which means SQL_NULL_DATA). Within the while loop, at the next iteration, I get correct information about the second index, IndexField_3.

Because SQLExtendedFetch() is deprecated, I tried using SQLFetchScroll(), but the behavior is the same as far as my problem is concerned.

I was not sure whether the problem was with SQLStatistics, the bindings, or SQLFetchScroll (they all always return SQL_SUCCESS). It looks like a driver problem when the first index is clustered.
According to the SQLStatistics documentation, if the swType parameter is SQL_TABLE_STAT, the row carries no index or field information. But in this scenario I had no indexes over combined fields.
For the good scenario I observed that my while loop had three iterations, including one with swType = SQL_TABLE_STAT and no information in szIdxName. But for the bad scenario the loop had only two iterations. So it looks like SQLExtendedFetch() is not fetching the last index.

After some googling and research without finding a significant solution, I decided to apply a workaround: avoid the old API and rewrite my method.

So, in order to get table index information, I chose a direct SQL query against the system views sys.tables, sys.indexes, and sys.schemas.

SELECT DISTINCT I.[name] AS IndexName, I.is_unique AS IsUnique,
       I.is_primary_key AS IsPrimaryKey, I.type AS IndexType,
       I.type_desc AS IndexDesc, T.[object_id] AS ObjectID
FROM sys.tables AS T
INNER JOIN sys.indexes AS I ON T.[object_id] = I.[object_id]
       AND T.name = 'myTABLE' AND I.type_desc <> 'HEAP'
INNER JOIN sys.schemas AS S ON T.schema_id = S.schema_id
       AND S.name = 'mySchema'

Because I also wanted information about the fields an index is composed of, I applied a second SQL query:

SELECT AC.Name AS ColumnName
FROM sys.tables AS T
INNER JOIN sys.indexes AS I ON T.[object_id] = I.[object_id]
INNER JOIN sys.index_columns AS IC ON IC.[object_id] = I.[object_id]
       AND IC.[index_id] = I.[index_id]
INNER JOIN sys.all_columns AS AC ON IC.[object_id] = AC.[object_id]
       AND IC.[column_id] = AC.[column_id]
WHERE I.object_id = 'NumericObjectID' AND T.name = 'TableName' AND I.name = 'IndexName'
ORDER BY T.name, I.name

and I collected the data into a container of a structure defined according to the index information I am interested in:

struct SQL_INDEX {
  CString index_name;
  bool is_unique;
  long object_id;
  bool is_primary;
  short index_type;
  CString index_desc;
  std::vector<CString> vectColumns;
};

The last member vectColumns stores information about the columns that are used for a specific index.

Finally, the new method that collects table index information looks like this:

// collect table indexes information
void CFoo::GetIndexesInformation(const CString& strSchema,
                                 const CString& strTable,
                                 std::vector<SQL_INDEX>& vectKeys,
                                 CDatabase* pDBdata) {
  HSTMT hstmt = SQL_NULL_HSTMT;
  SQLRETURN nRetCode = 0;
  CString sqlQuery;

  sqlQuery.Format(
      _T("SELECT DISTINCT I.[name] as IndexName, I.is_unique as IsUnique, I.is_primary_key as IsPrimaryKey, I.type as IndexType, I.type_desc as IndexDesc, T.[object_id] as ObjectID \
FROM sys.tables as T \
INNER JOIN sys.indexes as I on T.[object_id] = I.[object_id] and T.name = '%s' and I.type_desc <> 'HEAP' \
INNER JOIN sys.schemas as S on T.schema_id = S.schema_id and s.name = '%s'"),
      strTable, strSchema);

  if ((nRetCode = ::SQLAllocStmt(pDBdata->m_hdbc, &hstmt)) == SQL_SUCCESS) {
    if (SQL_NULL_HSTMT != hstmt) {
      pDBdata->OnSetOptions(hstmt);

      nRetCode = ::SQLExecDirect(hstmt, (UCHAR*)(LPCTSTR)sqlQuery,
                                 sqlQuery.GetLength());
      if (SQL_SUCCESS == nRetCode) {
        CHAR buffer[128] = {0};
        SQLLEN iLen = 0;
        short int val = 0;
        long obj_id = 0;

        while (::SQLFetch(hstmt) == SQL_SUCCESS) {
          SQL_INDEX ob;

          if (::SQLGetData(hstmt, 1, SQL_C_CHAR, buffer, 128, &iLen) ==
              SQL_SUCCESS)
            ob.index_name = buffer;

          if (::SQLGetData(hstmt, 2, SQL_C_SHORT, &val, sizeof(short int),
                           &iLen) == SQL_SUCCESS)
            ob.is_unique = (1 == val);

          if (::SQLGetData(hstmt, 3, SQL_C_SHORT, &val, sizeof(short int),
                           &iLen) == SQL_SUCCESS)
            ob.is_primary = (1 == val);

          if (::SQLGetData(hstmt, 4, SQL_C_SHORT, &val, sizeof(short int),
                           &iLen) == SQL_SUCCESS)
            ob.index_type = val;

          if (::SQLGetData(hstmt, 5, SQL_C_CHAR, buffer, 128, &iLen) ==
              SQL_SUCCESS)
            ob.index_desc = buffer;

          if (::SQLGetData(hstmt, 6, SQL_C_LONG, &obj_id, sizeof(long),
                           &iLen) == SQL_SUCCESS)
            ob.object_id = obj_id;

          vectKeys.push_back(ob);
        }
      }
    }
  }

  // collect index’s columns/fields information
  for (auto it = vectKeys.begin(); it != vectKeys.end(); ++it) {
    HSTMT hstmt2 = SQL_NULL_HSTMT;
    if ((nRetCode = ::SQLAllocStmt(pDBdata->m_hdbc, &hstmt2)) == SQL_SUCCESS) {
      if (hstmt2 != SQL_NULL_HSTMT) {
        pDBdata->OnSetOptions(hstmt2);

        CString sSQL;
        SQLLEN iLen = 0;

        sSQL.Format(_T("SELECT AC.Name as ColumnName \
FROM sys.tables as T inner join sys.indexes as I on T.[object_id] = I.[object_id] \
INNER JOIN sys.index_columns as IC on IC.[object_id] = I.[object_id] and IC.[index_id] = I.[index_id] \
INNER JOIN sys.all_columns as AC on IC.[object_id] = AC.[object_id] and IC.[column_id] = AC.[column_id] \
WHERE I.object_id = %ld and T.name = '%s' and I.name = '%s' order by T.name, I.name"),
                    it->object_id, strTable, it->index_name);

        nRetCode =
            ::SQLExecDirect(hstmt2, (UCHAR*)(LPCTSTR)sSQL, sSQL.GetLength());
        if (SQL_SUCCESS == nRetCode) {
          CHAR buffer[128] = {0};
          while (::SQLFetch(hstmt2) == SQL_SUCCESS) {
            if (::SQLGetData(hstmt2, 1, SQL_C_CHAR, buffer, 128, &iLen) ==
                SQL_SUCCESS) {
              it->vectColumns.push_back(buffer);
            }
          }
        }
      }
    }
  }
}

In this way I have complete information about the indexes of my tables.

std::vector<SQL_INDEX> vectIndexesSQL;
pFoo->GetIndexesInformation(strSchema, pDBTable->strTblName, vectIndexesSQL, pDataDB);

Conclusion: when the C/C++ API doesn't give you any hope, don't forget that SQL can save you.

Several C++ singleton implementations

This article offers some insight into the singleton design pattern.
The singleton pattern is a design pattern used to implement the mathematical concept of a singleton by restricting the instantiation of a class to one object. The GoF book describes the singleton as: "Ensure a class only has one instance, and provide a global point of access to it."
The Singleton design pattern is not as simple as it appears at first look, and this is proven by the abundance of singleton discussions and implementations. That's why I'm going to present a few implementations, some based on C++11 features (smart pointers and locking primitives such as mutexes). I am starting from, perhaps, the most basic singleton implementation, pointing out its weaknesses, and then gradually adding better implementations.
The basic idea of a singleton class implies using a static private instance, a private constructor and an interface method that returns the static instance.

Version 1
Maybe the most common and simplest approach looks like this:

class simpleSingleton {
  simpleSingleton() {}
  static simpleSingleton* _pInstance;

public:
  ~simpleSingleton() {}
  static simpleSingleton* getInstance() {
    if (!_pInstance) {
      _pInstance = new simpleSingleton();
    }
    return _pInstance;
  }
  void demo() {
    std::cout << "simple singleton # next - your code ..." << std::endl;
  }
};

simpleSingleton* simpleSingleton::_pInstance = nullptr;

Unfortunately, this approach has several issues. Even though the default constructor is private, because the copy constructor and the assignment operator are not declared private, the compiler generates them and the following calls are valid:

// Version 1
simpleSingleton* p = simpleSingleton::getInstance(); // cache instance pointer
p->demo();

// Version 2
simpleSingleton::getInstance()->demo();

simpleSingleton ob2(*p); // copy constructor
ob2.demo();

simpleSingleton ob3 = ob2; // copy constructor
ob3.demo();

So we have to declare the copy constructor and the assignment operator with private visibility.

Version 2 – Scott Meyers' version
In his Effective C++ book, Scott Meyers presents a slightly improved version in which getInstance() returns a reference instead of a pointer, so the problem of who finally deletes the pointer disappears.
One advantage of this solution is that the function-static object is initialized when the control flow first passes its definition.

class otherSingleton {
  static otherSingleton* pInstance;

  otherSingleton() {}

  otherSingleton(const otherSingleton& rs) { pInstance = rs.pInstance; }

  otherSingleton& operator=(const otherSingleton& rs) {
    if (this != &rs) {
      pInstance = rs.pInstance;
    }

    return *this;
  }

  ~otherSingleton() {}

 public:
  static otherSingleton& getInstance() {
    static otherSingleton theInstance;
    pInstance = &theInstance;

    return *pInstance;
  }

  void demo() {
    std::cout << "other singleton # next - your code ..." << std::endl;
  }
};

otherSingleton * otherSingleton::pInstance = nullptr;

The destructor is private in order to prevent clients that hold a pointer to the singleton object from deleting it accidentally. So, this time creating a copy object is not allowed:

otherSingleton ob = *p;
ob.demo();


error C2248: 'otherSingleton::otherSingleton' : cannot access private member declared in class 'otherSingleton'
error C2248: 'otherSingleton::~otherSingleton' : cannot access private member declared in class 'otherSingleton'

but we can still use:

// Version 1
otherSingleton* p = &otherSingleton::getInstance(); // cache instance pointer
p->demo();

// Version 2
otherSingleton::getInstance().demo();

This singleton implementation was not thread-safe until the C++11 standard. In C++11, thread-safe initialization and destruction of function-local statics is guaranteed by the standard.

If you're sure that your compiler is 100% C++11 compliant, then this approach is thread-safe. If you're not so sure, please use the approach from version 4.

Multi-threaded environment
Both implementations are fine in a single-threaded application, but in the multi-threaded world things are not as simple as they look. Raymond Chen explains here why C++ statics are not thread-safe by default, a behavior allowed by the C++98 standard.
A singleton is a shared global resource, and shared global resources are normally open to race conditions and threading issues. So, the singleton object is not immune to this problem.
Let’s imagine the next situation in a multithreaded application:

static simpleSingleton* getInstance() {
  if (!pInstance)  // 1
  {
    pInstance = new simpleSingleton();  // 2
  }

  return pInstance;  // 3
}

At the very first access, a thread calls getInstance() and pInstance is null. The thread reaches the second line (2) and is ready to invoke the new operator. It might just happen that the OS scheduler interrupts the first thread at this point and passes control to another thread.
That thread follows the same steps: calls the new operator, assigns pInstance in place, and gets away with it.
After that, the first thread resumes and continues the execution of line 2, so it reassigns pInstance and gets away with it, too.
So now we have two singleton objects instead of one, and one of them will leak for sure. Each thread holds a distinct instance.

An improvement to this situation might be a thread-locking mechanism, and we have one in the new C++11 standard. So we don't need to use POSIX or OS-specific threading primitives anymore, and a locking getInstance() for Meyers' implementation looks like this:

static otherSingleton& getInstance() {
  std::lock_guard<std::mutex> lock(_mutex);
  static otherSingleton theInstance;
  pInstance = &theInstance;
  return *pInstance;
}

The constructor of class std::lock_guard (C++11) locks the mutex, and its destructor unlocks the mutex. While _mutex is locked, other threads that try to lock the same mutex are blocked.
But in this implementation we're paying the synchronization overhead on every getInstance() call, and that is not what we need. Each access to the singleton requires acquiring a lock, but in reality we need the lock only when initializing pInstance. If getInstance() is called n times during a program run, we need the lock only the first time.
Writing a 100% thread-safe C++ singleton implementation is not as simple as it appears, especially since for many years C++ had no standard threading support. In order to implement a thread-safe singleton we can apply the double-checked locking pattern (DCLP).
The pattern consists of checking the condition before entering the synchronized code, and then checking it again inside the lock.
So the first singleton implementation would be rewritten using a temporary object:

static simpleSingleton* getInstance() {
  if (!pInstance) {
    std::lock_guard<std::mutex> lock(_mutex);

    if (!pInstance) {
      simpleSingleton* temp = new simpleSingleton;
      pInstance = temp;
    }
  }

  return pInstance;
}

This pattern involves testing pInstance for nullness before trying to acquire a lock; only if the test succeeds is the lock acquired, and after that the test is performed again. The second test is needed to avoid a race condition in case another thread happens to initialize pInstance between the time pInstance was tested and the time the lock was acquired.
Theoretically, this pattern is correct, but in practice that is not always true, especially in multiprocessor environments, because compilers and processors are free to reorder reads and writes.
Due to this rearranging of writes, the memory as seen by one processor might look as if the operations were not performed in the correct order when observed from another processor. In our case, the assignment to pInstance might become visible before the Singleton object has been fully initialized.
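For completeness, and only as an illustration that is not part of the implementations above (it follows the spirit of the Meyers/Alexandrescu paper referenced at the end), C++11 lets you express DCLP correctly by publishing the pointer through std::atomic with release/acquire ordering, so other threads can never observe a pointer to a not-yet-constructed object; the class name below is mine.

// Sketch of a C++11 DCLP singleton using std::atomic (illustrative class name).
#include <atomic>
#include <mutex>

class dclpSingleton {
  static std::atomic<dclpSingleton*> _pInstance;
  static std::mutex _mutex;
  dclpSingleton() {}

 public:
  static dclpSingleton* getInstance() {
    dclpSingleton* p = _pInstance.load(std::memory_order_acquire);
    if (!p) {
      std::lock_guard<std::mutex> lock(_mutex);
      p = _pInstance.load(std::memory_order_relaxed);
      if (!p) {
        p = new dclpSingleton();                         // fully construct first...
        _pInstance.store(p, std::memory_order_release);  // ...then publish
      }
    }
    return p;
  }
};

std::atomic<dclpSingleton*> dclpSingleton::_pInstance{nullptr};
std::mutex dclpSingleton::_mutex;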
Also, after the first call of getInstance(), the implementation with raw (non-smart) pointers still needs somebody to delete that instance at some point in order to avoid memory leaks.

Version 3 – Singleton with smart pointers
Until C++11, the C++ standard didn't have a threading model, and developers needed to use external threading APIs (POSIX or OS-dependent primitives). But finally the C++11 standard has threading support.
Unfortunately, the first implementation of the new standard in Visual C++ 2010 is incomplete, and threading support is available only starting with the Visual Studio 11 beta (the VS 2012 release preview).

class smartSingleton {
 private:
  static std::mutex _mutex;

  smartSingleton() {}
  smartSingleton(const smartSingleton& rs);
  smartSingleton& operator=(const smartSingleton& rs);

 public:
  ~smartSingleton() {}

  static std::shared_ptr<smartSingleton>& getInstance() {
    static std::shared_ptr<smartSingleton> instance = nullptr;

    if (!instance) {
      std::lock_guard<std::mutex> lock(_mutex);

      if (!instance) {
        instance.reset(new smartSingleton());
      }
    }

    return instance;
  }

  void demo() {
    std::cout << "smart pointers # next - your code ..." << std::endl;
  }
};

std::mutex smartSingleton::_mutex;

As we know, in C++ class members are private by default, so our default constructor would be private even without the access specifier. I added the explicit private label in order to avoid any misunderstanding and any accidental move to public/protected.
Finally, feel free to use your special instance (singleton):

// Version 1
std::shared_ptr<smartSingleton> p = smartSingleton::getInstance(); // cache instance pointer
p->demo();

// Version 2
std::weak_ptr<smartSingleton> pw = smartSingleton::getInstance(); // cache instance pointer
pw.lock()->demo();

// Version 3
smartSingleton::getInstance()->demo();

And no memory-leak worries… 🙂
Multiple threads can simultaneously read and write different std::shared_ptr objects, even when the objects are copies that share ownership.
But even though this implementation uses the double-checked locking pattern, it is still not optimal to perform the double check on every call.


Version 4 – Thread-safe singleton in C++11
To have a thread-safe implementation we need to make sure that the class's single instance is created only once, with proper locking, in a multi-threaded environment.
Fortunately, C++11 comes to our help with two new entities: std::call_once and std::once_flag. Using them with a standard-compliant compiler, we have the guarantee that our singleton is created in a thread-safe way and without memory leaks.
Invocations of std::call_once on the same std::once_flag object are serialized.
Instances of std::once_flag are used with std::call_once to ensure that a particular function is called exactly once, even if multiple threads invoke the call concurrently.
Instances of std::once_flag are neither CopyConstructible, CopyAssignable, MoveConstructible nor MoveAssignable.

Here is my proposal for a thread-safe singleton implementation in C++11:

class safeSingleton {
  static std::shared_ptr<safeSingleton> instance_;
  static std::once_flag only_one;

  safeSingleton(int id) {
    std::cout << "safeSingleton::Singleton()" << id << std::endl;
  }

  safeSingleton(const safeSingleton& rs) { instance_ = rs.instance_; }

  safeSingleton& operator=(const safeSingleton& rs) {
    if (this != &rs) {
      instance_ = rs.instance_;
    }

    return *this;
  }

 public:
  ~safeSingleton() { std::cout << "Singleton::~Singleton" << std::endl; }

  static safeSingleton& getInstance(int id) {
    std::call_once(
        safeSingleton::only_one,
        [](int idx) {
          safeSingleton::instance_.reset(new safeSingleton(idx));

          std::cout << "safeSingleton::create_singleton_() | thread id " + idx
                    << std::endl;
        },
        id);

    return *safeSingleton::instance_;
  }

  void demo(int id) {
    std::cout << "demo stuff from thread id " << id << std::endl;
  }
};

std::once_flag safeSingleton::only_one;
std::shared_ptr<safeSingleton> safeSingleton::instance_ = nullptr;

The parameter to getInstance() was added for demo purposes only and is simply forwarded to the constructor. As you can see, I am using a lambda instead of a normal method.
This is how I tested my safeSingleton and smartSingleton classes.

std::vector<std::thread> v;
int num = 20;

for (int n = 0; n < num; ++n) {
  v.push_back(
      std::thread([](int id) { safeSingleton::getInstance(id).demo(id); }, n));
}

std::for_each(v.begin(), v.end(), std::mem_fn(&std::thread::join));

// Version 1
std::shared_ptr<smartSingleton> p = smartSingleton::getInstance();  // cache instance pointer
p->demo();

// Version 2
std::weak_ptr<smartSingleton> pw = smartSingleton::getInstance();  // cache instance pointer
pw.lock()->demo();

// Version 3
smartSingleton::getInstance()->demo();

So I create 20 threads, launch them in parallel, and join them all (std::thread::join); each thread accesses getInstance() (with a demo id parameter). Only one of the threads trying to create the instance succeeds.
Additionally, if you're using a fully C++11-compliant compiler, you could also delete the copy constructor and the assignment operator. That way, trying to use such deleted members results in a compile-time error, as in the sketch below.
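A minimal sketch of that C++11 variant (the class name is hypothetical), combining deleted copy operations with the guaranteed thread-safe initialization of a local static:

// Sketch only: deleted copy operations + C++11 "magic static" initialization.
class noCopySingleton {
 public:
  noCopySingleton(const noCopySingleton&) = delete;             // copying is a compile error
  noCopySingleton& operator=(const noCopySingleton&) = delete;  // assignment is a compile error

  static noCopySingleton& getInstance() {
    static noCopySingleton theInstance;  // initialized exactly once, thread-safe in C++11
    return theInstance;
  }

 private:
  noCopySingleton() = default;
};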

Other comments
I tested this implementation on a machine with an Intel i5 processor (4 cores). If you see any concurrency issues in this implementation, please feel free to share them here. I am open to other good implementations, too.
An alternative to this approach is creating the singleton instance in the main thread and passing it to the objects that require it. In case we have many singleton objects this may not look so nice, but the dependencies can be bundled into a single 'Context' object which is then passed around where necessary.

Update: Following Boris's observation, I removed the std::mutex instance from the safeSingleton class. It is not necessary anymore because std::call_once is enough to make this class thread-safe.

Update 2: Following Ervin's and Remus's observations, in order to make things clearer I simplified implementation version 3, and it no longer uses std::weak_ptr.

References:
just::thread – Anthony Williams – Just Software Solutions Ltd
C++ and the Perils of Double-Checked Locking by Scott Meyers and Andrei Alexandrescu
Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu

pre vs. post increment operator – benchmark

Compiler: Visual C++ 2010
Operating System: Windows 7 32bits
Tested machine CPU: Intel core i3
Download: preVSpost (demo project) (2675 downloads)

A recent comment from the Visual C++ team on twitter.com reminded me of a hot topic in the C++ programming world: the long-running discussion about using pre- versus post-increment operators, especially for iterators. I was myself a witness to such a discussion, which started from a FAQ I wrote on www.codexpert.ro.

The reason for preferring pre-increment operators is simple: for each post-increment a temporary object is needed.
The Visual C++ STL implementation looks similar to the following code:


_Myt_iter operator++(int) {  // postincrement
  _Myt_iter _Tmp = *this;
  ++*this;
  return (_Tmp);
}

But in the pre-increment operator implementation this temporary object is not needed anymore:


_Myt_iter& operator++() {  // preincrement
  ++(*(_Mybase_iter*)this);
  return (*this);
}

In the discussion I mentioned above, somebody came up with a dummy application and tried to prove that things have changed because of new compiler optimizations (the code exists in the attached file, too). That sample is too simple and too far away from real code. Real code normally has many more lines that eat CPU time, even when compiling with /O2 settings.
Based on that VC++ team tweet related to viva64.com's research, I decided to create my own benchmark on single-core and multi-core architectures. For those who don't know, Viva64 is a company specialized in static code analysis.
Starting from their project, I extended the tests to other STL containers: std::vector, std::list, std::map, and std::unordered_map (the VC++ 2010 hash table implementation).
For the parallel-core tests I used Microsoft's Parallel Patterns Library (PPL).

1. How the tests were made
1.1. Code stuff
In order to measure execution time I used the same timer as the Viva64 team (with a few changes). Each container instance was populated with 100,000 elements of the same random data. An effective computing function was repeated 10 times. Inside this function some template functions are called 300 times. The single-core computing function contains loops like this:


for (size_t i = 0; i != Count; ++i) {
  x += FooPre(arr);
}

// where FooPre looks like
template <typename T>
size_t FooPre(const T& arr) {
  size_t sum = 0;
  for (auto it = arr.begin(); it != arr.end(); ++it)
    sum += *it;
  return sum;
}
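For contrast, the post-increment counterpart used by the benchmark looks roughly like this (FooPost is the name I assume here; the only difference is it++ versus ++it):

// Post-increment variant: it++ must return the iterator's previous value,
// which forces a temporary copy on each step.
template <typename T>
size_t FooPost(const T& arr) {
  size_t sum = 0;
  for (auto it = arr.begin(); it != arr.end(); it++)
    sum += *it;
  return sum;
}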

For the parallel-core computing, the first simple for loop changes into:


parallel_for(size_t(0), Count,
             [&cnt, &arr](size_t i) {
               cnt.local() += FooPre(arr);
             });

where cnt is an instance of the combinable class and the sum of the partially computed elements is obtained by calling the combine() method:

cnt.combine(std::plus<size_t>());
As you can see, the parallel_for function uses one of the new C++ standard features: a lambda function. This lambda function and the combinable class implement the so-called parallel aggregation pattern and help you avoid the usual multithreaded shared-resource issues. The code is executed as independent tasks. The reason this approach is fast is that there is very little need for synchronization operations. Calculating the per-task local results uses no shared variables and therefore requires no locks. The combine operation is a separate sequential step and also does not require locks. A self-contained sketch putting these pieces together is shown below.
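Here is a small self-contained sketch of the pattern, under the assumption that the data is a std::vector<size_t> and FooPre is the template function shown earlier; the headers and the function name are my own arrangement, not the exact benchmark code.

// Illustrative parallel aggregation with PPL: per-task local sums, one
// sequential combine step at the end.
#include <ppl.h>
#include <functional>
#include <vector>

size_t ParallelSum(const std::vector<size_t>& arr, size_t Count) {
  concurrency::combinable<size_t> cnt;  // one local sum per task
  concurrency::parallel_for(size_t(0), Count, [&cnt, &arr](size_t /*i*/) {
    cnt.local() += FooPre(arr);         // no shared state, no locks
  });
  return cnt.combine(std::plus<size_t>());  // sequential reduction
}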

1.2. Effective results
The tests were run on an Intel Core i3 machine (4 cores) running Windows 7 32-bit. I tested debug and release modes for both single-core and multi-core computation. The test application was built with VC++ 2010, one of the first compilers with C++11 support.
The OX axis represents the repetition index of the execution, and the OY axis represents time in seconds.

1.2.1. Single core computation
Debug

Release

1.2.2. Multi cores computation
As you know, multi-core programming is the future. For C++ programmers, Microsoft proposes a very interesting library called the Parallel Patterns Library.
The overall goal is to decompose the problem into independent tasks that do not share data, while providing enough tasks to occupy all the available cores.

This is how my task manager looks when the demo application runs in parallel mode.

Isn't it nice compared to single-core use? 🙂

Debug

Release

1.2.3. Speedup
Speedup is a performance metric showing the efficiency of a parallel algorithm compared to the serial one.

Debug

Release

Conclusions:
The biggest differences appear in debug builds, where pre-increment is "the champion".
With primitive types (like int and pointers) the opposite might be true because of CPU pipelining; with post-increment, thanks to release optimizations, there is no copy to be returned for these simple types.
According to these results I have to agree with the Viva64 team. Even if the results are very close in the release version, I keep my opinion that the pre-increment operator is preferable to the post-increment operator. We all know how long the debugging period takes and how important is every second we win during long debugging days.
If you still have doubts about using the pre-increment operator, or you need a flexible way of switching these operators in your code, you can easily implement some macros like these:

#define VECTOR_ITERATOR(type, var_iter) std::vector<type>::iterator var_iter;
#define VECTOR_FOR(vect, var_iter) for (var_iter = vect.begin(); var_iter != vect.end(); ++var_iter)

Numeric type conversion to std::string and vice versa

In our real applications we often have to convert strings to integer or floating-point variables and vice versa (double/float/int values to std::string).
We can do these conversions using C-style CRT functions, or we can try a C++ approach via the STL.
Unfortunately, the C standard library does not offer complete support for every type of conversion. For instance, if we try to put an integer into a C++ string object (std::(w)string) using the well-known function itoa(), we get the following error:

int x1 = 230;
std::string s1;
itoa(x1, s1, 10);

// error C2664: ‘itoa’ : cannot convert parameter 2 from ‘std::string’ to ‘char *’

A C-style approach to avoiding this error means using an intermediate buffer:

int x1 = 230;
std::string s1;
char szBuff[64]={0};
itoa(x1, szBuff, 10);
s1 = szBuff;

Same story if we try to convert a std::string to an int:

std::string s3 = "442";
int x3 = atoi(s3);

// error C2664: ‘atoi’ : cannot convert parameter 1 from ‘std::string’ to ‘const char *’

In this case we can use c_str(), which returns a constant pointer to char:

int x3 = atoi(s3.c_str());

An elegant way to get rid of such problems is to build two conversion functions that use templates and C++ streams.
Based on this idea, I created a String2Numeric class that contains two static methods, Type2String() and String2Type(), both of which throw a BadConversion exception on failure, where BadConversion is a class derived from std::runtime_error.


class BadConversion : public std::runtime_error {
public:
  BadConversion(const std::string& s)
    : std::runtime_error(s) { }
};

class String2Numeric {
public:
  template <typename TypeT>
  static xstring Type2String(TypeT x) {
    xostringstream o;
    if (!(o << x))
      throw BadConversion("Type2String(TypeT)");
    return o.str();
  }

  template <typename TypeT>
  static TypeT String2Type(const xstring& s) {
    xistringstream i(s);
    TypeT x;
    if (!(i >> x))
      throw BadConversion("String2Type(TypeT)");
    return x;
  }
};

 

For compatibility with both ANSI and UNICODE projects, I defined a few macros:


#ifdef _UNICODE
#define xstring std::wstring
#define xostringstream std::wostringstream
#define xistringstream std::wistringstream
#else
#define xstring std::string
#define xostringstream std::ostringstream
#define xistringstream std::istringstream
#endif

Because of this compatibility, I strongly recommend using the xstring alias instead of std::wstring or std::string.
When you want to convert an int, float, double, or other numeric type to an xstring in a C++ style, you can use the Type2String() function. Vice versa, if you want to convert an xstring to these types, you can use String2Type().

In order to handle possible thrown exceptions, I recommend using a try/catch block whenever you're using these functions. I prefer using xstring for string/wstring variable definitions, too.
Here is a sample of using this class:

double y1 = 0.2312, y2 = 1.0123;
int x2 = 0;
xstring s1, s2;
try {
  s1 = String2Numeric::Type2String(y1);
  s2 = String2Numeric::Type2String(x2);
#ifdef _UNICODE
  xstring s3(L"43.52");
#else
  xstring s3("43.52");
#endif
  x2 = String2Numeric::String2Type<int>(s3);
  y2 = String2Numeric::String2Type<double>(s3);
} catch (BadConversion& eBC) {
  std::cout << "An exception has been thrown: " << eBC.what() << std::endl;
}

The String2Numeric class can be extended. For instance, if the conversion throws an error, you can add detailed information to the exception message.

Download String2Numeric (2341 downloads) class.

Dynamic popup and drop down menus for custom representations

Many applications allow dynamic customization of visual objects or data views. For instance, the well-known Internet Explorer provides toolbar customization using a popup menu that appears when the user right-clicks in the toolbar area.

Internet Explorer sample menu

Another case where this kind of menu is very useful is customizing how database data is represented in Windows controls such as a list control or a grid control. Such applications allow filtering data and showing/hiding columns using this kind of menu. The user just right-clicks on the control header and gets what he needs.

Starting from this idea, I implemented a CDynamicPopupMenu class. This class allows easy building of this kind of menu. I used it in a demo dialog-based application over a list control.

my demo application

Internally, this class uses an STL container (std::map) holding a data structure that embeds the menu items' properties. When the menu is built, the menu's behavior is implemented using these properties.

Add a new menu item
The method for adding a new menu item has the following definition:

void AddItemData(const int item_id, const int parent_id,
                 bool is_visible, bool check_flag, bool has_child,
                 const std::wstring item_name, bool enable_flag);

where:

  • item_id – the internal item ID; this ID is also used for menu customization;
  • parent_id – the parent item ID, used when we define a new item sub-group (a drop-down menu); the value is 0 if the menu item is part of the initial menu;
  • is_visible – this flag indicates whether an item is checked/unchecked. In my demo application this flag is set to true for all of the list control's columns that we want to display. For the "Select All" and "Check All" items this flag is false because we want them to create a new subgroup that contains new columns, and there are no "Select All" or "Check All" columns;
  • check_flag – this flag allows the check/uncheck menu property;
  • has_child – if true, it allows a subgroup definition (a new drop-down menu);
  • item_name – the Unicode menu item name;
  • enable_flag – defines whether the item is enabled or disabled.

Add separator item
The method for adding a separator item has the following definition:

void AddItemSeparator(const int item_id, const int parent_id);

where:

  • item_id – the menu item ID;
  • parent_id – the ID of the parent item from which the subgroup starts; the value is 0 if the menu item is part of the initial menu.

Sample of adding menu items
In my demo application, in the CtestPopupMenuDlg::SetDefaultMapValues(void) method, among other things, you can find the following calls:

m_pCustPPMenu->AddItemData(MI_MAINITEM_1, 0, true, true, false,
                           _T("Item 1"), true);
m_pCustPPMenu->AddItemSeparator(MI_MAIN_SEP1, 0);
m_pCustPPMenu->AddItemData(MI_MAINITEM_4_GROUP_1, 0, true, true, true,
                           _T("Group 1"), true);
m_pCustPPMenu->AddItemData(MI_GROUP_1_SUBITEM_1, MI_MAINITEM_4_GROUP_1,
                           false, false, false, _T("G1-Select All"), true);
m_pCustPPMenu->AddItemData(MI_GROUP_1_SUBITEM_2, MI_MAINITEM_4_GROUP_1,
                           true, false, false, _T("G1-Item 12"), true);

Get menu internal data
In order to access the internal data container (std::map) that stores all the dynamic menu items, you can use the following method:

DynamicMenuData* GetMenuData();

followed by:

DynamicMenuData* pItemsMap = m_pCustPPMenu->GetMenuData();

Create and display menu
Menu creation must be done right after we add all the menu items. The menu is displayed only after the TrackPopupCustomMenu() call. The definition of this method looks like this:

DWORD TrackPopupCustomMenu(POINT point, HWND hWnd);

where:

  • point – the mouse coordinates where the menu starts being built;
  • hWnd – the parent window handle where the menu is created.

The function's return value is the ID of the menu item that was selected. If no item was selected, the function returns 0.
In my demo application, menu creation is invoked in the list control's right-click handler (NM_RCLICK).

void CtestPopupMenuDlg::OnNMRclickList1(NMHDR *pNMHDR, LRESULT *pResult)
{
  POINT point;
  ::GetCursorPos(&point);

  int nSelItem = m_pCustPPMenu->TrackPopupCustomMenu(point, m_ctrlList.m_hWnd);

  if (0 < nSelItem) {
   pNMHDR->hwndFrom = m_ctrlList.m_hWnd;
   pNMHDR->idFrom = nSelItem;
   pNMHDR->code = WM_NOTIFY;

   OnNotify( 0, (LPARAM)pNMHDR, pResult);
  }

  *pResult = 0;
}

As you can see, I'm calling TrackPopupCustomMenu() with the mouse position when the user right-clicks over the list control.
I am saving the list control handle, the selected item ID, and the WM_NOTIFY value into a pointer to the notification message structure NMHDR. Then I'm passing this pointer to the OnNotify() method.
Using the WM_NOTIFY message and the OnNotify() method, I inform the parent control window that a new event was generated.

BOOL CtestPopupMenuDlg::OnNotify(WPARAM wParam, LPARAM lParam, LRESULT* pResult)
{
  NMHDR *p = (NMHDR*) lParam;
  _ASSERT(p);
  _ASSERT(m_pCustPPMenu);
  bool bFlag = false;

  switch (p->idFrom)
  {
    case MI_MAINITEM_1:
    m_pCustPPMenu->GetItemCheckedFlag(MI_MAINITEM_1, bFlag);
    m_pCustPPMenu->SetCheckedItemFlag(MI_MAINITEM_1, !bFlag);
    FillData();
    break;
    
    case MI_MAINITEM_2:
    m_pCustPPMenu->GetItemCheckedFlag(MI_MAINITEM_2, bFlag);
    m_pCustPPMenu->SetCheckedItemFlag(MI_MAINITEM_2, !bFlag);
    FillData();
    break;

   // =======================
   // Silviu: this method store other menu items handlers, too
   // =======================

   default:
   break;
  }

  return CDialog::OnNotify(wParam, lParam, pResult);
}

I am calling GetItemCheckedFlag() in order to detect the selected item's check status (checked/unchecked). Then I negate this bool flag and call the SetCheckedItemFlag() method. Finally, the FillData() method applies the resulting changes to my list control, depending on the menu command.

Menu interaction with parent window (list control)
In my demo application, the interaction between the dynamic menu and the list control is handled by the FillData() method.
In order to use CDynamicPopupMenu's internal container data, you need to initialize a DynamicMenuData pointer with GetDynamicMenuData()'s return value.

void CtestPopupMenuDlg::FillData()
{
  _ASSERT(m_pCustPPMenu);

  DynamicMenuData *pItemsMap = m_pCustPPMenu->GetDynamicMenuData();

  int nCol = 0;
  if ((NULL != pItemsMap) && (!pItemsMap->empty()))
  {
    // reset columns
    int nColumnCount = m_ctrlList.GetHeaderCtrl()->GetItemCount();
    for (int i=0;i < nColumnCount;i++) 
      m_ctrlList.DeleteColumn(0);

    for (iterDynMenu itm = pItemsMap->begin(); itm != pItemsMap->end(); ++itm)
    {
      if (m_pCustPPMenu->GetIsVisible(itm) && m_pCustPPMenu->GetIsChecked(itm))
      {
         m_ctrlList.InsertColumn(nCol++, itm->second.sItemName.c_str(), LVCFMT_LEFT, 70);
      }
    }
  }
}

Using that pointer to the internal menu data, I iterate over the internal container, and for those items that have both the visible and checked flags set to true, I insert columns into my list control.
Similarly, when using such menus, the application can apply filters to real data.
The CDynamicPopupMenu class contains other useful methods. This kind of menu can be used in different situations in order to change an application's behavior.

Download demo application: testPopupMenu (Visual C++ 2005 project)