Silviu-Marius Ardelean's blog

a software engineer's web log

Finally, I got Windows 10

Finally, I got Windows 10 on my laptop… Even if this task is usually trivial, it was a surprising experience this time. Let me tell you the story of the Windows 10 installation on my Samsung RC710 laptop.

Back in August I tweeted "That's all I have within #Windows10 X64 Ent setup on Samsung RC710 with SSD". By that I meant that the setup process got stuck in the boot phase while installing this brand new Windows version. The problem was reproducible both when upgrading from Windows 8.1 x64 and with a clean Windows 10 installation.
I tried a few Windows 10 .ISOs and a friend's DVD, but no luck. The setup started and got stuck within a few seconds.


Because the laptop had received some hardware upgrades since the original configuration, I tried restoring it to the initial configuration, but nothing changed. After reading this article, which specifies "For 64-bit installations, a small number of older PCs may be blocked from installation because they do not support CMPXCHG16b, PrefetchW, and LAHF/SAHF", I tried an x86 .ISO, but I got the same result. I also tried some BIOS changes, without any improvement.

Contacting Microsoft online offered nothing new. I got only the typical trivial support answers.

Over time I had run Windows XP, Windows 7 and Windows 8.1 on the same laptop without any such bad experiences. To me it was clear that Windows 10 has some backward compatibility issues.

So I took matters into my own hands and googled my situation. Reading different forums, I realized this is a common issue for old Samsung laptops and it is caused by the WiFi card.

The solution in my case was buying a brand new Atheros AR5B22 WiFi card and replacing the old one with it.

If you're in a similar situation and you're looking for instructions on how to disassemble your laptop, here is a brief presentation.

By the way, you're doing this at your own risk. If you're not confident, please contact a specialist.



With the new WiFi card plugged in, the upgrade from Windows 8.1 x64 to Windows 10 x64 became a trivial task.

It would be nice if Microsoft took such behaviors more into account and improved Windows 10's backward compatibility, especially because older computers are included in the OS's target.

Update 02.06.2016: Samsung admits they are lame: “Don’t Install Windows 10 Because We Suck At Making Drivers”. Sad…

You can find additional information here. That's why, most probably, I will never buy a Samsung phone or any other gadget made by them again.


Experiences with Adobe Acrobat/Reader Plug-ins

I wrote this document after a challenging experience I had recently while creating an Adobe Acrobat/Reader plug-in. Even if Adobe's SDK is nicely documented in PDF files, one of the major reasons that determined me to write this article was the frustration I sometimes felt when, for instance, searching for "why the plugin was not loading into Acrobat/Reader" and Google gave me a lot of references like "why the Reader plugin is not loading into a browser". Also, the search functionality of Adobe's forum didn't help me much. I hope to help others by clarifying some of the challenges a developer might meet at the beginning of creating such a plug-in.
Adobe has two products for handling .PDF files: the freeware Reader, capable of reading only, and Acrobat, for reading, writing and actually creating .PDFs. Both Acrobat and Reader use the same SDK, but the Reader APIs are a subset of those available for Acrobat (obviously).
There are three types of plug-ins: regular plug-ins, Reader-enabled plug-ins and certified plug-ins.

General considerations

Plug-ins for Acrobat can be developed and distributed freely, and no license is required from Adobe. The payment exception appears in the case of a DRM agreement, which includes a $50,000 annual fee and a 5.5% revenue royalty. Adobe considers digital rights management (DRM) to apply when the developed plug-in's functionality involves "encrypting a PDF file or controlling access to a PDF file, then it is DRM. Also, if you add any functionality to the security settings of Adobe Acrobat (…). If your plug-in is required for someone to access the PDF file, then we would consider it to have DRM functionality".

Only plug-ins that are shipped as part of Acrobat and Reader can be ‘certified’. This is so that if users wish to run Acrobat or Reader without any 3rd party plug-ins, they can do this easily by using the ‘Certified plug-ins only’ check box in the preferences.

Adobe maintains a registry of four-character prefixes for each company that develops extensions for its products. New companies that intend to develop such plug-ins should contact Acrobat Developer Support to obtain a four-character prefix. Adobe's own prefixes are ADBE and ACRO. This prefix needs to be used with various elements, as well as with the private data the plug-in writes into PDF documents.

For Adobe Reader, the plug-in needs a special macro, READER_PLUGIN, defined in the project settings. With it defined, it's easy to identify when you're calling an Acrobat-only SDK function, because such calls cause compiler errors.

The first challenges

After downloading the SDK, my first instinct was to try the project samples. With this step the first annoying situation appeared: I loaded the all.sln solution into Visual Studio and observed that whatever project I built and deployed into the Reader "plug_ins" subfolder, I was not able to see it in Adobe Reader. The "plug_ins" subfolder, or one subfolder level down, is the place where you have to deploy the built plug-ins; these plug-ins are DLLs with an .API extension. The confusion was amplified because, after downloading and installing the FileOpen WebPublisher Client plug-in, I observed that it ran perfectly and I could even see it in the Help – Adobe Third Party Plug-ins menu.

However, the same plug-ins deployed in the Adobe Acrobat "plug_ins" folder were up and running. I started reading Developing Plug-ins and Applications for Adobe Reader and followed the "why a plug-in might not load" steps I found there, but no solution for Adobe Reader. Just in case, I unchecked Reader's "Use only certified plug-ins" setting, and still nothing (Edit > Preferences: Application Startup: Use only certified plug-ins, unchecked).

Trying to debug the plug-in source code, by attaching to the Reader process or by starting the debugger with the Reader application, didn't help either.

Later, after some struggle, I found out that the key to understanding why the SDK sample DLLs were not loading into Adobe Reader was that plug-ins for Reader need to be signed before being deployed into the "\Program Files\Adobe\Acrobat\plug_ins" directory. Such information is not present in the manual's "why a plug-in might not load" list.

How to sign the plug-in for Adobe Reader

As I mentioned earlier, a Reader plug-in must be signed using a certificate provided by Adobe. It is strongly recommended that you apply for a key at the beginning of the development process, since the application can be denied if the plug-in functionality is not in accordance with Adobe's business goals. Also, this ensures that your agreement is in place when you are ready to build the Reader version of the plug-in. If the application is approved, the developer must build the public key and key pair files using a tool from the Acrobat SDK.

When somebody wants to develop a plug-in for Adobe Reader, they have to fill out an integration form, but not before creating an Adobe ID. According to Adobe, the approval process might take some time (up to two weeks). The application should be filled out completely, and your responses will be used to determine your eligibility. Adobe also recommends that, if you are building a DRM-based Adobe Reader plug-in, you send them an email with the details of your request so that they can guide you through the application process.

Generate the public and private key pair with the MakeKeys tool:

The size of the public key should be 98 bytes, the size of the public/private key pair 451 bytes, and the size of the returned encrypted key 554 bytes. Save the generated .key files in a proper location, because later it might be useful to include them in your project. The tool is located in your SDK, e.g. sdk110_v1\PluginSupport\Tools\Reader-enabling Tools\win.

Submit the newly created public key file and the completed form document, then wait for the digital certificate; this will be a RIKLA-DigCert.rc file. If you receive approval from Adobe, there are several more steps you need to follow to receive your Reader Integrated Key for your plug-in. The key arrives as a digital certificate. Once this is done, the plug-in will load into Reader. Note that if the plug-in is recompiled, it must be signed again (the same certificate and key pair files can be used).

Once you get the digital certificate file, you should sign the freshly built plug-in before deploying it into Reader's plug_ins folder.

Here, because I was using SDK 11, I got confused by the steps described in the "Enabling the plug-in for Adobe Reader" section of the documentation. It talks about the Makecmd32.exe tool, some API_ENCRYPTED_DIGEST and API_DIGITAL_CERTIFICATE IDs, etc. But SDK 11 does not ship with the Makecmd32.exe tool. It can be downloaded separately in a RIKLATools.zip file, but I preferred following the actual SDK 11 documentation, especially because it uses a different signing approach. Instead of Makecmd32.exe I had to use SignPlugin.exe (in the SDK docs: Plug-ins and Applications > Developing Plug-ins and Applications > Creating an Adobe Reader Plug-In > Developing and enabling an Adobe Reader plug-in > Enabling the plug-in for Adobe Reader).

Plug-In Structure

The Acrobat/Reader applications use a multi-step approach for plug-ins, implemented as callbacks: handshaking, exporting and importing HFTs, initialization and unloading. The minimum a plug-in must implement is the PluginInit() callback function.

The plug-in life cycle in Adobe Acrobat/Reader involves the following steps:

  • At startup the application searches its plug-in directory (plug_ins). It looks in the .API files for the exported PlugInMain symbol, loads the plug-in by invoking LoadLibrary and calls the function pointed to by PlugInMain.
  • For each detected plug-in (.API) it attempts to load the file. If the plug-in is loaded successfully, Reader/Acrobat invokes routines from PIMain.c and completes the handshaking process.
  • It invokes the callbacks in the following order:

PIHandshake

PluginExportHFTs

PluginImportReplaceAndRegister

PluginInit

  • Before Reader/Acrobat closes, the PluginUnload callback function is executed. That's the proper place to release the allocated resources.

In the initialization phase the plugin hooks into Acrobat’s user interface by adding menu items, toolbars, etc. The unload procedure should free any memory the plug-in allocated and remove any user interface changes it made.

Handshaking is also one of the most important steps. The application performs this check with each plug-in before opening it; it is the step where a plug-in for Adobe Reader is tested before loading. During this operation the plug-in specifies its name, its initialization procedure, the signature test and, optionally, an unload procedure if needed. If the signature test fails, the loading of that plug-in is stopped.
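To make the callback list above more concrete, here is a minimal skeleton of these callbacks, in the spirit of the SDK's Starter sample. This is only a sketch: the handshaking code that registers these functions is done as in the Starter sample, and all business logic is omitted.

[code]
// Minimal Acrobat/Reader plug-in callback skeleton (sketch).
// ACCB1/ACCB2 are the SDK's calling-convention macros; ASBool is the SDK boolean type.
#include "PIHeaders.h"

// Export the Host Function Tables this plug-in provides (none in this sketch).
ACCB1 ASBool ACCB2 PluginExportHFTs(void)
{
    return true;
}

// Import HFTs from other plug-ins, replace methods, register for notifications (nothing here).
ACCB1 ASBool ACCB2 PluginImportReplaceAndRegister(void)
{
    return true;
}

// The proper place to add menu items, toolbar buttons and other UI hooks.
ACCB1 ASBool ACCB2 PluginInit(void)
{
    return true;
}

// Release everything allocated in PluginInit(); called before Acrobat/Reader closes.
ACCB1 ASBool ACCB2 PluginUnload(void)
{
    return true;
}
[/code]

PIHandshake() is where these callbacks get registered with the viewer; the Starter sample already contains a complete version of it.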

How to create a plugin

Even if the Acrobat SDK allows creating plug-ins for platforms other than Windows (Mac, Unix/Linux) without too many differences (mostly in configuration and tools used), I will describe below some details of plug-in development on the Windows platform.

Download the latest Acrobat SDK and unzip it to a preferred location. Create an environment variable, AcroSDKPIDir, that points to the SDK content.

Running Visual Studio "as administrator" is a good idea in order to be able to write into Adobe's plug_ins folder. To make debugging and deployment easier, I preferred to add two additional environment variables: AcroPluginsDir, pointing to the Acrobat plug-ins folder, and ReaderPluginsDir for the Reader plug-ins folder.

With these environment variables set in your OS, you can start the actual plug-in creation.

According to the Acrobat SDK, you can start from an existing sample, the so-called Starter project, or from an empty DLL project. The first option gives you your own plug-in up and running fast, by just adjusting the file names and starting to apply the business logic.

If you choose the clean approach, you need to add the paths to the SDK header files in C/C++ > General > Additional Include Directories, for instance:
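For example, something along these lines (my assumption about the folder layout; the exact sub-folders may differ between SDK versions):

[code]
$(AcroSDKPIDir)\PluginSupport\Headers\API
$(AcroSDKPIDir)\PluginSupport\Headers\SDK
[/code]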

This is needed, for instance, to easily include the "PIHeaders.h" file.
Add the following preprocessor definitions to the project settings: WIN_PLATFORM, WIN_ENV and READER_PLUGIN (C/C++ > Preprocessor > Preprocessor Definitions).
Include the PIMain.c file in your project. This file is located in your Acrobat SDK path; in my case it is:

Add the standard Acrobat callback function prototypes to another .cpp file (the functions mentioned in the plug-in structure topic) and start implementing the business logic. Here you can take inspiration from the content of the StarterInit.cpp file (Starter sample project). If you want to add menus, toolbars or other UI items, they should be added in the PluginInit() function.

The PlugInMain() function is the entry point of such plug-ins, and you need to export PlugInMain() via the project settings:
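One way to do it (a sketch; the Starter sample may configure this differently) is a module definition file, or the equivalent linker switch:

[code]
; your_plugin.def  (set under Linker > Input > Module Definition File)
EXPORTS
    PlugInMain

; or, equivalently, add to Linker > Command Line > Additional Options:
;   /EXPORT:PlugInMain
[/code]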

Without this setting you will get a big surprise: at first sight the built plug-in is signed and DllMain() is reached in a debugging session, but none of the callback functions are ever called without this export.

In order to automate the plug-in build and deployment process, you might add some Post-Build Event commands:
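For example (an assumption on my side: it relies on the environment variables defined above and on the project's Target Extension being set to .api; a Reader build must also be signed with SignPlugin.exe before being copied):

[code]
rem copy the freshly built plug-in into the Acrobat plug_ins folder
xcopy /y "$(TargetDir)$(TargetName).api" "$(AcroPluginsDir)\"
[/code]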

Conclusions: In my opinion, the Acrobat SDK is nicely designed, but even if there are a lot of PDF references, it doesn't have the best structured online content, causing users to waste quite some time putting all the pieces together. Maybe because of the complexity and flexibility it exposes, it's not very easy to find complete, clean references.


Some experiences with the recent world-wide WordPress brute force attack

As you most probably know, this website uses WordPress. Last Saturday, trying to access the site admin area, I was faced with an error generated by too many redirects.
Having other things to do, I ignored it at that moment. Later, a friend of mine published on his website some information about an ongoing world-wide brute force attack against WordPress websites, and I started reading more about the issue.

My Firefox’s Web Console has thrown such messages:

[13:38:30.162] GET http://my_site.ro/wp-admin/ [HTTP/1.1 302 Moved Temporarily 834ms]
[13:38:30.998] GET http://my_site.ro/wp-admin/ [HTTP/1.1 302 Moved Temporarily 403ms]
[13:38:31.405] GET http://my_site.ro/wp-admin/ [HTTP/1.1 302 Moved Temporarily 580ms]
[13:38:31.990] GET http://my_site.ro/wp-admin/ [HTTP/1.1 302 Moved Temporarily 558ms]
[13:38:32.558] GET http://my_site.ro/wp-admin/ [HTTP/1.1 302 Moved Temporarily 553ms]

Most probably I was also a target of that attack.

Having a strong password and not using the admin user, the website itself was not affected other than in the wp-admin area. I contacted my hosting provider and, after a few email exchanges, I was able to log into the wp-admin area. The first thing I did was to install and activate the Limit Login Attempts plugin, and the results didn't take long to appear.
This morning, the plugin sent me an interesting email.

So a brute force attempt was cut off.

In order to avoid such unpleasant issues, it is strongly recommended to follow a few basic steps:

  • Avoid using default users with high privileges (e.g. admin).
  • Use strong passwords that also contain special characters, in order to avoid the dictionary attacks used by brute force methods. You can check whether a password is strong enough using free online tools such as passwordmeter.com or Password Checker.
  • Install and activate a tool such as Limit Login Attempts.
  • Enjoy your life. 🙂

    SubclassWindow() method issues in projects based on the MFC Feature Pack

    The Problem
    Painting a background image in the client area of an MDI application built with the VC++ 6.0 to VC++ 2005 IDEs is not a difficult task.
    If you need it, you can easily find good references. For instance, there are two references from Microsoft (KB129471 and KB103786) and one I prefer: a FAQ written by a friend of mine.

    Unfortunately, things change radically if you follow the same steps in a Visual C++ IDE that has MFC Feature Pack support. If you build from scratch a VC++ 2008 / VC++ 2010 MDI project with MFC Feature Pack support and try to apply the same subclassing steps, you will have a big surprise the moment you start your application in debug mode: it will crash when you call SubclassWindow() in CMainFrame::OnCreate().

    Problem details
    Starting with the MFC Feature Pack, CMDIFrameWndEx is CMainFrame's new parent class instead of CMDIFrameWnd, and the problem occurs inside the Attach() method:
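The relevant part of CWnd::Attach() looks roughly like this (paraphrased and trimmed from the MFC sources, wincore.cpp):

[code]
BOOL CWnd::Attach(HWND hWndNew)
{
    ASSERT(m_hWnd == NULL);                        // only attach once, detach on destroy
    ASSERT(FromHandlePermanent(hWndNew) == NULL);  // the HWND must not be mapped already
    if (hWndNew == NULL)
        return FALSE;
    CHandleMap* pMap = afxMapHWND(TRUE);           // create the handle map if necessary
    ASSERT(pMap != NULL);
    pMap->SetPermanent(m_hWnd = hWndNew, this);
    // ...
    return TRUE;
}
[/code]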

    and the issue appears in the second ASSERT() macro,

    because CWnd::FromHandlePermanent(HWND hWnd) looks up the permanent handle map and returns the existing CWnd pointer.

    CHandleMap is the wrapper that implements the mapping mechanism between the pointers of MFC wrapper classes and the Windows object handles. Internally, this class has two dictionaries (m_permanentMap and m_temporaryMap) implemented as CMapPtrToPtr, m_nHandles – the number of handles, m_nOffset – the offset of the handles in the object, and an m_pClass pointer to a CRuntimeClass (the runtime class associated with all MFC classes).
    If you're interested in more details, you can find more information here.

    We have a pointer to a CHandleMap instance, assigned from the handle map returned by afxMapHWND(). The pointer pWnd is assigned the result returned by pMap->LookupPermanent(hWnd). LookupPermanent() effectively searches the permanent hash map for existing HANDLEs, and in our case it finds one.

    where

    If the item with the nHash key is found in m_pHashTable, then the condition if (pAssoc->key == key) is TRUE, because the m_hWndMDIClient attribute of CMDIFrameWnd is already in use.
    So, what LookupPermanent() has effectively found in the m_permanentMap map is m_hWndMDIClient. And because pMap->SetPermanent(m_hWnd = hWndNew, this) is one of the next calls in the Attach() method, those ASSERTs are a must.
    Even if those ASSERT() calls from Attach() are active only in debug builds (because of the ASSERT() macro behavior), a release build would not save the situation: sooner or later you'll get conflicts and the application will crash.

    Finding where this happens is not so complicated, as long as we take into consideration that our CMainFrame class is derived from CMDIFrameWndEx, a class that extends CMDIFrameWnd. If we look into the CMDIFrameWndEx class implementation (AfxMDIClientAreaWnd.cpp), we can see that SubclassWindow() is already called inside this class:

    Subclassing a CWnd-derived instance whose HWND is already mapped is an error, and these ASSERTs try to catch it at development time. Having two different CWnd-derived objects with the same HWND is not possible – the only exception is CDC instances, which hold two handles (m_hDC and m_hAttribDC).
    Related to my issue, according to Steve Horne from Microsoft, “anything that uses the MFC Feature Pack will be using CMDIFrameWndEx which is a very different beast. It has this feature built it as you’ve found out”.
    The worst part is that “If you were able to subclass the Ex client area, you’d probably end up breaking a lot of the FluentUI features.”
    The VS 2008 / VS 2010 wizards generate and use a lot of Feature Pack FluentUI items.

    A bad solution
    One approach might be to adapt the subclassing idea directly in the CMainFrame class. The steps might be:

  • No CMDIClientWnd instance is needed (as in the existing tutorials), so no more SubclassWindow() call in CMainFrame::OnCreate().
  • Handle WM_ERASEBKGND, WM_SIZE and WM_PAINT in CMainFrame.
  • CWnd::FromHandle() acquires an MFC object pointer from CHandleMap via afxMapHWND().

    At first everything looked nice. But unfortunately I have to admit Steve Horne's observations were right: in different situations (mostly on resizing or moving messages) some of the FluentUI items were not painted correctly (various Ribbon item painting issues).

    So, a better solution is needed.

    A good but not perfect solution
    From my research, for projects based on the MFC Feature Pack there is no perfect solution for this issue – I mean something similar to the good solutions mentioned at the beginning of this article, which work fine only up to the last IDE without MFC Feature Pack support.
    As we have seen above, trying to subclass a window whose HWND is already mapped is not a good idea.
    The solution is based on an idea from Joseph M. Newcomer, a well-known book author and Microsoft Visual C++ MVP. Joe proposes a "temporary" remapping only for the case we need – in my case, the painting actions. For the rest of the actions, the mapping process inside the framework continues in the classic way. It's a "gross and ugly" solution, but until a better one comes from Microsoft or others, I consider it fine for my needs.

  • The first step is to define a class CMDIClientWnd, derived from CWnd, and add WM_PAINT and WM_ERASEBKGND handler methods.
  • Catch the WM_PAINT message in CMainFrame via PreTranslateMessage(), before the message is dispatched for execution, and call our redraw method.
  • Here is the RedrawClientArea() public method.
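A sketch of what it can look like (member names follow this article; the demo project may differ in details):

[code]
void CMainFrame::RedrawClientArea()
{
    CMDIClientWnd wndClient;                           // our CWnd-derived painting class
    HWND hWndMDIClient = m_wndClientArea.GetSafeHwnd();

    m_wndClientArea.Detach();                          // remove the framework object from the permanent map
    wndClient.Attach(hWndMDIClient);                   // temporarily map the HWND to our object

    wndClient.Invalidate();
    wndClient.UpdateWindow();                          // WM_PAINT / WM_ERASEBKGND handled by CMDIClientWnd

    wndClient.Detach();                                // undo the temporary mapping
    m_wndClientArea.Attach(hWndMDIClient);             // restore the framework object
}
[/code]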

    So we create a local instance of CMDIClientWnd and attach it internally to CHandleMap::m_permanentMap via Attach(), but not before detaching m_wndClientArea (a CMDIClientAreaWnd instance, an attribute of CMDIFrameWndEx which, as we have seen before, subclasses the MDI client window in CMDIFrameWndEx::OnCreateClient()).

    The idea is that our CMDIClientWnd instance temporarily replaces the m_wndClientArea instance of CMDIClientAreaWnd right before the actual WM_PAINT message is dispatched via PreTranslateMessage().

  • Include your new class header (e.g. MDIClientWnd.h) in MainFrm.cpp and call RedrawClientArea() in CMainFrame::OnSize().
  • If the child frame windows are not tabbed-style (where the whole client area is hidden) and the client area is still visible, then we have to call the RedrawClientArea() method from the WM_MOVE and WM_SIZE handlers of CChildFrame, and we have to include MainFrm.h in ChildFrame.cpp.
  • Additionally, to make sure the paint message is received by the main frame at application startup and your image is painted correctly from the beginning, call pMainFrame->Invalidate() after pMainFrame->UpdateWindow() in the InitInstance() method of your application class. Otherwise, if your application starts with no open document (for instance a new document), your picture will appear only when a WM_PAINT message is generated in CMainFrame (for instance when you resize the application, select a menu, etc.).
  • A disadvantage of this approach is that the message of interest (WM_PAINT) is not handled inside the class of m_wndClientArea, but the good point is that the rest of the messages are left to the correct framework classes and will work correctly.
    Demo application


    Several C++ singleton implementations

      This article offers some insight into the singleton design pattern.
      The singleton pattern is a design pattern used to implement the mathematical concept of a singleton, by restricting the instantiation of a class to one object. The GoF book describes the singleton as: “Ensure a class only has one instance, and provide a global point of access to it.”
      The singleton design pattern is not as simple as it appears at first look, and this is proven by the abundance of singleton discussions and implementations. That's why I'm presenting a few implementations, some based on C++11 features (smart pointers and locking primitives such as mutexes). I start from maybe the most basic singleton implementation, point out its different weaknesses, and gradually add better implementations.
      The basic idea of a singleton class implies using a static private instance, a private constructor and an interface method that returns the static instance.

      Version 1
      Maybe the most common and simplest approach looks like this:
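A minimal sketch of this naive approach (class and member names are illustrative only):

[code]
class simpleSingleton
{
    simpleSingleton() {}                        // private default constructor

    static simpleSingleton* pInstance;          // the one and only instance

public:
    static simpleSingleton* getInstance()
    {
        if (!pInstance)
            pInstance = new simpleSingleton;    // created on first use, never deleted here
        return pInstance;
    }
};

simpleSingleton* simpleSingleton::pInstance = nullptr;
[/code]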

      Unfortunately, this approach has several issues. Even if the default constructor is private, because the copy constructor and the assignment operator are not declared private, the compiler generates them and the following calls are valid:

      So we have to declare the copy constructor and the assignment operator with private visibility.

      Version 2 – Scott Meyers version
      Scott Meyers, in his Effective C++ book, proposes a slightly improved version in which getInstance() returns a reference instead of a pointer, so the problem of eventually deleting the pointer disappears.
      One advantage of this solution is that the function-static object is initialized when the control flow first passes its definition.
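A sketch of the Meyers approach (the class name otherSingleton matches the error messages quoted below; the rest is illustrative):

[code]
class otherSingleton
{
    otherSingleton() {}
    ~otherSingleton() {}                                // private: clients cannot delete it

    otherSingleton(const otherSingleton&);              // declared private, not implemented
    otherSingleton& operator=(const otherSingleton&);   // declared private, not implemented

public:
    static otherSingleton& getInstance()
    {
        static otherSingleton instance;                 // constructed on first use
        return instance;
    }
};
[/code]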

      The destructor is private in order to prevent clients that hold a pointer to the singleton object from deleting it accidentally. Also, this time creating a copy of the object is not allowed:

      [code]error C2248: 'otherSingleton::otherSingleton' : cannot access private member declared in class 'otherSingleton'
      error C2248: 'otherSingleton::~otherSingleton' : cannot access private member declared in class 'otherSingleton'[/code]

      but we can still use:

      This singleton implementation was not thread-safe until the C++11 standard. In C++11, thread-safe initialization and destruction of such function-local statics is enforced by the standard.

      If you’re sure that your compiler is 100% C++11 compliant then this approach is thread-safe. If you’re not such sure, please use the approach version 4.

      Multi-threaded environment
      Both implementations are fine in a single-threaded application, but in the multithreaded world things are not as simple as they look. Raymond Chen explains here why C++ statics are not thread-safe by default, a behavior consistent with the C++98 standard.
      A singleton instance is a shared global resource, and as such it is open to race conditions and threading issues. So the singleton object is not immune to them.
      Let’s imagine the next situation in a multithreaded application:

      At the very first access, a thread calls getInstance() and pInstance is null. The thread reaches the second line (2) and is ready to invoke the new operator. It might just happen that the OS scheduler interrupts the first thread at this point and passes control to another thread.
      That thread follows the same steps: calls the new operator, assigns pInstance in place, and gets away with it.
      After that, the first thread resumes; it continues execution at line (2), so it reassigns pInstance and gets away with it, too.
      So now we have two singleton objects instead of one, and one of them will leak for sure. Each thread holds a distinct instance.

      An improvement to this situation is a thread locking mechanism, and we have one in the new C++11 standard, so we no longer need POSIX or OS-specific threading primitives. A locked getInstance() for the pointer-based implementation looks like:
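A sketch of such a locked getInstance() (it assumes a static std::mutex member named _mutex added to the class, as referenced below):

[code]
#include <mutex>

simpleSingleton* simpleSingleton::getInstance()
{
    std::lock_guard<std::mutex> lock(_mutex);   // locks _mutex; unlocked when lock goes out of scope
    if (!pInstance)
        pInstance = new simpleSingleton;
    return pInstance;
}
[/code]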

      The constructor of class std::lock_guard (C++11) locks the mutex, and its destructor unlocks the mutex. While _mutex is locked, other threads that try to lock the same mutex are blocked.
      But in this implementation we pay the synchronization overhead for every getInstance() call, and that is not what we need. Each access of the singleton requires acquiring a lock, but in reality we need the lock only when initializing pInstance. If getInstance() is called n times during a program run, we need the lock only the first time.
      Writing a 100% thread-safe C++ singleton implementation is not as simple as it appears, since for many years C++ had no standard threading support. In order to implement a thread-safe singleton we can apply the double-checked locking pattern (DCLP).
      The pattern consists of checking the condition before entering the synchronized code, and then checking it again inside.
      So the first singleton implementation can be rewritten using a temporary object:
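A sketch of the double-checked version (same illustrative class and _mutex member as above):

[code]
simpleSingleton* simpleSingleton::getInstance()
{
    if (!pInstance)                                      // first check, without locking
    {
        std::lock_guard<std::mutex> lock(_mutex);
        if (!pInstance)                                  // second check, under the lock
        {
            simpleSingleton* temp = new simpleSingleton; // build into a temporary first
            pInstance = temp;                            // then publish the pointer
        }
    }
    return pInstance;
}
[/code]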

      This pattern involves testing pInstance for nullness before trying to acquire a lock; only if the test succeeds is the lock acquired, and after that the test is performed again. The second test is needed to avoid a race condition in case another thread happens to initialize pInstance between the moment pInstance was first tested and the moment the lock was acquired.
      Theoretically this pattern is correct, but in practice that is not always true, especially in multiprocessor environments.
      Compilers and processors may reorder writes, and due to such reordering the memory as seen by one processor might look as if the operations were performed in a different order than seen by another processor. In our case, the assignment to pInstance might become visible before the singleton object has been fully initialized.
      Additionally, with the raw (non-smart) pointer implementation, after the first call of getInstance() somebody still has to delete that instance at some point in order to avoid memory leaks.

      Version 3 – Singleton with smart pointers
      Until C++11, the C++ standard didn't have a threading model, and developers needed to use external threading APIs (POSIX or OS-dependent primitives). But finally the C++11 standard has threading support.
      Unfortunately, the first implementation of the new standard in Visual C++ 2010 is incomplete, and threading support is available only starting with the beta version of VS 2011 or the VS 2012 release preview version.
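A sketch of what this version can look like (the class name smartSingleton comes from this article; everything else is an assumption of mine):

[code]
#include <memory>
#include <mutex>

class smartSingleton
{
private:
    static std::shared_ptr<smartSingleton> instance;
    static std::mutex _mutex;

    smartSingleton() {}                                   // private by default; stated explicitly for clarity

    smartSingleton(const smartSingleton&);                // not implemented
    smartSingleton& operator=(const smartSingleton&);     // not implemented

public:
    static std::shared_ptr<smartSingleton> getInstance()
    {
        if (!instance)                                    // double-checked locking, as before
        {
            std::lock_guard<std::mutex> lock(_mutex);
            if (!instance)
                instance.reset(new smartSingleton);       // shared_ptr deletes it automatically
        }
        return instance;
    }
};

std::shared_ptr<smartSingleton> smartSingleton::instance;
std::mutex smartSingleton::_mutex;
[/code]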

      As we know, in C++ class members are private by default, so our default constructor is private too. I added the access specifiers explicitly here in order to avoid any misunderstanding about what is public or protected.
      Finally, feel free to use your special instance (singleton):
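For example (assuming the smartSingleton sketch above; doSomething() is a hypothetical member used for illustration only):

[code]
std::shared_ptr<smartSingleton> p = smartSingleton::getInstance();
p->doSomething();   // hypothetical member call, for illustration only
[/code]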

      And no more worries about memory leaks… 🙂
      Multiple threads can simultaneously read and write different std::shared_ptr objects, even when the objects are copies that share ownership.
      But even this implementation uses the double-checked locking pattern, and it is not optimal to double-check every time.


      Version 4 – Thread safe singleton C++ 11
      To have a thread-safe implementation, we need to make sure that the class's single instance is created only once in a multithreaded environment.
      Fortunately, C++11 comes to our help with two new entities: std::call_once and std::once_flag. Using them with a standard-compliant compiler, we have the guarantee that our singleton is created in a thread-safe way and without memory leaks.
      Invocations of std::call_once on the same std::once_flag object are serialized.
      Instances of std::once_flag are used with std::call_once to ensure that a particular function is called exactly once, even if multiple threads invoke the call concurrently.
      Instances of std::once_flag are neither CopyConstructible, CopyAssignable, MoveConstructible nor MoveAssignable.

      Here is my proposal for a thread-safe singleton implementation in C++11:
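A sketch in that spirit (the class name safeSingleton comes from this article; the demo id parameter and the member names are assumptions):

[code]
#include <memory>
#include <mutex>   // std::call_once, std::once_flag

class safeSingleton
{
    static std::shared_ptr<safeSingleton> instance;
    static std::once_flag onceFlag;

    explicit safeSingleton(int id) { /* the demo parameter would go to a proper constructor */ }

public:
    static std::shared_ptr<safeSingleton> getInstance(int id)
    {
        // the lambda runs exactly once, even if many threads call getInstance() concurrently
        std::call_once(onceFlag, [id]() {
            instance.reset(new safeSingleton(id));
        });
        return instance;
    }
};

std::shared_ptr<safeSingleton> safeSingleton::instance;
std::once_flag safeSingleton::onceFlag;
[/code]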

      The parameter to getInstance() was added for demo reasons only and should be passed to a proper constructor. As you can see, I am using a lambda instead of a normal method.
      This is how I tested my safeSingleton and smartSingleton classes.
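Roughly like this (a sketch; the thread count and output are just for demonstration):

[code]
#include <iostream>
#include <thread>
#include <vector>

void testSafeSingleton()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 20; ++i)
        threads.emplace_back([i]() {
            auto p = safeSingleton::getInstance(i);   // only one thread actually creates it
            std::cout << "thread " << i << " -> " << p.get() << std::endl;
        });

    for (auto& t : threads)
        t.join();                                     // wait for all 20 threads
}
[/code]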

      So I create 20 threads, launch them in parallel and wait for them (std::thread::join); each thread accesses getInstance() (with a demo id parameter). Only one of the threads trying to create the instance succeeds.
      Additionally, if you're using a fully C++11-compliant compiler, you could also delete the copy constructor and the assignment operator; this way you get a compiler error when trying to use such deleted members.

      Other comments
      I tested this implementation on a machine with an Intel i5 processor (4 cores). If you see any concurrency issues in this implementation, please feel free to share them here. I am open to other good implementations, too.
      An alternative to this approach is creating the singleton instance of a class in the main thread and passing it to the objects which require it. If we have many singleton objects this might not look so nice, but the dependencies can be bundled into a single 'Context' object which is then passed around where necessary.

      Update: According to Boris’s observation I removed std::mutex instance from safeSingleton class. This is not necessary anymore because std::call_once is enough to have thread safe behavior for this class.

      Update 2: Following Ervin's and Remus's observations, in order to make things clearer I simplified implementation version 3, and it no longer uses std::weak_ptr.

      References:
      just::thread – Anthony Williams – Just Software Solutions Ltd
      C++ and the Perils of Double-Checked Locking by Scott Meyers and Andrei Alexandrescu
      Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu (a Romanian, like me 🙂)


    Flexible changes for product version properties – Visual C++ binaries

    Manually editing binary file versions in the Visual Studio resource editor is not a viable solution. If we have dozens of projects in our solution, then for each kit build we would need to edit the resource files by hand. Alternatively, we could use a special tool that does this for us.
    Unfortunately, that approach is not the most flexible and could fail.

    To change the binaries' version properties flexibly and to avoid manual edits for each rebuild, we can create and include a header file (version.h) that contains constants for the product version and file version used by our projects' .rc files.

    This file (version.h) has to contain only these constants:
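For example, something along these lines (the names and values are only illustrative; the demo project may use different ones):

[code]
// version.h - the single place where the version numbers are edited
#pragma once

#define FILE_VERSION          4,0,0,0
#define FILE_VERSION_STR      "4.0.0.0"

#define PRODUCT_VERSION       4,3,2,198
#define PRODUCT_VERSION_STR   "4.3.2.198"
[/code]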

    Then, in each .rc file, wherever we have FileVersion and ProductVersion we have to use these constants.
    When we build a new kit, we only have to change these constants and then start the kit building process. Everything is fine until we add new controls to our projects' resource files. Then, because of the Visual Studio IDE automation, we can get an unpleasant surprise: the FileVersion and ProductVersion properties could be reset to 1.0.0.0.

    In order to avoid this issue and edit the version only in a single place I propose the following workaround:

  • Create a version.h header file that contains only these constants (as above). I recommend creating it in the solution root so that it can easily be included in all projects.
  • Include this file in the projects where you need it.
  • Use a text editor (e.g. notepad.exe) and add the following code section at the end of your project's .rc2 resource file (res\your_project.rc2) – this section also includes the version.h file.
  • Edit the "040904e4" block with the same data you would otherwise edit in the resource editor, and use the constants defined in version.h. As you can see in my example, for the FileVersion and ProductVersion properties I use my version.h constants; these properties will not have to be edited here anymore.
  • Delete the "// Version" section from the default resource file your_project.rc (including the comments – recommended).
  • Insert the following lines into the your_project.rc file after "3 TEXTINCLUDE BEGIN" and before "#define _AFX_NO_SPLITTER_RESOURCES\r\n":
  • That code block looks like this:

    Don’t forget to edit .rc2 file name with the right file name of your project.

  • In the your_project.rc file, the section "// Generated from the TEXTINCLUDE 3 resource." has to contain only the following declaration:
  • The rest of the section's lines have to be deleted.

  • Save both resource files: your_project.rc and your_project.rc2.
  • Rebuild the project and check the newly generated binary's properties. In FileVersion we will have the major version (in my case 4.0.0.0) and in ProductVersion the current build version (4.3.2.198).

    Observations
    Once you apply these steps, editing the product version properties from the Visual Studio resource editor will no longer be possible (only by editing the files as text or with an external text editor). If we didn't define something special in our project's String Table, we will see only IDS_ABOUTBOX.

    Demo application – AutoProductVersion


    Dynamic popup and drop down menus for custom representations

    Many applications allow dynamic customization of visual objects or data views. For instance, the well-known Internet Explorer provides toolbar customization using a popup menu that appears when the user right-clicks in the toolbar area.

    Internet Explorer sample menu

    Another example where this kind of menu is very useful is customizing how database data is represented in Windows controls like a list control or a grid control. Such applications allow data filtering and showing/hiding columns using this kind of menu: the user just right-clicks on the control header and gets what he needs.

    Starting from this idea, I implemented a class, CDynamicPopupMenu, that allows easy building of this kind of menu. I used it in a demo dialog-based application, over a list control.

    my demo application

    Internally, this class uses an STL container (std::map) holding a data structure that embeds the menu items' properties. When the menu is built, the menu's behavior is driven by these properties.

    Add new menu item
    The method that adds a new menu item has the following definition:
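A plausible shape for it, reconstructed from the parameter descriptions below (the real declaration in CDynamicPopupMenu may differ):

[code]
void AddItem(int item_id, int parent_id, bool is_visible, bool check_flag,
             bool has_child, const std::wstring& item_name, bool enable_flag);
[/code]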

    where:

  • item_id – the internal item ID; this ID is also used for menu customization;
  • parent_id – the parent item ID, used when we define a new item sub-group (a drop-down menu); the value is 0 if the menu item is part of the initial menu;
  • is_visible – this flag says whether an item is checked / unchecked. In my demo application this flag is set to true for all the list control columns that we want to display. For the "Select All" and "Check All" items this flag is false, because we want to create a new subgroup with column entries, but we don't have actual "Select All" or "Check All" columns;
  • check_flag – this flag enables the check/uncheck menu property;
  • has_child – if true, allows the definition of a subgroup (a new drop-down menu);
  • item_name – the Unicode menu item name;
  • enable_flag – defines whether the item is enabled or disabled.

    Add separator item
    The method that adds a separator item is defined like this:

    where:

  • item_id – menu item ID;
  • parent_id – the ID of the parent item of the subgroup the separator is added to; the value is 0 if the menu item is part of the initial menu.

    Sample of adding menu items
    In my demo application, in the CtestPopupMenuDlg::SetDefaultMapValues(void) method, among other things, you can find the following calls:

    Get menu internal data
    In order to access the internal data container (std::map) that stores all the dynamic menu items, you can just use the following method:

    followed by:

    Create and display menu
    The menu must be created only after we have added all the menu items. The menu is displayed only after the TrackPopupCustomMenu() call. The definition of this method looks like this:
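A plausible shape, based on the description below (the real declaration may differ):

[code]
// returns the ID of the selected menu item, or 0 if nothing was selected
DWORD TrackPopupCustomMenu(POINT point, HWND hWnd);
[/code]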

    where:

  • point – the mouse coordinates where the menu is built;
  • hWnd – the parent window handle in which the menu is created.

    The function's return value is the ID of the selected menu item; if no item was selected, the function returns 0.
    In my demo application, menu creation is triggered from the list control's right-click notification handler (NM_RCLICK).

    As you can see, I’m calling TrackPopupCostumMenu(), using mouse point property when the user right-click over list control.
    I am saving list control handler, selected item ID and WM_NOTIFY value into a pointer to message notification structure NMHDR. Then I’m passing this pointer to OnNotify() method.
    Using WM_NOTIFY message and OnNotify() method, I inform parent control window that a new event was generated.

    I call GetItemCheckedFlag() in order to detect the selected item's check status (checked / unchecked). Then I negate this bool flag and call the SetCheckedItemFlag() method. Finally, this produces changes in my list control, depending on the menu command (the FillData() method).

    Menu interaction with parent window (list control)
    In my demo application, the interaction between the dynamic menu and the list control is handled by the FillData() method.
    In order to use CDynamicPopupMenu's internal container data, we need to initialize a DynamicMenuData pointer with the value returned by GetDynamicMenuData().

    Using that pointer to the internal menu data, I iterate over the internal container and, for those items that have the visible and selected flags set to true, I insert columns into my list control.
    Similarly, when using such menus, the application can apply filters on real data.
    The CDynamicPopupMenu class contains other useful methods as well. This kind of menu can be used in different situations in order to change an application's behavior.

    Download demo application: testPopupMenu (Visual C++ 2005 project)


    Versionable Object’s Serialization in MDI applications

    This article is a follow-up to the previous one, "Versionable Object's Serialization using MFC in non Document View applications". In that article I presented a way to solve incompatibility issues between different file versions of the same application, based on MFC serialization, in a dialog-based application.
    But dialog-based applications are not the best place to use and apply MFC serialization.
    Applications based on the document/view architecture (MDI or SDI) are the best choice when we want to develop an MFC application with serialization support.
    The MFC document/view architecture offers support for automatically saving and loading documents to/from a file, using a serialization mechanism. MDI (Multiple Document Interface) and SDI (Single Document Interface) applications come with a basic default serialization mechanism.

    SerAddressBookMDI Main Window

    The serialization is customizable. It's important to define the right binary element format, the file version and the element count. Finally, we have to implement the Serialize() method.
    In a document/view application, some document class methods are mapped to the New, Open, Save and Save As items available in the File menu. The application's user can use these commands to create or open files, track document status changes and serialize data to/from a file.
    MDI applications create a CDocument-derived class instance for each open document. SDI applications reuse the same single CDocument-derived class instance for each open file.
    In an MDI application, the CDocument class and the classes derived from it are responsible for controlling the serialization of the internal objects. The document class tracks each change that appears in the document; this way, the application knows that changes have been made if we try to close the application without saving the latest modifications.
    When a document is loaded, a CArchive instance is created for reading the file's internal data. When we save a document, a CArchive store instance is created and used for the store-to-file process.
    The CArchive routines are heavily optimized in order to provide a viable store/load mechanism, even when we serialize a huge number of small items.

    In my demo application, I used the same idea as in my last article: an address book with two versions.

    In the current application the serialization process is very different from the old application. The serialization is performed by a CDocument-derived class instance that interacts with the rest of the application classes. The place of the CAddressBook class was taken by the document class, CSerAddressBookMDIDoc.

    In a real application it's recommended to use a unique identifier (UID) in order to "detect" the right object. For simplicity, in my demo application this unique identifier is the "name" attribute. For instance, I'm using it for the contact update process.

    Document class – CSerAddressBookMDIDoc

    The interface of the document class looks like this:

    As you can see, this time I'm using the DECLARE_DYNCREATE() macro. This macro allows dynamic creation of document objects at runtime (an MDI application requirement).
    In this class I reused some of CAddressBook's methods. These methods handle the objects from the m_cContactsList list.
    ContactList is an alias for our Contacts MFC list:
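Presumably something along these lines (the exact template arguments are an assumption; it relies on Contact providing a copy constructor and an assignment operator):

[code]
typedef CList<Contact, Contact&> ContactList;
[/code]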

    The serialization method of this document class is listed below:
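A simplified sketch of its shape (member names follow this article; the exact schema handling in the demo project may differ, and Contact is assumed to be copyable):

[code]
void CSerAddressBookMDIDoc::Serialize(CArchive& ar)
{
    if (ar.IsStoring())
    {
        ar << m_nFileVersionSchema;                       // the file version goes first
        ar << (int)m_cContactsList.GetCount();            // then the item count

        POSITION pos = m_cContactsList.GetHeadPosition();
        while (pos != NULL)
        {
            Contact contact = m_cContactsList.GetNext(pos);
            contact.Serialize(ar);                        // store each Contact
        }
    }
    else
    {
        ar >> m_nFileVersionSchema;                       // read the stored file version
        m_cContactsList.RemoveAll();                      // clean the current list

        int nCount = 0;
        ar >> nCount;                                     // how many contacts were stored
        while (nCount-- > 0)
        {
            Contact contact;
            contact.Serialize(ar);                        // load one Contact
            m_cContactsList.AddTail(contact);
        }
    }
}
[/code]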

    This method reads (loads) or writes (stores) serialized Contact objects using a CArchive object at runtime. If the code flow takes the true branch, all the information from our list is saved to the file. If it takes the else branch, it means that we are loading an existing file and all the file data is loaded into our list.

    In order to store data in a file, I first save the file version (m_nFileVersionSchema) and the item count. Then I iterate over all m_cContactsList items (of type Contact) and serialize them in order to store them in the new file.

    If I want to load data from a file, I read the file version, clean my list, get the stored Contact item count and, as long as this count is positive, serialize (with the load flag) all the Contact data from the file.
    All serialized Contact entities go into the m_cContactsList list. Each time we want to display the file's data, we have to iterate over this list.

    Internal serialized class – Contact

    As you have seen in the CSerAddressBookMDIDoc::Serialize() method, in both situations (store/load) a Contact instance is created and passed to Contact::Serialize() for the load/store operation.
    The method that serializes Contact items looks like this:

    If I want to store my data to a file, I obtain a pointer to my serialized class's runtime class in order to set the file version schema. Then, depending on the file version, I serialize the right object data and finally I restore the initial schema value of the runtime class.

    If I am loading a file, I call the Contact::Serialize() method, get the file version schema and, depending on the schema value, add the right data to my document class.

    View class – CSerAddressBookMDIView
    This class is responsible for the graphical representation of the document class content (the loaded file's data). In my demo application, the view class is derived from CListView and has the report style set, in order to display the data in a grid-like layout.

    The main method responsible for populating the client window's list control is CSerAddressBookMDIView::PopulateList(), listed below:

    First, we obtain a pointer to our current document. Then we call the method that inserts the right list control columns, depending on the file version (the CreateViews() method).
    We obtain a CListView pointer and a reference to the first contact in the contacts list. Then, as long as we have elements, we iterate over the list's elements (in a while() loop) and insert the data into our list control.

    The PopulateList() method is called from the overridden CSerAddressBookMDIView::OnUpdate() method. OnUpdate() is called by the MFC framework whenever the document changes.
    The original OnUpdate() method is called by CDocument::UpdateAllViews() and is implemented in the CView class.

    In order to add/remove/update records in our documents, I created a special dialog, launched from the application's menu.
    The method that displays this modal dialog is listed below:

    Because my dialog window has to interact with the contacts list of the current document, I have to pass my document class pointer (dlg.SetAddressDocument(pDoc)) to the dialog window class. If the dialog is closed using the Exit button (IDOK), then the view is refilled using a PopulateList() call.

    CManipulateDataDlg class

    This class is responsible for managing the document's contact list items. The difference between this dialog class and the dialog class from the last article is that this one is not responsible for the load/store process; that role was taken over by the document/view architecture.

    Dialog’s control list population method looks like this:

    Each time, we clean the contact list and obtain a reference to the beginning of the document's contacts list. Depending on the file version schema (1 or 2), the dialog's controls are customized. Then we iterate over the contact list elements (ContactList) and insert the data into the list control.

    MDI support for many file extension

    By default, MDI applications come with support for a single file type and a single file extension.
    Sometimes our applications need to support different file formats and more file extensions. My demo application needs to support two file formats and two file versions: version 1 (*.sab1) and version 2 (*.sab2).
    At the same time, the application must support converting the old file format to the new one and vice versa.
    You can find detailed information about multiple file type support in document/view MFC applications in Microsoft KB 141921. Another useful reference you can find here.
    Starting from these references, my application supports two file formats. Below are some important changes that I made in my initialization method, CSerAddressBookMDIApp::InitInstance().

    The first point I should mention, after the LoadStdProfileSettings() call (a function written by the MFC wizard), is the initialization of the m_pDocManager attribute (a pointer to the CDocManager class, used for document template management) with a new CMultiDocManager object (a class defined by me according to Microsoft Knowledge Base 141921). The CMultiDocManager class overrides some methods from CDocManager: CreateNewDocument(), DoPromptFileName(), OnFileNew().

    Then, besides the default application document template (with resource ID IDR_SerAddressBookTYPE), I create two new templates for my two different file formats.
    All templates are added to my document template list (AddDocTemplate()). The last significant change in InitInstance() concerns using the right frame window (IDR_SerAddressBookTYPE – it contains the Save and Save As options).

    Conclusion:
    The Multiple Document Interface (MDI) architecture is the best fit for this kind of data container application. The MFC framework offers stable and complete support for object serialization: the storing and loading process.
    Many of the Microsoft Office applications are based on this architecture.

    Download demo application: SerAddressBookMDI (Visual C++ 2005 project)


    Versionable Object’s Serialization using MFC in non Document View applications

    Most applications operate with data that must be stored and loaded at different times and in different locations. The data is stored in text or binary files with a well-defined format.
    The Problem
    Initially, in version 1.0, an application operates with data structures that can be stored and loaded. But in the next version (2.0) these data structures suffer changes: some attributes are added and others may be removed. These changes alter the file format and structure when a new file version is saved.
    Question: what happens when you are using application version 2.0 and you try to load files in the old format (version 1.0)?
    Answer: in most cases there will be incompatibility trouble between the new application and the files in the old format. This can throw exceptions, and the application can exhibit undefined behavior.
    That's why the application must be written so that it is able to open both file versions.
    Solution
    There are many solutions, more or less professional, to this compatibility issue. The recommended one is serialization.
    Serialization is the process of writing/reading objects to/from persistent storage. It is a good choice for maintaining a well-defined data structure. Many frameworks offer serialization support; one of them is the Microsoft Foundation Classes – MFC.

    If we want to use MFC serialization support, we can use a CArchive instance. This object, combined with a CFile instance, provides a strong mechanism for object serialization.

    Because the file structure changes significantly between application versions, we have to use the MFC serialization concept called a versionable schema.
    A versionable schema means using the CArchive methods GetObjectSchema() and SetObjectSchema(), together with the constant VERSIONABLE_SCHEMA (which you can find in the afx.h file, with the value 0x80000000) combined through a logical OR with the latest application version number, as a parameter of the IMPLEMENT_SERIAL macro.
    The GetObjectSchema() method is used to detect the version of the objects stored in a file loaded into our application. Its complement, the SetObjectSchema() method, allows us to set the objects' version.
    Unlike the standard C++ I/O streams, the CArchive class is specially designed for object serialization to binary files.

    In order to serialize a class’s objects we have to follow next steps:
    1. The class that we want to serialize has to be derived from the abstract class CObject (or other classes derived from CObject).
    2. Overwrite CObject’s Serialize() method.
    3. Use DECLARE_SERIAL macro in your class declaration.
    4. The serializable class has to have a default constructor, without arguments.
    5. Use IMPLEMENT_SERIAL macro in the implementation file of serializable class.

    You can find more information here and in the links on that page.

    But from these steps to a complete, versionable serialization application there are a few more significant steps to follow.
    Next, I will present a dialog-based sample application that supports serialization and is versionable.

      Sample application – SerAddressBook

    Next, I will show how you can create an address book application (based on the MFC dialog application architecture).

    Suppose that initially our client requested an address book that contains: name, first name, address and phone number. But with the spread of mobile phones and the Internet, our client now needs two new fields: one for the mobile phone number and one for the email address.
    File versions structure

    Our application, using file version 2, looks like this:
    Application window

    Because this is a demo application, I kept in my window the possibility to save both versions, using two radio buttons.
    A good application design helps us when we have new requirements and have to change the application structure. The code changes should be made with as little interaction with existing code as possible – ideally, by adding code only.
    That's why my application's class design looks like this:
    Classes Hierarchy

    Although both the Contact class and the CAddressBook class are serializable, the object serialization is implemented in the Contact class.

      Contact class

    From Contact’s interface class you can observe:
    • I derived this class from the abstract CObject;
    DECLARE_SERIAL, macro calling;
    Serialize()‘s method declaration in order to overwrite the parent class;
    • Our class attributes.

    The last line represents an "alias" definition for an MFC list, used to store the displayed data. This list is used for Contact object administration.
    In the implementation file we add the IMPLEMENT_SERIAL macro and initialize the static variable with our current application version.
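The macro call described here presumably looks like this (class and base class names follow this article; 2 is the current demo version):

[code]
IMPLEMENT_SERIAL(Contact, CObject, VERSIONABLE_SCHEMA | 2)
[/code]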

    In this declaration you can observe the VERSIONABLE_SCHEMA constant combined through a logical OR with 2 (my demo application's latest version). This third macro argument is essential for object versioning, combined with CArchive::GetObjectSchema() and CArchive::SetObjectSchema().
    You can find more details about this constant, how it is used, and about these methods here.
    The implementation of Contacte::Serialize() looks like this:
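Below is a sketch of the idea behind a versionable Serialize(), using the attribute names assumed above; the actual demo listing may differ (for instance, the demo can also save the old format via the radio buttons, while this simplified version always stores the latest one). GetObjectSchema() returns the schema written for the object, provided the object was read through the archive (e.g. via operator>> or CArchive::SerializeClass()):

void Contact::Serialize(CArchive& ar)
{
    if (ar.IsStoring())
    {
        // store: write every field of the current (latest) format
        ar << m_strName << m_strFirstName << m_strAddress << m_strPhone
           << m_strMobile << m_strEmail;
    }
    else
    {
        // load: ask the archive which schema (file version) this object was saved with
        UINT nVersion = ar.GetObjectSchema();

        ar >> m_strName >> m_strFirstName >> m_strAddress >> m_strPhone;

        if (nVersion >= 2)          // the mobile and email fields exist only from version 2 on
            ar >> m_strMobile >> m_strEmail;
    }
}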

If the CArchive constructor sets the store/load flag to CArchive::store (save to file), the code flow follows the first branch and the object's data is sent to the archive and stored in the file (including the file version).
When we want to open an existing file, the CArchive constructor receives the CArchive::load flag and Serialize() enters the else branch. It extracts the file version and after that loads the Contact objects.

      CAddressBook class

The CAddressBook class makes the link between the interface dialog class (CSerAddressBookDlg) and the serialized Contact class. It contains a list of Contact objects, administrates this contact list, and performs the load/store operations.

The interface of this class looks like this:
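A sketch of such an interface (apart from m_cContactsList, which is named in the text, the method and member names are assumptions; ContactList is the typedef from the Contact sketch above):

// SerAddressBook.h: sketch only
class CAddressBook : public CObject
{
    DECLARE_SERIAL(CAddressBook)
public:
    CAddressBook();
    virtual void Serialize(CArchive& ar);

    void AddContact(Contact& contact);                  // add a new contact
    void UpdateContact(int nIndex, Contact& contact);   // update an existing contact
    void RemoveContact(int nIndex);                     // remove a contact

private:
    ContactList m_cContactsList;                        // the Contact objects list
    UINT m_nFileVersion;                                // file version to store (1 or 2)
};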

In this declaration you can see a contact list instance (m_cContactsList). The class also contains methods for adding, updating and removing contacts.

Because our class has to be serializable, we have to override the Serialize() method; this method will be used by the client class (in our case the interface class, CSerAddressBookDlg).

Because the CArchive class doesn't provide any method or attribute for obtaining the number of stored objects (which I need to know when loading), I decided to save the objects count into my files. So, when storing, the count is written to the archive first.
The same goes for the file version, which is written before the Contact objects themselves are serialized.
Then, in a while loop, I iterate over the contact list, serializing each Contact and storing it to the new file.
If I load a file from disk (the else branch), I follow these steps (a combined sketch of Serialize() follows the list):
• clean the contact list;
• get the objects count;
• get the file version and serialize (load) all the objects;
• add all the data to my Contact object list.
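Putting it together, a sketch of how such a CAddressBook::Serialize() might look (member names are assumptions; CArchive::SerializeClass() is used here so that Contact::Serialize() can query the object schema on load):

void CAddressBook::Serialize(CArchive& ar)
{
    if (ar.IsStoring())
    {
        ar << (UINT)m_cContactsList.GetCount();         // objects count
        ar << m_nFileVersion;                           // file version

        POSITION pos = m_cContactsList.GetHeadPosition();
        while (pos != NULL)
        {
            Contact& contact = m_cContactsList.GetNext(pos);
            ar.SerializeClass(RUNTIME_CLASS(Contact));  // writes the class info and schema
            contact.Serialize(ar);                      // writes the contact's fields
        }
    }
    else
    {
        m_cContactsList.RemoveAll();                    // clean the contact list

        UINT nCount = 0;
        ar >> nCount;                                   // get the objects count
        ar >> m_nFileVersion;                           // get the file version

        for (UINT i = 0; i < nCount; ++i)
        {
            Contact contact;
            ar.SerializeClass(RUNTIME_CLASS(Contact));  // reads the class info; enables GetObjectSchema()
            contact.Serialize(ar);                      // loads the fields for that schema
            m_cContactsList.AddTail(contact);           // add the data to the contact list
        }
    }
}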

      CSerAddressBookDlg – The application interface class

Once this serialization mechanism is implemented, using it in our application becomes very easy.

For instance, when the user wants to save all the new data into a file, a method like the following is called:
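(a sketch of such a save handler; the handler, file-name and member names are assumptions:)

void CSerAddressBookDlg::OnBnClickedSave()
{
    CFile file;
    if (!file.Open(m_strFileName, CFile::modeCreate | CFile::modeWrite))
        return;                               // the file could not be created

    CArchive ar(&file, CArchive::store);      // archive attached to the file, in store mode
    m_cAddressBook.Serialize(ar);             // writes count, version and all contacts

    ar.Close();                               // close the store operation
    file.Close();
}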

As you can see, I have a CFile object that I use, together with a CArchive instance, to store the data to a file. My local CArchive instance receives as parameters the address of the file object and the CArchive::store flag.
Next, I call the CAddressBook::Serialize() method and close the store operation.

The file loading method, based on the same serialization mechanism, looks like this:
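(again a sketch; only PopulateList() is named in the text, the other names are assumptions:)

void CSerAddressBookDlg::OnBnClickedLoad()
{
    CFile file;
    if (!file.Open(m_strFileName, CFile::modeRead))
        return;                               // the file could not be opened

    CArchive ar(&file, CArchive::load);       // archive attached to the file, in load mode
    m_cAddressBook.Serialize(ar);             // enters the load (else) branch

    ar.Close();                               // disconnect the archive from the file
    file.Close();

    PopulateList();                           // refresh the CListCtrl with the loaded data
}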

As you can see, I create a local CFile object needed for the read operation. I also create a local CArchive instance that receives as constructor parameters the file object's address and the CArchive::load flag.
Then I call the CAddressBook::Serialize() method, which enters the else branch, and finally the archive is disconnected from the file.
The last line contains the PopulateList() call, my populating method. It fills my list control (a CListCtrl instance) with the data loaded from the file in order to display it in the dialog.

    Conclusions:
MFC's Document/View architecture offers complete serialization support, and every MDI/SDI application contains it by default. The demo solution presented here is an adaptation of that serialization for dialog-based applications.

    Download demo application: SerAddressBook (Visual C++ 2005 project)


    Progress database operations

    Preliminary remarks

    Application path (Sun Solaris Unix OS): /myApp/myapp111a/
    Database location: /myApp/db/test_db

Usually our workstations run Windows and we need to connect to the Solaris Unix OS over SSH; that's why we use the PuTTY application.
    User: root
    Password: xxxxxxxxxx

The Samba daemon must be available and running (in order to support Windows shares and mapped drives).
In this sample, the database is in the /myApp/db/test_db path and it is called total. Basically, the database file names don't change, only the folder name (instead of test_db). In my example, the application port for this database is 2540 (it can be changed).

Creating an empty database and access rights
– Go to the folder that contains the databases and create the folder that will contain the new database:

    – Create the new database called total:

– Assign rights to the database folder (test_db):

    – Additionally, we can check database permissions:

Starting the database
– Check the open ports, especially the ports already used by databases:

– Start the database using the established port (which must be an unused port value):
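(for example, with the database path from the preliminary remarks and the same proserve syntax shown later in the restart steps:)
# /progress/dlc/bin/proserve /myApp/db/test_db/total -S 2540 -N TCP -L 300000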

    Loading database content
The database and dump loading process is done using the Data Administration tool, available in Progress's Windows suite. Its path is: START -> Programs -> PROGRESS -> Data Administration.

Launch this application and select the Database menu, Connect option. The Connect Database window will appear and we complete it as in the next image. After that we press the OK button.

    Connect Database window

In this capture you can see the database name, the network protocol (TCP), the Progress server's IP (192.168.42.10) and our open database port (2540).
Once successfully connected to the database, all the application's functionality becomes available to us.

If we want to load the database definitions, we go to the menu: Admin -> Load Data and Definitions -> Data Definitions (*.df file). We select the table definitions file from the local disk and press the OK button.

Dump files store the database information. If we want to load the data content, we follow the Admin -> Load Data and Definitions -> Table Contents (*.d file) path. Press the Select Some… button, select all tables (*), then press OK.

    Select Tables window

After pressing the OK button we get a new window where we have to enter the folder path that contains the dump data files. Attention! This operation takes time.

    Progress database link to myApp

At this point, following the previous steps, we already have the database with its structure and content.
– Create a configuration file with the .pf extension (e.g. mydb.pf) in the /myApp/myApp111/Total/Pf/ path:

– Edit the newly created file using a text editor (e.g. vi, pico, mc), changing the first line:

    like this:

2540 is the new port that we will use for our database.

– The last step is creating the link on our workstation. We create a new desktop shortcut to the prowin32 application (the \Program Files\PROGRESS\bin\ path) and change the shortcut properties: map a disk drive to the folder that stores the total server folder: right click My Network Places and select Map Network Drive…, select a free drive letter (e.g. N:), enter \\192.168.42.10\myApp111 in the Folder field (myApp111 is the folder where our application is stored), and finally press the Finish button.

Then change the shortcut properties so that our application configuration file is used:
"\Program Files\PROGRESS\bin\prowin32" -pf \total\pf\mydb.pf

As you can see, the path on the Sun server starts from the total folder and points to the mydb.pf file. The path is not fully qualified because it relies on the mapped drive setting applied above.

The Unix server's myApp111 folder is the one that contains the application that uses the Progress database.

    Create a database backup

On my system the Unix cron daemon runs a backup script every evening. But sometimes it's necessary to make a backup in the middle of the day, before a specific administrative task.
    The steps that you have to follow are:
    – Stop current database:

    – Build the backup:

    – Restart the database with the specific port (ex. 2540):
    # /progress/dlc/bin/proserve /myApp/baze/test_db/total -S 2540 -N TCP -L 300000

    Restoring a database backup
My backup location on the server is the /hitachi/backup/ path. The cron script generates the backups in this location.
The backup files can exist in two forms:
1. .bk extension (e.g. ronb2005.12.05.bk), meaning an uncompressed backup;
2. .gz extension (e.g. ronb2005.11.30.bk.gz), meaning a compressed backup.

In order to restore a database backup we follow these steps:
– Create a new folder (e.g. myrest) in the /myApp/baze/ path and copy there the backup that we want to restore (e.g. ronb2005.11.30.bk.gz):

– If the backup is compressed, run the restore commands:

– If the backup file is uncompressed (the first form above), we don't need the gzip command:

    – Assign rights to database:

Then follow the Starting the database procedure described above.

    Stop the database
In order to stop a database we have to follow these steps:
– Check which port is used by the currently started database:

This is an additional, optional step.
    – Execute the stop command:

    Restart a database
    First, when we want to restart a database we have to check that all the database clients are disconnected.

This command shows whether there are connected users and, if so, their names. We have the possibility to force the clients' disconnection; instead, we back out of this action by pressing the X key and contact the users, asking them to disconnect. After a few minutes we check again that all clients have disconnected.

– When all the clients are disconnected, we run the brute-force stop command:

– Run the restart command:
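(the same proserve command used above when restarting the database after a backup, e.g.:)
# /progress/dlc/bin/proserve /myApp/baze/test_db/total -S 2540 -N TCP -L 300000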

Compress/decompress a database backup

In order to compress a database backup we have to follow these steps:
– Go to the folder where the backup is saved (e.g. /hitachi/backup/):

    – Execute compress command:
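(assuming the backup file name used below:)
# gzip saved_back_up.bk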

Then we wait for the process to finish. At the end of this command we get a saved_back_up.bk.gz file with a smaller size.

In order to decompress a compressed backup we have to run the following command:
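(again assuming the file name from above:)
# gzip -d saved_back_up.bk.gz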

After this command completes successfully we have the original database backup file, saved_back_up.bk. Then we follow the database restore procedure described above.
