C++11: enable_if

std::enable_if is another feature taken from the Boost C++ library that now ships with every C++11 compliant compiler.

As its name says, the template struct enable_if enables a function only if a condition (passed as a type trait) is true; otherwise, the function is undefined. In this way, you can declare several “overloads” of a method and enable or disable them depending on the requirements you have. The nice part is that the disabled functions will not be part of your binary code, because the compiler will simply ignore them.

Continue reading “C++11: enable_if”

C++11: unordered maps

The STL ships with a sorted map template class that is commonly implemented as a balanced binary tree.

The good things about this are the fast average search time (O(log2 N) if implemented as a balanced binary tree, where N is the number of elements stored in the map) and that, when the map is iterated, the elements are retrieved following an established order.

C++11 introduces an unordered map implemented as a hash table. The good thing about this is the constant average access time for each element (O(1)), but the bad things are that the elements are not retrieved in order and that the memory footprint of the whole container can (I am just speculating here) be greater than the map’s one.

Continue reading “C++11: unordered maps”

C++: Smart pointers, part 1

This is the first of several posts I wrote related to smart pointers:

  1. Smart pointers
  2. unique_ptr
  3. More on unique_ptr
  4. shared_ptr
  5. weak_ptr

Memory management in C is error-prone because keeping track of every bunch of bytes allocated and deallocated can be really confusing and stressful.

Although C++ has the same manual memory management as C, it provides some additional features that make this management easier:

  • When an object is instantiated on the stack (e.g. Object o;), the C++ runtime ensures the destructor of that object is invoked when the object goes out of scope (the end of the enclosing block is reached, a premature ‘return’ is found or an exception is thrown), thus releasing all memory and resources allocated for the object.
  • (Ab)using the operator overloading feature, we can create classes that simulate the behavior of pointers. Such classes are called smart pointers.

Continue reading “C++: Smart pointers, part 1”

C++: Primitive types

A primitive type is a data type where the values that it can represent have a very simple nature (a number, a character or a truth-value); the primitive types are the most basic building blocks for any programming language and are the base for more complex data types. These are the primitive types available in C++:


bool

It is stored internally in one byte, and a variable of this type can represent the values true or false. All boolean operations return a value of this type. This type was not available in early C, so many operations that return integer values instead of boolean ones can be used as boolean expressions (in that case, the compiler assumes that 0 represents false and any non-zero value represents true). For example, the following two code excerpts have the same semantics:

int a = 2;
if (a != 0) //a != 0 evaluates to a boolean value. In this case, it evaluates to true
  printf("a is different than 0\n");


int a = 2;
if (a) //a is an "int", but since it is different than 0, the compiler evaluates it as true
  printf("a is different than 0\n");


char

It is stored internally as one byte and represents a character. When this data type was created, there was no immediate need for international character support in the language, so one byte was enough to store the whole set of characters needed to write anything in English. As the use of computers evolved and became worldwide, support for international characters became necessary, and new character encoding standards were defined. These standards made a new character data type necessary, because one byte is not enough to represent all the symbols used in human languages (Chinese glyphs alone, for example, number more than 40000). Despite this, char is still used as the standard character data type, and a lot of legacy code still uses character strings based on char; encoding schemes exist that can store international characters as sequences of char values. UTF-8, for example, stores each Unicode character in 1, 2, 3 or 4 chars.


wchar_t

It is a wide-character type: it represents a character, but it is stored internally using 16 or 32 bits instead of the 8 bits of the char type. How many bits a value of this type uses depends on the computer architecture, the operating system and the C++ compiler being used; commonly, Windows uses 16-bit wide characters and the UNIXes use 32-bit ones. The encoding used to represent a wchar_t character is not defined by the standard; that decision was deliberately left to the compiler. Both char and wchar_t can be treated as integer data types, so the programmer can perform arithmetic operations on values of these types. When created, wchar_t was not a built-in data type but just a type alias (typedef); current compilers treat it as a built-in type by default, but the user can tell the compiler to treat it as a typedef again (this is needed to support legacy code as well).


short and unsigned short

It is a “short integer”: an integer with less precision than a “full-blown int”. Although short generally represents a signed integer with 16-bit precision (meaning it can represent values between -32768 and 32767), the exact precision was left to the compiler implementor. unsigned short is the unsigned version of the same 16-bit precision integer, representing values between 0 and 65535.


int and unsigned int

It is the most common integer data type and was meant to represent a processor “word”: on 16-bit platforms it used to be a 16-bit precision integer, and on 32-bit platforms it is a 32-bit precision number. This “rule” was broken when 64-bit hardware became available, where int still has 32-bit precision; that means it can store numbers between -2147483648 and 2147483647, or between 0 and 4294967295 when using the unsigned int version instead.

long and unsigned long

They represent “long integer numbers” and their precision depends on the compiler and the OS. On 16-bit OSes they used to represent 32-bit precision integers; on 32-bit hardware they also represent 32-bit precision numbers; and on 64-bit OSes they have 32-bit precision on Windows and 64-bit precision on the UNIXes.

long long and unsigned long long

They represent 64-bit integers.


float

It represents a single-precision floating-point number. It is stored in 32 bits (as defined in IEEE 754) and can represent magnitudes between roughly 1.18×10^-38 and 3.4×10^38, with around 7 decimal digits of precision.


double

It represents a double-precision floating-point number. It is stored in 64 bits and can represent magnitudes between roughly 2.2250738585072014×10^-308 and 1.7976931348623157×10^308, with approximately 16 decimal digits of precision.

C99 exact-width integer types

C99 also introduced a set of exact-width integer types that represent signed and unsigned integers with precisions of 8, 16, 32 and 64 bits, independently of the compiler, OS or processor architecture. They are:

  • 8-bit precision: int8_t and uint8_t
  • 16-bit precision: int16_t and uint16_t
  • 32-bit precision: int32_t and uint32_t
  • 64-bit precision: int64_t and uint64_t

These exact-width integer types are not built-in types; they are just aliases (typedefs) of the primitive types described above. They are still not supported by all compilers (for example, Microsoft introduced stdint.h, the library header that declares them, only in Visual Studio 2010).

C++: ‘auto’

The new C++ standard is being released and it brings a lot of new features. Starting with this post, I will write about these new features, how they are used and their benefits.

Some bits of history: the last published C++ standard is called C++03 and was published, oh surprise!, in 2003. Since then, a lot of compilers have implemented the features described by that standard. The standard committee continued working on the next standard and codenamed it C++0x; the ‘x’ meant it should have been published between 2003 and 2009, but it was not released within those dates. So people started to joke that the ‘x’ means TEN in Roman notation, and Bjarne Stroustrup himself joked that the ‘x’ stands for any hexadecimal value ;)
Continue reading “C++: ‘auto’”

C++: Inheritance on template specialization

When you implement something like:

/* 1 */ template <typename T> class MyClass { ... }; //template class for every T

/* 2 */ template <> class MyClass<char> { ... };  //template class full specialization for T=char

You are telling the C++ compiler: “Please, use the first template for all the MyClass<T> instances with any T, except for the MyClass<T> instances where T = char”.
Continue reading “C++: Inheritance on template specialization”

C++: First message

#include <iostream>
#include <string>

using namespace std;

int main()
{
  string greeting = "Ifmmp!xpsme\"";
  for (string::const_iterator i = greeting.begin(); i != greeting.end(); ++i)
       cout << static_cast<char>((*i) - 1);
  cout << endl;
  return 0;
}

Hello! In this blog I want to publish some interesting information about C++ and other related topics (object-oriented programming, generic programming, etc.). The idea is for this information to be useful and very accurate; so, since I am a fan of these topics but far from being an expert on them, I want you all to help me improve the accuracy of every post I write. Your comments and suggestions will be really appreciated and will let me make the content of this blog better and better every day. Welcome, and thank you for reading me.