Tag: c++

Entries for tag "c++", ordered from most recent. Entry count: 150.

Warning! Some information on this page is older than 6 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.


# str_view - null-termination-aware string-view class for C++

Sun, 19 Aug 2018

tl;dr I've written a small library, which I called "str_view - null-termination-aware string-view class for C++". You can find code and documentation on GitHub - sawickiap/str_view. Read on to see the full story behind it...

Let me disclose my controversial beliefs: I like the C++ STL. I think that any programming language needs to provide built-in strings and containers to be called modern and suitable for developing large programs. But of course I'm aware that careless use of classes like std::list or std::map makes a program very slow due to the large number of dynamic allocations.

What I value the most is RAII - the concept that memory is automatically freed whenever the object that holds it by value is destroyed. That's why I use std::unique_ptr all over the place in my personal code. Whenever I create and own an array, I use std::vector, but when I just pass it to some other code for reading, I pass a raw pointer and the number of elements - myVec.data() and myVec.size(). Similarly, whenever I own and build a string, I use std::string (or rather std::wstring - I like Unicode), but when I pass it somewhere for reading, I use a raw pointer.

There are multiple ways a string can be passed. One is a pointer to the first character plus the number of characters. Another is a pointer to the first character plus a pointer just past the last character - a pair of iterators, also called a range. These two can be trivially converted into each other. Of the two, I prefer pointer + length, because the number of characters is needed slightly more often than the pointer past the end.
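
To make the two conventions concrete, here is a minimal sketch (the function names are made up just for this example):

#include <cstdio>
#include <cstddef>

// Convention 1: pointer to the first character + number of characters.
void PrintByLength(const char* str, size_t length)
{
    printf("%.*s\n", (int)length, str);
}

// Convention 2: a pair of pointers (iterators) - begin and one past the end.
void PrintByRange(const char* begin, const char* end)
{
    // The trivial conversion back to pointer + length.
    PrintByLength(begin, (size_t)(end - begin));
}

int main()
{
    const char text[] = "Hello, world";
    PrintByLength(text, 5);            // Prints "Hello".
    PrintByRange(text + 7, text + 12); // Prints "world".
}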

But there is another way of passing strings, common in C and C++ programs - just one pointer to a string that needs to be null-terminated. I think that null-terminated strings are one of the worst and most stupid inventions in computer science. Not only does it exclude '\0' from the set of characters available in string content, it also makes calculating the string length an O(n) operation. It also creates opportunities for security bugs. Still, we have to deal with it, because that's the format most libraries expect.

I came up with an idea for a class that would encapsulate a reference to an externally-owned, immutable string, or a piece thereof. Objects of such a class could be used to pass strings to library functions instead of, e.g., a pointer to a null-terminated string or a pair of iterators. They can then be queried for length(), indexed to access individual characters, etc., as well as asked for a null-terminated copy using the c_str() method - similar to std::string.

Code like this already exists - e.g. C++17 introduces the std::string_view class. But my implementation has a twist that I'm quite happy with, which made me call my class "null-termination-aware". My str_view class remembers not only the pointer and length of the referred string, but also the way it was created, so it can avoid unnecessary operations and lazily evaluate only those that are actually requested.
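
To give an idea of how such a class can be used, here is a hypothetical sketch. It assumes the single header is named str_view.hpp and that the class offers constructors from a string literal and from std::string, plus the length() and c_str() methods mentioned above - check the project documentation for the exact API:

#include "str_view.hpp" // the single header from sawickiap/str_view (name assumed)
#include <cstdio>
#include <string>

// A function that only reads the string - it takes str_view by value.
void PrintGreeting(str_view name)
{
    // c_str() returns a null-terminated string. If the view was created from
    // one already, no copy is needed; otherwise a copy is made lazily.
    printf("Hello, %s! Your name has %zu characters.\n",
        name.c_str(), (size_t)name.length());
}

int main()
{
    // From a string literal - already null-terminated, length computed lazily.
    PrintGreeting(str_view("World"));

    // From std::string - pointer and length are taken directly.
    std::string s = "Alice";
    PrintGreeting(str_view(s));
}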

If you consider such a class useful in your C++ code, see the GitHub - sawickiap/str_view project for the code (it's just a single header file), documentation, and an extensive set of tests. I share this code for free, under the MIT license. Feel free to contact me if you find any bugs or have any suggestions regarding this library.

Comments | #productions #libraries #c++ Share

# Check of CommonLib using PVS-Studio

Mon, 13 Nov 2017

Courtesy of the developers of PVS-Studio, I could use this static code analysis tool to check my home projects. I blogged about the tool recently. It's powerful and especially good at finding issues with 32/64-bit portability. This time I analyzed CommonLib - a library I have been developing and using for many years, which contains various facilities for C++ programming. That allowed me to find and fix many bugs. Here are the most interesting ones I came across:

 

 

 

Comments | #commonlib #c++ #pvs-studio Share

# Ported CommonLib to 64 Bits

Sun, 12 Nov 2017

I recently updated my CommonLib library to support 64-bit code. This is a change I had planned for a very long time. I wrote this code so many years ago that 64-bit compilation on Windows was not yet popular. Today we have the opposite situation - a modern Windows application or game could be 64-bit only. I decided to make the library work in both configurations. The way I did it was:

  1. Created 64-bit configurations in Visual Studio project.
  2. Went over all the code. Made changes required to work correctly in 64-bit configuration.
  3. Compiled the code - made sure it's compiling successfully.
  4. Fixed everything that compiler reported as warnings.
  5. Made sure tests are passing.
  6. Scanned the code with PVS-Studio static code analysis tool. I covered this point in a separate blog post.

The most important and also most frequent change I needed to make was to use the size_t type for every size (like a number of bytes), index (for indexing arrays), count (number of elements in some collection), length (e.g. number of characters in a string), etc. This is the "native" type intended for that purpose, and as such it's 32-bit or 64-bit depending on configuration. It is also the type returned by the sizeof operator and by functions like strlen. Before making this change, I carelessly mixed size_t with uint32.

I sometimes use the maximum available value (all bits set) as a special value, e.g. to communicate "not found". Before the change I used the UINT_MAX constant for that. Now I use SIZE_MAX.
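
As an illustration (a made-up example, not actual CommonLib code), a typical before/after of this change looks like this:

#include <climits>
#include <cstdint>
#include <cstring>

// Before: 32-bit types for the length, the index, and the "not found" value.
uint32_t FindCharOld(const char* str, char ch)
{
    uint32_t len = (uint32_t)strlen(str); // Silently truncates a very long string.
    for(uint32_t i = 0; i < len; ++i)
        if(str[i] == ch)
            return i;
    return UINT_MAX; // "Not found".
}

// After: size_t for the length and the index, SIZE_MAX as the "not found" value.
size_t FindChar(const char* str, char ch)
{
    size_t len = strlen(str); // strlen already returns size_t.
    for(size_t i = 0; i < len; ++i)
        if(str[i] == ch)
            return i;
    return SIZE_MAX; // "Not found".
}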

I know there is a group of developers who advocate using signed integers for indices and counts, but I'm among the proponents of unsigned types, just as the C++ standard library does. In those very rare situations where I need a signed size, the correct solution is to use the ptrdiff_t type. One such case is the concept of a "stride", which I blogged about here (Polish) and here.
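
For example (an illustrative sketch, not code from the library), a row stride can legitimately be negative when an image is stored bottom-up, which is exactly where the signed ptrdiff_t fits:

#include <cstddef>
#include <cstdint>

// Sums all pixels of an 8-bit image. rowStride is the distance in bytes between
// the beginnings of consecutive rows. It can be negative when the image is
// stored bottom-up, so it must use a signed type: ptrdiff_t.
uint64_t SumPixels(const uint8_t* firstRow, size_t width, size_t height, ptrdiff_t rowStride)
{
    uint64_t sum = 0;
    for(size_t y = 0; y < height; ++y)
    {
        const uint8_t* row = firstRow + (ptrdiff_t)y * rowStride;
        for(size_t x = 0; x < width; ++x)
            sum += row[x];
    }
    return sum;
}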

I have a hierarchy of "Stream" classes that represent a stream of bytes that can be read from or written to. Derived classes offer a unified interface for a buffer in memory, a file, and many others. Before the change, all the offsets and sizes in my streams were 32-bit, and I needed to decide how to change them. My first idea was to use size_t. It seems a natural choice for MemoryStream, but it doesn't make sense for FileStream, because files can exceed 4 GB regardless of the 32/64-bit configuration of my code - such files were not supported correctly by this class. I eventually decided to always use 64-bit types for stream size and cursor position.
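
The resulting interface could look roughly like this (a simplified, illustrative sketch - not the actual CommonLib class):

#include <cstdint>
#include <cstddef>

// Sizes and cursor positions are always 64-bit, regardless of whether the code
// is compiled as 32-bit or 64-bit, so files larger than 4 GB work correctly.
class Stream
{
public:
    virtual ~Stream() { }
    virtual uint64_t GetSize() = 0;
    virtual uint64_t GetPos() = 0;
    virtual void SetPos(uint64_t pos) = 0;
    // Sizes of individual reads and writes still use size_t - a single buffer
    // has to fit in memory anyway.
    virtual size_t Read(void* data, size_t size) = 0;
    virtual void Write(const void* data, size_t size) = 0;
};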

There are cases where my usage of size_t in the library interface can't be carried down into the implementation, because the underlying API supports only 32-bit sizes. That's the case with zlib (a data compression library that I wrapped into stream classes in the ZlibUtils.* files), as well as BSTR (the string type used by Windows COM, which I wrapped into a class in the BstrString.* files). In those cases I just put asserts in the code checking that the actual size doesn't exceed the 32-bit maximum before downcasting. I know it's not a safe solution - I should report an error there (throw an exception), but well... That's only my personal code anyway :)

BstrString::BstrString(const wchar_t *wcs, size_t wcsLen)
{
    (...)
    assert(wcsLen < (size_t)UINT_MAX);
    Bstr = SysAllocStringLen(wcs, (UINT)wcsLen);

As for other 64-bit compatibility issues, the only one was the following macro:

/// Assert that ALWAYS breaks the program when false (in debugger - hits a breakpoint, without debugger - crashes the program).
#define	ASSERT_INT3(x) if ((x) == 0) { __asm { int 3 } }

The problem is that Visual Studio offers the inline assembler in 32-bit code only. I needed to use the __debugbreak intrinsic function from the intrin.h header instead.
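
The portable version then looks something like this (my reconstruction based on the description above, not necessarily the exact code now in CommonLib):

#include <intrin.h>

/// Assert that ALWAYS breaks the program when false, in both 32-bit and 64-bit builds.
#define ASSERT_INT3(x) if((x) == 0) { __debugbreak(); }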

Overall, porting this library went quite fast and smoothly, without any major issues. Now I just need to port the rest of my code to 64 bits.

Comments | #visual studio #c++ #commonlib Share

# New Version of PVS-Studio

Thu, 09 Nov 2017

PVS-Studio is a great commercial static code analyzer. I blogged about it a few years ago. The latest version is much more powerful. It supports C, C++, and C#, works on Windows and Linux, and runs as a standalone app as well as a plugin for Visual Studio (including the latest version, 2017).

PVS-Studio is particularly good at finding issues with code portability between 32-bit and 64-bit. Among my personal projects, I have already ported CommonLib to 64 bits, and RegScript2 was written to support 64 bits from the start, but porting my main app (a music visualization program) to 64 bits is a large task that is still on my TODO list. Even though I know how to write portable code (use size_t, not int, etc. :), I made the first commits to that repository 8 years ago, when I knew much less about programming, so I'm sure there are many nasty bugs there. Making it work as a 64-bit app will be a difficult task, and I'm sure PVS-Studio will help me with it. I will share my experiences and conclusions when I eventually do it.

In the meantime, I recommend checking their blog, where the developers of this tool share a lot of valuable information. They also maintain a list of articles describing errors they have found in open source projects.

Comments | #tools #c++ #pvs-studio #visual studio Share

# Visual C++: IntelliSense Versus Macros

Thu, 17 Aug 2017

When you code in C++ using Visual Studio, you may run into the following problem: your code uses preprocessor directives that depend on some macro defined elsewhere, e.g. in one of the CPP files that include the header you are writing, so IntelliSense gets lost and stops working, or even completely grays out that part of your code as inactive. For example:

// Some code...

#ifdef EXTERNALLY_DEFINED_MACRO

// Some code where IntelliSense stops working...

#endif

I just found a solution to that. It turns out there is a special macro predefined when the code is processed by Visual Studio IntelliSense, called simply __INTELLISENSE__. By using it, you can change parts of your code as seen by the IntelliSense parser, e.g. define some macros, without influencing the logic seen by the compiler. For example:

#ifdef __INTELLISENSE__
#define EXTERNALLY_DEFINED_MACRO
#endif

// Some code...

#ifdef EXTERNALLY_DEFINED_MACRO

// Some more code where IntelliSense is working again...

#endif

Comments | #c++ #visual studio Share

# Microsoft Visual Studio 2017 - My Experience

Thu, 30 Mar 2017

Visual Studio 2017 came out recently. The list of news looks like it was written by marketing rather than technical people. It starts with: "Unparalleled productivity for any dev, any app, and any platform. Use Visual Studio 2017 to develop apps for Android, iOS, Windows, Linux, web, and cloud. Code fast, debug and diagnose with ease, test often, and release with confidence. You can also extend and customize Visual Studio by building your own extensions. Use version control, be agile, and collaborate efficiently with this new release!" - I've never seen so many buzzwords in one paragraph.

The rest of the page is no different. They even call their installer "a new setup experience". They've also introduced "Lightweight Solution load", which is disabled by default - as if everyone is assumed to prefer the slower option :) Some other changes: "Visual Studio starts faster, is more responsive, and uses less memory than before." - that's an unexpected direction. "Performance improvement: basic_string::operator== now checks the string's size before comparing the strings' contents." - wow, that's genius! They should file a patent for that ;) I hope they do the same for std::vector and other STL containers.

OK, jokes aside: I've installed it on my personal PC, it installed quite fast, and it works well. It preserved my settings, like the list of Include and Library Directories. Upgrading my home projects went smoothly, without any problems.

There are many changes valuable for native code developers. The "What's New for Visual C++ in Visual Studio 2017" page mentions over 250 bug fixes, other compiler improvements, and improved support for C++11, 14, and 17. I've already heard stories of programs running much faster after recompilation with this new compiler.

Contrary to what I thought before, Microsoft didn't abandon the Graphics Diagnostics embedded in MSVS after they released the new standalone PIX. They've actually added some new features to it.

So I definitely recommend upgrading to Visual Studio 2017. It is IMHO the best C++ IDE, and the new version is just the next step in the right direction.

It seems that there is no new version of the "Microsoft Visual C++ Redistributable Package" this time. Programs compiled with VS 2017 use VCRUNTIME140.DLL, just like in the 2015 version.

Comments | #c++ #visual studio Share

# Operator New and Delete - Unnecessary Conditions

Fri, 09 Dec 2016

I've seen the following constructs many times in C++ code:

MyClass* obj = new MyClass();
if(obj == nullptr)
    // Handle allocation error.

// ...

if(obj != nullptr)
    delete obj;

Both of these conditions are unnecessary. Strictly speaking they are not bugs - the program will run correctly - but they make no sense. If you tend to write either of them, you should know that:

1. Operator new doesn't return null on a failed allocation. By default it throws an exception of type std::bad_alloc in that case, so this is what you should handle if you really care about the state of your program after it runs out of memory (or if the allocated object is particularly big).

There is also a special nothrow version of the new operator that returns null on failure, but you must request it explicitly. An alternative would be to overload the new operator (globally or for a particular class) to change its behavior. But again, this is not what happens by default.
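
A minimal sketch showing both behaviors side by side:

#include <new>
#include <cstdio>

struct MyClass { char data[256]; };

void Example()
{
    // Default new: throws std::bad_alloc on failure, never returns null.
    try
    {
        MyClass* obj = new MyClass();
        delete obj;
    }
    catch(const std::bad_alloc&)
    {
        printf("Allocation failed.\n");
    }

    // Nothrow new: returns null on failure, so here the check does make sense.
    MyClass* obj2 = new(std::nothrow) MyClass();
    if(obj2 == nullptr)
        printf("Allocation failed.\n");
    else
        delete obj2;
}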

Note that this is different behavior than the malloc function from C. Obviously there are no exceptions in the C language, so that function just returns null on failure.

2. Operator delete doesn't crash when you pass a null pointer to it - it just does nothing, so the check for not-null is already inside and you don't have to write it. Of course, trying to delete an object at a non-null address that was already freed, or at an invalid address, still crashes or causes other undefined behavior.

Note that this is the same behavior as the free function from C - it also accepts a null pointer.

Comments | #c++ Share

# How to disable C++ exception handling using macros?

Thu, 28 Jul 2016

Some time ago my colleague showed me a clever way to disable exception handling in C++ using a set of preprocessor macros:

#define throw
#define try          if(true)
#define catch(...)   if(false)

Please note that:

1. throw is defined as an empty macro, so the exception object is still constructed, but it becomes a discarded temporary instead of being thrown.
2. try expands to if(true), so the guarded block always executes.
3. catch(...) is a function-like variadic macro, so it swallows the exception declaration that follows it and expands to if(false), turning the handler into dead code.

So the following code that uses an exception:

try
{
    // Do something
    if(somethingFailed)
        throw std::exception("Message.");
}
catch(const std::exception& e)
{
    // Handle the exception.
}

Will resolve, after defining these three macros, to the following:

if(true)
{
    // Do something
    if(somethingFailed)
        std::exception("Message.");
}
if(false)
{
    // Handle the exception.
}

Of course it's not a solution to any real problem, unless you just want your try...catch blocks to stop working. Disabling exception handling for real (together with its associated performance penalty and binary code size overhead) is a matter of compiler options. And of course this trick leaves errors unhandled, so where an exception would have been thrown, the program will just continue and something bad will happen instead.

But I think the trick is interesting anyway, because it shows how powerful C++ is (it empowers you to do stupid things :)

Comments | #c++ Share

