Thursday 5 June 2008

Use more than 4GB RAM with Win32

If you need to allocate more than 2 GB of RAM, some of it will probably be used for caching: undo levels, data retrieved over the network, calculation results and so on. The Windows API actually has a feature that can help you use as much RAM as possible, even more RAM than a Win32 process can address directly, if you're running a 64-bit operating system. The trick is to use the CreateFile function with the flags FILE_ATTRIBUTE_TEMPORARY and FILE_FLAG_DELETE_ON_CLOSE. Adding FILE_FLAG_RANDOM_ACCESS can also be beneficial.
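
For illustration, a minimal sketch of such a CreateFile call could look like this (the file name, helper name and error handling are just placeholders, not a definitive implementation):

    #include <windows.h>
    #include <stdio.h>

    /* Sketch: create a temporary cache file. FILE_ATTRIBUTE_TEMPORARY hints
       that the data should stay in the system cache (RAM) if possible, and
       FILE_FLAG_DELETE_ON_CLOSE removes the file when the last handle is
       closed. */
    HANDLE CreateCacheFile(void)
    {
        HANDLE h = CreateFileA(
            "cache.tmp",                  /* placeholder file name          */
            GENERIC_READ | GENERIC_WRITE,
            0,                            /* no sharing                     */
            NULL,                         /* default security attributes    */
            CREATE_ALWAYS,
            FILE_ATTRIBUTE_TEMPORARY |    /* keep in cache if RAM allows    */
            FILE_FLAG_DELETE_ON_CLOSE |   /* delete when handle is closed   */
            FILE_FLAG_RANDOM_ACCESS,      /* hint about the access pattern  */
            NULL);                        /* no template file               */

        if (h == INVALID_HANDLE_VALUE)
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return h;
    }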

From the Win32 documentation: "Specifying the FILE_ATTRIBUTE_TEMPORARY attribute causes file systems to avoid writing data back to mass storage if sufficient cache memory is available, because an application deletes a temporary file after a handle is closed. In that case, the system can entirely avoid writing the data. Although it doesn't directly control data caching in the same way as the previously mentioned flags, the FILE_ATTRIBUTE_TEMPORARY attribute does tell the system to hold as much as possible in the system cache without writing and therefore may be of concern for certain applications."

FILE_FLAG_DELETE_ON_CLOSE is useful to ensure that the file no longer exists on the hard disk once the application has exited or been aborted. The Win32 file APIs support large files, so there is basically no limit to the amount of data stored this way.
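
As a hedged sketch of how data could be stored at offsets beyond 4 GB (the helper name and sizes are made up; the handle is assumed to come from a CreateFile call like the one above):

    #include <windows.h>

    /* Sketch: write a block of data at a 64-bit offset in the temporary
       file. The offset may be far beyond 4 GB; the file grows as needed. */
    BOOL StoreBlock(HANDLE hFile, unsigned __int64 offset,
                    const void *data, DWORD size)
    {
        LARGE_INTEGER pos;
        DWORD written;

        pos.QuadPart = (LONGLONG)offset;
        if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN))
            return FALSE;
        return WriteFile(hFile, data, size, &written, NULL) && written == size;
    }

    /* Example: store a 1 MB block at the 5 GB mark.                        */
    /* StoreBlock(hCache, 5ULL * 1024 * 1024 * 1024, buffer, 1024 * 1024);  */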

Compared to AWE, this method works on all Win32 platforms, doesn't use more RAM than makes overall sense for the PC's current workload, and doesn't require the application to run with the "Lock Pages in Memory" privilege.

3 comments:

Patrick said...

The nice thing about this trick is also that switching views (using MapViewOfFile) is far cheaper than switching AWE views.
With AWE, switching a view halts the entire process, while with MMF (memory-mapped files) only the PTEs (page table entries) are updated.
Note, however, that MapViewOfFile is not the whole story - after the mapping is registered, you'll encounter page faults on the first access to the pages in this virtual memory range.
Page faults can become quite costly when the data is not yet in physical memory (a round trip to disk occurs, which is a blocking operation for the calling thread).
(You can apply a few tricks here, like read-ahead buffering or pre-'hitting' the pages involved.)
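
A rough sketch of such a view swap plus pre-'hitting' could look like this (not actual production code; the window size and names are made up, and a real engine needs the thread safety mentioned below):

    #include <windows.h>

    #define VIEW_SIZE (64 * 1024 * 1024)   /* 64 MB window */

    /* Sketch: map a window of a large file mapping (from CreateFileMapping)
       at a 64-bit offset, then touch each page so the page faults happen up
       front instead of in the middle of real work. The offset must be a
       multiple of the 64 KB allocation granularity. */
    void *MapAndTouchView(HANDLE hMapping, unsigned __int64 offset)
    {
        SYSTEM_INFO si;
        SIZE_T i;
        volatile unsigned char sink = 0;
        unsigned char *view = (unsigned char *)MapViewOfFile(
            hMapping, FILE_MAP_ALL_ACCESS,
            (DWORD)(offset >> 32),          /* high 32 bits of the offset */
            (DWORD)(offset & 0xFFFFFFFF),   /* low 32 bits of the offset  */
            VIEW_SIZE);
        if (view == NULL)
            return NULL;

        GetSystemInfo(&si);
        for (i = 0; i < VIEW_SIZE; i += si.dwPageSize)
            sink += view[i];                /* pre-'hit' each page         */
        (void)sink;

        return view;   /* call UnmapViewOfFile(view) when swapping it out */
    }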

Once the data is in physical memory, however, it beats AWE hands down!

Now, the real challenge is building a thread-safe view-swapping engine, so the memory can actually be used from within the virtual memory of a 32-bit application!

And if you wonder: Yes, I've done this for my employer EveryAngle.com, making it possible to work in-memory with (parts of) databases sized 100+ GiB!

Cheers!

Patrick said...

For people investigating this further, I would like to warn against using this under WOW64:
it works, but because of the mismatch between Win32 and Win64 page sizes, page faults are more than twice as costly (according to our measurements).
You're better off building a native 64-bit application in this case. (For the Delphi-inclined, FPC comes to mind.)

Lars D said...

You don't have to use memory mapping in order to use memory like this - just read the file content into preallocated memory. For undo levels, cached data etc., this makes a lot of sense.
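
A minimal sketch of that simpler approach, assuming the same temporary file handle as in the post (the helper name and sizes are made up):

    #include <windows.h>

    /* Sketch: read a previously stored block back into a buffer that was
       allocated once up front, e.g. when an undo level is needed again. */
    BOOL LoadBlock(HANDLE hFile, unsigned __int64 offset,
                   void *buffer, DWORD size)
    {
        LARGE_INTEGER pos;
        DWORD bytesRead;

        pos.QuadPart = (LONGLONG)offset;
        if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN))
            return FALSE;
        return ReadFile(hFile, buffer, size, &bytesRead, NULL) && bytesRead == size;
    }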