We have no idea how much memory the compressed cache is using at a given moment, sometimes not even after a fresh boot (since we might have already compressed some pages). Our goal is to improve the meminfo output with the compressed cache memory consumption, including its data structures, so we know our memory cost to the whole system. We could also fiddle with free to display this sort of information when available.
The design documentation is outdated, since we now compress page cache pages, compress clean pages, don't remove pages when a page fault is serviced, and so on.
After page cache support is implemented, it would be very nice to rename some files and regroup the functions, following the new standards.
One of the main reasons LZO was removed from Compressed Cache is its heavy dependency on user-level code. If we get rid of this dependency, we may add it back to be tested.
Our current code was written in such a way that we cannot have a compressed cache bigger than the size allowed by normal zone allocation, something like 890 MB. It may be interesting to support bigger sizes.
Marc-Christian Petersen has asked us to have a look at the incompatibility between compressed cache and preempt/lock-break. There's a reproducible system freeze when both options are selected at the same time. Petersen takes the following steps to reproduce it:
- cd /usr/src
- rm -rf linux-2.4.18
- tar xzpf linux-2.4.18.tar.gz

The system freezes while untarring the file.
When we service a page fault, the LRU list is not rearranged to reflect the correct ordering. This happens in the latest version, after we implemented the feature that does not remove the page from Compressed Cache when a fault is serviced.
Given the statistics for our 0.21 version, it has become important to add page cache support, to widen the range of pages that get compressed and hopefully improve overall system performance.
Since we may sleep to get a comp cache entry, the buffer can be used by another entry, forcing us to compress the current entry again. A larger number of page buffers could decrease the number of recompressions.
The main virtual swap table (vswap_address) needs to be adaptable along with the whole compressed cache system. Maybe a solution would be something like dynamic tables (as mentioned below).
The hash table size should be set as a function of the maximum number of compressed pages, to avoid hurting performance; it has been a serious bottleneck in our recent performance tests. Currently the fragment hash table is sized as a function of CONFIG_COMP_CACHE_SIZE (and has to be a power of 2), but it would be much more interesting to have it adaptable, not restricted to power-of-2 sizes. An adaptable hash table (maybe using dynamic tables) would also allow us to impose no upper limit on compressed cache size, which is interesting, mainly for adaptivity.
The functions that consume the most time when the compressed cache is used are our AVL-tree-related functions. We should replace them with some sort of buddy-algorithm table or with a hash table, because insertions and removals would then be O(1) instead of O(lg n) as in the AVL case.
This Freed bit is not nice and adds lots of special cases all over the code.
In comp_cache_use_address() (mm/comp_cache/free.c), we use the set_pte function but don't flush the TLB entries first, which may cause trouble on architectures that have to flush the TLB. So far we haven't noticed problems, because on i386 that is not needed.
The 2.4.16-0.20pre2 version of Compressed Cache does not currently work under User Mode Linux (at least not up to 2.4.16-2um). There's a problem regarding ptes and the virtual swap addresses used by our code. We don't know why some bits are not correctly updated, which messes everything up when a pte faults on a stale address.
Page last updated on "Wed Jul 31 06:26:53 2002"
Send feedback to Rodrigo S. de Castro <firstname.lastname@example.org>