Discussion:
[jira] [Created] (LOG4J2-431) Create MemoryMappedFileAppender
Remko Popma (JIRA)
2013-10-17 02:05:41 UTC
Remko Popma created LOG4J2-431:
----------------------------------

Summary: Create MemoryMappedFileAppender
Key: LOG4J2-431
URL: https://issues.apache.org/jira/browse/LOG4J2-431
Project: Log4j 2
Issue Type: New Feature
Components: Appenders
Reporter: Remko Popma
Priority: Minor


A memory-mapped file appender may have better performance than the ByteBuffer + RandomAccessFile combination used by the RandomAccessFileAppender.

*Drawbacks*
* The file needs to be pre-allocated, and only up to the file size can be mapped into memory. When the end of the file is reached, the appender needs to extend the file and re-map (see the sketch after this list).

* Remapping is expensive (I think in the single-digit millisecond range; this needs to be verified). For low-latency apps this kind of latency spike may be unacceptable, so careful tuning is required.

* Memory usage: if re-mapping happens too often, the performance benefits are lost, so the memory-mapped buffer needs to be fairly large, which uses up memory.

* At roll-over and shutdown the file should be truncated to the position immediately after the last written data (otherwise the user is left with a log file that ends in garbage).
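
As an illustration of the pre-allocate / re-map / truncate cycle sketched in these bullets, here is a minimal, hypothetical Java sketch. All names ({{MappedRegion}}, {{REGION_LENGTH}}) are made up for illustration; it is not the attached implementation, and it ignores thread safety and the chunking corner cases discussed later in this thread.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

class MappedRegion implements AutoCloseable {
    private static final int REGION_LENGTH = 32 * 1024 * 1024; // assumed region size: 32 MB

    private final RandomAccessFile file;
    private long mappingStart;       // file offset where the current mapping begins
    private MappedByteBuffer buffer;

    MappedRegion(final String fileName) throws IOException {
        file = new RandomAccessFile(fileName, "rw");
        mappingStart = file.length(); // append after any existing content
        remap();                      // pre-allocates (extends) the file and maps the region
    }

    // Expensive step: mapping READ_WRITE beyond the current end grows the file by one region.
    private void remap() throws IOException {
        buffer = file.getChannel().map(FileChannel.MapMode.READ_WRITE, mappingStart, REGION_LENGTH);
    }

    // Assumes bytes.length <= REGION_LENGTH; splitting a write across two regions is the
    // corner case discussed further down in this thread.
    void write(final byte[] bytes) throws IOException {
        if (buffer.remaining() < bytes.length) { // end of the mapped region reached
            mappingStart += buffer.position();   // continue right after the last written byte
            remap();
        }
        buffer.put(bytes);                       // a plain memory copy, no system call
    }

    // At roll-over/shutdown: truncate so the file ends right after the last written byte.
    // Note: truncating while the region is still mapped can fail on some platforms (e.g.
    // Windows); a real implementation has to release the mapping first.
    @Override
    public void close() throws IOException {
        file.getChannel().truncate(mappingStart + buffer.position());
        file.close();
    }
}
{code}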

*Advantages*
Measured on a Solaris box, the difference between flushing to disk (with {{RandomAccessFile.write(bytes[])}}) and putting data into a MappedByteBuffer is about 20x: around 600 ns for a ByteBuffer put versus around 12-15 microseconds for a RandomAccessFile.write.
(Of course different hardware and OS may give different results...)
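
For reference, a rough micro-benchmark along these lines could look like the sketch below. It is not the benchmark behind the numbers above; the file names, payload size, and iteration count are arbitrary assumptions, and a serious measurement would need warm-up and many repetitions.

{code:java}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapVsWriteBench {
    private static final int ITERATIONS = 100_000;
    private static final byte[] PAYLOAD = new byte[256]; // assumed "log event" size

    public static void main(final String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("bench-raf.log", "rw");
             RandomAccessFile mmapFile = new RandomAccessFile("bench-mmap.log", "rw")) {

            final MappedByteBuffer mapped = mmapFile.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, (long) ITERATIONS * PAYLOAD.length);

            long start = System.nanoTime();
            for (int i = 0; i < ITERATIONS; i++) {
                raf.write(PAYLOAD); // one write() system call per event
            }
            final long rafNanosPerOp = (System.nanoTime() - start) / ITERATIONS;

            start = System.nanoTime();
            for (int i = 0; i < ITERATIONS; i++) {
                mapped.put(PAYLOAD); // memory copy into the mapped page-cache region
            }
            final long mmapNanosPerOp = (System.nanoTime() - start) / ITERATIONS;

            System.out.println("RandomAccessFile.write: " + rafNanosPerOp + " ns/op");
            System.out.println("MappedByteBuffer.put:   " + mmapNanosPerOp + " ns/op");
        }
    }
}
{code}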

*Use cases*
The difference may be most visible if {{immediateFlush}} is set to {{true}}, which is only recommended if async loggers/appenders are not used. If {{immediateFlush=false}}, the large buffer used by RandomAccessFileAppender means you won't need to touch disk very often.

So a MemoryMappedFileAppender is most useful in _synchronous_ logging scenarios, where you get the speed of writing to memory but the data is available on disk almost immediately. (Writes to the mapped buffer go directly into the OS page cache.)

In case of an application crash, the OS ensures that all data already in the buffer will be written to disk. In case of an OS crash, the data that was most recently added to the buffer may not be written to disk.

Because by nature this appender would occupy a fair amount of memory, it is most suitable for applications running on server-class hardware with lots of memory available.



Claude Mamo (JIRA)
2013-12-23 09:48:55 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: MemoryMappedFileAppenderTest.xml
MemoryMappedFileManagerTest.java
MemoryMappedFileAppenderTest.java
MemoryMappedFileManager.java
MemoryMappedFileAppender.java

I wanted to play around with {{java.nio.MappedByteBuffer}}, so I wrote an appender using it. See the attached code and tests.
Remko Popma (JIRA)
2013-12-23 14:02:50 UTC

Remko Popma commented on LOG4J2-431:
------------------------------------

Claude, thank you for your contribution!
I've taken a quick look and it looks pretty good.

There may be a few corner cases left when the mapped buffer is nearly full and a new region needs to be mapped.
(For example, if only 4 bytes remain in the buffer and we want to write 10 bytes, we need to make sure the first 4 bytes are written to the old buffer, the buffer is then remapped, and the remaining 6 bytes are written to the new buffer. There may also be a (weird) case where the mapped region is extremely small and the input byte array is larger than the total size of the mapped buffer; in that case we need to write whatever chunk of the input fits in the buffer, remap, and repeat.)
I could be wrong, but could you take another look at these corner cases?

Apart from that it looked pretty good. I hope to be able to spend more time on a more detailed look next weekend or after New Year.
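
A sketch of the chunked-write handling described in this comment might look as follows (hypothetical names: {{buffer}} is the current {{MappedByteBuffer}} and {{remap()}} is assumed to map a fresh region starting at the end of the data written so far). Claude's attached MemoryMappedFileManager implements a loop along these lines.

{code:java}
// buffer: the current MappedByteBuffer of a hypothetical manager class;
// remap(): assumed to extend the file and map the next region.
void write(final byte[] bytes, int offset, int length) throws IOException {
    while (length > 0) {
        if (buffer.remaining() == 0) {
            remap(); // expensive: extend the file and map the next region
        }
        // e.g. 4 bytes left in the old region and 10 bytes to write: chunk = 4 now,
        // then the loop remaps and writes the remaining 6 bytes into the new region.
        final int chunk = Math.min(length, buffer.remaining());
        buffer.put(bytes, offset, chunk);
        offset += chunk;
        length -= chunk;
    }
}
{code}
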
Claude Mamo (JIRA)
2013-12-23 19:01:52 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: (was: MemoryMappedFileManager.java)
Claude Mamo (JIRA)
2013-12-23 19:01:53 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: MemoryMappedFileManager.java
Claude Mamo (JIRA)
2013-12-23 19:07:50 UTC

Claude Mamo commented on LOG4J2-431:
------------------------------------

Hi Remko, I reviewed the cases you mentioned and these should be covered:

{code:title=MemoryMappedFileManager.java|borderStyle=solid}
protected synchronized void write(final byte[] bytes, int offset, int length) {
    int chunk = 0;

    try {
        do {
            // re-map if no room is left in buffer
            if (length > mappedFile.remaining() && chunk != 0) {
                fileSize = randomAccessFile.length();
                mappedFile = randomAccessFile.getChannel().map(FileChannel.MapMode.READ_WRITE,
                        randomAccessFile.length(), mapSize);
            }

            chunk = Math.min(length, mappedFile.remaining());
            mappedFile.put(bytes, offset, chunk);
            offset += chunk;
            length -= chunk;
        } while (length > 0);
    } catch (final Exception ex) {
        LOGGER.error("RandomAccessFileManager (" + getName() + ") " + ex);
    }
}
{code}

I found a minor bug where unnecessary re-mapping is performed on the first log entry if the write is larger than the map size. I attached an updated version of the MemoryMappedFileManager.
Claude Mamo (JIRA)
2013-12-23 19:13:50 UTC

Claude Mamo edited comment on LOG4J2-431 at 12/23/13 7:13 PM:
--------------------------------------------------------------

Hi Remko, I reviewed the cases you mentioned and these should be covered:

{code:title=MemoryMappedFileManager.java|borderStyle=solid}
protected synchronized void write(final byte[] bytes, int offset, int length) {
    int chunk = 0;

    try {
        do {
            // re-map if no room is left in buffer
            if (length > mappedFile.remaining() && chunk != 0) {
                fileSize = randomAccessFile.length();
                mappedFile = randomAccessFile.getChannel().map(FileChannel.MapMode.READ_WRITE,
                        randomAccessFile.length(), mapSize);
            }

            chunk = Math.min(length, mappedFile.remaining());
            mappedFile.put(bytes, offset, chunk);
            offset += chunk;
            length -= chunk;
        } while (length > 0);
    } catch (final Exception ex) {
        LOGGER.error("RandomAccessFileManager (" + getName() + ") " + ex);
    }
}
{code}

I found a minor bug where the initial buffer is not used if the first log entry exceeds the map size. I attached an updated version of the MemoryMappedFileManager.


Claude Mamo (JIRA)
2013-12-23 19:33:50 UTC

Claude Mamo edited comment on LOG4J2-431 at 12/23/13 7:32 PM:
--------------------------------------------------------------

Hi Remko, I reviewed the cases you mentioned and these should be covered:

{code:title=MemoryMappedFileManager.java|borderStyle=solid}
protected synchronized void write(final byte[] bytes, int offset, int length) {
    int chunk = 0;

    try {
        do {
            // re-map if no room is left in buffer
            if (mappedFile.remaining() < 1) {
                fileSize = randomAccessFile.length();
                mappedFile = randomAccessFile.getChannel().map(FileChannel.MapMode.READ_WRITE,
                        randomAccessFile.length(), mapSize);
            }

            chunk = Math.min(length, mappedFile.remaining());
            mappedFile.put(bytes, offset, chunk);
            offset += chunk;
            length -= chunk;
        } while (length > 0);
    } catch (final Exception ex) {
        LOGGER.error("RandomAccessFileManager (" + getName() + ") " + ex);
    }
}
{code}

I found a minor bug where the initial buffer is not used if the first log entry exceeds the map size. I attached an updated version of the MemoryMappedFileManager.


Claude Mamo (JIRA)
2013-12-23 19:33:50 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: (was: MemoryMappedFileManager.java)
Claude Mamo (JIRA)
2013-12-23 19:33:50 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: MemoryMappedFileManager.java
Claude Mamo (JIRA)
2013-12-23 19:40:50 UTC

Claude Mamo edited comment on LOG4J2-431 at 12/23/13 7:38 PM:
--------------------------------------------------------------

Hi Remko, I reviewed the cases you mentioned and yes, it looks like one of them wasn't covered. I made a minor change and it should be fine now:

{code:title=MemoryMappedFileManager.java|borderStyle=solid}
protected synchronized void write(final byte[] bytes, int offset, int length) {
    int chunk = 0;

    try {
        do {
            // re-map if no room is left in buffer
            if (mappedFile.remaining() < 1) {
                fileSize = randomAccessFile.length();
                mappedFile = randomAccessFile.getChannel().map(FileChannel.MapMode.READ_WRITE,
                        randomAccessFile.length(), mapSize);
            }

            chunk = Math.min(length, mappedFile.remaining());
            mappedFile.put(bytes, offset, chunk);
            offset += chunk;
            length -= chunk;
        } while (length > 0);
    } catch (final Exception ex) {
        LOGGER.error("RandomAccessFileManager (" + getName() + ") " + ex);
    }
}
{code}


Claude Mamo (JIRA)
2013-12-24 10:38:50 UTC

Claude Mamo edited comment on LOG4J2-431 at 12/24/13 10:37 AM:
---------------------------------------------------------------

Hi Remko, I reviewed the cases you mentioned and yes, it looks like one of them wasn't covered. I made a minor change and it should be fine now:

{code:title=MemoryMappedFileManager.java|borderStyle=solid}
protected synchronized void write(final byte[] bytes, int offset, int length) {
    int chunk = 0;

    try {
        do {
            // re-map if no room is left in buffer
            if (mappedFile.remaining() < 1) {
                fileSize = randomAccessFile.length();
                mappedFile = randomAccessFile.getChannel().map(FileChannel.MapMode.READ_WRITE,
                        randomAccessFile.length(), mapSize);
            }

            chunk = Math.min(length, mappedFile.remaining());
            mappedFile.put(bytes, offset, chunk);
            offset += chunk;
            length -= chunk;
        } while (length > 0);
    } catch (final Exception ex) {
        LOGGER.error("MemoryMappedFileManager (" + getName() + ") " + ex);
    }
}
{code}


Claude Mamo (JIRA)
2013-12-24 10:38:51 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: (was: MemoryMappedFileManager.java)
Claude Mamo (JIRA)
2013-12-24 10:40:51 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: MemoryMappedFileManager.java
MemoryMappedFileAppender.java
Claude Mamo (JIRA)
2013-12-24 10:40:50 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: (was: MemoryMappedFileAppender.java)
Claude Mamo (JIRA)
2013-12-24 12:57:51 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: MemoryMappedFileManager.java
Claude Mamo (JIRA)
2013-12-24 12:57:50 UTC

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: (was: MemoryMappedFileManager.java)
Claude Mamo (JIRA)
2013-12-24 13:50:51 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: (was: MemoryMappedFileManager.java)
Claude Mamo (JIRA)
2013-12-24 13:50:52 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Claude Mamo updated LOG4J2-431:
-------------------------------

Attachment: MemoryMappedFileManager.java
Remko Popma (JIRA)
2014-01-12 02:35:52 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868930#comment-13868930 ]

Remko Popma commented on LOG4J2-431:
------------------------------------

Claude, thanks for working on this!
Apologies that I haven't been able to take a good look at your contribution yet. I am currently focusing on resolving outstanding issues to help get log4j ready for the 2.0 GA release. I plan to start looking at your patches after that, and include this appender in a future 2.1 or 2.0.x release.
Remko Popma (JIRA)
2014-01-28 09:44:38 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remko Popma reassigned LOG4J2-431:
----------------------------------

Assignee: Remko Popma
Ajay (JIRA)
2014-05-28 08:11:03 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010902#comment-14010902 ]

Ajay commented on LOG4J2-431:
-----------------------------

Hi Remko / Claude,

I am planning to use these files for one of our requirements. Could either of you help me understand what changes I need to make so I can use this appender like the other Log4j appenders? I tried modifying the jar by adding these files under the appropriate package folders, but had no luck.

Thanks
Ajay (JIRA)
2014-05-28 08:13:02 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010902#comment-14010902 ]

Ajay edited comment on LOG4J2-431 at 5/28/14 8:12 AM:
------------------------------------------------------

Hi Remko / Claude,

I am planning to use these files for one of my requirements. Could either of you help me understand what changes I need to make so I can use this MemoryMappedFile appender like the other Log4j appenders? I tried modifying the jar by adding these files under the appropriate package folders, but had no luck.

Thanks


Remko Popma (JIRA)
2014-05-28 09:13:01 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010946#comment-14010946 ]

Remko Popma commented on LOG4J2-431:
------------------------------------

Ajay, apologies, but I still haven't been able to spend time on this. I do intend to work on it; I just cannot give an estimate of when. Meanwhile, please feel free to experiment with the attached files; any feedback would be appreciated!
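For experimenting, a minimal configuration sketch along the following lines may help. This assumes the attached classes are compiled into a package that the Log4j plugin scanner picks up, and that the plugin element is named {{MemoryMappedFile}} with {{name}} and {{fileName}} attributes as in the attached MemoryMappedFileAppenderTest.xml; treat the element and attribute names as assumptions until you check the attached sources.
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Element and attribute names assumed from the attached sources, not a released API -->
    <MemoryMappedFile name="MemMap" fileName="logs/app.log">
      <PatternLayout pattern="%d %p %c{1} [%t] %m%n"/>
    </MemoryMappedFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="MemMap"/>
    </Root>
  </Loggers>
</Configuration>
{code}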
Remko Popma (JIRA)
2014-09-17 12:11:34 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remko Popma updated LOG4J2-431:
-------------------------------
Fix Version/s: 2.1
Remko Popma (JIRA)
2014-09-17 12:30:33 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14137142#comment-14137142 ]

Remko Popma commented on LOG4J2-431:
------------------------------------

The memory-mapped file appender and its unit tests are now available in the LOG4J2-431 branch in git.
I plan to merge this into master after I've created some documentation.
Remko Popma (JIRA)
2014-09-18 02:38:33 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138418#comment-14138418 ]

Remko Popma commented on LOG4J2-431:
------------------------------------

Update: user manual documentation is complete except for one outstanding change to rewrite the initial sentence and remove the "Beta" label. Further feedback welcome.

Next item would be a performance test report comparing this appender to the RandomAccessFile and File appenders, but that is not a showstopper and may be included in a subsequent release.

Java is supposed to be platform-independent so perhaps I worry too much, but since memory-mapped files may have some platform-specific idiosyncrasies, I tested on a number of platforms.
JUnit tests pass on
* 32-bit Windows XP (32 bit Oracle JVM 1.7.0_55)
* 64-bit Windows 7 (64 bit Oracle JVM 1.8.0_05 - and a few others, I'll update the exact JVM versions later)
* 64-bit Solaris 10 (64 bit Oracle JVM 1.7.0_06-b24)
* 64-bit RHEL 5.5 (Linux 2.6.18-194.el5) with 64 bit Oracle JDK1.7.0_05-b06
* 64-bit RHEL 6.5 (Linux 2.6.32-431.el6.x86_64) with 64 bit Oracle JDK1.7.0_05-b06 and 64 bit OpenJDK1.7.0_45 (rhel-2.4.3.3.el6-x86_64 u45-b15)

Test scenarios:
* Create a new file with the default region length and log less than the file size. Ensure the log file is shrunk to the actually used size and that all log statements are present in the file.
* Append to an existing file. Extend it by the default region length and log less than the file length. Ensure that previous data is not overwritten, that new data is appended, and that the log file is shrunk to the actually used size when released.
* Create a new file with a very small region length and repeatedly log to exceed the region length. Ensure all data is correctly logged and the log file is shrunk to the actually used size when released.

More test scenarios welcome.

If no objections I will merge this into master tonight (8-12 hours from now).
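For reference, a rough sketch of how the first scenario could be exercised from JUnit (the configuration file name, log file path and region length below are assumptions, not the committed tests):
{code:java}
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.util.List;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.junit.Test;

// Rough sketch of the first scenario; file names and the region length are illustrative.
public class MemoryMappedFileAppenderScenarioSketch {

    static {
        // Must be set before the first LogManager call initializes the context.
        System.setProperty("log4j.configurationFile", "MemoryMappedFileAppenderTest.xml");
    }

    @Test
    public void newFileIsShrunkToUsedSize() throws Exception {
        final Logger logger = LogManager.getLogger(MemoryMappedFileAppenderScenarioSketch.class);
        logger.info("first message");
        logger.info("second message");

        // Stop the context so the manager releases the mapping and truncates the file.
        ((LoggerContext) LogManager.getContext(false)).stop();

        final File logFile = new File("target/MemoryMappedFileAppenderTest.log"); // assumed path
        final long regionLength = 32 * 1024 * 1024; // assumed default region length
        assertTrue("file should be much smaller than the mapped region", logFile.length() < regionLength);

        final List<String> lines = Files.readAllLines(logFile.toPath(), Charset.defaultCharset());
        assertTrue(lines.get(0).contains("first message"));
        assertTrue(lines.get(1).contains("second message"));
    }
}
{code}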
Remko Popma (JIRA)
2014-09-18 04:42:33 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138418#comment-14138418 ]

Remko Popma edited comment on LOG4J2-431 at 9/18/14 4:41 AM:
-------------------------------------------------------------

Update: user manual documentation is complete except for one outstanding change to rewrite the initial sentence and remove the "Beta" label. Further feedback welcome.

Next item would be a performance test report comparing this appender to the RandomAccessFile and File appenders, but that is not a showstopper and may be included in a subsequent release.

Java is supposed to be platform-independent so perhaps I worry too much, but since memory-mapped files may have some platform-specific idiosyncrasies, I tested on a number of platforms.
JUnit tests pass on
* 32-bit Windows XP (32 bit Oracle JVM 1.7.0_55)
* 64-bit Windows 7 (64 bit Oracle JVM 1.8.0_05, 64 bit Oracle JVM 1.7.0_55 and 64 bit Oracle JVM 1.6.0_45)
* 64-bit Solaris 10 (64 bit Oracle JVM 1.7.0_06-b24)
* 64-bit RHEL 5.5 (Linux 2.6.18-194.el5) with 64 bit Oracle JDK1.7.0_05-b06
* 64-bit RHEL 6.5 (Linux 2.6.32-431.el6.x86_64) with 64 bit Oracle JDK1.7.0_05-b06 and 64 bit OpenJDK1.7.0_45 (rhel-2.4.3.3.el6-x86_64 u45-b15)

Test scenarios:
* Create a new file with the default region length and log less than the file size. Ensure the log file is shrunk to the actually used size and that all log statements are present in the file.
* Append to an existing file. Extend it by the default region length and log less than the file length. Ensure that previous data is not overwritten, that new data is appended, and that the log file is shrunk to the actually used size when released.
* Create a new file with a very small region length and repeatedly log to exceed the region length. Ensure all data is correctly logged and the log file is shrunk to the actually used size when released.

More test scenarios welcome.

If no objections I will merge this into master tonight (8-12 hours from now).


Remko Popma (JIRA)
2014-09-18 17:00:39 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remko Popma closed LOG4J2-431.
------------------------------
Remko Popma (JIRA)
2014-09-18 17:00:38 UTC
Permalink
[ https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remko Popma resolved LOG4J2-431.
--------------------------------
Resolution: Fixed

Merged into master in b1783a0aa3174fce0605a806522e9480a33e26d9.
