Server performance tuning for Linux and Unix

From SubversionWiki
Dan Christian, 5 May 2008
Latest revision as of 13:17, 15 July 2008

General notes

There are several good web sites and books about how to set up subversion, but I couldn't find anything about how to optimize performance. This is a guide to understanding and improving the performance of subversion when it is served using svnserve or HTTP (via Apache).

Much of the operating system tuning is similar to what would be done for a database or e-mail server. It can be useful to search those areas for additional advice, e.g. http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html

These notes have a lot of Linux-specific details, but the concepts should apply to most Unix-based systems. Feel free to add details for other flavors of Unix here. Non-POSIX tuning notes should probably go on another page.

The repository can be stored in either the Berkeley DB database format or the FSFS repository format. This is hidden from users. FSFS is generally considered to be faster.

Making reads cheaper

The Unix concept of "file access time" (commonly called atime) is a performance problem. When filesystem semantics were being defined, it seemed like a good idea to know when a file was last accessed. The downside is that every file open call now causes a disk write. A few utilities use this information (e.g. tmpwatch and mail), but subversion never uses atime. Subversion performance is improved by avoiding the access time writes.

For a local filesystem, you can disable this behavior with mount options. On Linux, it's the 'noatime' and 'nodiratime' options. On an NFS filesystem, the atime recording happens on the server and must be disabled in the server's configuration.

A lazy atime approach called "relatime" was introduced in Linux-2.6.20 and mount-2.13. This eliminates most atime writes without breaking the few utilities that need it. This is most useful if the repository must be on the same partition as the mail spool and/or temporary files. See: http://kerneltrap.org/node/14148 and http://kernelnewbies.org/Linux_2_6_20
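These mount options can be made persistent in /etc/fstab. A minimal sketch, assuming a hypothetical ext3 data partition /dev/sdb1 mounted at /srv/svn:

```
# /etc/fstab fragment (device, mount point, and filesystem type are examples)
/dev/sdb1  /srv/svn  ext3  defaults,noatime,nodiratime  0  2
```

On kernels and mount versions that support it, replacing noatime,nodiratime with relatime keeps atime-dependent utilities working while still avoiding most of the writes.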

Making writes cheaper

Subversion uses the fsync() call (or the equivalent on non-Unix operating systems) to tell the operating system to write data to disk. Up until that point, the data is usually only in memory and the operating system will write it to disk "when it gets around to it".

By calling fsync() before finishing a commit, subversion is trying to guarantee that everything it reported as committed will still be there when the machine reboots. Waiting for data to write out to disk is often the slowest part of a commit.

However, the operating system doesn't always hold up its end of the bargain. On Linux, fsync() only ensures that the data is on its way to the disk "as soon as possible". If the write cache is enabled on the drive, then it doesn't actually wait for the data to hit the disk platter before returning. This means there is a window of time in which a power loss can cause the disk state to not match what subversion returned.

One way to significantly increase fsync() performance is to use a RAID controller with a battery-backed write cache. The cache is treated as part of the disk system. As soon as the data is in the cache, the fsync() can safely return. This means you don't have to wait for the disk head seek or the data transfer. If power is interrupted, the RAID controller will finish writing out the cache when power is restored.

A newer way to avoid this problem is a flash-based disk. There is no latency from head movement or waiting for the disk to rotate. This becomes more significant when writing many small files (like many FSFS writes). The current downsides of flash disks are high cost, limited capacity, and low write bandwidth (but these problems are improving).

Reducing the number of writes

As of subversion-1.5, transactions can be built up on a different filesystem than the one holding the repository. This is valuable when the repository lives on a slower filesystem like NFS.

To implement this, do the following:

 stop all servers that can write to the repository
 cd REPO_PATH/db
 mv transactions /LOCAL/DISK/PATH/
 ln -s /LOCAL/DISK/PATH/transactions .
 start the servers

Reduce directory index size

The subversion-1.5 repository format allows the revisions to be stored in subdirectories that don't grow past a specified size. This allows repositories to store many more revisions than can (efficiently) be stored in one directory.

Modern filesystems can handle hundreds of thousands of files in a single directory. However, performance can suffer as the directory index starts to use multiple levels of indirection. Some administration tools may also have trouble with very large directories. Splitting the revision store into sub-directories avoids all these problems.

The shard size can be adjusted by editing the "layout sharded" line in "db/format" after 'svnadmin create' but before populating the repository. The default is 1000 revisions per subdirectory. Non-sharded repositories can be loaded into a new, sharded repository using "svnadmin load" or "svnsync".
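As a sketch, the edit looks like this. The repository path is a stand-in, and since 'svnadmin create' may not be available everywhere, the one file being changed is mocked here so the commands can be shown end to end:

```shell
# Hypothetical stand-in for a freshly created repository: a real one
# would come from 'svnadmin create "$REPO"'.  Only db/format matters here.
REPO=$(mktemp -d)
mkdir -p "$REPO/db"
printf 'layout sharded 1000\n' > "$REPO/db/format"   # the default layout line

# Raise the shard size to 5000 revisions per subdirectory (5000 is just
# an illustrative value).  Do this before the first commit.
sed -i 's/^layout sharded .*/layout sharded 5000/' "$REPO/db/format"
grep '^layout' "$REPO/db/format"   # prints: layout sharded 5000
```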

Optimize write-once files on NFS

If the repository is on an NFS filesystem, then a cache consistency check is made every time a file is opened. Since the revision files in a FSFS repository never change, it is worthwhile to skip the cache checks on these files. The subversion-1.5 repository format stores immutable files in specific subdirectories so that this can be done.

The NFS cache check can be disabled on Linux by passing the 'nocto' option to the mount command (note: the man page claims this is ignored, but it isn't on Linux-2.6). You need coherency for some files, so the NFS volume is also mounted without the option on a different mount point. Symbolic links are made from the cache-coherent mount point to the 'nocto' mount for these directories: revs and txn-protorevs.

Implementation example (not complete, just an outline of the key steps):

 stop all servers that can write to the repository
 sudo mount -t nfs nfs_server:/mount_point /mnt/svn -o \
   rw,nosuid,tcp,rsize=32768,wsize=32768
 sudo mount -t nfs nfs_server:/mount_point /mnt/svn-nocto -o \
   rw,nosuid,tcp,rsize=32768,wsize=32768,nocto,actimeo=3600
 cd /mnt/svn/repo_path
 mv revs revs-nocto
 mv txn-protorevs txn-protorevs-nocto
 ln -s /mnt/svn-nocto/repo_path/db/revs-nocto revs
 ln -s /mnt/svn-nocto/repo_path/db/txn-protorevs-nocto txn-protorevs
 start the servers
 

Increase NFS caching timeout

On Linux, metadata on NFS files is only cached for a finite period of time. This can be changed by passing the actimeo option to the mount command. The man page claims the default is 60 (seconds), but some experimentation suggests it may be higher than that. For a 'nocto' mount point, this value can be raised to something much larger (e.g. 3600). See the above example.

Distributing CPU load

Subversion communicates with clients by transmitting differences in state, so the CPU load to calculate the difference can be significant. By storing the repository on NFS, you can have multiple "front end" (FE) systems that share the computational load and provide redundancy. A network load balancer makes all front ends (FEs) appear as one server to users.

The FEs can either run svnserve or http-DAV. If DAV is used, you need to ensure that the load balancer keeps an entire transaction on the same FE (to allow transactions to be built up on local disk). The load balancer must be configured with "machine affinity" set, so that all HTTP connections from a client will be routed to the same server. You should also configure Apache to keep a single TCP connection for the entire transaction (see example below).

Apache configuration to maintain a TCP connection:

 # 1. Enable HTTP persistent connections so a single transaction can
 #    be built up over a single connection.
 KeepAlive             on
 # 2. Allow as many KeepAlives as required (0 => infinite) to keep
 #    the same connection alive.
 MaxKeepAliveRequests  0
 # 3. Limit a child to serving only this 1 connection.
 MaxRequestsPerChild   1

The last one is counter-intuitive, but see the "Note" at http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxrequestsperchild.

High storage system reliability

The purpose of a version control system is to store a sequence of file/directory versions so you can retrieve them in the future. None of this matters if the storage system fails.

The simplest step is to do periodic backups of the repository. This limits the loss to the changes that happened since the last backup. If the repository is large and the commit rate is high, it may be impossible to back up frequently enough to prevent significant data loss. For example, if your repository gets one commit per second and you do a backup every hour, you may lose 3600 revisions if the disk fails. This is a large scale example, but the point is to gather your own numbers and figure out how much you might lose.
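A minimal sketch of such a periodic backup, assuming a crontab entry and hypothetical paths; 'svnadmin hotcopy' makes a full, consistent copy of a live repository:

```
# Hypothetical crontab entry: full hotcopy every night at 03:00,
# into a fresh date-stamped directory on a backup volume.
0 3 * * * svnadmin hotcopy /srv/svn/repo /backup/svn/repo-$(date +\%F)
```

Old date-stamped copies would need to be pruned separately; the '%' is escaped because cron treats a bare '%' specially.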

The next step is to make the disk system redundant using RAID technology. This allows one (and sometimes more) disks to fail without losing data. This still won't help if additional disks fail during recovery or the entire array is lost due to fire, theft, etc.

Advanced NFS servers can be configured to do synchronous mirroring and/or asynchronous mirroring (also known as snapshot replication). These capabilities are available in some commercial servers, or you can find various free alternatives by searching for "NFS server high availability" or "NFS server snapshot replication".

Synchronous mirroring sends every write to two independent storage systems and requires a high bandwidth network (e.g. gigabit ethernet). It reduces performance, but the caching optimizations listed above can help. The primary and slave systems are usually located in different rooms (or buildings) and on different electrical circuits.

Asynchronous mirroring periodically updates a second storage system with changes from the master. It makes a "snapshot" of the system every few minutes and then transmits the difference between the previous snapshot and the current one to the slave. This uses less bandwidth, but lags the main filesystem by several minutes. It can allow a backup filesystem to be located in another geographic region.

Another approach is to use subversion tools to maintain a mirror. Set up svnsync to periodically sync a backup server off the main one. This can lag behind by the polling interval, but it is simple to set up.
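A sketch of that setup, with placeholder URLs. 'svnsync initialize' is run once against the mirror (an empty repository that allows revision property changes), and 'svnsync synchronize' is then run periodically, e.g. from cron:

```
# One-time setup (both URLs are placeholders; mirror URL comes first):
svnsync initialize file:///srv/svn-mirror/repo http://svn.example.com/repo

# Hypothetical crontab entry: pull new revisions every 5 minutes.
*/5 * * * * svnsync synchronize file:///srv/svn-mirror/repo
```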

You can eliminate the lag by setting up a post-commit script that runs "svnadmin dump --incremental -r N" of that commit onto a separate partition/server. This creates a transaction log of commits that can be replayed on a recent backup to restore full state.
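A sketch of such a hook, wrapped in a function with placeholder paths; the real REPO/hooks/post-commit script would simply call it with the two arguments Subversion passes in:

```shell
#!/bin/sh
# Sketch of REPO/hooks/post-commit.  Subversion invokes the hook with
# the repository path ($1) and the new revision number ($2).
# SVN_COMMIT_LOG is a hypothetical destination on a separate
# partition/server; override it to taste.
post_commit() {
    repos="$1"
    rev="$2"
    log="${SVN_COMMIT_LOG:-/backup/svn-commit-log}"
    # Write this commit as an incremental dump; the resulting files can
    # be replayed onto a recent backup with 'svnadmin load'.
    svnadmin dump "$repos" --incremental -r "$rev" --quiet \
        > "$log/$rev.dump"
}
# In the live hook: post_commit "$1" "$2"
```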

Watch your entropy

When servers handle lots of queries, certain protocols can deplete the entropy pool. The svn:// protocol (served by svnserve) and SASL ciphers read from /dev/random for every new connection. If the entropy pool becomes depleted, then the service will become very slow.

The pool should have 100+ bits in it for good operation. You can check the entropy pool size on Linux like this:

 sysctl kernel.random.entropy_avail

This should not be a problem if APR was configured with "--with-devrandom=/dev/urandom". SASL has a similar configuration option (called???). You may need to check how the packages supplied with your OS are configured.
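When building APR from source, the relevant switch looks like this (a sketch; the source directory name is hypothetical):

```
# Build APR so it reads /dev/urandom instead of the blocking /dev/random:
cd apr-1.x.x
./configure --with-devrandom=/dev/urandom
make && make install
```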


(TODO: reference the SVN benchmarking capabilities in mstone http://mstone.sourceforge.net/)

Dchristian 13:42, 30 May 2008 (PDT)