$ sudo apt-get install iozone3
Iozone was run using the following parameters:
iozone -a -R -c -U /communal/ -f /communal/testfile
First, iozone was benchmarked on the local client, a Vostro 1700 writing to its non-system internal disk (SATA II, 2.5" 7200RPM drive).
The client then mounted the NFS share under /communal (entry added to /etc/fstab) and the performance tests were run. The initial runs were performed using two homeplugs (Comtrend 902s powerline adaptors in the same room, with a gigabit switch at either end). The best throughput occurred when the rsize and wsize were set to 32k (rsize=32768, wsize=32768). The diagram below shows the run for this optimal rsize.
However, the throughput seemed to max out at 2.4MB/s (equivalent to roughly 20Mbit/s). This was not good news, and required a check with a direct cable connection between the client and server. (The Foxconn R3-S10 only has a 100Mbit/s card, so our maximum theoretical throughput is 12.5MB/s.)
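The ceiling figures above follow from simple arithmetic: a line rate in Mbit/s divided by 8 gives the payload limit in MB/s, ignoring protocol overhead. A quick sketch of the conversion (the 100 and 1000 Mbit/s values are just the two link rates discussed here):

```shell
# Convert a nominal line rate (Mbit/s) into a throughput ceiling (MB/s).
# 8 bits per byte; Ethernet/IP/NFS overhead is ignored, so real transfers
# land a little below this (10.5MB/s observed against a 12.5MB/s ceiling).
for mbits in 100 1000; do
    awk -v m="$mbits" 'BEGIN { printf "%4d Mbit/s -> %5.1f MB/s\n", m, m / 8 }'
done
```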
We achieved 10.5MB/s with a wsize and rsize of 32768, which is excellent. The network is almost certainly our bottleneck.
It looks like we have a problem with the powerline adaptors or my electrical wiring. The adaptors are in the same room, on the same breaker, and the Comtrend web interface states that the upload and download speeds exceed 100Mbit/s. The iozone and dd tests were repeated several times.
dd if=/dev/zero of=/communal/bigfile bs=1024k count=16
This showed we could not achieve more than 3MB/s using the powerline adaptors, but consistently achieved 10MB/s using a direct cable connection.
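The repeated dd runs can be scripted; the sketch below writes to a local temp file by default (point TARGET at /communal/bigfile to exercise the NFS mount instead). conv=fsync is an addition not in the original command: it makes dd flush before reporting, so the MB/s figure reflects the disk or network rather than the page cache.

```shell
#!/bin/sh
# Repeat the 16MB write test and print dd's throughput summary for each run.
# TARGET defaults to a local temp file; set TARGET=/communal/bigfile to
# test the NFS mount as in the text.
TARGET="${TARGET:-$(mktemp)}"
for run in 1 2 3; do
    # dd reports to stderr; tail keeps only the "bytes copied" summary line
    dd if=/dev/zero of="$TARGET" bs=1024k count=16 conv=fsync 2>&1 | tail -n1
done
rm -f "$TARGET"
```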
- Carefully check powerline networking.
- The NFS server is probably bound by the 100Mbit/s network. Gigabit networking kit would be needed to prove this.
- Maximum values in the iozone tests were achieved using an rsize and wsize of 32768. See below.
$ grep communal /etc/fstab
datastore:/communal /communal nfs hard,rw,addr=192.168.1.23,rsize=32768,wsize=32768 0 0
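As a small sanity check, the rsize/wsize recorded in the options field of that fstab entry can be pulled out with awk (this sketch parses the line above verbatim; on a live system the server may still clamp the negotiated values, which `nfsstat -m` would reveal):

```shell
# Extract rsize/wsize from the fstab entry's options field (4th column).
echo 'datastore:/communal /communal nfs hard,rw,addr=192.168.1.23,rsize=32768,wsize=32768 0 0' |
awk '{
    n = split($4, opts, ",")              # options are comma-separated
    for (i = 1; i <= n; i++)
        if (opts[i] ~ /^(rsize|wsize)=/)  # keep only the transfer sizes
            print opts[i]
}'
```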