aloah,
a tip for everyone who might be having problems with clean reception:
"wsize=n
Set the write buffer size to n bytes. The
default value is 32768 when using Version 3 of
the NFS protocol. The default can be negotiated
down if the server prefers a smaller transfer
size. When using Version 2, the default value is
8192."
so if you have an NFSv3 server, try raising the value to 32 KB and see what happens.
on top of that i found a note in the HP docs saying the write buffer should match the block size of the target medium (though on a sensibly formatted movie partition that block size is probably too large to be useful here).
and for everyone running SFU (or another NFSv3 server), the recommendation is to make the mount options look like this:
rw,rsize=32768,wsize=32768,nfsvers=3
it may be that it has to say "v3" instead of "nfsvers=3" (alternatively "vers=3"). with large wsize & rsize values you should of course go with tcp, because otherwise with udp a single packet loss means all the fragments of that block have to be sent again (standard dbox2 mtu=1500; at an 8 KB block size over udp one lost packet means a resend of 6 frames), while with tcp only the lost packet is retransmitted. (alternative: wsize & rsize of 1024 with proto udp; btw: with tcp transport the server should determine the block size for nfs.)
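just as a rough sketch of what the mount call on the box could then look like (server ip, export path and mount point are made up here; whether your mount wants "tcp", "proto=tcp" or something else depends on the client):
mount -t nfs -o rw,rsize=32768,wsize=32768,nfsvers=3,tcp 192.168.0.2:/movies /mnt/movies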
then there is (for ...ix) the "async" option when exporting an nfs share, which makes the nfs server accept the next data packet without waiting for the previous write to finish first (unfortunately i haven't seen anything like that in any nfs server for windows).
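on the unix side a minimal /etc/exports line could look roughly like this (path and client network are placeholders; re-export with exportfs -ra after changing it):
/movies  192.168.0.0/24(rw,async)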
a few more tips regarding timeo and wsize:
timeouts > 5%, badxid roughly as high as the timeouts:
server too slow, increase timeo
timeouts > 5%, badxid ~ 0:
packets are getting lost, make rsize/wsize smaller
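you can read those counters from the client-side rpc statistics with nfsstat; the exact field names vary per platform (badxid shows up e.g. on solaris/bsd, linux mostly just reports retrans):
nfsstat -c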
from the NetBSD manpage:
Increasing the read and write size with the -r and -w
options respectively will increase throughput if the
hardware can handle the larger packet sizes. The default
size for version 2 is 8k when using UDP, 64k when using
TCP. The default size for v3 is platform dependent: on
i386, the default is 32k, for other platforms it is 8k.
Values over 32k are only supported for TCP, where 64k is
the maximum. Any value over 32k is unlikely to get you
more performance, unless you have a very fast network.
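translated into an actual call on NetBSD, that could look something like this (server and paths are placeholders; check mount_nfs(8) for the exact flags on your version):
mount_nfs -T -r 32768 -w 32768 fileserver:/export/movies /mnt/movies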
if you want to get really serious about it, stick to the test routines from the nfs-faq:
You will want to experiment and find an rsize and wsize that works and is as fast as possible. You can test the speed of your options with some simple commands, if your network environment is not heavily used. Note that your results may vary widely unless you resort to using more complex benchmarks, such as Bonnie, Bonnie++, or IOzone.
The first of these commands transfers 16384 blocks of 16k each from the special file /dev/zero (which if you read it just spits out zeros really fast) to the mounted partition. We will time it to see how long it takes. So, from the client machine, type:
# time dd if=/dev/zero of=/mnt/home/testfile bs=16k count=16384
This creates a 256Mb file of zeroed bytes. In general, you should create a file that's at least twice as large as the system RAM on the server, but make sure you have enough disk space! Then read back the file into the great black hole on the client machine (/dev/null) by typing the following:
# time dd if=/mnt/home/testfile of=/dev/null bs=16k
Repeat this a few times and average how long it takes. Be sure to unmount and remount the filesystem each time (both on the client and, if you are zealous, locally on the server as well), which should clear out any caches.
Then unmount, and mount again with a larger and smaller block size. They should be multiples of 1024, and not larger than the maximum block size allowed by your system. Note that NFS Version 2 is limited to a maximum of 8K, regardless of the maximum block size defined by NFSSVC_MAXBLKSIZE; Version 3 will support up to 64K, if permitted. The block size should be a power of two since most of the parameters that would constrain it (such as file system block sizes and network packet size) are also powers of two. However, some users have reported better successes with block sizes that are not powers of two but are still multiples of the file system block size and the network packet size.
Directly after mounting with a larger size, cd into the mounted file system and do things like ls, explore the filesystem a bit to make sure everything is as it should be. If the rsize/wsize is too large the symptoms are very odd and not 100% obvious. A typical symptom is incomplete file lists when doing ls, and no error messages, or reading files failing mysteriously with no error messages. After establishing that the given rsize/wsize works you can do the speed tests again. Different server platforms are likely to have different optimal sizes.
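if you want to automate the remount-and-dd dance from the faq, a rough shell sketch could look like this (server, export and mount point are placeholders; adjust the sizes to what your client and server actually accept):
#!/bin/sh
# sketch: time a 256 MB write and read for a couple of rsize/wsize values
SERVER=server:/export      # placeholder nfs export
MNT=/mnt/test              # placeholder mount point
OPTS="rw,nfsvers=3,tcp"
for BS in 4096 8192 16384 32768; do
    umount $MNT 2>/dev/null
    mount -t nfs -o $OPTS,rsize=$BS,wsize=$BS $SERVER $MNT || exit 1
    echo "=== rsize/wsize=$BS write ==="
    time dd if=/dev/zero of=$MNT/testfile bs=16k count=16384
    # remount to flush the client cache before reading back
    umount $MNT
    mount -t nfs -o $OPTS,rsize=$BS,wsize=$BS $SERVER $MNT || exit 1
    echo "=== rsize/wsize=$BS read ==="
    time dd if=$MNT/testfile of=/dev/null bs=16k
done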
for those of you on a crossover cable it should be pretty easy by comparison: proto tcp, block size 64 KB (the maximum). you can test your network quality with a flood ping and a large packet size (e.g.: ping -f -s 1024 x.x.x.x), and that also lets you roughly estimate how you need to set the nfs timeout.
have fun tinkering!