
BSD/Unix: File server with ZFS - what should one keep in mind?


tingo

Recommended posts

After FreeBSD 8.0 came out with better / more up-to-date support for the newer ZFS features, I decided to set up a new file server and use ZFS on it. So far I have followed RootOnZFS (more specifically the mirror pool variant) and now have a machine running FreeBSD 8.0-STABLE:

tingo@kg-f2$ uname -a
FreeBSD kg-f2.kg4.no 8.0-STABLE FreeBSD 8.0-STABLE #0: Sat Dec 12 12:02:54 CET 2009	 [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64

 

with the following zpool:

tingo@kg-f2$ zpool list
NAME	SIZE   USED  AVAIL	CAP  HEALTH  ALTROOT
zroot  59.5G  28.8G  30.7G	48%  ONLINE  -
tingo@kg-f2$ zpool status
 pool: zroot
state: ONLINE
scrub: none requested
config:

NAME		   STATE	 READ WRITE CKSUM
zroot		  ONLINE	   0	 0	 0
  mirror	   ONLINE	   0	 0	 0
	gpt/disk0  ONLINE	   0	 0	 0
	gpt/disk1  ONLINE	   0	 0	 0

errors: No known data errors
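
For reference, the pool-creation step from the RootOnZFS mirror guide looks roughly like this; gpt/disk0 and gpt/disk1 are the GPT labels given to the freebsd-zfs partitions earlier in that guide, and the device names here are only examples:

# gpart add -t freebsd-zfs -l disk0 ad4
# gpart add -t freebsd-zfs -l disk1 ad6
# zpool create zroot mirror gpt/disk0 gpt/disk1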

I use smartmontools to keep an eye on the disks.
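
For the record, a minimal /usr/local/etc/smartd.conf along these lines (the device names are mine; check smartd.conf(5) before copying) makes smartd run a short self-test every night and mail root if anything fails:

# /usr/local/etc/smartd.conf - sketch: monitor both mirror disks,
# short self-test every night at 02:00, mail root on trouble
/dev/ad4 -a -s S/../.././02 -m root
/dev/ad6 -a -s S/../.././02 -m root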

The question then is: is there anything more one needs to watch or follow up on when it comes to ZFS?

(I'm used to UFS on FreeBSD, where you don't do anything special other than keeping an eye on the logs and replacing disks when they start dying.)

 

 

For the file server part itself I have 5 disks to add. What is the best way to set them up? All the disks in one raidz? Or something else? A sketch of the raidz option follows below.
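
With all five in one raidz1 vdev it would be something like this (device names are only placeholders):

# zpool create storage raidz1 ad8 ad10 ad12 ad14 ada0

That gives one disk of parity across five disks; raidz2 would cost another disk but survive two failures.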


After asking around in several places I decided to just drop glabel. It now looks like this:

root@kg-f2# zpool status
 pool: storage
state: ONLINE
scrub: scrub completed after 0h0m with 0 errors on Fri Dec 18 15:04:58 2009
config:

NAME		STATE	 READ WRITE CKSUM
storage	 ONLINE	   0	 0	 0
  raidz1	ONLINE	   0	 0	 0
	ad8	 ONLINE	   0	 0	 0
	ad10	ONLINE	   0	 0	 0
	ad12	ONLINE	   0	 0	 0
	ad14	ONLINE	   0	 0	 0
	ada0	ONLINE	   0	 0	 0

errors: No known data errors

 pool: zroot
state: ONLINE
scrub: scrub completed after 0h7m with 0 errors on Mon Dec 14 22:33:12 2009
config:

NAME		   STATE	 READ WRITE CKSUM
zroot		  ONLINE	   0	 0	 0
  mirror	   ONLINE	   0	 0	 0
	gpt/disk0  ONLINE	   0	 0	 0
	gpt/disk1  ONLINE	   0	 0	 0

errors: No known data errors

We'll see how it works out.

  • 1 month later...

Update: after updating FreeBSD to the latest 8.0-STABLE (well, it was the latest as of January 15th), I noticed that ZFS had been upgraded to version 14:

root@kg-f2# dmesg | grep ZFS
ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
ZFS filesystem version 14
ZFS storage pool version 14
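
(A quick way to check is to run zpool upgrade with no arguments; it prints the pool version the running system supports and lists any pools that are still on an older on-disk version. zpool upgrade -v shows what each version adds.)

# zpool upgrade
# zpool upgrade -v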

and that it was time to upgrade my zpools. But since I boot from ZFS, it's best not to fall into the trap. First, update the bootcode:

root@kg-f2# gpart bootcode -p /boot/gptzfsboot -i 1 ad4
root@kg-f2# gpart bootcode -p /boot/gptzfsboot -i 1 ad6
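
(Before writing the bootcode it's worth double-checking that partition index 1 really is the freebsd-boot partition on both disks; the index in the first column of the output must match the -i argument:)

# gpart show ad4
# gpart show ad6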

then upgrade the pool I boot from:

root@kg-f2# zpool upgrade zroot
This system is currently running ZFS pool version 14.

Successfully upgraded 'zroot' from version 13 to version 14

And then the suspense: would the server boot afterwards?

Yes, it did. Then all that remained was to upgrade the last pool:

root@kg-f2# zpool upgrade storage
This system is currently running ZFS pool version 14.

Successfully upgraded 'storage' from version 13 to version 14

I like FreeBSD, and I'm starting to like ZFS too.

  • 1 month later...

Did some testing with bonnie++ on my ZFS file server. First a test on the zroot pool (mirror):

tingo@kg-f2$ bonnie++ -d /tmp
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
kg-f2.kg4.no     8G   127  95 215133  59 99279  35   453  99 166915  38 231.5  10
Latency             43959us    1255ms    5770ms   46531us   99971us     451ms
Version  1.96       ------Sequential Create------ --------Random Create--------
kg-f2.kg4.no        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 26226  94 +++++ +++ 25339  97 26680  95 +++++ +++ 25533  97
Latency              7058us      66us      90us   16259us      59us     160us
1.96,1.96,kg-f2.kg4.no,1,1269866648,8G,,127,95,215133,59,99279,35,453,99,166915,38,231.5,10,16,,,,,26226,94,+++++,+++,25339,97,26680,95,+++++,+++,25533,97,43959us,1255ms,5770ms,46531us,99971us,451ms,7058us,66us,90us,16259us,59us,160us

Note: I know nothing about bonnie++, so don't ask me what the results mean. :-)

Then I test the storage pool (raidz, five disks):

tingo@kg-f2$ bonnie++ -d /storage/test
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
kg-f2.kg4.no     8G   149  98 174635  47 81490  28   465  99 119856  28 148.6   7
Latency             88433us    1981ms    2171ms   23597us     317ms     788ms
Version  1.96       ------Sequential Create------ --------Random Create--------
kg-f2.kg4.no        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 29039  95 +++++ +++ 26096  98 27308  96 +++++ +++ 25904  98
Latency              7013us      66us      92us   16332us      61us     176us
1.96,1.96,kg-f2.kg4.no,1,1269867362,8G,,149,98,174635,47,81490,28,465,99,119856,28,148.6,7,16,,,,,29039,95,+++++,+++,26096,98,27308,96,+++++,+++,25904,98,88433us,1981ms,2171ms,23597us,317ms,788ms,7013us,66us,92us,16332us,61us,176us

No, I don't know whether this is good or bad. No tuning has been done on the server.

Let me know if you want me to test with other parameters or anything else.
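
(For reference, bonnie++ takes the working-set size and file count on the command line; something along these lines would test with a 16 GiB data set and 32 * 1024 small files. The usual advice is to make -s at least twice the RAM so the cache doesn't skew the numbers, but I haven't tuned these values for this box:)

$ bonnie++ -d /storage/test -s 16384 -n 32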


So, quickly: about 175 MB/s read, 81 MB/s rewrite and 119 MB/s write.

 

I'm a complete beginner at this, and based on the output from Bonnie++ I would also interpret it that way, but isn't it actually the other way around?

 

The output from my own OpenSolaris box is the following:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
opensolaris     16G 90129  77 181109  24 144246  25 94860  95 576024  43 806.1   2
                   ------Sequential Create------ --------Random Create--------
                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 +++++ +++ +++++ +++ +++++ +++ 30359  97 +++++ +++ +++++ +++
opensolaris,16G,90129,77,181109,24,144246,25,94860,95,576024,43,806.1,2,16,+++++,+++,+++++,+++,+++++,+++,30359,97,+++++,+++,+++++,+++

 

I would then interpret this as 181 MB/s write, 144 MB/s rewrite and 576 MB/s read.

 

Output from dd:

 

magnus@opensolaris:/pool/bonnie$ dd if=/dev/zero of=/pool/test1.txt bs=64K count=1M
1048576+0 records in
1048576+0 records out
68719476736 bytes (69 GB) copied, 376.661 s, 182 MB/s
magnus@opensolaris:/pool/bonnie$ dd if=/pool/test1.txt of=/dev/zero bs=64K
1048576+0 records in
1048576+0 records out
68719476736 bytes (69 GB) copied, 106.403 s, 646 MB/s

 

My ZFS pool consists of 2 vdevs: 8x 1.5 TB raidz2 and 8x 2 TB raidz2.

 

magnus@opensolaris:~$ zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool   25.4T  10.6T  14.8T    41%  ONLINE  -
rpool   928G  10.4G   918G     1%  ONLINE  -

magnus@opensolaris:~$ zpool status
 pool: pool
state: ONLINE
scrub: scrub completed after 7h56m with 0 errors on Mon Mar 29 02:11:40 2010
config:

       NAME         STATE     READ WRITE CKSUM
       pool         ONLINE       0     0     0
         raidz2     ONLINE       0     0     0
           c7t0d0   ONLINE       0     0     0
           c7t1d0   ONLINE       0     0     0
           c7t2d0   ONLINE       0     0     0
           c7t3d0   ONLINE       0     0     0
           c7t4d0   ONLINE       0     0     0
           c7t5d0   ONLINE       0     0     0
           c7t6d0   ONLINE       0     0     0
           c7t7d0   ONLINE       0     0     0
         raidz2     ONLINE       0     0     0
           c11t0d0  ONLINE       0     0     0
           c11t1d0  ONLINE       0     0     0
           c11t2d0  ONLINE       0     0     0
           c11t3d0  ONLINE       0     0     0
           c11t4d0  ONLINE       0     0     0
           c11t5d0  ONLINE       0     0     0
           c11t6d0  ONLINE       0     0     0
           c11t7d0  ONLINE       0     0     0

errors: No known data errors

 

Which numbers are the "right" ones? :)

 

I must also say I would have expected higher write speeds with 2 vdevs, not that it matters much in practice - all data is written to the ZFS pool over gigabit Ethernet anyway.
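
(For what it's worth: gigabit Ethernet is 1 Gbit/s = 125 MB/s on paper, and typically somewhere around 100-115 MB/s in practice after protocol overhead, so any local write speed beyond that is invisible from a client; the second vdev mainly shows up as extra aggregate throughput and IOPS locally.)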

  • 4 months later...

Update: I came home from a vacation trip, and suddenly my file server looks like this:

root@kg-f2# zpool status
 pool: storage
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: scrub completed after 0h0m with 0 errors on Wed Aug  4 20:15:59 2010
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE      27     0     0  16K repaired
    ad14    ONLINE       0     0     0
    ada0    ONLINE       0     0     0

errors: No known data errors

 pool: zroot
state: ONLINE
scrub: scrub completed after 0h9m with 0 errors on Wed Aug  4 20:25:41 2010
config:

NAME           STATE     READ WRITE CKSUM
zroot          ONLINE       0     0     0
  mirror       ONLINE       0     0     0
    gpt/disk0  ONLINE       0     0     0
    gpt/disk1  ONLINE       0     0     0

errors: No known data errors

From /var/log/messages:

Aug  4 19:38:04 kg-f2 ntpd[951]: kernel time sync status change 2001
Aug  4 20:15:56 kg-f2 kernel: ad12: FAILURE - READ_DMA48 status=51<READY,DSC,ERROR> error=40<UNCORRECTABLE> LBA=751627904
Aug  4 20:15:57 kg-f2 kernel: ad12: FAILURE - READ_DMA48 status=51<READY,DSC,ERROR> error=40<UNCORRECTABLE> LBA=375818624
Aug  4 20:15:58 kg-f2 kernel: ad12: FAILURE - READ_DMA48 status=51<READY,DSC,ERROR> error=40<UNCORRECTABLE> LBA=375818880
Aug  4 20:15:59 kg-f2 kernel: ad12: FAILURE - READ_DMA48 status=51<READY,DSC,ERROR> error=40<UNCORRECTABLE> LBA=751628032
Aug  4 20:15:59 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=384833552384 size=65536 error=5
Aug  4 20:15:59 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=384833577472 size=512 error=5
Aug  4 20:15:59 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=384833578496 size=1024 error=5
Aug  4 20:15:59 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=384833580032 size=512 error=5
Aug  4 20:15:59 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=384833579520 size=512 error=5
Aug  4 20:15:59 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=384833577984 size=512 error=5
Aug  4 20:15:59 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=192419266560 size=65536 error=5

smartctl says the disk is "healthy":

root@kg-f2# smartctl -H /dev/ad12
smartctl 5.39.1 2010-01-28 r3054 [FreeBSD 8.0-STABLE amd64] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

Nor can I find anything in particular to put my finger on in the full output:

root@kg-f2# smartctl -a /dev/ad12
smartctl 5.39.1 2010-01-28 r3054 [FreeBSD 8.0-STABLE amd64] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Device Model:     SAMSUNG HD103SJ
Serial Number:    S246J1KSB01865
Firmware Version: 1AJ100E4
User Capacity:    1,000,204,886,016 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 6
Local Time is:    Sat Aug  7 11:27:14 2010 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
				was never started.
				Auto Offline Data Collection: Disabled.
Self-test execution status:      ( 121)	The previous self-test completed having
				the read element of the test failed.
Total time to complete Offline 
data collection: 		 (9540) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
				Auto Offline data collection on/off support.
				Suspend Offline collection upon new
				command.
				Offline surface scan supported.
				Self-test supported.
				No Conveyance Self-test supported.
				Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
				power-saving mode.
				Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
				General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 ( 159) minutes.
SCT capabilities: 	       (0x003f)	SCT Status supported.
				SCT Feature Control supported.
				SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
 1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       4
 2 Throughput_Performance  0x0026   252   252   000    Old_age   Always       -       0
 3 Spin_Up_Time            0x0023   073   073   025    Pre-fail  Always       -       8209
 4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       45
 5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
 7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
 8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
 9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       5579
10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       46
191 G-Sense_Error_Rate      0x0022   252   252   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   057   053   000    Old_age   Always       -       43 (Lifetime Min/Max 24/47)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       4
198 Offline_Uncorrectable   0x0030   252   252   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       0
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       46

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%      5547         375818624
# 2  Short offline       Completed without error       00%      2033         -

Note: selective self-test log revision number (0) not 1 implies that no selective self-test has ever been run
SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
   1        0        0  Completed_read_failure [90% left] (0-65535)
   2        0        0  Not_testing
   3        0        0  Not_testing
   4        0        0  Not_testing
   5        0        0  Not_testing
Selective self-test flags (0x0):
 After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Does that mean I can just do a 'zpool clear' on the pool and keep going?


What does iostat -en say?

It could indicate that you have a disk that is about to fail, or something else that caused it to hiccup.

I would really just clear the error and order a spare disk. Then you can simply swap it in when the disk fails.

ZFS will fault the disk anyway once it fails properly.
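
When it does, swapping it out is just a replace on the faulted device - roughly like this, where the new device name is only an example:

# zpool replace storage ad12 ad16
# zpool status storage     <- watch the resilver progress

(If the old disk is still half-alive you can 'zpool offline storage ad12' first.)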


FreeBSD doesn't understand the '-en' options to iostat:

root@kg-f2# uname -a
FreeBSD kg-f2.kg4.no 8.0-STABLE FreeBSD 8.0-STABLE #3: Fri Mar  5 18:16:51 CET 2010
    [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64
root@kg-f2# iostat -en
iostat: illegal option -- e
usage: iostat [-CdhIKoTxz?] [-c count] [-M core] [-n devs] [-N system]
      [-t type,if,pass] [-w wait] [drives]
root@kg-f2# iostat -n
iostat: option requires an argument -- n
usage: iostat [-CdhIKoTxz?] [-c count] [-M core] [-n devs] [-N system]
      [-t type,if,pass] [-w wait] [drives]

Clearing the error; yes, I'm thinking the same.
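
The closest I can think of on FreeBSD is iostat -x (extended per-device statistics, though no error counters like Solaris' -e) or gstat for live GEOM numbers, e.g.:

# iostat -x -w 5 ad12
# gstat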


We'll see when the disk fails, then:

root@kg-f2# zpool status storage
 pool: storage
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: scrub completed after 0h0m with 0 errors on Wed Aug  4 20:15:59 2010
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE      27     0     0  16K repaired
    ad14    ONLINE       0     0     0
    ada0    ONLINE       0     0     0

errors: No known data errors

Cleared the error:

root@kg-f2# zpool clear storage ad12
root@kg-f2# zpool status storage
 pool: storage
state: ONLINE
scrub: scrub completed after 0h0m with 0 errors on Wed Aug  4 20:15:59 2010
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE       0     0     0  16K repaired
    ad14    ONLINE       0     0     0
    ada0    ONLINE       0     0     0

errors: No known data errors

then we run a scrub:

root@kg-f2# zpool scrub storage
root@kg-f2# zpool status storage
 pool: storage
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: scrub completed after 0h0m with 0 errors on Sat Aug  7 20:31:16 2010
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE      23     0     0  14.5K repaired
    ad14    ONLINE       0     0     0
    ada0    ONLINE       0     0     0

errors: No known data errors

The disk doesn't look entirely healthy, if you ask me.
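
If I want to poke at it further, a long SMART self-test is probably the next step; it runs in the background on the disk and the result ends up in the self-test log:

# smartctl -t long /dev/ad12
# smartctl -l selftest /dev/ad12     <- check after the ~159 minutes the drive estimates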

  • 1 month later...

More fun with the ZFS file server:

From /var/log/messages:

Sep 12 02:06:12 kg-f2 kernel: siisch0: Timeout on slot 30
Sep 12 02:06:12 kg-f2 kernel: siisch0: siis_timeout is 00040000 ss 40000000 rs 40000000 es 00000000 sts 801f0040 serr 00000000
Sep 12 02:06:12 kg-f2 kernel: siisch0: Error while READ LOG EXT
Sep 12 02:06:14 kg-f2 kernel: siisch0: Timeout on slot 30
Sep 12 02:06:14 kg-f2 kernel: siisch0: siis_timeout is 00040000 ss 40000000 rs 40000000 es 00000000 sts 801f0040 serr 00000000
Sep 12 02:06:14 kg-f2 kernel: siisch0: Error while READ LOG EXT
Sep 12 02:06:16 kg-f2 kernel: siisch0: Timeout on slot 30
Sep 12 02:06:16 kg-f2 kernel: siisch0: siis_timeout is 00040000 ss 40000000 rs 40000000 es 00000000 sts 801f0040 serr 00000000
Sep 12 02:06:16 kg-f2 kernel: siisch0: Error while READ LOG EXT
Sep 12 02:06:18 kg-f2 kernel: siisch0: Timeout on slot 30
Sep 12 02:06:18 kg-f2 kernel: siisch0: siis_timeout is 00040000 ss 40000000 rs 40000000 es 00000000 sts 801f0040 serr 00000000
Sep 12 02:06:18 kg-f2 kernel: siisch0: Error while READ LOG EXT
Sep 12 02:06:20 kg-f2 kernel: siisch0: Timeout on slot 30
Sep 12 02:06:20 kg-f2 kernel: siisch0: siis_timeout is 00040000 ss 40000000 rs 40000000 es 00000000 sts 801f0040 serr 00000000
Sep 12 02:06:20 kg-f2 kernel: siisch0: Error while READ LOG EXT

This happens while I'm copying files (over gigabit Ethernet) to the machine and at the same time running a 'scrub' on the pool in question. Well, I have to test what the server can handle. :)

root@kg-f2# zpool status
 pool: storage
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: scrub in progress for 5h32m, 58.51% done, 3h55m to go
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE     323     0     0  2.32M repaired
    ad14    ONLINE       0     0     0
    ada0    ONLINE       8     0     0  97.5K repaired

errors: No known data errors

 pool: zroot
state: ONLINE
scrub: scrub completed after 0h9m with 0 errors on Sat Sep 11 17:47:15 2010
config:

NAME           STATE     READ WRITE CKSUM
zroot          ONLINE       0     0     0
  mirror       ONLINE       0     0     0
    gpt/disk0  ONLINE       0     0     0
    gpt/disk1  ONLINE       0     0     0

errors: No known data errors
root@kg-f2#

So far it looks good; ZFS delivers.

