Let me start by saying that I am very new to OpenIndiana and Solaris
10/11 in general; I normally deal with Red Hat Linux. I want to use OI
for its ZFS support on a VMware shared-storage server, mounting LUNs
from my SAN.

Setup:

Two servers, each with multiple Fibre Channel connections cabled
directly to my SAN (no SAN switch).


Situation:

I have multipathing working, and I can create the zpool with no
problem using the multipath disk device:
---
[email protected]:~# zpool create lun00 c2t60060E80104B8F6004F327FE00000000d0
[email protected]:~# zpool status lun00
  pool: lun00
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        lun00                                    ONLINE       0     0     0
          c2t60060E80104B8F6004F327FE00000000d0  ONLINE       0     0     0

errors: No known data errors
---

Now I can export the pool from nfs01 and import it on nfs02 with no
problem:

---
[email protected]:~# zpool status lun00
  pool: lun00
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        lun00                                    ONLINE       0     0     0
          c2t60060E80104B8F6004F327FE00000000d0  ONLINE       0     0     0

errors: No known data errors
---
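For reference, the sequence I use to move the pool between the hosts is
just the standard export/import, nothing unusual (reconstructed sketch
below; no -f is needed):

---
# on nfs01
zpool export lun00
# on nfs02
zpool import lun00
---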

The issue comes up when I then export the pool from nfs02 and try to
import it again back on nfs01:

---
[email protected]:~# zpool import lun00
Assertion failed: rn->rn_nozpool == B_FALSE, file
../common/libzfs_import.c, line 1093, function zpool_open_func
Abort (core dumped)
---
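I'm happy to gather more diagnostics. For example, I can post the
output of the following from both hosts (multipath state, visible
disks, and the ZFS on-disk labels; the slice in the zdb command is my
guess for an EFI-labeled whole-disk vdev and may differ):

---
mpathadm list lu
echo | format
zdb -l /dev/dsk/c2t60060E80104B8F6004F327FE00000000d0s0
---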

No matter how many times I try to import the pool on nfs01, I hit this
assertion. Both servers are running the same version of OI with all
the same updates; they are also identical machines, purchased and
specced at the same time for this project.


Any guidance would be appreciated.


-- 
Jason Cox

_______________________________________________
OpenIndiana-discuss mailing list
[email protected]
http://openindiana.org/mailman/listinfo/openindiana-discuss