
EMC Celerra Setup Guide for VMware Site Recovery Manager

Cormac Hogan

Product Support Engineering

December 2008

Note: EMC provides the Celerra simulator in a VM for test and training purposes only. EMC does not support the use of the Celerra simulator in production.

The Celerra simulator can be downloaded free of charge from the EMC Powerlink web site and is a great Site Recovery Manager learning tool.

The Celerra simulator has a single Control Station (Management Network). This may be allocated a DHCP address or configured with a static IP address. You will connect to it via a web interface to do some of the configuration.

When deploying the Celerra simulator VM, you will need:

• 3 GB of memory and a single network interface for the Celerra Simulator VM.

• 40 GB of disk space for the virtual disk.

• 2 IP addresses – one for the Control Station and one for the Data Mover.

Time to set up: for a pair of replicated Celerra Simulators, you should allow yourself in the region of 4 hours. The main issue here is the reboot of the simulator. It is slow to start up, and after the VM has started, it may take the Celerra Simulator itself an additional 15 minutes before it becomes manageable.

This is very tricky and not at all intuitive. Do not deviate from the setup steps listed below or you will run into problems.

Part 1 – Control Station Configuration Steps

Import the Celerra virtual appliance onto the ESX host and boot the VM. The simulator runs a modified Red Hat Linux OS.

There are 2 logins configured on the Celerra Simulator:

root/nasadmin

nasadmin/nasadmin

Step 1: Delete any old Data Mover IP addresses. Log in as nasadmin and check using the following command:

[nasadmin@celerra_B_VM ~]$ server_ifconfig ALL -all

dmover_sim_B :

loop protocol=IP device=loop

inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255

UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost

10-21-68-73 protocol=IP device=cge0

inet=10.21.68.73 netmask=255.255.252.0 broadcast=10.21.71.255

UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:ae:41:e3

el31 protocol=IP device=el31

inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost

el30 protocol=IP device=el30

inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

[nasadmin@celerra_B_VM ~]$

The only interfaces that need to be removed are those that use a cgeX device. For instance, in the above output the only such interface is the one called 10-21-68-73. The name is simply a representation of the IP address of the interface. Remove this interface using the following command:

[nasadmin@celerra_B_VM ~]$ server_ifconfig dmover_sim_B -delete 10-21-68-73

dmover_sim_B : done

[nasadmin@celerra_B_VM ~]$

Once this is removed, you can turn to removing and recreating the Control Station network; the Data Mover network will be rebuilt later (Step 3 onwards).

Step 2: To change the Control Station (Management) network settings, as the root user run the command netconfig -d eth0. This allows you to choose DHCP or set up static networking on the interface. After making the change, run ifdown eth0 and then ifup eth0.

Repeat if using a second and/or third interface (eth1 and eth2). However, we will only be using a single interface in this configuration.

Ignore the dart_eth0 and dart_eth1 interfaces – these are used for communicating with back-end storage. In the case of the Celerra Simulator, it communicates with a simulated EMC CLARiiON back-end.

Run ifconfig eth0 to verify that your changes have taken effect. Verify that you can ping the new IP address. You can also ssh to the Control Station if the network is functional.
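
For reference, here is a minimal sketch of the Step 2 sequence as a transcript. The gateway address is an assumption for illustration only – substitute your own addresses throughout:

[root@celerra_B_VM ~]# netconfig -d eth0 (choose DHCP or static in the dialog)

[root@celerra_B_VM ~]# ifdown eth0

[root@celerra_B_VM ~]# ifup eth0

[root@celerra_B_VM ~]# ifconfig eth0 (confirm the new settings)

[root@celerra_B_VM ~]# ping -c 3 10.21.68.1 (assumed gateway address – use your own)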

Step 3: Now we set up the Data Mover networking. To make sure that we are using MAC addresses unique to this Celerra, and not some older MACs from the original cloned Celerra, we have to clean out the old interfaces and re-add them. Login as nasadmin and cd to /opt/blackbird/tools (blackbird is the EMC codename for Celerra), then run the command configure_nic ALL -l, which lists all the defined Data Mover interfaces. This may return something like this:

[nasadmin@celerra_B_VM ~]$ cd /opt/blackbird/tools

[nasadmin@celerra_B_VM tools]$ ./configure_nic ALL -l

---------------------------------------------------------------

server_2: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4

---------------------------------------------------------------

---------------------------------------------------------------

server_3: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4

---------------------------------------------------------------

[nasadmin@celerra_B_VM tools]$

The objective is to clear all these entries, reboot the Celerra, and re-add new entries. To delete the old entries, use the following command for each data mover defined: configure_nic <server_name> -d cgeX.

To clear these entries, run:

[nasadmin@celerra_B_VM tools]$ ./configure_nic server_2 -d cge0

server_2: deleted device cge0.

---------------------------------------------------------------

server_2: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

---------------------------------------------------------------

[nasadmin@celerra_B_VM tools]$ ./configure_nic server_3 -d cge0

server_3: deleted device cge0.

---------------------------------------------------------------

server_3: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

---------------------------------------------------------------

[nasadmin@celerra_B_VM tools]$ ./configure_nic ALL -l

---------------------------------------------------------------

server_2: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

---------------------------------------------------------------

---------------------------------------------------------------

server_3: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

---------------------------------------------------------------

All interfaces to the data movers have now been cleared.

Step 4: Before we reboot, we initialize the Celerra ID, as we want to make sure that both the source and target Celerra IDs are unique when replicating between them.

Change to the root user, go to /opt/blackbird/tools and run the command init_storageID. It asks whether you want to reboot the Celerra; answer y at this time.

I have found this reboot to be slow, so I allow it to sit for a while, then press Ctrl-C and use reboot -n.
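
As a rough sketch, the sequence looks like the following transcript. The exact prompt text from init_storageID may differ from what is paraphrased here:

[root@celerra_B_VM ~]# cd /opt/blackbird/tools

[root@celerra_B_VM tools]# ./init_storageID (answer y when asked whether to reboot)

[root@celerra_B_VM tools]# reboot -n (only if the reboot initiated by the tool hangs)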

Step 5: After logging back in, cd to the /opt/blackbird/tools directory again and run the command ./configure_nic <server_name> -a ethX. For each one of these commands that you run, a new cge interface is added to the data mover. This means that if you add eth0, a cge0 is created which will communicate with the outside world via eth0. Similarly, if you specified eth1, your data mover cge0 interface would communicate with the outside world via eth1. And so on.

[nasadmin@celerraVM tools]$ ./configure_nic server_2 -a eth0

server_2: added new device cge0 in slot 3.

Use server_ifconfig to configure the newly added device

after reboot the virtual machine.

---------------------------------------------------------------

server_2: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4

---------------------------------------------------------------

[nasadmin@celerraVM tools]$ ./configure_nic server_3 -a eth0

server_3: added new device cge0 in slot 3.

Use server_ifconfig to configure the newly added device

after reboot the virtual machine.

---------------------------------------------------------------

server_3: network devices:

Slot Device Driver Stub Ifname Irq Id Vendor

---------------------------------------------------------------

3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4

---------------------------------------------------------------

[nasadmin@celerraVM tools]$

Once again we must reboot. You may notice that I had 2 data movers here. In the Celerra simulator that I have, there appear to be two and I am unsure which is the active one, so I added the interface to both. Normally one would expect to see only a single data mover defined, but to be sure, I am configuring both. Reboot.

Step 6: Login as root and set up the IP address and hostname using the following commands:

[root@celerraVM ~]# ifconfig eth0

eth0 Link encap:Ethernet HWaddr 00:50:56:AF:46:30

inet addr:10.21.68.252 Bcast:10.21.71.255 Mask:255.255.252.0

inet6 addr: fe80::250:56ff:feaf:4630/64 Scope:Link

UP BROADCAST RUNNING MULTICAST DYNAMIC MTU:1500 Metric:1

RX packets:897 errors:0 dropped:0 overruns:0 frame:0

TX packets:261 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:74495 (72.7 KiB) TX bytes:29526 (28.8 KiB)

Interrupt:11 Base address:0x1400

[root@celerraVM ~]# more /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost localhost.localdomain localhost

10.21.68.252 celerraVM

# Internal DART Server Primary Network

[root@celerraVM ~]# cd /etc/sysconfig/

[root@celerraVM sysconfig]# cat network

NETWORKING=yes

FORWARD_IPV4="no"

DOMAINNAME=<domain>

HOSTNAME=celerraVM

[root@celerraVM sysconfig]# hostname celerraVM

[root@celerraVM sysconfig]# hostname

celerraVM

[root@celerraVM sysconfig]#
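
If your Control Station needs a different name, the places to change are the two files shown above plus the running hostname. A quick sketch, using a hypothetical name celerra_A_VM:

[root@celerraVM sysconfig]# vi /etc/sysconfig/network (set HOSTNAME=celerra_A_VM)

[root@celerraVM sysconfig]# vi /etc/hosts (map the Control Station IP to celerra_A_VM)

[root@celerraVM sysconfig]# hostname celerra_A_VM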

Step 7: Update the Celerra identification details

Log off as root, login as nasadmin/nasadmin and run the command nas_cel -l:

[nasadmin@celerraVM ~]$ nas_cel -l

id name owner mount_dev channel net_path CMU

0 localhost 0 127.0.0.1 BB005056AF1EE60000

Notice that the name is localhost. We need to update this to reflect the current Celerra settings. Become root and use the following commands:

[nasadmin@celerraVM ~]$ su -

Password:

[root@celerraVM ~]# NAS_DB=/nas

[root@celerraVM ~]# export NAS_DB

[root@celerraVM ~]# /nas/bin/nas_cel -update id=0

operation in progress (not interruptible)...

spawn /usr/bin/htdigest /nas/http/conf/digest DIC_Authentication BB005056AF1EE60000_BB005056AF1EE60000

Adding user BB005056AF1EE60000_BB005056AF1EE60000 in realm DIC_Authentication

New password:

Re-type new password:

id = 0

name = celerraVM

owner = 0

device =

channel =

net_path = 10.21.68.252

celerra_id = BB005056AF1EE60000

Warning 177********: server_4 : failed to create the loopback interconnect

[root@celerraVM ~]#

Notice that I needed to set the NAS_DB environment variable first. Return to the nasadmin user and re-run the nas_cel -l command:

[nasadmin@celerraVM ~]$ nas_cel -l

id name owner mount_dev channel net_path CMU

0 celerraVM 0 10.21.68.252 BB005056AF1EE60000

[nasadmin@celerraVM ~]$

This looks much better and completes the command line setup. The remainder of the tasks we will implement from the Celerra Manager web interface, namely adding the Data Mover to the network and creating an iSCSI target and LUN.

Part 2 – Data Mover Network Configuration steps

Step 1: Connect to the IP address of your Celerra Simulator eth0 interface and login as nasadmin/nasadmin.

Step 2: Navigate to Data Movers, select your Data Mover, then Network. If you did not clean up the data mover networks as described earlier in the document, it may be that your data mover has some older pre-defined networking, so you will first have to remove that. If no cge interfaces exist, proceed to Step 4.

Step 3: Select the cgeX network interfaces and click the Delete button. Do not touch the elX network interfaces, as these are used for communicating with the simulated back-end storage (CLARiiON).

Once all the old cge network interfaces are removed, we can add a new cge interface. The old interfaces would have been using old MAC addresses; since we set up new ones using configure_nic earlier, we need to re-add the newer interfaces to the data mover using this method.

Step 4: Add the new interfaces by clicking on the New button.

The interface cge0 is correct. Populate the IP Address and Netmask, allow the Broadcast Address to automatically populate and click OK. This uses the second of the two IP addresses that we discussed in the introduction.

Step 5: Verify that you can ping this interface once it is created. Do not try to ping it from the Celerra Control Station – ping it from outside the Celerra, i.e. your desktop.

Use the following commands to test the network connectivity of the Data Mover:

[root@celerra_A_VM_2 ~]# NAS_DB=/nas

[root@celerra_A_VM_2 ~]# export NAS_DB

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all

server_4 :

10-21-68-178 protocol=IP device=cge0

inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255

UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e

loop protocol=IP device=loop

inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255

UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost

el31 protocol=IP device=el31

inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost

el30 protocol=IP device=el30

inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

[root@celerra_A_VM_2 ~]#

You can also try downing this interface and bringing it up again:

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig server_4 10-21-68-178 down

server_4 : done

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all

server_4 :

10-21-68-178 protocol=IP device=cge0

inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255

DOWN, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e

loop protocol=IP device=loop

inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255

UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost

el31 protocol=IP device=el31

inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost

el30 protocol=IP device=el30

inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig server_4 10-21-68-178 up

server_4 : done

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all

server_4 :

10-21-68-178 protocol=IP device=cge0

inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255

UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e

loop protocol=IP device=loop

inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255

UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost

el31 protocol=IP device=el31

inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost

el30 protocol=IP device=el30

inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255

UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

Lastly, since the Control Station eth0 and the Data Mover cge0 share the same MAC address, ensure that the Virtual Switch and VM network are in promiscuous mode.

If you cannot ping the cge0 interface, go back and check the configure_nic steps in Part 1. Until you can ping this interface from outside the Celerra, there is no point in continuing any further.

Part 3 – iSCSI Configuration Steps

The final steps of the configuration are to create and present an iSCSI LUN to our ESX servers.

Step 1: The first thing to do is to license the features that we are going to use. Select the Celerra in the Celerra Manager screen and then the Licenses tab. You will not need any license keys to enable the features; it is a simple matter of enabling them. However, you may first have to initialize the license table. If you fail to enable a license with the error ‘license table is not initialized’, run this command:

[nasadmin@celerra_A_VM_2 ~]$ nas_license -init

done

[nasadmin@celerra_A_VM_2 ~]$

Then repeat the enabling of the licensable features. The features required for the remainder of this guide (iSCSI and replication at a minimum) should be enabled before continuing to the next steps.
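
If you prefer to do this from the command line, nas_license can enable the same features. A sketch, assuming the standard Celerra feature names for iSCSI, SnapSure and Replicator – verify the names on your build with nas_license -list:

[nasadmin@celerra_A_VM_2 ~]$ nas_license -create iscsi

done

[nasadmin@celerra_A_VM_2 ~]$ nas_license -create snapsure

done

[nasadmin@celerra_A_VM_2 ~]$ nas_license -create replicatorV2

done

[nasadmin@celerra_A_VM_2 ~]$ nas_license -list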

Step 2: In Celerra Manager, click on the Wizards button.

Step 3: Click on New iSCSI Target

Step 4: Verify that the Data Mover is correct and click Next

Step 5: Add a target Alias Name (I used celerra_b_sim but it doesn’t really matter what you choose), ensure that Auto Generate Target Qualified Name is checked and click Next.

Step 6: Add the Data Mover Interface to the Target Portals by clicking the Add button. Then click Next.

The IQN incorporates the Celerra ID which was updated back in Part 1, Step 7.

Step 7: Click Finish.

Step 8: Verify that the command was successful and proceed to create the iSCSI LUN and present it to the ESX by clicking Close.

Step 9: Now you can enable software iSCSI on your ESX server and add the IP address of your Data Mover (now an iSCSI target) to the list of Dynamically Discovered Targets. This should be straightforward so I will not document it in detail here.
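
For the CLI-inclined, a sketch of the equivalent steps from an ESX 3.x service console. The vmhba name is an assumption – check which vmhba your software initiator is assigned on your host:

[root@esxhost ~]# esxcfg-firewall -e swISCSIClient (open the iSCSI client port, 3260)

[root@esxhost ~]# esxcfg-swiscsi -e (enable the software iSCSI initiator)

[root@esxhost ~]# vmkiscsi-tool -D -a 10.21.68.178 vmhba32 (add the Data Mover as a discovery address)

[root@esxhost ~]# esxcfg-rescan vmhba32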

Step 10: At the Wizards window, select New iSCSI Lun just above the target option chosen previously.

Step 11: Verify that the Data Mover is correct and click Next.

Step 12: The Target Portals view should display the IP address of your Data Mover that you created earlier. Notice also the IQN used for the interface. Once verified, click Next to continue.

Step 13: A file system (DART?) of 4.7 GB called vol1 has already been set up by default on the simulator. Verify that it is available and selected. Click Next to continue.

Step 14: Create a new LUN of 1600MB. This will hold our demo VM. We make it small so that snapshots can be stored on the file system. Notice also the % of file system used.

Step 15: You should already have added this Data Mover to the list of Dynamic Targets to be discovered by the ESX software iSCSI initiator in Step 9. If you have done this, then you should see the initiator from the ESX available here for masking.

If you do not see the ESX software iSCSI initiator in the Known Initiator list, log onto the VI client for your ESX server, enable your software iSCSI initiator, add the data mover as a target, open the iSCSI port (3260) in the firewall and click rescan. Your ESX software iSCSI initiator should appear in the Known Initiator list.

Click on the Grant button to grant the ESX software iSCSI initiator LUN access for the Protected/Source LUN.

Note: When configuring the recovery side, do not grant access to the recovery/target LUN, as we will not be able to replicate the LUN if you do. Click Next to continue.

Step 16: Click Next to skip over this CHAP screen. We will not be setting up CHAP.

Step 17: Click Finish.

Step 18: Verify that the commands were successful and click Close.

Step 19: On the protected/source side LUN, create a VMFS file system on the iSCSI LUN and run a VM on it. Use one of the small 1 GB JEOS VMs. Accept the default 1 MB file block size, but give the VMFS label something recognisable, like celerra_sim_vol.

Step 20: Repeat these steps for the recovery side Celerra simulator, keeping in mind the masking difference noted at Step 15 (do not grant the recovery side LUN to any initiator), and move on to the final part of the setup, which is replication between the two simulators.

Part 4 – Replication Configuration Steps

We will do most of these steps from within the CLI. The steps can be summarised as follows:

1. Create a trust between the data movers at the local and remote sites.

2. Create an interconnect to allow the data movers to communicate.

3. Set the iSCSI LUN on the recovery Celerra read-only.

4. Configure the replication between the local and remote iSCSI LUNs.

Step 1: Create a trust between the data movers at the local and remote sites.

On both the protection side Celerra and the recovery side Celerra, run:

# nas_cel -create <cel_name> -ip <ipaddr> -passphrase <passphrase>

e.g.

[nasadmin@celerraVM ~]$ nas_cel -l

id name owner mount_dev channel net_path CMU

0 celerraVM 0 10.21.68.252 BB005056AF1EE60000

[nasadmin@celerraVM ~]$ nas_cel -create celerra_B_VM -ip 10.21.68.250 -passphrase vmware

operation in progress (not interruptible)...

id = 1

name = celerra_B_VM

owner = 0

device =

channel =

net_path = 10.21.68.250

celerra_id = BB005056AE2C0F0000

passphrase = vmware

[nasadmin@celerraVM ~]$

The passphrase must be the same in both cases. I use vmware as the passphrase.
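
The matching command on the recovery side Celerra points back at the protected side. A sketch using this guide's own addresses, and naming the entry celerra_VM to be consistent with the interconnect listing shown later:

[nasadmin@celerra_B_VM ~]$ nas_cel -create celerra_VM -ip 10.21.68.252 -passphrase vmware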

To verify the trust relationship, run this command on both Celerras:

# nas_cel -l

[nasadmin@celerraVM ~]$ nas_cel -l

id name owner mount_dev channel net_path CMU

0 celerraVM 0 10.21.68.252 BB005056AF1EE60000

1 celerra_B_VM 0 10.21.68.250 BB005056AE2C0F0000

[nasadmin@celerraVM ~]$

Step 2: Create an interconnect to allow the data movers to communicate

On both the protection side Celerra and the recovery side Celerra, run:

# nas_cel -interconnect -create <name>

-source_server <movername>

-destination_system {<cel_name> | id=<cel_id>}

-destination_server <movername>

-source_interfaces {<interface_name> | ip=<ipaddr>} [,{<interface_name> | ip=<ipaddr>},...]

-destination_interfaces {<interface_name> | ip=<ipaddr>} [,{<interface_name> | ip=<ipaddr>},...]

• From the source/protected side:

[nasadmin@celerraVM ~]$ nas_cel -interconnect -create srm_inter -source_server server_4 -destination_system celerra_B_VM -destination_server dmover_sim_B -source_interfaces ip=10.21.68.75 -destination_interfaces ip=10.21.68.73

operation in progress (not interruptible)...

id = 20003

name = srm_inter

source_server = server_4

source_interfaces = 10.21.68.75

destination_system = celerra_B_VM

destination_server = dmover_sim_B

destination_interfaces = 10.21.68.73

bandwidth schedule = use available bandwidth

crc enabled = yes

number of configured replications = 0

number of replications in transfer = 0

current transfer rate (KB/sec) = 0

average transfer rate (KB/sec) = 0

sample transfer rate (KB/sec) = 0

status = The interconnect is OK.

• From the target/recovery side:

[nasadmin@celerra_B_VM ~]$ nas_cel -interconnect -create srm_inter -source_server dmover_sim_B -destination_system celerra_VM -destination_server server_4 -source_interfaces ip=10.21.68.73 -destination_interfaces ip=10.21.68.75

operation in progress (not interruptible)...

id = 20003

name = srm_inter

source_server = dmover_sim_B

source_interfaces = 10.21.68.73

destination_system = celerra_VM

destination_server = server_4

destination_interfaces = 10.21.68.75

bandwidth schedule = use available bandwidth

crc enabled = yes

number of configured replications = 0

number of replications in transfer = 0

current transfer rate (KB/sec) = 0

average transfer rate (KB/sec) = 0

sample transfer rate (KB/sec) = 0

status = The interconnect is OK.

Check the status of the interconnects:

[nasadmin@celerraVM ~]$ nas_cel -interconnect -list

id name source_server destination_system destination_server

20001 loopback server_4 unknown unknown

20003 srm_inter server_4 celerra_B_VM dmover_sim_B

[nasadmin@celerra_B_VM ~]$ nas_cel -interconnect -list

id name source_server destination_system destination_server

20001 loopback dmover_sim_B unknown unknown

20003 srm_inter dmover_sim_B celerra_VM server_4

Note: The interconnect name must be the same on both Celerras.

You can use the -info option for additional information.
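
For example, a sketch, taking the id value from the listing above:

[nasadmin@celerraVM ~]$ nas_cel -interconnect -info id=20003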

Step 3: Set the iSCSI LUN on the recovery Celerra read-only

If you followed the steps correctly in Part 3, Step 15, you will not have presented the LUN at the recovery/target side to any initiators. Therefore you can go ahead and make this LUN read-only for replication using the following command:

# server_iscsi <movername> -lun -modify <lun_number> -target <target_alias_name> -readonly yes

e.g.

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -lun -modify 0 -target celerra_b_dm -readonly yes

If this command succeeds, skip to step 4. However, if you did present the LUN, the command to make the LUN read-only will fail with the error:

cfgModifyLun failed. LUN 0 is used by initiators and cannot be modified to Read-Only.

Error 4020: dmover_sim_B : failed to complete command

To unmask the LUN from the initiator, type the following command:

# server_iscsi <movername> -mask -clear <target_alias_name> -initiator <initiator_name>

e.g.

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -mask -clear celerra_b_dm -initiator iqn.1998-01.com.vmware:cs-pse-d02-2954bbcd

dmover_sim_B : done

Now retry your attempt to make the LUN read-only to allow us to use it in replication:

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -lun -modify 0 -target celerra_b_dm -readonly yes

dmover_sim_B : done

Step 4: When the LUN has been made read-only, apply the LUN mask to assign it to the recovery side ESX software iSCSI initiator using the command:

# server_iscsi <movername> -mask -set <target_alias_name> -initiator <initiator_name> -grant <lun_number>

e.g.

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -mask -set celerra_b_dm -initiator iqn.1998-01.com.vmware:cs-pse-d02-2954bbcd -grant 0

dmover_sim_B : done

[nasadmin@celerra_B_VM ~]$

Step 5: Now we are finally ready to do the replication. Use the following command from the source/protected Celerra:

# nas_replicate -create <name>

-source -lun <lun_number> -target <source_target_iqn>

-destination -lun <lun_number> -target <destination_target_iqn>

-interconnect {<name> | id=<interconnect_id>}

[-source_interface {ip=<ipaddr> | <interface_name>}]

[-destination_interface {ip=<ipaddr> | <interface_name>}]

[{-max_time_out_of_sync <minutes> | -manual_refresh}]

-overwrite_destination [-background]

e.g.

[nasadmin@celerraVM ~]$ nas_replicate -create srm_replic -source -lun 0 -target iqn.1992-05.com.emc:bb005056af1ee60000-8 -destination -lun 0 -target iqn.1992-05.com.emc:bb005056ae2c0f0000-10 -interconnect srm_inter -source_interface ip=10.21.68.75 -destination_interface ip=10.21.68.73

OK

[nasadmin@celerraVM ~]$

If you get the OK response, it means that the replication request was successful. Check the status of the sync by running the following commands:

[nasadmin@celerraVM ~]$ nas_replicate -l

Name Type Local Mover Interconnect Celerra Status

srm_replic iscsiLun server_4 -->srm_inter celerra_B_VM OK
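
For more detail on an individual session (state, last sync time, and so on), nas_replicate also takes an -info option. A sketch, using the session name created above:

[nasadmin@celerraVM ~]$ nas_replicate -info srm_replic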
