<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic [14.11.2019 00:16:40] Info [CSnapReplicaVmxMerger] Removing scsi from config 'VMHM01_replica'. Current version '4', minimal needed '7' [14.11.2019 00:16:40] Info [VmxFile] Removing Scsi. [14.11.2019 00:16:40] Info [VmxFi in Aerohive Migrated Content</title>
    <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/14-11-2019-00-16-40-info-csnapreplicavmxmerger-removing-scsi/m-p/62458#M1393</link>
    <description>&lt;P&gt;From Veeam Support&lt;/P&gt;&lt;P&gt;As you can see, the replication job creates a brand-new virtual disk for the affected machine.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Based on that, I would suggest the following:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. To get everything back on track, upgrade to at least version 7. Be aware that upgrading will not trigger an automatic cleanup; the replica VMDKs must be cleaned up manually.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2. If that does not help, remove the affected VM from the job and create a brand-new test job for just that VM.&lt;/P&gt;</description>
    <pubDate>Tue, 19 Nov 2019 11:26:12 GMT</pubDate>
    <dc:creator>cstang</dc:creator>
    <dc:date>2019-11-19T11:26:12Z</dc:date>
    <item>
      <title>[14.11.2019 00:16:40] Info [CSnapReplicaVmxMerger] Removing scsi from config 'VMHM01_replica'. Current version '4', minimal needed '7' [14.11.2019 00:16:40] Info [VmxFile] Removing Scsi. [14.11.2019 00:16:40] Info [VmxFi</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/14-11-2019-00-16-40-info-csnapreplicavmxmerger-removing-scsi/m-p/62458#M1393</link>
      <description>&lt;P&gt;From Veeam Support&lt;/P&gt;&lt;P&gt;As you can see, the replication job creates a brand-new virtual disk for the affected machine.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Based on that, I would suggest the following:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. To get everything back on track, upgrade to at least version 7. Be aware that upgrading will not trigger an automatic cleanup; the replica VMDKs must be cleaned up manually.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2. If that does not help, remove the affected VM from the job and create a brand-new test job for just that VM.&lt;/P&gt;</description>
      <pubDate>Tue, 19 Nov 2019 11:26:12 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/14-11-2019-00-16-40-info-csnapreplicavmxmerger-removing-scsi/m-p/62458#M1393</guid>
      <dc:creator>cstang</dc:creator>
      <dc:date>2019-11-19T11:26:12Z</dc:date>
    </item>
    <item>
      <title>Re: [14.11.2019 00:16:40] Info [CSnapReplicaVmxMerger] Removing scsi from config 'VMHM01_replica'. Current version '4', minimal needed '7' [14.11.2019 00:16:40] Info [VmxFile] Removing Scsi. [14.11.2019 00:16:40] Info [VmxFi</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/14-11-2019-00-16-40-info-csnapreplicavmxmerger-removing-scsi/m-p/62459#M1394</link>
      <description>&lt;P&gt;Can you elaborate on your question? What type of HiveManager are you trying to deploy? What resources are available on the server you are using?&lt;/P&gt;</description>
      <pubDate>Tue, 19 Nov 2019 23:18:20 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/14-11-2019-00-16-40-info-csnapreplicavmxmerger-removing-scsi/m-p/62459#M1394</guid>
      <dc:creator>samantha_lynn</dc:creator>
      <dc:date>2019-11-19T23:18:20Z</dc:date>
    </item>
  </channel>
</rss>