<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement? in Aerohive Migrated Content</title>
    <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76909#M7283</link>
    <description>&lt;P&gt;Has anyone gotten a solid response from any level at Aerohive in regard to resolving this situation?  I've now inquired with sales engineering and our sales rep twice.  While we seem to be stable (after turning off all the bells &amp;amp; whistles) for months now, I can't get word on whether it would be wise to update our system; we're still running NG at 12.8.3.3-NGVAFEB19.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Has anyone disabled the following while running a higher version, with success?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;B&gt;Application Visibility and Control&lt;/B&gt; - turn off&lt;/LI&gt;&lt;LI&gt;&lt;B&gt;Collect statistics every X minutes&lt;/B&gt; - set to 60&lt;/LI&gt;&lt;LI&gt;&lt;B&gt;Kernel Diagnostic Data Recorder (KDDR)&lt;/B&gt; - turn off&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Thu, 06 Feb 2020 21:21:41 GMT</pubDate>
    <dc:creator>k_berrien</dc:creator>
    <dc:date>2020-02-06T21:21:41Z</dc:date>
    <item>
      <title>Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76891#M7265</link>
      <description>&lt;P&gt;After moving from Classic to NG in our on-prem environment (about a month ago), we started seeing issues: unable to push out configs, errors in data reporting (# of clients connected, etc.).  A call to support brought up an obscure and rare hardware requirement for NG on-prem: no SAN support; it must be a directly connected drive, preferably SSD.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My colleagues are skeptical; this is a highly unusual requirement for a VM environment in our experience.  How did the rest of you address this, specifically?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Tue, 13 Aug 2019 03:10:03 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76891#M7265</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-08-13T03:10:03Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76892#M7266</link>
      <description>&lt;P&gt;We are in the same boat.  We didn't have any issues until 19.5.1.7, and I do not think they had the SAN requirement until recently.  Currently we seem to be SOL, since Aerohive won't support their product and we have no local disk to even test whether it would resolve the issues we are seeing.  My feeling is that this requirement is absolutely ridiculous for virtualization.  I would very much like to know how you solved this issue, because we are ready to dump Aerohive over this.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Aug 2019 22:11:55 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76892#M7266</guid>
      <dc:creator>j_cross</dc:creator>
      <dc:date>2019-08-13T22:11:55Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76893#M7267</link>
      <description>&lt;P&gt;Jesse, I'll keep you in mind if we end up at a resolution (or not).  We're in day 1 of this.  We recently sent staff to Aerohive training where it was all NG and a big push to migrate from Classic.  No mention of this obscure requirement, and since we're new to NG I don't know if the requirement was recently added.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Our virtual environment (Nutanix) will not support directly connected drives, and as they stated, it defeats the entire point of virtualization.  We could force it to live fully on SSD (at a license cost), but that still violates the Aerohive requirements.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the record, we're on 12.8.3.3-NGVAFEB19, running within VMware, 600 APs, for the past month since we moved from Classic, and I spiked it up to 32 GB / 8 cores yesterday (the 5,000-AP requirement) to see what would happen.  Cloud NG has all that big data &amp;amp; machine learning stuff; I'm not sure how much of that is in on-prem, but I'd happily give up the highly detailed reporting and such in exchange for reliable management and basic status information.  We never used the reports/dashboard anyway, as we're gov/edu and don't have the staff or time to take advantage of them.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;At present, we're getting into it with Aerohive at a level above support.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But please, again to all... if anyone has found a resolution, please let us both know your details.&lt;/P&gt;</description>
      <pubDate>Wed, 14 Aug 2019 00:32:11 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76893#M7267</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-08-14T00:32:11Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76894#M7268</link>
      <description>&lt;P&gt;Jesse, how many APs are in your environment?  I just exchanged info with another user who is running 12.8.2.2 on virt/SAN but only has 137 APs - they're not seeing any issues as of yet.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are running 600 APs with minimal connections (summer vacation).&lt;/P&gt;</description>
      <pubDate>Wed, 14 Aug 2019 01:54:35 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76894#M7268</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-08-14T01:54:35Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76895#M7269</link>
      <description>&lt;P&gt;We are only at 50 APs and see the issue.  We never saw it on older versions; the first time it popped up for us was after running 19.5.1.7.&lt;/P&gt;</description>
      <pubDate>Wed, 14 Aug 2019 01:57:30 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76895#M7269</guid>
      <dc:creator>j_cross</dc:creator>
      <dc:date>2019-08-14T01:57:30Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76896#M7270</link>
      <description>&lt;P&gt;We have an update.  We discussed the situation with an Aerohive SE and have made the following suggested adjustments.  As the issue was temporarily fixed for us by an NG reboot, it's unclear whether the changes really 'fixed' our inaccurate connected-devices-per-AP counts and our inability to push configs.  The jury is still out, but I have seen memory utilization drop by 30%.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Adjustments: Edit all network policies - Additional Settings - Device Data Collection (left-hand column)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;B&gt;Application Visibility and Control&lt;/B&gt; - turn off&lt;/LI&gt;&lt;LI&gt;&lt;B&gt;Collect statistics every X minutes&lt;/B&gt; - set to 60&lt;/LI&gt;&lt;LI&gt;&lt;B&gt;Kernel Diagnostic Data Recorder (KDDR)&lt;/B&gt; - turn off&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then push to all APs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This disables much of the statistical reporting; our dashboard mostly comes up blank now.  So NG has less work to do storing and accessing all the extensive information, such as "AP #5 had 30 GB of Netflix in the last hour", etc.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We could disable &lt;B&gt;Statistics Collection&lt;/B&gt; as well, but for now we're leaving that on.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I kind of think of this as NG Lite, maybe...  We really never used the statistical data within Classic, and NG has a whole lot more (we don't have the time or staff to leverage it), so disabling this data reporting isn't a terrible concern, though perhaps someday we'll miss it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is NG 12.8.3.3, 600 APs, summer vacation - low usage, 40 GB memory, 8 cores (though it was suggested we could probably drop to 6 cores).&lt;/P&gt;</description>
      <pubDate>Wed, 21 Aug 2019 20:19:15 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76896#M7270</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-08-21T20:19:15Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76897#M7271</link>
      <description>&lt;P&gt;Have you seen any issues since making these changes?&lt;/P&gt;</description>
      <pubDate>Thu, 05 Sep 2019 21:38:51 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76897#M7271</guid>
      <dc:creator>j_cross</dc:creator>
      <dc:date>2019-09-05T21:38:51Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76898#M7272</link>
      <description>&lt;P&gt;To review, the problems WE experienced were:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Config pushes would not process.&lt;/LI&gt;&lt;LI&gt;Connected-device counts per AP would be wildly inaccurate.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We're into week 2 with students/staff back, and a check each day shows I can push updates and the client connection count is accurate (HiveNG vs. command-line "show station").  We're not at our normal user load yet - I don't see our student BYOD population connecting much yet - but we're certainly much higher than over the summer break.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Before I cheer, I'm waiting to hear of any complaints from the schools (it takes time for people to speak up about problems sometimes, and everyone is busy with school opening).  I'd like to see a month or more with the changes enacted, as we initially ran a month without seeing the problem before we made the NG adjustments.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But so far, things look good.  I would say, if you are experiencing problems, the changes we applied seem pretty benign - you're basically just telling the APs to stop sending all the detailed telemetry - and they can be reversed easily.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Sep 2019 22:14:44 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76898#M7272</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-09-05T22:14:44Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76899#M7273</link>
      <description>&lt;P&gt;A month in now,  we're averaging 4,000 connections a day on 600 APs and things remain the same as my post above.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 19 Sep 2019 19:22:33 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76899#M7273</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-09-19T19:22:33Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76900#M7274</link>
      <description>&lt;P&gt;I have just received this requirement from Aerohive support after opening a third ticket on a similar issue where HiveManager stops receiving data from the APs, all my dashboards show "Data unavailable", and I get a "cannot get required device list" error when trying to get APs.  A reboot fixes the issue for maybe a month or a little longer.  We've been on HiveManager NG since about September 2015, and I've been having issues since probably version 12.8.1.2.  The last upgrade I did, on July 3rd, was a fresh install of 19.5.1.7; everything was fine for about two months, then it happened again about a month ago, but I didn't have time to call in a ticket and just rebooted to fix it.  Then it happened again yesterday, I opened a ticket today, and got that answer.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Of course I went livid, because blaming SAN storage (which has tiered enterprise SSD and multiple 10Gb links) and suggesting that it may cause problems and data corruption struck me as an unacceptable response to the issue we're having.  We've been running MSSQL, Exchange, and even Oracle on AIX for years, and none of them have issues with being on a SAN.  I'm sure that despite the big hyperconverged push by the Dells, Nutanixes, etc., even HiveManager Online hosted on AWS probably has some portions on SAN storage somewhere.  HiveManager NG was working great for us for about two years and then started having these problems after a certain update, while all these other, probably bigger, applications are working fine - so that tells me something broke in HiveManager and they just don't know what.&lt;/P&gt;</description>
      <pubDate>Sat, 02 Nov 2019 04:05:49 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76900#M7274</guid>
      <dc:creator>wagrowski</dc:creator>
      <dc:date>2019-11-02T04:05:49Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76901#M7275</link>
      <description>&lt;P&gt;This seems to be the thread matching the issues we're seeing, so I'm glad to find a group also suffering what we are - though I wish none of us were going through this with what used to be a pretty solid product.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We upgraded to 19.5.1.7 and hit a known issue where the tables/indexes aren't properly cleaned out during the upgrade.  That was messing with our stats collection, and it was taking out our PPSK services when HM would stop servicing much of anything.  I could restart all services from the appliance manager or reboot the VA and we'd be back for about a week.  Support's official fix was to redeploy the VA from scratch and import a copy of the VHM.  I did that and am still experiencing major problems.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I just opened my third support ticket related to this as well, and this time the first question was about whether we were using a SAN.  We are, and I got the "we don't support that."  I'm currently pursuing help by asking for info on the exact error we're receiving when trying to push any kind of config or update ("Could not download the captive web portal file. CWP files abnormal when checking cwp files on AFS.").  Even if they won't help fix my VA, they should be able to explain the error message, and I can work from there.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We're running 200 APs with ~5,000 daily unique clients, also as an academic institution (a small residential 4-year college).  We're heavy users of PPSK, so when HM goes weird it really causes problems for all of our students.  I haven't tweaked the settings Kevin Berrien posted from the defaults, so that's a possible path for a dirty fix.  But I don't find it acceptable that a "local SSD" magically fixes all of this.  You don't build a VMware cluster around local SSD disks instead of a SAN.  We have sub-millisecond latency on HM writes and most reads, much as John Wagrowski's post contends, so it should be just about as fast as a local SATA/SAS SSD.  In fact, our SAN analytics shows HM is only doing about 100-150 IOPS on average.  That's pretty meager compared to the demands of larger applications.  (HM is our biggest IOPS VM in total.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also run Elasticsearch for logging and reporting.  I'm ingesting 30,000-40,000 events per minute to a system backed by &lt;I&gt;shared spinning disk&lt;/I&gt;.  The fact that HM is exploding with 200 devices is nuts, and I'd love to pull some stats out of the HM ES instance to know what's going on in the indexes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Once I get a bit farther, our account SE is definitely getting a note with some feedback.  If I enjoyed conspiracy theories I'd say this is a play to end on-prem HM and force everyone to the cloud - which I'd be happy to do, but we literally can't afford it.  Such a move would subsequently force me to find a new manufacturer.  I've been an Aerohive champion since I selected it in 2012/2013, but this HMNG debacle has got me on the ropes.  I find that really quite disappointing.&lt;/P&gt;</description>
      <pubDate>Fri, 15 Nov 2019 10:16:55 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76901#M7275</guid>
      <dc:creator>aprice</dc:creator>
      <dc:date>2019-11-15T10:16:55Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76902#M7276</link>
      <description>&lt;P&gt;Alan, just read your post and figured I'd drop in with an update.  Now, many months after disabling the analytics, we continue to be issue-free.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, what was supposed to be a temporary fix now appears permanent.  We're still running 12.8.3.3-NGVAFEB19, and I've inquired through the same channels and Aerohive staff who assisted with our "temporary fix" with the simple question - CAN WE UPGRADE?  I have not gotten a response after posing the question about a month ago.  I get the feeling either this is a major sticky issue/denial within the company, or the company has changed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We certainly share your feelings of disappointment.  Every shop has those 2-3 products they will rave about when asked.  Aerohive was one of those products for us, but before any hardware upgrade (or even as a precaution toward resolving our present state) we would likely walk into product demos thinking we almost HAVE to change platforms.  Our situation is similar: we've invested in staff and VM infrastructure that is far above the requirements for any OTHER product our schools or city needs - setting that investment aside and re-purchasing as a cloud service isn't in our budget either.&lt;/P&gt;</description>
      <pubDate>Fri, 15 Nov 2019 21:31:22 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76902#M7276</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-11-15T21:31:22Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76903#M7277</link>
      <description>&lt;P&gt;Hi Kevin.&lt;/P&gt;&lt;P&gt;That's good/interesting to know. While working on other stuff (see below if you're bored) I proactively tuned my settings down to what your SE/post suggested, in order to "eliminate" HMVA congestion from our possible root causes. That's really decimated my client stats, which is pretty awkward to glance at or troubleshoot, but I can live without the application data. Once I get the other bits worked out I may try to go back to 10-minute stat intervals, abandon KDDR (I've never needed it), and probably skip application info until the software backend is...fixed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After getting HMVA stabilized, and some creativity on our Cisco UCS blades to get a local SSD into one of them, we still have some pretty big issues. So while I eliminated this one, thanks to this thread, I'm only one or two steps down the path to a fix. The first thing I encountered is that our HTTPS certificate, of all things, destroyed our ability to push configs to the APs. A call to support reached a tech who'd seen that before; he pointed to the cert, deleted it from our VA, and we could push configs again. That, in turn, broke our API usage to generate WiFi keys...so that's offline for now and people have to write me an email to get a key.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Those two problems aside, I'm curious about one aspect of your particular installation: what APs are you using, and what HiveOS do you have on them?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm still running into massive connectivity and performance problems on our AP230s with 8.4r11. I updated to this build a few weeks ago in order to try to get better Network 360 analytics to troubleshoot the &lt;I&gt;other&lt;/I&gt; connectivity problems. But I think it may have its own problems, and I'm trying to sort out what to do. Ironically, 8.4r11 contains a config option to specify a syslog UDP port, which 8.2r6 (our previous version) does not. So I was able to troubleshoot some issues only by having a version that seems to cause those issues. And it turns out 8.2x, 10.x, and maybe 8.4x have a known bug that causes the 5GHz radio to flap if beamforming is enabled...which I discovered accidentally by 1) reviewing my logs and 2) searching for help with said logging. I'm basically in a problem loop right now. It's awesome.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Anyway, I'd be curious what your model and OS versions are, since those seem to be stable for you with 3x the active APs I've got. Clearly something is still (freshly?) amiss in my setup, and we're only two weeks out from final exams.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(Fun side note: I think Elasticsearch in HMVA is configured to keep indexes open in perpetuity and needs manual intervention to delete older data. I can tell you from personal experience, having bricked my little one-node ES cluster, that this is a &lt;U&gt;great&lt;/U&gt; way to wreck Elasticsearch. Indexes should be closed, or at least moved to a warm state [a newer ES feature], if they aren't being written to. When left open they eat memory and add massive time to system startup as every index is initialized. If too many are open and your config isn't tuned, it can actually trigger the "high water mark" and shut down ALL indexing until you manually clear the issue on every impacted index.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So Aerohive/Extreme, if you're reading this thread, I really hope your next HMVA release includes enhancements to Elasticsearch, a new version, and some index template tuning! Or that this already exists and the VAMS UI just needs some phrasing work to make that clear.)&lt;/P&gt;</description>
      <pubDate>Tue, 19 Nov 2019 11:54:55 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76903#M7277</guid>
      <dc:creator>aprice</dc:creator>
      <dc:date>2019-11-19T11:54:55Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76904#M7278</link>
      <description>&lt;P&gt;Alan, per your query...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We're running mostly AP230s at 8.2r6, then a mix of other 350s (6.5r12), 120s (6.5r10), 121s (6.5r10), and 130s (8.2r6), for a total count of 591 APs, on-prem 12.8.3.3-NGVAFEB19.  We run this on a Nutanix virtual system, with RAID SSD &amp;amp; spindle.  The HM VM is 8 cores, 40 GB RAM.  The HM Virtual Appliance Management System shows 60% memory utilization, and 4% at idle (i.e., not pushing updates or other admin-driven activity).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Our school district is 6K students over 11 schools.  We're about at the point where mobile (Wi-Fi-only) devices outnumber traditional ones, so perhaps 3K wireless devices, plus staff &amp;amp; HS student BYOD (2K high school students).  Our municipal Wi-Fi usage is minor in comparison.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We use 1 PPSK SSID, 1 RADIUS SSID (NPS on 2 separate domains), and the odd open guest network and such in specific locations or at specific times.&lt;/P&gt;</description>
      <pubDate>Wed, 20 Nov 2019 03:34:55 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76904#M7278</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2019-11-20T03:34:55Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76905#M7279</link>
      <description>&lt;P&gt;Thanks Kevin. That's a helpful comparison.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We're all AP230s, which were on 8.4r11.  I reverted to 8.2r6 to try to get rid of another portion of the problems we're having (seems better, but not fixed, so far).  We have the default VA configuration running on Cisco UCS and Nimble storage.  That was fine up until the "not supported" change, and possibly exacerbated by the problems introduced during our last VA upgrade.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We also have 1 PPSK SSID, 1 802.1x SSID with RADIUS/NPS, and an open guest network with speed and port limits.  Looks like we're a bit heavier in clients, but also a single-campus college with fewer spaces and APs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We'll see how these latest changes play out, what's next for the "Cloud IQ" VA (since HM is apparently gone), and when some of these HiveOS problems get fixed.&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2019 08:56:33 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76905#M7279</guid>
      <dc:creator>aprice</dc:creator>
      <dc:date>2019-11-28T08:56:33Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76906#M7280</link>
      <description>&lt;P&gt;I totally agree with that last paragraph!  We're also on 19.5.1.7 (which I believe is the first release to integrate Client/Network 360 into on-prem) with 282 devices.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I just got off the phone with support (1 hour on hold waiting for a pickup and another hour troubleshooting...wait times seem longer since the Extreme purchase).  We had an issue where a bunch of APs got a config months ago with an accidental email address autofilled into it from my browser.  This caused "The CLI 'ssid [email address] qos-classifier [email address] execute&amp;nbsp;failed, cause by: Unknown error".&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;While the config was nowhere in HiveManager, a CLI reset of the AP and a complete config update would bring that misconfig right back.  The only thing that fixes it is an HM GUI "reset to default", which clears everything; you then have to reassign its policy, locations, etc., and do a complete config update.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To me, this very much seems like a bug in HiveManager where it's holding onto data somewhere and not setting configs properly.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To ATAC, this is magically caused by HiveManager running on a SAN.  What kind of logic is that?  We run our VM infrastructure from a SAN with SSD caching, and that thing is not a slouch.  We have DB applications with just as much I/O going on that run just fine.  It seems totally counterintuitive that they would throw a blanket statement across anything they can't figure out: "must be your SAN".&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've checked HiveManager's IOPS, and the majority of the hits are SSD cache hits.  It's also not our highest-running VM in terms of storage hits.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="3ed984a66454490aab254bfbe58499d8_0690c000009YLMGAA4.png"&gt;&lt;img src="https://community.extremenetworks.com/t5/image/serverpage/image-id/271i5235CC4B2AE31DBB/image-size/large?v=v2&amp;amp;px=999" role="button" title="3ed984a66454490aab254bfbe58499d8_0690c000009YLMGAA4.png" alt="3ed984a66454490aab254bfbe58499d8_0690c000009YLMGAA4.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This really does seem like an attempt to eliminate on-prem and force cloud, as you mentioned.  We too love the Aerohive product and have been running it since 2013/2014, but this kind of thing does make me question what to do at our next refresh cycle.  If they as a company want to go this route, do so - but tell customers that's the reason, rather than quietly dropping support for a product they've put out.  I understand it's hard to develop an on-prem product that does the same as cloud, but I would rather hear that instead of this cop-out on actual issues that might be happening.&lt;/P&gt;</description>
      <pubDate>Tue, 10 Dec 2019 02:45:50 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76906#M7280</guid>
      <dc:creator>akoshy</dc:creator>
      <dc:date>2019-12-10T02:45:50Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76907#M7281</link>
      <description>&lt;P&gt;Hi Abe.&lt;/P&gt;&lt;P&gt;Depending on how far through this whole thread you read, and in case your issue returns: we had somewhat similar issues, and the fix ended up being a combination of about four things, which all got buried in my paragraphs...&lt;/P&gt;&lt;P&gt;And who knows, maybe this topic will help the next person who finds themselves stuck on 19.5.1.7.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;I upgraded to 19.5.1.7, which triggered a known bug where the indexes were not properly cleaned up. Fix #1 was to deploy a new VA, export my VHM, shut down the damaged VA, and import the VHM into the new VA. This is relatively easy, but I did lose some VA (not VHM) settings.&lt;/LI&gt;&lt;LI&gt;I still had problems. Fix #2 was to follow @Kevin Berrien's post and turn the data collection metrics way down. That stabilized things enough to start troubleshooting again.&lt;/LI&gt;&lt;LI&gt;After installing SSDs in one of my blades to get back into a "supported" state, the tech found the remaining issues were caused by the HTTPS certificate I had installed to secure the GUI. On the previous VM my certificate worked great, but in 19.5.1.7 it was causing all of my config updates to fail. Support got into the VA as root and reverted the cert to the default self-signed one, and configs worked again.&lt;/LI&gt;&lt;LI&gt;In reviewing all of this, it appears the Elasticsearch indexes driving everything do not get automatically maintained (frozen, closed, or deleted) and must be manually purged through the VAMS interface. If this is true, it will eventually cause nasty problems for a lot of people. I just fought index problems &lt;I&gt;again&lt;/I&gt; on our actual ES cluster after a change in ES 7.x caused our open index count to skyrocket. So... index overhead is really fun to deal with, and it should be fun when it takes out our PPSKs again.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After &lt;B&gt;all&lt;/B&gt; of this, we have what seems to be a stable, happy HMVA again. I am planning to purge indexes quarterly, or sooner if I start to notice problems.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Recently, I turned 10-minute client stats back on (down from the 60 in Kevin's post) because I didn't like the massive gaps/spikes in my data. Things seem to be running fine. I left application stats and KDDR logs disabled. I hope to bring application stats back, since it was neat to know what was going on and it makes the dashboard prettier. But "pretty" is not operationally important, so I'm not in a rush to tempt fate again.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Alan&lt;/P&gt;</description>
      <pubDate>Wed, 11 Dec 2019 10:27:56 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76907#M7281</guid>
      <dc:creator>aprice</dc:creator>
      <dc:date>2019-12-11T10:27:56Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76908#M7282</link>
      <description>&lt;P&gt;On 19.5.1.7-NGVA on-prem (VM on SAN SSD) with 300 APs, we had this problem for the first time in December.&lt;/P&gt;&lt;P&gt;The Elasticsearch service had stopped, and after a reboot it would stay up for two hours at most.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We got the same response from support: SAN is not supported (we were never given this information when we installed HiveManager NG).&lt;/P&gt;&lt;P&gt;After long negotiation, the support tech agreed to connect to the console and run this command: curl -X DELETE "localhost:9200/hm-*?pretty".&lt;/P&gt;&lt;P&gt;He said the problem would come back, and he was right: it returned this morning, after 50 days.&lt;/P&gt;</description>
      <pubDate>Thu, 06 Feb 2020 16:04:18 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76908#M7282</guid>
      <dc:creator>nicolas_lesaint</dc:creator>
      <dc:date>2020-02-06T16:04:18Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76909#M7283</link>
      <description>&lt;P&gt;Has anyone gotten a solid response from any level at Aerohive regarding resolving this situation?  I've now inquired with sales engineering and our sales rep twice.  While we seem to have been stable for months now (after turning off all the bells &amp;amp; whistles), I can't get word on whether it would be wise to update our system; we're still running NG at 12.8.3.3-NGVAFEB19.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Has anyone disabled the following while running a higher version, with success?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;B&gt;Application Visibility and Control -&amp;nbsp;&lt;/B&gt;turn off&lt;/LI&gt;&lt;LI&gt;&lt;B&gt;Collect statistics every X minutes&lt;/B&gt;&amp;nbsp;- set to 60&lt;/LI&gt;&lt;LI&gt;&lt;B&gt;Kernel Diagnostic Data Recorder (KDDR) -&amp;nbsp;&lt;/B&gt;turn off&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Feb 2020 21:21:41 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76909#M7283</guid>
      <dc:creator>k_berrien</dc:creator>
      <dc:date>2020-02-06T21:21:41Z</dc:date>
    </item>
    <item>
      <title>Re: Non-SAN on-prem Aerohive NG hardware requirement.  How did you implement?</title>
      <link>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76910#M7284</link>
      <description>&lt;P&gt;We're on 19.5.1.7, and after everything in this thread we seem to be okay with the following settings:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;AVC: off&lt;/LI&gt;&lt;LI&gt;Stats collection: 10-minute interval&lt;/LI&gt;&lt;LI&gt;KDDR: off&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I do maintain the logs via the VA management system (port 3000 on the server) at the start of each month, purging anything older than 30 days. I'm not clear on whether the system maintains itself correctly, and I haven't wanted to risk it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Bear in mind, this is ALSO after a fresh deployment of the VA and an import of our data. I don't know how badly things will go off the rails during an in-place upgrade, since they sure did for us originally.&lt;/P&gt;</description>
      <pubDate>Fri, 07 Feb 2020 00:58:32 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/aerohive-migrated-content/non-san-on-prem-aerohive-ng-hardware-requirement-how-did-you/m-p/76910#M7284</guid>
      <dc:creator>aprice</dc:creator>
      <dc:date>2020-02-07T00:58:32Z</dc:date>
    </item>
  </channel>
</rss>

