Tuesday, July 22, 2014

Cisco power levels - Lightweight APs and channel and power plan (what you thought you knew)

I was recently called out to investigate why a wireless network was having voice over Wi-Fi issues.  It had been fine several months ago, but now nurses were experiencing dropped calls as they went into patient rooms.  I opened up the Prime Infrastructure floor plan to check whether any APs were offline or overloaded, and to see the power and channel plan.  At first glance, everything appeared normal.  Well, as normal as I would expect for a WLAN that had been designed for 802.11g data with APs in the hallways and is now running voice on the 5GHz radios.  As you can see, most of the APs are at power level 1 and transmitting at 50mW.  Or are they…



When I visited the site, I had seen an 1130 series AP on the ceiling and incorrectly assumed that the entire hospital was most likely that model.  Turns out it was one of three, and the rest were 1242 series.


We’ve always been told that power level 1 is “the highest power”, and we assume that means either 100mW for 802.11b or 50mW for OFDM.  Turns out a little digging reveals that isn’t always the truth.  Stay tuned.
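The relationship between power level numbers and actual output can be modeled numerically.  This is a rough sketch, not Cisco code: it assumes each power level step is 3 dB below the previous one, with level 1 pinned at whatever maximum the channel allows.

```python
# Hypothetical helper (not Cisco code): model how power level numbers map
# to actual transmit power. Power level 1 is the maximum the AP may use on
# its current channel; each higher level number drops roughly 3 dB, i.e.
# halves the power.
def tx_power_dbm(max_dbm, level):
    return max_dbm - 3 * (level - 1)

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

# If the channel allows 17 dBm, power level 1 is ~50 mW:
print(round(dbm_to_mw(tx_power_dbm(17, 1))))  # → 50
# But if the channel only allows 11 dBm, power level 1 is ~13 mW:
print(round(dbm_to_mw(tx_power_dbm(11, 1))))  # → 13
```

The point: "power level 1" tells you nothing by itself — the actual milliwatts depend entirely on the per-channel maximum, which is exactly what bit us here.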


I used AirMagnet Survey Pro to see what the 5GHz WLAN spectrum looked like on the floor.  I used a 15-foot guess range and a Proxim 8494 adapter, which, in my opinion, is most likely better than the chipset in the Cisco 7925 phones the nurses are using.  I dialed it down to -65dBm and this is what I saw (below).  I concur with what the nurses are complaining about: when a user approaches the window, there are a lot of dropped calls.




After seeing the results, I Googled and found an old blog post from George Stefanick.  Here is his post on power levels:




I used his blog post to remind myself of the debug commands and ran those commands while onsite and made a quick and dirty map on a whiteboard of what we were looking at.


For those of you who are too tired to remember which channels are in which band and don’t feel like looking it up, I’ll jot them down:

UNII-1 channels = 36, 40, 44 & 48
UNII-2 channels = 52, 56, 60 & 64
UNII-2e channels = 100, 104, 108, 112, 116, 120, 124, 128, 132, 136 & 140 (a lot of organizations do not support these)
UNII-3 channels = 149, 153, 157 & 161
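If you'd rather not memorize them at all, the lists above drop straight into a lookup.  A hypothetical helper (not from any Cisco tool), using the channel lists as jotted down:

```python
# Map a 5 GHz channel number to its UNII band, per the lists above.
UNII_BANDS = {
    "UNII-1":  [36, 40, 44, 48],
    "UNII-2":  [52, 56, 60, 64],
    "UNII-2e": list(range(100, 141, 4)),  # 100, 104, ... 140
    "UNII-3":  [149, 153, 157, 161],
}

def unii_band(channel):
    for band, channels in UNII_BANDS.items():
        if channel in channels:
            return band
    raise ValueError(f"not a 5 GHz UNII channel: {channel}")

print(unii_band(56))   # → UNII-2
print(unii_band(132))  # → UNII-2e
```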


Now to use the debug commands to see what we’re looking at on these access points.  (I changed the AP name for obvious reasons.)

debug ap enable TEST-1
debug ap command "show controller do 1" TEST-1

<lots of output omitted>

TEST-1: -Channel Range-   -------Rates------   Max Power Allowed

TEST-1:  36 to  48 by 4    6.0 to 54.0         11
TEST-1:  52 to  60 by 4    6.0 to 54.0         17   (THREE CHANNELS)
TEST-1:  64 to  64 by 4    6.0 to 54.0
TEST-1: 100 to 116 by 4    6.0 to 54.0         17   UNII-2e, not supported by our clients
TEST-1: 132 to 140 by 4    6.0 to 54.0         17   UNII-2e, not supported by our clients
TEST-1: 149 to 153 by 4    6.0 to 54.0         17   (TWO CHANNELS)
TEST-1: 157 to 157 by 4    6.0 to 54.0         14
TEST-1: 161 to 161 by 4    6.0 to 54.0

Wow!  Did you catch that?  Most of the access points were on channels where the maximum allowed power was 11dBm (about 12.6mW).  No wonder we’re not getting signal in some of those locations.  I can only guess that Cisco’s RRM decided to change some channels within the last few months (the channel plan was not frozen) and the result was decreased WLAN coverage.
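The math behind that reaction: converting dBm to milliwatts shows an AP capped at 11 dBm is radiating only about a quarter of the power of one at 17 dBm.  A quick sketch in Python:

```python
# dBm is a log scale: mW = 10^(dBm/10). A 6 dB gap is roughly 4x power.
def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

low, high = dbm_to_mw(11), dbm_to_mw(17)
print(f"11 dBm = {low:.1f} mW, 17 dBm = {high:.1f} mW, ratio = {high / low:.1f}x")
# → 11 dBm = 12.6 mW, 17 dBm = 50.1 mW, ratio = 4.0x
```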

A coworker and I whipped up a quick and dirty channel plan for the patient wing using only channels that support 17dBm.  We decided to use channels 52, 56, 60, 149 & 153.  We assigned one channel twice, made the changes, and resurveyed.  It is my belief that every time you make a dramatic change like this, you NEED to resurvey using a WLAN survey software package.  Here’s the result:
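That selection logic can be sketched from the debug output.  The table below is my transcription of the ranges shown above (the two ranges with a blank max power are omitted, and these are the 1242 values we saw — verify against your own controller output):

```python
# (first_ch, last_ch, step, max_dbm, band) per the "show controller d0" output
RANGES = [
    (36,  48,  4, 11, "UNII-1"),
    (52,  60,  4, 17, "UNII-2"),
    (100, 116, 4, 17, "UNII-2e"),
    (132, 140, 4, 17, "UNII-2e"),
    (149, 153, 4, 17, "UNII-3"),
    (157, 157, 4, 14, "UNII-3"),
]

def usable_channels(min_dbm=17, exclude=("UNII-2e",)):
    """Expand every channel range that allows at least min_dbm,
    skipping bands our clients don't support."""
    chans = []
    for first, last, step, max_dbm, band in RANGES:
        if max_dbm >= min_dbm and band not in exclude:
            chans.extend(range(first, last + 1, step))
    return chans

print(usable_channels())  # → [52, 56, 60, 149, 153]
```

Filtering for 17dBm and dropping UNII-2e lands on exactly the five channels we picked.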

Remember that this is a hospital.  Rooms are available one minute, and two hours later they’re not.  Rooms open and close all the time, and we just have to accept that.

That room with the gray area is still a problem for us, even at an increased power level.  We’re going to do a complete redesign of the WLAN in the near future which will take care of the problem areas. 

After making the change and resurveying, we took two 7925 series phones to the floor and called each other.  I placed one phone on my laptop, situating the mouthpiece over the laptop’s speaker, and played Beethoven while we walked through every room we could get into.  We never dropped the call.

If anyone has a link to where Cisco documents this in a comprehensive spreadsheet, please let us know!

Tuesday, July 1, 2014

Adding a VLAN to a trunk for a WiSM1 WLAN controller


I came across something today that I thought was a bit odd.  When adding a VLAN to a trunk in a core switch for a WiSM1 module, the command is slightly different from the usual “add” command.

The following two commands:

wism module 2 controller 1 allowed-vlan 2347

wism module 2 controller 2 allowed-vlan 2347

Result in the following configuration.  There’s no “add”…

wism module 2 controller 1 allowed-vlan 101,102,778,2347

wism module 2 controller 2 allowed-vlan 101,102,778,2347

Here’s the CLI help showing the lack of an “add” keyword…

My_Core_Switch(config)#wism module 2 controller 1 allowed-vlan ?

  WORD  vlan range 1-1001,1006-4094
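In other words, the wism allowed-vlan command behaves like an implicit merge: the VLAN you specify is folded into the existing allowed list rather than replacing it.  A tiny model of that behavior (plain Python, obviously not Cisco code):

```python
# Model the implicit-add semantics of "wism ... allowed-vlan":
# the new VLAN list is merged into the current one, not substituted for it.
def merge_allowed_vlans(current, new):
    vlans = set(map(int, current.split(","))) | set(map(int, new.split(",")))
    return ",".join(str(v) for v in sorted(vlans))

print(merge_allowed_vlans("101,102,778", "2347"))  # → 101,102,778,2347
```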


What I found was that issuing the command for wism module 2 controller 1 (to add VLAN 2347) resulted in one missed ping from my workstation to the management IP of the controller.  Before making the change, I had started a continuous ping from my wired workstation to the WLAN controller.  This has never happened when using the normal “add” command to add a VLAN to a trunk facing a 4404 or 5508 WLAN controller.

The real problem came when entering the command for the second controller, "wism module 2 controller 2 allowed-vlan 2347".  The second controller lost 25 of the continuous pings.  I was able to replicate this on the backup WiSM farm.  Same exact behavior.  My test workstation on Wi-Fi lost connectivity for almost two minutes.  I assume the trunk to the WLAN controller went down.

I didn’t see anything in the switch’s logs about the interface going up and down.  However, the WLAN controller’s logs showed that it definitely took the trunk down for 90 seconds or so, depending on how you read and interpret the traps.