OperationalError: ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Client unable to establish connection (0) (SQLDriverConnect)') with pyodbc on macOS

I was getting the following weird error when running pyodbc on my Mac and trying to connect to my Windows Database:

  • OperationalError: ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Client unable to establish connection (0) (SQLDriverConnect)')

Using the wonderful powers of Google I came across this article:


Following the information from JH88:

brew install openssl@1.1

# you might need to delete the old symlink first
# rm /usr/local/opt/openssl

ln -s /usr/local/Cellar/openssl@1.1/1.1.1l /usr/local/opt/openssl

I still couldn't get it to work; it turns out Brew had installed version 1.1.1l_1 on my machine, so I needed to do the following:

rm /usr/local/opt/openssl

ln -s /usr/local/Cellar/openssl@1.1/1.1.1l_1 /usr/local/opt/openssl

After that quick fix everything started working correctly again.
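A slightly more durable version of the same fix is to ask Homebrew where openssl@1.1 actually lives instead of hard-coding the patch release (this sketch assumes an Intel Mac with Homebrew under /usr/local):

```
# Point the symlink at whatever 1.1.x build brew actually installed
rm -f /usr/local/opt/openssl
ln -s "$(brew --prefix openssl@1.1)" /usr/local/opt/openssl
```

That way the link survives the next Homebrew patch-level bump.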

Veeam Warning 1327 while upgrading to version 11.x

I was upgrading our Veeam environment to version 11 and kept getting an error during the upgrade. At some point the main hard drive for the install had been changed, and there were registry entries for Veeam that were causing issues with the upgrade. The error I was getting was:

Warning 1327.Invalid Drive H:

Really not an informative error. I searched through Google and couldn't find anything aside from some references to checking the configuration. After that turned up nothing, I ended up going through the registry and all of the Veeam entries, and found one reference for Veeam pointing to the H: drive. Really painful, and you would think they could make this error a little easier to track down.
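Rather than clicking through regedit by hand, a short Python sketch with the stdlib winreg module can walk the Veeam keys and print anything still pointing at the old drive. The HKLM\SOFTWARE\Veeam path and the H: drive letter below are assumptions based on my setup; adjust them for yours:

```python
def references_drive(value, drive="H:"):
    # True when a registry value string still points at the old drive letter
    return isinstance(value, str) and drive.upper() in value.upper()

def find_veeam_drive_refs(path=r"SOFTWARE\Veeam", drive="H:"):
    import winreg  # Windows-only stdlib module

    hits = []

    def walk(key, keypath):
        num_subkeys, num_values, _ = winreg.QueryInfoKey(key)
        for i in range(num_values):  # check every value on this key
            name, value, _ = winreg.EnumValue(key, i)
            if references_drive(value, drive):
                hits.append((keypath, name, value))
        for j in range(num_subkeys):  # recurse into subkeys
            sub = winreg.EnumKey(key, j)
            with winreg.OpenKey(key, sub) as subkey:
                walk(subkey, keypath + "\\" + sub)

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        walk(key, path)
    return hits
```

Running find_veeam_drive_refs() on the affected server should list every (key, value name, value) triple that still mentions H:.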


Check your Raid: Cisco ASA and Sourcefire

Over the past week I had an issue where one of my Cisco 5545s with a Sourcefire module went down, failed, and couldn't be restarted. When I looked at the console for the SFR module I saw disk errors, so I opened a ticket with Cisco to have them look at it. One thing that I found appalling was how dramatically the quality of Cisco TAC engineers has dropped. I spent more time on the phone with these guys while they didn't know what to do, showing them commands that I had just googled and what needed to be done. If these guys are supposed to be the experts in the device and the technology, I am not impressed, especially since Cisco keeps raising my rates while the quality seems to get lower rather than better.

Back to the issue:

The Cisco 5545 Sourcefire unit has two SSDs in a RAID 1 configuration, so you would think that if one failed the other would take over. At least that is what I thought; it turns out that both of the SSDs had failed, and there was no notification at all on the unit itself or in the logs that one of the drives was bad, let alone both of them. The only place I found it was by running the "sh raid" command on the terminal. After seeing this unit fail, I went through the rest of my 5545s with Sourcefire modules and found two others that had a failed drive, again with no warning and no error lights on the drive or the firewall itself. I had to run the command to find the issue.

Here is what a healthy raid set looks like:

        Version : 1.2
  Creation Time : Fri Feb 19 18:27:16 2021
     Raid Level : raid1
     Array Size : 124969216 (119.18 GiB 127.97 GB)
  Used Dev Size : 124969216 (119.18 GiB 127.97 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Jun  2 20:05:01 2021
          State : clean

 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ciscoasa:0  (local to host ciscoasa)
           UUID : 244baa9a:b6e40506:f7384510:fcb42706
         Events : 12123

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       2       8       16        1      active sync   /dev/sdb

Here’s what an unhealthy raid set looks like:

        Version : 1.2
  Creation Time : Mon May 25 12:42:13 2020
     Raid Level : raid1
     Array Size : 124969216 (119.18 GiB 127.97 GB)
  Used Dev Size : 124969216 (119.18 GiB 127.97 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Jun  2 20:01:32 2021
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : ciscoasa:0  (local to host ciscoasa)
           UUID : 0ed2ca7c:260897dd:f183f4bf:c0f15bfb
         Events : 12258234

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       2       0        0        2      removed

       2       8       16        -      faulty   /dev/sdb

I can't believe there are no logs or notifications; SolarWinds didn't pick up the hardware issues either. You would think some sort of notification would be sent out, or that the HD light on the firewall would turn orange. What a novel concept, notifying people of a failed hardware item before it causes major problems.

So if you run any of the modules in your ASA firewalls, make sure to check the RAID status and that the drives are in a healthy state. If not, open a ticket with TAC, where they can give you such brilliant ideas as moving the faulty drive to another ASA, or swapping the drives (which causes the firewall to crash, so don't do it).
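To avoid getting burned again, the check can be scripted. Here is a minimal sketch using netmiko; the connection details are placeholders, the command is run from the module's CLI as described above, and the health check just parses the mdadm-style output shown earlier:

```python
def raid_is_healthy(sh_raid_output):
    # Healthy means "State : clean" (not degraded) and zero failed devices,
    # matching the mdadm-style output that "sh raid" prints
    text = " ".join(sh_raid_output.split())
    return ("State : clean" in text
            and "degraded" not in text
            and "Failed Devices : 0" in text)

def check_asa_raid(host, username, password):
    # Placeholder connection details; the "show raid" command runs from the
    # SFR module's console, so adapt the session handling to your environment
    from netmiko import ConnectHandler  # pip install netmiko
    conn = ConnectHandler(device_type="cisco_asa", host=host,
                          username=username, password=password)
    output = conn.send_command("show raid")
    conn.disconnect()
    return raid_is_healthy(output)
```

Looping check_asa_raid over every ASA with a module and alerting on False would have caught all three failed drives.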

Script to update Address entries in Fortigate

With our VPN being overutilized, I had to implement split tunneling. However, some web services require a known IP address for access, and unfortunately those services are on AWS, where the IP addresses change often. I built this script to look up the addresses and then update the Fortigate firewall's VPN routing list, making sure that the traffic goes over the VPN tunnel and reaches the service from our known IP address.


#Update Epsilon address objects on the Fortigate firewall VPNs

from nslookup import Nslookup
from netmiko import ConnectHandler
import cred

device1 = {
    "host": cred.hostname,
    "username": cred.rancid_username,
    "password": cred.rancid_password,
    "device_type": "fortinet",
    "secret": cred.rancid_password,
}

#Connect to the Fortinet
net_connect = ConnectHandler(**device1)

#Listing of the domains to query
DOMAIN_FILE = open("domains.txt", "r")
#DNS server to query
DNS_SERVER = ["x.x.x.x"]

#Queries the specified DNS server to get the info for the URLs
dns_query = Nslookup(dns_servers=DNS_SERVER)

FILE_CONFIG = open("config.txt", "w")
FILE_CONFIG.write("config firewall address" + "\n")

for line in DOMAIN_FILE:
    line = line.rstrip("\n")
    #Write one address entry per IP returned for the domain
    ips_record = dns_query.dns_lookup(line)
    ORDERNUMBER = 0
    for x in ips_record.answer:
        ORDERNUMBER += 1
        FILE_CONFIG.write("edit " + line + "_" + str(ORDERNUMBER) + "\n")
        #Host addresses from DNS get a /32 mask
        FILE_CONFIG.write("set subnet " + x + " 255.255.255.255" + "\n")
        FILE_CONFIG.write("next" + "\n")

FILE_CONFIG.write("end" + "\n")
FILE_CONFIG.close()
DOMAIN_FILE.close()

#Write the generated configuration to the Fortigate
output2 = net_connect.send_config_from_file(config_file="config.txt")

Python script to pull IP addresses from Meraki and update Microsoft trusted locations in Azure AD

I needed a script that would automatically get all of the IP addresses for my stores, upload them to Microsoft on a weekly basis, and create a trusted location. I did this so that the stores wouldn't have to use MFA and could log in with just a username and password. Since all of the data was sitting in Meraki, I decided to pull it from there and then upload it to Microsoft. The script goes through all of the wireless devices in my Meraki org, updates Microsoft with that data, and sends an email when it has finished successfully. I've posted the script for reference and anonymized the portions that I didn't put into the cred.py file.

I used the wireless devices from the Meraki cloud because they were easier to pull out and integrate into the script.

I posted the code to my GitHub repository so that others can see what I did and make recommendations for changes or other things that should be added.
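For reference, the two halves of the approach can be sketched in a few lines: pull the public IPs of the wireless devices from the Meraki dashboard, then build the Microsoft Graph ipNamedLocation body to POST as a trusted location. The API key, org ID, and display name below are placeholders, not my production values:

```python
def build_named_location(display_name, ip_addresses, trusted=True):
    # Body for POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations
    return {
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": display_name,
        "isTrusted": trusted,
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange",
             "cidrAddress": ip + "/32"}  # one host range per store IP
            for ip in ip_addresses
        ],
    }

def get_wireless_public_ips(api_key, org_id):
    # Collect the public IP of every wireless device in the Meraki org
    import meraki  # pip install meraki
    dashboard = meraki.DashboardAPI(api_key, suppress_logging=True)
    statuses = dashboard.organizations.getOrganizationDevicesStatuses(org_id)
    return sorted({d["publicIp"] for d in statuses
                   if d.get("productType") == "wireless" and d.get("publicIp")})
```

The named-location payload then gets POSTed to Graph with an app registration that has the Policy.ReadWrite.ConditionalAccess permission.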




Future changes for the script:

  • Keep just one database of the IP addresses and update it as needed, instead of creating a new site
  • Additional error checking in case the script doesn't run

Microsoft MFA login with Fortigate and Forticlient for SSLVPN

Since I am tired of being a beta tester for Cisco products, I decided to try a different firewall this time around for my company. I looked at both Fortigate and Palo Alto, as they seem to be the leaders in the market right now. I did a bake-off of features and functionality versus cost, and Fortigate came out the winner. The firewall was implemented with minimal issues and has been working flawlessly for us. While we were on this project we were also in the process of moving to Azure AD, so I decided to combine Microsoft MFA with our new firewall/VPN solution to save ourselves some money, since we then wouldn't need a separate two-factor solution.

I went through the documentation from Fortigate and Microsoft on setting up SAML authentication, and it was pretty good for the most part. Here is the main document that I followed to get everything set up:
I did run into a few issues that I had to fix to get group memberships working, so that users would be allowed to log in based on their group and would have the correct policy applied to them.

Here are some things to be aware of and the changes I needed to make:

1. You must be on the 6.4.x code for Fortigate. There are issues with lower code versions where SAML does not work correctly or populate the tables with the necessary information.
2. Wipe out all of the extra entries under Users and Attributes Claims in Azure AD. This is all you should have:
3. Here is the necessary configuration on the Fortigate side:
config user saml
    edit "azure"
        set cert "Fortinet_Factory"
        set entity-id "https://XXXXXXX/remote/saml/metadata"
        set single-sign-on-url "https://XXXXXX/remote/saml/login"
        set single-logout-url "https://SSSSSSSS/remote/saml/logout"
        set idp-entity-id "https://sts.windows.net/6XXXXXXX/"
        set idp-single-sign-on-url "https://login.microsoftonline.com/XXXXX/saml2"
        set idp-single-logout-url "https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0"
        set idp-cert "REMOTE_Cert_2"
        set user-name "username"
        set group-name "group"
    next
end

After these changes everything worked perfectly. I am now in the process of rolling out our new VPN to the users in the company, along with the Microsoft MFA client.

Setting up the WLANPi as a remote capture device for Mac OS over USB

I wanted the ability to bring up Wireshark and start taking packet captures with my WLANPi from my Mac, without always having to sacrifice my wireless connection while doing it, since most recent Macs lack a dedicated Ethernet interface and I don't always have a dongle with me. My requirement was to keep everything as stock as possible, so that all I would have to do is hook the WLANPi up to my machine, make sure it was running, and then take wireless packet captures.

  1. I copied over my public key to the wlanpi under the default address. I wanted it to be as simple as possible and why mess with the generic user: ssh-copy-id -i ~/.ssh/id_rsa.pub wlanpi@wlanpi.local
  2. By following this wonderful GitHub project from Adrian Granados, there are only a few modifications that need to be made.
  3. When you are doing this part of his setup, the username will be wlanpi. $ sudo groupadd pcap
    $ sudo usermod -a -G pcap wlanpi
    $ sudo chgrp pcap /usr/sbin/tcpdump
    $ sudo chmod 750 /usr/sbin/tcpdump
    $ sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump
  4. The server name is going to be wlanpi.local and the username is going to be wlanpi.
    This is the tricky part: you need to specify your private key in the config, but you can't browse to your .ssh directory by default. So when you click on the … and it brings up the directory window, press "Command + Shift + G" and then type ~/.ssh in the search field.
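Once the key and the tcpdump permissions are in place, the same plumbing can be tested by hand from the Mac's terminal by piping a remote tcpdump straight into Wireshark. The wlan0 interface name and Wireshark install path here are assumptions; check yours first:

```
ssh wlanpi@wlanpi.local "/usr/sbin/tcpdump -i wlan0 -U -w -" | /Applications/Wireshark.app/Contents/MacOS/Wireshark -k -i -
```

If live packets show up in Wireshark, the remote capture setup is working end to end.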

Passing the CWSP – PearsonVue Online

I passed the CWSP on my first try last week, which I was happy about. With Covid-19 running rampant and changing things, I had to take the test at home with PearsonVue. I liked the idea of testing at home and ran through all of the steps that PearsonVue recommended. That didn't stop me from needing to reboot my machine multiple times and spend quite a bit of time getting set up and talking to their support people over chat. It took about 30 minutes for them to finally present the test so that I could take it online successfully.

Once I finally got the technical issues with their service resolved and was able to take the test, I was surprised at how quickly it went, and I was glad I could do it at home, seeing as it would have taken me longer to drive to and from the testing center than to take the test. The content was tough, even though I deal with network security and building secure networks almost every day.

So now I get to display this logo:

(CWSP logo)

My next exam is going to be the CWDP and given the fact that I am still working from home and testing centers aren’t open I will be taking this one also at home. Hopefully the whole testing experience will go better.

How to rebuild an F5 Physical Load Balancer

Because I always forget this, and it always causes me more pain than it should to rebuild one. This has happened three times in my eight years of dealing with the physical 1600 LTMs; all of them failed due to some power problem that wouldn't let them start up completely, and I ended up spending eight or more hours rebuilding them and figuring out what the heck happened. Luckily they have always been in a fault-tolerant pair, so I have never been down completely, but I never want to push how long one is down given how important they are to my company.


Call into Support and open a ticket with the s/n of the failed unit and the error message on the screen.

If you don't already have enhanced 4-hour replacement, ask for an upgrade to it via credit card. Waiting more than 4 hours is very painful and dangerous for us.

Wait 4 hours for the new unit to come in.

While waiting:
Unrack the failed unit, making sure that all of the cables are correctly labeled and ready to be plugged into the new unit.

Download the current version ISO along with any hot fixes to match the current install version. Download your latest backup for the unit and have it all ready and waiting to go on your laptop.

On the active unit, make sure to clear out any SSH keys from the failover interface if needed.

Also reset the Device Trust under Device Management > Device Trust on the active unit.

When the new unit finally arrives, rack it and plug in at least the serial cable and the management Ethernet cable. Before powering on, plug in the recovery USB stick, if the unit came with one, loaded with the version of LTM that you need. This will greatly simplify the upgrade process and get you to at least the major version you need.

Once the unit has been upgraded to at least the major base version that you need, log in via the serial console with root/default and type config. This will let you set the management IP address for the unit.

Once the management address is set, connect to it via the browser with admin/default and start going through the licensing and configuration process.

Upload the hotfixes to the replacement unit if necessary and update to the version needed to restore the backup file. Once the hotfixes are done installing, go ahead and restore the backup to the replacement unit.

Hook up the failover Ethernet cable.

Set back up the HA configuration between the units and ensure that you can SSH between them on their failover interfaces.

Push the configuration from the active unit to the new unit with an override. If it fails or there is any issue during the sync, run this command on the problem unit to see what is wrong:

tmsh show cm sync-status

Once it's all done and happy, it should be back in sync and in an active/standby state.
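If the units refuse to get back in sync, forcing a push from the active unit usually clears it. The device group name below is a placeholder; find yours with tmsh list cm device-group:

```
tmsh show cm sync-status
tmsh run cm config-sync to-group <device-group-name>
```

After the push completes, sync-status should report the group as In Sync.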

Then plug in the last of the cables for the internal/external interfaces and then you should be done.

Pack the old unit up and ship it out.

Reboot Meraki APs

I have found myself needing to reboot all of the APs within a Meraki network several times over the last couple of months, sometimes due to changes and sometimes because they stopped responding for some reason. There really isn't a clean way of rebooting them other than one at a time in the console. I thought, hey, I can make this better and do it via the API. So I built this script: you put in the org ID, it pulls back all of the networks in that org, and it lets you choose one in which to reboot all of the APs. It will ask whether it should go as fast as possible or whether you would like to put in a delay so that they don't all go down at the same time. I've tested it a couple of times and everything works as it's supposed to. As always, I look forward to any comments or updates that I can put into the code to make it better.


As usual, my code isn't fancy or special, just serviceable and able to get done what I need while saving me some time and headaches.
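A condensed sketch of the script's core loop, using the official meraki Python library; the API key, org ID, and interactive prompts are simplified placeholders rather than the exact script:

```python
import time

def is_access_point(device):
    # Meraki AP model names start with "MR" (e.g. MR33, MR46)
    return device.get("model", "").startswith("MR")

def reboot_network_aps(api_key, org_id, delay=0.0):
    import meraki  # pip install meraki
    dashboard = meraki.DashboardAPI(api_key, suppress_logging=True)

    # Pull every network in the org and let the user pick one
    networks = dashboard.organizations.getOrganizationNetworks(org_id)
    for i, net in enumerate(networks):
        print(i, net["name"])
    network_id = networks[int(input("Network number: "))]["id"]

    # Reboot each AP in the chosen network, optionally staggered
    for dev in dashboard.networks.getNetworkDevices(network_id):
        if is_access_point(dev):
            dashboard.devices.rebootDevice(dev["serial"])
            print("Rebooted", dev["serial"])
            if delay:
                time.sleep(delay)  # so the APs don't all drop at once
```

Passing delay=30 staggers the reboots half a minute apart; delay=0 fires them off as fast as the API allows.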