It’s been one year since I officially transitioned from systems administration to information security. What a wild ride. This career change didn’t come with a how-to manual, that’s for sure. But armed with years of experience and a lot of determination, I feel I’ve made dramatic progress in both my own professional development and in the organization I support.
I successfully pursued CCSP certification. This was driven by both personal ambition and a genuine need to bring in some additional expertise to support cloud initiatives at my employer. The process took me about 3 months of daily study but it wasn’t particularly bad. The overlap between CISSP and CCSP is significant, so that likely helped quite a bit. I definitely feel it was a worthwhile endeavor and I learned a decent amount from it. You can see more about that here, here, and here.
I levelled up to Pro Hacker on HackTheBox. That also took a while and proved VERY challenging. Hacking is a unique thing to learn since every hack is unique. I’ve found it taps every facet of my IT skillset and forces me to look at things in new ways. Learning to hack and beginning to see things as an attacker has absolutely made me better at defending my organization.
I dove deeply into numerous policy frameworks. While compliance is not security, it is a requirement. To that end I’ve studied up on several regulatory and internal policy frameworks and developed the supporting programs to enforce them.
I’ve run digital red team exercises: hacking into systems at my employer, running physical penetration tests, recording what I find, then working to ensure no one else can do those things.
I’ve scanned for and mitigated vulnerabilities. Repeatedly. *sigh*
Didn’t get to go to Black Hat this year because of The Rona. Boo. I really wanted to. I consider the first time I went to be a pivotal point in my career. It was the moment where I 100% committed to making this change. This year I plan to do summer camp to its fullest. BSides, Def Con, and Black Hat.
What do I want to accomplish this year?
Personally, I want to move up another level or two on HackTheBox and obtain OSCP certification. Those goals should more than keep me busy.
For fun and excitement over my staycation I’m testing out OpenCanary. Like many people I’ve been instructed to find ways to cut costs while maintaining or improving the level of security at work. One of the tools in my toolkit has been our Thinkst Canary device. I wanted to roll more of them out, and probably will in the future, but for now there’s no budget for that. The Canary devices are essentially just listening devices. They act as one or more legitimate services that no one should ever attempt to contact. If someone does contact them, you know something is up. It’s a nice addition to your blue team tool bag.
Ideally I’ll be putting one of these at each of our locations. Found a useful video here which I’m following to get this thing up and running. I have several Pis lying around, so I’ll set this up on a Raspberry Pi 3 B+ and see how it works.
Ideally, I’d like to set this thing up to catch SMB, NFS, RDP, SSH, and Telnet. If it gets a hit it should open a ticket with our service desk and send an alert to the cybersecurity Teams channel.
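For the Teams side of that alerting idea, a small script could reshape a canary hit into a message for a Teams incoming webhook. This is just a sketch: the event keys below mirror the general shape of OpenCanary’s JSON log lines, but the exact field names should be checked against real log output.

```python
def build_alert_payload(event: dict) -> dict:
    """Build a minimal payload for a Teams incoming webhook from a
    canary-style event dict (hypothetical key names)."""
    return {
        "text": (
            f"Canary alert: {event.get('logtype', 'unknown')} "
            f"from {event.get('src_host', '?')} "
            f"against {event.get('dst_host', '?')}:{event.get('dst_port', '?')}"
        )
    }

# Actually sending it would look something like this (requires the
# `requests` package and a real webhook URL from the Teams channel):
#   requests.post(WEBHOOK_URL, json=payload)

payload = build_alert_payload(
    {"logtype": 4002, "src_host": "10.0.0.5", "dst_host": "10.0.0.20", "dst_port": 22}
)
```

Opening a service desk ticket would be the same pattern against whatever ticketing API is in use.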
Here’s a direct link to the GitHub page with the instructions. Apt couldn’t find python-pip or python-virtualenv; I had to use python3-pip and python3-virtualenv. This happened several more times with different commands. Basically, if you try something and it doesn’t work, remember you probably need python3. After running through the instructions I was able to get it running without much effort, though a bit of experience with Linux will be very helpful here. I would not say this is a beginner project.
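Once it’s installed, the services are toggled in `/etc/opencanaryd/opencanary.conf`. A fragment like the one below would cover part of my wish list; the key names here follow the pattern in the sample config OpenCanary ships with, but the exact module names, defaults, and availability (SMB, for instance, is handled via Samba audit logging rather than a built-in listener) should be verified against that sample file. Binding low ports like 22/23 also needs root or the right capabilities.

```json
{
    "device.node_id": "opencanary-site1",
    "ssh.enabled": true,
    "ssh.port": 22,
    "telnet.enabled": true,
    "telnet.port": 23,
    "rdp.enabled": true,
    "rdp.port": 3389
}
```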
Right off the rip I can see one pretty significant difference between the paid service and the open source project. It doesn’t appear the open source project has a GUI. At all. I can work with that, but it does make me appreciate the white glove treatment we get from the paid implementation.
Once I’m done testing out OpenCanary I’ll post a follow up and report back how it works.
I’ll just say it, I love the new Windows Terminal. This thing is great. It appears they finally took some lessons from the open source community, specifically tiling window managers like my beloved i3. It has tabs, panes, configurable color schemes, transparency, hardware accelerated graphics, and can launch shells for cmd, PowerShell, and Azure all right there.
I installed it yesterday after finding out that it’s reached version 1.0. We have the Windows store blocked on corporate devices, so I had to download the package and do an offline installation with Add-AppxPackage. Install was quick and easy.
The panes are an amazing improvement. It’s something I’d gotten VERY used to from having customized i3 installed on my Kali boxes and I always missed it when I had to switch back to Windows. So it’s nice to be able to just spawn new panes and switch around with keyboard shortcuts.
I haven’t dug too deeply into the settings config file yet. But it looks easy enough and includes links for more info. I absolutely plan to rice this thing as heavily as I do my Kali installs. But for now I’m just happy to have this thing available.
I’ll often use multiple terminals to do things like keep multiple ping loops running, generally while making routing/switching changes and the like. I’ll have one loop per site. I’ve always hated arranging the windows manually. I also often use multiple windows when stacking PowerShell remote sessions, so it will also be nice to just pop a pane and let it go. You can set custom environment variables for each platform as well, so I can set specific variables for POSH and CLI. That will be quite handy too.
So there you have it, and I’ll say it one more time. I love the new Windows Terminal. Kudos to the team who built it.
So I’d never heard of “compensating controls in a hybrid cloud” before. I learned about it today while reading the CCSP book. I knew the concept, but never formally. I’ve always made a point to keep things monitored. I’ve also implemented redundant monitoring before. But reading about this has made me want to standardize this technique as a new baseline going forward.
I’m envisioning redundant systems with automatic provisioning. One set to a higher warning threshold. Icinga2 and LibreNMS perhaps? Will have to include the deployment of their configuration into the system build process. Though I could also automate it externally I suppose. I already have Icinga2 pulling computer objects directly from Active Directory via LDAP query. That works well. I’ll need something similar for Libre though. Still, that shouldn’t be a heavy lift.
I also want to check into Azure specific offerings. I know they have Azure Monitor, but having recently discovered Operations Management Suite, I’m now wondering what else is out there.
Another thing to consider would be automatic remediation actions. With two monitoring systems it’s possible for both to attempt remediation. This could lead to some undesired and potentially unexpected behavior. So there would have to be some more logic in B to detect if A had run. If it had, B wouldn’t do so as well.
Probably something like a log file would do the job. If system A runs, the last step is to log “success” in a file. System B looks for that file entry before running. If system A is down or fails, system B will act.
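That marker-file handoff could be sketched roughly like this. The path and freshness window are hypothetical, and a real deployment would need the marker on storage both systems can reach.

```python
import time
from pathlib import Path

# Hypothetical shared location and freshness window
MARKER = Path("/var/run/remediation/last_success")
FRESH_WINDOW = 300  # seconds within which A's success still counts

def primary_remediate(marker=MARKER):
    """System A: perform remediation, then record success as the LAST step,
    so a crash mid-remediation leaves no stale success record."""
    # ... remediation actions would go here ...
    marker.parent.mkdir(parents=True, exist_ok=True)
    marker.write_text(str(time.time()))

def secondary_should_act(marker=MARKER, window=FRESH_WINDOW):
    """System B: act only if A has not logged a recent success."""
    try:
        last = float(marker.read_text())
    except (FileNotFoundError, ValueError):
        return True  # no record of A acting, so B takes over
    return (time.time() - last) > window
```

A timestamp rather than a bare “success” string means B can also take over when A’s last success is stale, not just missing.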
Anyway, those are my thoughts for today. Time to start work. Check out some other professional development posts.
So, following up on my last post, I’m making progress on getting CCSP certified. 120 pages into The Official (ISC)² Guide to the CCSP CBK and it’s basically what I expected. Dry as a desert, but good knowledge regardless. It’s been good really digging into the technical and policy differences surrounding IaaS, PaaS, and SaaS. These are topics I’ve always worked with, but never really studied in any serious depth.
One new technology I’ve read about is “bit splitting” which is just a cloud version of cryptographic splitting. Conceptually, I like the idea of splitting up data into multiple locations. There are some obvious challenges, especially the increased chance for availability issues, but assuming those can be effectively managed what a great idea.
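To make the bit-splitting concept concrete, here’s a toy XOR-based split: all but one share is pure randomness, and the last share XORs the randomness back out, so every share (i.e., every storage location) is required to reconstruct the data. This is an illustration of the idea, not the specific scheme the CBK describes.

```python
import os

def split_secret(data, shares=3):
    """Split `data` into `shares` pieces; ALL pieces are needed to rebuild."""
    parts = [os.urandom(len(data)) for _ in range(shares - 1)]  # random pads
    last = data
    for p in parts:
        last = bytes(a ^ b for a, b in zip(last, p))  # fold each pad in
    return parts + [last]

def join_secret(parts):
    """XOR all pieces back together to recover the original data."""
    out = parts[0]
    for p in parts[1:]:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out
```

The availability concern shows up immediately: lose any one location and the data is gone, which is why real products layer erasure coding or share-threshold schemes on top.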
I’m also growing more interested in a true DRM system. Looking into Azure Rights Management. The idea of basically encrypting damn near everything kind of makes me uneasy, but the benefits that come with it are very tempting indeed.
Learned about homomorphic encryption which was totally new to me. So that’s neat.
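The core idea is that you can compute on ciphertexts without decrypting them. A quick way to see it is that textbook RSA happens to be multiplicatively homomorphic: multiply two ciphertexts and you get an encryption of the product. The numbers below are the classic toy parameters (p=61, q=53), utterly insecure and for illustration only.

```python
# Toy textbook-RSA parameters: n = 61 * 53, e = 17, d = 2753 (insecure!)
n, e, d = 3233, 17, 2753

def enc(m):
    return pow(m, e, n)  # c = m^e mod n

def dec(c):
    return pow(c, d, n)  # m = c^d mod n

a, b = 7, 6
# Multiply the CIPHERTEXTS only -- nobody here ever sees a plaintext product
product_ct = (enc(a) * enc(b)) % n
```

Decrypting `product_ct` yields 42 = 7 × 6, even though the multiplication happened entirely in ciphertext space. Fully homomorphic schemes extend this to both addition and multiplication, which is what makes them interesting for cloud processing of encrypted data.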
Another thing I’ve since learned about is Azure Stack. From the sound of it, this is basically what OpenStack wants to be, but much more heavily integrated into Azure (for obvious reasons). I will absolutely be setting up a test/dev of this going forward. The ability to spin up a hybrid cloud using the same toolset for on-prem and public cloud sounds AMAZING. But, that said, this is Microsoft. My experience with them has always involved some bizarre gotcha somewhere. So I’m sure that when I do go to build it out, I’ll find something somewhere that blows the idea all to hell.
I took a couple of practice tests as well. 83% on the first and 30% on the second one. CLEARLY some more reading to do… so much for passing this exam cold.
That’s it for today. Stay tuned if this is of any interest and I’ll post updates as my work towards getting CCSP certified progresses.
So, given that I won’t be doing any travelling, conferences, or really anything for a while, I’ve decided to pursue CCSP certification as a complement to my existing CISSP. Today is day one down that path. I recently bought both the official study guide and the exam questions from (ISC)².
It’s been a few years since I did my last certification exam. I’m actually kind of looking forward to it. Learning new things has always thrilled me. A large portion of the material in the books is review, but there is definitely some new stuff.
HackTheBox and red teaming practice is great for learning about things like breaking and entering, but regulatory framework? Not so much. Even though I have experience with HIPAA/PCI/CALEA yada yada, it’s mostly been OJT. It’s good to dig in and do some more formalized study. Given that my new position at work is very much blue team, some supplemental research is necessary anyway, so why not get certified for all that reading, right?
I think I’m going to spin up a home lab to complement the book materials as well. I’m thinking a hybrid on-prem/Azure environment. The goal there will be to build a best-practices-to-the-max fortress. Play with all the bells and whistles. OMS, ATP, etc. That should be fun. I already have a vSphere 6.5 environment with shared storage, layer 3 switching, and all the trimmings in my basement. (Even a 24u rack! The electricity company loves me…) So most of the pre-work for that is done.
As my studies progress toward getting that CCSP certification I’m going to keep a running log on this page, mostly for my own benefit. I’ve also created a new category called Professional Development to keep things organized. So if this interests you, stay tuned. Feel free to reach out as well. Email is the title of this site @ Proton Mail.
It’s been fun and educational putting my INTEL-SA-00213 Detection Script together: first writing it, refining it, adding SMB logging, getting feedback from the Reddit PowerShell folks, learning about PSScriptAnalyzer, etc. But there comes a point where it’s time to walk away from something. This little tool does everything I need. I could tweak and add features, and obsess further, but why? What good would come of it? It’s been a neat little project, but it’s done.
I learned a good deal during this, so for my own retention, and to share them, let’s recap. There is a preferred order in which to arrange comment-based help. Temporary files are best handled using $env:TEMP and New-TemporaryFile. Don’t bother specifying Mandatory=$true in parameters, as a bare Mandatory is equivalent, and omitting it entirely means false. Use Write-Debug as a form of commenting instead of pure comments, as it has the added benefit of hooking into the -Debug common parameter. When testing a web path to validate a parameter, use the -Method Head option for Invoke-WebRequest to avoid downloading the file twice.
This was also my first project built fully in Visual Studio Code and GitHub, which I now love and will never abandon for my old way of version control. (Which was, admittedly, kludgey and stupid…)
All in all, a fun exercise which produced a tool that I will be using to check for and mitigate live vulnerabilities. If you use it let me know, I’d love to hear how it works out for you. If you want any new features or changes, I’d be happy to do that as well.
Here’s the link to get the script.
Following up on yesterday’s post about my INTEL-SA-00213 detection script, I’ve added some logging functionality. It’s rudimentary, but effective. Pass a valid -LogDir argument and it will generate a results.txt file. The file contains the hostname and output separated by a comma. The script uses Add-Content as well, so it can be run from multiple hosts and the results will be appended to existing content.
I plan to make the output file customizable via an argument as well, and still need to tie this thing into SCCM. As it stands right now, though, version 2.0 or 2.2 could easily be used for a GPO startup script.
This is rapidly becoming more than just a utility script. I’ve never drilled this deep into parameters before and am learning quite a bit. It’ll be good to keep adding more functionality until I’ve got this thing well baked and I’ve learned as much as I can from it.
Anyway, if anyone is interested, here’s a link to the GitHub repository. I’m always looking for ideas and feedback!
Security Update Page
CSME Detection Tool
So CVE-2019-0090 / INTEL-SA-00213 looks rather ugly, especially given that there is no software fix available. So, I need to see if any of my nodes are affected. To that end I’m putting together a quick and dirty PowerShell script to make scanning easier. As of now it can automatically download the Intel detection utility from a custom HTTP(S) location or from SMB, then run it and report results.
In the next day or two I’m going to add the ability to log to a remote location and build out a SCCM package and hardware report.
You can pass the -DownloadFromWeb or -DownloadFromSMB arguments to tell the script how you’d like to obtain the file. You can also specify -WebURL and -SmbPath to tell the script to download from custom locations. By default the script will download the Intel utility directly from Intel. Stay tuned for updates.
If anyone is interested, here’s a link to the GitHub repository. I’m always looking for ideas and feedback!
Intel Advisory page
Intel security update page
Intel CSME Detection Tool
I spent far too long trying to enumerate this one… But I learned a good deal about a system I’ve never touched before, which is always a good thing. Once I got a foothold the rest was fairly quick to fall into place. Overall I liked it. Will be putting together a walkthrough video of this one for sure.