my personal blog about systemcenter

All posts in Uncategorized

Getting Microsoft Defender to work with Google Santa enabled

Categories: Uncategorized
Comments Off on Getting Microsoft Defender to work with Google Santa enabled

Google Santa is an open-source project that helps macOS administrators secure workstations; it whitelists binaries at either the SHA-256 or certificate level.

Santa supports a local database and a remote sync server for configuration. This first post covers the local database; the remote sync server will be covered later.

For this test we are going to whitelist the certificates used by Microsoft Defender ATP

Mixing whitelisting and modern protections might be overkill, but it's very good for locking down high-profile targets.

Santa's default configuration is monitor mode, so to enforce the rules we need to change the Santa configuration.

Following the example config from the documentation

ClientMode is changed from 1 (Monitor) to 2 (Lockdown)

And since the testing here is done with a local config, we need to remove the SyncBaseURL key/string to allow local modification of the allow/deny list
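A minimal local config along those lines might look like the following. This is a sketch: the key names (ClientMode, SyncBaseURL) and the /var/db/santa/config.plist install path match the Santa documentation of this era, but check your Santa version's docs before using it.

```shell
# ClientMode 1 = Monitor, 2 = Lockdown.
# SyncBaseURL is deliberately omitted so local santactl rules apply.
cat > config.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>ClientMode</key>
    <integer>2</integer>
</dict>
</plist>
EOF

# Install it where your Santa version reads its local config, e.g.:
#   sudo cp config.plist /var/db/santa/config.plist
```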

And we can see that the default configuration is Monitor Mode

Install the config file

And we are in lockdown mode, so any binary will be blocked unless it matches a binary or certificate rule (or is a system file)

So let’s start the Microsoft Defender ATP installer

And Santa picks up the different daemons Defender ATP will run and blocks their execution, as they are not in the allow list yet. Running this in monitor mode would let the install succeed without errors on the first run, leaving only the log files to review. I hit Ignore a bunch of times and then went for the log files.

Santa Logs are at /var/db/santa/santa.log

action=EXEC|decision=DENY|reason=UNKNOWN|sha256=241fde944258965f8912bfc30b55a60c821642722131e64b1d3dfce2d1913354|cert_sha256=e552705f4fa93f4b571e2804a107ce74a49f45e26729d192665d59a5cd3934a8|cert_cn=Developer ID Application: Microsoft Corporation (UBF8T346G9)|pid=687|ppid=1|uid=0|user=root|gid=0|group=wheel|mode=L|path=/Applications/Microsoft Defender

action=EXEC|decision=DENY|reason=UNKNOWN|sha256=9a01cc98d7e1c5d3f1cde3f6b06b8d1540a0c35f80bf7026e8bf8274b05403cd|cert_sha256=09d93952b7b31903e1d9b85d5c8b48bbb86ad9830757ee5e75cd114fbb7e7303|cert_cn=Developer ID Application: Microsoft Corporation (UBF8T346G9)|pid=774|ppid=1|uid=501|user=fr-santatest|gid=20|group=staff|mode=L|path=/Library/Application Support/Microsoft/MAU2.0/Microsoft AU AU Daemon

And we can see that there are two different certs in use: one for the main Defender ATP files and one for the Microsoft Update application
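A quick way to pull those cert hashes out of the log is to reduce the deny entries to their unique cert_sha256 values, since each signing certificate only needs one rule. The sketch below runs against a local copy of the two log lines above; on the Mac itself, point the grep at /var/db/santa/santa.log instead.

```shell
# Sample deny entries copied from /var/db/santa/santa.log (the two lines above)
cat > santa-deny.log <<'EOF'
action=EXEC|decision=DENY|reason=UNKNOWN|sha256=241fde944258965f8912bfc30b55a60c821642722131e64b1d3dfce2d1913354|cert_sha256=e552705f4fa93f4b571e2804a107ce74a49f45e26729d192665d59a5cd3934a8|cert_cn=Developer ID Application: Microsoft Corporation (UBF8T346G9)|pid=687|ppid=1|uid=0|user=root|gid=0|group=wheel|mode=L|path=/Applications/Microsoft Defender
action=EXEC|decision=DENY|reason=UNKNOWN|sha256=9a01cc98d7e1c5d3f1cde3f6b06b8d1540a0c35f80bf7026e8bf8274b05403cd|cert_sha256=09d93952b7b31903e1d9b85d5c8b48bbb86ad9830757ee5e75cd114fbb7e7303|cert_cn=Developer ID Application: Microsoft Corporation (UBF8T346G9)|pid=774|ppid=1|uid=501|user=fr-santatest|gid=20|group=staff|mode=L|path=/Library/Application Support/Microsoft/MAU2.0/Microsoft AU AU Daemon
EOF

# Print each unique signing-cert hash once, one per line
grep 'decision=DENY' santa-deny.log \
  | sed -n 's/.*|cert_sha256=\([0-9a-f]*\)|.*/\1/p' \
  | sort -u
```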

santactl rule --whitelist --sha256 09d93952b7b31903e1d9b85d5c8b48bbb86ad9830757ee5e75cd114fbb7e7303 --certificate

Added rule for SHA-256: 09d93952b7b31903e1d9b85d5c8b48bbb86ad9830757ee5e75cd114fbb7e7303.

FR-SantaTests-Mac:~ root# santactl rule --whitelist --sha256 e552705f4fa93f4b571e2804a107ce74a49f45e26729d192665d59a5cd3934a8 --certificate

Added rule for SHA-256: e552705f4fa93f4b571e2804a107ce74a49f45e26729d192665d59a5cd3934a8.

We use santactl to add the rules to our whitelist, and after this Microsoft Defender ATP is fully functional with Santa running as additional protection

Edit: now with the correct ATP vs APT, thanks Jan

Skype Updater Escalation Prevention through GPO

Categories: Uncategorized
Comments Off on Skype Updater Escalation Prevention through GPO

An issue with the Skype installer was recently published

This can elevate normal users on a PC to SYSTEM on older OSes that don't use Windows 10 Apps

On Windows 10 you can install version 8 only if you set the installer to Windows 7 or 8 compatibility; when testing that, the update service was not installed

On the 7.x branch the update service was added on my test PC, but it wasn't visible on the 8 branch

It's recommended to stay on the newest version and use Windows 10 Apps when possible

For the workaround (which will break automatic updates but preserve security):


Create a new Group Policy


Go to Windows Settings, Security Settings, System Services

Select the Skype Update service and set it to Disabled


Verify it's set to Disabled


Set the GPO security filter for testing


Link the GPO (for testing, linking to the root is acceptable)


Run gpupdate /force or wait a bit; after that the setting is Disabled and can't be modified
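If you need the same lockdown on a single machine outside GPO management, the service can also be disabled locally with sc.exe. This is a hedged sketch for an elevated prompt on the test PC: the service name "SkypeUpdate" is my assumption for the 7.x desktop client, so verify it first.

```shell
# Verify the actual service name first ("SkypeUpdate" is assumed, not confirmed):
sc query state= all | findstr /i skype

# Disable and stop the updater service (same end state as the GPO setting above)
sc config SkypeUpdate start= disabled
sc stop SkypeUpdate
```

This is a per-machine configuration change only; prefer the GPO so the setting is enforced and can't be flipped back by a local admin by accident.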

ATA 1.6 Update 1, Auto Update Gateways

Categories: Uncategorized
Comments Off on ATA 1.6 Update 1, Auto Update Gateways


Microsoft released the first update to version 1.6 a short while ago


This is the first update that can use the new auto-update of gateways


We didn't have auto-update enabled, so all gateways want an update


Enable and Save 🙂


and a few seconds later the gateway agents start to update; 5 minutes later all agents here are updated

Very, very smooth 🙂

The Argument for the Anti Home Lab

Categories: Uncategorized
Comments Off on The Argument for the Anti Home Lab

First off Knowledge is King

Second, if it's faster/easier/better to use a home lab, don't stop 🙂 keep learning

That being said it’s beyond my understanding why more people don’t use shared labs

Pooling resources is what we've been doing for the last decade, and many more than that

If everyone is buying NUCs that aren't utilized all of the time, why is that a good idea?

If you've got unlimited money to spend on home labs, keep going down that path

There are plenty of arguments against it
     Red tape
     X broke my lab and now I have to rebuild
     Y powered off my server during a demo
     Company won't pay for hardware
     Company won't pay for power/colo
     We won't spare the people to maintain it; we are a consultancy, not a hosting company
     Just use Azure/Ravello/AWS/Google Cloud/Air
     If I leave the company I lose access to the lab

If each company that has consultants took 1 or 2 hours of billable time each month and “used” that for a shared lab, I believe everyone would be better off.
And since many agree that pay isn't always the first reason to join a company, perhaps offering access to a real playground will give you easier access to talent, and if nothing else retain the people you have

So can a company really afford not to provision lab equipment for their staff?

We also have some good friends who help out sometimes, and they get access to the playground; help being anything from hardware/software/time or just being good guys/girls

We are a small company (even by Danish standards), but we still have and maintain a rack where we keep our gear for testing/playing/demoing

One of the reasons is that many (at least in my opinion) don't want hardware at home even if the company paid for it; it's noisy and uses too much electricity, and at least when we are doing edge stuff the lack of multiple public IP addresses is a pain

If I didn't have a decent lab I wouldn't spend the time I do poking around

Waiting for slow hardware isn’t good for my mood
Fabric work is hard to do purely virtual, you need iron at some point (for now)
Forgetting to power off cloud usage is a pain
Forgetting to power on before a demo is a pain
Having to power on to test/check something is a pain

So what did we do

Got a rack at a colo with a 100 Mb/s connection (not impressive, but more than enough for playing around) and 32 IP addresses, with IPv6 almost there

Bought old servers; if someone decommissioned an old server, we bought it rather than letting them throw it out

This meant our playground went, at very little cost, from a few old HP servers to a fully stuffed C7000 with 16 blades, and now back to rack servers
The cost between brokers and used hardware has been in the range of 1-2 hours per person per month. Most of the old blades we gave away as we didn't have any use for them, so they live on helping other places; same for the C7000

Rack and power are covered, but we don't power on servers that aren't used; we try to be reasonable, as there is no point in having 5 hosts online that aren't used for anything

Moving from 1G to 10G cost a bit for the switch, and we don't have RDMA or 25/50/100G in scope, but 10G is good enough for most of the testing as it isn't performance we are benchmarking

And with S2D for Windows Server 2016 we ended up with 4 NVMe boards and 20 SSDs; again small sizes and not enterprise class, but good enough for testing, and way better than simulating everything in VMs

For storage we mostly have DAS, but we also have a few NetApps; one was rented out to a customer and then returned, so it's “free” as the rent covered the base cost.

How does it look

We try to structure it into Demo, Playground and Reserved

Demo        : Stable environment with controlled changes, so it's always ready for customer demos; no breaking demos allowed except for named VMs for DR/backup
Playground  : Everything goes; don't expect anything to be rock solid, but still don't break anything on purpose, and send an email warning if you do crazy stuff

Reserved    : Named host / part of host / VLAN, dedicated to a named user

And when I say try, it's because during the whole Windows refresh cycle rebuilds happen more often than we'd prefer, but it will get more stable as we move toward GA of the 2016 wave

And once in a while breaking access requires heading onsite (killed the firewall with an upgrade the night before a holiday)

The learning is: DO NOT keep any production on any part of the environment; separate compute/storage/firewall/IP, everything

We have some servers in the same rack that are “production”, but the only things shared are the dual PDUs, and we haven't broken those yet

Licensing is covered mostly by trials and a few NFRs; again, back to refreshing whole environments

What can't we do currently?
     Top-of-mind items for now:

     No NSX
     No RemoteFX/OpenGL
     No FC integration (currently)
     No TOR integration (aka no Arista)
     No performance testing (consumer-grade SSDs, no RDMA, 10G)
     No very-large-scale testing (2k+ VMs; with dedupe and delta disks we probably could, but not full VMs)
     No Virtual Connect or anything fancy blade-wise from other vendors (but again, we don't believe in blades anymore)
     No hardcore fault domains

Wish List for near future

     RDMA / high-performing storage
     More Azure running, aka balancing credits against features, aka send more money

Wish List for 2018+

        Everything in Azure   

/Flemming , comments at [email protected] or @flemmingriis

Reblogged from Tao Yang, spend your money wisely

Categories: Uncategorized
Comments Off on Reblogged from Tao Yang, spend your money wisely

The following blog post is a copy from Tao Yang's site; he does amazing work and publishes it so everyone can enjoy it. Personally, I couldn't live without his work.

You can agree with the posts or not; personally, I think publishing others' work almost 1:1 and charging for it is bad taste, but that's the double-edged sword of MIT licensing and publishing your stuff for others.

Nothing wrong with charging for your time/MPs/scripts, just create them yourself or pay others to make them

Spend Your Money Wisely

As someone I'd like to consider a seasoned System Center specialist, I have benefitted from many awesome resources from the community during my career in System Center. These resources consist of blogs, whitepapers, training videos, management packs and various tools and utilities. Although some of them are not free (and in my opinion, they are not free for a good reason), a large percentage of the resources I value the most are completely free of charge.

This is what I like the most about the System Center community. Over the last few years, I have got to know many unselfish people and organisations in the System Center space who have made their valuable work completely free and open source for the broader community. Due to what I am going to talk about in this post, I am not going to mention any names (unless I absolutely have to). But if anyone is interested to know my opinion, I'm happy to write a separate post introducing what I believe are valuable resources.

First of all, I’m just going to put it out there, I am not upset, and this is not going to be a rant and I’m trying to stay positive.

I started working on System Center around 2007-2008 (ConfigMgr and OpsMgr at that time). I started on OpsMgr because my then colleague and now fellow SCCDM MVP (like I mentioned, not going to mention names) had left the company we were working for, and I had to pick up the MOM 2005 to OpsMgr 2007 project he left behind. The very first task for me was to figure out a way to pass the server's NetBIOS name to the help desk ticketing system, which I achieved by creating a PowerShell script and utilising the command notification channel to execute the script when alerts were raised. I then used the same concept and developed a PowerShell script to be used in the command notification to send content-rich notification emails covering much information not available from the native email notification channel.

When I started blogging 5 years ago, this script was one of the very first posts I published here. I named this solution “Enhanced SCOM Alert Notification Emails”. Since it was published, it has received much positive feedback and many recommendations. I have since published the updated version (2.0) here:

After version 2.0 was published, a fellow member of the System Center community, Mr. Tyson Paul, contacted me and told me he had updated my script. I was really happy to see my work carried on by other members of the community, and since then Tyson has made several updates to this script and published them on his blog (for free, of course):

Version 2.1:

Version 2.2:

This morning, I received an email from a person I have never heard of. This person told me his organisation has developed a commercial solution called “Enhanced Notification Service for SCOM” and that I can request an NFR by filling out a form on his website. As the name suggests (and I had a look at the website), it does exactly what mine and Tyson's script does: sending HTML-based notification emails which include content-rich information, including associated knowledge articles.

Well, to be fair, on their website they did mention a limitation of running command notifications: an AsyncProcessLimit of 5. But there is a way to increase this limit, and if your environment is still hitting it after you've increased it, I believe you have a more serious issue to fix (i.e. an alert storm) rather than enjoying reading those “sexy” notification emails. Anyway, I don't want to get into a technical argument here; it's not the intention of this post.

So, do I think someone took the idea and work from Tyson and myself? It is pretty obvious; make your own judgement. Am I upset? Not really. If I wanted to make a profit from this solution, I wouldn't have published it on my blog in the first place. And believe me, there are many solutions and proofs-of-concept I have developed in the past that I sincerely hope some software vendors will pick up and develop into commercial solutions for the community, simply because I don't have the time and resources to do all this by myself (i.e. my recently published post on managing ConfigMgr log files using OMS would make a good commercial solution).

In the past, I have also seen people take scripts I published on my blog, replace my name with theirs in the comment section and publish them on social media without mentioning me whatsoever. I knew it was my script because other comments in the script were identical to my initial version. When I saw it, I decided not to let this kind of behaviour get under my skin; I believe the best way to handle it is to let it go. So, I am not upset by the email I read today. Instead, I laughed! Hey, if this organisation can make people pay $2 per OpsMgr agent per year (which means a fully loaded OpsMgr management group would cost $30k per year for “sexy” notification emails), all I'm going to say is:

However, I do want to advise the broader System Center community: Please spend your money wisely!

There is only so much honey in the pot. You all have a budget; this is what economists call opportunity cost. If you have a certain need or requirement and you can satisfy it using free solutions, you can spend your budget on something with a higher price-performance ratio. If you think there's a gap between the free and paid solutions, please ask yourself these questions:

  • Do these gaps really cost me this much?
  • Are there any ways to overcome this gap?
  • Have I reached out to the SMEs and confirmed this is a reasonable price?
  • How much would it cost me to develop an in-house solution?

Lastly, I receive many emails from people in the community asking me for advice and providing feedback on the tools I have published. I try my best to answer all the emails (and apologies if I have missed any). So if you have any doubts in the future and would like my opinion, please feel free to contact me. And I am certain that not only myself, but other SMEs and activists in the System Center community would also love to help a fellow community member.