My personal blog about System Center

The Argument for the Anti Home Lab

Categories: Uncategorized

First off, knowledge is king.

Second, if it's faster/easier/better to use a home lab, don't stop :) keep learning.

That being said, it's beyond my understanding why more people don't use shared labs.

Pooling resources is what we've been doing for the last decade, and many more than that.

If everyone is buying NUCs that aren't utilized all of the time, why is that a good idea?

If you've got unlimited money to spend on home labs, keep going down that path.

There are plenty of arguments against it:
     Red tape
     X broke my lab, now I have to rebuild
     Y powered off my server during a demo
     Company won't pay for hardware
     Company won't pay for power/colo
     We won't use the people to maintain it; we are a consulting company, not a hosting company
     Just use Azure/Ravello/AWS/Google Cloud/Air
     If I leave the company I lose access to the lab

If each company that has consultants took 1 or 2 hours of billable time each month and "used" that for a shared lab, I believe everyone would be better off.
And since many agree that pay isn't always the first reason to join a company, perhaps offering access to a real playground will give you easier access to talent, and if nothing else retain the people you have.

So can a company really afford not to provision lab equipment for its staff?

We also have some good friends who help out sometimes, and they get access to the playground; help being anything from hardware/software/time or just being good guys/girls.

We are a small company (even by Danish standards), but we still have and maintain a rack with our gear for testing/playing/demoing.

One of the reasons is that many (at least IMO) don't want hardware at home even if the company paid for it; it's noisy and uses too much electricity, and at least when we are using edge stuff the lack of multiple public IP addresses is a pain.

If I didn't have a decent lab I wouldn't spend the time I do poking around.

Waiting for slow hardware isn't good for my mood
Fabric work is hard to do purely virtually; you need iron at some point (for now)
Forgetting to power off cloud usage is a pain
Forgetting to power on before a demo is a pain
Having to power on to test/check something is a pain

So what did we do?

Got a rack at a colo with a 100 Mb/s connection; not impressive, but more than enough for playing around, plus 32 IP addresses, and IPv6 is almost there.

Bought old servers; if someone decommissioned an old server we bought it rather than have them throw it out.

This meant our playground went, at very little cost, from a few old HP servers to a fully stuffed C7000 with 16 blades, and now back to rack servers.
The cost between brokers and used hardware has been in the range of 1-2 hours per person per month. We gave away most of the old blades, as we didn't have any use for them, so they live on helping other places; same for the C7000.

Rack + power is covered, but we don't power on servers that aren't used; we try to be reasonable, as there is no point in having 5 hosts online that aren't used for anything.

Moving from 1G to 10G cost a bit for the switch, and we don't have RDMA or 25/50/100 in scope, but 10G is good enough for most of the testing, as it isn't performance we are benchmarking.

And with S2D for Windows Server 2016 we ended up with 4 NVMe boards and 20 SSDs; again, small sizes and not enterprise class, but good enough for testing, and way better than simulating everything in VMs.

For storage we mostly have DAS, but we also have a few NetApps; one was rented out to a customer and then returned, so it's "free" as the rent covered the base cost.

How does it look

We try to structure it in demo , playground and Reserved

Demo         : Stable environment, controlled changes, so it's always ready for customer demos; no breaking things allowed except for named VMs for DR/backup
Playground   : Anything goes; don't expect anything to be rock solid, but still don't break anything on purpose, and send an email to warn before doing crazy stuff

Reserved     : Named host / part of host / VLAN, dedicated to a named user

And when I say try, it's because during the whole Windows refresh cycle rebuilds happen more often than we'd prefer, but it will get more stable as we move toward GA of the 2016 wave.

And once in a while breaking access requires heading onsite (killed the firewall with an upgrade the night before a holiday).

The learning is: DO NOT KEEP any production on any part of the environment; separate compute/storage/firewall/IP, everything.

We have some servers in the same rack that are "production", but the only thing shared is the dual PDUs, and we haven't broken that yet.

Licensing is covered mostly by trials and a few NFRs; again, back to refreshes of whole environments.

What can't we do currently?
         Top-of-mind items for now:

        No NSX
         No RemoteFX/OpenGL
         No FC integration (currently)
         No TOR integration (aka no Arista)
         No performance testing (consumer-grade SSDs, no RDMA, 10G)
         No very large scale testing (2k+ VMs; with dedupe and delta disks we probably could, but not full VMs)
         No Virtual Connect or anything fancy blade-wise from other vendors (but again, don't believe in blades anymore)
         No hardcore fault domains

Wish List for near future

        RDMA / high-performing storage
         More Azure running, aka balancing between credits and features, aka send more money

Wish List for 2018+

        Everything in Azure   

/Flemming, comments at [email protected] or @flemmingriis

Using Microsoft Operations Management Suite and Applocker together

Categories: AppLocker, OMS

We are evaluating Microsoft Operations Management Suite together with Microsoft AppLocker and Device Guard as a replacement for 3rd-party log tools to gather logs from mobile workstations (Device Guard covered in a later post).

This is the first proof of concept, trying to monitor the data load per source before setting up automation to act on the alarms.


The first baby step is to create an OMS workspace


In the OMS data settings we add the Windows event log used by AppLocker (the Microsoft-Windows-AppLocker channels)


We download the OMS Agent


Hit the installer


And select Connect to Microsoft Azure Operational Insights


Enter our workspace ID and key to associate the agent and workspace


A few seconds later we can see our clients added to Microsoft Operations Management Suite

We then add our AppLocker policy
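For reference, an AppLocker policy can also be built and applied from PowerShell; a minimal sketch, where the scanned path, rule types, and the -Merge choice are assumptions rather than the exact policy used in this post:

```powershell
# Build publisher/hash rules from the executables in a reference folder
# (the folder path is an example), then merge the result into the local
# AppLocker policy.
Get-ChildItem 'C:\Program Files\MyApp' -Recurse -Include *.exe -File |
    Get-AppLockerFileInformation |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
    Out-File "$env:TEMP\AppLockerPolicy.xml"

# -Merge keeps existing rules instead of replacing the whole policy.
Set-AppLockerPolicy -XmlPolicy "$env:TEMP\AppLockerPolicy.xml" -Merge
```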


See that we aren't allowed to run a random exe file


We can then see in the event log that AppLocker writes the usual 8004
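The same 8004 events can be pulled locally with Get-WinEvent before relying on OMS to collect them; the channel name is the standard AppLocker EXE and DLL log:

```powershell
# List the most recent AppLocker block events (event ID 8004) from the
# local EXE and DLL channel.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-AppLocker/EXE and DLL'
    Id      = 8004
} -MaxEvents 10 |
    Select-Object TimeCreated, Message
```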


We then go to Operational Insights, search for * EventID = 8004 (this can be limited further by adding the proper source), and save the search


Add the search to a shiny dashboard; we can now monitor


and drill down to see what app is being blocked

Lenovo X250S won't power on, power LED blinking

Categories: Hardware, Lenovo

My Lenovo didn't start up yesterday; holding down power and removing the battery didn't help.


So after searching around, removing the CMOS battery was suggested. A few screws later, the internal battery and CMOS battery were removed, and the PC could power back up.

Enabling an AzureStack test environment using nested Hyper-V

Categories: AzureStack, Hyper-V, Nested, Unsupported

So with preview software, the 2nd thing to do is go down the unsupported/untested path.

So after bare-metal deployment on a few different hosts, we move on to a nested AzureStack install; this enables faster deployment and rollback when we break something on purpose to test.

But always follow support guidelines for testing and use a dedicated physical machine (we do this on multiple other installs in our testing setup).

Time from VM creation to AzureStack running: 2 hours 15 minutes :)

NOTE : Do not count this as stability testing always use supported hardware for that

So say hello to our little VM


16 cores, 256 GB memory, 5 drives at 300 GB

Invoke-WebRequest -OutFile ~/Enable-NestedVm.ps1

~/Enable-NestedVm.ps1 -VmName "STACK10"

Enable it for nesting Hyper-V
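On later Hyper-V builds the helper script isn't needed and nesting can be switched on directly; a sketch using this post's VM name, assuming the VM is powered off:

```powershell
Stop-VM -Name 'STACK10'
# Expose the virtualization extensions to the guest so it can run Hyper-V.
Set-VMProcessor -VMName 'STACK10' -ExposeVirtualizationExtensions $true
# Nested virtualization requires static memory.
Set-VM -Name 'STACK10' -StaticMemory
# MAC spoofing lets nested VMs reach the network through the outer switch.
Get-VMNetworkAdapter -VMName 'STACK10' |
    Set-VMNetworkAdapter -MacAddressSpoofing On
Start-VM -Name 'STACK10'
```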


Install Windows Server TP4 with US locales


For now GUI only (Sorry Snover)


And we can see all our drives


Add a static IP address and add DNS/Gateway for access to Azure Active Directory
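The same step can be done from PowerShell instead of the GUI; the interface alias, addresses, and DNS server below are placeholders, not the values used in this install:

```powershell
# Assign a static IPv4 address and default gateway, then point DNS at a
# resolver that can reach Azure Active Directory.
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.1.50 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.1.1
```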


And rename it to the next stack install :)


I prefer to online the drives and initialize them
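Onlining and initializing can be done in one go from PowerShell; a sketch that touches every offline disk, so only run it when all offline disks really are the lab drives:

```powershell
# Bring every offline disk online, clear the read-only flag, and
# initialize it with a GPT partition table.
Get-Disk | Where-Object IsOffline | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false
    Set-Disk -Number $_.Number -IsReadOnly $false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
}
```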



Time to jump to Azure Active Directory and create a tenant admin user


Extract the 9 GB file and run the Azure Stack installer


This leaves us with a few files :)


Add local credentials and verify that all prereqs are met; note the space warning is not seen on bare-metal installs. Need to see if it's because the drives are initialized, but everything is fine for the install.


Sign in with the Azure Active Directory Admin user


And accept the AAD consent prompt


And we are ready to install


The installer is a very, very cool series of PowerShell scripts that does everything for us; it reboots a few times, but signing on shows the progress bar.


First off it creates a domain and joins the host to it


After that, a virtual disk is created through Storage Spaces


And 2 hours later we are done :)


And ready to roll

And while we are playing, we might as well dedupe it
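Enabling dedup on the volume holding the stack's VHDs takes a couple of cmdlets; the drive letter is an assumption, and the Hyper-V usage type tunes dedup for virtualization workloads:

```powershell
# Turn on deduplication for the data volume (drive letter is an example)
# and kick off an optimization job instead of waiting for the schedule.
Enable-DedupVolume -Volume 'D:' -UsageType HyperV
Start-DedupJob -Volume 'D:' -Type Optimization
# Check the savings once the job has run.
Get-DedupStatus -Volume 'D:'
```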


:) Not really fair, as there are many other installs on the same host, but still fun :)

Upgrading Microsoft Advanced Threat Analytics from 1.4 to version 1.5

Categories: AD, ATA, Microsoft Advanced Threat Analytics

Our experience upgrading Microsoft ATA to version 1.5.


Follow these steps to update to ATA version 1.5:

1. Download update 1.5
2. Update the ATA Center
3. Download the updated ATA Gateway package
4. Update the ATA Gateways

We did a production upgrade of our ATA installation and retained data to avoid relearning everything.

After the upgrade of the central ATA server, we jumped into the ATA console.


The Health Center says System Healthy, but a drill-down to configuration shows that all gateways are outdated, as step 4 wasn't completed


So configuration shows all gateways need an update


And since this is one of the first upgrades everything is handled manually


The update takes 30 seconds in our environment


Gateways go into "not synced", and after a few seconds they are synced correctly and we can continue with the next gateways


And repeat four times; we had one gateway that needed a reboot


And we now have ATA running version 1.5, ready to detect once again

Overall a very smooth installation, and in our small environment we had less than an hour of downtime for upgrading to a newer and better install.

Personally, I would love the system health to report on the main screen that gateways need an update; if this is handled by separate teams in a large org, it could be helpful to surface there.