One of the things that many consultants run up against is network challenges during deployments. In certain scenarios they can cripple a deployment, e.g., pre-provisioning fails because apps don’t install fast enough. Microsoft has recently released a “new-old” solution that many SCCM admins will remember: now in public preview, it’s called Microsoft Connected Cache (MCC). We will cover a few things today:
- What is Microsoft Connected Cache?
- Deploying Microsoft Connected Cache
- Microsoft Connected Cache Monitoring
What is Microsoft Connected Cache?
So, what’s the whole point?
Most content that comes down from Intune leverages Delivery Optimization, but that isn’t enough in most cases. MCC is a software-only caching solution that keeps software deployment traffic within enterprise/education networks.
Simply put, the MCC Azure resource manages the Connected Cache nodes, and you point your Windows devices, Linux devices, or VMs at those nodes to get their content. Something similar has existed in SCCM for quite some time.
Some of the supported scenarios for this service are:
- Windows Autopilot Pre-provisioning
- Co-management or Cloud-native devices where patches and apps are delivered from Microsoft Intune
- Apps installed from the Microsoft Store
The content types you can get are Windows updates, Office Click-to-Run (C2R) apps and updates, client apps, and Windows Defender definition updates. The service endpoints for this can be found here.
Strategically, you would expect small organizations to leverage a Windows 11 device (supported for up to 50 Windows devices), whereas enterprises would use a Windows Server or Ubuntu device.
Microsoft provides a nice little table around it:
| Enterprise configuration | Download speed range | Download speed => approximate content volume delivered in 8 hours |
|---|---|---|
| Branch office | < 1 Gbps peak | 500 Mbps => 1,800 GB; 250 Mbps => 900 GB; 100 Mbps => 360 GB; 50 Mbps => 180 GB |
| Small to medium enterprises / Autopilot provisioning center (50 – 500 devices in a single location) | 1 – 5 Gbps peak | 5 Gbps => 18,000 GB; 3 Gbps => 10,800 GB; 1 Gbps => 3,600 GB |
| Medium to large enterprises / Autopilot provisioning center (500 – 5,000 devices in a single location) | 5 – 10 Gbps peak | 9 Gbps => 32,400 GB; 5 Gbps => 18,000 GB; 3 Gbps => 10,800 GB |
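If you want to sanity-check those numbers, the volumes follow directly from the link speed. Here’s a quick back-of-the-envelope helper (my own sketch, not part of any Microsoft tooling):

```python
def volume_gb(mbps: float, hours: float = 8) -> float:
    """Approximate content volume (GB) delivered at a sustained link speed.

    Divide Mbps by 8 for MB/s, multiply by the elapsed seconds, then
    divide by 1,000 to express the result in (decimal) GB, as in the table.
    """
    return mbps / 8 * hours * 3600 / 1000

# Matches the branch office row above:
print(volume_gb(500))  # 1800.0
print(volume_gb(100))  # 360.0
```

In other words, the table simply assumes the link runs flat out for the whole 8-hour window.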
Essentially, you can see the flow below: the MCC Azure resource is used to create the cache nodes, which you then provision on the node itself with the provided PowerShell scripts. This is powered by the Azure IoT Edge container management service.
Once a node comes online, it reports status and metrics up to the Azure resource. You push settings down from Intune telling client devices where to get their cached content, and devices can fall back to the CDN if they run into issues.
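That fallback behavior is handled by Delivery Optimization itself once clients are pointed at a cache node, but conceptually it works like the sketch below (the source callables are hypothetical stand-ins for the cache node and the CDN, assumed purely for illustration):

```python
from typing import Callable, Sequence

def download(path: str, sources: Sequence[Callable[[str], bytes]]) -> bytes:
    """Try each content source in order (cache node first, CDN last)."""
    last_err = None
    for source in sources:
        try:
            return source(path)
        except OSError as err:
            last_err = err  # source unreachable; try the next one
    raise RuntimeError(f"no source could serve {path}") from last_err

# Example: the cache node is down, so the CDN serves the payload.
def cache_node(path: str) -> bytes:
    raise OSError("cache node unreachable")

def cdn(path: str) -> bytes:
    return b"payload for " + path.encode()

print(download("update.cab", [cache_node, cdn]))  # b'payload for update.cab'
```

The key point is that a dead cache node degrades to normal internet downloads rather than blocking the client.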

A few fun facts for you:
- This is a cold cache product, meaning it is driven by client requests. The only way you could “preseed” things would be by using a test ring with Windows Update. That would allow you to ensure data is ready when true production users hit it.
- Content is cached for 30 days but only lives in the hot cache path (open handles) for 24 hours.
- Don’t overthink the service account type. Ideally, use a gMSA (Group Managed Service Account) if the node lives on-prem.
Deploying Microsoft Connected Cache
Overall, you can see in the video below how to deploy Microsoft Connected Cache. It will take you through the start-to-finish process. Basically:
- Create the Connected Cache Azure Resource
- Meet the different pre-requisites
- Windows Enterprise E3/E5 (or Education A3/A5)
- Choose Linux (preferred) or Windows devices with WSL installed (achieved via wsl.exe --install --no-distribution)
- Port 80 must be free on the node you’re going to leverage
- Endpoints mentioned earlier must be reachable
- Windows host must support nested virtualization
- If using Linux, it must be Ubuntu 22.04 or RHEL 8 or 9 (RHEL requires Moby)
- Single NIC on nodes with at least a 1 Gbps NIC
- NIC and BIOS should support SR-IOV for optimal performance
- Meet the recommended hardware specs (table below)
- Must have a local user or gMSA that has “logon as a batch” permissions
- A $User environment variable set to the username of the account that will run MCC
| Component | Branch office | Small / medium enterprise | Large enterprise |
|---|---|---|---|
| CPU cores | 4 | 8 | 16 |
| Memory | 8 GB, 4 GB free | 16 GB, 4 GB free | 32 GB, 4 GB free |
| Disk storage | 100 GB free | 500 GB free | 2x 200-500 GB free |
| NIC | 1 Gbps | 5 Gbps | 10 Gbps |
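For illustration, the sizing guidance can be encoded as a small lookup. The tier names and thresholds come from the tables in this post; the helper itself is my own convenience, not Microsoft tooling:

```python
# Recommended specs per tier, transcribed from the tables above
# (the large-enterprise disk figure assumes 2x 500 GB volumes).
SPECS = {
    "branch office":           {"cpu_cores": 4,  "memory_gb": 8,  "disk_gb": 100,  "nic_gbps": 1},
    "small/medium enterprise": {"cpu_cores": 8,  "memory_gb": 16, "disk_gb": 500,  "nic_gbps": 5},
    "large enterprise":        {"cpu_cores": 16, "memory_gb": 32, "disk_gb": 1000, "nic_gbps": 10},
}

def recommended_tier(devices: int) -> str:
    """Rough tier choice by device count at a single location."""
    if devices < 50:
        return "branch office"
    if devices <= 500:
        return "small/medium enterprise"
    return "large enterprise"

print(recommended_tier(300))  # small/medium enterprise
```

Treat this as a starting point; your peak download window matters more than raw device count.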
Now, you can check out the video which shows the setup from beginning to end:
A few tips:
- If you have any issues during the node install on Windows, run “wsl.exe --update” to ensure WSL is up to date.
- You can run “wsl -d Ubuntu-22.04-Mcc-Base” to access the MCC distribution and “sudo iotedge list” to see which containers are running. If you don’t see MCC, restart the runtime with “sudo systemctl restart iotedge”
Microsoft Connected Cache Monitoring
The last part I wanted to cover is monitoring for the MCC service. Overall, the monitoring is interesting, I think. You can see below what the base metrics look like; you’ll notice it keys in on outbound traffic and the content types being served.
It would have been nice if it would show you who is using the service, but that isn’t the focus here:

You can leverage the “metrics” section within monitoring to create your own charts. The metrics you can leverage are:
- Egress Mbps (Egress Throughput)
- Egress Volume (Volume of Data Egressed)
- Hit Mbps (Hit Throughput)
- Hit Ratio (share of requests served from the cache rather than the CDN)
- Hits (Hit Count)
- Inbound (Inbound Throughput)
- Miss Mbps (Miss Throughput)
- Misses (Misses Count)
- Outbound (Outbound Throughput)
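The hit-oriented metrics relate to each other in the usual way. Assuming the conventional definitions (the exact formulas behind the Azure charts aren’t documented in this post), the ratio is simply hits over total requests, and the same split applies to throughput:

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Fraction of requests served from the cache rather than the CDN."""
    total = hits + misses
    return hits / total if total else 0.0

def egress_split(hit_mbps: float, miss_mbps: float) -> float:
    """Share of egress throughput that came out of the cache."""
    total = hit_mbps + miss_mbps
    return hit_mbps / total if total else 0.0

print(hit_ratio(900, 100))        # 0.9
print(egress_split(450.0, 50.0))  # 0.9
```

A rising miss count with a flat hit count is the signal to check whether clients are actually reaching the node.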
You can even aggregate by average, count, etc. This is an example of what it looks like:

Overall, it’s nice from an analytic perspective, but my main feedback would be that I would like to see what devices are hitting my cache node.
Even the activity logs only show the operations taken on the nodes themselves:

