๐ŸŽ New User? Get 20% off your first purchase with code NEWUSER20 ยท โšก Instant download ยท ๐Ÿ”’ Secure checkout Register Now โ†’
Menu

Categories

Azure Arc-Enabled Servers: On-Prem Fleet Management Without the Agent Sprawl (2026)

Most enterprises in 2026 do not run a single-cloud or single-datacentre fleet. There are workloads in Azure, workloads on AWS or GCP, and a long tail of on-prem servers that nobody is migrating - because they work, because the regulator says so, or because the application owner has not asked for a change. Operating that fleet with separate tooling per location is what creates the agent sprawl that makes hybrid environments painful: one monitoring agent here, another there, a separate patching system, three different identity stores, a per-environment incident-response process. Azure Arc projects on-prem and other-cloud servers as resources inside Azure Resource Manager. Once a server is Arc-enabled, the same RBAC, the same policies, the same Defender for Cloud, the same Update Manager and the same Log Analytics workspace apply to it as to a native Azure VM. This guide is the production rollout - the onboarding script, the extensions you actually need, the cost model that does not surprise the CFO, and the audit story - with a free PDF cheat sheet of the commands.

Why Azure Arc is the right hybrid abstraction

The traditional answer to "manage a hybrid fleet" was a per-product agent. Datadog for monitoring, a separate patching server, a separate inventory tool, a separate compliance tool. Each agent has a separate identity, a separate update cadence, separate firewall rules and a separate licensing model. Arc replaces the foundation: the on-prem server appears in Azure Resource Manager as a first-class resource (under the resource type Microsoft.HybridCompute/machines), and every Azure service that operates on resources can operate on it. Defender for Cloud, Update Manager, Azure Policy, Log Analytics, Monitor Alerts, RBAC, even Run Command - all of them treat the Arc machine like a native Azure VM. The agent that delivers this - the Connected Machine Agent (azcmagent) - is one binary, signed, on a published release cadence.

What Arc gives you (and what it does not)

Arc-enabled servers give you, out of the box: presence in ARM, RBAC over the resource, Azure Policy assignment, an identity (system-assigned managed identity), and the ability to install Azure VM extensions. Through extensions you get monitoring (Azure Monitor Agent), patching (Azure Update Manager), security posture (Defender for Cloud sensors), inventory and change tracking, and remote command execution (Run Command).

What Arc does not give you: it is not a hypervisor, it does not move the workload, it does not change the OS lifecycle, and it does not back up your data (you onboard Arc machines into Azure Backup separately if you want that). The pattern is "same control plane, same management story" - not "lift and shift".

Planning the rollout

Three decisions up front:

  • Resource group layout. One RG per environment / location is the simplest pattern. Arc machines from the on-prem datacentre go into rg-arc-onprem-<location> (e.g. rg-arc-onprem-westeurope, matching the onboarding scripts below); AWS workloads into a separate group. RBAC inheritance follows.
  • Tagging convention. Tag every Arc machine with Environment, Owner, CostCentre, OS. Most cost reporting and policy assignment downstream assumes consistent tags.
  • Service principal for onboarding. Create a dedicated SP with the Azure Connected Machine Onboarding role - that is the only permission needed to register a new Arc machine. Generate an installation token (or use the SP credentials for unattended onboarding).
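Creating that onboarding SP is a one-liner with the Azure CLI; a minimal sketch (the SP name, subscription ID and scope are illustrative - adjust to your layout):

```shell
# Sketch: dedicated onboarding service principal, scoped to the Arc RG only.
# Subscription ID and names are placeholders.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

az ad sp create-for-rbac \
    --name "sp-arc-onboarding" \
    --role "Azure Connected Machine Onboarding" \
    --scopes "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/rg-arc-onprem-westeurope"
```

The appId and password in the output are what the onboarding scripts consume as SP_ID and SP_SECRET.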

Onboarding scripts (Windows + Linux)

The onboarding installer downloads the Connected Machine Agent and registers the machine with ARM. For Windows, one PowerShell command from an elevated prompt:

$ErrorActionPreference = "Stop"
# Download and run the Connected Machine Agent install script
Invoke-WebRequest -UseBasicParsing -Uri "https://aka.ms/azcmagent-windows" -OutFile install.ps1
.\install.ps1
azcmagent connect `
    --service-principal-id $env:SP_ID `
    --service-principal-secret $env:SP_SECRET `
    --tenant-id $env:TENANT_ID `
    --subscription-id $env:SUBSCRIPTION_ID `
    --resource-group rg-arc-onprem-westeurope `
    --location westeurope `
    --tags "Environment=Prod,Owner=PlatformOps,CostCentre=42,OS=WindowsServer2022"

For Linux, the same flow with the bash installer:

curl -sSL https://aka.ms/azcmagent | bash
sudo azcmagent connect \
    --service-principal-id "$SP_ID" \
    --service-principal-secret "$SP_SECRET" \
    --tenant-id "$TENANT_ID" \
    --subscription-id "$SUBSCRIPTION_ID" \
    --resource-group "rg-arc-onprem-westeurope" \
    --location "westeurope" \
    --tags "Environment=Prod,Owner=PlatformOps,CostCentre=42,OS=Ubuntu22"

The agent runs as a service (himdsd on Linux, himds on Windows), opens an outbound TLS connection to the Azure control plane, and the machine appears in the portal within a minute. Network requirement: outbound HTTPS (443) to a small set of endpoints - *.his.arc.azure.com, guestnotificationservice.azure.com and similar (the exact list is in the docs; fewer than 15 hostnames). No inbound ports are needed, which is what makes the on-prem story workable.
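Before scripting the rollout, it is worth running the agent's built-in connectivity probe on a representative host; a sketch (check the flags against azcmagent --help on your version):

```shell
# Sketch: preflight the outbound endpoints from the target host.
# "azcmagent check" probes the required hostnames and reports any
# that are blocked by the firewall or proxy.
sudo azcmagent check --location westeurope
```

On a restrictive proxy this probe, not the connect call, is where rollouts usually stall - run it first.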

Extensions - the useful ones

Once the machine is Arc-enabled, install the extensions that map to the management story. The five worth installing on day one:

  • AzureMonitorWindowsAgent / AzureMonitorLinuxAgent - sends metrics, performance counters, syslog/Windows Event Log to Log Analytics. Driven by Data Collection Rules.
  • MDE.Windows / MDE.Linux - the Defender for Endpoint sensor, deployed when the machine is covered by a Defender for Servers plan (opt-in via Defender for Cloud).
  • Azure Update Manager - manages the OS patching schedule from Azure; its update handler extension is installed automatically the first time the machine is targeted by an assessment or patch run.
  • Run Command - strictly an agent capability rather than a separate extension: lets administrators (with the appropriate Run Command RBAC) execute scripts on the machine through Azure, no SSH/RDP exposure required.
  • Custom Script Extension - bootstrap script execution post-onboarding.
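A one-off manual install of one of these can be done from the Azure CLI; a sketch, assuming the connectedmachine CLI extension and an illustrative machine name:

```shell
# Sketch: install the Azure Monitor Agent on one Arc-enabled Linux machine.
# Machine and resource group names are illustrative.
az extension add --name connectedmachine --upgrade

az connectedmachine extension create \
    --resource-group "rg-arc-onprem-westeurope" \
    --machine-name "srv-app-01" \
    --location "westeurope" \
    --name "AzureMonitorLinuxAgent" \
    --publisher "Microsoft.Azure.Monitor" \
    --type "AzureMonitorLinuxAgent"
```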

Install via Azure Policy (the built-in "Configure Linux Arc-enabled machines to run Azure Monitor Agent" and its Windows counterpart) so every newly onboarded machine receives them automatically; do not manually click through the portal once you are past pilot.
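The policy assignment itself can be scripted; a sketch, assuming the built-in display name below matches your tenant (DeployIfNotExists assignments need a managed identity with rights to install the extension, so the role and identity scope here are illustrative):

```shell
# Sketch: assign the built-in policy that auto-installs AMA on Linux Arc
# machines. Display name, scope and role are placeholders - verify locally.
SCOPE="/subscriptions/$SUBSCRIPTION_ID/resourceGroups/rg-arc-onprem-westeurope"

POLICY=$(az policy definition list \
    --query "[?displayName=='Configure Linux Arc-enabled machines to run Azure Monitor Agent'].name | [0]" \
    --output tsv)

az policy assignment create \
    --name "deploy-ama-linux-arc" \
    --policy "$POLICY" \
    --scope "$SCOPE" \
    --mi-system-assigned \
    --location "westeurope" \
    --role "Azure Connected Machine Resource Administrator" \
    --identity-scope "$SCOPE"
```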

Defender for Cloud on Arc machines

Defender for Servers Plan 2 covers Arc machines at the same per-server price as native VMs (around $15/server/month list). What you get on an Arc machine: the MDE sensor (the same EDR engine as Defender for Endpoint), file integrity monitoring, adaptive application controls, JIT VM access (where applicable), vulnerability management via Defender for Cloud's built-in scanner, and recommendations against the Microsoft Cloud Security Benchmark.

The recommendation flow is identical to Azure VMs: Defender continuously evaluates the configuration, raises Security Recommendations, and assigns a Secure Score. The hybrid view in the Azure portal shows on-prem and Azure VMs side by side - the right view for the platform team to track the fleet's posture as a single number.

Update Manager - patching across the fleet

Azure Update Manager is the cross-fleet patching service. It supports Windows and Linux, native Azure VMs and Arc machines, with the same policy model:

  • Periodic assessment - the agent reports installed patches and missing patches to ARM. Visible in the portal.
  • Maintenance configurations - an ARM resource that defines a maintenance window (start time, duration, recurrence) and a list of patches to install. Assignable to a dynamic scope (e.g. "all Arc machines tagged Environment=Prod").
  • On-demand updates - one-off patch run scoped to a list of machines.

The result is a single-pane view of patch posture across native and Arc machines, with the same compliance reporting and the same RBAC. For most platform teams this alone justifies the Arc rollout.
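A maintenance window can be defined from the CLI as well; a sketch using the az maintenance extension (flag names as documented for that extension - verify with --help before relying on them, and treat all names and times as illustrative):

```shell
# Sketch: a monthly in-guest patch window for the Arc fleet.
az extension add --name maintenance --upgrade

az maintenance configuration create \
    --resource-group "rg-arc-onprem-westeurope" \
    --resource-name "mc-prod-monthly" \
    --location "westeurope" \
    --maintenance-scope "InGuestPatch" \
    --maintenance-window-start-date-time "2026-03-01 02:00" \
    --maintenance-window-duration "03:55" \
    --maintenance-window-recur-every "Month Third Saturday" \
    --maintenance-window-time-zone "UTC" \
    --reboot-setting "IfRequired" \
    --extension-properties InGuestPatchMode="User"
```

Dynamic scopes (e.g. "all Arc machines tagged Environment=Prod") are then attached to the configuration, in the portal or via the same extension.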

RBAC, identity and Run Command

Arc machines participate in Azure RBAC like any other ARM resource. The roles that matter:

  • Azure Connected Machine Onboarding - lowest privilege, used by the SP that does the initial enrolment. Cannot manage anything beyond the connect call.
  • Azure Connected Machine Resource Administrator - manage the resource (extensions, tags) but not data.
  • Virtual Machine Contributor - manage the resource fully.
  • Virtual Machine User Login + Run Command Contributor - execute scripts via Run Command. The combination that lets an admin run a one-off command without SSH/RDP exposure - the request is logged in the Activity Log, the result is captured.

Pair Run Command with PIM. The administrator elevates into the role just-in-time, runs the script, the role expires, the activity is in the audit log. Compare to the historical pattern of "SSH key on a jumphost" and the operational improvement is large.
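A one-off Run Command through ARM looks like the following sketch (connectedmachine CLI extension; machine name and script are illustrative):

```shell
# Sketch: execute a script on an Arc machine via ARM - no SSH/RDP needed.
# The call and its result are captured in the Activity Log.
az connectedmachine run-command create \
    --resource-group "rg-arc-onprem-westeurope" \
    --machine-name "srv-app-01" \
    --location "westeurope" \
    --name "disk-check" \
    --script "df -h /"
```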

The system-assigned managed identity on the Arc machine is the right way for on-host scripts to call Azure services - Key Vault, Storage, Monitor - without storing credentials. azcmagent exposes a local IMDS endpoint at http://localhost:40342 that returns a token for the machine identity; the standard Azure SDKs use it transparently.
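On an Arc machine the local IMDS uses a challenge/response that plain curl has to handle in two steps: the first request is rejected with a Www-Authenticate header naming a challenge-token file that only privileged local accounts can read, and the retry presents that token. A sketch of the flow (endpoint and api-version per the Arc managed identity docs; run as root on the Arc host):

```shell
# Sketch: fetch an ARM token from the Arc IMDS endpoint on localhost:40342.
URL="http://localhost:40342/metadata/identity/oauth2/token?api-version=2020-06-01&resource=https%3A%2F%2Fmanagement.azure.com"

# Step 1: unauthenticated probe -> 401 whose Www-Authenticate header
# points at the challenge-token file (readable only by privileged users)
KEY_FILE=$(curl -s -D - -H "Metadata: true" "$URL" -o /dev/null \
    | grep -i "Www-Authenticate" | cut -d'=' -f2 | tr -d '\r')

# Step 2: present the challenge token, receive the access token JSON
curl -s -H "Metadata: true" \
    -H "Authorization: Basic $(cat "$KEY_FILE")" "$URL"
```

The Azure SDKs' managed identity credentials do this dance transparently; the sketch only shows why local file permissions gate which on-host processes can use the machine identity.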

Cost model - the part nobody talks about

Arc enrolment itself is free. The features that hang off Arc are not. The realistic cost model for a 100-server fleet:

  • Connected Machine Agent: $0.
  • Azure Monitor Agent + Log Analytics: depends on log volume; $2-10 per server per month is typical for OS-level metrics + key event logs.
  • Defender for Servers Plan 2: ~$15 per server per month.
  • Update Manager: ~$5 per Arc-enabled server per month list price (patching native Azure VMs is free; check current pricing for exemptions).
  • Run Command: free.

The trap is unbounded log ingestion. A misconfigured Data Collection Rule that ships every Windows Security Event from every host doubles the Log Analytics bill overnight. Set a per-table cap, sample where appropriate, and use a Sentinel workspace separate from operational Log Analytics if you want different retention.
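Both guardrails can be set from the CLI; a sketch with illustrative workspace names, cap and retention values:

```shell
# Sketch: cap daily ingestion (in GB) on the workspace so a runaway
# Data Collection Rule cannot blow up the bill...
az monitor log-analytics workspace update \
    --resource-group "rg-arc-onprem-westeurope" \
    --workspace-name "law-arc-prod" \
    --quota 5

# ...and shorten retention on a chatty table.
az monitor log-analytics workspace table update \
    --resource-group "rg-arc-onprem-westeurope" \
    --workspace-name "law-arc-prod" \
    --name "Event" \
    --retention-time 30
```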

Common pitfalls

  • Onboarding without tags. Untagged Arc machines are an ungovernable mess. Bake the tags into the onboarding script.
  • SP secret stored on the on-prem host long-term. The SP secret is needed only at onboarding; do not leave it in a config file. Use the installation script with the secret in an environment variable that is unset after.
  • Manual extension installation. Use Azure Policy initiatives so every newly-onboarded machine receives the right extensions automatically.
  • Skipping the network test. The agent needs ~15 hostnames reachable on 443. Test with azcmagent check before the production rollout - on a restrictive proxy this can be the entire blocker.
  • Treating Arc as backup. Arc gives you management; not backup, not DR. Onboard Arc machines into Azure Backup separately if you need it.
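The secret-hygiene point above can be made mechanical: the secret exists only in the wrapper's environment for the duration of the connect call and is scrubbed the moment it returns. A minimal sketch (the connect call itself is elided; the placeholder value marks where your secret store injects the real one):

```shell
# Hypothetical onboarding wrapper: the SP secret never touches disk.
SP_SECRET="placeholder-secret"   # in practice injected by your secret store

# ... azcmagent connect --service-principal-secret "$SP_SECRET" ... runs here

unset SP_SECRET                  # scrub immediately after onboarding

# Confirm nothing lingers in this shell's environment
if [ -z "${SP_SECRET:-}" ]; then
    echo "secret scrubbed"
fi
```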

Audit checklist

  1. Every on-prem server is Arc-enabled with consistent tags (1 pt)
  2. Onboarding SP scoped to the Connected Machine Onboarding role only (1 pt)
  3. Azure Policy installs the standard extensions on new machines (1 pt)
  4. Defender for Servers Plan 2 enabled where the licence justifies it (1 pt)
  5. Log Analytics has per-table caps and a documented retention policy (1 pt)

5/5 = PASS, 3-4 = WARN, <3 = FAIL.

FAQ

Does Arc work behind a corporate proxy?

Yes - azcmagent config set proxy.url http://proxy:3128 applies a proxy. Test with azcmagent check.

Can I Arc-enable AWS or GCP VMs?

Yes - the agent installs the same way on any Linux or Windows host with outbound HTTPS. Multi-cloud onboarding is a first-class scenario.

What if the host loses internet for a day?

The agent reconnects when connectivity returns. Some features (Run Command, recent metrics) require live connection; assessment data resyncs automatically.

Does this give my apps an Azure identity?

The system-assigned managed identity belongs to the machine, not the apps. An app on the host can use that identity to call Azure services through the local IMDS endpoint.

Can I Arc-enable a domain controller?

Technically yes; pragmatically no. DCs have specific assumptions and the management overlay can interact poorly with replication. Onboard member servers and skip the DCs.

About the Author

Dargslan Editorial Team (Dargslan)

Collective of Software Developers, System Administrators, DevOps Engineers, and IT Authors

Dargslan is an independent technology publishing collective formed by experienced software developers, system administrators, and IT specialists.
