Wanna go to Denmark? - Azure Denmark East
Last verified: 4 September 2025. Re-check service availability and region pairing before you act.
Availability note: In a LinkedIn post, Microsoft Denmark CEO Mette Louise Kaagaard said the Danish Azure region is planned for the end of 2025. Treat this as guidance, not a formal GA date.
Source: LinkedIn post
I keep hearing the same questions about Denmark East. Will it be ready for our stack on day one? How do we pick a secondary region? What is the least painful way to move? This post is my answer, written the way I run platform work in real teams. Opinionated where it helps, simple where it must be, and focused on the things you will actually do in a change window.
What ready looks like in real life
Ready is not a launch party. Ready is the boring checklist that keeps people out of trouble. You can create resources in denmarkeast without hitting policy walls. The core SKUs you rely on exist and quota is in place. Private endpoints resolve with no magic. Logs land where you expect. You can lose a zone without drama and you can fail over to another region without guesswork. When all of that is true, let product teams loose.
Decide first, build second
Before any pull requests, write down three decisions on one page.
Primary and secondary. Denmark East is the target primary. Until Microsoft publishes an official pair, pick Sweden Central or Germany West Central as the secondary. My default is Sweden Central unless there is a routing or data reason to go Germany West Central. The key is to choose and document why.
Networking stance. Most teams should keep Azure Front Door at the front. It gives you stable edge routing while you move origins underneath. Note the DNS records you will switch at cutover, who owns them, and the TTLs you will use. I like 60 seconds for the hour around cutover, then back to 5 minutes.
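If those records live in Azure DNS, the TTL flip is scriptable ahead of the window. A minimal sketch with placeholder names, using the CLI's generic --set update; check the syntax against your CLI version before the real night.

```bash
# Drop the TTL to 60 seconds for the hour around cutover (placeholder names).
az network dns record-set a update \
  --resource-group rg-dns \
  --zone-name example.com \
  --name www \
  --set ttl=60

# After the window, raise it back to 5 minutes.
az network dns record-set a update \
  --resource-group rg-dns \
  --zone-name example.com \
  --name www \
  --set ttl=300
```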
Data posture. For each datastore write one sentence that says how you keep it safe. Example: SQL uses auto failover groups with an RPO below 30 seconds and an RTO under 15 minutes. If you cannot write that sentence yet, you are not ready.
Governance that helps, not hinders
Make it possible to do the right thing. Add denmarkeast to your AllowedLocations policy and say it out loud so teams know they can deploy there. If you run subscription vending or a CAF style module, cut a small release that includes:
- Region code dke in naming
- Budgets and anomaly alerts
- A Log Analytics workspace in Denmark East plus a Data Collection Rule
- A short allow list of known good SKUs on day one
This is a few lines of JSON and a pair of Bicep modules. Ship it early so your first canary deploy works the moment the region appears for your tenant.
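A minimal sketch of the policy piece, using the Azure CLI instead of Bicep so it fits on a slide. The assignment name and the location list are placeholders, and you should confirm the built-in Allowed locations definition ID in your own tenant.

```bash
# Assign the built-in "Allowed locations" policy at subscription scope.
# The GUID is the built-in definition name; confirm it in your tenant first.
az policy assignment create \
  --name allowed-locations \
  --display-name "Allowed locations incl. Denmark East" \
  --policy e56962a6-4747-49cd-b67b-bf8b01975c4c \
  --params '{
    "listOfAllowedLocations": {
      "value": ["swedencentral", "westeurope", "denmarkeast"]
    }
  }'
```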
Networking that stays predictable
If you are already on Front Door, you are in a good place. Do three quick checks. Health probes and WAF rules behave the way you think. Private endpoints in Denmark East resolve through your Private DNS zones without per app patches. Your WAN or ExpressRoute paths to Denmark East and to your chosen secondary look clean. Write down a baseline round trip time today so you can spot regressions later.
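For the baseline, curl timings from the places your users and pipelines actually sit are enough. A minimal sketch; the hostname is a placeholder for whatever canary origin you stand up in Denmark East.

```bash
# Capture a simple latency baseline: TCP connect, TLS, and total time in seconds.
# Run it from the office, from CI, and from your hub VNet, then keep the numbers.
for i in 1 2 3 4 5; do
  curl -o /dev/null -s \
    -w 'connect=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' \
    https://canary-dke.example.com/healthz
done
```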
Keep the shape simple. Zonal load balancers inside the region, Front Door on the outside, Private Link where it makes sense, and DNS with sensible TTLs.
Monitoring that proves what happened
Create a Log Analytics workspace in Denmark East and a Data Collection Rule that captures the signals you actually read. Performance counters, core platform logs, Syslog or Windows Events if you run VMs. Point standard diagnostic settings at that workspace. If you use Sentinel or Defender, plan the onboarding, but do not block region adoption on it. You can stage that work in the first month.
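A minimal sketch of the workspace plus one diagnostic setting, with placeholder names. Log and metric categories differ per service, and the Data Collection Rule itself is usually nicer to keep in Bicep, so treat this as the shape, not the final module.

```bash
# Create the Denmark East workspace (placeholder names).
az monitor log-analytics workspace create \
  --resource-group rg-platform-dke \
  --workspace-name law-platform-dke \
  --location denmarkeast

# Point one resource's diagnostic settings at it. Resource IDs and category
# names are placeholders; adjust per service.
az monitor diagnostic-settings create \
  --name send-to-dke-law \
  --resource "<app-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```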
I keep a short EVIDENCE.md in the platform repo with screenshots and timestamps from drills. Future you will be grateful.
Resilience without theatrics
Think in two layers. Inside the region use zones. For data choose ZRS or GZRS when available. Outside the region use the native feature for each service. SQL uses auto failover groups or geo replication. Service Bus uses Geo DR. Event Hubs writes to capture so you can rehydrate. AKS runs in two regions with GitOps and a replicated ACR. Practice once. Keep the timings. That is your RTO and RPO.
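For the SQL piece, the native feature is an auto failover group between the Denmark East server and its partner in the secondary. A minimal sketch with placeholder names; the failover policy and grace period drive your effective RTO, so measure them in the drill rather than trusting defaults.

```bash
# Create an auto failover group between the Denmark East primary and the
# secondary region server (placeholder names throughout).
az sql failover-group create \
  --name fg-orders \
  --resource-group rg-data-dke \
  --server sql-orders-dke \
  --partner-server sql-orders-sdc \
  --partner-resource-group rg-data-sdc \
  --add-db orders \
  --failover-policy Automatic \
  --grace-period 1
```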
A realistic validation plan
When Denmark East shows up for your tenant, spend one focused session and tick these off.
- Region visibility. Run az account list-locations | grep -i denmark. If it is not there, stop and wait.
- SKU reality check. List VM SKUs, App Service plans, AKS versions. If a building block is missing, say so and do not promise dates you cannot keep. A small script sketch after this list pulls the first few checks together.
- Policy sanity. Create a canary resource in a fresh resource group to prove AllowedLocations is correct.
- Network smoke. Bring up a tiny App Service or Container App in Denmark East, point Front Door at it, and watch probes and WAF.
- Observability. Send diagnostics to the Denmark East workspace and confirm ingestion.
- Resilience drill. Simulate a zone loss, then fail to the secondary region. Time both and save the output.
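Here is the small script sketch those first items point at. Names are placeholders and the listings are a starting point, not an inventory.

```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. Region visibility: stop here if denmarkeast is not listed yet.
az account list-locations --query "[?name=='denmarkeast'].name" -o tsv | grep -q denmarkeast \
  || { echo "denmarkeast not visible for this tenant yet"; exit 1; }

# 2. SKU reality check: VM sizes and AKS versions available in the region.
az vm list-skus --location denmarkeast --resource-type virtualMachines -o table
az aks get-versions --location denmarkeast -o table

# 3. Policy sanity: a canary resource group proves AllowedLocations is correct.
az group create --name rg-canary-dke --location denmarkeast
```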
Moving workloads without headaches
There are three ways to get to Denmark East. I recommend redeploy and cutover for most cases because it is predictable and easy to test.
Path A. Redeploy and cutover. Lay the foundations with IaC. Resource groups, virtual networks and subnets, Private DNS links, Log Analytics and the Data Collection Rule, baseline policy. Deploy the same stacks you run today but keep them dark with internal endpoints. Seed the data. For storage, either object replication or a measured AzCopy. For SQL or Managed Instance, geo replication or a restore with a catch-up plan. For Key Vault, backup and restore and then rehydrate secrets and keys. For ACR, replicate images. Warm everything up. When it looks good, take a short freeze, drain queues, do a final database sync or backup, and then flip traffic with Front Door or DNS. Lower TTLs for the hour, watch telemetry, and only then retire the old region after a safe quarantine.
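For the data seeding step, two commands cover the common cases. A sketch with placeholder names and SAS tokens elided: AzCopy for the one-off bulk copy of storage, and a geo secondary for the SQL catch-up.

```bash
# One-off bulk copy of a container into the Denmark East storage account.
# Placeholder account and container names; supply your own SAS tokens.
azcopy copy \
  "https://stsourceweu.blob.core.windows.net/assets?<source-sas>" \
  "https://sttargetdke.blob.core.windows.net/assets?<target-sas>" \
  --recursive

# Seed SQL by adding a geo secondary in Denmark East, then let it catch up
# before the cutover window.
az sql db replica create \
  --resource-group rg-data-weu \
  --server sql-orders-weu \
  --name orders \
  --partner-server sql-orders-dke \
  --partner-resource-group rg-data-dke
```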
Path B. Azure Resource Mover. If a resource is supported and a redeploy would be awkward, the mover can help. You still validate dependencies, initiate and commit the move, then revisit diagnostics, access policies, and private endpoints. It is a tool, not a magic trick.
Path C. Portal move to another region. A few services have a native move wizard. Handy for one-offs and smaller estates. Not a platform-wide strategy.
Rule of thumb. If a service has a clean redeploy story, choose Path A. Keep Path B and Path C for the exceptions.
Naming and pipelines
Use denmarkeast in IaC and a short code like dke in resource names. Add the region to your CI matrix so deploys do not need a late night pull request. If your pipelines still store cloud credentials, fix that today. OIDC to Azure keeps secrets out of the runner.
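Wiring OIDC from GitHub Actions is mostly one federated credential on the app registration. A minimal sketch; the app object ID, org, repository, and branch are placeholders, while the issuer and audience are the standard values for GitHub's token service.

```bash
# Add a federated credential so a GitHub Actions workflow on main can sign in
# without a stored secret. App ID, org, repo, and branch are placeholders.
az ad app federated-credential create \
  --id <application-object-id> \
  --parameters '{
    "name": "github-main",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:my-org/platform:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"]
  }'
```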
FAQ you might hear
Will Danish users see better latency on day one?
If you cut your origins over cleanly behind Front Door, usually yes. Measure it from both office and CI. Then keep the numbers.
Which region should be secondary, North Europe or West Europe?
Pick the one that matches your network paths and data needs. Between those two I default to North Europe unless there is a strong reason to go West Europe, though as noted above my first pick overall is Sweden Central.
How much of this is EU Data Boundary safe?
Most of Azure is in the boundary. A few non-regional features still need configuration. Note the security operations exception in your DPIA and move on.
Closing thought
Denmark East is not a reason to redesign everything. It is a chance to tidy your platform and raise the floor. Ship the policies, make the network predictable, wire the logs, and practice the failover. Do that and the rest is just plumbing.