AI Platform Engineer

Who we are

OMHU CPH A/S is a fast-growing, design-driven DTC furniture brand headquartered in Copenhagen. We sell our iconic TEDDY sofa collection across 25+ markets, including the EU, the United States, the UK, Switzerland, and Norway, entirely through our own Shopify store.

We are a young company moving fast. We have 40 people, strong brand momentum, and an ambition to build one of the most data- and AI-forward operations in European DTC.

We are currently building a modern data platform, including a data warehouse, semantic layer, and agent infrastructure, as a long-term competitive advantage.

This is not a company where AI is a buzzword. It is a core part of how we intend to scale.

The role

As our first AI Platform Engineer, you will own the technical foundation that all of our AI agents and applications run on. You will build and own the engine room for all AI systems, ensuring everything operates reliably, securely, cost-efficiently, and at scale.

Your work directly impacts how fast we can scale revenue without scaling headcount. You own all technical decisions related to the AI platform. No one will define this for you.
 
Initial focus areas include building internal agents across finance, operations, and marketing, automating manual workflows, and rapidly prototyping and deploying new AI-driven features.

This is a greenfield role with real architectural ownership. You will make decisions that compound over time.

What you will build and own

Everything you build must be used across the company. If it is not used, it has failed.

Agent infrastructure

  • Design and maintain our multi-agent architecture, including agent orchestration, memory management, tool use, and inter-agent communication

  • Configure and deploy agents, including tone of voice, skills, personas, and access boundaries

  • Build and maintain a shared plugin marketplace, structured implementation workflows, company knowledge base integrations, and reusable agent components

  • Design multi-agent handoff patterns and queue management so agents work together without bottlenecks or runaway API costs

IT infrastructure & deployment

  • Own our AI infrastructure stack: cloud-hosted environments, on-premises hardware (e.g. Mac Studio on Apple Silicon), and cloud LLM integrations

  • Manage containerization and deployment via Docker, ensuring clean separation between dev, staging, and production environments

  • Set up and maintain CI/CD pipelines (GitHub Actions or equivalent) so new agents and applications are tested and deployed consistently

  • Optimize infrastructure for cost, performance, and uptime across all running agents and applications

Security & access control

You are accountable for ensuring that no AI system can create material business risk or financial exposure.

  • Establish and enforce security standards across the entire agent layer: secrets management, prompt injection defense, API key rotation, and role-based access control (RBAC)

  • Implement Zero Trust networking principles using tools like Cloudflare or Tailscale

  • Ensure GDPR compliance is built into the platform architecture, not bolted on afterward

  • Own threat modelling for agentic systems: what can each agent access, what can it do, and what happens if it goes wrong?

MCP architecture

  • Design and implement Model Context Protocol (MCP) server architecture that connects agents securely to our data warehouse, semantic layer (Cube), and internal systems

  • Build standardized, reusable MCP connectors so new agents can be integrated quickly without reinventing data access patterns each time

LLM layer & optimization

  • Own model selection, routing, and fallback logic across our LLM provider portfolio (Anthropic, OpenAI, and others)

  • Implement token optimization so every agent runs as efficiently as possible

  • Set up prompt versioning and management, treating prompts as production code

  • Build LLM observability infrastructure: logging, tracing, evaluation, and alerting across all running agents

Developer experience & standards

Your work should increase the speed of execution across all teams.

  • Create standardized agent scaffolding and templates so new agents follow consistent structure

  • Write clear internal documentation so the platform is understandable and maintainable as the team grows

  • Build internal tooling that makes it easy to spin up, test, and retire agents quickly

What we are looking for

Must-have

  • Strong backend engineering skills with production-quality code, including testing, structure, and maintainability. You define the language and stack based on long-term scalability, performance, and cost

  • Hands-on experience building with LLM APIs

  • Experience with LLM orchestration frameworks such as LangChain, LangGraph, LlamaIndex, or equivalent

  • Solid understanding of containerization (Docker) and CI/CD pipelines

  • Experience with API design, access control patterns, and secrets management

  • Comfort working independently and making architectural decisions without a large team behind you

Strong advantage

  • Experience with local model deployment on Apple Silicon using Ollama, MLX, llama.cpp, or similar

  • Familiarity with MCP (Model Context Protocol) server architecture

  • Experience with Cloudflare, Tailscale, or Zero Trust networking

  • Knowledge of data warehousing concepts, BigQuery, dbt, or similar

  • TypeScript / JavaScript for MCP server implementations and web-facing tooling

  • Background in a DTC, ecommerce, or scale-up environment

 

The languages and tools you will work with

Python · YAML · Bash / Shell · TypeScript · Docker · GitHub Actions · Cloudflare / Tailscale · Ollama / MLX · LangChain / LangGraph · OpenClaw · BigQuery · Cube · dbt (working knowledge), if it is not all outdated before you start. You are expected to continuously evaluate, challenge, and define the tools and technologies we should use going forward.

 

Who you are

  • You rely on data, not opinions

  • You move fast and iterate in production

  • You continuously improve systems and challenge how things are done

You are someone who thinks about security before it becomes a problem, documents what you build, and takes pride in systems that other people can understand and build on.

You are pragmatic over theoretical. You would rather ship a solid, well-structured solution today than architect a perfect one that never ships.

You are comfortable being the sole owner of a platform and energized, not intimidated, by the responsibility that comes with that. You understand that at a company like OMHU, what you build in the next 12 months will shape how 40 people work.

You do not need to know the furniture business. But you care deeply about whether what you build is actually used.

 

What we offer

  • A foundational role with genuine architectural ownership from day one, building systems that scale the output of the entire company

  • Short decision cycles and direct access to leadership. No bureaucracy

  • A modern, well-resourced AI stack, including dedicated Apple Silicon hardware and a serious data infrastructure investment

  • A fast-moving, design-driven company with strong brand momentum and international reach

Why join OMHU

  • Build and scale one of Scandinavia’s fastest-growing DTC furniture brands.

  • Work with a lightweight and fast-moving internal team.

  • High level of autonomy, ownership, and the ability to make a visible impact.

  • Opportunity to shape the future of OMHU’s paid growth engine from the inside.