
Architecture

Immich uses a traditional client-server design, with a dedicated database for data persistence. The frontend clients communicate with backend services over HTTP using REST APIs. Below is a high-level diagram of the architecture.

High Level Diagram

[Immich Architecture diagram]

The diagram shows clients communicating with the server's API via REST. The server communicates with downstream systems (i.e. Redis, Postgres, Machine Learning, the file system) through repository interfaces. Not shown in the diagram is that the server is split into two separate containers: immich-server and immich-microservices. The microservices container does not handle API requests or schedule cron jobs; it primarily handles incoming job requests from Redis.

Clients

Immich has three main clients:

  1. Mobile app - Android, iOS
  2. Web app - Responsive website
  3. CLI - Command-line utility for bulk upload
info

All three clients use OpenAPI to auto-generate REST clients for easy integration. For more information about this process, see OpenAPI.
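
As a rough illustration of what that buys a client, here is a hedged sketch of using a generated TypeScript client; the ./generated import path and the Configuration/AlbumApi/getAllAlbums names are hypothetical placeholders, not the actual generated code.

```typescript
// Hypothetical usage of an OpenAPI-generated client; names are illustrative only.
import { AlbumApi, Configuration } from './generated';

const config = new Configuration({
  basePath: 'https://immich.example.com/api',
  apiKey: process.env.IMMICH_API_KEY,
});

// Each REST endpoint becomes a typed method, so clients get compile-time checking
// of request and response shapes instead of hand-written fetch calls.
const albums = new AlbumApi(config);
const all = await albums.getAllAlbums();
console.log(all.map((album) => album.albumName));
```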

Mobile App

The mobile app is written in Flutter. It uses Isar Database for a local database and Riverpod for state management.

Web Client

The web app is a TypeScript project that uses SvelteKit and Tailwind CSS.

CLI

The Immich CLI is an npm package that lets users control their Immich instance from the command line. It uses the API to perform various tasks, especially uploading assets. See the CLI documentation for more information.

Server

The Immich backend is divided into several services, which run as individual Docker containers.

  1. immich-server - Handle and respond to REST API requests
  2. immich-microservices - Execute background jobs (thumbnail generation, metadata extraction, transcoding, etc.)
  3. immich-machine-learning - Execute machine learning models
  4. postgres - Persistent data storage
  5. redis - Queue management for immich-microservices

Immich Server

The Immich Server is a TypeScript project written for Node.js. It uses the Nest.js framework, with TypeORM for database management. The server codebase also loosely follows the Hexagonal Architecture. Specifically, we aim to separate technology-specific implementations (infra/) from core business logic (domain/).
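
As a minimal sketch of that split (the AssetEntity, IAssetRepository, and AssetService names are illustrative, not Immich's actual code), the domain layer depends only on a repository interface, and the infra layer provides the TypeORM-backed implementation:

```typescript
// Illustrative domain/ vs infra/ separation, assuming TypeORM.
import { Repository } from 'typeorm';

// Shared entity shape (kept trivial for the sketch).
export class AssetEntity {
  id!: string;
  originalFileName!: string;
}

// domain/ - core business logic depends only on this interface.
export interface IAssetRepository {
  getById(id: string): Promise<AssetEntity | null>;
  save(asset: AssetEntity): Promise<AssetEntity>;
}

export class AssetService {
  constructor(private readonly repository: IAssetRepository) {}

  async rename(id: string, name: string): Promise<AssetEntity> {
    const asset = await this.repository.getById(id);
    if (!asset) throw new Error('Asset not found');
    asset.originalFileName = name;
    return this.repository.save(asset);
  }
}

// infra/ - technology-specific implementation backed by TypeORM.
export class AssetRepository implements IAssetRepository {
  constructor(private readonly typeormRepository: Repository<AssetEntity>) {}

  getById(id: string): Promise<AssetEntity | null> {
    return this.typeormRepository.findOneBy({ id });
  }

  save(asset: AssetEntity): Promise<AssetEntity> {
    return this.typeormRepository.save(asset);
  }
}
```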

REST Endpoints

The server exposes a list of HTTP endpoints and associated handlers (controllers). Each controller usually implements the following CRUD operations (see the sketch after the list):

  • POST /<type> - Create
  • GET /<type> - Read (all)
  • GET /<type>/:id - Read (by id)
  • PUT /<type>/:id - Update (by id)
  • DELETE /<type>/:id - Delete (by id)
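
Below is a sketch of what such a controller can look like, assuming NestJS decorators; AlbumController, AlbumService, and the DTO classes are illustrative stand-ins rather than Immich's actual code.

```typescript
// Illustrative CRUD controller shape using NestJS routing decorators.
import { Body, Controller, Delete, Get, Param, Post, Put } from '@nestjs/common';

class CreateAlbumDto { albumName!: string; }
class UpdateAlbumDto { albumName?: string; }

// The service holds the business logic; the controller only maps HTTP to method calls.
class AlbumService {
  create(dto: CreateAlbumDto) { return dto; }
  getAll() { return []; }
  get(id: string) { return { id }; }
  update(id: string, dto: UpdateAlbumDto) { return { id, ...dto }; }
  remove(id: string) { return { id }; }
}

@Controller('album')
export class AlbumController {
  constructor(private readonly service: AlbumService) {}

  @Post()        // POST /album - Create
  create(@Body() dto: CreateAlbumDto) { return this.service.create(dto); }

  @Get()         // GET /album - Read (all)
  getAll() { return this.service.getAll(); }

  @Get(':id')    // GET /album/:id - Read (by id)
  get(@Param('id') id: string) { return this.service.get(id); }

  @Put(':id')    // PUT /album/:id - Update (by id)
  update(@Param('id') id: string, @Body() dto: UpdateAlbumDto) { return this.service.update(id, dto); }

  @Delete(':id') // DELETE /album/:id - Delete (by id)
  remove(@Param('id') id: string) { return this.service.remove(id); }
}
```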

DTOs

The server uses Data Transfer Objects (DTOs) as public interfaces for the inputs (query, params, and body) and outputs (response) of each endpoint. DTOs translate to OpenAPI schemas and control the generated code used by each client.
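
As a hedged sketch, a DTO typically combines validation and OpenAPI metadata through decorators (assuming class-validator and @nestjs/swagger; CreateAlbumDto here is illustrative):

```typescript
// Illustrative DTO: the same class drives runtime validation and the OpenAPI schema.
import { ApiProperty } from '@nestjs/swagger';
import { IsOptional, IsString } from 'class-validator';

export class CreateAlbumDto {
  @ApiProperty()
  @IsString()
  albumName!: string;

  @ApiProperty({ required: false })
  @IsOptional()
  @IsString()
  description?: string;
}
```

The OpenAPI schema produced from decorators like these is what the client generators consume when building the mobile, web, and CLI clients.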

Microservices

The Immich Microservices image uses the same Dockerfile as the Immich Server, but with a different entrypoint. The Immich Microservices service is mainly responsible for executing jobs, which include the following:

  • Thumbnail Generation
  • Metadata Extraction
  • Video Transcoding
  • Smart Search
  • Facial Recognition
  • Storage Template Migration
  • Sidecar (see XMP Sidecars)
  • Background jobs (file deletion, user deletion)
info

This list closely matches what is available on the Administration > Jobs page, which provides some remote queue management capabilities.
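
As a rough sketch of the dispatch pattern (not Immich's actual code), the microservices container receives a queued job and routes it to the handler registered for that job type:

```typescript
// Illustrative job dispatch: the job names and payload shape are assumptions for this sketch.
type JobName = 'generate-thumbnail' | 'extract-metadata' | 'transcode-video';

interface JobItem {
  name: JobName;
  data: { assetId: string };
}

// Each job type maps to a handler; real handlers would call into the relevant services.
const handlers: Record<JobName, (assetId: string) => Promise<void>> = {
  'generate-thumbnail': async (assetId) => console.log(`resize preview for ${assetId}`),
  'extract-metadata': async (assetId) => console.log(`read EXIF for ${assetId}`),
  'transcode-video': async (assetId) => console.log(`run ffmpeg for ${assetId}`),
};

export async function handleJob(job: JobItem): Promise<void> {
  await handlers[job.name](job.data.assetId);
}
```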

Machine Learning

The machine learning service is written in Python and uses FastAPI for HTTP communication.

All machine learning related operations have been externalized to this service, immich-machine-learning. Python is a natural choice for AI and machine learning, and these workloads also have fairly specific hardware requirements. Running them in a separate container makes it possible to run that container on a separate machine, or to disable it entirely.

Each request to the machine learning service contains the relevant metadata for the model task, model name, and so on. These settings are stored in Postgres along with other system configs. For each request, the microservices container fetches these settings in order to attach them to the request.
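
A hedged sketch of that flow from the microservices side (the /predict path, the entries field, and SmartSearchConfig are assumptions for illustration, not the actual wire format):

```typescript
// Illustrative request: configured model settings ride along with the payload so the
// ML service knows which model to load (or reuse from its cache) for this request.
interface SmartSearchConfig {
  modelName: string; // e.g. a CLIP model identifier configured by the admin
}

async function requestImageEmbedding(
  mlUrl: string,
  config: SmartSearchConfig,
  image: Blob,
): Promise<number[]> {
  const form = new FormData();
  form.append('image', image);
  form.append('entries', JSON.stringify({ task: 'clip', modelName: config.modelName }));

  const response = await fetch(`${mlUrl}/predict`, { method: 'POST', body: form });
  return response.json();
}
```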

Internally, the machine learning service downloads, loads and configures the specified model for a given request before processing the text or image payload with it. Models that have been loaded are cached and reused across requests. A thread pool is used to process each request in a different thread so as not to block the async event loop.

All models are in ONNX format. This format has wide industry support, meaning that most other model formats can be exported to it and many hardware APIs support it. It's also quite fast.

Machine learning models are also quite large and can require a significant amount of memory. We are always looking for ways to improve and optimize this aspect of this container specifically.

Postgres

Immich persists data in Postgres, which includes information about access and authorization, users, albums, assets, sharing settings, etc.
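
For illustration, a hedged sketch of how such tables can be modelled with TypeORM entities (UserEntity and AlbumEntity here are simplified stand-ins, not Immich's actual schema):

```typescript
// Illustrative TypeORM entities mapping Postgres tables to typed classes.
import { Column, Entity, ManyToOne, PrimaryGeneratedColumn } from 'typeorm';

@Entity('users')
export class UserEntity {
  @PrimaryGeneratedColumn('uuid')
  id!: string;

  @Column()
  email!: string;
}

@Entity('albums')
export class AlbumEntity {
  @PrimaryGeneratedColumn('uuid')
  id!: string;

  @Column()
  albumName!: string;

  // Ownership relation: each album belongs to a user.
  @ManyToOne(() => UserEntity)
  owner!: UserEntity;
}
```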

info

See Database Migrations for more information about how to modify the database to create an index, modify a table, add a new column, etc.

Redis

Immich uses Redis via BullMQ to manage job queues. Some jobs trigger subsequent jobs. For example, Smart Search and Facial Recognition rely on thumbnail generation and automatically run after a thumbnail is generated.
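
A minimal BullMQ sketch of that chaining, under assumed queue and job names rather than Immich's actual ones:

```typescript
// Illustrative BullMQ setup: a worker finishes one job and enqueues a follow-up job.
import { Queue, Worker } from 'bullmq';

const connection = { host: 'redis', port: 6379 };

const thumbnails = new Queue('thumbnail-generation', { connection });
const smartSearch = new Queue('smart-search', { connection });

// Worker in the microservices container: once a thumbnail exists, queue the jobs
// that depend on it (e.g. Smart Search on the generated preview image).
new Worker(
  'thumbnail-generation',
  async (job) => {
    // ... generate the thumbnail for job.data.assetId ...
    await smartSearch.add('encode-clip', { assetId: job.data.assetId });
  },
  { connection },
);

// Elsewhere, a new upload enqueues the first job in the chain.
await thumbnails.add('generate', { assetId: 'some-asset-id' });
```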