Image Pipeline

How to use ImagePipeline directly and customize it

At the core of Nuke is the ImagePipeline class. Use the pipeline directly to load images and decide how to display them later.

ImagePipeline

The pipeline returns an ImageTask object that you can use to manage the active download.

let task = ImagePipeline.shared.loadImage(
    with: url,
    progress: { _, completed, total in
        print("progress updated")
    },
    completion: { result in
        print("task completed")
    }
)

Load Image

Use the loadImage method to load an image for the given URL or request.

ImagePipeline.shared.loadImage(with: url) { result in
    print("task completed")
}

The pipeline checks if the image exists in any of the cache layers, prioritizing the fastest caches (memory cache). If there is no cached data, the pipeline starts the download. When the data is loaded, it decodes the data, applies the processors, and decompresses the image in the background.
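
For example, you can attach processors to a request, and the pipeline applies them after decoding. A minimal sketch, assuming the built-in ImageProcessors.Resize processor and an arbitrary target size:

let request = ImageRequest(
    url: url,
    processors: [ImageProcessors.Resize(size: CGSize(width: 44, height: 44))]
)

ImagePipeline.shared.loadImage(with: request) { result in
    print("task completed")
}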

See Image Pipeline Guide to learn how images are downloaded and processed.

Load Data

Use the loadData method to load image data.

ImagePipeline.shared.loadData(with: url) { result in
    print("task completed")
}

Complete Signature

The complete signature of the loadImage method:

@discardableResult
func loadImage(
    with request: ImageRequestConvertible,
    queue: DispatchQueue? = nil,
    progress: ((ImageResponse?, Int64, Int64) -> Void)? = nil,
    completion: ((Result<ImageResponse, Error>) -> Void)
) -> ImageTask 

You can customize the callback queue, observe progress¹, etc.
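
For example, here is a sketch that delivers callbacks on a custom serial queue and inspects the intermediate responses produced by progressive decoding (the queue label is arbitrary, and progressive decoding must be enabled in the configuration):

let callbackQueue = DispatchQueue(label: "com.example.image-callbacks")

ImagePipeline.shared.loadImage(
    with: url,
    queue: callbackQueue,
    progress: { response, completed, total in
        if let response = response {
            // An intermediate response produced by progressive decoding.
            print("preview produced:", response.image)
        } else {
            print("downloaded \(completed) of \(total) bytes")
        }
    },
    completion: { result in
        print("task completed")
    }
)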

loadImage always calls the completion closure asynchronously. To check whether the image is stored in the memory cache, use pipeline.cachedImage(for: url).
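
A sketch of checking the memory cache before starting a request (this assumes cachedImage(for:) returns an image container, as in recent Nuke versions):

if let container = ImagePipeline.shared.cachedImage(for: url) {
    // The image is already in the memory cache and can be used synchronously.
    print("cached image:", container.image)
} else {
    ImagePipeline.shared.loadImage(with: url) { result in
        print("task completed")
    }
}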

ImageTask

When you start the request, the pipeline returns an ImageTask object, which can be used for cancellation and more.

let task = ImagePipeline.shared.loadData(with: url) { result in
    print("task completed")
}

The pipeline maintains a strong reference to the task until the request finishes or fails; you do not need to maintain a reference to the task unless it is useful to do so for your app’s internal bookkeeping purposes.

Cancellation

To cancel the request, mark the task for cancellation.

task.cancel()

Priority

Change the priority of the outstanding task.

task.priority = .high

Progress

In addition to the progress closure, you can observe the progress of the download using Foundation.Progress.

let progress = task.progress
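
Foundation.Progress is KVO-compatible, so you can also observe it with key-value observation; a minimal sketch (keep the returned observation alive for as long as you need the updates):

let observation = task.progress.observe(\.fractionCompleted) { progress, _ in
    print("fraction completed:", progress.fractionCompleted)
}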

Configuration

Default Configuration

The default image pipeline is initialized with the following dependencies:

// Shared image cache with a size limit of ~20% of available RAM.
imageCache = ImageCache.shared

// Data loader with a default `URLSessionConfiguration` and a custom `URLCache`
// with memory capacity 0, and disk capacity 150 MB.
dataLoader = DataLoader()

// Custom aggressive disk cache is disabled by default.
dataCache = nil

// By default uses the decoder from the global registry and the default encoder.
makeImageDecoder = ImageDecoderRegistry.shared.decoder(for:)
makeImageEncoder = { _ in ImageEncoders.Default() }

Each operation in the pipeline runs on a dedicated queue:

dataLoadingQueue.maxConcurrentOperationCount = 6
dataCachingQueue.maxConcurrentOperationCount = 2
imageDecodingQueue.maxConcurrentOperationCount = 1
imageEncodingQueue.maxConcurrentOperationCount = 1
imageProcessingQueue.maxConcurrentOperationCount = 2
imageDecompressingQueue.maxConcurrentOperationCount = 2
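
You can override these limits through the configuration; for example, a sketch that lowers the number of concurrent downloads:

let pipeline = ImagePipeline {
    // Allow at most two concurrent data downloads.
    $0.dataLoadingQueue.maxConcurrentOperationCount = 2
}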

There is also a list of pipeline settings that you can tweak:

// Automatically decompress images in the background by default.
isDecompressionEnabled = true

// Configure what content to store in the custom disk cache.
dataCacheOptions.storedItems = [.finalImage] // [.originalImageData]

// Avoid doing any duplicated work when loading or processing images.
isDeduplicationEnabled = true

// Rate limit the requests to prevent thrashing of the subsystems.
isRateLimiterEnabled = true

// Progressive decoding is an opt-in feature because it is resource intensive.
isProgressiveDecodingEnabled = false

// Don't store progressive previews in memory cache.
isStoringPreviewsInMemoryCache = false

// If the data task is terminated (either because of a failure or a
// cancellation) and the image was partially loaded, the next load will
// resume where it was left off.
isResumableDataEnabled = true

There are also a few global options shared between all pipelines:

// Enable to start using `os_signpost` to monitor the pipeline
// performance using Instruments.
ImagePipeline.Configuration.isSignpostLoggingEnabled = false

Custom Pipeline

If you want to build a system that fits your specific needs, you won’t be disappointed. There are a lot of things to tweak. You can set custom data loaders and caches, configure image encoders and decoders, change the number of concurrent operations for each stage, disable and enable features like deduplication and rate-limiting, and more.

To learn more, see the inline documentation for ImagePipeline.Configuration and Image Pipeline Guide.

The main customization points (the data loader, data cache, image cache, decoders, and encoders) are exposed as protocols.

To create a pipeline with a custom configuration, either call the ImagePipeline(configuration:) initializer or use the convenience one:

let pipeline = ImagePipeline {
    $0.dataLoader = ...
    $0.dataLoadingQueue = ...
    $0.imageCache = ...
    ...
}
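
For instance, a pipeline with the aggressive disk cache and progressive decoding enabled might look like this (a sketch; the cache name is arbitrary):

let pipeline = ImagePipeline {
    // Enable the custom aggressive disk cache and store original image data in it.
    $0.dataCache = try? DataCache(name: "com.example.images")
    $0.dataCacheOptions.storedItems = [.originalImageData]
    // Opt in to progressive decoding.
    $0.isProgressiveDecodingEnabled = true
}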

And then set the new pipeline as the default one:

ImagePipeline.shared = pipeline
  1. The first parameter (ImageResponse) represents an intermediate response used for progressive decoding.