This guide describes in detail what happens under the hood when you call `Nuke.loadImage(with:into:)`.
This method loads an image with the given request and displays it in the view.
Before loading a new image, it prepares the view for reuse by canceling any outstanding requests and removing a previously displayed image.
If the image is in the memory cache, it is displayed immediately with no animations. If not, it loads the image using an image pipeline. While it is loading, a
placeholder is displayed. When the request completes, Nuke displays the image (or
failureImage in case of an error) with the provided animation.
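The behavior described above can be sketched like this. The `ImageLoadingOptions` properties below are part of Nuke's public API, but the asset names and duration are illustrative:

```swift
import Nuke
import UIKit

final class ImageCell: UICollectionViewCell {
    let imageView = UIImageView()

    func display(_ url: URL) {
        // Canceling outstanding requests and removing the previous image
        // happens automatically inside loadImage(with:into:).
        let options = ImageLoadingOptions(
            placeholder: UIImage(named: "placeholder"), // shown while loading
            transition: .fadeIn(duration: 0.33),        // animation on success
            failureImage: UIImage(named: "failure")     // shown on error
        )
        Nuke.loadImage(with: url, options: options, into: imageView)
    }
}
```

If the image is already in the memory cache, the transition is skipped and the image is shown immediately.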
This section describes the steps that the pipeline performs when getting an image ready. As a visual aid, refer to the block diagram (the data cache part does not yet reflect all of the changes in Nuke 9).
1. Check if the requested image is in the memory cache.
2. Check if the processed image data is in the disk cache (assuming the disk cache for processed images is enabled). If it is, the image is decoded, decompressed, stored in the memory cache, and delivered to the client.
3. Check if the original image data is in the disk cache. If it is, the pipeline repeats the steps from the previous point, but this time it also applies the processors.
The disk cache described in steps 2 and 3 is disabled by default. The pipeline relies on the HTTP-compliant disk cache on the `URLSession` level. To learn how to enable the custom disk cache, see Aggressive LRU Disk Cache.
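A minimal sketch of enabling the custom disk cache. `DataCache` and the closure-based `ImagePipeline` initializer are part of Nuke's public API; the cache name is a placeholder:

```swift
import Nuke

// Create a pipeline with the aggressive LRU disk cache enabled.
// "com.example.myapp.datacache" is an illustrative name.
let pipeline = ImagePipeline {
    $0.dataCache = try? DataCache(name: "com.example.myapp.datacache")
}

// Optionally make it the default pipeline used by the view extensions.
ImagePipeline.shared = pipeline
```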
Now that you've seen a high-level overview, let's dive into more detail.
Data Loading and Caching #
`DataLoader` uses `URLSession` to load image data. The data is cached on disk using `URLCache`, which by default is initialized with a memory capacity of 0 MB (Nuke only stores processed images in memory) and a disk capacity of 150 MB.
The `URLSession` class natively supports the `data`, `file`, `ftp`, `http`, and `https` URL schemes.
`DataLoader` works great for most situations, but if you need to provide a custom networking layer, you can do so using the `DataLoading` protocol. See the Third Party Libraries guide to learn more. See also the Alamofire plugin.
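For illustration, a conforming type might look like this. The `loadData` requirement below reflects Nuke 9's `DataLoading` protocol; the networking internals and the `TaskCancellable` wrapper are illustrative:

```swift
import Foundation
import Nuke

// A sketch of a custom data loader built on top of URLSession.
// In practice you would forward requests to your own networking stack.
final class CustomDataLoader: DataLoading {
    func loadData(with request: URLRequest,
                  didReceiveData: @escaping (Data, URLResponse) -> Void,
                  completion: @escaping (Error?) -> Void) -> Cancellable {
        let task = URLSession.shared.dataTask(with: request) { data, response, error in
            if let data = data, let response = response {
                didReceiveData(data, response) // deliver the loaded chunk
            }
            completion(error) // nil on success
        }
        task.resume()
        return TaskCancellable(task)
    }
}

// Adapts URLSessionTask to Nuke's Cancellable protocol.
private final class TaskCancellable: Cancellable {
    private let task: URLSessionTask
    init(_ task: URLSessionTask) { self.task = task }
    func cancel() { task.cancel() }
}
```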
Resumable Downloads #
If a data task is terminated while the image is partially loaded (either because of a failure or a cancellation), the next load will resume where the previous one left off. Resumable downloads require the server to support HTTP Range Requests. Nuke supports both validators: `ETag` and `Last-Modified`. Resumable downloads are enabled by default. You can learn more in “Resumable Downloads”.
Memory Cache #
The processed images are stored in a fast in-memory cache (`ImageCache`). It uses an LRU (least recently used) replacement algorithm and has a limit of ~20% of available RAM.
ImageCache automatically evicts images on memory warnings and removes a portion of its contents when the application enters background mode.
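The defaults can be tuned. `costLimit`, `countLimit`, and `ttl` are part of `ImageCache`'s public API, though the values below are illustrative:

```swift
import Nuke

// Limit the memory cache to ~100 MB and 100 images,
// and expire entries after 120 seconds (values are illustrative).
ImageCache.shared.costLimit = 1024 * 1024 * 100
ImageCache.shared.countLimit = 100
ImageCache.shared.ttl = 120
```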
Coalescing #
The pipeline avoids doing any duplicated work when loading images. For example, let's take these two requests:
```swift
let url = URL(string: "http://example.com/image")

pipeline.loadImage(with: ImageRequest(url: url, processors: [
    ImageProcessor.Resize(size: CGSize(width: 44, height: 44)),
    ImageProcessor.GaussianBlur(radius: 8)
]))

pipeline.loadImage(with: ImageRequest(url: url, processors: [
    ImageProcessor.Resize(size: CGSize(width: 44, height: 44))
]))
```
Nuke will load the data only once, resize the image only once, and apply the blur only once. There is no duplicated work. The work is only canceled when all the registered requests are canceled, and the priority is based on the highest priority of the registered requests.
Coalescing can be disabled using the `isDeduplicationEnabled` configuration option.
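For example, a minimal sketch of turning coalescing off:

```swift
import Nuke

// Create a pipeline with coalescing (deduplication) disabled;
// it is enabled by default.
let pipeline = ImagePipeline {
    $0.isDeduplicationEnabled = false
}
```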
Decompression #
When you instantiate `UIImage` with `Data`, the data can be in a compressed format like JPEG. `UIImage` does not eagerly decompress this data until you display it. This leads to performance issues like scroll view stuttering. To avoid them, Nuke automatically decompresses the data in the background. Decompression only runs when needed; it won't run for already processed images.
See Image and Graphics Best Practices to learn more about image decoding and downsampling.
Progressive Decoding #
If progressive decoding is enabled, the pipeline attempts to produce a preview of any image every time a new chunk of data is loaded. See it in action in the demo project.
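Progressive decoding is opt-in. A minimal sketch of enabling it:

```swift
import Nuke

// Opt in to progressive decoding; it is disabled by default.
let pipeline = ImagePipeline {
    $0.isProgressiveDecodingEnabled = true
}
```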
When the pipeline downloads the first chunk of data, it creates an instance of a decoder that is used for the entire image loading session. As new chunks are loaded, the pipeline passes them to the decoder. The decoder can either produce a preview or return nil if not enough data has been downloaded yet.
Every image preview goes through the same processing and decompression phases that the final images do. The main difference is the introduction of backpressure. If one of the stages can’t process the input fast enough, then the pipeline waits until the current operation is finished, and only then starts the next one. When the data is fully downloaded, all outstanding progressive operations are canceled to save processing time.
Nuke is tuned to have as little overhead as possible. It uses multiple optimization techniques to achieve that: reducing the number of allocations, reducing dynamic dispatch, copy-on-write, etc. There is virtually nothing left in Nuke that could be changed to improve main thread performance.
If you measure just Nuke code, it takes about 0.004 ms (4 microseconds) on the main thread per request and about 0.03 ms (30 microseconds) overall, as measured on iPhone 11 Pro using Nuke 9.3.0.
Nuke is fully asynchronous and performs well under stress.
`ImagePipeline` schedules its operations on dedicated queues. A queue limits the number of concurrent tasks, manages the request priorities, and cancels the work when needed. Under extreme load,
`ImagePipeline` will also rate-limit requests to prevent saturation of the underlying systems.
To learn more about Nuke performance, see “Nuke 9”.
If you want to see how the system behaves, how long each operation takes, and how many are performed in parallel, enable the
isSignpostLoggingEnabled option and use the
os_signpost Instrument. For more information see Apple Documentation: Logging and WWDC 2018: Measuring Performance Using Logging.
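A sketch of enabling signpost logging, assuming `isSignpostLoggingEnabled` is an instance option on `ImagePipeline.Configuration` as in Nuke 9:

```swift
import Nuke

// Enable signpost logging to inspect pipeline operations in Instruments.
ImagePipeline.shared = ImagePipeline {
    $0.isSignpostLoggingEnabled = true
}
```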
Image loading frameworks are often used in table and collection views with a large number of cells. They must perform well to achieve buttery smooth scrolling.
Please keep in mind that this performance test (sources) makes for a very nice-looking chart, but in practice, the difference between Nuke and, say, SDWebImage won't be that dramatic. Unless your app drops frames while rendering a table or a collection view, there is no real reason to switch.
Nuke has an incredible number of performance features: progressive decoding, prioritization, coalescing of tasks, cooperative cancellation, parallel processing, backpressure, and prefetching. This forces Nuke to be massively concurrent. The actor model is just part of the solution. To manage individual image requests, Nuke needed a structured approach for managing async tasks.
The solution is
Task, which is a part of the internal infrastructure. When you request an image, Nuke creates a dependency tree with multiple tasks. When a similar image request arrives (e.g. the same URL, but different processors), an existing subtree can serve as a dependency of another task.
Nuke supports progressive decoding, and the task design reflects that. Tasks send events upstream: data chunks, image scans, progress updates, and errors. Tasks send priority updates and cancellation requests downstream. This design is inspired by reactive programming but is optimized for Nuke. Tasks are much simpler and faster than a typical generalized reactive programming implementation. The complete implementation takes just 237 lines.
Some tasks implement backpressure. For example, if you are fetching a progressive JPEG and have an expensive processor, such as blur, the processing task will only produce processed images as fast as it can, skipping the scans it has no capacity to handle.
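The idea can be sketched outside of Nuke. This is an illustrative model, not Nuke's actual implementation: keep only the latest input, and start processing the next one only after the current operation finishes.

```swift
import Foundation

// Illustrative backpressure: an expensive operation processes scans
// one at a time; intermediate scans that arrive while it is busy
// are dropped in favor of the latest one.
final class LatestOnlyProcessor<Input, Output> {
    private let queue = DispatchQueue(label: "processor.sync")
    private let process: (Input) -> Output
    private let deliver: (Output) -> Void
    private var pending: Input?
    private var isBusy = false

    init(process: @escaping (Input) -> Output,
         deliver: @escaping (Output) -> Void) {
        self.process = process
        self.deliver = deliver
    }

    func send(_ input: Input) {
        queue.async {
            self.pending = input // overwrite any older pending scan
            self.startIfNeeded()
        }
    }

    private func startIfNeeded() {
        guard !isBusy, let input = pending else { return }
        pending = nil
        isBusy = true
        DispatchQueue.global().async {
            let output = self.process(input) // e.g. an expensive blur
            self.queue.async {
                self.deliver(output)
                self.isBusy = false
                self.startIfNeeded() // pick up the latest scan, if any
            }
        }
    }
}
```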
All of the tasks are synchronized on a single serial dispatch queue. This is a simple and reliable way to achieve performance and thread safety.
To learn more about how Nuke manages concurrency, see Concurrency in Nuke.